AI Google Games Technology

Go Champion Lee Se-dol Beats Google's DeepMind AI For First Time (theverge.com) 109

An anonymous reader writes: Korean Go grandmaster Lee Se-dol on Sunday registered his first win over Google's AlphaGo. The win comes after AlphaGo took the first three games of the DeepMind challenge earlier this week. The result is a reminder that Google's artificial intelligence computer is not perfect after all, at least for now. Se-dol said earlier this week that he had not been able to defeat AlphaGo because he could not find any weakness in its strategy. Commenting after his win, Se-dol said, "I've never been congratulated so much just because I won one game!"
  • by javilon ( 99157 ) on Sunday March 13, 2016 @05:22AM (#51688705) Homepage

    So AlphaGo is not so far away from a 9-dan human player.

    My guess is that the mistake AlphaGo made on move 79 will be analyzed and a new version will be created, stronger than the current one. Maybe this analysis will point to a whole class of mistakes that will be fixed.

    It is a bit like when Google's self driving cars make a mistake. This mistake is used as input for the next release of the software so it doesn't act the same way next time. With this process, one car making a mistake results in a change in behavior of all of the cars, because with AI it is possible to communicate new knowledge to the rest of the cars. All of them improve, unlike humans for whom transmitting the new knowledge involves a lot of work or may not even be possible.

    • by Anonymous Coward on Sunday March 13, 2016 @05:33AM (#51688727)

      Given that it's a self-learning AI, this match will of course be added to the training set, and the neural network will adapt to it. I'd be surprised if there were a new _version_ of AlphaGo to fix this. Rather, there'll be an improved neural network - but that's a continuous process, as AlphaGo keeps playing against itself and learning from the results.
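The continuous self-play process the comment describes can be sketched in miniature. This is a toy illustration only, not DeepMind's pipeline: the three "openings" and their hidden win rates are invented stand-ins, and the real system updates a neural network rather than a win table.

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical stand-ins: three openings with hidden win rates that the
# learner does not know. Self-play reveals them through repeated games.
true_strength = {"A": 0.4, "B": 0.6, "C": 0.5}
wins = {k: 0 for k in true_strength}
plays = {k: 0 for k in true_strength}

for _ in range(3000):                      # the "continuous process"
    opening = random.choice(list(true_strength))
    plays[opening] += 1
    if random.random() < true_strength[opening]:
        wins[opening] += 1                 # each result feeds back in

best = max(wins, key=lambda k: wins[k] / plays[k])
print(best)  # after enough self-play the strongest opening emerges
```

The point of the sketch is only that no discrete "new version" is needed: every game played shifts the estimates a little.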

      • by Anonymous Coward on Sunday March 13, 2016 @03:14PM (#51690507)

        I don't think it's that easy.

        The game changed when Lee joined together two large fronts, creating an inside-outside problem with territory far, far bigger than in any of the previous three games.

        AlphaGo seemed incapable of doing the calculations for that area. As time ran down, it first resorted to playing off the smaller fronts, then to last-ditch moves (rubbish moves that would only pay off if Lee made a mistake), and then it resigned. It seems it was never able to compute the middle of the board.

        AlphaGo looked indecisive and risk-averse. Lee was the opposite.

        • I don't think it's that easy. [...] AlphaGo looked indecisive and risk-averse. Lee was the opposite.

          This is very interesting - is there a link to the game that was played (perhaps an animated GIF showing the moves)?

    • Will the team even continue pursuing the game of Go? I can imagine they have accomplished their goal and will now move on to other targets. No doubt other teams will continue the work on Go, inspired by this method and its success.
      • by Anonymous Coward

        It's been thousands of years and people are still trying to master the old Chinese game. Now they've got computers on it and still have trouble.

      • This is just like chess computers, but a decade later because it uses a different type of algorithm. As with chess engines, we will now enter a golden age of Go computers, and a lot of the people involved will go on to write commercial offerings.

        Chess computers were a novelty until they got good; then they became an important training tool for both amateur and professional competitors. A lot of people think they are or were "brute force" because they don't understand the algorithms, and that "brute force"

    • by Anonymous Coward

      [...] With this process, one car making a mistake results in a change in behavior of all of the cars, because with AI it is possible to communicate new knowledge to the rest of the cars. All of them improve, unlike humans for whom transmitting the new knowledge involves a lot of work or may not even be possible.

      Humanity learns by culture. It is a lossy process (like the memory of individuals); how else would you explain Trump after we already had Hitler? But all in all, civilization is the process of knowledge transfer. Which is why we use copyright and patents to suppress it.

      • by mikael ( 484 )

        Because before Hitler, Germany's civilian population was heavily burdened by high taxes to pay war reparations to the Allied powers as compensation for starting World War I. The major cities had become decadent, with high crime. Then along comes this little man with a funny mustache, promising that his militias of smart young men will clean up the country and end the high-taxation regime. What could go wrong?

    • by jbolden ( 176878 )

      That's a lot like human beings. You didn't invent the wheel, fire, foods that form complete proteins, paper, language... for yourself. You inherited a culture full of knowledge and add your little bit to it. Human cultures evolve, computer software evolves. Computer software is evolving at a much faster pace than human culture for now (and for at least the next several decades).

      • by esonik ( 222874 )

        It's the human learning principle taken further. The point is that an AI is immortal, whereas human beings are not. It takes 10-20 years for a human child to learn a (by now tiny) fraction of human knowledge and ideas. In contrast, an AI can keep learning as long as you keep the hardware running; you can even copy its state to new hardware. That will become a huge difference in the future, potentially making AIs more powerful thinkers. It could be that AIs will be used to combine research and ideas from

    • by mikael ( 484 )

      Perhaps it's possible to place the game pieces on the board so that you can win regardless of the next player's move. Like the situation in 5-in-a-row or tic-tac-toe where you create two lines at once: whichever one the opponent blocks, you win with the other on your next move.
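The double-threat idea the comment describes is easy to make concrete on a tic-tac-toe board (the helper names below are my own, purely illustrative): a "fork" is a move that creates two winning lines at once, so the opponent can only block one.

```python
# All eight winning lines on a 3x3 board, as index triples into a
# 9-character board string ("X", "O", or " " per cell).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def threats(board, player):
    """Count lines where 'player' has two marks and the third cell is empty."""
    return sum(1 for line in LINES
               if [board[i] for i in line].count(player) == 2
               and any(board[i] == " " for i in line))

def fork_moves(board, player):
    """Empty cells that would give 'player' two simultaneous threats."""
    return [i for i, cell in enumerate(board) if cell == " "
            and threats(board[:i] + player + board[i + 1:], player) >= 2]

# X holds opposite corners, O holds the center: either free corner forks.
print(fork_moves("X   O   X", "X"))  # cells 2 and 6 each create a fork
```

In Go the analogous shape is a double threat (e.g. a move that creates two ways to live or capture), though the board is far too large to enumerate this way.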

    • To be fair, the computer has almost certainly analyzed thousands of Lee Se-dol's previous matches. Now that Lee has seen the computer play a bit, maybe he can win a few more times.

      Of course, the inevitable remains inevitable.

      • by nojayuk ( 567177 )

        There are only a few hundred records of games played by Lee Sedol and only a few dozen of them were championship games lasting several hours or even a couple of days like the AlphaGo series. The playing style of these longer games is different to the shorter games played against lower-ranked players or for tuition or study.

        The DeepMind people have stated clearly that AlphaGo has NOT been prepped with games by Lee Sedol. I don't know if the reverse is true but it's common for Go players facing a particular o

    • With this process, one car making a mistake results in a change in behavior of all of the cars

      It's funny that you assume this will make for an improvement.

      An improvement requires both correct analysis of the behavior and the right solution. Just experiencing something doesn't make you avoid it; learning has to occur.

      'Correcting' that mistake at move 79 can easily turn into a total meltdown on move 81; this is the way of Go. It is silly to assume this was a (or the) mistake in the first place.

    • But then the human will learn to disguise a genuine move as an apparent distraction, and the AI will fail to account for that, having learned that other players should be ignored when they appear to do illogical things. The human win was triggered by a key insight about a weakness of the current AI design, a weakness that it may not be able to learn its way out of without being upgraded by its human creators. I.e., the AI may not be able to adapt to this human strategy without the help of other humans. We s
  • Go Turing Test (Score:5, Interesting)

    by PolygamousRanchKid ( 1290638 ) on Sunday March 13, 2016 @05:27AM (#51688715)

    It would be interesting to set up a Go Turing Test. Either have another top Go player or AlphaGo behind a wall calling the moves.

    Can the human champ Lee Se-dol determine if he is playing against a computer or a human . . . ?

    Also, the more he plays against AlphaGo, will he develop different strategies for playing against computers, as opposed to humans . . . ?

    • by Vapula ( 14703 )

      In fact, Lee Sedol was quite surprised by some unconventional moves from AlphaGo...

      My bet is that these moves will be analysed and will bring down some "don't do" rules, a little like when Go Seigen successfully played a 13-13 move as the third move of a cross-fuseki.

      I think that AlphaGo will help human Go make great progress by shaking down some (bad) implicit rules... I think that a rematch in a few years would be quite interesting...

      • Re:Go Turing Test (Score:4, Insightful)

        by slashping ( 2674483 ) on Sunday March 13, 2016 @06:05AM (#51688793)

        I think that a rematch in a few years would be quite interesting.

        If the development of the software continues, the human will be massacred, even with the new knowledge.

        • Nope, not necessarily. You are making a small but significant mistake here. If AlphaGo were a conventional program that was somehow able to actually, reliably analyse what it is doing... and to plan ahead based on fine-tuned (but known) heuristics that we as the designers of the system understand... but that no human player could ever use due to their complexity (computational and/or time-wise)... then it would stand to reason that future versions of it stand a good chance of pulling ahead of humans for all

          • But AlphaGo is not such a program. Sure, it learns from its mistakes (it is designed that way). But as we don't understand the inner workings of the net that well, there might well be a level of play at which such neural-net-based systems simply plateau.

            There is no ceiling. You can always evaluate the tree wider, deeper, and more efficiently for starters, and you can improve the evaluation function. AlphaGo was built in two years' time. People have been working on chess-playing programs for decades, and they still haven't hit a ceiling.

            But as we don't understand the inner workings of the net that well,

            That hasn't stopped people from reaching current standards. Why would it stop them from reaching even higher standards ?
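The "wider and deeper" point is easy to see on a toy game. Below is a minimal depth-limited negamax on simple Nim (take 1-3 stones; taking the last stone wins). It is only an illustration of the depth knob: AlphaGo actually combines Monte Carlo tree search with neural-network evaluation, not plain minimax.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def negamax(stones, depth):
    """Score for the side to move: +1 win, -1 loss, 0 unresolved."""
    if stones == 0:
        return -1          # the previous player took the last stone and won
    if depth == 0:
        return 0           # search horizon reached: position unresolved
    return max(-negamax(stones - take, depth - 1)
               for take in (1, 2, 3) if take <= stones)

# A deeper search resolves what a shallow one cannot: with 5 stones the
# side to move wins (take 1, leaving a multiple of 4).
print(negamax(5, 1))   # 0 -> too shallow, outcome unknown
print(negamax(5, 10))  # 1 -> deep enough to see the forced win
```

Searching "wider" corresponds to considering more candidate moves per node; Go's difficulty is that both knobs explode combinatorially, which is why a learned evaluation matters so much.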

            • by HiThere ( 15173 )

              Higher standards for humans depend on more efficient "algorithms". Higher standards for computers can derive not only from more efficient "algorithms", but also from more powerful hardware. And the current computer hardware is quite inferior to brains for neural processing, so much improvement is possible.

            • by ezdiy ( 2717051 )
              > There is no ceiling. You can always evaluate the tree wider, deeper, and more efficiently for starters, and you can improve the evaluation.

              RNNs don't involve "trees". As for training more layers, the parameters must be carefully fine-tuned by humans. The more layers, the trickier this gets.

              I'd compare it more to the process of die shrink.
          • Humans have limited hardware; the computer can be upgraded. The human won't catch back up over time. The computer is changeable and controlled by humans, so any blind spot will get fixed over time.

            There is no reason to expect this to go differently than with chess computers. Check the timeline; it has been a decade since a human could beat a top computer.

            It is like saying that maybe humans will outrun a car after they find the car's speed ceiling. Not likely.

            • I never said I thought it particularly likely that the neural network Google has come up with is inherently limited to Go-playing capabilities equal to, or less than, those found in gifted humans. However, with these networks you do have the issue that you never quite know where their limit is. Specifically, for some networks, throwing more hardware at them makes them more capable, but for others that has only a very weak effect (if any at all).

              Or put differently: if hardware were all that mattered, whales wo

              • whales would be way more intelligent than humans (their brains are significantly larger, after all).

                  Their brains are larger, but humans have more neurons in the neocortex.

      • I think that a rematch in a few years would be quite interesting...

        No, it will be utterly boring.

        In one, two, or three years, the calculating power of the processors will have improved roughly by factors of 2, 4, and 8.
        Factor in the same gains from die size etc., and we get combined improvements of 4, 16, and 64.
        In three years a top Go computer will be able to play 200 top-class Go players simultaneously and win most of the games.

        I think that AlphaGo will make Human Go make great progress by shaking down so
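The back-of-the-envelope above, written out. Note that the doubling-per-year premise is the commenter's assumption, and an optimistic one:

```python
# Assume per-core speed and usable die area each double every year
# (the comment's premise, not a safe one). The combined factor is
# their product: 4x, 16x, 64x after one, two, and three years.
for years in (1, 2, 3):
    speed = 2 ** years
    area = 2 ** years
    print(years, speed * area)
```

Whether 64x more compute translates into 200 simultaneous top-level games depends on how the engine's strength scales with hardware, which the thread itself disputes.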

        • In chess, computers definitely helped break down walls around ideas of "bad" moves, and now top-level chess play is highly "pragmatic", or unprincipled; victory lies in the ability to find the exceptions to ideas that champions of the past considered rules, or at least considered clearly good or bad.

    • Re:Go Turing Test (Score:5, Interesting)

      by Kjella ( 173770 ) on Sunday March 13, 2016 @07:09AM (#51688945) Homepage

      It would be interesting to set up a Go Turing Test. Either have another top Go player or AlphaGo behind a wall calling the moves.

      Can the human champ Lee Se-dol determine if he is playing against a computer or a human... ?

      Well, at least in the endgame the pros were pretty clear that these were not the kind of plays you'd make to try to confuse a 9-dan pro into losing a slightly favorable position. The play forced Lee Se-dol to counter, but all it really did was give him more time to consider the remaining contested areas while answering moves he could have blitzed if he'd wanted to. The pros also previously felt AlphaGo took some really convoluted routes to a win where a human would just simplify to claim it. So when you step out of the game and into the meta-game, it seems obvious (to them at least) that you're playing a computer.

    • by jbolden ( 176878 )

      I don't know Go but computer chess doesn't feel like human chess. Computer players have worse strategy and much better tactics than a comparably rated human.

  • What would happen if AlphaGo played another AlphaGo? And what would it teach us about its AI, if anything?
    • Re:What would happen (Score:4, Informative)

      by PolygamousRanchKid ( 1290638 ) on Sunday March 13, 2016 @06:00AM (#51688787)

      Actually I believe that this was part of AlphaGo's training . . . playing against itself.

      • Yeah, makes sense that they would have, now that I think about it. I just hadn't come across much about it.
  • A truly great advance. No longer will man be subject to the tedium that is the game of go.
  • by Skiron ( 735617 ) on Sunday March 13, 2016 @08:44AM (#51689201) Homepage
    The big factor here (as Kasparov stated after playing Deep Blue) is that computers don't get tired and don't get distracted; that is a big advantage.
    • by Ecuador ( 740021 )

      Yep, if we wanted to level the playing field we'd turn off the AC at the server room and disconnect those CPU fans. Let me see AlphaGo... eh, Go then.

      • we'd turn off the AC at the server room and disconnect those CPU fans.

        I'm sorry, Dave, I can't allow you to turn off the AC.

  • Match reviews (Score:5, Informative)

    by belthize ( 990217 ) on Sunday March 13, 2016 @09:41AM (#51689351)

    I'm looking forward to the eventual move by move analysis of these games. For now there's some interesting commentary here: https://gogameguru.com/alphago... [gogameguru.com]

    It's been 20+ years since I played Go semi-seriously. I used to have a collection of Ishi Press books which I've long since misplaced. I suddenly find myself very interested in the game again.

    • Good lord, where did the time go? I thought about it for a bit and realized it's been 30 years, not 20. In my mind I'm still 23; the fact that I have a son who's over 30 is kind of beside the point.

  • by PPH ( 736903 ) on Sunday March 13, 2016 @11:51AM (#51689769)

    ... DeepMind just sank into a depressed state, refusing to display anything other than the Windows Metro interface.

  • Sounds like the IBM matches against Garry Kasparov: IBM's Deep Blue watched all of Garry's games, but Garry was not given the opportunity to watch any of Deep Blue's. Over only 3 games, Garry got closer. Watching previous games gives you a massive advantage!
