Google's AlphaGo Beats Lee Se-dol In the First Match

New submitter Fref writes with news from The Verge that "A huge milestone has just been reached in the field of artificial intelligence: AlphaGo, the program developed by Google's DeepMind unit, has defeated legendary Go player Lee Se-dol in the first of five historic matches being held in Seoul, South Korea. Lee resigned after about three and a half hours, with 28 minutes and 28 seconds remaining on his clock."
Lee will face off against AlphaGo again tomorrow and on Saturday, Sunday, and Tuesday.
Also at the New York Times. Science magazine says the loss may be less significant than it seems at first.
  • by rmdingler ( 1955220 ) on Wednesday March 09, 2016 @09:55AM (#51665261) Journal
    This is a great accomplishment for A.I., but it's likely he will rebound from this opening round loss.
    • by infolation ( 840436 ) on Wednesday March 09, 2016 @10:11AM (#51665345)
      And it will be no match for him at kickboxing.
    • by AmiMoJo ( 196126 ) <> on Wednesday March 09, 2016 @10:17AM (#51665389) Homepage

      It might be harder for Lee to beat a computer because he says that he relies heavily on reading his opponent. Unlike poker, you can't just calculate odds on everything, and unlike chess, there are too many permutations to plan right through to the end of a game.

      It really depends if he can find a way to figure the computer out without the usual cues he gets from human players.

      • Poker is called a "game of imperfect information", and just calculating the odds will not make you a winner. Like Go, interaction with the other players is important, in that you may be getting false or misleading information. This 2015 heads-up match was an interesting contest: []
        • by Anonymous Coward on Wednesday March 09, 2016 @01:30PM (#51666719)

          In fact let me spell it out for people who've never paid any real attention to Poker.

          At any moment, a single human player of (say) Texas Hold 'Em can see some of the state of the game, but not all of it, and, except for the final round of betting (the "river" round), there is still a random element which in most cases can be decisive.

          It might seem as though a perfect player would calculate the odds that they've got the best hand and bet accordingly. But actually that's awful, because now the other players can determine from how you bet exactly what cards you've got. Instead, a good player must "balance" their behaviour so that whatever they do, their opponent doesn't learn anything valuable without paying for it. Some high-end professional players have balanced play where they'll occasionally bet very strong with total air (i.e. they know they don't have the best hand) so that even when you suspect they have great cards you can't be sure. Seeing just one hand of this looks completely insane, but they make money every year, because they don't play just one hand: they play thousands of hands, and over time this unpredictability makes them hard to beat.
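The "balancing" idea above has a simple game-theoretic core: on the final betting round, a bettor's bluff-to-value ratio should leave the caller indifferent between calling and folding. A minimal sketch (the formula is standard poker game theory, not something stated in the comment):

```python
def bluff_fraction(bet: float, pot: float) -> float:
    """Fraction of the betting range that should be bluffs so that a
    caller, who risks `bet` to win `pot + bet`, breaks even on a call."""
    return bet / (pot + 2 * bet)

# Well-known result: a pot-sized bet should be a bluff about 1/3 of the time.
print(round(bluff_fraction(bet=100, pot=100), 3))  # 0.333
```

Smaller bets justify fewer bluffs: a half-pot bet gives `50 / (100 + 100) = 0.25`, i.e. one bluff per three value bets.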

      • Poker is a game of incomplete information. Go is a game of complete information. Making an A.I. good at playing go is interesting because of the game's complexity and vast universe of permutation, but making an A.I. good at playing poker is probably more interesting in terms of decision-assistance (for example). Making an A.I. good at both would be quite the breakthrough.
    • by BigFire ( 13822 )

      AlphaGo did beat the European champion 5 out of 5. So Lee will have to step up his game a bit.

      • by arth1 ( 260657 ) on Wednesday March 09, 2016 @01:20PM (#51666651) Homepage Journal

        AlphaGo did beat the European champion 5 out of 5. So Lee will have to step up his game a bit.

        The European master was 2-dan. Lee is 8-dan (or 9, but the last one is honorary, so it doesn't count for comparing strengths). In the world of Go, that difference is staggering. It's like the difference between a chess Elo rating of 2100 and 2600: someone consistently beating the lower-rated player may not have a chance against the higher-rated one.

        If Lee were to play the European master, he'd be expected to trounce him too. Quite thoroughly.

        What counts in AlphaGo's favor here is that it has had over a year to improve. That it did win the first game says a lot more about its strength than any play against much lower ranked players.
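The chess comparison above can be made concrete with the standard Elo expected-score formula (the formula is standard chess rating math, not something from this thread, and Go dan ranks are not literally Elo; the 2100-vs-2600 numbers are the comment's own):

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

# A 500-point gap: the stronger player is expected to score about 95%.
print(round(expected_score(2600, 2100), 3))  # 0.947
```

So a program that reliably beats the 2100-strength player tells you almost nothing about its chances against the 2600-strength one, which is the poster's point.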

        • Re: (Score:2, Informative)

          by Anonymous Coward

          It's not just that it won a game. It won the game AS WHITE! I haven't seen this pointed out elsewhere...

          The other human professionals who read out this game to score it said that the win was beyond the komi (the 7.5 points granted to White to compensate for Black moving first). IOW, AlphaGo overcame TWO levels of disadvantage: playing as White, and winning by more than the balance granted by komi.

          I think Lee Sedol will have a much more difficult time in the next game as AlphaGo will have black (first move). And while komi is supposed to make up for the first-move advantage, it doesn't take away the feeling from the _human_ playing white that they play the entire game attempting to come from behind.

          • by arth1 ( 260657 )

            I think Lee Sedol will have a much more difficult time in the next game as AlphaGo will have black (first move). And while komi is supposed to make up for the first move advantage, it doesn't take away the feeling from the _human_ playing white that they play the entire game attempting to come from behind.

            True, but Lee is also known for playing an exceptionally strong white (much like Viktor Kortchnoi excelled at black in chess).

            The biggest damage might be psychological here: he went into the game thinking he would win the match 5-0 or 4-1, but now has to rethink that. While 4-1 is still possible, the odds are not on his side.

  • by Anonymous Coward

    But not nearly as big a milestone as the first win over a chess grandmaster, or the Jeopardy win.

    I would like to see how well the computer does at Diplomacy with its complex negotiations.

    No doubt the AI singularity will come, but we aren't even close yet.

  • Science magazine says the loss may be less significant than it seems at first.

    Err, no, not really. It still has about the same significance as it first seemed to me.

  • by Anonymous Coward

    What's significant here isn't that it beat him in the first round, or that it may win, but that if it wins, it will be a remarkable achievement because the method is not the one Deep Blue used to beat Kasparov, or the one Watson used to beat players at Jeopardy. A more general-purpose algorithm is being used here. This system is actually learning.

  • We keep hearing about 'AI this' and 'AI that', but by my standards there is no such thing. I can't sit down with a computer, have a conversation, and for one second feel like I'm talking to the intellectual equivalent (or better) of a human being; therefore there's been no such thing as 'artificial intelligence' as of yet. All we've got are so-called 'expert systems', which at best mimic a human being's ability to think -- but only on specific subjects. Even so-called 'machine learning' is a far cry from a
    • "Expert systems", as you indicated, is a far better term for what we have these days, but I think it just doesn't have that same media-friendly click-generating panache. Hell, I'm just thankful whenever the media calls it "AI" instead of the ridiculous over-broad term "robots" when they're just talking about computer algorithms.

    • by Lotana ( 842533 )

      You are referring to something called "Strong AI". This article is about "Weak AI". The difference is that weak AI is focused on mastering a small, specialized set of problems (playing Go, chatter-bots, etc.) while strong AI takes on the big consciousness part of the problem. The distinction was drawn by John Searle, who came up with a thought experiment that counters the Turing Test: Chinese Room []

      Weak AI has had much more research focus and success. One big part of the reason is that it is so

  • Is there a link to the actual game played? The PGN file, for example? Would be curious to see the actual moves.

    • Re: (Score:2, Informative)

      by Anonymous Coward

      Go uses SGF (Smart Game Format), not PGN.

      • by Anonymous Coward on Wednesday March 09, 2016 @12:42PM (#51666345)

        Since I doubt that most people unfamiliar with Go have a way to view an SGF file, here is a link to the gogameguru article about the game. At the bottom there is a JavaScript applet you can use to play through the game.
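For anyone curious what SGF actually looks like: it's a plain-text property list, and the moves are easy to pull out. A minimal sketch (the position below is a hypothetical opening, not the actual AlphaGo game, and the regex skips pass moves, which SGF records as empty brackets):

```python
import re

# A tiny, hypothetical SGF fragment: game type, format, board size, then moves.
sgf = "(;GM[1]FF[4]SZ[19];B[pd];W[dp];B[cd];W[qp])"

# Each move is ;B[xy] or ;W[xy], with coordinates a-s on a 19x19 board.
moves = re.findall(r";([BW])\[([a-s]{2})\]", sgf)
print(moves)  # [('B', 'pd'), ('W', 'dp'), ('B', 'cd'), ('W', 'qp')]

# Convert the letter pair of the first move to 0-based (col, row) indices.
col, row = (ord(c) - ord("a") for c in moves[0][1])
print(col, row)  # 15 3
```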

  • I can't find anywhere what hardware was used in the game, or what hardware was used for training.

    I assume it's a cluster of computers talking to each other, writing down trees of possible moves (to RAM, but still writing them down), after having played more games than a single human could in a lifetime, against one human brain that is not allowed to talk to other players and has only seen a small fraction of the games the AI version has.

    Yes, computers with enough resources to compute numerous trees will wi

    • To make it fair we could limit the AI to the physical size and energy constraints of a human brain ;)
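The "trees of possible moves" mentioned above are grown by Monte Carlo tree search, which picks which branch to explore next with a selection rule in the UCB1 family: exploit a node's observed win rate, but add a bonus for rarely visited nodes. A generic sketch (this is the textbook rule, not DeepMind's actual code or constants):

```python
import math

def ucb1(wins: float, visits: int, parent_visits: int, c: float = 1.414) -> float:
    """UCB1 score: win rate (exploitation) plus a visit-count bonus (exploration)."""
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

# A rarely visited child can outscore a sibling with a better win rate,
# which is what keeps the search from tunneling on one line of play.
print(ucb1(6, 10, 100) > ucb1(55, 90, 100))  # True
```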
  • Deepmind Video (Score:4, Informative)

    by Roceh ( 855826 ) on Wednesday March 09, 2016 @01:14PM (#51666613)
    Here's a talk by DeepMind about this AI []
  • Alphabet's AlphaGo.

  • Lee spent a lot less energy than the data center that powered AlphaGo. Even if Lee burned 3,000 calories in 3.5 hours, that would only be around 3.5 kilowatt-hours. AlphaGo's energy usage would be in the tens (or hundreds?) of megawatt-hours.
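The back-of-the-envelope conversion in the comment checks out; a minimal sketch (dietary "calories" are kilocalories, at 4,184 joules each):

```python
KCAL_TO_JOULES = 4184    # 1 kilocalorie (dietary Calorie) in joules
JOULES_PER_KWH = 3.6e6   # 1 kilowatt-hour in joules

# 3,000 kcal over the match, expressed in kilowatt-hours.
kwh = 3000 * KCAL_TO_JOULES / JOULES_PER_KWH
print(round(kwh, 2))  # 3.49
```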
