Google's AlphaGo Beats Lee Se-dol In the First Match (theverge.com) 119
New submitter Fref writes with news from The Verge that "A huge milestone has just been reached in the field of artificial intelligence: AlphaGo, the program developed by Google's DeepMind unit, has defeated legendary Go player Lee Se-dol in the first of five historic matches being held in Seoul, South Korea. Lee resigned after about three and a half hours, with 28 minutes and 28 seconds remaining on his clock. "
Lee will face off against AlphaGo again tomorrow and on Saturday, Sunday, and Tuesday. Also at the New York Times. Science magazine says the loss may be less significant than it seems at first.
Lee underestimated the computer (Score:3)
Re:Lee underestimated the computer (Score:5, Funny)
Re: (Score:3)
And it will be no match for him at kickboxing.
I've read that somewhere before..
Re: (Score:2)
I've read that somewhere before..
It was shown in the documentary Kung Fury. The best robots were unable to master the intricate fighting moves.
Re: (Score:2)
Re:Lee underestimated the computer (Score:4, Interesting)
It might be harder for Lee to beat a computer because he says that he relies heavily on reading his opponent. Unlike poker, you can't just calculate odds on everything, and unlike chess there are too many permutations to plan right to the end of a game.
It really depends if he can find a way to figure the computer out without the usual cues he gets from human players.
Re: (Score:3)
https://en.wikipedia.org/wiki/Claudico [wikipedia.org]
Re:Lee underestimated the computer (Score:5, Interesting)
In fact let me spell it out for people who've never paid any real attention to Poker.
At any moment, a single human player of (say) Texas Hold 'Em can see some of the state of the game, but not all of it, and, except for the final round of betting (the "river" round) there is still a random element which in most cases can be decisive.
It might seem as though a perfect player would calculate the odds that they've got the best hand, and bet accordingly. But actually that's awful because now the other players can determine from how you bet exactly what cards you've got. Instead then, a good player must "balance" their behaviour so that whatever they do their opponent doesn't learn anything valuable without paying for it. Some high end professional players have balanced play where they'll occasionally bet very strong with total air (ie they know they don't have the best hand) so that even when you suspect they have great cards you can't be sure. Seeing just one hand of this looks completely insane - but they make money every year, because they don't play just one hand, they play thousands of hands and over time this unpredictability makes them hard to beat.
Re: (Score:2)
Re: (Score:2)
AlphaGo did beat the European champion 5 out of 5. So Lee will have to step up his game a bit.
Re:Lee underestimated the computer (Score:5, Informative)
AlphaGo did beat the European champion 5 out of 5. So Lee will have to step up his game a bit.
The European master was 2 dan. Lee is 9 dan, the highest professional rank. In the world of Go, that difference is rather staggering. It's like the difference between a chess Elo rating of 2100 and 2600. Someone consistently beating the lower-rated player may not have a chance against the higher-rated player.
If Lee were to play the European master, he'd be expected to trounce him too. Quite thoroughly.
What counts in AlphaGo's favor here is that it has had over a year to improve. That it did win the first game says a lot more about its strength than any play against much lower ranked players.
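The 2100-vs-2600 comparison above can be made concrete with the standard Elo expected-score formula (a quick sanity check, using the usual 400-point scale):

```python
# Expected score under the Elo model: E = 1 / (1 + 10^((Rb - Ra) / 400)).
# A 500-point gap, like the 2100-vs-2600 example in the parent post,
# predicts the stronger player scores roughly 95% of the points.
def elo_expected(r_a, r_b):
    """Expected score (between 0 and 1) for player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

print(round(elo_expected(2600, 2100), 3))  # ~0.947
```

So a program that merely edges out the 2100-level player would still be a heavy underdog against the 2600-level one, which is the parent's point about Fan Hui versus Lee.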
Re: (Score:2, Informative)
It's not just that it won a game. It won the game AS WHITE! I haven't been seeing this pointed out elsewhere...
The other human professionals who read out this game to score it said that the win was beyond the komi (the 7.5 points granted to White due to its not going first). IOW, AlphaGo beat TWO levels of disadvantage: playing as White, and achieving more than the balance granted by komi.
I think Lee Sedol will have a much more difficult time in the next game as AlphaGo will have black (first move). And while komi is supposed to make up for the first move advantage, it doesn't take away the feeling from the _human_ playing white that they play the entire game attempting to come from behind.
Re: (Score:3)
I think Lee Sedol will have a much more difficult time in the next game as AlphaGo will have black (first move). And while komi is supposed to make up for the first move advantage, it doesn't take away the feeling from the _human_ playing white that they play the entire game attempting to come from behind.
True, but Lee is also known for playing an exceptionally strong white (much like Viktor Kortchnoi excelled at black in chess).
The biggest damage might be psychological here: he went into the game thinking he would win the fight 5-0 or 4-1, but now has to rethink that. While 4-1 is still possible, the odds are not on his side.
Re: (Score:2)
I watched the whole game live, and I have reasons to be confident that Lee will win the remaining 4 games.
So, how confident are you now?
Re: (Score:3)
It's not hard to play a game.
Well, that's up for debate. Go is arguably the hardest game to play (and master) there is.
Re: (Score:2, Interesting)
"Go is arguably the hardest game to play (and master) there is."
Hardly. Try Diplomacy some time: complex negotiation and justified back-stabbing. If the computer disdains, or is incompetent at, unstructured negotiation with other players, let's see how long it lasts with the players ganged up against it.
Re: (Score:3, Interesting)
I respectfully disagree.
Diplomacy, with its self-references and multi-body perturbations, is closer to a set of non-linear partial differential equations whose solutions are extremely difficult, idiosyncratic, and chaotic.
Ever had a disagreement with someone who is basing their behavior upon yours, who in turn is basing their behavior upon theirs? Now add as many as five more players, all intertwining their interactions with yours over time. We are not talking here about enumerating numerical solutions to ni
Re:Big Whoop (Score:4, Interesting)
In game design theory, the kind of politics you describe is normally treated as a form of luck, and as such it makes determining "who is better at this game" a meaningless question. If 3 random people are playing Risk against the best Risk player in the world (the person who understands the game the best), there's a rational argument that their best strategy is to co-operate and eliminate him first (no matter what he does or says or how he behaves). This sort of interaction effectively decouples skill from game success.
This is also why modern game design has generally abandoned games with lots of politics - they effectively all become the same game, and that game is really uninteresting after a while.
Re: (Score:2)
Re: (Score:2)
Well, that's up for debate. Go is arguably the hardest game to play (and master) there is.
Hex (a.k.a. Con-tac-tix or Nash) is a very subtle and interesting game. Programs still can't beat the best human Hex players. DeepMind's CEO was quoted in the NYTimes today saying, "Really, the only game left after chess is Go". I wish reporters knew to ask him, "What about Hex?"
Re: (Score:2)
I found other sources much less categorical (along the lines of: "the first player demonstrably has an advantage, but the winning strategy cannot be computed"), so feel free to fix the article (with sources).
Your quote from Wikipedia and the one in quotes are in complete agreement. It would be difficult to fix it.
Re: (Score:1)
Re: (Score:2)
I see no reason to believe this game wouldn't fall to serious effort (i.e. computers would surpass human players if a good team made a large effort, as has been made here with Go). Rather, I think there are many reasons to believe it would be much easier to reach that point. The game state is much clearer and much more amenable to search than Go is, and the game doesn't have the kinds of complicating factors that would make me think of it as a "hard game" for computers to play (e.g. hidden information, simultaneous moves).
Re: (Score:2)
Go is arguably the hardest game to play (and master) there is.
No. Try human copulation.
Re: (Score:2)
Wha'? That's as simple as baby-toy shapes. Which only makes it more fun.
Re:Big Whoop (Score:5, Informative)
I am still waiting for a computer that can recognize my bags at the conveyor belt at least as efficiently as me
I've worked with industrial vision devices in the past and trust me, you could set up a machine to recognize luggage as efficiently as a human being today if you wanted to. In fact, it will do better.
The only thing surprising about the Go event is that it did not happen like ten, or even twenty, years ago. You may be impressed, but I find this most underwhelming.
That's likely because you don't understand what it involves. Go is unlike chess in the sense that just throwing raw computing power at the problem won't help you at all; even for a "small" 13x13 board there are over 10^300 valid game trees to compute, and the number gets exponentially worse as the board grows. For reference, the estimated number of atoms in the observable universe is around 10^80.
Google's AlphaGo engine is an actual machine-learning AI which had to be trained, and it plays the game much like a regular person would; Myungwan Kim actually remarked that it feels like playing against a human being. Having a competitive Go engine today is a major milestone, make no mistake about it.
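A quick back-of-envelope calculation makes the scale concrete. Each of the 361 points on a 19x19 board is empty, black, or white, so 3^361 is a simple upper bound on board configurations (only a small fraction of these are legal positions, so this overcounts, but it shows the order of magnitude):

```python
# Back-of-envelope Go numbers: 3 states per point, 361 points on 19x19.
# This is an upper bound on positions, not a count of legal ones, and it
# already dwarfs the ~10^80 atoms in the observable universe.
from math import log10

raw_positions = 3 ** 361
print(int(log10(raw_positions)))       # exponent: ~10^172 configurations
print(int(log10(raw_positions)) - 80)  # ~92 orders of magnitude past the atom count
```

Numbers like these are why exhaustive search was never on the table for Go, and why a trained evaluation had to replace brute force.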
Re: (Score:2)
The only thing surprising about the Go event is that it did not happen like ten, or even twenty, years ago. You may be impressed, but I find this most underwhelming.
That's likely because you don't understand what it involves. Go is unlike chess in the sense that just throwing raw computing power at the problem won't help you at all; for a "small" 13x13 there are over 10^300 valid game trees to compute,
....except that no program ever computes the full game tree. It would be impossible to do in many games, eg. Chess.
The trick is to prune the tree. This requires skill by human programmers.
(What this news is really about is that some humans have managed to produce a workable pruning algorithm for Go, it really has very little to do with AI).
Re:Big Whoop (Score:5, Informative)
It requires more than skill. Pruning such a massive game tree is no minor feat - in fact, we don't know how to do it even today. All Go engines are based on some form of adaptive AI.
Again, chess is waaaaay easier in comparison. Pretty much all chess engines work the same way: they start with precomputed moves from an opening book and then move to an essentially brute-force approach where the engine tries positions, assigns them scores, and then picks the highest score available. How these positions are scored or discarded is what separates them, but the base procedure is unchanged. This is also what leads to what chess players call "computer moves": most chess engines will favor unassuming, conservative moves yielding small positional advantages instead of, well, more "human", intelligent ones. Taking pivotal moments from classic games (move 17 of Byrne-Fischer, for example) and feeding them to top-rated chess engines is an enlightening exercise.
This is all but impossible to perform in Go with even modest board sizes. The game complexity, given its simple rules, is just staggering.
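The "assign the position a score" step described above is, at its simplest, a weighted material count. A minimal sketch (conventional piece values; real engines layer hundreds of positional terms on top, but the skeleton is the same: position in, number out):

```python
# Minimal chess-style evaluation: sum material with the textbook piece
# values (pawn 1, knight/bishop 3, rook 5, queen 9).  The board encoding
# here is a made-up simplification: a string of piece letters, uppercase
# for White, lowercase for Black, kings omitted.
PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}

def material_score(pieces):
    """Score from White's point of view: positive means White is ahead."""
    score = 0
    for ch in pieces:
        value = PIECE_VALUES.get(ch.lower(), 0)
        score += value if ch.isupper() else -value
    return score

white = "QRRBBNN" + "P" * 8   # full White army
black = "rrbbnn" + "p" * 8    # Black is missing the queen
print(material_score(white + black))  # 9: White is up a queen
```

In Go no comparably cheap function exists; counting stones tells you almost nothing about who is winning, which is exactly why the brute-force recipe doesn't transfer.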
Re: (Score:2)
No argument that chess is simpler but at the end of the day the process is the same. A bunch of humans did trial and error with lots of heuristics and ran machines against each other all day long to find out which ones worked best. When it gets complex you can vary the scores for each heuristic randomly and let the machines fight it out while you sleep. End of story.
The big advantage of machines compared to humans is that they're methodical and don't overlook stuff. They don't get tired, and they don't have bad days.
Re: (Score:2)
True, but not particularly helpful.
And a necessary prerequisite to pruning the game tree is an efficient algorithm for comparing the likely score of one board position to another, well before you get to the end of the game. The AlphaGo team have made progress in this aspect, as well as in others.
Re: (Score:2)
It's not hard to play a game.
Well, that's up for debate. Go is arguably the hardest game to play (and master) there is.
It still follows very fixed rules.
Lee's defeat at Go doesn't demonstrate machine "intelligence" any more than Kasparov's defeat at chess did. It just shows better algorithms and advances in computer processing power.
Re: (Score:3)
Go is a far better demonstration of "intelligence" than chess in the sense that you require some form of actual AI to be competitive in it. The opening book+brute force combo used by modern chess engines is useless here.
AlphaGo relies heavily on machine-learned neural networks to play.
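At the smallest possible scale, a "policy network picks the move" looks like the sketch below. This is a one-layer toy with random, untrained weights purely to show the shape of the idea: board features in, probability distribution over the 361 points out. AlphaGo's actual policy network is a deep convolutional net trained on expert games, which this does not attempt to reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "policy network": one linear layer plus softmax over the 361 points
# of a flattened 19x19 board.  The features and weights are random
# stand-ins; only the input/output shape of the idea is real.
board = rng.standard_normal(361)                   # stand-in for encoded board features
weights = rng.standard_normal((361, 361)) * 0.01   # untrained weight matrix

logits = weights @ board
probs = np.exp(logits - logits.max())              # numerically stable softmax
probs /= probs.sum()

move = int(np.argmax(probs))                       # greedy move choice
print(probs.shape, 0 <= move < 361)                # a distribution over 361 moves
```

Training replaces the random weights with ones that concentrate probability on moves strong humans would play; that learned prior is what lets the search ignore almost all of the 10^300-scale tree.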
Re: (Score:2)
Neural networks are just a fancy form of heuristic.
At the end of the day the underlying main loop in the program will be very similar to chess (or reversi, or tic-tac-toe, ...).
a) It generates moves
b) It gives the new board position a score
c) It plays the move with the highest score
Part (b) is the tricky bit, but at the end of the day it's just a case of adding up values output by a set of heuristics applied to the board. The machine isn't showing any "intelligence" at all. All the intelligence comes from the people who wrote the heuristics.
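The (a)/(b)/(c) loop above, stripped to its skeleton, is a one-ply greedy chooser. A minimal sketch (the game, moves, and scoring function are all hypothetical stand-ins supplied by the caller):

```python
# One ply of the generate / score / pick-the-best loop from the parent
# post.  The caller supplies the game logic, so this works for any game
# with a position type, a move list, and a heuristic score.
def choose_move(position, legal_moves, apply_move, evaluate):
    scored = [(evaluate(apply_move(position, m)), m) for m in legal_moves]  # (a) + (b)
    return max(scored)[1]                                                   # (c)

# Tiny demo: "position" is just a number, a move adds its value, and the
# heuristic prefers larger numbers, so the biggest move wins.
best = choose_move(10, [-3, 0, 7], lambda p, m: p + m, lambda p: p)
print(best)  # 7
```

Real engines iterate this recursively (search) instead of stopping at one ply, but the grandparent's point stands: the loop itself is mechanical, and whatever judgment exists lives in `evaluate`.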
Interesting game.... (Score:2)
The only winning move is not to play.
Re: (Score:2)
5-10 years ago news like this would have triggered 1000-1500 comments, but now just a few dozen.
Within the first 25 minutes after the submission, sure.
Goodbye AC, we won't miss you, nor your alleged 5-digit uid account.
Re: (Score:2)
Re: (Score:2)
You forgot the 80 attempts at "First Post"
Re: RIP (Score:2)
5-10 years ago it also would've been more significant.
It wasn't that long ago that a chess computer was first able to beat a grandmaster.
PS: 5 digit uid.
Re: (Score:2)
Tic Tac Toe. A computer can never beat me at Tic Tac Toe. Of course, given a good enough computer, I'll never beat it at Tic Tac Toe either.
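The claim above is checkable: exhaustive minimax over the full tic-tac-toe tree shows that perfect play by both sides is always a draw, so neither the human nor a good computer ever beats the other. A compact sketch:

```python
# Minimax over the complete tic-tac-toe game tree.  Value convention:
# +1 = X wins, 0 = draw, -1 = O wins, each side playing perfectly.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Game value of position b with `player` ('X' or 'O') to move."""
    w = winner(b)
    if w:
        return 1 if w == "X" else -1
    if all(b):            # board full, no winner
        return 0
    values = []
    for i in range(9):
        if not b[i]:
            b[i] = player
            values.append(minimax(b, "O" if player == "X" else "X"))
            b[i] = None   # undo the move
    return max(values) if player == "X" else min(values)

print(minimax([None] * 9, "X"))  # 0: perfect play is a draw
```

The whole tree has only a few hundred thousand nodes, which is exactly why tic-tac-toe fell to computers decades before Go did.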
Re: (Score:2)
A computer will never beat us at offline interpretation of literature. Well at least not for a long time.
Re: (Score:2)
Re: (Score:2)
Sure. The nuance of literature is not overly binary. Most AI attempts at parsing phrases are little more than workarounds. Actual intelligence in computing is at its earliest stages. Do I predict that one day a computer will be able to give compelling interpretations of a text? Yes. However, not in isolation from internet reviews or a data pool of prior interpretations to parse with a Markov engine...
Re: (Score:1)
Eventually of course computers will best us at everything.
After all, we are just computers made out of meat.
It would be pretty arrogant to presume that 80 kilograms of meat is the ultimate intelligence in the universe.
Re: (Score:1)
They already have. God's kind of an idiot.
Re: (Score:1)
Re: (Score:3)
"The Game".
See? They've already lost.
Re: (Score:1)
Mildly interesting (Score:1)
But not nearly a milestone such as the first Chess grandmaster win or the Jeopardy win.
I would like to see how well the computer does at Diplomacy with its complex negotiations.
No doubt the AI singularity will come, but we aren't even close yet.
Re: (Score:2)
Re: (Score:2, Insightful)
Re: (Score:2)
Re: (Score:1)
Are there any classic games left where humans have a marked advantage over computers ?
Hex. It has neither a centuries-long tradition nor a large player base, but many of us who have learned the game consider it classic in the sense of having great depth and beauty. There is active work on Hex programs and they are still far behind the best human players.
Re: (Score:2)
They can already win at Pong too.
Re: (Score:2)
Re: (Score:1)
Are there any classic games left where humans have a marked advantage over computers ?
obligatory: Game AIs [xkcd.com]
Not yet solved (Score:2)
Contract bridge is one game where (in spite of some serious efforts, like Ginsberg's Intelligent Bridge Player) AIs still cannot play near the level of top human players. The game combines imperfect information with being a partnership game. Perhaps an even greater challenge: you are prohibited by the rules from using optimum bidding systems and card-signalling methods, as these are too difficult for the average player to defend against.
Significance (Score:2)
Science magazine says the loss may be less significant than it seems at first.
Err, no, not really. It still has about the same significance as it first seemed to me.
Re: (Score:2)
The fact that there are so few rules makes the game harder for a computer. And the number of possible plays is many orders of magnitude larger than in chess.
The system is learning (Score:1)
What's significant here isn't that it beat him in the first round, or that it may win, but that if it wins it will be a remarkable achievement, because the method is not the same one Deep Blue used to beat Kasparov, or the one Watson used to beat players at Jeopardy. A more general-purpose algorithm is being used: this system is actually learning.
So-called 'AI' (Score:2)
Re: (Score:2)
"Expert systems", as you indicated, is a far better term for what we have these days, but I think it just doesn't have that same media-friendly click-generating panache. Hell, I'm just thankful whenever the media calls it "AI" instead of the ridiculous over-broad term "robots" when they're just talking about computer algorithms.
Re: (Score:2)
I don't expect to have a conversation with a dog. I expect the dog to listen, and me to have to make guesses based on the dog's behavior as to whether or not it's listening to me/comprehending me. But since you bring it up: We so far haven't even managed to produce a machine that can 100%
Re: (Score:2)
You are referring to something which is called "strong AI". This article is about "weak AI". The difference is that weak AI focuses on mastering a small, specialized set of problems (playing Go, chatter-bots, etc.) while strong AI takes on the big consciousness part of the problem. This distinction was created by John Searle, who came up with a thought experiment that counters the Turing Test: the Chinese Room [wikipedia.org]
Weak AI has had much more research focus and success. One big part of the reason is that it is so
Actual game, anywhere ? (Score:2)
Is there a link to the actual game played ? The pgn file, for example ? Would be curious to see the actual moves.
Re: (Score:2, Informative)
Go uses SGF (Smart Game Format):
https://gogameguru.com/i/2016/03/Lee-Sedol-vs-AlphaGo-20160309.sgf
Re:Actual game, anywhere ? (Score:4, Informative)
Since I doubt that most people unfamiliar with Go have a way to view an SGF file, here is a link to the Go Game Guru article about the game. At the bottom there is a JavaScript applet you can use to play through the game.
https://gogameguru.com/alphago-defeats-lee-sedol-game-1/
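For anyone curious what's inside an SGF file: it's plain text, with moves recorded as `;B[xy]` / `;W[xy]` nodes where the two letters (a-s on a 19x19 board) are column and row. A minimal sketch that pulls the moves out with a regex; the record below is a made-up snippet to show the format, not the actual AlphaGo game:

```python
import re

# A few nodes of a hypothetical SGF record: game type, board size, then
# alternating Black/White moves in two-letter coordinates.
sgf = "(;GM[1]SZ[19];B[pd];W[dp];B[cd];W[qp])"

moves = re.findall(r";([BW])\[([a-s]{2})\]", sgf)
for color, coord in moves:
    col = ord(coord[0]) - ord("a")   # 0-18, left to right
    row = ord(coord[1]) - ord("a")   # 0-18, top to bottom
    print(color, (col, row))
```

A real SGF viewer also handles setup stones, comments, and variations, but for replaying a straight professional game the move nodes above are most of what matters.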
What hardware do they use? (Score:1)
I can't find anywhere what hardware is used in the game, and also what hardware was used for training.
I assume it's a cluster of computers talking to each other, writing down trees of possible moves (to RAM, but still writing them down), after having played more games than a single human could in a lifetime, against one human brain that is not allowed to talk to other players and has seen only a small fraction of the games the AI has.
Yes, computers with enough resources to compute numerous trees will wi
Re: (Score:2)
Deepmind Video (Score:4, Informative)
Alphabet (Score:2)
Alphabet's AlphaGo.
Re: (Score:1)