Go Champion Lee Se-dol Beats Google's DeepMind AI For First Time (theverge.com)
An anonymous reader writes: Korean Go grandmaster Lee Se-dol on Sunday registered his first win over Google's AlphaGo. The win comes after AlphaGo won the first three games of the DeepMind challenge match earlier this week. The win should serve as a reminder that Google's artificial intelligence computer is not perfect after all, at least for now. Lee said earlier this week that he had not been able to defeat AlphaGo because he could not find any weakness in its strategy. Commenting after his win, Lee said, "I've never been congratulated so much just because I won one game!"
This is interesting (Score:5, Insightful)
So AlphaGo is not so far removed from a 9-dan human player.
My guess is that the mistake AlphaGo made on move 79 will be analyzed and a new version will be created, stronger than the current one. Maybe this analysis will point to a whole class of mistakes that will be fixed.
It is a bit like when Google's self-driving cars make a mistake. This mistake is used as input for the next release of the software so it doesn't act the same way next time. With this process, one car making a mistake results in a change in behavior of all of the cars, because with AI it is possible to communicate new knowledge to the rest of the cars. All of them improve, unlike humans, for whom transmitting the new knowledge involves a lot of work or may not even be possible.
Re:This is interesting (Score:4, Informative)
Given that it's a self-learning AI, this match will of course be added to the training set, and the neural network will adapt to it. I'd be surprised if there were a new _version_ of AlphaGo to fix this. Rather, there'll be an improved neural network - but that's a continuous process, as AlphaGo keeps playing against itself and learning from those games.
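For what it's worth, here is a minimal sketch of that "keeps playing against itself and learning" loop in its simplest tabular form, on a toy subtraction game (take 1-3 stones from a pile; whoever takes the last stone wins) rather than Go. It is only an illustration of the idea; AlphaGo's actual pipeline combines deep policy and value networks with Monte Carlo tree search, and all the names and constants below are made up for the example.

import random

VALUES = {}            # stones left (with this side to move) -> estimated win probability
ALPHA, EPSILON = 0.1, 0.2

def value(stones):
    return VALUES.get(stones, 0.5)          # unseen states start as a coin flip

def choose_move(stones):
    moves = [m for m in (1, 2, 3) if m <= stones]
    if random.random() < EPSILON:           # explore occasionally
        return random.choice(moves)
    # exploit: leave the opponent in the position we currently value lowest for them
    return min(moves, key=lambda m: value(stones - m))

def self_play_game(stones=21):
    """Play one game against itself; return states each side visited and the winner."""
    visited = {0: [], 1: []}
    player = 0
    while stones > 0:
        visited[player].append(stones)
        stones -= choose_move(stones)
        player ^= 1
    return visited, player ^ 1              # the side that just took the last stone wins

def learn(games=20000):
    for _ in range(games):
        visited, winner = self_play_game()
        for side, states in visited.items():
            outcome = 1.0 if side == winner else 0.0
            for s in states:                # nudge each visited state toward the result
                VALUES[s] = value(s) + ALPHA * (outcome - value(s))

learn()
# Multiples of 4 are theoretical losses for the side to move; their learned
# values should come out clearly lower than their neighbours.
print({s: round(value(s), 2) for s in (4, 5, 8, 9, 12, 13)})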
Re:This is interesting (Score:5, Interesting)
I don't think it's that easy.
The game changed when Lee joined together two large fronts, creating an inside-outside problem with territory far bigger than in any of the previous three games.
AlphaGo seemed incapable of doing the calculations for that area. As time ran down it first resorted to playing out the smaller fronts, then it fell back on last-ditch moves - rubbish moves that would only pay off if Lee made a mistake - and then it resigned. It seems it was never even able to compute the middle of the board.
AlphaGo looked indecisive and risk-averse. Lee was the opposite.
Re: (Score:3)
I don't think it's that easy.
The game changed when Lee joined together two large fronts, creating an inside-outside problem with territory far bigger than in any of the previous three games.
AlphaGo seemed incapable of doing the calculations for that area. As time ran down it first resorted to playing out the smaller fronts, then it fell back on last-ditch moves - rubbish moves that would only pay off if Lee made a mistake - and then it resigned. It seems it was never even able to compute the middle of the board.
AlphaGo looked indecisive and risk-averse. Lee was the opposite.
This is very interesting - is there a link to the game that was played (like, perhaps, an animated GIF showing the game as it was played)?
Re: (Score:2)
At the bottom of this page.
https://gogameguru.com/lee-sed... [gogameguru.com]
Re: (Score:1)
It's been thousands of years and people are still trying to master the old Chinese game. Now they've got computers working on it and are still having trouble.
Re: (Score:2)
This is just like chess computers, but a decade later, because it uses different types of algorithms. As with chess engines, we will now enter a golden age of Go computers, and a lot of the people involved will go on to write commercial offerings.
Chess computers were a novelty until they got good, then they became an important training tool for both amateur and professional competitors. A lot of people think they are or were "brute force" because they don't understand the algorithms, and that "brute force"
Re: (Score:1)
[...] With this process, one car making a mistake results in a change in behavior of all of the cars, because with AI it is possible to communicate new knowledge to the rest of the cars. All of them improve, unlike humans for whom transmitting the new knowledge involves a lot of work or may not even be possible.
Humanity learns by culture. It is a lossy process (like the memory of individuals), or how would you explain Trump after we already had Hitler? But all in all, civilization is the process of knowledge transfer. Which is why we use copyright and patents to suppress it.
Re: (Score:2)
Because before Hitler, Germany's civilian population was heavily burdened by high taxes to pay war reparations to the Allied powers as compensation for starting World War I. The major cities had become decadent, with high crime. Then along comes this little man with a funny mustache, promising that his militias of smart young men will clean up the country and end the high-taxation regime. What could go wrong?
Re: (Score:3)
That's a lot like human beings. You didn't invent the wheel, fire, foods that form complete proteins, paper, language... for yourself. You inherited a culture full of knowledge and add your little bit to it. Human cultures evolve, computer software evolves. Computer software is evolving at a much faster pace than human culture for now (and for at least the next several decades).
Re: (Score:1)
It's the human learning principle taken further. The point is that an AI is immortal, whereas human beings are not. It takes 10-20 years for a human child to learn a (by now tiny) fraction of human knowledge and ideas. In contrast, an AI can keep learning as long as you keep the hardware running - you can even copy its state to new hardware. That will become a huge difference in the future, potentially making AIs more powerful thinkers. It could be that AIs will be used to combine research and ideas from
Re: (Score:2)
Perhaps it's possible to place the game pieces on the board so that it is possible to win regardless of the other player's next move. Like the situation in five-in-a-row or tic-tac-toe where you can create two lines at once, and whichever one the opponent blocks, you win on the next move.
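Here is a tiny illustration of that "two lines at once" idea on a 3x3 tic-tac-toe board: a position where the side to move creates two winning threats at once, so the opponent can block only one. The position and the helper are just made up for the example.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def immediate_threats(board, player):
    """Count lines where `player` has two marks and the third cell is empty."""
    count = 0
    for line in LINES:
        cells = [board[i] for i in line]
        if cells.count(player) == 2 and cells.count(" ") == 1:
            count += 1
    return count

# The classic corner fork: X to move with
#   X | . | .
#   . | O | .
#   O | . | X
board = ["X", " ", " ",
         " ", "O", " ",
         "O", " ", "X"]
board[2] = "X"                         # X takes the top-right corner
print(immediate_threats(board, "X"))   # -> 2: O can block only one of them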
Lee Se-dol also learns (Score:2)
To be fair, the computer has almost certainly analyzed thousands of Lee Se-dol's previous matches. Now that Lee has seen the computer play a bit, maybe he can win a few more times.
Of course, the inevitable remains inevitable.
Re: (Score:2)
There are only a few hundred records of games played by Lee Sedol and only a few dozen of them were championship games lasting several hours or even a couple of days like the AlphaGo series. The playing style of these longer games is different to the shorter games played against lower-ranked players or for tuition or study.
The DeepMind people have stated clearly that AlphaGo has NOT been prepped with games by Lee Sedol. I don't know if the reverse is true but it's common for Go players facing a particular o
Re: (Score:2)
With this process, one car making a mistake results in a change in behavior of all of the cars
It's funny that you assume that this will make for an improvement.
An improvement requires both correct analysis of the behavior and the right solution. Just experiencing something doesn't make you avoid it, learning has to occur.
'Correcting' that mistake at move 79 can easily turn into a total meltdown on move 81; this is the way of Go. It is silly to assume this was a (or the) mistake in the first place.
EVERYTHING is a waste of time. (Score:1)
But, frankly, it's your time to waste. So we let you.
Why do you feel unable to let others do what they want with their time, but must control their actions through your scorn and perceived superiority?
Re: (Score:2)
By the same logic, animal tests of new drugs are a waste of time. We should just test them directly on humans.
The techniques used to develop AlphaGo are going to be used in the future to develop AIs that improve human quality of life (as well as, unfortunately, military and other less positive uses). Trying out the techniques on games first seems like a prudent step.
To digress, one of the problems AIs are going to face in the future is that (like human intelligence) they can never make the optimal decision
Re: (Score:3)
AFAIK we crossed the line on AI doctors in the mid 1990s. Given an encoding of symptoms, the AI outperformed humans pretty consistently. I'd assume that in 2016 it isn't remotely close. One of the purposes of forcing doctors to encode symptoms with charting (Obamacare) is to allow for statistics to further improve the computer systems that decide on courses of care, and of course to make it possible for them to suggest treatments or rank them in real time during the charting process.
Re: (Score:2)
Sorry Charlie, we've been 'encoding' charts for the past 20 years. Obamacare had nothing to do with it. You're confusing it with the United States' belated entry into the 20th century with ICD-10, which is a much more structured dataset than its predecessors.
And, as bad as ICD-10 is, if you don't collect the data and analyze it as best you can, you're doing 'art', not 'science'. The problem with medicine-as-art is, well, you've seen doctors' handwriting....
Re: (Score:2)
Of course Obamacare had a lot to do with it. It pushed large numbers of providers from doing their encoding during the billing phase to doing it during the clinical phase. That made the data much more accurate.
Re: (Score:2)
AFAIK we crossed the line on AI doctors in the mid 1990s. Given an encoding of symptoms, the AI outperformed humans pretty consistently. I'd assume that in 2016 it isn't remotely close.
Note: this was in limited situations, and it is also worth remembering that 'encoding' is far from trivial. Also, experts dramatically disagree on how to handle various situations (there are over 100 different surgeries that can be used to treat a bunion; when should a bunion be treated, and which surgery should be used?).
Re: (Score:2)
That's what the data is for. Bunions are common. Given enough data we start developing good quality correlations about patient satisfaction, dangers, long term costs... for all these methods based on diagnostic criteria. With good quality data we can get more specific. Refine, repeat. The experts don't need to know the answer they just need to feed in data and respond as the systems evolve to be more specific.
Re: (Score:2)
You overestimate how much of the job the AI was able to do. It wasn't trivial, but it was far from the complete job. I expect that medical labs will be automated before doctors are...but that doctors will increasingly use tools that do parts of the job.
To change reference frames, to say that this replaced a doctor is like saying that an IDE replaces a programmer. It may well be better at recognizing correct parameters to a library routine, but that's far from the complete job.
Re: (Score:2)
The question was an AI-based doctor vs. one not using an AI. To use your IDE analogy: a programmer writing code themselves in Vim vs. one using an AI.
Re: (Score:2)
Reason for admittance: Animal bite
Where: Right gastrocnemius
What kind of animal: (long list) - dog
What kind of dog: (long list)
Stray or owned
x or y
Having a doctor spend time on a chart means less time doing something else. Now, we do need information to make reasoned decisions, but when you make a doctor answer a question such as "what kind of dog" and the answer from the patient is "who the f**k knows? It was a black dog with a white tail."
Th
Re: (Score:2)
The question was what changed with Obamacare. Having doctors fill out charts is a stupid idea. That kind of stuff should be filled out by the nurse, by the front desk, by the... Most medical offices still have terrible workflows.
Re: (Score:1)
I understand. That's important data for MU3. What's of no value is having the doctor fill it out. There are plenty of other people, including the patient, who could answer those questions.
Re: (Score:3, Interesting)
Mikhail Botvinnik [wikipedia.org] worked for years with a team on one of the first non-brute-force programs, PIONEER. [wikispaces.com] While the program itself was not ultimately at the forefront of chess programs, spinoffs of the algorithms it developed were successfully employed for energy network planning in the USSR at increasingly large scale.
Re: (Score:2)
What a waste of time. They could be solving real problems instead of this stupid shit.
Are you talking about the programmers or the Go players?
Re: (Score:2)
About the media choosing to cover this.
If people stopped reading and responding, the media would stop reporting.
Go Turing Test (Score:5, Interesting)
It would be interesting to set up a Go Turing Test. Either have another top Go player or AlphaGo behind a wall calling the moves.
Can the human champ Lee Se-dol determine if he is playing against a computer or a human . . . ?
Also, the more he plays against AlphaGo, will he develop different strategies for playing against computers, as opposed to humans . . . ?
Re: (Score:3)
In fact, Lee Sedol was quite surprised by some unconventional moves from AlphaGo...
My bet is that these moves will be analysed and will bring down some "don't do" rules, a little like when Go Seigen successfully played a 13-13 move as the third move in a cross-fuseki.
I think that AlphaGo will help human Go make great progress by shaking down some (bad) implicit rules... I think that a rematch in a few years would be quite interesting...
Re:Go Turing Test (Score:4, Insightful)
I think that a rematch in a few years would be quite interesting.
If the development of the software continues, the human will be massacred, even with the new knowledge.
Re: (Score:2)
But AlphaGo is not such a program. Sure, it learns from its mistakes (it is designed that way). But as we don't understand the inner workings of the net that well, there might well be a level of play at which such neural-net-based systems simply top out.
There is no ceiling. You can always evaluate the tree wider, deeper, and more efficiently for starters, and you can improve the evaluation (there's a small search sketch at the end of this comment). AlphaGo was made in about two years. People have been working on chess-playing programs for decades, and they still haven't hit a ceiling.
But as we don't understand the inner workings of the net that well,
That hasn't stopped people from reaching current standards. Why would it stop them from reaching even higher standards?
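A minimal, self-contained sketch of that "wider, deeper, more efficiently" point: plain negamax versus negamax with alpha-beta pruning on the same made-up toy game, counting how many positions each has to examine. The game, the evaluation function and all the names are invented for this illustration; AlphaGo itself uses Monte Carlo tree search guided by neural networks rather than plain alpha-beta.

import math

def children(n):
    # Toy game: a position is just an integer; a "move" multiplies it by 2 or 3, or adds 1.
    return [n * 2, n * 3, n + 1]

def evaluate(n):
    # The static evaluation you can keep improving: here, prefer numbers close to 100.
    return -abs(100 - n)

def negamax(n, depth, stats, alpha=-math.inf, beta=math.inf, prune=True):
    stats["nodes"] += 1
    if depth == 0:
        return evaluate(n)                     # deeper search = later horizon
    best = -math.inf
    for child in children(n):                  # wider search = more children tried
        score = -negamax(child, depth - 1, stats, -beta, -alpha, prune)
        best = max(best, score)
        alpha = max(alpha, score)
        if prune and alpha >= beta:            # cut-off: the remaining moves cannot matter
            break
    return best

# Same value either way; alpha-beta just examines far fewer positions.
for prune in (False, True):
    stats = {"nodes": 0}
    value = negamax(7, 8, stats, prune=prune)
    print(f"prune={prune}: value={value}, positions examined={stats['nodes']}")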
Re: (Score:2)
Higher standards for humans depend on more efficient "algorithms". Higher standards for computers can derive not only from more efficient "algorithms", but also from more powerful hardware. And the current computer hardware is quite inferior to brains for neural processing, so much improvement is possible.
Re: (Score:1)
RNNs don't involve "trees". As for training more layers, the parameters must be carefully fine-tuned by humans. The more layers, the trickier this gets.
I'd compare it more to the process of die shrink.
Re: (Score:2)
Humans have limited hardware; the computer can be upgraded. The human won't catch back up over time. The computer is changeable and controlled by humans, so any blind spot would get fixed over time.
There is no reason to expect this to go differently than with chess computers. Check the timeline; it has been a decade since a human could beat a top computer.
It is like saying that maybe humans will outrun a car after they find the car's speed ceiling. Not likely.
Re: (Score:2)
whales would be way more intelligent than humans (their brains are significantly larger, after all).
Their brains are larger, but humans have more neurons in the neocortex.
Re: (Score:2)
I think that a rematch in a few years would be quite interesting...
No, it will be utterly boring.
In a few years (1, 2, 3), the calculation power of the processors will have improved roughly by factors of 2, 4 and 8.
Considering that you get the same factors again from die size etc., we have combined improvements of 4, 16 and 64.
In three years a top go computer can play against 200 top class go players simultaneously and win most of the games.
I think that AlphaGo will help human Go make great progress by shaking down so
Re: (Score:2)
In chess the computers definitely helped break down walls around ideas of "bad" moves, and now top-level chess play is highly "pragmatic", even unprincipled; victory lies in the ability to find the exceptions to ideas that champions of the past considered rules, or at least clearly good or bad.
Re: (Score:1)
That doesn't even make fucking sense. A Turing test is used to try to test self-awareness; Go requires no self-awareness for proficiency because it's just a math problem.
Some of you are so stupid your names should appear next to the dictionary definition.
Actually, the problem with Go is that you can't brute-force it; it requires intuition. That is why the AlphaGo wins are such a big deal in the AI world.
Re: (Score:2)
Not being able to brute-force it, and it being a mathematical problem requiring no consciousness to devise a good strategy for, are not mutually exclusive concepts.
Re: (Score:2)
and it being a mathematical problem requiring no consciousness
Consciousness is also a mathematical problem.
Re: (Score:2)
What on earth makes you think you can say that?
As far as I'm aware, no single scientific paper has been able to define what consciousness is in any way, shape or form. Do you somehow know differently?
Re: (Score:2)
Even the chess "brute force" algorithms were defined by their ability to trim the tree as far back as the 90s. It is by not trying to brute force it that the chess computers got good; but it was still described as a "brute force" system because it is partially so.
The difference between the 50th percentile and the 99th in chess computers is defined almost entirely by the ability to prune the search tree and be less brute force; the better computer is the one doing less brute force. They also follow some line
Re: (Score:1)
That doesn't even make fucking sense. A Turing test is used to try to test self-awareness
Neurotypicals use language in a fluid way, assuming the reader is flexible enough to understand the meaning even if it is not correct in a literal sense. Just because you have trouble with that doesn't make the others stupid.
Re: (Score:2)
It makes a lot of sense if you think about chess computers; the strongest ones get the most attention, but the ones that sell the most copies are the ones with training features that can mimic different human styles of play. Such an engine can trade a lot of strength to play a certain style and still be stronger than the humans you're training to play against. In the '90s, the only commercial one with that was Chessmaster, and it wasn't all that good at it. Modern offerings are very good at it, and can play very si
You misunderstand the Turing Test (Score:2)
The purpose of the Turing Test is to convince skeptical people that the AI being tested is intelligent. Turing argued that if a machine passed the "imitation game" then nobody would be able to deny that it was intelligent. He was wrong, of course, but that was his argument, and the basis of the test. He was arguing that intelligent machines were possible. He never expected anyone to seriously run the test. (And, in fact, nobody has yet tried to run the test as he specified it.)
If you want to generalize
Re: (Score:2)
The purpose of the Turing Test is to convince skeptical people that the AI being tested is intelligent.
No, the purpose is to provide an answer to the question of whether computers can think. Because that question is too vague, and we lack a good definition of "thinking", Turing proposed this test as a practical definition. If a machine passes the test, it has everything we require for "thinking", without our having to go through the trouble of coming up with an accurate definition.
Re:Go Turing Test (Score:5, Interesting)
It would be interesting to set up a Go Turing Test. Either have another top Go player or AlphaGo behind a wall calling the moves.
Can the human champ Lee Se-dol determine if he is playing against a computer or a human... ?
Well, at least in the endgame, the pros were pretty clear that these were not the kinds of plays you'd make to try to confuse a 9-dan pro into losing a slightly favorable position. It was forcing Lee Se-dol to respond, but all it really did was give him more time to consider the remaining contested areas while playing moves he could have blitzed out if he'd wanted to. Also, previously they felt AlphaGo took some really convoluted routes to a win where a human would just simplify to claim it. So when you step out of the game and into the meta-game, it seems obvious, to them at least, that you're playing a computer.
Re: (Score:3)
I don't know Go but computer chess doesn't feel like human chess. Computer players have worse strategy and much better tactics than a comparably rated human.
What would happen (Score:2)
Re:What would happen (Score:4, Informative)
Actually I believe that this was part of AlphaGo's training . . . playing against itself.
Re: (Score:2)
You don't have to incentivize computers. You just tell them what to do and they do it, unquestioningly.
Re: (Score:2)
Almost right. You don't have to incentivize the current generation of computer programs. You just tell them what to do and they do it, unquestioningly.
One of the limitations of the current generation of AI programs is a shallow motivational "stack". This is not something that appears difficult to address, but getting the changes right might be very tricky. (Addressing a problem and solving it are two very different things.) Once you start getting complex motivations you'll get reactions like Atlas bein
Never (Score:1)
Re: (Score:2)
I think I am, because if I'm wrong it doesn't matter.
This is the most basic existential reality.
"I think therefore I am" is a basic philosophical study because it is a complete failure on its face; there is no way to even prove that you exist, to yourself; and yet, it is trivially easy to prove it well enough to move forwards to more important questions, because if you don't exist then being wrong is neutral. It is only if you exist that the answer matters, so there is only one possible correct provisional
Re: (Score:2)
You just need a good definition of "self-awareness" and it's trivial to program it in. The question is "what good is it?", and the answer is "Not much unless you have sensors and effectors.", so people don't usually bother. But if you look at the video of Google's Atlas you will notice that it has self-awareness. It doesn't have complex motivations, but it is aware of itself in relation to the universe, and acts to alter this in determined ways.
If you don't like that definition, give one you like better,
Re: (Score:2)
There is no "THE definition". Language doesn't work that way. If you don't know what you're looking for you won't recognize it when you find it.
Re: (Score:2)
2 decades ago, you just didn't get the notification because it isn't broadcast on your network segment.
Re:So when was it claimed to be perfect? (Score:4, Insightful)
What on earth is that supposed to mean? Has this guy won every game against every other person? No, so by that logic he's not perfect either. And the people who have beaten him, does that mean THEY were perfect? No, can't be, because they didn't win all their games either.
"Perfect" is an exaggeration, but the human's one win does demonstrate that the computer is not vastly superior to the human. If *I* were to play against this computer, I would lose each and every game. 100% of the games. I didn't even write "99.999%", because I couldn't win a single game against vastly superior software. Go is not a game of chance, so my "luck" would not have let me win even once. But the Go champion did win a game against the software, so apparently they are still at a comparable playing level (even if one is slightly better than the other). So the software isn't "perfect" at beating humans. Yet.
Re: (Score:2)
Go is not a game of chance
It could be, if you used dice to determine where to put the stones. There's even a small chance you'd win.
Re: (Score:2)
He didn't say it didn't have any capability to learn on the fly, just not the same. That's probably correct. I wouldn't count on it always learning more slowly, however.
This is nevertheless a great achievement (Score:2)
Impressive feat (Score:3)
Re: (Score:2)
Yep, if we wanted to level the playing field we'd turn off the AC at the server room and disconnect those CPU fans. Let me see AlphaGo... eh, Go then.
Re: (Score:2)
we'd turn off the AC at the server room and disconnect those CPU fans.
I'm sorry, Dave, I can't allow you to turn off the AC.
Match reviews (Score:5, Informative)
I'm looking forward to the eventual move by move analysis of these games. For now there's some interesting commentary here: https://gogameguru.com/alphago... [gogameguru.com]
It's been 20+ years since I played Go semi-seriously. I used to have a collection of Ishi Press books which I've long since misplaced. I suddenly find myself very interested in the game again.
Re: (Score:2)
Good lord, where did the time go? I thought about it for a bit and realized it's been 30 years, not 20. In my mind I'm still 23; the fact that I have a son who's over 30 is kind of beside the point.
Re: (Score:2)
All finite two-person games are in principle solvable. That is not the same as having a working algorithm that will beat 9P players all the time.
Re: (Score:2)
It doesn't matter whether the game is finite, because existing finite games have search spaces beyond what could ever be searched exhaustively.
You only need a finite algorithm; you don't need to search the entire game space.
As in chess, in many positions you only have to search part of the tree because of symmetry. There are lots of things that reduce the scope of the analysis without reducing the scope of the game; a tiny worked example follows.
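To make "solvable in principle" concrete on a game that actually is small enough: below is a sketch that exhaustively solves tic-tac-toe with a transposition table (memoising positions already analysed), one scope-reducing trick in the spirit of the symmetry point above. Everything in it is illustrative and obviously says nothing about the feasibility of doing the same for Go.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)                      # the transposition table
def solve(board, to_move):
    """Best achievable result for the side to move: +1 win, 0 draw, -1 loss."""
    if winner(board) is not None:
        return -1                             # the previous move already won the game
    if "." not in board:
        return 0                              # board full with no winner: a draw
    other = "O" if to_move == "X" else "X"
    best = -1
    for i, cell in enumerate(board):
        if cell == ".":
            best = max(best, -solve(board[:i] + to_move + board[i + 1:], other))
    return best

print(solve("." * 9, "X"))                    # -> 0: perfect play from both sides is a draw
print(solve.cache_info().currsize)            # distinct positions actually analysed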
Re: (Score:2)
It may depend on precisely what you mean by "solved". "Solve" comes from a Latin word meaning to dissolve, and the alchemists said "solve et coagula", meaning to dissolve into the liquid and then to re-precipitate. They were talking about how to purify materials (well, and the mind). So it was originally necessary that not all the material be dissolved, and also that it not all be re-precipitated.
So, metaphorically solve came to mean to purify. And a program that can win against the human champion 3