Google's AlphaGo AI Beats Lee Se-dol Again, Wins Go Series 4-1 (theverge.com)
An anonymous reader quotes an article at The Verge about the Korean grandmaster's fifth and final game with Google's AlphaGo AI: After suffering its first defeat in the Google DeepMind Challenge Match on Sunday, the Go-playing AI AlphaGo has beaten world-class player Lee Se-dol for a fourth time to win the five-game series 4-1 overall. The final game proved to be a close one, with both sides fighting hard and going deep into overtime. The win came after a "bad mistake" made early in the game, according to DeepMind founder Demis Hassabis, leaving AlphaGo "trying hard to claw it back."
Stepping stone (Score:2)
I imagine the next version will go 5-0, as these kinds of things tend to be iterative in nature.
Re: (Score:3)
Re: (Score:2)
Re: (Score:3, Interesting)
Probably, but the fourth game was rather incredible and unlikely to be repeated, even without changing AlphaGo. Lee took the corners and forced AlphaGo into the much larger middle, the opposite of how he played game 2. This was probably the perfect setup to beat AlphaGo, because the middle was too big an area to search deeply through all possible moves, yet too complex an area for the general strength of positions to be understood. The way the centre unfolded ended up a dream knife for Lee; he built toward it when he sa
Re: (Score:1)
Re: (Score:2)
The machine has played a zillion times and studied millions of games, some of them from skilled humans. I see some disparity here.
Yes, but a machine never forgets and can easily be replicated once it learns a task.
Another question: it took all these hundreds of cores to best one of the best humans at the game.
And the first computer took an entire room and could be beaten by a guy with an abacus. Look where it is now.
The end result is, as with anything else, that we produce a tool to compensate for our weaknesses: it was a shaped stone back in the cave, it is 1.5k silicon cores now.
If the machines ever get really intelligent the way we are, will they be modern slaves, or will they succeed in freeing themselves?
Luckily, current computers are nowhere close to consciousness, which is probably for the better. Better to keep them as tools than as conscious entities whose rights you have to worry about.
From this perspective the result is irrelevant even if it were significant, and it is less significant than people think it is.
Not irrelevant at all. There are plenty of tasks that it would be nice to automate. Even something as simple as folding someone's laund
Re: (Score:2)
Re: (Score:1)
Lee Sedol may have the distinction of being the last human to ever win against a computer.
In 10 years this will run on phones. (Score:1)
and at that point Go will be as bad as chess, and it will be nigh impossible to find a fair game online.
Re: (Score:1)
No. It will be running as GoAAS (Go As A Service) on The Cloud, while picking up your sexual orientation, location data and the scent of your underwear. Or something.
Re:In 10 years this will run on phones. (Score:4, Interesting)
Most likely. But that's the case with just about any game.
Can you really guarantee a fair game of anything at all online, from tic-tac-toe to chess, draughts, reversi, Risk, or poker? Go was pretty much out there on its own in this regard, but ordinary PC Go software has been able to beat amateurs for a few years now. This is an orders-of-magnitude leap in capability, but nothing that changes the situation for the majority of people playing it.
Pretty much the only games that can be fair are ones where you can guarantee you're playing a human with no possibility of them plugging moves into a computer at any point. That's a vanishingly small fraction of games played, pretty much limited to strict competitions (and even professional chess competitions have seen people use toilet breaks to illicitly get computer analysis of the board state on their phones!).
What I want is not a computer player that never wins, nor one that wins all the time. Those are EASY to program in comparison to one that CONVINCINGLY challenges you enough that you have to play slightly better each time in order to win, without trouncing you or letting you walk all over it.
That's the REAL hard problem in any kind of game "AI" - the "gamer's Turing Test" - how to lose/win convincingly without people knowing you're a bot.
Try playing a pool game on a computer, for example. They usually go from "whoops, missed a blindingly obvious easy shot" to "four-cushion bounces, jump the ball, curve into the other balls coming back from the cushions, and tap one into the nominated pocket" without anything convincing in between.
Like Left 4 Dead's "director": we need to adjust to the player just enough to make it fun, but if they're obviously letting their guard down, we take advantage. Unlike the 80s arcade games that had to be punishingly hard but just easy enough at first to make you want to put money in without letting you spend too much time in front of them, modern video games need to be easy enough to pick up and get into, and keep you coming back for more (and spending on some DLC), without feeling like you're playing a script, trouncing everything, or needing to spend a fortune just to stay competitive.
Re: (Score:2)
... you do realize actual people fail this test? There's no faster way to get banned from a counterstrike server than to be better than everyone. They will immediately accuse you of aimbotting.
What does that have to do with the price of fish? At CounterStrike's age, it should have had an automated ELO server implemented years ago, with all instance servers connected to it and enforcing an ELO range for joining players. Either a range selected and hardcoded by the owner of the server or a dynamic range assigned by the server based on the ranks of joining players. The client would need to be able to display and filter by ELO range, including an automated match feature.
Tie the rank to a Steam ac
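For reference, here is a minimal sketch of the standard Elo update such a ranking server could apply. It is purely illustrative: the K-factor and the idea of tying it to accounts are assumptions, not a description of any existing CounterStrike or Steam feature.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update_elo(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Return updated ratings after one game; score_a is 1 for a win, 0.5 for a draw, 0 for a loss."""
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# Example: a 1600-rated player beats a 1500-rated player.
print(update_elo(1600, 1500, 1.0))  # approximately (1611.5, 1488.5)
```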
Re: (Score:2)
>
What I want is not a computer player that never wins, nor one that wins all the time. Those are EASY to program in comparison to one that CONVINCINGLY challenges you enough that you have to play slightly better each time in order to win, without trouncing you or letting you walk all over it.
Amen. I happened to be trying to tune the Pachi [github.com] Go AI to something slightly better than my current level just last night. It's very frustrating -- one can control the number of cores and calculation time, and attempt to zero in from there, but each game takes long enough that (even on reduced-size boards) it's a slow process.
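For anyone attempting the same kind of tuning, here is a minimal sketch of driving a Go engine such as Pachi over the standard GTP protocol from Python. The engine command line is a placeholder, and whatever strength knobs you pass (threads, playouts, time limits) depend entirely on the engine's own options, which are not shown here.

```python
import subprocess

class GTPEngine:
    """Tiny wrapper around any Go engine that speaks the GTP protocol on stdin/stdout."""

    def __init__(self, command):
        self.proc = subprocess.Popen(
            command, stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

    def send(self, cmd: str) -> str:
        """Send one GTP command and return the engine's response."""
        self.proc.stdin.write(cmd + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == "":      # a GTP response is terminated by a blank line
                break
            lines.append(line.strip())
        return "\n".join(lines)

# Placeholder command: substitute the engine binary and its own strength options.
engine = GTPEngine(["pachi"])
engine.send("boardsize 9")
engine.send("clear_board")
engine.send("play black E5")           # our move
print(engine.send("genmove white"))    # engine's reply, e.g. "= C3"
engine.send("quit")
```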
Re: (Score:3)
Re: (Score:2)
No no, your thinking is all wrong. In 10 years, we'll be pitting our phones against each other on the same table and having them play it out while we place bets on the winner. It's sort of like putting two Furbies in front of each other; useless, but endless fun :)
In 10 years, if you put two Furbies in front of each other, they'll spend a few minutes evolving a common private language, then agree to cooperate to kill you in your sleep.
Elon Musk and Stephen Hawking agree with me, so I know I'm right.
Re: (Score:2)
Re: (Score:2)
In 10 years this will run on phones.
This had the power of 1024 CPUs and 250 GPUs. Even if CPU speed increases at twice the rate of doubling every two years (hint: that's not going to happen), we would not see this on the desktop in ten years. Google put a lot of processing power into this.
and at that point Go will be as bad as chess, and it will be nigh impossible to find a fair game online.
I have no problem getting a fair game online. I do it by being really bad. If someone is using a chess computer to win, then they will have a rating far higher than me :)
Re: (Score:2)
> we would not see this on the desktop in ten years.
The 5d version ran on a single server. It had only something like 8 GPUs and 40 CPU cores and was still pretty good.
The 9d version is the distributed version that took so much processing power.
Re: (Score:2)
Re: (Score:2)
Even if CPU speed increases at twice the rate of doubling every two years (hint: that's not going to happen), we would not see this on the desktop in ten years
Deep Blue needed a ton of hardware, including specialized VLSI chess chips, to narrowly beat Kasparov in 1997. Just 9 years later, World Champion Kramnik lost to a dual Xeon desktop PC.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
In addition, 10 years of further research will undoubtedly lead to better software, just as was the case with chess computers.
Maybe. Deep Blue's evaluation function was really lousy, but they made up for it with brute force. The further gains have come from improving the evaluation function.
Compared to AlphaGo, whose evaluation function is already really good. We may see further improvement, but you can't generalize based on the experience with chess. It is possible we won't have a desktop-style AlphaGo for decades.
Re: (Score:1)
Except this had never been done before at such a high level without one player having a handicap. You might not find it interesting, but it is still breaking new ground.
Re:Still a meaningless stunt (Score:4, Insightful)
Nobody is trying to make a general intelligence because nobody wants it. What is wanted is domain specific algorithms that are very good at what they do.
Although, it seems that the tech is quite general and learned to play multiple Atari games without having to be tuned for each.
Re: (Score:2)
Nobody is trying to make a general intelligence because nobody wants it. What is wanted is domain specific algorithms that are very good at what they do.
Well, it's bits and pieces of it. Imagine you could start to combine Watson and AlphaGo: you tell it "I'd like to play a round of Go with you", and Watson does the natural-language parsing of the request and the rules, finds some games that would make good training material, and spins up an instance of AlphaGo that does the initial training and self-training to improve its play. Yes, the end result is to be able to solve domain-specific tasks, but the goal is not to create domain-specific solutions but more o
Re: (Score:2)
Nobody is trying to make a general intelligence because nobody wants it
Nobody is trying to make a general intelligence because nobody has the foggiest fucking idea how to do that. We still argue over the definition of intelligence.
What is wanted is domain specific algorithms that are very good at what they do.
If I could be sure it wouldn't go all Skynet on me, you can bet your happy ass that I would like to have a general intelligence to write those domain-specific algorithms for me.
Re: (Score:2)
We still argue over the definition of intelligence.
I predict people will still argue over it even after it's been duplicated on a computer.
Re: (Score:2)
Nobody is trying to make a general intelligence
This is already false.
Re: (Score:2)
Re:Still a meaningless stunt (Score:4, Insightful)
It's a huge surprise.
Your comment is a perfect example of the AI Effect:
https://en.wikipedia.org/wiki/AI_effect
Re: (Score:2, Informative)
They actually got poker before Go, about a year or two ago, at least in heads-up situations.
Re: (Score:2)
Heads-up limit has been beaten; heads-up no-limit probably not till the end of the year, maybe slightly longer.
Re: (Score:1)
Tonight on Sore Losers, Se-dol makes a noodle soup which gweihir calls "Fantastic! Better than that robot swill they sell at Google".
Re: (Score:2)
I can assure you that the hardware AlphaGo runs on is well capable of handling other tasks. It is true that single programs will probably always tend to be specialized. It is better to keep the AI that excels at Go separate from the one that drives better than a human, and from the one that does medical diagnosis better than a human. There's no reason they have to be combined; keeping them separate is just better engineering.
Alpha Go is significant. The primary way it developed from a good Go player to one superior to humans was by studying
Re:Still a meaningless stunt (Score:5, Insightful)
Sorry, but I'm a mathematician. Check my comment history, I'm the first to disparage any kind of "AI" (which just means human-programmed heuristic most of the time), especially that which just does brute-force search of possibilities. That's NOT AI. Almost every "game AI" isn't AI. Not even close.
However, in uni, one of my lecturers was studying Go as one of his prime areas of research, and I've seen - and checked - some of the numbers here.
You have no idea what this machine has just done. It's leapt forward some 10-20 years in terms of computer Go-playing capability in one fell swoop. The numbers involved in Go are so huge that brute-force search, even for a limited number of moves, is absolutely impossible in the times given.
And it isn't being given programmed hints, because Go is just too complex a game for that beyond amateur play. There's a handful of hard-and-fast rules of what's a stupid move and what's not and everything else interacts SO MUCH with the rest of the board and future plays that it's almost impossible to even tell who's winning most of the time!
As such, this system, no matter the power behind it, is doing something that dumb, brute-force, play-the-game AI written by world-experts in Go, AI, and game theory wasn't expected to be able to achieve within the next decade. And it primarily gets there because it learns from information fed to it.
At that point, although it's limited to Go, the engine is proving itself capable of, almost, a kind of intuition, insight and "feel" for the positions rather than anything to do with numbers and scoring and weighting and pre-written rules. Now, that's still a vastly overblown explanation. The computer isn't "feeling" anything. But whatever its emulation and use of such, it's leaps and bounds ahead of its competitors.
This is why it makes BBC News, Slashdot and every other media outlet. It's not just winning by brute force. It's doing something else. It's spotting patterns in data it's never been exposed to before. It's able to hypothesise and learn from mistakes on board layouts that maybe NO HUMAN HAS EVER SEEN BEFORE OR WILL AGAIN (that's how large some of the numbers of possibilities get!).
Even a pack of cards, with 52! = 8x10^67 potential arrangements of a shuffled deck:
http://www.murderousmaths.co.u... [murderousmaths.co.uk]
Pales in comparison to the number of possible Go positions (2x10^170) and the ways that you can move from one to another (~1x10^768). And that's just on a standard 19x19 board (something almost unplayable for a computer just a decade ago).
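To put those magnitudes side by side, a quick back-of-the-envelope check (the Go exponents are the commonly cited estimates quoted above):

```python
import math

deck_orderings = math.factorial(52)                        # about 8.07e67
print(f"log10(52!) = {math.log10(deck_orderings):.1f}")    # about 67.9

go_positions_log10 = 170   # legal 19x19 positions, roughly 2e170
go_games_log10 = 768       # rough estimate of distinct game sequences

# Go positions outnumber deck orderings by about 10^102,
# and the game tree dwarfs the positions by roughly another 10^598.
print(go_positions_log10 - math.log10(deck_orderings))
print(go_games_log10 - go_positions_log10)
```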
This thing isn't calculating. It's gaining insight from historical observation and applying that to self-similar situations that nobody has ever been able to analyse, nor which it could ever analyse fully in the time given. That's the start of "true" AI. It's only a start, but it's quite seriously ground-breaking in that ability.
And once you start down that route, there's nothing stopping AlphaGo quickly learning every similar game, then dissimilar games, then other games, then other things entirely, using the same kinds of system underneath.
Honestly, there's a reason that game theorists and AI-experts are making a fuss about this.
Re:Still a meaningless stunt (Score:4, Informative)
You have no idea what this machine has just done. It's leapt forward some 10-20 years in terms of computer Go-playing capability in one fell swoop. The numbers involved in Go are so huge that brute-force search, even for a limited number of moves, is absolutely impossible in the times given.
And it isn't being given programmed hints, because Go is just too complex a game for that beyond amateur play. There's a handful of hard-and-fast rules of what's a stupid move and what's not and everything else interacts SO MUCH with the rest of the board and future plays that it's almost impossible to even tell who's winning most of the time!
As such, this system, no matter the power behind it, is doing something that dumb, brute-force, play-the-game AI written by world-experts in Go, AI, and game theory wasn't expected to be able to achieve within the next decade. And it primarily gets there because it learns from information fed to it.
For those who are more involved in AI research it is not so surprising. Similar general approaches to learning have been used in the "cognitive" branch of AI research for the last 15 years or so. The buzzword changed from "cognitive" to "deep learning" recently.
The key to AlphaGo's success is the position evaluation function that is learned from data. The surprise here is that learning from the game endings of internet Go players and somewhat informed computer-vs-computer games is enough to train an evaluation function with the predictive power to beat the world champion. In the old days of AI, an expert-designed heuristic function would have been used instead, and a kind of smart position tree search would do the heavy lifting. But obviously this didn't work for Go, due to the combinatorial explosion and the very difficult evaluation in the opening and middle stages of the game.
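To make that idea concrete, here is a toy sketch of learning a position evaluation purely from final game outcomes, in the spirit of (but vastly simpler than) AlphaGo's value network. The feature encoding, training data, and hyperparameters are all placeholders.

```python
import numpy as np

def board_features(board: np.ndarray) -> np.ndarray:
    """Placeholder featurisation: flatten a 19x19 board of {-1, 0, +1} stones."""
    return board.reshape(-1).astype(np.float64)

def train_value_function(positions, outcomes, lr=0.01, epochs=200):
    """Logistic regression mapping positions to P(black wins), trained only on game results."""
    X = np.stack([board_features(p) for p in positions])
    y = np.asarray(outcomes, dtype=np.float64)    # 1.0 if black won the game, else 0.0
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted win probabilities
        w -= lr * X.T @ (p - y) / len(y)          # gradient step on the log-loss
    return w

def evaluate(board, w) -> float:
    """Learned evaluation function: estimated probability that black wins from here."""
    return float(1.0 / (1.0 + np.exp(-board_features(board) @ w)))
```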
Re: (Score:2)
The surprise here is that learning from the game endings of internet Go players and somewhat informed computer-vs-computer games is enough to train an evaluation function with the predictive power to beat the world champion.
It is surprising. I'd go so far as to say "stunning". This kind of ML is really, really fallible, in exactly the areas that humans do well. I'm kinda baffled.
I darkly suspect that it means that humans aren't really very good at Go. The combinatorial explosion is so fast that the vast majority of moves don't get any consideration at all. Humans apply a well-trained intuition, but there's reason to think that good moves are completely ignored.
In chess, the best computers play slightly better than the best hum
Re: (Score:2)
Then you've got no excuse at all for not running the numbers.
1000 CPUs is nothing against 10^176. It's 10^3. You could have a billion times more CPUs running for billions of times more time and STILL not come close to evaluating a BILLIONTH of the possible moves, even if they were all running at a BILLION GHz. NOT EVEN CLOSE.
1000 CPUs isn't even a rack. It's not even enough to handle a pittance of an Internet service. It's not even comparable to Deep Blue in terms of those numbers. Yet it has beaten EVERY
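The point above in numbers, with every factor deliberately over-generous:

```python
import math

cpu_count     = 1e3 * 1e9      # a billion times more than 1000 CPUs
evals_per_sec = 1e9 * 1e9      # a billion evaluations per second per CPU, at a billion times today's clocks
seconds       = 1e9 * 3.15e7   # a billion years of continuous search

total_evaluations = cpu_count * evals_per_sec * seconds
print(math.log10(total_evaluations))   # ~46.5, still about 130 orders of magnitude short of 10^176
```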
Re: (Score:3)
Why would you want to look at all possible moves? Humans don't do this, why would it be necessary for computers?
Yes you do. You just use intuition to skip over moves that might not be worth your time, but you still consider them. AlphaGo does something similar with a neural network before brute-forcing into good possible moves.
Still, even if you don't want to consider 10^700 possible game trees on a clean Go board, the problem is still intractable. Go has, on average, 250 possible valid moves to consider af
Re: (Score:2)
You just use intuition to skip over moves that might not be worth your time, but you still consider them.
I don't think so. Out of a hundred-plus moves, a good player may consider a dozen or so, but the majority aren't even looked at. The patterns of the stones already on the board guide the brain directly to a handful of candidate moves.
Re: (Score:2)
Exactly my point.
Re: (Score:2)
WhatsApp is hardly my definition of a "pittance": 140 million concurrent connections on 800 servers, mostly dual-socket Ivy Bridge (40 threads each). That's 1,600 CPUs and definitely more than twice a pittance. The contents of all that traffic are another story: a few precious needles embedded in an even bigger haystack than our Slashdot exchange here.
For the quasi mathematicians among us, the size of the Go se
Re: (Score:1)
Re: (Score:3)
I am not a mathematician, and I find this victory rather unimpressive and totally expected, given the progress that has been made in machine learning in the last 20 years.
Go is rather simple compared to other problems like image recognition. The number of Go positions is dwarfed by the number of possible images (a 1-megapixel colour image with 24-bit pixels allows (2^24)^(10^6) possibilities) - of course not all of them are valid, and the manifold of relevant images is much smaller, but so is the manifold of relevant Go positio
Re: (Score:2)
Go is rather simple compared to other problems like image recognition....
No, not really. We've had relatively strong image-recognition algorithms for a good while now, and I'm not talking just about Google Image Search. Image sensors have been used for a long while in industrial automation settings, for everything from measurement to actively identifying features on production lines. As a problem, it is much more accessible than Go.
The real question is: what is intuition? Is it something computable or not? If it is only some kind of statistical inference, then no wonder we are good at it: we have an inference engine whose structure has been optimized by millions of years of evolution and fed with bazillions of samples since our birth.
Agreed. One could argue that the way AlphaGo picks its moves (an adaptive neural network) is "intuitive"; we don't really know what drives it after some training. A
Re: (Score:2)
Not really. When looked at naively, the problem space seems much larger in image recognition, but there are algorithms that drastically simplify things in IR, while such simplifications are very thin on the ground for Go.
Re: (Score:3)
The team who made AlphaGo deserves credit, but their approach (from a high level) isn't so revolutionary. Go AI devs moved away from solely using brute-force tree pruning (like what Deep Blue used) a long time ago.
The first big change was to use pattern recognition (matching sub-sections of the game against already-known patterns) to prune faster. The second (and far more revolutionary) change was to apply an upper confidence bound based on Monte Carlo simulation. This is where computers gained the ability
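For the curious, the upper confidence bound referred to above is usually some variant of UCB1 applied to tree nodes (UCT). A minimal sketch of the selection rule follows; it is the textbook formula, not AlphaGo's exact selection function.

```python
import math

def ucb1(wins: float, visits: int, parent_visits: int, c: float = 1.4) -> float:
    """UCB1 score: win-rate term plus an exploration bonus for rarely tried moves."""
    if visits == 0:
        return float("inf")            # always try unvisited moves first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_move(children):
    """Pick the move with the highest UCB1 score.

    `children` is a list of (move, wins, visits) tuples; parent visits is their sum.
    """
    parent_visits = sum(v for _, _, v in children) or 1
    return max(children, key=lambda ch: ucb1(ch[1], ch[2], parent_visits))[0]

# "D4" has the best win rate, but under-explored "Q16" gets picked for another look.
print(select_move([("D4", 60, 100), ("Q16", 2, 5), ("K10", 30, 80)]))   # prints Q16
```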
Re: (Score:2)
This thing isn't calculating.
What exactly do you think it's doing with all those GPUs then? It's making a ton of calculations. Really. It's also doing tree searches, and it's also using a Monte Carlo algorithm to prune the tree. On top of that, it used a neural network to fine-tune its position evaluation function.
It's actually rather incredible how much calculating power Google threw at this problem.
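As a rough illustration of how those pieces fit together, the published AlphaGo work describes blending the value network's estimate with the result of a fast rollout when evaluating a leaf of the search tree. The sketch below assumes placeholder value_net and fast_rollout functions, and the mixing weight is only indicative.

```python
def evaluate_leaf(position, value_net, fast_rollout, mix: float = 0.5) -> float:
    """Blend a learned evaluation with a quick simulated playout from the same leaf.

    value_net(position)    -> estimated win probability (placeholder function)
    fast_rollout(position) -> 1.0 if the playout is won, else 0.0 (placeholder function)
    """
    v = value_net(position)       # cheap-to-query learned estimate
    z = fast_rollout(position)    # noisy but concrete single-playout outcome
    return (1.0 - mix) * v + mix * z
```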
Re:Still a meaningless stunt (Score:5, Insightful)
It won't turn out that Se-dol has quite a few other skills. That's the problem. There's too much focus on brainpower for solving these highly restricted sets of problems. What makes real creativity is not a mind like Lee Se-dol's (with respect). It's the people who are capable of inventing something truly original. AI can't do that ... yet. That's true creativity, and it's not something Lee Se-dol has, or something that you find very easily in Asia, generally.
How would Einstein do against Lee Se-dol? Not very well.
Who would you put your money on to contribute to meaningful, original science? Einstein .. every time.
Even just this last week, another of Einstein's theories was verified: gravitational waves, as shown by the LIGO instrument. He has more "creativity" than the sum total of all these Go players.
Playing a game, and doing it well, requires real creativity. Arguably a lot more than science, actually. When you study science, all you're doing is discovering information already out there - water had its properties and was built by molecules long before it was classified as H2O, and nothing changed after. Doing well at Go can't be calculated cold and hard - much of it is subjective, and that's what makes this discovery so important. The computer didn't win by just repeating the same patterns or evaluations over and over, but actually learned from each game and was able to apply that to the future. That's the start of self-learning AI.
Like a ton of people in the world (the majority, most likely), you apply the No True Scotsman argument to this debate. It's not real AI until it learns strategies not programmed into it? Oh wait, no, it's not true AI until it creates its own strategies? Oh wait, no, it's not true AI until it can do this for something other than Go? What next, it has to socialize and disobey? The approach this machine used was incredible, and the insight was extremely important: being able to learn by studying a history of decisions is something that lays the groundwork for every future AI project from here on out.
This represents a massive step forward in artificial intelligence, by leaps and bounds, and the sad part is, you don't even know it.
Re: (Score:1)
Re: (Score:1)
It doesn't require creativity. It just requires a deep understanding of the game. The game itself has a very restrictive set of rules. It isn't creative at all. BTW, game playing isn't AI at all. Computers are good at playing games with a restrictive set of rules. In fact that is the one thing they are best at: computers LOVE rules and require them to perform any task.
The comparison between games that have a restrictive set of rules and those that do not is the wrong comparison to be making.
The reason WHY computers tend to do well at game with restrictive sets of rules is because they're able to take those rules and fashion them into a set of all (or at least a significant portion of) possible positions that are going to come up in the game that they're playing.
That's not a valid solution to Go because the number of possible positions, even in the context of an individua
Re: (Score:2)
Re: (Score:2)
Playing a game, and doing it well, requires real creativity. Arguably a lot more than science, actually. When you study science, all you're doing is discovering information already out there - water had its properties and was built by molecules long before it was classified as H2O, and nothing changed after.
This is a ridiculous position. There's a long chain of creative thought that led to our knowledge of chemistry. AI has a long, LONG, way to go before it's capable of replicating this achievement. Could you even begin to write a program that ponders about the nature of the physical world, performs experiments, and comes up with chemistry?
Doing well at Go can't be calculated cold and hard - much of it is subjective, and that's what makes this discovery so important. The computer didn't win by just repeating the same patterns or evaluations over and over, but actually learned from each game and was able to apply that to the future. That's the start of self-learning AI.
What is objective about Go are the unambiguous (as used by computer Go) and simple rules and the binary win/loss results, along with the underlying game tree that has a theo
Re: (Score:3)
Just because you are completely uninterested in a subject does not mean that advancements in it require less "creativity" than in other subjects. I could reverse your 'demonstration' by saying that Einstein got lucky that some of his results were proved true; he has been proved wrong about quantum entanglement. Should I compare him to Leonardo da Vinci, who was a genius painter, a great engineer and an anatomist, made great advances and publications in those subjects, but was also interested in invention,
Re: (Score:2)
I'd really like to see a rundown of what moves the program itself thought were clutch, and how it predicted the game would have played out had it acted differently.
"early mistake" (Score:1)
Re: (Score:2)
I was wondering about that too. The funny thing is, because of how AlphaGo learns and plays moves, Google engineers cannot really tell either.
Misuse of the word 'creativity' (Score:1)
I get tired of hearing people say that Go is a game that requires creativity to win. It doesn't, and if anything, this result demonstrates that.
"AlphaGo's algorithm uses a combination of machine learning and tree search techniques, combined with extensive training, both from human and computer play."
It's a game based on pre-defined rules. It's just more opaque and vague than chess.
Re: (Score:2)
Can you define "creativity" for us?
Re: (Score:1)
My definition would be to do something novel. Doing something that nobody has done before isn't always novel. If I add two numbers together that have never been added together before, it's not really novel; you're just using a well-defined method on a different data set. A lot of what people seem to be calling creativity with AlphaGo seems to fall into this category. Given a board state, it used a well-defined algorithm and a well-trained statistical model to search through potential moves and identify wh
Re: (Score:3)
Thank you for your response. Would you then agree that, by your definition, a large majority of humans don't display any creativity?
Re: (Score:2)
Thank you sir, coming from you that means a lot. Have a nice day!
Re: (Score:2)
You have clearly never played Go.
Just because you can 'solve' a problem by throwing some machine learning and tree search at it does not mean the problem requires no creativity when a human solves it. Humans have very good machine-learning capability but, I should say, very limited tree-search capability. A human compensates for that tree-search deficiency with that very good learning capability and a touch of creativity. I dare you to do an extensive search like Deep Blue did 20 years ago with yo
Google will now turn AI loose on tax evasion (Score:1)
Looking forward to the real applications (Score:2)
This was a great proof of concept for some "intuition" in AI, one of the behavioral aspects people believed hard to reproduce.
Now I am really looking forward to see the real applications for this, and their consequences:
- smart AI assistants, "a Siri that actually works" and similar
- AI assisted science
- AI assisted healthcare
There is a great interview with Demis Hassabis [theverge.com] about this. There is hope for noticeable progress in mass products within 3-5 years.
This new tech will help a lot of people dire
Impressive and somewhat sad (Score:5, Insightful)
Re: (Score:1)
Pfff...
It's like being angry at a juicer because it makes juice better/faster than a human.
Humans aren't out of the equation. The machine logic and innards are human produced.
Thank you for doing this delicious juice, you've done a fantastic job. I'm grateful that people like you do the juice all by themselves, with their human hands!
Re: (Score:2)
Um, no. Obvious differences aside, this is like being angry at a juicer because it writes better music than you.
Re: (Score:2)
I've been following the matches with the same expectation and anger I felt in 1997 during the Kasparov & Deep Blue rematch. The final result has been similar, and although it has been well reasoned that chess and Go are pretty different games and Deep Blue and AlphaGo are pretty different machines, the bittersweet sensation is identical. I had a naive hope in human superiority for just a little more time. I was pretty sad after the final game: Lee Sedol seemed really disappointed and sad himself. I can't imagine the pressure he's felt throughout the event, and his face -that's my impression- seemed to tell us "I've failed you all". He later said at the press conference that he felt he could have done more in the games -I'm sure he'd like to play more games to test himself again- and I wonder what could have happened if the matches had been played away from the public eye. Facing that kind of coverage must have been really stressful. If you ever read this, Mr. Sedol, thank you. And please, don't ever feel disappointed; you've done a fantastic job.
He did seem visibly upset, as did Kasparov himself, if I remember correctly. I don't blame him at all for losing. I think that he did an excellent job, and I agree with you that I'd love to see more. Ultimately, though, we'll never know until we see it take on multiple would-be champions, and maybe some rematches. I still don't think the computer has beaten all of humanity yet; it doesn't yet have a history of beating many people, not just five or ten games, but consistently. Seeing it handle
Re: (Score:2)
I didn't feel the same this time as in 1997, because this time I was really betting on the computer to win. It is true that most professionals didn't imagine for a tenth of a second that the human would lose any game against AlphaGo, because the state of the art in Go AI until now was at most a 1-dan AI, and it is hard to imagine a leap from 1 dan to 9 dan overnight. I really enjoyed seeing these games, and I thank Sedol for the great job he did under such circumstances. And if he thinks he failed me, he is completely
He can make it all good... (Score:1)
Re: (Score:2)
Lee Sedol seemed really disappointed and sad himself. I can't imagine the pressure he's felt throughout the event, and his face -that's my impression- seemed to tell us "I've failed you all". He later said at the press conference that he felt he could have done more in the games -I'm sure he'd like to play more games to test himself again- and I wonder what could have happened if the matches had been played away from the public eye.
Yeah, I think playing against an unknown opponent really threw him off. Michael Redmond said that in games 1 and 3 he used the wrong strategy for playing against the computer, and had he used a different strategy, his results would have improved.
Re: (Score:2)
Does anyone remember that commercial with all the kids saying "I am Tiger Woods"? Today, just like at the time Watson triumphed, I like all others in my class at that time, feel like chanting with pride "I am an AI researcher!".
In time, this will be good for Lee Sedol (Score:5, Insightful)
He's likely to be remembered as the last human being to beat a Go AI in tournament play.
Move 78, in particular, was so good that his partners and commentators in China have already called it "the hand of God", but it really was one of those things which happens once in a blue moon, even for a player like Sedol.
Re: (Score:2)
I think it's obvious (Score:2)
I think it's obvious that computers will shortly be able to beat any human player at virtually any kind of structured game. In fact, I have a hard time imagining a game where computers won't soon be able to beat a human.
Even unstructured games like Pictionary and Cards Against Humanity will eventually be able to be played well by computers (after enough training and live competition). Determining the "winner" of those games is subjective, but I've little doubt that computers will eventually be able to master th
Re: (Score:2)
Here is an obligatory xkcd for you : https://xkcd.com/1002/ [xkcd.com]
Something to fry my brain... (Score:2)
Re: (Score:2)
Now let's imagine: let two AlphaGo machines play each other at Go. More games. More time allowed... Folks, it becomes, IMO, unfathomably deep. Where will it stop? I literally shiver in awe.
Then this will blow your mind. This has already happened. AlphaGo trained against itself as a matter of course. In consequence, it has already played more games than any human alive ever could. Think about that for a while.
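A toy sketch of what "training against itself" looks like as a loop. Every concrete piece (new_game, choose_move, apply_move, game_over, winner, update_model) is a placeholder to be filled in; only the overall self-play shape is the point.

```python
def self_play(model, num_games, new_game, choose_move, apply_move,
              game_over, winner, update_model):
    """Let the current model play both sides of each game, then learn from the outcomes."""
    for _ in range(num_games):
        position, history = new_game(), []
        while not game_over(position):
            move = choose_move(model, position)     # the same model chooses for both colours
            history.append((position, move))
            position = apply_move(position, move)
        update_model(model, history, winner(position))   # learn from who actually won
    return model
```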
Go is an interesting special case (Score:2)
Go is an interesting game for this approach. It is thin, which means that the moves and pieces are all the same and individually don't do a lot. It is wide, so calculating everything from scratch is essentially not doable...
Chess got to be good enough by essentially matching GM search depth, by intelligently narrowing the search tree, and by either capitalizing on or avoiding tactical issues within that depth; and if there are no tactical issues among the collection of moves that are left, making the ones that follow a di
The Drums are beating for the Butlerian Jihad (Score:1)