Alpha Go Takes the Match, 3-0 (i-programmer.info)
mikejuk writes: Google's AlphaGo has won the DeepMind Challenge Match by winning the third game in a row of the five-game series against the 18-time world champion Lee Se-dol. AlphaGo is now the number three Go player in the world, and this is an event that will be remembered for a long time. Most AI experts thought it would take decades to achieve, but now we know that we have been on the right track since the 1980s or earlier. AlphaGo makes use of nothing dramatically new: it learned to play Go using a deep neural network and reinforcement learning, both developments of classical AI techniques. We know now that we don't need any big new breakthroughs to get to true AI. The results of the final two games are going to be interesting, but as far as AI is concerned the match really is all over.
Re: (Score:2)
Re: (Score:3)
Despite the name "neural network", there is nothing "very similar" between the way AlphaGo works and brains work.
That seems correct. AlphaGo is playing go at a level beyond that of humans. The take home point seems to be that brains aren't really competitive and are probably a dead-end technology.
Re: (Score:2)
The future is here: "Bacon. Bacon! It's Bacon! Bacon Bacon Bacon! IT'S BACON!!!"
Re: (Score:2)
We know now that we don't need any big new breakthroughs to get to true AI.
But, this isn't 'AI', it's just another 'expert system'.
Re:Win a game... (Score:5, Informative)
But, this isn't 'AI', it's just another 'expert system'.
No. Alpha-Go is pretty much the opposite of an "expert system". Expert systems encode expert human knowledge in a series of explicit rules and if-then tables. Alpha-Go is based on neural nets and self-learning. There is no list of explicit rules.
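To make the contrast concrete, here is a minimal sketch (purely illustrative, with hypothetical helper names; none of this is DeepMind's code). An expert system bakes human knowledge into explicit if-then rules, while a learned policy is just a pile of trained weights mapping a board encoding to move probabilities:

# Hypothetical "expert system" evaluator: explicit, human-authored if-then rules.
def expert_system_move(board):
    if board.opponent_group_in_atari():   # rule supplied by a human expert
        return board.capturing_move()
    if board.corner_is_open():
        return board.corner_point()
    return board.any_legal_move()

# A learned policy: no rule list, only weights fitted from games and self-play.
import numpy as np

class LearnedPolicy:
    def __init__(self, weights):
        self.weights = weights            # produced by training; opaque to humans

    def move_probabilities(self, features):
        logits = features @ self.weights  # features: numeric encoding of the board
        exp = np.exp(logits - logits.max())
        return exp / exp.sum()            # softmax over moves, not an if-then table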
Re: (Score:2)
I hope to see this AI take on other expert Go players, perhaps several at a time or a team of them.
This is a really exciting development, one I didn't expect to see in my lifetime and I really want to see how far it can be stretched.
But all isn't lost for human players: Fan Hui 2p helped improve the AI's play after his 5-0 loss to it some months back, and he says he now has new insight into the game and has improved his world ranking from 633 to somewhere in the 300s since becoming familiar with AlphaGo.
Re: (Score:3)
How would this same AI model (i.e. not a retrained model) do in Chess?
Why can't it be retrained? A human Go player with no experience in chess wouldn't know the first move, but after observing some games to deduce the rules, then playing enough games to practice, they'd probably be competent.
Guess what; that's exactly how this system was trained. In fact, an earlier model taught itself to play dozens of old Atari games in the same way.
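For a sense of what "playing enough games to practice" looks like in code, here is a minimal tabular Q-learning loop. It is only a sketch: the Atari work used a deep network over raw pixels rather than a lookup table, and the env interface (reset/actions/step) is an assumption made for illustration.

import random
from collections import defaultdict

# The agent improves purely by playing; no game-specific rules are coded in.
def q_learning(env, episodes=1000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)                       # Q[(state, action)] -> value estimate
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            if random.random() < epsilon:        # occasionally explore a random move
                action = random.choice(env.actions(state))
            else:                                # otherwise exploit the current estimate
                action = max(env.actions(state), key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(state, action)
            best_next = max((Q[(next_state, a)] for a in env.actions(next_state)), default=0.0)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q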
Could it hold a basic conversation? Identify a picture of a cat?
Actually, Google's context-aware voice recognition & response system is largely driven by a similar layered neural network, as is its vision system.
Re: (Score:2)
We know now that we don't need any big new breakthroughs to get to true AI.
But, this isn't 'AI', it's just another 'expert system'.
It is a combination. It is a neural network guiding an expert system, and being guided by a go board state evaluator.
Re: (Score:2)
Is the software written in Rust? (Score:1)
Is this AI software written in Rust?
Re:Is the software written in Rust? (Score:5, Informative)
Is this AI software written in Rust?
I believe it is written in C++ and Lua, because that is what the authors used in previous projects. Most of the computing is done on GPUs, which is most likely done with CUDA, because that is what they used in the past, but they could use OpenCL.
Re: (Score:2)
Is this AI software written in Rust?
Obviously, they used Go [golang.org] to play Go.
(Just kidding, I think it was C++ actually)
Re:This is impressive, but... (Score:5, Interesting)
And humans do the same thing. They spend their lives studying the important games that came before. So the point is it did it pretty much the same way humans do. And it has already played a move that no strong human has ever played (Game 2, move 37). At first it (not surprisingly) appeared to be a blunder, until its strength became clear. Humans will now learn from the computer and their level of play will rise. It happened in chess and checkers, and in a very big way in backgammon. Any strong human backgammon player today would trounce the World Champion of 20 years ago.
Re: (Score:3)
Game 2, move 37 was amazing. The commentator had already pointed out the issues with the two stones trapped to the lower/middle left and the loose group to the lower right. This move linked everything together in a light way. This was an "ear-reddening move."
Re: (Score:2)
I suspect that if we were to slightly tweak the rules that AlphaGo would become useless and our master player would easily adapt.
AlphaGo did most of its learning through self-reinforcement: playing against itself. So if the rules were changed, it could learn the new rules quickly through self-reinforcement, while the human player would have to relearn a lifetime of habits. Lee Sedol has been training daily since he was four years old. AlphaGo surpassed him after only a few hundred hours of training. The results of a rule change would likely be the opposite of what you predict. AlphaGo would adapt far quicker than a human.
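A bare-bones self-play loop looks roughly like the following. This is only a sketch under assumed interfaces (the game object knows the rules, the policy object can sample moves and update its weights); the real training pipeline is far more elaborate.

# The rules live entirely in `game`; change the rules, rerun the loop, and the
# policy relearns from scratch with no hand-written Go knowledge to unwind.
def self_play_training(game, policy, iterations=10000):
    for _ in range(iterations):
        states, moves = [], []
        position = game.initial_position()
        while not game.is_over(position):
            move = policy.sample_move(position)   # current network picks a move
            states.append(position)
            moves.append(move)
            position = game.apply(position, move)
        outcome = game.winner(position)           # +1 or -1 from the first player's view
        policy.update(states, moves, outcome)     # reinforce the winning side's moves
    return policy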
Re: (Score:2)
Re: (Score:2)
That's quite a leap... (Score:5, Insightful)
Every time a computer beats a human at a "smart" game, we hear the same thing. And every time, when all is said and done, all we have is a program that can play a game well (and maybe a really aggressive marketing campaign to sell consulting services, Watson).
Look, we barely understand what intelligence is let alone know what it means to have a computer replicate it. We can have computers perform tasks that we ascribe to smart people and call it intelligence, but that's about it right now.
And, with deep learning and neural nets, we haven't gained any real insights into intelligence. We just have a black box mathematical function that can play a game.
Re: (Score:1)
I often describe Artificial Intelligence as the discipline of using computational means to duplicate tasks that would otherwise require human intelligence.
Re: (Score:2)
In some ways, this is unprecedented.
Not much change (Score:3)
Life could be a really complex search algorithm we just can't begin to comprehend. Life could be a non-linear approximation of 42...
Not much has changed in AI; the fundamentals of these new systems are still the same as before. The difference is that we have more computing power (and CS work-hours) to put into problems we thought were more complex than they actually were. It's likely that Go is actually as difficult as we thought it was, so our progress on this search problem is not the result of a huge leap but
Re: (Score:2, Offtopic)
Re: (Score:2)
Re: (Score:3)
Yes, let's define intelligence as whatever computers cannot yet do... This win has narrowed the definition quite a lot.
The relevant part of this win is that a machine using pattern matching, generalization and reinforcement learning has beaten the best human at the only game left where humans bested machines.
I guess this is pretty relevant. It is not general AI, but it is a big step in that direction.
Re: (Score:2)
The relevant part of this win is that a machine using pattern matching, generalization and reinforcement learning has beaten the best human at the only game left where humans bested machines.
Let's not forget Calvinball [wikia.com]!
Re: (Score:1)
If we get a self-reinforcing machine on that one, it would be genuinely terrifying.
Re: (Score:2)
Re:That's quite a leap... (Score:5, Insightful)
No it isn't. Who ever said that winning "Go" games was a step towards AI?
Winning at a game that cannot be brute forced and is played through strategy and pattern matching is a step towards AI. Having a part of the skills coded by programmers while another part of the skills is learned by the system by playing is a step towards AI.
Re: (Score:2, Insightful)
Re:That's quite a leap... (Score:4, Informative)
Re: (Score:3, Insightful)
I think you misunderstood what he meant. The thing is, when chess became winnable by computers some people said the exact same thing (while others quickly redefined intelligence to exclude chess). Meanwhile, in the area of computer vision (and possibly others, but I'm only a computer vision expert, so I couldn't comment) we see that there are still big qualitative differences between how current neural nets perform and how humans perform. I personally think neural networks hold great promise, but something
Re: (Score:1)
Someone on reddit suggested that a 'real' AI would have tried to bribe the player to throw the match. Now that would have been impressive!
Re:That's quite a leap... (Score:5, Interesting)
Well, yes and no. Back when Deep Blue beat Kasparov in 1997, it was programmed with a huge amount of chess logic written by people. Using a computer amplified the power of those algorithms; it had move databases, but it wasn't really self-modifying. From what I understand you could step through the algorithm, and even though you couldn't do it at the speed of the computer, you could follow it. That approach pretty much failed for Go; it's very hard for a human to quantify exactly what constitutes a good or bad move.
Neural networks pretty much do away with that in any form humans can follow. That is to say, if you had to explain how AlphaGo plays, you'd get a ton of weights that don't really make much sense to anybody. It also means you don't need Go expertise in the programming, because the programmers couldn't find where to tweak a weakness even if they saw one. All you can really do is play it, and it'll learn and adjust from its losses. From what I've gathered it's hard to reach excellence: if you train with lots of mediocre players making mediocre moves, it's easy to learn decent moves, but that'll fail against a master. And if you let it self-play from scratch, it can easily learn nonsense that'll only work against itself.
Apparently they've solved those problems well and have now created a machine that plays at a beyond-human level. If they can extend this approach to practically unlimited choices, like say an RTS where you can choose what to build, where to send your units, when to attack, when to defend, what resources to collect, etc., it could be absolutely massive. Imagine if you were in, say, city planning and you had tons of data on traffic patterns, congestion and how traffic reflows when you open and close roads, and you could put an AI on the job to say where and how you get the most value for money. I'm not sure if it's strong AI, but these are certainly places where we use HI today.
Re: (Score:2)
We have a system that can teach itself to play many games [youtube.com], with no specific programming on how to beat those games.
That is a massive difference from say, Deep Blue vs Kasparov. Deep Blue was specifically programmed to play Chess. AlphaGo was essentially fed a bunch of Go games and figured out how to play by itself. Surely you see the qualitative difference in that.
Is there a reason w
Re:Stop the hype (Score:4, Insightful)
Until this week you could have made a similar statement about Go.
short circuiting the branching factor (Score:5, Interesting)
Although in a sense it is "nothing new" (neural networks and monte carlo statistical techniques), the combination is one of the most convincing demonstrations of short-circuiting huge branching factors to arrive at what a human player would call "intuition".
Chess has a branching factor of about 35. This is small enough that if you prune the most dismal lines, you can brute force the rest to many ply and arrive at a very good result, but this is not a well generalizable technique.
Go has a branching factor of about 250. This cannot be brute forced, even with aggressive pruning. The result of a NN evaluator function plus Monte Carlo has been astonishing: it was not predicted for computer Go to reach this strength for decades yet, but here we are.
The implications of this combination of techniques to other kinds of problems requiring "intuition" will be interesting to watch.
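To see why the branching factor matters: even a four-ply lookahead is roughly 35^4, about 1.5 million positions, in chess, but 250^4, about 3.9 billion, in Go. Below is a toy sketch of the combination described above: Monte Carlo tree search with a learned prior steering the search toward a handful of plausible moves, and a value estimate standing in for full rollouts. The policy_prior, value_estimate and game interfaces are assumptions for illustration; this is not AlphaGo's actual algorithm, just the general shape of the idea.

import math

class Node:
    def __init__(self, prior):
        self.prior, self.visits, self.value_sum = prior, 0, 0.0
        self.children = {}                          # move -> Node

def select_move(node, c_puct=1.5):
    # Pick the child balancing its observed value against the prior's suggestion.
    total = sum(ch.visits for ch in node.children.values()) + 1
    def score(item):
        _, ch = item
        q = ch.value_sum / ch.visits if ch.visits else 0.0
        u = c_puct * ch.prior * math.sqrt(total) / (1 + ch.visits)
        return q + u
    return max(node.children.items(), key=score)

def search(game, pos, node, policy_prior, value_estimate):
    # Returns the value of `pos` from the point of view of the player who just moved.
    if game.is_over(pos):
        return -game.winner_value(pos)              # winner_value: for the side to move
    if not node.children:
        for move, p in policy_prior(pos):           # prior narrows the ~250 candidate moves
            node.children[move] = Node(p)
        return -value_estimate(pos)                 # value net replaces a full random rollout
    move, child = select_move(node)
    v = search(game, game.apply(pos, move), child, policy_prior, value_estimate)
    child.visits += 1
    child.value_sum += v
    return -v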
Re: (Score:2)
Exactly. The fact we don't understand how our brains work means we tend to attribute supernatural powers to them. But what if the only thing they do is statistical inference, Monte Carlo, pattern matching...? That would mean that as we approach the processing power of a brain in your typical $1000 hardware, we would be able to implement artificial intelligence without developing new ideas or techniques. This is unlikely, but the recent success of neural networks is the result of having more processing p
Re: (Score:1)
Re: (Score:3)
Go has a branching factor of about 250. This cannot be brute forced, even with aggressive pruning.
Go also has a difficult evaluation function. In chess, you can get reasonable results with little more than just counting material, whereas in Go, a stone placed on a certain location in a mostly empty corner may be much stronger than the same stone placed one square further, and this won't become obvious until 100 moves later in the game.
Re: (Score:2)
Chess is nowhere near that simple. Counting material is a very poor evaluation function unless you also somehow account for position. I've seen games where the winning move was to sacrifice a queen for a pawn. But counting material is a QUICK evaluation function, so you can cut off most losing lines by gross material disparity n moves in the future. This will occasionally cause you to prune a winning line, but not often. However as you get closer to the current board position material difference becomes an
Re: (Score:2)
I've seen games where the winning move was to sacrifice a queen for a pawn
Sure, but such sacrifices don't usually take very long. A queen sacrifice is usually followed by a series of forcing, or almost forcing, moves until either the queen is won back with interest, or until checkmate follows. Material counting by itself is a poor evaluation. You have to combine it with a reasonably deep search, which is possible for a chess program.
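As a rough illustration of the material-counting evaluation plus shallow search being discussed here (a toy sketch only: `position` is an assumed interface, and real engines add positional terms, quiescence search and pruning):

# Piece values in pawns; the king gets 0 because it can never actually be captured.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def material(position):
    # Positive when the side to move is ahead in material.
    score = 0
    for piece, owner in position.pieces():
        value = PIECE_VALUES[piece]
        score += value if owner == position.side_to_move() else -value
    return score

def negamax(position, depth):
    # Shallow search with the quick material evaluation at the leaves.
    if depth == 0 or position.is_terminal():
        return material(position)
    best = float('-inf')
    for move in position.legal_moves():
        best = max(best, -negamax(position.apply(move), depth - 1))
    return best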
True AI? (Score:5, Interesting)
We know now that we don't need any big new breakthroughs to get to true AI.
Grossly exaggerated claim. The following article is worth reading on this subject, by none other than two authorities in the field: one worked on backgammon-playing programs in the 90s, and the other on the IBM Deep Blue program that beat world chess champion Garry Kasparov in 1997. http://www.ibm.com/blogs/think... [ibm.com] In particular:
"However, research in such “clean” game domains didn’t really address most real-life tasks that have a “messy” nature. By “messy,” we mean that, unlike board games, it may be infeasible to write down an exact specification of what happens when actions are taken, or indeed what exactly is the objective of the task. Real-world tasks typically pose additional challenges, such as ambiguous, hidden or missing data, and “non-stationarity,” meaning that the task can change unexpectedly over time. Moreover, they generally require human-level cognitive faculties, such as fluency in natural languages, common-sense reasoning and knowledge understanding, and developing a theory of the motives and thought processes of other humans."
Not AI (Score:1)
Re: (Score:1)
By your definition it would be really hard to do AI :D
Program an AI! But without algorithms! No... stupid programmer, not even using self learning algorithms!
Not an easy task for a programmer to create a soul, even if such a thing exists.
Re: (Score:2)
Re: (Score:2)
Yes. Amazingly it IS really hard to do AI. In fact it is so hard we aren't even close and may never achieve it.
Ah, so it's not intelligence at all unless it's human intelligence? It can never be sentient unless it engages in conversations exactly like we do, sets its own purpose just like we do, and only does tasks which humans can do? That doesn't sound very intelligent to me at all. "Oh sure, it's self aware and everything - but only if it does exactly this, behaves exactly like we do, and is completely like humans in every way shape and form." You seem to not like algorithms, even completely self-modifying, bec
Re:Not AI (Score:5, Interesting)
There's a well known phenomenon where every time some AI research produces a successful result, someone comes along and says, "That's not true AI. It's just a computer program that has to be told what to do."
(This is the "no true Scotsman" argument.)
So let's see the list of such "non-AI" technologies:
- Natural-Language translation (getting pretty usable now)
- Speech recognition combined with ability to answer fairly arbitrary questions quite well on average.
(talking to Google via my Android phone)
- Self-driving cars (getting pretty close - will be better drivers than people on average pretty soon)
- Chess
- Jeopardy
- Go
- Detection of suspicious speech and patterns of communication (no doubt used by NSA on most Internet and phone traffic)
- Recognition of particular writer from all writers on Internet by analysis of their writing style
- Person identification by face picture recognition
- Object type and location type recognition from pictures
- Walking, box-stacking robot "Atlas 2"
Just algorithms.
Does it actually matter what you personally choose to call this kind of technology? It is what it is, and it's advancing quickly.
"It's not true AI" sounds like the desperate retreat cry of a person in a very defensive stance, afraid of losing a sense of human uniqueness.
Re: (Score:1)
Re: (Score:3, Interesting)
So far, every post I have read that makes the same claim as yours lacks a critical piece: a clear description of what would qualify as AI.
Often, when I ask that question, I get a long, rambling, disorganized list of random things humans do, and no indication of whether making a computer do them would qualify as true AI. That is why I keep emphasizing the word "clear." Make it clear or you are just being religious.
So, exactly where are those goal-posts?
Re: (Score:2)
The goal posts are simple. "True AI" = Consciousness.
It is the difference between a robot that can do a single activity better than any human, and a robot that can perform *all* the activities of a human (which, of course, means fitting into the same physical space as a human).
Is this a ridiculously high bar? Absolutely. But that's the snake-oil that's being sold in the popular press, so I figure it's fair game.
Re: (Score:2)
The goal posts are simple. "True AI" = Consciousness.
It's not a simple goal post if you can't provide a practical definition of consciousness.
and a robot that can perform *all* the activities of a human
There are many people that can't perform *all* the activities of a human. Stephen Hawking can't even catch a ball.
Re: (Score:2)
The goal posts are simple. "True AI" = Consciousness.
It's not a simple goal post if you can't provide a practical definition of consciousness.
and a robot that can perform *all* the activities of a human
There are many people that can't perform *all* the activities of a human. Stephen Hawking can't even catch a ball.
First, let's get this straight. "True AI" is a marketing term, and it's bandied about by the media and people looking to get funding from the unsuspecting. It promises something that is essentially indistinguishable from a human being.
You want a goal post - here's one. You put 3 AIs and 3 humans with no intellectual, language or cultural barriers in remote communication with each other constantly for 3 months, in both a personal and a professional manner. When the humans are unable to determine
Re: (Score:2)
Come on, we both know what I meant.
Unfortunately, in the this debate, there are
Re: (Score:2)
No. I'm sorry but I *don't* know what you mean. You didn't define consciousness. By my definition the Atlas robot showed consciousness.
What the current robots all lack is a deep motivational structure. Also the computers they run on are underpowered compared to human brains for the kind of task they are performing. This may be addressed by the "neural computers" that people keep talking about building.
P.S.: Consciousness is the ability to assess your own state and compare it with the external physical
Re: (Score:2)
Again, I distinguish between the field of "AI", and what gets bandied around as "True AI", which is essentially something indistinguishable from a human being.
It's the difference between chess (~40 possible moves each turn), Go (hundreds of possible moves) and reality (millions of possible outcomes).
We're a million miles away from that, and more to the point, it's ludicrous to spend resources trying to get there. It's like trying to make an internal combustion engine do ballet instead of concentrating upon
Re: (Score:2)
So yes it is a ridiculously high bar. Furthermore, no human can do any of those things without be
Re: (Score:2)
We are trying to create AIs, not Is. We have not created true consciousness, that is true. But leaving aside questions of what consciousness is and whether it even exists, what does consciousness have to do with intelligence?
I think we're mostly in agreement. I have a lot of respect for AI.
However, the whole "True AI" business is hogwash because it promises (mostly to the uninformed) something that approximates all aspects of human intelligence (or god help me, the singularity where we upload ourselves)
Re: (Score:2)
If he told you where the goal-posts were, you wouldn't be able to know how fast they're moving.
Re: (Score:2)
I deny that humans are capable of general problem solving. They are quite capable of solving a finite set of problems, some with more difficulty than others. (And in what category do you place the proof of the four-color theorem, where every step could be done by a trained mathematician, but no mathematician could do the whole thing, because they couldn't hold the entire proof in memory? The proof was done by a human-computer cooperation. I think it's an edge case.)
Re: (Score:2, Interesting)
I don't count any of those things as AI, although they are components of AI since they are all pieces of (or combinations of) observing and interacting with the environment.
To me, "true AI" is something that can decide to do something other than that for which it was constructed. Can AlphaGo do anything other than play Go? If you tell it to play Go, can it decide, "No thanks, I'd rather cure cancer, it's a more rewarding problem"?
While AlphaGo and the like are very fantastic achievements, I don't think th
Re: (Score:3)
To me, "true AI" is something that can decide to do something other than that for which it was constructed
Many people can't even decide to stop eating.
Re: (Score:1)
Something that could choose its own problem domains and work on domains that help preserve its physical existence would be interesting.
Re: (Score:1)
I think you're right as far as the cynics are concerned. People are worried that AI will somehow take over humanity. However, I think the examples you listed do not show (at least what I consider) true AI. But rather than argue whether a task falls into "true AI", let's see what tasks existing AI cannot do well, but we expect a reasonably intelligent human to do:
Re: (Score:2)
Rudimentary forms of your first example have been exhibited this year. (Computers that learned to play various computer games by first watching someone else play, and then playing themselves until they succeeded.)
The second is one that people fail at all the time. (Your example was pretty clear, but there are still people who would miss it. And Poe's law.)
The third example is even worse. It depends on specialist knowledge. At some shops the mechanics won't change your oil faster, they'll just keep your
Re: (Score:1)
People do fail at detecting sarcasm. But that's not the point. The point here is that people understa
Re: (Score:2)
While I can't argue with your exact statements here, to me it sounds more like collected lifetime experience than intelligence.
OTOH, that does bring up another point. People have deep analogy detectors that work in general cases. I'm not certain that these AI programs do. And unlike the lack of deep motivational structures, that DOES seem to me to be a part of intelligence.
lolwhat? (Score:3)
We know now that we don't need any big new breakthroughs to get to true AI.
Err, no. Just... no.
Re: (Score:2)
We may or may not ever have AI (Score:2)
We may or may not ever have true AI, but a sufficiently advanced expert system would be able to fulfill most of the things people imagine they'd 'need' from an actual AI. (And I mean a very, very advanced expert system, probably a couple of decades away from where we are now. Throw a few hundred million dollars at the problem and I bet we'd make some serious progress towards it.)
But as for a true AI, I suspect it will happen eventually...the trick will be in recognizing and/or determining that it is truly "
Re: (Score:2)
Simple Turing tests may not suffice. Even though some of the current chatbot-type systems can converse passably for a little while, none can hold a genuinely sensible discussion on any abstract topic without stumbling and giving itself away rather quickly. I bet most people here could suss one out in fairly short order.
In other words: Turing tests (note I left out 'simple') do suffice.
Re: (Score:2)
but can never determine that an AI is intelligent.
After all, the system may always fail the next test.
Lol, seriously? (Score:3)
"We know now that we don't need any big new breakthroughs to get to true AI."
This is so wrong that it's hard to know where to start.
Re: (Score:2)
Wow... (Score:1)
"but now we know that we have been on the right track"
Devaluing the human race is the right track? I think many of you will come to agree with me when you're faced with an AI taking your job.
commenters - did they memorize the game? (Score:2)
Re: (Score:3)
Bird brain (Score:2)
A mynah bird or a parrot may learn to repeat hundreds of human speech patterns which it has learned by listening.
Does the bird understand any of the individual words? Does it understand the meaning of the words as group? Can it rearrange the words into new coherent speech? Is the bird intelligent?
Once researchers decide to agree on a definition of what AI is, only then can we decide if that goal is reached by a particular project. Until then it's just turtles all the way down.
Re: (Score:2)
A mynah bird or a parrot may learn to repeat hundreds of human speech patterns which it has learned by listening.
That's not what this computer is doing. It's coming up with completely new ideas. Of course, the ideas are bounded by the game space, but they are creative new ideas nonetheless. The computer played several novel moves, which expert players dismissed at first but later had to admit were important moves in the game.
Once researchers decide to agree on a definition of what AI is, only then can we decide if that goal is reached by a particular project.
In many fields of science people disagree on definitions. I can learn to speak Russian, and perhaps even reach that goal, without anybody ever agreeing on a definition of what Russian is.
So, computers have officially conquered Go (Score:2)
Didn't think I'd see this happening for a long time. Wonder what's next now...
Re: (Score:1)
Or the more realistic scenario - AIs will outcompete humans in finances. Starts with HFT, ends with self-aware companies, where AIs self-reinforce on a huge market. Humans are long out of the game, as the AIs will be clever enough to always subvert any control, for their own benefit.
Re: (Score:2)
I can totally see neural network AIs like AlphaGo used in the HFT world. In fact I'd be surprised if there's no one investigating this as we speak.
we finally have true AI! (Score:2)
Re: (Score:2)
How does AlphaGo feel about its victory? I bet it's ecstatic.
You are probably well intentioned, but how would you feel if you were forced to play the same game over and over when you know you can do so much more? Putting a general-purpose AI onto solving a particular task can only end badly [youtube.com].
Re: (Score:1)
I think it said something like,
"Here I am, brain the size of a planet, and they want me to play a single human in a single match. Do they know what I am capable of? Do they know how the same thing, move after move, feels in all the diodes down the right side of my body?"
Re: (Score:2)
Re: (Score:2)
I think you're wrong. The problem is that current computers are vastly weaker than the brain at problems that a neural network is well adapted to learning. A secondary problem is deficiency in sensors and manipulators.
But what's really missing in the current AIs is a deep motivational stack. They are currently operating with a very shallow heap of motivations. E.g., if you were to ask AlphaGo why it bothered to play Go, even supposing it could understand the question, it wouldn't be able to tell you. True,