Computer Beats Pro At US Go Congress
Bob Hearn writes "I was in attendance at the US Go Congress match yesterday where history was made: the go program MoGo, running on an 800-core supercomputer, beat 8-dan professional go player Myungwan Kim in a 9-stone handicap game. Most in the audience were shocked at the computer's performance; it was naturally assumed that the computer would be slaughtered, as usual. Go is often seen as the last bastion of human superiority over computers in the domain of board games. But if Moore's law continues to hold up, today's result suggests that the days of human superiority may be numbered." Read below for more details in Bob's account of the match.
Computers are still a long way from beating the best human players in an even game; nevertheless, today's performance represents a huge stride forward. In the last such high-profile matchup, in 1997, Janice Kim (then 1-dan professional) beat then-champion program Handtalk with a 25-stone handicap. In fact, most of the improvement in the level of computer-go play has happened in just the past few years. Today's top programs, including MoGo, use a Monte Carlo approach: they simulate thousands of random continuations per second from the current position to the end of the game, accumulating statistics in a tree on which moves lead to wins most often. One of the strengths of this approach is that it is highly amenable to parallelization. Thus, today's 800-core incarnation of MoGo is by far the strongest go computer that has yet existed. After the game Myungwan Kim estimated the strength of MoGo as 'two or maybe three dan,' corresponding to a reasonably strong amateur player. (Professional dan ranks are at a much higher level than amateur dan ranks.)
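The Monte Carlo idea described above can be sketched in miniature. This is a hypothetical toy, not MoGo's actual engine: instead of go, the "game" here is a trivial take-1-or-2-stones game, but the move-selection logic is the same in spirit — run many random playouts per candidate move and keep statistics on which moves win most often.

```python
import random

random.seed(0)  # deterministic for demonstration

def playout(stones, to_move):
    """Finish the game with uniformly random moves.

    Players 0 and 1 alternately remove 1 or 2 stones; whoever
    takes the last stone wins.  Returns the winner.
    """
    last_mover = 1 - to_move
    while stones > 0:
        stones -= min(random.choice([1, 2]), stones)
        last_mover = to_move
        to_move = 1 - to_move
    return last_mover

def monte_carlo_move(stones, player, trials=2000):
    """Pick the move whose random playouts win most often."""
    win_rate = {}
    for take in (1, 2):
        if take > stones:
            continue
        wins = sum(
            playout(stones - take, 1 - player) == player
            for _ in range(trials)
        )
        win_rate[take] = wins / trials
    return max(win_rate, key=win_rate.get)

# From 4 stones, taking 1 leaves the opponent a lost position (3),
# and the playout statistics discover this without any game theory.
best = monte_carlo_move(4, 0)
print(best)  # -> 1
```

Real programs like MoGo organize these statistics in a tree and bias the playouts, but even this naive version shows how pure win counting can find good moves with no built-in strategy.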
Beating nerds at their own game? (Score:2, Funny)
That's impossible! Even for a computer!
Re:Beating nerds at their own game? (Score:5, Insightful)
No, it's not - the computer is running an algorithm designed by at least one nerd. By designing that algorithm, it is not the computer but the programmer who beat the other player, using his intelligence to write a program that plays for him.
As long as computers cannot think for themselves, no computer will beat a human player, but a human-designed algorithm will. Humans beat humans (using technology).
Re:Beating nerds at their own game? (Score:5, Insightful)
Up until you use genetic programming to design the algorithm that wins.
Unless you believe that the creator of the human race is 100% responsible for all human action, of course.
Re: (Score:3, Insightful)
With a 9 stone handicap! (Score:5, Insightful)
Well, while I see your point in theory, sometimes that's harder than it sounds. And in this case it just didn't happen. The computer did not, in fact, play at master level at all.
I think what everyone seems to miss is that the computer was given a 9 stone head-start there.
That's not an exact equivalent, but it's close enough for a rough analogy: it's like being allowed to move 9 times in a chess game before the opponent even starts. So with that huge advantage, your program managed to beat a chess master. Yay. Big f-ing deal.
No, seriously. A 9 move advantage is _immense_. The first moves are where you stake your claims and influence over whole quarters of the board. If you look at the start of that game, white from the start had to challenge relatively strong positions of the computer. (For that stage of the game.)
At any rate, at professional levels ("dan"), it's played with 1 handicap stone for every 3 ranks! So a 2 dan player would only get 2 stones head start against the master in TFA. The computer got 9. Assuming that the game was relatively even matched, that would put the computer's skill barely above the line between casual player and intermediate player.
To give a chess comparison again, it barely ranks above winning a chess game against your high school's prom queen. Ok, ok, a little higher than that, but not by much.
Mind you, it's still an achievement that they managed to program it to even play at intermediate level. Kudos and all that. But let's not get too carried away with the "OMG, it beat a go master" line of thought. We can celebrate that when it beats one in an even game. We're still _far_ from that.
Re:With a 9 stone handicap! (Score:5, Informative)
At any rate, at professional levels ("dan"), it's played with 1 handicap stone for every 3 ranks! So a 2 dan player would only get 2 stones head start against the master in TFA. The computer got 9. Assuming that the game was relatively even matched, that would put the computer's skill barely above the line between casual player and intermediate player.
Mmm. I think you're conflating amateur dan ranks and professional dan ranks. Amateurs are supposed to start at about 30-kyu, then rise up to 1-kyu, after which they enter the amateur dan ranks. 7-dan is usually the highest rank given to an amateur.
A weak or new professional would be ranked at 1-dan, but this is on the professional scale. A 1-dan pro is probably better than a 7-dan amateur.
Computer Beats Pro at a go for US Congress? (Score:5, Funny)
Breaking News (Score:5, Insightful)
Re: (Score:3, Informative)
Moore's Law (Score:4, Interesting)
Even if Moore's law were to continue (which I seriously doubt), the combinatorial complexity of Go grows drastically faster than doubling transistors/performance every 18 months could hope to cope with.
Calculating the next N possible moves (no matter how smartly you cut-off) is simply not the answer in Go, IMHO. Seeing how ladders and other "inevitable" developments can go on for dozens of moves makes a look-ahead impractical.
I haven't read TFA, but in the long run beating higher dan level players will require recognition of strategic constellations and differentiating them from mere clusters of stones.
I read the title... (Score:2, Funny)
misleading title (Score:5, Insightful)
Re:misleading title (Score:5, Informative)
Re: (Score:3, Insightful)
The thing about a nine+ stone handicap is, as the number of handicap stones goes down, the difficulty increases disproportionately for the weaker player.
There's a much bigger difference moving from four stones to two than there is moving from nine to seven.
Nine stones is still a pretty enormous handicap (the maximum normally allowed), so Moore's law or not, I doubt we'll see a computer beat a pro in an even game anytime soon.
Bob (Score:5, Funny)
Read below for more details in Bob's account of the match.
Bob's account of the match:
The guy placed a stone, and we were like 'Whoa!', and then the computer placed a stone, and I was like 'No way!', and then the guy placed another stone, and the crowd was all 'Oh! WOW!' and they continued to take alternating turns like that.
It was Go-riffic!
Go the last bastion? Hardly (Score:4, Interesting)
Somebody wake me up when a computer can reliably beat a human at Candy Land.
Re: (Score:2)
Re:Go the last bastion? Hardly (Score:4, Funny)
Some might tell you the game has no strategy at all; that without any decisions to be made, it's merely an elaborate coin flip. However, the winning strategy is so simple, both my children mastered it under the age of four:
On your turn, pick up someone else's piece and chew on it a bit. Then draw a card and try to feed it to the dog. Become distracted by the dog and chase him across the room. When your parent/opponent suggests packing up the game since you don't seem to want to play, scream "No!!!" as you careen across the room, trip over the board, and (crucially) knock over the deck. Ignore the fact that your parent/opponent takes a bit long to put the deck back together, and that their shuffling is somewhat irregular.
Draw your next card, which will be the ice cream cone, catapulting you into the lead. You will win 2-3 turns after this, depending on your parent/opponent's mood and/or efficiency at finding the appropriate double-move cards.
800-Core? (Score:4, Insightful)
Re: (Score:2)
Deep Blue [wikipedia.org] (the computer playing chess with Kasparov) was the same: custom-made hardware and a lot of computing power, used exclusively for this.
Re: (Score:2)
Having such an absurdly powerful machine lets you do *anything* with your program. It's useful for testing and thinking about theory if you can largely ignore the "processing power" variable.
Re:800-Core? (Score:5, Insightful)
The history of computer chess is the history of building brute force engines and then refining them by identifying where processing power is successful at winning. Brute force allows you to look at a lot of moves, but how the program scores the outcomes of those moves as desirable is a different matter that has nothing to do with processing power (e.g., scoring for area of control).
What humans seem to do well is radically pruning the search tree to the small number of viable lines of play, and spending our limited processing power analyzing those in depth. Once you've got a computer searching a large search space in depth, you can work on pruning it by teaching it to recognize useless lines.
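The pruning idea described above can be sketched with the textbook alpha-beta algorithm. This is a minimal illustration over a hand-built game tree (nested lists of leaf scores), not an actual chess engine:

```python
import math

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    """Minimax with alpha-beta cutoffs over a tree of nested lists.

    Leaves are ints (scores from the maximizer's point of view);
    interior nodes are lists of children.
    """
    if isinstance(node, int):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this line,
                break          # so prune the remaining siblings
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

# The maximizer picks between two replies the minimizer controls:
# min(3, 5) = 3 versus min(2, 9) = 2.  While searching the second
# subtree, seeing the 2 already proves it can't beat 3, so the 9
# is never examined -- a "useless line" pruned away.
print(alphabeta([[3, 5], [2, 9]]))  # -> 3
```

Chess engines add far better move ordering and evaluation on top, but the cutoff mechanism is exactly this.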
Re: (Score:3, Informative)
In Go, you can move nearly anywhere at any time. This means that the number of possible board states grows wildly faster than in chess (to pick a game with well-known AI techniques). A brute force approach to this problem is like trying to crack a rand() sequence by brute force--the search space is far too large.
Ease (Score:5, Interesting)
That's the thing that's always fascinated me about Go. It is essentially an extremely simple game gone terribly, terribly wrong. It's got about as many rules as Yahtzee, but is played on a 19x19 board. Compare that to Chess which has a pretty large (8x8) board, but has far more rules.
I'm no expert on computer game programming, but I think that's where some of the difficulty comes into play when building these guys. Chess has a nearly unlimited number of outcomes, but having those sets of rules helps. For example, of the 32 pieces, 16 of them are essentially limited to one space in front of them. In Go, however, the lack of rules means that you're left with the simple mathematical monstrosity of an enormous board.
The book The Immortal Game [amazon.com], aside from being an excellent read, goes into depth about computer programs playing chess, as well as Go and Checkers.
Re:Ease (Score:5, Interesting)
Re:Endgame Databases (Score:3, Interesting)
This is indeed an advantage that chess enjoys, however it's more of a second-order effect. In chess the endgame database doesn't really come into play in a significant way until towards the very end of the game, since for the first part of the game the terminal nodes can be adequately evaluated without it even if they are "endgame" nodes from the point of view of the remaining material. Even during the endgame phase the endgame databases often don't come into play in a significant way because the endgame
Moore's Law? Irrelevant (Score:5, Insightful)
Re:Moore's Law? Irrelevant (Score:5, Interesting)
The real story here is that a person or group of programmers have designed a better algorithm for playing the game.
Not really. TFS clearly states the programmers used a Monte Carlo [wikipedia.org] algorithm. To compare, the most common chess algorithm is Alpha-Beta [wikipedia.org] search. The computer takes a board position, enumerates the possible moves, and prunes the tree if it finds a move worse than the best move it's found so far. The only "clever" algorithm is evaluating the positions (rather difficult in chess). Go, OTOH, has too many possible moves ((19 ^ 2)! / (19 ^ 2 - Moves)!), or 16 702 719 120 (almost seventeen billion) possible boards after 4 moves. So, rather than explore the entire tree, they use Monte Carlo randomization to get a good average move, instead of the best possible move. Now, the "clever" part of the algorithm goes away, and we're left with a massively parallelizable dart board. The more darts, the better a chance to get a good shot.
It only gives it the potential to be much faster (and consume more power).
The first point is entirely accurate, and the basis of modernization in legacy systems (what Joel on Software calls a Free Lunch). The second point is not valid. Processors have not gotten physically larger, which means the transistors have gotten smaller. If the transistors are smaller, there is less distance between them, which means it takes less energy to move electrons between the transistors, more than making up for the power to run those extra transistors. That does make more heat, which is where the power savings go - into the cooling unit. This is also why power supplies in the past few decades really haven't changed their input or output specifications - your computer uses the same amount of power it always has, just more efficiently.
Having a greater number of transistors on a chip does not make a processor "smarter" or capable of doing something a less populated processor can.
That's for the emergent intelligence debate. Kurzweil of course disagrees strongly, whereas others (Joel on software) use that argument as a proof by contradiction advocating good software development as opposed to waiting for your computer to improve. I personally am in the second camp, though only because it makes more sense to write good code today than use Moore's law as a crutch for sloppy programming (see Vista).
Oh, for Christ sake... (Score:5, Insightful)
...just like what happened with Kasparov, the headline is completely backwards.
How about this one: Computer needed 800 processors at 4.7 GHz and 15 teraflops to beat a professional Go player.
We're talking about a computer needing gargantuan processing power to beat a human and we are impressed at the computer? Seriously?
If I am to be impressed by something, it's definitely by this Myungwan Kim guy.
Re: (Score:3, Insightful)
800 processors? That computer is a toy compared to this Kim guy's brain, when doing a massively parallel task like Go.
That computer is seriously impressive.
PAY ATTENTION: Go is not like other games... (Score:5, Informative)
The difference between Go and other games you may know where computers have surpassed the best human players is the immensity of the search space. On a standard 19x19 board there are 361 spaces, all of which are in play at every move, often including spaces currently occupied but not yet safe from capture. Therefore, within a handful of plies a computer player may be looking at trillions of possible branches. Go is NUMEROUS orders of magnitude more complex than chess from an algorithmic viewpoint.
p.s. Those of you saying "Go what?" are merely revealing your own ignorance. There are at least as many Go players in the world as chess players; most of them are in Asia.
Ko (Score:3, Interesting)
Bah! (Score:4, Interesting)
What really frustrates me about this is that it uses a Monte Carlo method.
As is mentioned in the body, essentially it plays a shitload of random moves out to some cutoff point and tries to determine which moves contribute to a winning end state more often than the other ones.
Basically, the only thing the stupid algorithm knows about Go is the simple rules and how to score the board. It knows nothing of strategy, tactics, strong shapes, living shapes, dead shapes, etc. Of course, it may be doing some sophisticated analysis to determine fruitful branches so as to not waste time on bad ones, but that doesn't defeat my point; it just means that with more computing power, you don't have to be so choosy. Knowledge about the complexities of the game is not required for the machine to win with this method, and that makes me call bullshit.
I'll be excited when a computer beats a 9-dan player without using a probabilistic method to choose a winning path through the search space. I want to see a program that plays like a human, i.e. looks at the board and determines which groups are alive, which are dead, where it is fruitful to play and where it is a waste. In other words, I'll be impressed when someone is able to pin down what makes one shape strong and another weak precisely enough to put it in an algorithm. Otherwise, its just a cheap probability crunching trick.
Re: (Score:3, Insightful)
Knowledge about the complexities of the game is not required for the machine to win with this method, and that makes me call bullshit.
Actually, this is a good thing. Before Monte Carlo methods started to dominate Go (about 2 years ago), all the top programs were written with human-coded expert knowledge. Progress was extremely slow and didn't keep up with advances in computer hardware. It took decades of labor to achieve a moderate success, which was all surpassed in a couple of years by MoGo.
Furthermore, hardly any advances were being made in Artificial Intelligence, since most of the work involved adding specialized domain knowle
Not impressed... (Score:5, Funny)
I'll be impressed when they can build a computer that can play Mornington Crescent at pro level.
A little bit more explanation for non-Goers (Score:5, Interesting)
First, I see a lot of people assuming that there now exists a computer program which can beat a human, but it is still far from beating a strong human in an even game. In go, a nine-stone handicap is somewhat equivalent to playing chess without a queen (or maybe even worse).
The reason computers aren't good at go is not only because of its vast number of possible moves. Another serious reason is because a good/bad move isn't immediately evident, and in many cases, can be figured out only after about 100 moves. Good luck reading 100 moves ahead using a computer. Human players are very good at making educated guesses.
Additionally, when playing chess, one small mistake can be disastrous. However, in Go, small mistakes simply add up, and you simply lose points. In other words, go is generally forgiving of small mistakes, which gives an additional advantage to human players. (Of course, catastrophic mistakes are equally devastating in go games, too, but small mistakes are generally less serious, and there usually exists some way to do damage control.) Go is a strategic game - you can lose local battles, but still win the war. In chess, it isn't so easy to win the game once you lose a local battle.
Analysis vs search (Score:4, Interesting)
It basically comes down to whether you're analyzing the position or searching the position. Ask a chess player why he made some move and it's not "because I went through all the possible moves and materially it was the best"; it's usually concepts like creating a good pawn structure, developing pieces, pinning down the opponent, threatening key areas of the board, setting up for a knight fork or a mate threat, and so on. As in "I don't know exactly where it's going, but I think my position now is better than the one I had." Computer chess programs don't do any of that; they just blindly test moves, measuring material. Then it's all about giving the computer too many possibilities. If by any chance we should evolve computing power to beat a human on a 19x19 board, it'd be trivial to make, say, a 29x29 board, and the human would play almost the same while the computer would drop horribly in skill. All quantity and no understanding of why any particular move makes sense over any other.
MoGo Algorithm (Score:3, Interesting)
I'm not impressed (Score:3, Interesting)
This result says nothing about the program's 'intelligence'. It doesn't even say much about its go proficiency. Although I'm sure that the algorithms used are very sophisticated, the program won only by virtue of the sheer computing power of the hardware it was running on. Using an 800-core supercomputer, it managed to beat an 8-dan pro in a 9 stone handicap game. Perhaps using an 8000-core supercomputer it would manage to beat him in an even game. Or, with 80000 cores, beat him while giving him 9 stones. But the program is still as dumb running on 80000 cores as it is running on 800 cores, or on one core for that matter.
The human brain is not such an amazing tool because of cunning algorithms or huge computing power, but because of the sheer size of the network of simple elements it consists of. There's a crucial difference in how algorithms and neurons scale. If you take some algorithm and run it on a computer with a vast memory and computing power, it's still the same dumb algorithm, no matter how many gigabytes and teraflops you throw at it. But if you take a neuron (either a biological one or a computer simulation) and start connecting it to other neurons, then the larger the network, the more it is capable of. At some point you might even say it's intelligent :P.
I think it's highly doubtful that AI research will ever come up with something better than simply simulating a neural network. Intelligence is not something that can be contained within a single algorithm, no matter how ingenious and how powerful hardware it's run on. It's an emergent property of huge networks of simple elements. An algorithm can at best mimic some aspects of intelligence, but there's never any real thought behind it.
Re:I'm not impressed (Score:5, Insightful)
You seem to be saying the computer and the human both succeed due to the "the sheer size of the network of simple elements it consists of". Therefore you're not impressed by one of them?
"but there's never any real thought behind it."
Why do you assume there is thought behind you? If the machine plays Go, it plays Go. I think it's highly doubtful AI research will ever do anything at all in the judgment of critics who insist it must not only achieve things, but achieve them in ways that are not defined or understood.
not a serious game (Score:3, Insightful)
Re:When are they going to get it? (Score:5, Insightful)
Re:When are they going to get it? (Score:5, Interesting)
I don't think you understand the complexity of the game if you're making statements like that.
That's an interesting thing because it's starting to look like we aren't solving these sorts of problems in the simplest way possible. A human going flat out is running on 200W maximum. That supercomputer is probably using 200W per processor (when you take into account all the additional equipment needed for memory, switching, cooling etc) and not even playing as well as the human. What that says to me is that we probably aren't approaching the problem correctly.
I suspect that we will need to develop a completely different type of hardware in order for machines to be able to perform well at this type of task. Humans are poor at what computers are good at and vice versa and the two take completely different approaches to processing - perhaps that's a hint to look elsewhere.
Re:When are they going to get it? (Score:5, Interesting)
The brain has a lot of engineering that puts microchips to shame. If you were to pack transistors as densely and with as little cooling as human neurons, they would melt. On top of this, of course, the amount of processing a neuron can do vastly exceeds that of a transistor. Modeling even a single human brain cell is a major task for a computer.
Furthermore, the connectivity of neurons is much greater than that of electronic components; each one is connected to thousands of other neurons, nearby and far away, in a way impossible with wires.
A lot of guesses about the equivalent FLOPS of a human brain have centered around naive counting of cells and comparing that with the rather slow switching speed (about 10Hz IIRC). Some estimates came out at about 1 TeraFLOPS but that seems ridiculously small in light of what humans can do that computers still struggle with.
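That kind of back-of-the-envelope figure is easy to reconstruct. The numbers below are the naive assumptions the comment describes (a cell count times a slow switching speed), not established measurements:

```python
neurons = 1e11        # rough order-of-magnitude neuron count (assumption)
firing_rate_hz = 10   # the "slow switching speed" cited above (assumption)

# Naive estimate: one "operation" per neuron per firing, with no
# credit for synaptic fan-out or analog processing within a cell.
ops_per_second = neurons * firing_rate_hz
print(ops_per_second / 1e12, "tera-ops/s")  # -> 1.0, matching the ~1 TFLOPS guess
```

Counting each of a neuron's thousands of synapses as a separate operation would inflate this by three or four orders of magnitude, which is one reason such estimates vary so wildly.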
Re:When are they going to get it? (Score:5, Insightful)
Re:When are they going to get it? (Score:4, Funny)
Comparing a transistor to a neuron is like comparing an apple to the ISS. Sure, they both have a skin that keeps all the goodies inside, but that's about the limit of the similarity.
I'm not quite sure I follow - can you make a car analogy?
Re: (Score:3, Funny)
HTH.
Re:When are they going to get it? (Score:5, Insightful)
I think you are approaching the problem wrong. I can't, ever, calculate as fast as a computer can. Forget a modern computer; I can't calculate as fast as the oldest computers out there (and I'm betting you can't either). I have no doubt about your inability to beat computers at raw calculation speeds either (and if you could, more processors means you don't).
The problem is, when I play chess (or the couple of times I've tried go), I don't sit there and randomly think of each and every move out there. In the beginning of the game I don't think about taking my queen out and making 4 moves with her in a row and returning her to where she started.
According to the description, that is the equivalent of what this computer is doing: Monte Carlo simulations of the game from the current position, looking for the move with the most winning nodes. But that is worthless if you can't build logic to ignore useless moves.
I am amazed at the number of possibilities a computer can look at in a second and astounded that something has the excess computational ability and memory to waste time considering such moves.
Strong computer chess programs stopped going at the game randomly and learned to score each position as it stood for strength. Go computers still can't do that because people don't fully understand why one position is stronger than another (not always, but almost never in an algorithmic way). The need for such massive computational power simply implies we don't understand the dynamics of the game in a rational way (rather, we understand it on a more creative level).
Re:When are they going to get it? (Score:5, Interesting)
Remember, this isn't a true AI we're talking about, if that was the goal, yes a pattern matching, abstraction and extrapolation based system closer to a human brain would be appropriate, but what we have here is a case of machine learning within a specialized domain. The boundaries of the system are well defined, even if the content is not, and as such taking a generic approach is wasteful.
Re: (Score:3, Interesting)
I think you are approaching the problem wrong. I can't, ever, calculate as fast as a computer can. Forget a modern computer; I can't calculate as fast as the oldest computers out there (and I'm betting you can't either). I have no doubt about your inability to beat computers at raw calculation speeds either (and if you could, more processors means you don't).
The brain (and even other animal brains) is capable of doing and learning specific operations. That means two things: much of your brain isn't devoted to your consciousness and what "you" are thinking. A lot of it is devoted to operations that you need to survive and operate in the world. For example, let's take vision. Your eyeballs are pretty stupid; they just channel the light down to receptors at the back of your eye. All of the intelligence starts right after the receptor cells detect the
Wrong metric (Score:3, Funny)
The human brain obviously isn't doing floating point operations. Otherwise there wouldn't be nearly as many tip calculators.
Re: (Score:3, Interesting)
A lot of guesses about the equivalent FLOPS of a human brain have centered around naive counting of cells and comparing that with the rather slow switching speed (about 10Hz IIRC). Some estimates came out at about 1 TeraFLOPS but that seems ridiculously small in light of what humans can do that computers still struggle with.
If you want to simulate a neuron down to the quantum level with transistors, it will take quite a few. Is this necessary? Nobody knows. Is it fundamental to the computation taking place that a neuron is a leaky integrator, or that the synaptic gap transmits signals chemically rather than electronically? Is a signal encoded by individual spike timings, or by rate, or by some amalgam? Sure, it takes many transistors to simulate all of those things, but the underlying computation may be much simpler. For
Re: (Score:3, Interesting)
Using appropriate CMOS technology, scaling speed (and voltage) down to brain rates, would change power dissipation by at least 10^-8. Melting is not a problem.
The brain's advantage is that it is reconfigurable in a way that semiconductors are not. Over years, a go player will rebuild a portion of his brain into a partially optimized machine
Re: (Score:3, Funny)
A human going flat out is running on 200W maximum.
But they're made of....meat...
Re:When are they going to get it? (Score:4, Interesting)
Actually, you're pretty much correct. Intuition is something a successful AI (and a successful human Go player) will require, and while we can model it on a computer, most people haven't thought of doing so. Most systems are either based on symbolic logic, statistics, or reinforcement learning, all of which rely on deductive A->B style rules. You can build an intelligent system on that sort of reasoning, but not ONLY on that sort of reasoning (besides, that's not the way that humans normally think either).
I suspect that what we need is something more akin to "clustering" of concepts, in which retrieval of one concept invokes others that are nearby in "thought-space". The system should then try to merge the clusters of different concepts it thinks of, resulting in the sort of fusion of ideas that characterizes intuition (in other words, the clusters are constantly growing). Since there is such a thing as statistical clustering, that may form a good foundation. Couple it with deductive logic and you should actually get a very powerful system.
I also suspect that some of the recent manifold learning techniques, particularly those involving kernel PCA, may play a part, as they replicate the concept of abstraction, another component of intuition, fairly well using statistics. Unfortunately, they tend to be computationally intense.
There are many steps that would need to be involved, none of them trivial, but no one said AI was easy:
1. Sense data.
2. Collect that data in a manageable form (categorize it using an ontology, maybe?)
3. Retrieve the x most recently accessed clusters pertaining to other properties of the concept you are reasoning about, as well as the cluster corresponding to the property being reasoned about itself (remembering everything is intractable, so the agent will primarily consider what it has been "mulling over" recently). For example, if we are trying to figure out whether a strawberry is a fruit, we would need to pull in clusters corresponding to "red things" and "seeded things" as well as the cluster corresponding to "fruits".
4. Once a decision is made, grow the clusters. For example, if we decide that strawberries are fruits, we would look at other properties of strawberries and extend the "fruit" cluster to other things that have these properties. We might end up with the nonsymbolic equivalent of "all red objects with seeds are fruit" from doing that.
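Steps 3 and 4 above might be sketched like this. Everything here is invented for illustration (the concepts, their properties, and the 0.5 overlap threshold are all made up), so treat it as a toy model of cluster growth, not a proposed system:

```python
# Clusters of concepts, and each concept's observed properties.
clusters = {
    "fruit": {"apple", "banana"},
    "red things": {"apple", "fire truck"},
}
properties = {
    "apple": {"red", "seeded", "sweet"},
    "banana": {"yellow", "sweet"},
    "fire truck": {"red", "loud"},
    "strawberry": {"red", "seeded", "sweet"},
}

def cluster_profile(name):
    """Properties shared by every current member of a cluster."""
    return set.intersection(*(properties[m] for m in clusters[name]))

def maybe_admit(cluster, candidate, threshold=0.5):
    """Grow the cluster if the candidate shares enough of its profile."""
    profile = cluster_profile(cluster)
    overlap = len(profile & properties[candidate]) / max(len(profile), 1)
    if overlap >= threshold:
        clusters[cluster].add(candidate)
    return overlap

# Step 3/4 in action: reasoning about whether a strawberry is a fruit
# grows the "fruit" cluster to include it.
maybe_admit("fruit", "strawberry")
print("strawberry" in clusters["fruit"])  # -> True
```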
What I've described is an attempt to model what Jung calls "extroverted intuition" - intuition concerned with external concepts. Attempting to model introverted intuition - intuition concerned with internal models and ideas - is much harder, as it would require clustering the properties of the model itself, forming a "relation between relations" - a way that ideas are connected in the agent's mental model.
But that's for general AI, which I'm still not completely sure we're ready for anyway. If you just want a stronger Go player, wait just a bit longer and it'll be brute forced.
Re: (Score:3, Informative)
Re: (Score:2, Interesting)
Re:When are they going to get it? (Score:4, Informative)
Re:When are they going to get it? (Score:4, Insightful)
A good Go method is one that wins - not (necessarily) one that wins the way you would play.
Re:When are they going to get it? (Score:5, Interesting)
Read a bit on Go algorithms. This one isn't using a dumb search. If it were, a Go program playing this well wouldn't be finished calculating its first move yet... No, this one must have used an extremely advanced search algorithm, very smartly removing unlikely branches of the search.
Unfortunately we're stuck with algorithmically searching for the right move until we have quantum computers large enough to run a go algorithm and do it all at once. That may be impossible in our universe, due to practical limitations imposed by the laws of physics (just as building a space elevator on Jupiter may be impossible due to the practical limitations of physically possible materials in our universe).
Re: (Score:2)
Yeah because there's nothing massively parallel about biological intelligence...
Re: (Score:3, Insightful)
You've got it backwards. Make it work, then make it efficient. How the hell can you make an efficient and correct algorithm if you don't know how to make a correct algorithm?
It's the old software engineering saw on optimization: "To amateurs: Don't do it. To experts: Don't do it yet."
Re:When are they going to get it? (Score:5, Insightful)
Less than a hundred years ago scientists believed the brain worked a bit like a mechanical device and analogies were made to levers, cogs etc. The brain has always been likened to the most advanced piece of technology available at the time, and it's always turned out that it's way more advanced. I see no reason why that shouldn't be the case with all the silly computer analogies people come up with these days...
Re:When are they going to get it? (Score:4, Insightful)
Because today we actually know a lot more about how neurons work, compared to a hundred years ago? Just because a few clueless people made a wrong analogy a hundred years ago doesn't mean we should generalise that and say that all analogies involving the brain will be wrong (that makes no sense at all). Obviously the brain can't be infinitely complex, so if we keep refining our models of how the brain works we're bound to get it right at some point in the future.
Re:When are they going to get it? (Score:5, Insightful)
Fine. Your brain consists of a finite number of neurons, each of which can make connections with a finite number of other neurons. The way each neuron works is defined by a finite set of rules in your DNA, and since each neuron is made out of a finite number of molecules, the number of states it can be in is finite.
Now all the above numbers will be extremely large, and will overestimate the complexity of the brain a LOT, since large parts of your neurons will be irrelevant to the workings of your brain, and so are many neurons and their connections (I lose tens of thousands of neurons each day without getting significantly more stupid than I already am). Still, they are all finite.
Please explain how the thing could be infinitely complex, and why it would be?
Re: (Score:3, Interesting)
1) You assume that the number of states of a neuron is finite. Neurons are analog and not digital, so this is a false assumption.
2) Related to the above, neurons are continually exchanging chemicals through their walls. The level of excitation of a neuron is not a single number, but a series of traveling waves. These depend on its inputs and also the chemical makeup in the local area around the axon. Again, infinite variation is quite possible.
3) No neurons are irrelevant to the workings of the brain.
Re:When are they going to get it? (Score:4, Insightful)
1) You assume that the number of states of a neuron is finite. Neurons are analog and not digital, so this is a false assumption.
Neurons act by pumping finite numbers of predictably charged molecules through membranes to build up or reduce potential differences across those membranes. Other chemical reactions vary the balance of proportions of molecules in different areas of the neuron, making this action harder or easier. The number of such molecules available to each neuron is large but finite. Therefore the number of possible states of a neuron is finite.
2) Related to the above, neurons are continually exchanging chemicals through their walls. The level of excitation of a neuron is not a single number, but a series of traveling waves. These depend on its inputs and also the chemical makeup in the local area around the axon. Again, infinite variation is quite possible.
No, it isn't. There are a finite number of points where the various chemicals involved in neural activity can cross the walls, and again each of these molecules represents a finite and quantifiable change to the behaviour of the neuron. The number of possible variations is extraordinarily high, but not infinite.
3) No neurons are irrelevant to the workings of the brain. Every neuron lost causes minute, possibly undetectable changes in thought patterns and personality.
If it is undetectable, it is irrelevant. Yes, there must logically be some maximal set of neurons that could be lost from a particular brain before a measurable change would occur, but that set is not empty. It is theoretically possible to reduce the complexity of the problem by removing this set (or a larger one, given that some maximum allowable change can be defined).
Re:When are they going to get it? (Score:4, Informative)
I think this is best explainable by analogy. The nematode (C. elegans) is a worm that is about 1 millimeter in length. It has 302 neurons and about 7000 synapses. These are mapped out, its genome is known, it is a very easy species to study. Yet its brain is barely understood in practical terms, e.g. why it would move left as opposed to right.
Now look at the human brain, which we believe has 100 billion neurons, and even more synapses connecting those neurons together.
If we don't understand the nematode with its 302 neurons, why would we expect to understand the complexity of the human brain with its 100 billion? Especially since some scientists believe evolution has helped design the brain's architecture so that it extracts a large amount of efficiency from those neurons.
Re:Moore's Law (Score:4, Informative)
Re:Moore's Law (Score:5, Informative)
Re:Moore's Law (Score:5, Informative)
Go is an open game; it has a few rules and a lot (that is, A LOT) of possibilities for each move.
That means the cost of predicting moves rises exponentially with the number of steps you try to look ahead. In this case, MoGo borrowed power from a supercomputer and a network to distribute the effort, so that the game could be played in a humanly reasonable time (if I understood correctly).
Moore's law states that computing capability also grows exponentially (though probably much more slowly than the growth in possibilities required with each subsequent predicted step in the MoGo algorithm).
In this context, it would be reasonable to consider that there will come a time when a supercomputer (by future standards) will not be necessary to generate the same amount of effort.
I don't think he mentioned Moore's law just for the sake of it.
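The point above can be made with a back-of-envelope sketch, under the rough (commonly cited, not exact) assumption of about 250 legal moves per turn in go versus about 35 in chess:

```python
import math

GO_BRANCHING = 250     # rough average of legal moves per turn, 19x19 go
CHESS_BRANCHING = 35   # rough average for chess

def doublings_for_extra_ply(branching):
    """How many Moore's-law doublings of hardware buy one extra ply
    of full-width lookahead (the work grows by the branching factor)."""
    return math.log2(branching)

go_cost = doublings_for_extra_ply(GO_BRANCHING)        # ~8 doublings
chess_cost = doublings_for_extra_ply(CHESS_BRANCHING)  # ~5 doublings
```

At a doubling every couple of years, one extra ply of full-width go search costs on the order of 15 years of hardware progress, which is why pruning unlikely branches (as MoGo does) matters far more than raw speed.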
Re:Moore's Law (Score:5, Informative)
Computers that play chess, go, and similar games can't hope to make the best possible move on each turn - there isn't nearly enough time to run through all possible combinations. Instead, they run through many possible moves, propagating each through many additional turns, and eventually pick the best move from the subset they have considered. Getting the computer to decide on a good move in a reasonable amount of time is a tricky part of tuning the algorithm.
A faster computer, as suggested by Moore's Law, would be able to run more iterations or branches of the algorithm per second, letting it find better moves that it would otherwise not have had time to consider.
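The "run through many moves and pick the best from the subset considered" idea can be sketched on a toy game (not go - a real engine adds a search tree and heuristics on top, and all names here are illustrative):

```python
import random

# Toy game: players alternately take 1-3 counters; whoever takes the
# last counter wins. Monte Carlo move selection as described above:
# simulate random continuations and pick the move that wins most often.

def legal_moves(counters):
    return [m for m in (1, 2, 3) if m <= counters]

def playout(counters, my_turn):
    """Play uniformly random moves to the end; True if 'we' win."""
    while counters > 0:
        counters -= random.choice(legal_moves(counters))
        if counters == 0:
            return my_turn       # whoever just moved took the last counter
        my_turn = not my_turn
    return not my_turn           # game already over: the last mover won

def best_move(counters, playouts=2000):
    """Pick the move whose random continuations win most often."""
    win_rate = {}
    for m in legal_moves(counters):
        wins = sum(playout(counters - m, my_turn=False)
                   for _ in range(playouts))
        win_rate[m] = wins / playouts
    return max(win_rate, key=win_rate.get)
```

With two or three counters left, taking them all wins every playout, so the statistics immediately favour that move; in go the same statistics are accumulated in a tree over far noisier playouts.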
Re:Moore's Law, Best possible move (Score:3, Informative)
This depends on the definition of "best possible move" - is your best choice the one that is the best possible move assuming perfect play by both sides or is the one that maximizes your chance of winning (or degree of winning for those games which have multiple levels of "winning") against this opponent?
If the former, it's a fairly standard result of game theory that in every position there exists a move (or a subset of moves) which maximize the player's expected outcome and/or which pose the most difficu
Re: (Score:2)
Re:ignorance (Score:4, Insightful)
Go is often seen as the last bastion of human superiority over computers in the domain of board games.
That's just plain ignorance. Games usually employ a discrete set of rules and a discrete playing environment. If there is something a computer is good at it would be working with things like this.
You're right, and that's exactly what the text says. In the domain of games, computers do fairly well, except for Go. Go is the last bastion of human superiority in that domain.
Re: (Score:2)
Re: (Score:2)
Well, the problem is that Go is so complex, and its number of possible games so many orders of magnitude greater than chess's, that it's not that a computer is inherently bad at it - it's a matter of having a computer that can play the game reasonably fast and intelligently enough. An 800-core computer helps with that.
Re:ignorance (Score:5, Insightful)
In the words of the novel Shibumi, "Go is to chess as philosophy is to double-entry accounting."
Trying to teach a computer to play Go has, up to this point, been about like trying to teach a computer how to evaluate a painting. And, judging by the computing resources required, it sounds like all they did here was brute-force it, so they really haven't "taught" the computer anything.
Re: (Score:2, Interesting)
It is not really comparable to mathematically reducible games like Mancala, Chess, Backgammon, Draughts/Checkers, etc.,
Huh? What are you talking about? Go is much more 'mathematical' than chess or backgammon. It's one of the best examples of combinatorial game theory [xmp.net].
Re:ignorance (Score:5, Interesting)
It's mathematical in the same way the weather is mathematical - there is certainly an amount of maths that can be done in regard to it, but it won't give in to brute-force computation because it's just too complex.
If you don't believe me, try to write a Go-playing program that plays better than a 6-year-old. I attempted this as part of my AI degree, and it was one of the hardest things I ever attempted. To think, when I began the project I couldn't understand why my supervisor was laughing at me...
Re: (Score:3, Insightful)
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
And yet, that's the way they went for this competition. Monte Carlo methods should not be called AI anymore, really. Call me back when they have a neural network beating someone.
Re: (Score:3, Informative)
Go is a game with a few clear rules and a board with a clear layout. The only reason we humans have to approach it "artistically" is because we lack the processing power to brute force it.
The computer the article talks about also lacks the power to brute force the entire game, so it uses certain rules to parse a search tree. Those rules are pretty directly comparable to the human's "artistic" sense.
Someone whose entire post consists of fuzzy quotes about ill-defined analogies should be careful when callin
Re: (Score:2, Interesting)
There are good reasons why Go has proven difficult for dirty rob'ts to deal with. Go does employ a discrete set of rules, but the increasing complexity of the Go board as the game progresses has given humans the edge until now. CBF going into all the nitty gritty, but here's a primer for you: http://en.wikipedia.org/wiki/Computer_Go [wikipedia.org]
P.S. I'm shit at Go, but rob'ts aren't real good either (mostly).
Re: (Score:2, Interesting)
No, it isn't ignorance. The whole point is that Go should be trivial for computers given everything we know about symbolic logic, but for decades Go has resisted any and all attempts to make it playable by computers. There is something that the human mind is doing that we have been unable to encode in symbolic logic, something that arises from the very simple four rules of Go.
You said that computers ought to be good at this. True. But they aren't. That's significant, and is the reason why AI research has wo
Re: (Score:3, Funny)
It was not mentioned it was a 9x9 game...
Re:Who cares about 9x9? (Score:5, Informative)
Re:Holy esoteric, Batman (Score:5, Funny)
How about "computers"?
Re: (Score:2)
obviously just a chess player (considerably less complex than go, BTW, hence computers beat chess players years ago)
Depends on how you look at it. Chess is far more complex in terms of rules, but you can learn Go in a matter of minutes.
Go admittedly has many more possible games. But what is amazing is the level of complexity from such a simple rule system. I have played maybe a dozen games, so am not an expert by any stretch. But the game has a level of beauty to it that chess does not.
Re:Holy esoteric, Batman (Score:4, Interesting)
Agreed, I think it is a particularly good game for hardcore computer nerds and mathematicians, because it forces you out of human calculator mode and makes you engage your atrophied right brain for a change.
There is a tendency for people of our type to get lost in our desire to quantify, optimise and engineer stuff to death, and Go helps us appreciate a problem that defies such simplification, and must be dealt with in its full chaotic beauty or not at all.
Re: (Score:3, Insightful)
Re: (Score:2)
Well... only if our military has a 9-stone handicap.
Re:9-stone-handycap means what? (Score:4, Informative)
The comparison isn't viable. In chess, a three-move head start completely changes the tactical nature of the game. In three moves, a bishop could put the king into a position where it has to capture to get out of check; now it cannot castle and the whole game falls apart. Go doesn't substantially change its tactical nature by allowing the black player to place initial stones - it just makes white's job harder (black always plays first in Go and is always played by whichever opponent is believed to be the weaker player).
The big news here is that 5 years ago, no computer could take on a pro with a *25* stone handicap. That's huge progress.
Parent is insightful, mod up (Score:5, Interesting)
It seems that many people are missing a fundamental point that makes all of this so interesting: it was only a 9-stone handicap.
That is the largest handicap you would expect to see in tournament play. In this case, it would be given to a 2nd kyu player against the 8th dan Master Kim. The handicap is traditionally calculated by subtracting the lower player's rank from the higher, which gives the number of stones. Taking advantage of the number 0 to make this work, we can use it and negative numbers to extend the scale into the kyu ranks.
Rankings:
30th kyu, 29th kyu .... 2nd kyu, 1st kyu, 1st dan, 2nd dan, ..., 8th dan, 9th dan, 10th dan.
It stops at 10th dan, which is the highest. Any kyu rank below 9th is a concession to _really_ amateur players to have some form of ranking system that meshes into the existing system.
Now... the dan ranks are simply their numbers. But, if we remember the old "number lines" from grade school and place the 0 at the 1st kyu, we can see that we have 1st kyu = 0, 2nd kyu = -1, 3rd kyu = -2, etc.
Handicap:
Therefore, if you do the math: 8th dan - 2nd kyu = 8 - (-1) = 9, and you arrive at a 9-stone handicap.
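The number-line arithmetic above can be sketched directly (the rank-string format here is just an illustrative convention):

```python
# Map ranks onto one number line: 1st kyu = 0, lower kyu ranks negative,
# dan ranks are their own numbers; the handicap is the difference.

def rank_value(rank):
    """Parse e.g. '8 dan' or '2 kyu' (the format is an assumption here)."""
    n, grade = rank.split()
    n = int(n)
    if grade == "dan":
        return n
    if grade == "kyu":
        return 1 - n   # 1st kyu = 0, 2nd kyu = -1, 3rd kyu = -2, ...
    raise ValueError(rank)

def handicap_stones(stronger, weaker):
    return rank_value(stronger) - rank_value(weaker)

stones = handicap_stones("8 dan", "2 kyu")   # 8 - (-1) = 9
```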
In standard Japanese rules, the handicap stones are placed on the "star" points, the ones with the black dots on them on the board. There are only 9.
The handicap system was designed to even out the match. It was determined that giving extra initial placements in this manner would improve the weaker player's position to a point where the match was considered "even", allowing weaker players to have a chance at defeating a stronger player, but more importantly, to give the weaker player a chance to play a stronger player without it being a complete slaughter. (It is considered very bad form to refuse a handicap that is properly calculated, by the way.)
Thus, with this 9-stone handicap it should not be THAT much of a surprise that the computer won, except that it does mean the computer played at the level of at least a 2nd kyu player. Of course, I will defer to Master Kim's assessment of the computer's strength as 2nd or 3rd dan, given Master Kim's own rank; I was speaking strictly from the mathematics of the handicapping system.
This is significant in that the computer is playing inside the range of a decent human player. This is new. This is different. This has never been done before to this extent. Because of the search space for Go, this is a significant accomplishment, even with 800-cores.
What it tells us is that we are better at designing algorithms, better at using parallel systems, and have better hardware to throw at the problem. This was not simply more hardware with an old algorithm, or it would not have done as well.
Re: (Score:3, Informative)
There's a professional, who's an expert in programmable metronomes, at the University of Sussex.
The rest is a cheer, "Go Congress!"
At least that's how I parsed it...
Re: (Score:3, Informative)
Re:what is this game? (Score:5, Informative)
It's very simple, and very interesting.
In standard form, it's played on a 19x19 board. There are two players, black and white, who each have a large supply of stones - small round markers in their color. They take turns placing stones on the intersections of the lines, anywhere on the board. If one player's stones surround another player's stones, the surrounded stones are 'captured' and removed (in the basic case, a black stone on an intersection is surrounded if there are white stones on the immediately neighbouring intersections in each cardinal direction - larger groups are surrounded in the same way). The game ends when there are no more intersections where a player can usefully place a live stone (i.e., one that is not immediately subject to capture for being surrounded), and the players then count up the territory their live stones control; whoever controls more of the board wins.
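The capture rule described above, a group being removed when it has no empty neighbouring intersections, can be sketched as a flood fill (the board representation and all names here are illustrative assumptions, not any standard library):

```python
SIZE = 19

def neighbours(p):
    x, y = p
    return [(x + dx, y + dy)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < SIZE and 0 <= y + dy < SIZE]

def group_and_liberties(board, start):
    """Flood-fill the connected group at `start`; a liberty is an empty
    neighbouring intersection. board maps (x, y) -> 'black' / 'white'."""
    colour = board[start]
    group, liberties, frontier = {start}, set(), [start]
    while frontier:
        for n in neighbours(frontier.pop()):
            if n not in board:
                liberties.add(n)
            elif board[n] == colour and n not in group:
                group.add(n)
                frontier.append(n)
    return group, liberties

def is_captured(board, point):
    """A group with zero liberties is captured and removed."""
    _, liberties = group_and_liberties(board, point)
    return not liberties

# A lone black stone with white on all four cardinal neighbours:
board = {(3, 3): "black",
         (2, 3): "white", (4, 3): "white", (3, 2): "white", (3, 4): "white"}
```

The surrounding white stones themselves still have open neighbours, so they are not captured; only the enclosed black group is.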
Handicaps, as mentioned above, are a number of stones already placed on the board at the beginning of the game.
Strategy involves building up formations of stones that cannot be captured. The complexity comes from the fact that there are so many different places to put a stone, in so many different sequences - the search space is phenomenally large. As in chess, there are accepted opening strategies and miniature battles that appear frequently, but overall a game of go is much more varied and fluid.
Re: (Score:3, Informative)
Re: (Score:3, Interesting)
Thanks for the correction!
Must clean my glasses :-)
Interesting about the 9 stone handicap though. In the 1970s I played the women's world champion and she gave me a 9 stone handicap. Even though I am not a very strong player, I was still amazed that she beat me by a small margin. Hopefully I have improved at least a little since then :-)