
Computer Beats Pro At US Go Congress

Posted by kdawson
from the going-going-gone dept.
Bob Hearn writes "I was in attendance at the US Go Congress match yesterday where history was made: the go program MoGo, running on an 800-core supercomputer, beat 8-dan professional go player Myungwan Kim in a 9-stone handicap game. Most in the audience were shocked at the computer's performance; it was naturally assumed that the computer would be slaughtered, as usual. Go is often seen as the last bastion of human superiority over computers in the domain of board games. But if Moore's law continues to hold up, today's result suggests that the days of human superiority may be numbered." Read below for more details in Bob's account of the match.

Computers are still a long way from beating the best human players in an even game; nevertheless, today's performance represents a huge stride forward. In the last such high-profile matchup, in 1997, Janice Kim (then 1-dan professional) beat then-champion program Handtalk with a 25-stone handicap. In fact, most of the improvement in the level of computer-go play has happened in just the past few years. Today's top programs, including MoGo, use a Monte Carlo approach: they simulate thousands of random continuations per second from the current position to the end of the game, accumulating statistics in a tree on which moves lead to wins most often. One of the strengths of this approach is that it is highly amenable to parallelization. Thus, today's 800-core incarnation of MoGo is by far the strongest go computer that has yet existed. After the game Myungwan Kim estimated the strength of MoGo as 'two or maybe three dan,' corresponding to a reasonably strong amateur player. (Professional dan ranks are at a much higher level than amateur dan ranks.)
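The Monte Carlo recipe described above is simple enough to sketch in a few lines. A real Go playout engine is far too large to show here, so this toy (everything in it is illustrative) applies the same idea to Nim - take 1 to 3 stones, taking the last stone wins: for each legal move, play many games out at random and keep statistics on which move wins most often.

```python
import random

def random_playout(stones, our_turn):
    """Finish a Nim game with uniformly random moves.
    Taking the last stone wins. Returns True if 'we' win."""
    while True:
        take = random.randint(1, min(3, stones))
        stones -= take
        if stones == 0:
            return our_turn
        our_turn = not our_turn

def monte_carlo_move(stones, playouts=500):
    """Estimate each legal move's win rate by random playouts
    (the core of the Monte Carlo approach) and return the best move."""
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        remaining = stones - take
        if remaining == 0:
            rate = 1.0  # taking the last stone wins outright
        else:
            # After our move it is the opponent's turn, so our_turn=False.
            wins = sum(random_playout(remaining, our_turn=False)
                       for _ in range(playouts))
            rate = wins / playouts
        if rate > best_rate:
            best_move, best_rate = take, rate
    return best_move
```

From 5 stones the optimal move is to take 1, leaving a multiple of 4; with enough playouts the raw win statistics find it, much as MoGo's statistics (organized into a search tree) find strong go moves.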
  • by Anonymous Coward

    That's impossible! Even for a computer!

    • by Rah'Dick (976472) on Friday August 08, 2008 @09:01AM (#24523481)

      No, it's not - the computer is running an algorithm designed by at least one nerd. By designing that algorithm, it was the programmer, not the computer, who beat the other player, using his intelligence to write a program that plays for him.

      As long as computers cannot think for themselves, no computer will beat a human player, but a human-designed algorithm will. Humans beat humans (using technology).

      • by Kingrames (858416) on Friday August 08, 2008 @09:34AM (#24523921)

        Up until you use genetic programming to design the algorithm that wins.

        Unless you believe that the creator of the human race is 100% responsible for all human action, of course.

      • Re: (Score:3, Insightful)

        by rootneg2 (984395)
        Something tells me, though, that programmer nerd would be *crushed* in a head-to-head with an 8-dan, even with a 9 stone handicap. You really have to give the algorithm itself most of the credit. The theory and strategy for algorithmic go-playing is very, very different from on-the-fly strategy; algorithmic strategy isn't even really applicable to normal gameplay, and vice-versa. Two totally different ways of thinking.
      • by Moraelin (679338) on Friday August 08, 2008 @10:12AM (#24524507) Journal

        Well, while theoretically I see your point, sometimes that's harder than it sounds. And in this case it just didn't happen. The computer did _not_ in fact even play at master level at all.

        I think what everyone seems to miss is that the computer was given a 9 stone head-start there.

        That's not exactly identical, but it's similar enough, as a rough analogy, to being allowed to move 9 times in a chess game before the opponent even starts. So with that huge advantage, your program managed to beat a chess master. Yay. Big f-ing deal.

        No, seriously. A 9 move advantage is _immense_. The first moves are where you stake your claims and influence over whole quarters of the board. If you look at the start of that game, white from the start had to challenge relatively strong positions of the computer. (For that stage of the game.)

        At any rate, at professional levels ("dan"), it's played with 1 handicap stone for every 3 ranks! So a 2-dan player would only get 2 stones head start against the master in TFA. The computer got 9. Assuming that the game was relatively evenly matched, that would put the computer's skill barely above the line between casual player and intermediate player.

        To give a chess comparison again, it barely ranks above winning a chess game against your high school's prom queen. Ok, ok, a little higher than that, but not by much.

        Mind you, it's still an achievement that they managed to program it to even play at intermediate level. Kudos and all that. But let's not get too carried away with the "OMG, it beat a go master" line of thought. We can celebrate that when it beats one in an even game. We're still _far_ from that.

        • by MutantEnemy (545783) on Friday August 08, 2008 @04:36PM (#24531521) Homepage

          At any rate, at professional levels ("dan"), it's played with 1 handicap stone for every 3 ranks! So a 2-dan player would only get 2 stones head start against the master in TFA. The computer got 9. Assuming that the game was relatively evenly matched, that would put the computer's skill barely above the line between casual player and intermediate player.

          Mmm. I think you're conflating amateur dan ranks and professional dan ranks. Amateurs are supposed to start at about 30-kyu, then rise up to 1-kyu, after which they enter the amateur dan ranks. 7-dan is usually the highest rank given to an amateur.

          A weak or new professional would be ranked at 1-dan, but this is on the professional scale. A 1-dan pro is probably better than a 7-dan amateur.

  • by metamechanical (545566) on Friday August 08, 2008 @08:56AM (#24523401)
    If only the summary had actually been what I read it as. I think we need more computers in Congress.
  • Breaking News (Score:5, Insightful)

    by matt4077 (581118) on Friday August 08, 2008 @08:58AM (#24523429) Homepage
    Breaking News: Snail is faster than cheetah encased in 200 pound lead handicap
  • Moore's Law (Score:4, Interesting)

    by jonastullus (530101) * on Friday August 08, 2008 @09:00AM (#24523459) Homepage

    Even if Moore's law were to continue (which I seriously doubt), the combinatorics of Go grows drastically faster than doubling transistors/performance every 18 months could hope to cope with.

    Calculating the next N possible moves (no matter how smartly you cut off) is simply not the answer in Go, IMHO. Seeing how ladders and other "inevitable" developments can go on for dozens of moves makes a look-ahead impractical.

    I haven't read TFA, but in the long run beating higher dan level players will require recognition of strategic constellations and differentiating them from mere clusters of stones.

  • ...and thought a "computer beats" professional walked in on a filibuster, or something. Then my imagined hilarity was confused by the notice of "go", then I was just disappointed when I realized that a computer literate individual was not actually in Congress.
  • misleading title (Score:5, Insightful)

    by yepster (1050284) on Friday August 08, 2008 @09:00AM (#24523475)
    Considering the 9 stone handicap, maybe we'll have to wait yet a few years before actually seeing what the title implies ...
    • Re:misleading title (Score:5, Informative)

      by Denial93 (773403) on Friday August 08, 2008 @09:17AM (#24523693)
      It says in the article they've gone down 9 stones in the last year, so they may well shed the rest of the handicap in another. That's why Moore's law was mentioned in the summary...
      • Re: (Score:3, Insightful)

        by SoVeryTired (967875)

        The thing about a nine-plus stone handicap is that, as the number of handicap stones goes down, the difficulty increases disproportionately for the weaker player.

        There's a much bigger difference moving from four stones to two than there is moving from nine to seven.

        Nine stones is still a pretty enormous handicap (the maximum normally allowed), so Moore's law or not, I doubt we'll see a computer beat a pro in an even game anytime soon.

  • Bob (Score:5, Funny)

    by Kamineko (851857) on Friday August 08, 2008 @09:02AM (#24523503)

    Read below for more details in Bob's account of the match.

    Bob's account of the match:

    The guy placed a stone, and we were like 'Whoa!', and then the computer placed a stone, and I was like 'No way!', and then the guy placed another stone, and the crowd was all 'Oh! WOW!' and they continued to take alternating turns like that.

    It was Go-riffic!

  • by Asmor (775910) on Friday August 08, 2008 @09:02AM (#24523505) Homepage

    Somebody wake me up when a computer can reliably beat a human at Candy Land.

    • Or at jumping rope...
    • by 2short (466733) on Friday August 08, 2008 @12:56PM (#24527723)
      That's easy; Candy Land is a solved game.
          Some might tell you the game has no strategy at all; that without any decisions to be made, it's merely an elaborate coin flip. However, the winning strategy is so simple, both my children mastered it under the age of four:
          On your turn, pick up someone else's piece and chew on it a bit. Then draw a card and try to feed it to the dog. Become distracted by the dog and chase him across the room. When your parent/opponent suggests packing up the game since you don't seem to want to play, scream "No!!!" as you careen across the room, trip over the board, and (crucially) knock over the deck. Ignore the fact that your parent/opponent takes a bit long to put the deck back together, and that their shuffling is somewhat irregular.
          Draw your next card, which will be the ice cream cone, catapulting you into the lead. You will win 2-3 turns after this, depending on your parent/opponent's mood and/or efficiency at finding the appropriate double-move cards.
  • 800-Core? (Score:4, Insightful)

    by Slippery Pete (941650) on Friday August 08, 2008 @09:04AM (#24523533)
    Just out of curiosity, doesn't this seem like overkill? If you require such a massive amount of computing power, it seems you would take away from any decent AI algorithms and just do brute force computing.
    • Deep Blue [wikipedia.org] (the computer playing chess with Kasparov) was the same: custom made hardware and a lot of computing power, used exclusively for this.

    • Having such an absurdly powerful machine lets you do *anything* with your program. It's useful for testing and thinking about theory if you can largely ignore the "processing power" variable.

    • Re:800-Core? (Score:5, Insightful)

      by jjohnson (62583) on Friday August 08, 2008 @10:11AM (#24524497) Homepage

      The history of computer chess is the history of building brute force engines and then refining them by identifying where processing power is successful at winning. Brute force allows you to look at a lot of moves, but how the program scores the outcomes of those moves as desirable is a different matter that has nothing to do with processing power (e.g., scoring for area of control).

      What humans seem to do well is radically pruning the search tree to the small number of viable lines of play, and spending our limited processing power analyzing those in depth. Once you've got a computer searching a large search space in depth, you can work on pruning it by teaching it to recognize useless lines.

    • Re: (Score:3, Informative)

      by slodan (1134883)

      In Go, you can move nearly anywhere at any time. This means that the number of possible board states grows wildly faster than chess (to pick a game with well-known AI techniques). A brute force approach to this problem is like trying to crack a rand() sequence by brute force - the search space is far too large.
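A rough back-of-the-envelope calculation makes the point, using the commonly cited average branching factors of about 35 legal moves per chess position and about 250 per 19x19 go position (both figures are approximations):

```python
# Commonly cited average branching factors: ~35 legal moves per chess
# position, ~250 per 19x19 go position (rough approximations).
CHESS_BRANCHING, GO_BRANCHING = 35, 250

for plies in (2, 6, 10):
    chess = CHESS_BRANCHING ** plies      # move sequences after k plies
    go = GO_BRANCHING ** plies
    print(f"{plies:2d} plies: chess ~{chess:.1e}, go ~{go:.1e}, "
          f"go/chess ratio ~{go / chess:.0e}")
```

Even at a modest ten plies, the go tree is hundreds of millions of times larger than the chess tree, which is why pure look-ahead that works in chess drowns in go.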

  • Ease (Score:5, Interesting)

    by Amorymeltzer (1213818) on Friday August 08, 2008 @09:09AM (#24523593)

    That's the thing that's always fascinated me about Go. It is essentially an extremely simple game gone terribly, terribly wrong. It's got about as many rules as Yahtzee, but is played on a 19x19 board. Compare that to chess, which has a much smaller (8x8) board but far more rules.

    I'm no expert on computer game programming, but I think that's where some of the difficulty comes into play when building these guys. Chess has a nearly unlimited number of outcomes; however, having those sets of rules helps. For example, of the 32 pieces, 16 of them are essentially limited to one space in front of them. In Go, however, the lack of rules means that you're left with the simple mathematical monstrosity of an enormous board.

    The book The Immortal Game [amazon.com], aside from being an excellent read, goes into depth about computer programs playing chess, as well as Go and Checkers.

    • Re:Ease (Score:5, Interesting)

      by fractic (1178341) on Friday August 08, 2008 @09:20AM (#24523731)
      In chess you also have the advantage of being able to make an endgame database for your program. Chess positions only get simpler as the game progresses because pieces are removed. In Go this doesn't happen. An endgame database is simply impossible.
      • Re:Endgame Databases (Score:3, Interesting)

        by bcwright (871193)

        This is indeed an advantage that chess enjoys, however it's more of a second-order effect. In chess the endgame database doesn't really come into play in a significant way until towards the very end of the game, since for the first part of the game the terminal nodes can be adequately evaluated without it even if they are "endgame" nodes from the point of view of the remaining material. Even during the endgame phase the endgame databases often don't come into play in a significant way because the endgame

  • by RootsLINUX (854452) <rootslinux@3.14gmail.com minus pi> on Friday August 08, 2008 @09:15AM (#24523667) Homepage
    Having a greater number of transistors on a chip does not make a processor "smarter" or capable of doing something that a less populated processor cannot. It only gives it the potential to be much faster (and consume more power). The real story here is that a person or group of programmers has designed a better algorithm for playing the game. This is a human achievement, not a machine one.
    • by ELProphet (909179) <davidsouther@gmail.com> on Friday August 08, 2008 @10:09AM (#24524455) Homepage

      The real story here is that a person or group of programmers have designed a better algorithm for playing the game.

      Not really. TFS clearly states the programmers used a Monte Carlo [wikipedia.org] algorithm. To compare, the most common chess algorithm is Alpha-Beta [wikipedia.org] search. The computer takes a board position, enumerates the possible moves, and prunes the tree if it finds a move worse than the best move it's found so far. The only "clever" algorithm is evaluating the positions (rather difficult in Chess). Go, OTOH, has too many possible moves ((19^2)! / (19^2 - Moves)!), or 16,702,719,120 (sixteen billion) possible boards after 4 moves. So, rather than explore the entire tree, they use Monte Carlo randomization to get a good average move, instead of the best possible move. Now, the "clever" part of the algorithm goes away, and we're left with a massively parallelizable dart board. The more darts, the better the chance of a good shot.

      It only gives it the potential to be much faster (and consume more power).

      The first point is entirely accurate, and the basis of modernization in legacy systems (what Joel on Software calls a Free Lunch). The second point is not valid. Processors have not gotten physically larger, which means the transistors have gotten smaller. If the transistors are smaller, there is less distance between them, which means it takes less energy to move electrons between them, more than making up for the power to run those extra transistors. That does make more heat, which is where the power savings go: into the cooling unit. This is also why power supplies in the past few decades really haven't changed their input or output specifications - your computer uses the same amount of power it always has, just more efficiently.

      Having a greater number of transistors on a chip does not make a processor "smarter" or capable of doing something a less populated processor can.

      That's for the emergent intelligence debate. Kurzweil of course disagrees strongly, whereas others (Joel on Software) use that argument as a proof by contradiction, advocating good software development as opposed to waiting for your computer to improve. I personally am in the second camp, though only because it makes more sense to write good code today than use Moore's law as a crutch for sloppy programming (see Vista).
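For comparison with the Monte Carlo approach, here is a minimal sketch of the alpha-beta pruning idea described above, in negamax form. Position evaluation - the genuinely hard part in chess - is omitted, so Nim (take 1 to 3 stones, taking the last stone wins) stands in as a game small enough to search exactly; everything here is illustrative.

```python
def alphabeta(stones, alpha=-1, beta=1):
    """Negamax with alpha-beta pruning for Nim (take 1-3 stones;
    taking the last stone wins). Returns +1 if the side to move
    wins with best play, -1 if it loses."""
    if stones == 0:
        return -1  # opponent just took the last stone: we have lost
    best = -1
    for take in range(1, min(3, stones) + 1):
        # Negate the child's value and swap/negate the search window.
        value = -alphabeta(stones - take, -beta, -alpha)
        if value > best:
            best = value
        if best > alpha:
            alpha = best
        if alpha >= beta:
            break  # prune: the opponent will never allow this line
    return best
```

Positions that are multiples of 4 are lost for the side to move; with only win/loss values, the cutoff fires as soon as any winning move is found, so the remaining siblings are never searched.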

  • by Keyper7 (1160079) on Friday August 08, 2008 @09:19AM (#24523709)

    ...just like what happened with Kasparov, the headline is completely backwards.

    How about this one: Computer needed 800 processors, at 4.7 GHz, 15 teraflops, to beat a professional Go player.

    We're talking about a computer needing gargantuan processing power to beat a human and we are impressed at the computer? Seriously?

    If I am to be impressed by something, it's definitely by this Myungwan Kim guy.

    • Re: (Score:3, Insightful)

      by ceoyoyo (59147)

      800 processors? That computer is a toy compared to this Kim guy's brain, when doing a massively parallel task like Go.

      That computer is seriously impressive.

  • by frankie (91710) on Friday August 08, 2008 @09:31AM (#24523857) Journal

    The difference between Go and other games you may know where computers have surpassed the best human players is the immensity of the search space. On a standard 19x19 board there are 361 spaces, all of which are in play at every move, often including spaces currently occupied but not yet safe from capture. Therefore, within a handful of plies a computer player may be looking at trillions of possible branches. Go is NUMEROUS orders of magnitude more complex than chess from an algorithmic viewpoint.

    p.s. Those of you saying "Go what?" are merely revealing your own ignorance. There are at least as many Go players in the world as chess players; most of them are in Asia.

  • Ko (Score:3, Interesting)

    by sanosuke001 (640243) on Friday August 08, 2008 @09:47AM (#24524111)
    Just start a Ko at the beginning of the game. Let's see if the computer can keep track of that mess of a board!
  • Bah! (Score:4, Interesting)

    by Daemin (232340) on Friday August 08, 2008 @10:06AM (#24524399)

    What really frustrates me about this is that it uses a Monte Carlo method.

    As is mentioned in the body, essentially it plays a shitload of random moves out to some cutoff point and tries to determine which moves contribute to a winning end state more often than the other ones.

    Basically, the only thing the stupid algorithm knows about Go is the simple rules and how to score the board. It knows nothing of strategy, tactics, strong shapes, living shapes, dead shapes, etc. Of course, it may be doing some sophisticated analysis to determine fruitful branches so as to not waste time on bad ones, but that doesn't defeat my point; it just means that with more computing power, you don't have to be so choosy. Knowledge about the complexities of the game is not required for the machine to win with this method, and that makes me call bullshit.

    I'll be excited when a computer beats a 9-dan player without using a probabilistic method to choose a winning path through the search space. I want to see a program that plays like a human, i.e. looks at the board and determines which groups are alive, which are dead, where it is fruitful to play and where it is a waste. In other words, I'll be impressed when someone is able to pin down what makes one shape strong and another weak precisely enough to put it in an algorithm. Otherwise, it's just a cheap probability-crunching trick.

    • Re: (Score:3, Insightful)

      by Raenex (947668)

      Knowledge about the complexities of the game is not required for the machine to win with this method, and that makes me call bullshit.

      Actually, this is a good thing. Before Monte Carlo methods started to dominate Go (about 2 years ago), all the top programs were written with human-coded expert knowledge. Progress was extremely slow and didn't keep up with advances in computer hardware. It took decades of man-labor to achieve a moderate success, which was all surpassed in a couple of years by MoGo.

      Furthermore, hardly any advances were being made in Artificial Intelligence, since most of the work involved adding specialized domain knowle

  • by Bazman (4849) on Friday August 08, 2008 @10:26AM (#24524757) Journal

    I'll be impressed when they can build a computer that can play Mornington Crescent at pro level.

  • by ihavnoid (749312) on Friday August 08, 2008 @10:55AM (#24525265)

    First, I see a lot of people assuming that there now exists a computer program that can beat a human, but it is still far from that. In go, having a nine-stone handicap is somewhat equivalent to playing chess without a queen (or maybe even worse).

    The reason computers aren't good at go is not only because of its vast number of possible moves. Another serious reason is because a good/bad move isn't immediately evident, and in many cases, can be figured out only after about 100 moves. Good luck reading 100 moves ahead using a computer. Human players are very good at making educated guesses.

    Additionally, when playing chess, one small mistake can be disastrous. However, in Go, small mistakes simply add up, and you simply lose points. In other words, go is generally forgiving of small mistakes, which gives an additional advantage to human players. (Of course, catastrophic mistakes are equally devastating in go games, too, but small mistakes are generally less serious, and there usually exists some way to do damage control.) Go is a strategic game - you can lose local battles, but still win the war. In chess, it isn't so easy to win the game once you lose a local battle.

  • Analysis vs search (Score:4, Interesting)

    by Kjella (173770) on Friday August 08, 2008 @11:00AM (#24525353) Homepage

    It basically comes down to whether you're analyzing the position or searching the position. Ask a chess player why he made some move and it's not "because I went through all the possible moves and materially it was the best"; it's usually concepts like creating a good pawn structure, developing pieces, pinning down the opponent, threatening key areas of the board, setting up for a knight fork or a mate threat, and so on. As in "I don't know exactly where it's going, but I think my position now is better than the one I had." Computer chess programs don't do any of that; they just blindly test moves, measuring material. Then it's all about giving the computer too many possibilities. If by any chance we should evolve computing power to beat a human on a 19x19 board, it'd be trivial to make, say, a 29x29 board, and the human would play almost the same while the computer would drop horribly in skill. All quantity and no understanding of why any particular move makes sense over any other.

  • MoGo Algorithm (Score:3, Interesting)

    by emeraldemon (1167599) on Friday August 08, 2008 @11:10AM (#24525599)
    For anyone interested, MoGo was the PhD thesis of Sylvain Gelly, and that thesis, which describes MoGo, is available here: http://www.lri.fr/~gelly/ [www.lri.fr] As was said before, alpha-beta search is the most common strategy in chess, but my understanding is that it doesn't parallelize very well. Monte Carlo has been around for a long time, but it hasn't ever really succeeded at Go. Gelly's main contribution was to borrow a successful solution to the multi-armed bandit problem, an algorithm called UCB, and apply it to the search tree. Initial values are determined through Monte Carlo style search, and then for each branch the algorithm estimates the upper confidence bound on the reward of each move, and preferentially searches the parts of the tree that seem to have good potential. This made it good, but still not quite as good as GNU Go, so he used reinforcement learning offline to make a quick pattern-matching style evaluation to aid UCT. There are some other tweaks to improve play style mentioned in the thesis. Anyway, it's worth a look if you're interested, and especially if you feel like MoGo has an obvious or boring algorithm.
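For the curious, the UCB1 selection rule in question picks the option maximizing mean reward plus an exploration bonus, w_i/n_i + c*sqrt(ln N / n_i). The sketch below (names and constants are illustrative) shows the rule on a plain multi-armed bandit rather than a full UCT tree:

```python
import math
import random

def ucb1_index(wins, visits, total, c=math.sqrt(2)):
    """Index of the arm maximizing mean reward + c*sqrt(ln(total)/n),
    trying unvisited arms first."""
    best_i, best_score = 0, float("-inf")
    for i, (w, n) in enumerate(zip(wins, visits)):
        if n == 0:
            return i  # always explore an untried arm first
        score = w / n + c * math.sqrt(math.log(total) / n)
        if score > best_score:
            best_i, best_score = i, score
    return best_i

def run_bandit(probs, rounds=2000, seed=0):
    """Pull Bernoulli arms using UCB1; return how often each arm was pulled."""
    rng = random.Random(seed)
    wins, visits = [0] * len(probs), [0] * len(probs)
    for t in range(1, rounds + 1):
        i = ucb1_index(wins, visits, t)
        visits[i] += 1
        wins[i] += rng.random() < probs[i]
    return visits
```

In UCT the same score is computed at each tree node over its children, with random playouts supplying the win statistics, so the search concentrates on promising branches while still occasionally revisiting neglected ones.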
  • I'm not impressed (Score:3, Interesting)

    by Oidhche (1244906) on Friday August 08, 2008 @11:42AM (#24526251)

    This result says nothing about the program's 'intelligence'. It doesn't even say much about its go proficiency. Although I'm sure that the algorithms used are very sophisticated, the program won only by virtue of the sheer computing power of the hardware it was running on. Using an 800-core supercomputer, it managed to beat an 8-dan pro in a 9-stone handicap game. Perhaps using an 8000-core supercomputer it would manage to beat him in an even game. Or, with 80,000 cores, do it while giving him 9 stones. But the program is still as dumb running on 80,000 cores as it is running on 800 cores, or on one core for that matter.

    The human brain is not such an amazing tool because of cunning algorithms or huge computing power, but because of the sheer size of the network of simple elements it consists of. There's a crucial difference in how algorithms and neurons scale. If you take some algorithm and run it on a computer with vast memory and computing power, it's still the same dumb algorithm, no matter how many gigabytes and teraflops you throw at it. But if you take a neuron (either a biological one or a computer simulation) and start connecting it to other neurons, then the larger the network, the more it is capable of. At some point you might even say it's intelligent :P.

    I think it's highly doubtful that AI research will ever come up with something better than simply simulating a neural network. Intelligence is not something that can be contained within a single algorithm, no matter how ingenious and how powerful hardware it's run on. It's an emergent property of huge networks of simple elements. An algorithm can at best mimic some aspects of intelligence, but there's never any real thought behind it.

    • by 2short (466733) on Friday August 08, 2008 @01:13PM (#24528033)

      You seem to be saying the computer and the human both succeed due to the "the sheer size of the network of simple elements it consists of". Therefore you're not impressed by one of them?

      "but there's never any real thought behind it."

      Why do you assume there is thought behind you? If the machine plays Go, it plays Go. I think it's highly doubtful AI research will ever do anything at all in the judgment of critics who insist it must not only achieve things, but achieve them in ways that are not defined or understood.
  • not a serious game (Score:3, Insightful)

    by toxygen01 (901511) on Friday August 08, 2008 @12:04PM (#24526693) Journal
    I already voiced my doubts about the seriousness of the match. The time Myungwan Kim spent thinking (7 minutes) compared to MoGo's (55 minutes) seems to say it all - it was not a serious match at all. It was just a show-off for the program. I don't know whether many people here know it, but pro players are known to do this - lose by a very small margin (1.5 points in this match) to encourage the opponent and let him say "I defeated a pro". 7 minutes vs 55 minutes - decide for yourself.
