
Computer Beats Pro At US Go Congress

Bob Hearn writes "I was in attendance at the US Go Congress match yesterday where history was made: the go program MoGo, running on an 800-core supercomputer, beat 8-dan professional go player Myungwan Kim in a 9-stone handicap game. Most in the audience were shocked at the computer's performance; it was naturally assumed that the computer would be slaughtered, as usual. Go is often seen as the last bastion of human superiority over computers in the domain of board games. But if Moore's law continues to hold up, today's result suggests that the days of human superiority may be numbered." Read below for more details in Bob's account of the match.

Computers are still a long way from beating the best human players in an even game; nevertheless, today's performance represents a huge stride forward. In the last such high-profile matchup, in 1997, Janice Kim (then 1-dan professional) beat then-champion program Handtalk with a 25-stone handicap. In fact, most of the improvement in the level of computer-go play has happened in just the past few years. Today's top programs, including MoGo, use a Monte Carlo approach: they simulate thousands of random continuations per second from the current position to the end of the game, accumulating statistics in a tree on which moves lead to wins most often. One of the strengths of this approach is that it is highly amenable to parallelization. Thus, today's 800-core incarnation of MoGo is by far the strongest go computer that has yet existed. After the game Myungwan Kim estimated the strength of MoGo as 'two or maybe three dan,' corresponding to a reasonably strong amateur player. (Professional dan ranks are at a much higher level than amateur dan ranks.)
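
Here is a minimal sketch of that flat Monte Carlo idea (illustrative only: `legal_moves`, `play`, `opponent`, `game_over`, and `winner` are stand-ins for a Go engine's primitives, and the real MoGo layers UCT tree search and learned heuristics on top of the random playouts):

```python
import random
from collections import defaultdict

def monte_carlo_move(state, color, n_playouts=10_000):
    """Pick the move whose uniformly random playouts win most often."""
    wins, visits = defaultdict(int), defaultdict(int)
    for _ in range(n_playouts):
        move = random.choice(legal_moves(state, color))
        s, turn = play(state, move, color), opponent(color)
        # Play random moves to the end of the game, then score it.
        while not game_over(s):
            s = play(s, random.choice(legal_moves(s, turn)), turn)
            turn = opponent(turn)
        visits[move] += 1
        wins[move] += (winner(s) == color)
    # Highest observed win rate wins.
    return max(visits, key=lambda m: wins[m] / visits[m])
```
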
  • Moore's Law (Score:4, Interesting)

    by jonastullus ( 530101 ) * on Friday August 08, 2008 @09:00AM (#24523459) Homepage

    Even if Moore's law were to continue (which I seriously doubt), the combinatorial explosion of Go grows drastically faster than a doubling of transistors/performance every 18 months could hope to cope with.

    Calculating the next N possible moves (no matter how smartly you cut off) is simply not the answer in Go, IMHO. Ladders and other "inevitable" developments can run on for dozens of moves, which makes naive look-ahead impractical.

    I haven't read TFA, but in the long run, beating higher-dan players will require recognizing strategic constellations and distinguishing them from mere clusters of stones.
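
    To put a rough number on the Moore's-law point: each extra ply of full-width look-ahead in Go multiplies the work by roughly the branching factor, commonly estimated at around 250, which is about 2^8. A quick sketch (my arithmetic, using the usual 18-month doubling period):

```python
import math

branching_factor = 250   # rough average number of legal Go moves
doubling_months = 18     # the classic Moore's-law doubling period

# Doublings needed to afford one extra full-width ply, converted to years.
doublings_per_ply = math.log2(branching_factor)            # ~7.97
years_per_ply = doublings_per_ply * doubling_months / 12   # ~12 years
print(f"~{years_per_ply:.0f} years of doubling per extra ply")
```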

  • by Asmor ( 775910 ) on Friday August 08, 2008 @09:02AM (#24523505) Homepage

    Somebody wake me up when a computer can reliably beat a human at Candy Land.

  • by fractic ( 1178341 ) on Friday August 08, 2008 @09:05AM (#24523549)
    MoGo uses a Monte Carlo approach. Basically, it keeps playing random games to see which moves give the best results. Faster computers mean more random games can be played, and thus better results.
  • by Urkki ( 668283 ) on Friday August 08, 2008 @09:07AM (#24523577)

    Read a bit on Go algorithms. This one isn't using a dumb search. If it were, a Go program playing this well wouldn't be finished calculating its first move yet... No, this one must have used an extremely advanced search algorithm, very smartly pruning unlikely branches of the search.

    Unfortunately, we're stuck with algorithmically searching for the right move until we have quantum computers large enough to run a Go algorithm and evaluate everything at once. That may be impossible in our universe, due to practical limitations imposed by the laws of physics (just as building a space elevator on Jupiter may be impossible given the physically achievable materials in our universe).

  • Ease (Score:5, Interesting)

    by Amorymeltzer ( 1213818 ) on Friday August 08, 2008 @09:09AM (#24523593)

    That's the thing that's always fascinated me about Go. It is essentially an extremely simple game gone terribly, terribly wrong. It has about as many rules as Yahtzee, but is played on a 19x19 board. Compare that to chess, which makes do with a much smaller (8x8) board but has far more rules.

    I'm no expert on computer game programming, but I think that's where some of the difficulty comes in when building these programs. Chess has a nearly unlimited number of outcomes, but having those sets of rules helps: of the 32 pieces, 16 are essentially limited to the space in front of them. In Go, however, the lack of rules means you're left with the simple mathematical monstrosity of an enormous board.
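
    Back-of-the-envelope numbers for that monstrosity (my arithmetic, loose upper bounds only; the chess figure is a commonly cited rough estimate, not computed here):

```python
# Every Go point can be empty, black, or white: a loose upper bound.
go_positions_bound = 3 ** (19 * 19)
print(f"Go:    ~10^{len(str(go_positions_bound)) - 1}")   # ~10^172
print("Chess: ~10^47 (commonly cited upper-bound estimate)")
```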

    The book The Immortal Game [amazon.com], aside from being an excellent read, goes into depth about computer programs playing chess, as well as Go and Checkers.

  • Re:ignorance (Score:2, Interesting)

    by felonius maximus ( 601940 ) on Friday August 08, 2008 @09:14AM (#24523657)
    Care to clarify what you mean by "ignorance", El M?

    There are good reasons why Go has proven difficult for dirty rob'ts to deal with. Go does employ a discrete set of rules, but the increasing complexity of the Go board as the game progresses has given humans the edge until now. CBF going into all the nitty gritty, but here's a primer for you: http://en.wikipedia.org/wiki/Computer_Go [wikipedia.org]

    P.S. I'm shit at Go, but rob'ts aren't real good either (mostly).

  • Re:ignorance (Score:2, Interesting)

    by Aristos Mazer ( 181252 ) on Friday August 08, 2008 @09:14AM (#24523665)

    No, it isn't ignorance. The whole point is that Go should be trivial for computers given everything we know about symbolic logic, but for decades Go has resisted any and all attempts to make it playable by computers. There is something that the human mind is doing that we have been unable to encode in symbolic logic, something that arises from the very simple four rules of Go.

    You said that computers ought to be good at this. True. But they aren't. That's significant, and is the reason why AI research has worked hard on Go -- it clearly highlights a difference between the human mind and the computer. What difference? We don't know yet. Perhaps we're just doing Monte Carlo deep searches in extreme parallel, like MoGo. Perhaps there is something more human.

  • Re:ignorance (Score:2, Interesting)

    by fractic ( 1178341 ) on Friday August 08, 2008 @09:15AM (#24523673)

    It is not really comparable to mathematically reducible games like Mancala, Chess, Backgammon, Draughts/Checkers, etc.,

    Huh? What are you talking about? Go is much more 'mathematical' than chess or backgammon. It's one of the best examples of combinatorial game theory [xmp.net].

  • by squoozer ( 730327 ) on Friday August 08, 2008 @09:19AM (#24523715)

    I don't think you understand the complexity of the game if you're making statements like that.

    That's an interesting point, because it's starting to look like we aren't solving these sorts of problems in the simplest way possible. A human going flat out runs on 200W maximum. That supercomputer is probably using 200W per processor (when you take into account all the additional equipment needed for memory, switching, cooling, etc.) and is not even playing as well as the human. What that says to me is that we probably aren't approaching the problem correctly.

    I suspect that we will need to develop a completely different type of hardware for machines to perform well at this type of task. Humans are poor at what computers are good at and vice versa, and the two take completely different approaches to processing - perhaps that's a hint to look elsewhere.

  • Re:Ease (Score:5, Interesting)

    by fractic ( 1178341 ) on Friday August 08, 2008 @09:20AM (#24523731)
    In chess you also have the advantage of being able to make an endgame database for your program. Chess positions only get simpler as the game progresses because pieces are removed. In Go this doesn't happen. An endgame database is simply impossible.
  • by damburger ( 981828 ) on Friday August 08, 2008 @09:26AM (#24523805)

    The brain has a lot of engineering that puts microchips to shame. If you were to pack transistors as densely and with as little cooling as human neurons, they would melt. On top of this, of course, the amount of processing a neuron can do vastly exceeds that of a transistor. Modeling even a single human brain cell is a major task for a computer.

    Furthermore, the connectivity of neurons is much greater than that of electronic components; each one is connected to thousands of other neurons, nearby and far away, in a way impossible with wires.

    A lot of guesses about the equivalent FLOPS of a human brain have centered around naive counting of cells and comparing that with the rather slow switching speed (about 10Hz IIRC). Some estimates came out at about 1 TeraFLOPS but that seems ridiculously small in light of what humans can do that computers still struggle with.
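
    The naive counting that produces that figure goes something like this (ballpark numbers, all assumptions):

```python
neurons = 1e11        # ~100 billion neurons, a commonly quoted ballpark
firing_rate_hz = 10   # the "rather slow switching speed" mentioned above

naive_ops = neurons * firing_rate_hz
print(f"{naive_ops:.0e} ops/s")   # 1e+12, i.e. roughly 1 tera-op
```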

  • Re:ignorance (Score:5, Interesting)

    by damburger ( 981828 ) on Friday August 08, 2008 @09:30AM (#24523853)

    It's mathematical in the same way the weather is mathematical - there is certainly a fair amount of maths that can be done with regard to it, but it won't give in to brute-force computation because it's just too complex.

    If you don't believe me, try to write a Go-playing program that plays better than a six-year-old. I attempted this as part of my AI degree, and it was one of the hardest things I have ever attempted. To think, when I began the project I couldn't understand why my supervisor was laughing at me...

  • by damburger ( 981828 ) on Friday August 08, 2008 @09:35AM (#24523933)

    Agreed, I think it is a particularly good game for hardcore computer nerds and mathematicians, because it forces you out of human calculator mode and makes you engage your atrophied right brain for a change.

    There is a tendency for people of our type to get lost in our desire to quantify, optimise and engineer stuff to death, and Go helps us appreciate a problem that defies such simplification, and must be dealt with in its full chaotic beauty or not at all.

  • by aldousd666 ( 640240 ) on Friday August 08, 2008 @09:42AM (#24524029) Journal
    Or a kickass neural net needs to be trained to recognize moves on the fly and respond. If you're talking about brute-force optimization problems with minima- and maxima-based heuristics, then yes, it might be nice to have a new algorithm or a giant supercomputer. But if you're doing it a totally different way, that might not be true. With evolved neural networks that play Go, whose fitness per epoch is evaluated by a supercomputer crunching raw numbers like this, you might have something that ends up being genuinely smarter than we are at this, in the same way that we are.
  • Ko (Score:3, Interesting)

    by sanosuke001 ( 640243 ) on Friday August 08, 2008 @09:47AM (#24524111)
    Just start a Ko at the beginning of the game. Let's see if the computer can keep track of that mess of a board!
  • by Anonymous Coward on Friday August 08, 2008 @09:49AM (#24524161)

    On a standard 19x19 board there are 381 spaces, all of which are in play at every move, often including spaces currently occupied but not yet safe from capture.

    There are 361 spaces, not 381.

    Something that sets Go apart is that even scoring a completed game can take a remarkable amount of computation.

    p.s. Go what?

  • Re:ignorance (Score:3, Interesting)

    by odourpreventer ( 898853 ) on Friday August 08, 2008 @09:56AM (#24524255)

    and brute force is definitely not the way to go

    And yet, that's the way they went for this competition. Monte Carlo methods should not be called AI anymore, really. Call me back when they have a neural network beating someone.

  • Bah! (Score:4, Interesting)

    by Daemin ( 232340 ) on Friday August 08, 2008 @10:06AM (#24524399)

    What really frustrates me about this is that it uses a Monte Carlo method.

    As mentioned in the summary, it essentially plays a shitload of random moves out to some cutoff point and tries to determine which moves contribute to a winning end state more often than the others.

    Basically, the only thing the stupid algorithm knows about Go is the simple rules and how to score the board. It knows nothing of strategy, tactics, strong shapes, living shapes, dead shapes, etc. Of course, it may be doing some sophisticated analysis to determine fruitful branches so as not to waste time on bad ones, but that doesn't defeat my point; it just means that with more computing power, you don't have to be so choosy. Knowledge of the complexities of the game is not required for the machine to win with this method, and that makes me call bullshit.

    I'll be excited when a computer beats a 9-dan player without using a probabilistic method to choose a winning path through the search space. I want to see a program that plays like a human, i.e. looks at the board and determines which groups are alive, which are dead, where it is fruitful to play and where it is a waste. In other words, I'll be impressed when someone is able to pin down what makes one shape strong and another weak precisely enough to put it in an algorithm. Otherwise, it's just a cheap probability-crunching trick.

  • by querist ( 97166 ) on Friday August 08, 2008 @10:07AM (#24524413) Homepage

    It seems that many people are missing a fundamental point that makes all of this so interesting: it was only a 9-stone handicap.

    That is the largest handicap you would expect to see in tournament play. In this case, it would be given to a 2nd-kyu player playing against the 8th-dan Master Kim. The handicap is traditionally calculated by subtracting the weaker player's rank from the stronger player's; that gives the number of stones. Taking advantage of the number 0 to make this work, we can use it and negative numbers to extend the scale into the kyu ranks.

    Rankings:

    30th kyu, 29th kyu, ..., 2nd kyu, 1st kyu, 1st dan, 2nd dan, ..., 8th dan, 9th dan, 10th dan.

    It stops at 10th dan, which is the highest. Any kyu rank below 9th is a concession to _really_ amateur players, giving them some form of ranking that meshes with the existing system.

    Now... the dan ranks are simply their numbers. But if we remember the old "number lines" from grade school and place the 0 at 1st kyu, we see that 1st kyu = 0, 2nd kyu = -1, 3rd kyu = -2, etc.

    Handicap:

    Therefore, if you do the math, 8th dan - 2nd kyu = 8 - (-1) = 9, and you arrive at a 9-stone handicap.
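
    The same arithmetic as a tiny sketch (the integer encoding, dan n -> n and kyu n -> 1 - n, is just the number-line trick above; the notation and function names are mine):

```python
def rank_value(rank: str) -> int:
    """Map 'Nd' (dan) or 'Nk' (kyu) onto one number line: 1k = 0, 2k = -1, ..."""
    n = int(rank[:-1])
    return n if rank.endswith("d") else 1 - n

def handicap(stronger: str, weaker: str) -> int:
    """Number of stones the weaker player receives."""
    return rank_value(stronger) - rank_value(weaker)

print(handicap("8d", "2k"))   # 8 - (-1) = 9 stones
```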

    In standard Japanese rules, the handicap stones are placed on the "star" points, the ones marked with black dots on the board. There are only 9.

    The handicap system was designed to even out the match. It was determined that giving extra initial placements in this manner would improve the weaker player's position to a point where the match was considered "even", allowing the weaker player a chance at defeating a stronger one and, more importantly, a chance to play a stronger player without it being a complete slaughter. (It is considered very bad form to refuse a properly calculated handicap, by the way.)

    Thus, with this 9-stone handicap, it should not be THAT much of a surprise that the computer won, except that it means the computer played at the level of at least a 2nd-kyu player. I will defer to Master Kim's assessment of the computer's strength as 2nd or 3rd dan, of course, given Master Kim's own rank; I was speaking strictly from the mathematics of the handicapping system.

    This is significant in that the computer is playing within the range of a decent human player. This is new. This is different. This has never been done before to this extent. Given the search space of Go, this is a significant accomplishment, even with 800 cores.

    What this tells us is that we are better at designing algorithms and at using parallel systems, as well as having better hardware to throw at the problem. This was not simply more hardware running an old algorithm, or it would not have done as well.

  • by ELProphet ( 909179 ) <davidsouther@gmail.com> on Friday August 08, 2008 @10:09AM (#24524455) Homepage

    The real story here is that a person or group of programmers have designed a better algorithm for playing the game.

    Not really. TFS clearly states the programmers used a Monte Carlo [wikipedia.org] algorithm. To compare, the most common chess algorithm is alpha-beta [wikipedia.org] search: the computer takes a board position, enumerates the possible moves, and prunes the tree when it finds a move worse than the best move found so far. The only "clever" part is evaluating the positions (rather difficult in chess). Go, OTOH, has too many possible moves: (19^2)! / ((19^2) - moves)!, or 16,702,719,120 (sixteen billion) possible boards after just 4 moves. So rather than explore the entire tree, they use Monte Carlo randomization to get a good average move instead of the best possible move. Now the "clever" part of the algorithm goes away, and we're left with a massively parallelizable dart board. The more darts, the better the chance of a good shot.
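
    For contrast, a bare-bones negamax version of the alpha-beta search described above (a sketch: `moves`, `apply_move`, and `evaluate` are assumed engine helpers; the cutoff is exactly the "worse than the best found so far" pruning):

```python
def alphabeta(pos, depth, alpha, beta):
    """Negamax alpha-beta: score from the side-to-move's point of view."""
    if depth == 0 or not moves(pos):
        return evaluate(pos)
    for m in moves(pos):
        score = -alphabeta(apply_move(pos, m), depth - 1, -beta, -alpha)
        if score >= beta:
            return beta            # opponent will avoid this line: prune
        alpha = max(alpha, score)  # best score found so far
    return alpha
```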

    It only gives it the potential to be much faster (and consume more power).

    The first point is entirely accurate, and the basis of modernization in legacy systems (what Joel on Software calls a Free Lunch). The second point is not valid. Processors have not gotten physically larger, which means the transistors have gotten smaller. If the transistors are smaller, there is less distance between them, so it takes less energy to move electrons between them, more than making up for the power to run the extra transistors. That does make more heat, which is where the power savings go: into the cooling unit. This is also why power supplies really haven't changed their input or output specifications in the past few decades - your computer uses about the same amount of power it always has, just more efficiently.

    Having a greater number of transistors on a chip does not make a processor "smarter" or capable of doing something a less populated processor can.

    That's the emergent-intelligence debate. Kurzweil of course disagrees strongly, whereas others (Joel on Software) use that argument as a proof by contradiction, advocating good software development rather than waiting for your computer to improve. I'm personally in the second camp, if only because it makes more sense to write good code today than to use Moore's law as a crutch for sloppy programming (see Vista).

  • by Metasquares ( 555685 ) <slashdot.metasquared@com> on Friday August 08, 2008 @10:44AM (#24525081) Homepage

    Actually, you're pretty much correct. Intuition is something a successful AI (and a successful human Go player) will require, and while we can model it on a computer, most people haven't thought of doing so. Most systems are based on symbolic logic, statistics, or reinforcement learning, all of which rely on deductive A->B style rules. You can build an intelligent system on that sort of reasoning, but not ONLY on that sort of reasoning (besides, that's not the way humans normally think either).

    I suspect that what we need is something more akin to "clustering" of concepts, in which retrieval of one concept invokes others that are nearby in "thought-space". The system should then try to merge the clusters of different concepts it thinks of, resulting in the sort of fusion of ideas that characterizes intuition (in other words, the clusters are constantly growing). Since there is such a thing as statistical clustering, that may form a good foundation. Couple it with deductive logic and you should actually get a very powerful system.

    I also suspect that some of the recent manifold learning techniques, particularly those involving kernel PCA, may play a part, as they replicate the concept of abstraction, another component of intuition, fairly well using statistics. Unfortunately, they tend to be computationally intense.

    There are many steps that would need to be involved, none of them trivial, but no one said AI was easy (a toy sketch follows the list):

    1. Sense data.
    2. Collect that data in a manageable form (categorize it using an ontology, maybe?)
    3. Retrieve the x most recently accessed clusters pertaining to other properties of the concept you are reasoning about, as well as the cluster corresponding to the property being reasoned about itself (remembering everything is intractable, so the agent will primarily consider what it has been "mulling over" recently). For example, if we are trying to figure out whether a strawberry is a fruit, we would need to pull in clusters corresponding to "red things" and "seeded things" as well as the cluster corresponding to "fruits".
    4. Once a decision is made, grow the clusters. For example, if we decide that strawberries are fruits, we would look at other properties of strawberries and extend the "fruit" cluster to other things that have these properties. We might end up with the nonsymbolic equivalent of "all red objects with seeds are fruit" from doing that.
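
    A toy rendering of steps 3 and 4, with concepts as plain sets (purely illustrative; this is my sketch of the idea, not a real system):

```python
# Toy "concept space": each cluster is the set of things a concept covers.
clusters = {
    "fruit":         {"apple", "banana"},
    "red things":    {"strawberry", "firetruck", "apple"},
    "seeded things": {"strawberry", "sunflower", "apple"},
}

def is_a(item, target, evidence):
    """Step 3: pull in related clusters and decide by overlap with `target`."""
    votes = sum(
        item in clusters[c] and len(clusters[c] & clusters[target]) > 0
        for c in evidence
    )
    decision = votes == len(evidence)
    if decision:
        clusters[target].add(item)   # step 4: grow the target cluster
    return decision

print(is_a("strawberry", "fruit", ["red things", "seeded things"]))  # True
```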

    What I've described is an attempt to model what Jung calls "extroverted intuition" - intuition concerned with external concepts. Attempting to model introverted intuition - intuition concerned with internal models and ideas - is much harder, as it would require clustering the properties of the model itself, forming a "relation between relations" - a way that ideas are connected in the agent's mental model.

    But that's for general AI, which I'm still not completely sure we're ready for anyway. If you just want a stronger Go player, wait just a bit longer and it'll be brute-forced.

  • by JustLikeToSay ( 651328 ) on Friday August 08, 2008 @10:46AM (#24525119)
    When I was a lad ... we had books with titles like "Chess, Computers and Long-Range Planning". The idea was that the lessons learned from developing clever algorithms to help computers play chess would result in computers that could plan - because planning is key to the way people play chess, so "naturally" it would be the key to developing good computer players. However, grunt took the lead role in chess-computer performance improvement - which isn't to say algorithms didn't improve; they did, but not as quickly as hardware. So this long preamble is really to respond to your post with a question: what's the purpose behind the development of chess-playing computers? If it's just to have computers beat humans, that's fine, but if it's about combining the best of human and computer "algorithms", then I'm not sure this is the best route. After all, these programs will get their asses whupped (I believe that's the correct idiom) by the eventual (long way off) emergence of software that does combine the best of both.
  • by ihavnoid ( 749312 ) on Friday August 08, 2008 @10:55AM (#24525265)

    First, I see a lot of people assuming that there now exists a computer program which can beat a human, but it is still far from beating a human in an even game. In Go, giving a nine-stone handicap is somewhat equivalent to playing chess without a queen (or maybe even worse).

    The reason computers aren't good at Go is not only the vast number of possible moves. Another serious reason is that a good or bad move isn't immediately evident and in many cases can be judged only after about 100 moves. Good luck reading 100 moves ahead with a computer. Human players are very good at making educated guesses.

    Additionally, in chess one small mistake can be disastrous. In Go, however, small mistakes simply add up, and you simply lose points. In other words, Go is generally forgiving of small mistakes, which gives an additional advantage to human players. (Of course, catastrophic mistakes are equally devastating in Go games too, but small mistakes are generally less serious, and there usually exists some way to do damage control.) Go is a strategic game - you can lose local battles but still win the war. In chess, it isn't so easy to win the game once you lose a local battle.

  • Re:Endgame Databases (Score:3, Interesting)

    by bcwright ( 871193 ) on Friday August 08, 2008 @10:56AM (#24525299)

    This is indeed an advantage that chess enjoys; however, it's more of a second-order effect. In chess the endgame database doesn't really come into play in a significant way until toward the very end of the game, since for the first part of the game the terminal nodes can be adequately evaluated without it, even if they are "endgame" nodes from the point of view of the remaining material. Even during the endgame phase, the databases often don't come into play in a significant way, because they only cover the simplest cases. Not that such databases aren't worth having: when a situation does arise where those cases are significant, having access to the database can be decisive.

    However, as a game of Go progresses, it should be possible to build up a library of possible outcomes for that specific game, especially with a highly parallel system such as this one. This would be most useful if you had some way of recognizing "similar enough" positions in the database, though it would still have some value during the Go endgame if it only recognized exact matches. (As you note, a Go endgame is nothing like a chess endgame, so you can't build up a single database that will work for all Go games.)

  • Analysis vs search (Score:4, Interesting)

    by Kjella ( 173770 ) on Friday August 08, 2008 @11:00AM (#24525353) Homepage

    It basically comes down to whether you're analyzing the position or searching the position. Ask a chess player why he made some move, and the answer is not "because I went through all the possible moves and materially this was the best"; it's usually concepts like creating a good pawn structure, developing pieces, pinning down the opponent, threatening key areas of the board, setting up a knight fork or a mate threat, and so on. As in, "I don't know exactly where it's going, but I think my position now is better than the one I had." Computer chess programs don't do any of that; they just blindly test moves, measuring material. Then it's all about giving the computer too many possibilities: if by any chance we evolve enough computing power to beat a human on a 19x19 board, it'd be trivial to make, say, a 29x29 board, and the human would play almost the same while the computer's skill would drop horribly. All quantity and no understanding of why any particular move makes sense over any other.

  • by fugue ( 4373 ) on Friday August 08, 2008 @11:03AM (#24525435) Homepage

    A lot of guesses about the equivalent FLOPS of a human brain have centered around naive counting of cells and comparing that with the rather slow switching speed (about 10Hz IIRC). Some estimates came out at about 1 TeraFLOPS but that seems ridiculously small in light of what humans can do that computers still struggle with.

    If you want to simulate a neuron down to the quantum level with transistors, it will take quite a few. Is this necessary? Nobody knows. Is it fundamental to the computation taking place that a neuron is a leaky integrator, or that the synaptic gap transmits signals chemically rather than electrically? Is a signal encoded by individual spike timings, by rate, or by some amalgam? Sure, it takes many transistors to simulate all of those things, but the underlying computation may be much simpler. For sure, something like a center-surround cell in the retina uses a whole lot of neurons as inputs and produces a result that a pretty modest pocket calculator could embarrass.

    The brain is amazing. But we do not know how computation takes place in there (or we'd have built something at least as good by now). We don't even have a definition of what constitutes "how the brain behaves": if you draw your black box around a single neuron, you define "how the brain works" one way, but if your black box sits outside the perception->action loop of perceiving a Necker cube, there's a completely different set of behaviours to emulate. And if the black box surrounds a whole man, well, we all know how simple they are. Women are a different matter... ;)

    Remember Hari Seldon!

  • MoGo Algorithm (Score:3, Interesting)

    by emeraldemon ( 1167599 ) on Friday August 08, 2008 @11:10AM (#24525599)
    For anyone interested, MoGo was the PhD thesis of Sylvain Gelly, and that thesis, which describes MoGo, is available here: http://www.lri.fr/~gelly/ [www.lri.fr] As was said before, alpha-beta search is the most common strategy in chess, but my understanding is that it doesn't parallelize very well. Monte Carlo has been around for a long time, but it had never really succeeded at Go. Gelly's main contribution was to borrow a successful solution to the multi-armed bandit problem, an algorithm called UCB, and apply it to the search tree. Initial values are determined through Monte Carlo-style search, and then for each branch the algorithm estimates the upper confidence bound on the reward of each move and preferentially searches the parts of the tree that seem to have good potential. This made it good, but still not quite as good as GNU Go, so he used offline reinforcement learning to build a quick pattern-matching-style evaluation to aid UCT. There are some other tweaks to improve play style mentioned in the thesis. Anyway, it's worth a look if you're interested, and especially if you feel like MoGo has an obvious or boring algorithm.
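
    The UCB selection rule described above, in sketch form (a minimal UCB1 over the children of one tree node; the structure and constant are illustrative, not MoGo's actual code):

```python
import math

def ucb1_select(children, total_visits, c=1.4):
    """Pick the child maximizing win rate plus an exploration bonus.

    `children` maps move -> (wins, visits); unvisited moves go first."""
    def score(stats):
        wins, visits = stats
        if visits == 0:
            return float("inf")   # try every move at least once
        return wins / visits + c * math.sqrt(math.log(total_visits) / visits)
    return max(children, key=lambda m: score(children[m]))

# UCT applies this rule recursively down the tree, runs a random playout
# from the leaf it reaches, and backs the win/visit counts up the path.
```
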
  • by orclevegam ( 940336 ) on Friday August 08, 2008 @11:35AM (#24526127) Journal
    Two things. First, as mentioned in a previous post, the biggest difference between a human brain and a computer is that the human brain is inherently massively parallel, whereas the computer is inherently serial (though we can fake parallelism with multiple CPUs). You can always put more cores in a computer to increase parallel processing, but then the programming complexity goes up massively, and we're still getting the hang of how to properly utilize parallel processing (as opposed to old-fashioned serial processing, which has been studied to death and rests on pretty strong mathematical foundations at this point).

    Secondly, the human brain, as near as we have been able to determine, is at its core a pattern-matching and extrapolation system. We're naturally good at picking out patterns from the chaos (sometimes too good, which is when we see patterns that don't really exist) and at making, shall we call them, educated guesses based on those patterns. For computers we have several good algorithms for finding patterns in particular domains, but we don't have generalized ones that work on anything you throw at them - and even that's barking up the wrong tree, so to speak. The reason humans must rely on pattern matching and extrapolation is that even though our brains are massively parallel, they're also massively sloppy and very limited in computing power. Quite simply, we lack the processing capability to do a proper in-depth analysis of most situations, so we create a simplified abstraction and work from that. A computer, on the other hand, has massive computing power and is incredibly exact, but is somewhat lacking on the parallel-processing front, so it makes perfect sense to take a Monte Carlo approach under those conditions.

    Remember, this isn't a true AI we're talking about. If that were the goal, then yes, a pattern-matching, abstraction- and extrapolation-based system closer to a human brain would be appropriate; but what we have here is machine learning within a specialized domain. The boundaries of the system are well defined, even if the content is not, and as such taking a generic approach is wasteful.
  • by Anonymous Coward on Friday August 08, 2008 @11:42AM (#24526239)

    1) You assume that the number of states of a neuron is finite. Neurons are analog and not digital, so this is a false assumption.

    2) Related to the above, neurons are continually exchanging chemicals through their walls. The level of excitation of a neuron is not a single number, but a series of traveling waves. These depend on its inputs and also the chemical makeup in the local area around the axon. Again, infinite variation is quite possible.

    3) No neurons are irrelevant to the workings of the brain. Every neuron lost causes minute, possibly undetectable changes in thought patterns and personality. This has little to do with "getting significantly more stupid." For more detailed information, see the work done by Karl Pribram and the holonomic brain theory [wikipedia.org].

    Now, being able to represent an infinite number of states does not make the brain infinitely complex. However, it is likely impossible to model even somewhat accurately in a digital medium, so the brain/computer analogy is wrong. The original post you replied to made an argument from ignorance, and I'm glad you called him on it, but his conclusion is correct.

  • I'm not impressed (Score:3, Interesting)

    by Oidhche ( 1244906 ) on Friday August 08, 2008 @11:42AM (#24526251)

    This result says nothing about the program's 'intelligence'. It doesn't even say much about its Go proficiency. Although I'm sure the algorithms used are very sophisticated, the program won only by virtue of the sheer computing power of the hardware it was running on. Using an 800-core supercomputer, it managed to beat an 8-dan pro in a 9-stone handicap game. Perhaps using an 8,000-core supercomputer it would manage to beat him in an even game. Or, with 80,000 cores, beat him while giving him 9 stones. But the program is still as dumb running on 80,000 cores as it is running on 800 cores, or on one core for that matter.

    The human brain is not such an amazing tool because of cunning algorithms or huge computing power, but because of the sheer size of the network of simple elements it consists of. There's a crucial difference in how algorithms and neurons scale. If you take some algorithm and run it on a computer with vast memory and computing power, it's still the same dumb algorithm, no matter how many gigabytes and teraflops you throw at it. But if you take a neuron (either a biological one or a computer simulation) and start connecting it to other neurons, then the larger the network, the more it is capable of. At some point you might even say it's intelligent :P.

    I think it's highly doubtful that AI research will ever come up with something better than simply simulating a neural network. Intelligence is not something that can be contained within a single algorithm, no matter how ingenious it is or how powerful the hardware it runs on. It's an emergent property of huge networks of simple elements. An algorithm can at best mimic some aspects of intelligence, but there's never any real thought behind it.

  • by ChrisMaple ( 607946 ) on Friday August 08, 2008 @12:13PM (#24526885)

    Using appropriate CMOS technology, scaling speed (and voltage) down to brain rates would cut power dissipation by a factor of at least 10^8. Melting is not a problem.

    The brain's advantage is that it is reconfigurable in a way that semiconductors are not. Over years, a Go player will rebuild a portion of his brain into a partially optimized machine.

  • by MarkWatson ( 189759 ) on Friday August 08, 2008 @12:19PM (#24526993) Homepage

    Thanks for the correction!

    Must clean my glasses :-)

    Interesting about the 9 stone handicap though. In the 1970s I played the women's world champion and she gave me a 9 stone handicap. Even though I am not a very strong player, I was still amazed that she beat me by a small margin. Hopefully I have improved at least a little since then :-)

  • by Yxven ( 1100075 ) on Friday August 08, 2008 @12:40PM (#24527455)

    MoGo is processing a few thousand moves per second. It doesn't care whether it's your second or its second, and the more moves it is able to calculate, the better its move should be.

    In other words, the pro could have taken more time, but if he had, he'd have been facing a stronger opponent.

    I'm not familiar enough with MoGo to comment on the trade-off, but the pro not taking his time doesn't necessarily mean he didn't take the match seriously.

  • by tknd ( 979052 ) on Friday August 08, 2008 @01:31PM (#24528371)

    I think you are approaching the problem the wrong way. I can't, ever, calculate as fast as a computer can. Forget a modern computer; I can't calculate as fast as the oldest computers out there (and I'm betting you can't either). I have no doubt about your inability to beat computers at raw calculation speed too (and if you somehow could, more processors would mean you soon couldn't).

    The brain (and even other animal brains) is capable of doing and learning specific operations. That means two things. First, much of your brain isn't devoted to your consciousness or what "you" are thinking; a lot of it is devoted to operations that you need to survive and operate in the world. For example, let's take vision. Your eyeballs are pretty stupid; they just channel the light down to receptors at the back of the eye. All of the intelligence starts right after the receptor cells detect the light. All of that information is passed to smarter cells that do a shit-ton of processing for you in real time. One of these operations in your visual pathway is edge detection. That is, while you might not be aware of it, your visual pathway is actually parsing the images you see and detecting edges. There are various "optical illusion" images on the net that play with the weaknesses in how human vision works.
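
    The edge detection being described is often compared to convolution filtering; here's a minimal Sobel sketch with NumPy (the image is a made-up array, and real retinal processing is of course far messier):

```python
import numpy as np

def sobel_edges(img):
    """Estimate horizontal/vertical intensity gradients, then their magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # responds to vertical edges
    ky = kx.T                                             # responds to horizontal edges
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            out[y, x] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return out

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # a vertical step edge
print(sobel_edges(img).round(1))           # strong response along the edge
```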

    Another good example with vision is how you are able to perceive depth in a 2D picture. You can say "well, you have two eyes, and if you triangulate blah blah" - well, close one eye and see if you can still reach out and pick up your coffee mug. The answer is that your visual pathway is doing some processing for you to determine the depth of objects in a 2D image. That way, when you want to pick up your mug, you don't stop and say "well, I have to go 30 cm forward, 7 cm down"; you just do it. Patients have been found who have had this portion of their visual pathway destroyed and are unable to correctly reach out and grab things or tell how far away things are. They are still able to identify what kind of object they are seeing, but when asked to reach out and pick up the object, they consistently fail and keep guessing until one of their other senses (touch) enables them to.

    But all of that functionality can be emulated if you can just figure out an algorithm and implement it. The second area, where computers haven't come close to brains, is learning. A good example of this: take a perfectly fine monkey (2 arms, 2 legs), identify which part of his brain is devoted to his left-arm motor functions (controlling his left arm), and then amputate his left arm or disable it forever. After enough time has passed, the section of the monkey's brain devoted to "left-arm motor functions" will be taken over and reused for an adjacent function, like "left-leg motor functions". In computers this is basically the equivalent of rewiring your processor to do something different on the fly. To me, this is where all of the "genius" is in brains. The ability of neurons to rewire themselves on the fly and get reused for other functions is amazing. And this applies to all types of learning: learning new languages (with no books or teachers), learning to play a specific game well, learning handwriting or touch typing. All of these things your brain is capable of learning without you thinking about it.

"Experience has proved that some people indeed know everything." -- Russell Baker

Working...