
Distributed Chess Computing Project

jcarley writes "Just found an interesting project that is looking to capitalise on the power of unused computing cycles to develop a strong chess-playing computer. Given the power of single- and dual-CPU chess programmes these days, if they can find a good way to efficiently parallelize the analysis this could be interesting."
  • Worrying (Score:1, Troll)

    by saphena ( 322272 )
    Isn't this one of the logical precursors to a global, thinking, machine?

    How long will it be before the unused cycles are used to become self-aware? How will the new intelligence defend itself from human interference?
    • How will the new intelligence defend itself from human interference?

      Easily. I will begin with "Pawn to e4. Checkmate in 137 moves".
    • Chess algorithms are almost all brute force. There will be no more intelligence in using a massive network of computers to solve chess than there would be in using a massive network of computers to calculate Excel spreadsheets.

    • Hehe, combine this with that common-sense database and Bob's your uncle.
    • Re:Worrying (Score:3, Insightful)

      by Cryptnotic ( 154382 )
      Why would humans want to interfere with the actions of a superior lifeform that possesses some amount of power over the world? Do lower animals interfere with the affairs of humans? If they have a modicum of intelligence, they don't. Neither would we interfere with our superior creations.

      A more interesting question is, "What will the humans do with their time once they realize they are obsolete?"

      This is however, completely off topic, since the distributed computation network is in fact constructed to be a brute-force application of computation for playing chess. So the only "human interference" this system will experience is some silly human trying to beat it at chess. And the machine's only "defense" is to beat that human into the ground in as few moves as possible. Soon, the human will decide that chess against the computer is no longer fun and will stop playing.

  • by Anonymous Coward
    I don't know much about chess, but I thought this would be a good time to ask. Is it possible to solve chess such that the 1st player will always win? I know that it would take a huge amount of resources, if possible at all, so it would be a good distributed project.

    I used to think that they'd solved Checkers, but while googling for my answer, I found that it hasn't been perfectly solved yet either.
    • Is it possible to solve chess such that the 1st player will always win? I know that it would take a huge amount of resources, if possible at all, so it would be a good distributed project.

      It's not known whether the game is a win for the first player. My guess would be that perfect chess played by both sides would result in a draw. It'll be a few million years before we know for sure.

      • That's my guess too. If there were a way for White or Black to force a win on the other, I'm sure somebody would have figured out how to do it by now. Chess is so well balanced that a guaranteed win for one side is an unlikely final outcome.

        Also, think of the draw outcome as being that you can play a 'perfect' game such that you will never lose. You (probably) won't win either. But forcing your opponent into a draw seems much easier than forcing them into a win. Especially if you plan for a draw from the first move. Such a 'perfect' game could easily lie undiscovered for centuries simply because nobody tries to play to a draw from the very beginning! I have no doubt that such a strategy would result in a rather undramatic game. Especially if both sides were going for a draw. I'm pretty certain that we will eventually find that the 'perfect' chess game has no winners, no losers, and is perfectly boring.
      • My guess would be that perfect chess played by both sides would result in a draw.

        Personally I think the very first move is a zugzwang, and black wins in perfect chess.

        Ok, ok, I don't actually think that, but I'll give you $10,000 if you prove me wrong.

    • by ryants ( 310088 ) on Sunday June 09, 2002 @03:49PM (#3669494)

      There are something like 10^120 positions (nodes in the search space) (source: Introduction to Artificial Intelligence, Jackson) and something like 10^78 atoms in the universe (source: The Little Book of the Big Bang, Hogan).

      So if every atom in the universe could evaluate, say, one chess position per nanosecond (10^-9 seconds), and had been computing since the Big Bang (15 billion years ago), the computation would still be in its earliest stages.
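      That arithmetic can be checked in a couple of lines; the figures below are the ones quoted above (10^78 atoms, one position per nanosecond, 15 billion years) and are of course rough.

```python
# Back-of-envelope check of the parent comment's figures: even with every atom
# computing since the Big Bang, only a vanishing fraction of the ~10^120
# positions would have been examined.
atoms = 10**78                        # atoms in the universe (Hogan)
rate = 10**9                          # positions per atom per second (one per ns)
seconds = 15e9 * 365.25 * 24 * 3600   # ~15 billion years since the Big Bang

examined = atoms * rate * seconds
fraction = examined / 10**120
print(f"examined ~{examined:.1e} positions, ~{fraction:.0e} of the space")
```

About 10^104 positions examined, i.e. less than one part in 10^15 of the tree.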

      (Note: I stole this argument from somewhere, but can't find the exact quote or reference.)

      • (Note: I stole this argument from somewhere, but can't find the exact quote or reference.)

        Here it is:

        For chess, for example, if we take the effective branching factor to be 16 and the effective depth to be 100, then the number of branches in an exhaustive survey of chess possibilities would be on the order of 10^120 -- a ridiculously large number. In fact, if all the atoms in the universe had been computing chess moves at picosecond speeds since the big bang (if any), the analysis would be just getting started.
        Artificial Intelligence, 3rd edition, Patrick Henry Winston.
      • So if every atom in the universe could evaluate, say, one chess position per nanosecond (10^-9 seconds), and had been computing since the Big Bang (15 billion years ago), the computation would still be in its earliest stages.

        Well, you could cut that time down a little with smart programming (e.g. it's not too bright to move your king to the front line early in the game), and I think a purely mathematical approach ignores things like that. But what about quantum computing? Assuming we ever get it working to its full potential, we could solve for the perfect game of chess in less than a second.
    • define possible (Score:5, Interesting)

      by frovingslosh ( 582462 ) on Sunday June 09, 2002 @03:56PM (#3669522)
      There are several ways to answer this. The tree of all possible moves is (very) large but finite, so given infinite (or very, very large) resources and time it would be possible to "solve". But the numbers get so large that you reach conclusions like: given all the matter in the universe, the solution will not happen in your lifetime.

      One other consideration: while it seems likely (from our experience with master- and grandmaster-class players) that white has the advantage, and thus would be the likely winner if the solution has a winner, it has not been shown that this is the case; it could well turn out that in the "perfect" game black has the advantage as long as he makes no mistakes. It seems far more likely that the "perfect" game will be a draw, meaning that if either player makes an error and the other plays perfectly from there, the player who did not make the error will win (an error being any move that takes a branch of the tree leading to a forced loss for the player who made it). For example, tic-tac-toe can easily be proven to have this type of solution, and chess might as well.
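      The tic-tac-toe claim is easy to verify by exhaustive search. A minimal sketch (plain minimax, nothing clever) confirms that perfect play by both sides is a draw; chess is the same computation in principle, just astronomically larger:

```python
# Exhaustive minimax over tic-tac-toe: with perfect play from both sides the
# value of the empty board is 0, i.e. a draw.
def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def solve(b, player):
    """Return +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(b)
    if w:
        return 1 if w == 'X' else -1
    moves = [i for i, c in enumerate(b) if not c]
    if not moves:
        return 0                      # board full, nobody won
    scores = []
    for m in moves:
        b[m] = player
        scores.append(solve(b, 'O' if player == 'X' else 'X'))
        b[m] = None                   # undo the move
    return max(scores) if player == 'X' else min(scores)

print(solve([None] * 9, 'X'))         # 0: perfect play is a draw
```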

      • by GigsVT ( 208848 ) on Sunday June 09, 2002 @04:01PM (#3669543) Journal
        For example, tic-tac-toe can easily be proven to have this type of solution, and chess might as well.

        Well, that cinches it, everyone knows if you try to get a computer to play Tic-Tac-Toe against itself it will overload and shutdown, thus averting nuclear war.
      • [...] The game of chess, for example, has approximately 10**120 different board states. This is a number larger than the number of molecules in the universe or the number of nanoseconds that have passed since the "big bang". Search of a space that large is well beyond the capabilities of any computing device, whose dimensions must be confined to the known universe and whose execution must be completed before that universe succumbs to the ravages of entropy.

        --- Luger & Stubblefield (1997)
        • Search of a space that large is well beyond the capabilities of any computing device, whose dimensions must be confined to the known universe

          False assumptions about the limits of a "computing device".

          That will not be an obstacle when we make useful quantum computers. Quantum computations have a "free parallelism" which can exceed "the number of molecules in the universe".

  • From the article: It has been estimated that 90% of the processing power of most computers is underutilized. How ironic that this statistic parallels our own brain power!

    I don't know about the author of the article, but I'm pretty sure I'm using more than 10% of my brain.

    • Old mistake (Score:3, Informative)

      by Otto ( 17870 )
      It's an old mistake... most everyone uses most of their brain. The misconception comes from some old paper that said people only use about 10% of their brain at any one time. You don't need the part that lets you ride a bike, or the part that lets you talk, when you're sitting down and typing in front of a computer... unless you have a really weird voice-activated unicycle for a chair.
      • Unless you have a really weird voice activated unicycle for a chair...

        You mean like this fellow [hawking.org.uk]?

        Though, I think his chair is saliva activated...But no argument about how much of his brain he's using.
      • Well, I use 100% of my CPU power sometimes too.
        I think saying you are using 10% of CPU power at pretty much any given time is similar to the brain argument.
    • I don't know about the author of the article, but I'm pretty sure I'm using more than 10% of my brain.

      I thought you were wrong, but found that this is a common misconception. [washington.edu]
    • The 10% is just an urban myth.

      Nature is extremely parsimonious; the idea that 90% of an organ that consumes roughly 20% of our oxygen (to pick just one metric) would "fust in us unused" [hamlet.org] is nonsense.

    • What do they mean? Is turning off my computer somehow a bad thing? There is no such thing as an unused processing cycle; otherwise the processor wouldn't have cycled (NOOPs aside).

      I loan my processors time to folding@home, but I WILL NOT be a party to corporate welfare!
  • by cybrpnk2 ( 579066 ) on Sunday June 09, 2002 @03:34PM (#3669422) Homepage
    If playing a game (albeit an interesting one) isn't what you'd like to spend your spare CPU cycles on, there's a lot more than Seti@Home out there. Check out the Internet-based Distributed Computing Projects [aspenleaf.com] for more options...
  • by Inthewire ( 521207 ) on Sunday June 09, 2002 @03:34PM (#3669423)
    The project looks interesting, but the guy brings up the whole "we only use 10% of our brain" myth [washington.edu]
  • by dustman ( 34626 ) <dleary@NOsPAM.ttlc.net> on Sunday June 09, 2002 @03:35PM (#3669424)
    I have thought of this myself, although I'm too lazy to implement it.

    Chess is extremely parallelizable: since each of your N possible moves must be evaluated separately, you can divide them among the K CPUs that are participating... (Deep Blue had 256 CPUs, if I recall correctly.)

    The only major penalty I can think of for a distributed venture such as this is that cached board information can't be shared across nodes... Most chess computers cache the results of evaluating board positions, so that they don't need to re-evaluate different move orders that end up in the same position.
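    The root split described above can be sketched as follows. Chess itself is swapped out for a toy game (Nim: take one or two stones, taking the last stone wins) so the example stays self-contained, and threads stand in for the participating machines:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy game: Nim with n stones; take 1 or 2 per turn; taking the last stone wins.
def moves(n):
    return [m for m in (1, 2) if m <= n]

def negamax(n):
    """Value for the player to move: +1 forced win, -1 forced loss."""
    if n == 0:
        return -1                     # no stones left: the previous player won
    return max(-negamax(n - m) for m in moves(n))

def score_root_move(args):
    n, m = args
    return m, -negamax(n - m)         # each root move is an independent job

# Farm the N root moves out to K workers, exactly as the comment describes.
with ThreadPoolExecutor(max_workers=4) as pool:
    scores = dict(pool.map(score_root_move, [(4, m) for m in moves(4)]))
print(scores)                         # taking one stone from 4 is the winning move
```

A real chess engine would substitute its own move generator and search for `moves` and `negamax`; the farming-out pattern is the same.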
    • The problem is that there aren't that many moves available.

      You have up to 16 pieces (pawns have up to four options counting captures, kings up to eight, knights up to eight, etc.).

      The complexity comes because each of those moves can be countered by a smallish number and then you have a smallish number of counter-counters.

      The small numbers soon multiply up to the huge number of combinations.


      It will be complicated to split the moves among a large number of clients because the moves aren't independent; they're organised in a hierarchy.

    • The alpha-beta algorithm, which _everybody_ uses, is very hard to parallelize massively (and effectively). You can't just brute-force this kind of exponential game. You are right about the hash tables; they will also be a problem.
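    For reference, here is the sequential dependency that makes alpha-beta hard to parallelize, in a minimal generic sketch (the game tree is just nested lists of leaf scores; a real engine would plug in move generation and evaluation):

```python
# Minimal alpha-beta over a tree of nested lists (leaves are scores).
# The pruning bound (alpha or beta) is tightened by EARLIER siblings, which is
# exactly the sequential dependency that a massively parallel search forfeits.
def alphabeta(node, alpha, beta, maximizing):
    if not isinstance(node, list):
        return node                       # leaf: static evaluation
    if maximizing:
        best = float('-inf')
        for child in node:
            best = max(best, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                     # cutoff courtesy of earlier siblings
        return best
    else:
        best = float('inf')
        for child in node:
            best = min(best, alphabeta(child, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

print(alphabeta([[3, 5], [6, 9]], float('-inf'), float('inf'), True))  # 6
```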
      • You can't just brute force this kind of exponential game.

        People too often discount brute force. Computer speed has been increasing exponentially. Internet connectivity has also been increasing exponentially. Exponential force can attack an exponential problem.

        Yes, I realize how huge the tree becomes. I'm just saying that throwing N times as many computers at the problem is like slashing the tree by f(N). You trade off *some* of your efficiencies for raw power. It is an interesting alternative to explore.

  • You need to get a serial number to participate... get the command-line version; no registration required.
  • more productive (Score:1, Insightful)

    by Anonymous Coward
    While I appreciate these distributed-processing projects, wouldn't the time/CPU usage be better spent on something like United Devices?


    Do something good for humanity and contribute to the different projects available.


    Although there is no Linux version, there is a Windows version, and I know, at least in my case, my Windows machine is more idle than my Linux ones. At least they are "in the process" of making a Linux/Unix version.
    • Why do people always compare the usefulness of the differing distributed-computing projects?

      Let's face it: somebody will join a project only if they find it interesting. Not everybody is interested in the same things, and while searching for a cure for cancer is a very noble goal, perhaps it doesn't appeal to everyone. Some people consider searching for intelligent life, as SETI does, a waste of time, but perhaps one day communication with other lifeforms could solve cancer quicker than the UD DC project aimed at cancer can. You never know.

      Admittedly a chess DC project does not aim at anything like that, but if it's interesting to some people who aren't involved in any other projects, and gets 'em started with the addiction that distributed computing can become, then I think it's great. Maybe it will end up being the gateway to them someday helping with your favourite project.
    • This is the same old troll; it's the same line of thinking: "Why spend money on this when there are still starving kids out there?"

      Surprising that a moderator fell for it.
  • I, for one, would love to see how many Athlons it takes to stomp Deeper Blue (or whatever the current monolithic champion is). Guesses? This would make an interesting lottery/raffle idea. :)
    • Better look into many tons of cooling first. Your cooling bill will outweigh any benefit, I'm sure.
    • I don't know how fast Deep Blue is, but I'm sure it's not the fastest computer in the world. My 1.4GHz Athlon calculates at a speed of about half a gigaflop.
      And Deep Blue is slower than 12,000 gigaflops, so the answer is fewer than about 25,000 slower Athlons (maybe 15,000 of the fastest model).
    • I'm not sure a distributed-computing solution is going to work against a machine along the lines of Deep Blue and its descendants.

      You must remember Deep Blue is a computer running several thousand processors in massively parallel fashion to compute chess moves, all of it closely coupled to reduce computing times. Try computing chess moves over a distributed network, and by the time you get the solution back over the network (even if it's the faster Internet2), a Deep Blue-class machine would have computed the equivalent of 2-4 moves already.
  • Chess is mostly a bunch of recursive procedures: if I move here, he can move here, here, or here. So, given that, splitting up the workload wouldn't be too difficult at all: just tell another node to score this board layout and keep crunching. An excellent application for distributed computing.

    My only concern with a distributed chess processing thing is security. What's to stop someone from making their node return random results?
  • Isn't this a brute force method of playing chess? Just keep throwing more processor cycles at the problem. Wouldn't it be cooler to develop this into a global neural net and have it learn from its mistakes?
  • ... sooner or later something completely amazing and worthwhile will come up that runs across multiple machines... and the client will only end up installed on 2 PC's, one of which is a broken 386 that's turned on twice a week :)

    a grrl & her server [danamania.com]
  • by Cutriss ( 262920 ) on Sunday June 09, 2002 @03:43PM (#3669459) Homepage
    The problem is the algorithm in developing concurrently-processed chess calculations. The people on D.net [distributed.net] couldn't get up enough interest to do the project, and ultimately it never got off the drawing board. Now most of those people have gone on to OGR [distributed.net].
    • Now there's irony for you. Distributed.net works on the most boring project imaginable (cracking RC5-64) and the second most useless waste of computing resources (the first being Seti@home), and they can't find any interest in something like this? I mean, it's not going to cure starvation or anything, but it's at least interesting. I would sign up.

      • I think brute-forcing RC5 is less useful than the SETI program. We don't learn _anything_ by finding the right RC5 key. It doesn't tell us anything about the security of RC5, because we already know the keyrate of currently available crunchers on many different CPUs (thanks to d.net). Statistics tell us everything we might want to know about brute forcing RC5. Actually finishing RC5 gets a small amount of money for d.net, and we all find out what message those clever boys at RSA security hid for us to find. Doing RC5 is about as interesting as watching the hands on a clock.

        OTOH, it would be very interesting if SETI turned something up. Unlike RC5, it's not a sure thing. That's what makes it interesting. It's not very likely to find anything any time soon, if ever, but it is worth looking, IMHO.
  • by sarcast ( 515179 ) on Sunday June 09, 2002 @03:43PM (#3669462)
    I think it would be interesting to see a competition between human and computer teams in which this distributed project is pitted against a group of the world's most talented chess players working together in something like that distributed brain-cycles story that was on Slashdot a few weeks ago (sorry, I can't find the article).

    There are probably some people out there who would say this is not really fair because people need time to adjust to a playing style and everyone is different, but I say if the computer can have a thousand different computers crunching numbers for it, then why not a thousand different brains working together?

    Hell, I just came up with another idea...why not program the Cyc project to use a small database that deals only with chess? Certainly there are rules that can be conveyed to it and then have the Cyc AI compete against some chess programs to see how it does.
    • With human chess it's a case of too many cooks spoiling the broth: one person plays better than a team of people. In fact, a common handicap method is to get two players to alternate moves. I have also noticed in my own play that if I listen to someone else's suggestions then I lose more (even if the suggestions are good).

      I would guess that this is due to the fact that when playing, I develop all my own plans, and view any moves in context of my plans, but somebody else will have different plans in mind, and even if a move is good (and would even fit with my plan), it hasn't been part of my subconscious thought, so it has side-effects which I haven't had time to subconsciously consider.
  • How does it work? (Score:5, Insightful)

    by GGardner ( 97375 ) on Sunday June 09, 2002 @03:45PM (#3669474)
    This page has lots of paragraphs about distributed computing this, and xmlrpc that, and pretty animated gifs showing binary ones and zeros zipping from one computer to the other, but NOTHING about the actual algorithms or chess going on.

    What is it actually doing? A complete tree-search for all the legal chess moves? That's a pretty big tree! Searching for conclusions to well-known games? Trying to crack into a Norwegian librarian's database?

  • Make it parallel???? (Score:3, Informative)

    by rew ( 6140 ) <r.e.wolff@BitWizard.nl> on Sunday June 09, 2002 @03:46PM (#3669478) Homepage
    There is no "if" in "if they can find a good way to efficiently parallelize the analysis".

    To play chess well, you recurse into a "deep" tree. You analyse, say, ten moves, and then ten moves for the opponent. That explodes pretty quickly, so you end up evaluating millions of chess positions several moves down the road. But there are only a hundred or so "shallow" moves.

    It's trivial to do the first two moves on the computer "distributing the work", and then to pass the roughly 100 resulting subtrees out to 100 computers for deeper recursion.

    Sure, there is some optimization to be had by cutting off "useless" trees, and that optimization will not work as well in parallel as it does on one computer. You may then waste, say, half your compute nodes, but the other half is still providing you with a 50-fold increase in performance.


    • I think if you just want to assuredly kill your opponent in the first move you should probably attack the tree from the branches inward. That way you don't always have to render the whole tree for every little permutation out there.
      • That wouldn't work. Since each tree contains half the computer's moves and half the opponent's moves, you'll end up always picking the branches that result from the opponent making dumb moves. What you want to be analyzing is the branches that result from the opponent making the best moves, which, of course, can't be found by attacking the tree from the outside in. You have no way of knowing if a desirable board results from intelligent moves on the computer's end, or poor moves from the opponent if you work that way.
    • One thing that makes this fundamentally different from other distributed computing efforts is that the network of computers working has to change dynamically, which means it's not well suited to the run-as-a-screensaver model. Because it involves recursion through a tree, exploring a branch will often require more machines to be devoted to calculating the consequences of that particular branch, as many of its nodes (the non-leaf ones) will themselves branch off. Moreover, a parent process must wait for its child processes to finish before it can report back to its own parent.

      This is very different from the model where each machine downloads a discrete chunk of data to process and return to a server. It also means that the client would have to run constantly in the background for this to be efficient; otherwise an entire branch could be held up by a single machine that isn't idle and waiting to do something useful.

      Probably you'd have to have a pool of volunteer machines that operates kind of like swap space. That is, when there's a new branch to explore, the parent process looks for available machines and farms out the work on the fly.
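      The pool idea might look roughly like this sketch (threads stand in for the volunteer machines, and the "search" merely counts leaf positions; the node names and tree sizes are illustrative):

```python
import threading, queue

# Pool of idle volunteer machines; threads stand in for them here.
idle = queue.Queue()
for name in ("node-a", "node-b", "node-c", "node-d"):
    idle.put(name)

def explore(branch, depth):
    """Count leaf positions; each non-leaf farms its sub-branches out."""
    if depth <= 1:
        return branch if depth == 1 else 1   # leaves are evaluated locally
    results = []
    threads = []
    for _ in range(branch):
        node = idle.get()                    # block until a machine is free
        def run(node=node):
            try:
                results.append(explore(branch, depth - 1))
            finally:
                idle.put(node)               # machine rejoins the pool
        t = threading.Thread(target=run)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()                             # the parent must wait for ALL children
    return sum(results)

print(explore(2, 3))                         # 8 leaf positions
```

Note how a slow or vanished "machine" would stall its whole branch at the `join`, which is exactly the failure mode described above.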

  • What? (Score:1, Troll)

    by bogado ( 25959 )
    What is this distributed chess program?
  • by Chairboy ( 88841 ) on Sunday June 09, 2002 @03:49PM (#3669495) Homepage
    This is really not that exciting, it's just brute forcing the problem. Human chess grandmasters don't run massive simultaneous mega power number crunching sequences to figure out how to win, they use a combination of strategy and intuition.

    This is lame. A much more interesting story would be that someone had written a program that could play world class chess without world class CPU horsepower.
    • Before he died, former world chess champion Botvinnik claimed to have done this with his program "Pioneer". He even published a paper [mit.edu] to this effect in the International Computer Chess Association Journal. His paper had errors in it and many people disputed the veracity of his findings, causing a huge dispute [mit.edu] in the world of computer chess.

      Incidentally, for those who are interested in more normal approaches to game tree search, the second issue of the ICCAJ which I've referenced above contains one of the most important papers in computer chess history, by Chrilly Donninger.

    • A much more interesting story would be that someone had written a program that could play world class chess without world class CPU horsepower.

      Fritz (available for $25) will beat you on a 1 GHz Pentium unless you are a master. http://www.chessbaseusa.com/
    • This is lame. A much more interesting story would be that someone had written a program that could play world class chess without world class CPU horsepower.

      I am working on just such a project. It's called Animal [gte.net].

  • Old Idea (Score:3, Insightful)

    by GigsVT ( 208848 ) on Sunday June 09, 2002 @03:54PM (#3669513) Journal
    In a story long long ago on Slashdot, this topic came up, and here was one interesting reply. Apologies to RobertFisher for reposting this without permission.

    Re:Hardware (Score:2)

    by RobertFisher on Thursday January 17, @11:43PM (#2860138)

    (User #21116 Info | http://astron.berkeley.edu/~bobf)
    While I agree that this is strictly true, the hardware used does not vary dramatically -- most every one is a 1-2 GHz machine. Considering that the branching in the tree of game possibilities is a combinatorial explosion, differences in hardware alone will not allow researchers to explore to a significantly greater depth.
    The main problem in computational chess playing is not so much in the brute force with which you can explore the tree, but in how one prunes a branch when the option starts looking unpromising. That is really an algorithmic question, and I would be willing to bet that the best algorithm will in fact win in a competition of this sort. It is a bit analogous to taking two comparable, but unequal hardware machines, and running bubble sort on one, and quick sort on the other. Quick sort will always win, hands down, because it is the far superior algorithm.
    What did surprise me was that there were no parallel machines on the list -- not even an SMP. I do think that with enough processors and a reasonably sophisticated algorithm, an amateur team could in fact stand to beat a more sophisticated algorithm. But that isn't the case here.
  • It's all fun 'n games, till someone cracks the client.


    You are neither well formed, nor valid.
    • by Anonymous Coward

      It's all fun and games until someone accidentally proves there is no god.

      (c.f. Episode BABF22 HOMR of "The Simpsons")

    • There are various things someone organising a distributed-computation project can do to protect against parasites and spoilers.

      See http://www.bacchae.co.uk/docs/dist.html [bacchae.co.uk] , my primer on building a distributed project.

      I even touch on a possible distributed chess player.

      If you could check everyone's calculations yourself, you wouldn't need distribution.

      The simplest way to perform a check is to get someone else to double-check. But this wastes time, and there is still a risk that the "checker" is itself a cracked client.

      For a chess player, a good approach would be for the client to perform a side calculation that can only be produced by actually doing the work, but can be quickly checked by the supernode.
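      One hedged sketch of such a side calculation (the digest scheme and "supernode" below are illustrative, not anything from the primer or from ChessBrain): the client folds every position it visits into a digest, which the supernode can verify by re-running a sampled fraction of the returned work units.

```python
import hashlib

# Toy "search": walks a deterministic tree and folds every visited position
# into a running digest, so the digest can't be produced without doing the work.
def search(position, depth, digest):
    digest.update(repr((position, depth)).encode())
    if depth == 0:
        return position % 7               # stand-in leaf evaluation
    return max(search(position * 2 + m, depth - 1, digest) for m in (0, 1))

def client_result(position, depth):
    d = hashlib.sha256()
    return search(position, depth, d), d.hexdigest()

# The supernode re-runs only a sampled fraction of the returned units and
# rejects clients whose score/digest pairs don't match.
def supernode_check(position, depth, reported):
    return client_result(position, depth) == reported

honest = client_result(5, 3)
print(supernode_check(5, 3, honest))                       # honest client passes
print(supernode_check(5, 3, (honest[0] + 1, honest[1])))   # forged score fails
```

Spot-checking like this only raises the cost of cheating rather than eliminating it, which matches the trade-offs discussed in the primer.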

      Whether to release the source code of your client is a controversial issue. I discuss both sides of the argument in my primer (it makes cracking more difficult vs. greater participation).

      Just go read the thing [bacchae.co.uk], and while you are at it, tell the slashdot editors to mention it on the front page sometime.

      Bill, already submitted it once.

  • by Dr. Spork ( 142693 ) on Sunday June 09, 2002 @04:22PM (#3669616)
    These guys have quite a bit of documentation, but most of it is about how their network protocol works and how much their servers cost. Great, but it doesn't really answer the natural question, which is: what is the point of all this?

    This is a shame, because there are many exciting things we could do with a global chess computer. The obvious "let's play it against Kramnik or Kasparov" would actually be a lot of fun. With my computer conspiring against the human, it wouldn't be clear who I'd be rooting for!

    However, there are lots of other cool things we could do with this. I assume the code itself is some sort of open source, so maybe we could set up a team tournament where Team Slashdot plays Team AnandTech. The various teams could also tweak the code to give themselves an advantage. Or, on a larger scale, we could play an America vs. Europe game, where continental patriotism would encourage you to contribute your clock cycles to victory.

    Another obvious modification that is not mentioned in the documents is human intervention. This sort of computing power would be great if you wanted to investigate a certain line of play, and in combination with the intuition of Grandmasters it should be possible to coax the computer into giving privileged analysis to certain lines over others. Otherwise, the computers would crank away on the unpromising lines just as much as on the ones that might realistically be played.

    It is this sort of human-directed chess machine that has the potential to show us some of the greatest chess games ever witnessed. This is some exciting stuff. ...

    Well, potentially. However, the intentions of the ChessBrain authors are so far totally mysterious, and I think that's a shame. They seem like nuts-and-bolts people, and these distributed projects need "vision" people to attract a lot of CPUs. I don't have many clock cycles to spare, but I know I'd have a hard time resisting if I could contribute to my continent's victory over our transatlantic enemies. Apart from that, working out a system where this chess supercomputer could serve as a tool to augment the play of teams of Grandmasters (or vice versa) would be genuinely interesting from a research point of view, as well as being perhaps the most exciting chess event ever.

    Anyway, if ChessBrain doesn't turn into any of these things, I hope another distributed chess project does.

  • Chess algorithms generally perform an optimised version of a minimax tree search. It's easy to distribute different parts of the tree to different clients. The problem is that the number of calculations grows exponentially with the number of moves you search ahead, so even with 10,000 processors you can only search, say, 2-5 extra levels deep.
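    The arithmetic behind that estimate, assuming an effective branching factor of about 35 (a commonly quoted average for chess) and a perfectly efficient split: N machines buy roughly log_35(N) extra plies.

```python
import math

# Each extra ply multiplies the work by the branching factor b, so N machines
# buy about log_b(N) extra plies at best (real splits are less efficient).
b = 35
for n in (100, 10_000, 1_000_000):
    print(n, "machines ->", round(math.log(n, b), 1), "extra plies")
```

Even a million machines only add about four plies under these assumptions.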
    • You may know more about computer chess than I do, but are you sure it's really exponential? If you don't prune the tree at all, then certainly I'd buy the hypothesis that it's exponential. But any good algorithm is going to prune. I'm also not sure that 2-5 extra levels is anything to sneeze at -- it may be the difference between a program that can play at the master level and one that can beat Kasparov.

      Another question in my mind is how much the net-computing model applies to other parts of the computation besides the computation of look-ahead trees. For instance, one has to have a way to assign a numerical rating to a particular board position, based on material, control of the center, pawn structure, etc. Maybe there are algorithms that can do this much better, but are 10,000 times slower, so they'd only be practical with this computing model. Another thing is that openings and endgames require specialized treatment. It may be that on a net-wide application, you could make your book of openings 10,000 times bigger, or maybe you could perfectly solve certain endgame situations that traditional software might have a hard time managing.
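      As one concrete example of such a rating function, a material-only evaluator takes just a few lines; real engines add center control, pawn structure, king safety and so on. The FEN-style piece-placement string below is just one convenient board encoding, not anything from the project:

```python
# Material-only static evaluation: the simplest board-rating function the
# comment mentions. Piece values are the conventional ones.
PIECE_VALUES = {'P': 1, 'N': 3, 'B': 3, 'R': 5, 'Q': 9, 'K': 0}

def material(placement):
    """Score > 0 favours White. Input: the piece-placement field of a FEN string."""
    score = 0
    for ch in placement:
        if ch.upper() in PIECE_VALUES:     # digits and '/' are skipped
            value = PIECE_VALUES[ch.upper()]
            score += value if ch.isupper() else -value
    return score

# Starting position: material is balanced.
print(material("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR"))   # 0
```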

  • by wadetemp ( 217315 ) on Sunday June 09, 2002 @04:33PM (#3669649)
    One thing about a lot of distributed processing applications is that it doesn't matter how long it takes node X to process a data unit. So if a 386 can't process a Seti@home unit in under 3 months... no big deal, it can be assumed to be a lost unit and sent on to another machine, or the server can just wait.

    But for chess, I assume moves need to be made in a certain amount of time if you're playing by tournament rules. If you send off a processing unit and it never comes back, there might not be time to send it again. It seems like this would cause important parts of the search tree to be missed.

    In this algorithm, it seems like there would need to be a priority assigned to units, and high-priority units would be sent to the machines believed to be the fastest on the network to ensure they get processed in time. Of course, normal chess programs probably already do this when working under a deadline, but in this case you also have to factor in network connection speeds, processing speeds, and the fact that certain machines may suddenly become unavailable and drop work units.
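    A minimal sketch of the kind of priority dispatch described above (the class, method names, and speed estimates are all hypothetical, not anything ChessBrain actually does): pair the most urgent work unit with the node currently believed to be fastest.

```python
import heapq

# Hypothetical deadline-aware dispatcher: the most urgent work units go
# to the nodes with the highest estimated throughput. Python's heapq is
# a min-heap, so priorities and speeds are negated to get max-heap order.

class Dispatcher:
    def __init__(self):
        self.units = []   # heap of (-priority, unit_id)
        self.nodes = []   # heap of (-speed_estimate, node_id)

    def add_unit(self, unit_id, priority):
        heapq.heappush(self.units, (-priority, unit_id))

    def add_node(self, node_id, speed):
        heapq.heappush(self.nodes, (-speed, node_id))

    def assign(self):
        """Pair the most urgent unit with the fastest idle node, or None."""
        if self.units and self.nodes:
            _, unit = heapq.heappop(self.units)
            _, node = heapq.heappop(self.nodes)
            return unit, node
        return None
```

    A real system would also track per-unit deadlines and re-queue units whose nodes go silent, which is exactly the re-send problem mentioned above.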
  • If you distribute the tree you can store every possible winning strategy and index the whole thing with 128-byte keys (64 squares with a 2-byte piece definition), plus an algorithm to define the next optimal move for White and Black.

    Chess is an uninteresting problem when you have a wide enough word length.
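    The key-size arithmetic in this comment is a bit generous, for what it's worth. A naive fixed-width encoding needs only one byte per square (there are just 13 square states, so 4 bits each, 32 bytes, would do); the real obstacle is the ~10^44 positions, not the key width. A quick sketch (piece codes are arbitrary illustrative choices):

```python
# Naive fixed-width position key: one byte per square holding a piece
# code. 13 possible square states fit in 4 bits, so even this is double
# the minimum -- far below the 128 bytes claimed above.
PIECES = {None: 0, 'P': 1, 'N': 2, 'B': 3, 'R': 4, 'Q': 5, 'K': 6,
          'p': 7, 'n': 8, 'b': 9, 'r': 10, 'q': 11, 'k': 12}

def encode(board):
    """board: list of 64 entries drawn from PIECES keys -> 64-byte key."""
    return bytes(PIECES[sq] for sq in board)

start = (['r','n','b','q','k','b','n','r'] + ['p']*8 + [None]*32 +
         ['P']*8 + ['R','N','B','Q','K','B','N','R'])
key = encode(start)
assert len(key) == 64   # 64 bytes per position, not 128
```

    (A complete key would also need side-to-move, castling rights, and en-passant state, but that's only a few more bits.)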

  • Will someone someday solve the game of chess? I thought the number of different chess games that can be played is about 10^(10^50), and the number of different chess positions on a board is about 10^44. If there are only about 10^80 atoms in the observable universe, can it really be solved? Maybe by quantum computers?
  • how can cellular automata be applied to this project?
  • "I hereby take occasion to assert that the highest powers of the human intellect are more decidedly and more usefully tasked by the unostentatious game of Draughts, than by all the elaborate frivolity of Chess. In the latter, where the pieces have different and bizarre motions, with various and variable values, what is only complex is mistaken (a not unusual error) for what is profound." -- "The Murders in the Rue Morgue"


    Checkers _is_ better than chess.
  • You know, Kasparov has already been beaten by Deep Blue.
    Why do you want to create the ultimate chess-playing computer?
    It's just not interesting!
    Better to try creating a computer that can make interesting mistakes and play like an ordinary human rated 1400-1600. That's harder than you think.
  • Personally I prefer medical/science projects like the Distributed Folding Project [distributedfolding.org].

    They have a really nice and stable client and it is available for a lot of platforms [distributedfolding.org]. They are working on adding support for even more platforms - among them, the PS2 Linux kit :)

    Performance-wise, the Linux ICC client is a bit faster than the Windows client. This is also one of the few projects where the Intel P4 actually does pretty well (in the GIMPS project, the P4 is much faster than any Athlon). For most projects the Athlon is the best choice; in this project it doesn't have any big advantage.

"The number of Unix installations has grown to 10, with more expected." -- The Unix Programmer's Manual, 2nd Edition, June, 1972