The New (Computer) Chess World Champion

An anonymous reader writes: The 7th Thoresen Chess Engines Competition (TCEC) has ended, and a new victor has been crowned: Komodo. The article provides some background on how the different competitive chess engines have been developed, and how we can expect Moore's Law to affect computer dominance in other complex games in the future.

"Although it is coming on 18 years since Deep Blue beat Kasparov, humans are still barely fending off computers at shogi, while we retain some breathing room at Go. ... Ten years ago, each doubling of speed was thought to add 50 Elo points to strength. Now the estimate is closer to 30. Under the double-in-2-years version of Moore's Law, using an average of 50 Elo gained per doubling since Kasparov was beaten, one gets 450 Elo over 18 years, which again checks out. To be sure, the gains in computer chess have come from better algorithms, not just speed, and include nonlinear jumps, so Go should not count on a cushion of (25 – 14)*9 = 99 years."
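The quoted arithmetic is easy to sanity-check. A quick sketch, using only the article's own figures (nothing here is a new estimate):

```python
# Back-of-envelope check of the numbers quoted above.
years_since_kasparov = 18
doublings = years_since_kasparov // 2   # Moore's Law: one doubling every 2 years
elo_per_doubling = 50                   # the article's historical average
elo_gain = doublings * elo_per_doubling
print(elo_gain)                         # 450, matching the article

# The Go "cushion" the article cautions against trusting:
cushion_years = (25 - 14) * 9
print(cushion_years)                    # 99
```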
  • Big Data for chess (Score:4, Interesting)

    by lucm ( 889690 ) on Monday December 29, 2014 @10:33PM (#48693805)

    I would be curious to see how an algorithm based on millions of actual games fares against a pure mathematical model. "Based on your interest in taking queens with a pawn, you might be interested in taking a bishop with your rook".

    • by Anonymous Coward

      Or ... "It looks like you're possibly planning to sacrifice your bishop to the opponent's rook." Click here to see the usual counter.

    • by SeaFox ( 739806 )

      Is it bad I mentally saw the message appearing in a speech balloon from Clippy?

    • I would be curious to see how an algorithm based on millions of actual games fares against a pure mathematical model.

      Poorly, unless you have a new algorithm the world hasn't seen yet. Presumably there's a way to do it (because humans do it), but so far no one's figured out how to get a computer to do it.

    • You mean like what the runner up is already doing? http://stockfishchess.org/get-... [stockfishchess.org]

    • The tactics (he takes and I take...) computers have down cold. That's how they beat humans. They never miss a trick, and so even the best humans get worn out sweating every crazy possibility. Many, many computer games have been won by one crazy move that makes a seemingly lost position tenable.
      The strategy (long-term planning and positioning) is where computers are weaker. Not weak, but weaker.

      Once the end game is reached a large database of positions is used. (Humans effectively do this too, in the s

      • by lucm ( 889690 )

        This applies to computer vs computer situations. Once you put a human in the mix, it's a whole different situation because the frame of reference is different.

        You can't predict what you don't understand. That's why a chess computer that uses past human games to make decisions would be, in my opinion, a pretty nasty opponent.

    • You just took my knight. You won't believe what happens next!

      Check out this one weird trick with a pawn.

    • why bother with theoretical games? Just pre-calculate a lookup table that tells you every move you should make in every situation.

    • That has been tried with computer Go. It turns out that you can make a computer very, very good at predicting what moves top-level players will make, and still fail abysmally at making a strong program.

  • chess championship (Score:5, Interesting)

    by phantomfive ( 622387 ) on Monday December 29, 2014 @10:45PM (#48693841) Journal
    I don't know about the chess championship, but that is one of the best blogs on the internet. Where did it come from? In a world of 7-second attention spans, "you won't believe what happens next....", and pop culture domination, here is a guy who is talking about math, computers, and games in a friendly relaxed manner, because it is interesting to him. He talks about Godel (who apparently said, "Religions are, for the most part, bad—but religion is not"), some recent ideas in information theory, and a comparison between linear algebra and quantum computing. He uses LaTeX.

    Also, from the post I learned about a game called Arimaa [arimaa.com], which was designed to be hard for computers but easy for people. There is a bet that no computer will be able to beat a human, and you can win thousands of dollars if you do. So far it's apparently not even close. Also, got this great quote: "It’s not that chess is 99% tactics, it’s just that tactics takes up 99% of your time."
    • You can blog in LaTeX!?

    • by doug141 ( 863552 )

      Arimaa might make an interesting Turing test. Your sig made me think of Captain Paul Watson of the Sea Shepherd telling the story of the time he got a job testing the intelligence of a captive orca. The orca got all the answers right immediately after training, then suddenly started getting them all wrong. Paul realized the orca was testing him, too.

  • ... that can challenge a human grandmaster just by using heuristics and considering perhaps at most only a few hundred board combinations instead of millions.

    Because really, the fact that a computer can beat the best human players at chess just by analyzing millions upon millions of board combinations is no more surprising than the fact that even a small child can figure out how to never lose when playing tic tac toe.
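    The tic-tac-toe comparison is apt: never losing there is a textbook minimax search, the same brute-force idea chess engines scale up. A minimal sketch (hypothetical illustration, not any engine's code; the board is a 9-character string, 'X'/'O'/' ' per cell):

```python
# Minimal minimax (negamax form) for tic-tac-toe: perfect play never loses.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    """Return 'X' or 'O' if that side has three in a row, else None."""
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Best achievable score for `player`, who is to move:
    +1 = forced win, 0 = draw, -1 = forced loss."""
    w = winner(b)
    if w:                       # the previous move already won the game
        return 1 if w == player else -1
    if ' ' not in b:            # board full: draw
        return 0
    other = 'O' if player == 'X' else 'X'
    # Try every empty square; score each reply from the opponent's
    # perspective and negate it (zero-sum), then keep the best.
    return max(-minimax(b[:i] + player + b[i+1:], other)
               for i, c in enumerate(b) if c == ' ')

# From the empty board, perfect play by both sides is a draw:
print(minimax(' ' * 9, 'X'))   # 0
```

    The full game tree here is only about half a million nodes, which is why even a small child can internalize the result; chess search is the same principle with pruning and evaluation heuristics bolted on to cope with the vastly larger tree.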

    • by itzly ( 3699663 )
      Why? The current method works better. Occasionally, a grandmaster overlooks a mate in 1, or other easy things. This never happens with computers that do a brute-force search.
    • As a chess player, I just had to stop and laugh this down.

      Show me a human grandmaster, (or even a B-class player) who can play without all the knowledge they have remembered.

      Show me a human who can consider a problem, and restrict their thinking to only a few hundred connections between different brain cells.

      Show me a human who can temporarily forget everything they know and approach problems using only a single class of algorithm.

      The only "chess player" who can play the way you'd hobble the computer is som

      • by mark-t ( 151149 )
        Obviously human players don't forget the games they have played in the past, but they don't need to recount every detail of every game in order to exercise that experience in game, as a chess program would.
        • They do, though. Grandmasters know tens of thousands of games by memory and on sight.

          My friend is an FM (FIDE Master), which is two levels below Grandmaster, and he remembers not only all the games he ever played, but also many thousands of games he has studied, and thousands more from other players who were playing in tournaments at the same time.

          Computers are even better at this, but the idea that a human can get even to the 90th percentile in chess without memorizing anything is absurd. Very often a player at

          • by mark-t ( 151149 )
            Human players generalize from their experience, they do not explicitly recount every game they have ever played in order to exercise their knowledge from that experience. A grandmaster may only explicitly consider a few hundred actual board combinations, on any given turn.
            • Human players generalize from their experience, they do not explicitly recount every game they have ever played in order to exercise their knowledge from that experience. A grandmaster may only explicitly consider a few hundred actual board combinations, on any given turn.

              You're just wrong, and you're clearly outside your expertise. They do explicitly recount all their games, and the games of their chess friends, and the notable historical games, and the notable games in the openings they play. Higher level grandmasters remember games that they studied for a couple days during preparation for a match a decade ago, and will play those lines in future games if the position comes up. If you knew, you'd know this.

              A grandmaster doesn't know how many "board positions" his brain ac

              • by mark-t ( 151149 )

                They do often create an explicit search tree and consider a few dozen positions.

                You are aware, I assume, that a few dozen can easily exceed a hundred. The several hundred I mentioned may admittedly have been very generous, but my point was that grandmasters never have to evaluate every board combination the way computers do. In fact, a modern computer can sometimes consider more board variations in a single move of a single game than a chess player may have seen in their entire lifetime

                • The computer is not any more aware of the millions of calculations it is doing than the human is. In fact, I'd say the human is aware of a larger number of explicit calculations. I'd also point out, this is obvious.

                  The computer calculations are "explicit" to the human programmer, sure. But in the same way, a physiologist might consider the entire human process of calculations to be explicit. Then you're stuck with the reality that the human mind is doing a large number of analog calculations, where each cal

                  • by mark-t ( 151149 )

                    Although the brain can be likened to a massively parallel computer, humans cannot perform calculations as fast as a digital computer, so there is no basis to presume that humans are considering considerably more than the few dozen or so positions that grandmasters are typically assumed to analyze before making a move

                    And this point cannot be emphasized enough... even though the grandmaster considers such a small set of positions, the grandmaster plays at an *EXCELLENT* level, while any computer that were

                    • If you could program a computer with every chess game ever made, and then have it make generalizations from those games so that it was capable of recognizing patterns that occur in other games, being able to recognize potential winning strategies from a given board position, only analyzing a few dozen or so moves in advance during any actual game, just as a chess grandmaster does, and *STILL* be able to beat any human player.... then you'll have something.

                      Straight assertion with no apparent point. WTF does "have something" mean here? You can't claim the human analyzes only a few dozen things during the game. An fMRI will blow that one up in an instant. (BTW, the chess community already has seen that scan and the analysis and we know the answer, this isn't speculation or opinion) And the computer is aware of exactly 0 calculations. So your only real point is that humans are minimally aware of ourselves and that we have thought processes. Except, we don't unde

                    • by mark-t ( 151149 )
                      My point is, and has always been, that a computer being able to beat an excellent human player at chess is nothing more than mathematical stage magic, and cannot be shown to emulate human thought unless or until human thought itself can be shown to be equally illusory.
              • They are good at memorizing chess games because chess games make profound sense to them. The more you understand the why's of a chess move, the easier it is to remember it. They aren't good at memorizing arbitrary stuff, and being good at memorizing arbitrary stuff won't help you much getting good at chess.

                Playing from randomized positions, computers are vastly better than humans, since they don't rely on "moves making sense" the same way. Go programs (which are a lot weaker than the best humans) trounce hu

                • Chess software also does not remember arbitrary stuff. ;)

                  It could. But so could the human, if the reason stopped being arbitrary.

                  There are a lot of theories why Go computers are weak. I saw one analysis that compared the amount of effort to the amount of success, chess vs go. I don't have a link, but the conclusion was that go computers are actually just as far along as chess computers were with similar total effort.

  • So when will these chess playing programs attain general artificial intelligence on par with a human? With each improved player, we must surely be getting closer...
    • No. They win by crunching data. What we have learned is what games humans are still better at. I would say chess research is no longer AI research. The new horizon is figuring out how to beat humans at the games humans are still better at. That is how they'll get closer to humans.

    • by AK Marc ( 707885 )
      That's the same as saying that each improvement in the car brings us closer to an airplane. I'm not sure that works.
      • Yes, exactly! I would say that the AI programs we hear so much about: Watson, Google's self-driving cars, deep-learning neural networks, and so on will never reach general artificial intelligence. It's like climbing a tree and expecting to reach the moon. All those programs use simple algorithms geared towards just one purpose. You present those programs with any task other than the specific one they were designed for, and they fail miserably. General artificial intelligence, the ability to handle a wide va
  • I wonder if there is a difference between the best engine at beating other engines and the best engine at beating humans. Obviously, to check you would need to run them on low-powered machines, bringing them down to a level where they occasionally lose to humans.

    • They pretty much beat humans every time. You may not get statistically significant results among the top chess engines.

    • I wonder if there is a difference between the best engine at beating other engines and the best engine at beating humans.

      Yes. I spent a lot of time watching this event and in the chat this subject came up more than a few times in various flavors.

      Probably the most important of the factors involved here is that these chess engines use a symmetric evaluation function. In layman's terms this means that they evaluate a given position the same regardless of which side the engine is actually playing. An anti-human optimal evaluation function would also consider which side of the board a human is playing on. Open positions greatly
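      The "symmetric evaluation" point can be illustrated with a material-only toy function. The piece values below are the conventional centipawn ones; everything else is a hypothetical simplification for this comment, not any engine's actual code:

```python
# A symmetric evaluation in the sense described above: the same terms are
# summed for both sides and differenced, so swapping the sides only flips
# the sign -- the engine judges a position identically no matter which
# side it happens to be playing.
VALUES = {'P': 100, 'N': 320, 'B': 330, 'R': 500, 'Q': 900}

def evaluate(white_pieces, black_pieces):
    """Material-only symmetric eval, scored from White's perspective."""
    white = sum(VALUES[p] for p in white_pieces)
    black = sum(VALUES[p] for p in black_pieces)
    return white - black

w, b = ['Q', 'P', 'P'], ['R', 'R']
assert evaluate(w, b) == -evaluate(b, w)   # symmetry: sign flip only
print(evaluate(w, b))                      # 1100 - 1000 = 100
```

      An anti-human evaluation, by contrast, would break this symmetry: it would add terms rewarding positions that are hard for a human specifically (sharp, open, tactically dense), depending on which color the human holds.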

  • Then again, I beat a computer at kick boxing once.
    Not original, I read that somewhere not 30 minutes ago.
  • Mr. Don Dailey (Score:4, Interesting)

    by nusuth ( 520833 ) <oooo_0000us@PLAN ... minus physicist> on Tuesday December 30, 2014 @08:05AM (#48695179) Homepage

    I am disappointed to see only one mention of the late Don Dailey in TFA. He is actually the guy who wrote the whole thing. I had followed his posts for years on the computer go mailing list. I learned a lot from him as an R&D engineer in an unrelated field (the chemical industry). While many people adopted "improvements" only because they made sense to them, Mr. Dailey had a very systematic and methodical approach to changing the program. He had ideas and insights for improvement like many others, but he never fell in love with his own ideas. If something did not work, it did not, no matter how plausible it seemed. He also had the most patience I have ever seen in an online person. He would carry on discussions long after it was obvious the other party was not paying enough attention or was simply stupid. He did this almost to the day he died.
    Congrats, Mr. Dailey. You have done it.

    • by itzly ( 3699663 )

      Congrats Mr. Dailey. You have done it.

      He's done it twice now. Komodo was also the winner of the TCEC Season 5 final against Stockfish. Mr. Dailey passed away just after Komodo had made it to that final, unfortunately too soon to witness the result.

  • Komodo won, but it has been the main contender alongside Stockfish for a while. I think it has been number two in the last two or three tourneys.

  • Stockfish is only slightly weaker, and is open source.

    What's the point of closed-source chess engines when a lot of engines are already far stronger than humans? Who's going to pay money for a closed-source chess engine? Idiots? A grandmaster may want one to study its playing "style", chess algorithm researchers might want to study it, and other chess engine designers might want to reverse engineer it, but there's no practical reason for even a strong chess player to buy chess engines anymore.


