Chess Ratings — Move Over Elo 133

databuff writes "Less than 24 hours ago, Jeff Sonas, the creator of the Chessmetrics rating system, launched a competition to find a chess rating algorithm that performs better than the official Elo rating system. The competition requires entrants to build their rating systems based on the results of more than 65,000 historical chess games. Entrants then test their algorithms by predicting the results of another 7,809 games. Already three teams have managed to create systems that make more accurate predictions than the official Elo approach. It's not a surprise that Elo has been outdone — after all, the system was invented half a century ago before we could easily crunch large amounts of historical data. However, it is a big surprise that Elo has been bettered so quickly!"
This discussion has been archived. No new comments can be posted.

  • Indeed (Score:5, Funny)

    by mooingyak ( 720677 ) on Wednesday August 04, 2010 @04:05PM (#33143658)

    However, it is a big surprise that Elo has been bettered done so quickly!

    Absolutely. I can almost guarantee no one thought that Elo would have been bettered done so quickly.

    • Does Timothy even glance at the stories he approves or is it pure pin the tail on the donkey?

      • Re:Indeed (Score:5, Funny)

        by Lord Byron II ( 671689 ) on Wednesday August 04, 2010 @04:13PM (#33143744)

        Timothy is the bettered done editor of Slashdot!

        • Re: (Score:1, Offtopic)

          by Hognoxious ( 631665 )

          # He's a bettered done kid

          [bettered done baby]

          Battered dome kid

          [battered dome baby]

          ooh ooooh ooh ooooh oo oo ooh ohh a hooway hooway hoowah hoowah/#

          Fuck me, I'd forgotten what a pile of shite Deacon Park South Texas were. Thanks a bastarding bunch for reminding me, you heiferflap.

          • Fuck me, I'd forgotten what a pile of shite Deacon Park South Texas were. Thanks a bastarding bunch for reminding me, you heiferflap.

            WTF? Is this what happens when some late-1980s Scottish bands get mixed up in a transporter with a popular animation series?

            If something that tenuous links to "Real Gone Kid" in your head, you must have some major trauma :-/

      • Re: (Score:1, Offtopic)

        by mike260 ( 224212 )

        Hopefully next time he will bettered posted checked done more carefully.

    • Re: (Score:3, Funny)

      by Braintrust ( 449843 )

      Indubitably. It filled with hope the one that no one thought Elo would have been bettered done so quickly.

    • Re:Indeed (Score:5, Funny)

      by camperdave ( 969942 ) on Wednesday August 04, 2010 @04:29PM (#33143954) Journal
      ELO hasn't done all that well since the big hair rock days of the late 1970s/early 1980s, pretty much since the drummer left to join Black Sabbath. I'm surprised at the band's connection to chess.
      • Re: (Score:3, Insightful)

        by Hognoxious ( 631665 )

The first time I heard Bev Bevan had joined Sabbath I kind of went "WTF?". But they're all Brummies, along with a lot of heavy metal bands around that time. Priest, Magnum ... they probably all played in pubs together when they were 15.

        Similarly you couldn't be a serious goth in the 80s unless you were from Leeds, or a flare-wearing floppy-mopped tossbag in the 90s if you weren't a Manc.

    • However, it is a big surprise that Elo has been bettered done so quickly!

      Absolutely. I can almost guarantee no one thought that Elo would have been bettered done so quickly.

      Is it because elo would have been bettered done so quickly that you came to me?

    • What we need now is a chess rating system rating system. Then chess rating systems can compete with each other and be rated as to how well they rate chess.

  • Not that they'd use it, but it certainly couldn't hurt.
    • by PRMan ( 959735 )
      Technically, they are already using ELO-CHESS in the BCS, because Jeff Sagarin uses it in his rating system. So all that has to happen is for Jeff Sagarin to change his method.
  • by boneclinkz ( 1284458 ) on Wednesday August 04, 2010 @04:08PM (#33143694)
    Elo-L
  • umm (Score:5, Informative)

    by buddyglass ( 925859 ) on Wednesday August 04, 2010 @04:08PM (#33143698)

    However, it is a big surprise that Elo has been bettered done so quickly!

    Not really. Jeff Sagarin has had two systems of rating sports teams for a while now. One, ELO_CHESS, is based purely on win-loss, while the other, PURE POINTS, takes into account margin of victory. According to him, the latter is better at predicting future results. From his analysis:

    In ELO CHESS, only winning and losing matters; the score margin is of no consequence, which makes it very "politically correct". However it is less accurate in its predictions for upcoming games than is the PURE POINTS, in which the score margin is the only thing that matters. PURE POINTS is also known as PREDICTOR, BALLANTINE, RHEINGOLD, WHITE OWL and is the best single PREDICTOR of future games.

    • by l2718 ( 514756 ) on Wednesday August 04, 2010 @04:27PM (#33143920)
      Indeed, Sagarin has shown that applying Elo in sports where the winner is based on points scored is not optimal, since the average margin of victory is a better predictor of strength than won-loss record. But this has nothing to do with applying the Elo method to its original setting of chess, where the outcome of the game is only "win/draw/loss" and there is no margin of victory.
      • It's not inconceivable that one might apply an artificial means of gauging "margin of victory" to the domain of chess. Some sort of differential in the "value" of the pieces remaining for each contestant when the game ends. For the three teams that beat ELO, do their ratings systems only take "win/loss" as input, or do they also get the board's configuration at the point when the game ended?
        • by thousandinone ( 918319 ) on Wednesday August 04, 2010 @04:44PM (#33144118) Journal
          This is pretty ridiculous. Margin of victory? Is there a committee overseeing ethical treatment of chess pieces now? If I sacrifice everything but my King and a Bishop to checkmate you, why is that intrinsically a better strategy than sparing some of my pieces?

There are definite merits to a sacrificial strategy; it's all about board control. As long as there's more than one or two legal moves available to your opponent, you can't really predict where he'll send his pieces. A queen in the middle of the board can cover a lot of distance and do some impressive maneuvers, but any given piece only occupies one spot. Control where your opponent moves, control the game. Not to mention that fewer pieces on the board gives you more options for where to move with your remaining pieces, and by allowing your pieces to be taken, you have a measure of control over where the free space on the board is.

          Indeed, given the rules of the game, I would say a strategy that goes to great lengths to preserve as many of ones own pieces as possible is flawed...
          • Sorry, but... You can't checkmate with only a king and a bishop.
            • by SomeJoel ( 1061138 ) on Wednesday August 04, 2010 @05:09PM (#33144430)

              Sorry, but... You can't checkmate with only a king and a bishop.

              The hell you can't. It turns out, your opponent has pieces too! Have you ever even played chess?

              • When chess nerds talk about end game strategies it is implied that "a king and a __ " ending is one where the other player has just a king.

            • by phantomfive ( 622387 ) on Wednesday August 04, 2010 @05:19PM (#33144540) Journal
              You know, you're really asking for it when you take a small point that isn't even relevant to his main point and attack it. Sorry, YOU'RE WRONG!!!!! [gameknot.com].

              If you ever find yourself in a game where you can sacrifice all your pieces to get to that position, DO IT!
              • by dylan_- ( 1661 )

                If you ever find yourself in a game where you can sacrifice all your pieces to get to that position, DO IT!

                Unless 1.Bh4 b3

                I get your point, but you need rid of that queen.

                • Heh, you're looking at the puzzle backwards. It's actually solvable, you can click on the pieces and solve it. The black pawns are coming towards the bottom of the board.
                  • by dylan_- ( 1661 )

                    Oh, I did solve it, but I thought black was playing the wrong move! I realised what I had wrong later last night as I was mixing up yet another lemsip.

                    In my defense, I'm loaded with the cold at the moment and can't think straight. Someone posted an old school picture on facebook yesterday, and it took me about 3 minutes to figure out which one was me!

          • Re: (Score:3, Insightful)

            by friedo ( 112163 )

            If some metric X is a statistically reliable method of predicting future success, then X can be defined as a margin of victory. Whether X is a function of the "values" of remaining pieces, or their positions on the board, or the number of moves, or whatever, is immaterial.

Except that if such a metric were used in the future, it would punish the most entertaining and thrilling form of play.

              In chess, you win or lose. If players started "grinding" just to raise their ratings - ick.
          • Re: (Score:2, Interesting)

            by buddyglass ( 925859 )

            If I sacrifice everything but my King and a Bishop to checkmate you, why is that intrinsically a better strategy than sparing some of my pieces?

Winning with only a king and a bishop remaining is no "better" than winning with all your pieces remaining. A win is a win. That said, winning a game while having many more pieces remaining than one's opponent may imply that the difference between your skill and your opponent's is greater than if you won with only a king and bishop left. There may be some merit t

          • So...what you're saying is it takes more skill to win the game with more of your pieces. Which means you'd be a better player than someone who needs to get rid of those pieces first. Which means the margin of victory would be a good predictor of future outcomes? Am I right?

Or, to put it another way: if a model is derived that accurately represents previous behaviour, and accurately predicts future behaviour, then the model is reasonably accurate. Your not liking it doesn't mean it's wrong.

        • Re: (Score:2, Informative)

          by databuff ( 1789500 )
          Data only shows results - so there's no scope for gauging the margin of victory.
          • I take back what I said, then. It is moderately surprising that there have already been three solutions that outperform ELO based solely on win/loss.
I thought so too, but looking at the site, it seems relatively trivial to set up a Bayesian structural equation model that models the evolution of each individual player's ability. That will produce a ton of parameters, but hierarchical magic can take care of that. In fact, they even mention that in the official hint. It's easy to see why that would outperform ELO.
      • But this has nothing to do with applying the Elo method to its original setting of chess, where the outcome of the game is only "win/draw/loss" and there is no margin of victory.

        You can easily keep track of a "margin" by assigning point values for the pieces that have been taken.
        http://en.wikipedia.org/wiki/Chess_piece_relative_value [wikipedia.org]

        That metric loses some relevance since someone behind on points can easily have a strategic victory,
        but there may still be some information of value gained from crunching the numbers.
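The piece-value margin suggested above could be sketched roughly as follows. This is a hypothetical illustration only: the contest data does not include final positions, and the piece letters and values here are just the standard relative values from the linked article.

```python
# Hypothetical sketch of a "material margin" for chess, scoring each
# side's remaining pieces with the standard relative values.

PIECE_VALUES = {"p": 1, "n": 3, "b": 3, "r": 5, "q": 9}  # king excluded

def material_margin(white_remaining, black_remaining):
    """Positive means White ended the game ahead on material."""
    white = sum(PIECE_VALUES[p] for p in white_remaining)
    black = sum(PIECE_VALUES[p] for p in black_remaining)
    return white - black

# White keeps queen + rook + 4 pawns; Black keeps rook + 2 pawns:
print(material_margin("qrpppp", "rpp"))  # 18 - 7 = 11
```

As the parent notes, the number is only weakly tied to who actually stood better, but it is at least a margin-like signal one could feed into a Sagarin-style predictor.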

  • Submission error (Score:3, Informative)

    by TubeSteak ( 669689 ) on Wednesday August 04, 2010 @04:09PM (#33143706) Journal

    Already three teams [kaggle.com] have managed create systems that make more accurate predictions than the official Elo approach.

    1 EdR* 0.729125
    2 whiteknight* 0.731656
    3 Elo Benchmark* 0.738107 <-- The "official Elo approach"

    Maybe we're counting from zero and they forgot to put it on the leaderboard?

  • I can't think of anything other than 70's cheese and largest white afro up until the release of Bobobo-bo Bo-bobo.

  • by LearnToSpell ( 694184 ) on Wednesday August 04, 2010 @04:14PM (#33143750) Homepage
    Less than 24 hours ago, the readers of Slashdot launched a competition to find an editing algorithm that performs better than the official "editors" of the site. The competition requires entrants to build their comment systems based on the results of over 9,000 historical submissions. Entrants then test their algorithms by predicting the results of the next 7,809 dup^H^H^Hstories. Already three teams have managed to create systems that make more accurate predictions than the official /. approach. It's not a surprise that Timothy has been outdone -- after all, he was invented half a century ago before English had been standardized. However, it is no big surprise that Slashdot has been bettered done so quickly! The winner: Texas Instruments! [speaknspell.co.uk]
  • by Last_Available_Usern ( 756093 ) on Wednesday August 04, 2010 @04:14PM (#33143760)
    Organized crime members linked to gambling rackets have been indicted for kidnapping a busload of nerds after they refused to program similar algorithms in exchange for Warcraft game time and photoshopped Natalie Portman porn.

    We all know that's not true though. They totally would have done it.
  • by l2718 ( 514756 ) on Wednesday August 04, 2010 @04:20PM (#33143834)
    Looking at the table, the differences in predictive power are small enough that it's not obvious they aren't due to chance alone; there needs to be some calculation showing that the differences are meaningful, validating the claim that the alternative methods actually extract more information than Elo does. Perhaps there is enough inherent randomness in Chess that even simple predictive models can extract most of the systematics so that what remains after Elo is mostly noise?
    • Perhaps there is enough inherent randomness in Chess that even simple predictive models can extract most of the systematics so that what remains after Elo is mostly noise?

      No. Chess has no random elements to it. You play against an opponent, with a very strict set of rules.

      Now sometimes the rules differ from game to game (such as timing, whether they use something like 3/5 Fischer or 20 moves an hour), which can have drastic effects on the outcome. For example, if you do something like 20 moves an hour, sometimes chess players will be running short on time, and they'll deliberately try to speed up their 18th, 19th and 20th moves to get that extra hour of time.

      The o

      • Re: (Score:3, Insightful)

        by l2718 ( 514756 )
        No. Chess has no random elements to it. You play against an opponent, with a very strict set of rules.

        I don't think you understand what the discussion in this post is about. The game of chess has no element of randomness -- but the players do, and it's the players we are trying to model. Just because, on average, player A is better than player B, doesn't mean that player A will win every game. The fact is that the same player will play at different levels of ability on different days, and that is the ran

      • Re: (Score:2, Informative)

        by shimage ( 954282 )
        Bullshit. Mistakes are roughly stochastic, ergo, there are random elements in chess players' performance. This is why chess matches involve more than just two games.
        • You clearly don't understand the point I'm trying to make then.

          A mistake is an element in my performance and can happen at any time - and it will affect my ranking.

          What Elo does is put me up against everyone else who is JUST as affected by these events as I am - there is nothing to say it's an unfair battle. I do not have fewer pieces, I do not have a weaker position to start. Now there will be stronger players, and they will have higher rankings; weaker players will have lower rankings.

          What the GP was saying

          • by sahonen ( 680948 )
            When you roll a die or spin a roulette wheel or deal a hand of cards, the outcome is governed purely by the laws of physics, yet you treat the result as random anyway. The outcome of a chess game is the same way. Even if the outcome of the game is decided solely by player skill within the rules of the game, the result is treated as a statistically random phenomenon.
            • Why is it considered a random phenomenon when a player makes decisions in chess?

              When it comes to something like Poker, you don't know that you will ever get a good hand; you are stuck trying to play against your opponent. You can go through the entire game without getting a solid winning hand compared to your opponent, and if your opponent pushes you at every turn - you've already lost, and no matter how "skillful" you are, your bad luck would cause a loss even if your opponent plays stupidly calling bluffs

              • by sahonen ( 680948 )
                You missed my point. I'll say it again. Every single physical phenomenon above the level of quantum physics is governed by deterministic physical laws, yet for the purpose of statistical analysis we treat them as random because we don't have the ability to know them exactly.

                The poker analogy was talking about a *single hand.* When you shuffle a deck of cards, they will come out in an order which is precisely determined by the actions taken to shuffle them, yet we treat the order of the cards after shuffl
      • by hoytak ( 1148181 )

        ... and the only things left to chance are your strategies.

        and whether you had too much coffee that morning, failed to see that move 10 steps ahead, etc. In high level chess, it seems that these kind of things have enormous effects on the outcome of the game and are not things that can be easily modeled except as random effects. Thus there is definitely a random element in the outcome of the game; Kasparov vs. Deep Blue was a mix of wins and losses; definitely not a deterministic outcome.

      • by phantomfive ( 622387 ) on Thursday August 05, 2010 @12:00AM (#33147024) Journal
        Mikhail Tal, one of the best players ever, would differ; because it's impossible to see deeply enough to know what the outcome of a move will be. He makes the point here [wikipedia.org], and I'll quote a small piece:

        Tal: - "Yes. For example, I will never forget my game with GM Vasiukov on a USSR Championship. We reached a very complicated position where I was intending to sacrifice a knight. The sacrifice was not obvious; there was a large number of possible variations; but when I began to study hard and work through them, I found to my horror that nothing would come of it. Ideas piled up one after another. I would transport a subtle reply by my opponent, which worked in one case, to another situation where it would naturally prove to be quite useless. As a result my head became filled with a completely chaotic pile of all sorts of moves, and the infamous "tree of variations", from which the chess trainers recommend that you cut off the small branches, in this case spread with unbelievable rapidity.

        Now I somehow realized that it was not possible to calculate all the variations, and that the knight sacrifice was, by its very nature, purely intuitive. And since it promised an interesting game, I could not refrain from making it."

        Journalist: - "And the following day, it was with pleasure that I read in the paper how Mikhail Tal, after carefully thinking over the position for 40 minutes, made an accurately-calculated piece sacrifice".

        You will find that lots of chess players have reported making similarly intuitive moves.

        • That's not random though, and that kind of intuition is what makes the rankings.

          What I mean is - if you were to take something like WoW, put 2 identical players against each other, have them perform the exact same moves at the exact same time - one will likely lose before the other, because there is too much random generation in the game, like crit chances and things like that.

          Chess does not have any of those elements. Yes, you may have tons of moves available to you with far reaching implications but ultim

          • Nah, haven't you ever heard of the chess god Caissa, that chess players pray to? The choice to make a move is yours, but in many situations there is no logical reason to make one move above another. It comes down to luck. In an average, typical position, there are something like 3 moves that are all equally good. Which one you choose might as well be completely random (and indeed, that is how I choose between three moves that I can't tell which is best: as randomly as I can to make myself unpredictable).
            • Which is part of your strategy - which would reflect on how well you play chess, no? If you are actively trying to seem more random - and it works, that will make your chess rating go up.

    • I realize they are "predicting" games that have already taken place, but how would this affect a realtime match? How much would it change your moves knowing you've been predicted to lose? Or to win?
    • The differences are indeed quite small, but it seems obvious that you should be able to do better than ELO by splitting it into two parts:

      Games played as White and games played as Black.

      In fact, this seems so obvious that I suspect there's something I have overlooked! :-)

      As the contest site mentions, there's a very significant advantage to White, enough so that in their training data set White has 30+% win vs 20+% for Black.

      I suggest that taking the normal ELO-predicted outcome and then biasing it according
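The color-bias idea the parent sketches could look like the following. This is a hypothetical sketch, not a fitted model: the 35-point bonus for White is an illustrative number, and the logistic curve is just the standard Elo expectation formula.

```python
# Sketch: standard Elo expected score, with a hypothetical flat rating
# bonus for White to account for the first-move advantage.

WHITE_BONUS = 35  # illustrative value, not fitted from the contest data

def expected_score(r_white, r_black, white_bonus=WHITE_BONUS):
    """Expected score for White under the standard logistic Elo curve."""
    diff = (r_white + white_bonus) - r_black
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# With equal ratings, White is predicted slightly above 0.5,
# matching the observed White advantage in the training data.
print(round(expected_score(2000, 2000), 3))
```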

      • Pointless. Every official ELO rating is (and any rating system that replaces it will be) calculated off 50% games as White and 50% games as Black, because officially rated games are played in tournaments and matches in which each player is assigned an equal number of games as each. Since every ELO rating has the same White/Black ratio, there is no "bias" from one rating to the next to be corrected for.

        • Not pointless at all!

          Tournament results are what ELO really tries to predict, and there you are absolutely correct, i.e. everyone plays both White and Black equally often.

          However, the current challenge is NOT to predict how well each player is going to do in the aggregate, but to minimize the error for each individual game. THIS IS CRUCIAL!

          I.e. the error term is the RMS of the difference between the predicted and actual result for each individual game, not the sum of the normal pair of games against each com
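The per-game error term described above can be sketched as follows; this is a minimal illustration, assuming outcomes are coded as 1 for a White win, 0.5 for a draw, and 0 for a loss (the exact scoring used by the contest may differ).

```python
# Sketch of a per-game RMS error: root-mean-square difference between
# predicted expected scores and actual game results.

import math

def rmse(predicted, actual):
    """RMS error over individual games, as the parent describes."""
    assert len(predicted) == len(actual)
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

preds = [0.7, 0.5, 0.9]    # model's expected scores for White
results = [1.0, 0.5, 0.5]  # actual outcomes (win, draw, draw)
print(round(rmse(preds, results), 4))
```

Minimizing this per-game error is a different objective from predicting aggregate tournament standings, which is the parent's point.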

  • Well, everyone knows that arena is serious business.
  • by frank_adrian314159 ( 469671 ) on Wednesday August 04, 2010 @04:28PM (#33143924) Homepage

    Are the better entries as transparent? ELO's a pretty simple way to do this - add or subtract a few points from the rating based on a win or a loss and on the relative difference of the ratings. Would anyone understand (other than "It's a neural net") the ratings produced by these competitors? Would any human be able to calculate them?

    Also, are the new models' improvements in prediction statistically relevant? Or are they just fitting the noise? Both the training dataset and the test dataset seem rather small to me.

    Finally, and most importantly, how stable are the ratings? If I'm drunk and lose to a "patzer", do I go down to his level? Fairness of tournaments having small numbers of games has a lot to do with rating stability (unless we're assuming a population periodically beset by huge random shifts in ability).

    All in all, there are a lot of problems in coming up with a good rating system. Opening the dataset to the world, saying "Have at it!", and looking at a single scorecard based solely on predictability is nowhere near sufficient.
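For reference, the simple Elo update described above can be sketched as follows. This is a minimal sketch of the standard formula; the K-factor of 32 is one common club-play choice, and federations vary it by rating and experience.

```python
# Minimal sketch of the standard Elo rating update: each player's rating
# moves by K times (actual score - expected score).

K = 32  # common K-factor; real federations use different values

def elo_expected(r_a, r_b):
    """Expected score for player A against player B."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=K):
    """score_a is 1 for an A win, 0.5 for a draw, 0 for a loss."""
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# A 1600 beats a 1500: A gains exactly what B loses (zero-sum).
print(elo_update(1600, 1500, 1.0))
```

The transparency point stands: this fits on a napkin, while a trained statistical model generally does not.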

    • Re: (Score:3, Interesting)

      by greg1104 ( 461138 )

      Development of stock trading systems, which also try to rank things based on historical data, has this persistent problem, and there's been waaay more research into it than into chess rankings. If you train them on a bunch of historical data, you will discover the best system is invariably one that essentially does a giant curve fitting job on that exact data. One thing trading system developers do to address this is use techniques like walk forward testing [automated-...system.com], where the system gets trained on one set of data bu

      • Walk-forward testing (a special type of something more general called cross-validation) is a bit over-rated. It's very intuitive, which is why it's used so much in the technical analysis crowd. But statistically, it's really roughly equivalent to multiplying the standard error by a constant factor unless you have severe model mis-specification (in which case you're doing something very wrong!)

        In general, it's good to parametrize a range of plausible models, test the assumptions of the model, and conservati

    • by Kijori ( 897770 )

      Are the better entries as transparent? ELO's a pretty simple way to do this - add or subtract a few points from the rating based on a win or a loss and on the relative difference of the ratings. Would anyone understand (other than "It's a neural net") the ratings produced by these competitors? Would any human be able to calculate them?

      Take a look at the formulae used - Elo, particularly for tournament play, is already complicated enough that it's beyond the reach of a "back-of-the-napkin" calculation to work out your rating change. That's seen as one of the big advantages of the English Chess Federation's rating system; it's very simple, so you can just work out the change yourself.

  • Since the Elo system is not designed to predict future performance (it's designed to capture current relative rankings), then is it really surprising that programs designed to predict future performance are better at it?

    • Re: (Score:3, Informative)

      by mooingyak ( 720677 )

      Since the Elo system is not designed to predict future performance (it's designed to capture current relative rankings), then is it really surprising that programs designed to predict future performance are better at it?

      And if my current relative rank is higher than yours, doesn't that imply that if we play each other I should win? If not, what purpose does the rank serve?

      • Re: (Score:3, Funny)

        by vlm ( 69642 )

        And if my current relative rank is higher than yours, doesn't that imply that if we play each other I should win? If not, what purpose does the rank serve?

        Historical achievement, the glory of the grind. Much as my lower UID implies this comment should be more valuable than your high UID comment.

        • Re: (Score:3, Interesting)

          by mooingyak ( 720677 )

          Much as my lower UID implies this comment should be more valuable than your high UID comment.

          I used to think of myself as having a particularly high UID... until I realized that mine is actually lower than a majority of the total UIDs. Weirded me out a little. There are UIDs that are farther from the 1,000,000 mark than I am from Taco.

      • Since the Elo system is not designed to predict future performance (it's designed to capture current relative rankings), then is it really surprising that programs designed to predict future performance are better at it?

        And if my current relative rank is higher than yours, doesn't that imply that if we play each other I should win?

        That depends on the relative difference between the ranks. A narrow difference implies you might win, a wider difference implies you will win - and between the two lies a spectru

        • You're picking nits. His point still remains: the ranking system *should* provide a prediction of future performance, as it's supposed to be an indicator of relative skill. Of course, if two ranks are close together, that means your error bars will be wider, but that doesn't change the basic fact that a higher rank should fundamentally translate to a higher likelihood of winning.

    • how do you test current relative rankings without using them to make predictions?
  • I don't think so. The time I'd spend on this project is worth a bit more than $50...

  • After all, it's not like other ideas [microsoft.com] haven't already been created in the meantime to address Elo's perceived shortcomings, right?
  • by LambdaWolf ( 1561517 ) on Wednesday August 04, 2010 @04:45PM (#33144134)

    Ah man, no matter how inadequate the Elo system may be for chess, it's much worse seeing it applied to other games where it doesn't belong, which happens regrettably often. The trouble is that the Elo system depends on the premise that nothing affects the outcome of a game other than the skill of each player (and who gets the white pieces).

    In chess, that assumption is a pretty good approximation to reality, since every tournament game is run the same way. But many games do have variations in rules or format across different events, such as different maps or races in a real-time strategy game, or different card pools in Magic: The Gathering. Then Elo ratings are biased by how often a player has the chance to play to his strong areas.

    In chess, that assumption is a pretty good approximation to reality, since every tournament game in run the same way. But many games do have variations in rules or format across different events, such as different maps or races in a real-time strategy game, or different card pools in Magic: The Gathering. Then Elo ratings are biased by how often a player has the chance to play to his strong areas. Players in turn are compelled to game the system: "I should avoid this event because they're using Format X and my rating will stay stronger if I stick to Format Y." The Elo system is meant precisely to obviate that kind of gamesmanship: chess players should need to think only about the strengths of their opponents, which (in principle) will be weighted fairly when calculating rating adjustments. But if there are other competitive factors, which is true for most any popular game invented in the last 30 years, Elo ratings become that much less meaningful.

    • by selven ( 1556643 )

      Yes, linear ranking systems fail hard at anything as complex as, let alone more complex than, rock-paper-scissors.

    • by jhol13 ( 1087781 )

      The Elo system does not depend on the premise "nothing affects the outcome of a game other than the skill of each player".

      Sure, it is modelled according to that, but in practise it is very untrue even for chess. There are a lot of examples where player A has beaten player B N out of M times although, according to the rating difference, a very different outcome should have been expected.

      The chess events are not similar, I have played a few and they do vary considerably (number of games per day, travel, lighting, temperature,

  • by jamrock ( 863246 ) on Wednesday August 04, 2010 @04:55PM (#33144264)
    Three teams done bettered Elo with betterer done algorithms, and the submitter is surprised that it was bettered done so quickly. I'm done. Was that better?

    He sounds like Lady Macbeth on crack.
  • Man I was like WTF? Cheese ratings? Got confused with seeing the packman icon.
  • I believe the algorithm used by Microsoft to match players for X-Box games was already beating Elo before this competition. They have a description of their algorithm at http://research.microsoft.com/en-us/projects/trueskill/ [microsoft.com]

    • Re: (Score:3, Informative)

      by Maarx ( 1794262 )
      Not to belittle what Microsoft did, but in the interest of giving credit where credit is due:

      Here’s the problem with Battle.net 2.0: 2002's Warcraft III: Reign of Chaos is one of the most underrated video games ever created. And that’s before you learn its online apparatus is the foundation for modern matchmaking, where Blizzard Entertainment should get royalties every time you brag about your X-Box Live Trueskill rating. (Then again, I shouldn’t be giving Blizzard ideas right now.)

      Here’s how Warcraft III matchmaking worked: Everyone starts at level one. The maximum level is fifty. You play players within six levels of your own. Win five games, gain a level. Lose five games, lose a level. The penalty for losing is reduced during levels one to nine. Thus, players who win half their games will become level ten.

      It was simple and transparent. That was the hook, and people choked on it. It turned Warcraft III ladder play into what ICCUP serves for Starcraft players, a stomping ground so competitive that climbing the food chain gave you a shot at the guys who played for a living. That’s what a good online gaming system does.

      The quote comes from Battle.net 2.0: The Antithesis of Consumer Confidence [the-ghetto.org]. I would encourage you to read the entire thing, but for reasons completely unrelated to this thread.
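The ladder rules the quote describes can be sketched roughly as below. This is a hypothetical reconstruction from the quote alone: the class and function names are invented, and the reduced low-level loss penalty the article mentions is modeled only crudely by clamping at level 1.

```python
# Sketch of the Warcraft III ladder rules as quoted: win five games to
# gain a level, lose five to drop one, levels clamped to 1..50, and
# matchmaking within six levels.

class LadderPlayer:
    def __init__(self):
        self.level = 1
        self.wins = 0
        self.losses = 0

    def record(self, won):
        if won:
            self.wins += 1
            if self.wins == 5:
                self.wins = 0
                self.level = min(50, self.level + 1)
        else:
            self.losses += 1
            if self.losses == 5:
                self.losses = 0
                self.level = max(1, self.level - 1)

def can_match(a, b):
    """Players face opponents within six levels of their own."""
    return abs(a.level - b.level) <= 6

p = LadderPlayer()
for _ in range(5):
    p.record(True)
print(p.level)  # a five-game win streak lifts the player to level 2
```

The contrast with Elo or TrueSkill is the point of the quote: the whole system is transparent enough to state in three sentences.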

  • Comment removed based on user account deletion
    • Re: (Score:3, Funny)

      by daveime ( 1253762 )

      Pleased to say I jumped straight into the money at #7 with my first submission :-)

      Where AM I going to spend a whole 50 Euros ? Maybe I'll donate it to Greece, seems like they need it.

  • Elo Anecdote (Score:5, Informative)

    by afabbro ( 33948 ) on Wednesday August 04, 2010 @09:24PM (#33146324) Homepage

    Not relevant specifically to this story, but I always laugh at the story of how a prisoner manipulated the Elo system via closed pool ratings inflation [wikipedia.org].

    Short summary: said prisoner only played against other prisoners, whom he'd trained. Due to careful scheduling of the games, he rose from his true strength (probably sub-master) to being the second-highest rated player in the U.S. in 1996.

  • The problem with this kind of modeling is that many "good fitting" algorithms would, if implemented, change the system itself. There's more to competition chess than just the rules on how to move pieces. For example, while a game in isolation would almost always be played to win, there are many times that because of information from ratings (or due to the method of the tournament) you would start the game being equally happy to draw, which will affect how you play.

    Now, even if the difference in the numb

  • Given that ELO is relatively simple, it is more surprising that more complex algorithms with the benefit of access to a lot of historical data only marginally outperform it. I.e. the transparency and simplicity of ELO combined with a relatively accurate outcome is better.
