
AI Decisively Defeats Four Pro Poker Players In 'Brains Vs AI' Tournament (ieee.org) 191

Halfway through the "Brains vs. AI" poker competition, it was pretty clear the artificial intelligence named Libratus would end up victorious against its human opponents, who are four of the world's top professional players. Lo and behold, Libratus lived up to its "balanced and forceful" Latin name by becoming the first AI to beat professional poker players at heads-up, no-limit Texas Hold'em, reports IEEE Spectrum. From the report: The tournament was held at the Rivers Casino in Pittsburgh from 11 to 30 January. Developed by Carnegie Mellon University, the AI won the "Brains Vs. Artificial Intelligence" tournament against four poker pros by $1,766,250 in chips over 120,000 hands (games). Researchers can now say that the victory margin was large enough to count as a statistically significant win, meaning that they could be at least 99.7 percent sure that the AI victory was not due to chance. Previous attempts to develop poker-playing AI that can exploit the mistakes of opponents -- whether AI or human -- have generally not been very successful, says Tuomas Sandholm, a computer scientist at Carnegie Mellon University. Libratus instead focuses on improving its own play, which he describes as safer and more reliable compared to the riskier approach of trying to exploit opponent mistakes. Even more importantly, the victory demonstrates how AI has likely surpassed the best humans at strategic reasoning in "imperfect information" games such as poker. The no-limit Texas Hold'em version of poker is a good example of an imperfect-information game because players must deal with the uncertainty of two hidden cards and unrestricted bet sizes. An AI that performs well at no-limit Texas Hold'em could also potentially tackle real-world problems with similar levels of uncertainty. In other words, Libratus's algorithms can take the "rules" of any imperfect-information game or scenario and come up with their own strategy. The Libratus victory comes two years after the first "Brains Vs. Artificial Intelligence" competition, held at the Rivers Casino in Pittsburgh in April-May 2015.
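The 99.7 percent figure is a standard statistical-significance claim. As a rough back-of-the-envelope illustration only (not the researchers' actual method), a one-sided z-test over the reported totals looks like the Python sketch below; the per-hand standard deviation is an assumed placeholder, since the real figure is not given in the summary.

    # Rough significance check of the reported victory margin (a sketch, not
    # the study's actual methodology).
    from math import erf, sqrt

    total_win = 1_766_250    # chips won by Libratus (from the article)
    hands = 120_000          # hands played (from the article)
    std_per_hand = 1_500     # ASSUMED per-hand standard deviation in chips

    mean_per_hand = total_win / hands
    std_error = std_per_hand / sqrt(hands)
    z = mean_per_hand / std_error            # test against "no edge at all"
    p_value = 0.5 * (1 - erf(z / sqrt(2)))   # one-sided tail probability

    print(f"win rate {mean_per_hand:.1f} chips/hand, z = {z:.2f}, p = {p_value:.4f}")
    # With the assumed variance this gives z of roughly 3.4, i.e. well past the 99.7% level.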
  • by turkeydance ( 1266624 ) on Tuesday January 31, 2017 @07:11PM (#53777933)
    oh no it can't. yes it can. no it can't.
  • by SuperKendall ( 25149 ) on Tuesday January 31, 2017 @07:14PM (#53777949)

    I don't play poker enough to know, but I wonder if many human players at the top level also try to win through discerning tells and weaknesses of opponents... if an AI can win so consistently, is it using a technique that a human could also learn to get a step ahead of today's other human players?

    • I don't play poker enough to know, but I wonder if many human players at the top level also try to win through discerning tells and weaknesses of opponents... if an AI can win so consistently, is it using a technique that a human could also learn to get a step ahead of today's other human players?

      These are all online pros (4 of the top 10 in the world). So their game is essentially entirely based on discerning patterns of betting behaviors and action frequencies.

    • The AI would probably have an advantage, as it's less likely to have discernible tells of its own, because all of the easily discoverable ones would have been caught much earlier on. Human players who've only played against other humans might not have spotted an obvious and exploitable pattern in their own play yet, and even though the AI "knows" what it is, it can't easily tell them what they're doing wrong.
    • The obvious problem is that humans are bad at randomization and computers are reasonably good at that. If the computer then does something at random, the humans will not be able to copy it.

  • Must be the first time ever I've heard of a Texas Holdem game where luck was not a factor.
    • IMO a lot of poker is about being able to read the person you are playing against and know when they are bluffing or not. There is an element of luck when you make a bad call but get the card you need to win.
      • Re: (Score:2, Informative)

        by Anonymous Coward

        Most of poker is based on math, not tells.

        You get an idea of whether they are loose or tight players (i.e. bluff often vs bluff rarely), by their play/fold ratios on the various streets.

        You get the idea of their possible hands from their betting patterns (e.g. if a player who doesn't play often raises pre-flop and then tries to represent having an 8-5 for a straight on a rainbow board, it's not convincing. They'll have an over-pair to the board, or possibly a set at most).

        And lastly you play what the od
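        The comment above trails off, but "playing the odds" in this sense usually comes down to the standard pot-odds comparison. A minimal Python sketch with made-up numbers:

          # Minimal pot-odds check: call only when your estimated chance of winning
          # (equity) covers the price the pot is laying you. All numbers are made up.

          def pot_odds(pot: float, to_call: float) -> float:
              """Fraction of the final pot you must put in to call."""
              return to_call / (pot + to_call)

          def should_call(equity: float, pot: float, to_call: float) -> bool:
              """Calling is break-even or better when equity >= pot odds."""
              return equity >= pot_odds(pot, to_call)

          # 1,000 already in the pot, 500 to call: you need at least 500/1500 = 33% equity.
          print(pot_odds(1000, 500))           # 0.333...
          print(should_call(0.36, 1000, 500))  # True: a ~36% flush draw is a profitable call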

        • You can't claim it's AI if you are using computer math skills. Dial it back to human math skills and augment that with an ability to read people with a camera, and then you have real AI.
          • Everything GP described is human math skills. Even rank amateur poker players are doing all of that in their heads. It's elementary arithmetic/probability.

            • But what element of human psychology was there in the way that the computer played? AI is not about winning games over humans, it is about mimicking human psychology.
          • Which human's math skills? Humans have a staggering range of capabilities. Average? Then average in which way? A savant? Then a savant in which way?

            And what about autistics who happen to be very good at poker but lousy at reading human expressions?

            There are multiple different ways to be good at poker, and this system is just using one of them, and is clearly quite good at this particular way.

            I also didn't see anyone making the claim that this was a hard or general purpose AI. It's not, and you acting like s

    • Luck tends to not be the most consistent factor in a tournament, that's why you frequently see the same professional players making it through to the top 50 and frequently even the final table out of thousands. When you take the human factor out and boil it down to purely mathematics (i.e. chance and betting patterns etc) I would expect a computer to win long term. Poker though is more than just mathematics, it is about reading people and quite often manipulating them, playing a computer isn't the same as p
      • by Nunya666 ( 4446709 ) on Tuesday January 31, 2017 @10:59PM (#53778787)

        Luck tends to not be the most consistent factor in a tournament, that's why you frequently see the same professional players making it through to the top 50 and frequently even the final table out of thousands. When you take the human factor out and boil it down to purely mathematics (i.e. chance and betting patterns etc) I would expect a computer to win long term. Poker though is more than just mathematics, it is about reading people and quite often manipulating them, playing a computer isn't the same as playing poker at a table of people.

        I play a lot of poker, both in cash games and in tournaments. Luck is a HUGE factor in tournaments. That is why so many unknown players win them. If it was mostly skill, then only the pros would be winning them, which absolutely IS NOT the case.

        There is a saying in poker tournaments: in order to win, you have to win your share of "coin flips." An example of a "coin flip" is my matched (paired) hole cards against your two cards that are higher than mine. For example, my pair of 8s against your King-Queen. My pair only has a 52% chance of winning the hand. You have a 48% chance of improving your hand with any King or Queen, or making a higher straight, flush, or full house than I do.

        That being said, luck is only a factor at the end of the hand, when both of us have to show our cards. I often say that "poker is about everything except the cards, unless and until you have paid to see my cards." If I can make you fold with just my betting, then the cards are irrelevant. That happens in AT LEAST 4 out of 5 hands in live games. Most of the time, poker is about the size of my chip stack vs. yours, who bet first, how much they bet, how often they bet, how often they raise, how many hands they play (vs. just folding), how much they usually raise, do they have an aggressive image at the table, do they ever re-raise, have you seen them fold every time someone raises, etc., etc.
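        A quick Monte Carlo sketch of the "coin flip" point: even a player who is always a slight favorite when all-in loses most tournaments in which they must survive several such flips. The 52% edge and the six flips per tournament are purely illustrative assumptions.

          # Sketch: how often a constant 52% favorite survives six all-in "coin flips".
          import random

          def survives(flips: int = 6, win_prob: float = 0.52) -> bool:
              return all(random.random() < win_prob for _ in range(flips))

          trials = 100_000
          deep_runs = sum(survives() for _ in range(trials))
          print(f"survived every flip in {100 * deep_runs / trials:.1f}% of tournaments")
          # Roughly 0.52**6, about 2%: the edge pays off over the long run, but any single
          # tournament is dominated by which way the flips happen to fall.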

    • by alexo ( 9335 )

      Luck can be a big factor in an individual hand, but in the long term, it evens out.

      An interesting read:
      https://www.bloomberg.com/view... [bloomberg.com]

    • Over that many days and that many hands and that consistently? That wouldn't be luck, that would be a quantum miracle.

  • It sounds like it's just better at calculating the odds than humans are, which is not much of a feat, really. I mean... it would almost be surprising if it couldn't.

    I'm not trying to diminish the significance of the research... but what is the real innovation here?

    • by gweihir ( 88907 )

      Nothing. It will also be better at bluffing, because a machine has no way to betray itself.

      This is a nice stunt, but this is still just statistics or "weak AI", if you must label it as AI.

    • by Ogive17 ( 691899 )
      If you only bet when the odds were in your favor, you would slowly bleed your stack away because the other players would immediately fold as soon as you bet. Then you'd have a "bad beat" when someone called and got an improbable card to win the hand when their odds were incredibly low. You'd probably lose a lot of money in those situations.

      With a little experience, simply calculating your odds is quite easy to do. However knowing when you're in a position of power or weakness is not nearly as easy to c
      • by vux984 ( 928602 )

        If you only bet when the odds were in your favor, you would slowly bleed your stack away because the other players would immediately fold as soon as you bet.

        Which is where after 120,000 hands of play, you've also optimized how often you need to bet on crappy hands. And even optimized how much you need to bet... etc.

        With a little experience, simply calculating your odds is quite easy to do.

        Your odds of having the best hand at the table yes. The odds of having the best hand at the table when PlayerA has done X and PlayerB has done Y... or the odds PlayerA will fold if you bet... Z... not so much. But with a few hundred thousands hands of experience you could start to nail those numbers down. So when you are playing the odds... you aren'
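        For the simplest version of "optimizing how often you bet crappy hands", the textbook river indifference calculation looks like the sketch below. It is a toy model of a single polarized river bet, not what Libratus actually computes.

          # Toy game-theory sketch: pick a bluff-to-value ratio that makes the opponent
          # indifferent between calling and folding, and a calling frequency that makes
          # pure bluffing unprofitable.

          def bluff_fraction(pot: float, bet: float) -> float:
              """Fraction of the betting range that should be bluffs."""
              return bet / (pot + 2 * bet)

          def call_frequency(pot: float, bet: float) -> float:
              """How often the opponent must call so that bluffs show no profit."""
              return pot / (pot + bet)

          pot = 100.0
          for bet in (50.0, 100.0, 200.0):
              print(f"bet {bet:.0f} into {pot:.0f}: bluff {bluff_fraction(pot, bet):.0%} "
                    f"of bets, opponent calls {call_frequency(pot, bet):.0%}")
          # A pot-sized bet works out to bluffing about a third of the time, with the
          # opponent calling about half the time.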

  • I'm looking forward to the AI that absolutely destroys the stock market which in turn will end the stock market.

    • that absolutely destroys the stock market which in turn will end the stock market

      Seems kind of redundant to end something already destroyed.

    • by gweihir ( 88907 )

      That is not going to happen. The stock-market is a mechanism to redistribute wealth from small-time players to the big ones. The big ones will be the ones with the AI (well, "statistical decision engine" actually) and they will make very sure not to kill the goose that lays golden eggs...

    • by sudon't ( 580652 )

      I'm looking forward to the AI that absolutely destroys the stock market which in turn will end the stock market.

      Congratulations on waking up from the coma! I have some good news, and some bad news, about the stock market...

    • by moeinvt ( 851793 )

      I don't know whether you'd call it "AI" but the stock market has already been "destroyed" by HFT and other algorithms. The market no longer serves its real purpose as a price discovery mechanism and investment vehicle. It's now a game of the big guys with the fastest computers skimming razor thin margins on billions of small transactions. The little guy is totally being robbed.

      Two reasons it hasn't "ended" already:
      1. The U.S. federal government offers tax incentives to the little people if they put thei

      • If you don't do many trades as a small player, it doesn't matter that someone is skimming razor thin margins off your transactions.
  • by gweihir ( 88907 ) on Tuesday January 31, 2017 @08:29PM (#53778279)

    Humans are still bad at statistics and machines can lie without any outward signs.

    What is the point of this news?

    • by JoshuaZ ( 1134087 ) on Tuesday January 31, 2017 @09:13PM (#53778475) Homepage
      Poker has for a long time been a game that was considered very difficult for AIs to do. We're now in a situation where very rapidly many things that we think of as hard problems for AI (playing poker, playing Go, image recognition, translation) are having AI close to equal or surpass humans. That should be concerning at multiple levels: first, this will have large-scale economic impacts. Second, and potentially more disturbingly, it means that we are closer to the point where AI may pose an existential risk to humans, and that tipping point could occur with very little warning.
      • by gweihir ( 88907 )

        Second, and potentially more disturbingly, it means that we are closer to the point where AI may pose an existential risk to humans, and that tipping point could occur with very little warning.

        Not at all. These are all "weak AI" applications. Weak AI cannot do general problems, unlike human beings. There is no existential risk here. There is a risk for the social order, as it turns out that many things do not actually need intelligence but can be done by specialized automation (i.e. weak AI) and hence quite a few jobs will be vanishing, but that is essentially it risk-wise.

        • Missing the point: yes, these are weak AIs. But if a weak AI can do almost all the tasks that humans can do, and do them better, then that is all the more reason to expect that the first general AIs, when they arise, will quickly be so much smarter than humans that their goals will be all that effectively matters.
          • by Dog-Cow ( 21281 )

            Define "smarter" in a way that applies to AI. There is not a single AI that has determined its own goals. You seem like an ignorant, illogical and stupid person. Perhaps you should be worried about being replaced by a tool, but we humans don't have that worry.

          • Weak AI is limited.

            By this I mean, AI is apparently good at looking at large data spaces and repeatedly making a decision based primarily upon probability (probability being the search for a pattern where no obvious pattern exists - that's what our brains do all the time). Once the decision has been calculated, the action can be performed, assuming the action is fairly mundane (examples: responding to requests for information, identification and manipulation of pieces) This also includes complex probabi
            • by gweihir ( 88907 )

              Exactly. Although since there are absolutely no positive research results at this time when it comes to strong AI, > 100 years to availability is a better bet, and it may well be never.

              • Strong AI is like the secret marble you can feel in a stack of envelopes. Research will continue to peel away all the envelopes until it reaches the bottom of the stack, without ever finding a marble. The marble is an illusion.
          • by gweihir ( 88907 )

            It is far from "almost all tasks". It is strongly formalized tasks with strong rules and good statistical models.

            There is absolutely no reason to believe at this time that general AI ("strong AI") will ever become available, except a blind, religious trust in "progress". There are not even theories of how general AI could be created, despite more than half a century of research. Nothing at all.

            • by nasch ( 598556 )

              How many common human tasks really require general intelligence? It seems to me we need general intelligence to be good at lots of different tasks, but specialized AI could take over nearly all of them at some point. Maybe all of them.

      • by mentil ( 1748130 )

        it means that we are closer to the point where AI may pose an existential risk to humans, and that tipping point could occur with very little warning.

        Hold your horses, there. If by 'existential risk' you mean 'Cylons nuking the colonies' then this poker AI has little to do with a general intelligence that would be capable of desiring to exterminate humanity.
        Now if you meant 'industrial revolution that will leave all humans unemployable because AI does everything better' then you'd have a point. However, the thing with specialist systems is that they have to be recoded to be repurposed to other tasks. Eventually we may have AI that is able to learn a vari

        • AIs that desire to exterminate humanity are more the Hollywood version of threatening AI. What we need to be worried about first is the AIs that are indifferent; the ones that would do us harm as a side effect of getting something else done, simply because nobody bothered to program them to tell right from wrong.

          (I'm not trying to say that the human unemployability part isn't an issue -- because it definitely is -- but it's important to realize that when people talk about an AI existential risk, it's not Cy

      • by ledow ( 319597 )

        No, it's not.

        Poker is an easy game to describe. You can get perfect statistics for chances and what remains in the deck (if applicable).

        What you couldn't do up until now was the BETTING on poker. When you have $64k of chips in front of you, the optimal amount to bet is not obvious or easily iterated by brute force. Just sheer size of the potential "game-tree" in that betting was the only obstacle.

        Go was the same - the game tree is huge, and we now have heuristics that can cull it earlier and better than
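        A back-of-the-envelope sketch of the bet-sizing explosion described above: if every decision point offers fold, call, and k distinct raise sizes, the number of action sequences grows as (k+2)^d. The eight decision points below are an arbitrary simplification of a real hand.

          # Counting action sequences for a crude betting model: fold, call, plus k
          # raise sizes at each of d decision points. Real no-limit hold'em is far
          # messier; this only shows the growth rate.

          def sequences(raise_sizes: int, decisions: int) -> int:
              return (2 + raise_sizes) ** decisions

          for k in (1, 10, 100, 1000):
              print(f"{k:>5} raise sizes, 8 decisions: {sequences(k, 8):,} sequences")
          # Moving from a handful of abstract bet sizes toward "any amount" multiplies
          # the tree at every single decision point.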

        • I recommend the use of the phrase "Game Graph" rather than "Game Tree".
          For this reason: https://en.wikipedia.org/wiki/Graph_(abstract_data_type)
        • by nasch ( 598556 )

          There was one AI that learned to recognize wolves by the fact that they had snow around them, because the training photos of wolves all had snow.

        • by epine ( 68316 )

          Just sheer size of the potential "game-tree" in that betting was the only obstacle.

          You are so totally wrong. Conquering the game tree only gets you to a provably optimal Nash equilibrium, at which point you will never lose, but there's no guarantee you will ever win, either. Conquering the game tree isn't worth much if you only manage to arrive at an insanely conservative belt-and-suspenders playing posture.

          Here's the rub: in order to maximize your win rate against imperfect opponents you must *deviate* f
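          Rock-paper-scissors is the simplest place to see the gap described above between an equilibrium strategy and an exploitative one; poker is vastly more complicated, but the logic is the same. A small Python sketch against a hypothetical opponent who over-plays rock:

            # Nash (uniform random) never loses in expectation but also never profits;
            # the best response to a biased opponent wins steadily. Payoff: +1 win, -1 loss.
            import random

            MOVES = ["rock", "paper", "scissors"]
            BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

            def payoff(me: str, them: str) -> int:
                if me == them:
                    return 0
                return 1 if BEATS[me] == them else -1

            def biased_opponent() -> str:
                return random.choices(MOVES, weights=[0.5, 0.25, 0.25])[0]  # too much rock

            n = 100_000
            nash = sum(payoff(random.choice(MOVES), biased_opponent()) for _ in range(n))
            exploit = sum(payoff("paper", biased_opponent()) for _ in range(n))
            print(f"equilibrium (uniform) average payoff: {nash / n:+.3f}")    # about 0
            print(f"best response (always paper) payoff:  {exploit / n:+.3f}")  # about +0.25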

      • by Dog-Cow ( 21281 )

        Please explain how AI will pose an existential threat to humans. There is no logical path between improved AI and ending human existence.

        • There's been a lot written on this. I recommend Bostrom's book "Superintelligence" for a start. The short answer is that a sufficiently smart AI with a goal set that doesn't include human existence or happy humans is a problem. The classic thought example is an AI programmed to maximize the number of paperclips which responds by taking over the planet and turning everything into paperclips.
    • by The Evil Atheist ( 2484676 ) on Tuesday January 31, 2017 @09:40PM (#53778561)
      A lot of people have been trumpeting on about how computers will never be able to beat humans at poker. It's the same old song and dance. 1) Identify activity X that AIs can't do. 2) Attribute it to some made-up quality that only humans supposedly possess. 3) AI beats humans. 4) "Well, what's so difficult about activity X? No one has ever claimed it was unachievable." 5) Restart process with activity X+1.
      • by gweihir ( 88907 )

        People that understand statistics have not made such claims. People that are afraid of "AI" without understanding it have been making them.

        This is weak AI. It does not surpass human beings at _understanding_ poker, because weak AI has no understanding. It just turns out that statistics-based automation can play poker really well (without understanding it). The problem here is humans overestimating the difficulty of poker, not weak AI doing impressive feats.

        Of course, a lot of tasks that humans approach using

        • Of course, a lot of tasks that humans approach using intelligence, or think they approach using intelligence, do not actually require intelligence.
          [...]
          On the other hand, if you look at what the smartest 10% humans can do when they really think about something, there is no way to replicate that with weak AI. And strong AI is not even on the distant horizon.

          That just sounds like the same backtracking we've seen over the years: everything from shifting the goalposts to making up ad hoc definitions to try to define the problem away.

          Most humans can't do what the 10% smartest can. And most of the 10% smartest can't do everything all the other people in that 10% can. If we were to head down your path of excuses, you may as well just get it over with and say humans aren't intelligent, which misses the point, really. Can computers do what humans do? Increasingly yes. That's

          • by Dog-Cow ( 21281 )

            Yep. gweihir's argument is a twist on the God of the Gaps argument. He essentially defines intelligence as "that which a computer can't be programmed to do". So whenever a computer is programmed to do something new, he simply claims that activity does not require intelligence.

            • by gweihir ( 88907 )

              And if you had had even a brief look at the definition of weak AI and strong AI, then you would not spout such nonsense. These are not "my" definitions. These are what is used in the respective CS field. But apparently you are more in love with your own misconceptions than understanding things. Pathetic.

      • 1) Identify activity X

        For a short while that post brought up some really bad memories...

      • by radl33t ( 900691 )
      people do this about everything, especially other groups of people. For example those unthinking robotic Chinese who are incapable of innovation, unlike exceptional Americans who were gifted these abilities by god almighty and Ronald Reagan.
    • I guess the point of your post is to say that the top poker players usually win thanks to psychology and good - but not perfect - technical skills, basically. Against a comp (expected to be statistically accurate, if well programmed) those top humans will fail, the way Kasparov lost against powerful machines. In this case, the AI under the hood, if limited to statistics, is not that elaborate.
      • by gweihir ( 88907 )

        And that is exactly it. Specifically for poker, there are also a lot of wannabes with too much money that think they are good players, easy marks for the pros. This statistical engine does not make any beginner's mistakes and plays an almost perfect mathematical strategy. Apparently that is enough.

    • The machine isn't allowed to read players' outward signs either, so it is balanced. And statistics don't take players' dynamic heuristics into account. Guessing a player's hole cards based on their behavior, particularly when they can regularly change up their tactics, is extremely difficult stuff. Playing in a manner that allows money to be won in a timely manner, as opposed to a zero-sum game, is even more difficult.
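      One simple way to model "guessing a player's hole cards based on their behavior" is as a Bayes update over hand categories. The prior and the per-category raise frequencies below are invented purely for illustration:

        # "Hand reading" as a Bayes update: P(hand | raise) is proportional to
        # P(raise | hand) * P(hand). All numbers are made up.

        prior = {"premium pair": 0.05, "big cards": 0.25, "speculative": 0.30, "junk": 0.40}
        raise_rate = {"premium pair": 0.95, "big cards": 0.70, "speculative": 0.30, "junk": 0.05}

        unnormalized = {h: raise_rate[h] * p for h, p in prior.items()}
        total = sum(unnormalized.values())
        posterior = {h: v / total for h, v in unnormalized.items()}

        for hand, p in posterior.items():
            print(f"{hand:13s}: {prior[hand]:.0%} prior -> {p:.0%} after seeing a raise")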
  • It must win by judging the tells of its opponents through a camera, not by calculating which possible outcomes have a greater chance of success. That would make it AI.
    • by Dog-Cow ( 21281 )

      Ah. The old "AI is whatever a computer can't do" argument. It's laughably sad that you can even think that way. Of course, it may be too much of an assumption that you can even think.

      • I didn't say AI is whatever a computer can't do. I guess if AI is about a computer's ability to calculate better than a human we have had it for 50 years now, all further study into the area should cease. I mean really, computers were beating people at poker 30 years ago.
    • RTFA; it did not use a camera to read tells. This was more like a game of online poker. Yes, it dynamically calculated outcomes, and used outcomes of previous hands to predict player behavior and adjust its decisions accordingly.
  • Cool... (Score:3, Funny)

    by The Grim Reefer ( 1162755 ) on Tuesday January 31, 2017 @09:14PM (#53778477)

    An AI that performs well at no-limit Texas Hold'em could also potentially tackle real-world problems with similar levels of uncertainty.

    We should plug it into all military hardware and put it in charge of all ICBM and SLBM launch decisions. Oh, and its hardware should have lots and lots of blinking lights...

    and reel-to-reel tape drives.

  • by bytesex ( 112972 ) on Wednesday February 01, 2017 @04:08AM (#53779423) Homepage

    Everybody keeps saying that, somehow, a computer being able to play poker is the next step up from Go. I think this 'easy' victory shows that it's not that, but that poker is really just quite a stupid game. Which _people_ try to play by 'reading faces', but which you _should_ play - as with any gamble - by statistics.

    • by nasch ( 598556 )

      If you're suggesting that good poker players don't use statistics, you're very much mistaken. Anyone other than a rank beginner will do at least some calculation of odds.

    • And you'd be thinking wrong. Other forms of poker may follow that model, but people crunched the numbers on hold'em decades ago. It's different. That's why it's a multi-billion dollar industry. Even taking away tells and just going by bets, as in online poker, it still has that dynamic.
  • This is interesting, but I have to wonder how much the fact that the best human players have optimized their strategy to beat humans is a factor. I'd like to see whether the AI would maintain its advantage when the human players have become more familiar with its play style.
    • The players commented that they were trying to change up their play style based on the AI, but it would change too, revising its algorithms at the end of each night. 120,000 hands is a pretty decent amount of time to build up experience, but they said Libratus just kept getting better.
