
OpenAI's Bots Defeated Former Pro E-Sports Players At Dota 2 (theverge.com) 49

On August 5th, OpenAI's bots defeated four former professional Dota 2 players in two of the three matches played. "There were a few conditions to make the game manageable for the AI, such as a narrower pool of 18 Dota heroes to choose from (instead of the full 100+) and item delivery couriers that are invincible," reports The Verge. "But those simplifications did little to detract from just how impressive an achievement [the] win was." From the report: The OpenAI Five triumphed in convincing fashion in the first game, not allowing the human players to even destroy one of their defensive towers. The humans recovered a little in game two, conquering one tower, but they still got demolished. And finally, in a game three played purely for pride, the humans managed to squeeze out a win. What stands out when you watch the matches is the apparent intelligence of the AI's decisions and the inhuman absence of any indecision. The typical Dota 2 game, even on the professional tier, involves quite a bit of equivocation about whether to engage in a fight, try and shift it to a more favorable battleground, or run away from it completely. The OpenAI team just doesn't need the processing time that humans require, which made its play appear unnatural -- but only in the speed and crispness of the decision-making, not in the content of those decisions.
  • by uCallHimDrJ0NES ( 2546640 ) on Monday August 06, 2018 @05:10PM (#57081354)

    Teaching bots to miss convincingly was the first problem we had to solve back when we were constructing quakeworld bots. It's hard for me to believe that it's some kind of news when bots defeat humans.

    • by Kjella ( 173770 )

      Teaching bots to miss convincingly was the first problem we had to solve back when we were constructing quakeworld bots. It's hard for me to believe that it's some kind of news when bots defeat humans.

      That's like saying I built a bot to play tic-tac-toe, it's hard to believe Deep Blue and AlphaGo made the news. Making an aimbot is trivial, making a system to balance a number of complex objectives, form strategies and adapt to rock-paper-scissors choices is hard.

      • Lol, "making an aimbot is trivial". So, old quakeworld bots are tic tac toe players, and modern bots are systems that balance a number of complex objectives? Is that what you're getting at?
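The "teaching bots to miss" idea above can be sketched by perturbing a perfect aim angle with Gaussian noise (a toy illustration; the function names and tolerance are invented, not actual QuakeWorld bot code):

```python
import random

def humanized_aim(target_angle, skill_sigma=0.05, rng=None):
    # Perturb a perfect aim angle (radians) with Gaussian noise;
    # a larger skill_sigma makes the bot miss more often.
    rng = rng or random.Random()
    return target_angle + rng.gauss(0.0, skill_sigma)

def is_hit(aim_angle, target_angle, tolerance=0.03):
    # A shot connects when the aim error is within the hit tolerance.
    return abs(aim_angle - target_angle) <= tolerance
```

Sweeping skill_sigma from 0 upward moves the bot smoothly from aimbot-perfect to convincingly fallible, which is the tuning knob the parent comment describes.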

  • by SuperKendall ( 25149 ) on Monday August 06, 2018 @05:14PM (#57081376)

    So wait a second, the pros beat the bots on round three...

    Doesn't that suggest the pros simply needed time to learn how the bots thought and reacted to different situations? And that they did so by the third game, where after two sound thrashings they pulled out a win? Even if it was close, that is quite a performance leap they managed.

    I would have been a lot more interested in the results of a ten-game series where the pros had adequate time to understand the inhuman patterns of play the bots had.

    • You missed the part where the audience selected the "keystone cops" avatars for the computer to use during round 3.

      • Ok, that makes a lot more sense, but it would have been nice to note that in the summary, which implied something very different.

      • Indeed, at the very start of the game the computer gave itself a 2% chance of winning with that selection. In the preceding two games it started out with more than 90% confidence of winning. It still played very well, but that was just an unwinnable game.

    • by SirMasterboy ( 872152 ) on Monday August 06, 2018 @06:01PM (#57081628)

      For games 1 and 2, the AI got to draft its own choice of 5 heroes out of the allowed 18, in an alternating picking sequence with the human opponents as is traditional. When each game started, the AI announced a 95% confidence that it would win, based solely on the heroes it had chosen against the humans' choices.

      For game 3, the audience picked 5 "random" heroes that had little to no synergy as a DotA team lineup and were not good options for each of a DotA team's "roles" (pushing, ganking, tanking, supporting, etc.). At the start of game 3, the AI announced that it calculated only a 2.9% chance of winning against the human team with the heroes it was given.

      Even with the severely gimped hero choices, it did an admirable job of trying to make it work with some unusual strategies. The professional casters even commented on how the AI was seemingly doing all the right things given its poor position, and that it actually did better in the match than anyone had expected with the heroes it was dealt.
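The pre-draft confidence numbers quoted here (95%, 2.9%) suggest some model mapping draft features to a win probability. A hypothetical sketch of that idea (the feature names and weights are invented for illustration, not OpenAI's actual model):

```python
import math

def draft_win_probability(synergy_advantage, counter_advantage):
    # Combine hypothetical draft-advantage scores (positive favors
    # the bots) and squash through a logistic function into a
    # probability in (0, 1).
    score = 1.5 * synergy_advantage + 1.0 * counter_advantage
    return 1.0 / (1.0 + math.exp(-score))
```

With a strongly favorable draft score this yields a probability above 0.9, and a badly gimped draft yields a few percent, matching the flavor of the numbers the commenters describe.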

  • Team of casters (Score:5, Informative)

    by Tough Love ( 215404 ) on Monday August 06, 2018 @05:16PM (#57081396)

    Those are hardly professional players; that is a team of casters who never practice together. It is doubtful that any of them could get onto even a second-tier team today (well, Moon, but then again maybe not). It is still an impressive benchmark for AI, and top-tier pros are sure to get beaten sooner or later too, but what is the point of misrepresenting what actually happened?

    • Most of them have played and practiced together quite a bit actually.

      https://liquipedia.net/dota2/V [liquipedia.net]...

      Blitz, Merlini, and Capitalist most recently, but Fogged used to play with them as well, and Fogged has played with Merlini for a very long time.

      Blitz and Moon are certainly "pros". Merlini used to be pro, but sure, he is not as good as he used to be, and Fogged and Capitalist have played on teams at least semi-professionally.

      I mean, they have to start somewhere, this is just more testing to

      • Right, still impressive as I said, and to be fair the headline was "former pros" and the article goes out of its way to note the progression from casual players up to the current ex-pros. Reaction time is a huge part of Dota and a huge advantage for a bot, also noted in the article.

  • by martyros ( 588782 ) on Monday August 06, 2018 @05:19PM (#57081408)

    One thing the summary didn't point out was that in game three they didn't let the AI choose its own heroes. The humans had basically conceded defeat, so for the third game they were just experimenting: they let the "audience" choose the heroes, and the audience specifically chose heroes that would be poor at playing the style the computers had played so far, just to see if the AI could change its playing style to adapt to the new heroes. And the human team chose exactly the heroes that the AI had chosen for the first two games. After that draft, the AI's rating of its own chance of winning was 2.3%, based only on the draft. The AI adapted somewhat, but not much; and near the end of the game, the AI seemed to be doing a bunch of fairly sub-optimal things: it knew it couldn't really win, so it didn't know what to do except random micro.

    So, it was an interesting data point -- particularly the importance of choosing the right set of heroes. But it was certainly not a victory for humans. The AI soundly trounced them except when it was purposely crippled.

    Matches, post-game commentary, and other information available on the OpenAI Blog [openai.com] about the match.

    • by zlives ( 2009072 )

      so... it's not AI, just another database.

      • by lgw ( 121541 )

        Sorry, that's what "AI" means. Just like "hacker" means someone who attacks computer systems. Some linguistic battles aren't winnable.

      One thing the summary didn't point out was that in game three they didn't let the AI choose its own heroes. The humans had basically conceded defeat, so for the third game they were just experimenting: they let the "audience" choose the heroes, and the audience specifically chose heroes that would be poor at playing the style the computers had played so far, just to see if the AI could change its playing style to adapt to the new heroes. And the human team chose exactly the heroes that the AI had chosen for the first two games. After that draft, the AI's rating of its own chance of winning was 2.3%, based only on the draft. The AI adapted somewhat, but not much; and near the end of the game, the AI seemed to be doing a bunch of fairly sub-optimal things: it knew it couldn't really win, so it didn't know what to do except random micro.

      So, it was an interesting data point -- particularly the importance of choosing the right set of heroes. But it was certainly not a victory for humans. The AI soundly trounced them except when it was purposely crippled.

      Matches, post-game commentary, and other information available on the OpenAI Blog [openai.com] about the match.

      Sounds like a classic case of not all heroes being equal (i.e. some are OP, some are too nerfed) if the odds were that obvious after only character selection.

  • by Anonymous Coward

    I'll be impressed when they create an AI that can only use mouse and keyboard to play.

  • Macros (Score:3, Insightful)

    by that this is not und ( 1026860 ) * on Monday August 06, 2018 @05:22PM (#57081424)

    The only way to make a competition like this fair is to let the human players use as many powerful macros and software aids as they like.

    The other alternative would be an airgap between the AI and the gaming computer in the form of electromechanical servos to drive the controllers and a camera with optical recognition software to see the screen.

    Without one of these two things, it's an uneven match and really just a joke.

  • Let machines play machines, while we watch. Make that a sport.
  • Bots Cheating (Score:5, Informative)

    by Luthair ( 847766 ) on Monday August 06, 2018 @05:39PM (#57081536)

    It should be noted that the bots are cheating: they have access to information in the game that players do not. E.g., they have no need to select units to see inventories, precise health, mana, etc. On top of the best possible information, they also have perfect control input.

    They also make a big point about how the draft is considered one of the more difficult parts of Dota and how impressive it is that their bots can draft... but they restrict the pool from the normal 115 heroes to 18 and eliminate bans, rendering it entirely unlike Dota drafting. They also banned 2 items, disallowed summons/illusions, gave each side 5 invulnerable couriers (instead of a team having 1, which can be killed), and removed scanning (a time-limited radar-like ability).

    Don't get suckered into the news hype cycle until the bots use a screen with USB input and play the actual game instead of an arbitrary one.

    • Re:Bots Cheating (Score:4, Interesting)

      by SirMasterboy ( 872152 ) on Monday August 06, 2018 @06:14PM (#57081694)

      People need to get more informed. There are limitations, yes, but there previously were a LOT more limitations, and they are continuing to lift them.

      4 months ago the bots could only play 5 specific heroes against the same 5 specific heroes; they couldn't ward, and they couldn't use or play against invisibility or fight Roshan.

      Those limitations have since been lifted, and the 5 static heroes have been expanded to a choice for both teams from a pool of 18. They will obviously keep lifting these restrictions until the AI can play against illusions/summons and with the full hero pool.

      Also, the AI has a 200ms delay on every action, so I would hardly call it "perfect".

      But DotA is so complicated that just playing mechanically perfectly is not enough to win. You need to play the proper strategies to win, and what's amazing is that the AI is playing a strategy that it formed organically on its own, simply by playing itself for millions of games with very simple rules about what is good and what is bad.
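The 200ms action delay mentioned above can be modeled as a fixed-length buffer between deciding an action and executing it; a minimal sketch (the tick rate, class name, and API are assumptions, not OpenAI's actual code):

```python
from collections import deque

class DelayedActuator:
    # Queue actions so each one executes a fixed number of game ticks
    # after it was decided (e.g. 200 ms at ~15 decisions/s is about
    # 3 ticks of latency).
    def __init__(self, delay_ticks=3):
        self.queue = deque([None] * delay_ticks)

    def step(self, decided_action):
        # Push the newly decided action; pop and return the action
        # decided delay_ticks ago (None while the buffer warms up).
        self.queue.append(decided_action)
        return self.queue.popleft()
```

The effect is that the bot's observations can be perfectly accurate while its reactions still lag by a fixed human-scale interval, which is the distinction the parent comment is drawing.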

  • by mentil ( 1748130 ) on Monday August 06, 2018 @05:41PM (#57081548)

    AI defeats top Generals at military wargame, put in charge of country's military strategy.

  • The AI won the first 2 games, but the description makes it sound like the humans were learning quickly and getting better every game, winning by the 3rd game. The question is: would the humans continue to win in subsequent games?
  • "But those simplifications did little to detract from just how impressive an achievement [the] win was."

    Which is... not very impressive.

    Ok, so I haven't actually played Dota 2. I'm assuming it's a twitchy game where reaction speed and knowing when to attack/run is vital. That's how Zergling Blood and the first Dota were. When this whole genre was a mod of StarCraft and WarCraft, it wasn't very deep. And journalists are just legendarily bad at hyping AI and not understanding games. Way too many damn people think that "games" are just "kid's stuff". And on the flip side, way too many kids are too easily impressed
