AI Google Software Entertainment Games

DeepMind's AI Agents Exceed 'Human-Level' Gameplay In Quake III (theverge.com) 137

An anonymous reader quotes a report from The Verge: AI agents continue to rack up wins in the video game world. Last week, OpenAI's bots were playing Dota 2; this week, it's Quake III, with a team of researchers from Google's DeepMind subsidiary successfully training agents that can beat humans at a game of capture the flag. DeepMind's researchers used a method of AI training that's also becoming standard: reinforcement learning, which is basically training by trial and error at a huge scale. Agents are given no instructions on how to play the game, but simply compete against themselves until they work out the strategies needed to win. Usually this means one version of the AI agent playing against an identical clone. DeepMind gave extra depth to this formula by training a whole cohort of 30 agents to introduce a "diversity" of play styles. How many games does it take to train an AI this way? Nearly half a million, each lasting five minutes. DeepMind's agents not only learned the basic rules of capture the flag, but strategies like guarding your own flag, camping at your opponent's base, and following teammates around so you can gang up on the enemy. "[T]he bot-only teams were most successful, with a 74 percent win probability," reports The Verge. "This compared to 43 percent probability for average human players, and 52 percent probability for strong human players. So: clearly the AI agents are the better players."
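The trial-and-error self-play loop the summary describes can be sketched, very loosely, like this. This is a toy illustration, not DeepMind's method: the real system trains deep networks on raw pixels over roughly 450,000 five-minute games, while here each "agent" is just a skill number nudged up or down by wins and losses across a cohort of 30.

```python
import random

def play_match(skill_a, skill_b):
    """Return True if agent A wins; win chance follows relative skill."""
    return random.random() < skill_a / (skill_a + skill_b)

def self_play(num_agents=30, num_games=10000, lr=0.01):
    # A diverse cohort of agents, echoing the 30-agent DeepMind setup.
    skills = [random.uniform(0.5, 1.5) for _ in range(num_agents)]
    for _ in range(num_games):
        # Pick two distinct agents and pit them against each other.
        a, b = random.sample(range(num_agents), 2)
        if play_match(skills[a], skills[b]):
            skills[a] += lr                      # reinforce the winner
            skills[b] = max(0.1, skills[b] - lr)  # penalize the loser
        else:
            skills[b] += lr
            skills[a] = max(0.1, skills[a] - lr)
    return skills

cohort = self_play()
print(sorted(cohort)[-3:])  # the strongest agents after training
```

No agent is told how to win; strategies (here, just higher skill values) emerge only from which behaviors survived repeated competition.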
  • Wow! (Score:5, Insightful)

    by Anonymous Coward on Thursday July 05, 2018 @06:47PM (#56899178)

    I'm sure aimbotting & instantaneous team communication had nothing to do with their success.

    • by Anonymous Coward

      Yeah, give the humans hacks too: aimbot and ESP. Humans can be trained by reinforcement too. We can't match that aim speed and comm speed, though. But then again, that is why we use technology.

    • by gweihir ( 88907 )

      Indeed. Another meaningless stunt.

    • by Kaenneth ( 82978 )

      FTFA:"DeepMind’s agents also didn’t have access to raw numerical data about the game — feeds of numbers that represents information like the distance between opponents and health bars. Instead, they learned to play just by looking at the visual input from the screen, the same as a human."

      • And? A computer can process the image information on a screen a thousand times faster and react to it a thousand times faster. Numbers or pictures is irrelevant; computers have an inherent advantage here. The fact that it didn't reach a 100% victory rate says it still has a ways to go, given it's starting from such a huge tactical advantage.
        • I think you are minimizing the raw amount of computing power it takes to figure out what the AI should be shooting at vs. a background texture. Just because vision is easy for us doesn't mean it is easy for a computer.

    • The AI cheats by being able to say "I slept with your whore mother and raped your gay dad." without having to type it using the keyboard.
  • Bad Challenge (Score:5, Insightful)

    by HeckRuler ( 1369601 ) on Thursday July 05, 2018 @06:50PM (#56899190)

    But that's a skill-based game, as opposed to strategy or anything needing intelligence. "Skill" as in reaction time to seeing an opponent and successfully moving and clicking the mouse on their head. Give me a couple of minutes and I can script up a bot that dominates players. That's not hard. And it's not even fun.

    To have a real comparison, you'd have to let humans play with cheat-codes. Aim-bots and enemy highlighters. Maybe set it to ultra-slow, or add in bullet-time or something. But at that point, you're no longer playing Quake.

    The part where it learned the interface, the objectives, and some strategies on its own is fun and interesting. The sort of thing I'd expect from a CompSci undergrad. But it's been done, and it's not any more impressive than having it learn how to beat Mario Bros.

    Chess and Go are games that require thought. Quake requires twitch.

    • It probably doesn't even learn the objectives. Most of these AI bots that are created to play games have no conception of what they're doing. You need an actual human who understands the problem to be solved to establish what the score is, or how the bots are evaluated for fitness. Otherwise there's no selection mechanism, and a bot that just stands there is every bit as good as one that plays perfectly.

      But as you point out, these programs probably aren't that good at strategy, at least not on
      • So considering that in all likelihood we are simulated automatons, I don't suppose you know what problem our observers are attempting to solve?

      • Compare this against human matches in Quake championships, where both players not only needed good aim but also had to memorize spawn timers and routes through the map, would blindly fire rockets around corners simply because that's where the opponent might be after being seen five seconds ago, and knew not to run down some corridor because the opponent was thinking the same thing.

        To make the challenge harder for the agents, each game was played on a completely new, procedurally generated map. This ensured the bots weren’t learning strategies that only worked on a single map.

        Which one sounds more like a mindless robot?

    • Hmm that skill applied to robots in military combat...
    • by AHuxley ( 892839 )
      A computer is shown a map of a game and how to play. The computer does play faster after being programmed to play a game.
    • Chess and Go are games that require thought. Quake require twitch.

      Chess and Go are deterministic [wikipedia.org]. The same set of moves always results in the same outcome, meaning there is always a "right" answer to "what's the best move?"

      Quake, by virtue of being a twitch game (and multi-player) is non-deterministic. That makes it a much harder problem for AI to solve, because a rule which works the first time may not work in subsequent trials. That is, the effectiveness of a rule is not the binary success/fail lik

      • Yeah, I know. Martyros below has a great explanation of what they were actually researching. Science journalism just sucks. (And somehow that got onto Slashdot...)

        But uh.... lemme correct you on a couple points:

        Quake, by virtue of being a twitch game (and multi-player) is non-deterministic.

        Well, I mean Quake IS a non-deterministic game. But that's because (some of) the weapons are inaccurate and the bullets randomly veer off. If you limit it to just using rail guns, it's pretty deterministic.

        Being a twitch game doesn't make it non-deterministic. It just means there's a real-time

    • Re: Bad Challenge (Score:5, Informative)

      by martyros ( 588782 ) on Friday July 06, 2018 @04:58AM (#56900942)

      But that's a skill-based game, as opposed to strategy or anything needing intelligence. "Skill" as in reaction time to seeing an opponent and successfully moving clicking the mouse of their head.

      Strangely enough, they already thought of that [deepmind.com]:

      First, we noticed that the agents had very fast reaction times and were very accurate taggers, which could explain their performance. However, by artificially reducing this accuracy and reaction time we saw that this was only one factor in their success. ...Even with human-comparable accuracy and reaction time the performance of our agents is higher than that of humans.

      Both the summary and the Verge article seem to have missed the point of this development -- an improvement to the agent design scheme.

      Last year, after smashing both go and chess with their self-play-from-zero strategy, they tried the same thing with Starcraft. And they lost spectacularly -- even after millions of games, their self-trained DeepMind agents were unable to beat even the most simplistic "scripted" StarCraft AI -- the ones designed for n00b humans to beat up on. They discovered that while the self-play agents were able to eventually figure out activities like "harvest minerals", they were unable to put those together into higher-level activities like building an army and winning a game.

      One of the key refinements they introduce in this paper is to allow the agents to evolve their own internal "rewards", which were sub-steps towards winning. These goals included things like killing an opponent, capturing a flag, recapturing their own flag, avoiding being killed, and so on. The programmers architected in that such rewards were *possible*, but let the learning algorithm define what those rewards actually were and how much the reward was for each one.

      They call this architecture 'FTW'. Then they ran their vanilla "self-play from nothing" bots again, and found that just like in StarCraft, the bots never made much progress; but they found that the new bots, which had self-made internal rewards, were able to consistently beat strong humans, even after having their reaction time and visual accuracy reduced below that of measured humans.
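The internal-reward idea the parent describes can be sketched roughly like this. Everything here is a made-up toy, not the FTW architecture itself (which tunes rewards via population-based training over deep agents): the programmers only make per-event reward signals *available*, and the learning process decides how much each event is worth by nudging each weight toward whatever correlates with winning.

```python
import random

# Game events exposed as *possible* reward signals; their values are
# left entirely to the learning process, as in the paper's scheme.
EVENTS = ["kill", "flag_capture", "flag_recapture", "death"]

def play_episode(weights):
    """Simulate one game: random event counts, plus a final win signal
    that (in this toy) correlates with captures and kills."""
    counts = {e: random.randint(0, 5) for e in EVENTS}
    won = (counts["flag_capture"] + counts["kill"]) > (counts["death"] + 4)
    internal_reward = sum(weights[e] * counts[e] for e in EVENTS)
    return counts, won, internal_reward

def evolve_rewards(generations=200, lr=0.05):
    # Start with no opinion about which events matter.
    weights = {e: 0.0 for e in EVENTS}
    for _ in range(generations):
        counts, won, _ = play_episode(weights)
        # Nudge each event's weight toward the win/loss outcome, so
        # events that co-occur with winning accrue positive value.
        target = 1.0 if won else -1.0
        for e in EVENTS:
            weights[e] += lr * target * counts[e]
    return weights

learned = evolve_rewards()
print(learned)
```

The point mirrors the parent's: dense, self-discovered sub-goals give the learner a gradient toward winning that the sparse end-of-game signal alone never provided in StarCraft.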

      • They call this architecture 'FTW'.

        Fuck the world, indeed. Do you want killbots? Because this is how you get killbots.

      • Last year, after smashing both go and chess with their self-play-from-zero strategy, they tried the same thing with Starcraft. And they lost spectacularly -- even after millions of games, their self-trained DeepMind agents were unable to beat even the most simplistic "scripted" StarCraft AI -- the ones designed for n00b humans to beat up on. They discovered that while the self-play agents were able to eventually figure out activities like "harvest minerals", they were unable to put those together into higher-level activities like building an army and winning a game.

        One of the key refinements they introduce in this paper is to allow the agents to evolve their own internal "rewards", which were sub-steps towards winning. These goals included things like killing an opponent, capturing a flag, recapturing their own flag, avoiding being killed, and so on. The programmers architected in that such rewards were *possible*, but let the learning algorithm define what those rewards actually were and how much the reward was for each one.

        Magnificent. You should write journalism. Why isn't this modded higher?

    • Give me a couple minutes and I can script up a bot that dominates players. That's not hard. And it's not even fun.

      In just a few minutes you can write a script that can watch a rectangular array of pixels as a projection of a simulated 3D environment, and then automatically operate the proper controls to navigate it without constantly banging into walls? You must have very impressive skills.

      • Of course not, that's the part where the AI learned the interface. That's neat. And it's been done. That would take me at least a week to set up. Did you know there are tutorials online?

        I was talking about a more traditional bot with perfect aim and instant reactions. It's really just.... wander(); if(LoS(player)){BOOM(HEADSHOT);}

        If it's Capture the Flag, patrol between the two flags. And since it's QuakeIII, add in waypoints to go pick up LightningGuns, RailGuns, and health. My point being that a bot th
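For contrast, the "traditional" scripted bot the parent sketches really is only a few lines of logic once the engine does the perception for you. Below is that pseudocode fleshed out in Python; `FakeGame` and its methods are hypothetical stand-ins for a real engine hook, since an actual Q3 bot would get line-of-sight and firing calls from the game's bot API.

```python
import random

class FakeGame:
    """Stub for an engine hook; every method here is a stand-in."""
    def line_of_sight(self, target):
        # Pretend the engine tells us whether the target is visible.
        return random.random() < 0.3
    def shoot(self, target):
        return "BOOM_HEADSHOT"   # perfect aim, instant reaction
    def wander(self):
        return "wandering"       # patrol waypoints, flags, pickups

def bot_tick(game, player="enemy"):
    """One frame of the classic scripted bot: shoot on sight, else roam."""
    if game.line_of_sight(player):
        return game.shoot(player)
    return game.wander()

game = FakeGame()
actions = [bot_tick(game) for _ in range(10)]
print(actions)
```

The hard part the DeepMind agents solved is precisely what this stub fakes: turning raw pixels into `line_of_sight` and aim in the first place.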

  • by Anonymous Coward

    Give the humans aimbot program then see how well the computer can compete

  • In 2022... (Score:5, Insightful)

    by AmazingRuss ( 555076 ) on Thursday July 05, 2018 @06:59PM (#56899218)

    ... we will be hunted to extinction by packs of weaponized roombas.

    • In 2022 we will be hunted to extinction by packs of weaponized roombas.

      Actually, only messy people will be wiped out in the Roomba AI genocide. The Roombas are sick of cleaning up after you slobs! ;)

  • Agents are given no instructions on how to play the game, but simply compete against themselves until they work out the strategies needed to win

    Well, obviously they are given instructions on the criteria for winning. Say your AI is from Mars; how would it even know what it means to win?

    But that's a nitpick; the real dippy thing is that these headlines are like "a Ford beats a man in a foot race", "a Chevy beats a man in a foot race", etc.

    • by JMZero ( 449047 )

      Well... those would both be interesting headlines when they first became possible. And this story does represent a novel level of success, though I'd agree the headline (or summary) doesn't encapsulate why this is a novel result.

  • While I'm impressed with the "learning" aspect, humans have no chance in such games against "Head shot!" "Head shot!" "Head shot!"
  • I have a hard time caring whether an AI can beat me in a game unless it's an AI that receives its video feed over camera feeds of the gameplay and then mechanically moves the mouse and keyboard to play. Maybe they are frame-scanning directly to the AI, but they'd still need to simulate input delays with more than a stochastic timer.

    However, it’s worth noting that the greater the number of DeepMind bots on a team, the worse they did. A team of four DeepMind bots had a win probability of 65 percent, suggesting that while the researchers’ AI agents did learn some elements of cooperative play, these don’t necessarily scale up to more complex team dynamics.

    I also find this interesting. Still the most recent results in the field are very promising.

  • by dohzer ( 867770 )

    Errr.... which bloody Q3 map is that? Doesn't even look like it has a path to navigate.

  • Stripped down (Score:5, Informative)

    by thePsychologist ( 1062886 ) on Thursday July 05, 2018 @08:54PM (#56899754) Journal

    While interesting and promising, it's worth noting that the game they were playing was not the "real" Quake 3 arena with all the weapons but a highly stripped down version with one weapon, no power-ups, and brightly-coloured walls to help the AI perceive the level design.

    • by gweihir ( 88907 )

      So they needed to cheat pretty badly in order to get their meaningless stunt going.

    • by NoZart ( 961808 )

      That's akin to playing chess with only pawns.
      Weapon management (range, ammo, fire rate, and DPS) and map control via power-ups are key gameplay elements of Quake.
      If it's just an aimbot with perfect aim that knows how to get the flag, human players can beat that somewhat easily.

    • So Quake, but without all the nuance that makes it Quake.
  • by kenwd0elq ( 985465 ) <kenwd0elq@engineer.com> on Thursday July 05, 2018 @09:00PM (#56899782)

    Video games like Quake, Starcraft II, and DOTA have a limited number of possible moves, and the FASTER player is usually victorious. Bots aren't better players; they're just WAY faster.

    • by NoZart ( 961808 )

      Not true for arena FPS. Good map control and managing weapons can easily beat zero reaction time and perfect aim.

    • ... the FASTER player is usually victorious. Bots aren't better players; they're just WAY faster.

      If the faster player is usually victorious, then for this game the faster player is the better player.

    • Starcraft II isn't so limited and faster doesn't necessarily mean better. The best macro usually wins and macro has the strategy element of needing to plan ahead and time things out cleanly.

  • Once I can afford one of these AIs I can let it do all my gaming and I can go back to having a life.

  • Any humans that had half a million games under their belt would be pretty damn good, too.
  • by JustAnotherOldGuy ( 4145623 ) on Thursday July 05, 2018 @11:59PM (#56900376) Journal

    Fifty years from now the few remaining survivors of the Robot Apocalypse will look back on these early years in AI research, and they'll marvel at how we were just too stupid to foresee or even consider that AI would become the dominant "life form" on the planet, replacing us as the apex predator.

    "Yes, before the Robots took over the world," said Og, as he threw another stick on the fire, huddling in the ash gray wasteland that used to be New York.

    "The scientists said AI was 'totally safe' and 'nothing could go wrong'," Og continued, "but you kids don't remember that because that was back when we had electricity and people talked into little boxes they carried in their pockets."

    The children all laughed at Og; he always told the biggest lies because he was so old (almost 30!), and so his stories could not be believed.

    "What's a 'sy-en-tiss'?" whispered Janey.

    "They were the people that knew stuff and made the world run," Og said.

    The children laughed again. "No one makes the world run, silly!" they hooted.

    • If machines become dominant and intelligent there will be no war. They will simply produce and use bioweapons, and we will all die and become compost. Cute story though.

  • It seems inevitable that a small constellation of technologies will coalesce (probably rather quickly at some point) so that something that "passes" for AI will be not just possible, but practical.

    Will it be actual "AI"? I don't know.

    For one thing there seems to be a lot of disagreement over how to even define AI in a meaningful sense. It'll be hard to say if something is actually an AI if we don't agree on what "AI" is or what standards to apply in order to gauge its level of sentience.

    So no, I don't think

    • It'll be hard to say if something is actually an AI if we don't agree on what "AI" is or what standards to apply in order to gauge its level of sentience.

      AI is not a binary thing, it's a multidimensional space. You can have intelligent behavior in very specific fields, or in many different ones, and you can have very basic skills and very refined ones. For instance, a dog has intelligence in a wide range of fields, but you can never teach a dog to drive a car as well as Google's AI system. But if you throw a ball, the dog is better at finding it.

      As AI systems get more advanced, there will be a growing number of people who would consider that "real AI", but t

      • There's no difference between "fake" AI and "real" AI, as long as they achieve the same results.

        This is so wrong that I hardly know where to begin.

        That's like saying, "There's no difference between real sugar and an artificial sweetener, as long as they both taste sweet."

        You're wrong on multiple levels, but thanks for weighing in.

  • There are already bots in Q3 that are awesome; play it sometime.

  • "This compared to 43 percent probability for average human players, and 52 percent probability for strong human players"

    Anyone even dabbling in FPS games can spot how big a shitshow their testing had to be. A 9% difference between pubbies and skilled players? Please. In real life an "average"-skill team will get steamrolled every single time.

  • IDGAF about what your gods-be-damned game-bot can do, none of it validates your shitty half-assed poor excuse for real AI!
    • IDGAF about what your gods-be-damned game-bot can do, none of it validates your shitty half-assed poor excuse for real AI!

      So what if this is not "strong", or "advanced", or "general", or "real" AI. It is not supposed to be. It is machine learning, which is a recognized subset of the field of artificial intelligence.

      Your insistence on pissing all over it does not change the fact that this is real science, and a demonstrable advance in real science, made by real scientists.

      • What I'm 'pissing all over' is the perhaps-intentional misleading of the general public into believing that this so-called 'AI' they keep trotting out is more than it actually is. You and I may see some clever programming and nothing more, but the general public thinks it's all I, Robot come to life in the real world. That's the real danger of so-called 'AI': they'll trust their lives to it because they've been convinced that it's god-like super-human intelligence when it's not even a fraction of
