AI Education Quake Software Entertainment Games Technology

New AI Is Capable of Beating Humans At Doom (denofgeek.com) 170

An anonymous reader quotes a report from Den of Geek UK: Two students at Carnegie Mellon University have designed an artificial intelligence program that is capable of beating human players in a deathmatch game of 1993's Doom. Guillaume Lample and Devendra Singh Chaplot spent four months developing a program capable of playing first-person shooter games. The program made its debut at VizDoom (an AI competition centered around the classic shooter), where it took second place despite the fact that their creation managed to beat human participants. That's not the impressive part about this program, however. No, what's really impressive is how the AI learns to play. The creators' full write-up on the program (which is available here) notes that their AI "allows developing bots that play the game using the screen buffer." What that means is that the program learns by interpreting what is happening on the screen, as opposed to following a pre-set series of command instructions alone. In other words, this AI learns to play in exactly the same way a human player learns to play. This theory has been explored practically before, but Doom is arguably the most complicated game a program fueled by that concept has been able to succeed at. The AI's creators have already confirmed that they will be moving on to Quake, which will be a much more interesting test of this technology's capabilities given that Quake presents a much more complex 3D environment.
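
For readers curious what "playing from the screen buffer" looks like in practice, here is a minimal sketch of an agent loop using ViZDoom's Python bindings. The scenario config name, the three-button action set, and the random placeholder policy are illustrative assumptions, not the students' actual agent.

    import random
    import vizdoom as vzd

    game = vzd.DoomGame()
    game.load_config("basic.cfg")                  # assumed scenario config
    game.set_screen_format(vzd.ScreenFormat.RGB24)
    game.init()

    # One entry per available button, e.g. move left, move right, attack.
    actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        frame = state.screen_buffer                # raw pixels: the only game input
        action = random.choice(actions)            # a trained policy would decide here
        reward = game.make_action(action)
    game.close()

In the real system, the line choosing a random action is where a learned policy (trained on nothing but those frames and the reward signal) would go.
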
This discussion has been archived. No new comments can be posted.


  • Bring on your pesky AI. What I don't break with the hammer, I will break with my urine. Bzzzt!!!

  • by Anonymous Coward

    Well, I for one welcome our new 2D overlords...

  • It's funny to see some mainstream press outlets freak out about this like it was Skynet when there is already so much anti-player AI going on in games...

    Many games have creatures that hunt players, usually programmed with a somewhat limited view of the entire internal game world just as this AI has only the screen to understand where objects are.

    So I guess I'm missing what is really new about this. I assume some in-game AI in a few games somewhere has already had behavior programmed by some kind of le

    • by lgw ( 121541 ) on Wednesday October 05, 2016 @07:12PM (#53021305) Journal

      The point is that the AI learned to play the game from only screen data. No maps, no preset strategy, just visual data. So, it has to learn to recognize threats and obstacles, and what to do when it does.

      Beating humans is a good test, because humans are good at exploiting patterns, so shortcuts like always taking a fixed route wouldn't work for long.

      • by narcc ( 412956 )

        The point is that the AI learned to play the game from only screen data.

        That is a very false statement.

        • That is a very false statement.

          Or you could actually read the article. I know, I know, it's hard, so I did it for you. From the abstract of the journal article:

          The software, called ViZDoom, is based on the classical first-person shooter video game, Doom. It allows developing bots that play the game using the screen buffer. ViZDoom is lightweight, fast, and highly customizable via a convenient mechanism of user scenarios. In the experimental part, we test the environment by trying to learn bots for two scenari

      • by Dutch Gun ( 899105 ) on Wednesday October 05, 2016 @11:26PM (#53022475)

        Ehh, a first-person shooter is not really all that great a test for AI. As a videogame programmer, and one who has programmed his share of AI in commercial videogames, I can say the trick is not to create unbeatable AI, but to create an interesting experience for the player. Granted, we use a lot of internal data structures to assist the AI (like for navigation), and we obviously approach things from a completely different angle, but we also have to drastically handicap the AI's responsiveness and aim in shooter-type games.

        Remember, it's trivial for a computer to paint a bulls-eye between your eyes from 500 yards out with a sniper rifle no matter how you're moving or hiding. It's still reasonably easy even with true projectiles, as the AI can calculate perfect flight trajectories so that a rocket will precisely intercept a moving target. An AI can't get disoriented or confused, and has near-instant reflexes that no human can match.

        One trick I've used for shooter bots is to incorporate virtual springs attached to the bot's targets, helping to throw off their aim according to how the target is moving (a rough sketch of the idea follows this comment). You can also dynamically adjust the spring tension based on other factors, like difficulty scaling, or whether the AI agent is running or jumping (throwing off his own aim), etc. That sort of thing, along with adding blind spots, artificial reaction times, intentional mistakes, and so on, is what you have to do to keep the bots from kicking the crap out of human players just because they could otherwise instantly headshot you from across the map.

        Don't get me wrong... this is a neat little project. But beating humans in a shooter where fast reflexes and perfect aim dominate isn't really the be-all and end-all of AI tests, because our strengths and weaknesses lie in different areas than a computer-based opponent's.
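
        To make the "virtual spring" idea above concrete, here is a rough sketch of a spring-damper aim model. It is not the commenter's actual code, and every name and constant is an illustrative assumption.

        class SpringAim:
            """Aim point that lags behind a moving target like a mass on a spring."""

            def __init__(self, stiffness=8.0, damping=3.0):
                self.stiffness = stiffness   # higher = aim snaps to the target faster
                self.damping = damping       # higher = less overshoot and oscillation
                self.aim_x = self.aim_y = 0.0
                self.vel_x = self.vel_y = 0.0

            def update(self, target_x, target_y, dt):
                # The spring pulls the aim point toward the target; a fast-moving or
                # jumping target leaves the aim point lagging, throwing off the shot.
                ax = self.stiffness * (target_x - self.aim_x) - self.damping * self.vel_x
                ay = self.stiffness * (target_y - self.aim_y) - self.damping * self.vel_y
                self.vel_x += ax * dt
                self.vel_y += ay * dt
                self.aim_x += self.vel_x * dt
                self.aim_y += self.vel_y * dt
                return self.aim_x, self.aim_y

        Lowering the stiffness while the bot is running or jumping, or on easier difficulty settings, reproduces the kind of dynamic handicapping described above.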

        • by lgw ( 121541 )

          You've missed the point I think. This isn't an aimbot.

          This is a neural net that needed to learn on its own what an enemy looked like on the screen, and what obstacles were, and how to move around them, and so on. The basic "strategy" logic of the bot was fairly simple: a maze explorer, plus an aimbot. The interesting part was doing that given only the screen buffer to work with (well, they cheated a bit by giving the health explicitly).
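
          As a point of reference, ViZDoom can hand selected game variables (such as health) to an agent alongside the raw frame, which is the "cheated a bit" part mentioned above. A minimal sketch, assuming a placeholder config file:

          import vizdoom as vzd

          game = vzd.DoomGame()
          game.load_config("deathmatch.cfg")                        # placeholder config
          game.add_available_game_variable(vzd.GameVariable.HEALTH)
          game.init()

          game.new_episode()
          state = game.get_state()
          pixels = state.screen_buffer       # what the network has to interpret itself
          health = state.game_variables[0]   # the one value handed over explicitly
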

        • by Anonymous Coward

          The novelty is not in the AI itself but rather that it's only being fed the exact same video data that a human player sees. Essentially applying the same technology that a self-driving car uses, mashed up with a neural net and instructed to learn to play.

      • It has interesting implications for software automation
    • There's a pretty big difference between a game AI (which is fed machine-centric game state information, and has an extensive pre-programmed ruleset) adapting marginally to a player's actions, vs learning to play (and master) the entire game via screen inspection.

    • by swb ( 14022 ) on Wednesday October 05, 2016 @07:35PM (#53021441)

      In-game bots may be operating on a limited view, but they're operating on actual hard data in basically machine-usable form. What's impressive about this is that it learns from what's on the screen -- distances, obstacles, paths, its location are all learned from visual input.

      What I'm curious about is how adaptable their visual learning system is or whether it's extremely Doom specific. I'd also be curious at how long it took to learn to play. I'd also be curious what the learning curve was -- linear, non-linear, flat, steep or what.

      • by maugle ( 1369813 )
        They say they're moving on to Quake but I can guess that, because of how machine learning generally works, the AI they've trained for Doom will be utterly helpless at Quake until fully retrained.
      • by narcc ( 412956 )

        What's impressive about this is that it learns from what's on the screen -- distances, obstacles, paths, its location are all learned from visual input.

        Wow, no. The bot has access to the depth buffer. One of many, many reasons why reality doesn't match up with the expectations set by the headline and summary. Read the paper. This is about as impressive as the average undergrad NN project.

        • by Anonymous Coward

          > The bot has access to the depth buffer.
          Doom doesn't /have/ a depth buffer. And the paper is quite explicit that the only input is an RGB framebuffer.

          • by rxmd ( 205533 )
            Actually the paper states the opposite. From page 4: "3) Depth Buffer Access: ViZDoom provides access to the renderer’s depth buffer (see Fig. 3), which may help an agent to understand the received visual information. This feature gives an opportunity to test whether the learning algorithms can autonomously learn the whereabouts of the objects in the environment. The depth information can also be used to simulate the distance sensors common in mobile robots."
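
            For what it's worth, the depth buffer is an opt-in channel in ViZDoom's API, separate from the RGB frame; whether a given agent actually trains on it is up to its authors. A minimal sketch (config name assumed) of what enabling it looks like:

            import vizdoom as vzd

            game = vzd.DoomGame()
            game.load_config("basic.cfg")            # assumed scenario config
            game.set_depth_buffer_enabled(True)      # opt-in extra channel
            game.init()

            game.new_episode()
            state = game.get_state()
            rgb = state.screen_buffer                # what a human player sees
            depth = state.depth_buffer               # per-pixel distance, like a range sensor
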
    • by Maritz ( 1829006 )

      So I guess I'm missing what is really new about this

      Looks like.

  • no video? (Score:4, Insightful)

    by spongman ( 182339 ) on Wednesday October 05, 2016 @07:05PM (#53021271)

    this is a video game, after all...

  • by Carewolf ( 581105 ) on Wednesday October 05, 2016 @07:09PM (#53021297) Homepage

    I would have thought AI had already beaten us at those games long before beating us at chess or Go. Not sure if I am supposed to feel good as a human or bad as a programmer now.

    • The novel part isn't that it can beat humans. As previously observed, that is a simple job for an aim-bot combined with some predefined, map-specific movement patterns. The novel part here is that it taught itself how.

    • You should continue to feel good as a human. Humanity needs that. The other posters will want to tell you that it's not that an AI beat a human, but HOW it did: By learning. I'm taking a different approach: It's not about the performance of the AI. It's all about how playing the game makes you FEEL. Sure, the AI got the most kills, but what KIND of kills, and at what cost? Did it seek out and kill the only players who were out of ammo, and low on health? Did it ever sacrifice itself to protect an asset? Dea
  • Aim bots.
  • by WheezyJoe ( 1168567 ) <fegg AT excite DOT com> on Wednesday October 05, 2016 @07:23PM (#53021371)

    Can I rig Call of Duty with an AI auto-pilot plug-in, and just sit back and watch it steamroll over all the sucker humans in the game? If I play an online game like Overwatch and get smeared over and over by opponents with perfect aim and lightning-quick moves, will I just assume someone's introduced a bot into the game and that I'm wasting my time with my hopelessly inferior carbon-based reflexes? Gaming may need its own version of the Butlerian Jihad.

    • Comment removed based on user account deletion
    • Always thought it would be possible to build a system with a camera and servos to drive the WASD keys and a mouse, that could be programmed for, say, the Dust 2 map in CSGO. Firefights are generally in the same places on the map, so once it learns the background imagery it just has to shoot at anything different (a sketch of the background-subtraction idea follows below).
      It sounds pointless, but CSGO has a real-world economy, so this could be used for real financial gain.
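
      A minimal sketch of the "shoot at anything that differs from the learned background" idea above, using OpenCV background subtraction on captured frames. The capture source and thresholds are illustrative assumptions, and actually driving servos or a mouse is left out entirely.

      import cv2

      subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=32)
      capture = cv2.VideoCapture(0)             # e.g. a camera pointed at the monitor

      while True:
          ok, frame = capture.read()
          if not ok:
              break
          mask = subtractor.apply(frame)        # white pixels = "not background"
          mask = cv2.medianBlur(mask, 5)        # knock out sensor noise
          # OpenCV 4.x returns (contours, hierarchy)
          contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          for c in contours:
              if cv2.contourArea(c) > 400:      # ignore tiny blobs
                  x, y, w, h = cv2.boundingRect(c)
                  target = (x + w // 2, y + h // 2)   # a servo/mouse driver would aim here
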
    • Isn't that already how it is? Play CoD for more than 5 minutes without someone being accused of hax...
  • Do you want Skynet?! Because this is how you get Skynet!

  • by Herkum01 ( 592704 ) on Wednesday October 05, 2016 @07:54PM (#53021559)

    Until its middle finger hurts from pressing the "W" key for five hours straight, I will not be impressed. (Yes, I did this in 1994.)

    • Re: (Score:3, Insightful)

      by freeze128 ( 544774 )
      Reconfigure your keyboard. You don't need to use WASD for doom. Try Right-Mouse for FORWARD instead. Your finger will thank you.

      Sorry for the 20-year late message.
    • by kackle ( 910159 )
      "Straight"? You know, you're allowed to release the "W" key once in a while...
    • by fuo ( 941897 )
      Doom doesn't have Jump, so S=Forward, Space=Back is better IMO; it allows you to rock forward and backwards more easily, since the same finger doesn't control both buttons.
  • by TranquilVoid ( 2444228 ) on Wednesday October 05, 2016 @07:59PM (#53021581)

    It will be interesting to see how future games develop to keep them fun for humans in an AI-filled world.

    Imagine your AI setup gets to the point where it truly has the same input a human does: not needing to be directly fed the screen buffer, but able to use a camera pointed at your monitor. Suddenly current anti-cheating technologies mean nothing, and enough people using these would quickly ruin a game.

    • Reminds me of an xkcd about reputation-based spam-blocking [xkcd.com]. Seems we might have a similar situation here, except that we end up with everyone benefitting from being able to play against the most enjoyable opponents all the time.

    • Then maybe game companies will stop trying to control how everyone plays by forcing them to play on company servers only. And they'll put LAN gameplay capability back into games (which ironically was the only way multiplayer Doom could be played), where you can physically confirm who you're playing with and that they are not cheating.
    • Suddenly current anti-cheating technologies mean nothing, and enough people using these would quickly ruin a game.

      Contrariwise, imagine a world where you can play on your computer against any number of AI opponents, regulated to the level that makes the game interesting to you. Then you don't need other people, and cheating becomes meaningless, as it should be in a game.

    • This goes beyond gaming. Imagine when this AI gets turned on all those mundane jobs you do around the office! We're going to have a lot of unemployed fax monkeys!
  • They've had Doombots [wikipedia.org] for years.

  • We would have both died in the radioactive waste. I never mastered that game, and was so utterly hopeless at Quake (and the zillions of Quake clones that came after it) that I would be more useful at teaching the AI how not to play that game.
  • by zokum ( 650994 ) on Wednesday October 05, 2016 @09:03PM (#53021897) Homepage
    There's one glaring problem with this. The bot is good enough to beat a human. Most humans don't play Doom very well. If it beat a well known good player like Ocelot, Sedlo or Johsen, then it would be impressive. It's similar to writing a chess AI that can beat a human. This was done 30 years ago, but can that AI beat a grand master? Judging by the articles, the headline is somewhat misleading.

    Doom may seem simple compared to Quake, at least superficially, but Doom features the BFG 9000, with which a good player can do some fairly impressive things that would be VERY hard to deduce from simply observing. How the BFG worked wasn't really worked out in full detail until the source code was released. The BFG 9000 is probably one of the most complex FPS weapons in any mainstream game. Then there are techniques like wall running, bumping, silent BFG shots, etc. Knowing about these, and when they are of use, can give a player a huge edge. Can the bot discover, use and master this? Such techniques are vital on the most common deathmatch maps, map01 and map07 in Doom 2.

    Doom deathmatch can also be played in altdeath mode; typically map11 or maybe map16 is used for this type of play. This introduces many new skills and downplays others. It is a rather different experience. Does the bot handle this? Navigating the 3D space of map11 is a lot more complicated than map07, which is basically flat. Figuring out the map, teleporters, secret doors, trigger lines that activate elevators, etc. is pretty complicated stuff.

    Given phrases like "Their agent, he said, 'was ducking most of the time and thus was hard to hit,'" I suspect a good human player would outskill the bots here. From the ViZDoom paper (https://arxiv.org/abs/1605.02097): "we test the environment by trying to learn bots for two scenarios: a basic move-and-shoot task and a more complex maze-navigation problem."

    When it comes to singleplayer, I would love to see bots play better than Henning in his 30nm run in 29:39, https://www.youtube.com/watch?... [youtube.com]
  • by Cytotoxic ( 245301 ) on Wednesday October 05, 2016 @09:32PM (#53022025)

    All the way back in the original Quake there was a really nice learning AI written in QuakeC. One version allowed you to add practice bots to work on your deathmatch strategy.

    Similar to the AI described in this article, the AI in this mod was ignorant of the map and had no preset patterns. It learned by doing. So as you began playing against them they were easy kills in the early rounds. They'd often just stand there and get shot. And they couldn't hit the broad side of a barn.

    But they learned the map. And they learned your moves. And within a few rounds you'd be lucky to stay alive long. And finally they would learn enough to get you every time. They'd know which direction you were going to dodge before you did. And they kept track of every resource in the game and all of the respawn times, so they'd deny you any ammo or health by timing their movements perfectly to collect all spawns instantly.

    It was very cool.

    Then the guy who wrote it used his AI to replace the original game AI for all of the enemies. Wow. It made the game into an entirely different experience.

    After about a half-level, the enemies would learn to avoid you, go out and recruit all the bad guys from the level and return in force. After a couple of more levels they'd learn to ambush, flank and surround you. They'd team up their fire, so you'd dodge a fireball to the left and strafe right into another fireball.

    It was really interesting, but ultimately unplayable. It really gave me an appreciation of the level of "balancing" that goes into creating a proper game AI. It certainly isn't about the same thing as making a chess AI that can beat Kasparov. It requires a great deal of work to make the enemy realistic and interesting and difficult but ultimately beatable.

    • All the way back in the original Quake there was a really nice learning AI written in quake C...But they learned the map. And they learned your moves. And within a few rounds you'd be lucky to stay alive long. And finally they would learn enough to get you every time. They'd know which direction you were going to dodge before you did. And they kept track of every resource in the game and all of the respawn times, so they'd deny you any ammo or health by timing their movements perfectly to collect all spawns instantly.

      It was very cool.

      I used to play quite a LOT of Quake, Quake 2, and Quake 3, and I don't recall this. Bots have always been split between extremely useless and extremely impossible (i.e., they know where you are and head-shot you before they've even seen you). Even now in the latest games the bots are easily identifiable for the same reasons: either too crap or impossibly good.

      • I used to play a ton of Quake and Quake 2. I *think* Cytotoxic is referring to Eraser bot. As noted here [quakewiki.net], the bot will learn maps it has never seen before. Now, I don't remember ever seeing any documentation about the bots learning your play style or anything, but I do remember most of the rest of what Cytotoxic said.

        A Quake 2 version existed as well, so a friend and I used these bots in Quake 2 to test custom levels. At first, some would run around, not picking up much. Others would just sit still unt

  • Unless it also uses an aim bot, it isn't winning vs rail gun wielding aim bot players.
    • by jandrese ( 485 )
      If it's using a Rail Gun in the original DooM then it already has a big advantage over the other players.
  • Sounds like the base logic for a Terminator. Stick the same code in a robot with a gun and we're all screwed.
  • Comment removed based on user account deletion
  • The second-to-last sentence of the article says, "...the developers claim that their AI won one of the competition games by learning to duck and therefore making itself much harder to hit." What version of Doom are they playing? As a teenager I played countless hours of Doom 1 & 2, and I don't remember a duck/crouch button.
