Inside F.E.A.R. 2's Engine and AI

Gamasutra sat down with software engineers from Monolith Productions to discuss the technology behind F.E.A.R. 2: Project Origin, due out in February. They provide insight into the development of the game's engine, and they discuss the goals and procedures behind creating entertaining AI. Quoting: "For instance, let's say that the AI wanted to kill the enemy. That would mean that there are a whole bunch of actions that satisfy the requirement for there being a dead enemy; let's say, 'Attack with ranged weapon,' right? ... Where the power comes from is the fact that those actions themselves can have conditions that they need to have met. So, 'attack with ranged weapon' may have conditions that say, 'I have to have a weapon, and I have to have it loaded. Go find me more actions that satisfy those requirements.' ... at that point, he may find another action, which is 'go to this weapon,' and then he may find another action which is 'reload your weapon.' So, that whole chain that I just described to you, of him doing three things in a row — which is going to pick up a weapon, loading a weapon, and then going to attack the player — that was not a directed thing that the level designer, nor that the AI engineer had to program; it was just the fact that we have these aggregate actions that the planner can pick from at will."
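The chain described in the quote can be sketched as a tiny backward-chaining planner. This is a hypothetical toy, not Monolith's actual code; the action names, preconditions, and effects below are invented for illustration:

```python
# Toy goal-driven planner in the spirit of the quote above. Each action has
# preconditions ("pre") and effects ("eff"); the planner recursively finds
# actions whose effects satisfy whatever is still missing.

ACTIONS = {
    "attack_ranged": {"pre": {"has_weapon", "weapon_loaded"}, "eff": {"enemy_dead"}},
    "goto_weapon":   {"pre": set(),          "eff": {"has_weapon"}},
    "reload_weapon": {"pre": {"has_weapon"}, "eff": {"weapon_loaded"}},
}

def plan(goal, state):
    """Return a list of action names achieving `goal` from `state`, or None."""
    if goal <= state:
        return []
    for name, action in ACTIONS.items():
        if action["eff"] & (goal - state):
            # First satisfy this action's own preconditions...
            sub = plan(action["pre"], state)
            if sub is None:
                continue
            # ...then simulate the effects of everything executed so far...
            new_state = set(state) | action["eff"]
            for step in sub:
                new_state |= ACTIONS[step]["eff"]
            # ...and plan for whatever part of the goal is still missing.
            rest = plan(goal, new_state)
            if rest is not None:
                return sub + [name] + rest
    return None

print(plan({"enemy_dead"}, set()))
# The chain from the article: go to a weapon, reload it, then attack.
```

Nobody scripted the three-step sequence; it falls out of the preconditions, which is the whole point the interviewees are making.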
  • Awesome (Score:1, Insightful)

    by Anonymous Coward on Saturday December 20, 2008 @04:30PM (#26185965)

    Wow, this is how far we've come with game AI? I've been doing more elaborate things in CLIPS forward and backwards chaining in my AI class.

  • by Anonymous Coward on Saturday December 20, 2008 @04:35PM (#26186003)

    On the surface they face a "simple" action planning problem. What the summary describes is what a simple STRIPS-based planner would do. This would be an intro-to-AI type of problem that any CS student taking an AI class would deal with.

    The major difference from an intro-to-AI type of a problem here is that the environment is dynamic. Thus goals that might have been achieved just a few steps ago may now no longer be, and vice versa.

    Suppose your planner has executed a couple of actions that have reached half of your sub-goals. Now, due to environment changes, only a quarter of your sub-goals remain. Heck, some external environment changes may actually be productive and add to your reached sub-goals -- say, if your enemy got into your firing range, you don't have to chase him anymore.

    It's much harder to generate reasonable, much less optimal, plans when the environment changes -- the planner needs to be considerably more complex. It's a current research topic, and has been for a while.
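    A toy sketch of the point above, with invented sub-goal and action names: if the plan is rebuilt from the current world each tick, an external change (the enemy wandering into range) shortens it with no special casing:

```python
# Illustration of replanning in a dynamic world. GOAL and FIXES are invented;
# a real planner would chain actions -- this only shows the replanning.

GOAL = {"enemy_in_range", "weapon_loaded"}

# One canned action per missing sub-goal.
FIXES = {"enemy_in_range": "chase_enemy", "weapon_loaded": "reload_weapon"}

def replan(world):
    """Rebuild the plan from whatever sub-goals the current world still lacks."""
    return [FIXES[g] for g in sorted(GOAL - world)]

world = set()
print(replan(world))            # both sub-goals unmet: chase, then reload

world.add("enemy_in_range")     # the enemy wandered into range on his own
print(replan(world))            # the chase drops out of the plan for free
```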

    Cheers, Kuba

  • by Anonymous Coward on Saturday December 20, 2008 @04:38PM (#26186035)

    Dude ... that means you're supposed to be the one providing the insightful comments. That's how it works.

  • by Dutch Gun ( 899105 ) on Saturday December 20, 2008 @06:08PM (#26186629)

    The major difference from an intro-to-AI type of a problem here is that the environment is dynamic. Thus goals that might have been achieved just a few steps ago may now no longer be, and vice versa.

    Actually, unless you're talking about an "intro-to-AI" class at a game development studio, I'd venture to say that the primary difference is that game development AI is ultimately a much more pragmatic affair than academic research (although naturally, good developers try to use whatever knowledge they can glean from any source available). The ultimate goal of the AI is to make the game fun for the player to play against, while operating on only a small, real-time slice of the total CPU budget. It's an important distinction to make, and it often answers the question of "why don't they use x or y techniques?" Game programming has real-world problems and constraints, and it's important to be able to peek into an AI brain and understand exactly why some behavior may or may not have been working exactly as you'd expected it to. But it's also perfectly acceptable to alter the world itself in order to enhance the AI (and indirectly, the player experience), something that traditional AI research would typically not consider.

    Boiled down simply, yes, this is a dynamic graph traversal problem. I've used systems like that before in games I've worked on to execute intermediate goals, although nothing quite like this. The larger problem is not the planning - it's actually perception that's really hard for an AI to do. The article discussed this when talking about the artists having to pepper the level with AI hints. For instance, how does a computer analyze the best location for cover? Typically, it's been a level designer that places tens of thousands of hint nodes all throughout the game to mark points of interest for the AI. I used a node-based approach in a brawling game I worked on, and these nodes would interact with the game engine to convey all sorts of information to the AI. As the environment was dynamically altered, these nodes (which doubled as both pathfinding and status communication nodes) would indicate to the AI what was happening in the world.

    For instance, a bridge would start shaking as though it were about to collapse. Via the level's automation system, the nodes on which the AI was standing would switch status to "danger", and the AI would know its top priority was to move off that node. Alternately, pathing nodes could be dynamically switched on and off in order to deal with obstacles in the world coming and going. The AI would continuously re-evaluate the path to the target in the background, based on higher-level goals, so it would just naturally move around obstacles and find alternative paths to other entities, or would alter goals altogether.
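    A hypothetical sketch of the bridge example; the class and field names are invented, not taken from any shipped engine:

```python
# Hint nodes that double as pathfinding and status-communication nodes.
# The level's automation system flips a node's status; the AI reacts.

class HintNode:
    def __init__(self, name):
        self.name = name
        self.status = "safe"        # flipped by the level's automation system
        self.walkable = True        # doubles as a pathfinding on/off switch

def pick_goal(agent_node, neighbors):
    """If the node underfoot turns dangerous, evacuating trumps everything else."""
    if agent_node.status == "danger":
        safe = [n for n in neighbors if n.walkable and n.status == "safe"]
        if safe:
            return ("move_to", safe[0].name)
    return ("continue", None)

bridge, bank = HintNode("bridge_mid"), HintNode("river_bank")
bridge.status = "danger"            # the bridge starts shaking
print(pick_goal(bridge, [bank]))    # -> ('move_to', 'river_bank')
```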

    So often, there's a whole set of systems built specifically to communicate information to the AI in a non-visual way. Game developers are starting to integrate these sorts of systems at a deeper level into the game engine, so that it's simpler to build on top of these low-level core systems, and execute more advanced tactical planning. This way, for instance, instead of a scripted event that must be manually hooked up to these informational nodes, you can allow the physics engine to alter the world, and the AI will react naturally to that - not just in pathfinding, but perhaps the altered terrain provides new tactical possibilities. From there, it's great to see interesting emergent behavior come out of the simple building-blocks you've put together. The coolest thing for an AI programmer is to see players infer much more about an agent's behavior than you ever programmed into the game.

    I have to admit, I do miss AI programming. It's a really exciting field that I believe is going to gain more prominence in the public eye in the future (we're seeing it already in Left4Dead) as game developers look for ways to differentiate their games from their competitors, especially as graphical quality approaches its limits on the current platforms. I can definitely see those extra cores on current and future platforms being put to good use. I would have loved to have this on the last gen of consoles.

  • by NinthAgendaDotCom ( 1401899 ) on Saturday December 20, 2008 @08:27PM (#26187473) Homepage
    True AI is a hard problem. The restrictions that are in place in a game make it so that you're no longer producing intelligence, just producing a "trained pigeon" effect. Bots don't have the full range of actions and freedom available to them. Hell, the PLAYER doesn't have the full range of freedom, not even close.
  • by Dutch Gun ( 899105 ) on Saturday December 20, 2008 @09:37PM (#26187843)

    AFAIK, this is essentially the script paradigm of AI, which is fine to augment intelligence (we do it all the time), but without learning, it is quite expensive, and susceptible to the Chinese Room argument.

    I can only speak for myself, of course, but I never had any qualms about simulating intelligence artificially (a.k.a. the Chinese Room Argument). Games are, and will be for the foreseeable future, a very limited domain problem, which in turn calls for a domain-specific solution. The notion of whether an AI is actually "intelligent" or "aware" is so far removed from the reality of today's game development, it honestly has never even crossed my mind.

    In some sense, the notion of AI in regards to game development is almost a bit misleading. Game developers are not interested in programming artificial intelligence, only replicating its behavior using any practical method available. Don't forget, we're also constrained by CPU power and a real-time environment. If a simple state machine with directed goal-oriented pathing is enough to simulate human-like intelligence in a combat-focused game, then we'll be happy to stop there.
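    A minimal sketch of the kind of state machine alluded to here; the states and events are illustrative, not from any particular game:

```python
# A table-driven finite state machine for a combat agent. Cheap to run,
# trivial to debug -- which is exactly why shipping games lean on FSMs.

TRANSITIONS = {
    ("idle",   "sees_player"): "attack",
    ("attack", "low_health"):  "flee",
    ("attack", "lost_player"): "idle",
    ("flee",   "healed"):      "idle",
}

def step(state, event):
    # Unknown (state, event) pairs leave the state unchanged.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["sees_player", "low_health", "healed"]:
    state = step(state, event)
print(state)  # -> idle
```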

  • Re:Awesome (Score:5, Insightful)

    by BitZtream ( 692029 ) on Saturday December 20, 2008 @11:50PM (#26188691)

    As someone who's dabbled in designing a game engine, I can say this sort of thing always sounds like a no-brainer. Then, when you get right down to implementing it in a game and making it fast enough to keep the game real-time, you realize that a lot of cool ideas that could make things more realistic from a simulation perspective are not feasible with the processing power available.

    Let's face it: while Zork was awesome in its original form, if the processing power to do 3D graphics had been available at the time, they probably would have used it. But it wasn't. Nor was there any graphics capability even remotely comparable to what the game would have needed to do it justice. So instead, they used text, because the processing power required to turn that text into something usable was available.

    Right now we can simulate physics and use ray-traced graphics and inverse kinematics to make games look pretty damn realistic; just don't expect to get 60 frames a second out of anything the home user has in a world like the one F.E.A.R. uses. Sure, you can pre-render a movie, but that 2-minute clip may take an hour's worth of processing, depending on environment complexity.

    So while you may look at these kinds of things and think, 'So what, I've done better in my little test apps where I was experimenting,' doing it within the constraints the developers of a real-time game face is another story. I know I've done several things 'better' than the latest games have; for instance, I wrote some AI to make the X-Plane flight simulator more realistic with its ATC traffic, but when I plugged it into the simulator and loaded up several other aircraft in close proximity, like you would expect to find at a large airport, the ATC calculations required too much processing power for my machine to run the simulation smoothly. So I had to back things out until I got the speed to an acceptable level. The end result? My AI wasn't all that impressive, since I ended up backing out most of the code that made it better in order to get it performing fast enough. In my case, I probably could have solved it with some multithreading goodness, or at least landed somewhere between what I originally created and what ended up running in the drawing thread, but I think you can see my point.

  • Re:Awesome (Score:5, Insightful)

    by Dutch Gun ( 899105 ) on Sunday December 21, 2008 @11:07PM (#26196443)

    Wow, this is how far we've come with game AI? I've been doing more elaborate things in CLIPS forward and backwards chaining in my AI class.

    Awesome. Let me know when you've integrated it into a complete game, and it's running in real time using less than 5% total CPU load (no spikes) on the target spec machine. You're also responsible for:

    * Dynamic pathfinding and waypoints (it's all pointless if the agents can't move well)
    * Hint nodes and environmental information (finding a good cover location, escape route, etc)
    * Animation selection and agent movement subsystem (the enemies need to look good too)
    * Weapon aiming and firing system (AI can't be more than humanly accurate, or it's no fun)
    * Sensory detection and reaction system (simulating eyes, ears, nose of enemies - how do they detect you?)
    * Difficulty scaling (AI should get smarter at harder difficulty levels)
    * Creating a personality trait system (10-20 different enemies with different movement patterns in the game)
    * Squad controller agent (for your automated teammates)
    * Anything else the designers want to be able to do that involves moving critters around in the game autonomously

    Oh yeah, and it has to be "fun" as well. Finish this on schedule, show me the final product, and then I'll be impressed.

    Game AI != Academic Research Projects
