Inside F.E.A.R. 2's Engine and AI
Gamasutra sat down with software engineers from Monolith Productions to discuss the technology behind F.E.A.R. 2: Project Origin, due out in February. They provide insight into the development of the game's engine, and they discuss the goals and procedures behind creating entertaining AI. Quoting:
"For instance, let's say that the AI wanted to kill the enemy. That would mean that there are a whole bunch of actions that satisfy the requirement for there being a dead enemy; let's say, 'Attack with ranged weapon,' right? ... Where the power comes from is the fact that those actions themselves can have conditions that they need to have met. So, 'attack with ranged weapon' may have conditions that say, 'I have to have a weapon, and I have to have it loaded. Go find me more actions that satisfy those requirements.' ... at that point, he may find another action, which is 'go to this weapon,' and then he may find another action which is 'reload your weapon.' So, that whole chain that I just described to you, of him doing three things in a row — which is going to pick up a weapon, loading a weapon, and then going to attack the player — that was not a directed thing that the level designer, nor that the AI engineer had to program; it was just the fact that we have these aggregate actions that the planner can pick from at will.
Not hard to program that kind of thing. (Score:2)
Really, it's a graph that you create a topological ordering from in order to execute the correct sequence of actions. The cool part is seeing the end results of that sort of thing (and what algorithms and AI can do), not the exact implementation details. Although, I guess they're doing all this faster than real-time, so that's cool, too. *shrug*
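For what it's worth, that ordering view boils down to something like this toy sketch (the dependency edges are made up to match the example in the summary):

# Each action is a node; an edge means "must run before". Sketch only.
from graphlib import TopologicalSorter  # Python 3.9+

deps = {
    "attack_with_ranged_weapon": {"go_to_weapon", "reload_weapon"},
    "reload_weapon": {"go_to_weapon"},
    "go_to_weapon": set(),
}
print(list(TopologicalSorter(deps).static_order()))
# -> ['go_to_weapon', 'reload_weapon', 'attack_with_ranged_weapon']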
Re:Not hard to program that kind of thing. (Score:4, Insightful)
AFAIK, this is essentially the scripting paradigm of AI, which is fine for augmenting intelligence (we do it all the time), but without learning it is quite expensive, and it's susceptible to the Chinese Room argument.
I can only speak for myself, of course, but I never had any qualms about simulating intelligence artificially (aka Chinese Room Argument [wikipedia.org]). Games are, and will be for the foreseeable future, a very limited domain problem, which in turn calls for a domain-specific solution. The notion of whether an AI is actually "intelligent" or "aware" is so far removed from the reality of today's game development, it honestly has never even crossed my mind.
In some sense, the notion of AI in regards to game development is almost a bit misleading. Game developers are not interested in programming artificial intelligence, only replicating its behavior using any practical method available. Don't forget, we're also constrained by CPU power and a real-time environment. If a simple state machine with directed goal-oriented pathing is enough to simulate human-like intelligence in a combat-focused game, then we'll be happy to stop there.
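For what it's worth, the "simple state machine" I have in mind is nothing fancier than this sketch; the states and events are made up:

# Tiny finite state machine: (state, event) -> next state. Illustration only.
TRANSITIONS = {
    ("idle",   "sees_player"): "attack",
    ("attack", "low_health"):  "flee",
    ("attack", "lost_player"): "search",
    ("search", "sees_player"): "attack",
    ("search", "gave_up"):     "idle",
}

def step(state, event):
    # Stay in the current state if no transition matches the event.
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["sees_player", "lost_player", "sees_player", "low_health"]:
    state = step(state, event)
    print(event, "->", state)  # attack, search, attack, flee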
Re: (Score:2, Insightful)
On the surface they face a "simple" action planning problem. What the summary describes is what a simple STRIPS-based planner would do. This would be an intro-to-AI type of problem that any CS student taking an AI class would deal with.
The major difference from an intro-to-AI type of a problem here is that the environment is dynamic. Thus goals that might have been achieved just a few steps ago may now no longer be, and vice versa.
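As a toy illustration of that re-evaluation - rather than committing to a plan, the agent re-derives its next action from the current world every tick (the facts and action names below are made up):

# Re-check the world each tick; a goal that was satisfied can stop being so.
world = {"has_weapon": True, "weapon_loaded": True, "enemy_dead": False}

def choose_action(world):
    if world["enemy_dead"]:
        return "idle"
    if not world["has_weapon"]:
        return "go_to_weapon"
    if not world["weapon_loaded"]:
        return "reload_weapon"
    return "attack_with_ranged_weapon"

print(choose_action(world))     # attack_with_ranged_weapon
world["weapon_loaded"] = False  # the world changed out from under the plan
print(choose_action(world))     # reload_weapon - the old next step is invalid now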
Suppose your planner has executed a couple of actions that have reached hal
Re:Not hard to program that kind of thing. (Score:5, Insightful)
The major difference from an intro-to-AI type of a problem here is that the environment is dynamic. Thus goals that might have been achieved just a few steps ago may now no longer be, and vice versa.
Actually, unless you're talking about an "intro-to-AI" class at a game development studio, I'd venture to say that the primary difference is that game development AI is ultimately a much more pragmatic affair than academic research (although naturally, good developers try to use whatever knowledge they can glean from any source available). The ultimate goal of the AI is to make the game fun for the player to play against, while operating on only a small, real-time slice of the total CPU budget. It's an important distinction to make, and it often answers the question of "why don't they use x or y techniques?" Game programming has real-world problems and constraints, and it's important to be able to peek into an AI brain and understand exactly why some behavior may or may not be working exactly as you'd expected it to. But it's also perfectly acceptable to alter the world itself in order to enhance the AI (and indirectly, the player experience), something that traditional AI research would typically not consider.
Boiled down simply, yes, this is a dynamic graph traversal problem. I've used systems like that before in games I've worked on to execute intermediate goals, although nothing quite like this. The big problem is not the planning - it's perception that's really hard for an AI. The article discussed this when talking about the artists having to pepper the level with AI hints. For instance, how does a computer analyze the best location for cover? Typically, it's been a level designer that places tens of thousands of hint nodes all throughout the game to mark points of interest for the AI. I used a node-based approach in a brawling game I worked on, and these nodes would interact with the game engine to convey all sorts of information to the AI. As the environment was dynamically altered, these nodes (which doubled as both pathfinding and status communication nodes) would indicate to the AI what was happening in the world.
For instance, a bridge would start shaking as though it were about to collapse. Via the level's automation system, the nodes on which the AI was standing would switch status to "danger", and the AI would know its top priority was to move off that node. Alternatively, pathing nodes could be dynamically switched on and off in order to deal with obstacles in the world coming and going. The AI would continuously re-evaluate the path to the target in the background, based on higher-level goals, so it would just naturally move around obstacles and find alternative paths to other entities, or would alter its goals altogether.
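Boiled down to a toy sketch, that node-status re-pathing idea looks something like this; the node names and statuses are made up, not code from any game I shipped:

# Pathfinding nodes carry a status flag; the search refuses "danger" nodes,
# so flipping a node's status is enough to make the AI re-route.
from collections import deque

class Node:
    def __init__(self, name):
        self.name = name
        self.status = "safe"    # flipped to "danger" by the level's automation
        self.neighbors = []

def find_path(start, goal):
    queue, came_from = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node is goal:                      # rebuild the path back to the start
            path = []
            while node:
                path.append(node.name)
                node = came_from[node]
            return path[::-1]
        for n in node.neighbors:
            if n not in came_from and n.status != "danger":
                came_from[n] = node
                queue.append(n)
    return None

a, bridge, b, goal = Node("a"), Node("bridge"), Node("b"), Node("goal")
a.neighbors, bridge.neighbors, b.neighbors = [bridge, b], [goal], [goal]

print(find_path(a, goal))   # ['a', 'bridge', 'goal']
bridge.status = "danger"    # the bridge starts to collapse
print(find_path(a, goal))   # ['a', 'b', 'goal'] - routed around the danger node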
So often, there's a whole set of systems built specifically to communicate information to the AI in a non-visual way. Game developers are starting to integrate these sorts of systems at a deeper level into the game engine, so that it's simpler to build on top of these low-level core systems and execute more advanced tactical planning. This way, for instance, instead of a scripted event that must be manually hooked up to these informational nodes, you can allow the physics engine to alter the world, and the AI will react naturally to that - not just in pathfinding, but perhaps the altered terrain provides new tactical possibilities. From there, it's great to see interesting emergent behavior come out of the simple building blocks you've put together. The coolest thing for an AI programmer is to see players infer much more about an agent's behavior than you ever programmed into the game.
I have to admit, I do miss AI programming. It's a really exciting field that I believe is going to gain more prominence in the public eye in the future (we're seeing it already in Left4Dead) as game developers look for ways to differentiate their games from their competitors, especially as graphical quality approaches its limits on the current platforms. I can definitely see those extra cores on current and future platforms being put to good use. I would have loved to have this on the last gen of consoles.
Awesome (Score:1, Insightful)
Wow, this is how far we've come with game AI? I've been doing more elaborate things with CLIPS forward and backward chaining in my AI class.
Re:Awesome (Score:5, Insightful)
As someone who's dabbled in designing a game engine, I can say this sort of thing always sounds like a no-brainer. Then, when you get right down to implementing it in a game and making it fast enough to keep the game real-time, you realize that a lot of cool ideas that could make things more realistic from a simulation perspective aren't feasible with the processing power available.
Let's face it: while Zork was awesome in its original form, if the processing power had been available to do 3D graphics at the time, they probably would have used it. But it wasn't, nor was the graphics capability anything even remotely comparable to what the game would have needed to do it justice. So instead, they used text, because the processing power required to turn text into something usable was available.
Right now we can simulate physics and use ray-traced graphics and inverse kinematics to make games look pretty damn realistic; just don't expect to get 60 frames a second out of anything the home user has in a world like the one F.E.A.R. uses. Sure, you can pre-render a movie, but that 2-minute clip may take an hour's worth of processing, depending on environment complexity.
So while you may look at these kinds of things and think, "So what? I've done better in my little test apps where I was experimenting," doing it within the constraints the developers of a real-time game face is another story. I have done several things "better" than the latest games have; for instance, I wrote some AI to make the X-Plane flight simulator more realistic with its ATC traffic. But when I plugged it into the simulator and loaded up several other aircraft in close proximity, like you would expect to find at a large airport, the ATC calculations required too much processing power for my machine to run the simulation smoothly. So I had to back things out until I got the speed to an acceptable level. The end result? My AI wasn't all that impressive, since I ended up removing most of the code that made it better in order to get it performing fast enough. In my case, I probably could have solved it with some multithreading goodness, or at least landed somewhere between what I originally created and what ended up running in the drawing thread, but I think you can see my point.
Re:Awesome (Score:5, Insightful)
Wow, this is how far we've come with game AI? I've been doing more elaborate things in CLIPS forward and backwards chaining in my AI class.
Awesome. Let me know when you've integrated it into a complete game, and it's running in real time using less than 5% total CPU load (no spikes) on the target spec machine. You're also responsible for:
* Dynamic pathfinding and waypoints (it's all pointless if the agents can't move well)
* Hint nodes and environmental information (finding a good cover location, escape route, etc)
* Animation selection and agent movement subsystem (the enemies need to look good too)
* Weapon aiming and firing system (AI can't be more than humanly accurate, or it's no fun)
* Sensory detection and reaction system (simulating eyes, ears, nose of enemies - how do they detect you?)
* Difficulty scaling (AI should get smarter at harder difficulty levels)
* Creating a personality trait system (10-20 different enemies with different movement patterns in the game)
* Squad controller agent (for your automated teammates)
* Anything else the designers want to be able to do that involves moving critters around in the game autonomously
Oh yeah, and it has to be "fun" as well. Finish this on schedule, show me the final product, and then I'll be impressed.
Game AI != Academic Research Projects
Re:ahh, slashdot and AI (Score:5, Insightful)
Dude ... that means you're supposed to be the one providing the insightful comments. That's how it works.
Re: (Score:2, Funny)
I, for one, welcome our new bitch whore overlords!
Hype like Radiant AI? (Score:2, Informative)
Re: (Score:2, Interesting)
There are mods that "unlock" the Radiant AI in Oblivion, and it's not pretty. Since non-"essential" NPCs can be killed off and never respawn (obviously not counting guards and monsters), if you "wait" long enough out in the middle of nowhere, about a third of the population in Oblivion will kill itself off for SOME reason, as dictated by its AI "goal".
AI like this DOES exist, but because it'll take things to such extremes, developers have to dumb things down for gameplay purposes. A bot will never miss a sniper s
It's a stretch to call this AI (Score:4, Insightful)
Which game had the best AI so far? (Score:1)
Re: (Score:2)
I know it's old hat now, but when I first encountered the marines in Half-Life I was impressed by how they tried to flank you, how they would run from grenades, how they would try and flush you out from your cover and how they would rarely ever chase you to encounter the business end of your shotgun in the classic "player runs around the corner and shoots pursuers" ambush.
The Far Cry baddies were also pretty good.
I found the programming for the Unreal 2004 bots to be pretty good, too. On the higher settin
A taste of what's to come. (Score:2)
Hands On: F.E.A.R. 2: Project Origin (Part 1) [g4tv.com]
Hands On: F.E.A.R. 2: Project Origin (Part 2) [g4tv.com]
Exclusive Hands On: Project Origin [g4tv.com]
F.E.A.R. 2: Project Origin Behind the Scenes - Engine [g4tv.com]
F.E.A.R. 2: Project Origin Dev Diary: Design & story. [g4tv.com]
Enjoy the videos and don't forget to upgrade your PC. :)