The State of Game AI

Gamasutra has a summary written by Dan Kline of Crystal Dynamics for this year's Artificial Intelligence and Interactive Digital Entertainment (AIIDE) Conference held at Stanford University. They discussed why AI capabilities have not scaled with CPU speed, balancing MMO economies and game mechanics, procedural dialogue, and many other topics. Kline also wrote in more detail about the conference at his blog. "... Rabin put forth his own challenge for the future: Despite all this, why is AI still allowed to suck? Because, in his view, sharp AI is just not required for many games, and game designers frequently don't get what AI can do. That was his challenge for this AIIDE — to show others the potential, and necessity, of game AI, to find the problems that designers are trying to tackle, and solve them."
  • by Anonymous Coward on Tuesday November 04, 2008 @12:54AM (#25622397)

    After all, publishers these days only care about churning out sequels quickly, so instead of spending time on increasing the 'I' in the AI, the so-called 'advanced' AI is basically just a computer version of a cheating player.

  • People man! (Score:1, Interesting)

    by Anonymous Coward on Tuesday November 04, 2008 @01:21AM (#25622583)

    Granted, we haven't seen great AI in some time, but I'd still rather play people any day of the week.

    RTS - People will do things unexpected and make mistakes.

    FPS - People will "get lucky" and there is always fun in that.

    the list goes on.

    Until your A.I. can call me an asshat and actually MEAN it, I'd rather play people.

    On a side note, once your A.I. can call me an asshat and mean it I want him unplugged...

  • by Rysith ( 647440 ) on Tuesday November 04, 2008 @01:44AM (#25622755)
    The problem there is that the two factors you mentioned (accuracy and commands-per-minute) are both things that AI can far exceed humans at, especially if you aren't careful to limit it. I think the real solution is to make a game where learning and adapting are more important than accuracy or speed, but then someone would have to write actual AI.
  • main problems..... (Score:1, Interesting)

    by Anonymous Coward on Tuesday November 04, 2008 @01:54AM (#25622807)

    The main problem is that AI is HARD. Like NP-hard. And it's difficult to program, so it goes in last. What would change that is a predefined AI library which can learn and be plugged into multiple games, like the Havok physics engine. Something easy, that can be shoved in last and can learn.

  • by hardburn ( 141468 ) on Tuesday November 04, 2008 @02:08AM (#25622897)

    I suspect that all the devs say they want a great AI in their game, but when deadlines start to come up, AI is one of the first things to get cut. That's why every RTS in history that got a preview in a magazine a year before release promised a "groundbreaking AI", and yet the same game when released still has ore trucks driving around a hill, across three bridges, and through the enemy base, just because that particular piece of ore was the closest in a straight line.

    I've noticed devs getting slightly cleverer of late. In C&C: Generals, the initial resource piles are right next to your base, and harvester-type units don't go to resource piles that aren't in a certain range. Plus, the late-game economy depends on things unrelated to those resource piles. So the underlying problem still exists, but is rarely noticed with the way the game works. Which is some kind of progress, I guess.
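
    As a toy illustration of that pathfinding complaint (nothing from any real engine -- the grid, the BFS, and the function names below are made up for the example), compare picking the ore that is closest in a straight line with picking the ore that is closest by actual path length around obstacles:

    ```python
    # Hypothetical sketch: straight-line vs. path-distance resource selection.
    # grid[y][x] == 0 means walkable, 1 means blocked (e.g. a hill).
    from collections import deque
    import math

    def path_length(grid, start, goal):
        """BFS path length on a 4-connected grid; None if unreachable."""
        w, h = len(grid[0]), len(grid)
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            (x, y), dist = queue.popleft()
            if (x, y) == goal:
                return dist
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and grid[ny][nx] == 0 and (nx, ny) not in seen:
                    seen.add((nx, ny))
                    queue.append(((nx, ny), dist + 1))
        return None

    def nearest_ore_by_line(base, ores):
        # The "ore truck drives around a hill" behavior: distance ignores terrain.
        return min(ores, key=lambda o: math.dist(base, o))

    def nearest_ore_by_path(grid, base, ores):
        # Slightly smarter: rank reachable ores by real travel distance.
        costs = [(o, path_length(grid, base, o)) for o in ores]
        costs = [(o, d) for o, d in costs if d is not None]
        return min(costs, key=lambda od: od[1])[0] if costs else None
    ```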

  • by SnowZero ( 92219 ) on Tuesday November 04, 2008 @02:11AM (#25622919)

    Education can help; a lot of college CS programs don't force the breadth that an "AI game developer" would need. In my undergrad degree, graphics, AI, game programming, and distributed systems were "applications" classes, and you only needed one or two. Usually the game programmers would have to take graphics, because even if you're the sound or networking guy, they'd expect you to know graphics like they did. If more game programmers had taken systems classes (such as operating systems), I don't think they would have had as rough a road with multicore either.

    However, even given the current narrow classes, you can at least try to bleed through enough of the wider topics into game-oriented classes to get people footed. In the game programming course I was a teaching assistant for, I gave a couple lectures on AI (I was an AI/Robotics grad student). Then we gave them an AI-only assignment. It was a multiplayer tank game where we pitted their AIs against some test opponents, and then against one another in a tournament. It didn't allow much cheating in the AIs (only global visibility, where a player can see every unit at all times, which is nearly universal in game AIs). The assignment really seemed to be a hit, and hopefully for those students that went on to the games industry, it gave them the basis to branch out and learn other deeper AI techniques.

    It would also help if the AI community wouldn't look down on things such as games as "lowly applications". In some sense I think game programmers would be best off talking to robotics people or even web machine learning people (spam filtering, web search ranking, etc). Those people are already doing applied AI, and particularly for robotics folks, working on many of the same problems that good game AIs would face.

  • by SnowZero ( 92219 ) on Tuesday November 04, 2008 @03:24AM (#25623243)

    I think that you are wrong about the AI community looking down on games as "lowly applications."

    Well, at conferences such as AAAI, even robotics isn't treated as more than a "mere application". At least that's the treatment I felt going there and presenting work. It's OK, I guess, because AI theory doesn't directly relate to where most work in applied AI happens (just like in applying machine learning -- most of the work is in the features, not the algorithms). However, I feel there is a real gap between what conferences such as AAAI are willing to embrace and what happens at game development conferences or robotics conferences. As someone who did RoboCup for many years, I really do know what it is like to span that gap -- some of the most important breakthroughs we made were not publishable in either type of conference. I really feel that the field lacks something like the AI equivalent of JGT (Journal of Graphics Tools) or Graphics Gems. If I were a more professorial type I might try to start something like that, but instead I just wait and hope someone else will, and then I could contribute to it.

    I think that many posters here are confusing Artificial Intelligence with Machine Learning. The latter is a subset of the former, but is often difficult to apply to games. Accepting adaptation is typically equivalent to abandoning Game Theory.

    Other than the fact that applying machine learning successfully is difficult in general, I don't know if I really agree with that statement. Game theory is a subset of AI just like machine learning. Game theory is very important for abstract games such as chess, but I'd argue that most "physical simulation" games need things more like potential fields, advanced motion planning, high-quality hand-coded policies, and geometric stuff of that ilk. Old stuff like rule-based AIs really has a place in games too -- the work done on scaling up expert systems is really "software engineering for AI", and sadly a lot of that work didn't get published either.

    A simple form of learning we used in our robotics work was weighted experts over a set of hand-coded policies (~= "a set of AI strategies" for the non-AI people out there). We used that to learn during 30-minute autonomous robotics games against robotic opponents, and it worked, even during a relatively short game. All you need is a way to define successful subgoals (e.g. scoring in a team game, kills in an FPS, or areas won in an RTS) and you can get convergence to the optimal strategy in a logarithmic number of rounds. In a turn-based game, if those experts each applied game theory, you could combine game theory and learning in a pretty reasonable way. Yes, I know that would not be optimal, since you would normally just pick the strongest AI, but humans rarely play optimally, nor is it even fun to always play against the same strong strategy.
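
    For readers who haven't seen the weighted-experts trick, here is a deliberately crude sketch of the general idea (not the actual scheme from that RoboCup work -- the policy names, the learning rate, and the reward signal are placeholders): keep a weight per hand-coded strategy, pick strategies in proportion to their weights, and shrink the weight of a strategy that loses its subgoals.

    ```python
    # Crude multiplicative-weights sketch over hand-coded strategies.
    import random

    class WeightedExperts:
        def __init__(self, policy_names, eta=0.5):
            self.weights = {name: 1.0 for name in policy_names}
            self.eta = eta                      # how aggressively to punish losses

        def pick(self):
            """Sample a strategy in proportion to its current weight."""
            total = sum(self.weights.values())
            r = random.uniform(0.0, total)
            for name, w in self.weights.items():
                r -= w
                if r <= 0.0:
                    return name
            return name                         # numerical fall-through

        def update(self, name, reward):
            """reward in [0, 1], e.g. fraction of subgoals won this round."""
            loss = 1.0 - reward
            self.weights[name] *= (1.0 - self.eta) ** loss

    # experts = WeightedExperts(["rush", "turtle", "expand"])
    # each round: s = experts.pick(); ...play...; experts.update(s, won / total_subgoals)
    ```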

    Certainly there are entire fields of A.I. that are entirely unrelated, but there are many fields of A.I. whose core development is exclusively related to games.

    True. I guess I just haven't really seen the dots connected. Then again, I haven't been to any AI conferences in the last year or two, so maybe that has changed already. I hope so.

    Most games do not implement any heavy A.I. techniques because it is too difficult to provide skill gradients: Easy, Normal, Hard, Godlike. These skill gradients are pretty simple to implement as an escalation of "cheating," but not so simple as a tweak to AlphaBeta and pretty much futile with Machine Learning.

    Well, here's another place where I think the communities need to work more closely. Coming up with search strategies that are more human-like when measured statistically would be fascinating work. Limiting search depth and random mistakes works in some games, but we could do a lot better. I think some of the commercial chess games already do a pretty good job of offering various skill levels.
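
    As a concrete (if toy) version of "limit the search depth and add random mistakes", here's a sketch for a generic turn-based game; the evaluate/legal_moves/apply_move hooks and the depth and blunder numbers are placeholders, not anything from a shipping chess engine:

    ```python
    # Skill knob sketch: shallower search plus occasional deliberate blunders.
    import random

    def negamax(state, depth, evaluate, legal_moves, apply_move):
        """evaluate() is scored from the perspective of the side to move."""
        if depth == 0 or not legal_moves(state):
            return evaluate(state), None
        best_score, best_move = float("-inf"), None
        for move in legal_moves(state):
            score, _ = negamax(apply_move(state, move), depth - 1,
                               evaluate, legal_moves, apply_move)
            score = -score
            if score > best_score:
                best_score, best_move = score, move
        return best_score, best_move

    def pick_move(state, skill, evaluate, legal_moves, apply_move):
        depth = {"easy": 1, "normal": 2, "hard": 4}[skill]
        blunder = {"easy": 0.3, "normal": 0.1, "hard": 0.0}[skill]
        _, move = negamax(state, depth, evaluate, legal_moves, apply_move)
        if random.random() < blunder:
            move = random.choice(legal_moves(state))   # human-like mistake
        return move
    ```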

  • by Dutch Gun ( 899105 ) on Tuesday November 04, 2008 @03:56AM (#25623369)

    The problem there is that the two factors you mentioned (accuracy and commands-per-minute) are both things that AI can far exceed humans at, especially if you aren't careful to limit it.

    You're correct. I've written AI for a number of commercial games. Some of the most challenging AI is for games in which the players are competing with the AI on what are supposed to be equal terms. An AI can home in on a player's forehead with a sniper rifle with little difficulty. It's a simple mathematical equation. How do you simulate the aiming a player has to do?

    The solution I came up with was to put the target's aim point on a set of springs attached to the player. By jumping around and changing direction quickly, the player would tend to throw the bot's aim off (imagine the target bouncing around, attached by the springs). But stand still or move in the same direction for too long, and the bot would home in on the player. And, of course, just like a human player, the AI would get in a lucky shot every once in a while as the target crossed in front of the bot's aim.
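
    A toy version of that spring idea (constants and names are made up; the real implementation obviously depends on the engine) might look like this -- the bot shoots at aim_pos, which lags behind and overshoots a jinking target but settles onto one that moves predictably:

    ```python
    # Spring-damper aim point: the bot aims at aim_pos, not at the player directly.
    class SpringAim:
        def __init__(self, stiffness=8.0, damping=4.0):
            self.stiffness = stiffness
            self.damping = damping
            self.aim_pos = (0.0, 0.0)
            self.aim_vel = (0.0, 0.0)

        def update(self, target_pos, dt):
            # Pull the aim point toward the target, damped by its own velocity.
            ax = self.stiffness * (target_pos[0] - self.aim_pos[0]) - self.damping * self.aim_vel[0]
            ay = self.stiffness * (target_pos[1] - self.aim_pos[1]) - self.damping * self.aim_vel[1]
            self.aim_vel = (self.aim_vel[0] + ax * dt, self.aim_vel[1] + ay * dt)
            self.aim_pos = (self.aim_pos[0] + self.aim_vel[0] * dt,
                            self.aim_pos[1] + self.aim_vel[1] * dt)
            return self.aim_pos   # shoot at this point
    ```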

    You have to come up with creative solutions to make the game "feel" fair. That's not the kind of stuff that's typically taught in college courses. Naturally, formal training doesn't hurt, but there are a lot of challenges unique to game development.

  • by tibman ( 623933 ) on Tuesday November 04, 2008 @04:07AM (#25623403) Homepage

    I've been in love with neural nets since first sight. After a few projects I really wanted to try my hand at a game. I came up with an idea for an RTS that was something like this: the majority of your units are AI - feed-forward nets that evolve genetically - so you don't have to micromanage your economy or patrols or anything. You say, "I want a factory here," and your builders will find the resources needed and build that factory. They will have to learn the optimal way of doing this. The most successful individuals are the blueprints for the next generation. If that building is destroyed, your command still stands - you want a freaking factory in that spot - so they will toil away and continue building it. The battle side was supposed to be similar, but I haven't gotten around to it. The idea is an RTS that frees you up from the micro and details and lets you focus on the strategy aspect. A side effect is that your units become unique over time, so even though your units may be very successful on one map, a different type of terrain really makes life difficult for them for a few generations.

    Right now they learn how to gather materials and move them to construction sites. It's actually a lot of fun to watch, but it is still a long, long way from anything resembling a game.

    Back on topic, though: an AI that adapts to its opponent's weaknesses is well within reach of any game developer. The tough part with genetic and neural-net AI is lack of control. You can't easily script a sequence and you can't guarantee what the user/customer will experience. I have had degrees of success controlling "difficulty" by restricting how many ticks of "thought" a net gets per second. But even then, the net will eventually adapt to this slow thinking.
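
    The "ticks of thought per second" throttle is easy to sketch in isolation (names here are illustrative, not from the project described above): the controller may only re-run its net a few times per second and otherwise has to repeat its last decision.

    ```python
    # Difficulty knob: limit how often the AI is allowed to "think".
    class ThrottledBrain:
        def __init__(self, net, thinks_per_second):
            self.net = net                       # any callable: observation -> action
            self.interval = 1.0 / thinks_per_second
            self.accum = 0.0
            self.last_action = None

        def act(self, observation, dt):
            self.accum += dt
            if self.last_action is None or self.accum >= self.interval:
                self.accum = 0.0
                self.last_action = self.net(observation)   # one "tick of thought"
            return self.last_action                        # otherwise repeat itself
    ```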

  • by TheLink ( 130905 ) on Tuesday November 04, 2008 @05:43AM (#25623783) Journal
    Well I guess you could script some of the AI enemy soldiers to try to save their comrades - who have had their legs blown off by you or something.

    The mortally injured ones might alternate between cursing you and begging you to put them out of their misery.

    Some might try to surrender to you if they're cornered, out of ammo, and clearly outclassed.

    Might make for a different sort of game. Smart but not hard opponents :).

    At the end of the game when you have killed thousands of these, maybe you'd need a shrink ;).

    I think the current level of game AI is tolerable.

    I'd rather have a better Aliens vs Predator game with Crysis-level tech, with "predator" aliens that can climb trees or walls (unlike those in the previous AVP games, which couldn't - not even slowly).
  • by VShael ( 62735 ) on Tuesday November 04, 2008 @05:53AM (#25623817) Journal

    Agreed.
    Another great game to do this was Descent.

    If you flew into a room, started shooting madly, then reversed out so you could pick off the enemies as they moved into the doorway one at a time, they quickly learned.

    Soon you'd find that the enemies wouldn't chase you. They would, in fact, surround the doorway, as if knowing you'd have to enter sooner or later... and then they'd all shoot you.

    It made the game very replayable.

  • by Anonymous Coward on Tuesday November 04, 2008 @08:36AM (#25624519)

    Who says AI has to be used to make opponents harder?

    It could also be used to:
    - Make opponents have conversations with each other.
    - Exchange knowledge, gear, supplies, etc.
    - Boost or ruin other opponents' morale.
    - Live as they should and not just stand still waiting for the player.
    - Trade with each other.
    - Play card games or chess with each other.
    - Get recreational and make sand castles when bored.
    - Have sex with each other.
    - Go explore surroundings and remember them afterwards.
    - Disobey orders.
    - Think up new orders for themselves.
    - Be opportunistic.
    - Get interested in mundane details like rocks if they happen to have geology training.

  • by Anonymous Coward on Tuesday November 04, 2008 @08:53AM (#25624615)

    Again, AI doesn't have to be used to make opponents harder or easier - just to make them more human.

    For starters, all computer opponents should have an FoV as accurate as what the player sees. Yeah, that's right - calculate the 3D FoV for all opponents too, in real time, as they are also moving around.

    Without real FoV, there's no point trying to make opponents that explore their surroundings, don't know beforehand what's behind new corners, can remember where they've been, etc.

    In any case, the AI should not know where the player is unless it either sees the player, another AI tells it where the player is, or it has tracking skills and the player actually left tracks.
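
    A minimal sketch of that kind of honest visibility test (the raycast is a placeholder for whatever occlusion query a given engine provides): the AI only learns the player's position if the player is within range, inside its view cone, and not blocked by geometry.

    ```python
    # View-cone plus line-of-sight check; line_blocked is an engine-specific stub.
    import math

    def can_see(ai_pos, ai_facing_deg, player_pos, fov_deg=90.0, max_range=50.0,
                line_blocked=lambda a, b: False):
        dx, dy = player_pos[0] - ai_pos[0], player_pos[1] - ai_pos[1]
        dist = math.hypot(dx, dy)
        if dist > max_range:
            return False
        angle_to_player = math.degrees(math.atan2(dy, dx))
        # Signed angular difference wrapped into [-180, 180)
        delta = (angle_to_player - ai_facing_deg + 180.0) % 360.0 - 180.0
        if abs(delta) > fov_deg / 2.0:
            return False
        return not line_blocked(ai_pos, player_pos)   # occlusion check
    ```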

  • by Anonymous Coward on Tuesday November 04, 2008 @10:54AM (#25625691)
    There are some good points about having AI enhance the fun for the player instead of adding difficulty or realism, but there are still some genres of games where more difficulty and realism would be better. I'll use horror and survival games as an example. I've never found great interest in scripted monsters jumping out at you and then running mindlessly in your direction. I'd love to see a survival game where you have a sleek, intelligent killing machine using opportunity and time to take you out, that backs off when in danger, and uses the environment to its advantage rather than yours. A game, for example, could be something like the first Alien movie. Let's see a horror/survival/puzzle game where you have to find a way off your ship in space, making crude, probably ineffective weapons, while being hunted down by a lone creature. With the right environment and the right monster, the suspense of just exploring could be huge. Why does a game need large numbers of bad guys to mow down when, with great AI, level design, sound design and so on, you should be able to have a good game with one?
  • by Paradigm_Complex ( 968558 ) on Tuesday November 04, 2008 @11:52AM (#25627003)
    Yes, improving the AI could very well make games more fun for many people, but what you're talking about is not exactly the kind of AI that's needed. There is a very big difference between being able to beat the human player and being able to outwit the human player. It's not necessarily difficulty or realism that's needed; it's creativity and genuine surprise.

    I want a game where the AI is smart enough to send a probe over and throw an assimilator on the player's vespene geyser to keep the player behind in tech. I want an AI that will excessively use flash-bang grenades and then suddenly switch to a smoke grenade so the player will look away to avoid being blinded while the AI runs in and shoots him up. I want an AI smart enough to toss a veggie at his partner to give his partner back his up-B and help him recover back to the stage.

    These things may be possible today by coding in the exact individual ideas, but human players will quickly learn all the very limited tricks the AI knows and become bored with the AI. Still, it's better than what the vast majority of games have and I'd really love to see such things. What we really need is AI that's creative. That would make games much more fun.
  • by maugle ( 1369813 ) on Tuesday November 04, 2008 @12:57PM (#25628417)
    As far as RTS goes, I'd like to see the AI act more human. Not just in how "smart" it plays, but in how quickly it can act.

    Remember in Warcraft 2 (I've started playing it again recently, so I'll use it as an example), how you had to send roughly double the number of troops as the enemy to get a fair fight? The AI would have all its ogre-magi cast bloodlust (or all its paladins cast heal) simultaneously, while you'd be struggling to get a single spell out.

    And while that was going on, they'd be effortlessly churning out more units. While you were looking at your base and getting more units ready, your units in battle would be getting slaughtered.

    So, what I'm really looking for in an AI is human-like delays between commands.
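
    One simple way to sketch that (all numbers and names here are invented for illustration) is to queue every order the AI decides on behind a randomized reaction delay and to cap how many orders per minute it can actually issue:

    ```python
    # Human-like command latency: reaction delays plus an actions-per-minute cap.
    import heapq
    import random

    class HumanizedCommander:
        def __init__(self, reaction=(0.25, 0.6), max_apm=120):
            self.reaction = reaction            # min/max reaction time in seconds
            self.min_gap = 60.0 / max_apm       # minimum seconds between issued orders
            self.pending = []                   # heap of (ready_time, seq, order)
            self.seq = 0
            self.clock = 0.0
            self.last_issued = float("-inf")

        def decide(self, order):
            # The AI "noticed" something; it still needs time to react.
            delay = random.uniform(*self.reaction)
            heapq.heappush(self.pending, (self.clock + delay, self.seq, order))
            self.seq += 1

        def update(self, dt, execute):
            self.clock += dt
            while (self.pending and self.pending[0][0] <= self.clock
                   and self.clock - self.last_issued >= self.min_gap):
                _, _, order = heapq.heappop(self.pending)
                execute(order)
                self.last_issued = self.clock
    ```
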
  • by nick_davison ( 217681 ) on Tuesday November 04, 2008 @03:58PM (#25631639)

    Huh, you must be right. I guess that's why online gaming and LAN parties are so incredibly unpopular.

    Everyone still hates campers. Most LAN party games have rulesets that discourage it.

    If you have rapid respawns, the guy who runs out, gets fifty kills vs. twenty deaths, is going to massively outscore the guy who nurses his last two percent of health and snipes five guys from hiding.

    Similarly, with short round times, it's much more fun to try something a little bit crazy so you can tell your friends about how you got a last point of health kill. After all, you know there's another round starting in thirty seconds.

    Ask yourself how many LAN party games really encourage a real world fear of pain, desperately trying to keep the one life you're given, etc.

    The games don't encourage realistic play from players. Why on earth would you want it from AI?

    And, for that reason, saying "AI could be more real because LAN parties are fun" doesn't really hold up - the humans aren't playing realistically, due to the rulesets imposed, so who wants the AI to?

  • by default luser ( 529332 ) on Wednesday November 05, 2008 @03:34PM (#25647623) Journal

    Sure, that's all fine and good if you limit your character to an "Orc" and limit its exposure to the "field of battle." Limiting scope is the easiest way to get AI that "works."

    A good example of limited-scope AI is the shock troopers from the original Half-Life: despite the weak hardware of the day, the troopers react to player attacks, falling back and covering their movements with fire and grenades. This only "worked" because the troopers had a very limited list of actions to select from, and the paths they could follow were already set in stone. Just watch them attack or retreat: they will always launch attacks from, or fall back to, the same strong points, using one of perhaps three pre-defined paths.

    What I'm talking about is truly diversified AI - is your Orc male or female? Does it have an outgoing or introverted personality? Does it start fights, or run away from them? Does it hunger for something, or does it lack drive? Does it try to be well-liked by everyone, or does it care more about others' happiness? Does it pick its nose? Does it dance and sing? What exactly makes your Orc engaging, interesting, pathetic, predictable and unexpected, all at the same time?

    All the above factors and more can be described to create a more well-rounded AI, one that you can use in all sorts of situations. I mean, limited AI in limited situations has its place (it makes a great background for a movie scene, or a well-scripted game encounter). But ultimately, what people want to see is AI that does something totally unexpected, like spontaneously getting into an argument with superiors, trying to lie, etc. Unfortunately, identifying all the various fragments that make up the culmination of an AI's tendencies is difficult, and it requires a good AI algorithm to put the data to good use.
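
    Purely as an illustration of trait-driven behavior (the trait names, weights, and actions below are invented), one cheap approach is to score each candidate action against a character's personality and add a little noise, so two Orcs with identical action lists still behave differently:

    ```python
    # Trait-weighted action selection: personality biases which action wins.
    import random

    TRAITS = {"aggression": 0.8, "sociability": 0.2, "curiosity": 0.5}

    ACTIONS = {                          # how appealing each action is, per trait
        "start_fight":    {"aggression": 1.0},
        "chat_with_ally": {"sociability": 1.0},
        "wander_off":     {"curiosity": 1.0},
        "sing_and_dance": {"sociability": 0.6, "curiosity": 0.4},
    }

    def choose_action(traits, actions, temperature=0.3):
        scores = {name: sum(traits.get(t, 0.0) * w for t, w in prefs.items())
                  for name, prefs in actions.items()}
        # A little noise keeps the character unpredictable rather than optimal.
        noisy = {name: s + random.uniform(0.0, temperature) for name, s in scores.items()}
        return max(noisy, key=noisy.get)

    # choose_action(TRAITS, ACTIONS) -> usually "start_fight", but not always.
    ```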
