The State of Game AI 88

Gamasutra has a summary written by Dan Kline of Crystal Dynamics for this year's Artificial Intelligence and Interactive Digital Entertainment (AIIDE) Conference held at Stanford University. They discussed why AI capabilities have not scaled with CPU speed, balancing MMO economies and game mechanics, procedural dialogue, and many other topics. Kline also wrote in more detail about the conference at his blog. "... Rabin put forth his own challenge for the future: Despite all this, why is AI still allowed to suck? Because, in his view, sharp AI is just not required for many games, and game designers frequently don't get what AI can do. That was his challenge for this AIIDE — to show others the potential, and necessity, of game AI, to find the problems that designers are trying to tackle, and solve them."
  • by B5_geek ( 638928 ) on Tuesday November 04, 2008 @01:01AM (#25622449)

    The one game that has always stood out in my mind as having great A.I. was Comanche Maximum Overkill. The original (386DX-40 era) DOS game actually advertised in the manual that if you repeat the same attack pattern for 30 seconds then the game would adapt, AND IT DID!

    Imagine this scenario: you are in a helicopter hiding behind a hill. Whenever a bad guy gets close enough, you pop up above the hill, get a missile lock, fire, then drop below the hill. If you repeat this pattern long enough (30+ seconds), enemy copters will sneak up behind you and blow you up. I was always impressed by this "learning A.I.," as opposed to what most computer games do:

    RTS/TBS: build stuff quicker than you can and/or advance technology faster than should be possible.
    FPS: Have 'super accurate' shots, higher health, bigger guns.
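    The adaptation described above could be sketched as a simple repetition detector that watches the player's recent actions; every name and threshold here is invented for illustration, not taken from the actual game:

```python
from collections import deque

class PatternWatcher:
    """Flag when the player's recent actions form a short repeated cycle."""

    def __init__(self, window=10, min_repeats=3):
        self.recent = deque(maxlen=window)  # last few observed player actions
        self.min_repeats = min_repeats

    def observe(self, action):
        self.recent.append(action)

    def pattern_detected(self):
        # Look for a cycle of length 1-3 repeated min_repeats times at the
        # end of the window (e.g. pop-up / fire / drop, three times over).
        actions = list(self.recent)
        for cycle_len in (1, 2, 3):
            need = cycle_len * self.min_repeats
            if len(actions) >= need:
                tail = actions[-need:]
                if tail == tail[:cycle_len] * self.min_repeats:
                    return True
        return False

watcher = PatternWatcher()
for _ in range(3):
    for move in ("pop_up", "fire", "drop"):
        watcher.observe(move)
print(watcher.pattern_detected())  # the 3-step cycle repeated 3 times -> True
```

    Once the flag trips, the game can switch the enemy to a counter-behaviour (such as the flanking copters described above) instead of continuing to walk into the ambush.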

  • by SnowZero ( 92219 ) on Tuesday November 04, 2008 @01:56AM (#25622827)

    This to me is a huge downfall of modern games - instead of making AI opponents "smarter", devs simply tweak the rules to give the AI more of an advantage.

    Indeed. Cheating AIs make me cringe. I'd really rather see a dumber AI that doesn't know where every unit on the level is, than play a more "skilled" one that's just using the fact that it's not playing the same game to gain its advantage.

    That being said, it is incredibly hard to define an AI that doesn't have "unrealistic" skills when the players' skills are advancing in the same fashion. For example, your skill in Halo is to a large extent determined by how accurate you are, which is easily mimicked by AI. I can't count the number of times I've heard someone accused of using an "aimbot" because their skill (or luck) in an FPS seemed "too good" or "unrealistic". The same goes for RTS games - the top human players in the world are to a large degree measured by how many commands, or actions, they can perform in a minute - which is again easily transferred to an AI opponent.

    It's hard to define, but not necessarily hard to measure. Record a bunch of humans playing, look at plots of where they aim based on location, distance, velocity, etc, and build a statistical model. Or, if you've got something more algorithm based, measure it the same way and make sure its distribution on plots looks fairly human. Of course, game companies will have to be willing to hire statistics/AI type people (or train their devs in those areas), and devote the money to gather player data and time to make it happen. So far few companies have gone that route, but I think more will in the future.

    Several years ago I went to an AI conference (AAAI), and they had a quake bot that was coded by some rule-based AI experts (SOAR bot). I was into quake quite a bit at the time, and I'd have to say that was the most fun human-like AI I've played in a game where the player and AI had equal footing (same unit(s) and capabilities). I don't think the game companies have been willing to hire those kinds of people yet, but as I said before, hopefully that will change, especially once customers realize that screenshots don't always mean good gameplay.

    P.S. I did AI/Robotics for my degrees, and work on machine learning for a living. Haven't worked in the game industry, but I've worked with a bunch of people in school who have gone into that.
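    The measurement idea in this comment (record human play, model the aim-error distribution, then have the bot sample from that model) might look roughly like this; the data below is a synthetic stand-in for real logged sessions, and all names are hypothetical:

```python
import math
import random
from collections import defaultdict

random.seed(0)

# Stand-in for logged human play: (target_distance, aim_error_in_degrees).
# A real pipeline would read these pairs from recorded matches.
samples = [(d, random.gauss(0.0, 0.5 + 0.02 * d))
           for d in range(5, 100, 5) for _ in range(50)]

# Bucket the errors by distance and measure the spread in each bucket.
buckets = defaultdict(list)
for dist, err in samples:
    buckets[dist].append(err)

def stddev(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

model = {d: stddev(errs) for d, errs in buckets.items()}

def human_like_aim_error(distance):
    """Sample a miss the way the recorded humans missed at this range."""
    nearest = min(model, key=lambda d: abs(d - distance))
    return random.gauss(0.0, model[nearest])

# A bot drawing its error from this model gets sloppier at long range,
# because the humans it imitates did too.
print(model[5] < model[95])
```

    The same plots used to build the model can then be used the other way around, as the parent suggests: overlay the bot's measured distribution on the human one and check that it looks plausibly human.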

  • by KDR_11k ( 778916 ) on Tuesday November 04, 2008 @02:54AM (#25623141)

    You know how many people try to automate seemingly trivial things with scripts in Spring [clan-sy.com]? You quickly run into a barrier: while the strategy may be standard, the specific execution must be adapted to the situation. E.g. a commonly requested feature is an auto-dgun widget for Total Annihilation clones/ripoffs, but there are tons of factors to account for: the thing costs a lot to use, it can easily cause friendly fire that, depending on the situation, could be acceptable or unacceptable, and it has to be aimed properly so the shot actually connects. If you automate that, the enemy player will quickly learn the pattern of your automation and adapt to use it against you. That's why the tactics cannot be automated easily; a good player will employ tactics that beat your AI. Well, unless you make it impossible to override the AI's decisions, but then you'll end up with units doing stupid things, and players learning the situations in which the AI behaves well or badly and exploiting that.

    Real time strategy is really more on a tactical scale most of the time, strategic advantages can be countered by good tactics and a player must handle both the macro and micro (pretty much strategy and tactics) at the same time to get the maximum efficiency out of his troops. Any predictable behaviour is going to be a weakness.
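    Even the "trivial" auto-dgun gate described above has to juggle several checks at once. This is a purely illustrative sketch in flat 2D geometry, not real Spring widget code; the constants, field layout, and friendly-fire clearance are all made up:

```python
import math

DGUN_RANGE = 250.0        # hypothetical maximum useful range
DGUN_ENERGY_COST = 500.0  # the shot is expensive to fire

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def crosses_ally(commander, target, allies, clearance=20.0):
    """Rough friendly-fire check: is any ally near the firing line?"""
    cx, cy = commander
    tx, ty = target
    dx, dy = tx - cx, ty - cy
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        return False
    for ax, ay in allies:
        # Project the ally onto the commander->target segment.
        t = max(0.0, min(1.0, ((ax - cx) * dx + (ay - cy) * dy) / seg_len2))
        px, py = cx + t * dx, cy + t * dy
        if math.hypot(ax - px, ay - py) < clearance:
            return True
    return False

def should_auto_dgun(commander, target, allies, energy):
    if energy < DGUN_ENERGY_COST:
        return False  # can't afford the shot right now
    if distance(commander, target) > DGUN_RANGE:
        return False  # out of range, the shot wouldn't connect
    if crosses_ally(commander, target, allies):
        return False  # unacceptable friendly-fire risk
    return True

print(should_auto_dgun((0, 0), (100, 0), [(50, 5)], 1000))    # ally on the line -> False
print(should_auto_dgun((0, 0), (100, 0), [(50, 100)], 1000))  # clear shot -> True
```

    And this sketch already illustrates the parent's point: every threshold in it is a fixed pattern an opponent can bait, e.g. by dancing a cheap unit along the edge of `DGUN_RANGE` to drain energy.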

  • by redscare2k4 ( 1178243 ) on Tuesday November 04, 2008 @08:42AM (#25624561)

    I don't agree. In your example, you can make those groups of enemies flank the player but give them low accuracy, for example. So on low difficulty the AI's tactics are smart but its competence is low. And those of us who like masochistic difficulty levels would enjoy having to lay some mines to cover our backs from those flankers.
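    In other words, keep the tactics identical at every difficulty and scale only the execution. A tiny sketch of that decoupling (the numbers and names are illustrative, not from any real game):

```python
import random

# Same smart plan at every difficulty; only the hit chance scales.
DIFFICULTY_ACCURACY = {"easy": 0.2, "normal": 0.5, "masochistic": 0.9}

def enemy_turn(difficulty, rng=None):
    rng = rng or random.Random(0)
    orders = ["flank_left", "flank_right", "suppress"]   # tactics never dumb down
    shot_hits = rng.random() < DIFFICULTY_ACCURACY[difficulty]  # competence does
    return orders, shot_hits

easy_orders, _ = enemy_turn("easy")
hard_orders, _ = enemy_turn("masochistic")
print(easy_orders == hard_orders)  # True: flanking happens on every difficulty
```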

  • by skelterjohn ( 1389343 ) on Tuesday November 04, 2008 @10:01AM (#25625089)
    Yeah... you don't want to use neural networks for game AI. Reinforcement Learning, on the other hand, is a much better fit. At its heart, the RL problem is the same as the sequential decision-making problem: an agent acts in the world, receiving observations and numerical reward signals that it tries to maximize. The RL community is young (compared to the AI community as a whole) and is building up the theory and experience needed to approach these sorts of problems. All of my work focuses on agents learning to play video games (FPS and platformers, as more or less two separate threads). It's coming, and we'll be ready to help the AI soon... just leave a couple cycles free from all those fancy graphics so we can do some thinking in the background, ok?
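    A minimal tabular Q-learning sketch of the loop described above (act, observe, receive a numerical reward, update the value estimates); the "game" here is a toy five-cell corridor rather than an FPS or platformer:

```python
import random

random.seed(1)

# Toy episodic task: a 5-cell corridor; reaching the rightmost cell pays +1.
N_STATES, ACTIONS = 5, (-1, +1)
GOAL = N_STATES - 1
alpha, gamma, epsilon = 0.5, 0.9, 0.1

# Optimistic initial values push the agent to try every action at least once.
Q = {(s, a): 1.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = max(0, min(GOAL, state + action))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

for _ in range(2000):                                   # episodes
    s = 0
    for _ in range(50):                                 # steps per episode
        if random.random() < epsilon:
            a = random.choice(ACTIONS)                  # explore
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])  # exploit
        s2, r, done = step(s, a)
        best_next = 0.0 if done else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2
        if done:
            break

# The greedy policy should now move right from every non-goal cell.
print(all(Q[(s, +1)] > Q[(s, -1)] for s in range(GOAL)))
```

    The whole training loop above costs almost nothing next to a render frame, which is the commenter's point about leaving a few cycles free: even simple background learning like this is affordable.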
  • by default luser ( 529332 ) on Tuesday November 04, 2008 @03:11PM (#25630865) Journal

    Wouldn't that be something?

    Unfortunately, there's no way to produce an AI like this, because each one would be a work of art. The immense amount of time it would take the programmer to construct personalities like the above from the ground-up would be prohibitive, and no amount of tools could streamline this.

    Really, this is the hardest part about AI design: classifying the entire human existence into easy-to-handle pieces. Unless you can successfully generalize human experiences and tendencies into neat little packages, there's no way you can create such an impressive AI as the above. You would spend too much time just doing each AI by hand.

  • by WuphonsReach ( 684551 ) on Tuesday November 04, 2008 @03:52PM (#25631511)
    The secondary problem, as you alluded to, is that nearly every game uses a completely different system for representing the world. Different combat types, different terrain, different environments.

    Which makes it extremely difficult to build upon previous work.
