A Look At Modern Game AI 87

IEEE Spectrum is running a feature about the progress of game AI, and how it's helping to drive AI development in general. They explore several of the current avenues of research and look at potential solutions to some of the common problems. "The trade-off between blind searching and employing specialized knowledge is a central topic in AI research. In video games, searching can be problematic because there are often vast sets of possible game states to consider and not much time and memory available to make the required calculations. One way to get around these hurdles is to work not on the actual game at hand but on a much-simplified version. Abstractions of this kind often make it practical to search far ahead through the many possible game states while assessing each of them according to some straightforward formula. If that can be done, a computer-operated character will appear as intelligent as a chess-playing program--although the bot's seemingly deft actions will, in fact, be guided by simple brute-force calculations."
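For illustration, a minimal Python sketch of the approach the article describes: a deliberately simplified abstraction of an encounter (positions and hit points on a one-dimensional strip) searched exhaustively a few moves ahead and scored with a straightforward formula. The state, movement rules, and weights are all invented for the example.

# Sketch only: a made-up abstraction of a shooter encounter, reduced to
# positions and hit points, searched by brute force a few plies ahead.
from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    bot_pos: int       # position along a 1-D corridor (the abstraction)
    player_pos: int    # the player is held fixed in this simplified model
    bot_hp: int
    player_hp: int

MOVES = (-1, 0, 1)     # step left, hold position, step right

def step(state, move):
    """Apply a bot move; model damage with a crude range rule."""
    pos = state.bot_pos + move
    dist = abs(pos - state.player_pos)
    bot_hp = state.bot_hp - (2 if dist <= 1 else 0)        # close range is risky
    player_hp = state.player_hp - (1 if dist <= 3 else 0)  # bot hits within 3 cells
    return State(pos, state.player_pos, bot_hp, player_hp)

def evaluate(state):
    """The 'straightforward formula': health difference plus a little spacing."""
    return (state.bot_hp - state.player_hp) + 0.1 * abs(state.bot_pos - state.player_pos)

def search(state, depth):
    """Exhaustive search of the abstraction; returns (score, best first move)."""
    if depth == 0 or state.bot_hp <= 0 or state.player_hp <= 0:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in MOVES:
        score, _ = search(step(state, move), depth - 1)
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

print(search(State(bot_pos=0, player_pos=5, bot_hp=10, player_hp=10), depth=4))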
  • by cjfs ( 1253208 ) on Thursday December 04, 2008 @02:14AM (#25985529) Homepage Journal

    F.E.A.R., short for First Encounter Assault Recon .... University of Alberta GAMES (Game-playing, Analytical methods, Minimax search and Empirical Studies) .... called STRIPS (for STanford Research Institute Problem Solver)

    Combine that with such gems as:

    players view the virtual world from the perspective of the characters they manipulate, making Counter-Strike an example of what's known as a first-person-shooter game.

    and I'm not sure that belongs here.

    Then again, maybe I'm just bitter that I still can't beat GNU chess.

  • College AI Project (Score:4, Interesting)

    by Dripdry ( 1062282 ) on Thursday December 04, 2008 @02:15AM (#25985537) Journal

    Back in college I worked with a guy, Jeff, on an AI project. We were to play the game Freecell through to its finish.
    (if you're reading this, jeff, I'm still sorry I didn't do more coding on that and I owe you one)

    While I can understand the difficulties of doing a brute-force search, and that a simplified "version" of the game could be helpful, or even that parsed "states" or "instances" of situations in the game could be broken down and analyzed, wouldn't a simpler way be to apply a fitness test to the various actions? No, no... I lose points for not reading the article, perhaps.

    We used a combination of fitness and searching to determine a way to win a Freecell setup. Admittedly this is VERY simplified, and done in a sort of static system as opposed to a (usually) dynamic one in games.

    If there is less memory, the obvious answer seems to be to use a system to determine better ways of doing things. Rather than simplifying the game, couldn't the AI have a library of responses designed to fit certain situational profiles, then act in a (perhaps semi-random) manner that fits? Perhaps the responses could be genetically determined, even.

    Also, this use of situations versus individual actions could help lengthen the time the AI has to come up with a response.

    Just some thoughts, though I'm sure others more experienced than me have these on the brain. I'm looking forward to the responses on this topic.

    • by Mad Merlin ( 837387 ) on Thursday December 04, 2008 @03:09AM (#25985799) Homepage

      Games of perfect knowledge versus an opponent are pretty simple to solve. You'll find they all basically boil down to minimax applied to game trees plus an evaluation function (which gives you a fitness value). There's also alpha-beta pruning and things like Negascout which are just optimizations for Minimax. The trickiest part of this is writing an effective (and fast) evaluation function.

      Freecell is a bit different because it's a single player game, but ultimately you can apply a similar method as above.
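      For illustration, a minimal Python sketch of what the parent describes: minimax with alpha-beta pruning over an abstract two-player game. The Game interface here is a placeholder invented for the example; all of the game-specific knowledge would live in evaluate().

      # Sketch only: minimax with alpha-beta pruning over a placeholder game interface.
      import math

      class Game:
          def legal_moves(self, state, player): raise NotImplementedError
          def apply(self, state, move, player): raise NotImplementedError
          def is_terminal(self, state): raise NotImplementedError
          def evaluate(self, state): raise NotImplementedError  # score from player +1's view

      def alphabeta(game, state, depth, alpha=-math.inf, beta=math.inf, player=1):
          """Minimax value of `state`, searched `depth` plies deep with pruning."""
          if depth == 0 or game.is_terminal(state):
              return game.evaluate(state)
          if player == 1:                                   # maximizing player
              value = -math.inf
              for move in game.legal_moves(state, player):
                  child = game.apply(state, move, player)
                  value = max(value, alphabeta(game, child, depth - 1, alpha, beta, -player))
                  alpha = max(alpha, value)
                  if alpha >= beta:
                      break                                 # beta cutoff
              return value
          else:                                             # minimizing player
              value = math.inf
              for move in game.legal_moves(state, player):
                  child = game.apply(state, move, player)
                  value = min(value, alphabeta(game, child, depth - 1, alpha, beta, -player))
                  beta = min(beta, value)
                  if beta <= alpha:
                      break                                 # alpha cutoff
              return value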

      Real time decision making in games is often quite different. One problem is that you don't necessarily always want to make the "best" move. In Game! [wittyrpg.com] for example, each monster has a regular attack and may have one or more special attacks. Using simple AI to pick one each time (such as picking the attack that does the most raw damage) wouldn't be as interesting as picking randomly. Say one of the monster's special attacks is to steal some gold from the player: why would the AI ever pick that? It doesn't benefit the AI at all, but it does make the monster more interesting to fight for the player. Similarly, if one monster has an absolutely devastating attack, a "smart" AI would always use it. But if the AI always uses the devastating attack then either that monster will be impossible to kill, or the regular attacks must be really boring. But, if the monster with the devastating attack only uses it occasionally, it keeps the player on their toes, perhaps they'll heal more often, or use more powerful attacks to try and dispatch the monster faster.
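      A rough Python sketch of that "don't always pick the strongest move" point, with an invented attack table: the monster chooses by weighted random selection, so flavourful moves like stealing gold still show up without the devastating attack dominating.

      # Sketch only: weighted random attack selection instead of always-greedy damage.
      import random

      # Invented attack table: (name, average damage, selection weight).
      ATTACKS = [
          ("claw swipe",        8, 0.60),  # bread-and-butter attack
          ("steal gold",        0, 0.15),  # no damage, but memorable for the player
          ("devastating slam", 25, 0.10),  # rare enough to stay scary without being unfair
          ("taunt",             0, 0.15),
      ]

      def pick_attack(rng=random):
          names   = [name for name, _, _ in ATTACKS]
          weights = [weight for _, _, weight in ATTACKS]
          return rng.choices(names, weights=weights, k=1)[0]

      print([pick_attack() for _ in range(5)])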

      Having said all of that, random picking isn't always the best way to go (although it's quite efficient with CPU time). The main problem with game trees is their branching factor. Chess is a fairly CPU intensive game for AI to play, as it has an average branching factor of ~36. For real time games, it's likely that you can use domain knowledge to substantially prune the branching factor, which makes the problem much simpler. For example, instead of considering whether to, say, turn left 1 degree, or 2 degrees, or 3 degrees, or... you could just consider turning left 90 degrees or 180 degrees. If you only end up with a dozen options left to pick from, you can fairly quickly expand several levels of the game tree and then make an informed decision.

      However, some games are not games of perfect knowledge (Backgammon, for example), often they rely on chance. In this case, the value of deeper game tree expansion rapidly diminishes, and you simply need to temper your fitness values based on the expected probability of that move being possible. The other problem with games of chance is that the branching factor is usually very high, which typically makes it unfeasible to expand too many levels in the game tree anyways.
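      A small sketch of tempering values by chance in the expectimax style: a chance node averages its children's values, weighted by the probability of each outcome, instead of taking a hard min or max. The probabilities and values below are invented.

      # Sketch only: an expectimax-style chance node over (probability, state) pairs.
      def chance_value(outcomes, value_of_state):
          """Expected value over chance outcomes; probabilities should sum to 1."""
          return sum(p * value_of_state(s) for p, s in outcomes)

      # A lucky roll (1/6) is worth 10, any other roll is worth 2; the node's value
      # is the probability-weighted mean rather than the best or worst case.
      outcomes = [(1/6, "lucky"), (5/6, "normal")]
      values = {"lucky": 10.0, "normal": 2.0}
      print(chance_value(outcomes, values.get))  # 1/6*10 + 5/6*2 = 3.33...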

      Of course you can precook a number of situations; most good Chess AIs have a large collection of book openings that they use. It's really just an application of domain knowledge again: you can reuse your game tree expansion with the evaluation function on each of the book openings to find the most appropriate one, instead of doing an exhaustive search of all possible moves.

      • by bishiraver ( 707931 ) on Thursday December 04, 2008 @03:32AM (#25985917) Homepage

        Since you used an example from a role playing game, I'll respond with a similar one. Disclaimer: I'm totally hot for genetic algorithms, ever since I saw this article on pathfinding [gamasutra.com].

        Oftentimes, a player character's "best" moves are limited by factors beyond just needing to push the button - does he have enough combo points? does he have enough mana? is he in range? - before it's even a valid choice. Using a more complicated getInformation set than is outlined in the pathfinding program linked above, let's lay out what the mob can find out:

        How many hostile enemies are there?
        What kind of targets are the hostile enemies (ranged, melee, soft [rogue,mage,etc], hard [warrior,paladin])?
        How hurt is each target?
        How much damage has each available target done to me?
        How much healing has each available target done to other hostiles?
        How devastating have the non-damage abilities of available targets been to friendlies?
        Which abilities are available to me?
        Which status ailments do the available targets have currently?
        Add up levels of opponents and levels of allies. Who is larger?
        Which ability did I use last?

        ... and so on - I'm sure there are more checks you could give the AI access to, probably even depending on its intelligence.

        Then there are the actions the MOB can take:

        Choose target (can target self)
        Attack target (melee)
        Attack target (ranged)
        Use ability on target (repeat for however many abilities are available to the MOB - limited by mana, and so forth)
        Run away
        Close distance
        Run to ranged distance

        It would take quite a bit of training (you could probably automatically cull the first several generations, but later on you might actually have to interact with it yourself), but this kind of technique could end up with some very "smart" AIs that are fun and challenging to play against. You don't get God AIs, because they have limited information. You don't get God AIs, because their abilities are limited - and not by simple randomness. You might actually get an AI that stuns you and runs if it realizes it's outmatched, instead of stupidly sitting there and whaling at you with its rusty sword of crumbling.

        Granted, genetic algorithms have some HUGE drawbacks:

        The decision tree can be quite large, and it can take quite a few cycles to evaluate. Of course, your fitness check could also penalize trees that take too long to execute.

        It can take quite a bit of training (hundreds of generations, with thousands of entities each) before you get something that resembles an intelligent algorithm.

        Meanwhile, it might generate something that checks for contingencies you never thought to bake into the AI script.
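        For illustration, a toy Python sketch of the idea: a genetic algorithm evolves the weights a mob uses to turn observations like those listed above into an action choice. The features, actions, and stand-in fitness function are all invented; a real fitness score would come from simulated fights.

        # Sketch only: evolving action-selection weights with a toy genetic algorithm.
        import random

        ACTIONS = ["melee", "ranged", "heal_self", "flee"]
        FEATURES = ["my_hp", "target_hp", "target_distance", "allies_nearby"]

        def pick_action(weights, obs):
            """Score each action as a weighted sum of observations; pick the best."""
            best, best_score = None, float("-inf")
            for i, action in enumerate(ACTIONS):
                w = weights[i * len(FEATURES):(i + 1) * len(FEATURES)]
                score = sum(wi * obs[f] for wi, f in zip(w, FEATURES))
                if score > best_score:
                    best, best_score = action, score
            return best

        def fitness(weights):
            """Stand-in: reward fleeing when badly hurt and attacking when healthy."""
            hurt = {"my_hp": 0.1, "target_hp": 0.9, "target_distance": 0.5, "allies_nearby": 0.0}
            healthy = {"my_hp": 1.0, "target_hp": 0.5, "target_distance": 0.2, "allies_nearby": 1.0}
            return (pick_action(weights, hurt) == "flee") + (pick_action(weights, healthy) == "melee")

        def evolve(generations=200, pop_size=50, mutation=0.1):
            size = len(ACTIONS) * len(FEATURES)
            pop = [[random.uniform(-1, 1) for _ in range(size)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[:pop_size // 2]                    # survivors get to mate
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(size)
                    child = a[:cut] + b[cut:]                    # one-point crossover
                    children.append([g + random.gauss(0, mutation) for g in child])
                pop = parents + children
            return max(pop, key=fitness)

        print(fitness(evolve()))  # 2 means both test situations are handled "sensibly"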

        • Re: (Score:3, Informative)

          by Mad Merlin ( 837387 )

          GAs are interesting, but they're definitely less dynamic than the other approaches I mentioned. As you pointed out, GAs are much too slow to use on the fly; you have to prebake them. That has both pros and cons: the obvious con is that if you ever tweak any of the parameters, you'll have to rebake all of your behaviours. The obvious pro is that you can actually see the complete behaviour, and you can manually tweak it (if necessary).

          Canonically, a GA is used when the search space is simply too large to sear

          • As an interesting other direction, neural networks can make interesting state evaluators. The only drawback is that they must be trained ahead of time. You end up with (theoretically) more flexible state evaluators in the face of changing game environments.
          • Re: (Score:2, Interesting)

            Despite calling them GAs, the grand-parent is referring to Genetic Programming.

            Canonical GAs are never the correct tool for a problem. They combine a crude random local search (mutation) with the cross-over operator that is intended to splice partial solutions. The trouble is that even on problems designed to be exploited by GAs, like the Royal Road [psu.edu], a random restart hill-climber will perform better with the same number of fitness evaluations.

            I'm not as familiar with GP, but given the minute number of

      • by Anonymous Coward

        By the way... modern chess engines have an effective branching factor of about 2 (certainly less than 3)

        There may be 36 moves available in a typical position, but the engine will almost always have enough information to examine the best move first or second, and then rapidly refute all of the others by proving that they are inferior to the fully-examined first move (i.e. a beta cutoff).

        The effect is that it only takes about 2 full plies of extra depth to get a decent strength improvement (50-100 ELO).

      • Games of perfect knowledge versus an opponent are pretty simple to solve. You'll find they all basically boil down to minimax applied to game trees plus an evaluation function (which gives you a fitness value)

        All of which, when applied to the simplest and most abstract of all strategy games (GO [wikipedia.org]), fails to produce a competitive program. Search has its limits, even in zero-sum, perfect information, partisan, deterministic strategy games.

        • Re: (Score:3, Informative)

          by Mad Merlin ( 837387 )

          All of which, when applied to the simplest and most abstract of all strategy games (GO), fails to produce a competitive program. Search has its limits, even in zero-sum, perfect information, partisan, deterministic strategy games.

          That's because Go has an enormous average branching factor (>300). Go is definitely not the simplest of all strategy games.

          • What could be more abstract than GO? The rules are simple, the pieces are simple, and the board is simple. It is the interactions that are complex. It is an example of what is called "emergent complexity" or complex situations arising out of a simple set of rules in an evolving environment. It has also been said that if there is intelligent life somewhere else in the universe then they probably play GO too.
    • Re: (Score:2, Funny)

      by timbalara ( 808701 )
      You bastard, now I'm working in the auto industry thanks to your lack of coding! - Jeff (I kid, I kid!)
  • one important point (Score:5, Interesting)

    by SirSlud ( 67381 ) on Thursday December 04, 2008 @02:45AM (#25985695) Homepage

    Article is pretty bang on. Adaptive AI is tough to do, as is balancing being a tunable level of smart and being beatable. One thing I have not seen enough of in games is AI agents communicating with each other about intentions. More often it is simply a matter of saying, "I'm in this area, so don't try and go here." I've yet to really feel in a game that the enemies are working together.

    I saw a very nice presentation on Halo 3 high-level AI at GDC 08 that kind of nailed some of these problems with a pretty simple solution: there should be some top-level AI manager that handles requests from AI agents on what to do next when a high-level goal becomes useless to attempt to achieve. Left4Dead sort of deals with this, not by talking to agents that are still alive, but by deciding when to introduce new agents, but the Halo 3 approach to me seemed very elegant. It was higher-level AI than the article was talking about, but in effect it was a similar setup: AI achieves something, and says, "What's next?" Since the AI manager would know the state of the other enemies in its unit, it could decide that you might as well not start firing at the player since the two others were doing that.

    Maybe some other game vets could clue me in, but I haven't seen too many games like that where a module is advising the AI based on balancing attack/protect/advance ratios during gameplay.

    /framework/tools programmer
    //not AI programmer
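    A rough Python sketch of that kind of top-level manager (a generic illustration, not the actual Halo 3 or Left 4 Dead system): agents ask "what's next?" and the coordinator hands out whichever goal the rest of the squad is currently covering least.

    # Sketch only: a coordinator that answers "what's next?" requests from agents
    # while taking into account what the rest of the squad is already doing.
    from collections import Counter

    GOALS = ["suppress_player", "flank_left", "guard_exit"]   # invented goal set

    class SquadCoordinator:
        def __init__(self):
            self.assignments = {}                    # agent_id -> current goal

        def whats_next(self, agent_id):
            """Hand out the goal currently least covered by the squad."""
            counts = Counter(self.assignments.values())
            goal = min(GOALS, key=lambda g: counts[g])
            self.assignments[agent_id] = goal
            return goal

        def goal_finished(self, agent_id):
            self.assignments.pop(agent_id, None)
            return self.whats_next(agent_id)

    squad = SquadCoordinator()
    print([squad.whats_next(i) for i in range(4)])   # goals get spread across the squad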

    • by hax0r_this ( 1073148 ) on Thursday December 04, 2008 @02:56AM (#25985735)
      Actually, what the article is talking about is typically applied in systems of multiple levels of AI. Consider an ideal squad based shooter:

      1. Command AI issues orders to squads (do/accomplish something) based on a very simple model of the battlefield.
      2. Squad AI issues orders to individual units (go somewhere) based on a more detailed model of the immediate area.
      3. "Conscious" individual AI computes a good way of following orders from the squad AI based on a yet more detailed model.
      4. "Subconscious" individual AI makes moment-to-moment decisions, for example about how to avoid minor obstacles that the "conscious" AI ignored.

      Of course that is very idealized.
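      For illustration, a very small Python sketch of that idealized hierarchy, with each tier working from a progressively more detailed (and entirely made-up) model of the world.

      # Sketch only: orders flow down from coarse models to detailed ones.
      class CommandAI:
          def orders(self, battlefield_summary):
              # Very coarse model: just which sector looks weakest.
              return {"squad_1": ("capture", battlefield_summary["weakest_sector"])}

      class SquadAI:
          def orders(self, squad_order, local_map):
              verb, sector = squad_order
              # More detailed model: a waypoint inside the target sector for each unit.
              waypoints = local_map[sector]
              return {unit: ("move_to", waypoints[i % len(waypoints)])
                      for i, unit in enumerate(["u1", "u2", "u3"])}

      class UnitAI:
          def act(self, unit_order, nearby_obstacles):
              verb, waypoint = unit_order
              # "Conscious" level plans the route; "subconscious" steering would then
              # nudge around obstacles frame by frame.
              return f"pathfind to {waypoint}, avoiding {len(nearby_obstacles)} obstacles"

      squad_orders = CommandAI().orders({"weakest_sector": "east"})
      unit_orders = SquadAI().orders(squad_orders["squad_1"], {"east": [(10, 4), (11, 6), (12, 5)]})
      print(UnitAI().act(unit_orders["u1"], nearby_obstacles=["crate", "wall"]))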
      • by Nanidin ( 729400 )

        This sounds like a great approach. If you read my post below, many of the teams start at what you have called the Conscious Individual. If they finish that with enough time, I've seen them move on to the Squad and Command levels also. The main difference though is that since our game state has traditionally been pretty simple, there hasn't been a need to compose a simpler model of the state for the upper levels.

        The multi-tiered AI approach does seem very useful and intuitive though.

        • Re: (Score:3, Insightful)

          by tucuxi ( 1146347 )
          Additionally, a multilevel AI helps to even out the use of CPU. You don't want to take high-level command decisions every quarter second, because that would not leave enough time for any of the lower tiers to do anything useful, and would require too much CPU. With a multi-tiered AI, you can dedicate small and frequent AI slots to low-level decisions, and hold high-level decisions for less frequent and more CPU-intensive thinking.
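          A minimal sketch of that budgeting: cheap low-level steering runs every tick, squad updates every 10 ticks, and expensive command-level replanning only every 60 ticks. The intervals and stub classes are invented.

          # Sketch only: spreading AI work across frames by update frequency.
          class CommandAI:
              def replan(self): pass          # rare, CPU-heavy, high-level decisions

          class SquadAI:
              def update_orders(self): pass   # occasional mid-level decisions

          class Agent:
              def steer(self): pass           # cheap per-frame obstacle avoidance

          def run_ai_frame(tick, agents, squad_ai, command_ai):
              if tick % 60 == 0:
                  command_ai.replan()
              if tick % 10 == 0:
                  squad_ai.update_orders()
              for agent in agents:
                  agent.steer()

          agents, squad_ai, command_ai = [Agent() for _ in range(8)], SquadAI(), CommandAI()
          for tick in range(120):
              run_ai_frame(tick, agents, squad_ai, command_ai)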
    • interesting idea.

      let me make sure i understand it correctly:

      an RTS AI took a FPS AI out for a couple of drinks, and the resulting offspring would be an awesome AI that can work in teams to defeat all humans.

      i'm sure someone can throw 2 AIs together relatively quickly, release a game and see what happens.
      maybe a command and conquer: renegade 2?

      • Except an RTS/FPS AI merge wouldn't accomplish certain aspects of strategy.

        An RTS AI decides 'what' units to make, but does nothing about what to do with a current group of units. Nor does an RTS decide what to do in battle: have x units do this, while y units flank; if unit z is below condition w, retreat/heal other units. Instead it simply says: I have x units. Send them all in to attack... No attack strategies, just build strategies. Hardly useful in an FPS when you are using 'pre-built' units to attac
    • by rdnetto ( 955205 )

      Wouldn't that behavior be unrealistic though, since the NPCs would 'know' things that they shouldn't? e.g. that they have a bunch of allies hiding behind that wall. If each NPC acted individually, perhaps they could use swarm-based behavior when they teamed up.

      • Re: (Score:3, Interesting)

        by tucuxi ( 1146347 )

        Depends on the setting. It makes perfect sense within a networked-battlefield scenario: what one unit sees, it will try to convey to others, and commanders can take decisions based on everything that is seen by any of their units.

        In a medieval setting, they would have to shout to each other (and be within hearing distance) to request assistance. Or wave colored flags, or send messenger pigeons.

        The radio squawking in Half-Life added an element of realism to this - you could actually "hear" what the bad guy

    • by msbmsb ( 871828 )
      As an NLP person, what I would love to see/develop is AI that "listens" to the players. Take for example a game like WoW or Halo where the players are chatting publicly (text is easier than speech) with each other about their next motion or attack or defense moves. If the AI was within "hearing" range, it could pick up on that public chatter and, if possible, decide to counter in some way or ignore it as if it were diversionary or irrelevant (or simply misunderstood).

      It would be an interesting experiment at
      • I think the real problem with that isn't a technical one (although it would be really really really cool!). If you make it so players have to shut up in a dungeon or whatever, they'll just use an alternate means to communicate (i.e. telephone or VoIP). This will destroy immersion and annoy players. The only way this could work is really in a single player game where you have to talk to NPCs in English....

        • Re: (Score:2, Interesting)

          by msbmsb ( 871828 )
          Players would have the option for easier communication at a risk, or would have to find some other way to communicate, which would carry its own advantages and disadvantages. I think that's a reasonable disruption in modern gameplay, and not one I'm so sure would annoy players so much.

          Plus, methods like true, range-based "whispering" could be useful, and would also carry with it some interesting risk (i.e., the intended person wasn't close enough to hear). The fact that a particular AI might only understa
    • by mcvos ( 645701 )

      Article is pretty bang on. Adaptive AI is tough to do, as is balancing being a tunable-level of smart and being beatable.

      Being beatable? I keep forgetting that in FPS games it's so easy to make near unbeatable bots. I'm a strategy gamer, and I'd love it if someone would make an unbeatable AI. Or at least a halfway decent one. Strategy is the area where some real advances in game AI are still needed.

      And then there's CRPGs of course, but I suspect that's another order of magnitude harder.

  • by Anonymous Coward

    If modern games are an indication of AI, then they're obviously smarter than we can hope.

    Just today, the AI in Far Cry 2 spotted me at long range after one shot with a sniper rifle and proceeded directly to me, despite the heavy foliage I was using for cover.

    Color me impressed. Even Sherlock Holmes would be proud of how quickly they deduced where I was.

    • Re: (Score:3, Insightful)

      by JoshJ ( 1009085 )

      The problem is that internally the game "knows" where you are; after all, it has to track your location.

      Ever play against someone in Counter-Strike who was hacking? Wallhacks, aimbots, the whole nine yards? There's really nothing at all stopping the developers from doing that; and in fact some older games basically did do that, just with arbitrary delays before the AI snapped on you, deliberate fudge factors on accuracy, whatever it took to make the difficulty level sane for a human player.

      It's possible to
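      For illustration, a small Python sketch of those fudge factors: the engine knows exactly where the player is, so fairness is faked with a reaction delay before the bot may act on a sighting and random error added to otherwise-perfect aim. The constants are invented.

      # Sketch only: reaction delay plus aim error on top of "perfect" engine knowledge.
      import random

      REACTION_DELAY = 0.35     # seconds before the bot may respond to a new sighting
      AIM_ERROR_STDDEV = 3.0    # degrees of Gaussian error on the perfect firing angle

      class FudgedBot:
          def __init__(self):
              self.first_seen_at = None

          def update(self, now, sees_player, true_angle_to_player):
              """Returns the angle to fire at, or None if the bot holds fire."""
              if not sees_player:
                  self.first_seen_at = None
                  return None
              if self.first_seen_at is None:
                  self.first_seen_at = now
              if now - self.first_seen_at < REACTION_DELAY:
                  return None                       # still "noticing" the player
              return true_angle_to_player + random.gauss(0, AIM_ERROR_STDDEV)

      bot = FudgedBot()
      for t in (0.0, 0.2, 0.4, 0.6):
          print(t, bot.update(now=t, sees_player=True, true_angle_to_player=90.0))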

      • by Eivind ( 15695 ) <eivindorama@gmail.com> on Thursday December 04, 2008 @03:09AM (#25985801) Homepage

        That is true. The computer is simply very different, so modeling our strengths is just as hard as our weaknesses.

        It's trivial for an AI-controlled enemy to get headshots all the time. It's trivial for the AI to have complete knowledge of the battlefield and state of all items and characters on it. Humans can't do that.

        It's a lot -less- than trivial for an AI to notice patterns in the enemy and exploit them. Thus the same approach tends to work 100 times against the same AI. It can't learn from its mistakes.

        • by bishiraver ( 707931 ) on Thursday December 04, 2008 @03:37AM (#25985947) Homepage

          It could if the AI decision tree were a genetic algorithm.... each entity gets its own decision tree, and the ones that survive mate. :P

          Of course, that only really makes sense in an MMO 'verse.

          You could do some AI juggling, so that after every map (or every time the AI loses), it runs its algorithms against all previous scenarios until it wins (or at least, gets better at not losing).

          But then you end up with an AI that wins all the time, and a huge amount of CPU cycles.

          • by stjobe ( 78285 ) on Thursday December 04, 2008 @04:31AM (#25986203) Homepage

            But then you end up with an AI that wins all the time

            And we don't want that. We want an AI that wins some of the time, and that is beatable. That is, it should present us with a challenge, but the challenge can't be too great because then the game will be no fun.

            So, we only want a smart-enough AI, not a god AI.

            • Just adjust your fitness function accordingly :P

            • Re: (Score:3, Funny)

              by VeNoM0619 ( 1058216 )

              So, we only want a smart-enough AI, not a god AI.

              So the problem becomes: we don't want it to beat us all the time, but if we make it smart, it will beat us, but we want it to be smart!

              So the solution is: make it capable of beating us all the time. Then flip a coin to determine if it will choose the winning strategy, or sit like a lame duck so it won't win all the time.

            • by mcvos ( 645701 )

              But then you end up with an AI that wins all the time

              And we don't want that. We want an AI that wins some of the time, and that is beatable. That is, it should present us with a challenge, but the challenge can't be too great because then the game will be no fun.

              So, we only want a smart-enough AI, not a god AI.

              I'd love to see god AI, but then, I'm a strategy gamer. AI is really bad at strategy.

          • by tucuxi ( 1146347 )

            You are overly optimistic regarding GAs. I very much doubt that, no matter how many AI cycles you throw at it, you can evolve an AI team capable of winning against a good (as in top-third of the table) human team at, say, Counter-Strike.

            Gotcha: the AIs would not be allowed to cheat, and would have the exact same information at their disposal as the human team: a lot of visual input (but not the actual noise-free game geometry) and the same set of commands as human players.

            If you can evolve that kind of AI, the DARPA Grand Chal

            • Re: (Score:2, Interesting)

              Of course you could (assuming enough processing power): just have the whole AI team constantly spinning 360 degrees and performing pattern matching on the visual input; as soon as a potential match is made, fire at it (to stop friendly fire, have the whole AI team choose the 1337 outfit and match against colour). Also, the DARPA challenge has already been beaten using pattern matching and learning algorithms (http://www.darpa.mil/grandchallenge/index.asp).
            • Re: (Score:3, Interesting)

              by thepotoo ( 829391 )

              I think you're under-optimistic regarding GAs.

              They can, with training (just against themselves!) beat human opponents at simple turn based games (citation [nih.gov]). That's the same level playing field you describe.

              It's been 10 years of GA optimization and theory, and 10 years of Moore's law since then. Computers have much better reflexes than humans, and you're telling me that a GA couldn't beat a master at CS?

              Tell you what: give me $50,000 in funding, six months to train the AI to general FPS rules (headshot, mo

              • "The point is, games have rules. Once you've learned the rules, you're unstoppable."

                There is an enormous difference though: the computer doesn't have any of the deficiencies of the human mind to get in the way. Most human beings 'wing it'; most thought is 98% unconscious, so most of the time what you are testing is how good someone's unconscious processing is.

                You'll probably find the following interesting:

                (Quick version)
                http://i35.tinypic.com/10fruxh.jpg [tinypic.com]

                (Longer version)
                http://www.linktv.org/video/2142 [linktv.org]

                To

              • by tucuxi ( 1146347 )

                games have rules. Once you've learned the rules, you're unstoppable

                Ah, but I want both to play the same game, while you are suggesting giving the machine a special representation with a high-level vocabulary. The interface I am proposing is the same one you are using: images and sounds come out and commands are executed. Not "high-level game data" - only images and sounds.

                You talk about teaching the AI how to headshot. I am talking about the difficulty of processing a 2D image and interpreting it as a PoV rendering of an (unknown) 3D model, and locating a set of pixels t

                • Re: (Score:3, Informative)

                  by thepotoo ( 829391 )

                  I am not talking about giving the AI any more information than the user has, nor any special controls/interface. When I say games have rules, I mean in regards to movement speeds, damage, and the like. A machine can process these things and respond to a changing environment quicker than a human.

                  3D image processing in a game is finite (especially using low-res models like in CS), and there are only 4 different heads to recognize for each side.

                  In a small data set like this, NNs can and will quickly outcompe

                  • by tucuxi ( 1146347 )

                    Machines "can" do all sorts of wonderful things, but so far nobody has been able to get them to do them. Please cite examples or research that demonstrates accurate real-time 3d modeling from a synthetic 2d video, or stop making things up. Yes, I am sure it can be done. No, I very much doubt anybody can do it right now.

                    You say that synthetic video is finite. So is 2^256, or the number of grains of sand on the beach - what is your point? Are you suggesting that finite is always manageable? Ok, maybe you ca

                    • I cannot find any citations for 3D modeling from 2D video, but I didn't look very hard. You're probably right, it's beyond current models.

                      I think you're missing what I'm trying to say here, though. A sufficiently advanced neural network may be able to play CS without the need for actual 3D processing.

                      I'm pretty sure that I'm right, but I really can't prove it without more time and a supercomputer to run it on. I'm currently writing proposal for a grant so I can model the selection pressures leading to th

      • I remember some bots for Quake and bots for bzflag which connect as regular users.

        That is a very good way to start an AI.
        The fudge factors can be toned down a lot because it cheats less.

      • So very true. A huge improvement to the AI would be visibility determination. I.e., the AI might be able to tell the shot came from the southwest, but it can't see you exactly because of all the foliage - to root out the threat, it sends a squad to comb the area.

        Unfortunately, the squad pulls out a giant comb and starts running it through the foliage..

        • Re: (Score:2, Informative)

          by kungtotte ( 867910 )

          Operation Flashpoint and its sort-of-sequel Armed Assault do something along these lines. The AI has a field of view roughly corresponding to what most players have in a first person shooter (~90 degrees), and the AI can't see you if you're outside this field, but it can hear you if you do something noisy.

          Time of day, weather (rain/fog), foliage, obstacles, stance, movement speed and inherent camouflage of the unit will affect visibility and 'audibility'. Each weapon has two properties describing how visi
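          A rough Python sketch along those lines (illustrative only, not the actual Operation Flashpoint / Armed Assault model): a field-of-view test plus environmental penalties that shrink the effective spotting range, and a separate noise-based hearing check.

          # Sketch only: crude visibility and audibility checks with invented weights.
          import math

          def can_see(ai_pos, ai_facing_deg, target_pos, fov_deg=90,
                      base_range=300.0, fog=0.0, foliage=0.0, target_crouched=False):
              dx, dy = target_pos[0] - ai_pos[0], target_pos[1] - ai_pos[1]
              distance = math.hypot(dx, dy)
              # Field-of-view test: angular offset between facing and direction to target.
              bearing = abs((math.degrees(math.atan2(dy, dx)) - ai_facing_deg + 180) % 360 - 180)
              if bearing > fov_deg / 2:
                  return False
              # Environmental penalties shrink the effective spotting range.
              effective_range = base_range * (1 - fog) * (1 - foliage)
              if target_crouched:
                  effective_range *= 0.6
              return distance <= effective_range

          def can_hear(ai_pos, target_pos, noise_range):
              # noise_range: roughly how far the sound of the action carries.
              return math.hypot(target_pos[0] - ai_pos[0], target_pos[1] - ai_pos[1]) <= noise_range

          print(can_see((0, 0), 0, (50, 10), fog=0.2, foliage=0.5))  # in view, foliage hurts range
          print(can_hear((0, 0), (120, 0), noise_range=150))         # gunshot heard at 120 m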


      • I'm pretty sure he was joking. Welcome to Slashdot, enjoy your stay.
    • by acidrainx ( 806006 ) on Thursday December 04, 2008 @03:04AM (#25985767) Homepage

      I have to say that the AI in Far Cry 2 is definitely one of the worst of current generation video games. I couldn't play that game for more than a couple days before getting utterly bored and frustrated at the idiotic AI.

      Enemy Territory: Quake Wars [enemyterritory.com], on the other hand, has some of the best AI I've seen AND it's a multiplayer game. The bots' ability to attack and defend objectives while using infantry and vehicle skills against the random actions of human players is incredible.

      • Re: (Score:1, Funny)

        by Anonymous Coward

        Every one of my gaming friends agrees that QW has the best AI we've ever seen. I've spent some games just following AI snipers to see where the best spots are.

        Sometimes it is hard to tell the difference between the bots and real players. It's only the absence of bad squeaky singing, incessant excuses about lag, and numerous opinions about my mother's sexual preferences that gives the game away.

      • Did the AI improve in later patches? Because I played QW when it first came out, against my brother (LAN) and the bots were as dumb as they come. Constantly driving vehicles into walls, running in front of me while I was shooting to try to give me a med pack, standing up out of cover to reload.

        It was a game-ruining experience, and if it's actually been improved since then, it would probably make QW worth playing.

        • Yeah they made some major improvements in later patches. Although I never saw them drive into walls. It's obviously more fun to play against human players and I would suggest that over playing against the bots, but the AI is still some of the best I've seen.
    • Man that is nothing.

      Game: Far Cry 2

      Time of day: 3 AM

      Weather: Thunderstorm.

      Distance: 200-300 yards

      Location: Jungle with heavy undergrowth.

      Position: crouching behind a tree, not moving.

      Result: Spotted and snipered.

    • by Zaatxe ( 939368 )
      You should have been modded insightful, not funny. I saw the same happening in Left 4 Dead: my AI companion survivors could "see" the zombies through the heavy foliage, but I could not.
  • Game AI For Fun (Score:4, Interesting)

    by Nanidin ( 729400 ) on Thursday December 04, 2008 @02:52AM (#25985721)
    The ACM Chapter that I preside over at Missouri S&T (formerly the University of Missouri - Rolla) has been writing simple RTS games with AI APIs for the last two semesters. We're currently working on a third game to add to our repertoire. We host a tournament at the end of each semester and invite anyone that will come - the main site is at http://acm.mst.edu/~mstai [mst.edu]. The API is easy enough to get a handle on that a C++ novice could pick it up and do something with it within a few hours. Competitors are given 24 hours to write their AI, then we pit them against each other.

    Generally speaking, for the RTS-style games we have written, AIs that act on an individual-unit level only perform the best (both in execution time and scoring). This is probably due to the 24-hour time limit imposed, but it does show that even simple/greedy algorithms can perform well in game AI situations. I believe the winning team of our first tournament had an algorithm that went like this:

        for each unit: doBestActionForUnit(unit)
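    For illustration, a Python sketch of that winning per-unit greedy strategy, written against a made-up interface rather than the actual acm.mst.edu AI API: each unit independently scores its available actions and executes the best one.

    # Sketch only: "for each unit: doBestActionForUnit(unit)" with greedy scoring.
    from dataclasses import dataclass

    @dataclass
    class Action:
        name: str
        expected_damage: float
        kills_target: bool
        distance_after: float

    @dataclass
    class Unit:
        name: str
        actions: list

        def execute(self, action):
            print(f"{self.name}: {action.name}")

    def score(action):
        """Greedy scoring: prefer kills, then raw damage, then closing distance."""
        if action.kills_target:
            return 100.0
        return action.expected_damage - 0.1 * action.distance_after

    def take_turn(units):
        for unit in units:                          # doBestActionForUnit, per unit
            unit.execute(max(unit.actions, key=score))

    take_turn([
        Unit("archer", [Action("shoot weak enemy", 5, True, 8), Action("retreat", 0, False, 12)]),
        Unit("knight", [Action("charge", 7, False, 1), Action("hold", 0, False, 6)]),
    ])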
    • I have recently started playing Balanced Annihilation, based on the Spring game engine. http://spring.clan-sy.com/ [clan-sy.com] The game has mainly been oriented to online play, but at least 3 good working AIs have been built by the community, and I have enjoyed pitting them against each other and watching how they behave. http://spring.clan-sy.com/phpbb/viewforum.php?f=15 [clan-sy.com] You can almost anthropomorphise them to have different personalities. Being that it is all free and open source, if you are interested in RTS game AI, you may w
  • So what they're saying is simplify and use heuristics? Hasn't this been done for years now? On some level every single game out there does it, because you can't model the real world 100% and the state you're considering is therefore simplified. What they're saying is simplify further by considering a subset or creating a model of the model that makes up the full game.

    In the case of simplifying further, isn't this exactly how a chess engine works?

    In the case of making a simplified model, I'd be surprised if

  • If you're deciding between "intelligent" and "beatable" then you're not talking about AI. An average person far outclasses a computer, in any sufficiently complex game, in the area of general intelligence. Knowing the physics equations for a certain hit, being able to throw a hundred commands per second at your unit, having 100 percent perfect aim - these things don't involve intelligence. A game that can, without cheating, beat a person on equal footing will be intelligent. I don't think there are any.
  • Let's hope they don't take their research from game AI too literally. Most game AI I've seen is programmed to hunt and kill the player.

  • Easy solution (Score:1, Offtopic)

    by OpenSourced ( 323149 )

    Use silver instead of copper. Silver is an excellent conductor, better than copper in fact. That will surely baffle all those copper thieves.

  • Actually the EVE-Online community, including devs are really gonna try to make AI happen in NPC encounters: http://myeve.eve-online.com/ingameboard.asp?a=topic&threadID=917074 [eve-online.com]

    • God I hope so. Right now EVE's AI behavior is: If player is within a visibility range, go to an optimum range of player and orbit; if player is in locking range, lock on player; if player is locked, fire weapons on player. I love how you can just destroy a whole squadron of enemies while the other squadron nearby just sits there and acts like nothing happened. God Eve is boring.

  • AI in games is approached the wrong way: instead of finding all the game states and choosing the best path, a far better approach is to apply statistics and do pattern matching. In fact, brains work with the latter method, not the former.

  • The problem (Score:3, Insightful)

    by Anonymous Coward on Thursday December 04, 2008 @10:15AM (#25988287)

    The problem with game AI isn't that we can't make better AI, it is that we don't make it a priority. Today's machines are powerful enough to give us good visuals, but not powerful enough (or flush enough with memory) to really devote resources to much beyond that. In Mass Effect I want to say we devoted something like 75% of the memory budget to textures, and we still had to downgrade the textures before the final ship. I don't know what the final stats were, but I wouldn't be surprised if about 90% of the budget was allocated to textures and polygons.

    That's not to say that if you were to quadruple the memory on today's machines, AI would suddenly improve drastically, though. Many teams don't have the resources to devote to programming, so they need to take whatever is in the package. There's room for some entrepreneurial spirits to create snap-in AI programs, like what Havok does for physics. Get started now, and you may have a refined product ready for the next generation of consoles.

    As an animator, I just want to point out that most anything smart you see in a game is a scripted sequence. An AI marine flipping a table and taking cover is mostly animation work. The only real code there is a simple set of conditions that determine whether the animation should be played, and then some state changes to coincide with the animation. The measure of an AI isn't what kind of cool things it can do, because that's animator work; it's how quickly it figures out what it should do, and how well it figures out the quickest way to do it. When you see AI running out in the open, taking the long route to cover, getting hung up on corners or doing circles, that's bad AI.

    To give credit (and blame) where credit (and blame) is due, designers choose what kinds of behavior are possible, so they too are highly responsible for the final appearance of the AI. If a designer neglects a cover system, then even an intelligent AI can look stupid, with enemies just standing in harm's way. If a designer includes a visceral chainsaw attack, even a poor AI that gets a kill can still seem impressive.

    • I blame the console for this, when I saw the original specs for the 360 and the ps3 I was surprised that they both only had 1 gig of ram.

      The x box is shared, and the PS3 is 512 vid 512 system. IIRC

      Most games these days are built to the least common denominator.
      • by tlhIngan ( 30335 )

        I blame the console for this, when I saw the original specs for the 360 and the ps3 I was surprised that they both only had 1 gig of ram.

        The x box is shared, and the PS3 is 512 vid 512 system. IIRC

        Most games these days are built to the least common denominator

        Actually, both are only 512MB. Xbox360 is shared (512 for 3 PowerPCs plus GPU), PS3 has 256MB main system RAM and 256MB for GPU.

  • God are you there? (Score:3, Insightful)

    by kenp2002 ( 545495 ) on Thursday December 04, 2008 @11:47AM (#25989387) Homepage Journal

    It never ceases to amaze me that while science is fiercely opposed to God or Theology infiltrating science as a process, in AI development they almost "assume" that intelligence was crafted by a God.

    COMPLEX BEHAVIOR IS EMERGENT, NOT DESIGNED.

    In AI development they seem to assume that the proper way to develop AI is to be a God and design a system or method of AI that accomplishes a specific set of goals or objectives.

    Day after day evolution is a truth in science, and that's fine; but when it comes to AI development I swear they have never heard of evolution.

    Your behavior is a result from a wide and largely independent array of inputs.

    Your eyes don't make any decisions and aren't designed for decision making; they're input.
    Your feet, lungs, and regions of your brain operate as a COMPLEX INTEGRATED SYSTEM OF INDEPENDENT FACULTIES.

    This is a much larger problem than the specifics of the task at hand. We are talking about an organic development model for AI rather than a deterministic method. That is the largest flaw of Computer Science. Computers are largely deterministic devices; intelligence isn't deterministic. A deterministic method of AI development is doomed.

    You have to evolve the AI. The AI needs to know the limitations of its organism for proper development.

    Light, Dark
    Up, Down
    Here, There
    Friend, Foe
    Move from A to B ...
    Find A Weapon
    Assess Threat
    Attack or Flee
    etc...

    The very process of evolving the AI API in an organic model gives the model itself the ability to ignore irrelevant data, by feeding abstract and generalized data up the cognitive food chain with irrelevant data dying off early in the process. If the general data is insufficient, then the AI simply asks its faculties for more specific input.

    OUT - I WANT TO READ HAMLET
    IN - BOOK SHELF NEAR, OBJECTS FOUND ON BOOKSHELF, ASSUME RECTANGLE OBJECTS ARE BOOKS
    IN - BOOKS OVER THERE ON THE BOOK SHELF (RECTANGLE OBJECTS CONFIRMED AS BOOK)
    IN - BOOK ON TOP SHELF IS ABEL (Binary Search of the book shelf)
    IN - BOOK ON BOTTOM SHELF IS ZEUS
    OUT - LOOK IN THE MIDDLE OF THE BOOK SHELF
    IN - FIRST BOOK IS HOUSE OF M
    OUT - GO BACK A FEW BOOKS TO THE LEFT
    IN - FOUND BOOK HAMLET
    OUT - GET BOOK
    IN - TOO FAR AWAY
    OUT - MOVE CLOSER
    IN - I AM NEAR THE BOOK
    OUT - GRAB BOOK
    IN - LEFT ARM WON'T MOVE
    OUT - USE RIGHT ARM
    IN - I HAVE THE BOOK IN HAND

    Additionally, AI evolves with the organism itself (physical characteristics influence mental development).

    The reality of an AI is they need to be compiled or GROWN to fit the organism (say a terrorist or counter-terrorist in Counter-Strike)

    BASE FACULTIES + ORGANISM DEFINITION + CIRCUMSTANTIAL OVERRIDES + GAME PLAY OVERRIDES = Source Code for AI

    An AI compiler then builds out organic, almost B-tree-like info passing/storing pipelines based on the limitations.

    A creature with no eyes would never have to process visual data. In that case distant objects are irrelevant except for memory storage.

    My Prediction: AI isn't something that is developed, it's something that is Grown.

    You define it then compile it.
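    For what it's worth, a very small Python sketch of the faculty-style organization argued for above: independent input faculties report abstracted summaries upward, and the decision layer asks a faculty for more specific detail only when the summary isn't enough. Everything here is invented for the example.

    # Sketch only: abstract reports flow up, detail is requested only on demand.
    class VisionFaculty:
        def __init__(self, scene):
            self.scene = scene                         # raw input this faculty owns

        def summary(self):
            # Abstract, generalized report: irrelevant detail dies off here.
            return {"bookshelf_visible": "bookshelf" in self.scene}

        def detail(self, query):
            # Only answered when the decision layer explicitly asks for it.
            return self.scene.get(query)

    class DecisionLayer:
        def __init__(self, faculties):
            self.faculties = faculties

        def find_book(self, title):
            vision = self.faculties["vision"]
            if not vision.summary()["bookshelf_visible"]:
                return "wander and look for a bookshelf"
            books = vision.detail("bookshelf")         # ask for more specific input
            return f"grab '{title}'" if title in books else f"'{title}' not on the shelf"

    scene = {"bookshelf": ["Abel", "Hamlet", "Zeus"], "table": ["lamp"]}
    agent = DecisionLayer({"vision": VisionFaculty(scene)})
    print(agent.find_book("Hamlet"))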

  • Use evolution (Score:3, Informative)

    by greg_barton ( 5551 ) <greg_barton@yaho ... minus herbivore> on Thursday December 04, 2008 @12:17PM (#25989877) Homepage Journal

    Here are two great examples of using evolving neural networks to drive game AI:

    Nero:
    http://nerogame.org/ [nerogame.org]

    Galactic Arms Race
    http://gar.eecs.ucf.edu/ [ucf.edu]

    They're both the brainchild of Kenneth Stanley.
    His current research can be seen here:
    http://eplex.cs.ucf.edu/ [ucf.edu]

  • Instead they use R.C., Real Cheating.
  • I have pragmatically programmed predator behavior, based on an instinctive behavior matrix which considered creature energy level, anger, hunger, time of day, proximity of food, proximity of other predators, success of prior encounters, etc. Predators could also sense the environment over a limited range. This produced a composite behavior probability which translated into the energy put into prey acquisition and tracking. Available pathways to food (links in the game network) also applied difficulty leve
