
State of Computer Game AI 144

irix writes "Interesting point and counter-point about how far game AI has come along." Or not, as the case may be. There are times when I'm really impressed with how well computer games can "reason" things out, like Dungeon Keeper, and others, like the original Starcraft, where I just shake my head. My biggest complaint is still with how "not-smart" games are about moving characters along the most efficient path on the screen - if they could just get my movement right, I'd be so happy I'd scream.
This discussion has been archived. No new comments can be posted.

  • by Anonymous Coward
    AOE I??? The same game that allowed the "Great Peasant Massacre" tactic as per _Warcraft II_? Neither game's AI had any *clue* about not sending peons/peasants/workers undefended way across the map *through a known free-fire zone* (e.g. sending workers across a narrow isthmus... flanked on both sides by multiple warships... killing every peasant that tried to cross... and doing it over and over again...).

    Civ's AI can be much more powerful for one strong reason in particular: it's turn-based. It *has* to be stronger, since it can't rely on a clickfest interface trivially making things arbitrarily tough on the human through the sheer (lack of) virtue of the interface.
  • ...in time-compressed situations.

    In a military setting, one may often be placed in a situation that requires one to act decisively and quickly, without having a whole lot of time to think beforehand. To help counter this, the Powers That Be have, over the years, developed a number of heuristic, rules-based "drills".

    You practice these drills over and over and over and over and over - and over and over and over - until they become second nature. They become (for all intents and purposes) automatic responses.

    Now some of these drills are very simple - what to do when your machine gun jams, for instance - but some are very complex, and surprisingly adaptable.

    Assault drills, for example, don't have to be executed "by the book" perfectly, as long as a few simple tenets are adhered to. (Keep one leg on the ground; provide cover fire; when in doubt, attack; always do _something_, as doing nothing will always get you killed.)

    But I've yet to see a single game of any type try and apply rules like this - not even turn-based wargames like Steel Panthers 1-3 where it would be entirely appropriate. If I play these games using the tactics I was taught, I can _always_ beat the computer, normally very badly.

    Perhaps it's not that rules-based AI has reached its limits, it's that the people coding the rules aren't asking the right people to derive the rules.

    If anyone is coding games that require small-unit or armoured tactics, and they want some help with the rules, let me know. :)

    DG
  • by Anonymous Coward
    (Following up my own post - I know, how gauche :)

    A couple of years ago I was about to go on an extended exercise that was to lead into (maybe) some real action, and I wanted to sharpen my edge before I started fighting for real.

    So I started playing paintball.

    I went in thinking that my superior soldiering skills were bound to make me well-nigh invincible compared to untrained civilians, but that was NOT the case - the "pro" players (who knew every inch of ground, all the obstacles, etc.) made quick work of me.

    But then I hooked up with a fellow soldier, and we started working together using the fire team tactics we were taught.

    It's very simple: if either one of us saw anyone - _anyone_ - come into range, we'd both fire at him to get him to take cover. Then one man keeps firing (to keep his head down) while the other bounds forward a few steps. He starts firing, the other man leapfrogs, and you keep moving like this until one of you hits the target.

    All this happens _very_ quickly - speed and violence is the key.

    Once we started doing this, we totally cleaned house. We could attack groups much larger than ourselves and "kill" them all in a matter of seconds. Even the seasoned Pros (and there is a pro paintball circuit) couldn't stop us.

    I was _very_ surprised at how well it worked. I had always considered frontal infantry assaults to be organized suicide, but this proved me dead wrong.

    The rules involved with this tactic are very simple, and should be relatively easy to program into a Quake-like game. I wonder why it hasn't happened yet. Oh iD-boys, are you out there?

    DG
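    The drill described above is simple enough to state as code. Below is a minimal, hypothetical sketch (the function name, distances, and bound length are all invented for illustration) of the two-man bound-and-cover loop:

```python
# Hypothetical sketch of the two-man "fire and movement" drill: one man
# suppresses while the other bounds forward, then roles swap each bound.
# Positions are distances along a line; all numbers are illustrative.

def bounding_overwatch(distance_to_target, bound_length=3):
    """Advance a two-man team toward a target; return positions and a log."""
    positions = {"a": 0, "b": 0}
    mover, coverer = "a", "b"
    log = []
    while max(positions.values()) < distance_to_target:
        # The coverer fires to keep the target's head down...
        log.append(f"{coverer} suppresses from {positions[coverer]}")
        # ...while the mover bounds forward a few steps.
        positions[mover] = min(positions[mover] + bound_length, distance_to_target)
        log.append(f"{mover} bounds to {positions[mover]}")
        # Roles swap each bound, so someone is always firing.
        mover, coverer = coverer, mover
    return positions, log

positions, log = bounding_overwatch(10)
```

    Note there is no cleverness here at all - the "speed and violence" comes purely from the alternation rule.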
  • by Anonymous Coward
    "One foot on the ground" does not literally mean "keep one of your boots in contact with the earth's surface", it means "never advance without an associated stationary base of support"

    In other words, never attack alone. Instead, link up with a partner and take turns advancing, with the non-moving partner covering the points most likely to house someone who could shoot at either of you - and if you've already made contact, "cover" means "shoot at it", even if you don't see anybody there.

    In a Quake scenario, you typically have a room with 2 entrances and one exit - the game advances by moving through the exit. The player pops up in one of the entrances, has a quick look, and then may or may not double around to the other entrance if it provides a better position.

    If this room were guarded by 3 bots, a "one foot on the ground" AI would have one bot immediately lay down fire into the entrance - even if it's not effective, it discourages the player from re-entering (especially if it's a rocket launcher being used for cover fire). Another bot moves to cover the other entrance in case the player doubles around, and the third bot moves through the entrance to chase the player. If it doesn't see the player within a short distance, it starts laying down cover fire at likely hiding spots, and the first bot moves down the hallway and past the third. They keep doing this until either the player is found and killed, or they reach some predefined radius of action.

    The player is now in a hot spot. If he doubles around, there is a bot in position, on guard, waiting for him, and if he stayed put, there are _two_ bots working together to flush him out.

    To be even more realistic, this group of 3 bots now provides the "foot on the ground" for a group of 4 other bots "upstream". The 3 who made contact call for help, and the next 4 come charging into the room and through the exit that the player did NOT take - and they move in twos as well - two watching, two moving, and leap-frogging, working their way towards the path that leads to the first entrance. The two who went out the first exit fall back to their room, but cover the entrance more actively.

    Now the player is REALLY in trouble. The room is secured against him, and there's a pack of co-ordinated killers looking to kill his ass.

    Against AI like this, a single-person first-person shooter player has to either have vastly superior weaponry, or be very smart about how rooms are assaulted. And yet, the AI is all rules-based.

    Incidentally, "one foot on the ground" scales. Individual soldiers use it to move across the battlefield, platoons use it, companies, battalions, regiments - all the way up to armies. One unit provides support while the other unit moves.

    I'll tell you what, bot-boy. I'll put me, my 10 years in the Army, and three of my friends up against you and any 4 of your 3l1t3 c0d3rz friends, and we'll see who's talking nonsense.

    You'd be corpses so fast you wouldn't even have time to call for Mommy.
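    For what it's worth, the room-defense drill described in this comment really is just a handful of if/then rules. A hypothetical sketch (all role and action names are invented for illustration, not from any real engine):

```python
# Hypothetical sketch of the three-bot room defense: every behavior is a
# plain if/then rule keyed on role and situation. All names are invented.

def assign_roles(bots):
    """Three defenders: one suppresses, one guards the flank, one chases."""
    return dict(zip(bots, ("suppress", "guard_flank", "chase")))

def bot_action(role, player_seen=False, radius_reached=False):
    """Pick this tic's action from the bot's role and what it knows."""
    if role == "suppress":
        # Cover fire into the entrance even with no target visible.
        return "fire_at_entrance"
    if role == "guard_flank":
        return "fire" if player_seen else "watch_other_entrance"
    if role == "chase":
        if radius_reached:
            # Hold and suppress, waiting to be leapfrogged.
            return "hold_and_suppress"
        return "fire" if player_seen else "advance"
    return "idle"

roles = assign_roles(["bot1", "bot2", "bot3"])
```

    Nothing in this is beyond a late-'90s shooter's scripting layer; the hard part is authoring the rules, which is exactly the poster's point.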
  • by Anonymous Coward

    The problems with the computer in Starcraft were:

    1. The computer can simultaneously give far more orders than a single mouse (so it can coordinate its units in a way humans can't) and can guide all units most of the way to their destinations.
    2. The resource management of the computer AI (or lack thereof) was a joke. You could always win a scenario by digging in and grinding, provided that you survived the computer's initial onslaught.
    3. The speed at which the computer builds its initial base is faster than any human player can manage, so the computer always has weight of numbers to crush upstart enemies with.
    4. Multiple enemy computer opponents ALWAYS work together no matter how you set up teams (Grrr...)

    Oh, and as far as the crap navigation is concerned, I think the basic unit AI plots a direct line and then does 'follow the left' navigation (though with improvements over the WCII AI).
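    The "follow the left" guess above corresponds to classic left-hand wall following. A toy sketch on a grid of 0 (open) and 1 (blocked) cells - purely illustrative, not Blizzard's actual code:

```python
# Toy left-hand wall follower: the walker prefers turning left, then going
# straight, then right, then reversing. Headings index into DIRS.

DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # up, right, down, left as (dx, dy)

def left_hand_step(grid, pos, heading):
    """One step of left-hand wall following: (pos, heading) -> new state."""
    for turn in (-1, 0, 1, 2):  # left, straight, right, reverse
        h = (heading + turn) % 4
        dx, dy = DIRS[h]
        nx, ny = pos[0] + dx, pos[1] + dy
        if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
            return (nx, ny), h
    return pos, heading  # boxed in on all four sides
```

    Repeating this step hugs obstacles on one side, which matches the familiar sight of units tracing around the same edge of every cliff.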
  • by Anonymous Coward
    A good example in this case would be Go. Go, for those not in the know, is an Eastern strategy game concerned with controlling areas of the board with colored stones - this is a very simplistic explanation for a game that can be excruciatingly complex and simple at the same time.

    But Go has so many options/states that to try to set up a decision tree, similar to what the previous poster described being used for Chess or Checkers, would be completely impossible, as fairly soon into the game the computer would be attempting to calculate several trillion possibilities. I remember reading somewhere that if Deep Blue were programmed to play Go using the same heuristic, it would take a couple of years to make its moves.

    However, there is still interest in making Go programs with an AI. This is an incredibly interesting field, as there are international Go tournaments devoted entirely to AIs, to see which is the best. Yet take the best Go AI in the world and test it against an average Go player: the AI will likely win or play very evenly for the first couple of games. Then the human will see the patterns used by the AI (seeing patterns is very, very useful in Go), start beating the AI soundly, and eventually be able to beat the AI even with very heavy stone handicaps (somewhat like giving someone a piece advantage in chess).

    One of the more interesting methods I've seen to try and build a general Go AI was to create a graph data structure, then use strongly connected components, moving up several areas of abstraction, to create an idea of "areas of influence", then using a plethora of functions (known as "generals") to analyze the situation and suggest the best move.

    Obviously an AI cannot be created that will be able to defeat a human on equal standing, at least I don't see that happening in my lifetime. Computer games will continue to have to rely on creating a situation where the human is at a disadvantage to the AI in everything except for decision-making ability.
  • Well, in Starcraft there is a "Free-for-all" mode where the computers do exactly that; the problem is that the game is too easy on the Free-for-all setting. People do exactly what you described above: grab a couple of bases early in the game, fortify them like crazy (especially if they are humans), and let the computers fight it out. A single person can easily defeat 7 computers if he or she manages to maintain a low profile throughout the game.

    I think we need to wait a generation or two before computer game AIs start becoming really clever. Even longer if game developers focus on adding new units instead of the AI (as they have historically done). Each unit you add adds quite a bit of complexity to the AI.
  • The AI in Starcraft cheats, but it doesn't magically get minerals. Now there are some maps where this is true (usually in the "Scenario" directory), but the computer players in a regular match are limited by the same resources you are.

    Of course the AI is still weak, but at least it doesn't cheat on resources, although it does have full knowledge of the map which is very annoying.
  • I second that! I've seen monsters in Marathon dodge my shots, run away when hurt, or circle around to get me from the rear. And the graphics are some of the best I've seen, very spooky and atmospheric. Only in Marathon do I actually become frightened while playing. As for the story, Marathon still has more story than the rest of the game industry put together. The tru7th [bungie.org] is still out there!

  • The monsters can dodge, hide behind things, lay down covering fire for each other, use the map's layout to surround enemies, and generally disguise the fact that they're controlled by rules. Also it avoids the fallacy this guy pointed out of having a single man defeat thousands of heavily armed monsters: Unreal's scenario only pits you against 1 or 2 enemies at a time, and a large group really will overwhelm a player.

  • Posted by Lord Kano-The Gangster Of Love:

    Isn't that grammar?

    LK
  • Posted by DonR:

    One of my favorite scenes in Half-Life:

    See two soldiers standing around a corner, talking to each other. Equip crossbow.

    *thwack* Bolt in the back of the first one's head.

    Second soldier keeps on walking and talking, as if his buddy were still there.

    ---
    Donald Roeber
  • Posted by DaoniX:

    My friend and I can take on and whup 6 Starcraft Brood War AIs... with an 8-player max, I'd say I'm better than the computer AI.
  • I've been looking at path finding algorithms as part of my first attempt at some real open source code (in this case, using the X license). In my case, I've been trying to write the algorithms as templated C++ code, to allow it to be used for a variety of situations. I'm not a great web guy, though. Is there anyone out there who would be interested in helping me set up a web site about this? Basically it would be an archive of source code and build files, long-winded explanations, and hopefully you could help create a graphic or two (animated GIF maybe) that visually show the different algorithms in action.
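    For readers who haven't met them, the best-known of these pathfinding algorithms is A*. Here is a textbook sketch - in Python rather than templated C++, for brevity - on a grid of 0 (open) and 1 (blocked) cells. This is an illustration of the standard algorithm, not the poster's code:

```python
# Textbook A* on a 4-connected grid, using a Manhattan-distance heuristic
# (admissible when diagonal moves are not allowed).
import heapq

def astar(grid, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    def h(p):  # estimated remaining cost; never overestimates
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start)]      # priority queue of (f-score, cell)
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []                   # walk the parent links back to start
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < len(grid) and 0 <= ny < len(grid[0]) and grid[nx][ny] == 0:
                new_cost = cost[cur] + 1
                if nxt not in cost or new_cost < cost[nxt]:
                    cost[nxt] = new_cost
                    came_from[nxt] = cur
                    heapq.heappush(frontier, (new_cost + h(nxt), nxt))
    return None
```

    A templated C++ version would parameterize the grid, cost, and heuristic types, but the control flow is the same.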
  • In SMAC, I totally dominated a neighboring computer player and he surrendered to me. He never backstabbed me. He would give me units, unasked for, when I went to war with other computer players. It was mind-blowing when it first happened.
  • by Masem ( 1171 ) on Thursday June 24, 1999 @08:04AM (#1834962)
    Those Mac people here probably remember the first Doom-clone for the Mac, called "Marathon," which became a trilogy of games (Marathon 2 was released for Win95 as a sort of experiment, but didn't do so well, partially because it overlapped with Quake's release).

    One of the features that it boasted (and that I believe I experienced) was an adaptable AI: as you continued to play, the behavior of the aliens would vary to match your style. For example, if you liked to hide around a corner, then peek into a room to shoot the aliens, then duck back, you'd find the aliens later in the game would be more aggressive about charging you. If you were more aggressive and charged into a horde, you'd be faced later with more long-range attacks and mobile aliens. (This was better implemented in the final two games of the series.)

    Again, there's still some rule base here for the AI to develop from, but it was a refreshing change from Doom/Quake (and of late, Unreal), where the monster behavior was constant through the game and made the latter parts of the game boring. Half-Life, as mentioned above, still suffers from this somewhat, but it is partially masked by the numerous types of terrain/locale the player experiences before the game is over, which make it hard to tell whether the monsters are behaving the same throughout or responding to changes in the environment.

    (BTW, the folks that made Marathon, Bungie, have continued to pump out games, including the popular Myth (and Myth 2, which reportedly has an excellent AI), and the soon-to-be-delivered Oni, another FPS according to leaked info.)

    Also, another aspect that doesn't seem to have been addressed here is how well bots for Quake or Half-Life or Unreal have been programmed. I know that I've found the learning potential of several Quake bots to be outstanding, although it only lasts for that single DM play. Surprisingly, these are mostly written by third-party players, and not the game companies themselves. Maybe they ought to have a chat and improve the AIs in current games....?

  • When Id Software ported Doom to SGI, they made an option for dual CPU owners to use their second CPU for enhanced AI.

    I would have liked to have seen that, as well as Doom's v1.1 networked support for three views. If you had a three computer network, you could place a monitor to the left and right and get a very wide view.
  • While something like this can't be expected to work in realtime on every home computer, it may be able to be calculated on the developer's machine, and a memory (brain) dump of the resulting, evolved AI could be used. The pre-rendered aspect comes into play here.

    The datafile of the AI that would be passed along to the home user would be unreadable, perhaps, even to the developers.

    rtfm.mit.edu [mit.edu] contains some very good FAQs from Usenet on Genetic AI.

  • The article linked from /. as the counterpoint misses the point of video game AI. While serious AI research includes making robots respond intelligently, most video game AI is the rock and roll of the AI world. Everything needs to be real-time.

    This is where fuzzy logic lends itself best. However, there are some very commendable variants in games that I've seen in my theoretical reverse engineering.

    Most action games use modes for the AI. If an entity hits a wall, he goes into Finding New Direction Mode. If the entity is in the direct line with the player, he will advance forth.

    Every mode repeats throughout the game tics, with exceptions - the rules that make the entity leave its current mode.

    Half-Life provided an interesting variant with the marine human AI. When they switched modes, the entity played an audio cue. "Hit the deck!" meant they were in grenade-lobbing mode. "Establishing recon" meant that they were in walk-around mode.

    Trespasser introduced something that I wanted to implement for a long time: instinctual probabilities. Instead of always switching to certain modes, give the entity a few options to go to at any given time. The probability of each could be pre-defined, or dynamic. For example, the probability of a dinosaur's attacking you could go far down if he had just eaten. The hunger factor would be at 5%, while the thirst factor would be at 70%. He would most likely travel in a fuzzy-direct route to the nearest water supply (that he knows about).

    In 3D shooters, what always impressed me was what enemies did after you drop dead. In Jedi Knight, some of them did a dance. In Half-Life, the humans reported in and walked away. I thought it added a touch of realism when the tense monster started walking around casually. Savage mutilations after death with lots of bellowing are always immersive. :)

    The key to the illusion of making a computer look like it has more cycles is to pre-render as much as possible. This could work with mode probabilities. If a player is exceptionally good at taking out a behemoth tank with mines and finishing it off with an air strike, for example, the AI could opt to go into "defense from behemoth tank with mines and air strike" mode. I have not seen this variant on modes in games yet.

    Game AI has to be designed to live in the same CPU as a graphical rendering engine. For this reason, I don't see AI taking entirely new directions, such as Genetic AI being introduced as a replacement for Fuzzy Logic. Rather, I see game AI as getting some welcome variants to the modes such as I said above.
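    The "instinctual probabilities" idea above amounts to a weighted random draw over candidate modes. A hypothetical sketch (the drive names and weights are invented for illustration):

```python
# Hypothetical sketch of "instinctual probabilities": the entity's next
# mode is drawn at random, weighted by its current drives. Drives could
# be updated dynamically (eating lowers "hunt", etc.).
import random

def pick_mode(drives, rng=random):
    """Choose a mode with probability proportional to its drive weight."""
    modes = list(drives)
    weights = [drives[m] for m in modes]
    return rng.choices(modes, weights=weights, k=1)[0]

# A just-fed dinosaur: hunger nearly off, thirst dominant.
drives = {"hunt": 0.05, "seek_water": 0.70, "wander": 0.25}
mode = pick_mode(drives)
```

    This is cheap enough to run every few game tics alongside a rendering engine, which fits the real-time constraint the comment describes.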
  • Same for you, AC.
  • Yeah, this sounds like my last roommate's list of complaints.

    Funny thing is, I'm about to get a degree, and he's not even close. Guess it's time to stop playing games...
  • In my AI course I created a simulation of autonomous agents on a little lattice. Their goal was to wander around, eat bushes, and avoid predators.

    Each was controlled by a little neural network. They had inputs such as their health, energy, what's in the eight squares around them, etc.

    They would make their decision and do it. Then another function would evaluate what they did and determine if it was "good." For example, if they lost health, that was "bad."

    Then I'd do a little weight update on the ANN to reflect that action. This made for incremental learning.

    For example, if the agent was on food (input) and ate (output), that increased its energy (good) so that was slightly reinforced for next time.

    I never really got to play with it much, so I'm not surprised they really didn't seem to get better over time. I think they died too early to really learn. My simulation had flaws. I know a lot more now and am sure I could pull it off, given time.

    I may use methods like that in my RTS game (see URL).
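    The reward scheme described above boils down to a one-line weight update. A toy sketch (inputs, reward values, and learning rate are all invented for illustration; a real ANN would have hidden layers and a proper backpropagation step):

```python
# Toy version of the poster's scheme: a single linear layer whose weights
# are nudged toward actions judged "good" and away from "bad" ones.

def update_weights(weights, inputs, reward, lr=0.1):
    """Reinforce (reward > 0) or punish (reward < 0) the last action."""
    return [w + lr * reward * x for w, x in zip(weights, inputs)]

# Agent was standing on food (first input on) and ate; energy rose, so
# reward is +1 and the food->eat association is strengthened.
w = update_weights([0.0, 0.5], [1.0, 0.0], reward=1.0)
```

    The incremental nature is the point: every action slightly reshapes the weights, so learning happens during play rather than in a separate training phase.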
  • First of all, AI is such a general term that it's hard to apply it realistically. Anything that helps the machine to appear intelligent, or to learn (not the same thing), can be considered AI.

    In that respect, pathfinding is a form of AI because it does involve choosing an "intelligent" path from one point to another. Yet obviously a computer opponent that can outthink you and learn from your actions exhibits even more "intelligence".

    Second, I don't necessarily have a problem with AI "cheating", if it is done subtly enough. The human player obviously has many advantages compared to the computer player. I'm prepared to balance that out a little if need be. In particular, it may be that AI methods that don't cheat are ideal, but are sacrificed because we don't have the computational power (CPU etc.) to execute them. In the end, the question is whether the game play experience is enjoyable or not. If the AI had to "cheat" a little behind the scenes to achieve that, I'm okay, so long as it isn't obvious.

    Next, you have to realize that although AI methods have had some success, it is often in specialized areas, such as expert systems. The problem of building a general AI opponent is a little more like passing a Turing test than identifying a tumour.

    Typically classical AI methods are rule based and procedural. The tendency in the last decade or two has been toward emergent behaviours, under the umbrella of artificial life techniques.

    There are a host of techniques the developer can employ to do AI, but some are hard to implement (or implement well). Some require tweaking to achieve good results. Typically memory is required to store state, so that intelligent decisions can be made.

    In my game, I am hoping to finish my unit grouping and action classes soon. After that, I'll either work on networking or more complex actions. I intend to build a goal-based action structure for my computer opponents. For example, a high level goal might be "occupy quadrant 4" with subgoals of "eliminate enemy base 3" and "fortify ridge in sector 7".

    There's nothing new there, that sort of stuff has been written about since Minsky and friends. But obviously the implementation leaves a lot of room for creativity, because I haven't seen it pulled off very well.
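    The goal/subgoal structure described above might be sketched like this. The class design is a guess at one possible implementation, not code from the poster's game; the goal names come straight from the comment:

```python
# A goal is achieved when its own condition holds and every subgoal is
# achieved; planning walks depth-first to the first unmet subgoal.

class Goal:
    def __init__(self, name, done=False, subgoals=()):
        self.name = name
        self.done = done
        self.subgoals = list(subgoals)

    def achieved(self):
        return self.done and all(g.achieved() for g in self.subgoals)

    def next_goal(self):
        """Return the deepest unmet subgoal to work on next."""
        for g in self.subgoals:
            if not g.achieved():
                return g.next_goal()
        return self

plan = Goal("occupy quadrant 4", subgoals=[
    Goal("eliminate enemy base 3"),
    Goal("fortify ridge in sector 7"),
])
```

    In practice each leaf goal would map to concrete unit orders, and the interesting design work is in deciding when a goal should be abandoned or reprioritized.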
  • I'm still most impressed by the AI in the original Descent... I don't know if they did anything special with the way the AI was handled but it worked really well. Doom, while one of the greatest games ever, had some of the WORST AI I've seen! You could kill almost anything just by hiding behind an obstacle.
  • The most efficient movement of troops can be done without any AI at all. It just takes a nice deterministic algorithm. They just haven't bothered to figure out a good one.
  • The algorithm is deterministic once everything is known about the terrain. Admittedly, everything may not be known at the beginning, and must be guessed and probed. I suppose this could be called AI, but it's not very advanced, and they do it just fine. Their biggest problem that I've seen is that they do not take into account all available information - in particular, they do not seem to realize when other troops are moving. So when you are trying to move nine troops through a small hole, and the first one enters, the others get confused. Admittedly, there is a certain amount of uncertainty involved. The troop may decide to stop in the middle and in fact create a block. However, this is not that likely, and they should only adjust their course if it does in fact stop. This is a fairly straightforward algorithm.
  • On Pac-Man: If, when no monsters are nearby, you go up into one of the alleys which are immediately northeast and southeast of your starting location, and you just sit there against the wall... you can then walk away and take a bathroom break, get something to eat, whatever. The monsters will continue roaming the maze indefinitely without finding you.

    Even good AI's have their weaknesses. ;-)

  • The AI in Civilization II is so bad, it's the best argument in favor of open source I've ever seen.

    Not only are the AIs so dumb that they have to cheat in order to keep up with a human player, but even the unit-movement intelligence is horrendous. I've had times when I've directed a unit to move from point A to point B ten squares down a straight road, and what happens? The unit's first move is to hop OFF the road onto a mountain square, or it gets locked in a step-forward-step-backward loop until its movement points are exhausted. Give me a weekend and I could code a pathseeker that could literally run circles around the one in Civ II.

    If you're dumb enough to hand control of one of your cities to an AI advisor, you'll later find out that the computer has built lots of useless improvements there which are sucking up your income in maintenance costs.

    Another example of a bad AI pathfinder was the one in Myth I. If you told your journeyman to heal an archer standing right next to him at one end of a line of archers, your journeyman would stroll in front of the archers to the OTHER end of the line, getting hit by their arrows en route, then he would turn around and come BACK in front of your archer line again, and only then would he heal the original target. Kudos to Bungie for fixing this in Myth II.

    I'm glad to see discussion about bad AI's here; it gives me a chance to vent about games whose AI's have bugged me for years. ;-)

  • I've never encountered an AI as decent as the one in FreeCiv. It's amazing.
  • I dunno. Having played Age of Empires for a long time, I found that the computer had a range of personalities. Sometimes it sucked for no reason, other times it just wanted to kick your ass.

    Having not tried Civ yet, I am not up on some of the latest technologies. But I do look forward to AOE II (years behind schedule). The overall look/feel/gameplay of AOE I was awesome.

    PS: 'Daleks' had good AI and that was in the 80's.
  • Would you consider putting these rules onto a web page so that gamers could look at them? Both game players and writers could use this kind of information.

    A good place would be the Linux Game [linuxgames.com]'s site. If they won't host it, I will. This is the kind of "real" info that makes games and such very useful. Like the how to do a real autopsy page (vs. X-files' version).

  • Total Annihilation's AI really isn't all that great.

    It IS customizable, though not to the level you'd hope for.

    You're allowed to alter probabilities and maximum unit counts, so you can tell it to not build so damn many factories and start churning out light tanks (for example).

    TA's AI has no idea of WHERE to build things. If you let it build mines (like claymore, not coal), it might build a nuclear mine right in the middle of its [densely packed] base.

    Don't get me wrong: I think it's great that Cavedog allowed the level of modification that it did, and I love the TA games (Kingdoms just came out this week). Their AI just isn't that impressive.

  • ...add to that the fact that it is common for computer players to ally with each other to defeat the infidel humans (Civ, Civ2, SMAC, StarCraft). Of course, that keeps the humans from just doing the "sit by the fire until the storm out there blows over, then go see what is left to take" routine. If there is a belligerent computer "race", why shouldn't it try to wipe out its fellow computer brethren, too?

  • Computers do what humans tell them to do.

    In a sense this is true, however computers are capable of doing things that their programmers cannot fully comprehend or expect...imagine the first person to plot out a Mandelbrot Set.

    I worked in the connectionist AI field for a while and though I think it helped us think about the brain in new ways, very few real applications have come from neural nets. Expert systems are still the most used AI-type application, but it is tough to really call them AI.

    Our biggest problem with AI is that the brain has had billions of years of evolution, and is much larger and more complex than any neural net we can design today. However, I do believe that it is within our grasp now to artificially evolve neural nets that can mimic ants, spiders, or worms. Getting to vertebrates will take a while.
  • I have to second this. The most incredible thing I saw was when one Skaarj played dead, waited for me to walk past, and then started attacking. Now, such a strategy could merely be scripted or random, but the timing has been so perfect as to never occur when I wait for it, and only to happen when I assume the beastie is dead.

    Of course that might just point to how bad a FPS player I am, but the realistic feeling is still there.

    Russell Ahrens

  • by Soong ( 7225 ) on Thursday June 24, 1999 @07:09AM (#1834982) Homepage Journal
    And the current state of game AI is why games should all have an API that I can program my own AI against, because I can do it better (or at least make it do what I want).

    If you've played Civ2, perhaps you can sympathise with the brain-dead behaviour of "Automate Settler", which irrigates everything, including the forests you were counting on for shield production.

    AI API's would also make way for AI contests. I'd love to see a few more of those.

    Gripe #2 on the current state of game AI: cheating! Game programmers know that at some point the human learns and the computer has stayed the same, so to up the difficulty the computer has to cheat. Civ again: building units becomes cheaper for the AI than for the human, and the subjects it rules become more agreeable and easier to please, requiring less expenditure and freeing up effort for smashing the human.

    As said in the counter-point, rule-based AI is a dead end - unless the computer can generate new rules and learn. Learning must be the next step for game AI, and I wish (as per gripe #1) that I could write some for the games I like.
  • You did that too? I completely missed the pre-release hype (how, nobody knows) and some guy at work got it. I borrowed it for a week, took it home, did the thing with the earphones, etc. I didn't even know about the marines. How freaky was that bit when you come into the top of that room and that scientist is running down below shouting "At last, you're here!" and this marine comes out of nowhere and guns him down. The HL music kicks in, you walk a few steps, and immediately you're stuck in the middle of this room with all these thinking, breathing people with big guns running around trying to kill you. My first ever game 'experience'. It was incredible!
  • You are in a maze of twisty messages all alike.

    Good god man. Do you know how long I was stuck in that stupid maze!

  • > The computer can simultaneously give far more orders then a single mouse

    This is PRECISELY why I hate, despise, revile almost every single real-time "strategy" game on the market, and for that matter, the whole genre. I can think reasonably fast, but I can't translate that into precise mouse clicking over a narrow little window to micro-manage suicidally stupid units who don't understand their general orders, just the one I immediately give them. I want a unit (meaning a GROUP, that reinforces itself) to defend an area, or scout, general orders, not "shoot at this bunch, now move here, and oh wait something across the map, just don't do a damn thing else while i give orders to that bunch, okay, now shoot here and move there".

    One game I particularly hated was Age of Empires. My border guards constantly just LET THE ENEMY THROUGH, and although the game mentioned a peaceful way to win, I couldn't imagine it, since diplomacy consisted of "send an army to kill his peasants". Carrying on the stupid juggling act, I found that farms degrade and disappear unless you manually "repair them" (Aunt Mae, quick, call the repairman, the field's blowing away!). Then to add insult to injury, there's not even a PAUSE function, not even one that disables the interface while it's paused. No bathroom breaks, no wrist relaxers. Ridiculous. Worst RTS ever.

    Populous 3 is an RTS I actually like. It's still a juggling act, but it feels like part of the puzzle that every level is. You have the one shaman unit, and a mindless mob that you can direct to cause havoc in general areas. The delayed order function was great for creating diversions once you had your plan laid out. It doesn't even have the pretense of having an AI, but it does model a rampaging mob very well.


  • Just look at Ultima Online. You can tell the computer players from the real players by the overwhelming aura of munchkinness.

    (yes, that's tongue in cheek, but there are plenty of mediocre humans that a good If...Then...Else bot can eat for dinner).

    Now how do you set up a neural net to make the Giant Squad of Killer Orcs (not the ones who just updated my Slashdot page) smarter as they watch my character fight them?


    --
    QDMerge -- generate documents automatically.
  • In the 20 years I've been practicing geekdom, I've watched the term "Artificial Intelligence" be used more and more loosely. At first, AI was reserved for the most sophisticated systems that could reasonably be expected to pass a Turing Test.

    These days, though, any expert system or rules based engine exhibiting any kind of automated behavior is labelled "AI," even though (as in Warcraft or Starcraft) the behavior quickly demonstrates predictable and repeated weaknesses and a disappointing lack of adaptability.

    At last there are a few games (Half-Life) where the term AI might apply, but they're still just adaptable expert systems. If the market weren't so oriented toward hi-res, realistic 3D graphics, perhaps we'd have seen decent AI earlier.

    I'm not annoyed by the use of the term AI by game developers - that's the role their behavior engines are trying to fill. These engines just don't live up to the name. Yet.

  • We all now see why people like you post anonymously. All you did in your post was knock this guy's ideas (which are based in fact and generally correct). You offer no counterpoints, no other thoughts or ideas. If I had any moderator points, you'd be falling down the list like the stone in the pond of social darwinism that you are.

    I've written several AI systems, though not in the context of games. A set of finite rules is really insufficient. Once the human learns the rules, they will always win. What is needed in AI is for the machine to ascertain the behavior of the human in turn and adapt accordingly.
  • AI is a very bad word for computer-based players. Does your computer have more or less AI depending on your search algorithm, how complex the game is (I don't know how to measure the complexity, but I'm sure someone has a good way of doing it), and how fast your computer is?

    AI is (by my definition) learning: making one's own general rules from specific cases. The computer has to make observations of the real world (say, the real world would be the game), set up a hypothesis, and make a formal description (theory).

    The next step is the interpretation of the theory. To be able to set up a model (set-theoretical model), and to use this for predictions. It must also be able to change the theories if a wrong induction was made.

    If a computer can do this, it is AI (this is, with today's mathematics, impossible). Having "fuzzy logic" or "neural nets" does not make AI; it simply makes the computer's predictions harder to anticipate, and might make the computer more flexible against other tactics/strategies. But it certainly does not give the computer AI.

    For further information, please read "Computerized Agents from a Linguistic Perspective" by Bertil Ekdahl (my teacher). Or reply to this :-)
  • Thank you :-)

    What they are talking about is not AI; the term AI is just used as a word for a computer's flexibility in tactics and strategies, among other things. This is only a little comment, but it is confusions like these that give game developers, scientists, and so on a bad picture of what is possible and what is impossible.

    AI is impossible with today's mathematics. But with all the confusion and all the buzz-words, people think AI can be made, or that it is already here.

    Think for one more second: if a computer had AI, it would start reasoning about the world (why it is here, what it should do), making observations of the world and reacting to them. Even if the computer's "world" is just the game.

    What if the AI in a game figured out that war is a bad thing, and it became peaceful, trying to reason with you to stop the war.

    It would be chaotic :-)

  • You think I'm joking? I'm an AI researcher, with 4 years neural network experience under my belt. I mean REAL AI too.


    Interesting piece of work, I would like to know how you define AI. And for that matter, how you define real AI. I could do this by email, but I could not find yours :-(
  • First (pedantic), `alife' is the _wrong_ term to use for modular/threaded programming techniques (Brooks' subsumption architecture etc.). The former refers to virus simulation/evolution, the latter to a methodology inspired by the modular nature of the brain.

    you're right, but genetic programming is precisely what game programmers are usually doing - they use (or plan on using) some sort of evolutionary technique to breed a 'perfect' behavior algorithm (which probably means breeding a perfect finite state machine to control the creature).

    frankly, most of computer game ai is pretty simple - it's mostly finite state machines for behavior control, sometimes augmented with 'fuzzy' (i.e. nondeterministic) state progression. there are some attempts at doing more than that - afaik, crash bandicoot and jedi knight both use subsumption for behavior control - but those are few and far between.

    btw, for some papers on future directions and game ai research, i'd check out the web page from the 1999 aaai symposium on computer games and artificial intelligence [nwu.edu].
  • Poor pathing for large ground units, especially when moving with groups of smaller units. Instead of waiting for the smaller units to move (which would make sense, since the small units accelerate quickly and move out of the way quickly), the large units will slowly come to a halt and then slowly accelerate in a different direction. By that time the small units have moved out of the way, so again we slow to a halt, change course, and accelerate. Part of the problem is the accel/decel times for large units (which shouldn't be changed), but things would work a lot better if they would hesitate slightly longer to see if their way will be cleared.

  • There's an algorithm called D* that has been proved optimal for things like fog of war (where "optimal" means it best used the available information, without trying to guess and getting lucky). However I don't know of anything that's proved optimal for enemy movements.

  • That was a pretty brilliant move, wasn't it?

    This actually happens in two scenarios in Unreal. In the first, you come across a "dead" Skaarj which hops to its feet and attacks once you get close enough. In the other, the Skaarj drops when you shoot it and plays dead for a few seconds. That scared the living hell out of me.

    Even better, though, was the way you could blow the legs off the Krall. I'll never forget when the lot of them ambushed me, so I turned and leveled one, only to find him pulling himself along with his hands after me. Wow.

  • I probably sound like an idiot bringing up neural nets here, just because of the hype that is associated with them, but I am wondering if someone out there with a good background on neural nets could answer some questions I have.

    I know that the basic neural net works with three layers of neurons: an Input layer, with signals fed into it, a middle layer with a bunch of connections, and an output layer. To train it, you put input signals on one side, and "correct" output on the other, and let the middle layer weight the connections appropriately to "learn" the pattern it is being taught.

    What I am wondering is: Are there other arrangements/learning methods for neural nets other than that one? Could a neural net be taught to play a game by "noticing" somehow which things it tries work and which don't? Instead of a bunch of one-shot examples, could it operate in a continuous manner, with signals constantly being fed in and out, and connections constantly changing? I suppose the basic way humans attach "right" or "wrong" to events is by signals of pleasure or pain, with the network always trying to get more pleasure signals and less pain. I see I've just gone way off on a tangent here. I'll post anyway because I would really like an explanation of how much people now know about neural nets from someone studying such things.

    Anyone?
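    Not an expert either, but the three-layer arrangement described above can be sketched in a few lines: a toy net trained by backpropagation on XOR. The layer sizes, learning rate, and the XOR task are arbitrary choices for illustration, nothing canonical.

```python
import numpy as np

# Toy 2-4-1 feedforward net trained by backpropagation on XOR.
# All sizes/rates here are made-up illustration values.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # input layer signals
y = np.array([[0], [1], [1], [0]], dtype=float)              # "correct" outputs

W1 = rng.uniform(-1, 1, (2, 4))   # input -> hidden connection weights
W2 = rng.uniform(-1, 1, (4, 1))   # hidden -> output connection weights

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1)            # forward pass through the middle layer
    out = sigmoid(h @ W2)
    err = y - out
    # backward pass: re-weight the connections to "learn" the pattern
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += h.T @ d_out
    W1 += X.T @ d_h
```

    As for the continuous, reward-driven version you describe: that exists too, under the name reinforcement learning, where a scalar "pleasure/pain" signal replaces the per-example correct answers.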
  • Heh, the Unreal AI is almost annoyingly good at times - the number of times I've wasted precious ammo missing a Skaarj warrior because it's dived out of the way of my shot... :o)

    Oh, and 2 Skaarj warriors were enough to overwhelm me occasionally, if they caught me by surprise

    Tim
  • The neural-net backgammon programs such as JellyFish that play on FIBS (the First Internet Backgammon Server) [fibs.com] are _extremely_ good players, always ranked in the top 10, just a hair off world-championship level.

    Neural-net programs are now indispensable for game analysis and teaching. Even the standard 'book' opening moves have been rewritten since they came on the scene.

    It's really humbling to realize that my ass is being kicked not because the computer is out-number-crunching me but because the neural nets encode a much deeper, more fundamental understanding of the strategy of the game than I'll ever have :-(
  • HL was the first game to scare me with how sneaky the computer could get. After playing it over and over you kinda see how the AI works and patterns begin to form, but for video games it's the best I've encountered so far.
  • I'm interested in the reasons that this article, and similar ones, keeping turning up in the press.

    Intel keep hyping AI as the "next big thing" because more and more of the hard stuff is being taken off the processor by custom hardware - 3D cards, audio streaming and compression, etc. They want to sell their kick-butt processors so they hype the expensive stuff - currently that's Physical Simulation and "Advanced AIs", even if nobody has done physics well (it's a *very* hard problem and all the demos you'll see are cheap hacks and "special cases") and "Advanced AIs" are pure vapourware.

    Having been in the business for a fair few years, I maintain that nothing significant has been invented in gaming AI for ages. Sure, there have been learning systems, but they have been reserved for niche "simulators" and "virtual pets". Mainstream games continue to plod along with rule-based AIs as always.

    The only thing that has changed is that we can calculate more and more complex metrics for our rule based AIs - instead of proximity, we can do line-of-sight, instead of closest enemy we can now do spatial searches for the mean positions of groups of nasties.

    Why will it always be thus? Because debugging a neural network is impossible (you just have to re-train it with a different working set) and having predictable AIs is key to shipping a game within your launch window. Having an out of control AI where nobody can pinpoint the bugs will get your project canned.

    Certainly Dungeon Keeper 2 and Theme Park 2 are simple AIs - the only difference is that we can now run them on huge crowds, use flocking algorithms, and have more interacting states to provide emergent behaviours. It's still all rule-based though.

    - Robin Green, Bullfrog Productions.
  • There are actually alternate, fan-created AIs for Total Annihilation, by Cavedog. I think they were created because people weren't challenged enough by the skirmish AI. Unfortunately, many of them cheat. I don't know how they did this or whether Cavedog released an API or something, but it is a good example.
  • Sure, Starcraft's AI isn't that good, but the AI in Brood War is way better. Get a multiplayer game going with 3 AIs... the AI rocks
  • Call me dumb, but I can't see the difference between the two points of view. On the one hand, the first article states that AI in games is coming of age with the incorporation of new `alife' techniques. Whereas the second article slates games for not incorporating alife techniques, but points to their future incorporation.

    And both are wrong.

    First (pedantic), `alife' is the _wrong_ term to use for modular/threaded programming techniques (Brooks' subsumption architecture etc.). The former refers to virus simulation/evolution, the latter to a methodology inspired by the modular nature of the brain.

    Second, if games programmers can do "cheap tricks" instead of coding some modular, threaded architecture to control the actions of assailants, then good for them. Academics in AI make dodgy hybrid systems all the time and still claim to be high and mighty. (Brooks and the COG project.)

    The bottom line is: who cares how it's implemented? If your game behaves realistically then no-one can slate you for your methods. Neural nets and threads are v. expensive; if...else statements are cheap.
  • The most efficient movement of troops can be done without any AI at all. It just takes a nice deterministic algorithm. They just haven't bothered to figure out a good one.

    Finding a path between places is AI. Deterministic algorithms can't deal with unexpected things such as an enemy suddenly appearing on the path, or the route between two points being unknown (due to fog of war or something like that).
  • You really believe that anyone can actually come up with computer AI that will match human intelligence? There has been only one instance I can recall of an intelligent human being beaten by a computer: Deep Blue winning a few games of chess. The only way the computer won was because it could calculate a huge multitude of possibilities at once.

    Computers can play a perfect game of tic-tac-toe and come damn close in checkers, as someone has pointed out. Bayesian networks have been able to diagnose certain diseases as well as or better than pathologists (check out the literature on PATHFINDER I-IV, MUNIN, etc.). And I'm sure Chessmaster 5000 or gnuchess can beat the average person in chess.

    You're comparing the abilities of a computer player to those of the best human in the world at chess. That's not really a fair comparison, since the average person is nowhere near that good.

    However, the best AIs can't deal with mundane stuff that 99% of the population can handle, such as recognizing faces or understanding language, so AIs still have a really long way to go.

  • Doom AI (slightly simplified):
    Move towards player, slightly to left.
    When you can't move, move slightly right.
    Alternate moves every time you get stuck.
    Fire when you see the player and have had enough time since last fire.

    Amazing, really, how well such simple rules can work.
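    For what it's worth, those rules translate almost directly into code. A sketch (every field and method name here is invented for illustration; this is not the actual Doom source):

```python
import math

def doom_monster_step(monster, player, now):
    """One tick of the simplified rule set above: veer slightly left of the
    player, flip sides when stuck, fire on sight subject to a cooldown."""
    dx, dy = player.x - monster.x, player.y - monster.y
    angle = math.atan2(dy, dx) + monster.veer      # veer: small left/right offset
    nx = monster.x + math.cos(angle) * monster.speed
    ny = monster.y + math.sin(angle) * monster.speed
    if monster.blocked(nx, ny):                    # stuck: alternate the veer side
        monster.veer = -monster.veer
    else:
        monster.x, monster.y = nx, ny
    if monster.sees(player) and now - monster.last_fire > monster.cooldown:
        monster.fire(player)
        monster.last_fire = now
```

    A dozen lines, and yet in a cramped level it reads as "the imp is stalking me." Simple rules plus a good environment go a long way.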
  • Game trees are really useful only for chess-like games where there is no hidden information. For something like Starcraft, you need several levels:

    Individual units need movement, this involves pathfinding and the ability to see the enemy. The "star" of pathfinding is called A*, you should be able to find it on the web.

    Strategic AI is harder, some components:
    Deciding what units to build.
    Some games, such as AOE hardcode these (you can edit the files). Smarter ones would see what units you are using and build the units which defeat them. (Most RTS games are rock-paper-scissors in the end).

    Deciding what to do with them.
    Again using AOE, there are editable files filled with things like how quickly to explore, how violent to be, etc. This is the really hard part.

    The web page:
    Game AI Page [gameai.com]
    is an excellent source, and a book on programming RTS games was reviewed here a little while ago.
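    The A* mentioned above is short enough to sketch in full. A toy grid version with a Manhattan-distance heuristic (the grid representation is my own assumption, not from any particular game):

```python
import heapq, itertools

def a_star(grid, start, goal):
    """A* on a 2D grid; grid[y][x] == 1 means blocked. Cells are (x, y)
    tuples. Returns the shortest path as a list of cells, or None."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    tie = itertools.count()              # tiebreaker so the heap never compares cells
    open_heap = [(h(start), next(tie), start)]
    came_from = {start: None}
    g_cost = {start: 0}
    while open_heap:
        _, _, node = heapq.heappop(open_heap)
        if node == goal:                 # rebuild the path by walking parent links
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = node
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came_from[nxt] = node
                    heapq.heappush(open_heap, (ng + h(nxt), next(tie), nxt))
    return None                          # goal unreachable
```

    The parent's point stands, though: on a fully known, static map this is deterministic; it's the fog-of-war and moving-enemy cases (re-planning, D*-style) that make pathfinding an AI problem.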

  • In my data structures course, we briefly covered game trees. This seems to be a very effective (yet ridiculously simple) AI method, but because it is recursive it is not at all efficient. Does anyone here know how most game developers implement AI? Even in Starcraft, if the computer tried to control just 30 units using the game-tree method, it would probably bring a PIII to its knees.
  • Agreed. I've played 2 on 6's before (and won), and the best way is to simply let the bad guys throw themselves at you 'till they exhaust their resources. After that, it's a cinch.

    Now, I'm not saying I *like* the AI in SC.. it's infuriating at times.. I've lost entire squads of zealots due to their autonomous stupidity.
  • Sorry, SC (Broodwar included) sucks.

    I've spent many hours researching this topic :), and some of the problems I've found are:

    1. No real variation in strategy. The strategy can be summed up as this:
    if (myForces > yourForces || youExpanded)
    attack();

    2. No adapting to your behavior. It won't build up the appropriate counters for your attackers or defences; it always builds up a general purpose mix of units. If you've blocked off a route, it won't search for a weakness elsewhere.

    3. The AI relies on having full map information and enemy unit locations. But it fails to use this information effectively.

    4. Too many fatal flaws. E.g. attack with one worker at the beginning of the game, and it will counter with all of its workers, taking them off production.

    The AI is so predictable that playing against it is simply playing the percentages. Make sure you control more resources, make sure you win the battles, and exploit the computer's short-sightedness.

    This, IMHO, is the antithesis of what strategy games are about. I like games that force you to be creative and innovative to win, and the current crop of RTS game AIs don't measure up.
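    To underline point 1: the whole observed strategy really does fit in a couple of lines. A caricature (every name here is invented, not from any actual script):

```python
def rts_ai_tick(me, you):
    """Caricature of the predictable RTS strategy described above.
    All fields and methods are hypothetical illustration names."""
    if me.forces > you.forces or you.has_expanded:
        me.attack(you.base)                      # the only trigger it knows
    else:
        me.build(me.general_purpose_mix())       # never counters YOUR unit mix
```

    Once you've read that, "playing the percentages" is exactly right: never trip the two attack conditions until you're ready, and the AI just sits there building its fixed mix.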
  • Something I thought of a while ago: we have graphics-accelerating hardware, but I don't care how good a game looks, and there's an absolute limit to how far we can go with 3D acceleration.

    What we really need is a card dedicated to accelerating AI. This would be tricky, but according to some guys I've spoken to who know much more about AI than I do, it should be possible.

    $0.02
  • Someone ought to tell you that Pac Man had only fixed movement patterns assigned to the ghosts.

    But that would be embarrassing.
  • er, not according to an interview I read with the programmer (can't remember his name, and the book's in Grand Rapids (I'm in Lansing).)

    If you find that book, please do quote from it... I'd be interested in knowing what the programmer was thinking when he wrote it, because there are six or eight patterns that the player can variously run to clear each level. Perhaps the movement itself isn't fixed, but the optimal solution to each instance of a level is.

    It's possible to play pac-man blindfolded. I've seen this done, although I don't have the actual skill. I do know the first couple levels of pattern, though.

  • While we're speaking of Civilization, I've got a question.

    How successful was the Linux port? Are there any sales numbers to look at?

    I ask because this is one of the few commercial games under Linux, and I wanted to know whether it was a small/medium/big/huge success or whether it didn't work out (not enough copies sold).

    thanks
  • Virtua Fighter 2 and 3 have some sort of "expert mode" where the computer learns moves from players.



    It seemed to learn moves from a player, try them, and rate how successful each move was. It only kept using stuff that worked well, but seemed to mix things up so you couldn't easily second-guess it. The AI still had a couple of bugs, but provided you were well-mannered enough not to repeatedly exploit them, you could get a tough and interesting game against the computer (with the added fun of seeing what moves it had picked up from you or your friends).



    Fighting against the VF2 computer is much more fun than fighting the old Street Fighter 2 AI, which was very repetitive and heavily stacked in the computer's favour.



    Even though VF2 is quite old now, I still rate it. Soul Blade and a couple of others seem to have ripped it off with just small improvements in graphics etc. You can pick a second-hand Saturn up for about £30 these days. The PC version of VF2 is actually pretty recent and might have added/improved AI. VF3 is excellent if you're willing to learn it, but seems to be too tricky for most people.



    til you can use drunken kung fu in quake, i'll still keep going back to virtua fighter. shun di forever :)
    +++++
  • I would actually have to disagree.
    Okay, Unreal monsters do dodge very well. But really, that's just a variation on the "perfect aim" bit, where a computer shooting at you has better-than-human accuracy. Unreal AIs have better-than-human dodging... but only because the game is geared towards that. Quite akin to QuakeC bots (and seeing how the AI author, Steve Polge, wrote the definitive Quake 1 'bot, that's quite logical).
    Unreal's AI puts up a good fight, as do Half-Life's and Aliens Vs. Predator's. But none of them has yet *really* impressed me. Eraser Bot for Quake 2 is quite good also.
    Want AI to impress me? Give it personality. Make it Max Headroom for cryin' out loud, but make it *seem* like more than some programmer's set of rules. Cheers, jeff
  • That's total BS. A friend of mine and I played a 2 (us) vs 5 (them) game once, and we won ONLY because we held high ground and lasted long enough for the AI to waste all of its materials. Siege tanks, guarded with missile turrets, backed up with SCVs to repair everything, kept us alive and hammering on the AI. We finally managed to clean house after the computer ran out, building a couple of Battlecruisers (we were low on minerals too! :) to slowly go around the map kicking butt. Best game ever. :)
  • As a matter of fact, I am going to be devoting my fall term to a Bolo AI (or brain as the game terms them). It will make use of the strategic disposition ideas from comp.games.ai (or was it comp.ai.games?) in order to organise its thoughts and (hopefully) beat the player.

    My idea is that, provided with a simple set of rules for evaluating positions and risk, the brain will be able to decide on the optimal solution. I took AI a few terms ago and still have the books (naturally), so I should be able to implement some more modern AI ideas.

  • That's already being worked on in at least 2 of the offline server projects. I'd like to say more about it, but because of the offline servers' uh, questionable legal status, I think it's best that they announce something officially.

    On the production OSI Servers? There are already some sophisticated AI routines in place for at least some monsters. Orcs are supposed to be cannon fodder. The same can't be said of Lich Lords, or the new coding on Dragons. I'm concerned though. If you make Orcs use group tactics, it's going to increase the amount of server-based processing lag, and remove Orcs from the newbie cash stable.
  • Bolo rocked! I will always have a mac, if for no other reason than Bolo is still one of the coolest network games I've ever played. And you have to love a game that is that good, yet takes hardly any space or memory.


    itachi
  • There was/is an old Mac network game called Bolo... one of the first networked games for Mac, IIRC. In it, the player navigated a tank from an overhead map view around an island, killing other players' tanks, building pillboxes, capturing supply depots, etc. Tons o' fun. But the best part was the fact that Bolo had a plug-in architecture for brains that you'd write in C, etc. Some of these were pretty swift, too. Very old game, but I can still write a brain that'll kick my ass every time ;)

  • I can personally attest to the Marathon Trilogy's aliens being horrifyingly smart, rivalling the AIs in games today. Beyond that, the environment (especially in the first Marathon) was perfect: dimly lit, claustrophobic corridors, open cargo bays, etc. You'd really have to play it to understand, but that was a scary game, evoking panic with an atmosphere that feels like movies such as Aliens or Event Horizon. I hadn't felt fear like that from a game until Half-Life.

  • Ah, but I had the map... ;) I think a .gif or something is available of the maze map somewhere online; ask me if you need it.

    uh... we're both talking about Zork, or one of its progenitors, right?

  • er, not according to an interview I read with the programmer (can't remember his name, and the book's in Grand Rapids (I'm in Lansing).) He said that he actually wanted to give the "ghosts" individual personalities, though I don't know if this made it into the final product.

    It was in a book that interviewed around twenty old-school programmers, if anyone wants to back this up...

  • Great idea, but I think that genetic algorithms are not really suited to this task.

    Reason: the number of generations * the size of the population required to create a well-evolved AI would cost far too much in terms of computer power and processing time.

    For a good population you would likely want more than 60 AIs. You would likely want each generation to last till someone has at least 50 kills (or there is a total of more than 400 kills) to allow averages to tell who is really worthy of reproducing. You would also likely need hundreds of generations to zero in on optimal values. Of course you could reduce the number of generations by increasing the population size, or vice versa, but you can't really reduce the processing needed by much.

    You would likely have more success using a simulated annealing method, working on each AI individually. That way you wouldn't have to use such a large population. The results would be very similar, but the analogy isn't as nice.
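    A rough sketch of that annealing alternative: tuning a single bot's parameter pair against a completely made-up fitness function standing in for "kills scored in a test match" (the parameter names and optimum are invented):

```python
import math, random

def fitness(params):
    """Stand-in for 'kills in a test match': entirely made up. In this toy
    landscape, bots score best at aggression ~0.7 and accuracy ~0.9."""
    aggression, accuracy = params
    return -((aggression - 0.7) ** 2 + (accuracy - 0.9) ** 2)

def anneal(steps=5000, temp=1.0, cooling=0.999):
    """Simulated annealing over one bot's parameters in [0, 1]^2."""
    random.seed(0)                        # deterministic for the example
    params = [random.random(), random.random()]
    best, best_score = params[:], fitness(params)
    score = best_score
    for _ in range(steps):
        # perturb one parameter, clamped to [0, 1]
        cand = params[:]
        i = random.randrange(2)
        cand[i] = min(1.0, max(0.0, cand[i] + random.gauss(0, 0.1)))
        cand_score = fitness(cand)
        # accept improvements always; regressions with probability exp(dE/T)
        if cand_score > score or random.random() < math.exp((cand_score - score) / temp):
            params, score = cand, cand_score
            if score > best_score:
                best, best_score = params[:], score
        temp *= cooling                   # cool: wander early, hill-climb late
    return best
```

    One bot, a few thousand evaluations, no population bookkeeping: that's the cost argument in miniature, though (as the parent says) you lose the nice evolutionary analogy.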
  • (Those Marathon people here may remember me from the Story Page [bungie.org] ;-)

    Ah, Marathon--still, IMHO, the best FPS ever (and maybe yet to come...). Incredible AI, story, graphics (who needs polygons!). AI seems to be Bungie's forte--PID was surprisingly good for its time, Myth & Myth II were very nice, and Oni...mmmm, individualized AIs for every character, adaptation, learning, etc.

    But Marathon...back in those days, when the only real alternative was Doom, it was breathtaking (and still is). The primitive state of 3D games only enhanced the effect--I can still remember dueling with Juggernauts (big barn-sized flying tanks), retreating to cover, only to have them sneak up behind me (sometimes going some distance) and toast me, sometimes even bringing help with them.

    Links galore: bungie.org has lots of info on Marathon [bungie.org], Oni [bungie.org] and PID [bungie.org] (as well as a little on Blam [bungie.org], but would take some explaining). Some of the original designers jumped ship and formed Double Aught [doubleaught.com], made Marathon Infinity, and are currently working on Duality [duality.net], yet another FPS.

    "I pledge to punch all switches, to never shoot where I could use grenades, to admit the existence of no level but Total Carnage, to never use Caps Lock as my 'run' key, and to never, ever, leave a single Bob alive." -The Oath of the Vidmastar, Marathon

    Rage Hard
  • by VonKruel ( 40638 ) on Thursday June 24, 1999 @07:35AM (#1835028)
    The ability of an AI to play a game well depends on the complexity of the game. The complexity of a game depends on how many possible moves there are at each turn, and how many possible game states there can be. For example:

    Tic-tac-toe: very easy -- the machine can examine the whole game-tree on each move, and *always* make the best possible move (which will result in a tie if the human also makes the best possible move).

    Checkers: more complex, but a deep search can be conducted, and a move database can be used to help matters also. There are some *excellent* (practically unbeatable) checkers programs in existence. And there are incredibly good human players also.

    Chess: quite a bit more complex than checkers. The number of possible states for a Chess game is hopelessly huge. To create a grandmaster-comparable Chess engine, you employ tactics like:

    o hardware acceleration of time-critical search code
    o move database for opening and end-games (tricky part is the middle game)
    o heuristics developed with the assistance of Chess grandmasters
    o a big-ass computer (lots-o-CPUs and memory bandwidth)
    o a very nice search engine of course ;-)

    It's important to note that adding more computing power doesn't help matters as much as you may think. For example, suppose you increase the computing power by a factor of 10. In a game where the number of possible moves is 10, this would enable you to see *one* (1) move further into the future. If you wanted to see two moves further into the future you'd need *100* times the computing power. This is over-simplifying things a bit, but you see my point.

    Now consider a computer game like a flight simulator or a real-time sim like StarCraft. The complexity of games like this is so massively f'ing huge that you can't cope with the problem by using a simple min/max search -- you *must* rely *heavily* on heuristics (e.g. if...then...) to make an AI that will perform better than a monkey. A lot of AIs are given artificial assistance (e.g. they "cheat") so that they won't be easily defeated by a competent human player.
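    For the curious, the min/max search being described is tiny in its pure form. A plain negamax over the full tic-tac-toe tree, which (as noted above) is small enough to search exhaustively:

```python
from functools import lru_cache

def winner(board):
    """board: 9 chars, 'X', 'O' or ' '. Return the winning mark or None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def negamax(board, mark):
    """Score the position for `mark` to move: +1 win, 0 draw, -1 loss,
    assuming best play by both sides (the whole game tree is examined)."""
    opp = 'O' if mark == 'X' else 'X'
    if winner(board) == opp:          # opponent's last move already won
        return -1
    moves = [i for i, c in enumerate(board) if c == ' ']
    if not moves:
        return 0                      # full board, no winner: draw
    return max(-negamax(board[:i] + mark + board[i + 1:], opp) for i in moves)

def best_move(board, mark):
    """Pick the move with the best score for `mark`."""
    opp = 'O' if mark == 'X' else 'X'
    moves = [i for i, c in enumerate(board) if c == ' ']
    return max(moves, key=lambda i: -negamax(board[:i] + mark + board[i + 1:], opp))
```

    On an empty board perfect play from both sides is a draw, so `negamax(' ' * 9, 'X')` comes out 0. The branching-factor arithmetic above is exactly why this stops scaling: each extra ply multiplies the work by the move count.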
  • I know the feeling. I have been a big fan of Half-Life (yes, I know it is a Windows-based game), and the artificial intelligence in that game is interesting, to say the least. I really like games where, if you do not do exactly the right thing, you don't get to go any further. And when the characters in the game treat you like an actual being in their world, it makes for a true gaming experience. But let's not forget how enthralling Myst was. There was no sprite interaction; it was the scenery that counted, with an immersive story. An AI is worth it only if the game really deserves the trouble it takes to create one. Other than that, I still love my first-person shooters and runaround games. Look at Tribes and the upcoming Quake 3 and tell me who needs an AI anywhoo. A computer can never imitate the very thing that created it, and when it does, we are obsolete.

    Just my opinion
  • And I will third it. I won't say that I am the best FPS player, but I have to say that any enemy that will use my own tactics against me earns my respect (it HURTS to have a Skaarj Scout circle-strafe you while you are trying to do the same to it; it's better at it than you are). From what I heard, the guy who did the Quake ReaperBot in his spare time got hired by Epic to do the Unreal AI.
  • Starcraft is scripted (it uses the same AI as Warcraft II with a few minor tweaks). Here is an example of how bad the Starcraft AI really is: it has a line in its resource-management script that states something along the lines of "if resources < 100, get 5000 resources", making it impossible to lay true siege to the computer, and its supply lines can never be severed because it doesn't have any.
  • If I remember correctly, the Starcraft AI knows exactly where everyone is, all the time. The other irritating thing it can do (and does): in Brood War, the AI can sensor sweep an unlimited number of times, even if it doesn't have a Comsat Station. I remember nuking their entire base, doing some mop-up work, and getting six sensor sweeps in the span of 15 seconds. That's wrong, especially since I had destroyed all of their Command Centers. Go figure.
  • I stand corrected.
  • The evolution system can be a stripped-down version of the game. For example, there is no need to render graphics while evolving monsters.

    On the other hand, evolving against human opponents would be best...

    Fear my wrath, please, fear my wrath?
    Homer
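  • The stripped-down evolution idea above can be sketched in a few lines of Python; the "headless battle" here is faked with a toy fitness function (the invented optimum is 0.7), but the loop is the real shape of the technique:

```python
# Minimal evolutionary loop for tuning a monster parameter without
# rendering anything: score candidates, keep the best, mutate offspring.
import random

random.seed(42)  # deterministic for the example

def fitness(aggression):
    # Stand-in for "run a fast headless battle and measure the result";
    # here the (invented) ideal aggression level is 0.7.
    return 1.0 - abs(aggression - 0.7)

population = [random.random() for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                      # elitist selection
    offspring = [
        min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
        for _ in range(15)                        # mutated copies
    ]
    population = parents + offspring

best = max(population, key=fitness)
print(round(best, 1))  # converges near the optimum, 0.7
```

    Swapping the toy fitness function for an actual no-graphics game simulation is the whole trick; evolving against recorded human play, as the comment suggests, just changes what the fitness function measures.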
  • Since everyone in here is focusing on AI for computer games, I figured I would throw in my .02 on console games. It very well may be a lot of hype, but from what I hear the AI in the new N64 game Perfect Dark will be fantastic. Do a search on it and check out some of the info... seems pretty promising to me. Then again I think Goldeneye 007 was the best game any console has ever seen. Multiplayer rocked!


    Tell a man that there are 400 Billion stars and he'll believe you
  • Actually, getting a game to move characters in the most 'efficient' manner possible is extremely difficult for a computer to do. It's a variation on the old travelling-salesman problem: if you are a salesman who has to visit the capitals of the lower 48, what's the most efficient path for you to take? It turns out to be a fiendishly difficult problem to solve, even for supercomputers (which until recently needed several days to solve it).
  • The key to the illusion of making a computer look like it has more cycles is to pre-compute as much as possible. This could work with mode probabilities. If a player is exceptionally good at taking out a behemoth tank with mines and finishing it off with an air strike, for example, the AI could probabilistically opt to go into "defense from behemoth tank with mines and air strike" mode. I have not seen this variant on modes in games yet.
    The reason you have not seen this variant on modes is that it is insanely hard to enumerate all the possible strategies and defences. The real key will be when someone develops a method whereby the AI can learn new strategies and plug them into this architecture.
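    One way the mode idea above could be wired up, sketched in Python; the tactic names, counter table, and counting scheme are all invented:

```python
# Hypothetical "mode probability" AI: tally the tactics the player has been
# observed using, then enter the pre-built counter-mode for the most
# frequent one.
from collections import Counter

COUNTER_MODE = {
    "tank_rush": "mine_field_defense",
    "air_strike": "anti_air_defense",
    "infantry_swarm": "artillery_defense",
}

observed = Counter()

def record(tactic):
    observed[tactic] += 1

def pick_mode():
    if not observed:
        return "default_defense"
    tactic, _count = observed.most_common(1)[0]
    return COUNTER_MODE.get(tactic, "default_defense")

record("tank_rush")
record("air_strike")
record("tank_rush")
print(pick_mode())  # tank_rush seen most often -> mine_field_defense
```

    As the reply notes, the hard part is not this lookup but filling in the table: enumerating the strategies worth recognizing and building a competent counter-mode for each.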
  • By any chance, were you playing in easy mode? The AI becomes much more adaptable on the harder levels of difficulty.
  • That is AI. It has 5 rules that cover what it can do. A very limited one, but it is AI nonetheless.

    just my 2cents
  • I am very impressed with Half-Life. It is my second favorite first-person shooter, behind Quake (my favorite game of any type). I found it to be very interesting and also very realistic. The people would act like you shot them, and soldiers acted like soldiers.

    Unrealism really bothers me in gaming. After seeing how things work in HL (especially the helicopter), games like Sin are pathetic at best.

    On a side note, I keep seeing that Wine can run Half Life. I have tried and keep failing. I wish I could find a place that documents how to do it.... hint...
    -Clump
  • Ok ... I have taken on up to 3 AIs using the following method, with 1 AI of each race (I've also beaten 7, but you have to take out the other 4 by beating them to the resources):

    Play Protoss. Send Probes to find all 3 bases ... build a Pylon 1 screen away ... build a Forge near any of them ... now you can build Photon Cannons ... build 2 or 3 Photon Cannons by each Pylon you built ... send your Probe into their base to attack a building ... it will get the attention of ALL their units (even pulling Drones and SCVs off of resource-gathering) ... run back to your Photon Cannons and enjoy the slaughter of another race ... That's shoddy AI for you ("Our 20 SCVs can beat his 1 Probe, let's kill him and be done with it even if we have to follow him into 3 Photon Cann---aaargh!") ... But it doesn't cheat ... it just maximizes efficiency ...
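    The cannon exploit above works because of a targeting rule that looks something like this sketch (invented names, but the behavior matches what the comment describes): every unit, workers included, chases whatever attacked, with no check for defenses along the way.

```python
# Hypothetical naive aggro rule of the kind the cannon exploit abuses: one
# probe attacks a building, and *every* unit -- workers included -- is
# ordered to chase it, straight into the waiting cannons.

def respond_to_attack(units, attacker_pos):
    """All units drop what they are doing and pursue the attacker."""
    orders = {}
    for unit in units:
        orders[unit] = ("chase", attacker_pos)   # no threat assessment at all
    return orders

units = ["marine_1", "scv_1", "scv_2", "scv_3"]
orders = respond_to_attack(units, attacker_pos=(10, 42))
print(orders["scv_1"])  # ('chase', (10, 42))
```

    A less exploitable rule would at least leave the workers mining and cap how far pursuers are willing to chase.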
  • I know exactly what you mean when you say "scare": the first time I played HL with full surround sound and a big monitor in a dark room, it felt like I was almost being hunted..
    Wait till game AI advances and even our best game-playing humans can't figure out its "patterns".
    -Kancer
    ps: a quick thing on Starcraft - Blizzard toned down the "insane AI" in Brood War because of complaints.
  • No, finding a path from A to B can easily be implemented; there are many efficient algorithms for that (e.g. Dijkstra's).
    The traveling salesman problem is a bit different (but this little difference makes it - as you said - almost impossible to solve exactly):
    given: a set of cities and roads.
    question: what is the most efficient route that visits every city exactly once?
    solution: in the worst case you have to test essentially all possible routes! (The only way to shorten the search is with heuristics, e.g. if the paths a-b and c-d cross and there are connections a-c and b-d, then you take those instead, which makes the two paths shorter.)
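    To make the contrast concrete, here is a standard Dijkstra sketch in Python (toy graph, invented weights): single-pair shortest path is solved greedily in polynomial time, which is exactly why unit movement is not a traveling-salesman problem.

```python
# Dijkstra's algorithm: cheapest path cost from start to goal on a weighted
# directed graph, using a priority queue.
import heapq

def dijkstra(graph, start, goal):
    """Return the minimum cost from start to goal, or None if unreachable."""
    queue = [(0, start)]          # (cost so far, node)
    best = {start: 0}
    while queue:
        cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        if cost > best.get(node, float("inf")):
            continue              # stale queue entry, skip it
        for neighbor, weight in graph.get(node, []):
            new_cost = cost + weight
            if new_cost < best.get(neighbor, float("inf")):
                best[neighbor] = new_cost
                heapq.heappush(queue, (new_cost, neighbor))
    return None

graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
print(dijkstra(graph, "A", "D"))  # 3, via A -> B -> C -> D
```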
  • The concept of AI APIs is an excellent idea, not to mention it rhymes. A game could be distributed with the capability to author your own AI. The idea of open source is a popular thing and could do well for AI, I'm sure. People would work on them and might even develop something quite nice. Even if their creations were never used in the final product, it might lend software developers a hand with some fresh ideas.

    Who knows what the geeks of the world can come up with. I think we can all say Linux.

    As for current AI, one of my favorites was Aliens vs. Predator. I think the character development played a part as well. But the intelligence of my opponent scared me sometimes.

    --kaniff

    I need a cool quote or something.
  • That's not any astounding development in AI. That's part of the game. When you kick their ass so hard that there's no chance of them ever being able to retaliate, they change their mood to "Submissive" and are really nice to you, plus they count toward a Conquest victory without you having to completely kill them. Which I think is a great idea; I hated having to hunt down that last enemy city in Civ2...
    So don't get me wrong, I think the AI in SMAC is great. The good stuff about their AI is the way they position their forces around a city in a way that prevents a counterattack, and they know how and _when_ to use probe teams (the equivalent of Civ2's Diplomats/Spies), and how you can infiltrate their datalinks to see all their base operations so you can tell that they're not cheating. Just don't give too much credit to submission, it's just part of the game.
