State of Computer Game AI 144
irix writes "Interesting point and counter-point about how far game AI has come along." Or not, as the case may be. There are times when I'm really impressed with how well computer games can "reason" things out, like Dungeon Keeper for example, and others, like the original Starcraft, where I just shake my head. My biggest complaint is still how "not-smart" games are at simply moving characters along the most efficient path on the screen - if they could just get my movement right, I'd be so happy I'd scream.
Re:Good Enough (Score:1)
Civ's AI can be much more powerful for one strong reason in particular: it's turn-based. It *has* to be stronger, since the game can't rely on a clickfest interface to trivially make things arbitrarily tough on the human through the sheer (lack of) virtue of the interface.
Rules-based systems work well for real humans... (Score:1)
In a military setting, one may often be placed in a situation that requires one to act decisively and quickly without having a whole lot of time to think beforehand. To help counter this, the Powers That Be have, over the years, developed a number of heuristic, rules-based "drills".
You practice these drills over and over and over and over and over - and over and over and over - until they become second nature. They become (for all intents and purposes) automatic responses.
Now some of these drills are very simple - what to do when your machine gun jams, for instance - but some are very complex, and surprisingly adaptable.
Assault drills, for example, don't have to be executed by-the-book perfect, as long as a few simple tenets are adhered to. (Keep one leg on the ground; provide cover fire; when in doubt, attack; always do _something_, as doing nothing will always get you killed.)
But I've yet to see a single game of any type try and apply rules like this - not even turn-based wargames like Steel Panthers 1-3 where it would be entirely appropriate. If I play these games using the tactics I was taught, I can _always_ beat the computer, normally very badly.
Perhaps it's not that rules-based AI has reached its limits; it's that the people coding the rules aren't asking the right people to help derive them.
If anyone is coding games that require small-unit or armoured tactics, and they want some help with the rules, let me know.
DG
Some amplification (Score:1)
A couple of years ago I was about to go on an extended exercise that was to lead into (maybe) some real action, and I wanted to sharpen my edge before I started fighting for real.
So I started playing paintball.
I went in thinking that my superior soldiering skills were bound to make me well-nigh invincible compared to untrained civilians, but that was NOT the case - the "pro" players (who knew every inch of ground, all the obstacles, etc.) made quick work of me.
But then I hooked up with a fellow soldier, and we started working together using the fire team tactics we were taught.
It's very simple: if either one of us saw anyone - _anyone_ - come into range, we'd both fire at him to get him to take cover. Then, one man keeps firing (to keep his head down) while the other bounds forward a few steps. He starts firing, the other man leapfrogs, and you keep moving like this until one of you hits the target.
All this happens _very_ quickly - speed and violence is the key.
Once we started doing this, we totally cleaned house. We could attack groups much larger than ourselves and "kill" them all in a matter of seconds. Even the seasoned Pros (and there is a pro paintball circuit) couldn't stop us.
I was _very_ surprised at how well it worked. I had always considered frontal infantry assaults to be organized suicide, but this proved me dead wrong.
The rules involved with this tactic are very simple, and should be relatively easy to program into a Quake-like game. I wonder why it hasn't happened yet. Oh iD-boys, are you out there?
DG
"One foot on the ground" (Score:1)
In other words, never attack alone. Instead, link up with a partner and take turns advancing, with the non-moving partner covering the points most likely to house someone who could shoot at either of you - and if you've already made contact, "cover" means "shoot at it", even if you don't see anybody there.
In a Quake scenario, you typically have a room with 2 entrances and one exit - the game advances by moving through the exit. The player pops up in one of the entrances, has a quick look, and then may or may not double around to the other entrance if it provides a better position.
If this room were guarded by 3 bots, a "one foot on the ground" AI would have one bot immediately lay down fire into the entrance - even if it's not effective, it discourages the player from re-entering (especially if it's a rocket launcher being used for cover fire). Another bot moves to cover the other entrance in case the player doubles around, and the third bot moves through the entrance to chase the player. If it doesn't see the player within a short distance, it starts laying down cover fire at likely hiding spots, and the first bot moves down the hallway and past the third. They keep doing this until either the player is found and killed, or they reach some predefined radius of action.
The player is now in a hot spot. If he doubles around, there is a bot in position, on guard, waiting for him, and if he stayed put, there are _two_ bots working together to flush him out.
To be even more realistic, this group of 3 bots now provides the "foot on the ground" for a group of 4 other bots "upstream". The 3 who made contact call for help, and the next 4 come charging into the room and through the exit that the player did NOT take - and they move in twos as well - two watching, two moving, and leap-frogging, working their way towards the path that leads to the first entrance. The two who went out the first exit fall back to their room, but cover the entrance more actively.
Now the player is REALLY in trouble. The room is secured against him, and there's a pack of co-ordinated killers looking to kill his ass.
Against AI like this, a single-person first-person shooter player has to either have vastly superior weaponry, or be very smart about how rooms are assaulted. And yet, the AI is all rules-based.
Incidentally, "one foot on the ground" scales. Individual soldiers use it to move across the battlefield, platoons use it, companies, battalions, regiments - all the way up to armies. One unit provides support while the other unit moves.
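If it helps, here's roughly what I mean in code form. This is just a toy sketch I knocked together (none of the names come from any real engine): one bot suppresses the last known enemy position while the other bounds forward, then they swap.

/* Toy sketch of "one foot on the ground": cover + bound, swap each tick. */
#include <stdio.h>
#include <math.h>

typedef struct { float x, y; int covering; } Bot;

static void fire_at(Bot *b, float tx, float ty)
{
    printf("bot at (%.1f,%.1f) fires toward (%.1f,%.1f)\n", b->x, b->y, tx, ty);
}

static void move_toward(Bot *b, float tx, float ty, float step)
{
    float dx = tx - b->x, dy = ty - b->y;
    float d = sqrtf(dx * dx + dy * dy);
    if (d > 0.0f) {
        if (step > d) step = d;
        b->x += dx / d * step;
        b->y += dy / d * step;
    }
}

/* One tick of the drill: one bot covers, the other bounds, then roles swap. */
static void bound_and_cover(Bot *a, Bot *b, float tx, float ty)
{
    Bot *cover = a->covering ? a : b;
    Bot *mover = a->covering ? b : a;
    fire_at(cover, tx, ty);              /* suppressive fire, aimed or not */
    move_toward(mover, tx, ty, 5.0f);    /* short bound toward the contact */
    a->covering = !a->covering;          /* leapfrog: swap roles next tick */
    b->covering = !b->covering;
}

int main(void)
{
    Bot a = { 0, 0, 1 }, b = { 2, 0, 0 };
    float tx = 30, ty = 0;               /* last known enemy position */
    for (int i = 0; i < 6; i++)
        bound_and_cover(&a, &b, tx, ty);
    return 0;
}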
I'll tell you what, bot-boy. I'll put me, my 10 years in the Army, and three of my friends up against you and any 4 of your 3l1t3 c0d3rz friends, and we'll see who's talking nonsense.
You'd be corpses so fast you wouldn't even have time to call for Mommy.
Re:Umm... Grammer. (Score:2)
The problem with the computers in Starcraft were
Re:AI limitations (Score:2)
But Go has so many options/states that trying to set up a decision tree, similar to what the previous poster described for Chess or Checkers, would be completely impossible; fairly soon into the game the computer would be attempting to calculate several trillion possibilities. I remember reading somewhere that if Deep Blue were programmed to play Go using the same heuristics, it would take a couple of years to compute its moves.
However, there is still interest in making Go programs with an AI. This is an incredibly interesting field, as there are international Go tournaments devoted entirely to AIs, to see which is the best. Yet you take the best Go AI in the world, and test it against an average Go player. The AI will likely win or play very even for the first couple games. Then the human will see the patterns used by the AI (seeing patterns is very very useful in Go), start beating the AI soundly, and eventually be able to beat the AI even with very heavy stone disadvantages (somewhat like giving someone a piece advantage in chess).
One of the more interesting methods I've seen for trying to build a general Go AI was to create a graph data structure, then use strongly connected components, moving up several levels of abstraction, to create an idea of "areas of influence", and then use a plethora of functions (known as "generals") to analyze the situation and suggest the best move.
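I don't know the internals of that particular program, but the very first step - grouping stones into connected blocks before you reason about influence - is easy enough to sketch with a flood fill. Toy code, purely illustrative:

/* Toy illustration only (not the program described above): label each
 * connected block of same-coloured stones on a small Go board.        */
#include <stdio.h>

#define N 5
static const char board[N][N + 1] = {   /* '.' empty, 'B' black, 'W' white */
    ".BB..",
    ".B.W.",
    "..WW.",
    ".....",
    "..B..",
};
static int group[N][N];                 /* 0 = unlabelled */

static void flood(int r, int c, char colour, int id)
{
    if (r < 0 || r >= N || c < 0 || c >= N) return;
    if (board[r][c] != colour || group[r][c] != 0) return;
    group[r][c] = id;
    flood(r + 1, c, colour, id);
    flood(r - 1, c, colour, id);
    flood(r, c + 1, colour, id);
    flood(r, c - 1, colour, id);
}

int main(void)
{
    int next_id = 1;
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            if (board[r][c] != '.' && group[r][c] == 0)
                flood(r, c, board[r][c], next_id++);

    for (int r = 0; r < N; r++) {
        for (int c = 0; c < N; c++)
            printf("%d ", group[r][c]);
        printf("\n");
    }
    return 0;
}

Areas-of-influence analysis would then be built on top of these labelled groups.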
Obviously an AI cannot be created that will be able to defeat a human on equal standing, at least I don't see that happening in my lifetime. Computer games will continue to have to rely on creating a situation where the human is at a disadvantage to the AI in everything except for decision-making ability.
Re:AI API's needed (Score:2)
I think we need to wait a generation or two before computer game AIs start becoming really clever. Even longer if game developers focus on adding new units instead of the AI (as they have historically done). Each unit you add adds quite a bit of complexity to the AI.
Re:AI in Starcraft (not true) (Score:2)
Of course the AI is still weak, but at least it doesn't cheat on resources, although it does have full knowledge of the map which is very annoying.
Re:Mara7hon (Score:1)
Unreal most realistic yet (Score:2)
Re:Umm... Grammer. (Score:1)
Isn't that grammar?
LK
Re:Half-Life (Score:1)
One of my favorite scenes in Half-Life:
See two soldiers standing around a corner, talking to each other. Equip crossbow.
*thwack* Bolt in the back of the first one's head.
The second soldier keeps on walking and talking, as if his buddy were still there.
---
Donald Roeber
Re:AI in Starcraft (Score:1)
My friend and I can take on and whup six Starcraft Brood War AIs... with an 8-player max, I'd say I'm better than the computer AI.
Re:efficient AI's (Score:1)
Re:AI API's needed (Score:1)
Marathon (Score:3)
One of the features that it boasted (and that I believe I experienced) was an adaptable AI, where, as you continue to play, the behavior of the aliens varies to match your style. For example, if you liked to hide around a corner, then peek into a room to shoot the aliens, then go back, you'd find the aliens later in the game would be more aggressive about charging you. If you were more aggressive, and charged into a horde, you'd be faced later with more long-range attacks and mobile aliens. (This was better implemented in the final 2 games of the series.)
Again, there's still a rule base here for the AI to develop from, but it was a refreshing change from Doom/Quake (and of late, Unreal), where monster behavior was constant through the game and made the latter parts boring. Half-Life, as mentioned above, still suffers from this somewhat, but it's partially masked by the numerous types of terrain/locale the player experiences before the game is over, so it becomes hard to tell whether the monsters are behaving the same throughout or are responding to changes in the environment.
(BTW, the folks that made Marathon, Bungie, have continued to pump out games, including the popular Myth (and Myth 2, which reportedly has an excellent AI), and the soon-to-be-delivered Oni, another FPS according to leaked info.)
Also, another aspect that doesn't seem to have been addressed here is how well bots for Quake or Half-Life or Unreal have been programmed. I've found the learning potential of several Quake bots to be outstanding, although it only lasts for that single DM play. Surprisingly, these are mostly written by third-party players, and not the game companies themselves. Maybe they ought to have a chat and improve the AIs in current games....?
Interesting General AI Fact (Score:1)
I would have liked to have seen that, as well as Doom's v1.1 networked support for three views. If you had a three computer network, you could place a monitor to the left and right and get a very wide view.
Re:Genetic AI (Score:1)
While something like this can't be expected to work in realtime on every home computer, it could be calculated on the developer's machine, and a memory (brain) dump of the resulting, evolved AI could be used. The pre-rendered aspect comes into play here.
The datafile of the AI that would be passed along to the home user would be unreadable, perhaps, even to the developers.
rtfm.mit.edu [mit.edu] contains some very good FAQs from Usenet on Genetic AI.
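For the curious, the offline-evolution loop itself doesn't have to be fancy. Here's a toy sketch (completely made up, not from any game): evolve a small parameter vector against a stand-in fitness function; the winning numbers are the "brain dump" you'd ship.

/* Toy elitist GA: mutate copies of the best individual each generation.   */
/* fitness() stands in for "play some matches and count kills".            */
#include <stdio.h>
#include <stdlib.h>

#define POP    20
#define GENES  4
#define GENS   200

static double fitness(const double g[GENES])
{
    static const double target[GENES] = { 0.2, 0.9, 0.5, 0.1 };
    double score = 0;
    for (int i = 0; i < GENES; i++)
        score -= (g[i] - target[i]) * (g[i] - target[i]);
    return score;
}

static double frand(void) { return (double)rand() / RAND_MAX; }

int main(void)
{
    double pop[POP][GENES];
    for (int p = 0; p < POP; p++)
        for (int i = 0; i < GENES; i++)
            pop[p][i] = frand();

    for (int gen = 0; gen < GENS; gen++) {
        int best = 0;                                  /* find the best individual */
        for (int p = 1; p < POP; p++)
            if (fitness(pop[p]) > fitness(pop[best])) best = p;
        for (int p = 0; p < POP; p++) {                /* everyone else mutates a copy of it */
            if (p == best) continue;
            for (int i = 0; i < GENES; i++)
                pop[p][i] = pop[best][i] + (frand() - 0.5) * 0.1;
        }
    }

    int best = 0;
    for (int p = 1; p < POP; p++)
        if (fitness(pop[p]) > fitness(pop[best])) best = p;
    printf("evolved brain: %.2f %.2f %.2f %.2f\n",
           pop[best][0], pop[best][1], pop[best][2], pop[best][3]);
    return 0;
}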
Somewhat Pointless Counterpoint and commendable AI (Score:2)
This is where fuzzy logic lends itself best. However, there are some very commendable variants in games that I've seen in my theoretical reverse engineering.
Most action games use modes for the AI. If an entity hits a wall, he goes into Finding New Direction Mode. If the entity is in a direct line with the player, he will advance.
Every mode repeats each game tic, with exceptions - the rules that make the entity leave its current mode.
Half-Life provided an interesting variant with the marine human AI. When they switched modes, the entity played an audio cue. "Hit the deck!" meant they were in grenade-lobbing mode. "Establishing recon" meant they were in walk-around mode.
Trespasser introduced something that I've wanted to implement for a long time: instinctual probabilities. Instead of always switching to certain modes, give the entity a few options to go to at any given time. The probability of each could be pre-defined, or dynamic. For example, the probability of a dinosaur attacking you could go far down if he had just eaten. The hunger factor would be at 5%, where the thirst factor would be at 70%. He would most likely travel in a fuzzy-direct route to the nearest water supply (that he knows about).
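A sketch of how I'd imagine the probability-weighted mode pick working (my own toy code, certainly not Trespasser's): each tick the creature rolls against weights that depend on its current drives.

/* Toy weighted mode selection: hunger low after eating, thirst high. */
#include <stdio.h>
#include <stdlib.h>

enum Mode { MODE_ATTACK, MODE_SEEK_FOOD, MODE_SEEK_WATER, MODE_WANDER, MODE_COUNT };
static const char *mode_name[MODE_COUNT] = { "attack", "seek food", "seek water", "wander" };

static enum Mode pick_mode(const double weight[MODE_COUNT])
{
    double total = 0, r;
    for (int m = 0; m < MODE_COUNT; m++) total += weight[m];
    r = (double)rand() / RAND_MAX * total;      /* weighted coin flip */
    for (int m = 0; m < MODE_COUNT; m++) {
        if (r < weight[m]) return (enum Mode)m;
        r -= weight[m];
    }
    return MODE_WANDER;                         /* fallback on rounding edge cases */
}

int main(void)
{
    /* just ate, so hunger is at 5% and thirst at 70%, as in the example above */
    double weight[MODE_COUNT] = { 0.15, 0.05, 0.70, 0.10 };
    for (int i = 0; i < 5; i++)
        printf("tick %d: %s\n", i, mode_name[pick_mode(weight)]);
    return 0;
}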
In 3D shooters, what always impressed me was what enemies did after you drop down dead. In Jedi Knight, some of them did a dance. In Half Life, the humans reported and walked away. I thought it added a touch of realism when the tense monster started walking around casually. Savage mutilations after death with lots of bellowing are always immersive.
The key to the illusion of making a computer look like it has more cycles is to pre-render as much as possible. This could work with mode probabilities. If a player is exceptionally good at taking out a behemoth tank with mines and finishing it off with an air strike, for example, the AI could opt to go into "defense from behemoth tank with mines and air strike" mode. I have not seen this variant on modes in games yet.
Game AI has to be designed to live on the same CPU as a graphical rendering engine. For this reason, I don't see AI taking entirely new directions, such as Genetic AI being introduced as a replacement for fuzzy logic. Rather, I see game AI getting some welcome variants on the modes, like those I described above.
Re:Umm... Nice flame. (Score:1)
Re:Umm... Flamebait (Score:1)
Funny thing is, I'm about to get a degree, and he's not even close. Guess it's time to stop playing games...
Incremental ANN Learning (Score:1)
Each was controlled by a little neural network. They had inputs such as their health, energy, what's in the eight squares around them, etc.
They would make their decision and do it. Then another function would evaluate what they did and determine if it was "good." For example, if they lost health, that was "bad."
Then I'd do a little weight update on the ANN to reflect that action. This made for incremental learning.
For example, if the agent was on food (input) and ate (output), that increased its energy (good) so that was slightly reinforced for next time.
I never really got to play with it much, so I'm not surprised they really didn't seem to get better over time. I think they died too early to really learn. My simulation had flaws. I know a lot more now and am sure I could pull it off, given time.
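For anyone curious, the update I'm describing is roughly this (toy sketch, simplified down to a single layer; the real thing had a hidden layer and more inputs):

/* Tiny incremental update: score actions from inputs, then nudge the
 * weights of the action actually taken up or down based on the reward. */
#include <stdio.h>

#define INPUTS  3
#define ACTIONS 2   /* 0 = eat, 1 = move */

static double w[ACTIONS][INPUTS];       /* the agent's "brain" */

static int choose_action(const double in[INPUTS])
{
    int best = 0;
    double best_score = -1e9;
    for (int a = 0; a < ACTIONS; a++) {
        double s = 0;
        for (int i = 0; i < INPUTS; i++) s += w[a][i] * in[i];
        if (s > best_score) { best_score = s; best = a; }
    }
    return best;
}

static void reinforce(int action, const double in[INPUTS], double reward)
{
    const double rate = 0.1;
    for (int i = 0; i < INPUTS; i++)
        w[action][i] += rate * reward * in[i];   /* good -> do it more, bad -> less */
}

int main(void)
{
    double in[INPUTS] = { 0.5, 0.2, 1.0 };       /* health, energy, "standing on food" */
    int a = choose_action(in);
    double reward = (a == 0) ? +1.0 : -0.2;      /* eating while on food raised energy: good */
    reinforce(a, in, reward);
    printf("took action %d, reward %.1f\n", a, reward);
    return 0;
}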
I may use methods like that in my RTS game (see URL).
Developer's Thoughts on AI (Score:1)
In that respect, pathfinding is a form of AI because it does involve choosing an "intelligent" path from one point to another. Yet obviously a computer opponent that can outthink you and learn from your actions exhibits even more "intelligence".
Second, I don't necessarily have a problem with AI "cheating", if it is done subtly enough. The human player obviously has many advantages compared to the computer player. I'm prepared to balance that out a little if need be. In particular, it may be that AI methods that don't cheat are ideal, but are sacrificed because we don't have the computational power (CPU etc.) to execute them. In the end, the question is whether the game play experience is enjoyable or not. If the AI had to "cheat" a little behind the scenes to achieve that, I'm okay, so long as it isn't obvious.
Next, you have to realize that although AI methods have had some success, often it is in specialized areas, such as expert systems. The problem of building a general AI opponent is a little more like a Turing test than identifying a tumour.
Typically classical AI methods are rule based and procedural. The tendency in the last decade or two has been toward emergent behaviours, under the umbrella of artificial life techniques.
There are a host of techniques the developer can employ to do AI, but some are hard to implement (or implement well). Some require tweaking to achieve good results. Typically memory is required to store state, so that intelligent decisions can be made.
In my game, I am hoping to finish my unit grouping and action classes soon. After that, I'll either work on networking or more complex actions. I intend to build a goal-based action structure for my computer opponents. For example, a high level goal might be "occupy quadrant 4" with subgoals of "eliminate enemy base 3" and "fortify ridge in sector 7".
There's nothing new there, that sort of stuff has been written about since Minsky and friends. But obviously the implementation leaves a lot of room for creativity, because I haven't seen it pulled off very well.
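To make the goal/subgoal idea concrete, here's a rough sketch of the data structure I have in mind (names invented for illustration, not my actual code): a goal holds subgoals, and the planner works on the first unfinished leaf.

/* Hypothetical goal tree: a goal is satisfied once all its subgoals are. */
#include <stdio.h>

#define MAX_SUBGOALS 4

typedef struct Goal {
    const char  *name;
    int          done;
    int          n_sub;
    struct Goal *sub[MAX_SUBGOALS];
} Goal;

/* Depth-first: return the first unfinished leaf goal to work on next. */
static Goal *next_task(Goal *g)
{
    if (g->done) return NULL;
    for (int i = 0; i < g->n_sub; i++) {
        Goal *t = next_task(g->sub[i]);
        if (t) return t;
    }
    return g->n_sub ? NULL : g;   /* only leaves are directly actionable */
}

int main(void)
{
    Goal base3  = { "eliminate enemy base 3", 0, 0, {0} };
    Goal ridge7 = { "fortify ridge in sector 7", 0, 0, {0} };
    Goal quad4  = { "occupy quadrant 4", 0, 2, { &base3, &ridge7 } };

    Goal *task = next_task(&quad4);
    if (task) printf("current task: %s\n", task->name);
    return 0;
}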
Impressive AI. (Score:1)
troop movement isn't ai (Score:1)
Re:troop movement is AI (Score:1)
Re:PAC MAN had good AI (Score:1)
Even good AI's have their weaknesses.
Civ II is the best argument for open source yet (Score:2)
Not only are the AI's so dumb that they have to cheat in order to keep up with a human player, but even the unit movement intelligence is horrendous. I've had times when I've directed a unit to move from point A to point B which is ten squares down a straight road, and what happens? The unit's first move is to hop OFF the road onto a mountain square, or instead it gets locked in a step-forward-step-backward loop until its movement points are exhausted. Give me a weekend and I could code a pathseeker that could literally run circles around the one in Civ II.
If you're dumb enough to hand control of one of your cities to an AI advisor, you'll later find out that the computer has built lots of useless improvements there which are sucking up your income in maintenance costs.
Another example of a bad AI pathfinder was the one in Myth I. If you told your journeyman to heal an archer standing right next to him at one end of a line of archers, your journeyman would stroll in front of the archers to the OTHER end of the line, getting hit by their arrows en route, then he would turn around and come BACK in front of your archer line again, and only then would he heal the original target. Kudos to Bungie for fixing this in Myth II.
I'm glad to see discussion about bad AI's here; it gives me a chance to vent about games whose AI's have bugged me for years.
FreeCiv's AI rocks (Score:1)
Good Enough (Score:2)
Having not tried Civ yet, I am not up on some of the latest technologies. But I do look forward to AOE II (years behind schedule). The overall look/feel/gameplay of AOE I was awesome.
PS: 'Daleks' had good AI and that was in the 80's.
Re:Some amplification (Score:1)
A good place would be the Linux Game [linuxgames.com]'s site. If they won't host it, I will. This is the kind of "real" info that makes games and such very useful. Like the how to do a real autopsy page (vs. X-files' version).
TA's AI (Score:1)
It IS customizable, though not to the level you'd hope for.
You're allowed to alter probabilities and maximum unit counts, so you can tell it to not build so damn many factories, and start churning out light tanks (for example).
TA's AI has no idea of WHERE to build things. If you let it build mines (like claymore, not coal), it might build a nuclear mine right in the middle of its [densely packed] base.
Don't get me wrong: I think it's great that Cavedog allowed the level of modification that it did, and I love the TA games (Kingdoms just came out this week). Their AI just isn't that impressive.
Re:AI API's needed (Score:1)
Re:You really believe... (Score:1)
In a sense this is true, however computers are capable of doing things that their programmers cannot fully comprehend or expect...imagine the first person to plot out a Mandelbrot Set.
I worked in the connectionist AI field for a while and though I think it helped us think about the brain in new ways, very few real applications have come from neural nets. Expert systems are still the most used AI-type application, but it is tough to really call them AI.
Our biggest problem with AI is that the brain has had billions of years of evolution, and is much larger and more complex than any neural net we can design today. However, I do believe that it is within our grasp now to artificially evolve neural nets that can mimic ants, spiders, or worms. Getting to vertebrates will take a while.
Re:Unreal most realistic yet (Score:1)
Of course that might just point to how bad a FPS player I am, but the realistic feeling is still there.
Russell Ahrens
AI API's needed (Score:3)
If you've played Civ2 perhaps you can sympathise with the brain-dead behaviour of "Automate settler", which irrigates everything including the forests you were counting on for shield production.
AI API's would also make way for AI contests. I'd love to see a few more of those.
Gripe #2 on the current state of game AI: cheating! Game programmers know that at some point the human learns and the computer has stayed the same, so to up the difficulty the computer has to cheat. Civ again: building units becomes cheaper for the AI than for the human, and the subjects ruled become more agreeable and easier to please, requiring less expenditure and freeing up effort for smashing the human.
As said in the counter-point, rule-based AI is a dead end - unless the computer can generate new rules and learn. Learning must be the next step for game AI, and I wish (as per gripe #1) that I could write some for some of the games I like.
You too? (very mini spoiler for hl) (Score:1)
Re:Marathon (Score:1)
Good god man. Do you know how long I was stuck in that stupid maze!
Re:Umm... Grammer. (Score:2)
This is PRECISELY why I hate, despise, revile almost every single real-time "strategy" game on the market, and for that matter, the whole genre. I can think reasonably fast, but I can't translate that into precise mouse clicking over a narrow little window to micro-manage suicidally stupid units that don't understand their general orders, just the one I immediately give them. I want a unit (meaning a GROUP, one that reinforces itself) to defend an area or scout - general orders - not "shoot at this bunch, now move here, and oh wait, something across the map, just don't do a damn thing else while I give orders to that bunch, okay, now shoot here and move there".
One game I particularly hated was Age of Empires. My border guards constantly just LET THE ENEMY THROUGH, and although the game mentioned a peaceful way to win, I couldn't imagine it, since diplomacy consisted of "send an army to kill his peasants". Carrying on the stupid juggling act, I found that farms degrade and disappear unless you manually "repair them" (Aunt Mae, quick, call the repairman, the field's blowing away!). Then to add insult to injury, there's not even a PAUSE function, not even one that disables the interface while it's paused. No bathroom breaks, no wrist relaxers. Ridiculous. Worst RTS ever.
Populous 3 is an RTS I actually like. It's still a juggling act, but it feels like part of the puzzle that every level is. You have the one shaman unit, and a mindless mob that you can direct to cause havoc in general areas. The delayed order function was great for creating diversions once you had your plan laid out. It doesn't even have the pretense of having an AI, but it does model a rampaging mob very well.
AI Ain't All It's Cracked Up To Be (Score:1)
Just look at Ultima Online. You can tell the computer players from the real players by the overwhelming aura of munchkinness.
(yes, that's tongue in cheek, but there are plenty of mediocre humans that a good If...Then...Else bot can eat for dinner).
Now how do you set up a neural net to make the Giant Squad of Killer Orcs (not the ones who just updated my Slashdot page) smarter as they watch my character fight them?
--
QDMerge -- generate documents automatically.
Loose Term (Score:1)
These days, though, any expert system or rules based engine exhibiting any kind of automated behavior is labelled "AI," even though (as in Warcraft or Starcraft) the behavior quickly demonstrates predictable and repeated weaknesses and a disappointing lack of adaptability.
At last there are a few games (half-life) where the term AI might apply, but they're still just adaptable expert systems. If the market weren't so oriented towards hi-res 3D realistic graphics, perhaps we'd have seen a decent AI earlier.
I'm not annoyed by the use of the term AI by game developers - that's the role their behavior engines are trying to fill. These engines just don't live up to the name. Yet.
Re:Loose Term (Score:1)
I've written several AI systems, though not in the context of games. A set of finite rules is really insufficient. Once the human learns the rules, they will always win. What is needed in AI is for the machine to ascertain the behavior of the human in turn and adapt accordingly.
This isn't AI (Score:1)
AI is (by my definition) learning: making its own general rules from specific cases. The computer has to make observations of the real world (say, the real world would be the game), set up a hypothesis and make a formal description (a theory).
The next step is the interpretation of the theory. To be able to set up a model (set-theoretical model), and to use this for predictions. It must also be able to change the theories if a wrong induction was made.
If a computer can do this, it is AI (and this is, with today's mathematics, impossible). Having "fuzzy logic" or "neural nets" does not make AI; it simply makes the computer harder to predict, and might make the computer more flexible against other tactics/strategies. But it certainly does not give the computer AI.
For further information, please read "Computerized Agents from a Linguistic Perspective" by Bertil Ekdahl (my teacher). Or reply to this
Re:Loose Term (Score:1)
What they are talking about is not AI; the term AI is just used as a word for the computer's flexibility in tactics and strategies, and several other things. This is only a small comment, but it is confusions like these that give game developers, scientists and so on a bad picture of what is possible and what is impossible.
AI is impossible with today's mathematics. But with all the confusion and all the buzz-words, people think AI can be made, or that it is already here.
Think about it for one more second: if a computer had AI, it would start reasoning about the world - why it is here, what it should do - making observations of the world and reacting to them. Even if the computer's "world" is just the game.
What if the AI in a game figured out that war is a bad thing, and it became peaceful, trying to reason with you to stop the war.
It would be chaotic
Re:Loose Term (Score:1)
You think I'm joking? I'm an AI researcher, with 4 years neural network experience under my belt. I mean REAL AI too.
Interesting piece of work, I would like to know how you define AI. And for that matter, how you define real AI. I could do this by email, but I could not find yours
Re:Both views are the same (Score:1)
you're right, but genetic programming is precisely what game programmers are usually doing - they use (or plan on using) some sort of evolutionary technique to breed a 'perfect' behavior algorithm (which probably means breeding a perfect finite state machine to control the creature).
frankly, most of computer game ai is pretty simple - it's mostly finite state machines for behavior control, sometimes augmented with 'fuzzy' (i.e. nondeterministic) state progression. there are some attempts at doing more than that - afaik, crash bandicoot and jedi knight both use subsumption for behavior control - but those are few and far between.
btw, for some papers on future directions and game ai research, i'd check out the web page from the 1999 aaai symposium on computer games and artificial intelligence [nwu.edu].
another Starcraft problem (Score:2)
Poor pathing for large ground units, especially when moving with groups of smaller units. Instead of waiting for the smaller units to move (which would make sense, since the small units accelerate quickly and move out of the way quickly), the large units will slowly come to a halt and then slowly accelerate in a different direction. By that time the small units have moved out of the way, so again we slow to a halt, change course, and accelerate. Part of the problem is the accel/decel times for large units (which shouldn't be changed), but things would work a lot better if they would hesitate slightly longer to see if their way will be cleared.
Re:troop movement is AI (Score:1)
Re:Unreal most realistic yet (Score:1)
This actually happens in two scenarios in Unreal. In the first, you come across a "dead" Skaarj which hops to its feet and attacks once you get close enough. In the other, the Skaarj drops when you shoot it and plays dead for a few seconds. That scared the living hell out of me.
Even better, though, was the way you could blow the legs off the Krall. I'll never forget when the lot of them ambushed me, so I turned and leveled one, only to find him pulling himself along with his hands after me. Wow.
Question about neural net learning methods (Score:2)
I know that the basic neural net works with three layers of neurons: an Input layer, with signals fed into it, a middle layer with a bunch of connections, and an output layer. To train it, you put input signals on one side, and "correct" output on the other, and let the middle layer weight the connections appropriately to "learn" the pattern it is being taught.
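Just so it's clear what I picture (and this may be a shaky picture), here's a tiny forward pass through a 2-input, 2-hidden, 1-output net; training, e.g. backprop, would be the part that adjusts the weights toward the "correct" output, and I've left that out:

/* Toy 3-layer forward pass with sigmoid activations; weights are arbitrary. */
#include <stdio.h>
#include <math.h>

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

int main(void)
{
    double in[2] = { 1.0, 0.0 };

    /* arbitrary weights; a trained net would have learned these */
    double w_ih[2][2] = { { 0.5, -0.3 }, { 0.8, 0.2 } };  /* input -> hidden */
    double w_ho[2]    = { 1.2, -0.7 };                    /* hidden -> output */

    double hidden[2], out = 0.0;
    for (int h = 0; h < 2; h++) {
        double sum = 0.0;
        for (int i = 0; i < 2; i++) sum += in[i] * w_ih[i][h];
        hidden[h] = sigmoid(sum);
    }
    for (int h = 0; h < 2; h++) out += hidden[h] * w_ho[h];
    out = sigmoid(out);

    printf("network output: %f\n", out);
    return 0;
}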
What I am wondering is: Are there other arrangements/learning methods for neural nets other than that one? Could a neural net be taught to play a game by "noticing" somehow which things it tries work and which don't? Instead of a bunch of one-shot examples, could it operate in a continuous manner, with signals constantly being fed in and out, and connections constantly changing? I suppose the basic way humans attach "right" or "wrong" to events is by signals of pleasure or pain, with the network always trying to get more pleasure signals and less pain. I see I've just gone way off on a tangent here. I'll post anyway because I would really like an explanation of how much people now know about neural nets from someone studying such things.
Anyone?
Re:Unreal most realistic yet (Score:1)
Oh, and 2 Skaarj warriors were enough to overwhelm me occasionally, if they caught me by surprise.
Tim
Neural net bots rule Backgammon (Score:1)
Neural net programs are now indispensable for game analysis and teaching. Even the standard 'book' opening moves have been rewritten since they came on the scene.
It's really humbling to realize that my ass is being kicked not because the computer is out-number-crunching me, but because the neural nets encode a much deeper, more fundamental understanding of the strategy of the game than I'll ever have.
Half-Life, all the way! (Score:2)
New Game AI is Vapourware and Marketing hype. (Score:2)
Intel keep hyping AI as the "next big thing" because more and more of the hard stuff is being taken off the processor by custom hardware - 3D cards, audio streaming and compression, etc. They want to sell their kick-butt processors so they hype the expensive stuff - currently that's Physical Simulation and "Advanced AIs", even if nobody has done physics well (it's a *very* hard problem and all the demos you'll see are cheap hacks and "special cases") and "Advanced AIs" are pure vapourware.
Having been in the business for a fair few years, I maintain that nothing significant has been invented in gaming AI for ages. Sure, there have been learning systems, but they have been reserved for niche "simulators" and "virtual pets". Mainstream games continue to plod along with rule-based AIs as always.
The only thing that has changed is that we can calculate more and more complex metrics for our rule based AIs - instead of proximity, we can do line-of-sight, instead of closest enemy we can now do spatial searches for the mean positions of groups of nasties.
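(For what it's worth, the "mean position of a group of nasties" metric is just a centroid; the rule-based AI then aims or paths relative to that point. A trivial toy sketch, not our actual code:)

/* Centroid of a group of enemy positions. */
#include <stdio.h>

typedef struct { float x, y; } Vec2;

static Vec2 group_centroid(const Vec2 *units, int n)
{
    Vec2 c = { 0, 0 };
    for (int i = 0; i < n; i++) { c.x += units[i].x; c.y += units[i].y; }
    if (n > 0) { c.x /= n; c.y /= n; }
    return c;
}

int main(void)
{
    Vec2 nasties[3] = { { 10, 4 }, { 12, 6 }, { 14, 5 } };
    Vec2 c = group_centroid(nasties, 3);
    printf("aim for the middle of the pack: (%.1f, %.1f)\n", c.x, c.y);
    return 0;
}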
Why will it always be thus? Because debugging a neural network is impossible (you just have to re-train it with a different working set) and having predictable AIs is key to shipping a game within your launch window. Having an out of control AI where nobody can pinpoint the bugs will get your project canned.
Certainly Dungeon Keeper 2 and Theme Park 2 use simple AIs - the only difference is that we can now run them on huge crowds, use flocking algorithms and have more interacting states to provide emergent behaviours. It's still all rule based though.
- Robin Green, Bullfrog Productions.
Re:AI API's needed (Score:2)
AI in Starcraft (Score:1)
Both views are the same (Score:1)
And both are wrong.
First (pedantic), `alife' is the _wrong_ term to use for modular/threaded programming techniques (Brooks' subsumption architecture etc.). The former refers to virus simulation/evolution, the latter being a methodology inspired by the modular nature of the brain.
Second, if games programmers can do "cheap tricks" instead of coding some modular, threaded architecture to control the actions of assailants, then good for them. Academics in AI make dodgy hybrid systems all the time and still claim to be high and mighty. (Brooks and the COG project)
The bottom line is: who cares how it's implemented? If your game behaves realistically then no-one can slate you for your methods. Neural nets and threads are v. expensive. if... else statements are cheap.
troop movement is AI (Score:1)
Finding paths between places is AI. Deterministic algorithms can't deal with unexpected things such as an enemy suddenly appearing on the path, or the route between two points being unknown (due to fog of war or something like that).
Re:You really believe... Actually yes (Score:1)
Computers can play a perfect game of tic-tac-toe and come damn close in checkers, as someone has pointed out. Bayesian networks have been able to diagnose certain diseases as well as or better than pathologists (check out the literature on PATHFINDER I-IV, MUNIN, etc.). And I'm sure Chessmaster 5000 or gnuchess can beat the average person in chess.
You're comparing the abilities of a computer player to those of the best human in the world at chess. That's not really a fair comparison, since the average person is nowhere near that good.
However, the best AIs can't deal with mundane stuff that 99% of the population can handle, such as recognizing faces or understanding language, so AIs still have a really long way to go.
Re:Impressive AI. (Score:1)
Move towards player, slightly to left.
When you can't move, move slightly right.
Alternate moves every time you get stuck.
Fire when you see the player and enough time has passed since the last shot.
Amazing, really, how well such simple rules can work.
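Those rules translate almost line-for-line into code. A toy sketch (the helpers are stubs I made up, not from any real game):

/* The four movement/fire rules above, more or less literally. */
#include <stdio.h>

#define FIRE_COOLDOWN 30        /* ticks between shots */

typedef struct { int veer_left; int last_fire_tick; } Monster;

/* stubs so the sketch is self-contained */
static int can_move(int veer_left)   { (void)veer_left; return 1; }
static int sees_player(void)         { return 1; }
static void step_toward_player(int veer_left) { printf("step %s\n", veer_left ? "left" : "right"); }
static void fire(void)               { printf("fire!\n"); }

static void monster_tick(Monster *m, int tick)
{
    if (can_move(m->veer_left)) {
        step_toward_player(m->veer_left);        /* rule 1: advance, veering a bit */
    } else {
        m->veer_left = !m->veer_left;            /* rules 2-3: stuck, so flip sides */
    }
    if (sees_player() && tick - m->last_fire_tick >= FIRE_COOLDOWN) {
        fire();                                  /* rule 4: fire on a cooldown */
        m->last_fire_tick = tick;
    }
}

int main(void)
{
    Monster m = { 1, -FIRE_COOLDOWN };
    for (int t = 0; t < 3; t++) monster_tick(&m, t);
    return 0;
}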
Re:efficient AI's (Score:1)
Individual units need movement; this involves pathfinding and the ability to see the enemy. The "star" of pathfinding is called A*; you should be able to find it on the web.
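If you want the flavour of it, here's a bare-bones toy A* on a tiny grid (linear scan instead of a proper priority queue, and no path reconstruction; real games do this far more efficiently):

/* Minimal grid A* with a Manhattan-distance heuristic. */
#include <stdio.h>
#include <stdlib.h>

#define W 8
#define H 5
static const char map[H][W + 1] = {   /* '#' = wall, '.' = open */
    "........",
    ".######.",
    ".#....#.",
    ".#.##...",
    "...#....",
};

static int heuristic(int x, int y, int gx, int gy)
{
    return abs(gx - x) + abs(gy - y);
}

int main(void)
{
    int g[H][W], closed[H][W], open[H][W];
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) { g[y][x] = 1 << 20; closed[y][x] = open[y][x] = 0; }

    int sx = 0, sy = 0, gx = 7, gy = 4;
    g[sy][sx] = 0; open[sy][sx] = 1;

    for (;;) {
        /* pick the open node with the lowest f = g + h */
        int bx = -1, by = -1, bestf = 1 << 21;
        for (int y = 0; y < H; y++)
            for (int x = 0; x < W; x++)
                if (open[y][x] && g[y][x] + heuristic(x, y, gx, gy) < bestf) {
                    bestf = g[y][x] + heuristic(x, y, gx, gy); bx = x; by = y;
                }
        if (bx < 0) { printf("no path\n"); return 1; }
        if (bx == gx && by == gy) { printf("path length %d\n", g[by][bx]); return 0; }

        open[by][bx] = 0; closed[by][bx] = 1;
        const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
        for (int d = 0; d < 4; d++) {
            int nx = bx + dx[d], ny = by + dy[d];
            if (nx < 0 || nx >= W || ny < 0 || ny >= H) continue;
            if (map[ny][nx] == '#' || closed[ny][nx]) continue;
            if (g[by][bx] + 1 < g[ny][nx]) { g[ny][nx] = g[by][bx] + 1; open[ny][nx] = 1; }
        }
    }
}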
Strategic AI is harder, some components:
Deciding what units to build.
Some games, such as AOE hardcode these (you can edit the files). Smarter ones would see what units you are using and build the units which defeat them. (Most RTS games are rock-paper-scissors in the end).
Deciding what to do with them.
Again using AOE, there are editable files filled with things like how quickly to explore, how violent to be, etc. This is the really hard part.
The web page:
Game AI Page [gameai.com]
is an excellent source, and a book on programming RTS games was reviewed here a little while ago.
efficient AI's (Score:1)
Re:AI in Starcraft (Score:1)
Now, I'm not saying I *like* the AI in SC.. it's infuriating at times.. I've lost entire squads of zealots due to their autonomous stupidity.
Re:AI in Starcraft (Score:1)
I've spent many hours researching this topic:
1. No real variation in strategy. The strategy can be summed up as this:
if (myForces > yourForces || youExpanded)
attack();
2. No adapting to your behavior. It won't build up the appropriate counters for your attackers or defences; it always builds up a general purpose mix of units. If you've blocked off a route, it won't search for a weakness elsewhere.
3. The AI relies on having full map information and enemy unit locations. But it fails to use this information effectively.
4. Too many fatal flaws. E.g. attack with one worker at the beginning of the game, and it will counter with all of its workers, taking them off production.
The AI is so predictable that playing against it is simply playing the percentages. Make sure you control more resources, make sure you win the battles, and exploit the computer's short-sightedness.
This, IMHO, is the antithesis of what strategy games are about. I like games that force you to be creative and innovative to win, and the current crop of RTS game AIs don't measure up.
ai hardware? (Score:1)
What we really need is a card dedicated to accelerating AI. This would be tricky, but from some guys I've spoken to about this who know much more about AI than I do, it should be possible.
$0.02
Re:PAC MAN had good AI? (Score:1)
But that would be embarrassing.
Re:PAC MAN had good AI? (Score:1)
If you find that book, please do quote from it... I'd be interested in knowing what the programmer was thinking when he wrote it, because there are six or eight patterns that the player can variously run to clear each level. Perhaps the movement itself isn't fixed, but the optimal solution to each instance of a level is.
It's possible to play pac-man blindfolded. I've seen this done, although I don't have the actual skill. I do know the first couple levels of pattern, though.
Speaking of civ (Score:1)
While speaking of Civilization, I've got a question.
How successful was the Linux port? Are there any sales numbers... to look at?
I ask because this is one of the few commercial games under Linux, and I wanted to know whether it was a small/medium/big/huge success or whether it didn't work out (not enough games sold).
thanks
virtua fighter 2 (Score:1)
it seemed to learn moves from a player, try them, and rate how successful the move was. It only kept using stuff that worked well -- but seemed to mix stuff up so you couldn't easily second guess it. The AI still had a couple of bugs, but providing you were well mannered enough not to repeatedly exploit them you could get a tough and interesting game against the computer (with the added fun of seeing what moves it had picked up from you or your friends).
Fighting against the VF2 computer is much more fun than fighting the old Street Fighter 2 AI, which was very repetitive and heavily stacked in the computer's favour.
Even though VF2 is quite old now, I still rate it. Soulblade and a couple of others seem to have ripped it off with just small improvements in graphics etc. You can pick up a second-hand Saturn for about £30 these days. The PC version of VF2 is actually pretty recent and might have added/improved AI. VF3 is excellent if you're willing to learn it, but seems to be too tricky for most people.
Til you can use drunken kung fu in Quake, I'll still keep going back to Virtua Fighter. Shun Di forever.
+++++
Re:Unreal most realistic yet (Score:1)
Okay, Unreal monsters do dodge very well. But really, that's just a variation on the "perfect aim" bit, where a computer shooting at you has better-than-human accuracy. Unreal AIs have better-than-human dodging... but only because the game is geared towards that. Quite akin to QuakeC bots (and seeing how the AI author, Steven Polge, wrote the definitive Quake 1 'bot, that's quite logical).
Unreal's AI puts up a good fight, as does Half-Life's and Aliens Vs. Predator's. But none of them has *really* impressed me yet. Eraser Bot for Quake 2 is quite good also.
Want an AI to impress me? Give it personality. Make it Max Headroom for cryin' out loud, but make it *seem* like more than some programmer's set of rules. Cheers, jeff
Re:AI in Starcraft (Score:1)
Re: Bolo (Score:1)
As a matter of fact, I am going to be devoting my fall term to a Bolo AI (or brain as the game terms them). It will make use of the strategic disposition ideas from comp.games.ai (or was it comp.ai.games?) in order to organise its thoughts and (hopefully) beat the player.
My idea is that, provided with a simple set of rules for evaluating positions and risk, the brain will be able to decide on the optimal solution. I took AI a few terms ago and still have the books (naturally), so I should be able to implement some more modern AI ideas.
Re:AI Ain't All It's Cracked Up To Be (Score:1)
On the production OSI Servers? There are already some sophisticated AI routines in place for at least some monsters. Orcs are supposed to be cannon fodder. The same can't be said of Lich Lords, or the new coding on Dragons. I'm concerned though. If you make Orcs use group tactics, it's going to increase the amount of server-based processing lag, and remove Orcs from the newbie cash stable.
Re:AI API's needed (Score:1)
itachi
Re:AI API's needed (Score:1)
Re:Marathon (Score:1)
Re:Marathon (Score:1)
uh... we're both talking about Zork, or one of its progenitors, right?
Re:PAC MAN had good AI? (Score:1)
It was in a book that interviewed around twenty old-school programmers, if anyone wants to back this up...
Re:Genetic AI (Score:1)
Reason: the number of generations * the size of the population required to create a well-evolved AI would cost far too much in terms of computer power and processing time.
For a good population you would likely want more than 60 AIs. You would likely want each generation to last till someone has at least 50 kills (or there is a total of more than 400 kills) to allow averages to tell who is really worthy of reproducing. You would also likely need hundreds of generations to zero in on optimal values. Of course you could reduce the number of generations by increasing the population size, or vice versa, but you can't really reduce the processing needed by much.
You would likely have more success using a simulated annealing method and working on each AI individually. This way you wouldn't have to use such a large population. Results would be very similar, but the analogy isn't as nice.
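A rough sketch of the annealing alternative, in case it isn't clear (toy code; score() here stands in for "play a batch of matches and count kills"):

/* Toy simulated annealing over one AI's parameters: always keep improvements,
 * keep regressions with a probability that shrinks as the temperature cools. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define PARAMS 4

static double score(const double p[PARAMS])
{
    static const double target[PARAMS] = { 0.3, 0.7, 0.1, 0.9 };
    double s = 0;
    for (int i = 0; i < PARAMS; i++) s -= (p[i] - target[i]) * (p[i] - target[i]);
    return s;
}

static double frand(void) { return (double)rand() / RAND_MAX; }

int main(void)
{
    double p[PARAMS] = { 0.5, 0.5, 0.5, 0.5 };
    double cur = score(p);

    for (double temp = 1.0; temp > 0.001; temp *= 0.995) {
        int i = rand() % PARAMS;
        double old = p[i];
        p[i] += (frand() - 0.5) * 0.2;            /* random tweak */
        double next = score(p);
        if (next > cur || frand() < exp((next - cur) / temp))
            cur = next;                           /* accept */
        else
            p[i] = old;                           /* reject, roll back */
    }
    printf("final score %.4f\n", cur);
    return 0;
}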
Re:Mara7hon (Score:1)
Ah, Marathon--still, IMHO, the best FPS ever (and maybe yet to come...). Incredible AI, story, graphics (who needs polygons!). AI seems to be Bungie's forte--PID was surprisingly good for its time, Myth & Myth II were very nice, and Oni...mmmm, individualized AIs for every character, adaptation, learning, etc.
But Marathon... back in those days, when the only real alternative was Doom, it was breathtaking (and still is). The primitive state of 3D games only enhanced the effect--I can still remember dueling with Juggernauts (big barn-sized flying tanks), retreating to cover, only to have them sneak up behind me (sometimes going some distance) and toast me, sometimes even bringing help with them.
Links galore: bungie.org has lots of info on Marathon [bungie.org], Oni [bungie.org] and PID [bungie.org] (as well as a little on Blam [bungie.org], but that would take some explaining). Some of the original designers jumped ship and formed Double Aught [doubleaught.com], made Marathon Infinity, and are currently working on Duality [duality.net], yet another FPS.
"I pledge to punch all switches, to never shoot where I could use grenades, to admit the existence of no level but Total Carnage, to never use Caps Lock as my 'run' key, and to never, ever, leave a single Bob alive." -The Oath of the Vidmastar, Marathon
Rage Hard
AI limitations (Score:3)
Tic-tac-toe: very easy -- the machine can examine the whole game-tree on each move, and *always* make the best possible move (which will result in a tie if the human also makes the best possible move).
Checkers: more complex, but a deep search can be conducted, and a move database can be used to help matters also. There are some *excellent* (practically unbeatable) checkers programs in existence. And there are incredibly good human players also.
Chess: quite a bit more complex than checkers. The number of possible states for a Chess game is hopelessly huge. To create a grandmaster-comparable Chess engine, you employ tactics like:
o hardware acceleration of time-critical search code
o move database for opening and end-games (tricky part is the middle game)
o heuristics developed with the assistance of Chess grandmasters
o a big-ass computer (lots-o-CPUs and memory bandwidth)
o a very nice search engine of course
It's important to note that adding more computing power doesn't help matters as much as you may think. For example, suppose you increase the computing power by a factor of 10. In a game where the number of possible moves is 10, this would enable you to see *one* (1) move further into the future. If you wanted to see two moves further into the future you'd need *100* times the computing power. This is over-simplifying things a bit, but you see my point.
Now consider a computer game like a flight simulator or a real-time sim like StarCraft. The complexity of games like this is so massively f'ing huge that you can't cope with the problem by using a simple min/max search -- you *must* rely *heavily* on heuristics (e.g. if...then...) to make an AI that will perform better than a monkey. A lot of AIs are given artificial assistance (e.g. they "cheat") so that they won't be easily defeated by a competent human player.
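To make the tic-tac-toe point above concrete: "examine the whole game-tree" is small enough to show in full. Toy code, nothing optimized:

/* Exhaustive minimax for tic-tac-toe.  'X' is the machine (maximising),
 * 'O' the human.  Value: +1 X wins, -1 O wins, 0 draw with best play.   */
#include <stdio.h>

static char b[9];   /* ' ' empty, 'X', 'O' */

static int winner(void)
{
    static const int L[8][3] = { {0,1,2},{3,4,5},{6,7,8},{0,3,6},
                                 {1,4,7},{2,5,8},{0,4,8},{2,4,6} };
    for (int i = 0; i < 8; i++)
        if (b[L[i][0]] != ' ' && b[L[i][0]] == b[L[i][1]] && b[L[i][1]] == b[L[i][2]])
            return b[L[i][0]] == 'X' ? +1 : -1;
    return 0;
}

static int full(void)
{
    for (int i = 0; i < 9; i++) if (b[i] == ' ') return 0;
    return 1;
}

static int minimax(int x_to_move)
{
    int w = winner();
    if (w != 0 || full()) return w;
    int best = x_to_move ? -2 : +2;
    for (int i = 0; i < 9; i++) {
        if (b[i] != ' ') continue;
        b[i] = x_to_move ? 'X' : 'O';
        int v = minimax(!x_to_move);
        b[i] = ' ';
        if (x_to_move ? (v > best) : (v < best)) best = v;
    }
    return best;
}

int main(void)
{
    for (int i = 0; i < 9; i++) b[i] = ' ';
    b[4] = 'O';                          /* human took the centre */
    int best_move = -1, best_val = -2;
    for (int i = 0; i < 9; i++) {        /* pick X's best reply */
        if (b[i] != ' ') continue;
        b[i] = 'X';
        int v = minimax(0);
        b[i] = ' ';
        if (v > best_val) { best_val = v; best_move = i; }
    }
    printf("best reply for X: square %d (value %d)\n", best_move, best_val);
    return 0;
}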
Game AI (Score:1)
Just my opinion
Re:Half-Life, all the way! (Score:1)
Re:New Game AI is Vapourware and Marketing hype. (Score:1)
Re:Unreal most realistic yet (Score:1)
Re:AI in Starcraft (Score:1)
Re:Umm... Grammer. (Score:1)
Re:AI in Starcraft (not true) (Score:1)
Re:Genetic AI (Score:1)
On the other hand, evolving against human opponents would be best...
Fear my wrath, please, fear my wrath?
Homer
AI in games (Score:1)
Tell a man that there are 400 Billion stars and he'll believe you
Game movement AI (Score:1)
Re:Somewhat Pointless Counterpoint and commendable (Score:1)
Re:Half-Life (Score:1)
Re:Good Enough (Score:1)
just my 2cents
Re:Half-Life, all the way! (Score:1)
Unrealism really bothers me in gaming. After seeing how things work in HL, (especially the helicopter), games like Sin are pathetic at best.
On a side note, I keep seeing that Wine can run Half Life. I have tried and keep failing. I wish I could find a place that documents how to do it.... hint...
-Clump
Re:AI in Starcraft (Score:1)
Play Protoss. Send Probes to find all 3 bases
HL scared me too! (Score:1)
Wait till game AI advances and even our best game-playing humans can't figure out the "patterns".
-Kancer
ps: a quick thing on Starcraft -Blizzard tuned down the "insane AI" in Brood war because of complaints.
Re:Game movement AI (Score:1)
The traveling salesman problem is a bit different (but this little difference makes it - as you said - almost impossible to solve):
given: a set of cities and roads.
question: which is the most efficient route that visits every city exactly once?
solution: you have to test all possible solutions!!! (The only way to shorten this search is with heuristics, e.g. if the edges a-b and c-d cross, and the connections a-c and b-d exist, you take those instead, which makes the two paths shorter.)
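That edge-uncrossing trick is the classic 2-opt move. A toy sketch (small hard-coded city layout, nothing clever about the starting tour):

/* Repeatedly reverse tour segments whenever doing so shortens the tour. */
#include <stdio.h>
#include <math.h>

#define N 6
static const double cx[N] = { 0, 4, 1, 5, 2, 3 };
static const double cy[N] = { 0, 1, 4, 3, 2, 5 };

static double dist(int a, int b)
{
    return hypot(cx[a] - cx[b], cy[a] - cy[b]);
}

static double tour_length(const int t[N])
{
    double len = 0;
    for (int i = 0; i < N; i++) len += dist(t[i], t[(i + 1) % N]);
    return len;
}

int main(void)
{
    int tour[N] = { 0, 1, 2, 3, 4, 5 };
    int improved = 1;
    while (improved) {
        improved = 0;
        for (int i = 1; i < N - 1; i++) {
            for (int j = i + 1; j < N; j++) {
                int a = i - 1;                 /* city before the segment */
                int b = (j + 1) % N;           /* city after the segment  */
                double before = dist(tour[a], tour[i]) + dist(tour[j], tour[b]);
                double after  = dist(tour[a], tour[j]) + dist(tour[i], tour[b]);
                if (after + 1e-9 < before) {   /* the two edges crossed: uncross them */
                    for (int lo = i, hi = j; lo < hi; lo++, hi--) {
                        int tmp = tour[lo]; tour[lo] = tour[hi]; tour[hi] = tmp;
                    }
                    improved = 1;
                }
            }
        }
    }
    printf("tour length after 2-opt: %.2f\n", tour_length(tour));
    return 0;
}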
Re:AI API's needed (Score:1)
Who knows what the geeks of the world can come up with. I think we can all say Linux.
As for current AI, one of my favorites was Aliens vs. Predator. I think the character development played a part as well. But the intelligence of my opponent scared me sometimes.
--kaniff
I need a cool quote or something.
Re:AI API's needed (Score:1)
So don't get me wrong, I think the AI in SMAC is great. The good stuff about their AI is the way they position their forces around a city in a way that prevents a counterattack, and they know how and _when_ to use probe teams (the equivalent of Civ2's Diplomats/Spies), and how you can infiltrate their datalinks to see all their base operations so you can tell that they're not cheating. Just don't give too much credit to submission, it's just part of the game.