A Look At Modern Game AI
IEEE Spectrum is running a feature about the progress of game AI, and how it's helping to drive AI development in general. They explore several of the current avenues of research and look at potential solutions to some of the common problems.
"The trade-off between blind searching and employing specialized knowledge is a central topic in AI research. In video games, searching can be problematic because there are often vast sets of possible game states to consider and not much time and memory available to make the required calculations. One way to get around these hurdles is to work not on the actual game at hand but on a much-simplified version. Abstractions of this kind often make it practical to search far ahead through the many possible game states while assessing each of them according to some straightforward formula. If that can be done, a computer-operated character will appear as intelligent as a chess-playing program--although the bot's seemingly deft actions will, in fact, be guided by simple brute-force calculations."
Holy Acronym Overload (Score:4, Funny)
F.E.A.R., short for First Encounter Assault Recon ... University of Alberta GAMES (Game-playing, Analytical methods, Minimax search and Empirical Studies) ... called STRIPS (for STanford Research Institute Problem Solver)
Combine that with such gems as:
players view the virtual world from the perspective of the characters they manipulate, making Counter-Strike an example of what's known as a first-person-shooter game.
and I'm not sure that belongs here.
Then again, maybe I'm just bitter that I still can't beat GNU chess.
College AI Project (Score:4, Interesting)
Back in college I worked with a guy, Jeff, on an AI project. We were to play the game Freecell through to its finish.
(if you're reading this, jeff, I'm still sorry I didn't do more coding on that and I owe you one)
While I can understand the difficulties of a brute-force search, and that a simplified "version" of the game could be helpful, or even that parsed "states" or "instances" of in-game situations could be broken down and analyzed, wouldn't a simpler way be to apply a fitness test to the various actions? No, no... I lose points for not reading the article, perhaps.
We used a combination of fitness and searching to determine a way to win a Freecell setup. Admittedly this is VERY simplified, and done in a sort of static system as opposed to a (usually) dynamic one in games.
If memory is limited, the obvious answer seems to be to use a system to determine better ways of doing things. Rather than simplifying the game, couldn't the AI have a library of responses designed to fit certain situational profiles, then act in a (perhaps semi-random) manner that fits? Perhaps the responses could even be genetically determined.
Also, this use of situations versus individual actions could help lengthen the time the AI has to come up with a response.
Just some thoughts, though I'm sure others more experienced than me have these on the brain. I'm looking forward to the responses on this topic.
Re:College AI Project (Score:5, Interesting)
Games of perfect knowledge versus an opponent are pretty simple to solve. You'll find they all basically boil down to minimax applied to game trees plus an evaluation function (which gives you a fitness value). There's also alpha-beta pruning, and things like NegaScout, which are just optimizations of minimax. The trickiest part of this is writing an effective (and fast) evaluation function.
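For illustration, here's a minimal sketch of that minimax-with-alpha-beta idea in Python. The state object and its methods (legal_moves, apply, evaluate, is_terminal) are placeholders you'd supply for your particular game, not anything taken from the article:

def alphabeta(state, depth, alpha, beta, maximizing):
    # At the depth limit (or a finished game), fall back on the evaluation function,
    # i.e. the "fitness value" mentioned above.
    if depth == 0 or state.is_terminal():
        return state.evaluate()
    if maximizing:
        best = float("-inf")
        for move in state.legal_moves():
            best = max(best, alphabeta(state.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:   # beta cutoff: the opponent will never allow this line
                break
        return best
    else:
        best = float("inf")
        for move in state.legal_moves():
            best = min(best, alphabeta(state.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:   # alpha cutoff
                break
        return best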
Freecell is a bit different because it's a single player game, but ultimately you can apply a similar method as above.
Real-time decision making in games is often quite different. One problem is that you don't necessarily always want to make the "best" move. In Game! [wittyrpg.com] for example, each monster has a regular attack and may have one or more special attacks. Using simple AI to pick one (such as always picking the attack that does the most raw damage) wouldn't be as interesting as picking randomly. Say one of the monster's special attacks is to steal some gold from the player; why would the AI ever pick that? It doesn't benefit the AI at all, but it does make the monster more interesting for the player to fight. Similarly, if one monster has an absolutely devastating attack, a "smart" AI would always use it. But if the AI always uses the devastating attack, then either that monster will be impossible to kill, or the regular attacks must be really boring. If the monster with the devastating attack only uses it occasionally, though, it keeps the player on their toes: perhaps they'll heal more often, or use more powerful attacks to try to dispatch the monster faster.
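A toy sketch of that "don't always pick the strongest move" idea; the attack names and weights here are invented for illustration:

import random

# Hypothetical attack table: (name, weight). Weights bias the choice without forcing it,
# so the gold-steal and the devastating attack only come up occasionally.
ATTACKS = [("bite", 5), ("steal_gold", 2), ("devastating_blow", 1)]

def pick_attack(attacks=ATTACKS):
    names, weights = zip(*attacks)
    return random.choices(names, weights=weights, k=1)[0]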
Having said all of that, random picking isn't always the best way to go (although it's quite efficient with CPU time). The main problem with game trees is their branching factor. Chess is a fairly CPU-intensive game for AI to play, as it has an average branching factor of ~36. For real-time games, it's likely that you can use domain knowledge to substantially prune the branching factor, which makes the problem much simpler. For example, instead of considering, say, turning left 1 degree, or 2 degrees, or 3 degrees, and so on, you could just consider turning left 90 degrees or 180 degrees. If you only end up with a dozen options left to pick from, you can fairly quickly expand several levels of the game tree and then make an informed decision.
However, some games are not games of perfect knowledge (Backgammon, for example); they often rely on chance. In this case, the value of deeper game tree expansion rapidly diminishes, and you simply need to temper your fitness values based on the expected probability of that move being possible. The other problem with games of chance is that the branching factor is usually very high, which typically makes it infeasible to expand too many levels in the game tree anyway.
Of course you can precook a number of situations; most good chess AIs have a large collection of book openings that they use. It's really just an application of domain knowledge again: you can then reuse your game tree expansion and evaluation function on each of the book openings to find the most appropriate one, instead of doing an exhaustive search of all possible moves.
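As a rough sketch (the positions and moves are invented), the book lookup just short-circuits the search:

# Hypothetical opening book keyed on the move history so far.
OPENING_BOOK = {
    "": "e2e4",
    "e2e4 e7e5": "g1f3",
}

def choose_move(history, search_fn):
    # Domain knowledge first; fall back to full game-tree search otherwise.
    if history in OPENING_BOOK:
        return OPENING_BOOK[history]
    return search_fn(history)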
Re:College AI Project (Score:4, Informative)
Since you used an example from a role-playing game, I'll respond in kind. Disclaimer: I'm totally hot for genetic algorithms, ever since I saw this article on pathfinding [gamasutra.com].
Oftentimes, a player character's "best" moves are gated by factors beyond just needing to push the button - does he have enough combo points? Does he have enough mana? Is he in range? And so forth, before it's even a valid choice. Using a more complicated getInformation set than is outlined in the pathfinding program linked above, let's lay out what the mob can find out:
How many hostile enemies are there?
What kind of targets are the hostile enemies (ranged, melee, soft [rogue, mage, etc.], hard [warrior, paladin])?
How hurt is each target?
How much damage has each available target done to me?
How much healing has each available target done to other hostiles?
How devastating have the non-damage abilities of available targets been to friendlies?
Which abilities are available to me?
Which status ailments do the available targets have currently?
Add up the levels of opponents and the levels of allies. Which side is larger?
Which ability did I use last?
.. And so on - I'm sure there are more checks you could give the AI access to - probably even depending on its intelligence.
Then there are the actions the MOB can take:
Choose target (can target self)
Attack target (melee)
Attack target (ranged)
Use ability on target (repeat for however many abilities are available to the MOB - limited by mana, and so forth)
Run away
Close distance
Run to ranged distance
It would take quite a bit of training (you could probably automatically cull the first several generations, but later on you might actually have to interact with it yourself), but this kind of technique could end up with some very "smart" AIs that are fun and challenging to play against. You don't get God AIs, because they have limited information. You don't get God AIs, because their abilities are limited - and not by simple randomness. You might actually get an AI that stuns you and runs if it realizes it's outmatched, instead of stupidly sitting there and whaling at you with its rusty sword of crumbling.
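To make that concrete, here's a compressed sketch of the evolutionary loop, assuming each genome is just a set of weights over perception checks like the ones listed above, and that some fitness() routine scores a genome by running the mob through simulated fights. All of the names and numbers are invented:

import random

PERCEPTIONS = ["enemy_count", "own_health", "target_health", "threat", "ability_ready"]

def random_genome():
    # One weight per perception check; a weighted sum of these scores each candidate action.
    return {p: random.uniform(-1, 1) for p in PERCEPTIONS}

def crossover(a, b):
    return {p: random.choice((a[p], b[p])) for p in PERCEPTIONS}

def mutate(g, rate=0.1):
    return {p: (w + random.gauss(0, 0.3)) if random.random() < rate else w
            for p, w in g.items()}

def evolve(fitness, population=200, generations=100):
    pop = [random_genome() for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)      # fitness() runs the mob through simulated fights
        parents = pop[: population // 5]         # the survivors "mate"
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(population)]
    return max(pop, key=fitness)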
Granted, genetic algorithms have some HUGE drawbacks:
The decision tree can be quite large, and it can take quite a few cycles to evaluate. Of course, as part of your fitness check you could penalize how long it takes to execute.
It can take quite a bit of training (hundreds of generations, with thousands of entities each) before you get something that resembles an intelligent algorithm.
Meanwhile, it might generate something that checks for contingencies you never thought to bake into the AI script.
Re: (Score:3, Informative)
GAs are interesting, but they're definitely less dynamic than the other approaches I mentioned. As you pointed out, GAs are much too slow to use on the fly; you have to prebake them. That has both pros and cons: the obvious con is that if you ever tweak any of the parameters, you'll have to rebake all of your behaviours. The obvious pro is that you can actually see the complete behaviour, and you can manually tweak it (if necessary).
Canonically, a GA is used when the search space is simply too large to sear
Re: (Score:1)
Re: (Score:2, Interesting)
Canonical GAs are never the correct tool for a problem. They combine a crude random local search (mutation) with the cross-over operator that is intended to splice partial solutions. The trouble is that even on problems designed to be exploited by GAs, like the Royal Road [psu.edu], a random restart hill-climber will perform better with the same number of fitness evaluations.
I'm not as familiar with GP, but given the minute number of
Effective branching factor of chess is more like 2 (Score:2, Informative)
By the way... modern chess engines have an effective branching factor of about 2 (certainly less than 3)
There may be 36 moves available in a typical position, but the engine will almost always have enough information to examine the best move first or second, and then rapidly refute all of the others by proving that they are inferior to the fully-examined first move (i.e. a beta cutoff).
The effect is that it only takes about 2 full plies of extra depth to get a decent strength improvement (50-100 ELO).
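As a back-of-the-envelope check (assuming a roughly uniform tree), the effective branching factor just relates nodes searched N to depth d:

\[
b_{\text{eff}} = N^{1/d}
\]

So with an effective factor of about 2, each extra ply roughly doubles the work, and the two extra plies mentioned above cost only about 2^2 = 4x as many nodes, whereas fully examining all ~36 legal moves at every level would cost about 36^2 ~ 1300x.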
Re: (Score:2)
Games of perfect knowledge versus an opponent are pretty simple to solve. You'll find they all basically boil down to minimax applied to game trees plus an evaluation function (which gives you a fitness value)
All of which, when applied to the simplest and most abstract of all strategy games (GO [wikipedia.org]), fails to produce a competitive program. Search has its limits, even in zero-sum, perfect information, partisan, deterministic strategy games.
Re: (Score:3, Informative)
That's because Go has an enormous average branching factor (>300). Go is definitely not the simplest of all strategy games.
Re: (Score:2)
Re: (Score:2, Funny)
one important point (Score:5, Interesting)
The article is pretty bang on. Adaptive AI is tough to do, as is balancing a tunable level of smart against being beatable. One thing I have not seen enough of in games is AI agents communicating with each other about intentions. More often it is simply a matter of saying, "I'm in this area, so don't try to go here." I've yet to really feel in a game that the enemies are working together. I saw a very nice presentation on Halo 3 high-level AI at GDC 08 that kind of nailed some of these problems with a pretty simple solution: there should be some top-level AI manager that handles requests from AI agents on what to do next when a high-level goal becomes useless to attempt to achieve. Left 4 Dead sort of deals with this, not by talking to agents that are still alive, but by deciding when to introduce new agents; the Halo 3 approach to me seemed very elegant, though. It was higher-level AI than the article was talking about, but in effect it was a similar setup: an AI achieves something and says, "What's next?" Since the AI manager would know the state of the other enemies in its unit, it could decide that you might as well not start firing at the player since the two others were doing that. Maybe some other game vets could clue me in, but I haven't seen too many games like that, where a module is advising the AI based on balancing attack/protect/advance ratios during gameplay.
/framework/tools programmer
//not AI programmer
Re:one important point (Score:5, Interesting)
1. Command AI issues squads orders (do/accomplish something) based on a very simple model of the battlefield
2. Squad AI issues individual units orders (go somewhere) based on a more detailed model of the immediate area.
3. "Conscious" individual AI computes a good way of following orders from the squad AI based on yet a more detailed model.
4. "Subconscious" individual AI makes moment-to-moment decisions, for example about how to avoid minor obstacles that the "conscious" AI ignored.
Of course that is very idealized.
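A skeletal sketch of that tiering, with invented class and method names (the battlefield and map objects stand in for whatever model each layer actually keeps):

class CommandAI:
    def issue_orders(self, battlefield_summary):
        # Decide coarse objectives from a very simple model of the whole battlefield.
        return [{"squad": squad, "goal": "take_bridge"}
                for squad in battlefield_summary["squads"]]

class SquadAI:
    def plan(self, order, local_map):
        # Turn the objective into per-unit destinations using a more detailed local model.
        return [{"unit": unit, "move_to": local_map.cover_near(order["goal"])}
                for unit in local_map.units_in_squad(order["squad"])]

class UnitAI:
    def act(self, directive, perception):
        # "Conscious" layer: pick a route; the "subconscious" layer (steering, animation)
        # handles moment-to-moment obstacle avoidance below this.
        return perception.path_to(directive["move_to"])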
Re: (Score:2)
This sounds like a great approach. If you read my post below, many of the teams start at what you have called the Conscious Individual. If they finish that with enough time, I've seen them move on to the Squad and Command levels also. The main difference though is that since our game state has traditionally been pretty simple, there hasn't been a need to compose a simpler model of the state for the upper levels.
The multi-tiered AI approach does seem very useful and intuitive though.
Re: (Score:3, Insightful)
Re: (Score:2)
interesting idea.
let me make sure i understand it correctly:
an RTS AI took a FPS AI out for a couple of drinks, and the resulting offspring would be an awesome AI that can work in teams to defeat all humans.
i'm sure someone can throw 2 AIs together relatively quickly, release a game and see what happens.
maybe a command and conquer: renegade 2?
Re: (Score:2)
An RTS AI decides 'what' units to make, but does nothing about what to do with a current group of units. Nor does an RTS decide what to do in battle: have x units do this, while y units flank; if unit z is below condition w, retreat and heal other units. Instead it simply says: I have x units. Send them all in to attack... No attack strategies, just build strategies. Hardly useful in an FPS when you are using 'pre-built' units to attac
Re: (Score:1)
Wouldn't that behavior be unrealistic though, since the NPCs would 'know' things that they shouldn't? e.g. that they have a bunch of allies hiding behind that wall. If each NPC acted individually, perhaps they could use swarm-based behavior when they teamed up.
Re: (Score:3, Interesting)
Depends on the setting. It makes perfect sense within a networked-battlefield scenario: what one unit sees, it will try to convey to others, and commanders can take decisions based on everything that is seen by any of their units.
In a medieval setting, they would have to shout to each other (and be within hearing distance) to request assistance. Or wave colored flags, or send messenger pigeons.
The radio squawking in Half-Life added an element of realism to this - you could actually "hear" what the bad guy
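A tiny sketch of that kind of range-limited knowledge sharing; the NPC attributes and the 30-unit "shouting distance" are made up:

import math

def broadcast_sighting(sender, sighting, npcs, radius=30.0):
    # Only NPCs within shouting distance of the sender learn about the sighting;
    # a radio-equipped unit could simply use a very large radius.
    for npc in npcs:
        if npc is sender:
            continue
        dx = npc.pos[0] - sender.pos[0]
        dy = npc.pos[1] - sender.pos[1]
        if math.hypot(dx, dy) <= radius:
            npc.known_sightings.append(sighting)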
Re: (Score:1)
It would be an interesting experiment at
Re: (Score:1)
I think the real problem with that isn't a technical one (although it would be really, really, really cool!). If you make it so players have to shut up in a dungeon or whatever, they'll just use an alternate means to communicate (i.e. telephone or VoIP). This will destroy immersion and annoy players. The only way this could really work is in a single-player game where you have to talk to NPCs in English....
Re: (Score:2, Interesting)
Plus, methods like true, range-based "whispering" could be useful, and would also carry with it some interesting risk (i.e., the intended person wasn't close enough to hear). The fact that a particular AI might only understa
Re: (Score:2)
The article is pretty bang on. Adaptive AI is tough to do, as is balancing a tunable level of smart against being beatable.
Being beatable? I keep forgetting that in FPS games it's so easy to make near unbeatable bots. I'm a strategy gamer, and I'd love it if someone would make an unbeatable AI. Or at least a halfway decent one. Strategy is the area where some real advances in game AI are still needed.
And then there's CRPGs of course, but I suspect that's another order of magnitude harder.
AI? In video games? (Score:2, Funny)
If modern games are an indication of AI, then they're obviously smarter than we could hope.
Just today, the AI in Far Cry 2 spotted me at long range after one shot with a sniper rifle and proceeded directly to me, despite heavy foliage for cover.
Color me impressed. Even Sherlock Holmes would be proud of how quickly they deduced where I was.
Re: (Score:3, Insightful)
The problem is that internally the game "knows" where you are- after all, it has to track your location.
Ever play against someone in Counter-Strike who was hacking? Wallhacks, aimbots, the whole nine yards? There's really nothing at all stopping the developers from doing that; in fact some older games basically did do that, just with arbitrary delays before the AI snapped onto you, deliberate fudge factors on accuracy, whatever it took to make the difficulty level sane for a human player.
It's possible to
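Something like the fudge factors described above might look like this; the numbers are invented, not taken from any particular game:

import random

def fudged_shot(true_target_pos, difficulty):
    # The engine knows the exact coordinates; the "AI" deliberately degrades them.
    reaction_delay = random.uniform(0.3, 1.2) / difficulty   # seconds before reacting
    spread = 2.0 / difficulty                                 # aim error in world units
    aim = (true_target_pos[0] + random.gauss(0, spread),
           true_target_pos[1] + random.gauss(0, spread))
    return reaction_delay, aim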
Re:AI? In video games? (Score:4, Insightful)
That is true. The computer is simply very different, so modeling our strengths is just as hard as our weaknesses.
It's trivial for an AI-controlled enemy to get headshots all the time. It's trivial for the AI to have complete knowledge of the battlefield and state of all items and characters on it. Humans can't do that.
It's a lot -less- than trivial for an AI to notice patterns in the enemy and exploit them. Thus the same approach tends to work 100 times against the same AI. It can't learn from its mistakes.
Re:AI? In video games? (Score:5, Interesting)
It could if the AI decision tree were a genetic algorithm.... each entity gets its own decision tree, and the ones that survive mate. :P
Of course, that only really makes sense in an MMO 'verse.
You could do some AI juggling, so that after every map (or every time the AI loses), it runs its algorithms against all previous scenarios until it wins (or at least, gets better at not losing).
But then you end up with an AI that wins all the time, and a huge amount of CPU cycles.
Re:AI? In video games? (Score:4, Insightful)
But then you end up with an AI that wins all the time
And we don't want that. We want an AI that wins some of the time, and that is beatable. That is, it should present us with a challenge, but the challenge can't be too great because then the game will be no fun.
So, we only want a smart-enough AI, not a god AI.
Re: (Score:1)
Just adjust your fitness function accordingly :P
Re: (Score:3, Funny)
So, we only want a smart-enough AI, not a god AI.
So the problem becomes: we don't want it to beat us all the time, but if we make it smart, it will beat us, but we want it to be smart!
So the solution is: make it capable of beating us all the time. Then flip a coin to determine if it will choose the winning strategy, or sit like a lame duck so it won't win all the time.
Re: (Score:2)
But then you end up with an AI that wins all the time
And we don't want that. We want an AI that wins some of the time, and that is beatable. That is, it should present us with a challenge, but the challenge can't be too great because then the game will be no fun.
So, we only want a smart-enough AI, not a god AI.
I'd love to see god AI, but then, I'm a strategy gamer. AI is really bad at strategy.
Re: (Score:2)
You are overly optimistic regarding GAs. I very much doubt that, whatever the AI cycles, you can evolve an AI team capable of winning against a good (as in top-third of the table) human team at, say, counter-strike.
Gotcha: the AIs would not be allowed to cheat, and would have the exact same information at their disposal as the human team: a lot of visual input (but not the actual noise-free game geometry) and the same set of commands as human players.
If you can evolve that kind of AI, the DARPA Grand Chal
Re: (Score:2, Interesting)
Re: (Score:1)
Re: (Score:3, Interesting)
I think you're under-optimistic regarding GAs.
They can, with training (just against themselves!) beat human opponents at simple turn based games (citation [nih.gov]). That's the same level playing field you describe.
It's been 10 years of GA optimization and theory, and 10 years of Moore's law since then. Computers have much better reflexes than humans, and you're telling me that a GA couldn't beat a master at CS?
Tell you what: give me $50,000 in funding, six months to train the AI to general FPS rules (headshot, mo
Re: (Score:2)
"The point is, games have rules. Once you've learned the rules, you're unstoppable."
There is an enormous difference, though: the computer doesn't have any of the deficiencies of the human mind getting in the way. Most human beings 'wing it'; most thought is 98% unconscious, so most of the time what you are testing is how good someone's unconscious processing is.
You'll probably find the following interesting:
(Quick version)
http://i35.tinypic.com/10fruxh.jpg [tinypic.com]
(Longer version)
http://www.linktv.org/video/2142 [linktv.org]
To
Re: (Score:2)
games have rules. Once you've learned the rules, you're unstoppable
Ah, but I want both to play the same game, while you are suggesting giving the machine a special representation with a high-level vocabulary. The interface I am proposing is the same one you are using: images and sounds come out, and commands are executed. Not "high-level game data" - only images and sounds.
You talk about teaching the AI how to headshot. I am talking about the difficulty of processing a 2D image and interpreting it as a PoV rendering of an (unknown) 3D model, and locating a set of pixels t
Re: (Score:3, Informative)
I am not talking about giving the AI any more information than the user has, nor any special controls/interface. When I say games have rules, I mean in regards to movement speeds, damage, and the like. A machine can process these things and respond to a changing environment quicker than a human.
3D image processing in a game is finite (especially using low-res models like in CS), and there are only 4 different heads to recognize for each side.
In a small data set like this, NNs can and will quickly outcompe
Re: (Score:2)
Machines "can" do all sorts of wonderful things, but so far nobody has been able to get them to do them. Please cite examples or research that demonstrates accurate real-time 3d modeling from a synthetic 2d video, or stop making things up. Yes, I am sure it can be done. No, I very much doubt anybody can do it right now.
You say that synthetic video is finite. So is 2^256, or the number of grains of sand on the beach - what is your point? Are you suggesting that finite is always manageable? Ok, maybe you ca
Re: (Score:1)
I cannot find any citations for 3D modeling from 2D video, but I didn't look very hard. You're probably right, it's beyond current models.
I think you're missing what I'm trying to say here, though. A sufficiently advanced neural network may be able to play CS without the need for actual 3D processing.
I'm pretty sure that I'm right, but I really can't prove it without more time and a supercomputer to run it on. I'm currently writing a proposal for a grant so I can model the selection pressures leading to th
Re: (Score:2)
I remember some bots for Quake and bots for bzflag which connect as regular users.
That is a very good way to start an AI.
The fudge factors can be toned down a lot because it cheats less.
Re: (Score:2)
So very true. A huge improvement to the AI would be visibility determination. I.e., the AI might be able to tell the shot came from the southwest, but it can't see exactly where you are because of all the foliage - so to root out the threat, it sends a squad to comb the area.
Unfortunately, the squad pulls out a giant comb and starts running it through the foliage..
Re: (Score:2, Informative)
Operation Flashpoint and its sort-of-sequel Armed Assault do something along these lines. The AI has a field of view roughly corresponding to what most players have in a first-person shooter (~90 degrees), and the AI can't see you if you're outside this field, but it can hear you if you do something noisy.
Time of day, weather (rain/fog), foliage, obstacles, stance, movement speed and inherent camouflage of the unit will affect visibility and 'audibility'. Each weapon has two properties describing how visi
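A rough sketch of that style of perception check; none of the factor names or coefficients are taken from either game, they're just placeholders:

def can_detect(observer, target, env):
    # Angle test: the target must be inside the ~90 degree field of view to be *seen*.
    in_fov = abs(observer.angle_to(target)) < 45.0
    visibility = (target.camouflage * env.light_level * env.weather_clarity
                  * (0.3 if target.in_foliage else 1.0)
                  * (0.5 if target.crouching else 1.0))
    # Hearing ignores the field of view but is muffled by weather.
    heard = target.noise_level * (1.0 - env.rain_muffling) > observer.hearing_threshold
    return (in_fov and visibility > observer.sight_threshold) or heard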
Re: (Score:2)
I'm pretty sure he was joking. Welcome to Slashdot, enjoy your stay.
Re:AI? In video games? (Score:5, Informative)
I have to say that the AI in Far Cry 2 is definitely one of the worst of current generation video games. I couldn't play that game for more than a couple days before getting utterly bored and frustrated at the idiotic AI.
Enemy Territory: Quake Wars [enemyterritory.com], on the other hand, has some of the best AI I've seen, AND it's a multiplayer game. The bots' ability to attack and defend objectives while using infantry and vehicle skills against the random actions of human players is incredible.
Re: (Score:1, Funny)
Every one of my gaming friends agrees that QW has the best AI we've ever seen. I've spent some games just following AI snipers to see where the best spots are.
Sometimes it is hard to tell the difference between the bots and real players. It's only the absence of bad squeaky singing, incessant excuses about lag, and numerous opinions about my mother's sexual preferences that gives the game away.
Re: (Score:2)
Did the AI improve in later patches? Because I played QW when it first came out, against my brother (LAN) and the bots were as dumb as they come. Constantly driving vehicles into walls, running in front of me while I was shooting to try to give me a med pack, standing up out of cover to reload.
It was a game-ruining experience, and if it's actually been improved since then, it would probably make QW worth playing.
Re: (Score:1)
Re: (Score:2)
Man that is nothing.
Game: Far Cry 2
Time of day: 3 AM
Weather: Thunderstorm.
Distance: 200-300 yards
Location: Jungle with heavy undergrowth.
Position: crouching behind a tree, not moving.
Result: Spotted and snipered.
Re: (Score:2)
Game AI For Fun (Score:4, Interesting)
Re: (Score:1)
Simplify and use heuristics (Score:2)
So what they're saying is: simplify and use heuristics? Hasn't this been done for years now? On some level every single game out there does it, because you can't model the real world 100% and the state you're considering is therefore simplified. What they're saying is to simplify further, by considering a subset or creating a model of the model that makes up the full game.
In the case of simplifying further, isn't this exactly how a chess engine works?
In the case of making a simplified model, I'd be surprised if
Artificial intelligence, isn't (Score:2)
Re: (Score:2)
Cheating is the AI using information that it wouldn't have if it was a human. Like looking through walls or dense foliage, and firing each shot with deadly accuracy because it knows the exact coordinates of its target. That's cheating. That used to be common in strategy games too, but nowadays people want strategy games to have a level playing field, and that means the AI loses big time, because no AI is capable of grasping complex strategic situations like a human can. Maybe that's easier on the tactical s
So it's not just science fiction anymore (Score:1)
Let's hope they don't take their research from game AI too literally. Most game AI I've seen is programmed to hunt and kill the player.
Easy solution (Score:1, Offtopic)
Use silver instead of copper. Silver is an excellent conductor, better than copper in fact. That will surely baffle all those copper thieves.
Re: (Score:1)
NPC AI under construction in Eve-Online (Score:2, Interesting)
Actually the EVE Online community, including the devs, is really gonna try to make AI happen in NPC encounters: http://myeve.eve-online.com/ingameboard.asp?a=topic&threadID=917074 [eve-online.com]
Re: (Score:2)
God I hope so. Right now EVE's AI behavior is: If player is within a visibility range, go to an optimum range of player and orbit; if player is in locking range, lock on player; if player is locked, fire weapons on player. I love how you can just destroy a whole squadron of enemies while the other squadron nearby just sits there and acts like nothing happened. God Eve is boring.
AI does not need state trees, it needs statistics (Score:1)
AI in games is approached the wrong way: instead of finding all the game states and choosing the best path, a far better approach is to apply statistics and do pattern matching. In fact, brains work with the latter method, not the former.
The problem (Score:3, Insightful)
The problem with game AI isn't that we can't make better AI; it's that we don't make it a priority. Today's machines are powerful enough to give us good visuals, but not powerful enough, or flush enough with memory, to really devote resources to much beyond that. In Mass Effect I want to say we devoted something like 75% of the memory budget to textures, and we still had to downgrade the textures before the final ship. I don't know what the final stats were, but I wouldn't be surprised if about 90% of the budget was allocated to textures and polygons.
That's not to say that if you were to quadruple the memory on today's machines, AI would suddenly improve drastically, though. Many teams don't have the resources to devote to programming, so they need to take whatever is in the package. There's room for some entrepreneurial spirits to create snap-in AI middleware, like what Havok does for physics. Get started now, and you may have a refined product ready for the next generation of consoles.
As an animator, I just want to point out that almost anything smart you see in a game is a scripted sequence. An AI marine flipping a table and taking cover is mostly animation work. The only real code is a simple set of conditions that determines whether the animation should be played, plus some state changes to coincide with the animation. The measure of an AI isn't what kind of cool things it can do, because that's animator work; it's how quickly it figures out what it should do, and how well it figures out the quickest way to do it. When you see AI running out in the open, taking the long route to cover, getting hung up on corners or doing circles, that's bad AI.
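For example, the "simple set of conditions" behind a canned table-flip might amount to no more than this (all the names here are invented):

def maybe_flip_table(marine, world):
    # The clever-looking move is canned animation; the logic is just a handful of checks.
    if (marine.under_fire
            and world.nearest_cover(marine) is None
            and marine.distance_to(marine.flippable_table) < 2.0
            and marine.state != "in_cover"):
        marine.play_animation("flip_table_take_cover")
        marine.state = "in_cover"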
To give credit (and blame) where credit (and blame) is due, designers choose what kinds of behaviors are possible, so they too are highly responsible for the final appearance of the AI. If a designer neglects a cover system, even an intelligent AI can look stupid, because enemies will just stand in harm's way. If a designer includes a visceral chainsaw attack, even a poor AI that gets a kill can still seem impressive.
Re: (Score:2)
The Xbox's memory is shared, and the PS3 has 512 vid plus 512 system, IIRC.
Most games these days are built to the least common denominator.
Re: (Score:2)
Actually, both are only 512MB. The Xbox 360's is shared (512MB for the 3 PowerPC cores plus the GPU); the PS3 has 256MB main system RAM and 256MB for the GPU.
God are you there? (Score:3, Insightful)
It never ceases to amaze me that while science is fiercely opposed to God or theology infiltrating science as a process, in AI development they almost "assume" that intelligence was crafted by a God.
COMPLEX BEHAVIOR IS EMERGENT, NOT DESIGNED.
In AI development they seem to assume that the proper way to develop AI is to be a God and design a system or method of AI that accomplishes a specific set of goals or objectives.
Day after day evolution is a truth in science, and that's fine; but when it comes to AI development I swear they have never heard of evolution.
Your behavior results from a wide and largely independent array of inputs.
Your eyes don't make any decisions and aren't designed for decision making; they're input.
Your feet, lungs, and regions of your brain operate as a COMPLEX INTEGRATED SYSTEM OF INDEPENDENT FACULTIES.
This is a much larger problem than the specifics of the task at hand. We are talking about an organic development model for AI rather than a deterministic method. That is the largest flaw of computer science: computers are largely deterministic devices, and intelligence isn't deterministic. A deterministic method of AI development is doomed.
You have to evolve the AI. The AI needs to know the limitations of its organism for proper development.
Light, Dark ...
Up, Down
Here, There
Friend, Foe
Move from A to B
Find A Weapon
Assess Threat
Attack or Flee
etc...
The very process of evolving the AI API in an organic model gives the model itself the ability to ignore irrelevant data, by feeding abstract and generalized data up the cognitive food chain, with irrelevant data dying off early in the process. If the general data is insufficient, the AI simply asks its faculties for more specific input.
OUT - I WANT TO READ HAMLET
IN - BOOK SHELF NEAR, OBJECTS FOUND ON BOOKSHELF, ASSUME RECTANGLE OBJECTS ARE BOOKS
IN - BOOKS OVER THERE ON THE BOOK SHELF (RECTANGLE OBJECTS CONFIRMED AS BOOK)
IN - BOOK ON TOP SHELF IS ABEL (Binary search of the book shelf)
IN - BOOK ON BOTTOM SHELF IS ZEUS
OUT - LOOK IN THE MIDDLE OF THE BOOK SHELF
IN - FIRST BOOK IS HOUSE OF M
OUT - GO BACK A FEW BOOKS TO THE LEFT
IN - FOUND BOOK HAMLET
OUT - GET BOOK
IN - TOO FAR AWAY
OUT - MOVE CLOSER
IN - I AM NEAR THE BOOK
OUT - GRAB BOOK
IN - LEFT ARM WON'T MOVE
OUT - USE RIGHT ARM
IN - I HAVE THE BOOK IN HAND
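One way to read that exchange is as a request loop in which the top-level goal keeps asking the faculties for just enough detail to act. This is purely a toy sketch with invented interfaces:

def pursue_goal(goal, faculties):
    # Each faculty answers coarse questions first; detail is requested only when the
    # coarse answer is not enough to act on, so irrelevant data dies off early.
    plan = [goal]
    while plan:
        request = plan.pop()
        answer = faculties.query(request)           # e.g. "books on shelf?" -> "top=ABEL, bottom=ZEUS"
        if answer.sufficient:
            faculties.act(answer.action)            # e.g. "grab book with right arm"
        else:
            plan.extend(answer.follow_up_requests)  # ask for more specific input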
Additionally, the AI evolves with the organism itself (physical characteristics influence mental development).
The reality of AI is that it needs to be compiled or GROWN to fit the organism (say, a terrorist or counter-terrorist in Counter-Strike).
BASE FACULTIES + ORGANISM DEFINITION + CIRCUMSTANTIAL OVERRIDES + GAMEPLAY OVERRIDES = Source Code for AI
An AI compiler then builds out organic, almost B-tree-like info-passing/storing pipelines based on the limitations.
A creature with no eyes would never have to process visual data. In that case distant objects are irrelevant except for memory storage.
My Prediction: AI isn't something that is developed, it's something that is Grown.
You define it then compile it.
Use evolution (Score:3, Informative)
Here are two great examples of using evolving neural networks to drive game AI:
Nero:
http://nerogame.org/ [nerogame.org]
Galactic Arms Race
http://gar.eecs.ucf.edu/ [ucf.edu]
They're both the brainchild of Kenneth Stanley.
His current research can be seen here:
http://eplex.cs.ucf.edu/ [ucf.edu]
Wait a sec, a lot of games don't use AI (Score:1)
Predators (Score:1)