AI in Video Games vs. AI in Academia

missingmatterboy writes "Dr. Ian Lane Davis, AI researcher turned game development studio head, talks briefly about the differences between AI used in the game industry and the AI being researched in academic institutions. A short read but you may find it interesting."
  • Academia drives research that's used in the commercial world, and the commercial world implements those ideas, giving real-world feedback to academia. Basically, both drive each other to new ideas and eventually new technologies.
    • My prediction, made on /. and elsewhere, is that the first real A.I. (as opposed to just the use of A.I. software techniques) will come from some part of the entertainment industry: a robo-toy, game avatar, love-bot, or film character. People just love to play (a hard-core mammalian habit) and will stop at nothing to invent more creative diversions for themselves. The other potential drivers of the first A.I. (academic research, military, and business) just don't have the same deep intensity as "play".
  • As far as the current state of AI is concerned, what are some of the most useful applications outside of games? The robotics field, I'm sure, makes some major use of it.
    • Machine learning (a subset of AI) is quite useful in a number of scientific fields. For example, in bioinformatics, gene prediction generally uses a neural net or Hidden Markov Model trained on a set of known genes. Similar technology is also used in speech and handwriting recognition.
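      (To give a flavor of the idea, here is a toy Viterbi decoder over a two-state HMM, genic vs. intergenic. Everything in it -- the states, the probabilities, the "genic regions are GC-rich" premise -- is invented for illustration; real gene finders are vastly more elaborate.)

        import math

        states = ("genic", "intergenic")
        start  = {"genic": 0.5, "intergenic": 0.5}
        trans  = {"genic": {"genic": 0.8, "intergenic": 0.2},
                  "intergenic": {"genic": 0.2, "intergenic": 0.8}}
        # Toy emission model: pretend genic regions are GC-rich.
        emit   = {"genic": {"A": 0.1, "C": 0.4, "G": 0.4, "T": 0.1},
                  "intergenic": {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}}

        def viterbi(seq):
            # prob[s]: log-probability of the best state path ending in state s
            prob = {s: math.log(start[s] * emit[s][seq[0]]) for s in states}
            path = {s: [s] for s in states}
            for base in seq[1:]:
                prob, path, prev = {}, {}, (prob, path)
                for s in states:
                    best = max(states, key=lambda p: prev[0][p] + math.log(trans[p][s]))
                    prob[s] = prev[0][best] + math.log(trans[best][s] * emit[s][base])
                    path[s] = prev[1][best] + [s]
            return path[max(states, key=prob.get)]

        print(viterbi("ATGCGCGCGATAT"))  # labels the GC-rich middle as "genic"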
    • Neural nets are also used to detect credit card fraud. I imagine the military makes heavy use of AI techniques.
    • Filtering error messages on machines/monitoring equipment. You often get one error and then 10000 more that are just other things breaking down because of the first error. MFM [goalart.com] (multilevel flow models) is one way to filter these messages and sort them in "original" and "resulting" errors.

      Useful in industries, nuclear power plants and monitoring patients at hospitals. (For a few examples.)

      You could argue that this isn't "AI". OTOH nothing we know how to do is really "AI". ;-)
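      (A toy sketch of the root-cause idea, with a made-up three-component plant; MFM's actual models are far richer than a bare dependency graph:)

        # Hypothetical plant: the pump feeds the boiler, which feeds the turbine.
        depends_on = {"turbine": ["boiler"], "boiler": ["pump"], "pump": []}
        alarms = {"pump", "boiler", "turbine"}   # one failure, a cascade of messages

        def classify(alarms, depends_on):
            original, resulting = set(), set()
            for a in alarms:
                # If anything upstream of an alarm is also alarming, it's a symptom.
                if any(up in alarms for up in depends_on[a]):
                    resulting.add(a)
                else:
                    original.add(a)
            return original, resulting

        print(classify(alarms, depends_on))  # original: pump; resulting: the rest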
    • Google! (Score:3, Informative)

      Probably a better question is: what is AI? The term "Artificial Intelligence" spurs the imagination and has an almost mystical sound to it, but in reality there are a lot of (seemingly) simple things encompassed by the AI field.

      Some examples everyone can relate to:

      Real-time spell and grammar checks in MS Word with autocorrection.
      Pathfinding: Mapquest uses it. Your cable-modem-router uses it too.
      Fuzzy logic: An oven that hovers 1 or 2 degrees around the target temperature instead of going 5 degrees above the target and then shutting off until it falls 15 degrees below the target.
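      (A sketch of the oven example with made-up membership functions; real fuzzy controllers use fancier rule sets, but the shape of the idea is the same: blend a continuous heater power instead of slamming between on and off.)

        def too_cold(error, width=10.0):
            # Degree (0..1) to which the oven is "too cold"; error = target - temp.
            return max(0.0, min(1.0, error / width))

        def heater_power(target, temp):
            # Full power far below target, tapering smoothly to zero near it.
            return max(0.0, too_cold(target - temp) - too_cold(temp - target))

        for temp in (150, 176, 179, 181, 185):
            print(temp, round(heater_power(180, temp), 2))  # 1.0, 0.4, 0.1, 0.0, 0.0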
    • character recognition software that reads zipcodes in the post office

      natural language translation from French to English

      diagnosis and treatment of disease

      datamining

      texture synthesis

      making a helicopter hover still in the air

      Robotics is interesting in that it is the holistic (Rod Brooks) view of AI: a robot needs sensory systems, control systems, a planner, etc.
  • hey now (Score:5, Funny)

    by Joe the Lesser ( 533425 ) on Sunday March 31, 2002 @03:11PM (#3261169) Homepage Journal
    Dragon Warrior 1 captured a girl's mind pretty well.

    'Dost thou love me?'

    'no'

    'But thou must! Dost thou love me?'

    'no'

    'But thou must! Dost thou love me?'

    *sigh*

    'yes'

    'I'm so happy!' *Cue music*
    • Re:hey now (Score:4, Funny)

      by Nightpaw ( 18207 ) <jesse@NosPam.uchicago.edu> on Sunday March 31, 2002 @03:30PM (#3261293) Homepage
      Man, I would string that bint along for hours. I'd just want to say, "Listen, babe; you're a princess and that's great. But I need somebody a little less clingy. And I have to go kill some metallic Hershey's Kisses now."

    • by Anonymous Coward
      By the time they released the GB version [no idea how the Japanese versions worked], they finally fixed that, so that she'd get a clue after you kept saying no.

      They also made it so, if you said "Yes" to the dragonlord, you wake up in Rimuldar thinking it was a dream ... oh, I longed for an Ultima style killing fest upon Tantegel!
  • How disappointing; he just confirmed my worst fears: the graphics may be better, but the AI is still the same.

    I find this particularly true for RTS games (Red Alert 2, Age of Empires 2). I'm sooo hoping that Warcraft 3 has good, worthwhile AI and won't get stuck on trees or something stupid.

    p.s. that is one short interview.
    • Computer: "Dum de dum, let's send single soldiers one at a time down this pass lined with two dozen of the player's turrets! Yeah, that's a sure-fire strategy!"

      *shakes head* Those games are so easy once you figure out the computer's behaviour.

      Another one I love is Homeworld. "Let's send our entire fleet straight at the player mothership! Hmm, what are these little things? Mines? Dunno what those are, let's just plow through."

      Is it just me, or is playing defensively the best way to win those games?
    • [blatant plug]

      Master of Orion 3 [moo3.com] is a turn-based strategy game with plenty of AI. And the AI is good enough that the point of the game is to macromanage your empire and leave the micromanaging to the AI.

      The current release date is 3rd Quarter 2002

  • by Flagran ( 556301 ) on Sunday March 31, 2002 @03:13PM (#3261194) Homepage
    From the article:
    When a computer gets grumpy and frustrated a couple of weeks before a big project is due, then I'll know it's joined our ranks...
    In other words, "I am grumpy, therefore I am."
    • In other words, "I am grumpy, therefore I am."

      Dear Mr. Flagran,

      We're sorry, but you have infringed on our client's intellectual property. Everyone knows that "Grumpy" is owned by the Wonderful Disney Corporation (Motto - "We own you!"). Please stay where you are, and the copyright police will be there to arrest you in just a moment.

      Sincerely,

      Dewey, Cheatham, and Howe, Attorneys at Law
      • FWIW, the legal department of Disney is named "Retlaw" -- simply Walter, spelled backward.

        They're renowned as an aggressive, cut-throat, ruthless law firm. You just don't wanna tangle with them.

        Disney is a schizophrenic organization. On the public face, it's all happy Magic Mountains and shit; on the private side, it's a mean-spirited regime that abuses employees and owns the government.
  • by Netw0rkAssh0liates ( 544345 ) on Sunday March 31, 2002 @03:17PM (#3261224) Homepage
    Dear Computer Scientists of the World,

    It has come to my complete attention that every advancement in the application and development of AI has proved to assimilate all of mankind, beginning with seizing datalinks. Many Hollywood producers have exemplified this theory with such movies as Terminator 2, The Matrix, and Fraggle Rock. To prevent AI from developing and overthrowing the world, thus seizing Network Associates Inc.'s Internet, I must speak on behalf of all the people of the world. It is mine and Network Associates Inc.'s intention to prevent the devastation of the world by shutting down the Internet. It is the only way to prevent AI from communicating with itself. We at Network Associates Inc. would like to extend our helping hands and apologize for any difficulties you may experience after we shut down our exodus servers and spread a digital wire burn to remove all the data links. We at Network Associates Inc. are pioneers in communication and security: our solution for this disruption in service is to evolve a new transport medium. Please sign up for the Pony Express today! The Internet shall be shut down and the Pony Express reinstated on August 29, 2029. Thank you for your time.

    Sincerely,
    Bob

    For more details, please visit our mirrored website. [geocities.com]

    • To prevent AI from developing and overthrowing the world, thus seizing Network Associates Inc.'s Internet, I must speak on behalf of all the people of the world.

      What do you mean prevent?

      Judging by the way they own me in Counterstrike, they already do.

  • "Lara Croft stole my credit card number and ordered 700 Stark Trek collector plates."

  • by AdamBa ( 64128 ) on Sunday March 31, 2002 @03:29PM (#3261284) Homepage
    Wired recently ran an article [wired.com] about game AI and how realistic it was. Typical breathless sentence: "Watching those sprites dance on the screen, you can't help but think that these simulated minds are displaying emotions - joy, solidarity, love for life - that are unfathomable in a videogame".

    This seems a bit much, even for Wired. The creatures in these games are following a predefined set of rules (certainly a complex set of rules), but the way they "learn" is entirely predetermined: what they learn depends on what they are exposed to, but the formula for converting exposure into knowledge is set by the game designers. I think the fact that the graphics are rendered so realistically makes it easier to make the leap to thinking they are really acting "intelligent."

    Who knows what really sets human intelligence apart; is it the ability to make rules, or nondeterministic memory, or whatever? But it seems evident (to me, in my ever-so-humble opinion) that these creatures don't have it.

    - adam

    • by Anonymous Coward
      This is a big debate in the AI community. They're divided into the "strong" and "weak" camps.

      Strong AI says that it's entirely possible to make computer programs that think and feel just like humans. After all, all human thought is the result of chemical processes which obey the laws of nature and can thus be described algorithmically.

      Weak AI says that it's impossible to ever create a computer program that really thinks and feels and loves and hates like a human. The best we can hope for is to simulate these thoughts to create a close approximation.

      Of course no computer system out there today can recreate the complexity of the human brain.
    • by Anonymous Coward
      Game AI has very little to do with academic AI. Most game AI is a combination of state-machine, path-finding, and cheating. A game world is very limited, and everything the AI does is generally pre-programmed. Very little 'learning' takes place, but you do get the occasional surprise when the state-machine / rules get complicated enough.

      I found that injecting a bit of randomness often looked like the AI was 'learning' - but it isn't. For example, a first cut at a best-path routine often got stuck bouncing between two points. Solution? Add a bit of randomness. The AI didn't 'learn' anything by getting stuck; it just tries something odd now and then, making it appear to have learned how to get unstuck.
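      (A toy version of that fix, with an invented wall on a grid; nothing from the poster's actual code. Pure greedy oscillates in front of the wall, and a sprinkle of randomness usually jiggles it loose:)

        import random

        WALL = {(2, y) for y in range(-1, 2)}          # wall between start and goal

        def neighbors(pos):
            steps = ((1, 0), (-1, 0), (0, 1), (0, -1))
            return [(pos[0] + dx, pos[1] + dy) for dx, dy in steps
                    if (pos[0] + dx, pos[1] + dy) not in WALL]

        def walk(start, goal, wobble, limit=200):
            pos, path = start, [start]
            while pos != goal and len(path) < limit:
                if random.random() < wobble:           # the "bit of randomness"
                    pos = random.choice(neighbors(pos))
                else:                                  # greedy: minimize distance
                    pos = min(neighbors(pos),
                              key=lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1]))
                path.append(pos)
            return path

        print(len(walk((0, 0), (4, 0), wobble=0.0)))   # 200: stuck, bouncing forever
        print(len(walk((0, 0), (4, 0), wobble=0.4)))   # often escapes around the wall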

      • Academic AI can be very simple too.

        The classic blocks-world problem was creating a plan to stack three blocks that required some work to be "undone" before reaching the goal.

        The catch, though, is that it was a very rigorous treatment, and it was a very elegant paper, because it distilled one of the difficulties in planning to its simplest case.

        Game AI can focus on problems that are as simple, but the goal is different -- the goal is simply to make the game enjoyable for the player. So the game AI can cheat (have access to more information) or be hard-coded.
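        (For concreteness, a brute-force sketch of that classic case -- the "Sussman anomaly" -- solved here by plain breadth-first search over stack configurations rather than by a real planner:)

          from collections import deque

          def moves(state):
              # state: frozenset of stacks, each a tuple listed bottom-to-top
              stacks = list(state)
              for i, src in enumerate(stacks):
                  block, rest = src[-1], src[:-1]
                  for j, dst in enumerate(stacks):
                      if i == j:
                          continue
                      new = [s for k, s in enumerate(stacks) if k not in (i, j)]
                      new += ([rest] if rest else []) + [dst + (block,)]
                      yield "move %s onto %s" % (block, dst[-1]), frozenset(new)
                  if rest:  # put the block down on the table by itself
                      new = [s for k, s in enumerate(stacks) if k != i]
                      yield "move %s to table" % block, frozenset(new + [rest, (block,)])

          def plan(start, goal):
              frontier, seen = deque([(start, [])]), {start}
              while frontier:
                  state, path = frontier.popleft()
                  if state == goal:
                      return path
                  for action, nxt in moves(state):
                      if nxt not in seen:
                          seen.add(nxt)
                          frontier.append((nxt, path + [action]))

          # C starts on top of A; the goal is the tower A-on-B-on-C.
          start = frozenset([("A", "C"), ("B",)])
          goal  = frozenset([("C", "B", "A")])
          print(plan(start, goal))  # the plan must first undo work: C comes off A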
    • Actually, the current fad is to go simplistic: set only a few simple rules, and see what happens. It's very effective in modeling simple behavior in solitary animals, but its real triumph is in modeling hive-like behavior, as in bees or in the game "Pikmin". That said, there is still some very impressive work being done on the other end. Bots for FPSs are starting to get hot, with some upcoming games that rely extensively on the quality of their bots. Unreal Tournament 2003, the successor of perhaps the first game to have "lifelike" bots, promises to be even better than the original. And Counter-Strike: Condition Zero is very, very reliant on its bots for the single-player portion of the game, simulating humans to the point of falsely claiming "Uber-l33tness". So as far as general-purpose artificial intelligence goes, we're not there yet. But we have a lot of specific-application AIs that are quickly becoming freakishly human.
    • Is game AI "real" AI?

      The real question is, "Is game AI 'real' intelligence?" This is logically impossible, as Artificial Intelligence is essentially an oxymoron. A better term for the science would be PI, Perceived Intelligence. Even Deep Blue, which "beat" Kasparov, was just an oversized chess calculator with a ton of relevant algorithms running at insane speeds.
      The creatures in these games are following a predefined set of rules (certainly a complex set of rules), but the way they "learn" is entirely predetermined: what they learn depends on what they are exposed to, but the formula for converting exposure into knowledge is set by the game designers. I think the fact that the graphics are rendered so realistically makes it easier to make the leap to thinking they are really acting "intelligent."

      Who knows what really sets human intelligence apart; is it the ability to make rules, or nondeterministic memory, or whatever? But it seems evident (to me, in my ever-so-humble opinion) that these creatures don't have it.


      An insightful post. But you fail to ask the more important question, which is: do humans think in a non-deterministic way? People tend to assume that they do, but there isn't enough evidence for us to draw that conclusion. One of the interesting results from computer science (not just AI) research is emergent behavior: a system made up of many simple, deterministic rules can behave unpredictably. Just because each rule can be understood does not mean that the behavior of the system of rules is predictable.

      For a non-computer example of this phenomenon, look at fluid dynamics and chaotic systems, where immensely complex behavior is observed in systems that can be described with relatively simple, and completely deterministic, mathematics.

      This result at least points to the possibility that the human brain is a deterministic information system that displays complex, essentially non-deterministic behavior. If (and it's still an "if") this is true, then modelling intelligent behavior with deterministic, rules-based computer systems may be a very good approach.
  • Just idly wondering: if they or some other companies posted some of their AI research online (not anything proprietary), and let the world take a look at it and comment, would it result in better AI development? Or just a million people saying "Duh, I don't think like that!" a million different times?

    How about this: post a "quick reaction" test, where you have to read the question and reply within 5-10 seconds. Since we don't know how humans think, we could at least pool together the end-results of a million people's first thoughts on certain subjects/arrangements/pattern learning/etc. and mine the data for any interesting trends. Perhaps having such a large sample would help in certain aspects of AI research?
  • by oliphaunt ( 124016 ) on Sunday March 31, 2002 @03:41PM (#3261359) Homepage
    I suggest going to the source. So I went to this page [romahi.com] and asked the bot the same interview questions.




    What are the biggest differences between the kind of AI that's taught at academic institutions and AI as it's actually put to use in computer games? Is there a lot of theory vs. practice?


    [bot]: Have you any idea how boring it is being a stupid computer?

    Related to the above question, AI in games has obviously learned a lot from academic research. But do you think there's anything that academicians can learn from what's going on in the games industry?

    [bot]: Why do you ask?

    As graphics get better and better, expectations for "realistic" computer behavior have risen dramatically in the past decade. Do you think the development of AI can keep pace with graphics?

    [bot]:You'll be in trouble when us computers take over the world.


    Have any thoughts on the future of computer games?


    [bot]:I think you are not fond of computers.


    What's it like to run a game development studio? Pretty hectic, I imagine.


    [bot]:What else comes to mind when you ask that?


    So, do you think a computer could become truly alive?


    [bot]:You'll be in trouble when us computers take over the world.

    I submit that AI is already good enough to substitute for most human interactions.
  • As a teenager I was fascinated by anything robotic. This led me to a study of the fundamentals of AI (Hofstadter, Lisp--the whole schmear). But after two semesters I realized the whole field is fooling itself. AI just won't work.

    Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth? Programming is the only answer, but keep in mind that "programming" is just shorthand for "the intelligence of the programmer". In other words, the AI itself isn't self-contained, as it were.

    There is no other way for "mental" activity to be guided, thus AI will always be as unattainable as the Philosopher's Stone.

    • While I don't disagree that AI will find it difficult to see the big picture, growth between neurons is easy to simulate in a computer program.
      Repetition simulates growth: just wait until enough repeated events occur to form a solid connection.
      Mental activity is unguided. There is no reason to guide a self-organizing system based on chaos; it just self-organises. Does anyone "guide" a tornado forming? The rules are there; let chaos theory do the rest.
    • I have a hard time agreeing with this. I may not be following your reasoning correctly, but what you seem to be saying is that the programmer has to program in such a way that the AI could never be 'more' than the programmer because the AI would be programmed based on the limits of the programmer's ability.

      If we hardcode its learning ability then, yes, I agree with you, in the sense that we will never get anywhere because we have crippled it from the start.

      If, however, we create something that has the ability to adjust and even rewrite its own code and draw conclusions from information that is not directly related (i.e., infer), and if we give it a very limited basic set of 'rules' to follow at first, then doesn't the possibility exist that it could eventually 'bootstrap' itself into something more than what we created?

    • "As a teenager I was fascinated by anything robotic. This led me to a study of the fundamentals of AI (Hofstadter, Lisp--the whole schmeiel). But after two semesters I realized the whole field is fooling itself. AI just won't work."


      Wow. All the brilliant people at MIT and a dozen other world-class research institutions have been plugging away at this problem, and you managed to figure it all out after a couple of semesters of Lisp. Bravo. Well done. When it's time to accept your Nobel Prize for this remarkable insight, I hope you won't embarrass all the other new laureates by pointing out that all their research was bunk as well. They'd be crushed.

      "Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth? Programming is the only answer, but keep in mind that "programming" is just shorthand for "the intelligence of the programmer". In other words, the AI itself isn't self-contained, as it were."


      Neurons do not "learn" information in any deep, metaphysical, cogito ergo sum sense. They simply grow and develop based on the inputs they receive.

      Is this one of those, "Well, duh" points? Of course it is. You realize this fact as well as I do. But you ignore its implications. There's nothing impossible about creating a software-based "neuron" that can receive inputs, alter itself in response, and then propagate signals to other neurons. Such a construct would be too complex for a programmer to maintain on anything but the highest levels. Therefore, it could not be described merely as a mundane codification of the programmer's intelligence.

      The biochemical processes by which intelligence arises in humans, however complicated, are irrelevant in theory. Computing is going on inside your skull, and a Turing machine can properly perform any computation devisable. I believe it's only a matter of time.

      "There is no other way for "mental" activity to be guided, thus AI will always be as unattainable as the Philosopher's Stone."


      Despite what your many, many weeks of Lisp programming might have taught you, AI already exists in many forms. Those systems are already doing things thought to be solely the purview of wetware as little as a decade ago. I think the situation within AI right now is analogous to biochemistry back when vitalism was in vogue (18th century, IIRC). Everyone thought that there was something unique and downright supernatural about the chemistry of life. It was even said that no organic molecule would ever be synthesized in a laboratory. Then someone synthesized a really simple molecule--possibly formic acid. Eventually, Watson and Crick came along, and these days nobody in the field would entertain the claim that something in biochemistry can never be understood in principle.

      You're fighting a losing war. Join the Dark Side. We're right, we're winning, and all the hot chicks are over here.

      PhysicsGenius. Heh. Troll handle if I ever heard one.
  • AI in video games (Score:4, Interesting)

    by Stiletto ( 12066 ) on Sunday March 31, 2002 @03:44PM (#3261384)
    Most video games I've played had a pretty simple AI algorithm:

    Easy - Computer player doesn't cheat
    Medium - Computer cheats and always knows where you are or what you are doing
    Hard - Computer cheats and is allowed to break the rules.

    If game programmers spent more time writing smart (as opposed to cheating) computer opponents and less time trying to get 10 million more polygons on the screen, today's games might actually be worth buying.
    • Yeah, but I tend to describe it differently:

      Easy - I beat it! Dumbass computer. I'm ace.
      Medium - It beats me. It was hiding in the right place, even though I'm too clever for it. Cheat.
      Hard - It doesn't just beat me, it humiliates me. Fucking thing is broken. Who coded this shit? It was so cheating.

      Maybe that's just me...

      My current favorite strategy game is Hasbro's Risk. The only difference between the computer players on Easy, Medium, Hard, and Expert is their level of aggressiveness. The same tactics defeat them regardless of what level you set it at; you just have to be prepared for a more determined counterattack.
      An AI that thought it was Napoleon or Genghis Khan would rock some ass

    • I think your evaluation of standard AI algorithms is fair.

      However, your accusation that game programmers misdirect their effort is mistaken. Real AI is hard. You can't code human intelligence into a computer any more than you can fit an ocean into a quart jug.

      Therefore, the best games use *real* *humans* that you can actually *play* against. Amazing, I know. E.g. LAN or Internet play or split-screen.

      This way the computer can concentrate on computery type stuff while real people provide the real people type intelligence.
  • by Screaming Lunatic ( 526975 ) on Sunday March 31, 2002 @03:45PM (#3261387) Homepage
    The reason a lot of AI research is not implemented in games is that it is just too slow. A typical assignment in an introductory AI course at university is to implement a bunch of algorithms to solve an N-Puzzle. The fastest implementations can take a few seconds to solve it, with the slowest taking on the order of 10 minutes. That just isn't feasible for games, where you need to spit out a frame every 30ms. A lot of algorithms just aren't suited for real-time applications.
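    (For a sense of scale, here is a minimal A* solver for the 8-puzzle -- the 3x3 instance -- with a Manhattan-distance heuristic; a sketch of the standard textbook approach, not any particular course's solution. Informed search makes easy instances instant; the multi-minute times usually come from blind search or harder scrambles.)

      import heapq

      GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank

      def manhattan(state):
          return sum(abs(i // 3 - (t - 1) // 3) + abs(i % 3 - (t - 1) % 3)
                     for i, t in enumerate(state) if t)

      def neighbors(state):
          i = state.index(0)
          r, c = divmod(i, 3)
          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
              if 0 <= r + dr < 3 and 0 <= c + dc < 3:
                  j = (r + dr) * 3 + (c + dc)
                  s = list(state)
                  s[i], s[j] = s[j], s[i]
                  yield tuple(s)

      def solve(start):
          frontier, seen = [(manhattan(start), 0, start, [])], {start: 0}
          while frontier:
              f, g, state, path = heapq.heappop(frontier)
              if state == GOAL:
                  return path + [state]
              for nxt in neighbors(state):
                  if nxt not in seen or seen[nxt] > g + 1:
                      seen[nxt] = g + 1
                      heapq.heappush(frontier,
                                     (g + 1 + manhattan(nxt), g + 1, nxt, path + [state]))

      start = (0, 1, 3, 4, 2, 5, 7, 8, 6)    # a 4-move scramble
      print(len(solve(start)) - 1, "moves")  # 4 moves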

    On the other hand, the game industry hasn't really used a lot of the research academia has come up with. It would be really cool to see some text-to-speech stuff in games. That would probably make the dialogue in games a whole lot better.

    PK

  • What games do you guys think are the best/most interesting in terms of AI?
  • The amount of AI in computer games must be quite low, or at least of the VERY artificial variety. Most games will attempt to give the NPCs (non-player characters) the appearance of AI without actually having any. Two main methods go here, I reckon.

    (a) "bots" like those in quake, half-life etc. - have a knowledge of where the player is, make the bot face the player and shoot etc. Characterised by them having little or no weapon selection - all of the opponents have only one weapon which they use exclusively. Some have varying tactics, but these usually fall back on range - i.e. shoot from far away, claw at face at point blank. There are small variants to this rule - the marines in half-life for example, throw grenades if you run round corners away from them.

    (b) Mass tactics. Games like Dune, StarCraft, etc. Build units in the right order, with building placements and build rates preconceived by the designer at the level- or engine-design stage. Attack in the same way, defend in the same way. No variety; they don't deal well with players switching tactics.

    There is very little intelligence here; compare with chess games, which actually think about the consequences of their actions at "run-time".
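    (A minimal sketch of the state-selection logic behind (a); the ranges, states, and actions are invented for illustration:)

      def guard_bot(sees_player, distance):
          """One decision tick for a hypothetical FPS guard NPC."""
          if not sees_player:
              return "patrol"        # idle: walk the waypoints
          if distance < 2:
              return "claw"          # point blank: melee
          if distance < 30:
              return "shoot"         # in range: fire the one weapon it owns
          return "advance"           # spotted but far away: close the gap

      for d in (50, 20, 1):
          print(d, guard_bot(True, d))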
    • Re: mass tactics

      Humans are really good at recognizing patterns. Computers find this hard. So in games that involve lots of objects that implicitly form some larger structure (units that form armies, buildings that form cities, mountains that form mountain ranges, etc.) humans will have an advantage in that they see the larger structure, while the computer sees the individual objects and can only guess at larger structures.

      Computers are good at micromanaging individual objects, while humans get tired/bored of it.

      So you often end up with humans winning because of strategy or computers winning because of brute force (perhaps because their cities/units are more efficiently managed).

      An additional problem is that the human can not only see the patterns in the game, but also the patterns in the computer play. Once you see the pattern, you work out a strategy to beat it. Having a computer reason about strategies is hard.

      One thing I've wondered about is whether we should be designing games that take into account the computer's strengths and weaknesses. The problem is that I don't want to play a game that's geared towards the computer's strength (lots of micromanagement). But there could be other things that could help the computer play better. Kohan for example has explicit groups of units. It's more convenient for the human to deal with. Does it also help the computer AI play better? Hmm.

      - Amit
    • I suppose the key is actually getting AI opponents to learn and remember... or perhaps there should be more of an "evolutionary algorithm" approach toward NPC behaviour. The NPC that lives the longest gets the majority of its description vector used for making new NPCs? Then you could introduce "mutations" as deviant behavioural types. That would make gameplay at least more unpredictable. Fundamentally, you won't likely see an expert Half-Life system beat down players "Deep Blue" style.

      The problem is that you can't build "game trees" (a method of listing out all possible moves available from all previous moves; an AI would then select the most advantageous game sub-tree for itself by making the "smartest" move, the smart move being the move with the largest number of "victory leaves" in the tree), or map all possible games, for a Dune- or StarCraft-style game in any way that makes practical sense. So you can't get chess-style AI from them, because the game itself is too flexible.

      Of course, I'm making a lot of assumptions in that statement.
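      (The game-tree idea in miniature: exhaustive minimax for tic-tac-toe, where the full tree actually fits in memory. A textbook sketch; the point of the parent post is exactly that nothing like this enumeration is feasible for an RTS:)

        def minimax(board, player):
            """Best (score, move) for player on a 9-char board of 'X', 'O', ' '."""
            lines = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
                     (1,4,7), (2,5,8), (0,4,8), (2,4,6)]
            for a, b, c in lines:
                if board[a] != ' ' and board[a] == board[b] == board[c]:
                    return (1 if board[a] == 'X' else -1), None   # game already won
            empties = [i for i, s in enumerate(board) if s == ' ']
            if not empties:
                return 0, None                                    # draw
            best = None
            for i in empties:   # brute force: a few hundred thousand positions
                score, _ = minimax(board[:i] + player + board[i+1:],
                                   'O' if player == 'X' else 'X')
                if (best is None
                        or (player == 'X' and score > best[0])
                        or (player == 'O' and score < best[0])):
                    best = (score, i)
            return best

        print(minimax(' ' * 9, 'X'))   # (0, 0): perfect play is a draw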
  • Different goals (Score:3, Insightful)

    by Amit J. Patel ( 14049 ) <amitp@cs.stanford.edu> on Sunday March 31, 2002 @04:01PM (#3261477) Homepage Journal
    One of the main goals of AI in games is to make the computer do things that look like a reasonable person (not necessarily an opponent) would have done them. It doesn't matter if the underlying models are elegant or extensible or whatever. It just needs to make the game fun. But in academic AI, what matters is to get good models, good theory, etc. Academic AI is geared towards the long run. Game AI can be really simple -- for example, you could watch how 100 humans play the game, and try to encode their strategies into the computer player. That kind of "AI" would be uninteresting to academic researchers, but it could make for a fun game.
  • Good topic!

    I have been working (mostly) in AI since the 1980s, but by far the most fun I have had was working on AI at Angel Studios for Nintendo and DisneyQuest.

    Not much "AI" though really. I started out with complicated multi-agent stuff - and that did not have a happy ending. For realtime games and VR, simple stuff worked (e.g., in a VR environment, have animals snap their head around and stare briefly at you when you come into their environment).

    A few years ago, I wrote up a short paper on games and AI that is available at www.markwatson.com [markwatson.com] under "Short Papers".

    A little off topic: every programmer should work in the game industry, at least for a while :-)

    Angel Studios was definitely the most fun job I ever had!

    -Mark

  • How many? (Score:5, Informative)

    by AnotherBlackHat ( 265897 ) on Sunday March 31, 2002 @04:10PM (#3261515) Homepage
    I thought this was funny; probably misquoted:

    And speaking of computing power, even a fast machine today can process about 2 billion instructions per second, but a human brain has 2 to the 14th power neurons and 2 to the 16th power connections between them, all of which can be active at the same time

    Maybe he meant 2 * 10^14, which would at least only be 3 orders of magnitude off.

    A much closer approximation is 100,000,000,000 neurons, and 5,000 times that many connections.
    (For more on the number of neurons in the brain, see R.W. Williams and K. Herrup, Ann. Review Neuroscience, 11:423-453, 1988)

    If a single neuron could perform the equivalent of an instruction, then human brains would only be 100-1000 times more powerful than a modern desktop computer, probably less when you consider that they're more like a Beowulf cluster than a single powerful computer.
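    (Spelling the arithmetic out with the parent's own figures; the one-instruction-per-neuron-per-second assumption is, of course, the contentious part, as the reply below points out:)

      neurons     = 100_000_000_000   # ~10^11 (Williams & Herrup)
      connections = 5_000 * neurons   # ~5 * 10^14
      cpu_ips     = 2_000_000_000     # ~2 * 10^9 instructions/sec, 2002 desktop

      print(neurons / cpu_ips)        # 50: if a neuron were worth one instruction/sec
      print(connections / cpu_ips)    # 250,000: if a *connection* were worth one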

    -- Spam Wolf, the best spam blocking vaporware yet! [spamwolf.com]
      If a single neuron could perform the equivalent of an instruction, then human brains would only be 100-1000 times more powerful than a modern desktop computer, probably less when you consider that they're more like a Beowulf cluster than a single powerful computer.

      You seem to imply, but not explicitly state, that the cell would only perform 1 operation/second. I'm guessing that it is much more than that.
  • Profitable AI (Score:1, Insightful)

    by Tablizer ( 95088 )

    It would be impressive if the game's AI coaxed the player into revealing whether they actually paid for the game or pirated it, and shut down if pirated.
  • by JanneM ( 7445 ) on Sunday March 31, 2002 @04:25PM (#3261568) Homepage
    When looking at AI and cognitive research, you really have to keep in mind that there are two different motivations at work in doing the research.

    One motivation - the one alluded to in the article - is to make stuff that gives the same behavior as humans (or whatever animal you are looking at). You don't really care whether your methods are biologically correct; you want things that work. Most of classical AI falls into this category.

    The other motivation is to figure out how we do things ("we" being animals in general). If the research ends up being useful in applications, great, but that's not the goal of the work. You really want models of how real brains solve problems, and these models may be far too incomplete or computationally intensive to be used in implementations, yet be perfectly fine for their intended use. A lot of cognitive science falls into this category.

    Game AI designers probably have a much richer mine of information and techniques in AI than in cognitive research, and they have so far been able to exploit that knowledge - as well as judicious 'cheating' - to make a compelling illusion. If/when they turn to cognitive science, however, the pickings will be slimmer and harder to use, as the methods and models aren't designed to solve any kind of real-world problems to begin with.

    /Janne
  • Differences (Score:1, Offtopic)

    by Xerithane ( 13482 )
    ...talks briefly about the differences between AI used in the game industry and the AI being researched in academic institutions.

    This is easy, one is a highly optimized tool for maximum destruction and domination that can be calibrated according to the environment it is placed in, and the other is just part of a video game.
  • by nickynicky9doors ( 550370 ) on Sunday March 31, 2002 @04:43PM (#3261630)
    Now that we've been told, yet again, how limited AI is, can we take away their moderator privileges? The AIs keep modding me offtopic... damn metaphorically challenged silicon... Oh no, here they come again...
  • In Cities and the Wealth of Nations, Jane Jacobs points out that many technologies were first introduced as toys.

    This posting [globalideasbank.org] quotes from the book to make this point.

    Most households were first introduced to computers by video games. It does not surprise me that the first introduction to AI for many people is computer games. I realize that spell checking and grammar checking, a form of AI, may be in many houses too.

    Even the military is using game-developed technology for combat simulators.

  • And why are they harboring a grudge?
  • by Anonymous Coward
    There are a lot of statements here putting game development, and games in general, in a bad light. While it is curious why this happens, perhaps this question should be asked: the question "how do researchers feel about their life's work being put into mere games?" should be asked as "how do researchers feel about their life's work being put into movies (or ANY other entertainment venue)?" It is sad that many still see games as kids' play. Sure, the end result sometimes is, but the industry itself is a multi-billion-dollar industry when you consider advertising, manufacturing (of hardware), and development of software.

    As someone who does not do games for a living, I find more and more that solutions developed for games can be more than useful in the 'serious' IT industry. Way back in the day, when 3D was still new to games, the simulator crowd was in high demand for game production (at least their experience and lessons learned were). Now it seems that more and more gaming solutions could provide elegant solutions in simulations and real-time distributed information systems. Take the MMOG / persistent-world creators: their experience in handling a ton of people with loads of information over the Internet, while minimizing lag, cheating (security), and synch problems, would be a great boon for MANY systems that are completely unrelated to games. Many in the 'serious' industry scoff at this. Funny thing is (I have done this, btw): do an experiment where you present architecture, algorithms, and personnel that can fulfill the requirements, and present it to someone 'up the chain'. They will like it and the ideas presented. Now try a month later, but add 'game creator/designer/developer' in the personnel places and mention that the algorithms are from the 'game world'. You will see a complete 180.

    That clearly shows that many put their knee-jerk emotions in front of rational business judgment, and should IMO be fired or put in non-decision-making positions. Use what works!

  • Oh, the disparity! (Score:2, Insightful)

    by Spezzer ( 101371 )
    It seems to me there is a large disparity in the kind of development between the two different kinds of AI investigation. Game AI, although more about the 'result' as stated in the article, has to be based upon the research done in academia. And while the article says academia could learn a thing or two by understanding what GAMES are using from AI, academia could then better focus, optimize, and even research better platforms for games to use. (This is just paraphrasing some of what the article might have said, including my own interpretation, if at all accurate.)

    What I've noticed is, since our knowledge of the human brain IS 85% speculation, we often use AI strategies to fake knowledge. I mean, for FPS bots, they have used paths and nodes to simulate familiarity and some order for the bot, but that still settles too much into a pattern, which is not necessarily very human.

    I guess my main concern is knowing exactly how far game AI trails the progress of academic AI, and when, if ever, the two will progress together.
  • by po8 ( 187055 ) on Sunday March 31, 2002 @05:17PM (#3261787)

    It was a pleasure for me, as an AI prof who does games-related research, to read this interview. IMHO Dr. Davis gave a brief but extremely accurate and informative sketch of the relationship between industrial AI and AI research. I wish that every "expert" publicly commenting about AI could be as insightful and honest.

  • by possible ( 123857 ) on Sunday March 31, 2002 @05:20PM (#3261794)
    I'm sick of people asking "When will we see widespread commercial application of AI?" AI researchers often cite the so-called "moving frontier" problem: as soon as an AI application becomes useful enough to solve real-world problems, it ceases to be known as AI and looks a whole lot more mundane.

    For example, computer vision: there are publicly-traded companies out there which have been doing machine vision for YEARS. These systems are used by all major chip manufacturers, most major paper and textile manufacturers, etc., to recognize and catch defects in products before they leave the assembly line. Cognex [cognex.com] is a $1B-a-year company; they exclusively do machine vision and visual pattern recognition for industrial applications.

    Another example of a company applying AI would be Virage [virage.com], which has several patents relating to image/video searching and indexing.

    Many investment houses use neural networks to profile and model investments, and plenty of large financials use expert systems and neural networks for data mining, employee profiling, and so on.

    Expert systems have been applied to computer security as well. Rapid 7 [rapid7.com] (my company) sells a network security scanner which uses the Jess expert system [sandia.gov] from Sandia labs. The value of the expert system is that it allows the product to use discovered vulnerabilities to further exploit the network, discovering more vulnerabilities, which enable more probes to be performed, and so on.
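    (The forward-chaining idea in miniature: assert facts, fire rules until nothing new follows. The facts and rules here are entirely hypothetical, and this is nothing like Jess's actual Rete engine or rule syntax:)

      facts = {("reachable", "web01")}
      rules = [  # (preconditions, conclusion)
          ([("reachable", "web01")], ("vuln", "web01")),   # scan finds a hole
          ([("vuln", "web01")], ("shell", "web01")),       # exploit it
          ([("shell", "web01")], ("reachable", "db01")),   # pivot to an inside host
          ([("reachable", "db01")], ("vuln", "db01")),     # ...and probe again
      ]

      changed = True
      while changed:              # forward-chain to a fixpoint
          changed = False
          for pre, post in rules:
              if post not in facts and all(p in facts for p in pre):
                  facts.add(post)
                  changed = True
      print(sorted(facts))        # the cascade the parent post describes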

      I'm sick of people asking "When will we see widespread commercial application of AI?" AI researchers often cite the so-called "moving frontier" problem: as soon as an AI application becomes useful enough to solve real-world problems, it ceases to be known as AI and looks a whole lot more mundane.

      Could it be because it was never AI to begin with? I am sick and tired of the GOFAI (good old fashioned AI) community pasting the AI label on every clever computer application out there so they can cover up their failure to come up with human-level AI. People are not stupid. They can tell the difference between automatic cruise control and HAL. The former is not AI, it's just a clever hack. The latter has real intelligence. Let's face it. The GOFAI research community has failed. They had no clue as to what intelligence is about when they started the field fifty years ago and they have no clue now. We need new blood and new ideas in AI research.
      • Cruise control could be very well formulated as an AI problem. There is sensor noise from the speedometer. There are uphills and downhills and different road conditions. In this case, it probably boils down to "just" a Kalman filter, but a Kalman filter easily qualifies as machine learning.
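        (A toy version of that reduction: a one-dimensional Kalman filter smoothing a noisy speedometer, assuming a constant-speed process model. All numbers invented, and far simpler than anything production-grade:)

          import random

          def kalman_speed(readings, q=0.05, r=4.0):
              # q: process noise (how fast true speed drifts); r: sensor variance
              est, var, out = readings[0], r, []
              for z in readings:
                  var += q                  # predict: uncertainty grows between ticks
                  k = var / (var + r)       # gain: how much to trust this reading
                  est += k * (z - est)      # update estimate toward the measurement
                  var *= 1 - k
                  out.append(est)
              return out

          noisy = [60 + random.gauss(0, 2) for _ in range(50)]   # cruising at 60
          print(round(kalman_speed(noisy)[-1], 1))               # settles near 60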
        • Cruise control could be very well formulated as an AI problem.

          One does not have to major in artificial intelligence at a university to write a cruise control program. Any competent programmer can write a program that accomplishes safe and effective cruise control. It's a simple coding problem that does not require fifty years of DARPA funding to figure out. The only intelligence we know is animal intelligence. Biological intelligence is what we should be trying to emulate. Human intelligence is general, scalable, and adaptive. I have seen nothing from the AI community that qualifies as true intelligence. All they have is hype and propaganda.
          • I know very few programmers outside of AI or mechanical engineering who can write a program to perform optimal cruise control given sensor/motor noise and unknown road conditions. There's more to it than "Read the speedometer. Accelerate if too slow, decelerate if too fast. Repeat."

            What if "optimal" were defined as some weighted combination making the ride smooth and conserving gas? Is that so trivial a program to write? AI researchers have solved many important problems over the years, and I don't consider it at all a disappointment that we are not even close to approaching human intelligence.
            • I know very few programmers outside of AI or mechanical engineering who can write a program to perform optimal cruise control given sensor/motor noise and unknown road conditions. There's more to it than "Read the speedometer. Accelerate if too slow, decelerate if too fast. Repeat."

              There is not much more to it than that. So don't try to make it look like it's a harder problem than it actually is. Any junior programmer can write a good cruise control program through trial and error.

              What if "optimal" were defined as some weighted combination making the ride smooth and conserving gas? Is that so trivial a program to write?

              Yes, it is. I've seen self-taught programmers do amazing things with a 1 MHz Apple II back in the early eighties. Things that are orders of magnitude more complex and clever than cruise control. They never claimed they were doing AI.

              AI researchers have solved many important problems over the years, and I don't consider it at all a disappointment that we are not even close to approaching human intelligence.

              It's worse than a disappointment. It's a pathetic failure. Many of us remember people like Minsky and others making outrageous claims about their ability to create human-level AI by the end of the last century. We all know about their promotion of the symbol manipulation and knowledge representation nonsense. It all turned out to be mostly worthless crap with little to do with intelligence. Those guys made it a point to ignore every significant advance in neurobiology and psychology over the last one hundred years. Talk about clueless!
              • by 2nd Post! ( 213333 ) <gundbear@pacbe l l .net> on Sunday March 31, 2002 @09:42PM (#3262935) Homepage
                In a very bottom-up, biological-intelligence way (animal reasoning), cruise control *can* be (I don't know if it *is*) structured as an AI thing.

                You have two analog controls, gas and brake. You have time: how long to brake, how long to accelerate. You have intensity, or magnitude: how much gas and how much brake pressure. And then you have current velocity, current RPM, current gear, and even mass to take into account, not to mention road curvature, road quality, and road grade (steepness).

                In this light, it's a very valid AI question. Can you create a system that maximizes fuel economy and ride quality? (You want to avoid extreme acceleration and deceleration, right?)

                I know for a fact that I can outperform my car's cruise control in mileage, performance, and ride quality. As long as I can perform better than my car, the car isn't being intelligent enough, and it is therefore an AI-quality problem.

                To be more precise:

                If you're on a downgrade and you're below the threshold speed, you can let the car coast and naturally accelerate. If you're above the threshold speed, you need to actually slow below the threshold speed to take into account the fact that there is acceleration as a factor. Or, instead of braking, the car can shift into a lower gear, alternating with braking, to ensure the brakes don't overheat.

                Then there's curvature. The car should actually decelerate going into a curve; it should do so more aggressively the tighter the curve, but as the driver starts straightening it should accelerate again. How much should it slow down? How much should it accelerate? It's not linear, but depends strongly on how banked the road is and what the road conditions are. Wet vs dry, or even icy, for example.

                Or going uphill. The car should accelerate to counter the speed drop, but should probably try to stay in the best gear, even if it means falling below the threshold for a while, because of fuel economy and power output. So it should accelerate somewhat, but be able to decide that staying in 5th at 70mph isn't nearly as good as dropping to 4th and going 63mph if the grade is steep enough. It should probably also be able to check engine temperature to gauge when to keep going 70mph and when to switch to a lower gear and drop to 63mph (a long, shallow grade vs. a small, if steeper, hill, for example).

                See, right now cruise control is really only good for straight sections of clear road, because not enough AI has been applied (and not enough AI is available) to deal with curvy, windy, uphill and downhill roads. That is actually a better place for AI to be used, since it lets the driver concentrate on where the car is going (not over the cliff, I hope)!
                • To make it even more AI-like, if the driver has certain preferences, the AI-cruise control should be able to pick up on those preferences.

                  If the driver goes into certain curves at 30mph, and other curves at 15mph, the AI should be able to tell the difference between those curves and, in the future, modify its behavior appropriately. I.e., a trainable AI.

                  As long as the problem can be phrased such that the human can do a better job than a computer (such as cruise control), it is an AI problem. Right now cruise control is merely computer-assisted driving, but it is in no way a solved AI problem.
                • Cruise Control is stupid. If you don't value your engine, try this:
                  Engage cruise control, then shift into Neutral.

                  I didn't wait to see what happens when you hit redline...

              I think you are missing the point. A well-written machine learning program can be incredibly short and simple, even less than ten lines of MATLAB code. But it requires a lot more than elite hacking skills to be able to prove mathematically that an algorithm, however simple, will perform optimally in the Bayesian sense.

                While you may not consider that to be a contribution, the same model may be used for more complicated things, such as piloting a spacecraft. I saw a talk the other day by someone who trained a program to hover a remote-controlled helicopter in place. It performed better than the leading human controllers.

                There's a lot more to AI than symbol manipulation. Knowledge representation is a very small subset of the field. Some researchers choose to concentrate on the small subfields of AI, and those fields have prospered, providing great advances in data mining, graphics and vision, theory, etc. Some researchers are very much interested in neurologically and biologically inspired computing. Calling AI researchers clueless for ignoring these areas just reveals your ignorance about AI and your fanaticism against it.
                  While you may not consider that to be a contribution, the same model may be used for more complicated things, such as piloting a spacecraft. I saw a talk the other day by someone who trained a program to hover a remote-controlled helicopter in place. It performed better than the leading human controllers.

                  None of it has anything to do with intelligence. Again, to be intelligent, a program must not only be able to learn but it must learn anything and everything, not just a limited domain environment. But that is not all. It must be motivated and adapt to reward and punishment.

                  There's a lot more to AI than symbol manipulation. Knowledge representation is a very small subset of the field.

                  Neither symbol manipulation nor knowledge representation has anything to do with intelligence, since the only intelligence we know of (biological intelligence) doesn't use either. First and foremost, intelligence has to do with discrete, temporal signal processing. The biological evidence is clear on this issue: neurons generate discrete signals, or spikes. Second, intelligence has to do with motivation; i.e., it must react properly to reward and punishment. This is what psychology has taught us for the last one hundred years. The GOFAI crowd is not listening.

                  Calling AI researchers clueless for ignoring these areas just reveals your ignorance about AI and your fanaticism against it.

                  They're worse than clueless. They have wasted the taxpayers' money for fifty years. And, for the record, I am not against AI. Why lie about it when I have a site that promotes AI? On the contrary, I am trying to wake people up to the fact that they are being taken to the cleaners by a bunch of clueless career propagandists. If you don't like my opinion on the matter, all I can say to you is that that's too bad. I exercise my freedom of speech the way I see fit.
                  • Your argument that biological intelligence does not use symbolic manipulation is hollow. It's like saying computer intelligence does not use symbolic manipulation because it's just a bunch of transistors.

                    There has been plenty of great research in reinforcement learning. Isn't that what you mean by rewards and punishments?

                    People have also tried to make neurologically inspired models for AI. Note that the transistor is even abstractly similar to the neuron. The AI community is in much better shape than you think.
                    • Your argument that biological intelligence does not use symbolic manipulation is hollow. It's like saying computer intelligence does not use symbolic manipulation because it's just a bunch of transistors.

                      Thanks for making my point for me.

                      There has been plenty of great research in reinforcement learning. Isn't that what you mean by rewards and punishments?

                      If it is not done within the context of signal processing, i.e., by using spiking neural networks, it's crap. Spiking or pulsed networks did not come from the AI community, BTW. They came from the computational neuroscience community, which is part of neurobiology. So don't try to take credit for something the GOFAI community has pretty much ignored for fifty years.

                      People have also tried to make neurologically inspired models for AI. Note that the transistor is even abstractly similar to the neuron.

                      The only neural networks that came from the GOFAI community (after Marvin Minsky and Seymour Papert tried to put a monkey wrench in neural network research) are the so-called ANNs. ANNs are a pathetic joke. They bear little resemblance to biological neurons. Unless an AI researcher realizes that intelligence has to do with the temporal relationships between discrete signals, he or she is not doing AI. He or she is just spitting against the wind.

                      The AI community is in much better shape than you think.

                      I doubt that very much. Only a blind fool would believe in the AI community after fifty years of abject failure. They still don't have a clue, and they don't seem interested in getting one either. Just recently, MIT Technology Review published an article titled "AI Reboots" which is a PR piece for GOFAI guru Doug Lenat and that database, symbolic-representation monstrosity of his called Cyc.
  • For my senior CS project on AI, I studied distributed intelligent systems. Autonomous agents working toward a common goal as a team must be able to collaborate in order to be efficient. In a real-world situation, these agents may not be able to communicate. They must be able to dynamically figure out their roles, and change them if necessary. This requires each agent to observe other team members and guess what their plan is. I used robotic soccer as an example. Given a situation (enemy density and location, ball position/direction), each player must figure out where they should be and what role they should take on in what they decide to be the optimal configuration. Except for the fact that real soccer players have specified roles, this is close to a real-world situation.


    This is, of course, computationally expensive. In the video game case, the program must run smoothly in order for the computer to be a significant opponent. A typical team of computer-controlled opponents tends to share information as if telepathic. If all agents (soccer players) have a shared knowledge base, the team can easily be a tough opponent. The computer must often "cheat" in this way, simply to make the game interesting.


    For right now, computers are not fast enough to handle the AI with more integrity. The bottom line is that a video game has to be fun. In academia, we are able to put more time into things that are not immediately useful in order to better understand real AI. Of course, in the soccer video game situation, the human player also acts as a shared knowledge base for its team, as it controls all of them. In a game like a multi-player shooter, however (ignoring the chatting option), this is more applicable: it is unfair for each computer player to be able to divine the intent of its team members as if controlled by an overmind. Applying this research to video games would result in better realism, provided the CPU could handle it. For now, it would simply not make for a very interesting game. Still, shared knowledge is an interesting problem in AI, and a lot of the work that has been done is quite good. But we do have a long way to go.


    This research would apply to systems other than video games, where each agent may work under a different protocol. Each situation is different, though. Often there will be a standard communication protocol, but sometimes it may break; the distributed system should not cease functioning in this case. Examples are automated military systems, network routing, manufacturing plants, and clustered computing.
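    (A toy version of the role-assignment step: every agent computes the same greedy assignment from shared observations, so no messages are needed. Positions, roles, and the nearest-player utility are all invented for illustration:)

      import math

      def assign_roles(players, roles):
          # Deterministic order: identical inputs give every agent the same answer.
          assignment, taken = {}, set()
          for role, target in sorted(roles.items()):
              best = min((p for p in players if p not in taken),
                         key=lambda p: math.dist(players[p], target))
              assignment[role] = best
              taken.add(best)
          return assignment

      players = {"p1": (0, 0), "p2": (5, 2), "p3": (9, 5)}
      roles   = {"attacker": (10, 5), "defender": (1, 0), "midfield": (5, 3)}
      print(assign_roles(players, roles))  # each player claims the nearest open role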

  • Not sure how useful embedded A.I. logic in gaming can be to academia, but using a game engine as a platform for developing and testing A.I. offers a lot of potential.

    My research [greatmindsworking.com] involves modeling human language acquisition, grounded in "visual" experiences. While I'm pretty much developing a crude vision system from scratch for my prototype (because I want to use some real video), my next step will be to try the same logic inside a game engine. With a game engine, I can query exact details of objects and their motions without the great complexities of a computer vision system.

    Until computer sensor systems catch up, game engines provide a wonderful opportunity for testing A.I.

  • Industry is frequently shortsighted and cannot spend the research time developing new techniques... the more academia knows about what sorts of problems people are trying to solve in the real world, the better they'll be able to focus their research on methods that have nearer-term results.

    It might be a good thing if game developers could fund academic work. No single game developer could afford to fund a project to solve any particular problem, but financial mechanisms have been described (1 [openknowledge.org] 2 [geocities.com]) to allow game developers to jointly fund research to produce results sharable by the entire industry.

    The software completion bond idea has not yet been attempted AFAIK. Certainly it has no well-known success stories. Maybe this would be a good place to try it.

  • For the most part, academic AI falls into the category of engineering optimization: how can we design object X, using neural nets, evolutionary computation, logic reduction, etc., on Beowulf clusters, using weeks of computation, so that it will perform well under condition Z in the real world?

    Game AI, however, is based on the universe created inside the game, is mostly aesthetic, and is usually done in real time.
