AI in Video Games vs. AI in Academia
missingmatterboy writes "Dr. Ian Lane Davis, AI researcher turned game development studio head, talks briefly about the differences between AI used in the game industry and the AI being researched in academic institutions. A short read but you may find it interesting."
Both encourage each other (Score:1)
First real AI from entertainment (Score:2)
A differing perspective (Score:4, Insightful)
"I don't think all those AI coders out there are thrilled by the idea that their lifes work is used for games
Maybe they're thrilled, maybe they aren't. Aside from conducting interviews with the researchers themselves, we really don't have any way of knowing. That's sort of beside the point, though.
I think the simple fact of the matter is that both applications probably benefit each other, although possibly not in the way most people might think. When I started out programming, a lot of my initial projects were focused on game development. A recurring theme in my thinking was ways to make the computer opponent "smarter", which naturally led me to wonder how I could make the computer learn new tactics and adapt to the human player's actions. As I quickly learned, adaptive systems research is serious stuff.
So, I decided to dig into whatever materials I could get my hands on related to artificial intelligence research and theory. To be honest, I never really got very far, but it remains an interest of mine to this day. I'd be willing to bet some of tomorrow's leading AI researchers are playing video games today. That seems like a pretty good benefit to me.
I guess the key point is this: if a particular application of a certain technology gets people excited about it, and interested in researching it, it's a Good Thing.
Re:A differing perspective (Score:2)
Well let's start the survey then. I have a PhD in AI (actually machine learning) and I published several papers before I went off into the commercial software development world. I would have been absolutely delighted for my research to have been used in a game.
I know that most of the PhD students I worked with were heavily into games, particularly LPMUDs and other virtual environments (this was back in the late 80's and early 90's). Several of the ideas that eventually became PhD theses at my school were initially prototyped in our own MUD.
As I quickly learned, adaptive systems research is serious stuff.
You ain't kidding. After six years of full-time research I had just begun to get into the field seriously. Fascinating stuff which I am still working on, even if in a non-academic context.
Re:Both encourage each other (Score:1)
Of the professors I know who study AI and similar fields, most are into game AI in one form or another, be it Wumpus-playing bots or chess AIs. Some may prefer the added challenge of robotic AI, but then they use the robot either to drive around and pester visitors or to play robotic soccer.
If you pursue a 'fun' idea you will have a much better time as you try it out. And it is much friendlier to demonstrate to other people than something boring.
Furthermore, it's my personal opinion that people who are too serious to play are generally very dull to be around and don't contribute much, particularly not in a field such as AI, which is still a very 'immature' (i.e. new and largely unexplored) field.
Is there any use for today's AI? (Score:1)
Re:Is there any use for today's AI? (Score:2)
Re:Is there any use for today's AI? (Score:1)
Re:Is there any use for today's AI? (Score:1)
Useful in industry, in nuclear power plants, and for monitoring patients at hospitals, to name a few examples.
You could argue that this isn't "AI". OTOH nothing we know how to do is really "AI".
Re:Is there any use for today's AI? (Score:2)
Google! (Score:3, Informative)
Some examples everyone can relate to:
Real-time spell and grammar checks in MS Word with autocorrection.
Pathfinding: Mapquest uses it. Your cable-modem router uses it too. (A minimal sketch follows after this list.)
Fuzzy logic: An oven that hovers 1 or 2 degrees around the target temperature instead of going 5 degrees above the target and then shutting off until it falls 15 degrees below the target.
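Pathfinding in particular is easy to show concretely. Here's a minimal A* sketch on a 4-connected grid with unit step costs - the same basic idea a mapping service or a game builds on, not any particular product's code:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid. grid[r][c] == 1 means blocked."""
    rows, cols = len(grid), len(grid[0])

    def h(node):  # Manhattan-distance heuristic
        return abs(node[0] - goal[0]) + abs(node[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f, g, node)
    came_from, best_g = {}, {start: 0}

    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node == goal:                  # reconstruct the path
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dr, node[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not grid[nxt[0]][nxt[1]]:
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt], came_from[nxt] = ng, node
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable

grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))   # walks around the wall of 1s
```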
Re:Is there any use for today's AI? (Score:3, Insightful)
Natural language translation from French to English
Diagnosis and treatment of disease
Data mining
Texture synthesis
Making a helicopter hover still in the air
Robotics is interesting in that it takes the holistic (Rod Brooks) view of AI: a robot needs sensory systems, control systems, a planner, etc.
Re:Is there any use for today's AI? (Score:1)
beverly explained everything
Re:Is there any use for today's AI? (Score:1)
Think about it; there's almost no problem that can't be solved by hardwiring the laws below into any AI robot:
1) The robot must do what will preserve human life.
2) The robot must obey a human's command.
3) The robot must preserve itself.
The laws are listed in order of precedence. Thus, if two laws conflict, the first one takes priority.
E.g., if a robot has to choose whether to sacrifice itself in order to save a human, it must sacrifice itself, violating law 3 in order to satisfy the first law.
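The precedence part is trivial to express in code. A toy sketch, assuming each law is just a yes/no check on a proposed action (all the names and action fields below are made up for illustration):

```python
# Toy precedence resolver. Each "law" is a predicate over a proposed action;
# a lower index means a higher priority. All names here are hypothetical.
def preserves_human_life(action):
    return not action.get("endangers_human", False)

def obeys_human_command(action):
    return not action.get("disobeys_order", False)

def preserves_self(action):
    return not action.get("destroys_self", False)

LAWS = [preserves_human_life, obeys_human_command, preserves_self]

def first_violation(action):
    """1-based index of the highest-priority law the action violates, or None."""
    for i, law in enumerate(LAWS, start=1):
        if not law(action):
            return i
    return None

def choose(actions):
    """Prefer an action that violates no law; otherwise pick the one whose
    worst violation is the lowest-priority law (largest index)."""
    return max(actions, key=lambda a: first_violation(a) or len(LAWS) + 1)

# The self-sacrifice example: violating law 3 beats violating law 1.
do_nothing = {"endangers_human": True}
sacrifice  = {"destroys_self": True}
print(choose([do_nothing, sacrifice]))   # -> the 'sacrifice' action
```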
Re:Is there any use for today's AI? (Score:1)
Re:Is there any use for today's AI? (Score:1)
Re:Is there any use for today's AI? (Score:2)
Re:Is there any use for today's AI? (Score:3, Insightful)
There's a minor problem with that statement, which is that robots aren't nearly bright enough to do any of those things. Not yet, anyway.
For the robot to be able to preserve human life, it must first be able to recognize humans reliably; then it must have a sophisticated situational awareness to understand in what cases a human's life might be in danger, and further, it must be smart enough to understand in what ways that perilous situation might be averted.
For the robot to obey a human's command, it must first be able to accurately interpret that command. Speech and speaker recognition are getting better, but they aren't there yet. And for the robot to again have the situational awareness to know what it is doing and what the results of its actions will be (including whether they endanger a human, as above), it is going to need to be much smarter than anything you can point to today.
Just recognizing humans reliably is a problem. The situational awareness part won't be happening any time soon. Asimov's laws require robots to be a hell of a lot smarter than they are today. By the time robots are smart enough to actually do these things, I'm not sure we'll even care about Asimov's laws (an actual set of ethical values might be a good substitute; hell, it works on humans, sometimes, anyway).
Re:Is there any use for today's AI? (Score:2)
Re:Is there any use for today's AI? (Score:4, Funny)
Re:Is there any use for today's AI? (Score:2, Insightful)
Re:Is there any use for today's AI? (Score:2, Interesting)
And what about the situation that gets trotted out in every ethics class, which illustrates one of the difficulties of utilitarianism: the robot can best preserve total human life by destroying some human life? In a time of hunger and mass starvation, it decides that humans would best be served if it and its brethren killed 10% of the population to feed the rest. Easier to imagine, it decides that human life would best be preserved if all rednecks and christian fundamentalists were wiped off the face of the earth -- in the U.S. at least. You can say that the 2nd law could be invoked, but it clearly conflicts with the 1st law in both of these -- and millions of other -- cases, and the 1st would take precedence.
These 3 rules are incredibly simplistic. If ethics were this simple, there would be no discipline of ethics within philosophy, and our courts would never have to deal with questions of ethics, only with those who break the ethics enshrined in laws.
hey now (Score:5, Funny)
'Dost thou love me?'
'no'
'But thou must! Dost thou love me?'
'no'
'But thou must! Dost thou love me?'
*sigh*
'yes'
'I'm so happy!' *Cue music*
Re:hey now (Score:4, Funny)
Nah, girls just learn slowly... (Score:1, Informative)
They also made it so that if you said "Yes" to the Dragonlord, you wake up in Rimuldar thinking it was a dream.
RTS AI (Score:1)
I find this particularly true for RTS games like Red Alert 2 and Age of Empires 2. I'm sooo hoping that Warcraft 3 has good, worthwhile AI and won't get stuck on trees or something stupid.
p.s. that is one short interview.
I loved Red Alert against the computer. (Score:2)
*shakes head* Those games are so easy once you figure out the computer's behaviour.
Another one I love is Homeworld. "Let's send our entire fleet straight at the player mothership! Hmm, what are these little things? Mines? Dunno what those are, let's just plow through."
Is it just me, or is playing defensively the best way to win those games?
Re:RTS AI (Score:2)
Master of Orion 3 [moo3.com] is a turn-based strategy game with plenty of AI. And the AI is good enough that the point of the game is to macromanage your empire and leave the micromanaging to the AI.
The current release date is 3rd Quarter 2002
A new DesCartes: (Score:4, Funny)
Re:A new DesCartes: (Score:2)
Dear Mr. Flagran,
We're sorry, but you have infringed on our client's intellectual property. Everyone knows that "Grumpy" is owned by the Wonderful Disney Corporation (Motto - "We own you!"). Please stay where you are, and the copyright police will be there to arrest you in just a moment.
Sincerely,
Dewey, Cheatham, and Howe, Attorneys at Law
Re:A new DesCartes: (Score:2)
They're renowned as an aggressive, cut-throat, ruthless law firm. You just don't wanna tangle with them.
Disney is a schizophrenic organization. On the public face, it's all happy Magic Mountains and shit; on the private side, it's a mean-spirited regime that abuses employees and owns the government.
Re:A new DesCartes: (Score:2)
What you are describing is a phony turd; if you think that is normal human behavior you have done a great job of telling everybody what you are.
AI shall be banned (Score:3, Funny)
It has come to my complete attention that every advancement in the application and development of AI has proved to assimilate all of mankind, beginning with seizing datalinks. Many Hollywood producers have exemplified this theory with such movies as Terminator 2, The Matrix, and Fraggle Rock. To prevent AI from developing and overthrowing the world, thus seizing Network Associates Inc.'s Internet, I must speak on behalf of all the people of the world. It is mine and Network Associates Inc.'s intention to prevent devastation of the world by shutting down the Internet. It is the only way to prevent AI from communicating with itself. We at Network Associates Inc. would like to extend our helping hands and apologize for any difficulties you may experience after we shut down our Exodus servers and spread a digital wire burn to remove all the data links. We at Network Associates Inc. are pioneers in communication and security: our solution for this disruption in service is to evolve a new transport medium. Please sign up for the Pony Express today! The Internet shall be shut down and the Pony Express reinstated on August 29, 2029. Thank you for your time.
Sincerely,
Bob
For more details, please visit our mirrored website. [geocities.com]
Re:AI shall be banned (Score:2)
What do you mean prevent?
Judging by the way they own me in Counterstrike, they already do.
New consumer complaints (Score:2, Funny)
Is game AI "real" AI? (Score:4, Insightful)
This seems a bit much, even for Wired. The creatures in these games are following a predefined set of rules -- certainly a complex set of rules -- but the way they "learn" is entirely predetermined (that is, what they learn depends on what they are exposed to, but the formula for converting exposure into knowledge is set by the game designers). I think the fact that the graphics are rendered so realistically makes it easier to make the leap to thinking they are really acting "intelligent."
Who knows what really sets human intelligence apart -- the ability to make rules, nondeterministic memory, whatever -- but it seems evident (to me, in my ever-so-humble opinion) that these creatures don't have it.
- adam
Is "real" AI "real" AI? (Score:3, Interesting)
Strong AI says that it's entirely possible to make computer programs that think and feel just like humans. After all, all human thought is the result of chemical processes which obey the laws of nature and can thus be described algorithmically.
Weak AI says that it's impossible to ever create a computer program that really thinks and feels and loves and hates like a human. The best we can hope for is to simulate these thoughts to create a close approximation.
Of course no computer system out there today can recreate the complexity of the human brain.
Re:Is game AI "real" AI? (Score:2, Interesting)
I found that injecting a bit of randomness often made it look like the AI was 'learning' - but it isn't. For example, a first cut at a best-path routine often got stuck bouncing between two points. Solution? Add a bit of randomness. The AI didn't 'learn' anything by getting stuck; it just tries something odd now and then, making it appear to have learned how to get unstuck.
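To make that concrete, here's a minimal sketch of the effect, assuming a purely greedy mover on a grid: without noise it ping-pongs forever in front of a wall, and an occasional random move breaks the cycle. Illustrative only, not any actual game's code:

```python
import random

GOAL = (0, 5)
BLOCKED = {(0, 2), (1, 2)}          # a short wall between start and goal

def dist2(p):
    return (p[0] - GOAL[0]) ** 2 + (p[1] - GOAL[1]) ** 2

def neighbors(p):
    return [(p[0] + dr, p[1] + dc) for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (p[0] + dr, p[1] + dc) not in BLOCKED]

def run(epsilon, max_steps=200):
    """Greedy mover toward GOAL; with probability epsilon it takes a random
    legal move instead. Returns the step count on arrival, or None."""
    pos = (0, 0)
    for step in range(max_steps):
        if pos == GOAL:
            return step
        moves = neighbors(pos)
        if random.random() < epsilon:      # the occasional "odd" move
            pos = random.choice(moves)
        else:                              # otherwise pure greedy
            pos = min(moves, key=dist2)
    return None

random.seed(1)
print("pure greedy:", run(epsilon=0.0))   # None: it bounces in front of the wall forever
print("with noise :", run(epsilon=0.2))   # reaches the goal after a few random kicks
```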
Re:Is game AI "real" AI? (Score:2)
The classic blockworld problem was creating a plan to stack three blocks which required some work to be "undone" before reaching the goal.
The catch though is it was a very rigorous treatment, and it was a very elegant paper because it distilled one of the difficulties in planning to its simplest case.
Game AI can focus on problems that are as simple, but the goal is different -- the goal is simply to make the game enjoyable for the player. So the game AI can cheat (have access to more information) or be hard-coded.
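For anyone who hasn't seen it, the three-block configuration being described is usually called the Sussman anomaly. A quick sketch of the state, assuming the usual block-on-block representation:

```python
# The three-block "Sussman anomaly" in the usual on() representation.
# Initial state: C sits on A; A and B sit on the table.
initial = {"C": "A", "A": "table", "B": "table"}   # block -> what it rests on
# Goal: A on B, and B on C.
goal    = {"A": "B", "B": "C"}

# Why a naive subgoal-at-a-time planner must undo its own work:
#  * Tackle "A on B" first: unstack C from A, stack A on B. Now B is buried
#    under A, so achieving "B on C" means taking A off B again.
#  * Tackle "B on C" first: stack B on C (C is still sitting on A). Now A is
#    buried under the C+B pile, so achieving "A on B" means unstacking B from
#    C again.
# Either ordering forces the planner to destroy a subgoal it already achieved,
# which is the difficulty the paper distilled to its simplest case.
```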
Re:Is game AI "real" AI? (Score:2, Interesting)
Re:Is game AI "real" AI? (Score:2)
The real question is, "Is game AI 'real' intelligence?" This is logically impossible, as Artificial Intelligence is essentially an oxymoron. A better term for the science would be PI, Perceived Intelligence. Even Deep Blue, which "beat" Kasparov, was just an oversized chess calculator with a ton of relevant algorithms running at insane speeds.
Re:Is game AI "real" AI? (Score:3, Insightful)
Who knows what really sets human intelligence apart -- the ability to make rules, nondeterministic memory, whatever -- but it seems evident (to me, in my ever-so-humble opinion) that these creatures don't have it.
An insightful post. But you fail to ask the more important question, which is: do humans think in a non-deterministic way? People tend to assume that they do, but there isn't enough evidence for us to draw that conclusion. One of the interesting results from computer science (not just AI) research is emergent behavior - a system made up of many simple, deterministic rules can behave unpredictably. Just because each rule can be understood does not mean that the behavior of the system of rules is predictable.
For a non-computer example of this phenomenon, look at fluid dynamics and chaotic systems, where immensely complex behavior is observed in systems that can be described with relatively simple, and completely deterministic, mathematics.
This result at least points to the possibility that the human brain is a deterministic information system that displays complex, essentially non-deterministic behavior. If (and it's still an "if") this is true, then modelling intelligent behavior with deterministic, rules-based computer systems may be a very good approach.
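A concrete toy version of that point: the logistic map is a one-line, completely deterministic rule, yet in its chaotic regime two nearly identical starting points diverge almost immediately:

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n): one deterministic rule,
# yet for r = 3.9 nearby trajectories diverge rapidly (chaos).
def trajectory(x, r=3.9, steps=30):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.400000)
b = trajectory(0.400001)   # a one-in-a-million change in the starting point
for n in (0, 10, 20, 29):
    print(n, round(a[n], 4), round(b[n], 4), "diff:", round(abs(a[n] - b[n]), 4))
```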
Open Source AI? (Score:1)
How about this: post a "quick reaction" test, where you have to read the question and reply within 5-10 seconds. Since we don't know how humans think, we could at least pool together the end-results of a million people's first thoughts on certain subjects/arrangements/pattern learning/etc. and mine the data for any interesting trends. Perhaps having such a large sample would help in certain aspects of AI research?
Re:Open Source AI? (Score:2)
As for your H-1B status: I sense some bitterness. What did you expect? Either return home, where you could be of use in your own nation's research, or swing it out for citizenship, at which time you can join US research.
If you want answers about AI, (Score:3, Funny)
I submit that AI is already good enough to substitute for most human interactions.
Re: If you want answers about AI, (Score:2)
Re: If you want answers about AI, (Score:2)
AI will never be a reality (Score:1, Troll)
Biological neurons have been shown in the laboratory to grow new connections based on information learned. In a robot, what possible mechanism could guide such growth? Programming is the only answer, but keep in mind that "programming" is just shorthand for "the intelligence of the programmer". In other words, the AI itself isn't self-contained, as it were.
There is no other way for "mental" activity to be guided, thus AI will always be as unattainable as the Philosopher's Stone.
Re:AI will never be a reality (Score:2, Insightful)
repetition simulates growth. just wait until enough repeated events occur to form a solid connection.
mental activity is unguided. there is no reason to guide a self-organizing system based on chaos. it just self-organizes. does anyone "guide" a tornado forming? the rules are there, let chaos theory do the rest.
Re:AI will never be a reality (Score:2, Insightful)
If we hardcode its learning ability then, yes, I agree with you in the sense that we will never get anywhere, because we have crippled it from the start.
If, however, we create something that has the ability to adjust and even rewrite its own code and draw conclusions from information that is not directly related (i.e. infer), and if we give it a very limited basic set of 'rules' to follow at first, then doesn't the possibility exist that it could eventually 'bootstrap' itself into something more than what we created?
Re:AI will never be a reality (Score:3, Insightful)
Wow. All the brilliant people at MIT and a dozen other world-class research institutions have been plugging away at this problem, and you managed to figure it all out after a couple of semesters of Lisp. Bravo. Well done. When it's time to accept your Nobel Prize for this remarkable insight, I hope you won't embarrass all the other new laureates by pointing out that all their research was bunk as well. They'd be crushed.
Neurons do not "learn" information in any deep, metaphysical, cogito ergo sum sense. They simply grow and develop based on the inputs they receive.
Is this one of those, "Well, duh" points? Of course it is. You realize this fact as well as I do. But you ignore its implications. There's nothing impossible about creating a software-based "neuron" that can receive inputs, alter itself in response, and then propagate signals to other neurons. Such a construct would be too complex for a programmer to maintain on anything but the highest levels. Therefore, it could not be described merely as a mundane codification of the programmer's intelligence.
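As a toy illustration of that last point, here's a minimal software "neuron" that adjusts its own weights from the examples it receives - a plain perceptron update, not a biological model:

```python
import random

class Neuron:
    """A toy neuron: weighted sum of inputs, threshold activation, and a
    simple error-driven weight update. It alters itself in response to
    inputs rather than being hand-coded for the task."""
    def __init__(self, n_inputs, lr=0.1):
        self.w = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.b = 0.0
        self.lr = lr

    def fire(self, x):
        return 1 if sum(wi * xi for wi, xi in zip(self.w, x)) + self.b > 0 else 0

    def learn(self, x, target):
        err = target - self.fire(x)
        self.w = [wi + self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * err

# Teach it the logical AND function purely from examples.
random.seed(0)
n = Neuron(2)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(50):
    for x, t in data:
        n.learn(x, t)
print([n.fire(x) for x, _ in data])   # expected: [0, 0, 0, 1]
```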
The biochemical processes by which intelligence arises in humans, however complicated, are irrelevant in theory. Computing is going on inside your skull, and a Turing machine can properly perform any computation devisable. I believe it's only a matter of time.
Despite what your many many weeks of Lisp programming might have taught you, AI already exists in many forms, and these systems are already doing things thought to be solely the purview of wetware as little as a decade ago. I think the situation within AI right now is analogous to biochemistry back when vitalism was in vogue (18th century, IIRC). Everyone thought that there was something unique and downright supernatural about the chemistry of life. It was even said that no organic molecule would ever be synthesized in a laboratory. Then someone synthesized a really simple molecule--possibly formic acid. Eventually, Watson and Crick came along, and these days nobody in the field would entertain the claim that something in biochemistry can never be understood in principle.
You're fighting a losing war. Join the Dark Side. We're right, we're winning, and all the hot chicks are over here.
PhysicsGenius. Heh. Troll handle if I ever heard one.
AI in video games (Score:4, Interesting)
Easy - Computer player doesn't cheat
Medium - Computer cheats and always knows where you are or what you are doing
Hard - Computer cheats and is allowed to break the rules.
If game programmers spent more time writing smart (as opposed to cheating) computer opponents and less time trying to get 10 million more polygons on the screen, today's games might actually be worth buying.
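The difference being complained about is easy to see in code. A toy sketch (all names hypothetical) contrasting an opponent that simply reads the player's true position out of the game state with one that only acts on what it has actually observed:

```python
import random

class CheatingOpponent:
    """'Medium' difficulty as described above: it just reads the truth."""
    def aim(self, game_state):
        return game_state["player_pos"]

class FairOpponent:
    """A non-cheating opponent: it only learns the player's position when it
    has line of sight, and otherwise works from a stale, noisy estimate."""
    def __init__(self):
        self.last_seen = None

    def aim(self, game_state):
        if game_state["line_of_sight"]:
            self.last_seen = game_state["player_pos"]
        if self.last_seen is None:
            return None                      # no information yet: go patrol instead
        x, y = self.last_seen
        return (x + random.uniform(-2, 2),   # guess near the last known spot
                y + random.uniform(-2, 2))

state = {"player_pos": (10.0, 4.0), "line_of_sight": False}
print(CheatingOpponent().aim(state))   # always exact, even through walls
print(FairOpponent().aim(state))       # None: it genuinely doesn't know yet
```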
Re:AI in video games (Score:2, Funny)
Easy - I beat it! Dumbass computer.I'm ace.
Medium - It beats me. It was hiding in the right place, even though I'm too clever for it. Cheat.
Hard - It doesn't just beat me, it humiliates me. Fucking thing is broken. Who coded this shit. It was so cheating.
Maybe that's just me...
Re:AI in video games (Score:1)
An AI that thought it was Napoleon or Genghis Khan would rock some ass
Re:AI in video games (Score:1)
However, your accusation that game programmers misdirect their effort is mistaken. Real AI is hard. You can't code human intelligence into a computer any more than you can fit an ocean into a quart jug.
Therefore, the best games use *real* *humans* that you can actually *play* against. Amazing, I know. E.g. LAN or Internet play or split-screen.
This way the computer can concentrate on computery type stuff while real people provide the real people type intelligence.
Academia AI and Game AI (Score:4, Informative)
On the other hand the game industry hasn't really used a lot of the research academia has come up with. It would be really cool to see some text-to-speech stuff in games. That would probably make the dialogue in games a whole lot better.
PK
Re:Academia AI and Game AI (Score:2)
Which games are state of the art right now? (Score:2)
not really AI in (most) games (Score:1)
(a) "bots" like those in quake, half-life etc. - have a knowledge of where the player is, make the bot face the player and shoot etc. Characterised by them having little or no weapon selection - all of the opponents have only one weapon which they use exclusively. Some have varying tactics, but these usually fall back on range - i.e. shoot from far away, claw at face at point blank. There are small variants to this rule - the marines in half-life for example, throw grenades if you run round corners away from them.
(b) Mass tactics. Games like Dune, starcraft, etc. Build units in the right order, building placements and building rates pre-conceived by the designer at the level- or engine- design stage. Attack in the same way, defend in the same way. No variety, don't deal well with players switching tactics.
There is very little intelligence here - compare with chess games which actually think about the consequences of their actions at "run-time".
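Here's the sketch promised in (a): the kind of hard-coded, range-keyed tactic selection being described, with all names hypothetical:

```python
def pick_tactic(distance, line_of_sight, player_just_broke_cover):
    """Typical hard-coded bot behaviour of type (a): the 'tactic' is keyed
    almost entirely on range. No planning, no weapon choice, no learning."""
    if player_just_broke_cover:
        return "throw_grenade"        # the Half-Life-marine trick: flush out a hider
    if not line_of_sight:
        return "patrol"
    return "claw" if distance < 2.0 else "shoot"

for d in (1.0, 15.0):
    print(d, pick_tactic(d, line_of_sight=True, player_just_broke_cover=False))
print(pick_tactic(10.0, line_of_sight=False, player_just_broke_cover=True))
```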
Re:not really AI in (most) games (Score:2, Interesting)
Humans are really good at recognizing patterns. Computers find this hard. So in games that involve lots of objects that implicitly form some larger structure (units that form armies, buildings that form cities, mountains that form mountain ranges, etc.) humans will have an advantage in that they see the larger structure, while the computer sees the individual objects and can only guess at larger structures.
Computers are good at micromanaging individual objects, while humans get tired/bored of it.
So you often end up with humans winning because of strategy or computers winning because of brute force (perhaps because their cities/units are more efficiently managed).
An additional problem is that the human can not only see the patterns in the game, but also the patterns in the computer play. Once you see the pattern, you work out a strategy to beat it. Having a computer reason about strategies is hard.
One thing I've wondered about is whether we should be designing games that take into account the computer's strengths and weaknesses. The problem is that I don't want to play a game that's geared towards the computer's strength (lots of micromanagement). But there could be other things that could help the computer play better. Kohan for example has explicit groups of units. It's more convenient for the human to deal with. Does it also help the computer AI play better? Hmm.
- Amit
Re:not really AI in (most) games (Score:1)
The problem is that you can't build "game trees" (a method of listing out all possible moves available from all previous moves; an AI would then select the most advantageous game sub-tree for itself by making the "smartest" move, the smart move being the one with the largest number of "victory leaves" in its sub-tree) or map all possible games for a Dune- or StarCraft-style game in a way that makes practical sense... so you can't get chess-game-style AI from them, because the game itself is too flexible.
Of course, I'm making a lot of assumptions in that statement.
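For contrast, here is the game-tree idea in its simplest form - plain minimax over a toy tree whose leaves are scored win/loss. This is exactly the search that explodes combinatorially for a Dune- or StarCraft-sized state space:

```python
def minimax(node, maximizing, children, value):
    """Plain minimax over a game tree: children(node) lists legal moves,
    value(node) scores terminal positions (+1 = win for the maximizer)."""
    kids = children(node)
    if not kids:
        return value(node)
    scores = [minimax(k, not maximizing, children, value) for k in kids]
    return max(scores) if maximizing else min(scores)

# A toy tree: two moves from the root, each leading to two terminal leaves.
tree = {"root": ["L", "R"], "L": ["LL", "LR"], "R": ["RL", "RR"],
        "LL": [], "LR": [], "RL": [], "RR": []}
leaves = {"LL": +1, "LR": +1, "RL": +1, "RR": -1}

best = max(tree["root"],
           key=lambda move: minimax(move, False, lambda n: tree[n], leaves.get))
print(best)   # "L": its whole subtree is a win even if the opponent plays best
```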
Different goals (Score:3, Insightful)
I did some AI work for games and VR (Score:2, Informative)
I have been working (mostly) in AI since the 1980s, but by far, the most fun I have had was working on AI at Angel Studios for Nintendo and Disney-Quest.
Not much "AI" though really. I started out with complicated multi-agent stuff - and that did not have a happy ending. For realtime games and VR, simple stuff worked (e.g., in a VR environment, have animals snap their head around and stare briefly at you when you come into their environment).
A few years ago, I wrote up a short paper on games and AI that is available at www.markwatson.com [markwatson.com] under "Short Papers".
A little off topic: every programmer should work in the game industry, at least for a while :-)
Angel Studios was definitely the most fun job I ever had!
-Mark
How many? (Score:5, Informative)
Maybe he meant 2 * 10^14, which would at least only be 3 orders of magnitude off.
A much closer approximation is 100,000,000,000 neurons, and 5,000 times that many connections.
(For more on the number of neurons in the brain, see R.W. Williams and K. Herrup, Ann. Review Neuroscience, 11:423-453, 1988)
If a single neuron could perform the equivalent of an instruction, then human brains would only be 100-1000 times more powerful than a modern desktop computer, probably less when you consider that they're more like a Beowulf cluster than a single powerful computer.
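Spelling that out explicitly, under the (very rough) reading that one neuron event per second is comparable to one instruction per second on a 2002-era ~1 GHz desktop:

```python
# Back-of-the-envelope version of the comparison above. The assumption that
# one neuron "event" per second is worth one instruction per second is crude,
# but it's the one the 100-1000x figure rests on.
neurons     = 1e11                 # ~100,000,000,000 (Williams & Herrup)
connections = 5_000 * neurons      # ~5e14 connections
desktop_ops = 1e9                  # ~10^9 instructions per second

print("brain/desktop ratio:", neurons / desktop_ops)   # ~100x, the low end of the range
print("connection count:   ", connections)             # 5e14
```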
-- Spam Wolf, the best spam blocking vaporware yet! [spamwolf.com]
Re:How many? (Score:1)
You seem to imply, but not explicitly state, that the cell would only perform 1 operation/second. I'm guessing that it is much more than that.
Re:How many? (Score:2)
Profitable AI (Score:1, Insightful)
It would be impressive if the game's AI coaxed the player into revealing whether they actually paid for the game or pirated it, and shut down if pirated.
Don't clump all research together (Score:5, Interesting)
One motivation - the one alluded to in the article - is to make stuff that gives the same behavior as humans (or whatever animal you are looking at). You don't really care whether your methods are biologically correct; you want things that work. Most of classical AI falls into this category.
The other motivation is to figure out how we do things (we being animals in general). If the research ends up being useful in applications, great, but that's not the goal of the work. You really want models of how real brains solve problems, and these models may be far too incomplete or computationally intensive to be used in implementations, yet be perfectly fine for their intended use. A lot of cognitive science falls into this category.
Game AI designers probably have a much richer mine of information and techniques in AI than in cognitive research, and they have so far been able to exploit that knowledge - as well as judicious 'cheating' - to make a compelling illusion. If/when they turn to cognitive science, however, the pickings will be slimmer and harder to use, as the methods and models aren't designed to solve any kind of real-world problems to begin with.
/Janne
Differences (Score:1, Offtopic)
This is easy, one is a highly optimized tool for maximum destruction and domination that can be calibrated according to the environment it is placed in, and the other is just part of a video game.
AI /. Moderators (Score:4, Funny)
Re:AI /. Moderators (Score:1)
Oxymoron? YOU be the judge!
Re:AI /. Moderators (Score:3, Funny)
I think you're thinking of something else artificial...
--Phillip
Re:AI /. Moderators (Score:2)
Technolgy has often first been introduced as toys. (Score:4, Insightful)
This posting [globalideasbank.org] quotes from the book to make this point.
Most households were first introduced to computers by video games. It does not surprise me that the first introduction to AI for many people is computer games. I realize that spell checking and grammar checking, a form of AI, may be in many houses too.
Even the military is using game-developed technology for combat simulators.
Who are these 'Alvin' people? (Score:1)
put it in perspective people (Score:2, Insightful)
As someone who does not do games for a living, I find more and more that solutions developed for games can be more than useful in the 'serious' IT industry. Way back in the day when 3D was still new to games, the simulator crowd was in high demand for game production (at least their experience and lessons learned were). Now it seems that more and more gaming solutions could provide elegant answers in simulations and real-time distributed information systems. Take the MMOG / persistent-world creators: their experience in handling a ton of people with loads of information over the Internet, while minimizing lag, cheating (security), and synch problems, would be a great boon for MANY systems that are completely unrelated to games.
Many in the 'serious' industry scoff at this. Funny thing is (I have done this, btw): do an experiment where you present the architecture, algorithms, and personnel that can fulfill the requirements to someone 'up the chain'. They will like it and the ideas presented. Now try again a month later, but add 'game creator/designer/developer' in the personnel slots and mention that the algorithms come from the 'game world'. You will see a complete 180.
That clearly shows that many put their knee-jerk emotions ahead of rational business decision-making, and should IMO be fired or put in non-decision-making positions. Use what works!
Oh, the disparity! (Score:2, Insightful)
What I've noticed is that, since our knowledge of the human brain IS 85% speculation, we often use AI strategies to fake knowledge. I mean, for FPS bots, they have used paths and nodes to simulate familiarity and give the bot some order, but that still settles too much into a pattern, which is not necessarily very human.
I guess my main concern is knowing exactly how far Game AI trails the progress of Academia AI, and when, if ever, the two will progress together.
This guy is right on... (Score:3, Informative)
It was a pleasure for me, as an AI prof. who does games-related research, to read this interview. IMHO Dr. Davis gave a brief but extremely accurate and informative sketch of the relationship between industrial AI and AI research. I wish that every "expert" publicly commenting about AI could be as insightful and honest.
AI in the real world? (Score:4, Informative)
For example, computer vision -- there are publicly-traded companies out there which have been doing machine vision for YEARS. These systems are used by all major chip manufacturers, most major paper and textile manufacturers, etc. to recognize and catch defects in products before they leave the assembly line. Cognex [cognex.com] is a $1B a year company -- they exclusively do machine vision and visual pattern recognition for industrial applications.
Another example of a company applying AI would be Virage [virage.com], who has several patents relating to image/video searching and indexing.
Many investment houses use neural networks to profile and model investments, and plenty of large financials use expert systems and neural networks for data mining, employee profiling, and so on.
Expert systems have been applied to computer security as well -- Rapid 7 [rapid7.com] (my company) sells a network security scanner which uses the Jess expert system [sandia.gov] from Sandia labs. The value of the expert system is that it allows the product to use discovered vulnerabilities to further exploit the network, discovering more vulnerabilities, which enable more probes to be performed, and so on.
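That feedback loop is classic forward chaining. A generic sketch of the pattern - not Jess syntax and not Rapid 7's actual rules, just made-up facts and rules showing how one discovery enables the next probe:

```python
# Generic forward-chaining loop: facts trigger rules, rules add new facts,
# which can trigger further rules -- the "discovered vulnerability enables
# another probe" pattern described above. All facts and rules are made up.
facts = {("open_port", "host1", 21)}

rules = [
    # (name, condition over the fact set, facts to add if the rule fires)
    ("ftp_banner", lambda f: ("open_port", "host1", 21) in f,
                   {("service", "host1", "ftp")}),
    ("anon_login", lambda f: ("service", "host1", "ftp") in f,
                   {("vuln", "host1", "anonymous_ftp")}),
    ("read_files", lambda f: ("vuln", "host1", "anonymous_ftp") in f,
                   {("evidence", "host1", "world_readable_share")}),
]

changed = True
while changed:                       # keep firing rules until nothing new is learned
    changed = False
    for name, cond, adds in rules:
        if cond(facts) and not adds.issubset(facts):
            facts |= adds
            changed = True
            print("fired:", name)

print(sorted(facts))
```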
Sorry, Cruise Control Is Not AI (Score:3, Insightful)
Could it be because it was never AI to begin with? I am sick and tired of the GOFAI (good old fashioned AI) community pasting the AI label on every clever computer application out there so they can cover up their failure to come up with human-level AI. People are not stupid. They can tell the difference between automatic cruise control and HAL. The former is not AI, it's just a clever hack. The latter has real intelligence. Let's face it. The GOFAI research community has failed. They had no clue as to what intelligence is about when they started the field fifty years ago and they have no clue now. We need new blood and new ideas in AI research.
Re:Sorry, Cruise Control Is Not AI (Score:3, Informative)
Re:Sorry, Cruise Control Is Not AI (Score:2)
One does not have to major in artificial intelligence at a university to write a cruise control program. Any competent programmer can write a program that accomplishes safe and effective cruise control. It's a simple coding problem that does not require fifty years of DARPA funding to figure out. The only intelligence we know is animal intelligence. Biological intelligence is what we should be trying to emulate. Human intelligence is general, scalable and adaptive. I have seen nothing from the AI community that qualifies as true intelligence. All they have is hype and propaganda.
Re:Sorry, Cruise Control Is Not AI (Score:2)
What if "optimal" were defined as some weighted combination making the ride smooth and conserving gas? Is that so trivial a program to write? AI researchers have solved many important problems over the years, and I don't consider it at all a disappointment that we are not even close to approaching human intelligence.
Re:Sorry, Cruise Control Is Not AI (Score:2)
There is not much more to it than that. So don't try to make it look like it's a harder problem than it actually is. Any junior programmer can write a good cruise control program through trial and error.
What if "optimal" were defined as some weighted combination making the ride smooth and conserving gas? Is that so trivial a program to write?
Yes it is. I've seen self-taught programmers do amazing things with a 1 MHz Apple II back in the early eighties. Things that are orders of magnitude more complex and clever than cruise control. They never claimed they were doing AI.
AI researchers have solved many important problems over the years, and I don't consider it at all a disappointment that we are not even close to approaching human intelligence.
It's worse than a disappointment. It's a pathetic failure. Many of us remember people like Minsky and others making outrageous claims about their ability to create human-level AI by the end of the last century. We all know about their promotion of the symbol manipulation and knowledge representation nonsense. It all turned out to be mostly worthless crap with little to do with intelligence. Those guys made it a point to ignore every significant advance in neurobiology and psychology over the last one hundred years. Talk about clueless!
Re:Sorry, Cruise Control Is Not AI (Score:4, Insightful)
You have two analog controls, gas and brake. You have time: how long to brake, how long to accelerate. You have intensity, or magnitude: how much gas and how much brake pressure. And then you have current velocity, current RPM, current gear, and even mass to take into account, not to mention road curvature, road quality, and road grade (steepness).
In this light, it's a very valid AI question. Can you create a system that maximizes fuel economy and ride quality (you want to avoid extreme acceleration and deceleration, right?).
I know for a fact that I can outperform my car's cruise control on mileage, performance, and ride quality. As long as I can perform better than my car, the car isn't being intelligent enough, and it is therefore an AI-quality problem.
To be more precise:
If you're on a downgrade and you're below the threshold speed, you can let the car coast and naturally accelerate. If you're above the threshold speed, you need to actually slow below the threshold speed to take into account the fact that there is acceleration as a factor. Or instead of braking, the car can shift into a lower gear, alternating with braking, to ensure the brakes don't overheat.
Then there's curvature. The car should actually decelerate going into a curve; it should do so more aggressively the tighter the curve, but as the driver starts straightening it should accelerate again. How much should it slow down? How much should it accelerate? It's not linear, but depends strongly on how banked the road is and what the road conditions are. Wet vs dry, or even icy, for example.
Or going uphill. The car should accelerate to counter the speed drop, but should probably try to stay in the best gear, even if it means falling below the threshold for a while, because of fuel economy and power output. So it should accelerate somewhat, but be able to decide that staying in 5th at 70mph isn't nearly as good as dropping to 4th and going 63mph if the grade is steep enough. It should probably also be able to check engine temperature to gauge when to keep going 70mph and when to switch to a lower gear and drop to 63mph (a long shallow grade vs. a small, if steeper, hill, for example).
See, right now cruise control is really only best for straight sections of clear road, because not enough AI has been applied, and not enough AI is available, to deal with curvy, windy, uphill and downhill roads -- which is actually a better place for AI to be used, allowing the driver to concentrate on where the car is going (not over the cliff, I hope)!
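For reference, today's cruise control is essentially just the little feedback loop below - a minimal PI controller sketch with made-up constants and no model of gearing, grade, or curvature - which is exactly why there's so much headroom for something smarter:

```python
def cruise_step(speed, target, integral, dt=0.1, kp=0.5, ki=0.05):
    """One step of a toy PI speed controller: throttle is proportional to the
    current speed error plus its accumulated history. No gears, no grade, no
    curvature -- the 'straight clear road' case described above."""
    error = target - speed
    raw = kp * error + ki * integral
    throttle = max(0.0, min(1.0, raw))      # clamp throttle to [0, 1]
    if raw == throttle:                     # crude anti-windup while saturated
        integral += error * dt
    return throttle, integral

# Crude vehicle model: throttle accelerates the car, drag slows it down.
speed, integral, target = 20.0, 0.0, 29.0    # m/s; 29 m/s is roughly 65 mph
for _ in range(1000):                        # simulate about 100 seconds
    throttle, integral = cruise_step(speed, target, integral)
    speed += (3.0 * throttle - 0.02 * speed) * 0.1   # made-up dynamics
print(round(speed, 1))                       # settles close to the 29 m/s target
```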
Re:Sorry, Cruise Control Is Not AI (Score:2)
If the driver goes into certain curves at 30mph and other curves at 15mph, the AI should be able to tell the difference between those curves and modify its behavior appropriately in the future. I.e., a trainable AI.
As long as the problem can be phrased such that a human can do a better job than a computer (such as cruise control), it is an AI problem. Right now cruise control is merely computer-assisted driving, but it is in no way a solved AI problem.
Cruise Control Is Not Intelligence. (Score:2)
I didn't wait to see what happens when you hit redline...
Re:Sorry, Cruise Control Is Not AI (Score:2)
While you may not consider that to be a contribution, the same model may be used for more complicated things, such as piloting a spacecraft. I saw a talk the other day by someone who trained a program to hover a remote-controlled helicopter in place. It performed better than the leading human controllers.
There's a lot more to AI than symbol manipulation. Knowledge representation is a very small subset of the field. Some researchers choose to concentrate on the small subfields of AI, and those fields have prospered, providing great advances to datamining, graphics and vision, theory, etc. Some researchers are very much interested in neurologically and biologically inspired computing. Calling AI researchers clueless for ignoring these areas is just revealing your ignorance about AI and your fanaticism against AI.
Re:Sorry, Cruise Control Is Not AI (Score:2)
None of it has anything to do with intelligence. Again, to be intelligent, a program must not only be able to learn but it must learn anything and everything, not just a limited domain environment. But that is not all. It must be motivated and adapt to reward and punishment.
There's a lot more to AI than symbol manipulation. Knowledge representation is a very small subset of the field.
Neither symbol manipulation nor knowledge representation has anything to do with intelligence since the only intelligence we know of (biological intelligence) doesn't use any of it. First and foremost, intelligence has to do with discrete, temporal signal processing. The biological evidence is clear on this issue: neurons generate discrete signals or spikes. Second, intelligence has to do with motivation, i.e., it must react properly to reward and punishment. This is what psychology has taught us for the last one hundred years. The GOFAI crowd is not listening.
Calling AI researchers clueless for ignoring these areas is just revealing your ignorance about AI and your fanaticism against AI.
They're worse than clueless. They have wasted the taxpayer's money for fifty years. And, for the record, I am not against AI. Why lie about it when I have a site that promotes AI? On the contrary, I am trying to wake people up to the fact that they are being taken to the cleaners by a bunch of clueless career propagandists. If you don't like my opinion on the matter, all I can say to you is that that's too bad. I exercise my freedom of speech the way I see fit.
Re:Sorry, Cruise Control Is Not AI (Score:2)
There has been plenty of great research in reinforcement learning. Isn't that what you mean by rewards and punishments?
People have also tried to make neurologically inspired models for AI. Note that the transistor is even abstractly similar to the neuron. The AI community is in much better shape than you think.
Re:Sorry, Cruise Control Is Not AI (Score:2)
Thanks for making my point for me.
There has been plenty of great research in reinforcement learning. Isn't that what you mean by rewards and punishments?
If it is not done within the context of signal processing, i.e., by using spiking neural networks, it's crap. Spiking or pulsed networks did not come from the AI community, BTW. They came from the computational neuroscience community which is part of neurobiology. So don't try to get credit for something the GOFAI community has pretty much ignored for fifty years.
People have also tried to make neurologically inspired models for AI. Note that the transistor is even abstractly similar to the neuron.
The only neural networks that came from the GOFAI community (after Marvin Minsky and Seymour Papert tried to put a monkey wrench in neural network research) are the so-called ANNs. ANNs are a pathetic joke. They bear little resemblance to biological neurons. Unless an AI researcher realizes that intelligence has to do with the temporal relationships between discrete signals, he or she is not doing AI. He or she is just spitting against the wind.
The AI community is in much better shape than you think.
I doubt that very much. Only a blind fool would believe in the AI community after fifty years of abject failure. They still don't have a clue and they don't seem interested in getting one either. Just recently, MIT Technology Review published an article titled "AI Reboots" which is a PR piece for GOFAI guru Doug Lenat and that database, symbolic-representation monstrosity of his called Cyc.
An example (multi-agent systems) (Score:2, Interesting)
This is of course computationally expensive. In the video game case, the program must run smoothly in order for the computer to be a significant opponent. A typical team of computer-controlled opponents tends to share information as if telepathic: if all agents (soccer players) have a shared knowledge base, the team can easily be a tough opponent. The computer must often "cheat" in this way simply to make the game interesting.
For right now, computers are not fast enough to handle the AI with more integrity. The bottom line is that a video game has to be fun. In academia, we are able to put more time into things that are not immediately useful in order to better understand real AI. Of course, in the soccer video game situation, the human player also acts as a shared knowledge base for the team, as he controls all of them. In a game like a multi-player shooter, however (ignoring the chatting option), this is more applicable: it is unfair for each computer player to be able to divine the intent of its team members as if controlled by an overmind. Applying this research to video games would result in better realism, provided the CPU could handle it. For now, it would simply not make for a very interesting game. Still, shared knowledge is an interesting problem in AI, and a lot of the work that has been done is quite good. But we do have a long way to go.
This research would apply to systems other than video games where each agent may work under a different protocol. Each situation is different, though. Often there will be a standard communication protocol, but sometimes that may break. The distributed system should not cease in this case. Examples are automated military, network routing, manufacturing plants and clustered computing.
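A small sketch of the contrast, using a toy soccer-ish setting with hypothetical names: the "telepathic" team reads one global world model, while honest agents each keep their own beliefs and have to spend communication to share them:

```python
class SharedKnowledgeTeam:
    """The 'cheating' setup: every agent reads one global world model,
    so teammates act as if telepathic."""
    def __init__(self):
        self.world = {"ball": (50, 30)}

    def agent_view(self, agent_id):
        return self.world                    # everyone sees everything, instantly


class MessagePassingTeam:
    """The honest setup: each agent has its own partial beliefs and must
    spend (simulated) communication to share what it has seen."""
    def __init__(self, n_agents):
        self.beliefs = [{} for _ in range(n_agents)]
        self.messages_sent = 0

    def observe(self, agent_id, key, value):
        self.beliefs[agent_id][key] = value

    def broadcast(self, sender, key):
        for i, b in enumerate(self.beliefs):
            if i != sender:
                b[key] = self.beliefs[sender][key]
                self.messages_sent += 1      # the cost the cheating team never pays

team = MessagePassingTeam(3)
team.observe(0, "ball", (50, 30))     # only agent 0 actually saw the ball
print(team.beliefs[1].get("ball"))    # None: agent 1 doesn't know yet
team.broadcast(0, "ball")
print(team.beliefs[1].get("ball"), "after", team.messages_sent, "messages")
```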
Game engines for A.I. research (Score:2)
My research [greatmindsworking.com] involves modeling human language acquisition, grounded in "visual" experiences. While I'm pretty much developing a crude vision system from scratch for my prototype (because I want to use some real video) my next step will be to try the same logic inside of a game engine. With a game engine, I can query exact details of objects and their motions, without the great complexities of a computer vision system.
Until computer sensor systems catch up, game engines provide a wonderful opportunity for testing A.I.
game developers funding AI research (Score:2)
It might be a good thing if game developers could fund academic work. No single game developer could afford to fund a project to solve any particular problem, but financial mechanisms have been described (1 [openknowledge.org] 2 [geocities.com]) to allow game developers to jointly fund research to produce results sharable by the entire industry.
The software completion bond idea has not yet been attempted AFAIK. Certainly it has no well-known success stories. Maybe this would be a good place to try it.
Academic AI (Score:2)
Game AI, however, is based on the universe created inside the game, is mostly aesthetic, and is usually done in real time.
Re:Ethical dilemmas (Score:1)
Find a way to make some money off of all the legal fighting that will ensue.
Re:2 to the 14th (Score:1)
Keep doing tequila shots... (Score:2)
Re:low count (Score:2)
Re:Clever AI in games is a myth (Score:2)
[Caveat: I used to work for the company that produced it, so I may be biased.] The game "Powerslide" (an arcade-style racing game for the PC) had AI drivers which were indeed modelled with a neural net. The various net weights were 'bred' (GA techniques) to produce very good to poor AI 'mini-brains' (the poorer ones were explicitly bred to be played against on easy difficulty). Sure, there were some hacks to make it all work well, but at the core this was AI well-informed by the latest work in the field. Further, they really did drive pretty much like one might expect a human to, even down to occasionally just wiping out or crashing.
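Purely to illustrate the technique described (this is not Powerslide's actual code), breeding fixed-topology network weight vectors with a GA looks roughly like this:

```python
import random

def make_net(n_weights):
    return [random.uniform(-1, 1) for _ in range(n_weights)]

def drive_fitness(weights):
    """Stand-in for 'race the car and score the lap'. Here we just reward
    weights that approximate an arbitrary target vector; in a game this
    would be lap time, crashes, overtakes, etc."""
    target = [0.5, -0.3, 0.8, 0.1]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def breed(parent_a, parent_b, mutation=0.1):
    child = [random.choice(pair) for pair in zip(parent_a, parent_b)]   # crossover
    return [w + random.gauss(0, mutation) for w in child]               # mutation

random.seed(2)
population = [make_net(4) for _ in range(30)]
for generation in range(100):
    population.sort(key=drive_fitness, reverse=True)
    elite = population[:10]                                   # keep the best drivers
    population = elite + [breed(random.choice(elite), random.choice(elite))
                          for _ in range(20)]
print(round(drive_fitness(population[0]), 4))   # close to 0 = close to the target
```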