Games That Design Themselves 162
destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"
Re:Ragequit (Score:2, Interesting)
Okay for behavior, but dialogue? (Score:5, Interesting)
The idea of an AI that learns from the players sounds great when you're talking about a bot for Multiplayer Shooter 2010 developing tactics and strategies without explicit programming, or an NPC partner in a stealth game learning how not to bash its face into walls and then walk off a cliff into lava. Awesome, bring on the learned emergent behavior!
But dialogue? Oh lord no, please don't let the AIs learn how to "converse" from players. Because the last thing I need is to have AIs in games screaming "Shitcock!" or calling me a fag a thousand times in a row with computerized speed and efficiency.
Engineering Project (Score:3, Interesting)
Re:It can never be human like... (Score:5, Interesting)
Because programming -IS- Logic.
A group of neurons can be connected together to form a calculator. But, you can't multiply 20 digit numbers in your head. You don't have access to the "hardware" layer of your brain. Why would a sufficiently advanced AI be any different?
As such, you generally tend to base it on the opponent you are playing. An AI cannot tell whether you are an aggressive or passive person, gauge your strategic abilities, or assess your understanding of game mechanics, having never met you before playing the game.
I play online games against people I've never met before too. What magical ability do I have, that a computer could not?
Re:It can never be human like... (Score:3, Interesting)
Indeed.
In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
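To make that concrete, here's a minimal sketch (every number, threshold, and function name below is invented for illustration, not from any real racing AI): awareness drops as players get close and opponents get erratic, and a sufficiently distracted agent simply fails to register an obstacle until it's too near.

```python
def awareness(player_dist, opponent_erraticness, base=1.0):
    """Hypothetical distraction model: the closer and more erratic the
    nearby opponents, the less attention the AI has left for the road."""
    # Crowding penalty: nearby players pull the AI's gaze off the track.
    proximity_penalty = max(0.0, 1.0 - player_dist / 50.0)  # zero beyond 50 m
    distraction = 0.6 * proximity_penalty + 0.4 * opponent_erraticness
    return max(0.0, base - distraction)

def sees_obstacle(obstacle_dist, player_dist, erraticness):
    """The AI only 'notices' a dynamic obstacle if its awareness clears a
    distance-scaled threshold, so a distracted AI misses the post ahead."""
    needed = min(1.0, 10.0 / max(obstacle_dist, 1e-6))  # near obstacles are easy
    return awareness(player_dist, erraticness) >= needed
```

With no one nearby the agent spots a post 100 m out; boxed in by an erratic opponent two meters away, it doesn't, and crashes just like the human would.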
Re:Engineering Project (Score:2, Interesting)
Re:It can never be human like... (Score:3, Interesting)
Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. We are a long, LONG way from having AI that can think about the situation and make a decision on its own...
"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."
An AI can't make a conscious decision that isn't preprogrammed.
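To be fair, the "what the hell is he doing?" check in that quote can at least be crudely faked with bookkeeping rather than real reasoning. A toy sketch (the class, event names, and threshold are all made up for illustration): count how often a player repeats the same bait, and stop chasing past a limit.

```python
from collections import Counter

class SuspicionTracker:
    """Hypothetical per-player counter a bot could keep: if the same bait
    event (e.g. retreating down the same corridor) repeats past a
    threshold, the bot stops taking it and holds position instead."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.events = Counter()

    def observe(self, player_id, event):
        self.events[(player_id, event)] += 1

    def should_chase(self, player_id, event):
        # Take the bait the first few times, then get suspicious.
        return self.events[(player_id, event)] < self.threshold
```

That's still a preprogrammed rule, of course, which is the parent's point: the bot isn't deciding anything, it's counting.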
A robot could design a better 'game' (Score:2, Interesting)
Re:Okay for behavior, but dialogue? (Score:3, Interesting)
I've been wondering about this. After all, the human brain is not much more than a glorified rules engine. We learn by imitation, and we improve through reasoning (calculation). Computers are obviously capable of the latter, but nobody's managed to get the former quite right.
This is because computers are very precise--or really, as precise as the floating-point unit allows them to be. That is to say, they can perfectly duplicate information. This means that their observations are very precise. But they have trouble improvising upon the gathered information. They cannot extrapolate from the specific to the general and interpolate back to the specific again. Humans, on the other hand, do this naturally. And we make a largely unconscious decision about when to switch from storing the generic to the specific and vice versa, and about when to access each.
Lacking this ability, what computers are "observing" is very precise as well. And the only human activity that fits this even remotely is speech. Words are point data, or zero-dimensional. Most words have one particular meaning, and that's about it. Granted, there are the occasional "fruit flies like a banana" which have multiple dimensional properties, but only one meaning is correct in a conversation, and even humans require context to determine the meaning of the statement (the presence of a secondary meaning implies the statement is a joke or a pun, which is to say that computers and people that fail to recognize this have no sense of humor). But barring wordplay, it's trivial to apply rules (grammar) to words and produce meaningful output. It's the same as plugging in numbers for variables.
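As a toy illustration of that "plugging in numbers for variables" point, a five-rule context-free grammar already produces grammatical output by pure substitution (the grammar and vocabulary here are obviously made up):

```python
import random

# Rewrite rules: nonterminals map to lists of possible expansions;
# anything not in the table is a terminal word.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["bot"], ["player"], ["corridor"]],
    "V":  [["watches"], ["follows"]],
}

def expand(symbol, rng):
    """Recursively substitute nonterminals until only words remain."""
    if symbol not in GRAMMAR:
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    words = []
    for part in production:
        words.extend(expand(part, rng))
    return words

print(" ".join(expand("S", random.Random(0))))
```

Every output is well-formed ("the bot follows the corridor" and so on), which is the easy half; deciding which of them is worth saying is the part the rules don't give you.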
But for things that are even one-dimensional, you'll very quickly get too much data, which will slow down the decision processing significantly. 1.00000001 is 1.0 for a human, but for a computer, both are stored as separate data. It's possible to round (generalize) the values and store only that, but it's very difficult to go back to a specific.
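The 1.00000001-versus-1.0 point in a short snippet (the tolerance and rounding precision are chosen arbitrarily): an exact comparison keeps the two values distinct, a "generalized" comparison collapses them, and once rounded there is no road back to the specific value.

```python
import math

a, b = 1.00000001, 1.0

exact = (a == b)                           # the computer stores distinct doubles
fuzzy = math.isclose(a, b, rel_tol=1e-6)   # a human-style generalized comparison

# Rounding is a one-way generalization: after both collapse to 1.0,
# the distinguishing information is gone for good.
rounded_a, rounded_b = round(a, 4), round(b, 4)
print(exact, fuzzy, rounded_a == rounded_b)  # False True True
```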
Don't even start about two-dimensional data, which is what FPS AIs would require (yes, there's a third dimension in an FPS, but AI paths are reducible to two dimensions plus commands). And human brains have a hard enough time with three dimensions; computers don't stand a chance.