
Games That Design Themselves

destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"
This discussion has been archived. No new comments can be posted.


Comments Filter:
  • Re:Ragequit (Score:2, Interesting)

    by oztemprom ( 953519 ) on Thursday July 30, 2009 @12:31PM (#28883649)
    Yeah, things like this would happen, but also, how easy would it be for a small but dedicated group of pranksters to deliberately behave in odd, amusing or offensive ways to train the AIs? AI09 says "I herd u leik tentacle pr0n"
  • by Chris Burke ( 6130 ) on Thursday July 30, 2009 @12:35PM (#28883719) Homepage

The idea of an AI that learns from the players sounds great when you're talking about a bot for Multiplayer Shooter 2010 developing tactics and strategies without explicit programming, or an NPC partner in a stealth game learning how not to bash its face into walls and then walk off a cliff into lava. Awesome, bring on the learned emergent behavior!

But dialogue? Oh lord no, please don't let the AIs learn how to "converse" from players. Because the last thing I need is to have AIs in games screaming "Shitcock!" or calling me a fag a thousand times in a row with computerized speed and efficiency.

  • Engineering Project (Score:3, Interesting)

    by COMON$ ( 806135 ) on Thursday July 30, 2009 @01:07PM (#28884187) Journal
I always thought it would be interesting to create a project like this with a chat engine. Take a major chat engine and add a "Submit to AI" option where the AI parses the conversation between you and a friend, recording questions and responses in an overlapping matrix of possibilities and calculating the probability of what the response should be, based on historical conversations of the same nature. With a large enough data set, you should get impressive test results.
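A minimal sketch of that "matrix of possibilities" idea, assuming nothing about any real chat engine (class and method names here are invented for illustration): record observed question/response pairs and reply with the historically most frequent response.

```python
from collections import defaultdict, Counter

class ChatResponseModel:
    """Toy version of the 'Submit to AI' idea: tally observed
    question -> response pairs and answer with the most probable
    response seen for conversations of the same nature."""

    def __init__(self):
        self.table = defaultdict(Counter)

    def _key(self, question):
        # Normalize lightly so near-identical questions overlap.
        return question.lower().strip("?!. ")

    def observe(self, question, response):
        self.table[self._key(question)][response] += 1

    def respond(self, question):
        key = self._key(question)
        if key not in self.table:
            return None  # no historical conversation to draw on
        # Highest-count response wins.
        return self.table[key].most_common(1)[0][0]

model = ChatResponseModel()
model.observe("How are you?", "Fine, thanks.")
model.observe("How are you?", "Fine, thanks.")
model.observe("How are you?", "Terrible.")
print(model.respond("how are you"))  # -> Fine, thanks.
```

With a real corpus you'd want fuzzier matching than exact string keys, but the counting scheme is the same.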
  • by johnsonav ( 1098915 ) on Thursday July 30, 2009 @01:13PM (#28884319) Journal

    Because programming -IS- Logic.

    A group of neurons can be connected together to form a calculator. But, you can't multiply 20 digit numbers in your head. You don't have access to the "hardware" layer of your brain. Why would a sufficiently advanced AI be any different?

As such, you generally tend to base it on the opponent you are playing. An AI cannot tell whether you are an aggressive or passive person, nor gauge your strategic abilities or understanding of game mechanics, having never met you before playing the game.

    I play online games against people I've never met before too. What magical ability do I have, that a computer could not?

  • by Bat Country ( 829565 ) on Thursday July 30, 2009 @01:20PM (#28884443) Homepage


    In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
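One way that distraction model could be sketched (the falloff formula and all parameter names are my own invention, not anything from the article): scale the AI's chance of noticing a hazard down as nearby opponents get closer and more erratic.

```python
def ai_awareness(dist_to_player, opponent_erraticness, base=1.0):
    """Hypothetical distraction model: a close, erratic rival pulls
    the AI's attention off the road, lowering its probability of
    registering a dynamic obstacle this frame.
    dist_to_player: distance to the nearest rival car
    opponent_erraticness: 0..1, how unpredictably rivals are driving"""
    distraction = opponent_erraticness / (1.0 + dist_to_player)
    return max(0.0, base - distraction)

# A calm, distant field: nearly full awareness.
print(round(ai_awareness(50.0, 0.1), 3))  # -> 0.998
# A swerving rival right alongside: awareness drops sharply.
print(round(ai_awareness(1.0, 0.9), 3))   # -> 0.55
```

Rolling against that awareness value each frame gives you AI drivers that, just like humans, sometimes miss the post in front of them.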

  • by GenP ( 686381 ) on Thursday July 30, 2009 @01:41PM (#28884755)
    But I don't want to be replaced by a short perl script and a couple hundred gigs of prior probability distributions!
  • by Lumpy ( 12016 ) on Thursday July 30, 2009 @02:03PM (#28885113) Homepage

Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. We are a long, LONG way from having AI that can think about the situation and make a decision on its own...

    "Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."

AI can't make a conscious decision that is not preprogrammed.
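Ironically, the specific failure described here (not noticing you've run the same lure four times) is mechanically detectable. A rough sketch, with invented event names, of flagging a repeated action sequence as a possible trap:

```python
from collections import Counter

def suspicious_patterns(events, min_repeats=3):
    """Count adjacent pairs of observed opponent actions; any pair
    repeated min_repeats+ times is flagged as a possible lure the
    bot should stop taking at face value."""
    counts = Counter()
    for i in range(len(events) - 1):
        counts[(events[i], events[i + 1])] += 1
    return {pair for pair, n in counts.items() if n >= min_repeats}

# Player 4 keeps shooting and then falling back down the same corridor.
log = ["shoot", "retreat_corridor", "shoot", "retreat_corridor",
       "shoot", "retreat_corridor", "shoot", "retreat_corridor"]
print(suspicious_patterns(log))
```

Detecting the repetition is the easy part; deciding to "sit and wait or circle around" instead of chasing is where the hand-crafted behavior usually comes back in.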

  • by kathbot ( 1286452 ) on Thursday July 30, 2009 @02:36PM (#28885713)
A recent proposal from the UC Santa Cruz EIS lab (also mentioned in the article) is an Automated Game Designer. It's not about making a bot that can behave intelligently/interestingly in a restaurant setting... what are the broad applications of that? (As other people have pointed out, the bots may come out pretty demented and flavored like The Internet.) It's about making a game designer that can design games on its own, learn from its own experience, and get MINIMAL human input (not 10,000 plays online). The computer designer can do what the computer is good at (enumerate all possible play traces, look for instances of accessibility/cheats/funky behavior the designer might not have intended or expected) and the humans on the side can do what they are good at (shaping, polishing, collecting a few human play traces).
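The "enumerate all possible play traces" part is just exhaustive search. A toy sketch, for an invented one-counter game (start at 1, moves are +1 and *2, goal is 6), of enumerating every winning trace up to a depth bound:

```python
from collections import deque

def enumerate_traces(start, moves, goal, max_depth=5):
    """Breadth-first enumeration of play traces for a tiny abstract
    game, collecting every move sequence that reaches the goal.
    A real automated designer would scan these traces for cheats
    or unintended shortcuts."""
    frontier = deque([(start, [])])
    winning = []
    while frontier:
        state, trace = frontier.popleft()
        if state == goal:
            winning.append(trace)
            continue
        if len(trace) >= max_depth:
            continue
        for name, fn in moves.items():
            frontier.append((fn(state), trace + [name]))
    return winning

moves = {"+1": lambda n: n + 1, "*2": lambda n: n * 2}
for trace in enumerate_traces(1, moves, 6):
    print(trace)
```

Even on this toy game the designer "notices" that doubling gives shortcuts a designer might not intend; on real rule sets the state space explodes, which is exactly why the humans are left with the shaping and polishing.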
  • by steelfood ( 895457 ) on Thursday July 30, 2009 @04:20PM (#28887383)

    I've been wondering about this. After all, the human brain is not much more than a glorified rules engine. We learn by imitation, and we improve through reasoning (calculation). Computers are obviously capable of the latter, but nobody's managed to get the former quite right.

    This is because computers are very precise--or really, as precise as the floating point unit allows them to be. That is to say, they can perfectly duplicate information. This means that their observations are very precise. But they have trouble improvising upon the gathered information. They cannot extrapolate from specific to general and interpolate back to specific again. Humans, on the other hand, do this naturally. And we make a largely unconscious decision to switch from storing generic to specific and vice versa, as well as when to access the generic and when to access the specific.

    Lacking this ability, what computers are "observing" is very precise as well. And the only human activity that fits this even remotely is speech. Words are point data, or zero-dimensional. Most words have one particular meaning, and that's about it. Granted, there are the occasional "fruit flies like a banana" which have multiple dimensional properties, but only one meaning is correct in a conversation, and even humans require context to determine the meaning of the statement (the presence of a secondary meaning implies the statement is a joke or a pun, which is to say that computers and people that fail to recognize this have no sense of humor). But barring wordplay, it's trivial to apply rules (grammar) to words and produce meaningful output. It's the same as plugging in numbers for variables.

    But for things that are even one-dimensional, you'll very quickly get too much data, which will slow down the decision processing significantly. 1.00000001 is 1.0 for a human, but for a computer, both are stored as separate data. It's possible to round (generalize) the values and store only that, but it's very difficult to go back to a specific.
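The round-to-generalize move described above is easy to show, and so is its one-way nature (the bucket size here is arbitrary): once snapped to a bucket, the exact specific is gone.

```python
def generalize(value, step=0.25):
    """Snap a continuous observation to a coarse bucket so nearly
    identical values collapse into one stored datum. Lossy by
    design: you can't recover 1.00000001 from the bucket 1.0."""
    return round(value / step) * step

print(generalize(1.00000001))  # -> 1.0, same bucket as exactly 1.0
print(generalize(1.13))        # -> 1.25, nearest quarter
```

Picking `step` is the hard part the comment is gesturing at: too coarse and distinct situations merge, too fine and you're back to storing every specific.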

Don't even start about two dimensions, which is what FPS AIs would require (yes, there's a third dimension in an FPS, but AI paths are reducible to two dimensions plus commands). Human brains have a hard enough time with three dimensions; computers don't stand a chance.
