
Games That Design Themselves

destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"
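
A minimal sketch of the underlying idea, with the caveat that nothing below comes from the MIT project (every name here is hypothetical): an agent that learns a state-to-action policy purely by tallying logged human play, which is behavioral cloning in its simplest possible form.

    # Hypothetical sketch of "agents that learn by observing humans":
    # behavioral cloning reduced to a lookup table. Not the MIT project's
    # actual pipeline; purely illustrative.
    from collections import Counter, defaultdict

    class ImitationAgent:
        """Learns a state -> action policy from logged human sessions."""

        def __init__(self):
            # For each observed game state, count which actions humans chose.
            self.observed = defaultdict(Counter)

        def observe(self, state, human_action):
            """Record one (state, action) pair from a human play log."""
            self.observed[state][human_action] += 1

        def act(self, state, default="idle"):
            """Do whatever humans most often did in this state."""
            counts = self.observed.get(state)
            return counts.most_common(1)[0][0] if counts else default

    # Feed it logs from human sessions, then let it drive an NPC.
    agent = ImitationAgent()
    agent.observe(("enemy_near", "low_health"), "flee")
    agent.observe(("enemy_near", "low_health"), "flee")
    agent.observe(("enemy_near", "full_health"), "attack")
    print(agent.act(("enemy_near", "low_health")))  # -> flee

A real system would have to generalize to unseen states (the hard part), but the training signal is the same: observed human play rather than hand-crafted scripts.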
  • by chill ( 34294 ) on Thursday July 30, 2009 @12:42PM (#28883837) Journal

    This one [penny-arcade.com] shouldn't be too hard.

  • Bots (Score:4, Insightful)

    by Krneki ( 1192201 ) on Thursday July 30, 2009 @12:42PM (#28883841)
    Ask the WoW developers; they can't spot most of the bots playing the game.
  • by amicusNYCL ( 1538833 ) on Thursday July 30, 2009 @01:12PM (#28884293)

    Because programming -IS- logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to always execute the most strategically sound plan, it doesn't vary much at all.

    You tell it to try to learn the rules, and make the best decision that it can.

    Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves, and for each one all possible replies, and so on for 25 turns, it can quantify which move it should make now to have the best chance of winning. When you download a chess game that lets you set the difficulty, the main thing being changed is how far ahead the AI is allowed to look. An "easy" AI might only look 3 moves ahead. It's been a while since I took any AI courses, but I seem to remember that human masters like Kasparov are capable of looking ahead around 10-12 turns.

    So it's not that you tell the AI to make bad decisions; you simply limit the information it has to work with. That's closer to what humans actually do when they make bad decisions ("I didn't think of that").
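
    To make that concrete, here is a toy depth-limited negamax where the difficulty knob is literally the depth argument. The game hooks (legal_moves, apply_move, evaluate) are hypothetical stubs, not any real engine's API.

        # Toy illustration: chess-style difficulty as search depth.
        # legal_moves/apply_move/evaluate are stubs the caller supplies.

        def negamax(state, depth, color, legal_moves, apply_move, evaluate):
            """Best score for the side to move, searching `depth` plies ahead."""
            moves = legal_moves(state)
            if depth == 0 or not moves:
                return color * evaluate(state)  # evaluate() scores for player +1
            return max(-negamax(apply_move(state, m), depth - 1, -color,
                                legal_moves, apply_move, evaluate)
                       for m in moves)

        def pick_move(state, difficulty, legal_moves, apply_move, evaluate):
            """An "easy" AI might pass difficulty=3; a strong one, 20+."""
            # Assumes player +1 is to move at the root.
            return max(legal_moves(state),
                       key=lambda m: -negamax(apply_move(state, m), difficulty - 1,
                                              -1, legal_moves, apply_move, evaluate))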

  • by Chyeld ( 713439 ) <chyeld@gma i l . c om> on Thursday July 30, 2009 @01:18PM (#28884401)

    Everything Peter does looks impressive while he stands by it. He's like a lower-powered Steve Jobs. However, unlike Steve's, Peter's glamour effect only lasts until the product is released. Should Milo ever actually hit the market, it will immediately revert to a simulation of an autistic Eliza with Tourette's syndrome and a tendency to stare at your crotch rather than your face.

    Peter will then appear and explain that he knew Milo I was going to be this bad; that's why, for the past TWO decades, he's been working on Milo II, which is supposed to do everything he actually promised for Milo I and include a lovable dog character for you to interact with as well.

    When Milo II finally comes out, it'll be an actual stuffed basset hound.

  • by BarryJacobsen ( 526926 ) on Thursday July 30, 2009 @02:45PM (#28885865) Homepage

    So they're stealing our body heat and letting us write agent AI for them too? Geez, what lazy AI we invented.

    It was created in our own image.

  • by digitalsolo ( 1175321 ) on Thursday July 30, 2009 @02:53PM (#28886011) Homepage
    Are there any examples of a living being which does not spend the majority of its life parroting or applying the behaviour of others?

    I'd contend that watching and mimicking others is the most effective method of learning. In fact, it's the ability to take this learned knowledge and apply it to other situations that separates the truly intelligent from the "average" in the world.
  • by Chyeld ( 713439 ) <chyeld@gma i l . c om> on Thursday July 30, 2009 @02:57PM (#28886065)

    The problem with AIs mimicking 'human' actions has nothing to do with a failure of logic or an inability to display randomness.

    It has to do with the fact that we've never really understood why we do certain things, because we hold the false notion that our actions are mostly driven by logic, when in reality our logic is driven by our actions. Thus, when something happens that doesn't fit our model, we ascribe it to randomness, even though it could probably be shown that the same situation would produce the same reaction the majority of the time.

    If we actually studied our actions, rather than our rationalizations for why they occur, it'd be a lot easier to model our behavior.

    And an AI that doesn't care why we do something, but just learns to predict WHAT we will do, is a good first step toward that.
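
    As a concrete (and entirely hypothetical) sketch of that "predict WHAT, not WHY" idea: even a first-order Markov model over observed player actions does this. It carries no theory of motive, only transition counts.

        # Predict WHAT the player does next from transition counts alone,
        # with no model of WHY. First-order Markov chain over actions.
        from collections import Counter, defaultdict

        class ActionPredictor:
            def __init__(self):
                self.transitions = defaultdict(Counter)  # action -> next-action counts

            def record(self, actions):
                """Tally which action followed which in one observed session."""
                for prev, nxt in zip(actions, actions[1:]):
                    self.transitions[prev][nxt] += 1

            def predict(self, last_action):
                """Most frequent follow-up to the player's last action, if known."""
                counts = self.transitions.get(last_action)
                return counts.most_common(1)[0][0] if counts else None

        predictor = ActionPredictor()
        predictor.record(["scout", "mine", "build", "mine", "build", "attack"])
        print(predictor.predict("mine"))  # -> build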

  • Misleading Title (Score:4, Insightful)

    by Malkin ( 133793 ) on Thursday July 30, 2009 @03:01PM (#28886129)

    If the AI agents are learning to mimic human behavior by observing how humans play a game, then the game design clearly already exists. Therefore, what is described in the article is certainly not anything even remotely close to "games that design themselves."

  • by johnsonav ( 1098915 ) on Thursday July 30, 2009 @04:31PM (#28887647) Journal

    And a computer could not have consciousness because...

  • by johnsonav ( 1098915 ) on Friday July 31, 2009 @12:10AM (#28892347) Journal

    man hasn't "(re-)invented" it yet, and isn't likely to for a long time to come.

    That "long time" will be forever, if we never research it. You've got to start somewhere.

    Do you think we'll just magically come up with the answer, if we never think about the question?
