Games That Design Themselves
destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"
One measure of success... (Score:2, Insightful)
This one [penny-arcade.com] shouldn't be too hard.
Bots (Score:4, Insightful)
Re:It can never be human like... (Score:3, Insightful)
Because programming -IS- logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.
You tell it to try to learn the rules, and make the best decision that it can.
Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves and, for each one, all possible next moves, next moves, etc., for 25 turns, it can quantify which move it should make now to have the best chance at winning. When you download a chess game that lets you set the difficulty, the main thing it changes is how far ahead the AI is allowed to look. An "easy" AI might only look 3 moves ahead. It's been a while since I took any AI courses, but I seem to remember that human masters like Kasparov are capable of looking ahead around 10-12 turns.
So it's not that you tell the AI to make bad decisions, you simply limit the information it has to work with. This is more equivalent to what most humans do when they make bad decisions ("I didn't think of that").
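The "limit how far ahead it looks" idea above is essentially depth-limited minimax search. A full chess example won't fit in a comment, so here's a minimal sketch using Nim (take 1-3 stones; taking the last stone wins) as a stand-in. The `depth` parameter is the difficulty knob the parent describes: a shallow search hits its horizon before it can see the winning line.

```python
# Depth-limited minimax, illustrated with Nim as a stand-in for chess.
# Difficulty comes from limiting lookahead, not from deliberately
# choosing bad moves. (Illustrative sketch, not the MIT project's code.)

def minimax(stones, depth, maximizing):
    """Score the position for the AI, searching at most `depth` plies.

    +1 means the AI wins with best play, -1 means it loses,
    0 means the search horizon was reached with nothing decided.
    """
    if stones == 0:
        # The previous player took the last stone and won,
        # so whoever is to move now has lost.
        return -1 if maximizing else 1
    if depth == 0:
        return 0  # horizon reached: no information either way
    scores = [minimax(stones - take, depth - 1, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones, depth):
    """Pick the number of stones to take with the best score in the horizon."""
    moves = [t for t in (1, 2, 3) if t <= stones]
    return max(moves, key=lambda t: minimax(stones - t, depth - 1, False))
```

With a deep search, `best_move(6, depth=10)` returns 2, leaving the opponent a losing multiple of 4; at `depth=1` the same position looks like a draw in every direction, so the AI can blunder — exactly the "easy mode" effect described above.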
Re:Interesting timing... (Score:5, Insightful)
Everything Peter does looks impressive while he stands by it. He's like a lesser-powered Steve Jobs. However, unlike Steve, Peter's glamour effect only lasts until the product is released. Should Milo ever actually hit the market, it will immediately revert to a simulation of an autistic Eliza with Tourette's syndrome and a tendency to stare at your crotch rather than your face.
Peter will then appear and indicate that he knew Milo I was going to be this bad, and that's why, for the past TWO decades, he's been working on Milo II, which is supposed to do everything he actually promised in Milo I and include a lovable dog character for you to interact with as well.
When Milo II finally comes out, it'll be an actual stuffed basset hound.
Re:Mister Anderson, welcome back. We MISSED you. (Score:4, Insightful)
So they're stealing our body heat and letting us write agent AI for them too? Geez, what lazy AI we invented.
It was created in our own image.
Re:there are lots of human-like programs (Score:3, Insightful)
I'd contend that watching and mimicking others is the most effective method of learning. In fact, it's the ability to take and apply this learned knowledge to other situations that separates the truly intelligent from the "average" in the world.
Re:It can never be human like... (Score:3, Insightful)
The problem with AIs mimicking 'human' actions has nothing to do with a failure of logic or the ability to display randomness.
It has to do with the fact that we've never really understood why we do certain things, because we hold the false notion that our actions are, for the most part, driven by logic, when the reality is that our logic is driven by our actions. Thus, when something happens that doesn't fit our model, we ascribe it to randomness, despite the fact that it could probably be shown that the same situation would produce the same reaction the majority of the time.
If we actually studied our actions, rather than our rationalizations for why they occur, it'd be a lot easier to model our behavior.
And an AI that doesn't care why we do something but just learns to predict WHAT we will do, is a good first step towards that.
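That "predict WHAT, not WHY" idea can be sketched very simply: observe a player's action sequence and guess the next action from the most frequent follower of the current one. (This is an illustrative toy, not how the MIT project actually works — they presumably use far richer models.)

```python
from collections import Counter, defaultdict

class ActionPredictor:
    """Toy behavior model: predicts a player's next action from
    observed action-to-action transition frequencies, with no notion
    of *why* the player acts that way."""

    def __init__(self):
        # For each observed action, count which actions followed it.
        self.followers = defaultdict(Counter)

    def observe(self, actions):
        """Record every consecutive pair in a sequence of actions."""
        for prev, nxt in zip(actions, actions[1:]):
            self.followers[prev][nxt] += 1

    def predict(self, current):
        """Return the most frequently observed follower, or None."""
        counts = self.followers[current]
        return counts.most_common(1)[0][0] if counts else None
```

Feed it `["dodge", "attack", "dodge", "attack", "block"]` and it learns that this player follows a dodge with an attack — no rationalization required, just observed regularity.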
Misleading Title (Score:4, Insightful)
If the AI Agents are learning to mimic human behavior by observing how they play a game, then the game design clearly already exists. Therefore, what is described in the article is certainly not anything even remotely close to "games that design themselves."
Re:It can never be human like... (Score:3, Insightful)
And a computer could not have consciousness because...
Re:It can never be human like... (Score:3, Insightful)
man hasn't "(re-)invented" it yet, and isn't likely to for a long time to come.
That "long time" will be forever, if we never research it. You've got to start somewhere.
Do you think we'll just magically come up with the answer, if we never think about the question?