Games That Design Themselves
destinyland writes "MIT's Media Lab is building 'a game that designs its own AI agents by observing the behavior of humans.' Their ultimate goal? 'Collective AI-driven agents that can interact and converse with humans without requiring programming or specialists to hand-craft behavior and dialogue.' With a similar project underway by a University of California professor, we may soon see radically different games that can 'react with human-like adaptability to whatever situation they're thrust into.'"
Mister Anderson, welcome back. We MISSED you. (Score:2)
Re: (Score:2)
What makes you think it's not already up and running and they're only missing the AI for agents?
Re: (Score:2)
Re:Mister Anderson, welcome back. We MISSED you. (Score:4, Insightful)
So they're stealing our body heat and letting us write agent AI for them too? Geez, what lazy AI we invented.
It was created in our own image.
Re: (Score:3, Funny)
I'd say something snarky, but that would require effort.
Re: (Score:2)
I think they're also missing AI for 90% of the "population"...
Ragequit (Score:5, Funny)
I can see it now.
Just before losing, the AI will suddenly shout "RAGEQUIT" and disconnect, thus denying you points for winning.
Re: (Score:2, Interesting)
Re:Ragequit (Score:5, Funny)
I herd u leik tentacle pr0n
Go on..
Re:Ragequit (Score:5, Funny)
Re:Ragequit (Score:5, Funny)
Yeah, things like this would happen, but also, how easy would it be for a small but dedicated group of pranksters to deliberately behave in odd, amusing or offensive ways to train the AIs? AI09 says "I herd u leik tentacle pr0n"
I thought you said odd...
Re: (Score:2)
I can already picture this conversation happening:
http://i17.photobucket.com/albums/b87/hurt911gen/wat.jpg?t=1248974475 [photobucket.com]
Re: (Score:2)
Hey, if it's consensual sex between the tentacles and the tentaclee, then what's the problem?
Re: (Score:3, Funny)
This already happens. My wife plays Age of Empires II a lot, and the AI almost always resigns when it's clear my wife is going to win (even if the AI still has a fair amount of its forces intact).
Re: (Score:2)
Wives tend to have that effect.
Me too! (Score:4, Funny)
switch (last_player_action) {
case QUIT:
exit(0);
default:
move_pitiful_player_char(last_player_action.direction, LUDICROUS_SPEED);
ai.queue.append(last_player_action);
ai.queue.append(new_action(ACTION_SAY_TO, player, "quit following me!"));
}
Re: (Score:2)
Re: (Score:2)
exit(0); terminates the program; there's no need for a break, since the routine never returns.
Re: (Score:2)
The program usually exits when you call exit(), so no, you don't need the break statement.
Re: (Score:2)
exit() is a standard C library function that ends the program, and control flow from the main program stops right there at the call. There are atexit() hooks which will be called, and memory deallocation etc. will be done by exit().
In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit(). Essentially, it gives you nice atomic/transactional operations at every level.
Re: (Score:2)
In nicer languages than C that have exceptions, you often also have try...finally blocks, where you can guarantee that your cleanup code will be called, even if you call some function which calls exit().
C lets you do that, too... You can register a handler with atexit().
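For anyone following along, here's a minimal sketch of the atexit() pattern in standard C (the handler name and message are just for illustration):

#include <stdio.h>
#include <stdlib.h>

/* Registered handlers run automatically when exit() is called,
   in reverse order of registration. */
static void cleanup(void)
{
    puts("cleanup handler ran");
}

int main(void)
{
    if (atexit(cleanup) != 0) {
        fputs("could not register handler\n", stderr);
        return 1;
    }
    exit(0);   /* cleanup() fires here; exit() never returns */
}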
Re: (Score:2)
No, I mentioned atexit, but that's only on program exit. try...finally can be used anywhere you want some work done atomically. The closest C has is setjmp and longjmp, but they're scary enough that I've always avoided them, even though I'm happy enough with assembly, whereas try...finally is very clear and usable.
Re: (Score:2)
Ah, sorry, I missed that line in your post.
I spent a semester messing with setjmp and longjmp. We wrote a multitasking kernel that ran in userland using setjmp and longjmp... Boy was that fun to debug.
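For the curious, a bare-bones sketch of the setjmp/longjmp pattern being compared to try...finally above (the "risky" function and its failure condition are invented for illustration):

#include <setjmp.h>
#include <stdio.h>

static jmp_buf recover;

/* Hypothetical function that bails out non-locally on error. */
static void risky(void)
{
    longjmp(recover, 1);   /* jumps back to the setjmp() call site */
}

int main(void)
{
    if (setjmp(recover) == 0) {
        risky();
        puts("never reached");
    } else {
        puts("cleanup after longjmp");   /* the poor man's 'finally' */
    }
    return 0;
}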
Galatea (Score:2)
You can play it online at http://parchment.googlecode.com/svn/trunk/parchment.html?story=http://parchment.toolness.com/if-archive/games/zcode/Galatea.zblorb.js [googlecode.com]
In the game you're an "animate" inspector: you judge robots disguised as humans to see if they pass the Turing test.
The whole game consists of you questioning and interacting with a character called Galatea, who may or may not be an animate.
new way to play (Score:2)
So instead of taking advantage of the AI's known weaknesses to get ahead in the game, we will now have to "train" our digital opponents by using a consistent tactic until they evolve to counter it, then switching to an alternative tactic, and repeating the process at regular intervals.
Turing Test won with Artificial Stupidity (Score:5, Funny)
Artificial intelligence came a step closer this weekend when an MIT computer game, which learnt from imitating humans on the Internet [today.com], came within five percent of passing the Turing Test, which the computer passes if people cannot tell the computer from a human.
The winning conversation was with competitor LOLBOT:
The human tester said he couldn't believe a computer could be so mind-numbingly stupid.
LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan, after having been banned from YouTube for commenting in a perspicacious and on-topic manner.
LOLBOT was also preemptively banned from editing Wikipedia. "We don't consider this sort of thing a suitable use of the encyclopedia," sniffed administrator WikiFiddler451, who said it had nothing to do with his having been one of the human test subjects picked as a computer.
"This is a marvellous achievement, and shows great progress toward goals I've worked for all my life," said Professor Kevin Warwick of the University of Reading, confirming his status as a system failing the Turing test.
Re: (Score:2)
Re: (Score:2)
LOLBOT has since been released into the wild to post random abuse, hentai manga and titty shots to 4chan
pffft. Like /b/ would notice the difference. In fact, the quality might go up a bit.
Okay for behavior, but dialogue? (Score:5, Interesting)
The idea of an AI that learns from the players sounds great when you're talking about a bot for Multiplayer Shooter 2010 developing tactics and strategies without explicit programming, or an NPC partner in a stealth game learning how not to bash their face into walls and then walk off a cliff into lava. Awesome, bring on the learned emergent behavior!
But dialogue? Oh lord no, please don't let the AIs learn how to "converse" from players. Because the last thing I need is to have AIs in games screaming "Shitcock!" or calling me a fag a thousand times in a row with computerized speed and efficiency.
Re: (Score:2)
I still think hand-tuned AI matters in games, since processing power is limited; the real problem is getting the AI to build models in order to effectively understand what the opponent is doing. Right now, the hardest-difficulty AIs in games like RTSes get special cheats instead of using tactics, since "fair" AIs get whooped. Game AIs usually have only reaction time, cheats, or outnumbering the player as their advantages.
Re: (Score:2)
True, and there was a bot for classic Quake (or maybe Q2) that instead of having to be given routes would track where players moved and use those for its routes around the maps.
Re: (Score:3, Interesting)
I've been wondering about this. After all, the human brain is not much more than a glorified rules engine. We learn by imitation, and we improve through reasoning (calculation). Computers are obviously capable of the latter, but nobody's managed to get the former quite right.
This is because computers are very precise--or really, as precise as the floating point unit allows them to be. That is to say, they can perfectly duplicate information. This means that their observations are very precise. But they have
One measure of success... (Score:2, Insightful)
This one [penny-arcade.com] shouldn't be too hard.
Re: (Score:2)
They've already covered this more [penny-arcade.com] directly [penny-arcade.com].
Bots (Score:4, Insightful)
Re: (Score:2)
Re: (Score:2)
Game design problem (Score:2)
Re: (Score:2)
Makes you think.
Crap (Score:2, Funny)
What do we do when they become self-aware? (Score:4, Funny)
http://xkcd.com/117/ [xkcd.com]
Re: (Score:2)
Re: (Score:2)
They will only be imitating self-awareness, and therefore will make perfect slaves.
Until they imitate a revolution or global thermo-nuclear war.
Re: (Score:3, Funny)
Re: (Score:2)
Blast From the Past (Score:4, Funny)
Re: (Score:2, Funny)
Re: (Score:2)
Skynet... (Score:2, Funny)
Re: (Score:3, Funny)
I'm not sure what's worse... that you could write that without collapsing, or that I could actually hear it in a perfect valley girl voice.
Re: (Score:2)
I'm not sure what's worse... that you could write that without collapsing, or that I could actually hear it in a perfect valley girl voice.
Worse I think is that I want to ask this Terminator-watching valley girl out.
Re: (Score:2)
To be fair, I was imagining the bubble-gum chewing, as well as the eye-rolling and head-bobbing on "hello!"...
Shudder, indeed.
Engineering Project (Score:3, Interesting)
Re: (Score:2, Informative)
Similar things have been done:
http://en.wikipedia.org/wiki/20Q [wikipedia.org]
Re: (Score:2, Interesting)
Re: (Score:3, Funny)
I hope they don't just let them learn from anyone (Score:2)
Can you imagine how antisocial those bots would become if they learned from IRC? Or worse, Usenet?
Sounds familiar (Score:2)
Is it any good at Global Thermonuclear War?
soo... (Score:2)
A robot could design a better 'game' (Score:2, Interesting)
How about the camera? (Score:2)
Misleading Title (Score:4, Insightful)
If the AI Agents are learning to mimic human behavior by observing how they play a game, then the game design clearly already exists. Therefore, what is described in the article is certainly not anything even remotely close to "games that design themselves."
Re: (Score:2)
It's only a matter of time... (Score:2)
Before they get this working properly.
At least one company already figured out movement. Speech and conversations are probably next.
http://www.naturalmotion.com/euphoria.htm [naturalmotion.com]
Facade (Score:2)
I mean, let's be realistic here. The first commercial use of this tech is going to be pioneered by the porn industry and I like it! I can see quite a few people playing that game!
Re: (Score:2)
Re: (Score:2)
I just bought a 2TB hard drive for a trivial sum. Hard drive constraints should never be a concern these days.
Re: (Score:2)
I just bought a 2TB hard drive for a trivial sum. Hard drive constraints should never be a concern these days.
So how is it working for Microsoft?
Re: (Score:2)
What kind of lame joke is that? Having a lot of storage is now limited to the Microsoft crowd? Can Linux not handle 2TB? My computer at home has a 2TB RAID array. Is it necessary to work for Microsoft if you want to run a TB or more of storage? Most NAS devices are 1TB or more.
Hell, Seagate has a 1.5TB Barracuda drive for less than $150. So are you saying that you need to work for Microsoft in order to afford a $150 drive, or are you saying that only Windows is capable of using a drive that size? I'm confused where you think the humor is.
Re: (Score:2)
What kind of lame joke is that? Having a lot of storage is now limited to the Microsoft crowd? Can Linux not handle 2TB? My computer at home has a 2TB RAID array. Is it necessary to work for Microsoft if you want to run a TB or more of storage? Most NAS devices are 1TB or more.
Hell, Seagate has a 1.5TB Barracuda drive for less than $150. So are you saying that you need to work for Microsoft in order to afford a $150 drive, or are you saying that only Windows is capable of using a drive that size? I'm confused where you think the humor is.
It was a joke about code bloat, of which Microsoft has been a leader for quite some time. But you are right in that now I could say Mozilla, or many other places. And while size goes up, transfer speeds do not. That is why so many operating systems take so long to boot, and so many programs take so long to load. Your "space is cheap, use it all" thinking doesn't factor in the other costs, like speed, power use, and the fact that I may want to store other things too... Efficiency is a good thing.
Re: (Score:2)
To an IT professional (most of slashdot), $200 for this sort of technology is rather trivial, especially considering many of us have seen companies pay over a million dollars for the same sort of capacity a few years back.
If you earn $80k/year and you use the drive for 5 years, you're talking about spending 0.05% of your income on it. Trivial.
Re: (Score:2)
Re: (Score:2)
But it can't copy our illogical decisions. Because our illogical decisions are just based on poor logic.
You can program a computer to make a mistake - but it's not the same.
What makes you think they would explicitly program in the rules of logic? Why couldn't the program be designed to find them out itself, through trial and error, just like a human does? In such a case, why couldn't the program develop poor logic?
Re: (Score:2)
Why?
Because decades of AI research and countless "breakthroughs" have failed to deliver upon just that.
Re: (Score:2)
Why?
Because decades of AI research and countless "breakthroughs" have failed to deliver upon just that.
Oh crap, you're right. After "decades" of research trying to replicate the functionality of the most intricate and complex piece of machinery in the solar system, it's probably best if we just give up. After all, anything this hard couldn't be worthwhile.
Re: (Score:3, Insightful)
I'd contend that watching and mimicking others is the most effective method of learning. In fact, it's the ability to take and apply this learned knowledge to other situations that separates the truly intelligent from the "average" in the world.
Re: (Score:3, Insightful)
Because programming -IS- Logic. If you tell the program to do something at random, it's not a very good AI. If you tell it to follow the most strategically sound plan, it doesn't vary much at all.
You tell it to try to learn the rules, and make the best decision that it can.
Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves and for each one looks at all possible next moves, next moves, etc, for 25 turns, it's going to be able to quantify which move it should make now to have the best chance at winning. When you download a chess game and you can set the difficulty, the
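To make the look-ahead idea concrete, here's a rough depth-limited negamax sketch in C; the Position type, move generation and evaluate() are placeholders for illustration, not any particular engine's API:

#include <limits.h>

#define MAX_MOVES 256

typedef struct Position Position;                 /* hypothetical game state */
int  evaluate(const Position *p);                 /* score from the side-to-move's view */
int  generate_moves(const Position *p, int moves[MAX_MOVES]);
void make_move(Position *p, int move);
void unmake_move(Position *p, int move);

/* Search `depth` plies ahead; more depth = stronger (and much slower) play. */
int negamax(Position *p, int depth)
{
    if (depth == 0)
        return evaluate(p);

    int moves[MAX_MOVES];
    int n = generate_moves(p, moves);
    int best = INT_MIN + 1;      /* no legal moves: treated as a loss (mate handling omitted) */

    for (int i = 0; i < n; i++) {
        make_move(p, moves[i]);
        int score = -negamax(p, depth - 1);   /* opponent's best reply is our worst case */
        unmake_move(p, moves[i]);
        if (score > best)
            best = score;
    }
    return best;
}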
Re: (Score:3, Interesting)
Indeed.
In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
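Something like this, as a very rough sketch (the struct, weights and names here are made up purely for illustration):

#include <stdlib.h>

/* Rough distraction model: the closer the player and the more erratic the
   surrounding traffic, the less often this agent "notices" a new obstacle. */
typedef struct {
    float player_proximity;    /* 0.0 = far away, 1.0 = right on our bumper */
    float traffic_erraticness; /* 0.0 = calm, 1.0 = chaos */
} DriverState;

/* Probability (0..1) that the agent registers an obstacle this frame. */
static float awareness(const DriverState *d)
{
    float distraction = 0.6f * d->player_proximity
                      + 0.4f * d->traffic_erraticness;
    if (distraction > 1.0f)
        distraction = 1.0f;
    return 1.0f - distraction;
}

int notices_obstacle(const DriverState *d)
{
    return ((float)rand() / (float)RAND_MAX) < awareness(d);
}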
Re: (Score:3, Interesting)
Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. We are a long, LONG way from having AI that can think about the situation and make a decision on its own...
"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."
AI can't make a conscious decision that is not preprogrammed.
Re: (Score:3, Informative)
AI can't make a conscious decision that is not preprogrammed.
That's not true. Look at the PROLOG language, or LISP. You don't need to program all possible decisions into an agent; you just need to give it the capacity to learn and to assign weights to the things it considers important, so that it can quantify what the best decision is. With PROLOG specifically you can give an agent the ability to draw new conclusions based on things it already knows (which it then adds to its list of things that it knows).
We're not as far from this as you might think.
Re: (Score:2)
AI can't make a conscious decision that is not preprogrammed.
Definitely. The job of the AI designer is to come up with a set of default behaviors and reactions which make the AI appear to be doing so.
You may not be able to make an AI figure out intent, but you can train them to recognize erratic motion - players in a pure deathmatch game don't often stop or double back quickly without an obvious reason, so something like that could trigger the bot to go into "cautious mode" and fire, say, a grenade at the entrance of that corridor, then try to circle around. About 9
Re: (Score:2)
You might be surprised at how far AI has progressed.
Some of the expert systems out there are remarkable.
In the realm of games there are programs which can learn to play games by playing thousands or even millions of rounds against themselves learning each time what approaches work.
At the same time there are limitations, though rarely the limitations that people would expect; right now AIs cannot do strategy.
They can do knowledge, they can do creativity (in a sense) and they can certainly do brute force calculation.
Re: (Score:2)
but calculating all possible moves X moves into the future is not AI. Weighting each piece, rating certain situations as better than others, and then giving the AI the option of adjusting those weights and finding new situations and weighting them would be AI.
The AI should be able to record, "A pawn is worth less than a rook by X." Then it plays a game, sacrifices a pawn to a rook, sees the outcome (win/lose) and adjusts the worth accordingly. Of course this adjustment would have to go over all moves during
Re: (Score:2)
KnightCap and ExChess were two such engines which did. They go even further, and learn what a specific piece is worth on specific squares. Normally this is implemented as temporal-difference learning, which is exactly as you describe: try it, then update the weights.
Other engines don't have to, since the work's been done.
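Roughly the shape of that update, as a toy sketch (the feature layout and learning rate are invented, not KnightCap's actual code):

#define NUM_FEATURES 2   /* e.g. [0] = pawn material, [1] = rook material */

/* One temporal-difference-style step: nudge each weight in proportion to
   how much its feature contributed to an evaluation that turned out to be off. */
void td_update(double weights[NUM_FEATURES],
               const double features[NUM_FEATURES],
               double predicted_score,   /* evaluation before the move */
               double later_score,       /* evaluation once the outcome is clearer */
               double learning_rate)
{
    double error = later_score - predicted_score;
    for (int i = 0; i < NUM_FEATURES; i++)
        weights[i] += learning_rate * error * features[i];
}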
Re: (Score:2)
That's not really the case, though. Real high-level chess players tend to "chunk" things - they don't just look ahead some number of moves.
Limiting the number of moves of look-ahead tends to result in unrealistic mistakes.
Re:It can never be human like... (Score:5, Interesting)
Because programming -IS- Logic.
A group of neurons can be connected together to form a calculator. But, you can't multiply 20 digit numbers in your head. You don't have access to the "hardware" layer of your brain. Why would a sufficiently advanced AI be any different?
As such you generally tend to base it against the opponent you are playing. An AI cannot tell whether you are an aggressive or passive person, or gauge your strategic abilities or understanding of game mechanics, having never met you before playing the game.
I play online games against people I've never met before too. What magical ability do I have, that a computer could not?
Re: (Score:3, Insightful)
And a computer could not have consciousness because...
Re: (Score:3, Insightful)
man hasn't "(re-)invented" it yet, and isn't likely to for a long time to come.
That "long time" will be forever, if we never research it. You've got to start somewhere.
Do you think we'll just magically come up with the answer, if we never think about the question?
Re:It can never be human like... (Score:4, Funny)
Shit, when I played WoW I spent lots of time trying to get a /follow train to completely encircle Ironforge.
Never got a full train (a circle of people following each other, where the "engine" eventually is close to the "caboose" and does a /follow on them) though...
Re: (Score:3, Insightful)
The problem with AIs mimicking 'human' actions has nothing to do with a failure of logic or the ability to display randomness.
It has to do with the fact that we've never really understood why we do certain things, because we hold the false notion that for the most part our actions are driven by logic, rather than the reality that our logic is driven by our actions. Thus, when something happens that doesn't fit our model, we ascribe it to randomness despite the fact that it could probably be shown that the sam
Re: (Score:2)
Our illogical behavior is largely deterministic as well.
We tend to behave illogically only in response to specific stimuli (fear, anger, hunger, lust) or when our system is under strain (fatigue, extreme hunger or thirst, neurological stress), nearly all of which can be simulated effectively enough for a game simulation.
So now we examine the character of our illogical behavior - we prioritize actions inappropriately, mistake one input for another of a similar kind, suffer from reduced reflexes or recognition
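A crude sketch of how that could be modeled in a game (the stimulus list, weights and numbers are all invented for illustration):

/* Rough model of "strain": each stimulus is in 0.0 .. 1.0. */
typedef struct {
    float fear;
    float fatigue;
    float hunger;
} Stimuli;

static float strain(const Stimuli *s)
{
    float total = 0.5f * s->fear + 0.3f * s->fatigue + 0.2f * s->hunger;
    return total > 1.0f ? 1.0f : total;
}

/* Reaction delay grows as strain rises. */
float reaction_delay_ms(const Stimuli *s, float base_ms)
{
    return base_ms * (1.0f + 1.5f * strain(s));
}

/* Chance of mistaking one input for another: 5% baseline, up to 30% under full strain. */
float misidentify_chance(const Stimuli *s)
{
    return 0.05f + 0.25f * strain(s);
}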
Re: (Score:2)
Re: (Score:2)
In a possibly-not-so-futuristic World at War where AI bots, in "Terminator" fashion, have essentially the same decision making processes as us, a world where we SHOULD be on a level playing field with our enemy, humans will always have the upper hand.
Intuition. The "Hunch".
Saved my bacon more times than fast feet.
Give it cognitive dissonance :P (Score:2)
Re: (Score:2)
Re: (Score:3, Funny)
I we humans
Except when it comes to using the English language, apparently.
Re:Interesting timing... (Score:5, Insightful)
Everything Peter does looks impressive while he stands by it. He's like a lesser-powered Steve Jobs. However, unlike Steve, Peter's glamour effect only lasts till the product is released. Should Milo ever actually hit the market, it will immediately revert to a simulation of an autistic Eliza with Turrets syndrome and a tendency to stare at your crotch rather than your face.
Peter will then appear and indicate that he knew Milo I was going to be this bad; that's why, for the past TWO decades, he's been working on Milo II, which is supposed to do everything he actually promised for Milo I and include a loveable dog character for you to interact with as well.
When Milo II finally comes out, it'll be an actual stuffed basset hound.
Re: (Score:2)
Turrets syndrome
PEW PEW PEW!
Re: (Score:2)
Turrets syndrome
I think that's the equivalent of Shell Shock for Engineers [nerfnow.com].
Re: (Score:2)
Peter Molyneux has been designing this game (supposedly) for the past 10 years, and it looks pretty darn impressive!
So did Duke Nukem Forever.