IEEE Spectrum Surveys Current Games' AI Technology
orac2 writes "IEEE Spectrum has an article on the AI technologies used in the current crop of video games. State machines, learning algorithms, cheating, smart terrain, etc. are discussed. Game developers interviewed include Richard Evans, of Black and White fame, who talks about Lionhead's upcoming Dmitri project, and Soren Johnson, who created Civ III's AI."
Sheesh (Score:5, Funny)
Re:Sheesh (Score:5, Funny)
if (humanPlayer.isWinning()) {
    cheat();
}
Re:Sheesh (Score:3, Funny)
Re:Sheesh (Score:1)
This is a longer version of what I used to yell before throwing the NES controller against the wall.
Re:Sheesh (Score:1)
You, sir, give artificial intelligence a whole new meaning.
Re:Sheesh (Score:1)
Ultimate Page of Artificial Intelligence for Games (Score:5, Informative)
Not so fast... (Score:5, Funny)
**unplugs computer**
yeah well (Score:4, Funny)
AI = my biggest complaint with the games industry (Score:5, Interesting)
Unfortunately, the games industry seems to have focused on turning out hundreds of online fragfest games that bring in the $$ but leave little to the imagination. Even 'The Sims' is at it.
AI doesn't necessarily have to be 100% realistic for a rewarding offline game. But even the bots in UT2003 aren't that hot, so it's clear AI and single-player games are taking a backseat to the online money spinners.
Hopefully some big breakthroughs in AI will turn the tide, but with the games industry already ignoring AI, I'm not optimistic for AI's future in games... since everyone would rather play their dumb neighbor anyway.
Re:I like it, but am unsure. (Score:4, Informative)
I'd really like to see a decent AI for games like Baldur's Gate or Neverwinter Nights. The henchmen have roughly the IQ of a very dumb dog. On more than one occasion, I've had a henchman walk directly into a fireball on the basis that an opponent was nearby. Mmm... toasty.
Re:I like it, but am unsure. (Score:1)
Re:AI = my biggest complaint with the games indust (Score:5, Interesting)
In most FPS games, the bots simply have really good "aim" and really good "dodging ability" in the higher difficulty levels, coupled with the fact that the computer technically knows where you are all the time. Even so, a player will usually develop reflexes that will allow them to outgun the bots.
Players without the "reflexes" to beat the bots' super aim can still beat them, as the bots will repeatedly fall for the same tricks over and over.
To have realistic bots, they need to be able to learn from their mistakes. Bots fail to learn things such as the following:
1) The player's favorite weapons.
A common technique in games like Quake is to "control" the weapons. If you are playing against someone who is great with the rocket launcher, but not so hot with the other weapons, you can try to limit their access to that weapon. Bots don't pick up that you use the RL all the time, and thus don't really do a great job of stopping you from getting it.
2) The player's techniques.
Obviously, if a player likes to re-use certain techniques (circle strafing, etc.) too much, other players will pick up on it. Bots, however, don't really anticipate what the player might do in this fashion.
3) Mistakes.
At the same time, the bots will often reuse the same techniques as well, and the human player will pick up on it. Bots need to learn which of their tactics have failed and try something else. (A toy sketch of point 1 follows below.)
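Point 1 is the easiest to sketch in code. Something like this (my own toy example, not from any shipping game) would give a bot the data it needs to start contesting the player's favorite pickup:

#include <array>
#include <cstddef>

// Which weapons exist; WeaponCount doubles as the array size.
enum Weapon { RocketLauncher, Railgun, Shotgun, WeaponCount };

// Track the human's kills per weapon so a bot can guess the favorite.
class WeaponPreferenceTracker {
    std::array<int, WeaponCount> kills_{};
public:
    void recordKill(Weapon w) { ++kills_[w]; }

    // The weapon the player leans on; the bot's item-control logic
    // could camp or grab this pickup to deny it.
    Weapon favorite() const {
        std::size_t best = 0;
        for (std::size_t w = 1; w < WeaponCount; ++w)
            if (kills_[w] > kills_[best]) best = w;
        return static_cast<Weapon>(best);
    }
};

Hook recordKill() into the frag events and the bot's patrol route could run through favorite()'s spawn point.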
Re:AI = my biggest complaint with the games indust (Score:3, Interesting)
A common technique in games like Quake is to "control" the weapons. If you are playing against someone who is great with the rocket launcher, but not so hot with the other weapons, you can try to limit their access to that weapon. Bots don't pick up that you use the RL all the time, and thus don't really do a great job of stopping you from getting it.
In my experience human players are no different in Q3A, UT2k3, or BF1942 on those 32+ player servers. I mean, there are so many players like "3l33t-b0rg" or "lick my pu$$y" (no joke) that the focus is more on "I killed 'BFGFucker9000'!" than on strategy that requires both teamwork and (gasp) thinking.
Now if you're talking about clan games or one-on-one matches, then yes, humans move to a higher level of strategic thinking to block resources that give opponents advantages.
For an AI subsystem to perform this type of thinking requires lots of dynamic analysis ("oh shit, he has a rocket launcher... time to snipe") and static analysis ("grab the rocket launcher so he doesn't kick my ass and own me").
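To put that split in code, a toy version (all names mine) might look like:

// Dynamic analysis: react to what the opponent has right now.
struct Threat { bool enemyHasRL; float myHealth; };

bool shouldSnipe(const Threat& t) {
    return t.enemyHasRL && t.myHealth < 50.0f;
}

// Static analysis: standing map knowledge, true regardless of the
// current fight (the rocket launcher spawn is always high value).
float pickupValue(bool isRocketLauncher) {
    return isRocketLauncher ? 1.0f : 0.3f; // deny the big gun first
}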
Re:AI = my biggest complaint with the games indust (Score:1)
Re:AI = my biggest complaint with the games indust (Score:2)
If you are really into a game, there are groups and leagues of players like you that can challenge you.
Re:AI = my biggest complaint with the games indust (Score:2)
Re:AI = my biggest complaint with the games indust (Score:2)
Re:AI = my biggest complaint with the games indust (Score:1)
2) The player's techniques....
3) Mistakes...."
In most games these are problems; however, while it's not an FPS, the AIs in Oni are fairly good.
The AIs will learn what weapons (attacks) you use and will learn to counter. Furthermore, the AIs modify their attacks based on how well you defend yourself.
The AIs couldn't be confused with human players (even if the game had a multiplayer mode), but they are skilled and adaptive enough to be challenging and can't be beaten with any one strategy.
Re:AI = my biggest complaint with the games indust (Score:3, Funny)
Give the bots a chance.
Re:AI = my biggest complaint with the games indust (Score:2)
Re:AI = my biggest complaint with the games indust (Score:2, Informative)
id's focus is on graphics and physics (game engines), not on providing strong heuristic bots (resource-intensive entities).
Re:AI = my biggest complaint with the games indust (Score:3, Insightful)
MOD THIS UP #@ +5; Informative @# (Score:5, Informative)
Links? Here's more. (Score:1)
Who knew pathfinding was such a drag?
An interesting Link (Score:3, Interesting)
We Ain't There Yet... (Score:5, Funny)
12 year olds can kick my ass. (Score:3)
That is so true. I tried playing a multiplayer game at work a few years ago... and I was absolutely destroyed in seconds. I wanted to get better at the game, but there were no other beginners to play with, and the single-player mode sucked.
Re:12 year olds can kick my ass. (Score:2, Insightful)
Re:12 year olds can kick my ass. (Score:2)
The first two weeks I spent playing Counter-Strike I had kill ratios of 1-4 and 1-2. Each time I died I forced myself to figure out what I should have done differently; repeat that over and over and you eventually learn. Satisfaction is working hard for days on end and then finally getting a 2-1 kill ratio one night!
Re:12 year olds can kick my ass. (Score:3, Insightful)
I apologize in advance if I sound like anyone's dad.
<rant>
I'm not talking about guys who are smooth, have good moves, and use map features well... guys with obvious skillz... No, I'm talking about kiddiez who don't move well, show no strategy, and have 5-6 to 1 kill ratios. If you watch them play in first-person mode (a la Counter-Strike), it becomes immediately obvious that something ain't right.
Talk about taking the pansy way out... Of course, this may have something to do with our societal tendency towards instant results, since some people cannot defer gratification for even one second (granted, it's a pretty hard concept to teach to a child... but then again, some adults go their entire lives and never learn it). It's why all those instant weight loss pills are such a goldmine, AND it's why people cheat.
Don't work and practice to get better, Oh no, that would require effort... get an aimbot or a wallhack; you too can be instantly L33T!
Feh... learn to take your lumps like a man. If you suck, admit it, and practice.
Ain't no aimbot in the game of life.
</rant>
Re:12 year olds can kick my ass. (Score:2)
There is NO DOUBT there are cheats out there, but they are RARE. More than 95% of the complaints lodged with our group turn out to be false. But every now and then you get a true script kiddie who actually gets off on winning by cheating. The only good thing is broadband has given people an IP that sticks for at least a week generally, so a ban actually can annoy them. With UT we ban by UID as well, so at worst they have to re-install the game.
Re:Not a bad compliment either.... (Score:2)
Re:12 year olds can kick my ass. (Score:2)
Learn grammar, then others might enjoy playing (and communicating) with you. Jerk.
Ethics, IP, and AI (Score:5, Interesting)
Namely, what happens if some researcher finally stumbles across an application that passes the Turing test? One that for all intents and purposes appears to be a conscious life form?
The resulting ethical problems will be myriad:
Re: (Score:2, Interesting)
Re:Ethics, IP, and AI (Score:3, Interesting)
The Turing test is to see if a person, over a teletype terminal, can tell the difference between another person and a computer.
If you can't tell the difference between a computer and a human, is the computer alive? However, if it is inorganic, is it alive?
Re:Ethics, IP, and AI (Score:3, Troll)
The resulting ethical problems will be myriad
You watch too many movies. No matter how smart you can make a computer look, it is still performing the same fetch-execute cycle on primitive instructions like "add," "shift," and "branch." If that is a conscious life form, then so is a pencil and piece of paper on which you perform all these primitive instructions manually.
Re:Ethics, IP, and AI (Score:5, Interesting)
Fan of John Searle [utm.edu], are you?
How's this for a thought experiment. Take a human being, and swap one of his neurons for an electronic circuit that behaves identically to a neuron. One at a time, swap out each real neuron and swap in an electronic one. Is he still conscious when his brain is entirely made up of electronic neurons instead of organic ones? OK, now swap out each neuron, and swap in a tiny computer that can simulate the I/O behavior of a neuron. Swap these in one at a time. Is he still conscious? OK, now start swapping out groups of neurons for computers that can simulate the I/O behavior of the group. Proceed until his entire brain is just one computer. When did he go from a human being to a soulless automaton?
Re:Ethics, IP, and AI (Score:3, Insightful)
Remember, Newtonians felt that the whole of the universe could be described if you could only write down the state of everything in it. Then came Quantum Theory and oops, all of that went out the window.
What makes a neuron a neuron is that it is, well, a neuron. Take your philosophy 101 (Dennett's "Brain in a Vat" article, anyone?) and stuff it.
Re:Ethics, IP, and AI (Score:4, Informative)
When it comes to building circuits that act like neurons, I'm not a neuromorphic engineer. But even today people are building circuits that can interface with neurons (look at the guys at Caltech [caltech.edu], for example). There was that guy in Britain (can't remember his name; references, somebody?) who was doing experiments with re-routing electrical signals from his arm to his computer and back to his arm, to see if the computer could reproduce the signal well enough to control the muscle (this was the same guy who walked around with implants that tracked where he was around the school).
If it makes you feel better you can skip the step about "synthetic" neurons and go right to the step where you've got a little computer that simulates the neurons and can interface with them.
As for simulating the brain exactly: first of all, there isn't much evidence that there are any quantum effects in the behavior of a neuron (people don't seem to take Roger Penrose too seriously in this area). Second, even if there are quantum effects and there is some randomness to the simulation, so what? Just because there are quantum effects doesn't mean you can't simulate them. You aren't trying to *predict* what someone else's brain is going to do; you just want a simulation that follows the same laws. You just have to add some randomness to your simulation.
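A toy version of that point (my own sketch; the leak and noise constants are assumptions, nothing measured): a neuron update that draws its "quantum" randomness from a generator, so the simulation follows the same statistics without predicting any particular brain.

#include <random>

// Toy neuron membrane-potential update with a random component.
float updatePotential(float v, float input, std::mt19937& rng) {
    std::normal_distribution<float> noise(0.0f, 0.1f); // assumed scale
    const float leak = 0.95f;                          // assumed leak
    return leak * v + input + noise(rng);
}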
What makes a neuron a neuron is that it is, well, a neuron.
Can't argue with you there.
Re:Ethics, IP, and AI (Score:3, Interesting)
The argument was evident to me before I ever heard of John Searle, but yes, I do agree with him.
How's this for a thought experiment. Take a human being, and swap one of his neurons for an electronic circuit that behaves identically to a neuron. One at a time, swap out each real neuron and swap in an electronic one. Is he still conscious when his brain is entirely made up of electronic neurons instead of organic ones?
I don't believe such a thing is possible. But let's assume that it is. Now try this for a thought experiment: instead of swapping out biological neurons for mechanical ones, take an instantaneous state snapshot of an entire brain. Now find a person for every neuron and give them a set of instructions for how to behave exactly like a neuron, making telephone calls to communicate with the other neurons (if it's possible to create an electronic circuit modeling a neuron, it must be possible to codify the behavior into a set of human-understandable instructions). Give them the initial state of the brain you are copying. Is this vast network of people following instructions a conscious (albeit vastly slower) clone of the brain whose state was surveyed?
Repeat the above, but only use two people to simulate two random neurons from the surveyed brain. Is it conscious? Now try three. Is that conscious? What is the magic threshold that can achieve consciousness?
I think your position of "consciousness is no more than the sum of the brain's neurons" is a much more perilous position to defend than the claim that there's something going on there that we don't understand.
Re:Ethics, IP, and AI (Score:2)
Here's where I think our intuitions just differ. I would think that no matter how many people are carrying out the execution of the program, even if it was one person carrying it out, or the entire planet, consciousness would arise through the act of carrying out the computations (albeit at a much slower rate).
Unfortunately, there's no real sense in arguing after this point, because what we think would happen simply differs, and there's no way to check consciousness (though you could interact with the simulated brain and it would respond intelligently).
Re:Ethics, IP, and AI (Score:2)
I swear I'm not ripping him off, I thought of that argument just now!
Re:Ethics, IP, and AI (Score:2)
The ants are alive individually. The question in that case is whether the colony itself is alive, as some sort of meta-organism.
It's funny you mention this example, which is explored in Douglas Hofstadter's book "Gödel, Escher, Bach: An Eternal Golden Braid".
Re:Ethics, IP, and AI (Score:2)
Just realise that there are other compelling viewpoints on the issue and keep an open mind. Don't become like Searle and just reject anything against your view as impossible or silly.
Re:Ethics, IP, and AI (Score:2)
Turing said nothing of the sort. He didn't speak to sentience at all, and he even considered the question "Can machines think" to be "too meaningless to deserve discussion." He only spoke to the question "could a machine some day win at the imitation game?"
Just realise that there are other compelling viewpoints on the issue and keep an open mind. Don't become like Searle and just reject anything against your view as impossible or silly.
I have to admit that I do. More than 50 years of AI have failed to produce anything that can function at the level of a four-year-old. I believe we are a long way off from understanding sentience and biological intelligence, if they can be understood and analyzed at all. I'm not saying that it's impossible, but it's certainly more than the sum of a massive neural network. Current proponents of strong AI won't admit that there's something going on there that they don't understand.
Re:Ethics, IP, and AI (Score:2)
Then I thought of a better approach: how many thousands of years has it taken to develop computational devices to the point they are at now? Yes, the last 50 years have seen the process accelerating, but that doesn't mean it hasn't taken longer.
In many ways we are still at the abacus level of AI.
Re:Ethics, IP, and AI (Score:2)
The advancements from there would snowball, as the hardware and software used to make things would then become meta-creators themselves (and therefore be the meta-meta-creations of the humans who designed this AI). This would go on and on...
While the site calls it a "singularity" I tend to think of it more as an "event horizon" (in more than one sense).
Re:Ethics, IP, and AI (Score:2)
Re:Ethics, IP, and AI (Score:1, Interesting)
Re:Ethics, IP, and AI (Score:2, Insightful)
Re:Ethics, IP, and AI (Score:2, Redundant)
Re:Ethics, IP, and AI (Score:4, Insightful)
No, an AI will not be considered alive until it can successfully judge a Turing test
I never understand requirements like this; you're putting the mark way higher than you do for humans, or other life forms.
People (who knew nothing about AI) have been fooled in Turing tests by the likes of Eliza. And you're saying those people aren't even alive?
If you assume all adult humans are intelligent and alive, you can't make a test for intelligence that excludes some adult humans.
Note that the Turing test is a sufficient, but not necessary test for intelligence, as proposed by Turing. That means that he would consider a computer that passed it certainly intelligent, but it does not mean that "an AI will not be considered alive until it can pass a Turing test" - it may be considered intelligent for other reasons.
Re:Ethics, IP, and AI (Score:5, Interesting)
Let me throw some more questions into the mix:
Re:Ethics, IP, and AI (Score:3, Interesting)
If you like thinking about this sort of question (what's the consequence of running a mind on a computer; what happens if we compute all its subjective instants on another computer in a worldwide cluster; etc.), then you should read Permutation City by Greg Egan, which takes this discussion to extremes, with rather deep consequences.
Obviously the book is fiction, speculation, but still rather good: every time you finally wrap your brain around a new idea and go "wow" as you get it, the next paragraph takes that idea to its extremes :-)
Re:Ethics, IP, and AI (Score:2)
Re:Ethics, IP, and AI (Score:2)
Much of the first few weeks of a baby's life is spent learning how to move their body parts, and the discovery process continues for the first several years.
With both of my children I enjoyed watching them discover their own feet with their hands. First the process of learning to move their feet and hands, and then the shock and delight when they managed to grab onto their own feet.
Re:Ethics, IP, and AI (Score:2, Informative)
Now, why do you assume it is even possible to have an "AI life form"? One problem of many is that things that are alive (alive like animals are alive, not like plants are alive) are indeterministic (they do what they want to (free will)), and computers are deterministic. And why do you assume it would be equivalent to a person? Fleas are definitely alive, but they have exactly NO rights.
Tim
Re:Ethics, IP, and AI (Score:2, Informative)
This is the ghost in the machine [everything2.com] myth. Many believe that human intelligence is somehow "special" in a way that mechanical devices can't be, as if the brain were made of more than mere matter that follows predictable laws of physics. It's a common view. I would wager that >90% of the population believes it. Virtually every religion embraces and teaches it, either explicitly or implicitly. People want to believe that their identity is somehow transcendent to the universe.
We've seen this before with vitalism [everything2.com]: people used to be convinced that living matter was somehow "special" and different than non-living matter in a fundamental way that dips below physics. Now we know that organic life is just a special arrangement of atoms that allows those atoms to be self-replicating.
Obviously, a living cell is a particularly complex arrangement of atoms. The difference between the animate and the inanimate is huge: we relate to grizzly bears much differently than we do to a pile of rocks. Perhaps this is why our intuition is so misinformed... it's not representationally meaningful to think of a grizzly bear as being composed of dirt, air, and water, even though it is.
The same thing applies to intelligence. We have every indication that brains cause minds. We've mapped which areas of the brain correspond to which areas of functionality. Emotions can be altered predictably with drugs. Every aspect of a near-death experience (NDE) can be triggered with chemicals, sensory deprivation (IIRC), a sharp blow to the head, or something else mundane and physical. Ultimately, the experience known as self and the sensation of free will boil down to being just a special set of computations that can run on any Turing Machine or x86 with enough memory.
Of course, the complexity difference between you and an Unreal bot is several orders of magnitude. It's not representationally meaningful for me to think of you as the same thing... there's just not as much satisfaction in fragging a bot. :-)
MMRPG "societies." (Score:5, Interesting)
So how long until the AI gets good enough that we don't need it to be truly multiplayer and can all play on our local machines with AI characters that can chat with us about our real lives instead of just the game?
Morrowind's AI (Score:2, Interesting)
Morrowind: if you have not played it, you must. But make sure you have a fast processor (1 GHz+).
Re:Morrowind's AI (Score:2)
Re:MMRPG "societies." (Score:1)
In soviet russia, Dmitri codes AI!
(or at least codes things that piss Adobe off)
Re:MMRPG "societies." (Score:3, Funny)
I'm waiting for the day when my bloody computer can go online and play by itself, so I can refamiliarize myself with real life. What would be real nice is if I could tell the computer, "I'm feeling obnoxious today," and it will go out and mock those it humiliates.
At that point, I can have a fulfilling online experience in ten minutes a day. Maybe I'll read a book or something.
:: spins the Wheel of Karma ::
fturnonlylfutrnnolyfunntrollyfuntrollfunnytrollfunny.troll.funny..troll...funny....troll.....funny.....troll.......f.....TROLL
Damn. Oh well, some days, I can't tell myself.
I don't know about anyone else (Score:3, Informative)
Moore's Law (Score:4, Insightful)
Fortunately, most graphics processing had by then moved onto dedicated graphics cards, and CPU resources and memory--already increasing dramatically, thanks to Moore's law--were being freed up for computationally intensive and hitherto impractical tasks, such as better AI.
They make Moore's Law sound as if it is something more than just an observation.
Re:Moore's Law (Score:1, Insightful)
Re:Moore's Law (Score:2)
It's just easier to say "Things have gotten faster due to Moore's Law" than it is to say "Things have gotten faster due to the fact that processor speed doubles every 18 months, according to a statement made by Intel's Gordon Moore in 1965." It's just kind of one of those understood things in the tech world.
Hell, professors here at my school will use it in class. Not as though it were an actual law (like a law of thermodynamics), but as an understood concept of how fast technology is advancing.
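For the curious, the "understood concept" is just an exponential. A back-of-the-envelope sketch, assuming the commonly quoted 18-month doubling:

#include <cmath>
#include <cstdio>

// Growth factor after `years`, doubling every 1.5 years.
double mooreFactor(double years) {
    return std::pow(2.0, years / 1.5);
}

int main() {
    // Six years out: 2^(6/1.5) = 16x.
    std::printf("6-year factor: %.0fx\n", mooreFactor(6.0));
    return 0;
}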
OK, that was way too much time devoted to a petty posting...
Poor Article (Score:1)
Re:Poor Article (Score:1)
Scott
AI in games? (Score:1)
D'oh! (Score:1)
Read the headline too quickly again. I thought someone had found a way to get AI out of a ZX Spectrum. It's coffee time.
The Major Problem (Score:3, Interesting)
Naturally, AI gets the shortest time frame in the software engineering process, but there is no reason it should remain stagnant across future patches. From these patches, the developers can identify the shortfalls of the old AI and correct them. This is very rarely done, and is usually only performed across versions.
It's also very difficult to find a game with a decent or challenging AI, since most formal reviews ignore that aspect entirely. Most people will look for the 9/10 IGN Review award as opposed to the real deal in the message boards ("the AI in the game is a cheating piece of c***").
Re: The Major Problem (Score:5, Interesting)
> Naturally, AI gets the shortest time frame in the software engineering process, but there is no reason it should remain stagnant across future patches.
Another problem is that lots of games are just engines that support an 'official' dataset plus whatever modpacks the players care to come up with, but even the cheatAI that ships with the game won't work worth a damn on the modpacks.
I hope in the future machine learning methods can help with both of these problems. I.e., a couple of months before release when the code is fairly stable and the graphics are in production, turn on the old Beowulf cluster and let reinforcement learning or an evolutionary algorithm train a good AI for the game. As for modpacks, the vendors could support something like sourceforge, where gamers could upload their modpacks and have the Beowulf cluster automagically re-tune the AI to work right with them.
And of course, the machine learning could continue in the background for as long as people were interested in the game, allowing them to download "new improved" AIs every few months.
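A skeleton of that overnight cluster run might look like the following. Everything here is my own guess at the shape of it; the knobs and the stand-in fitness function are placeholders for real simulated matches:

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical knobs an evolutionary run could tune for a bot.
struct BotGenome {
    float aggression;
    float accuracy;
    float itemPriority;
};

// Stand-in objective. In a real pipeline this is where the Beowulf
// cluster plays simulated matches and returns the bot's score.
double fitness(const BotGenome& g) {
    return g.accuracy + 0.5 * g.aggression + 0.25 * g.itemPriority;
}

// Keep the top half each generation; refill with mutated copies.
void evolve(std::vector<BotGenome>& pop, int generations) {
    std::mt19937 rng{42};
    std::normal_distribution<float> mut(0.0f, 0.05f);
    for (int gen = 0; gen < generations; ++gen) {
        std::sort(pop.begin(), pop.end(),
                  [](const BotGenome& a, const BotGenome& b) {
                      return fitness(a) > fitness(b);
                  });
        const std::size_t half = pop.size() / 2;
        for (std::size_t i = half; i < pop.size(); ++i) {
            pop[i] = pop[i - half];
            pop[i].aggression   += mut(rng);
            pop[i].accuracy     += mut(rng);
            pop[i].itemPriority += mut(rng);
        }
    }
}

The interesting (and expensive) part is of course fitness(), which is where the cluster would actually play the game.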
Re: The Major Problem (Score:2)
Re: Beowulf! (Score:2)
> I'm not sure you'd get a lot of improvement from the technique though - bots would tend to play well with bots this way, unless you somehow managed to get a fitness function that reflected playing with humans. You could also invite a lot of human beta testers and evolve them based on survival rates.
You could bootstrap the system by having it play itself. That doesn't always result in the desired "arms race" of continual improvements, but people are looking at ways of inducing arms races.
If you could ever get it up to some basic level of performance (say, about what the typical game AI does now), you could enter a phase of distributed learning where thousands of people who had bought the game downloaded the latest AI, played it a time or two, and uploaded the results. That way no individual would have to play through a lot of games with a stupid AI as it explored options to improve its behavior against humans. Presumably if you downloaded the latest AI every week you would get a more competitive opponent every time.
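The upload/aggregate step could be as dumb as averaging per-tactic results across sessions. A sketch with hypothetical names throughout:

#include <cstddef>
#include <vector>

// One player's uploaded session: how often each bot tactic won.
// Assumes every upload reports the same numTactics entries.
struct SessionStats {
    std::vector<double> tacticWinRate;
};

// Average thousands of sessions into weights for the next weekly AI,
// so it favors tactics that actually beat humans.
std::vector<double> aggregate(const std::vector<SessionStats>& uploads,
                              std::size_t numTactics) {
    std::vector<double> weights(numTactics, 0.0);
    if (uploads.empty()) return weights;
    for (const auto& s : uploads)
        for (std::size_t t = 0; t < numTactics; ++t)
            weights[t] += s.tacticWinRate[t];
    for (auto& w : weights)
        w /= static_cast<double>(uploads.size());
    return weights;
}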
> Some algorithm that noticed patterns in actual gameplay would probably induce cooler behaviour - something in the lines of
Yeah, those are hard problems for AI if you try to generalize the behavior. I wish more AI researchers were working on this kind of stuff instead of the toy problems that so many of them are still on. Several AI researchers have suggested games as the best research domain for achieving "human level" AI.
The Attention Problem (Score:2, Interesting)
The major problem in AI is the Attention Problem - what features should be paid attention to in order to make a decision, and how to ignore the huge amount of irrelevant information (without explicitly examining that information to determine that it is irrelevant).
Game engines often present a very small "world view" (in terms of feature space) to AI agents because for every simulation cycle each agent has to check facts in its world view to guide its action, and the more complicated the world view the more CPU cycles are used by each agent.
For example, an agent might be exposed to the same fact in 3 ways: the first uses a hard-coded definition of what "near" means (which can be precomputed by the engine), the second allows the agent to use its own definition of nearness, while the third allows the agent to decide what distance means (if it has access to map information).
While the 3rd definition is the most flexible, it is also the most computationally expensive, particularly when this computation may be run every simulation cycle.
So, what I was trying to say is that much of the flexibility possible for an AI agent is limited by the feature space it has access to, and this feature space is usually very limited for efficiency purposes.
To improve the behavior of an AI agent, script tweaking may help some, but what is often needed is for the underlying physical engine to expose slightly more information. For example, the above "near" might be split into "sortof-near" and "really-near".
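Here are the three flavors of "near" from the example as code (my sketch; the thresholds are arbitrary):

#include <cmath>

struct Vec2 { float x, y; };

float dist(Vec2 a, Vec2 b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// 1) Engine-defined "near": a flag the engine precomputes once per
//    tick and hands to every agent for free.
bool engineNear(bool precomputedFlag) { return precomputedFlag; }

// 2) Agent-defined "near": the agent applies its own threshold to a
//    distance the engine already supplies.
bool agentNear(float engineDistance, float myThreshold) {
    return engineDistance < myThreshold;
}

// 3) Agent-computed "near": the agent derives distance itself from map
//    positions. Most flexible, most expensive when run every cycle
//    (worse still if "distance" means pathfinding, not a straight line).
bool agentComputedNear(Vec2 me, Vec2 other, float myThreshold) {
    return dist(me, other) < myThreshold;
}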
too close for comfort (Score:5, Funny)
"2% stupid" (Score:2, Funny)
Why not? Don't they want to model a typical geek, or did they find that hurts sales?
Some real info on Game AI (Score:3, Interesting)
http://www.seanet.com/~brucemo/topics/topics.htm [seanet.com]
Here is another one [cs.vu.nl]
Enjoy
AI is not AI (Score:5, Insightful)
Putting in random factors makes things much harder to pin down. Maybe when a character spots you, there will be a 50% "run or attack" decision. If the decision is to run, then you think "Ha, ha, ha, he's running scared!" If the decision is to attack, and he gets you, then you think "Wow, that guy was good." If he attacks and you get him, then you feel like you're doing well.
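That 50% decision is literally a coin flip:

#include <random>

enum class Action { Run, Attack };

// The coin flip described above, taken once when the character
// first spots the player.
Action onSpotPlayer(std::mt19937& rng) {
    std::bernoulli_distribution attack(0.5); // 50% chance to attack
    return attack(rng) ? Action::Attack : Action::Run;
}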
To a great extent AI is psychological. You read into things what you want.
Re:AI is not AI (Score:2)
To a great extent AI is psychological. You read into things what you want.
In your rant, the use of A is extraneous, though I wager no moderator will find my post as insightful as some found yours. Just as you no doubt see moderations to your comment as a sign of I instead of "Ah ha! A Markov Model was used to associate the text of my post with other, similar posts that were highly rated!" Shucks, you probably think the Turing Test is just about the computer's intelligence, too.
A game AI test (Score:5, Insightful)
The bot had some notable weaknesses (it kept getting killed going for the powerup in the center, or while coming up a lift, and never seemed to learn from these mistakes), but did fairly well overall. In the end I won by a substantial, but not overwhelming, margin.
So, I said, the AI had failed the test: given a fair match, on its most difficult settings, it lost. But then I realized I had a lot of fun administering it. Then I realized that the point of an AI isn't to beat the player, but to be fun to play against; whether it wins or loses really doesn't matter.
AI is still (Score:1, Informative)
Anyway, coming to the topic of AI and entertainment: if you have visited the LOTR - TTT site, you'll see an interactive MASSIVE system. Imagine making a few entities interact, waiting for the sequence to render, and then viewing the final movie that has been created....
Best quote from the article. (Score:2)
good book (mentioned in article) (Score:4, Informative)
I'm not sure I'd recommend this book to a novice programmer, but for a moderately experienced programmer who's interested in practical game AI design, this book is well worth a look. The name says it all: this is a book written by the folks in the trenches, passing along their hard-earned wisdom. Very enjoyable.
Now I want to try my own hand at writing some game AI. Maybe I should poke around on sourceforge for games that need AI help. (Assuming I can weed my way past all the projects that have NO CODE AT ALL, which seems to be especially common with the games on sourceforge.)
Invisible AI (Score:2, Insightful)
Good enough for what? For us to stop speaking and caring about it I guess.
When that happens, we'll have 'Invisible AI'* that just works, and game producers can no longer use it as a selling point.
Of course, I guess that won't happen anytime soon, and I can already see hardware manufacturers making AI-accelerator cards, with built-in multi-processors and neural net chips, to fit next to your graphics card. The 'Intel Inside' logo will gain a whole new meaning...
* - Yes, I'm adapting Don Norman's Invisible Computer [jnd.org] term.
Re:Only one problem (Score:2, Interesting)
Bright idea.
Besides, what is a human being besides a complex system of input and output anyway?
Re:Only one problem (Score:3, Interesting)
Re:Only one problem (Score:2)
Bull (Score:1)
Says who (besides you)? Can you back that up, buddy? I think a hell of a lot of AI researchers will have a bone to pick with what you said. And I don't just mean Kurzweil (http://www.kurzweilai.net/), though that's a good start.
> At least not if we continue down the path we've been on, and engineering techniques make that unlikely.
Once again, a vague statement thrown out of nowhere. Care to elaborate? Sounds to me like you're just gushing out your personal bias.
> Artificial sentience (Sci-fi AI) is a design problem, not a speed problem.
With enough speed the brain can be simulated (even though that's not really the goal of AI), because i) the brain is a physical device, ii) physics is computational, and therefore a computer can simulate it. If you simulate the brain fully, then sentience is there (and don't give me crap like the zombie arguments and other nonsense of David Chalmers et al).
> I think it's possible, but not on your PC.
He didn't claim there would be sentient AI on his PC.
PS: the original post didn't make all that much sense either; for instance, where did he get that exact number (2025)?
Re:Bull (Score:1)
Re:Bull (Score:3, Insightful)
Consider this thought experiment: scan a brain on a low level (neurons and their interconnections), put the whole thing into a computer (one that will be available in, say, 40 years, according to some estimates; there are also currently quite good models of neuronal behavior, and I don't mean the simplified ANNs we often hear of), and then just run it. It doesn't matter that you don't understand how it works; it works anyway -- proof of concept. That's not to say it's the most practical way to do AI, but it shows it's possible.
About Gödel -- Gödel's theorem [santafe.edu] is very specific, and says absolutely NOTHING about human intelligence.
Like the article linked above says, it has been much abused (Penrose comes to mind). As for your statement that intelligence is ultimately axiomatic, I don't know what you mean. You say the social sciences would become mathematical sciences: this is in fact not the case, simply because social systems are too large, intricate, and complicated to consider on the fundamental level (i.e., physics). It is a practical issue. It is perfectly clear, however, that theoretically social behavior is based on biology, which reduces to chemistry, which reduces to physics.
> Perhaps we could eventually develop AI to the point where it can simulate lower animal behavior [popsci.com], but the search for machines capable of human levels of thought is ultimately futile.
This comment is no better than the claims of the message I originally replied to.
Indeed, Gödel's theorem merely... (Score:4, Insightful)
If a theorem cannot be proved in one system, you simply make another where it can be.
One of the fascinating things about the human mind is its ability to go *beyond* single logical structures and fully understand an infinite number of incompatible algebras.
The problem with developing AI isn't so much that we don't understand the human mind; it's that we *do* understand it to be something well beyond a simple algebraic computer, which at the moment is all computers are. They are *computational* devices with preprogrammed logic. *That* logic is subject to Gödel.
Your computer is a giant abacus. Nothing more, nothing less. This says nothing about the possibilities of developing machines that are *not* simply a bank of bistable switches.
Nor is there any axiom which states that intelligence must be *human* in form.
That last point is outrageously important to all sorts of fields.
KFG
Re:Indeed, Gödel's theorem merely... (Score:2, Interesting)
Consider the analogous statement you get when you substitute the word "brains" for "computers". And indeed that is a valid substitution:
1) Brains are computational, because they work based on the same fundamental physics, which is computational (Penrose notwithstanding).
2) Brains are preprogrammed no less than computers -- except that evolution did the programming; indeed, we now know (see Pinker et al) that the brain is mostly optimized for special-purpose information processing (i.e., vision, running autonomic body processes, etc.), and abstract logical thought is far, far slower than a planning algorithm or theorem prover running on a general-purpose computer; indeed, it is to a large extent the common sense so specialized to our environment through millions of years of evolution that makes us better at it than computers are (thus the 'situated agent' concept in AI).
Regarding 'understanding' -- there's nothing mysterious about this term -- it is simply a certain level of familiarity with a subject, and being able to answer in some detail 'how' and 'why' questions about it; people like Penrose misguidedly (is that a word?) try to suggest there's more to it than that. But Penrose's crap has been disproved a thousand times over.
Re:Indeed, Gödel's theorem merely... (Score:1)
Tim
Re:Bull (Score:2)
In actuality, Gödel's Theorem deals with the limits of our ability to describe formal systems of logic. That discussion, while interesting in and of itself, is completely tangential to discussions about simulating intelligence.
Re:Bull (Score:2)
Ah yes, the infinite problem of the hand drawing itself. The problem is naturally in concept rather than execution. I may not be able to easily model my own functioning brain all in an instant; however, since your working brain is modelable, I can do that instead.
I can draw an image of my own hand, I can sculpt a 3D model of my own hand, I can even grow cells into a working model of my own hand, and ultimately create a functional replacement for my own hand. I CANNOT, however, separate my hand into component cells and use that to create a new hand.
Remember that all functions can be modeled into another medium if the abstract concepts behind those functions are sound and complete.
I could feasibly model another duplicate body of mine in functional metals, yet in a higher temperature environment (around 800 degrees, where most metals become near liquids), but it would not be easy to make a suitably stable form given the higher ambient energy levels.
You are taking the mindset of "If we were meant to fly then we should have been born with wings."
Which is indeed bullshit, as humans could be given wings provided they had sufficient bioenergy reserves to handle the effort of flapping them, a skeletal frame reinforced to handle the stress of a huge wingspan, a circulatory system with two or three hearts to circulate blood into the wings, larger lungs to feed more oxygen into the body to compensate for the oxygen draw of the wings, and the ability to consume more food to feed the wings' power storage supplies.
The other option is to create removable wings of a biological nature, which are fed from a biological food tank and have some control mechanism for the human to manipulate, or mechanical wings which follow distinct flapping patterns and which humans control with a handpad and control stick.
If it is allowable within the rules of physics (and a mechanical brain is indeed allowable), then the only limit is the effort devoted to creating that device. Biological brains exist; thus mechanical brains can exist, as many properties of brain function are duplicable with existing technology.
Re:Only one problem (Score:1)
Computers are, regardless of how many pipelines you put into the chip, essentially sequential. The brain, however, is massively parallel. It would take a fundamental shift in the way computers are made to allow us to even come close to modeling the way the brain functions.