Neural Net Learns Breakout By Watching It On Screen, Then Beats Humans
KentuckyFC writes "A curious thing about video games is that computers have never been very good at playing them like humans do — by simply looking at a monitor and judging actions accordingly. Sure, they're pretty good if they have direct access to the program itself, but 'hand-to-eye co-ordination' has never been their thing. Now our superiority in this area is coming to an end. A team of AI specialists in London have created a neural network that learns to play games simply by looking at the RGB output from the console. They've tested it successfully on a number of games from the legendary Atari 2600 system from the 1980s. The method is relatively straightforward. To simplify the visual part of the problem, the system down-samples the Atari's 128-colour, 210x160 pixel image to create an 84x84 grayscale version. Then it simply practices repeatedly to learn what to do. That's time-consuming, but fairly simple, since at any instant during a game a player can choose from a finite set of actions that the game allows: move to the left, move to the right, fire, and so on. So the task for any player — human or otherwise — is to choose an action at each point in the game that maximizes the eventual score. The researchers say that after learning Atari classics such as Breakout and Pong, the neural net can then thrash expert human players. However, the neural net still struggles to match average human performance in games such as Seaquest, Q*bert and, most importantly, Space Invaders. So there's hope for us yet... just not for very much longer."
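The two mechanical pieces of the summary — down-sampling the 210x160 RGB frame to an 84x84 grayscale image, and picking one action from the finite set the game allows — can be sketched in a few lines of numpy. This is a minimal illustration, not the researchers' exact pipeline: the crop, rescaling method, and the epsilon-greedy exploration constant are all assumptions here.

```python
import numpy as np

def preprocess(frame):
    """Reduce a 210x160x3 RGB frame to an 84x84 grayscale image.

    A rough sketch of the down-sampling step described above, using a
    luminance-weighted grayscale conversion and nearest-neighbour
    resampling; the exact method the researchers use may differ.
    """
    gray = frame @ np.array([0.299, 0.587, 0.114])
    rows = np.linspace(0, gray.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)]

def choose_action(q_values, epsilon, rng):
    """Pick an action from the finite set the game allows.

    Standard epsilon-greedy selection: with small probability explore
    at random, otherwise take the action currently scored highest.
    """
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))
```

With `epsilon` near 1 early in training the system explores almost blindly — which is exactly the "practices repeatedly" phase the summary describes — and as the scores firm up, exploitation takes over.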
Don't let it watch terminator (Score:1, Insightful)
Next up we have Skynet.
AI (Score:5, Insightful)
For once, something based on proper AI (rather than human-generated heuristics).
However - notice its limitations: where there is a direct correlation between where you need to be and where something else is on the screen (basically a 1:1 relationship, as in Pong), it can cope with moving higher or lower as required.
But when you put it into something that has more than a single thing to "learn" (move left/right, avoid bombs, shoot aliens, choose which aliens to shoot, don't shoot your own base, etc.) then the amount of training required goes up exponentially. And thus we could spend centuries of computer time in order to get something that can do as well as a simple heuristic designed by someone who knows the game (not saying heuristics don't have their place!).
"Trained" devices need training time that grows with some power of the variety of their inputs and how indirectly those inputs correlate with the game arena. And thus, proper AI is really stymied when it comes to learning complex tasks.
But still - this is the sort of thing we should be doing. If it takes an infant two years with the best "computer" in the universe that we know of to learn how to talk, why should we think it will take a machine at even the top end of the supercomputer scale (which can't have as many "connections" as the average human brain) any less time?
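The trial-and-error learning being discussed can be made concrete with a toy tabular Q-learning update — one standard way to "practice repeatedly" toward a maximum eventual score. A network playing from pixels replaces the table with learned function approximation, but the update rule has the same shape. Everything below (state/action sizes, learning rate, discount) is illustrative.

```python
import numpy as np

def q_update(q_table, state, action, reward, next_state,
             alpha=0.1, gamma=0.99):
    """One step of tabular Q-learning.

    Move Q(state, action) toward the observed reward plus the
    discounted value of the best action available in the next state.
    """
    best_next = np.max(q_table[next_state])
    q_table[state, action] += alpha * (
        reward + gamma * best_next - q_table[state, action])
    return q_table

# Usage: a tiny 4-state, 2-action problem, starting from all zeros.
q = np.zeros((4, 2))
q = q_update(q, state=0, action=1, reward=1.0, next_state=1)
```

Note the comment's scaling argument in miniature: the table has (states x actions) entries, and each must be visited repeatedly to converge — which is exactly why adding more things to "learn" blows up the training required, and why pixel-input games need function approximation rather than a table at all.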
Re:Can we get a summary of that excerpt, please? (Score:2, Insightful)
tl;dr == "I hate reading". You should NEVER see a tl;dr at slashdot. NERDS READ.
Re:Can we get a summary of that excerpt, please? (Score:2, Insightful)
Oh, wait, that's a dumb idea, because we'll end up looking like the stupid ones."
That is the conversation you should have had with yourself before you posted.
In the excerpt, one of the characters expresses a begrudging acceptance of the 'gels' because they haven't 'fucked up' — which is not, despite the anecdote preceding that opinion, exclusive to fatalities. The responding party understands this, because he's not a total idiot, and says that he wishes the 'gels' made some kind of mistakes (again, with NO exclusivity to fatalities, as you ridiculously assert in your summation).
Makes me wonder how people like those in these comments ever passed verbal standardized tests. Reading comprehension is negligible, and it seems even actively avoided.