AI Games

Neural Net Learns Breakout By Watching It On Screen, Then Beats Humans

Posted by Soulskill
from the but-can-it-eat-doritos-all-day? dept.
KentuckyFC writes "A curious thing about video games is that computers have never been very good at playing them the way humans do: by simply looking at a monitor and judging actions accordingly. Sure, they're pretty good if they have direct access to the program itself, but 'hand-eye co-ordination' has never been their thing. Now our superiority in this area is coming to an end. A team of AI specialists in London has created a neural network that learns to play games simply by looking at the RGB output from the console. They've tested it successfully on a number of games from the legendary Atari 2600 system from the 1980s. The method is relatively straightforward. To simplify the visual part of the problem, the system down-samples the Atari's 128-colour, 210x160-pixel image to create an 84x84 grayscale version. Then it simply practices repeatedly to learn what to do. That's time-consuming, but fairly simple, since at any instant during a game a player can choose from the finite set of actions that the game allows: move to the left, move to the right, fire and so on. So the task for any player — human or otherwise — is to choose an action at each point in the game that maximizes the eventual score. The researchers say that after learning Atari classics such as Breakout and Pong, the neural net can then thrash expert human players. However, the neural net still struggles to match average human performance in games such as Seaquest, Q*bert and, most importantly, Space Invaders. So there's hope for us yet... just not for very much longer."
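The two pieces the summary describes — down-sampling the frame and picking the highest-scoring action from a finite set — can be sketched roughly as below. This is only an illustrative sketch, not the researchers' code: the function names are made up, the nearest-neighbour resize stands in for whatever resampling they actually used, and the epsilon-greedy rule is the standard way such trial-and-error learners balance exploring random moves against exploiting the best-known one.

```python
import numpy as np

def preprocess(frame: np.ndarray) -> np.ndarray:
    """Down-sample a 210x160 RGB Atari frame to an 84x84 grayscale
    image, as the summary describes (a rough sketch only)."""
    # Collapse the RGB channels to one luminance value per pixel.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # Naive nearest-neighbour resize: sample 84 evenly spaced rows/cols.
    rows = np.linspace(0, gray.shape[0] - 1, 84).astype(int)
    cols = np.linspace(0, gray.shape[1] - 1, 84).astype(int)
    return gray[np.ix_(rows, cols)]

def choose_action(q_values: np.ndarray, epsilon: float, rng) -> int:
    """Pick one of the finite game actions: usually the one with the
    highest estimated eventual score, occasionally a random one so the
    learner keeps exploring during its many practice games."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

# A blank stand-in frame at the Atari 2600's native resolution.
frame = np.zeros((210, 160, 3), dtype=np.uint8)
small = preprocess(frame)
print(small.shape)  # (84, 84)
```

The point of the preprocessing step is purely practical: 84x84 grayscale is roughly a tenth of the data of the raw colour frame, which makes the repeated practice runs far cheaper.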


  • by Anonymous Coward on Friday December 27, 2013 @11:20AM (#45796229)

    "I hope the lifter pilot doesn't get too bored." Jarvis is all chummy again.
    "There is no pilot. It's a smart gel."
    "Really? You don't say." Jarvis frowns. "Those are scary things, those gels. You know one suffocated a bunch of people in London a while back?"
    Yes, Joel's about to say, but Jarvis is back in spew mode. "No shit. It was running the subway system over there, perfect operational record, and then one day it just forgets to crank up the ventilators when it's supposed to. Train slides into station fifteen meters underground, everybody gets out, no air, boom."
    Joel's heard this before. The punchline's got something to do with a broken clock, if he remembers it right.
    "These things teach themselves from experience, right?" Jarvis continues. "So everyone just assumed it had learned to cue the ventilators on something obvious. Body heat, motion, CO2 levels, you know. Turns out instead it was watching a clock on the wall. Train arrival correlated with a predictable subset of patterns on the digital display, so it started the fans whenever it saw one of those patterns."
    "Yeah. That's right." Joel shakes his head. "And vandals had smashed the clock, or something."
    "Hey. You did hear about it."
    "Jarvis, that story's ten years old if it's a day. That was way back when they were starting out with these things. Those gels have been debugged from the molecules up since then."
    "Yeah? What makes you so sure?"
    "Because a gel's been running the lifter for the better part of a year now, and it's had plenty of opportunity to fuck up. It hasn't."
    "So you like these things?"
    "Fuck no," Joel says, thinking about Ray Stericker. Thinking about himself. "I'd like 'em a lot better if they did screw up sometimes, you know?"
    "Well, I don't like 'em or trust 'em. You've got to wonder what they're up to."

  • Re:AI (Score:3, Interesting)

    by Anonymous Coward on Friday December 27, 2013 @11:32AM (#45796337)

    If it takes an infant two years with the best "computer" in the universe that we know of to learn how to talk, why should we think it will take a machine at even the top-end of the supercomputer scale (which can't have as many "connections" as the average human brain) any less?

    Because we're learning languages in the wrong way.

  • Re:AI (Score:4, Interesting)

    by StripedCow (776465) on Friday December 27, 2013 @11:38AM (#45796401)

    If it takes an infant two years with the best "computer" in the universe that we know of to learn how to talk, why should we think it will take a machine at even the top-end of the supercomputer scale (which can't have as many "connections" as the average human brain) any less?

    Because neurons are much slower than transistors?

  • Re:AI (Score:3, Interesting)

    by GTRacer (234395) <gtracer308.yahoo@com> on Friday December 27, 2013 @12:37PM (#45796945) Homepage Journal
    So, Pimsleur or Rosetta?
  • by cascadingstylesheet (140919) on Friday December 27, 2013 @01:46PM (#45797611)

    This neural-net-combined-with-trial-and-error style of algorithm is typically referred to as a "JavaScript Programmer"-type algorithm in recent AI literature. (I'm being completely serious, too, in case you think this is a joke; it isn't.)

    The name derives from the similarity between how these kinds of algorithms work, and how JavaScript programmers tend to work.

    Funny, of course :)

    But, you got me thinking. The JavaScript programmer is generally trying to affect the appearance of stuff on the screen, therefore, he looks at the stuff on the screen, and tries to affect ... the stuff on the screen. So, it makes more sense than it might.

    Our new pong-playing overlords, on the other hand, if they are actually doing something important like remotely fighting wars or trying to save people or something, well, then we don't really know if they are looking at the right input, and it becomes much more important that they, and we, understand exactly how they are coming to their decisions.
