AI Games

Neural Net Learns Breakout By Watching It On Screen, Then Beats Humans 138

Posted by Soulskill
from the but-can-it-eat-doritos-all-day? dept.
KentuckyFC writes "A curious thing about video games is that computers have never been very good at playing them the way humans do: by simply looking at a monitor and judging actions accordingly. Sure, they're pretty good if they have direct access to the program itself, but 'hand-eye coordination' has never been their thing. Now our superiority in this area is coming to an end. A team of AI specialists in London has created a neural network that learns to play games simply by looking at the RGB output from the console. They've tested it successfully on a number of games for the legendary Atari 2600 console from the 1980s. The method is relatively straightforward. To simplify the visual part of the problem, the system down-samples the Atari's 128-colour, 210x160-pixel image to create an 84x84 grayscale version. Then it simply practices repeatedly to learn what to do. That's time-consuming, but fairly simple, since at any instant during a game a player can choose from a finite set of actions that the game allows: move left, move right, fire and so on. So the task for any player, human or otherwise, is to choose an action at each point in the game that maximizes the eventual score. The researchers say that after learning Atari classics such as Breakout and Pong, the neural net can thrash expert human players. However, it still struggles to match average human performance in games such as Seaquest, Q*bert and, most importantly, Space Invaders. So there's hope for us yet... just not for very much longer."
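The loop the summary describes (down-sample each frame, then pick whichever action looks best for the eventual score) can be sketched roughly. This is a minimal illustration, not the researchers' code: the luminance weights, the naive nearest-neighbour resize, and the epsilon-greedy selection rule are all assumptions made here for clarity.

```python
import numpy as np

def preprocess(frame):
    # Convert a 210x160 RGB Atari frame into the 84x84 grayscale input
    # the summary describes. The luminance weights and nearest-neighbour
    # resize are assumptions, not the paper's exact pipeline.
    gray = frame @ np.array([0.299, 0.587, 0.114])   # RGB -> luminance
    rows = np.arange(84) * gray.shape[0] // 84       # sample 84 rows
    cols = np.arange(84) * gray.shape[1] // 84       # sample 84 columns
    return gray[np.ix_(rows, cols)]

def choose_action(q_values, epsilon=0.05, rng=None):
    # Epsilon-greedy choice over the game's finite action set: usually
    # take the action with the highest estimated long-term score,
    # occasionally explore at random.
    rng = rng or np.random.default_rng(0)
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))
    return int(np.argmax(q_values))

frame = np.zeros((210, 160, 3), dtype=np.uint8)
print(preprocess(frame).shape)                       # (84, 84)
print(choose_action([0.1, 0.9, 0.2], epsilon=0.0))   # 1
```

In the real system the estimated action values would come from the trained network itself; here `q_values` is just a plain list standing in for that output.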

Comments Filter:
  • by themightythor (673485) on Friday December 27, 2013 @11:56AM (#45796547)
    In this case, the "gels" were employing a heuristic to decide when to do something (here, turning on the air ventilation system). It was assumed that the heuristic was meaningfully related to the action (i.e. something to do with the recipients of the ventilation), but it was actually something arbitrary (the way the clock looked). So unless you have insight into what the heuristic is, you won't know when it's going to produce the expected behavior and when it isn't, even if it has seemingly behaved as expected for a long time.
  • Re:Tetris (Score:5, Informative)

    by Sigma 7 (266129) on Friday December 27, 2013 @01:18PM (#45797335)

    Tetris is a solved problem if you're going for survival (assuming you don't get an extremely unlucky piece selection). Since the AI has access to the current piece and the next piece, and can reason probabilistically about the pieces after that, it can basically last forever.

    I myself never made it past level 10, and I've never seen anyone make it past level 20.

    Tetris: The Grand Master: http://www.youtube.com/watch?v=jwC544Z37qo [youtube.com] - fast forward to 3:00 to see the first major speedup, 4:45 for the final speedup, and 5:01 for invisible pieces.

    That, and a maxed-out score of 999999 was achieved on a real NES in 3 minutes 11 seconds: http://www.youtube.com/watch?v=bR0BKCHJ48s [youtube.com]
