
Nvidia's AI Recreates 'Pac-Man' For 40th Anniversary (hypebeast.com) 32

Nvidia recently taught its AI system to recreate the game Pac-Man just by watching it being played. Hypebeast reports: No code or pre-rendered images were given to the software to base the recreation on. The AI model was simply fed visual data of the game being played, alongside controller inputs. From there, the AI recreated what it saw frame by frame, resulting in a playable version of Bandai Namco's most recognizable title. Although it's not a perfect recreation of the title and all its assets, all the mechanics and gameplay goals are the same. Nvidia even believes this is how AI will be applied to game creation in the future. [Rev Lebaredian, Nvidia's vice president of simulation technology] notes the experiment was done in collaboration with Bandai Namco as it celebrates the 40th anniversary of the classic arcade game.

The artificial intelligence program is called GameGAN, with GAN standing for "generative adversarial network," a common architecture in machine learning. A GAN pairs two networks: a generator that tries to produce output resembling the training data, and a discriminator that compares that output against the real thing and rejects what doesn't match, forcing the generator to improve and try again. Although AI programs have generated virtual gaming spaces before, GameGAN adds a "memory module" that lets the program keep an internal map of the digital space it's trying to recreate, leading to a more consistent copy.
GameGAN was trained on over 50,000 episodes of play, and the agent generating that training data was so good it almost never died, the company says. Nvidia will release the recreated game online in the near future.
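
For readers who want to see the idea in code, here is a minimal sketch of the generator-versus-discriminator loop described above (a purely illustrative PyTorch toy, not Nvidia's GameGAN code; the network sizes, stand-in "real" data, and hyperparameters are placeholder assumptions, and GameGAN layers a memory module and much more on top of this basic loop):

    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 2

    # Generator: maps random noise to candidate "data".
    generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
    # Discriminator: scores how "real" a sample looks.
    discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for _ in range(1000):
        real = torch.randn(64, data_dim) + 3.0          # stand-in "real" training data
        fake = generator(torch.randn(64, latent_dim))   # generator's attempt

        # Discriminator step: learn to accept real samples and reject generated ones.
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake.detach()), torch.zeros(64, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: learn to produce samples the discriminator accepts as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
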
This discussion has been archived. No new comments can be posted.

  • This is cool as hell, but the article has a really interesting bit that I think shows some serious limitations of the approach:

    Sanja Fidler, director of Nvidia’s Toronto research lab, states that GameGAN was trained on over 50,000 episodes in order to recreate Pac-Man, but the AI agent was so good at the game that it almost never died. “That made it hard for the AI trying to recreate the game to learn the concept of dying,” said Fidler.

    It seems like at best it can create an okay copy of what it sees. It's entirely possible that some new and interesting gameplay mechanics might arise from its inability to get everything perfect, but those would ultimately be accidents, changes the AI would fail to recognize as fun or exciting and would end up removing.

    • This is still very experimental tech; give it time. Imagine if you fed it all the data from Fallout, GTA, and World of Warcraft. In theory, you could have a whole-world game (like WoW), of different sizes, realistically designed (like GTA), in a futuristic setting (like Fallout), with quests and dungeons modeled on all three (the dungeons and quests that real players online mentioned were their favorites).
      • All of those games were great because they provided some novel experiences to their players in a variety of ways. None of them were completely new, but they were more polished iterations. This AI is incapable of capturing the very thing that made those games great, which is precisely "something I haven't quite seen before". I would imagine companies like EA or Activision would love it though since they seem content to shit out more of the same on a yearly basis. Maybe they'll even find a way to overwork the
        • The issue here is, as you said, "this AI". What about tomorrow's AI? It's like looking at the Wolfenstein engine and declaring that, while it's nice, it could never be used to make a tower with multiple floors. And while the Wolfenstein engine couldn't, it led the way to engines that could. The same can happen with this AI.
          • Re: (Score:3, Interesting)

            by narcc ( 412956 )

            You're already giving this "AI" way too much credit. What you seem to imagine is happening isn't what's happening.

            Tomorrow's AI is going to look a lot like today's AI, which looks a lot like AI 10 years ago, which looks a lot like...

            • Every "AI" story on Slashdot depends on the ELIZA effect. [wikipedia.org]

            • Re:Limitations (Score:4, Insightful)

              by serviscope_minor ( 664417 ) on Saturday May 23, 2020 @06:53AM (#60093922) Journal

              OK, before the angry crew jumps in with the whole "ItS nOt StRoNg Ai So ItS NoT AI" thing: whatever, I don't care. It's replacing something that previously required intelligence to do with artifice, hence AI. There is no requirement for it to be general or to clear some arbitrary bar, especially as far as the popular press goes. Everyone knows it's not a general human replacement.

              You're already giving this "AI" way too much credit. What you seem to imagine is happening isn't what's happening.

              Tomorrow's AI is going to look a lot like today's AI, which looks a lot like AI 10 years ago, which looks a lot like...

              Well, no, not really. I mean a bit, but only inasmuch as any other advancing field. Every year there are papers published. 80% of them are bad (actually I think we're in super-Sturgeon territory here with deep learning, and it's more like 99%), but a few are good. Even fewer are both good and useful. And those stick, so every year the field advances a little bit but looks almost exactly the same as it did the year before and the year before that.

              The cumulative effect, however, is very different. If you'd given an RTX 2080 to someone a decade ago, they wouldn't have had the tools or knowledge to do with it what you can do with it now. The field of deep learning was still very much in its infancy. The tools were all hand-written per research group. Performance was mediocre even if you could get the buggers to converge (i.e. unless your name started with Yann and ended in LeCun), none of the knowledge of how to design effective networks existed, the good optimizers didn't exist, and there wasn't even lots of great data.

              I remember I was in a group that briefly did work on CNNs in 2008-ish. There was one guy who was particularly keen on them because they could be implemented to be very fast on FPGAs. So they looked promising and interesting for doing simple stuff on huge quantities of image data. We tried them, and I remember the one guy complaining about convergence. Funnily enough, another person tried something that looked quite a lot more modern too, but we all got hung up on the lack of differentiability of some of the layers and investigated the rather poor gradient-free optimization techniques available. Turns out if you have something like ReLU, which isn't differentiable everywhere, it just doesn't matter with the right network and training designs. But we didn't know that, and I think neither did anyone else then. So we abandoned CNNs because we didn't know how to get them to work.
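
              To make the ReLU point concrete, here's a minimal sketch (illustrative PyTorch only, an assumption about modern tooling rather than anything from the experiments described above): the framework simply picks a subgradient at the kink and training carries on.

                import torch

                # ReLU isn't differentiable at 0, but autograd just uses a
                # subgradient there (PyTorch picks 0), so no special handling
                # is needed during training.
                x = torch.tensor([-1.0, 0.0, 2.0], requires_grad=True)
                torch.relu(x).sum().backward()
                print(x.grad)  # tensor([0., 0., 1.])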

              It's interesting. The ideas have been around a long time, and many are obvious, but without the field in the right state you can't make use of them, because you're missing one or two crucial non-obvious ideas and a whole pile of non-obvious general know-how, not to mention good tooling and other miscellanea.

              So, the field is advancing. If you looked at the results in many areas of computer vision a decade ago, the results today are far, far beyond them. Take, for example, semantic segmentation. This is from slightly over a decade ago, but I remember seeing this paper and being blown away by the results. I (and many others) were astounded:

              https://jamie.shotton.org/work... [shotton.org]

              It's an amazingly impressive (and now completely obsolete) piece of work; it makes cunning use of a variety of absolutely state-of-the-art techniques, combined in a new way with a few interesting twists. Now (13 years later), you can easily get results that blow it out of the water (I wouldn't be surprised if there's a PyTorch tutorial example out there that utterly destroys it). There are now social media apps which offer realtime semantic segmentation with high-resolution boundaries on a kinda crappy Android phone. A top-end 2008 PC would have had a quad-core i7 running at 3GHz; a kinda crappy Android phone (Galaxy A20s) has 8 in-order cores running at 1.8GHz.

              So 10 years buys you vastly better performance in terms of raw segmentation qu

              • by Junta ( 36770 )

                The issue is that nothing about this particular effort suggests something that's useful for game creation, and it doesn't really have a path to get there.

                Before this thing can begin, it has to observe an already existing implementation and then construct a knockoff. There's no sign that this approach could synthesize a new experience.

                Sure, you can speculate on what the future holds, but this particular example does nothing to support or discourage such ambitious speculation.

                • The issue is that nothing about this particular effort suggests something that's useful for game creation, and it doesn't really have a path to get there.

                  Before this thing can begin, it has to observe an already existing implementation and then construct a knockoff. There's no sign that this approach could synthesize a new experience.

                  Yes, this is a cute trick rather than something useful. Then again, most academic papers are cute tricks rather than something useful, but they push things on very slightly, and after a while

                  • by narcc ( 412956 )

                    You're still giving this thing too much credit. You seem to be under the mistaken impression that they fed this thing some video and it pooped out a game.

                    The article is very light on details, because the authors are trying to get you to assume certain things without explicitly stating them, to make you think there's a lot more to this project than there really is. I suspect that this is just like the zillion other overblown AI claims and it'll become much less impressive when we find out how much "help" w

                    • You're still giving this thing too much credit. You seem to be under the mistaken impression that they fed this thing some video and it pooped out a game.

                      My day job is working with deep learning, so... no.

                      The article is very light on details, because the authors are trying to get you to assume certain things without explicitly stating them, to make you think there's a lot more to this project than there really is. I suspect that this is just like the zillion other overblown AI claims and it'll become much l

                    • by narcc ( 412956 )

                      Your optimism is confusing. You claim to understand the reality of things, but you still harbor some bizarre beliefs about the state of the art and what is even thought possible.

                      This strange faith of yours is seriously misplaced.

                    • You're funny. You provide no argument and no evidence, yet blither on about how me not believing your ill-informed opinions is an article of "faith".

                    • by tlhIngan ( 30335 )

                      That tech turned into AlphaGo, the first AI to beat humans at Go, a game that was thought to be much too hard for the time being.

                      No, it's not hard. Go, like chess, is a simple game. Most games humans create are simple. By simple, I mean there is a solution to the game. You can solve Tic-Tac-Toe trivially in about 5 minutes. Chess and Go are more complex only because the potential branches of moves grow exponentially very quickly, Go more so than chess. This makes it really hard to compute the solution
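
                      To make "solving" concrete, here's a minimal sketch (purely illustrative; a plain exhaustive minimax over Tic-Tac-Toe's full game tree, nothing to do with GameGAN or AlphaGo). Chess and Go follow the same principle, just with astronomically larger trees.

                        # Exhaustive minimax: "solving" Tic-Tac-Toe by searching every line of play.
                        def winner(b):
                            lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
                                     (0, 3, 6), (1, 4, 7), (2, 5, 8),
                                     (0, 4, 8), (2, 4, 6)]
                            for i, j, k in lines:
                                if b[i] != ' ' and b[i] == b[j] == b[k]:
                                    return b[i]
                            return None

                        def minimax(board, player):
                            w = winner(board)
                            if w == 'X':
                                return 1       # the maximizer (X) has won
                            if w == 'O':
                                return -1      # the minimizer (O) has won
                            if ' ' not in board:
                                return 0       # draw
                            scores = [minimax(board[:i] + player + board[i + 1:],
                                              'O' if player == 'X' else 'X')
                                      for i, c in enumerate(board) if c == ' ']
                            return max(scores) if player == 'X' else min(scores)

                        print(minimax(' ' * 9, 'X'))  # 0 -- perfect play from both sides is a draw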

            • No, I think you're not giving it enough credit, and you seem to assume that what it can do today must somehow be the limit of what it will ever be able to do. This GameGAN is based on BigGAN, and you should look at how that has improved at making images over the years, from garbled nightmares into photorealistic images. Like BigGAN, it can and will improve
    • This is cool as hell, but the article has a really interesting bit that I think shows some serious limitations of the approach:

      It seems like at best it can create an okay copy of what it sees.

      i.e., to get the AI to write a game for you, you first need to program the game, then let the AI watch it being played.

    • You make a good point. AIs currently need a lot of input before they can create something useful. This is not unlike humans. A graphics artist can create wonderful drawings and animations seemingly from nothing, but it took the artist years of studying and practice to become good at it. So how long does an AI need to create something really new and wonderful that isn't immediately identified as a copy or a mash-up of existing content? It will have to learn about everything and it will have to learn from the

    • by Ranbot ( 2648297 )

      I see this being a goldmine for making knock-offs of simple mobile games... Flappy Bird, FarmVille, Fruit Ninja, runners, Candy Crush, tower defense, etc. Use AI to get a knock-off 90% complete, polish it, update images and menus... then dump it on the market for free with ads and/or in-game purchases. Do that enough times and the sheer quantity will make money, just like the shitty mobile game developers already do, with less programming.

  • It looks like the video shows a little bit of the Ms. Pac-Man intermission act.
  • by Anonymous Coward on Friday May 22, 2020 @09:58PM (#60093158)

    The artificial intelligence program is called GameGAN, with GAN standing for "generative adversarial network"...

    A.I. won't tell the difference between video games and the real world. Once this gets into military hardware, we're fucked.

    Do you want Skynet? Because this is how you get Skynet.

    • A.I. won't tell the difference between video games and the real world.

      You can't tell the difference between Terminator movies and the real world.

      So, according to you, once you join the army, we're all fucked.

  • The AI will duplicate Atari Combat! and Pong!
  • this game recreation engine, to - pornhub?

  • Does it have the kill screen?

  • Is the implementation a streamlined gameplay loop or is it full of situational clauses?

  • > NVIDIA even believes this is how AI will be applied to game creation in the future.

    So in the future we'll have to write the game, let the AI spend hundreds of hours watching people play it, then it will crap out a shittier version for sale to the public?

    That just sounds like EA with extra steps.

  • Comment removed based on user account deletion
  • This is how the aliens built the NSEA-Protector. All from watching historical documents.

  • The real question is how much of the new code is copyrighted? The lawyers must be salivating; something like this could really take the next Google/Java API battle up a level.

"Look! There! Evil!.. pure and simple, total evil from the Eighth Dimension!" -- Buckaroo Banzai

Working...