Nvidia's AI Recreates 'Pac-Man' For 40th Anniversary (hypebeast.com) 32
Nvidia recently taught its AI system to recreate the game Pac-Man just by watching it being played. Hypebeast reports: No coding or pre-rendered images were used for the software to base the recreation on. The AI model was simply fed visual data of the game being played alongside controller inputs. From there, the AI recreated what it saw frame by frame, resulting in a playable version of Bandai Namco's most recognizable title. Although it's not a perfect recreation of the title and all its assets, all the mechanics and gameplay goals are the same. NVIDIA even believes this is how AI will be applied to game creation in the future. [Rev Lebaredian, Nvidia's vice president of simulation technology] notes the experiment was done in collaboration with Bandai Namco as it celebrates the 40th anniversary of the classic arcade game.
The artificial intelligence program is called GameGAN, with GAN standing for "generative adversarial network," which is a common architecture used in machine learning. GAN works by attempting to replicate input data while also comparing its work to the original source. If the two don't match, the data is rejected and the program looks for improvements and tries again. Although AI programs have generated virtual gaming spaces before, GameGAN is able to use a "memory module" that allows the program to store an internal map of the digital space it's trying to recreate, leading to a more consistent copy. The AI was trained on over 50,000 episodes and almost never died, the company says. Nvidia will be releasing the recreated game online in the near future.
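The adversarial loop described above can be sketched in a few lines. This is a toy illustration, not GameGAN itself: a one-parameter "generator" learns to mimic samples from a made-up 1-D Gaussian target, while a logistic "discriminator" tries to tell real samples from generated ones. All names, targets, and learning rates here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))

a, b = 1.0, 0.0          # generator: g(z) = a*z + b
w, c = 0.1, 0.0          # discriminator: d(x) = sigmoid(w*x + c)
lr = 0.01

for step in range(5000):
    x_real = rng.normal(3.0, 1.0)          # sample from the "real" source
    z = rng.normal()
    x_fake = a * z + b                     # generator's attempt

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)  # gradient ascent
    c += lr * ((1 - d_real) - d_fake)

    # Generator step: push d(fake) toward 1 (non-saturating loss).
    d_fake = sigmoid(w * x_fake + c)
    gx = (1 - d_fake) * w                  # d(log d)/dx at x_fake
    a += lr * gx * z
    b += lr * gx
```

After training, the generator's offset `b` should drift toward the data mean (3.0); the exact value depends on the run. GameGAN adds a memory module and conditions on controller inputs, which this sketch omits entirely.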
Limitations (Score:2)
Sanja Fidler, director of Nvidia’s Toronto research lab, states that GameGAN was trained on over 50,000 episodes in order to recreate Pac-Man, but the AI agent playing the game was so good that it almost never died. “That made it hard for the AI trying to recreate the game to learn the concept of dying,” said Fidler.
It seems like, at best, it can create an okay copy of what it sees. It's entirely possible that some new and interesting gameplay mechanics might arise from the inability to get everything perfect, but those would be accidents, ones the AI would fail to recognize as fun or exciting changes and would ultimately end up removing.
Re: (Score:2)
Re: (Score:2)
Re: (Score:2)
Re: (Score:3, Interesting)
You're already giving this "AI" way too much credit. What you seem to imagine is happening isn't what's happening.
Tomorrow's AI is going to look a lot like today's AI, which looks a lot like AI 10 years ago, which looks a lot like...
Re: (Score:2)
Every "AI" story on Slashdot depends on the ELIZA effect. [wikipedia.org]
Re:Limitations (Score:4, Insightful)
OK, before the angry crew jumps on with the whole "ItS nOt StRoNg Ai So ItS NoT AI" thing: whatever, I don't care. It's replacing something that previously required intelligence to do with artifice, hence AI. There is no requirement for it to be general or to clear some arbitrary bar, especially as far as the popular press goes. Everyone knows it's not a general human replacement.
You're already giving this "AI" way too much credit. What you seem to imagine is happening isn't what's happening.
Tomorrow's AI is going to look a lot like today's AI, which looks a lot like AI 10 years ago, which looks a lot like...
Well, no not really. I mean a bit, but only in as much as any other advancing field. Every year there's papers published. 80% of them are bad (actually I think we're in super-Sturgeon territory here with deep learning, and it's more like 99%), but a few are good. Even fewer are both good and useful. And they stick, so every year the field advances a little bit but looks almost exactly the same as it did the last year and the year before.
The cumulative effect, however, is very different. If you'd given an RTX 2080 to someone a decade ago, they wouldn't have had the tools or knowledge to do with it what you can do with it now. The field of deep learning was still very much in its infancy. The tools were all hand-written per research group. Performance was mediocre even if you could get the buggers to converge at all (unless your name started with Yann and ended in LeCun), none of the knowledge of how to design effective networks existed, the good optimizers didn't exist, and there wasn't even much great data.
I remember I was in a group that did work briefly on CNNs in 2008ish. There was one guy who was particularly keen on them because they could be implemented to be very fast on FPGAs. So they looked promising and interesting for doing simple stuff on huge quantities of image data. We tried them and I remember the one guy complaining about convergence. Funnily enough another person tried something that looked quite a lot more modern, too, but we all got hung up on the lack of differentiability of some of the layers and investigated the rather poor gradient free optimization techniques available. Turns out if you have something like ReLU which isn't differentiable everywhere, it just doesn't matter with the right network and training designs. But we didn't know that and I think neither did anyone else then. So, we abandoned CNNs because we didn't know how to get them to work.
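The point about ReLU can be shown in a few lines: ReLU isn't differentiable at zero, but in practice you just pick a subgradient (here relu'(0) = 0) and ordinary gradient descent works fine. A toy setup, with every detail invented for illustration: fit y = |x| with two ReLU units, training both layers by hand-written backprop.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 64)
y = np.abs(x)                     # target function

relu = lambda s: np.maximum(s, 0.0)
drelu = lambda s: (s > 0.0).astype(float)   # subgradient choice at s = 0

w = np.array([1.0, -1.0])   # hidden weights, one unit per sign of x
v = np.array([0.5, 0.5])    # output weights
lr = 0.1

def mse():
    h = relu(np.outer(x, w))          # (64, 2) hidden activations
    return np.mean((h @ v - y) ** 2)

loss_before = mse()
for _ in range(2000):
    s = np.outer(x, w)                # pre-activations
    h = relu(s)
    err = 2.0 * (h @ v - y) / len(x)  # dL/d(prediction)
    v -= lr * (h.T @ err)             # output-layer gradient
    w -= lr * (err[:, None] * v * drelu(s) * x[:, None]).sum(axis=0)
loss_after = mse()
```

Despite the kink at zero, the loss drops steadily; the non-differentiable point just doesn't matter with this training setup, which is the bit we didn't know back then.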
It's interesting. The ideas have been around a long time, and many are obvious, but without the area in the right state, you can't make use of them because you're missing one or two crucial non obvious ideas and a whole pile of non obvious general knowhow not to mention good tooling and other miscellanea.
So, the field is advancing. If you looked at the results in many areas of computer vision a decade ago, the results today are far, far ahead. Take for example semantic segmentation. This is slightly over a decade ago, but I remember seeing this paper and being blown away by the results. I (and many others) was astounded:
https://jamie.shotton.org/work... [shotton.org]
It's an amazingly impressive (and now completely obsolete) piece of work; it makes cunning use of a variety of absolutely state-of-the-art techniques, combined in a new way with a few interesting twists. The implementation must have been a serious undertaking. Now (13 years later), you can get results that blow it out of the water easily (I wouldn't be surprised if there's a PyTorch tutorial example out there that utterly destroys it). There are now social media apps which offer realtime semantic segmentation with high-resolution boundaries on a kind of crappy Android phone. A top-end 2008 PC would have a quad-core i7 running at 3GHz; a kind of crappy Android phone (Galaxy A20s) has 8 in-order cores running at 1.8GHz.
So 10 years buys you vastly better performance in terms of raw segmentation quality.
Re: (Score:2)
The issue is that nothing about this particular effort suggests something that's useful for game creation, and it doesn't really have a path to get there.
Before this thing can begin, it has to observe an already existing implementation and then construct a knockoff. There's no sign that this approach could synthesize a new experience.
Sure, you can speculate on what the future holds, but this particular example does nothing to support or discourage such ambitious speculation.
Re: (Score:2)
The issue is that nothing about this particular effort suggests something that's useful for game creation, and it doesn't really have a path to get there.
Before this thing can begin, it has to observe an already existing implementation and then construct a knockoff. There's no sign that this approach could synthesize a new experience.
Yes, this is a cute trick rather than something useful; then again, most academic papers are cute tricks rather than something useful, but they push the field on very slightly, and after a while it adds up.
Re: (Score:2)
You're still giving this thing too much credit. You seem to be under the mistaken impression that they fed this thing some video and it pooped out a game.
The article is very light on details, because the authors are trying to get you to assume certain things without explicitly stating them, to make you think there's a lot more to this project than there really is. I suspect that this is just like the zillion other overblown AI claims and it'll become much less impressive when we find out how much "help" was involved.
Re: (Score:2)
You're still giving this thing too much credit. You seem to be under the mistaken impression that they fed this thing some video and it pooped out a game.
My day job is working with deep learning, so... no.
The article is very light on details, because the authors are trying to get you to assume certain things without explicitly stating them, to make you think there's a lot more to this project than there really is. I suspect that this is just like the zillion other overblown AI claims and it'll become much less impressive.
Re: (Score:2)
Your optimism is confusing. You claim to understand the reality of things, but you still harbor some bizarre beliefs about the state of the art and what is even thought possible.
This strange faith of yours is seriously misplaced.
Re: (Score:2)
You're funny. You provide no argument and no evidence, yet blather on about how me not believing your ill-informed opinions is an article of "faith".
Re: (Score:2)
No, it's not hard. Go, like chess, is a simple game. Most games humans create are simple. By simple, I mean there is a solution to the game. You can solve Tic-Tac-Toe trivially in about 5 minutes. Chess and Go are more complex only because the tree of possible moves grows exponentially very quickly, Go more so than chess. That makes it really hard to compute the solution.
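The "solved trivially" claim above can be shown directly: a plain minimax search over the full Tic-Tac-Toe tree. This is a generic sketch (board encoding and helper names are my own, not from any of the work discussed here); the well-known result is that perfect play is a draw.

```python
from functools import lru_cache

# A board is a tuple of 9 cells: 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Outcome under perfect play: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    moves = [i for i, c in enumerate(board) if c == ' ']
    scores = [value(board[:i] + (player,) + board[i + 1:],
                    'O' if player == 'X' else 'X') for i in moves]
    return max(scores) if player == 'X' else min(scores)

empty = (' ',) * 9
print(value(empty, 'X'))   # -> 0: perfect play from the empty board is a draw
```

The whole tree has only a few hundred thousand games, so this runs in well under a second; chess and Go are the same kind of object, just with trees far too large to enumerate.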
Re: Limitations (Score:2)
Re: (Score:2)
This is cool as hell, but the article has a really interesting bit that I think shows some serious limitations of the approach:
It seems like at best it can create an okay copy of what it sees.
ie. To get the AI to write a game for you, you first need to program the game then let the AI watch it being played.
Re: (Score:2)
You make a good point. AIs currently need a lot of input before they can create something useful. This is not unlike humans. A graphics artist can create wonderful drawings and animations seemingly from nothing, but it took the artist years of studying and practise to become good at it. So how long does an AI need to create something really new and wonderful that isn't immediately identified as a copy or a mash-up of existing content? It will have to learn about everything and it will have to learn from the
Re: (Score:2)
I see this being a goldmine for making knock-offs of simple mobile games... Flappy Bird, FarmVille, Fruit Ninja, runners, Candy Crush, tower defense, etc. Use AI to get a knock-off 90% complete, polish, update images and menus... then dump it on the market free with ads and/or in-game purchases. Do that enough times and the sheer quantity will make money, just like the shitty mobile game developers do already, with less programming.
Ms Pac Man (Score:1)
Do you want Skynet? (Score:3, Insightful)
A.I. won't tell the difference between videogames and the real world. Once this gets into military hardware, we're fucked.
Do you want Skynet? Because this is how you get Skynet.
Re: (Score:2)
A.I. won't tell the difference between videogames and the real world.
You can't tell the difference between Terminator movies and the real world.
So according to you, once you join the army, we're all fucked.
For our next trick! (Score:2)
What could we get it to generate applying (Score:2)
this game-recreation engine to... Pornhub?
Does it have the kill screen (Score:2)
Does it have the kill screen
How is the implementation? (Score:2)
Is the implementation a streamlined gameplay loop or is it full of situational clauses?
"In the future" (Score:2)
> NVIDIA even believes this is how AI will be applied to game creation in the future.
So in the future we'll have to write the game, let the AI spend hundreds of hours watching people play it, then it will crap out a shittier version for sale to the public?
That just sounds like EA with extra steps.
Re: (Score:2)
like in "Galaxy Quest" (Score:1)
This is how the aliens built the NSEA-Protector. All from watching historical documents.
I would be your Mistress (Score:1)
Copywrong (Score:2)