GTA 5 Graphics Are Now Being Boosted By Advanced AI At Intel (gizmodo.com) 44
Researchers at Intel Labs have applied machine learning techniques to GTA 5 to make it look incredibly realistic. Gizmodo reports: [I]nstead of training a neural network on famous masterpieces, the researchers at Intel Labs relied on the Cityscapes Dataset, a collection of images of a German city's urban center captured by a car's built-in camera, for training. When a different artistic style is applied to footage using machine learning techniques, the results are often temporally unstable, which means that frame by frame there are weird artifacts jumping around, appearing and reappearing, that diminish how real the results look. With this new approach, the rendered effects exhibit none of those telltale artifacts, because in addition to processing the footage rendered by Grand Theft Auto V's game engine, the neural network also uses other rendered data the game's engine has access to, like the depth of objects in a scene, and information about how the lighting is being processed and rendered.
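The key idea described above is that the network consumes more than the final RGB frame: it also sees auxiliary buffers (depth, lighting terms) that the game engine already computes. A minimal sketch of that input construction, using numpy and entirely hypothetical shapes and channel choices (this is not the researchers' actual pipeline):

```python
import numpy as np

# Tiny frame for illustration only.
H, W = 4, 6

rgb = np.random.rand(H, W, 3)       # the rendered color frame
depth = np.random.rand(H, W, 1)     # per-pixel depth from the G-buffer
lighting = np.random.rand(H, W, 2)  # e.g. assumed irradiance/specular terms

# Instead of enhancing RGB alone, the network sees one multi-channel
# image: color plus the engine's intermediate rendering data.
network_input = np.concatenate([rgb, depth, lighting], axis=-1)

print(network_input.shape)  # (4, 6, 6): 3 color + 1 depth + 2 lighting channels
```

Giving the network this per-pixel scene information (rather than pixels alone) is what the summary credits for the improved temporal stability.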
That's a gross simplification -- you can read a more in-depth explanation of the research here -- but the results are remarkably photorealistic. The surface of the road is smoothed out, highlights on vehicles look more pronounced, and the surrounding hills in several clips look more lush and alive with vegetation. What's even more impressive is that the researchers think, with the right hardware and further optimization, the gameplay footage could be enhanced by their convolutional network at "interactive rates" -- another way to say in real-time -- when baked into a video game's rendering engine.
Re: (Score:2)
Re: (Score:2)
Dirty roads (Score:1, Interesting)
Re: (Score:2)
Strange the video shows so much dirt and general brown on city roads. Strange the real video shows no dirt. Not sure how this is realistic or comparable.
That's not dirt on GTA's roads; the fact that you thought it was shows how unrealistic it is. They were going for a cracked-road look, but drive around a city at 35 mph and you'll see that if the road texture were that in-your-face visible, you'd very likely step on the brake and fear for your tires.
That's exactly what they were going for. The textures in GTA are hyper emphasised to the point of unrealism. You can see a similar effect on the tree trunks which in GTA 5 look like just brushing up against them would
Re: (Score:1)
I called it dirt because that's what it looks like. I'm not discussing their goals, simply stating that the roads are not realistic. Comparing a brown road against a gray road is not anywhere near the same.
Come on.
Re: (Score:3)
It seems that they trained their network on very clean roads. Also the video is limited to 720p and what looks like about 12 fps on YouTube so I'm guessing that if it was available in higher quality it might look less realistic.
It's an interesting trick but maybe not that useful. The output isn't really "game made photorealistic", it's "video similar to game but with very different artistic choices, rendered from recorded footage later".
Re: (Score:3)
They did manage to make it look more like what a dashcam would look like. Unfortunately, dashcams are pretty crappy compared to what the human eye sees. The synthetic looking but sharp trees turn into realistic, but blurry trees, for a huge example.
It's interesting, but they'd need to target a better dataset than video captured by mediocre cameras before I'd think to prefer it over the synthetic game in any scenario. However generally for a game I'd be going for more interesting, unrealistic aesthetics to g
Re: (Score:2)
Indeed. Note that they stuck to driving around from an in-car perspective. I suspect their training set may not have prepared them particularly well to deal with anything after the character exits the vehicle, drives in third-person, or perhaps even when they begin driving erratically like everyone in the game actually does.
Re: (Score:2)
You mean like this guy [youtube.com]?
Re: (Score:2)
The AI transforms the broken-down pseudo-California roads of GTA V into pristine, grey German roads, and the dried-out, brown pseudo-California landscape into German forests, so I'm not sure "realism" applies. This is more like a California -> Germany landscape converter.
Hot Coffee! (Score:2)
Potential for computers stuck on integrated? (Score:3)
Re: (Score:2)
It took a slam of the brakes of Moore’s law to get more optimization to be done.
No it didn't. The video game industry hasn't relied on brute force to improve graphics ... ever. There has been a never-ending stream of ever more impressive optimisations basically since the first 2D side-scrollers came out with CGA graphics and someone thought... "hmmm, I could double the number of colours this game can display if I simply switch the CGA palette between scenes!"
Re: (Score:2)
> The video game industry hasn't relied on brute force to improve graphics ... ever.
Uhmm are we completely forgetting history???
* Doom -- technically playable on 386 but requires 486 for a "smoothish" 35 FPS
* Strike Commander -- Yeah, good luck playing this on anything less than a 486DX 66 MHz [fabiensanglard.net] due to Gouraud-shaded texture mapping.
* Quake -- required a Pentium due to dual Integer and Floating Point execution pipes in the software texture mapper
* 3Dfx -- gee, only jump started hardware 3D. (Technically
Re: (Score:2)
I don't know what you think we're talking about but you just listed a whole bunch of games, not new technologies that optimised specific hardware.
You came close to the point though with Unreal Engine 5. Note that even on current hardware it was not possible to render 20 million triangles per frame in real time; the ability to do so came from a newly developed and optimised way of processing geometry, i.e. not using LOD maps. So that's a really good example, properly reinforcing my point that we optim
Re: (Score:2)
> I don't know what you think we're talking about but you just listed a whole bunch of games,
You made a bullshit claim (emphasis added):
I gave plenty of counter-examples where games were HEAVILY dependent on brute force of RAW CPU power to improve graphics. Getting 3D texture mapped triangles with software rasterization was a BIG deal. i.e. Ultima Underworld, Quake, Unreal, etc. Hell, even Quake had perspective corre
Re: (Score:1)
Re: (Score:2)
Nvidia GPUs have AI enhancement built in and it really works, but they need powerful dedicated hardware for it. AMD has been caught out by it; they don't have anything like that, and it would take a lot of R&D to replicate.
Not a good training dataset (Score:3, Insightful)
All examples I've seen of the Cityscapes dataset were filmed during cloudy weather, which is the normal weather condition here in Germany. No wonder the GTA scene looks like it was observed through tinted glass.
Also, the cameras probably didn't focus on the first few meters right in front of the car, since it's more important to have sharpness on things that appear small because of their distance. That explains why there is less detail on the processed roads.
Re: (Score:3)
Yep, they overfit to the training data. They made "Los Santos" (ie Los Angeles) go from looking like something out of the sunny pacific southwest to a cloudy northern landlocked city (I should know, I live in one). It's a neat trick, but just because the rendering now looks like the training data doesn't necessarily mean it's more accurate (or more pleasing). I will say the last segment at the end where they were able to retain decent color was impressive imho, but that seemed to be the exception rather
WTF? (Score:1)
Interactive rates != real-time (Score:3)
"interactive rates" means too slow for real-time, but still not a complete slideshow (ergo, still "interactive"). So, better leave it as is, and don't try to "explain" it incorrectly
Re: (Score:2)
2038 (Score:2)
Re: (Score:2)
Re: (Score:2)
You could queue up an order with ShopBLT. They use order queues so you know how far back in line you are by card, at least when you submit an order (check all of the different "types" of each model - ie, there are several versions of each 30X0 card). They don't charge until yours is available.
The nice cards may take a couple of months, but you aren't dependent on alert systems like NowInStock and having to face a mountain of people trying to buy the same thing at the same time (although this does work, se
Flight Simulators (Score:1)
Just simulating cheap dash-cam? (Score:4, Interesting)
Mostly it just looks like a color change to simulate the footage off cheap dash cams, not how it would look to a person.
Re: (Score:1)
"... how it would look to a person."
Yes, that's exactly what it is doing.
Re: (Score:2)
Watch the video. They have a nice side by side comparison of what a raw colour change looks like compared to this change.
No, Thank you. (Score:3)
I may be in the minority here, but I don't want my games to look 100% real. Simulators, yes; but games, no. I still want something out there that says "this is a game." Just like I like my animation to still have some kind of cartoonishness to it.
Re: (Score:2)
I may be in the minority here, but I don't want my games to look 100% real. Simulators, yes; but games, no. I still want something out there that says "this is a game." Just like I like my animation to still have some kind of cartoonishness to it.
Well there's always Mario or Borderlands to suit your requirement. The reality though is many games absolutely do go for realism.
PS5 games are plenty realistic (Score:2)
Can we get better graphics, cheaper? (Score:3)
In this case they're synthesizing to look realistic, but, what if you just used the algorithm to reduce rendering overhead (a la DLSS, but game specific)?
How low on detail for the input can you go?
You train the network comparing a low quality, fast, 3D render to a high quality, expensive 3D render, then in game use the low quality render enhanced by the trained network (that you built once).
If doing the enhancement via the neural algorithm is cheaper to run than doing the actual lighting simulation in-game using a typical engine, we may have a winning proposition here.
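The training setup proposed above can be sketched very roughly: pair each cheap render with an expensive reference render, then train an enhancer to map the former toward the latter. This numpy sketch uses a trivial downsample as the stand-in "cheap render" and nearest-neighbor upscaling as the stand-in "enhancer"; all of it is a hypothetical illustration of the objective, not DLSS or any shipped pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def cheap_render(scene):
    """Stand-in for a fast, low-quality render: 2x2 average pooling."""
    h, w = scene.shape
    return scene.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def enhancer(img):
    """Trivial stand-in for the learned network: nearest-neighbor upscale."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

scene = rng.random((8, 8))   # "expensive" reference render, made once offline
lo = cheap_render(scene)     # what the game would produce every frame
enhanced = enhancer(lo)      # what the trained network would output

# Training would minimize a reconstruction loss like this against the
# expensive reference; at runtime only cheap_render + enhancer execute.
loss = np.mean((enhanced - scene) ** 2)
print(enhanced.shape, lo.shape)
```

The proposition only wins if the enhancer's per-frame cost is lower than the rendering work it replaces, which is exactly the open question the comment raises.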