It has to do with knowing which parts of the scene need more iterations or more detail to render. Say you have a screen that is nothing but black; then you don't need much rendering. You can think of it as the inverse of the problem of how to know whether a picture is rendered or actually a photo. People who look at fakes enough know exactly where to look to determine it's fake. Likewise, AI can be used to provide further rendering in those areas. In PC gaming, FPS drops can be considered more acceptable. My laptop shits a brick when playing Noita because it literally is simulating all kinds of little shit and then also rendering it (I generally think this bottleneck is more the CPU than the GPU). Knowing where to focus your cycles (e.g. using more GPU for rendering quality) is not a trivial problem, but there are nonetheless patterns.
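The "know where to spend your cycles" idea can be sketched without any AI at all: run a cheap probe pass, then hand more iterations to the tiles that show detail. Everything below (the tile size, the budgets, the toy probe image) is invented for illustration:

```python
import numpy as np

def probe(width, height):
    """Cheap one-sample 'probe' pass; here just a toy gradient with a noisy right half."""
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, width)
    img = np.tile(x, (height, 1))
    img[:, width // 2:] += rng.normal(0.0, 0.2, (height, width - width // 2))
    return img

def iteration_budget(img, tile=8, base=16, extra=112):
    """Give every tile a base iteration count, plus extra proportional to its variance."""
    h, w = img.shape
    budget = np.full((h // tile, w // tile), base)
    variances = np.empty(budget.shape, dtype=float)
    for ty in range(h // tile):
        for tx in range(w // tile):
            variances[ty, tx] = img[ty*tile:(ty+1)*tile, tx*tile:(tx+1)*tile].var()
    if variances.max() > 0:
        budget = budget + (extra * variances / variances.max()).astype(int)
    return budget

b = iteration_budget(probe(64, 64))
# The flat left half stays near the base budget; the noisy right half gets more.
print(b[:, :4].mean(), b[:, 4:].mean())
```

The same shape of heuristic (variance, contrast, or an edge detector on a cheap pass) is what an AI-based allocator would be learning to approximate.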
There are certainly better questions to ask about this approach than the one you raise. It ultimately makes me question just how much you have toyed with writing your own rendering algorithms -- even a few attempts at generating a Mandelbrot would really begin to illuminate how the problem can be solved more intelligently.
It's a good exercise to write one, but simply reading the code and playing with it to generate zooms or anything similar is a good exercise too.
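For anyone who wants to try it, a complete escape-time Mandelbrot fits in a couple dozen lines of plain Python; shrink the re/im window to zoom:

```python
# Minimal Mandelbrot escape-time renderer, the kind of toy the parent suggests
# playing with. Pure standard library; the window and sizes are arbitrary defaults.

def mandelbrot(re_min=-2.0, re_max=0.6, im_min=-1.2, im_max=1.2,
               cols=78, rows=24, max_iter=40):
    chars = " .:-=+*#%@"
    lines = []
    for row in range(rows):
        im = im_min + (im_max - im_min) * row / (rows - 1)
        line = []
        for col in range(cols):
            re = re_min + (re_max - re_min) * col / (cols - 1)
            c = complex(re, im)
            z, n = 0j, 0
            while abs(z) <= 2.0 and n < max_iter:
                z = z * z + c
                n += 1
            # Points that never escape (inside the set) map to the densest char.
            line.append(chars[min(n * len(chars) // max_iter, len(chars) - 1)])
        lines.append("".join(line))
    return "\n".join(lines)

print(mandelbrot())
```

Zooming in (say, re in [-0.75, -0.73], im in [0.1, 0.12] with a higher max_iter) makes the "where do I need more iterations" question from upthread very concrete: the boundary regions need far more work than the interior or the far exterior.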
It's weird for me to think I am much older than this poster, so it probably comes down to how a lot of universities focus less on theory now and more on practical Java.
I think the problem is that getting into programming really is a much bigger step than it used to be.
Once upon a time, it was all simpler: every computer came with at least one programming guide, and a user was never more than 20 or so keystrokes away from drawing their own graphics on the screen. That's once upon a time.
Now there are 5 abstraction layers you have to deal with to do the same thing, and if you have never programmed before, you have to learn all 5 of them simultaneously.
Writing one in BASIC on an 8-bit computer back as a kid in the 1980s sure was not the most efficient method! It was a fun way to spend a couple of hours, though some people took it way further, e.g. fractint. As for the scaling, is DLSS much different than throwing the image through a few layers of a neural network, i.e. a few giant matrix multiplies separated by non-linear functions to prevent them from collapsing? That process makes the effort a fixed amount, dependent on the number of screen pixels (and the DL model's size).
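The parent's description, an upscaler as a few big matrix multiplies separated by nonlinearities, can be sketched in a few lines. The layer sizes and random weights below are invented; a real DLSS network is convolutional, trained, and far larger, but the per-frame cost is likewise fixed by resolution and model size rather than scene complexity:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # The nonlinearity between layers; without it the two matrix
    # multiplies would collapse into a single linear map.
    return np.maximum(x, 0.0)

# Toy mapping: a 4x4 low-res patch (16 values) to an 8x8 high-res patch (64 values).
# Weights are random here; a trained network would have learned them.
W1 = rng.normal(0.0, 0.1, (16, 32))
W2 = rng.normal(0.0, 0.1, (32, 64))

def upscale_patch(patch):
    """patch: flat array of 16 low-res values -> 64 'high-res' values."""
    h = relu(patch @ W1)
    return h @ W2

out = upscale_patch(rng.normal(size=16))
print(out.shape)
```

The work per patch is two fixed-size matmuls regardless of what the patch depicts, which is the "fixed amount dependent on pixel count and model size" point above.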
I don't think that's how DLSS [wikipedia.org] works.
My impression is that a deep learning algorithm is trained to recognize an object in the scene from a bunch of different angles at low resolution and then replace it with a higher resolution version rendered beforehand by much better hardware. Then the low-end hardware feeds both the low-res scene and the motion vectors for all objects in the scene into the trained algorithm, which then replaces a bunch of the low-res stuff with higher-res stuff from its "library".
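For what it's worth, one concrete way motion vectors get used in temporal upscalers generally is to warp the previous frame so its pixels line up with the current one before blending. A nearest-neighbor sketch of that single step, with made-up shapes and data, not NVIDIA's actual pipeline:

```python
import numpy as np

def warp(prev, mv):
    """prev: (H, W) previous frame; mv: (H, W, 2) per-pixel (dy, dx) motion
    since that frame. Each output pixel gathers from where it 'came from'."""
    h, w = prev.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys - mv[..., 0].round().astype(int), 0, h - 1)
    src_x = np.clip(xs - mv[..., 1].round().astype(int), 0, w - 1)
    return prev[src_y, src_x]

prev = np.zeros((4, 4))
prev[1, 1] = 1.0                # a bright pixel at (1, 1) in the old frame
mv = np.zeros((4, 4, 2))
mv[2, 3] = (1.0, 2.0)          # pixel (2, 3) reports it moved from (1, 1)
print(warp(prev, mv)[2, 3])    # → 1.0
```

A real upscaler would do this at sub-pixel precision with filtering, and then a network decides how much of the warped history to trust per pixel.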
My point was rather to address the OP's question of how AI can be used to find patterns to improve graphics, not this specific method. Thanks for the additional information.
This makes a lot of sense, and when the switch is done in the rendering pipeline, that seems like it could help avoid a lot of potential issues with mis-identifying the object. Then again, it seems like you could solve this in the case you mention without needing AI, because I am unsure why the rendering pipeline has to guess the object in the first place.
186,000 Miles per Second. It's not just a good idea. IT'S THE LAW.