
Carmack On 'Infinite Detail,' Integrated GPUs, and Future Gaming Tech

Vigile writes "John Carmack sat down for an interview during QuakeCon 2011 to talk about the future of technology for gaming. He shared his thoughts on the GPU hardware race (hardware doesn't matter, but drivers are really important), on integrated graphics solutions like Sandy Bridge and Llano (with a future of shared address spaces, they may outperform discrete GPUs), and of course on 'infinite detail' engines (uninspired content viewed at the molecular level is still uninspired content). Carmack also mentions a newfound interest in ray tracing and how it will 'eventually win' the battle for rendering in the long run."
  • by Sycraft-fu ( 314770 ) on Friday August 12, 2011 @04:18PM (#37073260)

    I think part of the problem is that you get a bunch of CS types who learned how to make a ray tracer in a class (because it's pretty easy and pretty cool) and also learned that it's "O(log n)", but don't really understand what that means or what it applies to.

    Yes, in theory a ray tracing engine scales logarithmically with the number of polygons. That means that past a certain point you can render more and more complex geometry without much additional cost. However, that ignores a big problem in the real world: memory access. Memory access isn't free, and it turns out to be a big issue for complex scenes in a ray tracer. You can't take algorithmic complexity out of the context of a real system; you have to account for system overhead. Sometimes that means a theoretically less optimal algorithm is better in practice. (A rough back-of-the-envelope sketch of this trade-off follows the comment.)

    Then there's the problem of shadowing/shading you pointed out. In a pure ray tracer, everything has that unnatural shiny/bright look. This is because you trace rays from the screen back to the light source. That works fine for direct illumination, but the real world has lots of indirect illumination, which gives the richness of the shadows we see. For that you need something else, like radiosity or photon mapping, and that has different costs. (A minimal shading sketch showing the direct-only limitation also follows the comment.)

    Finally there's the big issue of resolution, which ray tracing types like to ignore. Ray tracing doesn't scale well with resolution: it's O(n) in the number of pixels, and pixel count grows quadratically with linear resolution, since you increase both horizontal and vertical resolution when you go to a higher PPI. Then if you want anti-aliasing, you have to shoot multiple rays per pixel. This is why ray tracing demos love to show all kinds of smooth spheres yet run at a low resolution. They can handle the polygons, but ask them to do 1920x1080 with 4xAA and they are fucked. (The arithmetic is spelled out after the comment.)

    Now, none of this is to say that ray tracing is something we'll never want to use. But it has some real issues that people like to gloss over, and those issues are the reason it isn't being used for real-time engines.
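
To make the parent's "O(log n), but memory access isn't free" point concrete, here is a rough back-of-the-envelope sketch in Python. It assumes a simple binary BVH, and the node and triangle sizes are made-up illustrative numbers, not figures from Carmack or the comment above.

```python
# Back-of-the-envelope sketch: per-ray traversal work grows logarithmically
# with triangle count, but the scene's memory footprint grows linearly, and
# every BVH node visit is a dependent, cache-unfriendly memory fetch.
import math

BYTES_PER_BVH_NODE = 32    # assumed node size (AABB + child index)
BYTES_PER_TRIANGLE = 48    # assumed vertex data per triangle

for triangles in (10_000, 1_000_000, 100_000_000):
    depth = math.ceil(math.log2(triangles))   # ~node visits per ray: O(log n)
    nodes = 2 * triangles - 1                 # nodes in a full binary BVH
    footprint_mib = (nodes * BYTES_PER_BVH_NODE +
                     triangles * BYTES_PER_TRIANGLE) / 2**20
    print(f"{triangles:>11,} tris: ~{depth:2d} node visits/ray, "
          f"~{footprint_mib:,.0f} MiB of scene data")
```

Traversal cost barely moves between ten thousand and a hundred million triangles, but the working set goes from about a megabyte to several gigabytes, which is where the real-world cost hides.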
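
The shading point can be sketched the same way. Below is a minimal, hypothetical direct-illumination shader in Python: at each hit it casts one shadow ray per light and sums a Lambert term, so no indirectly bounced light ever contributes. The vector type and the scene-query stub are invented stand-ins, not any real engine's API.

```python
from dataclasses import dataclass
import math

@dataclass
class Vec3:
    x: float; y: float; z: float
    def sub(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def dot(self, o): return self.x * o.x + self.y * o.y + self.z * o.z
    def norm(self):
        l = math.sqrt(self.dot(self))
        return Vec3(self.x / l, self.y / l, self.z / l)

def shadow_ray_blocked(point, light_pos):
    # Stand-in for a real scene intersection query.
    return False

def shade_direct(hit_point, normal, albedo, lights):
    """Direct illumination only: one shadow ray per light, Lambert falloff."""
    radiance = 0.0
    for light_pos, intensity in lights:
        if shadow_ray_blocked(hit_point, light_pos):
            continue                      # hard shadow: all or nothing
        to_light = light_pos.sub(hit_point).norm()
        radiance += intensity * max(0.0, normal.dot(to_light)) * albedo
    # Indirect light (color bleeding, soft shadowing from bounced light) never
    # shows up here; it needs radiosity, photon mapping, or path tracing on top.
    return radiance

print(shade_direct(Vec3(0, 0, 0), Vec3(0, 1, 0), 0.8, [(Vec3(2, 2, 0), 1.0)]))
```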
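
And the resolution arithmetic, under an assumed frame rate, anti-aliasing factor, and per-sample ray count:

```python
# Primary rays scale with pixel count; supersampled AA and secondary rays
# (shadow, reflection) multiply it further. All per-sample numbers are assumed.
width, height = 1920, 1080
aa_samples = 4          # 4x supersampling
rays_per_sample = 3     # primary + shadow + one reflection ray (assumed)
fps = 60

pixels = width * height
rays_per_frame = pixels * aa_samples * rays_per_sample
rays_per_second = rays_per_frame * fps

print(f"{pixels:,} pixels -> {rays_per_frame:,} rays/frame "
      f"-> {rays_per_second:,} rays/second")
# 2,073,600 pixels -> 24,883,200 rays/frame -> 1,492,992,000 rays/second
```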
