Carmack Speaks On Ray Tracing, Future id Engines

Vigile writes "As a rule, when legendary game programmer John Carmack speaks, the entire industry listens. In a recent interview he comments on a multitude of topics, starting with Intel, its ray tracing research, and its upcoming Larrabee GPU. Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work, and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action. The 'sparse voxel octree' that Carmack discusses would allow for 'unique geometry down to the equivalent of the texel across everything.' He goes on to discuss other topics like the hardware necessary to efficiently process his new data structure, translation to consoles, multi-GPU PC gaming and even the world of hardware physics."
  • by Brian Gordon ( 987471 ) on Wednesday March 12, 2008 @04:24PM (#22732274)
    It's the most realistic possible way of rendering, so when computers get fast enough we'll be able to do everything with ray tracing. But effects past simple polygon rendering and water refraction are extremely difficult in ray tracing: not necessarily to program, but to simulate in real time.
  • by neumayr ( 819083 ) on Wednesday March 12, 2008 @04:41PM (#22732430)

    > The guy was considered a relic of the late-'90s PC FPS era years ago.
    Late '90s? So you're saying the Quake 3 engine wasn't highly impressive? Why was it so successful, then? Why was it used by so many games until shortly before that HL2/Doom 3 year?
    Note I'm not arguing for the Doom 3 engine...
  • Re:So, (Score:5, Informative)

    by nschubach ( 922175 ) on Wednesday March 12, 2008 @04:54PM (#22732558) Journal
    John Carmack == Commander Keen == id Software == Doom == Wolfenstein == Quake == ??

    You've never heard of any of those? The guys you mention might not even be in gaming if it weren't for Carmack and John Romero.
  • by luther2.1k ( 61950 ) on Wednesday March 12, 2008 @04:59PM (#22732634) Homepage
    Bog-standard ray tracing, which is what Intel is harping on about at the moment, isn't the be-all and end-all of global illumination algorithms, as many people who get all misty-eyed about the technique would have you believe. It's terrible for diffuse interactions, for one thing. Photon mapping [wikipedia.org] is a more realistic technique which simulates light more accurately.

    Tim.
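
For context, here is a deliberately tiny sketch of photon mapping's two passes, assuming nothing but a point light and an infinite diffuse floor at y = 0; every name and constant is illustrative, not taken from any real renderer:

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 pos; float power; };

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> uni(-1.0f, 1.0f);

    // Pass 1: shoot photons from a point light and record where they land.
    // The only geometry here is an infinite diffuse floor plane at y = 0.
    const Vec3 light = {0.0f, 5.0f, 0.0f};
    const int numPhotons = 100000;
    std::vector<Photon> photonMap;
    photonMap.reserve(numPhotons);

    for (int i = 0; i < numPhotons; ++i) {
        // Pick a random downward direction by rejection-sampling the unit ball.
        Vec3 d;
        do { d = {uni(rng), uni(rng), uni(rng)}; }
        while (d.x * d.x + d.y * d.y + d.z * d.z > 1.0f || d.y >= 0.0f);
        // Intersect the ray light + t*d with the plane y = 0.
        float t = -light.y / d.y;
        photonMap.push_back({{light.x + t * d.x, 0.0f, light.z + t * d.z},
                             1.0f / numPhotons});  // equal share of light power
    }

    // Pass 2: estimate irradiance at a surface point by density estimation:
    // sum the power of photons within radius r and divide by the disc area.
    const Vec3 q = {0.5f, 0.0f, 0.5f};
    const float r = 0.2f;
    float sum = 0.0f;
    for (const Photon& p : photonMap) {
        float dx = p.pos.x - q.x, dz = p.pos.z - q.z;
        if (dx * dx + dz * dz <= r * r) sum += p.power;
    }
    std::printf("irradiance near (%.1f, %.1f): %f\n",
                q.x, q.z, sum / (3.14159265f * r * r));
}
```

Jensen's actual algorithm also stores each photon's incoming direction and gathers the k nearest photons via a kd-tree; the linear scan above is purely for clarity.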
  • Re:So, (Score:5, Informative)

    by nschubach ( 922175 ) on Wednesday March 12, 2008 @05:11PM (#22732764) Journal
    Funny. It's just freaking amazing that someone would even stoop so low as to mention Gabe Newell and not know Carmack. Hell, the original Half-Life was built on the Quake engine, written by Carmack.
  • by Doogie5526 ( 737968 ) on Wednesday March 12, 2008 @05:27PM (#22732944) Homepage

    > It's the most realistic possible way of rendering
    It only provides realistic rendering of reflections, refractions, and shadows. There are still many more properties of light that take different, equally intensive algorithms to reproduce: color bleeding, caustics, subsurface scattering, depth of field.

    I'm sure there's some market for these things, but there's so much more involved even after these algorithms are implemented. Now you have to add settings (or additional texture maps) for each object (or light). As soon as you add something with live reflections, you can't even throw away what's not on screen (or facing away from the camera), so your memory requirements jump just because of that. Many things have to fall into place before these technologies are adopted widely. A lot of these algorithms have been around for over 25 years already and are just seeing wide adoption in feature films (most would be surprised at how much is faked, even today).

    I hope there's a class of games that don't use these things, or that take one or two of them and use them in innovative ways. While I like the WW2 (or futuristic) FPS games, I feel all that dev time is better spent on innovative gameplay.

    Sorry that the brief reply I planned turned into a rant.
  • by Creepy ( 93888 ) on Wednesday March 12, 2008 @05:28PM (#22732960) Journal
    Carmack is not saying the industry won't go to ray tracing, but rather that the industry won't abandon rasterization, because each has strengths and weaknesses.

    He believes the same thing I do: a hybrid approach is most likely, at least in the short term. A sparse voxel octree (a voxel is a 3D pixel, and an octree is a tree that recursively subdivides 3D space into eight octants; it's sparse because most voxels are empty [hold air]) would work well for ray tracing, because it sounds like you'd need to cast rays to find the voxel. I'm not sure why/how it would save on overlapping edges unless the voxel itself holds color (texture) information and is fragment-level in detail. Still, that seems like it would be an incredibly large data structure, so I'm sure he's doing some trick that I can't think of at the moment.
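
For the curious, one plausible shape for such a node, with the caveat that this is pure speculation and not anything id has published; what attributes to store per voxel is exactly the open question this comment raises:

```cpp
#include <cstdint>

// Speculative sketch of a sparse voxel octree node. Each node covers a cube
// of space and subdivides it into 8 child octants; octants containing only
// air get a null child, which is where the "sparse" savings come from.
struct SVONode {
    SVONode* child[8];  // null for empty (air) octants

    // Surface attributes stored at occupied leaves. Color alone already
    // gives "texel-level" unique detail; normals are needed for lighting.
    uint8_t r, g, b;    // baked color
    int8_t  nx, ny, nz; // quantized surface normal
    bool    isLeaf;     // true once we bottom out at final detail
};

// Sparse storage in action: a query walks down only where children exist.
// Returns the deepest node containing the point (x, y, z in [0, 1)^3).
inline const SVONode* descend(const SVONode* node,
                              float x, float y, float z) {
    while (node && !node->isLeaf) {
        // Pick the octant by comparing against the cube's midpoint,
        // then remap the coordinate into that child's [0, 1) range.
        int ix = x >= 0.5f, iy = y >= 0.5f, iz = z >= 0.5f;
        x = 2.0f * x - ix; y = 2.0f * y - iy; z = 2.0f * z - iz;
        node = node->child[ix | (iy << 1) | (iz << 2)];
    }
    return node;  // null means the point is in empty space
}
```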

  • by Goaway ( 82658 ) on Wednesday March 12, 2008 @05:40PM (#22733042) Homepage
    "Ray tracing" traditionally means specifically tracing rays from the eye out into the scene. Other methods are usually referred to by different names.

    And even so, while tracing either photons or eye rays may be the most feasible method at the moment, it is by no means the only way to solve the rendering equation, nor any kind of theoretical best.
  • Re:So, (Score:3, Informative)

    by Winckle ( 870180 ) on Wednesday March 12, 2008 @06:09PM (#22733284) Homepage
    Interestingly, at least with American McGee, he gave an interview on the GFW podcast where he said that it was the publisher who wanted to put his name on the box, to create brand recognition, not the other way around.
  • by ultranova ( 717540 ) on Wednesday March 12, 2008 @07:00PM (#22733812)

    > It's the most realistic possible way of rendering,

    No. For starters, the rays are "sparse"; that is, there is space between two parallel rays which goes unexplored. Furthermore, each ray either hits or doesn't hit any given object, leading to aliasing (jagged edges). A much better solution would be tracing a cone from each pixel, with no space between them; however, the mathematics of calculating reflections when the cone hits something would be horrible.

    Another problem is global illumination. Normally, when the ray hits something, you send a ray towards each light source to figure out how that light source illuminates this place. However, any surface which is illuminated acts as a light source as well; in theory, you'd need to send a ray out to each point of each surface in the scene, and from those points again to each surface, and so on ad infinitum to get it completely right.

    The third problem is caused by the trace being done in inverse; that is, instead of tracing rays of light from the light source to the eye, you trace rays from the eye to the source. This is necessary because each source in theory emits an infinite number of rays, so tracing them all would take an infinite amount of computing power. However, since light takes the same path either way, you can invert the trace and only trace those rays which actually hit the eye of the observer, usually simplified to one per pixel.

    So what's the problem? Well, suppose there's a mirror near the light source which reflects light onto an area of a surface, on top of the light that surface is already getting directly from the source. This area will naturally be brighter than the rest of the surface, because it is receiving more light. However, when calculating the brightness with an inverted trace, you trace a ray to a spot on the surface, and then from the surface to the light source(s). There is no way to know that a mirror far from the path of the ray should also affect the brightness of the surface. Thus the spot which should be brighter isn't. Other variations of this problem include prisms and similar rainbow-generating devices, for the same reason.

    There are solutions to all these problems: for example, POV-Ray can trace multiple rays per pixel to anti-alias the image, and use forward ray tracing to try to calculate the global illumination and lens and mirror effects. However, these are ultimately kludges to cover for the weaknesses of the core ray tracing algorithm. This, in turn, strongly suggests that ray tracing is not the ultimate rendering method as far as realism goes.
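
The multiple-rays-per-pixel patch mentioned above is easy to show concretely. A minimal sketch of jittered supersampling, with a hard-edged disc standing in for whatever the real per-ray shader would compute (all names here are illustrative):

```cpp
#include <cstdio>
#include <random>

struct Color { float r, g, b; };

// Stand-in for a real per-ray shader: white inside a hard-edged disc of
// radius 5 centered at (8, 8), black outside, so edge pixels alias badly.
Color trace(float px, float py) {
    float dx = px - 8.0f, dy = py - 8.0f;
    return (dx * dx + dy * dy < 25.0f) ? Color{1, 1, 1} : Color{0, 0, 0};
}

// Average N jittered rays per pixel instead of one ray through the center.
// This is the POV-Ray-style kludge the parent describes: it softens jagged
// edges by brute force without changing the core algorithm at all.
Color renderPixel(int x, int y, int samples, std::mt19937& rng) {
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    Color sum = {0, 0, 0};
    for (int s = 0; s < samples; ++s) {
        Color c = trace(x + jitter(rng), y + jitter(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    float inv = 1.0f / samples;
    return {sum.r * inv, sum.g * inv, sum.b * inv};
}

int main() {
    std::mt19937 rng(1);
    // Pixel (12, 8) straddles the disc's silhouette: one ray returns pure
    // 0 or 1 (aliasing); many rays return the fractional edge coverage.
    Color c = renderPixel(12, 8, 64, rng);
    std::printf("edge pixel brightness: %.2f\n", c.r);
}
```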

  • by ultranova ( 717540 ) on Wednesday March 12, 2008 @07:12PM (#22733918)

    > Everything light does is a combination of reflections and refractions (shadows are an artifact of those).

    Except the double-slit experiment. It's based on the fact that light has wavelike qualities, while ray tracing treats it as particles.

    I also strongly doubt that the discrete-ray approach will ever produce very good global illumination, since the number of rays bouncing between surfaces quickly grows toward infinity as the desired accuracy grows.

    You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do that: solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory? ;-)

  • by kb ( 43460 ) on Wednesday March 12, 2008 @07:46PM (#22734208) Homepage Journal
    As far as I've understood it, he isn't exactly using the octree for LOD but for storing all voxel data in a sparse (there we have it ;) manner. If you only have one "layer" of voxels at whatever resolution, defining e.g. only the surface of things, most nodes of the octree will remain empty, so you can reduce the data set to storing only what's necessary for rendering instead of having to store a full-resolution 3D representation of your space.

    Of course this leans happily towards an LOD system, as storing the data in different resolutions, aka mip-mapping the geometry, and then accessing the right detail level would essentially be free if you do it right (see the sketch below). In the end it's a promising approach, with of course many details to be sorted out; there's still a lot of data to be stored per voxel (texture coordinates, normals, basically everything that now goes into a vertex) if you want the full feature set, e.g. lighting and such. But given dedicated hardware, and combined with his MegaTexture stuff (which is basically only glorified unique UV mapping, perhaps somewhat generalized and with a good resource management system behind it), this could be pretty cool.
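
The "mip-mapping the geometry for free" idea can be sketched in a few lines; names and camera constants here are mine for illustration, not id's:

```cpp
#include <cstdio>

// In a sparse voxel octree the "mip level" is just tree depth, since each
// level of descent halves the voxel edge length. The free LOD described
// above is then: stop descending once a voxel projects to under one pixel.
float voxelEdgeAtDepth(float rootEdge, int depth) {
    return rootEdge / float(1 << depth);  // edge halves per level
}

int depthForDistance(float rootEdge, float distance,
                     float pixelsPerUnit, int maxDepth) {
    // Projected size in pixels of a voxel with edge e at this distance,
    // for a simple pinhole camera: pixels ~= e * pixelsPerUnit / distance.
    int depth = 0;
    while (depth < maxDepth) {
        float projected =
            voxelEdgeAtDepth(rootEdge, depth) * pixelsPerUnit / distance;
        if (projected <= 1.0f) break;  // voxel is sub-pixel: stop here
        ++depth;
    }
    return depth;
}

int main() {
    // Nearby geometry earns deep descent (fine voxels); distant geometry
    // stops high in the tree, so the octree doubles as its own mip chain.
    for (float dist : {1.0f, 10.0f, 100.0f})
        std::printf("distance %6.1f -> depth %d\n", dist,
                    depthForDistance(64.0f, dist, 100.0f, 16));
}
```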
  • by ghostlibrary ( 450718 ) on Thursday March 13, 2008 @07:54AM (#22737552) Homepage Journal
    > Do you have any publications you can point me towards?

    Hmm... well, I write here under a pseudonym, so it's hard to look up my work. But you can look up 'TreeSPH' on Google for some good references to lots of astrophysical implementations. The 'Tree' part is obviously the voxel octrees, while the 'SPH' means they added hydrodynamics to it by making 'blobs' that have a variable kernel size for smoothing over neighbors.

    Which basically means, for hydrodynamics, if it's uniform density you can use a single large 'blob' to represent it, while in an area where the density is rapidly changing you go to smaller 'blobs' because you need more computation. You then use a kernel function, which basically means how much you smooth over neighbors to get a good distribution (sketched in code after this comment). With this, you spend all your hydrodynamic computation time on the rapidly changing, shocky, or active stuff. So it's another example of how to decompose a problem the way Carmack seems to be suggesting.

    Funny thing is, in astrophysics, this stuff came out in the late '80s/early '90s, and astrophysics usually lags behind physics by half a decade, which lags behind pure math by a decade. I think the challenge for getting it into gaming is converting codes intended for massive cluster/supercomputer calculations into the realm of fast PC gaming.

    Tree codes are already heavily used in computer gaming (browsing through 'Computer Developer' magazine shows they are used a lot for dynamic line-of-sight work), so none of what Carmack suggests is cutting-edge comp sci in theory. In fact, he used binary space partitioning in Doom, which is in the same field. Much as with Doom etc., the key is whether he can come up with a fast implementation (or even approximation). I think that's his real talent: taking existing methods and concepts and figuring out a 'real world' implementation that's fast enough for gaming. He's a programming engineer of no small talent.
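
The variable-kernel smoothing described two paragraphs up reduces to a short formula. A minimal sketch of an SPH density estimate using the standard cubic spline kernel; the brute-force neighbor loop stands in for what a real TreeSPH code would do with the tree:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// SPH in one line: a field at point x is a kernel-weighted sum over nearby
// "blobs". The smoothing length h is the variable kernel size described
// above: large blobs where the fluid is uniform, small ones where it
// changes quickly. All names here are illustrative.
struct Particle { float x, y, z, mass, h; };

// Standard cubic spline kernel (3D normalization), with q = r / h and
// compact support: the kernel is exactly zero beyond r = h.
float kernelW(float r, float h) {
    float q = r / h;
    float sigma = 8.0f / (3.14159265f * h * h * h);
    if (q < 0.5f) return sigma * (6.0f * (q * q * q - q * q) + 1.0f);
    if (q < 1.0f) { float t = 1.0f - q; return sigma * 2.0f * t * t * t; }
    return 0.0f;
}

// Density estimate: rho(x) = sum_j m_j * W(|x - x_j|, h_j).
// A TreeSPH code uses the tree to visit only particles within kernel range;
// the brute-force loop below is just to keep the sketch short.
float densityAt(float x, float y, float z, const std::vector<Particle>& ps) {
    float rho = 0.0f;
    for (const Particle& p : ps) {
        float dx = x - p.x, dy = y - p.y, dz = z - p.z;
        rho += p.mass * kernelW(std::sqrt(dx * dx + dy * dy + dz * dz), p.h);
    }
    return rho;
}

int main() {
    // Two equal-mass blobs; the second is outside the query's kernel range.
    std::vector<Particle> ps = {{0, 0, 0, 1.0f, 1.0f}, {2, 0, 0, 1.0f, 1.0f}};
    std::printf("density near origin: %f\n", densityAt(0.1f, 0, 0, ps));
}
```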
  • by John Carmack ( 101025 ) on Thursday March 13, 2008 @12:22PM (#22740222)
    In our current generation codebase we have moved to completely separate representations for rendering and physics, and I expect that to continue in the future. The requirements are different enough to merit different internal storage.

    John Carmack
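
A generic sketch of what such a split typically looks like; this illustrates the general pattern, not id's actual storage:

```cpp
#include <cstdint>
#include <vector>

// Separate rendering and physics representations: the renderer wants dense,
// GPU-friendly buffers at full visual detail, while the physics system
// wants something cheap to query, usually far coarser.
struct RenderMesh {
    std::vector<float>    vertices;  // interleaved position/normal/UV
    std::vector<uint32_t> indices;   // full-detail triangle list
};

struct PhysicsProxy {
    std::vector<float> hullVertices;    // low-poly convex hull for collision
    float aabbMin[3], aabbMax[3];       // broad-phase bounds
    float mass, friction, restitution;  // simulation-only data
};

// An entity owns both. They are built from the same source asset but never
// need to agree on format, which is the point of keeping them separate.
struct Entity {
    RenderMesh   render;
    PhysicsProxy physics;
};
```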
  • by John Carmack ( 101025 ) on Thursday March 13, 2008 @12:45PM (#22740492)
    Tracing into an SVO structure can be done with almost a Bresenham algorithm, and when you reach whatever depth of descent you want (a critical factor, you aren't going all the way to final detail on every trace), you pop out whatever data is stored there (probably some vector quantized BRDF sort of thingy) and run a "fragment program" on it.

    The data sets for a game world represented like this would be massive, but it is intrinsically well suited for streaming, even over the internet, which may well be the dominant media distribution channel in the timeframe we are looking at.

    John Carmack
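
The "almost a Bresenham" family of traces is well documented; the classic example is the Amanatides & Woo 3D-DDA grid walk sketched below. A real SVO trace descends the hierarchy and skips empty octants wholesale, but the per-voxel stepping looks like this; isOccupied() is a stand-in for the actual octree lookup:

```cpp
#include <cmath>
#include <cstdio>

// Stand-in for the real octree descent: any occupancy query exercises the
// stepping logic. Here, a solid sphere of radius 8 voxels at the origin.
bool isOccupied(int x, int y, int z) {
    return x * x + y * y + z * z < 64;
}

// March a ray (origin o, direction d) through unit voxels until a hit,
// always stepping into whichever neighboring voxel the ray enters first.
bool traceVoxels(float ox, float oy, float oz,
                 float dx, float dy, float dz,
                 int maxSteps, int& hx, int& hy, int& hz) {
    int x = (int)std::floor(ox), y = (int)std::floor(oy), z = (int)std::floor(oz);
    int sx = dx >= 0 ? 1 : -1, sy = dy >= 0 ? 1 : -1, sz = dz >= 0 ? 1 : -1;
    // Ray-parameter distance to cross one whole voxel on each axis
    // (IEEE float division by zero yields +inf, which disables that axis).
    float tdx = std::fabs(1.0f / dx), tdy = std::fabs(1.0f / dy), tdz = std::fabs(1.0f / dz);
    // Ray parameter at which we first cross a voxel boundary on each axis.
    float tx = (dx >= 0 ? x + 1 - ox : ox - x) * tdx;
    float ty = (dy >= 0 ? y + 1 - oy : oy - y) * tdy;
    float tz = (dz >= 0 ? z + 1 - oz : oz - z) * tdz;
    for (int i = 0; i < maxSteps; ++i) {
        if (isOccupied(x, y, z)) { hx = x; hy = y; hz = z; return true; }
        if (tx <= ty && tx <= tz) { x += sx; tx += tdx; }
        else if (ty <= tz)        { y += sy; ty += tdy; }
        else                      { z += sz; tz += tdz; }
    }
    return false;  // no hit within maxSteps voxels
}

int main() {
    int hx, hy, hz;
    // A ray starting outside the sphere, aimed back toward it.
    if (traceVoxels(20.0f, 3.0f, 1.0f, -1.0f, -0.1f, 0.0f, 64, hx, hy, hz))
        std::printf("hit voxel (%d, %d, %d)\n", hx, hy, hz);
}
```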
