Carmack Speaks On Ray Tracing, Future id Engines 256
Vigile writes "As a matter of principle, when legendary game programmer John Carmack speaks, the entire industry listens. In a recent interview he comments on a multitude of topics starting with information about Intel, their ray tracing research and upcoming Larrabee GPU. Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action. The 'sparse voxel octree' that Carmack discusses would allow for 'unique geometry down to the equivalent of the texel across everything.' He goes on to discuss other topics like the hardware necessary to efficiently process his new data structure, translation to consoles, multi-GPU PC gaming and even the world of hardware physics."
Re:There is a great disturbance in the source... (Score:2, Informative)
Re:An Irrelevant Hack (Score:2, Informative)
Note I'm not arguing for the Doom3 engine...
Re:So, (Score:5, Informative)
You've never heard of any of those? The guys you mention might not even be in gaming if it weren't for Carmack and John Romero.
Re:There is a great disturbance in the source... (Score:5, Informative)
Tim.
Re:So, (Score:5, Informative)
Re:There is a great disturbance in the source... (Score:4, Informative)
I'm sure there's some market for these things, but there's so much more involved even after these algorithms are implemented. Now you have to add settings (or additional texture maps) for each object (or light). As soon as you add something with live reflections, you can't even throw away what's not on screen (or facing away from the camera), so your memory requirements jump just because of that. Many things have to fall into place before these technologies are adopted widely. A lot of these algorithms have been around for over 25 years already and are only now seeing wide adoption in feature films (most would be surprised at how much is faked, even today).
I hope there's a class of games that don't use these things or take 1 or 2 of these things and use them in innovative ways. While I like the WW2 (or futuristic) FPS games, I feel all that dev time is better spent on innovative game play.
Sorry that the brief reply I planned turned into a rant.
Re:Or, raytracing could work (Score:3, Informative)
He believes the same thing I do - a hybrid approach is most likely, at least in the short term. A sparse voxel octree (a voxel is a 3D pixel, and an octree is a uniform 3D structure to hold the voxels; they are sparse because most are empty [hold air]) would work well for ray tracing, because it sounds like you'd need to cast rays to find the voxel. I'm not sure why/how it would save on overlapping edges unless the voxel itself holds color (texture) information and is fragment-level in detail. Still, that seems like it would be an incredibly large data structure, so I'm sure he's doing some trick that I can't think of at the moment.
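To make the structure concrete, here is a minimal sketch of a sparse voxel octree, assuming (hypothetically) that each node either holds eight children (mostly None, i.e. "air") or is a solid leaf carrying a color. The names `OctreeNode` and `lookup` are illustrative, not from Carmack's actual engine.

```python
# Hypothetical sketch of a sparse voxel octree: empty octants stay None,
# so memory is only spent where geometry actually exists.

class OctreeNode:
    def __init__(self, color=None):
        self.children = [None] * 8   # sparse: empty octants stay None
        self.color = color           # set only on solid leaf voxels

def child_index(x, y, z, half):
    """Pick one of the 8 octants for a point in a cube of side 2*half."""
    return (x >= half) | ((y >= half) << 1) | ((z >= half) << 2)

def lookup(node, x, y, z, size):
    """Descend to the leaf voxel containing (x, y, z); None = empty space."""
    while node is not None and node.color is None:
        half = size // 2
        node = node.children[child_index(x, y, z, half)]
        x, y, z = x % half, y % half, z % half
        size = half
    return node.color if node else None
```

A ray caster would walk this same tree along the ray's path, skipping large empty octants in one step, which is presumably where the efficiency comes from.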
Re:There is a great disturbance in the source... (Score:5, Informative)
And even so, while tracing either photons or eye rays may be the most feasible method at the moment, it is by no means the only way to solve the rendering equation, nor any kind of theoretical best.
Re:So, (Score:3, Informative)
Re:There is a great disturbance in the source... (Score:3, Informative)
No. For starters, the rays are "sparse", that is, there is space between two parallel rays which goes unexplored; furthermore, each ray either hits or doesn't hit any given object, leading to aliasing (jagged edges). A much better solution would be tracing a cone from each pixel, with no space between them; however, the mathematics of calculating reflections when the cone hits something would be horrible.
Another problem is with global illumination. Normally, when the ray hits something, you send a ray towards each light source to figure out how that light source illuminates this place. However, any surface which is illuminated acts as a light source as well; in theory, you'd need to send a ray out to each point of each surface in the scene, and from those point again to each surface and so on ad infinitum to get it completely right.
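The blow-up described above can be seen with a toy model: if every surface reflects a fraction rho of the light it receives, the inter-surface bounces form a geometric series, which is why renderers truncate at a fixed bounce depth. This sketch is an illustration of that point, not a real global-illumination algorithm.

```python
# Toy model of inter-surface bounces: each bounce carries a fraction
# rho of the previous one, so truncating at max_depth converges fast.

def bounced_light(direct, rho, max_depth):
    """Sum direct + rho*direct + rho^2*direct + ... up to max_depth bounces."""
    total, energy = 0.0, direct
    for _ in range(max_depth + 1):
        total += energy
        energy *= rho
    return total
```

With rho = 0.5 the series converges to twice the direct light, and a couple of dozen bounces already gets within a millionth of that, which is why a depth cutoff is tolerable in practice.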
The third problem is caused by the trace being done in inverse; that is, instead of tracing rays of light from the light source to the eye, you trace rays from the eye to the source. This is necessary because each source in theory emits an infinite number of rays, so tracing them all would take an infinite amount of computing power. However, since light takes the same path either way, you can invert the trace and only trace those rays which actually hit the eye of the observer, usually simplified to one per pixel.
So what's the problem? Well, suppose there's a mirror near the light source, which reflects light onto an area of a surface that is already getting light directly from the source. This area will naturally be brighter than the rest of the surface, because it is receiving more light. However, when calculating the brightness with an inverted trace, you trace a ray to a spot on the surface, and then from the surface to the light source(s). There is no way to know that a mirror far from the path of the ray should also affect the brightness of the surface. Thus the spot which should be brighter isn't. Other variations of this problem include prisms and similar rainbow-generating devices, for the same reason.
There are solutions to all these problems: for example, POV-Ray can trace multiple rays per pixel to anti-alias the image, and use forward ray tracing to try to calculate the global illumination and lens and mirror effects. However, these are ultimately kludges to cover for the weaknesses of the core ray tracing algorithm. This, in turn, strongly suggests that ray tracing is not the ultimate rendering method as far as realism goes.
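The "multiple rays per pixel" kludge can be sketched in a few lines: jitter several sub-pixel samples and average them, so pixels straddling a hard edge get intermediate values instead of jaggies. The `trace` function here is a stand-in (just a hard vertical edge), not a real ray tracer.

```python
import random

def trace(x, y):
    """Stand-in for a full ray tracer: a hard vertical edge at x = 0.5."""
    return 1.0 if x < 0.5 else 0.0

def shade_pixel(px, py, samples=64, rng=random.Random(0)):
    """Average several jittered sub-pixel samples (supersampling AA)."""
    total = 0.0
    for _ in range(samples):
        # jitter each ray inside the pixel's unit square
        total += trace(px + rng.random(), py + rng.random())
    return total / samples
```

A pixel straddling the edge shades to roughly 0.5 rather than snapping to 0 or 1, which is exactly the smoothing effect, at the cost of tracing many times more rays.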
Re:There is a great disturbance in the source... (Score:5, Informative)
Except the double-slit experiment. It's based on the fact that light has wavefront qualities, while ray tracing treats it as particles.
I also strongly doubt that the discrete ray approach will ever produce very good global illumination, since the number of rays bouncing between surfaces quickly grows toward infinity as the desired accuracy grows.
You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do this - solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory ?-)
Re:There is a great disturbance in the source... (Score:5, Informative)
Re:How does it play with Physics? (Score:4, Informative)
Hmm... well, I write here under a pseudonym, so it's hard to look up my work. But you can look up 'TreeSPH' on Google for good references to lots of astrophysical implementations. The 'Tree' part is obviously the voxel octrees, while the 'SPH' means they added hydrodynamics to it by making 'blobs' that have a variable kernel size for smoothing over neighbors.
Which basically means, for hydrodynamics, if the density is uniform you can use a single large 'blob' to represent it, while in an area where the density is rapidly changing you go to smaller 'blobs' because you need more computation. You then use a kernel function, which basically determines how much you smooth over neighbors to get a good distribution. With this, you spend all your hydrodynamic computation time on the rapidly changing, shocky, or active stuff. So it's another example of how to decompose a problem the way Carmack seems to be suggesting.
Funny thing is, in astrophysics, this stuff came out in the late 80s/early 90s, and astrophysics usually lags behind physics by half a decade, which lags behind pure math by a decade. I think the challenge for getting into gaming is converting codes intended for massive cluster/supercomputer calculations into the realm of fast PC gaming.
Tree codes are already heavily used in computer gaming (browsing through 'Computer Developer' magazine shows they are used a lot for dynamic line-of-sight work), so none of what Carmack suggests is cutting-edge comp sci in theory. In fact, he used binary space partitioning with Doom, which is in the same field. Much as with Doom etc., the key is whether he can come up with a fast implementation (or even approximation). I think that's his real talent: taking existing methods and concepts and figuring out a 'real world' implementation that's fast enough for gaming. He's a programming engineer of no small talent.
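For reference, the binary space partitioning mentioned above reduces to a very short tree walk: each internal node splits space with a line (2D, as in Doom's maps), and locating a point means following the front or back side down to a leaf. This is a minimal sketch; the names `BSPNode` and `locate` are illustrative, and real BSP use (rendering order, collision) involves much more.

```python
class BSPNode:
    def __init__(self, a, b, c, front, back):
        self.plane = (a, b, c)               # splitting line a*x + b*y + c = 0
        self.front, self.back = front, back  # subtrees or leaf labels

def locate(node, x, y):
    """Walk the BSP tree to the leaf (e.g. sector label) containing (x, y)."""
    while isinstance(node, BSPNode):
        a, b, c = node.plane
        node = node.front if a * x + b * y + c >= 0 else node.back
    return node
```

The depth of the walk is logarithmic in a balanced tree, which is what makes these structures fast enough for per-frame queries like line of sight.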
Re:Senor Carmack, one question (Score:5, Informative)
John Carmack
Re:acceleration structures, etc... (Score:5, Informative)
The data sets for a game world represented like this would be massive, but it is intrinsically well suited for streaming, even over the internet, which may well be the dominant media distribution channel in the timeframe we are looking at.
John Carmack