Carmack Speaks On Ray Tracing, Future id Engines 256
Vigile writes "As a matter of principle, when legendary game programmer John Carmack speaks, the entire industry listens. In a recent interview he comments on a multitude of topics starting with information about Intel, their ray tracing research and upcoming Larrabee GPU. Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action. The 'sparse voxel octree' that Carmack discusses would allow for 'unique geometry down to the equivalent of the texel across everything.' He goes on to discuss other topics like the hardware necessary to efficiently process his new data structure, translation to consoles, multi-GPU PC gaming and even the world of hardware physics."
Right... (Score:1, Insightful)
Stunning! (Score:1, Insightful)
Surprisingly, a developer thinks that the technique he is working on is better than other techniques at addressing the class of problems to which the technique applies.
In other news, a substantial quantity of water was discovered in the Pacific Ocean.
Re:Stunning! (Score:5, Insightful)
Re:How about.... (Score:5, Insightful)
It's his job, and I'm pretty sure his passion, to think about stuff like this.
Re:Stunning! (Score:5, Insightful)
Re:Carmack is yesterday's news (Score:2, Insightful)
Re:Right... (Score:5, Insightful)
Plusses:
I mean, seriously, what's your point? The man's not actually a God so we shouldn't listen to him? Is there somebody more experienced I should prefer to listen to? Is "n3tcat" the handle for somebody with thirty years experience in first-person shooter engines or something?
Re:Anyone betting on if Ray Tracing will give id.. (Score:3, Insightful)
Re:How does it play with Physics? (Score:5, Insightful)
A ------------------------ (long distance) ------------------------ B
A non-tree method would just calculate all the interactions: A-B, A-C, B-C. But you can group B+C together when calculating their interaction with A because, at that distance, the result for (B+C)-A is the same as the result for B-A + C-A. Then the interaction between B & C must be calculated separately. So you've (even in this tiny example) reduced your calculation from 3 to 2.
And, of course, all the 'voxels' between A & B/C that are empty need not be considered at all. If you'd set it up as an NxNxN voxel cube, you'd be wasting time on calculating empty voxels between the occupied items.
So if you want realistic interactive environments, sparse voxel octrees are the way to go-- you pump all the calculation time into the parts where it matters, and let the other stuff be 'smoothed' when such smoothing is indistinguishable from rounding error.
Typically, you can traverse the tree for a given error percentage, e.g. 'walk the tree and do interactions preserving 99% energy conservation' or similar. So you have predictable error as well, despite being able to use arbitrary geometries and spacing for your elements.
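The grouping trick described above is essentially the Barnes-Hut approximation. Here is a minimal 1-D sketch of the idea; all names, the domain bounds, and the opening-angle threshold `theta` are illustrative, not from any actual physics engine:

```python
# 1-D Barnes-Hut-style sketch: distant clusters are collapsed into a single
# aggregate body (total mass at the center of mass), cutting the number of
# pairwise interactions. Assumes distinct body positions.

class Node:
    def __init__(self, bodies, lo, hi):
        self.mass = sum(m for _, m in bodies)
        self.com = sum(x * m for x, m in bodies) / self.mass  # center of mass
        self.size = hi - lo
        self.children = []
        if len(bodies) > 1:
            mid = (lo + hi) / 2
            left = [b for b in bodies if b[0] < mid]
            right = [b for b in bodies if b[0] >= mid]
            if left and right:
                self.children = [Node(left, lo, mid), Node(right, mid, hi)]
            elif left:
                self.children = [Node(left, lo, mid)]
            else:
                self.children = [Node(right, mid, hi)]

def force(node, x, theta=0.6):
    """Approximate 1/r^2 attraction on a unit mass at x."""
    d = node.com - x
    if abs(d) < 1e-9:
        return 0.0
    # Far enough away (or a leaf): treat the whole node as one body.
    if not node.children or node.size / abs(d) < theta:
        return node.mass * d / abs(d) ** 3
    return sum(force(c, x) for c in node.children)

bodies = [(0.0, 1.0), (10.0, 1.0), (10.5, 1.0)]  # A far from the B+C pair
root = Node(bodies, 0.0, 11.0)
approx = force(root, 0.0)  # B and C collapse into one interaction with A
exact = sum(m * x / abs(x) ** 3 for x, m in bodies[1:])  # all pairs with A
```

With the pair lumped together the result differs from the exact pairwise sum by well under 1%, which is the "indistinguishable from rounding error" trade the parent describes.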
Re:Stunning! (Score:5, Insightful)
Commenting on the fact that it is unsurprising that someone working on a different technique favors that technique over raytracing is not throwing anything out.
It's not a comment either way on the merits.
Were I to comment on the merits, I would point out that his position is both fairly obviously correct (in that sparse voxel octrees, or something quite like them, are almost beyond question the key to raytracing that's useful for reasonable quality in realtime), and entirely incorrect in his characterization of what everyone else is pushing: he pretends that "everyone" is pushing the most naive, brute-force approach to raytracing, in which you don't use any kind of bounding volume structure and just do intersection tests against triangles. I've seen literally no recommendations that do that: almost all involve some form of bounding volume hierarchy, and sparse voxel octrees are just one instance of that (perhaps a fairly ideal one, and that's great). (Also, raytracing isn't limited to triangles, although most performance comparisons of raytracing to raster-based rendering methods use models constructed from triangles because that allows you to compare same-model performance of the different mechanisms; raytracing engines, however, don't generally need to decompose curved objects into triangle-based approximations to render them in the first place, although doing so can sometimes be more efficient.)
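To make the bounding-volume point concrete, here is a toy sketch (all names and geometry hypothetical) of why a hierarchy beats naive per-triangle testing: one cheap ray/box "slab" test can reject an entire cluster of triangles before a single per-triangle intersection test runs.

```python
# Toy illustration of bounding-volume culling. A single ray/AABB slab test
# decides whether the per-triangle intersection tests inside the box are
# needed at all.

def ray_hits_aabb(origin, direction, lo, hi):
    """Standard slab test: does the ray (origin + t*direction, t >= 0)
    enter the axis-aligned box [lo, hi]?"""
    tmin, tmax = 0.0, float("inf")
    for o, d, l, h in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            # Ray parallel to this slab: must already lie inside it.
            if o < l or o > h:
                return False
        else:
            t1, t2 = (l - o) / d, (h - o) / d
            tmin = max(tmin, min(t1, t2))
            tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

# A cluster of triangles bounded by a box well off to the side of the ray:
box_lo, box_hi = (5.0, 5.0, 5.0), (6.0, 6.0, 6.0)
miss = ray_hits_aabb((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), box_lo, box_hi)
hit = ray_hits_aabb((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), box_lo, box_hi)
# Only when the box test passes would we descend to per-triangle tests.
```

An octree or BVH just applies this test recursively, so whole subtrees of geometry are skipped per ray.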
TFS further misleads by suggesting that Carmack is proposing an alternative to raytracing, when really what he is proposing is a particular approach to raytracing, and, specifically, a particular approach to one well-known problem area in raytracing to which there is currently a whole array of approaches. And his focus on what he wants to get out of raytracing is a little different. But essentially his piece, while it makes some potentially good criticisms of particular aspects of, and arguments for, Intel's specific approach to raytracing, is in accord with (not opposed to) the general idea that raytracing techniques are going to be increasingly important in gaming.
Is that enough "on the merits" for you?
The Most Telling Quote.... (Score:5, Insightful)
Yet, when I play a game, I'll admit, I'm not paying much attention to these faults. The last thing that really bothered me in games was 16-bit color banding, and I haven't seen any of that in, oh, like 3 or 4 years.
The gamer side of me agrees with Carmack: if things look cool, who cares if it's wrong? The geek side of me is angry and demands it be pixel-accurate.
Re:How does it play with Physics? (Score:5, Insightful)
Yea. Because interactivity trumps photorealism for every single possible type of game. Oh wait, that's false.
You sound like the people who said that StarCraft was crap because sprites were outdated junk and every good game (like Total Annihilation) had already moved to 3D. Different engineers will make different design choices for different applications, and there is no total order of correctness among a single class of design choice.
slashdot lameness filter is LAME (Score:-1, Insightful)
Re:So... what this tells me... (Score:5, Insightful)
Meanwhile, the rest of us have been enjoying these articles immensely, because we get some insight into what each of the major players is thinking with regard to Real-Time Raytracing. The great thing about obtaining insight from others is that you can then use it to come to your own conclusions.
If you're simply looking for a consensus from the industry, don't expect one for a long while. The concept won't be entirely accepted until someone goes out there and proves it out. Just like high-quality 3D graphics were thought to be too slow on the PC, until game creators proved out a few different methods to make them usable. (Starting with Wing Commander, moving to Wolf-3D, Doom, Duke 3D, and eventually Quake. After that, the industry said, "Welp, we better make some hardware for this," and thus 3DFX was born.)
Re:There is a great disturbance in the source... (Score:2, Insightful)
I'd agree, with a proviso. (Score:3, Insightful)
If you neglect the impact of mobile objects on diffuse reflections, you CAN pre-generate an entire radiosity map for a game, which is good, because computing radiosity is slow. However, it's an important addition, as the "texture", "warmth" and "naturalness" of an image depend on diffuse reflections, not direct reflections.
Ultimately, you need to consider diffuse reflections for all objects. There are a few ray-tracing techniques which, instead of assuming direct reflection only, define a distribution (usually some variant on Gaussian) over which the light is reflected. This isn't quite the same as cone-tracing - cone-tracing is generally a simplified form of this where the distributions are trivial and uniform. Wave-tracing is another method that can be used.
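The "distribution over reflected directions" idea above can be sketched very simply: instead of one mirror ray, sample many rays whose directions are the mirror direction perturbed by a Gaussian, then average whatever they hit. The roughness value, sample count, and function names below are illustrative assumptions, not any particular renderer's API:

```python
# Minimal distributed-reflection sketch: perturb the mirror direction with
# Gaussian noise and renormalize, yielding a bundle of glossy sample rays.
import math
import random

def reflect(d, n):
    """Mirror reflection of direction d about unit normal n."""
    dot = sum(a * b for a, b in zip(d, n))
    return tuple(a - 2 * dot * b for a, b in zip(d, n))

def glossy_samples(d, n, roughness, count, rng):
    """Sample ray directions distributed around the mirror direction."""
    mirror = reflect(d, n)
    out = []
    for _ in range(count):
        v = tuple(m + rng.gauss(0.0, roughness) for m in mirror)
        length = math.sqrt(sum(c * c for c in v))
        out.append(tuple(c / length for c in v))
    return out

rng = random.Random(42)
# Incoming ray straight down onto an upward-facing surface:
rays = glossy_samples((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), 0.1, 64, rng)
# As roughness -> 0 this degenerates to plain mirror reflection.
```

Tracing each sample and averaging the results gives the blurred, glossy look; a cone tracer would replace the sample bundle with a single widened cone.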
As for what should be done, I'd rather see hardware engineers focus on providing primitives that can support what is needed both now and in the future, as hardware changes relatively slowly. That frees software engineers to develop the best methods they can, without forcing them to wait when they reach the limits of the method.
Re:How does it play with Physics? (Score:2, Insightful)
You're right. Playing in a wasteland of scrapped geometry doesn't sound like much fun. OTOH, turning a perfectly good level into a wasteland of scrapped geometry... now we're talkin'.
Re:So, (Score:5, Insightful)
George Washington is pretty legendary, but we don't have a George Washington's America, do we? The name is irrelevant. How could the guy who basically invented the First Person Shooter not be legendary? When it first came out, the original Wolfenstein was the most highly optimized game I'd ever played. I still remember thinking it wouldn't run on my slow-ass computer, and being blown away when it ran fast as can be.
WRT sparse voxel octree FTFA (Score:2, Insightful)
In all, Carmack hints at massive detail for graphics and not much else. This is something he's always done in the past and has really seemed to obsess over. It's one of his greatest weaknesses as a trend-setter and industry leader. He did it before with id Tech 4, and it let HL2 steal the show with more thoughtful physics integration and character AI. Nicer-looking soft shadows, it seems, weren't enough.
Id is great in that they push the industry forward in terms of graphics, but graphics only go so far. When it comes to realism, people want more nature in 3D. This means better physics, and more intuitive content building tools. Screenshots are great, but after you see it moving, that's when you make your final judgement.
Re:There is a great disturbance in the source... (Score:3, Insightful)
Innovation does not require much dev time, it requires one bright mind to come up with a good idea and many managers that won't mind spending money on an unproven concept.
Re:So, (Score:2, Insightful)
Re:There is a great disturbance in the source... (Score:2, Insightful)
However, straight ray tracing will give you sharp shadows and won't simulate diffuse interactions correctly unless you start casting many bundles of rays around; and that starts to get very slow very quickly. Yes, photon mapping has the use of rays and intersection tests in common with vanilla ray tracing but it goes a lot further in trying to simulate how light works.
Anyway, shadow mapping, per-pixel lighting and clever use of environment maps now give us results pretty much indistinguishable from raytracing for a fraction of the cost, with cycles left over to have a stab at some sort of ambient occlusion and colour bleeding to simulate radiance transfer and soft shadows. Really, with the techniques we're using in realtime graphics now, the only thing I see traditional ray tracing being any good for is reflections, and even then we could just be clever about doing real-time generation of environment maps in the right places and get a result that's virtually identical.
Re:There is a great disturbance in the source... (Score:3, Insightful)
as long as it's got a skateboarding turtle it's sure to be a winner.
Re:There is a great disturbance in the source... (Score:3, Insightful)
Oh, maybe I better get cracking on the GUT.
Raytracing is the opposite of reality... (Score:-1, Insightful)
Raytracing casts rays from the camera out into the scene, whereas real light travels from light sources to the eye. This is the exact opposite of reality, and results in missing information in the scene.
A radiosity algorithm is much closer to reality, shooting rays from each light source out into the environment. But simple radiosity algorithms don't handle mirror surfaces or refraction. So the combination of a radiosity solution first, then raytracing the scene from the camera's viewpoint, gives a much more realistic result than either by itself.
No single rendering method handles all the possible interactions of light and matter in producing an image...yet.
*whoosh* (Score:0, Insightful)
^
^
^
^
It's do-able (Score:3, Insightful)
It's just a bit more work, and would be unnecessary for most "realistic" scenes, which is why raytracers designed to produce pretty pictures usually skip those features.
I see phase-based optical effects fairly rarely out in the real world (as opposed to in a lab), and I suspect most folks would have never even noticed them.
Re:Right... (Score:5, Insightful)
Many people with years of experience still make god-awful mistakes. Experience can only take you so far; it's also why companies stagnate, because people get locked into a certain way of looking at things.
Re:WRT sparse voxel octree FTFA (Score:4, Insightful)
As a search of the literature [acm.org] shows, I was quite late to the game.
One of the most fundamental properties of voxmaps is that the geometry and texture are defined hand-in-hand - they have the same resolution, and every point in the geometry has a unique texture. If you want this, then there are data structures like sparse octrees that store the data quite efficiently.
However, decoupling the geometry and texture opens the door for all sorts of tricks useful in limited-memory situations. It was these tricks that made realtime polygon graphics possible in the past. Things like low-resolution polygons with high-resolution texture maps, tiled/reused texture maps and layered decals are all ways to cut down on the amount of data needed while still creating a decent-looking scene.
However, as the amount of memory increases, these tricks are less necessary and less desirable. Artists want to be able to paint any part of a scene any way they want - and this is exactly what John has done in id Tech 5, their newest engine. After doing so he did some experimentation and found that storing this data in a sparse octree is even more memory efficient than the way he is doing it now, using pixmaps and polygons. If this approach were to work, artists would then have the same freedom in editing the geometry of the world as they do now with textures - the entire world would have geometry at the resolution of current texture maps with zero additional memory costs. That would be awesome.
For this to work, though, you need to be able to render the data efficiently. Raycasting of sparse octrees is one of those embarrassingly parallel problems, so hardware acceleration for it would be relatively easy. But no such hardware exists, due to lack of a market, and unfortunately current graphics cards are not well suited for this, IIRC because GPUs mostly accelerate floating-point calculations, while descending the sparse octree uses a lot of integer bit-twiddling (I might be wrong about the reasons here). But with the memory-usage tradeoffs shifting in favor of voxmaps, GPU vendors looking to make their products better suited for general-purpose High Performance Computing, and John Carmack pushing for it, this may be an idea whose time has come.
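The "integer bit-twiddling" descent mentioned above can be sketched directly: at each level of the octree, one bit from each integer coordinate selects one of the eight children. The structure and names below are a hypothetical illustration, not anything from id Tech 5:

```python
# Sketch of sparse-octree descent by bit-twiddling: only occupied children
# are stored, so empty regions cost nothing and lookups bail out early.

class OctreeNode:
    def __init__(self):
        self.children = {}   # sparse: only occupied child slots exist
        self.value = None    # payload (e.g. color) at a leaf voxel

def child_index(x, y, z, level):
    # One bit from each axis forms a 3-bit child index (0..7).
    return (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)

def insert(root, x, y, z, depth, value):
    """Insert a voxel at integer coords (x, y, z) in a 2^depth grid."""
    node = root
    for level in range(depth - 1, -1, -1):
        node = node.children.setdefault(child_index(x, y, z, level), OctreeNode())
    node.value = value

def lookup(root, x, y, z, depth):
    """Return the stored value, or None if that region of space is empty."""
    node = root
    for level in range(depth - 1, -1, -1):
        node = node.children.get(child_index(x, y, z, level))
        if node is None:
            return None  # absent child == entire subtree is empty space
    return node.value

root = OctreeNode()
insert(root, 5, 3, 7, depth=4, value="red")  # one voxel in a 16^3 grid
```

A raycaster would step a ray through the grid and use the same bit-twiddled descent per step, skipping whole empty subtrees, which is where the sparse structure pays off.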
Re:There is a great disturbance in the source... (Score:3, Insightful)
The photon system in POV-Ray would be a backwards raytracing approach, or forward if you prefer to use the lights as the reference point. The rays are cast from the lights outward towards targets to determine what the effects will be. That data gets stored, then normal raytracing proceeds and uses the photon data to affect the texture where the light falls.
Want list for POV-Ray: BRDF and BSDF and brute force raytracing.
Ultimate raytracer: Spectral light instead of single RGB colors mixed with all of the above.