Carmack Speaks On Ray Tracing, Future id Engines

Vigile writes "As a matter of course, when legendary game programmer John Carmack speaks, the entire industry listens. In a recent interview he comments on a multitude of topics starting with information about Intel, their ray tracing research and upcoming Larrabee GPU. Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action. The 'sparse voxel octree' that Carmack discusses would allow for 'unique geometry down to the equivalent of the texel across everything.' He goes on to discuss other topics like the hardware necessary to efficiently process his new data structure, translation to consoles, multi-GPU PC gaming and even the world of hardware physics."
  • Right... (Score:1, Insightful)

    by n3tcat ( 664243 ) on Wednesday March 12, 2008 @04:12PM (#22732120)
    Lest we overlook the fact that he thought multiplicative lighting was the way to go, rather than dealing with the performance hit of additive lighting in Quake 3. The fastest way is not always the best way, or at least not the only one.
  • Stunning! (Score:1, Insightful)

    by DragonWriter ( 970822 ) on Wednesday March 12, 2008 @04:12PM (#22732124)

    Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action.


    Surprisingly, a developer thinks that the technique he is working on is better than other techniques for addressing the class of problems to which it applies.

    In other news, a substantial quantity of water was discovered in the Pacific Ocean.
  • Re:Stunning! (Score:5, Insightful)

    by Bob of Dole ( 453013 ) on Wednesday March 12, 2008 @04:17PM (#22732196) Journal
    A developer who coded the engines that nearly all PC first-person shooters have run on. That's not enough to accept his word without hesitation, but he obviously knows more about high-performance 3D rendering than a random coder like myself.
  • Re:How about.... (Score:5, Insightful)

    by Broken scope ( 973885 ) on Wednesday March 12, 2008 @04:24PM (#22732276) Homepage
    Because he is more of an engine/renderer designer than a game developer.

    It's his job, and I'm pretty sure his passion, to think about stuff like this.
  • Re:Stunning! (Score:5, Insightful)

    by Falesh ( 1000255 ) on Wednesday March 12, 2008 @04:27PM (#22732302) Homepage
    Why not analyse his argument and judge it on its merits rather than throw it out simply because he is working on an idea of his own?
  • by Anonymous Coward on Wednesday March 12, 2008 @04:38PM (#22732390)
    John Carmack is not a game developer, he's an engine programmer.
  • Re:Right... (Score:5, Insightful)

    by Jerf ( 17166 ) on Wednesday March 12, 2008 @04:51PM (#22732536) Journal
    Yeah!

    Pluses:
    • One of the primary fathers of the FPS genre.
    • Wolfenstein 3D
    • Doom
    • Quake 1
    • Quake 2
    • Quake 3
    • Endless articles and commentary on the field
    • A shitload of stuff I'm forgetting
    Minuses:
    • "Thought multiplicative lighting was the way to go, rather than dealing with the performance hit of additive lighting in Quake 3."
    Conclusion: Carmack sucks!

    I mean, seriously, what's your point? The man's not actually a God so we shouldn't listen to him? Is there somebody more experienced I should prefer to listen to? Is "n3tcat" the handle for somebody with thirty years' experience in first-person shooter engines or something?
  • by rucs_hack ( 784150 ) on Wednesday March 12, 2008 @04:58PM (#22732614)
    surely you mean brown...
  • by ghostlibrary ( 450718 ) on Wednesday March 12, 2008 @04:59PM (#22732626) Homepage Journal
    Having developed octree voxel methods for astrophysics sims (crashing galaxies into one another), I suggest they are ideal for physics. The idea of a tree is that you group things to maintain a certain amount of accuracy. For example, if you have 3 items interacting just with gravity:

    A <-------- long distance --------> B .. C

    A non-tree method would just calculate all the interactions: A-B, A-C, B-C. But you can group B+C together when calculating their interaction with A because, at that distance, the result for (B+C)-A is the same as the result for B-A + C-A. Then the interaction between B & C must be calculated separately. So you've (even in this tiny example) reduced your calculation from 3 to 2.

    And, of course, all the 'voxels' between A & B/C that are empty need not be considered at all. If you'd set it up as an NxNxN voxel cube, you'd be wasting time on calculating empty voxels between the occupied items.

    So if you want realistic interactive environments, sparse voxel octrees are the way to go -- you pump all the calculation time into the parts where it matters, and let the other stuff be 'smoothed' when such smoothing is indistinguishable from rounding error.

    Typically, you can traverse the tree for a given error percentage, e.g. 'walk the tree and do interactions preserving 99% energy conservation' or similar. So you have predictable error as well, despite being able to use arbitrary geometries and spacing for your elements.
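
    For the curious, the grouping test is only a few lines of code. Here's a minimal Barnes-Hut-style sketch (a toy illustration, not code from any actual sim; all names are made up): a cell is treated as a single lump when its size-to-distance ratio drops below an accuracy threshold theta.

        // Toy sketch: accumulate gravity on a point, lumping far-away cells.
        #include <cmath>

        struct Vec3 { double x, y, z; };

        struct Node {
            Vec3   center_of_mass;
            double mass;
            double size;          // edge length of this octree cell
            Node*  children[8];   // null octants are empty (the "sparse" part)
        };

        double dist(const Vec3& a, const Vec3& b) {
            double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
            return std::sqrt(dx * dx + dy * dy + dz * dz);
        }

        // Walk the tree; lump a cell when size/distance < theta
        // (smaller theta = more accuracy, more work).
        void accumulate_gravity(const Node* n, const Vec3& p, double theta,
                                Vec3& accel) {
            if (!n) return;
            double d = dist(n->center_of_mass, p);
            if (d == 0.0) return;                  // crude self-interaction skip
            bool leaf = true;
            for (const Node* c : n->children)
                if (c) { leaf = false; break; }
            if (leaf || n->size / d < theta) {
                double g = n->mass / (d * d * d);  // G omitted for brevity
                accel.x += g * (n->center_of_mass.x - p.x);
                accel.y += g * (n->center_of_mass.y - p.y);
                accel.z += g * (n->center_of_mass.z - p.z);
            } else {
                for (const Node* c : n->children)
                    accumulate_gravity(c, p, theta, accel);
            }
        }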
  • Re:Stunning! (Score:5, Insightful)

    by DragonWriter ( 970822 ) on Wednesday March 12, 2008 @05:04PM (#22732690)

    Why not analyse his argument and judge it on its merits rather than throw it out simply because he is working on an idea of his own?


    Commenting on the fact that it is unsurprising that someone working on a different technique favors that technique over raytracing is not throwing anything out.

    It's not a comment either way on the merits.

    Were I to comment on the merits, I would point out that his position is both fairly obviously correct (in that sparse voxel octrees, or something quite like them, are almost beyond question the key to raytracing that's useful for reasonable quality in realtime), and entirely incorrect in his characterization of what everyone else is pushing: he pretends that "everyone" is pushing the most naive, brute-force approach to raytracing, in which you don't use any kind of bounding volume structure and just do intersection tests against triangles. I've seen literally no recommendations that do that: almost all involve some form of bounding volume hierarchy, and sparse voxel octrees are just one instance of that (perhaps a fairly ideal one, and that's great). (Also, raytracing isn't limited to triangles, although most performance comparisons of raytracing to raster-based rendering methods use models constructed from triangles because it allows you to compare same-model performance of the different mechanisms; raytracing engines, however, don't generally need to decompose curved objects into triangle-based approximations to render them in the first place, although this can sometimes be more efficient.)
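
    To make the bounding-volume point concrete, here's a hedged sketch of the standard ray/box "slab" test that nearly every hierarchy (sparse voxel octrees included) bottoms out on. The struct layout is invented for illustration; real traversals add child ordering and a stack.

        // Hedged sketch: the classic ray/AABB "slab" test used to skip a
        // node (and everything inside it) before any triangle tests run.
        #include <algorithm>

        struct Ray  { float o[3]; float inv_d[3]; };  // origin, 1/direction
        struct AABB { float min[3], max[3]; };

        bool hits_box(const Ray& r, const AABB& b, float t_max) {
            float t0 = 0.0f, t1 = t_max;
            for (int axis = 0; axis < 3; ++axis) {
                float tn = (b.min[axis] - r.o[axis]) * r.inv_d[axis];
                float tf = (b.max[axis] - r.o[axis]) * r.inv_d[axis];
                if (tn > tf) std::swap(tn, tf);
                t0 = std::max(t0, tn);
                t1 = std::min(t1, tf);
                if (t0 > t1) return false;  // slabs don't overlap: miss
            }
            return true;  // hit: descend into children / test triangles
        }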

    TFS further misleads by suggesting that Carmack is proposing an alternative to raytracing, when really what he is proposing is a particular approach to raytracing, and, specifically, a particular approach to one well-known problem area in raytracing for which there is currently a whole array of approaches. And his focus on what he wants to get out of raytracing is a little different. But essentially his piece, while it makes some potentially good criticisms of particular aspects of, and arguments for, Intel's specific approach to raytracing, is in accord with (not opposed to) the general idea that raytracing techniques are going to be increasingly important in gaming.

    Is that enough "on the merits" for you?
  • by Applekid ( 993327 ) on Wednesday March 12, 2008 @05:08PM (#22732732)

    It won't be right, but it will look cool, and that's all that really matters when you're looking at something like that. We are not doing light transport simulation here, we are doing something that is supposed to look good.
    I'm the sort of guy that watches a movie and notices the compression artifacts in the blacks, listens to music and hears mushy cymbals. I walk by a screen and notice dead pixels and that the input source isn't in the LCD's native resolution.

    Yet, when I play a game, I'll admit, I'm not paying close attention to these faults. The last thing that really bothered me in games was 16-bit color banding, and I haven't seen any of that in, oh, like 3 or 4 years.

    The gamer side of me agrees with Carmack that if it looks cool, who cares if it's wrong; the geek side of me is angry and demands it be pixel-accurate.
  • by Chandon Seldon ( 43083 ) on Wednesday March 12, 2008 @05:12PM (#22732776) Homepage

    Yea. Because interactivity trumps photorealism for every single possible type of game. Oh wait, that's false.

    You sound like the people who said that StarCraft was crap because sprites were outdated junk and every good game (like Total Annihilation) had already moved to 3D. Different engineers will make different design choices for different applications, and there is no total order of correctness among a single class of design choice.

  • by Anonymous Coward on Wednesday March 12, 2008 @05:16PM (#22732820)
    I hate the lameness filter also.
  • Wow, these articles have all been so informative. I could not possibly have predicted those outcomes.

    Meanwhile the rest of us have been enjoying these articles immensely, because we get to gain some insight into what each of the major players is thinking with regard to real-time raytracing. The great thing about gaining insight from others is that you can then use it to come to your own conclusions.

    If you're simply looking for a consensus from the industry, don't expect one for a long while. The concept won't be fully accepted until someone goes out there and proves it, just like high-quality 3D graphics were thought to be too slow on the PC until game creators proved out a few different methods to make them usable. (Starting with Wing Commander, moving to Wolf-3D, Doom, Duke 3D, and eventually Quake. After that, the industry said, "Welp, we'd better make some hardware for this," and thus 3DFX was born. ;-))
  • by Brian Gordon ( 987471 ) on Wednesday March 12, 2008 @05:23PM (#22732888)
    Ray tracing is the most realistic possible method for rendering. You can "trace rays" in different ways though; photon mapping is just one technique involved.
  • by jd ( 1658 ) <imipak@yahoGINSBERGo.com minus poet> on Wednesday March 12, 2008 @05:36PM (#22733016) Homepage Journal
    I have argued many times, and will no doubt do so for many years hence, that there won't be one unique solution to this: the quality possible using a full, genuine ray-tracing and radiosity solution (i.e. not just using shaders or other simple rendering techniques) conflicts with the requirements of fast action. The only sensible approach (IMAO) is to use a combination of methods, depending on rate of change. Where rate of change is below a critical threshold, use full-blown ray-tracing (or perhaps cone-tracing) and radiosity. Because rays can be thrown at ever-finer angles, and because you can postpone multiple-generation light sources, you could even have relatively static images become progressively better. When change is too fast for ray-tracing, it's probably too fast for the eye to pick out the nuances, so downgrade the method. For mid-range rates of change, use a reasonably good but fast method of getting "good enough" results. For very high speeds, the eye certainly can't pick out details; shaders would be quite good enough there.
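
    The dispatch itself is trivial; as a sketch of what I mean (all thresholds, names, and the enum below are invented purely for illustration):

        // Pick a renderer per region based on how fast it is changing.
        enum class Method { FullRayTraceWithRadiosity, GoodEnoughApprox, ShadersOnly };

        Method choose_method(float rate_of_change,
                             float slow_threshold = 0.1f,
                             float fast_threshold = 10.0f) {
            if (rate_of_change < slow_threshold)
                return Method::FullRayTraceWithRadiosity; // static: refine progressively
            if (rate_of_change > fast_threshold)
                return Method::ShadersOnly;               // too fast for the eye anyway
            return Method::GoodEnoughApprox;              // mid-range: fast approximation
        }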

    If you neglect the impact of mobile objects on diffuse reflections, you CAN pre-generate an entire radiosity map for a game, which is good because radiosity is slow to compute. However, it's an important addition, as the "texture", "warmth" and "naturalness" of an image depend on diffuse reflections, not direct reflections.

    Ultimately, you need to consider diffuse reflections for all objects. There are a few ray-tracing techniques which, instead of assuming direct reflection only, define a distribution (usually some variant on Gaussian) over which the light is reflected. This isn't quite the same as cone-tracing - cone-tracing is generally a simplified form of this where the distributions are trivial and uniform. Wave-tracing is another method that can be used.

    As for what should be done, I'd rather see hardware engineers focus on providing primitives that can support what is needed both now and in the future, as hardware changes relatively slowly. That frees software engineers to develop the best methods they can, without forcing them to wait when they reach the limits of the method.

  • by SpectreHiro ( 961765 ) on Wednesday March 12, 2008 @05:38PM (#22733024) Homepage

    If you had 32 guys running around shooting rocket launchers at each other, that would pretty much decimate any real-life structure in a matter of minutes. Some things just need limits in a gaming environment. Flying crates and debris are fun, but who wants to play in a wasteland of scrapped geometry?

    You're right. Playing in a wasteland of scrapped geometry doesn't sound like much fun. OTOH, turning a perfectly good level into a wasteland of scrapped geometry... now we're talkin'.
  • Re:So, (Score:5, Insightful)

    by omeomi ( 675045 ) on Wednesday March 12, 2008 @05:41PM (#22733046) Homepage
    Now if John Carmack is as legendary as /. is making him out to be, why isn't it John Carmack's Quake/Doom?

    George Washington is pretty legendary, but we don't have a George Washington's America, do we? The name is irrelevant. How could the guy who basically invented the First Person Shooter not be legendary? When it first came out, the original Wolfenstein was the most highly optimized game I'd ever played. I still remember thinking it wouldn't run on my slow-ass computer, and being blown away when it ran fast as can be.
  • by not-enough-info ( 526586 ) <forwardtodevnull@gmail.com> on Wednesday March 12, 2008 @05:41PM (#22733050) Homepage Journal
    The 'sparse voxel octree' he talks about is basically a new data structure that simplifies storage of scene geometry. It would require support from hardware manufacturers and give content producers a new format in which to encode complex geometry. Carmack theorizes that having geometry detail at the level we now have in texture detail is the next-gen graphics paradigm. Basically, sparse voxel octrees would be to polygon meshes what polygon meshes were to billboarded sprites.
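
    To make that concrete, here's a rough guess at what a node in such a structure might look like. This is purely illustrative; as far as I know, Carmack hasn't published his actual layout. One byte flags which of the 8 children exist, which is what makes empty space free (the 'sparse' part):

        // Guess at a compact node layout; not Carmack's actual format.
        #include <cstdint>

        struct SvoNode {
            uint8_t  child_mask;   // bit i set => octant i is occupied
            uint8_t  color[3];     // per-voxel "texel" data lives in the node
            uint32_t first_child;  // index of first child; siblings contiguous
        };

        // How many occupied octants precede octant i (a child's array offset).
        int child_offset(uint8_t mask, int i) {
            int count = 0;
            for (int b = 0; b < i; ++b)
                if (mask & (1u << b)) ++count;
            return count;
        }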

    In all, Carmack hints at massive detail for graphics and not much else. This is something he's always done in the past and has really seemed to obsess over. It's one of his greatest weaknesses as a trend-setter and industry leader. He did it before with id Tech 4, and it let HL2 steal the show with more thoughtful physics integration and character AI. Nicer-looking soft shadows, it seems, weren't enough.

    Id is great in that they push the industry forward in terms of graphics, but graphics only go so far. When it comes to realism, people want more naturalism in 3D. This means better physics and more intuitive content-building tools. Screenshots are great, but it's only once you see it moving that you make your final judgement.
  • by KDR_11k ( 778916 ) on Wednesday March 12, 2008 @05:56PM (#22733180)
    I feel all that dev time is better spent on innovative game play.

    Innovation does not require much dev time, it requires one bright mind to come up with a good idea and many managers that won't mind spending money on an unproven concept.
  • Re:So, (Score:2, Insightful)

    by GalacticLordXenu ( 1195057 ) on Wednesday March 12, 2008 @06:22PM (#22733436)
    Honest answer? You're a noob. John Carmack is extremely well-known.
  • by luther2.1k ( 61950 ) on Wednesday March 12, 2008 @06:39PM (#22733596) Homepage
    I don't think you're entirely correct - ray tracing, or tracing rays and bouncing them around a scene to see what they hit, is used in most rendering algorithms somewhere or other. Hell, even rasterization uses a kind of ray tracing in environment mapping: you cast a ray from the eye to the pixel being rasterized, then reflect it to look up into a precomputed environment map. And of course, we trace rays all the time for collision detection and various other tasks.
        However, straight ray tracing will give you sharp shadows and won't simulate diffuse interactions correctly unless you start casting many bundles of rays around, and that starts to get very slow very quickly. Yes, photon mapping has the use of rays and intersection tests in common with vanilla ray tracing, but it goes a lot further in trying to simulate how light works.
        Anyway, the results we get now with shadow mapping, per-pixel lighting and clever use of environment maps give us results pretty much indistinguishable from raytracing for a fraction of the cost, with cycles left over to have a stab at some sort of ambient occlusion and colour bleeding to simulate radiance transfer and soft shadows. Really, with the techniques we're using in realtime graphics now, the only thing I see traditional ray tracing being any good for is reflections, and even then we could just be clever about doing real-time generation of environment maps in the right places and get a result that's virtually identical.
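
        (For reference, the environment-map lookup mentioned above boils down to the standard reflection formula r = d - 2(d.n)n. A minimal sketch, with made-up types:)

        // Made-up minimal types; only the formula matters here.
        struct V3 { float x, y, z; };

        float dot(const V3& a, const V3& b) {
            return a.x * b.x + a.y * b.y + a.z * b.z;
        }

        // d: normalized eye-to-surface direction, n: normalized surface normal.
        // The returned vector indexes into the precomputed environment map.
        V3 reflect(const V3& d, const V3& n) {
            float k = 2.0f * dot(d, n);
            return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
        }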
  • by thrillseeker ( 518224 ) on Wednesday March 12, 2008 @07:01PM (#22733830)
    and many managers that won't mind spending money on an unproven concept.

    as long as it's got a skateboarding turtle it's sure to be a winner.
  • by Actually, I do RTFA ( 1058596 ) on Wednesday March 12, 2008 @07:30PM (#22734068)

    You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do this - solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory

    Oh, maybe I better get cracking on the GUT.

  • by Anonymous Coward on Wednesday March 12, 2008 @07:54PM (#22734278)

    But as for realism (but not necessarily practicality, especially in a realtime setting), I think raytracing still, ultimately, wins on all of those.
    The main problem with raytracing is that the rays are traced in the opposite direction to the way light rays pass through a scene in the real world. The basic method of raytracing is to calculate the path of a line drawn from the "camera" out through each pixel in the camera's view. As the line, or ray, intersects objects, new lines are drawn depending on the material properties of the object. This continues until the ray reaches a light source, is found to be blocked from the light sources, or leaves the "world."

    This is the exact opposite of reality, and results in missing information in the scene.

    A radiosity algorithm is much closer to reality, shooting rays from each light source out into the environment. But simple radiosity algorithms don't handle mirror surfaces or refraction. So computing a radiosity solution first, then raytracing the scene from the camera's viewpoint, gives a much more realistic result than either does by itself.

    No single rendering method handles all the possible interactions of light and matter in producing an image...yet.
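
    A hedged outline of that two-pass combination, with a toy patch model standing in for real geometry (everything below is invented for illustration):

        #include <cstddef>
        #include <vector>

        struct Patch { float emitted; float received; };

        // Pass 1 (view-independent): push light from emitting patches to all
        // the others. A single constant form factor stands in for real
        // patch-to-patch visibility and geometry.
        void radiosity_pass(std::vector<Patch>& patches, float form_factor) {
            for (std::size_t i = 0; i < patches.size(); ++i) {
                if (patches[i].emitted <= 0.0f) continue;
                for (std::size_t j = 0; j < patches.size(); ++j)
                    if (j != i)
                        patches[j].received += patches[i].emitted * form_factor;
            }
        }

        // Pass 2 (view-dependent) would then trace rays from the camera as
        // usual, reading `received` off the hit patch for diffuse light
        // instead of recursing, while mirrors and refraction still spawn
        // secondary rays.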
  • *whoosh* (Score:0, Insightful)

    by Anonymous Coward on Wednesday March 12, 2008 @08:46PM (#22734636)
    This article...
    ^
    ^
    ^
    ^ ...my head
  • It's do-able (Score:3, Insightful)

    by mbessey ( 304651 ) on Wednesday March 12, 2008 @09:15PM (#22734852) Homepage Journal
    The raytracing applications used for optical system design can do wavefront analysis, as well as wavelength-based dispersion measurements. Calculating the phase of a wavefront at a surface is basically just a distance measurement (taking into account refraction).
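
    The phase computation really is that simple: optical path length is geometric distance times refractive index, and phase is 2*pi times optical path over wavelength. A trivial sketch:

        #include <cmath>

        // Phase at a surface: 2*pi * (distance * refractive index) / wavelength.
        double phase_at_surface(double geometric_distance,
                                double refractive_index,
                                double wavelength) {
            const double pi = 3.14159265358979323846;
            double optical_path = geometric_distance * refractive_index;
            return std::fmod(2.0 * pi * optical_path / wavelength, 2.0 * pi);
        }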

    It's just a bit more work, and would be unnecessary for most "realistic" scenes, which is why raytracers designed to produce pretty pictures usually skip those features.

    I see phase-based optical effects fairly rarely out in the real world (as opposed to in a lab), and I suspect most folks would have never even noticed them.
  • Re:Right... (Score:5, Insightful)

    by blahplusplus ( 757119 ) on Wednesday March 12, 2008 @10:11PM (#22735164)
    "I mean, seriously, what's your point? The man's not actually a God so we shouldn't listen to him? Is there somebody more experienced I should prefer to listen to? Is "n3tcat" the handle for somebody with thirty years experience in first-person shooter engines or something?"

    Many people with years of experience still make god-awful mistakes. Experience can only take you so far; it is also the reason companies stagnate, because people get locked into a certain way of looking at things.
  • by pavon ( 30274 ) on Thursday March 13, 2008 @12:36AM (#22736086)
    As John mentioned in his post here, these are not new ideas. I remember playing around with raytracing/casting of sparse-octree voxels for fun almost ten years ago, and as a quick search of the literature [acm.org] shows, I was quite late to the game :) What is cool is that he thinks the gaming world is ready for them, and that he is going to try to push the hardware folks to make it happen.

    One of the most fundamental properties of voxmaps is that the geometry and texture are defined hand-in-hand - they have the same resolution, and every point in the geometry has a unique texture. If you want this, then there are data structures like sparse octrees that store the data quite efficiently.

    However, decoupling the geometry and texture opens the door for all sorts of tricks useful in limited-memory situations. It was these tricks that made realtime polygon graphics possible in the past. Things like low-resolution polygons with high-resolution texture maps, tiled/reused texture maps, and layered decals are all ways to cut down on the amount of data needed while still creating a decent-looking scene.

    However, as the amount of memory increases, these tricks are less necessary and less desirable. Artists want to be able to paint any part of a scene any way they want - and this is exactly what John has done in id Tech 5, their newest engine. After doing so he did some experimentation and found that storing this data in a sparse octree is even more memory efficient than the way he is doing it now, using pixmaps and polygons. If this approach were to work, artists would then have the same freedom in editing the geometry of the world as they do now with textures - the entire world would have geometry at the resolution of current texture maps with zero additional memory costs. That would be awesome.

    For this to work, though, you need to be able to render the data efficiently. Raycasting of sparse octrees is one of those embarrassingly parallel problems, so hardware acceleration for it is relatively easy, but such hardware doesn't exist due to lack of a market. Unfortunately, graphics cards are not well suited for this either, IIRC because GPUs mostly accelerate floating-point calculations, while descending the sparse octree uses a lot of integer bit-twiddling (I might be wrong about the reasons here). But with the memory-usage tradeoffs shifting in favor of voxmaps, GPU vendors looking to make their products better suited for general-purpose high-performance computing, and John Carmack pushing for it, this may be an idea whose time has come.
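
    For anyone wondering what that integer bit-twiddling looks like: each level of the descent picks one of 8 children by comparing the query point against the cell's midpoint, one bit per axis. A hedged, illustrative sketch, not code from any real renderer:

        // Select the octant of a cell containing point (x, y, z).
        struct Cell { float min_x, min_y, min_z, size; };

        int child_index(const Cell& c, float x, float y, float z) {
            float half = c.size * 0.5f;
            int idx = 0;
            if (x >= c.min_x + half) idx |= 1;   // bit 0: +x half
            if (y >= c.min_y + half) idx |= 2;   // bit 1: +y half
            if (z >= c.min_z + half) idx |= 4;   // bit 2: +z half
            return idx;                          // 0..7 selects the octant
        }

        // Descending one level shrinks the cell to the chosen octant.
        Cell descend(const Cell& c, int idx) {
            float half = c.size * 0.5f;
            return { c.min_x + ((idx & 1) ? half : 0.0f),
                     c.min_y + ((idx & 2) ? half : 0.0f),
                     c.min_z + ((idx & 4) ? half : 0.0f),
                     half };
        }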
  • by muridae ( 966931 ) on Thursday March 13, 2008 @01:07AM (#22736210)
    POV-Ray's radiosity is still ray-tracing; it just sends more rays out from a point to determine the light reaching that point. From the description in the POV-Ray docs, it sounds more like a modified brute-force rendering approach: while brute force just picks a relatively random direction and blends the result from each pass, POV determines whether it needs to create new radiosity data for that point and, if it does, generates several rays.

    The photon system in POV-Ray would be a backwards raytracing approach, or forward if you prefer to use the lights as the reference point. Rays are cast from the lights outward towards targets to determine what the effects will be. That data gets stored; then normal raytracing proceeds and uses the photon data to affect the texture where the light falls.
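
    The "send more rays out from a point" gathering step looks roughly like this. A generic, hedged sketch -- a crude uniform hemisphere average, not POV-Ray's actual importance-driven scheme:

        #include <cstdlib>
        #include <cmath>

        struct Dir { float x, y, z; };

        // Stub standing in for a real recursive trace; constant "sky" light.
        float trace_light(const Dir&) { return 1.0f; }

        float rand01() { return std::rand() / (float)RAND_MAX; }

        // Crude average of incoming light over n uniform hemisphere
        // directions, ignoring the cosine weighting a real estimator adds.
        float gather(int n) {
            float sum = 0.0f;
            for (int i = 0; i < n; ++i) {
                float z   = rand01();               // uniform in z covers the hemisphere
                float phi = 6.2831853f * rand01();  // 2*pi
                float r   = std::sqrt(1.0f - z * z);
                sum += trace_light(Dir{ r * std::cos(phi), r * std::sin(phi), z });
            }
            return sum / n;
        }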

    Want list for POV-Ray: BRDF and BSDF and brute force raytracing.
    Ultimate raytracer: Spectral light instead of single RGB colors mixed with all of the above.
