Carmack Speaks On Ray Tracing, Future id Engines

Vigile writes "As a rule, when legendary game programmer John Carmack speaks, the entire industry listens. In a recent interview he comments on a multitude of topics, starting with Intel, its ray tracing research, and its upcoming Larrabee GPU. Carmack seems to think that Intel's direction using traditional ray tracing methods is not going to work, and instead theorizes that using ray casting to traverse a new data structure he is developing is the best course of action. The 'sparse voxel octree' that Carmack discusses would allow for 'unique geometry down to the equivalent of the texel across everything.' He goes on to discuss other topics like the hardware necessary to efficiently process his new data structure, translation to consoles, multi-GPU PC gaming, and even the world of hardware physics."
  • by Ferzerp ( 83619 ) on Wednesday March 12, 2008 @04:07PM (#22732056)
    It's as if hundreds of ray tracing fanboys cried out at once, and were silenced.
    • by AmaDaden ( 794446 ) on Wednesday March 12, 2008 @04:09PM (#22732082)
      Ray Tracing has a place. New high speed FPS games are not it.
      • Re: (Score:2, Informative)

        It's the most realistic possible way of rendering, so when computers get fast enough we'll be able to do everything with ray tracing. But effects beyond simple polygon rendering and water refraction are extremely difficult in ray tracing: not necessarily to program, but to simulate in real time.
        • by luther2.1k ( 61950 ) on Wednesday March 12, 2008 @04:59PM (#22732634) Homepage
          Bog-standard ray tracing, which is what Intel is harping on about at the moment, isn't the be-all and end-all of global illumination algorithms, as many people who get all misty-eyed about the technique would have you believe. It's terrible for diffuse interactions, for one thing. Photon mapping [wikipedia.org] is a more realistic technique which simulates light more accurately.

          Tim.
          • Re: (Score:2, Insightful)

            Ray tracing is the most realistic possible method for rendering. You can "trace rays" in different ways though; photon mapping is just one technique involved.
            • by Goaway ( 82658 ) on Wednesday March 12, 2008 @05:40PM (#22733042) Homepage
              "Ray tracing" traditionally means specifically tracing rays from the eye out into the scene. Other methods are usually referred to by different names.

              And even so, while tracing either photons or eye rays may be the most feasible method at the moment, it is by no means the only way to solve the rendering equation, nor any kind of theoretical best.
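
              For anyone fuzzy on what "tracing rays from the eye out into the scene" means in practice, here is a toy Python sketch (all names invented for illustration; one primary ray per pixel against a single sphere, no shading, bounces, or acceleration structures):

                import math

                def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
                def dot(a, b): return sum(a[i] * b[i] for i in range(3))

                def hit_sphere(o, d, c, r):
                    # smallest positive t with |o + t*d - c| = r, or None on a miss
                    oc = sub(o, c)
                    b = 2.0 * dot(oc, d)
                    disc = b * b - 4.0 * dot(d, d) * (dot(oc, oc) - r * r)
                    if disc < 0:
                        return None
                    t = (-b - math.sqrt(disc)) / (2.0 * dot(d, d))
                    return t if t > 0 else None

                eye = (0.0, 0.0, 0.0)
                for y in range(3):                        # a tiny 3x3 "screen"
                    row = ""
                    for x in range(3):
                        ray = (x - 1.0, y - 1.0, 1.0)     # eye ray through pixel (x, y)
                        row += "#" if hit_sphere(eye, ray, (0.0, 0.0, 5.0), 1.0) else "."
                    print(row)                            # prints: ...  .#.  ...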
              • by Zeussy ( 868062 )
                The other thing Carmack was rambling on about was a voxel-based system. If you haven't seen it already, it's worth a quick peek: Voxlap Cave Demo [advsys.net]. I haven't RTFA, but from the summary it sounds like he is making a system similar to this, but using an octree to allow for LoD.
                • by kb ( 43460 ) on Wednesday March 12, 2008 @07:46PM (#22734208) Homepage Journal
                  As far as I've understood it, he isn't exactly using the octree for LOD but for storing all voxel data in a sparse (there we have it ;) manner. If you only have one "layer" of voxels at whatever resolution, defining e.g. only the surface of things, most nodes of the octree will remain empty, and so you can reduce the data set to storing only what's necessary for rendering instead of having to store a full-resolution 3D representation of your space.

                  Of course this leans happily towards a LOD system, as storing the data in different resolutions, aka mip-mapping the geometry, and then accessing the right detail level would essentially be free if you do it right.

                  In the end it's a promising approach with, of course, many details to be sorted out - there's still a lot of data to be stored per voxel (texture coordinates, normals, basically everything that now goes into a vertex) if you want the full feature set, e.g. lighting and such. But given dedicated hardware, and combined with his megatexture stuff (which is basically only glorified unique UV mapping, perhaps somewhat generalized and with a good resource management system behind it), this could be pretty cool.
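
                  To make the "sparse storage plus free mip levels" idea concrete, a toy Python sketch (the layout and names are my own guesses, not id's actual format): only the path to an occupied voxel allocates nodes, and every interior node keeps an averaged colour, so coarser tree levels double as geometry mip levels.

                    class SVONode:
                        # sparse voxel octree node: children are None where space is
                        # empty; each interior node keeps an averaged colour, so the
                        # coarser levels of the tree act as geometry mip levels
                        __slots__ = ("children", "color")
                        def __init__(self):
                            self.children = [None] * 8
                            self.color = None

                    def insert(node, x, y, z, level, color):
                        # only the path down to an occupied voxel allocates nodes
                        if level == 0:
                            node.color = color
                            return
                        b = 1 << (level - 1)
                        i = (1 if x & b else 0) | (2 if y & b else 0) | (4 if z & b else 0)
                        if node.children[i] is None:
                            node.children[i] = SVONode()
                        insert(node.children[i], x, y, z, level - 1, color)
                        kids = [c.color for c in node.children if c is not None]
                        node.color = tuple(sum(ch) / len(kids) for ch in zip(*kids))

                    root = SVONode()
                    insert(root, 5, 0, 2, 3, (255, 0, 0))  # one red voxel in an 8x8x8 volume
                    insert(root, 5, 1, 2, 3, (0, 0, 255))  # a blue neighbour; the rest stays empty
                    print(root.color)  # averaged coarse-LOD colour for the whole volume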
            • Re: (Score:2, Insightful)

              by luther2.1k ( 61950 )
              I don't think you're entirely correct - ray tracing, or tracing rays and bouncing them around a scene to see what they hit, is used in most rendering algorithms somewhere or other. Hell, even rasterization uses a kind of ray tracing in environment mapping: you cast a ray from the eye to the pixel being rasterized and then reflect it to look up into a precomputed environment map. And of course, we trace rays all the time for collision detection and various other tasks.
              However, straight ray
        • by Doogie5526 ( 737968 ) on Wednesday March 12, 2008 @05:27PM (#22732944) Homepage

          It's the most realistic possible way of rendering
          It only provides realistic rendering of reflections, refractions, and shadows. There are still many more properties of light that take different, also intensive, algorithms to reproduce, like color bleeding, caustics, sub-surface scattering, and depth of field.

          I'm sure there's some market for these things, but there's so much more involved even after these algorithms are implemented. Now you have to add settings (or additional texture maps) for each object (or light). As soon as you add something with live reflections, you can't even throw away what's not on screen (or facing away from the camera), so your memory requirements jump just because of that. Many things have to fall into place before these technologies are adopted widely. A lot of these algorithms have been around for over 25 years and are only now seeing wide adoption in feature films (most people would be surprised at how much is faked, even today).

          I hope there's a class of games that doesn't use these things, or takes one or two of them and uses them in innovative ways. While I like the WW2 (or futuristic) FPS games, I feel all that dev time is better spent on innovative gameplay.

          Sorry that the brief reply I planned turned into a rant.
          • Re: (Score:3, Insightful)

            by KDR_11k ( 778916 )
            I feel all that dev time is better spent on innovative gameplay.

            Innovation does not require much dev time; it requires one bright mind to come up with a good idea and many managers who won't mind spending money on an unproven concept.
            • Re: (Score:3, Insightful)

              and many managers who won't mind spending money on an unproven concept.

              As long as it's got a skateboarding turtle, it's sure to be a winner.
          • by DragonWriter ( 970822 ) on Wednesday March 12, 2008 @06:39PM (#22733602)

            It only provides realistic rendering of reflections, refractions, and shadows.


            Everything light does is a combination of reflections and refractions (shadows are an artifact of those).

            So, yeah, what you are in effect saying is that raytracing only provides realistic rendering of things that light actually does.

            There are still many more properties of light that take different, also intensive, algorithms to reproduce, like color bleeding, caustics, sub-surface scattering, and depth of field.


            Color bleeding and caustics are effects of reflection, subsurface scattering is reflection and refraction, and depth of field is refraction (through a lens between the viewpoint and the image). Now, it's true that there are shortcuts that provide tolerable approximations of those effects faster than actually tracing rays in most cases, and that even static raytracers often prefer those to what would be necessary to do those effects through raytracing alone. It's also true that some real effects, to do well with raytracing, would require shooting separate rays for different wavelengths of light, which, while conceptually possible (and I think some very specialized systems have been made which do this), is probably utterly impractical for realtime systems for the foreseeable future (this is a much bigger load increase than anti-aliasing would be).

            But as for realism (if not necessarily practicality, especially in a realtime setting), I think raytracing still, ultimately, wins on all of those.

            • by ultranova ( 717540 ) on Wednesday March 12, 2008 @07:12PM (#22733918)

              Everything light does is a combination of reflections and refractions (shadows are an artifact of those).

              Except the double-slit experiment. It's based on the fact that light has wavefront qualities, while ray tracing treats it as particles.

              I also strongly doubt that the discrete-ray approach will ever produce very good global illumination, since the number of rays bouncing between surfaces quickly grows toward infinity as the desired accuracy grows.

              You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do this - solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory ?-)

              • Re: (Score:3, Insightful)

                You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do this - solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory

                Oh, maybe I better get cracking on the GUT.

              • by DragonWriter ( 970822 ) on Wednesday March 12, 2008 @07:35PM (#22734108)

                Except the double-slit experiment. It's based on the fact that light has wavefront qualities, while ray tracing treats it as particles.


                Good point.

                I also strongly doubt that the discrete-ray approach will ever produce very good global illumination, since the number of rays bouncing between surfaces quickly grows toward infinity as the desired accuracy grows.


                Well, yeah, the "raytracing is ideally photorealistic" argument does rely (even ignoring the wave effects that raytracing misses), essentially, on unlimited processing power and memory, and isn't necessarily applicable in any particular practical domain. That's what my earlier point was based on: shortcut techniques that directly model particular phenomena instead of tracing all the necessary rays are a feature of even static raytracing packages, and many of the idealized advantages of raytracing are not realized in practice.

                You'd need to do "wavefront tracing" to fix these, and I for one have no idea how to do this - solve the quantum field equations for each particle in the scene after inventing the Grand Unified Theory ?-)


                That sounds about right, probably using a quantum computer "graphics card" (QGPU?).
              • It's do-able (Score:3, Insightful)

                by mbessey ( 304651 )
                The raytracing applications used for optical system design can do wavefront analysis, as well as wavelength-based dispersion measurements. Calculating the phase of a wavefront at a surface is basically just a distance measurement (taking into account refraction).

                It's just a bit more work, and would be unnecessary for most "realistic" scenes, which is why raytracers designed to produce pretty pictures usually skip those features.

                I see phase-based optical effects fairly rarely out in the real world (as oppose
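
                To put a number on the "phase is basically a distance measurement" point above, a minimal Python sketch under the same assumptions (one refractive index per ray segment; the segment values and names are invented):

                  import math

                  def phase(segments, wavelength):
                      # segments: (geometric_length, refractive_index) pairs along a ray;
                      # optical path length = sum(n * d), phase = 2*pi * OPL / lambda
                      opl = sum(n * d for d, n in segments)
                      return (2 * math.pi * opl / wavelength) % (2 * math.pi)

                  # e.g. 1 m of air then 10 mm of glass, for green light (532 nm)
                  print(phase([(1.0, 1.0003), (0.010, 1.52)], 532e-9))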
              • Re: (Score:3, Interesting)

                by Fred_A ( 10934 )

                Except the double-slit experiment. It's based on the fact that light has wavefront qualities, while ray tracing treats it as particles.
                There's also the problem of raytracing moving objects, especially when they move at a significant fraction of c.

                No, raytracing has way too many drawbacks to be used seriously nowadays.
            • by Swampash ( 1131503 ) on Wednesday March 12, 2008 @09:38PM (#22734970)
              depth of field is refraction (through a lens between the viewpoint and the image)

              No it's not. Depth of field is a function of aperture, and has nothing to do with either lenses or refraction.
        • by flyingsquid ( 813711 ) on Wednesday March 12, 2008 @06:05PM (#22733244)
          It's the most realistic possible way of rendering, so when computers get fast enough we'll be able to do everything with ray tracing.

          When that happens, will it also become possible to wield a flashlight and a shotgun at the same time? Or is there some kind of fundamental law against that, like how you can't know the position and velocity of a particle at the same time?

        • Re: (Score:3, Informative)

          by ultranova ( 717540 )

          It's the most realistic possible way of rendering,

          No. For starters, the rays are "sparse"; that is, there is space between two parallel rays which goes unexplored. Furthermore, each ray either hits or doesn't hit any given object, leading to aliasing (jagged edges). A much better solution would be tracing a cone from each pixel, with no space between them; however, the mathematics of calculating reflections when the cone hits something would be horrible.

          Another problem is with global illumination. Nor

  • by davidwr ( 791652 ) on Wednesday March 12, 2008 @04:10PM (#22732096) Homepage Journal
    This just came through the time vortex, dated 3 weeks from yesterday:

    Voxel? Texel? Just make my pink pony look nice and pretty.
  • by Zymergy ( 803632 ) * on Wednesday March 12, 2008 @04:16PM (#22732182)
    ...*MORE SHADES OF BLACK*??!!
  • Doom 3 was horribly boring, despite looking pretty.

    Why not go back sometime and do Quake the way it was meant to be done? I'd love to see those Lovecraftian references fleshed out properly, along with gameplay mechanics that aren't just a "running short on development time, do it like DooM!" type of scenario.

    RAGE might very well be great, if it does truly end up being the open-world game version of Mad Max. After what happened with Doom 3 and Quake IV (not quite Id's fault) however, I'm going to have to

    • by rucs_hack ( 784150 ) on Wednesday March 12, 2008 @05:04PM (#22732688)
      Redo quake? Why?

      I guess you haven't seen ezquake (best obtained in the nquake package). A faster, more frantic FPS game you will not find. It's also been substantially improved graphically, so there are many more colours about the place.

      Any reworking of Quake 1 would be badly received by the old school (by this I mean most people who still play it). New people wouldn't see the need, since most of them have grown up on a different FPS style. Why revisit Quake 1?

      Personally I love the game, and play it often. If I have one criticism, it's that most of the servers are populated by people who are so good at the game, after playing for so long, that just living long enough to pick up a gun and kill someone can be a real challenge.

    • DOOM 3 was really a demo of their new game engine. I don't see anything wrong with building a game platform and then selling it to other companies who want to do creative work.
    • by GreggBz ( 777373 )

      I'm sick of the "modern" FPS. HL2, Bioshock: never finished 'em. I liked Doom3 better, but I won't rave about it.

      It just seems these days you have incredible graphics and the inevitable linear progression up to "the Boss."

      Further, things really never get any more challenging. I never feel like I need to take my gameplaying skill and reflexes to the next level, as I have grown up doing with simpler games. I'm just shooting more zombies.

      I don't get any sense of immersion. Maybe they throw in some token po

  • by Telvin_3d ( 855514 ) on Wednesday March 12, 2008 @04:23PM (#22732254)
    My biggest concern with where he is going with this is that it does not sound like it will play very nicely with physics. On page two he makes some comments on how characters and other animated elements will still likely be done with more traditional methods and then mixed with this for static objects like the world.

    The problem with this is that we are moving more and more towards interactive environments where everything from the ground to the flowerpots is breakable, bendable, or movable. It doesn't sound like this new system will play very nicely with physics-intensive or highly interactive environments. Now, I could be completely wrong. He doesn't address the point directly. But it is still a point for concern.
    • Re: (Score:3, Interesting)

      by Speare ( 84249 )

      The problem with this is that we are moving more and more towards interactive environments where everything from the ground to the flowerpots is breakable, bendable, or movable. It doesn't sound like this new system will play very nicely with physics-intensive or highly interactive environments. Now, I could be completely wrong. He doesn't address the point directly. But it is still a point for concern.

      I agree completely. When Carmack can implement even a low-polygon all-things-dynamic wonder like Katamari Damashii using his quartile duplex hectotree algorithm, I'll be impressed. The time of precompiling 99% of the game into a static optimized traversal graph is over. Now you've got a bag of loose models (many of which morph), the sum of which fills the screen.

      • by Chandon Seldon ( 43083 ) on Wednesday March 12, 2008 @05:12PM (#22732776) Homepage

        Yeah. Because interactivity trumps photorealism for every single possible type of game. Oh wait, that's false.

        You sound like the people who said that StarCraft was crap because sprites were outdated junk and every good game (like Total Annihilation) had already moved to 3D. Different engineers will make different design choices for different applications, and there is no total order of correctness among a single class of design choice.

    • by ghostlibrary ( 450718 ) on Wednesday March 12, 2008 @04:59PM (#22732626) Homepage Journal
      Having developed octree voxel methods for astrophysics sims (crashing galaxies into one another), I suggest they are ideal for physics. The idea of a tree is that you group things to maintain a certain amount of accuracy. For example, if you have 3 items interacting just with gravity:

      A <-------- long distance --------> B .. C

      A non-tree method would just calculate all the interactions: A-B, A-C, B-C. But you can group B+C together when calculating their interaction with A because, at that distance, the result for (B+C)-A is the same as the result for B-A + C-A. Then the interaction between B & C must be calculated separately. So you've (even in this tiny example) reduced your calculation from 3 to 2.

      And, of course, all the 'voxels' between A & B/C that are empty need not be considered at all. If you'd set it up as an NxNxN voxel cube, you'd be wasting time on calculating empty voxels between the occupied items.

      So if you want realistic interactive environments, sparse voxel octrees are the way to go -- you pump all the calculation time into the parts where it matters, and let the other stuff be 'smoothed' when such smoothing is indistinguishable from rounding error.

      Typically, you can traverse the tree for a given error percentage, e.g. 'walk the tree and do interactions preserving 99% energy conservation' or similar. So you have predictable error as well, despite being able to use arbitrary geometries and spacing for your elements.
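
      A toy Python illustration of the grouping trick described above (one-dimensional, G = 1, all names invented): once B and C are far from A, the grouped pseudo-particle matches the pairwise sum almost exactly.

        def accel_exact(x_target, sources):
            # sum each source's m/r^2 pull on the target individually
            total = 0.0
            for x, m in sources:
                dx = x - x_target
                total += m * dx / abs(dx) ** 3
            return total

        def accel_grouped(x_target, sources):
            # treat the sources as one pseudo-particle at their centre of mass
            mtot = sum(m for _, m in sources)
            com = sum(x * m for x, m in sources) / mtot
            dx = com - x_target
            return mtot * dx / abs(dx) ** 3

        bc = [(100.0, 1.0), (101.0, 1.0)]   # B and C: close together, far from A
        print(accel_exact(0.0, bc))         # ~0.00019803
        print(accel_grouped(0.0, bc))       # ~0.00019802 -- nearly identical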
      • Wait, you mean next gen games will let us crash galaxies into one another? Sign me up :p
      • This could be used in a whole host of areas where object-object interactions are important, but testing for those interactions can be very expensive (e.g., molecular dynamics, agent-based biology modeling). One method we've used to solve these issues is interaction potentials (the object-object interactions happen through gradients of the potentials, which can be scaled linearly with the number of objects if cleverly constructed). However, I'm intrigued at using these data structures as an alternative approach

        • by ghostlibrary ( 450718 ) on Thursday March 13, 2008 @07:54AM (#22737552) Homepage Journal
          > Do you have any publications you can point me towards?

          Hmm... well, I write here under a pseudonym so it's hard to look up my work. But you can look up 'TreeSPH' on Google for some good references to lots of astrophysical implementations. The 'Tree' part is obviously the voxel octrees, while the 'SPH' means they added hydrodynamics to it by making 'blobs' that have a variable kernel size for smoothing over neighbors.

          Which basically means, for hydrodynamics, if it's uniform density you can use a single large 'blob' to represent it, while in an area where the density is rapidly changing you go to smaller 'blobs' because you need more computation. You then use a kernel function, which basically determines how much you smooth over neighbors to get a good distribution. With this, you spend all your hydrodynamic computation time on the rapidly changing, shocky, or active stuff. So it's another example of how to decompose a problem the way Carmack seems to be suggesting.
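
          To make the 'blob' idea concrete, a small Python sketch (this is the standard cubic-spline kernel from the SPH literature; the particle layout and names are mine): density at a point is just neighbor masses weighted by a kernel whose width h is the per-particle 'blob' size.

            import math

            def w_cubic(r, h):
                # 3D cubic-spline SPH kernel; h is the smoothing length
                q = r / h
                sigma = 1.0 / (math.pi * h ** 3)
                if q < 1.0:
                    return sigma * (1.0 - 1.5 * q ** 2 + 0.75 * q ** 3)
                if q < 2.0:
                    return sigma * 0.25 * (2.0 - q) ** 3
                return 0.0

            def density(x, particles):
                # particles: (position, mass, smoothing_length) triples;
                # dense regions get small h, sparse regions large h
                return sum(m * w_cubic(abs(x - xi), h) for xi, m, h in particles)

            particles = [(0.0, 1.0, 0.5), (0.3, 1.0, 0.5), (10.0, 1.0, 4.0)]
            print(density(0.1, particles))   # dominated by the two nearby blobs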

          Funny thing is, in astrophysics this stuff came out in the late 80s/early 90s, and astrophysics usually lags behind physics by half a decade, which lags behind pure math by a decade. I think the challenge in getting it into gaming is converting codes intended for massive cluster/supercomputer calculations into the realm of fast PC gaming.

          Tree codes are already heavily used in computer gaming (browsing through 'Computer Developer' magazine shows they are used for dynamic line-of-sight work a lot), so none of what Carmack suggests is cutting-edge comp sci in theory. In fact, he used binary space partitioning in Doom, which is in the same field. Much as with Doom etc., the key is whether he can come up with a fast implementation (or even a fast approximation). I think that's his real talent: taking existing methods and concepts and figuring out a 'real world' implementation that's fast enough for gaming. He's a programming engineer of no small talent.
    • There is always going to be static geometry in a level. You don't want the entire thing to be deformable/movable/destructible, especially in a multiplayer game. The end result would just be a big old bowl of slop, with everything interesting about your environment pulverized to rubble. If you had 32 guys running around shooting rocket launchers at each other, that would pretty much decimate any real-life structure in a matter of minutes. Some things just need limits in a gaming environment. Flying c
      • Re: (Score:2, Insightful)

        by SpectreHiro ( 961765 )

        If you had 32 guys running around shooting rocket launchers at each other, that would pretty much decimate any real-life structure in a matter of minutes. Some things just need limits in a gaming environment. Flying crates and debris is fun, but who wants to play in a wasteland of scrapped geometry?

        You're right. Playing in a wasteland of scrapped geometry doesn't sound like much fun. OTOH, turning a perfectly good level into a wasteland of scrapped geometry... now we're talkin'.

    • by jd ( 1658 )
      I have argued many times, and will no doubt do so for many years hence, that there won't be one unique solution to this: the quality possible using a full, genuine ray-tracing and radiosity solution (i.e. not just using shaders or other simple rendering techniques) conflicts with the requirements of fast action. The only sensible approach (IMAO) is to use a combination of methods, depending on rate of change. Where rate of change is below a critical threshold, then use full-blown ray-tracing (or perhaps cone-trac
  • So, let me get this straight: Intel says their baby is the best. Nvidia says theirs is the best. And Carmack is making his own, and saying his is the best. Wow, these articles have all been so informative. I could not possibly have predicted those outcomes.
    • by AKAImBatman ( 238306 ) <akaimbatman AT gmail DOT com> on Wednesday March 12, 2008 @05:22PM (#22732874) Homepage Journal

      Wow, these articles have all been so informative. I could not possibly have predicted those outcomes.

      Meanwhile the rest of us have been enjoying these articles immensely, because we get to obtain some insight into what each of the major players is thinking with regard to real-time raytracing. The great thing about obtaining insight from others is that you can then use your newfound insight to come to your own conclusions.

      If you're simply looking for a consensus from the industry, don't expect one for a long while. The concept won't entirely be accepted until someone goes out there and proves it out. Just like high-quality 3D graphics were thought to be too slow on the PC. Until game creators proved out a few different methods to make them usable, that is. (Starting with Wing Commander, moving to Wolf-3D, Doom, Duke 3D, and eventually Quake. After that, the industry said, "Welp, we better make some hardware for this." And thus 3DFX was born. ;-))
  • Whenever anyone decries a method as "not going that direction", I always remember some famous physicist in the early 1900s declaring that all of physics had been invented and there would never be anything new from that point on. There is always the chance that as chips just get faster and gain more cores, especially with interval raytracing [sunfishstudio.com], at least a few games will go that way. If those games are popular enough, then like Wolfenstein 3D and Doom did for raycasting with textures, a whole new
    • Re: (Score:3, Informative)

      by Creepy ( 93888 )
      Carmack is not saying the industry won't go to ray tracing, but rather that the industry won't abandon rasterization because each has strengths and weaknesses.

      He believes the same thing I do - a hybrid approach is most likely, at least in the short term. A sparse voxel octree (a voxel is a 3D pixel, and an octree is a hierarchical 3D structure to hold the voxels - they are sparse because most are empty [hold air]) would work well for ray tracing because it sounds like you'd need to cast rays to find the vo
  • by goatpunch ( 668594 ) on Wednesday March 12, 2008 @05:01PM (#22732654)
    In other news, all games will now consist of reflective spheres moving around on checkerboards...
  • by Applekid ( 993327 ) on Wednesday March 12, 2008 @05:08PM (#22732732)

    It won't be right, but it will look cool, and that's all that really matters when you're looking at something like that. We are not doing light transport simulation here, we are doing something that is supposed to look good.
    I'm the sort of guy that watches a movie and notices the compression artifacts in black, listens to music and hears mushy cymbals. I walk by a screen and notice dead pixels and that the input source isn't in the LCD's native resolution.

    Yet, when I play a game, I'll admit, I'm not paying attention to these faults. The last thing that really bothered me in games was 16-bit color banding, and I haven't seen any of that in, oh, like 3 or 4 years.

    The gamer side of me agrees with Carmack that if things look cool, who cares if it's wrong; the geek side of me is angry and demands it be pixel-accurate.
    • He's not saying it will never be possible. What that quote was referring to is that it shouldn't be the primary focus of the coming generations of graphics processing. With real limits to computing power, you have to choose where to spend the resources. He wants them spent on near-infinite geometric and texture detail, not near-infinite light tracing. That's my take on his answer, given my limited development experience and zero graphics experience.
  • The direction that everybody is looking at for next generation, both console and eventual graphics card stuff, is a "sea of processors" model, typified by Larrabee or enhanced CUDA and things like that

    As I've pointed out on the NVIDIA forums [nvidia.com], CUDA/OpenGL interoperability is totally broken from a game or video performance standpoint. Instead of being able to quickly shuffle graphics buffers between your CUDA kernel and your OpenGL graphics engine, you have to waste time and bus throughput copying them fro

  • Wolf3D (Score:2, Interesting)

    by Nimey ( 114278 )
    Interestingly, raycasting is what id used in Wolfenstein 3D, way back in 1992.

    http://en.wikipedia.org/wiki/Ray_casting [wikipedia.org]
    • Ray casting != ray tracing
    • It should be noted that raytracing [wikipedia.org] and raycasting [wikipedia.org] are different things.

      Raycasting is basically 3D-esque rendering of a 2D map - one ray is cast for every column on screen. Raytracing is the true 3D version - one ray per pixel (plus reflection rays, shadow rays, etc.).

      The point is, Raytracing is far more computationally expensive, and visually impressive, than raycasting.
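
      To see why the costs differ so much, here is a toy Python sketch of the raycasting side (map, names, and step size all invented): one ray per screen column, marched through a 2D grid until it hits a wall. A raytracer would instead fire a ray per pixel, in full 3D, and recurse for reflections and shadows.

        import math

        def raycast(px, py, heading, fov, columns, grid):
            # one ray per column; returns the wall distance for each column
            hits = []
            for col in range(columns):
                a = heading + fov * (col / (columns - 1) - 0.5)
                dx, dy = math.cos(a), math.sin(a)
                d = 0.0
                while d < 20.0:
                    d += 0.02    # crude fixed-step march (real engines use DDA)
                    if grid[int(py + dy * d)][int(px + dx * d)] == 1:
                        break    # wall hit; column height on screen ~ 1/d
                hits.append(round(d, 2))
            return hits

        # 8x8 map: solid border, empty interior; camera at the centre facing east
        grid = [[1] * 8] + [[1, 0, 0, 0, 0, 0, 0, 1] for _ in range(6)] + [[1] * 8]
        print(raycast(4.0, 4.0, 0.0, math.pi / 3, 10, grid))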
  • The 'sparse voxel octree' he talks about is basically a new data structure that simplifies the storage of highly detailed geometry. This would require support from hardware manufacturers and provide content producers a new format in which to encode complex geometries. Carmack theorizes that having geometry detail at the level we now have in texture detail is the next-gen graphics paradigm. Basically, sparse voxel octrees would be to polygon meshes what polygon meshes were to billboarded sprites.

    In all, Carmack hints t
    • by pavon ( 30274 ) on Thursday March 13, 2008 @12:36AM (#22736086)
      As John mentioned in his post here, these are not new ideas. I remember playing around with raytracing/casting of sparse-octree voxels for fun almost ten years ago, and as a quick search of the literature [acm.org] shows, I was quite late to the game :) What is cool is that he thinks that the gaming world is ready for them, and that he is going to try and push the hardware folks to make it happen.

      One of the most fundamental properties of voxmaps is that the geometry and texture are defined hand-in-hand - they have the same resolution, and every point in the geometry has a unique texture. If you want this, then there are data structures like sparse octrees that store the data quite efficiently.

      However, decoupling the geometry and texture opens the door for all sorts of tricks useful in limited-memory situations. It was these tricks that made realtime polygon graphics possible in the past. Things like low-resolution polygons with high-resolution texture maps, tiled/reused texture maps, and layered decals are all ways to cut down on the amount of data needed while still creating a decent-looking scene.

      However, as the amount of memory increases, these tricks are less necessary and less desirable. Artists want to be able to paint any part of a scene any way they want - and this is exactly what John has done in id Tech 5, their newest engine. After doing so he did some experimentation and found that storing this data in a sparse octree is even more memory efficient than the way he is doing it now, using pixmaps and polygons. If this approach were to work, artists would then have the same freedom in editing the geometry of the world as they do now with textures - the entire world would have geometry at the resolution of current texture maps with zero additional memory costs. That would be awesome.

      For this to work though, you need to be able to render the data efficiently. Raycasting of sparse octrees is one of those embarrassingly parallel problems, and thus hardware acceleration for it is relatively easy. But such accelerators don't exist, due to lack of a market, and unfortunately current graphics cards are not well suited to the task, IIRC because GPUs mostly accelerate floating point calculations, while descending the sparse octree uses a lot of integer bit-twiddling (I might be wrong about the reasons here). But with the memory-usage tradeoffs shifting in favor of voxmaps, GPU vendors looking to make their products better suited for general-purpose High Performance Computing, and John Carmack pushing for it, this may be an idea whose time has come.
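
      For a feel of the integer bit-twiddling involved, a minimal Python sketch of sparse-octree descent (the layout and names are my own; real formats pack this into bitfields): each level picks one of 8 children from one bit of each coordinate, and a missing child means empty space, which ends the walk immediately.

        def child_index(x, y, z, level):
            # one bit of each integer coordinate selects the octant
            b = 1 << level
            return (1 if x & b else 0) | (2 if y & b else 0) | (4 if z & b else 0)

        def lookup(root, x, y, z, depth):
            # nodes are dicts {octant: child}; absent keys mean empty space
            node = root
            for level in range(depth - 1, -1, -1):
                if not isinstance(node, dict):
                    break                   # reached a leaf payload early
                key = child_index(x, y, z, level)
                if key not in node:
                    return None             # empty region: no data, no work
                node = node[key]
            return node                     # leaf payload (colour, normal, ...)

        # a 4x4x4 volume (depth 2) holding a single solid voxel at (3, 0, 2)
        root = {child_index(3, 0, 2, 1): {child_index(3, 0, 2, 0): "stone"}}
        print(lookup(root, 3, 0, 2, 2))     # -> 'stone'
        print(lookup(root, 0, 0, 0, 2))     # -> None (rejected at the top level)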
  • Carmack can also be wrong, and I have a long memory.

    I still remember Quake 2, and buying my first PC and speccing it based on his opinion.

    He spoke well of the Intel i740, which turned out to be a dog compared to the Voodoo2.

    While, of course, he speaks of the future here, he got this wrong. Very wrong.

    From his .plan file in 1998:

    Intel i740
    ----------
    Good throughput, good fillrate, good quality, good features. A very competent chip. I wish intel great success with the 740. I think that it f
    • by ferat ( 971 )
      I don't see how he was wrong. He said it should perform as well as or better than the Voodoo 1, but was at the time slower than the Voodoo 1 due to driver immaturity.

      Why is it a great shock that the voodoo 2 was far superior to the voodoo 1?
