How Quake Wars Met the Ray Tracer

An anonymous reader writes "Intel released the article 'Quake Wars Gets Ray Traced' (PDF), which details the development efforts of the research team that applied a real-time ray tracer to Enemy Territory: Quake Wars. It describes the benefits and challenges of transparency textures with this rendering technology. Further insight is given into which special effects are most costly. Examples of glass and a 3D water implementation are shown. The outlook hints at freely programmable many-core processors, like Intel's upcoming Larrabee, that might be able to handle such a workload." We last mentioned the ray-traced Quake Wars in June; the PDF here delves into the implementation details, rather than just showing a demo, and explains which parts of the game give the most difficulty in going from rasterization to ray-tracing.

Comments Filter:
  • by MichaelSmith ( 789609 ) on Monday January 26, 2009 @02:46AM (#26604833) Homepage Journal
    I interpreted this bit...

    For this project, we started rewriting the renderer from ground zero. Because of this, the very first images from the renderer were not of typical ray-tracing caliber, but displayed only the basic parts of the geometry, without any shaders or textures

    ...to mean that they rolled their own.

  • Re:Hrmm (Score:5, Informative)

    by j1m+5n0w ( 749199 ) on Monday January 26, 2009 @03:07AM (#26604923) Homepage Journal

    Quoting wikipedia: "Intel planned to have engineering samples of Larrabee ready by the end of 2008, with a video card featuring Larrabee hitting shelves in late 2009 or early 2010."

    Of course, it's always possible that AMD or Nvidia could beat Intel to market with a ray-tracing friendly GPU, but it doesn't seem likely that they'll bet the farm on a technology that isn't well-established.

    If you want to play a software ray-traced game right now (or you just want to heat your house for the winter), you might want to look at Outbound or Let There be Light, which are both open-source games (though they run on Windows) built on top of Arauna. Gameplay is not really up to par with commercial games, but as a technology demo they're quite impressive. Framerates are tolerable on reasonably modern CPUs.

  • by CMKCot ( 1297039 ) on Monday January 26, 2009 @03:55AM (#26605063)
    Both screenshots were raytraced; they just showed off two ways of simulating the water.
  • by Flentil ( 765056 ) on Monday January 26, 2009 @04:12AM (#26605119)
    Best off-topic post I've seen today.
  • by j1m+5n0w ( 749199 ) on Monday January 26, 2009 @04:14AM (#26605129) Homepage Journal

    I am still waiting for a game/demo that is actually built from the ground up with ray tracing in mind, and by that I mean one that actually looks good.

    Have you tried Outbound? You can find it here [igad.nhtv.nl]. While it's probably not destined to be a huge hit, it looks nice and runs at a playable framerate on a reasonably fast computer. (If you don't want to try to "beat" the game, there's an option buried in one of the configuration files to disable physics and just fly around and admire the scenery.)

  • by EvolutionsPeak ( 913411 ) on Monday January 26, 2009 @04:27AM (#26605165)

    All of this stuff is done in software on the CPU, so the graphics hardware really doesn't affect it.

  • by YesIAmAScript ( 886271 ) on Monday January 26, 2009 @04:58AM (#26605259)

    The problems haven't changed since the 80s.

    I attended Siggraph in 1989 and watched the AT&T Pixel Planes presentation. Things still haven't changed in 20 years.

    I have no idea how you say that ray tracing somehow frees you from quads (or tris). You're still going to have to describe the geometry somehow. Depending on how things are done you might get some freedom from surface normals and such, but you'll still have to figure out how to build that tree from sub-elements so that the ray tracer can bounce rays off it. When a ray passes through the bounding box of the tree, you're going to have to be able to find out if the ray truly intersects the tree and, if so, where it hit, at what angle, and what color the tree would appear to be from the angle the ray came from. That's going to require you to describe the tree with geometry elements and the texture/color and spectral changes depending on angle.
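
    Purely to illustrate that intersection step (a generic textbook routine, not anything from Intel's or id's code), here is the standard Möller-Trumbore ray/triangle test. It answers exactly the questions above for one ray against one triangle of the tree's geometry: did it hit, how far along the ray, and with what surface normal.

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    static Vec3  cross(Vec3 a, Vec3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
    static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Moller-Trumbore ray/triangle test: on a hit it reports the distance t along
    // the ray and the (unnormalized) face normal, i.e. where it hit and at what angle.
    static bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2,
                            float* tOut, Vec3* normalOut)
    {
        const Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        const Vec3 p  = cross(dir, e2);
        const float det = dot(e1, p);
        if (std::fabs(det) < 1e-8f) return false;       // ray parallel to the triangle
        const float inv = 1.0f / det;

        const Vec3 s = sub(orig, v0);
        const float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;         // outside the triangle

        const Vec3 q = cross(s, e1);
        const float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;     // outside the triangle

        const float t = dot(e2, q) * inv;
        if (t <= 0.0f) return false;                    // intersection is behind the ray origin
        *tOut = t;
        *normalOut = cross(e1, e2);
        return true;
    }

    int main() {
        float t; Vec3 n;
        // A ray straight down the z axis against a triangle in the z = 5 plane.
        if (rayTriangle({0, 0, 0}, {0, 0, 1}, {-1, -1, 5}, {1, -1, 5}, {0, 1, 5}, &t, &n))
            std::printf("hit at t = %.2f, normal = (%.1f, %.1f, %.1f)\n", t, n.x, n.y, n.z);
    }

    That per-triangle cost is also why the acceleration structures discussed further down matter: you want to run a test like this against as few triangles as possible.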

  • by CarpetShark ( 865376 ) on Monday January 26, 2009 @05:05AM (#26605287)

    AMD or Nvidia could beat Intel to market with a ray-tracing friendly GPU, but it doesn't seem likely that they'll bet the farm on a technology that isn't well-established.

    What? Not well-established? Raytracing is probably one of the most established graphics technologies. It's been "coming to games" for years now; it was only a matter of time. In fact, I don't really know why they're making such a big deal out of it here, since I'm pretty sure I read that the original Quake (or was it Doom?) traced a ray or two for some mapping reason, back when the source code was released.

    Raytracing has mostly been replaced with other, faster technologies these days, which produce similar results, so it's not the panacea it seemed back when you had 5-bit hand-drawn stuff OR raytracing.

    None of which is to belittle the work done on this game, because it does look nice, and improves on the graphics of the games before. But so do most games. Wake me up when town characters have emotions based on that guy you killed last week who rebuilt the clock tower because you suggested it back when you weren't so torn up about your wife dying.

  • by wild_quinine ( 998562 ) on Monday January 26, 2009 @06:30AM (#26605601)
    How did this get modded insightful - by ANYONE?

    guys they did this work, I played this game enough to be able to tell it wasn't fun to play, it tried to be a Battlefield 2 clone with a broken physics engine, and "real-time" shadows that wasted FPS and didn't need to be real-time at all, static objects could have just been baked into the megatextures like bf2, was sad to see ETQW when it finally showed up a year late and suck ass gameplay. Splash Damage and id should be ashamed of this product and tech.

    QW:ET is one of the best made, best balanced team FPS games I have EVER played. If it draws from anything, it draws from the previous Enemy Territory game. I'm sure we've all played a lot of the original ET, being that it was free. QW is like a much refined version of this, with a modern graphics overhaul, and more interesting setting.

    a warmed over version of Doom3 / Quake 4 tech that was poorly coded by Splash.

    I mean, come on? Flamebait if not outright troll. But insightful? Where's the evidence that this was poorly coded - this game is a masterwork, IMO.

  • by Joce640k ( 829181 ) on Monday January 26, 2009 @06:34AM (#26605617) Homepage

    Wolfenstein did "ray casting" - not the same thing.

  • by Narishma ( 822073 ) on Monday January 26, 2009 @06:58AM (#26605703)
    That's ray casting, not ray tracing. Two different things.
  • Bad sample (Score:4, Informative)

    by argent ( 18001 ) <peter@slashdot . ... t a r o nga.com> on Monday January 26, 2009 @07:42AM (#26605915) Homepage Journal

    As someone else noted, both pictures were raytraced.

    To really show the difference between 2d and 3d water, you need to show the water interacting with a solid object close enough so that you can see that in one example the waves really go up and down and in another they're just a picture of waves on a mirror.

    There's been a LOT of work making 2d water look dramatic, and I've seen people say they prefer 2d water in broad shots like this in other games (not even raytraced ones), but when you're in the game looking over the edge of a dock or looking at a nearby boat with the light behind you, it's pretty clear that spending more time on the physics of the water pays off.

    Heck, even with 2d water, paying attention to the wave effects in shallow versus deep water pays off when you interact with it. And that's rarely done because it's not as dramatic.
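
    To make the 2D-versus-3D distinction concrete, here is a toy height-field sketch (generic sine waves, not how ETQW or the ray tracer actually models its water). "3D" water displaces the surface mesh by the height value, so the geometry really rises and falls against a dock or a hull; "2D" water keeps the mesh flat and only uses the computed normal to perturb reflections, which is the "picture of waves on a mirror" effect.

    #include <cmath>

    struct WaterSample { float height; float nx, ny, nz; };

    // A toy animated water surface: the sum of two directional sine waves.
    // "3D" water moves the mesh vertex up by height; "2D" water keeps the mesh
    // flat and only uses the normal below when shading reflections.
    static WaterSample sampleWater(float x, float z, float t)
    {
        const float a1 = 0.20f, kx1 = 0.9f, kz1 = 0.3f, w1 = 1.1f;   // wave 1: amplitude, direction, speed
        const float a2 = 0.05f, kx2 = -0.4f, kz2 = 1.7f, w2 = 2.3f;  // wave 2
        const float p1 = kx1 * x + kz1 * z - w1 * t;
        const float p2 = kx2 * x + kz2 * z - w2 * t;

        WaterSample s;
        s.height = a1 * std::sin(p1) + a2 * std::sin(p2);

        // Shading normal from the slope: n = normalize(-dh/dx, 1, -dh/dz).
        const float dhdx = a1 * kx1 * std::cos(p1) + a2 * kx2 * std::cos(p2);
        const float dhdz = a1 * kz1 * std::cos(p1) + a2 * kz2 * std::cos(p2);
        const float len  = std::sqrt(dhdx * dhdx + 1.0f + dhdz * dhdz);
        s.nx = -dhdx / len;  s.ny = 1.0f / len;  s.nz = -dhdz / len;
        return s;
    }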

  • by grumbel ( 592662 ) <grumbel+slashdot@gmail.com> on Monday January 26, 2009 @08:52AM (#26606221) Homepage

    Textured Quads are used for leaves because of polygon count.

    The whole point of realtime ray tracing is that it scales with O(log(n)) instead of O(n) when it comes to polygons. Which means that you can and should model each leaf with full polygons. That is actually done in quite a few other examples, such as the sunflower scene [openrt.de] or that Boeing model [uni-sb.de], where every last screw is modeled, and yes, ray tracing can handle those just fine.

    Now there is of course a caveat: this scalability only works for static scenes, and things become quite a bit more problematic when stuff is animated. Nevertheless, the whole point of 'going ray tracing' is that polygon counts are presumably slowly getting high enough that ray tracing just outruns rasterization.
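
    To show where the O(log(n)) comes from, here is a self-contained toy (nothing from OpenRT or Intel's tracer; the "scene" is just unit boxes strung along a diagonal so the program stays short). A median-split bounding volume hierarchy lets the traversal skip every subtree whose bounds the ray misses, so the printed visit count grows by only a handful of nodes while the primitive count grows a thousandfold.

    #include <algorithm>
    #include <cstddef>
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Box { float lo[3], hi[3]; };

    static Box merge(const Box& a, const Box& b) {
        Box r;
        for (int i = 0; i < 3; ++i) {
            r.lo[i] = std::min(a.lo[i], b.lo[i]);
            r.hi[i] = std::max(a.hi[i], b.hi[i]);
        }
        return r;
    }

    struct Node {
        Box bounds;                           // bounds of everything below this node
        std::unique_ptr<Node> left, right;    // both null for a leaf
    };

    // Build a BVH over prims[begin, end) by median split on the longest axis.
    static std::unique_ptr<Node> build(std::vector<Box>& prims, std::size_t begin, std::size_t end) {
        auto node = std::make_unique<Node>();
        node->bounds = prims[begin];
        for (std::size_t i = begin + 1; i < end; ++i) node->bounds = merge(node->bounds, prims[i]);
        if (end - begin == 1) return node;                                    // leaf
        int axis = 0;
        float ext[3];
        for (int i = 0; i < 3; ++i) ext[i] = node->bounds.hi[i] - node->bounds.lo[i];
        if (ext[1] > ext[axis]) axis = 1;
        if (ext[2] > ext[axis]) axis = 2;
        const std::size_t mid = begin + (end - begin) / 2;
        std::nth_element(prims.begin() + begin, prims.begin() + mid, prims.begin() + end,
                         [axis](const Box& a, const Box& b) { return a.lo[axis] < b.lo[axis]; });
        node->left  = build(prims, begin, mid);
        node->right = build(prims, mid, end);
        return node;
    }

    // Slab test: o is the ray origin, inv holds 1/direction per component.
    static bool hit(const Box& b, const float o[3], const float inv[3]) {
        float tmin = 0.0f, tmax = 1e30f;
        for (int i = 0; i < 3; ++i) {
            float t0 = (b.lo[i] - o[i]) * inv[i], t1 = (b.hi[i] - o[i]) * inv[i];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
            if (tmax < tmin) return false;
        }
        return true;
    }

    // Count visited nodes: subtrees whose bounds the ray misses are never touched.
    static void traverse(const Node* n, const float o[3], const float inv[3], long& visited) {
        if (!n || !hit(n->bounds, o, inv)) return;
        ++visited;
        traverse(n->left.get(), o, inv, visited);
        traverse(n->right.get(), o, inv, visited);
    }

    int main() {
        const std::size_t counts[] = { 1000, 100000, 1000000 };
        for (std::size_t n : counts) {
            std::vector<Box> prims(n);
            for (std::size_t i = 0; i < n; ++i) {                 // unit boxes strung along a diagonal
                const float p = (float)i;
                prims[i] = { { p, p, p }, { p + 1, p + 1, p + 1 } };
            }
            auto root = build(prims, 0, n);
            const float o[3]   = { 0.5f, 0.5f, -1.0f };           // ray that only grazes the first box
            const float inv[3] = { 1000.0f, 1000.0f, 1.0f };      // 1/direction for dir = (0.001, 0.001, 1)
            long visited = 0;
            traverse(root.get(), o, inv, visited);
            std::printf("n = %7zu boxes, nodes visited = %ld\n", n, visited);
        }
    }

    Animation is exactly where this gets harder, as noted above, because the hierarchy has to be rebuilt or refitted every frame.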

  • by robthebloke ( 1308483 ) on Monday January 26, 2009 @10:21AM (#26606885)

    If you are a decent, well-learned programmer, essentially an expert in algorithmic complexity, then surely you understand the comparison O(n) vs O(log n) and why you cannot refute it with horseshit.

    How about real world experience then?

    We have approximately 120 rack units in our render farm, each with dual quad-core Xeons plus dual Quadros. Of the rendering jobs we submit, approximately 0.001% use raytracing exclusively, and about 0.5% make use of raytracing extensions. The rest is rasterization, because it's a hell of a lot more efficient. Period. (And I'm talking about the real world here, not Big O notation on paper.)

    In all fairness, the arguments for scene complexity go out of the window very quickly, for quite a few reasons:

    1. To double the complexity of a model, we typically expect to see the time spent authoring that asset to increase by a factor of 6. We currently employ in the region of 200 modellers. A doubling of scene complexity would take that number to 1200 (if you don't count the additional management overhead etc). There simply aren't enough skilled people to make that a reality, so there is an absolute cap on how complex a given scene can become.

    2. We always have, and always will, continue to separate the rendering into separate passes for the compositor to correctly light at a later stage in the pipeline. A highly skilled compositor can produce higher-quality images quicker than a better rendering algorithm can. Because we always split the scene into smaller constituent parts, the scenes never get complex enough to see any ray-tracing benefits (and those parts can be rendered separately on different nodes in our render farm). A minimal sketch of the basic compositing operation follows after this list.

    3. We typically use 4k x 4k image sizes, and rasterization is certainly fast enough for those image sizes. Our scene complexity is far higher than that of any game now, or in the next 5 years.

    4. Scene complexity is inherently limited by one other major factor that you've completely ignored: memory speed. As your data set increases, rendering performance degrades to the speed at which you can feed the processors, i.e. the speed of the memory. Again, this is another reason why we separate the scene into render layers.
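
    As noted under point 2, here is a minimal sketch of the compositor's basic operation (the standard Porter-Duff "over" on premultiplied pixels; this is generic code, not anything from the poster's actual pipeline). Separately rendered passes are flattened back-to-front into the final frame, which is why no individual render ever has to see the whole scene at once.

    #include <cstddef>
    #include <vector>

    // One pixel of a render layer, premultiplied alpha (r, g, b already scaled by a).
    struct RGBA { float r, g, b, a; };

    // Porter-Duff "over": put a foreground layer pixel on top of a background pixel.
    static RGBA over(const RGBA& fg, const RGBA& bg)
    {
        const float k = 1.0f - fg.a;
        return { fg.r + bg.r * k, fg.g + bg.g * k, fg.b + bg.b * k, fg.a + bg.a * k };
    }

    // Flatten separately rendered passes (ordered back to front, all the same size,
    // at least one layer) into one image; each pass can come off a different node.
    static std::vector<RGBA> composite(const std::vector<std::vector<RGBA>>& layersBackToFront)
    {
        std::vector<RGBA> out = layersBackToFront.front();
        for (std::size_t i = 1; i < layersBackToFront.size(); ++i)
            for (std::size_t p = 0; p < out.size(); ++p)
                out[p] = over(layersBackToFront[i][p], out[p]);
        return out;
    }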

    CG has never, and will never, use accurate mathematical models to produce images. If a cheap trick is good enough, it's good enough. Raytracing never really made the inroads into the film FX world that the early '80s/'90s evangelists predicted, and I predict that it will never make the inroads into games that you seem to believe it will.

    Thirdly, what the fuck do current video cards have to do with *anything* about this? This is called RESEARCH. Ever do any?

    Wow! Ever done any research yourself? If you had, you'd know that the answer is an awful lot! The only computational resource available that can provide both the memory bandwidth and the computational power required for raytracing is the GPU. Our rendering process has been using GPUs to accelerate raytracing (and rasterization) for a couple of years now; unfortunately, all of the problems I raised above regarding ray tracing still apply.

  • by daVinci1980 ( 73174 ) on Monday January 26, 2009 @12:46PM (#26608541) Homepage

    Rockoon, you are mistaken on a lot of your points. Even though you seem a bit angry, please allow me to explain. (I work for nvidia, but I do not speak for them.)

    Firstly, in rasterization, 4xAA does mean 4 samples per-pixel. The short version is that 4xAA basically means that we render into buffers that are twice as large in the X and Y direction (so 2*2 is 4), and then resolve the extra pixels with hardware when we go to present the backbuffer into the front buffer.

    I can't speak to 4xAA in raytracing, but to be apples-to-apples, it would have to literally be extra rays in the X and Y directions. Note that I'm not claiming there's a 4x performance penalty here, though, because modern ray tracers rely a lot on cache coherency to be performant. Algorithmically, I would agree that there really is a potential for 4x the cost, but algorithmically we don't care about the constants we multiply by, right?
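
    For illustration, here is a plain software version of that resolve step (one grayscale channel, generic code, not how the hardware actually implements it; real MSAA is also smarter about which samples it actually shades). Each final pixel is just the average of the four samples rendered for it.

    #include <cstddef>
    #include <vector>

    // Resolve a buffer rendered at 2x the width and 2x the height (4 samples per
    // final pixel, so samples.size() == 4 * width * height) down to the display
    // resolution by averaging each 2x2 block: the "render big, then resolve" idea.
    static std::vector<float> resolve4x(const std::vector<float>& samples, int width, int height)
    {
        std::vector<float> image(static_cast<std::size_t>(width) * height);
        const std::size_t sw = static_cast<std::size_t>(width) * 2;   // supersampled row length
        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const std::size_t sx = (std::size_t)x * 2, sy = (std::size_t)y * 2;
                const float sum = samples[sy * sw + sx] + samples[sy * sw + sx + 1]
                                + samples[(sy + 1) * sw + sx] + samples[(sy + 1) * sw + sx + 1];
                image[(std::size_t)y * width + x] = sum * 0.25f;      // box filter over the 4 samples
            }
        }
        return image;
    }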

    Third, it's important to consider what current cards do because they're the largest install base, and they are what developers will target. It's also important if you believe that hybrid raytracing is the future--almost all modern raytracers use rasterization for the eye rays to try to help with the pixel complexity problem.

    Fourth, you are correct. In fact, there are probably relatively few hardware inventions that didn't begin their life as a software implementation--CPUs excepted.

    Finally, you are incorrect. Raytracing scales O(pixels) and O(ln(complexity)). Rasterization is relatively constant in the number of pixels, and O(complexity). I agree, scene complexity has gone up considerably (and continues to go up considerably) with every generation of new titles. Fortunately, in the same time period rasterization has massively decreased the cost of processing geometry while simultaneously increasing the ability to parallelize those types of workloads. Modern GPUs (like the relatively old 8800 GTX [wikipedia.org]) can process in the neighborhood of 300M visible triangles per second. That means that if you're trying to redraw your scene at 60Hz, you can have around 5M triangles per scene per frame. The closest I've seen from most modern titles is in the 500K-1M range, so I think we still have some headroom in this regard. Modern techniques, such as soft shadowing and depth-only passes, definitely eat into this count, which is why we're seeing much higher counts than we used to.

    Regarding pixel complexity, the number of pixels that matters is more than just the resolution; it's also how many times you'd like to draw those pixels in a given second. Seven years ago, you were lucky to find a CRT that drew 1280x1024 (which is a weird 5:4 resolution, but I digress) at more than 60 Hz. 85 Hz was reasonably common, but finding a monitor that drew at 1600x1200x85 was pretty rare.

    Now, you can find monitors that render at 1920x1200x120 for relatively cheap. And 240 Hz is on the way. [extremetech.com] That's a lot of pixel data to be moving and redrawing. And speaking from experience, I can say that leveraging coherence within a single frame is hard, and leveraging coherence between frames is virtually unheard of.

    It's not that raytracing is an impossible dream, it's just that the GP was correct: it's no panacea.

    I'd like to reiterate: though I work for nvidia, I do not speak for them.

  • by dudpixel ( 1429789 ) on Monday January 26, 2009 @11:14PM (#26617127)

    The DOOM engine also used an extension of the same technology (raycasting) as Wolf3D, as did Rise of the Triad (and several other games of the same era; Heretic comes to mind also).

    The DOOM engine was expanded to use ceiling textures and sloping walls/ceilings etc., and was much improved over Wolf3D, but still not using the same techniques as we use these days. I believe that among the first games to do it the current way were Duke3D(?) and Quake, and you'll remember they used to advertise the "rooms above rooms" feature.
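
    To make the ray casting / ray tracing distinction concrete, here is a toy sketch of the Wolf3D-style technique (a generic DDA grid walk, not id's actual code). A raycaster shoots one 2D ray per screen column through a tile map and draws a vertical wall slice scaled by the returned distance; there are no reflections, refractions, or secondary rays, which is what separates it from true ray tracing.

    #include <cmath>
    #include <cstdio>

    // A tiny Wolfenstein-style map: 1 = wall, 0 = empty space.
    static const int MAP_W = 8, MAP_H = 8;
    static const int MAP[MAP_H][MAP_W] = {
        {1,1,1,1,1,1,1,1},
        {1,0,0,0,0,0,0,1},
        {1,0,0,0,0,1,0,1},
        {1,0,0,0,0,1,0,1},
        {1,0,0,0,0,0,0,1},
        {1,0,1,1,0,0,0,1},
        {1,0,0,0,0,0,0,1},
        {1,1,1,1,1,1,1,1},
    };

    // Walk the grid one cell at a time (DDA) from (px, py) along (dx, dy) and
    // return the ray parameter t at the first wall cell.  A raycaster does this
    // once per screen column and scales the wall slice by 1 / t.  Assumes dx and
    // dy are both nonzero and the map is enclosed by walls, so the loop terminates.
    static float castRay(float px, float py, float dx, float dy)
    {
        int mapX = (int)px, mapY = (int)py;
        const float deltaX = std::fabs(1.0f / dx), deltaY = std::fabs(1.0f / dy);
        const int stepX = dx < 0 ? -1 : 1, stepY = dy < 0 ? -1 : 1;
        float sideX = (dx < 0 ? (px - mapX) : (mapX + 1.0f - px)) * deltaX;
        float sideY = (dy < 0 ? (py - mapY) : (mapY + 1.0f - py)) * deltaY;
        int side = 0;                                  // axis of the last step taken
        while (true) {
            if (sideX < sideY) { sideX += deltaX; mapX += stepX; side = 0; }
            else               { sideY += deltaY; mapY += stepY; side = 1; }
            if (MAP[mapY][mapX])                       // hit a wall cell
                return side == 0 ? sideX - deltaX : sideY - deltaY;
        }
    }

    int main() {
        // One ray from the middle of the map, heading east and drifting slightly north.
        std::printf("wall hit at t = %.2f\n", castRay(4.5f, 4.5f, 1.0f, -0.25f));
    }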
