How Quake Wars Met the Ray Tracer
An anonymous reader writes "Intel released the article 'Quake Wars Gets Ray Traced' (PDF), which details the development efforts of the research team that applied a real-time ray tracer to Enemy Territory: Quake Wars. It describes the benefits and challenges of handling transparency textures with this rendering technology. Further insight is given into which special effects
are most costly. Examples of glass and a 3D water implementation are shown. The outlook points to freely programmable many-core processors, like Intel's upcoming Larrabee, that might be able to handle such a workload." We last mentioned the ray-traced Quake Wars in June; the PDF here delves into the implementation details, rather than just showing a demo, and explains which parts of the game are most difficult to move from rasterization to ray tracing.
Would have posted first, but... (Score:4, Funny)
I was using a raytracer.
Re: (Score:2)
Re: (Score:2)
Hey!! Wtf was that!?!?! :D
Don't shit where you live!
If only Essence's coders did games =P
http://ada.untergrund.net/showdemo.php?demoid=189 [untergrund.net]
http://ada.untergrund.net/showdemo.php?demoid=437 [untergrund.net]
http://ada.untergrund.net/showdemo.php?demoid=386 [untergrund.net]
http://ada.untergrund.net/showdemo.php?demoid=428 [untergrund.net]
Not that the Amiga was ever any good for 3D graphics, but whatever :D
Hrmm (Score:5, Funny)
So when can I buy the CPU/video card that can do raytracing, heat my house, let me cook food on it, and pipe extra heat out for a steam room?
Re: (Score:2, Funny)
Re: (Score:2, Funny)
2010. [wikipedia.org]
Re:Hrmm (Score:5, Informative)
Quoting wikipedia: "Intel planned to have engineering samples of Larrabee ready by the end of 2008, with a video card featuring Larrabee hitting shelves in late 2009 or early 2010."
Of course, it's always possible that AMD or Nvidia could beat Intel to market with a ray-tracing friendly GPU, but it doesn't seem likely that they'll bet the farm on a technology that isn't well-established.
If you want to play a software ray-traced game right now (or you just want to heat your house for the winter), you might want to look at Outbound or Let There be Light, which are both open-source games (though they run on Windows) built on top of Arauna. Gameplay is not really up to par with commercial games, but as a technology demo they're quite impressive. Framerates are tolerable on reasonably modern CPUs.
raytracing is VERY established (Score:5, Informative)
What? Not well-established? Raytracing is probably one of the most established graphics technologies. It's been "coming to games" for years; it was only a matter of time. In fact, I don't really know why they're making such a big deal out of it here, since I'm pretty sure I read that the original Quake (or was it Doom?) traced a ray or two for some mapping reason, back when the source code was released.
Raytracing has mostly been replaced with other, faster technologies these days, which produce similar results, so it's not the panacea it seemed back when you had 5-bit hand-drawn stuff OR raytracing.
None of which is to belittle the work done on this game, because it does look nice, and improves on the graphics of the games before. But so do most games. Wake me up when town characters have emotions based on that guy you killed last week who rebuilt the clock tower because you suggested it back when you weren't so torn up about your wife dying.
Re: (Score:2, Interesting)
Not sure about those two, but I'm pretty certain Wolfenstein 3D did. That was for visibility and texture coordinate calculation, rather than light and shadow. Since the map was 2D, only a handful of rays were needed.
Re: (Score:2)
Sounds like the same system, yes. I'm pretty sure it was back with Doom 1, now that I think about it more.
Re:raytracing is VERY established (Score:4, Informative)
Re:raytracing is VERY established (Score:5, Interesting)
Wolf3D used raycasting, rather than tracing, to give a pseudo-3D rendering of what was basically a 2D grid map.
It's pretty clever how it worked; I remember having a LOT of fun cooking up my own similar renderer back in the day (Turbo Pascal with inline asm was fun!). If I remember rightly:
First, the ceiling and floor were drawn in, covering everything (meeting in the middle, vertically). Then, they took your location on the map and cast a ray for each column of pixels (320 of them, I believe). This ray went forward until it intersected a wall, and the distance to the wall was measured. It then did a quick calculation (lookup table) to determine the height of the wall at that distance, subtracted half that height from the center of the screen, and plotted a vertical line in the color of the wall. I seem to remember the wall color was retrieved from a small texture and scaled.
That gives surprisingly good results, albeit with no lighting or shading.
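For the curious, here's a minimal C++ sketch of the per-column loop described above (a fixed-step ray march rather than the grid stepping the real engine used, with no fisheye correction or texturing; all names are illustrative):

    #include <cmath>

    const int SCREEN_W = 320, SCREEN_H = 200;
    const int MAP_W = 16, MAP_H = 16;
    int worldMap[MAP_H][MAP_W]; // 0 = empty, nonzero = wall color

    void castColumns(float px, float py, float angle, float fov,
                     void (*drawColumn)(int x, int top, int bottom, int color)) {
        for (int x = 0; x < SCREEN_W; ++x) {
            // One ray per screen column, fanned across the field of view.
            float a = angle - fov / 2 + fov * x / SCREEN_W;
            float dx = std::cos(a), dy = std::sin(a);
            float dist = 0, step = 0.05f;
            int hit = 0;
            while (!hit && dist < 64.0f) {
                // March forward until the ray enters a wall cell.
                dist += step;
                int mx = int(px + dx * dist), my = int(py + dy * dist);
                if (mx < 0 || mx >= MAP_W || my < 0 || my >= MAP_H) break;
                hit = worldMap[my][mx];
            }
            if (!hit) continue;
            // Wall height is inversely proportional to distance
            // (the original used a lookup table instead of this divide).
            int h = int(SCREEN_H / dist);
            int top = (SCREEN_H - h) / 2;
            drawColumn(x, top, top + h, hit);
        }
    }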
Re: (Score:2)
Wouldn't it make more sense to simply store the top and bottom coordinates directly in the lookup table?
In any case, you'd also need a Z-buffer to draw sprites correctly, especially if one is half behind a corner and half visible.
ray casting != ray tracing (Score:5, Informative)
Wolfenstein did "ray casting" - not the same thing.
Re: (Score:2)
Re: (Score:2)
Raytracing has mostly been replaced with other, faster technologies these days, which produce similar results, so it's not the panacea it seemed back when you had 5-bit hand-drawn stuff OR raytracing.
Those technologies are only faster for the moment. Theoretically, at some point in the future raytracing will be faster, and it already produces better effects.
It's actually hard to tell which will win, just thinking about it. If I'm reading TFA right, they went from a 20-machine cluster to a single machine in some 4-5 years. And raytracing has better theoretical scalability -- it's embarrassingly parallelizable, and has quite a few cases (extremely complex geometry, real curves instead of just triangles,
memory and parallelism (Score:2)
Movie studios usually go with rasterization rather than ray-tracing because ray-tracers access scene memory in an unpredictable way, and so ray-tracing is excruciatingly slow if you can't fit the entire scene into the memory of one of the machines that's rendering it. Rasterizers can split the scene up, render the pieces separately, and combine the results into a single image.
For games, which typically run on a single machine, this is a non-issue.
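As a rough illustration of that split-and-composite step, here is a minimal C++ sketch of sort-last depth compositing; the buffer layout and types are assumptions for illustration, not any particular studio's pipeline:

    #include <cstdint>
    #include <vector>

    struct Sample { uint32_t color; float depth; };

    // Two pieces of a scene rendered separately, each with its own color
    // and depth buffer; merge by keeping the nearer sample at every pixel.
    void compositeNearest(const std::vector<Sample>& a,
                          const std::vector<Sample>& b,
                          std::vector<uint32_t>& out) {
        for (size_t i = 0; i < out.size(); ++i)
            out[i] = (a[i].depth <= b[i].depth) ? a[i].color : b[i].color;
    }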
Re: (Score:2)
Ah, thanks for that...
For games, which typically run on a single machine, this is a non-issue.
It's still an issue, though, because now there's memory bandwidth to consider. The amount of cache/RAM and how fast it is -- or worse, requiring more RAM in your system, or on a raytrace-friendly video card...
Contrast that with rasterizers, which can at least transfer the scene in discrete, contiguous chunks; a raytracer will hit memory more unpredictably, as you said.
Re: (Score:2)
Re: (Score:1)
Re: (Score:2)
It's the kitchen of tomorrow:
http://ftp.arl.mil/ftp/historic-computers/png/cray2.png [arl.mil]
Re: (Score:2)
Buy a Quad P4 motherboard from 4 years ago and use heatpipes to do all that you ask.
Hell, if you did it right you could get one of the old P4 Xeon Dell servers that would take 8 processors and easily heat your home and water for you.
Re: (Score:2)
Never ending chase... (Score:5, Insightful)
Yet another ray tracing article, and yet again all the same problems as before. Doing yesterday's games in ray tracing is all nifty, but also kind of pointless. For one, we already played them, but more importantly, it doesn't actually use the strength of ray tracing. Rendering a tree built out of texture quads is a nice accomplishment, but wasn't the whole point of ray tracing that one can have a million polygons and no longer need such hacks? So show me a realistic tree instead of trying to replicate the limitations of rasterization.
I am still waiting for a game/demo that actually is built from the ground up with ray tracing in mind, and by that I mean one that actually looks good; a few shiny spheres might have been impressive back on the Amiga some 20 years ago, but not any more.
Re:Never ending chase... (Score:5, Informative)
Have you tried Outbound? You can find it here [igad.nhtv.nl]. While it's probably not destined to be a huge hit, it looks nice and runs at a playable framerate on a reasonably fast computer. (If you don't want to try to "beat" the game, there's an option buried in one of the configuration files to disable physics and just fly around and admire the scenery.)
Re: (Score:2)
Looks kinda crappy vs. today's games, or the games of 5 years ago; on par with those released 8 years ago, maybe ;)
Re: (Score:2)
I don't think the sunflower scene looks "real" either; all the sharpness and perfection make it look MORE synthetic.
Re: (Score:2)
Yes, but that is in large part because it was built to have millions of polygons, not to be artistically pleasing. What makes the scene interesting isn't really whether it looks good or not, but that it shows graphics that today's 3D hardware simply couldn't render. It's kind of the same with voxels: it doesn't necessarily look better, but it's different, it allows you to do stuff that you couldn't do with other technology, and that's why I think it would be interesting to see a full game built on it and be it just a
Re:Never ending chase... (Score:4, Informative)
The problems haven't changed since the 80s.
I attended Siggraph in 1989 and watched the AT&T Pixel Planes presentation. Things still haven't changed in 20 years.
I have no idea how you can say that ray tracing somehow frees you from quads (or tris). You're still going to have to describe the geometry somehow. Depending on how things are done you might get some freedom from surface normals and such, but you'll still have to figure out how to make that tree from sub-elements so that the ray-tracer can bounce rays off it. When a ray passes through the bounding box of the tree, you're going to have to be able to find out if the ray truly intersects the tree and, if so, where on it did it hit, at what angle, and what color the tree would appear to be from the angle the ray came from. That's going to require you to describe the tree with geometry elements and the texture/color and spectral changes depending on angle.
Re: (Score:2)
I think the GP was talking about using transparent quads with textures to fake complex graphics.
I.e., branches of a tree are really just a texture painted on a quad with bump-mapping. It's 2 polys, not the 1000 it could be.
The article makes a reference to trying to speed up hit testing of transparent quads when shadow-casting.
Games/technology will continue this slow R&D followed by a quick jump in visuals for all games for the next 10 years. At some point, our power will be sufficient enough that the tree wi
Re: (Score:2)
I.e., branches of a tree are really just a texture painted on a quad with bump-mapping. It's 2 polys, not the 1000 it could be.
That's the case in Fighting Force for PS1. It's also the case in Animal Crossing, which was designed for a 1996 GPU and a fixed-angle camera. (The GameCube game was a source port from N64, the DS has an N64-class GPU, and the Wii game intentionally kept the same art style.) But in plenty of games using a "realistic" art style designed for the PS2 or more powerful hardware, branches are actual geometry.
Re: (Score:2)
People said the same things as you are saying 20 years ago. Iterative fractal systems (as you describe) were also big at SigGraph 1989.
But that extra power was used to raise the output resolution, texture resolution, and the poly count, rather than for the trickery you speak of. I don't foresee a boom in the next 10 years any more than the last 20.
Also, there's no difference between ray tracing and scan-line rasterization that would dictate or even facilitate a change from textures (and transparency
Re:Never ending chase... (Score:5, Insightful)
You aren't going to see that kind of thing in a game for many reasons, which boil down to the fact that ray tracing isn't ready for realtime. Making a game that used ray tracing would pretty much doom it to failure.
One problem you have is that the graphics hardware out there isn't built for ray tracing, it's built for rasterization. Now, while I'm sure you can write your own ray tracer on the newer hardware that does GPGPU stuff, I'm also sure it wouldn't run as well. The reason is that current graphics cards are purpose-built rasterizers. They are designed to do that as fast as possible. So you are left with writing your own ray tracing engine in software, either on the CPU or GPU. This is not going to be fast, especially since ray tracing is fairly computationally intensive.
Well then you hit the next problem: pixels. Ray tracers do NOT scale well with resolution. Each pixel has to have its own ray cast. If you want to do anti-aliasing, then you have to cast more rays for that. This is why ray tracing demos tend towards low resolutions: it is much faster the fewer pixels you have to do. Ok, well, that doesn't compare favorably against the rasterizers. They scale extremely well with resolution, and also in terms of anti-aliasing. Many of them can do 4xFSAA with next to no performance penalty, and can do it at full HD resolutions. Not the case with your ray tracer: if it can render 40 FPS at 1920x1200 with no AA, it'll be just 10 FPS with 4x AA, since it now has to do 4 rays per pixel.
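A quick back-of-the-envelope check of those numbers in C++ (the resolution and frame rates are the comment's hypotheticals, not measurements):

    #include <cstdio>

    int main() {
        const long width = 1920, height = 1200;
        const long samples = 4;              // 4x supersampled AA
        const long raysPerFrame = width * height * samples;
        const double fpsNoAA = 40.0;         // hypothetical no-AA baseline
        // With 4 rays per pixel instead of 1, frame rate drops 4x.
        std::printf("rays/frame: %ld, expected FPS: %.1f\n",
                    raysPerFrame, fpsNoAA / samples);
        return 0;
    }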
So you aren't going to see it happen any time soon. The net result wouldn't look as good as the equivalent rasterized game. It won't be the sort of thing you see until either there starts to be purpose-built ray tracing hardware (GPUs may start to be made for both) or general purpose processors are so fast it makes no real difference.
Intel is all up on this because they see GPUs as a threat to their computation market. However, as this demonstrates, there really isn't an advantage at this time. You throw a positively massive system at it and you get poor performance. Even if you redid the game so it used extremely high geometry, nobody would give a shit. It would run way too slow on any normal computer.
Re: (Score:2)
The issue with raytracing is memory access patterns; this is not so much an issue with GPUs vs CPUs, but rather that both CPUs and GPUs rely on linear prefetch patterns through memory, which raytracing breaks as you traverse the spatial subdivision structure.
Secondly, ray tracers scale very well with resolution, O(n) where n = number of pixels; we currently still have a relatively high constant cost, but assuming Moore's law keeps up in performance and we find an
Re: (Score:2)
True, optimally it requires static geometry for the scene mesh, and that actually is why they're using bump mapping for the water rather than a surface mesh. One flaw to this is that the edges of the water will not rise or fall, so you need to fake it with texturing or particle systems.
Ironically, bump mapping is largely being replaced in rasterizers by pseudo raytracing techniques that use the normal map and a height to project the actual geometry from the subsurface (however, most are still mapped to a s
Re: (Score:2)
Near as I can tell, your whole argument boils down to "raytracing is slow[er]." (I added the "er".) The years keep going by, though, and while the clocks are creeping up slower than they used to, we keep getting more 'n' more cores. Raytracing might always be slower but it's only a matter of time until it's fast enough.
When you say 10 FPS, I think "That's amazing!" That means in 10 years we'll all be doing it at 60 FPS on $500 machines.
Re: (Score:2)
One problem you have is that the graphics hardware out there isn't built for ray tracing, it's built for rasterization.
This paper is by Intel, who wants to release video hardware specifically designed for raytracing. It's also embarrassingly parallel -- this example originally ran on a cluster of 20 machines.
Ray tracers do NOT scale well with resolution. Each pixel has to have its own ray cast.
That's linear. As in, throw in twice as many cores, and you can handle twice as many pixels. How does rasterization scale with pixels?
Ok, well, that doesn't compare favorably against the rasterizers. They scale extremely well with resolution, and also in terms of anti-aliasing. Many of them can do 4xFSAA with next to no performance penalty...
Alright -- but you haven't actually shown what the requirement is. Is it logarithmic? Linear? Constant?
The net result wouldn't look as good as the equivalent rasterized game.
No, it will look better, when it gets there.
As an example: A mirror in a rasterized
Re: (Score:2)
Interpersonal skills, go read some, spend a few years living it then open your mouth.
It may be worth wiping the rabid spittle off your chin first.
( I'm a random passerby, not the original poster. )
Re: (Score:3, Insightful)
I know I use my share of the foul words in the English language, but think about this - everyone would take your comment more seriously if you didn't use them; at least not to the excess seen in your post.
Re:Never ending chase... (Score:5, Informative)
If you are a decent, well-learned programmer, essentially an expert in algorithmic complexity, then surely you understand the comparison O(n) vs O(log n) and why you cannot refute it with horseshit.
How about real world experience then?
We have approximately 120 rack units in our renderfarm, each with dual quad core xeons + Dual Quadros. Of the rendering jobs we submit, approximately 0.001% use raytracing exclusively, about 0.5% make use of raytracing extensions. The rest is rasterization because it's a hell of a lot more efficient. Period. (And I'm talking about the real world here - not Big O notation on paper)
The arguments for scene complexity go out of the window very quickly in all fairness, for quite a few reasons.
1. To double the complexity of a model, we typically expect the time spent authoring that asset to increase by a factor of 6. We currently employ in the region of 200 modellers. A doubling of scene complexity would take that number to 1200 (if you don't count the additional management overhead, etc). There simply aren't enough skilled people to make that a reality, so there is an absolute cap on how complex a given scene can become.
2. We always have, and always will, continue to separate the rendering into separate passes for the compositor to correctly light at a later stage in the pipeline. A highly skilled compositor can produce higher quality images quicker than a better rendering algorithm can. Because we always split the scene into smaller constituent parts, the scenes never get complex enough to see any ray tracing benefits (and those parts can be rendered separately on different nodes in our RF).
3. We typically use 4k x 4k image sizes; rasterization is certainly fast enough for those image sizes. Our scene complexity is far higher than that of any game now, or in the next 5 years.
4. Scene complexity is inherently limited by 1 other major factor that you've completely ignored: memory speed. As your data set increases, rendering performance degrades to the speed at which you can feed the processors - i.e. the speed of the memory. Again, this is another reason why we separate the scene into render layers.
CG has never, and will never, use accurate mathematical models to produce images. If a cheap trick is good enough, it's good enough. Raytracing never really made the in-roads into the FilmFX world that the early '80s/'90s evangelists predicted - and I predict that it will never make the in-roads into games that you seem to believe.
Thirdly, what the fuck do current video cards have to do with *anything* about this? This is called RESEARCH. Ever do any?
Wow! Ever done any research yourself? If you had, you'd know that the answer is an awful lot! The only computational resource available that can provide both the memory bandwidth and the computational power required for raytracing is the GPU. Our rendering process has been using GPUs to accelerate raytracing (and rasterization) for a couple of years now; unfortunately, all of the problems I raised above regarding ray-tracing still apply.
Re: (Score:3, Interesting)
4. Scene complexity is inherently limited by 1 other major factor that you've completely ignored: memory speed. As your data set increases, rendering performance degrades to the speed at which you can feed the processors - i.e. the speed of the memory. Again, this is another reason why we separate the scene into render layers.
What's neat about raytracing is that the memory access can be divided into millions of separate 'threads' that are not dependent on each other. So, with a processor (such as the Tera MTA) where threads run in the order that memory is available, you achieve maximum memory bandwidth.
On 'modern' processors, where memory is read in the order that threads run, you get massive pipeline and cache stalls when using a software raytracer. So when you are comparing the 'vs' of rasterization and raytracing you need to cons
Re: (Score:2)
I think it's exactly the opposite. The 'FilmFX' world isn't using raytracing because it hasn't been used in games. Games drive this tech, and if we get fast hardware for raytracing, then movies will use it exclusively.
You'd have a point if it wasn't for the fact that dedicated ray tracing hardware has been around for decades as standalone hardware [pixelution.co.uk], and dedicated PCI cards [computerarts.co.uk].
The FilmFX industry already has dedicated hardware-accelerated ray tracing, has had it for some time, and finds no use for it. I still find it laughable that everyone gets so excited about HW ray tracing because, quite frankly, it's a dead end...
Re: (Score:2)
True, from what I remember, the film industry is mostly using REYES now, which I believe essentially breaks a scene down into micropolygons (didn't reread the wiki page, going by memory), but some studios still use a hybrid approach such as rasterization for modeling and scene building, then render the final scene using advanced techniques such as ray tracing with global illumination.
Both Ray Tracing and REYES need access to the entire scene, meaning the scene needs to be contained in memory (which can be v
Re: (Score:2)
dedicated ray tracing hardware has been around for decades as standalone hardware
Your example has 14 dual-core processors at less than 200 MHz, which, if it is based on its predecessors, is a traditional RISC chip, meaning lots of pipeline stalls. You also had to use their rendering software. It has all the memory access problems I mentioned. This is NOT AT ALL SIMILAR to what I was saying. You completely misunderstood my post if you think this hardware has any bearing.
Imagine a system with 10,000 100 MHz cores on a single CPU. Now compare rasterization vs raytracing on that machine.
Re: (Score:2)
Of the rendering jobs we submit, approximately 0.001% use raytracing exclusively, about 0.5% make use of raytracing extensions.
Are you using radiosity for GI, or are you not using any GI for the remaining 99.499%?
Re: (Score:2)
Re: (Score:2)
Interesting, thanks for the reply.
I'm working on a physically-based renderer, and I absolutely understand what you mean about realistic vs real. We find we have to insert additional light sources etc to make a good image, just as you would in a professional photo shoot.
Re:Never ending chase... (Score:5, Informative)
Rockoon, you are mistaken in a lot of your points. Even though you seem a bit angry, please allow me to explain. (I work for nvidia, but I do not speak for them.)
Firstly, in rasterization, 4xAA does mean 4 samples per pixel. The short version is that 4xAA basically means that we render into buffers that are twice as large in the X and Y directions (so 2*2 is 4), and then resolve the extra pixels with hardware when we present the backbuffer into the front buffer.
I can't speak to 4xAA in raytracing, but to be apples-to-apples, it would have to literally be extra rays in the X and Y directions. Note that I'm not claiming there's a 4x performance penalty here, though, because modern ray tracers rely a lot on cache coherency to be performant. Algorithmically, I would agree that there really is a potential for 4x the cost, but algorithmically we don't care about the constants we multiply by, right?
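A minimal C++ sketch of that supersample-and-resolve idea -- a plain box filter over an assumed linear buffer layout (real hardware does this in the display path, often with better filters):

    #include <vector>

    struct Color { float r, g, b; };

    // src is (2*w) x (2*h), rendered at double resolution;
    // dst is w x h. Average each 2x2 block down to one displayed pixel.
    void resolve4xAA(const std::vector<Color>& src,
                     std::vector<Color>& dst, int w, int h) {
        for (int y = 0; y < h; ++y) {
            for (int x = 0; x < w; ++x) {
                Color sum{0, 0, 0};
                for (int sy = 0; sy < 2; ++sy)
                    for (int sx = 0; sx < 2; ++sx) {
                        const Color& c = src[(2 * y + sy) * (2 * w) + (2 * x + sx)];
                        sum.r += c.r; sum.g += c.g; sum.b += c.b;
                    }
                dst[y * w + x] = {sum.r / 4, sum.g / 4, sum.b / 4};
            }
        }
    }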
Third, it's important to consider what current cards do because they're the largest install base, and they are what developers will target. It's also important if you believe that hybrid raytracing is the future--almost all modern raytracers use rasterization for the eye rays to try to help with the pixel complexity problem.
Fourth, you are correct. In fact, there are probably relatively few hardware inventions that didn't begin their life as a software implementation--CPUs excepted.
Finally, you are incorrect. Raytracing scales O(pixels) and O(ln(complexity)). Rasterization is relatively constant in the number of pixels, and O(complexity). I agree, scene complexity has gone up considerably (and continues to go up considerably) every generation of new titles. Fortunately, in the same time period rasterization has massively decreased the cost of processing geometry while simultaneously increasing the ability to parallelize those types of workloads. Modern GPUs (like the relatively old 8800 GTX [wikipedia.org]) can process in the neighborhood of 300M visible triangles per second. That means that if you're trying to redraw your scene at 60Hz, you can have around 5M triangles per scene per frame. The highest I've seen in most modern titles is in the 500K-1M range, so I think we still have some head room in this regard. Modern techniques, such as soft shadowing and depth-only passes, definitely eat into this count, which is why we're seeing much higher counts than we used to.
Regarding pixel complexity, the number of pixels that matters is more than just the resolution, it's also how many times you'd like to draw those pixels in a given second. Seven years ago, you were lucky to find a CRT that drew 1280x1024 (which is a weird, 5:4 resolution, but I digress) at more than 60 Hz. 85 was reasonably common, but finding a monitor that drew at 1600x1200x85 was pretty rare.
Now, you can find monitors that render at 1920x1200x120 for relatively cheap. And 240 Hz is on the way. [extremetech.com] That's a lot of pixel data to be moving and redrawing. And speaking from experience, I can say that leveraging coherence within a single frame is hard, and leveraging coherence between frames is virtually unheard of.
It's not that raytracing is an impossible dream, it's just that the GP was correct: it's no panacea.
I'd like to reiterate: though I work for nvidia, I do not speak for them.
Re: (Score:2)
Just because a monitor displays at 120Hz or 240Hz does not mean you need to render at that speed. Currently, the only time graphics cards render at the same speed as the monitor refresh is if vertical sync is enabled and they can render above that rate; otherwise they'll render at half. E.g., if the graphics card can only do 45fps and the monitor refreshes at 60Hz and vertical sync is enabled, then the monitor will get updates every 2 cycles, so rendering will effectively be done at 30fps.
Maybe I'm wrong, but
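A minimal sketch of the quantization being described, assuming a simple double-buffered vsync model (the 45fps/60Hz numbers are the comment's):

    #include <cmath>
    #include <cstdio>

    // Under vsync, a finished frame must wait for the next refresh,
    // so each frame occupies a whole number of refresh intervals.
    double vsyncFps(double renderFps, double refreshHz) {
        if (renderFps >= refreshHz) return refreshHz;
        return refreshHz / std::ceil(refreshHz / renderFps);
    }

    int main() {
        std::printf("%.0f fps\n", vsyncFps(45.0, 60.0)); // prints 30 fps
        return 0;
    }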
Re: (Score:2)
You are somewhat correct--there's no requirement that you have to have an image ready to display every time the monitor refreshes. If you don't, the old image will be displayed (or when vsync is off, the image will be updated as often as the data is available, possibly even multiple times per redraw).
However, what's the point of having a monitor that can display at 240 Hz if you're not using that rate to convey more information?
If 15/30/60 Hz is fine for you, then stick with the monitor you have now. If you
Re: (Score:2)
I don't think the reason for moving to those higher refresh rates is the desire for higher frame rates in 3D rendering, at least at this point.
Games played at high resolutions with a lot of detail can't be rendered even close to 120fps, and I think pretty much everyone would prefer a better-looking game at 60fps to a worse-looking one at 120fps. Although that may change as hardware gets better.
I think the primary benefit of moving to higher refresh rates is avoiding conversions between different media ty
Re: (Score:2)
It's tricky, and I was perhaps being disingenuous by not mentioning it. Sue me. :)
Pixels that are uncovered (ie, no geometry touches that pixel), are basically free. So if you're running a vertex shader that uses constants to always ensure that it covers 1 pixel on the screen, the pixel shader will only be invoked for that one pixel. Regardless of your resolution... You could be running at 1x1 or 1600x1200 with 8xAA, and the pixel shader will only run for that one pixel.
However, some pixels can be covered s
Re: (Score:2)
I love that every time I post anything remotely work related on /., I get one of these posts. I wonder if it's the same guy every time?
What other kind of hardware would you propose we build? What companies are you aware of that build non-proprietary hardware?
Re: (Score:2)
If you are a decent, well-learned programmer, essentially an expert in algorithmic complexity, then surely you understand the comparison O(n) vs O(log n) and why you cannot refute it with horseshit.
Meh. The same thing could be said about Z-buffer, which is O(n), vs Painter's Algorithm, which is O(n log n). Until the hardware became fast enough to overcome the much larger multiplicative constants in Z-buffer, Painter's Algorithm won and that's all there is to it. Not only did it take quite a while until har
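For reference, a minimal C++ sketch of the Z-buffer test in question: O(1) per fragment with no polygon sorting, at the cost of a depth compare and store for every pixel drawn (the buffer layout is an assumption):

    #include <cstdint>
    #include <vector>

    struct Framebuffer {
        int w, h;
        std::vector<float> depth;     // initialized to +infinity
        std::vector<uint32_t> color;
    };

    // Called once per rasterized fragment; unlike the Painter's
    // Algorithm, no back-to-front sort of the scene is ever needed.
    void writeFragment(Framebuffer& fb, int x, int y, float z, uint32_t c) {
        size_t i = size_t(y) * fb.w + x;
        if (z < fb.depth[i]) {        // keep only the nearest surface
            fb.depth[i] = z;
            fb.color[i] = c;
        }
    }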
Re: (Score:2)
Actually, your tree example has nothing at all to do with raytracing. Textured quads are used for leaves because of polygon count. The polygon count required to create every individual leaf is extremely huge, and it takes not only more render power but a hell of a lot more setup/translation processing.
The higher polygon count required would put just as much demand on a raytracer as it would on a REYES or scanline renderer. In fact it may put more stress, because raytracing scenes tend to require larger amoun
Re: (Score:3, Informative)
Textured quads are used for leaves because of polygon count.
The whole point of realtime ray tracing is that it scales with O(log(n)) instead of O(n) when it comes to polygons. Which means that you can and should model each leaf fully as polygons. That is actually done in quite a few other examples, such as the sunflower scene [openrt.de] or that Boeing model [uni-sb.de], where every last screw is modeled, and yes, ray tracing can handle those just fine.
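A minimal C++ sketch of where that O(log(n)) comes from: a ray walking a bounding-volume hierarchy discards whole subtrees whose boxes it misses (simplified slab test and illustrative types, not the OpenRT implementation):

    #include <algorithm>
    #include <memory>
    #include <vector>

    struct Ray { float o[3], d[3]; };   // origin, direction (nonzero components assumed)
    struct Box { float lo[3], hi[3]; };

    bool hitBox(const Ray& r, const Box& b) {
        float tmin = 0.0f, tmax = 1e30f; // standard slab test
        for (int i = 0; i < 3; ++i) {
            float t0 = (b.lo[i] - r.o[i]) / r.d[i];
            float t1 = (b.hi[i] - r.o[i]) / r.d[i];
            if (t0 > t1) std::swap(t0, t1);
            tmin = std::max(tmin, t0);
            tmax = std::min(tmax, t1);
        }
        return tmin <= tmax;
    }

    struct BvhNode {
        Box bounds;
        std::unique_ptr<BvhNode> left, right;
        std::vector<int> triangles;      // nonempty only at leaves
    };

    // Descends only into subtrees whose boxes the ray actually hits, so a
    // balanced tree over n triangles costs about log2(n) box tests per ray.
    void traverse(const Ray& r, const BvhNode& n, std::vector<int>& candidates) {
        if (!hitBox(r, n.bounds)) return;
        if (!n.triangles.empty()) {
            candidates.insert(candidates.end(),
                              n.triangles.begin(), n.triangles.end());
            return;
        }
        if (n.left)  traverse(r, *n.left,  candidates);
        if (n.right) traverse(r, *n.right, candidates);
    }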
Now there is of course a caveat: this scalability only works for static scenes, and things become quite a bit more problematic when stuff is an
Re: (Score:3, Insightful)
Re: (Score:2)
Isn't the point that x*log(n) (where x represents the extra cost of doing raytracing) is still going to be smaller than n (for large enough n)? In fact, the benefit of switching to raytracing (if it can approach its theoretical limit) is that for highly complex scenes it scales incredibly well (almost flat, in fact).
Re: (Score:2)
Re: (Score:2)
I've been working in 3D modelling and animation professionally for a long time, and started back with 3DS in the DOS days, and POV-Ray...
Honestly, I've never heard this. I've never experienced this either. From a performance point of view, raytracing has always been heavier with larger poly counts.
But it is interesting; perhaps I'm wrong and I've never noticed, and it's worth checking into. After all, I'm more of a modeller and animator than a render coder ;) but I've been around a while in the field.
Re: (Score:2)
Btw, I just checked out the sunflower and Boeing models. I'm not sure they show off exactly what you're referring to, because there is no text explaining what is going on in the scene, so here is my assumption...
In the sunflower case, each petal is rendered, and each leaf is as well (this is nothing new or specific to raytracing). In fact, each of those petals and leaves are "instances" of 1 petal and 1 leaf. It's simply cloned in memory from the source objects and it saves down on polygon count/memory e
Re: (Score:2)
Large parts of any game level are completely static, and so are many models, but you are absolutely right: dynamic scenes are a problem, and today's games have plenty of them. Which is exactly why I said I would like to see stuff that focuses on the strength of raytracing instead of trying to replicate current games.
I don't doubt that rasterization will dominate computer games for a long while to come, compatibility on multiple platforms alone pretty much guarantees that, all technical benefits aside. However
Re: (Score:2)
Why do people always compare static rendering with dynamic rendering?
Because the time to render that static image does count.
A billion dollar industry isn't wrong.
Billion dollar industries have been wrong before. And merely because a particular technique is right now doesn't mean it will be right forever.
Case in point: It used to be, everyone in the industry assumed shadows were too hard to do dynamically. So, as a performance hack, they were projected statically, and incorporated into the textures. If you had a light shining on a fan, you'd have a texture on the wall behind it of a spinning-fan shadow.
Then, co
Was it more fun? (Score:4, Insightful)
So was the ray-traced version of the game more fun? Or am I missing the point of games?
Re: (Score:3, Insightful)
In this case, yea. You were missing the point. It wasn't meant to make the game more fun. It was meant to show that ray tracing was possible on a fairly modern game. It's like modding a toaster to run BSD, or adding a laser turret to your mailbox: a substantial reason for doing it is to see if it's possible.
Re: (Score:2)
before after pictures (Score:4, Interesting)
When looking at the before/after pictures, was anyone else surprised when they read which was the raytraced version?
To me, the ship in the water looks better with the bump map.
Re:before after pictures (Score:4, Informative)
Re: (Score:3, Insightful)
it's got more obvious special effects, but the other one looks far more realistic.
Re:before after pictures (Score:4, Insightful)
Bad sample (Score:4, Informative)
As someone else noted, both pictures were raytraced.
To really show the difference between 2d and 3d water, you need to show the water interacting with a solid object close enough so that you can see that in one example the waves really go up and down and in another they're just a picture of waves on a mirror.
There's been a LOT of work making 2d water look dramatic, and I've seen people say they prefer 2d water in broad shots like this in other games (not even raytraced ones), but when you're in the game looking over the edge of a dock or looking at a nearby boat with the light behind you, it's pretty clear that spending more time on the physics of the water pays off.
Heck, even with 2d water, paying attention to the wave effects in shallow versus deep water pays off when you interact with it. And that's rarely done because it's not as dramatic.
Did Intel graphics improve when I wasn't looking? (Score:1)
Seriously, as of a year ago I would rather have used an old AGP TNT2 than the latest built-in Intel graphics. I improved the performance of a relative's machine by 35% after putting in a PCI GeForce 5600 instead of using the built-in Intel.
Did something happen over the past year or two that allows Intel to publish papers like this? I mean, their graphics are fine for a Windows desktop running Office and a browser, but it stops there unless something recently changed.
Re:Did Intel graphics improve when I wasn't lookin (Score:2)
There's a huge difference between the things they have in their lab and the things they're selling.
Re: (Score:1)
Re: (Score:3, Informative)
All of this stuff is done in software on the CPU, so the graphics hardware really doesn't affect it.
Re: (Score:1)
They're looking into raytracing *because* their graphics cards are so bad. They can't be bothered to juice up their video graphics department and productivity, so they're trying to make the world change instead of Intel changing their ways.
- They would be a lot better off if they would just stop making graphics processors altogether and let ATI and Nvidia do the integrated graphics, or better yet, motherboard makers should bloody well realize that Intel integrated is crap and stop buying integrated graphi
Re: (Score:1)
Erm... error? (Score:1)
Is it just me or does this widely disseminated professional document contain a then/than error?
Let's hope that wasn't one of the coding challenges they faced.
if (x > y/2) than
{
'uh oh
}
(This post written assuming "the other one" = "the other part of the ray bundle")
Still taking the wrong approach... (Score:4, Interesting)
They're still taking the wrong approach to raytracing. If Philip Slusallek was able to get 30 FPS in a raytraced game in 2005, using a single Pentium 4 behind a raytracing accelerator that was roughly equivalent to a Rage Pro in terms of gates and clock speed, it seems silly to me to ignore the possibilities of adding an "RPU" to the mix instead of just adding more general-purpose CPU power. Yes, I know that's Intel's thing, but even for Intel... a raytracing core would be a tiny speck in an i7.
Mod parent up insightful... (Score:4, Insightful)
Intel isn't trying to do ray tracing. Really, their point is to find a way to make GPUs unnecessary, since they are a threat to the CPU market.
They can call it "ray tracing extensions" to the i7 or i8 CPU. It's not like the x86/x86_64 instruction sets are some kind of blushing virgin whose precious architectural purity would be violated by adding instructions like "RT_LOAD_MESH" and "RT_LOAD_SHADER"...
What bothers me is how nVidia is missing the boat.
Intel promotional text (Score:5, Funny)
Classic games? (Score:2)
Why is an article about a game that was released in mid 2007 tagged classicgames? Quake is a classic game, Enemy Territory: Quake Wars certainly isn't.
But why? (Score:2)
Rasterizing triangles and the "first intersection" on a ray tracer actually give exactly the same result for a triangle mesh.
Ray tracing has a more obvious mapping onto the rendering equation, but rendering geometry or even first order reflections offers very little advantage (and several disadvantages) over rasterization techniques. Shadows are more implicit in ray tracing, but they don't look "better" until you have area light sources and start shooting a LOT of rays.
And that's really the problem. Most
Re:animation, bottlenecks, etc... (Score:5, Informative)
For this project, we started rewriting the renderer from ground zero. Because of this, the very first images from the renderer were not of typical ray-tracing caliber, but displayed only the basic parts of the geometry, without any shaders or textures
...to mean that they rolled their own.
Re: (Score:1, Redundant)
Re: (Score:2)
Re: (Score:3, Informative)
Re: (Score:2, Funny)
Re: (Score:1)
That's a classic Cult of the Dead Cow story, right from the start of the internet.
Re: (Score:3, Informative)
Guys, they did this work, but I played this game enough to be able to tell it wasn't fun to play. It tried to be a Battlefield 2 clone with a broken physics engine, and "real-time" shadows that wasted FPS and didn't need to be real-time at all; static objects could have just been baked into the megatextures like BF2. It was sad to see ETQW when it finally showed up a year late with suck-ass gameplay. Splash Damage and id should be ashamed of this product and tech.
QW:ET is one of the best-made, best-balanced team FPS games I have EVER played. If it draws from anything, it draws from the previous Enemy Territory game. I'm sure we've all played a lot of the original ET, given that it was free. QW is like a much-refined version of it, with a modern graphics overhaul and a more interesting setting.
a warmed over version of Doom3 / Quake 4 tech that was poorly coded by Splash.
I mean, come on? Flamebait if not outright troll. But insightful? Where's the evidence that this was poorly coded - this game is
Re: (Score:2)
QW:ET is one of the best-made, best-balanced team FPS games I have EVER played. If it draws from anything, it draws from the previous Enemy Territory game. I'm sure we've all played a lot of the original ET, given that it was free. QW is like a much-refined version of it, with a modern graphics overhaul and a more interesting setting.
I also thought the game was a poor BF2 clone with serious balance issues and poor hit detection. It also has a HUGE learning curve in comparison to most games in its class.
Re: (Score:2)
Re: (Score:2)
Don't worry, just wait for his next game. John Carmack is about to make you his bitch.
Re: (Score:2)
Re: (Score:2, Insightful)
Reality, by definition, is "dirty". We have dust, we have imperfections in every surface, no matter how carefully machined. Houses are never truly square, roads are never perfectly level, and points in a corner are always rounded. Always.
Computers, by definition, are "clean". Squares are always truly square, roads are as perfectly level as they were designed to be, and corners are always razor sharp, no matter how much you "zoom in".
The problem with modern graphics systems is they are computed to extreme levels of precision. If they incorporated a sort of fundamental randomness, if they were intrinsically uncertain, they just might be able to really approximate reality, which is messy, ugly, and imperfect.
You seem to be confusing texture irregularity with material consistency. A house wall is not perfectly "razor sharp", but no matter how many times you look at it, it does not suffer from "randomness" nor is it in any way "uncertain". At least not unless you are looking at a sub-atomic level. Also, the bandwidth would not be that high, if you take into account that human eyes have very little resolution, and thus an extreme amount of detail at a distance would be pretty much irrelevant.
Re: (Score:2)
Re: (Score:2)
Personally, I'm thinking about FPGAs which produce circuits at relatively low bandwidth but that are highly tuned to the task at hand.
Hardware-Accelerated Shaders Using FPGAs [dctsystems.co.uk]
Re: (Score:2)
Sure, but I believe the point is this: if you paid retail for the render farm this project used, you would be VERY disappointed with 1280x720 resolution at 20fps.
When you can get 1920x1080 @ 60fps playing ETQW on a $100 video card, you start to see the raytracing results in a different light.
Re: (Score:2)
I was talking about 1280x720 being today's resolution for gaming, and I bet realtime raytracers rendering 1280x720 would win the hearts of ANY gamer over today's or tomorrow's GPU-assisted games running at 1920x1080 or even double that.
And I was saying that you cannot get real-time 1280x720 raytracing today for a reasonable price. You can run the game rastered at 1280x720+ for under $1,000 from a major manufacturer (even with their video card mark-up), or you can spend $10,000 or more on the 4-socket, 16-core
Re: (Score:2)