
Students Evaluate Ray Tracing From Developers' Side 84

Vigile writes "Much has been said about ray tracing for gaming in recent weeks: luminaries like John Carmack, Cevat Yerli and NVIDIA's David Kirk have already placed their flags in the ground but what about developers that have actually worked on fully ray traced games? PC Perspective discusses the benefits and problems in art creation, programming and design on a ray traced game engine with a group of students working on two separate projects. These are not AAA-class titles but they do offer some great insights for anyone considering the ray tracing and rasterization debate."
  • I know this may not be a popular question, but what is the point with raytracing for games?

    We're finally getting to a level of technology with rasterization where we're producing visuals at a level which is "Good Enough" (or better) for practically every genre. Do we really need to get on the hardware treadmill for the next 10 years to get to a similar technology level to get slightly more realistic lighting and reflections?

    • by ZiakII ( 829432 ) on Wednesday June 25, 2008 @04:26PM (#23941191)
      I know this may not be a popular question, but what is the point with raytracing for games? We're finally getting to a level of technology with rasterization where we're producing visuals at a level which is "Good Enough" (or better) for practically every genre. Do we really need to get on the hardware treadmill for the next 10 years to get to a similar technology level to get slightly more realistic lighting and reflections?

      Yes, we do, because everything we do currently is just a hackish system where we use programming tricks and other methods to make it look realistic. Instead of a video card you would just need a faster CPU, which, if we go by Moore's law, won't be much longer.
      • by argent ( 18001 ) <peter@slashdot.2 ... m ['ong' in gap]> on Wednesday June 25, 2008 @04:38PM (#23941337) Homepage Journal

        Instead of a video card you would just need a faster CPU, which, if we go by Moore's law, won't be much longer.

        If the video card makers had picked up on the RPU [uni-sb.de] you could use your video card to get realistic high frame-rate raytraced games today.

        Dr Slusallek is working at nVidia now, so who knows?

      • Re: (Score:3, Interesting)

        by ZephyrXero ( 750822 )
        To take ZiakII's argument even further... One problem with today's game industry is how long it takes to make a video game. Back in the '80s, games could be made by a small handful of people in less than a year. Now it takes about 10 times as many people and anywhere from 2 to 5 years to produce a game. The biggest time (and of course money) sink in this process is art and level development. If raytracing can make things simpler and quicker for an artist to accomplish, then that will equal less time fo
        • Re: (Score:2, Interesting)

          The biggest time (and of course money) sink in this process is art and level development.

          This more or less hits the nail on the head. Raytracing isn't going to reduce this burden - you still have to write shaders for mental ray, for example. The biggest problem is that as the complexity of the required art assets increases linearly (with Moore's law), the amount of time taken to model, rig and animate those models increases exponentially.

          To put it bluntly, at the moment the industry has to invest in better tools to simplify the asset creation stage.

          • Re: (Score:3, Insightful)

            by mrchaotica ( 681592 ) *

            To put it bluntly, at the moment the industry has to invest in better tools to simplify the asset creation stage.

            Indeed. Who cares about raytracing? The next big thing in games is procedural generation of content!

        • by raynet ( 51803 )

          I don't think it matters that much to the artist how things are rendered in the end. Someone will still need to model all the characters and objects, do level design, textures, animations, etc. And as technology advances and we get better, faster graphics cards, the models and textures have to be higher quality, thus taking more and more time to produce.

      • Re: (Score:2, Insightful)

        Instead of a video card you would just need a faster CPU, which, if we go by Moore's law, won't be much longer.

        You'd actually need faster RAM first. Current 800/1066MHz RAM can't compete with the 1900MHz+ RAM available on the GPU - it simply can't feed the data to the CPU cores quickly enough. To speed up execution times of each core you'd need to limit the amount of RAM it can access to minimise cache misses. At that point, you suddenly have a stream processor that's pretty damn similar to the current GPU model.

      • Re: (Score:3, Insightful)

        by Goaway ( 82658 )

        Raytracing is just as much of a hack as rasterizing. It's just a different hack. Both are nothing but rough approximations of the rendering equation.

        Raytracing is better than rasterizing for rendering silver spheres on checkerboards, but the lack of those isn't the main problem with graphics these days. Raytracing is pretty much as bad as rasterizing at things that matter much more, such as decent global illumination.

      • With nVidia's recent moves to allow you to use your old vid cards as a PhysX card, it would be interesting if graphics processing did in fact transfer back over to the multi-cored CPUs while video cards themselves became relegated to doing advanced physics simulations.
    • Re: (Score:2, Insightful)

      Raytracing is an "embarrassingly parallel" task that should scale well as desktop computers execute more and more code in parallel. Can the same be said about rasterizing?

      • Re: (Score:3, Interesting)

        Actually, yes ...

        One of the main reasons that we now have 800 stream processors on fancy graphics cards is that you can split the most costly portions of advanced rasterization into hundreds of independent processes.

        • Re: (Score:3, Informative)

          by Solra Bizna ( 716281 )

          Actually, no.

          Rasterization is not embarrassingly parallel in the same way that raytracing is. Distributing tasks among those 800 "stream processors" is exceedingly complicated, because the underlying "task" involves iterating over every pixel that intersects with a given triangle rather than (as in raytracing) iterating over every triangle that could intersect with a given pixel.

          -:sigma.SB

          • And how would you calculate what intersects with what without iterating over each one?
            • by Solra Bizna ( 716281 ) on Thursday June 26, 2008 @12:45AM (#23945581) Homepage Journal

              The point is that, in raytracing, you can assign each of your 800 "stream processors" different pixels. Done. You're parallel. When one finishes, give it another pixel to work on, and repeat until you've rendered the whole thing.

              Each core still has to iterate over all (well, some, I'm oversimplifying) of the triangles, but it can do so COMPLETELY INDEPENDENTLY of the other cores and still come up with a good result. Your performance gains are almost linearly proportional to the number of cores.

              You can even have a relatively high-latency connection (Gigabit Ethernet, for instance) between the various cores, broadcast the scene data over this connection, and then receive individual "chunks" of rendered pixels back. I defy you to do that with rasterization.

              -:sigma.SB
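
              A minimal sketch of that scheme on a single shared-memory machine (hypothetical code, with trace_pixel() standing in for a real ray tracer): rows are handed out through an atomic counter and each worker writes only its own pixels, so the threads never coordinate beyond grabbing the next row.

                #include <atomic>
                #include <thread>
                #include <vector>

                struct Color { float r, g, b; };

                // Hypothetical per-pixel work: build the camera ray for (x, y), intersect
                // the scene, shade the nearest hit. A gradient stands in for all of that.
                Color trace_pixel(int x, int y) { return Color{x / 640.0f, y / 480.0f, 0.5f}; }

                int main() {
                    const int width = 640, height = 480;
                    std::vector<Color> framebuffer(width * height);
                    std::atomic<int> next_row{0};

                    auto worker = [&]() {
                        // Grab rows until none remain; no other coordination is needed.
                        for (int y = next_row++; y < height; y = next_row++)
                            for (int x = 0; x < width; ++x)
                                framebuffer[y * width + x] = trace_pixel(x, y);
                    };

                    std::vector<std::thread> pool;
                    unsigned n = std::thread::hardware_concurrency();
                    for (unsigned i = 0; i < (n ? n : 4u); ++i) pool.emplace_back(worker);
                    for (auto& t : pool) t.join();
                }

              The distributed variant described above just replaces the atomic counter with a work queue over the network and ships finished rows back.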

              • While the task of figuring out which pixels go to which triangles isn't easily parallelisable, running pixel and vertex shaders is. THAT's what the stream processors are used for.

              • Re: (Score:2, Insightful)

                This is a somewhat rose tinted view of ray tracing. You can't simply throw each pixel onto a different stream processor and expect it to work. Whilst this works for throwing a different pixel at a different CPU core, this does not work with the stream processors we currently have available to us.

                The problem we have both in the Cell, and in stream processors on a GPU, is that you can't arbitrarily access large data sets. So, it is impossible to write any code for a triangle that allows it to fire off ray
      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Yes.

        Any conceivable hardware architecture that is optimized for ray tracing will be even better at rasterization. It's a basic axiom, once you understand the way memory (has to) work.

      • by Goaway ( 82658 )

        Raytracing is an "embarrassingly parallel" task
        Not quite. Each ray has to access the entire world geometry. Try to do that in parallel, and you'll either need huge separate memories for each processor, or you'll run out of shared memory bandwidth pretty quickly.

        Rasterizing is far better when it comes to parallel memory access.

    • by argent ( 18001 ) <peter@slashdot.2 ... m ['ong' in gap]> on Wednesday June 25, 2008 @04:33PM (#23941271) Homepage Journal

      Realistic lighting allows you to use those clever algorithms in your head that you've learned over the past 20+ years in the real world, so when you see a flicker of a reflection or a change in the shadows in a darkened tunnel you can turn and blast the damn camper on the opposite rooftop before he nails you with his sniper rifle.

      • by Goaway ( 82658 )

        But raytracing doesn't do realistic lighting at all! It won't easily do global illumination, and it won't do reflected or refracted caustics either, at least not in realtime.

        • Re: (Score:3, Informative)

          by argent ( 18001 )

          But raytracing doesn't do realistic lighting at all!

          It does more realistic lighting than rasterization, and it definitely will do caustics... you just need to shoot more rays. Whether you can shoot enough rays in realtime or not, well, that's where you need the speedup from an RPU.

          "Global Illumination" isn't a lighting effect, it's a heuristic for rasterizing that fakes some effects that require additional rays to calculate. In some cases that's ludicrously many rays, in others it's not. There's also some v

          • by Goaway ( 82658 )

            It does more realistic lighting than rasterization, and it definitely will do caustics... you just need to shoot more rays.

            What is traditionally referred to as raytracing would require immense numbers of rays to do caustics. To handle those, one needs to combine ray tracing with other methods, such as photon tracing.

            "Global Illumination" isn't a lighting effect, it's a heuristic for rasterizing that fakes some effects that require additional rays to calculate.

            No, global illumination is often implemented through various heuristics, but the term itself is general, and just means taking indirect light into account.

            There's also some very good (albeit still expensive) techniques to simulate radiance and other "global illumination" effects in raytracing.

            And many of those work just as well for rasterizing.

            • by argent ( 18001 )

              What is traditionally referred to as raytracing would require immense numbers of rays to do caustics. To handle those, one needs to combine ray tracing with other methods, such as photon tracing.

              "Photon tracing" is raytracing, starting from the light source. It's the same algorithm, so raytracing hardware (like an RPU) should also accelerate it.

              [global illumination] just means taking indirect light into account.

              Generating multiple secondary rays gets the same results, albeit at a higher cost. And to get cor
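
              A sketch of that point, with stub scene functions (hypothetical; the interesting part is that intersect() is the same nearest-hit query an eye-ray tracer uses): photons start at the light instead of the camera, and whatever they deposit is later gathered to estimate caustics and indirect light.

                #include <random>
                #include <vector>

                struct Vec3 { float x, y, z; };
                struct Ray  { Vec3 origin, dir; };
                struct PhotonHit { Vec3 position, power; };

                // Stubs standing in for the scene; a real tracer shares these with its camera-ray path.
                static bool intersect(const Ray&, Vec3* hit_point) { *hit_point = Vec3{0, 0, 0}; return false; }
                static Vec3 random_direction(std::mt19937& rng) {
                    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
                    return Vec3{u(rng), u(rng), u(rng)};   // crude; a real emitter samples the sphere properly
                }

                // Emit photons from the light and record where they land; only the starting
                // point differs from tracing camera rays.
                std::vector<PhotonHit> trace_photons(Vec3 light_pos, Vec3 light_power, int count) {
                    std::mt19937 rng(42);
                    std::vector<PhotonHit> photon_map;
                    for (int i = 0; i < count; ++i) {
                        Ray photon{light_pos, random_direction(rng)};
                        Vec3 p;
                        if (intersect(photon, &p))
                            photon_map.push_back({p, Vec3{light_power.x / count,   // deposit stored power
                                                          light_power.y / count,
                                                          light_power.z / count}});
                    }
                    return photon_map;   // later gathered (e.g. via a kd-tree) to estimate caustics
                }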

              • by Goaway ( 82658 )

                So do addition and subtraction, indirection, and object-oriented programming, but you wouldn't argue that a raytracer that used these wasn't a raytracer.

                No, what I argue is this: You get a bigger payoff by laying off the raytracing and just staying with rasterizing. Raytracing costs a lot, and the payback is small. There are methods to improve rasterizing that give a bigger payback for far less cost.

                • by argent ( 18001 )

                  Raytracing costs a lot, and the payback is small.

                  Raytracing only costs more than what you have to do to get almost-as-good results with rasterizing because you've got a dedicated hardware rasterizer, costing more than your CPU, doing the job for you.

                  Philipp Slusallek demonstrated a hardware raytracing engine in 2005 that was handling realtime raytracing despite being implemented on an FPGA with about as many gates as (and at a slower clock than) a Rage II... contemporary GPUs were running at 8x the clock and h

                  • by Goaway ( 82658 )

                    And raytracing is embarrassingly parallelizable

                    Only for trivial geometry. With real geometry, you'll quickly run into memory bandwidth issues, as every single ray can potentially access any part of the geometry.

                    • by argent ( 18001 )

                      With real geometry, you'll quickly run into memory bandwidth issues, as every single ray can potentially access any part of the geometry.

                      The available memory bandwidth on the RPU was less than the Rage II... to be precise, 350 MB/s (RPU, 2005) versus 480 MB/s on the Rage II.

                      The ATI v8650 has 111 GB/s, roughly 300 times the memory bandwidth of the RPU.

                      With that memory bandwidth, and a truncated version of the SaarCOR design to fit the constraints of the FPGA, the most complex scene in the SIGgraph paper was o

    • It removes the need for complex graphics cards. Most of the problems I have had with computers have been due to these very complex bits of hardware and software not quite working together.
    • Re: (Score:3, Informative)

      Head on over here to see what a raytraced Enemy Territory: Quake Wars [tgdaily.com] looks like. Pay particular attention to the water and windows.

      Now read everyone else's responses and realize that raytracing is a super-easy way to take advantage of multiple cores and simplify your code at the same time. All the crazy stunts and tricks you have to pull to get some of those lighting and reflection tricks can be thrown out the window, and the extra time could be used to ::crosses fingers:: make better gameplay. We
      • by Hatta ( 162192 )

        Honestly, I'm not that impressed with the raytraced Quake Wars. Some nicer textures and higher res models could make it a lot prettier with a lot less horse power. Yes, the reflections are impressive, but reflections aren't that important really.

        • Honestly, I'm not that impressed with the raytraced Quake Wars. Some nicer textures and higher res models could make it a lot prettier with a lot less horse power. Yes, the reflections are impressive, but reflections aren't that important really.

          Reflections (and various translucency effects) are all they were demoing. Raytracing probably doesn't have any special scaling problem with high resolution textures and models (I think it's an issue of "rays per pixel"), but that's not what they were worried about

          • by Goaway ( 82658 )

            Reflections (and various translucency effects) are all they were demoing.
            It's all they are demoing because it's pretty much all raytracing is actually good at, at least at realtime speeds.
            • Raytracing is also really good at curved surfaces. No need to tessellate everything... you can use real NURBS.

              • by Goaway ( 82658 )

                If you're doing it slowly and in software, yes. But to get to realtime speeds you either need to make sacrifices, or use hardware (or even both).

                Although if you managed to design hardware that does ray-NURBS intersections quickly then you're set. But I'm assuming we'd actually just get simple ray-triangle intersections.

      • GPUs can do raytracing fine ... but it still won't be used most of the time, simply because you can get better looking images by not trying to use it for all your rendering.

        Raytracing, best in moderation.

      • Re: (Score:3, Interesting)

        All the crazy stunts and tricks you have to pull to get some of those lighting and reflection tricks can be thrown out the window, and the extra time could be used to ::crosses fingers:: make better gameplay.

        Not only that, but any weird new kind of gameplay which depends on interesting visuals can be done much, much easier.

        Simple example: Portal. Right now, it involves all sorts of crazy tricks. As I understand it, objects (at least, cubes which fall through the portal) are duplicated at both ends of a portal (in case you can see both ends at once)... The "hall of mirrors" effect of two portals across the hallway from each other is apparently intensive (it causes lag), and there is a hard (adjustable) limit,

      • I'd rather see decent GI algorithms implemented before raytracing. Most of the objects I see around me are not chrome or glass...
        • Raytracing is for anything that throws a reflection or alters light, including most metals and polished woods. In my office, I count a reflective desk surface, the glass face on my clock, an older CRT, the windows, the doorknobs, the globes that cover the lights, and some light reflectivity from the wood on the bookshelf.

    • by grumbel ( 592662 )

      I know this may not be a popular question, but what is the point with raytracing for games?

      The core idea is that rasterization scales linearly while raytracing scales logarithmically, i.e., when you have few polygons, rasterization is faster, but once you cross a certain threshold in the number of polygons, raytracing wins. As a nice side effect, raytracing can also do some things easily that are hard with rasterization, like reflection.
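
      In symbols, the usual form of that argument (assuming a spatial acceleration structure such as a kd-tree or BVH, and ignoring the constant factors that the rest of this thread argues about), with n triangles and p pixels:

        T_raster   ~ O(n)          every triangle is processed each frame
        T_raytrace ~ O(p * log n)  each ray walks a tree of depth about log n

      With p fixed by the screen resolution, that is linear versus logarithmic growth in scene complexity, which is the crossover being described.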

      The whole debate of course is if we have already crossed that threshold or even if we ever will, since the whole thing only works out for the very basic algorithm. W

      • The core idea is that rasterization scales linearly while raytracing scales logarithmically...
        Wouldn't that be just the opposite? If something scales logarithmically, each additional unit increase in the independent variable produces less increase in the dependent variable. IANAGP (graphics programmer), but I have heard it said that it is raytracing that scales linearly.
    • In the current model, Nvidia and ATI (now AMD) are the stars of the show. Gamers upgrade their video card at least annually, and now they often buy a pair to run SLI. The CPU is less important. Despite their efforts to force game devels to adopt multi-threaded game engines, the uptake hasn't been what they need to drive sales of newer multi-core chips.

      Move the heavy grunt of graphics from the GPU to the Intel Inside, so that gamers must have that octocore CPU and even better a pair of em and make em get wo

      • Despite their efforts to force game devels to adopt multi-threaded game engines the uptake hasn't been what they need to drive sales of newer multi-core chips.

        Yes they have. I've been working on multi-threaded engines for the past 3 to 4 years on the 360 and PS3 - as has every other programmer in the industry. With PC games, however, it's a different ball game entirely. For example, here is a list of games I found being used as a benchmark for a recent processor, and the dates they were released:

        2006: Warhammer Mark of Chaos
        2007: Supreme Commander
        2004: Unreal Tournament 2004
        2005: Serious Sam 2
        2005: Quake 4
        2006: Prey
        2004: Far Cry
        2007: Crysis

        And in the ar

    • by Hatta ( 162192 )

      We're finally getting to a level of technology with rasterization where we're producing visuals at a level which is "Good Enough" (or better) for practically every genre. Do we really need to get on the hardware treadmill for the next 10 years to get to a similar technology level to get slightly more realistic lighting and reflections?

      I thought we were "good enough" for practically every genre 10 years ago. I don't think there's much point from a gameplay perspective in anything prettier than Half-Life. B

      • by Surt ( 22457 )

        Half-life was sufficiently low polygon that it looked quite angular, which clearly wasn't the goal of the art direction. Artists mostly just need more tris to make their art look better, so for the next few generations at least there will be continuing improvements in the ability of games to deliver on the artist's intentions. Games are not just about game mechanics, they also involve story, and for delivering story, suspension of disbelief is key. As long as the technology is getting in the way of deliv

    • We're finally getting to a level of technology with rasterization where we're producing visuals at a level which is "Good Enough" (or better) for practically every genre. Do we really need to get on the hardware treadmill for the next 10 years to get to a similar technology level to get slightly more realistic lighting and reflections?

      Highlighted because nobody else seems to understand that you agree that raytracing is better but are saying that it's unneeded. Hell, for most genres (strategy, RPG, arcade, puzzle) I doubt that graphics make any difference to playability at all.

      I completely agree, but the fact is most people are consumerist bitches, and will end up buying whatever we're told is best; games will then all move to raytracing, meaning everybody will be forced to, even if, like you and I, they don't give a shit about having slightly

    • by Quarters ( 18322 )
      That's the problem, at least for the IHVs. DX9/10 or OpenGL2.x with programmable pixel pipelines and gobs of stream processors on the GPU are quite enough to get movie CGI quality games at sufficiently high framerates, resolution, and image quality for years. That does nothing to help sell the next uber video card at $600+ a pop, though. Raytracing would start the IHVs down a path of all new technologies to invent and exploit on 6 month release schedules.
  • Debate? (Score:5, Informative)

    by Vectronic ( 1221470 ) on Wednesday June 25, 2008 @04:30PM (#23941243)

    "...ray tracing and rasterization debate"

    I don't think there is any debate at all, RayTracing is by far superior, there is just the problem of computing power.

    Anyone (perhaps ask the modelers for the games) who deals with 3D software, knows the benefits of RayTracing for simulating reality (Reflections, Ambient Occlusion, Sub-Surface Scattering, etc)

    And once computing power reaches that level it will even speed up the process of creating games because you can let the RayTracing take care of shadows, reflections, highlights, etc instead of manually mapping them.

    Take a look at anything LightWave [newtek.com], Maya [autodesk.com], 3Dsmax [autodesk.com], Softimage [softimage.com], Blender [blender.org], etc spits out of its render engines, or visual effects in recent movies... granted, that's (as stated a few times in the discussion) years away... but, I don't think anyone is arguing against RayTracing.

    (-1 Bastard) ...but...whatever, I've been waiting for real-time RayTracing for years even just within my own 3D applications, never mind games...

    • I'd quibble. (Score:5, Insightful)

      by jd ( 1658 ) <`imipak' `at' `yahoo.com'> on Wednesday June 25, 2008 @04:59PM (#23941623) Homepage Journal
      Raytracing is superior to doing nothing, but conetracing, non-uniform conetracing and wavetracing are all superior to raytracing, and all but wavetracing benefit from adding in radiosity. The advantages of raytracing over all other methods are that it is totally parallelizable and can be implemented using a fairly simple set of algorithms, potentially allowing for a truly gigantic number of compute elements on a single die. One big headache, though, is that to get a significant visual improvement, you have to cast a large number of rays per pixel (or you can't do scatter properly) and you need multiple generations (ie: secondary light sources), where each generation needs to be processed twice - once for direct reflection, once for refraction. This would be fine, but it means different rays will take a different length of time to complete, which in turn means that to get smooth motion, you have to calculate the time for the slowest possible path and synchronize to that.
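
      A sketch of the recursion being described, with stub types and optics (hypothetical; a real tracer adds Fresnel weighting, shadow rays and proper scatter sampling). The point is only the branching: every hit on a reflective/refractive surface spawns two child rays, so the worst-case ray count doubles each generation and the slowest path sets the frame time.

        struct Vec3 { float x, y, z; };
        struct Ray  { Vec3 origin, dir; };
        struct Hit  { bool found; Vec3 point, normal, albedo; bool dielectric; };

        static Vec3 add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
        static Vec3 scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

        static Hit  intersect(const Ray&)     { return Hit{}; }   // stub: nearest-hit scene query
        static Vec3 reflect_dir(Vec3 d, Vec3) { return d; }       // stub optics
        static Vec3 refract_dir(Vec3 d, Vec3) { return d; }       // stub optics

        Vec3 trace(const Ray& ray, int depth) {
            if (depth == 0) return {};             // generation limit bounds the slowest path
            Hit h = intersect(ray);
            if (!h.found) return {};               // ray escaped to the background
            Vec3 color = h.albedo;                 // direct shading (shadow rays omitted)
            if (h.dielectric) {
                // Each generation is processed twice, once reflected and once refracted,
                // so the worst-case ray count grows like 2^depth.
                Ray reflected{h.point, reflect_dir(ray.dir, h.normal)};
                Ray refracted{h.point, refract_dir(ray.dir, h.normal)};
                color = add(color, scale(trace(reflected, depth - 1), 0.5f));
                color = add(color, scale(trace(refracted, depth - 1), 0.5f));
            }
            return color;
        }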

      Typically, however, game manufacturers do NOT mean "raytracing" when they say "raytracing". They mean basic rendering, i.e., applying shaders and other simple colouring techniques. RenderMan, the rendering package used to produce movies like Finding Nemo, uses scanline rendering, not raytracing. Rendering is popular with movie producers because it's fast and "good enough". (Audiences differ on the subject, with plenty of people preferring model-based special effects because the lighting is real and the reflections are correct - well, they'd better be!) My fear is that true raytracing and physically correct lighting models will be totally overlooked in favour of things that are cheaper to produce and therefore make more money.

    • I don't think there is any debate at all, RayTracing is by far superior, there is just the problem of computing power.

      People fail to realize that rasterization is required for raytracing to work. Rasterization is the process of taking some object or model that is precisely defined and placing it on a screen made of discrete components. Because of this, aliasing occurs. What's good at removing something ugly like aliasing? Current video cards. Even if ray-tracing were to be dominant, you still need a processor to handle aliasing. Ray tracing could handle the modeling aspect (including shadows and reflections, etc.), but the rasteriz

      • by plover ( 150551 ) *

        Rasterization is the process of taking some object or model that is precisely defined and placing it on a screen made of discrete components. Because of this, aliasing occurs.

        That's kind of a non-point. Rasterizing and aliasing happen all the time with standard television signals. But I have no problem distinguishing between a rendered actor (a "synthespian") and a human actor. It's not the aliasing, it's the full complement of lighting, shading, texture, model, physics, motion, etc.

        And even if you could perfectly render flesh, hair, cloth, etc., to the point where I couldn't see the difference in a static image, there's still the problem of motion. Human gait is still m

      • Sorry -- maybe I'm missing something -- but what's the problem? You cast multiple rays per pixel, and that's all you need to do; you just average them to get the pixel color that you display on screen. It's basically the same way that current video cards handle antialiasing: Supersampling.
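
        A sketch of exactly that, with a hypothetical trace() standing in for the single-ray renderer: jitter a few sample positions inside the pixel, trace each, and average the results. It is the same supersampling a GPU does, just performed one camera ray at a time.

          #include <random>

          struct Color { float r, g, b; };

          // Hypothetical single-ray tracer: builds the camera ray through the sub-pixel
          // position (sx, sy) and shades the nearest hit. A gradient stands in for it here.
          static Color trace(float sx, float sy) { return Color{sx * 0.001f, sy * 0.001f, 0.5f}; }

          // Average several jittered samples inside pixel (x, y).
          Color shade_pixel(int x, int y, int samples, std::mt19937& rng) {
              std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
              Color sum{0.0f, 0.0f, 0.0f};
              for (int s = 0; s < samples; ++s) {
                  Color c = trace(x + jitter(rng), y + jitter(rng));
                  sum.r += c.r; sum.g += c.g; sum.b += c.b;
              }
              return Color{sum.r / samples, sum.g / samples, sum.b / samples};
          }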
        • Re: (Score:3, Informative)

          by j1m+5n0w ( 749199 )
          If you want to do it right, there's actually quite a bit more to it [stanford.edu] than you're implying. But I do agree that this isn't exactly a show-stopper for ray tracing. Yes, you need a way to plot pixels on the screen, but it's not like that's an unsolved problem. You could probably even do the necessary filtering with a fragment program and give the video card something useful to do between calls to glVertex2f().
          • I wasn't aware that it was common practice to use real DSP-style reconstruction filters in computer graphics. I guess times are changing! I thought graphics cards typically rendered the scene at, e.g., 4x, and then used plain old bilinear interpolation to generate the new one. I remember when nVidia came out with its Quincunx supersampler; their marketing people were making a big deal out of it (and it was hardly an ideal reconstruction filter.)

            • I don't know what sort of filtering current games do on the final image, but it makes sense to do more filtering with a ray tracer because tracing more rays is so expensive. I expect the technique is more common on off-line renderers.

              (Texture filtering is pretty widely used everywhere, but that's a separate issue.)

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      "...RayTracing is by far superior, there is just the problem of computing power."

      And then there's people like me, who would consider this statement to be self contradictory.

    • Re: (Score:3, Insightful)

      by Anonymous Coward

      You already have raytracing in games, just not in realtime. Why spend the computing power in realtime raytracing when you can bake the static lighting into PTMs (polynomial texture maps), or the dynamic lighting into spherical harmonics maps, and use these to reconstruct the illumination, including self-shadowing effects, with convincing effects, at a fraction of the cost?
      Lightmaps already take ambient occlusion into account (check q3map2, for instance); this is nothing new. As for reflections, no one w

      • Why spend the computing power in realtime raytracing when you can bake the static lighting into PTMs (polynomial texture maps), or the dynamic lighting into spherical harmonics maps, and use these to reconstruct the illumination, including self-shadowing effects, with convincing effects, at a fraction of the cost?

        Because a) all those maps take up memory that could be used elsewhere and b) they don't take into account moving objects or lights.

        This is not to say the existing technology is without faults, on

        • Re: (Score:3, Interesting)

          by Targon ( 17348 )

          Even if CPU power is at the point where it may be enough, the whole point of having a video card is to offload that work to allow the CPU to deal with "more important" work. Console games tend to be very limited in terms of what is going on around the main character in a game. Sure, the graphics may be an issue, but you don't see games where the main character in a story has to push through a crowd of computer-controlled NPCs that are not just there as a part of some puzzle, but are all doing or tr

          • If processor power isn't enough to handle a crowd of NPCs that don't have much of an agenda in a game, how do we have enough processing power to handle the game AI PLUS Ray Tracing?

            I'm not much of a gamer, but I'm skeptical of the claim that AI is really using all that CPU power. I would guess that most of the CPU resources in a modern game are devoted to managing the complexity of the 3-d models the GPU has to render. But, if we assume that AI and physics really do use the whole CPU, and that the CPU i

            • by Targon ( 17348 )

              The difference between "The Witcher" and other computer games is that every NPC in an area has a scheduled script that runs all the time. Most games have the majority of NPCs just standing in place or wandering randomly without considering the time of day, so the AI needed for NPCs doesn't require as much.

              This is why I suggested that in a game where you have dozens of NPCs each with their own agenda in an area that it would take a LOT more CPU power than most people might expect. I don't know if there i

              • I don't know if there is a decent level of multi-threading in that particular game though, so that could have something to do with it.

                Also, a big factor would be whether the AI is written in something like C or something like Python. I don't doubt that there are a multitude of ways of implementing AI that would suck up all the CPU resources available, but assuming a computer fast enough to do real-time ray tracing, and assuming the NPCs don't have to be good "go" players, a sensible implementation ought

    • Re: (Score:3, Informative)

      by forkazoo ( 138186 )

      Take a look at anything LightWave, Maya, 3Dsmax, Softimage, Blender, etc spits out of its render engines, or visual effects in recent movies... granted, that's (as stated a few times in the discussion) years away... but, I don't think anyone is arguing against RayTracing.

      None of the programs you mentioned is a pure ray tracer. All of them can be used to make images which involve ray tracing, but a lot of great work has been done in those programs without tracing any rays. So, probably not the best example

      • RenderMan uses the scanline algorithm, but your graphics board doesn't use that. It uses the Z-Buffer algorithm. I suspect RenderMan only uses scanline for historic reasons. Scanline was mainly used when memory was expensive and RenderMan is decades old. Remember military simulators with Evans and Sutherland graphics? That was probably the last realtime hardware using scanline.
    • I know full well 3DSMax and Maya are the de facto standard at big-shop game developers. Blender is HUGE among open-source and indie game shops.

      But the "other" underdog is TrueSpace. I've been with them (Caligari) since version 1. They're now at 7.6. In the almost 10-15 years they've been around they've always been under one guy, Roman Ormandi. It says a lot to me, that he hasn't yet "cashed" out on this amazing 3D hand changing that's been going on with all the other apps.

      3DS started with Autodesk, went to Di

      • Wikipedia says Caligari was acquired by Microsoft this year. Which is a bit weird to me, considering they bought and then dumped SoftImage some years ago.

        I remember trueSpace from when I had my Amiga computer. Back then it was called Caligari and had one of the easiest to use 3D modellers. Seems like RealSoft is also still around. Not to mention NewTek's LightWave of course.

      • Maya and 3DS Max definitely rule the games world, followed closely by MotionBuilder and then XSI.

        Blender is probably used a lot in indie development, but I suspect that a far greater number use pirated copies of Max and Maya. (They may claim the assets were created in Blender for I-don't-want-to-get-sued reasons...)

        Your history of 3DS Max is slightly off. Max was published by Autodesk, but developed by Yost. Discreet bought Yost, and Max was then published by Kinetix. Discreet was eventually bought by
    • by Goaway ( 82658 )

      I don't think there is any debate at all, RayTracing is by far superior, there is just the problem of computing power.
      That's a common misconception. Raytracing gets you some benefits, but the cost is huge. However, there are many things it can't do either. Global lighting is just as hard with raytracing as with rasterization, but the payoff for getting that right is often far bigger than what you'd gain by using raytracing.
  • Art Direction (Score:3, Insightful)

    by vertigoCiel ( 1070374 ) on Wednesday June 25, 2008 @10:19PM (#23944731)
    If you haven't, go take a look at the screenshots in the article. Scroll up there, click on the link, and scroll down the page a bit.

    Seen them? Good. They demonstrate one thing very effectively: no matter what rendering engine you use, good art direction trumps technology, every time.

    These games are using "cutting edge" technology, and the article blathers on about how ray-tracing allowed them to use ridiculous amounts of triangles and have "complex lighting and shadows." But they look like crap.

    Contrast this with games like Twilight Princess, Super Mario Galaxy, Ico, Shadow of the Colossus, and Rez. All of them use rasterization on hardware between two and eight years old, but they look fantastic.
    • I think your definition of "looks like crap" is different than mine. I think they look pretty good. And I think their accomplishment is even more impressive when you consider that Outbound was made by a handful of students in about a year using experimental tools.

      Also, the high-resolution images don't seem to have done much anti-aliasing, which would have improved the perceived quality significantly. (Outbound does support a range of AA settings. Perhaps Bikker didn't want to be accused of submitting

    • Google for rtChess. (You'll probably need to replace the SDL.dll that comes with it to avoid a crash on Windows.) It's got the best chess set models I've ever seen in a game. Of course I'm biased. They're mostly CSG with real curved surfaces. I made them by sketching on some graph paper, reading off 3 points on each curve, and making a primitive that took that as a definition. On the performance side, the download is still using a horribly old ray tracer. The newer one (not available yet) is 2-4x faster per c
  • Why do articles always take the position that rasterization and ray tracing are mutually exclusive? They are not. Basically both schemes involve determining which pixels in a 2D canvas should be what colors based on the contents of a 3D canvas. Rasterization techniques typically iterate through the elements in the 3D canvas in order to determine which pixels in the 2D canvas to turn on, while ray tracing typically iterates through the pixels in the 2D canvas determining what color it should be by "shooting
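
    For what it's worth, that contrast reduces to two loop orders. A sketch with stub helpers (hypothetical; a real rasterizer also clips, interpolates and depth-tests, and a real tracer replaces the brute-force inner scan with a BVH or kd-tree):

      #include <vector>

      struct Triangle {};
      struct Pixel { int x, y; };
      struct Color { float r, g, b; };

      // Stub helpers, present only so the two loop skeletons below are complete.
      static std::vector<Pixel> pixels_covered_by(const Triangle&) { return {}; }
      static bool  ray_hits(const Triangle&, int, int, float* t)   { *t = 1.0f; return false; }
      static Color shade(const Triangle&)                          { return {}; }
      static void  write_pixel(int, int, Color)                    {}

      // Rasterization: outer loop over scene elements, inner loop over the pixels each covers.
      void rasterize(const std::vector<Triangle>& scene) {
          for (const Triangle& tri : scene)
              for (Pixel p : pixels_covered_by(tri))
                  write_pixel(p.x, p.y, shade(tri));               // plus a depth test in practice
      }

      // Ray tracing: outer loop over pixels, inner search for the nearest element the ray hits.
      void raytrace(const std::vector<Triangle>& scene, int w, int h) {
          for (int y = 0; y < h; ++y)
              for (int x = 0; x < w; ++x) {
                  const Triangle* nearest = nullptr;
                  float best = 1e30f, t = 0.0f;
                  for (const Triangle& tri : scene)                // a BVH/kd-tree replaces this scan
                      if (ray_hits(tri, x, y, &t) && t < best) { best = t; nearest = &tri; }
                  if (nearest) write_pixel(x, y, shade(*nearest));
              }
      }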
