Crytek Bashes Intel's Ray Tracing Plans

Vigile writes "Despite all good intentions, Intel continues to see a lot of its work on ray tracing countered not only by its competition, as you'd expect, but also by the very developers Intel will depend on for success in the gaming market. The first major developer to speak on the Intel Larrabee and ray tracing debate was id Software's John Carmack, who basically said that Intel's current plans weren't likely to be implemented soon, or ever. This time Cevat Yerli, one of the Crytek developers responsible for the graphically impressive titles Far Cry and Crysis, sees at least 3-5 more years of pure rasterization technology before games move to a hybrid rendering compromise. Intel has previously eschewed the idea of mixed rendering, but with more and more developers chiming in for it, that is likely where gaming will move."
  • Stop motion movies (Score:4, Interesting)

    by BadAnalogyGuy ( 945258 ) <BadAnalogyGuy@gmail.com> on Friday April 11, 2008 @09:11AM (#23035216)
    For years some claymation movies were set up by hand and shot frame by frame in a process called stop motion [wikipedia.org]. While adequate, the resulting film was typically unnatural and the movements very stiff compared to live actors.

    Enter ILM and go motion [wikipedia.org]. Instead of filming static scenes, the clay was moved slightly during the shot to create a blurry frame. This blurry frame made the scene seem more realistic. The blur is what the eye picks up in the movie frame, so an actor walking in a scene is not a set of pinpoint focus shots but a series of blurs as the man moves.

    Ray tracing is great for static scenes. But movement is the key to games that require this much detail, so each frame should not be a single beautifully rendered framebuffer, but a blend of several framebuffers over the span of one frame. Star Wars did it great. Most computer games, not so much.
  • by ozmanjusri ( 601766 ) <aussie_bob@hoMOSCOWtmail.com minus city> on Friday April 11, 2008 @09:24AM (#23035334) Journal
    Ray tracing is great for static scenes.

    Where did you get that idea?

    Ray tracing can do selective motion blur very inexpensively. You test against a bounding sphere of the triangle's motion span, then interpolate the ray along an approximation of the triangle's path (see the sketch below).

    That's a very bad analogy you're using...
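
    A minimal sketch of the approach described above, assuming linear per-triangle motion between shutter open and close; the helper names, sample count, and scene values are illustrative, not taken from any real engine:

    ```python
    import numpy as np

    def ray_triangle(orig, d, v0, v1, v2, eps=1e-8):
        """Moller-Trumbore ray/triangle test; returns hit distance or None."""
        e1, e2 = v1 - v0, v2 - v0
        h = np.cross(d, e2)
        a = np.dot(e1, h)
        if abs(a) < eps:
            return None                        # ray parallel to the triangle
        f = 1.0 / a
        s = orig - v0
        u = f * np.dot(s, h)
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = f * np.dot(d, q)
        if v < 0.0 or u + v > 1.0:
            return None
        t = f * np.dot(e2, q)
        return t if t > eps else None

    def motion_blur_coverage(orig, d, tri_t0, tri_t1, rng, samples=8):
        """Fraction of the shutter interval a moving triangle covers this unit ray.

        tri_t0/tri_t1 are 3x3 arrays of vertex positions at shutter open/close.
        A cheap rejection test against one sphere bounding the whole motion span
        runs first; only then do we sample times and test interpolated triangles.
        """
        pts = np.vstack([tri_t0, tri_t1])
        center = pts.mean(axis=0)
        radius = np.max(np.linalg.norm(pts - center, axis=1))
        oc = orig - center
        b = np.dot(oc, d)
        if np.dot(oc, oc) - b * b > radius * radius:
            return 0.0                         # ray misses the swept volume entirely
        hits = 0
        for _ in range(samples):
            t = rng.random()                   # random shutter time in [0, 1)
            tri = (1.0 - t) * tri_t0 + t * tri_t1   # linearly interpolated triangle
            if ray_triangle(orig, d, tri[0], tri[1], tri[2]) is not None:
                hits += 1
        return hits / samples                  # partial coverage == motion blur

    rng = np.random.default_rng(1)
    tri0 = np.array([[-0.5, -0.5, 5.0], [1.0, -0.5, 5.0], [-0.5, 1.0, 5.0]])
    tri1 = tri0 + np.array([1.0, 0.0, 0.0])    # triangle slides right during the shutter
    print(motion_blur_coverage(np.zeros(3), np.array([0.0, 0.0, 1.0]), tri0, tri1, rng))
    ```

    Averaging a handful of time samples per pixel smears fast-moving silhouettes at a fraction of the cost of rendering several full frames, which is the "inexpensive" part of the claim.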

  • by mofag ( 709856 ) on Friday April 11, 2008 @09:27AM (#23035376)
    I ignored this story the first time around because I assumed it must be an April Fools' joke, which I think is not unreasonable: Intel leading innovation in the GPU sector....
  • by Floritard ( 1058660 ) on Friday April 11, 2008 @10:05AM (#23035814)

    you want at least 30 frames per second and even that isn't considered great by many gamers.
    I've always wondered about the need for a solid 60 fps in every game. A lot of games, especially console games of late, are going for that cinematic experience, and as theatrical movies themselves run at 24 fps, maybe all it would take is today's prettiest graphics and a really sophisticated use of motion-blur to make a good game running at that mere 24 fps. Maybe for first-person shooters and racing games, you want that almost hyper-real 60 fps of unblurred, crystal clear action, but for those other action/adventure games you could probably get by with less. There was an article recently about how playing sports games isn't so much like simulating you playing the sport as it is simulating you watching a televised sports program. In that case, why would you need more fps than the rate at which your television (NTSC: 29.97 fps, PAL: 25 fps) has traditionally broadcast? It might even look more real with fewer frames.
  • by Naughty Bob ( 1004174 ) * on Friday April 11, 2008 @10:19AM (#23035976)
    Dude, FPS for video games is not really comparable with FPS in films/TV etc., for one simple reason-

    In video games, the frame rate is also the rate at which everything else (physics, etc.) is calculated.
  • by Colonel Korn ( 1258968 ) on Friday April 11, 2008 @10:21AM (#23035994)
    Well-done motion-blurred 24 fps would currently take more power than 60 unblurred fps, but yeah, the notion isn't a bad one.
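
    Back-of-the-envelope numbers behind that, assuming the blur is produced the brute-force way, by accumulating several temporal sub-frames per displayed frame (the sub-frame count is an illustrative guess, not a benchmark):

    ```python
    # "Real" motion blur via an accumulation buffer: each displayed frame is the
    # average of several sub-frames rendered at slightly different simulation times.
    displayed_fps = 24
    subframes_per_frame = 4                  # illustrative; more samples = smoother blur
    renders_per_second_blurred = displayed_fps * subframes_per_frame   # 96 full renders/s

    unblurred_fps = 60                       # plain rendering, one pass per frame
    print(renders_per_second_blurred, ">", unblurred_fps)
    # Even a modest 4x blur at 24 fps costs more raw rendering work than unblurred
    # 60 fps; the cheaper alternative is a velocity-buffer post-process that smears
    # the finished image instead of rendering extra sub-frames.
    ```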
  • it's customers that drive the market, not developers.

    In the case of a company like Intel who's pushing a new technology, the developers are the customers. It's not Joe Consumer who's going to be buying into Intel's technology. (At least not until there are games that support it.) It's going to be the developers. Developers who will be taking a gamble on a next generation technology in hopes that they lead the pack. And as history has proven, the first out of the gate often earns the most money. (At least in the short term.)

    Of course, history has also proven that new technologies often fail. Thus the risk is commensurate with the reward. There may be a lot to gain, but there is also a lot to lose. A lot of dollar signs, that is.
  • by andersbergh ( 884714 ) on Friday April 11, 2008 @10:22AM (#23036022)
    No it's not; games usually have a separate loop for logic (physics, AI, etc.) running at, say, 30 updates per second. If the GPU can push more frames than that, then why not.
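
    Both descriptions correspond to real designs; a common middle ground is a fixed simulation timestep with rendering running as fast as the hardware allows, roughly like the sketch below (the function names are placeholders, not any particular engine's API):

    ```python
    import time

    SIM_DT = 1.0 / 30.0                 # physics/AI advance in fixed ~33 ms steps

    def game_loop(update, render, should_quit):
        """Fixed-timestep simulation with a free-running renderer.

        update(dt) advances game logic by exactly dt seconds; render(alpha) draws
        the scene, optionally interpolating between the last two simulation states
        by the fraction alpha. Frame rate and simulation rate are decoupled.
        """
        previous = time.perf_counter()
        accumulator = 0.0
        while not should_quit():
            now = time.perf_counter()
            accumulator += now - previous
            previous = now
            # Run as many fixed logic steps as wall-clock time demands...
            while accumulator >= SIM_DT:
                update(SIM_DT)
                accumulator -= SIM_DT
            # ...then render once per pass: the GPU sets the frame rate on its own.
            render(accumulator / SIM_DT)
    ```

    With this structure a 30 Hz simulation can sit behind a 60+ fps renderer, or the other way around, without either rate dictating the other.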
  • by Colonel Korn ( 1258968 ) on Friday April 11, 2008 @10:48AM (#23036366)
    Let's surmise for a minute:

    The problem with ray tracing, as Carmack said, is that it will always be much slower than raster-based graphics with a given amount of computing power. He pointed out that there's nothing impressive about Intel's demo of a game from two generations ago running sort of acceptably at moderate resolution on an overpowered Intel demo system. He said that they'll never be able to produce a ray traced engine competitive with the state of the art raster-based games, so the ray tracing, while technically satisfying, will in every case offer poor performance for inferior graphics.

    All of this boils down to a time lag. If raster graphics can do something in 2008, ray tracing can do it in 2012, etc. What if raster graphics stopped progressing for four years? Then ray tracing would have a chance to catch up, perhaps leading to new engines and APIs based on ray tracing, which would ensure long term use.

    But wait... raster graphics have already been at a standstill for two years, for the first time since their inception. When the 360 came out and then the 8800 line showed up to put it firmly in its technical place, gaming graphics capabilities suddenly stopped advancing. Not only did nVidia have its first unassailable lead over ATI in a long time, but the PC gaming market suddenly showed very strong signs of finally dying. Most of the remaining PC game developers shifted development to consoles, leading to (again, as Carmack pointed out) a stationary graphical hardware target for new games. The overall number of PC gamers managed to stay high, but almost all of them were playing World of Warcraft, which has very low graphics card requirements.

    Now two years have gone by, and WoW still dominates PC gaming, while only a few games have shown up that really push current hardware, and few people are buying them. It's a pity that the most graphically impressive game is also quite mediocre when it comes to gameplay. There's very little market pressure on nVidia outside of the small enthusiast community, and they've managed to milk a 4x hardware lead over consoles for an unprecedented length of time. The graphics card industry used to beat the living crap out of Moore's Law, but now it has managed to eke out a 10% improvement in over two years, which is just sad. The next-generation parts may or may not be coming soon, may or may not bring a large performance boost, and may or may not have any software at all to really justify their purchase.

    Going waaaaay back to the beginning, CPU speeds over this same time period have been keeping up with their normal exponential increase in power. At this rate, it would only take two more generations of PC gaming failure for ray tracing on the CPU to catch up with rastering on the GPU, and if that happens, it could end up going to consoles. Hell, it might even be good for PC gaming's health. Currently most console players have a PC, but with its Intel integrated graphics it's only suited to playing games from 6-8 years ago. Already those same PCs can probably match that with ray tracing. If games were only dependent on CPU speed, they'd be a lot more accessible and easily played by a much larger part of the population.

  • Re:Well... duh! (Score:2, Interesting)

    by daveisfera ( 832409 ) on Friday April 11, 2008 @11:19AM (#23036792) Homepage
    Actually, Carmack did say that he thought it would never fully transition to raytracing. He said that rasterization would always stay a step ahead and could "emulate" or fake a lot of the effects that raytracing can pull off. He did say that a hybrid method showed the most promise, but he also spent the majority of the time talking about how his new idea (has some goofy name that I can't remember right now) would be the best option of all.
  • by Anonymous Coward on Friday April 11, 2008 @11:56AM (#23037272)
    > It's no surprise that Intel is being bashed over their idea of real-time CPU ray-tracing. As anyone who has ever ray-traced will realize it's extremely slow. At times you're talking about HOURS PER FRAME

    Hours per frame if you do accurate global illumination... Which is also very expensive to do using rasterization, and isn't done in any modern game, by the way.

    Raytracing a very complex scene with a proper scene partitioning scheme can be done in under a second on a modern single-processor machine. If you add adaptive antialiasing (only done at visible edges), you can add maybe 50% more CPU time... If you want soft shadows, make that a few seconds of rendering time... You can add some approximate global illumination on top of this to make it more viable.

    Still, this can be done fast using multiple cores, and with specialized hardware, will be feasible in real-time. Someone has already shown very basic raytracing can be done in hardware, in real-time, using an FPGA (look up SaarCOR). If nvidia made a raytracing graphics card, they could absolutely deliver something on par with current rasterization products that runs at real-time framerates.

    In the end, it's slower than rasterization, but it looks a lot better... You get soft shadows, reflections, refraction, etc. all for "free"... I must also mention that with rasterization, implementing realistic effects can be very painful, while with raytracing it's a lot more "natural" and intuitive to program (since it's based on an actual simulation of light, rather than on projecting triangles).
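
    A toy example of the "reflections for free" point, hedged accordingly: this is a bare-bones Whitted-style sphere tracer invented for illustration, not code from any shipping renderer. The reflection term is literally just a recursive call along the mirrored ray, with a depth cap to stop mirrors facing mirrors from recursing forever.

    ```python
    import numpy as np

    MAX_DEPTH = 3                              # recursion cutoff for mirror bounces

    def hit_sphere(orig, d, center, radius):
        """Nearest positive hit distance of a unit-length ray against a sphere, or None."""
        oc = orig - center
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc < 0.0:
            return None
        t = -b - np.sqrt(disc)
        return t if t > 1e-4 else None

    def trace(orig, d, spheres, light_dir, depth=0):
        """Shade a ray: simple diffuse term plus a recursive mirror term."""
        if depth > MAX_DEPTH:
            return np.zeros(3)
        nearest, hit = None, None
        for sp in spheres:
            t = hit_sphere(orig, d, sp["center"], sp["radius"])
            if t is not None and (nearest is None or t < nearest):
                nearest, hit = t, sp
        if hit is None:
            return np.array([0.2, 0.3, 0.5])       # background colour
        p = orig + nearest * d                     # hit point
        n = (p - hit["center"]) / hit["radius"]    # unit surface normal
        diffuse = max(np.dot(n, -light_dir), 0.0) * hit["color"]
        # Reflection: no extra machinery, just trace() again along the mirrored ray.
        r = d - 2.0 * np.dot(d, n) * n
        k = hit["reflect"]
        return (1.0 - k) * diffuse + k * trace(p, r, spheres, light_dir, depth + 1)

    spheres = [
        {"center": np.array([0.0, 0.0, 5.0]), "radius": 1.0,
         "color": np.array([1.0, 0.2, 0.2]), "reflect": 0.4},
        {"center": np.array([2.0, 0.0, 6.0]), "radius": 1.0,
         "color": np.array([0.2, 1.0, 0.2]), "reflect": 0.4},
    ]
    light_dir = np.array([0.0, 0.0, 1.0])      # directional light shining into the scene
    print(trace(np.zeros(3), np.array([0.0, 0.0, 1.0]), spheres, light_dir))
    ```

    Refraction would be one more recursive call along a refracted direction, and shadows one extra ray toward each light; none of it changes the structure, which is the "natural and intuitive" quality the comment is getting at.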
  • Perhaps OT (Score:3, Interesting)

    by jjohnson ( 62583 ) on Friday April 11, 2008 @11:57AM (#23037292) Homepage
    But how much better do game graphics need to be?

    I played the Crysis demo on a recent graphics card, and was suitably impressed for ten minutes. After that, it was the same old boring FPS that I stopped playing five years ago. Graphics seem stuck in the exponential curve of the uncanny valley, where incremental improvements in rendering add nothing to the image except to heighten that sense of 'almost there' that signals to the brain that it's *not* photorealism.

    This isn't meant to be the same old "it's the gameplay, stupid" rant that we get here. It's simply to question why any real work is being done on rendering engines when we seem to have long since passed the point of diminishing returns.
  • by SilentBob0727 ( 974090 ) on Friday April 11, 2008 @12:40PM (#23037800) Homepage
    Personally, I'd love to see realtime raytracing see the light of day, because I recognize the math behind it as more "pure" than rasterization. Of course there are several algorithmic hurdles preventing pure realtime raytracing from getting there, unless you start to hyperparallelize the operations in a dedicated GPU, and even then there are obstacles; in the worst cases, a ray can bounce along an infinite path, dividing into multiple segments as it goes, leading to infinitely branched recursion until some heuristic or another cuts it short. And as we all know, "heuristic" is a fancy word for "cheat".

    Further, raytracing cannot handle advanced refraction and reflection effects, like the surface of water causing uneven illumination at the bottom of a pool, or a bright red ball casting a red spot on a white piece of paper, without preemptive "photon mapping", which is another cheat.

    In short, we have not been able to improve upon the original raytracing algorithms without "cheating reality". Modern raytracing that includes photon mapping is a hybrid anyway. So the raytracing purists really have nothing to stand on until there's enough hardware to accurately calculate the paths of quadrillions of photons at high resolution sixty times a second. I'm not saying we won't get there, I'm saying probably not within this decade.

    The reality is, the only advantage raytracing has over rasterization is its ability to compute reflection, refraction, and some atmospheric effects (e.g. a spotlight or a laser causing a visible halo in its path) with "physical" accuracy. The capabilities of rasterization have grown leaps and bounds since the 1960s, roughly linearly in proportion to available hardware.

    Purists be damned. A hybrid that uses each technique for what it's good at (raytracing for reflection, refraction, and atmospheric halos; rasterization for drawing the physical objects; "photon mapping" for advanced reflection and refraction effects) is likely the best approach here.
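
    To put a number on the "infinitely branched recursion" point: if every hit spawns both a reflected and a refracted ray, the worst-case ray count grows geometrically with bounce depth, which is why every practical tracer cuts it off. The figures below are that worst case for a nominal 1080p frame, not a measurement of any real workload:

    ```python
    # Worst case for a Whitted-style tracer where every hit spawns one reflected
    # and one refracted ray (branching factor 2), before counting shadow rays.
    width, height = 1920, 1080
    primary_rays = width * height                   # one primary ray per pixel

    for depth in (1, 2, 4, 8):
        rays_per_pixel = 2 ** (depth + 1) - 1       # geometric series 1 + 2 + ... + 2^depth
        print(f"bounce depth {depth}: up to {primary_rays * rays_per_pixel:,} rays per frame")
    # Shadow rays toward each light multiply these totals again, so real systems
    # cap the depth (the "heuristic") or kill deep paths probabilistically.
    ```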
  • Re:why bash? (Score:1, Interesting)

    by Anonymous Coward on Friday April 11, 2008 @04:24PM (#23040630)
    No better than the pointcloud techniques available in modern renderers like prman... and as those techniques are based on simple rasterizations not unlike shadow maps, and the energy transfer algorithms within those pointclouds are trivially parallelizable, there is every reason to believe that these strategies will still be orders of magnitude faster on the many-core parts than shooting 100s of rays per sample.

    Yes, I understand that a renderer has all day to do its job and that a game engine has to run like the clappers, but those offline renderers could also choose to raytrace if it was going to be faster... and in those situations you'd think the advantage would be magnified... but they go with rasterization techniques because they are more efficient and easier to distribute.

    Raytracing is conspicuous consumption of resource... exactly what a chip maker wants.
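
    For anyone unfamiliar with the technique being referenced: point-based renderers bake the scene's surfaces into a cloud of small oriented disks, each carrying an area and an outgoing radiance, then sum their contributions at every shading point. That gather is what parallelizes so easily, since every receiver is independent. A brute-force sketch of the energy transfer under those assumptions (real implementations cluster the cloud in an octree rather than touching every point, and this one ignores occlusion):

    ```python
    import numpy as np

    def gather_irradiance(recv_pts, recv_normals, cloud_pts, cloud_normals, areas, radiance):
        """Brute-force point-cloud energy transfer (no visibility/occlusion term).

        Each cloud point is a small disk emitter; each receiver sums the standard
        disk-to-point approximation L * A * cos_r * cos_e / (pi * d^2). The outer
        loop is embarrassingly parallel: no receiver depends on any other.
        """
        out = np.zeros(len(recv_pts))
        for i, (rp, rn) in enumerate(zip(recv_pts, recv_normals)):
            d = cloud_pts - rp                              # vectors to every emitter disk
            dist2 = np.einsum("ij,ij->i", d, d) + 1e-8
            dirs = d / np.sqrt(dist2)[:, None]
            cos_r = np.clip(dirs @ rn, 0.0, None)           # cosine at the receiver
            cos_e = np.clip(-np.einsum("ij,ij->i", dirs, cloud_normals), 0.0, None)
            out[i] = np.sum(radiance * areas * cos_r * cos_e / (np.pi * dist2))
        return out
    ```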
