
Wolfenstein Gets Ray Traced 184

Posted by Soulskill
from the ach-mein-framen dept.
An anonymous reader writes "After showcasing Quake Wars: Ray Traced a few years ago, Intel is now showing their latest graphics research project using Wolfenstein game content. The new and cool special effects are actually displayed on a laptop using a cloud-based gaming approach with servers that have an Intel Knights Ferry card (many-core) inside. Their blog post has a video and screenshots."
  • I don't get it (Score:4, Insightful)

    by Yuioup (452151) on Tuesday September 14, 2010 @03:47AM (#33570568)

    Why build a ray tracer into a 4th game after doing it for Q3, Q4 and ET:QW? Why don't they focus on improving the raytracing code already in the first three games?

    I dunno, but it seems like they're keeping themselves busy for the sake of looking busy.

    Y

    • Re: (Score:3, Interesting)

      by pieisgood (841871)

      Yeah, this project is simply here to validate itself.

      I don't know if that's entirely true, though. Carmack talks of slowly integrating raytracing technology into videogames. This research into raytracing in games could prove useful later in videogame development. As I understand it, most advancements in videogame visuals today are optimizations of old research. So I wouldn't rain on their parade entirely.

    • by cupantae (1304123) <maroneill@gmailPARIS.com minus city> on Tuesday September 14, 2010 @07:51AM (#33571846)

      I can't understand why they're not giving people what they want: ray traced Nethack.

      • I can't understand why they're not giving people what they want: ray traced Nethack.

        You can have ray traced nethack. Just print every screen on a laser printer with high-resolution TrueType fonts.

    • by Sigma 7 (266129)

      You can only do so much raytracing in the original games. For example, Quake doesn't support or include the map information to create visually detailed worlds - after that, you're guessing on how the world should be shown.

    • Re: (Score:2, Informative)

      by JCZwart (1585673)
      This is Intel, not Id. It's a tech demo to show off what Intel's technology is capable of. Ray tracing scenes in real time was absolutely unthinkable just a few years back (and honestly I'm quite impressed with what they've achieved here, since ray tracing is about the most expensive (though also most realistic) way to render a scene in 3D).
  • by Orphis (1356561) on Tuesday September 14, 2010 @03:49AM (#33570582)
    Mom, can I buy a new cloud to play Halo 10 ?
  • by dsavi (1540343) on Tuesday September 14, 2010 @03:57AM (#33570602) Homepage
    It's rendered in the cloud. If they had actually gotten more bang for the buck - i.e. made this run on conventional hardware - then I'd be interested. They're just doing something that has been done before, albeit maybe not in real time (but you never know, seeing these new OpenCL apps), running it on high-end servers, and piping it into a small laptop. I'm not sure how much of an achievement this is; we've all heard of gaming in the cloud before.
    • by Ecuador (740021)

      Exactly, using a bunch of servers to run a game on a laptop is neither impressive nor new.
      Plus, the game looks nothing like Wolfenstein, which by the way used to run fine on my 386SX - no raytracing there of course. Where are the narrow grey or blue stone-walled corridors? And what is all that furniture doing in Castle Wolfenstein?

  • by sych (526355)

    So... many... triangles!

    • Re: (Score:2, Funny)

      by BodeNGE (1664379)
      Oh my god, ... it's full of triangles!
    • Re: (Score:3, Insightful)

      by Rockoon (1252108)
      This is the true advantage of raytracing. A rasterizer would have to deal with each and every triangle in that chandelier.

      Rasterizers scale on O(triangles) while raytracers scale on O(pixels * log triangles). I don't remember if it was Microsoft Research or something out of Intel, but 5 or so years ago they did some scalability testing and concluded that about 1 million polygons was the sweet spot where raytracing and rasterization were about equal in efficiency, using the per-iteration constants derived i
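The scaling claim in this comment can be sanity-checked with a toy cost model. The per-triangle and per-ray constants below are invented for illustration (only the asymptotic shapes come from the comment), so the break-even point the loop finds is not the measured ~1 million-polygon figure:

```python
import math

def raster_cost(triangles, per_triangle=1.0):
    # A rasterizer processes every triangle each frame: O(triangles).
    return per_triangle * triangles

def raytrace_cost(pixels, triangles, per_ray=1.0):
    # A ray tracer fires one ray per pixel and walks an acceleration
    # structure (BVH/kd-tree), costing O(pixels * log triangles).
    return per_ray * pixels * math.log2(triangles)

pixels = 1280 * 720
for tris in (10**5, 10**6, 10**7, 10**8, 10**9):
    winner = "raytrace" if raytrace_cost(pixels, tris) < raster_cost(tris) else "raster"
    print(f"{tris:>13,} triangles -> {winner} is cheaper")
```

With equal unit costs the crossover lands well above a million triangles; plugging in real measured constants is what moved the break-even point the comment recalls.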
      • by grumbel (592662)

        A rasterizer would have to deal with each and every triangle in that chandelier.

        Or it could just do LOD with a geometry shader on the GPU.

        When the polygon counts do get high enough, there will be no looking back.

        The problem is that you don't just want high polygon counts, you want high polygon counts for dynamic objects. Ray tracing's strength is in static objects; as soon as stuff moves and deforms, ray tracing runs into quite a few issues - not necessarily unsolvable issues, but the demo was rather lacking in that respect, as the particle system looked like complete garbage compared to today's games.

        Or to put it another way: The job of a tech demo is t

      • > When the polygon counts do get high enough, there will be no looking back.

        Academic fairy land. Polycounts will never get high enough. There is a physical limit on how many polygons an animator/modeller/rigger can actually handle in a sane way. The higher the number of polygons in a character, the more time an artist has to spend painting the skin weights and mapping the UV coords. The relationship between the number of polygons and the asset creation time is exponential. Double the poly count, and ex
  • by rh2600 (530311) on Tuesday September 14, 2010 @04:13AM (#33570694) Homepage
    When a laptop packing a multi-GHz 64bit CPU with gigs of RAM gets called a thin client...
    • by aws4y (648874)
      it is thinner than the server cluster computing each frame at the back end of this canned demo.
      • by RulerOf (975607)

        it is thiner than the server cluster computing each frame at the back end of this caned demo.

        Not only is it thinner, but it's Intel certified not to squash your nuts when used on your lap, unlike the rackmount server.

        ...I learned that the hard way.

        The rackmount servers do tend to run cooler, though, so if you're not terribly attached to your nuts....

    • It is a thin client. All it's doing is holding the client software to accept the pre-rendered feed. It does nothing but hold a high speed network connection and display rendered frames.

      It's their fault for using such a high powered bit of kit, but if it's doing no processing of its own it's still just a thin client. Albeit extremely expensive.
      • by drinkypoo (153816)

        It's their fault for using such a high powered bit of kit, but if it's doing no processing of its own it's still just a thin client. Albeit extremely expensive.

        Seems more like a thick client anyway...

        I doubt they could have done it without the bandwidth that the newer hardware affords. Intel has traditionally been starved for bandwidth of all types; not so now.

    • by Sulphur (1548251)

      That is dual core you insensitive clod.

  • Poor ray tracing (Score:2, Interesting)

    Their ray tracer has a few issues.
    -The player does not appear in the scope reflection (but his shadow does).
    -The people's shadows are cast in a different direction than the car's.
    • Re:Poor ray tracing (Score:4, Informative)

      by Purity Of Essence (1007601) on Tuesday September 14, 2010 @04:38AM (#33570778)

      1. It's extremely common in FPS games for the player model to be excluded from the player perspective. It really complicates things and usually doesn't look good without a lot of extra work.

      2. That's not the car's shadow. The building shadow is the shadow you are seeing. You can't see the car's shadow because the car is mostly (if not entirely) shadowed by the building behind it. The viewing angles were not suited for showing a shadow cast by any directly illuminated portion of the car.

      • Re: (Score:2, Informative)

        by Anonymous Coward

        You are right, the player model is often excluded, but that isn't really necessary. id Software in particular has been known to show the player model's shadows and reflections (including in mirrors) since Doom 3.
        And if you really want a game with not only a visible player model but actually pretty good player animation and physics, you should try Dark Messiah of Might and Magic.

  • What's the point? (Score:2, Insightful)

    by pacinpm (631330)

    I know they just started, but still... what is the point of this? There are no upsides to this kind of rendering. It's slower (you need 4 servers) and it looks worse (they had no antialiasing, ugly smoke, no complex lighting). You can do some things like reflections, refractions and portals a bit more easily than with other methods, but most of the time you don't need 100% correct reflections/refractions (simplified models work quite nicely), and security cameras were implemented in Duke Nukem 3D on i486 machines without problems.

    • by retroStick (1040570) on Tuesday September 14, 2010 @04:40AM (#33570790)
      As someone who has dabbled with raytracing before, I would have to agree. It's an interesting tech demo of something that's possible, but not really of practical use. For instance, they showed the chandelier with a million polys - that's all well and good, but it's on the ceiling! If the game was actually being played, the player would never get close enough to see those clever refractions. (And even if they did, the demo shows the frame rate would drop to around 17-20 FPS).
      • by hairyfeet (841228)

        Not to mention who is actually gonna use this thing? Bandwidth ain't cheap for most folks, and uncapped connections are becoming a thing of the past. Finally you have the fact that a good 90%+ of the games are made for either the consoles first, or are designed to be "multiplatform" which means my $36 HD4650 I bought over a year ago plays just about every game out there at my LCD native 1600x900 thanks to the consoles being so behind the curve.

        So other than spending a whole bunch of money and man hours so

    • by gmthor (1150907)
      From what I heard, raytracing scales better than rasterisation. In other words, O(raytracing) \subset O(rasterisation). Obviously, raytracing has really bad coefficients.
      • It should scale well for multiple clients, particularly where surfaces are not perfect optical reflectors. If every surface scatters, then each client only needs tracing for the last leg of every ray.

      • Re:What's the point? (Score:4, Informative)

        by TheRaven64 (641858) on Tuesday September 14, 2010 @07:20AM (#33571622) Journal
        Not quite. The complexity of rasterisation is (very) roughly O(number of polygons * number of lights). The complexity of ray tracing is O(number of rays). The number of primary rays is the number of pixels (sometimes multiplied by 4 or 9). The number of secondary rays depends on the number of lights (you fire a ray into the scene and then a secondary ray from what it hits to each light). This means that increasing the complexity of the scene does not affect the ray tracing time very much, but increasing the resolution does. On the plus side, ray tracing gives you shadows and reflections for free. It also degrades more gracefully - you can get a lower quality scene quickly (just from one primary ray per pixel) and then add the details from secondary rays and extra rays if the user doesn't move. In contrast, rasterisation tends to just lower the frame rate.
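The ray-count arithmetic in this comment can be made concrete. The light count and sampling factor below are hypothetical, chosen only to show how resolution, not scene complexity, drives the total:

```python
# One primary ray per pixel, optionally supersampled; one shadow ray
# from each hit point toward each light (figures below are assumptions).
width, height = 1280, 720
samples_per_pixel = 1   # would be 4 or 9 with supersampling
lights = 4              # hypothetical scene lighting

primary_rays = width * height * samples_per_pixel
shadow_rays = primary_rays * lights   # one secondary ray per light per hit
total_rays = primary_rays + shadow_rays

print(primary_rays)  # 921600 primary rays per frame
print(total_rays)
```

Note that doubling the scene's polygon count changes none of these numbers, which is the graceful-degradation point being made here.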
        • > The complexity of rasterisation is (very) roughly O(number of polygons * number of lights).

          Doesn't deferred rendering improve that though?
    • Re:What's the point? (Score:4, Interesting)

      by leuk_he (194174) on Tuesday September 14, 2010 @05:18AM (#33570948) Homepage Journal

      There is no point now. But in 10 years (maybe sooner) CPU speeds will have increased to the point that you don't need a high-performance cluster. It would be nice if you could at that point run a game in full detail without an advanced GPU.

      If you wait to start researching raytracing until the hardware is cheap enough, you are too late.

      And as for quality: the fun of a game has little to do with graphics quality. But graphics have to advance, or else we would still be looking at Pong-like graphics. People buy 1080p TVs at sizes where it is almost impossible to see the difference from 720p. But they still want the best quality.

      PS: when they speak of Wolfenstein I still think of the 1992 predecessor of Doom that was playable on a 286.

      • by Amouth (879122)

        PS: when they speak of Wolfenstein I still think of the 1992 predecessor of Doom that was playable on a 286.

        That's what I was thinking. It would be fun to see it re-released with nothing but the graphics improved (still no mouse aim, no multi-level maps, no physics).

  • Project Offset (Score:5, Interesting)

    by nacturation (646836) * <nacturation@gma[ ]com ['il.' in gap]> on Tuesday September 14, 2010 @04:30AM (#33570748) Journal

    Anybody know what happened to http://www.projectoffset.com/ [projectoffset.com] ? They released tons of killer videos showing an amazing game concept, outstanding real-time effects [youtube.com]... then Intel buys them and... nothing!

  • by Hadlock (143607) on Tuesday September 14, 2010 @04:32AM (#33570756) Homepage Journal

    Yeah, you're rendering Wolfenstein on a cluster... but can you get Wolfenstein running on a Beowulf cluster... or, dare I say it... a Beowulfenstein cluster???
     
    ;)

  • Ironic (Score:3, Interesting)

    by Anonymous Coward on Tuesday September 14, 2010 @04:37AM (#33570768)

    That none of Intel's graphics processors have any hope in hell of doing real-time ray tracing.

    • by dave420 (699308)
      They were working on Larrabee, using up to 48 cores (essentially P54C Pentium cores, with some modifications), which would be much better suited to real-time raytracing than any existing GPUs.
  • fps counter lying? (Score:3, Interesting)

    by citizenr (871508) on Tuesday September 14, 2010 @04:37AM (#33570772) Homepage

    The chandelier part displays 40 fps in the top right, but you can clearly see on screen that it's more like 15. Not to mention the unimpressive difference between RT and the normal renderer. I was expecting something more lifelike.

  • by loufoque (1400831) on Tuesday September 14, 2010 @04:44AM (#33570808)

    The very idea of using the cloud to render a FPS is preposterous and will never work in practice, for obvious latency reasons.

    • Re: (Score:3, Funny)

      Is that supposed to be ironic given the runaway success of the OnLive game service? http://www.onlive.com/ [onlive.com]

      • Re: (Score:3, Insightful)

        by Fross (83754)

        Do you have anything to back up that "runaway success" claim? As far as I can tell it's been shunned by hardcore gamers due to >100ms input lag, and I've not seen anything about it having huge uptake.

    • Re: (Score:3, Insightful)

      by Thanshin (1188877)

      The very idea of using the cloud to render a FPS is preposterous and will never work in practice, for obvious latency reasons.

      How else will you start training for the moment when that computing capacity is on every PC?

      You use the cloud, ignore the lag, and build an engine ready for the generation of computers that will come in five or ten years. You'll lose a lot of your investment, but anyone who starts studying RT at that point will be years behind you.

      • by kramulous (977841)

        Forward thinking? That's craziness. Where on earth did you get such a crappy idea?

        I wonder whether 5 years out is a little too far before this compute power hits the consumer.

      • Re: (Score:3, Insightful)

        by tibit (1762298)

        I don't know if latency is any sort of a problem. You're talking of a LAN connection. This technology is not meant to render stuff somewhere out there on the intertubes. It needs to be in the same building, or on the same campus.

      • You use the cloud, ignore the lag

        How much ignoring do you mean? I know you mean something between a turn-based slideshow and a twitch game [youtube.com], but precisely how much?

    • by Danathar (267989)

      change "will never work in practice"

      to

      "will never work in practice for me"

    • Funny you mention that...I've actually played twitch FPS games on OnLive, a cloud gaming service, and they were playable. If the cloud gaming servers were organized such that nearly all subscribers could reach a server bank within 100 cable miles the latency from the cloud would be negligible.
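The "100 cable miles" figure can be sanity-checked against raw propagation delay. The fiber speed below is the usual two-thirds-of-c rule of thumb, and real routes would add switching and encoding overhead on top:

```python
# Round-trip propagation time over 100 miles of fiber, ignoring
# routing/encoding overhead (assumed figures, not measurements).
speed_of_light_km_s = 299_792
fiber_speed_km_s = speed_of_light_km_s * 2 / 3   # ~200,000 km/s in glass
distance_km = 100 * 1.609                        # 100 miles one way

round_trip_ms = 2 * distance_km / fiber_speed_km_s * 1000
print(round(round_trip_ms, 2))  # well under 2 ms, dwarfed by a 16 ms frame
```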

  • The surveillance station.
    At a wall in the game you see twelve screens that each show a different location of the level. This can be used by the player to get a tactical gaming advantage. Have you ever seen something similar in a current game? Again - probably not

    Someone doesn't play many games. Many 3D engines, for well over 10 years, have had some means of rendering to a texture and throwing it up on a poly in the game world. I'm going to say that hardware-accelerated means of doing this have been common

    • by WhitetailKitten (866108) on Tuesday September 14, 2010 @05:23AM (#33570974)
      You wanna know the last game I played that featured this "surveillance camera" business?

      Duke Nukem 3D


      Ohhhh, snap!
      /* OK, it was one monitor at a time, but that's arguably a tactical decision to not let the player see every camera at once */
      • by daid303 (843777)

        You had to 'use' the monitor to view it. I think Unreal (or at least Unreal Tournament) was the first engine that managed to render a 3D scene back to a texture and display it in-game. And that's more than 10 years old.

  • That's... Lovely. (Score:5, Interesting)

    by L4t3r4lu5 (1216702) on Tuesday September 14, 2010 @05:00AM (#33570878)
    10fps to be able to see glass refraction on a surface so small it's totally inconsequential.

    Yawn. Wake me up when they get refraction working with a playable framerate like Source had seven years ago. Regarde [youtube.com]
    • They don't like the whole GPU market because the more powerful a GPU you have, often the less powerful a CPU you need. This is particularly true now that GPUs are out and out stream processors. Intel sees this as a threat, and AMD has made it a more explicit threat with their fusion idea (combined CPU/GPU chips).

      Well, as a result of this, Intel has done various things, some useful (like making extremely fast processors) and some not. This is one of the "not" things. They have been trying to get people interested

    • by Smidge204 (605297)

      Actually my first reaction was the "Lost Coast" demo that Valve put out with Half-Life 2 a few years ago. It had dynamic reflections and refractions in animated water surfaces and stained glass windows with refractions.

      And it didn't need four high-powered graphics servers to keep it above 10FPS either.
      =Smidge=

  • Everybody agrees that ray tracing is just awesome, and I at least think it's the future of 3D computer graphics. But there is only one big 3D hardware vendor left; AMD is more a CPU vendor that tries to get into the 3D market because Intel is too big in the CPU market. Intel only has small on-board graphics chips. Will we see ray tracing from Nvidia anytime soon?

    I sure hope that Intel or AMD try to take over the 3D computer graphics market with their CPU know-how (ray tracing mostly uses the CPU).

    • by grumbel (592662)

      Everybody agrees that ray tracing is just awesome

      Actually, no, the raytracing shown in that demo isn't awesome; it is rather primitive and ugly. You can render shiny spheres and static high-polygon objects with it, but basically nothing else.

      The stuff you need to make graphics look good is global illumination, and that demo had none of it. Today's games, on the other hand, are starting to get there: you can already find realtime ambient occlusion in some games, you can get soft shadows, and there have been tech demos even showing realtime photon mapping. And o

  • I want my kills to look hyper-realistic. And soon.

  • If the "Future of Graphics Rendering" was a job being advertised and potential candidates were asked to submit their Resume, then Intel's would be very thin.

    The job is asking for 5 years experience, with a tertiary qualification, preferably post grad.

    In Graphics, Intel has completed High School and done 2 years admin temping.

    And yes, I am still bitter about the Intel i740 Graphics Card [wikipedia.org]. Intel are just great at the snowjobs, even suckering John Carmack in a very ancient .plan [floatingorigin.com] update:
    "Good throughput,
    • by 0123456 (636235)

      The reality turned out to be what this story will be - smoke and mirrors.

      The i740 was OK once you stuck enough video memory on the card: what crippled it was Intel's crazy desire to pull textures over the AGP bus when other cards had large amounts of 128-bit VRAM. I presume the intention was to increase AGP takeup, but the reality was that it made AGP look bad when compared even to older 3dfx cards on PCI.

  • by DrXym (126579)
    It's interesting to see what a game looks like with raytracing, but I don't see any practical use for this tech until they can make it happen in a normal GPU.

    The problem with ray tracing is that if you have a 1280x720 display then you're going to have to fire off at least 921,600 rays, which must be intersected with objects, and these in turn split into more rays as they reflect / refract around the scene. In a complex scene you may end up firing millions of rays. And I say at least because at 1 ray per pixe

    • Re: (Score:3, Interesting)

      by tibit (1762298)

      Just thinking about the bandwidth is interesting. Start with 150E6 rays per second. Assume that traversing the binary space subdivision data structures takes, say, 256 bytes, along with another 256 bytes' worth of data for the polygon. That requires ~77 gigabytes/s of memory bandwidth, sustained. So in practice you need the bandwidth of the 6 fastest DDR3 sticks. And your algorithms had better keep the CPU's pipelines full and do proper prefetching, or else cache misses will have you for a day's worth of meals.
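The ~77 GB/s figure follows directly from the stated assumptions (150 million rays per second, 256 bytes of traversal data plus 256 bytes of polygon data per ray):

```python
# Sustained memory bandwidth implied by the comment's assumptions.
rays_per_second = 150e6
bytes_per_ray = 256 + 256   # BSP traversal data + polygon data

bandwidth_bytes = rays_per_second * bytes_per_ray
print(bandwidth_bytes / 1e9)  # 76.8 GB/s, i.e. the "~77 gigabytes/s" claimed
```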

  • Ahh Youth (Score:5, Insightful)

    by kenp2002 (545495) on Tuesday September 14, 2010 @07:06AM (#33571532) Homepage Journal

    "The surveillance station. At a wall in the game you see twelve screens that each show a different location of the level. This can be used by the player to get a tactical gaming advantage. Have you ever seen something similar in a current game? Again - probably not"

    Yes: in Duke Nukem 3D... over 15 years ago. And again in about 40 other FPS games that followed, including the Unreal series and more than a few Quake maps, especially capture-and-control maps.

    "There is nothing more amusing to watch than some young kid discovering something old and thinking it is new" - that quote in action.

    • Re:Ahh Youth (Score:4, Informative)

      by Sigma 7 (266129) on Tuesday September 14, 2010 @09:26AM (#33572760)

      Duke Nukem 3D, while it did have surveillance, only had one screen. If you stopped watching the screen, it would render a blocky image for one of the cameras it monitors rather than a clear image.

      It took until at least the Unreal Engine before a multi-screen display was possible, and I'm not sure how much that impacted the framerate.

    • Re: (Score:3, Informative)

      by SheeEttin (899897)
      Red Faction had security cameras in 2001: multiple screens on-screen, but I don't remember if you could change them. Half-Life 2 (or one of the episodes) had security cameras too, which you could change, but I don't think there was more than one at a time. (I don't think it's an engine limitation.)
  • by ledow (319597)

    Multi-million-dollar graphics render farms, and we still can't draw convincing fire or trees, or animate a human walking smoothly (even with motion capture you often "see the join" between one action and another).

    • by tibit (1762298)

      Motion capture is a crutch. What you really need for fluid motion of humans and other animate models is motion control akin to what they have in robots, say in Big Dog [youtube.com]. What motion capture does is basically leave the dynamics and control to a wetware system. It's a hack at best.

      The game engine needs a kinetics+kinematics simulator, and a controller like what you'd have in a real-life robot. If you push this idea forward, it enables you to do very realistic tricks. Say you get an extra strength pill -- all i

  • by Lisandro (799651)
    Why would someone want to raytrace a game which is 18 years old?
  • Greetings from Germany and a special shout-out to ze narrator who has ze proper aczent for demonstrating zis particular game :-)

  • The surveillance station. At a wall in the game you see twelve screens that each show a different location of the level. This can be used by the player to get a tactical gaming advantage. Have you ever seen something similar in a current game? Again - probably not.

    Uhm.... Counter-Strike had this in one of its levels like 10 years ago.

  • The model is highly detailed with around one million triangles.

    Sounds like the programmers are way too used to the dominant rendering model. One of the advantages of ray tracing is that you don't have to build everything out of triangles; you can have real continuous curves. For example, a ray-traced sphere can be an actual sphere. A lot of objects that require thousands of triangles on current GPUs can be produced with a much smaller number of primitives using constructive solid geometry in a ray tracing context. It's analogous to the difference between raster and vector graphics.
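The point about exact primitives can be illustrated with the classic ray-sphere intersection: a sphere is one quadratic solve, not a million triangles. A minimal sketch, assuming the ray direction is already normalized:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest positive hit distance along a normalized ray, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * e for d, e in zip(direction, oc))
    c = sum(e * e for e in oc) - radius * radius
    disc = b * b - 4.0 * c   # quadratic coefficient a == 1 (normalized dir)
    if disc < 0:
        return None          # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

# A ray along +z from the origin hits a unit sphere centered at z=5
# exactly at distance 4 - no tessellation error at any zoom level.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```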
