
Unreal Engine and Unity To Get NVIDIA's New VR Rendering Tech (roadtovr.com)

An anonymous reader writes: NVIDIA has announced that Unreal Engine and Unity will see integrations of its new Simultaneous Multi-projection (SMP) rendering tech, which the company says can yield "a 3x VR graphics performance improvement over previous generation GPUs." NVIDIA recently introduced the technology as a unique feature of its latest series of GPUs built on the 'Pascal' architecture. According to the company, Simultaneous Multi-projection allows up to 16 views to be rendered from a single point with just one geometry pass, whereas older cards would need an additional pass for each additional view. This is especially beneficial for VR rendering, which inherently must render two views for each frame (one for each eye). With Simultaneous Multi-projection built into Unreal Engine and Unity, game creators will have much easier access to its performance benefits. SMP is supported by all of NVIDIA's 10-series GPUs, including the recently announced GTX 1060.
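The "one geometry pass for many views" idea can be sketched in a few lines of numpy. This is purely illustrative, not NVIDIA's API: the projection setup, the single triangle, and the use of two identical views are all made-up stand-ins for the up-to-16 hardware viewports SMP provides.

```python
import numpy as np

def perspective(fov_deg, aspect=1.0, near=0.1, far=100.0):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2 * far * near) / (near - far)
    m[3, 2] = -1.0
    return m

# One triangle in homogeneous coordinates (x, y, z, w).
triangle = np.array([[ 0.0,  1.0, -5.0, 1.0],
                     [-1.0, -1.0, -5.0, 1.0],
                     [ 1.0, -1.0, -5.0, 1.0]])

# Two views for the two eyes (identical here for simplicity;
# real stereo views would differ by the interpupillary offset).
views = [perspective(90) for _ in range(2)]

# The key idea: the geometry is walked ONCE and each vertex is
# replicated into every view, instead of re-submitting the whole
# scene once per view.
projected = [triangle @ v.T for v in views]
for p in projected:
    print(p[:, :2] / p[:, 3:4])  # NDC x,y after perspective divide
```

On older hardware the loop over `views` would instead be a loop over full render passes, each re-fetching and re-transforming the scene's geometry; SMP collapses that into one pass with per-viewport replication.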
  • Avoiding rendering views that wouldn't show up in the final picture? They call it LMS:

    SMP can be used specifically for VR to achieve what Nvidia calls "Lens Matched Shading". The goal of LMS is to avoid rendering pixels which end up being discarded in the final view sent to the display in the VR headset after the distortion process.

    How is that not ray tracing?

    • by Anonymous Coward

      No, they didn't implement ray tracing in hardware. The LMS technique is described very well here: http://www.roadtovr.com/nvidia-explains-pascal-simultaneous-multi-projection-lens-matched-shading-for-vr/

      As you can see, instead of starting with an oversized render and then distorting it, they use the SMP hardware to render projections that closely match the final lens-warped image, reducing the work needed to produce the frame sent to the display.

    • by Guspaz ( 556486 ) on Thursday July 07, 2016 @08:27PM (#52467759)

      How IS it ray tracing? Modern VR requires a lens-correction distortion be performed after rendering so that the image you see through the lens matches what was rendered. Lens matched shading breaks the image up into four quadrants and renders four projected views that are a closer match to where the detail will be in the final result.

      Here you can see an example done traditionally:

      http://i.imgur.com/FA56wzN.jpg [imgur.com]

      And here you can see the same scene rendered with lens matched shading:

      http://i.imgur.com/CsDouw0.jpg [imgur.com]

      When a rendered scene normally goes through the distortion shader, most of the pixels around the edges are going to be lost as the distortion is strongest at the edges. This particular technique avoids that by starting out rendering less detail at the edges.
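How much work the traditional approach wastes can be estimated with a toy model. Nothing here is a real HMD lens profile: the distortion coefficient `k`, the oversize factor, and the resolutions are all made up, purely to show that a large fraction of an oversized eye buffer is never sampled by the warp.

```python
import numpy as np

k = 0.3        # hypothetical barrel-distortion strength
n = 512        # output (display) pixels per axis
over = 1.6     # hypothetical oversize factor of the eye buffer
m = int(n * over)

# Output pixel centres in [-1, 1] coordinates.
ys, xs = np.mgrid[0:n, 0:n]
u = (xs + 0.5) / n * 2 - 1
v = (ys + 0.5) / n * 2 - 1
r2 = u * u + v * v

# Inverse warp: output pixels sample the eye buffer further out
# toward the edges, which is why the buffer must be oversized.
su = u * (1 + k * r2)
sv = v * (1 + k * r2)

# Mark which eye-buffer pixels are ever sampled by the warp.
sampled = np.zeros((m, m), dtype=bool)
si = ((su / over + 1) / 2 * m).astype(int).clip(0, m - 1)
sj = ((sv / over + 1) / 2 * m).astype(int).clip(0, m - 1)
sampled[sj, si] = True

wasted = 1.0 - sampled.mean()
print(f"fraction of eye-buffer pixels never sampled: {wasted:.2f}")
```

Lens matched shading attacks exactly this waste: by rendering the four quadrants with projections already close to the warped result, far fewer of the shaded pixels end up discarded.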

      You could also take this to another level and use it for foveated rendering (where you render detail based on where the eye is looking), rather than the current technique of rendering multiple viewports at different resolutions and blending between them. Foveated rendering is a huge win for performance in that it results in a drastic reduction in pixels rendered, but the hardware (very accurate and very low latency eye tracking) and software required to do it aren't quite ready for consumer use.
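The "multiple viewports at different resolutions" technique mentioned above can be sketched as a simple composite: one small full-resolution inset around the gaze point, pasted over an upscaled low-resolution periphery. The sizes, gaze position, and downscale factor below are arbitrary illustrative choices, and random arrays stand in for the actual renders.

```python
import numpy as np

H = W = 1024
fovea = 256              # side length of the full-res inset
gaze = (512, 512)        # hypothetical tracked gaze centre (row, col)
factor = 4               # periphery rendered at 1/4 resolution per axis

# Pixels actually shaded: the small full-res inset plus the
# downscaled peripheral view.
shaded = fovea * fovea + (H // factor) * (W // factor)
naive = H * W
print(f"shaded {shaded / naive:.1%} of the pixels of a naive render")

# Composite: upscale the periphery (nearest-neighbour here; real
# implementations blend across the boundary) and paste the inset.
periph = np.random.rand(H // factor, W // factor)  # stand-in render
full = periph.repeat(factor, axis=0).repeat(factor, axis=1)
inset = np.random.rand(fovea, fovea)               # stand-in render
y0, x0 = gaze[0] - fovea // 2, gaze[1] - fovea // 2
full[y0:y0 + fovea, x0:x0 + fovea] = inset
```

Even this crude two-viewport split shades only a fraction of the pixels of a uniform render, which is where the performance win comes from; the hard parts are the eye tracking and hiding the resolution boundary.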

      • Just wanted to thank you for the time you took to write this.

      • by phorm ( 591458 )

        "where you render detail based on where the eye is looking"

        Just to understand this better: does that mean it has some form of eye/iris tracking and then essentially ups the AA and/or poly count wherever in the scene the eye is focused, while rendering less detail around the periphery? Similar to how a photo has a sharp image at the focus point and a blurrier background?

        I did google it but wanted to be clear from somebody in the know.

        • by Guspaz ( 556486 )

          Your eye can only really see detail in a very small area where you are directly looking (in the centre of your vision), but your brain is very good at filling in the blanks and hiding this fact. It drops off extremely rapidly, and for the vast majority of your field of view, you can resolve barely more than basic colour and movement.

          The idea behind foveated rendering is, you use eye-tracking to figure out where the user's eye is looking, and then you render a very small full-detail image and place that where the eye is looking, rendering everything else at lower detail.
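The rapid falloff described above is what a foveated renderer exploits when picking per-region resolution. A rough inverse-linear model of relative acuity versus eccentricity is common in the foveated-rendering literature; the constant below is illustrative rather than a measured value.

```python
def relative_acuity(ecc_deg, e2=2.5):
    """Acuity relative to the fovea at `ecc_deg` degrees from the
    gaze point, using a simple inverse-linear falloff model.
    The half-acuity eccentricity e2 here is a made-up constant."""
    return 1.0 / (1.0 + ecc_deg / e2)

for ecc in (0, 5, 20, 60):
    print(f"{ecc:>2} deg from gaze: "
          f"{relative_acuity(ecc):.0%} of foveal detail")
```

Under this kind of model, most of the field of view can tolerate a small fraction of foveal resolution, which is why the potential pixel savings are so large.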

          • by phorm ( 591458 )

            What would be *really* cool would be to have a 3rd-party display that shows this in effect. If the VR user doesn't notice the difference, but a 3rd party watching on a screen can see the detail at the focal point and the lack thereof elsewhere, it would be pretty neat to watch.

            It would also be good to see how the eyes of different people might work. I'd imagine the size of the focal point or the amount of peripheral detail may differ slightly between people.

            • by Guspaz ( 556486 )

              There are demos out there you can look at, using modified Rift HMDs. A company called SMI has been working on it. The limitation isn't the understanding of visual acuity, but the overall polish and sophistication of the implementation:

              http://www.roadtovr.com/hands-... [roadtovr.com]

              Another major issue is the ability to actually derive speed benefits from this approach. If you're implementing it by (as they do in this demo) rendering three different views at different resolutions in different passes, there's a fair bit of overhead involved.
