Stereoscopic 3D Rendering Best Practices

It turns out that rendering stereoscopic 3D images is not as simple as slapping two slightly different views side by side, one for each eye. There is plenty of nuance to rendering a 3D view that properly mimics real-world vision, and lots that can go wrong if you aren’t careful. Oliver Kreylos, a VR researcher at UC Davis, emphasizes the importance of proper stereoscopic rendering and has written a great introduction to the topic for Oculus Rift developers.

The dangers of poor stereoscopic 3D rendering range from eyestrain and headaches to users not feeling right in the virtual world.

The latter, Kreylos told me, is the “biggest danger VR is facing now.” The problems caused by improper 3D rendering are subtle enough that the everyday first-time VR user won’t think, “this is obviously wrong, let me see how to fix it.” Instead, says Kreylos, they’ll conclude, “I guess 3D isn’t so great after all, I’ll pass.” That could be a major hurdle to widespread consumer adoption of virtual reality.

Oliver Kreylos is a PhD virtual reality researcher who works at the Institute for Data Analysis and Visualization and the W.M. Keck Center for Active Visualization at the University of California, Davis. He maintains a blog on his VR research at Doc-Ok.org, where a few months back he showed us what it’s like to be inside a CAVE.

Kreylos has a great introductory article about the ins and outs of proper stereoscopic 3D rendering: Good Stereo vs. Bad Stereo.

He also has an illuminating video that’s great for anyone not already versed in 3D rendering.

“…here’s the bottom line: Toe-in stereo is only a rough approximation of correct stereo, and it should not be used. If you find yourself wondering how to specify the toe-in angle in your favorite graphics software, hold it right there, you’re doing it wrong,” Kreylos wrote in Good Stereo vs. Bad Stereo.


“The fact that toe-in stereo is still used — and seemingly widely used — could explain the eye strain and discomfort large numbers of people report with 3D movies and stereoscopic 3D graphics. Real 3D movie cameras should use lens shift, and virtual stereoscopic cameras should use skewed frusta, aka off-axis projection. While the standard 3D graphics camera model can be generalized to support skewed frusta, why not just replace it with a model that can do it without additional thought, and is more flexible and more generally applicable to boot?” he concludes.
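
To make the distinction concrete, here is a minimal C++ sketch of the off-axis (skewed-frustum) projection Kreylos recommends. The matrix layout follows OpenGL’s glFrustum convention; the struct and parameter names are illustrative rather than taken from any particular SDK.

```cpp
// Off-axis ("skewed frustum") projection for one eye. The virtual
// screen is a rectangle in front of the viewer; the eye looks straight
// ahead (down -Z) and is shifted laterally by eyeOffsetX (e.g. +-ipd/2).
// Instead of rotating the camera toward a convergence point (toe-in),
// we keep the view direction fixed and skew the frustum so it exactly
// covers the screen rectangle.

struct Mat4 { float m[16]; /* column-major, OpenGL convention */ };

Mat4 offAxisProjection(float screenHalfWidth,   // meters
                       float screenHalfHeight,  // meters
                       float screenDistance,    // eye-to-screen, meters
                       float eyeOffsetX,        // lateral eye offset
                       float zNear, float zFar)
{
    // Project the screen edges onto the near plane. Because the eye is
    // shifted sideways, the left/right extents become asymmetric; that
    // asymmetry is the "skew".
    float s      = zNear / screenDistance;
    float left   = (-screenHalfWidth - eyeOffsetX) * s;
    float right  = ( screenHalfWidth - eyeOffsetX) * s;
    float bottom = -screenHalfHeight * s;
    float top    =  screenHalfHeight * s;

    // Standard asymmetric frustum matrix, as produced by glFrustum.
    Mat4 p = {};
    p.m[0]  = 2.0f * zNear / (right - left);
    p.m[5]  = 2.0f * zNear / (top - bottom);
    p.m[8]  = (right + left) / (right - left);
    p.m[9]  = (top + bottom) / (top - bottom);
    p.m[10] = -(zFar + zNear) / (zFar - zNear);
    p.m[11] = -1.0f;
    p.m[14] = -2.0f * zFar * zNear / (zFar - zNear);
    return p;
}
```

Pair each eye’s projection with a view matrix translated sideways by the same eye offset; neither camera is ever rotated inward.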

Oculus Rift SDK and Unity Have the Basics, but There’s More That Can Go Wrong

Kreylos, who has had some time to play with the Oculus Rift, tells me that Oculus and Unity have laid a great foundation for proper stereoscopic 3D thanks to the SDK.

“At the most basic, someone has to set up the proper camera parameters, i.e., projection matrices, to get the virtual world to show up on the screens just right. On the Rift, someone also has to do the lens distortion correction. Both these things are taken care of by the Rift SDK, by both the low-level C++ framework and the Unity3D binding. And as far as I can tell, both bindings do it correctly. It’s a bit more tricky in the Unity3D binding due to having to work around Unity’s camera model, but apparently they pulled it off.”

Image: an example of a properly skewed-frustum 3D rendering
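
The other half of that foundation, lens distortion correction, is described in the early Rift SDK documentation as a radial polynomial applied around the lens center (the SDK exposes the coefficients as DistortionK). Below is a rough CPU-side sketch of that idea; in practice the correction runs in a post-processing shader, and real coefficient values come from the headset’s calibration data, so treat this as illustrative only.

```cpp
// Radial "barrel" distortion of the kind the early Rift SDK documents:
// points are pushed outward as a polynomial in the squared distance
// from the lens center, pre-compensating for the pincushion distortion
// of the lens. Coefficient values are placeholders.

struct Vec2 { float x, y; };

Vec2 barrelDistort(Vec2 p, const float k[4], Vec2 lensCenter)
{
    float dx = p.x - lensCenter.x;
    float dy = p.y - lensCenter.y;
    float r2 = dx * dx + dy * dy;
    // f(r^2) = k0 + k1*r^2 + k2*r^4 + k3*r^6, evaluated Horner-style.
    float f  = k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]));
    return { lensCenter.x + dx * f, lensCenter.y + dy * f };
}
```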

Kreylos checked his initial impressions with the SDK source code.

“For the Rift SDK, I went the source code route. I found the bits of code that set up the projection matrices, and while they’re scattered all over the place, I did find the lines that set up a skewed-frustum projection using calibration parameters read from the Rift’s non-volatile RAM during initialization. That was very strong evidence that the Rift SDK uses a proper stereo model. I then compared the native SDK display to the Unity display, and they looked as much the same as I could tell, so I’m confident about the Unity binding as well.”


“Any software based either on the low-level SDK or the Unity binding should therefore have the basics right,” he added.

But there’s more that can go wrong. Developer vigilance is required.

“A lot of 3D graphics software does things to the virtual camera that they can only get away with because normal screens are 2D, such as squashing the 3D model in the projection direction for special effects, rendering HUD or other UI elements to the front- or backplane, using oblique frusta to cheaply implement clipping planes, etc. All those shortcuts stop working in stereo or in an HMD. And those tricks are done deeply inside game engines, or even by applications themselves. Meaning there are additional pitfalls beyond the basic stereo setup,” he said.
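
To pick one of those pitfalls apart: a HUD composited identically into both eyes’ images has zero parallax, so it appears glued to the screen plane while the scene behind it has real depth, a cue conflict the visual system notices immediately. One common remedy is to give UI elements an actual position in the virtual world and render them through the same per-eye cameras as everything else. A minimal sketch, with hypothetical helper names:

```cpp
struct Vec3 { float x, y, z; };

// Hypothetical helper: place a HUD panel a fixed, comfortable distance
// in front of the (head-tracked) viewer so it receives correct,
// consistent parallax in both eyes, instead of being pinned to the
// near plane.
Vec3 hudAnchor(Vec3 headPos, Vec3 headForward, float distance = 1.5f)
{
    return { headPos.x + headForward.x * distance,
             headPos.y + headForward.y * distance,
             headPos.z + headForward.z * distance };
}

// Each frame: position the HUD quad at hudAnchor(...) and draw it as
// ordinary world geometry, with depth testing on -- no depth-buffer
// clears and no near-plane projection tricks.
```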

Kreylos gives a thumbs up to Oculus’ SDK documentation regarding correct stereoscopic 3D rendering, noting that it contains a “very detailed discussion” of the matter. You can find the latest Oculus Rift SDK documentation here (after logging in). Anyone building an Oculus Rift game from the ground up should absolutely consult this document to get started with proper 3D rendering.

  • Joseph Mansfield

    This is very useful information, but does this really apply to the Oculus Rift? The eyes are center-aligned with each of the halves of the display, rather than having to converge on some point on a single screen, as with 3D movies.

    As the official documentation says:

    “Unlike stereo TVs, rendering inside of the Rift does not require off-axis or asymmetric projection. Instead, projection axes are parallel to each other as illustrated in Figure 6. This means that camera setup will be very similar to that normally used for non-stereo rendering, except you will need to shift the camera to adjust for each eye location.”
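
    As a concrete illustration of the setup that passage describes, here is a minimal, self-contained C++ sketch (Mat4 and the helpers are stand-ins, not SDK calls): each eye’s view differs from the head’s view only by a lateral shift of half the IPD, with no rotation toward a convergence point.

    ```cpp
    struct Mat4 { float m[16]; /* column-major */ };

    Mat4 identity()
    {
        Mat4 r = {};
        r.m[0] = r.m[5] = r.m[10] = r.m[15] = 1.0f;
        return r;
    }

    Mat4 translate(float x, float y, float z)
    {
        Mat4 r = identity();
        r.m[12] = x; r.m[13] = y; r.m[14] = z;
        return r;
    }

    Mat4 multiply(const Mat4& a, const Mat4& b)  // returns a * b
    {
        Mat4 r = {};
        for (int col = 0; col < 4; ++col)
            for (int row = 0; row < 4; ++row)
                for (int k = 0; k < 4; ++k)
                    r.m[col * 4 + row] += a.m[k * 4 + row] * b.m[col * 4 + k];
        return r;
    }

    // centerView: the head's view matrix; ipd in meters (e.g. ~0.064f).
    Mat4 eyeView(const Mat4& centerView, float ipd, bool leftEye)
    {
        // A +ipd/2 shift in view space moves the eye half the IPD to
        // the left in world space, and vice versa. Crucially there is
        // no rotation: the two cameras stay parallel.
        float offset = (leftEye ? 0.5f : -0.5f) * ipd;
        return multiply(translate(offset, 0.0f, 0.0f), centerView);
    }
    ```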

    • Psuedonymous

      This is very useful information, but does this really apply to the Oculus Rift?
      Oh my yes! If you start thinking “I need convergence when setting up my stereo 3D monitor/TV, so I must set the convergence for the Rift,” things will go horribly wrong! Your cameras should always be parallel when rendering for the Rift (and for HMDs in general that don’t use canted screens and obtuse optics).

      The necessity of setting the correct convergence and separation for stereo 3D monitors could easily result in users, or even developers, mistakenly adding these to the Rift and having a very bad time. Separation should be the IPD, ‘convergence’ should be parallel, and neither of these should need to change while orthostereo is maintained (you could potentially change IPD if you wanted to make the world grow or shrink, but this is an edge case).

      Nipping this in the bud is important to stop the idea taking root. An example is Vireio Perception, with either an incorrectly named or incorrectly implemented convergence setting as part of SHOCT.

      • Enterfrize

        “Nipping this in the bud is important to stop the idea taking root. An example is Vireio Perception, with either an incorrectly named or incorrectly implemented convergence setting as part of SHOCT.”

        Hi There!

        I’m the co-author of SHOCT, and I think this remark is incorrect. The Vireio Perception cameras are parallel, and there is never any toe-in no matter what you do with the convergence setting. Convergence is achieved by offsetting the frustums, as recommended in the above article.

        Convergence is a natural function, and the reason it’s important to specially calibrate it in stereoscopic 3D drivers like Vireio is that the games weren’t intended to be shown in stereoscopic 3D in the first place, so we have to supply this important data ourselves as accurately as possible. It’s not as accurate as working with a natively programmed game, but our results can get pretty darned close.

        Using endorsed demos like Oculus’ Tuscany demos and Valve’s Team Fortress 2, you will find examples of negative parallax (a result of convergence). For example, walk to a chandelier on a wall upstairs in Tuscany, and you will immediately see negative parallax when you get very close to it. TF2 is more incidental, but it’s there. This is nearly identical to the behavior we achieve in games like Skyrim and Left4Dead.

        The only real complaint we’ve received about SHOCT comes from gamers who are unable to adjust the SHOCT lines, and this is more a bug than an error in judgment. We are working on getting that fixed, and there are other innovations coming around the bend.

        Feel free to PM me, and we can discuss this more. I’m always open to new ideas and ways to make things easier to use.

        Regards,
        Neil

        • Psuedonymous

          I’m glad to hear Vireio is not canting the cameras in. My complaint thus boils down to the use of the word ‘convergence’ to describe what is going on. Calling it ‘offset’ (or something else) would help make it obvious that what is going on is not the same as the convergence people may be used to.
          In tandem with this, an update to the settings guide to explicitly state what the currently-named-convergence setting is actually doing to the in-game cameras would help reduce confusion.
          It may also be worthwhile removing the ‘The Stereoscopic 3D Map’ page and incorporating only the relevant details into the ‘3D displays vs HMDs’ page; currently the guide goes from an explicit description of how stereo 3D monitors display images, to a page on “but you don’t do that with the Rift”, and then jumps right into calibrating.
          Maybe emphasize that calibrating Vireio is the process of getting the separation to match the IPD, and the setting-formerly-known-as-convergence to match the optical characteristics of the Rift; that there is a specific single end-goal desired setting, not the ‘play with things until it looks good’ situation with stereo monitors.

  • David Wyand

    Unfortunately, if you follow the advice in the first half of the article you will get into trouble when rendering for the Oculus Rift. What Mr. Kreylos outlines there does indeed pertain to other forms of stereoscopic 3D rendering. But it does not pertain to the Rift according to the Oculus VR SDK documentation.

    Also, I wanted to add that comparing how things are done between the Oculus VR SDK and Unity 3D should always come up as a match. The Unity 3D work was done by Oculus VR themselves, not the company Unity. And in all cases, parallel frustums are used to render the side-by-side viewports, not the off-axis frustum talked about in the first half of the article.

  • Doc Ok

    The Oculus Rift’s projection is *mostly* on-axis. The lenses are centered over the centers of their respective half-screens, but the viewer’s eyes are not necessarily, depending on the viewer’s interpupillary distance. The projection has to be skewed inwards (if the IPD is larger than the distance between the half-screen centers) or outwards (if it is smaller). Granted, the difference is subtle. But there is code in the SDK to do exactly that: first it sets up an on-axis projection, and then it applies a skew matrix in a second step, depending on the configured IPD. But since the skewing is applied inside the SDK, as far as application software is concerned, it should use on-axis parallel virtual cameras.
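
    To sketch that two-step recipe (reusing the Mat4, translate, and multiply stand-ins from the earlier sketch; perspective is another assumed helper, shown here in gluPerspective form): build a symmetric on-axis projection first, then shift it horizontally in normalized device coordinates by an amount derived from the IPD / lens-separation mismatch.

    ```cpp
    #include <cmath>

    // Symmetric on-axis projection, column-major (gluPerspective-style).
    Mat4 perspective(float fovYRadians, float aspect, float zNear, float zFar)
    {
        float f = 1.0f / std::tan(fovYRadians * 0.5f);
        Mat4 r = {};
        r.m[0]  = f / aspect;
        r.m[5]  = f;
        r.m[10] = -(zFar + zNear) / (zFar - zNear);
        r.m[11] = -1.0f;
        r.m[14] = -2.0f * zFar * zNear / (zFar - zNear);
        return r;
    }

    // offset: fraction of the half-screen width by which the eye's axis
    // misses the half-screen center; its sign flips between the eyes.
    Mat4 eyeProjection(float fovY, float aspect,
                       float zNear, float zFar, float offset)
    {
        // On-axis first, then a horizontal shift in normalized device
        // coordinates: this skews the effective frustum sideways
        // without rotating the camera.
        return multiply(translate(offset, 0.0f, 0.0f),
                        perspective(fovY, aspect, zNear, zFar));
    }
    ```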

    • Gnometech

      Yes, true. You start by placing your parallel cameras as appropriate for the screen centers and then adjust their projection offset based on the user’s IPD. I guess when I first read through the article I took “virtual stereoscopic cameras should use skewed frusta, aka off-axis projection” to mean that both eyes should use the same frustum but skewed for each eye, which is different from two parallel frustums with their projection offsets adjusted for each eye.

      • Doc Ok

        Correct; in HMDs using a single shared screen — or in most others, really — both eyes’ frusta will be skewed, but they will be based on adjacent, not overlapping screens, unlike in projection-based 3D displays (including 3D TVs). Interestingly, some older HMDs use optics to create a virtual shared screen at some distance from the viewer, probably to simplify porting inflexible software from projection-based 3D displays. But with proper underlying design, there is absolutely zero difference between any of these approaches. I have a separate article on the details on my blog, at http://doc-ok.org/?p=27 (“Standard Camera Model Considered Harmful”).

  • Andreas Aronsson

    Ah, finally got to read this. Such interesting discussion too…

    And completely off topic, I have bumped into Doc Ok’s material previously at several points in time when hobby-researching stereoscopic 3D and virtual reality. Now the dots are coming together and I realize my fragmented knowledge has been gained from a single source :P This has happened to me before, and it’s always kind of world shaking, haha. I guess it’s true with the factors of creators / editors / consumers.

    And actually on topic. I was always quite confused about convergence when photographing in 3D. Wondering what is the correct amount, should I move my cameras apart when zooming, etc.

    For the Rift my brain pooped out that it should not need convergence because you converge with your eyes, but I had a hard time understanding what makes it different from normal 3D media where convergence is needed, meaning I didn’t accept my brain poop until I later read some articles online.

    This with skewed frustas… frustums… frustumamsums, sounds complicated. I hope to start coding my own apps soon enough, luckily that seems integrated into the different packages, but it makes me less certain of success than I was before :x

  • Quang Tri

    VERY VERY Interesting !!!

  • Martin Hawking Schwanke

    I find this very interesting. I did some 3D conversion using free software a while ago and ended up feeling “not right,” as you put it! I managed to watch Avatar on our 3D Samsung TV in its entirety and didn’t feel any different; the QA was superb. I almost can’t wait to buy and fit an Oculus Rift and a GTX 1080, which I think is the perfect combo for my existing PC system.