On stage at Samsung’s 2014 developer conference last week, Warren Mayoss, Head of Technology Product Development at DreamWorks Animation, spoke about the company’s initial work with virtual reality on the Oculus Rift and Gear VR. Though the studio has produced several real-time VR experiences, its bread and butter is high-fidelity pre-rendered CGI. But how do you bring that level of quality to virtual reality? One approach may be pre-rendered 360-degree 3D CGI footage, which the company calls ‘Super Cinema’.
Real-time Rendering vs. Pre-rendering
Let’s talk a bit about computer graphics. What’s important to understand is the difference between real-time rendering and pre-rendered content—if you already know the difference, skip this section.
Real-time rendering is generally required for interactive content like videogames; since the player can move and look in any direction, the game must draw one frame at a time, rendering (that is, calculating) what each frame should look like based on where the user is looking. But computers can only do this so fast, and things slow down as the graphics become more complex. The minimum acceptable rate to perceive smooth motion is arguably 30 frames per second (though virtual reality has been shown to demand much higher rates). Any slower and you start to see a slideshow rather than fluid video.
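To make the frame-budget idea concrete, here’s a minimal Python sketch of a real-time loop, with a hypothetical render_frame() standing in for the actual drawing work:

```python
import time

TARGET_FPS = 30
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~33 ms per frame at 30 fps

def render_frame(view_direction):
    # Placeholder for the expensive part: drawing the scene
    # from wherever the player is currently looking.
    ...

def game_loop(get_view_direction):
    while True:
        start = time.perf_counter()
        render_frame(get_view_direction())
        elapsed = time.perf_counter() - start
        # If rendering took longer than the budget, the frame rate
        # drops and motion starts to look like a slideshow.
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```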
Much of today’s computer-generated imagery, especially in the film world, is so complex that computers can’t render it anywhere close to 30 frames per second; in some cases a more accurate unit would be frames per hour. Transformers (2007), for example, had CGI so complex that some frames took 38 hours to render, according to director Michael Bay.
Fortunately, since film has no user input, and thus the view will always be identical from one viewing to the next, CGI frames can be rendered ahead of time, aptly called ‘pre-rendering.’ While each frame may take anywhere from minutes to hours to render, the frames are essentially saved as still images which can then be compiled into a video and played back at the desired fluid framerate after the fact.
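To put those numbers in perspective, here’s some back-of-the-envelope arithmetic (the 90-minute runtime, and the assumption that every frame takes the worst-case 38 hours, are purely illustrative):

```python
# Back-of-the-envelope numbers: a hypothetical 90-minute film at 24 fps,
# pretending the worst-case 38-hour render time applied to every frame.
RUNTIME_MIN = 90
FPS = 24
HOURS_PER_FRAME = 38

total_frames = RUNTIME_MIN * 60 * FPS            # 129,600 frames
serial_hours = total_frames * HOURS_PER_FRAME    # ~4.9 million hours
serial_years = serial_hours / (24 * 365)         # ~562 years on one machine

print(f"{total_frames:,} frames -> {serial_years:,.0f} years if rendered serially")
# Studios make this tractable by farming frames out to thousands of
# machines in parallel; each finished frame is saved as a still and
# later compiled into video that plays back at the full 24 fps.
```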
The challenge for a company like DreamWorks Animation, which primarily creates pre-rendered CGI films, is supporting headtracking in virtual reality while maintaining the same high-fidelity visuals that the company is known for. With user view control like headtracking, the usual method of pre-rendering is impossible because there’s no telling which direction the user will look in, and thus which frames to generate.
One option to hop this hurdle is to pre-render 360-degree 3D frames, then project them onto a virtual sphere around the user, affording the headtracking interactivity that’s critical to VR without requiring each viewer to own a supercomputer to render those complex scenes in real time. This approach can also enable VR experiences on lower-end hardware that lacks desktop-class computing power. InnerspaceVR, for instance, is creating such experiences in CryEngine and pre-rendering them for playback on less powerful devices like Gear VR.
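For a sense of how that projection works under the hood, here’s a minimal Python sketch of the standard equirectangular mapping (axis and orientation conventions vary between engines; this is just one choice):

```python
import math

def direction_to_equirect_uv(x, y, z):
    """Map a 3D view direction to (u, v) texture coordinates on an
    equirectangular (360-degree) frame. Note that the lookup depends
    only on *direction*, never on the viewer's position."""
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    longitude = math.atan2(x, -z)        # -pi..pi around the sphere
    latitude = math.asin(y)              # -pi/2..pi/2 up and down
    u = 0.5 + longitude / (2 * math.pi)  # 0..1 across the frame
    v = 0.5 + latitude / math.pi         # 0..1 bottom to top
    return u, v

# Looking straight ahead samples the center of the frame:
print(direction_to_equirect_uv(0, 0, -1))  # (0.5, 0.5)
```

The viewer’s position never appears in that math, a detail that becomes important below.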
DreamWorks Animation ‘Super Cinema’ Format for Virtual Reality
Speaking at SDC 2014, Mayoss said that the company is indeed experimenting with this technique.
“We’re coming up with a system that we call Super Cinema and this is something where we’re taking the quality assets of our feature film and we are delivering that in a new 360 immersive experience to the consumer,” Mayoss told the crowd at the conference’s second day keynote. He even shared a brief clip of a ‘Super Cinema’ scene with assets from the company’s How to Train Your Dragon franchise:
See Also: 5 Insights for New VR Developers from DreamWorks Animation Head of Technology Product Development
The video above looks a bit funky because the entire 360-degree frame is being projected onto a flat 2D display. Actual playback would stretch the footage around a virtual sphere with the viewer on the inside, able to look in any direction around the scene. And while this technique enables higher-fidelity graphics through pre-rendering, it isn’t without tradeoffs. To be clear, the issues I’m about to mention pertain to all forms of 360-degree VR footage, not just DreamWorks’ implementation.
For one, the file size of a Super Cinema film would likely be huge compared to a standard film release. Instead of rendering a relatively small 1920×1080 frame, as you’d find on a Blu-ray, a 360-degree frame would have to be many times that resolution to preserve quality after being stretched all the way around the viewer. This can be mitigated with compression and streaming technology, but there’s only so much you can do without degrading the sharp visuals you’d expect from a DreamWorks film.
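Some rough math shows why (the 90-degree field of view here is an assumption for illustration; real headset FOVs and mastering resolutions vary):

```python
# Rough angular-resolution math: how big must a 360-degree frame be to
# match a 1080p frame spanning an assumed ~90-degree field of view?
FLAT_WIDTH = 1920      # pixels across a standard HD frame
VIEW_FOV_DEG = 90      # horizontal degrees that frame might span in VR

pixels_per_degree = FLAT_WIDTH / VIEW_FOV_DEG      # ~21.3 px/deg

# An equirectangular frame must cover the full 360 x 180 degrees.
equi_width = round(pixels_per_degree * 360)        # 7,680 px
equi_height = round(pixels_per_degree * 180)       # 3,840 px

flat_pixels = 1920 * 1080
equi_pixels = equi_width * equi_height
print(f"{equi_width}x{equi_height}: {equi_pixels / flat_pixels:.0f}x the pixels per frame")
# ~14x the pixels of a 1080p frame -- and stereo 3D doubles that again.
```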
By my understanding, Super Cinema would also necessarily lack positional tracking (tracking the position of a user’s head in 3D space, not just its rotation). Positional tracking is important for comfort in virtual reality and also greatly enhances immersion. Proper positional tracking requires a real-time scene where the geometry can be redrawn appropriately as the user moves their head around; if you leaned around an open doorway, for instance, you’d expect to see through it. With pre-rendered frames, this isn’t possible.
For that same reason, IPD could be an issue with the Super Cinema approach. IPD (interpupillary distance) is the distance between a person’s eyes. This value varies from person to person within a certain range, and content tuned to each person’s IPD can be important for a comfortable VR experience. With real-time rendering, the fix is simple: just change the distance between the virtual cameras that capture the scene for each eye. In a pre-rendered scene, however, everything is captured with one fixed IPD.
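Here’s a minimal sketch of that real-time fix, with hypothetical names rather than any real engine’s API:

```python
# Illustrative only: per-user IPD just changes where the two virtual
# cameras sit before each frame is rendered.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def stereo_camera_positions(head_pos: Vec3, right: Vec3, ipd_m: float):
    """Offset the left/right eye cameras by half the IPD along the
    head's right vector. ipd_m is in meters (very roughly 0.054 to
    0.074 for most adults)."""
    half = ipd_m / 2.0
    left_eye = Vec3(head_pos.x - right.x * half,
                    head_pos.y - right.y * half,
                    head_pos.z - right.z * half)
    right_eye = Vec3(head_pos.x + right.x * half,
                     head_pos.y + right.y * half,
                     head_pos.z + right.z * half)
    return left_eye, right_eye

# With pre-rendered footage there is no such knob: the camera
# separation was baked in when the frames were rendered.
left_eye, right_eye = stereo_camera_positions(Vec3(0, 1.7, 0), Vec3(1, 0, 0), 0.064)
```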
Indeed, aiming for the average IPD would probably work well for most viewers, but for outliers with a very small or very large IPD, the resulting content might not make their eyes too happy. It’s not out of the question that DreamWorks could work their rendering magic to render one master film and then (relatively) quickly generate some IPD-specific variants, but I doubt they want to make the end-user experience as complicated as asking viewers to measure their own IPD!
It will be interesting to see how DreamWorks and others approach these issues.