
On stage at Samsung’s 2014 developer conference last week, Warren Mayoss, Head of Technology Product Development at DreamWorks Animation, spoke about the company’s initial work with virtual reality on the Oculus Rift and Gear VR. Though they’ve produced several real-time VR experiences, the studio’s bread and butter is high-fidelity pre-rendered CGI. But how to bring that level of quality to virtual reality? One approach may be pre-rendered 360 degree 3D CGI footage which the company calls ‘Super Cinema’.

Real-time Rendering vs. Pre-rendering


Let’s talk a bit about computer graphics. What’s important to understand is the difference between real-time rendering and pre-rendered content—if you already know the difference, skip this section.

Real-time rendering is generally required for interactive content like videogames; since the player can move and look in any direction, the game must draw one frame at a time, rendering (or 'calculating') what each frame should look like based on where the user is looking. But computers can only do this so fast, and things slow down as the graphics become more complex. The minimum acceptable rate to perceive smooth motion is arguably 30 frames per second (though virtual reality has been shown to demand much higher rates). Any slower and you start to see a slideshow rather than fluid video.
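
For a sense of what those rates mean per frame, here's a quick back-of-the-envelope calculation (75 Hz is the Oculus Rift DK2's refresh rate; the figures are illustrative):

```python
# Per-frame time budget at common refresh rates: a real-time renderer
# must finish each frame within this window to keep motion smooth.
for fps in (30, 60, 75):
    print(f"{fps} fps -> {1000 / fps:.1f} ms per frame")

# 30 fps -> 33.3 ms per frame
# 60 fps -> 16.7 ms per frame
# 75 fps -> 13.3 ms per frame
```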

Much of today’s computer generated imagery, especially in the film world, is so complex that computers can’t render it anywhere close to 30 frames per second—in some cases a more accurate unit would be frames per hour. Transformers (2007), for example, had CGI so complex that it took 38 hours to render one frame in some cases, according to director Michael Bay.
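
To see why that rules out real-time playback, take Bay's worst-case figure at face value and apply it to a hypothetical 90-minute film at 24 frames per second (purely illustrative numbers; in practice studios spread the work across render farms):

```python
# Illustrative math only: what Bay's worst-case figure would mean for a
# hypothetical 90-minute film at 24 frames per second.
frames = 90 * 60 * 24                     # 129,600 frames
total_hours = frames * 38                 # 38 hours per frame, worst case

print(f"{total_hours / (24 * 365):,.0f} years on a single machine")  # ~562 years
print(f"{total_hours / 5000 / 24:,.0f} days on a 5,000-node farm")   # ~41 days
```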


Fortunately, since film has no user input, and thus the view is identical from one viewing to the next, CGI frames can be rendered ahead of time—aptly called 'pre-rendering.' While each frame may take anywhere from minutes to hours to render, they are essentially saved as still photos which can then be compiled into a video and played back at the desired fluid framerate after the fact.

The challenge for a company like DreamWorks Animation, which primarily creates pre-rendered CGI films, is supporting headtracking in virtual reality while maintaining the same high-fidelity visuals that the company is known for. With user view control like headtracking, the usual method of pre-rendering is impossible because there’s no telling what direction the user will want to look and thus which frames to generate.

One option to hop this hurdle is to pre-render 360 degree 3D frames, then project them onto a virtual sphere around the user, affording the headtracking interactivity that’s critical to VR while not requiring each viewer to own a supercomputer for real-time rendering of those complex scenes. This can also enable VR experiences on lower-end hardware that lacks desktop-class computing. InnerspaceVR, for instance, is creating such experiences in CryEngine and pre-rendering for playback on less powerful devices like Gear VR.
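
To sketch how such playback works: instead of rendering geometry, the player maps each screen pixel's view direction to a coordinate in the stored 360 degree image. The snippet below assumes an equirectangular projection, the common storage format for 360 footage; the article doesn't specify which projection DreamWorks uses.

```python
import math

def view_dir_to_equirect_uv(d):
    """Map a unit view direction (x, y, z) to (u, v) coordinates in an
    equirectangular 360 degree frame, assuming a y-up, right-handed
    convention where the viewer looks down -z by default."""
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)  # longitude -> horizontal
    v = 0.5 - math.asin(y) / math.pi             # latitude  -> vertical
    return u, v

# Looking straight ahead samples the center of the frame:
print(view_dir_to_equirect_uv((0.0, 0.0, -1.0)))  # (0.5, 0.5)
```

Headtracking then reduces to rotating each pixel's ray before the lookup, which is far cheaper than rendering film-quality geometry.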

DreamWorks Animation ‘Super Cinema’ Format for Virtual Reality

On stage at SDC 2014, Mayoss confirmed that the company is indeed experimenting with this technique.

“We’re coming up with a system that we call Super Cinema and this is something where we’re taking the quality assets of our feature film and we are delivering that in a new 360 immersive experience to the consumer,” Mayoss told the crowd at the conference’s second day keynote. He even shared a brief clip of a ‘Super Cinema’ scene with assets from the company’s How to Train Your Dragon franchise:


See Also: 5 Insights for New VR Developers from DreamWorks Animation Head of Technology Product Development

The video above looks a bit funky because the entire 360 degree frame is being projected onto a 2D display. Actual playback would stretch the footage around a virtual sphere with the viewer on the inside, able to look in any direction around the scene. And while this technique enables higher fidelity graphics through pre-rendering, it isn’t without tradeoffs. To be clear, the issues I’m about to mention pertain to all forms of 360 degree VR footage, not just DreamWorks’ implementation.

For one, the file size of a Super Cinema film would likely be huge compared to a standard film release. Instead of rendering a relatively small 1920×1080 frame, as you'd find on a Blu-ray for instance, a 360 degree frame would have to be many times that resolution in order to preserve quality after being stretched all the way around the viewer. This can be mitigated with compression and streaming technology, but there's only so much you can do without degrading the sharp visuals you'd expect from a DreamWorks film.
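
Some rough arithmetic shows the scale of the problem. The numbers here are assumptions for illustration (a roughly 90 degree horizontal field of view and Blu-ray-like pixel density in the viewport), not a published spec:

```python
# How wide must a 360 degree frame be so the ~90 degree slice the viewer
# actually sees carries 1080p-like detail? Assumed, illustrative numbers.
viewport_w, viewport_h = 1920, 1080
fov_deg = 90

equirect_w = int(360 / fov_deg * viewport_w)  # 7680 px around the sphere
equirect_h = equirect_w // 2                  # equirectangular is 2:1

pixel_ratio = (equirect_w * equirect_h) / (viewport_w * viewport_h)
raw_mb = equirect_w * equirect_h * 3 / 1e6    # 24-bit color, uncompressed

print(f"{equirect_w}x{equirect_h}: {pixel_ratio:.0f}x the pixels of 1080p")
print(f"~{raw_mb:.0f} MB per raw frame, before video compression")
```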

By my understanding, Super Cinema would also necessarily lack positional tracking (the ability to track a user's head position in 3D space). Positional tracking is important for comfort in virtual reality and also greatly enhances immersion. Proper positional tracking requires a real-time scene whose geometry can be redrawn appropriately as the user moves their head about; if you leaned around an open doorway, for instance, you'd expect to see what's beyond it. With pre-rendered frames, this isn't possible.
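
A short sketch makes the limitation concrete. Reusing the hypothetical equirectangular lookup from earlier, note that the head's position has nowhere to enter the math:

```python
def rotate(m, v):
    """Apply a 3x3 rotation matrix (list of rows) to a direction vector."""
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def panorama_lookup(head_rotation, head_position, pixel_ray):
    """Return the (u, v) of the stored 360 frame that a screen ray reads.
    head_position is accepted but never used: with pre-rendered frames,
    leaning or moving the head yields zero parallax. A real-time renderer
    would instead trace the ray from the new eye position against actual
    scene geometry."""
    world_dir = rotate(head_rotation, pixel_ray)
    return view_dir_to_equirect_uv(world_dir)  # direction-only lookup
```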


For that same reason, IPD could be an issue with the Super Cinema approach. IPD (interpupillary distance) is the distance between a person's eyes. This value varies from person to person within a certain range, and content tuned to each viewer's IPD can be important for a comfortable VR experience. With real-time rendering, the fix is simple: just change the distance between the virtual cameras that capture the scene for each eye. However, in the case of a pre-rendered scene, everything is captured with one fixed IPD.
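
In a real-time engine that fix looks something like the sketch below (a hypothetical helper; real engines expose equivalent camera parameters). A pre-rendered 360 film effectively hard-codes one such value for every viewer.

```python
# Hypothetical sketch: per-user stereo camera placement in a real-time
# renderer. Each eye's virtual camera is offset from the head position
# along the head's right vector by half the user's IPD.

def stereo_eye_positions(head_pos, right_vec, ipd_m=0.063):
    """Return (left_eye, right_eye) positions; 0.063 m is roughly the
    adult average IPD. A pre-rendered scene bakes one value in."""
    half = ipd_m / 2
    left = tuple(p - half * r for p, r in zip(head_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(head_pos, right_vec))
    return left, right

print(stereo_eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0)))
# ((-0.0315, 1.7, 0.0), (0.0315, 1.7, 0.0))
```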

Indeed, aiming for the average IPD would probably work well for most, but for outliers with a very small or very large IPD, the resulting content might not make their eyes too happy. It’s not out of the question that DreamWorks could work their rendering magic to make it easy to render one master film and then (relatively) quickly render some IPD-specific variants, but I doubt they’re looking to make the end-user experience as complicated as asking the user to measure their IPD!

It will be interesting to see how DreamWorks and others approach these issues.




  • David Mulder

    I would expect a studio such as DreamWorks to especially embrace non-pre-rendered VR movies… but I guess I was wrong in that regard. I still think that could be a huge deal though.

  • Simon

    The video clip shows a single ‘rendered to sphere’ view which does not really tell us much.

    If we want a 'correct' VR view we'd need RGBD image data, and then cut/correct/compensate to form the left/right views. But then there are issues with parallax and occlusion, at least when looking at a single frame – perhaps there are algorithms for filling in details from adjacent frames.

    I think that 'pre-rendered VR movie' data probably needs a whole lot more 'tweaking' before it is actually displayed to the user, but that this tweaking is going to be a lot lighter (computationally) than rendering photo-realistic scenes.

    Perhaps particular formats will have special tricks for mapping sections where the occlusion is significant; if you are rendering a scene with a nearby object (e.g. a dragon), you would be perfectly capable of rendering the section hidden behind the dragon to an 'off screen' area and using that to fix the occlusion. Say you add an additional 10% frame size to store these areas and a data table to map them… This would be a whole lot harder for live action though.

  • Simon

    Also regarding image quality – many compression schemes allow great variance in the bit rate you 'record' at. If the encoder is smart, it can use more bits for the important parts of the screen and fewer for the background.

    Not only is the scene pre-rendered, but the encoding/compression could also be heavily tweaked for the screen content.

  • cly3d

    So they have a lat/long render of the scene and it's marketed as "Super Cinema"?
    How is this any different from what *everyone* is doing when producing CGI for S3D 360?

    Game designers / environment artists creating for 360 in CryEngine, for instance, are already building 360 sets. Arguably, CryEngine on a good machine approaches pre-rendered quality (at least comparable to some Pixar / Disney CG films).

    I acknowledge Pixar is the undisputed leader in CG storytelling, and each "scene" would need to be a complete environment (a 360 set)… but "Super Cinema" is just marketing/packaging, and not the breakthrough I was hoping it would be when I read this.

  • It is not true that positional tracking is impossible with pre-rendered content. You can render and store light fields in a format similar to the light fields captured by a light field camera, allowing positional tracking of the head within a certain volume and a correct stereoscopic view, with any chosen focus plane, from any angle where both eyes are within the volume.

    360 degree light field video takes up even more space, though. We'd need something like Sony's Archival Disc, with a read speed to match.

  • care package

    There are 360 VR videos on YouTube right now that would work just fine for a new cinema experience. Many have already seen the World of Warcraft one. Trying to get to the point where you can move your head around inside the environment would demand insanely large amounts of data and processing. Kind of a captain obvious thing really. Man, imagine the kind of data and processing real life must take…

    Maybe they could just figure out a way to quickly stream only the field of view instead of always rendering the whole sphere. What do I know.