PresenZ is a system developed by VFX studio Nozon which combines some of the essential benefits of pre-rendered imagery with those of real-time interactivity. For the first time, the company is demonstrating a room-scale version of a PresenZ-enabled scene.

Crash Course: Pre-rendered vs. Real-time CGI


To explain the difference between pre-rendered and real-time CGI, let me quickly take you back to the early days of hand-drawn animation.

Many of the cherished classics from this age, like Pinocchio (1940), were created by having an artist sit down to draw a series of pictures, and then playing those pictures back in quick succession to create the illusion of continuous motion. Because the amount of time it takes to draw each frame (let’s say 5 minutes) is much longer than the amount of time each frame is displayed (1/30th of a second, in the case of 30 FPS playback), interactivity is impractical; you might ask for a character to make a movement in the scene that requires 1,000 frames, and it would take more than three days just to draw the frames for that movement. However, if the artist can draw simply enough (like a stick figure flip-book animation), they may be able to produce your desired result in a matter of minutes.

This is very much like the difference between pre-rendered and real-time CGI. Imagine now that instead of an artist drawing each frame, a computer is doing the drawing. A very detailed frame may take the computer five minutes to draw, depending upon how powerful it is. At a drawing (or ‘rendering’) rate of one frame per five minutes, interactivity is still impractical because you may ask for a change that takes several minutes to be produced. It makes more sense to plan what you want the computer to draw ahead of time, then gather up all of the frames once they have been drawn and play them back-to-back to create a sense of continuous motion. This is pre-rendered CGI: the frames are drawn (‘rendered’) before (‘pre’) viewing.
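To put rough numbers on that (using the illustrative figures above, not real benchmarks), the gap between rendering time and playback time looks like this:

```python
# Illustrative figures from the example above (assumptions, not benchmarks):
# each frame takes ~5 minutes to render, playback runs at 30 FPS.
RENDER_MINUTES_PER_FRAME = 5
PLAYBACK_FPS = 30

def render_time_days(num_frames: int) -> float:
    """Wall-clock time to pre-render num_frames, in days."""
    return num_frames * RENDER_MINUTES_PER_FRAME / 60 / 24

def playback_time_seconds(num_frames: int) -> float:
    """How long the same frames last when played back."""
    return num_frames / PLAYBACK_FPS

frames = 1000  # the 1,000-frame movement from the example
print(f"Rendering: {render_time_days(frames):.1f} days")    # ~3.5 days
print(f"Playback:  {playback_time_seconds(frames):.1f} s")  # ~33 seconds
```

Three and a half days of rendering for about half a minute of playback is why nobody waits around to ‘interact’ with a pre-rendered scene.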

For this reason, pre-rendered CGI is the preferred method for creating very high fidelity imagery, like that seen from major animation studios like DreamWorks and Pixar. The graphics in films from these studios far surpass what your computer or Xbox can render, because a pre-rendered film can spend many seconds, minutes, or even hours on each frame, so long as you will view them all strung together at some point in the future. But that means no interactivity, because the viewer is not there during the rendering process to dictate the action (and even if they were, it would take too long to see the results).


See Also: DreamWorks Reveals Glimpse of 360 Degree ‘Super Cinema’ Rendering for VR Films

But if you make the frames simple enough (like the stick figure flip book animation) such that the computer can draw many per second, you can reach a level of practical interactivity where you can ask the computer for a change and see the resulting frames nearly instantly. This is real-time CGI: the frames are rendered as fast as they are being displayed (‘real-time’). Real-time CGI opens the door to interactivity, like being able to pick up a virtual object or press a button to open a hatch; depending upon the user’s input, new frames can be drawn quickly enough to show the result of an action instantly, rather than minutes, hours, or days later.
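Looked at per frame, the budget gap is enormous. Here is a quick comparison, again using the illustrative 30 FPS and five-minutes-per-frame figures rather than real benchmarks:

```python
# A real-time renderer must finish each frame within the display interval.
# Figures are illustrative, carried over from the examples above.
DISPLAY_FPS = 30
frame_budget_ms = 1000 / DISPLAY_FPS   # ~33 ms available per frame
prerendered_frame_ms = 5 * 60 * 1000   # the five-minute pre-rendered frame

print(f"Real-time budget per frame: {frame_budget_ms:.1f} ms")
print(f"Pre-rendered frame cost:    {prerendered_frame_ms:,} ms "
      f"(~{prerendered_frame_ms / frame_budget_ms:,.0f}x over budget)")
```

That roughly 9,000-fold difference is the visual headroom a pre-rendered frame gets to spend on detail that a real-time frame simply cannot afford.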

For this reason real-time CGI is the preferred method for creating games. Interactivity is a crucial element of gaming, and thus it makes sense to bring the graphics down to a point that the computer can render frames in real-time such that the user can press a button to dictate the action in the scene.

So simply put, pre-rendered CGI excels in visuals while real-time CGI excels in interactivity. They’re not fundamentally different except for how long it takes to draw the action.

The Promise of PresenZ

For the entire history of CGI, creators have had to choose between the high fidelity visuals of pre-rendered CGI or the interactivity of real-time CGI. Then along comes PresenZ, promising to mash together these two formerly incompatible benefits into a single solution.

Nozon first revealed PresenZ in early 2015, demonstrating positional tracking parallax (the ability to look around objects as you move your head through 3D space) in a small area around the user’s head, within a scene of animated pre-rendered CGI whose complexity would otherwise bring a real-time game engine to a halt.


I’ve seen it for myself and it’s everything they say it is: the visuals of pre-rendered CGI with the positional tracking parallax that’s normally only possible with a scene rendered in real-time. The area in which you can move your head about the scene is only about a meter square; if you hit the edge of that area, the scene fades out, since the view beyond that space has not been pre-rendered. This is of course a limitation if users want to be able to crouch, jump, or walk around a larger scene.

For the first time the company is now showing a room-scale PresenZ-enabled scene, navigable with the HTC Vive & Lighthouse tracking system. In the video above, we see a pre-rendered scene which can be navigated from one end to the other seamlessly, just like a real-time experience. Nozon calls the scene’s viewable area the ‘zone of view’, rather than the singular ‘point of view’ you would be stuck with using a traditional pre-rendering approach.
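To picture how a ‘zone of view’ might behave in practice, here is a minimal, hypothetical sketch of the fade-at-the-edge behavior described above, assuming a simple box-shaped pre-rendered zone; the zone size and fade margin are made-up values for illustration, not Nozon’s actual implementation:

```python
# Hypothetical sketch of fading the view near the edge of a pre-rendered zone.
# Not PresenZ's actual code; zone size and fade margin are assumed values.
from dataclasses import dataclass

@dataclass
class ZoneOfView:
    half_width: float = 0.5   # meters; roughly the ~1 m square zone of the earlier demo
    half_depth: float = 0.5
    half_height: float = 0.5
    fade_margin: float = 0.1  # start fading 10 cm before the edge

    def opacity(self, x: float, y: float, z: float) -> float:
        """Return 1.0 well inside the zone, ramping to 0.0 at its edge."""
        # Distance from the tracked head position to the nearest face of the box
        slack = min(self.half_width - abs(x),
                    self.half_height - abs(y),
                    self.half_depth - abs(z))
        if slack <= 0:
            return 0.0                   # outside: nothing was pre-rendered here
        if slack >= self.fade_margin:
            return 1.0                   # comfortably inside the zone
        return slack / self.fade_margin  # linear fade approaching the edge

zone = ZoneOfView()
print(zone.opacity(0.0, 0.0, 0.0))   # 1.0 at the center of the zone
print(zone.opacity(0.45, 0.0, 0.0))  # ~0.5 near the edge, fading out
```

A room-scale zone of view presumably just grows that box to cover the tracked play space, pushing the fade-out boundary to the edges of the room.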

So what’s the downside to this seemingly magic solution to the pros and cons of pre-rendered vs. real-time CGI? Well, for one, interactivity is currently limited. You may be able to navigate through the scene, but interaction in the traditional real-time sense is not possible, as the scene is still pre-rendered. Nozon says that they’re developing the ability to add real-time interactive elements to their PresenZ scenes, but so far they’ve only demonstrated support for pre-rendered animations.

Another downside to PresenZ is file size. Relatively simple scenes can quickly climb into the gigabytes (likely scaling with the size of the zone of view), though Nozon says they are working on compression schemes which “make it possible to reduce [the file size of] some scenes by a factor 10.”

Framerate is another downside. A PresenZ scene can currently only be animated at up to 25 FPS (though the headset still displays the scene at its own native refresh rate). It isn’t clear yet whether this is a technical limitation or a means to keep the file size down.


PresenZ vs. Lightfields

Those of you following along carefully will probably notice some commonalities between the PresenZ solution and lightfields. I certainly did, and so I queried Nozon about the differences between the two. The company insists that, despite the similarities, PresenZ is a patented solution that differs from lightfields. My efforts to understand the precise differences didn’t get very far, as the company is understandably careful not to dig into the specifics of its technology.

See Also: OTOY Shows Us Live-captured Light Fields in the HTC Vive

However, Nozon’s Matthieu Labeau provided me with a broad comparison between the two solutions (the skeptical reader will understand that this list is likely to lean in Nozon’s favor):

PresenZ

Benefits

  • File sizes manageable by today’s computers: about 15-20 Mbytes per frame without temporal compression. We expect to reach 20-30 Mbytes/second or better with implementation of temporal compression.
  • Can be plugged into any high-end renderer.
  • Production companies can keep their pipelines as-is, along with all of their previous 3D assets.
  • Capable of animated content.

Current Limitations

  • Specular lighting is ‘baked-in’ (this can be solved in the future), but we don’t believe it’s an immersion breaker in the meantime.
  • Large file sizes make for slow downloads for the moment.

Computer Minimum Specs

  • Oculus recommended spec + RAID 0 SSD

Lightfield

Benefits

  • Specular lighting, reflections, and transparency are not baked in, so they will react realistically to positional tracking.

Current Limitations

  • Still imagery only.
  • Compromises made for manageable data size: capture of a small volume that is scaled up when viewed. That changes the scale of the scene and limits immersion (everything feels big and far away).

Computer Minimum Specs

  • Unknown, but believed to require high-end GPUs. To our knowledge no standard computer can handle animation in this format.
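
To put Nozon’s PresenZ figures in perspective, here is a rough bit of arithmetic, assuming the 25 FPS animation rate mentioned earlier is what those per-frame sizes get multiplied by:

```python
# Rough data-rate math from Nozon's stated figures.
# Assumption: the 25 FPS animation rate applies to the per-frame sizes above.
MB_PER_FRAME = (15, 20)           # per frame, without temporal compression
ANIMATION_FPS = 25                # PresenZ's current animation rate
TARGET_MB_PER_SECOND = (20, 30)   # Nozon's goal with temporal compression

uncompressed = tuple(mb * ANIMATION_FPS for mb in MB_PER_FRAME)
print(f"Uncompressed stream: {uncompressed[0]}-{uncompressed[1]} MB/s")   # 375-500 MB/s
print(f"Reduction needed:    ~{uncompressed[0] // TARGET_MB_PER_SECOND[1]}x "
      f"to {uncompressed[1] // TARGET_MB_PER_SECOND[0]}x")                # ~12x to 25x
```

Sustained reads in the hundreds of megabytes per second also go a long way toward explaining the RAID 0 SSD in the recommended spec.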

Both technologies are still in development, so the list here is likely to be in flux for a while to come. Either way, PresenZ seems to be making good headway in combining the benefits of pre-rendered and real-time CGI, though there are still a number of limitations that will need sorting before broad application of the technology is possible.


