NextVR, a company specializing in live VR broadcasting of sports and entertainment content, debuted its latest broadcast technology this week at CES 2018. Alongside a new wave of more powerful, higher resolution VR headsets, improvements to NextVR’s technology are bringing promising new levels of quality and volumetric capability to VR video content.
Attitudes toward 360 video content among high-end VR users are generally quite negative. Because 360 video is so easy to capture on inexpensive hardware, there are troves of low-effort content that’s captured and/or produced poorly. Compared to the high resolution, high framerate, and highly interactive VR games that many users of high-end headsets are accustomed to, 360 video content is often dismissed out of hand, assumed to be the usual highly compressed, off-horizon, non-stereoscopic mess that seems to crop up at every turn.
NextVR, on the other hand, is one of only a handful of companies pushing the limits of production and playback quality, now approaching a level of quality and features that could properly be called ‘VR video’ rather than plain old ‘360 video’. With higher resolution headsets now on the market, the company’s latest pipeline improvements will give even the high-end headset crowd a glimpse of the true potential of VR video.
End-to-End Approach
Originally formed in 2009 as Next3D, a company creating compression and live broadcasting technology for 3DTV content, the company pivoted to NextVR following the sharp decline of the 3DTV market, and, having raised more than $100 million in venture capital since, hasn’t looked back.
NextVR is focused on the entire pipeline from capture to broadcast to viewing, and everywhere in between. When I spoke with co-founder David Cole last month about the company’s latest developments, it was clear that NextVR does much more than just film or host video content. Cole explained how the company constructs their own camera rigs and builds their own compression, transmission, and playback technology, putting NextVR in a fundamentally different class than filmmakers shooting 360 video on GoPro rigs and throwing it on YouTube.
It’s that end-to-end approach which has allowed the company to push the boundaries of VR video quality, and its latest developments show great promise for a medium marred by expectations set by lowest-common-denominator content.
Improved Quality
The first big improvement the company is rolling out is a major jump in video quality, which can finally be realized thanks to headsets with higher resolution (in the tethered category) and more powerful processors capable of breaking through previous decoding bottlenecks (in the mobile category).
Cole says that, in the best-case scenario with 8 Mbps of bandwidth, the company can now stream 20 pixels per degree, up from 8.5 pixels per degree previously. Keep in mind, that’s also in stereo and at 60 FPS. The company plans to roll out this higher-res playback to supported devices (we understand Windows VR headsets only, to start) early this year, but I got to see a preview running on Samsung’s Odyssey VR headset, which offers a solid step up in pixel count over headsets like the Rift and Vive (1,440 × 1,600 vs 1,080 × 1,200 per eye).
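To put those numbers in perspective, here’s a rough back-of-the-envelope sketch of what the jump in angular resolution means for per-frame pixel counts. The ~100° per-eye coverage and the simple linear mapping below are my own assumptions for illustration, not figures NextVR has published.

```python
# Rough angular-resolution math for the jump from 8.5 to 20 pixels per degree.
# The ~100 degree per-eye coverage and the linear mapping are assumptions made
# for illustration; they are not NextVR's published figures.

def pixels_per_eye(ppd: float, h_fov_deg: float, v_fov_deg: float) -> int:
    """Approximate pixels needed to cover a field of view at a given
    angular resolution, treating the degree-to-pixel mapping as linear."""
    return round(ppd * h_fov_deg) * round(ppd * v_fov_deg)

old = pixels_per_eye(8.5, 100, 100)   # previous stream density
new = pixels_per_eye(20.0, 100, 100)  # new stream density

print(f"~{old:,} px/eye before vs ~{new:,} px/eye now "
      f"({new / old:.1f}x the pixels, doubled for stereo, at 60 FPS)")
```

Even under these rough assumptions, the new stream carries several times as many pixels per eye, which is why higher resolution panels and more capable decoders were needed before the improvement could be realized.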
Watching footage from the company’s library of sports content, including soccer, basketball, and even monster truck rallies, I was very impressed with the improved quality. Not only does the higher resolution make details stand out much more clearly, it also greatly enhances the stereo effect, as the more defined imagery creates sharper edges, which make stereo depth more apparent.
That enhanced stereo effect made things look better in general, but was especially notable on thinner details, like the net of a soccer goal which was clearly separated from the rest of the scene behind it. With insufficient resolution, thin details like the net sometimes seem to mesh with the world behind, rather than clearly existing at their own discrete depth.
The 60 FPS footage looks much smoother than the 30 FPS footage that’s often seen from 360 content, and it also allows for some decent slow motion; I watched in awe as a massive monster truck hit a ramp in front of me and did a complete back flip in slow motion. In another scene, a monster truck cut a sharp turn right near me and sent detailed clumps and clouds of dirt flying in my direction; it was a great example of the image quality, stereoscopy, and slow motion, as I really felt for a moment like there was something flying toward me.
Live Volumetric Video
In addition to improved quality, NextVR is also adapting their pipeline for volumetric capture and playback, allowing the viewer’s perspective to move positionally within the video (rather than just rotationally). Adding that extra dimension is huge for immersion, since it means the scene reacts to your movements in a way that appears much more natural. Although VR video content generally assumes a static viewpoint, even the small movements you make when ‘sitting still’ must be reflected in your view to maintain high immersion. So while NextVR’s volumetric solution isn’t going to let you walk around room-scale footage without breaking the scene, it still stands to make a big difference for seated content.
Cole told me that the company’s capture and playback approach is well suited to latency-free volumetric playback, which is crucial considering that one of NextVR’s key value propositions is the live broadcasting of VR content.
Because the company uses stereo orthogonal projection, Cole explained, in which the scene’s pixels are projected onto a 3D mesh and transmitted to the host device, the new views needed for positional tracking are generated locally and displayed at the headset’s own refresh rate (meaning that, just like rotational tracking, you’ll see 90Hz tracking on a 90Hz headset even though the footage is 60 FPS). Each transmitted frame essentially has the shape of the scene built in, so when you move your head to look behind an object and reveal something you couldn’t see before, you don’t need to wait for the server to send a new frame rendered from your headset’s updated position (which would introduce significant latency).
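Conceptually, that means the client re-renders the most recently received frame’s textured mesh from the headset’s newest pose on every display refresh. Below is a minimal Python sketch of that idea under my own assumptions; the function names and data structures are illustrative and don’t represent NextVR’s actual pipeline or API.

```python
# Illustrative sketch of client-side reprojection: each received video frame is
# assumed to carry scene geometry as a 3D mesh, so positional updates can be
# rendered locally without a server round trip. Not NextVR's actual code.

import numpy as np

def view_matrix(head_position: np.ndarray, head_rotation: np.ndarray) -> np.ndarray:
    """Build a 4x4 view matrix from the headset's latest pose
    (rotation as a 3x3 matrix, position as a 3-vector)."""
    view = np.eye(4)
    view[:3, :3] = head_rotation.T
    view[:3, 3] = -head_rotation.T @ head_position
    return view

def reproject(mesh_vertices: np.ndarray, head_position: np.ndarray,
              head_rotation: np.ndarray) -> np.ndarray:
    """Transform the current frame's mesh (N x 3 vertex positions) into view
    space for the latest head pose. Because the geometry already lives on the
    client, this can run at the headset's refresh rate (e.g. 90Hz) even though
    new video frames only arrive at 60 FPS."""
    homogeneous = np.c_[mesh_vertices, np.ones(len(mesh_vertices))]
    return (view_matrix(head_position, head_rotation) @ homogeneous.T).T[:, :3]

# Each display refresh: sample the tracker and re-render the latest decoded
# frame's mesh from the new pose -- no server round trip for positional moves.
latest_mesh = np.random.rand(1000, 3)        # stand-in for a decoded frame's mesh
pose_pos, pose_rot = np.zeros(3), np.eye(3)  # stand-in for tracker output
eye_space_vertices = reproject(latest_mesh, pose_pos, pose_rot)
```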
I got a chance to see the company’s volumetric playback in action. Putting on the Samsung Odyssey headset once again, I found myself sitting on a pretty beach at sunset, surrounded by big boulders and rocks, with the waves lapping near my feet in front of me. As I moved my head, I could clearly see the scene moving accurately around me, and by moving I could make out sides of the rocks that I otherwise wouldn’t be able to see from a static perspective. As Cole described, it felt latency-free (beyond the headset’s inherent latency).
The volumetric beach scene was a good tech demonstration, but I didn’t see a wide enough variety of volumetric content to get a feel for how it would handle more challenging scenes, like those with closer and/or faster moving objects. Because near-field objects tend to cast significant ‘volumetric shadows’ (blank areas the camera is blocked from capturing due to occlusion), it’s likely that volumetric capture will be limited to productions that suit it.
The company says that volumetric viewing will be rolled out starting this year, coming first to on-demand content, followed by live broadcasts.