After teasing the tech toward the end of last year, we've finally gone hands-on with HypeVR's volumetric video captures, which let you move around inside of VR videos.

Inherent Limitations of 360 Video

Today's most immersive VR video productions are shot in 360-degree, 3D video. Properly executed 360 3D video content can look quite good in VR (just take a look at some of the work from Felix & Paul Studios). But even assuming we can one day achieve retina-quality resolution and geometrically perfect stereoscopy, there's a hurdle that 360 3D video content simply can't surmount: movement inside of the video experience.

With any 360 video today (3D or otherwise), your view is locked to a single vantage point. Unlike real-time rendered VR games, you can't walk around inside the video, or even just lean in your chair and expect the scene to move accordingly. Not only is that less immersive, it's also less comfortable; we all constantly move our heads slightly even when sitting still, and when the virtual view doesn't line up with those movements, the world feels less real and less comfortable.

Volumetric VR Video Capture

That's one of a number of reasons that HypeVR is working on volumetric video capture technology. The idea is not just to capture a series of 360 pictures and string them together (as traditional 360 cameras do), but to capture the volumetric data of the scene for each frame, so that when the video is played back, the information is available to let the user move around inside it.

At CES 2017, I saw both the original teaser video shot with HypeVR’s monster capture rig, and a brand new, even more vivid experience, created in conjunction with Intel.

With an Oculus Rift headset, I stepped into that new scene: a 30-second loop of a picturesque valley in lush Vietnam. I was standing on a rock on a tiny little island in the middle of a lake. Just beyond the rock, the island was covered in wild grasses, and a few yards away from me were a grazing water buffalo and a farmer.

Surrounding me in the distance was rainforest foliage and an amazing array of waterfalls cascading down into the lake. Gentle waves rippled through the water and lapped the edge of my little island, pushing some of the wild grass at the water’s edge.

It was vivid and sharp, and it felt more immersive than pretty much any 360 3D video I've ever seen through a headset, mostly because I was able to move around within the video, with proper parallax, in a roomscale area. It made me feel like I was actually standing there, in Vietnam, not just that my eyes alone had been transported. This is the experience we all want when we imagine VR video, and it's where the medium needs to head if it's to become truly compelling.

Now, I've seen impressive photogrammetry VR experiences before, but photogrammetry requires someone to canvass a scene for hours, capturing it from every conceivable angle and then compiling all the photos into a model. The results can be tremendous, but there's no way to capture motion, because the entire scene can't be photographed fast enough to record moving objects.

HypeVR's approach is different: its rig sits static in a scene and captures it 60 times per second, using a combination of high-quality video capture and depth-mapping LiDAR. Later, the texture data from the video is fused with the depth data to create 60 volumetric 'frames' of the scene per second. That means you'll be able to see waves moving or cars driving, while still retaining the volumetric data that gives users the ability to move within some portion of the capture.
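
To make that fusion step concrete, here's a minimal sketch, not HypeVR's published pipeline, of how a single color frame and a registered depth map could be combined into a colored point cloud, i.e. one volumetric 'frame'. The pinhole camera intrinsics (fx, fy, cx, cy) are assumed placeholder values for illustration.

```python
# Minimal sketch (not HypeVR's actual pipeline): fuse one color frame with a
# registered depth map into a colored point cloud -- one volumetric "frame".
import numpy as np

def depth_to_point_cloud(depth, rgb, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """depth: HxW array in meters, rgb: HxWx3 uint8 -> (N,3) points, (N,3) colors."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinate grid
    valid = depth > 0                                # drop pixels with no LiDAR return
    z = depth[valid]
    x = (u[valid] - cx) * z / fx                     # back-project via pinhole model
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    colors = rgb[valid]                              # texture each point from the video frame
    return points, colors

# Repeating this 60 times per second yields the sequence of volumetric
# frames that gets played back in the headset.
```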

The 'frames' in the case of volumetric video capture are actually real-time rendered 3D models of the scene, played back one after another. That not only allows the viewer to walk around within the space like they would in a VR game environment, it's also the reason why HypeVR's experiences look so sharp and immersive: every frame rendered for the VR headset's display is drawn with optimal sampling of the available data and has geometrically correct 3D at every angle (not just a few 3D sweet spots, as with 360 3D video). This approach also means there are no issues with off-horizon capture (as we too frequently see with 360 camera footage).
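
Playback, then, is conceptually closer to a game engine than to a video player: the scene geometry advances 60 times per second, while the view is re-rendered at the headset's refresh rate from the user's current head pose. The sketch below illustrates that idea under those assumptions; the renderer and pose-query calls are hypothetical placeholders, not HypeVR's actual API.

```python
# Illustrative playback loop: each volumetric frame is a full 3D model, and the
# view is re-rendered from the live head pose every display refresh.
# `renderer.get_head_pose()` and `renderer.draw()` are hypothetical placeholders.
import time

def play(volumetric_frames, renderer, fps=60):
    frame_time = 1.0 / fps
    while True:                                   # short captures loop seamlessly
        for frame in volumetric_frames:           # one 3D model per capture tick
            t0 = time.monotonic()
            # Keep rendering this frame until it's time for the next one; the
            # head pose is sampled each refresh, so parallax stays correct even
            # though the scene content only advances 60 times per second.
            while time.monotonic() - t0 < frame_time:
                pose = renderer.get_head_pose()
                renderer.draw(frame, pose)
```

Decoupling scene updates from the display refresh in this way is what keeps the parallax and comfort intact even though the capture itself only advances at 60Hz.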

