
Image courtesy Google

Google Takes a Step Closer to Making Volumetric VR Video Streaming a Thing

Google has unveiled a method of capturing and streaming volumetric video, something its researchers say can be compressed down to a lightweight format capable of being rendered even on standalone VR/AR headsets.

Both monoscopic and stereoscopic 360 video are flawed insofar as they don’t allow the VR user to move their head freely within a 3D space; you can rotationally look up, down, left, right, and tilt side to side (3DOF), but you can’t positionally lean back or forward, stand up or sit down, or move your head’s position to look around something (6DOF). Even seated, you’d be surprised at how often you move in your chair or make micro-adjustments with your neck, something that, when coupled with a standard 360 video, makes you feel like you’re ‘pulling’ the world along with your head. Not exactly ideal.
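As a loose illustration (not something from Google’s paper), the sketch below shows the difference in what a renderer actually receives for a 3DOF versus a 6DOF head pose; the pose layout here is purely an assumption for the example.

```python
import numpy as np

def rotation_yaw(yaw):
    """Rotation matrix for a head turn (yaw) around the vertical axis."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

head_yaw = 0.3                            # radians of head rotation
head_lean = np.array([0.0, 0.0, 0.15])    # a 15cm lean forward

# 3DOF (standard 360 video): only rotation reaches the renderer,
# so leaning forward changes nothing and the world seems to follow you.
pose_3dof = {"rotation": rotation_yaw(head_yaw), "position": np.zeros(3)}

# 6DOF (volumetric / light field video): rotation AND translation reach
# the renderer, so leaning reveals parallax and you can look around objects.
pose_6dof = {"rotation": rotation_yaw(head_yaw), "position": head_lean}

print(pose_3dof["position"], pose_6dof["position"])
```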

Volumetric video is instead about capturing how light exists in the physical world, and displaying it so VR users can move their heads around naturally. That means you’ll be able to look around something in a video because that extra light (and geometry) data has been captured from multiple viewpoints. While Google didn’t invent the idea—we’ve seen something similar from NextVR before it was acquired by Apple—it’s certainly making strides to reduce overall cost and finally make volumetric video a thing.

In a paper published ahead of SIGGRAPH 2020, Google researchers accomplish this with a custom array of 46 time-synchronized action cameras mounted on a 92cm-diameter dome. This gives the user an 80cm area of positional movement while also delivering 10 pixels per degree of angular resolution, a 220+ degree field of view, and 30fps video capture. Check out the results below.

[Video: light field capture results]
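The article doesn’t document the rig’s exact camera layout, but as a rough sketch of what near-uniform coverage on a 92cm dome could look like, the snippet below scatters 46 camera positions over a hemispherical cap using a Fibonacci spiral; the camera count and dome size come from the paper, while the layout heuristic itself is only an assumption.

```python
import numpy as np

NUM_CAMERAS = 46
DOME_DIAMETER_M = 0.92            # 92cm dome described in the paper
DOME_RADIUS_M = DOME_DIAMETER_M / 2

def fibonacci_dome(n, radius, cap_fraction=1.0):
    """Place n points near-uniformly on a spherical cap.

    cap_fraction=1.0 covers roughly a hemisphere; this is a generic
    heuristic for even coverage, not Google's actual camera layout.
    """
    golden_angle = np.pi * (3.0 - np.sqrt(5.0))
    i = np.arange(n)
    z = 1.0 - cap_fraction * (i + 0.5) / n     # pole (z=1) down to the rim
    r = np.sqrt(1.0 - z * z)
    theta = golden_angle * i
    points = np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)
    return radius * points

cams = fibonacci_dome(NUM_CAMERAS, DOME_RADIUS_M)
print(cams.shape)   # (46, 3) camera positions in metres
```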

The researchers say the system can reconstruct objects as close as 20cm from the camera rig, thanks to DeepView, Google’s recently introduced deep learning-based view interpolation algorithm.

This is done by replacing DeepView’s underlying multi-plane image (MPI) scene representation with a collection of spherical shells, which the researchers say are better suited to representing panoramic light field content.
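To make the layered idea concrete: multi-plane and multi-sphere representations are typically rendered by alpha-compositing their layers from back to front with the standard ‘over’ operator. The sketch below shows that compositing step in miniature; the array shapes and layer ordering are assumptions for illustration, not the paper’s actual data format.

```python
import numpy as np

def composite_layers(layers_rgba):
    """Composite RGBA layers ordered far-to-near using the 'over' operator.

    layers_rgba: array of shape (num_layers, H, W, 4), values in [0, 1].
    Returns an (H, W, 3) image. For spherical shells the math is the same
    once each shell has been resampled into the output view.
    """
    _, h, w, _ = layers_rgba.shape
    out = np.zeros((h, w, 3))
    for layer in layers_rgba:                 # back (far) to front (near)
        rgb, alpha = layer[..., :3], layer[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Toy example: 8 random shell layers at 4x4 resolution.
shells = np.random.rand(8, 4, 4, 4)
print(composite_layers(shells).shape)   # (4, 4, 3)
```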

“We further process this data to reduce the large number of shell layers to a small, fixed number of RGBA+depth layers without significant loss in visual quality. The resulting RGB, alpha, and depth channels in these layers are then compressed using conventional texture atlasing and video compression techniques. The final, compressed representation is lightweight and can be rendered on mobile VR/AR platforms or in a web browser,” Google researchers conclude.
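As a hedged sketch of what the “texture atlasing” step generally involves, the snippet below tiles a small, fixed number of RGBA+depth layers into a single atlas image that a conventional video codec could then compress as ordinary frames. The grid layout, layer count, and channel ordering are illustrative assumptions, not the paper’s actual packing scheme.

```python
import numpy as np

def pack_atlas(layers, cols=4):
    """Tile per-layer images into one atlas image (row-major grid).

    layers: array of shape (num_layers, H, W, C), e.g. C=5 for RGBA+depth.
    Returns an atlas of shape (rows*H, cols*W, C).
    """
    n, h, w, c = layers.shape
    rows = int(np.ceil(n / cols))
    atlas = np.zeros((rows * h, cols * w, c), dtype=layers.dtype)
    for i, layer in enumerate(layers):
        row, col = divmod(i, cols)
        atlas[row * h:(row + 1) * h, col * w:(col + 1) * w] = layer
    return atlas

# Toy example: 16 RGBA+depth layers at 64x64 pack into a 256x256 atlas.
layers = np.random.rand(16, 64, 64, 5).astype(np.float32)
print(pack_atlas(layers).shape)   # (256, 256, 5)
```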

In practice, what Google is introducing here is a more cost-effective solution, one that may eventually prompt the company to build its own volumetric immersive video team, much like it did with its 2015-era Google Jump 360 rig project before it was shuttered last year. That’s of course provided Google further supports the project by, say, adding volumetric video support to YouTube and releasing an open source plan for the camera array itself. Whatever the case, volumetric video, or what Google refers to in the paper as light field video, is starting to look like a viable step forward for storytellers looking to drive the next chapter of immersive video.

If you’re looking for more examples of Google’s volumetric video, you can check them out here.
