The 360 VR video market is rapidly getting busier. With companies like NextVR and Jaunt racing to develop technologies and deliver content that results in compelling, immersive video for virtual reality, things are moving fast. Now a new company called ‘HypeVR’ aims to raise the bar by delivering ultra high quality 3D footage that you’re able to ‘lean into’ – made possible with 3D mapping techniques.
If the prowess of 360 degree VR video providers were measured in raw technical specifications, it’s likely HypeVR would win hands down. Just in terms of sheer data capture, HypeVR’s new multi-camera rig’s capabilities are not to be sniffed at.
The proprietary system uses 14 rig-mounted RED Dragon 6K video cameras to capture a 360 degree field of view. Recording at 60Hz currently (with 90Hz planned), the definition of the resulting footage, once stitched, would probably be impressive enough in and of itself, but there’s more. HypeVR’s rig is augmented with a Velodyne LiDAR scanner, capable of capturing up to 700,000 points of 3D depth information every second at a range of up to 100 m. The solution certainly lives up to the company’s name.
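To put those numbers in perspective, here’s a rough back-of-envelope sketch of the raw data rates involved. The camera count, capture rate and LiDAR point rate come from the figures above; the sensor resolution, bit depth and bytes-per-point are assumptions for illustration, not anything HypeVR has published.

```python
# Back-of-envelope data-rate estimate for a rig like the one described above.
# Camera count (14), LiDAR point rate (700,000 pts/s) and 60 Hz capture come
# from the article; resolution, bit depth and bytes-per-point are assumptions.

cameras = 14
width, height = 6144, 3160      # assumed "6K" sensor resolution
fps = 60
bits_per_pixel = 12             # assumed raw sensor bit depth

video_bytes_per_sec = cameras * width * height * fps * bits_per_pixel / 8
print(f"Raw video: ~{video_bytes_per_sec / 1e9:.1f} GB/s")   # roughly 24 GB/s

lidar_points_per_sec = 700_000
bytes_per_point = 16            # assumed: x, y, z, intensity as 32-bit floats
lidar_bytes_per_sec = lidar_points_per_sec * bytes_per_point
print(f"LiDAR points: ~{lidar_bytes_per_sec / 1e6:.1f} MB/s")
```

Even under conservative assumptions, the video alone works out to tens of gigabytes per second of raw capture, which puts the company’s processing and storage challenge in context.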
What does this actually mean for content though? Well, ‘traditional’ 3D 360 video is captured at fixed depths – that is, you get a 360 degree snapshot of the world, but that ‘static’ video imagery is all you get. When using a VR headset with positional tracking this means, at a very basic level, you can’t translate within the video scene to get a different lateral viewpoint – you can’t lean around or into a scene. HypeVR’s system, however, promises to capture not only the video data but the 3D depth information of the environment it was shot in. This means the imagery can be mapped onto 3D models of the environment, which can in turn be rendered in real time inside a VR headset. The upshot? You can dodge or lean around a scene and the world should alter according to your viewpoint.
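To see why the depth data matters, here’s a minimal sketch (in no way HypeVR’s actual pipeline) of how a colour frame plus per-pixel depth could be re-rendered from a slightly shifted viewpoint. The pinhole intrinsics matrix and nearest-pixel splatting are simplifying assumptions for illustration only; a real system would build proper geometry and fill disocclusion holes.

```python
import numpy as np

def reproject(color, depth, K, head_offset):
    """Re-render a colour frame from a translated viewpoint using per-pixel depth.

    color: (H, W, 3) image, depth: (H, W) metric depth, K: 3x3 pinhole intrinsics,
    head_offset: (3,) viewer translation in metres. Purely illustrative.
    """
    H, W = depth.shape

    # Unproject every pixel into 3D camera space: X = depth * K^-1 * [u, v, 1]
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # (3, H*W)
    points = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)            # (3, H*W)

    # Shift the world relative to the new head position (pure translation here)
    points = points - np.asarray(head_offset).reshape(3, 1)

    # Project back into the image plane of the new viewpoint
    proj = K @ points
    proj = proj[:2] / np.clip(proj[2], 1e-6, None)

    # Splat colours into the output image (nearest pixel, no hole filling)
    out = np.zeros_like(color)
    x = np.clip(np.round(proj[0]).astype(int), 0, W - 1)
    y = np.clip(np.round(proj[1]).astype(int), 0, H - 1)
    out[y, x] = color.reshape(-1, 3)
    return out
```

With flat 360 video there is no per-pixel depth to feed into a step like this, which is exactly why leaning in conventional captures produces no parallax.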
Depth information is captured using a system called LiDAR (a portmanteau of ‘light’ and ‘radar’), which uses spinning lasers to capture high resolution 3D geometric data from an environment. The dataset it produces is known as a ‘point cloud’ – essentially many, many geometric points in 3D space. This data can be used to build a 3D model of the environment, which can then be manipulated as required.
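As a rough illustration of what a point cloud is (not Velodyne’s actual data format), each laser return can be thought of as a pair of angles plus a range, which converts straightforwardly into a point in 3D space:

```python
import numpy as np

def returns_to_point_cloud(azimuth_deg, elevation_deg, range_m):
    """Convert spinning-laser returns (angles + range) into XYZ points.

    Each argument is a 1-D array with one entry per laser return; the angle
    conventions and axes are illustrative, not any specific sensor's spec.
    """
    az = np.radians(azimuth_deg)
    el = np.radians(elevation_deg)
    x = range_m * np.cos(el) * np.cos(az)
    y = range_m * np.cos(el) * np.sin(az)
    z = range_m * np.sin(el)
    return np.column_stack([x, y, z])   # (N, 3) point cloud

# e.g. three returns from one revolution of the scanner
cloud = returns_to_point_cloud(
    azimuth_deg=[0.0, 90.0, 180.0],
    elevation_deg=[2.0, -5.0, 0.0],
    range_m=[12.5, 40.0, 99.0],
)
```

Collect hundreds of thousands of these every second and you have a dense geometric snapshot of the scene onto which the video imagery can be projected.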
It’s a technique you’ll probably be familiar with without even knowing it. Most people have used Google’s Street View system, which was one of the first mainstream commercial uses of the technology, albeit at a very crude level.
“I couldn’t believe my eyes when we got 6 degree VR working for the first time,” said Tonaci Tran, HypeVR co-founder. “The extra dimension of being able to lean into a live action scene, really takes VR to the next level. Now with the superb head tracking abilities of the upcoming Oculus Crescent Bay and HTC Vive, we will be able to have the most optimal experience for our live action VR. In order to achieve these amazing results, we needed a robust solution to record 3d depth information. By combining our proprietary algorithm and multi-channel, real-time LiDAR technology, we are able to generate ultra high resolution dense 3d depth information at any distance which allows our 3d capture system to perform well in any environment. Although we are currently working with an ultra high end and robust VR 3d capture solution, we are also working on more compact and affordable systems so that other VR filmmaker’s can begin creating amazing 6 degree live action VR content.”
To be fair, that’s something the co-founder of a new company is bound to say. However, there’s absolutely no denying the startling power and potential of HypeVR’s technology in terms of pure paper specs. I personally can’t wait to see footage captured using the device, although that brings me to a few issues we face with current VR display technologies and their content delivery systems.
The source footage is obviously impressive in terms of resolution, but processing and stitching such gargantuan captures together convincingly will be a challenge. Then there’s the question of what compromises will have to be made for footage captured by the system to be rendered capably on the first generation of VR headsets; although panel resolutions are creeping up, rendering 3D geometry and high resolution video at these frame rates will require some thought. Finally, even when both those issues are resolved, how do you deliver the content without compromising video quality? We’ve already seen a disappointing level of compression artefacts in Milk VR’s ‘high quality’ downloads, for example – so how do you get high quality video onto people’s faces affordably and quickly?
I guess we’ll find out over the course of the next few years, but despite my reservations above, I can’t wait to see what HypeVR come up with and the kinds of experiences we may be enjoying in VR in the not-too-distant future.