After teasing the tech toward the end of last year, we’ve finally gone hands-on with HypeVR’s volumetric video captures, which let you move around inside VR videos.

Inherent Limitations of 360 Video

Today’s most immersive VR video productions are shot in 360-degree 3D video. Properly executed, 360 3D video content can look quite good in VR (just take a look at some of the work from Felix & Paul Studios). But—even assuming we can one day achieve retina-quality resolution and geometrically perfect stereoscopy—there’s a hurdle that 360 3D video simply can’t surmount: movement inside the video experience.

With any 360 video today (3D or otherwise), your view is locked to a single vantage point. Unlike real-time rendered VR games, you can’t walk around inside the video—let alone just lean in your chair and expect the scene to move accordingly. Not only is that less immersive, it’s also less comfortable: we’re all constantly moving our heads slightly, even when sitting still, and when the virtual view doesn’t line up with those movements, the world feels less real.

Volumetric VR Video Capture

That’s one of a number of reasons that HypeVR is working on volumetric video capture technology. The idea is not just to capture a series of 360 pictures and string them together (as traditional 360 cameras do), but to capture the volumetric data of the scene for each frame, so that when the scene is played back, enough information is available for the user to move inside the video.
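
To make that distinction concrete, here is a minimal sketch of the two data models in Python; the structures and names are illustrative assumptions, not HypeVR’s actual format.

from dataclasses import dataclass
import numpy as np

@dataclass
class Frame360:
    """A traditional 360 video frame: color only, valid from a single vantage point."""
    equirect_rgb: np.ndarray   # (H, W, 3) equirectangular color image

@dataclass
class VolumetricFrame:
    """A volumetric frame: geometry plus color, so the viewer can move within it."""
    points_xyz: np.ndarray     # (N, 3) world-space positions, e.g. from a depth sensor
    points_rgb: np.ndarray     # (N, 3) per-point colors fused from the video cameras

# A volumetric clip is then just a sequence of such frames, one per capture
# tick, played back like ordinary video but rendered as live 3D.
clip: list[VolumetricFrame] = []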

At CES 2017, I saw both the original teaser video shot with HypeVR’s monster capture rig and a brand new, even more vivid experience created in conjunction with Intel.

With an Oculus Rift headset, I stepped into that new scene: a 30-second loop of a picturesque valley in lush Vietnam. I was standing on a rock on a tiny island in the middle of a lake. Just beyond the rock, the island was covered in wild grasses, and a few yards away from me were a grazing water buffalo and a farmer.

Surrounding me in the distance were rainforest foliage and an amazing array of waterfalls cascading down into the lake. Gentle waves rippled through the water and lapped at the edge of my little island, pushing some of the wild grass at the water’s edge.

It was vivid and sharp—it felt more immersive than pretty much any 360 3D video I’ve ever seen through a headset, mostly because I was able to move around within the video, with proper parallax, in a roomscale area. It made me feel like I was actually standing there in Vietnam, not just that my eyes alone had been transported. This is the experience we all want when we imagine VR video, and it’s where the medium needs to head if it’s to become truly compelling.

Now, I’ve seen impressive photogrammetry VR experiences before, but photogrammetry requires someone to canvass a scene for hours, capturing it from every conceivable angle and then compiling all the photos into a model. The results can be tremendous, but there’s no way to capture moving objects, because the entire scene can’t be photographed fast enough to freeze motion.

HypeVR’s approach is different: its rig sits static in a scene and captures it 60 times per second, using a combination of high-quality video capture and depth-mapping LiDAR. Later, the texture data from the video is fused with the depth data to create 60 volumetric ‘frames’ of the scene per second. That means you’ll be able to see waves moving or cars driving while still retaining the volumetric data that gives users the ability to move within some portion of the capture.
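
To illustrate that fusion step, here is a rough sketch that projects one LiDAR point cloud into a single calibrated video camera and samples each point’s color, yielding a colored point cloud for one capture tick. The pinhole camera model and every name here are assumptions for illustration, not HypeVR’s actual pipeline.

import numpy as np

def colorize_lidar_frame(points_xyz, image_rgb, K, R, t):
    """points_xyz: (N, 3) LiDAR points in rig coordinates.
    image_rgb: (H, W, 3) frame from one video camera.
    K: (3, 3) camera intrinsics; R, t: rig-to-camera extrinsics."""
    cam = points_xyz @ R.T + t                 # transform points into the camera frame
    in_front = cam[:, 2] > 0                   # keep only points the camera can see
    cam = cam[in_front]
    pix = cam @ K.T                            # pinhole projection to homogeneous pixels
    uv = (pix[:, :2] / pix[:, 2:3]).astype(int)
    h, w = image_rgb.shape[:2]
    valid = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = image_rgb[uv[valid, 1], uv[valid, 0]]   # sample the video texture
    return points_xyz[in_front][valid], colors       # colored subset of the cloud

On a multi-camera rig like HypeVR’s, a step along these lines would presumably run once per camera per tick, with the per-camera results merged into a single cloud or mesh.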

The ‘frames’ in the case of volumetric video capture are actually real-time rendered 3D models of the scene, played back one after another. That not only allows the viewer to walk around within the space as they would in a VR game environment, but is also the reason why HypeVR’s experiences look so sharp and immersive—every frame rendered for the VR headset’s display is drawn with optimal sampling of the available data and has geometrically correct 3D at every angle (not just a few 3D sweet spots, as with 360 3D video). This approach also means there are no issues with off-horizon capture (as we see all too frequently with 360 camera footage).
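
Playback, then, looks less like decoding video and more like a small game loop: advance to the next captured model at the clip’s rate while rendering it from the viewer’s live head pose, which is what produces the parallax. Below is a minimal sketch, with the renderer and tracking calls left as stand-in callables rather than any real engine API.

import time

CLIP_FPS = 60  # HypeVR captures 60 volumetric frames per second

def play_volumetric_clip(frames, get_head_pose, render_model):
    """frames: sequence of per-tick 3D models (meshes or point clouds).
    get_head_pose: callable returning the headset's current pose.
    render_model: callable drawing one model from a given pose."""
    frame_duration = 1.0 / CLIP_FPS
    for model in frames:
        start = time.monotonic()
        # Unlike 360 video, the same captured model is re-rendered from the
        # live head pose, so leaning or walking yields correct parallax.
        # (A real headset pipeline would re-render at display refresh rate,
        # e.g. 90 Hz, independent of the clip's 60 Hz frame advance.)
        render_model(model, get_head_pose())
        sleep = frame_duration - (time.monotonic() - start)
        if sleep > 0:
            time.sleep(sleep)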

  • Ian Shook

    Super exciting!

  • Sponge Bob

    And how much does it cost?

    50K, 100K, more?

    Not for consumers, or anything close to consumer VR.

    Mark my words:
    General public VR usage will explode only when consumers are able to transfer their own surroundings into VR experiences which they can edit or augment in any way and share on, e.g., a YouTube-type online service.

    Photogrammetry is a good start and the right direction to follow – they just do it all wrong…

    Why?

    Not gonna tell :-)

    • Daniel Gochez

      350k in cameras alone… well, it depends on the model they used. A good RED camera with a lens can set you back 25k, and they used 14.

      • Sponge Bob

        They use LiDAR too (that thing on top – much like on Google’s driverless cars), and a good LiDAR is much more expensive than a camera by definition.

        • user

          Google said its LiDAR costs $8,000.

          • Sponge Bob

            It was 50K originally.

            A good camera should cost less anyway.

  • Foreign Devil

    Is there anywhere we can download those 30-second clips? Also, could a computer meeting Oculus’ minimum specs have any chance of playing them back smoothly?

  • Grinch

    Ridiculous hype. Looking at the rig, it’s obvious that there will be massive holes between the camera positions unless every subject stays a considerable distance away (100′ or more). Next, the interaxial distance is way too big for effective stereoscopy, leading to serious miniaturization – but if everything has to be far enough away to avoid the huge holes, humans can’t perceive depth at that range anyway. And being able to move around a couple of feet in each direction is hardly “volumetric”. Once again, hype and exaggerated reality…

    • Sponge Bob

      I would put my money in Matterport’s capture system as opposed to this monstrosity…

      At least it costs 10 times less and does the job well with non-moving surroundings (except for the running photographer – you have to hide behind a corner to avoid ending up in the captured VR scene – lots of hassle :):):)

      Still DOA for the general public :)

      • Matterport? That’s different tech targeted at a different audience. You cannot, for example, look under or around an object from your current position; in fact, you cannot move within it at all other than selecting a predefined hotspot, which it morph-blends to. From what I could tell, anyway.

        • Sponge Bob

          The tech is similar in part – based on cameras (plus a depth camera or LiDAR in this contraption), rotating or not.
          If rotating (moving), then you need fewer cameras (less $$$) but can only capture fixed scenes like a house interior – that’s what Matterport does best.
          Even with fixed scenes, with a few predefined camera locations there will be lots of gaps in the VR reconstruction.

          You really need a freely moving camera (or several) to look at each and every object from different angles.

          Coming up :-)

      • Matterport isn’t anywhere close to Hype’s solution. Matterport stitches stereoscopic 360 images – the only use of 3D is a very low-res dollhouse view to establish spatial relationships between the panoramas and for teleporting. Look at a Matterport mesh in Unity or 3D modeling software and you will quickly see you fell for a parlor trick.

    • muchrockness

      This system scans and builds a 3D model of the scene. You can view the scene from any perspective, and therefore you can model the virtual eyes to be any interaxial distance you want.

    • Thomas Jergel

      Maybe if they get the cost down enough they might be able to use multiple cameras to fill enough gaps.

      The amount of data would be HUGE though and might require even more specialized software to handle such a workflow and files.

      • WthLee

        I read somewhere it’s 8 GB of unprocessed data per frame…

  • Becca

    It captures at 60 FPS and it takes 6 minutes to process a frame, so 6 hours to process a single second of video…

    • jonas wahlbärj

      Do you know how long it takes for Disney’s movies to process one frame? Hours.

      • OgreTactics

        That hasn’t been true for at least 10 years.

        • Zach Gray

          Actually more true now than ever. Final frame rendering can easily climb 30+ hours at the feature level.

          http://io9.gizmodo.com/one-animal-in-zootopia-has-more-individual-hairs-than-e-1761542252

          • Daniel Gochez

            I’ve been in 3D for 25 years, and even with Moore’s law, rendered frames still take about the same amount of time as they used to. Sounds crazy, especially if you consider that I started with a machine with an 8 MHz processor and I am now using a machine totaling about 48,000 MHz (dual Xeon, 16 cores at 3 GHz each). The easy explanation is that we are doing much higher resolutions (DV resolution = 345,600 pixels, while 4K = 9,437,184 pixels) and much better quality than we used to, by using more complex models, shaders, and computationally intensive calculations that simulate realistic reflection, refraction, global illumination, caustics, light scattering, etc.

          • Raphael

            I use Octane and Cinema 4D… GPU rendering. Previews are very fast, but any rendering higher than basic DL (direct lighting) takes a long time. Still, I believe I’m getting more done in the time I’ve been using Octane, but even GPU rendering isn’t that fast. Then again, when I see what’s possible in realtime with Unity or Unreal, I wonder why I even bother?

          • Raphael, don’t forget that many textures in high-quality realtime games/engines have been rendered/baked in CAD beforehand, so keep bothering :)

          • OgreTactics

            Hybrid path-tracing and ray-marching have existed for 10 years, and the only established engine that managed to approach real-time rendering was Brigade, until it was turned into vaporware by Octane (god I hate Octane).

            I ask myself the same thing when I see that not just the technology but even software like C4D or Octane hasn’t evolved in 20 years, which is huge.

          • Daniel Gochez

            You can achieve really good results with Unreal, using it as a render engine; your final frames will be almost instantaneous, but the setup process is more involved than traditional 3D rendering. So in the end it’s more man-hours vs. less render time, and most of us still prefer to have a render farm and let it render overnight rather than putting in extra hours to have it render quickly.

          • OgreTactics

            Not just crazy, but baffling. I see so many young people wanting to get into 3D like my generation got into Photoshop or Unity or Ableton… but a never-before-seen mass of young people are just giving up, or preferring to just go for simple motion design or even code.

            3D CG is by far the WORST computing technological domain of all in terms of conception, evolution, accessibility, sense…

          • Daniel Gochez

            I have also seen many 3D artists give up. And I don’t blame them: the hours are long; the software is complex, expensive, and unstable; the quality bar is very high; and the pay is meh. How did it become this way? This generation grew up with endless 3D shows and movies, and it is the new and exciting art form, so everyone wanted to be part of it, even though schools are horribly expensive.

          • OgreTactics

            Well, if the VR market says anything… maybe because there’s a lack of critical step-back and rational practicality in what companies creating tools or hardware are doing.

            The fact that C4D or 3ds Max look, feel, and are used almost EXACTLY like in the 90s, because of how little the interface has evolved and how crazily complicated it still is, says a lot about the lack of conception and sense in how companies are creating tools.

            Which is disappointing, because research, on the other hand, is doing a tremendous job and constantly iterating and evolving, but it’s like nobody integrates their algorithms or even wants to be competitive.

  • ra51

    Demo available so that we can “believe the hype”?

  • OgreTactics

    Impractical but nice experimental rig. Since processing is so heavy, I wonder which is best: a Lytro Immerge or this.

  • Actually watched it at CES, impressive.

  • HopscotchInteractive

    HypeVR’s demo at CES was great, and I liked crouching down and looking under the water buffalo to see the water behind it. 360° volumetric video is an impressive experience, but I was like, “Can’t I teleport?” I already wanted to go further. Having explored virtual tours on the Vive (Realities.io, The Lab, Google Earth), I can see where this is going and how it might blend down the road with other media. Even though it’s not scalable at present, it should get there, so at least experiencing it becomes more mainstream. I know consumers and even most pros can’t afford this rig, so yes, @disqus_PDyszClMXc is right from a content creation perspective: Matterport is a good way to go to get .OBJ and point-cloud data to play with, or experimenting with multiple-POV 360°… video really changes the experience.

  • Jolly

    Sounds great. I want one of those cameras! But I will never have one. Too costly.

  • That’s an awesome project, one of the most interesting things I’ve seen come out of CES. Anyway, the scene made with Intel was 3 GB per frame, so there are still lots of problems to face…

  • Tomas Sandven

    OK I am HYPED!!!

  • user

    Thank god, that’s what I want to see. Keep pushing this tech instead of 360 video.

  • I see a lot of interest in the tech behind it. I’m certain it will make an interesting demo. I look forward to the results, which I expect to be like Kinect-captured video on steroids.

    But as far as the future of 360 videos in general, I still don’t see it.

    Is there a huge demand for 360 videos? How many do you watch in a day? I’ve checked out a few, here and there, out of curiosity and desperation for VR content. The number of them that I thought were interesting enough to see more of was small. Even the best didn’t really warrant the effort.

    Video is a passive medium, and all attempts to make it interactive have failed in the past simply because people like it to remain passive. It’s something you do while eating, relaxing, or even entertaining a date. It’s a story you take in, not an event. It’s told through direction, framing, and focus. 360 video is good for events (sports, travel, concerts) but lousy for storytelling, and what it demands from the audience to get that experience is too high. Even if we get our VR sunglasses in the future, it’s still mentally and physically taxing to look around constantly. I can’t be the only person who has a hard time turning 180 degrees around on a couch.

    This isn’t the future of cinema, it’s a curiosity and a tech demo.

    • Mo Last

      Check out 360 3D videos, not 360-only ones.

  • Tony a

    lols – we have been doing this for a while now… i.e., the last couple of years. Want a demo? https://www.youtube.com/watch?v=4uYkbXlgUCw
    We’ve taken care of the production and distribution limitations (happily streaming unlimited detail over a low-bandwidth net with ease); however, we still have no volumetric shadows – we have to work on that ourselves.

  • Moris974

    The same technology exists for CG only:
    – PresenZ: http://www.nozon.com/presenz
    – Dragonfly: https://www.suprawings.com/

    This removes the cost/constraint of the camera.

  • Wyatt Rappa

    Maybe the answer to solving the shadow problem is combining this with a circular ring of LiDAR-equipped flying drones, which could shoot the scene from the outside, looking toward the camera rig.

    • Sponge Bob

      Dude, this scene sucks and hardly justifies the expense.

      The only scene justifying a ring of cameras around it would be a high-quality porn movie where you can be inside the movie :)

  • SunnytheVV

    I started using the EF EVE (http://www.ef-eve.com) volumetric video platform a month ago, and although the capture has some rough edges (some people might need better quality, but for me it is perfect for the cost of $39.99), I am very impressed. I make volumetric video within seconds, upload the capture into my own Unity environments, and stream live. It is a huge breakthrough – render time is 0, and anyone who has 2 Kinect cameras can do it. So to sum up: cheap volumetric capture, portable, streams live volumetric video, and anyone can make volumetric content – well, that’s a real change, not a big expensive rig with insane render time.

  • dk

    It’s the same as HoloLens’ HoloTours.