Hallelujah is a new experience from VR film studio Within, captured with Lytro's latest Immerge light-field camera. The camera records volumetric footage, which makes for a much more immersive experience than traditional 360 video. Hallelujah is a performance of Leonard Cohen's 1984 song of the same name, mixing the latest in VR film capture technology with superb spatial audio to form a stunning experience.

Update (9/23/17): A spokesperson for the project offered the following response to our inquiry regarding the release of the volumetric version of the film: “We currently don’t have a date set for the real-time rendered version, but we are planning on releasing it in the near future.”


Update (9/22/17, 3:49PM PT): A spokesperson for the project has confirmed that the version of the experience just released is mastered from the original light-field capture, but unfortunately takes the form of a 360 video rather than true volumetric video, even on desktop VR headsets that support positional tracking. We've asked if and when the volumetric version will be made available.


Update (9/22/17): The light-field captured piece, Hallelujah, is finally available to the public through the Within app on just about every mobile and desktop VR platform, for free. Head to the Within website to be directed to the app for your platform of choice.

Photo courtesy Lytro

Original Article (4/23/17): Lytro's Immerge camera is unlike any 360 camera you've seen before. Instead of shooting individual 'flat' frames, the Immerge has a huge array of cameras which gather many views of the same scene; that data is then crunched by special software to recreate the actual shape of the environment around the camera. The big benefit is that playback puts the viewer inside a virtual capture of the space, allowing a limited amount of movement within the scene, whereas traditional 360 video captures only a static viewpoint that is essentially stuck to your head. The Immerge also provides true stereo and outputs at a much higher playback quality. The result is a much richer and more immersive VR film experience than traditional 360 video shoots can offer.
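Lytro hasn't published the details of its reconstruction pipeline, but the core idea behind volumetric playback can be sketched in a few lines. The toy Python example below (all names and details are illustrative assumptions, not Lytro's actual code) unprojects a color-plus-depth frame into 3D points and re-renders them from a shifted eye position; a flat 360 frame carries no depth, so this is exactly the step it cannot perform:

```python
# Toy sketch of depth-based view reprojection -- illustrative only,
# not Lytro's pipeline. With per-pixel depth, a frame can be lifted
# to 3D points and re-rendered from a new eye position; a flat 360
# frame has no depth, so every head position sees the same image.
import numpy as np

def reproject(color, depth, K, eye_offset):
    """Warp an RGB-D frame to a virtual camera translated by eye_offset.

    color: (H, W, 3) image; depth: (H, W) metric depth in meters;
    K: (3, 3) pinhole intrinsics; eye_offset: (3,) translation in meters.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))              # pixel grid
    pix = np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).astype(float)
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)   # unproject to 3D
    pts = pts - np.asarray(eye_offset)                          # shift the eye
    proj = (K @ pts.T).T                                        # project back
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    ok = (proj[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < W) \
         & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    out = np.zeros_like(color)
    out[uv[ok, 1], uv[ok, 0]] = color.reshape(-1, 3)[ok]        # no z-buffering
    return out

# Holes appear wherever no source pixel lands, and occlusions aren't
# resolved -- which is precisely why Immerge samples the scene from
# many lenses at once instead of relying on a single view.
```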


After a recent visit to Lytro to check out the latest version of the Immerge camera, I concluded, "All-in-all, seeing Lytro's latest work with Immerge has further convinced me that today's de facto 360-degree film capture is a stopgap. When it comes to cinematic VR film production, volumetric capture is the future, and Lytro is on the bleeding edge."

That article focused on the tech, but I couldn't say much about the experience I saw that left me so impressed. Hallelujah was that experience, and now I can talk about it.

Created by VR film studio Within, Hallelujah pairs Immerge's volumetric capture capabilities with finely mixed spatial audio to form the foundation for a stunning performance of Leonard Cohen's 1984 song Hallelujah (perhaps most famously covered by Jeff Buckley). Lytro has provided a great behind-the-scenes look at the production here:

In Hallelujah, singer Bobby Halvorson begins the song as the solo lead, directly in front of the viewer against a pitch-black background. Because he's the only object in the scene, it's easy to feel the singer's volume and shape thanks to the volumetric capture. As you lean left and right, you see the sides of Halvorson's face in a way that would be utterly impossible with traditional 360 capture techniques. With that sense of depth and parallax comes a feeling that the singer is really there, right in front of you.

The entire song is sung a cappella, with no instrumentation; Halvorson is duplicated to the left, to the right, and behind the viewer, singing the accompanying tracks.

Photo courtesy Lytro

There you are, in a void of blackness, with Halvorsons at arm's length on all sides, singing a stirring version of the uplifting song. You can look all around you to see each Halvorson singing a different track as he creates all the notes of the song. As you turn your head, the careful audio mixing becomes apparent: each track sounds like it's coming from its respective singer. This excellently mixed spatial audio significantly enhanced the sense of Presence for me; Halvorson doesn't just look and feel like he's there in front of you, he also sounds like it.
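Within hasn't detailed its audio setup, but the effect described above, each track staying pinned to its singer as you turn, can be illustrated with simple constant-power panning. The singer names and positions below are made up for the example, and a real production mix would use HRTFs or ambisonics rather than this crude stand-in:

```python
# Toy head-tracked spatial mix -- illustrative only, not Within's
# actual audio pipeline. Each singer's track is panned between the
# ears based on where that singer sits relative to the current head yaw.
import math

def pan_gains(source_xz, head_yaw):
    """Constant-power stereo gains for a source at (x, z) in room space.

    head_yaw is in radians; 0 faces +z, positive turns toward +x (right).
    """
    x, z = source_xz
    azimuth = math.atan2(x, z) - head_yaw         # source angle vs. gaze
    pan = max(-1.0, min(1.0, math.sin(azimuth)))  # -1 hard left ... +1 hard right
    theta = (pan + 1.0) * math.pi / 4.0           # map onto [0, pi/2]
    return math.cos(theta), math.sin(theta)       # (left, right) gains

# Hypothetical positions (meters) for the duplicated Halvorsons.
SINGERS = {"lead": (0.0, 1.0), "left": (-1.0, 0.3),
           "right": (1.0, 0.3), "behind": (0.0, -1.0)}

def mix(samples, head_yaw):
    """Sum one mono sample per track into a stereo (left, right) pair."""
    left = right = 0.0
    for name, sample in samples.items():
        gl, gr = pan_gains(SINGERS[name], head_yaw)
        left += gl * sample
        right += gr * sample
    return left, right
```

Simple panning like this can't distinguish front from back (a source directly behind you pans to the center), which is one reason production spatial audio leans on HRTFs; the principle is the same, though: re-derive every source's gains from head orientation each frame so the voices stay anchored to the singers.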

As the song progresses, you may notice a distinct and fitting reverb in Halvorson's voice. It isn't a digitally applied effect, though: the black void around him eventually fades away and you find yourself in the midst of a beautifully adorned church. Behind the singer is a full church choir that joins the song all at once, adding a swell of new voices to Halvorson's self-accompanied solo.

Photo courtesy Lytro

Turning around to explore this newly unveiled environment, you can see the ceiling, walls, and windows in detail; behind you are rows of empty pews, flanked by huge columns supporting arches that run from the back of the church to the front. As you return your gaze forward, the song reaches its climax and eventual conclusion.

Lytro’s Immerge camera | Photo by Road to VR

The whole thing lasts only the length of the song (about 5 minutes), but the stirring performance feels as though it was given for you alone, and it's uniquely immersive thanks to some special technology and a carefully planned, well-executed production.


Hallelujah is the kind of experience that is likely to be used for a long while as a demo not only to show the benefits of volumetric capture, but also to stir the imaginations of other creators working in the medium of VR film. Sadly, there’s no word yet as to when (or on what platforms) the experience will launch to the public.




  • JordanViray

    Any word on when the demo will be available to the public?

    • benz145

      We’ve reached out to the company to ask.

  • Robert Robert

But you still need to shoot your foreground subject on a green screen. Until they can make a camera array with each lens 1mm apart or less, this will be a hacky, artifact-filled gimmick. Glad they're at least trying.

    • benz145

      A lot of high-end production already does this, so nothing that production people aren’t used to doing at the level that this camera is intended for.

    • Lucidfeuer

But that's not the case with a lightfield camera though; that's not what Lytro does? Their camera arrays use lightfield sensors that have grids of photosensitive transistors capturing a scene from spatial viewpoints. All you need is to reduce the grainy noise, either with higher-resolution sensors or with software noise correction. There's no green screen needed, as you're capturing a fine 3D spatial cloud of pixels; real-scene objects are already differentiated from their background or surroundings.

      • Infinite-Realities

        They are using a green screen in this demo. The main guy was filmed separately and composited.

        • Lucidfeuer

          Ok, thanks. By the way do you think videogrammetry is usable for clean object/people scanning yet, like you did with photogrammetry? I love your technical work.

          • Infinite-Realities

            Thank you and I think so yes. We’re seeing some very presentable results now. It could be especially interesting combined with OTOY/FB’s true 360 Lightfield system.

    • psuedonymous

      There is no such requirement to shoot lightfield images with chroma-keying (indeed, one of the demos of their cinematic camera is to demonstrate keyless depth segmentation). The inter-lens separation only starts to become the limitation as objects get closer to the camera (as you need to do harsher interpolation between samples).

  • RavnosCC

    So this is not yet available in the Within app?

  • Ian Shook

What is the spatial volume from this camera like? Is it a hemisphere?

    • David Herrington

I believe the camera only captures a "flat" plane of information in front of it. I think this was filmed in multiple shoots, with the Lytro camera rotated for each shot to the respective angle necessary to capture the full 360 degrees.

      • NickCoombe

        Edit: I may have misread your answer. Yes, they would have had to rotate the camera to capture the space, but each capture contains depth info and multiple angles of imagery. They would have also had to use multiple takes to capture the singer Bobby Halvorson, since he appears multiple times simultaneously, but they don’t need multiple takes to capture, say, a single performer, or the choir itself with positional information, because all the depth and visual info is already there in the single take.

        Here’s my original reply:

        That’s not quite correct; the Lytro camera actually captures multiple images at all times (you can see the multiple lenses in the “camera” in the images posted in the article, and I assume each of those cameras shown there also captures multiple images, if each of them also uses Lytro technology), so all frames of the video are recorded from many, many positions simultaneously, generating not only images from multiple angles, but also capturing depth information, which can then be used to recreate the geometry of the scene.

        This is also why a green screen is generally not as necessary, because backgrounds can be filtered out by using the depth information captured by the camera.

        Facebook’s new 360 cameras can also achieve this by capturing depth information, although Facebook’s cameras don’t capture multiple angles/positions at the same time, just depth info and single position imagery. By using clever geometry reconstruction and image interpolation, they can still get half decent head-tracked viewing. Lytro’s approach gives much higher quality however.

        • David Herrington

          Yes, your edit is what I originally was saying.

  • Strawb77

    “traditional 360 video”

  • Graham J ⭐️

    Quick correction: Experience is what you have when you consume media, not the media itself.

  • disviq

I wonder why Lytro capture can't be converted into a photogrammetric live render in Unreal or Unity?

  • Damn, we won’t be able to try it at the moment… it’s so sad!