How does it work?

The basic concept of 360 capture in Unity is similar to 360 capture in the real world: we point the (virtual) camera in many different directions to capture different parts of the scene, and then combine the results into a single 360 degree image. Monoscopic capture is particularly simple: we just point the camera in six directions (up, down, left, right, forward and backward), which is the cubemap format. The camera stays in exactly the same position for all six renders. The image below shows a checkered box being captured by a camera located inside the box.

[Image: capturing a checkered box with a camera located inside the box]
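As a concrete illustration, a minimal Unity C# sketch of this six-view capture might look like the following. This is not the plugin's actual code; the class, field and method names (CubeFaceCapture, captureCamera, faceSize, CaptureFaces) are illustrative.

```csharp
using UnityEngine;

// A minimal sketch: render the scene six times from the same position,
// once per cube face, each with a 90-degree field of view.
public class CubeFaceCapture : MonoBehaviour
{
    public Camera captureCamera;   // assumed: a camera placed at the desired capture point
    public int faceSize = 1024;    // resolution of each cube face

    // Forward/up vectors for the six faces (+X, -X, +Y, -Y, +Z, -Z). The up vectors
    // only need to match whatever convention the re-projection step expects.
    static readonly Vector3[] faceForward =
        { Vector3.right, Vector3.left, Vector3.up, Vector3.down, Vector3.forward, Vector3.back };
    static readonly Vector3[] faceUp =
        { Vector3.up, Vector3.up, Vector3.back, Vector3.forward, Vector3.up, Vector3.up };

    public RenderTexture[] CaptureFaces()
    {
        var faces = new RenderTexture[6];
        captureCamera.fieldOfView = 90f;   // six 90-degree views cover the full sphere
        captureCamera.aspect = 1f;

        for (int i = 0; i < 6; i++)
        {
            faces[i] = new RenderTexture(faceSize, faceSize, 24);
            // Same position for every face; only the orientation changes.
            captureCamera.transform.rotation = Quaternion.LookRotation(faceForward[i], faceUp[i]);
            captureCamera.targetTexture = faces[i];
            captureCamera.Render();
        }
        captureCamera.targetTexture = null;
        return faces;
    }
}
```

Unity also provides Camera.RenderToCubemap, which performs essentially the same six renders in a single call; the manual loop above is only needed if you want control over each individual face.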

Representing a complete 360 degree environment in a single image faces the same challenge as trying to represent the surface of the Earth on a flat map. The projection used by most tools today is the equirectangular projection, in which vertical (y) position gives latitude and horizontal (x) position gives longitude. Mapmakers generally avoid this projection because it exaggerates areas near the poles, making them appear much larger than they actually are; but it is convenient for 360 degree panoramas because an efficient viewer for the format is easy to build: simply project the image onto the inside of a large sphere and place the user at its center.

[Image: equirectangular projection of the Earth's surface. CC-BY-SA / Strebe]
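To make the latitude/longitude mapping concrete, here is a small sketch of how a pixel of a width-by-height equirectangular image can be converted into a 3D view direction. The Equirect class and PixelToDirection name are illustrative, not part of the plugin.

```csharp
using UnityEngine;

// Illustrative sketch of the equirectangular mapping described above.
static class Equirect
{
    // Convert a pixel (x, y) of a width-by-height equirectangular image into a unit
    // view direction. x spans 360 degrees of longitude from left to right; y spans
    // latitude from -90 degrees at the bottom row to +90 degrees at the top row
    // (matching Unity's bottom-up texture coordinates).
    public static Vector3 PixelToDirection(int x, int y, int width, int height)
    {
        float longitude = ((x + 0.5f) / width) * 2f * Mathf.PI - Mathf.PI;       // -π .. +π
        float latitude  = ((y + 0.5f) / height) * Mathf.PI - Mathf.PI / 2f;      // -π/2 .. +π/2

        float cosLat = Mathf.Cos(latitude);
        return new Vector3(
            cosLat * Mathf.Sin(longitude),   // x
            Mathf.Sin(latitude),             // y (up)
            cosLat * Mathf.Cos(longitude));  // z (forward at longitude 0)
    }
}
```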

Note that 360 captures must be very high-resolution, because at any given time the viewer is only "zoomed in" on a small portion of the panorama (typically 10-20%). To roughly match the angular resolution of an Oculus Rift or HTC Vive, 360 captures must be at least 4K pixels wide.
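As a rough sanity check (assuming roughly 1080 horizontal pixels per eye spread over roughly 100 degrees of horizontal field of view, which is in the ballpark of the original Rift and Vive): matching that pixel density across a full 360 degrees of longitude requires about 360 / 100 × 1080 ≈ 3,900 pixels of width, which is why 4K (3,840 pixels wide) is a sensible minimum.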


Converting from the captured camera views into the final equirectangular image requires a re-projection operation, which copies pixels from the source images to their correct locations in the target image. This can be done very quickly on the GPU using a compute shader; once it is done, the final image is transferred to the CPU for saving.
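The article does not list the shader source, but the per-pixel logic amounts to the following, shown here as an illustrative CPU-side C# sketch rather than the actual compute shader: for every pixel of the equirectangular output, compute its view direction (using the PixelToDirection sketch above), pick the cube face that direction points into, and sample the corresponding source pixel. It assumes the six captured faces have been read back into readable Texture2D objects in the +X, -X, +Y, -Y, +Z, -Z order and orientation used earlier; all names are illustrative.

```csharp
using UnityEngine;

static class Reprojection
{
    // Illustrative CPU-side version of the re-projection pass. In the real plugin
    // this runs in parallel on the GPU as a compute shader.
    public static Texture2D ToEquirect(Texture2D[] faces, int width, int height)
    {
        var output = new Texture2D(width, height, TextureFormat.RGBA32, false);

        for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            Vector3 d = Equirect.PixelToDirection(x, y, width, height);

            // Pick the cube face whose axis has the largest absolute component,
            // then project the direction onto that face's 90-degree image plane.
            float ax = Mathf.Abs(d.x), ay = Mathf.Abs(d.y), az = Mathf.Abs(d.z);
            int face; float u, v;
            if (ax >= ay && ax >= az)
            {
                face = d.x > 0 ? 0 : 1;
                u = (d.x > 0 ? -d.z : d.z) / ax;
                v = d.y / ax;
            }
            else if (ay >= az)
            {
                face = d.y > 0 ? 2 : 3;
                u = d.x / ay;
                v = (d.y > 0 ? -d.z : d.z) / ay;
            }
            else
            {
                face = d.z > 0 ? 4 : 5;
                u = (d.z > 0 ? d.x : -d.x) / az;
                v = d.y / az;
            }

            // Map from [-1, 1] face coordinates to [0, 1] texture coordinates and copy.
            Color c = faces[face].GetPixelBilinear(0.5f * (u + 1f), 0.5f * (v + 1f));
            output.SetPixel(x, y, c);
        }
        output.Apply();
        return output;
    }
}
```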

Note that in some cases, this simple approach can produce visual artifacts. For example, if a vignette filter is being applied to dim the edges of the view, it will dim the edge of each of the six views, making the edges of the cube visually evident, as shown below. To reproduce such an effect correctly, it would be necessary to remove it during capture and then add it back when viewing the panorama. Besides vignetting, screen-space effects that cause issues like this commonly show up in sky and water rendering. Other screen-space effects, such as bloom or antialiasing, affect the entire view uniformly and so produce no noticeable artifacts.

[Image: capture of the Viking Village scene showing visible cube-edge artifacts caused by a vignette effect]

 



  • yavilevich

    What an interesting article! I really enjoyed it and learned a lot from it. I hope RoadtoVR will feature more technical articles like this in the future. Keep up the good work.

  • Don Gateley

    Really, really great article. Thanks.

    Have you any experience yet, or any opinion, on the Stereolabs ZED 3D camera? It's at a price point I could consider if it is capable of taking 3D videos of what's around me and there is a way to get its output viewable as a YouTube 3D/360 video on a Google Cardboard-class viewer.

    https://www.stereolabs.com/zed/specs/

    Thanks

  • EliasNora

    Interesting article, thank you!
    Just one thing about pictures: it is true that once you have saved your photos with 360 Panorama Capture, you need to upload them to a specific platform for sharing/embedding them on a website. VRchive is OK, but you should try https://360player.io or Koola. They're not dedicated to virtual worlds, but in my opinion the user experience is better if you are looking for pro services.

  • ymike

    There is new software called Surreal Capture (https://www.surrealcapture.com) that can capture 360 degree video directly from a game. You no longer need to use a complex manual process to create a panorama image or movie.