
Google’s Tango Engineering Director, Johnny Lee, on AR Capabilities Enabled by Depth Sensors

Augmented Reality played a huge role at the recent developer conferences for Microsoft, Apple, Facebook, and Google, which is a great sign that the industry is moving towards spatially-aware computing. Microsoft is the only company starting with head-mounted AR via the HoloLens, while the other three are starting with phone-based AR. They are using computer vision and machine learning with the phone camera to do six-degree-of-freedom tracking, but Google's Project Tango is the only phone solution that starts with a depth-sensing camera.

LISTEN TO THE VOICES OF VR PODCAST

This allows Tango to do more sophisticated depth-sensor compositing and area learning, where virtual objects can be placed within a spatial memory context that persists across sessions. Google also has a Visual Positioning Service (VPS) that will help customers locate products within a store, which is going through early testing with Lowe's.

I had a chance to talk with Tango Engineering Director Johnny Lee at Google I/O about the unique capabilities of the Tango phones, including tracking, depth sensing, and area learning. We cover the underlying technology in the phone, world locking and latency comparisons to the HoloLens, the Visual Positioning Service, privacy, future features such as occlusion, object segmentation, and mapping drift tolerance, and the future of spatially-aware computing. In my wrap-up, I also compare and contrast the recent AR announcements from Apple, Google, Microsoft, and Facebook.

The Asus ZenFone AR, coming out in July, will also be one of the first phones enabled for both Tango and Daydream.

Support Voices of VR

Music: Fatality & Summer Trip
