
Google Acquires Eye-tracking Company Eyefluence, Reportedly Building VR Headset With Tech

Just what Google has brewing in their skunkworks, we can’t say for sure, but with their most recent acquisition of Eyefluence, a company that builds eye-tracking technology for VR headsets, it seems Google is getting ever deeper into what’s largely considered ‘the next generation’ of dedicated VR hardware.

A report from Engadget published yesterday maintains that Google’s secret standalone VR headset “will integrate eye-tracking and use sensors and algorithms to map out the real-world space in front of a user.”

According to unnamed sources cited by Engadget writer Aaron Souppouris, visual processing company Movidius is providing chips to Google, and the new project, “separate from the company’s Daydream VR platform, will not require a computer or phone to power it.”

Daydream VR is a platform devised by Google that works with a select number of flagship smartphones from various manufacturers, the first of which is the Google Pixel. Alongside the Pixel’s unveiling earlier this month, the company also revealed the ‘Daydream View’, the first Daydream headset.

Engadget’s report, however, was published just hours before Eyefluence quietly announced they would be joining Google for an undisclosed sum, a move first spotted by Mattermark.

In our hands-on with FOVE, currently the only purpose-built eye-tracking VR headset on the market, Executive Editor Ben Lang laid out a number of use cases where augmented and virtual reality could benefit from eye-tracking.

  • Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away. (See the first sketch after this list.)
  • Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (the movement of the eyes to converge the images of objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects), because the flat panel means that all light from the scene comes from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at the appropriate depths. (See the second sketch after this list.)
  • Foveated rendering: This is a rendering technique aimed at reducing the rendering workload, hopefully making it easier for applications to achieve the high framerates that are much desired for an ideal VR experience. It works by rendering in high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions. If done right, foveated rendering can significantly reduce the computational workload while the scene looks nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate a 5-6x acceleration in rendering speed. (See the third sketch after this list.)
  • Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with head tracking alone, but the next step up in realism will come from mapping the mouth and eye movements of the player onto their avatar. (The last sketch after this list shows the eye half of that mapping.)
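
To make the first use case concrete, here is a minimal sketch of a dwell-based gaze trigger built on a simple ray-sphere test. The gaze origin and direction are hypothetical inputs; on a real headset the eye-tracking SDK would supply them each frame:

```python
import numpy as np

def gaze_hits_sphere(origin, direction, center, radius):
    """Ray-sphere test: does the gaze ray pass within `radius` of `center`?"""
    to_center = center - origin
    t = np.dot(to_center, direction)   # distance along the (unit) gaze ray
    if t < 0.0:
        return False                   # target is behind the viewer
    closest = origin + t * direction
    return np.linalg.norm(center - closest) <= radius

class GazeTrigger:
    """Fires on_enter once the gaze dwells on a target, on_exit when it leaves."""
    def __init__(self, dwell_seconds=0.3):
        self.dwell, self.timer, self.active = dwell_seconds, 0.0, False

    def update(self, looking, dt, on_enter, on_exit):
        self.timer = self.timer + dt if looking else 0.0
        if not self.active and self.timer >= self.dwell:
            self.active = True
            on_enter()                 # e.g. the monster jumps out
        elif self.active and not looking:
            self.active = False
            on_exit()                  # e.g. the menu collapses

# Per-frame usage at 90 Hz; the gaze pose would come from the eye tracker.
trigger = GazeTrigger()
eye = np.array([0.0, 1.6, 0.0])
gaze_dir = np.array([0.0, 0.0, -1.0])          # must be unit length
closet = np.array([0.0, 1.5, -3.0])
for _ in range(30):                            # ~1/3 second of frames
    looking = gaze_hits_sphere(eye, gaze_dir, closet, radius=0.5)
    trigger.update(looking, dt=1 / 90,
                   on_enter=lambda: print("monster!"),
                   on_exit=lambda: print("calm again"))
```

The dwell timer is a practical detail: without it, tiny eye jitters would make gaze-activated UI flicker open and shut.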
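
The depth-of-field idea can likewise be sketched in a few lines. This toy version samples the depth buffer under the gaze point to find the focal depth, then blends in a blurred copy of the frame weighted by each pixel's depth difference from that focus. The color and depth arrays are hypothetical inputs; a real implementation would do a variable-radius blur in a shader rather than on the CPU:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_depth_of_field(color, depth, gaze_px, strength=2.0, max_sigma=6.0):
    """Blend in a blurred frame, weighted by depth distance from the gaze point."""
    gx, gy = gaze_px
    focal_depth = depth[gy, gx]                # depth under the gaze point
    # Fake circle of confusion: farther (in depth) from focus => blurrier.
    coc = np.clip(np.abs(depth - focal_depth) * strength, 0.0, 1.0)
    blurred = np.stack(
        [gaussian_filter(color[..., c], sigma=max_sigma) for c in range(3)],
        axis=-1)
    # Per-pixel linear blend between the sharp and blurred frames.
    return color * (1.0 - coc[..., None]) + blurred * coc[..., None]

# Example: left half of the scene is near (depth 1), right half far (depth 5).
h, w = 120, 160
color = np.random.rand(h, w, 3)
depth = np.where(np.arange(w) < w // 2, 1.0, 5.0) * np.ones((h, w))
out = gaze_depth_of_field(color, depth, gaze_px=(40, 60))  # gazing at the near half
```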
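
And foveated rendering, reduced to its essence: shade the periphery at low resolution, shade a small full-detail inset around the gaze point, and composite. The `render_scene` function below is a stand-in for a real renderer, and the resolutions are illustrative; the arithmetic at the end shows where the savings come from (real systems recover less than this upper bound because of pass overhead and blending between the regions):

```python
import numpy as np

FULL_W, FULL_H = 1080, 1200     # per-eye panel resolution (Rift/Vive class)
SCALE = 4                        # periphery rendered at 1/4 linear resolution
FOVEA = 300                      # high-detail inset is FOVEA x FOVEA pixels

def render_scene(w, h):
    """Stand-in for the real renderer: cost is proportional to w * h."""
    return np.zeros((h, w, 3))

def foveated_frame(gaze_x, gaze_y):
    # 1. Cheap peripheral pass at reduced resolution, upscaled to panel size.
    periphery = render_scene(FULL_W // SCALE, FULL_H // SCALE)
    frame = periphery.repeat(SCALE, axis=0).repeat(SCALE, axis=1)
    # 2. Full-detail pass only for the small region around the gaze point.
    x0 = np.clip(gaze_x - FOVEA // 2, 0, FULL_W - FOVEA)
    y0 = np.clip(gaze_y - FOVEA // 2, 0, FULL_H - FOVEA)
    frame[y0:y0 + FOVEA, x0:x0 + FOVEA] = render_scene(FOVEA, FOVEA)
    return frame

# Back-of-envelope workload: fraction of full-resolution pixels actually shaded.
full = FULL_W * FULL_H
shaded = full // SCALE**2 + FOVEA**2
print(f"{shaded / full:.0%} of the pixels of a naive render")  # ~13%
```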
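
Finally, the eye half of avatar mapping is mostly a retargeting problem: convert the tracked gaze direction into clamped rotation angles for the avatar's eye bones. A minimal sketch, assuming (hypothetically) that the tracker reports a unit gaze direction in head-local space with -z forward:

```python
import math

# Rough human oculomotor range, in degrees.
MAX_YAW, MAX_PITCH = 35.0, 25.0

def gaze_to_eye_angles(dx, dy, dz):
    """Map a head-local unit gaze direction to clamped eye-bone yaw/pitch."""
    yaw = math.degrees(math.atan2(dx, -dz))                   # left/right
    pitch = math.degrees(math.asin(max(-1.0, min(1.0, dy))))  # up/down
    yaw = max(-MAX_YAW, min(MAX_YAW, yaw))
    pitch = max(-MAX_PITCH, min(MAX_PITCH, pitch))
    return yaw, pitch

# Looking slightly left and up:
print(gaze_to_eye_angles(-0.2, 0.1, -0.97))  # ≈ (-11.7, 5.7)
```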