Hands-on: FOVE’s Eye-tracking VR Headset Was the Next Best at CES

FOVE, a VR headset prototype in the works by a Japan-based team, is quickly closing the experience gap between itself and the Oculus Rift. If the team continues at this pace, it could catch up, and it has a trick up its sleeve: eye-tracking.

When I first took a look at FOVE’s VR headset back in November, my experience was less than stellar. The eye-tracking tech worked, but the overall experience left much to be desired (a presentation on inadequate hardware didn’t help the case). When I got in touch with FOVE’s CTO, Lochlainn Wilson, he told me he was dismayed that I had ended up seeing that old prototype, which he said was “barely more than a mock up and is very far removed from the final product in terms of quality, accuracy and experience.” After seeing the company’s latest prototype at CES 2015, it’s clear that Wilson wasn’t bending the truth: the latest version is a huge step in the right direction.

While the latest FOVE prototype looks similar on the outside, it’s using totally different display tech: currently a single 1440p LCD panel with a field of view that felt like it matched that of the Oculus Rift DK2. Though the current prototype lacks positional tracking (the ability to track the head through 3D space), the rotational tracking felt better than every other new VR headset I tried at CES, save for the Oculus Rift Crescent Bay prototype. It’s good to see that they’re focused on latency, because it’s the keystone that many entrants to the consumer VR headset market are currently lacking.

On top of a 1440p display, solid headtracking, and a decent field of view, FOVE’s real trick is its ability to track the wearer’s eyes.

Inside the headset, each lens is surrounded on the top, bottom, left, and right by IR LEDs which illuminate the eye, allowing cameras inside to detect the orientation of each eye. Looking through the lenses is just what you’d expect from a VR headset without eye-tracking; Wilson told me that this is different from other VR headset eye-tracking solutions, which can have components that obscure parts of the field of view.
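To make the camera-based approach concrete, here’s a toy sketch of one common first step in IR eye tracking: estimating the pupil center as the centroid of the darkest pixels in a camera frame. FOVE hasn’t published its pipeline, so the function and thresholds below are purely illustrative assumptions, not the company’s method.

```python
import numpy as np

def pupil_center(frame, dark_thresh=40):
    """Estimate the pupil center in a grayscale IR frame.

    Under IR illumination the pupil shows up as the darkest region,
    so we threshold and take the centroid of the dark pixels.
    """
    mask = frame < dark_thresh          # candidate pupil pixels
    ys, xs = np.nonzero(mask)
    if len(ys) == 0:
        return None                     # no pupil candidate found
    return ys.mean(), xs.mean()

# Synthetic 8x8 frame: a bright eye region with a dark 2x2 "pupil"
# at rows 3-4, columns 5-6
frame = np.full((8, 8), 200)
frame[3:5, 5:7] = 10
print(pupil_center(frame))  # centroid of the dark patch: (3.5, 5.5)
```

A real tracker would refine this with glint detection and a 3D eye model, but the centroid illustrates why surrounding each lens with IR LEDs matters: uniform illumination makes the pupil reliably the darkest feature.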

The calibration process has been streamlined since the last time I saw it: now you follow an animated green dot with your gaze, pausing as it stops to capture calibration points at nine or so discrete locations on the display. The whole thing takes no more than 20 seconds or so.
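A calibration like the one described can be thought of as fitting a mapping from raw pupil-camera coordinates to on-screen gaze coordinates using the nine captured points. FOVE’s actual calibration math isn’t public; the least-squares affine fit below is just a minimal sketch of the general idea, with made-up sample data.

```python
import numpy as np

def fit_calibration(pupil_pts, screen_pts):
    """Fit an affine map: screen ~= [px, py, 1] @ A, via least squares."""
    pupil = np.asarray(pupil_pts, dtype=float)
    screen = np.asarray(screen_pts, dtype=float)
    ones = np.ones((len(pupil), 1))
    X = np.hstack([pupil, ones])                     # N x 3 design matrix
    A, *_ = np.linalg.lstsq(X, screen, rcond=None)   # 3 x 2 affine map
    return A

def gaze_to_screen(A, pupil_xy):
    px, py = pupil_xy
    return np.array([px, py, 1.0]) @ A

# Nine calibration targets in a 3x3 grid (normalized screen coordinates)
targets = [(x, y) for y in (0.1, 0.5, 0.9) for x in (0.1, 0.5, 0.9)]
# Fake pupil measurements: a scaled and offset copy of the targets
pupils = [(0.5 * x + 0.2, 0.5 * y + 0.1) for x, y in targets]

A = fit_calibration(pupils, targets)
print(gaze_to_screen(A, (0.45, 0.35)))  # should recover roughly (0.5, 0.5)
```

Nine points is enough to overdetermine this simple affine model; real systems typically fit richer, per-eye models to handle lens distortion and eye geometry.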

As it stands, FOVE is putting a fair amount of emphasis on the ability to aim with your eyes (probably because it’s easy to show and easy to understand), but to me that’s a red herring; what excites me about eye-tracking are the more abstract and enabling possibilities like eye-based interface and contextual feedback, simulated depth-of-field, foveated rendering, and avatar eye-mapping. Let’s go through those one by one real quick:

  • Eye-based interface and contextual feedback: Imagine a developer wants to make a monster pop out of a closet, but only when the user is actually looking at it. Or perhaps an interface menu that expands around any object you’re looking at but collapses automatically when you look away.
  • Simulated depth-of-field: While the current split-screen stereo view familiar to users of most consumer VR headsets accurately simulates vergence (movement of the eyes to converge the image of objects at varying depths), it cannot simulate depth of field (the blurring of out-of-focus objects) because the flat panel means that all light from the scene is coming from the same depth. If you know where the user is looking, you can simulate depth of field by applying a fake blur to the scene at appropriate depths.
  • Foveated rendering: This is a rendering technique aimed at reducing the rendering workload, hopefully making it easier for applications to achieve the high framerates that are much desired for an ideal VR experience. It works by rendering in high quality only at the very center of the user’s gaze (only a small region at the retina’s center, the fovea, picks up high detail) and rendering the surrounding areas at lower resolutions. If done right, foveated rendering can significantly reduce the computational workload while the scene looks nearly identical to the user. Microsoft Research has a great technical explanation of foveated rendering in which they were able to demonstrate 5-6x acceleration in rendering speed.
  • Avatar eye-mapping: This is one I almost never hear discussed, but it excites me just as much as the others. It’s amazing how much body language can be conveyed with headtracking alone, but the next step up in realism will come from mapping the mouth and eye movements of the player onto their avatar.
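The core mechanism behind foveated rendering above can be sketched in a few lines: pick a shading resolution per screen tile based on how far that tile sits from the tracked gaze point. This is an illustrative assumption, not FOVE’s or Microsoft’s implementation, and the region thresholds are made up.

```python
import numpy as np

def shading_scale(tile_center, gaze, inner=0.1, outer=0.3):
    """Return a resolution scale for a screen tile (normalized coordinates).

    Tiles near the fovea render at full resolution; peripheral tiles
    render at reduced resolution, cutting the total shading workload.
    """
    dx, dy = np.subtract(tile_center, gaze)
    d = np.hypot(dx, dy)
    if d < inner:
        return 1.0      # full resolution at the center of gaze
    if d < outer:
        return 0.5      # half resolution in the near periphery
    return 0.25         # quarter resolution in the far periphery

gaze = (0.5, 0.5)  # normalized screen position reported by the eye tracker
print(shading_scale((0.52, 0.5), gaze))  # near the fovea: 1.0
print(shading_scale((0.9, 0.9), gaze))   # far periphery: 0.25
```

Because most of the screen falls in the periphery at any given moment, even this crude three-ring scheme shades far fewer pixels at full resolution, which is where the reported speedups come from; the catch, and the reason eye-tracking latency matters so much, is that the high-resolution region must move with the eye fast enough that the user never catches the periphery being blurry.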

When you break it down like this, capable eye-tracking actually stands to add a lot to the VR experience in both performance and functionality; it could represent a major competitive advantage.

Wilson told me that FOVE’s eye-tracking system will be fast and accurate enough to pull off everything listed above, and some of it I actually saw in action.
