
At Engadget Expand 2014 in NYC I got to take a look at Fove, a head-mounted display with integrated eye-tracking. This is an older prototype, but it demonstrates the core technology and potential benefits of eye-tracking in an HMD.

Fove is a Japanese company working on a consumer head-mounted display with integrated eye-tracking. The company received some press attention recently thanks to its participation in the Microsoft Ventures Accelerator program.


At Engadget Expand I met with Fove’s CEO, Yuka Kojima, who showed me the company’s first prototype head-mounted display. Right now it consists of a lightweight foam housing, two lenses, and two small cameras, on the left and right of the lenses, pointed at each eye. I wasn’t allowed to snap a photo of the inside of the headset, but it appears they’re using all off-the-shelf components in this prototype. The lens and micro display assembly appeared to be pulled from a Sony HMZ headset. The infrared cameras on the left and right had fairly wide-angle lenses and were probably taken from a consumer gesture camera.

Given the roughly 45 degree field of view, I would not call the existing Fove prototype a ‘VR headset’; that designation is reserved for HMDs with a larger field of view. However, Fove intends for their finished product to fall into that immersive category: they say their latest prototype uses a 2.5k display and has a 100 degree field of view. Fove says they’re working on positional tracking as well.


Kojima had the Fove HMD connected to a laptop at Expand that struggled to run the experiences being demonstrated; at 10-15 FPS, it was clear that the company wasn’t too concerned with showing a high fidelity VR experience; rather, they just wanted to show their eye-tracking tech. Unfortunately it was unclear whether some lag I experienced with the eye tracking was due to the company’s tech or merely an insufficient computer. We’re hoping to soon get our hands on the company’s latest prototype with a more powerful computer running the demo.


Kojima ran me through a simple calibration sequence which could be easily gamified. A series of white dots were shown on the display and I was asked to look at each one while she clicked the mouse to save the eye position data for each point. It took about 15 seconds to go through the calibration sequence and when it was over I was asked to look at a green dot in the center of the screen. If the calibration went well, another dot, representing my gaze would land on or very close to the center dot. If it was miscalibrated, the dot would be far away. It took about three tries to calibrate properly; when we did get a good data set, the tracking seemed very accurate, with the gaze dot landing right on the center point.
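Under the hood, a calibration like this typically boils down to fitting a mapping from measured pupil positions to known on-screen dot positions. Here is a minimal sketch of one common approach, a least-squares affine fit; the function names and the synthetic calibration data are illustrative, not Fove’s actual implementation:

```python
import numpy as np

def fit_gaze_map(pupil_xy, screen_xy):
    """Least-squares fit of an affine map from measured pupil positions
    to known on-screen calibration dot positions."""
    P = np.asarray(pupil_xy, dtype=float)
    S = np.asarray(screen_xy, dtype=float)
    # Augment pupil coords with a bias column so [x, y, 1] @ A ~= [sx, sy].
    X = np.hstack([P, np.ones((len(P), 1))])
    A, *_ = np.linalg.lstsq(X, S, rcond=None)
    return A  # 3x2 affine matrix

def predict_gaze(A, pupil):
    """Map a single pupil position to an estimated on-screen gaze point."""
    x, y = pupil
    return np.array([x, y, 1.0]) @ A

# Five calibration dots (normalized screen coords) and the pupil
# positions recorded while fixating each one. Synthetic and noise-free
# here; real camera data would be noisy, hence the least-squares fit.
dots   = [(0.1, 0.1), (0.9, 0.1), (0.5, 0.5), (0.1, 0.9), (0.9, 0.9)]
pupils = [(-0.24, -0.24), (0.24, -0.24), (0.0, 0.0), (-0.24, 0.24), (0.24, 0.24)]
A = fit_gaze_map(pupils, dots)
gaze = predict_gaze(A, (0.0, 0.0))  # a centered pupil maps near (0.5, 0.5)
```

A bad data set (say, the user blinking during a dot) would show up as a large residual, which matches the retry-until-it-works behavior described above.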


A quick glimpse at Fove’s view of your eye and the calibration process.

After calibrating I asked Kojima if I could tighten the headset, but she somewhat frantically told me not to, for fear of throwing off the calibration. This makes me wonder if users will need to re-calibrate frequently when removing the headset for breaks or making adjustments. So long as the calibration process becomes faster and more robust, it won’t be a problem, but if users need to recalibrate every time they make a comfort adjustment, it could hurt usability.

Head tracking was built into the headset, but there’s no positional tracking on this prototype; as noted, the company says it’s in the works.

Kojima walked me through a few different experiences demonstrating the eye tracking capabilities of the Fove HMD. The first had me in a dark city street with some futuristic-looking super-soldiers lined up before me. Looking at them caused me to shoot them and one after another they dropped to the ground after being blasted by my eyes.


Another demo was essentially the same thing but this time there were ships flying all around me. Looking at them caused me to lock onto them, then a tap of the mouse shot them out of the sky. While I wouldn’t call this kind of eye-based aiming particularly compelling, I suppose it’s a step up from aiming with a reticle attached to your face—something we see today across a number of Oculus Rift demos.

The next demo was closer to the applications that eye-tracking would be most useful for—gaze-based focus and character interaction. In the next scene I was standing in front of a virtual character in an open field. When I glanced at the background, the foreground blurred. When I glanced at the character, she smiled and came into focus while the background blurred.
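A simple way to think about gaze-contingent focus like this: look up the scene depth under the user’s gaze, then blur each pixel in proportion to how far its depth is from that focal depth. The sketch below is purely illustrative (the function and its parameters are my own, not Fove’s implementation):

```python
def blur_radius(pixel_depth, gaze_depth, max_radius=8.0):
    """Gaze-contingent depth of field: blur a pixel in proportion to how
    far its depth is from the depth of whatever the user is looking at.
    Depths are in meters; returns a blur kernel radius in pixels."""
    # Relative defocus, loosely modeled on the circle of confusion.
    defocus = abs(pixel_depth - gaze_depth) / max(pixel_depth, 1e-6)
    return min(max_radius, max_radius * defocus)

# Looking at a character 2 m away: she stays sharp, the background blurs.
sharp = blur_radius(2.0, 2.0)    # 0.0, in focus
blurry = blur_radius(50.0, 2.0)  # near the maximum blur radius
```

In the demo, glancing between the character and the background would simply swap which depth is treated as `gaze_depth`.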

The same mechanics could be used for foveated rendering, where the image is rendered in full fidelity only at the very center of the gaze, while areas further from that point are rendered at lower quality to reduce the computing power needed for the scene (as we only see sharply in a very small area). This may be necessary to push virtual reality framerates to the 90 FPS or more that Oculus believes is required.
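In practice this means picking a resolution scale per screen region based on its angular distance (eccentricity) from the gaze point. A minimal sketch, with thresholds chosen for illustration rather than taken from Fove or any shipping renderer:

```python
import math

def shading_rate(tile_center, gaze, fov_degrees=100.0):
    """Choose a resolution scale for a screen tile based on its angular
    distance (eccentricity) from the tracked gaze point. Coordinates
    are normalized to [0, 1] across the display."""
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    ecc = math.hypot(dx, dy) * fov_degrees  # rough degrees of eccentricity
    if ecc < 5.0:    # foveal region: full resolution
        return 1.0
    if ecc < 20.0:   # near periphery: half resolution
        return 0.5
    return 0.25      # far periphery: quarter resolution

rate = shading_rate((0.9, 0.5), gaze=(0.5, 0.5))  # far from gaze: low rate
```

The savings come from the quadratic falloff: most of a 100 degree field of view lands in the quarter-resolution band, so only a small fraction of pixels are shaded at full cost.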

If Fove is going to enable foveated rendering, their eye tracking tech needs to be extremely fast, lest you glance over to another part of the scene and see it blurred for a moment until the system finds your new gaze direction—a sure immersion breaker unless done fast enough. As I mentioned previously, I experienced noticeable lag in the eye-tracking, though it wasn’t clear if this was inherent or due to an inadequate computer.


All in all, Fove’s tech works well as a proof of concept. Their next steps will be important: showing that they can succeed in the other important aspects of a VR headset, like comfort, latency, low-persistence, and positional tracking, and also convincing the world that eye-tracking is necessary for a VR headset (I believe that it is). If Fove can pull it off, their innovation will likely be more about pricing than heretofore unseen tech; like VR headsets of yore, eye-tracking is not new, it simply hasn’t been seen at a consumer price point. If and when it hits, it could be a game changer.




Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • David Mulder

    I wish I remembered who did this, but someone wrote a long article about why partial higher-resolution rendering is unrealistic: the refresh/frame rate required to match the speed of the eye was an order of magnitude higher than we are able to render right now. And if I remember correctly he even argued that once those speeds do become realistic, it would still likely cost more than it would save, assuming rendering works anything like it does now, but I forgot the reasoning behind that (or, put more accurately: I don’t know enough about how render engines work to follow his argument).

    Either way, the point is just that it might not have as big a future as you think; if I come across the article I will send it to you ~

    I do however think that for generic interfacing, eye tracking can be somewhat useful, and incredibly valuable for human-to-human interaction. Combine this with mouth- and eyebrow-facing cameras and you might start leaving the uncanny valley. Of course there is the problem of the eyebrows being hidden, but all of this should still be doable.

  • elecman

    Why not modify an existing headset to enable eye tracking?

  • Darshan Gayake

    Ben, you are very lucky to get to try revolutionary technology in its infancy.
    I envy you :-)

    I too believe eye tracking is revolutionary, in the sense that it could allow 4K for mainstream VR gaming by Q3 2015/16 (if there is any mainstream VR gaming by then), which would not otherwise be possible in VR mode, of course.

    Look at Metro: Last Light at 3840×2160, high quality, non-stereo: it yields just 37 FPS; drop to medium quality and it’s 54 FPS. Now consider stereo/VR mode. Since VR isn’t considered mainstream, nobody publishes FPS data for it, but I can safely guess a 50-60% performance loss, which puts it around 36 FPS at medium quality. That’s miles away from the 60 FPS the DK2 needs or the 90 FPS Crystal Cove requires, and we’re talking about the HIGHEST-END single GPU of 2014 paired with the BEST AVAILABLE components. Only 14-15% of the total hardcore gamer community might possess such a rig, while the Rift is targeting casual and non-gamers too.

    http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/10

    The DK2 at 1080p looks bad, and even 2560×1440 might not be such a great improvement since it’s a PenTile display.

    Why I consider eye tracking significant: take Skyrim as an example. In the advanced game settings, setting the view distance slider high causes a significant drop in frame rate, while setting it low increases frame rates dramatically. You can also gain frame rate by enabling object detail fade, actor detail fade, or light fade, which shows that Skyrim does multi-layer drawing/rendering in real time.

    So gaze-tracked detail rendering, or LoD, is a very important factor: imagine the computer drawing the highest detail only where you are looking while the rest of the scene fades to lower detail.

    Game engines would also need to support multi-layer rendering: when a map loads, it would load as two or three rendering layers with basic mesh models, which then get superimposed with other layers depending on where the user’s gaze is.

    I know this will require advanced, accurate tracking with minimal latency, but this tech could free up the CPU/GPU so significantly that even moderate systems could handle VR at 4K, which is a must for immersion. (After spending significant hours on the DK2, I strongly believe even 2K won’t do for CV1, though I know CV1 will most likely be 2K.)