One demo had me floating amidst a futuristic cityscape. When I was ready to start the action, I looked down at a button below me, which was ‘selected’ the instant my gaze landed on it, causing an army of hovering drones to pop up in front of me. As I looked at them, lasers fired immediately, destroying each one in rapid succession as my eyes saccaded across the scene. It felt fairly accurate and fast, and worked well as a proof of concept, but aiming and firing with your eyes alone is not exactly natural, so it also felt a little strange.
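For those curious how this kind of gaze selection typically works under the hood, the core is a simple ray cast from the eye along the tracked gaze direction; whatever the ray hits first counts as ‘looked at’. The sketch below is a generic illustration of that technique, not FOVE’s SDK; the Target class and gaze_pick function are hypothetical names, and the drone positions are made up.

```python
import math
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    center: tuple   # (x, y, z) world position
    radius: float   # bounding-sphere radius

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def gaze_pick(origin, direction, targets):
    """Return the nearest target hit by a ray along the gaze direction.

    `origin` is the eye position and `direction` the gaze vector, both
    assumed to come from the headset's eye tracker each frame.
    """
    d = normalize(direction)
    best, best_t = None, math.inf
    for tgt in targets:
        # Ray-sphere intersection: t^2 + 2bt + c = 0 along the ray o + t*d.
        oc = tuple(o - c for o, c in zip(origin, tgt.center))
        b = sum(di * v for di, v in zip(d, oc))
        c = sum(v * v for v in oc) - tgt.radius ** 2
        disc = b * b - c
        if disc < 0:
            continue  # gaze ray misses this target entirely
        hit_t = -b - math.sqrt(disc)
        if 0 < hit_t < best_t:
            best, best_t = tgt, hit_t
    return best

# Each frame: fire at whichever drone the gaze ray currently selects.
drones = [Target("drone-1", (0.5, 1.8, 4.0), 0.3),
          Target("drone-2", (-1.0, 2.0, 6.0), 0.4)]
hit = gaze_pick(origin=(0.0, 1.7, 0.0), direction=(0.08, 0.02, 1.0), targets=drones)
if hit:
    print(f"firing at {hit.name}")
```

Run every frame against fresh eye-tracker data, this is all it takes to make targets explode as fast as your eyes can saccade between them.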
Another demo showed simulated depth-of-field. I entered a room and gazed at two soldiers to gun them down. From there, Wilson had me alternate my gaze between one of the corpses and the wall behind it. The scene blurred appropriately based on the depth of my gaze, bringing the corpse into focus while blurring the background, or vice versa. It was hard to tell whether it was fast enough to convince me it was true depth-of-field, especially as Wilson had exaggerated the blurring effect for demonstration purposes. However, if FOVE is fast and accurate enough to pull off foveated rendering, as Wilson asserts, depth-of-field should be no problem either, though it may require some tuning to feel right.
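As a rough model of what the demo is doing, the amount of blur for a point at a given depth can be derived from the thin-lens circle-of-confusion formula, with the focus distance set to the depth of the point the eyes are fixating. The sketch below is a minimal illustration with made-up camera parameters, not FOVE’s actual pipeline; exaggerating the effect, as Wilson did, would amount to scaling the result.

```python
def coc_diameter(object_depth, focus_depth, focal_length=0.05, aperture=0.025):
    """Thin-lens circle-of-confusion diameter (meters at the image plane).

    Blur grows as object_depth departs from focus_depth, the depth of the
    point the eyes are fixating. Focal length and aperture are illustrative.
    """
    return (aperture * abs(object_depth - focus_depth) / object_depth
            * focal_length / (focus_depth - focal_length))

# Fixating the corpse at 2 m: it stays sharp while the wall at 6 m blurs.
for depth_m in (2.0, 6.0):
    c = coc_diameter(depth_m, focus_depth=2.0)
    print(f"depth {depth_m:.0f} m -> CoC {c * 1000:.2f} mm")
```

Shifting fixation to the wall simply swaps which depth yields zero blur, which is exactly the alternation Wilson had me perform.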
Wilson even told me that FOVE’s eye-tracking system could potentially measure pupil dilation, though he called it an “inexact science” for the time being.
The design language of eye-tracking input (especially with regard to user interaction) still has a long way to go, but FOVE’s latest prototype serves as a solid proof of concept of what eye-tracking can add to virtual reality, and the company has built an impressive headset to boot. Assuming FOVE stays on this trajectory, they’re definitely worth keeping an eye on.