
Hands On: SMI Proves that Foveated Rendering is Here and it Really Works

SMI, a company working in the field of gaze detection for over 20 years, hit CES this year with an application of their latest 250Hz eye tracking solution coupled with a holy grail for second generation virtual reality: foveated rendering.

SMI's (SensoMotoric Instruments) history with eye tracking is lengthy, the company having been on the cutting edge of the field for over two decades now. Up until now, however, their technologies have been used in any number of fields, from consumer research (gauging how people's eyes are drawn to particular products in a supermarket aisle) all the way to informing the optimal design of eyewear for sporting professionals.

SMI came to CES this year to demonstrate their latest 250Hz eye tracking solution integrated with a VR headset. More importantly however, they demonstrated this eye tracking coupled with foveated rendering, a technology generally regarded as vital for next generation VR experiences.

Foveated rendering is an image rendering technique born from the way we look at and process images from the world around us. Although human vision has a very wide total field of view, we really only focus on a very small segment of that view at any one time. Our eyes rapidly dart from point to point, drawing information from those focal points. In the world of VR that means that, at least in theory, most of the pixels used to render the virtual world to a headset at a constant resolution are largely wasted. There's not a lot of point drawing all of those pixels in full detail when we can only actually perceive a small percentage of them at any one time.

Foveated rendering aims to reduce VR rendering load by using gaze detection to tell the VR application where the user is looking, and therefore which area of the view to construct at high visual fidelity. The rest of the image, which falls into our peripheral vision, can then be drawn at progressively lower resolutions the further it is from the current focal point. The technique is largely accepted as necessary as we strive towards an image indistinguishable from reality to the human eye, an image that requires a resolution in the region of 16K per eye for a 180 degree field of view, according to Oculus' chief scientist Michael Abrash. That's a lot of potentially wasted pixels.
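To put that figure in context, here's a quick back-of-envelope comparison between the DK2 used in this demo and a hypothetical 16K-per-eye display. Note that "16K" is interpreted here as 15360 × 8640 per eye purely for illustration; Abrash's figure is a rough target, not a published spec.

```cpp
// Back-of-envelope pixel counts for the resolutions discussed above.
// "16K per eye" is taken here as 15360 x 8640 per eye -- an assumed
// interpretation for illustration only, not a confirmed panel spec.
#include <cstdio>

int main() {
    const double dk2PerEye    = 960.0 * 1080.0;    // Oculus Rift DK2, per eye
    const double targetPerEye = 15360.0 * 8640.0;  // assumed "16K" per eye

    printf("DK2 per eye:    %.1f Mpixels\n", dk2PerEye / 1e6);
    printf("Target per eye: %.1f Mpixels\n", targetPerEye / 1e6);
    printf("Ratio:          ~%.0fx more pixels per eye\n",
           targetPerEye / dk2PerEye);
    return 0;
}
```

That works out to well over a hundred times the pixels of today's development kits, per eye, every frame; hence the appeal of only shading a fraction of them at full detail.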

I met with SMI's Christian Villwock, Director of OEM Solutions, who showed me their latest technology integrated with a modified Oculus Rift DK2. SMI had replaced the lens assembly inside the headset with a custom one incorporating the tech needed to track where you were looking. (We'll have a deep dive on exactly how this works at a later date.)

Firstly, Villwock showed me SMI's eye tracking solution and demonstrated its speed and accuracy. After calibration (a simple 'look at the circles' procedure), your profile information is stored for any future application use, so this is a one-time requirement.

The first demo comprises a scene with piles of wooden boxes in front of you. A red dot highlights your current gaze point, with different boxes highlighting when looked at. This was very quick and extremely accurate; I could very easily target and follow the edges of the boxes in question with precision. The fun bit? Once you have a box highlighted, hitting the right joypad trigger causes that box to explode high into the air. What impressed though was that, as the boxes rose higher, I could almost unconsciously and near instantly target them and continue the same trick, blasting the box higher into the air. The system was so accurate that, even when a box was a mere few pixels across at hundreds of feet in the air, I was still able to hit it as a target and continue blasting it higher. Seriously impressive stuff.

The best was yet to come though, as Villwock moved me on to their pièce de résistance, foveated rendering. Integrated into the, by now, well-worn Tuscany tech demo from the Oculus SDK, SMI's version is able to render defined portions of the scene presented to the user at varying levels of detail, defined as concentric circles around the current gaze point. Think of it like an archery target: the bullseye represents your focal point, rendered at 100% detail, with the next segment at 60% detail and the final segment at 20% detail.
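As a rough illustration of how such a scheme might look in code, here's a minimal sketch of a concentric-zone detail lookup using the demo's 100% / 60% / 20% levels. The zone radii are my own assumptions for illustration; SMI haven't published the exact sizes used.

```cpp
// Minimal sketch of the concentric-zone scheme described above: given
// the tracker's current gaze point, pick a detail level for each point
// of the image. Zone radii are illustrative assumptions, not SMI's
// figures. Coordinates are in normalized [0,1] screen space.
#include <cmath>
#include <cstdio>

float detailScale(float x, float y, float gazeX, float gazeY) {
    const float dx = x - gazeX;
    const float dy = y - gazeY;
    const float dist = std::sqrt(dx * dx + dy * dy);

    if (dist < 0.10f) return 1.0f;   // bullseye: full detail
    if (dist < 0.30f) return 0.6f;   // middle ring: 60% detail
    return 0.2f;                     // periphery: 20% detail
}

int main() {
    // Gaze fixed at screen centre; sample points moving outward.
    for (float x = 0.5f; x <= 1.0f; x += 0.125f) {
        printf("x = %.3f -> detail %.1f\n",
               x, detailScale(x, 0.5f, 0.5f, 0.5f));
    }
    return 0;
}
```

In a real renderer the returned scale would more likely drive per-region render target resolution than a per-pixel branch, but the lookup against the gaze point is the same idea.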

I had a couple of questions going into this demo.

One: Is the eye tracking to foveated rendering pipeline quick enough to track my eye, shifting that bullseye and its concentric circles of lower detail fast enough for me not to detect it? The answer is 'yes', it can, and it really does work well (a rough sketch of the timing budget involved follows these two points).

Two: Can I detect when foveated rendering is on or off? The answer is 'yes', but it's something you really need to look for (or, as it happens, look away for). With SMI's current system, the lower detail portions of the image sit in your peripheral vision, and for me they caused a slight shimmering at the very edge of my vision. Bear in mind, however, that this is entirely related to the field of view of the image itself and how aggressively that outer region is reduced in detail. In other words, it's probably a solvable issue, and one that may not even be noticed by many, especially during a more action-packed VR experience.
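Coming back to that first question, here's a rough and purely illustrative sketch of the timing budget involved. None of these figures come from SMI; they simply assume the 250Hz tracker feeding the DK2 at its native 75Hz refresh, with a worst case where a fresh gaze sample just misses the start of a frame.

```cpp
// Rough, purely illustrative latency budget for the gaze-to-render
// loop. Assumptions: a 250Hz tracker (4 ms per sample) feeding a DK2
// at its native 75Hz refresh (~13.3 ms per frame).
#include <cstdio>

int main() {
    const double samplePeriodMs = 1000.0 / 250.0;  // 4 ms per gaze sample
    const double framePeriodMs  = 1000.0 / 75.0;   // ~13.3 ms per frame

    // Worst case: a sample lands just after a frame starts rendering,
    // so it waits for the next frame, which is then rendered and shown.
    const double worstCaseMs = samplePeriodMs + 2.0 * framePeriodMs;

    printf("Gaze sample period:         %.1f ms\n", samplePeriodMs);
    printf("Display frame period:       %.1f ms\n", framePeriodMs);
    printf("Worst-case gaze-to-photon: ~%.0f ms\n", worstCaseMs);
    return 0;
}
```

Roughly 30ms worst case is, plausibly, fast enough to hide behind saccadic suppression, the brief window during which the visual system discards input while the eye is in rapid motion, which would help explain why the shifting zones went unnoticed.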

The one thing I could not gauge is, of course, the very thing this technology is designed to deliver: how much performance is gained when engaging foveated rendering versus rendering 100% of the pixels at full fidelity. That will have to wait for another time, but it can't be ignored of course, so I wanted to be clear on that point.

So, much to my surprise, foveated rendering looks to already be within the grasp of second generation headsets. Christian told me that they're discussing implementations with hardware vendors right now. It does seem clear that, if second generation VR headsets are ever to reach resolutions at which display artefacts become imperceptible, eye tracking is a must. SMI seem to have a solution that works right now, which puts them in a strong position as R&D ramps up for VR's next generation in a couple of years.
