
Image courtesy Oculus

Oculus Brings More Lifelike Avatars to Rift & Go in ‘Expressive Avatars’ Update

Oculus is today launching their ‘Expressive Avatars’ update on Rift and mobile VR; it’s a significant step up in realism thanks to a few additions, including cleverly simulated eye models, lipsync, and even micro-expressions. If you’re also hurting for more hairstyles, clothes, and accessories, you might want to pop into your headset at some point today to check it all out once the update is live.

First unveiled at OC5 last year, Oculus’ avatar overhaul gets its public release today, which also includes an update to the Avatar Editor on PC and mobile. The new editor includes a range of new customization options such as lipstick, eye color, brow and lash colors, and new hair, clothing, and eyewear options.

Just like Oculus’ previous avatar system, the update can also be supported by third-party apps and games. Oculus tells us that over the course of the next few days, games such as Poker Stars VR, Tribe XR DJ, and Epic Rollercoaster will all add support. While not stated specifically, it’s clear the company is hoping to appeal to more third-party developers with ‘Expressive Avatars’, as many games on the platform make use of their own avatar systems.

Oculus is set to publish their blog post officially announcing Expressive Avatars at some point today.

Express Yourself

Oculus released the first version of Oculus Avatars in 2016, and while the company has since given users the chance to customize their persistent digital likenesses with a variety of textures, clothing, and hairstyles, without eye or face tracking avatars were essentially inarticulate masks, leaving users to rely on motion controls and body language to transmit emotion.

Oculus previously used eyewear to avoid off-putting stares, Image courtesy Oculus

That’s because no Oculus device actually features face or eye tracking, which would naturally give avatars a more direct avenue for 1:1 user expression. And with the impending release of Oculus Quest and Rift S, that’s still going to be the case, as neither headset offers either. Hardware notwithstanding, Oculus has been chipping away at just how much they can convincingly simulate without that hardware: realistic-looking eye movement, blinking, facial movements, lipsync, all of it in the name of making avatars more human.

“We’ve made a big step forward with this update,” says Oculus Avatars product manager Mike Howard. “Bringing together expertise in art, machine learning and behavioural modeling, the Oculus Avatar team developed algorithms to accurately simulate how people talk to, and look at, objects and other people—all without any cameras to track eye or face movement. The Oculus Avatar team were able to codify these models of behavior for VR, and then had the fun job of tuning them to make interactions feel more lifelike.”

Keeping It Real

Oculus’ Mike Howard penned a deep-dive article on the past, present, and future of Oculus Avatars, which tells us a little more about the challenges the company faced in making avatars more realistic with its current hardware in mind (limited by the lack of on-board biometric tracking and the constraints of users’ computers and mobile headsets) while steering clear of the uncanny valley.

That’s something you can’t afford to brush up against if you want users to invest time both in creating their digital likenesses and in interacting with others, Howard maintains.

“In VR, when you see an avatar moving in a realistic and very human way, your mind begins to analyze it, and you see what’s wrong. You could almost think of this as an evolutionary defense mechanism. We should be wary of things that move like humans but don’t behave like us,” he says.

Making an avatar that simply moves its mouth when you talk and blinks at regular intervals wasn’t enough for Oculus. The system needed to be tailor-made to infer when a user might conceivably blink, and to make the best possible guess at how a user’s mouth should move when forming words. That last part is a particularly tough problem, as humans move their mouths before, during, and after producing a word, putting a hard ceiling on the accuracy of any prediction. More on that in a bit.

As for eyeballs, the team realized that VR headset users typically only move their eyes about 10 degrees off-center, turning their head the rest of the way to look at any given object. That made it “easier to predict where someone was looking based on head direction and the objects or people in front of them in a virtual environment, giving us more confidence in being able to simulate compelling eye behaviors,” Howard maintains.
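
To make the idea concrete, here’s a minimal sketch of head-direction-based gaze inference under that roughly 10-degree assumption. This isn’t Oculus’ actual algorithm; the function name, threshold, and scene representation are invented for illustration.

```python
# A minimal sketch (not Oculus' actual algorithm) of inferring a gaze target
# from head orientation alone: pick the scene object closest to the head's
# forward direction, falling back to "straight ahead" when nothing is nearby.
import numpy as np

EYE_OFFSET_LIMIT_DEG = 10.0  # eyes rarely stray more than ~10 degrees off-center

def infer_gaze_target(head_pos, head_forward, object_positions):
    """Return a world-space point the avatar's eyes should aim at."""
    head_forward = head_forward / np.linalg.norm(head_forward)
    best_target, best_angle = None, float("inf")
    for obj in object_positions:
        to_obj = obj - head_pos
        to_obj = to_obj / np.linalg.norm(to_obj)
        angle = np.degrees(np.arccos(np.clip(head_forward @ to_obj, -1.0, 1.0)))
        if angle < EYE_OFFSET_LIMIT_DEG and angle < best_angle:
            best_target, best_angle = obj, angle
    # No plausible object in view: look where the head points.
    return best_target if best_target is not None else head_pos + head_forward

# Example: a face slightly left of center wins over one far off to the side.
head = np.array([0.0, 1.6, 0.0])
forward = np.array([0.0, 0.0, 1.0])
faces = [np.array([0.2, 1.6, 2.0]), np.array([1.5, 1.6, 2.0])]
print(infer_gaze_target(head, forward, faces))
```

In practice the target would presumably be re-evaluated continuously and biased toward faces and hands, but even this crude version suggests why head pose plus knowledge of the virtual scene goes a long way.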

The system is said to simulate blinking and a host of eye kinematics such as gaze shifting, saccades (rapid eye rotations, usually during a change of focus), micro-saccades, and smooth pursuit of moving objects.
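
Here’s a toy illustration of how those behaviors might be blended in a simulation loop: fast saccades for large target jumps, slow smooth pursuit for small ones, and randomized blink timing. It’s a sketch of the general technique, not the Avatars SDK; the speeds and intervals are assumed values.

```python
# A toy eye-behavior simulator: saccade toward a new target, smoothly pursue a
# slowly moving one, and blink at jittered intervals. All constants are guesses.
import random

SACCADE_THRESHOLD_DEG = 2.0   # large target jumps trigger a fast saccade
SACCADE_SPEED_DEG_S = 500.0   # saccades are very fast rotations
PURSUIT_SPEED_DEG_S = 30.0    # smooth pursuit is slow and continuous

class SimulatedEye:
    def __init__(self):
        self.angle = 0.0                        # 1-D gaze angle for simplicity
        self.next_blink = random.uniform(2, 6)  # seconds until the next blink

    def update(self, target_angle, dt):
        error = target_angle - self.angle
        fast = abs(error) > SACCADE_THRESHOLD_DEG
        speed = SACCADE_SPEED_DEG_S if fast else PURSUIT_SPEED_DEG_S
        # Move toward the target, but never overshoot it this frame.
        self.angle += max(-speed * dt, min(speed * dt, error))

        self.next_blink -= dt
        blinking = self.next_blink <= 0
        if blinking:
            self.next_blink = random.uniform(2, 6)  # re-arm with a jittered interval
        return self.angle, blinking
```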

And for simulated lip movement, the team discovered they could model the intermediate mouth shapes between one sound and the next by controlling how quickly each individual (virtual) mouth muscle is allowed to move, an approach the team dubs ‘differential interpolation’.
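
Interpreted loosely, that might look something like the sketch below: blend from the current mouth pose toward the next viseme while capping how fast each virtual muscle channel can move, so different parts of the mouth reach the target shape at different rates. The channel names and speed limits are invented for the example, not taken from Oculus’ implementation.

```python
# A rough sketch of per-muscle speed-limited blending between mouth shapes.
MAX_SPEED = {"jaw_open": 6.0, "lip_pucker": 3.0, "lip_corner_pull": 4.0}  # units/sec

def step_mouth(current, target_viseme, dt):
    """Move each blendshape weight toward its target, speed-limited per channel."""
    next_pose = {}
    for channel, value in current.items():
        delta = target_viseme[channel] - value
        limit = MAX_SPEED[channel] * dt
        next_pose[channel] = value + max(-limit, min(limit, delta))
    return next_pose

pose = {"jaw_open": 0.0, "lip_pucker": 0.0, "lip_corner_pull": 0.0}
viseme_oo = {"jaw_open": 0.4, "lip_pucker": 1.0, "lip_corner_pull": 0.1}
for _ in range(5):                      # a few 30 ms audio frames
    pose = step_mouth(pose, viseme_oo, dt=0.03)
print(pose)  # the slower pucker channel is still en route while the jaw has arrived
```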

The team has also included micro-expressions to keep faces looking natural during speech and at rest, although they’re clearly staying away from strongly implied emotions like extreme happiness, sadness, or anger. An avatar looking bored or disgusted during a lively chat could cross wires socially.

What Oculus Avatars *won’t* do, Image courtesy Oculus

In the end, Howard makes it clear that more realistic-looking avatars are technically within the reach of current head and hand tracking hardware, although compute power across all platforms puts a hard barrier on the sort of skin and hair that can be simulated. Frankly put: a more detailed skin texture means you also have to model how that skin stretches over the face to look natural, and more detailed skin necessitates equally detailed hair to match.

“Given our learning to date, we determined that we would use a more sculpturally accurate form, but we’d also use texture and shading to pull it back from being too realistic, in order to match the behavioral fidelity that we were increasingly confident we could simulate,” Howard explains. “Our goal was to create something that was human enough that you’d read into the physiological traits and face behaviors we wanted to exemplify, but not so much that you’d fixate on the way that the skin should wrinkle and stretch, or the behavior of hair (which is incredibly difficult to simulate).”

There’s still plenty left to do. Oculus Avatars aren’t seamlessly available in all games on either the Oculus platform or Steam, requiring developers to integrate them on a case-by-case basis. Not to be missed: avatars are still basically floating torsos and hands at the moment. To that end, the company is working on inverse kinematic models to make full-body avatars a possibility.

If you want to read more about the history and possible future of Oculus Avatars, check out Howard’s deep-dive when it goes live later today.

Update (12:15 ET): In a previous version of this article, it was stated that Oculus Avatars aren’t cross-platform; this isn’t accurate. Oculus made a cross-platform option available to developers last year, although integration must be done on a game-by-game basis. Developers can choose to use default Oculus avatars or allow unique Oculus platform user avatars in their game, although it’s far from the seamless integration that the word ‘cross-platform’ might imply. We’ve updated the offending bit to better reflect this distinction.
