Oculus Chief Scientist Predicts the Next 5 Years of VR Technology

Massive improvements in VR hardware and software over just five years will make current headsets seem like "something out of pre-history"


The annual presentation by Michael Abrash, Chief Scientist at Oculus, is always a highlight of Oculus Connect, the company's developer event, offering a forward-thinking and ever-inspirational look at the future of virtual reality. This time, at Oculus Connect 3, he made some bold, specific predictions about the state of VR in five years' time.


First, the visuals, as this is the most critical area for near-term improvement. Current high-end headsets like the Rift and Vive, with their roughly 100 degree field of view and 1080×1200 display panels, work out to around 15 pixels per degree. Humans are capable of seeing a field of view of at least 220 degrees at around 120 pixels per degree (assuming 20/20 vision), Abrash says, and display and optics technologies are far from achieving this (forget 4K or 8K, this is beyond 24K per eye). In five years, he predicts a doubling of the current pixels per degree to 30, with the FOV widening to 140 degrees, using a resolution of around 4000×4000 per eye. In addition, the fixed depth of focus of current headsets should become variable. Widening the FOV beyond 100 degrees and achieving variable focus both require new advancements in displays and optics, but Abrash believes these problems will be solved within five years.
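As a quick back-of-the-envelope check of those figures (a rough sketch only; the exact numbers depend on how per-eye FOV and lens distortion are counted):

```python
# Rough check of the angular-resolution figures from Abrash's talk.
# Approximate only: real headsets lose pixels to lens distortion and
# the two eyes' views overlap, so exact ppd figures vary.

def pixels_per_degree(pixels, fov_deg):
    return pixels / fov_deg

# Abrash's five-year prediction: ~4000 pixels across a ~140 degree FOV.
print(pixels_per_degree(4000, 140))   # ~28.6, i.e. roughly the predicted 30 ppd

# Retinal-quality target: ~220 degrees at ~120 ppd implies this many
# horizontal pixels -- well "beyond 24K".
print(220 * 120)                      # 26,400
```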


Rendering 4000×4000 per eye at 90Hz is an order of magnitude more demanding than the current spec, so for this to be achievable in the next five years, foveated rendering is essential, Abrash says. This is a technique where only the tiny portion of the image that lands on the fovea (the only part of the retina that can see significant detail) is rendered at full quality, with the rest rendered at much lower fidelity and blended in, massively reducing rendering requirements. Estimating the position of the fovea requires "virtually perfect" eye tracking, which Abrash describes as "not a solved problem at all" due to the variability of pupils and eyelids, and the complexity of building a system that works across the full range of eye motion for a broad user base. Because it is so critical, Abrash believes it will be tackled within five years, though he admits it carries the highest risk of all his predictions.
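To make the idea concrete, here is a minimal, hypothetical sketch of foveated compositing. The render function, region sizes and hard cutoff are placeholders of my own, not Oculus's implementation; a real system would blend the seam and drive the gaze point from an eye tracker.

```python
import numpy as np

def render(width, height):
    # Stand-in for a real renderer (hypothetical): returns an RGB image.
    return np.random.rand(height, width, 3).astype(np.float32)

def foveated_frame(full_w, full_h, gaze, fovea_px=256, periphery_scale=4):
    """Render the periphery at reduced resolution, upscale it, then overwrite
    a small gaze-centred region with a full-resolution inset."""
    low = render(full_w // periphery_scale, full_h // periphery_scale)
    frame = np.repeat(np.repeat(low, periphery_scale, axis=0),
                      periphery_scale, axis=1)
    gx, gy = gaze
    x0 = int(np.clip(gx - fovea_px // 2, 0, full_w - fovea_px))
    y0 = int(np.clip(gy - fovea_px // 2, 0, full_h - fovea_px))
    frame[y0:y0 + fovea_px, x0:x0 + fovea_px] = render(fovea_px, fovea_px)
    return frame

frame = foveated_frame(2048, 2048, gaze=(1024, 1024))
print(frame.shape)

# For a 4000x4000 panel with a 512-pixel foveal inset, the pixels actually
# shaded at full cost drop to roughly 8% of the naive total:
print(((4000 // 4) ** 2 + 512 ** 2) / 4000 ** 2)   # ~0.079
```

The last line is the whole point: shading only a small gaze-centred region is what makes the predicted resolutions tractable.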


Next, he moved briefly to audio: personalised head-related transfer functions (HRTFs) will enhance the realism of positional audio. The Rift's current 3D audio solution applies an HRTF in real time based on head tracking, but it is a generic one, shared across all users. HRTFs vary by individual due to the size of the torso and head and the shape of the ears; the creation of personalised HRTFs should significantly improve the audio experience for everyone, Abrash believes. While he didn't go into detail about how this would be achieved (it typically requires an anechoic chamber), he suggested it could be "quick and easy" to generate one in your own home within the next five years. In addition, he expects advancements in the modelling of reflection, diffraction and interference to bring sound propagation to a more realistic level. Accurate audio is arguably even more complex than visual improvement due to the discernible effect of the speed of sound; despite these impressive advancements, real-time virtual sound propagation will likely remain simplified well beyond five years, as it is so computationally expensive, Abrash says.
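For readers unfamiliar with HRTFs, the sketch below shows the basic principle of binaural rendering: convolve a mono source with a left/right impulse-response pair for the source's direction. The impulse responses here are crude, invented placeholders rather than measured or personalised data, and the function names are hypothetical.

```python
import numpy as np
from scipy.signal import fftconvolve

SAMPLE_RATE = 48_000

def load_hrir(azimuth_deg, elevation_deg=0.0):
    """Placeholder for a per-listener HRIR lookup. A personalised HRTF system
    would return measured (or estimated) left/right impulse responses for this
    direction; here we fake a simple interaural time and level difference."""
    n = 256
    left, right = np.zeros(n), np.zeros(n)
    delay = int(abs(azimuth_deg) / 90 * 20)      # crude interaural delay, in samples
    gain = 1.0 - abs(azimuth_deg) / 180 * 0.5    # crude interaural level difference
    if azimuth_deg >= 0:                         # source to the listener's right
        right[0], left[delay] = 1.0, gain
    else:
        left[0], right[delay] = 1.0, gain
    return left, right

def spatialise(mono, azimuth_deg):
    """Binaural rendering: convolve a mono source with the HRIR pair."""
    hrir_l, hrir_r = load_hrir(azimuth_deg)
    return np.stack([fftconvolve(mono, hrir_l),
                     fftconvolve(mono, hrir_r)], axis=1)

# One second of noise placed 60 degrees to the listener's right.
source = np.random.randn(SAMPLE_RATE).astype(np.float32)
stereo = spatialise(source, azimuth_deg=60)
print(stereo.shape)   # (48255, 2)
```

A personalised system would swap the fake load_hrir for per-user measurements (or estimates derived at home, as Abrash suggests); the rendering step itself stays the same.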

On controllers, Abrash believes hand-held motion devices like Oculus Touch could remain the default interaction technology "40 years from now". Ergonomics, functionality and accuracy will no doubt improve over that period, but this style of controller could well become "the mouse of VR." Interestingly, he suggested hand tracking (without the use of any controller or gloves) would become standard within five years, accurate enough to represent precise hand movements in VR; this would be particularly useful for expressive avatars and for simple interactions that don't require holding a Touch-like controller, such as web browsing or launching a movie. I think there's a parallel with smartphones versus consoles and PCs here; touchscreens are great for casual interaction, but nothing beats physical buttons for typing or intense gaming. It makes sense that no matter how good hand tracking becomes, you'll still want to be holding something with physical resistance in many situations.


Addressing more general improvements, Abrash predicts that despite the complexities of eye tracking and wider-FOV displays and optics, headsets will be lighter in five years, with better weight distribution. Plus, he says, they should handle prescription correction more conveniently (perhaps a bonus improvement that comes with variable depth of focus technology). Most significantly, at the high end, VR headsets will become wireless. We've heard many times that existing wireless solutions are simply not up to the task of meeting even the current bandwidth and latency requirements of VR, and Abrash repeated this sentiment, but he believes it can be achieved in five years, assuming foveated rendering is part of the equation.
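The bandwidth side of that argument is easy to illustrate with rough, uncompressed numbers. These are illustrative figures only; real wireless links use compression, and the 10% foveated payload below is an assumed value, not one Abrash gave.

```python
# Rough, uncompressed bandwidth arithmetic for the predicted spec.
# Illustrative only: real links compress, and foveation ratios vary.

width, height = 4000, 4000       # predicted per-eye resolution
eyes, fps, bits_per_pixel = 2, 90, 24

raw_gbps = width * height * eyes * fps * bits_per_pixel / 1e9
print(f"Uncompressed video: ~{raw_gbps:.0f} Gbit/s")            # ~69 Gbit/s

# If only a small gaze-centred region needs full detail (assume ~10% of the
# pixel budget), the load on the wireless link drops accordingly.
print(f"With a ~10% foveated payload: ~{raw_gbps * 0.1:.0f} Gbit/s")
```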

Next he talked about the potential of bringing the real world into the virtual space, something he referred to as "augmented VR"; this would scan your real environment so it can be rendered convincingly in the headset, or place you in another scanned environment. It could serve as the ultimate mixed-reality 'chaperone' system for confidently moving around your real space, picking up real objects, and seeing who just walked in, but it could also make you feel as though you were anywhere on the planet, blurring the line between the real world and VR. While we can already create a believable, high-resolution pre-scanned recreation of many environments (see Realities), doing this in real time in a consumer product has significant hurdles to negotiate, though Abrash believes many of them will be solved in five years. He clarified that augmented VR would be very different from AR glasses (e.g. HoloLens) that use displays to overlay the real world, as augmented VR would allow complete control over every pixel in the scene, allowing for much more precise changes, complete transformations of the entire scene, and anything in between.

The real significance of augmented VR is being able to share any environment with other people, locally or across the world. The VR avatars coming soon to Oculus Home are primitive compared to what Abrash expects to be possible in five years. Even with hand tracking approaching the accuracy of retroreflective-studded gloves in a motion capture studio, and advancements in facial expression capture and reproduction and markerless full-body tracking, the realistic representation of virtual humans is by far the most challenging aspect of VR, Abrash says, because we are so finely tuned to the most subtle changes in expression and body language of the people around us. We can expect a huge number of improvements to the believability of sharing a virtual space with others, but staying on the right side of the uncanny valley will still be the goal in five years, he believes. It could be decades before anyone gets close to feeling they are in the presence of a "true human" in VR.


Finally, Abrash revisited the "dream workspace" he discussed last year, with unlimited whiteboards, monitors, or holographic displays in any size and configuration, instantly switchable depending on the task at hand, for the most productive work environment possible. Add virtual humans, and it becomes an equally powerful group working tool. But for this to be comfortable as an all-day work environment, all of the advancements he covered would be required. For example, the display technology would need to be sharp enough for virtual monitors to replace real monitors, augmented VR would need to reproduce and share the real environment accurately, the FOV would need to be wide enough to see everyone in a meeting at once, and spatial audio would need to be accurate enough to pinpoint who is speaking. Not every aspect of this dream will come true in five years, he says, but Abrash believes we will be well along the path.

The prospect of such a giant step forward in so many areas in such a short space of time is exciting, but can it really happen? And will it all be within one generational leap, or can we expect to be on third-generation consumer headsets by then? Well, Abrash made decent predictions about the specifications of consumer VR headsets almost three years ago, so let’s hope he’s been just as accurate this time.



The trial version of Microsoft’s Monster Truck Madness probably had something to do with it. And certainly the original Super Mario Kart and Gran Turismo. A car nut from an early age, Dominic was always drawn to racing games above all other genres. Now a seasoned driving simulation enthusiast, and former editor of Sim Racer magazine, Dominic has followed virtual reality developments with keen interest, as cockpit-based simulation is a perfect match for the technology. Conditions could hardly be more ideal, a scientist once said. Writing about simulators led him to Road to VR, whose broad coverage of the industry revealed the bigger picture and limitless potential of the medium. Passionate about technology and a lifelong PC gamer, Dominic suffers from the ‘tweak for days’ PC gaming condition, where he plays the same section over and over at every possible combination of visual settings to find the right balance between fidelity and performance. Based within The Fens of Lincolnshire (it’s very flat), Dominic can sometimes be found marvelling at the real world’s ‘draw distance’, wishing virtual technologies would catch up.
  • makeithappen

    A lot of us are getting older and 5 years is a long time to wait. Billionaires – spend that money and make it happen faster!

    • dk

      4 years later …..they r sort of working on it

  • OkinKun

    Left out the slide that showed his prediction for 5 years from now? I could have sworn I remember seeing it… gotta watch it over again…

  • Mike

    “Abrash believes hand-held motion devices like Oculus Touch could remain the default interaction technology 40 years from now”

    There’s no way that could happen. As smart as this guy is, this is obviously wrong. The most natural form of hand tracking is gloves. 40 years is a REALLY long time – by then there will be gloves with force feedback that can make it seem like you’re holding any object, including a controller. I can see a controller being the default for only maybe 10 more years, at the most.

    • Marco –

      Totally right. Technology will move faster and faster in the next 20-30 years, compared to the past 20-30. There’s no way something will ‘remain’ for 40 years.

      • asdf

        Oculus Touch won’t remain relevant for ten years tbh

        • Mike

          Probably not. There are already early “dev kit” vr gloves you can buy right now (though they aren’t compatible with much yet). Even way back in 2009 there were high-quality prototype VR gloves – I tried one out at a research lab I was doing an internship at (though obviously that would have been too expensive for consumers). Even the “Leap Motion” visual hand tracking would be better than controllers for many applications if they could make it work reliably.

      • Mike

        Well, the mouse has been standard for 30 – 40 years and hasn’t changed significantly, but that’s just because it works so well – you can’t do much better for the problem it’s designed to solve. When it comes to VR, hand-held controllers are definitely NOT the ideal solution to realistically simulating hand presence.

        • Get Schwifty!

          I think his statement about “default” is the key here. Oculus is surely aware of all the developments going on regarding gloves, etc., but they probably see simple handheld controllers as the cheap solution that works pretty well, like the mouse you pointed out, with little processing overhead.

          I do think predicting anything 40 years from now is futile though; controllers will probably be the “default” for, say, another decade or so and no more. In 40 years I would expect camera technology and software to be advanced enough that it just “sees” your body and projects it into the scene with no need for controllers or gloves of any kind, unless you have to have haptic feedback; then gloves are it, if not some kind of body suit for full-on VR experiences (emphasis on the “X” lol).

          • 8bit

            Body tracking is not the difficult part. As he demonstrated in the video, we’re already doing very good body and hand tracking. The issue is the tactile feedback that a controller provides; it makes interactions with objects feel much more natural. 40 years is a long time, but he might be right… It’s hard to imagine a better solution than something along the lines of current controllers until you can jack directly into the matrix. They’ll change somewhat, and integrate new features of course, but they’ll keep the same general structure, kind of like how mice have developed over the last few decades.

    • Get Schwifty!

      It’s especially an odd prediction coming from the team that believes camera technology development is the better long term path for VR… Oculus has bet the farm on that approach. I kind of wonder if this statement is taken out of context or not quite relayed the way it was intended.

  • Raphael

    Why is there no mention of flying cars? Anyway… I like the Oculus team overall, and as a Vive user I find Oculus even more compelling because they educate and have a strong PR presence.

  • VR_Evangelist

    I was hoping he was going to mention VRD (Virtual Retina Display). Maybe it’s too early to make that prediction.

  • David Herrington

    So this goes along with what I have been saying for a while now. Foveated Rendering (FR) is a must for the quick advancement of VR.

    That being said… if a CHIEF scientist at Oculus is saying FR will not be ready anytime soon due to problems with eye tracking then that means we truly do have 3 or so years until we can reap the benefits of this tech.

    Which means that the next big gen release of VR will probably be in 3-5 years… from right now. T_T

    Save us HTC/Steam!

  • OgreTactics

    Very interesting, especially on the importance of eye tracking, but his initial predictions are (hopefully) WAY off.

    Headsets started with a 2K resolution, and barely a year later there are 4K virtual headsets slated for release, even mobile ones. Why would it take 5 years to get the screen density to the point where we have two small rectangular 4K screens, one per eye?

    And I do NOT agree with his FOV prediction, which even he stated is more important than resolution: there are ALREADY 120° VHs like the Freefly VR or BoboVR with anamorphic lenses which are easy to produce, the Gamefacelab VH at 140°, the WerealitySky at 150° experimental FOV, or the Vrunion Claire at 170°, NOW, in 2016.

    Also I think he’s completely wrong on Oculus Touch: the controllers are a good surrogate for hands for now, since the software for the Nimble/13th Lab technology they bought wasn’t developed far enough, but I still think, once again, that they are missing the point: nobody asked for 1:1 hand tracking, but simply spatial, touchscreen-like interaction accompanying one physical controller. The idea of a semantic user gesture database (which was published in 2010) that can trigger complex, contextual in-game actions may still be far off, but I’m sorry, Leap Motion was released 4 years ago and is already impressive as such; I don’t believe there’s an excuse for not integrating a minimal form of hand tracking, be it specifically for experimentation, crowd research and development purposes.

    As for haptics, I agree that input in the form of gloves or bracelets is way off, but the same single external-tracking component that can track the environment and hands could well track any surface, third-party controller or keyboard for overlay in the virtual environment. As with hand gestures, nobody asked for ground-breaking leaps in 1:1 tracking or feedback; smart use of already-existing technology is the only transitional way to go.

    The wireless part is spot-on: nobody wants a visual-interface device, which is only supposed to replace the screen, not the machine, tied to a proprietary enclosed mobile device (like the Oculus standalone prototype or Intel Alloy); like with their wearable earbuds, they want to be able to switch from one source to another. I don’t know how far off a dedicated compression-less wireless video streaming processor is, but I suspect options exist, if I recall my research on components well.

    Also I think augmented VR is a smart way to talk about a headset’s ability to track and interact with the outside environment, but I think it’s kind of a mistake to separate AR & VR: they sit on the same conceptual continuum, starting with overlaying objects and interfaces on the real world until it’s completely covered and becomes full VR.

    • Wino

      This guy knows what he is talking about. If your opinion is different from his, it is likely because he has information that you don’t have.
      About the resolution, we did not start with 2K, but with 1K. He means 4K per eye, so not 3840×2160 but 4K×4K per eye. That is almost 2 times more pixels for a single eye than a normal 4K screen. Add in the fact that we need 90fps minimum, and we are looking at 6~8 times more pixels per second than a normal 4K screen. The main problem is not the screen, but how to render that many pixels: we need a near-perfect eye tracker, properly supported by the video card and driver, because there is no video card today (or in the next 5 years) able to render that many pixels in any game.

      • OgreTactics

        He might have information that I don’t, but I too have information that he doesn’t, since I work in a VR lab, and you obviously don’t have either, since you don’t understand that we will not brute-force raw specs for a long time; we will supersample them instead.

        4K×4K is already possible with VR-dedicated SLI on high-end graphics cards, even more so if you supersample. 90fps, as well, can be done with reprojection. In fact the whole game is about stopping this spec-chasing bullshit and actually starting to innovate and optimise.

        Another example, hand tracking: nobody ever said we have to have 1:1 hand tracking from the get-go. But the only way to get there is to actually take a step back and implement hand tracking the way it works and makes sense for now: with touch-like gesture movements, which work very well, and, for all purposes, a semantic gesture library that can translate amalgamated movements into general actions.

        None of these things have been implemented, yet this is the first thing EVEN CHIMPANZEES try to do when putting VR on, because a VR headset without hand tracking is like a TV without any buttons or remote, or a computer without a keyboard/mouse/touchscreen.

        And all of this comes from this dumb general talk about VR and raw specs, and bullshit claims of technological limitations, because people haven’t done their homework on what already exists or, more importantly if it’s the job they’re paid for, on what should exist. So please stop this kool-aid, self-limiting attitude of saying “this is not possible yet, there is no such thing today…” just because you don’t know what you are talking about. I have such a capable computer right next to me in my office, as I said, a GTX 1080 SLI setup; it’s expensive, but the point is it exists, and anybody with the budget can buy a 4K×4K-capable machine, easily optimised with supersampling.

      • Dr. Krzysztof Pietroszek

        Everything Augure says is right. Abrash is talking “consumer-level”, not “possible”, thus the difference. In research things are possible now that will be fully commercialized in 5-10 years. Just to add to Augure, there is an upcoming wireless VR with enough bandwidth: https://techcrunch.com/2016/11/14/mits-new-movr-system-makes-wireless-vr-possible-with-any-headset/

  • Suitch

    He didn’t say 40 years, he said “for years”. He just paused because he ran out of air in the middle.

    • BL

      He definitely said forty years. He would have said “Years from now” if that’s what he meant. “For years from now” is incorrect English and doesn’t make sense. He was comparing motion controllers to the mouse, which had an even longer lifespan as a primary input device.

  • Mo Nilforoushan

    VR and AR are among today’s tech trends, and the faster businesses understand the benefits for themselves, the faster these technologies will spread. There are other trends evolving at the same time: https://teamlogicitplanotx.com/information-technology-in-dallas-trends/

  • Realist333

    Not sure where, how or why the engineers head down a certain market path. The Rift started as a PC peripheral and, with their latest release, has gone down a path leading to an Xbox 360/PS3 being worn on the head. It just seems backwards. We now have the RTX 2080 Ti, and the current headsets don’t support SLI. What could be done with two 2080 Tis, one for each eye, in regard to resolution and FOV? I would pay for two to support this, depending on the results. Maybe we need the 3080 Ti or 4080 Ti to have no SDE. Once I have been in VR for a while, I don’t really find the FOV as limiting as the resolution. I just don’t understand why we haven’t seen an upgraded PC headset from Oculus instead of what is being rolled out.

  • Realist333

    Meant to add on to the controller discussion as well. An audio designer/engineer was discussing the limitations of digital music reproduction compared to analog reproduction and pointed out that humans are analog; we don’t hear, see or feel in digital. We need to feel switches, dials, buttons, throttles, wheels and joysticks. I would love to have a set of gloves that could flip a switch, turn a knob or press a button. My phone has haptic feedback, so why not gloves?

  • MarkD

    Interesting article. I think a lot can change in 5 years, and the Oculus Rift virtual reality headset gets better every year. I like to play VR games and often play at a virtual reality centre – https://virivr.com.au/. A few months ago I bought the Oculus Rift headset for my PC, and it lets me play quality VR games. But the Rift needs to be connected to a PC, which greatly limits movement; that’s probably its only drawback. I hope future headsets will become more autonomous.

  • sdrawkcab

    4 years later and the resolution is still not that great. Wonder how many more years till we get VR with 4K TV clarity. Another 5 years from now?