DeepFocus is Facebook’s AI-driven renderer, which is said to produce natural-looking blur in real time, something poised to go hand-in-hand with the varifocal displays of tomorrow. Today, Facebook announced that DeepFocus is going open source; while the company’s wide field of view (FOV) prototype ‘Half Dome’ may be proprietary, its deep learning tool will be “hardware agnostic.”

When you hold up your hand in front of you, your eyes naturally converge and accommodate, bringing your hand into focus. The experience isn’t the same in today’s VR headsets, however, since the light comes from a fixed focal distance, sending your eyes into overdrive as they try to resolve near-field images. This is where varifocal displays and eye tracking come in: the once-fixed focal length becomes variable, matching wherever you’re looking at any given moment.
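
For a rough sense of how the varifocal side might work, here is a minimal sketch (all names are hypothetical illustrations, not from Facebook’s release): sample the scene’s depth buffer around the tracked gaze point, then ease the display’s focal power toward that depth.

    import numpy as np

    def fixation_depth(depth_buffer, gaze_px, window=5):
        # Median depth in a small window around the tracked gaze point;
        # the median rejects stray samples at object edges.
        h, w = depth_buffer.shape
        x, y = gaze_px
        r = window // 2
        patch = depth_buffer[max(0, y - r):min(h, y + r + 1),
                             max(0, x - r):min(w, x + r + 1)]
        return float(np.median(patch))

    def step_focal_power(current_diopters, target_m, gain=0.3):
        # Ease the varifocal element toward the fixated depth; working in
        # diopters (1/m) matches how eye accommodation scales.
        target_diopters = 1.0 / max(target_m, 0.1)
        return current_diopters + gain * (target_diopters - current_diopters)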

Essentially, it will let you focus on objects regardless of their distance from you, making the overall experience more comfortable and immersive. But the missing piece of the puzzle is the headset’s ability to also replicate natural-looking defocus blur, the effect you see when you focus on your hand and the background goes fuzzy. Enter DeepFocus.
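
For intuition about defocus blur itself, a thin-lens approximation says the blur circle grows with pupil size and with the dioptric gap between the focus distance and the object’s distance. This toy calculation is my own illustration, not Facebook’s method:

    def blur_circle(object_m, focus_m, pupil_mm=4.0):
        # Retinal blur scales with pupil aperture times defocus in diopters.
        defocus_diopters = abs(1.0 / object_m - 1.0 / focus_m)
        return pupil_mm * defocus_diopters  # relative units

    # Focused on your hand at 0.5 m: a wall 10 m away blurs heavily,
    # while an object at 0.6 m stays nearly sharp.
    print(blur_circle(object_m=10.0, focus_m=0.5))  # ~7.6
    print(blur_circle(object_m=0.6, focus_m=0.5))   # ~1.3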

In a research paper presented at SIGGRAPH Asia 2018, the company says DeepFocus is inspired by “increasing evidence of the important role retinal defocus blur plays in driving accommodative responses, as well as the perception of depth and physical realism.”

Unlike more traditional AI systems used for deep learning-based image analysis, DeepFocus is said to process visuals while maintaining the ultrasharp image resolutions necessary for high-quality VR.
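
One way to keep full resolution through a network without the usual cost is a space-to-depth “interleaving” step (the DeepFocus paper describes a volume-preserving interleaving in this spirit): pixel neighborhoods are folded into channels rather than thrown away by downsampling. A minimal sketch of that rearrangement, my own paraphrase of the idea:

    import numpy as np

    def interleave(img, block=2):
        # Fold each (block x block) pixel neighborhood into channels:
        # H and W shrink, but no pixel information is discarded.
        h, w, c = img.shape
        x = img.reshape(h // block, block, w // block, block, c)
        x = x.transpose(0, 2, 1, 3, 4)
        return x.reshape(h // block, w // block, block * block * c)

    frame = np.zeros((1080, 1920, 4))  # RGB-D input
    print(interleave(frame).shape)     # (540, 960, 16)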

That means not only will things look more realistic in varifocal VR headsets, and even in AR headsets with light-field displays, but these systems should also mitigate the eyestrain associated with vergence-accommodation conflict.

“This network is demonstrated to accurately synthesize defocus blur, focal stacks, multilayer decompositions, and multiview imagery using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur with a broad set of accommodation-supporting HMDs,” Facebook researchers say.
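
Of the outputs listed in that quote, a “focal stack” is simply the same RGB-D frame rendered with defocus blur at several candidate focus distances. A toy sketch under that reading (the network’s per-pixel blur is replaced by a crude global Gaussian here, so this illustrates the interface, not the DeepFocus model):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def focal_stack(rgb, depth, focus_distances_m, pupil_mm=4.0):
        # One blurred rendering per candidate focus distance. A real system
        # blurs per pixel by depth; one global sigma keeps the sketch short.
        stack = []
        for f in focus_distances_m:
            defocus = np.abs(1.0 / np.maximum(depth, 0.1) - 1.0 / f)  # diopters
            sigma = float(pupil_mm * defocus.mean())  # crude global blur radius
            stack.append(gaussian_filter(rgb, sigma=(sigma, sigma, 0)))
        return stack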

Facebook will be publishing both the source and neural net training data today “for engineers developing new VR systems, vision scientists, and other researchers studying perception,” the company says in a blog post.

Introducing DeepFocus at Oculus Connect 5, Facebook Reality Labs’ chief scientist Michael Abrash said that Half Dome and DeepFocus are essentially “just the start for optics and displays, which is the poster child for how progress is accelerating.”

  • Xron

    Hope it helps other companies to try and go for varifocal viewing as well…

  • Weston

    This is gonna make VR game Twitch streaming and YouTube Let’s Plays look a lot better and give an idea of what the gamer is looking at.

    • Firestorm185

      That’s very true! Never thought about that, but having everything outside of the player’s viewpoint blurred is going to make it much easier to understand what is going on through a pancake display.

  • GigaSora

    how hard could this be?

    if (infocus)
    dontblur();
    else
    blur();

    I don’t get the fuss.

    • Weston

      Yeah that’s probably the full extent of the tech, the same way the full extent of your web browser is
      if (linkClicked)
      goToDifferentWebsite();
      else
      continueWatchingPorn();

    • Mradr

      Well at a higher level, sure – that is the idea of OOP – but try diving into that and tell me it is easy :)

    • jj

      that’s probably the most basic form of how it works theoretically, but it’s probably a lot more difficult, especially if it’s all behind the scenes and built in

    • jj

      because you don’t understand the technology. you obviously know something about coding, but for you to make such an overgeneralization kinda shows your ignorance of the topic.

      • GigaSora

        Hey… I’m a coding genius. Only a prodigy could take something so complicated and boil it down to the bare essentials. The team was just over-complicating it.

        • Felix

          Wrong, you’re missing that it’s part of a varifocal system. 3D data is generated as well to inform the display movement, so there is more than just a synthetic depth of focus.

          • GigaSora

            That’s fair. This then.

            Data3D data = Generate3DData();
            if (infocus)
                dontblur();
            else
                blur();
            MoveDisplay(data);

          • Felix

            Nope, your code is still wrong; again, it doesn’t include the varifocal portion. Eye tracking, display position, non-orthogonal mapping, etc. need specific coordinates, gaze data, and so on.

          • GigaSora

            That’s all encapsulated in the data variable. Learn some code man. You’ll like it. It seems like you could be okay at it.

          • Mikael Korpinen

            yes but how about the low-level stuff? That doesn’t exist, right? You are giving very high-level stuff, but everyone knows the full work is under the hood in the engine

          • domahman

            Bro!
            Infinite loop {GenerateImage=GetEyePosition();}

            Done! Of course, you have to feed the eye position to the render engine to detect object collisions in the path of the eye.

  • oompah

    kaballah magic
    hope they dont play u down on their
    pentagram
    & harvest ur soul
    Hotel California
    what a lovely place
    VR

  • Mradr

    Neat tech – but it sounds like this stuff is 5+ years out, sadly.

  • Albert Hartman

    Vergence is important for sensing 3D. Accommodation also – for young eyes. Accommodation disappears as you get older. And everyone you see wearing glasses has no accommodation – otherwise they wouldn’t need glasses. This is not an important feature.

    • jj

      well you can’t try to focus on things that are blurry…. so making the things you’re looking at in the distance higher res, so that you can trick your brain into focusing on them, is better than nothing, and it optimizes the performance of the game.

  • Neil

    Looks like Oculus ripped off this startup I found a couple years ago called DeepSee. They were developing this long before Oculus. https://angel.co/deepseeinc/jobs

  • Felix

    Looks like Oculus ripped off DeepSee Inc’s technology; they were doing this in 2016.

  • People are wondering if this means that Half Dome has been abandoned.

  • wotever99ninynine

    awesome tech. but i don’t think it’s that important. fixing the physical focus by moving the screens so things close to the face are finally sharp is important. wide fov is important. higher res displays are important. good functional reprojection is important. eye tracking and foveated rendering are reasonably important. blurring what you are not looking at is not that important.. you will naturally blur it anyway. and realistic blur = even less important. i believe the effect will be amazing. but there are more important and less costly things to focus (pun not intended) on right now. this tech is years away anyway. i’m still looking forward to seeing all this come together and what the end result will be though :)

  • Mradr

    I wonder when they will release this technology? It sounds 5+ years out. They have a lot of neat things around the bin – but I do wonder how much of that technology is going to make it into the next release, as adding new things only brings up the cost. I’d be OK with the cost – but I am only a small segment of the larger market that wants it at a lower price.

    This technology seems like it’ll be in the CV3 or, depending on deployment, the SC2.
