In his latest presentation at Oculus Connect 5, Oculus Chief Scientist Michael Abrash took a fresh look at the five-year VR technology predictions he made at OC3 in 2016. He believes his often-referenced key predictions are “pretty much on track,” albeit delayed by about a year, and that he “underestimated in some areas.”

Optics & Displays

[caption id="attachment_82690" align="aligncenter" width="640"] Image courtesy Oculus[/caption]

Revisiting each area of technology in turn, Abrash began with optics and displays. His predictions for headset capabilities in the year 2021 were 4K × 4K resolution per-eye, a 140 degree field of view, and variable depth of focus. “This is an area where I clearly undershot,” he said, noting that Oculus’ own Half Dome prototype shown earlier this year had already met two of these specifications (140 degree FOV and variable focus), and that display panels matching the predicted resolution have already been shown publicly.

[irp posts="78485" name="Google & LG Detail Next-Gen 1,443 PPI OLED VR Display, 'especially ideal for standalone AR/VR'"]

Abrash highlighted the rapidly progressing area of research around varifocal displays, saying that Oculus had made “significant progress in solving the problem” with DeepFocus, an AI-driven renderer that can achieve “natural, gaze-contingent blur in real time,” and that the team would publish its findings in the coming months.

[caption id="attachment_82683" align="aligncenter" width="640"] Image courtesy Oculus[/caption]

Beyond Half Dome, Abrash briefly mentioned two potential solutions for future optics: pancake lenses and waveguides. Like Fresnel lenses, the pancake lens isn’t a new innovation, but it is “only now becoming truly practical.”
By using polarization-based reflection to fold the optical path into a small space, pancake lenses have the potential to reach retinal resolution and a 200 degree field of view, Abrash says, but there is a tradeoff between form factor and field of view. Because of the way pancake lenses work, “you can get either a very wide field of view or a compact headset [...] but not both at the same time,” he said.

[caption id="attachment_82684" align="aligncenter" width="640"] Image courtesy Oculus[/caption]

Waveguides, a technology being accelerated by AR research and development, theoretically have no resolution or field of view limitations and are only a few millimetres thick. They could eventually yield an incredibly lightweight headset at any desired field of view and at retinal resolution, though that is still many years away.

Foveated Rendering

Moving on to graphics, Abrash’s key prediction in 2016 was that foveated rendering would be a core technology within five years. He extended that prediction by a year, saying he now expects it within four years from now, and added that the rendering approach will likely be enhanced by deep learning. He showed an image with 95% of its pixels removed, the density of the remaining pixels thinning out with distance from the point of focus. The rest of the image was reconstructed efficiently by a deep learning algorithm, and the result was impressively similar to the original full-resolution version, ostensibly close enough to fool your peripheral vision.

Foveated rendering ties closely to eye tracking, the technology Abrash considered the riskiest of his predictions in 2016. Today he is much more confident that solid eye tracking will be achieved (Half Dome is already part of the way there), but this prediction was also extended by a year.
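To make the sampling idea concrete, here is a minimal sketch of gaze-contingent sparse sampling, where the probability of keeping a pixel decays with distance from the gaze point. This is not Oculus' method; the exponential falloff and all parameter values are assumptions chosen purely for illustration.

```python
import numpy as np

def foveated_mask(height, width, gaze, falloff=0.05, seed=0):
    """Return a boolean mask that keeps pixels densely near the gaze
    point and sparsely in the periphery (illustrative falloff only)."""
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(ys - gaze[0], xs - gaze[1])
    # Keep-probability is 1.0 at the gaze point and decays with eccentricity.
    keep_prob = np.exp(-falloff * dist)
    rng = np.random.default_rng(seed)
    return rng.random((height, width)) < keep_prob

mask = foveated_mask(480, 640, gaze=(240, 320))
sparsity = mask.mean()  # fraction of pixels retained overall
```

With a falloff like this, only a few percent of pixels survive overall, which is the regime Abrash described; a learned reconstruction network would then fill in the discarded pixels, counting on the eye's poor peripheral acuity to hide the approximation.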
Spatial Audio

Spatial audio was the next topic, and Abrash conceded that his prediction of personalised Head-Related Transfer Functions (the acoustic filtering produced by the unique geometry of each person's ears, which shapes how they perceive the soundfield around them) becoming a standard part of the home VR setup within five years might also need to be extended, but he described how a recent demo experience convinced him that “audio Presence is a real thing.” The technology clearly already works, but the personalised HRTF used for that demonstration required a 30-minute ear scan followed by “a lengthy simulation,” so it’s not yet suitable for a consumer-grade product.

Controllers & Input

[caption id="attachment_82685" align="aligncenter" width="640"] Image courtesy Oculus[/caption]

Regarding controllers, Abrash stood by his prediction that Touch-like controllers will remain the primary input device in the near future (alongside hand tracking). After running a short clip of one of Oculus’ haptic glove experiments, he revised his previous opinion that haptic feedback for hands wasn’t even on the distant horizon, saying that “we’ll have useful haptic hands in some form within ten years.”

[irp posts="71348" name="Hands-on: HaptX Glove Delivers Impressively Detailed Micro-pneumatic Haptics, Force Feedback"]

Ergonomics & Form Factor

[caption id="attachment_82682" align="aligncenter" width="640"] Abrash presented this sleek concept as a plausible form factor for an AR/VR headset once waveguide optics are mastered. | Image courtesy Oculus[/caption]

On the subject of ergonomics, Abrash pointed to the increasingly significant technology overlap between VR and AR research, noting that future VR headsets will not only be wireless, but could be made much lighter by adopting the two-part architecture already seen on some AR devices, where heavy components such as the battery and compute hardware are placed in a puck that goes in your pocket or on your waist.
He said this companion device could also link wirelessly to the headset for complete freedom of motion. Still, optical limitations remain the main bottleneck keeping VR headsets from approaching a ski-goggle-like design, though advances in pancake and waveguide optics could make for significantly more slender headsets.

[irp posts="79183" name="Toward Truly Glasses-sized AR: First Look at DigiLens' AR HUD Reference Headset"]

Computer Vision

[caption id="attachment_82689" align="alignright" width="325"] Michael Abrash speaks on stage at Oculus Connect 5. | Image courtesy Oculus[/caption]

Abrash went on to describe the computer vision technologies that VR and AR devices will share in the future, as both types of hardware will be capable of rendering convincing mixed reality, in which real-world objects and environments are blended with virtual ones. The two approaches to this challenge differ: future VR headsets will scan and render the real world inside the virtual environment in real time, Abrash says (he believes this will be workable within four years), whereas AR glasses overlay a rendered layer onto the real world seen through a transparent display. While he thinks VR will be the “best [form of] mixed reality for a long time,” he also expects the two technologies to eventually converge on the same underlying interface, developer environment, and tools, with apps and environments working seamlessly across both types of headset.

Virtual Humans

[caption id="attachment_82686" align="aligncenter" width="640"] One of these is real and the other is a computer recreation. Can you tell which? | Image courtesy Oculus[/caption]

Finally, he spoke about convincing virtual humans, and the uncanny valley that, until recently, seemed almost impossible to overcome.
Abrash highlighted a machine learning-based approach called Codec Avatars, running real footage of a talking head next to a remarkably accurate reconstruction. While the work is still in its early stages, he believes it could revolutionise virtual communication and collaboration. “I’m not betting on having convincingly human avatars within four years,” he said, “but I’m no longer betting against it either.”

[irp posts="66675" name="Researchers Showcase Impressive New Bar for Real-time Digital Human Rendering in VR"]

"Four years from now, VR is going to jump to the next level," he concluded. "And that’s just the start. Every area will continue to improve, and virtual humans and likely haptic hands will be along before too long. In short, as far as I can see—and I can see pretty far—the future of VR couldn’t be brighter."

While Abrash spent his time talking about underlying technologies and capabilities, we recently explored 10 specific projects that have us excited about the future of AR and VR.