
Intel’s Daniel Pohl speaks to Road to VR about Intel’s research into improving the user experience for the new generation of virtual reality HMDs.

Intel Inside [Your HMD]

Daniel Pohl

Although Intel’s chipset solutions have placed greater and greater emphasis on on-board GPU functionality, you wouldn’t normally associate the company with cutting-edge advancement in the field. That said, Intel is renowned for its heavy emphasis on R&D, not all of it directly related to ‘here and now’ use cases. That is, they’re investing in the future of unproven technologies.

Recently, a little-known group within Intel carrying out just this sort of research released a paper detailing what they’d been working on. Daniel Pohl, a research scientist based at the Intel Visual Computing Institute in Saarbrücken, Germany, has been investigating the cutting-edge field of computer graphics for many years. His work includes ground-breaking research on ray tracing, culminating in 2004 with a fully ray-traced version of Quake 3. Recently, he’s been concentrating on improving the image processing techniques used to deal with aberrations caused by the lenses in VR headsets like the Oculus Rift.

The technique Daniel is pioneering claims to provide a better method of compensating for the image quality loss incurred when images are pre-warped to cancel aberrations introduced by these lenses. In the case of the aspheric lenses used in the Oculus Rift, the image is treated with a barrel distortion filter to cancel out the pincushion effect the lenses introduce (see diagram below):

Pincushion Distortion [Left] and Barrel Distortion [Right]
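To make the pre-warp concrete, here is a minimal sketch of the kind of radial mapping involved: a display coordinate is pushed outward by a polynomial in its squared distance from the lens centre, producing the barrel shape above. The coefficients k1 and k2 are illustrative placeholders, not the Rift’s actual calibration values.

```python
def barrel_distort(u, v, k1=0.22, k2=0.24):
    """Map a normalized display coordinate (centred at 0,0) to the
    source-image coordinate to sample. k1/k2 are illustrative only."""
    r2 = u * u + v * v                    # squared distance from lens centre
    scale = 1.0 + k1 * r2 + k2 * r2 * r2  # grows with radius: barrel warp
    return u * scale, v * scale

# A point near the edge samples further out in the source image, so the
# lens' pincushion pull-in cancels against the software push-out.
print(barrel_distort(0.9, 0.0))
```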


With modern GPUs, this method is fairly cheap in terms of compute cycles, but the most popular implementation (a bilinear texture lookup) introduces other, unwanted artefacts as a result. The imprecision of the method means that colours can be smeared, leading to a less vibrant image and potentially a loss of image detail too. A post on Daniel’s blog puts it well:

 Imagine the case where an image should be displayed in which one pixel is white, the next black, next white etc.; using the bilinear interpolated texture lookup for image warping the result will be some grayish color. More generally spoken high-frequency detail will be lost.
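A tiny sketch makes the effect plain, with plain Python standing in for the GPU’s texture unit:

```python
def bilinear_sample(row, x):
    """1-D bilinear lookup at fractional coordinate x."""
    x0 = int(x)    # texel to the left of the sample point
    t = x - x0     # fractional position between the two texels
    return (1.0 - t) * row[x0] + t * row[x0 + 1]

row = [1.0, 0.0, 1.0, 0.0, 1.0]   # alternating white/black pixels
# Warped lookups rarely land on exact texel centres; at half-texel
# offsets every sample collapses to 0.5, a uniform grey.
print([bilinear_sample(row, x) for x in (0.5, 1.5, 2.5, 3.5)])
```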

The Intel Solution

Daniel proposes two remedies to this issue:

  1. Use Bicubic Texture Lookups: Essentially a drop-in replacement for bilinear filtering, this technique offers greater precision when pre-warping the image, with fewer artefacts than bilinear filtering produces. The compute cost on the GPU is slightly higher, but easily handled by modern cards (see the sketch after this list).
  2. Implement Barrel Distortion Directly Using Object-Space Lookups: This method renders the scene with the required barrel distortion applied directly, with no further post-processing required, in theory eliminating any artefacts introduced by the post-processing approach. It has a higher compute cost and must be introduced earlier in the rendering cycle, and as a result would be more difficult to retrofit into existing solutions, but the resulting image quality is excellent.
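To illustrate the first remedy, here is a minimal sketch of the Catmull-Rom cubic, one common kernel choice for bicubic filtering; the paper does not mandate a specific kernel, so treat this as one plausible instance rather than the authors’ exact formulation:

```python
def bilinear(p1, p2, t):
    return (1.0 - t) * p1 + t * p2

def catmull_rom(p0, p1, p2, p3, t):
    """Catmull-Rom cubic through four neighbouring texels (1-D here;
    the 2-D version applies it separably over a 4x4 neighbourhood)."""
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t ** 2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t ** 3)

# A black-to-white edge: 0, 0, 1, 1. Sampling between the middle texels:
for t in (0.25, 0.75):
    print(t, bilinear(0.0, 1.0, t), catmull_rom(0.0, 0.0, 1.0, 1.0, t))
# bilinear:    0.25 and 0.75  (edge smeared evenly over the interval)
# catmull-rom: ~0.20 and ~0.80 (steeper transition, visibly sharper edge)
```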

Bilinear Filtering vs. Object-Space Correction

Daniel got in touch to discuss the techniques and his work in more detail. We’re happy to present that interview below:

Road to VR: Tell us a little about yourself.

Daniel: I am Daniel, a research scientist at Intel for about 7 years. I am located in Saarbrücken at the Intel Visual Computing Institute. Since I was a child my passion has been gaming (back then on the Sinclair ZX81, followed by the C64 and later the PC) and later also editing existing games or developing new ones. When the first 2.5D shooters came out I was highly impressed by the level of immersion that was achievable. Back then this meant images rendered at 320×200, no texture filtering, etc.; the evolution to today’s games and capabilities has been amazing. When I was studying computer science at the Erlangen-Nürnberg University I was happy to be able to attend graphics courses and learned a lot about 3D programming, which eventually led me to research projects that used a ray tracer for games (e.g. Quake 3: Raytraced in 2004).

Road to VR: Would you consider yourself a VR enthusiast? What got you interested in Virtual Reality?

Daniel: Yes. I remember watching the movie “The Lawnmower Man” and I was impressed by the possibilities that virtual reality could one day enable. However, it wasn’t until the summer of 2012, during GamesCom in Cologne, that I tried out my first HMD, the Oculus Rift prototype. It was running a level from Doom 3: BFG Edition and for the first time I felt like I was really inside the level.

Road to VR: Talk about your work at Intel and detail for us your current project.

Daniel: In the earlier years at Intel our group worked on newer and more advanced approaches to ray tracing in modern games. This was followed by an investigation into cloud gaming, specifically focused on analysing existing latency and bandwidth issues. Recently we decided to also take a deeper look at Virtual Reality.

Road to VR: How did your work on Ray Tracing lead to your work on image enhancement?

Daniel: Last year, when we understood the way the optical distortion correction for the lens is applied in wide-angle HMDs using post-processing pixel shaders, we suspected it could be done in a better way using a ray tracer. A ray tracer makes it easy to change the camera model and to shoot rays differently, in this case in a barrel-distorted fashion. (This could potentially also be done in a custom rasterizer, but not using the common interfaces (DX, OpenGL) that are used these days.) At the beginning of this year we started implementing various methods of distortion compensation. Most of them we tested without using an actual HMD; later, when we got the Rift DK1, we were able to verify that it actually works.
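To illustrate what Daniel describes, here is a minimal sketch of a barrel-distorted camera model for primary rays, assuming a simple pinhole setup; the helper and its k1/k2 coefficients are illustrative placeholders, not taken from the paper:

```python
import math

def primary_ray(px, py, width, height, k1=0.22, k2=0.24):
    """Barrel-distorted primary ray for display pixel (px, py): the
    distortion lives in the camera model itself, so the rendered frame
    needs no warping pass. k1/k2 are illustrative, not Rift values."""
    u = (px + 0.5) / width * 2.0 - 1.0      # normalized x in [-1, 1]
    v = (py + 0.5) / height * 2.0 - 1.0     # normalized y in [-1, 1]
    r2 = u * u + v * v
    scale = 1.0 + k1 * r2 + k2 * r2 * r2    # same polynomial as the pre-warp
    x, y, z = u * scale, v * scale, 1.0     # point on the virtual image plane
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)            # unit ray direction from the eye

print(primary_ray(0, 400, 640, 800))        # edge pixels bend further outward
```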


Road to VR: Give us some detail on the new image processing techniques you’re researching and what they bring to consumer Virtual Reality.

Daniel: Consumer wide-angle HMDs use relatively cheap lenses to bring the screen into focus for the human eye and to enable the wide field of view. The drawback is that there are spatial distortions and chromatic aberrations. Luckily, those can be compensated for in software; in the Oculus SDK, for example, this is done as a post-processing pixel shader. The default implementation warps the image in a barrel-distorted way and uses bilinear texture access to do so. The result is that the processed image gets blurred. Our suggestion here is to use bicubic filtering instead, leading to slightly better image quality. The other, bigger change is to do no post-processing at all, but to render the scene in a way that is already barrel-distorted (object-space correction). Our approach uses a ray tracer that sends out the primary rays in exactly that way. The benefit is a clear improvement in sharpness, which we quantify in our paper “Improved Pre-Warping for Wide Angle, Head Mounted Displays” by Daniel Pohl (Intel), Greg Johnson (Intel) and Timo Bolkart (Saarland University).

Road to VR: Is it true that the processing techniques you outline are more important at lower resolutions, and that they become less important at higher resolutions?

Daniel:

Display resolution:
The blur from image warping will happen independent of the display resolution. Only if the display were showing more detail than the human eye can see would the amount of blur be less noticeable. But we are far, far away from that, even when considering 1920×1080 displays for HMDs.

Rendering resolution:
One way to avoid the heavy blurring during image warping is to oversample the original rendered image. If you render at roughly 2x the resolution and use bicubic filtering, you get, according to some of our metrics, roughly the same amount of detail as the object-space solution at the regular resolution – of course at the cost of rendering twice as many pixels. However, it should be kept in mind that the mathematical equation for the barrel distortion is not linear, while warping the oversampled image back to the display resolution assumes a linear model. Therefore we still get a higher quality image from the object-space solution.
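Daniel’s non-linearity point is easy to verify numerically; reusing the illustrative polynomial from the earlier sketches, the warp of the midpoint between two sample positions is not the average of their warped positions:

```python
def warp_radius(r, k1=0.22, k2=0.24):
    """Radial barrel pre-warp (illustrative coefficients)."""
    return r * (1.0 + k1 * r**2 + k2 * r**4)

a, b = 0.6, 0.8
print(warp_radius((a + b) / 2))               # true midpoint:   ~0.8158
print((warp_radius(a) + warp_radius(b)) / 2)  # linear estimate: ~0.8287
# The gap between the two is the residual error any linear resampling
# of an oversampled frame carries into the final image.
```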


Road to VR: How challenging is your bicubic ‘image-space’ post-processing filter to implement in current engines and projects? What kind of performance overhead does the technique incur?

Daniel: As far as I can judge, this would be very easy to implement. Instead of the current shader using bilinear texture access, one could write a shader that reads out the 16 texels manually and does bicubic filtering in the shader. There are examples on the web of bicubic filtering in a pixel shader. We measured that on high-end GPUs there is only a very small impact going from bilinear to bicubic filtering. Exact numbers can be found in the paper.
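For the curious, here is a CPU-side sketch (in Python rather than shader code) of the 16-texel gather Daniel describes; a real implementation would live in a pixel shader and would also need to handle texture borders:

```python
import numpy as np

def bicubic_sample(tex, x, y):
    """Gather the 4x4 texel neighbourhood around (x, y) and filter it
    separably with a Catmull-Rom kernel: rows first, then the column."""
    def cr(p0, p1, p2, p3, t):
        return 0.5 * (2*p1 + (-p0+p2)*t + (2*p0-5*p1+4*p2-p3)*t*t
                      + (-p0+3*p1-3*p2+p3)*t*t*t)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    rows = [cr(*tex[y0 - 1 + j, x0 - 1:x0 + 3], tx) for j in range(4)]
    return cr(*rows, ty)

tex = np.random.rand(8, 8).astype(np.float32)
print(bicubic_sample(tex, 3.4, 4.7))
```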

Road to VR: You describe the object-space correction method in which the image is rendered in a barrel-distorted way (i.e. no post-processing required). How expensive is this technique?

Daniel: This depends on various factors. If you currently use OpenGL or DirectX, then this would mean switching to a custom-written renderer, and you might need to give up the hardware acceleration that comes with those interfaces.

If you already have a custom-written renderer, then, as we measured on our test system, our method is actually faster AND gives better image quality compared to first rendering the regular, rectangular frame and then warping it with a pixel shader. The reason is that we can skip rendering the black areas which are created by warping the image and are usually not visible in the HMD anyway. Further, we don’t need the post-processing step.
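The saving Daniel mentions is easy to estimate: for each display pixel, apply the (illustrative) distortion polynomial and count how many lookups land outside the rendered frame. Those are the pixels a post-processing warp paints black and an object-space renderer never shades at all:

```python
import numpy as np

W, H, k1, k2 = 640, 800, 0.22, 0.24        # illustrative, per-eye values
ys, xs = np.mgrid[0:H, 0:W]
u = (xs + 0.5) / W * 2.0 - 1.0             # normalized display coordinates
v = (ys + 0.5) / H * 2.0 - 1.0
r2 = u * u + v * v
scale = 1.0 + k1 * r2 + k2 * r2 * r2
# Lookups past the frame edge would sample nothing: black fill after warping.
outside = (np.abs(u * scale) > 1.0) | (np.abs(v * scale) > 1.0)
print(f"{outside.mean():.0%} of display pixels are skippable")
```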

Road to VR: What’s next for you at Intel?

Daniel: During this research we found some other interesting issues happening on HMDs that we want to look at. But it is too early to share details on it.

—

Our thanks to Daniel for answering our questions. If you want to know more, you can find Daniel’s VR blog here, and the full paper on the research here. We look forward to hearing more about Intel’s solutions for improving quality in the Virtual Reality space soon.

If you’d like to read more on this subject, VR Guy delves even deeper in an excellent post here.




Based in the UK, Paul has been immersed in interactive entertainment for the best part of 27 years and has followed advances in gaming with a passionate fervour. His obsession with graphical fidelity over the years has had him branded a ‘graphics whore’ (which he views as the highest compliment) more than once and he holds a particular candle for the dream of the ultimate immersive gaming experience. Having followed and been disappointed by the original VR explosion of the 90s, he then founded RiftVR.com to follow the new and exciting prospect of the rebirth of VR in products like the Oculus Rift. Paul joined forces with Ben to help build the new Road to VR in preparation for what he sees as VR’s coming of age over the next few years.
  • Kevin

    >But we are far, far away from that even when considering 1920×1080 displays for HMDs.
    Yeah, not really. How old is this interview? Has this researcher been out of the loop with the prototype Oculus Rift?

    • Paul James

I think you may have misunderstood what Daniel was saying. He was saying the blur is noticeable even at 1080p (which, by the way, the Rift HD Prototype isn’t – it’s 960×1080 per eye) and beyond, not that 1080p displays were a long way off.

    • Elchtest

      Thanks for your interest. The “far, far away” was regarding the previous sentence.

      > if the display would be showing more detail than the human eye can see, then the amount of blur would be less noticeable. But we are far, far away from that…

  • vrguy

Nice article, Paul. Check out the post “Can the GPU compensate for all Optical Aberrations” for additional info on what a GPU can and cannot do.

    • Paul James

      Great article! Hope you don’t mind but I tagged it onto the end of this one, as it’s clearly of interest to everyone.

  • Mageoftheyear

    Wow wow wow, fascinating interview. Thank you for the coverage and keep on keeping on with that science Daniel! ;)
    What an incredible advent we are on the verge of – the wait is torture!

  • Druss

    This was awesome, hope the VR industry is taking notes! Most games currently in development probably won’t be able to change too much (I don’t know how hard it is to write and implement a new shader), but we all know this is only the beginning and stuff should only get better with time!