Reducing Latency Is Becoming Increasingly Complex
Trends
Presence in VR requires low latency, and reducing latency is not easy. Low latency is also not the result of any single technique. Instead, many methods work together to achieve the desired result. ‘Asynchronous time warp’ modifies the image just before sending it to the display. ‘Predictive tracking’ lowers perceived latency by estimating future head orientation. ‘Direct mode’ bypasses the operating system’s display compositor. ‘Foveated rendering’ reduces render complexity by rendering at full detail only where the eye is looking. ‘Render masking’ skips pixels in areas of the image that the headset optics hide from view.
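To make one of these concrete, here is a minimal sketch of predictive tracking: extrapolate the most recent head orientation forward by the expected motion-to-photon latency, assuming the angular velocity stays constant over that interval. The types and function names are illustrative, not an actual OSVR API.

```cpp
#include <cmath>

struct Quat { double w, x, y, z; };
struct Vec3 { double x, y, z; };

// Rotate orientation q forward by angular velocity omega (rad/s) for
// dt seconds, assuming omega stays constant during that interval.
Quat predictOrientation(const Quat& q, const Vec3& omega, double dt) {
    double speed = std::sqrt(omega.x * omega.x + omega.y * omega.y +
                             omega.z * omega.z);
    double angle = speed * dt;          // total rotation during dt
    if (angle < 1e-9) return q;         // effectively no rotation

    // Quaternion for rotating by 'angle' around the axis omega/speed.
    double s = std::sin(angle / 2.0), c = std::cos(angle / 2.0);
    Quat d{ c, s * omega.x / speed, s * omega.y / speed,
               s * omega.z / speed };

    // Compose: predicted = d * q (Hamilton product).
    return Quat{
        d.w * q.w - d.x * q.x - d.y * q.y - d.z * q.z,
        d.w * q.x + d.x * q.w + d.y * q.z - d.z * q.y,
        d.w * q.y - d.x * q.z + d.y * q.w + d.z * q.x,
        d.w * q.z + d.x * q.y - d.y * q.x + d.z * q.w
    };
}
```

Rendering from the predicted rather than the measured orientation hides much of the pipeline latency, at the cost of small errors when the head accelerates.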
If this sounds complex, it is just the beginning. One needs to measure optical distortion and correct it in real time. Frame rates continue to increase, shrinking the time available to render each frame. Engines can optimize rendering by exploiting similarities between the left- and right-eye images. Techniques that used to be exotic are now becoming mainstream.
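Distortion correction itself is, at its core, a per-pixel remapping. A common approach, shown here only as a sketch with placeholder coefficients, models the lens distortion radially and rescales each image coordinate:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// Sketch of radial distortion correction: remap a normalized image
// coordinate (centered on the lens axis) using a polynomial in r^2.
// k1 and k2 would come from measuring the actual optics; the values
// here are placeholders, not those of any real headset.
Vec2 undistort(Vec2 p, double k1 = 0.22, double k2 = 0.24) {
    double r2 = p.x * p.x + p.y * p.y;
    double scale = 1.0 + k1 * r2 + k2 * r2 * r2;
    return Vec2{ p.x * scale, p.y * scale };
}
```

In practice this transform runs on the GPU for every pixel of every frame, which is why it competes for the same shrinking time budget as the rendering itself.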
Only a handful of companies have the money and people to master all these techniques. Most other organizations prefer to focus on their core competencies. What should they do?
OSVR Implications
A key goal of OSVR is to ‘make hard things easy without making easy things hard’. The OSVR ‘Render Manager’ exemplifies this: it makes these latency-reduction methods available to everyone. We work with graphics vendors to achieve direct mode through their APIs. We work with game engines to provide native integration of OSVR into their code.
I expect the OSVR community to continue tracking the state of the art and improving the code base. Developers using OSVR can leave the plumbing of rendering to the framework and keep their focus on building great experiences.
The Peripherals Are Coming
Trends
A PC is useful with a mouse and keyboard. Likewise, a VR headset is useful with a head tracker. A PC is better when you add a printer, a high-quality microphone, and a scanner. A headset is better with an eye tracker, a hand controller, and a haptic device. VR peripherals increase immersion and bring more senses into play.
In a PC environment, there are many ways to achieve the same task. You select an option with the mouse, with the keyboard, by touching the screen, or even with your voice. In VR, you can do this with a hand gesture, a head nod, or a button press. Applications want to focus on what you want to do rather than how you do it.
More peripherals mean more configurations. If you are in a car racing experience, you’d love to use a rumble chair if you have it. Even though rumble chairs are not commonplace, there are several types of them. Applications need to be able to sense what peripherals are available and make use of them.
Even a fundamental capability like tracking will have many variants. Maybe you have a wireless headset that allows you to roam around. Maybe you sit in front of a desk with limited space. Maybe you have room to reach forward with your hands. Maybe you are on a train and can’t do so. Applications can’t assume just one configuration.
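Here is a sketch of what configuration-aware application code might look like. The capability flags and the fallback chain are hypothetical, purely to show the shape of the problem; they are not an OSVR API.

```cpp
#include <string>

// Hypothetical capability flags an application might query at startup.
struct TrackingConfig {
    bool hasPositionalTracking; // can we track where the head is in space?
    bool hasHandTracking;       // are hand poses available?
    bool roomScale;             // is there space to walk around?
};

// Pick an interaction scheme that matches what is actually available,
// instead of assuming one fixed setup.
std::string chooseLocomotion(const TrackingConfig& cfg) {
    if (cfg.roomScale && cfg.hasPositionalTracking) return "walk";
    if (cfg.hasHandTracking)                        return "point-and-teleport";
    return "gaze-and-click"; // seated, head-tracking-only fallback
}
```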
OSVR Implications
OSVR embeds the Virtual Reality Peripheral Network (VRPN), an established open-source library. Supporting many devices, and focusing on the ‘what’ rather than the ‘how’, is in our DNA.
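For example, an application built on the OSVR ClientKit asks for a semantic path such as /me/head rather than for a specific tracker model; the server maps that path to whatever hardware is present. A minimal sketch, following the published ClientKit C++ examples (details may vary between OSVR versions):

```cpp
#include <osvr/ClientKit/ClientKit.h>
#include <iostream>

// Called whenever a new head pose report arrives, regardless of which
// physical tracker produced it.
void onHeadPose(void* /*userdata*/, const OSVR_TimeValue* /*timestamp*/,
                const OSVR_PoseReport* report) {
    std::cout << "head position: "
              << report->pose.translation.data[0] << ", "
              << report->pose.translation.data[1] << ", "
              << report->pose.translation.data[2] << "\n";
}

int main() {
    osvr::clientkit::ClientContext context("com.example.semanticpaths");

    // The application names the 'what' (my head), not the 'how'
    // (a particular vendor's tracker).
    osvr::clientkit::Interface head = context.getInterface("/me/head");
    head.registerCallback(&onHeadPose, nullptr);

    while (true) {
        context.update(); // pump the OSVR message loop
    }
}
```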
I expect OSVR to continue to improve its support for new devices. We might need to enhance the generic eye tracker interface as eye trackers become more common. We will need to look for common characteristics of haptics devices. We might even be able to standardize how vendors specify optical distortion.
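As one example of what such standardization might look like, a generic haptics interface could start from the smallest common denominator that rumble packs, chairs, and gloves all share. A purely hypothetical sketch; none of these types exist in OSVR today:

```cpp
// Hypothetical common denominator for haptic devices: a simple pulse
// that every device can honor by scaling to its own capabilities.
struct HapticPulse {
    double intensity;   // 0.0 .. 1.0, device maps to its own range
    double durationSec; // how long the effect lasts
};

class HapticDevice {
public:
    virtual ~HapticDevice() = default;
    // Every device honors a basic pulse; richer effects could be
    // negotiated later through capability queries.
    virtual void play(const HapticPulse& pulse) = 0;
};
```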
This is a community effort, not handed down from some elder council in an imperial palace. I would love to see working groups formed to address areas of common interest.
Turning Data into Information
Trends
A stream of XYZ hand coordinates is useful. Knowing that this stream represents a figure-eight is more useful. Smart software can turn data into higher-level information. Augmented reality tools detect objects in video feeds. Eye tracking software converts eye images into gaze direction. Hand tracking software converts hand position into gestures.
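To illustrate this data-to-information step, here is a toy converter that folds a stream of hand X coordinates into discrete ‘swipe’ events. It is entirely hypothetical and far simpler than a real gesture recognizer; it only shows the shape of the conversion.

```cpp
#include <optional>
#include <string>

// Toy analysis step: turn a stream of hand X positions (meters) into
// discrete swipe events. Real recognizers handle noise, timing, and
// multiple axes; this only demonstrates the data-to-information idea.
class SwipeDetector {
public:
    // Feed one sample; returns a gesture name when one completes.
    std::optional<std::string> addSample(double x) {
        if (!m_start) { m_start = x; return std::nullopt; }
        double travel = x - *m_start;
        if (travel > kThreshold)  { m_start = x; return "swipe-right"; }
        if (travel < -kThreshold) { m_start = x; return "swipe-left"; }
        return std::nullopt;
    }

private:
    static constexpr double kThreshold = 0.30; // 30 cm of travel
    std::optional<double> m_start;
};
```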
Analyzing real-time data gets us closer to understanding emotion and intent. In turn, applications that make use of this information can become more compelling. A game can use gaze direction to improve the quality of interaction with a virtual character. Monitoring body vitals can help achieve the desired level of relaxation or excitement.
As users experience this enhanced interaction, they will demand more of it.
OSVR Implications
Desktop applications don’t have code to detect a mouse double-click. They rely on the operating system to convert raw mouse data into a double-click event. In the same way, OSVR needs to provide applications with both low-level data and high-level information.
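The double-click case itself shows the pattern well. Here is a sketch of the kind of conversion the operating system (or an OSVR analysis layer) performs; the timing threshold is illustrative.

```cpp
// Sketch of converting low-level button reports into a high-level
// 'double-click' event: the same data-to-information step the OS
// performs on behalf of desktop applications.
class DoubleClickDetector {
public:
    // Feed a button-press timestamp (milliseconds); returns true when
    // this press completes a double-click.
    bool onPress(long long timestampMs) {
        bool isDouble = (m_lastPressMs >= 0) &&
                        (timestampMs - m_lastPressMs <= kMaxGapMs);
        // A completed double-click should not chain into a triple.
        m_lastPressMs = isDouble ? -1 : timestampMs;
        return isDouble;
    }

private:
    static constexpr long long kMaxGapMs = 500; // typical OS default
    long long m_lastPressMs = -1;
};
```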
In ‘OSVR speak’, an ‘analysis plugin’ is the software that converts data into information. While early OSVR work focused on lower-level tasks, several analysis plugins are already available. For example, DAQRI integrated a plugin that detects objects in a video stream.
I expect many more plugins will become available. The open OSVR architecture makes plugin development accessible to everyone. If you are an eye tracking expert, you can add an eye tracking plugin. If you have code that detects gestures, it is easy to connect it to OSVR. One might also expect a plugin marketplace, similar to an asset store, to help developers find and deploy plugins.