Nvidia’s VRWorks Audio Brings Physically Based, GPU-Accelerated 3D Sound

At Nvidia’s special Editors Day event in Austin, Texas, the company unveiled an enhancement to its VRWorks APIs which now include what it says is the first “Physically Based Acoustic Simulator Engine”, accelerated by Nvidia GPUs.

Nvidia and AMD’s battle to claim the high ground in virtual reality has been under way for some time now, but it’s largely been VR visuals and latency that have received the lion’s share of the attention.

At Nvidia’s special Editors Day event today, CEO Jen-Hsun Huang announced that VRWorks (formerly GameWorks VR), the company’s collection of virtual reality focused rendering APIs, is to receive a new, physically based audio engine capable of performing the calculations needed to model sound’s interaction with virtual spaces entirely on the GPU.

Physically based spatial (or 3D) audio is the process by which sounds generated within a virtual scene are affected by the path they take before reaching the player’s virtual ears. The resulting effects range from the muffling caused by walls and doors occluding sound, to the reverberation and echoes caused by sound bouncing off many physical surfaces.
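To make that concrete, the simplest building block of such a simulation is direct-path occlusion. The sketch below is purely my own illustration in plain C++, not Nvidia’s implementation; isOccluded() is a hypothetical stand-in for a real ray cast against scene geometry. It attenuates a sound and rolls off its high frequencies when something solid sits between source and listener, which is exactly what we hear as muffling.

    #include <vector>

    struct Vec3 { float x, y, z; };

    // Hypothetical stand-in for a real ray cast against scene geometry;
    // assume it returns true when a wall or door blocks the path.
    bool isOccluded(const Vec3& source, const Vec3& listener) { return true; }

    // Muffle an occluded sound: cut its energy and low-pass filter it,
    // since high frequencies penetrate solid surfaces poorly.
    void applyOcclusion(std::vector<float>& samples,
                        const Vec3& source, const Vec3& listener) {
        if (!isOccluded(source, listener)) return;
        const float gain  = 0.3f;  // broadband energy lost through the wall
        const float alpha = 0.15f; // one-pole low-pass coefficient
        float state = 0.0f;
        for (float& s : samples) {
            state += alpha * (s - state); // simple one-pole low-pass filter
            s = gain * state;
        }
    }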

Nvidia states in a recently released video demonstrating the new VRWorks Audio engine that it’s approaching audio modelling and rendering much like ray tracing. Ray tracing is a processor-intensive but incredibly accurate way to render graphics, calculating the path of individual rays of light from source to destination within a scene. Similarly, albeit presumably much more cheaply in computational terms, Nvidia claims to be tracing the paths sound waves travel through a virtual scene, applying ‘physical’ attributes and dynamically rendering audio based on the resulting distortion. To put it simply, they work out how sound bounces off stuff and make it sound real. In fact, VRWorks Audio uses Nvidia’s pre-existing ray-tracing engine OptiX to “simulate the movement, or propagation, of sound within an environment, changing the sound in real time based on the size, shape and material properties of your virtual world — just as you’d experience in real life.”
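Nvidia hasn’t published the engine’s internals, but the general technique the quote describes is well understood: each traced path from source to listener contributes one ‘tap’ to an impulse response, delayed by the path’s length and attenuated by distance and by material absorption at every bounce; convolving the dry sound with that impulse response renders the room. Here’s a minimal CPU-side sketch of that idea (the SoundPath struct and its fields are my assumptions, not the VRWorks API):

    #include <cmath>
    #include <vector>

    // One acoustic path found by a (hypothetical) ray tracer.
    struct SoundPath {
        float lengthMeters;        // total distance, source to listener
        int   bounces;             // number of surface reflections
        float absorptionPerBounce; // material-dependent energy loss, 0..1
    };

    // Turn traced paths into an impulse response: one delayed,
    // attenuated tap per path.
    std::vector<float> buildImpulseResponse(const std::vector<SoundPath>& paths,
                                            float sampleRate) {
        const float speedOfSound = 343.0f;                      // m/s in air
        std::vector<float> ir(static_cast<size_t>(sampleRate)); // 1s of taps
        for (const SoundPath& p : paths) {
            const size_t delay = static_cast<size_t>(
                p.lengthMeters / speedOfSound * sampleRate);    // arrival time
            if (delay >= ir.size()) continue;                   // arrives too late
            const float gain =
                (1.0f / (1.0f + p.lengthMeters)) *              // distance falloff
                std::pow(1.0f - p.absorptionPerBounce,
                         static_cast<float>(p.bounces));        // bounce losses
            ir[delay] += gain;
        }
        return ir; // convolve the dry signal with this to place it in the room
    }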


Spatial audio and the use of HRTFs (Head Related Transfer Functions) in virtual reality is already common within the SDK options for both Oculus and SteamVR development, not to mention various third-party options such as RealSpace audio and 3Dception. I cannot say with any degree of authority how computationally accurate the physical modelling already is in any of those options, although I have heard how they sound. So whilst Nvidia’s promise of such great accuracy is certainly appealing (and, judging by the video, pretty convincing), perhaps it’s the GPU offloading that sells the idea as a possible winner. You can hear the results in the embedded video at the very top of this page.
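For readers unfamiliar with the term: an HRTF describes how your head and outer ears filter a sound arriving from a given direction, and renderers apply it by convolving the mono source with a measured left-ear and right-ear impulse response for that direction. A bare-bones sketch of that final convolution step follows; the HRTF data itself would come from a measured dataset, which I’m simply assuming here.

    #include <vector>

    // Convolve a dry mono signal with one ear's HRTF impulse response.
    // In a real renderer the left/right IR pair is looked up (and
    // interpolated) from a measured HRTF dataset based on the source's
    // direction relative to the listener's head.
    std::vector<float> convolve(const std::vector<float>& dry,
                                const std::vector<float>& hrtfIr) {
        std::vector<float> wet(dry.size() + hrtfIr.size() - 1, 0.0f);
        for (size_t i = 0; i < dry.size(); ++i)
            for (size_t j = 0; j < hrtfIr.size(); ++j)
                wet[i + j] += dry[i] * hrtfIr[j]; // direct-form convolution
        return wet; // run once per ear to produce the binaural stereo pair
    }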

However, as with most of the VRWorks suite of APIs and technologies, whatever benefits the company’s GPU-accelerated VRWorks Audio brings will be limited to those with Nvidia GPUs. What’s more, developers will have to specifically target these APIs in code too. The arguments for doing so may well be attractive enough for developers to do just that, and as an audio enthusiast, I can’t deny that the idea of such potential accuracy in VR soundscapes is pretty appealing to me. We’ll see how it stacks up once it arrives.

In the meantime, if you’re a developer who’s interested in getting an early peek at VRWorks Audio, head over to this sign-up page.

  • Raphael

    This is a good thing. Potentially very different from all the fake “positional audio” we’ve seen before such as CMSS 3d and “surround” headphones.

  • Al

    Seems to relate nicely to space4, but also to ignore material.

    Bricks do not propagate sound that way, and cloth (the flags and drapes) did not appear to absorb.

    Perhaps it’s in there but yet unused. Time will tell.

    • kalqlate

      I was thinking the same, but the article states the following…

      In fact VR Audio uses Nvidia’s pre-existing ray-tracing engine OptiX to “simulate the movement, or propagation, of sound within an environment, changing the sound in real time based on the size, shape and material properties of your virtual world — just as you’d experience in real life.”

      …so, oddly, maybe this demo didn’t fully exploit the available features.

  • Jorge Curiel

    This article needs so much editing… but great news

  • shayneo

    Goddamn. I thought I had come up with this. Looks like I was beaten to the idea! One thing though: if one were to follow the “physically based” principles properly, the focus would be more on the surface physics of how sound interacts with spaces. So it’d account for conservation of energy across roughness, how surfaces disperse sound when not reflecting it (conservation of energy!), and so on.

    • Sovereign Man

      Existing sound propagation software used in the acoustical engineering field already does all this, albeit with the use of pre-calculated approximation factors for absorption, scattering, refraction, reflection, transmission, attenuation, and so on. The key thing here is the use of the incredible parallel processing power of a modern GPU to calculate the propagation in real-time. I wish the professional software made use of this, it can take literally days to calculate large noise models using CPUs only.