Meta announced it has released a new Acoustic Ray Tracing feature that will make it easier for developers to add more immersive audio to their VR games and apps.

Released in the Audio SDK for Unity and Unreal, the new Acoustic Ray Tracing tech is designed to automate the complex process of simulating realistic acoustics, which is traditionally achieved through labor-intensive, manual methods.

In a developer blog post, the company says Acoustic Ray Tracing can simulate sound reflections, reverberations, diffraction, occlusion, and obstruction, all critical to making spatial audio closer to the real thing.

Image courtesy Meta
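
For a rough sense of what occlusion and obstruction mean in practice, consider the toy model below. It is purely illustrative, not Meta's SDK or its actual algorithm: if scene geometry blocks the straight path between a sound source and the listener, the sound is attenuated and its high frequencies are rolled off.

```python
import numpy as np

# Toy occlusion test (illustrative only, not Meta's Acoustic Ray Tracing):
# if any wall segment blocks the straight line between source and listener,
# the direct sound is attenuated and muffled.

def segments_intersect(p1, p2, q1, q2):
    """True if 2D segment p1-p2 properly crosses segment q1-q2."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def one_pole_lowpass(x, alpha=0.15):
    """Cheap 'muffling' filter standing in for frequency-dependent absorption."""
    y = np.empty_like(x, dtype=float)
    acc = 0.0
    for i, s in enumerate(x):
        acc += alpha * (s - acc)
        y[i] = acc
    return y

def render_direct_path(signal, source, listener, walls):
    """Return the dry signal, or a quieter, muffled copy if a wall blocks it."""
    blocked = any(segments_intersect(source, listener, a, b) for a, b in walls)
    return 0.5 * one_pole_lowpass(signal) if blocked else signal

# Example: a wall sits between the source and the listener
wall = ((0.0, -1.0), (0.0, 1.0))
out = render_direct_path(np.random.randn(48000), (-2.0, 0.0), (2.0, 0.0), [wall])
```

An actual acoustic ray tracer goes much further, shooting many rays against the scene geometry, accumulating reflection and diffraction paths, and deriving frequency-dependent filters from surface materials; the sketch above covers only the direct path.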

The new audio feature, which Meta says provides “a more natural audio experience” than its older Shoebox Room Acoustics model, also supports complex environments.

“Our acoustics features can handle arbitrarily complex geometry, ensuring that even the most intricate environments are accurately simulated,” Meta says. “Whether your VR scene is a winding cave, a bustling cityscape, or an intricate indoor environment, our technology can manage the complexity without compromising on performance.”


The Acoustic Ray Tracing system is said to integrate with existing workflows, supporting popular middleware like FMOD and Wwise. Notably, Acoustic Ray Tracing is being used in the upcoming Quest exclusive Batman: Arkham Shadow, demonstrating its potential for creating immersive experiences.

“One of the standout benefits of our new acoustics features is their performance on mobile hardware. While other solutions in the market require powerful PCs due to their high performance cost, our SDK is optimized to run efficiently on mobile devices such as Quest headsets. This opens up new possibilities for high-quality audio simulation in mobile applications, making immersive audio more accessible than ever before,” the company says.

You can find out more about Meta’s Acoustic Ray Tracing here. You’ll also find documentation on Meta’s Audio SDK (Unity | Unreal) and Acoustic Ray Tracing (Unity | Unreal).




  • Arno van Wingerde

    Sounds like a good idea, provided developers writing for multiple platforms can easily swap engines. But isn't the effect almost totally negated by the comfortable but rather limited on-ear sound of the Quest? Or will the Quest 4/5 be better?

    • ViRGiN

      It has nothing to do with hardware.

    • Christian Schildwaechter

      The built-in speakers shouldn't limit the effect any more than Quest audio in general.

      can simulate sound reflections, reverberations, and things like diffraction, occlusion, and obstruction

      This effectively means parts of the signal will be repeated and certain frequencies dampened, with the delay and repetitions defined by the (virtual) object the sound hits and its position/material.

      This doesn't require good speakers; it works because our brains have learned that certain spaces influence sound in certain ways. It isn't new either: there have been decent HiFi sound processors that let you switch between "wooden cabin", "concert hall" and "cathedral" for decades. Audio ray tracing now does a lot more, calculating this dynamically per object instead of using an approximation for a fixed spot in the room.
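
      To make that concrete, a single reflection is just a delayed, quieter, treble-damped copy of the dry signal mixed back in. A minimal Python sketch (a toy illustration, not anything from Meta's SDK):

      ```python
      import numpy as np

      def add_first_reflection(dry, fs, extra_path_m, absorption=0.4, c=343.0):
          """Mix in one reflection: a delayed, attenuated, treble-damped copy.

          extra_path_m: how much longer the reflected path is than the direct one.
          absorption:   fraction of energy the surface soaks up (material-dependent).
          """
          delay = int(round(fs * extra_path_m / c))   # propagation delay in samples
          out = np.zeros(len(dry) + delay)
          out[:len(dry)] += dry                       # direct sound
          echo = dry * (1.0 - absorption)             # level drop at the surface
          acc = 0.0
          for i, s in enumerate(echo):                # one-pole low-pass: materials
              acc += 0.3 * (s - acc)                  # absorb highs more than lows
              echo[i] = acc
          out[delay:delay + len(dry)] += echo         # reflected sound arrives late
          return out / max(1.0, np.max(np.abs(out)))  # normalize to avoid clipping

      # 48 kHz signal, reflected path 3.43 m longer than direct -> ~10 ms echo
      wet = add_first_reflection(np.random.randn(48000), 48000, 3.43)
      ```

      A full simulation does this for many paths at once and updates the delays and filters as sources, listener and objects move, which is where the ray tracing comes in.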

      Quest audio makes assumptions about how sound arrives at your ear, which can be wrong. You'd probably get better results with an open speaker design like on Index, and much better results on AirPods Pro using an audio profile based on 3D scans of your ears created on your iPhone. But that's for the last few percent, not a fundamental difference.
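
      Even the crudest binaural model captures the two cues those assumptions encode: the far ear hears the sound slightly later and slightly quieter. A toy Python sketch using Woodworth's spherical-head ITD estimate (illustrative only, not how Quest or AirPods actually render audio):

      ```python
      import numpy as np

      def spherical_head_pan(mono, fs, azimuth_deg, head_radius=0.0875, c=343.0):
          """Crude binaural pan: delay (ITD) and attenuate (ILD) the far ear."""
          az = np.radians(abs(azimuth_deg))
          itd = (head_radius / c) * (az + np.sin(az))  # Woodworth's ITD estimate
          lag = int(round(itd * fs))                   # far-ear delay in samples
          near = mono
          far = np.concatenate([np.zeros(lag), mono])[:len(mono)]
          far = far * (0.5 + 0.5 * np.cos(az))         # crude level difference
          left, right = (near, far) if azimuth_deg < 0 else (far, near)
          return np.stack([left, right], axis=1)       # stereo buffer, shape (N, 2)

      # Source 60 degrees to the right of straight ahead, 48 kHz
      stereo = spherical_head_pan(np.random.randn(48000), 48000, 60.0)
      ```

      Real HRTFs additionally encode how the outer ear filters each direction differently across frequencies, which is exactly the part that personalized profiles from 3D ear scans improve on.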

      Physically realistic sound makes the experience believable, similar to proper physics in games. AVP users are impressed when the voice of persons/personas changes depending on where they stand or look and on the virtual environment. Meta's audio rendering should allow something similar, but without using an R1 signal processor, probably running on the XR2's DSP instead.

    • MasterElwood

      Most people use headphones/earbuds with the Quest. That's what the 3.5mm port is for.

      • Christian Schildwaechter

        Do you have a source/numbers for "most people use headphones"? This would be pretty surprising, because as a general pattern for technology use, the majority tends to use whatever the default was, even if a small change would be enough to provide a much better experience. Which is why including a decent default option is so important.

  • Octogod

    Their previous HRTF implementation was perf expensive, bad, and stayed bad for many years, so my hopes are low.

    For those unaware, Wwise or FMOD already have scalable, tested HRTF solutions in place in use on Quest.

  • Very interesting; it's also good to hear that it's compatible with FMOD

  • Thud

    I'm not a fan of Meta by any stretch but you really have to give credit to their R&D teams. They just keep turning out good research and moving VR forward.

  • Part of me wonders if this will help build convincing MR, in that objects that exist virtually in your real-life environment might need, or could at least use, acoustics matched to the room. Excited about the tech.

    Also really curious about the performance implications.