In a new series of articles on the official Oculus Developer Blog, the company offers an overview of additions to the Oculus Audio SDK, specifically new techniques for near-field 3D audio and volumetric sound source rendering. The articles serve as a primer for Oculus Connect 4, taking place October 11th and 12th, which will feature presentations on the company’s “breakthroughs in spatial audio technologies.”
Near-field audio rendering aims to further enhance the realism of spatial audio in VR, particularly for sound sources within arm’s reach of the user. This is achieved with more precise HRTFs that account for acoustic diffraction: lower frequencies bend around the head relatively easily, while higher frequencies are blocked, casting an ‘acoustic shadow’ over the far ear. That shadow is simulated with different filtering for each ear, depending on the direction of the sound source.
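As a rough illustration of the head-shadow idea (a minimal sketch only, not Oculus’s HRTF implementation; the one-pole filter and the cutoff values are arbitrary placeholders), the ear facing away from the source can simply be given a stronger low-pass filter than the ear facing toward it:

```cpp
#include <cmath>
#include <vector>

// One-pole low-pass filter applied in place at the given sample rate.
static void onePoleLowPass(std::vector<float>& samples,
                           float cutoffHz, float sampleRateHz) {
    const float a = std::exp(-2.0f * 3.14159265f * cutoffHz / sampleRateHz);
    float y = 0.0f;
    for (float& s : samples) {
        y = (1.0f - a) * s + a * y;
        s = y;
    }
}

// Toy head-shadow model: the ear on the far side of the head gets a lower
// cutoff (more shadow), so its high frequencies are attenuated more.
// sourceAzimuthRad: 0 = straight ahead, positive = toward the right ear.
void applyHeadShadow(std::vector<float>& leftEar,
                     std::vector<float>& rightEar,
                     float sourceAzimuthRad, float sampleRateHz) {
    // +1 when the source is fully on the right, -1 when fully on the left.
    const float towardRight = std::sin(sourceAzimuthRad);
    // Placeholder cutoff range of 1 kHz (heavily shadowed) to 20 kHz (open).
    const float rightCutoff = 1000.0f + 19000.0f * 0.5f * (1.0f + towardRight);
    const float leftCutoff  = 1000.0f + 19000.0f * 0.5f * (1.0f - towardRight);
    onePoleLowPass(leftEar, leftCutoff, sampleRateHz);
    onePoleLowPass(rightEar, rightCutoff, sampleRateHz);
}

int main() {
    // Impulse through both ears with the source 90 degrees to the right:
    // the left (shadowed) ear output comes out noticeably duller.
    std::vector<float> left(256, 0.0f), right(256, 0.0f);
    left[0] = right[0] = 1.0f;
    applyHeadShadow(left, right, 1.5707963f, 48000.0f);
    return 0;
}
```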
A second article, on volumetric sound sources, discusses the problem of using single point sources for large objects or characters in spatial audio rendering: a single point source tends to sound unnatural, as if the sound is coming only from the centre of the object. Oculus Research’s solution was to develop a “process to compute the projection based on the distance and radius”, using spherical harmonics to create a “physically correct and high performance way to represent large sound sources.”
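To give a feel for that kind of projection (again a simplified sketch, not the Oculus implementation; the first-order spherical-harmonic spread model and every value below are assumptions), the angular size a spherical source subtends at the listener can be turned into a “spread” gain on the directional terms, so a large or nearby object stops sounding like a point:

```cpp
#include <cmath>
#include <cstdio>
#include <initializer_list>

// Gains for a first-order spherical-harmonic representation of the source.
struct VolumetricGains {
    float omni;        // gain on the 0th-order (omnidirectional) term
    float directional; // gain on the 1st-order (directional) terms
};

VolumetricGains projectSphericalSource(float distance, float radius) {
    float halfAngle;
    if (distance <= radius) {
        // Listener is inside (or on) the source sphere: treat the sound
        // as fully enveloping.
        halfAngle = 3.14159265f;
    } else {
        // Angular radius of the source sphere as seen from the listener.
        halfAngle = std::asin(radius / distance);
    }

    // Averaging the 1st-order spherical-harmonic basis over a uniform
    // spherical cap of half-angle theta gives a directional gain of
    // (1 + cos(theta)) / 2: 1 for a distant point source, 0 when the
    // source completely surrounds the listener.
    VolumetricGains g;
    g.omni = 1.0f;
    g.directional = 0.5f * (1.0f + std::cos(halfAngle));
    return g;
}

int main() {
    // The directional gain falls as a 2 m radius object approaches,
    // collapsing to zero once the listener is inside it.
    for (float d : {20.0f, 5.0f, 2.5f, 1.0f}) {
        VolumetricGains g = projectSphericalSource(d, 2.0f);
        std::printf("distance %.1f m -> directional gain %.2f\n",
                    d, g.directional);
    }
    return 0;
}
```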
Both articles go into quite a bit of detail on how Oculus is thinking about and attempting to solve these problems. We expect to hear more on this topic from Oculus at the Connect conference in October.