New Algorithm Could Make VR Sound More Realistic
You're probably familiar with the way good sound design can bring a game or video to life. It can take huge teams of creators hour upon hour to get the audio just right, but almost no amount of time is enough to craft the perfect audio for a virtual reality experience. Sound design is vastly complicated by the innately unscripted nature of VR simulations, but a new algorithm from researchers at Stanford could finally change that.
In scripted media like a pre-rendered 2D video, you always know where audio should come from; the sound levels for each channel never change from one viewing to the next. Even a 3D game has a workable level of complexity thanks to the predetermined parameters of the environment. With VR, there are simply too many variables to create perfect, realistic sound from every perspective.
Currently, the algorithms for creating sound models come from work done more than a century ago by the scientist Hermann von Helmholtz. In the late 19th century, Helmholtz devised some of the theoretical underpinnings of wave propagation. The so-called Helmholtz Equation has since become a major component of audio modeling, along with the boundary element method (BEM).
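For reference, the (homogeneous) Helmholtz equation describes how a sound pressure field p varies in space for a single angular frequency ω, with c the speed of sound:

```latex
\nabla^2 p(\mathbf{x}) + k^2 \, p(\mathbf{x}) = 0, \qquad k = \frac{\omega}{c}
```

Roughly speaking, BEM-style solvers discretize an object's surface and solve this equation frequency by frequency, which is a big part of why the traditional pipeline is so slow.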
That's all well and good if you're dealing with an environment without too many variables, but virtual reality ratchets up the number of possible sound models to previously unheard-of levels. To make VR sound accurate, engineers would need to create audio models based on where the viewer is standing in the virtual world and what they're looking at. Doing that with the Helmholtz Equation and BEM would take powerful computers multiple hours, so it's far from practical.
The potential solution comes from Stanford professor Doug James and graduate student Jui-Hsien Wang. The new GPU-accelerated algorithm calculates audio models thousands of times faster by completely avoiding the Helmholtz Equation and BEM. We're talking seconds of processing instead of hours.
The pair's approach borrows from 20th-century Austrian composer Fritz Heinrich Klein, who found a way to generate the "Mother Chord" from multiple piano notes. They call their algorithm KleinPAT in recognition of his posthumous contribution. The video above includes some comparisons between Helmholtz-generated audio models and KleinPAT. They sound very similar, which is the point: you can get almost identical sound from KleinPAT with much less computing time. The researchers believe this algorithm could be a game-changer for simulating audio in dynamic 3D environments.
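To make the chord analogy a bit more concrete, here is a minimal, heavily simplified toy sketch of packing several vibration "modes" into one combined signal and then recovering each one afterwards by frequency. This is an illustration of the chord idea only, not the actual KleinPAT algorithm, and the sample rate, mode frequencies, and amplitudes are made-up values for the demo.

```python
import numpy as np

# Toy illustration of the "chord" idea: several single-frequency mode signals
# are summed into one combined signal, processed in a single pass, and each
# mode's contribution is recovered from its (non-overlapping) frequency bin.
# NOTE: these values are hypothetical; this is not the real KleinPAT method.

fs = 8000                            # sample rate in Hz (arbitrary for the demo)
t = np.arange(fs) / fs               # one second of samples
mode_freqs = [440.0, 660.0, 990.0]   # hypothetical modal frequencies (Hz)
amplitudes = [1.0, 0.5, 0.25]        # hypothetical modal amplitudes

# Build the "chord": one signal that carries every mode at once.
chord = sum(a * np.sin(2 * np.pi * f * t)
            for a, f in zip(amplitudes, mode_freqs))

# A single FFT of the chord recovers each mode's amplitude, because the
# modal frequencies were chosen so they don't overlap.
spectrum = np.fft.rfft(chord) / (len(t) / 2)
freqs = np.fft.rfftfreq(len(t), d=1 / fs)

for f, a in zip(mode_freqs, amplitudes):
    bin_idx = np.argmin(np.abs(freqs - f))
    print(f"mode at {f:6.1f} Hz: expected {a:.2f}, recovered {abs(spectrum[bin_idx]):.2f}")
```

If the analogy holds, the appeal is doing one expensive computation on the combined "chord" rather than many separate ones, which lines up with the speedups the article describes.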
Now read:
- VR vs. AR vs. MR: What Is Each One Good For?
- Sony, Microsoft Join Forces in Cloud Gaming and AI
- HTC Vive Focus Plus VR Headset Launches April 15 for $799
Source: https://www.extremetech.com/extreme/296223-new-algorithm-could-make-vr-sound-more-realistic