Nvidia announced and released a new VRWorks SDK for audio and 360-degree video at the GPU Technology Conference yesterday. The new tools have been designed with the Pascal architecture found in the Titan X and the GeForce GTX 1080 in mind.

Initial versions of Nvidia’s VRWorks SDK focused primarily on video and graphical performance issues. The biggest challenge was developing a tool that could make “rendering to dual 1080×1200 displays at 90 frames per second” easier for developers, says Nvidia. While the company succeeded in doing this, the technique consumed a large share of the GPU’s processing power.

With the newer, more powerful Pascal GPUs, the task is easier on the hardware and no longer requires the full attention of the graphics processor. The Pascal architecture allows for simultaneous projection of multiple images, a design that cuts the processing power needed to render a scene by a substantial 50 percent. To let developers take advantage of this, the company has built a few new tools.

One of the tools Nvidia has included in the VRWorks SDK is Single Pass Stereo. With this tool, developers only have to render the geometry of an image once. Previously, the GPU would have to draw the scene twice, once for each screen in the VR unit. Now, with the Pascal-based GPUs, the “left and right displays [can] share a single geometry pass.”
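The idea behind that shared pass can be sketched in a few lines. This is a conceptual illustration in plain Python, not the actual VRWorks or GPU API: a traditional stereo renderer runs the geometry stage once per eye, while a single-pass renderer processes each vertex once and emits a clip-space position for both eyes. The function names and matrix setup are my own.

```python
def transform(vertex, view_proj):
    """Apply a 4x4 view-projection matrix to a homogeneous vertex."""
    return tuple(sum(view_proj[r][c] * vertex[c] for c in range(4))
                 for r in range(4))

def traditional_stereo(vertices, left_vp, right_vp):
    """Two full geometry passes: one per eye."""
    passes = 0
    frames = {}
    for eye, vp in (("left", left_vp), ("right", right_vp)):
        passes += 1                       # the whole scene is drawn again
        frames[eye] = [transform(v, vp) for v in vertices]
    return frames, passes

def single_pass_stereo(vertices, left_vp, right_vp):
    """One geometry pass shared by both eyes."""
    passes = 1
    frames = {"left": [], "right": []}
    for v in vertices:                    # each vertex is processed once...
        frames["left"].append(transform(v, left_vp))    # ...and emits two
        frames["right"].append(transform(v, right_vp))  # positions
    return frames, passes
```

Both paths produce identical per-eye output; the single-pass version simply does it with half the geometry submissions, which is where the savings come from.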

Nvidia also developed a tool called Lens Matched Shading, which eliminates pixels that will never appear in the final images sent to the headset. Previously, these needless pixels were thrown out after rendering, essentially wasting GPU cycles.
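A rough back-of-the-envelope calculation shows why those discarded pixels matter. The numbers here are illustrative, not Nvidia's: a square render target is fully shaded, but the headset lens only presents a roughly circular region of it, so everything outside that region is shaded and then thrown away.

```python
def wasted_pixel_fraction(size):
    """Fraction of a size x size render target outside the inscribed circle,
    i.e. pixels shaded but never seen through a circular lens region."""
    radius = size / 2.0
    outside = 0
    for y in range(size):
        for x in range(size):
            # Test each pixel centre against the lens circle.
            if (x + 0.5 - radius) ** 2 + (y + 0.5 - radius) ** 2 > radius ** 2:
                outside += 1
    return outside / (size * size)
```

For a circle inscribed in a square, the wasted area works out to about 1 − π/4, or roughly 21 percent, before even accounting for the extra over-rendering that lens distortion correction requires.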

With all this freed-up processing power, Nvidia has branched into audio with the VRWorks Audio SDK. Current VR systems do a good job of pinpointing a sound source in 3D space, but they ignore environmental acoustics entirely.

When a sound is produced, the sound wave travels out in all directions. Only the waves that travel directly to your ear are unaffected by the surroundings. In most VR applications this direct wave is all we hear. However, in reality, all those other waves that are bouncing around the room reach our ears as well, and various phenomena change how each wave sounds.
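The difference between the direct wave and a reflected one can be illustrated with a simple image-source calculation. This is my own sketch of the underlying physics, not the VRWorks algorithm: a reflected path is longer, so it arrives later and quieter, and that delayed, attenuated copy is what the ear hears as the room.

```python
SPEED_OF_SOUND = 343.0  # metres per second, near room temperature

def path_delay_and_level(distance, reflection_loss=1.0):
    """Arrival time (seconds) and relative amplitude for a path of the
    given length; reflection_loss models energy kept after a bounce."""
    delay = distance / SPEED_OF_SOUND
    level = reflection_loss / distance   # inverse-distance amplitude falloff
    return delay, level

# Listener 5 m from the source: the direct wave.
direct_delay, direct_level = path_delay_and_level(5.0)

# One wall bounce stretches the path to 9 m; assume the wall keeps
# 70% of the amplitude (an illustrative figure, not an SDK value).
refl_delay, refl_level = path_delay_and_level(9.0, reflection_loss=0.7)
```

The reflected copy arrives about 12 milliseconds after the direct wave at a lower level; summing many such copies is what gives a space its audible character.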

Nvidia has taken the OptiX ray-tracing engine it uses for graphical ray tracing and adapted it for audio. The tool lets developers build a 3D scene and then assign attributes to objects in the environment that alter sound waves. For example, solid walls reflect sound while tapestries absorb it, as demonstrated in the video above.
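The material-attribute idea can be sketched as follows. This is a hedged illustration, not the VRWorks Audio API: each surface a traced sound ray hits carries an absorption coefficient, and the ray's remaining energy is scaled down at every bounce. The material names and coefficients below are invented for the example.

```python
# Illustrative absorption coefficients (fraction of energy absorbed per
# bounce); not values from the VRWorks Audio SDK.
MATERIALS = {
    "concrete_wall": 0.02,  # absorbs little, so reflections stay strong
    "tapestry":      0.60,  # soft fabric soaks up most of the wave
    "glass":         0.05,
}

def surviving_energy(hits, initial=1.0):
    """Energy left in a sound ray after reflecting off each material in
    `hits`, in order."""
    energy = initial
    for material in hits:
        energy *= 1.0 - MATERIALS[material]
    return energy
```

Two bounces off concrete leave a ray nearly at full strength, while a single tapestry hit cuts it to 40 percent, which is exactly the contrast between a bare room and a furnished one.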

VRWorks Audio is already integrated into Unreal Engine 4, but it also ships with C APIs so it can be integrated into any other application.

Nvidia has also developed tools to help developers create a “sense of touch” by incorporating spatial recognition and improved haptics, but this is an area of VR that still has quite a way to go. The graphical and sound improvements in the VRWorks SDKs should be enough to keep developers busy bringing more realism to their games and applications.