LISTEN TO THE VOICES OF VR PODCAST
I had a chance to catch up with Bob Pette, general manager of the ProVis business unit at Nvidia, where he talked to me about their new Quadro GPUs, VR-related software announcements, and updates to their physically-based Iray renderer. Nvidia is moving towards live, interactive ray tracing, but they're not there yet since it's still a very computationally intensive process. They were showing demos of changing a stationary camera position within a photorealistically rendered room, with the option to choose between four different lighting conditions.
Bob also talks about the parallel-processing capabilities of these Nvidia GPUs and how they're enabling a lot of innovation within the deep learning and machine learning fields. He sees a trend of software tools starting to leverage GPU processing in order to add artificial intelligence features to content creation software.
For example, Bob sees that the perceptual capabilities of GPU-accelerated machine learning techniques might help ray tracing algorithms reach a "good enough" visual threshold more efficiently, and might be able to detect ray tracing errors. He also acknowledged that the computational demands of training neural networks are still high enough that he expects training to happen primarily in the cloud, with supplementary updates and tuning on local GPUs.
There are still a lot of open problems to solve before we see live, interactive ray tracing. But what’s clear is that Nvidia’s GPU technologies are at the center of catalyzing the current groundswell of virtual reality technologies and machine learning innovations.
Support Voices of VR
- Subscribe on iTunes
- Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip