I recently attended the Experiential Technology Conference, where a lot of companies were looking at how to use biometric data to gain insights into health & wellness, cognitive enhancement in education, market research, and more. Over the next couple of years, virtual reality platforms will be integrating more and more of these biometric data streams, and I wanted to learn what kinds of insights can be extrapolated from them. I talked with behavioral neuroscientist John Burkhardt from iMotions, one of the leading biometric platforms, about the metrics that can be captured from eye tracking, facial tracking, galvanic skin response, EEG, EMG, and ECG.
LISTEN TO THE VOICES OF VR PODCAST
I also talked to Burkhardt about some of the ethical implications of integrating these biometric data streams within an entertainment and consumer VR context. He says that the fields of advertising and brainwashing often borrow from each other’s research, and he is specifically concerned about whether or not it’ll be possible to hack our fixed action patterns, which are essentially stimulus-response behaviors that can operate below our conscious awareness.
Most of the work that iMotions does happens within the context of controlled research with the explicit consent of participants, but what happens when entire virtual environments can be controlled and manipulated by companies who know more about your unconscious behaviors than you do?
Burkhardt says that there is a threshold of behavior that he would consider unethical in how this biometric data is captured and used, but the problem is that no one really knows where that threshold lies. We might be able to recognize it after it’s already been crossed, but it’s hard to predict what that looks like or when it might happen. We’re not there yet, but the potential is clearly there. An open question is whether the VR community is going to take a reactive or proactive approach to it.
Burkhardt also says that these types of issues tend to be resolved by implicit collective consensus in the sense that we’re already tolerating a lot of the cost/benefit tradeoffs of using modern technology. He says that it’s just a matter of time before someone creates a way to formulate a unique biometric fingerprint based upon aggregating these different data streams, and it’s an open question as to who should own and control that key.
The insights from biometric data streams could also evolve to the point where the big data companies capturing them could not only predict our behavior, but potentially even directly manipulate and control it. Burkhardt also says that this raises deeper philosophical questions: if someone can take away our free will with the right stimuli, then did we ever have it to begin with?
As I covered in my previous podcast with Jim Preston, it’s easy to jump to utopian or dystopian outcomes regarding privacy in VR, but the reality is likely to fall somewhere in between, as it is a complicated and complex issue. There are lots of potential new forms of self-awareness in being able to observe our autonomic and unconscious internal states, as well as changes to the depth and fidelity of social interactions. But there are also risks that this type of data could be used to shape and control our behaviors in ways that cross an ethical threshold. It’s something that no individual person or company can figure out alone, and it’s going to require the entire VR community.
Support Voices of VR
- Subscribe on iTunes
- Donate to the Voices of VR Podcast Patreon
Music: Fatality & Summer Trip