
Studying the Consequences of Avatar Embodiment at Stanford’s State-of-the-art Virtual Reality Lab

Stanford uses its virtual reality lab to study the consequences of avatar embodiment. A ten-minute presentation by the lab’s director, Jeremy Bailenson, reveals some enthralling findings showing that how we see ourselves in virtual reality can measurably impact our actions in real life.

Overview of Stanford’s Virtual Human Interaction Lab (VHIL)

The VHIL is what you might call a premium virtual reality setup — far outside the budget of most individuals. In addition to a CAVE, they’ve got a ‘multisensory’ room which features 22-point surround sound. There are even special speakers embedded in the floor which could be used to simulate the rumble of a falling virtual object, among other things.

The head-mounted display (VR headset) they are using in the multisensory room appears to be the ~$36,000 nVisor SX111. The unit has a 1280×720 display and a 102 degree horizontal field of view. Positional tracking appears to be handled optically.

The Consequences of Avatar Embodiment

When I stepped into the Holodeck, one of the most exciting things was avatar embodiment — having my entire body represented within the virtual reality game world.

While my avatar was indeed quite blocky, it still felt like me. After all, when I moved my arms in real life, my avatar moved its arms in the virtual world. And when I walked around in the real world, my avatar moved the very same way. For each real-world action, my avatar responded convincingly in the virtual space.

Project Holodeck’s director, Nathan Burba, accompanied me during my time in the Holodeck. As he spoke to me within the virtual world, I looked in the direction of his avatar just like I would have looked at him in real life; despite his avatar having a square head and no eyes, it still felt natural because I was so immersed.

While I thought a lot about my experience after the demo, I had no idea that the virtual world could leak out and affect my real-life actions.

That’s what the director of Stanford’s Virtual Human Interaction Lab, Jeremy Bailenson, says he’s demonstrated through numerous experiments. I recently came across this fascinating presentation and was very surprised by the breadth of findings:
