Naturalistic, full-body input for virtual reality is still some way off. You can see the pieces forming and coming together, and things are evolving quickly, but we're not there yet. For now, the only real way to get all your limbs tracked and modelled in VR is to grab yourself a fully fledged motion capture studio.
The setup comprises the following:
- 40-camera Vicon motion capture system for realtime mocap
- Custom data translation code to facilitate communication between Vicon and Unity
- Unity Pro for realtime rendering
- Autodesk Maya for scene creation
- Dell workstations and NVIDIA graphics cards for data processing and rendering
- Paralinx Arrow for wireless HDMI (untethered operator)
…so not exactly the kind of setup the average person is likely to have in their basement, but the results are undeniably cool. Motion data is captured at 120 FPS for an impressively fluid body-to-avatar mapping. The mocap data is then fed into Unity, which renders your virtual presence moving through the environment in realtime.
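The post doesn't detail how that custom Vicon-to-Unity translation layer works, but a minimal sketch helps show the shape of it. The snippet below assumes a hypothetical wire format (one UDP datagram per joint, as `jointName,px,py,pz,qx,qy,qz,qw`) and a hypothetical port; the class and joint-naming scheme are illustrative, not the team's actual code.

```csharp
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using UnityEngine;

// Hypothetical receiver for a Vicon-to-Unity bridge. Assumes the translation
// layer streams one UDP datagram per joint as "jointName,px,py,pz,qx,qy,qz,qw".
// A sketch only: error handling and shutdown races are simplified.
public class MocapStreamReceiver : MonoBehaviour
{
    public int listenPort = 9000;   // assumed port for the bridge
    public Transform avatarRoot;    // root of the rigged avatar in the scene

    private UdpClient client;
    private readonly Dictionary<string, Pose> latestPoses = new Dictionary<string, Pose>();
    private readonly object poseLock = new object();

    void Start()
    {
        client = new UdpClient(listenPort);
        // Receive asynchronously so 120 FPS packets never block the render loop.
        client.BeginReceive(OnReceive, null);
    }

    void OnReceive(IAsyncResult result)
    {
        IPEndPoint source = new IPEndPoint(IPAddress.Any, listenPort);
        byte[] data = client.EndReceive(result, ref source);
        string[] f = System.Text.Encoding.ASCII.GetString(data).Split(',');

        if (f.Length == 8)
        {
            var pos = new Vector3(float.Parse(f[1]), float.Parse(f[2]), float.Parse(f[3]));
            var rot = new Quaternion(float.Parse(f[4]), float.Parse(f[5]),
                                     float.Parse(f[6]), float.Parse(f[7]));
            lock (poseLock) { latestPoses[f[0]] = new Pose(pos, rot); }
        }
        client.BeginReceive(OnReceive, null);   // keep listening
    }

    void Update()
    {
        // Apply the newest sample for each joint, easing toward it so the
        // 120 Hz stream stays fluid even if render frames arrive unevenly.
        lock (poseLock)
        {
            foreach (var entry in latestPoses)
            {
                Transform joint = FindJoint(avatarRoot, entry.Key);
                if (joint == null) continue;
                joint.position = Vector3.Lerp(joint.position, entry.Value.position, 0.5f);
                joint.rotation = Quaternion.Slerp(joint.rotation, entry.Value.rotation, 0.5f);
            }
        }
    }

    // Depth-first search of the avatar hierarchy for a joint by name.
    Transform FindJoint(Transform node, string name)
    {
        if (node.name == name) return node;
        foreach (Transform child in node)
        {
            Transform found = FindJoint(child, name);
            if (found != null) return found;
        }
        return null;
    }

    void OnDestroy() { client?.Close(); }
}
```

Receiving on a background thread and applying poses in `Update()` is one sensible way to decouple the capture rate from the render rate; whatever the team actually built, the bridge has to solve that same mismatch.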
As stated, the team have released their software and have even included a rather straightforward-looking set of instructions. You know, just in case you really do have that killer mocap system in your basement after all.
It’s another example of research that could one day inform systems that are available to us regular consumers. And before you scoff, cast your mind back a short 3 years and ask yourself if you thought VR would be where it is today.
For more information on VIVE, check out their dedicated web page here, and Emily Carr University's site is here.