Sixense’s successful STEM Kickstarter has been running for a few weeks now, but for the first time they’re showing the true power of their VR tracking system with full 5-point tracking. The results are, quite frankly, awesome.

I guess I shouldn’t be surprised. I pay close attention to the virtual reality space and I fully understand the implications of full avatar embodiment. But for some reason, finally seeing it happen in a familiar environment (inside Sixense’s Tuscany VR demo) has me smiling with excitement. Just watch, and let me know what you think:

There’s something about natural human movement that looks so much more compelling in a virtual avatar than traditional animation. There’s a life, or perhaps an essence, brought to the avatar on the right in the video that feels human, as if there’s really a person in there. I was blown away the first time I got my virtual hands inside the game because it enhanced immersion so significantly. Having your entire body naturally there inside the game is going to take immersion to the next level.

That’s why in the headline I say that this is the next step for VR. Full-body tracking is going to bring life to virtual worlds, not only by animating your own avatar and making it feel real, but by putting you around other players who also move like real people. And that feeling isn’t the only thing that matters; the gameplay implications are clear: we’ll finally be able to duck, jump, dodge, kick, and punch inside these virtual spaces, and have it truly be part of the gameplay.

Someone make a Chivalry for VR and the STEM, please?!

I’m quite impressed with what Sixense has been able to do with 5 trackers and some smart inverse kinematics. Professional motion capture systems track many more points, but what I’m seeing in the video above is appealingly natural movement from a virtual avatar. I can’t wait to see many of the great Rift + Hydra demos up and running with full tracking thanks to STEM!
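
For a rough idea of what “filling in the skeleton” between a handful of tracked points involves, here is a minimal sketch of a classic two-bone analytic IK solve, the kind commonly used for arms and legs. This is not Sixense’s code; the function name, bone lengths, and pole-vector handling are illustrative assumptions only.

```python
import numpy as np

def two_bone_ik(root, target, len_a, len_b, bend_hint):
    """Place a middle joint (e.g. an elbow) between a known root joint
    (shoulder) and a tracked end effector (hand controller).

    bend_hint is a rough direction the middle joint should bend toward;
    it must not be parallel to the root->target line.
    """
    root = np.asarray(root, dtype=float)
    target = np.asarray(target, dtype=float)
    to_target = target - root

    # Clamp the reach so the triangle root-elbow-hand stays valid.
    dist = np.clip(np.linalg.norm(to_target),
                   abs(len_a - len_b) + 1e-6, len_a + len_b - 1e-6)
    u = to_target / np.linalg.norm(to_target)          # toward the target

    # Law of cosines: angle at the root between the first bone and u.
    cos_a = np.clip((len_a**2 + dist**2 - len_b**2) / (2 * len_a * dist), -1.0, 1.0)
    sin_a = np.sqrt(1.0 - cos_a**2)

    # Bend direction: project the hint onto the plane perpendicular to u.
    hint = np.asarray(bend_hint, dtype=float)
    v = hint - np.dot(hint, u) * u
    v /= np.linalg.norm(v)

    elbow = root + len_a * (cos_a * u + sin_a * v)
    end = root + dist * u                              # clamped end position
    return elbow, end

# Example: shoulder at the origin, hand tracker half a metre straight ahead,
# elbow asked to bend downward.
elbow, hand = two_bone_ik([0, 0, 0], [0, 0, 0.5], 0.3, 0.3, [0, -1, 0])
print(elbow, hand)
```

Applied per limb, with absolute positions for the head, hands, and feet, something along these lines is roughly how a small number of trackers can drive a plausible full-body pose.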

Sixense tells me that the prototype shown in the video is using older ‘time modulation’ sensing techniques, akin to the Hydra, with more latency than STEM. Sixense recently released a video to explain the differences between the old process and the new (and much faster) way of doing things with STEM:

Down the road I imagine Sixense will demo pre-production STEM prototypes to show the difference in latency between STEM and the Hydra.

At the time of writing, the STEM Kickstarter has just over $550,000 in funding (221% of its original goal) and has 7 days left. The Kickstarter also just passed its first stretch goal, a wireless base station, which will make it even easier to game in VR.

  • Andrés

    You should mention the PrioVR in the post, since it’s quite a relevant competitor when it comes to full-body motion capturing.

    • Psuedonymous

      PrioVR is great for avatar mocap, but not as suitable for VR. Because all they are measuring is the orientation of the individual sensors, actual body pose has to be estimated. Even in the perfect scenario of measuring the distance between every sensor, measuring the length of every one of your joints, and having all the sensors fixed in a known relationship to your skeleton, there will still be common scenarios where tracking will be lost. For example, sliding your foot on the floor: while the measured skeleton will be correct, the position in space will not. Same with any sort of running and jumping (whenever both your feet are off the ground, there is no absolute world anchor that the rest of the sensors’ positions can be inferred from). Also, quite a lot of possible head translation movements cannot be inferred just from head and shoulder orientation.

      The ideal setup (if you can afford it) would be to use STEM sensors to absolutely position the head, torso and limb endpoints, and to use the PrioVR to fill in the skeleton in between in place of IK.

  • Nick vB

    Good point, each system has its own set of issues (just look at 1:22 in the STEM video).

    But a fusion of both would be pretty awesome for cases where optical tracking is not possible.

    • DWoodall

      1:22 was actually bad calibration on my part. If you look closely, the foot orientation isn’t following mine correctly. The IK solution solved for the joint rotation successfully, but unfortunately that rotation didn’t match my own foot orientation.
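
As a rough illustration of the distinction Psuedonymous draws above: with orientation-only sensors, every joint’s world position is inferred by chaining rotated bone offsets from an assumed root, so any error in the root’s world position shifts the whole body by the same amount. The sketch below uses hypothetical names and a simplified two-bone chain; it is not PrioVR’s or Sixense’s actual code, just the forward-kinematics chaining in miniature.

```python
import numpy as np

def fk_positions(root_pos, parent_indices, world_rotations, bone_offsets):
    """Chain joint positions from orientation-only sensor data.

    world_rotations[i]: 3x3 world-space rotation reported for bone i.
    bone_offsets[i]: bone i's rest-pose vector from its parent joint.
    parent_indices[i]: index into the growing position list (bones must
    be listed parent-before-child). Only root_pos anchors the chain to
    the world; no joint is measured absolutely.
    """
    positions = [np.asarray(root_pos, dtype=float)]
    for parent, rot, offset in zip(parent_indices, world_rotations, bone_offsets):
        positions.append(positions[parent] + rot @ np.asarray(offset, dtype=float))
    return np.array(positions)

# Identical sensor readings, two different guesses for where the root is
# (e.g. hips mid-jump): every joint ends up displaced by the same 0.3 m,
# because orientations alone carry no world-position information.
rots = [np.eye(3), np.eye(3)]
offsets = [[0.0, -0.45, 0.0], [0.0, -0.45, 0.0]]   # thigh, shin
grounded = fk_positions([0.0, 1.0, 0.0], [0, 1], rots, offsets)
mid_jump = fk_positions([0.0, 1.3, 0.0], [0, 1], rots, offsets)
print(mid_jump - grounded)   # each row is [0. 0.3 0.]
```

An absolute-position system like STEM measures endpoint positions directly, which is why the fusion proposed above, with STEM providing world anchors and IMUs filling in the joints between them, is an appealing combination.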