I wrote a few months back that augmented reality needed to prove itself. While I still think this is the case, I’m happy to report that top minds are working on just that. Some genius folks from MIT have created ‘T(ether)’, an amazing system which allows a user to interact with an augmented reality world by reaching out and manipulating it with their hands.

This sort of thing is best explained with video. Thankfully, the MIT folks have put together a great video showing a bit about how T(ether) works:

[vimeo 42173010 w=400 h=300]

Here’s the technical description straight from the project page:

T(ether) is a novel spatially aware display that supports intuitive interaction with volumetric data. The display acts as a window affording users a perspective view of three-dimensional data through tracking of head position and orientation. T(ether) creates a 1:1 mapping between real and virtual coordinate space allowing immersive exploration of the joint domain. Our system creates a shared workspace in which co-located or remote users can collaborate in both the real and virtual worlds. The system allows input through capacitive touch on the display and a motion-tracked glove. When placed behind the display, the user’s hand extends into the virtual world, enabling the user to interact with objects directly.
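If you’re curious how that ‘window’ metaphor works under the hood, here’s a rough sketch of the idea (my own illustration, not the team’s code): because the virtual scene shares the room’s coordinate system, the renderer only needs to build its view from the tracked head position looking through the tracked tablet. The pose values and the simple look-at camera below are placeholder assumptions; a real implementation would also use an off-axis projection matched to the tablet’s physical edges so objects line up exactly with where they’d appear in the room.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    # Standard right-handed look-at view matrix.
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3] = right
    view[1, :3] = true_up
    view[2, :3] = -forward
    view[:3, 3] = -view[:3, :3] @ eye
    return view

# Hypothetical tracked poses in room coordinates (meters) -- in T(ether) these
# would come from the infrared motion-capture system. Because real and virtual
# space share a 1:1 mapping, no extra calibration transform is applied here.
head_position = np.array([0.0, 1.6, 0.5])    # user's tracked head
tablet_center = np.array([0.0, 1.4, 0.2])    # tracked center of the display

# Treat the tablet as a window: render the scene from the head's position,
# looking through the tablet, so virtual objects stay fixed in the room as
# the user moves the display or their head.
view_matrix = look_at(head_position, tablet_center)
print(view_matrix)
```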

As you can see, the system works very well, even though it’s just a prototype. The hand-tracking is extremely responsive, and with it the user can easily create, manipulate, and even animate objects in a virtual world.

Unfortunately, T(ether) requires an expensive and immobile infrared tracking system (the same kind used for motion capture in video games and movies) in order to work; this isn’t a portable or affordable setup by any stretch of the imagination.

Still, what they have here is an impressive proof-of-concept which makes it very easy to imagine practical applications of such technology.

I’ve emailed the folks responsible for T(ether) about using an HMD instead of an iPad as the display. To me it would seem much more natural to simply look around through a head-tracked HMD than to hold up a heavy iPad as a window into the augmented-reality world. The touch interface currently lives on the iPad, but with a little elbow grease it could be projected within the augmented world itself, eliminating the need for the tablet altogether. I’ll update this article if I get a response.

As soon as I started watching them create and place cubes, my mind went to Minecraft. Pair T(ether) with an omnidirectional treadmill, an HMD, and Minecraft, and you’d be able to stroll around a virtual world, physically hold a virtual cube, and place it wherever you like; this would be far more immersive than simply right-clicking to place a block. I’ve built many a structure in Minecraft, but to place each cube with my bare hands would be truly incredible and a big step in the direction of immersive VR gaming.

Gotta give credit where credit is due. The folks behind T(ether) are as follows: Matthew Blackshaw (@mblackshaw), Dávid Lakatos (@dogichow), Hiroshi Ishii, and Ken Perlin.
