Exclusive: Former Oculus VP of Engineering Demonstrates Long Range VR Tracking System

Vector Not Raster

Before the MTS demonstration, McCauley explained why he thinks this cameraless approach is important. Primarily it’s about range and cost.

McCauley said that while he was at Oculus and the company was working on their first camera-based system for the DK2, he quickly picked up on the range problem. The company had long talked about room-scale capability, and McCauley didn’t see the camera approach as scalable to those distances. He explained that the range of a camera-based approach is limited by the image sensor, which is raster-based.

The Rift DK2 camera has a resolution of 752×480. The headset of a user sitting just a few feet away occupies only a small portion of those pixels (as the headset takes up only a fraction of the total scene the sensor captures). As the user moves further away, the headset is represented by fewer and fewer pixels, which means the computer has much less data to work with, McCauley says.

You can think of it like this: if at 8 feet from the camera the headset only takes up 94×60 of the 752×480 sensor, it’s essentially like trying to track the headset with a 94×60 pixel camera up close with the headset filling its entire field of view. The further away you move the headset, the lower resolution your camera becomes (in a sense); there’s no effective means of zooming the camera in when the headset is at range so that it can use more of its image sensor.
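The scaling described above is easy to sketch: under a simple pinhole-camera model, apparent size falls off roughly as 1/distance. A quick back-of-envelope calculation (mine, not from the article) using the article’s 94-pixels-at-8-feet example:

```python
# Back-of-envelope sketch: how many sensor pixels a headset spans as it
# moves away from a fixed-FOV camera. Under a pinhole model, apparent
# size scales roughly as 1/distance, so pixel coverage falls off fast.

def pixels_on_target(ref_px, ref_dist_ft, dist_ft):
    """Approximate pixel span of the target at a given distance,
    given a reference measurement (apparent size ~ 1/distance)."""
    return ref_px * ref_dist_ft / dist_ft

# Using the article's example: the headset spans ~94 px wide at 8 ft.
for d in (8, 16, 32):
    w = pixels_on_target(94, 8, d)
    print(f"{d:2d} ft -> ~{w:.0f} px wide")
```

By 32 feet the headset spans only a couple dozen pixels of the 752×480 frame, which is the "lower-resolution camera" effect McCauley describes.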

Several tricks have been devised to counter this reduction in available pixels at range, like dynamically boosting the LED brightness to create a larger light source for the camera to spot, using the flashing of LEDs to glean additional information about the tracked object, and utilizing dynamic exposure of the camera. At a certain point, however, the resolution of the camera becomes the fundamental range-limiting factor.

The Oculus Rift DK2 and Positional Tracking Camera

The obvious fix then is to increase the resolution of the image sensor, but that racks up cost quickly and USB bandwidth becomes a bottleneck, McCauley says.

So he opted for a vector-based approach; one which would not be stuck with a set resolution, meaning that, in theory, it could track with equal precision at 5 feet or 50 feet. McCauley says that Kris Pister, the professor who pioneered the tracking algorithms in MTS, has used a similar system to track a drone in the air more than one mile away (though I would guess at that range we’re far removed from the realm of ‘lasers you can legally point at a person’).

Because MTS only has to stream the values of the angle of the laser, the solution is very low bandwidth compared to sending and processing a high resolution image at 60Hz or more, says McCauley.
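For a rough sense of the gap (the sample rates and sizes below are my assumptions, not figures from the article): streaming a pair of 16-bit mirror angles even at 1 kHz is orders of magnitude less data than raw frames from a DK2-class camera.

```python
# Rough bandwidth comparison (assumed numbers): streaming two 16-bit
# mirror angles per update vs. streaming raw grayscale camera frames
# for host-side blob tracking.

def laser_bps(update_hz, bytes_per_sample=4):   # two 16-bit angles = 4 bytes
    return update_hz * bytes_per_sample * 8

def camera_bps(w, h, fps, bytes_per_px=1):      # 8-bit grayscale frames
    return w * h * fps * bytes_per_px * 8

angles = laser_bps(1000)                 # hypothetical 1 kHz angle stream
frames = camera_bps(752, 480, 60)        # DK2-class sensor at 60 Hz
print(f"angle stream: {angles/1e3:.0f} kbps")   # tens of kbps
print(f"raw frames:   {frames/1e6:.1f} Mbps")   # well over 100 Mbps
```

Even before any compression, the angle stream is thousands of times smaller, which is why USB bandwidth stops being a bottleneck.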

Beyond Proof of Concept

The system isn’t perfect. There were plenty of times when I saw it lose tracking, and it isn’t integrated with any apps at this point, so I wasn’t able to actually look into a headset and see how precise the tracking was. But McCauley’s goal is only to demonstrate the concept, and it appears he’s well on the way. There’s still tons of room for optimization to get the system working in tip-top shape. Ultimately though, he doesn’t intend to be the purveyor of MTS.

“I’m gonna let someone else [commercialize it]. What I’m gonna do is put the system together to let someone else try to get this to work. I can get the components… the companies on board to provide the hardware to build the thing and get it debugging in some rudimentary form,” says McCauley. “But to get it actually integrated with an application? I don’t ever intend to do that. I’m just going to make this thing to prove it can be done. That’s the only interest I have.”


When I press him on this, he says he has no interest in spinning up a company for the technology. He seems happy to be taking a break after Oculus, and has plenty of work left to do on his Lola T70. But it doesn’t sound quite like he’s doing this as an academic endeavor, where he’ll simply publish his findings for just anyone. Instead, McCauley is considering looking within his network to find the right partners to make MTS a reality.

“I have access to all the foundries and stuff and the silicon which is high value. And I have enough friends that if I say ‘that thing is gonna go’ or ‘we’re gonna do this’, they’ll be on board,” he tells me. “If you’re at a small startup somewhere—even a medium sized startup—you’ll have a tough time getting people to [take the risk on you to get this built]. All the engineering that goes into making this is an enormous expense, but it’s already kind of done [referring to the foundries that craft the MEMS devices]… to get those kinds of resources is very hard to do for a small company but I’m pretty well connected.”


Ben is the world's most senior professional analyst solely dedicated to the XR industry, having founded Road to VR in 2011—a year before the Oculus Kickstarter sparked a resurgence that led to the modern XR landscape. He has authored more than 3,000 articles chronicling the evolution of the XR industry over more than a decade. With that unique perspective, Ben has been consistently recognized as one of the most influential voices in XR, giving keynotes and joining panel and podcast discussions at key industry events. He is a self-described "journalist and analyst, not evangelist."
  • psuedonymous

    This exact same technique was first implemented over a decade ago: http://www.k2.t.u-tokyo.ac.jp/fusion/LaserActiveTracking/

    • mellott124

      Wow, really impressed by this. There are several papers on it at the end of the linked page as well.

    • benz145

      From the article:

      “McCauley is clear to point out that much of what comprises MTS are off the shelf parts and algorithms pioneered for other purposes.”

      “Why nobody [applied this tech] for VR/MOCAP, I do not know. Perhaps nobody thought of it recently but it’s 16 years old and very established,” [McCauley said]

    • Rob B

      Not sure if this is the same group, but just found this with MEMs mirrors for 3d tracking in 2009:

      https://www.youtube.com/watch?v=oKYNBJmWuK4

  • Kevin White

    The obvious fix is to have the cameras inside the headset.

    • Hamish Pain

      Except then whatever the cameras on the headset are looking at will get smaller the further away you get, right? And then you have to get the data back to the computer through the headset cables

    • benz145

      We can put cameras on headsets. That’s not the thing preventing VR-capable inside-out tracking.

  • Bryan Ischo

    I fail to see why this is better than lighthouse. The lighthouse units themselves are probably just as cheap to build as this unit, since they have little more than synchronization logic internally and the sweeping laser. The sensors on the Vive headset add cost but they’re just small diodes as far as I am aware and probably not that expensive. Also they can be added to additional devices and track at the same time. How can this single laser track both my head and my hands? Can’t do it.

    I like that people are looking for better solutions but … just make lighthouse cheaper via volume production and refinement. There’s your tracking solution, now go focus on something else. Like, finding a way to eliminate the display cable. Or improving the field of view. Or improving the resolution. Those things are desperately needed. Better tracking than lighthouse, not so much.

    • Tyler Cook

      If this works like the Sony laser pico projector, then it could track multiple objects. The way it works is like a CRT screen: it paints an image by sweeping a laser using the MEMS in a rectangle pattern. If there were two points to track, it would get both as it swept.

      It is very similar to lighthouse, except it is sweeping in a more precisely controlled area.

      But, I do agree with you in that it doesn’t really need to be that precise. Lighthouse floods the area, which is exactly what you want anyway, since you can be anywhere inside the zone.

      The only big benefit I would see is if, for some reason, sweeping a laser in this fashion is somehow more efficient or allows for greater distance in tracking. Then, you can have bigger rooms.

      • Bryan Ischo

        Good points; however, you can have bigger rooms with more lighthouse units and some synchronization logic and/or software to stitch the swept areas together.

        Also, I’m pretty sure that you could track 500 sensors using lighthouse if you had enough sensors. You can just keep adding sensors indefinitely, they can all share the same tracking signal from the lighthouse units. I doubt that the solution described in this article could go beyond a few tracked points, even if it used the sony laser pico projector technique you described.

        Also there’s the fact that the technique described in this article requires a reflection of laser light back to the unit; there is room for lots of error there.

        • Jack McCauley

          I have two systems, a vector based one like the film, and a raster scanned one like the pico projector. Either one will work about as well as the other.

          • Bryan Ischo

            Thank you for replying. My goal is in no way to discourage your work, if you can make better/cheaper tracking, by all means do!

          • Eric B

            Hi Jack, are you still working on this tracking? I think it’s fantastic and it would lead to mass adoption of virtual reality.

      • Jack McCauley

        Exactly. I’ve got one of those to experiment with, an oscillating mirror.

      • Sven Viking

        As presented here, though, each complete sweep is taking seconds rather than milliseconds.

    • David Mulder

      The big advantage seems to me to be that the cost-per-tracked-item will be FAR lower. So if you would want to do full body tracking then this is a solution that might do the trick if it’s fast enough, whilst Lighthouse is simply impossible and Camera based tracking unlikely. The disadvantage of this system is that it has – like Oculus’ camera tracking – a far bigger occlusion problem, so you might end up having to put a fair number of these stations in your room for reliable tracking of more than a headset.

    • Rob B

        Have you considered placing one mirror on the tracked object (headset), and searching for a fixed lit up marker? That way multiple objects may simultaneously track themselves without interference. (ie, they all try to point to the same bright fixed object).

    • Jack McCauley

      A couple of things, the laser energy is spread over an ellipse and with mine it’s a point source so the range is much longer. What do you think four motors cost??

      • Ryan

        I’m actually very surprised Vive came in as cheap as it did, given the cost of motors and lasers. Micro fabricated mirrors should be cheaper, but do they have the tip/tilt range to track objects close to the mirror?

        • Jack H

          I haven’t seen MEMS raster mirrors with FoV better than about 14 deg.

      • Hamish Pain

        @BryanIscho:disqus Advantages as far as I can see them:
        Longer range (Lighthouse needs an LED flash), automatic visual acknowledgement of tracking (a blessing for devs like me, though that will probably change to an IR laser, right?), far higher update frequency (lighthouse needs three phases of light emission, so there can be some drift in the meantime+need for IMU).
        Disadvantages:
        Harder to track multiple regions (maybe? If the mirror’s steerability has a great response time and low overshoot [MEMS!], then it could probably be adapted to continuous scanning + tracking multiple items at reduced update frequency), still needs to be hooked up to a computer or the tracked object needs a light-sensor for information transmission, reliability of MEMS steering vs. motors (MEMS may be better due to low-mass and no friction issues), doesn’t track orientation currently or depth (though I can think of a few ways to do so with what I’m assuming to be the actual tech behind it. FPGA, yeah?)

        @jack_mccauley:disqus Love this kind of high-speed tracking. Have you considered using retroreflective materials for the marker? That could increase range even further by reducing laser-spread on impact. It could also allow a second laser-tracker unit to target each marker by reducing light-interference without having to alternately-pulse the lasers. I’m assuming the visual sensor, if not a camera, is an opto-diode under a lens for this, as spatially disparate laser light would still interfere with it.

        Pretty awesome laser steering, MEMS has really sped things up. I remember making a spinning mirror based laser tracer, what I wouldn’t have given for a steerable mirror!

        • benz145

          To answer a small part of this, some of the tracked objects in the video were retroreflective markers. Jack showed me a few different things being tracked.

    • Andrew Jakobs

      One reason this is better: it has no actual physical moving parts like the motors for spinning the lens on the lighthouse (which make noise and will wear out after a while; at this point we have no idea how long the lighthouse base stations will last).
      And the mirror is just like a DLP chip, so multiple mirrors can track multiple objects. BUT you’re right, in that regard the lighthouse system is much simpler, as you just add sensors on the stuff you want tracked and don’t need extra lasers for that.

    • Chip Weinberger

      Another limitation to lighthouse is the size of the objects it can track.

      Look at the size of the ‘halo’ on the Vive controllers. That’s about the limit for good tracking.

      This could possibly track smaller items. It’s also cheaper to make a tracked object: you could put these reflective markers on lots of objects without the need for electronics in them. Put a marker on whatever you want and have high-fidelity tracking on it.

  • VR Geek

    Sounds like there may have been a few egos battling over which way to do the tracking at Oculus. Not sure Oculus went the right way with their camera-based tracking. We will see in the final product, but it will need to be WAY better than the DK2, which would require IMO something much greater than the 752×480 camera they shipped with it. Based on this article and my own personal experience with the DK2, I suspect even 1920×1080 would not give enough resolution to track from 10 feet back. Maybe 4K plus would, but then there are larger bandwidth demands and, more painfully, massive computational efforts required by the host PC. My gut is telling me that once the dust settles in May or June, how big of an issue (or not) the Constellation tracking system is will become very apparent. I sense Oculus is in trouble here, especially after using the Vive. I hope not, as they really have done sooooo much to get VR off the ground.

    • DJ

      Sometime after the DK2 was released, Palmer Luckey explained that they went with the camera tracking system (now known as Constellation) after they’ve tried everything that was viable at the time. Apparently they didn’t try just a few, they tried dozens of technologies. Many tracking systems are better than Constellation, but they’re either too expensive, or locked by patents and unattainable, or have problems that only become apparent in application. The one technology that matched all of their criteria for price, availability, and capability better than any other that they tried was Constellation. It was a basic engineering decision.

      It’s easy to look at all of these impressive tech demos and say, “That’s the solution to everything!” But they rarely actually are solutions to your very specific application.

      I think that Constellation probably won’t survive more than a couple iterations of the Oculus Rift. Better systems are coming down the pipeline, you’d be ignorant to think that Oculus isn’t working with them to determine their viability. And new ideas are being invented that might even supplant those in a few years more. It’s a very volatile, and exciting, area of research at the moment.

      • VR Geek

        I very much agree with all your comments. That said, Valve is about to own Oculus pretty hard with Lighthouse. You cannot blame Oculus, as they surely tried their hardest. It is just interesting that even with tons of money and top talent they are coming to market with what looks like the inferior tracking system. I am sure someone over there is losing sleep. I hope Constellation is better than previous demos when the CV1 arrives, and if not, that they quickly address it for CV2.

        • Rob B

          From my understanding, the resolution of the camera isn’t as limiting as you describe. It’s still used in conjunction with the IMU, and only used to correct larger-scale drift.

          • VR Geek

            I can only speak to the DK2 camera myself which also uses the IMU, but it has never been that solid. We will have to wait to try the CV1. Lighthouse was incredible when I tried it extensively last year. Super solid.

          • Rob B

            Here’s the comments from OKreylos (doc-ok), that describes the process if you’re interested:

            http://doc-ok.org/?p=1405

          • VR Geek

            Interesting. Thanks

          • Guest A

            It’s true that the camera isn’t as limiting as it sounds, but of the two (camera and IMU) the camera is the limiting factor. And like you said, they have to be used in conjunction. The IMU is very fast but not accurate; its accuracy degrades over time (i.e., drift). How many seconds before it’s no longer tolerable as 1:1 depends on the specifics of the IMU. You need the camera to give you that accuracy, and the higher the resolution of the camera, the further the range at which it can provide it. However, this means if you want longer range you need higher resolution, and this doesn’t scale well.
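The drift-plus-correction behavior described in this sub-thread can be illustrated with a toy 1-D filter (my own sketch, not from the thread): integrating a biased IMU rate drifts without bound, while occasional absolute camera fixes pull the estimate back.

```python
# Toy 1-D illustration of complementary sensor fusion: a fast but
# biased rate sensor (IMU-like) integrated every millisecond, with a
# slow absolute position measurement (camera-like) blended in
# occasionally. All numbers are illustrative assumptions.

def fuse(rates, cam_fixes, dt=0.001, gain=0.5):
    """Integrate rates; blend in an absolute fix when one is available.
    cam_fixes maps sample index -> absolute position measurement."""
    pos = 0.0
    out = []
    for i, r in enumerate(rates):
        pos += r * dt                      # fast, drifting IMU path
        if i in cam_fixes:                 # slow, absolute camera path
            pos += gain * (cam_fixes[i] - pos)
        out.append(pos)
    return out

# True position is 0; the IMU reports a constant bias of 0.5 units/s.
drift_only = fuse([0.5] * 2000, {})
corrected  = fuse([0.5] * 2000, {i: 0.0 for i in range(0, 2000, 100)})
print(drift_only[-1], corrected[-1])   # corrected stays far smaller
```

With no fixes the estimate drifts to 1.0 unit after 2 seconds; with a fix every 100 ms the error stays bounded near 0.1, which is why a higher-resolution (longer-range) camera directly extends usable tracking volume.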

  • Foreign Devil

    I keep thinking about the military applications of a laser that can lock in perfectly on a rapidly moving target. . .

  • Po Tato

    Celebrities are gonna freak out when their fans track their every move with this kind of device

  • TrevorAGreen

    What I’m curious about is hybrid tracking systems that are domain specific. So if I have a lighthouse system I can track the Vive and the controllers, and any other lighthouse-enabled device. But what if I want to bring in something else that is tracked in the same space? Say I have a coffee mug, and I want to see that. That might be better tracked by a camera than by enabling it for lighthouse. Or maybe I buy an item that is specifically designed to match a certain tracking approach, so it has some sensor and the object included, but it still allows it to appear in the same 3D space. Maybe that is a lower resolution of tracking, maybe something even higher. Something specific that would be a cool tactile experience would be foam balls. The games that you could apply that to would be almost endless, but they probably wouldn’t be appropriate for a hard-body powered solution like lighthouse. They would need to be camera tracked, or some other system. We are getting the Vive and the controllers. Now we need to ramp up and start creating other tactile experiences.

  • Fadelis01

    The main issue I would see with this system is the fact that you need to track and differentiate multiple objects in the play space. I do see the need to move as much of the processing burden from the equipment in the players hands/head. That is the one benefit that truly remains with the CV1. I have high hopes for this technology, and this guy is on the right track IMHO.

    • Fadelis01

      Hey! Brainwave moment… what if the mirrors and laser detectors took advantage of polarization? This would allow your scanners to scan across a mirror with a custom “polarization pattern” and read orientation and even an object identification key of some sort. I’m imagining a mirror “strip” that would have multiple polarization regions. As the mirror was scanned, a serial string of information could be created in the reflection as the laser reflection passed across the mirror. This could verify that the mirror is the “right” reflective surface for that scanner as well.

      • Fadelis01

        A mirror “strip” could have multiple states at that point and not just binary. The polarization tint lines could look something like this, for example /|/, where the two leading and trailing “/” polarization angles would denote the beginning and end of a string, and the middle angles “|” would be some defined bit of info about the object the mirror is attached to. The scanner could trial multiple angles of sweep until it reads a full strip. At that point, it could infer object type, angle, position, and distance from the scanner.

        • Fadelis01

          You could even have one contiguous mirror “halo” with a polarization “Descriptor phrase” going all the way around. At that point, the scanner simply finds the reflection, then finds the angle where it can “read” and the phrase that is visible could also state the rotation markers for the object… The mirror could utilize the same “microsphere” technology used in reflective safety paint and the Polarization could be a simple plastic polarizing strip that would just “glue” on top of that painted finish. This would be very cheap and robust.

          • Fadelis01

            The paint idea brings up other thoughts… what if there were phosphorescent “dots” that could be charged with a low-level UV strobe? An optical camera could then broadly view the space and direct a more precise laser to the right points of interest. This would still require no power or logic from the tracked objects, but would further protect the users from getting blinded by laser scanning.

        • Fadelis01

          AND if the polarization phrase was done via a transparent LCD overlay, then button/trigger info could be relayed via the same positional scan back to the room scanners…

  • MosBen

    What is the value in a greater range? With the Vive at room scale everyone’s concerned about not having enough space for VR. What does a greater range allow us to do that I’m missing?

  • OgreTactics

    How precise is it? I mean: how small can the reflective marker be and still be tracked by this system?

    Because if precise/small, then you could very well have a 3 laser array projected by the mirror onto a surface with 3 tiny markers side-by-side for xyz movement/orientation tracking. But of course you ultimately would have to use different marker areas and MTS to track the whole 360° movements of an object.

    Anyway, I’m convinced this can be WAY smaller and cheaper than lighthouse. In fact I wonder just how small it can be made (perhaps even small enough to be integrated into the headset), which lidars will never be.

  • Eric B

    I wish this tracking system was available. Any chance Jack is still working on this?
