There’s no doubt that what Oculus and Samsung managed to achieve with the mobile Gear VR headset is impressive, but one aspect, positional tracking, lags far behind desktop VR headsets. Oculus CTO John Carmack has affirmed his team’s focus on adding this missing piece to an otherwise excellent mobile VR package.

Not that long ago, many were skeptical that, in the wake of the largely (and unsurprisingly) disappointing experiences offered by Google’s unashamedly low-rent Cardboard movement, Samsung could pull off a substantially better experience with its Samsung Galaxy Note 4-powered mobile VR headset, the Gear VR. However, the combination of Oculus’ VR engineering prowess and Samsung’s proven industrial design expertise produced a so-called ‘Innovator Edition’ Gear VR that really did deliver a compelling VR experience.

But, despite managing to pull off low persistence and 60 FPS mobile VR on their very first try, one aspect of the experience lagged behind the rapidly evolving desktop PC VR headsets: positional tracking.

See Also: Overview of Positional Tracking Technologies for Virtual Reality

Positional tracking follows the player’s head as it moves through 3D space, and is a major contributor to comfort and immersion. Tethered PC headsets have achieved high performance positional tracking using an ‘outside-in’ technique: placing stationary sensors away from the user to watch the movement of their head.
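To make the distinction concrete, here’s a minimal, hypothetical sketch (in Python, not anything from the Oculus mobile SDK) of what positional tracking adds at render time: a rotation-only headset keeps the virtual camera pinned in place, while a full 6-DOF pose also translates it, which is what produces parallax when you lean.

```python
# Illustrative only: the view matrix an app renders with, built from a 3-DOF
# pose (rotation only, as on today's Gear VR) versus a 6-DOF pose (rotation
# plus a tracked head position).
import numpy as np

def view_matrix(rotation: np.ndarray, position: np.ndarray) -> np.ndarray:
    """Build a 4x4 view matrix from a 3x3 head rotation and a head position."""
    view = np.eye(4)
    view[:3, :3] = rotation.T              # inverse of a rotation is its transpose
    view[:3, 3] = -rotation.T @ position   # translate the world opposite the head
    return view

head_rotation = np.eye(3)                     # placeholder orientation from the IMU
head_position = np.array([0.0, 0.05, -0.10])  # e.g. a 10 cm forward, 5 cm upward lean

rotational_only = view_matrix(head_rotation, np.zeros(3))    # no parallax when leaning
full_6dof       = view_matrix(head_rotation, head_position)  # leaning moves the camera
```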

However, this ‘outside-in’ style of positional tracking doesn’t make a lot of sense for mobile VR, whose entire selling point is the ability to take it and use it anywhere (and without wires); you wouldn’t want to carry a wired positional tracking sensor everywhere you go and be forced to set it up whenever you want to use the headset.


So the more appealing method of achieving positional tracking on a mobile VR headset flips to the ‘inside-out’ concept: use sensors on the phone to watch the movement of the environment (and thus understand the movement of the player’s head through space). But it turns out that achieving this with the precision necessary for virtual reality (‘sub-millimeter accuracy’ as you’ll often hear) is particularly challenging and still considered an unsolved problem.
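For a sense of what ‘watching the movement of the environment’ involves, below is a rough monocular visual odometry sketch using OpenCV. This is the standard textbook pipeline rather than anything Oculus has described, and with a single camera the translation is only recovered up to an unknown scale, which is one reason stereo cameras keep coming up.

```python
# Rough inside-out tracking sketch: estimate how the camera (and thus the head)
# moved between two consecutive frames by matching features and decomposing the
# essential matrix. Intrinsics below are assumed values for a 640x480 sensor.
import cv2
import numpy as np

K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def estimate_motion(prev_gray, curr_gray):
    """Return the rotation R and unit-scale translation t between two frames."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matches = matcher.match(des1, des2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```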

See Also: John Carmack’s Quest to Bring Minecraft to Virtual Reality

Although we know that Oculus CTO John Carmack had been working on the issue for some time, Oculus and Samsung hadn’t achieved a viable solution by the time they wanted to ship Gear VR.

So Gear VR went out the door with rotational tracking only. Developers have been able to design around this limitation for Gear VR apps quite effectively, but it’s widely agreed that positional tracking would add significantly to the mobile VR experience.

We haven’t heard anything about positional tracking since Carmack was mobbed at ‘Connect’, Oculus’ developer conference, last year, prompting one of his infamous corridor mini-lectures. Carmack expanded then on what may or may not be on the way for mobile virtual reality:

I spent a while on that and I’m confident I can do a good job with stereo cameras, so we’re left with the problem there of; if you put them in the headset then either you need to have processing in the headset and build a whole ‘other system there, or you need to push all the data through USB3 to the phone which is going to take up a lot of power … so it does not look good for making an inside-out tracking system that doesn’t use a whole lot of battery power.
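Some back-of-envelope arithmetic suggests why pushing raw stereo video to the phone is such an awkward proposition. The camera specs below are assumptions for illustration, not anything Oculus or Samsung has confirmed:

```python
# Raw data rate for an assumed pair of 640x480, 8-bit greyscale cameras running
# at the Gear VR's 60 Hz refresh rate.
width, height  = 640, 480
bits_per_pixel = 8
frame_rate     = 60
num_cameras    = 2

bits_per_second = width * height * bits_per_pixel * frame_rate * num_cameras
print(f"{bits_per_second / 1e6:.0f} Mbit/s")  # ~295 Mbit/s of raw video

# That alone approaches USB 2.0's theoretical 480 Mbit/s ceiling, and even over
# USB 3.0 the bus and CPU time spent moving and crunching it all costs battery,
# which is Carmack's point about power.
```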

Now, in response to a question put to him on Twitter about his pet ‘vrscript’ project, Carmack has responded that he’s been too busy with other positional tracking work to spend time on it.

As Carmack alludes to in the Connect statement above, a set of stereo cameras, allowing the calculation of depth in the real world, seems a bare minimum here. But even after you’ve convinced a mobile phone manufacturer to refactor their smartphone hardware to include two very fast (by mobile standards) imaging sensors, and perhaps a co-processor to assist with low-power computer vision calculations, you still have to stream and interpret that data inside an application already overburdened by the demands of mobile VR rendering. Hopefully the increase in mobile processing power with successive generations of phones will help here, and this is where Samsung’s Galaxy S7 could play a major role.
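For reference, the basic relationship a stereo pair exploits is that depth is inversely proportional to the disparity between the two views. The focal length and baseline below are illustrative values, not a real Gear VR camera spec:

```python
# Depth from stereo disparity: depth = focal_length * baseline / disparity.
focal_length_px = 500.0   # assumed focal length, in pixels
baseline_m      = 0.06    # assumed 6 cm spacing between the two cameras

def depth_from_disparity(disparity_px: float) -> float:
    """Depth in metres of a feature seen with the given pixel disparity."""
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(30.0))  # nearby feature: 1.0 m
print(depth_from_disparity(3.0))   # distant feature: 10 m -- where a single
                                   # pixel of matching error shifts the estimate
                                   # by metres rather than centimetres
```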


Carmack seemed frustrated at Connect at how few resources Oculus had devoted to solving this significant problem.

“It bugs me a little bit, we have like 30 computer vision experts at Oculus from the different companies we’ve acquired and none of them just wanna go solve this problem, they’re all interested in their esoteric kinda researchy things while this is a problem I want solved right now,” he said. “I wish someone had spent all last year on it.”

Carmack thinks it’s solvable, but he said at the time, “I still don’t have a ton of support within the company.” Perhaps now that the consumer Oculus Rift is locked down and the consumer Gear VR is already on the market, Carmack has managed to persuade some of those computer vision resources to work with him these past few months.

See Also: New Samsung ‘Unpacked’ Event Teaser Puts the Spotlight on Gear VR on Feb 21st

So when might we know more about positional tracking for Samsung’s Gear VR? The next major event for the mobile industry is Mobile World Congress and Samsung’s accompanying ‘Unpacked’ event. It seems likely that Samsung, perhaps anticipating a growing tide of interest in VR from its competitors, will tell us more about next generation mobile VR there.




  • BC Wilson

    Why get hung up on incorporating every component into the headset, especially if this is still an unsolved problem? A pair of external positional trackers could be included with the headset: just place them in the room and you’re ready to go, like the HTC Vive.

    • care package

      As the article states, that would still invisibly tie the headset down to a room, and make its ‘wireless’ aspect a bit pointless. It’s being marketed as a device you can take anywhere and use. I guess that is why they are getting ‘hung up’ on incorporating every component into the headset.

    • It’s only unsolved at the moment because someone like John Carmack didn’t get “hung up” on it until now; the only way to crack the problem is to work on it; it’s worth working on because once the solution is established it will be a game changer. Untethered VR is the future; the current Rift/Vive/PSVR are only stop-gap technologies.

  • Mateusz

    One battery operated lighthouse cube would do the trick ;p

    • Same issue as placing stereo cameras on the headset though (in addition to another external accessory). You need the sensor array on the headset, something to read all of them, then processing of that data and communication over USB to the device. Sure, a lot less bandwidth than cameras, but the tradeoff is the external cube. Personally, I wouldn’t mind it as I love Gear VR (I have DK1, DK2 and Vive as well), but that might be a hard sell: it’s either ANOTHER accessory, or it brings the price of the headset back up to include the cube, or it would have to be an alternate headset, and then there’s the manufacturing challenge.

      Bottom line is mobile really needs a camera based, inside-out solution as Carmack said.

  • Courtney A Jeff

    They should mimic positional tracking by using the head tracking. If you use the touchpad it would go from head tracking to positional, only in and out, not the freedom to move in any direction like the Oculus Rift, but it would be an added option, better than no option. You could lean in to look at something, or at the chair next to you, or another player: use head tracking to look, then touch the touchpad and move your head down to go closer and up to go back; release and go back to head tracking. I love people because Jesus loves me.

    • Malkmus

      That’s a simple software solution that would depend on the developer. The Rose & I mobile experience on Gear VR, for example, already has this feature. I used it once and never bothered with it again. It’s simply not as compelling as the real deal.
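      A minimal sketch of the software-only ‘lean’ described in this thread, assuming a hypothetical per-frame update function rather than code from any shipping Gear VR title:

```python
# Simulated lean: while the touchpad is held, ease the virtual camera forward
# along the gaze direction; release and it eases back. No real head translation
# is measured -- which is why it isn't as compelling as true positional tracking.
import numpy as np

def simulated_lean(gaze_forward: np.ndarray, touchpad_held: bool,
                   offset: float, dt: float,
                   lean_speed: float = 0.3, max_offset: float = 0.25):
    """Return (camera_offset_vector, updated_offset) for this frame."""
    if touchpad_held:
        offset = min(offset + lean_speed * dt, max_offset)  # lean in, up to 25 cm
    else:
        offset = max(offset - lean_speed * dt, 0.0)         # ease back out
    return gaze_forward * offset, offset
```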

  • Courtney A Jeff

    Use an optional camera sensor like the Oculus Rift’s, but have it connect over Bluetooth like Bluetooth headphones or speakers. It would not be wired and it would give actual positional tracking.

    • Paul Zirkle

      Bluetooth latency is huge compared to USB.

      According to Wikipedia: “USB2 high-speed (480 Mbit/s) uses transactions within each micro frame (125 µs) where using 1-byte interrupt packet results in a minimal response time of 940 ns. 4-byte interrupt packet results in 984 ns.” USB 3.0 is said to be ten times faster.

      Meanwhile, something like Bluetooth “Smart” would have a bandwidth of 0.27 Mbit/s with a minimum latency of 3 ms. While “Classic” gives up to 2.1 Mbit/s but with a latency of ~100ms.

      Keep in mind 984 ns (nanoseconds) = 0.000984 ms (milliseconds).
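      To put those figures against the motion-to-photon budget usually quoted for comfortable mobile VR (roughly 20 ms), a quick comparison:

```python
# How much of a ~20 ms motion-to-photon budget each transport's latency would
# consume, using the figures quoted above.
budget_ms = 20.0
transports = [("Bluetooth Classic", 100.0),
              ("Bluetooth Smart",     3.0),
              ("USB2 interrupt",      0.000984)]

for name, latency_ms in transports:
    print(f"{name}: {latency_ms} ms = {latency_ms / budget_ms:.1%} of the budget")
# Bluetooth Classic alone blows the budget five times over; Bluetooth Smart
# eats 15% of it before any tracking computation has even started.
```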

      • ZenInsight

        I think when the Galaxy S8 releases with USB-C and higher resolution, this may be possible.

  • Abide | VR

    Lean-in content requires extremely low latency to accurately render parallax. The stereo camera solution works but processor efficiencies will be required – also, the user’s environment has to be simple enough with obvious reference points to gauge accurate depth (3d depth sensing isn’t perfect). I have seen some tests using a single camera aimed at a 6″x6″ target (like AR matrix code) and the processor has to calculate distortion and angle changes on the target. It is a pretty good solution but still has some latency issues and it only works for 180 degree content.
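    A rough sketch of that single-camera, known-size-target approach using OpenCV’s pose estimation; the marker size, corner ordering and camera intrinsics are assumptions for illustration:

```python
# Recover the camera's pose relative to a printed square target of known size.
import cv2
import numpy as np

marker_size_m = 0.15  # a 6"x6" target is roughly 0.15 m across
object_points = np.array([[-0.5,  0.5, 0.0],   # marker corners in its own frame,
                          [ 0.5,  0.5, 0.0],   # ordered to match the detector
                          [ 0.5, -0.5, 0.0],
                          [-0.5, -0.5, 0.0]], dtype=np.float32) * marker_size_m

K = np.array([[500.0,   0.0, 320.0],           # assumed pinhole intrinsics
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)                      # assume negligible lens distortion

def camera_pose_from_marker(corner_pixels: np.ndarray):
    """corner_pixels: 4x2 array of the target's detected corners, same order."""
    ok, rvec, tvec = cv2.solvePnP(object_points, corner_pixels.astype(np.float32),
                                  K, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return -R.T @ tvec   # camera position expressed in the marker's frame
```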

  • WyrdestGeek

    I was hoping it might be possible to come up with a work-around, at least for Minecraft Gear VR, using a Kinect, but if Bluetooth latency is bad, then that might be a problem.

    Also, if I’m understanding what I was reading about the Kinect (especially v1 which is what I have) it doesn’t cover a very large volume. So even if it all worked, and if the latency was miraculously low, you still might not be able to walk but a few feet. :-/