Meta Quest 3 brings with it new ‘Touch Plus’ controllers that do away with the tracking ring that’s been part of the company’s 6DOF consumer VR controllers ever since the original Rift. But that’s not the only change.

Editor’s Note: for clarity in this article (and the comments), here are names for all the different 6DOF VR controllers the company has shipped over the years.

  • Rift CV1 controller: Touch v1
  • Rift S controller: Touch v2
  • Quest 1 controller: Touch v2
  • Quest 2 controller: Touch v3
  • Quest Pro controller: Touch Pro
  • Quest 3 controller: Touch Plus

6DOF consumer VR controllers from Meta have always had a ‘tracking ring’ as part of their design. The ring houses an array of infrared LEDs that cameras can detect, giving the system the ability to track the controllers in 3D space.

Image courtesy Meta

Quest 3 will be the company’s first 6DOF consumer headset to ship with controllers that lack a tracking ring; the company is calling the new controllers ‘Touch Plus’.

Tracking Coverage

In a session at Meta Connect 2023, the company explained that it has moved the IR LEDs from the tracking ring into the faceplate of the controller, while also adding a single IR LED at the bottom of the handle. This means the system has fewer consistently visible markers for tracking, but Meta believes its improved tracking algorithms are up to the challenge of tracking Touch Plus as well as it tracks Quest 2’s controllers.

Note that Touch Plus is different from the company’s Touch Pro controllers, which also lack a tracking ring but instead use on-board cameras to track their own position in space. Meta confirmed that Touch Pro controllers are compatible with Quest 3, just as they are with Quest 2.

Meta was careful to point out that the change in camera placement on Quest 3 means the controller tracking volume will be notably different from Quest 2’s.

The company said Quest 3 has about the same total tracking volume, but that it has strategically changed the volume’s shape.

Notably, Quest 3’s cameras don’t cover the area above the user’s head nearly as well as Quest 2’s. The tradeoff is that Quest 3 has more tracking coverage around the user’s torso (especially behind them) and around the shoulders:

This graphic shows unique areas of tracking coverage that are present on one headset but not the other

Meta believes this is a worthwhile tradeoff because players don’t often hold their hands above their head for long periods of time, and because the headset can effectively estimate the position of the controllers when outside of the tracking area for short periods.
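
Meta hasn’t detailed how that estimation works, but the standard technique is IMU dead reckoning: starting from the last camera-confirmed pose, the headset integrates the controller’s accelerometer and gyroscope readings to project the pose forward. The sketch below is a minimal illustration of why this only holds up “for short periods” – the sensor rate and bias figure here are illustrative assumptions, not Meta’s numbers:

    import numpy as np

    def dead_reckon(pos, vel, accel_samples, dt):
        """Project a controller position forward from the last camera fix
        by integrating world-frame accelerometer samples twice."""
        for a in accel_samples:
            vel = vel + a * dt    # first integration: velocity
            pos = pos + vel * dt  # second integration: position
        return pos, vel

    # Last camera-confirmed state: controller at rest, 20 cm above the head.
    pos, vel = np.array([0.0, 0.2, 0.0]), np.zeros(3)

    # Controller held perfectly still, but with a small constant accelerometer
    # bias (0.02 m/s^2 is a plausible magnitude for a consumer MEMS part).
    bias = np.array([0.02, 0.0, 0.0])
    rate = 500                 # assumed 500 Hz IMU sample rate
    dt = 1 / rate

    for seconds in (0.5, 2, 10):
        p, _ = dead_reckon(pos, vel, [bias] * int(rate * seconds), dt)
        print(f"{seconds:>4}s without camera fix -> {100 * p[0]:.1f} cm of error")

Because the bias is integrated twice, the error grows roughly as ½·bias·t²: a fraction of a centimeter for a half-second excursion, a few centimeters after two seconds, around a meter after ten. That squares with Meta’s “short periods” caveat.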

Haptics

Photo by Road to VR

As for haptic feedback, the company said that “haptics on the Touch Plus controller are certainly improved, but not quite to the level of Touch Pro,” and further explained that Touch Plus has a single haptic motor (a voice coil motor), whereas Touch Pro controllers have additional haptic motors in both the trigger and thumbstick.

The company also reminded developers about its Meta Haptics Studio tool, which aims to make it easy to develop haptic effects that work across all of the company’s controllers, rather than designing effects for each controller’s haptic hardware individually.
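
Meta hasn’t published how the tool works internally, but the usual pattern behind ‘author once, play on any controller’ haptics is device abstraction: describe an effect as an amplitude envelope over time, then resample that envelope to whatever update rate each controller’s actuators support. A hypothetical sketch of the idea (this is not the Haptics Studio API, and the actuator rates are made up):

    def resample_envelope(keyframes, device_rate_hz, duration_s):
        """Convert a device-independent amplitude envelope, given as
        (time_s, amplitude 0..1) keyframes, into per-tick amplitudes
        for one actuator's update rate (linear interpolation)."""
        out = []
        for i in range(int(device_rate_hz * duration_s)):
            t = i / device_rate_hz
            prev = max((k for k in keyframes if k[0] <= t), key=lambda k: k[0])
            nxt = min((k for k in keyframes if k[0] >= t), key=lambda k: k[0])
            if nxt[0] == prev[0]:
                out.append(prev[1])
            else:
                f = (t - prev[0]) / (nxt[0] - prev[0])
                out.append(prev[1] + f * (nxt[1] - prev[1]))
        return out

    # One authored 'impact' effect: sharp attack, 150 ms decay...
    impact = [(0.0, 0.0), (0.01, 1.0), (0.15, 0.0)]

    # ...rendered for two hypothetical actuators with different update rates.
    body_vcm = resample_envelope(impact, 320, 0.15)       # single VCM, e.g. Touch Plus
    trigger_motor = resample_envelope(impact, 160, 0.15)  # extra motor, e.g. Touch Pro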

Trigger Force

Touch Plus also brings “one more little secret” that no other Touch controller has had to date: a two-stage index trigger.

Meta explained that once a user fully pulls the trigger, any additional force can be read as a separate value—essentially a measure of how hard the trigger is being squeezed after being fully depressed.
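
In developer-facing terms (Meta didn’t show code for this), the trigger now effectively reports two values: the familiar analog travel from 0 to 1, and a separate squeeze-force reading that only becomes meaningful at full pull. A hypothetical sketch of how a game might fold the two stages into one input – all names here are illustrative, not Meta’s SDK:

    def read_trigger(travel, force):
        """Interpret one sample from a two-stage trigger.

        travel: 0.0 (released) .. 1.0 (fully depressed) -- normal analog pull
        force:  0.0 .. 1.0 -- extra squeeze pressure past the full-pull stop
        Returns a (stage, value) pair for game logic.
        """
        FULLY_DEPRESSED = 0.99          # small tolerance for sensor noise
        if travel < FULLY_DEPRESSED:
            return ("pull", travel)     # stage 1: ordinary trigger travel
        return ("squeeze", force)       # stage 2: how hard it's being squeezed

    print(read_trigger(0.4, 0.0))   # ('pull', 0.4)  -- e.g. drawing a slingshot
    print(read_trigger(1.0, 0.7))   # ('squeeze', 0.7) -- straining against the stop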

What’s Missing From Touch Pro

Meta also said that Touch Plus won’t include some of the more niche features of Touch Pro, namely the ‘pinch’ sensor on the thumbpad, and the pressure-sensitive stylus nub that can be attached to the bottom and used to ‘draw’ on real surfaces.

  • XRC

    Interested to know how many Carbon Design team members are left working at Meta on their controller development?

  • Andrew Jakobs

    Hmm, why not have one extra camera on top to track the controller even better above the head? Ah well, let’s first see what the actual reviews say about it.

    • Andrey

      This. They ditched 2 eye-tracking cameras (and the face-tracking camera that is not really needed) – 3 in total – from Quest Pro and added two RGB cameras (1 was there on QPro), so there was at least one “free” slot (I believe even the Gen 1 XR2 SoC supports even more cameras (up to 12?), but whatever). I can barely understand and accept that eye-tracking wasn’t possible in a $500 headset, but actually making tracking of the controllers worse in some scenarios, while presumably making it better in others – that’s not what you call “evolution”. If you can’t make it better, just let it stay the same. Would one or two IR cameras (which are pretty cheap if I am not mistaken) in the upper part of the headset make it significantly harder to manufacture, or make the final price per headset $100 higher? Pretty sure it’s not like that.
      And their words about those two areas that are now tracked worse (above the head and under the chin) – “almost no one uses them for long!” – well, I use them! In swordfighting games I like holding either a sword or spear above the shield right by my head (imagine a Roman legionary) and attacking the upper parts of the body like that (especially the head, if the body is protected by the shield). Also, any throwing motions like the mentioned spears or even grenades (when you “cook” one) may play a very bad joke on you if tracking is lost at that exact moment. Not to mention that one of the worst things about the original Q2 controllers (the rings) was stopping us from using the bow like it should be used (the hand that pulls the string and holds the arrow should be as close to your cheek as possible for you to aim down the arrow for the best accuracy), and now that the rings are finally gone… it won’t be possible because the controllers will be out of the tracking zone! Just great.
      I pre-ordered Quest 3 yesterday, and after all those topics on Reddit about tracking I almost ordered QPro controllers too, but I still decided to give those new Touch Plus ones a chance, though I am already morally prepared for them to be pretty bad in the scenarios named above, and that I will need to spend another $300 just to forget about all those problems once and for all.
      And all of that could have been easily solved if Meta just offered an additional “Q3 with Touch Pro” bundle…

      • ViRGiN

        idk man, i feel like the downgrade won’t be real (same way lighthouse tracking isn’t really better, nobody sticks their hands up their butt – well, maybe except vrchat weebs).
        still don’t agree with the necessity of eye tracking – the benefits aren’t real at this point, and games won’t add support for it anyway.
        i’ve played shooters since quest 1, and never ever have i felt like i was compensating for ‘bad’ tracking.

        • kakek

          Climbing mechanics will probably suffer.

          • ViRGiN

            Probably not, when you climb you look up.

          • kakek

            You might be right. I can remember specific situations where you don’t, like monkey bar traversing. But they are not that common. Might be worth the tradeoff.

        • polysix

          LMAO, if your Q2 or future Q3 had eye tracking you’d be all about it. You’re such a klutz.

          Quest Pro > Q3 in all ways that matter for GOOD VR.

          • ViRGiN

            Don’t be so trashy. I’ve had quest pro and didn’t care about it. You’re just projecting.

          • Charles U. Farley

            YOU calling people “trashy” is just fucking rich.

        • Nevets

          That’s a silly, circular argument, namely: ‘Introducing a new technology is low priority because we aren’t seeing the benefits yet. But we won’t see the benefits if it isn’t introduced.’

          For cost reasons, perhaps there is justification to delay.

        • Nomad

          That sounds like sour grapes to me. Eye tracking, as well as better hand tracking, can make a big difference in social VR, yet you dismiss it despite the fact that none other than Carmack himself admitted that they raised the price of the Quest 2 specifically because most people were playing VRChat and Rec Room instead of premium Quest store games.

          I mean, there’s nothing wrong if you see the Quest as a single player video game console, that’s a valid use, but we’re talking about whether this is a good change for most VR users here, and social VR is a driver of VR adoption. And in apps like VRchat, eye tracking makes a huge difference in how other people perceive you. You never see it yourself, just as you never see your own eyeballs in real life (unless you’re looking in a mirror of course), but it can make a big difference in conveying expressions to other people.

          Hand tracking does too, frankly. You can’t see it yourself because you don’t see your hands when they’re out of the field of view of the headset, but other people see when they glitch out if you’re not keeping them in front of your face. And headset cameras can have a problem tracking your hands when they’re at rest by your sides, your arms can occlude the line of sight. Now the controllers have smaller tracking footprints, so they can be occluded more easily. That may work out poorly.

          • ViRGiN

            It would be nice to have – but I don’t think the demand is really there yet, nor is it optimized enough to run on mobile. A big concern on the QPro was that, with all these extra features, the battery drains really fast. People already laugh at cartoony characters – they would still laugh at them with eye tracking.

          • Nomad

            I forgot to mention the biggest deal about eye tracking. Quest needs it for foveated rendering. There’s no way they’re rendering 4k VR content on a cell phone processor, they’re rendering in lower resolution despite the increased panel resolution. Foveated rendering would let them increase the perceived visual fidelity while working within the limited amount of processing power that self contained headsets have.

            I was always baffled by those battery life costs of the eye and face tracking features. I’m not really sure how it takes that much power compared to the cost of real time graphics rendering. My only guess is that it was primarily driven by the CPU power needed to analyze the camera feed from the face. I suspect that eye tracking only should be less power hungry.

          • ViRGiN

            Again, it would be nice to have, but the gains aren’t really there yet. It will handle 4K video just fine; I don’t think anyone expects native 4K in-game rendering from standalone just yet.
            The problem is developers still aren’t adding support for it, despite it being seemingly easy. It needs to be at the driver level, and it’s not exactly a one-size-fits-all solution. Just look at the lack of even simple DLSS integration in PCVR games. I added it to my own project in under half an hour.

            Pimax isn’t the best example, but it’s the example we have. They added eye tracking and driver-level foveated rendering, but the gains are minimal, and considering the much higher price, jank, and added weight, that money is better spent on better PC hardware than on squeezing the last drops out of an underpowered PC.

            If eye tracking is such a necessity, any company can enter the market and simply deliver a better product. But eye tracking is absent from developers’ minds; just look at the lack of support for Valve’s Knuckles in pretty much every game.

          • Tanix Tx3

            Foveated rendering can only pay off if there is a bigger FOV.
            At less than 100 degrees, where are you going to save substantial processing power by narrowing the sharp area to 30 degrees?

          • shadow9d9

            VRChat removed face and eye tracking. You need to fiddle with a mod to get it to work now.

      • Christian Schildwaechter

        The XR2 Gen 1 can connect seven cameras directly. AFAIK the Quest Pro uses a special FPGA to merge the images from the separate lowres eye (and face?) tracking cameras into a single video stream, so they only occupy one camera slot. The Quest 3 will need another camera slot for the depth sensor.

        I don’t agree that adding extra cameras for tracking above the head would significantly improve tracking (as argued above), or that keeping the cameras when they aren’t necessarily needed would be good design, as they’d not only add cost, but also weight, and require extra data processing. We’ll have to wait for tests and reviews to really determine how well Quest 3’s tracking above the head works based on IMUs and clever guessing, and ordering Touch Pro controllers based on just an assumption seems like a pretty expensive idea.

        I fully agree that they should have integrated eye tracking, at least for UI and interaction in games, even if ETFR would still be too taxing for the XR2 Gen 2 to make sense. And if they managed to combine several tracking cameras on the Quest Pro to work around the camera slot limit, that would have worked on the Quest 3 too, if it was even necessary. I don’t know for sure whether the XR2 Gen 2 offers more than seven camera connections, but I vaguely remember that it does.

    • Christian Schildwaechter

      … because players don’t often hold their hands above their head for long periods of time, and because the headset can effectively estimate the position of the controllers when outside of the tracking area for short periods.

      • Cl

        I feel like they released it this way to be “good enough” for most people. Tracking on these controllers is probably going to be worse than Quest 2.

        If you want to play Beat Saber or something, they expect you to get the Pro controllers. They didn’t put in more cameras because that matches the “good enough” mentality, since people have the option to spend more for better tracking.

        • ViRGiN

          That’s just silly talk at this moment.
          Meta is doing stuff as perfectly as is reasonably possible.
          Quest 1 controllers were battle-tested by Carmack himself to not have ANY drawbacks compared to other forms of tracking.

          • Cl

            Making the tracking good enough for most people and having the option for better tracking is reasonable though. Nothing silly about it. Of course you just attack anything that can be perceived as bad for meta. Now that’s silly.

            On the real though, I occasionally lose tracking while playing Beat Saber on Quest 2, but don’t with Lighthouse. If I put my hands behind my back with Quest 2 I’ll lose tracking, but not with Lighthouse. If I put the controllers in front of each other I lose tracking, but not with Lighthouse. Oh look, some drawbacks.

          • ViRGiN

            Of course you had to frame one of the most popular and highest-grossing VR games as Meta not caring about good enough tracking for it. It’s Beat Saber, dude.

            I’m not interested in the issues you are experiencing, especially when you counter with “but don’t with lighthouse”. If I were to say I constantly had issues with lighthouse tracking, you just wouldn’t believe it. Besides, imagine playing a mobile game with “high end” lighthouse tracking.

          • Cl

            You said no drawbacks and I named a few. Beat Saber is the tracking benchmark.

          • ViRGiN

            You edited your comment.
            Hands behind your back? Yeah, are you scratching your butt? Or playing, tens of hours a month, the only type of game where it’s actually usable – pool games that haven’t seen any updates in years?
            People put their controllers in front of each other all the time – no issues here man.

          • Cl

            Meh, it’s all good with me. Things have pros and cons. Nothings perfect.

            Now that the grand meta wizard is here, I have a question for you.

            Would you rather have Quest Pro, or Quest 3 with Quest Pro controllers, if they cost the same?

          • ViRGiN

            I have no experience with Quest 3 controllers.
            I did use Quest Pro controllers and I liked the build quality; it’s a notable step up, just like the Pro controllers for Xbox or whatever they are called.

          • Nomad

            You’ve never played a game that involves an inventory system where you can put things on your back, have you?

          • ViRGiN

            I did. And I played them just fine using Quest. It doesn’t magically stop tracking the moment the cameras can’t see the controllers.

          • Charles U. Farley

            SHHIIIIILLLLLLLLLLLLLLLLLLL

          • ViRGiN

            VALVE CUCKHARD

        • Fraz Leonheart

          It’s been said by multiple sources, including the Beat Saber devs themselves, that Quest 3 has passed the Expert+ test.

          • Игорь

            While the tracking is generally good (I’d say better than Quest 2 thanks to the repositioning of the tracking volume), one issue that keeps popping up for me is its inability to track controllers when you bring them from the back/bottom into view with the faceplate facing away (like when looking at the palm of your hand rather than keeping the controller vertical). Played about an hour of Alyx just now and it happened maybe 4 times. Not gamebreaking, but definitely not as immersive compared to something like Knuckles, where you don’t think about the controllers at all.
            In Beat Saber, it tends to glitch out sometimes when ducking under an obstacle, especially when I tilt my head down. I would miss my swings maybe 40% of the time.
            But overall the tracking is way better than I thought it would be when I first saw the new design; I’m just not sure why they did it. Sure, having smaller controllers is nice, but it’s not something you notice during gameplay.

      • Nomad

        Nonsense. The IMUs are only about keeping the controllers updated between refreshes of the cameras, to make the motion look smoother. They cannot provide useful data for even a minute. You can tell Quest users in VRChat because they largely can’t put their hands behind their head or in various other positions.

        The users themselves can’t tell because this happens when their hands are outside of the field of view of the headset display.

        • Christian Schildwaechter

          A decent IMU can easily report a very stable rotation/position for ten minutes and more without having to be corrected by an absolute tracking reference, as long as you don’t make a lot of backwards and forwards movements. Drift is usually the result of many small errors, not the sensors reporting wrong data. The 3DoF Oculus Go released in 2018 relied completely on IMU tracking with no way to use any external reference to correct for drift, and one of its main usages was virtual cinema. That would have been impossible if the tracking were stable for only one minute, or the 16ms between two camera frames at 60Hz. The image would have constantly drifted away.

          Any pure IMU-based tracking suffers from drift, and without a fixed reference, the solution is to regularly “reset” it. On the Go this was necessary for the 3DoF controller every few minutes when it was moved a lot in games, and similar to resetting the view direction on a Quest, pressing a button on the controller would realign it with the headset. Other systems like the IMU-based SlimeVR full body trackers have the user stand in a standard pose to undo any drift, which can be necessary several times per hour, again depending on how much you move. A lot of people build their own SlimeVR trackers, as hardware and software are open source with some choices regarding the components, and how often they need to reset also depends on whether they pick the IMUs for USD 2 or USD 20.

          IMU drift on the Quest will be corrected whenever the camera tracking indicates that the position is off, but drift is not in any way as bad as you seem to believe. There were some pretty bad cases with older phones used with Google Cardboard, where the world could make a 360° turn within one minute, but these were uncalibrated sensors intended to detect coarse movement with their values collected unfiltered. Today’s IMUs are much more stable, usually calibrated and fuse the data from several sensors to significantly improve signal quality.

          Whether a system trusts the IMUs to deliver accurate data is another question. A Quest will basically complain about the controllers no longer being visible to the cameras to get you to move them back, forcing the user to stay within the fully tracked zone and not risking any errors that might accumulate if users were allowed to move the controllers behind their back or head.
          In contrast, SlimeVR trackers or a Go will just take the IMU data and wait until the users themselves notice that the reported positions are somewhat off and manually reset the IMUs. And you could watch a full movie on the Go without ever having to reset your viewing direction.

          • Nomad

            You’re working very hard to conflate two things. You keep saying “IMU”, but you don’t seem to understand what you’re talking about. The Go only had rotation tracking. And yes, solid state gyros are pretty good and able to hold angle tracking for a long time. But that’s not what we’re talking about here. We’re talking about tracking the position of controllers in 3d space, and that’s a whole different ball of wax. Then you have to use accelerometers to measure their translation (movement through space instead of rotation), and that’s far less accurate. That’s what only works for a fraction of a second before it drifts too far.

            You mention Slime trackers, but you don’t seem to understand them. They are gyro-only as well; they rely on software to make certain assumptions about the position of body parts based on that rotation data, and that’s why they’re less accurate, and that’s why they’re not used for hands. They only work at all because of that guesswork based on the limited freedom of movement of the parts they track; they wouldn’t work for hands in the first place. They do not, and I cannot stress this enough, use accelerometers to measure their motion through 3D space. You cannot use them to argue that hand controllers can track their position over a span of minutes without an external frame of reference.

            What you’re talking about doing with the hand controllers requires using accelerometers to measure the controllers’ translation. Gyro-only simply does not work to track hand position. And accelerometers are not accurate enough to work for more than a fraction of a second before drifting, badly.

            You can see this in VR if you actually look for it. When the system loses tracking of the hand controllers, you can see it losing its positional accuracy quickly, and then the system locks their position in space until it reacquires them.

          • Christian Schildwaechter

            TL;DR: IMUs are more than just gyros and accelerometers; SlimeVR calculates 3D position from skeletal data and rotation data because our limbs are of a fixed length, which makes this pretty precise – not because accelerometer data is only usable for fractions of a second; Meta very likely tested whether tracking without a top camera would still work well enough; and improved IMU tracking on Quest 3 may actually reduce some Quest 2 artifacts. The rest is very long and only for those interested in a lot of technical details about how IMUs and tracking based on them work.

            IMU stands for Inertial Measurement Unit; the main sensors measure acceleration along or around different axes. Simple ones may only be 3DoF with a gyroscope, reporting rotation around XYZ; most of those used in HMDs/controllers are at least 6DoF and also add acceleration along XYZ. So-called 9DoF IMUs usually add a 3DoF digital compass, and 10DoF a 1DoF gravity sensor pointing down or a pressure-based 1DoF altitude sensor, which helps when used in drones.

            Data from these sensors is often combined/fused, because using just one can introduce errors, as there is no reference to check the values against. But whenever you rotate the IMU, it will also be accelerated linearly in one direction if the IMU isn’t sitting exactly at the center of the rotation. So a controller takes both the rotation and translation data and combines/compares them to improve data quality. The magnetometer in 9DoF IMUs could in theory deliver absolute rotational data, and therefore completely correct for gyroscope drift, but in reality the magnetic field data is extremely jittery, and getting close to a metal object seriously distorts it, so any detected field line change is cross-checked with the gyro and accelerometers. If those don’t show movement, the data is thrown out.

            This, plus powerful micro-controllers/digital motion processors (DMPs) capable of doing a lot of signal processing, is how IMUs today achieve much higher precision and repeatability, which is also why all the IMUs recommended for SlimeVR are 6DoF or 9DoF, not just 3DoF gyros. SlimeVR uses the accelerometers to correct the data from the gyroscopes, so they absolutely do use accelerometers to measure motion through 3D space, though only for fusion, usually letting the IMU’s DMP do that. They just don’t use them to later determine the tracker’s position in 3D space, because SlimeVR uses a skeletal model to determine where neighboring trackers can be. For someone with a 40cm long lower leg, the ankle will always be 40cm from the knee joint; its position can be determined with 3DoF data alone, so in this situation it makes no sense to try to calculate the distance from the movement measured by the accelerometers, as the result would always be worse. Arm/elbow tracking with SlimeVR is possible, using the same skeletal calculation as with the legs, and it supports up to 16 tracked positions, so you could also track the hands. But hand tracking with SlimeVR would be pretty pointless if you are already holding a controller with absolute 6DoF tracking in your hands, so nobody does that. One could track the wrists, but their position can just be derived from elbow position plus controller position and rotation.

            And all that without ever having to rely on positional data from the trackers, thanks to all our joints. But that doesn’t mean you can’t try; SlimeVR just doesn’t need to. The Oculus Go did not only have rotation tracking: you could lean to the side and forward and the head model would follow. Anatomy and balance limit how far we can lean, so guessing the 3D position from relative rotation and translation sensor data is again pretty safe.

            You could query the SDK for the acceleration of the HMD and controller with OVRDisplay.acceleration and OVRInput.GetLocalControllerAcceleration. So the Go controller could not only be rotated in place, but moved through 3D space like e.g. a golf club, with some software trying to “guess” the movement from a combination of rotation, translation, and assumptions about arm length. The problem with this approach is always that all the translation data is relative, so you never get an absolute position, only a change from the last estimated position that you have to add up. Swinging your arm like hitting with a tennis racket means thousands of direction changes for the accelerometers, which quickly leads to the accumulated errors I mention a lot, so it is not a good method for reliably tracking arm movement behind your back on a Quest.

            But the Quest 3 can of course do exactly what SlimeVR does: it knows that the controller is attached to the hand, the hand to the arm, the arm to the shoulders, and the shoulders to the head. This, plus our shoulder and elbow joints, severely limits where arm, hand, and controller can be. So if the user moves the controller over their head with a virtual sword, Meta can first follow that movement and correct the data with absolute values from the camera tracking, and then closely project the “flight path” along the same trajectory by checking acceleration and rotation, as our anatomy will most likely make it follow a curve. If the user instead just pushes straight up, there will be no X or Z data from the accelerometers. If the user then starts to move the controller straight back, there will be Z data. But if they just hold the sword over their head, all sensors will report basically no movement, and the position will be very stable based on the initial trajectory projection for a long time. Again, moving around will quickly ruin the precision.

            It’s far from perfect and not usable for tracking complex movement, but that is exactly why, in my first comment, I limited the usefulness to situations where there isn’t a lot of movement, and then went on to describe why this is most likely sufficient for tracking over the head, due to “typical” behavior limiting complex movement to where we can see it. We just don’t do complex things over our heads without looking up. And that is also why Meta said that “the headset can effectively estimate the position of the controllers when outside of the tracking area for short periods.”

            You are trying to make an argument for why solely IMU-based positional tracking is too imprecise, but nobody is trying to do that. We are talking about a very specific situation where the controller is in an area not tracked by the cameras, with the possible positions limited by anatomy, the initial trajectory until it left the tracking space known, and very limited movement expected that could lead to accumulated errors.

            Your whole argument for why it isn’t possible is basically “accelerometers are not accurate enough to work for more than a fraction of a second before drifting, badly”, without providing any evidence at all, and with a somewhat simplified idea of how IMUs and IMU tracking work. I would fully agree that IMU tracking based just on adding up a series of non-linear acceleration values will quickly become unusable, but that’s just not what Meta is doing here; they are merging a whole lot of well-known data and only use the IMU to track movement within very specific limits.

            I don’t know if they ever implemented the trajectory predictions in the same way on the Quest 2, as it could still rely on the tracking cameras facing up, so the artifacts you describe may simply be due to a less smart implementation, and could actually be gone on the Quest 3, if it now has to rely more on IMU tracking and skeletal models. Though I’d still assume that the “locking in space” is mostly a precaution, so they could do better. They just don’t bother, because they want you to move the controllers back to the front, where the cameras can see them reliably.
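
            To make the skeletal-constraint point above concrete: once a bone length is known, a 3DoF rotation alone pins down a joint position, with no accelerometer integration involved. A toy sketch (the bone length, axes, and angles are illustrative):

              import numpy as np

              def rot_x(deg):
                  """Rotation matrix about the X axis (pitch)."""
                  r = np.radians(deg)
                  c, s = np.cos(r), np.sin(r)
                  return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

              LOWER_LEG = 0.40  # meters: the ankle is always exactly 40cm from the knee

              def ankle_from_knee(knee_pos, knee_pitch_deg):
                  """Ankle position from knee position plus IMU *rotation* alone:
                  the fixed bone length removes any need to integrate acceleration."""
                  bone = np.array([0.0, -LOWER_LEG, 0.0])  # points straight down when standing
                  return knee_pos + rot_x(knee_pitch_deg) @ bone

              knee = np.array([0.0, 0.45, 0.0])
              print(ankle_from_knee(knee, 0))    # standing: ankle at [0, 0.05, 0]
              print(ankle_from_knee(knee, 90))   # shin folded 90° back: [0, 0.45, -0.4]

            A controller in a free hand has no such rigid chain, which is presumably why the Quest combines these anatomical bounds with short-horizon IMU integration rather than relying on either alone.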

          • TomWhittonUK

            Don’t be such a douche on forums @Nomad76:disqus, not a good look. : )

          • Traph

            Excellent posts (as usual).

            Ironically enough, you can really see the internal deprioritization of accelerometer tracking vs SLAM with the Touch Pros. Put them down when watching a movie in bed and within a few minutes they’ll either start drifting away or pop up a tracking-lost notification.

            This doesn’t happen nearly as often with the Touch v2, and I’d bet anything it’s the result of some algorithm saying “IMU positioning isn’t good enough, what do the cameras say?”. On the Touch v2, the HMD cameras can pick up stationary controllers as long as they’re within the field of view, but the Touch Pros use their own cameras and have a hard time positioning themselves properly relative to a bed sheet or desk surface in extreme proximity.

          • Christian Schildwaechter

            That could be a very interesting test case. The Touch Pros use a mid-range Snapdragon SoC that seems a bit overkill for a controller. I assume Meta chose it because it contains a digital signal processor of the same Hexagon family as on the XR2. The room and hand tracking on Quest 1 and 2 is done on the Hexagon, so it is very likely that the Touch Pro uses the same tracking software as the HMD itself, which would save Meta a lot of work, and allow for some experimentation.

            I’d understand why the controllers would get confused while lying on a bed, as SLAM works best with detectable edges, which are usually missing in sheets. But it should work pretty well while standing on a desk and at least seeing the edge, as long as it isn’t lying there with the camera directly facing the desk surface (which may be what you meant by “extreme proximity”).

            It is interesting that they can both drift and pop up a warning, as drift should only occur when SLAM is not used/available. Obviously this cannot be due to the controllers being out of a tracking range as on Quest 2, as they track themselves, so they sort of have to decide that camera tracking has become unusable and switch to the IMU instead, and then apply some unknown criteria to decide how long the IMU data can be trusted before sending the warning.

            The controllers drifting away would mean they trust the IMU too much when they don’t detect any movement, but at least it is a good sign that it takes a few minutes, hinting that Meta’s approach with the Quest 3 could work.

      • Jason Arencibia

        They have never seen my son playing that Gorilla Tag game that seems to be popular! He’s climbing all over the place; those kids have their hands up high a lot!

    • Dragon Marble

      You’d be surprised how much tracking is based on IMUs as opposed to cameras. You’ll know what I mean if you ever tried to play your Quest on a cruise ship.

      So, with the headset off, curtains down, you see an absolutely stationary world. It’s a calm day and you don’t feel the ship’s movement at all. Put on the headset, you are now in an extremely disorienting and confusing VR world that is rocking back and forth, back and forth…

      • Christian Schildwaechter

        That is a pretty hardcore way to test how good your VR legs are. Sea sickness and motion sickness are based on the same issue: there is a discrepancy between input from your visual and vestibular system, so what your eyes see does not match the movement reported by some fluid filled tubes near your ears.

        Usually these work in sync, and if your brain detects that they don’t any more, it concludes that you must have accidentally poisoned yourself, as poison/drugs also mess with our senses. So making you feel sick to make you throw up is not a bad idea in general, but unfortunately there is no way to just send a “false alarm, it’s just VR/a rocking ship”.

        There are several ways to get your systems out of sync:

        – you feel movement, but don’t see it (ship cabin)
        – you see movement, but don’t feel it (VR locomotion)
        – you feel movement and see movement, but there is a delay (VR rendering latency)

        Now every single one of these is bad. But if you play a badly optimized VR game with stick based locomotion on a moving ship, you can get all three at once. It would be interesting to see if even experienced VR users with well trained “VR legs” still get nauseous if you just throw enough sources of sensory dissonance at them, or if at one point the brain just goes “screw this, I’m just going to ignore this nonsense data”.

        I’m not going to volunteer for trying this out. But since you own a Quest Pro, you might be willing to test how it would feel using VR on a moving ship with the facepad removed, so you can see both the cabin and the VR world while the ship is rocking and rolling, so both worlds move in the wrong way at the same time. Should be a very … impressive experience/experiment. For science, of course. Or TikTok views.

        • XRC

          One mode that affects even seasoned VR players including some Devs is “smooth turning”.

          So perhaps a smooth turning application…

        • Dragon Marble

          I took the headset off right away. At first I thought it was broken.

        • CrusaderCaracal

          Cruises are kind of expensive though. I reckon you’d achieve better results for cheaper by renting out a tinny.

          • Christian Schildwaechter

            The sea sickness part doesn’t work the same on an open boat, because you can always see the horizon, and the movement of the horizon will match the movement reported by your vestibular system. Sea sickness happens a lot quicker when you are below deck without a porthole/window to look at the horizon, as the cabin walls and floor stay stable relative to your body while the world moves in a very noticeable way. Which is why going up on deck and looking at the horizon is a way to mitigate sea sickness.

            Someone getting nauseous due to the ship rocking and rolling even while looking at the horizon is a different effect, as the vestibular and visual systems would actually be in sync. Unfamiliarity with a permanently moving world would be a candidate for causing the brain to assume that you again must have poisoned yourself, because it knows from years of experience on land that the world does in fact not permanently roll from one side to the other and back. Something must be terribly wrong if it now all of a sudden does, so it is best to get rid of anything in the stomach that could be causing this.

      • Nomad

        All you’re saying is that the headset uses the gyro as the primary head angle sensor because inside out tracking is kind of trash.

        • Dragon Marble

          Gyro tracking? What year are you from?

    • david vincent

      Yeah, yet another regression from the pretty good inside-out tracking of the Rift S.
      Maybe Meta really wants you to buy their Touch Pros.

      • Andrew Jakobs

        But the tracking of the current controllers is the same as or even better than with the Rift S.

        • david vincent

          Rift S has 5 tracking cameras (1 on the top)

          • Andrew Jakobs

            Having more cameras doesn’t mean the tracking itself was better than it is now. Improvements to its drivers stopped a long time ago. Experience, better internal sensors, and faster processing improved tracking with fewer cameras. But we will see whether it becomes a problem. Let’s just hope that with the next version the technology has improved and become cheaper, so the current Touch Pro versions can ship as standard controllers.

          • david vincent

            The tracking area was bigger, so yes, it was better…

          • CrusaderCaracal

            Those were older cameras though; we have the tech now to reduce the number of cameras.

          • david vincent

            No IMU can replace a real camera

          • CrusaderCaracal

            Who knows what the future holds

          • david vincent

            It’s inherent to the way IMUs work.

    • Tanix Tx3

      Their assumption is already in the article text.
      It’s not worth it, because the ‘over the head’ time is short enough to be guessed by their algorithm. Not worth the hassle and money to plan an extra camera for that.
      Since they already made this design decision, there is no point discussing it till their next HMD.

  • ViRGiN

    Will be interesting to see how much tech Valve will steal for their surely-coming Valve Dickhard.

    • kakek

      Not more than Meta is going to steal from Apple’s Vision Pro to update Q3 software for the next 2 years.

      • shadow9d9

        FB spends more than everyone else combined when it comes to research, including Apple.

        • Christian Schildwaechter

          TL;DR: That is likely, but we don’t really know. And what counts in the end is not how much you spent, but which results you got.

          Apple said they have been working on AVP for almost a decade, and Facebook bought Oculus nine years ago. Meta just did it very publicly, while until a few months ago, we only had rumors about Apple’s HMD development.

          Meta most likely spent more money on it, and if Apple had had a secret Reality Labs department burning through USD 10bn a year, it would have shown in the financial reports. But Apple developed their own mobile operating system and their own high performance mobile chips with rather powerful GPUs. They introduced ARKit in 2017, are now at ARKit 6, and it runs on iPhones released after 2015, so they have sold more than 1bn AR-capable devices. This is all part of their long-term XR goals.

          They also acquired a large number of companies, like PrimeSense, the developer of the original Kinect depth sensor, in 2013; SMI, the largest eye tracking competitor to Tobii, in 2017; and NextVR, a 360° live video streaming service with exclusive contracts with the NBA etc., in 2020. Depth sensors have been in iPhones for years, eye tracking is a core feature of the AVP, and there are some quite impressive rumors about the quality of content they will provide via NextVR.

          So Apple spent a lot of money and time on XR, both in calendar years and in (wo)man-years, as in the number of involved researchers, engineers, and developers. Actual HMD development was only visible through hints like job offers or Apple being the largest customer for Varjo XR-3 HMDs. So it just wasn’t as obvious, as a lot of the developed tech is dual-use, and if you included the full development cost of all their own tech used in the AVP, they probably outspent Meta, which still makes and spends most of its money on social networks.

          Meta makes great hardware and absolutely wins on affordability, but what blew people away on the AVP was the intuitive UI based on eye tracking and simple hand gestures. And it took developers just a few days to implement the same on a Quest Pro. The software side of Meta’s billions in investment unfortunately doesn’t match the great hardware. So far they have created a widely criticized UI, a series of social VR apps nobody likes, a virtual conferencing solution not even Meta employees are willing to use, and a scrapped mobile OS project intended to replace Google’s Android – and all of these are core products for their Metaverse plans. In the end it doesn’t really matter how much you spend, but how well you spend it.

        • Nomad

          Well they certainly dumped a staggering amount of money into “metaverse” research, but nobody can really explain where all that money went.

          I mean, look at their difficulties putting legs into Horizon Worlds. They made their big announcement about finally adding legs many years after other platforms had them, and it was a lie at that: they were showing off commercially motion-captured footage and pretending it was from their own hardware. It took them another year to actually add them in rudimentary form. I wouldn’t boast about how much money they spend when this is what you get from it.

          I suspect a lot of that money really went to subsidizing the sale of Quest 2 headsets to undercut the competition. There’s nothing special about the headset, it’s just cheap.

          • shadow9d9

            Except we know a whole bunch went into optics/pancake lenses, hand tracking, ringless tracking, Air Link, etc.

    • Charles U. Farley

      How do you talk so much with Zuck’s cuck so firmly lodged in your gullet? Do you shift it to the side and cough up your incel-speech through the space? Or do you actually take a break now and then from engorging yourself of Zuck’s nuts long enough to make your rambling, mostly-incoherent shill-talk?

      • ViRGiN

        Wow, imagine getting so aggressive cause someone smeared shit all over your favorite gaymer company.

  • Richard R Garabedian

    ..that ring was the only way to pick up the controller out of the box…

    • philingreat

      That’s why there is hand tracking

    • CrusaderCaracal

      don’t worry mate, they redesigned the box so you can pick it up

  • Garhert

    They could have convinced everyone with a launch game “Lassoing like a Cowboy”.

    • Christian Schildwaechter

      That should be an easy task, as there is no way a human could judge how precise the controller tracking over the head is based on the simulated movement of a large, soft, and heavy physical object like a lasso without force feedback controllers. So even if the tracking over the head were horribly off, not even an experienced cowboy could tell, as the feedback mechanism (amount and direction of opposing force on the hand) they rely on for throwing a rope with a sling isn’t available, and the physics are way too complicated to “see” discrepancies. They’d pretty much only have to ensure that the rotation speed roughly matches and that the lasso points in the opposite direction of the hand.

      • Garhert

        But the cameras don’t see your hand even after throwing, because your forearm is still in the air above (maybe in front of) your head. So I assumed that the Quest 3 can only estimate the position of your hand based on the movements of your shoulder and part of your upper arm. Maybe I’m completely wrong and it is really as easy as you say.

        • Christian Schildwaechter

          It would indeed get more difficult after the lasso has been thrown. Once there is no more force pulling on the arm and hand, the “intuitive” understanding of their position would work properly again. Humans have many more than five senses, and one of them, proprioception, allows us to know pretty exactly where our arms, hands, and fingers are without seeing them.

          The virtual arm position while circling above the head would have to be derived just from the measured relative movement plus knowledge of human anatomy limiting where the hand can actually be, and the result could be somewhat off. Not too far off, as moving your hand in a circle above your head is probably one of the easiest movements to extrapolate, as you’d see a cyclic +X,+Z,-X,-Z acceleration and the gyroscope indicating a steady curve.

          Our vertical FoV is tilted somewhat downwards, as it is much more important to see what’s happening with our hands or on the floor than above our heads, and the same is true for HMDs. But if the Quest 3 tracking cameras do not cover the full vertical FoV of the display, and your arm is also still up and to the side, out of view, you could end up in a situation where you’d see your arm in an approximated position but, thanks to proprioception, know that it is actually somewhere else.

          It would probably still work. This is now moving into very speculative areas, as I have never tried to throw a lasso, but I assume that a lasso will usually be thrown forwards, and that during the last half revolution above the head the hand no longer pulls the lasso back, but follows it forward, so the lasso stops circling around the head and instead flies forward towards a target. But that would mean that you’d always release the lasso with the arm holding it stretched forward and somewhat up, where it should again be within range of the tracking cameras.

          I’m pretty sure about proprioception being easier to fool when a heavy object is pulling on the hand. I’m absolutely not sure how a lasso is thrown, and I don’t really know how far up the hands can be tracked on Quest 3 when they are in front of the head. So this is just theory, and someone will probably have to come up with a lasso-throwing app to try it. A quick check on SidequestVR showed no results for lasso yet; all the cowboys seem to just use guns for everything, and an astonishing number of them are apparently armed green cacti wearing sunglasses and a big smile.

  • Nomad

    I’m a little skeptical about those charts of unique tracking coverage. They don’t quite seem to match up with the position of the lenses. The problem is it’s a 3D chart flattened down and viewed from a confusing angle, so it’s not clear what they’re claiming; apparently a lot of that extra coverage is behind the torso. That’s useful, despite the skepticism in the comments about it. Maybe it doesn’t matter in single player gaming when you’re keeping your hands in front of your face. But I put a large focus on social VR, such as VRChat, and there the position of your hands matters even if you can’t see them: other people see if your hands glitch out. On the other hand, I worry that the elimination of the ring will make the tracking footprint smaller and make the controllers easier to occlude with your body when your hands are at rest by your sides.

    Also, it’s not uncommon to find games using a mechanism where you reach over your shoulder, as a kind of inventory management, to pull out a weapon or item or something. I’m not sure where those downward-looking cameras are positioned, but the text claims there’s better coverage around the shoulders, so that may help with that kind of system.

    Alas, the “two-stage” trigger isn’t really unique. Index controllers have a two-stage trigger: it reads the position of the trigger, and then at the end of the travel it clicks and reads a button press. This is a different kind of two-stage system, sure. But I would argue that the analog grip pressure sensor on the Index controller makes more sense than a pressure-sensing trigger; it might have made more sense to put the pressure sensor on the grip trigger. I’m at a loss as to how you’d use a pressure sensor at the end of the index trigger’s travel, but perhaps we’ll see some interesting applications for it.

    • ViRGiN

      We haven’t seen ANYTHING interesting being done with Valve controllers, EVER. Even the crappy low-effort handshake demo doesn’t showcase anything unique. It actually plays better with Virtual Desktop hand tracking lol.

      • CrusaderCaracal

        Valve controllers are so sick but no games really use them well

  • Isn’t this going to impact climbing games?

  • Mark

    Really need to see how this plays out in full reviews

  • Daryl_ED

    Interesting, this is a similar camera placement to the HP Reverb G2. Some have criticized its tracking pattern/volume (for me it was OK). This will tell us whether the tracking algorithms are better than WMR’s.

  • Andross

    I used the Quest 2 for too short a time, but I always noticed more delay in every movement compared with my CV1’s tracking.
    I really struggle to trust Meta to have made a better controller while removing the LED rings.
    (Also, with Quest 2 I need to play with the lights on in my room, but that’s not so important.)

    I’m not very informed about the latest tracking technologies, but I am pretty skeptical about the possibility of keeping a decent, reliable position for my sword while I hold it behind my back, ready for a slash.

    We know these movements are still a problem to handle correctly in code…
    (sorry for bad English)

  • Michael Kiwi

    Techy question if anyone knows. In the Rift headset, each LED flashed a pattern so that it could be individually identified. Do the Quest 3 controllers’ LEDs still flash these signals? And is camera synchronization with the LEDs still a problem? Or is this solved by global shutters on the IR cameras?