Alan Yates, ‘Chief Pharologist’ at Valve

Lighthouse is Valve’s laser-based tracking system that forms the bedrock of Steam VR’s input system. Alan Yates, Hardware Engineer at Valve, was a key player in the system’s design and implementation. Alan, frustrated with being asked the same questions over and over on Twitter, produced a nano-FAQ on Lighthouse. Here are the highlights.

Don’t worry – you’re not alone. I had no idea what pharology was either. It’s the term given to the scientific study of lighthouses, signal lights, their construction and illumination. It also forms part of the tongue-in-cheek job title of Alan Yates, a Hardware Engineer at gaming giant Valve, a knowing nod to the company’s secretive ‘Lighthouse’ project in which Yates was actively involved prior to its unveiling at GDC this year.

Lighthouse is a 3D spatial laser-tracking system designed by Valve primarily to solve hard problems surrounding virtual reality positional tracking, be it for their new Steam VR head mounted displays, such as HTC’s Vive, or controller input for games based in the Steam VR ecosystem. There are a lot of questions surrounding the system, which uses one or more laser-emitting ‘base stations’ that sweep beams of light across the room; sensors on headsets and controllers time those sweeps to work out their position and orientation relative to each station.

Since the system’s launch at GDC 2015 this week, Alan has clearly grown a little tired of answering the same questions over and over via social media and so has produced a nano-FAQ comprising answers to those he’s now regularly asked.


1:

2:

3:

4:

5:

6:

7:

8:

9:

10:

We’ll be bringing you first hands-on impressions of Lighthouse as part of Valve’s Steam VR demos very soon, as the Road to VR team continue to report back from GDC 2015 in San Francisco.

  • Sven Viking

    “which uses one or more laser-point producing ‘base stations’ which create an invisible constellation of light points in a given play area”

    This sounds like a misunderstanding. The base stations each send out a single beam that very quickly scans over the entire room (and by “scans” I mean the beam traverses the room, not that the base station gains any information), 100 times per second. It seems as if it creates a bright IR flash prior to starting a scan to provide timing synchronisation.

    Sensors hit by the beam figure out their direction from the base station based on timing, and with enough sensor hits their rigid host object can calculate its 3D location and orientation relative to the base station. Basically the base station acts as a single point of reference — it’s not that it projects a number of stationary markers around the room and the tracked objects use cameras to read those markers like the physical stickers in Valve’s earlier HMD prototypes.
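
    To make the timing idea concrete, here’s a minimal sketch in Python (my own illustration, not Valve’s code) of turning a sweep-hit timestamp into an angle, taking the 100-sweeps-per-second figure above at face value and assuming the sync flash marks the start of each sweep:

    ```python
    import math

    SWEEP_RATE_HZ = 100.0               # assumed sweep rate (figure quoted above)
    SWEEP_PERIOD = 1.0 / SWEEP_RATE_HZ  # seconds per full sweep

    def hit_time_to_angle(t_sync: float, t_hit: float) -> float:
        """Angle (radians) swept since the sync flash, assuming one rotation per sweep period."""
        fraction = ((t_hit - t_sync) % SWEEP_PERIOD) / SWEEP_PERIOD
        return fraction * 2.0 * math.pi

    # A hit 2.5 ms after the flash corresponds to a quarter of a rotation (~90 degrees).
    print(math.degrees(hit_time_to_angle(0.0, 0.0025)))
    ```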

    There are many advantages to this including solving certain occlusion problems, not needing complex image processing, not needing suitable surfaces within an appropriate range for markers to be projected onto, and not needing expensive optics on every tracked object to look for the markers. It does mean that you’ll need direct line-of-sight between the sensors and at least one base station to avoid losing tracking, but with two base stations that shouldn’t usually be a problem.

    • Paul James

      You’re absolutely right, Sven, I’ve updated the post to correct this. We need to get into the nitty-gritty of this system soon – above is just a very brief overview.

      Thanks for the excellent explanation.

      • Sven Viking

        Not a problem. By the way, for anyone who’s interested, the advantages I can think of for Lighthouse over Rift-style optical tracking (many of which are mentioned in these tweets):

        – Less complex image processing.

        – Unlike seeing a bunch of dots and trying to figure out which dot corresponds to which IR LED (iirc DK2 switches some LEDs on and off to reduce the ambiguity), this system knows from the start which sensor is which. This means less processing and, in some cases, fewer sensors needed for calculations. Most importantly, though, it means an unlimited number of tracked objects.

        – You can locate even a single sensor if it has line-of-sight to two base stations (whereas with two cameras you could only do that if the light was somehow distinct from any other lights).

        – You can basically keep adding base stations to increase the tracking area as much as you want. (I still don’t understand how the system distinguishes between lasers from different base stations, though. If you get the opportunity, please ask someone from Valve.)

        – Base stations only need power; they don’t need any connection to the PC.

        – Probably a wider “FOV” for each base station compared to a camera? Maybe a longer effective range?

        Disadvantages I can think of:

        – (consumer) Base stations might be larger and/or more expensive than tracking cameras, possibly?

        – While the sensors are small, light and cheap, it’s still possible that they’re bulkier, heavier and more expensive than IR LEDs.

        – Some of the advantages of Lighthouse depend on mounting at least two base stations somewhere, which could be inconvenient.

        – All tracked objects need to have a data connection to the PC.

        – No chance of trying to use the camera image for some other purpose, like optical hand tracking or something (implausible without depth-sensing dual cameras though). Could be seen as a plus for those with privacy concerns.

        – Sort of a stretch, but: Can’t get something’s 3D location without either multiple visible sensors or multiple visible base stations. From a camera image, you could get the 3D location of something like a glowing orb from a single 2D image in a PS Move sort of way (though possibly not as accurately).

        Some disadvantages to both systems:

        – You need line-of-sight between sensors and base stations (STEM doesn’t need this). If you crawl under a table, wear a balaclava over your HMD or shield the end of the controller with your body while playing Harakiri Simulator, you’ll lose tracking.

        – The previous point also means you’re likely to need sensors sticking out of the top of controllers, like PS Move orbs or the Valve controller prototype’s strange flower things, to prevent them from being easily occluded by an arm or such.

        – You need to have wired camera/s or base station/s mounted in stable locations wherever you go, or periodically recharge battery-powered base stations at least.

        • nzkspdr

          Hey Sven,

          Thanks for your insight. It cleared up some questions and raised some new ones :) Is there any recommended placement for the base stations? What’s the principle behind sensor identification? What sort of range does the laser beam cover (presumably a horizontal beam with X vertical degrees)? Interesting read

          Fer

          • Sven Viking

            First, note that I don’t have a Vive kit and am only going on what I’ve read/heard/seen from Valve, impressions, photos, Reddit users more knowledgeable than me etc. — apologies if I sounded like I was claiming greater knowledge.

            Presumably Valve’s GDC demos are likely to use something very close to the ideal base station setup, and the base stations have been described as being placed “on the ceiling”, “in the corners of the room”, and “on the wall”, so I’d guess high in the corners of the room or tracking area is likely to be the preferred location. It also fits with the talk of rectangular play areas. As indicated earlier, I’m not sure of the “FOV” or range, but if it gives near-complete coverage with stations in the corner of a 15’x15′ space the range should be more than 22 feet and you’d expect the “FOV” to be at least around 90 degrees.
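
            (For what it’s worth, the rough arithmetic behind that range guess, purely my own back-of-the-envelope: a station in one corner of a 15’x15′ area has to reach the opposite corner, i.e. the square’s diagonal, and has to sweep the full quarter-turn between the two adjacent walls.)

            ```python
            import math

            side_ft = 15.0
            diagonal_ft = math.hypot(side_ft, side_ft)  # distance to the far corner
            print(round(diagonal_ft, 1))                # ~21.2 ft, so ~22 ft of range covers it
            # Covering the whole square from a corner also needs roughly 90 degrees of horizontal coverage.
            ```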

            The principle behind sensor identification is that each sensor has an ID, so a sensor can say “My ID is XX and was just hit by a laser.” The system figures out what that sensor’s angle from the base station must be based on the time elapsed since the base station began its predictable scanning pattern. It then adds that point to the list, which could be visualised as similar to a 2D image taken from the perspective of the base station (even though all the base station does is flash and shoot lasers).

            That 2D data is a lot like the Rift tracking camera’s image with a bunch of spots on it, so, with enough points and armed with the knowledge of where the sensors are placed on the rigid body being tracked, you can run pose calculation similar to the Rift’s to determine its position and orientation relative to the base station. The difference is, you don’t need to first try to pick the LED dots out of a potentially grainy image, and there’s no need to resolve ambiguities about which dot corresponds to which LED/sensor. The other difference is that if you have more than one base station you can make use of the data from both, and also triangulate the absolute position of any individual sensor with a simultaneous view of both base stations.
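
            To illustrate that last triangulation case (a rough sketch under my own simplifying assumptions, not Valve’s implementation): given a ray towards the same sensor from each of two base stations at known positions, the sensor sits at the point closest to both rays.

            ```python
            import numpy as np

            def triangulate(origin_a, dir_a, origin_b, dir_b):
                """Midpoint of the shortest segment between two (possibly skew) rays."""
                origin_a, origin_b = np.asarray(origin_a, float), np.asarray(origin_b, float)
                dir_a = np.asarray(dir_a, float); dir_a /= np.linalg.norm(dir_a)
                dir_b = np.asarray(dir_b, float); dir_b /= np.linalg.norm(dir_b)
                w0 = origin_a - origin_b
                a, b, c = dir_a @ dir_a, dir_a @ dir_b, dir_b @ dir_b
                d, e = dir_a @ w0, dir_b @ w0
                denom = a * c - b * b            # ~0 means the rays are (near) parallel
                s = (b * e - c * d) / denom
                t = (a * e - b * d) / denom
                return (origin_a + s * dir_a + origin_b + t * dir_b) / 2.0

            # Two stations a few metres apart, both seeing a sensor near the middle of the room:
            print(triangulate([0, 0, 0], [1, 1, 0], [4, 0, 0], [-1, 1, 0]))  # ~[2, 2, 0]
            ```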

            Each device is also connected separately to the computer, of course, so when a sensor registers a hit there’s no chance of confusing it as belonging to a different device.

            Basically, it’s not that the base station is tracking multiple devices, it’s that multiple devices are tracking themselves using the base stations as points of reference. (I still don’t know how they tell one base station from the other, though. I can think of several possibilities, like alternating their scans, but all of them seem to have flaws to me.)

    • Sven Viking

      Oops, I did make one serious mistake due to thinking that each base station used only one laser. It turns out that there are actually two lasers per station — one vertical and one horizontal, each with a wide beam covering the full height or width of the field. Rather than moving in scanlines, each simply sweeps across the entire field, alternately, essentially providing X and Y coordinates for each sensor hit. This is far simpler and easier to do than the single-beam misconception, without ridiculous rotation speeds needing to be involved.

      The rest of the explanation remains the same, except sensor hits are split into X and Y phases. The two are combined to get the full direction of the sensor relative to the base station.
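
      In code, combining the two hits might look roughly like this (again just my own sketch of the idea, not Valve’s implementation): the horizontal sweep yields an azimuth, the vertical sweep an elevation, and together they define a ray from the base station towards the sensor.

      ```python
      import math

      def sweep_angles_to_ray(azimuth_rad: float, elevation_rad: float):
          """Unit direction from the base station, given the two sweep angles for one sensor."""
          x = math.tan(azimuth_rad)    # horizontal offset per unit of depth
          y = math.tan(elevation_rad)  # vertical offset per unit of depth
          z = 1.0                      # depth axis pointing out of the base station
          n = math.sqrt(x * x + y * y + z * z)
          return (x / n, y / n, z / n)

      # A sensor hit ~10 degrees right of and ~5 degrees above the station's forward axis:
      print(sweep_angles_to_ray(math.radians(10), math.radians(5)))
      ```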

      /u/Fastidiocy from Reddit has posted (and created, I assume) a really impressive animation to illustrate how it works:
      http://gfycat.com/PlaintiveImmenseArgusfish

  • wowvr

    With a title like that, I think a certain man named Ben from another publication has rubbed off onto you. Let’s not go down that path.

    • Paul James

      I felt the ‘listicle’ format was justified here. Certainly for me, these are indeed 10 things I didn’t know about the system.

      People like lists, and as long as the information found within those lists is useful and presented contextually, there’s nothing really wrong with the format.

      Having said that, it’s not something you’ll see too often on Road to VR, as I’m sure you’ve noticed up to this point.

  • pedrw nascimentw

    The Road to VR staff is doing an excellent job and I don’t miss any news. I read it all eagerly (with Google Translate’s help).

    Thanks :-)

  • gerwindehaan

    Here is my (sideline hobbyist) speculation on the opto-mechanical part of the earliest Lighthouse proto, see my visual annotation on https://twitter.com/gerwindehaan/status/574012794057670657 . It looks like two separate sweeping laser lines instead of a single X-Y actuation, which makes sense in terms of speed and coverage. As Sven Viking mentions, an IR flood flash probably lights up the room for time sync on all sensors, potentially also for syncing between base stations. In their last proto, it might even be that they put effort into getting the ‘origin’ of the IR LEDs positioned on the central axis of both motors, thereby making it possible for a group of sensors to get a base orientation for the origin.

  • brandon9271

    So if I understand this correctly, the ‘lighthouse’ is sort of sending out an x-y ‘raster’ like an old CRT, and the sensors on the HMD are each working like an NES Zapper? :-p Damn, that’s genius. Why didn’t I think of that?!

    • Sven Viking

      Yes, a lot like that! I actually compared it to a light gun just recently (but I was thinking of Sega’s).

      • brandon9271

        This tech would be great for bringing accurate lightguns back to LCD monitors. When not using the controllers for VR they could be used in MAME :)

        • Sven Viking

          Just use MAME on a giant virtual CRT screen in VR :). And the controller could look like one of the original light guns.

  • Sean Concannon OculusOptician

    So typical that Valve would take this route with their tracking solution; the laser technology they are using is expensive and unnecessary. You need more lawyers than engineers to seriously consider standing/walking experiences, but Valve can clearly afford them. What about the laser radiation being emitted by these base stations, or the frustration of having to set them up and run wires to each corner of the room? Who really has access to a 15’x15′ play area when an omni-directional treadmill gives you unlimited movement in a fraction of the space? Lastly, how much is this all going to cost?

    • Sven Viking

      It’ll be using low-powered IR light; there shouldn’t be any potential for dangerous laser radiation as far as I know (DK2 also emits IR light). Nobody’s sure what the cost will be compared to a tracking camera yet, though a couple of people reckoned the parts involved shouldn’t be too expensive.

      The base stations don’t need a wire to the computer, just power, so they could technically be battery powered. I don’t know how much power they draw, though, so don’t know how often you’d be likely to need to recharge them if so.

      > “Lastly, how much is this all going to cost?”

      Probably less than an omnidirectional treadmill.

    • Sven Viking

      P.S. — you don’t need the full 15’x15′ space, of course, it’s just the maximum coverage for two base stations.

    • Ainar

      ‘Laser radiation’? You know that’s just light, right? It’s not like they’re industrial grade cutting lasers designed to sever the limbs that slip outside of the 15×15 area…or are they? :D

      Jokes aside, I’m leaning more towards seated experiences myself, but I think the technology itself here is ingenious. It’s actually quite the opposite of unnecessary, as it is amazingly minimalist.

  • Toxle Wease

    Interesting historical note: When I first heard about Lighthouse I was reminded of a somewhat obscure 2005 paper from the wireless sensor networks literature: http://www-users.cs.umn.edu/~tianhe/Papers/spotlight.pdf. It anticipates some of the invention, basically sweeping laser planes across a set of low-cost sensors from a known spot and using the timing of the illumination to let each sensor compute its location.

    Abstract:
    The problem of localization of wireless sensor nodes has long been regarded as very difficult to solve, when considering the realities of real world environments. In this paper, we formally describe, design, implement and evaluate a novel localization system, called Spotlight. Our system uses the spatio-temporal properties of well controlled events in the network (e.g., light), to obtain the locations of sensor nodes. We demonstrate that a high accuracy in localization can be achieved without the aid of expensive hardware on the sensor nodes, as required by other localization systems. We evaluate the performance of our system in deployments of Mica2 and XSM motes. Through performance evaluations of a real system deployed outdoors, we obtain a 20cm localization error. A sensor network, with any number of nodes, deployed in a 2500 m² area, can be localized in under 10 minutes, using a device that costs less than $1000. To the best of our knowledge, this is the first report of a sub-meter localization error, obtained in an outdoor environment, without equipping the wireless sensor nodes with specialized ranging hardware.

    • KT

      I take it that, at least from your point of view, the similarities are purely coincidental then?

      There go my Nvidia/Valve tracking collaboration theories. :)

    • Ainar

      Interesting, I had a similar situation when Kinect came out: around the same time I came across a paper from a few years back describing pretty much the same technology. Guess it just means those ideas don’t go to waste but may take time to be practically implemented ;)

  • A smaller one-person team made a system called ReplaceReality that also attempts to solve the ‘VR Room’ concept. ( http://replacereality.com/ )

    Although the light / laser approach has benefits, it still suffers from the same occlusion problem, which still drives the need for software to: 1. understand a body in space and 2. combine data from multiple sources to form a smooth experience. So whether you use multiple ‘base stations as light emitters’ or ‘multiple Kinects’ or ‘multiple PS Eye’ setups, the goal is to use redundancy to solve the occlusion problem (lots of sensors in the room) plus good software that knows how to determine the “best” cross section of data.

    Neat that they’re working on this stuff. My solution isn’t meant for consumer home use, for the same obvious reasons theirs isn’t. They probably spent more money on their project than I did on mine, and it’s good to see that they didn’t seem to solve it the same way I did.

    At the end of the day, the tech is only as good as the experiences that are designed around it. The technical capability has been around for a while; people just need really cool ideas of what to actually do with it.