Augmented reality holds such awesome promise, but I’ve scarcely seen a single useful augmented reality feature come to fruition. Augmented reality aims to superimpose digital information onto our otherwise analog world. So far, it has only reached consumers in the form of smartphone apps, and hardly any of them are more than novelties. I recognize that augmented reality is an important stepping stone on the road to virtual reality, but right now it’s boring the world with gimmicks.
There’s lots of useful information that we could overlay onto our daily lives, but the current crop of smartphone apps not only gives us relatively useless information, it delivers that information in the wrong place.
The military offers one of the earliest and most useful examples of augmented reality that I can think of. The modern fighter jet HUD (heads-up display) overlays heaps of useful information onto the real world by projecting it onto a pane of glass at the front of the cockpit. Pilots can track heading, horizon, angle of attack, airspeed, altitude, weapon systems, and a lot of other critical information without taking their eyes off the sky. Rudimentary augmented reality systems were used as far back as the 1930s(!) to help pilots compensate for bullet drop and gun-lead while aiming at enemy aircraft. If you’ve never seen such a HUD, have a look at the video below. Note that the tracking of the overlay is almost dead-on (we need that kind of performance for consumer applications)!
[youtube=http://www.youtube.com/watch?v=RjqrfChgijs]
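For the curious, the gun-lead arithmetic those early gunsights mechanized can be sketched to first order. This is an illustrative toy of mine, not the logic of any real sight: it assumes a flat trajectory (ignoring bullet drop) and made-up numbers.

```python
import math

# First-order gun-lead sketch: how far ahead of a crossing target must
# you aim so the bullet and target arrive at the same point?
# (Flat-trajectory assumption and all figures are illustrative.)

def lead_angle_deg(range_m, bullet_speed_mps, target_crossing_speed_mps):
    time_of_flight_s = range_m / bullet_speed_mps          # flat trajectory
    lead_m = target_crossing_speed_mps * time_of_flight_s  # target travel
    return math.degrees(math.atan2(lead_m, range_m))

# Target crossing at 100 m/s, 300 m away, 800 m/s muzzle velocity:
# you need to aim roughly 7 degrees ahead of the target.
```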
But obviously your everyday consumer doesn’t fly a fighter jet, or even a commercial airliner. So where can we find practical and useful applications for augmented reality? Well, I’ll tell you that I’m tremendously surprised that we’ve yet to see such AR systems in our cars. Drivers could benefit nearly as much from an informational AR overlay as pilots do from their aviation-oriented HUDs.
Several consumer-available cars already have forward-facing radar that detects the speed of vehicles ahead of you, and that information is used to automatically adjust your cruise control. Why not take that info, put it up on a HUD, and show me the relative speeds of all of the vehicles in front of me? This would be useful for highway driving and for deciding which lane to jump into. Sensors could be placed on the sides of the vehicle to detect when other vehicles are in your blind spot, and the HUD could indicate this so that you don’t accidentally merge straight into another car, even if you forget to check over your shoulder.
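To show how simple the blind-spot half of that could be, here’s a hypothetical sketch mapping side-sensor readings to HUD warning flags. The sensor interface and the 3 m threshold are invented for illustration; a real system would need far more care (sensor fusion, relative speed, debouncing).

```python
# Hypothetical blind-spot HUD logic (sensor format and threshold are
# illustrative assumptions, not any real vehicle API).

BLIND_SPOT_RANGE_M = 3.0  # assumed detection radius for side sensors

def blind_spot_warnings(side_distances_m):
    """Map each side's nearest-object distance to a HUD warning flag.

    side_distances_m: dict like {"left": 2.1, "right": float("inf")},
    where inf means nothing detected on that side.
    """
    return {
        side: dist <= BLIND_SPOT_RANGE_M
        for side, dist in side_distances_m.items()
    }

# A car 2.1 m off the left flank lights the left indicator only.
print(blind_spot_warnings({"left": 2.1, "right": float("inf")}))
# → {'left': True, 'right': False}
```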
And I can think of plenty more useful applications for AR in cars, but the key thing to recognize here is where the information is delivered. In the plane, it’s in front of the pilot; in the car, it’s in front of the driver. This works because either activity already gives you a medium upon which information can be displayed (namely, the glass HUD screen or the windshield), and that medium is always in front of the user no matter what the circumstances.
This differs from the way AR is currently delivered, which has been exclusively through smartphones. The problem is that we don’t always hold smartphones up to our faces, and when we do, we feel silly! Not to mention that the current range of AR applications are gimmicks at best; you’ll struggle to find any of practical value. Sure, I can put the Yelp app in ‘Monocle’ mode and hold my phone up to the horizon to see which restaurants are around me… but this isn’t actually providing any useful information. I’d be better off just locating those places on a top-down map. Let’s also not forget that the hardware just isn’t good enough at this point to provide convincing 1:1 scene tracking, which means the on-screen AR elements are sluggish and unconvincingly overlaid on the real world.
That’s not to say that there is no use for AR. We just need to find a medium through which it makes sense, and start looking for problems that could be solved with AR — rather than creating AR and then trying to shoe-horn it into places where we don’t have any problems.
Daily we’re getting closer to head-mounted displays (HMDs) that you could actually wear in public without getting laughed at. We’re also slowly working toward contact-lens screens. Once we reach these milestones, AR will become significantly more practical because the display medium will always be with us, rather than requiring us to pull out our phone, launch an app, and hold it up to our face.
What we need to do is match the display medium to the problem, and we also need more processing power and better algorithms.
Here’s a great example of matching the medium to the problem. It’s called Word Lens, and it’s one of the only potentially useful AR apps out there: it addresses a real problem and offers a good solution. These are the kinds of useful applications that we need to be working toward with AR:
[youtube=http://www.youtube.com/watch?v=h2OfQdYrHRs]
And when it comes to algorithms, some really smart people are working on technology to augment the limited gyroscopes and accelerometers that our smartphones are currently equipped with. I wrote a post about such systems a few months back; you might want to have a look.
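One common way to combine those two sensors (and I’m not claiming it’s the exact technique from that post) is a complementary filter: integrate the gyro for smooth short-term motion, then let the accelerometer’s gravity reading correct the gyro’s long-term drift. A minimal sketch, where the 0.98 blend factor is a typical but assumed value:

```python
import math

# Complementary-filter sketch: fuse a gyro rate and an accelerometer
# gravity reading into a single pitch estimate. All values illustrative.

def complementary_pitch(prev_pitch_deg, gyro_rate_dps,
                        accel_x_g, accel_z_g, dt_s, alpha=0.98):
    # Gyro integration: smooth and responsive, but drifts over time.
    gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt_s
    # Accelerometer tilt: noisy, but anchored to gravity (no drift).
    accel_pitch = math.degrees(math.atan2(accel_x_g, accel_z_g))
    # Blend: trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch
```

Run in a loop at the sensor sample rate, this keeps the overlay responsive to quick head or phone motion while slowly pulling the estimate back toward the true tilt.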
With all of this in mind, I want to leave you with an interesting TED talk that makes use of AR. The demonstration is impressive, but see if you can connect some of the issues that I’ve presented here with what you see in the video:
[youtube=http://www.youtube.com/watch?v=C4pHP-pgwlI]