Mojo Vision is a company working to produce smart contact lenses, which it hopes will soon give users an unobtrusive display without the need to wear a pair of smart glasses.

CNET’s Scott Stein got a chance to go hands-on with a prototype at CES 2020 earlier this month, and although the company isn’t at the point just yet where it will insert the prototype tech into an unsuspecting journalist’s eyeballs, Mojo is adamant about heading in that direction; the team regularly wears the current smart contact lens prototype.

While Mojo maintains its contact lenses are still years away from reaching consumers’ eyeballs, the company is confident enough to say it’ll happen sometime this decade. Mojo sees the lenses landing in the purview of optometrists, so users can have their microdisplay-laden lenses tailored to fit their eyes.

Image courtesy Mojo Vision

But just how ‘micro’ is that display supposed to be? Fast Company reports that Mojo’s latest and greatest squeezes 70,000 pixels into less than half a millimeter. Granted, that’s a green monochrome microLED serving the eye’s fovea, but it’s an impressive feat nonetheless.
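
For scale, here’s a quick back-of-the-envelope check of what those figures imply, assuming a roughly square pixel array and a 0.48 mm panel width (both our assumptions for illustration, not published specs):

```python
import math

# ~70,000 pixels packed into a panel under half a millimeter across.
pixels_total = 70_000
display_width_mm = 0.48   # assumed width: "less than half a millimeter"

pixels_across = math.sqrt(pixels_total)                      # ~265 px per side
pixels_per_inch = (pixels_across / display_width_mm) * 25.4

print(f"~{pixels_across:.0f} px across, ~{pixels_per_inch:,.0f} ppi")
# -> ~265 px across, ~14,000 ppi; a flagship phone of the era is ~450-550 ppi
```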

On its way to consumers, Mojo says it’s first seeking FDA approval for its contacts as a medical device intended to help users with degenerative eye diseases; the company says the device will display text, sense objects, track eye motion, and see in the dark to some degree.


Fast Company reports that Mojo integrates a thin solid-state battery within the lens, which is meant to last all day and be charged via wireless induction in something similar to an AirPods case when not in use. The farther-reaching goal, however, is continuous charging via a thin, necklace-like device. All of this tiny tech, which will also include a radio for smartphone tethering, will be covered with a painted iris.

Image courtesy Mojo Vision, via CNET

Mojo also maintains that its upcoming version will have eye-tracking and some amount of computer vision—two elements that separate smart glasses from augmented reality glasses.

Smart glasses overlay simple information onto the user’s field of view, though that information doesn’t interact naturally with the environment. Augmented reality, which is designed to insert digital objects and information seamlessly into reality, requires accurate depth mapping and machine learning. That typically means more processing power, bigger batteries, more sensors, and larger optics for a field of view wide enough to be useful. Whether Mojo’s lenses will be able to do that remains to be seen, but the company at least has a promising start with what amounts to a basically invisible pair of smart glasses.

Whatever the case, it appears investors are pitching in on Mojo Vision’s vision. The company has thus far garnered $108 million in venture capital investment from the likes of Google’s Gradient Ventures, Stanford’s StartX fund, HP Tech Ventures, Motorola Solutions Venture Capital, and LG.




Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 4,000 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • Jan Ciger

    Uh, folks, I would think RoadToVR knows better than this.

    a) eye-tracking as something separating smart glasses (whatever that is) from augmented reality glasses. That’s not true. Eye tracking (tracking what the user is looking at) is not required for AR (and, in fact, most AR devices don’t have it).

    b) “Augmented reality … requires accurate depth mapping and machine learning.” Nope. If this were true then things like ARToolKit or the original Vuforia wouldn’t exist. They do neither depth mapping nor machine learning!

    What one really needs for AR, and what makes the difference between a personal head-up display (“smartglasses”) like Google Glass and AR glasses (e.g. HoloLens), is the fact that the latter is capable of tracking the environment and achieving (for better or worse) registration of virtual objects with the real world. That can be achieved by many methods, whether by tracking a marker using a simple camera (ARToolKit) or by a complex SLAM approach (HoloLens).

    However, you really don’t need any machine learning (and most of these applications don’t use it), nor eye tracking. Depth mapping is a useful bonus but not strictly required if you are tracking the objects around you by other means. You do need it for ensuring correct occlusions, which improves the illusion that the overlaid objects are real, but it is by no means necessary for AR to work.

    • Ad

      Occlusion is essential for AR; it’s like phone VR vs. actual VR. Eye tracking and a moving display would be great too, or as a means of control, with a button on the glasses or an Apple Watch for actuation.

      • Jan Ciger

        Occlusion is important but it is by no means essential (i.e. something you can’t have AR without). A lot of successful AR applications have been built without it – e.g. look at Pokemon Go.

        And re eye tracking – you certainly don’t want eye tracking as a control input. Most eye movement is involuntary, so it would be an extremely noisy input. And if you do force yourself to focus on something to keep your gaze steady, you will feel really bad eye strain/headache very quickly. I did some experiments with this back in 2004 or so and it was not pleasant. Also, the Tobii tracker used (and most of its similar market competitors) is not super duper accurate – it is OK for determining what object you are looking at (assuming it is reasonably large) but certainly not precise enough to use as a pointer/”mouse”.

        Where eye tracking is the most useful is as a passive input – monitoring what you are looking at (tracking attention or focus, e.g. during a training task, or to provide context-relevant information), for things like foveated rendering, and for ergonomic evaluation (“Did the user actually see the sign/button/danger?”)
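
        (A toy sketch, in Python, of why raw gaze makes a poor pointer and how an interface might cope: exponential smoothing plus a dwell check before anything counts as “focused”. All thresholds here are invented for illustration.)

        ```python
        from dataclasses import dataclass

        @dataclass
        class GazeStabilizer:
            """Toy gaze 'assist': EMA smoothing plus a dwell test, since raw
            gaze samples jitter far too much to drive a pointer directly."""
            alpha: float = 0.2           # EMA factor: lower = smoother, laggier
            dwell_radius_px: float = 40  # generous radius; trackers are coarse
            dwell_frames: int = 30       # ~0.5 s at 60 Hz to count as "focused"
            x: float = 0.0
            y: float = 0.0
            stable: int = 0

            def update(self, raw_x: float, raw_y: float) -> bool:
                """Feed one noisy sample; True once gaze has dwelled in place."""
                dx, dy = raw_x - self.x, raw_y - self.y
                inside = dx * dx + dy * dy < self.dwell_radius_px ** 2
                self.stable = self.stable + 1 if inside else 0
                self.x += self.alpha * dx   # smoothed gaze position
                self.y += self.alpha * dy
                return self.stable >= self.dwell_frames
        ```

        (Even then, actuation would likely come from an external tap rather than dwell alone, sidestepping both the “Midas touch” problem and the strain described above.)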

        • Ad

          So for eye tracking I just meant it would provide heavy assist, and you would tap the glasses to select – no dragging things around, and no window moving if you look away.

          As for occlusion, I don’t agree at all. I think it’s essential to making apps that feel natural and useful, and able to take any amount of stress or strain from real-world conditions. Pokémon Go is now 5 years old; these new, expensive glasses need to be different, especially if they’re giving up the complex inputs of a phone.

          • Spruce

            Don’t keep trying, he’s clearly obsessed with “winning debates”

    • Depth mapping and machine learning are required to integrate virtual world items into the real world. The machine has to know your environment to lodge virtual things within it. Just because you map the depth visually, rather than with specific depth mapping cameras, doesn’t mean it’s not gauging depth. They are not wrong. You’re splitting hairs.

      Also the author didn’t say eye tracking was required by AR, they just said it would separate these from simple “Smartglasses”. Smartglasses are basically just HUDs.

      You’ve really gone overboard on the author over nothing.

      • Jan Ciger

        Sorry, that’s not true. What you need to do is to know where your camera is relative to the environment. That’s all.

        That you can do e.g. by sticking a marker to the wall – exactly like ARToolKit or Vuforia does. There is no depth mapping at all, only a simple perspective-n-point (PnP) correspondence problem (you know the 3D shape of the marker/object you are tracking, you know its 2D reprojection in the camera image, and from that you can estimate the position, orientation and scale of the camera vs the marker). This is trivial to implement, e.g. using OpenCV; I have done it many times now.
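
        (For the curious, a minimal sketch of that marker-pose recipe using OpenCV’s solvePnP; the marker size, corner detections and camera intrinsics below are all made up for illustration.)

        ```python
        import numpy as np
        import cv2

        # 3D corners of a 10 cm square marker, in the marker's own frame.
        object_points = np.array([
            [-0.05, -0.05, 0.0],
            [ 0.05, -0.05, 0.0],
            [ 0.05,  0.05, 0.0],
            [-0.05,  0.05, 0.0],
        ], dtype=np.float64)

        # Where a detector found those corners in the camera image (pixels).
        image_points = np.array([
            [310.0, 250.0],
            [420.0, 255.0],
            [415.0, 370.0],
            [305.0, 365.0],
        ], dtype=np.float64)

        # Pinhole intrinsics: focal lengths fx, fy and principal point cx, cy.
        camera_matrix = np.array([
            [800.0,   0.0, 320.0],
            [  0.0, 800.0, 240.0],
            [  0.0,   0.0,   1.0],
        ])
        dist_coeffs = np.zeros(5)   # assume an undistorted image

        ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                                      camera_matrix, dist_coeffs)
        # rvec/tvec give the marker's pose in the camera frame (invert for the
        # camera vs the marker). No depth sensor, no machine learning involved.
        ```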

        Of course, it requires a marker (or markers) – but the point is it can be done even on a cheap phone without any special hardware; only a single camera is required. We have been doing AR like this for decades now; people tend to forget that HoloLens or Magic Leap is not the only (nor the best) way of doing AR (same as folks forgetting that VR didn’t start with the Oculus Rift).

        Depth mapping by itself won’t give you this information anyway, regardless of whether you are using a stereoscopic camera (like PSVR), structured light (Kinect 1) or ToF sensors (Kinect 2). The depth map is generally used for two things – occlusions (making sure your virtual overlay gets occluded by real-world objects correctly, which helps with believability) and as an input to SLAM, if the device is using that (RGB-D SLAM is easier/more robust than an RGB-only one). But not all AR systems do/need that.
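
        (And a sketch of that occlusion use in a few lines of NumPy: the virtual layer is drawn only where it is closer than the sensed depth. All arrays are dummies for illustration.)

        ```python
        import numpy as np

        h, w = 480, 640
        sensed_depth = np.full((h, w), 2.0)       # metres, from a depth sensor
        virtual_depth = np.full((h, w), np.inf)   # depth of each virtual pixel
        virtual_alpha = np.zeros((h, w))          # rendered virtual layer alpha

        # A virtual object sits 1.5 m away in the middle of the view...
        virtual_depth[200:280, 280:360] = 1.5
        virtual_alpha[200:280, 280:360] = 1.0
        # ...and a real object (say, a hand) is only 1.0 m away on its right half.
        sensed_depth[200:280, 320:360] = 1.0

        visible = virtual_depth < sensed_depth    # per-pixel depth test
        composited_alpha = virtual_alpha * visible
        # The hand now correctly hides half the overlay; without the depth map
        # the virtual square would float in front of everything.
        ```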

        Also, machine learning isn’t needed for any of this; it is just basic geometry and some least-squares optimization.

        >Also the author didn’t say eye tracking was required by AR, they just said it would separate these from simple “Smartglasses”. Smartglasses are basically just HUDs.

        Then you either don’t understand what he wrote or you don’t understand that “smartglasses” can’t do any AR because they are unable to track the environment, regardless of whether or not they have eye tracking. The entire argument he is making is wrong.

  • Ad

    Can’t wait for people to refuse to buy VR headsets because “VR contacts are just around the corner,” or “headsets are so bulky, why isn’t it in my eye where it can catch fire and kill me?”

  • Ad

    “seeking FDA-approval for its contacts as a medical device that the company says will display text, sense objects, track eye motion, and see in the dark to some degree,”

    Have any AR or VR firms said that they will refuse to work with defense contractors or militaries?

  • ShiftyInc

    So here is a question: how the hell is this being powered if it’s in your eye? Even if it just has a tiny battery on the lens, that would be super dangerous. Let’s not forget that this stuff, to this day, still explodes or catches fire. Having that happen with a phone in your pocket or hand sucks, but you will be alive. If it happens on your soft eyeball, you could instantly be dead or brain-dead.

    • Andrew Jakobs

      They say they are using a solid-state battery… And if it happens, you’re probably ‘just’ blind; it’s not THAT big of a battery.

      • ShiftyInc

        Never heard of that one, but it’s very new so I’m not surprised. And yeah, blind is much better than dead. Still not a big fan of sticking something electronic in my eyes. But then again, the same thing could happen when you are wearing a normal VR headset.

    • Kimberle McDonald

      Some tech, like RFID, can get its power over the air. It’s not a stretch to say your smartphone could power it.

  • Foreign Devil

    Having worn ordinary contacts through all my teen years… there is so much that could go wrong with this. Even regular contacts often led to eye infections. So happy for LASIK.

    • Trenix

      I agree that this is going to be a bad idea, but I never had an issue with my contacts. I’ve certainly never got an eye infection from them. You sure you’re not one of those people who never cleaned your contacts, reused the same solution from the day before, and even slept with them? Those people are asking for problems.


  • digression

    “Waiter, waiter, I wish that thing in my eye was only a fly”

  • Jim P

    Democrats ruling. Welcome to hell

  • flamaest

    $108 million in venture capital is PEANUTS.

  • That is really, REALLY cool. And I would never put that in my eye. EVER. But wow, so cool! That is one of the most amazing things I would never want.

  • Amazing, simply amazing. I advise you to read both of the linked articles, because they are very informative.

  • Jack H

    The line about eye-tracking reminds me of search coil tracking where something like a glasses frame is used to track the pose of coils of wire in contact lenses. I wonder if resonant or induction charging and wireless data could be passed to the same coils for a three-in-one use.

  • Jim P

    Just got contacts. My eye does not open enough to get them in, so now I’m going back to glasses.

  • Kimberle McDonald

    This is definitely the future of AR/VR. After that… implants, and we will finally be cyborgs.

  • That’s not how optics work.

    How are you supposed to focus on a screen that’s on your cornea? Take a pin, bring it all the way up to your eye, as close as you feel comfortable, then tell me what you see.

    Even with some magic light field voodoo it would still not be able to present an image at a viewable focusing distance, as the area is too small for the required light field.

    • kalqlate

      How to focus on a screen that’s on your cornea: an array of controllable MEMS microlenses. The tech exists and is continually improving.

      Presenting an image at a viewable focusing distance: I would do it as follows. A 9-DoF IMU in the control silicon, outside the area of the pupil, tracks all eye movement and communicates this and other information wirelessly, via onboard WiFi or Bluetooth, to the companion contact in the other eye and to a belt, necklace, or over-the-ear controller. With that information, the various pieces of information displayed in both contacts can be held in the desired virtual position, in stereoscopic X/Y and depth, with proper offsetting to compensate for all motion. The varying distance measured between the two pupils determines which focal plane is in focus and blurs the others, all to match how focus and blurring happen in the real world.
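
      (A toy version of that last step: with both eyes’ gaze directions known, the fixation distance follows from simple trigonometry. The IPD and angles below are example numbers, not anything Mojo has published.)

      ```python
      import math

      IPD_M = 0.063   # interpupillary distance, ~63 mm (example value)

      def fixation_distance_m(vergence_deg: float) -> float:
          """Distance at which the two gaze rays cross, given the total
          vergence angle between them (symmetric fixation assumed)."""
          half_angle = math.radians(vergence_deg) / 2.0
          return (IPD_M / 2.0) / math.tan(half_angle)

      print(f"{fixation_distance_m(7.2):.2f} m")   # ~0.50 m: reading distance
      print(f"{fixation_distance_m(1.8):.2f} m")   # ~2.01 m: across the room
      ```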