Meta’s Reported Plan to Add Facial Recognition to Smart Glasses Slammed by ACLU-led Coalition


An ACLU-led coalition representing more than 70 civil liberties advocacy groups is pushing back against Meta’s reported plans to bring facial recognition to its smart glasses.

The New York Times initially reported in February that Meta is currently exploring who should be recognizable through its smart glasses, as the company ostensibly hopes to bring some form of facial recognition to Ray-Ban and Oakley smart glasses.

According to the NYT report, possible options include “recognizing people a user knows because they are connected on a Meta platform, and identifying people whom the user may not know but who have a public account on a Meta site like Instagram.”

Now, as reported by Wired, an ACLU-led coalition hopes to oppose those plans, which the group says could turn Meta’s smart glasses into ad hoc “surveillance glasses,” capable of endangering consumers and vulnerable communities, and broadly undermining civil rights and civil liberties.

Ray-Ban Meta ‘Scriber’ model | Image courtesy Meta, EssilorLuxottica

The group, which also includes the Electronic Privacy Information Center (EPIC), Fight for the Future, Access Now, and the Leadership Conference on Civil and Human Rights, issued an open letter to Meta CEO Mark Zuckerberg on Monday urging the company to stop and publicly disavow its plans.


“People should be able to move through their daily lives without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are silently and invisibly verifying their identities and potentially matching their names to a wealth of readily available data about their habits, hobbies, relationships, health, and behaviors,” the letter reads.

Meta Ray-Ban Display Glasses & Neural Band | Image courtesy Meta

“It isn’t hard to see how easily this technology could be abused by corporations, private individuals, and the government to target immigrants, LGBTQIA+ people, and other vulnerable groups,” an ACLU petition adds. “It also puts domestic violence and stalking survivors at risk and could even be used to go after protestors or people who criticize the government.”

Meta has bowed to public pressure before, albeit after years of costly litigation. As mentioned by Wired, in November 2021 the company ended Facebook’s photo-tagging system and said it would delete the facial recognition templates of more than a billion users, describing the decision at the time as “a company-wide move to limit the use of facial recognition in our products.”

Neither Meta nor its hardware partner EssilorLuxottica responded to Wired’s request for comment.

This follows news in February that Meta’s smart glasses partner EssilorLuxottica sold over seven million smart glasses in 2025 alone; that year the companies not only shipped a hardware refresh of Ray-Ban Meta, but also Oakley Meta HSTN, Oakley Meta Vanguard, and the $800 Meta Ray-Ban Display glasses—the company’s first smart glasses to include a heads-up display.

It’s not just Meta making smart glasses, though. A rash of competitors are currently preparing their own smart glasses for consumer release: Google, Samsung, and Amazon have all announced their own devices, while Apple is also reportedly developing multiple pairs.


Well before the first modern XR products hit the market, Scott recognized the potential of the technology and set out to understand and document its growth. He has been professionally reporting on the space for nearly a decade as Editor at Road to VR, authoring more than 4,000 articles on the topic. Scott brings that seasoned insight to his reporting from major industry events across the globe.
  • fcpw

    No one is forcing people to buy them. That said, you would be quite dumb to do so.

    • Yeah, but the problem here is that even if you don't buy them, you're seen through the glasses of all the people around you who bought them

    • Oxi

      The entire problem is not that you might be forced to wear them, but that someone else will wear them and be able to pull up your name and info at will!

  • STL

    Let’s be very clear: the ability to recognize people based on photos or brief encounters is a genuine talent. Many people can identify almost everyone they’ve ever met. For those who lack this ability, an AI face-recognition tool would function as a prosthetic replacement—just like glasses, hearing aids, or other assistive technologies.

    I am one of the few people who struggles severely with this. I couldn’t even recognize my own mother (!) in person at times. This condition cost me my job as a highly paid consultant because I failed to recognize clients and coworkers when I saw them on the street or outside the office.

    Please keep this in perspective: people who wear glasses are not told they must manage without them just because most people have good natural vision. We don’t withhold artificial limbs from amputees or screen readers from blind people simply because others don’t need them. Face-recognition assistance for those with prosopagnosia (face blindness) deserves the same acceptance and support.

    • Herbert Werters

      Yeah, but not from Meta in a consumer device. Are you serious?

    • Oxi

      I don't think that's a fair comparison, sorry. The whole issue is not that I might ping someone and ask permission for my device to be able to recognize them; it's that this gives any person the same ability to recognize and identify someone that Facebook's algorithms have. Glasses don't let me see you a mile away through a security camera. Similar systems are already getting people seriously harmed: government agencies, building off the work of Facebook and leading AI firms, are buying software to pluck people out of crowds for arrest, or to document their presence so they can be harassed later for repeated civil disobedience.

    • Christian Schildwaechter

      Now imagine a world where everybody can buy these and not only use them to put a name on faces, but also get extra background information from Meta's AI. You meet a potential new client, he looks at you with his Meta smartglasses and gets "This is STL, he suffers from prosopagnosia, so meetings with him and team socializing might be a problem." How many jobs as a highly paid consultant do you expect to still get in that world?

      The issue here is that this won't be limited to those who need it, and that it will connect faces to Meta's vast pool of personal data with reports generated by their AI. Based on past behavior, Meta absolutely cannot be trusted not to abuse collected data, and they didn't even bother to address concerns. They could at least attempt, for example, offering a phone app that sends out a Bluetooth beacon that disables facial recognition within a 10m diameter, allowing people to "opt out" IRL. Instead they assume that everybody automatically accepts video surveillance with facial recognition.

      Meta will have to geofence this feature, as using their smartglasses this way in Europe would straight up be breaking lots of laws about data privacy and surveillance, getting the wearer into serious trouble. In Europe you aren't allowed to record people in a recognizable way, not even the police are allowed to use facial recognition wherever they like, and private security cameras are only allowed to cover a max of 1m of public space, for example in front of a shop. And there are lots of laws and supreme court rulings explaining why this is important.

      • STL

        While I do appreciate your considerations, the subject is a more philosophical one. Do I want to live in a society based on deceit or on honesty? In my case the solution was to switch to a job with less social contact and higher technology impact. At least I do remember numbers pretty well!
        Since I'm lazy, I don't want to dive deeper into it, but I'm sure you got my point. And it's still my choice what I want to share and which information I give freely to the public, so why would I bother if this is just easier to find?

      • ichigo

        I think most people are being hyperbolic and looking at the worst-case scenario with this emerging technology. But I do think safeguards need to be implemented and transparent here. I don't think laws/regulation from redundant overpaid bureaucrats, made to prevent other bureaucrats from doing something, is the solution. In most cases it just ends up applied as a burden against the average Joe who points the camera from his small business into the street a little to catch 'repeat' shoplifters.

        Ironically, Europe already has some of the world's densest surveillance networks despite (or alongside) its strict privacy laws… very protected from government thanks to those regulations, I'm sure. (Rules on paper =/= real-world surveillance.) But I'm far more concerned about what a Silicon Valley megacorp is doing with its conformist collective staff and their monoculture ideology. (Have a look at what most of the staff donate to and support.)

        We can have the benefits of helpful AR tools (including for people with prosopagnosia) without handing over unchecked power. Let's demand better engineering and transparency, and not lean on overpaid bureaucrats looking to suck money out of it with backhand deals and threats of regulation. The same undemocratic bureaucrats who often retire from EU roles straight into high-paying positions at the very corporations they "regulated".

        • Christian Schildwaechter

          Whether bureaucrats are overpaid and redundant or not, legislation has been the only thing putting a limit on endless data grabbing so far, and the GDPR is one of the EU's success stories, alongside a lot of other consumer protection laws that ended up benefiting the whole world.

          Of course all regulation adds friction and has some negative consequences for businesses. Companies like OpenAI wouldn't work in the EU for numerous reasons, from finances to regulatory oversight and very strict data protection laws. And of course it would be convenient to just use whatever (camera) data you can get in whatever way you like, and there will be numerous applications that simply won't work if you always have to ask for permission first.

          But the philosophy behind this is that the limit of your personal freedom is the personal freedom of others. So just because someone lives on the cutting edge and can make use of advanced technology like AR and facial recognition, this still doesn't mean they are automatically allowed to do so in a way that restricts the personal rights of others who may not even be aware of what is already possible. Laws ideally allow people to live together better, which also requires protecting some of them from others with more know-how, tech, or money, even if it means all those fancy new toys cannot always be used to their full potential. Every time figuring out data privacy has been left to the market, it has ended in a few large companies hoarding (and selling) data without any acceptable restraint.

          And regarding existing surveillance, one has to differentiate between countries. While EU laws are very strict and mostly limit surveillance, the UK for example has long been a champion of large-scale public camera installations, and as one of the Five Eyes countries is also heavily involved in global internet surveillance.

          Which is still comparably harmless considering that the FBI recently admitted they were buying tracking data from US companies to work around having to first get court orders as required by law. The whole discussion about Chinese-owned TikTok being a national security threat to the US was a big farce: TikTok went out of its way to ensure user data was secure and not accessible by the Chinese government, while that government didn't even need TikTok, because it could simply buy US citizen profile data from lots of large US internet companies, precisely because the US lacks a data protection law like the GDPR.

          • ichigo

            I get the intent behind strong privacy rules like GDPR limiting unchecked data grabs, and it sounds noble… on paper. But data grabbing didn't magically stop because of GDPR and its fines!

            Being overpaid and redundant isn't a side issue; it's core to the dirtiness of it all. The character and incentives of the people introducing and enforcing these rules matter. E.U. officials and commissioners slide straight into high-paying gigs at the very Big Tech firms (or their law firms) they were supposedly regulating. That's not neutral protection.

            Saying E.U. legislation is 'the only thing' that's limited data grabbing ignores how markets and tech itself have evolved safeguards. Companies face massive reputational and competitive pressure, users abandon invasive products, and rivals (or open-source alternatives) fill gaps with better privacy features.

            I get the appeal of painting GDPR as one of the EU's great 'success stories'; it sounds principled and makes a nice fairy tale. But cherry-picking a few fines on Big Tech or some consumer protection wins doesn't make the whole over-engineered system effective or necessary. Plenty of countries have managed strong privacy norms, data security, and consumer safeguards without layering on the EU's massive bureaucracy, conflicting national implementations, and endless regulatory sprawl. You don't need this machine to get decent results; individual nations could (and did, pre-GDPR) handle it themselves with far less friction and self-congratulation, and with less money and power handed over to unelected officials.

            In reality GDPR has been a mixed bag at best and a drag on innovation at worst. Compliance costs hit small businesses hardest. Studies show it reduced venture deals in the EU (especially for data-heavy or consumer apps), halved new app entries in some cases, and increased market concentration favoring the very megacorps it claims to restrain. OpenAI and others avoiding heavy EU operations isn't a bug; it's a feature of how this stifles cutting-edge work. How many jobs and how much investment have been lost?

            Despite the strict paper rules, the EU (and the UK) maintain some of the densest public CCTV networks in the West. The US example of agencies buying commercial data to dodge warrants shows governments everywhere find workarounds. Pretending the EU's framework magically fixes global data markets ignores how much of it worldwide is driven by voluntary sharing for convenience and services. Strict consent everywhere would kneecap useful tech too.

            Your philosophy point, freedom limited by others' rights, is reasonable in theory and sounds nice, but applied via E.U.-style top-down rules it often protects incumbents more than the vulnerable. It creates friction that slows beneficial tools. We shouldn't neuter innovation because of hypothetical abuse, or small known abuse. And a bloated system where bureaucrats "regulate" then cash in is not the solution here, and never should be.

            The whole TikTok thing is way more complex than that; it's not about data protection, it's about influence etc. I think we can all agree on the low-intelligence people it attracts and manipulates, just like other new apps that popped up around 2022 that seem to push the same nonsense. And it's well known how different TikTok is in its native land. But as I said, way more complex than just being a "big farce" over data.

          • Christian Schildwaechter

            If customers actually avoided abusive companies, or competition from other companies or open source pushed them out of the market, Meta would have gone broke a decade ago. Their solely user-profiling-based ad revenue allowed them to buy into other fields by simply purchasing competitors like Instagram or WhatsApp to further their dominance. The sole thing that has stopped them so far is regulatory limits. And the thing that put the largest dent into their revenue was Apple forcing them to ask iOS users for permission before tracking them, after the US government had failed to do so.

            Facebook fearing they’d be forced to split off Instagram and/or WhatsApp from the social network was a central reason to split their XR business into the separate Meta unit, so their back then big bet on future platform dominance could still be managed without any oversight in case Facebook itself was declared a monopoly and the company put under regulatory observation.

            No doubt the EU comes with a lot of burdensome bureaucracy and is in dire need of lots of reforms, but that's not at all an easy task when dozens of countries with very varying interests have to consent on how to do it. It's like Churchill stating that democracy is the worst political system, except for all the others that have been tried. How efficient or inefficient it is is up for debate, and IMHO it is nowhere near as doom and gloom as you paint it. Most Europeans are very aware that the EU is far from as flexible and efficient as one would wish, and often causes (unnecessary) extra work, but the vast majority also sees that the benefits outweigh the costs, including having the longest phase of peace among countries that went to war with each other for centuries, and lifting living standards in lots of countries while sticking to strong consumer, worker, and environmental protection. And I'm pretty sure a lot of Europeans see OpenAI avoiding operations in the EU as a clear win.

          • ichigo

            "If customers actually avoided abusive companies" – this is a dishonest take on what I was saying. I didn't even say every customer, and it was one of many things. Not to mention, if it was solely GDPR stopping them, wouldn't that also make them go broke with fines? I'm not engaging if that's how you start.

            And most Europeans think E.U. is short for Europe (the continent); that's why they use 'Europe' interchangeably with the E.U. when talking about it…

  • Tech

    NSA/CIA dream – you wear these and work as a free spy for those agencies.

  • marco

    Probably no one remembers that the first-gen Google Glass received a similar slam years back.

  • From the title, I thought there was some legal constraint blocking Meta. Instead it's just some people sending Zuck an email. Well, good luck with that.

  • Oxi

    Just straight up using the chaos right now as a way to get this done without backlash.

    • Herbert Werters

      It won't do any good if there's an even bigger blow later on. This isn't some minor issue that society could really accept or get over. I think “move fast, break things” really isn't a good idea in this case. For anyone.

    • STL

      Chaos is a ladder. (GOT)

  • ichigo

    None of this makes sense. While I understand individual concerns (and let's ignore the surge in donations to the ACLU from Silicon Valley), this comes across more as a shakedown by an ideologically captured group that targets and weaponizes.

    Anyone with common sense can have a good chuckle at the idea that modern western governments and a packed civil service would target the "minorities" they proclaim most affected here. Some of us live in the real world… And I'm not even sure why they would think these people would be most affected; it seems oddly specific and telling.

    Also, I think they are trying to blur the definitions again when they use "immigrant", because I see no reason why immigrants would be targeted unless they did something that most countries treat as illegal.

    Meta should be transparent about safeguards but the ACLU's approach is less about liberty and more about control.

    TL;DR: it either affects everyone, or this is a BS shakedown and divisive nonsense…

    (In regards to protesting: the same group covering their faces in black to "protest" all the time should be safe from any IDing. And there are plenty of cameras at these things anyway; we all see what they do and get away with.)