Meta’s Smart Glasses Face Lawsuit Over Privacy Concerns

The rapid rise of AI-powered consumer devices has sparked growing debate about how user data is collected and processed. Now, tech giant Meta is facing a new lawsuit in the United States over allegations related to its smart glasses and how the captured footage is handled.

The lawsuit was filed by Gina Bartone from New Jersey and Mateo Canu from California, who claim the company misled consumers about privacy protections. According to the complaint, the marketing of Ray-Ban Meta Smart Glasses included phrases such as “designed for privacy” and “controlled by you,” while failing to clearly disclose that footage could be reviewed by overseas workers. The lawsuit also names manufacturing partner Luxottica as a defendant.

The plaintiffs argue they relied on these privacy assurances when purchasing the product, and that there was no clear warning their recordings might be viewed by contractors abroad. The issue may be widespread: more than seven million units of the smart glasses were reportedly sold in 2025, with the footage feeding into a data pipeline used to train Meta’s AI systems. According to the complaint, users have no option to fully opt out of this process.

Findings From the Swedish Investigation

The lawsuit follows a joint investigation published on February 27 by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten.

Their report revealed that data annotators in Nairobi, Kenya were reviewing content captured by the smart glasses. These workers were employed by Sama, a subcontractor that helps label images, audio recordings, and transcripts to improve AI systems.

Some workers told journalists they encountered extremely sensitive material while performing their tasks. They reported seeing footage of people using the toilet, undressing, or engaging in sexual activity while the glasses were recording. One worker reportedly said: “We see everything — from living rooms to naked bodies.”

The investigation also cited a former Meta employee who said the automated face-blurring system designed to protect identities “does not always function as intended,” leaving some faces visible. Workers also reported seeing bank card information and hearing conversations related to criminal activity.

Meta’s Response

Meta told the BBC that when users share content with its AI services, the company may rely on contractors to evaluate that data. The company added that such practices are described in its privacy policy and that the data is filtered before review to help protect user privacy.

Regulatory Scrutiny

The controversy has also attracted attention from regulators. The U.K.’s Information Commissioner’s Office described the allegations as “concerning” and said it would request information from Meta regarding its compliance with data protection laws.

Meanwhile, European regulators are examining whether transferring EU user data to contractors in Kenya complies with the standards set by the General Data Protection Regulation.

Growing Debate Around Smart Glasses

The issue adds to growing scrutiny surrounding Meta’s smart glasses, which the company has positioned as a key part of its hardware strategy. Privacy advocates, including the Electronic Privacy Information Center, have already called on regulators to investigate plans to add facial recognition features to the devices.

The controversy highlights a broader challenge facing AI-powered wearables: balancing technological innovation with the protection of user privacy.
