THE AI-powered glasses created by Meta have been modified to “reveal anyone’s personal details” just by looking at them.
That includes their name, address, phone number and even their relatives' names.
Two Harvard students, AnhPhu Nguyen and Caine Ardayfio, recently revealed they had installed invasive facial recognition software, called PimEyes, onto a pair of Meta specs.
PimEyes is a controversial search engine that emerged as a hobby project in 2017, before being commercialised in 2019.
It allows users to search the whole web for images of their face, or for someone else’s – be it a rogue piece of marketing or an accidental ‘photobomb’.
The students then used a large language model (LLM) to combine all that data, making it possible to find out someone's name and where they live, even if they're just a passerby.
“This synergy between LLMs and reverse face search allows for fully automatic and comprehensive data extraction that was previously not possible with traditional methods alone,” the pair wrote in a Google document.
“Our goal is to demonstrate the current capabilities of smart glasses, face search engines, LLMs, and public databases, raising awareness that extracting someone’s home address and other personal details from just their face on the street is possible today.”
Let's break it down even further...
Essentially, AI (artificial intelligence) is used to detect when you're looking at someone's face through the glasses.
Then, it scours the entire internet for all the instances in which that face occurs.
For example, your social media profile or your business’ website.
Finally, it uses other online data sources, such as articles and voter registration databases, to figure out a person's name, address, phone number and their relatives' names.
All that information is then fed back to a phone app the students created.
The terrifying modification, dubbed ‘I-XRAY’, is an obvious breach of privacy.
Worse, it could be used for far more nefarious and dangerous purposes – which is why the students are refusing to share the code publicly.
“Some dude could just find some girl’s home address on the train and just follow them home,” Nguyen told 404 Media.
Nguyen added that they used Meta Ray Bans 2 for their project because the smart glasses “look almost indistinguishable from regular glasses”.
What initially began as a side project has now become a tool to demonstrate the "current capabilities of smart glasses, face search engines, LLMs, and public databases", Nguyen and Ardayfio wrote.
The pair tested the glasses on strangers on the subway, and on students on their campus.
Students were eventually shown their own results, with one being shocked to see images of her from middle school – which is roughly Year 5 to Year 7 in the UK.
The Sun has contacted Meta for comment.
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: the EU's General Data Protection Regulation, adopted in 2016, protects personal data, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects – such as AI prescribing the wrong health information.