‘Goodbye Meta AI.’
If you’ve been on Instagram in the last couple of weeks, these are three words you’ve probably seen – and three words you’ve seen in some form or another over the last few months.
And all these viral hoax Instagram posts tend to say the same thing. As last week’s put it: ‘I do not give Meta or anyone else permission to use any of my personal data, profile information or photos.’
Panic, confusion and frustration are just some of the ways people have felt about Meta’s announcement earlier this year that it’s training its artificial intelligence (AI) tech by feeding it public Facebook and Instagram posts.
Experts don’t really blame the public for being ticked off by this – AI is a nascent technology advancing at breakneck speed, and privacy and copyright laws are struggling to make sense of it.
So, you might be wondering: what on earth is Meta AI, and what content is it actually looking at?
If you’ve ever met an AI, it’s probably not Meta AI. ChatGPT, Apple’s Siri and Amazon’s Alexa are all well-known examples of smart assistant software powered by AI.
Meta AI is the latest on the scene. It’s a totally free tool wedged into the news feeds, chats and search bars of Facebook, Messenger, Instagram and WhatsApp.
AI isn’t actually anything new at Meta. The company has been using the technology for years, says Dr Kimberley Hardcastle, who leads Northumbria University’s research on the subject – it’s generative AI that’s new.
‘AI has been used for years on social media platforms like Facebook to recommend content, personalise ads and moderate posts by analysing patterns in user data,’ she tells Metro.
Generative AI, however, can ‘create new content, such as text and images, rather than merely analysing or categorising data’.
About 55% of Britons struggle to explain what AI actually is, and some don’t see it as particularly useful.
Meta AI is powered by LLaMA 3, a large language model – software that can comprehend and generate human-like language and images.
‘Meta AI is even more useful with voice and vision – you can talk to your assistant and it will talk back to you, and it can see what you see when you share photos and ask questions about the things in them,’ a Meta spokesperson explained to Metro.
As well as ‘creating’ images, the chatbot can ‘answer virtually any question’. Mid-chat on Facebook, for example, a user could type ‘@Meta AI’ and ask which spot in town does the best kebab, or message the chatbot on WhatsApp and ask it to generate a photorealistic image of Vladimir Putin playing footie on the Moon.
For Britons, however, Meta AI isn’t actually available yet – its rollout was delayed in June.
‘Meta AI is on track to be the most used AI assistant in the world by the end of this year. It currently has over 400 million monthly active users and 185 million weekly active users across our products,’ the Meta spokesperson adds.
Meta AI can’t go to school and get a uni degree, which is why engineers want it to ‘learn’ about the world by reading people’s public posts. The bot also trawls through websites, books, news articles and research papers.
As silly as it might sound, Meta says doing so helps the software better understand British culture, history and our pretty goofy way of chatting.
‘This includes analysing posts, comments, and interactions to understand language patterns and user behaviour,’ adds Dr Hardcastle, who is also an assistant professor in marketing at the Newcastle Business School.
‘The AI’s capability to understand and process user-generated content may start to feel like increased surveillance, something we have already witnessed before on Meta platforms.’
Meta has said it’s complying with privacy laws and that the data its AI services are peeking at is all public anyway. In other words, the Meta spokesperson stresses, it’s not direct messages, private posts or anything uploaded by users under 18.
‘We are committed to developing AI responsibly and transparently,’ they say.
‘We’ll only use information that is publicly available online. We also use information you’ve shared publicly on Meta’s products and services, such as public posts and comments, or public photos and captions.
‘When the features are available, we may also use the information people share when interacting with our generative AI features, such as Meta AI or businesses who use generative AI, to build and improve our products.
‘We do not use people’s private messages with friends and family, nor will we use data from private accounts.’
It depends on who you’re asking. The Meta spokesperson says not at all – ‘we’ll only use publicly available information,’ they say.
Gaël Duval, a data privacy expert, says Meta is the latest tech titan to plug AI into its products. He worries that ‘public’ user information includes everything from ‘sexual desires’ to ‘health troubles’.
Just shy of eight in 10 internet users are on a Meta platform – nearly 4,000,000,000 people overall. That’s a lot of data, both private and public, which means the company can easily build ‘incredibly detailed, accurate’ profiles of its users.
‘Your personal data is an incredibly valuable currency and is assisting to make giant corporations even more wealthy as they refine their already powerful technology,’ Duval, who created /e/OS, an Android-based operating system completely free from tracking and data collection, tells Metro.
‘Even seemingly mundane information like what groceries you buy or what news articles you click on makes them money. This is the trade-off for using their “services” free of charge.’
The European Center for Digital Rights, known as NOYB (None of Your Business), filed complaints in several European countries about Meta’s AI policy change in June, stalling its rollout in the continent.
‘Instead of asking users for their consent (opt-in), Meta argues that it has a legitimate interest that overrides the fundamental right to data protection and privacy of European users,’ the non-profit said.
Aside from privacy, Dr Hardcastle says generative AI itself has its own issues. ‘Gen AI’s ability to generate and (potentially) manipulate information brings new privacy concerns and ethical challenges, potentially reshaping the future of how platforms influence and engage users,’ she says.
Yes! And this is what Facebook’s terms of service say about AI:
‘We use and develop advanced technologies such as artificial intelligence, machine learning systems and augmented reality so that people can use our Products safely regardless of physical ability or geographic location. For example, technology such as this helps people who have visual impairments understand what or who is in photos or videos shared on Facebook or Instagram.’
About nine in 10 people accept legal terms and conditions without reading them. All new users have to sign Meta’s terms.
‘People should be very concerned that tech giants are stealthily updating terms and conditions to include the use of their information to train AI,’ says Duval.
Meta updated its privacy terms in May, and they went into effect on June 26. While the terms have been in place for a while, it’s only now that the training is kicking off – in Britain, the Information Commissioner’s Office (ICO) is keeping an eye on things.
‘The ICO has not provided regulatory approval for the processing and it is for Meta to ensure and demonstrate ongoing compliance,’ the privacy watchdog said last month.
Duval raised concerns that users need to apply to opt out of having their public posts fed to Meta AI.
‘This is just another in a long list of instances where big tech providers prioritise power over people; such companies rely on people being unaware and granting “passive consent”,’ he says.
However, the company’s spokesperson said otherwise.
‘From this week, adults based in the UK who use Facebook and Instagram will start receiving in-app notifications to explain what we’re doing, including how they can access a simple objection form at any time to object to their data being used to train our generative AI models,’ the Meta spokesperson explained.
‘We’ll honour all objection forms already received, as well as new objection forms submitted.
‘We won’t contact people who have already objected as we’ll continue to honour their choice.’
For Dr Hardcastle, generative AI should in no way be a race. Society needs to grasp the technology before embedding it into our lives – a lesson from when social media was first created.
‘This time, however, we have the benefit of hindsight to recognise the risks and the need for careful oversight,’ she says, ‘before AI becomes too deeply intertwined in our lives.’