AI-POWERED virtual assistants may seem like they’re here to help – but many experts refer to them as a “security nightmare.”
Tech giants are developing artificial intelligence tools to personalize the user experience, but these features are growing increasingly invasive.
One of the best examples is Microsoft Recall, an upcoming feature for Copilot+ PCs billed as “your everyday AI companion.”
Recall takes screen captures of a device every few seconds to create a library of searchable content.
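To picture what that means in practice, here is a minimal Python sketch of a Recall-style capture loop. It is purely illustrative, not Microsoft’s code: the Pillow and pytesseract libraries and the SQLite full-text index are assumptions chosen for the example.

```python
import sqlite3
import time

from PIL import ImageGrab   # screen capture (Pillow)
import pytesseract          # OCR via the Tesseract engine

# A full-text-searchable store for whatever text appears on screen
db = sqlite3.connect("recall_sketch.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(ts, text)")

def capture_once() -> None:
    image = ImageGrab.grab()                   # grab the entire screen
    text = pytesseract.image_to_string(image)  # recognize any visible text
    db.execute("INSERT INTO shots VALUES (?, ?)", (time.ctime(), text))
    db.commit()

# Recall is reported to capture "every few seconds"
while True:
    capture_once()
    time.sleep(5)
```

Note that whatever is visible on screen, sensitive or not, ends up in the searchable index; that is precisely what worries privacy experts.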
Its release was postponed indefinitely following an outpouring of criticism from data privacy experts, including the Information Commissioner’s Office in the UK.
Recall will now be turned off by default and released exclusively as a preview to users in the Windows Insider Program.
Despite changes announced in a June 13 notice, some features are expected to remain the same.
Unless the user disables the tool entirely, only private browsing windows in Edge, Chrome, Opera, and Firefox are concealed from the virtual assistant’s prying eyes.
But help files indicate screenshots may still be taken in excluded apps and on excluded webpages, saved as temporary files, and then deleted.
Even after deletion, those files could hypothetically be recovered from the storage drive by anyone with access to your Windows account, including hackers.
Despite the blowback, other tech giants are following suit.
Google’s forthcoming virtual assistant, codenamed Project Astra, will use screenshots as its input data stream.
Some experts have gone so far as to refer to Astra as “a Google version of Microsoft Recall.”
Apple recently announced Apple Intelligence, and while its features are shrouded in secrecy, we know Siri will have “onscreen awareness” when processing requests.
To fully understand the risks of Microsoft Recall and similar tools, it is important to get a picture of what they are doing with your data.
Assistants like Recall can take screen captures, recognize text, and store this information regardless of sensitivity.
Moreover, the user’s control over the assistant is limited. Additional operating system functions may be activated, intentionally or accidentally, at the manufacturer’s command.
A virtual assistant could appear on a computer unexpectedly and without warning as part of an update.
Even though Microsoft says it will now be turned off by default, this may not be the case for competitors.
There is also the possibility of cybercriminals accessing your data, which will be stored conveniently in one place.
Microsoft, for one, has attempted to minimize the risk. As part of the changes announced on June 13, users must enable Recall through the Windows Hello authentication process.
This biometric method relies on unique personal identifiers like a fingerprint, iris scan, or facial recognition.
Screenshots will also be encrypted, meaning the data can only be viewed once the user confirms their identity.
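As a rough sketch of that encrypt-at-rest pattern (this is not Microsoft’s actual scheme; the inline key handling and the Python cryptography library are assumptions for illustration), each capture could be sealed with a key that is only released after authentication:

```python
from cryptography.fernet import Fernet

# In a real system the key would live behind Windows Hello / the TPM;
# generating it inline here is purely for the sketch.
key = Fernet.generate_key()
vault = Fernet(key)

def store_screenshot(raw_png: bytes) -> bytes:
    """Encrypt a capture before it ever touches the disk."""
    return vault.encrypt(raw_png)

def view_screenshot(token: bytes, user_verified: bool) -> bytes:
    """Refuse to decrypt until the user has proven their identity."""
    if not user_verified:
        raise PermissionError("Biometric check required before viewing")
    return vault.decrypt(token)
```

The design choice matters: if the key is released whenever the account is unlocked, anything malware can do while you are logged in, it can do to your screenshots too.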
However, a hacker might manage to breach these security measures.
As the program takes screenshots indiscriminately, a malicious actor could access sensitive information like login credentials, emails, and more in one fell swoop.
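To see why, consider how little effort mining such a store would take. This sketch assumes the hypothetical recall_sketch.db index from the capture-loop example above:

```python
import sqlite3

# Reconnect to the hypothetical index built by the capture loop
db = sqlite3.connect("recall_sketch.db")

# One full-text query surfaces every capture that mentioned credentials
rows = db.execute(
    "SELECT ts, text FROM shots WHERE shots MATCH 'password OR username OR login'"
)
for ts, text in rows:
    print(ts, text[:120])
```

A single query, and months of on-screen secrets spill out at once.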
Microsoft declined to comment on allegations that Recall posed a security risk.
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs – Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn’t function otherwise.
Ethics – When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being used.
Privacy – Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta rolls out its AI assistants across platforms like Facebook and Instagram. There have been legal challenges to this: in 2016, the EU adopted the General Data Protection Regulation (GDPR) to protect personal data, and similar laws are in the works in the United States.
Misinformation – As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google’s generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects, such as an AI dispensing the wrong health advice.