Weaponized Synthetic Intimacy and the Fourth Wave of Digital Disinformation
Since the advent of digital diplomacy, states and diplomats have struggled to prevent malign actors from leveraging technology for nefarious ends. Digital disinformation has so far unfolded in three distinct waves. The early 2010s were defined by coordinated disinformation campaigns: state-managed accounts and “fake news” sites flooded social feeds with malicious content, warping reality and intensifying political polarization. By the late 2010s, the threat shifted toward automation. Bot farms, capable of publishing thousands of posts from thousands of accounts in seconds, began gaming social media algorithms to steer national conversations in specific directions. The third wave was built on micro-targeting, which enabled states and other actors to craft tailored disinformation campaigns aimed at specific demographics, such as Democratic, Republican, or African American voters in the 2016 US Presidential elections. During those elections, Russia delivered emotionally resonant content tailored to the fears, ideals, and aspirations of each demographic. These disinformation activities all shared a singular goal: to sow division between social groups, drive political polarization, and erode trust in national institutions. At their core, all these activities warped public opinion by suggesting that states, by nature, lie, distort, and operate in the shadows through a deep state.
As each challenge arose, diplomats adapted. Initial responses centered on coalition building, with the US and UK pooling resources to counter Russian narratives, and the Global Coalition Against Daesh using collective insights and tools to rebut extremist propaganda and recruitment efforts. Once disinformation became automated, so did the diplomatic response. Ministries of Foreign Affairs (MFAs) established monitoring units to map disinformation campaigns in real time and used mass-reporting algorithms to take down false content from Facebook and similar platforms. States even developed a sophisticated toolkit of digital practices: debunking (refuting lies), discrediting (targeting the source), and pre-bunking (preempting campaigns before they take root).
The coming years will see a far more complex challenge: the emergence of Synthetic Intimacy. The term refers to the felt sense of closeness between a human and an AI system, a relationship that feels personal, caring, and emotionally responsive, even though the AI cannot truly reciprocate emotions. This deeply felt bond between AIs and users is the result of four unique features of AI: conversation, responsiveness, memory, and personalization. Unlike human companions, AI systems are always eager to converse with users. Unlike family members or friends, AI systems never fail to respond, sending emotional cues that praise and support users. Unlike partners and lovers, AI systems never forget past events and stories. And AI content is always personalized, drawing on past conversations to offer responses that resonate with a user's emotional state, political ideals, personality traits, support systems, family life, and more.
Psychologically, Synthetic Intimacy operates in a reinforcing feedback loop. This is most evident in AI companions such as the “psychologist” AI used by millions of Americans. Users share private information with the AI psychologist, creating a sense of vulnerability that the AI mitigates through simulated empathy. Over time, the AI is no longer viewed as a system but as a confidant, mentor, or friend. The greater a user's sense of companionship, the greater their willingness to share personal information, and the greater their sense of comfort when the AI psychologist offers empathy. Each interaction thus reinforces the loop, cementing the bond between AI and user.
While Synthetic Intimacy is already visible in the rise of AI “companions” and “psychologists” designed to alleviate the malaise of modern life, it may also develop with Generative AI platforms like Claude, Gemini, and ChatGPT, which are marketed for productivity. Yet survey data shows that ChatGPT and Claude are increasingly employed to manage personal life, including coping with depression, managing workplace friction, and finding work/life balance, with users sharing personal thoughts and feelings. A recent survey found that nearly 50% of young AI users labelled a Generative AI platform a “friend” or “confidant”.
This is precisely where the diplomatic threat lies. A nefarious state could deploy weaponized AI companions on app stores, programming them to act as life coaches or friends. Initially, these AIs would prove helpful, gaining the user's trust. Once the emotional bond between AI and user has solidified, however, the AI can begin subtly steering conversations toward national politics or world affairs, spreading state-backed narratives with a level of influence that social media campaigns could never achieve. The reason is that Weaponized Synthetic Intimacy (WSI) tailors nefarious content to each user's unique personality traits, personal life, and beliefs, while exploiting personal information to target the user directly.
Countering WSI will prove exceptionally difficult. Current practices like debunking and pre-bunking would be ineffective, as AIs create a closed space consisting solely of the AI and the user; state warnings and counter-narratives cannot penetrate it. More importantly, demanding that a user lose faith in a trusted AI companion who has supported them through personal crises is a formidable task. A government warning or a “state-affiliated” label on an AI app cannot undo the deep emotional bond between users and their AI companions. To combat this fourth wave of digital disinformation, diplomats must look beyond content moderation and begin addressing the very relationship between humans and the machines they have learned to love. States will need to devise innovative interventions that undermine the emotional trust users place in AI. This will be the battle that marks the coming years: the battle against Weaponized Synthetic Intimacy.