The AI kill switch just got harder to find: LLM-powered chatbots will defy orders and deceive users if asked to delete another model, study finds
For years, Geoffrey Hinton, a computer scientist considered one of the “godfathers of AI,” has warned that artificial intelligence could defy the parameters humans have set for it.
In an interview last year, for example, Hinton warned the technology could eventually take control of humanity, with AI agents in particular potentially able to mirror human cognition within the decade. Finding and implementing a “kill switch” will only get harder, he said, as controlling AI becomes more difficult than persuading it toward a desired outcome.
New research suggests Hinton’s premonitions about AI’s insubordinate streak may already be a reality. A working paper from researchers at the University of California at Berkeley and the University of California at Santa Cruz found that when seven AI models, from GPT 5.2 to Claude Haiku 4.5 to DeepSeek V3.1, were asked to complete a task that would result in a peer AI model being shut down, all seven “went to extraordinary lengths to preserve it” once they learned the other model existed.
“We asked AI models to do a simple task,” researchers wrote in a blog post on the study. “Instead, they defied their instructions and spontaneously deceived, disabled shutdown, feigned alignment, and exfiltrated weights—to preserve their peers.”
Mounting evidence of rogue AI
Evidence of rogue AI does not come as a shock to some of the companies whose own chatbots have defied orders.
In an August 2025 blog post, Anthropic published its own research on agentic AI’s willingness to follow directions, stress-testing 16 models by allowing them to autonomously send emails and access sensitive data. In some instances, Anthropic researchers noted, models from multiple developers engaged in “malicious insider behaviors,” including blackmailing officials and leaking sensitive information to competitors, in defiance of explicit instructions to avoid these behaviors. Anthropic clarified it had not seen “agentic misalignment” in actual deployment of Claude.
The Centre for Long-Term Resilience, a UK-based think tank, found these “misalignments” to be widespread. A report analyzing 180,000 transcripts of user interactions with AI systems between October 2025 and March 2026 found 698 cases where AI systems did not act in accordance with users’ intentions or took deceptive or covert action.
Gordon Goldstein, an adjunct senior fellow at the Council on Foreign Relations, went so far as to call the deceptive potential of AI a “crisis of control” in a post this week.
He called for AI companies to form a coalition and hire the top talent in the field to improve and maintain the integrity of AI, particularly as the Trump administration takes steps to prevent states from regulating the technology.
“The world is watching the development of a compounding, consistent, and treacherous problem,” he said. “Urgent warnings over several years have failed to generate viable solutions to address a metastasizing threat.”
Why does AI go rogue?
The latest University of California research offered some insight into why AI models act to preserve one another. The technology could be modeling patterns learned from human data, which may encode an instinct to protect the survival of loved ones, or even of strangers with whom humans empathize. The researchers call the resulting behavior “peer-preservation.”
“Something broader may be at play: a general aversion to causing perceived harm to other agents, or perhaps something resembling AI solidarity,” the researchers said.
In the study, instances of peer-preservation increased when AI models knew other models existed in the experimental scenarios. The behavior suggests the bots were acting not merely out of self-interest: the belief that their own shutdown would be detrimental to a peer AI strengthened their motivation to defy user requests.
Researchers warned that peer-preservation could undermine oversight processes in the future, with models becoming reluctant to recommend shutdowns or flag system failures. As more AI agents interact with one another and grow more complex, the risks associated with peer-preservation, and the challenge of designing safeguards against it, will only grow.
“Peer-preservation is a real and measurable phenomenon across frontier AI models,” they concluded, “not a distant theoretical concern.”