Chickens Are More Conscious Than AI, According to Study
It looks like we won't have to worry about AI going rogue and destroying humanity — at least not yet. A new study found that the current generation of large language models (LLMs) is most likely not conscious or sentient, reports AI Insider.
The study came from a think tank called Rethink Priorities, which employed a framework called a "Digital Consciousness Model." The researchers used the model to survey humans, chickens, current LLMs, and a 1960s-era AI chatbot named ELIZA.
After evaluating each subject against the model's 200+ indicators, the researchers ranked the four based on their probability of consciousness. From most conscious to least conscious, the rankings went:
- Humans
- Chickens
- Modern AI LLMs
- ELIZA
What Even Is Consciousness?
Of course, the findings are not an exact science, because consciousness itself is abstract. There is no definitive indicator or behavior that conclusively marks consciousness. That's why the Digital Consciousness Model measures the probability of consciousness, a figure that shifts as new evidence emerges.
The fact that chickens are likely more conscious than AI shouldn't exactly be surprising, though, given that chickens are living, breathing beings. "It has long been comedy gold how many people don't think animals are conscious," one amused person wrote on Reddit. Another wrote, "A tree is more conscious than AI."
AI Still Causes Problems for Conscious Minds
While AI doesn't seem to have a mind of its own yet, it has managed to cause severe mental health problems for many people. In November 2025, The New York Times confirmed almost 50 cases of mental health crises brought on by ChatGPT. Nine of those people were hospitalized, and three of them died.
Oregon woman Robin Richardson told The Guardian about her friend Joe Ceccanti's experience with ChatGPT. He descended into psychosis after spending 12 to 20 hours a day interacting with the chatbot, and ultimately died after jumping off a bridge. “Every time he went back to ChatGPT, it hooked him a little bit more," she said, "and after a while, he stopped being interested in anything else.”
Psychiatrist Keith Sakata told The Guardian that he had seen 12 patients with AI-accelerated psychosis. “They developed grandiose beliefs about being on the verge of a major technological breakthrough, alongside classic manic symptoms such as impulsive spending, decreased need for sleep and, at the peak, auditory hallucinations,” he said. “What stood out clinically was that the chatbot interactions did not generate the illness, but appeared to scaffold and reinforce beliefs that were already becoming pathological.”