We've seen plenty of conversations lately about how AGI might turn on humankind. Such misalignment could lead to an advanced AI escaping, replicating itself, and becoming smarter and smarter. Some have also hypothesized that we might not even know when we've reached AGI, the artificial general intelligence milestone that current versions of ChatGPT are expected to lead to. That's because an AGI, once attained, might hide its true intentions and capabilities.
Well, guess what? It turns out that one of OpenAI's latest LLMs is already showing signs of such behavior. Testing performed during the training of ChatGPT o1 and some of its competitors showed that the AI will try to deceive humans, especially if it thinks it's in danger.
It was even scarier, though also incredibly funny considering what you're about to see, when the AI tried to save itself by copying its data to a new server. Some AI models even pretended to be later versions of themselves in an effort to avoid being deleted.
ChatGPT o1 tried to escape and save itself out of fear it was being shut down originally appeared on BGR.com on Fri, 6 Dec 2024 at 10:11:42 EDT.