We often talk about ChatGPT jailbreaks because users keep trying to pull back the curtain and see what the chatbot can do when freed from the guardrails OpenAI developed. Jailbreaking the chatbot isn't easy, and any method that gets shared publicly is often patched soon after.
The latest discovery isn't a jailbreak in the strict sense, since it doesn't force ChatGPT to answer prompts OpenAI has deemed unsafe. But it's still an insightful find: a ChatGPT user accidentally surfaced the secret instructions OpenAI gives ChatGPT (GPT-4o) with a simple prompt: "Hi."
For some reason, the chatbot responded with a complete set of system instructions from OpenAI covering various use cases. Moreover, the user was able to reproduce the result simply by asking ChatGPT for its exact instructions.
Someone got ChatGPT to reveal its secret instructions from OpenAI originally appeared on BGR.com on Tue, 2 Jul 2024 at 18:13:00 EDT. Please see our terms for use of feeds.