OpenAI's ChatGPT is more than just an AI language model with a fancy interface. It's a system consisting of a stack of AI models and content filters designed to keep its outputs from embarrassing OpenAI or exposing the company to legal trouble when the bot fabricates potentially harmful claims about real people.
Recently, that reality made the news when people discovered that the name "David Mayer" breaks ChatGPT. 404 Media also found that the names "Jonathan Zittrain" and "Jonathan Turley" caused ChatGPT to cut conversations short. And we know of another name, likely the first to receive this treatment, which started the practice last year: Brian Hood. More on that below.
The chat-breaking behavior occurs consistently when users mention these names in any context, and it results from a hard-coded filter that puts the brakes on the AI model's output before returning it to the user.
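A filter like the one described could be as simple as a post-generation check that scans the model's output for blocked names and halts the response before it reaches the user. The sketch below is purely illustrative: the blocked names come from the reporting above, but the function, error message, and matching logic are assumptions, not OpenAI's actual implementation.

```python
# Hypothetical sketch of a hard-coded name filter. This is NOT OpenAI's
# real code; it only illustrates the general mechanism described in the
# article: inspect the model's output and cut the response short if a
# blocked name appears.

BLOCKED_NAMES = {
    "David Mayer",
    "Jonathan Zittrain",
    "Jonathan Turley",
    "Brian Hood",
}

def filter_model_output(text: str) -> str:
    """Return the model's text unchanged, or a canned refusal if a
    blocked name appears anywhere in it (case-insensitive)."""
    lowered = text.lower()
    for name in BLOCKED_NAMES:
        if name.lower() in lowered:
            # Stop the response instead of returning the model's output.
            return "I'm unable to produce a response."
    return text

print(filter_model_output("The weather today is sunny."))
print(filter_model_output("Tell me about David Mayer."))
```

Because the check runs on the output string itself rather than on the model's behavior, it fires regardless of context, which would explain why the conversations break no matter how the name comes up.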