Late Friday night, OpenAI suspended a developer who had created Dean.Bot — a ChatGPT-powered chatbot intended to support Minnesota Rep. Dean Phillips' 2024 presidential campaign — for violating new rules preventing lobbyists and candidates from using ChatGPT for politics, The Washington Post reported.
Dean.Bot conversed with voters in real time while mimicking the long-shot Democratic challenger, using an AI-generated version of Phillips' voice to respond to questions such as why Democrats shouldn't throw their weight behind incumbent President Joe Biden in the 2024 election.
"While I respect President Biden, the data and conversations with Americans across the country indicates a strong desire for change," WaPo reported Dean.Bot responded in an AI-generated voice that sounded like Phillips' but had an unusual cadence.
Phillips' background is in business. He ran his family's distilling company from 2000 to 2012 before pivoting to run Talenti, the popular gelato company he had invested in, until it was sold in 2014. He has represented Minnesota in Congress since 2019 and launched his campaign to challenge Biden in October last year. However, his polling numbers haven't indicated he poses a significant threat to Biden.
The creation of Dean.Bot was funded by the super PAC We Deserve Better, which partnered with AI developer Delphi to build the bot, WaPo reported. OpenAI suspended Delphi's account for violating its political rules late Friday, just a day after WaPo wrote about the bot's creation.
OpenAI announced new rules earlier this month barring developers from building ChatGPT-powered applications intended for political campaigning or lobbying, saying the company is "still working to understand how effective our tools might be for personalized persuasion."
Representatives for OpenAI and Phillips' 2024 presidential campaign did not immediately respond to requests for comment from Business Insider.
Despite OpenAI's new rules, Dean.Bot won't be the last AI creation we see this election season.
The Hill reported that Google and Meta have crafted policies requiring politicians and lobbyists to label generative AI content in campaign-related materials, a response to a surge of AI-created content that includes deepfakes and misleading audio.
The outlet reported that several lawmakers, including Sens. Amy Klobuchar and Susan Collins, have also introduced congressional proposals to address AI use in advertisements.
But viewers will need to keep their eyes peeled for AI-generated content this election season until those measures pass — if they pass at all.