X's native AI chatbot, Grok, now responds to election-related queries with a Vote.gov banner and directs users to visit the site for "accurate and up-to-date information about the 2024 US Elections."
Earlier this month, the Secretaries of State for Michigan, Minnesota, New Mexico, Pennsylvania, and Washington demanded action from X after investigating reports that the chatbot had surfaced incorrect election information, including inaccurate ballot deadlines for multiple states. The group urged the company to follow OpenAI, which partnered with the National Association of Secretaries of State to provide election information through CanIVote.org.
While X didn't agree to such a partnership, state leaders responded positively to Grok's new update: "We appreciate X’s action to improve their platform and hope they continue to make improvements that will ensure their users have access to accurate information from trusted sources in this critical election year."
Lawmakers still want industry players and federal agencies to do more to curb the spread of election misinformation and deepfakes, however. On Aug. 27, a coalition of Democratic lawmakers once again petitioned the Federal Election Commission (FEC) to clarify its stance on AI-generated synthetic images of candidates. The group, joined by consumer rights watchdog Public Citizen, has demanded the FEC establish rules on the use of "deceptive AI" and decide whether such images can be classified as "fraudulent misrepresentation" in campaigning.
In a letter sent to the FEC, the lawmakers specifically called out recent images generated by Grok 2, the chatbot's latest version, which introduced new image generation capabilities. "It is critical for our democracy that this be promptly addressed, noting the degree to which Grok-2 has already been used to distribute fake content regarding the 2024 presidential election," the letter reads.
"While electoral disinformation campaigns and voter suppression are not new in this country, AI has the potential to supercharge deception in an ecosystem already rife with false content," wrote Congresswoman Shontel M. Brown. "Twitter and Elon Musk have the responsibility to implement and require responsible use of its AI technology and, if not, the FEC must urgently step in to prevent further electoral fraud, especially by one of the two major candidates for president of the United States.”