Meta is facing intense scrutiny this week from critics accusing the social media giant of censoring conservative viewpoints and intentionally falsifying information about the assassination attempt against former President Donald Trump.
On Tuesday, the company publicly addressed two incidents that had drawn widespread condemnation and detailed the steps it took to adjust its systems in response. One involved a photo of Trump after the attempted assassination that Meta's internal systems "incorrectly applied a fact-check label to," the company said in a blog post. The other involved Meta AI responses about the shooting that, in some instances, inaccurately claimed the incident hadn't occurred at all.
"In both cases, our systems were working to protect the importance and gravity of this event," Meta's blog post, written by Joel Kaplan, the company's vice president of global policy, reads. "And while neither was the result of bias, it was unfortunate and we understand why it could leave people with that impression. That is why we are constantly working to make our products better and will continue to quickly address any issues as they arise."
Following the July 13 shooting that left Trump wounded, one rally attendee dead, and two others hospitalized, Meta's AI chatbot was programmed not to respond to queries about the assassination attempt at all, according to Kaplan.
Prominent figures, including Elon Musk and Trump himself, seized on Meta's oversight and the fact-check label applied to the photo as evidence of censorship. Trump called the incidents "another attempt at RIGGING THE ELECTION!!!"
In the blog post, Kaplan denied that the decisions had been made out of bias. He said it is a "known issue" that chatbots like Meta AI can be unreliable when asked about breaking news or real-time events. Meta has since updated its AI's responses on the topic, but Kaplan acknowledged, "We should have done this sooner."
"In a small number of cases, Meta AI continued to provide incorrect answers, including sometimes asserting that the event didn't happen — which we are quickly working to address," Kaplan wrote. "These types of responses are referred to as hallucinations, which is an industry-wide issue we see across all generative AI systems, and is an ongoing challenge for how AI handles real-time events going forward. Like all generative AI systems, models can return inaccurate or inappropriate outputs, and we'll continue to address these issues and improve these features as they evolve and more people share their feedback."
Trump, who has an ongoing public feud with Meta CEO Mark Zuckerberg and has threatened to imprison the Facebook cofounder if reelected, was not satisfied with the response.
On Tuesday, as criticism of the Big Tech companies swirled online, Trump took to Truth Social to urge his followers to "GO AFTER META AND GOOGLE. LET THEM KNOW WE ARE ALL WISE TO THEM, WILL BE MUCH TOUGHER THIS TIME."
The chatbot's missteps and the ensuing backlash highlight the persistent challenges that tech companies and voters alike face in navigating new AI technology amid a contentious presidential election season.
Representatives for Meta and the Trump campaign did not immediately respond to requests for comment from Business Insider.