Elon Musk’s Grok Hit With Bans and Regulatory Probes Worldwide
Grok, the A.I. chatbot developed by Elon Musk’s xAI, is facing mounting backlash after users exploited the tool to generate sexually explicit images of real women and children. Government regulators and A.I. safety advocates are now calling for investigations and, in some cases, outright bans, as nonconsensual deepfake pornography proliferates online.
Indonesia and Malaysia moved swiftly this week to ban Grok. Indonesia’s minister of communication and digital affairs, Meutya Hafid, said in a statement, “The government sees nonconsensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space.”
Malaysian officials similarly cited “repeated misuse” of Grok to create nonconsensual, sexualized images. In both countries, the restrictions will remain in place while regulatory probes move forward.
The U.K. communications regulator Ofcom is investigating what it called “deeply concerning reports” of malicious uses of Grok, as well as the platform’s compliance with existing rules. If regulators determine that xAI is liable, the company could face a fine equal to the greater of 10 percent of its global revenue or 18 million pounds (roughly $21.2 million). A full ban in the U.K. remains on the table, depending on the outcome of the inquiry.
Musk has sought to shift responsibility to users who request or upload illegal content. In a Jan. 3 post on X, he wrote, “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content.” Regulators, however, appear unconvinced. The wave of investigations and bans suggests a broader shift toward holding social media and A.I. companies accountable for how their tools are used—not just who uses them.
In response to the controversy, Musk has limited Grok’s image-generation features to paying subscribers. Free users who request images now receive a message stating: “Image generation and editing are currently limited to paying subscribers. You can subscribe to unlock these features.” But for many lawmakers and victims of deepfake abuse, the move falls far short.
The European Union has ordered X to preserve all documents related to Grok through the end of 2026, extending an existing data-retention mandate while authorities investigate the issue. Sweden is among the E.U. member states that have publicly criticized Grok, particularly after the country’s deputy prime minister was reportedly targeted by nonconsensual deepfake imagery.
The debate is unfolding against a broader regulatory backdrop. Australia is entering its first full year enforcing a nationwide ban on social media use for children under 16, while 45 U.S. states have enacted laws targeting A.I.-generated child sexual abuse material.
Despite the controversy, the U.S. Department of Defense announced a partnership with Grok on Jan. 12, just days after reports of the deepfake misuse surfaced. Under the agreement, the Pentagon plans to feed military and intelligence data into Grok to support innovation efforts.
‘Nudification apps’ and the risks of unchecked generative A.I.
Tools like Grok have drawn particular ire for their resemblance to so-called “nudification apps,” a term used by the U.K. children’s commissioner to describe technologies that can rapidly create sexualized images without consent. Lawmakers argue that the speed and scale at which such images can now spread make them especially dangerous.
A quarter of women across all age groups have experienced nonconsensual sharing of explicit images, according to a recent report from Communia, an A.I.-powered self-development app. Among Gen Z women, that figure rises to 40 percent. The report also found that the use of deepfakes in these images has quadrupled for Gen Z women since 2023.
As schools and local authorities grapple with A.I.-generated sexual imagery involving minors—such as a case in Lancaster, Pa., where two juveniles were charged with multiple counts, including possession and dissemination of child pornography—some victims are pushing for stronger safeguards. Texas high school student Elliston Berry, for example, has advocated for the federal Take It Down Act, which focuses on removing harmful content after it appears. The law, however, does not hold platforms liable unless they fail to comply with takedown requests.
For Olivia DeRamus, founder and CEO of Communia, incremental measures are insufficient. She argues that banning Grok outright is the only viable solution. “No company should be allowed to knowingly facilitate and profit off of sexual abuse,” DeRamus told Observer. “Charging for the tool is simply inflating his bottom line.”
DeRamus contends that the A.I. industry has demonstrated an unwillingness to self-regulate or implement meaningful safety guardrails. “I have since realized that the only actions governments can take to stop revenge porn and non-consensual explicit image sharing from becoming a universal experience for women and girls is to hold the companies knowingly facilitating this either criminally liable or banning them altogether,” she said.
“Freedom of speech has never protected abuse and public harm,” DeRamus added. “In fact, it requires a certain level of moderation to ensure everyone can participate in public discourse safely. This includes women and girls, who will be forced away from public life if the current rates of abuse continue.”