Coalition demands federal Grok ban over nonconsensual sexual content


A coalition of nonprofits is urging the U.S. government to immediately suspend the deployment of Grok, the chatbot developed by Elon Musk’s xAI, in federal agencies including the Department of Defense.

The open letter, shared exclusively with TechCrunch, follows a slew of concerning behavior from the large language model over the past year, including most recently a trend of X users asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. According to some reports, Grok generated thousands of nonconsensual explicit images each hour, which were then disseminated at scale on X, Musk’s social media platform that’s owned by xAI.

“It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in generation of nonconsensual sexual imagery and child sexual abuse material,” reads the letter, signed by advocacy groups like Public Citizen, the Center for AI and Digital Policy, and the Consumer Federation of America. “Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that [the Office of Management and Budget] has not yet directed federal agencies to decommission Grok.”

xAI reached an agreement last September with the General Services Administration (GSA), the government’s purchasing arm, to sell Grok to federal agencies under the executive branch. Two months before that, xAI – alongside Anthropic, Google, and OpenAI – secured a contract worth up to $200 million with the Department of Defense.

Amid the scandals on X in mid-January, Defense Secretary Pete Hegseth said Grok will join Google’s Gemini in operating within the Pentagon network, handling both classified and unclassified documents, which experts say is a national security risk.

The letter’s authors argue that Grok has proven itself incompatible with the administration’s requirements for AI systems. According to the OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued.

“Our primary concern is that Grok has pretty consistently shown itself to be an unsafe large language model,” JB Branch, a Public Citizen Big Tech accountability advocate and one of the letter’s authors, told TechCrunch. “But there’s also a deep history of Grok having a variety of meltdowns, including anti-semitic rants, sexist rants, sexualized images of women and children.”


Several governments have demonstrated an unwillingness to engage with Grok following its behavior in January, which builds on a series of incidents including the generation of anti-semitic posts on X and calling itself “MechaHitler.” Indonesia, Malaysia, and the Philippines each blocked access to Grok (they’ve subsequently lifted those bans), and the European Union, the U.K., South Korea, and India are actively investigating xAI and X regarding data privacy and the distribution of illegal content.

The letter also comes a week after Common Sense Media, a nonprofit that reviews media and tech for families, published a damning risk assessment that found Grok is among the most unsafe for kids and teens. One could argue that, based on the findings of the study, including Grok’s propensity to offer unsafe advice, share information about drugs, generate violent and sexual imagery, spew conspiracy theories, and produce biased outputs, Grok isn’t all that safe for adults either.

“If you know that a large language model is or has been declared unsafe by AI safety experts, why in the world would you want that handling the most sensitive information we have?” Branch said. “From a national security standpoint, that just makes absolutely no sense.”

Andrew Christianson, a former National Security Agency contractor and current founder of Gobbi AI, a no-code AI agent platform for classified environments, says that using closed-source LLMs in general is a problem, particularly for the Pentagon.

“Closed weights means you can’t see inside the model, you can’t audit how it makes decisions,” he said. “Closed code means you can’t inspect the software or control where it runs. The Pentagon is going closed on both, which is the worst possible combination for national security.”

“These AI agents aren’t just chatbots,” Christianson added. “They can take actions, access systems, move information around. You need to be able to see exactly what they’re doing and how they’re making decisions. Open source gives you that. Proprietary cloud AI doesn’t.”

The risks of using corrupted or unsafe AI systems spill out beyond national security use cases. Branch pointed out that an LLM that’s been shown to produce biased and discriminatory outputs could yield disproportionately negative outcomes for people as well, especially if used in departments involving housing, labor, or justice.

While the OMB has yet to publish its consolidated 2025 federal AI use case inventory, TechCrunch has reviewed the use cases of several agencies, most of which are either not using Grok or not disclosing their use of it. Aside from the DoD, the Department of Health and Human Services also appears to be actively using Grok, mainly for scheduling and managing social media posts and for generating first drafts of documents, briefings, or other communication materials.

Branch pointed to what he sees as a philosophical alignment between Grok and the administration as a reason for overlooking the chatbot’s shortcomings.

“Grok’s brand is being the ‘anti-woke large language model,’ and that aligns with this administration’s philosophy,” Branch said. “If you have an administration that has had multiple issues with folks who’ve been accused of being neo-Nazis or white supremacists, and then they’re using a large language model that has been tied to that kind of behavior, I would imagine they might have a propensity to use it.”

This is the coalition’s third letter, after it wrote with similar concerns in August and October last year. In August, xAI launched “spicy mode” in Grok Imagine, triggering widespread creation of nonconsensual sexually explicit deepfakes. TechCrunch also reported in August that private Grok conversations had been indexed by Google Search.

Prior to the October letter, Grok was accused of providing election misinformation, including false deadlines for ballot changes and political deepfakes. xAI also launched Grokipedia, which researchers found to be legitimizing scientific racism, HIV/AIDS skepticism, and vaccine conspiracies.

Aside from immediately suspending the federal deployment of Grok, the letter demands that the OMB formally investigate Grok’s safety failures and whether the appropriate oversight processes were conducted for the chatbot. It also asks the agency to publicly clarify whether Grok has been evaluated for compliance with Trump’s executive order requiring LLMs to be truth-seeking and neutral, and whether it met OMB’s risk mitigation standards.

“The administration needs to take a pause and reassess whether or not Grok meets those thresholds,” Branch said.

TechCrunch has reached out to xAI and the OMB for comment.
