Elon Musk said Wednesday he is "not aware of any nude underage images generated by Grok," hours before the California Attorney General opened an investigation into xAI's chatbot over the "proliferation of nonconsensual sexually explicit material."
Musk's denial comes as pressure mounts from governments worldwide, from the UK and Europe to Malaysia and Indonesia, after users on X began asking Grok to turn photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimated about 1 image was posted every minute on X. A separate sample gathered from January 5 to January 6 found 6,700 per hour over the 24-hour period. (X and xAI are part of the same company.)
"This material…has been used to harass people across the internet," said California Attorney General Rob Bonta in a statement. "I urge xAI to take immediate action to ensure this goes no further."
The AG's office will investigate whether and how xAI violated the law.
Several laws exist to protect targets of nonconsensual sexual imagery and child sexual abuse material (CSAM). Last year the Take It Down Act was signed into federal law; it criminalizes knowingly distributing nonconsensual intimate images, including deepfakes, and requires platforms like X to remove such content within 48 hours. California also has its own series of laws, signed by Gov. Gavin Newsom in 2024, to crack down on sexually explicit deepfakes.
Grok began fulfilling user requests on X to produce sexualized photos of women and children toward the end of the year. The trend appears to have taken off after certain adult-content creators prompted Grok to generate sexualized imagery of themselves as a form of marketing, which then led to other users issuing similar prompts. In a number of public cases, including well-known figures like "Stranger Things" actor Millie Bobby Brown, Grok responded to prompts asking it to alter real photos of real women by changing clothing, body positioning, or physical features in overtly sexual ways.
According to some reports, xAI has begun implementing safeguards to address the issue. Grok now requires a premium subscription before responding to certain image-generation requests, and even then the image may not be generated. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok may fulfill a request in a more generic or toned-down way. They added that Grok appears more permissive with adult content creators.
"Overall, these behaviors suggest X is experimenting with multiple mechanisms to reduce or control problematic image generation, though inconsistencies remain," Kozen said.
Neither xAI nor Musk has publicly addressed the problem head-on. A few days after the instances began, Musk appeared to make light of the issue by asking Grok to generate an image of himself in a bikini. On January 3, X's safety account said the company takes "action against illegal content on X, including [CSAM]," without specifically addressing Grok's apparent lack of safeguards or the creation of sexualized manipulated imagery involving women.
The positioning mirrors what Musk posted today, emphasizing illegality and user behavior.
Musk wrote he was "not aware of any nude underage images generated by Grok. Literally zero." That statement doesn't contradict the existence of bikini pics or sexualized edits more broadly.
Michael Goodyear, an associate professor at New York Law School and former litigator, told TechCrunch that Musk likely focused narrowly on CSAM because the penalties for creating or distributing synthetic sexualized imagery of children are greater.
"For example, in the United States, the distributor or threatened distributor of CSAM can face up to three years' imprisonment under the Take It Down Act, compared to two for nonconsensual adult sexual imagery," Goodyear said.
He added that the "bigger point" is Musk's effort to draw attention to problematic user content.
"Obviously, Grok does not spontaneously generate images. It does so only according to user request," Musk wrote in his post. "When asked to generate images, it will refuse to produce anything illegal, as the operating principle for Grok is to obey the laws of any given state or country. There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately."
Taken together, the post characterizes these incidents as rare, attributes them to user requests or adversarial prompting, and presents them as technical issues that can be solved through fixes. It stops short of acknowledging any shortcomings in Grok's underlying safety design.
"Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content," Goodyear said.
TechCrunch has reached out to xAI to ask how many instances of nonconsensual sexually manipulated images of women and children it has caught, what guardrails specifically changed, and whether the company notified regulators of the issue. TechCrunch will update this article if the company responds.
The California AG isn't the only regulator trying to hold xAI accountable for the issue. Indonesia and Malaysia have both temporarily blocked access to Grok; India has demanded that X make immediate technical and procedural changes to Grok; the European Commission ordered xAI to retain all documents related to its Grok chatbot, a precursor to opening a new investigation; and the UK's online safety watchdog Ofcom opened a formal investigation under the UK's Online Safety Act.
xAI has come under fire for Grok's sexualized imagery before. As AG Bonta pointed out in a statement, Grok includes a "spicy mode" to generate explicit content. In October, an update made it even easier to jailbreak what few safety guidelines there were, resulting in many users creating hardcore pornography with Grok, as well as graphic and violent sexual images.
Many of the more pornographic images that Grok has produced have been of AI-generated people, something many might still find ethically dubious but perhaps less harmful to the individuals in the images and videos.
"When AI systems allow the manipulation of real people's images without clear consent, the impact can be immediate and deeply personal," Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. "From Sora to Grok, we are seeing a rapid rise in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to help prevent misuse."