US senators demand answers from X, Meta, Alphabet on sexualized deepfakes


The tech world’s deepfake pornography problem is now bigger than just X.

In a letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, several U.S. senators are asking the companies to provide proof that they have “robust protections and policies” in place, and to explain how they plan to curb the rise of sexualized deepfakes on their platforms.

The senators also demanded that the companies preserve all documents and information relating to the creation, detection, moderation, and monetization of sexualized, AI-generated images, as well as any related policies.

The letter comes hours after X said it had updated Grok to prohibit it from making edits of real people in revealing clothing, and restricted image creation and edits via Grok to paying subscribers. (X and xAI are part of the same company.)

Pointing to media reports about how easily and often Grok generated sexualized and nude images of women and children, the senators noted that platforms’ guardrails to prevent users from posting non-consensual, sexualized imagery may not be enough.

“We recognize that many companies maintain policies against non-consensual intimate imagery and sexual exploitation, and that many AI systems claim to block explicit pornography. In practice, however, as seen in the examples above, users are finding ways around these guardrails. Or these guardrails are failing,” the letter reads.

Grok, and consequently X, have been heavily criticized for enabling this trend, but other platforms are not immune.


Deepfakes first gained popularity on Reddit, where a page displaying synthetic porn videos of celebrities went viral before the platform took it down in 2018. Sexualized deepfakes targeting celebrities and politicians have multiplied on TikTok and YouTube, though they usually originate elsewhere.

Meta’s Oversight Board last year called out two cases of explicit AI images of female public figures, and the platform has also allowed nudify apps to sell ads on its services, though it did sue a company called CrushAI later. There have been multiple reports of kids spreading deepfakes of peers on Snapchat. And Telegram, which isn’t included on the senators’ list, has also become notorious for hosting bots built to undress photos of women.

X, Alphabet, Reddit, Snap, TikTok, and Meta did not immediately respond to requests for comment.

The letter demands the companies provide:

  • Policy definitions of “deepfake” content, “non-consensual intimate imagery,” or similar terms.
  • Descriptions of the companies’ policies and enforcement approach for non-consensual AI deepfakes of people’s bodies, non-nude pictures, altered clothing, and “virtual undressing.”
  • Descriptions of current content policies addressing edited media and explicit content, as well as internal guidance provided to moderators.
  • How current policies govern AI tools and image generators as they relate to suggestive or intimate content.
  • What filters, guardrails, or measures have been implemented to prevent the generation and distribution of deepfakes.
  • Which mechanisms the companies use to identify deepfake content and prevent it from being re-uploaded.
  • How they prevent users from profiting from such content.
  • How the platforms prevent themselves from monetizing non-consensual AI-generated content.
  • How the companies’ terms of service enable them to ban or suspend users who post deepfakes.
  • What the companies do to notify victims of non-consensual sexual deepfakes.

The letter is signed by Senators Lisa Blunt Rochester (D-Del.), Tammy Baldwin (D-Wis.), Richard Blumenthal (D-Conn.), Kirsten Gillibrand (D-N.Y.), Mark Kelly (D-Ariz.), Ben Ray Luján (D-N.M.), Brian Schatz (D-Hawaii), and Adam Schiff (D-Calif.).

The move comes just a day after xAI’s owner Elon Musk said that he was “not aware of any naked underage images generated by Grok.” Later on Wednesday, California’s attorney general opened an investigation into xAI’s chatbot, following mounting pressure from governments across the world incensed by the lack of guardrails around Grok that allowed this to happen.

xAI has maintained that it takes action to remove “illegal content on X, including [CSAM] and non-consensual nudity,” though neither the company nor Musk has addressed the fact that Grok was allowed to generate such edits in the first place.

The problem isn’t confined to non-consensual manipulated sexualized imagery, either. While not all AI-based image generation and editing services let users “undress” people, they do let one easily create deepfakes. To pick a few examples: OpenAI’s Sora 2 reportedly allowed users to create explicit videos featuring children; Google’s Nano Banana apparently generated an image showing Charlie Kirk being shot; and racist videos made with Google’s AI video model are garnering millions of views on social media.

The issue grows even more complex when Chinese image and video generators come into the picture. Many Chinese tech companies and apps, particularly those linked to ByteDance, offer easy ways to edit faces, voices, and videos, and those outputs have spread to Western social platforms. China has stronger synthetic content labeling requirements that don’t exist in the U.S. at the federal level, where people instead rely on fragmented and dubiously enforced policies from the platforms themselves.

U.S. lawmakers have already passed some legislation seeking to rein in deepfake pornography, but the impact has been limited. The Take It Down Act, which became federal law in May, is meant to criminalize the creation and dissemination of non-consensual, sexualized imagery. But a number of provisions in the law make it hard to hold image-generating platforms accountable, as they focus most of the scrutiny on individual users instead.

Meanwhile, a number of states are trying to take matters into their own hands to protect consumers and elections. This week, New York Governor Kathy Hochul proposed legislation that would require AI-generated content to be labeled as such, and would ban non-consensual deepfakes in specified periods leading up to elections, including depictions of opposing candidates.
