Anthropic is accusing three Chinese AI companies of racking up more than 24,000 fake accounts with its Claude AI model to improve their own models.
The labs (DeepSeek, Moonshot AI, and MiniMax) allegedly generated more than 16 million exchanges with Claude through those accounts using a method called "distillation." Anthropic said the labs "targeted Claude's most differentiated capabilities: agentic reasoning, tool use, and coding."
The accusations come amid debates over how strictly to enforce export controls on advanced AI chips, a policy aimed at curbing China's AI development.
Distillation is a common training technique that AI labs use on their own models to create smaller, cheaper versions, but competitors can use it to essentially copy the homework of other labs. OpenAI sent a memo to House lawmakers earlier this month accusing DeepSeek of using distillation to mimic its products.
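For context on the technique itself, here is a minimal sketch of classic knowledge distillation in PyTorch, in which a smaller "student" model is trained to match a larger "teacher" model's softened output distribution. The function name and tensor shapes are illustrative assumptions, not any lab's actual pipeline; API-based distillation of the kind Anthropic describes works from sampled text rather than raw teacher logits.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soften both distributions with a temperature, then push the student's
    # log-probabilities toward the teacher's probabilities via KL divergence.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Toy usage: random tensors stand in for the two models' forward passes.
teacher_logits = torch.randn(8, 32_000)  # batch of 8, hypothetical 32k-token vocab
student_logits = torch.randn(8, 32_000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()  # gradients flow to the student only
```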
DeepSeek first made waves a year ago when it released its open-source R1 reasoning model, which nearly matched American frontier labs in performance at a fraction of the cost. DeepSeek is expected to soon release DeepSeek V4, its latest model, which reportedly can outperform Anthropic's Claude and OpenAI's ChatGPT in coding.
The scale of each attack differed. Anthropic tracked more than 150,000 exchanges from DeepSeek that seemed aimed at improving foundational reasoning and alignment, specifically around censorship-safe alternatives to policy-sensitive queries.
Moonshot AI had more than 3.4 million exchanges targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, the firm released a new open source model, Kimi K2.5, and a coding agent.
MiniMax's 13 million exchanges targeted agentic coding, tool use, and orchestration. Anthropic said it was able to observe MiniMax in action as it redirected about half its traffic to siphon capabilities from the latest Claude model when it was launched.
Anthropic says it will continue to invest in defenses that make distillation attacks harder to execute and easier to identify, but it is calling for "a coordinated response across the AI industry, cloud providers, and policymakers."
The distillation attacks come at a time when American chip exports to China are still hotly debated. Last month, the Trump administration formally allowed U.S. companies like Nvidia to export advanced AI chips (like the H200) to China. Critics have argued that this loosening of export controls increases China's AI computing capability at a critical time in the global race for AI dominance.
Anthropic says that the scale of extraction DeepSeek, MiniMax, and Moonshot performed "requires access to advanced chips."
"Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation," per Anthropic's blog.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he's not surprised to see these attacks.
"It's been clear for a while now that part of the reason for the rapid advancement of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact," Alperovitch said. "This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further."
Anthropic also said distillation doesn't just threaten to undercut American AI dominance, but could also create national security risks.
"Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, create bioweapons or carry out malicious cyber activities," reads Anthropic's blog post. "Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely."
Anthropic pointed to authoritarian governments deploying frontier AI for things like "offensive cyber operations, disinformation campaigns, and mass surveillance," a risk that is multiplied if those models are open-sourced.
TechCrunch has reached out to DeepSeek, MiniMax, and Moonshot for comment.