Anthropic has disclosed that its Claude AI models were the target of industrial-scale distillation campaigns carried out by three Chinese AI labs: DeepSeek, Moonshot AI, and MiniMax. Distillation is a machine learning technique that uses the outputs of a more capable AI model to train a less powerful one. Operating through roughly 24,000 fictitious accounts, the three labs collectively generated over 16 million exchanges with Claude, in violation of Anthropic's terms of service and regional access restrictions.
Although AI labs frequently and lawfully use this technique to produce smaller, cheaper versions of their own models, rivals can also turn it against a competitor's systems.
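At its core, the technique the article describes amounts to harvesting a stronger model's responses and using them as supervised training data for a weaker one. The sketch below is purely illustrative (no real model or API is implied; `toy_teacher` and the record format are assumptions for demonstration):

```python
# Minimal sketch of output-based distillation: collect a "teacher"
# model's responses and turn them into fine-tuning data for a smaller
# "student" model. All names here are hypothetical.

def collect_teacher_pairs(prompts, teacher):
    """Query the teacher and keep (prompt, response) pairs."""
    return [(p, teacher(p)) for p in prompts]

def build_sft_dataset(pairs):
    """Format the pairs as instruction-tuning records for the student."""
    return [{"instruction": p, "output": r} for p, r in pairs]

# Toy stand-in for a stronger model's API.
toy_teacher = lambda p: f"answer to: {p}"

pairs = collect_teacher_pairs(["What is 2+2?"], toy_teacher)
dataset = build_sft_dataset(pairs)
```

In a real pipeline, the `dataset` records would feed a standard supervised fine-tuning run; the controversial part is not the training step but sourcing the teacher outputs against the provider's terms.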
In this instance, foreign labs used distillation to extract Claude's most potent capabilities (agentic reasoning, tool use, and coding) at a fraction of the time and expense required to develop them independently. The campaigns shared a common strategy: the labs accessed Claude at scale while evading detection through commercial proxy services and fraudulent accounts.
Anthropic used infrastructure indicators, request metadata, IP address correlation, and confirmation from industry partners to attribute each campaign with high confidence.

The scale of the attacks

MiniMax ran the largest operation, with over 13 million exchanges centered on agentic coding and tool use. Anthropic discovered the campaign while it was still active, before MiniMax released the model it was training.
When Anthropic released a new Claude model mid-campaign, MiniMax quickly changed course and redirected almost half of its traffic to capture capabilities from the latest system. Moonshot AI, maker of the Kimi models, carried out over 3.4 million exchanges focused on computer vision, coding, computer-use agent development, and agentic reasoning. The lab used numerous access pathways to create hundreds of fraudulent accounts, and request metadata matched the public profiles of senior Moonshot employees.
DeepSeek ran the smallest campaign, with more than 150,000 exchanges, concentrated on reasoning tasks and rubric-based grading. In one noteworthy method, DeepSeek generated chain-of-thought training data at scale by asking Claude to explain, step by step, the internal logic behind finished responses.
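The elicitation pattern described above can be sketched simply: given a question and an already-finished answer, construct a prompt that asks the model to reconstruct the reasoning behind it. The function name and prompt wording here are assumptions for illustration, not the actual prompts DeepSeek used:

```python
def cot_elicitation_prompt(question, final_answer):
    """Build a prompt asking a model to reconstruct, step by step,
    the reasoning behind an already-finished answer. The model's reply
    becomes chain-of-thought training data paired with the question.
    Hypothetical wording, for illustration only."""
    return (
        f"Question: {question}\n"
        f"Final answer: {final_answer}\n"
        "Explain, step by step, the reasoning that leads to this answer."
    )

prompt = cot_elicitation_prompt("What is 17 * 6?", "102")
```

Run over many (question, answer) pairs, this yields rationales at scale without the eliciting lab ever writing reasoning traces itself.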
DeepSeek also used Claude to produce censorship-safe answers to politically sensitive questions concerning authoritarianism, party leaders, and dissidents.

Lab | Exchanges | Principal objectives
MiniMax | More than 13 million | Computer vision, coding, and agentic reasoning
Moonshot AI | More than 3.4 million | Computer vision, coding, and agentic reasoning
DeepSeek | More than 150,000 | Reasoning, chain-of-thought extraction, and censorship-safe content

In one instance, a single proxy network managed over 20,000 fraudulent accounts concurrently, mixing distillation traffic with unrelated client requests to make detection harder. OpenAI has also accused DeepSeek of employing comparable distillation methods against ChatGPT.
According to a memo to U.S. lawmakers, DeepSeek staff circumvented OpenAI's access controls by using unapproved resellers and third-party routers. Anthropic notes that illegally distilled models lack the safety precautions that American companies build into their systems.
Anthropic has implemented a number of countermeasures, including tools for detecting coordinated multi-account activity, classifiers and behavioral fingerprinting systems that identify distillation patterns in API traffic, and detection of chain-of-thought elicitation. The company is also developing model-level safeguards that reduce the usefulness of outputs for unauthorized distillation without degrading the legitimate user experience, and it is sharing technical indicators with other AI labs, cloud providers, and relevant authorities. Anthropic emphasized that no single company can address distillation threats at this scale and called for coordinated action from the AI industry, cloud providers, and legislators.
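One way to picture coordinated multi-account detection is to cluster accounts by a shared behavioral fingerprint and flag unusually large clusters. The toy heuristic below is an assumption for illustration (the field names and the fingerprint of template hash plus user agent are invented), not a description of Anthropic's actual systems:

```python
from collections import defaultdict

def flag_coordinated_accounts(requests, min_cluster=3):
    """Toy heuristic: group accounts whose requests share an identical
    behavioral fingerprint (here, a prompt-template hash plus user agent)
    and flag any cluster at or above min_cluster accounts. Illustrative
    only; real detection systems use far richer signals."""
    clusters = defaultdict(set)
    for req in requests:
        fingerprint = (req["template_hash"], req["user_agent"])
        clusters[fingerprint].add(req["account_id"])
    return [accts for accts in clusters.values() if len(accts) >= min_cluster]

# Four accounts issuing identically templated requests, one unrelated user.
requests = [
    {"template_hash": "t1", "user_agent": "ua1", "account_id": i}
    for i in range(4)
] + [{"template_hash": "t2", "user_agent": "ua2", "account_id": 99}]

flagged = flag_coordinated_accounts(requests)
```

The design choice worth noting is that detection keys on behavior across accounts rather than on any single account's volume, which is why splitting traffic over thousands of accounts does not by itself defeat it.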
