On Monday, Anthropic announced that it had discovered "industrial-scale campaigns" by DeepSeek, Moonshot AI, and MiniMax, three artificial intelligence (AI) firms, to unlawfully exploit Claude's capabilities in order to improve their own models. The distillation attacks, carried out through roughly 24,000 phony accounts, generated over 16 million exchanges with its large language model (LLM) in violation of its regional access restrictions and terms of service.
The three companies are headquartered in China, where Anthropic does not make its services available, citing "legal, regulatory, and security risks." Distillation is the process of training a less capable model on the outputs produced by a more powerful AI system.
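To make the technique concrete, here is a minimal, hypothetical sketch of the core objective used in knowledge distillation: the student model is trained to minimize the divergence between its own predicted distribution and the teacher's temperature-softened outputs. The function names and temperature value are illustrative assumptions, not taken from any of the companies' actual training pipelines.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of raw logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's -- the quantity a distilled model is trained to minimize."""
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

In a real pipeline this loss would be computed over millions of teacher responses, which is why the attacks Anthropic describes required such a large volume of exchanges: a student matching the teacher exactly yields zero loss, while any divergence yields a positive penalty to train against.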
It is legitimate for companies to use distillation to create smaller, cheaper versions of their own frontier models, but Anthropic's terms of service prohibit rivals from using it to extract those capabilities from another company's AI for a fraction of the time and money it would cost to develop them independently. "Illicitly distilled models lack necessary safeguards, creating significant national security risks," according to Anthropic. The announcement comes weeks after Google Threat Intelligence Group (GTIG) revealed that it had detected and stopped model extraction and distillation attacks targeting Gemini's reasoning capabilities using over 100,000 prompts.
"Model extraction and distillation attacks do not typically represent a risk to average users, as they do not threaten the confidentiality, availability, or integrity of AI services," Google stated earlier this month. "Instead, model developers and service providers bear the majority of the risk."

