According to OpenAI, a group of threat actors, including some associated with China and Russia, used ChatGPT to support malicious cyber and influence operations. Chinese state-affiliated hackers used OpenAI's tools to improve their cyberattack capabilities, and a Russian content farm connected to the "Rybar" network used the platform to automate online propaganda. OpenAI has since suspended several accounts tied to these activities, reaffirming its stance against the abuse of AI in cyber operations.
OpenAI's investigation found that Chinese threat actors linked to well-known cyber espionage units used ChatGPT accounts to create, translate, and refine phishing emails and components of malicious code.
According to reports, these actors aimed to craft convincing lures, automate technical reconnaissance, and optimize spear-phishing campaigns targeting the global defense, technology, and policy sectors.

[Figure: Chinese Cyberattacks Powered by AI (Source: OpenAI)]

To identify state-linked exploitation of its AI models, the company also works with governments and private cybersecurity partners. The incident is an important reminder of the evolving relationship between artificial intelligence and threats to international security.
As AI tools become more capable and widely available, maintaining ethical safeguards and strong detection systems will be crucial to preventing hostile actors from weaponizing them.