Gemini AI Model Cyberattacks

Threat actors have begun abusing Google's Gemini API to dynamically generate C# code for multi-stage malware, circumventing conventional detection techniques. The Google Threat Intelligence Group (GTIG) detailed this activity in its February 2026 AI Threat Tracker report, highlighting the HONESTCUE framework, first observed in September 2025. HONESTCUE functions as a downloader and launcher that retrieves self-contained C# source code by submitting hard-coded queries to Gemini's API.
This code carries out stage-two functionality, such as downloading payloads from URLs hosted on CDNs like Discord, without creating any artifacts on disk.

[Figure: Gemini-powered HONESTCUE malware (Source: Google)]

The malware then uses the legitimate .NET CSharpCodeProvider to compile and run the received source code directly in memory, which makes both static analysis and behavioral detection considerably more difficult.
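To illustrate why this complicates static analysis, the following is a minimal, benign sketch of the publicly documented .NET Framework CodeDOM API the report names: C# source arrives as a runtime string, is compiled entirely in memory (no assembly is written to disk), and is then invoked via reflection. The `Stage2` class and its contents are purely illustrative; nothing here reflects HONESTCUE's actual payloads.

```csharp
using System;
using System.CodeDom.Compiler;
using Microsoft.CSharp;

class InMemoryCompileDemo
{
    static void Main()
    {
        // Hypothetical stand-in: in HONESTCUE's case, this string would be
        // C# source returned by a query to the Gemini API.
        string source = @"
            public static class Stage2
            {
                public static string Run()
                {
                    return ""hello from in-memory code"";
                }
            }";

        var provider = new CSharpCodeProvider();
        var options = new CompilerParameters
        {
            GenerateInMemory = true,    // compiled assembly never touches disk
            GenerateExecutable = false  // produce a library, not an .exe
        };

        CompilerResults results = provider.CompileAssemblyFromSource(options, source);
        if (results.Errors.HasErrors)
        {
            Console.WriteLine("compilation failed");
            return;
        }

        // Locate and invoke the freshly compiled method via reflection.
        Type type = results.CompiledAssembly.GetType("Stage2");
        string output = (string)type.GetMethod("Run").Invoke(null, null);
        Console.WriteLine(output);
    }
}
```

Because the executable code exists only as a string until runtime and the resulting assembly lives only in memory, signature-based scanners that inspect files on disk have nothing to match against; note that `CompileAssemblyFromSource` is supported on .NET Framework but throws on .NET Core/.NET 5+.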
Two caveats are worth noting. First, even for APT groups, generative AI speeds up and automates only fairly basic tasks; this has nothing to do with the exaggerated claims of AI's purported omnipotence in hacking. Second, Google may be exposing itself to legal risk. Since Google is well aware that nation-state organizations and cybercriminals actively use its AI technology for nefarious ends, it could be held accountable for the harm these threat actors cause.
The reported abuse could have been prevented by relatively inexpensive measures such as installing guardrails and strengthening customer due diligence. The big question now is who will be held responsible, and Google is unlikely to have a compelling answer.