Google said Thursday that it had observed the North Korea-affiliated threat actor UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on targets, as different hacker groups continue to weaponize the tool to accelerate various phases of the cyber attack life cycle, enable information operations, and even conduct model extraction attacks. According to the tech giant's threat intelligence team, this activity blurs the line between malicious reconnaissance and standard professional research, enabling the state-backed actor to craft customized phishing personas and identify vulnerable targets for initial compromise. UNC2970 shares similarities with activity clusters tracked as Lazarus Group, Diamond Sleet, and Hidden Cobra.
The group is best known for orchestrating a protracted campaign dubbed Operation Dream Job, which used malware disguised as job openings to target the energy, defense, and aerospace industries. According to GTIG, UNC2970 has "consistently" focused on defense targets and posed as corporate recruiters in its campaigns. Its target profiling involves looking up "information on major cybersecurity and defense companies and mapping specific technical job roles and salary information."

Gemini has been abused by numerous threat actors, including UNC2970, to augment their capabilities and accelerate the transition from initial reconnaissance to active targeting.
Last but not least, the company said it had detected and blocked model extraction attacks, which methodically query a proprietary machine learning model in order to collect data and train a substitute model that mimics the target's behavior. In one large-scale attack of this type, Gemini was targeted with over 100,000 prompts posing series of questions designed to replicate the model's reasoning abilities across a wide range of tasks in non-English languages. Last month, Praetorian published a proof-of-concept extraction attack in which a replica model reached an accuracy rate of 80.1% simply by submitting 1,000 queries to the victim's API, recording the responses, and training on them for 20 epochs.
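The extraction recipe described above boils down to three steps: query the victim's API, record its answers, and train a surrogate on the recorded pairs. A minimal sketch of that loop, using a toy linear "victim" function and a hand-rolled logistic-regression surrogate (all names, numbers, and the victim's decision rule are illustrative, not Praetorian's actual code):

```python
import math
import random

# Hypothetical stand-in for a proprietary model reachable only via an API:
# the attacker never sees the weights, only the responses.
def victim_api(x):
    # The "secret" is a simple linear decision boundary.
    return 1 if 2 * x[0] - x[1] > 0.5 else 0

def extract(num_queries=1000, epochs=20, lr=0.1):
    """Query the victim, record its answers, and train a surrogate on them."""
    random.seed(0)
    # Step 1: submit queries to the API and record (input, output) pairs.
    data = []
    for _ in range(num_queries):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        data.append((x, victim_api(x)))
    # Step 2: fit a logistic-regression surrogate to the recorded pairs
    # with plain stochastic gradient descent.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x0, x1), y in data:
            p = 1.0 / (1.0 + math.exp(-(w[0] * x0 + w[1] * x1 + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w[0] -= lr * g * x0
            w[1] -= lr * g * x1
            b -= lr * g
    return w, b

def agreement(w, b, n=500):
    """Fraction of fresh inputs on which the surrogate matches the victim."""
    random.seed(1)
    hits = 0
    for _ in range(n):
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        guess = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        hits += guess == victim_api(x)
    return hits / n
```

Because the attacker only needs outputs, rate limiting and query auditing on the API side are the typical countermeasures; keeping the weights secret does nothing to interrupt this loop.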
According to security researcher Farida Shafik, "many organizations assume that keeping model weights private is sufficient protection." However, this provides a false sense of security.