Google has added Gemini AI agents to Google Threat Intelligence, now in public preview, to automatically monitor dark web forums. Every day, these agents read millions of posts and apply organizational profiling to surface specific security threats such as data leaks and initial access brokers. Traditional dark web monitoring relies heavily on regex rules and static keyword scraping, which typically leaves threat intelligence teams with an 80 to 90 percent false-positive rate.
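As a rough illustration of why that approach is so noisy, the sketch below flags any post containing a watched phrase, with no notion of context. The watchlist terms and posts are hypothetical, not any vendor's actual rules:

```python
import re

# Illustrative static keyword rules of the kind traditional monitors rely on.
# Any post containing a watched term is flagged, regardless of context.
WATCHLIST = [r"\bacme\s*corp\b", r"\bdatabase\s+dump\b", r"\bvpn\s+access\b"]
RULES = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]

def flag_post(post: str) -> bool:
    """Return True if any static rule matches; no context is considered."""
    return any(rule.search(post) for rule in RULES)

posts = [
    "Selling VPN access to a Fortune 500 retailer, PM for proof",  # real threat
    "Anyone know if Acme Corp's consumer VPN is any good?",        # harmless chatter
]
for post in posts:
    print(flag_post(post), "->", post)  # both flagged: context-free matching
```

Both posts trip a rule, but only the first is an actual threat; at scale, that second kind of hit is what drives the 80 to 90 percent false-positive rate.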

Google's Gemini agents sidestep this problem by combining open-source intelligence with customer-supplied data to build a detailed profile of an organization's VIPs, brands, and technology stack. The AI then uses vector comparisons to tie vague dark web claims directly to that profile, filtering out much of the unactionable noise.
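Google has not published the internals of this pipeline, but the idea can be sketched with a toy bag-of-words embedding and cosine similarity. The `embed` function and profile text below are placeholders for a real LLM embedding model and an OSINT-derived organizational profile:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a production pipeline would use a
    # learned LLM embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical org profile assembled from OSINT and customer-supplied context.
profile = embed("north american financial services firm 50 billion assets vpn citrix okta executives")

claims = [
    "selling access north american company 50 billion assets citrix",
    "fresh combo list eu gaming accounts",
]
for claim in claims:
    print(f"{cosine(embed(claim), profile):.2f}", claim)
```

The first claim scores high against the profile even though it never names the organization; the second scores zero and can be discarded as noise.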

Backed by Google's extensive telemetry, Gemini processes 8 to 10 million dark web events every day. In internal testing, Google says the system analyzed these events with 98 percent accuracy. The Register spoke with Brandon Wood, who heads Google's Threat Intelligence product.

The intelligence engine surfaces high-severity risks such as insider threats, initial access broker activity, and unverified data leaks before they escalate.

| Feature | Traditional Dark Web Monitoring | Gemini Threat Intelligence |
| --- | --- | --- |
| Detection mechanism | Static keyword scraping and regex rules | In-context LLM vector comparison and profiling |
| False-positive rate | 80% to 90% | Reduced noise at 98% accuracy |
| Threat context | Isolated keyword hits | Linked to specific business assets and VIPs |

If a threat actor posts on a dark web forum selling access to a "North American company with $50 billion in assets," traditional tools often miss the link because the company name never appears. Gemini's language models automatically check these vague demographic and financial claims against the profiles of known organizations. By making these connections in context, the system immediately flags the post as a high-severity threat for the targeted organization.
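A hypothetical sketch of that contextual matching is below, with made-up profiles and a simple tolerance check standing in for Gemini's actual models:

```python
from dataclasses import dataclass

@dataclass
class OrgProfile:
    name: str
    region: str
    assets_usd_bn: float

# Hypothetical profiles; real ones would come from the platform's OSINT layer.
PROFILES = [
    OrgProfile("ExampleBank", "North America", 50.0),
    OrgProfile("EuroRetailCo", "Europe", 12.5),
]

def match_claim(region: str, assets_bn: float, tolerance: float = 0.1):
    """Link a vague seller claim ("NA company, ~$50B assets") to known orgs."""
    return [
        p for p in PROFILES
        if p.region == region
        and abs(p.assets_usd_bn - assets_bn) <= tolerance * assets_bn
    ]

# Attributes extracted by the LLM from a forum post that never names the victim.
print(match_claim("North America", 50.0))  # -> [ExampleBank]: flag high severity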

Beyond passive monitoring, the dark web intelligence module correlates its findings with data from the Google Threat Intelligence Group, which tracks 627 distinct threat groups. Google has also added autonomous AI agents to Google Security Operations to assist with triage and investigation. These secondary agents independently gather forensic evidence and deliver structured verdicts on alerts, cutting down on the manual workload for security analysts.
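Google has not published the verdict format, but a structured verdict from such a triage agent might look something like the assumed schema below:

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    MALICIOUS = "malicious"
    BENIGN = "benign"
    INCONCLUSIVE = "inconclusive"

@dataclass
class AlertVerdict:
    alert_id: str
    verdict: Verdict
    confidence: float                                   # 0.0 to 1.0
    evidence: list[str] = field(default_factory=list)   # forensic artifacts reviewed
    rationale: str = ""                                 # agent's written justification

# What a triage agent might hand an analyst instead of a raw alert queue:
triaged = AlertVerdict(
    alert_id="SOC-2024-118",
    verdict=Verdict.MALICIOUS,
    confidence=0.92,
    evidence=["process tree", "outbound C2 beacon", "signed-binary proxying"],
    rationale="Living-off-the-land pattern consistent with initial access broker tooling.",
)
print(triaged.verdict.value, triaged.confidence)
```

The point of the structure is that an analyst reviews a verdict plus its evidence trail rather than triaging each raw alert by hand.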

Feeding criminal forums into large language models raises operational security concerns, so Google is careful about how customer data interacts with the tool. The models draw only on publicly available information and the specific context that security teams have explicitly authorized on the platform.

To keep the LLMs from being a black box, Google provides citations for all open-source data used in profiling, preserving transparency. The rise of defensive AI agents coincides with new reports that state-sponsored hackers are using Gemini to accelerate their own cyber operations.

Attackers are using AI early in the attack chain to gather reconnaissance, analyze targets, and build malware. As a result, highly accurate AI monitoring tools have become essential for catching these machine-speed campaigns before they breach the perimeter.