This post is part of our ongoing series on what we learned about AI, cybersecurity, and geopolitics at RSAC 2026. Here we focus on machine learning and large language models (LLMs) as types of AI. Integrating these technologies, whether by building capabilities ourselves or by adopting AI-enabled security tools, can help reduce risk and strengthen our security measures as we add new features.

It's important to understand how the industry as a whole approaches AI implementation, governance, controls, use cases, and fundamentals. As practitioners, we have an essential role to play: we collectively shape how these practices evolve. Vendors and service providers alike need to keep pace with this fast-moving trend.

The tide is rising, and we all need to navigate these changes together. Some AI security products can cost as much as a full-time analyst's salary, which is a significant investment. Even where solutions carry a high up-front cost, look for ways to get the most out of that budget.

If you'd rather not tackle this problem on your own, consider turning to service providers who are tracking these issues closely and offering more affordable options. Weigh the costs of adoption against the potential for significant results. In my own experience, these tools have dramatically sped up my response times, and I can't overstate how much they have helped.

The key is to apply these advances in ways that are practical and lead to real, positive outcomes, without overestimating how much value a flood of new alerts will actually deliver.

To make the most of these improvements, we need to be proactive at every stage of the implementation cycle, so that we achieve meaningful results quickly. These tools are extremely helpful, but keeping a human in the loop ensures that code is thoroughly audited and validated. Real-world examples of security teams using an agentic LLM would be especially valuable.