The AI and Adversarial Testing Benchmark Report 2026 from Pentera says that most security leaders are having a hard time protecting AI systems with tools and skills that aren't up to the task This article explores companies protecting ai. . The report, which is based on a survey of 300 US CISOs and other high-level security leaders, looks at how companies are protecting their AI infrastructure and points out important gaps caused by a lack of skills and the use of security controls that weren't made for the AI era.
AI use is growing faster than security visibility. AI systems are rarely used on their own. They are layered on top of and connected to the technology that companies already use, such as cloud platforms, identity systems, applications, and data pipelines. Because ownership is spread out across different teams, centralized oversight is no longer effective.
Because of this, 67% of CISOs said they didn't have a clear picture of how AI was being used in their company. None of the people who answered said they had full visibility; instead, they said they were aware of or okay with some kind of unmanaged or unsanctioned AI use. Security teams have a hard time figuring out how risky AI systems are when they don't know where they work or what resources they can use.
A lot of the time, basic questions like what identities AI systems use, what data they can access, or how they act when controls fail go unanswered. ## Skills, not money, are the main problem The study shows that the biggest problems with AI security are not money, even though it is now a common topic of conversation in boardrooms and executive meetings.
CISOs said that these were the biggest problems they faced when trying to protect AI infrastructure: Not enough internal knowledge (50%) 48% of people said they couldn't see how AI was being used. 36 percent said that AI systems don't have enough security tools made just for them. Only 17% said that budget problems were their main worry.
This means that a lot of companies are willing to spend money on AI security, but they don't yet have the specific skills needed to assess AI-related risks in real-world situations. AI systems bring new behaviors that security teams are still learning how to deal with, like making decisions on their own, indirect access paths, and systems that can only interact with certain people. It is hard to tell if existing controls are working as they should without the right knowledge and regular testing.
Most of the work is being done by legacy controls. Most businesses are adding AI infrastructure to their existing security controls because there aren't any best practices, skills, or tools specifically for AI. The study found that 75% of CISOs use old security tools, like endpoint, application, cloud, or API security tools, to protect AI systems.
Only 11% said they had security tools made just for protecting AI infrastructure. This is similar to what has happened in the past when new technologies came out: companies first adapted their existing defenses before creating more specific security measures. This can give you basic protection, but controls made for traditional systems may not work with AI because it changes how people access things and opens up new ways for attacks.
A well-known problem, now applied to AI. The results show that AI security problems come from gaps in the basics, not from a lack of knowledge or intent. The report says that as AI becomes a bigger part of business infrastructure, companies will need to work on building expertise and finding better ways to check security controls in places where AI is already running.
Download the AI and Adversarial Testing Benchmark Report 2026 to read more about the data and the main points. Ryan Dory, Director of Technical Advisors at Pentera, wrote this article.












