The OWASP Foundation has released new security guidance for businesses adopting AI technologies. The first guide, OWASP's first GenAI Data Security list, catalogs 21 potential data-related threats that can arise from using AI, while the second focuses on agentic AI.

Scott Clinton, co-lead of the GenAI Security Project, says the group's latest release comes just four months after its previous solutions guide, and the list of providers covered has grown to more than 170. The two reports map the security landscape for LLM, GenAI, and agentic AI systems within a DevOps and SecOps software development and deployment cycle.

They cover both commercial and open-source tools that address security issues in AI-based ecosystems, such as goal drift, prompt injection, inter-agent collusion, and unsafe tool execution. Among the catalogued threats, DSGAI-01 covers sensitive data leaking through prompts, DSGAI-04 covers data poisoning via manipulated training data, and DSGAI-06 covers data compromise through third-party tools.
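To make the DSGAI-01 threat concrete: one common mitigation is to screen prompts for sensitive data before they reach a model. The sketch below is illustrative only, not OWASP's or any vendor's implementation; the `redact` helper and its regex patterns are assumptions for demonstration.

```python
import re

# Hypothetical pre-submission filter illustrating the kind of control
# DSGAI-01 (sensitive data leaking through prompts) calls for.
# Patterns are simplified examples, not production-grade detectors.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders; return the cleaned
    prompt and the list of categories that were detected."""
    findings = []
    for label, pattern in PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[{label.upper()}_REDACTED]", prompt)
    return prompt, findings

clean, findings = redact(
    "Summarize this: contact jane@corp.com, SSN 123-45-6789"
)
print(findings)  # categories caught in the prompt
print(clean)     # prompt with placeholders substituted
```

A filter like this would typically sit in a gateway or proxy in front of the model endpoint, so redaction happens before any prompt leaves the organization's boundary.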

Straiker's Modalavalasa says companies need to examine how they use AI to find their biggest security gaps. "If you rely a lot on AI for reasoning and automation, depending entirely on its models might not be enough if your defenses aren't strong enough yet," he says. He adds, "AI can easily 'go crazy' because it is very goal-driven and could lose sight of the bigger picture."

Straiker is working with OWASP on a new project to connect emerging solutions with an evolving definition of the software development life cycle. "I think that how you adopt security measures depends on your business needs," Modalavalasa says.