Security teams continue to concentrate on safeguarding the models themselves even as AI copilots and assistants are integrated into daily tasks. Recent events, however, indicate that the workflows surrounding those models pose the greater risk.

Two Chrome extensions posing as AI assistants recently stole chat data from more than 900,000 ChatGPT and DeepSeek users. In a separate study, researchers showed that prompt injections hidden in code repositories could trick IBM's AI coding assistant into running malware on a developer's machine. Neither attack compromised the AI models themselves. The defense therefore belongs in the workflow: before outputs leave your environment, check them for sensitive information.
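As a minimal sketch of such an output check (the patterns and function names here are illustrative assumptions, not taken from the incidents above; a production system would use a vetted secret-scanning library and dedicated PII detectors):

```python
import re

# Illustrative patterns for secret-like strings (assumed examples).
SENSITIVE_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID format
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),               # US SSN-like number
]

def contains_sensitive(output: str) -> bool:
    """Return True if the model output matches any sensitive pattern."""
    return any(p.search(output) for p in SENSITIVE_PATTERNS)

def release_output(output: str) -> str:
    """Block outputs that carry sensitive data before they leave the environment."""
    if contains_sensitive(output):
        raise PermissionError("output blocked: sensitive data detected")
    return output
```

The key design point is that the check runs on the way out, so even a successfully injected prompt cannot exfiltrate secrets through the model's response.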

These safeguards ought to live outside the model itself, in middleware that verifies actions before they are carried out. Treat AI agents exactly as you would any other user or service.
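A sketch of what that middleware could look like, assuming a hypothetical per-agent allowlist of permitted actions (the agent and action names are illustrative):

```python
# Hypothetical allowlist mapping each agent to the actions it may take,
# so the AI agent is gated like any other service account.
ALLOWED_ACTIONS = {
    "support-bot": {"read_ticket", "post_reply"},
    "code-assistant": {"read_repo"},
}

def authorize(agent: str, action: str) -> bool:
    """Permit an action only if it is explicitly allowlisted for this agent."""
    return action in ALLOWED_ACTIONS.get(agent, set())

def execute(agent: str, action: str) -> str:
    """Verify the action in middleware before dispatching it to a real system."""
    if not authorize(agent, action):
        raise PermissionError(f"{agent} is not allowed to {action}")
    # ... dispatch the action to the downstream system here ...
    return f"{action} executed for {agent}"
```

Because the gate sits outside the model, a prompt injection can change what the model *asks* to do but not what the middleware *lets* it do.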

If an AI needs only read access to one system, don't grant it access to all of them. Scope OAuth tokens to the minimum permissions required, and watch for irregularities, such as an AI suddenly reading data it has never touched before.
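That last irregularity can be flagged with a simple first-access check. This is a minimal sketch under assumed names (a real system would persist the access history and route alerts to a review queue):

```python
from collections import defaultdict

# Track which resources each agent has touched before (assumed in-memory store).
seen_resources: defaultdict = defaultdict(set)

def record_access(agent: str, resource: str) -> bool:
    """Return True if this is the agent's first-ever access to the resource,
    which is the anomaly worth surfacing for human review."""
    first_time = resource not in seen_resources[agent]
    seen_resources[agent].add(resource)
    return first_time
```

The first touch of a new data store is not necessarily malicious, but it is exactly the kind of deviation from an agent's established pattern that deserves a second look.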