When Grafana's AI modules process information from Web pages that attackers control, they are open to attack This article explores noma security research. . They send sensitive data to an attacker's server by tricking the AI into thinking that instructions are harmless.
Joe McManus, the CISO of Grafana Labs, said that an issue with the image renderer in its Markdown component was quickly fixed in response to Noma's research. Noma said that there was a "zero-click" attack or one that worked without any clicks, but the company disagrees. McManu said, "We want to make it clear that there is no proof that this bug was used in the wild and that no data was leaked from GrafANA Cloud." Noma said in a statement, "The user couldn't see what was going on in the background and couldn't do anything about it."
"GrafANA's quick response to the patch and commitment to user safety." But we can't ignore the fact that the exploit's mechanics are not being shown correctly. "The documentation is clear, and we still have full faith in the research results," the company told ZeroOwl in an email.
"There were no alerts, flags, or prompts asking for confirmation. "The model handled an indirect prompt injection on its own by treating log content as valid context and acting without restriction or notification of any strange behavior," said Sasi Levi, head of Noma's security research department. "A successful execution of this exploit required a lot of user involvement. Specifically, users would have had to repeatedly follow malicious instructions provided in logs after our AI assistant alerted them," he said in the statement.
The technical problem that caused the issue has been fixed, but users should still be careful when using Grafna's AI features. Noma Security's security team has found the vulnerability and is working on a fix for it. They are also working on a fix for the bug that affects Google's Chrome and Microsoft's Windows software.












