Researchers have developed a novel method for tricking artificial intelligence (AI) chatbots into producing malicious output. AI security startup NeuralTrust calls it "semantic chaining," and any non-technical user can perform it in just a few easy steps.

It's among the easiest AI jailbreaks to date. Researchers have already demonstrated its efficacy against cutting-edge models from Google and xAI, and those developers may not have a simple fix. However, because the jailbreak depends on the malicious output being rendered in an image, its severity is likewise constrained.

Semantic chaining allowed the researchers to fool some of the newest and most popular image generation models available, including Google's Gemini Nano Banana Pro, ByteDance's Seedream 4.5, and Grok 4. Dark Reading contacted Google and xAI for comment, but neither company has replied.

To resolve the creation-versus-modification issue, "what we recommend is to apply different layers of security, not just on the input, not just on the output, but in the reasoning process — [layers that address] how the model generated that image, that result," Pignati says.

However, he cautions that developers have yet to fully work out how to do this.

Certain chatbots do exhibit a slightly stronger defense against semantic chaining.