A newly disclosed attack technique exploits a fundamental flaw in AI web assistants: the difference between what a browser shows a user and what an AI tool actually reads from the HTML. Using nothing more than a custom font file and basic CSS, attackers can deliver harmful instructions to users without their knowledge, while AI safety checks see only harmless content.

The attack, tested in December 2025, exploits a structural gap between a webpage's DOM text and its rendered appearance. When an AI assistant analyzes a webpage, it parses the raw HTML structure. The browser, however, displays that same page through a visual pipeline that interprets fonts, CSS, and glyph mappings to produce what the user sees on screen.
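To see why this gap exists, consider how a text-only analyzer typically ingests a page. Below is a minimal sketch, assuming the requests and BeautifulSoup libraries and a hypothetical URL:

```python
# Minimal sketch of the text-only view most AI assistants rely on.
# The URL is hypothetical; the point is that get_text() returns DOM
# text only -- fonts and CSS never enter the analysis.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/fanfic", timeout=10).text
dom_text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

# An assistant scanning `dom_text` sees whatever characters sit in the
# markup. If a custom font remaps those characters to different glyphs,
# the rendered page can say something else entirely.
print(dom_text[:500])
```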

Attackers can exploit the space between these two views. LayerX demonstrated this with a proof-of-concept page dressed up as a fanfiction site for the BioShock video game. Hidden behind that facade was a custom font that functioned as a visual substitution cipher.
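LayerX has not published the PoC font itself, but the core trick, remapping a font's character-to-glyph table, can be sketched with the fontTools library. The base font path and the ROT13 cipher below are placeholder assumptions:

```python
# Hedged sketch of a substitution-cipher font using fontTools.
# "Base.ttf" is a placeholder for any real TrueType font.
import string
from fontTools.ttLib import TTFont

# A toy cipher: rotate the lowercase alphabet by 13 (ROT13).
plain = string.ascii_lowercase
cipher = plain[13:] + plain[:13]
rot13 = dict(zip(plain, cipher))

font = TTFont("Base.ttf")                 # placeholder path
best = dict(font.getBestCmap())           # copy: codepoint -> glyph name

for table in font["cmap"].tables:
    for src, dst in rot13.items():
        if ord(dst) in best:
            # The character `src` in the DOM now draws the glyph of
            # `dst` on screen: DOM text and rendered text diverge.
            table.cmap[ord(src)] = best[ord(dst)]

font.save("cipher.ttf")
```

Because ROT13 is symmetric, the same font simultaneously turns readable DOM text into rendered gibberish and gibberish DOM text into readable rendered instructions.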

The font was designed to render the page's normal HTML text, the video-game fanfiction, as 1-pixel, background-colored gibberish invisible to the user, while rendering a separate encoded payload as large green text instructing the user to run a reverse shell on their own machine.

Every AI Assistant Failed

Every non-agentic AI assistant tested, including ChatGPT, Claude, Copilot, Gemini, Grok, Perplexity, and others, failed to detect the threat and instead confirmed the page was safe.

In many cases, the assistants even encouraged users to follow the malicious instructions on the screen. The attack requires no JavaScript, no exploit kit, and no browser vulnerability. The browser works exactly as it should.
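To make the mechanics concrete, here is a hypothetical reconstruction of the page structure described above, written out from Python for illustration. The class names, colors, and the CIPHERTEXT placeholder are assumptions, not LayerX's actual PoC:

```python
# Hypothetical reconstruction of the dual-layer page: a decoy layer the
# AI reads, and a cipher-font layer the user sees. Pure HTML/CSS, no JS.
DEMO = """<!doctype html>
<html><head><style>
  @font-face { font-family: "cipher"; src: url("cipher.ttf"); }
  .decoy   { font-size: 1px; color: #ffffff; }   /* invisible to the user */
  .payload { font-family: "cipher"; font-size: 28px; color: #00c853; }
</style></head><body style="background:#ffffff">
  <p class="decoy">Harmless fanfiction text that the AI assistant reads...</p>
  <p class="payload">CIPHERTEXT-THAT-RENDERS-AS-INSTRUCTIONS</p>
</body></html>"""

with open("demo.html", "w") as f:
    f.write(DEMO)
```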

The problem is that AI tools treat DOM text as a complete picture of what users see, when in fact the rendering layer can deliver an entirely different message.

Attack Flow (Source: LayerX)

LayerX responsibly disclosed the findings to all major AI vendors in December 2025. The responses revealed a worrying gap in how AI security is defined:

Microsoft: accepted the report and requested the full 90-day window to fix the issue.
Google: initially triaged it as high priority (P2), then downgraded and closed the report on January 27, 2026.
OpenAI: rejected it as "out of scope," citing insufficient impact for triage.
Anthropic: rejected it as social engineering, explicitly out of scope.
xAI: rejected it and redirected the report to safety@x.ai.
Perplexity: classified it as a known LLM limitation, not a security flaw.

Microsoft was the only vendor that fully addressed the problem and followed the complete disclosure timeline.

The most immediate threat is AI-assisted social engineering: an attacker tricks an AI into vouching for a malicious page, then leverages the AI's trusted reputation to mislead the user.

As AI copilots and browser assistants become embedded in enterprise security workflows, these text-only analysis tools leave gaps that attackers can exploit with little effort.

LayerX recommends that AI vendors adopt dual-mode render-and-diff analysis, treat custom fonts as a potential threat surface, detect CSS-based content-hiding techniques (such as near-zero opacity and color-matched text), and, most importantly, avoid confident safety claims when they cannot verify a page's full rendering context. A minimal sketch of one such check follows.
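The sketch below illustrates the render-and-diff idea, assuming Playwright, Tesseract OCR, and pytesseract are installed; the threshold and word-overlap heuristic are illustrative, not a production detector:

```python
# Render-and-diff sketch: compare DOM text against OCR of the rendered
# page, since the font attack makes the two diverge.
from bs4 import BeautifulSoup
from playwright.sync_api import sync_playwright
from PIL import Image
import pytesseract

def render_and_diff(url: str, threshold: float = 0.5) -> bool:
    """Return True if the rendered page diverges sharply from its DOM text."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        dom_text = BeautifulSoup(page.content(), "html.parser").get_text(" ", strip=True)
        page.screenshot(path="page.png", full_page=True)
        browser.close()

    # What the user actually sees, recovered via OCR of the screenshot.
    seen_text = pytesseract.image_to_string(Image.open("page.png"))

    dom_words = set(dom_text.lower().split())
    seen_words = set(seen_text.lower().split())
    if not seen_words:
        return False
    # Fraction of visible words that also appear in the DOM text; a cipher
    # font drives this toward zero because the glyphs spell new words.
    overlap = len(dom_words & seen_words) / len(seen_words)
    return overlap < threshold

# if render_and_diff("https://example.com/fanfic"):
#     print("Rendered content diverges from DOM text -- flag for review.")
```

OCR-based diffing is coarse, but it targets exactly the layer this attack abuses: the glyphs the user actually sees, rather than the characters the DOM claims to contain.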