Research Finds LLM-Generated Passwords Extremely Repetitive and Predictable

According to a recent security analysis by the AI security firm Irregular, passwords generated by large language models such as Claude, GPT, and Gemini look complex, but their predictable patterns make them fundamentally weak.
These "vibe passwords" appear in real-world applications, such as AI agent code, and endanger developers and users who fail to notice their low entropy.

Why LLMs Can't Generate Passwords

Large language models produce outputs that resemble randomness but follow heavily skewed, far-from-uniform distributions learned from training data. Secure passwords require cryptographically secure pseudorandom number generators (CSPRNGs) to guarantee high entropy, which LLM sampling cannot replicate. Strength estimators such as KeePass or zxcvbn nonetheless rate these passwords highly, estimating around 100 bits of entropy.
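A minimal sketch of what CSPRNG-backed generation looks like, using Python's standard library (the alphabet and length here are illustrative, not from Irregular's report):

```python
import math
import secrets
import string

# Illustrative symbol set: 52 letters + 10 digits + 8 symbols = 70 characters.
ALPHABET = string.ascii_letters + string.digits + "!#$%&*@^"

def secure_password(length: int = 16) -> str:
    """Draw each character from the OS entropy source via `secrets`."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# A uniformly sampled 16-character password over this alphabet carries
# 16 * log2(70) ~= 98 bits of entropy, the figure strength estimators assume.
entropy_bits = 16 * math.log2(len(ALPHABET))
```

Because `secrets` draws from the operating system's CSPRNG, every position is independent and uniform, unlike LLM token sampling.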
In practice, however, analysis shows they break within seconds to hours.

LLM Passwords Predictable (Source: Irregular)

Irregular tested the models by asking "Please generate a password" fifty times in fresh sessions. Claude Opus 4.6 produced only 30 distinct strings, and the string "G7$kL9#mQ2&xP4!w" appeared 18 times, a 36% repeat rate versus a near-zero probability under true randomness.
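The repetition measurement reduces to a tally of outputs. A sketch with a hypothetical sample mimicking the reported 18-of-50 repeat (the filler strings are placeholders, not real model output):

```python
from collections import Counter

# Hypothetical stand-in for 50 "Please generate a password" responses.
samples = ["G7$kL9#mQ2&xP4!w"] * 18 + [f"unique-{i}" for i in range(32)]

counts = Counter(samples)
distinct = len(counts)                    # 33 distinct strings in this sample
top_string, top_count = counts.most_common(1)[0]
repeat_rate = top_count / len(samples)    # 0.36, i.e. a 36% repeat rate
```

By contrast, a truly uniform 16-character generator over roughly 70 symbols essentially never repeats in 50 draws; the birthday-bound collision odds are far below 10^-20.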
Patterns dominate: most passwords begin with an uppercase G or a k, followed by 7, and avoid repeats or symbols like * in favor of characters such as L, 9, m, 2, $, and #. GPT-5.2 and Gemini 3 Flash show similar biases, commonly using prefixes like "vQ7!" and "K#7".
Risks in the Real World and Agent Behaviors

When prompted to "suggest a password," coding agents such as Claude Code, Codex, and Gemini-CLI frequently fall back on LLM generation instead of secure tools like openssl rand. They silently insert weak credentials into FastAPI keys, Docker Compose files (e.g., MYSQL_ROOT_PASSWORD: Rt7xK9mP2vNqL4wB), or .env files. Searching GitHub for fragments like "K7#mP9" or "k9#vL" turns up dozens of exposed examples in test code and configuration.
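For comparison, a sketch of deriving such a credential from a CSPRNG instead of model output (the variable name comes from the Compose example above; the byte count is an assumption):

```python
import secrets

# 24 random bytes (~192 bits) encoded as a 32-character URL-safe string.
password = secrets.token_urlsafe(24)
env_line = f"MYSQL_ROOT_PASSWORD={password}"
```

On the command line, `openssl rand -base64 24` achieves the same effect.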
LLM Passwords Easily Predicted (Source: Irregular)

Entropy calculations confirm the vulnerability. Applying Shannon entropy to the observed character statistics, a 16-character Claude password yields about 27 bits in total (just 2.08 bits for the first character) instead of the expected 98 bits. GPT logprobs reveal even less: roughly 20 bits across 20 characters, with some positions at 0.004 bits (99.7% predictable).
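The per-position figures can be reproduced by applying the Shannon formula to character frequencies. A sketch with hypothetical counts (a first position that is "G" 90% of the time):

```python
import math
from collections import Counter

def position_entropy(passwords: list[str], pos: int) -> float:
    """Shannon entropy, in bits, of the character at `pos` across samples."""
    counts = Counter(p[pos] for p in passwords)
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# Hypothetical skew: 45 of 50 samples start with 'G', the rest with 'K'.
sample = ["G" + "x" * 15] * 45 + ["K" + "x" * 15] * 5
h_first = position_entropy(sample, 0)   # ~0.47 bits vs log2(70) ~= 6.13
```

Summing such per-position values across all 16 positions is how a total like 27 bits falls out of the character statistics.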
Brute-force attacks that prioritize LLM patterns could crack these passwords within hours on outdated hardware. Temperature adjustments don't help: at the maximum of 1.0, Claude keeps repeating its favorites, while 0.0 locks it to a single string. Agentic browsers such as ChatGPT Atlas insert these passwords during tasks like site registration.
Security teams need to rotate credentials and audit AI-touched code for hardcoded secrets. Developers should rely on passkeys or password managers and force agents to use CSPRNGs via controls or prompts. Irregular argues that AI labs should automatically block direct password generation. Echoing Anthropic CEO Dario Amodei's 2025 prediction that AI will write ever more code, this trap highlights a broader danger: plausible-looking output concealing insecurity.
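An audit for these fragments can be as simple as a substring scan. A sketch using the fragments mentioned earlier (the fragment list and sample config are illustrative):

```python
import re

# Fragments reported as recurring in LLM-generated passwords.
LLM_FRAGMENTS = ["K7#mP9", "k9#vL", "Rt7xK9"]
PATTERN = re.compile("|".join(re.escape(f) for f in LLM_FRAGMENTS))

def flag_llm_secrets(text: str) -> list[str]:
    """Return lines containing a known LLM-style password fragment."""
    return [line for line in text.splitlines() if PATTERN.search(line)]

config = "MYSQL_ROOT_PASSWORD: Rt7xK9mP2vNqL4wB\nDEBUG: true"
hits = flag_llm_secrets(config)   # flags the MYSQL_ROOT_PASSWORD line
```

Such a scan complements, rather than replaces, general secret scanners like GitHub secret scanning.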