LangChain and LangGraph are free and open-source frameworks that help developers build applications powered by Large Language Models (LLMs). Both have been found to contain security flaws that, if successfully exploited, could allow attackers to access filesystem data, environment secrets, and conversation history.
Last week alone, the affected packages were downloaded more than 52 million, 23 million, and 9 million times, according to the Python Package Index. The vulnerabilities have been addressed in the following versions:

- CVE-2026-34070 - langchain-core >= 1.2.22 and 0.3.81
- langgraph-checkpoint-sqlite 3.0.1
- langgraph-checkpoint 3.1 and 1.1.3
- langchain-core 1.3 and 1.2.5
- langgraph 1.2 and 1.3.4
- langgraph 1.4, 1.5, 1.6, 1.7, 1.8, and 1.9
The disclosure comes just days after a serious security flaw in Langflow was actively exploited within 20 hours of being made public, allowing attackers to steal sensitive data from developer environments. The findings show once again that AI plumbing is not immune to common security holes, which could put entire systems at risk. "LangChain is not alone," Cyera said. "It sits at the center of a huge web of dependencies that runs through the AI stack."