CVE-2026-34070

High

Description

LangChain is a framework for building agents and LLM-powered applications. Prior to version 1.2.22, multiple functions in langchain_core.prompts.loading read files from paths embedded in deserialized config dicts without validating against directory traversal or absolute path injection. When an application passes user-influenced prompt configurations to load_prompt() or load_prompt_from_config(), an attacker can read arbitrary files on the host filesystem, constrained only by file-extension checks (.txt for templates, .json/.yaml for examples). This issue has been patched in version 1.2.22.
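The core of the fix is validating config-supplied file paths before reading them. The sketch below is a hypothetical illustration of that mitigation pattern, not the actual patched code: the helper name `resolve_template_path` and the `base_dir` parameter are assumptions for illustration. It shows how a path taken from a deserialized config dict can be confined to a trusted directory, rejecting both `../` traversal and absolute-path injection, which extension checks alone (such as requiring `.txt`) do not prevent.

```python
from pathlib import Path


def resolve_template_path(base_dir: str, template_path: str) -> Path:
    """Hypothetical mitigation sketch: confine a config-supplied path to base_dir.

    Resolves the candidate path and rejects anything that escapes the
    trusted base directory, then applies the extension check on top.
    """
    base = Path(base_dir).resolve()
    # Joining with an absolute path (e.g. "/etc/passwd") discards `base`,
    # and "../" sequences resolve upward -- both land outside `base`
    # after resolve(), so the containment check below catches them.
    candidate = (base / template_path).resolve()
    if not candidate.is_relative_to(base):
        raise ValueError(f"template path escapes base directory: {template_path!r}")
    # Extension check alone is insufficient; it runs only after containment.
    if candidate.suffix != ".txt":
        raise ValueError(f"only .txt templates are allowed, got: {candidate.suffix!r}")
    return candidate
```

A benign relative path like `"prompts/system.txt"` resolves normally, while `"../../etc/passwd"` or an absolute `"/etc/passwd"` raises before any file is read. `Path.is_relative_to` requires Python 3.9+.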

References

https://thehackernews.com/2026/03/langchain-langgraph-flaws-expose-files.html

https://github.com/langchain-ai/langchain/security/advisories/GHSA-qh6h-p6c9-ff54

https://github.com/langchain-ai/langchain/releases/tag/langchain-core==1.2.22

https://github.com/langchain-ai/langchain/commit/27add913474e01e33bededf4096151130ba0d47c

Details

Source: Mitre, NVD

Published: 2026-03-31

Updated: 2026-04-02

Risk Information

CVSS v2

Base Score: 7.8

Vector: CVSS2#AV:N/AC:L/Au:N/C:C/I:N/A:N

Severity: High

CVSS v3

Base Score: 7.5

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

Severity: High

EPSS

EPSS: 0.00192