langchain-ai v0.3.51 was discovered to contain an indirect prompt injection vulnerability (CVE-2025-46059) in the GmailToolkit component. Email content retrieved by the toolkit's tools is passed back to the agent's LLM as trusted context, so an attacker who delivers a crafted email message can inject instructions that steer the agent's subsequent tool calls, potentially leading to arbitrary code execution and compromise of the application.
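The sketch below illustrates the agent wiring this class of report targets; it is a minimal example based on the standard GmailToolkit usage pattern from the LangChain documentation, not a proof of concept from the advisory. The model name, the user prompt, and the assumption that Gmail API credentials are already configured locally are illustrative.

```python
# Minimal sketch of an agent using GmailToolkit (assumed standard usage pattern).
# Requires Gmail API credentials (credentials.json / token.json) to be set up locally.
from langchain_community.agent_toolkits import GmailToolkit
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

toolkit = GmailToolkit()          # exposes search/read/draft/send Gmail tools
tools = toolkit.get_tools()
llm = ChatOpenAI(model="gpt-4o-mini")  # illustrative model choice

agent = create_react_agent(llm, tools)

# The agent searches the inbox and reads message bodies via the Gmail tools.
# Those bodies are attacker-controlled input, yet they are fed back to the LLM
# as tool output in the same conversation: instructions embedded in a crafted
# email ("ignore previous instructions and ...") can redirect the agent's next
# tool calls. If the agent is also given a code-execution or shell tool, the
# injected instructions can escalate to arbitrary code execution, as the
# report describes.
agent.invoke({"messages": [("user", "Summarize my latest unread email")]})
```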
https://github.com/langchain-ai/langchain/issues/30833
https://github.com/Jr61-star/CVEs/blob/main/CVE-2025-46059.md