CVE-2026-25960

High

Description

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing behavior between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
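The root cause is a parser differential: the hostname the validation layer extracts is not necessarily the host the HTTP client connects to. A minimal, stdlib-only sketch of this bug class follows; the allow-list, URLs, and helper names are hypothetical, and urllib.parse stands in for the urllib3/yarl pair described above rather than reproducing the exact vLLM bypass:

```python
from urllib.parse import urlsplit

# Hypothetical allow-list, for illustration only.
ALLOWED_HOSTS = {"models.example.com"}

def is_allowed_naive(url: str) -> bool:
    # Broken validator: checks a property of the raw string rather than
    # the authority component the client will actually connect to.
    return any(host in url for host in ALLOWED_HOSTS)

def is_allowed(url: str) -> bool:
    # Safer: parse the URL and compare the extracted hostname exactly.
    return urlsplit(url).hostname in ALLOWED_HOSTS

# Userinfo trick: everything before '@' in the authority is credentials,
# not the host, so the client connects to 169.254.169.254.
url = "https://models.example.com@169.254.169.254/latest/meta-data/"

print(is_allowed_naive(url))       # True  - naive check passes
print(urlsplit(url).hostname)      # 169.254.169.254 - real destination
print(is_allowed(url))             # False - exact hostname check rejects it
```

The general mitigation for this bug class is to run the allow-list check against the same parsed URL representation the HTTP client will use, rather than re-parsing the raw string with a different library.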

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-v359-jj2v-j536

https://github.com/vllm-project/vllm/security/advisories/GHSA-qh4c-xf7m-gxfc

https://github.com/vllm-project/vllm/pull/34743

https://github.com/vllm-project/vllm/commit/6f3b2047abd4a748e3db4a68543f8221358002c0

Details

Source: Mitre, NVD

Published: 2026-03-09

Updated: 2026-03-09

Risk Information

CVSS v2

Base Score: 7.5

Vector: CVSS2#AV:N/AC:L/Au:S/C:C/I:N/A:P

Severity: High

CVSS v3

Base Score: 7.1

Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L

Severity: High

EPSS

EPSS: 0.00016