CVE-2025-62426

Medium

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.5.5 up to, but not including, 0.11.1, the /v1/chat/completions and /tokenize endpoints accept a chat_template_kwargs request parameter whose contents are used before they are properly validated against the chat template. With suitably crafted chat_template_kwargs, an attacker can block the API server's request processing for long periods, delaying all other requests (a denial of service). This issue is patched in version 0.11.1.
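
For context, chat_template_kwargs is vLLM's extension field on the OpenAI-compatible API: its keys are forwarded as extra variables when the server renders the model's Jinja2 chat template into a prompt. The sketch below shows where the parameter sits in a request; the server URL, model name, and kwarg value are illustrative placeholders, not an exploit.

    import requests

    # Hypothetical local vLLM deployment; URL and model name are placeholders.
    resp = requests.post(
        "http://localhost:8000/v1/chat/completions",
        json={
            "model": "Qwen/Qwen3-8B",
            "messages": [{"role": "user", "content": "Hello"}],
            # These keys are passed through to the chat template renderer.
            # Before 0.11.1 they were not validated against the template,
            # which is what CVE-2025-62426 abuses.
            "chat_template_kwargs": {"enable_thinking": False},
        },
        timeout=30,
    )
    print(resp.status_code, resp.json())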
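Why unvalidated template kwargs can stall the whole server: chat templates are Jinja2 templates, and rendering happens on the serving path before a request ever reaches the model. The self-contained sketch below (plain jinja2, not vLLM code) shows how a caller-controlled variable can make a single render arbitrarily expensive; the template and variable name are hypothetical, chosen only to demonstrate the blocking effect the advisory describes.

    import time
    import jinja2

    # A toy chat template whose rendering cost scales with a caller-supplied
    # variable. Real chat templates contain loops (over messages, tools, etc.)
    # that crafted kwargs can similarly inflate.
    template = jinja2.Template("{% for i in range(n) %}{{ i }}{% endfor %}")

    def render_prompt(messages, chat_template_kwargs):
        # Unvalidated kwargs flow straight into the Jinja2 render context.
        return template.render(messages=messages, **chat_template_kwargs)

    start = time.perf_counter()
    render_prompt([{"role": "user", "content": "hi"}], {"n": 5_000_000})
    # This render blocks for seconds on one CPU core; anything rendered on
    # the API server's event loop delays every other in-flight request.
    print(f"render blocked for {time.perf_counter() - start:.1f}s")

The fix in 0.11.1 adds proper validation of the supplied kwargs before rendering (see the PR and commit in the References below); upgrading to 0.11.1 or later is the remediation.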

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-69j4-grxj-j64p

https://github.com/vllm-project/vllm/pull/27205

https://github.com/vllm-project/vllm/commit/3ada34f9cb4d1af763fdfa3b481862a93eb6bd2b

https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/openai/serving_engine.py#L809-L814

https://github.com/vllm-project/vllm/blob/2a6dc67eb520ddb9c4138d8b35ed6fe6226997fb/vllm/entrypoints/chat_utils.py#L1602-L1610

Details

Source: MITRE, NVD

Published: 2025-11-21

Updated: 2025-11-21

Risk Information

CVSS v2

Base Score: 6.8

Vector: CVSS2#AV:N/AC:L/Au:S/C:N/I:N/A:C

Severity: Medium

CVSS v3

Base Score: 6.5

Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H (network attack vector, low complexity, low privileges required, no user interaction; high impact on availability only)

Severity: Medium

EPSS

EPSS: 0.00044 (an estimated 0.044% probability of exploitation in the wild within the next 30 days)