CVE-2026-34756

Medium

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.1.0 to before 0.19.0, a Denial of Service vulnerability exists in the vLLM OpenAI-compatible API server. Because the n parameter in the ChatCompletionRequest and CompletionRequest Pydantic models lacks upper-bound validation, an unauthenticated attacker can send a single HTTP request with an astronomically large n value. This completely blocks the Python asyncio event loop and causes immediate Out-Of-Memory crashes by allocating millions of request object copies on the heap before the request even reaches the scheduling queue. This vulnerability is fixed in 0.19.0.
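The core issue is validation that happens too late: per-sample allocations occur before any sanity check on n. A minimal sketch of the defensive pattern is below; the class name, cap value, and validation hook are illustrative assumptions for this advisory, not vLLM's actual request models or the exact fix from the referenced pull request.

```python
# Hedged sketch: reject an out-of-range `n` at request-parse time, before the
# server does any per-sample work. vLLM's real request models use Pydantic;
# this stdlib-only dataclass mimics the same bound check. MAX_N is an
# illustrative policy limit, not a value taken from vLLM.
from dataclasses import dataclass

MAX_N = 128  # illustrative upper bound on completions per request


@dataclass
class CompletionRequestSketch:
    prompt: str
    n: int = 1  # number of completions requested for this prompt

    def __post_init__(self) -> None:
        # Validate up front: without this check, a request with n = 10**9
        # would drive n copies of per-sample state into the heap (and tie up
        # the event loop) before the scheduler ever saw it.
        if not 1 <= self.n <= MAX_N:
            raise ValueError(f"n must be between 1 and {MAX_N}, got {self.n}")


# A well-formed request passes; an abusive one is rejected immediately.
ok = CompletionRequestSketch(prompt="hello", n=4)
try:
    CompletionRequestSketch(prompt="hello", n=10**9)
except ValueError as exc:
    print(exc)  # rejected before any allocation proportional to n
```

The same effect is achieved in Pydantic models with a constrained field (e.g. an integer field with ge=1 and an le= upper limit), which rejects the request during deserialization rather than after objects have been fanned out.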

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-3mwp-wvh9-7528

https://github.com/vllm-project/vllm/pull/37952

https://github.com/vllm-project/vllm/commit/b111f8a61f100fdca08706f41f29ef3548de7380

Details

Source: Mitre, NVD

Published: 2026-04-06

Updated: 2026-04-06

Risk Information

CVSS v2

Base Score: 6.8

Vector: CVSS2#AV:N/AC:L/Au:S/C:N/I:N/A:C

Severity: Medium

CVSS v3

Base Score: 6.5

Vector: CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Severity: Medium