CVE-2025-46560

Medium

Description

vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
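
The quadratic blow-up arises because each placeholder replacement rebuilds the entire token list via concatenation, an O(n) copy, so n replacements cost O(n²) in total. The sketch below is purely illustrative (the function names, parameters, and placeholder token id are hypothetical stand-ins, not the actual vLLM code from phi4mm.py); it contrasts the vulnerable pattern with a linear single-pass rewrite:

    # Illustrative sketch of the complexity pattern described above;
    # not the vLLM implementation. All names are hypothetical.

    def expand_placeholders_quadratic(input_ids, placeholder_ids, repeat_counts):
        """Replace each placeholder with repeat_counts[token] copies of it.

        Every list concatenation below copies the whole list (O(n)), so a
        prompt with many placeholders drives the total cost to O(n^2).
        """
        i = 0
        while i < len(input_ids):
            token = input_ids[i]
            if token in placeholder_ids:
                count = repeat_counts[token]
                # Rebuilds the entire token list on every replacement.
                input_ids = input_ids[:i] + [token] * count + input_ids[i + 1:]
                i += count
            else:
                i += 1
        return input_ids

    def expand_placeholders_linear(input_ids, placeholder_ids, repeat_counts):
        """Produce the same result in a single O(n) pass into one output list."""
        out = []
        for token in input_ids:
            if token in placeholder_ids:
                out.extend([token] * repeat_counts[token])
            else:
                out.append(token)
        return out

    if __name__ == "__main__":
        PLACEHOLDER = -1  # hypothetical stand-in for an <|image_|> token id
        ids = [101, PLACEHOLDER, 102]
        counts = {PLACEHOLDER: 4}
        assert (expand_placeholders_quadratic(ids, {PLACEHOLDER}, counts)
                == expand_placeholders_linear(ids, {PLACEHOLDER}, counts))

Building the output once with append/extend is the standard remedy for this class of bug; the advisory and the affected code location in the References below cover the issue as shipped and the fix in version 0.8.5.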

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-vc6m-hm49-g9qg

https://github.com/vllm-project/vllm/blob/8cac35ba435906fb7eb07e44fe1a8c26e8744f4e/vllm/model_executor/models/phi4mm.py#L1182-L1197

Details

Source: MITRE, NVD

Published: 2025-04-30

Updated: 2025-04-30

Risk Information

CVSS v2

Base Score: 6.8

Vector: CVSS2#AV:N/AC:L/Au:S/C:N/I:N/A:C

Severity: Medium

CVSS v3

Base Score: 6.5

Vector: CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Severity: Medium

EPSS

EPSS: 0.0004 (0.04% predicted probability of exploitation in the next 30 days)