CVE-2026-22778

Critical

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL throws an error, and vLLM returns that error to the client, leaking a heap address. With this leak, an attacker can reduce the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
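The leak mechanism can be illustrated with a minimal sketch (the handler below is hypothetical, not vLLM's actual code): Python's default object `repr` embeds the object's heap address, so a server that echoes a raw exception message back to the client discloses that address.

```python
import re

class FakeDecoder:
    """Stand-in for an internal decoder object; its default repr
    embeds the object's heap address, e.g.
    <__main__.FakeDecoder object at 0x7f3a2c1b9d60>."""
    pass

def handle_upload(data: bytes) -> str:
    # Hypothetical endpoint handler: decoding fails and the raw
    # exception text (which interpolates an object repr) is
    # returned to the client verbatim.
    decoder = FakeDecoder()
    try:
        raise ValueError(f"cannot decode image with {decoder!r}")
    except ValueError as exc:
        return str(exc)  # unsafe: forwards the repr, address included

leaked = handle_upload(b"\x00not-an-image")
# A client can extract the hex heap address from the error body.
match = re.search(r"0x[0-9a-f]+", leaked)
```

A single address like this anchors the heap layout, which is what collapses the ASLR brute-force space described above; the fix is to return a sanitized, generic error message instead of the raw exception text.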

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv

https://github.com/vllm-project/vllm/releases/tag/v0.14.1

https://github.com/vllm-project/vllm/pull/32319

https://github.com/vllm-project/vllm/pull/31987

Details

Source: Mitre, NVD

Published: 2026-02-02

Updated: 2026-02-02

Risk Information

CVSS v2

Base Score: 10

Vector: CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C

Severity: Critical

CVSS v3

Base Score: 9.8

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Severity: Critical