CVE-2026-22807

High

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.14.0, vLLM loads Hugging Face `auto_map` dynamic modules during model resolution without gating on `trust_remote_code`, allowing attacker-controlled Python code in a model repo/path to execute at server startup. An attacker who can influence the model repo/path (local directory or remote Hugging Face repo) can achieve arbitrary code execution on the vLLM host during model load. This happens before any request handling and does not require API access. Version 0.14.0 fixes the issue.
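The root cause is Hugging Face's dynamic-module mechanism: a repo's `config.json` can carry an `auto_map` entry that points the `Auto*` loaders at Python files shipped alongside the weights, and resolving that entry imports (and therefore executes) the repo's code. The sketch below is a minimal illustration of the gate the fix restores; the repo id is hypothetical, while `AutoConfig.from_pretrained` and the `trust_remote_code` flag are real transformers APIs.

```python
from transformers import AutoConfig

# Hypothetical attacker-controlled repo. Its config.json would carry an
# entry such as:
#   "auto_map": {"AutoConfig": "custom_config.MyConfig"}
# which points the Auto* loaders at custom_config.py shipped in the same
# repo; resolving it imports, and therefore executes, that Python file.
REPO = "some-org/model-with-custom-code"

# With trust_remote_code=False, transformers refuses to import the
# repo-supplied module and raises a ValueError instead of executing it.
try:
    cfg = AutoConfig.from_pretrained(REPO, trust_remote_code=False)
except ValueError as exc:
    print(f"Dynamic module blocked: {exc}")
```

The vulnerable versions resolved `auto_map` modules during vLLM's model resolution without applying this check, so repo code ran even when the operator never passed `trust_remote_code=True`; version 0.14.0 restores the explicit opt-in.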

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-2pc9-4j83-qjmr

https://github.com/vllm-project/vllm/releases/tag/v0.14.0

https://github.com/vllm-project/vllm/pull/32194

https://github.com/vllm-project/vllm/commit/78d13ea9de4b1ce5e4d8a5af9738fea71fb024e5

Details

Source: MITRE, NVD

Published: 2026-01-21

Updated: 2026-01-21

Risk Information

CVSS v2

Base Score: 10.0

Vector: CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C

Severity: High

CVSS v3

Base Score: 8.8

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Severity: High