CVE-2026-27893

High

Description

vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` security opt-out. This enables remote code execution via malicious model repositories even when the user has explicitly disabled remote code trust. Version 0.18.0 patches the issue.
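The following is an illustrative sketch, not the actual vLLM code: it contrasts the flawed pattern described above (a sub-component loader that hardcodes `trust_remote_code=True`) with the patched pattern (propagating the user's explicit setting). The function names and the dict-based stand-in for a loader call are hypothetical.

```python
# Hypothetical sketch of the bug class described in this advisory.
# In the real code, the sub-component would be loaded via a call that
# accepts a trust_remote_code argument; a plain dict stands in here.

def load_subcomponent_vulnerable(repo: str, user_trust_remote_code: bool) -> dict:
    """Flawed pattern: the user's opt-out is silently ignored."""
    return {"repo": repo, "trust_remote_code": True}  # hardcoded -- the bug

def load_subcomponent_patched(repo: str, user_trust_remote_code: bool) -> dict:
    """Patched pattern: the user's explicit setting is propagated."""
    return {"repo": repo, "trust_remote_code": user_trust_remote_code}

if __name__ == "__main__":
    # User explicitly disables remote code trust (--trust-remote-code=False).
    vuln = load_subcomponent_vulnerable("some/model", user_trust_remote_code=False)
    fixed = load_subcomponent_patched("some/model", user_trust_remote_code=False)
    print(vuln["trust_remote_code"])   # True  -> opt-out bypassed
    print(fixed["trust_remote_code"])  # False -> opt-out honored
```

The impact follows from the bypass: with remote code trusted unconditionally, loading a malicious model repository can execute attacker-supplied Python regardless of the user's configuration.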

References

https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59

https://github.com/vllm-project/vllm/pull/36192

https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72

Details

Source: MITRE, NVD

Published: 2026-03-27

Updated: 2026-03-27

Risk Information

CVSS v2

Base Score: 10

Vector: CVSS2#AV:N/AC:L/Au:N/C:C/I:C/A:C

Severity: High

CVSS v3

Base Score: 8.8

Vector: CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Severity: High