vLLM is an inference and serving engine for large language models (LLMs). Starting in version 0.10.1 and prior to version 0.18.0, two model implementation files hardcode `trust_remote_code=True` when loading sub-components, bypassing the user's explicit `--trust-remote-code=False` opt-out. A malicious model repository can therefore achieve remote code execution even when the user has disabled remote code trust. Version 0.18.0 patches the issue.
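The defect class is easy to illustrate. The sketch below is hypothetical, not vLLM's actual code: the function name `load_processor` is invented, and `AutoProcessor` stands in for whichever Hugging Face sub-component the affected files load. It contrasts the hardcoded pattern with the fix of propagating the caller's setting.

```python
from transformers import AutoProcessor


def load_processor(model_path: str, trust_remote_code: bool):
    """Load a sub-component while honoring the user's remote-code setting."""
    # Vulnerable pattern (illustrative): trust_remote_code=True is hardcoded,
    # so custom Python shipped inside a malicious model repository executes
    # regardless of the user's --trust-remote-code opt-out:
    #
    #   AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
    #
    # Patched pattern: thread the user's choice through instead of hardcoding.
    return AutoProcessor.from_pretrained(
        model_path,
        trust_remote_code=trust_remote_code,
    )
```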
https://github.com/vllm-project/vllm/security/advisories/GHSA-7972-pg2x-xr59
https://github.com/vllm-project/vllm/pull/36192
https://github.com/vllm-project/vllm/commit/00bd08edeee5dd4d4c13277c0114a464011acf72