CVE-2025-53630

Critical

Description

llama.cpp is a C/C++ library for inference of several LLM models. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write when parsing a GGUF file. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
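
The sketch below illustrates the general vulnerability class described here, not the actual llama.cpp code: the struct and function names (fake_gguf_array, read_array_vulnerable, read_array_checked) are hypothetical. It shows how an attacker-controlled element count and element size taken from a GGUF-style file can overflow a size computation, producing an undersized heap buffer that a later copy overruns, and how an overflow and bounds check of the kind the fix commit introduces rejects such input instead.

    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>
    #include <cstring>

    // Hypothetical stand-in for a GGUF key/value array header read from an
    // untrusted model file; these names do not appear in llama.cpp itself.
    struct fake_gguf_array {
        uint64_t n_elems;    // element count taken directly from the file
        uint32_t elem_size;  // per-element size taken directly from the file
    };

    // Vulnerable pattern: the size computation can wrap around, so malloc()
    // returns an undersized buffer and the copy loop writes past its end.
    uint8_t *read_array_vulnerable(const fake_gguf_array &hdr, const uint8_t *payload) {
        size_t nbytes = (size_t) hdr.n_elems * hdr.elem_size;  // may overflow/wrap
        uint8_t *buf  = (uint8_t *) malloc(nbytes);            // undersized allocation
        if (!buf) return nullptr;
        for (uint64_t i = 0; i < hdr.n_elems; ++i) {
            // heap out-of-bounds write once i * elem_size exceeds nbytes
            memcpy(buf + i * hdr.elem_size, payload + i * hdr.elem_size, hdr.elem_size);
        }
        return buf;
    }

    // Hardened pattern: reject the header if the multiplication would overflow
    // or if the result does not fit inside the remaining file data.
    uint8_t *read_array_checked(const fake_gguf_array &hdr, const uint8_t *payload,
                                size_t payload_size) {
        if (hdr.elem_size == 0 || hdr.n_elems > SIZE_MAX / hdr.elem_size) {
            fprintf(stderr, "gguf: array size overflows size_t, rejecting file\n");
            return nullptr;
        }
        size_t nbytes = (size_t) hdr.n_elems * hdr.elem_size;
        if (nbytes > payload_size) {
            fprintf(stderr, "gguf: array extends past end of file, rejecting file\n");
            return nullptr;
        }
        uint8_t *buf = (uint8_t *) malloc(nbytes);
        if (!buf) return nullptr;
        memcpy(buf, payload, nbytes);
        return buf;
    }

Checking the multiplication before performing it (rather than inspecting the product afterwards) is the standard way to detect the wrap, since the overflowed result itself looks like a valid small size.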

References

https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-vgg9-87g3-85w8

https://github.com/ggml-org/llama.cpp/commit/26a48ad699d50b6268900062661bd22f3e792579

Details

Source: MITRE, NVD

Published: 2025-07-10

Updated: 2025-07-10

Risk Information

CVSS v2

Base Score: 6.8

Vector: CVSS2#AV:N/AC:M/Au:N/C:P/I:P/A:P

Severity: Medium

CVSS v3

Base Score: 8.8

Vector: CVSS:3.0/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Severity: High

CVSS v4

Base Score: 9.3

Vector: CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

Severity: Critical

EPSS

EPSS Score: 0.00042 (about a 0.04% predicted probability of exploitation in the wild within the next 30 days)