CVE-2025-52566

High

Description

llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize, src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens, allowing a heap overflow in the inference engine via carefully manipulated text input during the tokenization process. This issue has been patched in version b5721.
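The advisory describes a signed vs. unsigned comparison in the size check that guards copying tokens into a caller-supplied buffer. The sketch below is a minimal, hypothetical illustration of that bug class, not the actual llama.cpp code; the function names (copy_tokens_unsafe, copy_tokens_safe) and signatures are invented for illustration. The unsafe variant compares a signed capacity against an unsigned size, so a negative or undersized capacity is implicitly converted to a large unsigned value, the bounds check passes, and the copy overflows the destination buffer; the hardened variant rejects negative capacities before comparing.

#include <cstdint>
#include <vector>

// Hypothetical illustration of the vulnerable pattern (not the actual
// llama.cpp implementation). out_capacity is signed; produced.size() is
// unsigned (size_t). In the comparison, the signed value is converted to
// an unsigned type, so a negative capacity becomes a huge value and the
// check succeeds when it should fail.
static int32_t copy_tokens_unsafe(const std::vector<int32_t> & produced,
                                  int32_t * out, int32_t out_capacity) {
    if ((size_t) out_capacity >= produced.size()) {   // BUG: signed -> unsigned conversion
        for (size_t i = 0; i < produced.size(); ++i) {
            out[i] = produced[i];                      // heap overflow when capacity was negative or too small
        }
        return (int32_t) produced.size();
    }
    return -((int32_t) produced.size());               // signal: buffer too small
}

// Hardened variant in the spirit of the fix: reject non-positive capacities
// first, then compare sizes only after the sign is known to be safe.
static int32_t copy_tokens_safe(const std::vector<int32_t> & produced,
                                int32_t * out, int32_t out_capacity) {
    if (out_capacity < 0 || produced.size() > (size_t) out_capacity) {
        return -((int32_t) produced.size());
    }
    for (size_t i = 0; i < produced.size(); ++i) {
        out[i] = produced[i];
    }
    return (int32_t) produced.size();
}

In the vulnerable pattern, attacker-controlled input that tokenizes to more tokens than the caller's buffer holds (or that drives the signed capacity negative) turns the failed bounds check into an out-of-bounds heap write, which matches the heap-overflow impact described above.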

References

https://github.com/ggml-org/llama.cpp/security/advisories/GHSA-7rxv-5jhh-j6xx

https://github.com/ggml-org/llama.cpp/commit/dd6e6d0b6a4bbe3ebfc931d1eb14db2f2b1d70af

Details

Source: Mitre, NVD

Published: 2025-06-24

Updated: 2025-06-24

Risk Information

CVSS v2

Base Score: 7.2

Vector: CVSS2#AV:L/AC:L/Au:N/C:C/I:C/A:C

Severity: High

CVSS v3

Base Score: 8.6

Vector: CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:C/C:H/I:H/A:H

Severity: High

EPSS

EPSS: 0.00013