Azure Linux 3.0 Security Update: kernel (CVE-2025-21839)

Medium | Nessus Plugin ID 295879

Synopsis

The remote Azure Linux host is missing one or more security updates.

Description

The version of kernel installed on the remote Azure Linux 3.0 host is prior to the tested version. It is, therefore, affected by a vulnerability as referenced in the CVE-2025-21839 advisory.

- In the Linux kernel, the following vulnerability has been resolved: KVM: x86: Load DR6 with guest value only before entering .vcpu_run() loop. Move the conditional loading of hardware DR6 with the guest's DR6 value out of the core .vcpu_run() loop to fix a bug where KVM can load hardware with a stale vcpu->arch.dr6.

  When the guest accesses a DR and host userspace isn't debugging the guest, KVM disables DR interception and loads the guest's values into hardware on VM-Enter and saves them on VM-Exit. This allows the guest to access DRs at will, e.g. so that a sequence of DR accesses to configure a breakpoint only generates one VM-Exit.

  For DR0-DR3, the logic/behavior is identical between VMX and SVM, and also identical between KVM_DEBUGREG_BP_ENABLED (userspace debugging the guest) and KVM_DEBUGREG_WONT_EXIT (guest using DRs), and so KVM handles loading DR0-DR3 in common code, _outside_ of the core kvm_x86_ops.vcpu_run() loop.

  But for DR6, the guest's value doesn't need to be loaded into hardware for KVM_DEBUGREG_BP_ENABLED, and SVM provides a dedicated VMCB field whereas VMX requires software to manually load the guest value, and so loading the guest's value into DR6 is handled by {svm,vmx}_vcpu_run(), i.e. is done _inside_ the core run loop. Unfortunately, saving the guest values on VM-Exit is initiated by common x86, again outside of the core run loop. If the guest modifies DR6 (in hardware, when DR interception is disabled), and then the next VM-Exit is a fastpath VM-Exit, KVM will reload hardware DR6 with vcpu->arch.dr6 and clobber the guest's actual value.

  The bug shows up primarily with nested VMX because KVM handles the VMX preemption timer in the fastpath, and the window between hardware DR6 being modified (in guest context) and DR6 being read by guest software is orders of magnitude larger in a nested setup. E.g. in non-nested, the VMX preemption timer would need to fire precisely between #DB injection and the #DB handler's read of DR6, whereas with a KVM-on-KVM setup, the window where hardware DR6 is dirty extends all the way from L1 writing DR6 to VMRESUME (in L1).

  L1's view:
  ==========
  <L1 disables DR interception>
     CPU 0/KVM-7289 [023] d.... 2925.640961: kvm_entry: vcpu 0
  A: L1 Writes DR6
     CPU 0/KVM-7289 [023] d.... 2925.640963: <hack>: Set DRs, DR6 = 0xffff0ff1
  B: CPU 0/KVM-7289 [023] d.... 2925.640967: kvm_exit: vcpu 0 reason EXTERNAL_INTERRUPT intr_info 0x800000ec
  D: L1 reads DR6, arch.dr6 = 0
     CPU 0/KVM-7289 [023] d.... 2925.640969: <hack>: Sync DRs, DR6 = 0xffff0ff0
     CPU 0/KVM-7289 [023] d.... 2925.640976: kvm_entry: vcpu 0
  L2 reads DR6, L1 disables DR interception
     CPU 0/KVM-7289 [023] d.... 2925.640980: kvm_exit: vcpu 0 reason DR_ACCESS info1 0x0000000000000216
     CPU 0/KVM-7289 [023] d.... 2925.640983: kvm_entry: vcpu 0
     CPU 0/KVM-7289 [023] d.... 2925.640983: <hack>: Set DRs, DR6 = 0xffff0ff0
  L2 detects failure
     CPU 0/KVM-7289 [023] d.... 2925.640987: kvm_exit: vcpu 0 reason HLT
  L1 reads DR6 (confirms failure)
     CPU 0/KVM-7289 [023] d.... 2925.640990: <hack>: Sync DRs, DR6 = 0xffff0ff0

  L0's view:
  ==========
  L2 reads DR6, arch.dr6 = 0
     CPU 23/KVM-5046 [001] d.... 3410.005610: kvm_exit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216
     CPU 23/KVM-5046 [001] ..... 3410.005610: kvm_nested_vmexit: vcpu 23 reason DR_ACCESS info1 0x0000000000000216
  L2 => L1 nested VM-Exit
     CPU 23/KVM-5046 [001] ..... 3410.005610: kvm_nested_vmexit_inject: reason: DR_ACCESS ext_inf1: 0x0000000000000216
     CPU 23/KVM-5046 [001] d.... 3410.005610: kvm_entry: vcpu 23
     CPU 23/KVM-5046 [001] d.... 3410.005611: kvm_exit: vcpu 23 reason VMREAD
     CPU 23/KVM-5046 [001] d.... 3410.005611: kvm_entry: vcpu 23
     CPU 23/KVM-5046 [001] d.... 3410. ---truncated--- (CVE-2025-21839)
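
To make the sequencing problem easier to follow, below is a minimal, self-contained C simulation of the ordering described above. It is not KVM code: the names (fake_vcpu, run_loop, guest_runs, hw_dr6) and the two-iteration loop are invented for illustration, standing in for vcpu->arch.dr6, the core .vcpu_run() loop, and hardware DR6. It demonstrates only the ordering: the guest's DR6 value is loaded inside the loop but saved back outside it, so a fastpath exit that stays in the loop clobbers the guest's write with the stale cached value.

    /*
     * Illustrative simulation only -- not kernel code. All names are
     * hypothetical stand-ins: hw_dr6 for the hardware DR6 register,
     * fake_vcpu.arch_dr6 for vcpu->arch.dr6, run_loop for the core
     * .vcpu_run() loop, guest_runs for guest execution between entry/exit.
     */
    #include <stdio.h>

    static unsigned long hw_dr6;            /* stand-in for hardware DR6 */

    struct fake_vcpu {
        unsigned long arch_dr6;             /* stand-in for vcpu->arch.dr6 */
    };

    /* Guest behavior per VM-Enter: it writes DR6 once on the first entry
     * (DR interception is disabled, so the write goes straight to hardware)
     * and expects that value to still be there later. */
    static void guest_runs(int entry)
    {
        if (entry == 0)
            hw_dr6 = 0xffff0ff1;
    }

    /* Buggy ordering: DR6 is loaded from the cached value INSIDE the loop,
     * but synced back to the cache only after the loop (see main()).
     * Entry 0 ends in a fastpath VM-Exit (e.g. the VMX preemption timer),
     * so execution stays inside the loop and iterates again. */
    static void run_loop(struct fake_vcpu *vcpu)
    {
        for (int entry = 0; entry < 2; entry++) {
            hw_dr6 = vcpu->arch_dr6;        /* reloads a stale value on entry 1 */
            guest_runs(entry);
        }
    }

    int main(void)
    {
        struct fake_vcpu vcpu = { .arch_dr6 = 0xffff0ff0 };

        run_loop(&vcpu);
        vcpu.arch_dr6 = hw_dr6;             /* sync happens outside the loop */

        /* Prints 0xffff0ff0: the guest's write of 0xffff0ff1 was clobbered. */
        printf("guest wrote DR6 = 0xffff0ff1, cached DR6 is now 0x%lx\n",
               vcpu.arch_dr6);
        return 0;
    }

The fix described in the advisory moves the conditional DR6 load so it happens only once, before entering the run loop; in this sketch that corresponds to hoisting the hw_dr6 = vcpu->arch_dr6 assignment out of the for loop.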

Note that Nessus has not tested for this issue but has instead relied only on the application's self-reported version number.

Solution

Update the affected packages.

See Also

https://nvd.nist.gov/vuln/detail/CVE-2025-21839

Plugin Details

Severity: Medium

ID: 295879

File Name: azure_linux_CVE-2025-21839.nasl

Version: 1.1

Type: local

Published: 1/22/2026

Updated: 1/22/2026

Supported Sensors: Nessus

Risk Information

VPR

Risk Factor: Medium

Score: 4.4

CVSS v2

Risk Factor: Medium

Base Score: 4.6

Temporal Score: 3.4

Vector: CVSS2#AV:L/AC:L/Au:S/C:N/I:N/A:C

CVSS Score Source: CVE-2025-21839

CVSS v3

Risk Factor: Medium

Base Score: 5.5

Temporal Score: 4.8

Vector: CVSS:3.0/AV:L/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H

Temporal Vector: CVSS:3.0/E:U/RL:O/RC:C
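
For reference, the listed v3 scores follow from the standard CVSS v3.0 equations for a scope-unchanged vector; the weights below are the specification's published constants for AV:L, AC:L, PR:L, UI:N, A:H and E:U, RL:O, RC:C, shown here only as a worked check of the values above (roundup rounds up to one decimal place):

\begin{align*}
\mathrm{ISC} &= 1 - (1-0)(1-0)(1-0.56) = 0.56 \\
\mathrm{Impact} &= 6.42 \times 0.56 \approx 3.60 \\
\mathrm{Exploitability} &= 8.22 \times 0.55 \times 0.77 \times 0.62 \times 0.85 \approx 1.83 \\
\mathrm{Base} &= \operatorname{roundup}\big(\min(3.60 + 1.83,\ 10)\big) = 5.5 \\
\mathrm{Temporal} &= \operatorname{roundup}(5.5 \times 0.91 \times 0.95 \times 1.0) = 4.8
\end{align*}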

Vulnerability Information

CPE: p-cpe:/a:microsoft:azure_linux:kernel-debuginfo, p-cpe:/a:microsoft:azure_linux:kernel-devel, p-cpe:/a:microsoft:azure_linux:kernel-drivers-gpu, p-cpe:/a:microsoft:azure_linux:python3-perf, p-cpe:/a:microsoft:azure_linux:kernel-docs, x-cpe:/o:microsoft:azure_linux, p-cpe:/a:microsoft:azure_linux:kernel, p-cpe:/a:microsoft:azure_linux:kernel-drivers-sound, p-cpe:/a:microsoft:azure_linux:bpftool, p-cpe:/a:microsoft:azure_linux:kernel-drivers-intree-amdgpu, p-cpe:/a:microsoft:azure_linux:kernel-drivers-accessibility, p-cpe:/a:microsoft:azure_linux:kernel-tools

Required KB Items: Host/local_checks_enabled, Host/cpu, Host/AzureLinux/release, Host/AzureLinux/rpm-list

Exploit Ease: No known exploits are available

Patch Publication Date: 7/8/2025

Vulnerability Publication Date: 3/7/2025

Reference Information

CVE: CVE-2025-21839