Commit fa808ed

oupton authored and Marc Zyngier committed
KVM: arm64: Ensure a VMID is allocated before programming VTTBR_EL2
Vladimir reports that a race condition to attach a VMID to a stage-2 MMU
sometimes results in a vCPU entering the guest with a VMID of 0:

| CPU1                    | CPU2
|                         |
| kvm_arch_vcpu_ioctl_run |
|   vcpu_load             <= load VTTBR_EL2
|                         |  kvm_vmid->id = 0
|                         |
|                         | kvm_arch_vcpu_ioctl_run
|                         |   vcpu_load            <= load VTTBR_EL2
|                         |                           with kvm_vmid->id = 0
|   kvm_arm_vmid_update   <= allocates fresh
|                         |  kvm_vmid->id and
|                         |  reload VTTBR_EL2
|                         |
|                         |  kvm_arm_vmid_update  <= observes that kvm_vmid->id
|                         |                          already allocated,
|                         |                          skips reload VTTBR_EL2

Oh yeah, it's as bad as it looks. Remember that VHE loads the stage-2 MMU
eagerly but a VMID only gets attached to the MMU later on in the KVM_RUN
loop. Even in the "best case" where VTTBR_EL2 correctly gets reprogrammed
before entering the EL1&0 regime, there is a period of time where hardware
is configured with VMID 0. That's completely insane.

So, rather than decorating the 'late' binding with another hack, just
allocate the damn thing up front. Attaching a VMID from vcpu_load() is
still rollover safe since (surprise!) it'll always get called after a vCPU
was preempted.

Excuse me while I go find a brown paper bag.

Cc: [email protected]
Fixes: 934bf87 ("KVM: arm64: Load the stage-2 MMU context in kvm_vcpu_load_vhe()")
Reported-by: Vladimir Murzin <[email protected]>
Signed-off-by: Oliver Upton <[email protected]>
Link: https://lore.kernel.org/r/[email protected]
Signed-off-by: Marc Zyngier <[email protected]>
1 parent 102c51c commit fa808ed
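
To make the ordering the patch enforces concrete, here is a minimal
user-space sketch. The function names mirror the kernel's, but the bodies
are invented stubs for illustration, not the real implementation:

    #include <stdio.h>

    static unsigned long vmid;             /* models kvm_vmid->id             */

    static void kvm_arm_vmid_update(void)  /* stub: allocate a VMID on demand */
    {
            if (!vmid)
                    vmid = 1;
    }

    static void load_vttbr_el2(void)       /* stub: program the stage-2 regs  */
    {
            printf("VTTBR_EL2 programmed with VMID %lu\n", vmid);
    }

    static void vcpu_load(void)
    {
            kvm_arm_vmid_update();         /* allocate first...               */
            load_vttbr_el2();              /* ...then program; never VMID 0   */
    }

    int main(void)
    {
            vcpu_load();                   /* prints a non-zero VMID          */
            return 0;
    }

Before the fix, the two steps inside vcpu_load() effectively ran in the
opposite order, leaving a window where hardware ran with VMID 0.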

File tree

3 files changed: +14 -21 lines changed

arch/arm64/include/asm/kvm_host.h

+1-1
@@ -1259,7 +1259,7 @@ int kvm_arm_pvtime_has_attr(struct kvm_vcpu *vcpu,
 extern unsigned int __ro_after_init kvm_arm_vmid_bits;
 int __init kvm_arm_vmid_alloc_init(void);
 void __init kvm_arm_vmid_alloc_free(void);
-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
+void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid);
 void kvm_arm_vmid_clear_active(void);

 static inline void kvm_arm_pvtime_vcpu_init(struct kvm_vcpu_arch *vcpu_arch)

arch/arm64/kvm/arm.c

+10-12
@@ -559,6 +559,16 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
 	mmu = vcpu->arch.hw_mmu;
 	last_ran = this_cpu_ptr(mmu->last_vcpu_ran);

+	/*
+	 * Ensure a VMID is allocated for the MMU before programming VTTBR_EL2,
+	 * which happens eagerly in VHE.
+	 *
+	 * Also, the VMID allocator only preserves VMIDs that are active at the
+	 * time of rollover, so KVM might need to grab a new VMID for the MMU if
+	 * this is called from kvm_sched_in().
+	 */
+	kvm_arm_vmid_update(&mmu->vmid);
+
 	/*
 	 * We guarantee that both TLBs and I-cache are private to each
 	 * vcpu. If detecting that a vcpu from the same VM has
@@ -1138,18 +1148,6 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu)
 	 */
 	preempt_disable();

-	/*
-	 * The VMID allocator only tracks active VMIDs per
-	 * physical CPU, and therefore the VMID allocated may not be
-	 * preserved on VMID roll-over if the task was preempted,
-	 * making a thread's VMID inactive. So we need to call
-	 * kvm_arm_vmid_update() in non-premptible context.
-	 */
-	if (kvm_arm_vmid_update(&vcpu->arch.hw_mmu->vmid) &&
-	    has_vhe())
-		__load_stage2(vcpu->arch.hw_mmu,
-			      vcpu->arch.hw_mmu->arch);
-
 	kvm_pmu_flush_hwstate(vcpu);

 	local_irq_disable();
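
Why is allocating from vcpu_load() still rollover safe? Because
kvm_arch_vcpu_load() runs again every time a preempted vCPU task is
scheduled back in, so a stale generation is caught before the guest can be
re-entered. A hypothetical user-space model of that re-validation (the
generation handling here is deliberately simplified, not the kernel's):

    #include <stdio.h>

    static unsigned long global_gen = 1;   /* bumped on VMID rollover           */
    static unsigned long vcpu_gen;         /* generation stamped into the VMID  */

    static void kvm_arm_vmid_update(void)
    {
            if (vcpu_gen != global_gen) {  /* stale after a rollover?           */
                    vcpu_gen = global_gen; /* grab a fresh VMID                 */
                    printf("fresh VMID allocated (gen %lu)\n", vcpu_gen);
            }
    }

    static void kvm_arch_vcpu_load(void)   /* runs on every sched-in            */
    {
            kvm_arm_vmid_update();         /* validate before VTTBR_EL2 is live */
    }

    int main(void)
    {
            kvm_arch_vcpu_load();          /* first KVM_RUN entry               */
            global_gen++;                  /* rollover while the task slept     */
            kvm_arch_vcpu_load();          /* sched-in catches the stale VMID   */
            return 0;
    }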

arch/arm64/kvm/vmid.c

+3-8
@@ -135,11 +135,10 @@ void kvm_arm_vmid_clear_active(void)
 	atomic64_set(this_cpu_ptr(&active_vmids), VMID_ACTIVE_INVALID);
 }

-bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
+void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
 {
 	unsigned long flags;
 	u64 vmid, old_active_vmid;
-	bool updated = false;

 	vmid = atomic64_read(&kvm_vmid->id);

@@ -157,21 +156,17 @@ bool kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
 	if (old_active_vmid != 0 && vmid_gen_match(vmid) &&
 	    0 != atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
 					  old_active_vmid, vmid))
-		return false;
+		return;

 	raw_spin_lock_irqsave(&cpu_vmid_lock, flags);

 	/* Check that our VMID belongs to the current generation. */
 	vmid = atomic64_read(&kvm_vmid->id);
-	if (!vmid_gen_match(vmid)) {
+	if (!vmid_gen_match(vmid))
 		vmid = new_vmid(kvm_vmid);
-		updated = true;
-	}

 	atomic64_set(this_cpu_ptr(&active_vmids), vmid);
 	raw_spin_unlock_irqrestore(&cpu_vmid_lock, flags);
-
-	return updated;
 }

 /*
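
For readers following the vmid.c change: the fast path above skips
cpu_vmid_lock entirely when this CPU already holds an active VMID of the
current generation. Below is a rough user-space model of that check, with
C11 atomics standing in for the kernel's atomic64_* helpers and a made-up
generation encoding (the real one depends on kvm_arm_vmid_bits):

    #include <stdatomic.h>
    #include <stdbool.h>

    static _Atomic unsigned long active_vmid;   /* models this CPU's active_vmids */
    static unsigned long current_gen = 1;

    static bool vmid_gen_match(unsigned long vmid)
    {
            return (vmid >> 48) == current_gen; /* assumed generation encoding    */
    }

    /* Returns true if the lock-free fast path succeeded. */
    static bool try_fast_path(unsigned long vmid)
    {
            unsigned long old = atomic_load_explicit(&active_vmid,
                                                     memory_order_relaxed);
            /*
             * Only take the fast path if this CPU's active VMID is valid
             * (non-zero) and ours is current-generation; publish it with a
             * single relaxed cmpxchg instead of taking the spinlock.
             */
            return old != 0 && vmid_gen_match(vmid) &&
                   atomic_compare_exchange_strong_explicit(&active_vmid, &old,
                                                           vmid,
                                                           memory_order_relaxed,
                                                           memory_order_relaxed);
    }

    int main(void)
    {
            atomic_store(&active_vmid, (1UL << 48) | 5);   /* gen 1, VMID 5   */
            return try_fast_path((1UL << 48) | 7) ? 0 : 1; /* fast path taken */
    }

Making kvm_arm_vmid_update() return void is what unblocks the vcpu_load()
call site: the caller no longer needs the "did it change?" result, because
VTTBR_EL2 is always programmed after the update.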
