author    Sean Christopherson <seanjc@google.com>    2024-10-10 20:23:11 +0200
committer Paolo Bonzini <pbonzini@redhat.com>        2024-10-25 18:54:42 +0200
commit    5f6a3badbb74231aaf2dc9996d689c538101ffb6 (patch)
tree      873ea0a67cf2347af53c7e13ddee1474de6024c6 /Documentation/virt
parent    KVM: x86/mmu: Mark folio dirty when creating SPTE, not when zapping/modifying (diff)
KVM: x86/mmu: Mark page/folio accessed only when zapping leaf SPTEs
Now that KVM doesn't clobber Accessed bits of shadow-present SPTEs, e.g. when prefetching, mark folios as accessed only when zapping leaf SPTEs, which is a rough heuristic for "only in response to an mmu_notifier invalidation". Page aging and LRUs are tolerant of false negatives, i.e. KVM doesn't need to be precise for correctness, and re-marking folios as accessed when zapping entire roots or when zapping collapsible SPTEs is expensive and adds very little value.

E.g. when a VM is dying, all of its memory is being freed; marking folios accessed at that time provides no known value. Similarly, because KVM marks folios as accessed when creating SPTEs, marking all folios as accessed when userspace happens to delete a memslot doesn't add value. The folio was marked accessed when the old SPTE was created, and will be marked accessed yet again if a vCPU accesses the pfn again after reloading a new root. Zapping collapsible SPTEs is a similar story; marking folios accessed just because userspace disables dirty logging is a side effect of KVM behavior, not a deliberate goal.

As an intermediate step, a.k.a. bisection point, towards *never* marking folios accessed when dropping SPTEs, mark folios accessed when the primary MMU might be invalidating mappings, as such zappings are not KVM initiated, i.e. might actually be related to page aging and LRU activity.

Note, x86 is the only KVM architecture that "double dips"; every other arch marks pfns as accessed only when mapping into the guest, not when mapping into the guest _and_ when removing from the guest.

Tested-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <20241010182427.1434605-10-seanjc@google.com>
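To make the "mark accessed only for notifier-driven zaps" policy concrete, here is a minimal, standalone C sketch of the heuristic described above. The names (enum zap_reason, struct folio_model, zap_leaf_spte) are invented for illustration and are not the functions this patch actually touches in arch/x86/kvm/mmu/.

/*
 * Hypothetical, simplified model: a leaf-SPTE zap propagates the Accessed
 * state to the backing folio only when the primary MMU triggered the zap.
 */
#include <stdbool.h>
#include <stdio.h>

enum zap_reason {
        ZAP_MMU_NOTIFIER,       /* primary MMU is invalidating the mapping */
        ZAP_ROOT,               /* entire root torn down, e.g. VM teardown */
        ZAP_COLLAPSIBLE,        /* dirty logging disabled, huge pages rebuilt */
};

struct folio_model {
        bool accessed;
};

/* Only notifier-driven zaps feed back into page aging / LRU decisions. */
static void zap_leaf_spte(struct folio_model *folio, bool spte_accessed,
                          enum zap_reason reason)
{
        if (reason == ZAP_MMU_NOTIFIER && spte_accessed)
                folio->accessed = true;
}

int main(void)
{
        struct folio_model folio = { .accessed = false };

        zap_leaf_spte(&folio, true, ZAP_ROOT);          /* ignored */
        zap_leaf_spte(&folio, true, ZAP_COLLAPSIBLE);   /* ignored */
        zap_leaf_spte(&folio, true, ZAP_MMU_NOTIFIER);  /* marks accessed */
        printf("folio accessed: %d\n", folio.accessed);
        return 0;
}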
Diffstat (limited to 'Documentation/virt')
-rw-r--r--  Documentation/virt/kvm/locking.rst | 76
1 file changed, 39 insertions(+), 37 deletions(-)
diff --git a/Documentation/virt/kvm/locking.rst b/Documentation/virt/kvm/locking.rst
index 1bedd56e2fe3..693090bfc66d 100644
--- a/Documentation/virt/kvm/locking.rst
+++ b/Documentation/virt/kvm/locking.rst
@@ -147,49 +147,51 @@ Then, we can ensure the dirty bitmaps is correctly set for a gfn.
2) Dirty bit tracking
-In the origin code, the spte can be fast updated (non-atomically) if the
+In the original code, the spte can be fast updated (non-atomically) if the
spte is read-only and the Accessed bit has already been set since the
Accessed bit and Dirty bit can not be lost.
But it is not true after fast page fault since the spte can be marked
writable between reading spte and updating spte. Like below case:
-+------------------------------------------------------------------------+
-| At the beginning::                                                      |
-|                                                                        |
-|        spte.W = 0                                                      |
-|        spte.Accessed = 1                                               |
-+------------------------------------+-----------------------------------+
-| CPU 0:                             | CPU 1:                            |
-+------------------------------------+-----------------------------------+
-| In mmu_spte_clear_track_bits()::   |                                   |
-|                                    |                                   |
-|  old_spte = *spte;                 |                                   |
-|                                    |                                   |
-|                                    |                                   |
-|  /* 'if' condition is satisfied. */|                                   |
-|  if (old_spte.Accessed == 1 &&     |                                   |
-|       old_spte.W == 0)             |                                   |
-|     spte = 0ull;                   |                                   |
-+------------------------------------+-----------------------------------+
-|                                    | on fast page fault path::         |
-|                                    |                                   |
-|                                    |    spte.W = 1                     |
-|                                    |                                   |
-|                                    | memory write on the spte::        |
-|                                    |                                   |
-|                                    |    spte.Dirty = 1                 |
-+------------------------------------+-----------------------------------+
-|  ::                                |                                   |
-|                                    |                                   |
-|   else                             |                                   |
-|     old_spte = xchg(spte, 0ull)    |                                   |
-|     if (old_spte.Accessed == 1)    |                                   |
-|        kvm_set_pfn_accessed(spte.pfn);|                                |
-|     if (old_spte.Dirty == 1)       |                                   |
-|        kvm_set_pfn_dirty(spte.pfn);|                                   |
-|     OOPS!!!                        |                                   |
-+------------------------------------+-----------------------------------+
++-------------------------------------------------------------------------+
+| At the beginning::                                                       |
+|                                                                         |
+|        spte.W = 0                                                       |
+|        spte.Accessed = 1                                                |
++-------------------------------------+-----------------------------------+
+| CPU 0:                              | CPU 1:                            |
++-------------------------------------+-----------------------------------+
+| In mmu_spte_update()::              |                                   |
+|                                     |                                   |
+|  old_spte = *spte;                  |                                   |
+|                                     |                                   |
+|                                     |                                   |
+|  /* 'if' condition is satisfied. */ |                                   |
+|  if (old_spte.Accessed == 1 &&      |                                   |
+|       old_spte.W == 0)              |                                   |
+|     spte = new_spte;                |                                   |
++-------------------------------------+-----------------------------------+
+|                                     | on fast page fault path::         |
+|                                     |                                   |
+|                                     |    spte.W = 1                     |
+|                                     |                                   |
+|                                     | memory write on the spte::        |
+|                                     |                                   |
+|                                     |    spte.Dirty = 1                 |
++-------------------------------------+-----------------------------------+
+|  ::                                 |                                   |
+|                                     |                                   |
+|   else                              |                                   |
+|     old_spte = xchg(spte, new_spte);|                                   |
+|     if (old_spte.Accessed &&        |                                   |
+|         !new_spte.Accessed)         |                                   |
+|       flush = true;                 |                                   |
+|     if (old_spte.Dirty &&           |                                   |
+|         !new_spte.Dirty)            |                                   |
+|       flush = true;                 |                                   |
+|     OOPS!!!                         |                                   |
++-------------------------------------+-----------------------------------+
The Dirty bit is lost in this case.
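For readers who want to replay the interleaving from the table above outside of KVM, the following self-contained C sketch models the lost Dirty bit in a single thread. The SPTE bit positions and the step-by-step replay are illustrative assumptions for this example, not the real arch/x86/kvm/mmu code.

#include <assert.h>
#include <stdint.h>

/* Illustrative bit layout, not the real x86 SPTE encoding. */
#define SPTE_W        (1ULL << 1)
#define SPTE_ACCESSED (1ULL << 5)
#define SPTE_DIRTY    (1ULL << 6)

int main(void)
{
        uint64_t spte = SPTE_ACCESSED;     /* read-only, Accessed already set */
        uint64_t new_spte = SPTE_ACCESSED; /* value CPU 0 wants to install */

        /* CPU 0, in mmu_spte_update(): reads the current value... */
        uint64_t old_spte = spte;

        /* CPU 1, fast page fault: makes the spte writable and dirties it. */
        spte |= SPTE_W;
        spte |= SPTE_DIRTY;

        /*
         * CPU 0: the 'if' condition (Accessed set, not writable) looks
         * satisfied based on the stale old_spte, so the update is done
         * with a plain, non-atomic store...
         */
        if ((old_spte & SPTE_ACCESSED) && !(old_spte & SPTE_W))
                spte = new_spte;

        /* ...and the Dirty bit set by CPU 1 has been silently lost. */
        assert(!(spte & SPTE_DIRTY));
        return 0;
}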