path: root/mm/hugetlb_vmemmap.c
author     Muchun Song <songmuchun@bytedance.com>     2023-11-27 09:46:42 +0100
committer  Andrew Morton <akpm@linux-foundation.org>  2023-12-11 01:51:53 +0100
commit     b123d09304d8676ba327b72a39a6d0b79b6f604c (patch)
tree       ed7059a9d6e8928a53ae4c437753c5221c9121b5 /mm/hugetlb_vmemmap.c
parent     mm/swapfile: replace kmap_atomic() with kmap_local_page() (diff)
mm: pagewalk: assert write mmap lock only for walking the user page tables
Commit 8782fb61cc848 ("mm: pagewalk: Fix race between unmap and page walker") introduced an assertion in walk_page_range_novma() to ensure that all users of the page table walker are safe. However, the race only exists when walking user page tables, and it makes no sense to hold a particular user's mmap write lock against changes to the kernel page tables. So, when walking the kernel page tables, assert only that at least the mmap read lock is held. Users matching this case can then downgrade to an mmap read lock to relieve contention on the mmap lock of init_mm; hugetlb will take advantage of this (holding only the mmap read lock) in the next patch.

Link: https://lkml.kernel.org/r/20231127084645.27017-2-songmuchun@bytedance.com
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Diffstat (limited to 'mm/hugetlb_vmemmap.c')
0 files changed, 0 insertions, 0 deletions