author:    Shakeel Butt <shakeel.butt@linux.dev>  2024-09-07 01:05:12 +0200
committer: Andrew Morton <akpm@linux-foundation.org>  2024-09-10 01:39:17 +0200
commit:    354a595a4a4d9dfc0d3e5703c6c5520e6c2f52d8
tree:      e5f4878255d3c472123f8ebe2b0101625489586e  /mm/shmem.c
parent:    maple_tree: mark three functions as __maybe_unused
mm: replace xa_get_order with xas_get_order where appropriate
Tracing of invalidation and truncation operations on large files showed that xa_get_order() is among the top functions on which the kernel spends a lot of CPU time. xa_get_order() has to traverse the tree to reach the node for a given index and then extract the order of the entry. However, many callers invoke it in the middle of an ongoing tree traversal, where a second walk is unnecessary. Just use xas_get_order() at those places.

Link: https://lkml.kernel.org/r/20240906230512.124643-1-shakeel.butt@linux.dev
Signed-off-by: Shakeel Butt <shakeel.butt@linux.dev>
Reviewed-by: Liam R. Howlett <Liam.Howlett@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Nhat Pham <nphamcs@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
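For context, the following is a minimal, self-contained sketch of the pattern the patch applies: reading an entry's order through the live xa_state cursor instead of re-walking the tree. It is not taken from the kernel tree; the helper name count_swapped_pages() is hypothetical, while XA_STATE(), xas_for_each(), xas_retry(), xa_is_value() and xas_get_order() are real XArray APIs.

/*
 * Hypothetical helper illustrating the optimization. Each value entry's
 * order is read via the xa_state that the iteration already holds.
 */
#include <linux/xarray.h>
#include <linux/pagemap.h>

static unsigned long count_swapped_pages(struct address_space *mapping,
					 pgoff_t start, pgoff_t max)
{
	XA_STATE(xas, &mapping->i_pages, start);
	unsigned long swapped = 0;
	void *entry;

	rcu_read_lock();
	xas_for_each(&xas, entry, max) {
		if (xas_retry(&xas, entry))
			continue;
		if (xa_is_value(entry))
			/*
			 * The xa_state already sits on the node holding
			 * this entry, so read the order directly instead
			 * of re-walking from the root, which is what
			 * xa_get_order() would do.
			 */
			swapped += 1 << xas_get_order(&xas);
	}
	rcu_read_unlock();
	return swapped;
}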
Diffstat (limited to 'mm/shmem.c')
-rw-r--r--  mm/shmem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/shmem.c b/mm/shmem.c
index 74f093d88c78..361affdf3990 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -890,7 +890,7 @@ unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 		if (xas_retry(&xas, page))
 			continue;
 		if (xa_is_value(page))
-			swapped += 1 << xa_get_order(xas.xa, xas.xa_index);
+			swapped += 1 << xas_get_order(&xas);
 		if (xas.xa_index == max)
 			break;
 		if (need_resched()) {
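In this hunk the xa_state already points at the entry the loop just loaded, so xas_get_order(&xas) can read the order from the current node; the replaced xa_get_order(xas.xa, xas.xa_index) call would have re-walked the tree from the root to that same node for every value entry.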