path: root/include/net/aligned_data.h
author	SeongJae Park <sj@kernel.org>	2025-09-29 17:44:09 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2025-10-19 16:37:38 +0200
commit	0ccd91cf749536d41307a07e60ec14ab0dbf21f5 (patch)
tree	8dd0fe5425bdcdf4a03e2336d3f399cdf8cfadbe /include/net/aligned_data.h
parent	ed30038550014a2d8ea6d7cc3de483aaa269d2e3 (diff)
mm/damon/vaddr: do not repeat pte_offset_map_lock() until success
commit b93af2cc8e036754c0d9970d9ddc47f43cc94b9f upstream.

DAMON's virtual address space operation set implementation (vaddr) calls
pte_offset_map_lock() inside the page table walk callback function, to
read and write the page table accessed bits. If pte_offset_map_lock()
fails, the callback retries by returning ACTION_AGAIN.

pte_offset_map_lock() can continuously fail if the target is a pmd
migration entry, though. Hence it could cause an infinite page table
walk if the migration cannot be done until the page table walk is
finished. This indeed caused a soft lockup when CPU hotplugging and
DAMON were running in parallel.

Avoid the infinite loop by simply not retrying the page table walk.
DAMON promises only best-effort accuracy, so missing access to such
pages is not a problem.

Link: https://lkml.kernel.org/r/20250930004410.55228-1-sj@kernel.org
Fixes: 7780d04046a2 ("mm/pagewalkers: ACTION_AGAIN if pte_offset_map_lock() fails")
Signed-off-by: SeongJae Park <sj@kernel.org>
Reported-by: Xinyu Zheng <zhengxinyu6@huawei.com>
Closes: https://lore.kernel.org/20250918030029.2652607-1-zhengxinyu6@huawei.com
Acked-by: Hugh Dickins <hughd@google.com>
Cc: <stable@vger.kernel.org> [6.5+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'include/net/aligned_data.h')
0 files changed, 0 insertions, 0 deletions