|  |  |  |
|---|---|---|
| author | Illia Ostapyshyn <illia@yshyn.com> | 2024-05-17 11:13:48 +0200 |
| committer | Andrew Morton <akpm@linux-foundation.org> | 2024-07-03 19:29:52 -0700 |
| commit | 0ba5e806e14e97a4dd34e21ae2994693bcdd0406 (patch) | |
| tree | 31745fb9e970f9f5f10e0dc53ee8ab0b81c9a26d /mm | |
| parent | 525c30304928ff0efee4dfab8319a9d4f254ab46 (diff) | |
mm/vmscan: update stale references to shrink_page_list
Commit 49fd9b6df54e ("mm/vmscan: fix a lot of comments") renamed
shrink_page_list() to shrink_folio_list(). Fix up the remaining
references to the old name in comments and documentation.
Link: https://lkml.kernel.org/r/20240517091348.1185566-1-illia@yshyn.com
Signed-off-by: Illia Ostapyshyn <illia@yshyn.com>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
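A rename like this can leave stragglers scattered across comments and documentation; one way to audit for them is a recursive grep over the tree. The sketch below demonstrates the idea on a throwaway directory standing in for a kernel checkout (the directory, file name, and comment text are illustrative, not taken from the patch):

```shell
# Create a scratch directory with one file that still uses the old name.
tmpdir=$(mktemp -d)
cat > "$tmpdir/example.c" <<'EOF'
/* swapper_space simplifies the path through vmscan's shrink_page_list. */
EOF

# grep -rn prints file:line:text for every stale reference;
# in a real tree you would point it at mm/, include/, and Documentation/.
grep -rn "shrink_page_list" "$tmpdir"

rm -rf "$tmpdir"
```

Running the same search after applying this patch (against `mm/` in a real tree) should come back empty for the old name.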
Diffstat (limited to 'mm')
| Mode | Path | Lines |
|---|---|---|
| -rw-r--r-- | mm/memory.c | 2 |
| -rw-r--r-- | mm/swap_state.c | 2 |
| -rw-r--r-- | mm/truncate.c | 2 |

3 files changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/mm/memory.c b/mm/memory.c
index d10e616d7389..2ba8ccdd5a85 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4541,7 +4541,7 @@ static vm_fault_t __do_fault(struct vm_fault *vmf)
  *				lock_page(B)
  *				lock_page(B)
  * pte_alloc_one
- *   shrink_page_list
+ *   shrink_folio_list
  *     wait_on_page_writeback(A)
  *				SetPageWriteback(B)
  *				unlock_page(B)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 642c30d8376c..6498491e3ad8 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -28,7 +28,7 @@
 
 /*
  * swapper_space is a fiction, retained to simplify the path through
- * vmscan's shrink_page_list.
+ * vmscan's shrink_folio_list.
  */
 static const struct address_space_operations swap_aops = {
 	.writepage	= swap_writepage,
diff --git a/mm/truncate.c b/mm/truncate.c
index e99085bf3d34..5ce62a939e55 100644
--- a/mm/truncate.c
+++ b/mm/truncate.c
@@ -554,7 +554,7 @@ EXPORT_SYMBOL(invalidate_mapping_pages);
  * This is like mapping_evict_folio(), except it ignores the folio's
  * refcount.  We do this because invalidate_inode_pages2() needs stronger
  * invalidation guarantees, and cannot afford to leave folios behind because
- * shrink_page_list() has a temp ref on them, or because they're transiently
+ * shrink_folio_list() has a temp ref on them, or because they're transiently
  * sitting in the folio_add_lru() caches.
  */
 static int invalidate_complete_folio2(struct address_space *mapping,
```
