| author | Harry Yoo <harry.yoo@oracle.com> | 2026-03-09 16:22:19 +0900 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2026-03-19 16:15:17 +0100 |
| commit | d8df600b67d70dee6b682b267f85edf236a73854 | |
| tree | 47a94f11ea19a2e86eec96f35d2448ca0b24f5e5 | |
| parent | 9320c77134ab8d7701e20608bbf08517df4fa321 | |
mm/slab: fix an incorrect check in obj_exts_alloc_size()
commit 8dafa9f5900c4855a65dbfee51e3bd00636deee1 upstream.
obj_exts_alloc_size() prevents recursive allocation of the slabobj_ext
array from the same cache, to avoid creating slabs that are never freed.
However, there is one mistake: the original size is returned early when
memory allocation profiling is disabled. The assumption was that
memcg-triggered slabobj_ext allocations are always served from the
KMALLOC_CGROUP type. But this is wrong [1]: when the caller specifies
both __GFP_RECLAIMABLE and __GFP_ACCOUNT with SLUB_TINY enabled, the
allocation is served from a normal kmalloc cache. This is because
kmalloc_type() prioritizes __GFP_RECLAIMABLE over __GFP_ACCOUNT, and
SLUB_TINY aliases KMALLOC_RECLAIM to KMALLOC_NORMAL.
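For context, the flag-priority logic in kmalloc_type() can be sketched
as below. This is a simplified paraphrase for illustration, not the
verbatim kernel source; details such as the extra caller argument used
for randomized kmalloc caches and the exact config symbols vary across
kernel versions.

```c
/*
 * Simplified sketch of kmalloc_type()'s flag priority (paraphrased,
 * not verbatim kernel code).
 */
static enum kmalloc_cache_type kmalloc_type(gfp_t flags)
{
	/* No DMA/reclaimable/accounted bit set: plain normal cache. */
	if (likely((flags & KMALLOC_NOT_NORMAL_BITS) == 0))
		return KMALLOC_NORMAL;

	/* __GFP_DMA takes the highest priority. */
	if (IS_ENABLED(CONFIG_ZONE_DMA) && (flags & __GFP_DMA))
		return KMALLOC_DMA;

	/*
	 * __GFP_RECLAIMABLE wins over __GFP_ACCOUNT. With SLUB_TINY,
	 * KMALLOC_RECLAIM is defined as KMALLOC_NORMAL, so a request
	 * carrying both flags lands in a normal kmalloc cache instead
	 * of KMALLOC_CGROUP.
	 */
	if (!IS_ENABLED(CONFIG_MEMCG) || (flags & __GFP_RECLAIMABLE))
		return KMALLOC_RECLAIM;

	return KMALLOC_CGROUP;
}
```

Under SLUB_TINY, a call such as kmalloc(size, GFP_KERNEL |
__GFP_RECLAIMABLE | __GFP_ACCOUNT) therefore resolves to a
KMALLOC_NORMAL cache, which is exactly the case the removed check
failed to account for.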
As a result, the recursion guard is bypassed and the problematic slabs
can be created. Fix this by removing the mem_alloc_profiling_enabled()
check entirely. The remaining is_kmalloc_normal() check is sufficient
to detect whether the cache is of KMALLOC_NORMAL type and to avoid
bumping the size when it is not.
Without SLUB_TINY, no functional change is intended.
With SLUB_TINY, allocations with __GFP_ACCOUNT|__GFP_RECLAIMABLE now
allocate a larger array when the array size would otherwise map back
to the same kmalloc cache.
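With the check removed, the guard reduces to the flow sketched below.
The sz computation, the KMALLOC_MAX_CACHE_SIZE early return, and the
is_kmalloc_normal() check come from the commit message and the diff;
the final lookup-and-bump step is shown schematically with assumed
helper calls (kmalloc_slab(), kmalloc_size_roundup()), so treat it as
an illustration of commit 280ea9c3154b's guard rather than the exact
remaining body.

```c
/*
 * Illustrative sketch of obj_exts_alloc_size() after this fix; the
 * lookup-and-bump step uses assumed helpers and is not the exact body.
 */
static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
					 struct slab *slab)
{
	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
	struct kmem_cache *obj_exts_cache;

	/* Larger arrays bypass the kmalloc caches entirely. */
	if (sz > KMALLOC_MAX_CACHE_SIZE)
		return sz;

	/*
	 * Only a KMALLOC_NORMAL cache can end up serving its own
	 * slabobj_ext array. This now runs unconditionally, covering
	 * the SLUB_TINY __GFP_RECLAIMABLE|__GFP_ACCOUNT case as well.
	 */
	if (!is_kmalloc_normal(s))
		return sz;

	/*
	 * If the array would be allocated from the very cache it
	 * describes, bump the size past that cache's object size so
	 * it maps to a different cache (schematic; see the diff for
	 * the real code).
	 */
	obj_exts_cache = kmalloc_slab(sz, NULL, GFP_KERNEL, _RET_IP_);
	if (obj_exts_cache == s)
		sz = kmalloc_size_roundup(sz) + 1;

	return sz;
}
```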
Reported-by: Zw Tang <shicenci@gmail.com>
Fixes: 280ea9c3154b ("mm/slab: avoid allocating slabobj_ext array from its own slab")
Closes: https://lore.kernel.org/linux-mm/CAPHJ_VKuMKSke8b11AZQw1PTSFN4n2C0gFxC6xGOG0ZLHgPmnA@mail.gmail.com [1]
Cc: stable@vger.kernel.org
Signed-off-by: Harry Yoo <harry.yoo@oracle.com>
Link: https://patch.msgid.link/20260309072219.22653-1-harry.yoo@oracle.com
Tested-by: Zw Tang <shicenci@gmail.com>
Signed-off-by: Vlastimil Babka (SUSE) <vbabka@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
| -rw-r--r-- | mm/slub.c | 7 |
1 file changed, 0 insertions, 7 deletions
```diff
diff --git a/mm/slub.c b/mm/slub.c
index b68db0f5a637..238dbc2b8403 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2113,13 +2113,6 @@ static inline size_t obj_exts_alloc_size(struct kmem_cache *s,
 	size_t sz = sizeof(struct slabobj_ext) * slab->objects;
 	struct kmem_cache *obj_exts_cache;
 
-	/*
-	 * slabobj_ext array for KMALLOC_CGROUP allocations
-	 * are served from KMALLOC_NORMAL caches.
-	 */
-	if (!mem_alloc_profiling_enabled())
-		return sz;
-
 	if (sz > KMALLOC_MAX_CACHE_SIZE)
 		return sz;
```
