path: root/include/net/aligned_data.h
author	Bobby Eshleman <bobbyeshleman@meta.com>	2026-03-02 16:32:56 -0800
committer	Sasha Levin <sashal@kernel.org>	2026-03-12 07:09:58 -0400
commit	a77a5423e9c632d8c9eaa480ebf5367ecb16f646 (patch)
tree	a25931c34271406c88558dd4c07229b15bbaeee7 /include/net/aligned_data.h
parent	4596f44d4f5b93710508da5c736c2f27835bfab1 (diff)
net: devmem: use READ_ONCE/WRITE_ONCE on binding->dev
[ Upstream commit 40bf00ec2ee271df5ba67593991760adf8b5d0ed ]

binding->dev is protected on the write side in mp_dmabuf_devmem_uninstall()
against concurrent writes, but due to the concurrent bare reads in
net_devmem_get_binding() and validate_xmit_unreadable_skb() it should be
wrapped in a READ_ONCE/WRITE_ONCE pair to make sure no compiler
optimizations play with the underlying register in unforeseen ways.

This doesn't present a critical bug because the known compiler
optimizations don't result in bad behavior: there is no tearing on u64,
and load omissions/invented loads would only break if additional
binding->dev references were inlined together (they aren't right now).
This just more strictly follows the Linux memory model (i.e.,
"Lock-Protected Writes With Lockless Reads" in
tools/memory-model/Documentation/access-marking.txt).

Fixes: bd61848900bf ("net: devmem: Implement TX path")
Signed-off-by: Bobby Eshleman <bobbyeshleman@meta.com>
Link: https://patch.msgid.link/20260302-devmem-membar-fix-v2-1-5b33c9cbc28b@meta.com
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'include/net/aligned_data.h')
0 files changed, 0 insertions, 0 deletions