<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/fs/iomap/buffered-io.c, branch linux-rolling-stable</title>
<subtitle>Hosts the 0x221E linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2026-03-19T15:15:15Z</updated>
<entry>
<title>iomap: don't mark folio uptodate if read IO has bytes pending</title>
<updated>2026-03-19T15:15:15Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2026-03-03T23:34:20Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=67983c950ffc2361b0d20ad7e82bb5576aeaf09b'/>
<id>urn:sha1:67983c950ffc2361b0d20ad7e82bb5576aeaf09b</id>
<content type='text'>
commit debc1a492b2695d05973994fb0f796dbd9ceaae6 upstream.

If a folio has ifs metadata attached, part of the folio is read in
through an async IO helper, and the rest is read in through post-EOF
zeroing or as inline data, then if the helper finishes its read first,
the post-EOF zeroing / inline read will mark the folio uptodate in
iomap_set_range_uptodate().

This is a problem because when the read completion path later calls
iomap_read_end(), it will call folio_end_read(), which sets the uptodate
bit using XOR semantics. Calling folio_end_read() on a folio that was
already marked uptodate clears the uptodate bit.

Fix this by not marking the folio as uptodate if the read IO has bytes
pending. The folio uptodate state will be set in the read completion
path through iomap_read_end() -&gt; folio_end_read().

Reported-by: Wei Gao &lt;wegao@suse.com&gt;
Suggested-by: Sasha Levin &lt;sashal@kernel.org&gt;
Tested-by: Wei Gao &lt;wegao@suse.com&gt;
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Cc: stable@vger.kernel.org # v6.19
Link: https://lore.kernel.org/linux-fsdevel/aYbmy8JdgXwsGaPP@autotest-wegao.qe.prg2.suse.org/
Fixes: b2f35ac4146d ("iomap: add caller-provided callbacks for read and readahead")
Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20260303233420.874231-2-joannelkoong@gmail.com
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>iomap: fix invalid folio access after folio_end_read()</title>
<updated>2026-02-26T23:00:40Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2026-01-26T22:41:07Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=565da373190f8ea3f4e2898e54922f53f106a88f'/>
<id>urn:sha1:565da373190f8ea3f4e2898e54922f53f106a88f</id>
<content type='text'>
[ Upstream commit aa35dd5cbc060bc3e28ad22b1d76eefa3f024030 ]

If the folio does not have an iomap_folio_state (ifs) attached and the
folio gets read in by the filesystem's IO helper, folio_end_read() may
be called by the IO helper at any time. In that case, we cannot access
the folio after dispatching it to the IO helper; e.g. subsequent
accesses like

        if (ctx-&gt;cur_folio &amp;&amp;
                    offset_in_folio(ctx-&gt;cur_folio, iter-&gt;pos) == 0) {

are incorrect.

Fix these invalid accesses by invalidating ctx-&gt;cur_folio if all bytes
of the folio have been read in by the IO helper.

This allows us to also remove the +1 bias added for the ifs case. The
bias was previously added to ensure that if all bytes are read in, the
IO helper does not end the read on the folio until iomap has decremented
the bias.

Fixes: b2f35ac4146d ("iomap: add caller-provided callbacks for read and readahead")
Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20260126224107.2182262-2-joannelkoong@gmail.com
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
Signed-off-by: Sasha Levin &lt;sashal@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: wait for batched folios to be stable in __iomap_get_folio</title>
<updated>2026-01-14T16:06:02Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2026-01-13T15:39:17Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=561940a7ee81319b9cba06d2b7ba6b45a5c41cbc'/>
<id>urn:sha1:561940a7ee81319b9cba06d2b7ba6b45a5c41cbc</id>
<content type='text'>
__iomap_get_folio needs to wait for writeback to finish if the file
requires folios to be stable for writes.  For the regular path this is
taken care of by __filemap_get_folio, but for the newly added batch
lookup it has to be done manually.

This fixes xfs/131 failures when running on PI-capable hardware.

Fixes: 395ed1ef0012 ("iomap: optional zero range dirty folio processing")
Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Link: https://patch.msgid.link/20260113153943.3323869-1-hch@lst.de
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: replace folio_batch allocation with stack allocation</title>
<updated>2025-12-15T14:17:44Z</updated>
<author>
<name>Brian Foster</name>
<email>bfoster@redhat.com</email>
</author>
<published>2025-12-08T14:05:48Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=ed61378b4dc63efe76cb8c23a36b228043332da3'/>
<id>urn:sha1:ed61378b4dc63efe76cb8c23a36b228043332da3</id>
<content type='text'>
Zhang Yi points out that the dynamic folio_batch allocation in
iomap_fill_dirty_folios() is problematic for the ext4 on iomap work
that is under development because it doesn't sufficiently handle the
allocation failure case (by allowing a retry, for example). We've
also seen lockdep (via syzbot) complain recently about the scope of
the allocation.

The dynamic allocation was initially added for simplicity and to
help indicate whether the batch was used or not by the calling fs.
To address these issues, put the batch on the stack of
iomap_zero_range() and use a flag to control whether the batch
should be used in the iomap folio lookup path. This keeps things
simple and eliminates allocation issues with lockdep and for ext4 on
iomap.

While here, also clean up the fill helper signature to be more
consistent with the underlying filemap helper. Pass through the
return value of the filemap helper (folio count) and update the
lookup offset via an out param.

Fixes: 395ed1ef0012 ("iomap: optional zero range dirty folio processing")
Signed-off-by: Brian Foster &lt;bfoster@redhat.com&gt;
Link: https://patch.msgid.link/20251208140548.373411-1-bfoster@redhat.com
Acked-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'vfs-6.19-rc1.folio' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs</title>
<updated>2025-12-01T18:26:38Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2025-12-01T18:26:38Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f2e74ecfba1b0d407f04b671a240cc65e309e529'/>
<id>urn:sha1:f2e74ecfba1b0d407f04b671a240cc65e309e529</id>
<content type='text'>
Pull folio updates from Christian Brauner:
 "Add a new folio_next_pos() helper function that returns the file
  position of the first byte after the current folio. This is a common
  operation in filesystems when needing to know the end of the current
  folio.

  The helper is lifted from btrfs which already had its own version, and
  is now used across multiple filesystems and subsystems:
   - btrfs
   - buffer
   - ext4
   - f2fs
   - gfs2
   - iomap
   - netfs
   - xfs
   - mm

  This fixes a long-standing bug in ocfs2 on 32-bit systems with files
  larger than 2GiB. Presumably this is not a common configuration, but
  the fix is backported anyway. The other filesystems did not have bugs,
  they were just mildly inefficient.

  This also introduces uoff_t as the unsigned version of loff_t. A recent
  commit inadvertently changed a comparison from being unsigned (on
  64-bit systems) to being signed (which it had always been on 32-bit
  systems), leading to sporadic fstests failures.

  Generally file sizes are restricted to being a signed integer, but in
  places where -1 is passed to indicate "up to the end of the file", it
  is convenient to have an unsigned type to ensure comparisons are
  always unsigned regardless of architecture"

* tag 'vfs-6.19-rc1.folio' of git://git.kernel.org/pub/scm/linux/kernel/git/vfs/vfs:
  fs: Add uoff_t
  mm: Use folio_next_pos()
  xfs: Use folio_next_pos()
  netfs: Use folio_next_pos()
  iomap: Use folio_next_pos()
  gfs2: Use folio_next_pos()
  f2fs: Use folio_next_pos()
  ext4: Use folio_next_pos()
  buffer: Use folio_next_pos()
  btrfs: Use folio_next_pos()
  filemap: Add folio_next_pos()
</content>
</entry>
<entry>
<title>iomap: fix iomap_read_end() for already uptodate folios</title>
<updated>2025-11-25T09:22:19Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2025-11-18T21:11:11Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d7ff85d4b899e02b4b8a8ca9f44f54a06aee1b4d'/>
<id>urn:sha1:d7ff85d4b899e02b4b8a8ca9f44f54a06aee1b4d</id>
<content type='text'>
There are some cases where when iomap_read_end() is called, the folio
may already have been marked uptodate. For example, if the iomap block
needed zeroing, then the folio may have been marked uptodate after the
zeroing.

iomap_read_end() should unlock the folio instead of calling
folio_end_read(), which is how these cases were handled prior to commit
f8eaf79406fe ("iomap: simplify -&gt;read_folio_range() error handling for
reads"). Calling folio_end_read() on an already uptodate folio is buggy
because its XOR toggles the uptodate bit back off, leaving the folio
marked not uptodate.

Fixes: f8eaf79406fe ("iomap: simplify -&gt;read_folio_range() error handling for reads")
Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20251118211111.1027272-2-joannelkoong@gmail.com
Tested-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reported-by: Matthew Wilcox (Oracle) &lt;willy@infradead.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: use find_next_bit() for uptodate bitmap scanning</title>
<updated>2025-11-25T09:22:10Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2025-11-11T19:36:58Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=b56c1c54f225ca02d88ec562f017be23429bf5b2'/>
<id>urn:sha1:b56c1c54f225ca02d88ec562f017be23429bf5b2</id>
<content type='text'>
Use find_next_bit()/find_next_zero_bit() for iomap uptodate bitmap
scanning. This uses __ffs() internally and is more efficient for
finding the next uptodate or non-uptodate bit than iterating through
the bitmap range testing every bit.

Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20251111193658.3495942-10-joannelkoong@gmail.com
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Suggested-by: Christoph Hellwig &lt;hch@infradead.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: use find_next_bit() for dirty bitmap scanning</title>
<updated>2025-11-25T09:22:10Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2025-11-11T19:36:57Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=fed9c62d28b726dad70cc03fd28ffd700b59c741'/>
<id>urn:sha1:fed9c62d28b726dad70cc03fd28ffd700b59c741</id>
<content type='text'>
Use find_next_bit()/find_next_zero_bit() for iomap dirty bitmap
scanning. This uses __ffs() internally and is more efficient for
finding the next dirty or clean bit than iterating through the bitmap
range testing every bit.

Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20251111193658.3495942-9-joannelkoong@gmail.com
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Suggested-by: Christoph Hellwig &lt;hch@infradead.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: simplify when reads can be skipped for writes</title>
<updated>2025-11-12T09:50:32Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2025-11-11T19:36:55Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=a298febc47e0ce116b9fc8151337ba8b2137e42d'/>
<id>urn:sha1:a298febc47e0ce116b9fc8151337ba8b2137e42d</id>
<content type='text'>
Currently, the logic for skipping the read range for a write is

if (!(iter-&gt;flags &amp; IOMAP_UNSHARE) &amp;&amp;
    (from &lt;= poff || from &gt;= poff + plen) &amp;&amp;
    (to &lt;= poff || to &gt;= poff + plen))

which breaks down to skipping the read if any of these are true:
a) from &lt;= poff &amp;&amp; to &lt;= poff
b) from &lt;= poff &amp;&amp; to &gt;= poff + plen
c) from &gt;= poff + plen &amp;&amp; to &lt;= poff
d) from &gt;= poff + plen &amp;&amp; to &gt;= poff + plen

This can be simplified to
if (!(iter-&gt;flags &amp; IOMAP_UNSHARE) &amp;&amp; from &lt;= poff &amp;&amp; to &gt;= poff + plen)

from the following reasoning:

a) from &lt;= poff &amp;&amp; to &lt;= poff
This reduces to 'to &lt;= poff' since it is guaranteed that 'from &lt;= to'
(since to = from + len). It is not possible for 'to &lt;= poff' to be true
here because we only reach here if plen &gt; 0 (thanks to the preceding 'if
(plen == 0)' check that would break us out of the loop). If 'to &lt;=
poff', plen would have to be 0 since poff and plen get adjusted in
lockstep for uptodate blocks. This means we can eliminate this check.

c) from &gt;= poff + plen &amp;&amp; to &lt;= poff
This is not possible since 'from &lt;= to' and 'plen &gt; 0'. We can eliminate
this check.

d) from &gt;= poff + plen &amp;&amp; to &gt;= poff + plen
This reduces to 'from &gt;= poff + plen' since 'from &lt;= to'.
It is not possible for 'from &gt;= poff + plen' to be true here. We only
reach here if plen &gt; 0 and for writes, poff and plen will always be
block-aligned, which means poff &lt;= from &lt; poff + plen. We can eliminate
this check.

The only valid check is b) from &lt;= poff &amp;&amp; to &gt;= poff + plen.

Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20251111193658.3495942-7-joannelkoong@gmail.com
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
<entry>
<title>iomap: simplify -&gt;read_folio_range() error handling for reads</title>
<updated>2025-11-12T09:50:32Z</updated>
<author>
<name>Joanne Koong</name>
<email>joannelkoong@gmail.com</email>
</author>
<published>2025-11-11T19:36:54Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f8eaf79406fe9415db0e7a5c175b50cb01265199'/>
<id>urn:sha1:f8eaf79406fe9415db0e7a5c175b50cb01265199</id>
<content type='text'>
Instead of requiring that the caller calls iomap_finish_folio_read()
even if the -&gt;read_folio_range() callback returns an error, account for
this internally in iomap instead, which makes the interface simpler and
makes it match writeback's -&gt;writeback_range() error handling
expectations.

Signed-off-by: Joanne Koong &lt;joannelkoong@gmail.com&gt;
Link: https://patch.msgid.link/20251111193658.3495942-6-joannelkoong@gmail.com
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Darrick J. Wong &lt;djwong@kernel.org&gt;
Signed-off-by: Christian Brauner &lt;brauner@kernel.org&gt;
</content>
</entry>
</feed>
