<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/drivers/block/osdblk.c, branch linux-rolling-stable</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2017-04-19T15:10:51Z</updated>
<entry>
<title>block: remove the osdblk driver</title>
<updated>2017-04-19T15:10:51Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2017-04-12T16:01:06Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=100815522c672a27c71bd2267b1cd41cd131e30c'/>
<id>urn:sha1:100815522c672a27c71bd2267b1cd41cd131e30c</id>
<content type='text'>
This was just a proof-of-concept user for the SCSI OSD library, and
never had any real users.

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Acked-by: Boaz Harrosh &lt;ooo@electrozaur.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>block: fold cmd_type into the REQ_OP_ space</title>
<updated>2017-01-31T21:00:44Z</updated>
<author>
<name>Christoph Hellwig</name>
<email>hch@lst.de</email>
</author>
<published>2017-01-31T15:57:31Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=aebf526b53aea164508730427597d45f3e06b376'/>
<id>urn:sha1:aebf526b53aea164508730427597d45f3e06b376</id>
<content type='text'>
Instead of keeping two levels of indirection for request types, fold it
all into the operations.  The little caveat here is that previously
cmd_type only applied to struct request, while the request and bio op
fields were set to plain REQ_OP_READ/WRITE even for passthrough
operations.

Instead, this patch adds new REQ_OP_* for SCSI passthrough and driver
private requests, although it has to add two for each so that we
can communicate the data in/out nature of the request.
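
A hedged sketch of the new operations (names per this patch; the numeric
values and placement are approximate), paired so the data direction stays
encoded in the op itself:

    /* in enum req_opf, include/linux/blk_types.h (approximate) */
    REQ_OP_SCSI_IN  = 32,   /* SCSI passthrough, data in (device to host) */
    REQ_OP_SCSI_OUT = 33,   /* SCSI passthrough, data out (host to device) */
    REQ_OP_DRV_IN   = 34,   /* driver private, data in */
    REQ_OP_DRV_OUT  = 35,   /* driver private, data out */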

Signed-off-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>block, drivers: add REQ_OP_FLUSH operation</title>
<updated>2016-06-07T19:41:38Z</updated>
<author>
<name>Mike Christie</name>
<email>mchristi@redhat.com</email>
</author>
<published>2016-06-05T19:32:23Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=3a5e02ced11e22ecd9da3d6710afe15bcfee1d10'/>
<id>urn:sha1:3a5e02ced11e22ecd9da3d6710afe15bcfee1d10</id>
<content type='text'>
This adds a REQ_OP_FLUSH operation that is sent to request_fn-based
drivers by the block layer's flush code, instead of
sending requests with the request-&gt;cmd_flags REQ_FLUSH bit set.
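
A hedged before/after sketch for a request_fn driver (simplified;
queue_flush() is a hypothetical driver helper):

    /* before: test the flag bit on an otherwise ordinary request */
    if (rq-&gt;cmd_flags &amp; REQ_FLUSH)
            queue_flush(rq);        /* hypothetical driver helper */

    /* after: the flush arrives as its own operation */
    if (req_op(rq) == REQ_OP_FLUSH)
            queue_flush(rq);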

Signed-off-by: Mike Christie &lt;mchristi@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Reviewed-by: Hannes Reinecke &lt;hare@suse.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>osdblk: switch to using blk_queue_write_cache()</title>
<updated>2016-04-12T22:00:39Z</updated>
<author>
<name>Jens Axboe</name>
<email>axboe@fb.com</email>
</author>
<published>2016-03-30T16:11:15Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=177febc8430e0efa808caca721a59898c5422668'/>
<id>urn:sha1:177febc8430e0efa808caca721a59898c5422668</id>
<content type='text'>
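A hedged sketch of the conversion (assuming osdblk previously advertised
a flushable write cache via blk_queue_flush(); the two booleans are
write-cache and FUA support):

    /* before */
    blk_queue_flush(q, REQ_FLUSH);

    /* after: write cache present, no FUA */
    blk_queue_write_cache(q, true, false);
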
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
</content>
</entry>
<entry>
<title>mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd</title>
<updated>2015-11-07T01:50:42Z</updated>
<author>
<name>Mel Gorman</name>
<email>mgorman@techsingularity.net</email>
</author>
<published>2015-11-07T00:28:21Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d0164adc89f6bb374d304ffcc375c6d2652fe67d'/>
<id>urn:sha1:d0164adc89f6bb374d304ffcc375c6d2652fe67d</id>
<content type='text'>
__GFP_WAIT has been used to identify atomic context in callers that hold
spinlocks or are in interrupts.  They are expected to be high priority and
have access to one of two watermarks lower than "min" which can be referred
to as the "atomic reserve".  __GFP_HIGH users get access to the first
lower watermark and can be called the "high priority reserve".

Over time, callers had a requirement to not block when fallback options
were available.  Some have abused __GFP_WAIT, leading to a situation where
an optimistic allocation with a fallback option can access atomic
reserves.

This patch uses __GFP_ATOMIC to identify callers that are truly atomic,
cannot sleep and have no alternative.  High priority users continue to use
__GFP_HIGH.  __GFP_DIRECT_RECLAIM identifies callers that can sleep and
are willing to enter direct reclaim.  __GFP_KSWAPD_RECLAIM identifies
callers that want to wake kswapd for background reclaim.  __GFP_WAIT is
redefined as a caller that is willing to enter direct reclaim and wake
kswapd for background reclaim.

This patch then converts a number of sites:

o __GFP_ATOMIC is used by callers that are high priority and have memory
  pools for those requests. GFP_ATOMIC uses this flag.

o Callers that have a limited mempool to guarantee forward progress clear
  __GFP_DIRECT_RECLAIM but keep __GFP_KSWAPD_RECLAIM. bio allocations fall
  into this category where kswapd will still be woken but atomic reserves
  are not used as there is a one-entry mempool to guarantee progress.

o Callers that are checking if they are non-blocking should use the
  helper gfpflags_allow_blocking() where possible (see the sketch below).
  This is because checking for __GFP_WAIT as was done historically can
  now trigger false positives. Some exceptions like dm-crypt.c exist
  where the code intent is clearer if __GFP_DIRECT_RECLAIM is used
  instead of the helper due to flag manipulations.

o Callers that built their own GFP flags instead of starting with GFP_KERNEL
  and friends now also need to specify __GFP_KSWAPD_RECLAIM.

The first key hazard to watch out for is callers that removed __GFP_WAIT
and were depending on access to atomic reserves for inconspicuous reasons.
In some cases it may be appropriate for them to use __GFP_HIGH.

The second key hazard is callers that assembled their own combination of
GFP flags instead of starting with something like GFP_KERNEL.  They may
now wish to specify __GFP_KSWAPD_RECLAIM.  In most cases missing it is
almost certainly harmless, as other activity will wake kswapd.
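
A hedged sketch of the new helper and the caller-side pattern (the helper
body is approximate; do_nonblocking_fallback() is hypothetical):

    static inline bool gfpflags_allow_blocking(const gfp_t gfp_flags)
    {
            return !!(gfp_flags &amp; __GFP_DIRECT_RECLAIM);
    }

    /* caller side: branch on the helper, not on __GFP_WAIT */
    if (!gfpflags_allow_blocking(gfp_mask))
            return do_nonblocking_fallback();   /* hypothetical */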

Signed-off-by: Mel Gorman &lt;mgorman@techsingularity.net&gt;
Acked-by: Vlastimil Babka &lt;vbabka@suse.cz&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.com&gt;
Acked-by: Johannes Weiner &lt;hannes@cmpxchg.org&gt;
Cc: Christoph Lameter &lt;cl@linux.com&gt;
Cc: David Rientjes &lt;rientjes@google.com&gt;
Cc: Vitaly Wool &lt;vitalywool@gmail.com&gt;
Cc: Rik van Riel &lt;riel@redhat.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>block: support different tag allocation policy</title>
<updated>2015-01-23T21:15:46Z</updated>
<author>
<name>Shaohua Li</name>
<email>shli@fb.com</email>
</author>
<published>2015-01-16T01:32:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=ee1b6f7aff94019c09e73837054979063f722046'/>
<id>urn:sha1:ee1b6f7aff94019c09e73837054979063f722046</id>
<content type='text'>
The libata tag allocation uses a round-robin policy.  The next patch will
make libata use the block layer's generic tag allocation, so let's add a
policy argument to tag allocation.

Currently there are two policies: FIFO (the default) and round-robin.
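
A hedged sketch of the resulting interface (policy names per this patch;
the exact signature details may vary):

    #define BLK_TAG_ALLOC_FIFO 0   /* always scan from tag 0 */
    #define BLK_TAG_ALLOC_RR   1   /* scan from the last allocated tag */

    /* a driver picks a policy when setting up its tag map */
    struct blk_queue_tag *tags = blk_init_tags(depth, BLK_TAG_ALLOC_RR);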

Cc: Jens Axboe &lt;axboe@fb.com&gt;
Cc: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Christoph Hellwig &lt;hch@infradead.org&gt;
Signed-off-by: Shaohua Li &lt;shli@fb.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@fb.com&gt;
</content>
</entry>
<entry>
<title>block: replace strict_strtoul() with kstrtoul()</title>
<updated>2013-09-11T22:56:56Z</updated>
<author>
<name>Jingoo Han</name>
<email>jg1.han@samsung.com</email>
</author>
<published>2013-09-11T21:20:07Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bb8e0e84b30afc9827931c9773d75d5c99fcddff'/>
<id>urn:sha1:bb8e0e84b30afc9827931c9773d75d5c99fcddff</id>
<content type='text'>
strict_strtoul() is obsolete and its use is discouraged.  Thus,
kstrtoul() should be used instead.
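
A hedged sketch of the conversion pattern (variable names illustrative):

    unsigned long val;
    int ret;

    /* was: strict_strtoul(buf, 10, &amp;val) */
    ret = kstrtoul(buf, 10, &amp;val);
    if (ret)
            return ret;   /* 0 on success, -errno on failure */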

Signed-off-by: Jingoo Han &lt;jg1.han@samsung.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>block: Add bio_clone_bioset(), bio_clone_kmalloc()</title>
<updated>2012-09-09T08:35:39Z</updated>
<author>
<name>Kent Overstreet</name>
<email>koverstreet@google.com</email>
</author>
<published>2012-09-06T22:35:02Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bf800ef1816b4283a885e55ad38068aec9711e4d'/>
<id>urn:sha1:bf800ef1816b4283a885e55ad38068aec9711e4d</id>
<content type='text'>
Previously, there was bio_clone(), but it only allocated from the fs bio
set; as a result, various users were open-coding it and using
__bio_clone().

This changes bio_clone() to become bio_clone_bioset(), and then we add
bio_clone() and bio_clone_kmalloc() as wrappers around it, making use of
the functionality the last patch added.

This will also help in a later patch changing how bio cloning works.
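
A hedged sketch of the wrappers this adds (shape per the description
above; a NULL bioset selects kmalloc-backed allocation):

    static inline struct bio *bio_clone(struct bio *bio, gfp_t gfp_mask)
    {
            return bio_clone_bioset(bio, gfp_mask, fs_bio_set);
    }

    static inline struct bio *bio_clone_kmalloc(struct bio *bio, gfp_t gfp_mask)
    {
            return bio_clone_bioset(bio, gfp_mask, NULL);
    }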

Signed-off-by: Kent Overstreet &lt;koverstreet@google.com&gt;
CC: Jens Axboe &lt;axboe@kernel.dk&gt;
CC: NeilBrown &lt;neilb@suse.de&gt;
CC: Alasdair Kergon &lt;agk@redhat.com&gt;
CC: Boaz Harrosh &lt;bharrosh@panasas.com&gt;
CC: Jeff Garzik &lt;jeff@garzik.org&gt;
Acked-by: Jeff Garzik &lt;jgarzik@redhat.com&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
</content>
</entry>
<entry>
<title>block: remove spurious uses of REQ_HARDBARRIER</title>
<updated>2010-09-10T10:35:36Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2010-09-03T09:56:16Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=9cbbdca44ae1a6f512ea1e2be11ced8bbb9d430a'/>
<id>urn:sha1:9cbbdca44ae1a6f512ea1e2be11ced8bbb9d430a</id>
<content type='text'>
REQ_HARDBARRIER is deprecated.  Remove spurious uses in the following
users.  Please note that other than osdblk, all other uses were
already spurious before deprecation.

* osdblk: osdblk_rq_fn() won't receive any request with
  REQ_HARDBARRIER set.  Remove the test for it.

* pktcdvd: use of REQ_HARDBARRIER in pkt_generic_packet() doesn't mean
  anything.  Removed.

* aic7xxx_old: Setting MSG_ORDERED_Q_TAG on REQ_HARDBARRIER is
  spurious.  Removed.

* sas_scsi_host: Setting TASK_ATTR_ORDERED on REQ_HARDBARRIER is
  spurious.  Removed.

* scsi_tcq: The ordered tag path wasn't being used anyway.  Removed.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: Boaz Harrosh &lt;bharrosh@panasas.com&gt;
Cc: James Bottomley &lt;James.Bottomley@suse.de&gt;
Cc: Peter Osterlund &lt;petero2@telia.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
<entry>
<title>block: deprecate barrier and replace blk_queue_ordered() with blk_queue_flush()</title>
<updated>2010-09-10T10:35:36Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2010-09-03T09:56:16Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=4913efe456c987057e5d36a3f0a55422a9072cae'/>
<id>urn:sha1:4913efe456c987057e5d36a3f0a55422a9072cae</id>
<content type='text'>
Barrier is deemed too heavy and will soon be replaced by FLUSH/FUA
requests.  Deprecate barrier.  All REQ_HARDBARRIERs are failed with
-EOPNOTSUPP and blk_queue_ordered() is replaced with simpler
blk_queue_flush().

blk_queue_flush() takes combinations of REQ_FLUSH and REQ_FUA.  If a
device has write cache and can flush it, it should set REQ_FLUSH.  If
the device can handle FUA writes, it should also set REQ_FUA.

All blk_queue_ordered() users are converted.

* ORDERED_DRAIN is mapped to 0, which is the default value.
* ORDERED_DRAIN_FLUSH is mapped to REQ_FLUSH.
* ORDERED_DRAIN_FLUSH_FUA is mapped to REQ_FLUSH | REQ_FUA.
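
A hedged sketch of a typical conversion (illustrative; a device with a
flushable write cache and no FUA support):

    /* before: ordered-mode constant */
    blk_queue_ordered(q, QUEUE_ORDERED_DRAIN_FLUSH);

    /* after: declare the flushable write cache directly */
    blk_queue_flush(q, REQ_FLUSH);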

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: Boaz Harrosh &lt;bharrosh@panasas.com&gt;
Cc: Christoph Hellwig &lt;hch@infradead.org&gt;
Cc: Nick Piggin &lt;npiggin@kernel.dk&gt;
Cc: Michael S. Tsirkin &lt;mst@redhat.com&gt;
Cc: Jeremy Fitzhardinge &lt;jeremy@xensource.com&gt;
Cc: Chris Wright &lt;chrisw@sous-sol.org&gt;
Cc: FUJITA Tomonori &lt;fujita.tomonori@lab.ntt.co.jp&gt;
Cc: Geert Uytterhoeven &lt;Geert.Uytterhoeven@sonycom.com&gt;
Cc: David S. Miller &lt;davem@davemloft.net&gt;
Cc: Alasdair G Kergon &lt;agk@redhat.com&gt;
Cc: Pierre Ossman &lt;drzeus@drzeus.cx&gt;
Cc: Stefan Weinhuber &lt;wein@de.ibm.com&gt;
Signed-off-by: Jens Axboe &lt;jaxboe@fusionio.com&gt;
</content>
</entry>
</feed>
