<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/arch/powerpc/lib/qspinlock.c, branch linux-6.2.y</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-6.2.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-6.2.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2022-12-12T01:34:52Z</updated>
<entry>
<title>powerpc/qspinlock: Fix 32-bit build</title>
<updated>2022-12-12T01:34:52Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-12-08T12:32:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=13959373e9c9021cc80730c7bd1242e07b10b328'/>
<id>urn:sha1:13959373e9c9021cc80730c7bd1242e07b10b328</id>
<content type='text'>
Some 32-bit configurations don't pull in the spin_begin/end/relax
definitions. The fix is to restore a lost include.

Reported-by: kernel test robot &lt;lkp@intel.com&gt;
Fixes: 84990b169557 ("powerpc/qspinlock: add mcs queueing for contended waiters")
Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/oe-kbuild-all/202212050224.i7uh9fOh-lkp@intel.com
Link: https://lore.kernel.org/r/20221208123225.1566113-1-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: add compile-time tuning adjustments</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:32Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=0b2199841a7952d01a717b465df028b40b2cf3e9'/>
<id>urn:sha1:0b2199841a7952d01a717b465df028b40b2cf3e9</id>
<content type='text'>
This adds compile-time options that allow the EH lock hint bit to be
enabled or disabled, along with some new options that may or may not
help matters, to aid experimentation and tuning.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-18-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: provide accounting and options for sleepy locks</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:31Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=12b459a5ebf3308e718bc1dd48acb7c4cf7f1a75'/>
<id>urn:sha1:12b459a5ebf3308e718bc1dd48acb7c4cf7f1a75</id>
<content type='text'>
Finding the owner or a queued waiter on a lock with a preempted vcpu is
indicative of an oversubscribed guest causing the lock to get into
trouble. Provide some options to detect this situation and have new CPUs
avoid queueing for a longer time (more steal iterations) to minimise the
problems caused by vcpu preemption on the queue.
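
As a rough sketch (a toy model outside the kernel; the names and
constants below are illustrative, not the actual kernel symbols), the
back-off policy looks like:

```python
# Toy model of the "sleepy lock" back-off: when the lock owner's vCPU
# looks preempted, a would-be waiter gets extra steal iterations before
# committing to the MCS queue. All names and constants are illustrative.

BASE_STEAL_SPINS = 4
SLEEPY_FACTOR = 8   # extra budget applied when vCPU preemption is seen

def steal_budget(owner_preempted):
    """How many steal attempts to make before queueing."""
    if owner_preempted:
        return BASE_STEAL_SPINS * SLEEPY_FACTOR
    return BASE_STEAL_SPINS

def acquire(lock_free_attempts, owner_preempted):
    """Steal if the lock is seen free within budget, else queue."""
    for attempt in range(steal_budget(owner_preempted)):
        if attempt in lock_free_attempts:   # lock observed free: steal it
            return ("stolen", attempt)
    return ("queued", None)
```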

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-17-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: allow indefinite spinning on a preempted owner</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:30Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=39dfc73596b48bb50cf7e4f3f54e38427dda5b4e'/>
<id>urn:sha1:39dfc73596b48bb50cf7e4f3f54e38427dda5b4e</id>
<content type='text'>
Provide an option that holds off queueing indefinitely while the lock
owner is preempted. This could reduce queueing latencies for very
overcommitted vcpu situations.

This is disabled by default.
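
The queueing decision can be sketched as a toy function (illustrative
names, not kernel symbols): with the hold-off option enabled, a waiter
refuses to queue for as long as the owner's vCPU is preempted, no matter
how long it has already spun.

```python
# Toy model of the indefinite-spin option: queueing is suppressed while
# the lock owner's vCPU is preempted. Names are illustrative only.

def should_queue(spins_so_far, spin_limit, owner_preempted, hold_off_enabled):
    if hold_off_enabled and owner_preempted:
        return False    # keep spinning indefinitely on a preempted owner
    return spins_so_far >= spin_limit
```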

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-16-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: reduce remote node steal spins</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:29Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=cc79701114154efe79663ba47d9e51aad2ed3c78'/>
<id>urn:sha1:cc79701114154efe79663ba47d9e51aad2ed3c78</id>
<content type='text'>
Allow for a reduction in the number of times a CPU from a different
node than the owner can attempt to steal the lock before queueing.
This could bias the transfer behaviour of the lock across the
machine and reduce NUMA crossings.
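
A toy model of the budget calculation (the constants and names are
illustrative, not the kernel's): a CPU on a different node from the
owner gets a smaller steal budget before it must queue.

```python
# Toy NUMA-aware steal budget: remote-node CPUs get fewer steal attempts,
# biasing lock hand-offs toward the owner's node. Illustrative constants.

LOCAL_STEAL_SPINS = 16
REMOTE_STEAL_DIVISOR = 4    # remote-node CPUs get a quarter of the budget

def steal_spins(cpu_node, owner_node):
    if cpu_node == owner_node:
        return LOCAL_STEAL_SPINS
    return LOCAL_STEAL_SPINS // REMOTE_STEAL_DIVISOR
```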

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-15-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: use spin_begin/end API</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:28Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=71c235027ce7940434acd3f553602ad8b5d36469'/>
<id>urn:sha1:71c235027ce7940434acd3f553602ad8b5d36469</id>
<content type='text'>
Use the spin_begin/spin_cpu_relax/spin_end APIs in qspinlock. This helps
prevent threads from issuing many expensive priority nops that may have
little effect, because the low- and medium-priority instructions would
otherwise execute back to back on every iteration.
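
The shape of the pattern can be sketched with pure-Python stand-ins
(the real APIs adjust hardware thread priority; the call log here is
only for illustration): priority is dropped once on loop entry and
restored once on exit, with only a cheap hint per iteration.

```python
# Pure-Python stand-ins for the spin_begin/spin_cpu_relax/spin_end
# pattern: one priority drop for the whole busy-wait loop and one
# restore at the end, instead of a low/medium pair on every iteration.

calls = []

def spin_begin():
    calls.append("low-priority")     # lower priority once, on loop entry

def spin_cpu_relax():
    calls.append("relax")            # cheap per-iteration hint

def spin_end():
    calls.append("medium-priority")  # restore priority once, on loop exit

def spin_until(predicate):
    spin_begin()
    while not predicate():
        spin_cpu_relax()
    spin_end()

progress = {"polls": 0}

def lock_became_free():
    progress["polls"] += 1
    return progress["polls"] > 3     # pretend the lock frees on poll 4

spin_until(lock_became_free)
```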

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-14-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: allow lock stealing in trylock and lock fastpath</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:27Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f61ab43cc1a6146d6eef7e0713a452c3677ad13e'/>
<id>urn:sha1:f61ab43cc1a6146d6eef7e0713a452c3677ad13e</id>
<content type='text'>
This change allows trylock to steal the lock. It also allows the initial
lock attempt to steal the lock rather than bailing out and going to the
slow path.

This gives trylock more strength: without this a continually-contended
lock will never permit a trylock to succeed. With this change, the
trylock has a small but non-zero chance.

It also gives the lock fastpath most of the benefit of passing the
reservation back through to the steal loop in the slow path without the
complexity.
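
The effect on trylock can be sketched with a toy lock word (a locked bit
plus a queue tail; everything here is illustrative, not the kernel's
atomics): stealing means taking the lock while the lock bit is clear
even though waiters are queued.

```python
# Toy lock word: a locked bit plus a queue tail. "Stealing" is acquiring
# when the lock bit is clear despite a nonzero tail. Illustrative only.

class ToyQspinlock:
    def __init__(self):
        self.locked = False
        self.tail = 0   # nonzero means at least one CPU is queued

    def trylock(self, allow_steal):
        if self.locked:
            return False
        if self.tail != 0 and not allow_steal:
            return False    # waiters queued: a non-stealing trylock bails
        self.locked = True  # lock bit clear: take it (a steal if queued)
        return True
```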

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-13-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: add ability to prod new queue head CPU</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:26Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=be742c573fdafcfa1752642ca1c7aaf08c258128'/>
<id>urn:sha1:be742c573fdafcfa1752642ca1c7aaf08c258128</id>
<content type='text'>
After the head of the queue acquires the lock, it releases the
next waiter in the queue to become the new head. Add an option
to prod the new head if its vCPU was preempted. This may only
have an effect if queue waiters are yielding.

Disable this option by default for now, i.e., no logical change.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-12-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: allow propagation of yield CPU down the queue</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=28db61e207ea3890d286cff3141c1ce67346074d'/>
<id>urn:sha1:28db61e207ea3890d286cff3141c1ce67346074d</id>
<content type='text'>
Having all CPUs poll the lock word for the owner CPU that should be
yielded to defeats most of the purpose of using MCS queueing for
scalability. Yet it may be desirable for queued waiters to yield to a
preempted owner.

With this change, queue waiters never sample the owner CPU directly from
the lock word. The queue head (which is spinning on the lock) propagates
the owner CPU back to the next waiter if it finds the owner has been
preempted. That waiter then propagates the owner CPU back to the next
waiter, and so on.

s390 addresses this problem differently, by having queued waiters sample
the lock word to find the owner at a low frequency. That approach has
the advantage of being simpler; the advantage of propagation is that the
lock word never has to be accessed by queued waiters, and the cache-line
transfers needed to transmit the owner data occur only when lock-holder
vCPU preemption occurs.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-11-npiggin@gmail.com

</content>
</entry>
<entry>
<title>powerpc/qspinlock: allow stealing when head of queue yields</title>
<updated>2022-12-02T06:48:50Z</updated>
<author>
<name>Nicholas Piggin</name>
<email>npiggin@gmail.com</email>
</author>
<published>2022-11-26T09:59:24Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=b4c3cdc1a698a2f6168768d0bed4bf062723722e'/>
<id>urn:sha1:b4c3cdc1a698a2f6168768d0bed4bf062723722e</id>
<content type='text'>
If the head of queue is preventing stealing but it finds the owner vCPU
is preempted, it will yield its cycles to the owner which could cause it
to become preempted. Add an option to re-allow stealers before yielding,
and disallow them again after returning from the yield.

Disable this option by default for now, i.e., no logical change.

Signed-off-by: Nicholas Piggin &lt;npiggin@gmail.com&gt;
Signed-off-by: Michael Ellerman &lt;mpe@ellerman.id.au&gt;
Link: https://lore.kernel.org/r/20221126095932.1234527-10-npiggin@gmail.com

</content>
</entry>
</feed>
