<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/crypto/async_tx, branch linux-2.6.30.y</title>
<subtitle>Hosts the 0x221E linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.30.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.30.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2009-03-25T16:13:25Z</updated>
<entry>
<title>dmaengine: allow dma support for async_tx to be toggled</title>
<updated>2009-03-25T16:13:25Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-03-25T16:13:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=729b5d1b8ec72c28e99840b3f300ba67726e3ab9'/>
<id>urn:sha1:729b5d1b8ec72c28e99840b3f300ba67726e3ab9</id>
<content type='text'>
Provide a config option for blocking the allocation of dma channels to
the async_tx api.
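Upstream this took the shape of a Kconfig entry; a sketch of the fragment (option name and prompt reproduced from memory, so treat the exact wording as an assumption):

```
config ASYNC_TX_DMA
	bool "Async_tx: Offload support for the async_tx api"
	depends on DMA_ENGINE
```

With the option disabled, async_tx never grabs a dma channel and always
takes its synchronous software paths.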

Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>async_tx: provide __async_inline for HAS_DMA=n archs</title>
<updated>2009-03-25T16:13:25Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-03-25T16:13:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=06164f3194e01ea4c76941ac60f541d656c8975f'/>
<id>urn:sha1:06164f3194e01ea4c76941ac60f541d656c8975f</id>
<content type='text'>
To allow an async_tx routine to be compiled away on a HAS_DMA=n arch it
needs to be declared __always_inline; otherwise the compiler may emit an
out-of-line copy and cause a link error.

Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>dmaengine: replace dma_async_client_register with dmaengine_get</title>
<updated>2009-01-06T18:38:17Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-01-06T18:38:17Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=209b84a88fe81341b4d8d465acc4a67cb7c3feb3'/>
<id>urn:sha1:209b84a88fe81341b4d8d465acc4a67cb7c3feb3</id>
<content type='text'>
Now that clients no longer need to be notified of channel arrival,
dma_async_client_register can simply increment the dmaengine_ref_count.

Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>dmaengine: provide a common 'issue_pending_all' implementation</title>
<updated>2009-01-06T18:38:14Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-01-06T18:38:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=2ba05622b8b143b0c95968ba59bddfbd6d2f2559'/>
<id>urn:sha1:2ba05622b8b143b0c95968ba59bddfbd6d2f2559</id>
<content type='text'>
async_tx and net_dma each have open-coded versions of issue_pending_all,
so provide a common routine in dmaengine.

The implementation needs to walk the global device list, so protect that
list with rcu to allow dma_issue_pending_all to run lockless.  Clients
protect themselves from channel removal events by holding a dmaengine
reference.

Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>dmaengine: centralize channel allocation, introduce dma_find_channel</title>
<updated>2009-01-06T18:38:14Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-01-06T18:38:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bec085134e446577a983f17f57d642a88d1af53b'/>
<id>urn:sha1:bec085134e446577a983f17f57d642a88d1af53b</id>
<content type='text'>
Allowing multiple clients to each define their own channel allocation
scheme quickly leads to a pathological situation.  For memory-to-memory
offload all clients can share a central allocator.

This simply moves the existing async_tx allocator to dmaengine with
minimal fixups:
* async_tx.c:get_chan_ref_by_cap --&gt; dmaengine.c:nth_chan
* async_tx.c:async_tx_rebalance --&gt; dmaengine.c:dma_channel_rebalance
* split out common code from async_tx.c:__async_tx_find_channel --&gt;
  dma_find_channel

Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>dmaengine: up-level reference counting to the module level</title>
<updated>2009-01-06T18:38:14Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-01-06T18:38:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=6f49a57aa5a0c6d4e4e27c85f7af6c83325a12d1'/>
<id>urn:sha1:6f49a57aa5a0c6d4e4e27c85f7af6c83325a12d1</id>
<content type='text'>
Simply: if a client wants any dmaengine channel, prevent all dmaengine
modules from being removed.  Once the clients are done, re-enable module
removal.

Why? Beyond reducing complication:
1/ Tracking reference counts per-transaction in an efficient manner, as
   is currently done, requires a complicated scheme to avoid cache-line
   bouncing effects.
2/ Per-transaction ref-counting gives the false impression that a
   dma-driver can be gracefully removed ahead of its user (net, md, or
   dma-slave)
3/ None of the in-tree dma-drivers talk to hot pluggable hardware, but
   if such an engine were built one day we still would not need to notify
   clients of remove events.  The driver can simply return NULL to a
   -&gt;prep() request, something that is much easier for a client to handle.

Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Acked-by: Maciej Sosnowski &lt;maciej.sosnowski@intel.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>dmaengine: remove dependency on async_tx</title>
<updated>2009-01-06T01:10:19Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2009-01-06T00:14:31Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=07f2211e4fbce6990722d78c4f04225da9c0e9cf'/>
<id>urn:sha1:07f2211e4fbce6990722d78c4f04225da9c0e9cf</id>
<content type='text'>
async_tx.ko is a consumer of dma channels.  A circular dependency arises
if modules in drivers/dma rely on common code in async_tx.ko.  It
prevents either module from being unloaded.

Move dma_wait_for_async_tx and async_tx_run_dependencies to dmaengine.o
where they should have been from the beginning.

Reviewed-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>async_xor: dma_map destination DMA_BIDIRECTIONAL</title>
<updated>2008-12-08T20:46:00Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2008-12-08T20:46:00Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=a06d568f7c5e40e34ea64881842deb8f4382babf'/>
<id>urn:sha1:a06d568f7c5e40e34ea64881842deb8f4382babf</id>
<content type='text'>
Mapping the destination multiple times is a misuse of the dma-api.
Since the destination may be reused as a source, ensure that it is only
mapped once and that it is mapped bidirectionally.  This appears to add
ugliness on the unmap side in that it always reads back the destination
address from the descriptor, but gcc can determine that dma_unmap is a
nop and not emit the code that calculates its arguments.

Cc: &lt;stable@kernel.org&gt;
Cc: Saeed Bishara &lt;saeed@marvell.com&gt;
Acked-by: Yuri Tikhonov &lt;yur@emcraft.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>async_tx: make async_tx_run_dependencies() easier to read</title>
<updated>2008-09-14T02:57:04Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2008-09-14T02:57:04Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=fdb0ac80618729e6b12121c66449b8532990eaf3'/>
<id>urn:sha1:fdb0ac80618729e6b12121c66449b8532990eaf3</id>
<content type='text'>
* Rename 'next' to 'dep'
* Move the channel switch check inside the loop to simplify
  termination

Acked-by: Ilya Yanok &lt;yanok@emcraft.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
<entry>
<title>async_tx: fix the bug in async_tx_run_dependencies</title>
<updated>2008-09-05T15:15:47Z</updated>
<author>
<name>Yuri Tikhonov</name>
<email>yur@emcraft.com</email>
</author>
<published>2008-09-05T15:15:47Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=de24125dd0a452bfd4502fc448e3534c5d2e87aa'/>
<id>urn:sha1:de24125dd0a452bfd4502fc448e3534c5d2e87aa</id>
<content type='text'>
We should clear the next pointer of the TX only if we are sure that the
next TX (say NXT) will be submitted to the channel too.  Otherwise we
break the chain of descriptors, because we lose the information about
the next descriptor to run: the next time async_tx_run_dependencies()
is invoked with TX, TX-&gt;next will be NULL and NXT will never be
submitted.

Cc: &lt;stable@kernel.org&gt;		[2.6.26]
Signed-off-by: Yuri Tikhonov &lt;yur@emcraft.com&gt;
Signed-off-by: Ilya Yanok &lt;yanok@emcraft.com&gt;
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
</content>
</entry>
</feed>
