<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/drivers/net/ethernet/sfc/ef100_tx.c, branch linux-5.14.y</title>
<subtitle>Hosts the 0x221E linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-5.14.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-5.14.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2020-11-13T23:33:30Z</updated>
<entry>
<title>sfc: support GRE TSO on EF100</title>
<updated>2020-11-13T23:33:30Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-11-12T15:20:05Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=c5122cf584128f9d42655189e69fda7151c1f275'/>
<id>urn:sha1:c5122cf584128f9d42655189e69fda7151c1f275</id>
<content type='text'>
We can treat SKB_GSO_GRE almost exactly the same as UDP tunnels, except
 that we don't want to edit the outer UDP len (as there isn't one).
For SKB_GSO_GRE_CSUM, we have to use GSO_PARTIAL as the device doesn't
 support offload of non-UDP outer L4 checksums.
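A minimal Python sketch of the decision described above (illustrative only; the flag names are informal stand-ins for the SKB_GSO_* bits, not driver code):

```python
def tunnel_plan(gso_types):
    """gso_types: set of informal flag names, e.g. {"udp_tunnel"} or {"gre_csum"}."""
    return {
        # The outer UDP length must track each segment's size; GRE has no
        # outer UDP header, so only UDP tunnels need this edit.
        "edit_outer_udp_len": "udp_tunnel" in gso_types,
        # The device cannot offload non-UDP outer L4 checksums, so
        # GRE-with-checksum has to go via GSO_PARTIAL.
        "gso_partial": "gre_csum" in gso_types,
    }
```

Plain GRE thus gets the same plan as a UDP tunnel, minus the length edit.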

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
Acked-by: Martin Habets &lt;mhabets@solarflare.com&gt;
Reviewed-by: Alexander Duyck &lt;alexanderduyck@fb.com&gt;
</content>
</entry>
<entry>
<title>sfc: correctly support non-partial GSO_UDP_TUNNEL_CSUM on EF100</title>
<updated>2020-11-13T23:33:27Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-11-12T15:19:47Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=42bfd69a9fdd84b6b99324e745c4817878bbe0b7'/>
<id>urn:sha1:42bfd69a9fdd84b6b99324e745c4817878bbe0b7</id>
<content type='text'>
By asking the HW for the correct edits, we can make UDP tunnel TSO
 work without needing GSO_PARTIAL.  So don't specify it in our
 netdev-&gt;gso_partial_features.
However, retain GSO_PARTIAL support, as this will be used for other
 protocols later.

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
Acked-by: Martin Habets &lt;mhabets@solarflare.com&gt;
Reviewed-by: Alexander Duyck &lt;alexanderduyck@fb.com&gt;
</content>
</entry>
<entry>
<title>sfc: only use fixed-id if the skb asks for it</title>
<updated>2020-10-31T00:42:53Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-10-28T20:43:59Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=dbe2f251f9eb2158d2c9ec22cc22cc2fc66370e1'/>
<id>urn:sha1:dbe2f251f9eb2158d2c9ec22cc22cc2fc66370e1</id>
<content type='text'>
AIUI, the NETIF_F_TSO_MANGLEID flag is a signal to the stack that a
 driver may _need_ to mangle IDs in order to do TSO, and conversely
 a signal from the stack that the driver is permitted to do so.
Since we support both fixed and incrementing IPIDs, we should rely
 on the SKB_GSO_FIXEDID flag on a per-skb basis, rather than using
 the MANGLEID feature to make all TSOs fixed-id.
Includes other minor cleanups of ef100_make_tso_desc() coding style.
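A hedged Python sketch of the per-skb decision (illustrative only; informal flag names, not the driver code):

```python
def ipid_mode(skb_gso_flags):
    """skb_gso_flags: set of per-skb GSO flag names (informal stand-ins)."""
    # Decide fixed vs incrementing IP ID from the skb's own flags,
    # rather than forcing fixed-id on every TSO just because the
    # MANGLEID feature is enabled on the netdev.
    if "fixedid" in skb_gso_flags:
        return "fixed"
    return "incrementing"
```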

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>sfc: implement encap TSO on EF100</title>
<updated>2020-10-31T00:42:53Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-10-28T20:43:39Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=806f9f23b6732de81daf3aafae5363bb56d29ed6'/>
<id>urn:sha1:806f9f23b6732de81daf3aafae5363bb56d29ed6</id>
<content type='text'>
The NIC only needs to know where the headers it has to edit (TCP and
 inner and outer IPv4) are, which fits GSO_PARTIAL nicely.
It also supports non-PARTIAL offload of UDP tunnels, again just
 needing to be told the outer transport offset so that it can edit
 the UDP length field.
(It's not clear to me whether the stack will ever use the non-PARTIAL
 version with the netdev feature flags we're setting here.)

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>sfc: de-indirect TSO handling</title>
<updated>2020-09-12T00:15:22Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-09-11T22:40:03Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1679c72cf48552e75a624b9c9230e2c7c18cfffc'/>
<id>urn:sha1:1679c72cf48552e75a624b9c9230e2c7c18cfffc</id>
<content type='text'>
Remove the tx_queue-&gt;handle_tso function pointer, and just use
 tx_queue-&gt;tso_version to decide which function to call, thus removing
 an indirect call from the fast path.
Instead of passing a tso_v2 flag to efx_mcdi_tx_init(), set the desired
 tx_queue-&gt;tso_version before calling it.
In efx_mcdi_tx_init(), report back failure to obtain a TSOv2 context by
 setting tx_queue-&gt;tso_version to 0, which will cause the TX path to
 use the GSO-based fallback.
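A minimal Python sketch of the dispatch change (illustrative only; simplified stand-in names, not the driver's):

```python
def tso_v2_desc(skb):
    # stand-in for the TSOv2 descriptor writer
    return "tso_v2"

def gso_fallback(skb):
    # stand-in for the GSO-based software fallback
    return "gso"

class TxQueue:
    def __init__(self, tso_version):
        # 0 means no TSOv2 context could be obtained
        self.tso_version = tso_version

    def handle(self, skb):
        # Branch on the stored version instead of calling through a
        # per-queue function pointer (the indirect call being removed).
        if self.tso_version == 2:
            return tso_v2_desc(skb)
        return gso_fallback(skb)
```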

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>sfc: remove spurious unreachable return statement</title>
<updated>2020-09-11T21:55:14Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-09-11T18:44:56Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=3d6aef65dfaa0e9d6632fb15aebc01aae163c392'/>
<id>urn:sha1:3d6aef65dfaa0e9d6632fb15aebc01aae163c392</id>
<content type='text'>
The statement above it already returns, so there is no way to get here.

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>sfc: use tx_queue-&gt;old_read_count in EF100 TX path</title>
<updated>2020-09-05T19:21:40Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-09-03T21:34:57Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=5374d6024cd47e028f96a382104b2653f010b430'/>
<id>urn:sha1:5374d6024cd47e028f96a382104b2653f010b430</id>
<content type='text'>
As in the Siena/EF10 case, it minimises cacheline ping-pong between
 the TX and completion paths.
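A minimal Python sketch of the cached-read-count pattern (illustrative only; a simplified model, not the driver code):

```python
class TxQueue:
    def __init__(self, entries):
        self.entries = entries
        self.insert_count = 0    # advanced by the TX path
        self.read_count = 0      # advanced by the completion path
        self.old_read_count = 0  # TX path's private cached copy

    def space(self):
        # Compute free space from the cached copy first; only re-read
        # the shared read_count (and refresh the cache) when the cache
        # says the queue looks full.  This keeps the TX path off the
        # completion path's cacheline most of the time.
        fill = self.insert_count - self.old_read_count
        if fill == self.entries:
            self.old_read_count = self.read_count
            fill = self.insert_count - self.old_read_count
        return self.entries - fill
```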

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>sfc: make ef100 xmit_more handling look more like ef10's</title>
<updated>2020-09-05T19:21:39Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-09-03T21:34:42Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8cb2675634ab8ec7986aa9dfe1a7a934872ef51d'/>
<id>urn:sha1:8cb2675634ab8ec7986aa9dfe1a7a934872ef51d</id>
<content type='text'>
This should cause no functional change; it merely leaves only one
 design of xmit_more handling to understand.  As with the EF10/Siena
 version, we set tx_queue-&gt;xmit_pending when we queue up a TX, and
 clear it when we ring the doorbell (in ef100_notify_tx_desc).
While we're at it, make ef100_notify_tx_desc static since nothing
 outside of ef100_tx.c uses it.

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>sfc: add and use efx_tx_send_pending in tx.c</title>
<updated>2020-09-05T19:21:39Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-09-03T21:34:15Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1c0544d24927e4fad04f858216b8ea767a3bd123'/>
<id>urn:sha1:1c0544d24927e4fad04f858216b8ea767a3bd123</id>
<content type='text'>
Instead of using efx_tx_queue_partner(), which relies on the assumption
 that tx_queues_per_channel is 2, efx_tx_send_pending() iterates over
 txqs with efx_for_each_channel_tx_queue().
We unconditionally set tx_queue-&gt;xmit_pending (renamed from
 xmit_more_available), then condition on xmit_more for the call to
 efx_tx_send_pending(), which will clear xmit_pending.  Thus, after an
 xmit_more TX, the doorbell is un-rung and xmit_pending is true.
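A minimal Python sketch of the xmit_pending invariant (illustrative only; simplified, not the driver code):

```python
class TxQueue:
    def __init__(self):
        self.xmit_pending = False
        self.doorbells = 0

    def notify(self):
        # ring the doorbell and clear the pending flag
        self.doorbells += 1
        self.xmit_pending = False

def hard_start_xmit(channel_queues, txq, xmit_more):
    # Unconditionally mark this queue pending, then flush every
    # pending queue on the channel only when the stack says no more
    # packets are coming.
    txq.xmit_pending = True
    if not xmit_more:
        for q in channel_queues:
            if q.xmit_pending:
                q.notify()
```

After an xmit_more transmit, no doorbell has been rung and the queue stays pending until a later non-xmit_more transmit flushes it.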

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: Jakub Kicinski &lt;kuba@kernel.org&gt;
</content>
</entry>
<entry>
<title>sfc_ef100: TX path for EF100 NICs</title>
<updated>2020-08-04T01:22:54Z</updated>
<author>
<name>Edward Cree</name>
<email>ecree@solarflare.com</email>
</author>
<published>2020-08-03T20:34:00Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d19a5372186336df8a90391c1ae2011e03310dca'/>
<id>urn:sha1:d19a5372186336df8a90391c1ae2011e03310dca</id>
<content type='text'>
Includes checksum offload and TSO, so declare those in our netdev features.

Signed-off-by: Edward Cree &lt;ecree@solarflare.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
