<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/drivers/net/ethernet/intel/e1000, branch linux-4.3.y</title>
<subtitle>Hosts the 0x221E linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.3.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.3.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2015-05-12T14:39:27Z</updated>
<entry>
<title>e1000: Replace e1000_free_frag with skb_free_frag</title>
<updated>2015-05-12T14:39:27Z</updated>
<author>
<name>Alexander Duyck</name>
<email>alexander.h.duyck@redhat.com</email>
</author>
<published>2015-05-07T04:12:20Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=6bf93ba89ea22bec5f9d88bf458a230de59d141e'/>
<id>urn:sha1:6bf93ba89ea22bec5f9d88bf458a230de59d141e</id>
<content type='text'>
Signed-off-by: Alexander Duyck &lt;alexander.h.duyck@redhat.com&gt;
Acked-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>e1000, e1000e: Use dma_rmb instead of rmb for descriptor read ordering</title>
<updated>2015-04-08T16:15:14Z</updated>
<author>
<name>Alexander Duyck</name>
<email>alexander.h.duyck@redhat.com</email>
</author>
<published>2015-04-07T23:55:27Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=837a1dba0078d0bad755f6cb13a48c1623d11ff5'/>
<id>urn:sha1:837a1dba0078d0bad755f6cb13a48c1623d11ff5</id>
<content type='text'>
This change replaces calls to rmb with dma_rmb in the case where we want to
order all follow-on descriptor reads after the check for the descriptor
status bit.

Signed-off-by: Alexander Duyck &lt;alexander.h.duyck@redhat.com&gt;
Acked-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>ethernet: codespell comment spelling fixes</title>
<updated>2015-03-09T02:54:22Z</updated>
<author>
<name>Joe Perches</name>
<email>joe@perches.com</email>
</author>
<published>2015-03-07T04:49:12Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=dbedd44e982d61c156337b1a3fb252b24085f8e3'/>
<id>urn:sha1:dbedd44e982d61c156337b1a3fb252b24085f8e3</id>
<content type='text'>
To test a checkpatch spelling patch, I ran codespell against
drivers/net/ethernet/.

$ git ls-files drivers/net/ethernet/ | \
  while read file ; do \
    codespell -w $file; \
  done

I removed a false positive in e1000_hw.h

Signed-off-by: Joe Perches &lt;joe@perches.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>e1000: add dummy allocator to fix race condition between mtu change and netpoll</title>
<updated>2015-03-06T10:47:10Z</updated>
<author>
<name>Sabrina Dubroca</name>
<email>sd@queasysnail.net</email>
</author>
<published>2015-02-26T05:35:41Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=08e8331654d1d7b2c58045e549005bc356aa7810'/>
<id>urn:sha1:08e8331654d1d7b2c58045e549005bc356aa7810</id>
<content type='text'>
There is a race condition between e1000_change_mtu's cleanups and
netpoll, when we change the MTU across jumbo size:

Changing MTU frees all the rx buffers:
    e1000_change_mtu -&gt; e1000_down -&gt; e1000_clean_all_rx_rings -&gt;
        e1000_clean_rx_ring

Then, close to the end of e1000_change_mtu:
    pr_info -&gt; ... -&gt; netpoll_poll_dev -&gt; e1000_clean -&gt;
        e1000_clean_rx_irq -&gt; e1000_alloc_rx_buffers -&gt; e1000_alloc_frag

And when we come back to do the rest of the MTU change:
    e1000_up -&gt; e1000_configure -&gt; e1000_configure_rx -&gt;
        e1000_alloc_jumbo_rx_buffers

alloc_jumbo finds the buffers already != NULL, since data (shared with
page in e1000_rx_buffer-&gt;rxbuf) has been re-alloc'd, but it's garbage,
or at least not what is expected when in jumbo state.

This results in an unusable adapter (packets don't get through), and a
NULL pointer dereference on the next call to e1000_clean_rx_ring
(other mtu change, link down, shutdown):

BUG: unable to handle kernel NULL pointer dereference at           (null)
IP: [&lt;ffffffff81194d6e&gt;] put_compound_page+0x7e/0x330

    [...]

Call Trace:
 [&lt;ffffffff81195445&gt;] put_page+0x55/0x60
 [&lt;ffffffff815d9f44&gt;] e1000_clean_rx_ring+0x134/0x200
 [&lt;ffffffff815da055&gt;] e1000_clean_all_rx_rings+0x45/0x60
 [&lt;ffffffff815df5e0&gt;] e1000_down+0x1c0/0x1d0
 [&lt;ffffffff811e2260&gt;] ? deactivate_slab+0x7f0/0x840
 [&lt;ffffffff815e21bc&gt;] e1000_change_mtu+0xdc/0x170
 [&lt;ffffffff81647050&gt;] dev_set_mtu+0xa0/0x140
 [&lt;ffffffff81664218&gt;] do_setlink+0x218/0xac0
 [&lt;ffffffff814459e9&gt;] ? nla_parse+0xb9/0x120
 [&lt;ffffffff816652d0&gt;] rtnl_newlink+0x6d0/0x890
 [&lt;ffffffff8104f000&gt;] ? kvm_clock_read+0x20/0x40
 [&lt;ffffffff810a2068&gt;] ? sched_clock_cpu+0xa8/0x100
 [&lt;ffffffff81663802&gt;] rtnetlink_rcv_msg+0x92/0x260

By setting the allocator to a dummy version, netpoll can't mess up our
rx buffers.  The allocator is set back to a sane value in
e1000_configure_rx.

Fixes: edbbb3ca1077 ("e1000: implement jumbo receive with partial descriptors")
Signed-off-by: Sabrina Dubroca &lt;sd@queasysnail.net&gt;
Tested-by: Aaron Brown &lt;aaron.f.brown@intel.com&gt;
Signed-off-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
</content>
</entry>
<entry>
<title>e1000: call netif_carrier_off early on down</title>
<updated>2015-03-06T10:47:10Z</updated>
<author>
<name>Eliezer Tamir</name>
<email>eliezer.tamir@linux.intel.com</email>
</author>
<published>2015-02-25T15:52:49Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f9c029db70880a66cf03c34aa6d4d5c9b2d13281'/>
<id>urn:sha1:f9c029db70880a66cf03c34aa6d4d5c9b2d13281</id>
<content type='text'>
When bringing down an interface, netif_carrier_off() should be
one of the first things we do, since this will prevent the stack
from queuing more packets to this interface.
This operation is very fast, and should make the device behave
much nicer when trying to bring down an interface under load.

Also, this would Do The Right Thing (TM) if the device has some
sort of fail-over teaming, redirecting traffic to the other interface.

Move netif_carrier_off as early as possible.

Signed-off-by: Eliezer Tamir &lt;eliezer.tamir@linux.intel.com&gt;
Tested-by: Aaron Brown &lt;aaron.f.brown@intel.com&gt;
Signed-off-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
</content>
</entry>
<entry>
<title>net: e1000: support txtd update delay via xmit_more</title>
<updated>2015-01-23T02:10:23Z</updated>
<author>
<name>Florian Westphal</name>
<email>fw@strlen.de</email>
</author>
<published>2015-01-07T11:40:33Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8a4d0b93c142a53c369998303d2114b5beeca7af'/>
<id>urn:sha1:8a4d0b93c142a53c369998303d2114b5beeca7af</id>
<content type='text'>
Don't update the Tx tail descriptor if the queue hasn't been stopped and
we know at least one more skb will be sent right away.

Signed-off-by: Florian Westphal &lt;fw@strlen.de&gt;
Tested-by: Aaron Brown &lt;aaron.f.brown@intel.com&gt;
Signed-off-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
</content>
</entry>
<entry>
<title>e1000: fix time comparison</title>
<updated>2015-01-23T02:10:15Z</updated>
<author>
<name>Asaf Vertz</name>
<email>asaf.vertz@tandemg.com</email>
</author>
<published>2015-01-08T06:01:00Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d5c7d7f6427cd7c39353d09bf47bfbc7800b6a53'/>
<id>urn:sha1:d5c7d7f6427cd7c39353d09bf47bfbc7800b6a53</id>
<content type='text'>
To be future-proof and for better readability, the time comparisons are
modified to use time_after_eq() instead of plain, error-prone math.

Signed-off-by: Asaf Vertz &lt;asaf.vertz@tandemg.com&gt;
Acked-by: Jacob Keller &lt;jacob.e.keller@intel.com&gt;
Tested-by: Aaron Brown &lt;aaron.f.brown@intel.com&gt;
Signed-off-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
</content>
</entry>
<entry>
<title>net: rename vlan_tx_* helpers since "tx" is misleading there</title>
<updated>2015-01-13T22:51:08Z</updated>
<author>
<name>Jiri Pirko</name>
<email>jiri@resnulli.us</email>
</author>
<published>2015-01-13T16:13:44Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=df8a39defad46b83694ea6dd868d332976d62cc0'/>
<id>urn:sha1:df8a39defad46b83694ea6dd868d332976d62cc0</id>
<content type='text'>
The same macros are used for rx as well, so rename them.

Signed-off-by: Jiri Pirko &lt;jiri@resnulli.us&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>ethernet/intel: Use napi_alloc_skb</title>
<updated>2014-12-10T18:31:57Z</updated>
<author>
<name>Alexander Duyck</name>
<email>alexander.h.duyck@redhat.com</email>
</author>
<published>2014-12-10T03:40:56Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=67fd893ee07db94bcef6c7537f8569b49ff124d4'/>
<id>urn:sha1:67fd893ee07db94bcef6c7537f8569b49ff124d4</id>
<content type='text'>
This change replaces calls to netdev_alloc_skb_ip_align with
napi_alloc_skb.  The current advantage of napi_alloc_skb is that the
page allocation doesn't make use of any irq disable calls.

There are a few spots where I couldn't replace the calls, as the buffer
allocation routine is called as a part of init which is outside of the
softirq context.

Cc: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: Alexander Duyck &lt;alexander.h.duyck@redhat.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>ethernet/intel: Use eth_skb_pad and skb_put_padto helpers</title>
<updated>2014-12-09T01:47:42Z</updated>
<author>
<name>Alexander Duyck</name>
<email>alexander.h.duyck@redhat.com</email>
</author>
<published>2014-12-03T16:17:39Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=a94d9e224e3c48f57559183582c6410e7acf1d8b'/>
<id>urn:sha1:a94d9e224e3c48f57559183582c6410e7acf1d8b</id>
<content type='text'>
Update the Intel Ethernet drivers to use eth_skb_pad() and skb_put_padto()
instead of doing their own implementations of these functions.

Also this cleans up two other spots where skb_pad was called but the length
and tail pointers were being manipulated directly instead of just having
the padding length added via __skb_put.

Cc: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: Alexander Duyck &lt;alexander.h.duyck@redhat.com&gt;
Acked-by: Jeff Kirsher &lt;jeffrey.t.kirsher@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
