<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel, branch linux-2.6.22.y</title>
<subtitle>Hosts the 0x221E linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.22.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.22.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2008-02-25T23:59:40Z</updated>
<entry>
<title>Linux 2.6.22.19</title>
<updated>2008-02-25T23:59:40Z</updated>
<author>
<name>Greg Kroah-Hartman</name>
<email>gregkh@suse.de</email>
</author>
<published>2008-02-25T23:59:40Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=37579d1574f6c18f1f648201c6b0850ac94094cd'/>
<id>urn:sha1:37579d1574f6c18f1f648201c6b0850ac94094cd</id>
<content type='text'>
</content>
</entry>
<entry>
<title>NETFILTER: nf_conntrack_tcp: conntrack reopening fix</title>
<updated>2008-02-25T23:59:23Z</updated>
<author>
<name>Jozsef Kadlecsik</name>
<email>kadlec@blackhole.kfki.hu</email>
</author>
<published>2008-02-19T15:24:01Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=98d047714d208a6f8a933175a32d7d33931198ad'/>
<id>urn:sha1:98d047714d208a6f8a933175a32d7d33931198ad</id>
<content type='text'>
[NETFILTER]: nf_conntrack_tcp: conntrack reopening fix

[Upstream commits b2155e7f + d0c1fd7a]

TCP connection tracking in netfilter did not handle TCP reopening
properly: active close was taken into account for one side only,
not for either side; this is now fixed. The patch adds more comments
explaining how the different cases are handled.
The bug was discovered by Jeff Chua.

Signed-off-by: Jozsef Kadlecsik &lt;kadlec@blackhole.kfki.hu&gt;
Signed-off-by: Patrick McHardy &lt;kaber@trash.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>SCSI: sd: handle bad lba in sense information</title>
<updated>2008-02-25T23:59:22Z</updated>
<author>
<name>James Bottomley</name>
<email>James.Bottomley@HansenPartnership.com</email>
</author>
<published>2008-02-02T22:06:23Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=3b62bc1363411799eac3d7dab2412b2df3fa9ac0'/>
<id>urn:sha1:3b62bc1363411799eac3d7dab2412b2df3fa9ac0</id>
<content type='text'>
patch 366c246de9cec909c5eba4f784c92d1e75b4dc38 in mainline.

Some devices report medium error locations incorrectly.  Add guards to
make sure the reported bad lba is actually in the request that caused
it.  Additionally, remove the large case statement for sector sizes and
replace it with the proper u64 divisions.
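
The guard described above can be modeled in a few lines of Python (an
illustrative sketch of the idea, not the actual drivers/scsi/sd.c code;
the function and parameter names are made up for this sketch):

```python
def bad_lba_in_request(bad_lba, start_lba, nblocks):
    """Accept a device-reported medium-error LBA only if it falls
    inside the request that produced it (start_lba, nblocks)."""
    # range() membership rejects any LBA outside the request, and a
    # negative offset is simply not in range, so it is rejected too.
    return (bad_lba - start_lba) in range(nblocks)
```

For example, a reported bad LBA of 100 inside a request starting at
LBA 96 for 8 blocks is accepted, while one at LBA 200 is rejected.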

Tested-by: Mike Snitzer &lt;snitzer@gmail.com&gt;
Cc: Stable Tree &lt;stable@kernel.org&gt;
Cc: Tony Battersby &lt;tonyb@cybernetics.com&gt;
Signed-off-by: James Bottomley &lt;James.Bottomley@HansenPartnership.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>Be more robust about bad arguments in get_user_pages()</title>
<updated>2008-02-25T23:59:21Z</updated>
<author>
<name>Jonathan Corbet</name>
<email>corbet@lwn.net</email>
</author>
<published>2008-02-17T17:18:36Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=07a854c8eeb63498124ea760dc1ffa4335627f75'/>
<id>urn:sha1:07a854c8eeb63498124ea760dc1ffa4335627f75</id>
<content type='text'>
MAINLINE: 900cf086fd2fbad07f72f4575449e0d0958f860f

So I spent a while pounding my head against my monitor trying to figure
out the vmsplice() vulnerability - how could a failure to check for
*read* access turn into a root exploit? It turns out that it's a buffer
overflow problem which is made easy by the way get_user_pages() is
coded.

In particular, "len" is a signed int, and it is only checked at the
*end* of a do {} while() loop.  So, if it is passed in as zero, the loop
will execute once and decrement len to -1.  At that point, the loop will
proceed until the next invalid address is found; in the process, it will
likely overflow the pages array passed in to get_user_pages().

I think that, if get_user_pages() has been asked to grab zero pages,
that's what it should do.  Thus this patch; it is, among other things,
enough to block the (already fixed) root exploit and any others which
might be lurking in similar code.  I also think that the number of pages
should be unsigned, but changing the prototype of this function probably
requires some more careful review.
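
The loop shape described above can be modeled in Python (a sketch of the
do {} while() logic and the patch's guard, not the kernel's C code; all
names here are hypothetical):

```python
def get_user_pages_buggy(npages, valid_addrs):
    """Model of the old logic: the length check sits at the END of a
    do {} while() style loop, so npages == 0 still runs the body once,
    decrements npages to -1, and the loop then keeps going until the
    next invalid address, overrunning the caller's pages array."""
    pages = []
    while True:
        if len(pages) == len(valid_addrs):  # next address is invalid
            break
        pages.append(valid_addrs[len(pages)])
        npages -= 1
        if npages == 0:                     # the only length check
            break
    return pages

def get_user_pages_fixed(npages, valid_addrs):
    # the patch's guard: asked to grab zero pages, grab zero pages
    if npages == 0:
        return []
    return get_user_pages_buggy(npages, valid_addrs)
```

With three valid addresses, asking the buggy version for zero pages
still grabs all three; the guarded version grabs none.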

Signed-off-by: Jonathan Corbet &lt;corbet@lwn.net&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>quicklists: Only consider memory that can be used with GFP_KERNEL</title>
<updated>2008-02-25T23:59:21Z</updated>
<author>
<name>Christoph Lameter</name>
<email>clameter@sgi.com</email>
</author>
<published>2008-02-17T17:18:24Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=7d495f4f808c8625900d9e52cb531992b350458d'/>
<id>urn:sha1:7d495f4f808c8625900d9e52cb531992b350458d</id>
<content type='text'>
patch 96990a4ae979df9e235d01097d6175759331e88c in mainline.

Quicklists calculates the size of the quicklists based on the number of
free pages.  This must be the number of free pages that can be allocated
with GFP_KERNEL.  node_page_state() includes the pages in ZONE_HIGHMEM and
ZONE_MOVABLE which may lead the quicklists to become too large causing OOM.
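
The sizing rule above can be sketched in Python (a model of the idea
only, not the mm/quicklist.c code; the names are illustrative):

```python
def quicklist_limit(node_free_pages, free_highmem, free_movable):
    """Bound the quicklists by pages a GFP_KERNEL allocation can
    actually use: subtract the ZONE_HIGHMEM and ZONE_MOVABLE free
    pages from the raw node free-page count before using it."""
    usable = node_free_pages - free_highmem - free_movable
    return max(usable, 0)   # never report a negative limit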

Signed-off-by: Christoph Lameter &lt;clameter@sgi.com&gt;
Tested-by: Dhaval Giani &lt;dhaval@linux.vnet.ibm.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
Signed-off-by: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;

</content>
</entry>
<entry>
<title>knfsd: query filesystem for NFSv4 getattr of FATTR4_MAXNAME</title>
<updated>2008-02-25T23:59:21Z</updated>
<author>
<name>J. Bruce Fields</name>
<email>bfields@citi.umich.edu</email>
</author>
<published>2008-02-07T20:03:57Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=ad14c9bb60583f9eac5e7b33c052d2a9614d113f'/>
<id>urn:sha1:ad14c9bb60583f9eac5e7b33c052d2a9614d113f</id>
<content type='text'>
mainline: a16e92edcd0a2846455a30823e1bac964e743baa

Without this we always return 2^32-1 as the maximum name length.

Signed-off-by: J. Bruce Fields &lt;bfields@citi.umich.edu&gt;
Signed-off-by: Andreas Gruenbacher &lt;agruen@suse.de&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>NFS: Fix an Oops in encode_lookup()</title>
<updated>2008-02-25T23:59:21Z</updated>
<author>
<name>Trond Myklebust</name>
<email>Trond.Myklebust@netapp.com</email>
</author>
<published>2008-02-07T20:03:49Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=29301cef1b0106eb400d83c971bdccf7e5bd6d46'/>
<id>urn:sha1:29301cef1b0106eb400d83c971bdccf7e5bd6d46</id>
<content type='text'>
mainline: 54af3bb543c071769141387a42deaaab5074da55

It doesn't look as if the NFS file name limit is being initialised correctly
in the struct nfs_server. Make sure that we limit whatever is being set in
nfs_probe_fsinfo() and nfs_init_server().

Also ensure that readdirplus and nfs4_path_walk respect our file name
limits.

Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Acked-by: Neil Brown &lt;neilb@suse.de&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>NFSv2/v3: Fix a memory leak when using -onolock</title>
<updated>2008-02-25T23:59:21Z</updated>
<author>
<name>Trond Myklebust</name>
<email>Trond.Myklebust@netapp.com</email>
</author>
<published>2008-02-07T20:03:52Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f3442df1495a8d424a47c735afa937a74651aa7c'/>
<id>urn:sha1:f3442df1495a8d424a47c735afa937a74651aa7c</id>
<content type='text'>
mainline: 5cef338b30c110daf547fb13d99f0c77f2a79fbc

    Neil Brown said:
    &gt; Hi Trond,
    &gt;
    &gt; We found that a machine which made moderately heavy use of
    &gt; 'automount' was leaking some nfs data structures - particularly the
    &gt; 4K allocated by rpc_alloc_iostats.
    &gt; It turns out that this only happens with filesystems with -onolock
    &gt; set.

    &gt; The problem is that if NFS_MOUNT_NONLM is set, nfs_start_lockd doesn't
    &gt; set server-&gt;destroy, so when the filesystem is unmounted, the
    &gt; -&gt;client_acl is not shutdown, and so several resources are still
    &gt; held.  Multiple mount/umount cycles will slowly eat away memory
    &gt; several pages at a time.

    Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;

Acked-by: Neil Brown &lt;neilb@suse.de&gt;
Signed-off-by: Neil Brown &lt;neilb@suse.de&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>NFS: Fix nfs_reval_fsid()</title>
<updated>2008-02-25T23:59:20Z</updated>
<author>
<name>Trond Myklebust</name>
<email>Trond.Myklebust@netapp.com</email>
</author>
<published>2008-02-07T20:03:45Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=89f2dc01cfed1c24b3f32af748698de35c3a0cae'/>
<id>urn:sha1:89f2dc01cfed1c24b3f32af748698de35c3a0cae</id>
<content type='text'>
mainline: a0356862bcbeb20acf64bc1a82d28a4c5bb957a7

We don't need to revalidate the fsid on the root directory. It suffices to
revalidate it on the current directory.

Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Acked-by: Neil Brown &lt;neilb@suse.de&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>knfsd: fix spurious EINVAL errors on first access of new filesystem</title>
<updated>2008-02-25T23:59:20Z</updated>
<author>
<name>J. Bruce Fields</name>
<email>bfields@citi.umich.edu</email>
</author>
<published>2008-02-07T20:03:41Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=e48a28b355edad0cdc75c3bb8f78bd818bddcddc'/>
<id>urn:sha1:e48a28b355edad0cdc75c3bb8f78bd818bddcddc</id>
<content type='text'>
mainline: ac8587dcb58e40dd336d99d60f852041e06cc3dd

The v2/v3 acl code in nfsd is translating any return from fh_verify() to
nfserr_inval.  This is particularly unfortunate in the case of an
nfserr_dropit return, which is an internal error meant to indicate to
callers that this request has been deferred and should just be dropped
pending the results of an upcall to mountd.

Thanks to Roland &lt;devzero@web.de&gt; for bug report and data collection.

Cc: Roland &lt;devzero@web.de&gt;
Acked-by: Andreas Gruenbacher &lt;agruen@suse.de&gt;
Signed-off-by: J. Bruce Fields &lt;bfields@citi.umich.edu&gt;
Reviewed-By: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
CC: Oliver Pinter &lt;oliver.pntr@gmail.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
</feed>
