<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/mm, branch linux-2.6.19.y</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.19.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.19.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2007-02-05T16:31:44Z</updated>
<entry>
<title>[PATCH] Don't allow the stack to grow into hugetlb reserved regions</title>
<updated>2007-02-05T16:31:44Z</updated>
<author>
<name>Adam Litke</name>
<email>agl@us.ibm.com</email>
</author>
<published>2007-01-30T22:35:39Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8b0165ce42354878b66482f34149d99660dbcdb0'/>
<id>urn:sha1:8b0165ce42354878b66482f34149d99660dbcdb0</id>
<content type='text'>
When expanding the stack, we don't currently check if the VMA will cross
into an area of the address space that is reserved for hugetlb pages.
Subsequent faults on the expanded portion of such a VMA will confuse the
low-level MMU code, resulting in an OOPS.  Check for this.
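
As an illustration of the kind of check being added (a minimal userspace sketch; the struct, function names, and return values here are made up for illustration and are not the actual patch):

```c
#include <stddef.h>

/* Userspace model of the guard described above: refuse to grow a stack
 * region into a range reserved for hugetlb pages.  All names here are
 * illustrative, not kernel API. */
struct range { unsigned long start, end; };

static int overlaps(struct range a, struct range b)
{
	return a.start < b.end && b.start < a.end;
}

/* Grow the stack downward to new_start; fail if it would enter rsv. */
static int expand_stack_checked(struct range *stack, unsigned long new_start,
				struct range rsv)
{
	struct range grown = { new_start, stack->end };

	if (overlaps(grown, rsv))
		return -1;	/* faulting here would confuse the MMU code */
	stack->start = new_start;
	return 0;
}
```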

Signed-off-by: Adam Litke &lt;agl@us.ibm.com&gt;
Cc: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Cc: William Lee Irwin III &lt;wli@holomorphy.com&gt;
Cc: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Check for populated zone in __drain_pages</title>
<updated>2007-02-05T16:31:39Z</updated>
<author>
<name>Christoph Lameter</name>
<email>clameter@sgi.com</email>
</author>
<published>2007-01-06T00:37:02Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=75cda36db203e982ea134d785ea58d8b56c3aef3'/>
<id>urn:sha1:75cda36db203e982ea134d785ea58d8b56c3aef3</id>
<content type='text'>
Both process_zones() and drain_node_pages() check for populated zones
before touching pagesets.  However, __drain_pages() does not do so.

This may result in a NULL pointer dereference for pagesets in unpopulated
zones if a NUMA setup is combined with cpu hotplug.

Initially the unpopulated zone has the pcp pointers pointing to the boot
pagesets.  Since the zone is not populated, the boot pageset pointers will
not be changed during page allocator and slab bootstrap.

If a cpu is later brought down (first call to __drain_pages()) then the pcp
pointers for cpus in unpopulated zones are set to NULL since __drain_pages
does not first check for an unpopulated zone.

If the cpu is then brought up again then we call process_zones() which will
ignore the unpopulated zone.  So the pageset pointers will still be NULL.

If the cpu is then again brought down then __drain_pages will attempt to
drain pages by following the NULL pageset pointer for unpopulated zones.
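
The missing check can be sketched like this (a userspace model with made-up types; populated_zone() exists in the kernel, but everything else here is illustrative):

```c
#include <stddef.h>

/* Userspace model of the fix described above: a drain loop must skip
 * unpopulated zones, whose per-cpu pageset pointer may still be NULL. */
struct zone { long present_pages; long *pageset; };

static int populated_zone(struct zone *z)
{
	return z->present_pages != 0;
}

static long drain_pages(struct zone *zones, int nr)
{
	long drained = 0;
	int i;

	for (i = 0; i < nr; i++) {
		if (!populated_zone(&zones[i]))
			continue;	/* the check __drain_pages lacked */
		drained += *zones[i].pageset;
		*zones[i].pageset = 0;
	}
	return drained;
}
```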

Signed-off-by: Christoph Lameter &lt;clameter@sgi.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix up page_mkclean_one(): virtual caches, s390</title>
<updated>2007-01-10T19:05:23Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2006-12-22T13:25:52Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8e609d9efea47564c000d486f558d0c0aba8617e'/>
<id>urn:sha1:8e609d9efea47564c000d486f558d0c0aba8617e</id>
<content type='text'>
 - add flush_cache_page() for all those virtual indexed cache
   architectures.

 - handle s390.

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
[chrisw: fold in d6e88e671ac1]
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix incorrect user space access locking in mincore() (CVE-2006-4814)</title>
<updated>2007-01-10T19:05:23Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@woody.osdl.org</email>
</author>
<published>2006-12-16T17:44:32Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=e26353af7096103cec474473cbd81dc4190bba77'/>
<id>urn:sha1:e26353af7096103cec474473cbd81dc4190bba77</id>
<content type='text'>
Doug Chapman noticed that mincore() will do a "copy_to_user()" of the
result while holding the mmap semaphore for reading, which is a big
no-no.  While a recursive read-lock on a semaphore happens to work in the
case of a page fault, we don't actually allow it because of deadlock
scenarios with writers, arising from fairness issues.

Doug and Marcel sent in a patch to fix it, but I decided to just rewrite
the mess instead - not just fixing the locking problem, but making the
code smaller and (imho) much easier to understand.
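
The locking pattern behind the rewrite can be sketched like this (a userspace model; the toy lock, buffer size, and function names are illustrative stand-ins, not the actual mincore() code):

```c
#include <string.h>

/* Gather results into a small local buffer while the lock is held, drop
 * the lock, and only then copy the data out -- in the kernel, the
 * copy_to_user() step, which may fault and retake the lock. */
static int mmap_sem_readers;		/* toy lock: reader count */

static void down_read_toy(void) { mmap_sem_readers++; }
static void up_read_toy(void)   { mmap_sem_readers--; }

static int mincore_chunk(unsigned char *vec, size_t pages)
{
	unsigned char tmp[64];
	size_t chunk = pages < sizeof(tmp) ? pages : sizeof(tmp);

	down_read_toy();
	memset(tmp, 1, chunk);		/* stand-in for the page-table walk */
	up_read_toy();

	memcpy(vec, tmp, chunk);	/* safe: lock dropped before copy-out */
	return (int)chunk;
}
```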

Cc: Doug Chapman &lt;dchapman@redhat.com&gt;
Cc: Marcel Holtmann &lt;holtmann@redhat.com&gt;
Cc: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: Andrew Morton &lt;akpm@osdl.org&gt;
[chrisw: fold in subsequent fix: 4fb23e439ce0]
Acked-by: Hugh Dickins &lt;hugh@veritas.com&gt;
[chrisw: fold in subsequent fix: 825020c3866e]
Signed-off-by: Oleg Nesterov &lt;oleg@tv-sign.ru&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] fix OOM killing of swapoff</title>
<updated>2007-01-10T19:05:23Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hugh@veritas.com</email>
</author>
<published>2007-01-06T00:37:03Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=85a181bb8fbaf93019651dbfa5034788b7164fa1'/>
<id>urn:sha1:85a181bb8fbaf93019651dbfa5034788b7164fa1</id>
<content type='text'>
These days, if you swapoff when there isn't enough memory, OOM killer gives
"BUG: scheduling while atomic" and the machine hangs: badness() needs to do
its PF_SWAPOFF return after the task_unlock (tasklist_lock is also held
here, so p isn't going to be freed: PF_SWAPOFF might get turned off at any
moment, but that doesn't really matter).
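
The reordering can be sketched as follows (a userspace model; the flag value, toy lock, and scoring are illustrative, not the real badness() code):

```c
/* Take the PF_SWAPOFF early return only after the task lock is dropped,
 * so the caller can never end up scheduling while a lock (and hence an
 * atomic section) is still held. */
#define PF_SWAPOFF_TOY 0x1	/* illustrative flag value */

static int task_locked;		/* toy stand-in for task_lock(p) */

static unsigned long badness(unsigned int flags)
{
	unsigned long points;

	task_locked = 1;		/* task_lock(p) */
	points = 42;			/* stand-in for the real scoring */
	task_locked = 0;		/* task_unlock(p) */

	if (flags & PF_SWAPOFF_TOY)	/* now safe: lock already dropped */
		return ~0UL;		/* always pick the swapoff task */
	return points;
}
```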

Signed-off-by: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] VM: Fix nasty and subtle race in shared mmap'ed page writeback</title>
<updated>2007-01-10T19:05:22Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@macmini.osdl.org</email>
</author>
<published>2006-12-29T18:00:58Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=54e25b0460e6b1100e7ef9c0ac801bdce83921c0'/>
<id>urn:sha1:54e25b0460e6b1100e7ef9c0ac801bdce83921c0</id>
<content type='text'>
The VM layer (on the face of it, fairly reasonably) expected that when
it does a -&gt;writepage() call to the filesystem, it would write out the
full page at that point in time.  Especially since it had earlier marked
the whole page dirty with "set_page_dirty()".

But that isn't actually the case: -&gt;writepage() does not actually write
a page; it writes the parts of the page that have been explicitly marked
dirty before, *and* that have not been written out for other reasons
since the last time we told it they were dirty.

That last caveat is the important one.

Which _most_ of the time ends up being the whole page (since we had
called "set_page_dirty()" on the page earlier), but if the filesystem
had done any dirty flushing of its own (for example, to honor some
internal write ordering guarantees), it might end up doing only a
partial page IO (or none at all) when -&gt;writepage() is actually called.

That is the correct thing in general (since we actually often _want_
only the known-dirty parts of the page to be written out), but the
shared dirty page handling had implicitly forgotten about these details,
and had a number of cases where it was doing just the "-&gt;writepage()"
part, without telling the low-level filesystem that the whole page might
have been re-dirtied as part of being mapped writably into user space.
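
The contract can be modelled in a few lines (a toy sketch with a per-byte dirty bitmap; page size, struct, and names are all illustrative):

```c
/* Only bytes previously marked dirty are written back, so a page that
 * was modified through a writable mapping must be marked dirty again
 * before writeback, or the change is silently lost. */
#define PAGE_SZ 8
struct page {
	char data[PAGE_SZ];	/* in-memory contents */
	char disk[PAGE_SZ];	/* what has reached "disk" */
	int dirty[PAGE_SZ];	/* per-byte dirty marks */
};

static void mark_dirty(struct page *p, int off)
{
	p->dirty[off] = 1;
}

static void writepage(struct page *p)	/* writes only marked-dirty bytes */
{
	int i;

	for (i = 0; i < PAGE_SZ; i++)
		if (p->dirty[i]) {
			p->disk[i] = p->data[i];
			p->dirty[i] = 0;
		}
}
```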

Since most of the time the FS did actually write out the full page, we
didn't notice this for a long time, and it needed some really odd
patterns to trigger.  But it caused occasional corruption with rtorrent
and with the Debian "apt" database, because both use shared mmaps to
update the end result.

This fixes it. Finally. After way too much hair-pulling.

Acked-by: Nick Piggin &lt;nickpiggin@yahoo.com.au&gt;
Acked-by: Martin J. Bligh &lt;mbligh@google.com&gt;
Acked-by: Martin Michlmayr &lt;tbm@cyrius.com&gt;
Acked-by: Martin Johansson &lt;martin@fatbob.nu&gt;
Acked-by: Ingo Molnar &lt;mingo@elte.hu&gt;
Acked-by: Andrei Popa &lt;andrei.popa@i-neo.ro&gt;
Cc: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: Andrew Morton &lt;akpm@osdl.org&gt;
Cc: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Segher Boessenkool &lt;segher@kernel.crashing.org&gt;
Cc: David Miller &lt;davem@davemloft.net&gt;
Cc: Arjan van de Ven &lt;arjan@infradead.org&gt;
Cc: Gordon Farquharson &lt;gordonfarquharson@gmail.com&gt;
Cc: Guillaume Chazarain &lt;guichaz@yahoo.fr&gt;
Cc: Theodore Tso &lt;tytso@mit.edu&gt;
Cc: Kenneth Chen &lt;kenneth.w.chen@intel.com&gt;
Cc: Tobias Diedrich &lt;ranma@tdiedrich.de&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
[chrisw: backport to 2.6.19.1]
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Buglet in vmscan.c</title>
<updated>2007-01-10T19:05:20Z</updated>
<author>
<name>Shantanu Goel</name>
<email>sgoel01@yahoo.com</email>
</author>
<published>2006-12-30T00:48:59Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=cb5dad8b96734e7f92160e389794ca8d9b58da2d'/>
<id>urn:sha1:cb5dad8b96734e7f92160e389794ca8d9b58da2d</id>
<content type='text'>
Fix a rather obvious buglet.  Noticed while instrumenting the VM using
/proc/vmstat.

Cc: Christoph Lameter &lt;clameter@engr.sgi.com&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix for shmem_truncate_range() BUG_ON()</title>
<updated>2007-01-10T19:05:19Z</updated>
<author>
<name>Badari Pulavarty</name>
<email>pbadari@us.ibm.com</email>
</author>
<published>2006-12-22T09:06:23Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=cb57fcaf9b8c1946aa1e436821a7a4901dc926d0'/>
<id>urn:sha1:cb57fcaf9b8c1946aa1e436821a7a4901dc926d0</id>
<content type='text'>
Ran into BUG() while doing madvise(REMOVE) testing.  If we are punching a
hole into shared memory segment using madvise(REMOVE) and the entire hole
is below the indirect blocks, we hit the following assert.

	        BUG_ON(limit &lt;= SHMEM_NR_DIRECT);
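
The case can be handled by bailing out instead of asserting, along these lines (a sketch; the SHMEM_NR_DIRECT value and function shape are illustrative, not the actual shmem code):

```c
/* When the punched hole lies entirely within the direct blocks, the
 * indirect-block walk has nothing to do, so return instead of BUG(). */
#define SHMEM_NR_DIRECT 16	/* illustrative value */

static int truncate_indirect(unsigned long limit)
{
	if (limit <= SHMEM_NR_DIRECT)
		return 0;	/* hole entirely below indirect blocks */
	return 1;		/* continue with the indirect-block walk */
}
```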

Signed-off-by: Badari Pulavarty &lt;pbadari@us.ibm.com&gt;
Cc: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] read_zero_pagealigned() locking fix</title>
<updated>2007-01-10T19:05:17Z</updated>
<author>
<name>Hugh Dickins</name>
<email>hugh@veritas.com</email>
</author>
<published>2006-12-10T10:18:43Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=18576724d36745d801988ed56de1062182a0fe02'/>
<id>urn:sha1:18576724d36745d801988ed56de1062182a0fe02</id>
<content type='text'>
Ramiro Voicu hits the BUG_ON(!pte_none(*pte)) in zeromap_pte_range: kernel
bugzilla 7645.  Right: read_zero_pagealigned uses down_read of mmap_sem,
but another thread's racing read of /dev/zero, or a normal fault, can
easily set that pte again, in between zap_page_range and zeromap_page_range
getting there.  It's been wrong ever since 2.4.3.

The simple fix is to use down_write instead, but that would serialize reads
of /dev/zero more than at present: perhaps some app would be badly
affected.  So instead let zeromap_page_range return the error instead of
BUG_ON, and read_zero_pagealigned break to the slower clear_user loop in
that case - there's no need to optimize for it.

Use -EEXIST for when a pte is found: BUG_ON in mmap_zero (the other user of
zeromap_page_range), though it really isn't interesting there.  And since
mmap_zero wants -EAGAIN for out-of-memory, the zeromaps better return that
than -ENOMEM.
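
The error plumbing can be sketched as follows (a toy model; the errno values, flags, and function shapes are illustrative stand-ins for the real page-table walk):

```c
#define MY_EEXIST 17	/* illustrative errno values */
#define MY_EAGAIN 11

/* The fast zeromap path reports -EEXIST when it finds a pte already
 * set, and the caller falls back to a slow clear loop instead of
 * hitting a BUG_ON; out-of-memory maps to -EAGAIN for mmap_zero. */
static int zeromap_range(int pte_found, int out_of_memory)
{
	if (out_of_memory)
		return -MY_EAGAIN;	/* what mmap_zero wants to see */
	if (pte_found)
		return -MY_EEXIST;	/* was: BUG_ON(!pte_none(*pte)) */
	return 0;
}

static int read_zero_toy(int racing_fault)
{
	if (zeromap_range(racing_fault, 0) == -MY_EEXIST)
		return 1;	/* break to the slower clear_user()-style loop */
	return 0;		/* fast path zeromapped the range */
}
```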

Signed-off-by: Hugh Dickins &lt;hugh@veritas.com&gt;
Cc: Ramiro Voicu &lt;Ramiro.Voicu@cern.ch&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] x86_64: fix bad page state in process 'swapper'</title>
<updated>2006-11-23T17:30:38Z</updated>
<author>
<name>Mel Gorman</name>
<email>mel@skynet.ie</email>
</author>
<published>2006-11-23T12:01:41Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1abbfb412b1610ec3a7ec0164108cee01191d9f5'/>
<id>urn:sha1:1abbfb412b1610ec3a7ec0164108cee01191d9f5</id>
<content type='text'>
find_min_pfn_for_node() and find_min_pfn_with_active_regions() both
depend on a sorted early_node_map[].  However, sort_node_map() is being
called after find_min_pfn_with_active_regions() in
free_area_init_nodes().

In most cases this is OK, but on at least one x86_64 machine, the SRAT
table caused the E820 ranges to be registered out of order.  This gave
the wrong values for the min PFN range, resulting in some pages not
being initialised.

This patch sorts the early_node_map in find_min_pfn_for_node().  It has
been boot tested on x86, x86_64, ppc64 and ia64.
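
The ordering fix can be sketched like this (a userspace model; the struct and function shapes are illustrative, mirroring only the idea of sorting before taking the first entry):

```c
#include <stdlib.h>

/* Sort the node map by start PFN before assuming its first entry holds
 * the minimum, as the patch does by sorting in find_min_pfn_for_node(). */
struct node_range { unsigned long start_pfn, end_pfn; };

static int cmp_start_pfn(const void *a, const void *b)
{
	const struct node_range *x = a, *y = b;

	if (x->start_pfn < y->start_pfn)
		return -1;
	return x->start_pfn > y->start_pfn;
}

static unsigned long find_min_pfn(struct node_range *map, size_t n)
{
	qsort(map, n, sizeof(*map), cmp_start_pfn);	/* sort first */
	return map[0].start_pfn;	/* now safe to take the first entry */
}
```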

Signed-off-by: Mel Gorman &lt;mel@csn.ul.ie&gt;
Acked-by: Andre Noll &lt;maan@systemlinux.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
</feed>
