<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/include/asm-sparc64/tsb.h, branch linux-2.6.21.y</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.21.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-2.6.21.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2007-06-11T18:36:53Z</updated>
<entry>
<title>[PATCH] SPARC64: Fix two bugs wrt. kernel 4MB TSB.</title>
<updated>2007-06-11T18:36:53Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2007-06-07T05:52:35Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=b426a33edd9a3f2254cd89faefbadab9671c742a'/>
<id>urn:sha1:b426a33edd9a3f2254cd89faefbadab9671c742a</id>
<content type='text'>
1) The TSB lookup was not using the correct hash mask.

2) It was not aligned on a boundary equal to its size,
   which is required by the sun4v Hypervisor.

wasn't having its return value checked, and that bug will be fixed up
as well in a subsequent changeset.
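The two invariants the fix restores can be sketched as follows (a toy model in Python; the entry size and all names are illustrative assumptions, not the kernel's actual code):

```python
# Hypothetical sketch of the two rules this patch enforces:
# 1) the TSB index must be derived from this TSB's own entry count, and
# 2) on sun4v the TSB base must be aligned to the TSB's size in bytes.

TSB_ENTRY_SIZE = 16                       # assumed: one 8-byte TAG plus one 8-byte TTE

def tsb_index(vaddr, page_shift, nentries):
    """Index into a TSB: drop the page offset, then wrap by entry count."""
    # nentries is a power of two, so modulo is equivalent to masking
    # with (nentries - 1), the "correct hash mask" of bug 1.
    return (vaddr >> page_shift) % nentries

def tsb_base_is_valid(base, nentries):
    """Bug 2's rule: the base must be aligned to the TSB size in bytes."""
    size_bytes = nentries * TSB_ENTRY_SIZE
    return base % size_bytes == 0
```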

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Chris Wright &lt;chrisw@sous-sol.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Get DEBUG_PAGEALLOC working again.</title>
<updated>2007-03-17T00:20:28Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2007-03-17T00:20:28Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d1acb4210aaa9bdc413d276dbc96d0a23ada97ba'/>
<id>urn:sha1:d1acb4210aaa9bdc413d276dbc96d0a23ada97ba</id>
<content type='text'>
We have to make sure to use base-pagesize TLB entries even during the
early transition period where we need TLB miss handling but do not yet
have the kernel page tables set up for the linear region.

It is therefore also necessary not to use the 4MB TSB for these
translations, and to use the normal kernel TSB instead.  This also
lets us get rid of the 4MB TSB on debug builds, which shrinks the
kernel a little.

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Create a separate kernel TSB for 4MB/256MB mappings.</title>
<updated>2006-03-20T09:13:56Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2006-02-22T06:31:11Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d7744a09504d5ae84edc8289a02254e1f2102410'/>
<id>urn:sha1:d7744a09504d5ae84edc8289a02254e1f2102410</id>
<content type='text'>
It can map all of the linear kernel mappings with zero TSB hash
conflicts on systems with 16GB or less RAM.  In such cases, on
SUN4V, once we load up this TSB the first time with all the
mappings, we never take a linear kernel mapping TLB miss again;
the hypervisor handles them all.
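The arithmetic behind the zero-conflict claim can be checked with a short sketch (the entry count is an assumption chosen to make the numbers work, not taken from the commit):

```python
# Back-of-the-envelope check: a 4MB-page TSB with 4096 entries covers
# exactly 16GB of linear mappings one-to-one, so no two mappings can
# ever hash to the same slot.

PAGE_4MB_SHIFT = 22                       # 4MB pages: 2**22 bytes
NENTRIES = 4096                           # assumed kernel 4MB-TSB entry count

def slot(vaddr):
    return (vaddr >> PAGE_4MB_SHIFT) % NENTRIES

covered = NENTRIES * 2 ** PAGE_4MB_SHIFT  # bytes mapped conflict-free

# Every 4MB page in the first 16GB lands in a distinct slot.
slots = set(slot(i * 2 ** PAGE_4MB_SHIFT) for i in range(NENTRIES))
```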

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: More TLB/TSB handling fixes.</title>
<updated>2006-03-20T09:13:34Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2006-02-18T02:01:02Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8b234274418d6d79527c4ac3a72da446ca4cb35f'/>
<id>urn:sha1:8b234274418d6d79527c4ac3a72da446ca4cb35f</id>
<content type='text'>
The SUN4V convention with non-shared TSBs is that the context
bit of the TAG is clear.  So we have to choose an "invalid"
bit and initialize new TSBs appropriately.  Otherwise a zero
TAG looks "valid".
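A minimal model of the invalid-bit scheme, with an assumed bit position (the real tag layout is not shown here):

```python
# Sketch: with the context field forced to zero on sun4v, an all-zero
# TAG would match a lookup of virtual address 0.  Fresh TSB entries are
# therefore stamped with a reserved "invalid" bit that no real lookup
# tag can ever carry.

TAG_INVALID_BIT = 46                      # assumed reserved bit position
TAG_INVALID = 2 ** TAG_INVALID_BIT

def make_empty_tsb(nentries):
    """New TSBs start with every tag explicitly invalid, not zero."""
    return [TAG_INVALID] * nentries

def tag_matches(entry_tag, lookup_tag):
    # Lookup tags never have the invalid bit set, so an invalid entry
    # can never produce a hit, even for virtual address 0.
    return entry_tag == lookup_tag
```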

Make sure, for the window fixup cases, that we use the right
global registers, that we don't trample on the live globals used
by etrap/rtrap handling (%g2 and %g6), and that we put the missing
virtual address properly in %g5.

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Initial sun4v TLB miss handling infrastructure.</title>
<updated>2006-03-20T09:11:52Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2006-02-07T07:44:37Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=d257d5da39a78b32721ca84b2ba7f461f2f7ed7f'/>
<id>urn:sha1:d257d5da39a78b32721ca84b2ba7f461f2f7ed7f</id>
<content type='text'>
Things are a little tricky because, unlike sun4u, we have
to:

1) do a hypervisor trap to do the TLB load.
2) do the TSB lookup calculations by hand.
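The two steps above can be sketched as follows (the callback and all names are invented stand-ins, not the hypervisor API):

```python
# Rough contrast with sun4u: the sun4v miss handler cannot read a
# hardware-computed TSB pointer register and cannot store straight into
# the TLB, so the lookup arithmetic happens in software and the load
# goes through a hypervisor trap.

def sun4v_miss(hv_mmu_map, tsb, vaddr, page_shift):
    # 2) TSB lookup calculations by hand: derive the bucket index from
    #    the faulting address (modulo equals masking here, since the
    #    entry count is a power of two).
    idx = (vaddr >> page_shift) % len(tsb)
    # 1) Hypervisor trap to do the TLB load, modeled as a callback.
    hv_mmu_map(vaddr, tsb[idx])
    return idx
```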

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Access TSB with physical addresses when possible.</title>
<updated>2006-03-20T09:11:32Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2006-02-01T23:55:21Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=517af33237ecfc3c8a93b335365fa61e741ceca4'/>
<id>urn:sha1:517af33237ecfc3c8a93b335365fa61e741ceca4</id>
<content type='text'>
This way we don't need to lock the TSB into the TLB.
The trick is that every TSB load/store is registered in
a special instruction patch section.  The default instructions
use virtual addresses; the patched variants use physical-address
loads and stores.

We can't do this on all chips because only cheetah+ and later
have the physical variant of the atomic quad load.
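A toy model of the patch-section mechanism (the kernel does this with linker sections and boot-time instruction rewriting; everything here is a simplified stand-in):

```python
# Each TSB access is emitted with its virtual-addressing encoding and
# recorded alongside a physical-addressing alternative.  At boot, if
# the CPU (cheetah+ or later) has the physical atomic quad load, the
# recorded sites are rewritten in place.

patch_sites = []                          # (offset, virt_insn, phys_insn)

def emit_tsb_load(code, virt_insn, phys_insn):
    patch_sites.append((len(code), virt_insn, phys_insn))
    code.append(virt_insn)                # default: virtual addressing

def apply_phys_patches(code, cpu_has_phys_quad_load):
    if not cpu_has_phys_quad_load:
        return                            # older chips keep the default
    for offset, _virt, phys in patch_sites:
        code[offset] = phys               # rewrite in place at boot
```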

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Kill out-of-date commentary in asm-sparc64/tsb.h</title>
<updated>2006-03-20T09:11:31Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@sunset.davemloft.net</email>
</author>
<published>2006-02-01T07:13:29Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=b0fd4e49aea8a460afab7bc67cd618e2d19291d4'/>
<id>urn:sha1:b0fd4e49aea8a460afab7bc67cd618e2d19291d4</id>
<content type='text'>
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Increase swapper_tsb size to 32K.</title>
<updated>2006-03-20T09:11:26Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2006-02-01T02:33:49Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=2f7ee7c63f08b7f883b710a29d91c1891b81b8e1'/>
<id>urn:sha1:2f7ee7c63f08b7f883b710a29d91c1891b81b8e1</id>
<content type='text'>
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Fix incorrect TSB lock bit handling.</title>
<updated>2006-03-20T09:11:21Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2006-02-01T02:32:44Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=4753eb2ac7022b999e5e484f1a5dc001dba22bd3'/>
<id>urn:sha1:4753eb2ac7022b999e5e484f1a5dc001dba22bd3</id>
<content type='text'>
The TSB_LOCK_BIT define is actually a special
value shifted down by 32 bits for the assembler
code macros.

In C code, this isn't what we want.
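The mismatch can be illustrated with an assumed bit position (the actual tag layout is not given in this message):

```python
# The lock bit lives in the high 32 bits of the 64-bit tag.  The
# assembler macros operate on the upper word only, so their constant is
# pre-shifted down by 32.  C code testing the full 64-bit tag needs the
# unshifted value; using the pre-shifted one tests the wrong bit.

LOCK_BIT = 47                             # assumed position in the tag
TSB_TAG_LOCK_HIGH = 2 ** (LOCK_BIT - 32)  # what the asm macros want
TSB_TAG_LOCK = 2 ** LOCK_BIT              # what C code must test against

def tag_is_locked(tag64):
    # Testing with TSB_TAG_LOCK_HIGH here would check bit 15 by mistake.
    return (tag64 // TSB_TAG_LOCK) % 2 == 1
```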

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[SPARC64]: Add infrastructure for dynamic TSB sizing.</title>
<updated>2006-03-20T09:11:17Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2006-02-01T02:31:20Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=98c5584cfc47932c4f3ccf5eee2e0bae1447b85e'/>
<id>urn:sha1:98c5584cfc47932c4f3ccf5eee2e0bae1447b85e</id>
<content type='text'>
This also cleans up tsb_context_switch().  The assembler
routine is now __tsb_context_switch() and the former is
an inline function that picks the relevant bits out of the
mm_struct and passes them into the assembler code as arguments.

setup_tsb_parms() computes the locked TLB entry to map the
TSB.  Later when we support using the physical address quad
load instructions of Cheetah+ and later, we'll simply use
the physical address for the TSB register value and set
the map virtual and PTE both to zero.
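The split described above has roughly this shape (a Python sketch with invented field names; the real code is a C inline plus an assembler routine):

```python
# The thin wrapper unpacks the mm-level bookkeeping and hands plain
# scalars to the low-level routine, so the assembler side never has to
# dereference the mm_struct itself.

class MMContext:
    def __init__(self, tsb_reg_val, tsb_map_vaddr, tsb_map_pte):
        self.tsb_reg_val = tsb_reg_val    # value for the TSB register
        self.tsb_map_vaddr = tsb_map_vaddr
        self.tsb_map_pte = tsb_map_pte    # locked TLB entry mapping the TSB

def low_level_tsb_context_switch(tsb_reg_val, map_vaddr, map_pte, log):
    # Stand-in for the assembler routine: just record its arguments.
    log.append((tsb_reg_val, map_vaddr, map_pte))

def tsb_context_switch(mm_ctx, log):
    low_level_tsb_context_switch(mm_ctx.tsb_reg_val,
                                 mm_ctx.tsb_map_vaddr,
                                 mm_ctx.tsb_map_pte, log)

# With physical-address TSB access (cheetah+ and later), the map vaddr
# and PTE are simply zero and the register value is the physical base.
```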

Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
