<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/arch/um/include/asm/tlbflush.h, branch linux-rolling-stable</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2024-10-23T07:52:49Z</updated>
<entry>
<title>um: Abandon the _PAGE_NEWPROT bit</title>
<updated>2024-10-23T07:52:49Z</updated>
<author>
<name>Tiwei Bie</name>
<email>tiwei.btw@antgroup.com</email>
</author>
<published>2024-10-11T10:23:53Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=2717c6b649e1840328c2758a478bf4034a22ac3e'/>
<id>urn:sha1:2717c6b649e1840328c2758a478bf4034a22ac3e</id>
<content type='text'>
When a PTE is updated in the page table, the _PAGE_NEWPAGE bit is
always set, and the corresponding page is always mapped or unmapped
depending on whether the PTE is present. The check on the
_PAGE_NEWPROT bit is therefore never reached; abandoning the bit lets
us simplify the code and remove the unreachable paths.
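
As a rough illustration (simplified, with hypothetical map_page(),
unmap_page() and protect_page() helpers; not the actual arch/um code):
because set_pte() always marks the entry with _PAGE_NEWPAGE, the
update walk below always takes the map/unmap branch, and the
protection-only branch can never fire.

static void update_pte_sketch(pte_t *pte, unsigned long addr)
{
	/*
	 * set_pte() always sets _PAGE_NEWPAGE, so this branch is taken
	 * for every updated entry: map the page if the PTE is present,
	 * unmap it otherwise.
	 */
	if (pte_newpage(*pte)) {
		if (pte_present(*pte))
			map_page(addr, pte_val(*pte));	/* hypothetical */
		else
			unmap_page(addr);		/* hypothetical */
	} else if (pte_newprot(*pte)) {
		/* Unreachable: _PAGE_NEWPAGE was already set above. */
		protect_page(addr, pte_val(*pte));	/* hypothetical */
	}
}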

Reviewed-by: Benjamin Berg &lt;benjamin.berg@intel.com&gt;
Signed-off-by: Tiwei Bie &lt;tiwei.btw@antgroup.com&gt;
Link: https://patch.msgid.link/20241011102354.1682626-2-tiwei.btw@antgroup.com
Signed-off-by: Johannes Berg &lt;johannes.berg@intel.com&gt;
</content>
</entry>
<entry>
<title>um: refactor TLB update handling</title>
<updated>2024-07-03T15:09:50Z</updated>
<author>
<name>Benjamin Berg</name>
<email>benjamin.berg@intel.com</email>
</author>
<published>2024-07-03T13:45:36Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bcf3d957c63d8b6d718b862fea18c5f14ce803e2'/>
<id>urn:sha1:bcf3d957c63d8b6d718b862fea18c5f14ce803e2</id>
<content type='text'>
Conceptually, we want the memory mappings to always be up to date and
to represent whatever is in the TLB. To ensure that, we need to sync
them over in the userspace case, and for the kernel we need to process
the mappings.

The kernel will call flush_tlb_* if page table entries that were valid
before become invalid. Unfortunately, this is not the case when entries
are added.

As such, change both flush_tlb_* and set_ptes to track the memory range
that has to be synchronized. For the kernel, we need to execute a
flush_tlb_kern_* immediately, but in the set_ptes case we can wait for
the first page fault. For userspace, in contrast, we only record that a
range of memory needs to be synced and perform the sync whenever we
switch to that process.
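
As a rough sketch of the bookkeeping (simplified struct and helper
names; not the actual arch/um types): both set_ptes and the
flush_tlb_* paths fold the affected span into a single pending range
kept per mm. The kernel path then syncs that range immediately, while
the userspace path leaves it pending until we switch to the process.

/* Illustrative only: one pending [start, end) range per mm. */
struct sync_range_sketch {
	unsigned long start;
	unsigned long end;	/* start == end means nothing pending */
};

/* In this model, called from both set_ptes and flush_tlb_*. */
static void track_range(struct sync_range_sketch *r,
			unsigned long start, unsigned long end)
{
	if (r->start == r->end) {
		/* Nothing pending yet: record the new span as-is. */
		r->start = start;
		r->end = end;
	} else {
		/* Widen the pending span to cover the new one, too. */
		r->start = min(r->start, start);
		r->end = max(r->end, end);
	}
}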

Signed-off-by: Benjamin Berg &lt;benjamin.berg@intel.com&gt;
Link: https://patch.msgid.link/20240703134536.1161108-13-benjamin@sipsolutions.net
Signed-off-by: Johannes Berg &lt;johannes.berg@intel.com&gt;
</content>
</entry>
<entry>
<title>um: Add SPDX headers for files in arch/um/include</title>
<updated>2019-09-15T19:37:17Z</updated>
<author>
<name>Alex Dewar</name>
<email>alex.dewar@gmx.co.uk</email>
</author>
<published>2019-08-25T09:49:19Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=f2f4bf5aabadd6575f5daabcb0a2f506e3f5f68c'/>
<id>urn:sha1:f2f4bf5aabadd6575f5daabcb0a2f506e3f5f68c</id>
<content type='text'>
Convert files to use SPDX headers. All files are licensed under GPLv2.
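
For reference, the header added is the standard one-line kernel form;
for a GPLv2-only C header file it reads:

/* SPDX-License-Identifier: GPL-2.0 */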

Signed-off-by: Alex Dewar &lt;alex.dewar@gmx.co.uk&gt;
Signed-off-by: Richard Weinberger &lt;richard@nod.at&gt;
</content>
</entry>
<entry>
<title>x86, um: initial part of asm-um move</title>
<updated>2008-10-23T05:55:19Z</updated>
<author>
<name>Al Viro</name>
<email>viro@zeniv.linux.org.uk</email>
</author>
<published>2008-08-17T23:13:17Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8ede0bdb63305d3353efd97e9af6210afb05734e'/>
<id>urn:sha1:8ede0bdb63305d3353efd97e9af6210afb05734e</id>
<content type='text'>
Signed-off-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@zytor.com&gt;
</content>
</entry>
</feed>
