<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/arch/arm/include/asm/pgtable.h, branch linux-4.15.y</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.15.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.15.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2017-11-21T14:45:36Z</updated>
<entry>
<title>ARM: fix get_user_pages_fast</title>
<updated>2017-11-21T14:45:36Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2017-10-25T10:04:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1ee5e87f86deca84fdcb7c71bb8368cacc4c24e3'/>
<id>urn:sha1:1ee5e87f86deca84fdcb7c71bb8368cacc4c24e3</id>
<content type='text'>
Ensure that get_user_pages_fast() is not able to access memory which
has been mapped with PROT_NONE.
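
A minimal sketch of the kind of check involved (not the exact hunk from this
commit; pte_access_permitted and the L_PTE_* bit names come from the 32-bit
ARM page-table headers and should be treated as assumptions here): a
PROT_NONE pte is still present for the kernel's bookkeeping, but it carries
no user permissions, so fast GUP has to refuse it.

  /* sketch: only accept present, user-accessible ptes; additionally
   * require write permission when the caller wants a writable mapping */
  #define pte_access_permitted(pte, write)                \
          ((pte_val(pte) &amp; L_PTE_PRESENT) &amp;&amp;    \
           (pte_val(pte) &amp; L_PTE_USER) &amp;&amp;       \
           (!(write) || !(pte_val(pte) &amp; L_PTE_RDONLY)))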

Reported-by: Al Viro &lt;viro@ZenIV.linux.org.uk&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>arch, mm: convert all architectures to use 5level-fixup.h</title>
<updated>2017-03-09T19:48:47Z</updated>
<author>
<name>Kirill A. Shutemov</name>
<email>kirill.shutemov@linux.intel.com</email>
</author>
<published>2017-03-09T14:24:05Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=9849a5697d3defb2087cb6b9be5573a142697889'/>
<id>urn:sha1:9849a5697d3defb2087cb6b9be5573a142697889</id>
<content type='text'>
If an architecture uses 4level-fixup.h we don't need to do anything as
it includes 5level-fixup.h.

If an architecture uses pgtable-nop*d.h, define __ARCH_USE_5LEVEL_HACK
before including the header. This makes the asm-generic code use
5level-fixup.h.

If an architecture has 4-level paging or folds levels on its own,
include 5level-fixup.h directly.
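
For this header, which pulls in pgtable-nopud.h, the second case applies;
the pattern is roughly the following (a sketch, not a verbatim quote of the
hunk):

  #define __ARCH_USE_5LEVEL_HACK
  #include &lt;asm-generic/pgtable-nopud.h&gt;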

Signed-off-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Acked-by: Michal Hocko &lt;mhocko@suse.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm</title>
<updated>2016-08-02T20:11:27Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2016-08-02T20:11:27Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=221bb8a46e230b9824204ae86537183d9991ff2a'/>
<id>urn:sha1:221bb8a46e230b9824204ae86537183d9991ff2a</id>
<content type='text'>
Pull KVM updates from Paolo Bonzini:

 - ARM: GICv3 ITS emulation and various fixes.  Removal of the
   old VGIC implementation.

 - s390: support for trapping software breakpoints, nested
   virtualization (vSIE), the STHYI opcode, initial extensions
   for CPU model support.

 - MIPS: support for MIPS64 hosts (32-bit guests only) and lots
   of cleanups, preliminary to this and the upcoming support for
   hardware virtualization extensions.

 - x86: support for execute-only mappings in nested EPT; reduced
   vmexit latency for TSC deadline timer (by about 30%) on Intel
   hosts; support for more than 255 vCPUs.

 - PPC: bugfixes.

* tag 'for-linus' of git://git.kernel.org/pub/scm/virt/kvm/kvm: (302 commits)
  KVM: PPC: Introduce KVM_CAP_PPC_HTM
  MIPS: Select HAVE_KVM for MIPS64_R{2,6}
  MIPS: KVM: Reset CP0_PageMask during host TLB flush
  MIPS: KVM: Fix ptr-&gt;int cast via KVM_GUEST_KSEGX()
  MIPS: KVM: Sign extend MFC0/RDHWR results
  MIPS: KVM: Fix 64-bit big endian dynamic translation
  MIPS: KVM: Fail if ebase doesn't fit in CP0_EBase
  MIPS: KVM: Use 64-bit CP0_EBase when appropriate
  MIPS: KVM: Set CP0_Status.KX on MIPS64
  MIPS: KVM: Make entry code MIPS64 friendly
  MIPS: KVM: Use kmap instead of CKSEG0ADDR()
  MIPS: KVM: Use virt_to_phys() to get commpage PFN
  MIPS: Fix definition of KSEGX() for 64-bit
  KVM: VMX: Add VMCS to CPU's loaded VMCSs before VMPTRLD
  kvm: x86: nVMX: maintain internal copy of current VMCS
  KVM: PPC: Book3S HV: Save/restore TM state in H_CEDE
  KVM: PPC: Book3S HV: Pull out TM state save/restore into separate procedures
  KVM: arm64: vgic-its: Simplify MAPI error handling
  KVM: arm64: vgic-its: Make vgic_its_cmd_handle_mapi similar to other handlers
  KVM: arm64: vgic-its: Turn device_id validation into generic ID validation
  ...
</content>
</entry>
<entry>
<title>arm/arm64: KVM: Make default HYP mappings non-executable</title>
<updated>2016-06-29T12:01:34Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2016-06-13T14:00:49Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=0996353f8ec6c6dba4a1f916bf6d9ace6f7d2b49'/>
<id>urn:sha1:0996353f8ec6c6dba4a1f916bf6d9ace6f7d2b49</id>
<content type='text'>
Structures that are generally written to have no requirement to be
executable (quite the opposite). This includes the kvm and vcpu
structures, as well as the stacks.

Let's change the default to incorporate the XN flag.
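
On the 32-bit ARM side the resulting default looks roughly like this (a
sketch; treat the exact flag combination as an assumption):

  /* writable HYP mappings default to execute-never */
  #define PAGE_HYP        _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_XN)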

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>arm/arm64: KVM: Map the HYP text as read-only</title>
<updated>2016-06-29T12:01:34Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2016-06-13T14:00:48Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=5900270550cbb8a272bfc248b69531cd44dcf0d5'/>
<id>urn:sha1:5900270550cbb8a272bfc248b69531cd44dcf0d5</id>
<content type='text'>
There should be no reason for mapping the HYP text read/write.

As such, let's add a new set of flags (PAGE_HYP_EXEC) that allows
execution but marks the pages as read-only, and update the two call
sites that deal with mapping code.
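
Sketched for 32-bit ARM (the exact flag combination is an assumption):
executable because L_PTE_XN stays clear, read-only because L_PTE_RDONLY is
set.

  #define PAGE_HYP_EXEC   _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY)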

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>arm/arm64: KVM: Enforce HYP read-only mapping of the kernel's rodata section</title>
<updated>2016-06-29T11:59:14Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2016-06-13T14:00:47Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=74a6b8885f7026e33f8a4776b7ac17c76b8e5a52'/>
<id>urn:sha1:74a6b8885f7026e33f8a4776b7ac17c76b8e5a52</id>
<content type='text'>
In order to be able to use C code in HYP, we're now mapping the kernel's
rodata in HYP. It works absolutely fine, except that we're mapping it RWX,
which is not what it should be.

Add a new PAGE_HYP_RO protection, and pass it as the protection flags
when mapping the rodata section.
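
Sketched for 32-bit ARM (both the flag combination and the call-site shape
are assumptions for illustration): the new protection is read-only and
execute-never, and the rodata mapping call passes it explicitly.

  #define PAGE_HYP_RO  _MOD_PROT(pgprot_kernel, L_PTE_HYP | L_PTE_RDONLY | L_PTE_XN)

  /* at the call site, roughly: */
  err = create_hyp_mappings(kvm_ksym_ref(__start_rodata),
                            kvm_ksym_ref(__end_rodata), PAGE_HYP_RO);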

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>ARM: 8578/1: mm: ensure pmd_present only checks the valid bit</title>
<updated>2016-06-09T16:51:47Z</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2016-06-07T16:57:54Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=624531886987f0f1b5d01fb598034d039198e090'/>
<id>urn:sha1:624531886987f0f1b5d01fb598034d039198e090</id>
<content type='text'>
In a subsequent patch, pmd_mknotpresent will clear the valid bit of the
pmd entry, resulting in a not-present entry from the hardware's
perspective. Unfortunately, pmd_present simply checks for a non-zero pmd
value and will therefore continue to return true even after a
pmd_mknotpresent operation. Since pmd_mknotpresent is only used for
managing huge entries, this is only an issue for the 3-level case.

This patch fixes the 3-level pmd_present implementation to take into
account the valid bit. For bisectability, the change is made before the
fix to pmd_mknotpresent.

[catalin.marinas@arm.com: comment update regarding pmd_mknotpresent patch]
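
A sketch of the 3-level fix (the bit name L_PMD_SECT_VALID is an assumption
here): presence is decided by the valid bit alone, so an entry cleared by
pmd_mknotpresent really reads back as not present.

  #define pmd_present(pmd)        (pmd_val(pmd) &amp; L_PMD_SECT_VALID)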

Fixes: 8d9625070073 ("ARM: mm: Transparent huge page support for LPAE systems.")
Cc: &lt;stable@vger.kernel.org&gt; # 3.11+
Cc: Russell King &lt;linux@armlinux.org.uk&gt;
Cc: Steve Capper &lt;Steve.Capper@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: 8432/1: move VMALLOC_END from 0xff000000 to 0xff800000</title>
<updated>2015-09-22T07:13:57Z</updated>
<author>
<name>Nicolas Pitre</name>
<email>nicolas.pitre@linaro.org</email>
</author>
<published>2015-09-13T02:25:26Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=6ff0966052c46efb53980b8a1add2e7b49c9f560'/>
<id>urn:sha1:6ff0966052c46efb53980b8a1add2e7b49c9f560</id>
<content type='text'>
There is a 12MB unused region in our memory map between the vmalloc and
fixmap areas. This became unused with commit e9da6e9905e6, confirmed
with commit 64d3b6a3f480.

We also have an 8MB guard area before the vmalloc area. With the default
240MB vmalloc area size and the current VMALLOC_END definition, the end
of low memory ends up at 0xef800000, which is unfortunate for 768MB
machines where 8MB of RAM is lost to highmem.

Let's move VMALLOC_END to 0xff800000 so the guard area won't chop the
top of the 768MB low memory area while keeping the default vmalloc area
size unchanged and still preserving a gap between the vmalloc and fixmap
areas.
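
The resulting arithmetic, with the new definition sketched below (addresses
as given above):

  #define VMALLOC_END             0xff800000UL

  /* 0xff800000 - 240MB default vmalloc size = 0xf0800000
   * 0xf0800000 - 8MB guard area             = 0xf0000000 = top of lowmem
   * 0xc0000000 (PAGE_OFFSET) + 768MB        = 0xf0000000, so nothing is
   * pushed into highmem on a 768MB machine. */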

Signed-off-by: Nicolas Pitre &lt;nico@linaro.org&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>arm: drop L_PTE_FILE and pte_file()-related helpers</title>
<updated>2015-02-10T22:30:31Z</updated>
<author>
<name>Kirill A. Shutemov</name>
<email>kirill.shutemov@linux.intel.com</email>
</author>
<published>2015-02-10T22:10:17Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=b007ea798f5c568d3f464d37288220ef570f062c'/>
<id>urn:sha1:b007ea798f5c568d3f464d37288220ef570f062c</id>
<content type='text'>
We've replaced the remap_file_pages(2) implementation with emulation.
Nobody creates non-linear mappings anymore.

This patch also adjusts __SWP_TYPE_SHIFT, effectively increasing the
maximum possible swap file size to 128G.
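
Back-of-the-envelope arithmetic for the 128G figure (the bit counts are
illustrative, not the commit's exact constants): dropping the file-pte
encoding lets the swap offset field grow by one bit.

  /* 2^25 swap offsets * 4KB page size = 2^37 bytes = 128GB per swap device */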

Signed-off-by: Kirill A. Shutemov &lt;kirill.shutemov@linux.intel.com&gt;
Cc: Russell King &lt;linux@arm.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>ARM: 8239/1: Introduce {set,clear}_pte_bit</title>
<updated>2014-12-03T16:00:06Z</updated>
<author>
<name>Jungseung Lee</name>
<email>js07.lee@gmail.com</email>
</author>
<published>2014-11-29T02:03:51Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1f92f77ab68a012638afa9e001b7a1e1e5197930'/>
<id>urn:sha1:1f92f77ab68a012638afa9e001b7a1e1e5197930</id>
<content type='text'>
Introduce helper functions for the pte_mk* functions; they can be used
to change individual bits in ptes.
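
A sketch of the two helpers, following the usual kernel pattern (an
illustration rather than the exact hunk), plus an example of how a pte_mk*
helper can then be expressed:

  static inline pte_t set_pte_bit(pte_t pte, pgprot_t prot)
  {
          pte_val(pte) |= pgprot_val(prot);       /* set the requested bits */
          return pte;
  }

  static inline pte_t clear_pte_bit(pte_t pte, pgprot_t prot)
  {
          pte_val(pte) &amp;= ~pgprot_val(prot);  /* clear the requested bits */
          return pte;
  }

  /* e.g. pte_mkwrite() can then return
   * clear_pte_bit(pte, __pgprot(L_PTE_RDONLY)); */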

Signed-off-by: Jungseung Lee &lt;js07.lee@gmail.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
</content>
</entry>
</feed>
