<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/arch/arm/include/asm, branch linux-4.15.y</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.15.y</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-4.15.y'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2018-02-16T19:06:43Z</updated>
<entry>
<title>arm64: KVM: Report SMCCC_ARCH_WORKAROUND_1 BP hardening support</title>
<updated>2018-02-16T19:06:43Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-02-06T17:56:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=7a1b576877dd7bb0f7bc647bb7770d7cb7b97151'/>
<id>urn:sha1:7a1b576877dd7bb0f7bc647bb7770d7cb7b97151</id>
<content type='text'>
Commit 6167ec5c9145 upstream.

A new feature of SMCCC 1.1 is that it offers firmware-based CPU
workarounds. In particular, SMCCC_ARCH_WORKAROUND_1 provides
BP hardening for CVE-2017-5715.

If the host has some mitigation for this issue, report that
we deal with it using SMCCC_ARCH_WORKAROUND_1, as we apply the
host workaround on every guest exit.
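
For reference, a minimal sketch of the discovery side, assuming the
smccc_get_arg1() and kvm_arm_harden_branch_predictor() helpers from
this series (not the literal upstream hunk):

    /* Answer a guest's SMCCC_ARCH_FEATURES query for the workaround. */
    static long kvm_smccc_arch_features(struct kvm_vcpu *vcpu)
    {
        u32 feature = smccc_get_arg1(vcpu);
        long val = -1;                  /* NOT_SUPPORTED */

        switch (feature) {
        case ARM_SMCCC_ARCH_WORKAROUND_1:
            /* The host already runs its BP-hardening sequence on
             * every guest exit, so report the workaround as
             * implemented. */
            if (kvm_arm_harden_branch_predictor())
                val = 0;                /* SUCCESS */
            break;
        }
        return val;
    }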

Tested-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;

Conflicts:
	arch/arm/include/asm/kvm_host.h
	arch/arm64/include/asm/kvm_host.h
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>arm/arm64: KVM: Consolidate the PSCI include files</title>
<updated>2018-02-16T19:06:42Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-02-06T17:56:08Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bf9708a5df1e945d19ab431e785d810d8de72e6d'/>
<id>urn:sha1:bf9708a5df1e945d19ab431e785d810d8de72e6d</id>
<content type='text'>
Commit 1a2fb94e6a77 upstream.

As we're about to update the PSCI support, and because I'm lazy,
let's move the PSCI include file to include/kvm so that both
ARM architectures can find it.

Acked-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Tested-by: Ard Biesheuvel &lt;ard.biesheuvel@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>arm64: KVM: Use per-CPU vector when BP hardening is enabled</title>
<updated>2018-02-16T19:06:41Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2018-01-03T16:38:35Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=9a7a2f40da4a334a379c7a4fc472a15e9d04104d'/>
<id>urn:sha1:9a7a2f40da4a334a379c7a4fc472a15e9d04104d</id>
<content type='text'>
Commit 6840bdd73d07 upstream.

Now that we have per-CPU vectors, let's plug them into the KVM/arm64 code.
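
Roughly, the vector selection then becomes (a sketch of the
kvm_get_hyp_vector() idea, not the literal backport):

    /* Pick the hardened per-CPU vector slot when BP hardening is
     * enabled on this CPU, else fall back to the stock hyp vectors. */
    static inline void *kvm_get_hyp_vector(void)
    {
        struct bp_hardening_data *data = arm64_get_bp_hardening_data();
        void *vect = kvm_ksym_ref(__kvm_hyp_vector);

        if (data-&gt;fn)
            vect = __bp_harden_hyp_vecs_start +
                   data-&gt;hyp_vectors_slot * SZ_2K;

        return vect;
    }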

Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Catalin Marinas &lt;catalin.marinas@arm.com&gt;

Conflicts:
	arch/arm/include/asm/kvm_mmu.h
	arch/arm64/include/asm/kvm_mmu.h
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'kvm-arm-fixes-for-v4.15-1' of git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm</title>
<updated>2017-12-05T17:02:03Z</updated>
<author>
<name>Radim Krčmář</name>
<email>rkrcmar@redhat.com</email>
</author>
<published>2017-12-05T17:02:03Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=609b7002705ae72a6ca45b633b7ff1a09a7a0d86'/>
<id>urn:sha1:609b7002705ae72a6ca45b633b7ff1a09a7a0d86</id>
<content type='text'>
KVM/ARM Fixes for v4.15.

Fixes:
 - A number of issues in the vgic discovered using SMATCH
 - An off-by-one calculation in our stage-2 base address mask (32-bit
   and 64-bit)
 - Fixes to single-step debugging instructions that trap for other
   reasons such as MMIO aborts
 - Printing unavailable hyp mode as error
 - Potential spinlock deadlock in the vgic
 - Avoid calling vgic vcpu free more than once
 - Broken bit calculation for big endian systems
</content>
</entry>
<entry>
<title>mm: switch to 'define pmd_write' instead of __HAVE_ARCH_PMD_WRITE</title>
<updated>2017-11-30T02:40:42Z</updated>
<author>
<name>Dan Williams</name>
<email>dan.j.williams@intel.com</email>
</author>
<published>2017-11-30T00:10:10Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=e4e40e0263ea6a3bfefbfd15d1b6ff5c03f2b95e'/>
<id>urn:sha1:e4e40e0263ea6a3bfefbfd15d1b6ff5c03f2b95e</id>
<content type='text'>
In response to compile breakage introduced by a series that added the
pud_write helper to x86, Stephen notes:

    did you consider using the other paradigm:

    In arch include files:
    #define pud_write       pud_write
    static inline int pud_write(pud_t pud)
     .....

    Then in include/asm-generic/pgtable.h:

    #ifndef pud_write
    static inline int pud_write(pud_t pud)
    {
            ....
    }
    #endif

    If you had, then the powerpc code would have worked ... ;-) and many
    of the other interfaces in include/asm-generic/pgtable.h are
    protected that way ...

Given that some architectures already define pmd_write() as a macro, it's
a net reduction to drop the definition of __HAVE_ARCH_PMD_WRITE.
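
Spelled out, the paradigm is (sketch, using arm64's pmd_write() as
the arch-side example):

    /* Arch header: defining the macro to the function name lets the
     * generic header detect the override with #ifndef. */
    #define pmd_write pmd_write
    static inline int pmd_write(pmd_t pmd)
    {
        return pte_write(pmd_pte(pmd));
    }

    /* include/asm-generic/pgtable.h: fallback only when no arch
     * definition (function or macro) exists. */
    #ifndef pmd_write
    static inline int pmd_write(pmd_t pmd)
    {
        BUG();
        return 0;
    }
    #endif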

Link: http://lkml.kernel.org/r/151129126721.37405.13339850900081557813.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams &lt;dan.j.williams@intel.com&gt;
Suggested-by: Stephen Rothwell &lt;sfr@canb.auug.org.au&gt;
Cc: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Cc: "Aneesh Kumar K.V" &lt;aneesh.kumar@linux.vnet.ibm.com&gt;
Cc: Oliver OHalloran &lt;oliveroh@au1.ibm.com&gt;
Cc: Chris Metcalf &lt;cmetcalf@mellanox.com&gt;
Cc: Russell King &lt;linux@armlinux.org.uk&gt;
Cc: Ralf Baechle &lt;ralf@linux-mips.org&gt;
Cc: "H. Peter Anvin" &lt;hpa@zytor.com&gt;
Cc: Arnd Bergmann &lt;arnd@arndb.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm/arm64: debug: Introduce helper for single-step</title>
<updated>2017-11-29T15:46:19Z</updated>
<author>
<name>Alex Bennée</name>
<email>alex.bennee@linaro.org</email>
</author>
<published>2017-11-16T15:39:19Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=696673d192f52c2c5a702224ee21f005318a844b'/>
<id>urn:sha1:696673d192f52c2c5a702224ee21f005318a844b</id>
<content type='text'>
After emulating instructions we may want to return to user-space to handle
single-step debugging. Introduce a helper function, which, if
single-step is enabled, sets the run structure for return and returns
true.
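
The helper's shape is roughly (sketch of the arm64 flavour):

    /* Returns true if we should exit to user space so the debugger
     * can observe the completed single-step. */
    static inline bool kvm_arm_handle_step_debug(struct kvm_vcpu *vcpu,
                                                 struct kvm_run *run)
    {
        if (vcpu-&gt;guest_debug &amp; KVM_GUESTDBG_SINGLESTEP) {
            run-&gt;exit_reason = KVM_EXIT_DEBUG;
            run-&gt;debug.arch.hsr = ESR_ELx_EC_SOFTSTP_LOW &lt;&lt; ESR_ELx_EC_SHIFT;
            return true;
        }
        return false;
    }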

Signed-off-by: Alex Bennée &lt;alex.bennee@linaro.org&gt;
Reviewed-by: Julien Thierry &lt;julien.thierry@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>arm: KVM: Fix VTTBR_BADDR_MASK BUG_ON off-by-one</title>
<updated>2017-11-29T15:46:18Z</updated>
<author>
<name>Marc Zyngier</name>
<email>marc.zyngier@arm.com</email>
</author>
<published>2017-11-16T17:58:21Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=5553b142be11e794ebc0805950b2e8313f93d718'/>
<id>urn:sha1:5553b142be11e794ebc0805950b2e8313f93d718</id>
<content type='text'>
VTTBR_BADDR_MASK is used to sanity check the size and alignment of the
VTTBR address. It is currently off by one, allowing only up to 39-bit
addresses (instead of 40-bit) and insufficiently checking the
alignment. This patch fixes it.
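
To see the shape of the bug, take a mask meant to cover the BADDR
field, bits [39:x] (illustrative, with x = VTTBR_X = 14):

    #define VTTBR_X                 14      /* example value */

    /* Off by one: shifting the (40 - x)-bit block by (x - 1) gives a
     * mask over bits [38:x-1].  Bit 39 falls outside it, so a valid
     * 40-bit address trips the sanity check, while bit x-1 falls
     * inside it, so a misaligned base address sneaks past. */
    #define VTTBR_BADDR_MASK_BAD \
            (((1ULL &lt;&lt; (40 - VTTBR_X)) - 1) &lt;&lt; (VTTBR_X - 1))

    /* Fixed: shift by x itself, covering exactly bits [39:x]. */
    #define VTTBR_BADDR_MASK \
            (((1ULL &lt;&lt; (40 - VTTBR_X)) - 1) &lt;&lt; VTTBR_X)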

This patch is the 32-bit counterpart of Kristina's arm64 fix, and
she deserves the actual kudos for pinpointing that one.

Fixes: f7ed45be3ba52 ("KVM: ARM: World-switch implementation")
Cc: &lt;stable@vger.kernel.org&gt; # 3.9
Reported-by: Kristina Martsenko &lt;kristina.martsenko@arm.com&gt;
Reviewed-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
Signed-off-by: Marc Zyngier &lt;marc.zyngier@arm.com&gt;
Signed-off-by: Christoffer Dall &lt;christoffer.dall@linaro.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm</title>
<updated>2017-11-26T23:03:49Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2017-11-26T23:03:49Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=bbecb1cfcca55f98cfcb62fa36a32d79975d8816'/>
<id>urn:sha1:bbecb1cfcca55f98cfcb62fa36a32d79975d8816</id>
<content type='text'>
Pull ARM fixes from Russell King:

 - LPAE fixes for kernel-readonly regions

 - Fix for get_user_pages_fast on LPAE systems

 - avoid tying decompressor to a particular platform if DEBUG_LL is
   enabled

 - BUG if we attempt to return to userspace but the to-be-restored PSR
   value keeps us in privileged mode (defeating an issue that ftracetest
   found)

* 'fixes' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: BUG if jumping to usermode address in kernel mode
  ARM: 8722/1: mm: make STRICT_KERNEL_RWX effective for LPAE
  ARM: 8721/1: mm: dump: check hardware RO bit for LPAE
  ARM: make decompressor debug output user selectable
  ARM: fix get_user_pages_fast
</content>
</entry>
<entry>
<title>ARM: BUG if jumping to usermode address in kernel mode</title>
<updated>2017-11-26T15:41:39Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2017-11-24T23:49:34Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8bafae202c82dc257f649ea3c275a0f35ee15113'/>
<id>urn:sha1:8bafae202c82dc257f649ea3c275a0f35ee15113</id>
<content type='text'>
Detect if we are returning to usermode via the normal kernel exit paths
but the saved PSR value indicates that we are in kernel mode.  This
could occur due to corrupted stack state, which has been observed with
"ftracetest".

This ensures that we catch the problem case before we get to user code.
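
In C terms the check amounts to something like this sketch (the
patch itself does it in the assembly exit path):

    /* Validate the to-be-restored PSR just before exception return:
     * anything but user mode here means corrupted stack state. */
    static inline void check_usermode_return(struct pt_regs *regs)
    {
        /* processor_mode() masks the mode field out of ARM_cpsr */
        BUG_ON(processor_mode(regs) != USR_MODE);
    }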

Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
<entry>
<title>ARM: fix get_user_pages_fast</title>
<updated>2017-11-21T14:45:36Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@armlinux.org.uk</email>
</author>
<published>2017-10-25T10:04:14Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1ee5e87f86deca84fdcb7c71bb8368cacc4c24e3'/>
<id>urn:sha1:1ee5e87f86deca84fdcb7c71bb8368cacc4c24e3</id>
<content type='text'>
Ensure that get_user_pages_fast() is not able to access memory which
has been mapped with PROT_NONE.
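
A userspace probe for this could look like the following sketch (a
hypothetical test, not part of the patch; it relies on vmsplice()
pinning its pages via get_user_pages_fast()):

    #define _GNU_SOURCE
    #include &lt;errno.h&gt;
    #include &lt;fcntl.h&gt;
    #include &lt;stdio.h&gt;
    #include &lt;string.h&gt;
    #include &lt;sys/mman.h&gt;
    #include &lt;sys/uio.h&gt;
    #include &lt;unistd.h&gt;

    int main(void)
    {
        long page = sysconf(_SC_PAGESIZE);
        int pipefd[2];
        struct iovec iov;
        void *p;

        /* A page the kernel must never let gup access. */
        p = mmap(NULL, page, PROT_NONE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED || pipe(pipefd))
            return 1;

        iov.iov_base = p;
        iov.iov_len = page;

        /* Expect -1/EFAULT; moving data here would be the bug. */
        if (vmsplice(pipefd[1], &amp;iov, 1, 0) &lt; 0) {
            printf("ok: %s\n", strerror(errno));
            return 0;
        }
        printf("BUG: PROT_NONE page was readable\n");
        return 1;
    }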

Reported-by: Al Viro &lt;viro@ZenIV.linux.org.uk&gt;
Signed-off-by: Russell King &lt;rmk+kernel@armlinux.org.uk&gt;
</content>
</entry>
</feed>
