<feed xmlns='http://www.w3.org/2005/Atom'>
<title>kernel/arch/arm64/kvm/vgic/vgic-v3-nested.c, branch linux-rolling-stable</title>
<subtitle>Hosts the 0x221E Linux distro kernel.</subtitle>
<id>https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable</id>
<link rel='self' href='https://universe.0xinfinity.dev/distro/kernel/atom?h=linux-rolling-stable'/>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/'/>
<updated>2025-11-24T22:29:14Z</updated>
<entry>
<title>KVM: arm64: GICv3: Force exit to sync ICH_HCR_EL2.En</title>
<updated>2025-11-24T22:29:14Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-11-20T17:25:26Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=78ffc28456f5981f0e54007fe124e20610abd0ea'/>
<id>urn:sha1:78ffc28456f5981f0e54007fe124e20610abd0ea</id>
<content type='text'>
FEAT_NV2 is pretty terrible for anything that tries to enforce immediate
effects, and writing to ICH_HCR_EL2 in the hope of disabling a maintenance
interrupt is in vain. The write only hits memory, and the guest hasn't
cleared anything -- the MI will fire anyway.

For example, running the vgic_irq test under NV results in about 800
maintenance interrupts being actually handled by the L1 guest,
when none were expected.

As a cheap workaround, read back ICH_MISR_EL2 after writing 0 to
ICH_HCR_EL2. This is very cheap on real HW, and causes a trap to
the host in NV, giving it the opportunity to retire the pending MI.
With this, the above test runs to completion without any MI being
actually handled.
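
A minimal sketch of the trick, assuming the read_gicreg()/write_gicreg()
accessors from arch_gicv3.h (the placement is illustrative, not the
exact patch):

	/* Under NV, this write only updates the in-memory image... */
	write_gicreg(0, ICH_HCR_EL2);
	isb();
	/*
	 * ...but reading ICH_MISR_EL2 back traps to L0, giving it a
	 * chance to retire the pending MI. On real HW the read is
	 * cheap and has no side effects.
	 */
	(void)read_gicreg(ICH_MISR_EL2);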

Yes, this is really poor...

Tested-by: Fuad Tabba &lt;tabba@google.com&gt;
Reviewed-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Mark Brown &lt;broonie@kernel.org&gt;
Link: https://msgid.link/20251120172540.2267180-37-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oupton@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: GICv3: nv: Plug L1 LR sync into deactivation primitive</title>
<updated>2025-11-24T22:29:14Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-11-20T17:25:25Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=6dd333c8942b2e5bb5927af843b56ec2857db7c7'/>
<id>urn:sha1:6dd333c8942b2e5bb5927af843b56ec2857db7c7</id>
<content type='text'>
Pretty much like the rest of the LR handling, deactivation of an
L2 interrupt gets reflected in the L1 LRs, and therefore must be
propagated into the L1 shadow state if the interrupt is HW-bound.

Instead of directly handling the active state (which looks a bit
off, as it ignores locking and L1-&gt;L0 HW propagation), use the new
deactivation primitive to perform the deactivation and deal with
the required maintenance.
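
A rough sketch of the shape of the change, with vgic_v3_deactivate_irq()
standing in for the actual primitive (the name here is hypothetical):

	/* Before: poke the active state directly, without taking the
	 * IRQ lock or telling the L0 HW anything. */
	irq-&gt;active = false;

	/* After: let the primitive take the lock, clear the active
	 * state, and propagate the deactivation to the L0 HW for
	 * HW-bound interrupts. */
	vgic_v3_deactivate_irq(vcpu, irq);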

Tested-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Mark Brown &lt;broonie@kernel.org&gt;
Link: https://msgid.link/20251120172540.2267180-36-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oupton@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: GICv3: nv: Resync LRs/VMCR/HCR early for better MI emulation</title>
<updated>2025-11-24T22:29:14Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-11-20T17:25:24Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=eb33ffa2bd3f1842d2960aff7484869fc64aa2fb'/>
<id>urn:sha1:eb33ffa2bd3f1842d2960aff7484869fc64aa2fb</id>
<content type='text'>
The current approach to nested GICv3 support is to do nothing
while L2 is running, wait for a transition from L2 to L1 to resync
the LRs, VMCR and HCR, and only then evaluate the state to decide
whether to generate a maintenance interrupt.

This doesn't provide a good quality of emulation, and it would be
far preferable to find out early that we need to perform a switch.

Move the LR/VMCR/HCR resync into vgic_v3_sync_nested(), so
that we have most of the state available. As we turn the vgic
off at this stage to avoid a screaming host MI, add a new helper,
vgic_v3_flush_nested(), that switches the vgic back on. The MI can
then be directly injected as required.
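
Schematically, the exit path then looks like this (hook names as in
the text above; the surrounding logic is elided):

	/* On L2 exit: pull the LRs, VMCR and HCR back from the
	 * shadow state, and turn the vgic off so the host doesn't
	 * take a screaming MI while the state is evaluated. */
	vgic_v3_sync_nested(vcpu);

	/* ... evaluate the state, inject an MI into L1 if needed ... */

	/* Before re-entering the guest: switch the vgic back on. */
	vgic_v3_flush_nested(vcpu);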

Tested-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Mark Brown &lt;broonie@kernel.org&gt;
Link: https://msgid.link/20251120172540.2267180-35-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oupton@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: Eagerly save VMCR on exit</title>
<updated>2025-11-24T22:29:13Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-11-20T17:25:10Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=cf72ee63711916ad808f82eb054dd9d69727a5bf'/>
<id>urn:sha1:cf72ee63711916ad808f82eb054dd9d69727a5bf</id>
<content type='text'>
We currently save/restore the VMCR register in a pretty lazy way
(on load/put, consistent with what we do with the APRs).

However, we are going to need the group-enable bits that are backed
by VMCR on each entry (so that we can avoid injecting interrupts for
disabled groups).

Move the synchronisation from put to sync, which results in some minor
churn in the nVHE hypercalls to simplify things.
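
A minimal sketch of the move (function and helper names here are
hypothetical; the nVHE hypercall plumbing is elided):

	static void vgic_v3_sync(struct kvm_vcpu *vcpu)
	{
		/* Previously done on vcpu_put(). Saving VMCR on every
		 * exit keeps the group-enable bits it backs valid by
		 * the time the next entry is prepared. */
		save_vmcr(vcpu);
	}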

Tested-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Mark Brown &lt;broonie@kernel.org&gt;
Link: https://msgid.link/20251120172540.2267180-21-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oupton@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: Turn vgic-v3 errata traps into a patched-in constant</title>
<updated>2025-11-24T22:29:11Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-11-20T17:24:54Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8d3dfab1d305d61359454d9c09b736f077a9fce4'/>
<id>urn:sha1:8d3dfab1d305d61359454d9c09b736f077a9fce4</id>
<content type='text'>
The trap bits are currently only set to manage CPU errata. However,
we are about to make use of them for purposes beyond beating broken
CPUs into submission.

For this purpose, turn these errata-driven bits into a patched-in
constant that is merged with the KVM-driven value at the point of
programming the ICH_HCR_EL2 register, rather than being directly
stored with the shadow value.

This allows the KVM code to distinguish between a trap handled
for the purpose of an erratum workaround and one handled for KVM's
own needs.
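
One way to picture the split (errata_trap_bits is an invented name
standing in for the patched-in constant):

	/* The KVM-driven trap bits live in the shadow value... */
	u64 hcr = cpu_if-&gt;vgic_hcr;

	/* ...while the errata-driven bits are merged in only when the
	 * register is programmed, so a later trap can be attributed
	 * to one source or the other. */
	write_gicreg(hcr | errata_trap_bits, ICH_HCR_EL2);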

Tested-by: Fuad Tabba &lt;tabba@google.com&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Tested-by: Mark Brown &lt;broonie@kernel.org&gt;
Link: https://msgid.link/20251120172540.2267180-5-maz@kernel.org
Signed-off-by: Oliver Upton &lt;oupton@kernel.org&gt;
</content>
</entry>
<entry>
<title>Merge tag 'kvmarm-6.17' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD</title>
<updated>2025-07-29T16:27:40Z</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2025-07-29T16:27:40Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=314b40b3b6189cc6bffce5d68e3f4c4f6a68dae5'/>
<id>urn:sha1:314b40b3b6189cc6bffce5d68e3f4c4f6a68dae5</id>
<content type='text'>
KVM/arm64 changes for 6.17, round #1

 - Host driver for GICv5, the next generation interrupt controller for
   arm64, including support for interrupt routing, MSIs, interrupt
   translation and wired interrupts.

 - Use FEAT_GCIE_LEGACY on GICv5 systems to virtualize GICv3 VMs on
   GICv5 hardware, leveraging the legacy VGIC interface.

 - Userspace control of the 'nASSGIcap' GICv3 feature, allowing
   userspace to disable support for SGIs w/o an active state on hardware
   that previously advertised it unconditionally.

 - Map supporting endpoints with cacheable memory attributes on systems
   with FEAT_S2FWB and DIC where KVM no longer needs to perform cache
   maintenance on the address range.

 - Nested support for FEAT_RAS and FEAT_DoubleFault2, allowing the guest
   hypervisor to inject external aborts into an L2 VM and take traps of
   masked external aborts to the hypervisor.

 - Convert more system register sanitization to the config-driven
   implementation.

 - Fixes to the visibility of EL2 registers, namely making VGICv3 system
   registers accessible through the VGIC device instead of the ONE_REG
   vCPU ioctls.

 - Various cleanups and minor fixes.
</content>
</entry>
<entry>
<title>KVM: arm64: Add helper to identify a nested context</title>
<updated>2025-07-08T17:40:30Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-07-08T17:25:08Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=1d6fea7663b2d3fc8569cf14c91a49fb9b37067b'/>
<id>urn:sha1:1d6fea7663b2d3fc8569cf14c91a49fb9b37067b</id>
<content type='text'>
A common idiom in the KVM code is to check if we are currently
dealing with a "nested" context, defined as having NV enabled,
but being in the EL1&amp;0 translation regime.

This is usually expressed as:

	if (vcpu_has_nv(vcpu) &amp;&amp; !is_hyp_ctxt(vcpu) ... )

which is a mouthful and a bit hard to read, especially when followed
by additional conditions.

Introduce a new helper that encapsulates these two terms, allowing
the above to be written as

	if (is_nested_context(vcpu) ... )

which is both shorter and easier to read, and makes the potential
for simplification on some code paths more obvious.
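
The helper presumably boils down to the conjunction spelled out
above; a sketch:

	static inline bool is_nested_context(struct kvm_vcpu *vcpu)
	{
		return vcpu_has_nv(vcpu) &amp;&amp; !is_hyp_ctxt(vcpu);
	}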

Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Reviewed-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/20250708172532.1699409-4-oliver.upton@linux.dev
Signed-off-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: nv: Fix MI line level calculation in vgic_v3_nested_update_mi()</title>
<updated>2025-06-26T07:01:45Z</updated>
<author>
<name>Wei-Lin Chang</name>
<email>r09922117@csie.ntu.edu.tw</email>
</author>
<published>2025-06-25T08:47:09Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=af040a9a296044fd4b748786c2516f172a7617f1'/>
<id>urn:sha1:af040a9a296044fd4b748786c2516f172a7617f1</id>
<content type='text'>
The vcpu's MI line should be asserted when its ICH_HCR_EL2.En is
set and ICH_MISR_EL2 is non-zero. Using a bitwise AND (&amp;=) for this
calculation does not give the correct result when the LSB of the
vcpu's ICH_MISR_EL2 isn't set, even if other bits are. Correct this
by computing the line level with a logical AND instead.
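
The difference, sketched on the line-level computation (variable
names are illustrative; the exact expression in the code may differ):

	/* Buggy: bitwise AND only looks at bit 0 of ICH_MISR_EL2 */
	level &amp;= misr;

	/* Fixed: logical AND treats any non-zero MISR as asserted */
	level = en &amp;&amp; misr;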

Signed-off-by: Wei-Lin Chang &lt;r09922117@csie.ntu.edu.tw&gt;
Link: https://lore.kernel.org/r/20250625084709.3968844-1-r09922117@csie.ntu.edu.tw
[maz: drop the level check from the original code]
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
<entry>
<title>KVM: arm64: nv: Fix tracking of shadow list registers</title>
<updated>2025-06-19T08:58:20Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-06-15T15:11:38Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=8a8ff069c7ad9a359c54683329883e2432cff191'/>
<id>urn:sha1:8a8ff069c7ad9a359c54683329883e2432cff191</id>
<content type='text'>
Wei-Lin reports that the tracking of shadow list registers is
majorly broken when resync'ing the L2 state after a run, as
we confuse the guest's LR index with the host's, potentially
losing the interrupt state.

While this could be fixed by adding yet another side index to
track it (Wei-Lin's fix), it may be better to refactor this
code to avoid having a side index altogether, limiting the
risk of introducing this class of bugs.

A key observation is that the shadow index is always the number
of bits set in the lr_map bitmap. With that, the parallel indexing
scheme can be completely dropped.

While doing this, introduce a couple of helpers that abstract
the index conversion and some of the LR repainting, making the
whole exercise much simpler.
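
A sketch of the conversion this enables (helper name hypothetical;
lr_map is taken to be the bitmap of in-use guest LRs):

	static inline int shadow_index(u16 lr_map, int guest_idx)
	{
		/* Shadow LRs are densely packed: the slot backing
		 * guest LR guest_idx is the count of in-use guest
		 * LRs with a lower index. */
		return hweight16(lr_map &amp; (BIT(guest_idx) - 1));
	}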

Reported-by: Wei-Lin Chang &lt;r09922117@csie.ntu.edu.tw&gt;
Reviewed-by: Wei-Lin Chang &lt;r09922117@csie.ntu.edu.tw&gt;
Reviewed-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
Link: https://lore.kernel.org/r/20250614145721.2504524-1-r09922117@csie.ntu.edu.tw
Link: https://lore.kernel.org/r/86qzzkc5xa.wl-maz@kernel.org
</content>
</entry>
<entry>
<title>KVM: arm64: Add assignment-specific sysreg accessor</title>
<updated>2025-06-05T13:17:32Z</updated>
<author>
<name>Marc Zyngier</name>
<email>maz@kernel.org</email>
</author>
<published>2025-06-03T07:08:21Z</published>
<link rel='alternate' type='text/html' href='https://universe.0xinfinity.dev/distro/kernel/commit/?id=6678791ee3da0b78c28fe7d77814097f53cbb8df'/>
<id>urn:sha1:6678791ee3da0b78c28fe7d77814097f53cbb8df</id>
<content type='text'>
Assigning a value to a system register doesn't do what it is
supposed to do if that register is one that has RESx bits.

The main problem is that we use __vcpu_sys_reg(), which can be used
both as an lvalue and an rvalue. When used as an lvalue, the bit
masking occurs *before* the new value is assigned, meaning that we
(1) do pointless work on the old value, and (2) potentially assign
an invalid value, as we fail to apply the masks to it.

Fix this by providing a new __vcpu_assign_sys_reg() that does
what it says on the tin, and sanitises the *new* value instead of
the old one. This comes with a significant amount of churn.
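
A sketch of the difference (the register choice is arbitrary and the
macro bodies are simplified):

	/* Lvalue use: the RESx masking is applied to the *old*
	 * value, and the new one lands unsanitised. */
	__vcpu_sys_reg(vcpu, HCR_EL2) = val;

	/* Assignment accessor: the *new* value is sanitised on its
	 * way in. */
	__vcpu_assign_sys_reg(vcpu, HCR_EL2, val);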

Reviewed-by: Miguel Luis &lt;miguel.luis@oracle.com&gt;
Reviewed-by: Oliver Upton &lt;oliver.upton@linux.dev&gt;
Link: https://lore.kernel.org/r/20250603070824.1192795-2-maz@kernel.org
Signed-off-by: Marc Zyngier &lt;maz@kernel.org&gt;
</content>
</entry>
</feed>
