[v2,00/28] ARM Scalable Vector Extension (SVE)

Message ID 1504198860-12951-1-git-send-email-Dave.Martin@arm.com

Dave Martin Aug. 31, 2017, 5 p.m. UTC
This series implements Linux kernel support for the ARM Scalable Vector
Extension (SVE) [1].  It supersedes the previous v1 [3]; see the
individual patches for details of the changes.

The patches apply on v4.13-rc7 + linux-arm64/for-next/core.
For convenience, a git tree is available. [4]


To reduce spam, some people may not have been copied on the entire series.
For those who did not receive the whole series, it can be found in the
linux-arm-kernel archive. [2]


*Note* The final two patches (27-28) of the series are still RFC --
before committing to this ABI it would be good to get feedback on
whether the approach makes sense and whether it is suitable for other
architectures.  These two patches are not required by the rest of the
series and can be revised or merged later.


Support for use of SVE by KVM guests is not currently included.
Instead, such use will be trapped and reflected to the guest as
undefined instruction execution.  SVE is hidden from the view of the
CPU feature registers visible to guests, so that guests will not
expect it to work.


This series has been build- and boot-tested on Juno r0 and the ARM FVP
Base model with SVE plugin.  Because there is no hardware with SVE
support yet, testing of the SVE functionality has only been performed on
the model.

Regression testing of v1 using LTP showed no regressions on the kernel
tests.

Regression testing of v2 is under way.


Series summary:

 * Patches 1-5 contain some individual bits of preparatory spadework,
   which are indirectly related to SVE.

Dave Martin (5):
  regset: Add support for dynamically sized regsets
  arm64: KVM: Hide unsupported AArch64 CPU features from guests
  arm64: efi: Add missing Kconfig dependency on KERNEL_MODE_NEON
  arm64: Port deprecated instruction emulation to new sysctl interface
  arm64: fpsimd: Simplify uses of {set,clear}_ti_thread_flag()

   Non-trivial changes among these are:

   * Patch 1: updates the regset core code to handle regsets whose size
     is not fixed at compile time.  This avoids bloating coredumps even
     though the maximum theoretical SVE regset size is large.  (A sketch
     of the resulting interface appears after this list.)

   * Patch 2: extends KVM to modify the ARM architectural ID registers
     seen by guests, by trapping and emulating certain registers.  For
     SVE this is a temporary measure, but it may be useful for other
     architecture extensions.  This patch may also be built on in the
     future, since the only registers currently emulated are those
     required for hiding SVE.
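
   A minimal sketch of the patch 1 interface (assuming the get_size()
   hook that patch 1 adds -- see the patch for the real definition):
   size queries go through a helper instead of assuming n * size, so a
   regset may report a per-task size at runtime:

       static inline unsigned int regset_size(struct task_struct *target,
                                              const struct user_regset *regset)
       {
           if (!regset->get_size)
               return regset->n * regset->size;  /* fixed-size regset */

           return regset->get_size(target, regset);  /* dynamic size */
       }

   Coredump code can then allocate only what the task actually needs,
   instead of the maximum theoretical SVE regset size.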

 * Patches 6-10 add SVE-specific system register and structure layout
   definitions, and the low-level boot code and accessors needed for
   making use of SVE.

Dave Martin (5):
  arm64/sve: System register and exception syndrome definitions
  arm64/sve: Low-level SVE architectural state manipulation functions
  arm64/sve: Kconfig update and conditional compilation support
  arm64/sve: Signal frame and context structure definition
  arm64/sve: Low-level CPU setup

 * Patches 11-13 implement the core context management facilities to
   provide each user task with its own SVE register context, signal
   handling facilities, and sane programmer's model interoperation
   between SVE and FPSIMD.

Dave Martin (3):
  arm64/sve: Core task context handling
  arm64/sve: Support vector length resetting for new processes
  arm64/sve: Signal handling support
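
A sketch of the central idea in patch 11: the context-save path picks
whichever register view is live for the task, keyed off a new TIF_SVE
flag.  (Helper names follow the series; treat the exact signatures as
illustrative.)

    static void task_fpsimd_save(void)
    {
        if (system_supports_sve() && test_thread_flag(TIF_SVE))
            /* Save the full SVE view: Z regs, P regs and FFR */
            sve_save_state(sve_pffr(current),
                           &current->thread.fpsimd_state.fpsr);
        else
            /* No live SVE state: the FPSIMD V regs suffice */
            fpsimd_save_state(&current->thread.fpsimd_state);
    }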

 * Patches 14 and 16 provide backend logic for detecting and making use
   of the different SVE vector lengths supported by the hardware.

 * Patch 15 moves the sys_caps_initialised code to the top of
   cpufeature.c so that patch 16 can make use of it.

Dave Martin (3):
  arm64/sve: Backend logic for setting the vector length
  arm64: cpufeature: Move sys_caps_initialised declarations
  arm64/sve: Probe SVE capabilities and usable vector lengths
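
In outline, patch 16 probes by requesting each possible vector length
via ZCR_EL1.LEN and recording which lengths the hardware actually
grants.  A simplified sketch (helper names approximately as in the
series; the real code also handles verification on secondary CPUs):

    unsigned long zcr = read_sysreg_s(SYS_ZCR_EL1) & ~ZCR_ELx_LEN_MASK;
    unsigned int vq, vl;

    for (vq = SVE_VQ_MAX; vq >= SVE_VQ_MIN; --vq) {
        write_sysreg_s(zcr | (vq - 1), SYS_ZCR_EL1); /* request vq */
        vl = sve_get_vl();          /* RDVL: length actually granted */
        vq = sve_vq_from_vl(vl);    /* skip unobtainable lengths */
        set_bit(vq_to_bit(vq), vq_map);
    }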

 * Patches 17-18 update the kernel-mode NEON / EFI FPSIMD frameworks to
   interoperate correctly with SVE.

Dave Martin (2):
  arm64/sve: Preserve SVE registers around kernel-mode NEON use
  arm64/sve: Preserve SVE registers around EFI runtime service calls
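
Kernel-mode NEON users need no change at the API level; the usual
pattern still applies, except that kernel_neon_begin() now also saves
any live SVE state for the task.  (Sketch only; do_scalar_fallback()
stands in for a hypothetical non-SIMD path.)

    #include <asm/neon.h>
    #include <asm/simd.h>

    static void do_accelerated_work(void)
    {
        if (!may_use_simd()) {
            do_scalar_fallback();   /* hypothetical fallback path */
            return;
        }

        kernel_neon_begin();    /* now also saves live SVE state */
        /* FPSIMD/NEON code goes here: SVE remains off-limits */
        kernel_neon_end();
    }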

 * Patches 19-21 implement the userspace frontend for managing SVE,
   comprising ptrace, some new arch-specific prctl() calls, and a new
   sysctl for init-time setup.

Dave Martin (3):
  arm64/sve: ptrace and ELF coredump support
  arm64/sve: Add prctl controls for userspace vector length management
  arm64/sve: Add sysctl to set the default vector length for new
    processes
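
By way of illustration, a thread might drive the new prctls as follows
(sketch: the PR_SVE_* values are those proposed in this series, needed
inline only until they reach <linux/prctl.h>):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SVE_SET_VL
    #define PR_SVE_SET_VL       48
    #define PR_SVE_GET_VL       49
    #define PR_SVE_VL_LEN_MASK  0xffff
    #define PR_SVE_VL_INHERIT   (1 << 17)
    #endif

    int main(void)
    {
        int ret;

        /* Request a 32-byte (256-bit) vector length for this thread.
         * The kernel rounds the request to a supported VL, so the
         * caller should check what it actually got: */
        if (prctl(PR_SVE_SET_VL, 32) < 0)
            return 1;

        /* Both prctls return the current VL and flags on success: */
        ret = prctl(PR_SVE_GET_VL);
        if (ret < 0)
            return 1;

        printf("vl=%d bytes, inherit=%d\n",
               ret & PR_SVE_VL_LEN_MASK,
               !!(ret & PR_SVE_VL_INHERIT));
        return 0;
    }

The system-wide default vector length for new processes is a separate
knob, set via the sysctl added by the last patch of this group.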

 * Patches 22-24 provide stub KVM extensions for using KVM only on the
   host, while denying guest access.  (A future series will extend this
   with full support for SVE in guests.)

Dave Martin (3):
  arm64/sve: KVM: Prevent guests from using SVE
  arm64/sve: KVM: Treat guest SVE use as undefined instruction
    execution
  arm64/sve: KVM: Hide SVE from CPU features exposed to guests

And finally:

 * Patch 25 disengages the safety catch, enabling the kernel SVE runtime
   support and allowing userspace to use SVE.

Dave Martin (1):
  arm64/sve: Detect SVE and activate runtime support

 * Patch 26 adds some basic documentation.

Dave Martin (1):
  arm64/sve: Add documentation

 * Patches 27-28 (which may be considered RFC) propose a mechanism to
   report the maximum runtime signal frame size to userspace.

Dave Martin (2):
  arm64: signal: Report signal frame size to userspace via auxv
  arm64/sve: signal: Include SVE when computing AT_MINSIGSTKSZ
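
The intended userspace usage is along these lines (sketch;
AT_MINSIGSTKSZ is the new auxv entry proposed in patch 27, available
from the patched uapi headers):

    #include <signal.h>
    #include <sys/auxv.h>

    /* Size an alternate signal stack from the kernel-reported minimum,
     * falling back to the legacy constant on kernels where getauxval()
     * finds no such entry and so returns 0: */
    static unsigned long sigstack_size(void)
    {
        unsigned long sz = getauxval(AT_MINSIGSTKSZ);

        return sz ? sz : MINSIGSTKSZ;
    }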


References:

[1] ARM Scalable Vector Extension
https://community.arm.com/groups/processors/blog/2016/08/22/technology-update-the-scalable-vector-extension-sve-for-the-armv8-a-architecture

[2] linux-arm-kernel August 2017 Archives by thread
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-August/thread.html

[3] [PATCH 00/27] ARM Scalable Vector Extension (SVE)
http://lists.infradead.org/pipermail/linux-arm-kernel/2017-August/524691.html

[4] http://linux-arm.org/git?p=linux-dm.git;a=shortlog;h=refs/heads/sve/v2
    git://linux-arm.org/linux-dm.git sve/v2


Full series and diffstat:

Dave Martin (28):
  regset: Add support for dynamically sized regsets
  arm64: KVM: Hide unsupported AArch64 CPU features from guests
  arm64: efi: Add missing Kconfig dependency on KERNEL_MODE_NEON
  arm64: Port deprecated instruction emulation to new sysctl interface
  arm64: fpsimd: Simplify uses of {set,clear}_ti_thread_flag()
  arm64/sve: System register and exception syndrome definitions
  arm64/sve: Low-level SVE architectural state manipulation functions
  arm64/sve: Kconfig update and conditional compilation support
  arm64/sve: Signal frame and context structure definition
  arm64/sve: Low-level CPU setup
  arm64/sve: Core task context handling
  arm64/sve: Support vector length resetting for new processes
  arm64/sve: Signal handling support
  arm64/sve: Backend logic for setting the vector length
  arm64: cpufeature: Move sys_caps_initialised declarations
  arm64/sve: Probe SVE capabilities and usable vector lengths
  arm64/sve: Preserve SVE registers around kernel-mode NEON use
  arm64/sve: Preserve SVE registers around EFI runtime service calls
  arm64/sve: ptrace and ELF coredump support
  arm64/sve: Add prctl controls for userspace vector length management
  arm64/sve: Add sysctl to set the default vector length for new
    processes
  arm64/sve: KVM: Prevent guests from using SVE
  arm64/sve: KVM: Treat guest SVE use as undefined instruction execution
  arm64/sve: KVM: Hide SVE from CPU features exposed to guests
  arm64/sve: Detect SVE and activate runtime support
  arm64/sve: Add documentation
  arm64: signal: Report signal frame size to userspace via auxv
  arm64/sve: signal: Include SVE when computing AT_MINSIGSTKSZ

 Documentation/arm64/cpu-feature-registers.txt |   6 +-
 Documentation/arm64/sve.txt                   | 477 +++++++++++++++++
 arch/arm/include/asm/kvm_host.h               |   3 +
 arch/arm64/Kconfig                            |  12 +
 arch/arm64/include/asm/cpu.h                  |   4 +
 arch/arm64/include/asm/cpucaps.h              |   3 +-
 arch/arm64/include/asm/cpufeature.h           |  35 ++
 arch/arm64/include/asm/elf.h                  |   5 +
 arch/arm64/include/asm/esr.h                  |   3 +-
 arch/arm64/include/asm/fpsimd.h               |  71 ++-
 arch/arm64/include/asm/fpsimdmacros.h         | 148 ++++++
 arch/arm64/include/asm/kvm_arm.h              |   5 +-
 arch/arm64/include/asm/kvm_host.h             |  11 +
 arch/arm64/include/asm/processor.h            |  10 +
 arch/arm64/include/asm/sysreg.h               |  24 +
 arch/arm64/include/asm/thread_info.h          |   2 +
 arch/arm64/include/asm/traps.h                |   2 +
 arch/arm64/include/uapi/asm/auxvec.h          |   3 +-
 arch/arm64/include/uapi/asm/hwcap.h           |   1 +
 arch/arm64/include/uapi/asm/ptrace.h          | 135 +++++
 arch/arm64/include/uapi/asm/sigcontext.h      | 120 ++++-
 arch/arm64/kernel/armv8_deprecated.c          |  15 +-
 arch/arm64/kernel/cpufeature.c                |  96 +++-
 arch/arm64/kernel/cpuinfo.c                   |   7 +
 arch/arm64/kernel/entry-fpsimd.S              |  17 +
 arch/arm64/kernel/entry.S                     |  14 +-
 arch/arm64/kernel/fpsimd.c                    | 729 +++++++++++++++++++++++++-
 arch/arm64/kernel/head.S                      |  13 +-
 arch/arm64/kernel/process.c                   |   4 +
 arch/arm64/kernel/ptrace.c                    | 270 +++++++++-
 arch/arm64/kernel/signal.c                    | 222 +++++++-
 arch/arm64/kernel/signal32.c                  |   2 +-
 arch/arm64/kernel/traps.c                     |   5 +-
 arch/arm64/kvm/handle_exit.c                  |   8 +
 arch/arm64/kvm/hyp/switch.c                   |  12 +-
 arch/arm64/kvm/sys_regs.c                     | 292 +++++++++--
 arch/arm64/mm/proc.S                          |  14 +-
 fs/binfmt_elf.c                               |   6 +-
 include/linux/regset.h                        |  67 ++-
 include/uapi/linux/elf.h                      |   1 +
 include/uapi/linux/prctl.h                    |   9 +
 kernel/sys.c                                  |  12 +
 virt/kvm/arm/arm.c                            |   3 +
 43 files changed, 2753 insertions(+), 145 deletions(-)
 create mode 100644 Documentation/arm64/sve.txt

Comments

Alex Bennée Sept. 13, 2017, 2:37 p.m. UTC | #1
Dave Martin <Dave.Martin@arm.com> writes:

> Currently, a guest kernel sees the true CPU feature registers
> (ID_*_EL1) when it reads them using MRS instructions.  This means
> that the guest will observe features that are present in the
> hardware but the host doesn't understand or doesn't provide support
> for.  A guest may legitimately try to use such a feature as per the
> architecture, but use of the feature may trap instead of working
> normally, triggering undef injection into the guest.
>
> This is not a problem for the host, but the guest may go wrong when
> running on newer hardware than the host knows about.
>
> This patch hides from guest VMs any AArch64-specific CPU features
> that the host doesn't support, by exposing to the guest the
> sanitised versions of the registers computed by the cpufeatures
> framework, instead of the true hardware registers.  To achieve
> this, HCR_EL2.TID3 is now set for AArch64 guests, and emulation
> code is added to KVM to report the sanitised versions of the
> affected registers in response to MRS and register reads from
> userspace.
>
> The affected registers are removed from invariant_sys_regs[] (since
> the invariant_sys_regs handling is no longer quite correct for
> the invariant_sys_regs handling is no longer quite correct for
> them) and added to sys_reg_descs[], with appropriate access(),
> get_user() and set_user() methods.  No runtime vcpu storage is
> allocated for the registers: instead, they are read on demand from
> the cpufeatures framework.  This may need modification in the
> future if there is a need for userspace to customise the features
> visible to the guest.
>
> Attempts by userspace to write the registers are handled similarly
> to the current invariant_sys_regs handling: writes are permitted,
> but only if they don't attempt to change the value.  This is
> sufficient to support VM snapshot/restore from userspace.
>
> Because of the additional registers, restoring a VM on an older
> kernel may not work unless userspace knows how to handle the extra
> VM registers exposed to the KVM user ABI by this patch.
>
> Under the principle of least damage, this patch makes no attempt to
> handle any of the other registers currently in
> invariant_sys_regs[], or to emulate registers for AArch32: however,
> these could be handled in a similar way in future, as necessary.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
>
> ---
>
> Changes since v1
> ----------------
>
> Requested by Marc Zyngier:
>
> * Get rid of ternary operator use in walk_sys_regs().
>
> * Call write_to_read_only() if an attempt to write an ID reg is
> trapped, rather than reinventing it.
> Probably we won't get there anyway: the architecture says that this
> should undef at EL1 instead.
>
> * Make ID register sysreg table less cryptic and spread the entries one
> per line.
> Also, make the architecturally unallocated and allocated but hidden
> cases more clearly distinct.  These require the same behaviour but for
> different reasons, so it's better to identify them separately.
>
> Other:
>
> * Delete BUG_ON()s that are skipped by construction:
> These check that the result of sys_reg_to_index() is a 64-bit
> register, which is always true because sys_reg_to_index()
> explicitly sets this.
>
> * Remove duplicate const in __access_id_reg args [sparse]
> ---
>  arch/arm64/include/asm/sysreg.h |   3 +
>  arch/arm64/kvm/hyp/switch.c     |   6 +
>  arch/arm64/kvm/sys_regs.c       | 282 +++++++++++++++++++++++++++++++++-------
>  3 files changed, 246 insertions(+), 45 deletions(-)
>
> diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
> index f707fed..480ecd6 100644
> --- a/arch/arm64/include/asm/sysreg.h
> +++ b/arch/arm64/include/asm/sysreg.h
> @@ -149,6 +149,9 @@
>  #define SYS_ID_AA64DFR0_EL1		sys_reg(3, 0, 0, 5, 0)
>  #define SYS_ID_AA64DFR1_EL1		sys_reg(3, 0, 0, 5, 1)
>
> +#define SYS_ID_AA64AFR0_EL1		sys_reg(3, 0, 0, 5, 4)
> +#define SYS_ID_AA64AFR1_EL1		sys_reg(3, 0, 0, 5, 5)
> +
>  #define SYS_ID_AA64ISAR0_EL1		sys_reg(3, 0, 0, 6, 0)
>  #define SYS_ID_AA64ISAR1_EL1		sys_reg(3, 0, 0, 6, 1)
>
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 945e79c..35a90b8 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -81,11 +81,17 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
>  	 * it will cause an exception.
>  	 */
>  	val = vcpu->arch.hcr_el2;
> +
>  	if (!(val & HCR_RW) && system_supports_fpsimd()) {
>  		write_sysreg(1 << 30, fpexc32_el2);
>  		isb();
>  	}
> +
> +	if (val & HCR_RW) /* for AArch64 only: */
> +		val |= HCR_TID3; /* TID3: trap feature register accesses */
> +

I wondered, since this is the hyp switch, whether we could test val &
HCR_RW once for both this and the block above.  But the difference in
the generated code seems minimal, so probably not.
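
Something like the following is what I had in mind (untested sketch,
preserving the existing semantics):

    if (val & HCR_RW) {
        /* AArch64 guest: trap feature register accesses */
        val |= HCR_TID3;
    } else if (system_supports_fpsimd()) {
        /* AArch32 guest: */
        write_sysreg(1 << 30, fpexc32_el2);
        isb();
    }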

>  	write_sysreg(val, hcr_el2);
> +
>  	/* Trap on AArch32 cp15 c15 accesses (EL1 or EL0) */
>  	write_sysreg(1 << 15, hstr_el2);
>  	/*
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index 2e070d3..b1f7552 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -892,6 +892,137 @@ static bool access_cntp_cval(struct kvm_vcpu *vcpu,
>  	return true;
>  }
>
> +/* Read a sanitised cpufeature ID register by sys_reg_desc */
> +static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
> +{
> +	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
> +			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> +
> +	return raz ? 0 : read_sanitised_ftr_reg(id);
> +}
> +
> +/* cpufeature ID register access trap handlers */
> +
> +static bool __access_id_reg(struct kvm_vcpu *vcpu,
> +			    struct sys_reg_params *p,
> +			    const struct sys_reg_desc *r,
> +			    bool raz)
> +{
> +	if (p->is_write)
> +		return write_to_read_only(vcpu, p, r);
> +
> +	p->regval = read_id_reg(r, raz);
> +	return true;
> +}
> +
> +static bool access_id_reg(struct kvm_vcpu *vcpu,
> +			  struct sys_reg_params *p,
> +			  const struct sys_reg_desc *r)
> +{
> +	return __access_id_reg(vcpu, p, r, false);
> +}
> +
> +static bool access_raz_id_reg(struct kvm_vcpu *vcpu,
> +			      struct sys_reg_params *p,
> +			      const struct sys_reg_desc *r)
> +{
> +	return __access_id_reg(vcpu, p, r, true);
> +}
> +
> +static int reg_from_user(u64 *val, const void __user *uaddr, u64 id);
> +static int reg_to_user(void __user *uaddr, const u64 *val, u64 id);
> +static u64 sys_reg_to_index(const struct sys_reg_desc *reg);
> +
> +/*
> + * cpufeature ID register user accessors
> + *
> + * For now, these registers are immutable for userspace, so no values
> + * are stored, and for set_id_reg() we don't allow the effective value
> + * to be changed.
> + */
> +static int __get_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
> +			bool raz)
> +{
> +	const u64 id = sys_reg_to_index(rd);
> +	const u64 val = read_id_reg(rd, raz);
> +
> +	return reg_to_user(uaddr, &val, id);
> +}
> +
> +static int __set_id_reg(const struct sys_reg_desc *rd, void __user *uaddr,
> +			bool raz)
> +{
> +	const u64 id = sys_reg_to_index(rd);
> +	int err;
> +	u64 val;
> +
> +	err = reg_from_user(&val, uaddr, id);
> +	if (err)
> +		return err;
> +
> +	/* This is what we mean by invariant: you can't change it. */
> +	if (val != read_id_reg(rd, raz))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +
> +static int get_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> +		      const struct kvm_one_reg *reg, void __user *uaddr)
> +{
> +	return __get_id_reg(rd, uaddr, false);
> +}
> +
> +static int set_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> +		      const struct kvm_one_reg *reg, void __user *uaddr)
> +{
> +	return __set_id_reg(rd, uaddr, false);
> +}
> +
> +static int get_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> +			  const struct kvm_one_reg *reg, void __user *uaddr)
> +{
> +	return __get_id_reg(rd, uaddr, true);
> +}
> +
> +static int set_raz_id_reg(struct kvm_vcpu *vcpu, const struct sys_reg_desc *rd,
> +			  const struct kvm_one_reg *reg, void __user *uaddr)
> +{
> +	return __set_id_reg(rd, uaddr, true);
> +}
> +
> +/* sys_reg_desc initialiser for known cpufeature ID registers */
> +#define ID_SANITISED(name) {			\
> +	SYS_DESC(SYS_##name),			\
> +	.access	= access_id_reg,		\
> +	.get_user = get_id_reg,			\
> +	.set_user = set_id_reg,			\
> +}
> +
> +/*
> + * sys_reg_desc initialiser for architecturally unallocated cpufeature ID
> + * register with encoding Op0=3, Op1=0, CRn=0, CRm=crm, Op2=op2
> + * (1 <= crm < 8, 0 <= Op2 < 8).
> + */
> +#define ID_UNALLOCATED(crm, op2) {			\
> +	Op0(3), Op1(0), CRn(0), CRm(crm), Op2(op2),	\
> +	.access = access_raz_id_reg,			\
> +	.get_user = get_raz_id_reg,			\
> +	.set_user = set_raz_id_reg,			\
> +}
> +
> +/*
> + * sys_reg_desc initialiser for known ID registers that we hide from guests.
> + * For now, these are exposed just like unallocated ID regs: they appear
> + * RAZ for the guest.
> + */
> +#define ID_HIDDEN(name) {			\
> +	SYS_DESC(SYS_##name),			\
> +	.access = access_raz_id_reg,		\
> +	.get_user = get_raz_id_reg,		\
> +	.set_user = set_raz_id_reg,		\
> +}
> +
>  /*
>   * Architected system registers.
>   * Important: Must be sorted ascending by Op0, Op1, CRn, CRm, Op2
> @@ -944,6 +1075,84 @@ static const struct sys_reg_desc sys_reg_descs[] = {
>  	{ SYS_DESC(SYS_DBGVCR32_EL2), NULL, reset_val, DBGVCR32_EL2, 0 },
>
>  	{ SYS_DESC(SYS_MPIDR_EL1), NULL, reset_mpidr, MPIDR_EL1 },
> +
> +	/*
> +	 * ID regs: all ID_SANITISED() entries here must have corresponding
> +	 * entries in arm64_ftr_regs[].
> +	 */

arm64_ftr_regs isn't updated in this commit. Does this break bisection?

> +
> +	/* AArch64 mappings of the AArch32 ID registers */
> +	/* CRm=1 */
> +	ID_SANITISED(ID_PFR0_EL1),
> +	ID_SANITISED(ID_PFR1_EL1),
> +	ID_SANITISED(ID_DFR0_EL1),
> +	ID_HIDDEN(ID_AFR0_EL1),
> +	ID_SANITISED(ID_MMFR0_EL1),
> +	ID_SANITISED(ID_MMFR1_EL1),
> +	ID_SANITISED(ID_MMFR2_EL1),
> +	ID_SANITISED(ID_MMFR3_EL1),
> +
> +	/* CRm=2 */
> +	ID_SANITISED(ID_ISAR0_EL1),
> +	ID_SANITISED(ID_ISAR1_EL1),
> +	ID_SANITISED(ID_ISAR2_EL1),
> +	ID_SANITISED(ID_ISAR3_EL1),
> +	ID_SANITISED(ID_ISAR4_EL1),
> +	ID_SANITISED(ID_ISAR5_EL1),
> +	ID_SANITISED(ID_MMFR4_EL1),
> +	ID_UNALLOCATED(2,7),
> +
> +	/* CRm=3 */
> +	ID_SANITISED(MVFR0_EL1),
> +	ID_SANITISED(MVFR1_EL1),
> +	ID_SANITISED(MVFR2_EL1),
> +	ID_UNALLOCATED(3,3),
> +	ID_UNALLOCATED(3,4),
> +	ID_UNALLOCATED(3,5),
> +	ID_UNALLOCATED(3,6),
> +	ID_UNALLOCATED(3,7),
> +
> +	/* AArch64 ID registers */
> +	/* CRm=4 */
> +	ID_SANITISED(ID_AA64PFR0_EL1),
> +	ID_SANITISED(ID_AA64PFR1_EL1),
> +	ID_UNALLOCATED(4,2),
> +	ID_UNALLOCATED(4,3),
> +	ID_UNALLOCATED(4,4),
> +	ID_UNALLOCATED(4,5),
> +	ID_UNALLOCATED(4,6),
> +	ID_UNALLOCATED(4,7),
> +
> +	/* CRm=5 */
> +	ID_SANITISED(ID_AA64DFR0_EL1),
> +	ID_SANITISED(ID_AA64DFR1_EL1),
> +	ID_UNALLOCATED(5,2),
> +	ID_UNALLOCATED(5,3),
> +	ID_HIDDEN(ID_AA64AFR0_EL1),
> +	ID_HIDDEN(ID_AA64AFR1_EL1),
> +	ID_UNALLOCATED(5,6),
> +	ID_UNALLOCATED(5,7),
> +
> +	/* CRm=6 */
> +	ID_SANITISED(ID_AA64ISAR0_EL1),
> +	ID_SANITISED(ID_AA64ISAR1_EL1),
> +	ID_UNALLOCATED(6,2),
> +	ID_UNALLOCATED(6,3),
> +	ID_UNALLOCATED(6,4),
> +	ID_UNALLOCATED(6,5),
> +	ID_UNALLOCATED(6,6),
> +	ID_UNALLOCATED(6,7),
> +
> +	/* CRm=7 */
> +	ID_SANITISED(ID_AA64MMFR0_EL1),
> +	ID_SANITISED(ID_AA64MMFR1_EL1),
> +	ID_SANITISED(ID_AA64MMFR2_EL1),
> +	ID_UNALLOCATED(7,3),
> +	ID_UNALLOCATED(7,4),
> +	ID_UNALLOCATED(7,5),
> +	ID_UNALLOCATED(7,6),
> +	ID_UNALLOCATED(7,7),
> +

I think it might be worthwhile adding a test to kvm-unit-tests to walk
all the ID registers to check this.
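
Roughly along these lines (untested sketch): S-form sysreg names let a
test read every encoding in the space trapped by TID3 without needing
assembler support for each individual register:

    #define read_id_reg(crm, op2)                                   \
    ({                                                              \
        unsigned long __val;                                        \
        asm volatile("mrs %0, S3_0_C0_C" #crm "_" #op2              \
                     : "=r" (__val));                               \
        __val;                                                      \
    })

    /* e.g. an architecturally unallocated encoding should read as
     * zero in the guest rather than UNDEFing: */
    unsigned long val = read_id_reg(4, 2);  /* ID_UNALLOCATED(4,2) */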

>  	{ SYS_DESC(SYS_SCTLR_EL1), access_vm_reg, reset_val, SCTLR_EL1, 0x00C50078 },
>  	{ SYS_DESC(SYS_CPACR_EL1), NULL, reset_val, CPACR_EL1, 0 },
>  	{ SYS_DESC(SYS_TTBR0_EL1), access_vm_reg, reset_unknown, TTBR0_EL1 },
> @@ -1790,8 +1999,8 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  	if (!r)
>  		r = find_reg(&params, sys_reg_descs, ARRAY_SIZE(sys_reg_descs));
>
> -	/* Not saved in the sys_reg array? */
> -	if (r && !r->reg)
> +	/* Not saved in the sys_reg array and not otherwise accessible? */
> +	if (r && !(r->reg || r->get_user))
>  		r = NULL;
>
>  	return r;
> @@ -1815,20 +2024,6 @@ static const struct sys_reg_desc *index_to_sys_reg_desc(struct kvm_vcpu *vcpu,
>  FUNCTION_INVARIANT(midr_el1)
>  FUNCTION_INVARIANT(ctr_el0)
>  FUNCTION_INVARIANT(revidr_el1)
> -FUNCTION_INVARIANT(id_pfr0_el1)
> -FUNCTION_INVARIANT(id_pfr1_el1)
> -FUNCTION_INVARIANT(id_dfr0_el1)
> -FUNCTION_INVARIANT(id_afr0_el1)
> -FUNCTION_INVARIANT(id_mmfr0_el1)
> -FUNCTION_INVARIANT(id_mmfr1_el1)
> -FUNCTION_INVARIANT(id_mmfr2_el1)
> -FUNCTION_INVARIANT(id_mmfr3_el1)
> -FUNCTION_INVARIANT(id_isar0_el1)
> -FUNCTION_INVARIANT(id_isar1_el1)
> -FUNCTION_INVARIANT(id_isar2_el1)
> -FUNCTION_INVARIANT(id_isar3_el1)
> -FUNCTION_INVARIANT(id_isar4_el1)
> -FUNCTION_INVARIANT(id_isar5_el1)
>  FUNCTION_INVARIANT(clidr_el1)
>  FUNCTION_INVARIANT(aidr_el1)
>
> @@ -1836,20 +2031,6 @@ FUNCTION_INVARIANT(aidr_el1)
>  static struct sys_reg_desc invariant_sys_regs[] = {
>  	{ SYS_DESC(SYS_MIDR_EL1), NULL, get_midr_el1 },
>  	{ SYS_DESC(SYS_REVIDR_EL1), NULL, get_revidr_el1 },
> -	{ SYS_DESC(SYS_ID_PFR0_EL1), NULL, get_id_pfr0_el1 },
> -	{ SYS_DESC(SYS_ID_PFR1_EL1), NULL, get_id_pfr1_el1 },
> -	{ SYS_DESC(SYS_ID_DFR0_EL1), NULL, get_id_dfr0_el1 },
> -	{ SYS_DESC(SYS_ID_AFR0_EL1), NULL, get_id_afr0_el1 },
> -	{ SYS_DESC(SYS_ID_MMFR0_EL1), NULL, get_id_mmfr0_el1 },
> -	{ SYS_DESC(SYS_ID_MMFR1_EL1), NULL, get_id_mmfr1_el1 },
> -	{ SYS_DESC(SYS_ID_MMFR2_EL1), NULL, get_id_mmfr2_el1 },
> -	{ SYS_DESC(SYS_ID_MMFR3_EL1), NULL, get_id_mmfr3_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR0_EL1), NULL, get_id_isar0_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR1_EL1), NULL, get_id_isar1_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR2_EL1), NULL, get_id_isar2_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR3_EL1), NULL, get_id_isar3_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR4_EL1), NULL, get_id_isar4_el1 },
> -	{ SYS_DESC(SYS_ID_ISAR5_EL1), NULL, get_id_isar5_el1 },
>  	{ SYS_DESC(SYS_CLIDR_EL1), NULL, get_clidr_el1 },
>  	{ SYS_DESC(SYS_AIDR_EL1), NULL, get_aidr_el1 },
>  	{ SYS_DESC(SYS_CTR_EL0), NULL, get_ctr_el0 },
> @@ -2079,12 +2260,31 @@ static bool copy_reg_to_user(const struct sys_reg_desc *reg, u64 __user **uind)
>  	return true;
>  }
>
> +static int walk_one_sys_reg(const struct sys_reg_desc *rd,
> +			    u64 __user **uind,
> +			    unsigned int *total)
> +{
> +	/*
> +	 * Ignore registers we trap but don't save,
> +	 * and for which no custom user accessor is provided.
> +	 */
> +	if (!(rd->reg || rd->get_user))
> +		return 0;
> +
> +	if (!copy_reg_to_user(rd, uind))
> +		return -EFAULT;
> +
> +	(*total)++;
> +	return 0;
> +}
> +
>  /* Assumed ordered tables, see kvm_sys_reg_table_init. */
>  static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
>  {
>  	const struct sys_reg_desc *i1, *i2, *end1, *end2;
>  	unsigned int total = 0;
>  	size_t num;
> +	int err;
>
>  	/* We check for duplicates here, to allow arch-specific overrides. */
>  	i1 = get_target_table(vcpu->arch.target, true, &num);
> @@ -2098,21 +2298,13 @@ static int walk_sys_regs(struct kvm_vcpu *vcpu, u64 __user *uind)
>  	while (i1 || i2) {
>  		int cmp = cmp_sys_reg(i1, i2);
>  		/* target-specific overrides generic entry. */
> -		if (cmp <= 0) {
> -			/* Ignore registers we trap but don't save. */
> -			if (i1->reg) {
> -				if (!copy_reg_to_user(i1, &uind))
> -					return -EFAULT;
> -				total++;
> -			}
> -		} else {
> -			/* Ignore registers we trap but don't save. */
> -			if (i2->reg) {
> -				if (!copy_reg_to_user(i2, &uind))
> -					return -EFAULT;
> -				total++;
> -			}
> -		}
> +		if (cmp <= 0)
> +			err = walk_one_sys_reg(i1, &uind, &total);
> +		else
> +			err = walk_one_sys_reg(i2, &uind, &total);
> +
> +		if (err)
> +			return err;
>
>  		if (cmp <= 0 && ++i1 == end1)
>  			i1 = NULL;


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 9:33 a.m. UTC | #2
Dave Martin <Dave.Martin@arm.com> writes:

> update_cpu_features() currently cannot tell whether it is being
> called during early or late secondary boot.  This doesn't
> desperately matter for anything it currently does.
>
> However, SVE will need to know here whether the set of available
> vector lengths is fixed or still to be determined when booting a
> CPU so that it can be updated appropriately.
>
> This patch simply moves the sys_caps_initialised stuff to the top
> of the file so that it can be used more widely.  There doesn't seem to
> be a more obvious place to put it.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm64/kernel/cpufeature.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index cd52d36..43ba8df 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -51,6 +51,21 @@ unsigned int compat_elf_hwcap2 __read_mostly;
>  DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
>  EXPORT_SYMBOL(cpu_hwcaps);
>
> +/*
> + * Flag to indicate if we have computed the system wide
> + * capabilities based on the boot time active CPUs. This
> + * will be used to determine if a new booting CPU should
> + * go through the verification process to make sure that it
> + * supports the system capabilities, without using a hotplug
> + * notifier.
> + */
> +static bool sys_caps_initialised;
> +
> +static inline void set_sys_caps_initialised(void)
> +{
> +	sys_caps_initialised = true;
> +}
> +
>  static int dump_cpu_hwcaps(struct notifier_block *self, unsigned long v, void *p)
>  {
>  	/* file-wide pr_fmt adds "CPU features: " prefix */
> @@ -1041,21 +1056,6 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
>  }
>
>  /*
> - * Flag to indicate if we have computed the system wide
> - * capabilities based on the boot time active CPUs. This
> - * will be used to determine if a new booting CPU should
> - * go through the verification process to make sure that it
> - * supports the system capabilities, without using a hotplug
> - * notifier.
> - */
> -static bool sys_caps_initialised;
> -
> -static inline void set_sys_caps_initialised(void)
> -{
> -	sys_caps_initialised = true;
> -}
> -
> -/*
>   * Check for CPU features that are used in early boot
>   * based on the Boot CPU value.
>   */


--
Alex Bennée
Suzuki K Poulose Sept. 14, 2017, 9:35 a.m. UTC | #3
On 31/08/17 18:00, Dave Martin wrote:
> update_cpu_features() currently cannot tell whether it is being
> called during early or late secondary boot.  This doesn't
> desperately matter for anything it currently does.
>
> However, SVE will need to know here whether the set of available
> vector lengths is fixed or still to be determined when booting a
> CPU so that it can be updated appropriately.
>
> This patch simply moves the sys_caps_initialised stuff to the top
> of the file so that it can be used more widely.  There doesn't seem to
> be a more obvious place to put it.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
> ---
>  arch/arm64/kernel/cpufeature.c | 30 +++++++++++++++---------------
>  1 file changed, 15 insertions(+), 15 deletions(-)
>
> diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
> index cd52d36..43ba8df 100644
> --- a/arch/arm64/kernel/cpufeature.c
> +++ b/arch/arm64/kernel/cpufeature.c
> @@ -51,6 +51,21 @@ unsigned int compat_elf_hwcap2 __read_mostly;
>  DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
>  EXPORT_SYMBOL(cpu_hwcaps);
>
> +/*
> + * Flag to indicate if we have computed the system wide
> + * capabilities based on the boot time active CPUs. This
> + * will be used to determine if a new booting CPU should
> + * go through the verification process to make sure that it
> + * supports the system capabilities, without using a hotplug
> + * notifier.
> + */
> +static bool sys_caps_initialised;
> +
> +static inline void set_sys_caps_initialised(void)
> +{
> +	sys_caps_initialised = true;
> +}
> +
>  static int dump_cpu_hwcaps(struct notifier_block *self, unsigned long v, void *p)
>  {
>  	/* file-wide pr_fmt adds "CPU features: " prefix */
> @@ -1041,21 +1056,6 @@ void __init enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
>  }
>
>  /*
> - * Flag to indicate if we have computed the system wide
> - * capabilities based on the boot time active CPUs. This
> - * will be used to determine if a new booting CPU should
> - * go through the verification process to make sure that it
> - * supports the system capabilities, without using a hotplug
> - * notifier.
> - */
> -static bool sys_caps_initialised;
> -
> -static inline void set_sys_caps_initialised(void)
> -{
> -	sys_caps_initialised = true;
> -}

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Alex Bennée Sept. 14, 2017, 10:52 a.m. UTC | #4
Dave Martin <Dave.Martin@arm.com> writes:

> Kernel-mode NEON will corrupt the SVE vector registers, due to the
> way they alias the FPSIMD vector registers in the hardware.
>
> This patch ensures that any live SVE register content for the task
> is saved by kernel_neon_begin().  The data will be restored in the
> usual way on return to userspace.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm64/kernel/fpsimd.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index cea05a7..dd89acf 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -744,8 +744,10 @@ void kernel_neon_begin(void)
>  	__this_cpu_write(kernel_neon_busy, true);
>
>  	/* Save unsaved task fpsimd state, if any: */
> -	if (current->mm && !test_and_set_thread_flag(TIF_FOREIGN_FPSTATE))
> -		fpsimd_save_state(&current->thread.fpsimd_state);
> +	if (current->mm) {
> +		task_fpsimd_save();
> +		set_thread_flag(TIF_FOREIGN_FPSTATE);
> +	}
>
>  	/* Invalidate any task state remaining in the fpsimd regs: */
>  	__this_cpu_write(fpsimd_last_state, NULL);


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 1:02 p.m. UTC | #5
Dave Martin <Dave.Martin@arm.com> writes:

> This patch adds two arm64-specific prctls, to permit userspace to
> control its vector length:
>
>  * PR_SVE_SET_VL: set the thread's SVE vector length and vector
>    length inheritance mode.
>
>  * PR_SVE_GET_VL: get the same information.
>
> Although these calls shadow instruction set features in the SVE
> architecture, these prctls provide additional control: the vector
> length inheritance mode is Linux-specific and nothing to do with
> the architecture, and the architecture does not permit EL0 to set
> its own vector length directly.  Both can be used in portable tools
> without requiring the use of SVE instructions.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm64/include/asm/fpsimd.h    | 14 ++++++++++++
>  arch/arm64/include/asm/processor.h |  4 ++++
>  arch/arm64/kernel/fpsimd.c         | 46 ++++++++++++++++++++++++++++++++++++++
>  include/uapi/linux/prctl.h         |  4 ++++
>  kernel/sys.c                       | 12 ++++++++++
>  5 files changed, 80 insertions(+)
>
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index 2723cca..d084968 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -17,6 +17,7 @@
>  #define __ASM_FP_H
>
>  #include <asm/ptrace.h>
> +#include <asm/errno.h>
>
>  #ifndef __ASSEMBLY__
>
> @@ -99,6 +100,9 @@ extern void sve_sync_from_fpsimd_zeropad(struct task_struct *task);
>  extern int sve_set_vector_length(struct task_struct *task,
>  				 unsigned long vl, unsigned long flags);
>
> +extern int sve_set_current_vl(unsigned long arg);
> +extern int sve_get_current_vl(void);
> +
>  extern void __init sve_init_vq_map(void);
>  extern void sve_update_vq_map(void);
>  extern int sve_verify_vq_map(void);
> @@ -114,6 +118,16 @@ static void __maybe_unused sve_sync_to_fpsimd(struct task_struct *task) { }
>  static void __maybe_unused sve_sync_from_fpsimd_zeropad(
>  	struct task_struct *task) { }
>
> +static int __maybe_unused sve_set_current_vl(unsigned long arg)
> +{
> +	return -EINVAL;
> +}
> +
> +static int __maybe_unused sve_get_current_vl(void)
> +{
> +	return -EINVAL;
> +}
> +
>  static void __maybe_unused sve_init_vq_map(void) { }
>  static void __maybe_unused sve_update_vq_map(void) { }
>  static int __maybe_unused sve_verify_vq_map(void) { return 0; }
> diff --git a/arch/arm64/include/asm/processor.h b/arch/arm64/include/asm/processor.h
> index 3faceac..df66452 100644
> --- a/arch/arm64/include/asm/processor.h
> +++ b/arch/arm64/include/asm/processor.h
> @@ -197,4 +197,8 @@ static inline void spin_lock_prefetch(const void *ptr)
>  int cpu_enable_pan(void *__unused);
>  int cpu_enable_cache_maint_trap(void *__unused);
>
> +/* Userspace interface for PR_SVE_{SET,GET}_VL prctl()s: */
> +#define SVE_SET_VL(arg)	sve_set_current_vl(arg)
> +#define SVE_GET_VL()	sve_get_current_vl()
> +
>  #endif /* __ASM_PROCESSOR_H */
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index 361c019..42e8331 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -27,6 +27,7 @@
>  #include <linux/kernel.h>
>  #include <linux/init.h>
>  #include <linux/percpu.h>
> +#include <linux/prctl.h>
>  #include <linux/preempt.h>
>  #include <linux/prctl.h>
>  #include <linux/ptrace.h>
> @@ -420,6 +421,51 @@ int sve_set_vector_length(struct task_struct *task,
>  	return 0;
>  }
>
> +/*
> + * Encode the current vector length and flags for return.
> + * This is only required for prctl(): ptrace has separate fields
> + */
> +static int sve_prctl_status(void)
> +{
> +	int ret = current->thread.sve_vl;
> +
> +	if (test_thread_flag(TIF_SVE_VL_INHERIT))
> +		ret |= PR_SVE_VL_INHERIT;
> +
> +	return ret;
> +}
> +
> +/* PR_SVE_SET_VL */
> +int sve_set_current_vl(unsigned long arg)
> +{
> +	unsigned long vl, flags;
> +	int ret;
> +
> +	vl = arg & PR_SVE_VL_LEN_MASK;
> +	flags = arg & ~vl;
> +
> +	if (!system_supports_sve())
> +		return -EINVAL;
> +
> +	preempt_disable();
> +	ret = sve_set_vector_length(current, vl, flags);
> +	preempt_enable();
> +
> +	if (ret)
> +		return ret;
> +
> +	return sve_prctl_status();
> +}
> +
> +/* PR_SVE_GET_VL */
> +int sve_get_current_vl(void)
> +{
> +	if (!system_supports_sve())
> +		return -EINVAL;
> +
> +	return sve_prctl_status();
> +}
> +
>  static unsigned long *sve_alloc_vq_map(void)
>  {
>  	return kzalloc(BITS_TO_LONGS(SVE_VQ_MAX) * sizeof(unsigned long),
> diff --git a/include/uapi/linux/prctl.h b/include/uapi/linux/prctl.h
> index 1b64901..1ef9370 100644
> --- a/include/uapi/linux/prctl.h
> +++ b/include/uapi/linux/prctl.h
> @@ -198,7 +198,11 @@ struct prctl_mm_map {
>  # define PR_CAP_AMBIENT_CLEAR_ALL	4
>
>  /* arm64 Scalable Vector Extension controls */
> +/* Flag values must be kept in sync with ptrace NT_ARM_SVE interface */
> +#define PR_SVE_SET_VL			48	/* set task vector length */
>  # define PR_SVE_SET_VL_ONEXEC		(1 << 18) /* defer effect until exec */
> +#define PR_SVE_GET_VL			49	/* get task vector length */
> +/* Bits common to PR_SVE_SET_VL and PR_SVE_GET_VL */
>  # define PR_SVE_VL_LEN_MASK		0xffff
>  # define PR_SVE_VL_INHERIT		(1 << 17) /* inherit across exec */
>
> diff --git a/kernel/sys.c b/kernel/sys.c
> index 2855ee7..f8215a6 100644
> --- a/kernel/sys.c
> +++ b/kernel/sys.c
> @@ -110,6 +110,12 @@
>  #ifndef SET_FP_MODE
>  # define SET_FP_MODE(a,b)	(-EINVAL)
>  #endif
> +#ifndef SVE_SET_VL
> +# define SVE_SET_VL(a)		(-EINVAL)
> +#endif
> +#ifndef SVE_GET_VL
> +# define SVE_GET_VL()		(-EINVAL)
> +#endif
>
>  /*
>   * this is where the system-wide overflow UID and GID are defined, for
> @@ -2389,6 +2395,12 @@ SYSCALL_DEFINE5(prctl, int, option, unsigned long, arg2, unsigned long, arg3,
>  	case PR_GET_FP_MODE:
>  		error = GET_FP_MODE(me);
>  		break;
> +	case PR_SVE_SET_VL:
> +		error = SVE_SET_VL(arg2);
> +		break;
> +	case PR_SVE_GET_VL:
> +		error = SVE_GET_VL();
> +		break;
>  	default:
>  		error = -EINVAL;
>  		break;


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 1:28 p.m. UTC | #6
Dave Martin <Dave.Martin@arm.com> writes:

> Until KVM has full SVE support, guests must not be allowed to
> execute SVE instructions.
>
> This patch enables the necessary traps, and also ensures that the
> traps are disabled again on exit from the guest so that the host
> can still use SVE if it wants to.
>
> This patch introduces another instance of
> __this_cpu_write(fpsimd_last_state, NULL), so this flush operation
> is abstracted out as a separate helper fpsimd_flush_cpu_state().
> Other instances are ported appropriately.
>
> As a side effect of this refactoring, a this_cpu_write() in
> fpsimd_cpu_pm_notifier() is changed to __this_cpu_write().  This
> should be fine, since cpu_pm_enter() is supposed to be called only
> with interrupts disabled.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Ard Biesheuvel <ard.biesheuvel@linaro.org>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

>
> ---
>
> Changes since v1
> ----------------
>
> Requested by Marc Zyngier:
>
> * Avoid the verbose arithmetic for CPTR_EL2_DEFAULT, and just
> describe it in terms of the set of bits known to be RES1 in
> CPTR_EL2.
>
> Other:
>
> * Fixup to drop task SVE state cached in the CPU registers across
> guest entry/exit.
>
> Without this, we may enter an EL0 process with wrong data in the
> extended SVE bits and/or wrong trap configuration.
>
> This is not a problem for the FPSIMD part of the state because KVM
> explicitly restores the host FPSIMD state on guest exit; but this
> restore is sufficient to corrupt the extra SVE bits even if nothing
> else does.
>
> * The fpsimd_flush_cpu_state() function, which was supposed to abstract
> the underlying flush operation, wasn't used. [sparse]
>
> This patch is now ported to use it.  Other users of the same idiom are
> ported too (which was the original intention).
>
> fpsimd_flush_cpu_state() is marked inline, since all users are
> ifdef'd and the function may be unused.  Plus, it's trivially
> suitable for inlining.
> ---
>  arch/arm/include/asm/kvm_host.h   |  3 +++
>  arch/arm64/include/asm/fpsimd.h   |  1 +
>  arch/arm64/include/asm/kvm_arm.h  |  4 +++-
>  arch/arm64/include/asm/kvm_host.h | 11 +++++++++++
>  arch/arm64/kernel/fpsimd.c        | 31 +++++++++++++++++++++++++++++--
>  arch/arm64/kvm/hyp/switch.c       |  6 +++---
>  virt/kvm/arm/arm.c                |  3 +++
>  7 files changed, 53 insertions(+), 6 deletions(-)
>
> diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
> index 127e2dd..fa4a442 100644
> --- a/arch/arm/include/asm/kvm_host.h
> +++ b/arch/arm/include/asm/kvm_host.h
> @@ -299,4 +299,7 @@ int kvm_arm_vcpu_arch_get_attr(struct kvm_vcpu *vcpu,
>  int kvm_arm_vcpu_arch_has_attr(struct kvm_vcpu *vcpu,
>  			       struct kvm_device_attr *attr);
>
> +/* All host FP/SIMD state is restored on guest exit, so nothing to save: */
> +static inline void kvm_fpsimd_flush_cpu_state(void) {}
> +
>  #endif /* __ARM_KVM_HOST_H__ */
> diff --git a/arch/arm64/include/asm/fpsimd.h b/arch/arm64/include/asm/fpsimd.h
> index d084968..5605fc1 100644
> --- a/arch/arm64/include/asm/fpsimd.h
> +++ b/arch/arm64/include/asm/fpsimd.h
> @@ -74,6 +74,7 @@ extern void fpsimd_restore_current_state(void);
>  extern void fpsimd_update_current_state(struct fpsimd_state *state);
>
>  extern void fpsimd_flush_task_state(struct task_struct *target);
> +extern void sve_flush_cpu_state(void);
>
>  /* Maximum VL that SVE VL-agnostic software can transparently support */
>  #define SVE_VL_ARCH_MAX 0x100
> diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
> index dbf0537..7f069ff 100644
> --- a/arch/arm64/include/asm/kvm_arm.h
> +++ b/arch/arm64/include/asm/kvm_arm.h
> @@ -186,7 +186,8 @@
>  #define CPTR_EL2_TTA	(1 << 20)
>  #define CPTR_EL2_TFP	(1 << CPTR_EL2_TFP_SHIFT)
>  #define CPTR_EL2_TZ	(1 << 8)
> -#define CPTR_EL2_DEFAULT	0x000033ff
> +#define CPTR_EL2_RES1	0x000032ff /* known RES1 bits in CPTR_EL2 */
> +#define CPTR_EL2_DEFAULT	CPTR_EL2_RES1
>
>  /* Hyp Debug Configuration Register bits */
>  #define MDCR_EL2_TPMS		(1 << 14)
> @@ -237,5 +238,6 @@
>
>  #define CPACR_EL1_FPEN		(3 << 20)
>  #define CPACR_EL1_TTA		(1 << 28)
> +#define CPACR_EL1_DEFAULT	(CPACR_EL1_FPEN | CPACR_EL1_ZEN_EL1EN)
>
>  #endif /* __ARM64_KVM_ARM_H__ */
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index d686300..05d8373 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -25,6 +25,7 @@
>  #include <linux/types.h>
>  #include <linux/kvm_types.h>
>  #include <asm/cpufeature.h>
> +#include <asm/fpsimd.h>
>  #include <asm/kvm.h>
>  #include <asm/kvm_asm.h>
>  #include <asm/kvm_mmio.h>
> @@ -390,4 +391,14 @@ static inline void __cpu_init_stage2(void)
>  		  "PARange is %d bits, unsupported configuration!", parange);
>  }
>
> +/*
> + * All host FP/SIMD state is restored on guest exit, so nothing needs
> + * doing here except in the SVE case:
> + */
> +static inline void kvm_fpsimd_flush_cpu_state(void)
> +{
> +	if (system_supports_sve())
> +		sve_flush_cpu_state();
> +}
> +
>  #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/kernel/fpsimd.c b/arch/arm64/kernel/fpsimd.c
> index b430ee0..7837ced 100644
> --- a/arch/arm64/kernel/fpsimd.c
> +++ b/arch/arm64/kernel/fpsimd.c
> @@ -875,6 +875,33 @@ void fpsimd_flush_task_state(struct task_struct *t)
>  	t->thread.fpsimd_state.cpu = NR_CPUS;
>  }
>
> +static inline void fpsimd_flush_cpu_state(void)
> +{
> +	__this_cpu_write(fpsimd_last_state, NULL);
> +}
> +
> +/*
> + * Invalidate any task SVE state currently held in this CPU's regs.
> + *
> + * This is used to prevent the kernel from trying to reuse SVE register data
> + * that is destroyed by KVM guest enter/exit.  This function should go away when
> + * KVM SVE support is implemented.  Don't use it for anything else.
> + */
> +#ifdef CONFIG_ARM64_SVE
> +void sve_flush_cpu_state(void)
> +{
> +	struct fpsimd_state *const fpstate = __this_cpu_read(fpsimd_last_state);
> +	struct task_struct *tsk;
> +
> +	if (!fpstate)
> +		return;
> +
> +	tsk = container_of(fpstate, struct task_struct, thread.fpsimd_state);
> +	if (test_tsk_thread_flag(tsk, TIF_SVE))
> +		fpsimd_flush_cpu_state();
> +}
> +#endif /* CONFIG_ARM64_SVE */
> +
>  #ifdef CONFIG_KERNEL_MODE_NEON
>
>  DEFINE_PER_CPU(bool, kernel_neon_busy);
> @@ -915,7 +942,7 @@ void kernel_neon_begin(void)
>  	}
>
>  	/* Invalidate any task state remaining in the fpsimd regs: */
> -	__this_cpu_write(fpsimd_last_state, NULL);
> +	fpsimd_flush_cpu_state();
>
>  	preempt_disable();
>
> @@ -1032,7 +1059,7 @@ static int fpsimd_cpu_pm_notifier(struct notifier_block *self,
>  	case CPU_PM_ENTER:
>  		if (current->mm)
>  			task_fpsimd_save();
> -		this_cpu_write(fpsimd_last_state, NULL);
> +		fpsimd_flush_cpu_state();
>  		break;
>  	case CPU_PM_EXIT:
>  		if (current->mm)
> diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> index 35a90b8..951f3eb 100644
> --- a/arch/arm64/kvm/hyp/switch.c
> +++ b/arch/arm64/kvm/hyp/switch.c
> @@ -48,7 +48,7 @@ static void __hyp_text __activate_traps_vhe(void)
>
>  	val = read_sysreg(cpacr_el1);
>  	val |= CPACR_EL1_TTA;
> -	val &= ~CPACR_EL1_FPEN;
> +	val &= ~(CPACR_EL1_FPEN | CPACR_EL1_ZEN);
>  	write_sysreg(val, cpacr_el1);
>
>  	write_sysreg(__kvm_hyp_vector, vbar_el1);
> @@ -59,7 +59,7 @@ static void __hyp_text __activate_traps_nvhe(void)
>  	u64 val;
>
>  	val = CPTR_EL2_DEFAULT;
> -	val |= CPTR_EL2_TTA | CPTR_EL2_TFP;
> +	val |= CPTR_EL2_TTA | CPTR_EL2_TFP | CPTR_EL2_TZ;
>  	write_sysreg(val, cptr_el2);
>  }
>
> @@ -117,7 +117,7 @@ static void __hyp_text __deactivate_traps_vhe(void)
>
>  	write_sysreg(mdcr_el2, mdcr_el2);
>  	write_sysreg(HCR_HOST_VHE_FLAGS, hcr_el2);
> -	write_sysreg(CPACR_EL1_FPEN, cpacr_el1);
> +	write_sysreg(CPACR_EL1_DEFAULT, cpacr_el1);
>  	write_sysreg(vectors, vbar_el1);
>  }
>
> diff --git a/virt/kvm/arm/arm.c b/virt/kvm/arm/arm.c
> index a39a1e1..af9f5da 100644
> --- a/virt/kvm/arm/arm.c
> +++ b/virt/kvm/arm/arm.c
> @@ -647,6 +647,9 @@ int kvm_arch_vcpu_ioctl_run(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  		 */
>  		preempt_disable();
>
> +		/* Flush FP/SIMD state that can't survive guest entry/exit */
> +		kvm_fpsimd_flush_cpu_state();
> +
>  		kvm_pmu_flush_hwstate(vcpu);
>
>  		kvm_timer_flush_hwstate(vcpu);


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 1:30 p.m. UTC | #7
Dave Martin <Dave.Martin@arm.com> writes:

> When trapping forbidden attempts by a guest to use SVE, we want the
> guest to see a trap consistent with SVE not being implemented.
>
> This patch injects an undefined instruction exception into the
> guest in response to such an exception.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

> ---
>  arch/arm64/kvm/handle_exit.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 17d8a16..e3e42d0 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -147,6 +147,13 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>
> +static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	/* Until SVE is supported for guests: */
> +	kvm_inject_undefined(vcpu);
> +	return 1;
> +}
> +
>  static exit_handle_fn arm_exit_handlers[] = {
>  	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
>  	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
> @@ -160,6 +167,7 @@ static exit_handle_fn arm_exit_handlers[] = {
>  	[ESR_ELx_EC_HVC64]	= handle_hvc,
>  	[ESR_ELx_EC_SMC64]	= handle_smc,
>  	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
> +	[ESR_ELx_EC_SVE]	= handle_sve,
>  	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
>  	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
>  	[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 1:31 p.m. UTC | #8
Dave Martin <Dave.Martin@arm.com> writes:

> When trapping forbidden attempts by a guest to use SVE, we want the
> guest to see a trap consistent with SVE not being implemented.
>
> This patch injects an undefined instruction exception into the
> guest in response to such an exception.

I do wonder if this should be merged with the previous trap enabling
patch though?

>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> ---
>  arch/arm64/kvm/handle_exit.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
>
> diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> index 17d8a16..e3e42d0 100644
> --- a/arch/arm64/kvm/handle_exit.c
> +++ b/arch/arm64/kvm/handle_exit.c
> @@ -147,6 +147,13 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>  	return 1;
>  }
>
> +static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> +{
> +	/* Until SVE is supported for guests: */
> +	kvm_inject_undefined(vcpu);
> +	return 1;
> +}
> +
>  static exit_handle_fn arm_exit_handlers[] = {
>  	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
>  	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
> @@ -160,6 +167,7 @@ static exit_handle_fn arm_exit_handlers[] = {
>  	[ESR_ELx_EC_HVC64]	= handle_hvc,
>  	[ESR_ELx_EC_SMC64]	= handle_smc,
>  	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
> +	[ESR_ELx_EC_SVE]	= handle_sve,
>  	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
>  	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
>  	[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,


--
Alex Bennée
Alex Bennée Sept. 14, 2017, 1:32 p.m. UTC | #9
Dave Martin <Dave.Martin@arm.com> writes:

> KVM guests cannot currently use SVE, because SVE is always
> configured to trap to EL2.
>
> However, a guest that sees SVE reported as present in
> ID_AA64PFR0_EL1 may legitimately expect that SVE works and try to
> use it.  Instead of working, the guest will receive an injected
> undef exception, which may cause the guest to oops or go into a
> spin.
>
> To avoid misleading the guest into believing that SVE will work,
> this patch masks out the SVE field from ID_AA64PFR0_EL1 when a
> guest attempts to read this register.  No support is explicitly
> added for ID_AA64ZFR0_EL1 either, so that is still emulated as
> reading as zero, which is consistent with SVE not being
> implemented.
>
> This is a temporary measure, and will be removed in a later series
> when full KVM support for SVE is implemented.
>
> Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>

Reviewed-by: Alex Bennée <alex.bennee@linaro.org>

>
> ---
>
> Changes since v1
> ----------------
>
> Requested by Marc Zyngier:
>
> * Use pr_err() instead of inventing "kvm_info_once" ad-hoc.
> ---
>  arch/arm64/kvm/sys_regs.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/sys_regs.c b/arch/arm64/kvm/sys_regs.c
> index b1f7552..a0ee9b0 100644
> --- a/arch/arm64/kvm/sys_regs.c
> +++ b/arch/arm64/kvm/sys_regs.c
> @@ -23,6 +23,7 @@
>  #include <linux/bsearch.h>
>  #include <linux/kvm_host.h>
>  #include <linux/mm.h>
> +#include <linux/printk.h>
>  #include <linux/uaccess.h>
>
>  #include <asm/cacheflush.h>
> @@ -897,8 +898,17 @@ static u64 read_id_reg(struct sys_reg_desc const *r, bool raz)
>  {
>  	u32 id = sys_reg((u32)r->Op0, (u32)r->Op1,
>  			 (u32)r->CRn, (u32)r->CRm, (u32)r->Op2);
> +	u64 val = raz ? 0 : read_sanitised_ftr_reg(id);
>
> -	return raz ? 0 : read_sanitised_ftr_reg(id);
> +	if (id == SYS_ID_AA64PFR0_EL1) {
> +		if (val & (0xfUL << ID_AA64PFR0_SVE_SHIFT))
> +			pr_err_once("kvm [%i]: SVE unsupported for guests, suppressing\n",
> +				    task_pid_nr(current));
> +
> +		val &= ~(0xfUL << ID_AA64PFR0_SVE_SHIFT);
> +	}
> +
> +	return val;
>  }
>
>  /* cpufeature ID register access trap handlers */


--
Alex Bennée
Dave Martin Sept. 15, 2017, 12:04 a.m. UTC | #10
On Wed, Sep 13, 2017 at 03:37:42PM +0100, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:

[...]

> > diff --git a/arch/arm64/kvm/hyp/switch.c b/arch/arm64/kvm/hyp/switch.c
> > index 945e79c..35a90b8 100644
> > --- a/arch/arm64/kvm/hyp/switch.c
> > +++ b/arch/arm64/kvm/hyp/switch.c
> > @@ -81,11 +81,17 @@ static void __hyp_text __activate_traps(struct kvm_vcpu *vcpu)
> >  	 * it will cause an exception.
> >  	 */
> >  	val = vcpu->arch.hcr_el2;
> > +
> >  	if (!(val & HCR_RW) && system_supports_fpsimd()) {
> >  		write_sysreg(1 << 30, fpexc32_el2);
> >  		isb();
> >  	}
> > +
> > +	if (val & HCR_RW) /* for AArch64 only: */
> > +		val |= HCR_TID3; /* TID3: trap feature register accesses */
> > +
> 
> I wondered as this is the hyp switch can we make use of testing val &
> HCR_RW for both this and above. But it seems minimal in the generated
> code so probably not.

I figured that the code was cleaner this way, since they're independent
bits of code that both happen to be applicable only to AArch64 guests.

[...]

> > +
> > +	/*
> > +	 * ID regs: all ID_SANITISED() entries here must have corresponding
> > +	 * entries in arm64_ftr_regs[].
> > +	 */
> 
> arm64_ftr_regs isn't updated in this commit. Does this break bisection?

This commit only adds ID_SANITISED() entries for regs that are already
present in arm64_ftr_regs[].  (If you spot any that are missing, give me
a shout...)

SVE only adds one new ID register, ID_AA64ZFR0_EL1 -- but SVE defines no
fields in there yet, so I just leave it ID_UNALLOCATED() which will
cause it to read as zero for the guest.

> > +
> > +	/* AArch64 mappings of the AArch32 ID registers */
> > +	/* CRm=1 */
> > +	ID_SANITISED(ID_PFR0_EL1),
> > +	ID_SANITISED(ID_PFR1_EL1),

[...]

> > +	/* CRm=7 */
> > +	ID_SANITISED(ID_AA64MMFR0_EL1),
> > +	ID_SANITISED(ID_AA64MMFR1_EL1),
> > +	ID_SANITISED(ID_AA64MMFR2_EL1),
> > +	ID_UNALLOCATED(7,3),
> > +	ID_UNALLOCATED(7,4),
> > +	ID_UNALLOCATED(7,5),
> > +	ID_UNALLOCATED(7,6),
> > +	ID_UNALLOCATED(7,7),
> > +
> 
> I think it might be worthwhile adding a test to kvm-unit-tests to walk
> all the ID registers to check this.

Sounds sensible, I'll take a look at that.

[...]

Cheers
---Dave
Dave Martin Sept. 29, 2017, 1 p.m. UTC | #11
On Thu, Sep 14, 2017 at 02:31:13PM +0100, Alex Bennée wrote:
> 
> Dave Martin <Dave.Martin@arm.com> writes:
> 
> > When trapping forbidden attempts by a guest to use SVE, we want the
> > guest to see a trap consistent with SVE not being implemented.
> >
> > This patch injects an undefined instruction exception into the
> > guest in response to such an exception.
> 
> I do wonder if this should be merged with the previous trap enabling
> patch though?

Yes, that would make sense now I look at it.

Can I keep your Reviewed-by on the combined patch?

Cheers
---Dave

> 
> >
> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
> > ---
> >  arch/arm64/kvm/handle_exit.c | 8 ++++++++
> >  1 file changed, 8 insertions(+)
> >
> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
> > index 17d8a16..e3e42d0 100644
> > --- a/arch/arm64/kvm/handle_exit.c
> > +++ b/arch/arm64/kvm/handle_exit.c
> > @@ -147,6 +147,13 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
> >  	return 1;
> >  }
> >
> > +static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
> > +{
> > +	/* Until SVE is supported for guests: */
> > +	kvm_inject_undefined(vcpu);
> > +	return 1;
> > +}
> > +
> >  static exit_handle_fn arm_exit_handlers[] = {
> >  	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
> >  	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
> > @@ -160,6 +167,7 @@ static exit_handle_fn arm_exit_handlers[] = {
> >  	[ESR_ELx_EC_HVC64]	= handle_hvc,
> >  	[ESR_ELx_EC_SMC64]	= handle_smc,
> >  	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
> > +	[ESR_ELx_EC_SVE]	= handle_sve,
> >  	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
> >  	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
> >  	[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,
> 
> 
> --
> Alex Bennée
> 
> _______________________________________________
> linux-arm-kernel mailing list
> linux-arm-kernel@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
Alex Bennée Sept. 29, 2017, 2:43 p.m. UTC | #12
Dave Martin <Dave.Martin@arm.com> writes:

> On Thu, Sep 14, 2017 at 02:31:13PM +0100, Alex Bennée wrote:
>>
>> Dave Martin <Dave.Martin@arm.com> writes:
>>
>> > When trapping forbidden attempts by a guest to use SVE, we want the
>> > guest to see a trap consistent with SVE not being implemented.
>> >
>> > This patch injects an undefined instruction exception into the
>> > guest in response to such an exception.
>>
>> I do wonder if this should be merged with the previous trap enabling
>> patch though?
>
> Yes, that would make sense now I look at it.
>
> Can I keep your Reviewed-by on the combined patch?

Sure.

>
> Cheers
> ---Dave
>
>>
>> >
>> > Signed-off-by: Dave Martin <Dave.Martin@arm.com>
>> > ---
>> >  arch/arm64/kvm/handle_exit.c | 8 ++++++++
>> >  1 file changed, 8 insertions(+)
>> >
>> > diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
>> > index 17d8a16..e3e42d0 100644
>> > --- a/arch/arm64/kvm/handle_exit.c
>> > +++ b/arch/arm64/kvm/handle_exit.c
>> > @@ -147,6 +147,13 @@ static int kvm_handle_unknown_ec(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> >  	return 1;
>> >  }
>> >
>> > +static int handle_sve(struct kvm_vcpu *vcpu, struct kvm_run *run)
>> > +{
>> > +	/* Until SVE is supported for guests: */
>> > +	kvm_inject_undefined(vcpu);
>> > +	return 1;
>> > +}
>> > +
>> >  static exit_handle_fn arm_exit_handlers[] = {
>> >  	[0 ... ESR_ELx_EC_MAX]	= kvm_handle_unknown_ec,
>> >  	[ESR_ELx_EC_WFx]	= kvm_handle_wfx,
>> > @@ -160,6 +167,7 @@ static exit_handle_fn arm_exit_handlers[] = {
>> >  	[ESR_ELx_EC_HVC64]	= handle_hvc,
>> >  	[ESR_ELx_EC_SMC64]	= handle_smc,
>> >  	[ESR_ELx_EC_SYS64]	= kvm_handle_sys_reg,
>> > +	[ESR_ELx_EC_SVE]	= handle_sve,
>> >  	[ESR_ELx_EC_IABT_LOW]	= kvm_handle_guest_abort,
>> >  	[ESR_ELx_EC_DABT_LOW]	= kvm_handle_guest_abort,
>> >  	[ESR_ELx_EC_SOFTSTP_LOW]= kvm_handle_guest_debug,
>>
>>
>> --
>> Alex Bennée
>>
>> _______________________________________________
>> linux-arm-kernel mailing list
>> linux-arm-kernel@lists.infradead.org
>> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


--
Alex Bennée