[0/3] KVM/ARM updates for 4.10-rc4

Message ID 1484307093-29153-1-git-send-email-marc.zyngier@arm.com
State New

Pull-request

git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git tags/kvm-arm-for-4.10-rc4

Message

Marc Zyngier Jan. 13, 2017, 11:31 a.m. UTC
Radim, Paolo,

Here's the KVM/ARM updates for 4.10-rc4. Two timer fixes, and one vgic
fix for a deadlock that's been reported this week (which should land
into stable).

Please pull.

Thanks,

	M.

The following changes since commit a121103c922847ba5010819a3f250f1f7fc84ab8:

  Linux 4.10-rc3 (2017-01-08 14:18:17 -0800)

are available in the git repository at:

  git://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm.git tags/kvm-arm-for-4.10-rc4

for you to fetch changes up to 1193e6aeecb36c74c48c7cd0f641acbbed9ddeef:

  KVM: arm/arm64: vgic: Fix deadlock on error handling (2017-01-13 11:19:35 +0000)

----------------------------------------------------------------
KVM/ARM updates for 4.10-rc4

- Fix for timer setup on VHE machines
- Drop spurious warning when the timer races against
  the vcpu running again
- Prevent a vgic deadlock when the initialization fails

----------------------------------------------------------------
Christoffer Dall (1):
      KVM: arm/arm64: Fix occasional warning from the timer work function

Jintack Lim (1):
      KVM: arm64: Access CNTHCTL_EL2 bit fields correctly on VHE systems

Marc Zyngier (1):
      KVM: arm/arm64: vgic: Fix deadlock on error handling

 arch/arm/include/asm/virt.h   |  5 +++++
 arch/arm/kvm/arm.c            |  3 +++
 arch/arm64/include/asm/virt.h |  9 +++++++++
 include/kvm/arm_arch_timer.h  |  1 +
 virt/kvm/arm/arch_timer.c     | 26 +++++++++++++++++++++++---
 virt/kvm/arm/hyp/timer-sr.c   | 33 +++++++++++++++++++++------------
 virt/kvm/arm/vgic/vgic-init.c | 18 +++++++++++++-----
 virt/kvm/arm/vgic/vgic-v2.c   |  2 --
 virt/kvm/arm/vgic/vgic-v3.c   |  2 --
 9 files changed, 75 insertions(+), 24 deletions(-)

Comments

Christoffer Dall Jan. 13, 2017, 12:36 p.m. UTC | #1
On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> From: Jintack Lim <jintack@cs.columbia.edu>
> 
> Current KVM world switch code is unintentionally setting wrong bits to
> CNTHCTL_EL2 when E2H == 1, which may allow guest OS to access physical
> timer.  Bit positions of CNTHCTL_EL2 are changing depending on
> HCR_EL2.E2H bit.  EL1PCEN and EL1PCTEN are 1st and 0th bits when E2H is
> not set, but they are 11th and 10th bits respectively when E2H is set.
> 
> In fact, on VHE we only need to set those bits once, not for every world
> switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE ==
> 1, which makes those bits have no effect for the host kernel execution.
> So we just set those bits once for guests, and that's it.
> 
> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> ---
>  arch/arm/include/asm/virt.h   |  5 +++++
>  arch/arm/kvm/arm.c            |  3 +++
>  arch/arm64/include/asm/virt.h |  9 +++++++++
>  include/kvm/arm_arch_timer.h  |  1 +
>  virt/kvm/arm/arch_timer.c     | 23 +++++++++++++++++++++++
>  virt/kvm/arm/hyp/timer-sr.c   | 33 +++++++++++++++++++++------------
>  6 files changed, 62 insertions(+), 12 deletions(-)
> 
> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
> index a2e75b8..6dae195 100644
> --- a/arch/arm/include/asm/virt.h
> +++ b/arch/arm/include/asm/virt.h
> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return false;
>  }
>  
> +static inline bool has_vhe(void)
> +{
> +	return false;
> +}
> +
>  /* The section containing the hypervisor idmap text */
>  extern char __hyp_idmap_text_start[];
>  extern char __hyp_idmap_text_end[];
> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> index 1167678..9d74464 100644
> --- a/arch/arm/kvm/arm.c
> +++ b/arch/arm/kvm/arm.c
> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
>  	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
>  	__cpu_init_stage2();
>  
> +	if (is_kernel_in_hyp_mode())
> +		kvm_timer_init_vhe();
> +
>  	kvm_arm_init_debug();
>  }
>  
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index fea1073..439f6b5 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -47,6 +47,7 @@
>  #include <asm/ptrace.h>
>  #include <asm/sections.h>
>  #include <asm/sysreg.h>
> +#include <asm/cpufeature.h>
>  
>  /*
>   * __boot_cpu_mode records what mode CPUs were booted in.
> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>  }
>  
> +static inline bool has_vhe(void)
> +{
> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> +		return true;
> +
> +	return false;
> +}
> +

I was experimenting with using has_vhe for some of the optimization code
I was writing, and I saw a hyp crash as a result.  That made me wonder
if this is really safe in Hyp mode?

Specifically, there is no guarantee that this will actually be inlined
in the caller, right?  At least that's what I can gather from trying to
understand the semantics of the inline keyword in the GCC manual.

Further, are we guaranteed that the static branch gets compiled into
something that doesn't actually look at cpu_hwcap_keys, which is not
mapped in hyp mode?

Thanks,
-Christoffer

>  #ifdef CONFIG_ARM64_VHE
>  extern void verify_cpu_run_el(void);
>  #else
> diff --git a/include/kvm/arm_arch_timer.h b/include/kvm/arm_arch_timer.h
> index b717ed9..5c970ce 100644
> --- a/include/kvm/arm_arch_timer.h
> +++ b/include/kvm/arm_arch_timer.h
> @@ -76,4 +76,5 @@ void kvm_timer_unschedule(struct kvm_vcpu *vcpu);
>  
>  void kvm_timer_vcpu_put(struct kvm_vcpu *vcpu);
>  
> +void kvm_timer_init_vhe(void);
>  #endif
> diff --git a/virt/kvm/arm/arch_timer.c b/virt/kvm/arm/arch_timer.c
> index a7fe606..6a084cd 100644
> --- a/virt/kvm/arm/arch_timer.c
> +++ b/virt/kvm/arm/arch_timer.c
> @@ -24,6 +24,7 @@
>  
>  #include <clocksource/arm_arch_timer.h>
>  #include <asm/arch_timer.h>
> +#include <asm/kvm_hyp.h>
>  
>  #include <kvm/arm_vgic.h>
>  #include <kvm/arm_arch_timer.h>
> @@ -509,3 +510,25 @@ void kvm_timer_init(struct kvm *kvm)
>  {
>  	kvm->arch.timer.cntvoff = kvm_phys_timer_read();
>  }
> +
> +/*
> + * On VHE system, we only need to configure trap on physical timer and counter
> + * accesses in EL0 and EL1 once, not for every world switch.
> + * The host kernel runs at EL2 with HCR_EL2.TGE == 1,
> + * and this makes those bits have no effect for the host kernel execution.
> + */
> +void kvm_timer_init_vhe(void)
> +{
> +	/* When HCR_EL2.E2H ==1, EL1PCEN and EL1PCTEN are shifted by 10 */
> +	u32 cnthctl_shift = 10;
> +	u64 val;
> +
> +	/*
> +	 * Disallow physical timer access for the guest.
> +	 * Physical counter access is allowed.
> +	 */
> +	val = read_sysreg(cnthctl_el2);
> +	val &= ~(CNTHCTL_EL1PCEN << cnthctl_shift);
> +	val |= (CNTHCTL_EL1PCTEN << cnthctl_shift);
> +	write_sysreg(val, cnthctl_el2);
> +}
> diff --git a/virt/kvm/arm/hyp/timer-sr.c b/virt/kvm/arm/hyp/timer-sr.c
> index 798866a..63e28dd 100644
> --- a/virt/kvm/arm/hyp/timer-sr.c
> +++ b/virt/kvm/arm/hyp/timer-sr.c
> @@ -35,10 +35,16 @@ void __hyp_text __timer_save_state(struct kvm_vcpu *vcpu)
>  	/* Disable the virtual timer */
>  	write_sysreg_el0(0, cntv_ctl);
>  
> -	/* Allow physical timer/counter access for the host */
> -	val = read_sysreg(cnthctl_el2);
> -	val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
> -	write_sysreg(val, cnthctl_el2);
> +	/*
> +	 * We don't need to do this for VHE since the host kernel runs in EL2
> +	 * with HCR_EL2.TGE ==1, which makes those bits have no impact.
> +	 */
> +	if (!has_vhe()) {
> +		/* Allow physical timer/counter access for the host */
> +		val = read_sysreg(cnthctl_el2);
> +		val |= CNTHCTL_EL1PCTEN | CNTHCTL_EL1PCEN;
> +		write_sysreg(val, cnthctl_el2);
> +	}
>  
>  	/* Clear cntvoff for the host */
>  	write_sysreg(0, cntvoff_el2);
> @@ -50,14 +56,17 @@ void __hyp_text __timer_restore_state(struct kvm_vcpu *vcpu)
>  	struct arch_timer_cpu *timer = &vcpu->arch.timer_cpu;
>  	u64 val;
>  
> -	/*
> -	 * Disallow physical timer access for the guest
> -	 * Physical counter access is allowed
> -	 */
> -	val = read_sysreg(cnthctl_el2);
> -	val &= ~CNTHCTL_EL1PCEN;
> -	val |= CNTHCTL_EL1PCTEN;
> -	write_sysreg(val, cnthctl_el2);
> +	/* Those bits are already configured at boot on VHE-system */
> +	if (!has_vhe()) {
> +		/*
> +		 * Disallow physical timer access for the guest
> +		 * Physical counter access is allowed
> +		 */
> +		val = read_sysreg(cnthctl_el2);
> +		val &= ~CNTHCTL_EL1PCEN;
> +		val |= CNTHCTL_EL1PCTEN;
> +		write_sysreg(val, cnthctl_el2);
> +	}
>  
>  	if (timer->enabled) {
>  		write_sysreg(kvm->arch.timer.cntvoff, cntvoff_el2);
> -- 
> 2.1.4
>
Christoffer Dall Jan. 13, 2017, 1:46 p.m. UTC | #2
On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
> 
> On 13/01/17 12:36, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> >> From: Jintack Lim <jintack@cs.columbia.edu>
> >>
> >> Current KVM world switch code is unintentionally setting wrong bits to
> >> CNTHCTL_EL2 when E2H == 1, which may allow guest OS to access physical
> >> timer.  Bit positions of CNTHCTL_EL2 are changing depending on
> >> HCR_EL2.E2H bit.  EL1PCEN and EL1PCTEN are 1st and 0th bits when E2H is
> >> not set, but they are 11th and 10th bits respectively when E2H is set.
> >>
> >> In fact, on VHE we only need to set those bits once, not for every world
> >> switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE ==
> >> 1, which makes those bits have no effect for the host kernel execution.
> >> So we just set those bits once for guests, and that's it.
> >>
> >> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >> ---
> >>  arch/arm/include/asm/virt.h   |  5 +++++
> >>  arch/arm/kvm/arm.c            |  3 +++
> >>  arch/arm64/include/asm/virt.h |  9 +++++++++
> >>  include/kvm/arm_arch_timer.h  |  1 +
> >>  virt/kvm/arm/arch_timer.c     | 23 +++++++++++++++++++++++
> >>  virt/kvm/arm/hyp/timer-sr.c   | 33 +++++++++++++++++++++------------
> >>  6 files changed, 62 insertions(+), 12 deletions(-)
> >>
> >> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
> >> index a2e75b8..6dae195 100644
> >> --- a/arch/arm/include/asm/virt.h
> >> +++ b/arch/arm/include/asm/virt.h
> >> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>  	return false;
> >>  }
> >>  
> >> +static inline bool has_vhe(void)
> >> +{
> >> +	return false;
> >> +}
> >> +
> >>  /* The section containing the hypervisor idmap text */
> >>  extern char __hyp_idmap_text_start[];
> >>  extern char __hyp_idmap_text_end[];
> >> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >> index 1167678..9d74464 100644
> >> --- a/arch/arm/kvm/arm.c
> >> +++ b/arch/arm/kvm/arm.c
> >> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
> >>  	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
> >>  	__cpu_init_stage2();
> >>  
> >> +	if (is_kernel_in_hyp_mode())
> >> +		kvm_timer_init_vhe();
> >> +
> >>  	kvm_arm_init_debug();
> >>  }
> >>  
> >> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> >> index fea1073..439f6b5 100644
> >> --- a/arch/arm64/include/asm/virt.h
> >> +++ b/arch/arm64/include/asm/virt.h
> >> @@ -47,6 +47,7 @@
> >>  #include <asm/ptrace.h>
> >>  #include <asm/sections.h>
> >>  #include <asm/sysreg.h>
> >> +#include <asm/cpufeature.h>
> >>  
> >>  /*
> >>   * __boot_cpu_mode records what mode CPUs were booted in.
> >> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
> >>  }
> >>  
> >> +static inline bool has_vhe(void)
> >> +{
> >> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >> +		return true;
> >> +
> >> +	return false;
> >> +}
> >> +
> > 
> > I was experimenting with using has_vhe for some of the optimization code
> > I was writing, and I saw a hyp crash as a result.  That made me wonder
> > if this is really safe in Hyp mode?
> > 
> > Specifically, there is no guarantee that this will actually be inlined
> > in the caller, right?  At least that's what I can gather from trying to
> > understand the semantics of the inline keyword in the GCC manual.
> 
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.
> 
> > Further, are we guaranteed that the static branch gets compiled into
> > something that doesn't actually look at cpu_hwcap_keys, which is not
> > mapped in hyp mode?
> 
> Here's the disassembly:
> 
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> ffff000008ad01d4:       9240bc21        and     x1, x1, #0xffffffffffff
> ffff000008ad01d8:       d503201f        nop
> ffff000008ad01dc:       d503201f        nop
> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> ffff000008ad01e4:       927ef842        and     x2, x2, #0xfffffffffffffffd
> ffff000008ad01e8:       b2400042        orr     x2, x2, #0x1
> ffff000008ad01ec:       d51ce102        msr     cnthctl_el2, x2
> ffff000008ad01f0:       d2834002        mov     x2, #0x1a00                     // #6656
> ffff000008ad01f4:       8b020000        add     x0, x0, x2
> ffff000008ad01f8:       91038002        add     x2, x0, #0xe0
> ffff000008ad01fc:       39425443        ldrb    w3, [x2,#149]
> ffff000008ad0200:       34000103        cbz     w3, ffff000008ad0220 <__timer_restore_state+0x50>
> ffff000008ad0204:       f945a821        ldr     x1, [x1,#2896]
> ffff000008ad0208:       d51ce061        msr     cntvoff_el2, x1
> ffff000008ad020c:       f9400441        ldr     x1, [x2,#8]
> ffff000008ad0210:       d51be341        msr     cntv_cval_el0, x1
> ffff000008ad0214:       d5033fdf        isb
> ffff000008ad0218:       b940e000        ldr     w0, [x0,#224]
> ffff000008ad021c:       d51be320        msr     cntv_ctl_el0, x0
> ffff000008ad0220:       d65f03c0        ret
> 
> The static branch resolves as such when VHE is enabled (taken from
> a running model):
> 
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> ffff000008ad01d4:       9240bc21        nop
> ffff000008ad01d8:       d503201f        nop
> ffff000008ad01dc:       d503201f        b	ffff000008ad01f0
> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> [...]
> 
> That's using a toolchain that supports the "asm goto" feature that is used
> to implement static branches (and that's guaranteed not to generate any
> memory access other than the code patching itself).
> 
> Now, with a toolchain that doesn't support this, such as gcc 4.8:

Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
because I cannot seem to reproduce this at the moment.

> 
> ffff000008aa5168 <__timer_restore_state>:
> ffff000008aa5168:       f9400001        ldr     x1, [x0]
> ffff000008aa516c:       9240bc21        and     x1, x1, #0xffffffffffff
> ffff000008aa5170:       d503201f        nop
> ffff000008aa5174:       f00038a2        adrp    x2, ffff0000091bc000 <reset_devices>
> ffff000008aa5178:       9113e042        add     x2, x2, #0x4f8
> ffff000008aa517c:       b9402c42        ldr     w2, [x2,#44]
> ffff000008aa5180:       6b1f005f        cmp     w2, wzr
> ffff000008aa5184:       540000ac        b.gt    ffff000008aa5198 <__timer_restore_state+0x30>
> ffff000008aa5188:       d53ce102        mrs     x2, cnthctl_el2
> ffff000008aa518c:       927ef842        and     x2, x2, #0xfffffffffffffffd
> ffff000008aa5190:       b2400042        orr     x2, x2, #0x1
> ffff000008aa5194:       d51ce102        msr     cnthctl_el2, x2
> ffff000008aa5198:       91400402        add     x2, x0, #0x1, lsl #12
> ffff000008aa519c:       396dd443        ldrb    w3, [x2,#2933]
> ffff000008aa51a0:       34000103        cbz     w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> ffff000008aa51a4:       f945a821        ldr     x1, [x1,#2896]
> ffff000008aa51a8:       d51ce061        msr     cntvoff_el2, x1
> ffff000008aa51ac:       f9457441        ldr     x1, [x2,#2792]
> ffff000008aa51b0:       d51be341        msr     cntv_cval_el0, x1
> ffff000008aa51b4:       d5033fdf        isb
> ffff000008aa51b8:       b95ae000        ldr     w0, [x0,#6880]
> ffff000008aa51bc:       d51be320        msr     cntv_ctl_el0, x0
> ffff000008aa51c0:       d65f03c0        ret
> 
> This is now controlled by some data located at FFFF0000091BC524:
> 
> maz@approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
> 
> vmlinux:     file format elf64-littleaarch64
> 
> Sections:
> Idx Name          Size      VMA               LMA               File off  Algn
> [...]
>  23 .bss          000da348  ffff0000091b8000  ffff0000091b8000  01147a00  2**12
>                   ALLOC
> 
> That's the BSS, which we do map in HYP (fairly recent).

But we cannot map the BSS at the same address though, right?  So
wouldn't this actually fail?

> 
> But maybe we should have some stronger guarantees that we'll
> always get things inlined, and that the "const" side is enforced:
> 
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>  }
>  
>  /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)
>  {
> -	if (num >= ARM64_NCAPS)
> -		return false;
> +	BUILD_BUG_ON(!__builtin_constant_p(num));
> +	BUILD_BUG_ON(num >= ARM64_NCAPS);
> +
>  	return static_branch_unlikely(&cpu_hwcap_keys[num]);
>  }
>  
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>  }
>  
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
>  {
>  	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>  		return true;
> 
> 
> But that's probably another patch or two. Thoughts?
> 
Yes, if something needs fixing there, it should be a separate patch.

Thanks,
-Christoffer
Marc Zyngier Jan. 13, 2017, 1:57 p.m. UTC | #3
On 13/01/17 13:46, Christoffer Dall wrote:
> On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>
>> On 13/01/17 12:36, Christoffer Dall wrote:
>>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>>
>>>> Current KVM world switch code is unintentionally setting wrong bits to
>>>> CNTHCTL_EL2 when E2H == 1, which may allow guest OS to access physical
>>>> timer.  Bit positions of CNTHCTL_EL2 are changing depending on
>>>> HCR_EL2.E2H bit.  EL1PCEN and EL1PCTEN are 1st and 0th bits when E2H is
>>>> not set, but they are 11th and 10th bits respectively when E2H is set.
>>>>
>>>> In fact, on VHE we only need to set those bits once, not for every world
>>>> switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE ==
>>>> 1, which makes those bits have no effect for the host kernel execution.
>>>> So we just set those bits once for guests, and that's it.
>>>>
>>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
>>>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
>>>> ---
>>>>  arch/arm/include/asm/virt.h   |  5 +++++
>>>>  arch/arm/kvm/arm.c            |  3 +++
>>>>  arch/arm64/include/asm/virt.h |  9 +++++++++
>>>>  include/kvm/arm_arch_timer.h  |  1 +
>>>>  virt/kvm/arm/arch_timer.c     | 23 +++++++++++++++++++++++
>>>>  virt/kvm/arm/hyp/timer-sr.c   | 33 +++++++++++++++++++++------------
>>>>  6 files changed, 62 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
>>>> index a2e75b8..6dae195 100644
>>>> --- a/arch/arm/include/asm/virt.h
>>>> +++ b/arch/arm/include/asm/virt.h
>>>> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>>  	return false;
>>>>  }
>>>>  
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> +	return false;
>>>> +}
>>>> +
>>>>  /* The section containing the hypervisor idmap text */
>>>>  extern char __hyp_idmap_text_start[];
>>>>  extern char __hyp_idmap_text_end[];
>>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
>>>> index 1167678..9d74464 100644
>>>> --- a/arch/arm/kvm/arm.c
>>>> +++ b/arch/arm/kvm/arm.c
>>>> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
>>>>  	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
>>>>  	__cpu_init_stage2();
>>>>  
>>>> +	if (is_kernel_in_hyp_mode())
>>>> +		kvm_timer_init_vhe();
>>>> +
>>>>  	kvm_arm_init_debug();
>>>>  }
>>>>  
>>>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
>>>> index fea1073..439f6b5 100644
>>>> --- a/arch/arm64/include/asm/virt.h
>>>> +++ b/arch/arm64/include/asm/virt.h
>>>> @@ -47,6 +47,7 @@
>>>>  #include <asm/ptrace.h>
>>>>  #include <asm/sections.h>
>>>>  #include <asm/sysreg.h>
>>>> +#include <asm/cpufeature.h>
>>>>  
>>>>  /*
>>>>   * __boot_cpu_mode records what mode CPUs were booted in.
>>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>>>  }
>>>>  
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>>> +		return true;
>>>> +
>>>> +	return false;
>>>> +}
>>>> +
>>>
>>> I was experimenting with using has_vhe for some of the optimization code
>>> I was writing, and I saw a hyp crash as a result.  That made me wonder
>>> if this is really safe in Hyp mode?
>>>
>>> Specifically, there is no guarantee that this will actually be inlined
>>> in the caller, right?  At least that's what I can gather from trying to
>>> understand the semantics of the inline keyword in the GCC manual.
>>
>> Indeed, there is no strict guarantee that this is enforced. We should
>> probably have __always_inline instead. But having checked the generated
>> code for __timer_restore_state, the function is definitely inlined
>> (gcc 6.2). Happy to queue an extra patch changing that.
>>
>>> Further, are we guaranteed that the static branch gets compiled into
>>> something that doesn't actually look at cpu_hwcap_keys, which is not
>>> mapped in hyp mode?
>>
>> Here's the disassembly:
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
>> ffff000008ad01d4:       9240bc21        and     x1, x1, #0xffffffffffff
>> ffff000008ad01d8:       d503201f        nop
>> ffff000008ad01dc:       d503201f        nop
>> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
>> ffff000008ad01e4:       927ef842        and     x2, x2, #0xfffffffffffffffd
>> ffff000008ad01e8:       b2400042        orr     x2, x2, #0x1
>> ffff000008ad01ec:       d51ce102        msr     cnthctl_el2, x2
>> ffff000008ad01f0:       d2834002        mov     x2, #0x1a00                     // #6656
>> ffff000008ad01f4:       8b020000        add     x0, x0, x2
>> ffff000008ad01f8:       91038002        add     x2, x0, #0xe0
>> ffff000008ad01fc:       39425443        ldrb    w3, [x2,#149]
>> ffff000008ad0200:       34000103        cbz     w3, ffff000008ad0220 <__timer_restore_state+0x50>
>> ffff000008ad0204:       f945a821        ldr     x1, [x1,#2896]
>> ffff000008ad0208:       d51ce061        msr     cntvoff_el2, x1
>> ffff000008ad020c:       f9400441        ldr     x1, [x2,#8]
>> ffff000008ad0210:       d51be341        msr     cntv_cval_el0, x1
>> ffff000008ad0214:       d5033fdf        isb
>> ffff000008ad0218:       b940e000        ldr     w0, [x0,#224]
>> ffff000008ad021c:       d51be320        msr     cntv_ctl_el0, x0
>> ffff000008ad0220:       d65f03c0        ret
>>
>> The static branch resolves as such when VHE is enabled (taken from
>> a running model):
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
>> ffff000008ad01d4:       9240bc21        nop
>> ffff000008ad01d8:       d503201f        nop
>> ffff000008ad01dc:       d503201f        b	ffff000008ad01f0
>> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
>> [...]
>>
>> That's using a toolchain that supports the "asm goto" feature that is used
>> to implement static branches (and that's guaranteed not to generate any
>> memory access other than the code patching itself).
>>
>> Now, with a toolchain that doesn't support this, such as gcc 4.8:
> 
> Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
> because I cannot seem to reproduce this at the moment.
> 
>>
>> ffff000008aa5168 <__timer_restore_state>:
>> ffff000008aa5168:       f9400001        ldr     x1, [x0]
>> ffff000008aa516c:       9240bc21        and     x1, x1, #0xffffffffffff
>> ffff000008aa5170:       d503201f        nop
>> ffff000008aa5174:       f00038a2        adrp    x2, ffff0000091bc000 <reset_devices>
>> ffff000008aa5178:       9113e042        add     x2, x2, #0x4f8
>> ffff000008aa517c:       b9402c42        ldr     w2, [x2,#44]
>> ffff000008aa5180:       6b1f005f        cmp     w2, wzr
>> ffff000008aa5184:       540000ac        b.gt    ffff000008aa5198 <__timer_restore_state+0x30>
>> ffff000008aa5188:       d53ce102        mrs     x2, cnthctl_el2
>> ffff000008aa518c:       927ef842        and     x2, x2, #0xfffffffffffffffd
>> ffff000008aa5190:       b2400042        orr     x2, x2, #0x1
>> ffff000008aa5194:       d51ce102        msr     cnthctl_el2, x2
>> ffff000008aa5198:       91400402        add     x2, x0, #0x1, lsl #12
>> ffff000008aa519c:       396dd443        ldrb    w3, [x2,#2933]
>> ffff000008aa51a0:       34000103        cbz     w3, ffff000008aa51c0 <__timer_restore_state+0x58>
>> ffff000008aa51a4:       f945a821        ldr     x1, [x1,#2896]
>> ffff000008aa51a8:       d51ce061        msr     cntvoff_el2, x1
>> ffff000008aa51ac:       f9457441        ldr     x1, [x2,#2792]
>> ffff000008aa51b0:       d51be341        msr     cntv_cval_el0, x1
>> ffff000008aa51b4:       d5033fdf        isb
>> ffff000008aa51b8:       b95ae000        ldr     w0, [x0,#6880]
>> ffff000008aa51bc:       d51be320        msr     cntv_ctl_el0, x0
>> ffff000008aa51c0:       d65f03c0        ret
>>
>> This is now controlled by some data located at FFFF0000091BC524:
>>
>> maz@approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>>
>> vmlinux:     file format elf64-littleaarch64
>>
>> Sections:
>> Idx Name          Size      VMA               LMA               File off  Algn
>> [...]
>>  23 .bss          000da348  ffff0000091b8000  ffff0000091b8000  01147a00  2**12
>>                   ALLOC
>>
>> That's the BSS, which we do map in HYP (fairly recent).
> 
> But we cannot map the BSS at the same address though, right?  So
> wouldn't this actually fail?

We map it at the same relative offset, and use adrp to get the base
address (PC relative). So whatever context we're in, we should be OK.
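
As a rough illustration (simplified and hypothetical, not lifted from the
kernel sources), the kind of access we're discussing looks like this:

	static int example_flag;	/* some object in the kernel image */

	static bool __hyp_text example_check(void)
	{
		/*
		 * With the default small code model this compiles to roughly:
		 *	adrp	x0, example_flag	// PC-relative page address
		 *	add	x0, x0, :lo12:example_flag
		 *	ldr	w0, [x0]
		 * No absolute address is involved, so the load hits the HYP
		 * alias of the data as long as code and data keep the same
		 * offset from each other as in the kernel mapping.
		 */
		return READ_ONCE(example_flag) != 0;
	}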

Thanks,

	M.
Christoffer Dall Jan. 13, 2017, 2:14 p.m. UTC | #4
On Fri, Jan 13, 2017 at 01:57:23PM +0000, Marc Zyngier wrote:
> On 13/01/17 13:46, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> >> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
> >>
> >> On 13/01/17 12:36, Christoffer Dall wrote:
> >>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> >>>> From: Jintack Lim <jintack@cs.columbia.edu>
> >>>>
> >>>> Current KVM world switch code is unintentionally setting wrong bits to
> >>>> CNTHCTL_EL2 when E2H == 1, which may allow guest OS to access physical
> >>>> timer.  Bit positions of CNTHCTL_EL2 are changing depending on
> >>>> HCR_EL2.E2H bit.  EL1PCEN and EL1PCTEN are 1st and 0th bits when E2H is
> >>>> not set, but they are 11th and 10th bits respectively when E2H is set.
> >>>>
> >>>> In fact, on VHE we only need to set those bits once, not for every world
> >>>> switch. This is because the host kernel runs in EL2 with HCR_EL2.TGE ==
> >>>> 1, which makes those bits have no effect for the host kernel execution.
> >>>> So we just set those bits once for guests, and that's it.
> >>>>
> >>>> Signed-off-by: Jintack Lim <jintack@cs.columbia.edu>
> >>>> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
> >>>> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> >>>> ---
> >>>>  arch/arm/include/asm/virt.h   |  5 +++++
> >>>>  arch/arm/kvm/arm.c            |  3 +++
> >>>>  arch/arm64/include/asm/virt.h |  9 +++++++++
> >>>>  include/kvm/arm_arch_timer.h  |  1 +
> >>>>  virt/kvm/arm/arch_timer.c     | 23 +++++++++++++++++++++++
> >>>>  virt/kvm/arm/hyp/timer-sr.c   | 33 +++++++++++++++++++++------------
> >>>>  6 files changed, 62 insertions(+), 12 deletions(-)
> >>>>
> >>>> diff --git a/arch/arm/include/asm/virt.h b/arch/arm/include/asm/virt.h
> >>>> index a2e75b8..6dae195 100644
> >>>> --- a/arch/arm/include/asm/virt.h
> >>>> +++ b/arch/arm/include/asm/virt.h
> >>>> @@ -80,6 +80,11 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>>>  	return false;
> >>>>  }
> >>>>  
> >>>> +static inline bool has_vhe(void)
> >>>> +{
> >>>> +	return false;
> >>>> +}
> >>>> +
> >>>>  /* The section containing the hypervisor idmap text */
> >>>>  extern char __hyp_idmap_text_start[];
> >>>>  extern char __hyp_idmap_text_end[];
> >>>> diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
> >>>> index 1167678..9d74464 100644
> >>>> --- a/arch/arm/kvm/arm.c
> >>>> +++ b/arch/arm/kvm/arm.c
> >>>> @@ -1099,6 +1099,9 @@ static void cpu_init_hyp_mode(void *dummy)
> >>>>  	__cpu_init_hyp_mode(pgd_ptr, hyp_stack_ptr, vector_ptr);
> >>>>  	__cpu_init_stage2();
> >>>>  
> >>>> +	if (is_kernel_in_hyp_mode())
> >>>> +		kvm_timer_init_vhe();
> >>>> +
> >>>>  	kvm_arm_init_debug();
> >>>>  }
> >>>>  
> >>>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> >>>> index fea1073..439f6b5 100644
> >>>> --- a/arch/arm64/include/asm/virt.h
> >>>> +++ b/arch/arm64/include/asm/virt.h
> >>>> @@ -47,6 +47,7 @@
> >>>>  #include <asm/ptrace.h>
> >>>>  #include <asm/sections.h>
> >>>>  #include <asm/sysreg.h>
> >>>> +#include <asm/cpufeature.h>
> >>>>  
> >>>>  /*
> >>>>   * __boot_cpu_mode records what mode CPUs were booted in.
> >>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
> >>>>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
> >>>>  }
> >>>>  
> >>>> +static inline bool has_vhe(void)
> >>>> +{
> >>>> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >>>> +		return true;
> >>>> +
> >>>> +	return false;
> >>>> +}
> >>>> +
> >>>
> >>> I was experimenting with using has_vhe for some of the optimization code
> >>> I was writing, and I saw a hyp crash as a result.  That made me wonder
> >>> if this is really safe in Hyp mode?
> >>>
> >>> Specifically, there is no guarantee that this will actually be inlined
> >>> in the caller, right?  At least that's what I can gather from trying to
> >>> understand the semantics of the inline keyword in the GCC manual.
> >>
> >> Indeed, there is no strict guarantee that this is enforced. We should
> >> probably have __always_inline instead. But having checked the generated
> >> code for __timer_restore_state, the function is definitely inlined
> >> (gcc 6.2). Happy to queue an extra patch changing that.
> >>
> >>> Further, are we guaranteed that the static branch gets compiled into
> >>> something that doesn't actually look at cpu_hwcap_keys, which is not
> >>> mapped in hyp mode?
> >>
> >> Here's the disassembly:
> >>
> >> ffff000008ad01d0 <__timer_restore_state>:
> >> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> >> ffff000008ad01d4:       9240bc21        and     x1, x1, #0xffffffffffff
> >> ffff000008ad01d8:       d503201f        nop
> >> ffff000008ad01dc:       d503201f        nop
> >> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> >> ffff000008ad01e4:       927ef842        and     x2, x2, #0xfffffffffffffffd
> >> ffff000008ad01e8:       b2400042        orr     x2, x2, #0x1
> >> ffff000008ad01ec:       d51ce102        msr     cnthctl_el2, x2
> >> ffff000008ad01f0:       d2834002        mov     x2, #0x1a00                     // #6656
> >> ffff000008ad01f4:       8b020000        add     x0, x0, x2
> >> ffff000008ad01f8:       91038002        add     x2, x0, #0xe0
> >> ffff000008ad01fc:       39425443        ldrb    w3, [x2,#149]
> >> ffff000008ad0200:       34000103        cbz     w3, ffff000008ad0220 <__timer_restore_state+0x50>
> >> ffff000008ad0204:       f945a821        ldr     x1, [x1,#2896]
> >> ffff000008ad0208:       d51ce061        msr     cntvoff_el2, x1
> >> ffff000008ad020c:       f9400441        ldr     x1, [x2,#8]
> >> ffff000008ad0210:       d51be341        msr     cntv_cval_el0, x1
> >> ffff000008ad0214:       d5033fdf        isb
> >> ffff000008ad0218:       b940e000        ldr     w0, [x0,#224]
> >> ffff000008ad021c:       d51be320        msr     cntv_ctl_el0, x0
> >> ffff000008ad0220:       d65f03c0        ret
> >>
> >> The static branch resolves as such when VHE is enabled (taken from
> >> a running model):
> >>
> >> ffff000008ad01d0 <__timer_restore_state>:
> >> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> >> ffff000008ad01d4:       9240bc21        nop
> >> ffff000008ad01d8:       d503201f        nop
> >> ffff000008ad01dc:       d503201f        b	ffff000008ad01f0
> >> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> >> [...]
> >>
> >> That's using a toolchain that supports the "asm goto" feature that is used
> >> to implement static branches (and that's guaranteed not to generate any
> >> memory access other than the code patching itself).
> >>
> >> Now, with a toolchain that doesn't support this, such as gcc 4.8:
> > 
> > Hmm, I saw the error with 5.4.1, but perhaps I messed something else up,
> > because I cannot seem to reproduce this at the moment.
> > 
> >>
> >> ffff000008aa5168 <__timer_restore_state>:
> >> ffff000008aa5168:       f9400001        ldr     x1, [x0]
> >> ffff000008aa516c:       9240bc21        and     x1, x1, #0xffffffffffff
> >> ffff000008aa5170:       d503201f        nop
> >> ffff000008aa5174:       f00038a2        adrp    x2, ffff0000091bc000 <reset_devices>
> >> ffff000008aa5178:       9113e042        add     x2, x2, #0x4f8
> >> ffff000008aa517c:       b9402c42        ldr     w2, [x2,#44]
> >> ffff000008aa5180:       6b1f005f        cmp     w2, wzr
> >> ffff000008aa5184:       540000ac        b.gt    ffff000008aa5198 <__timer_restore_state+0x30>
> >> ffff000008aa5188:       d53ce102        mrs     x2, cnthctl_el2
> >> ffff000008aa518c:       927ef842        and     x2, x2, #0xfffffffffffffffd
> >> ffff000008aa5190:       b2400042        orr     x2, x2, #0x1
> >> ffff000008aa5194:       d51ce102        msr     cnthctl_el2, x2
> >> ffff000008aa5198:       91400402        add     x2, x0, #0x1, lsl #12
> >> ffff000008aa519c:       396dd443        ldrb    w3, [x2,#2933]
> >> ffff000008aa51a0:       34000103        cbz     w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> >> ffff000008aa51a4:       f945a821        ldr     x1, [x1,#2896]
> >> ffff000008aa51a8:       d51ce061        msr     cntvoff_el2, x1
> >> ffff000008aa51ac:       f9457441        ldr     x1, [x2,#2792]
> >> ffff000008aa51b0:       d51be341        msr     cntv_cval_el0, x1
> >> ffff000008aa51b4:       d5033fdf        isb
> >> ffff000008aa51b8:       b95ae000        ldr     w0, [x0,#6880]
> >> ffff000008aa51bc:       d51be320        msr     cntv_ctl_el0, x0
> >> ffff000008aa51c0:       d65f03c0        ret
> >>
> >> This is now controlled by some data located at FFFF0000091BC524:
> >>
> >> maz@approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
> >>
> >> vmlinux:     file format elf64-littleaarch64
> >>
> >> Sections:
> >> Idx Name          Size      VMA               LMA               File off  Algn
> >> [...]
> >>  23 .bss          000da348  ffff0000091b8000  ffff0000091b8000  01147a00  2**12
> >>                   ALLOC
> >>
> >> That's the BSS, which we do map in HYP (fairly recent).
> > 
> > But we cannot map the BSS at the same address though, right?  So
> > wouldn't this actually fail?
> 
> We map it at the same relative offset, and use adrp to get the base
> address (PC relative). So whatever context we're in, we should be OK.
> 
Ah, right, I'll be shutting up now then.

(Will make a note to go back and carefully examine exactly why this
failed for me.)


Thanks,
-Christoffer
Mark Rutland Jan. 13, 2017, 2:42 p.m. UTC | #5
Hi,

On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
> 
> On 13/01/17 12:36, Christoffer Dall wrote:
> > On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:

> >> +static inline bool has_vhe(void)
> >> +{
> >> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
> >> +		return true;
> >> +
> >> +	return false;
> >> +}
> >> +
> > 
> > I was experimenting with using has_vhe for some of the optimization code
> > I was writing, and I saw a hyp crash as a result.  That made me wonder
> > if this is really safe in Hyp mode?
> > 
> > Specifically, there is no guarantee that this will actually be inlined
> > in the caller, right?  At least that's what I can gather from trying to
> > understand the semantics of the inline keyword in the GCC manual.
> 
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.

> > Further, are we guaranteed that the static branch gets compiled into
> > something that doesn't actually look at cpu_hwcap_keys, which is not
> > mapped in hyp mode?

If I disable CONFIG_JUMP_LABEL (which lives under "General setup", with
teh title "Optimize very unlikely/likely branches"), I see adrp; add;
ldr sequences accessing cpu_hwcap_keys when using cpus_have_const_cap()
in hyp code, even with the patch below.

Do we have the whole kernel image mapped around hyp, so that this would
work by relative offset? Do we have a guarantee that adrp+add is used?

Thanks,
Mark.

> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>  }
>  
>  /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)
>  {
> -	if (num >= ARM64_NCAPS)
> -		return false;
> +	BUILD_BUG_ON(!__builtin_constant_p(num));
> +	BUILD_BUG_ON(num >= ARM64_NCAPS);
> +
>  	return static_branch_unlikely(&cpu_hwcap_keys[num]);
>  }
>  
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>  }
>  
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
>  {
>  	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>  		return true;
> 
> 
> But that's probably another patch or two. Thoughts?
> 
> 	M.
> -- 
> Jazz is not dead. It just smells funny...
> _______________________________________________
> kvmarm mailing list
> kvmarm@lists.cs.columbia.edu
> https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
Mark Rutland Jan. 13, 2017, 2:46 p.m. UTC | #6
On Fri, Jan 13, 2017 at 01:36:12PM +0100, Christoffer Dall wrote:
> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:

> Further, are we guaranteed that the static branch gets compiled into
> something that doesn't actually look at cpu_hwcap_keys, which is not
> mapped in hyp mode?

The fact that this might happen silently seems to be a larger problem.

Can we do something like the EFI stub, and ensure that (unintentional)
references to symbols outside of the hyp-stub will fail to link? That's
ensured by some symbol mangling in drivers/firmware/efi/libstub/Makefile.

I think this may have come up before; I can't recall if there was some
reason that was problematic.

Thanks,
Mark.
Mark Rutland Jan. 13, 2017, 2:55 p.m. UTC | #7
On Fri, Jan 13, 2017 at 02:42:04PM +0000, Mark Rutland wrote:
> On Fri, Jan 13, 2017 at 01:30:29PM +0000, Marc Zyngier wrote:
> > On 13/01/17 12:36, Christoffer Dall wrote:
> > > Further, are we guaranteed that the static branch gets compiled into
> > > something that doesn't actually look at cpu_hwcap_keys, which is not
> > > mapped in hyp mode?
> 
> If I disable CONFIG_JUMP_LABEL (which lives under "General setup", with
> the title "Optimize very unlikely/likely branches"), I see adrp; add;
> ldr sequences accessing cpu_hwcap_keys when using cpus_have_const_cap()
> in hyp code, even with the patch below.

Looking again, that's the same sequence Marc mentioned, as it falls in
the BSS. I just happened to be looking at the unlinked .o file rather
than the vmlinux.

Sorry for the noise.

Thanks,
Mark.
Suzuki K Poulose Jan. 13, 2017, 2:56 p.m. UTC | #8
On 13/01/17 13:30, Marc Zyngier wrote:
> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>
> On 13/01/17 12:36, Christoffer Dall wrote:
>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>
...

>>>  /*
>>>   * __boot_cpu_mode records what mode CPUs were booted in.
>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>>  }
>>>
>>> +static inline bool has_vhe(void)
>>> +{
>>> +	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>> +		return true;
>>> +
>>> +	return false;
>>> +}
>>> +
>>
>> I was experimenting with using has_vhe for some of the optimization code
>> I was writing, and I saw a hyp crash as a result.  That made me wonder
>> if this is really safe in Hyp mode?
>>
>> Specifically, there is no guarantee that this will actually be inlined
>> in the caller, right?  At least that's what I can gather from trying to
>> understand the semantics of the inline keyword in the GCC manual.
>
> Indeed, there is no strict guarantee that this is enforced. We should
> probably have __always_inline instead. But having checked the generated
> code for __timer_restore_state, the function is definitely inlined
> (gcc 6.2). Happy to queue an extra patch changing that.
>
>> Further, are we guaranteed that the static branch gets compiled into
>> something that doesn't actually look at cpu_hwcap_keys, which is not
>> mapped in hyp mode?
>
> Here's the disassembly:
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> ffff000008ad01d4:       9240bc21        and     x1, x1, #0xffffffffffff
> ffff000008ad01d8:       d503201f        nop
> ffff000008ad01dc:       d503201f        nop
> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> ffff000008ad01e4:       927ef842        and     x2, x2, #0xfffffffffffffffd
> ffff000008ad01e8:       b2400042        orr     x2, x2, #0x1
> ffff000008ad01ec:       d51ce102        msr     cnthctl_el2, x2
> ffff000008ad01f0:       d2834002        mov     x2, #0x1a00                     // #6656
> ffff000008ad01f4:       8b020000        add     x0, x0, x2
> ffff000008ad01f8:       91038002        add     x2, x0, #0xe0
> ffff000008ad01fc:       39425443        ldrb    w3, [x2,#149]
> ffff000008ad0200:       34000103        cbz     w3, ffff000008ad0220 <__timer_restore_state+0x50>
> ffff000008ad0204:       f945a821        ldr     x1, [x1,#2896]
> ffff000008ad0208:       d51ce061        msr     cntvoff_el2, x1
> ffff000008ad020c:       f9400441        ldr     x1, [x2,#8]
> ffff000008ad0210:       d51be341        msr     cntv_cval_el0, x1
> ffff000008ad0214:       d5033fdf        isb
> ffff000008ad0218:       b940e000        ldr     w0, [x0,#224]
> ffff000008ad021c:       d51be320        msr     cntv_ctl_el0, x0
> ffff000008ad0220:       d65f03c0        ret
>
> The static branch resolves as such when VHE is enabled (taken from
> a running model):
>
> ffff000008ad01d0 <__timer_restore_state>:
> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
> ffff000008ad01d4:       9240bc21        nop
> ffff000008ad01d8:       d503201f        nop
> ffff000008ad01dc:       d503201f        b	ffff000008ad01f0
> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
> [...]
>
> That's using a toolchain that supports the "asm goto" feature that is used
> to implement static branches (and that's guaranteed not to generate any
> memory access other than the code patching itself).
>
> Now, with a toolchain that doesn't support this, such as gcc 4.8:
>
> ffff000008aa5168 <__timer_restore_state>:
> ffff000008aa5168:       f9400001        ldr     x1, [x0]
> ffff000008aa516c:       9240bc21        and     x1, x1, #0xffffffffffff
> ffff000008aa5170:       d503201f        nop
> ffff000008aa5174:       f00038a2        adrp    x2, ffff0000091bc000 <reset_devices>
> ffff000008aa5178:       9113e042        add     x2, x2, #0x4f8
> ffff000008aa517c:       b9402c42        ldr     w2, [x2,#44]
> ffff000008aa5180:       6b1f005f        cmp     w2, wzr
> ffff000008aa5184:       540000ac        b.gt    ffff000008aa5198 <__timer_restore_state+0x30>
> ffff000008aa5188:       d53ce102        mrs     x2, cnthctl_el2
> ffff000008aa518c:       927ef842        and     x2, x2, #0xfffffffffffffffd
> ffff000008aa5190:       b2400042        orr     x2, x2, #0x1
> ffff000008aa5194:       d51ce102        msr     cnthctl_el2, x2
> ffff000008aa5198:       91400402        add     x2, x0, #0x1, lsl #12
> ffff000008aa519c:       396dd443        ldrb    w3, [x2,#2933]
> ffff000008aa51a0:       34000103        cbz     w3, ffff000008aa51c0 <__timer_restore_state+0x58>
> ffff000008aa51a4:       f945a821        ldr     x1, [x1,#2896]
> ffff000008aa51a8:       d51ce061        msr     cntvoff_el2, x1
> ffff000008aa51ac:       f9457441        ldr     x1, [x2,#2792]
> ffff000008aa51b0:       d51be341        msr     cntv_cval_el0, x1
> ffff000008aa51b4:       d5033fdf        isb
> ffff000008aa51b8:       b95ae000        ldr     w0, [x0,#6880]
> ffff000008aa51bc:       d51be320        msr     cntv_ctl_el0, x0
> ffff000008aa51c0:       d65f03c0        ret
>
> This is now controlled by some data located at FFFF0000091BC524:
>
> maz@approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>
> vmlinux:     file format elf64-littleaarch64
>
> Sections:
> Idx Name          Size      VMA               LMA               File off  Algn
> [...]
>  23 .bss          000da348  ffff0000091b8000  ffff0000091b8000  01147a00  2**12
>                   ALLOC
>
> That's the BSS, which we do map in HYP (fairly recent).
>
> But maybe we should have some stronger guarantees that we'll
> always get things inlined, and that the "const" side is enforced:

Agreed.

>
> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
> index b4989df..4710469 100644
> --- a/arch/arm64/include/asm/cpufeature.h
> +++ b/arch/arm64/include/asm/cpufeature.h
> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>  }
>
>  /* System capability check for constant caps */
> -static inline bool cpus_have_const_cap(int num)
> +static __always_inline bool cpus_have_const_cap(int num)

I think we should have the above change and make it inline always.

>  {
> -	if (num >= ARM64_NCAPS)
> -		return false;
> +	BUILD_BUG_ON(!__builtin_constant_p(num));

This is not needed, as the compilation would fail if num is not a constant with
static key code.

> +	BUILD_BUG_ON(num >= ARM64_NCAPS);
> +

Also, I think it would be good to return false for caps > the ARM64_NCAPS, in sync
with the non-const version.
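
Putting those two comments together, the end result would look roughly like
this (sketch only, not a tested patch):

	/* System capability check for constant caps */
	static __always_inline bool cpus_have_const_cap(int num)
	{
		/* mirror the non-const version for out-of-range caps */
		if (num >= ARM64_NCAPS)
			return false;

		return static_branch_unlikely(&cpu_hwcap_keys[num]);
	}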


>  	return static_branch_unlikely(&cpu_hwcap_keys[num]);
>  }
>
> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
> index 439f6b5..1257701 100644
> --- a/arch/arm64/include/asm/virt.h
> +++ b/arch/arm64/include/asm/virt.h
> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
>  	return read_sysreg(CurrentEL) == CurrentEL_EL2;
>  }
>
> -static inline bool has_vhe(void)
> +static __always_inline bool has_vhe(void)
>  {
>  	if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>  		return true;
>
>
> But that's probably another patch or two. Thoughts?

With the above changes, please feel free to add :

Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
Marc Zyngier Jan. 13, 2017, 2:57 p.m. UTC | #9
On 13/01/17 14:46, Mark Rutland wrote:
> On Fri, Jan 13, 2017 at 01:36:12PM +0100, Christoffer Dall wrote:
>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
> 
>> Further, are we guaranteed that the static branch gets compiled into
>> something that doesn't actually look at cpu_hwcap_keys, which is not
>> mapped in hyp mode?
> 
> The fact that this might happen silently seems to be a larger problem.
> 
> Can we do something like the EFI stub, and ensure that (unintentional)
> references to symbols outside of the hyp-stub will fail to link? That's
> ensured by some symbol mangling in drivers/firmware/efi/libstub/Makefile.
> 
> I think this may have come up before; I can't recall if there was some
> reason that was problematic.

From what I remember, this was a gigantic mess... We could revisit it
though (after all, it's been a whole year since we did turn the whole
thing upside down -- time for a rewrite!).

	M.
Jintack Lim Jan. 13, 2017, 7:04 p.m. UTC | #10
On Fri, Jan 13, 2017 at 9:56 AM, Suzuki K Poulose
<Suzuki.Poulose@arm.com> wrote:
> On 13/01/17 13:30, Marc Zyngier wrote:
>>
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>
>> On 13/01/17 12:36, Christoffer Dall wrote:
>>>
>>> On Fri, Jan 13, 2017 at 11:31:32AM +0000, Marc Zyngier wrote:
>>>>
>>>> From: Jintack Lim <jintack@cs.columbia.edu>
>>>>
> ...
>
>
>>>>  /*
>>>>   * __boot_cpu_mode records what mode CPUs were booted in.
>>>> @@ -80,6 +81,14 @@ static inline bool is_kernel_in_hyp_mode(void)
>>>>         return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>>>  }
>>>>
>>>> +static inline bool has_vhe(void)
>>>> +{
>>>> +       if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>>> +               return true;
>>>> +
>>>> +       return false;
>>>> +}
>>>> +
>>>
>>>
>>> I was experimenting with using has_vhe for some of the optimization code
>>> I was writing, and I saw a hyp crash as a result.  That made me wonder
>>> if this is really safe in Hyp mode?
>>>
>>> Specifically, there is no guarantee that this will actually be inlined
>>> in the caller, right?  At least that's what I can gather from trying to
>>> understand the semantics of the inline keyword in the GCC manual.
>>
>>
>> Indeed, there is no strict guarantee that this is enforced. We should
>> probably have __always_inline instead. But having checked the generated
>> code for __timer_restore_state, the function is definitely inlined
>> (gcc 6.2). Happy to queue an extra patch changing that.
>>
>>> Further, are we guaranteed that the static branch gets compiled into
>>> something that doesn't actually look at cpu_hwcap_keys, which is not
>>> mapped in hyp mode?
>>
>>
>> Here's the disassembly:
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
>> ffff000008ad01d4:       9240bc21        and     x1, x1, #0xffffffffffff
>> ffff000008ad01d8:       d503201f        nop
>> ffff000008ad01dc:       d503201f        nop
>> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
>> ffff000008ad01e4:       927ef842        and     x2, x2, #0xfffffffffffffffd
>> ffff000008ad01e8:       b2400042        orr     x2, x2, #0x1
>> ffff000008ad01ec:       d51ce102        msr     cnthctl_el2, x2
>> ffff000008ad01f0:       d2834002        mov     x2, #0x1a00                     // #6656
>> ffff000008ad01f4:       8b020000        add     x0, x0, x2
>> ffff000008ad01f8:       91038002        add     x2, x0, #0xe0
>> ffff000008ad01fc:       39425443        ldrb    w3, [x2,#149]
>> ffff000008ad0200:       34000103        cbz     w3, ffff000008ad0220 <__timer_restore_state+0x50>
>> ffff000008ad0204:       f945a821        ldr     x1, [x1,#2896]
>> ffff000008ad0208:       d51ce061        msr     cntvoff_el2, x1
>> ffff000008ad020c:       f9400441        ldr     x1, [x2,#8]
>> ffff000008ad0210:       d51be341        msr     cntv_cval_el0, x1
>> ffff000008ad0214:       d5033fdf        isb
>> ffff000008ad0218:       b940e000        ldr     w0, [x0,#224]
>> ffff000008ad021c:       d51be320        msr     cntv_ctl_el0, x0
>> ffff000008ad0220:       d65f03c0        ret
>>
>> The static branch resolves as such when VHE is enabled (taken from
>> a running model):
>>
>> ffff000008ad01d0 <__timer_restore_state>:
>> ffff000008ad01d0:       f9400001        ldr     x1, [x0]
>> ffff000008ad01d4:       9240bc21        nop
>> ffff000008ad01d8:       d503201f        nop
>> ffff000008ad01dc:       d503201f        b       ffff000008ad01f0
>> ffff000008ad01e0:       d53ce102        mrs     x2, cnthctl_el2
>> [...]
>>
>> That's using a toolchain that supports the "asm goto" feature that is used
>> to implement static branches (and that's guaranteed not to generate any
>> memory access other than the code patching itself).
>>
>> Now, with a toolchain that doesn't support this, such as gcc 4.8:
>>
>> ffff000008aa5168 <__timer_restore_state>:
>> ffff000008aa5168:       f9400001        ldr     x1, [x0]
>> ffff000008aa516c:       9240bc21        and     x1, x1, #0xffffffffffff
>> ffff000008aa5170:       d503201f        nop
>> ffff000008aa5174:       f00038a2        adrp    x2, ffff0000091bc000 <reset_devices>
>> ffff000008aa5178:       9113e042        add     x2, x2, #0x4f8
>> ffff000008aa517c:       b9402c42        ldr     w2, [x2,#44]
>> ffff000008aa5180:       6b1f005f        cmp     w2, wzr
>> ffff000008aa5184:       540000ac        b.gt    ffff000008aa5198 <__timer_restore_state+0x30>
>> ffff000008aa5188:       d53ce102        mrs     x2, cnthctl_el2
>> ffff000008aa518c:       927ef842        and     x2, x2, #0xfffffffffffffffd
>> ffff000008aa5190:       b2400042        orr     x2, x2, #0x1
>> ffff000008aa5194:       d51ce102        msr     cnthctl_el2, x2
>> ffff000008aa5198:       91400402        add     x2, x0, #0x1, lsl #12
>> ffff000008aa519c:       396dd443        ldrb    w3, [x2,#2933]
>> ffff000008aa51a0:       34000103        cbz     w3, ffff000008aa51c0 <__timer_restore_state+0x58>
>> ffff000008aa51a4:       f945a821        ldr     x1, [x1,#2896]
>> ffff000008aa51a8:       d51ce061        msr     cntvoff_el2, x1
>> ffff000008aa51ac:       f9457441        ldr     x1, [x2,#2792]
>> ffff000008aa51b0:       d51be341        msr     cntv_cval_el0, x1
>> ffff000008aa51b4:       d5033fdf        isb
>> ffff000008aa51b8:       b95ae000        ldr     w0, [x0,#6880]
>> ffff000008aa51bc:       d51be320        msr     cntv_ctl_el0, x0
>> ffff000008aa51c0:       d65f03c0        ret
>>
>> This is now controlled by some data located at FFFF0000091BC524:
>>
>> maz@approximate:~/Work/arm-platforms$ aarch64-linux-gnu-objdump -h vmlinux
>>
>> vmlinux:     file format elf64-littleaarch64
>>
>> Sections:
>> Idx Name          Size      VMA               LMA               File off  Algn
>> [...]
>>  23 .bss          000da348  ffff0000091b8000  ffff0000091b8000  01147a00  2**12
>>                   ALLOC
>>
>> That's the BSS, which we do map in HYP (fairly recent).
>>
>> But maybe we should have some stronger guarantees that we'll
>> always get things inlined, and that the "const" side is enforced:
>
>
> Agreed.
>
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index b4989df..4710469 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>  }
>>
>>  /* System capability check for constant caps */
>> -static inline bool cpus_have_const_cap(int num)
>> +static __always_inline bool cpus_have_const_cap(int num)
>
>
> I think we should have the above change and make it inline always.
>
>>  {
>> -       if (num >= ARM64_NCAPS)
>> -               return false;
>> +       BUILD_BUG_ON(!__builtin_constant_p(num));
>
>
> This is not needed, as the compilation would fail if num is not a constant
> with
> static key code.
>
>> +       BUILD_BUG_ON(num >= ARM64_NCAPS);
>> +
>
>
> Also, I think it would be good to return false for caps > the ARM64_NCAPS,
> in sync
> with the non-const version.
>
>
>>         return static_branch_unlikely(&cpu_hwcap_keys[num]);
>>  }
>>
>> diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
>> index 439f6b5..1257701 100644
>> --- a/arch/arm64/include/asm/virt.h
>> +++ b/arch/arm64/include/asm/virt.h
>> @@ -81,7 +81,7 @@ static inline bool is_kernel_in_hyp_mode(void)
>>         return read_sysreg(CurrentEL) == CurrentEL_EL2;
>>  }
>>
>> -static inline bool has_vhe(void)
>> +static __always_inline bool has_vhe(void)
>>  {
>>         if (cpus_have_const_cap(ARM64_HAS_VIRT_HOST_EXTN))
>>                 return true;
>>

I'm fine with the above change.

Thanks,
Jintack

>>
>> But that's probably another patch or two. Thoughts?
>
>
> With the above changes, please feel free to add :
>
> Reviewed-by: Suzuki K Poulose <suzuki.poulose@arm.com>
>
>
Marc Zyngier Jan. 16, 2017, 1:30 p.m. UTC | #11
On 13/01/17 14:56, Suzuki K Poulose wrote:
> On 13/01/17 13:30, Marc Zyngier wrote:
>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>

[...]

>> But maybe we should have some stronger guarantees that we'll
>> always get things inlined, and that the "const" side is enforced:
> 
> Agreed.
> 
>>
>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>> index b4989df..4710469 100644
>> --- a/arch/arm64/include/asm/cpufeature.h
>> +++ b/arch/arm64/include/asm/cpufeature.h
>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>  }
>>
>>  /* System capability check for constant caps */
>> -static inline bool cpus_have_const_cap(int num)
>> +static __always_inline bool cpus_have_const_cap(int num)
> 
> I think we should have the above change and make it inline always.
> 
>>  {
>> -	if (num >= ARM64_NCAPS)
>> -		return false;
>> +	BUILD_BUG_ON(!__builtin_constant_p(num));
> 
> This is not needed, as the compilation would fail if num is not a constant with
> static key code.
> 
>> +	BUILD_BUG_ON(num >= ARM64_NCAPS);
>> +
> 
> Also, I think it would be good to return false for caps > the ARM64_NCAPS, in sync
> with the non-const version.

But what's the semantic? It means we're accessing a capability that
doesn't exist, which looks like a major bug in my book. Is there any
valid use case for this?

Thanks,

	M.
Marc Zyngier Jan. 16, 2017, 2:11 p.m. UTC | #12
On 16/01/17 13:30, Marc Zyngier wrote:
> On 13/01/17 14:56, Suzuki K Poulose wrote:
>> On 13/01/17 13:30, Marc Zyngier wrote:
>>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>>
> 
> [...]
> 
>>> But maybe we should have some stronger guarantees that we'll
>>> always get things inlined, and that the "const" side is enforced:
>>
>> Agreed.
>>
>>>
>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>> index b4989df..4710469 100644
>>> --- a/arch/arm64/include/asm/cpufeature.h
>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>>  }
>>>
>>>  /* System capability check for constant caps */
>>> -static inline bool cpus_have_const_cap(int num)
>>> +static __always_inline bool cpus_have_const_cap(int num)
>>
>> I think we should have the above change and make it inline always.
>>
>>>  {
>>> -	if (num >= ARM64_NCAPS)
>>> -		return false;
>>> +	BUILD_BUG_ON(!__builtin_constant_p(num));
>>
>> This is not needed, as the compilation would fail if num is not a constant with
>> static key code.

I also just checked this, and it doesn't fail if the compiler doesn't
directly support jump labels (we then fall back to the static key being
a standard memory access).
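
For reference, that fallback has roughly this shape (paraphrased with
hypothetical names, not the exact jump_label.h definitions):

	struct example_key {
		int enabled;		/* stands in for the atomic counter */
	};

	static __always_inline bool example_branch_unlikely(struct example_key *key)
	{
		/*
		 * A plain load of key->enabled: this is the memory access that
		 * the asm goto version replaces with a patchable nop/branch,
		 * and it is also why a non-constant index into the key array
		 * still compiles fine on such a toolchain.
		 */
		return key->enabled > 0;
	}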

Thanks,

	M.
Suzuki K Poulose Jan. 16, 2017, 2:19 p.m. UTC | #13
On 16/01/17 14:11, Marc Zyngier wrote:
> On 16/01/17 13:30, Marc Zyngier wrote:
>> On 13/01/17 14:56, Suzuki K Poulose wrote:
>>> On 13/01/17 13:30, Marc Zyngier wrote:
>>>> [+ Suzuki, who wrote the whole cpus_have_const_cap thing]
>>>>
>>
>> [...]
>>
>>>> But maybe we should have some stronger guarantees that we'll
>>>> always get things inlined, and that the "const" side is enforced:
>>>
>>> Agreed.
>>>
>>>>
>>>> diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
>>>> index b4989df..4710469 100644
>>>> --- a/arch/arm64/include/asm/cpufeature.h
>>>> +++ b/arch/arm64/include/asm/cpufeature.h
>>>> @@ -105,10 +105,11 @@ static inline bool cpu_have_feature(unsigned int num)
>>>>  }
>>>>
>>>>  /* System capability check for constant caps */
>>>> -static inline bool cpus_have_const_cap(int num)
>>>> +static __always_inline bool cpus_have_const_cap(int num)
>>>
>>> I think we should have the above change and make it inline always.
>>>
>>>>  {
>>>> -	if (num >= ARM64_NCAPS)
>>>> -		return false;
>>>> +	BUILD_BUG_ON(!__builtin_constant_p(num));
>>>
>>> This is not needed, as the compilation would fail if num is not a constant with
>>> static key code.
>
> I also just checked this, and it doesn't fail if the compiler doesn't
> directly support jump labels (we then fall back to the static key being
> a standard memory access).

Ah, I missed that part of the story. Sorry about that. Please go ahead with the
changes. I had a similar check in my first version and it was dropped later after
a similar review comment. We hadn't considered older toolchains.


Suzuki
Radim Krčmář Jan. 17, 2017, 4:49 p.m. UTC | #14
2017-01-13 11:31+0000, Marc Zyngier:
> Radim, Paolo,
> 
> Here's the KVM/ARM updates for 4.10-rc4. Two timer fixes, and one vgic
> fix for a deadlock that's been reported this week (which should land
> into stable).

Pulled to kvm/master, thanks.