[0/9] arm64: minor fixups and enhancements

Message ID cover.1408736066.git.geoff@infradead.org
State New

Pull-request

git://git.linaro.org/people/geoff.levand/linux-kexec.git for-arm64

Message

Geoff Levand Aug. 22, 2014, 7:49 p.m. UTC
Hi Catalin and Will,

Here is a series of various fixups and improvements for arm64.  Please
consider.

-Geoff

The following changes since commit 94156675847c14a9b16e91b035da32e35e98ef79:

  Revert "arm64: dmi: Add SMBIOS/DMI support" (2014-07-31 14:00:03 +0100)

are available in the git repository at:

  git://git.linaro.org/people/geoff.levand/linux-kexec.git for-arm64

for you to fetch changes up to 4192d403bb9703063c59a052293faa19d38d2f02:

  arm64: Add new cpu-return-addr device tree binding (2014-08-22 12:30:41 -0700)

----------------------------------------------------------------
Geoff Levand (9):
      arm64: Fix efi kernel entry
      arm64: Fix INVALID_HWID definition
      arm64: Remove unneeded extern keyword
      arm64: Remove unused variable in head.S
      arm64: Fix include header order in vmlinux.lds.S
      arm64: Add new routine local_disable
      arm64: Add atomic macros to assembler.h
      arm64: Add missing AT() macros to vmlinux.lds.S
      arm64: Add new cpu-return-addr device tree binding

 Documentation/devicetree/bindings/arm/cpus.txt | 25 +++++++++++++++++++
 arch/arm64/include/asm/assembler.h             | 33 ++++++++++++++++++++++++++
 arch/arm64/include/asm/cpu_ops.h               |  4 ++--
 arch/arm64/include/asm/cputype.h               |  2 +-
 arch/arm64/include/asm/irqflags.h              | 13 ++++++++++
 arch/arm64/kernel/head.S                       |  8 +++----
 arch/arm64/kernel/vmlinux.lds.S                | 14 +++++------
 7 files changed, 85 insertions(+), 14 deletions(-)

Comments

Catalin Marinas Aug. 26, 2014, 3:55 p.m. UTC | #1
On Fri, Aug 22, 2014 at 08:49:15PM +0100, Geoff Levand wrote:
> The recent addition of EFI support has changed the way the arm64 kernel
> image is entered based on whether or not CONFIG_EFI is defined.  To support
> this conditional entry point add a new global symbol start to head.S that
> marks the entry, and change the linker script ENTRY() command to define
> start as the kernel entry.

I don't yet understand what the problem is. The _text symbol is
currently defined in vmlinux.lds.S to be the beginning of the .head.text
section. Whether we have CONFIG_EFI or not, the entry point is the same.
Catalin Marinas Aug. 26, 2014, 3:57 p.m. UTC | #2
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Change the preprocessor macro INVALID_HWID definition from ULONG_MAX to (~0)
> to allow INVALID_HWID to be used within assembly source files.
> 
> Commit 3e98fdacc59bbbdbb659be1a144ccc48ed4860fa (arm64: kernel: make the pen of
> the secondary a 64-bit unsigned value) added the preprocessor macro INVALID_HWID
> to asm/cputype.h with it defined to be ULONG_MAX.  Use of INVALID_HWID in an
> assembly source file that includes cputype.h will generate an assembler
> undefined symbol ULONG_MAX build error.  The kernel does define a ULONG_MAX in
> kernel.h, but that file is not set up to be included in assembly files.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/cputype.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
> index 379d0b8..6b68380 100644
> --- a/arch/arm64/include/asm/cputype.h
> +++ b/arch/arm64/include/asm/cputype.h
> @@ -16,7 +16,7 @@
>  #ifndef __ASM_CPUTYPE_H
>  #define __ASM_CPUTYPE_H
>  
> -#define INVALID_HWID		ULONG_MAX
> +#define INVALID_HWID		UL(~0)

Does it actually expand to ULONG_MAX? ~0 is an int.
Catalin Marinas Aug. 26, 2014, 4:04 p.m. UTC | #3
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Add the new arm64 routine local_disable() to allow the masking of several DAIF
> flags in one operation.  Currently, we only have routines to mask individual
> flags, and masking several flags requires multiple daifset operations.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/irqflags.h | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
> index 11cc941..28521d4 100644
> --- a/arch/arm64/include/asm/irqflags.h
> +++ b/arch/arm64/include/asm/irqflags.h
> @@ -113,5 +113,18 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
>  #define local_dbg_enable()	asm("msr	daifclr, #8" : : : "memory")
>  #define local_dbg_disable()	asm("msr	daifset, #8" : : : "memory")
>  
> +enum daif_flag {
> +	DAIF_FIQ   = (1UL << 6),
> +	DAIF_IRQ   = (1UL << 7),
> +	DAIF_ASYNC = (1UL << 8),
> +	DAIF_DBG   = (1UL << 9),
> +	DAIF_ALL   = (0xffUL << 6),
> +};
> +
> +static inline void local_disable(unsigned long daif_flags)
> +{
> +	arch_local_irq_restore(daif_flags | arch_local_save_flags());
> +}

Who's using this function? I don't see any patch in this series calling
it.
Catalin Marinas Aug. 26, 2014, 4:05 p.m. UTC | #4
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Add an assembler macro definition 'atomic' for performing generic
> atomic operations.  Also add macro definitions for atomic_add,
> atomic_sub, atomic_inc, and atomic_dec.

Same question as before. Why is this needed?
Catalin Marinas Aug. 26, 2014, 4:08 p.m. UTC | #5
On Fri, Aug 22, 2014 at 08:49:17PM +0100, Geoff Levand wrote:
> The existing arm64 linker script uses some pre-defined section macros from
> vmlinux.lds.h, all of which include a section load address attribute of
> 'AT(ADDR(section) - LOAD_OFFSET)'.  LOAD_OFFSET is not defined for arm64, and
> defaults to zero, so this mix of section attributes has no ill effect.
> 
> For consistency and to clear the way for a possible definition of LOAD_OFFSET
> add any missing AT() macros to vmlinux.lds.S.

So, what's the possible definition for LOAD_OFFSET on arm64?
Mark Rutland Aug. 26, 2014, 4:11 p.m. UTC | #6
Hi Geoff,

On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Function prototypes are never definitions, so remove any 'extern' keyword
> from the function prototypes in cpu_ops.h. Fixes warnings emitted by
> checkpatch.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>

Looks sane to me:

Acked-by: Mark Rutland <mark.rutland@arm.com>

Mark.

> ---
>  arch/arm64/include/asm/cpu_ops.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/arm64/include/asm/cpu_ops.h b/arch/arm64/include/asm/cpu_ops.h
> index d7b4b38..150ce63 100644
> --- a/arch/arm64/include/asm/cpu_ops.h
> +++ b/arch/arm64/include/asm/cpu_ops.h
> @@ -61,7 +61,7 @@ struct cpu_operations {
>  };
>  
>  extern const struct cpu_operations *cpu_ops[NR_CPUS];
> -extern int __init cpu_read_ops(struct device_node *dn, int cpu);
> -extern void __init cpu_read_bootcpu_ops(void);
> +int __init cpu_read_ops(struct device_node *dn, int cpu);
> +void __init cpu_read_bootcpu_ops(void);
>  
>  #endif /* ifndef __ASM_CPU_OPS_H */
> -- 
> 1.9.1
Ard Biesheuvel Aug. 26, 2014, 4:19 p.m. UTC | #7
On 26 August 2014 17:55, Catalin Marinas <catalin.marinas@arm.com> wrote:
> On Fri, Aug 22, 2014 at 08:49:15PM +0100, Geoff Levand wrote:
>> The recent addition of EFI support has changed the way the arm64 kernel
>> image is entered based on whether or not CONFIG_EFI is defined.  To support
>> this conditional entry point add a new global symbol start to head.S that
>> marks the entry, and change the linker script ENTRY() command to define
>> start as the kernel entry.
>
> I don't yet understand what the problem is. The _text symbol is
> currently defined in vmlinux.lds.S to be the beginning of the .head.text
> section. Whether we have CONFIG_EFI or not, the entry point is the same.
>

First of all, the 'add x13, x18, #0x16' was carefully chosen to be
both a "MZ" prefix and an executable instruction without any harmful
side effects. So currently, the EFI stub jumps to that add
instruction, and not to the 'b stext' that comes after. There is an
issue with that, which I have already proposed a patch for (arm64/efi:
efistub: jump to 'stext' directly, not through the header), but this
is related to the guarantees the UEFI spec gives about where the
header gets loaded (if at all).
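For readers unfamiliar with the trick Ard describes: 'add x13, x18, #0x16' encodes to
the 32-bit word 0x91005a4d, which stored little-endian begins with the bytes 0x4d 0x5a,
i.e. the "MZ" magic that UEFI expects at the start of a PE/COFF image.  A quick
stand-alone check, not taken from the kernel sources:

	#include <stdio.h>

	int main(void)
	{
		unsigned int insn = 0x91005a4d;	/* add x13, x18, #0x16 */
		unsigned char *p = (unsigned char *)&insn;

		/* On a little-endian host this prints "MZ". */
		printf("%c%c\n", p[0], p[1]);
		return 0;
	}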

However, going back to your patch, setting ENTRY() only affects the
vmlinux ELF image, and this information gets stripped when creating
the binary. Do you need the entry point to be set so you can load
vmlinux using the debugger, perhaps? In that case, did you have any
problems branching to the add instruction? If so, I would like to know
about it.
Mark Rutland Aug. 26, 2014, 4:23 p.m. UTC | #8
On Tue, Aug 26, 2014 at 05:04:43PM +0100, Catalin Marinas wrote:
> On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > Add the new arm64 routine local_disable() to allow the masking of several DAIF
> > flags in one operation.  Currently, we only have routines to mask individual
> > flags, and masking several flags requires multiple daifset operations.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  arch/arm64/include/asm/irqflags.h | 13 +++++++++++++
> >  1 file changed, 13 insertions(+)
> > 
> > diff --git a/arch/arm64/include/asm/irqflags.h b/arch/arm64/include/asm/irqflags.h
> > index 11cc941..28521d4 100644
> > --- a/arch/arm64/include/asm/irqflags.h
> > +++ b/arch/arm64/include/asm/irqflags.h
> > @@ -113,5 +113,18 @@ static inline int arch_irqs_disabled_flags(unsigned long flags)
> >  #define local_dbg_enable()	asm("msr	daifclr, #8" : : : "memory")
> >  #define local_dbg_disable()	asm("msr	daifset, #8" : : : "memory")
> >  
> > +enum daif_flag {
> > +	DAIF_FIQ   = (1UL << 6),
> > +	DAIF_IRQ   = (1UL << 7),
> > +	DAIF_ASYNC = (1UL << 8),
> > +	DAIF_DBG   = (1UL << 9),
> > +	DAIF_ALL   = (0xffUL << 6),
> > +};
> > +
> > +static inline void local_disable(unsigned long daif_flags)
> > +{
> > +	arch_local_irq_restore(daif_flags | arch_local_save_flags());
> > +}
> 
> Who's using this function? I don't see any patch in this series calling
> it.

This was for implementing the spin-table cpu-return-addr idea. Before
returning to the spin-table we need to mask everything, and not doing
that as four back-to-back context-synchronizing writes would be nice.

It should probably be introduced by the cpu-return-addr series unless
this is useful elsewhere. If we don't actually have another use, we
could just have a msr daifset, #0xf (arch_local_disable_all?).

Thanks,
Mark.
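To make the intended use concrete, here is a hypothetical sketch of the kind of caller
Mark describes, plus one possible shape for the arch_local_disable_all() helper he
mentions, written in the style of the existing local_dbg_* macros.  The function name
and the return path are illustrative only, not from a posted patch:

	/* One possible shape for the single-write alternative. */
	#define arch_local_disable_all()	asm("msr	daifset, #0xf" : : : "memory")

	/* Hypothetical caller in the spin-table return path. */
	static void spin_table_cpu_return(unsigned int cpu)
	{
		/* Mask D, A, I and F in one operation rather than four
		 * back-to-back context-synchronizing writes. */
		local_disable(DAIF_ALL);

		/* ... branch to the firmware-provided cpu-return-addr ... */
	}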
Mark Rutland Aug. 26, 2014, 4:27 p.m. UTC | #9
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> asm-generic/vmlinux.lds.h should be included after the arch
> specific headers so that the arch headers can override the
> generic macro defs in asm-generic/vmlinux.lds.h.
> 
> Fixes preprocessor redefined warnings when adding arch specific
> macros.

A sample of those warnings would be nice. What do you see being
redefined?

Thanks,
Mark.

> Signed-off-by: Geoff Levand <geoff@infradead.org> for Huawei, Linaro
> ---
>  arch/arm64/kernel/vmlinux.lds.S | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
> index fbb1af7..8dc1d46 100644
> --- a/arch/arm64/kernel/vmlinux.lds.S
> +++ b/arch/arm64/kernel/vmlinux.lds.S
> @@ -4,10 +4,10 @@
>   * Written by Martin Mares <mj@atrey.karlin.mff.cuni.cz>
>   */
>  
> -#include <asm-generic/vmlinux.lds.h>
>  #include <asm/thread_info.h>
>  #include <asm/memory.h>
>  #include <asm/page.h>
> +#include <asm-generic/vmlinux.lds.h>
>  
>  #include "image.h"
>  
> -- 
> 1.9.1
Mark Rutland Aug. 26, 2014, 4:31 p.m. UTC | #10
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Change the preprocessor macro INVALID_HWID definition from ULONG_MAX to (~0)
> to allow INVALID_HWID to be used within assembly source files.
> 
> Commit 3e98fdacc59bbbdbb659be1a144ccc48ed4860fa (arm64: kernel: make the pen of
> the secondary a 64-bit unsigned value) added the preprocessor macro INVALID_HWID
> to asm/cputype.h with it defined to be ULONG_MAX.  Use of INVALID_HWID in an
> assembly source file that includes cputype.h will generate an assembler
> undefined symbol ULONG_MAX build error.  The kernel does define a ULONG_MAX in
> kernel.h, but that file is not set up to be included in assembly files.

We don't seem to be using INVALID_HWID in any assembly at the moment,
and I'm not sure I follow what we'd need it for.

Why do we need this in assembly?

Thanks,
Mark.

> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/include/asm/cputype.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/arm64/include/asm/cputype.h b/arch/arm64/include/asm/cputype.h
> index 379d0b8..6b68380 100644
> --- a/arch/arm64/include/asm/cputype.h
> +++ b/arch/arm64/include/asm/cputype.h
> @@ -16,7 +16,7 @@
>  #ifndef __ASM_CPUTYPE_H
>  #define __ASM_CPUTYPE_H
>  
> -#define INVALID_HWID		ULONG_MAX
> +#define INVALID_HWID		UL(~0)
>  
>  #define MPIDR_UP_BITMASK	(0x1 << 30)
>  #define MPIDR_MT_BITMASK	(0x1 << 24)
> -- 
> 1.9.1
Geoff Levand Aug. 26, 2014, 5:38 p.m. UTC | #11
Hi Mark,

On Tue, 2014-08-26 at 17:31 +0100, Mark Rutland wrote:
> On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > Change the preprocessor macro INVALID_HWID definition from ULONG_MAX to (~0)
> > to allow INVALID_HWID to be used within assembly source files.
> > 
> > Commit 3e98fdacc59bbbdbb659be1a144ccc48ed4860fa (arm64: kernel: make the pen of
> > the secondary a 64-bit unsigned value) added the preprocessor macro INVALID_HWID
> > to asm/cputype.h with it defined to be ULONG_MAX.  Use of INVALID_HWID in an
> > assembly source file that includes cputype.h will generate an assembler
> > undefined symbol ULONG_MAX build error.  The kernel does define a ULONG_MAX in
> > kernel.h, but that file is not set up to be included in assembly files.
> 
> We don't seem to be using INVALID_HWID in any assembly at the moment,
> and I'm not sure I follow what we'd need it for.
> 
> Why do we need this in assembly?

We currently don't use INVALID_HWID in any assembly files, but I wanted
to use it for some testing and found the current definition gave an
error.

-Geoff
Geoff Levand Aug. 26, 2014, 6:18 p.m. UTC | #12
Hi Catalin,

On Tue, 2014-08-26 at 16:57 +0100, Catalin Marinas wrote:
> On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > Change the preprocessor macro INVALID_HWID definition from ULONG_MAX to (~0)
> > to allow INVALID_HWID to be used within assembly source files.
> > 
> > --- a/arch/arm64/include/asm/cputype.h
> > +++ b/arch/arm64/include/asm/cputype.h
> > @@ -16,7 +16,7 @@
> >  #ifndef __ASM_CPUTYPE_H
> >  #define __ASM_CPUTYPE_H
> >  
> > -#define INVALID_HWID		ULONG_MAX
> > +#define INVALID_HWID		UL(~0)
> 
> Does it actually expand to ULONG_MAX? ~0 is an int.

It seems to be OK, in C:

volatile unsigned long secondary_holding_pen_release = INVALID_HWID;

Disassembly of section .data:

0000000000000000 <secondary_holding_pen_release>:
   0:   ffffffff        .word   0xffffffff
   4:   ffffffff        .word   0xffffffff

In .S:

        mov     x10, INVALID_HWID               // clear secondary_holding_pen_release
 2b0:   9280000a        mov     x10, #0xffffffffffffffff        // #-1

-Geoff
Geoff Levand Aug. 26, 2014, 6:42 p.m. UTC | #13
Hi,

On Tue, 2014-08-26 at 18:19 +0200, Ard Biesheuvel wrote:
> First of all, the 'add x13, x18, #0x16' was carefully chosen to be
> both a "MZ" prefix and an executable instruction without any harmful
> side effects. 

OK, I didn't look closely enough to realize this was an instruction without
side effects.

> So currently, the EFI stub jumps to that add
> instruction, and not to the 'b stext' that comes after. There is an
> issue with that, which I have already proposed a patch for (arm64/efi:
> efistub: jump to 'stext' directly, not through the header), but this
> is related to the guarantees the UEFI spec gives about where the
> header gets loaded (if at all).
> 
> However, going back to your patch, setting ENTRY() only affects the
> vmlinux ELF image, and this information gets stripped when creating
> the binary. Do you need the entry point to be set so you can load
> vmlinux using the debugger, perhaps? In that case, did you have any
> problems branching to the add instruction? If so, I would like to know
> about it.

kexec-tools [1] can load vmlinux elf files, and uses ehdr.e_entry as the
kernel entry point.  I tested without this patch (branching to _text),
and it works OK, so we can drop this patch.

-Geoff

[1] https://git.linaro.org/people/geoff.levand/kexec-tools.git
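For reference, a minimal stand-alone sketch of how a loader such as kexec-tools can pick
up the entry point from a vmlinux ELF image, as described above; this is illustrative,
not code taken from kexec-tools:

	#include <elf.h>
	#include <stdio.h>

	/* Print e_entry from a vmlinux ELF header; kexec-tools uses this
	 * field as the kernel entry point. */
	static int print_entry(const char *path)
	{
		Elf64_Ehdr ehdr;
		FILE *f = fopen(path, "rb");

		if (!f)
			return -1;
		if (fread(&ehdr, sizeof(ehdr), 1, f) != 1) {
			fclose(f);
			return -1;
		}
		printf("entry point: 0x%llx\n", (unsigned long long)ehdr.e_entry);
		fclose(f);
		return 0;
	}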

Geoff Levand Aug. 26, 2014, 7:27 p.m. UTC | #14
Hi Mark,

On Tue, 2014-08-26 at 17:27 +0100, Mark Rutland wrote:
> On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > asm-generic/vmlinux.lds.h should be included after the arch
> > specific headers so that the arch headers can override the
> > generic macro defs in asm-generic/vmlinux.lds.h.
> > 
> > Fixes preprocessor redefined warnings when adding arch specific
> > macros.
> 
> A sample of those warnings would be nice. What do you see being
> redefined?

In testing I wanted to set LOAD_OFFSET.  It gave warnings like
these:

  arch/arm64/include/asm/page.h: warning: "LOAD_OFFSET" redefined [enabled by default]

I'll send out another patch with updated message.

-Geoff
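The warning comes from the default that asm-generic/vmlinux.lds.h provides when an
architecture has not defined LOAD_OFFSET itself.  A simplified sketch of the
interaction; the arch-side definition shown here is purely hypothetical:

	/* include/asm-generic/vmlinux.lds.h (simplified) */
	#ifndef LOAD_OFFSET
	#define LOAD_OFFSET 0
	#endif

	/* arch/arm64/include/asm/page.h -- hypothetical arch definition */
	#define LOAD_OFFSET	SOME_ARCH_SPECIFIC_OFFSET

	/*
	 * With the generic header included first, its default of 0 is seen
	 * before the arch definition, so the later #define triggers the
	 * "LOAD_OFFSET redefined" warning.  Including the arch headers first
	 * lets the #ifndef above pick up the arch value and skip the default.
	 */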
Geoff Levand Aug. 26, 2014, 7:33 p.m. UTC | #15
Catalin,

On Tue, 2014-08-26 at 17:08 +0100, Catalin Marinas wrote:
> On Fri, Aug 22, 2014 at 08:49:17PM +0100, Geoff Levand wrote:
> > The existing arm64 linker script uses some pre-defined section macros from
> > vmlinux.lds.h, all of which include a section load address attribute of
> > 'AT(ADDR(section) - LOAD_OFFSET)'.  LOAD_OFFSET is not defined for arm64, and
> > defaults to zero, so this mix of section attributes has no ill effect.
> > 
> > For consistency and to clear the way for a possible definition of LOAD_OFFSET
> > add any missing AT() macros to vmlinux.lds.S.
> 
> So, what's the possible definition for LOAD_OFFSET on arm64?

Well, this is just intended as a cleanup for vmlinux.lds.S, but I think
a LOAD_OFFSET definition may be needed for kdump.

Takahiro, could you comment?

-Geoff
Geoff Levand Aug. 26, 2014, 7:40 p.m. UTC | #16
Hi Catalin,

On Tue, 2014-08-26 at 17:05 +0100, Catalin Marinas wrote:
> On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > Add an assembler macro definition 'atomic' for performing generic
> > atomic operations.  Also add macro definitions for atomic_add,
> > atomic_sub, atomic_inc, and atomic_dec.
> 
> Same question as before. Why is this needed?

I use these in the smp-spin-table re-work I am doing to keep a
count of CPUs in the secondary_holding_pen, but those patches
are not yet ready for review.

I thought I'd post this now thinking these macros might be of use
to someone else.  I could just put them into the smp-spin-table
series, please let me know.

-Geoff
AKASHI Takahiro Aug. 27, 2014, 6:53 a.m. UTC | #17
On 08/27/2014 04:33 AM, Geoff Levand wrote:
> Catalin,
>
> On Tue, 2014-08-26 at 17:08 +0100, Catalin Marinas wrote:
>> On Fri, Aug 22, 2014 at 08:49:17PM +0100, Geoff Levand wrote:
>>> The existing arm64 linker script uses some pre-defined section macros from
>>> vmlinux.lds.h, all of which include a section load address attribute of
>>> 'AT(ADDR(section) - LOAD_OFFSET)'.  LOAD_OFFSET is not defined for arm64, and
>>> defaults to zero, so this mix of section attributes has no ill effect.
>>>
>>> For consistency and to clear the way for a possible definition of LOAD_OFFSET
>>> add any missing AT() macros to vmlinux.lds.S.
>>
>> So, what's the possible definition for LOAD_OFFSET on arm64?
>
> Well, this is just intended as a cleanup for vmlinux.lds.S, but I think
> a LOAD_OFFSET definition may be needed for kdump.
>
> Takahiro, could you comment?

My current implementation of kdump doesn't assume use of LOAD_OFFSET, since it can
use the same kernel for the crash (2nd stage) kernel as for the 1st stage kernel.

The loaded address and entry address will be handled properly by the userspace tool, kexec,
as long as TEXT_OFFSET is reasonably small (that is, it fits into the memory region dedicated
to the crash kernel).

-Takahiro AKASHI

> -Geoff
>
>
Catalin Marinas Aug. 27, 2014, 8:21 a.m. UTC | #18
On Tue, Aug 26, 2014 at 07:18:00PM +0100, Geoff Levand wrote:
> On Tue, 2014-08-26 at 16:57 +0100, Catalin Marinas wrote:
> > On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > > Change the preprocessor macro INVALID_HWID definition from ULONG_MAX to (~0)
> > > to allow INVALID_HWID to be used within assembly source files.
> > > 
> > > --- a/arch/arm64/include/asm/cputype.h
> > > +++ b/arch/arm64/include/asm/cputype.h
> > > @@ -16,7 +16,7 @@
> > >  #ifndef __ASM_CPUTYPE_H
> > >  #define __ASM_CPUTYPE_H
> > >  
> > > -#define INVALID_HWID		ULONG_MAX
> > > +#define INVALID_HWID		UL(~0)
> > 
> > Does it actually expand to ULONG_MAX? ~0 is an int.
> 
> It seems to be OK, in C:
> 
> volatile unsigned long secondary_holding_pen_release = INVALID_HWID;
> 
> Disassembly of section .data:
> 
> 0000000000000000 <secondary_holding_pen_release>:
>    0:   ffffffff        .word   0xffffffff
>    4:   ffffffff        .word   0xffffffff

OK, it looks like it's sign-extending from int to unsigned long (an
alternative would have been to write (~UL(0)) but the above should do as
well).

Anyway, the patch should come with the series that makes use of such
change.
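For reference, a simplified sketch of why UL(~0) works in both contexts.  It is based on
the kernel's _AC() helper in linux/const.h and the UL() wrapper used by the arm64
headers; the real helpers use an extra level of indirection:

	#ifdef __ASSEMBLY__
	#define _AC(X, Y)	X		/* assembler sees ~0, i.e. -1   */
	#else
	#define _AC(X, Y)	(X##Y)		/* C sees (~0UL) == ULONG_MAX   */
	#endif

	#define UL(x)		_AC(x, UL)

	#define INVALID_HWID	UL(~0)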
Catalin Marinas Aug. 27, 2014, 8:24 a.m. UTC | #19
On Tue, Aug 26, 2014 at 08:27:30PM +0100, Geoff Levand wrote:
> On Tue, 2014-08-26 at 17:27 +0100, Mark Rutland wrote:
> > On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > > asm-generic/vmlinux.lds.h should be included after the arch
> > > specific headers so that the arch headers can override the
> > > generic macro defs in asm-generic/vmlinux.lds.h.
> > > 
> > > > Fixes preprocessor redefined warnings when adding arch specific
> > > macros.
> > 
> > A sample of those warnings would be nice. What do you see being
> > redefined?
> 
> > In testing I wanted to set LOAD_OFFSET.  It gave warnings like
> these:
> 
>   arch/arm64/include/asm/page.h: warning: "LOAD_OFFSET" redefined [enabled by default]
> 
> I'll send out another patch with updated message.

We currently don't define LOAD_OFFSET, so it would be good to see
what it would be defined to, and why. You could keep this patch as part of the
series introducing LOAD_OFFSET.
Catalin Marinas Aug. 27, 2014, 8:25 a.m. UTC | #20
On Tue, Aug 26, 2014 at 08:40:53PM +0100, Geoff Levand wrote:
> On Tue, 2014-08-26 at 17:05 +0100, Catalin Marinas wrote:
> > On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > > Add an assembler macro definition 'atomic' for performing generic
> > > atomic operations.  Also add macro definitions for atomic_add,
> > > atomic_sub, atomic_inc, and atomic_dec.
> > 
> > Same question as before. Why is this needed?
> 
> I use these in the smp-spin-table re-work I am doing to keep a
> count of CPUs in the secondary_holding_pen, but those patches
> are not yet ready for review.
> 
> I thought I'd post this now thinking these macros might be of use
> to someone else.  I could just put them into the smp-spin-table
> series, please let me know.

Keeping them all together would be better. I don't like introducing
unused code.
Catalin Marinas Aug. 27, 2014, 8:30 a.m. UTC | #21
On Fri, Aug 22, 2014 at 08:49:17PM +0100, Geoff Levand wrote:
> Add a new arm64 device tree binding cpu-return-addr.  This binding is recommended
> for all ARM v8 CPUs that have an "enable-method" property value of "spin-table".
> The value is a 64 bit physical address that secondary CPU execution will transfer
> to upon CPU shutdown.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  Documentation/devicetree/bindings/arm/cpus.txt | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/arm/cpus.txt b/Documentation/devicetree/bindings/arm/cpus.txt
> index 1fe72a0..42d5d5f 100644
> --- a/Documentation/devicetree/bindings/arm/cpus.txt
> +++ b/Documentation/devicetree/bindings/arm/cpus.txt
> @@ -201,6 +201,15 @@ nodes to be present and contain the properties described below.
>  			  property identifying a 64-bit zero-initialised
>  			  memory location.
>  
> +	- cpu-return-addr
> +		Usage: recommended for all ARM v8 CPUs that have an
> +		       "enable-method" property value of "spin-table".
> +		Value type: <prop-encoded-array>
> +		Definition:
> +			# On ARM v8 64-bit systems this must be a two-cell property.
> +			The value is a 64 bit physical address that secondary
> +			CPU execution will transfer to upon CPU shutdown.

As I've been away for most of the past four weeks, I haven't read all
the threads around this topic. But I don't think we ended up with a
clearly agreed recommendation for cpu-return-addr. If we do, we also
need to be clear on what state the CPU should be in when returned to
such address (ELx, MMU, caches).

In general, if we need returning to firmware I would strongly recommend
PSCI but I know there is the Applied board which does not have EL3, so
something like this may work. But we need to get them into discussion as
well since I assume cpu-return-addr would be a firmware provided
address.
Will Deacon Aug. 27, 2014, 8:40 a.m. UTC | #22
On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> Remove an unused local variable from head.S.  It seems this was never
> used even from the initial commit
> 9703d9d7f77ce129621f7d80a844822e2daa7008 (arm64: Kernel booting and
> initialisation), and is a leftover from a previous implementation
> of __calc_phys_offset.
> 
> Signed-off-by: Geoff Levand <geoff@infradead.org>
> ---
>  arch/arm64/kernel/head.S | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
> index 4019b85..607d4bb 100644
> --- a/arch/arm64/kernel/head.S
> +++ b/arch/arm64/kernel/head.S
> @@ -377,10 +377,6 @@ ENTRY(__boot_cpu_mode)
>  	.long	0
>  	.popsection
>  
> -	.align	3
> -2:	.quad	.
> -	.quad	PAGE_OFFSET
> -
>  #ifdef CONFIG_SMP
>  	.align	3
>  1:	.quad	.

Thanks, I'll take this one into the arm64 tree.

Will
Geoff Levand Aug. 29, 2014, 9:45 p.m. UTC | #23
Hi Catalin,

On Wed, 2014-08-27 at 09:30 +0100, Catalin Marinas wrote:
> On Fri, Aug 22, 2014 at 08:49:17PM +0100, Geoff Levand wrote:
> > Add a new arm64 device tree binding cpu-return-addr.  This binding is recommended
> > for all ARM v8 CPUs that have an "enable-method" property value of "spin-table".
> > The value is a 64 bit physical address that secondary CPU execution will transfer
> > to upon CPU shutdown.
> > 
> > Signed-off-by: Geoff Levand <geoff@infradead.org>
> > ---
> >  Documentation/devicetree/bindings/arm/cpus.txt | 25 +++++++++++++++++++++++++
> >  1 file changed, 25 insertions(+)
> > 
> > diff --git a/Documentation/devicetree/bindings/arm/cpus.txt b/Documentation/devicetree/bindings/arm/cpus.txt
> > index 1fe72a0..42d5d5f 100644
> > --- a/Documentation/devicetree/bindings/arm/cpus.txt
> > +++ b/Documentation/devicetree/bindings/arm/cpus.txt
> > @@ -201,6 +201,15 @@ nodes to be present and contain the properties described below.
> >  			  property identifying a 64-bit zero-initialised
> >  			  memory location.
> >  
> > +	- cpu-return-addr
> > +		Usage: recommended for all ARM v8 CPUs that have an
> > +		       "enable-method" property value of "spin-table".
> > +		Value type: <prop-encoded-array>
> > +		Definition:
> > +			# On ARM v8 64-bit systems this must be a two-cell property.
> > +			The value is a 64 bit physical address that secondary
> > +			CPU execution will transfer to upon CPU shutdown.
> 
> As I've been away for most of the past four weeks, I haven't read all
> the threads around this topic. But I don't think we ended up with a
> clearly agreed recommendation for cpu-return-addr. If we do, we also
> need to be clear on what state the CPU should be in when returned to
> such address (ELx, MMU, caches).

Regarding the system state, I think what Mark proposed [1] is what we
should work towards.  I'll add that to the binding's Definition text.

I have not tried to implement it yet, but once I get it working
and tested we'll be able to say that this is what works and should be
the interface.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-August/278910.html

> In general, if we need returning to firmware I would strongly recommend
> PSCI but I know there is the Applied board which does not have EL3, so
> something like this may work. But we need to get them into discussion as
> well since I assume cpu-return-addr would be a firmware provided
> address.

Yes, cpu-return-addr will be (optionally) provided by the firmware.

The current kexec_prepare system call implementation I have checks
the return value of cpu_ops.cpu_disable() for all CPUs.  I have set up the
spin-table cpu_disable() to check whether the device tree defines a
cpu-return-addr for that CPU.  So if there is no cpu-return-addr
kexec_prepare will fail and the user will get an error on a kernel
load from kexec-tools.

Feng Kan of APM has already said that address 0 will work correctly
for the APM board [2], and Arun Chandran has tested this.

[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-July/273084.html

-Geoff
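A hypothetical sketch of the cpu_disable() check described above; the function name and
error handling are illustrative, not taken from the posted kexec series:

	static int smp_spin_table_cpu_disable(unsigned int cpu)
	{
		struct device_node *dn = of_get_cpu_node(cpu, NULL);
		u64 return_addr;
		int ret = 0;

		if (!dn)
			return -ENODEV;

		/* Without a firmware-provided return address there is nowhere
		 * to park the CPU, so refuse the disable (and hence kexec). */
		if (of_property_read_u64(dn, "cpu-return-addr", &return_addr))
			ret = -EOPNOTSUPP;

		of_node_put(dn);
		return ret;
	}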
Geoff Levand Aug. 29, 2014, 9:53 p.m. UTC | #24
Hi Catalin,

On Wed, 2014-08-27 at 09:24 +0100, Catalin Marinas wrote:
> On Tue, Aug 26, 2014 at 08:27:30PM +0100, Geoff Levand wrote:
> > On Tue, 2014-08-26 at 17:27 +0100, Mark Rutland wrote:
> > > On Fri, Aug 22, 2014 at 08:49:16PM +0100, Geoff Levand wrote:
> > > > asm-generic/vmlinux.lds.h should be included after the arch
> > > > specific headers so that the arch headers can override the
> > > > generic macro defs in asm-generic/vmlinux.lds.h.
> > > > 
> > > > Fixes preprocessor redefined warnings when adding arch specific
> > > > macros.
> > > 
> > > A sample of those warnings would be nice. What do you see being
> > > redefined?
> > 
> > In testing I wanted to set LOAD_OFFSET.  It gave warnings like
> > these:
> > 
> >   arch/arm64/include/asm/page.h: warning: "LOAD_OFFSET" redefined [enabled by default]
> > 
> > I'll send out another patch with updated message.
> 
> We currently don't define LOAD_OFFSET, so it would be good to see
> what it would be defined to, and why. You could keep this patch as part of the
> series introducing LOAD_OFFSET.

Sorry I wasn't clear on this, but I have no plan to define an arm64-specific
LOAD_OFFSET.  This patch was intended to make the output section definitions
in the arm64 linker script internally consistent, and consistent with the way
other architectures define their output sections.

If this is not enough motivation for you to be interested in this,
then I won't bother to send out that updated patch.  Please let me
know.

-Geoff