Message ID: cover.1457977401.git.geoff@infradead.org
State: New
Hi, On Mon, Mar 14, 2016 at 05:48:00PM +0000, Geoff Levand wrote: > The existing arm64 hcall implementations are limited in that they only > allow for two distinct hcalls; with the x0 register either zero or not > zero. Also, the API of the hyp-stub exception vector routines and the > KVM exception vector routines differ; hyp-stub uses a non-zero value in > x0 to implement __hyp_set_vectors, whereas KVM uses it to implement > kvm_call_hyp. > > To allow for additional hcalls to be defined and to make the arm64 hcall > API more consistent across exception vector routines, change the hcall > implementations to use the 16 bit immediate value of the HVC instruction > to specify the hcall type. I'm a bit concerned about namespace pollution on the HVC immediate here. Existing users tend to allocate a single "random" number to identify the API -- Xen and Jailhouse do this, for example. If we start using the HVC immediate to select functions, not just APIs, the space is going to fill up a lot faster if we have to multiplex multiple APIs through it. (We don't currently seem to multiplex APIs much here, except that we do use HVC for PSCI calls from the guest, and it could be used for additional paravirtualised services in the future). > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and > HVC_SET_VECTORS to be used as hcall type specifiers and convert the > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() > routines to use these new macros when executing an HVC call. Also, > change the corresponding hyp-stub and KVM el1_sync exception vector > routines to use these new macros. It would also be preferable to keep the 32-bit and 64-bit APIs the same; we should avoid having them different unless there's a clinching technical reason... There may be some historical context for this that I'm missing... 
Cheers ---Dave > > Signed-off-by: Geoff Levand <geoff@infradead.org> > Signed-off-by: James Morse <james.morse@arm.com> > --- > arch/arm64/include/asm/virt.h | 27 +++++++++++++++++++++++++++ > arch/arm64/kernel/hyp-stub.S | 32 +++++++++++++++++++++----------- > arch/arm64/kvm/hyp.S | 3 ++- > arch/arm64/kvm/hyp/hyp-entry.S | 9 ++++++--- > 4 files changed, 56 insertions(+), 15 deletions(-) > [...]
Hi, On Tue, 2016-03-15 at 13:50 +0000, Dave Martin wrote: > On Mon, Mar 14, 2016 at 05:48:00PM +0000, Geoff Levand wrote: > > The existing arm64 hcall implementations are limited in that they only > > allow for two distinct hcalls; with the x0 register either zero or not > > zero. Also, the API of the hyp-stub exception vector routines and the > > KVM exception vector routines differ; hyp-stub uses a non-zero value in > > x0 to implement __hyp_set_vectors, whereas KVM uses it to implement > > kvm_call_hyp. > > > > To allow for additional hcalls to be defined and to make the arm64 hcall > > API more consistent across exception vector routines, change the hcall > > implementations to use the 16 bit immediate value of the HVC instruction > > to specify the hcall type. > > I'm a bit concerned about namespace pollution on the HVC immediate here. > Existing users tend allocate a single "random" number to identify the > API -- Xen and Jailhouse do this for example. > > If we start using the HVC immediate to select functions, not just APIs, > the space is going to fill up a lot faster, if we have a multiplex > multiple APIs through it. This was discussed and concluded that we have 16 bits to fill up, and that is enough. Functions can still be multiplexed through a single HVC immediate if the user chooses to do so. > > (We don't currently seem to multiplex APIs much here, except that we > do use HVC for PSCI calls from the guest, and it could be used for > additional paravirtualised services in the future). > > > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and > > HVC_SET_VECTORS to be used as hcall type specifiers and convert the > > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() > > routines to use these new macros when executing an HVC call. Also, > > change the corresponding hyp-stub and KVM el1_sync exception vector > > routines to use these new macros. 
> > It would also be preferable to keep the 32-bit and 64-bit APIs the same; > we should avoid having them different unless there's a clinching > technical reason... Please expand on why you see it as preferable. What problems do you see? -Geoff
On Tue, Mar 15, 2016 at 11:15:10AM -0700, Geoff Levand wrote: > Hi, > > On Tue, 2016-03-15 at 13:50 +0000, Dave Martin wrote: > > On Mon, Mar 14, 2016 at 05:48:00PM +0000, Geoff Levand wrote: > > > The existing arm64 hcall implementations are limited in that they only > > > allow for two distinct hcalls; with the x0 register either zero or not > > > zero. Also, the API of the hyp-stub exception vector routines and the > > > KVM exception vector routines differ; hyp-stub uses a non-zero value in > > > x0 to implement __hyp_set_vectors, whereas KVM uses it to implement > > > kvm_call_hyp. > > > > > > To allow for additional hcalls to be defined and to make the arm64 hcall > > > API more consistent across exception vector routines, change the hcall > > > implementations to use the 16 bit immediate value of the HVC instruction > > > to specify the hcall type. > > > > I'm a bit concerned about namespace pollution on the HVC immediate here. > > Existing users tend allocate a single "random" number to identify the > > API -- Xen and Jailhouse do this for example. > > > > If we start using the HVC immediate to select functions, not just APIs, > > the space is going to fill up a lot faster, if we have a multiplex > > multiple APIs through it. > > This was discussed and concluded that we have 16 bits to fill up, > and that is enough. Functions can still be multiplexed through a Enough for what? > single HVC immediate if the user chooses to do so. But KVM can't? The HVC #imm space doesn't seem to be managed, which implies that discovery and/or renumbering mechanisms would be needed if we end up wanting to mux multiple ABIs through there. The tighter limitation on immediate size, and the need for code patching if translation of HVC numbers is needed, mean that this can be harder when using the HVC immediate for demux rather than an ordinary register. 
Currently, the only other ABI muxed through HVC is PSCI, but it already looks like there is a potential collision -- HVC #0 from EL1 is already KVM_CALL_HYP or a PSCI call, and we rely on knowing whether the call came from the host or guest to demux it properly. This kind of problem is likely to proliferate over time. > > (We don't currently seem to multiplex APIs much here, except that we > > do use HVC for PSCI calls from the guest, and it could be used for > > additional paravirtualised services in the future). > > > > > Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and > > > HVC_SET_VECTORS to be used as hcall type specifiers and convert the > > > existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() > > > routines to use these new macros when executing an HVC call. Also, > > > change the corresponding hyp-stub and KVM el1_sync exception vector > > > routines to use these new macros. > > > > It would also be preferable to keep the 32-bit and 64-bit APIs the same; > > we should avoid having them different unless there's a clinching > > technical reason... > > Please expand on why you see it as preferable. What problems do > you see? Fragmentation avoidance is the main argument I see. The architectural constraints and the problem to be solved are basically the same between 32- and 64-bit, AFAICT. Cheers ---Dave
On 16/03/16 13:50, Dave Martin wrote: > On Tue, Mar 15, 2016 at 11:15:10AM -0700, Geoff Levand wrote: >> Hi, >> >> On Tue, 2016-03-15 at 13:50 +0000, Dave Martin wrote: >>> On Mon, Mar 14, 2016 at 05:48:00PM +0000, Geoff Levand wrote: >>>> The existing arm64 hcall implementations are limited in that they only >>>> allow for two distinct hcalls; with the x0 register either zero or not >>>> zero. Also, the API of the hyp-stub exception vector routines and the >>>> KVM exception vector routines differ; hyp-stub uses a non-zero value in >>>> x0 to implement __hyp_set_vectors, whereas KVM uses it to implement >>>> kvm_call_hyp. >>>> >>>> To allow for additional hcalls to be defined and to make the arm64 hcall >>>> API more consistent across exception vector routines, change the hcall >>>> implementations to use the 16 bit immediate value of the HVC instruction >>>> to specify the hcall type. >>> >>> I'm a bit concerned about namespace pollution on the HVC immediate here. >>> Existing users tend allocate a single "random" number to identify the >>> API -- Xen and Jailhouse do this for example. >>> >>> If we start using the HVC immediate to select functions, not just APIs, >>> the space is going to fill up a lot faster, if we have a multiplex >>> multiple APIs through it. >> >> This was discussed and concluded that we have 16 bits to fill up, >> and that is enough. Functions can still be multiplexed through a > > Enough for what? > >> single HVC immediate if the user chooses to do so. > > But KVM can't? > > The HVC #imm space doesn't seem to be managed, which implies that > discovery and/or renumbering mechanisms would be needed if we end up > wanting to mux multiple ABIs through there. The tighter limitation > on immediate size, and the need for code patching if translation of > HVC numbers is needed, mean that this can be harder when using the HVC > immediate for demux rather than an ordinary register. 
> > Currently, the only other ABI muxed through HVC is PSCI, but it > already looks like there is a potential collision -- HVC #0 from EL1 is > already KVM_CALL_HYP or a PSCI call, and we rely on knowing whether > the call came from the host or guest to demux it properly. > > This kind of problem is likely to proliferate over time. > >>> (We don't currently seem to multiplex APIs much here, except that we >>> do use HVC for PSCI calls from the guest, and it could be used for >>> additional paravirtualised services in the future). >>> >>>> Define three new preprocessor macros HVC_CALL_HYP, HVC_GET_VECTORS, and >>>> HVC_SET_VECTORS to be used as hcall type specifiers and convert the >>>> existing __hyp_get_vectors(), __hyp_set_vectors() and kvm_call_hyp() >>>> routines to use these new macros when executing an HVC call. Also, >>>> change the corresponding hyp-stub and KVM el1_sync exception vector >>>> routines to use these new macros. >>> >>> It would also be preferable to keep the 32-bit and 64-bit APIs the same; >>> we should avoid having them different unless there's a clinching >>> technical reason... >> >> Please expand on why you see it as preferable. What problems do >> you see? > > Fragmentation avoidance is the main argument I see. The architectural > constraints and the problem to be solved are basically the same between > 32- and 64-bit, AFAICT. +1. I never quite understood why we went from a single HVC immediate + a register indicating the operation to a proliferation of immediate values (and still the need for a register to indicate the operation in most cases). This seems to go in a direction that is diametrically opposite to the "normal" ARM way. That doesn't make it an invalid approach, but uniformity with other APIs (PSCI for example) and the 32bit KVM code seems a highly desirable feature (given that I'll end up maintaining that code). Thanks, M.
Hi Marc, On Wed, 2016-03-16 at 14:09 +0000, Marc Zyngier wrote: > This seems to go in a direction that is diametrically opposite the > the > "normal" ARM way. That doesn't make it an invalid approach, but > uniformity with other APIs (PSCI for example) and the 32bit KVM code > seems a highly desirable feature (given that I'll end up maintaining > that code). We need a way to get the CPU back to the exception level it had on entry to the kernel, and this hcall change is part of my proposed solution. If you could outline something you think would be a better fit, I'll take that and work on an implementation of it. -Geoff
Hi! On 14/03/16 17:48, Geoff Levand wrote: > From: AKASHI Takahiro <takahiro.akashi@linaro.org> > > Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus > and save registers' status in per-cpu ELF notes before starting crash > dump kernel. See kernel_kexec(). > Even if not all secondary cpus have shut down, we do kdump anyway. > > As we don't have to make non-boot(crashed) cpus offline (to preserve > correct status of cpus at crash dump) before shutting down, this patch > also adds a variant of smp_send_stop(). > > Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c > index b1adc51..76402c6cd 100644 > --- a/arch/arm64/kernel/smp.c > +++ b/arch/arm64/kernel/smp.c > @@ -701,6 +705,28 @@ static void ipi_cpu_stop(unsigned int cpu) > cpu_relax(); > } > > +static atomic_t waiting_for_crash_ipi; > + > +static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) > +{ > + crash_save_cpu(regs, cpu); > + > + raw_spin_lock(&stop_lock); > + pr_debug("CPU%u: stopping\n", cpu); > + raw_spin_unlock(&stop_lock); > + > + atomic_dec(&waiting_for_crash_ipi); > + > + local_irq_disable(); Aren't irqs already disabled here? - or is this a 'just make sure'.... > + > + if (cpu_ops[cpu]->cpu_die) > + cpu_ops[cpu]->cpu_die(cpu); > + > + /* just in case */ > + while (1) > + wfi(); > +} > + > /* > * Main handler for inter-processor interrupts > */ > @@ -731,6 +757,12 @@ void handle_IPI(int ipinr, struct pt_regs *regs) > irq_exit(); > break; > > + case IPI_CPU_CRASH_STOP: > + irq_enter(); > + ipi_cpu_crash_stop(cpu, regs); > + irq_exit(); This made me jump: irq_exit() may end up in the __do_softirq() (with irqs turned back on!) ... but these lines are impossible to reach. Maybe get the compiler to enforce this with an unreachable() instead? 
> + break; > + > #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST > case IPI_TIMER: > irq_enter(); > @@ -791,6 +823,30 @@ void smp_send_stop(void) > pr_warning("SMP: failed to stop secondary CPUs\n"); > } > > +void smp_send_crash_stop(void) > +{ > + cpumask_t mask; > + unsigned long timeout; > + > + if (num_online_cpus() == 1) > + return; > + > + cpumask_copy(&mask, cpu_online_mask); > + cpumask_clear_cpu(smp_processor_id(), &mask); > + > + atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1); > + > + smp_cross_call(&mask, IPI_CPU_CRASH_STOP); > + > + /* Wait up to one second for other CPUs to stop */ > + timeout = USEC_PER_SEC; > + while ((atomic_read(&waiting_for_crash_ipi) > 0) && timeout--) > + udelay(1); > + > + if (atomic_read(&waiting_for_crash_ipi) > 0) > + pr_warn("SMP: failed to stop secondary CPUs\n"); > +} > + > /* > * not supported here > */ > Thanks, James
Hi! On 14/03/16 17:48, Geoff Levand wrote: > From: AKASHI Takahiro <takahiro.akashi@linaro.org> > > On the startup of primary kernel, the memory region used by crash dump > kernel must be specified by "crashkernel=" kernel parameter. > reserve_crashkernel() will allocate and reserve the region for later use. > > User space tools, like kexec-tools, will be able to find that region marked > as "Crash kernel" in /proc/iomem. [NB: Re-ordered diff hunks ] > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c > index 7802f21..4edf181 100644 > --- a/arch/arm64/mm/init.c > +++ b/arch/arm64/mm/init.c > @@ -171,6 +229,8 @@ void __init arm64_memblock_init(void) > memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start); > #endif > > + reserve_crashkernel(); > + > early_init_fdt_scan_reserved_mem(); > > /* 4GB maximum for 32-bit only capable devices */ > This is 'nit' territory, but if you were to make this: > if (IS_ENABLED(CONFIG_KEXEC_CORE)) > reserve_crashkernel(); then the #ifdefs around reserve_crashkernel() can go, and the compiler will work out that this static function can be optimised out. It also means the compiler performs its checks, even if CONFIG_KEXEC_CORE isn't selected. The same trick can be applied in patch 18 (around reserve_elfcorehdr()). > @@ -66,6 +67,63 @@ static int __init early_initrd(char *p) > early_param("initrd", early_initrd); > #endif > > +#ifdef CONFIG_KEXEC_CORE > +/* > + * reserve_crashkernel() - reserves memory for crash kernel > + * > + * This function reserves memory area given in "crashkernel=" kernel command > + * line parameter. The memory reserved is used by dump capture kernel when > + * primary kernel is crashing. 
> + */ > +static void __init reserve_crashkernel(void) > +{ > + unsigned long long crash_size = 0, crash_base = 0; > + int ret; > + > + ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(), > + &crash_size, &crash_base); > + if (ret) > + return; > + > + if (crash_base == 0) { > + crash_base = memblock_find_in_range(0, > + MEMBLOCK_ALLOC_ACCESSIBLE, crash_size, 1 << 21); > + if (crash_base == 0) { > + pr_warn("Unable to allocate crashkernel (size:%llx)\n", > + crash_size); > + return; > + } > + memblock_reserve(crash_base, crash_size); > + > + } else { > + /* User specifies base address explicitly. */ > + if (!memblock_is_region_memory(crash_base, crash_size) || > + memblock_is_region_reserved(crash_base, crash_size)) { > + pr_warn("crashkernel has wrong address or size\n"); > + return; > + } > + > + if (crash_base & ((1 << 21) - 1)) { > + pr_warn("crashkernel base address is not 2MB aligned\n"); > + return; > + } > + > + memblock_reserve(crash_base, crash_size); > + } > + > + pr_info("Reserving %lldMB of memory at %lldMB for crashkernel\n", > + crash_size >> 20, crash_base >> 20); > + > + crashk_res.start = crash_base; > + crashk_res.end = crash_base + crash_size - 1; > +} > +#else > +static void __init reserve_crashkernel(void) > +{ > + ; > +} > +#endif /* CONFIG_KEXEC_CORE */ > + > /* > * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). It > * currently assumes that for memory starting above 4G, 32-bit devices will Thanks, James
Hi! On 18/03/16 18:08, James Morse wrote: > On 14/03/16 17:48, Geoff Levand wrote: >> From: AKASHI Takahiro <takahiro.akashi@linaro.org> >> >> Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus >> and save registers' status in per-cpu ELF notes before starting crash >> dump kernel. See kernel_kexec(). >> Even if not all secondary cpus have shut down, we do kdump anyway. >> >> As we don't have to make non-boot(crashed) cpus offline (to preserve >> correct status of cpus at crash dump) before shutting down, this patch >> also adds a variant of smp_send_stop(). >> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c >> index b1adc51..76402c6cd 100644 >> --- a/arch/arm64/kernel/smp.c >> +++ b/arch/arm64/kernel/smp.c >> @@ -701,6 +705,28 @@ static void ipi_cpu_stop(unsigned int cpu) >> cpu_relax(); >> } >> >> +static atomic_t waiting_for_crash_ipi; >> + >> +static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) >> +{ >> + crash_save_cpu(regs, cpu); >> + >> + raw_spin_lock(&stop_lock); >> + pr_debug("CPU%u: stopping\n", cpu); >> + raw_spin_unlock(&stop_lock); >> + >> + atomic_dec(&waiting_for_crash_ipi); >> + >> + local_irq_disable(); >> + >> + if (cpu_ops[cpu]->cpu_die) >> + cpu_ops[cpu]->cpu_die(cpu); >> + >> + /* just in case */ >> + while (1) >> + wfi(); Having thought about this some more: I don't think spinning like this is safe. We need to spin with the MMU turned off, otherwise this core will pollute the kdump kernel with TLB entries from the old page tables. Suzuki added code to catch this happening with cpu hotplug (grep CPU_STUCK_IN_KERNEL in arm64/for-next/core), but that won't help here. If 'CPU_STUCK_IN_KERNEL' was set by a core, I don't think we can kexec/kdump for this reason. 
Something like cpu_die() for spin-table is needed, naively I think it should turn the MMU off, and jump back into the secondary_holding_pen, but the core would still be stuck in the kernel, and the memory addresses associated with secondary_holding_pen can't be re-used. (which is fine for kdump, but not kexec) Thanks, James
On Fri, Mar 18, 2016 at 06:08:30PM +0000, James Morse wrote: > Hi! > > On 14/03/16 17:48, Geoff Levand wrote: > > From: AKASHI Takahiro <takahiro.akashi@linaro.org> > > > > On the startup of primary kernel, the memory region used by crash dump > > kernel must be specified by "crashkernel=" kernel parameter. > > reserve_crashkernel() will allocate and reserve the region for later use. > > > > User space tools, like kexec-tools, will be able to find that region marked > > as "Crash kernel" in /proc/iomem. > > [NB: Re-ordered diff hunks ] > > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c > > index 7802f21..4edf181 100644 > > --- a/arch/arm64/mm/init.c > > +++ b/arch/arm64/mm/init.c > > @@ -171,6 +229,8 @@ void __init arm64_memblock_init(void) > > memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start); > > #endif > > > > + reserve_crashkernel(); > > + > > early_init_fdt_scan_reserved_mem(); > > > > /* 4GB maximum for 32-bit only capable devices */ > > > > > This is 'nit' territory, but if you were to make this: > > if (IS_ENABLED(CONFIG_KEXEC_CORE)) > > reserve_crashkernel(); > > then the #ifdefs around reserve_crashkernel() can go, and the compiler will work > out that this static function can be optimised out. It also means the > compiler performs its checks, even if CONFIG_KEXEC_CORE isn't selected. The same > trick can be applied in patch 18 (around reserve_elfcorehdr()). It will also make the code more understandable. Thanks, -Takahiro AKASHI > > @@ -66,6 +67,63 @@ static int __init early_initrd(char *p) > > early_param("initrd", early_initrd); > > #endif > > > > +#ifdef CONFIG_KEXEC_CORE > > +/* > > + * reserve_crashkernel() - reserves memory for crash kernel > > + * > > + * This function reserves memory area given in "crashkernel=" kernel command > > + * line parameter. The memory reserved is used by dump capture kernel when > > + * primary kernel is crashing. 
> > + */ > > +static void __init reserve_crashkernel(void) > > +{ > > + unsigned long long crash_size = 0, crash_base = 0; > > + int ret; > > + > > + ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(), > > + &crash_size, &crash_base); > > + if (ret) > > + return; > > + > > + if (crash_base == 0) { > > + crash_base = memblock_find_in_range(0, > > + MEMBLOCK_ALLOC_ACCESSIBLE, crash_size, 1 << 21); > > + if (crash_base == 0) { > > + pr_warn("Unable to allocate crashkernel (size:%llx)\n", > > + crash_size); > > + return; > > + } > > + memblock_reserve(crash_base, crash_size); > > + > > + } else { > > + /* User specifies base address explicitly. */ > > + if (!memblock_is_region_memory(crash_base, crash_size) || > > + memblock_is_region_reserved(crash_base, crash_size)) { > > + pr_warn("crashkernel has wrong address or size\n"); > > + return; > > + } > > + > > + if (crash_base & ((1 << 21) - 1)) { > > + pr_warn("crashkernel base address is not 2MB aligned\n"); > > + return; > > + } > > + > > + memblock_reserve(crash_base, crash_size); > > + } > > + > > + pr_info("Reserving %lldMB of memory at %lldMB for crashkernel\n", > > + crash_size >> 20, crash_base >> 20); > > + > > + crashk_res.start = crash_base; > > + crashk_res.end = crash_base + crash_size - 1; > > +} > > +#else > > +static void __init reserve_crashkernel(void) > > +{ > > + ; > > +} > > +#endif /* CONFIG_KEXEC_CORE */ > > + > > /* > > * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). It > > * currently assumes that for memory starting above 4G, 32-bit devices will > > > Thanks, > > James >
On Fri, Mar 18, 2016 at 06:08:15PM +0000, James Morse wrote: > Hi! > > On 14/03/16 17:48, Geoff Levand wrote: > > From: AKASHI Takahiro <takahiro.akashi@linaro.org> > > > > Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus > > and save registers' status in per-cpu ELF notes before starting crash > > dump kernel. See kernel_kexec(). > > Even if not all secondary cpus have shut down, we do kdump anyway. > > > > As we don't have to make non-boot(crashed) cpus offline (to preserve > > correct status of cpus at crash dump) before shutting down, this patch > > also adds a variant of smp_send_stop(). > > > > Signed-off-by: AKASHI Takahiro <takahiro.akashi@linaro.org> > > > diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c > > index b1adc51..76402c6cd 100644 > > --- a/arch/arm64/kernel/smp.c > > +++ b/arch/arm64/kernel/smp.c > > @@ -701,6 +705,28 @@ static void ipi_cpu_stop(unsigned int cpu) > > cpu_relax(); > > } > > > > +static atomic_t waiting_for_crash_ipi; > > + > > +static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) > > +{ > > + crash_save_cpu(regs, cpu); > > + > > + raw_spin_lock(&stop_lock); > > + pr_debug("CPU%u: stopping\n", cpu); > > + raw_spin_unlock(&stop_lock); > > + > > + atomic_dec(&waiting_for_crash_ipi); > > + > > + local_irq_disable(); > > Aren't irqs already disabled here? - or is this a 'just make sure'.... Well, it also exists in ipi_cpu_stop() ... > > + > > + if (cpu_ops[cpu]->cpu_die) > > + cpu_ops[cpu]->cpu_die(cpu); > > + > > + /* just in case */ > > + while (1) > > + wfi(); > > +} > > + > > /* > > * Main handler for inter-processor interrupts > > */ > > @@ -731,6 +757,12 @@ void handle_IPI(int ipinr, struct pt_regs *regs) > > irq_exit(); > > break; > > > > + case IPI_CPU_CRASH_STOP: > > + irq_enter(); > > + ipi_cpu_crash_stop(cpu, regs); > > + irq_exit(); > > This made me jump: irq_exit() may end up in the __do_softirq() (with irqs turned > back on!) ... 
but these lines are impossible to reach. Maybe get the compiler to > enforce this with an unreachable() instead? I'm not sure how effective unreachable() is here, but OK I will add it. Thanks, -Takahiro AKASHI > > + break; > > + > > #ifdef CONFIG_GENERIC_CLOCKEVENTS_BROADCAST > > case IPI_TIMER: > > irq_enter(); > > @@ -791,6 +823,30 @@ void smp_send_stop(void) > > pr_warning("SMP: failed to stop secondary CPUs\n"); > > } > > > > +void smp_send_crash_stop(void) > > +{ > > + cpumask_t mask; > > + unsigned long timeout; > > + > > + if (num_online_cpus() == 1) > > + return; > > + > > + cpumask_copy(&mask, cpu_online_mask); > > + cpumask_clear_cpu(smp_processor_id(), &mask); > > + > > + atomic_set(&waiting_for_crash_ipi, num_online_cpus() - 1); > > + > > + smp_cross_call(&mask, IPI_CPU_CRASH_STOP); > > + > > + /* Wait up to one second for other CPUs to stop */ > > + timeout = USEC_PER_SEC; > > + while ((atomic_read(&waiting_for_crash_ipi) > 0) && timeout--) > > + udelay(1); > > + > > + if (atomic_read(&waiting_for_crash_ipi) > 0) > > + pr_warn("SMP: failed to stop secondary CPUs\n"); > > +} > > + > > /* > > * not supported here > > */ > > > > > Thanks, > > James > >
On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: > Hi! > > On 18/03/16 18:08, James Morse wrote: > > On 14/03/16 17:48, Geoff Levand wrote: > >> From: AKASHI Takahiro <takahiro.akashi@linaro.org> > >> > >> Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus > >> and save registers' status in per-cpu ELF notes before starting crash > >> dump kernel. See kernel_kexec(). > >> Even if not all secondary cpus have shut down, we do kdump anyway. > >> > >> As we don't have to make non-boot(crashed) cpus offline (to preserve > >> correct status of cpus at crash dump) before shutting down, this patch > >> also adds a variant of smp_send_stop(). > > >> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c > >> index b1adc51..76402c6cd 100644 > >> --- a/arch/arm64/kernel/smp.c > >> +++ b/arch/arm64/kernel/smp.c > >> @@ -701,6 +705,28 @@ static void ipi_cpu_stop(unsigned int cpu) > >> cpu_relax(); > >> } > >> > >> +static atomic_t waiting_for_crash_ipi; > >> + > >> +static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) > >> +{ > >> + crash_save_cpu(regs, cpu); > >> + > >> + raw_spin_lock(&stop_lock); > >> + pr_debug("CPU%u: stopping\n", cpu); > >> + raw_spin_unlock(&stop_lock); > >> + > >> + atomic_dec(&waiting_for_crash_ipi); > >> + > >> + local_irq_disable(); > >> + > >> + if (cpu_ops[cpu]->cpu_die) > >> + cpu_ops[cpu]->cpu_die(cpu); > >> + > >> + /* just in case */ > >> + while (1) > >> + wfi(); > > Having thought about this some more: I don't think spinning like this is safe. > We need to spin with the MMU turned off, otherwise this core will pollute the > kdump kernel with TLB entries from the old page tables. I think that wfi() will never wake up since local interrupts are disabled here. So how can it pollute the kdump kernel? > Suzuki added code to > catch this happening with cpu hotplug (grep CPU_STUCK_IN_KERNEL in > arm64/for-next/core), but that won't help here. 
If 'CPU_STUCK_IN_KERNEL' was set > by a core, I don't think we can kexec/kdump for this reason. I will need to look into Suzuki's code. > Something like cpu_die() for spin-table is needed, naively I think it should > turn the MMU off, and jump back into the secondary_holding_pen, but the core > would still be stuck in the kernel, and the memory addresses associated with > secondary_holding_pen can't be re-used. (which is fine for kdump, but not kexec) Please note that the code is exercised only in kdump case through machine_crash_shutdown(). Thanks, -Takahiro AKASHI > > Thanks, > > James >
On 31/03/16 08:57, AKASHI Takahiro wrote: > On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: >> Hi! >> >> On 18/03/16 18:08, James Morse wrote: >>> On 14/03/16 17:48, Geoff Levand wrote: >>>> From: AKASHI Takahiro <takahiro.akashi@linaro.org> >>>> >>>> Primary kernel calls machine_crash_shutdown() to shut down non-boot cpus >>>> and save registers' status in per-cpu ELF notes before starting crash >>>> dump kernel. See kernel_kexec(). >>>> Even if not all secondary cpus have shut down, we do kdump anyway. >>>> >>>> As we don't have to make non-boot(crashed) cpus offline (to preserve >>>> correct status of cpus at crash dump) before shutting down, this patch >>>> also adds a variant of smp_send_stop(). >> >>>> diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c >>>> index b1adc51..76402c6cd 100644 >>>> --- a/arch/arm64/kernel/smp.c >>>> +++ b/arch/arm64/kernel/smp.c >>>> @@ -701,6 +705,28 @@ static void ipi_cpu_stop(unsigned int cpu) >>>> cpu_relax(); >>>> } >>>> >>>> +static atomic_t waiting_for_crash_ipi; >>>> + >>>> +static void ipi_cpu_crash_stop(unsigned int cpu, struct pt_regs *regs) >>>> +{ >>>> + crash_save_cpu(regs, cpu); >>>> + >>>> + raw_spin_lock(&stop_lock); >>>> + pr_debug("CPU%u: stopping\n", cpu); >>>> + raw_spin_unlock(&stop_lock); >>>> + >>>> + atomic_dec(&waiting_for_crash_ipi); >>>> + >>>> + local_irq_disable(); >>>> + >>>> + if (cpu_ops[cpu]->cpu_die) >>>> + cpu_ops[cpu]->cpu_die(cpu); >>>> + >>>> + /* just in case */ >>>> + while (1) >>>> + wfi(); >> >> Having thought about this some more: I don't think spinning like this is safe. >> We need to spin with the MMU turned off, otherwise this core will pollute the >> kdump kernel with TLB entries from the old page tables. > > I think that wfi() will never wake up since local interrupts are disabled > here. So how can it pollute the kdump kernel? Having interrupts disabled doesn't prevent an exit from WFI. Quite the opposite, actually. 
It is designed to wake up the core when something happens on the external interface. Thanks, M.
On Thu, Mar 31, 2016 at 09:12:32AM +0100, Marc Zyngier wrote: > On 31/03/16 08:57, AKASHI Takahiro wrote: > > On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: > >> On 18/03/16 18:08, James Morse wrote: > >>> On 14/03/16 17:48, Geoff Levand wrote: > >>>> + /* just in case */ > >>>> + while (1) > >>>> + wfi(); > >> > >> Having thought about this some more: I don't think spinning like this is safe. > >> We need to spin with the MMU turned off, otherwise this core will pollute the > >> kdump kernel with TLB entries from the old page tables. > > > > I think that wfi() will never wake up since local interrupts are disabled > > here. So how can it pollute the kdump kernel? > > Having interrupts disabled doesn't prevent an exit from WFI. Quite the > opposite, actually. It is designed to wake-up the core when something > happens on the external interface. Further, WFI is a hint, and may simply act as a NOP. The ARM ARM calls this out (see section D1.17.2 "Wait For Interrupt" in ARM DDI 0487A.i): Because the architecture permits a PE to leave the low-power state for any reason, it is permissible for a PE to treat WFI as a NOP, but this is not recommended for lowest power operation. Thanks, Mark.
On 31/03/16 08:46, AKASHI Takahiro wrote: > James Morse wrote: > > This made me jump: irq_exit() may end up in the __do_softirq() (with irqs turned > > back on!) ... but these lines are impossible to reach. Maybe get the compiler to > > enforce this with an unreachable() instead? > > I'm not sure how effective unreachable() is here, but OK I will add it. I thought '__builtin_unreachable()' would generate a warning if it was reachable, but from [0] it looks like it just suppresses warnings. You're right, it won't help. Thanks, James [0] https://gcc.gnu.org/onlinedocs/gcc/Other-Builtins.html
On 03/14/16 at 05:47pm, Geoff Levand wrote: > This series adds the core support for kexec re-boot and kdump on ARM64. This > version of the series combines Takahiro's kdump patches with my kexec patches. > Please consider all patches for inclusion. > > To load a second stage kernel and execute a kexec re-boot or to work with kdump > on ARM64 systems a series of patches to kexec-tools [2], which have not yet been > merged upstream, are needed. Please update to the latest if you have been using > an older version. > > To examine vmcore (/proc/vmcore), you should use > - gdb v7.7 or later > - crash v7.1.1 or later > > [1] https://git.kernel.org/cgit/linux/kernel/git/geoff/linux-kexec.git > [2] https://git.kernel.org/cgit/linux/kernel/git/geoff/kexec-tools.git > Geoff, for easier to review maybe you can send kexec patches first then AKASHI Takahiro can send the kdump patches as a standalone patchset? Thanks Dave
On 03/31/2016 02:19 PM, AKASHI Takahiro wrote: > On Fri, Mar 18, 2016 at 06:08:30PM +0000, James Morse wrote: >> Hi! >> >> On 14/03/16 17:48, Geoff Levand wrote: >>> From: AKASHI Takahiro <takahiro.akashi@linaro.org> >>> >>> On the startup of primary kernel, the memory region used by crash dump >>> kernel must be specified by "crashkernel=" kernel parameter. >>> reserve_crashkernel() will allocate and reserve the region for later use. >>> >>> User space tools, like kexec-tools, will be able to find that region marked >>> as "Crash kernel" in /proc/iomem. >> >> [NB: Re-ordered diff hunks ] >> >>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c >>> index 7802f21..4edf181 100644 >>> --- a/arch/arm64/mm/init.c >>> +++ b/arch/arm64/mm/init.c >>> @@ -171,6 +229,8 @@ void __init arm64_memblock_init(void) >>> memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start); >>> #endif >>> >>> + reserve_crashkernel(); >>> + >>> early_init_fdt_scan_reserved_mem(); >>> >>> /* 4GB maximum for 32-bit only capable devices */ >>> >> >> >> This is 'nit' territory, but if you were to make this: >>> if (IS_ENABLED(CONFIG_KEXEC_CORE)) >>> reserve_crashkernel(); >> >> then the #ifdefs around reserve_crashkernel() can go, and the compiler will work >> out that this static function can be optimised out. It also means the >> compiler performs its checks, even if CONFIG_KEXEC_CORE isn't selected. The same >> trick can be applied in patch 18 (around reserve_elfcorehdr()). > > It will also make the code more understandable. I tried this change, but unfortunately, reserve_crashkernel() calls an external function, parse_crashkernel(), which is defined only if CONFIG_KEXEC_CORE. So we will see a compiler error in !CONFIG_KEXEC_CORE if we take the #ifdef out around reserve_crashkernel().
-Takahiro AKASHI > Thanks, > -Takahiro AKASHI > >>> @@ -66,6 +67,63 @@ static int __init early_initrd(char *p) >>> early_param("initrd", early_initrd); >>> #endif >>> >>> +#ifdef CONFIG_KEXEC_CORE >>> +/* >>> + * reserve_crashkernel() - reserves memory for crash kernel >>> + * >>> + * This function reserves memory area given in "crashkernel=" kernel command >>> + * line parameter. The memory reserved is used by dump capture kernel when >>> + * primary kernel is crashing. >>> + */ >>> +static void __init reserve_crashkernel(void) >>> +{ >>> + unsigned long long crash_size = 0, crash_base = 0; >>> + int ret; >>> + >>> + ret = parse_crashkernel(boot_command_line, memblock_phys_mem_size(), >>> + &crash_size, &crash_base); >>> + if (ret) >>> + return; >>> + >>> + if (crash_base == 0) { >>> + crash_base = memblock_find_in_range(0, >>> + MEMBLOCK_ALLOC_ACCESSIBLE, crash_size, 1 << 21); >>> + if (crash_base == 0) { >>> + pr_warn("Unable to allocate crashkernel (size:%llx)\n", >>> + crash_size); >>> + return; >>> + } >>> + memblock_reserve(crash_base, crash_size); >>> + >>> + } else { >>> + /* User specifies base address explicitly. */ >>> + if (!memblock_is_region_memory(crash_base, crash_size) || >>> + memblock_is_region_reserved(crash_base, crash_size)) { >>> + pr_warn("crashkernel has wrong address or size\n"); >>> + return; >>> + } >>> + >>> + if (crash_base & ((1 << 21) - 1)) { >>> + pr_warn("crashkernel base address is not 2MB aligned\n"); >>> + return; >>> + } >>> + >>> + memblock_reserve(crash_base, crash_size); >>> + } >>> + >>> + pr_info("Reserving %lldMB of memory at %lldMB for crashkernel\n", >>> + crash_size >> 20, crash_base >> 20); >>> + >>> + crashk_res.start = crash_base; >>> + crashk_res.end = crash_base + crash_size - 1; >>> +} >>> +#else >>> +static void __init reserve_crashkernel(void) >>> +{ >>> + ; >>> +} >>> +#endif /* CONFIG_KEXEC_CORE */ >>> + >>> /* >>> * Return the maximum physical address for ZONE_DMA (DMA_BIT_MASK(32)). 
It >>> * currently assumes that for memory starting above 4G, 32-bit devices will >> >> >> Thanks, >> >> James >>
On Thu, Mar 31, 2016 at 11:10:38AM +0100, Mark Rutland wrote: > On Thu, Mar 31, 2016 at 09:12:32AM +0100, Marc Zyngier wrote: > > On 31/03/16 08:57, AKASHI Takahiro wrote: > > > On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: > > >> On 18/03/16 18:08, James Morse wrote: > > >>> On 14/03/16 17:48, Geoff Levand wrote: > > >>>> + /* just in case */ > > >>>> + while (1) > > >>>> + wfi(); > > >> > > >> Having thought about this some more: I don't think spinning like this is safe. > > >> We need to spin with the MMU turned off, otherwise this core will pollute the > > >> kdump kernel with TLB entries from the old page tables. > > > > > > I think that wfi() will never wake up since local interrupts are disabled > > > here. So how can it pollute the kdump kernel? > > > > Having interrupts disabled doesn't prevent an exit from WFI. Quite the > > opposite, actually. It is designed to wake-up the core when something > > happens on the external interface. > > Further, WFI is a hint, and may simply act as a NOP. Ah, OK. But even so, none of interrupt handlers (nor other code) will be executed after cpu wakes up, and the memory won't be polluted. Or do I still miss something? -Takahiro AKASHI > The ARM ARM calls this out (see "D1.17.2" Wait For Interrupt in ARM DDI > 0487A.i): > > Because the architecture permits a PE to leave the low-power > state for any reason, it is permissible for a PE to treat WFI as > a NOP , but this is not recommended for lowest power operation. > > Thanks, > Mark.
On Fri, Apr 01, 2016 at 05:45:09PM +0900, AKASHI Takahiro wrote: > On Thu, Mar 31, 2016 at 11:10:38AM +0100, Mark Rutland wrote: > > On Thu, Mar 31, 2016 at 09:12:32AM +0100, Marc Zyngier wrote: > > > On 31/03/16 08:57, AKASHI Takahiro wrote: > > > > On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: > > > >> On 18/03/16 18:08, James Morse wrote: > > > >>> On 14/03/16 17:48, Geoff Levand wrote: > > > >>>> + /* just in case */ > > > >>>> + while (1) > > > >>>> + wfi(); > > > >> > > > >> Having thought about this some more: I don't think spinning like this is safe. > > > >> We need to spin with the MMU turned off, otherwise this core will pollute the > > > >> kdump kernel with TLB entries from the old page tables. > > > > > > > > I think that wfi() will never wake up since local interrupts are disabled > > > > here. So how can it pollute the kdump kernel? > > > > > > Having interrupts disabled doesn't prevent an exit from WFI. Quite the > > > opposite, actually. It is designed to wake-up the core when something > > > happens on the external interface. > > > > Further, WFI is a hint, and may simply act as a NOP. > > Ah, OK. But even so, none of interrupt handlers (nor other code) will > be executed after cpu wakes up, and the memory won't be polluted. The code comprising the while(1) loop will be executed, and TLB walks, speculative fetches, etc may occur regardless. We don't share TLB entries between cores (we don't have support for ARMv8.2s CnP), and the kdump kernel should be running in a carveout from main memory (which IIUC is not mapped by the original kernel). So normally, this would not be a problem. However, if there is a problem with the page tables (e.g. entries erroneously point into a PA range the kdump kernel is using), then unavoidable background memory traffic from the CPU may cause problems for the kdump kernel. Thanks, Mark.
Hi, On Fri, 2016-04-01 at 09:59 +0800, Dave Young wrote: > Geoff, for easier to review maybe you can send kexec patches first > then AKASHI Takahiro > can send the kdump patches as a standalone patchset? Marc Zyngier specifically asked for an integrated set of patches for easier review. I will keep it that way for now. -Geoff
Mark, On Fri, Apr 01, 2016 at 10:36:35AM +0100, Mark Rutland wrote: > On Fri, Apr 01, 2016 at 05:45:09PM +0900, AKASHI Takahiro wrote: > > On Thu, Mar 31, 2016 at 11:10:38AM +0100, Mark Rutland wrote: > > > On Thu, Mar 31, 2016 at 09:12:32AM +0100, Marc Zyngier wrote: > > > > On 31/03/16 08:57, AKASHI Takahiro wrote: > > > > > On Mon, Mar 21, 2016 at 01:29:28PM +0000, James Morse wrote: > > > > >> On 18/03/16 18:08, James Morse wrote: > > > > >>> On 14/03/16 17:48, Geoff Levand wrote: > > > > >>>> + /* just in case */ > > > > >>>> + while (1) > > > > >>>> + wfi(); > > > > >> > > > > >> Having thought about this some more: I don't think spinning like this is safe. > > > > >> We need to spin with the MMU turned off, otherwise this core will pollute the > > > > >> kdump kernel with TLB entries from the old page tables. > > > > > > > > > > I think that wfi() will never wake up since local interrupts are disabled > > > > > here. So how can it pollute the kdump kernel? > > > > > > > > Having interrupts disabled doesn't prevent an exit from WFI. Quite the > > > > opposite, actually. It is designed to wake-up the core when something > > > > happens on the external interface. > > > > > > Further, WFI is a hint, and may simply act as a NOP. > > > > Ah, OK. But even so, none of interrupt handlers (nor other code) will > > be executed after cpu wakes up, and the memory won't be polluted. > > The code comprising the while(1) loop will be executed, and TLB walks, > speculative fetches, etc may occur regardless. > > We don't share TLB entries between cores (we don't have support for > ARMv8.2s CnP), and the kdump kernel should be running in a carveout from > main memory (which IIUC is not mapped by the original kernel). So > normally, this would not be a problem. In fact, the memory region for crash kernel will be just memblock_reserve()'d. We can't do memblock_remove() here because that region is expected to exist as part of system memory. 
(For details, see kimage_load_crash_segment() in kexec_core.c.) Should we remove it from the kernel mapping? > However, if there is a problem with the page tables (e.g. entries > erroneously point into a PA range the kdump kernel is using), then > unavoidable background memory traffic from the CPU may cause problems > for the kdump kernel. But the traffic generated by the CPU will be concentrated in the area around the while loop, ipi_cpu_crash_stop(), in the *crashed* kernel's memory. (I'm not sure about speculative behaviors.) So I don't think it will hurt the kdump kernel. Thanks, -Takahiro AKASHI > Thanks, > Mark.
Marc, it has been discussed for long time, I think kdump code can be logically splitted from this series so that we can get the kexec done first? On 04/01/16 at 11:39am, Geoff Levand wrote: > Hi, > > On Fri, 2016-04-01 at 09:59 +0800, Dave Young wrote: > > Geoff, for easier to review maybe you can send kexec patches first > > then AKASHI Takahiro > > can send the kdump patches as a standalone patchset? > > Marc Zyngier specifically asked for an integrated set > of patches for easier review. I will keep it that way > for now. > > -Geoff
Hi Dave, On 17/05/16 06:42, Dave Young wrote: > Marc, it has been discussed for long time, I think kdump code can be > logically splitted from this series so that we can get the kexec > done first? The basis for kexec are already on their way to mainline, as part of the hibernate series that James has put together. I'd expect someone to: 1) rebase the remaining kexec/kdump patches based on the current arm64/for-next/core, 2) post these patches when -rc1 gets cut in two weeks from now, 3) address the review comments in a timely manner, 4) update the series once a week so that we see some actual progress If this happens, I can't see any reason why this code wouldn't get merged. But keeping kexec and kdump together is not what has prevented the code from getting merged until now. Thanks, M. > > On 04/01/16 at 11:39am, Geoff Levand wrote: >> Hi, >> >> On Fri, 2016-04-01 at 09:59 +0800, Dave Young wrote: >>> Geoff, for easier to review maybe you can send kexec patches first >>> then AKASHI Takahiro >>> can send the kdump patches as a standalone patchset? >> >> Marc Zyngier specifically asked for an integrated set >> of patches for easier review. I will keep it that way >> for now. >> >> -Geoff >
Marc, Dave On Tue, May 17, 2016 at 09:06:54AM +0100, Marc Zyngier wrote: > Hi Dave, > > On 17/05/16 06:42, Dave Young wrote: > > Marc, it has been discussed for long time, I think kdump code can be > > logically splitted from this series so that we can get the kexec > > done first? > > The basis for kexec are already on their way to mainline, as part of the > hibernate series that James has put together. > > I'd expect someone to: > > 1) rebase the remaining kexec/kdump patches based on the current > arm64/for-next/core, Done, as of last week. The only issue that I have now is support for KASLR, but this is a tooling issue rather than a kernel one. I'm now talking with Dave Anderson (Red Hat) about how we should fix it. > 2) post these patches when -rc1 gets cut in two weeks from now, Yes, we are planning to do so. > 3) address the review comments in a timely manner, I'd expect the code to be reviewed in a timely manner, too. In the v4.6 cycle, we did not get any comments (nor acks) on the kexec/kdump-specific part of the patches. If this happens again, it will make the turnaround of our re-spinning even longer. > 4) update the series once a week so that we see some actual progress Yes, if we have updates. > If this happens, I can't see any reason why this code wouldn't get merged. But keeping kexec and kdump together is not what has prevented the code from getting merged until now. Yeah, agree. Thanks, -Takahiro AKASHI > > Thanks, > > M. > > > > On 04/01/16 at 11:39am, Geoff Levand wrote: > >> Hi, > >> > >> On Fri, 2016-04-01 at 09:59 +0800, Dave Young wrote: > >>> Geoff, for easier to review maybe you can send kexec patches first > >>> then AKASHI Takahiro > >>> can send the kdump patches as a standalone patchset? > >> > >> Marc Zyngier specifically asked for an integrated set > >> of patches for easier review. I will keep it that way > >> for now. > >> > >> -Geoff > > > > > -- > Jazz is not dead. It just smells funny...
Hi Marc On 05/17/16 at 09:06am, Marc Zyngier wrote: > Hi Dave, > > On 17/05/16 06:42, Dave Young wrote: > > Marc, it has been discussed for long time, I think kdump code can be > > logically splitted from this series so that we can get the kexec > > done first? > > The basis for kexec are already on their way to mainline, as part of the > hibernate series that James has put together. > > I'd expect someone to: > > 1) rebase the remaining kexec/kdump patches based on the current > arm64/for-next/core, > 2) post these patches when -rc1 gets cut in two weeks from now, > 3) address the review comments in a timely manner, > 4) update the series once a week so that we see some actual progress > > If this happens, I can't see any reason why this code wouldn't get > merged. But keeping kexec and kdump together is not what has prevented > the code from getting merged until now. Cool, thanks a lot for the clarification about the situation. Dave