
[v2,2/2] powerpc/mm: Add memory_block_size as a kernel parameter

Message ID 20230609060851.329406-2-aneesh.kumar@linux.ibm.com (mailing list archive)
State Superseded
Series [v2,1/2] powerpc/mm: Cleanup memory block size probing

Checks

Context Check Description
snowpatch_ozlabs/github-powerpc_selftests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_ppctests success Successfully ran 8 jobs.
snowpatch_ozlabs/github-powerpc_sparse success Successfully ran 4 jobs.
snowpatch_ozlabs/github-powerpc_kernel_qemu success Successfully ran 24 jobs.
snowpatch_ozlabs/github-powerpc_clang success Successfully ran 6 jobs.

Commit Message

Aneesh Kumar K V June 9, 2023, 6:08 a.m. UTC
Certain devices can possess non-standard memory capacities, not constrained
to multiples of 1GB. Provide a kernel parameter so that we can map the
device memory completely on memory hotplug.

Restrict the memory_block_size value to a power of 2, similar to the LMB size.
The memory block size must also be larger than the section size.
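
For example (illustrative value only), booting with

	memory_block_size=256M

selects a 256 MiB block size; the value is parsed with memparse(), rounded
down to a power of 2, and ignored if the result is smaller than the section
size.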

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
---
 .../admin-guide/kernel-parameters.txt         |  3 +++
 arch/powerpc/kernel/setup_64.c                | 23 +++++++++++++++++++
 arch/powerpc/mm/init_64.c                     | 17 ++++++++++----
 3 files changed, 38 insertions(+), 5 deletions(-)

Comments

Reza Arbab June 13, 2023, 8:06 p.m. UTC | #1
On Fri, Jun 09, 2023 at 11:38:51AM +0530, Aneesh Kumar K.V wrote:
>Certain devices can possess non-standard memory capacities, not constrained
>to multiples of 1GB. Provide a kernel parameter so that we can map the
>device memory completely on memory hotplug.

Case in point: the memory block size determined at boot is 1GB, but we 
know that 15.75GB of device memory will be hotplugged at runtime.
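
(For illustration: 15.75 GiB is 63 x 256 MiB, so 256 MiB blocks cover it
completely, but it is not a multiple of 1 GiB, so with 1 GiB blocks the
trailing 0.75 GiB could not be added.)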

Reviewed-by: Reza Arbab <arbab@linux.ibm.com>
David Hildenbrand June 19, 2023, 10:35 a.m. UTC | #2
On 09.06.23 08:08, Aneesh Kumar K.V wrote:
> Certain devices can possess non-standard memory capacities, not constrained
> to multiples of 1GB. Provide a kernel parameter so that we can map the
> device memory completely on memory hotplug.

So, the unfortunate thing is that these devices would have worked out of 
the box before the memory block size was increased from 256 MiB to 1 GiB 
in these setups. Now, one has to fine-tune the memory block size. The 
only other arch that I know, which supports setting the memory block 
size, is x86 for special (large) UV systems -- and at least in the past 
128 MiB vs. 2 GiB memory blocks made a performance difference during 
boot (maybe no longer today, who knows).


Obviously, fewer tunables and getting stuff simply working out of the box 
is preferable.

Two questions:

1) Isn't there a way to improve auto-detection to fallback to 256 MiB in 
these setups, to avoid specifying these parameters?

2) Is the 256 MiB -> 1 GiB memory block size switch really worth it? On 
x86-64, experiments (with direct map fragmentation) showed that the 
effective performance boost is pretty insignificant, so I wonder how big 
the 1 GiB direct map performance improvement is.


I guess the only real issue with 256 MiB memory blocks and 1 GiB direct 
mapping is memory unplug of boot memory: when unplugging a 256 MiB 
block, one would have to remap the 1 GiB range using 2 MiB ranges.

... I was wondering what would happen if you simply leave the direct 
mapping in this corner case in place instead of doing this remapping. 
IOW, remove the memory but keep the direct map pointing at the removed 
memory. Nobody should be touching it, or are there any cases where that 
could hurt?


Or is there any other reason why we really want 1 GiB memory blocks 
instead of defaulting to 256 MiB the way it used to be?

Thanks!

> 
> Restrict memory_block_size value to a power of 2 value similar to LMB size.
> The memory block size should also be more than the section size.
> 
> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
> ---
>   .../admin-guide/kernel-parameters.txt         |  3 +++
>   arch/powerpc/kernel/setup_64.c                | 23 +++++++++++++++++++
>   arch/powerpc/mm/init_64.c                     | 17 ++++++++++----
>   3 files changed, 38 insertions(+), 5 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 9e5bab29685f..833b8c5b4b4c 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -3190,6 +3190,9 @@
>   			Note that even when enabled, there are a few cases where
>   			the feature is not effective.
>   
> +	memory_block_size=size [PPC]
> +			 Use this parameter to configure the memory block size value.
> +
>   	memtest=	[KNL,X86,ARM,M68K,PPC,RISCV] Enable memtest
>   			Format: <integer>
>   			default : 0 <disable>
> diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
> index 246201d0d879..cbdb924462c7 100644
> --- a/arch/powerpc/kernel/setup_64.c
> +++ b/arch/powerpc/kernel/setup_64.c
> @@ -892,6 +892,29 @@ unsigned long memory_block_size_bytes(void)
>   
>   	return MIN_MEMORY_BLOCK_SIZE;
>   }
> +
> +/*
> + * Restrict to a power of 2 value for memblock which is larger than
> + * section size
> + */
> +static int __init parse_mem_block_size(char *ptr)
> +{
> +	unsigned int order;
> +	unsigned long size = memparse(ptr, NULL);
> +
> +	order = fls64(size);
> +	if (!order)
> +		return 0;
> +
> +	order--;
> +	if (order < SECTION_SIZE_BITS)
> +		return 0;
> +
> +	memory_block_size = 1UL << order;
> +
> +	return 0;
> +}
> +early_param("memory_block_size", parse_mem_block_size);
>   #endif
>   
>   #if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index 97a9163f1280..5e6dde593ea3 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -549,13 +549,20 @@ static int __init probe_memory_block_size(unsigned long node, const char *uname,
>   	return 0;
>   }
>   
> -/*
> - * start with 1G memory block size. Early init will
> - * fix this with correct value.
> - */
> -unsigned long memory_block_size __ro_after_init = 1UL << 30;
> +unsigned long memory_block_size __ro_after_init;
>   static void __init early_init_memory_block_size(void)
>   {
> +	/*
> +	 * if it is set via early param just return.
> +	 */
> +	if (memory_block_size)
> +		return;
> +
> +	/*
> +	 * start with 1G memory block size. update_memory_block_size()
> +	 * will derive the right value based on device tree details.
> +	 */
> +	memory_block_size = 1UL << 30;
>   	/*
>   	 * We need to do memory_block_size probe early so that
>   	 * radix__early_init_mmu() can use this as limit for
Aneesh Kumar K V June 19, 2023, 4:17 p.m. UTC | #3
David Hildenbrand <david@redhat.com> writes:

> On 09.06.23 08:08, Aneesh Kumar K.V wrote:
>> Certain devices can possess non-standard memory capacities, not constrained
>> to multiples of 1GB. Provide a kernel parameter so that we can map the
>> device memory completely on memory hotplug.
>
> So, the unfortunate thing is that these devices would have worked out of 
> the box before the memory block size was increased from 256 MiB to 1 GiB 
> in these setups. Now, one has to fine-tune the memory block size. The 
> only other arch that I know, which supports setting the memory block 
> size, is x86 for special (large) UV systems -- and at least in the past 
> 128 MiB vs. 2 GiB memory blocks made a performance difference during 
> boot (maybe no longer today, who knows).
>
>
> Obviously, less tunable and getting stuff simply working out of the box 
> is preferable.
>
> Two questions:
>
> 1) Isn't there a way to improve auto-detection to fallback to 256 MiB in 
> these setups, to avoid specifying these parameters?

The patch does try to detect as much as possible by looking at device tree
nodes and aperture window size. But there are still cases where we find
a memory aperture of size X GB and the device driver hotplugs X.Y GB of memory.
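
A simplified sketch of why that mismatch matters (the function below is
illustrative, not the in-kernel code, but it mirrors the alignment
constraint the generic hotplug path enforces on added ranges):

/*
 * The hotplugged range must be a multiple of the memory block size,
 * so an X.Y GB range cannot be fully added with a too-large block size.
 */
static int hotplug_range_ok(unsigned long long start,
			    unsigned long long size,
			    unsigned long long block_size)
{
	return size && !(start % block_size) && !(size % block_size);
}

/* 15.75 GiB with a 1 GiB block size   -> 0 (cannot be added as-is) */
/* 15.75 GiB with a 256 MiB block size -> 1 (fully usable)          */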

>
> 2) Is the 256 MiB -> 1 GiB memory block size switch really worth it? On 
> x86-64, experiments (with direct map fragmentation) showed that the 
> effective performance boost is pretty insignificant, so I wonder how big 
> the 1 GiB direct map performance improvement is.


Tarun is running some tests to evaluate the impact. We used to always use a
1GiB mapping. This was later switched to use the memory block size to fix
issues with memory unplug; commit af9d00e93a4f ("powerpc/mm/radix: Create
separate mappings for hot-plugged memory") explains some details related to
that change.


>
>
> I guess the only real issue with 256 MiB memory blocks and 1 GiB direct 
> mapping is memory unplug of boot memory: when unplugging a 256 MiB 
> block, one would have to remap the 1 GiB range using 2 MiB ranges.

>
> ... I was wondering what would happen if you simply leave the direct 
> mapping in this corner case in place instead of doing this remapping. 
> IOW, remove the memory but keep the direct map pointing at the removed 
> memory. Nobody should be touching it, or are there any cases where that 
> could hurt?
>
>
> Or is there any other reason why we really want 1 GiB memory blocks 
> instead of to defaulting to 256 MiB the way it used to be?
>

The idea we are working towards is to keep the memory block size small
but map the boot memory using 1G. An unplug request can split that 1G
mapping later. We could look at the possibility of leaving that mapping
without splitting, but I'm not sure why we would want to do that if we can
split things correctly. Right now there is no splitting support on powerpc.
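
If such splitting is added, conceptually it would look roughly like the
sketch below (the map/unmap helpers are hypothetical and do not exist in
the powerpc code today; this only illustrates the idea):

#define SZ_2M	(2ULL << 20)
#define SZ_1G	(1ULL << 30)

/* hypothetical helpers for the radix linear mapping */
extern void unmap_linear_1g(unsigned long long gb_aligned_addr);
extern void map_linear_2m(unsigned long long addr);

/*
 * Unplugging a 256 MiB block that sits inside a 1 GiB linear-map entry:
 * drop the 1 GiB mapping and re-create everything except the hole with
 * 2 MiB mappings.
 */
static void split_1g_mapping(unsigned long long g_start,
			     unsigned long long hole_start,
			     unsigned long long hole_size)
{
	unsigned long long addr;

	unmap_linear_1g(g_start);
	for (addr = g_start; addr < g_start + SZ_1G; addr += SZ_2M) {
		if (addr >= hole_start && addr < hole_start + hole_size)
			continue;	/* leave the unplugged block unmapped */
		map_linear_2m(addr);
	}
}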

-aneesh
David Hildenbrand June 19, 2023, 4:28 p.m. UTC | #4
On 19.06.23 18:17, Aneesh Kumar K.V wrote:
> David Hildenbrand <david@redhat.com> writes:
> 
>> On 09.06.23 08:08, Aneesh Kumar K.V wrote:
>>> Certain devices can possess non-standard memory capacities, not constrained
>>> to multiples of 1GB. Provide a kernel parameter so that we can map the
>>> device memory completely on memory hotplug.
>>
>> So, the unfortunate thing is that these devices would have worked out of
>> the box before the memory block size was increased from 256 MiB to 1 GiB
>> in these setups. Now, one has to fine-tune the memory block size. The
>> only other arch that I know, which supports setting the memory block
>> size, is x86 for special (large) UV systems -- and at least in the past
>> 128 MiB vs. 2 GiB memory blocks made a performance difference during
>> boot (maybe no longer today, who knows).
>>
>>
>> Obviously, less tunable and getting stuff simply working out of the box
>> is preferable.
>>
>> Two questions:
>>
>> 1) Isn't there a way to improve auto-detection to fallback to 256 MiB in
>> these setups, to avoid specifying these parameters?
> 
> The patch does try to detect as much as possible by looking at device tree
> nodes and aperture window size. But there are still cases where we find
> a memory aperture of size X GB and device driver hotplug X.YGB memory.
> 

Okay, and I assume we can't detect that case easily.

Which interface is that device driver using to hotplug memory? It's 
quite surprising, I have to say ...

>>
>> 2) Is the 256 MiB -> 1 GiB memory block size switch really worth it? On
>> x86-64, experiments (with direct map fragmentation) showed that the
>> effective performance boost is pretty insignificant, so I wonder how big
>> the 1 GiB direct map performance improvement is.
> 
> 
> Tarun is running some tests to evaluate the impact. We used to use 1GiB
> mapping always. This was later switched to use memory block size to fix
> issues with memory unplug
> commit af9d00e93a4f ("powerpc/mm/radix: Create separate mappings for hot-plugged memory")
> explains some details related to that change.
> 

IIUC, that commit (conditionally) increased the memory block size to 
avoid the splitting, correct? In doing so, it broke the device driver use case.

> 
>>
>>
>> I guess the only real issue with 256 MiB memory blocks and 1 GiB direct
>> mapping is memory unplug of boot memory: when unplugging a 256 MiB
>> block, one would have to remap the 1 GiB range using 2 MiB ranges.
> 
>>
>> ... I was wondering what would happen if you simply leave the direct
>> mapping in this corner case in place instead of doing this remapping.
>> IOW, remove the memory but keep the direct map pointing at the removed
>> memory. Nobody should be touching it, or are there any cases where that
>> could hurt?
>>
>>
>> Or is there any other reason why we really want 1 GiB memory blocks
>> instead of to defaulting to 256 MiB the way it used to be?
>>
> 
> The idea we are working towards is to keep the memory block size small

That would be preferable, yes ...

> but map the boot memory using 1G. An unplug request can split that 1G
> mapping later. We could look at the possibility of leaving that mapping
> without splitting. But not sure why we would want to do that if we can
> correctly split things. Right now there is no splitting support in powerpc.

If splitting over-complicates the matter (and it will even consume more 
memory), it might at least be worth looking into leaving the mapping in 
place. Yes, splitting is cleaner.

I think there is also the option to fail memory offlining (and therefore 
unplug) if we have a 1 GiB mapping and don't want to split. For 
hotplugged memory it would always work to unplug again. aarch64 blocks 
any boot memory from getting unplugged.

But I guess that might break existing use cases (unplug boot memory) on 
ppc64 that rely on ZONE_MOVABLE to make it work reliably, right?
It could be optimized, but I'm not sure that's the best approach.
Michael Ellerman June 20, 2023, 12:35 p.m. UTC | #5
David Hildenbrand <david@redhat.com> writes:
> On 09.06.23 08:08, Aneesh Kumar K.V wrote:
>> Certain devices can possess non-standard memory capacities, not constrained
>> to multiples of 1GB. Provide a kernel parameter so that we can map the
>> device memory completely on memory hotplug.
>
> So, the unfortunate thing is that these devices would have worked out of 
> the box before the memory block size was increased from 256 MiB to 1 GiB 
> in these setups. Now, one has to fine-tune the memory block size. The 
> only other arch that I know, which supports setting the memory block 
> size, is x86 for special (large) UV systems -- and at least in the past 
> 128 MiB vs. 2 GiB memory blocks made a performance difference during 
> boot (maybe no longer today, who knows).
>
>
> Obviously, less tunable and getting stuff simply working out of the box 
> is preferable.
>
> Two questions:
>
> 1) Isn't there a way to improve auto-detection to fallback to 256 MiB in 
> these setups, to avoid specifying these parameters?
>
> 2) Is the 256 MiB -> 1 GiB memory block size switch really worth it? On 
> x86-64, experiments (with direct map fragmentation) showed that the 
> effective performance boost is pretty insignificant, so I wonder how big 
> the 1 GiB direct map performance improvement is.

The other issue is simply the number of sysfs entries.

With 64TB of memory and a 256MB block size you end up with ~250,000
directories in /sys/devices/system/memory.
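
(For concreteness: 64 TiB / 256 MiB = 262,144 block directories, versus
65,536 with a 1 GiB block size.)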

cheers
David Hildenbrand June 20, 2023, 12:53 p.m. UTC | #6
On 20.06.23 14:35, Michael Ellerman wrote:
> David Hildenbrand <david@redhat.com> writes:
>> On 09.06.23 08:08, Aneesh Kumar K.V wrote:
>>> Certain devices can possess non-standard memory capacities, not constrained
>>> to multiples of 1GB. Provide a kernel parameter so that we can map the
>>> device memory completely on memory hotplug.
>>
>> So, the unfortunate thing is that these devices would have worked out of
>> the box before the memory block size was increased from 256 MiB to 1 GiB
>> in these setups. Now, one has to fine-tune the memory block size. The
>> only other arch that I know, which supports setting the memory block
>> size, is x86 for special (large) UV systems -- and at least in the past
>> 128 MiB vs. 2 GiB memory blocks made a performance difference during
>> boot (maybe no longer today, who knows).
>>
>>
>> Obviously, less tunable and getting stuff simply working out of the box
>> is preferable.
>>
>> Two questions:
>>
>> 1) Isn't there a way to improve auto-detection to fallback to 256 MiB in
>> these setups, to avoid specifying these parameters?
>>
>> 2) Is the 256 MiB -> 1 GiB memory block size switch really worth it? On
>> x86-64, experiments (with direct map fragmentation) showed that the
>> effective performance boost is pretty insignificant, so I wonder how big
>> the 1 GiB direct map performance improvement is.
> 
> The other issue is simply the number of sysfs entries.
> 
> With 64TB of memory and a 256MB block size you end up with ~250,000
> directories in /sys/devices/system/memory.

Yes, and so far on other archs we only optimize for that on UV x86 
systems (with a default of 2 GiB). And that was added before we started 
to speed up memory device lookups significantly using a radix tree IIRC.

It's worth noting that there was a discussion on:

(a) not creating these device sysfs entries (when configured on the 
cmdline); often, nobody really ends up using them to online/offline 
memory blocks. Then, the only primary user is lsmem.

(b) exposing logical devices (e.g., a DIMM) that can only be 
offlined/removed as a whole, instead of their individual memblocks (when 
configured on the cmdline). But for PPC64 that won't help.


But (a) gets more tricky if device drivers (and things like dax/kmem) 
rely on user-space memory onlining/offlining.

Patch

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 9e5bab29685f..833b8c5b4b4c 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -3190,6 +3190,9 @@ 
 			Note that even when enabled, there are a few cases where
 			the feature is not effective.
 
+	memory_block_size=size [PPC]
+			 Use this parameter to configure the memory block size value.
+
 	memtest=	[KNL,X86,ARM,M68K,PPC,RISCV] Enable memtest
 			Format: <integer>
 			default : 0 <disable>
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index 246201d0d879..cbdb924462c7 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -892,6 +892,29 @@  unsigned long memory_block_size_bytes(void)
 
 	return MIN_MEMORY_BLOCK_SIZE;
 }
+
+/*
+ * Restrict to a power of 2 value for memblock which is larger than
+ * section size
+ */
+static int __init parse_mem_block_size(char *ptr)
+{
+	unsigned int order;
+	unsigned long size = memparse(ptr, NULL);
+
+	order = fls64(size);
+	if (!order)
+		return 0;
+
+	order--;
+	if (order < SECTION_SIZE_BITS)
+		return 0;
+
+	memory_block_size = 1UL << order;
+
+	return 0;
+}
+early_param("memory_block_size", parse_mem_block_size);
 #endif
 
 #if defined(CONFIG_PPC_INDIRECT_PIO) || defined(CONFIG_PPC_INDIRECT_MMIO)
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 97a9163f1280..5e6dde593ea3 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -549,13 +549,20 @@  static int __init probe_memory_block_size(unsigned long node, const char *uname,
 	return 0;
 }
 
-/*
- * start with 1G memory block size. Early init will
- * fix this with correct value.
- */
-unsigned long memory_block_size __ro_after_init = 1UL << 30;
+unsigned long memory_block_size __ro_after_init;
 static void __init early_init_memory_block_size(void)
 {
+	/*
+	 * if it is set via early param just return.
+	 */
+	if (memory_block_size)
+		return;
+
+	/*
+	 * start with 1G memory block size. update_memory_block_size()
+	 * will derive the right value based on device tree details.
+	 */
+	memory_block_size = 1UL << 30;
 	/*
 	 * We need to do memory_block_size probe early so that
 	 * radix__early_init_mmu() can use this as limit for