
[v3,9/9] s390: teach platforms not to zero struct pages memory

Message ID: 20170508113624.GA4876@osiris
State: Not Applicable

Commit Message

Heiko Carstens May 8, 2017, 11:36 a.m. UTC
On Fri, May 05, 2017 at 01:03:16PM -0400, Pavel Tatashin wrote:
> If we are using deferred struct page initialization feature, most of
> "struct page"es are getting initialized after other CPUs are started, and
> hence we are benefiting from doing this job in parallel. However, we are
> still zeroing all the memory that is allocated for "struct pages" using the
> boot CPU.  This patch solves this problem, by deferring zeroing "struct
> pages" to only when they are initialized on s390 platforms.
> 
> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com>
> ---
>  arch/s390/mm/vmem.c |    2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index 9c75214..ffe9ba1 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -252,7 +252,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
>  				void *new_page;
>  
>  				new_page = vmemmap_alloc_block(PMD_SIZE, node,
> -							       true);
> +							       VMEMMAP_ZERO);
>  				if (!new_page)
>  					goto out;
>  				pmd_val(*pm_dir) = __pa(new_page) | sgt_prot;

If you add the hunk below then this is

Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
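
A minimal sketch of the mechanism the commit message above describes; it is illustrative only, not the arch/s390 or mm implementation. It assumes, based on the commit message, that VMEMMAP_ZERO is defined elsewhere in the series so that it requests zeroing only when deferred struct page initialization is not in use; the helper and macro spellings below are stand-ins.

/*
 * Illustrative sketch only.  The allocator zeroes the block just when the
 * caller asks for it, so platforms that defer "struct page" initialization
 * can skip the memset and avoid having the boot CPU touch all of that
 * memory up front.
 */
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/*
 * Assumption based on the commit message: with deferred struct page
 * initialization enabled, VMEMMAP_ZERO asks for no zeroing at allocation
 * time, because the memory is zeroed later, when the struct pages are
 * initialized in parallel.
 */
#ifdef DEFERRED_STRUCT_PAGE_INIT
#define VMEMMAP_ZERO	false
#else
#define VMEMMAP_ZERO	true
#endif

static void *vmemmap_alloc_block_sketch(size_t size, int node, bool zero)
{
	void *block = malloc(size);	/* stand-in for the boot-time allocator */

	(void)node;			/* NUMA placement is ignored in this sketch */
	if (block && zero)
		memset(block, 0, size);	/* pay for zeroing only when requested */
	return block;
}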

Comments

Pavel Tatashin May 15, 2017, 6:24 p.m. UTC | #1
Hi Heiko,

Thank you for looking at this patch. I am hesitant to make the proposed 
change because, as I understand it, in this case we allocate memory not 
for "struct page"s but for the table that holds them. So we would be 
changing the current behavior: today this table is allocated zeroed, but 
with the change it would no longer be zeroed.

Pasha

> 
> If you add the hunk below then this is
> 
> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
> 
> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> index ffe9ba1aec8b..bf88a8b9c24d 100644
> --- a/arch/s390/mm/vmem.c
> +++ b/arch/s390/mm/vmem.c
> @@ -272,7 +272,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
>   		if (pte_none(*pt_dir)) {
>   			void *new_page;
>   
> -			new_page = vmemmap_alloc_block(PAGE_SIZE, node, true);
> +			new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
>   			if (!new_page)
>   				goto out;
>   			pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
>
Heiko Carstens May 15, 2017, 11:17 p.m. UTC | #2
Hello Pasha,

> Thank you for looking at this patch. I am hesitant to make the proposed
> change because, as I understand it, in this case we allocate memory not
> for "struct page"s but for the table that holds them. So we would be
> changing the current behavior: today this table is allocated zeroed, but
> with the change it would no longer be zeroed.

The page table, if needed, is allocated and populated a couple of lines
above. See the vmem_pte_alloc() call. So my request to include the hunk
below is still valid ;)

> >If you add the hunk below then this is
> >
> >Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
> >
> >diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
> >index ffe9ba1aec8b..bf88a8b9c24d 100644
> >--- a/arch/s390/mm/vmem.c
> >+++ b/arch/s390/mm/vmem.c
> >@@ -272,7 +272,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
> >  		if (pte_none(*pt_dir)) {
> >  			void *new_page;
> >-			new_page = vmemmap_alloc_block(PAGE_SIZE, node, true);
> >+			new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
> >  			if (!new_page)
> >  				goto out;
> >  			pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
> >
>
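
A condensed sketch of the code path Heiko points to above (abridged from arch/s390/mm/vmem.c, not a verbatim copy): the page table itself comes from vmem_pte_alloc() and is set up a few lines earlier, whereas the PAGE_SIZE block allocated afterwards only backs "struct page"s, which is why its zeroing can be deferred.

	/* Abridged sketch of the PTE-level path in s390 vmemmap_populate(). */
	if (pmd_none(*pm_dir)) {
		/* The page table itself: allocated and initialized here. */
		pt_dir = vmem_pte_alloc();
		if (!pt_dir)
			goto out;
		pmd_populate(&init_mm, pm_dir, pt_dir);
	}

	pt_dir = pte_offset_kernel(pm_dir, address);
	if (pte_none(*pt_dir)) {
		void *new_page;

		/* Backing memory for "struct page"s only: zeroing can wait. */
		new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
		if (!new_page)
			goto out;
		pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
	}
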
Pavel Tatashin May 16, 2017, 12:33 a.m. UTC | #3
Ah OK, I will include the change.

Thank you,
Pasha

On 05/15/2017 07:17 PM, Heiko Carstens wrote:
> Hello Pasha,
> 
>> Thank you for looking at this patch. I am hesitant to make the proposed
>> change because, as I understand it, in this case we allocate memory not
>> for "struct page"s but for the table that holds them. So we would be
>> changing the current behavior: today this table is allocated zeroed, but
>> with the change it would no longer be zeroed.
> 
> The page table, if needed, is allocated and populated a couple of lines
> above. See the vmem_pte_alloc() call. So my request to include the hunk
> below is still valid ;)
> 
>>> If you add the hunk below then this is
>>>
>>> Acked-by: Heiko Carstens <heiko.carstens@de.ibm.com>
>>>
>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>> index ffe9ba1aec8b..bf88a8b9c24d 100644
>>> --- a/arch/s390/mm/vmem.c
>>> +++ b/arch/s390/mm/vmem.c
>>> @@ -272,7 +272,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
>>>   		if (pte_none(*pt_dir)) {
>>>   			void *new_page;
>>> -			new_page = vmemmap_alloc_block(PAGE_SIZE, node, true);
>>> +			new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
>>>   			if (!new_page)
>>>   				goto out;
>>>   			pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
>>>
>>

Patch

diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index ffe9ba1aec8b..bf88a8b9c24d 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -272,7 +272,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
 		if (pte_none(*pt_dir)) {
 			void *new_page;
 
-			new_page = vmemmap_alloc_block(PAGE_SIZE, node, true);
+			new_page = vmemmap_alloc_block(PAGE_SIZE, node, VMEMMAP_ZERO);
 			if (!new_page)
 				goto out;
 			pte_val(*pt_dir) = __pa(new_page) | pgt_prot;
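
For completeness, a kernel-context sketch of the other half of the trade-off, assumed from the commit message rather than copied from this series: each struct page is zeroed right before its fields are filled in, on whichever CPU performs the (possibly deferred) initialization, instead of all at once on the boot CPU. The function name is a stand-in; the actual hook lives in the generic mm patches of the series, not in this s390 patch.

#include <linux/mm.h>
#include <linux/string.h>

/*
 * Sketch only, assumed from the commit message; not the code added by
 * this patch.  With deferred initialization the memset below runs on
 * the CPUs doing the deferred work, in parallel.
 */
static void init_single_page_sketch(struct page *page, unsigned long pfn,
				    unsigned long zone, int nid)
{
	memset(page, 0, sizeof(*page));		/* the deferred zeroing happens here */
	set_page_links(page, zone, nid, pfn);	/* regular initialization follows */
	init_page_count(page);
	page_mapcount_reset(page);
}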