
[RFC,1/4] powerpc/mm: Rename variable to reflect start address of a section

Message ID 1455711179-20357-1-git-send-email-khandual@linux.vnet.ibm.com (mailing list archive)
State RFC

Commit Message

Anshuman Khandual Feb. 17, 2016, 12:12 p.m. UTC
Commit 16a05bff1 ("powerpc: start loop at section start of
start in vmemmap_populated()") reused the 'start' variable to
compute the starting address of the memory section containing
the given address. The same variable is then used to iterate
over the starting addresses of all memory sections up to the
'end' address. Renaming it to 'section_start' makes the logic clearer.

Fixes: 16a05bff1 ("powerpc: start loop at section start of start in vmemmap_populated()")
Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
---
 arch/powerpc/mm/init_64.c | 12 ++++++++----
 1 file changed, 8 insertions(+), 4 deletions(-)

Comments

Michael Ellerman Feb. 18, 2016, 2:34 p.m. UTC | #1
On Wed, 2016-02-17 at 17:42 +0530, Anshuman Khandual wrote:

> Commit 16a05bff1 ("powerpc: start loop at section start of
> start in vmemmap_populated()") reused the 'start' variable to
> compute the starting address of the memory section containing
> the given address. The same variable is then used to iterate
> over the starting addresses of all memory sections up to the
> 'end' address. Renaming it to 'section_start' makes the logic clearer.
> 
> Fixes: 16a05bff1 ("powerpc: start loop at section start of start in vmemmap_populated()")

It's not a fix, just a cleanup. Fixes lines should be reserved for actual bug
fixes.

> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/init_64.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
> index 379a6a9..d6b9b4d 100644
> --- a/arch/powerpc/mm/init_64.c
> +++ b/arch/powerpc/mm/init_64.c
> @@ -170,11 +170,15 @@ static unsigned long __meminit vmemmap_section_start(unsigned long page)
>   */
>  static int __meminit vmemmap_populated(unsigned long start, int page_size)
>  {
> -	unsigned long end = start + page_size;
> -	start = (unsigned long)(pfn_to_page(vmemmap_section_start(start)));
> +	unsigned long end, section_start;
>  
> -	for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page)))
> -		if (pfn_valid(page_to_pfn((struct page *)start)))
> +	end = start + page_size;
> +	section_start = (unsigned long)(pfn_to_page
> +					(vmemmap_section_start(start)));
> +
> +	for (; section_start < end; section_start
> +				+= (PAGES_PER_SECTION * sizeof(struct page)))
> +		if (pfn_valid(page_to_pfn((struct page *)section_start)))
>  			return 1;
>  
>  	return 0;

That's not a big improvement.

But I think this code could be improved. There's a lot of casts, it seems to be
confused about whether it's iterating over addresses or struct pages.

cheers
Anshuman Khandual Feb. 19, 2016, 4:50 a.m. UTC | #2
On 02/18/2016 08:04 PM, Michael Ellerman wrote:
> On Wed, 2016-02-17 at 17:42 +0530, Anshuman Khandual wrote:
> 
>> Commit 16a05bff1 ("powerpc: start loop at section start of
>> start in vmemmap_populated()") reused the 'start' variable to
>> compute the starting address of the memory section containing
>> the given address. The same variable is then used to iterate
>> over the starting addresses of all memory sections up to the
>> 'end' address. Renaming it to 'section_start' makes the logic clearer.
>>
>> Fixes: 16a05bff1 ("powerpc: start loop at section start of start in vmemmap_populated()")
> 
> It's not a fix, just a cleanup. Fixes lines should be reserved for actual bug
> fixes.

Sure, got it.

> 
>> Signed-off-by: Anshuman Khandual <khandual@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/mm/init_64.c | 12 ++++++++----
>>  1 file changed, 8 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
>> index 379a6a9..d6b9b4d 100644
>> --- a/arch/powerpc/mm/init_64.c
>> +++ b/arch/powerpc/mm/init_64.c
>> @@ -170,11 +170,15 @@ static unsigned long __meminit vmemmap_section_start(unsigned long page)
>>   */
>>  static int __meminit vmemmap_populated(unsigned long start, int page_size)
>>  {
>> -	unsigned long end = start + page_size;
>> -	start = (unsigned long)(pfn_to_page(vmemmap_section_start(start)));
>> +	unsigned long end, section_start;
>>  
>> -	for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page)))
>> -		if (pfn_valid(page_to_pfn((struct page *)start)))
>> +	end = start + page_size;
>> +	section_start = (unsigned long)(pfn_to_page
>> +					(vmemmap_section_start(start)));
>> +
>> +	for (; section_start < end; section_start
>> +				+= (PAGES_PER_SECTION * sizeof(struct page)))
>> +		if (pfn_valid(page_to_pfn((struct page *)section_start)))
>>  			return 1;
>>  
>>  	return 0;
> 
> That's not a big improvement.
> 
> But I think this code could be improved. There's a lot of casts, it seems to be
> confused about whether it's iterating over addresses or struct pages.

Right, this patch just tries to clear up that confusion. That's all.

Patch

diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 379a6a9..d6b9b4d 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -170,11 +170,15 @@  static unsigned long __meminit vmemmap_section_start(unsigned long page)
  */
 static int __meminit vmemmap_populated(unsigned long start, int page_size)
 {
-	unsigned long end = start + page_size;
-	start = (unsigned long)(pfn_to_page(vmemmap_section_start(start)));
+	unsigned long end, section_start;
 
-	for (; start < end; start += (PAGES_PER_SECTION * sizeof(struct page)))
-		if (pfn_valid(page_to_pfn((struct page *)start)))
+	end = start + page_size;
+	section_start = (unsigned long)(pfn_to_page
+					(vmemmap_section_start(start)));
+
+	for (; section_start < end; section_start
+				+= (PAGES_PER_SECTION * sizeof(struct page)))
+		if (pfn_valid(page_to_pfn((struct page *)section_start)))
 			return 1;
 
 	return 0;