
[1/4] mm: Move ioremap page table mapping function to mm/

Message ID 20190610043838.27916-1-npiggin@gmail.com (mailing list archive)
State Not Applicable
Series [1/4] mm: Move ioremap page table mapping function to mm/

Checks

Context Check Description
snowpatch_ozlabs/apply_patch success Successfully applied on branch next (a3bf9fbdad600b1e4335dd90979f8d6072e4f602)
snowpatch_ozlabs/checkpatch warning total: 0 errors, 2 warnings, 22 checks, 528 lines checked

Commit Message

Nicholas Piggin June 10, 2019, 4:38 a.m. UTC
ioremap_page_range is a generic function to create a kernel virtual
mapping, move it to mm/vmalloc.c and rename it vmap_range.

For clarity with this move, also:
- Rename vunmap_page_range (vmap_range's inverse) to vunmap_range.
- Rename vmap_page_range (which takes a page array) to vmap_pages.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---

Fixed up the arm64 compile errors, fixed a few bugs, and tidied
things up a bit more.

Have tested powerpc and x86 but not arm64, would appreciate a review
and test of the arm64 patch if possible.

 include/linux/vmalloc.h |   3 +
 lib/ioremap.c           | 173 +++---------------------------
 mm/vmalloc.c            | 228 ++++++++++++++++++++++++++++++++++++----
 3 files changed, 229 insertions(+), 175 deletions(-)

Comments

Anshuman Khandual June 10, 2019, 5:42 a.m. UTC | #1
On 06/10/2019 10:08 AM, Nicholas Piggin wrote:
> ioremap_page_range is a generic function to create a kernel virtual
> mapping, move it to mm/vmalloc.c and rename it vmap_range.

Absolutely. It belongs in mm/vmalloc.c as it's a kernel virtual range.
But what is the rationale for changing the name to vmap_range?
 
> 
> For clarity with this move, also:
> - Rename vunmap_page_range (vmap_range's inverse) to vunmap_range.

Will it be the inverse for both vmap_range() and vmap_page[s]_range()?

> - Rename vmap_page_range (which takes a page array) to vmap_pages.

s/vmap_pages/vmap_pages_range instead here ................^^^^^^

This deviates from the subject of this patch, which suggests it is
related to ioremap only. I believe what this patch intends is to create:

- vunmap_range() takes [VA range]

	This will be the common kernel virtual range tear-down
	function for ranges created either with vmap_range() or
	vmap_pages_range(). Is that correct?

- vmap_range() takes [VA range, PA range, prot]
- vmap_pages_range() takes [VA range, struct pages, prot] 

Can we re-order the arguments (pages <--> prot) for vmap_pages_range()
just to make it consistent with vmap_range()?

static int vmap_pages_range(unsigned long start, unsigned long end,
 			   pgprot_t prot, struct page **pages)
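
For comparison, the vmap_range() prototype added by this patch (copied
from the hunk below) is:

int vmap_range(unsigned long addr,
	       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
	       unsigned int max_page_shift);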
Nicholas Piggin June 10, 2019, 6:21 a.m. UTC | #2
Anshuman Khandual wrote on June 10, 2019 at 3:42 pm:
> 
> 
> On 06/10/2019 10:08 AM, Nicholas Piggin wrote:
>> ioremap_page_range is a generic function to create a kernel virtual
>> mapping, move it to mm/vmalloc.c and rename it vmap_range.
> 
> Absolutely. It belongs in mm/vmalloc.c as it's a kernel virtual range.
> But what is the rationale for changing the name to vmap_range?

Well, it doesn't just map IO. It's for arbitrary kernel virtual mappings
(including ioremap). The last patch uses it to map regular cacheable
memory.
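
For instance, with this interface, mapping normal cacheable memory
becomes something like this (a sketch, not the exact hunk from the
last patch):

	vmap_range(addr, addr + size, phys_addr, PAGE_KERNEL, PMD_SHIFT);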

>> For clarity with this move, also:
>> - Rename vunmap_page_range (vmap_range's inverse) to vunmap_range.
> 
> Will it be the inverse for both vmap_range() and vmap_page[s]_range()?

Yes.
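
A minimal sketch of the pairing (placeholder arguments):

	/* either of these creates a kernel virtual mapping ... */
	vmap_range(addr, addr + size, phys_addr, prot, max_page_shift);
	vmap_pages_range(addr, addr + size, prot, pages);

	/* ... and both are torn down by the same call */
	vunmap_range(addr, addr + size);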

> 
>> - Rename vmap_page_range (which takes a page array) to vmap_pages.
> 
> s/vmap_pages/vmap_pages_range instead here ................^^^^^^

Yes.

> This deviates from the subject of this patch, which suggests it is
> related to ioremap only. I believe what this patch intends is to create:
> 
> - vunmap_range() takes [VA range]
> 
> 	This will be the common kernel virtual range tear-down
> 	function for ranges created either with vmap_range() or
> 	vmap_pages_range(). Is that correct?
> - vmap_range() takes [VA range, PA range, prot]
> - vmap_pages_range() takes [VA range, struct pages, prot] 

That's right, although we already have all those functions, so I don't
create anything, only move and rename. I'm happy to change the subject
line if you have a preference.

> Can we re-order the arguments (pages <--> prot) for vmap_pages_range()
> just to make it consistent with vmap_range()?
> 
> static int vmap_pages_range(unsigned long start, unsigned long end,
>  			   pgprot_t prot, struct page **pages)
> 

Sure, makes sense.

Thanks,
Nick
Christophe Leroy June 11, 2019, 5:24 a.m. UTC | #3
On 10/06/2019 at 06:38, Nicholas Piggin wrote:
> ioremap_page_range is a generic function to create a kernel virtual
> mapping, move it to mm/vmalloc.c and rename it vmap_range.
> 
> For clarity with this move, also:
> - Rename vunmap_page_range (vmap_range's inverse) to vunmap_range.
> - Rename vmap_page_range (which takes a page array) to vmap_pages.

Maybe it would be easier to follow the change if the renaming were
done in a separate patch from the move.

> 
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> 
> Fixed up the arm64 compile errors, fixed a few bugs, and tidied
> things up a bit more.
> 
> Have tested powerpc and x86 but not arm64, would appreciate a review
> and test of the arm64 patch if possible.
> 
>   include/linux/vmalloc.h |   3 +
>   lib/ioremap.c           | 173 +++---------------------------
>   mm/vmalloc.c            | 228 ++++++++++++++++++++++++++++++++++++----
>   3 files changed, 229 insertions(+), 175 deletions(-)
> 
> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
> index 51e131245379..812bea5866d6 100644
> --- a/include/linux/vmalloc.h
> +++ b/include/linux/vmalloc.h
> @@ -147,6 +147,9 @@ extern struct vm_struct *find_vm_area(const void *addr);
>   extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
>   			struct page **pages);
>   #ifdef CONFIG_MMU
> +extern int vmap_range(unsigned long addr,
> +		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +		       unsigned int max_page_shift);

Drop extern keyword here.

As checkpatch tells you, 'CHECK:AVOID_EXTERNS: extern prototypes should 
be avoided in .h files'
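
i.e. the prototype works the same without it:

int vmap_range(unsigned long addr,
	       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
	       unsigned int max_page_shift);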

Christophe

>   extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
>   				    pgprot_t prot, struct page **pages);
>   extern void unmap_kernel_range_noflush(unsigned long addr, unsigned long size);
> diff --git a/lib/ioremap.c b/lib/ioremap.c
> index 063213685563..e13946da8ec3 100644
> --- a/lib/ioremap.c
> +++ b/lib/ioremap.c
> @@ -58,165 +58,24 @@ static inline int ioremap_pud_enabled(void) { return 0; }
>   static inline int ioremap_pmd_enabled(void) { return 0; }
>   #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
>   
> -static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
> -		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
> -{
> -	pte_t *pte;
> -	u64 pfn;
> -
> -	pfn = phys_addr >> PAGE_SHIFT;
> -	pte = pte_alloc_kernel(pmd, addr);
> -	if (!pte)
> -		return -ENOMEM;
> -	do {
> -		BUG_ON(!pte_none(*pte));
> -		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
> -		pfn++;
> -	} while (pte++, addr += PAGE_SIZE, addr != end);
> -	return 0;
> -}
> -
> -static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
> -				unsigned long end, phys_addr_t phys_addr,
> -				pgprot_t prot)
> -{
> -	if (!ioremap_pmd_enabled())
> -		return 0;
> -
> -	if ((end - addr) != PMD_SIZE)
> -		return 0;
> -
> -	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
> -		return 0;
> -
> -	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
> -		return 0;
> -
> -	return pmd_set_huge(pmd, phys_addr, prot);
> -}
> -
> -static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
> -		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
> -{
> -	pmd_t *pmd;
> -	unsigned long next;
> -
> -	pmd = pmd_alloc(&init_mm, pud, addr);
> -	if (!pmd)
> -		return -ENOMEM;
> -	do {
> -		next = pmd_addr_end(addr, end);
> -
> -		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot))
> -			continue;
> -
> -		if (ioremap_pte_range(pmd, addr, next, phys_addr, prot))
> -			return -ENOMEM;
> -	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
> -	return 0;
> -}
> -
> -static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
> -				unsigned long end, phys_addr_t phys_addr,
> -				pgprot_t prot)
> -{
> -	if (!ioremap_pud_enabled())
> -		return 0;
> -
> -	if ((end - addr) != PUD_SIZE)
> -		return 0;
> -
> -	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
> -		return 0;
> -
> -	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
> -		return 0;
> -
> -	return pud_set_huge(pud, phys_addr, prot);
> -}
> -
> -static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
> -		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
> -{
> -	pud_t *pud;
> -	unsigned long next;
> -
> -	pud = pud_alloc(&init_mm, p4d, addr);
> -	if (!pud)
> -		return -ENOMEM;
> -	do {
> -		next = pud_addr_end(addr, end);
> -
> -		if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot))
> -			continue;
> -
> -		if (ioremap_pmd_range(pud, addr, next, phys_addr, prot))
> -			return -ENOMEM;
> -	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
> -	return 0;
> -}
> -
> -static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
> -				unsigned long end, phys_addr_t phys_addr,
> -				pgprot_t prot)
> -{
> -	if (!ioremap_p4d_enabled())
> -		return 0;
> -
> -	if ((end - addr) != P4D_SIZE)
> -		return 0;
> -
> -	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
> -		return 0;
> -
> -	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
> -		return 0;
> -
> -	return p4d_set_huge(p4d, phys_addr, prot);
> -}
> -
> -static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
> -		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
> -{
> -	p4d_t *p4d;
> -	unsigned long next;
> -
> -	p4d = p4d_alloc(&init_mm, pgd, addr);
> -	if (!p4d)
> -		return -ENOMEM;
> -	do {
> -		next = p4d_addr_end(addr, end);
> -
> -		if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot))
> -			continue;
> -
> -		if (ioremap_pud_range(p4d, addr, next, phys_addr, prot))
> -			return -ENOMEM;
> -	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
> -	return 0;
> -}
> -
>   int ioremap_page_range(unsigned long addr,
>   		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
>   {
> -	pgd_t *pgd;
> -	unsigned long start;
> -	unsigned long next;
> -	int err;
> -
> -	might_sleep();
> -	BUG_ON(addr >= end);
> -
> -	start = addr;
> -	pgd = pgd_offset_k(addr);
> -	do {
> -		next = pgd_addr_end(addr, end);
> -		err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot);
> -		if (err)
> -			break;
> -	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
> -
> -	flush_cache_vmap(start, end);
> +	unsigned int max_page_shift = PAGE_SHIFT;
> +
> +	/*
> +	 * Due to the max_page_shift parameter to vmap_range, platforms must
> +	 * enable all smaller sizes to take advantage of a given size,
> +	 * otherwise fall back to small pages.
> +	 */
> +	if (ioremap_pmd_enabled()) {
> +		max_page_shift = PMD_SHIFT;
> +		if (ioremap_pud_enabled()) {
> +			max_page_shift = PUD_SHIFT;
> +			if (ioremap_p4d_enabled())
> +				max_page_shift = P4D_SHIFT;
> +		}
> +	}
>   
> -	return err;
> +	return vmap_range(addr, end, phys_addr, prot, max_page_shift);
>   }
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 233af6936c93..dd27cfb29b10 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -119,7 +119,7 @@ static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end)
>   	} while (p4d++, addr = next, addr != end);
>   }
>   
> -static void vunmap_page_range(unsigned long addr, unsigned long end)
> +static void vunmap_range(unsigned long addr, unsigned long end)
>   {
>   	pgd_t *pgd;
>   	unsigned long next;
> @@ -135,6 +135,198 @@ static void vunmap_page_range(unsigned long addr, unsigned long end)
>   }
>   
>   static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
> +{
> +	pte_t *pte;
> +	u64 pfn;
> +
> +	pfn = phys_addr >> PAGE_SHIFT;
> +	pte = pte_alloc_kernel(pmd, addr);
> +	if (!pte)
> +		return -ENOMEM;
> +	do {
> +		BUG_ON(!pte_none(*pte));
> +		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
> +		pfn++;
> +	} while (pte++, addr += PAGE_SIZE, addr != end);
> +	return 0;
> +}
> +
> +static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return 0;
> +
> +	if (max_page_shift < PMD_SHIFT)
> +		return 0;
> +
> +	if ((end - addr) != PMD_SIZE)
> +		return 0;
> +
> +	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
> +		return 0;
> +
> +	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
> +		return 0;
> +
> +	return pmd_set_huge(pmd, phys_addr, prot);
> +}
> +
> +static inline int vmap_pmd_range(pud_t *pud, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	pmd_t *pmd;
> +	unsigned long next;
> +
> +	pmd = pmd_alloc(&init_mm, pud, addr);
> +	if (!pmd)
> +		return -ENOMEM;
> +	do {
> +		next = pmd_addr_end(addr, end);
> +
> +		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot,
> +					max_page_shift))
> +			continue;
> +
> +		if (vmap_pte_range(pmd, addr, next, phys_addr, prot))
> +			return -ENOMEM;
> +	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
> +	return 0;
> +}
> +
> +static int vmap_try_huge_pud(pud_t *pud, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return 0;
> +
> +	if (max_page_shift < PUD_SHIFT)
> +		return 0;
> +
> +	if ((end - addr) != PUD_SIZE)
> +		return 0;
> +
> +	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
> +		return 0;
> +
> +	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
> +		return 0;
> +
> +	return pud_set_huge(pud, phys_addr, prot);
> +}
> +
> +static inline int vmap_pud_range(p4d_t *p4d, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	pud_t *pud;
> +	unsigned long next;
> +
> +	pud = pud_alloc(&init_mm, p4d, addr);
> +	if (!pud)
> +		return -ENOMEM;
> +	do {
> +		next = pud_addr_end(addr, end);
> +
> +		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot,
> +					max_page_shift))
> +			continue;
> +
> +		if (vmap_pmd_range(pud, addr, next, phys_addr, prot,
> +					max_page_shift))
> +			return -ENOMEM;
> +	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
> +	return 0;
> +}
> +
> +static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
> +		return 0;
> +
> +	if (max_page_shift < P4D_SHIFT)
> +		return 0;
> +
> +	if ((end - addr) != P4D_SIZE)
> +		return 0;
> +
> +	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
> +		return 0;
> +
> +	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
> +		return 0;
> +
> +	return p4d_set_huge(p4d, phys_addr, prot);
> +}
> +
> +static inline int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	p4d_t *p4d;
> +	unsigned long next;
> +
> +	p4d = p4d_alloc(&init_mm, pgd, addr);
> +	if (!p4d)
> +		return -ENOMEM;
> +	do {
> +		next = p4d_addr_end(addr, end);
> +
> +		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot,
> +					max_page_shift))
> +			continue;
> +
> +		if (vmap_pud_range(p4d, addr, next, phys_addr, prot,
> +					max_page_shift))
> +			return -ENOMEM;
> +	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
> +	return 0;
> +}
> +
> +static int vmap_range_noflush(unsigned long addr,
> +			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +			unsigned int max_page_shift)
> +{
> +	pgd_t *pgd;
> +	unsigned long start;
> +	unsigned long next;
> +	int err;
> +
> +	might_sleep();
> +	BUG_ON(addr >= end);
> +
> +	start = addr;
> +	pgd = pgd_offset_k(addr);
> +	do {
> +		next = pgd_addr_end(addr, end);
> +		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot,
> +					max_page_shift);
> +		if (err)
> +			break;
> +	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
> +
> +	return err;
> +}
> +
> +int vmap_range(unsigned long addr,
> +		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
> +		       unsigned int max_page_shift)
> +{
> +	int ret;
> +
> +	ret = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift);
> +	flush_cache_vmap(addr, end);
> +
> +	return ret;
> +}
> +
> +static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
>   		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
>   {
>   	pte_t *pte;
> @@ -160,7 +352,7 @@ static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
>   	return 0;
>   }
>   
> -static int vmap_pmd_range(pud_t *pud, unsigned long addr,
> +static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
>   		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
>   {
>   	pmd_t *pmd;
> @@ -171,13 +363,13 @@ static int vmap_pmd_range(pud_t *pud, unsigned long addr,
>   		return -ENOMEM;
>   	do {
>   		next = pmd_addr_end(addr, end);
> -		if (vmap_pte_range(pmd, addr, next, prot, pages, nr))
> +		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr))
>   			return -ENOMEM;
>   	} while (pmd++, addr = next, addr != end);
>   	return 0;
>   }
>   
> -static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
> +static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
>   		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
>   {
>   	pud_t *pud;
> @@ -188,13 +380,13 @@ static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
>   		return -ENOMEM;
>   	do {
>   		next = pud_addr_end(addr, end);
> -		if (vmap_pmd_range(pud, addr, next, prot, pages, nr))
> +		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr))
>   			return -ENOMEM;
>   	} while (pud++, addr = next, addr != end);
>   	return 0;
>   }
>   
> -static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
> +static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
>   		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
>   {
>   	p4d_t *p4d;
> @@ -205,7 +397,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
>   		return -ENOMEM;
>   	do {
>   		next = p4d_addr_end(addr, end);
> -		if (vmap_pud_range(p4d, addr, next, prot, pages, nr))
> +		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr))
>   			return -ENOMEM;
>   	} while (p4d++, addr = next, addr != end);
>   	return 0;
> @@ -217,7 +409,7 @@ static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
>    *
>    * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
>    */
> -static int vmap_page_range_noflush(unsigned long start, unsigned long end,
> +static int vmap_pages_range_noflush(unsigned long start, unsigned long end,
>   				   pgprot_t prot, struct page **pages)
>   {
>   	pgd_t *pgd;
> @@ -230,7 +422,7 @@ static int vmap_page_range_noflush(unsigned long start, unsigned long end,
>   	pgd = pgd_offset_k(addr);
>   	do {
>   		next = pgd_addr_end(addr, end);
> -		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr);
> +		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr);
>   		if (err)
>   			return err;
>   	} while (pgd++, addr = next, addr != end);
> @@ -238,12 +430,12 @@ static int vmap_page_range_noflush(unsigned long start, unsigned long end,
>   	return nr;
>   }
>   
> -static int vmap_page_range(unsigned long start, unsigned long end,
> +static int vmap_pages_range(unsigned long start, unsigned long end,
>   			   pgprot_t prot, struct page **pages)
>   {
>   	int ret;
>   
> -	ret = vmap_page_range_noflush(start, end, prot, pages);
> +	ret = vmap_pages_range_noflush(start, end, prot, pages);
>   	flush_cache_vmap(start, end);
>   	return ret;
>   }
> @@ -1148,7 +1340,7 @@ static void free_vmap_area(struct vmap_area *va)
>    */
>   static void unmap_vmap_area(struct vmap_area *va)
>   {
> -	vunmap_page_range(va->va_start, va->va_end);
> +	vunmap_range(va->va_start, va->va_end);
>   }
>   
>   /*
> @@ -1586,7 +1778,7 @@ static void vb_free(const void *addr, unsigned long size)
>   	rcu_read_unlock();
>   	BUG_ON(!vb);
>   
> -	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
> +	vunmap_range((unsigned long)addr, (unsigned long)addr + size);
>   
>   	if (debug_pagealloc_enabled())
>   		flush_tlb_kernel_range((unsigned long)addr,
> @@ -1736,7 +1928,7 @@ void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro
>   		addr = va->va_start;
>   		mem = (void *)addr;
>   	}
> -	if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
> +	if (vmap_pages_range(addr, addr + size, prot, pages) < 0) {
>   		vm_unmap_ram(mem, count);
>   		return NULL;
>   	}
> @@ -1903,7 +2095,7 @@ void __init vmalloc_init(void)
>   int map_kernel_range_noflush(unsigned long addr, unsigned long size,
>   			     pgprot_t prot, struct page **pages)
>   {
> -	return vmap_page_range_noflush(addr, addr + size, prot, pages);
> +	return vmap_pages_range_noflush(addr, addr + size, prot, pages);
>   }
>   
>   /**
> @@ -1922,7 +2114,7 @@ int map_kernel_range_noflush(unsigned long addr, unsigned long size,
>    */
>   void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
>   {
> -	vunmap_page_range(addr, addr + size);
> +	vunmap_range(addr, addr + size);
>   }
>   EXPORT_SYMBOL_GPL(unmap_kernel_range_noflush);
>   
> @@ -1939,7 +2131,7 @@ void unmap_kernel_range(unsigned long addr, unsigned long size)
>   	unsigned long end = addr + size;
>   
>   	flush_cache_vunmap(addr, end);
> -	vunmap_page_range(addr, end);
> +	vunmap_range(addr, end);
>   	flush_tlb_kernel_range(addr, end);
>   }
>   EXPORT_SYMBOL_GPL(unmap_kernel_range);
> @@ -1950,7 +2142,7 @@ int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
>   	unsigned long end = addr + get_vm_area_size(area);
>   	int err;
>   
> -	err = vmap_page_range(addr, end, prot, pages);
> +	err = vmap_pages_range(addr, end, prot, pages);
>   
>   	return err > 0 ? 0 : err;
>   }
>
Nicholas Piggin June 19, 2019, 3:43 a.m. UTC | #4
Christophe Leroy wrote on June 11, 2019 at 3:24 pm:
> 
> 
> On 10/06/2019 at 06:38, Nicholas Piggin wrote:
>> ioremap_page_range is a generic function to create a kernel virtual
>> mapping, move it to mm/vmalloc.c and rename it vmap_range.
>> 
>> For clarity with this move, also:
>> - Rename vunmap_page_range (vmap_range's inverse) to vunmap_range.
>> - Rename vmap_page_range (which takes a page array) to vmap_pages.
> 
> Maybe it would be easier to follow the change if the renaming were
> done in a separate patch from the move.

I could do that.

>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>> ---
>> 
>> Fixed up the arm64 compile errors, fixed a few bugs, and tidied
>> things up a bit more.
>> 
>> Have tested powerpc and x86 but not arm64, would appreciate a review
>> and test of the arm64 patch if possible.
>> 
>>   include/linux/vmalloc.h |   3 +
>>   lib/ioremap.c           | 173 +++---------------------------
>>   mm/vmalloc.c            | 228 ++++++++++++++++++++++++++++++++++++----
>>   3 files changed, 229 insertions(+), 175 deletions(-)
>> 
>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>> index 51e131245379..812bea5866d6 100644
>> --- a/include/linux/vmalloc.h
>> +++ b/include/linux/vmalloc.h
>> @@ -147,6 +147,9 @@ extern struct vm_struct *find_vm_area(const void *addr);
>>   extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
>>   			struct page **pages);
>>   #ifdef CONFIG_MMU
>> +extern int vmap_range(unsigned long addr,
>> +		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
>> +		       unsigned int max_page_shift);
> 
> Drop extern keyword here.

I don't know if I was going crazy, but at one point I was getting
duplicate symbol errors that were fixed by adding extern somewhere.
Maybe sleep deprivation. However...

> As checkpatch tells you, 'CHECK:AVOID_EXTERNS: extern prototypes should 
> be avoided in .h files'

I prefer to follow existing style in surrounding code at the expense
of some checkpatch warnings. If somebody later wants to "fix" it,
that's fine.

Thanks,
Nick
Christophe Leroy June 19, 2019, 1:18 p.m. UTC | #5
On 19/06/2019 at 05:43, Nicholas Piggin wrote:
> Christophe Leroy wrote on June 11, 2019 at 3:24 pm:
>>
>>
>> On 10/06/2019 at 06:38, Nicholas Piggin wrote:

[snip]

>>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>>> index 51e131245379..812bea5866d6 100644
>>> --- a/include/linux/vmalloc.h
>>> +++ b/include/linux/vmalloc.h
>>> @@ -147,6 +147,9 @@ extern struct vm_struct *find_vm_area(const void *addr);
>>>    extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
>>>    			struct page **pages);
>>>    #ifdef CONFIG_MMU
>>> +extern int vmap_range(unsigned long addr,
>>> +		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
>>> +		       unsigned int max_page_shift);
>>
>> Drop extern keyword here.
> 
> I don't know if I was going crazy, but at one point I was getting
> duplicate symbol errors that were fixed by adding extern somewhere.

probably not on a function name ...

> Maybe sleep deprivation. However...
> 
>> As checkpatch tells you, 'CHECK:AVOID_EXTERNS: extern prototypes should
>> be avoided in .h files'
> 
> I prefer to follow existing style in surrounding code at the expense
> of some checkpatch warnings. If somebody later wants to "fix" it,
> that's fine.

I don't think it's fine to 'fix' later things that could have been done
right from the beginning. 'Cosmetic only' fixes never happen, because
they are a nightmare for backports and a shame for 'git blame'.

In some patches you add cleanups to make the code look nicer, and here
you have the opportunity to make the code nice from the beginning, yet
you prefer repeating the errors of the past? You're surprising me.

Christophe

> 
> Thanks,
> Nick
>
Nicholas Piggin June 22, 2019, 9:42 a.m. UTC | #6
Christophe Leroy wrote on June 19, 2019 at 11:18 pm:
> 
> 
> On 19/06/2019 at 05:43, Nicholas Piggin wrote:
>> Christophe Leroy wrote on June 11, 2019 at 3:24 pm:
>>>
>>>
>>> On 10/06/2019 at 06:38, Nicholas Piggin wrote:
> 
> [snip]
> 
>>>> diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
>>>> index 51e131245379..812bea5866d6 100644
>>>> --- a/include/linux/vmalloc.h
>>>> +++ b/include/linux/vmalloc.h
>>>> @@ -147,6 +147,9 @@ extern struct vm_struct *find_vm_area(const void *addr);
>>>>    extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
>>>>    			struct page **pages);
>>>>    #ifdef CONFIG_MMU
>>>> +extern int vmap_range(unsigned long addr,
>>>> +		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
>>>> +		       unsigned int max_page_shift);
>>>
>>> Drop extern keyword here.
>> 
>> I don't know if I was going crazy, but at one point I was getting
>> duplicate symbol errors that were fixed by adding extern somewhere.
> 
> probably not on a function name ...

I know it sounds crazy :P
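
(For what it's worth, the symptom is real for object-like symbols: a
variable defined in a header without extern gives every including .c
file its own definition, which can fail the link with duplicate symbol
errors. A minimal illustration, nothing from this patch:

	int foo;		/* definition -- one copy per includer */
	extern int foo;		/* declaration only -- safe in a header */

Function prototypes don't have that problem, with or without extern.)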

>>> As checkpatch tells you, 'CHECK:AVOID_EXTERNS: extern prototypes should
>>> be avoided in .h files'
>> 
>> I prefer to follow existing style in surrounding code at the expense
>> of some checkpatch warnings. If somebody later wants to "fix" it,
>> that's fine.
> 
> I don't think it's fine to 'fix' later things that could have been done
> right from the beginning. 'Cosmetic only' fixes never happen, because
> they are a nightmare for backports and a shame for 'git blame'.
> 
> In some patches you add cleanups to make the code look nicer, and here
> you have the opportunity to make the code nice from the beginning, yet
> you prefer repeating the errors of the past? You're surprising me.

Well, I never claimed to be consistent. I actually don't mind the
extern keyword, so it's probably just my personal preference that
makes me notice something nearby. I have dropped those "cleanup"
changes though, so there.

Thanks,
Nick

Patch

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 51e131245379..812bea5866d6 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -147,6 +147,9 @@  extern struct vm_struct *find_vm_area(const void *addr);
 extern int map_vm_area(struct vm_struct *area, pgprot_t prot,
 			struct page **pages);
 #ifdef CONFIG_MMU
+extern int vmap_range(unsigned long addr,
+		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		       unsigned int max_page_shift);
 extern int map_kernel_range_noflush(unsigned long start, unsigned long size,
 				    pgprot_t prot, struct page **pages);
 extern void unmap_kernel_range_noflush(unsigned long addr, unsigned long size);
diff --git a/lib/ioremap.c b/lib/ioremap.c
index 063213685563..e13946da8ec3 100644
--- a/lib/ioremap.c
+++ b/lib/ioremap.c
@@ -58,165 +58,24 @@  static inline int ioremap_pud_enabled(void) { return 0; }
 static inline int ioremap_pmd_enabled(void) { return 0; }
 #endif	/* CONFIG_HAVE_ARCH_HUGE_VMAP */
 
-static int ioremap_pte_range(pmd_t *pmd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
-{
-	pte_t *pte;
-	u64 pfn;
-
-	pfn = phys_addr >> PAGE_SHIFT;
-	pte = pte_alloc_kernel(pmd, addr);
-	if (!pte)
-		return -ENOMEM;
-	do {
-		BUG_ON(!pte_none(*pte));
-		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
-		pfn++;
-	} while (pte++, addr += PAGE_SIZE, addr != end);
-	return 0;
-}
-
-static int ioremap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
-{
-	if (!ioremap_pmd_enabled())
-		return 0;
-
-	if ((end - addr) != PMD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
-		return 0;
-
-	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
-		return 0;
-
-	return pmd_set_huge(pmd, phys_addr, prot);
-}
-
-static inline int ioremap_pmd_range(pud_t *pud, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
-{
-	pmd_t *pmd;
-	unsigned long next;
-
-	pmd = pmd_alloc(&init_mm, pud, addr);
-	if (!pmd)
-		return -ENOMEM;
-	do {
-		next = pmd_addr_end(addr, end);
-
-		if (ioremap_try_huge_pmd(pmd, addr, next, phys_addr, prot))
-			continue;
-
-		if (ioremap_pte_range(pmd, addr, next, phys_addr, prot))
-			return -ENOMEM;
-	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int ioremap_try_huge_pud(pud_t *pud, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
-{
-	if (!ioremap_pud_enabled())
-		return 0;
-
-	if ((end - addr) != PUD_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
-		return 0;
-
-	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
-		return 0;
-
-	return pud_set_huge(pud, phys_addr, prot);
-}
-
-static inline int ioremap_pud_range(p4d_t *p4d, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
-{
-	pud_t *pud;
-	unsigned long next;
-
-	pud = pud_alloc(&init_mm, p4d, addr);
-	if (!pud)
-		return -ENOMEM;
-	do {
-		next = pud_addr_end(addr, end);
-
-		if (ioremap_try_huge_pud(pud, addr, next, phys_addr, prot))
-			continue;
-
-		if (ioremap_pmd_range(pud, addr, next, phys_addr, prot))
-			return -ENOMEM;
-	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
-static int ioremap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
-				unsigned long end, phys_addr_t phys_addr,
-				pgprot_t prot)
-{
-	if (!ioremap_p4d_enabled())
-		return 0;
-
-	if ((end - addr) != P4D_SIZE)
-		return 0;
-
-	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
-		return 0;
-
-	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
-		return 0;
-
-	return p4d_set_huge(p4d, phys_addr, prot);
-}
-
-static inline int ioremap_p4d_range(pgd_t *pgd, unsigned long addr,
-		unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
-{
-	p4d_t *p4d;
-	unsigned long next;
-
-	p4d = p4d_alloc(&init_mm, pgd, addr);
-	if (!p4d)
-		return -ENOMEM;
-	do {
-		next = p4d_addr_end(addr, end);
-
-		if (ioremap_try_huge_p4d(p4d, addr, next, phys_addr, prot))
-			continue;
-
-		if (ioremap_pud_range(p4d, addr, next, phys_addr, prot))
-			return -ENOMEM;
-	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
-	return 0;
-}
-
 int ioremap_page_range(unsigned long addr,
 		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
 {
-	pgd_t *pgd;
-	unsigned long start;
-	unsigned long next;
-	int err;
-
-	might_sleep();
-	BUG_ON(addr >= end);
-
-	start = addr;
-	pgd = pgd_offset_k(addr);
-	do {
-		next = pgd_addr_end(addr, end);
-		err = ioremap_p4d_range(pgd, addr, next, phys_addr, prot);
-		if (err)
-			break;
-	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
-
-	flush_cache_vmap(start, end);
+	unsigned int max_page_shift = PAGE_SHIFT;
+
+	/*
+	 * Due to the max_page_shift parameter to vmap_range, platforms must
+	 * enable all smaller sizes to take advantage of a given size,
+	 * otherwise fall back to small pages.
+	 */
+	if (ioremap_pmd_enabled()) {
+		max_page_shift = PMD_SHIFT;
+		if (ioremap_pud_enabled()) {
+			max_page_shift = PUD_SHIFT;
+			if (ioremap_p4d_enabled())
+				max_page_shift = P4D_SHIFT;
+		}
+	}
 
-	return err;
+	return vmap_range(addr, end, phys_addr, prot, max_page_shift);
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 233af6936c93..dd27cfb29b10 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -119,7 +119,7 @@  static void vunmap_p4d_range(pgd_t *pgd, unsigned long addr, unsigned long end)
 	} while (p4d++, addr = next, addr != end);
 }
 
-static void vunmap_page_range(unsigned long addr, unsigned long end)
+static void vunmap_range(unsigned long addr, unsigned long end)
 {
 	pgd_t *pgd;
 	unsigned long next;
@@ -135,6 +135,198 @@  static void vunmap_page_range(unsigned long addr, unsigned long end)
 }
 
 static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot)
+{
+	pte_t *pte;
+	u64 pfn;
+
+	pfn = phys_addr >> PAGE_SHIFT;
+	pte = pte_alloc_kernel(pmd, addr);
+	if (!pte)
+		return -ENOMEM;
+	do {
+		BUG_ON(!pte_none(*pte));
+		set_pte_at(&init_mm, addr, pte, pfn_pte(pfn, prot));
+		pfn++;
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_pmd(pmd_t *pmd, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
+		return 0;
+
+	if (max_page_shift < PMD_SHIFT)
+		return 0;
+
+	if ((end - addr) != PMD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PMD_SIZE))
+		return 0;
+
+	if (pmd_present(*pmd) && !pmd_free_pte_page(pmd, addr))
+		return 0;
+
+	return pmd_set_huge(pmd, phys_addr, prot);
+}
+
+static inline int vmap_pmd_range(pud_t *pud, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_alloc(&init_mm, pud, addr);
+	if (!pmd)
+		return -ENOMEM;
+	do {
+		next = pmd_addr_end(addr, end);
+
+		if (vmap_try_huge_pmd(pmd, addr, next, phys_addr, prot,
+					max_page_shift))
+			continue;
+
+		if (vmap_pte_range(pmd, addr, next, phys_addr, prot))
+			return -ENOMEM;
+	} while (pmd++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_pud(pud_t *pud, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
+		return 0;
+
+	if (max_page_shift < PUD_SHIFT)
+		return 0;
+
+	if ((end - addr) != PUD_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, PUD_SIZE))
+		return 0;
+
+	if (pud_present(*pud) && !pud_free_pmd_page(pud, addr))
+		return 0;
+
+	return pud_set_huge(pud, phys_addr, prot);
+}
+
+static inline int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_alloc(&init_mm, p4d, addr);
+	if (!pud)
+		return -ENOMEM;
+	do {
+		next = pud_addr_end(addr, end);
+
+		if (vmap_try_huge_pud(pud, addr, next, phys_addr, prot,
+					max_page_shift))
+			continue;
+
+		if (vmap_pmd_range(pud, addr, next, phys_addr, prot,
+					max_page_shift))
+			return -ENOMEM;
+	} while (pud++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_try_huge_p4d(p4d_t *p4d, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	if (!IS_ENABLED(CONFIG_HAVE_ARCH_HUGE_VMAP))
+		return 0;
+
+	if (max_page_shift < P4D_SHIFT)
+		return 0;
+
+	if ((end - addr) != P4D_SIZE)
+		return 0;
+
+	if (!IS_ALIGNED(phys_addr, P4D_SIZE))
+		return 0;
+
+	if (p4d_present(*p4d) && !p4d_free_pud_page(p4d, addr))
+		return 0;
+
+	return p4d_set_huge(p4d, phys_addr, prot);
+}
+
+static inline int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_alloc(&init_mm, pgd, addr);
+	if (!p4d)
+		return -ENOMEM;
+	do {
+		next = p4d_addr_end(addr, end);
+
+		if (vmap_try_huge_p4d(p4d, addr, next, phys_addr, prot,
+					max_page_shift))
+			continue;
+
+		if (vmap_pud_range(p4d, addr, next, phys_addr, prot,
+					max_page_shift))
+			return -ENOMEM;
+	} while (p4d++, phys_addr += (next - addr), addr = next, addr != end);
+	return 0;
+}
+
+static int vmap_range_noflush(unsigned long addr,
+			unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+			unsigned int max_page_shift)
+{
+	pgd_t *pgd;
+	unsigned long start;
+	unsigned long next;
+	int err;
+
+	might_sleep();
+	BUG_ON(addr >= end);
+
+	start = addr;
+	pgd = pgd_offset_k(addr);
+	do {
+		next = pgd_addr_end(addr, end);
+		err = vmap_p4d_range(pgd, addr, next, phys_addr, prot,
+					max_page_shift);
+		if (err)
+			break;
+	} while (pgd++, phys_addr += (next - addr), addr = next, addr != end);
+
+	return err;
+}
+
+int vmap_range(unsigned long addr,
+		       unsigned long end, phys_addr_t phys_addr, pgprot_t prot,
+		       unsigned int max_page_shift)
+{
+	int ret;
+
+	ret = vmap_range_noflush(addr, end, phys_addr, prot, max_page_shift);
+	flush_cache_vmap(addr, end);
+
+	return ret;
+}
+
+static int vmap_pages_pte_range(pmd_t *pmd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
 {
 	pte_t *pte;
@@ -160,7 +352,7 @@  static int vmap_pte_range(pmd_t *pmd, unsigned long addr,
 	return 0;
 }
 
-static int vmap_pmd_range(pud_t *pud, unsigned long addr,
+static int vmap_pages_pmd_range(pud_t *pud, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
 {
 	pmd_t *pmd;
@@ -171,13 +363,13 @@  static int vmap_pmd_range(pud_t *pud, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pmd_addr_end(addr, end);
-		if (vmap_pte_range(pmd, addr, next, prot, pages, nr))
+		if (vmap_pages_pte_range(pmd, addr, next, prot, pages, nr))
 			return -ENOMEM;
 	} while (pmd++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
+static int vmap_pages_pud_range(p4d_t *p4d, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
 {
 	pud_t *pud;
@@ -188,13 +380,13 @@  static int vmap_pud_range(p4d_t *p4d, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = pud_addr_end(addr, end);
-		if (vmap_pmd_range(pud, addr, next, prot, pages, nr))
+		if (vmap_pages_pmd_range(pud, addr, next, prot, pages, nr))
 			return -ENOMEM;
 	} while (pud++, addr = next, addr != end);
 	return 0;
 }
 
-static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
+static int vmap_pages_p4d_range(pgd_t *pgd, unsigned long addr,
 		unsigned long end, pgprot_t prot, struct page **pages, int *nr)
 {
 	p4d_t *p4d;
@@ -205,7 +397,7 @@  static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
 		return -ENOMEM;
 	do {
 		next = p4d_addr_end(addr, end);
-		if (vmap_pud_range(p4d, addr, next, prot, pages, nr))
+		if (vmap_pages_pud_range(p4d, addr, next, prot, pages, nr))
 			return -ENOMEM;
 	} while (p4d++, addr = next, addr != end);
 	return 0;
@@ -217,7 +409,7 @@  static int vmap_p4d_range(pgd_t *pgd, unsigned long addr,
  *
  * Ie. pte at addr+N*PAGE_SIZE shall point to pfn corresponding to pages[N]
  */
-static int vmap_page_range_noflush(unsigned long start, unsigned long end,
+static int vmap_pages_range_noflush(unsigned long start, unsigned long end,
 				   pgprot_t prot, struct page **pages)
 {
 	pgd_t *pgd;
@@ -230,7 +422,7 @@  static int vmap_page_range_noflush(unsigned long start, unsigned long end,
 	pgd = pgd_offset_k(addr);
 	do {
 		next = pgd_addr_end(addr, end);
-		err = vmap_p4d_range(pgd, addr, next, prot, pages, &nr);
+		err = vmap_pages_p4d_range(pgd, addr, next, prot, pages, &nr);
 		if (err)
 			return err;
 	} while (pgd++, addr = next, addr != end);
@@ -238,12 +430,12 @@  static int vmap_page_range_noflush(unsigned long start, unsigned long end,
 	return nr;
 }
 
-static int vmap_page_range(unsigned long start, unsigned long end,
+static int vmap_pages_range(unsigned long start, unsigned long end,
 			   pgprot_t prot, struct page **pages)
 {
 	int ret;
 
-	ret = vmap_page_range_noflush(start, end, prot, pages);
+	ret = vmap_pages_range_noflush(start, end, prot, pages);
 	flush_cache_vmap(start, end);
 	return ret;
 }
@@ -1148,7 +1340,7 @@  static void free_vmap_area(struct vmap_area *va)
  */
 static void unmap_vmap_area(struct vmap_area *va)
 {
-	vunmap_page_range(va->va_start, va->va_end);
+	vunmap_range(va->va_start, va->va_end);
 }
 
 /*
@@ -1586,7 +1778,7 @@  static void vb_free(const void *addr, unsigned long size)
 	rcu_read_unlock();
 	BUG_ON(!vb);
 
-	vunmap_page_range((unsigned long)addr, (unsigned long)addr + size);
+	vunmap_range((unsigned long)addr, (unsigned long)addr + size);
 
 	if (debug_pagealloc_enabled())
 		flush_tlb_kernel_range((unsigned long)addr,
@@ -1736,7 +1928,7 @@  void *vm_map_ram(struct page **pages, unsigned int count, int node, pgprot_t pro
 		addr = va->va_start;
 		mem = (void *)addr;
 	}
-	if (vmap_page_range(addr, addr + size, prot, pages) < 0) {
+	if (vmap_pages_range(addr, addr + size, prot, pages) < 0) {
 		vm_unmap_ram(mem, count);
 		return NULL;
 	}
@@ -1903,7 +2095,7 @@  void __init vmalloc_init(void)
 int map_kernel_range_noflush(unsigned long addr, unsigned long size,
 			     pgprot_t prot, struct page **pages)
 {
-	return vmap_page_range_noflush(addr, addr + size, prot, pages);
+	return vmap_pages_range_noflush(addr, addr + size, prot, pages);
 }
 
 /**
@@ -1922,7 +2114,7 @@  int map_kernel_range_noflush(unsigned long addr, unsigned long size,
  */
 void unmap_kernel_range_noflush(unsigned long addr, unsigned long size)
 {
-	vunmap_page_range(addr, addr + size);
+	vunmap_range(addr, addr + size);
 }
 EXPORT_SYMBOL_GPL(unmap_kernel_range_noflush);
 
@@ -1939,7 +2131,7 @@  void unmap_kernel_range(unsigned long addr, unsigned long size)
 	unsigned long end = addr + size;
 
 	flush_cache_vunmap(addr, end);
-	vunmap_page_range(addr, end);
+	vunmap_range(addr, end);
 	flush_tlb_kernel_range(addr, end);
 }
 EXPORT_SYMBOL_GPL(unmap_kernel_range);
@@ -1950,7 +2142,7 @@  int map_vm_area(struct vm_struct *area, pgprot_t prot, struct page **pages)
 	unsigned long end = addr + get_vm_area_size(area);
 	int err;
 
-	err = vmap_page_range(addr, end, prot, pages);
+	err = vmap_pages_range(addr, end, prot, pages);
 
 	return err > 0 ? 0 : err;
 }