Message ID | 20170801112535.20765-2-bsingharora@gmail.com (mailing list archive) |
---|---|
State | Superseded |
On 01/08/2017 at 13:25, Balbir Singh wrote:
> Add support for set_memory_xx routines. With the STRICT_KERNEL_RWX
> feature support we got support for changing the page permissions
> for pte ranges. This patch adds support for both radix and hash
> so that we can change their permissions via set/clear masks.
>
> A new helper is required for hash (hash__change_memory_range()
> is changed to hash__change_boot_memory_range() as it deals with
> bolted PTEs).
>
> hash__change_memory_range() works with vmalloc'ed PAGE_SIZE requests
> for permission changes. hash__change_memory_range() does not invoke
> updatepp; instead it changes the software PTE and invalidates the PTE.
>
> For radix, radix__change_memory_range() is set up to do the right
> thing for vmalloc'd addresses. It takes a new parameter to decide
> what attributes to set.
>
> Signed-off-by: Balbir Singh <bsingharora@gmail.com>
> ---
>  arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
>  arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
>  arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
>  arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
>  arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
>  arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
>  6 files changed, 175 insertions(+), 16 deletions(-)
>  create mode 100644 arch/powerpc/include/asm/set_memory.h
>
> [...]
>
> diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> index 0736e94..3ee4c7d 100644
> --- a/arch/powerpc/mm/pgtable_64.c
> +++ b/arch/powerpc/mm/pgtable_64.c
> @@ -514,3 +514,71 @@ void mark_initmem_nx(void)
>  		hash__mark_initmem_nx();
>  }
>  #endif
> +
> +#ifdef CONFIG_ARCH_HAS_SET_MEMORY
> +/*
> + * Some of these bits are taken from arm64/mm/page_attr.c
> + */
> +static int change_memory_common(unsigned long addr, int numpages,
> +				unsigned long set, unsigned long clear)
> +{
> +	unsigned long start = addr;
> +	unsigned long size = PAGE_SIZE*numpages;
> +	unsigned long end = start + size;
> +	struct vm_struct *area;
> +
> +	if (!PAGE_ALIGNED(addr)) {
> +		start &= PAGE_MASK;
> +		end = start + size;
> +		WARN_ON_ONCE(1);
> +	}

Why not just set start = addr & PAGE_MASK, then do
WARN_ON_ONCE(start != addr), instead of that if ()?

> +
> +	/*
> +	 * So check whether the [addr, addr + size) interval is entirely
> +	 * covered by precisely one VM area that has the VM_ALLOC flag set.
> +	 */
> +	area = find_vm_area((void *)addr);
> +	if (!area ||
> +	    end > (unsigned long)area->addr + area->size ||
> +	    !(area->flags & VM_ALLOC))
> +		return -EINVAL;
> +
> +	if (!numpages)
> +		return 0;

Shouldn't that be tested earlier?

> +
> +	if (radix_enabled())
> +		return radix__change_memory_range(start, start + size,
> +						  set, clear);
> +	else
> +		return hash__change_memory_range(start, start + size,
> +						 set, clear);
> +}

The following functions should go in a place common to PPC32 and PPC64,
otherwise they will have to be duplicated when implementing for PPC32.

Maybe the above function should also go in a common place; only the last
part should remain in a PPC64-dedicated part. It could be called
change_memory_range(), something like:

int change_memory_range(unsigned long start, unsigned long end,
			unsigned long set, unsigned long clear)
{
	if (radix_enabled())
		return radix__change_memory_range(start, end,
						  set, clear);
	return hash__change_memory_range(start, end, set, clear);
}

Then change_memory_range() could also be implemented for PPC32 later.
> +
> +int set_memory_ro(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +				    0, _PAGE_WRITE);
> +}
> +EXPORT_SYMBOL(set_memory_ro);

Take care that _PAGE_WRITE has value 0 when _PAGE_RO instead of _PAGE_RW
is defined (e.g. for the 8xx). It would be better to use accessors like
pte_wrprotect() and pte_mkwrite().

> +
> +int set_memory_rw(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +				    _PAGE_WRITE, 0);
> +}
> +EXPORT_SYMBOL(set_memory_rw);
> +
> +int set_memory_nx(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +				    0, _PAGE_EXEC);
> +}
> +EXPORT_SYMBOL(set_memory_nx);
> +
> +int set_memory_x(unsigned long addr, int numpages)
> +{
> +	return change_memory_common(addr, numpages,
> +				    _PAGE_EXEC, 0);
> +}
> +EXPORT_SYMBOL(set_memory_x);
> +#endif

Christophe
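Taken together, Christophe's three points (unconditional alignment, an
earlier numpages check, and a common change_memory_range() dispatcher)
suggest an entry point along these lines. This is only a sketch of the
review feedback, not code from the patch; change_memory_range() is the
hypothetical helper from his example above:

static int change_memory_common(unsigned long addr, int numpages,
				unsigned long set, unsigned long clear)
{
	unsigned long start = addr & PAGE_MASK;
	unsigned long end = start + PAGE_SIZE * numpages;
	struct vm_struct *area;

	/* Misaligned callers are buggy; warn once but keep going. */
	WARN_ON_ONCE(start != addr);

	/* Nothing to do; checked before the vmalloc-area lookup. */
	if (!numpages)
		return 0;

	/* The range must lie within a single VM_ALLOC area. */
	area = find_vm_area((void *)addr);
	if (!area || end > (unsigned long)area->addr + area->size ||
	    !(area->flags & VM_ALLOC))
		return -EINVAL;

	/* MMU-specific dispatch, per the change_memory_range() sketch. */
	return change_memory_range(start, end, set, clear);
}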
On Tue, 1 Aug 2017 21:08:49 +0200
christophe leroy <christophe.leroy@c-s.fr> wrote:

> On 01/08/2017 at 13:25, Balbir Singh wrote:
> > [...]
> >
> > diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
> > index 0736e94..3ee4c7d 100644
> > --- a/arch/powerpc/mm/pgtable_64.c
> > +++ b/arch/powerpc/mm/pgtable_64.c
> > @@ -514,3 +514,71 @@ void mark_initmem_nx(void)
> >  		hash__mark_initmem_nx();
> >  }
> >  #endif
> > +
> > +#ifdef CONFIG_ARCH_HAS_SET_MEMORY
> > +/*
> > + * Some of these bits are taken from arm64/mm/page_attr.c
> > + */
> > +static int change_memory_common(unsigned long addr, int numpages,
> > +				unsigned long set, unsigned long clear)
> > +{
> > +	unsigned long start = addr;
> > +	unsigned long size = PAGE_SIZE*numpages;
> > +	unsigned long end = start + size;
> > +	struct vm_struct *area;
> > +
> > +	if (!PAGE_ALIGNED(addr)) {
> > +		start &= PAGE_MASK;
> > +		end = start + size;
> > +		WARN_ON_ONCE(1);
> > +	}
>
> Why not just set start = addr & PAGE_MASK, then do
> WARN_ON_ONCE(start != addr), instead of that if ()?

The code has been taken from arch/arm64/mm/page_attr.c. I did not
change any bits, but we could make changes.

> > +
> > +	/*
> > +	 * So check whether the [addr, addr + size) interval is entirely
> > +	 * covered by precisely one VM area that has the VM_ALLOC flag set.
> > +	 */
> > +	area = find_vm_area((void *)addr);
> > +	if (!area ||
> > +	    end > (unsigned long)area->addr + area->size ||
> > +	    !(area->flags & VM_ALLOC))
> > +		return -EINVAL;
> > +
> > +	if (!numpages)
> > +		return 0;
>
> Shouldn't that be tested earlier?

Same as above.

> > +
> > +	if (radix_enabled())
> > +		return radix__change_memory_range(start, start + size,
> > +						  set, clear);
> > +	else
> > +		return hash__change_memory_range(start, start + size,
> > +						 set, clear);
> > +}
>
> The following functions should go in a place common to PPC32 and PPC64,
> otherwise they will have to be duplicated when implementing for PPC32.
>
> Maybe the above function should also go in a common place; only the last
> part should remain in a PPC64-dedicated part.
> It could be called change_memory_range(), something like:
>
> int change_memory_range(unsigned long start, unsigned long end,
> 			unsigned long set, unsigned long clear)
> {
> 	if (radix_enabled())
> 		return radix__change_memory_range(start, end,
> 						  set, clear);
> 	return hash__change_memory_range(start, end, set, clear);
> }
>
> Then change_memory_range() could also be implemented for PPC32 later.

I was hoping that when we implement support for PPC32, we could
refactor the code then and move it to arch/powerpc/mm/page_attr.c if
required. What do you think?

> > +
> > +int set_memory_ro(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +				    0, _PAGE_WRITE);
> > +}
> > +EXPORT_SYMBOL(set_memory_ro);
>
> Take care that _PAGE_WRITE has value 0 when _PAGE_RO instead of _PAGE_RW
> is defined (e.g. for the 8xx).
>
> It would be better to use accessors like pte_wrprotect() and pte_mkwrite().

Sure, we can definitely refactor this for PPC32. pte_wrprotect() and
pte_mkwrite() would require us to make the changes once we've walked
down to the pte and then invoke different functions based on the flag;
I kind of like the addr and permission abstraction.

> > +
> > +int set_memory_rw(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +				    _PAGE_WRITE, 0);
> > +}
> > +EXPORT_SYMBOL(set_memory_rw);
> > +
> > +int set_memory_nx(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +				    0, _PAGE_EXEC);
> > +}
> > +EXPORT_SYMBOL(set_memory_nx);
> > +
> > +int set_memory_x(unsigned long addr, int numpages)
> > +{
> > +	return change_memory_common(addr, numpages,
> > +				    _PAGE_EXEC, 0);
> > +}
> > +EXPORT_SYMBOL(set_memory_x);
> > +#endif
>
> Christophe

Thanks for the review,
Balbir
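For comparison, the accessor-based shape Christophe describes could
look roughly like the sketch below. The change_memory_attr() name and
the modifier-callback signature are illustrative assumptions;
pte_wrprotect() and pte_mkwrite() are the existing powerpc PTE helpers,
and pte_wrprotect() does the right thing on platforms that define
_PAGE_RO instead of _PAGE_RW:

/* Illustrative sketch: pass a PTE modifier instead of raw _PAGE_* masks. */
static int change_memory_attr(unsigned long addr, int numpages,
			      pte_t (*modify)(pte_t pte))
{
	/*
	 * Walk each kernel PTE in [addr, addr + numpages * PAGE_SIZE),
	 * apply modify() to it and flush; the walk itself is elided here.
	 */
	return 0;
}

int set_memory_ro(unsigned long addr, int numpages)
{
	return change_memory_attr(addr, numpages, pte_wrprotect);
}

int set_memory_rw(unsigned long addr, int numpages)
{
	return change_memory_attr(addr, numpages, pte_mkwrite);
}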
Balbir Singh <bsingharora@gmail.com> writes:

> Add support for set_memory_xx routines. With the STRICT_KERNEL_RWX
> feature support we got support for changing the page permissions
> for pte ranges. This patch adds support for both radix and hash
> so that we can change their permissions via set/clear masks.
>
> A new helper is required for hash (hash__change_memory_range()
> is changed to hash__change_boot_memory_range() as it deals with
> bolted PTEs).
>
> hash__change_memory_range() works with vmalloc'ed PAGE_SIZE requests
> for permission changes. hash__change_memory_range() does not invoke
> updatepp; instead it changes the software PTE and invalidates the PTE.
>
> For radix, radix__change_memory_range() is set up to do the right
> thing for vmalloc'd addresses. It takes a new parameter to decide
> what attributes to set.
>
> ....
>
> +int hash__change_memory_range(unsigned long start, unsigned long end,
> +			      unsigned long set, unsigned long clear)
> +{
> +	unsigned long idx;
> +	pgd_t *pgdp;
> +	pud_t *pudp;
> +	pmd_t *pmdp;
> +	pte_t *ptep;
> +
> +	start = ALIGN_DOWN(start, PAGE_SIZE);
> +	end = PAGE_ALIGN(end); // aligns up
> +
> +	/*
> +	 * Update the software PTE and flush the entry.
> +	 * This should cause a new fault with the right
> +	 * things setup in the hash page table
> +	 */
> +	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
> +		 start, end, set, clear);
> +
> +	for (idx = start; idx < end; idx += PAGE_SIZE) {
> +		pgdp = pgd_offset_k(idx);
> +		pudp = pud_alloc(&init_mm, pgdp, idx);
> +		if (!pudp)
> +			return -1;
> +		pmdp = pmd_alloc(&init_mm, pudp, idx);
> +		if (!pmdp)
> +			return -1;
> +		ptep = pte_alloc_kernel(pmdp, idx);
> +		if (!ptep)
> +			return -1;
> +		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
> +		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
> +	}

You can use find_linux_pte_or_hugepte(), or with my recent patch
series, find_init_mm_pte().

-aneesh
On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
<aneesh.kumar@linux.vnet.ibm.com> wrote:
> Balbir Singh <bsingharora@gmail.com> writes:
>
>> [...]
>
>> +int hash__change_memory_range(unsigned long start, unsigned long end,
>> +			      unsigned long set, unsigned long clear)
>> +{
>> +	unsigned long idx;
>> +	pgd_t *pgdp;
>> +	pud_t *pudp;
>> +	pmd_t *pmdp;
>> +	pte_t *ptep;
>> +
>> +	start = ALIGN_DOWN(start, PAGE_SIZE);
>> +	end = PAGE_ALIGN(end); // aligns up
>> +
>> +	/*
>> +	 * Update the software PTE and flush the entry.
>> +	 * This should cause a new fault with the right
>> +	 * things setup in the hash page table
>> +	 */
>> +	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
>> +		 start, end, set, clear);
>> +
>> +	for (idx = start; idx < end; idx += PAGE_SIZE) {
>
>
>> +		pgdp = pgd_offset_k(idx);
>> +		pudp = pud_alloc(&init_mm, pgdp, idx);
>> +		if (!pudp)
>> +			return -1;
>> +		pmdp = pmd_alloc(&init_mm, pudp, idx);
>> +		if (!pmdp)
>> +			return -1;
>> +		ptep = pte_alloc_kernel(pmdp, idx);
>> +		if (!ptep)
>> +			return -1;
>> +		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);

I think this does the needful: if H_PAGE_HASHPTE is set, the flush
will happen.

>> +		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>> +	}
>
> You can use find_linux_pte_or_hugepte(), or with my recent patch
> series, find_init_mm_pte().
>

For pte_mkwrite() and pte_wrprotect()?

Balbir Singh.
Balbir Singh <bsingharora@gmail.com> writes:

> On Wed, Aug 2, 2017 at 8:09 PM, Aneesh Kumar K.V
> <aneesh.kumar@linux.vnet.ibm.com> wrote:
>> Balbir Singh <bsingharora@gmail.com> writes:
>>
>>> [...]
>>
>>>> +	for (idx = start; idx < end; idx += PAGE_SIZE) {
>>>> +		pgdp = pgd_offset_k(idx);
>>>> +		pudp = pud_alloc(&init_mm, pgdp, idx);
>>>> +		if (!pudp)
>>>> +			return -1;
>>>> +		pmdp = pmd_alloc(&init_mm, pudp, idx);
>>>> +		if (!pmdp)
>>>> +			return -1;
>>>> +		ptep = pte_alloc_kernel(pmdp, idx);
>>>> +		if (!ptep)
>>>> +			return -1;
>>>> +		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
>
> I think this does the needful: if H_PAGE_HASHPTE is set, the flush
> will happen.
>
>>>> +		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
>>>> +	}
>>
>> You can use find_linux_pte_or_hugepte(), or with my recent patch
>> series, find_init_mm_pte().
>>
>
> For pte_mkwrite() and pte_wrprotect()?

For walking the page table. I am not sure you really want to allocate
page tables in that function. If you do, then what will be the initial
value of the PTE? We are requesting to set and clear bits in an
existing PTE entry, right? If you find a none page-table entry, you
should handle it via a fault?

-aneesh
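A rough sketch of the lookup-based walk Aneesh is describing;
find_init_mm_pte() is the helper from his patch series, and its exact
signature here (address plus an optional huge-page shift pointer) is
an assumption:

int hash__change_memory_range(unsigned long start, unsigned long end,
			      unsigned long set, unsigned long clear)
{
	unsigned long idx;
	pte_t *ptep;

	start = ALIGN_DOWN(start, PAGE_SIZE);
	end = PAGE_ALIGN(end);

	for (idx = start; idx < end; idx += PAGE_SIZE) {
		/* Look up the existing kernel PTE; never allocate levels. */
		ptep = find_init_mm_pte(idx, NULL);
		/* A none entry should be handled via a fault, not here. */
		if (!ptep || pte_none(*ptep))
			return -EINVAL;
		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
	}
	return 0;
}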
On Fri, Aug 4, 2017 at 1:38 PM, Aneesh Kumar K.V
<aneesh.kumar@linux.vnet.ibm.com> wrote:
> Balbir Singh <bsingharora@gmail.com> writes:
>
>> [...]
>>
>> For pte_mkwrite() and pte_wrprotect()?
>
> For walking the page table. I am not sure you really want to allocate
> page tables in that function. If you do, then what will be the initial
> value of the PTE? We are requesting to set and clear bits in an
> existing PTE entry, right? If you find a none page-table entry, you
> should handle it via a fault?
>

Fair enough, I've been lazy with that check. I'll fix it in v2.

Balbir Singh.
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index 36fc7bf..65003c9 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -94,6 +94,12 @@ extern void hash__mark_rodata_ro(void);
 extern void hash__mark_initmem_nx(void);
 #endif
 
+/*
+ * For set_memory_*
+ */
+extern int hash__change_memory_range(unsigned long start, unsigned long end,
+				     unsigned long set, unsigned long clear);
+
 extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
 			    pte_t *ptep, unsigned long pte, int huge);
 extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 544440b..5ca0636 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -121,6 +121,12 @@ extern void radix__mark_rodata_ro(void);
 extern void radix__mark_initmem_nx(void);
 #endif
 
+/*
+ * For set_memory_*
+ */
+extern int radix__change_memory_range(unsigned long start, unsigned long end,
+				      unsigned long set, unsigned long clear);
+
 static inline unsigned long __radix_pte_update(pte_t *ptep, unsigned long clr,
 					       unsigned long set)
 {
diff --git a/arch/powerpc/include/asm/set_memory.h b/arch/powerpc/include/asm/set_memory.h
new file mode 100644
index 0000000..b19c67c
--- /dev/null
+++ b/arch/powerpc/include/asm/set_memory.h
@@ -0,0 +1,34 @@
+/*
+ * set_memory.h
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, you can access it online at
+ * http://www.gnu.org/licenses/gpl-2.0.html.
+ *
+ * Copyright IBM Corporation, 2017
+ *
+ * Authors: Balbir Singh <bsingharora@gmail.com>
+ */
+
+#ifndef __ASM_SET_MEMORY_H
+#define __ASM_SET_MEMORY_H
+
+/*
+ * Functions to change memory attributes.
+ */
+int set_memory_ro(unsigned long addr, int numpages);
+int set_memory_rw(unsigned long addr, int numpages);
+int set_memory_x(unsigned long addr, int numpages);
+int set_memory_nx(unsigned long addr, int numpages);
+
+#endif
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
index 656f7f3..db5b477 100644
--- a/arch/powerpc/mm/pgtable-hash64.c
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -424,9 +424,52 @@ int hash__has_transparent_hugepage(void)
 }
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
+/*
+ * This routine will change pte protection only for vmalloc'd
+ * PAGE_SIZE pages, do not invoke for bolted pages
+ */
+int hash__change_memory_range(unsigned long start, unsigned long end,
+			      unsigned long set, unsigned long clear)
+{
+	unsigned long idx;
+	pgd_t *pgdp;
+	pud_t *pudp;
+	pmd_t *pmdp;
+	pte_t *ptep;
+
+	start = ALIGN_DOWN(start, PAGE_SIZE);
+	end = PAGE_ALIGN(end); // aligns up
+
+	/*
+	 * Update the software PTE and flush the entry.
+	 * This should cause a new fault with the right
+	 * things setup in the hash page table
+	 */
+	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
+		 start, end, set, clear);
+
+	for (idx = start; idx < end; idx += PAGE_SIZE) {
+		pgdp = pgd_offset_k(idx);
+		pudp = pud_alloc(&init_mm, pgdp, idx);
+		if (!pudp)
+			return -1;
+		pmdp = pmd_alloc(&init_mm, pudp, idx);
+		if (!pmdp)
+			return -1;
+		ptep = pte_alloc_kernel(pmdp, idx);
+		if (!ptep)
+			return -1;
+		hash__pte_update(&init_mm, idx, ptep, clear, set, 0);
+		hash__flush_tlb_kernel_range(idx, idx + PAGE_SIZE);
+	}
+	return 0;
+
+}
+EXPORT_SYMBOL(hash__change_memory_range);
+
 #ifdef CONFIG_STRICT_KERNEL_RWX
-static bool hash__change_memory_range(unsigned long start, unsigned long end,
-				      unsigned long newpp)
+bool hash__change_boot_memory_range(unsigned long start, unsigned long end,
+				    unsigned long newpp)
 {
 	unsigned long idx;
 	unsigned int step, shift;
@@ -482,7 +525,7 @@ void hash__mark_rodata_ro(void)
 	start = (unsigned long)_stext;
 	end = (unsigned long)__init_begin;
 
-	WARN_ON(!hash__change_memory_range(start, end, PP_RXXX));
+	WARN_ON(!hash__change_boot_memory_range(start, end, PP_RXXX));
 }
 
 void hash__mark_initmem_nx(void)
@@ -494,6 +537,6 @@ void hash__mark_initmem_nx(void)
 
 	pp = htab_convert_pte_flags(pgprot_val(PAGE_KERNEL));
 
-	WARN_ON(!hash__change_memory_range(start, end, pp));
+	WARN_ON(!hash__change_boot_memory_range(start, end, pp));
 }
 #endif
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
index 6e0176d..0e66324 100644
--- a/arch/powerpc/mm/pgtable-radix.c
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -114,9 +114,8 @@ int radix__map_kernel_page(unsigned long ea, unsigned long pa,
 	return 0;
 }
 
-#ifdef CONFIG_STRICT_KERNEL_RWX
-void radix__change_memory_range(unsigned long start, unsigned long end,
-				unsigned long clear)
+int radix__change_memory_range(unsigned long start, unsigned long end,
+			       unsigned long set, unsigned long clear)
 {
 	unsigned long idx;
 	pgd_t *pgdp;
@@ -127,35 +126,38 @@ void radix__change_memory_range(unsigned long start, unsigned long end,
 	start = ALIGN_DOWN(start, PAGE_SIZE);
 	end = PAGE_ALIGN(end); // aligns up
 
-	pr_debug("Changing flags on range %lx-%lx removing 0x%lx\n",
-		 start, end, clear);
+	pr_debug("Changing flags on range %lx-%lx setting 0x%lx removing 0x%lx\n",
+		 start, end, set, clear);
 
 	for (idx = start; idx < end; idx += PAGE_SIZE) {
 		pgdp = pgd_offset_k(idx);
 		pudp = pud_alloc(&init_mm, pgdp, idx);
 		if (!pudp)
-			continue;
+			return -1;
 		if (pud_huge(*pudp)) {
 			ptep = (pte_t *)pudp;
 			goto update_the_pte;
 		}
 		pmdp = pmd_alloc(&init_mm, pudp, idx);
 		if (!pmdp)
-			continue;
+			return -1;
 		if (pmd_huge(*pmdp)) {
 			ptep = pmdp_ptep(pmdp);
 			goto update_the_pte;
 		}
 		ptep = pte_alloc_kernel(pmdp, idx);
 		if (!ptep)
-			continue;
+			return -1;
 update_the_pte:
-		radix__pte_update(&init_mm, idx, ptep, clear, 0, 0);
+		radix__pte_update(&init_mm, idx, ptep, clear, set, 0);
 	}
 
 	radix__flush_tlb_kernel_range(start, end);
+	return 0;
 }
+EXPORT_SYMBOL(radix__change_memory_range);
 
+#ifdef CONFIG_STRICT_KERNEL_RWX
 void radix__mark_rodata_ro(void)
 {
 	unsigned long start, end;
@@ -163,12 +165,12 @@ void radix__mark_rodata_ro(void)
 	start = (unsigned long)_stext;
 	end = (unsigned long)__init_begin;
 
-	radix__change_memory_range(start, end, _PAGE_WRITE);
+	radix__change_memory_range(start, end, 0, _PAGE_WRITE);
 
 	start = (unsigned long)__start_interrupts - PHYSICAL_START;
 	end = (unsigned long)__end_interrupts - PHYSICAL_START;
 
-	radix__change_memory_range(start, end, _PAGE_WRITE);
+	radix__change_memory_range(start, end, 0, _PAGE_WRITE);
 }
 
@@ -177,7 +179,7 @@ void radix__mark_initmem_nx(void)
 	unsigned long start = (unsigned long)__init_begin;
 	unsigned long end = (unsigned long)__init_end;
 
-	radix__change_memory_range(start, end, _PAGE_EXEC);
+	radix__change_memory_range(start, end, 0, _PAGE_EXEC);
 }
 #endif /* CONFIG_STRICT_KERNEL_RWX */
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 0736e94..3ee4c7d 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -514,3 +514,71 @@ void mark_initmem_nx(void)
 		hash__mark_initmem_nx();
 }
 #endif
+
+#ifdef CONFIG_ARCH_HAS_SET_MEMORY
+/*
+ * Some of these bits are taken from arm64/mm/page_attr.c
+ */
+static int change_memory_common(unsigned long addr, int numpages,
+				unsigned long set, unsigned long clear)
+{
+	unsigned long start = addr;
+	unsigned long size = PAGE_SIZE*numpages;
+	unsigned long end = start + size;
+	struct vm_struct *area;
+
+	if (!PAGE_ALIGNED(addr)) {
+		start &= PAGE_MASK;
+		end = start + size;
+		WARN_ON_ONCE(1);
+	}
+
+	/*
+	 * So check whether the [addr, addr + size) interval is entirely
+	 * covered by precisely one VM area that has the VM_ALLOC flag set.
+	 */
+	area = find_vm_area((void *)addr);
+	if (!area ||
+	    end > (unsigned long)area->addr + area->size ||
+	    !(area->flags & VM_ALLOC))
+		return -EINVAL;
+
+	if (!numpages)
+		return 0;
+
+	if (radix_enabled())
+		return radix__change_memory_range(start, start + size,
+						  set, clear);
+	else
+		return hash__change_memory_range(start, start + size,
+						 set, clear);
+}
+
+int set_memory_ro(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+				    0, _PAGE_WRITE);
+}
+EXPORT_SYMBOL(set_memory_ro);
+
+int set_memory_rw(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+				    _PAGE_WRITE, 0);
+}
+EXPORT_SYMBOL(set_memory_rw);
+
+int set_memory_nx(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+				    0, _PAGE_EXEC);
+}
+EXPORT_SYMBOL(set_memory_nx);
+
+int set_memory_x(unsigned long addr, int numpages)
+{
+	return change_memory_common(addr, numpages,
+				    _PAGE_EXEC, 0);
+}
+EXPORT_SYMBOL(set_memory_x);
+#endif
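For reference, a hypothetical caller of the new interface (not part of
the patch) could look like this: make a vmalloc'd buffer read-only,
then writable again. Both calls operate on whole pages inside a single
VM_ALLOC area, as change_memory_common() requires:

#include <linux/vmalloc.h>
#include <asm/set_memory.h>

static int set_memory_demo(void)
{
	void *buf = vmalloc(PAGE_SIZE);
	int rc;

	if (!buf)
		return -ENOMEM;

	/* Write-protect the page, then restore write permission. */
	rc = set_memory_ro((unsigned long)buf, 1);
	if (!rc)
		rc = set_memory_rw((unsigned long)buf, 1);

	vfree(buf);
	return rc;
}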
Add support for set_memory_xx routines. With the STRICT_KERNEL_RWX
feature support we got support for changing the page permissions
for pte ranges. This patch adds support for both radix and hash
so that we can change their permissions via set/clear masks.

A new helper is required for hash (hash__change_memory_range()
is changed to hash__change_boot_memory_range() as it deals with
bolted PTEs).

hash__change_memory_range() works with vmalloc'ed PAGE_SIZE requests
for permission changes. hash__change_memory_range() does not invoke
updatepp; instead it changes the software PTE and invalidates the PTE.

For radix, radix__change_memory_range() is set up to do the right
thing for vmalloc'd addresses. It takes a new parameter to decide
what attributes to set.

Signed-off-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/powerpc/include/asm/book3s/64/hash.h  |  6 +++
 arch/powerpc/include/asm/book3s/64/radix.h |  6 +++
 arch/powerpc/include/asm/set_memory.h      | 34 +++++++++++++++
 arch/powerpc/mm/pgtable-hash64.c           | 51 ++++++++++++++++++++--
 arch/powerpc/mm/pgtable-radix.c            | 26 ++++++------
 arch/powerpc/mm/pgtable_64.c               | 68 ++++++++++++++++++++++++++++++
 6 files changed, 175 insertions(+), 16 deletions(-)
 create mode 100644 arch/powerpc/include/asm/set_memory.h