Patchwork [-V7,10/10] powerpc: disable assert_pte_locked

Submitter Aneesh Kumar K.V
Date April 28, 2013, 7:51 p.m.
Message ID <1367178711-8232-11-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
Permalink /patch/240304/
State Superseded

Comments

Aneesh Kumar K.V - April 28, 2013, 7:51 p.m.
From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>

With THP we set the pmd to none before we do pte_clear. Hence we can't
walk the page table to get the pte lock pointer and verify whether it is locked.
THP does take the pte lock before calling pte_clear, so we don't change the
locking rules here. It is just that we can't use a page table walk to check
whether pte locks are held with THP.

NOTE: This needs to be re-written. Not to be merged upstream.
Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 arch/powerpc/mm/pgtable.c | 2 ++
 1 file changed, 2 insertions(+)
David Gibson - May 3, 2013, 5:30 a.m.
On Mon, Apr 29, 2013 at 01:21:51AM +0530, Aneesh Kumar K.V wrote:
> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
> 
> With THP we set the pmd to none before we do pte_clear. Hence we can't
> walk the page table to get the pte lock pointer and verify whether it is locked.
> THP does take the pte lock before calling pte_clear, so we don't change the
> locking rules here. It is just that we can't use a page table walk to check
> whether pte locks are held with THP.
> 
> NOTE: This needs to be re-written. Not to be merged upstream.

So, rewrite it..

> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  arch/powerpc/mm/pgtable.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 214130a..d77f94f 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -224,6 +224,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>  #ifdef CONFIG_DEBUG_VM
>  void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>  {
> +#if 0
>  	pgd_t *pgd;
>  	pud_t *pud;
>  	pmd_t *pmd;
> @@ -237,6 +238,7 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>  	pmd = pmd_offset(pud, addr);
>  	BUG_ON(!pmd_present(*pmd));
>  	assert_spin_locked(pte_lockptr(mm, pmd));
> +#endif
>  }
>  #endif /* CONFIG_DEBUG_VM */
>
Aneesh Kumar K.V - May 3, 2013, 7:07 p.m.
David Gibson <dwg@au1.ibm.com> writes:

> On Mon, Apr 29, 2013 at 01:21:51AM +0530, Aneesh Kumar K.V wrote:
>> From: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
>> 
>> With THP we set the pmd to none before we do pte_clear. Hence we can't
>> walk the page table to get the pte lock pointer and verify whether it is locked.
>> THP does take the pte lock before calling pte_clear, so we don't change the
>> locking rules here. It is just that we can't use a page table walk to check
>> whether pte locks are held with THP.
>> 
>> NOTE: This needs to be re-written. Not to be merged upstream.
>
> So, rewrite it..


That is something we need to discuss more. We can't do the pte_locked
assert the way we do now, because, as explained above, THP collapse
depends on setting the pmd to none before doing pte_clear. So we clearly
cannot walk the page table and find the ptl to check whether we are
holding that lock. But yes, these asserts are valid: those functions
should be called holding the ptl. I still haven't found an alternative
way to do those asserts. Any suggestions?


>
>> Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
>> ---
>>  arch/powerpc/mm/pgtable.c | 2 ++
>>  1 file changed, 2 insertions(+)
>> 
>> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
>> index 214130a..d77f94f 100644
>> --- a/arch/powerpc/mm/pgtable.c
>> +++ b/arch/powerpc/mm/pgtable.c
>> @@ -224,6 +224,7 @@ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
>>  #ifdef CONFIG_DEBUG_VM
>>  void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>>  {
>> +#if 0
>>  	pgd_t *pgd;
>>  	pud_t *pud;
>>  	pmd_t *pmd;
>> @@ -237,6 +238,7 @@ void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
>>  	pmd = pmd_offset(pud, addr);
>>  	BUG_ON(!pmd_present(*pmd));
>>  	assert_spin_locked(pte_lockptr(mm, pmd));
>> +#endif
>>  }
>>  #endif /* CONFIG_DEBUG_VM */
>>  
>

-aneesh

Patch

diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index 214130a..d77f94f 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -224,6 +224,7 @@  int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address,
 #ifdef CONFIG_DEBUG_VM
 void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 {
+#if 0
 	pgd_t *pgd;
 	pud_t *pud;
 	pmd_t *pmd;
@@ -237,6 +238,7 @@  void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
 	pmd = pmd_offset(pud, addr);
 	BUG_ON(!pmd_present(*pmd));
 	assert_spin_locked(pte_lockptr(mm, pmd));
+#endif
 }
 #endif /* CONFIG_DEBUG_VM */