madvise09: Change PAGES to a value greater than 32

Message ID 1567576823-10080-1-git-send-email-shuang.qiu@oracle.com
State Changes Requested
Delegated to: Petr Vorel

Commit Message

Shuang Qiu Sept. 4, 2019, 6 a.m. UTC
From: Shuang Qiu <shuang.qiu@oracle.com>

Upstream commit 1a61ab (mm: memcontrol: replace zone summing with lruvec_page_state())
modified the lruvec state accounting to work in batches: 32 or fewer MADV_FREE pages
will not trigger the lruvec_stat accounting, and will not be freed under memory pressure either.
So the testcase may fail with:
...
madvise09.c:219: INFO: Memory hungry child 6178 started, try 10
madvise09.c:254: INFO: Memory map: pppppppppppppppppppppppppppppppp
madvise09.c:259: FAIL: No MADV_FREE page was freed on low memory
...
Changing PAGES to a value greater than 32 fixes this issue.

Signed-off-by: Shuang Qiu <shuang.qiu@oracle.com>
---
 testcases/kernel/syscalls/madvise/madvise09.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Thadeu Lima de Souza Cascardo Sept. 4, 2019, 2:32 p.m. UTC | #1
On Wed, Sep 04, 2019 at 02:00:23PM +0800, shuang.qiu@oracle.com wrote:
> From: Shuang Qiu <shuang.qiu@oracle.com>
> 
> In upstream patch 1a61ab (mm: memcontrol: replace zone summing with lruvec_page_state()),
> it modify the lruvec state in batch,equal and less than 32 MADV_FREE pages will not trigger
> the account of lruvec_stat,and will not be free in memory pressure either.
> So the testcase may fail with:
> ...
> madvise09.c:219: INFO: Memory hungry child 6178 started, try 10
> madvise09.c:254: INFO: Memory map: pppppppppppppppppppppppppppppppp
> madvise09.c:259: FAIL: No MADV_FREE page was freed on low memory
> ...
> Change the PAGES to the value more than 32 can fix such issue.
> 
> Signed-off-by: Shuang Qiu <shuang.qiu@oracle.com>

On ppc64le, I was getting retries, as OOM was getting hit. The test would
ultimately time out. This fixes it for me.

Thanks.
Cascardo.

Tested-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>

Li Wang Sept. 5, 2019, 8:29 a.m. UTC | #2
On Wed, Sep 4, 2019 at 2:01 PM <shuang.qiu@oracle.com> wrote:
>
> [...]
> -#define PAGES 32
> +#define PAGES 64

I'm not sure why 64 pages is the proper value. Can you explain more?
Shuang Qiu Sept. 5, 2019, 10:01 a.m. UTC | #3
On 09/05/2019 04:29 PM, Li Wang wrote:
> On Wed, Sep 4, 2019 at 2:01 PM <shuang.qiu@oracle.com> wrote:
>> [...]
>> -#define PAGES 32
>> +#define PAGES 64
> I'm not sure why 64 pages is a proper value? Can you explain more?
I think any value larger than 32 is OK, so I tested 64 and it works fine.

Thanks
Shuang
Li Wang Sept. 5, 2019, 10:29 a.m. UTC | #4
On Thu, Sep 5, 2019 at 6:02 PM Shuang Qiu <shuang.qiu@oracle.com> wrote:
> [...]
> > I'm not sure why 64 pages is a proper value? Can you explain more?
> I think any value which larger than 32 is ok.
> So I tested 64 and works fine.

I just tried '42' at random, but it doesn't work. I guess there might be a
watermark that has to be crossed to trigger this memory page reclaiming.

I looked at the code call path; it seems 'sc->nr_reclaimed' is the key value
controlling that, but I'm still not 100% sure. I will keep debugging it for a
while.

try_to_free_mem_cgroup_pages
  do_try_to_free_pages
    shrink_zone
      shrink_node
        shrink_node_memcg
          get_scan_count
            lruvec_lru_size
              lruvec_page_state
Cyril Hrubis Sept. 11, 2019, 11:19 a.m. UTC | #5
Hi!
> I'm not sure why 64 pages is a proper value? Can you explain more?

I've just talked to Michal Hocko and the conclusion is that we should
allocate several megabytes to avoid depending on kernel implementation
details, which would mean allocating thousands of pages.
Li Wang Sept. 12, 2019, 2:51 a.m. UTC | #6
On Wed, Sep 11, 2019 at 7:19 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > I'm not sure why 64 pages is a proper value? Can you explain more?
>
> I've just talked to Michal Hocko and the conclusion is that we should
> allocate several megabytes to avoid depending on kernel implementation
> details, which would mean allocating thousands of pages.
>

Sounds reasonable. Allocating thousands of pages should reliably trigger the
MADV_FREE page reclaiming.

@shuang.qiu, Hi Shuang, would you mind rewriting the patch, or should I do
that?
Shuang Qiu Sept. 12, 2019, 3:35 a.m. UTC | #7
On 09/12/2019 10:51 AM, Li Wang wrote:
>
>
> On Wed, Sep 11, 2019 at 7:19 PM Cyril Hrubis <chrubis@suse.cz> wrote:
>
>     Hi!
>     > I'm not sure why 64 pages is a proper value? Can you explain more?
>
>     I've just talked to Michal Hocko and the conclusion is that we should
>     allocate several megabytes to avoid depending on kernel implementation
>     details, which would mean allocating thousands of pages.
>
>
> Sounds reasonable. Allocating thousands of pages must trigger 
> the MADV_FREE page reclaiming.
>
> @shuang.qiu, Hi Shuang, would you mind to rewrite the patch, or should 
> I do that?
Thanks for helping to investigate this issue.
I will update the patch.
32 pages is really small on a current system, where reclaim of such a small
piece of memory may be held back; thousands of pages is reasonable.

Thanks
Shuang

Patch

diff --git a/testcases/kernel/syscalls/madvise/madvise09.c b/testcases/kernel/syscalls/madvise/madvise09.c
index 01075f6..3759053 100644
--- a/testcases/kernel/syscalls/madvise/madvise09.c
+++ b/testcases/kernel/syscalls/madvise/madvise09.c
@@ -57,7 +57,7 @@  static int sleep_between_faults;
 
 static int swap_accounting_enabled;
 
-#define PAGES 32
+#define PAGES 64
 #define TOUCHED_PAGE1 0
 #define TOUCHED_PAGE2 10