
set_mempolicy01: cancel the limit of maximum runtime

Message ID 20221220054549.1757270-1-liwang@redhat.com
State Superseded
Series set_mempolicy01: cancel the limit of maximum runtime

Commit Message

Li Wang Dec. 20, 2022, 5:45 a.m. UTC
The test needs more time to run on systems with many NUMA nodes, so
this patch removes the max_runtime limit.

  ========= test log on 16 nodes system =========
  ...
  set_mempolicy01.c:80: TPASS: child: Node 15 allocated 16
  tst_numa.c:25: TINFO: Node 0 allocated 0 pages
  tst_numa.c:25: TINFO: Node 1 allocated 0 pages
  tst_numa.c:25: TINFO: Node 2 allocated 0 pages
  tst_numa.c:25: TINFO: Node 3 allocated 0 pages
  tst_numa.c:25: TINFO: Node 4 allocated 0 pages
  tst_numa.c:25: TINFO: Node 5 allocated 0 pages
  tst_numa.c:25: TINFO: Node 6 allocated 0 pages
  tst_numa.c:25: TINFO: Node 7 allocated 0 pages
  tst_numa.c:25: TINFO: Node 8 allocated 0 pages
  tst_numa.c:25: TINFO: Node 9 allocated 0 pages
  tst_numa.c:25: TINFO: Node 10 allocated 0 pages
  tst_numa.c:25: TINFO: Node 11 allocated 0 pages
  tst_numa.c:25: TINFO: Node 12 allocated 0 pages
  tst_numa.c:25: TINFO: Node 13 allocated 0 pages
  tst_numa.c:25: TINFO: Node 14 allocated 0 pages
  tst_numa.c:25: TINFO: Node 15 allocated 16 pages
  set_mempolicy01.c:80: TPASS: parent: Node 15 allocated 16

  Summary:
  passed   393210
  failed   0
  broken   0
  skipped  0
  warnings 0

  real	6m15.147s
  user	0m33.641s
  sys	0m44.553s

Signed-off-by: Li Wang <liwang@redhat.com>
---
 testcases/kernel/syscalls/set_mempolicy/set_mempolicy01.c | 1 +
 1 file changed, 1 insertion(+)

Comments

Cyril Hrubis Dec. 20, 2022, 2:16 p.m. UTC | #1
Hi!
> The test needs more time to run on systems with many NUMA nodes, so
> this patch removes the max_runtime limit.
>
>   [16-node test log and timing summary snipped; identical to the log
>   in the commit message above]

Can't we just set the default to 30 minutes or something large enough?
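
For reference, the fixed-cap alternative would look something like this
in the test's tst_test struct (a sketch of this suggestion only, not
what the posted patch does; the existing fields are copied from the
diff below, and the 30-minute value is just the number floated above):

	static struct tst_test test = {
		.tcnt = 2,
		.forks_child = 1,
		.needs_checkpoints = 1,
		.max_runtime = 30 * 60,	/* fixed 30 minute cap */
	};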
Li Wang Dec. 21, 2022, 1:59 a.m. UTC | #2
On Tue, Dec 20, 2022 at 10:15 PM Cyril Hrubis <chrubis@suse.cz> wrote:

> Hi!
> > The test needs more time to run on systems with many NUMA nodes, so
> > this patch removes the max_runtime limit.
> >
> >   [test log snipped]
>
> Can't we just set the default to 30 minutes or something large enough?
>

Yes, I thought about a larger fixed value before, but the test time
seems to increase sharply each time the test matrix doubles.

I don't have a system with more than 32 nodes to check whether 30
minutes is enough, so removing the limit, as we did for the OOM tests,
probably makes more sense; the right timeout value depends on the real
system configuration.
Richard Palethorpe Dec. 28, 2022, 10:21 a.m. UTC | #3
Hello,

Li Wang <liwang@redhat.com> writes:

> On Tue, Dec 20, 2022 at 10:15 PM Cyril Hrubis <chrubis@suse.cz> wrote:
>
>> Hi!
>> > The test needs more time to run on systems with many NUMA nodes, so
>> > this patch removes the max_runtime limit.
>> >
>> >   [test log snipped]
>>
>> Can't we just set the default to 30 minutes or something large enough?
>>
>
> Yes, I thought about a larger fixed value before, but the test time
> seems to increase sharply each time the test matrix doubles.
>
> I don't have a system with more than 32 nodes to check whether 30
> minutes is enough, so removing the limit, as we did for the OOM tests,
> probably makes more sense; the right timeout value depends on the real
> system configuration.

IMO, this is what the timeout multiplier is for. So if you have a
computer with 512 CPUs or a tiny embedded device, you can adjust the
timeouts upwards.
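
For illustration only, the multiplier conceptually scales a test's
runtime budget as sketched below. This is not the actual LTP library
code: LTP_RUNTIME_MUL is the real environment variable, but the helper
and default value here are made up to show the idea.

	#include <stdio.h>
	#include <stdlib.h>

	/* Hypothetical helper: read the runtime multiplier, default 1. */
	static float runtime_mul(void)
	{
		const char *env = getenv("LTP_RUNTIME_MUL");

		return env ? atof(env) : 1;
	}

	int main(void)
	{
		int max_runtime = 300;	/* hypothetical 5 minute default */

		printf("scaled runtime: %.0f s\n",
		       max_runtime * runtime_mul());
		return 0;
	}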

The default timeouts are for workstations, commodity servers and
VMs. Although I suppose, as this is a NUMA test, the average machine
will be bigger; but 32 nodes on a physical machine would be 128-512
CPUs?

>
>
> -- 
> Regards,
> Li Wang
Li Wang Dec. 29, 2022, 3:04 a.m. UTC | #4
On Wed, Dec 28, 2022 at 6:44 PM Richard Palethorpe <rpalethorpe@suse.de> wrote:

> Hello,
>
> Li Wang <liwang@redhat.com> writes:
>
> > On Tue, Dec 20, 2022 at 10:15 PM Cyril Hrubis <chrubis@suse.cz> wrote:
> >
> >> Hi!
> >> > The test needs more time to run on systems with many NUMA nodes, so
> >> > this patch removes the max_runtime limit.
> >> >
> >> >   [test log snipped]
> >>
> >> Can't we just set the default to 30 minutes or something large enough?
> >>
> >
> > Yes, I thought about a larger fixed value before, but the test time
> > seems to increase sharply each time the test matrix doubles.
> >
> > I don't have a system with more than 32 nodes to check whether 30
> > minutes is enough, so removing the limit, as we did for the OOM tests,
> > probably makes more sense; the right timeout value depends on the real
> > system configuration.
>
> IMO, this is what the timeout multiplier is for. So if you have a
> computer with 512 CPUs or a tiny embedded device, you can adjust the
> timeouts upwards.
>

Well, exporting LTP_RUNTIME_MUL with a large value is useful for
extending the maximum test runtime, but the side effect is that it
changes the runtime of many other tests as well, especially those
that use tst_remaining_runtime() in their infinite loops
(e.g. pty06/7, swapping01, mmap1, fork13),
which makes the whole LTP suite take longer to complete.
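
The pattern in those tests looks roughly like this (a minimal sketch,
not any particular test; tst_remaining_runtime() is the real LTP
helper, the workload body is elided):

	static void run(void)
	{
		/* Loop until the (possibly multiplied) runtime budget is
		 * spent; a larger LTP_RUNTIME_MUL stretches this loop. */
		while (tst_remaining_runtime()) {
			/* one iteration of the stress workload */
		}
	}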

That's why we love LTP_RUNTIME_MUL but dare not set it too high.



>
> The default timeouts are for workstations, commodity servers and
> VMs. Although I suppose, as this is a NUMA test, the average machine
> will be bigger; but 32 nodes on a physical machine would be 128-512
> CPUs?
>

I guess yes; after checking, one 16-node physical machine has 128 CPUs.

Patch

diff --git a/testcases/kernel/syscalls/set_mempolicy/set_mempolicy01.c b/testcases/kernel/syscalls/set_mempolicy/set_mempolicy01.c
index 07f5d789b..502e33024 100644
--- a/testcases/kernel/syscalls/set_mempolicy/set_mempolicy01.c
+++ b/testcases/kernel/syscalls/set_mempolicy/set_mempolicy01.c
@@ -110,6 +110,7 @@  static struct tst_test test = {
 	.tcnt = 2,
 	.forks_child = 1,
 	.needs_checkpoints = 1,
+	.max_runtime = TST_UNLIMITED_RUNTIME,
 };
 
 #else