[3/4] lib: Introduce concept of max_test_runtime

Message ID 20210609114659.2445-4-chrubis@suse.cz
State Superseded
Series Introduce a concept of test runtime cap

Commit Message

Cyril Hrubis June 9, 2021, 11:46 a.m. UTC
This is an attempt at handling a cap on a test runtime correctly. It
consists of several pieces, namely:

* The idea of a test's maximal runtime is decoupled from the test timeout

  - the maximal runtime is simply a cap on how long an instance of a
    test should run; it's mainly used by CVE reproducers that attempt to
    trigger a race until they run out of time, such a test may exit
    sooner but must not run longer than the cap

  - tst_timeout_remaining() is replaced with tst_remaining_runtime(),
    which accounts correctly for .test_variants and .all_filesystems
    (a usage sketch follows below)

* The default value for a test's max_runtime is computed from the test timeout

  - we scale the timeout down so that there is some room for the test to
    properly exit once the test runtime has been exhausted; this is our
    base for a test's max_runtime

  - the scaled value is then divided, if needed, so that we end up with
    a correct maximal runtime for an instance of a test, i.e. we have a
    max runtime for a fork_testrun() instance that is inside the
    .test_variants and .all_filesystems loops

  - this also allows us to control the test max runtime by setting the
    test timeout

* The maximal runtime, for the whole test, can be passed down to the test

  - If LTP_MAX_TEST_RUNTIME is set in the test environment it's used as
    the base for max_runtime instead of the scaled-down timeout; it's
    still divided into pieces so that we have a correct runtime cap for
    a fork_testrun() instance

  - We also make sure that the test timeout is adjusted, if needed, to
    accommodate the new test runtime cap, i.e. if the upscaled runtime
    is greater than the timeout, the test timeout is adjusted
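
A minimal usage sketch of the new API; it mirrors the updated
lib/newlib_tests/test_runtime01.c further down in this patch:

	static void run(void)
	{
		/* loop until the runtime cap for this fork_testrun()
		 * instance has been used up */
		while (tst_remaining_runtime())
			sleep(1);

		tst_res(TPASS, "Runtime exhausted");
	}

The cap for the whole test can then be overridden from the environment,
e.g. LTP_MAX_TEST_RUNTIME=60 ./test_runtime01.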

Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
---
 include/tst_fuzzy_sync.h                      |  4 +-
 include/tst_test.h                            |  7 +-
 lib/newlib_tests/.gitignore                   |  3 +-
 .../{test18.c => test_runtime01.c}            |  7 +-
 lib/newlib_tests/test_runtime02.c             | 31 +++++++++
 lib/tst_test.c                                | 64 ++++++++++++++++++-
 testcases/kernel/crypto/af_alg02.c            |  2 +-
 testcases/kernel/crypto/pcrypt_aead01.c       |  2 +-
 testcases/kernel/mem/mtest01/mtest01.c        |  6 +-
 testcases/kernel/mem/mtest06/mmap1.c          | 13 ++--
 .../kernel/syscalls/move_pages/move_pages12.c |  4 +-
 11 files changed, 117 insertions(+), 26 deletions(-)
 rename lib/newlib_tests/{test18.c => test_runtime01.c} (59%)
 create mode 100644 lib/newlib_tests/test_runtime02.c

Comments

Petr Vorel June 9, 2021, 1:24 p.m. UTC | #1
Hi Cyril,

> This is an attempt at handling a cap on a test runtime correctly. It
> consists of several pieces, namely:

> * The idea of a test's maximal runtime is decoupled from the test timeout

>   - the maximal runtime is simply a cap on how long an instance of a
>     test should run; it's mainly used by CVE reproducers that attempt to
>     trigger a race until they run out of time, such a test may exit
>     sooner but must not run longer than the cap

>   - tst_timeout_remaining() is replaced with tst_remaining_runtime(),
>     which accounts correctly for .test_variants and .all_filesystems

> * The default value for a test's max_runtime is computed from the test timeout

>   - we scale the timeout down so that there is some room for the test to
>     properly exit once the test runtime has been exhausted; this is our
>     base for a test's max_runtime

>   - the scaled value is then divided, if needed, so that we end up with
>     a correct maximal runtime for an instance of a test, i.e. we have a
>     max runtime for a fork_testrun() instance that is inside the
>     .test_variants and .all_filesystems loops
Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
the runtime for each filesystem depends on the number of filesystems? E.g.
writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
filesystems, but about 1 min with all 9 filesystems. We should document that
the author should expect the maximum number of filesystems. What happens to
these values in the (long) future, when LTP supports a new filesystem (or
drops some)? This was the reason for me to define the value for "Max runtime
per iteration" in the test, not for the whole run.
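
For concreteness, plugging these numbers into the helpers added by this patch
(offset 5, scale 0.9), and assuming the base runtime is divided by the number
of filesystems being tested:

	base runtime = (600 - 5) * 0.9 = 535s
	2 filesystems: 535 / 2 = 267s (~4.5 min) per fork_testrun()
	9 filesystems: 535 / 9 =  59s (~1 min)  per fork_testrun()

so the per-iteration budget indeed shrinks as filesystems are added.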

>   - this also allows us to control the test max runtime by setting the
>     test timeout

> * The maximal runtime, for the whole test, can be passed down to the test

>   - If LTP_MAX_TEST_RUNTIME is set in the test environment it's used as
>     the base for max_runtime instead of the scaled-down timeout; it's
>     still divided into pieces so that we have a correct runtime cap for
>     a fork_testrun() instance
LTP_MAX_TEST_RUNTIME should go to doc/user-guide.txt. I suppose you're
waiting for feedback before writing the docs.

>   - We also make sure that the test timeout is adjusted, if needed, to
>     accommodate the new test runtime cap, i.e. if the upscaled runtime
>     is greater than the timeout, the test timeout is adjusted

> Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
> ---
>  include/tst_fuzzy_sync.h                      |  4 +-
>  include/tst_test.h                            |  7 +-
>  lib/newlib_tests/.gitignore                   |  3 +-
>  .../{test18.c => test_runtime01.c}            |  7 +-
>  lib/newlib_tests/test_runtime02.c             | 31 +++++++++
>  lib/tst_test.c                                | 64 ++++++++++++++++++-
>  testcases/kernel/crypto/af_alg02.c            |  2 +-
>  testcases/kernel/crypto/pcrypt_aead01.c       |  2 +-
>  testcases/kernel/mem/mtest01/mtest01.c        |  6 +-
>  testcases/kernel/mem/mtest06/mmap1.c          | 13 ++--
>  .../kernel/syscalls/move_pages/move_pages12.c |  4 +-
>  11 files changed, 117 insertions(+), 26 deletions(-)
>  rename lib/newlib_tests/{test18.c => test_runtime01.c} (59%)
+1 for a test description instead of a plain number.

...
> +++ b/lib/newlib_tests/test_runtime01.c
...
>  static void run(void)
>  {
> -	do {
> +	while (tst_remaining_runtime())
>  		sleep(1);
> -	} while (tst_timeout_remaining() >= 4);

> -	tst_res(TPASS, "Timeout remaining: %d", tst_timeout_remaining());
> +	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());

There is a warning:
tst_test.c:1369: TINFO: Timeout per run is 0h 00m 05s
tst_test.c:1265: TWARN: Timeout too short for runtime offset 5!
tst_test.c:1309: TINFO: runtime > timeout, adjusting test timeout to 6
tst_test.c:1318: TINFO: Max runtime per iteration 1s
test_runtime01.c:15: TPASS: Timeout remaining: 0

Maybe the test should use a value that avoids the warning (i.e. 7).
Or is the warning intended to be part of the test output?

.timeout = 6 fails:

tst_test.c:1369: TINFO: Timeout per run is 0h 00m 06s
tst_test.c:1304: TBROK: Test runtime too small!

.timeout >= 7 is ok:
tst_test.c:1369: TINFO: Timeout per run is 0h 00m 07s
tst_test.c:1318: TINFO: Max runtime per iteration 1s
test_runtime01.c:15: TPASS: Timeout remaining: 0

...
> diff --git a/lib/tst_test.c b/lib/tst_test.c
> index 7c9061d6d..23b52583a 100644
> --- a/lib/tst_test.c
> +++ b/lib/tst_test.c

> -unsigned int tst_timeout_remaining(void)
> +#define RUNTIME_TIMEOUT_OFFSET 5
=> maybe define it as 6 to allow running with .timeout = 6?

> +#define RUNTIME_TIMEOUT_SCALE  0.9
> +
> +static unsigned int timeout_to_runtime(void)
> +{
> +	if (results->timeout <= RUNTIME_TIMEOUT_OFFSET) {
> +		tst_res(TWARN, "Timeout too short for runtime offset %i!",
> +		        RUNTIME_TIMEOUT_OFFSET);
> +		return 1;
> +	}
> +
> +	return (results->timeout - RUNTIME_TIMEOUT_OFFSET) * RUNTIME_TIMEOUT_SCALE;
> +}
> +
> +static unsigned int runtime_to_timeout(unsigned int runtime)
> +{
> +	return runtime / RUNTIME_TIMEOUT_SCALE + RUNTIME_TIMEOUT_OFFSET;
> +}
...

Also test_runtime02.c fails; is that intended?
tst_test.c:1374: TINFO: Timeout per run is 0h 00m 05s
tst_test.c:1265 timeout_to_runtime(): results->timeout: 5
tst_test.c:1266 timeout_to_runtime(): RUNTIME_TIMEOUT_OFFSET: 5
tst_test.c:1268: TWARN: Timeout too short for runtime offset 5!
tst_test.c:1314: TINFO: runtime > timeout, adjusting test timeout to 6
tst_test.c:1321: TBROK: Test runtime too small!

Kind regards,
Petr
Cyril Hrubis June 9, 2021, 1:32 p.m. UTC | #2
Hi!
> >   - the scaled value is then divided, if needed, so that we end up with
> >     a correct maximal runtime for an instance of a test, i.e. we have a
> >     max runtime for a fork_testrun() instance that is inside the
> >     .test_variants and .all_filesystems loops
> Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
> the runtime for each filesystem depends on the number of filesystems? E.g.
> writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
> filesystems, but about 1 min with all 9 filesystems. We should document that
> the author should expect the maximum number of filesystems. What happens to
> these values in the (long) future, when LTP supports a new filesystem (or
> drops some)? This was the reason for me to define the value for "Max runtime
> per iteration" in the test, not for the whole run.

That's one of the downsides of this approach.

The reason why I chose this approach is that you can set an upper cap for
the whole test run and not only for a single filesystem/variant.

Also this way the test timeout corresponds to the maximal test runtime.

Another option would be to redefine the timeout to be a timeout per
fork_testrun() instance, which would make the approach slightly easier
in some places; however, that would mean changing the default test
timeout to a much smaller value and annotating all long-running tests.

Hmm, I guess that annotating all long-running tests and changing the default
timeout may be a good idea regardless of this approach.

> >   - this also allows us to control the test max runtime by setting the
> >     test timeout
>
> > * The maximal runtime, for the whole test, can be passed down to the test
>
> >   - If LTP_MAX_TEST_RUNTIME is set in the test environment it's used as
> >     the base for max_runtime instead of the scaled-down timeout; it's
> >     still divided into pieces so that we have a correct runtime cap for
> >     a fork_testrun() instance
> LTP_MAX_TEST_RUNTIME should go to doc/user-guide.txt. I suppose you're
> waiting for feedback before writing the docs.

Yes, I do not consider this to be a finished patchset and I do expect that
it will need some changes.

> >   - We also make sure that the test timeout is adjusted, if needed, to
> >     accommodate the new test runtime cap, i.e. if the upscaled runtime
> >     is greater than the timeout, the test timeout is adjusted
> 
> > Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
> > ---
> >  include/tst_fuzzy_sync.h                      |  4 +-
> >  include/tst_test.h                            |  7 +-
> >  lib/newlib_tests/.gitignore                   |  3 +-
> >  .../{test18.c => test_runtime01.c}            |  7 +-
> >  lib/newlib_tests/test_runtime02.c             | 31 +++++++++
> >  lib/tst_test.c                                | 64 ++++++++++++++++++-
> >  testcases/kernel/crypto/af_alg02.c            |  2 +-
> >  testcases/kernel/crypto/pcrypt_aead01.c       |  2 +-
> >  testcases/kernel/mem/mtest01/mtest01.c        |  6 +-
> >  testcases/kernel/mem/mtest06/mmap1.c          | 13 ++--
> >  .../kernel/syscalls/move_pages/move_pages12.c |  4 +-
> >  11 files changed, 117 insertions(+), 26 deletions(-)
> >  rename lib/newlib_tests/{test18.c => test_runtime01.c} (59%)
> +1 for a test description instead of a plain number.
> 
> ...
> > +++ b/lib/newlib_tests/test_runtime01.c
> ...
> >  static void run(void)
> >  {
> > -	do {
> > +	while (tst_remaining_runtime())
> >  		sleep(1);
> > -	} while (tst_timeout_remaining() >= 4);
> 
> > -	tst_res(TPASS, "Timeout remaining: %d", tst_timeout_remaining());
> > +	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());
> 
> There is a warning:
> tst_test.c:1369: TINFO: Timeout per run is 0h 00m 05s
> tst_test.c:1265: TWARN: Timeout too short for runtime offset 5!
> tst_test.c:1309: TINFO: runtime > timeout, adjusting test timeout to 6
> tst_test.c:1318: TINFO: Max runtime per iteration 1s
> test_runtime01.c:15: TPASS: Timeout remaining: 0

This is expected.

> Maybe the test should use a value that avoids the warning (i.e. 7).
> Or is the warning intended to be part of the test output?
> 
> .timeout = 6 fails:
> 
> tst_test.c:1369: TINFO: Timeout per run is 0h 00m 06s
> tst_test.c:1304: TBROK: Test runtime too small!

This is one of the corner cases that probably needs to be handled
differently.

> .timeout >= 7 is ok:
> tst_test.c:1369: TINFO: Timeout per run is 0h 00m 07s
> tst_test.c:1318: TINFO: Max runtime per iteration 1s
> test_runtime01.c:15: TPASS: Timeout remaining: 0
> 
> ...
> > diff --git a/lib/tst_test.c b/lib/tst_test.c
> > index 7c9061d6d..23b52583a 100644
> > --- a/lib/tst_test.c
> > +++ b/lib/tst_test.c
> 
> > -unsigned int tst_timeout_remaining(void)
> > +#define RUNTIME_TIMEOUT_OFFSET 5
> => maybe define it as 6 to allow running with .timeout = 6?
> 
> > +#define RUNTIME_TIMEOUT_SCALE  0.9
> > +
> > +static unsigned int timeout_to_runtime(void)
> > +{
> > +	if (results->timeout <= RUNTIME_TIMEOUT_OFFSET) {
> > +		tst_res(TWARN, "Timeout too short for runtime offset %i!",
> > +		        RUNTIME_TIMEOUT_OFFSET);
> > +		return 1;
> > +	}
> > +
> > +	return (results->timeout - RUNTIME_TIMEOUT_OFFSET) * RUNTIME_TIMEOUT_SCALE;
> > +}
> > +
> > +static unsigned int runtime_to_timeout(unsigned int runtime)
> > +{
> > +	return runtime / RUNTIME_TIMEOUT_SCALE + RUNTIME_TIMEOUT_OFFSET;
> > +}
> ...
> 
> Also test_runtime02.c fails; is that intended?
> tst_test.c:1374: TINFO: Timeout per run is 0h 00m 05s
> tst_test.c:1265 timeout_to_runtime(): results->timeout: 5
> tst_test.c:1266 timeout_to_runtime(): RUNTIME_TIMEOUT_OFFSET: 5
> tst_test.c:1268: TWARN: Timeout too short for runtime offset 5!
> tst_test.c:1314: TINFO: runtime > timeout, adjusting test timeout to 6
> tst_test.c:1321: TBROK: Test runtime too small!

Yes, this is also supposed to fail; it's written in the test comment as
well...
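
For reference, with LTP_MAX_TEST_RUNTIME=10 the numbers work out the way the
comment in test_runtime02.c describes:

	base runtime  = 10 (taken from the environment)
	timeout check = 10 / 0.9 + 5 = 16 > 5, so the timeout is raised to 16
	per iteration = 10 / 2 variants = 5s per fork_testrun() instance

so both variants get a non-zero runtime and the TBROK goes away.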
Cyril Hrubis June 9, 2021, 1:43 p.m. UTC | #3
Hi!
> > Another option would be to redefine the timeout to be a timeout per
> > fork_testrun() instance, which would make the approach slightly easier
> > in some places; however, that would mean changing the default test
> > timeout to a much smaller value and annotating all long-running tests.
> IMHO a slightly better approach.
>
> > Hmm, I guess that annotating all long-running tests and changing the default
> > timeout may be a good idea regardless of this approach.
> +1

I can send a v2 if this approach ends up being preferred...
Petr Vorel June 9, 2021, 2:05 p.m. UTC | #4
Hi Cyril,

> Hi!
> > >   - the scaled value is then divided, if needed, so that we end up with
> > >     a correct maximal runtime for an instance of a test, i.e. we have a
> > >     max runtime for a fork_testrun() instance that is inside the
> > >     .test_variants and .all_filesystems loops
> > Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
> > the runtime for each filesystem depends on the number of filesystems? E.g.
> > writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
> > filesystems, but about 1 min with all 9 filesystems. We should document that
> > the author should expect the maximum number of filesystems. What happens to
> > these values in the (long) future, when LTP supports a new filesystem (or
> > drops some)? This was the reason for me to define the value for "Max runtime
> > per iteration" in the test, not for the whole run.

> That's one of the downsides of this approach.

> The reason why I chose this approach is that you can set an upper cap for
> the whole test run and not only for a single filesystem/variant.

> Also this way the test timeout corresponds to the maximal test runtime.

> Another option would be to redefine the timeout to be a timeout per
> fork_testrun() instance, which would make the approach slightly easier
> in some places; however, that would mean changing the default test
> timeout to a much smaller value and annotating all long-running tests.
IMHO a slightly better approach.

> Hmm, I guess that annotating all long-running tests and changing the default
> timeout may be a good idea regardless of this approach.
+1

> > >   - this also allows us to control the test max runtime by setting the
> > >     test timeout

> > > * The maximal runtime, for the whole test, can be passed down to the test

> > >   - If LTP_MAX_TEST_RUNTIME is set in the test environment it's used as
> > >     the base for max_runtime instead of the scaled-down timeout; it's
> > >     still divided into pieces so that we have a correct runtime cap for
> > >     a fork_testrun() instance
> > LTP_MAX_TEST_RUNTIME should go to doc/user-guide.txt. I suppose you're
> > waiting for feedback before writing the docs.

> Yes, I do not consider this to be a finished patchset and I do expect that
> it will need some changes.
Sure.

> > >   - We also make sure that the test timeout is adjusted, if needed, to
> > >     accommodate the new test runtime cap, i.e. if the upscaled runtime
> > >     is greater than the timeout, the test timeout is adjusted

> > > Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
> > > ---
> > >  include/tst_fuzzy_sync.h                      |  4 +-
> > >  include/tst_test.h                            |  7 +-
> > >  lib/newlib_tests/.gitignore                   |  3 +-
> > >  .../{test18.c => test_runtime01.c}            |  7 +-
> > >  lib/newlib_tests/test_runtime02.c             | 31 +++++++++
> > >  lib/tst_test.c                                | 64 ++++++++++++++++++-
> > >  testcases/kernel/crypto/af_alg02.c            |  2 +-
> > >  testcases/kernel/crypto/pcrypt_aead01.c       |  2 +-
> > >  testcases/kernel/mem/mtest01/mtest01.c        |  6 +-
> > >  testcases/kernel/mem/mtest06/mmap1.c          | 13 ++--
> > >  .../kernel/syscalls/move_pages/move_pages12.c |  4 +-
> > >  11 files changed, 117 insertions(+), 26 deletions(-)
> > >  rename lib/newlib_tests/{test18.c => test_runtime01.c} (59%)
> > +1 for a test description instead of a plain number.

> > ...
> > > +++ b/lib/newlib_tests/test_runtime01.c
> > ...
> > >  static void run(void)
> > >  {
> > > -	do {
> > > +	while (tst_remaining_runtime())
> > >  		sleep(1);
> > > -	} while (tst_timeout_remaining() >= 4);

> > > -	tst_res(TPASS, "Timeout remaining: %d", tst_timeout_remaining());
> > > +	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());

> > There is a warning:
> > tst_test.c:1369: TINFO: Timeout per run is 0h 00m 05s
> > tst_test.c:1265: TWARN: Timeout too short for runtime offset 5!
> > tst_test.c:1309: TINFO: runtime > timeout, adjusting test timeout to 6
> > tst_test.c:1318: TINFO: Max runtime per iteration 1s
> > test_runtime01.c:15: TPASS: Timeout remaining: 0

> This is expected.

> > Maybe the test should use a value that avoids the warning (i.e. 7).
> > Or is the warning intended to be part of the test output?

> > .timeout = 6 fails:

> > tst_test.c:1369: TINFO: Timeout per run is 0h 00m 06s
> > tst_test.c:1304: TBROK: Test runtime too small!

> This is one of the corner cases that probably needs to be handled
> differently.
+1

...
> > Also test_runtime02.c fails; is that intended?
> > tst_test.c:1374: TINFO: Timeout per run is 0h 00m 05s
> > tst_test.c:1265 timeout_to_runtime(): results->timeout: 5
> > tst_test.c:1266 timeout_to_runtime(): RUNTIME_TIMEOUT_OFFSET: 5
> > tst_test.c:1268: TWARN: Timeout too short for runtime offset 5!
> > tst_test.c:1314: TINFO: runtime > timeout, adjusting test timeout to 6
> > tst_test.c:1321: TBROK: Test runtime too small!

> Yes, this is also supposed to fail; it's written in the test comment as
> well...
I'm sorry to have overlooked this. I hope I'll finish test-c-run soon, so that
we can continue with expected test output for API tests.

Kind regards,
Petr
Richard Palethorpe June 9, 2021, 2:44 p.m. UTC | #5
Hello Cyril,

Cyril Hrubis <chrubis@suse.cz> writes:

> This is an attempt at handling a cap on a test runtime correctly. It
> consists of several pieces, namely:
>
> * The idea of a test's maximal runtime is decoupled from the test timeout
>
>   - the maximal runtime is simply a cap on how long an instance of a
>     test should run; it's mainly used by CVE reproducers that attempt to
>     trigger a race until they run out of time, such a test may exit
>     sooner but must not run longer than the cap
>
>   - tst_timeout_remaining() is replaced with tst_remaining_runtime(),
>     which accounts correctly for .test_variants and .all_filesystems
>
> * The default value for a test's max_runtime is computed from the test timeout
>
>   - we scale the timeout down so that there is some room for the test to
>     properly exit once the test runtime has been exhausted; this is our
>     base for a test's max_runtime
>
>   - the scaled value is then divided, if needed, so that we end up with
>     a correct maximal runtime for an instance of a test, i.e. we have a
>     max runtime for a fork_testrun() instance that is inside the
>     .test_variants and .all_filesystems loops
>
>   - this also allows us to control the test max runtime by setting the
>     test timeout
>
> * The maximal runtime, for the whole test, can be passed down to the test
>
>   - If LTP_MAX_TEST_RUNTIME is set in the test environment it's used as
>     the base for max_runtime instead of the scaled-down timeout; it's
>     still divided into pieces so that we have a correct runtime cap for
>     a fork_testrun() instance
>
>   - We also make sure that the test timeout is adjusted, if needed, to
>     accommodate the new test runtime cap, i.e. if the upscaled runtime
>     is greater than the timeout, the test timeout is adjusted
>
> Signed-off-by: Cyril Hrubis <chrubis@suse.cz>
> ---
>  include/tst_fuzzy_sync.h                      |  4 +-
>  include/tst_test.h                            |  7 +-
>  lib/newlib_tests/.gitignore                   |  3 +-
>  .../{test18.c => test_runtime01.c}            |  7 +-
>  lib/newlib_tests/test_runtime02.c             | 31 +++++++++
>  lib/tst_test.c                                | 64 ++++++++++++++++++-
>  testcases/kernel/crypto/af_alg02.c            |  2 +-
>  testcases/kernel/crypto/pcrypt_aead01.c       |  2 +-
>  testcases/kernel/mem/mtest01/mtest01.c        |  6 +-
>  testcases/kernel/mem/mtest06/mmap1.c          | 13 ++--
>  .../kernel/syscalls/move_pages/move_pages12.c |  4 +-
>  11 files changed, 117 insertions(+), 26 deletions(-)
>  rename lib/newlib_tests/{test18.c => test_runtime01.c} (59%)
>  create mode 100644 lib/newlib_tests/test_runtime02.c
>
> diff --git a/include/tst_fuzzy_sync.h b/include/tst_fuzzy_sync.h
> index 8f97bb8f6..93adbb909 100644
> --- a/include/tst_fuzzy_sync.h
> +++ b/include/tst_fuzzy_sync.h
> @@ -319,7 +319,7 @@ static void tst_fzsync_pair_reset(struct tst_fzsync_pair *pair,
>  		SAFE_PTHREAD_CREATE(&pair->thread_b, 0, tst_fzsync_thread_wrapper, &wrap_run_b);
>  	}
>  
> -	pair->exec_time_start = (float)tst_timeout_remaining();
> +	pair->exec_time_start = (float)tst_remaining_runtime();
>  }
>  
>  /**
> @@ -663,7 +663,7 @@ static inline void tst_fzsync_wait_b(struct tst_fzsync_pair *pair)
>  static inline int tst_fzsync_run_a(struct tst_fzsync_pair *pair)
>  {
>  	int exit = 0;
> -	float rem_p = 1 - tst_timeout_remaining() / pair->exec_time_start;
> +	float rem_p = 1 - tst_remaining_runtime() / pair->exec_time_start;
>  
>  	if ((pair->exec_time_p * SAMPLING_SLICE < rem_p)
>  		&& (pair->sampling > 0)) {
> diff --git a/include/tst_test.h b/include/tst_test.h
> index 6ad355506..491fedc3e 100644
> --- a/include/tst_test.h
> +++ b/include/tst_test.h
> @@ -290,7 +290,12 @@ const char *tst_strsig(int sig);
>   */
>  const char *tst_strstatus(int status);
>  
> -unsigned int tst_timeout_remaining(void);
> +/*
> + * Returns remaining test runtime. Test that runs for more than a few seconds
> + * should check if they should exit by calling this function regularly.
> + */
> +unsigned int tst_remaining_runtime(void);
> +
>  unsigned int tst_multiply_timeout(unsigned int timeout);
>  void tst_set_timeout(int timeout);
>  
> diff --git a/lib/newlib_tests/.gitignore b/lib/newlib_tests/.gitignore
> index b95ead2c2..464d98aed 100644
> --- a/lib/newlib_tests/.gitignore
> +++ b/lib/newlib_tests/.gitignore
> @@ -23,7 +23,6 @@ tst_safe_fileops
>  tst_res_hexd
>  tst_strstatus
>  test17
> -test18
>  test19
>  test20
>  test22
> @@ -43,3 +42,5 @@ test_macros02
>  test_macros03
>  tst_fuzzy_sync01
>  tst_fuzzy_sync02
> +test_runtime01
> +test_runtime02
> diff --git a/lib/newlib_tests/test18.c b/lib/newlib_tests/test_runtime01.c
> similarity index 59%
> rename from lib/newlib_tests/test18.c
> rename to lib/newlib_tests/test_runtime01.c
> index 026435d7d..56f5ac44e 100644
> --- a/lib/newlib_tests/test18.c
> +++ b/lib/newlib_tests/test_runtime01.c
> @@ -1,6 +1,6 @@
>  // SPDX-License-Identifier: GPL-2.0-or-later
>  /*
> - * Copyright (c) 2018, Linux Test Project
> + * Copyright (c) 2021, Linux Test Project
>   */
>  
>  #include <stdlib.h>
> @@ -9,11 +9,10 @@
>  
>  static void run(void)
>  {
> -	do {
> +	while (tst_remaining_runtime())
>  		sleep(1);
> -	} while (tst_timeout_remaining() >= 4);
>  
> -	tst_res(TPASS, "Timeout remaining: %d", tst_timeout_remaining());
> +	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());
>  }
>  
>  static struct tst_test test = {
> diff --git a/lib/newlib_tests/test_runtime02.c b/lib/newlib_tests/test_runtime02.c
> new file mode 100644
> index 000000000..12e4813ef
> --- /dev/null
> +++ b/lib/newlib_tests/test_runtime02.c
> @@ -0,0 +1,31 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * Copyright (c) 2021, Linux Test Project
> + */
> +/*
> + * This test is set up so that the timeout is not long enough to guarantee
> + * enough runtime for two iterations, i.e. the timeout without offset and after
> + * scaling is too small and the tests ends up with TBROK.
> + *
> + * You can fix this by exporting LTP_MAX_TEST_RUNTIME=10 before executing the
> + * test, in that case the runtime would be divided between iterations and timeout
> + * adjusted so that it provides enough safeguards for the test to finish.
> + */
> +
> +#include <stdlib.h>
> +#include <unistd.h>
> +#include "tst_test.h"
> +
> +static void run(void)
> +{
> +	while (tst_remaining_runtime())
> +		sleep(1);
> +
> +	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());
> +}
> +
> +static struct tst_test test = {
> +	.test_all = run,
> +	.timeout = 5,
> +	.test_variants = 2
> +};
> diff --git a/lib/tst_test.c b/lib/tst_test.c
> index 7c9061d6d..23b52583a 100644
> --- a/lib/tst_test.c
> +++ b/lib/tst_test.c
> @@ -62,6 +62,7 @@ struct results {
>  	int warnings;
>  	int broken;
>  	unsigned int timeout;
> +	unsigned int max_runtime;
>  };
>  
>  static struct results *results;
> @@ -1255,17 +1256,74 @@ static void sigint_handler(int sig LTP_ATTRIBUTE_UNUSED)
>  	}
>  }
>  
> -unsigned int tst_timeout_remaining(void)
> +#define RUNTIME_TIMEOUT_OFFSET 5
> +#define RUNTIME_TIMEOUT_SCALE  0.9
> +
> +static unsigned int timeout_to_runtime(void)
> +{
> +	if (results->timeout <= RUNTIME_TIMEOUT_OFFSET) {
> +		tst_res(TWARN, "Timeout too short for runtime offset %i!",
> +		        RUNTIME_TIMEOUT_OFFSET);
> +		return 1;
> +	}
> +
> +	return (results->timeout - RUNTIME_TIMEOUT_OFFSET) * RUNTIME_TIMEOUT_SCALE;
> +}
> +
> +static unsigned int runtime_to_timeout(unsigned int runtime)
> +{
> +	return runtime / RUNTIME_TIMEOUT_SCALE + RUNTIME_TIMEOUT_OFFSET;
> +}
> +
> +static unsigned int divide_runtime(unsigned int runtime)
> +{
> +	if (tst_test->test_variants)
> +		runtime = 1.00 * runtime / tst_test->test_variants;
> +
> +	if (tst_test->all_filesystems)
> +		runtime = 1.00 * runtime / tst_fs_max_types();
> +
> +	return runtime;
> +}
> +
> +unsigned int tst_remaining_runtime(void)
>  {
>  	static struct timespec now;
>  	unsigned int elapsed;
>  
> +	if (!results->max_runtime) {
> +		const char *runtime = getenv("LTP_MAX_TEST_RUNTIME");
> +
> +		if (runtime) {
> +			results->max_runtime = atoi(runtime);

POSIX says atoi is deprecated. It should probably be strtoul().
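
Something along these lines, perhaps (just a sketch; it assumes limits.h and
errno.h are available in tst_test.c, and the exact checks are up for debate):

	const char *runtime = getenv("LTP_MAX_TEST_RUNTIME");

	if (runtime) {
		char *end;
		unsigned long val;

		errno = 0;
		val = strtoul(runtime, &end, 10);

		/* reject empty values, trailing garbage and overflow */
		if (end == runtime || *end != '\0' || errno || val > UINT_MAX)
			tst_brk(TBROK, "Invalid LTP_MAX_TEST_RUNTIME '%s'",
				runtime);

		results->max_runtime = val;
	}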

> +		} else {
> +			results->max_runtime = timeout_to_runtime();
> +		}
> +
> +		if (!results->max_runtime)
> +			tst_brk(TBROK, "Test runtime too small!");
> +
> +
> +		if (runtime_to_timeout(results->max_runtime) > results->timeout) {

Maybe we should rename the "results" struct?

It is turning into general shared test state.
Martin Doucha June 11, 2021, 3:07 p.m. UTC | #6
On 09. 06. 21 15:32, Cyril Hrubis wrote:
> Hi!
>>>   - the scaled value is then divided, if needed, so that we end up with
>>>     a correct maximal runtime for an instance of a test, i.e. we have a
>>>     max runtime for a fork_testrun() instance that is inside the
>>>     .test_variants and .all_filesystems loops
>> Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
>> the runtime for each filesystem depends on the number of filesystems? E.g.
>> writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
>> filesystems, but about 1 min with all 9 filesystems. We should document that
>> the author should expect the maximum number of filesystems. What happens to
>> these values in the (long) future, when LTP supports a new filesystem (or
>> drops some)? This was the reason for me to define the value for "Max runtime
>> per iteration" in the test, not for the whole run.
> 
> That's one of the downsides of this approach.
> 
> The reason why I chose this approach is that you can set an upper cap for
> the whole test run and not only for a single filesystem/variant.
>
> Also this way the test timeout corresponds to the maximal test runtime.
>
> Another option would be to redefine the timeout to be a timeout per
> fork_testrun() instance, which would make the approach slightly easier
> in some places; however, that would mean changing the default test
> timeout to a much smaller value and annotating all long-running tests.
>
> Hmm, I guess that annotating all long-running tests and changing the default
> timeout may be a good idea regardless of this approach.

Some fuzzy sync tests have a long run time by design because running too
few loops on broken systems will not trigger the bug. Limiting the maximum
program execution time may be useful for quick smoke tests but it's not
usable for real test runs where we want reliable reproducibility.

I'd prefer adding a command line option to tst_test (e.g. -m) that would
just print test metadata, including the total timeout of all fork_testrun()
subtests, and exit. Static metadata is not a sufficient solution for
this because the same test binary may have different runtimes on
different system configurations, for example because the list of
available filesystems may change arbitrarily between test runs. It'd be
great if test runners other than runltp-ng could get a straightforward
timeout number without reimplementing a calculation that may change in
future versions of LTP.
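
A hypothetical sketch of what that could print; the -m handling, the
print_metadata() helper and the field names are all made up here, purely to
illustrate the idea:

	/* hypothetical: dump the computed values and exit instead of running */
	static void print_metadata(void)
	{
		printf("timeout=%u\n", results->timeout);
		printf("max_runtime=%u\n", results->max_runtime);
		exit(0);
	}

A runner could then invoke the binary with -m on the target system and read
back the effective numbers instead of recomputing them.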
Petr Vorel June 13, 2021, 7:44 p.m. UTC | #7
Hi all,

> On 09. 06. 21 15:32, Cyril Hrubis wrote:
> > Hi!
> >>>   - the scaled value is then divided, if needed, so that we end up with
> >>>     a correct maximal runtime for an instance of a test, i.e. we have a
> >>>     max runtime for a fork_testrun() instance that is inside the
> >>>     .test_variants and .all_filesystems loops
> >> Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
> >> the runtime for each filesystem depends on the number of filesystems? E.g.
> >> writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
> >> filesystems, but about 1 min with all 9 filesystems. We should document that
> >> the author should expect the maximum number of filesystems. What happens to
> >> these values in the (long) future, when LTP supports a new filesystem (or
> >> drops some)? This was the reason for me to define the value for "Max
> >> runtime per iteration" in the test, not for the whole run.

> > That's one of the downsides of this approach.

> > The reason why I chose this approach is that you can set an upper cap for
> > the whole test run and not only for a single filesystem/variant.

> > Also this way the test timeout corresponds to the maximal test runtime.

> > Another option would be to redefine the timeout to be a timeout per
> > fork_testrun() instance, which would make the approach slightly easier
> > in some places; however, that would mean changing the default test
> > timeout to a much smaller value and annotating all long-running tests.

> > Hmm, I guess that annotating all long-running tests and changing the default
> > timeout may be a good idea regardless of this approach.

> Some fuzzy sync tests have a long run time by design because running too
> few loops on broken systems will not trigger the bug. Limiting the maximum
> program execution time may be useful for quick smoke tests but it's not
> usable for real test runs where we want reliable reproducibility.
Interesting.

> I'd prefer adding a command line option to tst_test (e.g. -m) that would
> just print test metadata, including the total timeout of all fork_testrun()
> subtests, and exit. Static metadata is not a sufficient solution for
FYI I suggested this some time ago in a private chat with Cyril; he mentioned
that there were some problems with it. IMHO it'd be great to implement it.

> this because the same test binary may have different runtimes on
> different system configurations, for example because the list of
> available filesystems may change arbitrarily between test runs. It'd be
> great if test runners other than runltp-ng could get a straightforward
> timeout number without reimplementing a calculation that may change in
> future versions of LTP.
+1

Kind regards,
Petr
Richard Palethorpe June 14, 2021, 8:02 a.m. UTC | #8
Hello,

Petr Vorel <pvorel@suse.cz> writes:

> Hi all,
>
>> On 09. 06. 21 15:32, Cyril Hrubis wrote:
>> > Hi!
>> >>>   - the scaled value is then divided, if needed, so that we end up with
>> >>>     a correct maximal runtime for an instance of a test, i.e. we have a
>> >>>     max runtime for a fork_testrun() instance that is inside the
>> >>>     .test_variants and .all_filesystems loops
>> >> Now "Max runtime per iteration" can vary, right? I.e. with .all_filesystems
>> >> the runtime for each filesystem depends on the number of filesystems? E.g.
>> >> writev03.c with .timeout = 600 gets 5 min (300s) per filesystem on 2
>> >> filesystems, but about 1 min with all 9 filesystems. We should document that
>> >> the author should expect the maximum number of filesystems. What happens to
>> >> these values in the (long) future, when LTP supports a new filesystem (or
>> >> drops some)? This was the reason for me to define the value for "Max
>> >> runtime per iteration" in the test, not for the whole run.
>
>> > That's one of the downsides of this approach.
>
>> > The reason why I chose this approach is that you can set an upper cap for
>> > the whole test run and not only for a single filesystem/variant.
>
>> > Also this way the test timeout corresponds to the maximal test runtime.
>
>> > Another option would be to redefine the timeout to be a timeout per
>> > fork_testrun() instance, which would make the approach slightly easier
>> > in some places; however, that would mean changing the default test
>> > timeout to a much smaller value and annotating all long-running tests.
>
>> > Hmm, I guess that annotating all long-running tests and changing the default
>> > timeout may be a good idea regardless of this approach.
>
>> Some fuzzy sync tests have a long run time by design because running too
>> few loops on broken systems will not trigger the bug. Limiting the maximum
>> program execution time may be useful for quick smoke tests but it's not
>> usable for real test runs where we want reliable reproducibility.
> Interesting.
>
>> I'd prefer adding a command line option to tst_test (e.g. -m) that would
>> just print test metadata, including the total timeout of all fork_testrun()
>> subtests, and exit. Static metadata is not a sufficient solution for
> FYI I suggested this some time ago in a private chat with Cyril; he mentioned
> that there were some problems with it. IMHO it'd be great to implement
> it.

Yes, it has been debated before. It may be an issue when cross
compiling, and also when verifying whether a test should really produce
TCONF. I don't think it can be the primary way of extracting metadata.
OTOH, it really makes sense for the test to report some info to the test
runner, including the expected runtime and what environment it can see.

The test runner can compare this data with its expectations: for
example, the test reports that there are X NUMA nodes, but the runner
thinks there should be Y NUMA nodes. This can help to verify people's
configurations.

>
>> this because the same test binary may have different runtimes on
>> different system configurations, for example because the list of
>> available filesystems may change arbitrarily between test runs. It'd be
>> great if test runners other than runltp-ng could get a straightforward
>> timeout number without reimplementing a calculation that may change in
>> future versions of LTP.

Other possibilities are that a test takes much longer to run on a single
core or with a larger page size. I have also theorised before that fuzzy
sync could measure the first few loops and tune the timeouts based on
that. I don't think it is necessary, but that can change.

Patch

diff --git a/include/tst_fuzzy_sync.h b/include/tst_fuzzy_sync.h
index 8f97bb8f6..93adbb909 100644
--- a/include/tst_fuzzy_sync.h
+++ b/include/tst_fuzzy_sync.h
@@ -319,7 +319,7 @@  static void tst_fzsync_pair_reset(struct tst_fzsync_pair *pair,
 		SAFE_PTHREAD_CREATE(&pair->thread_b, 0, tst_fzsync_thread_wrapper, &wrap_run_b);
 	}
 
-	pair->exec_time_start = (float)tst_timeout_remaining();
+	pair->exec_time_start = (float)tst_remaining_runtime();
 }
 
 /**
@@ -663,7 +663,7 @@  static inline void tst_fzsync_wait_b(struct tst_fzsync_pair *pair)
 static inline int tst_fzsync_run_a(struct tst_fzsync_pair *pair)
 {
 	int exit = 0;
-	float rem_p = 1 - tst_timeout_remaining() / pair->exec_time_start;
+	float rem_p = 1 - tst_remaining_runtime() / pair->exec_time_start;
 
 	if ((pair->exec_time_p * SAMPLING_SLICE < rem_p)
 		&& (pair->sampling > 0)) {
diff --git a/include/tst_test.h b/include/tst_test.h
index 6ad355506..491fedc3e 100644
--- a/include/tst_test.h
+++ b/include/tst_test.h
@@ -290,7 +290,12 @@  const char *tst_strsig(int sig);
  */
 const char *tst_strstatus(int status);
 
-unsigned int tst_timeout_remaining(void);
+/*
+ * Returns remaining test runtime. Test that runs for more than a few seconds
+ * should check if they should exit by calling this function regularly.
+ */
+unsigned int tst_remaining_runtime(void);
+
 unsigned int tst_multiply_timeout(unsigned int timeout);
 void tst_set_timeout(int timeout);
 
diff --git a/lib/newlib_tests/.gitignore b/lib/newlib_tests/.gitignore
index b95ead2c2..464d98aed 100644
--- a/lib/newlib_tests/.gitignore
+++ b/lib/newlib_tests/.gitignore
@@ -23,7 +23,6 @@  tst_safe_fileops
 tst_res_hexd
 tst_strstatus
 test17
-test18
 test19
 test20
 test22
@@ -43,3 +42,5 @@  test_macros02
 test_macros03
 tst_fuzzy_sync01
 tst_fuzzy_sync02
+test_runtime01
+test_runtime02
diff --git a/lib/newlib_tests/test18.c b/lib/newlib_tests/test_runtime01.c
similarity index 59%
rename from lib/newlib_tests/test18.c
rename to lib/newlib_tests/test_runtime01.c
index 026435d7d..56f5ac44e 100644
--- a/lib/newlib_tests/test18.c
+++ b/lib/newlib_tests/test_runtime01.c
@@ -1,6 +1,6 @@ 
 // SPDX-License-Identifier: GPL-2.0-or-later
 /*
- * Copyright (c) 2018, Linux Test Project
+ * Copyright (c) 2021, Linux Test Project
  */
 
 #include <stdlib.h>
@@ -9,11 +9,10 @@ 
 
 static void run(void)
 {
-	do {
+	while (tst_remaining_runtime())
 		sleep(1);
-	} while (tst_timeout_remaining() >= 4);
 
-	tst_res(TPASS, "Timeout remaining: %d", tst_timeout_remaining());
+	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());
 }
 
 static struct tst_test test = {
diff --git a/lib/newlib_tests/test_runtime02.c b/lib/newlib_tests/test_runtime02.c
new file mode 100644
index 000000000..12e4813ef
--- /dev/null
+++ b/lib/newlib_tests/test_runtime02.c
@@ -0,0 +1,31 @@ 
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Copyright (c) 2021, Linux Test Project
+ */
+/*
+ * This test is set up so that the timeout is not long enough to guarantee
+ * enough runtime for two iterations, i.e. the timeout without offset and after
+ * scaling is too small and the tests ends up with TBROK.
+ *
+ * You can fix this by exporting LTP_MAX_TEST_RUNTIME=10 before executing the
+ * test, in that case the runtime would be divided between iterations and timeout
+ * adjusted so that it provides enough safeguards for the test to finish.
+ */
+
+#include <stdlib.h>
+#include <unistd.h>
+#include "tst_test.h"
+
+static void run(void)
+{
+	while (tst_remaining_runtime())
+		sleep(1);
+
+	tst_res(TPASS, "Timeout remaining: %d", tst_remaining_runtime());
+}
+
+static struct tst_test test = {
+	.test_all = run,
+	.timeout = 5,
+	.test_variants = 2
+};
diff --git a/lib/tst_test.c b/lib/tst_test.c
index 7c9061d6d..23b52583a 100644
--- a/lib/tst_test.c
+++ b/lib/tst_test.c
@@ -62,6 +62,7 @@  struct results {
 	int warnings;
 	int broken;
 	unsigned int timeout;
+	unsigned int max_runtime;
 };
 
 static struct results *results;
@@ -1255,17 +1256,74 @@  static void sigint_handler(int sig LTP_ATTRIBUTE_UNUSED)
 	}
 }
 
-unsigned int tst_timeout_remaining(void)
+#define RUNTIME_TIMEOUT_OFFSET 5
+#define RUNTIME_TIMEOUT_SCALE  0.9
+
+static unsigned int timeout_to_runtime(void)
+{
+	if (results->timeout <= RUNTIME_TIMEOUT_OFFSET) {
+		tst_res(TWARN, "Timeout too short for runtime offset %i!",
+		        RUNTIME_TIMEOUT_OFFSET);
+		return 1;
+	}
+
+	return (results->timeout - RUNTIME_TIMEOUT_OFFSET) * RUNTIME_TIMEOUT_SCALE;
+}
+
+static unsigned int runtime_to_timeout(unsigned int runtime)
+{
+	return runtime / RUNTIME_TIMEOUT_SCALE + RUNTIME_TIMEOUT_OFFSET;
+}
+
+static unsigned int divide_runtime(unsigned int runtime)
+{
+	if (tst_test->test_variants)
+		runtime = 1.00 * runtime / tst_test->test_variants;
+
+	if (tst_test->all_filesystems)
+		runtime = 1.00 * runtime / tst_fs_max_types();
+
+	return runtime;
+}
+
+unsigned int tst_remaining_runtime(void)
 {
 	static struct timespec now;
 	unsigned int elapsed;
 
+	if (!results->max_runtime) {
+		const char *runtime = getenv("LTP_MAX_TEST_RUNTIME");
+
+		if (runtime) {
+			results->max_runtime = atoi(runtime);
+		} else {
+			results->max_runtime = timeout_to_runtime();
+		}
+
+		if (!results->max_runtime)
+			tst_brk(TBROK, "Test runtime too small!");
+
+
+		if (runtime_to_timeout(results->max_runtime) > results->timeout) {
+			results->timeout = runtime_to_timeout(results->max_runtime);
+			tst_res(TINFO, "runtime > timeout, adjusting test timeout to %u", results->timeout);
+			heartbeat();
+		}
+
+		results->max_runtime = divide_runtime(results->max_runtime);
+
+		if (!results->max_runtime)
+			tst_brk(TBROK, "Test runtime too small!");
+
+		tst_res(TINFO, "Max runtime per iteration %us", results->max_runtime);
+	}
+
 	if (tst_clock_gettime(CLOCK_MONOTONIC, &now))
 		tst_res(TWARN | TERRNO, "tst_clock_gettime() failed");
 
 	elapsed = (tst_timespec_diff_ms(now, tst_start_time) + 500) / 1000;
-	if (results->timeout > elapsed)
-		return results->timeout - elapsed;
+	if (results->max_runtime > elapsed)
+		return results->max_runtime - elapsed;
 
 	return 0;
 }
diff --git a/testcases/kernel/crypto/af_alg02.c b/testcases/kernel/crypto/af_alg02.c
index 31d30777c..26f184854 100644
--- a/testcases/kernel/crypto/af_alg02.c
+++ b/testcases/kernel/crypto/af_alg02.c
@@ -61,7 +61,7 @@  static void run(void)
 	TST_CHECKPOINT_WAIT(0);
 
 	while (pthread_kill(thr, 0) != ESRCH) {
-		if (tst_timeout_remaining() <= 10) {
+		if (!tst_remaining_runtime()) {
 			pthread_cancel(thr);
 			tst_brk(TBROK,
 				"Timed out while reading from request socket.");
diff --git a/testcases/kernel/crypto/pcrypt_aead01.c b/testcases/kernel/crypto/pcrypt_aead01.c
index 0609af9f6..7fe6aed14 100644
--- a/testcases/kernel/crypto/pcrypt_aead01.c
+++ b/testcases/kernel/crypto/pcrypt_aead01.c
@@ -55,7 +55,7 @@  void run(void)
 		if (TST_RET)
 			tst_brk(TBROK | TRERRNO, "del_alg");
 
-		if (tst_timeout_remaining() < 10) {
+		if (!tst_remaining_runtime()) {
 			tst_res(TINFO, "Time limit reached, stopping at "
 				"%d iterations", i);
 			break;
diff --git a/testcases/kernel/mem/mtest01/mtest01.c b/testcases/kernel/mem/mtest01/mtest01.c
index 9676ea4b5..b205722e3 100644
--- a/testcases/kernel/mem/mtest01/mtest01.c
+++ b/testcases/kernel/mem/mtest01/mtest01.c
@@ -155,10 +155,10 @@  static void child_loop_alloc(unsigned long long alloc_bytes)
 	}
 	if (dowrite)
 		tst_res(TINFO, "... [t=%d] %lu bytes allocated and used in child %d",
-				tst_timeout_remaining(), bytecount, getpid());
+				tst_remaining_runtime(), bytecount, getpid());
 	else
 		tst_res(TINFO, "... [t=%d] %lu bytes allocated only in child %d",
-				tst_timeout_remaining(), bytecount, getpid());
+				tst_remaining_runtime(), bytecount, getpid());
 
 	kill(getppid(), SIGRTMIN);
 	raise(SIGSTOP);
@@ -195,7 +195,7 @@  static void mem_test(void)
 
 	/* wait in the loop for all children finish allocating */
 	while (children_done < pid_cntr) {
-		if (tst_timeout_remaining() < STOP_THRESHOLD) {
+		if (!tst_remaining_runtime()) {
 			tst_res(TWARN,
 				"the remaininig time is not enough for testing");
 
diff --git a/testcases/kernel/mem/mtest06/mmap1.c b/testcases/kernel/mem/mtest06/mmap1.c
index 10c47c35c..699c20988 100644
--- a/testcases/kernel/mem/mtest06/mmap1.c
+++ b/testcases/kernel/mem/mtest06/mmap1.c
@@ -35,9 +35,6 @@ 
 #define GIGABYTE (1L*1024*1024*1024)
 #define TEST_FILENAME "ashfile"
 
-/* seconds remaining before reaching timeout */
-#define STOP_THRESHOLD 10
-
 #define PROGRESS_SEC 3
 
 static int file_size = 1024;
@@ -224,8 +221,8 @@  static void run(void)
 	pthread_t thid[2];
 	int start, last_update;
 
-	start = last_update = tst_timeout_remaining();
-	while (tst_timeout_remaining() > STOP_THRESHOLD) {
+	start = last_update = tst_remaining_runtime();
+	while (tst_remaining_runtime()) {
 		int fd = mkfile(file_size);
 
 		tst_atomic_store(0, &mapcnt);
@@ -240,11 +237,11 @@  static void run(void)
 
 		close(fd);
 
-		if (last_update - tst_timeout_remaining() >= PROGRESS_SEC) {
-			last_update = tst_timeout_remaining();
+		if (last_update - tst_remaining_runtime() >= PROGRESS_SEC) {
+			last_update = tst_remaining_runtime();
 			tst_res(TINFO, "[%03d] mapped: %lu, sigsegv hit: %lu, "
 				"threads spawned: %lu",
-				start - tst_timeout_remaining(),
+				start - tst_remaining_runtime(),
 				map_count, mapped_sigsegv_count,
 				threads_spawned);
 			tst_res(TINFO, "      repeated_reads: %ld, "
diff --git a/testcases/kernel/syscalls/move_pages/move_pages12.c b/testcases/kernel/syscalls/move_pages/move_pages12.c
index 220130f4b..fa45c41a5 100644
--- a/testcases/kernel/syscalls/move_pages/move_pages12.c
+++ b/testcases/kernel/syscalls/move_pages/move_pages12.c
@@ -153,7 +153,7 @@  static void do_test(unsigned int n)
 	void *ptr;
 	pid_t cpid = -1;
 	int status;
-	unsigned int twenty_percent = (tst_timeout_remaining() / 5);
+	unsigned int twenty_percent = (tst_remaining_runtime() / 5);
 
 	addr = SAFE_MMAP(NULL, tcases[n].tpages * hpsz, PROT_READ | PROT_WRITE,
 		MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
@@ -198,7 +198,7 @@  static void do_test(unsigned int n)
 
 		SAFE_MUNMAP(addr, tcases[n].tpages * hpsz);
 
-		if (tst_timeout_remaining() < twenty_percent)
+		if (tst_remaining_runtime() < twenty_percent)
 			break;
 	}