diff mbox series

Loosen the limits of time/tst-cpuclock1.

Message ID 20200828085852.1861153-1-stli@linux.ibm.com
State New
Series Loosen the limits of time/tst-cpuclock1.

Commit Message

Stefan Liebler Aug. 28, 2020, 8:58 a.m. UTC
Starting with commit 04deeaa9ea74b0679dfc9d9155a37b6425f19a9f
"Fix time/tst-cpuclock1 intermitent failures" (2020-07-11),
this test fails quite often on s390x/s390 with one or more of these errors:
"before - after" / "nanosleep time" / "dead - after" outside reasonable range.

On a z/VM or KVM guest the CPUs are shared between multiple guests.
Even on the LPAR (the KVM host) the CPUs are usually shared between multiple LPARs.
The CPUs defined for an LPAR or z/VM system can also have lower weights compared
to other LPARs, which lets the steal time grow further.

Usually I build (-j$(nproc)) and test (PARALLELMFLAGS="-j$(nproc)") glibc multiple
times, e.g. with different GCCs, on various LPARs or z/VM guests at the same time.
During this time, I've run the test 13500 times and observed the following failures:
~600x "before - after"
~60x "nanosleep time"
~70x "dead - after"

I've also observed a lot of "before - after" failures on an Intel KVM guest while
building/testing glibc on it.

The mentioned commit tightened the limits of the valid tv_nsec ranges:
"before - after" (expected: 500000000):
- 100000000 ... 600000000
+ 450000000 ... 550000000

"nanosleep time" (expected: 100000000):
- 100000000 ... 200000000
+ 090000000 ... 120000000

"dead - after" (expected: 100000000):
-           ... 200000000
+ 090000000 ... 120000000

The test itself forks a child process which runs chew_cpu (burning CPU time in
user and kernel space).  The parent process sleeps with nanosleep(0.5s) and
measures the child_clock time:

Comments

Florian Weimer Aug. 28, 2020, 12:29 p.m. UTC | #1
* Stefan Liebler via Libc-alpha:

> Starting with the commit 04deeaa9ea74b0679dfc9d9155a37b6425f19a9f "Fix
> time/tst-cpuclock1 intermitent failures" (2020-07-11), this test fails
> quite often on s390x/s390 with one/multiple of those: "before - after"
> / "nanosleep time" / "dead - after" outside reasonable range.

How much value do these cpuclock tests actually have?  Maybe we should
just remove them, given that their test objective is so difficult to
model in current execution environments.

Maybe we could measure clock discontinuities on the CPU-burning thread
and detect CPU stealing this way.  But I'm not sure if this would be
worth it.

Thanks,
Florian
Lucas A. M. Magalhaes Aug. 31, 2020, 12:57 p.m. UTC | #2
Quoting Florian Weimer via Libc-alpha (2020-08-28 09:29:45)
> * Stefan Liebler via Libc-alpha:
> 
> > Starting with the commit 04deeaa9ea74b0679dfc9d9155a37b6425f19a9f "Fix
> > time/tst-cpuclock1 intermitent failures" (2020-07-11), this test fails
> > quite often on s390x/s390 with one/multiple of those: "before - after"
> > / "nanosleep time" / "dead - after" outside reasonable range.
> 
> How much value do these cpuclock tests actually have?  Maybe we should
> just remove them, given that their test objective is so difficult to
> model in current execution environments.

AFAICS this test was moved from rt/ to time/ when the clock_* functions
were migrated to libc.so. IMHO this test lost its purpose during this
migration.  I can see why it was needed in rt/, with strict time
measurements, but in time/ it's just testing the clock_* API. In this case I don't
see a downside to relaxing the time requirements.

A good thing to add to this test is a comment at the beginning stating
its purpose. :)

> 
> Maybe we could measure clock discontinuities on the CPU-burning thread
> and detect CPU stealing this way.  But I'm not sure if this would be
> worth it.

What do we want to achieve with this?

---
Lucas A. M. Magalhães
Florian Weimer Aug. 31, 2020, 12:59 p.m. UTC | #3
* Lucas A. M. Magalhaes:

> Quoting Florian Weimer via Libc-alpha (2020-08-28 09:29:45)
>> * Stefan Liebler via Libc-alpha:
>> 
>> > Starting with the commit 04deeaa9ea74b0679dfc9d9155a37b6425f19a9f "Fix
>> > time/tst-cpuclock1 intermitent failures" (2020-07-11), this test fails
>> > quite often on s390x/s390 with one/multiple of those: "before - after"
>> > / "nanosleep time" / "dead - after" outside reasonable range.
>> 
>> How much value do these cpuclock tests actually have?  Maybe we should
>> just remove them, given that their test objective is so difficult to
>> model in current execution environments.
>
> AFAICS this test was moved from rt/ to time/ when the clock_* functions
> were migrated to libc.so. IMHO this test lost its purpose during this
> migration.  I can see why it was needed in rt/, with strict time
> measurements, but in time/ it's just testing the clock_* API. In this case I don't
> see a downside to relaxing the time requirements.

The location in the tree does not impact the test objectives.  The move
was done to reflect that for quite a few releases now, clock_gettime
has been provided by libc, not librt.

>> Maybe we could measure clock discontinuities on the CPU-burning thread
>> and detect CPU stealing this way.  But I'm not sure if this would be
>> worth it.
>
> What do we want to achieve with this?

I think the goal is to ensure that glibc and the kernel agree about the
definition of the clocks, i.e. that the clock is indeed a CPU time clock
and does not measure wall-clock time.

Thanks,
Florian
Stefan Liebler Sept. 2, 2020, 4:10 p.m. UTC | #4
On 8/31/20 2:59 PM, Florian Weimer wrote:
> * Lucas A. M. Magalhaes:
> 
>> Quoting Florian Weimer via Libc-alpha (2020-08-28 09:29:45)
>>> * Stefan Liebler via Libc-alpha:
>>>
>>>> Starting with the commit 04deeaa9ea74b0679dfc9d9155a37b6425f19a9f "Fix
>>>> time/tst-cpuclock1 intermitent failures" (2020-07-11), this test fails
>>>> quite often on s390x/s390 with one/multiple of those: "before - after"
>>>> / "nanosleep time" / "dead - after" outside reasonable range.
>>>
>>> How much value do these cpuclock tests actually have?  Maybe we should
>>> just remove them, given that their test objective is so difficult to
>>> model in current execution environments.
>>
>> AFAICS this test was moved from rt/ to time/ when the clock_* functions
>> were migrated to libc.so. IMHO this test lost its purpose during this
>> migration.  I can see why it was needed in rt/, with strict time
>> measurements, but in time/ it's just testing the clock_* API. In this case I don't
>> see a downside to relaxing the time requirements.
> 
> The location in the tree does not impact the test objectives.  The move
> was done to reflect that for quite a few releases now, clock_gettime
> has been provided by libc, not librt.
> 
>>> Maybe we could measure clock discontinuities on the CPU-burning thread
>>> and detect CPU stealing this way.  But I'm not sure if this would be
>>> worth it.
>>
>> What do we want to achieve with this?
> 
> I think the goal is to ensure that glibc and the kernel agree about the
> definition of the clocks, i.e. that the clock is indeed a CPU time clock
> and does not measure wall-clock time.
> 
> Thanks,
> Florian
> 

Thanks for your comments.

How do we want to proceed here:
- Shall we just loosen the limits?
- Shall we remove the whole test?
- Shall we remove only the first check which compares nanosleep vs
clock_gettime (child_clock, before|after)?

Bye.
Stefan

Patch

diff = after - before
With much workload on the machine, the child won't make much progress
and the diff can fall well below the minimum limit. Thus I've set this limit to 0.45.
Even with this limit, the test still fails 11..90 times. If set to 0.30, it fails
only 1..8 times.
Does this check make sense at all?

Afterwards the parent process sleeps with clock_nanosleep (child_clock, 0.1s):
diff = afterns - after
The test currently also allows 0.9 * 0.1s.  Shouldn't we test the hard limit of
1.0 * 0.1s as the minimum limit?
Depending on the workload, the maximum can exceed 1.2 * 0.1s. Therefore
I've set the upper limit to 1.35.

For "dead - after", the parent process kills the child process and waits long
enough to let the child finish dying. Then it gets the time of the child:
diff = dead - after
Note that diff also contains the time of the previous clock_nanosleep.
Thus you'll often see both failures at the same time. I've set the upper limit
to 1.4.
---
 time/tst-cpuclock1.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/time/tst-cpuclock1.c b/time/tst-cpuclock1.c
index 1ac611a92b..52a3485145 100644
--- a/time/tst-cpuclock1.c
+++ b/time/tst-cpuclock1.c
@@ -161,7 +161,7 @@  do_test (void)
      percentile of values and use those values for our testing allowed range.  */
   struct timespec diff = timespec_sub (support_timespec_normalize (after),
 				       support_timespec_normalize (before));
-  if (!support_timespec_check_in_range (sleeptime, diff, .9,  1.1))
+  if (!support_timespec_check_in_range (sleeptime, diff, .45,  1.1))
     {
       printf ("before - after %ju.%.9ju outside reasonable range\n",
 	      (uintmax_t) diff.tv_sec, (uintmax_t) diff.tv_nsec);
@@ -197,7 +197,7 @@  do_test (void)
            allowed range.  */
 	  diff = timespec_sub (support_timespec_normalize (afterns),
 			       support_timespec_normalize (after));
-	  if (!support_timespec_check_in_range (sleeptime, diff, .9, 1.2))
+	  if (!support_timespec_check_in_range (sleeptime, diff, 1.0, 1.35))
 	    {
 	      printf ("nanosleep time %ju.%.9ju outside reasonable range\n",
 		      (uintmax_t) diff.tv_sec, (uintmax_t) diff.tv_nsec);
@@ -240,7 +240,7 @@  do_test (void)
   diff = timespec_sub (support_timespec_normalize (dead),
 		       support_timespec_normalize (after));
   sleeptime.tv_nsec = 100000000;
-  if (!support_timespec_check_in_range (sleeptime, diff, .9, 1.2))
+  if (!support_timespec_check_in_range (sleeptime, diff, 1.0, 1.4))
     {
       printf ("dead - after %ju.%.9ju outside reasonable range\n",
 	      (uintmax_t) diff.tv_sec, (uintmax_t) diff.tv_nsec);