Message ID: 20220714215325.GA18923@ldh-imac.local
State: New
Series: libphobos: Fix instability in the parallelized testsuite
Excerpts from Lewis Hyatt via Gcc-patches's message of July 14, 2022 11:53 pm:

> Hello-
>
> I get a different number of test results from libphobos.unittest/unittest.exp,
> depending on server load. I believe it's because this testsuite doesn't check
> runtest_file_p:
>
> $ make -j 1 RUNTESTFLAGS='unittest.exp' check-target-libphobos | grep '^#'
> # of expected passes 10
>
> $ make -j 2 RUNTESTFLAGS='unittest.exp' check-target-libphobos | grep '^#'
> # of expected passes 10
> # of expected passes 10
>
> $ make -j 4 RUNTESTFLAGS='unittest.exp' check-target-libphobos | grep '^#'
> # of expected passes 10
> # of expected passes 10
> # of expected passes 10
> # of expected passes 10
>
> When running in parallel along with other tests, even at a fixed argument
> for -j, the number of tests that actually execute will depend on how many of
> the parallel sub-makes happened to start prior to the first one finishing,
> hence it changes from run to run.
>
> The attached patch fixes it for me, if it looks OK? Thanks, this would remove
> some noise from before/after test comparisons.
>
> -Lewis
>
> libphobos: Fix instability in the parallelized testsuite
>
> libphobos.unittest/unittest.exp calls bare dg-test rather than dg-runtest,
> and so it should call runtest_file_p to determine whether to run each test
> or not. Without that call, the tests run too many times in parallel mode
> (they will run as many times as the value of the -j argument to make).

Hi Lewis,

Thanks! Good spot. I think it should be calling dg-runtest however, the same
as what libphobos.cycles/cycles.exp is doing. Could also fix the test name so
each one is unique, just to kill two birds with one stone - something like
the following would suffice (haven't had time to check).

Kind Regards,
Iain.
---

--- a/libphobos/testsuite/libphobos.unittest/unittest.exp
+++ b/libphobos/testsuite/libphobos.unittest/unittest.exp
@@ -42,8 +42,10 @@ foreach unit_test $unit_test_list {
     set expected_fail [lindex $unit_test 1]

     foreach test $tests {
-	set shouldfail $expected_fail
-	dg-test $test "" $test_flags
+	set libphobos_test_name "[dg-trim-dirname $srcdir $test] $test_flags"
+	set shouldfail $expected_fail
+	dg-runtest $test "" $test_flags
+	set libphobos_test_name ""
     }

     set shouldfail 0
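The instability Lewis describes can be modeled with a short sketch. This is a hypothetical Python illustration of the general idea, not DejaGnu's actual implementation: without a partitioning check, every parallel testsuite instance runs every test, so the pass count scales with -j; with a shared "claim" step playing the role of runtest_file_p, each test executes exactly once no matter how many instances run.

```python
# Hypothetical model of parallel testsuite partitioning (illustrative only;
# the real mechanism is DejaGnu/GCC's runtest_file_p, written in Tcl).

def run_suite(n_workers, tests, claimed=None):
    """Return how many times each test executes across n_workers instances.

    If `claimed` is None, no partitioning happens: every worker runs every
    test. If a shared set is passed, a worker skips tests another worker
    has already claimed - analogous to checking runtest_file_p.
    """
    counts = {t: 0 for t in tests}
    for worker in range(n_workers):
        for t in tests:
            if claimed is not None:
                if t in claimed:      # some instance already ran this test
                    continue
                claimed.add(t)        # claim it before running
            counts[t] += 1
    return counts

tests = ["customhandler.d"]

# No partitioning: the test runs once per parallel sub-make (-j 4 -> 4 runs).
print(run_suite(4, tests))                 # {'customhandler.d': 4}

# With a shared claim set, it runs exactly once regardless of -j.
print(run_suite(4, tests, claimed=set()))  # {'customhandler.d': 1}
```

This also shows why the duplication count varies under load in the real testsuite: how many instances get to "claim" work depends on scheduling, whereas in this simplified model the workers run deterministically.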
> Hi Lewis,
>
> Thanks! Good spot. I think it should be calling dg-runtest however,
> same as what libphobos.cycles/cycles.exp is doing. Could also fix the
> test name so each one is unique, just to kill two birds with one stone -
> something like the following would suffice (haven't had time to check).
>
> Kind Regards,
> Iain.
>
> ---
>
> --- a/libphobos/testsuite/libphobos.unittest/unittest.exp
> +++ b/libphobos/testsuite/libphobos.unittest/unittest.exp
> @@ -42,8 +42,10 @@ foreach unit_test $unit_test_list {
>      set expected_fail [lindex $unit_test 1]
>
>      foreach test $tests {
> -	set shouldfail $expected_fail
> -	dg-test $test "" $test_flags
> +	set libphobos_test_name "[dg-trim-dirname $srcdir $test] $test_flags"
> +	set shouldfail $expected_fail
> +	dg-runtest $test "" $test_flags
> +	set libphobos_test_name ""
>      }
>
>      set shouldfail 0

Thanks for the followup. I tested and can confirm your version works fine:

PASS: libphobos.unittest/customhandler.d -fversion=FailNoPrintout (test for excess errors)
PASS: libphobos.unittest/customhandler.d -fversion=FailNoPrintout execution test
PASS: libphobos.unittest/customhandler.d -fversion=FailedTests (test for excess errors)
PASS: libphobos.unittest/customhandler.d -fversion=FailedTests execution test
PASS: libphobos.unittest/customhandler.d -fversion=GoodTests (test for excess errors)
PASS: libphobos.unittest/customhandler.d -fversion=GoodTests execution test
PASS: libphobos.unittest/customhandler.d -fversion=NoTests (test for excess errors)
PASS: libphobos.unittest/customhandler.d -fversion=NoTests execution test
PASS: libphobos.unittest/customhandler.d -fversion=PassNoPrintout (test for excess errors)
PASS: libphobos.unittest/customhandler.d -fversion=PassNoPrintout execution test

Let me know if you want me to do anything further, please?
By the way, there are a few other tests that cause some minor glitches when
comparing results:

libphobos.sum:PASS: libphobos.shared/link.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared lib.so -shared-libphobos (test for excess errors)
libphobos.sum:PASS: libphobos.shared/link.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared lib.so -shared-libphobos execution test
libphobos.sum:PASS: libphobos.shared/link_linkdep.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared liblinkdep.so lib.so -shared-libphobos (test for excess errors)
libphobos.sum:PASS: libphobos.shared/link_linkdep.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared liblinkdep.so lib.so -shared-libphobos execution test
libphobos.sum:PASS: libphobos.shared/link_loaddep.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared libloaddep.so -shared-libphobos (test for excess errors)
libphobos.sum:PASS: libphobos.shared/link_loaddep.d -I/home/lewis/gccdev/base/src/libphobos/testsuite/libphobos.shared libloaddep.so -shared-libphobos execution test

The problem here is that the absolute path to the test dir ends up in the
results summary, since it appears in the options string that is part of the
test name. It's not so hard to work around when doing the comparisons, but it
seems to be the only case where this happens in the whole testsuite, other
than one other similar case from libgo. Is there a standard way to handle it?
Thanks...

-Lewis
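The stable-name idea in Iain's diff is that only the suite-relative part of the test path should appear in the reported name. As a rough Python illustration of the effect of stripping the srcdir prefix (this sketch is hypothetical and is not how DejaGnu's dg-trim-dirname is implemented, which is a Tcl procedure):

```python
import os

# Hypothetical srcdir, matching the absolute path quoted in the PASS lines
# above; on another machine this prefix would differ, which is exactly why
# it should not appear in test names.
SRCDIR = "/home/lewis/gccdev/base/src/libphobos/testsuite"

def trim_dirname(srcdir, test_path):
    """Strip the srcdir prefix from a test path, keeping only the
    suite-relative name - similar in spirit to dg-trim-dirname."""
    return os.path.relpath(test_path, srcdir)

name = trim_dirname(SRCDIR, SRCDIR + "/libphobos.shared/link.d")
print(name)  # libphobos.shared/link.d
```

With names normalized like this, before/after .sum comparisons no longer depend on where the source tree happens to be checked out.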
diff --git a/libphobos/testsuite/libphobos.unittest/unittest.exp b/libphobos/testsuite/libphobos.unittest/unittest.exp
index 2a019caca8c..175decdc333 100644
--- a/libphobos/testsuite/libphobos.unittest/unittest.exp
+++ b/libphobos/testsuite/libphobos.unittest/unittest.exp
@@ -42,6 +42,9 @@ foreach unit_test $unit_test_list {
     set expected_fail [lindex $unit_test 1]

     foreach test $tests {
+	if {![runtest_file_p $runtests $test]} {
+	    continue
+	}
 	set shouldfail $expected_fail
 	dg-test $test "" $test_flags
     }