| Message ID | 20110308180044.GD30899@tyan-ft48-01.lab.bos.redhat.com |
|---|---|
| State | New |
Jakub Jelinek <jakub@redhat.com> writes:

> Ok, here is an updated patch which uses both proposed env vars:
>
> GCCGO_RUN_ALL_TESTS=1 makes it fail for me as before (i.e. 10000 threads)
>
> GCC_TEST_RUN_EXPENSIVE=1 makes it run with max($[`ulimit -u`/4], 10000)
> threads on Linux native, 10000 everywhere else

Why should this be Linux-specific?  I think the same logic applies
everywhere.

	Rainer
On Tue, Mar 08, 2011 at 07:40:38PM +0100, Rainer Orth wrote:
> Jakub Jelinek <jakub@redhat.com> writes:
>
> > Ok, here is an updated patch which uses both proposed env vars:
> >
> > GCCGO_RUN_ALL_TESTS=1 makes it fail for me as before (i.e. 10000 threads)
> >
> > GCC_TEST_RUN_EXPENSIVE=1 makes it run with max($[`ulimit -u`/4], 10000)
> > threads on Linux native, 10000 everywhere else
>
> Why should this be Linux-specific?  I think the same logic applies
> everywhere.

Because ulimit -u is Linux specific?  At least, google doesn't show any
hints about any other OSes having such limit, neither RLIMIT_NPROC nor
ulimit -u.

	Jakub
Jakub Jelinek <jakub@redhat.com> writes:

>> Why should this be Linux-specific?  I think the same logic applies
>> everywhere.
>
> Because ulimit -u is Linux specific?  At least, google doesn't show any
> hints about any other OSes having such limit, neither RLIMIT_NPROC nor
> ulimit -u.

At best, it's shell-specific: Solaris 11 /bin/sh (which is ksh93) does
have it, although admittedly previous Solaris/IRIX/Tru64 UNIX shells
don't.  On the other hand, bash has it on all of those systems.

Why not simply test if ulimit -u doesn't error and then use it?  I'd
very much prefer this to a solution that is unnecessarily OS-specific.

Thanks.
	Rainer
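[Editorial note: the portability probe Rainer suggests — use `ulimit -u` only when it actually works — can be sketched in plain shell. This is an illustration only, not part of the patch; `pick_threads` is a made-up helper name, and 10000 is the default thread count goroutines.go uses.]

```shell
# Sketch of the suggested probe: trust `ulimit -u` only when the
# current shell supports it and it prints a number.  Non-numeric output
# (an error, "unlimited", or nothing at all) falls back to the test's
# default of 10000 threads; otherwise use a quarter of the limit.
pick_threads() {
  case "$1" in
    ''|*[!0-9]*) echo 10000 ;;        # unavailable or non-numeric
    *)           echo $(($1 / 4)) ;;  # numeric limit: use a quarter
  esac
}

pick_threads "$(ulimit -u 2>/dev/null)"
```

On a shell whose ulimit lacks -u, the 2>/dev/null redirect swallows the error message and the empty/non-numeric arm keeps the default, so the probe degrades gracefully rather than failing.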
On Tue, Mar 08, 2011 at 07:56:38PM +0100, Rainer Orth wrote:
> Jakub Jelinek <jakub@redhat.com> writes:
>
> At best, it's shell-specific: Solaris 11 /bin/sh (which is ksh93) does
> have it, although admittedly previous Solaris/IRIX/Tru64 UNIX shells
> don't.  On the other hand, bash has it on all of those systems.
>
> Why not simply test if ulimit -u doesn't error and then use it?  I'd
> very much prefer this to a solution that is unnecessarily OS-specific.

I'm happy to drop the [ ishost "*-linux*" ] && if you are going to look
for failures on weirdo OSes.  I have no idea what ulimit -u does on
anything but Linux, while the tcl code only uses its value if it printed
a number, whether it is something similar to limit on number of each
user's threads or something completely else is unclear.

	Jakub
Jakub Jelinek <jakub@redhat.com> writes:

> I'm happy to drop the [ ishost "*-linux*" ] && if you are going to look for
> failures on weirdo OSes.  I have no idea what ulimit -u does on anything but
> Linux, while the tcl code only uses its value if it printed a number,
> whether it is something similar to limit on number of each user's threads
> or something completely else is unclear.

In both bash and every non-bash shell I have that implements it at all,
ulimit -u does exactly the same as on Linux.

	Rainer
Jakub Jelinek <jakub@redhat.com> writes:

> 2011-03-08  Jakub Jelinek  <jakub@redhat.com>
>
> 	* go.test/go-test.exp: For goroutines.go test if GCCGO_RUN_ALL_TESTS
> 	is not set in the environment, pass 64 as first argument when not
> 	running expensive tests or pass max($[`ulimit -u`/4], 10000) on
> 	Linux native.

This is OK, and it's also OK if you remove the ishost conditional as
Rainer suggests.

Thanks.

Ian
On Mar 8, 2011, at 10:44 AM, Jakub Jelinek wrote:
> Because ulimit -u is Linux specific?
Seems to work on darwin (266).
```diff
--- gcc/testsuite/go.test/go-test.exp.jj	2011-01-15 11:26:32.000000000 +0100
+++ gcc/testsuite/go.test/go-test.exp	2011-03-08 13:23:36.078402148 +0100
@@ -265,6 +265,27 @@ proc go-gc-tests { } {
 	    verbose -log "$test: go_execute_args is $go_execute_args"
 	    set index [string last " $progargs" $test_line]
 	    set test_line [string replace $test_line $index end]
+	} elseif { [string match "*go.test/test/chan/goroutines.go" $test] \
+		   && [getenv GCCGO_RUN_ALL_TESTS] == "" } {
+	    # goroutines.go spawns by default 10000 threads, which is too much
+	    # for many OSes.
+	    if { [getenv GCC_TEST_RUN_EXPENSIVE] == "" } {
+		set go_execute_args 64
+	    } elseif { [ishost "*-linux*"] && ![is_remote host] && ![is_remote target] } {
+		# On Linux when using low ulimit -u limit, use maximum of
+		# a quarter of that limit and 10000 even when running expensive
+		# tests, otherwise parallel tests might fail after fork failures.
+		set nproc [lindex [remote_exec host {sh -c ulimit\ -u}] 1]
+		if { [string is integer -strict $nproc] } {
+		    set nproc [expr $nproc / 4]
+		    if { $nproc > 10000 } { set nproc 10000 }
+		    if { $nproc < 16 } { set nproc 16 }
+		    set go_execute_args $nproc
+		}
+	    }
+	    if { "$go_execute_args" != "" } {
+		verbose -log "$test: go_execute_args is $go_execute_args"
+	    }
 	}
 	if { $test_line == "// \$G \$D/\$F\.go && \$L \$F\.\$A && \./\$A\.out >tmp.go &&" \
```
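[Editorial note: the clamping the Tcl hunk above applies can be restated in plain shell arithmetic to make the bounds explicit — a quarter of the `ulimit -u` value, capped at the test's default of 10000 and floored at 16. This is a sketch for readability only; `clamp_threads` is a made-up name, not something in the patch.]

```shell
# Shell restatement of the Tcl clamping above: quarter the limit,
# never more than 10000 (goroutines.go's default), never fewer than 16.
clamp_threads() {
  n=$(($1 / 4))
  [ "$n" -gt 10000 ] && n=10000
  [ "$n" -lt 16 ] && n=16
  echo "$n"
}
```

The floor of 16 keeps the test meaningful on machines with a very low process limit, while the cap of 10000 preserves the test's original behavior on unconstrained systems.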