
[v1] cgroup_fj_stress: Avoid killall

Message ID 20191105112000.20633-1-cfamullaconrad@suse.de
State Accepted
Delegated to: Petr Vorel

Commit Message

Clemens Famulla-Conrad Nov. 5, 2019, 11:20 a.m. UTC
We discovered cases where killall did not catch all processes. With
this patch, we collect the PIDs manually and kill them one after the
other.

Signed-off-by: Clemens Famulla-Conrad <cfamullaconrad@suse.de>
---
 testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

Comments

Petr Vorel Nov. 5, 2019, 1:20 p.m. UTC | #1
Hi Clemens,

> We discovered cases where killall did not catch all processes. With
> this patch, we collect the PIDs manually and kill them one after the
> other.

LGTM.
I wonder if we also want to kill cgroup_fj_proc this way (see cgroup_fj_common.sh).

I guess you're not planning to create a minimal reproducer to prove the
problem of left-over processes after killall, are you?

> Signed-off-by: Clemens Famulla-Conrad <cfamullaconrad@suse.de>
> ---
>  testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh | 9 ++++++---
>  1 file changed, 6 insertions(+), 3 deletions(-)

> diff --git a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> index 698aa4979..27ea7634a 100755
> --- a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> +++ b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
> @@ -74,6 +74,7 @@ setup
>  export TMPFILE=./tmp_tasks.$$

>  count=0
> +collected_pids=""
nit:
collected_pids=

...

Kind regards,
Petr
Clemens Famulla-Conrad Nov. 5, 2019, 1:49 p.m. UTC | #2
Hi Petr,

On Tue, 2019-11-05 at 14:20 +0100, Petr Vorel wrote:
<snip>
> I wonder if we also want to kill cgroup_fj_proc this way (see
> cgroup_fj_common.sh).

I'm not sure I understand you. We do kill cgroup_fj_proc this way.
The `killall -9 cgroup_fj_proc` call in cgroup_fj_common.sh looks to me
like cleanup code, and there is no `wait` or similar afterwards, so I
would guess we are not facing the problem there. I would keep killall
in that place.
As far as I can see, all other `cgroup_fj_proc&` calls already kill
their processes individually.

> I guess you're not planning to create a minimal reproducer to prove
> the problem of left-over processes after killall, are you?

Sure, nice idea. I can give it a try, but not within this patchset.
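
A rough first sketch of such a reproducer (hypothetical and untested
against the real cgroup_fj_proc; it simulates a slow-starting helper
with a renamed copy of sleep(1)) might be:

```shell
#!/bin/sh
# Hypothetical reproducer sketch: killall(1) matches processes by their
# command name, so a child that has been forked but has not yet exec'd
# into the target binary is invisible to killall and survives it.
# The slow-starting helper is simulated with a uniquely named copy of
# sleep(1) behind a delaying wrapper.
cp "$(command -v sleep)" /tmp/fj_proc_demo
sh -c 'sleep 1; exec /tmp/fj_proc_demo 60' &
pid=$!
killall -9 fj_proc_demo 2>/dev/null   # too early: the child is still "sh"
sleep 2
survived=no
if kill -0 "$pid" 2>/dev/null; then
    survived=yes                      # the process outlived killall
    kill -9 "$pid"
fi
rm -f /tmp/fj_proc_demo
echo "survived=$survived"
```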

Thanks
Clemens
Petr Vorel Nov. 6, 2019, 5:12 p.m. UTC | #3
Hi Clemens,

> > I wonder if we also want to kill cgroup_fj_proc this way (see
> > cgroup_fj_common.sh).

> I'm not sure I understand you. We do kill cgroup_fj_proc this way.
> The `killall -9 cgroup_fj_proc` call in cgroup_fj_common.sh looks to me
> like cleanup code, and there is no `wait` or similar afterwards, so I
> would guess we are not facing the problem there. I would keep killall
> in that place.
> As far as I can see, all other `cgroup_fj_proc&` calls already kill
> their processes individually.
OK, merged :).
Thanks!

> > I guess you're not planning to create a minimal reproducer to prove
> > the problem of left-over processes after killall, are you?

> Sure, nice idea. I can give it a try, but not within this patchset.
Thanks!

Kind regards,
Petr

Patch

diff --git a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
index 698aa4979..27ea7634a 100755
--- a/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
+++ b/testcases/kernel/controllers/cgroup_fj/cgroup_fj_stress.sh
@@ -74,6 +74,7 @@  setup
 export TMPFILE=./tmp_tasks.$$
 
 count=0
+collected_pids=""
 
 build_subgroups()
 {
@@ -107,6 +108,7 @@  attach_task()
     if [ -z "$ppid" ]; then
         cgroup_fj_proc&
         pid=$!
+        collected_pids="$collected_pids $pid"
     else
         pid="$ppid"
     fi
@@ -148,9 +150,10 @@  case $attach_operation in
 "each" )
     tst_resm TINFO "Attaching task to each subgroup"
     attach_task "$start_path" 0
-    ROD killall -9 "cgroup_fj_proc"
-    # Wait for attached tasks to terminate
-    wait
+    for pid in $collected_pids; do
+        ROD kill -9 "$pid"
+        wait "$pid"
+    done
     ;;
 *  )
     ;;
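
Outside the test itself, the per-pid kill/wait pattern the patch
introduces can be sketched standalone (plain sleep(1) stands in for
cgroup_fj_proc here):

```shell
#!/bin/sh
# Standalone sketch of the pattern the patch introduces: remember each
# background pid as it is spawned, then kill and reap the processes one
# by one instead of relying on killall to find them by name.
collected_pids=""
for i in 1 2 3; do
    sleep 60 &
    collected_pids="$collected_pids $!"
done
for pid in $collected_pids; do
    kill -9 "$pid"
    wait "$pid"    # reaps the child; its status reflects SIGKILL
done
echo "reaped:$collected_pids"
```

Because every pid is waited on explicitly, no zombie is left behind and
the script cannot race against name-based matching.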