Message ID | 20220830135007.16818-4-mdoucha@suse.cz
State      | Changes Requested
Series     | Max_runtime and other minor fixes
Hi!
> ksm02, ksm04 and ksm05 take 10+ seconds to finish. Set max_runtime to avoid
> random timeout issues.

I wonder if we can do better.

I guess that the actual runtime does depend on the size of the RAM
because we wait for at least two finished full scans of ksmd. I guess
that for large enough machines we would end up with minutes of runtime.

So I guess that it would make more sense to treat the max_runtime as an
upper bound, set it to a large enough number as we do for AIO testcases
(30 minutes), and then make the wait_ksmd_full_scan() runtime aware so
that it exits when the runtime is exhausted. With that we would get a
clear message that we timed out in the loop that waited for the ksmd
scan.

> Signed-off-by: Martin Doucha <mdoucha@suse.cz>
> ---
>  testcases/kernel/mem/ksm/ksm02.c | 1 +
>  testcases/kernel/mem/ksm/ksm04.c | 1 +
>  testcases/kernel/mem/ksm/ksm05.c | 1 +
>  3 files changed, 3 insertions(+)
>
> diff --git a/testcases/kernel/mem/ksm/ksm02.c b/testcases/kernel/mem/ksm/ksm02.c
> index 1cb7d8e73..1f5677425 100644
> --- a/testcases/kernel/mem/ksm/ksm02.c
> +++ b/testcases/kernel/mem/ksm/ksm02.c
> @@ -110,6 +110,7 @@ static struct tst_test test = {
>  	},
>  	.test_all = verify_ksm,
>  	.min_kver = "2.6.32",
> +	.max_runtime = 20,
>  	.needs_cgroup_ctrls = (const char *const []){ "cpuset", NULL },
>  };
>
> diff --git a/testcases/kernel/mem/ksm/ksm04.c b/testcases/kernel/mem/ksm/ksm04.c
> index 39c741876..f7dc5befc 100644
> --- a/testcases/kernel/mem/ksm/ksm04.c
> +++ b/testcases/kernel/mem/ksm/ksm04.c
> @@ -112,6 +112,7 @@ static struct tst_test test = {
>  	},
>  	.test_all = verify_ksm,
>  	.min_kver = "2.6.32",
> +	.max_runtime = 20,
>  	.needs_cgroup_ctrls = (const char *const []){
>  		"memory", "cpuset", NULL
>  	},
>
> diff --git a/testcases/kernel/mem/ksm/ksm05.c b/testcases/kernel/mem/ksm/ksm05.c
> index 146a9a3b7..6f94c4a9c 100644
> --- a/testcases/kernel/mem/ksm/ksm05.c
> +++ b/testcases/kernel/mem/ksm/ksm05.c
> @@ -88,6 +88,7 @@ static struct tst_test test = {
>  	.forks_child = 1,
>  	.test_all = test_ksm,
>  	.min_kver = "2.6.32",
> +	.max_runtime = 10,
>  	.save_restore = (const struct tst_path_val[]) {
>  		{"!/sys/kernel/mm/ksm/run", "1"},
>  		{}
> --
> 2.37.2
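A minimal sketch of the runtime-aware wait suggested above, assuming the
tst_remaining_runtime() helper from the max_runtime series and the usual
/sys/kernel/mm/ksm/full_scans counter; the real wait_ksmd_full_scan()
lives in the mem test library and differs in detail:

#include <unistd.h>
#include "tst_test.h"

#define PATH_KSM "/sys/kernel/mm/ksm/"

/*
 * Sketch only: wait for two completed ksmd full scans, but give up
 * with a clear message once the test runtime is exhausted.
 */
static void wait_ksmd_full_scan(void)
{
	unsigned long start_scans, scans;

	SAFE_FILE_SCANF(PATH_KSM "full_scans", "%lu", &start_scans);

	do {
		if (!tst_remaining_runtime())
			tst_brk(TBROK, "Timed out waiting for ksmd full scan");

		sleep(1);
		SAFE_FILE_SCANF(PATH_KSM "full_scans", "%lu", &scans);
	} while (scans < start_scans + 2);
}

Checking the remaining runtime inside the polling loop turns a silent
kill by the test harness into an explicit TBROK that points at the scan
wait.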
On 30. 08. 22 16:49, Cyril Hrubis wrote:
> Hi!
>> ksm02, ksm04 and ksm05 take 10+ seconds to finish. Set max_runtime to avoid
>> random timeout issues.
>
> I wonder if we can do better.
>
> I guess that the actual runtime does depend on the size of the RAM
> because we wait for at least two finished full scans of ksmd. I guess
> that for large enough machines we would end up with minutes of runtime.
>
> So I guess that it would make more sense to treat the max_runtime as an
> upper bound, set it to a large enough number as we do for AIO testcases
> (30 minutes), and then make the wait_ksmd_full_scan() runtime aware so
> that it exits when the runtime is exhausted. With that we would get a
> clear message that we timed out in the loop that waited for the ksmd
> scan.

Alternatively, we could measure 1 full ksmd scan in setup() and then set
max_runtime dynamically. Each call of create_same_memory() would need
roughly 16 scan times. Time spent in ksm_child_memset() is included in
that estimate.
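A rough sketch of that idea, assuming tst_set_max_runtime() may be
called from setup() and that one full_scans increment approximates a
single ksmd pass; the 16x multiplier is the estimate from the mail
above, not a measured constant:

#include <time.h>
#include <unistd.h>
#include "tst_test.h"

#define PATH_KSM "/sys/kernel/mm/ksm/"

static void setup(void)
{
	unsigned long start_scans, scans;
	time_t start;

	/* Align to the end of the scan that is currently in flight */
	SAFE_FILE_SCANF(PATH_KSM "full_scans", "%lu", &start_scans);
	do {
		sleep(1);
		SAFE_FILE_SCANF(PATH_KSM "full_scans", "%lu", &scans);
	} while (scans < start_scans + 1);

	/* Time one complete scan from start to finish */
	start = time(NULL);
	do {
		sleep(1);
		SAFE_FILE_SCANF(PATH_KSM "full_scans", "%lu", &scans);
	} while (scans < start_scans + 2);

	/* ~16 scans per create_same_memory() call, per the estimate above */
	tst_set_max_runtime(16 * (time(NULL) - start));
}

The 1-second polling granularity makes the measurement coarse, but for
a runtime upper bound that only needs to be the right order of
magnitude that should be good enough.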
Hi!
> > I wonder if we can do better.
> >
> > I guess that the actual runtime does depend on the size of the RAM
> > because we wait for at least two finished full scans of ksmd. I guess
> > that for large enough machines we would end up with minutes of runtime.
> >
> > So I guess that it would make more sense to treat the max_runtime as an
> > upper bound, set it to a large enough number as we do for AIO testcases
> > (30 minutes), and then make the wait_ksmd_full_scan() runtime aware so
> > that it exits when the runtime is exhausted. With that we would get a
> > clear message that we timed out in the loop that waited for the ksmd
> > scan.
>
> Alternatively, we could measure 1 full ksmd scan in setup() and then set
> max_runtime dynamically. Each call of create_same_memory() would need
> roughly 16 scan times. Time spent in ksm_child_memset() is included in
> that estimate.

That sounds good as well, but I would still set the .max_runtime to a
rough guess in the tst_test structure and then adjust it in the test
setup().
On 31. 08. 22 14:50, Cyril Hrubis wrote:
> Hi!
>> Alternatively, we could measure 1 full ksmd scan in setup() and then set
>> max_runtime dynamically. Each call of create_same_memory() would need
>> roughly 16 scan times. Time spent in ksm_child_memset() is included in
>> that estimate.
>
> That sounds good as well, but I would still set the .max_runtime to a
> rough guess in the tst_test structure and then adjust it in the test
> setup().

The current patch is a good enough guess for ~2-4GB machines. Or do you
want to target bigger machines by default?
> On 31. 08. 22 14:50, Cyril Hrubis wrote:
> > Hi!
> > > Alternatively, we could measure 1 full ksmd scan in setup() and then set
> > > max_runtime dynamically. Each call of create_same_memory() would need
> > > roughly 16 scan times. Time spent in ksm_child_memset() is included in
> > > that estimate.
>
> > That sounds good as well, but I would still set the .max_runtime to a
> > rough guess in the tst_test structure and then adjust it in the test
> > setup().
>
> The current patch is a good enough guess for ~2-4GB machines. Or do you
> want to target bigger machines by default?

I guess it'd be safer to expect machines with bigger memory.

Kind regards,
Petr
Hi!
> > > > Alternatively, we could measure 1 full ksmd scan in setup() and then set
> > > > max_runtime dynamically. Each call of create_same_memory() would need
> > > > roughly 16 scan times. Time spent in ksm_child_memset() is included in
> > > > that estimate.
>
> > > That sounds good as well, but I would still set the .max_runtime to a
> > > rough guess in the tst_test structure and then adjust it in the test
> > > setup().
>
> > The current patch is a good enough guess for ~2-4GB machines. Or do you
> > want to target bigger machines by default?
>
> I guess it'd be safer to expect machines with bigger memory.

I would just multiply the value you proposed by 10, which should be a
large enough default.
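For illustration, the 10x factor applied to the values from this patch
would mean defaults along these lines (a sketch of the suggestion, not
a posted revision):

	/* ksm02.c / ksm04.c: 20 s guess from v1, times 10 */
	.max_runtime = 200,

	/* ksm05.c: 10 s guess from v1, times 10 */
	.max_runtime = 100,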
diff --git a/testcases/kernel/mem/ksm/ksm02.c b/testcases/kernel/mem/ksm/ksm02.c
index 1cb7d8e73..1f5677425 100644
--- a/testcases/kernel/mem/ksm/ksm02.c
+++ b/testcases/kernel/mem/ksm/ksm02.c
@@ -110,6 +110,7 @@ static struct tst_test test = {
 	},
 	.test_all = verify_ksm,
 	.min_kver = "2.6.32",
+	.max_runtime = 20,
 	.needs_cgroup_ctrls = (const char *const []){ "cpuset", NULL },
 };

diff --git a/testcases/kernel/mem/ksm/ksm04.c b/testcases/kernel/mem/ksm/ksm04.c
index 39c741876..f7dc5befc 100644
--- a/testcases/kernel/mem/ksm/ksm04.c
+++ b/testcases/kernel/mem/ksm/ksm04.c
@@ -112,6 +112,7 @@ static struct tst_test test = {
 	},
 	.test_all = verify_ksm,
 	.min_kver = "2.6.32",
+	.max_runtime = 20,
 	.needs_cgroup_ctrls = (const char *const []){
 		"memory", "cpuset", NULL
 	},

diff --git a/testcases/kernel/mem/ksm/ksm05.c b/testcases/kernel/mem/ksm/ksm05.c
index 146a9a3b7..6f94c4a9c 100644
--- a/testcases/kernel/mem/ksm/ksm05.c
+++ b/testcases/kernel/mem/ksm/ksm05.c
@@ -88,6 +88,7 @@ static struct tst_test test = {
 	.forks_child = 1,
 	.test_all = test_ksm,
 	.min_kver = "2.6.32",
+	.max_runtime = 10,
 	.save_restore = (const struct tst_path_val[]) {
 		{"!/sys/kernel/mm/ksm/run", "1"},
 		{}
ksm02, ksm04 and ksm05 take 10+ seconds to finish. Set max_runtime to
avoid random timeout issues.

Signed-off-by: Martin Doucha <mdoucha@suse.cz>
---
 testcases/kernel/mem/ksm/ksm02.c | 1 +
 testcases/kernel/mem/ksm/ksm04.c | 1 +
 testcases/kernel/mem/ksm/ksm05.c | 1 +
 3 files changed, 3 insertions(+)