
[2/3] rhashtable-test: retry insert operations in threads

Message ID 1440757685-14241-2-git-send-email-phil@nwl.cc
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Phil Sutter Aug. 28, 2015, 10:28 a.m. UTC
After adding cond_resched() calls to threadfunc(), a surprisingly high
rate of insert failures occurred, probably because table resizes got a
better chance to run in the background. To avoid weakening the remaining
tests, retry inserts until they either succeed or fail permanently.

Signed-off-by: Phil Sutter <phil@nwl.cc>
---
 lib/test_rhashtable.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

Comments

Thomas Graf Aug. 28, 2015, 11:09 a.m. UTC | #1
On 08/28/15 at 12:28pm, Phil Sutter wrote:
> After adding cond_resched() calls to threadfunc(), a surprisingly high
> rate of insert failures occurred, probably because table resizes got a
> better chance to run in the background. To avoid weakening the remaining
> tests, retry inserts until they either succeed or fail permanently.
> 
> Signed-off-by: Phil Sutter <phil@nwl.cc>
> ---
>  lib/test_rhashtable.c | 13 +++++++------
>  1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
> index 63654e3..093cf84 100644
> --- a/lib/test_rhashtable.c
> +++ b/lib/test_rhashtable.c
> @@ -244,7 +244,7 @@ static int thread_lookup_test(struct thread_data *tdata)
>  
>  static int threadfunc(void *data)
>  {
> -	int i, step, err = 0, insert_fails = 0;
> +	int i, step, err = 0, retries = 0;
>  	struct thread_data *tdata = data;
>  
>  	up(&prestart_sem);
> @@ -253,21 +253,22 @@ static int threadfunc(void *data)
>  
>  	for (i = 0; i < entries; i++) {
>  		tdata->objs[i].value = (tdata->id << 16) | i;
> +insert_retry:
>  		cond_resched();
>  		err = rhashtable_insert_fast(&ht, &tdata->objs[i].node,
>  		                             test_rht_params);
>  		if (err == -ENOMEM || err == -EBUSY) {
> -			tdata->objs[i].value = TEST_INSERT_FAIL;
> -			insert_fails++;
> +			retries++;
> +			goto insert_retry;

Is it safe to retry indefinitely on ENOMEM? Retrying on EBUSY is
definitely an improvement and we should do the same in the
non-threaded test as well.
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Phil Sutter Aug. 28, 2015, 11:13 a.m. UTC | #2
On Fri, Aug 28, 2015 at 01:09:29PM +0200, Thomas Graf wrote:
> On 08/28/15 at 12:28pm, Phil Sutter wrote:
> > After adding cond_resched() calls to threadfunc(), a surprisingly high
> > rate of insert failures occurred, probably because table resizes got a
> > better chance to run in the background. To avoid weakening the remaining
> > tests, retry inserts until they either succeed or fail permanently.
> > 
> > Signed-off-by: Phil Sutter <phil@nwl.cc>
> > ---
> >  lib/test_rhashtable.c | 13 +++++++------
> >  1 file changed, 7 insertions(+), 6 deletions(-)
> > 
> > diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
> > index 63654e3..093cf84 100644
> > --- a/lib/test_rhashtable.c
> > +++ b/lib/test_rhashtable.c
> > @@ -244,7 +244,7 @@ static int thread_lookup_test(struct thread_data *tdata)
> >  
> >  static int threadfunc(void *data)
> >  {
> > -	int i, step, err = 0, insert_fails = 0;
> > +	int i, step, err = 0, retries = 0;
> >  	struct thread_data *tdata = data;
> >  
> >  	up(&prestart_sem);
> > @@ -253,21 +253,22 @@ static int threadfunc(void *data)
> >  
> >  	for (i = 0; i < entries; i++) {
> >  		tdata->objs[i].value = (tdata->id << 16) | i;
> > +insert_retry:
> >  		cond_resched();
> >  		err = rhashtable_insert_fast(&ht, &tdata->objs[i].node,
> >  		                             test_rht_params);
> >  		if (err == -ENOMEM || err == -EBUSY) {
> > -			tdata->objs[i].value = TEST_INSERT_FAIL;
> > -			insert_fails++;
> > +			retries++;
> > +			goto insert_retry;
> 
> Is it safe to retry indefinitely on ENOMEM? Retrying on EBUSY is
> definitely an improvement and we should do the same in the
> non-threaded test as well.

Oh yes, that is definitely a bug. I will respin and add the same for the
normal test, too.

Thanks, Phil
Phil Sutter Aug. 28, 2015, 1:34 p.m. UTC | #3
On Fri, Aug 28, 2015 at 01:13:20PM +0200, Phil Sutter wrote:
> On Fri, Aug 28, 2015 at 01:09:29PM +0200, Thomas Graf wrote:
> > On 08/28/15 at 12:28pm, Phil Sutter wrote:
> > > After adding cond_resched() calls to threadfunc(), a surprisingly high
> > > rate of insert failures occurred, probably because table resizes got a
> > > better chance to run in the background. To avoid weakening the remaining
> > > tests, retry inserts until they either succeed or fail permanently.
> > > 
> > > Signed-off-by: Phil Sutter <phil@nwl.cc>
> > > ---
> > >  lib/test_rhashtable.c | 13 +++++++------
> > >  1 file changed, 7 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
> > > index 63654e3..093cf84 100644
> > > --- a/lib/test_rhashtable.c
> > > +++ b/lib/test_rhashtable.c
> > > @@ -244,7 +244,7 @@ static int thread_lookup_test(struct thread_data *tdata)
> > >  
> > >  static int threadfunc(void *data)
> > >  {
> > > -	int i, step, err = 0, insert_fails = 0;
> > > +	int i, step, err = 0, retries = 0;
> > >  	struct thread_data *tdata = data;
> > >  
> > >  	up(&prestart_sem);
> > > @@ -253,21 +253,22 @@ static int threadfunc(void *data)
> > >  
> > >  	for (i = 0; i < entries; i++) {
> > >  		tdata->objs[i].value = (tdata->id << 16) | i;
> > > +insert_retry:
> > >  		cond_resched();
> > >  		err = rhashtable_insert_fast(&ht, &tdata->objs[i].node,
> > >  		                             test_rht_params);
> > >  		if (err == -ENOMEM || err == -EBUSY) {
> > > -			tdata->objs[i].value = TEST_INSERT_FAIL;
> > > -			insert_fails++;
> > > +			retries++;
> > > +			goto insert_retry;
> > 
> > Is it safe to retry indefinitely on ENOMEM? Retrying on EBUSY is
> > definitely an improvement and we should do the same in the
> > non-threaded test as well.
> 
> Oh yes, that is definitely a bug. I will respin and add the same for the
> normal test, too.

Quite ugly, IMHO: rhashtable_insert_fast() may return -ENOMEM as a
non-permanent error if allocation with GFP_ATOMIC failed. In this case,
allocation with GFP_KERNEL is retried by rht_deferred_worker(). Sadly,
there is no way to determine if that has already been tried and failed.

The thread test triggers GFP_ATOMIC allocation failure quite easily, so
I can't really just ignore this issue. :)

Cheers, Phil
Thomas Graf Aug. 28, 2015, 10:43 p.m. UTC | #4
On 08/28/15 at 03:34pm, Phil Sutter wrote:
> Quite ugly, IMHO: rhashtable_insert_fast() may return -ENOMEM as a
> non-permanent error if allocation with GFP_ATOMIC failed. In this case,
> allocation with GFP_KERNEL is retried by rht_deferred_worker(). Sadly,
> there is no way to determine if that has already been tried and failed.
> 
> The thread test triggers GFP_ATOMIC allocation failure quite easily, so
> I can't really just ignore this issue. :)

Return EBUSY or ENOBUFS in the non-permanent case? It is definitely
helpful if the API allows callers to distinguish between permanent and
non-permanent errors.
Phil Sutter Aug. 29, 2015, 9:07 a.m. UTC | #5
On Sat, Aug 29, 2015 at 12:43:03AM +0200, Thomas Graf wrote:
> On 08/28/15 at 03:34pm, Phil Sutter wrote:
> > Quite ugly, IMHO: rhashtable_insert_fast() may return -ENOMEM as a
> > non-permanent error if allocation with GFP_ATOMIC failed. In this case,
> > allocation with GFP_KERNEL is retried by rht_deferred_worker(). Sadly,
> > there is no way to determine if that has already been tried and failed.
> > 
> > The thread test triggers GFP_ATOMIC allocation failure quite easily, so
> > I can't really just ignore this issue. :)
> 
> Return EBUSY or ENOBUFS in the non-permanent case? It is definitely
> helpful if the API allows callers to distinguish between permanent and
> non-permanent errors.

Yes, indeed. Therefore rht_deferred_worker() needs to check the return
value of rhashtable_expand(). The question is how to propagate the error
condition, as the worker's return value is not being kept track of
(function returns void even).

Should we introduce a new field to struct rhashtable to track the
internal state? This might allow cleaning up some rather obscure tests,
e.g. whether a table resize is in progress or not.

Cheers, Phil
Herbert Xu Aug. 30, 2015, 7:47 a.m. UTC | #6
Phil Sutter <phil@nwl.cc> wrote:
>
> Should we introduce a new field to struct rhashtable to track the
> internal state? This might allow cleaning up some rather obscure tests,
> e.g. whether a table resize is in progress or not.

Why would we want to do that? The deferred expansion is done
on a best effort basis so its failure has nothing to do with
the failure of a subsequent insertion.

The insertion must have tried its own last-ditch synchronous
expansion and only fail if that fails.

Cheers,
Phil Sutter Aug. 31, 2015, 11 a.m. UTC | #7
On Sun, Aug 30, 2015 at 03:47:17PM +0800, Herbert Xu wrote:
> Phil Sutter <phil@nwl.cc> wrote:
> >
> > Should we introduce a new field to struct rhashtable to track the
> > internal state? This might allow cleaning up some rather obscure tests,
> > e.g. whether a table resize is in progress or not.
> 
> Why would we want to do that? The deferred expansion is done
> on a best effort basis so its failure has nothing to do with
> the failure of a subsequent insertion.

The variable would be used to track whether the worker has failed to
allocate memory in the background.

Since the failing insertion will be retried, subsequent inserts are not
necessarily unrelated.

> The insertion must have tried its own last-ditch synchronous
> expansion and only fail if that fails.

Who do you mean by "the insertion"? The user or the API?

Cheers, Phil
Herbert Xu Sept. 1, 2015, 11:43 a.m. UTC | #8
On Mon, Aug 31, 2015 at 01:00:12PM +0200, Phil Sutter wrote:
> 
> The variable would be used to track whether the worker has failed to
> allocate memory in the background.
> 
> Since the failing insertion will be retried, subsequent inserts are not
> necessarily unrelated.

If an insertion fails it is never retried.  The only thing that is
retried is the expansion of the table.  So I have no idea what
you are talking about.

Cheers,
Phil Sutter Sept. 1, 2015, 12:46 p.m. UTC | #9
On Tue, Sep 01, 2015 at 07:43:00PM +0800, Herbert Xu wrote:
> On Mon, Aug 31, 2015 at 01:00:12PM +0200, Phil Sutter wrote:
> > 
> > The variable would be used to track whether the worker has failed to
> > allocate memory in the background.
> > 
> > Since the failing insertion will be retried, subsequent inserts are not
> > necessarily unrelated.
> 
> If an insertion fails it is never retried.  The only thing that is
> retried is the expansion of the table.  So I have no idea what
> you are talking about.

This is not an inherent behaviour of the implementation but a matter of
general agreement. The insertion may fail non-permanently (returning
-EBUSY); users are expected to handle this by retrying the operation.

Cheers, Phil
Herbert Xu Sept. 1, 2015, 1 p.m. UTC | #10
On Tue, Sep 01, 2015 at 02:46:48PM +0200, Phil Sutter wrote:
>
> This is not an inherent behaviour of the implementation but a matter of
> general agreement. The insertion may fail non-permanently (returning
> -EBUSY); users are expected to handle this by retrying the operation.

Absolutely not.  The only reason for an insertion to fail is if we
can't allocate enough memory.  Unless the user is also looping its
kmalloc calls it definitely shouldn't be retrying the insert.

If an expansion fails it means either that the system is suffering
a catastrophic memory shortage, or the user of rhashtable is doing
something wrong.

Cheers,
Eric Dumazet Sept. 1, 2015, 1:40 p.m. UTC | #11
On Tue, 2015-09-01 at 21:00 +0800, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 02:46:48PM +0200, Phil Sutter wrote:
> >
> > This is not an inherent behaviour of the implementation but a matter of
> > general agreement. The insertion may fail non-permanently (returning
> > -EBUSY); users are expected to handle this by retrying the operation.
> 
> Absolutely not.  The only reason for an insertion to fail is if we
> can't allocate enough memory.  Unless the user is also looping its
> kmalloc calls it definitely shouldn't be retrying the insert.
> 
> If an expansion fails it means either that the system is suffering
> a catastrophic memory shortage, or the user of rhashtable is doing
> something wrong.


-EBUSY does not sound like a memory allocation error.


Phil Sutter Sept. 1, 2015, 1:43 p.m. UTC | #12
On Tue, Sep 01, 2015 at 09:00:57PM +0800, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 02:46:48PM +0200, Phil Sutter wrote:
> >
> > This is not an inherent behaviour of the implementation but a matter of
> > general agreement. The insertion may fail non-permanently (returning
> > -EBUSY); users are expected to handle this by retrying the operation.
> 
> Absolutely not.  The only reason for an insertion to fail is if we
> can't allocate enough memory.  Unless the user is also looping its
> kmalloc calls it definitely shouldn't be retrying the insert.

rhashtable_insert_fast() returns -EBUSY if the table is full
(rht_grow_above_100() returns true) and an asynchronous rehash operation
is active. AFAICT, this is not necessarily caused by memory pressure.

> If an expansion fails it means either that the system is suffering
> a catastrophic memory shortage, or the user of rhashtable is doing
> something wrong.

Hmm. Since memory allocation is first tried with GFP_ATOMIC set and upon
failure retried in the background, this seems like a situation which might
happen during normal use. If that already indicates a severe problem,
why retry in the background at all?

Cheers, Phil
Herbert Xu Sept. 1, 2015, 1:50 p.m. UTC | #13
On Tue, Sep 01, 2015 at 03:43:11PM +0200, Phil Sutter wrote:
>
> Hmm. Since memory allocation is first tried with GFP_ATOMIC set and upon
> failure retried in the background, this seems like a situation which might
> happen during normal use. If that already indicates a severe problem,
> why retry in the background at all?

It should be tried in the background first at 70% and only when
that fails would we hit the 100% case and then we will try it
with GFP_ATOMIC.  If that fails then the insertion will fail.

Cheers,
Phil Sutter Sept. 1, 2015, 1:56 p.m. UTC | #14
On Tue, Sep 01, 2015 at 09:50:19PM +0800, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 03:43:11PM +0200, Phil Sutter wrote:
> >
> > Hmm. Since memory allocation is first tried with GFP_ATOMIC set and upon
> > failure retried in the background, this seems like a situation which might
> > happen during normal use. If that already indicates a severe problem,
> > why retry in the background at all?
> 
> It should be tried in the background first at 70% and only when
> that fails would we hit the 100% case and then we will try it
> with GFP_ATOMIC.  If that fails then the insertion will fail.

Ah, good to know. Thanks for clarifying this!

Looking at test_rhashtable.c, I see the initial table size is 8 entries.
70% of that is 5.6 entries, so background expansion is started after the
6th entry has been added, right? Given there are 10 threads running
which try to insert 50k entries at the same time, I don't think it's
unlikely that three more entries are inserted before the background
expansion completes.

Cheers, Phil
Herbert Xu Sept. 1, 2015, 2:03 p.m. UTC | #15
On Tue, Sep 01, 2015 at 03:56:18PM +0200, Phil Sutter wrote:
> 
> Looking at test_rhashtable.c, I see the initial table size is 8 entries.
> 70% of that is 5.6 entries, so background expansion is started after the
> 6th entry has been added, right? Given there are 10 threads running
> which try to insert 50k entries at the same time, I don't think it's
> unlikely that three more entries are inserted before the background
> expansion completes.

Yes but in that case the GFP_ATOMIC allocation should work because
the table is so small anyway.

Cheers,
Thomas Graf Sept. 1, 2015, 2:13 p.m. UTC | #16
On 09/01/15 at 10:03pm, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 03:56:18PM +0200, Phil Sutter wrote:
> > 
> > Looking at test_rhashtable.c, I see the initial table size is 8 entries.
> > 70% of that is 5.6 entries, so background expansion is started after the
> > 6th entry has been added, right? Given there are 10 threads running
> > which try to insert 50k entries at the same time, I don't think it's
> > unlikely that three more entries are inserted before the background
> > expansion completes.
> 
> Yes but in that case the GFP_ATOMIC allocation should work because
> the table is so small anyway.

You can easily trigger this outside of the testsuite as well. Open
10K Netlink sockets in a loop and the creation of the sockets will
fail way before memory pressure kicks in.

I agree with you that the user should never retry on memory failure.
That's why I suggested to differentiate between a "permanent" failure
(memory pressure) and non-permanent failure (temporary overload on
background expansion). Hence the proposed difference of return codes
ENOMEM and EBUSY to report this back to the API user.
Herbert Xu Sept. 1, 2015, 2:16 p.m. UTC | #17
On Tue, Sep 01, 2015 at 04:13:05PM +0200, Thomas Graf wrote:
>
> You can easily trigger this outside of the testsuite as well. Open
> 10K Netlink sockets in a loop and the creation of the sockets will
> fail way before memory pressure kicks in.

That means our implementation is buggy.  Do you have a test program?

Thanks,
Thomas Graf Sept. 1, 2015, 2:51 p.m. UTC | #18
On 09/01/15 at 10:16pm, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 04:13:05PM +0200, Thomas Graf wrote:
> >
> > You can easily trigger this outside of the testsuite as well. Open
> > 10K Netlink sockets in a loop and the creation of the sockets will
> > fail way before memory pressure kicks in.
> 
> That means our implementation is buggy.  Do you have a test program?

1. The current in-kernel self-test
2. bind_netlink.c: https://github.com/tgraf/rhashtable
Herbert Xu Sept. 2, 2015, 2 a.m. UTC | #19
On Tue, Sep 01, 2015 at 04:51:24PM +0200, Thomas Graf wrote:
>
> 1. The current in-kernel self-test
> 2. bind_netlink.c: https://github.com/tgraf/rhashtable

Thanks, I will try to reproduce this.
Thomas Graf Sept. 2, 2015, 7:07 a.m. UTC | #20
On 09/02/15 at 10:00am, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 04:51:24PM +0200, Thomas Graf wrote:
> >
> > 1. The current in-kernel self-test
> > 2. bind_netlink.c: https://github.com/tgraf/rhashtable
> 
> Thanks, I will try to reproduce this.

The path in question is:

int rhashtable_insert_rehash(struct rhashtable *ht)
{
	[...]

        old_tbl = rht_dereference_rcu(ht->tbl, ht);
        tbl = rhashtable_last_table(ht, old_tbl);

        size = tbl->size;

        if (rht_grow_above_75(ht, tbl))
                size *= 2;
        /* Do not schedule more than one rehash */
        else if (old_tbl != tbl)
                return -EBUSY;

The behaviour in question is the immediate rehash during
insertion which we want to fail.

Commits:
        ccd57b1bd32460d27bbb9c599e795628a3c66983
        a87b9ebf1709687ff213091d0fdb4254b1564803
Herbert Xu Sept. 10, 2015, 8:03 a.m. UTC | #21
On Tue, Sep 01, 2015 at 04:51:24PM +0200, Thomas Graf wrote:
> 
> 1. The current in-kernel self-test
> 2. bind_netlink.c: https://github.com/tgraf/rhashtable

I can't reproduce it:

$ while :; do ./bind_netlink 10000 12345; done
Ports successfully created, terminating
Create 10000 ports
Created 1000 ports
Created 2000 ports
Created 3000 ports
Created 4000 ports
Created 5000 ports
Created 6000 ports
Created 7000 ports
Created 8000 ports
Created 9000 ports
Created 10000 ports
Ports successfully created, terminating
...

So what am I missing?

Thanks,
Phil Sutter Sept. 10, 2015, 10:05 a.m. UTC | #22
On Thu, Sep 10, 2015 at 04:03:44PM +0800, Herbert Xu wrote:
> On Tue, Sep 01, 2015 at 04:51:24PM +0200, Thomas Graf wrote:
> > 
> > 1. The current in-kernel self-test
> > 2. bind_netlink.c: https://github.com/tgraf/rhashtable
> 
> I can't reproduce it:

I can't speak for netlink, but if you apply patch 1/3 from this thread,
test_rhashtable.c starts generating many insert failures during the
multiple thread test.

Cheers, Phil

Patch

diff --git a/lib/test_rhashtable.c b/lib/test_rhashtable.c
index 63654e3..093cf84 100644
--- a/lib/test_rhashtable.c
+++ b/lib/test_rhashtable.c
@@ -244,7 +244,7 @@  static int thread_lookup_test(struct thread_data *tdata)
 
 static int threadfunc(void *data)
 {
-	int i, step, err = 0, insert_fails = 0;
+	int i, step, err = 0, retries = 0;
 	struct thread_data *tdata = data;
 
 	up(&prestart_sem);
@@ -253,21 +253,22 @@  static int threadfunc(void *data)
 
 	for (i = 0; i < entries; i++) {
 		tdata->objs[i].value = (tdata->id << 16) | i;
+insert_retry:
 		cond_resched();
 		err = rhashtable_insert_fast(&ht, &tdata->objs[i].node,
 		                             test_rht_params);
 		if (err == -ENOMEM || err == -EBUSY) {
-			tdata->objs[i].value = TEST_INSERT_FAIL;
-			insert_fails++;
+			retries++;
+			goto insert_retry;
 		} else if (err) {
 			pr_err("  thread[%d]: rhashtable_insert_fast failed\n",
 			       tdata->id);
 			goto out;
 		}
 	}
-	if (insert_fails)
-		pr_info("  thread[%d]: %d insert failures\n",
-		        tdata->id, insert_fails);
+	if (retries)
+		pr_info("  thread[%d]: retried insert operation %d times\n",
+		        tdata->id, retries);
 
 	err = thread_lookup_test(tdata);
 	if (err) {