
[1/3] rcu: do not let RCU callbacks pile up indefinitely

Message ID 1423674872-10676-2-git-send-email-pbonzini@redhat.com
State New

Commit Message

Paolo Bonzini Feb. 11, 2015, 5:14 p.m. UTC
Always process them within half a second.  Even though waiting a little
is useful, it is not okay to delay e.g. qemu_opts_del forever.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 util/rcu.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)
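
For reference, here is what the waiting loop in call_rcu_thread() looks like with this patch applied (reconstructed from the diff below; n and tries are declared earlier in util/rcu.c and omitted here):

    while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
        g_usleep(10000);
        if (n == 0) {
            /* Wait for a callback to be queued before sleeping again. */
            qemu_event_reset(&rcu_call_ready_event);
            n = atomic_read(&rcu_call_count);
            if (n == 0) {
                qemu_event_wait(&rcu_call_ready_event);
            }
        }
        n = atomic_read(&rcu_call_count);
    }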

Comments

Paolo Bonzini Feb. 11, 2015, 5:30 p.m. UTC | #1
On 11/02/2015 18:14, Paolo Bonzini wrote:
> Always process them within half a second.

Actually, half a second is too much, so I changed it to 0.05 seconds to
err on the safe side while still giving the sending process time to
register two or three callbacks in rapid succession.
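
Spelled out, the arithmetic behind the two figures (the constant names
below are illustrative, not taken from the patch):

    /* Per-iteration sleep and retry cap, as in the amended patch: */
    #define RCU_SLEEP_US   10000   /* g_usleep(10000), i.e. 10 ms */
    #define RCU_MAX_TRIES  5       /* ++tries <= 5 in the loop    */

    /*
     * Worst-case deliberate batching delay:
     *   RCU_MAX_TRIES * RCU_SLEEP_US = 5 * 10 ms = 50 ms = 0.05 s
     * With the original g_usleep(100000) it was 5 * 100 ms = 0.5 s.
     */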

Paolo

> Even though waiting a little
> is useful, it is not okay to delay e.g. qemu_opts_del forever.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Fam Zheng Feb. 12, 2015, 9:05 a.m. UTC | #2
On Wed, 02/11 18:14, Paolo Bonzini wrote:
> Always process them within half a second.  Even though waiting a little
> is useful, it is not okay to delay e.g. qemu_opts_del forever.
> 
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> ---
>  util/rcu.c | 14 ++++++++------
>  1 file changed, 8 insertions(+), 6 deletions(-)
> 
> diff --git a/util/rcu.c b/util/rcu.c
> index c9c3e6e..486d7b6 100644
> --- a/util/rcu.c
> +++ b/util/rcu.c
> @@ -223,14 +223,16 @@ static void *call_rcu_thread(void *opaque)
>           * Fetch rcu_call_count now, we only must process elements that were
>           * added before synchronize_rcu() starts.
>           */
> -        while (n < RCU_CALL_MIN_SIZE && ++tries <= 5) {
> -            g_usleep(100000);
> -            qemu_event_reset(&rcu_call_ready_event);
> -            n = atomic_read(&rcu_call_count);
> -            if (n < RCU_CALL_MIN_SIZE) {
> -                qemu_event_wait(&rcu_call_ready_event);
> +        while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
> +            g_usleep(10000);
> +            if (n == 0) {
> +                qemu_event_reset(&rcu_call_ready_event);
>                  n = atomic_read(&rcu_call_count);
> +                if (n == 0) {
> +                    qemu_event_wait(&rcu_call_ready_event);
> +                }
>              }
> +            n = atomic_read(&rcu_call_count);
>          }
>  
>          atomic_sub(&rcu_call_count, n);

Reviewed-by: Fam Zheng <famz@redhat.com>

Patch

diff --git a/util/rcu.c b/util/rcu.c
index c9c3e6e..486d7b6 100644
--- a/util/rcu.c
+++ b/util/rcu.c
@@ -223,14 +223,16 @@ static void *call_rcu_thread(void *opaque)
          * Fetch rcu_call_count now, we only must process elements that were
          * added before synchronize_rcu() starts.
          */
-        while (n < RCU_CALL_MIN_SIZE && ++tries <= 5) {
-            g_usleep(100000);
-            qemu_event_reset(&rcu_call_ready_event);
-            n = atomic_read(&rcu_call_count);
-            if (n < RCU_CALL_MIN_SIZE) {
-                qemu_event_wait(&rcu_call_ready_event);
+        while (n == 0 || (n < RCU_CALL_MIN_SIZE && ++tries <= 5)) {
+            g_usleep(10000);
+            if (n == 0) {
+                qemu_event_reset(&rcu_call_ready_event);
                 n = atomic_read(&rcu_call_count);
+                if (n == 0) {
+                    qemu_event_wait(&rcu_call_ready_event);
+                }
             }
+            n = atomic_read(&rcu_call_count);
         }
 
         atomic_sub(&rcu_call_count, n);
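
For context on what is being delayed, here is a minimal sketch of how a
callback lands on the list drained by call_rcu_thread(), assuming QEMU's
call_rcu() macro from include/qemu/rcu.h (the MyState names are
illustrative; the macro expects the rcu_head to be the struct's first
field):

    #include "qemu/osdep.h"
    #include "qemu/rcu.h"

    typedef struct MyState {
        struct rcu_head rcu;  /* first field, as call_rcu() requires */
        int value;
    } MyState;

    static void mystate_free(MyState *s)
    {
        g_free(s);
    }

    /*
     * After unlinking s from all RCU-visible data structures, defer the
     * actual free until readers are done.  The call_rcu_thread() loop
     * patched above is what eventually invokes mystate_free(), so with
     * this patch a lone callback no longer sits on the list forever.
     */
    static void mystate_unref(MyState *s)
    {
        call_rcu(s, mystate_free, rcu);
    }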