Patchwork [v6,2/2] block: Support GlusterFS as a QEMU block backend

Submitter Paolo Bonzini
Date Sept. 7, 2012, 3:11 p.m.
Message ID <504A0EA5.2060308@redhat.com>
Permalink /patch/182406/
State New

Comments

Paolo Bonzini - Sept. 7, 2012, 3:11 p.m.
On 07/09/2012 17:06, Bharata B Rao wrote:
> qemu_gluster_aio_event_reader() is the node->io_read in qemu_aio_wait().
> 
> qemu_aio_wait() calls node->io_read() which calls qemu_gluster_complete_aio().
> Before we return back to qemu_aio_wait(), many other things happen:
> 
> bdrv_close() gets called from qcow2_create2()
> This closes the gluster connection, closes the pipe, and does
> qemu_set_fd_handler(read_pipe_fd, NULL, NULL, NULL, NULL), which results
> in the AioHandler node being deleted from the aio_handlers list.
> 
> Now qemu_gluster_aio_event_reader() (node->io_read), which was called from
> qemu_aio_wait(), finally completes; qemu_aio_wait() then goes on to access
> "node", which has already been deleted. This causes a segfault.
> 
> So I think the option 1 (scheduling a BH from node->io_read) would
> be better for gluster.

This is a bug that has to be fixed anyway.  There are provisions for this
in aio.c, but they are apparently broken.  Can you try this:



Paolo
Bharata B Rao - Sept. 8, 2012, 2:22 p.m.
On Fri, Sep 07, 2012 at 05:11:33PM +0200, Paolo Bonzini wrote:
> This is a bug that has to be fixed anyway.  There are provisions in
> aio.c, but they are broken apparently.  Can you try this:
> 
> diff --git a/aio.c b/aio.c
> index 0a9eb10..99b8b72 100644
> --- a/aio.c
> +++ b/aio.c
> @@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
>          return true;
>      }
> 
> -    walking_handlers = 1;
> +    walking_handlers++;
> 
>      FD_ZERO(&rdfds);
>      FD_ZERO(&wrfds);
> @@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
>          }
>      }
> 
> -    walking_handlers = 0;
> +    walking_handlers--;
> 
>      /* No AIO operations?  Get us out of here */
>      if (!busy) {
> @@ -159,7 +159,7 @@ bool qemu_aio_wait(void)
> 
>      /* if we have any readable fds, dispatch event */
>      if (ret > 0) {
> -        walking_handlers = 1;
> +        walking_handlers++;
> 
>          /* we have to walk very carefully in case
>           * qemu_aio_set_fd_handler is called while we're walking */
> @@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
>              }
>          }
> 
> -        walking_handlers = 0;
> +        walking_handlers--;
>      }
> 
>      return true;
> 

This works. I am able to create qcow2 files on a gluster backend with this fix.

Regards,
Bharata.

Patch

diff --git a/aio.c b/aio.c
index 0a9eb10..99b8b72 100644
--- a/aio.c
+++ b/aio.c
@@ -119,7 +119,7 @@ bool qemu_aio_wait(void)
         return true;
     }

-    walking_handlers = 1;
+    walking_handlers++;

     FD_ZERO(&rdfds);
     FD_ZERO(&wrfds);
@@ -147,7 +147,7 @@ bool qemu_aio_wait(void)
         }
     }

-    walking_handlers = 0;
+    walking_handlers--;

     /* No AIO operations?  Get us out of here */
     if (!busy) {
@@ -159,7 +159,7 @@ bool qemu_aio_wait(void)

     /* if we have any readable fds, dispatch event */
     if (ret > 0) {
-        walking_handlers = 1;
+        walking_handlers++;

         /* we have to walk very carefully in case
          * qemu_aio_set_fd_handler is called while we're walking */
@@ -187,7 +187,7 @@ bool qemu_aio_wait(void)
             }
         }

-        walking_handlers = 0;
+        walking_handlers--;
     }

     return true;