[v2,1/2] virtio-blk: Release s->rq queue at system_reset

Message ID 1470142856-742-2-git-send-email-famz@redhat.com
State New
Headers show

Commit Message

Fam Zheng Aug. 2, 2016, 1 p.m. UTC
At system_reset, there is no point in retrying the queued request,
because the driver that issued the request won't be around any more.

Analyzed-by: Laszlo Ersek <lersek@redhat.com>
Reported-by: Laszlo Ersek <lersek@redhat.com>
Signed-off-by: Fam Zheng <famz@redhat.com>
---
 hw/block/virtio-blk.c | 6 ++++++
 1 file changed, 6 insertions(+)

Comments

Laszlo Ersek Aug. 2, 2016, 1:31 p.m. UTC | #1
On 08/02/16 15:00, Fam Zheng wrote:
> At system_reset, there is no point in retrying the queued request,
> because the driver that issued the request won't be around any more.
> 
> Analyzed-by: Laszlo Ersek <lersek@redhat.com>
> Reported-by: Laszlo Ersek <lersek@redhat.com>
> Signed-off-by: Fam Zheng <famz@redhat.com>
> ---
>  hw/block/virtio-blk.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 475a822..12587d9 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -654,6 +654,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
>  {
>      VirtIOBlock *s = VIRTIO_BLK(vdev);
>      AioContext *ctx;
> +    VirtIOBlockReq *req;
>  
>      /*
>       * This should cancel pending requests, but can't do nicely until there
> @@ -662,6 +663,11 @@ static void virtio_blk_reset(VirtIODevice *vdev)
>      ctx = blk_get_aio_context(s->blk);
>      aio_context_acquire(ctx);
>      blk_drain(s->blk);
> +    while (s->rq) {
> +        req = s->rq;
> +        s->rq = req->next;
> +        virtio_blk_free_request(req);
> +    }
>  
>      if (s->dataplane) {
>          virtio_blk_data_plane_stop(s->dataplane);
> 

I'd prefer if Paolo's remark (about blk_drain()'s ability to produce
more failed requests, stashed in s->rq) were captured in either the
commit message, or in a code comment. Something like:

  /* We drop queued requests after blk_drain() because blk_drain()
   * itself can produce them. */

What do you think? It's your call. I certainly lacked that bit of
information.

Reviewed-by: Laszlo Ersek <lersek@redhat.com>

Thanks!
Laszlo
Paolo Bonzini Aug. 2, 2016, 5 p.m. UTC | #2
> I'd prefer if Paolo's remark (about blk_drain()'s ability to produce
> more failed requests, stashed in s->rq) were captured in either the
> commit message, or in a code comment. Something like:
> 
>   /* We drop queued requests after blk_drain() because blk_drain()
>    * itself can produce them. */

It's also (perhaps especially) because blk_drain() can consume them.  Fam's
patch to do blk_drain() first would cause a double-free.

Paolo

> What do you think? It's your call. I certainly lacked that bit of
> information.
> 
> Reviewed-by: Laszlo Ersek <lersek@redhat.com>
> 
> Thanks!
> Laszlo
>
Fam Zheng Aug. 3, 2016, 12:52 a.m. UTC | #3
On Tue, 08/02 13:00, Paolo Bonzini wrote:
> 
> > I'd prefer if Paolo's remark (about blk_drain()'s ability to produce
> > more failed requests, stashed in s->rq) were captured in either the
> > commit message, or in a code comment. Something like:
> > 
> >   /* We drop queued requests after blk_drain() because blk_drain()
> >    * itself can produce them. */
> 
> It's also (perhaps especially) because blk_drain() can consume them.  Fam's
> patch to do blk_drain() first would cause a double-free.

That "consume" part is what I don't understand.

Shouldn't blk_drain() only process submitted requests (and further requests
they dequeue indirectly), while s->rq only contains failed requests? The two
don't seem to overlap, because I suppose failed requests are only going to be
processed on a run state change.

What am I missing?

Fam
Paolo Bonzini Aug. 4, 2016, 12:42 p.m. UTC | #4
On 03/08/2016 02:52, Fam Zheng wrote:
>> > It's also (perhaps especially) because blk_drain() can consume them.  Fam's
>> > patch to do blk_drain() first would cause a double-free.
> That "consume" part is what I don't understand.
> 
> Shouldn't blk_drain() only process submitted requests (and further requests
> they dequeue indirectly), while s->rq only contains failed requests?

Never mind, I was confused.  virtio_blk_init_request doesn't store the
requests in a list, unlike SCSI.

Paolo

Patch

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 475a822..12587d9 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -654,6 +654,7 @@ static void virtio_blk_reset(VirtIODevice *vdev)
 {
     VirtIOBlock *s = VIRTIO_BLK(vdev);
     AioContext *ctx;
+    VirtIOBlockReq *req;
 
     /*
      * This should cancel pending requests, but can't do nicely until there
@@ -662,6 +663,11 @@ static void virtio_blk_reset(VirtIODevice *vdev)
     ctx = blk_get_aio_context(s->blk);
     aio_context_acquire(ctx);
     blk_drain(s->blk);
+    while (s->rq) {
+        req = s->rq;
+        s->rq = req->next;
+        virtio_blk_free_request(req);
+    }
 
     if (s->dataplane) {
         virtio_blk_data_plane_stop(s->dataplane);