Message ID: 20230404112044.427062-1-stefanha@redhat.com
State: New
Series: block/nvme: use AIO_WAIT_WHILE_UNLOCKED()
On 4/4/23 13:20, Stefan Hajnoczi wrote:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
>
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
>
> This is a step towards removing the AioContext lock.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/nvme.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
On 4/4/23 13:20, Stefan Hajnoczi wrote:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
>
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
>
> This is a step towards removing the AioContext lock.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/nvme.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index 5b744c2bda..829b9c04db 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>  {
>      BDRVNVMeState *s = bs->opaque;
>      NVMeQueuePair *q = s->queues[INDEX_ADMIN];
> -    AioContext *aio_context = bdrv_get_aio_context(bs);
>      NVMeRequest *req;
>      int ret = -EINPROGRESS;
>      req = nvme_get_free_req_nowait(q);
> @@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>      }
>      nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
>
> -    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
> +    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
>      return ret;
>  }

Reviewed-by: Paolo Bonzini <pbonzini@redhat.com>
On 04.04.2023 at 13:20, Stefan Hajnoczi wrote:
> A few Admin Queue commands are submitted during nvme_file_open(). They
> are synchronous since device initialization cannot continue until the
> commands complete.
>
> AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
> doesn't rely on the AioContext lock. Replace it with
> AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
> and the dependency on the AioContext lock is eliminated.
>
> This is a step towards removing the AioContext lock.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/nvme.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/block/nvme.c b/block/nvme.c
> index 5b744c2bda..829b9c04db 100644
> --- a/block/nvme.c
> +++ b/block/nvme.c
> @@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>  {
>      BDRVNVMeState *s = bs->opaque;
>      NVMeQueuePair *q = s->queues[INDEX_ADMIN];
> -    AioContext *aio_context = bdrv_get_aio_context(bs);
>      NVMeRequest *req;
>      int ret = -EINPROGRESS;
>      req = nvme_get_free_req_nowait(q);
> @@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
>      }
>      nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);
>
> -    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
> +    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
>      return ret;
>  }

Wait, do we hold the lock in this piece of code or don't we? Either the
old code was buggy (then the commit message should be explicit about
it), or the new one is.

It seems that in practice, it doesn't matter much because this function
is only called through .bdrv_file_open, which I think always runs in the
main thread for a protocol driver. I believe that we generally don't
hold the AioContext lock there, so it's the old code that was wrong?

Kevin
diff --git a/block/nvme.c b/block/nvme.c
index 5b744c2bda..829b9c04db 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -512,7 +512,6 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
 {
     BDRVNVMeState *s = bs->opaque;
     NVMeQueuePair *q = s->queues[INDEX_ADMIN];
-    AioContext *aio_context = bdrv_get_aio_context(bs);
     NVMeRequest *req;
     int ret = -EINPROGRESS;
     req = nvme_get_free_req_nowait(q);
@@ -521,7 +520,7 @@ static int nvme_admin_cmd_sync(BlockDriverState *bs, NvmeCmd *cmd)
     }
     nvme_submit_command(q, req, cmd, nvme_admin_cmd_sync_cb, &ret);

-    AIO_WAIT_WHILE(aio_context, ret == -EINPROGRESS);
+    AIO_WAIT_WHILE_UNLOCKED(NULL, ret == -EINPROGRESS);
     return ret;
 }
A few Admin Queue commands are submitted during nvme_file_open(). They
are synchronous since device initialization cannot continue until the
commands complete.

AIO_WAIT_WHILE() is currently used, but the block/nvme.c code actually
doesn't rely on the AioContext lock. Replace it with
AIO_WAIT_WHILE_UNLOCKED(NULL, condition). There is no change in behavior
and the dependency on the AioContext lock is eliminated.

This is a step towards removing the AioContext lock.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/nvme.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)