[06/17] block/nvme: Replace qemu_try_blockalign(bs) by qemu_try_memalign(pg_sz)

Message ID 20200625184838.28172-7-philmd@redhat.com
State New
Series block/nvme: Various cleanups required to use multiple queues

Commit Message

Philippe Mathieu-Daudé June 25, 2020, 6:48 p.m. UTC
qemu_try_blockalign() is a generic API that calls back into the
block driver to get its page alignment. As we call it from
within the very same driver, we already know the page alignment
stored in our state. Remove the indirection and use the value from
BDRVNVMeState.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 block/nvme.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
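
For context, qemu_try_blockalign() is a thin wrapper that asks the
block layer for the driver's preferred memory alignment and then
delegates to the plain aligned allocator. A simplified sketch of the
indirection being removed (based on the block layer of this period;
exact details may differ):

    /* Sketch: look up the BlockDriverState's optimal memory
     * alignment, then fall through to the generic aligned
     * allocator. */
    void *qemu_try_blockalign(BlockDriverState *bs, size_t size)
    {
        return qemu_try_memalign(bdrv_opt_mem_align(bs), size);
    }

Since block/nvme.c is itself the driver answering that query, calling
qemu_try_memalign(s->page_size, size) directly short-circuits the
round trip through the block layer.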

Comments

Stefan Hajnoczi June 26, 2020, 12:24 p.m. UTC | #1
On Thu, Jun 25, 2020 at 08:48:27PM +0200, Philippe Mathieu-Daudé wrote:
> qemu_try_blockalign() is a generic API that calls back into the
> block driver to get its page alignment. As we call it from
> within the very same driver, we already know the page alignment
> stored in our state. Remove the indirection and use the value from
> BDRVNVMeState.

The higher-level qemu_try_blockalign() API does not require all callers
to be aware of the memory alignment details. It seems like a
disadvantage to duplicate that knowledge throughout the code, even if
it's in the same driver source code.

Is there an advantage to this patch that I've missed?

Philippe Mathieu-Daudé June 26, 2020, 12:48 p.m. UTC | #2
On 6/26/20 2:24 PM, Stefan Hajnoczi wrote:
> On Thu, Jun 25, 2020 at 08:48:27PM +0200, Philippe Mathieu-Daudé wrote:
>> qemu_try_blockalign() is a generic API that calls back into the
>> block driver to get its page alignment. As we call it from
>> within the very same driver, we already know the page alignment
>> stored in our state. Remove the indirection and use the value from
>> BDRVNVMeState.
> 
> The higher-level qemu_try_blockalign() API does not require all callers
> to be aware of the memory alignment details. It seems like a
> disadvantage to duplicate that knowledge throughout the code, even if
> it's in the same driver source code.
> 
> Is there an advantage to this patch that I've missed?

This is required to later remove the BlockDriverState argument, so
that nvme_init_queue() becomes per-hardware rather than
per-block-driver.
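
Concretely, a follow-up could then move nvme_init_queue() onto the
per-device state alone. A sketch of the intent (not the final patch):

    /* Hypothetical later step: queue setup no longer takes a
     * BlockDriverState, only the per-device BDRVNVMeState. */
    static void nvme_init_queue(BDRVNVMeState *s, NVMeQueue *q,
                                int nentries, int entry_bytes,
                                Error **errp)
    {
        size_t bytes = ROUND_UP(nentries * entry_bytes, s->page_size);

        q->head = q->tail = 0;
        q->queue = qemu_try_memalign(s->page_size, bytes);
        if (!q->queue) {
            error_setg(errp, "Cannot allocate queue");
        }
    }
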
Stefan Hajnoczi June 29, 2020, 1:07 p.m. UTC | #3
On Fri, Jun 26, 2020 at 02:48:55PM +0200, Philippe Mathieu-Daudé wrote:
> On 6/26/20 2:24 PM, Stefan Hajnoczi wrote:
> > On Thu, Jun 25, 2020 at 08:48:27PM +0200, Philippe Mathieu-Daudé wrote:
> >> qemu_try_blockalign() is a generic API that calls back into the
> >> block driver to get its page alignment. As we call it from
> >> within the very same driver, we already know the page alignment
> >> stored in our state. Remove the indirection and use the value from
> >> BDRVNVMeState.
> > 
> > The higher-level qemu_try_blockalign() API does not require all callers
> > to be aware of the memory alignment details. It seems like a
> > disadvantage to duplicate that knowledge throughout the code, even if
> > it's in the same driver source code.
> > 
> > Is there an advantage to this patch that I've missed?
> 
> This is required to later remove the BlockDriverState argument,
> so nvme_init_queue() is per-hardware, not per-block-driver.

Makes sense. Please include this in the commit description.

Thanks,
Stefan

Patch

diff --git a/block/nvme.c b/block/nvme.c
index bdddcd975d..cec9ace3dd 100644
--- a/block/nvme.c
+++ b/block/nvme.c
@@ -158,7 +158,7 @@ static void nvme_init_queue(BlockDriverState *bs, NVMeQueue *q,
 
     bytes = ROUND_UP(nentries * entry_bytes, s->page_size);
     q->head = q->tail = 0;
-    q->queue = qemu_try_blockalign(bs, bytes);
+    q->queue = qemu_try_memalign(s->page_size, bytes);
     if (!q->queue) {
         error_setg(errp, "Cannot allocate queue");
         return;
@@ -204,7 +204,7 @@ static NVMeQueuePair *nvme_create_queue_pair(BlockDriverState *bs,
     if (!q) {
         return NULL;
     }
-    q->prp_list_pages = qemu_try_blockalign(bs,
+    q->prp_list_pages = qemu_try_memalign(s->page_size,
                                           s->page_size * NVME_QUEUE_SIZE);
     if (!q->prp_list_pages) {
         goto fail;
@@ -451,7 +451,7 @@ static void nvme_identify(BlockDriverState *bs, int namespace, Error **errp)
     };
 
     idsz_max = MAX_CONST(sizeof(NvmeIdCtrl), sizeof(NvmeIdNs));
-    resp = qemu_try_blockalign(bs, idsz_max);
+    resp = qemu_try_memalign(s->page_size, idsz_max);
     if (!resp) {
         error_setg(errp, "Cannot allocate buffer for identify response");
         goto out;
@@ -1061,7 +1061,7 @@ static int nvme_co_prw(BlockDriverState *bs, uint64_t offset, uint64_t bytes,
         return nvme_co_prw_aligned(bs, offset, bytes, qiov, is_write, flags);
     }
     trace_nvme_prw_buffered(s, offset, bytes, qiov->niov, is_write);
-    buf = qemu_try_blockalign(bs, bytes);
+    buf = qemu_try_memalign(s->page_size, bytes);
 
     if (!buf) {
         return -ENOMEM;
@@ -1205,7 +1205,7 @@ static int coroutine_fn nvme_co_pdiscard(BlockDriverState *bs,
 
     assert(s->nr_queues > 1);
 
-    buf = qemu_try_blockalign(bs, s->page_size);
+    buf = qemu_try_memalign(s->page_size, s->page_size);
     if (!buf) {
         return -ENOMEM;
     }
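
Note that the substitution is behaviour-preserving: this driver
advertises its own alignment to the block layer, so the value
qemu_try_blockalign() would have looked up is the same s->page_size.
If memory serves, the driver's limits hook reads roughly as follows
(an abridged assumption, not part of this patch):

    static void nvme_refresh_limits(BlockDriverState *bs, Error **errp)
    {
        BDRVNVMeState *s = bs->opaque;

        /* The alignment the block layer reports back to
         * qemu_try_blockalign() callers. */
        bs->bl.opt_mem_alignment = s->page_size;
    }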