Message ID | 20200401162023.GA15912@simran-Inspiron-5558
State | New
Series | lockable: Replace locks with lock guard macros
Patchew URL: https://patchew.org/QEMU/20200401162023.GA15912@simran-Inspiron-5558/

Hi,

This series failed the asan build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
export ARCH=x86_64
make docker-image-fedora V=1 NETWORK=1
time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
=== TEST SCRIPT END ===

  CC      x86_64-softmmu/hw/i386/x86.o
  CC      x86_64-softmmu/hw/i386/pc.o
  CC      x86_64-softmmu/hw/i386/pc_sysfw.o
/tmp/qemu-test/src/hw/rdma/rdma_utils.c:74:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&list->lock);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
<scratch space>:194:1: note: expanded from here
qemu_lockable_auto__COUNTER__
^
/tmp/qemu-test/src/hw/rdma/rdma_utils.c:109:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&list->lock);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
<scratch space>:207:1: note: expanded from here
qemu_lockable_auto__COUNTER__
^
/tmp/qemu-test/src/hw/rdma/rdma_utils.c:116:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&list->lock);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
qemu_lockable_auto__COUNTER__
^
3 errors generated.
make[1]: *** [/tmp/qemu-test/src/rules.mak:69: hw/rdma/rdma_utils.o] Error 1
make[1]: *** Waiting for unfinished jobs....
  CC      x86_64-softmmu/hw/i386/pc_piix.o
/tmp/qemu-test/src/hw/hyperv/hyperv.c:495:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&handlers_mutex);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
<scratch space>:24:1: note: expanded from here
qemu_lockable_auto__COUNTER__
^
/tmp/qemu-test/src/hw/hyperv/hyperv.c:568:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&handlers_mutex);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
qemu_lockable_auto__COUNTER__
^
2 errors generated.
make[1]: *** [/tmp/qemu-test/src/rules.mak:69: hw/hyperv/hyperv.o] Error 1
/tmp/qemu-test/src/hw/rdma/rdma_rm.c:150:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
    QEMU_LOCK_GUARD(&tbl->lock);
    ^
/tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
---
qemu_lockable_auto__COUNTER__
^
1 error generated.
make[1]: *** [/tmp/qemu-test/src/rules.mak:69: hw/rdma/rdma_rm.o] Error 1
make: *** [Makefile:527: x86_64-softmmu/all] Error 2
Traceback (most recent call last):
  File "./tests/docker/docker.py", line 664, in <module>
    sys.exit(main())
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=d930676030934c1fbb5e45fcfd347949', '-u', '1001', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=x86_64-softmmu', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-ie9_gn1g/src/docker-src.2020-04-01-14.53.27.23876:/var/tmp/qemu:z,ro', 'qemu:fedora', '/var/tmp/qemu/run', 'test-debug']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=d930676030934c1fbb5e45fcfd347949
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-ie9_gn1g/src'
make: *** [docker-run-test-debug@fedora] Error 2

real    4m7.415s
user    0m8.437s

The full log is available at
http://patchew.org/logs/20200401162023.GA15912@simran-Inspiron-5558/testing.asan/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
On Wed, Apr 01, 2020 at 11:57:34AM -0700, no-reply@patchew.org wrote:
> Patchew URL: https://patchew.org/QEMU/20200401162023.GA15912@simran-Inspiron-5558/
>
> Hi,
>
> This series failed the asan build test. Please find the testing commands and
> their output below. If you have Docker installed, you can probably reproduce it
> locally.
>
> === TEST SCRIPT BEGIN ===
> #!/bin/bash
> export ARCH=x86_64
> make docker-image-fedora V=1 NETWORK=1
> time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
> === TEST SCRIPT END ===
>
>   CC      x86_64-softmmu/hw/i386/x86.o
>   CC      x86_64-softmmu/hw/i386/pc.o
>   CC      x86_64-softmmu/hw/i386/pc_sysfw.o
> /tmp/qemu-test/src/hw/rdma/rdma_utils.c:74:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
>     QEMU_LOCK_GUARD(&list->lock);
>     ^
> /tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'

This patch series depends on Daniel Brodsky's "[PATCH v4 0/2] Replaced
locks with lock guard macros" patch series. I think Daniel intends to
fix the unused variable warnings.

Stefan
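The clang diagnostic above hints at two separate failure modes in the guard macro: the guard variable is literally named `qemu_lockable_auto__COUNTER__` (so `__COUNTER__` appears not to have been expanded before token pasting), and the variable trips `-Wunused-variable` under `-Werror` because nothing references it by name. A minimal, self-contained sketch of a cleanup-attribute lock guard that avoids both problems — `LOCK_GUARD`, `GLUE`, and `unlock_cleanup` are illustrative stand-ins, not QEMU's actual `include/qemu/lockable.h` implementation:

```c
#include <assert.h>
#include <pthread.h>

/* Two-level paste so __COUNTER__ is expanded *before* concatenation.
 * With a single-level a##b, the guard would literally be named
 * "lock_guard___COUNTER__" -- compare the variable name in the clang
 * diagnostic above. */
#define GLUE_(a, b) a##b
#define GLUE(a, b) GLUE_(a, b)

static void unlock_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);
}

/* The "unused" attribute silences -Wunused-variable under -Werror:
 * the guard is only touched by its cleanup handler, never by name. */
#define LOCK_GUARD(m)                                              \
    pthread_mutex_t *GLUE(lock_guard_, __COUNTER__)                \
        __attribute__((cleanup(unlock_cleanup), unused)) =         \
        (pthread_mutex_lock(m), (m))

static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static int counter;

static void bump(void)
{
    LOCK_GUARD(&mu);   /* unlocked automatically when bump() returns */
    counter++;
}
```

This relies on the GCC/Clang `cleanup` variable attribute, so it is compiler-specific rather than standard C.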
Hello Stefan

On Fri, Apr 3, 2020 at 6:30 PM Stefan Hajnoczi <stefanha@gmail.com> wrote:
> On Wed, Apr 01, 2020 at 11:57:34AM -0700, no-reply@patchew.org wrote:
> > Patchew URL: https://patchew.org/QEMU/20200401162023.GA15912@simran-Inspiron-5558/
> >
> > Hi,
> >
> > This series failed the asan build test. Please find the testing commands and
> > their output below. If you have Docker installed, you can probably reproduce it
> > locally.
> >
> > === TEST SCRIPT BEGIN ===
> > #!/bin/bash
> > export ARCH=x86_64
> > make docker-image-fedora V=1 NETWORK=1
> > time make docker-test-debug@fedora TARGET_LIST=x86_64-softmmu J=14 NETWORK=1
> > === TEST SCRIPT END ===
> >
> >   CC      x86_64-softmmu/hw/i386/x86.o
> >   CC      x86_64-softmmu/hw/i386/pc.o
> >   CC      x86_64-softmmu/hw/i386/pc_sysfw.o
> > /tmp/qemu-test/src/hw/rdma/rdma_utils.c:74:5: error: unused variable 'qemu_lockable_auto__COUNTER__' [-Werror,-Wunused-variable]
> >     QEMU_LOCK_GUARD(&list->lock);
> >     ^
> > /tmp/qemu-test/src/include/qemu/lockable.h:173:29: note: expanded from macro 'QEMU_LOCK_GUARD'
>
> This patch series depends on Daniel Brodsky's "[PATCH v4 0/2] Replaced
> locks with lock guard macros" patch series. I think Daniel intends to
> fix the unused variable warnings.

Yes, I saw that patch series. I'll send the next version after Daniel
Brodsky fixes the unused variable warnings.

Thanks & Regards
Simran Singhal

> Stefan
On Wed, Apr 01, 2020 at 09:50:23PM +0530, Simran Singhal wrote:
> Replace manual lock()/unlock() calls with lock guard macros
> (QEMU_LOCK_GUARD/WITH_QEMU_LOCK_GUARD).
>
> Signed-off-by: Simran Singhal <singhalsimran0@gmail.com>
> ---
>  hw/hyperv/hyperv.c     | 15 ++++++-------
>  hw/rdma/rdma_backend.c | 50 +++++++++++++++++++++---------------------
>  hw/rdma/rdma_rm.c      |  3 +--
>  hw/rdma/rdma_utils.c   | 15 +++++--------
>  4 files changed, 39 insertions(+), 44 deletions(-)
>
> diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
> index 8ca3706f5b..4ddafe1de1 100644
> --- a/hw/hyperv/hyperv.c
> +++ b/hw/hyperv/hyperv.c
> @@ -15,6 +15,7 @@
>  #include "sysemu/kvm.h"
>  #include "qemu/bitops.h"
>  #include "qemu/error-report.h"
> +#include "qemu/lockable.h"
>  #include "qemu/queue.h"
>  #include "qemu/rcu.h"
>  #include "qemu/rcu_queue.h"
> @@ -491,7 +492,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
>      int ret;
>      MsgHandler *mh;
>
> -    qemu_mutex_lock(&handlers_mutex);
> +    QEMU_LOCK_GUARD(&handlers_mutex);
>      QLIST_FOREACH(mh, &msg_handlers, link) {
>          if (mh->conn_id == conn_id) {
>              if (handler) {
> @@ -501,7 +502,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
>                  g_free_rcu(mh, rcu);
>                  ret = 0;
>              }
> -            goto unlock;
> +            return ret;
>          }
>      }
>
> @@ -515,8 +516,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
>      } else {
>          ret = -ENOENT;
>      }
> -unlock:
> -    qemu_mutex_unlock(&handlers_mutex);
> +
>      return ret;
>  }

If you respin this patch, this function could be further simplified by
removing ret and using "return 0", "return -EEXIST", or "return -ENOENT"
directly instead.

> @@ -565,7 +565,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
>      int ret;
>      EventFlagHandler *handler;
>
> -    qemu_mutex_lock(&handlers_mutex);
> +    QEMU_LOCK_GUARD(&handlers_mutex);
>      QLIST_FOREACH(handler, &event_flag_handlers, link) {
>          if (handler->conn_id == conn_id) {
>              if (notifier) {
> @@ -575,7 +575,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
>                  g_free_rcu(handler, rcu);
>                  ret = 0;
>              }
> -            goto unlock;
> +            return ret;
>          }
>      }
>
> @@ -588,8 +588,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
>      } else {
>          ret = -ENOENT;
>      }
> -unlock:
> -    qemu_mutex_unlock(&handlers_mutex);
> +
>      return ret;
>  }

Same in this function. Otherwise:

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
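Stefan's suggestion composes naturally with the guard: once the unlock happens implicitly at scope exit, the `ret` accumulator adds nothing. A hedged sketch of the shape he describes, using a toy fixed-size table and a pthread-based guard in place of QEMU's QLIST and QEMU_LOCK_GUARD (all names below are illustrative, not QEMU code):

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

static void unlock_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);
}

#define GLUE_(a, b) a##b
#define GLUE(a, b) GLUE_(a, b)
#define LOCK_GUARD(m)                                              \
    pthread_mutex_t *GLUE(lock_guard_, __COUNTER__)                \
        __attribute__((cleanup(unlock_cleanup), unused)) =         \
        (pthread_mutex_lock(m), (m))

/* Illustrative stand-ins for QEMU's MsgHandler machinery: a fixed
 * table instead of a QLIST, and no RCU. */
typedef void (*HvMsgHandler)(void *data);

struct mh {
    unsigned conn_id;
    HvMsgHandler handler;   /* NULL means the slot is free */
};

#define MAX_MH 4
static struct mh msg_handlers[MAX_MH];
static pthread_mutex_t handlers_mutex = PTHREAD_MUTEX_INITIALIZER;

static void nop_handler(void *data)
{
    (void)data;
}

/* With a scoped guard there is no "unlock:" label and no need for a
 * ret variable: every path returns its result code directly. */
static int set_msg_handler(unsigned conn_id, HvMsgHandler handler)
{
    LOCK_GUARD(&handlers_mutex);

    for (int i = 0; i < MAX_MH; i++) {
        if (msg_handlers[i].handler && msg_handlers[i].conn_id == conn_id) {
            if (handler) {
                return -EEXIST;              /* already registered */
            }
            msg_handlers[i].handler = NULL;  /* unregister */
            return 0;
        }
    }

    if (!handler) {
        return -ENOENT;                      /* nothing to remove */
    }

    for (int i = 0; i < MAX_MH; i++) {
        if (!msg_handlers[i].handler) {
            msg_handlers[i] = (struct mh){ conn_id, handler };
            return 0;
        }
    }
    return -ENOMEM;  /* table full; QEMU's list-based version grows instead */
}
```

Every early `return` releases `handlers_mutex` through the guard's cleanup handler, which is exactly why the `goto unlock` pattern becomes unnecessary.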
For the hw/rdma stuff:

Reviewed-by: Yuval Shaia <yuval.shaia.ml@gmail.com>
Tested-by: Yuval Shaia <yuval.shaia.ml@gmail.com>

Thanks,
Yuval

On Wed, 1 Apr 2020 at 19:20, Simran Singhal <singhalsimran0@gmail.com> wrote:
> Replace manual lock()/unlock() calls with lock guard macros
> (QEMU_LOCK_GUARD/WITH_QEMU_LOCK_GUARD).
>
> Signed-off-by: Simran Singhal <singhalsimran0@gmail.com>
> ---
>  hw/hyperv/hyperv.c     | 15 ++++++-------
>  hw/rdma/rdma_backend.c | 50 +++++++++++++++++++++---------------------
>  hw/rdma/rdma_rm.c      |  3 +--
>  hw/rdma/rdma_utils.c   | 15 +++++--------
>  4 files changed, 39 insertions(+), 44 deletions(-)
> [...]
diff --git a/hw/hyperv/hyperv.c b/hw/hyperv/hyperv.c
index 8ca3706f5b..4ddafe1de1 100644
--- a/hw/hyperv/hyperv.c
+++ b/hw/hyperv/hyperv.c
@@ -15,6 +15,7 @@
 #include "sysemu/kvm.h"
 #include "qemu/bitops.h"
 #include "qemu/error-report.h"
+#include "qemu/lockable.h"
 #include "qemu/queue.h"
 #include "qemu/rcu.h"
 #include "qemu/rcu_queue.h"
@@ -491,7 +492,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
     int ret;
     MsgHandler *mh;
 
-    qemu_mutex_lock(&handlers_mutex);
+    QEMU_LOCK_GUARD(&handlers_mutex);
     QLIST_FOREACH(mh, &msg_handlers, link) {
         if (mh->conn_id == conn_id) {
             if (handler) {
@@ -501,7 +502,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
                 g_free_rcu(mh, rcu);
                 ret = 0;
             }
-            goto unlock;
+            return ret;
         }
     }
 
@@ -515,8 +516,7 @@ int hyperv_set_msg_handler(uint32_t conn_id, HvMsgHandler handler, void *data)
     } else {
         ret = -ENOENT;
     }
-unlock:
-    qemu_mutex_unlock(&handlers_mutex);
+
     return ret;
 }
 
@@ -565,7 +565,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
     int ret;
     EventFlagHandler *handler;
 
-    qemu_mutex_lock(&handlers_mutex);
+    QEMU_LOCK_GUARD(&handlers_mutex);
     QLIST_FOREACH(handler, &event_flag_handlers, link) {
         if (handler->conn_id == conn_id) {
             if (notifier) {
@@ -575,7 +575,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
                 g_free_rcu(handler, rcu);
                 ret = 0;
             }
-            goto unlock;
+            return ret;
         }
     }
 
@@ -588,8 +588,7 @@ static int set_event_flag_handler(uint32_t conn_id, EventNotifier *notifier)
     } else {
         ret = -ENOENT;
     }
-unlock:
-    qemu_mutex_unlock(&handlers_mutex);
+
     return ret;
 }
 
diff --git a/hw/rdma/rdma_backend.c b/hw/rdma/rdma_backend.c
index 3dd39fe1a7..db7e5c8be5 100644
--- a/hw/rdma/rdma_backend.c
+++ b/hw/rdma/rdma_backend.c
@@ -95,36 +95,36 @@ static int rdma_poll_cq(RdmaDeviceResources *rdma_dev_res, struct ibv_cq *ibcq)
     struct ibv_wc wc[2];
     RdmaProtectedGSList *cqe_ctx_list;
 
-    qemu_mutex_lock(&rdma_dev_res->lock);
-    do {
-        ne = ibv_poll_cq(ibcq, ARRAY_SIZE(wc), wc);
+    WITH_QEMU_LOCK_GUARD(&rdma_dev_res->lock) {
+        do {
+            ne = ibv_poll_cq(ibcq, ARRAY_SIZE(wc), wc);
 
-        trace_rdma_poll_cq(ne, ibcq);
+            trace_rdma_poll_cq(ne, ibcq);
 
-        for (i = 0; i < ne; i++) {
-            bctx = rdma_rm_get_cqe_ctx(rdma_dev_res, wc[i].wr_id);
-            if (unlikely(!bctx)) {
-                rdma_error_report("No matching ctx for req %"PRId64,
-                                  wc[i].wr_id);
-                continue;
-            }
+            for (i = 0; i < ne; i++) {
+                bctx = rdma_rm_get_cqe_ctx(rdma_dev_res, wc[i].wr_id);
+                if (unlikely(!bctx)) {
+                    rdma_error_report("No matching ctx for req %"PRId64,
+                                      wc[i].wr_id);
+                    continue;
+                }
 
-            comp_handler(bctx->up_ctx, &wc[i]);
+                comp_handler(bctx->up_ctx, &wc[i]);
 
-            if (bctx->backend_qp) {
-                cqe_ctx_list = &bctx->backend_qp->cqe_ctx_list;
-            } else {
-                cqe_ctx_list = &bctx->backend_srq->cqe_ctx_list;
-            }
+                if (bctx->backend_qp) {
+                    cqe_ctx_list = &bctx->backend_qp->cqe_ctx_list;
+                } else {
+                    cqe_ctx_list = &bctx->backend_srq->cqe_ctx_list;
+                }
 
-            rdma_protected_gslist_remove_int32(cqe_ctx_list, wc[i].wr_id);
-            rdma_rm_dealloc_cqe_ctx(rdma_dev_res, wc[i].wr_id);
-            g_free(bctx);
-        }
-        total_ne += ne;
-    } while (ne > 0);
-    atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
-    qemu_mutex_unlock(&rdma_dev_res->lock);
+                rdma_protected_gslist_remove_int32(cqe_ctx_list, wc[i].wr_id);
+                rdma_rm_dealloc_cqe_ctx(rdma_dev_res, wc[i].wr_id);
+                g_free(bctx);
+            }
+            total_ne += ne;
+        } while (ne > 0);
+        atomic_sub(&rdma_dev_res->stats.missing_cqe, total_ne);
+    }
 
     if (ne < 0) {
         rdma_error_report("ibv_poll_cq fail, rc=%d, errno=%d", ne, errno);
diff --git a/hw/rdma/rdma_rm.c b/hw/rdma/rdma_rm.c
index 7e9ea283c9..60957f88db 100644
--- a/hw/rdma/rdma_rm.c
+++ b/hw/rdma/rdma_rm.c
@@ -147,14 +147,13 @@ static inline void rdma_res_tbl_dealloc(RdmaRmResTbl *tbl, uint32_t handle)
 {
     trace_rdma_res_tbl_dealloc(tbl->name, handle);
 
-    qemu_mutex_lock(&tbl->lock);
+    QEMU_LOCK_GUARD(&tbl->lock);
 
     if (handle < tbl->tbl_sz) {
         clear_bit(handle, tbl->bitmap);
         tbl->used--;
     }
 
-    qemu_mutex_unlock(&tbl->lock);
 }
 
 int rdma_rm_alloc_pd(RdmaDeviceResources *dev_res, RdmaBackendDev *backend_dev,
diff --git a/hw/rdma/rdma_utils.c b/hw/rdma/rdma_utils.c
index 73f279104c..ef43606c84 100644
--- a/hw/rdma/rdma_utils.c
+++ b/hw/rdma/rdma_utils.c
@@ -71,18 +71,17 @@ void rdma_protected_qlist_destroy(RdmaProtectedQList *list)
 
 void rdma_protected_qlist_append_int64(RdmaProtectedQList *list, int64_t value)
 {
-    qemu_mutex_lock(&list->lock);
+    QEMU_LOCK_GUARD(&list->lock);
     qlist_append_int(list->list, value);
-    qemu_mutex_unlock(&list->lock);
 }
 
 int64_t rdma_protected_qlist_pop_int64(RdmaProtectedQList *list)
 {
     QObject *obj;
 
-    qemu_mutex_lock(&list->lock);
-    obj = qlist_pop(list->list);
-    qemu_mutex_unlock(&list->lock);
+    WITH_QEMU_LOCK_GUARD(&list->lock) {
+        obj = qlist_pop(list->list);
+    }
 
     if (!obj) {
         return -ENOENT;
@@ -107,15 +106,13 @@ void rdma_protected_gslist_destroy(RdmaProtectedGSList *list)
 void rdma_protected_gslist_append_int32(RdmaProtectedGSList *list, int32_t value)
 {
-    qemu_mutex_lock(&list->lock);
+    QEMU_LOCK_GUARD(&list->lock);
     list->list = g_slist_prepend(list->list, GINT_TO_POINTER(value));
-    qemu_mutex_unlock(&list->lock);
 }
 
 void rdma_protected_gslist_remove_int32(RdmaProtectedGSList *list, int32_t value)
 {
-    qemu_mutex_lock(&list->lock);
+    QEMU_LOCK_GUARD(&list->lock);
     list->list = g_slist_remove(list->list, GINT_TO_POINTER(value));
-    qemu_mutex_unlock(&list->lock);
 }
Replace manual lock()/unlock() calls with lock guard macros
(QEMU_LOCK_GUARD/WITH_QEMU_LOCK_GUARD).

Signed-off-by: Simran Singhal <singhalsimran0@gmail.com>
---
 hw/hyperv/hyperv.c     | 15 ++++++-------
 hw/rdma/rdma_backend.c | 50 +++++++++++++++++++++---------------------
 hw/rdma/rdma_rm.c      |  3 +--
 hw/rdma/rdma_utils.c   | 15 +++++--------
 4 files changed, 39 insertions(+), 44 deletions(-)
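The WITH_QEMU_LOCK_GUARD form used in the rdma changes scopes a critical section to a block, so the mutex is released both at the closing brace and on any early return out of the block. A minimal sketch of how such a scoped guard can be built on the GCC/Clang `cleanup` attribute plus a one-iteration `for` loop — `WITH_LOCK_GUARD`, `GLUE`, and `unlock_cleanup` are illustrative names, not QEMU's implementation:

```c
#include <assert.h>
#include <errno.h>
#include <pthread.h>
#include <stddef.h>

static void unlock_cleanup(pthread_mutex_t **m)
{
    pthread_mutex_unlock(*m);
}

#define GLUE_(a, b) a##b
#define GLUE(a, b) GLUE_(a, b)

/* Scoped guard: lock in the for-init, run the body exactly once, and
 * unlock via the cleanup attribute when the guard leaves scope --
 * including on an early return out of the body. */
#define WITH_LOCK_GUARD(m)                                              \
    for (pthread_mutex_t *GLUE(guard_, __LINE__)                        \
             __attribute__((cleanup(unlock_cleanup), unused)) =         \
             (pthread_mutex_lock(m), (m)),                              \
         *GLUE(once_, __LINE__) = (m);                                  \
         GLUE(once_, __LINE__);                                         \
         GLUE(once_, __LINE__) = NULL)

static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static int stack[8];
static int top;

static void push(int v)
{
    WITH_LOCK_GUARD(&mu) {
        stack[top++] = v;
    }
}

static int pop(void)
{
    WITH_LOCK_GUARD(&mu) {
        if (top > 0) {
            return stack[--top];   /* early return still unlocks */
        }
    }
    return -ENOENT;
}
```

The `for`-loop trick gives the macro block syntax, while the cleanup attribute (rather than the loop increment) owns the unlock, which is what makes early exits safe.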