Message ID: 20230227205704.1910562-2-stefanha@redhat.com
State: New
Series: block: protect BlockBackend->queued_requests with a lock
On 27.02.23 21:57, Stefan Hajnoczi wrote:
> The main loop thread increments/decrements BlockBackend->quiesce_counter
> when drained sections begin/end. The counter is read in the I/O code
> path. Therefore this field is used to communicate between threads
> without a lock.
>
> Use qatomic_set()/qatomic_read() to make it clear that this field is
> accessed by multiple threads.
>
> Acquire/release are not necessary because the BlockBackend->in_flight
> counter already uses sequentially consistent accesses and running I/O
> requests hold that counter when blk_wait_while_drained() is called.
>
> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
> ---
>  block/block-backend.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
>
> diff --git a/block/block-backend.c b/block/block-backend.c
> index 278b04ce69..f00bf2ab35 100644
> --- a/block/block-backend.c
> +++ b/block/block-backend.c

[...]

> @@ -2568,7 +2568,9 @@ static void blk_root_drained_begin(BdrvChild *child)
>      BlockBackend *blk = child->opaque;
>      ThrottleGroupMember *tgm = &blk->public.throttle_group_member;
>
> -    if (++blk->quiesce_counter == 1) {
> +    int new_counter = qatomic_read(&blk->quiesce_counter) + 1;
> +    qatomic_set(&blk->quiesce_counter, new_counter);
> +    if (new_counter == 1) {
>          if (blk->dev_ops && blk->dev_ops->drained_begin) {
>              blk->dev_ops->drained_begin(blk->dev_opaque);
>          }

[...]

> @@ -2597,12 +2599,14 @@ static bool blk_root_drained_poll(BdrvChild *child)

[...]

>      assert(blk->public.throttle_group_member.io_limits_disabled);
>      qatomic_dec(&blk->public.throttle_group_member.io_limits_disabled);
>
> -    if (--blk->quiesce_counter == 0) {
> +    int new_counter = qatomic_read(&blk->quiesce_counter) - 1;
> +    qatomic_set(&blk->quiesce_counter, new_counter);

I don’t quite understand why you decided not to use simple atomic
increments/decrements with just SeqCst in these places. Maybe it is fine
this way, but it isn’t trivial to see. As far as I understand, these
aren’t hot paths, so I don’t think we’d lose performance by using fully
atomic operations here.

Hanna
On Fri, Mar 03, 2023 at 04:29:54PM +0100, Hanna Czenczek wrote:
> On 27.02.23 21:57, Stefan Hajnoczi wrote:
> > The main loop thread increments/decrements BlockBackend->quiesce_counter
> > when drained sections begin/end. The counter is read in the I/O code
> > path. Therefore this field is used to communicate between threads
> > without a lock.

[...]

> I don’t quite understand why you decided not to use simple atomic
> increments/decrements with just SeqCst in these places. Maybe it is fine
> this way, but it isn’t trivial to see. As far as I understand, these aren’t
> hot paths, so I don’t think we’d lose performance by using fully atomic
> operations here.

Good idea. It would be much easier to read.

Stefan
diff --git a/block/block-backend.c b/block/block-backend.c
index 278b04ce69..f00bf2ab35 100644
--- a/block/block-backend.c
+++ b/block/block-backend.c
@@ -80,7 +80,7 @@ struct BlockBackend {
     NotifierList remove_bs_notifiers, insert_bs_notifiers;
     QLIST_HEAD(, BlockBackendAioNotifier) aio_notifiers;

-    int quiesce_counter;
+    int quiesce_counter; /* atomic: written under BQL, read by other threads */
     CoQueue queued_requests;
     bool disable_request_queuing;

@@ -1057,7 +1057,7 @@ void blk_set_dev_ops(BlockBackend *blk, const BlockDevOps *ops,
     blk->dev_opaque = opaque;

     /* Are we currently quiesced? Should we enforce this right now? */
-    if (blk->quiesce_counter && ops && ops->drained_begin) {
+    if (qatomic_read(&blk->quiesce_counter) && ops && ops->drained_begin) {
         ops->drained_begin(opaque);
     }
 }
@@ -1271,7 +1271,7 @@ static void coroutine_fn blk_wait_while_drained(BlockBackend *blk)
 {
     assert(blk->in_flight > 0);

-    if (blk->quiesce_counter && !blk->disable_request_queuing) {
+    if (qatomic_read(&blk->quiesce_counter) && !blk->disable_request_queuing) {
         blk_dec_in_flight(blk);
         qemu_co_queue_wait(&blk->queued_requests, NULL);
         blk_inc_in_flight(blk);
@@ -2568,7 +2568,9 @@ static void blk_root_drained_begin(BdrvChild *child)
     BlockBackend *blk = child->opaque;
     ThrottleGroupMember *tgm = &blk->public.throttle_group_member;

-    if (++blk->quiesce_counter == 1) {
+    int new_counter = qatomic_read(&blk->quiesce_counter) + 1;
+    qatomic_set(&blk->quiesce_counter, new_counter);
+    if (new_counter == 1) {
         if (blk->dev_ops && blk->dev_ops->drained_begin) {
             blk->dev_ops->drained_begin(blk->dev_opaque);
         }
@@ -2586,7 +2588,7 @@ static bool blk_root_drained_poll(BdrvChild *child)
 {
     BlockBackend *blk = child->opaque;
     bool busy = false;
-    assert(blk->quiesce_counter);
+    assert(qatomic_read(&blk->quiesce_counter));

     if (blk->dev_ops && blk->dev_ops->drained_poll) {
         busy = blk->dev_ops->drained_poll(blk->dev_opaque);
@@ -2597,12 +2599,14 @@ static bool blk_root_drained_poll(BdrvChild *child)
 static void blk_root_drained_end(BdrvChild *child)
 {
     BlockBackend *blk = child->opaque;
-    assert(blk->quiesce_counter);
+    assert(qatomic_read(&blk->quiesce_counter));

     assert(blk->public.throttle_group_member.io_limits_disabled);
     qatomic_dec(&blk->public.throttle_group_member.io_limits_disabled);

-    if (--blk->quiesce_counter == 0) {
+    int new_counter = qatomic_read(&blk->quiesce_counter) - 1;
+    qatomic_set(&blk->quiesce_counter, new_counter);
+    if (new_counter == 0) {
         if (blk->dev_ops && blk->dev_ops->drained_end) {
             blk->dev_ops->drained_end(blk->dev_opaque);
         }
The main loop thread increments/decrements BlockBackend->quiesce_counter
when drained sections begin/end. The counter is read in the I/O code
path. Therefore this field is used to communicate between threads
without a lock.

Use qatomic_set()/qatomic_read() to make it clear that this field is
accessed by multiple threads.

Acquire/release are not necessary because the BlockBackend->in_flight
counter already uses sequentially consistent accesses and running I/O
requests hold that counter when blk_wait_while_drained() is called.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 block/block-backend.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)