From patchwork Wed Dec 12 11:16:25 2018
X-Patchwork-Submitter: Paul Durrant
X-Patchwork-Id: 1011839
From: Paul Durrant
Date: Wed, 12 Dec 2018 11:16:25 +0000
Message-ID: <1544613386-22045-3-git-send-email-paul.durrant@citrix.com>
X-Mailer: git-send-email 2.1.4
In-Reply-To: <1544613386-22045-1-git-send-email-paul.durrant@citrix.com>
References: <1544613386-22045-1-git-send-email-paul.durrant@citrix.com>
Subject: [Qemu-devel] [PATCH v3 2/3] xen-block: improve response latency
Cc: Kevin Wolf, Stefano Stabellini, Tim Smith, Max Reitz, Paul Durrant,
 Stefan Hajnoczi, Anthony Perard

From: Tim Smith

If the I/O ring is full, the guest cannot send any more requests until
some responses are sent. Only sending all available responses just
before checking for new work does not leave much time for the guest to
supply new work, so this will cause stalls if the ring gets full. Also,
not completing reads as soon as possible adds latency to the guest.

To alleviate that, complete I/O requests as soon as they come back.
xen_block_send_response() already returns a value indicating whether a
notify should be sent, which is all the batching we need.

Signed-off-by: Tim Smith

Re-based and commit comment adjusted.
Signed-off-by: Paul Durrant
---
Cc: Stefan Hajnoczi
Cc: Stefano Stabellini
Cc: Anthony Perard
Cc: Kevin Wolf
Cc: Max Reitz
---
 hw/block/dataplane/xen-block.c | 56 ++++++++++++++----------------------------
 1 file changed, 18 insertions(+), 38 deletions(-)

diff --git a/hw/block/dataplane/xen-block.c b/hw/block/dataplane/xen-block.c
index db17ab5..b4ff2e3 100644
--- a/hw/block/dataplane/xen-block.c
+++ b/hw/block/dataplane/xen-block.c
@@ -55,11 +55,9 @@ struct XenBlockDataPlane {
     blkif_back_rings_t rings;
     int more_work;
     QLIST_HEAD(inflight_head, XenBlockRequest) inflight;
-    QLIST_HEAD(finished_head, XenBlockRequest) finished;
     QLIST_HEAD(freelist_head, XenBlockRequest) freelist;
     int requests_total;
     int requests_inflight;
-    int requests_finished;
     unsigned int max_requests;
     BlockBackend *blk;
     QEMUBH *bh;
@@ -116,12 +114,10 @@ static void xen_block_finish_request(XenBlockRequest *request)
     XenBlockDataPlane *dataplane = request->dataplane;
 
     QLIST_REMOVE(request, list);
-    QLIST_INSERT_HEAD(&dataplane->finished, request, list);
     dataplane->requests_inflight--;
-    dataplane->requests_finished++;
 }
 
-static void xen_block_release_request(XenBlockRequest *request, bool finish)
+static void xen_block_release_request(XenBlockRequest *request)
 {
     XenBlockDataPlane *dataplane = request->dataplane;
 
@@ -129,11 +125,7 @@ static void xen_block_release_request(XenBlockRequest *request, bool finish)
     reset_request(request);
     request->dataplane = dataplane;
     QLIST_INSERT_HEAD(&dataplane->freelist, request, list);
-    if (finish) {
-        dataplane->requests_finished--;
-    } else {
-        dataplane->requests_inflight--;
-    }
+    dataplane->requests_inflight--;
 }
 
 /*
@@ -248,6 +240,7 @@ static int xen_block_copy_request(XenBlockRequest *request)
 }
 
 static int xen_block_do_aio(XenBlockRequest *request);
+static int xen_block_send_response(XenBlockRequest *request);
 
 static void xen_block_complete_aio(void *opaque, int ret)
 {
@@ -312,6 +305,18 @@ static void xen_block_complete_aio(void *opaque, int ret)
     default:
         break;
     }
+
+    if (xen_block_send_response(request)) {
+        Error *local_err = NULL;
+
+        xen_device_notify_event_channel(dataplane->xendev,
+                                        dataplane->event_channel,
+                                        &local_err);
+        if (local_err) {
+            error_report_err(local_err);
+        }
+    }
+    xen_block_release_request(request);
     qemu_bh_schedule(dataplane->bh);
 
 done:
@@ -419,7 +424,7 @@ err:
     return -1;
 }
 
-static int xen_block_send_response_one(XenBlockRequest *request)
+static int xen_block_send_response(XenBlockRequest *request)
 {
     XenBlockDataPlane *dataplane = request->dataplane;
     int send_notify = 0;
@@ -474,29 +479,6 @@ static int xen_block_send_response_one(XenBlockRequest *request)
     return send_notify;
 }
 
-/* walk finished list, send outstanding responses, free requests */
-static void xen_block_send_response_all(XenBlockDataPlane *dataplane)
-{
-    XenBlockRequest *request;
-    int send_notify = 0;
-
-    while (!QLIST_EMPTY(&dataplane->finished)) {
-        request = QLIST_FIRST(&dataplane->finished);
-        send_notify += xen_block_send_response_one(request);
-        xen_block_release_request(request, true);
-    }
-    if (send_notify) {
-        Error *local_err = NULL;
-
-        xen_device_notify_event_channel(dataplane->xendev,
-                                        dataplane->event_channel,
-                                        &local_err);
-        if (local_err) {
-            error_report_err(local_err);
-        }
-    }
-}
-
 static int xen_block_get_request(XenBlockDataPlane *dataplane,
                                  XenBlockRequest *request, RING_IDX rc)
 {
@@ -547,7 +529,6 @@ static void xen_block_handle_requests(XenBlockDataPlane *dataplane)
 
     rp = dataplane->rings.common.sring->req_prod;
     xen_rmb(); /* Ensure we see queued requests up to 'rp'. */
-    xen_block_send_response_all(dataplane);
     /*
      * If there was more than IO_PLUG_THRESHOLD requests in flight
      * when we got here, this is an indication that there the bottleneck
@@ -591,7 +572,7 @@ static void xen_block_handle_requests(XenBlockDataPlane *dataplane)
                 break;
             };
 
-            if (xen_block_send_response_one(request)) {
+            if (xen_block_send_response(request)) {
                 Error *local_err = NULL;
 
                 xen_device_notify_event_channel(dataplane->xendev,
@@ -601,7 +582,7 @@ static void xen_block_handle_requests(XenBlockDataPlane *dataplane)
                     error_report_err(local_err);
                 }
             }
-            xen_block_release_request(request, false);
+            xen_block_release_request(request);
             continue;
         }
 
@@ -657,7 +638,6 @@ XenBlockDataPlane *xen_block_dataplane_create(XenDevice *xendev,
     dataplane->file_size = blk_getlength(dataplane->blk);
 
     QLIST_INIT(&dataplane->inflight);
-    QLIST_INIT(&dataplane->finished);
     QLIST_INIT(&dataplane->freelist);
 
     if (iothread) {
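
[Editor's note: for readers without the QEMU tree to hand, below is a
minimal sketch of the completion-path pattern this patch moves to:
push each response onto the shared ring as soon as its backend I/O
finishes, and let the ring protocol decide whether the guest needs an
event. The ring types and macros (blkif_back_ring_t, blkif_response_t,
RING_GET_RESPONSE, RING_PUSH_RESPONSES_AND_CHECK_NOTIFY) are the
standard ones from Xen's public io/blkif.h and io/ring.h; struct
my_dev, notify_guest() and the surrounding names are hypothetical
stand-ins, not the xen-block code itself.]

#include <stdint.h>
#include <xen/io/blkif.h>   /* blkif_back_ring_t, blkif_response_t */
#include <xen/io/ring.h>    /* RING_* macros */

/* Hypothetical backend state; the real XenBlockDataPlane carries
 * much more (request lists, BlockBackend, event channel, ...). */
struct my_dev {
    blkif_back_ring_t ring;
};

extern void notify_guest(struct my_dev *dev); /* kick the event channel */

/* Complete one request the moment its backend I/O returns. */
static void complete_one_request(struct my_dev *dev, uint64_t id,
                                 uint8_t op, int16_t status)
{
    int notify;
    blkif_response_t *rsp =
        RING_GET_RESPONSE(&dev->ring, dev->ring.rsp_prod_pvt);

    /* Fill in this request's response and advance the private
     * producer index. */
    rsp->id = id;
    rsp->operation = op;
    rsp->status = status;           /* e.g. BLKIF_RSP_OKAY */
    dev->ring.rsp_prod_pvt++;

    /*
     * Publish the response. The macro compares the new rsp_prod with
     * the guest's rsp_event, so a guest still working through earlier
     * responses is not interrupted again; consecutive completions
     * coalesce into a single notify, which is all the batching the
     * commit message relies on.
     */
    RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(&dev->ring, notify);
    if (notify) {
        notify_guest(dev);
    }
}

Because the notify decision now happens at each completion, the
'finished' list and xen_block_send_response_all() have nothing left to
do, which is why the patch can delete them.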