From patchwork Fri Sep 16 14:25:52 2011
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 114934
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Date: Fri, 16 Sep 2011 16:25:52 +0200
Message-Id: <1316183152-5481-16-git-send-email-pbonzini@redhat.com>
X-Mailer: git-send-email 1.7.6
In-Reply-To: <1316183152-5481-1-git-send-email-pbonzini@redhat.com>
References: <1316183152-5481-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH v2 15/15] nbd: allow multiple in-flight requests

Allow sending up to 16 requests, and drive the replies to the coroutine
that issued the request.  The code is written so that it behaves exactly
as before this patch when MAX_NBD_REQUESTS == 1 (modulo the extra mutex
and state).

Signed-off-by: Paolo Bonzini
---
 block/nbd.c |   69 +++++++++++++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/block/nbd.c b/block/nbd.c
index 25abaf7..8eb946f 100644
--- a/block/nbd.c
+++ b/block/nbd.c
@@ -46,6 +46,10 @@
 #define logout(fmt, ...) ((void)0)
 #endif
 
+#define MAX_NBD_REQUESTS 16
+#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
+#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))
+
 typedef struct BDRVNBDState {
     int sock;
     uint32_t nbdflags;
@@ -53,9 +57,12 @@ typedef struct BDRVNBDState {
     size_t blocksize;
     char *export_name; /* An NBD server may export several devices */
 
-    CoMutex mutex;
-    Coroutine *coroutine;
+    CoMutex send_mutex;
+    CoMutex free_sema;
+    Coroutine *send_coroutine;
+    int in_flight;
 
+    Coroutine *recv_coroutine[MAX_NBD_REQUESTS];
     struct nbd_reply reply;
 
     /* If it begins with '/', this is a UNIX domain socket. Otherwise,
@@ -112,41 +119,68 @@ out:
 
 static void nbd_coroutine_start(BDRVNBDState *s, struct nbd_request *request)
 {
-    qemu_co_mutex_lock(&s->mutex);
-    s->coroutine = qemu_coroutine_self();
-    request->handle = (uint64_t)(intptr_t)s;
+    int i;
+
+    /* Poor man's semaphore.  The free_sema is locked when no other request
+     * can be accepted, and unlocked after receiving one reply.  */
+    if (s->in_flight >= MAX_NBD_REQUESTS - 1) {
+        qemu_co_mutex_lock(&s->free_sema);
+        assert(s->in_flight < MAX_NBD_REQUESTS);
+    }
+    s->in_flight++;
+
+    for (i = 0; i < MAX_NBD_REQUESTS; i++) {
+        if (s->recv_coroutine[i] == NULL) {
+            s->recv_coroutine[i] = qemu_coroutine_self();
+            break;
+        }
+    }
+
+    assert(i < MAX_NBD_REQUESTS);
+    request->handle = INDEX_TO_HANDLE(s, i);
 }
 
 static int nbd_have_request(void *opaque)
 {
     BDRVNBDState *s = opaque;
 
-    return !!s->coroutine;
+    return s->in_flight > 0;
 }
 
 static void nbd_reply_ready(void *opaque)
 {
     BDRVNBDState *s = opaque;
+    int i;
 
     if (s->reply.handle == 0) {
         /* No reply already in flight.  Fetch a header.  */
         if (nbd_receive_reply(s->sock, &s->reply) < 0) {
             s->reply.handle = 0;
+            goto fail;
         }
     }
 
     /* There's no need for a mutex on the receive side, because the
      * handler acts as a synchronization point and ensures that only
      * one coroutine is called until the reply finishes.  */
-    if (s->coroutine) {
-        qemu_coroutine_enter(s->coroutine, NULL);
+    i = HANDLE_TO_INDEX(s, s->reply.handle);
+    if (s->recv_coroutine[i]) {
+        qemu_coroutine_enter(s->recv_coroutine[i], NULL);
+        return;
+    }
+
+fail:
+    for (i = 0; i < MAX_NBD_REQUESTS; i++) {
+        if (s->recv_coroutine[i]) {
+            qemu_coroutine_enter(s->recv_coroutine[i], NULL);
+        }
     }
 }
 
 static void nbd_restart_write(void *opaque)
 {
     BDRVNBDState *s = opaque;
-    qemu_coroutine_enter(s->coroutine, NULL);
+    qemu_coroutine_enter(s->send_coroutine, NULL);
 }
 
 static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
@@ -154,6 +188,8 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
 {
     int rc, ret;
 
+    qemu_co_mutex_lock(&s->send_mutex);
+    s->send_coroutine = qemu_coroutine_self();
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, nbd_restart_write,
                             nbd_have_request, NULL, s);
     rc = nbd_send_request(s->sock, request);
@@ -166,6 +202,8 @@ static int nbd_co_send_request(BDRVNBDState *s, struct nbd_request *request,
     }
     qemu_aio_set_fd_handler(s->sock, nbd_reply_ready, NULL,
                             nbd_have_request, NULL, s);
+    s->send_coroutine = NULL;
+    qemu_co_mutex_unlock(&s->send_mutex);
     return rc;
 }
 
@@ -175,7 +213,8 @@ static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
 {
     int ret;
 
-    /* Wait until we're woken up by the read handler.  */
+    /* Wait until we're woken up by the read handler.  TODO: perhaps
+     * peek at the next reply and avoid yielding if it's ours?  */
     qemu_coroutine_yield();
     *reply = s->reply;
     if (reply->handle != request->handle) {
@@ -195,8 +234,11 @@ static void nbd_co_receive_reply(BDRVNBDState *s, struct nbd_request *request,
 
 static void nbd_coroutine_end(BDRVNBDState *s, struct nbd_request *request)
 {
-    s->coroutine = NULL;
-    qemu_co_mutex_unlock(&s->mutex);
+    int i = HANDLE_TO_INDEX(s, request->handle);
+    s->recv_coroutine[i] = NULL;
+    if (s->in_flight-- == MAX_NBD_REQUESTS) {
+        qemu_co_mutex_unlock(&s->free_sema);
+    }
 }
 
 static int nbd_establish_connection(BlockDriverState *bs)
@@ -260,7 +302,8 @@ static int nbd_open(BlockDriverState *bs, const char* filename, int flags)
     BDRVNBDState *s = bs->opaque;
     int result;
 
-    qemu_co_mutex_init(&s->mutex);
+    qemu_co_mutex_init(&s->send_mutex);
+    qemu_co_mutex_init(&s->free_sema);
 
     /* Pop the config into our state object.  Exit if invalid.  */
     result = nbd_config(s, filename, flags);
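
The handle scheme used above is worth spelling out: HANDLE_TO_INDEX() and
INDEX_TO_HANDLE() are the same XOR against the BDRVNBDState pointer, so
applying them in sequence recovers the request slot index, and index 0 maps
to the pointer value itself, which means a valid handle is never 0 (the value
nbd_reply_ready() uses to mean "no reply header fetched yet").  The minimal
standalone sketch below is not part of the patch; the "state"/"bs" variables
are only stand-ins for the real BDRVNBDState object, and the two macros are
copied verbatim from the hunk above.

/* Illustrative sketch only -- not part of the patch. */
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_NBD_REQUESTS 16
#define HANDLE_TO_INDEX(bs, handle) ((handle) ^ ((uint64_t)(intptr_t)bs))
#define INDEX_TO_HANDLE(bs, index) ((index) ^ ((uint64_t)(intptr_t)bs))

int main(void)
{
    int state;                  /* stand-in for a BDRVNBDState instance */
    int *bs = &state;
    uint64_t i;

    for (i = 0; i < MAX_NBD_REQUESTS; i++) {
        uint64_t handle = INDEX_TO_HANDLE(bs, i);

        /* XOR is its own inverse, so the receive path can map the handle
         * echoed back by the server to the coroutine slot that sent it. */
        assert(HANDLE_TO_INDEX(bs, handle) == i);
        printf("index %2" PRIu64 " -> handle 0x%016" PRIx64 "\n", i, handle);
    }
    return 0;
}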