From patchwork Tue Jun 14 18:18:23 2011
X-Patchwork-Submitter: Stefan Hajnoczi
X-Patchwork-Id: 100410
From: Stefan Hajnoczi
Date: Tue, 14 Jun 2011 19:18:23 +0100
Message-Id: <1308075511-4745-6-git-send-email-stefanha@linux.vnet.ibm.com>
In-Reply-To: <1308075511-4745-1-git-send-email-stefanha@linux.vnet.ibm.com>
References: <1308075511-4745-1-git-send-email-stefanha@linux.vnet.ibm.com>
Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, Adam Litke
Subject: [Qemu-devel] [PATCH 05/13] qed: make qed_aio_write_alloc() reusable

Copy-on-read requests will share the allocating write code path. This
requires making qed_aio_write_alloc() reusable outside of a write
request. This patch ensures that iovec setup is performed in a common
place before qed_aio_write_alloc() is called.

Signed-off-by: Stefan Hajnoczi
---
 block/qed.c |   53 +++++++++++++++--------------------------------------
 1 files changed, 15 insertions(+), 38 deletions(-)

diff --git a/block/qed.c b/block/qed.c
index cc193ad..4f535aa 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -1133,19 +1133,18 @@ static bool qed_start_allocating_write(QEDAIOCB *acb)
  *
  * This path is taken when writing to previously unallocated clusters.
  */
-static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
+static void qed_aio_write_alloc(QEDAIOCB *acb)
 {
     BDRVQEDState *s = acb_to_s(acb);
-    BlockDriverCompletionFunc *cb;
 
     if (!qed_start_allocating_write(acb)) {
-        return;
+        qemu_iovec_reset(&acb->cur_qiov);
+        return; /* wait until current allocating write completes */
     }
 
     acb->cur_nclusters = qed_bytes_to_clusters(s,
-            qed_offset_into_cluster(s, acb->cur_pos) + len);
+            qed_offset_into_cluster(s, acb->cur_pos) + acb->cur_qiov.size);
     acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
-    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
 
     if (qed_should_set_need_check(s)) {
         s->header.features |= QED_F_NEED_CHECK;
@@ -1156,25 +1155,6 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
 }
 
 /**
- * Write data cluster in place
- *
- * @acb: Write request
- * @offset: Cluster offset in bytes
- * @len: Length in bytes
- *
- * This path is taken when writing to already allocated clusters.
- */
-static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
-{
-    /* Calculate the I/O vector */
-    acb->cur_cluster = offset;
-    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
-
-    /* Do the actual write */
-    qed_aio_write_main(acb, 0);
-}
-
-/**
  * Write data cluster
  *
  * @opaque: Write request
@@ -1192,22 +1172,19 @@ static void qed_aio_write_data(void *opaque, int ret,
     QEDAIOCB *acb = opaque;
 
     trace_qed_aio_write_data(acb_to_s(acb), acb, ret, offset, len);
 
-    acb->find_cluster_ret = ret;
-
-    switch (ret) {
-    case QED_CLUSTER_FOUND:
-        qed_aio_write_inplace(acb, offset, len);
-        break;
+    if (ret < 0) {
+        qed_aio_complete(acb, ret);
+        return;
+    }
 
-    case QED_CLUSTER_L2:
-    case QED_CLUSTER_L1:
-    case QED_CLUSTER_ZERO:
-        qed_aio_write_alloc(acb, len);
-        break;
+    acb->find_cluster_ret = ret;
+    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
 
-    default:
-        qed_aio_complete(acb, ret);
-        break;
+    if (ret == QED_CLUSTER_FOUND) {
+        acb->cur_cluster = offset;
+        qed_aio_write_main(acb, 0);
+    } else {
+        qed_aio_write_alloc(acb);
     }
 }
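
For readers skimming the patch, the restructuring can be modelled in isolation. The sketch below is a simplified, hypothetical model (plain C with stub types; none of these names are from qed.c): the buffer setup that was previously duplicated in the in-place and allocating paths is hoisted into the common dispatch function, so the allocating path takes no buffer/length arguments and can later be entered from other call sites such as copy-on-read.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-ins for the qed.c structures and return codes;
 * this models the control flow only, it is not QEMU code. */
enum { CLUSTER_FOUND = 0, CLUSTER_L1 = 1, CLUSTER_L2 = 2 };

struct request {
    char   cur_buf[64];    /* models acb->cur_qiov */
    size_t cur_len;        /* models acb->cur_qiov.size */
    int    alloc_path;     /* 1 if the allocating path ran */
    int    error;          /* completion status on failure */
};

/* Allocating path: after the patch it relies only on state
 * already staged on the request, not on extra parameters. */
static void write_alloc(struct request *req)
{
    req->alloc_path = 1;
}

/* In-place path, now reduced to a branch in the dispatcher. */
static void write_inplace(struct request *req)
{
    req->alloc_path = 0;
}

/* Common dispatcher: errors bail out first, the buffer setup is
 * performed once, and the branch only decides which path to take. */
static void write_data(struct request *req, int ret,
                       const char *buf, size_t len)
{
    if (ret < 0) {
        req->error = ret;  /* models qed_aio_complete(acb, ret) */
        return;
    }

    /* common setup, formerly duplicated in both paths */
    memcpy(req->cur_buf, buf, len);
    req->cur_len = len;

    if (ret == CLUSTER_FOUND) {
        write_inplace(req);
    } else {
        write_alloc(req);
    }
}
```

The payoff of this shape is visible in the patch itself: once qed_aio_write_alloc() no longer does its own iovec setup, any caller that stages the request state first (here, a future copy-on-read path) can reuse it unchanged.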