From patchwork Wed Jul 27 13:44:44 2011
X-Patchwork-Id: 107086
From: Stefan Hajnoczi
Date: Wed, 27 Jul 2011 14:44:44 +0100
Subject: [Qemu-devel] [PATCH 04/15] qed: make qed_aio_write_alloc() reusable
Message-Id: <1311774295-8696-5-git-send-email-stefanha@linux.vnet.ibm.com>
In-Reply-To: <1311774295-8696-1-git-send-email-stefanha@linux.vnet.ibm.com>
References: <1311774295-8696-1-git-send-email-stefanha@linux.vnet.ibm.com>
Cc: Kevin Wolf, Anthony Liguori, Stefan Hajnoczi, Adam Litke
List-Id: qemu-devel@nongnu.org

Copy-on-read requests will share the allocating write code path.  This
requires making qed_aio_write_alloc() reusable outside of a write
request.  This patch ensures that iovec setup is performed in a common
place before qed_aio_write_alloc() is called.
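
The refactoring boils down to a single calling convention: the caller fills
acb->cur_qiov once, then hands off to qed_aio_write_alloc().  The snippet
below is a sketch of that convention only, using functions that appear in
the diff; qed_aio_start_cow() is a hypothetical future copy-on-read entry
point and is not part of this patch.

    /* Sketch only: the caller prepares acb->cur_qiov, then calls
     * qed_aio_write_alloc(), which now takes just the acb.
     * qed_aio_start_cow() is hypothetical and not introduced here.
     */
    static void qed_aio_start_cow(QEDAIOCB *acb, size_t len)
    {
        /* common iovec setup, shared by write and copy-on-read callers */
        qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);

        /* allocate clusters and issue the write */
        qed_aio_write_alloc(acb);
    }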
Signed-off-by: Stefan Hajnoczi
---
 block/qed.c |   53 +++++++++++++++--------------------------------------
 1 files changed, 15 insertions(+), 38 deletions(-)

diff --git a/block/qed.c b/block/qed.c
index cc193ad..4f535aa 100644
--- a/block/qed.c
+++ b/block/qed.c
@@ -1133,19 +1133,18 @@ static bool qed_start_allocating_write(QEDAIOCB *acb)
  *
  * This path is taken when writing to previously unallocated clusters.
  */
-static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
+static void qed_aio_write_alloc(QEDAIOCB *acb)
 {
     BDRVQEDState *s = acb_to_s(acb);
-    BlockDriverCompletionFunc *cb;
 
     if (!qed_start_allocating_write(acb)) {
-        return;
+        qemu_iovec_reset(&acb->cur_qiov);
+        return; /* wait until current allocating write completes */
     }
 
     acb->cur_nclusters = qed_bytes_to_clusters(s,
-            qed_offset_into_cluster(s, acb->cur_pos) + len);
+            qed_offset_into_cluster(s, acb->cur_pos) + acb->cur_qiov.size);
     acb->cur_cluster = qed_alloc_clusters(s, acb->cur_nclusters);
-    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
 
     if (qed_should_set_need_check(s)) {
         s->header.features |= QED_F_NEED_CHECK;
@@ -1156,25 +1155,6 @@ static void qed_aio_write_alloc(QEDAIOCB *acb, size_t len)
 }
 
 /**
- * Write data cluster in place
- *
- * @acb:        Write request
- * @offset:     Cluster offset in bytes
- * @len:        Length in bytes
- *
- * This path is taken when writing to already allocated clusters.
- */
-static void qed_aio_write_inplace(QEDAIOCB *acb, uint64_t offset, size_t len)
-{
-    /* Calculate the I/O vector */
-    acb->cur_cluster = offset;
-    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
-
-    /* Do the actual write */
-    qed_aio_write_main(acb, 0);
-}
-
-/**
  * Write data cluster
  *
  * @opaque:     Write request
@@ -1192,22 +1172,19 @@ static void qed_aio_write_data(void *opaque, int ret,
 
     trace_qed_aio_write_data(acb_to_s(acb), acb, ret, offset, len);
 
-    acb->find_cluster_ret = ret;
-
-    switch (ret) {
-    case QED_CLUSTER_FOUND:
-        qed_aio_write_inplace(acb, offset, len);
-        break;
+    if (ret < 0) {
+        qed_aio_complete(acb, ret);
+        return;
+    }
 
-    case QED_CLUSTER_L2:
-    case QED_CLUSTER_L1:
-    case QED_CLUSTER_ZERO:
-        qed_aio_write_alloc(acb, len);
-        break;
+    acb->find_cluster_ret = ret;
+    qemu_iovec_copy(&acb->cur_qiov, acb->qiov, acb->qiov_offset, len);
 
-    default:
-        qed_aio_complete(acb, ret);
-        break;
+    if (ret == QED_CLUSTER_FOUND) {
+        acb->cur_cluster = offset;
+        qed_aio_write_main(acb, 0);
+    } else {
+        qed_aio_write_alloc(acb);
     }
 }