From: Julien Grall
To: qemu-devel@nongnu.org
Cc: Julien Grall, christian.limpach@gmail.com, Stefano.Stabellini@eu.citrix.com, xen-devel@lists.xen.org
Date: Wed, 22 Aug 2012 13:31:54 +0100
Subject: [Qemu-devel] [XEN][RFC PATCH V2 08/17] hvm-io: Handle server in buffered IO

As for normal IO, Xen walks the registered ranges to find which server is
able to handle the IO. There is a special case for IOREQ_TYPE_TIMEOFFSET:
this IO must be sent to all servers. For this purpose, a new function,
hvm_buffered_io_send_to_server(), is introduced; it sends an IO to a
specific server.

Signed-off-by: Julien Grall
---
 xen/arch/x86/hvm/io.c |   75 +++++++++++++++++++++++++++++++++++++-----------
 1 files changed, 58 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index b73a462..6e0160c 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -46,28 +46,17 @@
 #include
 #include

-int hvm_buffered_io_send(ioreq_t *p)
+static int hvm_buffered_io_send_to_server(ioreq_t *p, struct hvm_ioreq_server *s)
 {
     struct vcpu *v = current;
-    struct hvm_ioreq_page *iorp = &v->domain->arch.hvm_domain.buf_ioreq;
-    buffered_iopage_t *pg = iorp->va;
+    struct hvm_ioreq_page *iorp;
+    buffered_iopage_t *pg;
     buf_ioreq_t bp;
     /* Timeoffset sends 64b data, but no address. Use two consecutive slots.
      */
     int qw = 0;

-    /* Ensure buffered_iopage fits in a page */
-    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
-
-    /*
-     * Return 0 for the cases we can't deal with:
-     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
-     *  - we cannot buffer accesses to guest memory buffers, as the guest
-     *    may expect the memory buffer to be synchronously accessed
-     *  - the count field is usually used with data_is_ptr and since we don't
-     *    support data_is_ptr we do not waste space for the count field either
-     */
-    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
-        return 0;
+    iorp = &s->buf_ioreq;
+    pg = iorp->va;

     bp.type = p->type;
     bp.dir  = p->dir;
@@ -119,12 +108,64 @@ int hvm_buffered_io_send(ioreq_t *p)
     pg->write_pointer += qw ? 2 : 1;

     notify_via_xen_event_channel(v->domain,
-            v->domain->arch.hvm_domain.params[HVM_PARAM_BUFIOREQ_EVTCHN]);
+                                 s->buf_ioreq_evtchn);

     spin_unlock(&iorp->lock);

     return 1;
 }

+int hvm_buffered_io_send(ioreq_t *p)
+{
+    struct vcpu *v = current;
+    struct hvm_ioreq_server *s;
+    int rc = 1;
+
+    /* Ensure buffered_iopage fits in a page */
+    BUILD_BUG_ON(sizeof(buffered_iopage_t) > PAGE_SIZE);
+
+    /*
+     * Return 0 for the cases we can't deal with:
+     *  - 'addr' is only a 20-bit field, so we cannot address beyond 1MB
+     *  - we cannot buffer accesses to guest memory buffers, as the guest
+     *    may expect the memory buffer to be synchronously accessed
+     *  - the count field is usually used with data_is_ptr and since we don't
+     *    support data_is_ptr we do not waste space for the count field either
+     */
+    if ( (p->addr > 0xffffful) || p->data_is_ptr || (p->count != 1) )
+        return 0;
+
+    spin_lock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+    if ( p->type == IOREQ_TYPE_TIMEOFFSET )
+    {
+        /* Send TIME OFFSET to all servers */
+        for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+            rc = hvm_buffered_io_send_to_server(p, s) && rc;
+    }
+    else
+    {
+        for ( s = v->domain->arch.hvm_domain.ioreq_server_list; s; s = s->next )
+        {
+            struct hvm_io_range *x = (p->type == IOREQ_TYPE_COPY)
+                ? s->mmio_range_list : s->portio_range_list;
+            for ( ; x; x = x->next )
+            {
+                if ( (p->addr >= x->s) && (p->addr <= x->e) )
+                {
+                    rc = hvm_buffered_io_send_to_server(p, s);
+                    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+                    return rc;
+                }
+            }
+        }
+        rc = 0;
+    }
+
+    spin_unlock(&v->domain->arch.hvm_domain.ioreq_server_lock);
+
+    return rc;
+}
+
 void send_timeoffset_req(unsigned long timeoff)
 {
     ioreq_t p[1];
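
For readers skimming the series, the routing policy the second hunk adds can
be summarised with a small standalone program. This is only a sketch of the
dispatch decision, not the patch's code: struct io_range, struct ioreq_server,
struct ioreq and send_to_server() below are simplified stand-ins for Xen's
hvm_io_range, hvm_ioreq_server, ioreq_t and hvm_buffered_io_send_to_server(),
and both the ioreq_server_lock and the buffered-ring bookkeeping are omitted.

/*
 * Minimal model of the dispatch in the new hvm_buffered_io_send().
 * All types here are simplified stand-ins, not the real Xen structures.
 */
#include <stdio.h>
#include <stdint.h>

#define IOREQ_TYPE_PIO        0   /* port IO */
#define IOREQ_TYPE_COPY       1   /* MMIO */
#define IOREQ_TYPE_TIMEOFFSET 7   /* guest time-offset update */

struct io_range {
    uint64_t s, e;                /* inclusive range [s, e], as in the patch */
    struct io_range *next;
};

struct ioreq_server {
    int id;
    struct io_range *mmio_range_list;
    struct io_range *portio_range_list;
    struct ioreq_server *next;
};

struct ioreq {
    uint8_t type;
    uint64_t addr;
};

/* Stand-in for hvm_buffered_io_send_to_server(): just report the target. */
static int send_to_server(const struct ioreq *p, struct ioreq_server *s)
{
    printf("ioreq type %u addr %#llx -> server %d\n",
           (unsigned)p->type, (unsigned long long)p->addr, s->id);
    return 1;
}

static int buffered_io_send(struct ioreq *p, struct ioreq_server *servers)
{
    struct ioreq_server *s;
    int rc = 1;

    if ( p->type == IOREQ_TYPE_TIMEOFFSET )
    {
        /* Time-offset updates are broadcast to every server. */
        for ( s = servers; s; s = s->next )
            rc = send_to_server(p, s) && rc;
        return rc;
    }

    /* Otherwise the first server whose range covers the address wins. */
    for ( s = servers; s; s = s->next )
    {
        struct io_range *x = (p->type == IOREQ_TYPE_COPY)
            ? s->mmio_range_list : s->portio_range_list;
        for ( ; x; x = x->next )
            if ( p->addr >= x->s && p->addr <= x->e )
                return send_to_server(p, s);
    }

    return 0; /* no server claims this IO */
}

int main(void)
{
    struct io_range vga = { 0xa0000, 0xbffff, NULL }; /* VGA-like MMIO range */
    struct ioreq_server s1 = { 1, &vga, NULL, NULL };
    struct ioreq mmio = { IOREQ_TYPE_COPY, 0xa1234 };
    struct ioreq toff = { IOREQ_TYPE_TIMEOFFSET, 0 };

    buffered_io_send(&mmio, &s1); /* range lookup -> server 1 */
    buffered_io_send(&toff, &s1); /* broadcast to all servers */
    return 0;
}

The asymmetry mirrors the non-buffered path: an ordinary buffered IO has
exactly one owner, found by range lookup (MMIO ranges for IOREQ_TYPE_COPY,
port ranges otherwise), while a time-offset update is state every emulator
needs, so it is broadcast and the result is the AND of the per-server sends.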