From patchwork Fri Jun 26 14:49:33 2015
X-Patchwork-Submitter: Marc-André Lureau
X-Patchwork-Id: 488865
From: Marc-André Lureau
To: qemu-devel@nongnu.org
Date: Fri, 26 Jun 2015 16:49:33 +0200
Message-Id: <1435330185-23248-28-git-send-email-marcandre.lureau@gmail.com>
X-Mailer: git-send-email 2.4.3
In-Reply-To: <1435330185-23248-1-git-send-email-marcandre.lureau@gmail.com>
References: <1435330185-23248-1-git-send-email-marcandre.lureau@gmail.com>
Cc: cam@cs.ualberta.ca, Marc-André Lureau, stefanha@redhat.com
Subject: [Qemu-devel] [PATCH 27/39] ivshmem: replace 'guest' with 'peer' appropriately

The terms 'guest' and 'peer' are sometimes used interchangeably, which
can be confusing. Instead, use 'peer' for the remote instances of
ivshmem clients, and 'guest' for the local VM.

Signed-off-by: Marc-André Lureau
---
 hw/misc/ivshmem.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/hw/misc/ivshmem.c b/hw/misc/ivshmem.c
index df3bd9d..11b49c3 100644
--- a/hw/misc/ivshmem.c
+++ b/hw/misc/ivshmem.c
@@ -89,7 +89,7 @@ typedef struct IVShmemState {
     int shm_fd; /* shared memory file descriptor */
 
     Peer *peers;
-    int nb_peers; /* how many guests we have space for */
+    int nb_peers; /* how many peers we have space for */
     int vm_id;
 
     uint32_t vectors;
@@ -388,9 +388,9 @@ static void ivshmem_del_eventfd(IVShmemState *s, int posn, int i)
                               &s->peers[posn].eventfds[i]);
 }
 
-static void close_guest_eventfds(IVShmemState *s, int posn)
+static void close_peer_eventfds(IVShmemState *s, int posn)
 {
-    int i, guest_curr_max;
+    int i, n;
 
     if (!ivshmem_has_feature(s, IVSHMEM_IOEVENTFD)) {
         return;
@@ -400,14 +400,14 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
         return;
     }
 
-    guest_curr_max = s->peers[posn].nb_eventfds;
+    n = s->peers[posn].nb_eventfds;
 
     memory_region_transaction_begin();
-    for (i = 0; i < guest_curr_max; i++) {
+    for (i = 0; i < n; i++) {
         ivshmem_del_eventfd(s, posn, i);
     }
     memory_region_transaction_commit();
-    for (i = 0; i < guest_curr_max; i++) {
+    for (i = 0; i < n; i++) {
         event_notifier_cleanup(&s->peers[posn].eventfds[i]);
     }
 
@@ -416,7 +416,7 @@ static void close_guest_eventfds(IVShmemState *s, int posn)
 }
 
 /* this function increase the dynamic storage need to store data about other
- * guests */
+ * peers */
 static int resize_peers(IVShmemState *s, int new_min_size)
 {
 
@@ -433,7 +433,7 @@ static int resize_peers(IVShmemState *s, int new_min_size)
     old_size = s->nb_peers;
     s->nb_peers = new_min_size;
 
-    IVSHMEM_DPRINTF("bumping storage to %d guests\n", s->nb_peers);
+    IVSHMEM_DPRINTF("bumping storage to %d peers\n", s->nb_peers);
 
     s->peers = g_realloc(s->peers, s->nb_peers * sizeof(Peer));
 
@@ -504,7 +504,7 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
     incoming_fd = qemu_chr_fe_get_msgfd(s->server_chr);
     IVSHMEM_DPRINTF("posn is %ld, fd is %d\n", incoming_posn, incoming_fd);
 
-    /* make sure we have enough space for this guest */
+    /* make sure we have enough space for this peer */
     if (incoming_posn >= s->nb_peers) {
         if (resize_peers(s, incoming_posn + 1) < 0) {
             error_report("failed to resize peers array");
@@ -523,9 +523,9 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
             /* receive our posn */
             s->vm_id = incoming_posn;
         } else {
-            /* otherwise an fd == -1 means an existing guest has gone away */
+            /* otherwise an fd == -1 means an existing peer has gone away */
            IVSHMEM_DPRINTF("posn %ld has gone away\n", incoming_posn);
-            close_guest_eventfds(s, incoming_posn);
+            close_peer_eventfds(s, incoming_posn);
         }
         return;
     }
@@ -572,7 +572,7 @@ static void ivshmem_read(void *opaque, const uint8_t *buf, int size)
     /* get a new eventfd */
     nth_eventfd = peer->nb_eventfds++;
 
-    /* this is an eventfd for a particular guest VM */
+    /* this is an eventfd for a particular peer VM */
     IVSHMEM_DPRINTF("eventfds[%ld][%d] = %d\n", incoming_posn,
                     nth_eventfd, incoming_fd);
     event_notifier_init_fd(&peer->eventfds[nth_eventfd], incoming_fd);
@@ -752,7 +752,7 @@ static void pci_ivshmem_realize(PCIDevice *dev, Error **errp)
             return;
         }
 
-        /* we allocate enough space for 16 guests and grow as needed */
+        /* we allocate enough space for 16 peers and grow as needed */
         resize_peers(s, 16);
         s->vm_id = -1;
 
@@ -830,7 +830,7 @@ static void pci_ivshmem_exit(PCIDevice *dev)
 
     if (s->peers) {
         for (i = 0; i < s->nb_peers; i++) {
-            close_guest_eventfds(s, i);
+            close_peer_eventfds(s, i);
        }
         g_free(s->peers);
     }
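
For readers less familiar with the device, the distinction the renaming draws
is: s->vm_id identifies the local guest (the VM this device instance runs in),
while s->peers[] tracks the remote peers connected to the same ivshmem server,
each with its own set of eventfds. The sketch below only illustrates that
bookkeeping under simplified assumptions (plain malloc/free instead of glib,
ints instead of EventNotifier, sketch_* helper names invented here); it is not
the QEMU code, though the field names mirror hw/misc/ivshmem.c.

/* Simplified sketch -- not the QEMU implementation. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Peer {
    int nb_eventfds;
    int *eventfds;          /* stand-in for the EventNotifier array */
} Peer;

typedef struct IVShmemStateSketch {
    int vm_id;              /* ID of the local guest, -1 until assigned */
    int nb_peers;           /* how many peers we have space for */
    Peer *peers;            /* one slot per peer position */
} IVShmemStateSketch;

/* grow the peer table so that at least new_min_size slots exist */
static void sketch_resize_peers(IVShmemStateSketch *s, int new_min_size)
{
    int old_size = s->nb_peers;

    if (new_min_size <= old_size) {
        return;
    }
    s->nb_peers = new_min_size;
    s->peers = realloc(s->peers, s->nb_peers * sizeof(Peer));
    for (int i = old_size; i < s->nb_peers; i++) {
        s->peers[i].nb_eventfds = 0;
        s->peers[i].eventfds = NULL;
    }
}

/* forget everything about one peer, e.g. when the server reports it gone */
static void sketch_close_peer_eventfds(IVShmemStateSketch *s, int posn)
{
    if (posn < 0 || posn >= s->nb_peers) {
        return;
    }
    free(s->peers[posn].eventfds);
    s->peers[posn].eventfds = NULL;
    s->peers[posn].nb_eventfds = 0;
}

int main(void)
{
    IVShmemStateSketch s = { .vm_id = -1, .nb_peers = 0, .peers = NULL };

    sketch_resize_peers(&s, 16);        /* start with room for 16 peers */
    s.vm_id = 3;                        /* position assigned to the local guest */
    sketch_close_peer_eventfds(&s, 7);  /* peer 7 went away */
    printf("local guest id %d, room for %d peers\n", s.vm_id, s.nb_peers);
    free(s.peers);
    return 0;
}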