From patchwork Tue Aug 8 16:22:18 2017
X-Patchwork-Submitter: Juan Quintela <quintela@redhat.com>
X-Patchwork-Id: 799320
From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com
Date: Tue, 8 Aug 2017 18:22:18 +0200
Message-Id: <20170808162224.32419-14-quintela@redhat.com>
In-Reply-To: <20170808162224.32419-1-quintela@redhat.com>
References: <20170808162224.32419-1-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH v6 13/19] migration: Really use multiple pages at a time

We now send several pages at a time each time we wake up a thread.

Signed-off-by: Juan Quintela <quintela@redhat.com>
---
Use iovec's instead of creating the equivalent.
Clear memory used by pages (dave)
Use g_new0 (danp)
define MULTIFD_CONTINUE
---
 migration/ram.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 03f3427..7310da9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -49,6 +49,7 @@
 #include "migration/colo.h"
 #include "sysemu/sysemu.h"
 #include "qemu/uuid.h"
+#include "qemu/iov.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -362,6 +363,15 @@ static void compress_threads_save_setup(void)
 
 /* Multiple fd's */
 
+/* used to continue on the same multifd group */
+#define MULTIFD_CONTINUE UINT16_MAX
+
+typedef struct {
+    int num;
+    size_t size;
+    struct iovec *iov;
+} multifd_pages_t;
+
 struct MultiFDSendParams {
     /* not changed */
     uint8_t id;
@@ -371,11 +381,7 @@ struct MultiFDSendParams {
     QemuMutex mutex;
     /* protected by param mutex */
     bool quit;
-    /* This is a temp field.  We are using it now to transmit
-       something the address of the page.  Later in the series, we
-       change it for the real page.
-    */
-    uint8_t *address;
+    multifd_pages_t pages;
     /* protected by multifd mutex */
     /* has the thread finish the last submitted job */
     bool done;
@@ -388,8 +394,24 @@ struct {
     int count;
     QemuMutex mutex;
     QemuSemaphore sem;
+    multifd_pages_t pages;
 } *multifd_send_state;
 
+static void multifd_init_group(multifd_pages_t *pages)
+{
+    pages->num = 0;
+    pages->size = migrate_multifd_group();
+    pages->iov = g_new0(struct iovec, pages->size);
+}
+
+static void multifd_clear_group(multifd_pages_t *pages)
+{
+    pages->num = 0;
+    pages->size = 0;
+    g_free(pages->iov);
+    pages->iov = NULL;
+}
+
 static void terminate_multifd_send_threads(void)
 {
     int i;
@@ -419,9 +441,11 @@ void multifd_save_cleanup(void)
         qemu_mutex_destroy(&p->mutex);
         qemu_sem_destroy(&p->sem);
         socket_send_channel_destroy(p->c);
+        multifd_clear_group(&p->pages);
     }
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
+    multifd_clear_group(&multifd_send_state->pages);
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -454,8 +478,8 @@ static void *multifd_send_thread(void *opaque)
             qemu_mutex_unlock(&p->mutex);
             break;
         }
-        if (p->address) {
-            p->address = 0;
+        if (p->pages.num) {
+            p->pages.num = 0;
             qemu_mutex_unlock(&p->mutex);
             qemu_mutex_lock(&multifd_send_state->mutex);
             p->done = true;
@@ -484,6 +508,7 @@ int multifd_save_setup(void)
     multifd_send_state->count = 0;
     qemu_mutex_init(&multifd_send_state->mutex);
    qemu_sem_init(&multifd_send_state->sem, 0);
+    multifd_init_group(&multifd_send_state->pages);
     for (i = 0; i < thread_count; i++) {
         char thread_name[16];
         MultiFDSendParams *p = &multifd_send_state->params[i];
@@ -493,7 +518,7 @@ int multifd_save_setup(void)
         p->quit = false;
         p->id = i;
         p->done = true;
-        p->address = 0;
+        multifd_init_group(&p->pages);
         p->c = socket_send_channel_create();
         if (!p->c) {
             error_report("Error creating a send channel");
@@ -512,6 +537,17 @@ static uint16_t multifd_send_page(uint8_t *address, bool last_page)
 {
     int i;
     MultiFDSendParams *p = NULL; /* make happy gcc */
+    multifd_pages_t *pages = &multifd_send_state->pages;
+
+    pages->iov[pages->num].iov_base = address;
+    pages->iov[pages->num].iov_len = TARGET_PAGE_SIZE;
+    pages->num++;
+
+    if (!last_page) {
+        if (pages->num < (pages->size - 1)) {
+            return MULTIFD_CONTINUE;
+        }
+    }
 
     qemu_sem_wait(&multifd_send_state->sem);
     qemu_mutex_lock(&multifd_send_state->mutex);
@@ -525,7 +561,10 @@ static uint16_t multifd_send_page(uint8_t *address, bool last_page)
     }
     qemu_mutex_unlock(&multifd_send_state->mutex);
     qemu_mutex_lock(&p->mutex);
-    p->address = address;
+    p->pages.num = pages->num;
+    iov_copy(p->pages.iov, pages->num, pages->iov, pages->num, 0,
+             iov_size(pages->iov, pages->num));
+    pages->num = 0;
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
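
A note on the control flow, since it is the heart of the patch: multifd_send_page() now queues each page into a shared iovec array and returns MULTIFD_CONTINUE without touching any lock or semaphore; only when the group fills up, or the caller passes last_page, does it copy the group to an idle send thread under p->mutex and wake it with qemu_sem_post(). Below is a minimal, self-contained sketch of that batching idea, not the QEMU code itself: SKETCH_PAGE_SIZE, SKETCH_GROUP_SIZE, flush_group(), queue_page() and main() are invented stand-ins for TARGET_PAGE_SIZE, migrate_multifd_group() and the mutex-protected thread handoff, multifd_init_group() takes the group size as a parameter here, and the sketch flushes at full capacity, whereas the patch's test (pages->num < (pages->size - 1)) hands the group off one entry before iov[] is full.

#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

#define SKETCH_PAGE_SIZE 4096      /* stand-in for TARGET_PAGE_SIZE */
#define SKETCH_GROUP_SIZE 16       /* stand-in for migrate_multifd_group() */
#define MULTIFD_CONTINUE 0xffff    /* same sentinel value as the patch */

/* Same layout as the patch's multifd_pages_t. */
typedef struct {
    int num;               /* pages queued so far */
    size_t size;           /* capacity of iov[] */
    struct iovec *iov;     /* one entry per queued page */
} multifd_pages_t;

static void multifd_init_group(multifd_pages_t *pages, size_t size)
{
    pages->num = 0;
    pages->size = size;
    pages->iov = calloc(size, sizeof(struct iovec));  /* g_new0() in QEMU */
}

/* Invented stand-in for the real handoff, which copies the iovec to a
 * send thread under p->mutex and wakes it with qemu_sem_post(). */
static void flush_group(multifd_pages_t *pages)
{
    printf("handing %d pages to a thread in one wakeup\n", pages->num);
    pages->num = 0;
}

/* Queue one page; wake a thread only when the group is full or on the
 * last page of the iteration. */
static unsigned queue_page(multifd_pages_t *pages, void *address, int last_page)
{
    pages->iov[pages->num].iov_base = address;
    pages->iov[pages->num].iov_len = SKETCH_PAGE_SIZE;
    pages->num++;

    if (!last_page && pages->num < pages->size) {
        return MULTIFD_CONTINUE;   /* no wakeup, keep filling the group */
    }
    flush_group(pages);
    return 0;
}

int main(void)
{
    static char ram[64][SKETCH_PAGE_SIZE];
    multifd_pages_t pages;

    multifd_init_group(&pages, SKETCH_GROUP_SIZE);
    for (int i = 0; i < 64; i++) {
        queue_page(&pages, ram[i], i == 63);   /* 4 wakeups, not 64 */
    }
    free(pages.iov);
    return 0;
}

The loop makes the payoff visible: 64 pages cost 4 semaphore/mutex round trips instead of 64, which is what the subject line's "really use multiple pages at a time" refers to.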