From patchwork Wed Jan 3 09:38:28 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Juan Quintela
X-Patchwork-Id: 854925
From: Juan Quintela
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com,
 Alexey Perevalov
Date: Wed, 3 Jan 2018 10:38:28 +0100
Message-Id: <20180103093834.20879-9-quintela@redhat.com>
In-Reply-To: <20180103093834.20879-1-quintela@redhat.com>
References: <20180103093834.20879-1-quintela@redhat.com>
Subject: [Qemu-devel] [PULL 08/14] migration: add postcopy blocktime ctx into
 MigrationIncomingState

From: Alexey Perevalov

This patch requests UFFD_FEATURE_THREAD_ID from kernel space, in case the
kernel provides that feature. PostcopyBlocktimeContext is encapsulated
inside postcopy-ram.c, since it is a postcopy-only feature. It also defines
the lifetime of the PostcopyBlocktimeContext instance.
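
For background, here is a minimal, self-contained sketch of the userfaultfd
feature negotiation this change builds on: probe which features the kernel
supports on a throwaway descriptor, then request UFFD_FEATURE_THREAD_ID via
the UFFDIO_API ioctl only when it is reported as supported. This is
illustrative only; it does not use the QEMU helpers touched below, and the
variable names are made up for the example.

/* sketch only: probe-then-request of UFFD_FEATURE_THREAD_ID */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <linux/userfaultfd.h>

int main(void)
{
    /* Probe supported features on a throwaway userfaultfd. */
    int probe_fd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    if (probe_fd < 0) {
        perror("userfaultfd");
        return 1;
    }
    struct uffdio_api api = { .api = UFFD_API, .features = 0 };
    if (ioctl(probe_fd, UFFDIO_API, &api) < 0) {
        perror("UFFDIO_API (probe)");
        close(probe_fd);
        return 1;
    }
    uint64_t supported = api.features;
    close(probe_fd);

    /* Request the feature on the descriptor that will actually be used,
     * but only if the kernel reported it as supported. */
    uint64_t asked = 0;
#ifdef UFFD_FEATURE_THREAD_ID
    if (supported & UFFD_FEATURE_THREAD_ID) {
        asked |= UFFD_FEATURE_THREAD_ID;
    }
#endif
    int ufd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
    struct uffdio_api req = { .api = UFFD_API, .features = asked };
    if (ufd < 0 || ioctl(ufd, UFFDIO_API, &req) < 0) {
        perror("UFFDIO_API (request)");
        return 1;
    }
    printf("UFFD_FEATURE_THREAD_ID %s\n",
           asked ? "requested" : "not supported by kernel");
    return 0;
}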
Information from the PostcopyBlocktimeContext instance will be queried well
after postcopy migration has ended, so the PostcopyBlocktimeContext instance
lives until QEMU exits; the parts of it used only during the calculation
(vcpu_addr, page_fault_vcpu_time) are released when postcopy ends or fails.

To enable postcopy blocktime calculation on the destination, the
corresponding capability has to be requested (the documentation patch is at
the tail of the patch set).

As an example, the following command enables that capability, assuming QEMU
was started with the
-chardev socket,id=charmonitor,path=/var/lib/migrate-vm-monitor.sock
option to control it:

[root@host]# printf "{\"execute\" : \"qmp_capabilities\"}\r\n \
{\"execute\": \"migrate-set-capabilities\" , \"arguments\": { \"capabilities\": [ { \"capability\": \"postcopy-blocktime\", \"state\": true } ] } }" | nc -U /var/lib/migrate-vm-monitor.sock

Or just with HMP:

(qemu) migrate_set_capability postcopy-blocktime on

Reviewed-by: Dr. David Alan Gilbert
Signed-off-by: Alexey Perevalov
Reviewed-by: Juan Quintela
Signed-off-by: Juan Quintela
---
 migration/migration.h    |  8 +++++++
 migration/postcopy-ram.c | 59 ++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 67 insertions(+)

diff --git a/migration/migration.h b/migration/migration.h
index 9ab8d07004..7e06c11773 100644
--- a/migration/migration.h
+++ b/migration/migration.h
@@ -22,6 +22,8 @@
 #include "hw/qdev.h"
 #include "io/channel.h"
 
+struct PostcopyBlocktimeContext;
+
 /* State for the incoming migration */
 struct MigrationIncomingState {
     QEMUFile *from_src_file;
@@ -59,6 +61,12 @@ struct MigrationIncomingState {
     /* The coroutine we should enter (back) after failover */
     Coroutine *migration_incoming_co;
     QemuSemaphore colo_incoming_sem;
+
+    /*
+     * PostcopyBlocktimeContext to keep information for postcopy
+     * live migration, to calculate vCPU block time
+     * */
+    struct PostcopyBlocktimeContext *blocktime_ctx;
 };
 
 MigrationIncomingState *migration_incoming_get_current(void);
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index bec6c2c66b..c18ec5aa18 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -61,6 +61,52 @@ struct PostcopyDiscardState {
 #include
 #include
 
+typedef struct PostcopyBlocktimeContext {
+    /* time when page fault initiated per vCPU */
+    int64_t *page_fault_vcpu_time;
+    /* page address per vCPU */
+    uint64_t *vcpu_addr;
+    int64_t total_blocktime;
+    /* blocktime per vCPU */
+    int64_t *vcpu_blocktime;
+    /* point in time when last page fault was initiated */
+    int64_t last_begin;
+    /* number of vCPU are suspended */
+    int smp_cpus_down;
+
+    /*
+     * Handler for exit event, necessary for
+     * releasing whole blocktime_ctx
+     */
+    Notifier exit_notifier;
+} PostcopyBlocktimeContext;
+
+static void destroy_blocktime_context(struct PostcopyBlocktimeContext *ctx)
+{
+    g_free(ctx->page_fault_vcpu_time);
+    g_free(ctx->vcpu_addr);
+    g_free(ctx->vcpu_blocktime);
+    g_free(ctx);
+}
+
+static void migration_exit_cb(Notifier *n, void *data)
+{
+    PostcopyBlocktimeContext *ctx = container_of(n, PostcopyBlocktimeContext,
+                                                 exit_notifier);
+    destroy_blocktime_context(ctx);
+}
+
+static struct PostcopyBlocktimeContext *blocktime_context_new(void)
+{
+    PostcopyBlocktimeContext *ctx = g_new0(PostcopyBlocktimeContext, 1);
+    ctx->page_fault_vcpu_time = g_new0(int64_t, smp_cpus);
+    ctx->vcpu_addr = g_new0(uint64_t, smp_cpus);
+    ctx->vcpu_blocktime = g_new0(int64_t, smp_cpus);
+
+    ctx->exit_notifier.notify = migration_exit_cb;
+    qemu_add_exit_notifier(&ctx->exit_notifier);
+    return ctx;
+}
 
 /**
  * receive_ufd_features: check userfault fd features, to request only supported
@@ -153,6 +199,19 @@ static bool ufd_check_and_apply(int ufd, MigrationIncomingState *mis)
         }
     }
 
+#ifdef UFFD_FEATURE_THREAD_ID
+    if (migrate_postcopy_blocktime() && mis &&
+        UFFD_FEATURE_THREAD_ID & supported_features) {
+        /* kernel supports that feature */
+        /* don't create blocktime_context if it exists */
+        if (!mis->blocktime_ctx) {
+            mis->blocktime_ctx = blocktime_context_new();
+        }
+
+        asked_features |= UFFD_FEATURE_THREAD_ID;
+    }
+#endif
+
     /*
      * request features, even if asked_features is 0, due to
      * kernel expects UFFD_API before UFFDIO_REGISTER, per