From patchwork Wed Jun 16 16:29:35 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1492998
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 1/6] memory: Introduce replay_discarded callback for RamDiscardManager
Date: Wed, 16 Jun 2021 18:29:35 +0200
Message-Id: <20210616162940.28630-2-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

Introduce a replay_discarded callback similar to our existing
replay_populated callback, to be used by migration code to never migrate
discarded memory.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 include/exec/memory.h | 21 +++++++++++++++++++++
 softmmu/memory.c      | 11 +++++++++++
 2 files changed, 32 insertions(+)

diff --git a/include/exec/memory.h b/include/exec/memory.h
index 84e739f106..1afdba2c27 100644
--- a/include/exec/memory.h
+++ b/include/exec/memory.h
@@ -530,6 +530,7 @@ static inline void ram_discard_listener_init(RamDiscardListener *rdl,
 }
 
 typedef int (*ReplayRamPopulate)(MemoryRegionSection *section, void *opaque);
+typedef void (*ReplayRamDiscard)(MemoryRegionSection *section, void *opaque);
 
 /*
  * RamDiscardManagerClass:
@@ -618,6 +619,21 @@ struct RamDiscardManagerClass {
                              MemoryRegionSection *section,
                              ReplayRamPopulate replay_fn, void *opaque);
 
+    /**
+     * @replay_discarded:
+     *
+     * Call the #ReplayRamDiscard callback for all discarded parts within the
+     * #MemoryRegionSection via the #RamDiscardManager.
+     *
+     * @rdm: the #RamDiscardManager
+     * @section: the #MemoryRegionSection
+     * @replay_fn: the #ReplayRamDiscard callback
+     * @opaque: pointer to forward to the callback
+     */
+    void (*replay_discarded)(const RamDiscardManager *rdm,
+                             MemoryRegionSection *section,
+                             ReplayRamDiscard replay_fn, void *opaque);
+
     /**
      * @register_listener:
      *
@@ -662,6 +678,11 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
                                          ReplayRamPopulate replay_fn,
                                          void *opaque);
 
+void ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
+                                          MemoryRegionSection *section,
+                                          ReplayRamDiscard replay_fn,
+                                          void *opaque);
+
 void ram_discard_manager_register_listener(RamDiscardManager *rdm,
                                            RamDiscardListener *rdl,
                                            MemoryRegionSection *section);
diff --git a/softmmu/memory.c b/softmmu/memory.c
index 920ebc0f4e..ae32083873 100644
--- a/softmmu/memory.c
+++ b/softmmu/memory.c
@@ -2073,6 +2073,17 @@ int ram_discard_manager_replay_populated(const RamDiscardManager *rdm,
     return rdmc->replay_populated(rdm, section, replay_fn, opaque);
 }
 
+void ram_discard_manager_replay_discarded(const RamDiscardManager *rdm,
+                                          MemoryRegionSection *section,
+                                          ReplayRamDiscard replay_fn,
+                                          void *opaque)
+{
+    RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_GET_CLASS(rdm);
+
+    g_assert(rdmc->replay_discarded);
+    rdmc->replay_discarded(rdm, section, replay_fn, opaque);
+}
+
 void ram_discard_manager_register_listener(RamDiscardManager *rdm,
                                            RamDiscardListener *rdl,
                                            MemoryRegionSection *section)
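
For orientation, here is a minimal sketch (not part of this series) of how a
consumer could drive the new interface: walk all currently discarded parts of
a section and sum up their size. The helper and callback names below are
illustrative only; the interface itself is the one added above.

/* Illustrative only: count how many bytes of a section are discarded. */
static void count_discarded_cb(MemoryRegionSection *s, void *opaque)
{
    uint64_t *bytes = opaque;

    *bytes += int128_get64(s->size);
}

static uint64_t count_discarded_bytes(MemoryRegionSection *section)
{
    RamDiscardManager *rdm = memory_region_get_ram_discard_manager(section->mr);
    uint64_t bytes = 0;

    if (rdm) {
        ram_discard_manager_replay_discarded(rdm, section, count_discarded_cb,
                                             &bytes);
    }
    return bytes;
}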
From patchwork Wed Jun 16 16:29:36 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1492994
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 2/6] virtio-mem: Implement replay_discarded RamDiscardManager callback
Date: Wed, 16 Jun 2021 18:29:36 +0200
Message-Id: <20210616162940.28630-3-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

Implement it similarly to the replay_populated callback.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 hw/virtio/virtio-mem.c | 58 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 58 insertions(+)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index df91e454b2..284096ec5f 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -228,6 +228,38 @@ static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
     return ret;
 }
 
+static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
+                                                 MemoryRegionSection *s,
+                                                 void *arg,
+                                                 virtio_mem_section_cb cb)
+{
+    unsigned long first_bit, last_bit;
+    uint64_t offset, size;
+    int ret = 0;
+
+    first_bit = s->offset_within_region / vmem->bitmap_size;
+    first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
+    while (first_bit < vmem->bitmap_size) {
+        MemoryRegionSection tmp = *s;
+
+        offset = first_bit * vmem->block_size;
+        last_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
+                                 first_bit + 1) - 1;
+        size = (last_bit - first_bit + 1) * vmem->block_size;
+
+        if (!virito_mem_intersect_memory_section(&tmp, offset, size)) {
+            break;
+        }
+        ret = cb(&tmp, arg);
+        if (ret) {
+            break;
+        }
+        first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
+                                       last_bit + 2);
+    }
+    return ret;
+}
+
 static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
 {
     RamDiscardListener *rdl = arg;
@@ -1170,6 +1202,31 @@ static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm,
                                            virtio_mem_rdm_replay_populated_cb);
 }
 
+static int virtio_mem_rdm_replay_discarded_cb(MemoryRegionSection *s,
+                                              void *arg)
+{
+    struct VirtIOMEMReplayData *data = arg;
+
+    ((ReplayRamDiscard)data->fn)(s, data->opaque);
+    return 0;
+}
+
+static void virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm,
+                                            MemoryRegionSection *s,
+                                            ReplayRamDiscard replay_fn,
+                                            void *opaque)
+{
+    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
+    struct VirtIOMEMReplayData data = {
+        .fn = replay_fn,
+        .opaque = opaque,
+    };
+
+    g_assert(s->mr == &vmem->memdev->mr);
+    virtio_mem_for_each_unplugged_section(vmem, s, &data,
+                                          virtio_mem_rdm_replay_discarded_cb);
+}
+
 static void virtio_mem_rdm_register_listener(RamDiscardManager *rdm,
                                              RamDiscardListener *rdl,
                                              MemoryRegionSection *s)
@@ -1234,6 +1291,7 @@ static void virtio_mem_class_init(ObjectClass *klass, void *data)
     rdmc->get_min_granularity = virtio_mem_rdm_get_min_granularity;
     rdmc->is_populated = virtio_mem_rdm_is_populated;
     rdmc->replay_populated = virtio_mem_rdm_replay_populated;
+    rdmc->replay_discarded = virtio_mem_rdm_replay_discarded;
     rdmc->register_listener = virtio_mem_rdm_register_listener;
     rdmc->unregister_listener = virtio_mem_rdm_unregister_listener;
 }
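
The loop above visits every maximal run of zero bits in the per-device bitmap
(a zero bit marks an unplugged block). As a standalone illustration of that
pattern, here is a small self-contained program that mimics the
find_next_zero_bit()/find_next_bit() pairing with plain C stand-ins; the
bitmap contents and helper names are made up for the example.

#include <stdio.h>

static unsigned long next_zero(const unsigned char *bm, unsigned long n,
                               unsigned long i)
{
    while (i < n && (bm[i / 8] & (1u << (i % 8)))) {
        i++;
    }
    return i;
}

static unsigned long next_one(const unsigned char *bm, unsigned long n,
                              unsigned long i)
{
    while (i < n && !(bm[i / 8] & (1u << (i % 8)))) {
        i++;
    }
    return i;
}

int main(void)
{
    /* bits 0-1 and 5-7 set ("plugged"), bits 2-4 clear ("unplugged") */
    unsigned char bitmap[] = { 0xe3 };
    unsigned long nbits = 8, first, last;

    first = next_zero(bitmap, nbits, 0);
    while (first < nbits) {
        /* end of the current zero-bit run */
        last = next_one(bitmap, nbits, first + 1) - 1;
        printf("unplugged run: bits %lu-%lu\n", first, last);
        /* skip the set bit terminating the run, look for the next run */
        first = next_zero(bitmap, nbits, last + 2);
    }
    return 0;
}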
From patchwork Wed Jun 16 16:29:37 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1493001
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 3/6] migration/ram: Handle RAMBlocks with a RamDiscardManager on the migration source
Date: Wed, 16 Jun 2021 18:29:37 +0200
Message-Id: <20210616162940.28630-4-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

We never want to migrate memory that corresponds to discarded ranges as
managed by a RamDiscardManager responsible for the mapped memory region of
the RAMBlock. The content of these pages is essentially stale and without
any guarantees for the VM ("logically unplugged").

Depending on the underlying memory type, even reading memory might populate
memory on the source, resulting in undesired memory consumption. Of course,
on the destination, even writing a zeropage consumes memory, which we also
want to avoid (similar to free page hinting).

Currently, virtio-mem tries to achieve that goal (not migrating "unplugged"
memory that was discarded) by going via qemu_guest_free_page_hint() - but
that is hackish and incomplete. For example, background snapshots still end
up reading all memory, as they don't do bitmap syncs. Postcopy recovery code
will re-add previously cleared bits to the dirty bitmap and migrate them.

Let's consult the RamDiscardManager whenever we add bits to the bitmap and
exclude all discarded pages.

As COLO is incompatible with discarding of RAM and inhibits it, we don't
have to bother.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 migration/ram.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++---
 1 file changed, 65 insertions(+), 4 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 60ea913c54..03867b4971 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -832,14 +832,67 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
     return ret;
 }
 
+static void dirty_bitmap_exclude_section(MemoryRegionSection *section,
+                                         void *opaque)
+{
+    const hwaddr offset = section->offset_within_region;
+    const hwaddr size = int128_get64(section->size);
+    const unsigned long start = offset >> TARGET_PAGE_BITS;
+    const unsigned long npages = size >> TARGET_PAGE_BITS;
+    RAMBlock *rb = section->mr->ram_block;
+    uint64_t *cleared_bits = opaque;
+
+    /* Clear the discarded part from our bitmap. */
+    *cleared_bits += bitmap_count_one_with_offset(rb->bmap, start, npages);
+    bitmap_clear(rb->bmap, start, npages);
+}
+
+/*
+ * Exclude all dirty pages that fall into a discarded range as managed by a
+ * RamDiscardManager responsible for the mapped memory region of the RAMBlock.
+ *
+ * Discarded pages ("logically unplugged") have undefined content and must
+ * never be migrated, because even reading these pages for migrating might
+ * result in undesired behavior.
+ *
+ * Returns the number of cleared bits in the dirty bitmap.
+ *
+ * Note: The result is only stable during migration (precopy/postcopy).
+ */
+static uint64_t ramblock_dirty_bitmap_exclude_discarded_pages(RAMBlock *rb)
+{
+    uint64_t cleared_bits = 0;
+
+    if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
+        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
+        MemoryRegionSection section = {
+            .mr = rb->mr,
+            .offset_within_region = 0,
+            .size = int128_make64(qemu_ram_get_used_length(rb)),
+        };
+
+        ram_discard_manager_replay_discarded(rdm, &section,
+                                             dirty_bitmap_exclude_section,
+                                             &cleared_bits);
+    }
+    return cleared_bits;
+}
+
 /* Called with RCU critical section */
 static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
 {
     uint64_t new_dirty_pages =
         cpu_physical_memory_sync_dirty_bitmap(rb, 0, rb->used_length);
+    uint64_t cleared_pages = 0;
 
-    rs->migration_dirty_pages += new_dirty_pages;
-    rs->num_dirty_pages_period += new_dirty_pages;
+    if (new_dirty_pages) {
+        cleared_pages = ramblock_dirty_bitmap_exclude_discarded_pages(rb);
+    }
+
+    rs->migration_dirty_pages += new_dirty_pages - cleared_pages;
+    if (new_dirty_pages > cleared_pages) {
+        rs->num_dirty_pages_period += new_dirty_pages - cleared_pages;
+    }
 }
 
 /**
@@ -2603,7 +2656,7 @@ static int ram_state_init(RAMState **rsp)
     return 0;
 }
 
-static void ram_list_init_bitmaps(void)
+static void ram_list_init_bitmaps(RAMState *rs)
 {
     MigrationState *ms = migrate_get_current();
     RAMBlock *block;
@@ -2638,6 +2691,10 @@ static void ram_list_init_bitmaps(void)
             bitmap_set(block->bmap, 0, pages);
             block->clear_bmap_shift = shift;
             block->clear_bmap = bitmap_new(clear_bmap_size(pages, shift));
+
+            /* Exclude all discarded pages we never want to migrate. */
+            pages = ramblock_dirty_bitmap_exclude_discarded_pages(block);
+            rs->migration_dirty_pages -= pages;
         }
     }
 }
@@ -2649,7 +2706,7 @@ static void ram_init_bitmaps(RAMState *rs)
     qemu_mutex_lock_ramlist();
 
     WITH_RCU_READ_LOCK_GUARD() {
-        ram_list_init_bitmaps();
+        ram_list_init_bitmaps(rs);
         /* We don't use dirty log with background snapshots */
         if (!migrate_background_snapshot()) {
             memory_global_dirty_log_start();
@@ -4068,6 +4125,10 @@ int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *block)
      */
     bitmap_complement(block->bmap, block->bmap, nbits);
 
+    /* Exclude all discarded pages we never want to migrate. */
+    ramblock_dirty_bitmap_exclude_discarded_pages(block);
+
+    /* We'll recalculate migration_dirty_pages in ram_state_resume_prepare(). */
 
     trace_ram_dirty_bitmap_reload_complete(block->idstr);
 
     /*
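
To make the bitmap arithmetic in dirty_bitmap_exclude_section() concrete,
here is a tiny standalone example; the values are chosen for illustration
only, and TARGET_PAGE_BITS of 12 (4 KiB target pages) is an assumption. A
single discarded 2 MiB virtio-mem block then maps to a run of 512
dirty-bitmap bits.

#include <stdio.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12

int main(void)
{
    uint64_t offset = 6ULL << 20;   /* section starts 6 MiB into the RAMBlock */
    uint64_t size = 2ULL << 20;     /* one discarded 2 MiB block */
    unsigned long start = offset >> TARGET_PAGE_BITS;
    unsigned long npages = size >> TARGET_PAGE_BITS;

    /* bitmap_clear(rb->bmap, start, npages) would clear exactly these bits */
    printf("clear %lu bits starting at bit %lu\n", npages, start);
    return 0;
}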
From patchwork Wed Jun 16 16:29:38 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1492996
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 4/6] virtio-mem: Drop precopy notifier
Date: Wed, 16 Jun 2021 18:29:38 +0200
Message-Id: <20210616162940.28630-5-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

Migration code now properly handles RAMBlocks which are indirectly managed
by a RamDiscardManager. There is no more need for manual handling via the
free page optimization interface; let's get rid of it.

Signed-off-by: David Hildenbrand <david@redhat.com>
Acked-by: Michael S. Tsirkin
---
 hw/virtio/virtio-mem.c         | 34 ----------------------------------
 include/hw/virtio/virtio-mem.h |  3 ---
 2 files changed, 37 deletions(-)

diff --git a/hw/virtio/virtio-mem.c b/hw/virtio/virtio-mem.c
index 284096ec5f..d5a578142b 100644
--- a/hw/virtio/virtio-mem.c
+++ b/hw/virtio/virtio-mem.c
@@ -776,7 +776,6 @@ static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
     host_memory_backend_set_mapped(vmem->memdev, true);
     vmstate_register_ram(&vmem->memdev->mr, DEVICE(vmem));
     qemu_register_reset(virtio_mem_system_reset, vmem);
-    precopy_add_notifier(&vmem->precopy_notifier);
 
     /*
      * Set ourselves as RamDiscardManager before the plug handler maps the
@@ -796,7 +795,6 @@ static void virtio_mem_device_unrealize(DeviceState *dev)
      * found via an address space anymore. Unset ourselves.
      */
     memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
-    precopy_remove_notifier(&vmem->precopy_notifier);
     qemu_unregister_reset(virtio_mem_system_reset, vmem);
     vmstate_unregister_ram(&vmem->memdev->mr, DEVICE(vmem));
     host_memory_backend_set_mapped(vmem->memdev, false);
@@ -1089,43 +1087,11 @@ static void virtio_mem_set_block_size(Object *obj, Visitor *v, const char *name,
     vmem->block_size = value;
 }
 
-static int virtio_mem_precopy_exclude_range_cb(const VirtIOMEM *vmem, void *arg,
-                                               uint64_t offset, uint64_t size)
-{
-    void * const host = qemu_ram_get_host_addr(vmem->memdev->mr.ram_block);
-
-    qemu_guest_free_page_hint(host + offset, size);
-    return 0;
-}
-
-static void virtio_mem_precopy_exclude_unplugged(VirtIOMEM *vmem)
-{
-    virtio_mem_for_each_unplugged_range(vmem, NULL,
-                                        virtio_mem_precopy_exclude_range_cb);
-}
-
-static int virtio_mem_precopy_notify(NotifierWithReturn *n, void *data)
-{
-    VirtIOMEM *vmem = container_of(n, VirtIOMEM, precopy_notifier);
-    PrecopyNotifyData *pnd = data;
-
-    switch (pnd->reason) {
-    case PRECOPY_NOTIFY_AFTER_BITMAP_SYNC:
-        virtio_mem_precopy_exclude_unplugged(vmem);
-        break;
-    default:
-        break;
-    }
-
-    return 0;
-}
-
 static void virtio_mem_instance_init(Object *obj)
 {
     VirtIOMEM *vmem = VIRTIO_MEM(obj);
 
     notifier_list_init(&vmem->size_change_notifiers);
-    vmem->precopy_notifier.notify = virtio_mem_precopy_notify;
     QLIST_INIT(&vmem->rdl_list);
 
     object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_size,
diff --git a/include/hw/virtio/virtio-mem.h b/include/hw/virtio/virtio-mem.h
index 9a6e348fa2..a5dd6a493b 100644
--- a/include/hw/virtio/virtio-mem.h
+++ b/include/hw/virtio/virtio-mem.h
@@ -65,9 +65,6 @@ struct VirtIOMEM {
     /* notifiers to notify when "size" changes */
     NotifierList size_change_notifiers;
 
-    /* don't migrate unplugged memory */
-    NotifierWithReturn precopy_notifier;
-
     /* listeners to notify on plug/unplug activity. */
     QLIST_HEAD(, RamDiscardListener) rdl_list;
 };
From patchwork Wed Jun 16 16:29:39 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1493000
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 5/6] migration/postcopy: Handle RAMBlocks with a RamDiscardManager on the destination
Date: Wed, 16 Jun 2021 18:29:39 +0200
Message-Id: <20210616162940.28630-6-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

Currently, when someone (i.e., the VM) accesses discarded parts inside a
RAMBlock with a RamDiscardManager managing the corresponding mapped memory
region, postcopy will request migration of the corresponding page from the
source. The source, however, will never answer, because it refuses to
migrate such pages with undefined content ("logically unplugged"): the
pages are never dirty, and get_queued_page() will consequently skip
processing these postcopy requests.

Especially reading discarded ("logically unplugged") ranges is supposed to
work in some setups (for example with current virtio-mem), although it
barely ever happens: still, not placing a page would currently stall the
VM, as it cannot make forward progress.

Let's check the state via the RamDiscardManager (e.g., the state of
virtio-mem is migrated during precopy) and avoid sending a request that
will never get answered. Place a fresh zero page instead to keep the VM
working. This is the same behavior that would happen automatically without
userfaultfd being active, when accessing virtual memory regions without
populated pages -- "populate on demand".

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 migration/postcopy-ram.c | 25 +++++++++++++++++++++----
 migration/ram.c          | 23 +++++++++++++++++++++++
 migration/ram.h          |  1 +
 3 files changed, 45 insertions(+), 4 deletions(-)

diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 2e9697bdd2..673f60ce2b 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -671,6 +671,23 @@ int postcopy_wake_shared(struct PostCopyFD *pcfd,
     return ret;
 }
 
+static int postcopy_request_page(MigrationIncomingState *mis, RAMBlock *rb,
+                                 ram_addr_t start, uint64_t haddr)
+{
+    /*
+     * Discarded pages (via RamDiscardManager) are never migrated. On unlikely
+     * access, place a zeropage, which will also set the relevant bits in the
+     * recv_bitmap accordingly, so we won't try placing a zeropage twice.
+     */
+    if (ramblock_page_is_discarded(rb, start)) {
+        bool received = ramblock_recv_bitmap_test_byte_offset(rb, start);
+
+        return received ? 0 : postcopy_place_page_zero(mis, (void *)haddr, rb);
+    }
+
+    return migrate_send_rp_req_pages(mis, rb, start, haddr);
+}
+
 /*
  * Callback from shared fault handlers to ask for a page,
  * the page must be specified by a RAMBlock and an offset in that rb
@@ -690,7 +707,7 @@ int postcopy_request_shared_page(struct PostCopyFD *pcfd, RAMBlock *rb,
                      qemu_ram_get_idstr(rb), rb_offset);
         return postcopy_wake_shared(pcfd, client_addr, rb);
     }
-    migrate_send_rp_req_pages(mis, rb, aligned_rbo, client_addr);
+    postcopy_request_page(mis, rb, aligned_rbo, client_addr);
     return 0;
 }
 
@@ -984,8 +1001,8 @@ retry:
              * Send the request to the source - we want to request one
              * of our host page sizes (which is >= TPS)
              */
-            ret = migrate_send_rp_req_pages(mis, rb, rb_offset,
-                                            msg.arg.pagefault.address);
+            ret = postcopy_request_page(mis, rb, rb_offset,
+                                        msg.arg.pagefault.address);
             if (ret) {
                 /* May be network failure, try to wait for recovery */
                 if (ret == -EIO && postcopy_pause_fault_thread(mis)) {
@@ -993,7 +1010,7 @@ retry:
                     goto retry;
                 } else {
                     /* This is a unavoidable fault */
-                    error_report("%s: migrate_send_rp_req_pages() get %d",
+                    error_report("%s: postcopy_request_page() get %d",
                                  __func__, ret);
                     break;
                 }
diff --git a/migration/ram.c b/migration/ram.c
index 03867b4971..54136cd76e 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -878,6 +878,29 @@ static uint64_t ramblock_dirty_bitmap_exclude_discarded_pages(RAMBlock *rb)
     return cleared_bits;
 }
 
+/*
+ * Check if a page falls into a discarded range as managed by a
+ * RamDiscardManager responsible for the mapped memory region of the RAMBlock.
+ * Pages inside discarded ranges are never migrated and postcopy has to
+ * place zeropages instead.
+ *
+ * Note: The result is only stable during migration (precopy/postcopy).
+ */
+bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t offset)
+{
+    if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
+        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
+        MemoryRegionSection section = {
+            .mr = rb->mr,
+            .offset_within_region = offset,
+            .size = int128_get64(TARGET_PAGE_SIZE),
+        };
+
+        return !ram_discard_manager_is_populated(rdm, &section);
+    }
+    return false;
+}
+
 /* Called with RCU critical section */
 static void ramblock_sync_dirty_bitmap(RAMState *rs, RAMBlock *rb)
 {
diff --git a/migration/ram.h b/migration/ram.h
index 4833e9fd5b..6e7f12e556 100644
--- a/migration/ram.h
+++ b/migration/ram.h
@@ -72,6 +72,7 @@ void ramblock_recv_bitmap_set_range(RAMBlock *rb, void *host_addr, size_t nr);
 int64_t ramblock_recv_bitmap_send(QEMUFile *file,
                                   const char *block_name);
 int ram_dirty_bitmap_reload(MigrationState *s, RAMBlock *rb);
+bool ramblock_page_is_discarded(RAMBlock *rb, ram_addr_t offset);
 
 /* ram cache */
 int colo_init_ram_cache(void);
From patchwork Wed Jun 16 16:29:40 2021
X-Patchwork-Submitter: David Hildenbrand <david@redhat.com>
X-Patchwork-Id: 1493002
From: David Hildenbrand <david@redhat.com>
To: qemu-devel@nongnu.org
Cc: Wei Yang, "Michael S. Tsirkin", Pankaj Gupta, Juan Quintela,
    David Hildenbrand, "Dr. David Alan Gilbert", Peter Xu, Marek Kedzierski,
    Alex Williamson, teawater, Paolo Bonzini, Andrey Gruzdev
Subject: [PATCH v1 6/6] migration/ram: Handle RAMBlocks with a RamDiscardManager on background snapshots
Date: Wed, 16 Jun 2021 18:29:40 +0200
Message-Id: <20210616162940.28630-7-david@redhat.com>
In-Reply-To: <20210616162940.28630-1-david@redhat.com>
References: <20210616162940.28630-1-david@redhat.com>

We already don't ever migrate memory that corresponds to discarded ranges
as managed by a RamDiscardManager responsible for the mapped memory region
of the RAMBlock. virtio-mem uses this mechanism to logically unplug parts
of a RAMBlock.

Right now, we still populate zeropages for the whole usable part of the
RAMBlock, which is undesired because:

1. Even populating the shared zeropage will result in memory getting
   consumed for page tables.
2. Memory backends without a shared zeropage (like hugetlbfs and shmem)
   will populate an actual, fresh page, resulting in unintended memory
   consumption.

Discarded ("logically unplugged") parts have to remain discarded. As these
pages are never part of the migration stream, there is no need to track
modifications via userfaultfd WP reliably for these parts. Further, any
writes to these ranges by the VM are invalid and the behavior is undefined.

Note that Linux only supports userfaultfd WP on private anonymous memory
for now, which usually results in the shared zeropage getting populated.
The issue will become more relevant once userfaultfd WP supports shmem and
hugetlb.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 migration/ram.c | 53 +++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 45 insertions(+), 8 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 54136cd76e..7236230950 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1614,6 +1614,28 @@ out:
     return ret;
 }
 
+static inline void populate_range(RAMBlock *block, hwaddr offset, hwaddr size)
+{
+    char *ptr = (char *) block->host;
+
+    for (; offset < size; offset += qemu_real_host_page_size) {
+        char tmp = *(ptr + offset);
+
+        /* Don't optimize the read out */
+        asm volatile("" : "+r" (tmp));
+    }
+}
+
+static inline int populate_section(MemoryRegionSection *section, void *opaque)
+{
+    const hwaddr size = int128_get64(section->size);
+    hwaddr offset = section->offset_within_region;
+    RAMBlock *block = section->mr->ram_block;
+
+    populate_range(block, offset, size);
+    return 0;
+}
+
 /*
  * ram_block_populate_pages: populate memory in the RAM block by reading
  * an integer from the beginning of each page.
@@ -1623,16 +1645,31 @@ out:
  *
  * @block: RAM block to populate
  */
-static void ram_block_populate_pages(RAMBlock *block)
+static void ram_block_populate_pages(RAMBlock *rb)
 {
-    char *ptr = (char *) block->host;
-
-    for (ram_addr_t offset = 0; offset < block->used_length;
-         offset += qemu_real_host_page_size) {
-        char tmp = *(ptr + offset);
-
-        /* Don't optimize the read out */
-        asm volatile("" : "+r" (tmp));
+    /*
+     * Skip populating all pages that fall into a discarded range as managed by
+     * a RamDiscardManager responsible for the mapped memory region of the
+     * RAMBlock. Such discarded ("logically unplugged") parts of a RAMBlock
+     * must not get populated automatically. We don't have to track
+     * modifications via userfaultfd WP reliably, because these pages will
+     * not be part of the migration stream either way -- see
+     * ramblock_dirty_bitmap_exclude_discarded_pages().
+     *
+     * Note: The result is only stable during migration (precopy/postcopy).
+     */
+    if (rb->mr && memory_region_has_ram_discard_manager(rb->mr)) {
+        RamDiscardManager *rdm = memory_region_get_ram_discard_manager(rb->mr);
+        MemoryRegionSection section = {
+            .mr = rb->mr,
+            .offset_within_region = 0,
+            .size = rb->mr->size,
+        };
+
+        ram_discard_manager_replay_populated(rdm, &section,
+                                             populate_section, NULL);
+    } else {
+        populate_range(rb, 0, qemu_ram_get_used_length(rb));
     }
 }
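
As a side note on the populate_range() trick above: reading one byte per host
page only works because the empty asm with a "+r" constraint keeps the
otherwise dead load alive, so the access actually faults the page in. A
minimal standalone demonstration follows; the page size of 4096 and the
buffer size are assumptions for the example, not values from this series.

#include <stdlib.h>

static void touch_pages(char *buf, size_t len, size_t page_size)
{
    for (size_t off = 0; off < len; off += page_size) {
        char tmp = buf[off];

        /* keep the read alive; without this the load could be optimized out */
        asm volatile("" : "+r"(tmp));
    }
}

int main(void)
{
    size_t len = 16 * 4096;
    char *buf = malloc(len);

    if (buf) {
        touch_pages(buf, len, 4096);
        free(buf);
    }
    return 0;
}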