From patchwork Mon Jul 29 14:52:29 2019
X-Patchwork-Submitter: Igor Mammedov
X-Patchwork-Id: 1138432
From: Igor Mammedov
To: qemu-devel@nongnu.org
Cc: pbonzini@redhat.com, qemu-s390x@nongnu.org, dgilbert@redhat.com
Date: Mon, 29 Jul 2019 10:52:29 -0400
Message-Id: <20190729145229.4333-3-imammedo@redhat.com>
In-Reply-To: <20190729145229.4333-1-imammedo@redhat.com>
References: <20190729145229.4333-1-imammedo@redhat.com>
Subject: [Qemu-devel] [PATCH RFC 2/2] s390: do not call
 memory_region_allocate_system_memory() multiple times

s390 was trying to solve the limited KVM memslot size issue by abusing
memory_region_allocate_system_memory(), which breaks the API contract
that the function may only be called once. s390 should have used memory
aliases to fragment the initial memory into smaller chunks to satisfy
KVM's memslot limitation.
But it's a bit late now, since the allocated pieces are transferred in
the migration stream separately, so it is not possible to simply replace
the broken layout with the correct one. As a workaround, the previous
patch made MemoryRegion aliases migratable, and this patch uses them to
split the big initial RAM chunk into smaller pieces of up to
KVM_SLOT_MAX_BYTES each and registers the aliases for migration. That
should keep migration and the guest ABI (RAM mapping) compatible with
previous QEMU versions.

Signed-off-by: Igor Mammedov
---
I don't have access to a suitable system to test this, so I've simulated
it with smaller chunks on an x86 host. Ping-pong migration between the
old and new QEMU worked fine. The KVM part should be fine, as memslots
use mapped MemoryRegions (in this case they would be aliases) as far as
I know, but if someone could test it on a big enough host, that would be
nice.
---
 hw/s390x/s390-virtio-ccw.c | 20 +++++++++++++++-----
 1 file changed, 15 insertions(+), 5 deletions(-)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 5b6a9a4e55..77910419e6 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -163,21 +163,31 @@ static void virtio_ccw_register_hcalls(void)
 static void s390_memory_init(ram_addr_t mem_size)
 {
     MemoryRegion *sysmem = get_system_memory();
+    MemoryRegion *ram = g_new(MemoryRegion, 1);
     ram_addr_t chunk, offset = 0;
     unsigned int number = 0;
     Error *local_err = NULL;
     gchar *name;
 
     /* allocate RAM for core */
+    memory_region_allocate_system_memory(ram, NULL, "s390.whole.ram", mem_size);
+    /*
+     * memory_region_allocate_system_memory() registers allocated RAM for
+     * migration; however, for compat reasons the RAM should be passed over
+     * as RAMBlocks of size up to KVM_SLOT_MAX_BYTES. So unregister the just
+     * allocated RAM so it won't be migrated directly. Aliases will take care
+     * of segmenting RAM into legacy chunks that are migration and KVM compatible.
+     */
+    vmstate_unregister_ram(ram, NULL);
     name = g_strdup_printf("s390.ram");
     while (mem_size) {
-        MemoryRegion *ram = g_new(MemoryRegion, 1);
-        uint64_t size = mem_size;
+        MemoryRegion *alias = g_new(MemoryRegion, 1);
 
         /* KVM does not allow memslots >= 8 TB */
-        chunk = MIN(size, KVM_SLOT_MAX_BYTES);
-        memory_region_allocate_system_memory(ram, NULL, name, chunk);
-        memory_region_add_subregion(sysmem, offset, ram);
+        chunk = MIN(mem_size, KVM_SLOT_MAX_BYTES);
+        memory_region_init_alias(alias, NULL, name, ram, offset, chunk);
+        vmstate_register_ram_global(alias);
+        memory_region_add_subregion(sysmem, offset, alias);
         mem_size -= chunk;
         offset += chunk;
         g_free(name);
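
For readability, this is roughly how s390_memory_init() reads once the
patch is applied. It is a sketch reconstructed from the hunk above, not
authoritative tree contents: the per-chunk renaming at the end of the
loop ("s390.ram.%u") and everything past the hunk are assumptions based
on the surrounding context lines and the unused-looking "number" counter.

static void s390_memory_init(ram_addr_t mem_size)
{
    MemoryRegion *sysmem = get_system_memory();
    MemoryRegion *ram = g_new(MemoryRegion, 1);
    ram_addr_t chunk, offset = 0;
    unsigned int number = 0;
    Error *local_err = NULL;
    gchar *name;

    /* Allocate all of RAM as one RAMBlock, but don't let it migrate as
     * one piece: the aliases registered below are what travel in the
     * migration stream, preserving the legacy chunked layout. */
    memory_region_allocate_system_memory(ram, NULL, "s390.whole.ram", mem_size);
    vmstate_unregister_ram(ram, NULL);

    name = g_strdup_printf("s390.ram");
    while (mem_size) {
        MemoryRegion *alias = g_new(MemoryRegion, 1);

        /* KVM does not allow memslots >= 8 TB */
        chunk = MIN(mem_size, KVM_SLOT_MAX_BYTES);
        /* Each alias is a KVM-sized window into the single backing
         * RAMBlock, registered for migration under the legacy name. */
        memory_region_init_alias(alias, NULL, name, ram, offset, chunk);
        vmstate_register_ram_global(alias);
        memory_region_add_subregion(sysmem, offset, alias);
        mem_size -= chunk;
        offset += chunk;
        g_free(name);
        /* Assumed from context: subsequent chunks keep the legacy
         * numbered names, e.g. "s390.ram.1". */
        name = g_strdup_printf("s390.ram.%u", ++number);
    }
    /* ... remainder of the function is outside the hunk and unchanged ... */
}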