[{"id":3191006,"web_url":"http://patchwork.ozlabs.org/comment/3191006/","msgid":"<de10b63e-0142-d9c5-8c7a-acf2c58e78cb@redhat.com>","list_archive_url":null,"date":"2023-10-02T08:58:53","subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","submitter":{"id":70402,"url":"http://patchwork.ozlabs.org/api/people/70402/","name":"David Hildenbrand","email":"david@redhat.com"},"content":"On 26.09.23 20:57, David Hildenbrand wrote:\n> Quoting from patch #16:\n> \n>      Having large virtio-mem devices that only expose little memory to a VM\n>      is currently a problem: we map the whole sparse memory region into the\n>      guest using a single memslot, resulting in one gigantic memslot in KVM.\n>      KVM allocates metadata for the whole memslot, which can result in quite\n>      some memory waste.\n> \n>      Assuming we have a 1 TiB virtio-mem device and only expose little (e.g.,\n>      1 GiB) memory, we would create a single 1 TiB memslot and KVM has to\n>      allocate metadata for that 1 TiB memslot: on x86, this implies allocating\n>      a significant amount of memory for metadata:\n> \n>      (1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB\n>          -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~ 2 GiB (0.2 %)\n> \n>          With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets\n>          allocated lazily when required for nested VMs\n>      (2) gfn_track: 2 bytes per 4 KiB\n>          -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)\n>      (3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB\n>          -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)\n>      (4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page\n>          -> For 1 TiB: 536870912 = 64 MiB (0.006 %)\n> \n>      So we primarily care about (1) and (2). 
The bad thing is that the\n>      memory consumption doubles once SMM is enabled, because we create the\n>      memslot once for !SMM and once for SMM.\n> \n>      Having a 1 TiB memslot without the TDP MMU consumes around:\n>      * With SMM: 5 GiB\n>      * Without SMM: 2.5 GiB\n>      Having a 1 TiB memslot with the TDP MMU consumes around:\n>      * With SMM: 1 GiB\n>      * Without SMM: 512 MiB\n> \n>      ... and that's really something we want to optimize, to be able to just\n>      start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device\n>      that can grow very large (e.g., 1 TiB).\n> \n>      Consequently, using multiple memslots and only mapping the memslots we\n>      really need can significantly reduce memory waste and speed up\n>      memslot-related operations. Let's expose the sparse RAM memory region using\n>      multiple memslots, mapping only the memslots we currently need into our\n>      device memory region container.\n> \n> The hyper-v balloon driver has similar demands [1].\n> \n> For virtio-mem, this has to be turned on manually (\"dynamic-memslots=on\"),\n> due to the interaction with vhost (below).\n> \n> If we have less than 509 memslots available, we always default to a single\n> memslot. Otherwise, we automatically decide how many memslots to use\n> based on a simple heuristic (see patch #12), and try not to use more than\n> 256 memslots across all memory devices: our historical DIMM limit.\n> \n> As soon as any memory devices automatically decided on using more than\n> one memslot, vhost devices that support less than 509 memslots (e.g.,\n> currently most vhost-user devices such as virtiofsd) can no longer be\n> plugged as a precaution.\n> \n> Quoting from patch #12:\n> \n>      Plugging vhost devices with less than 509 memslots available while we\n>      have memory devices plugged that consume multiple memslots due to\n>      automatic decisions can be problematic. 
Most configurations might just fail\n>      due to \"limit < used + reserved\", however, it can also happen that these\n>      memory devices would suddenly consume memslots that would actually be\n>      required by other memslot consumers (boot, PCI BARs) later. Note that this\n>      has always been sketchy with vhost devices that support only a small number\n>      of memslots; but we don't want to make it any worse. So let's keep it simple\n>      and simply reject plugging such vhost devices in such a configuration.\n> \n>      Eventually, all vhost devices that want to be fully compatible with such\n>      memory devices should support a decent number of memslots (>= 509).\n> \n> \n> The recommendation is to plug such vhost devices before the virtio-mem\n> device decides, or to not set \"dynamic-memslots=on\". As soon as these devices\n> support a reasonable number of memslots (>= 509), this will start working\n> automatically.\n> \n> I ran some tests on x86_64, now also including vfio and migration tests.\n> Seems to work as expected, even when multiple memslots are used.\n> \n> \n> Patch #1 -- #3 are from [2] that were not picked up yet.\n> \n> Patch #4 -- #12 add handling of multiple memslots to memory devices\n> \n> Patch #13 -- #16 add \"dynamic-memslots=on\" support to virtio-mem\n> \n> Patch #15 -- #16 make sure that virtio-mem memslots can be enabled/disabled\n>               atomically\n\n\nIf there is no further feedback by the end of the week, I'll queue \nthis to mem-next.","headers":{"Return-Path":"<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=HPy4oX8d;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) 
smtp.mailfrom=nongnu.org\n (client-ip=209.51.188.17; helo=lists.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n receiver=patchwork.ozlabs.org)"],"Received":["from lists.gnu.org (lists.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4RzZfS6Nppz1yng\n\tfor <incoming@patchwork.ozlabs.org>; Mon,  2 Oct 2023 20:00:16 +1100 (AEDT)","from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1qnEli-0003gg-Ev; Mon, 02 Oct 2023 04:59:10 -0400","from eggs.gnu.org ([2001:470:142:3::10])\n by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <david@redhat.com>) id 1qnEle-0003Kt-H9\n for qemu-devel@nongnu.org; Mon, 02 Oct 2023 04:59:07 -0400","from us-smtp-delivery-124.mimecast.com ([170.10.133.124])\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <david@redhat.com>) id 1qnElc-0003aH-DA\n for qemu-devel@nongnu.org; Mon, 02 Oct 2023 04:59:06 -0400","from mail-wm1-f71.google.com (mail-wm1-f71.google.com\n [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS\n (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id\n us-mta-450-PJH0fziYMfmpQivHwagJew-1; Mon, 02 Oct 2023 04:58:56 -0400","by mail-wm1-f71.google.com with SMTP id\n 5b1f17b1804b1-3fe1521678fso142057075e9.1\n for <qemu-devel@nongnu.org>; Mon, 02 Oct 2023 01:58:56 -0700 (PDT)","from ?IPV6:2003:cb:c735:f200:cb49:cb8f:88fc:9446?\n (p200300cbc735f200cb49cb8f88fc9446.dip0.t-ipconnect.de.\n [2003:cb:c735:f200:cb49:cb8f:88fc:9446])\n by smtp.gmail.com with ESMTPSA id\n f1-20020a5d50c1000000b003142e438e8csm27549025wrt.26.2023.10.02.01.58.54\n (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);\n Mon, 02 Oct 2023 01:58:54 -0700 
(PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1696237143;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=Wxgm+8Bfp8FIpwJp0F8Lsnx34gN58d0JfUtgLpCGQBE=;\n b=HPy4oX8dnURPEtxVHTVhGP1PExgRktKF5wyjC5axGUo97rifXQ/8Loq7tYOpCX7zPO5iLu\n hvNkReEY3BtaQX3g47KEZ3+RVmE/yw4/kEacrmRC9FcoSH9za3i6Vd1BKC/lgkLXQr6owO\n MhUpddyg+3jILeZZfFW8Rn+cqRNevQs=","X-MC-Unique":"PJH0fziYMfmpQivHwagJew-1","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20230601; t=1696237136; x=1696841936;\n h=content-transfer-encoding:in-reply-to:organization:from:references\n :cc:to:content-language:subject:user-agent:mime-version:date\n :message-id:x-gm-message-state:from:to:cc:subject:date:message-id\n :reply-to;\n bh=Wxgm+8Bfp8FIpwJp0F8Lsnx34gN58d0JfUtgLpCGQBE=;\n b=KCiBFvOpEP/LGRU5sI93YejJSHF1aYzEJGvRnjMPnKDNm8irDtWavU7DPVNMXYyvMj\n Yf6JVxTZbG2hGXkRb8R80XW21H3aiKAb9Ln0nMRx5+u7FcHmFFFdUrNM4DFFO5Xwp6vF\n abbLEdxa5ZvmMdmn7FssPrQNHEFiJVHze0LzXddD4h0sdK6tYXTi2wd+cqGXVFcghC52\n Ab+2gFqK4FfUTJj4DaNmFvKINC+sQUqCGbb8HgsDMnX5FYR4PxlcAQ2iMz4Xhmx2DHqS\n ZZKfNOi05GrJXWr8i5G8XnAKUTJCbJeIDQaOMbtJkTo+UGLnSYC96sM/44adnu2aH+7+\n ewtA==","X-Gm-Message-State":"AOJu0Yx24pVMmOKqWC0B+RqYaX9Fz3qYwraoyqzR7pwmV4nM+mRGEG70\n vqB4LQyyh1CAB93uqeakxvGA6eWLcyMYkC9JQQ3q7y5mVwCLVH8kiTX3VWjqnjE9lNL6WIgvGHB\n t3PXuVsvFWgX3SRCTE/bNww9QQzNZFDx2b0dSGfE878V64cKCyZVYMU+bNFjos+iLOocBRe8=","X-Received":["by 2002:adf:fc81:0:b0:323:37a3:8d1e with SMTP id\n g1-20020adffc81000000b0032337a38d1emr8808150wrr.0.1696237135768;\n Mon, 02 Oct 2023 01:58:55 -0700 (PDT)","by 2002:adf:fc81:0:b0:323:37a3:8d1e with SMTP id\n g1-20020adffc81000000b0032337a38d1emr8808120wrr.0.1696237135274;\n Mon, 02 Oct 2023 01:58:55 -0700 (PDT)"],"X-Google-Smtp-Source":"\n 
AGHT+IEfCMXfIIwkIEgnzxGA3HJaP/BwimwF8mHAu3w+YCG8lwk09csa41Lgg8LzF3B0mOFWm6kLjg==","Message-ID":"<de10b63e-0142-d9c5-8c7a-acf2c58e78cb@redhat.com>","Date":"Mon, 2 Oct 2023 10:58:53 +0200","MIME-Version":"1.0","User-Agent":"Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101\n Thunderbird/102.15.1","Subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","Content-Language":"en-US","To":"qemu-devel@nongnu.org","Cc":"Paolo Bonzini <pbonzini@redhat.com>, Igor Mammedov <imammedo@redhat.com>,\n  Xiao Guangrong <xiaoguangrong.eric@gmail.com>,\n \"Michael S. Tsirkin\" <mst@redhat.com>, Peter Xu <peterx@redhat.com>,\n\t=?utf-8?q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>,\n Eduardo Habkost <eduardo@habkost.net>,\n Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,\n Yanan Wang <wangyanan55@huawei.com>, Michal Privoznik <mprivozn@redhat.com>,\n\t=?utf-8?q?Daniel_P_=2E_Berrang=C3=A9?= <berrange@redhat.com>,\n Gavin Shan <gshan@redhat.com>, Alex Williamson <alex.williamson@redhat.com>,\n Stefan Hajnoczi <stefanha@redhat.com>,\n \"Maciej S . 
Szmigiero\" <mail@maciej.szmigiero.name>, kvm@vger.kernel.org","References":"<20230926185738.277351-1-david@redhat.com>","From":"David Hildenbrand <david@redhat.com>","Organization":"Red Hat","In-Reply-To":"<20230926185738.277351-1-david@redhat.com>","Content-Type":"text/plain; charset=UTF-8; format=flowed","Content-Transfer-Encoding":"7bit","Received-SPF":"pass client-ip=170.10.133.124; envelope-from=david@redhat.com;\n helo=us-smtp-delivery-124.mimecast.com","X-Spam_score_int":"-51","X-Spam_score":"-5.2","X-Spam_bar":"-----","X-Spam_report":"(-5.2 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001,\n DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n NICE_REPLY_A=-3.058, RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=-0.01,\n RCVD_IN_MSPIKE_WL=-0.01, SPF_HELO_NONE=0.001,\n SPF_PASS=-0.001 autolearn=ham autolearn_force=no","X-Spam_action":"no action","X-BeenThere":"qemu-devel@nongnu.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<qemu-devel.nongnu.org>","List-Unsubscribe":"<https://lists.nongnu.org/mailman/options/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>","List-Archive":"<https://lists.nongnu.org/archive/html/qemu-devel>","List-Post":"<mailto:qemu-devel@nongnu.org>","List-Help":"<mailto:qemu-devel-request@nongnu.org?subject=help>","List-Subscribe":"<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=subscribe>","Errors-To":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org","Sender":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org"}},{"id":3191943,"web_url":"http://patchwork.ozlabs.org/comment/3191943/","msgid":"<20231003093802-mutt-send-email-mst@kernel.org>","list_archive_url":null,"date":"2023-10-03T13:39:01","subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","submitter":{"id":2235,"url":"http://patchwork.ozlabs.org/api/people/2235/","name":"Michael S. 
Tsirkin","email":"mst@redhat.com"},"content":"On Tue, Sep 26, 2023 at 08:57:20PM +0200, David Hildenbrand wrote:\n> Quoting from patch #16:\n> \n>     Having large virtio-mem devices that only expose little memory to a VM\n>     is currently a problem: we map the whole sparse memory region into the\n>     guest using a single memslot, resulting in one gigantic memslot in KVM.\n>     KVM allocates metadata for the whole memslot, which can result in quite\n>     some memory waste.\n> \n>     Assuming we have a 1 TiB virtio-mem device and only expose little (e.g.,\n>     1 GiB) memory, we would create a single 1 TiB memslot and KVM has to\n>     allocate metadata for that 1 TiB memslot: on x86, this implies allocating\n>     a significant amount of memory for metadata:\n> \n>     (1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB\n>         -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~ 2 GiB (0.2 %)\n> \n>         With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets\n>         allocated lazily when required for nested VMs\n>     (2) gfn_track: 2 bytes per 4 KiB\n>         -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)\n>     (3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB\n>         -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)\n>     (4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page\n>         -> For 1 TiB: 536870912 = 64 MiB (0.006 %)\n> \n>     So we primarily care about (1) and (2). The bad thing is that the\n>     memory consumption doubles once SMM is enabled, because we create the\n>     memslot once for !SMM and once for SMM.\n> \n>     Having a 1 TiB memslot without the TDP MMU consumes around:\n>     * With SMM: 5 GiB\n>     * Without SMM: 2.5 GiB\n>     Having a 1 TiB memslot with the TDP MMU consumes around:\n>     * With SMM: 1 GiB\n>     * Without SMM: 512 MiB\n> \n>     ... 
and that's really something we want to optimize, to be able to just\n>     start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device\n>     that can grow very large (e.g., 1 TiB).\n> \n>     Consequently, using multiple memslots and only mapping the memslots we\n>     really need can significantly reduce memory waste and speed up\n>     memslot-related operations. Let's expose the sparse RAM memory region using\n>     multiple memslots, mapping only the memslots we currently need into our\n>     device memory region container.\n> \n> The hyper-v balloon driver has similar demands [1].\n> \n> For virtio-mem, this has to be turned on manually (\"dynamic-memslots=on\"),\n> due to the interaction with vhost (below).\n> \n> If we have less than 509 memslots available, we always default to a single\n> memslot. Otherwise, we automatically decide how many memslots to use\n> based on a simple heuristic (see patch #12), and try not to use more than\n> 256 memslots across all memory devices: our historical DIMM limit.\n> \n> As soon as any memory devices automatically decided on using more than\n> one memslot, vhost devices that support less than 509 memslots (e.g.,\n> currently most vhost-user devices such as virtiofsd) can no longer be\n> plugged as a precaution.\n> \n> Quoting from patch #12:\n> \n>     Plugging vhost devices with less than 509 memslots available while we\n>     have memory devices plugged that consume multiple memslots due to\n>     automatic decisions can be problematic. Most configurations might just fail\n>     due to \"limit < used + reserved\", however, it can also happen that these\n>     memory devices would suddenly consume memslots that would actually be\n>     required by other memslot consumers (boot, PCI BARs) later. 
Note that this\n>     has always been sketchy with vhost devices that support only a small number\n>     of memslots; but we don't want to make it any worse. So let's keep it simple\n>     and simply reject plugging such vhost devices in such a configuration.\n> \n>     Eventually, all vhost devices that want to be fully compatible with such\n>     memory devices should support a decent number of memslots (>= 509).\n> \n> \n> The recommendation is to plug such vhost devices before the virtio-mem\n> device decides, or to not set \"dynamic-memslots=on\". As soon as these devices\n> support a reasonable number of memslots (>= 509), this will start working\n> automatically.\n> \n> I ran some tests on x86_64, now also including vfio and migration tests.\n> Seems to work as expected, even when multiple memslots are used.\n> \n> \n> Patch #1 -- #3 are from [2] that were not picked up yet.\n> \n> Patch #4 -- #12 add handling of multiple memslots to memory devices\n> \n> Patch #13 -- #16 add \"dynamic-memslots=on\" support to virtio-mem\n> \n> Patch #15 -- #16 make sure that virtio-mem memslots can be enabled/disabled\n>              atomically\n\n\nReviewed-by: Michael S. 
Tsirkin <mst@redhat.com>\n\npls feel free to merge.\n\n\n> v3 -> v4:\n> * \"virtio-mem: Pass non-const VirtIOMEM via virtio_mem_range_cb\"\n>  -> Cleanup patch added\n> * \"virtio-mem: Update state to match bitmap as soon as it's been migrated\"\n>  -> Cleanup patch added\n> * \"virtio-mem: Expose device memory dynamically via multiple memslots if\n>    enabled\"\n>  -> Parameter now called \"dynamic-memslots\"\n>  -> With \"dynamic-memslots=off\", don't use a memory region container and\n>     just use the old handling: always map the RAM memory region [thus the\n>     new parameter name]\n>  -> Require \"unplugged-inaccessible=on\" (default) with\n>     \"dynamic-memslots=on\" for simplicity\n>  -> Take care of proper migration handling\n>  -> Remove accidental additional busy check in virtio_mem_unplug_all()\n>  -> Minor comment cleanups\n>  -> Dropped RB because of changes\n> \n> v2 -> v3:\n> * \"kvm: Return number of free memslots\"\n>  -> Return 0 in stub\n> * \"kvm: Add stub for kvm_get_max_memslots()\"\n>  -> Return 0 in stub\n> * Adjust other patches to check for kvm_enabled() before calling\n>   kvm_get_free_memslots()/kvm_get_max_memslots()\n> * Add RBs\n> \n> v1 -> v2:\n> * Include patches from [1]\n> * A lot of code simplification and reorganization, too many to spell out\n> * don't add a general soft-limit on memslots, to avoid warning in sane\n>   setups\n> * Simplify handling of vhost devices with a small number of memslots:\n>   simply fail plugging them\n> * \"virtio-mem: Expose device memory via multiple memslots if enabled\"\n>  -> Fix one \"is this the last memslot\" check\n> * Much more testing\n> \n> \n> [1] https://lkml.kernel.org/r/cover.1689786474.git.maciej.szmigiero@oracle.com\n> [2] https://lkml.kernel.org/r/20230523185915.540373-1-david@redhat.com\n> \n> Cc: Paolo Bonzini <pbonzini@redhat.com>\n> Cc: Igor Mammedov <imammedo@redhat.com>\n> Cc: Xiao Guangrong <xiaoguangrong.eric@gmail.com>\n> Cc: \"Michael S. 
Tsirkin\" <mst@redhat.com>\n> Cc: Peter Xu <peterx@redhat.com>\n> Cc: \"Philippe Mathieu-Daudé\" <philmd@linaro.org>\n> Cc: Eduardo Habkost <eduardo@habkost.net>\n> Cc: Marcel Apfelbaum <marcel.apfelbaum@gmail.com>\n> Cc: Yanan Wang <wangyanan55@huawei.com>\n> Cc: Michal Privoznik <mprivozn@redhat.com>\n> Cc: Daniel P. Berrangé <berrange@redhat.com>\n> Cc: Gavin Shan <gshan@redhat.com>\n> Cc: Alex Williamson <alex.williamson@redhat.com>\n> Cc: Stefan Hajnoczi <stefanha@redhat.com>\n> Cc: Maciej S. Szmigiero <mail@maciej.szmigiero.name>\n> Cc: kvm@vger.kernel.org\n> \n> David Hildenbrand (18):\n>   vhost: Rework memslot filtering and fix \"used_memslot\" tracking\n>   vhost: Remove vhost_backend_can_merge() callback\n>   softmmu/physmem: Fixup qemu_ram_block_from_host() documentation\n>   kvm: Return number of free memslots\n>   vhost: Return number of free memslots\n>   memory-device: Support memory devices with multiple memslots\n>   stubs: Rename qmp_memory_device.c to memory_device.c\n>   memory-device: Track required and actually used memslots in\n>     DeviceMemoryState\n>   memory-device,vhost: Support memory devices that dynamically consume\n>     memslots\n>   kvm: Add stub for kvm_get_max_memslots()\n>   vhost: Add vhost_get_max_memslots()\n>   memory-device,vhost: Support automatic decision on the number of\n>     memslots\n>   memory: Clarify mapping requirements for RamDiscardManager\n>   virtio-mem: Pass non-const VirtIOMEM via virtio_mem_range_cb\n>   virtio-mem: Update state to match bitmap as soon as it's been migrated\n>   virtio-mem: Expose device memory dynamically via multiple memslots if\n>     enabled\n>   memory,vhost: Allow for marking memory device memory regions\n>     unmergeable\n>   virtio-mem: Mark memslot alias memory regions unmergeable\n> \n>  MAINTAINERS                                   |   1 +\n>  accel/kvm/kvm-all.c                           |  35 +-\n>  accel/stubs/kvm-stub.c                        |   9 +-\n>  
hw/mem/memory-device.c                        | 196 ++++++++++-\n>  hw/virtio/vhost-stub.c                        |   9 +-\n>  hw/virtio/vhost-user.c                        |  21 +-\n>  hw/virtio/vhost-vdpa.c                        |   1 -\n>  hw/virtio/vhost.c                             | 103 +++++-\n>  hw/virtio/virtio-mem-pci.c                    |  21 ++\n>  hw/virtio/virtio-mem.c                        | 330 +++++++++++++++++-\n>  include/exec/cpu-common.h                     |  15 +\n>  include/exec/memory.h                         |  27 +-\n>  include/hw/boards.h                           |  14 +-\n>  include/hw/mem/memory-device.h                |  57 +++\n>  include/hw/virtio/vhost-backend.h             |   9 +-\n>  include/hw/virtio/vhost.h                     |   3 +-\n>  include/hw/virtio/virtio-mem.h                |  32 +-\n>  include/sysemu/kvm.h                          |   4 +-\n>  include/sysemu/kvm_int.h                      |   1 +\n>  softmmu/memory.c                              |  35 +-\n>  softmmu/physmem.c                             |  17 -\n>  .../{qmp_memory_device.c => memory_device.c}  |  10 +\n>  stubs/meson.build                             |   2 +-\n>  23 files changed, 839 insertions(+), 113 deletions(-)\n>  rename stubs/{qmp_memory_device.c => memory_device.c} (56%)\n> \n> -- \n> 2.41.0","headers":{"Return-Path":"<qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org>","X-Original-To":"incoming@patchwork.ozlabs.org","Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=SVhANUg8;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org\n (client-ip=209.51.188.17; helo=lists.gnu.org;\n envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org;\n 
receiver=patchwork.ozlabs.org)"],"Received":["from lists.gnu.org (lists.gnu.org [209.51.188.17])\n\t(using TLSv1.2 with cipher ECDHE-ECDSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4S0JqV6qVbz1yqM\n\tfor <incoming@patchwork.ozlabs.org>; Wed,  4 Oct 2023 00:40:38 +1100 (AEDT)","from localhost ([::1] helo=lists1p.gnu.org)\n\tby lists.gnu.org with esmtp (Exim 4.90_1)\n\t(envelope-from <qemu-devel-bounces@nongnu.org>)\n\tid 1qnfcq-0006jF-W7; Tue, 03 Oct 2023 09:39:49 -0400","from eggs.gnu.org ([2001:470:142:3::10])\n by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <mst@redhat.com>) id 1qnfcp-0006hS-BY\n for qemu-devel@nongnu.org; Tue, 03 Oct 2023 09:39:47 -0400","from us-smtp-delivery-124.mimecast.com ([170.10.133.124])\n by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256)\n (Exim 4.90_1) (envelope-from <mst@redhat.com>) id 1qnfcn-0005fs-6n\n for qemu-devel@nongnu.org; Tue, 03 Oct 2023 09:39:47 -0400","from mail-wm1-f70.google.com (mail-wm1-f70.google.com\n [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS\n (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id\n us-mta-128-Fd-ImJhePLmr1RrLzUctxg-1; Tue, 03 Oct 2023 09:39:43 -0400","by mail-wm1-f70.google.com with SMTP id\n 5b1f17b1804b1-405917470e8so6901995e9.1\n for <qemu-devel@nongnu.org>; Tue, 03 Oct 2023 06:39:42 -0700 (PDT)","from redhat.com ([2.52.132.27]) by smtp.gmail.com with ESMTPSA id\n i12-20020a5d438c000000b0031fe0576460sm1629648wrq.11.2023.10.03.06.39.17\n (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n Tue, 03 Oct 2023 06:39:38 -0700 (PDT)"],"DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1696340384;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n 
content-transfer-encoding:content-transfer-encoding:\n in-reply-to:in-reply-to:references:references;\n bh=DW9sDr4AfG6o1aJcxuQdohehi94/uh5JiXo4Eu6EULk=;\n b=SVhANUg8DgZGSe5T10zPR2gc0sJ+uNhFCfIHMjG76m2NZ7yNbE+SiS5hyimZFOj7093V05\n itg0ldIW6TPbroKjrOzaJlO3K5BpfJajfB7MkHQQVa3yN2NFSJq4kFsziEkSdn7ELUN3+h\n wrjrITsMgBOZQmpRGgy5Io0t+P28l08=","X-MC-Unique":"Fd-ImJhePLmr1RrLzUctxg-1","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20230601; t=1696340380; x=1696945180;\n h=in-reply-to:content-transfer-encoding:content-disposition\n :mime-version:references:message-id:subject:cc:to:from:date\n :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;\n bh=DW9sDr4AfG6o1aJcxuQdohehi94/uh5JiXo4Eu6EULk=;\n b=JDKAOROYDTstbUp6gkPHHprk0420M65Hv572C1HIDUd+OSDEhoepXi7yeug7gfFUMg\n X55/2Uai66PMGnrJShH9Cj8Vb4YM5c7PJcyAOIqWbb7jgF5Kq8ZizJK1c7phcFPvrRYK\n 6QCkCwKsX9+L4YpXcC4GwU9Uy/ACM1XutCgJ2ephK+7qSdOBk4BtXWG8MMGKBAVDM/vX\n z6yI6UAGSz1cEj9CPzJxonNgXMmjJX+xUrfTnIThD6PNzbQ6Ve0s7l/427xrouv1tnyC\n f0iP3hO/p263cwB9ValJyq3nDB+q19ufTHQ5wdf2mDSPuDCVYVW49yC9ZDNF9w0VhHEK\n t7Ig==","X-Gm-Message-State":"AOJu0YxB2HB1V6Xtu9oIQ9Q/rCbdMMYSmkh2iKPakoQWVTY+q4up3JkI\n szAFvP9FFIxazQ8NpUpF3Iw3vM/RgyOZWP8gmSW5oiXd62ESbzGmaqsTSYIlZOOCLjHUaNANmJD\n hKrS41GCSXYmbUsM=","X-Received":["by 2002:a1c:7219:0:b0:403:8fb9:8d69 with SMTP id\n n25-20020a1c7219000000b004038fb98d69mr13161718wmc.25.1696340380250;\n Tue, 03 Oct 2023 06:39:40 -0700 (PDT)","by 2002:a1c:7219:0:b0:403:8fb9:8d69 with SMTP id\n n25-20020a1c7219000000b004038fb98d69mr13161669wmc.25.1696340379137;\n Tue, 03 Oct 2023 06:39:39 -0700 (PDT)"],"X-Google-Smtp-Source":"\n AGHT+IGUIPZu0O4hosGfJflZrf+cQcIl9r+aoinChQFjA8XZ2y385NWR8ehQfVAFw5O2EQgQHqKXMg==","Date":"Tue, 3 Oct 2023 09:39:01 -0400","From":"\"Michael S. 
Tsirkin\" <mst@redhat.com>","To":"David Hildenbrand <david@redhat.com>","Cc":"qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,\n Igor Mammedov <imammedo@redhat.com>,\n Xiao Guangrong <xiaoguangrong.eric@gmail.com>, Peter Xu <peterx@redhat.com>,\n Philippe =?iso-8859-1?q?Mathieu-Daud=E9?= <philmd@linaro.org>,\n Eduardo Habkost <eduardo@habkost.net>,\n Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,\n Yanan Wang <wangyanan55@huawei.com>, Michal Privoznik <mprivozn@redhat.com>,\n Daniel P =?iso-8859-1?q?=2E_Berrang=E9?= <berrange@redhat.com>,\n Gavin Shan <gshan@redhat.com>, Alex Williamson <alex.williamson@redhat.com>,\n Stefan Hajnoczi <stefanha@redhat.com>,\n \"Maciej S . Szmigiero\" <mail@maciej.szmigiero.name>, kvm@vger.kernel.org","Subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","Message-ID":"<20231003093802-mutt-send-email-mst@kernel.org>","References":"<20230926185738.277351-1-david@redhat.com>","MIME-Version":"1.0","Content-Type":"text/plain; charset=iso-8859-1","Content-Disposition":"inline","Content-Transfer-Encoding":"8bit","In-Reply-To":"<20230926185738.277351-1-david@redhat.com>","Received-SPF":"pass client-ip=170.10.133.124; envelope-from=mst@redhat.com;\n helo=us-smtp-delivery-124.mimecast.com","X-Spam_score_int":"-20","X-Spam_score":"-2.1","X-Spam_bar":"--","X-Spam_report":"(-2.1 / 5.0 requ) BAYES_00=-1.9, DKIMWL_WL_HIGH=-0.001,\n DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1,\n RCVD_IN_DNSWL_NONE=-0.0001, RCVD_IN_MSPIKE_H3=0.001, RCVD_IN_MSPIKE_WL=0.001,\n SPF_HELO_NONE=0.001, SPF_PASS=-0.001 autolearn=ham autolearn_force=no","X-Spam_action":"no action","X-BeenThere":"qemu-devel@nongnu.org","X-Mailman-Version":"2.1.29","Precedence":"list","List-Id":"<qemu-devel.nongnu.org>","List-Unsubscribe":"<https://lists.nongnu.org/mailman/options/qemu-devel>,\n 
<mailto:qemu-devel-request@nongnu.org?subject=unsubscribe>","List-Archive":"<https://lists.nongnu.org/archive/html/qemu-devel>","List-Post":"<mailto:qemu-devel@nongnu.org>","List-Help":"<mailto:qemu-devel-request@nongnu.org?subject=help>","List-Subscribe":"<https://lists.nongnu.org/mailman/listinfo/qemu-devel>,\n <mailto:qemu-devel-request@nongnu.org?subject=subscribe>","Errors-To":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org","Sender":"qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org"}},{"id":3194227,"web_url":"http://patchwork.ozlabs.org/comment/3194227/","msgid":"<edf56572-1e7a-be30-d331-635493785d8c@redhat.com>","list_archive_url":null,"date":"2023-10-06T09:29:18","subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","submitter":{"id":70402,"url":"http://patchwork.ozlabs.org/api/people/70402/","name":"David Hildenbrand","email":"david@redhat.com"},"content":"On 03.10.23 15:39, Michael S. Tsirkin wrote:\n> On Tue, Sep 26, 2023 at 08:57:20PM +0200, David Hildenbrand wrote:\n>> Quoting from patch #16:\n>>\n>>      Having large virtio-mem devices that only expose little memory to a VM\n>>      is currently a problem: we map the whole sparse memory region into the\n>>      guest using a single memslot, resulting in one gigantic memslot in KVM.\n>>      KVM allocates metadata for the whole memslot, which can result in quite\n>>      some memory waste.\n>>\n>>      Assuming we have a 1 TiB virtio-mem device and only expose little (e.g.,\n>>      1 GiB) memory, we would create a single 1 TiB memslot and KVM has to\n>>      allocate metadata for that 1 TiB memslot: on x86, this implies allocating\n>>      a significant amount of memory for metadata:\n>>\n>>      (1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB\n>>          -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~ 2 GiB (0.2 %)\n>>\n>>          With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets\n>>          allocated 
lazily when required for nested VMs\n>>      (2) gfn_track: 2 bytes per 4 KiB\n>>          -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)\n>>      (3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB\n>>          -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)\n>>      (4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page\n>>          -> For 1 TiB: 536870912 = 64 MiB (0.006 %)\n>>\n>>      So we primarily care about (1) and (2). The bad thing is that the\n>>      memory consumption doubles once SMM is enabled, because we create the\n>>      memslot once for !SMM and once for SMM.\n>>\n>>      Having a 1 TiB memslot without the TDP MMU consumes around:\n>>      * With SMM: 5 GiB\n>>      * Without SMM: 2.5 GiB\n>>      Having a 1 TiB memslot with the TDP MMU consumes around:\n>>      * With SMM: 1 GiB\n>>      * Without SMM: 512 MiB\n>>\n>>      ... and that's really something we want to optimize, to be able to just\n>>      start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device\n>>      that can grow very large (e.g., 1 TiB).\n>>\n>>      Consequently, using multiple memslots and only mapping the memslots we\n>>      really need can significantly reduce memory waste and speed up\n>>      memslot-related operations. Let's expose the sparse RAM memory region using\n>>      multiple memslots, mapping only the memslots we currently need into our\n>>      device memory region container.\n>>\n>> The hyper-v balloon driver has similar demands [1].\n>>\n>> For virtio-mem, this has to be turned on manually (\"dynamic-memslots=on\"),\n>> due to the interaction with vhost (below).\n>>\n>> If we have less than 509 memslots available, we always default to a single\n>> memslot. 
Otherwise, we automatically decide how many memslots to use\n>> based on a simple heuristic (see patch #12), and try not to use more than\n>> 256 memslots across all memory devices: our historical DIMM limit.\n>>\n>> As soon as any memory devices automatically decided on using more than\n>> one memslot, vhost devices that support less than 509 memslots (e.g.,\n>> currently most vhost-user devices like with virtiofsd) can no longer be\n>> plugged as a precaution.\n>>\n>> Quoting from patch #12:\n>>\n>>      Plugging vhost devices with less than 509 memslots available while we\n>>      have memory devices plugged that consume multiple memslots due to\n>>      automatic decisions can be problematic. Most configurations might just fail\n>>      due to \"limit < used + reserved\"; however, it can also happen that these\n>>      memory devices would suddenly consume memslots that would actually be\n>>      required by other memslot consumers (boot, PCI BARs) later. Note that this\n>>      has always been sketchy with vhost devices that support only a small number\n>>      of memslots; but we don't want to make it any worse. So let's keep it simple\n>>      and simply reject plugging such vhost devices in such a configuration.\n>>\n>>      Eventually, all vhost devices that want to be fully compatible with such\n>>      memory devices should support a decent number of memslots (>= 509).\n>>\n>>\n>> The recommendation is to plug such vhost devices before the virtio-mem\n>> decides, or to not set \"dynamic-memslots=on\". 
As soon as these devices\n>> support a reasonable number of memslots (>= 509), this will start working\n>> automatically.\n>>\n>> I ran some tests on x86_64, now also including vfio and migration tests.\n>> Seems to work as expected, even when multiple memslots are used.\n>>\n>>\n>> Patch #1 -- #3 are from [2] and were not picked up yet.\n>>\n>> Patch #4 -- #12 add handling of multiple memslots to memory devices\n>>\n>> Patch #13 -- #16 add \"dynamic-memslots=on\" support to virtio-mem\n>>\n>> Patch #15 -- #16 make sure that virtio-mem memslots can be enabled/disabled\n>>               atomically\n> \n> \n> Reviewed-by: Michael S. Tsirkin <mst@redhat.com>\n> \n> pls feel free to merge.\n\nThanks!\n\nQueued to\n\nhttps://github.com/davidhildenbrand/qemu.git mem-next","headers":{"Message-ID":"<edf56572-1e7a-be30-d331-635493785d8c@redhat.com>","Date":"Fri, 6 Oct 2023 11:29:18 +0200","From":"David Hildenbrand <david@redhat.com>","To":"\"Michael S. Tsirkin\" <mst@redhat.com>","Subject":"Re: [PATCH v4 00/18] virtio-mem: Expose device memory through\n multiple memslots","In-Reply-To":"<20231003093802-mutt-send-email-mst@kernel.org>","References":"<20230926185738.277351-1-david@redhat.com>\n <20231003093802-mutt-send-email-mst@kernel.org>"}}]