From patchwork Mon Jan 18 21:26:28 2021
X-Patchwork-Submitter: Marcelo Henrique Cerri
X-Patchwork-Id: 1428306
From: Marcelo Henrique Cerri
To: kernel-team@lists.ubuntu.com
Subject: [bionic:linux-azure-4.15, focal:linux-azure, groovy:linux-azure][PATCH] video: hyperv_fb: Fix the cache type when mapping the VRAM
Date: Mon, 18 Jan 2021 18:26:28 -0300
Message-Id: <20210118212628.3574191-1-marcelo.cerri@canonical.com>

From: Dexuan Cui

BugLink: https://bugs.launchpad.net/bugs/1908569

x86 Hyper-V used to essentially always overwrite the effective cache type
of guest memory accesses to WB. This was problematic in cases where there
is a physical device assigned to the VM, since that often requires that
the VM has control over the cache types.
Thus, on newer Hyper-V (since 2018), Hyper-V always honors the VM's cache
type. Unexpectedly, Linux VM users then started to complain that the Linux
VM's VRAM had become very slow, and it turned out that the Linux VM should
not map the VRAM uncacheable with ioremap().

Fix this slowness issue by using ioremap_cache(). On ARM64,
ioremap_cache() is also required because the host maps the VRAM cacheable;
otherwise VM Connect can't display properly with ioremap() or
ioremap_wc().

With this change, the VRAM on new Hyper-V is as fast as regular RAM, so it
is no longer necessary to use the hacks we added to mitigate the slowness,
i.e. we no longer need to allocate physical memory and use it to back up
the VRAM in a Generation-1 VM, and we also no longer need to allocate
physical memory to back up the framebuffer in a Generation-2 VM and copy
the framebuffer to the real VRAM. A further big change will address these
for v5.11.

Fixes: 68a2d20b79b1 ("drivers/video: add Hyper-V Synthetic Video Frame Buffer Driver")
Tested-by: Boqun Feng
Signed-off-by: Dexuan Cui
Reviewed-by: Michael Kelley
Reviewed-by: Haiyang Zhang
Link: https://lore.kernel.org/r/20201118000305.24797-1-decui@microsoft.com
Signed-off-by: Wei Liu
(cherry picked from commit 5f1251a48c17b54939d7477305e39679a565382c)
Signed-off-by: Marcelo Henrique Cerri
Acked-by: Stefan Bader
Acked-by: William Breathitt Gray
---
 drivers/video/fbdev/hyperv_fb.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
index fe4731f97df7..aad4eea522a9 100644
--- a/drivers/video/fbdev/hyperv_fb.c
+++ b/drivers/video/fbdev/hyperv_fb.c
@@ -705,7 +705,12 @@ static int hvfb_getmem(struct hv_device *hdev, struct fb_info *info)
 		goto err1;
 	}
 
-	fb_virt = ioremap(par->mem->start, screen_fb_size);
+	/*
+	 * Map the VRAM cacheable for performance. This is also required for
+	 * VM Connect to display properly for ARM64 Linux VM, as the host also
+	 * maps the VRAM cacheable.
+	 */
+	fb_virt = ioremap_cache(par->mem->start, screen_fb_size);
 	if (!fb_virt)
 		goto err2;