From patchwork Tue Nov 13 08:31:02 2018
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 996941
From: Alexey Kardashevskiy
To: qemu-devel@nongnu.org
Cc: Jose Ricardo Ziviani, Alexey Kardashevskiy, Alistair Popple,
 Alex Williamson, Sam Bobroff, Piotr Jaroszynski, qemu-ppc@nongnu.org,
 Leonardo Augusto Guimarães Garcia, Oliver O'Halloran, Reza Arbab,
 David Gibson
Date: Tue, 13 Nov 2018 19:31:02 +1100
Message-Id: <20181113083104.2692-6-aik@ozlabs.ru>
In-Reply-To: <20181113083104.2692-1-aik@ozlabs.ru>
References: <20181113083104.2692-1-aik@ozlabs.ru>
Subject: [Qemu-devel] [PATCH qemu RFC 5/7] spapr-iommu: Always advertise the
 maximum possible DMA window size

When sizing the huge DMA window, the typical Linux pseries guest uses the
maximum allowed RAM size as the upper limit, and QEMU has so far applied the
same limit on its side to match that logic. We are now going to support GPU
RAM passthrough; that memory is not present at guest boot time because
mapping it requires interaction with the guest driver, so the guest ends up
requesting a smaller window than it will eventually need. The guest therefore
needs to be taught about this new memory, and so does QEMU.

Instead of reimplementing here whatever solution we end up choosing for the
guest, advertise the biggest value that the 32-bit "largest contiguous block
of TCEs" field defined by LoPAPR can hold. This appears to be safe because:

1. the guest-visible emulated TCE table is allocated lazily both in KVM
(actual pages are allocated in the page fault handler) and in QEMU (actual
pages are allocated when entries change);

2. the hardware table (and the corresponding cache of userspace addresses)
supports sparse allocation and also checks the locked_vm limit, so it cannot
cause the host any damage.

Signed-off-by: Alexey Kardashevskiy
---
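For illustration only (not part of the patch): a minimal sketch of how a
guest-side consumer of ibm,query-pe-dma-window might size its window once the
host stops capping the largest-block value; the struct and helper names below
are hypothetical and are not taken from Linux or QEMU.

#include <stdint.h>

/*
 * Hypothetical view of the ibm,query-pe-dma-window return cells
 * (rets[1]..rets[4]); field names are illustrative only.
 */
struct ddw_query_response {
    uint32_t windows_available;       /* rets[1] */
    uint32_t largest_block_of_tces;   /* rets[2], now always 0xFFFFFFFF */
    uint32_t page_size_mask;          /* rets[3], LoPAPR page size mask */
    uint32_t migration_capable;       /* rets[4], always 0 here */
};

/*
 * Size a DDW request: enough TCEs to map everything up to max_addr with
 * pages of (1 << page_shift) bytes, clamped by what the host advertised.
 */
static uint64_t ddw_window_size(const struct ddw_query_response *q,
                                uint64_t max_addr, unsigned page_shift)
{
    uint64_t tces = (max_addr + (1ULL << page_shift) - 1) >> page_shift;

    if (tces > q->largest_block_of_tces) {
        tces = q->largest_block_of_tces;
    }
    return tces << page_shift;
}

With rets[2] fixed at 0xFFFFFFFF and 64K IOMMU pages the clamp allows windows
up to roughly 256TiB, so the guest can size the window for memory (such as
GPU RAM) that only appears after boot.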
 hw/ppc/spapr_rtas_ddw.c | 19 +++----------------
 1 file changed, 3 insertions(+), 16 deletions(-)

diff --git a/hw/ppc/spapr_rtas_ddw.c b/hw/ppc/spapr_rtas_ddw.c
index 329feb1..df60351 100644
--- a/hw/ppc/spapr_rtas_ddw.c
+++ b/hw/ppc/spapr_rtas_ddw.c
@@ -96,9 +96,8 @@ static void rtas_ibm_query_pe_dma_window(PowerPCCPU *cpu,
                                          uint32_t nret, target_ulong rets)
 {
     sPAPRPHBState *sphb;
-    uint64_t buid, max_window_size;
+    uint64_t buid;
     uint32_t avail, addr, pgmask = 0;
-    MachineState *machine = MACHINE(spapr);
 
     if ((nargs != 3) || (nret != 5)) {
         goto param_error_exit;
@@ -114,27 +113,15 @@ static void rtas_ibm_query_pe_dma_window(PowerPCCPU *cpu,
     /* Translate page mask to LoPAPR format */
     pgmask = spapr_page_mask_to_query_mask(sphb->page_size_mask);
 
-    /*
-     * This is "Largest contiguous block of TCEs allocated specifically
-     * for (that is, are reserved for) this PE".
-     * Return the maximum number as maximum supported RAM size was in 4K pages.
-     */
-    if (machine->ram_size == machine->maxram_size) {
-        max_window_size = machine->ram_size;
-    } else {
-        max_window_size = machine->device_memory->base +
-            memory_region_size(&machine->device_memory->mr);
-    }
-
     avail = SPAPR_PCI_DMA_MAX_WINDOWS - spapr_phb_get_active_win_num(sphb);
 
     rtas_st(rets, 0, RTAS_OUT_SUCCESS);
     rtas_st(rets, 1, avail);
-    rtas_st(rets, 2, max_window_size >> SPAPR_TCE_PAGE_SHIFT);
+    rtas_st(rets, 2, 0xFFFFFFFF); /* Largest contiguous block of TCEs */
     rtas_st(rets, 3, pgmask);
     rtas_st(rets, 4, 0); /* DMA migration mask, not supported */
 
-    trace_spapr_iommu_ddw_query(buid, addr, avail, max_window_size, pgmask);
+    trace_spapr_iommu_ddw_query(buid, addr, avail, 0xFFFFFFFF, pgmask);
     return;
 
 param_error_exit: