From patchwork Sun Jun 17 11:14:22 2018
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 930476
From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, David Gibson, kvm-ppc@vger.kernel.org,
    Alex Williamson, Benjamin Herrenschmidt, Russell Currey
Subject: [PATCH kernel v2 0/6] powerpc/powernv/iommu: Optimize memory use
Date: Sun, 17 Jun 2018 21:14:22 +1000
Message-Id: <20180617111428.24349-1-aik@ozlabs.ru>
X-Mailer: git-send-email 2.11.0
X-Mailing-List: kvm-ppc@vger.kernel.org

This patchset aims to reduce the actual memory use for guests with sparse
memory. The pseries guest uses dynamic DMA windows to map the entire guest
RAM, but it only actually maps onlined memory, which may not be contiguous.
I hit this when I tried passing through the NVLink2-connected GPU RAM of an
NVIDIA V100; mapping that RAM at the same offset as on the real hardware
forced me to rework how I handle these windows.

This moves the userspace-to-host-physical translation table
(iommu_table::it_userspace) from the VFIO TCE IOMMU subdriver to the
platform code and reuses the multilevel TCE table code which we already
have for the hardware tables. Finally, in 6/6 I switch to on-demand
allocation so that we do not allocate huge chunks of the table unless we
have to; there is some math in 6/6.

Changes:
v2:
* bugfix and error handling in 6/6

Please comment. Thanks.

Alexey Kardashevskiy (6):
  powerpc/powernv: Remove useless wrapper
  powerpc/powernv: Move TCE manipulation code to its own file
  KVM: PPC: Make iommu_table::it_userspace big endian
  powerpc/powernv: Add indirect levels to it_userspace
  powerpc/powernv: Rework TCE level allocation
  powerpc/powernv/ioda: Allocate indirect TCE levels on demand

 arch/powerpc/platforms/powernv/Makefile       |   2 +-
 arch/powerpc/include/asm/iommu.h              |  11 +-
 arch/powerpc/platforms/powernv/pci.h          |  44 ++-
 arch/powerpc/kvm/book3s_64_vio.c              |  11 +-
 arch/powerpc/kvm/book3s_64_vio_hv.c           |  18 +-
 arch/powerpc/platforms/powernv/pci-ioda-tce.c | 395 ++++++++++++++++++++++++++
 arch/powerpc/platforms/powernv/pci-ioda.c     | 192 ++-----------
 arch/powerpc/platforms/powernv/pci.c          | 158 -----------
 drivers/vfio/vfio_iommu_spapr_tce.c           |  65 +----
 9 files changed, 482 insertions(+), 414 deletions(-)
 create mode 100644 arch/powerpc/platforms/powernv/pci-ioda-tce.c
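
For readers unfamiliar with the idea behind 6/6, below is a minimal
userspace sketch of on-demand allocation of indirect table levels. It is
not the code from pci-ioda-tce.c: the names (sparse_table, LEVEL_SHIFT,
sparse_table_entry) are made up for this illustration, calloc() stands in
for the kernel page allocator, and the real tables have more levels and
locking. The point it shows is that leaf pages are only allocated when an
index in their range is first touched.

/*
 * Illustrative sketch only, not the kernel implementation: a two-level
 * translation table whose leaf levels are allocated lazily, so sparse
 * mappings do not pay for the whole table up front.
 */
#include <stdint.h>
#include <stdlib.h>

#define LEVEL_SHIFT   9                      /* 512 x u64 entries = one 4K page per level */
#define LEVEL_ENTRIES (1UL << LEVEL_SHIFT)
#define LEVEL_MASK    (LEVEL_ENTRIES - 1)

struct sparse_table {
	uint64_t **level1;                   /* top level: pointers to on-demand leaf pages */
};

static struct sparse_table *sparse_table_create(void)
{
	struct sparse_table *tbl = calloc(1, sizeof(*tbl));

	if (!tbl)
		return NULL;

	/* Only the top level is allocated up front; leaves come later. */
	tbl->level1 = calloc(LEVEL_ENTRIES, sizeof(*tbl->level1));
	if (!tbl->level1) {
		free(tbl);
		return NULL;
	}
	return tbl;
}

/* Return a pointer to the entry for @idx, allocating its leaf page lazily. */
static uint64_t *sparse_table_entry(struct sparse_table *tbl, unsigned long idx)
{
	unsigned long i1 = (idx >> LEVEL_SHIFT) & LEVEL_MASK;
	unsigned long i2 = idx & LEVEL_MASK;

	if (!tbl->level1[i1]) {
		tbl->level1[i1] = calloc(LEVEL_ENTRIES, sizeof(uint64_t));
		if (!tbl->level1[i1])
			return NULL;         /* allocation failure propagates to the caller */
	}
	return &tbl->level1[i1][i2];
}

Mapping a handful of scattered indices with sparse_table_entry() allocates
only the leaf pages those indices fall into, which is the effect patch 6/6
is after for the (multi-level, kernel-side) TCE and it_userspace tables.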