From patchwork Wed Jul 30 09:31:27 2014
X-Patchwork-Id: 374725
From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, Michael Ellerman, Paul Mackerras, Gavin Shan
Subject: [PATCH v4 08/16] powerpc/powernv: Convert/move set_bypass() callback to take_ownership()
Date: Wed, 30 Jul 2014 19:31:27 +1000
Message-Id: <1406712695-9491-9-git-send-email-aik@ozlabs.ru>
In-Reply-To: <1406712695-9491-1-git-send-email-aik@ozlabs.ru>
References: <1406712695-9491-1-git-send-email-aik@ozlabs.ru>

At the moment the iommu_table struct has a set_bypass() which enables/
disables DMA bypass on IODA2 PHB.
This is exposed to POWERPC IOMMU code which calls this callback when
external IOMMU users such as VFIO are about to take over a PHB.

Since set_bypass() is not really an iommu_table function but a PE
function, and we have an ops struct per IOMMU owner, let's move
set_bypass() to the spapr_tce_iommu_ops struct.

As arch/powerpc/kernel/iommu.c is more about POWERPC IOMMU tables and
has very little to do with PEs, this moves the take_ownership() calls
to the VFIO SPAPR TCE driver.

This renames set_bypass() to take_ownership() as the callback is not
necessarily just about enabling bypass; it may do something else or
more, so a generic name fits better. The bool parameter is inverted:
where iommu_take_ownership() used to call set_bypass(tbl, false), the
VFIO driver now calls take_ownership(data, true), i.e. "true" means an
external user owns the group.

Signed-off-by: Alexey Kardashevskiy
Reviewed-by: Gavin Shan
---
 arch/powerpc/include/asm/iommu.h          |  1 -
 arch/powerpc/include/asm/tce.h            |  2 ++
 arch/powerpc/kernel/iommu.c               | 12 ------------
 arch/powerpc/platforms/powernv/pci-ioda.c | 18 +++++++++++-------
 drivers/vfio/vfio_iommu_spapr_tce.c       | 17 +++++++++++++++++
 5 files changed, 30 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/iommu.h
index 84ee339..2b0b01d 100644
--- a/arch/powerpc/include/asm/iommu.h
+++ b/arch/powerpc/include/asm/iommu.h
@@ -77,7 +77,6 @@ struct iommu_table {
 #ifdef CONFIG_IOMMU_API
 	struct iommu_group *it_group;
 #endif
-	void (*set_bypass)(struct iommu_table *tbl, bool enable);
 };
 
 /* Pure 2^n version of get_order */
diff --git a/arch/powerpc/include/asm/tce.h b/arch/powerpc/include/asm/tce.h
index 8bfe98f..5ee4987 100644
--- a/arch/powerpc/include/asm/tce.h
+++ b/arch/powerpc/include/asm/tce.h
@@ -58,6 +58,8 @@ struct spapr_tce_iommu_ops {
 	struct iommu_table *(*get_table)(
 			struct spapr_tce_iommu_group *data,
 			phys_addr_t addr);
+	void (*take_ownership)(struct spapr_tce_iommu_group *data,
+			bool enable);
 };
 
 struct spapr_tce_iommu_group {
diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index e203314..06984d5 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1116,14 +1116,6 @@ int iommu_take_ownership(struct iommu_table *tbl)
 	memset(tbl->it_map, 0xff, sz);
 	iommu_clear_tces_and_put_pages(tbl, tbl->it_offset, tbl->it_size);
 
-	/*
-	 * Disable iommu bypass, otherwise the user can DMA to all of
-	 * our physical memory via the bypass window instead of just
-	 * the pages that has been explicitly mapped into the iommu
-	 */
-	if (tbl->set_bypass)
-		tbl->set_bypass(tbl, false);
-
 	return 0;
 }
 EXPORT_SYMBOL_GPL(iommu_take_ownership);
@@ -1138,10 +1130,6 @@ void iommu_release_ownership(struct iommu_table *tbl)
 	/* Restore bit#0 set by iommu_init_table() */
 	if (tbl->it_offset == 0)
 		set_bit(0, tbl->it_map);
-
-	/* The kernel owns the device now, we can restore the iommu bypass */
-	if (tbl->set_bypass)
-		tbl->set_bypass(tbl, true);
 }
 EXPORT_SYMBOL_GPL(iommu_release_ownership);
 
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index 495137b..f828c57 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -709,10 +709,8 @@ static void pnv_pci_ioda_setup_dma_pe(struct pnv_phb *phb,
 		__free_pages(tce_mem, get_order(TCE32_TABLE_SIZE * segs));
 }
 
-static void pnv_pci_ioda2_set_bypass(struct iommu_table *tbl, bool enable)
+static void pnv_pci_ioda2_set_bypass(struct pnv_ioda_pe *pe, bool enable)
 {
-	struct pnv_ioda_pe *pe = container_of(tbl, struct pnv_ioda_pe,
-					      tce32.table);
 	uint16_t window_id = (pe->pe_number << 1 ) + 1;
 	int64_t rc;
 
@@ -752,15 +750,21 @@ static void pnv_pci_ioda2_setup_bypass_pe(struct pnv_phb *phb,
 	/* TVE #1 is selected by PCI address bit 59 */
 	pe->tce_bypass_base = 1ull << 59;
 
-	/* Install set_bypass callback for VFIO */
-	pe->tce32.table.set_bypass = pnv_pci_ioda2_set_bypass;
 
-	/* Enable bypass by default */
-	pnv_pci_ioda2_set_bypass(&pe->tce32.table, true);
+	pnv_pci_ioda2_set_bypass(pe, true);
+}
+
+static void pnv_ioda2_take_ownership(struct spapr_tce_iommu_group *data,
+				     bool enable)
+{
+	struct pnv_ioda_pe *pe = data->iommu_owner;
+
+	pnv_pci_ioda2_set_bypass(pe, !enable);
 }
 
 static struct spapr_tce_iommu_ops pnv_pci_ioda2_ops = {
 	.get_table = pnv_ioda1_iommu_get_table,
+	.take_ownership = pnv_ioda2_take_ownership,
 };
 
 static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
diff --git a/drivers/vfio/vfio_iommu_spapr_tce.c b/drivers/vfio/vfio_iommu_spapr_tce.c
index 46c2bc8..d9845af 100644
--- a/drivers/vfio/vfio_iommu_spapr_tce.c
+++ b/drivers/vfio/vfio_iommu_spapr_tce.c
@@ -47,6 +47,14 @@ struct tce_container {
 	bool enabled;
 };
 
+
+static void tce_iommu_take_ownership_notify(struct spapr_tce_iommu_group *data,
+		bool enable)
+{
+	if (data && data->ops && data->ops->take_ownership)
+		data->ops->take_ownership(data, enable);
+}
+
 static int tce_iommu_enable(struct tce_container *container)
 {
 	int ret = 0;
@@ -367,6 +375,12 @@ static int tce_iommu_attach_group(void *iommu_data,
 		ret = iommu_take_ownership(tbl);
 		if (!ret)
 			container->grp = iommu_group;
+		/*
+		 * Disable iommu bypass, otherwise the user can DMA to all of
+		 * our physical memory via the bypass window instead of just
+		 * the pages that have been explicitly mapped into the iommu
+		 */
+		tce_iommu_take_ownership_notify(data, true);
 	}
 
 	mutex_unlock(&container->lock);
@@ -404,6 +418,9 @@ static void tce_iommu_detach_group(void *iommu_data,
 		BUG_ON(!tbl);
 		iommu_release_ownership(tbl);
+
+		/* Kernel owns the device now, we can restore bypass */
+		tce_iommu_take_ownership_notify(data, false);
 	}
 
 	mutex_unlock(&container->lock);
 }
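
For readers following the inverted bool parameter, below is a minimal,
self-contained user-space C model (not kernel code and not part of the
patch): pe_model, ops_model and the *_model functions are illustrative
stand-ins for pnv_ioda_pe, spapr_tce_iommu_ops, pnv_pci_ioda2_set_bypass()
and pnv_ioda2_take_ownership(). It only sketches, under those assumptions,
how take_ownership(enable) maps onto set_bypass(!enable): the user taking
ownership turns the bypass window off, releasing it turns bypass back on.

#include <stdbool.h>
#include <stdio.h>

/*
 * Illustrative user-space stand-ins (NOT the kernel structures): pe_model
 * plays the role of pnv_ioda_pe, ops_model the role of spapr_tce_iommu_ops.
 */
struct pe_model {
	bool bypass_enabled;	/* models the IODA2 bypass window state */
};

struct ops_model {
	/* mirrors the shape of take_ownership(data, enable) in this patch */
	void (*take_ownership)(struct pe_model *pe, bool enable);
};

/* models pnv_pci_ioda2_set_bypass(pe, enable) */
static void set_bypass_model(struct pe_model *pe, bool enable)
{
	pe->bypass_enabled = enable;
	printf("bypass %s\n", enable ? "enabled" : "disabled");
}

/*
 * models pnv_ioda2_take_ownership(): note the inverted parameter --
 * the user taking ownership (enable == true) turns bypass *off*.
 */
static void take_ownership_model(struct pe_model *pe, bool enable)
{
	set_bypass_model(pe, !enable);
}

int main(void)
{
	struct pe_model pe = { .bypass_enabled = true };
	struct ops_model ops = { .take_ownership = take_ownership_model };

	ops.take_ownership(&pe, true);	/* user attaches: bypass goes off */
	ops.take_ownership(&pe, false);	/* user detaches: bypass restored */
	return 0;
}

Built with any C99 compiler, this prints "bypass disabled" then "bypass
enabled", mirroring the order of the tce_iommu_take_ownership_notify()
calls added to tce_iommu_attach_group() and tce_iommu_detach_group().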