From patchwork Tue Nov 13 08:28:06 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Alexey Kardashevskiy
X-Patchwork-Id: 996908
From: Alexey Kardashevskiy
To: linuxppc-dev@lists.ozlabs.org
Cc: Alexey Kardashevskiy, David Gibson, kvm-ppc@vger.kernel.org,
    Alistair Popple, Reza Arbab, Sam Bobroff, Piotr Jaroszynski,
    Leonardo Augusto Guimarães Garcia, Jose Ricardo Ziviani,
    "Oliver O'Halloran", Alex Williamson, Andrew Donnellan,
    Balbir Singh, Russell Currey
Subject: [PATCH kernel v3 05/22] powerpc/powernv/npu: Add helper to access struct npu for NPU device
Date: Tue, 13 Nov 2018 19:28:06 +1100
Message-Id: <20181113082823.2440-6-aik@ozlabs.ru>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20181113082823.2440-1-aik@ozlabs.ru>
References: <20181113082823.2440-1-aik@ozlabs.ru>
X-Mailing-List: kvm-ppc@vger.kernel.org

This is a step towards removing the npu struct from pnv_phb so that it
can be used by pseries as well.

Signed-off-by: Alexey Kardashevskiy
Reviewed-by: David Gibson
Reviewed-by: Alistair Popple
---
 arch/powerpc/platforms/powernv/npu-dma.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 91d488f..9f48831 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -327,6 +327,18 @@ struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
 	return gpe;
 }
 
+/*
+ * NPU2 ATS
+ */
+static struct npu *npdev_to_npu(struct pci_dev *npdev)
+{
+	struct pnv_phb *nphb;
+
+	nphb = pci_bus_to_host(npdev->bus)->private_data;
+
+	return &nphb->npu;
+}
+
 /* Maximum number of nvlinks per npu */
 #define NV_MAX_LINKS 6
 
@@ -478,7 +490,6 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 	int i, j;
 	struct npu *npu;
 	struct pci_dev *npdev;
-	struct pnv_phb *nphb;
 
 	for (i = 0; i <= max_npu2_index; i++) {
 		mmio_atsd_reg[i].reg = -1;
@@ -493,8 +504,7 @@ static void acquire_atsd_reg(struct npu_context *npu_context,
 		if (!npdev)
 			continue;
 
-		nphb = pci_bus_to_host(npdev->bus)->private_data;
-		npu = &nphb->npu;
+		npu = npdev_to_npu(npdev);
 		mmio_atsd_reg[i].npu = npu;
 		mmio_atsd_reg[i].reg = get_mmio_atsd_reg(npu);
 		while (mmio_atsd_reg[i].reg < 0) {
@@ -690,7 +700,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	}
 
 	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	npu = npdev_to_npu(npdev);
 
 	/*
 	 * Setup the NPU context table for a particular GPU. These need to be
@@ -764,7 +774,7 @@ struct npu_context *pnv_npu2_init_context(struct pci_dev *gpdev,
 	 */
 	WRITE_ONCE(npu_context->npdev[npu->index][nvlink_index], npdev);
 
-	if (!nphb->npu.nmmu_flush) {
+	if (!npu->nmmu_flush) {
 		/*
 		 * If we're not explicitly flushing ourselves we need to mark
 		 * the thread for global flushes
@@ -810,7 +820,7 @@ void pnv_npu2_destroy_context(struct npu_context *npu_context,
 		return;
 
 	nphb = pci_bus_to_host(npdev->bus)->private_data;
-	npu = &nphb->npu;
+	npu = npdev_to_npu(npdev);
 	nvlink_dn = of_parse_phandle(npdev->dev.of_node, "ibm,nvlink", 0);
 	if (WARN_ON(of_property_read_u32(nvlink_dn, "ibm,npu-link-index",
 			&nvlink_index)))
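
For readers less familiar with the powernv code, the sketch below is a
minimal standalone C model (not kernel code; every name in it is
illustrative and chosen for this example only) of the refactoring pattern
the patch applies: callers stop open-coding the chain from a device to the
embedded npu state and instead go through a single accessor, so the npu can
later be moved out of its current container without touching every caller.

	/*
	 * Standalone userspace model of the "add an accessor helper"
	 * refactoring.  Build with: gcc -Wall -o npu-model npu-model.c
	 * All struct and function names here are hypothetical.
	 */
	#include <stdio.h>

	struct npu {
		int nmmu_flush;		/* stands in for real per-NPU state */
	};

	struct phb {
		struct npu npu;		/* today the npu lives inside the PHB */
	};

	struct npdev {
		struct phb *phb;	/* stands in for pci_bus_to_host()->private_data */
	};

	/*
	 * The one place that knows how to reach the npu for a device.
	 * If the npu is later moved out of struct phb, only this helper
	 * has to change.
	 */
	static struct npu *npdev_to_npu_model(struct npdev *dev)
	{
		return &dev->phb->npu;
	}

	int main(void)
	{
		struct phb phb = { .npu = { .nmmu_flush = 1 } };
		struct npdev dev = { .phb = &phb };

		/* Callers use the accessor instead of dereferencing the PHB. */
		struct npu *npu = npdev_to_npu_model(&dev);

		printf("nmmu_flush = %d\n", npu->nmmu_flush);
		return 0;
	}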