From patchwork Fri Jan 11 01:09:26 2019
X-Patchwork-Submitter: Andrew Donnellan
X-Patchwork-Id: 1023314
From: Andrew Donnellan
To: skiboot@lists.ozlabs.org
Date: Fri, 11 Jan 2019 12:09:26 +1100
Subject: [Skiboot] [PATCH v2 06/15] hw/npu2: Don't repopulate NPU devices in NVLink init path
Cc: alistair@popple.id.au, arbab@linux.ibm.com

In 68415d5e38ef ("hw/npu2: Common NPU2 init routine between NVLink and
OpenCAPI") we refactored a large chunk of the NPU init path into common
code, including walking the device tree to set up npu2_devs based on the
ibm,npu-link device tree nodes.

However, we didn't remove the corresponding code from
npu2_populate_devices(), so at present we populate the devices once in
the common setup path and then incorrectly repopulate them in the NVLink
setup path. This is harmless for now, because we don't support having
both NVLink and OpenCAPI devices on the same NPU, but once we do, it
will be a problem.

Fix npu2_populate_devices() so that it only populates the additional
fields required for NVLink devices, and rename it to
npu2_configure_devices() to better reflect what it now does.
Signed-off-by: Andrew Donnellan
Reviewed-by: Frederic Barrat
---
v1->v2:
- remove unneeded scan_map assignment, it's zalloced anyway (Alexey)
- rebase on Reza's cleanups
---
 hw/npu2.c | 42 ++++++++++--------------------------------
 1 file changed, 10 insertions(+), 32 deletions(-)

diff --git a/hw/npu2.c b/hw/npu2.c
index 1c7af14958e8..106b32150994 100644
--- a/hw/npu2.c
+++ b/hw/npu2.c
@@ -1599,12 +1599,12 @@ static void npu2_populate_cfg(struct npu2_dev *dev)
 	PCI_VIRT_CFG_INIT_RO(pvd, pos + 1, 1, 0);
 }
 
-static uint32_t npu_allocate_bdfn(struct npu2 *p, uint32_t group)
+static uint32_t npu_allocate_bdfn(struct npu2 *p, uint32_t group, int size)
 {
 	int i;
 	int bdfn = (group << 3);
 
-	for (i = 0; i < p->total_devices; i++) {
+	for (i = 0; i < size; i++) {
 		if ((p->devices[i].bdfn & 0xf8) == (bdfn & 0xf8))
 			bdfn++;
 	}
@@ -1612,51 +1612,29 @@ static uint32_t npu_allocate_bdfn(struct npu2 *p, uint32_t group)
 	return bdfn;
 }
 
-static void npu2_populate_devices(struct npu2 *p,
-				  struct dt_node *dn)
+static void npu2_configure_devices(struct npu2 *p)
 {
 	struct npu2_dev *dev;
-	struct dt_node *npu2_dn, *link;
-	uint32_t npu_phandle, index = 0;
-
-	/*
-	 * Get the npu node which has the links which we expand here
-	 * into pci like devices attached to our emulated phb.
-	 */
-	npu_phandle = dt_prop_get_u32(dn, "ibm,npcq");
-	npu2_dn = dt_find_by_phandle(dt_root, npu_phandle);
-	assert(npu2_dn);
+	uint32_t index = 0;
 
-	/* Walk the link@x nodes to initialize devices */
-	p->total_devices = 0;
-	p->phb_nvlink.scan_map = 0;
-	dt_for_each_compatible(npu2_dn, link, "ibm,npu-link") {
+	for (index = 0; index < p->total_devices; index++) {
 		uint32_t group_id;
 
 		dev = &p->devices[index];
-		dev->type = NPU2_DEV_TYPE_NVLINK;
-		dev->npu = p;
-		dev->dt_node = link;
-		dev->link_index = dt_prop_get_u32(link, "ibm,npu-link-index");
-		dev->brick_index = dev->link_index;
+		if (dev->type != NPU2_DEV_TYPE_NVLINK)
+			continue;
 
-		group_id = dt_prop_get_u32(link, "ibm,npu-group-id");
-		dev->bdfn = npu_allocate_bdfn(p, group_id);
+		group_id = dt_prop_get_u32(dev->dt_node, "ibm,npu-group-id");
+		dev->bdfn = npu_allocate_bdfn(p, group_id, index);
 
 		/* This must be done after calling
 		 * npu_allocate_bdfn() */
-		p->total_devices++;
 		p->phb_nvlink.scan_map |= 0x1 << ((dev->bdfn & 0xf8) >> 3);
 
-		dev->pl_xscom_base = dt_prop_get_u64(link, "ibm,npu-phy");
-		dev->lane_mask = dt_prop_get_u32(link, "ibm,npu-lane-mask");
-
 		/* Initialize PCI virtual device */
 		dev->nvlink.pvd = pci_virt_add_device(&p->phb_nvlink, dev->bdfn,
 						      0x100, dev);
 		if (dev->nvlink.pvd)
 			npu2_populate_cfg(dev);
-
-		index++;
 	}
 }
 
@@ -1857,7 +1835,7 @@ void npu2_nvlink_create_phb(struct npu2 *npu, struct dt_node *dn)
 	list_head_init(&npu->phb_nvlink.virt_devices);
 
 	npu2_setup_irqs(npu);
-	npu2_populate_devices(npu, dn);
+	npu2_configure_devices(npu);
 
 	npu2_add_interrupt_map(npu, dn);
 	npu2_add_phb_properties(npu);