From patchwork Thu Sep 17 16:35:39 2020
X-Patchwork-Submitter: Cédric Le Goater <clg@kaod.org>
X-Patchwork-Id: 1366266
From: Cédric Le Goater <clg@kaod.org>
To: skiboot@lists.ozlabs.org
Cc: Cédric Le Goater <clg@kaod.org>
Date: Thu, 17 Sep 2020 18:35:39 +0200
Message-ID: <20200917163544.142593-3-clg@kaod.org>
In-Reply-To: <20200917163544.142593-1-clg@kaod.org>
References: <20200917163544.142593-1-clg@kaod.org>
X-Mailer: git-send-email 2.25.4
Subject: [Skiboot] [RFC PATCH 2/6] xive/p9: Add debugfs entries for internal tables

The XIVE interrupt controller relies on a set of tables to configure
the routing of interrupts from source to target. The IVT associates a
source number with an event queue, the EQDT associates an event queue
with a notification virtual target (NVT), and the NVTT holds the NVT
configuration and serves as the memory backing store for the interrupt
thread context registers. Each table contains valuable state that is
worth exposing for debugging.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
A small, self-contained C sketch of the source -> EQ -> NVT lookup
chain described above is appended after the diff.

 hw/xive.c | 173 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 173 insertions(+)

diff --git a/hw/xive.c b/hw/xive.c
index fe352fe26885..ccebb1e1d17d 100644
--- a/hw/xive.c
+++ b/hw/xive.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include

 /* Always notify from EQ to VP (no EOI on EQs). Will speed up
  * EOIs at the expense of potentially higher powerbus traffic.
@@ -5141,6 +5142,8 @@ static void xive_init_globals(void)
 		xive_block_to_chip[i] = XIVE_INVALID_CHIP;
 }

+static void xive_init_debug(struct xive *x);
+
 void init_xive(void)
 {
 	struct dt_node *np;
@@ -5220,5 +5223,175 @@ void init_xive(void)
 	opal_register(OPAL_XIVE_GET_QUEUE_STATE, opal_xive_get_queue_state, 4);
 	opal_register(OPAL_XIVE_SET_QUEUE_STATE, opal_xive_set_queue_state, 4);
 	opal_register(OPAL_XIVE_GET_VP_STATE, opal_xive_get_vp_state, 2);
+
+	for_each_chip(chip) {
+		if (chip->xive)
+			xive_init_debug(chip->xive);
+	}
+}
+
+static int xive_ivt_read(struct opal_debug *d, void *buf, uint64_t size)
+{
+	struct xive *x = d->private;
+	struct xive_ive *ivt = x->ivt_base;
+	int i;
+	int n = 0;
+
+	n += snprintf(buf + n, size - n, "IVT[%d]\n", x->block_id);
+	for (i = 0; i < XIVE_INT_COUNT; i++) {
+		struct xive_ive *ive = &ivt[i];
+		uint32_t eq_blk, eq_idx, eq_data;
+		/* TODO: get ESB mmio */
+
+		if (xive_get_field64(IVE_MASKED, ive->w) ||
+		    !xive_get_field64(IVE_VALID, ive->w))
+			continue;
+		n += snprintf(buf + n, size - n, "%08x ",
+			      BLKIDX_TO_GIRQ(x->block_id, i));
+		lock(&x->lock);
+		eq_blk = xive_get_field64(IVE_EQ_BLOCK, ive->w);
+		eq_idx = xive_get_field64(IVE_EQ_INDEX, ive->w);
+		eq_data = xive_get_field64(IVE_EQ_DATA, ive->w);
+		unlock(&x->lock);
+
+		n += snprintf(buf + n, size - n, "eq=%x/%x data=%x\n",
+			      eq_blk, eq_idx, eq_data);
+	}
+
+	return n;
+}
+
+static int xive_eqt_read(struct opal_debug *d, void *buf, uint64_t size)
+{
+	struct xive *x = d->private;
+	int i, j;
+	int n = 0;
+
+	n += snprintf(buf + n, size - n, "EQT[%d]\n", x->block_id);
+	bitmap_for_each_one(*x->eq_map, XIVE_EQ_COUNT >> 3, i) {
+		for (j = 0; j < NUM_INT_PRIORITIES; j++) {
+			struct xive_eq *eq;
+			uint32_t idx = (i << 3) | j;
+
+			eq = xive_get_eq(x, idx);
+			if (!eq || !xive_get_field32(EQ_W0_VALID, eq->w0))
+				continue;
+
+			lock(&x->lock);
+			xive_eqc_scrub(x, x->block_id, idx);
+			unlock(&x->lock);
+
+			n += snprintf(buf + n, size - n,
+				      "%08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
+				      idx,
+				      eq->w0, eq->w1, eq->w2, eq->w3,
+				      eq->w4, eq->w5, eq->w6, eq->w7);
+		}
+	}
+	return n;
+}
+
+static int xive_esc_read(struct opal_debug *d, void *buf, uint64_t size)
+{
+	struct xive *x = d->private;
+	int i, j;
+	int n = 0;
+
+	n += snprintf(buf + n, size - n, "ESC IVT[%d]\n", x->block_id);
+	bitmap_for_each_one(*x->eq_map, XIVE_EQ_COUNT >> 3, i) {
+		for (j = 0; j < NUM_INT_PRIORITIES; j++) {
+			uint32_t idx = (i << 3) | j;
+			struct xive_eq *eq;
+			struct xive_ive *ive;
+			uint32_t eq_blk, eq_idx, eq_data;
+
+			eq = xive_get_eq(x, idx);
+			if (!eq || !xive_get_field32(EQ_W0_VALID, eq->w0))
+				continue;
+			if (!xive_get_field32(EQ_W0_ESCALATE_CTL, eq->w0))
+				continue;
+
+			ive = (struct xive_ive*)(char *)&eq->w4;
+
+			n += snprintf(buf + n, size - n, "%08x ",
+				      MAKE_ESCALATION_GIRQ(x->block_id, i));
+			lock(&x->lock);
+			xive_eqc_scrub(x, x->block_id, idx);
+			eq_blk = xive_get_field64(IVE_EQ_BLOCK, ive->w);
+			eq_idx = xive_get_field64(IVE_EQ_INDEX, ive->w);
+			eq_data = xive_get_field64(IVE_EQ_DATA, ive->w);
+			unlock(&x->lock);
+
+			n += snprintf(buf + n, size - n, "eq=%x/%x data=%x\n",
+				      eq_blk, eq_idx, eq_data);
+		}
+	}
+	return n;
+}
+
+static int xive_vpt_read(struct opal_debug *d, void *buf, uint64_t size)
+{
+	struct xive *x = d->private;
+	int i;
+	int n = 0;
+
+	n += snprintf(buf + n, size - n, "VPT[%d]\n", x->block_id);
+	for (i = 0; i < XIVE_VP_COUNT; i++) {
+		struct xive_vp *vp;
+
+		/* Ignore the physical CPU VPs */
+		if (i >= XIVE_HW_VP_BASE &&
+		    i < (XIVE_HW_VP_BASE + XIVE_HW_VP_COUNT))
+			continue;
+
+		vp = xive_get_vp(x, i);
+		if (!vp || !xive_get_field32(VP_W0_VALID, vp->w0))
+			continue;
+		lock(&x->lock);
+		xive_vpc_scrub(x, x->block_id, i);
+		unlock(&x->lock);
+		n += snprintf(buf + n, size - n,
+			      "%08x %08x %08x %08x %08x %08x %08x %08x %08x\n",
+			      i,
+			      vp->w0, vp->w1, vp->w2, vp->w3,
+			      vp->w4, vp->w5, vp->w6, vp->w7);
+	}
+	return n;
+}
+
+static const struct opal_debug_ops xive_ivt_ops = {
+	.read = xive_ivt_read,
+};
+static const struct opal_debug_ops xive_eqt_ops = {
+	.read = xive_eqt_read,
+};
+static const struct opal_debug_ops xive_esc_ops = {
+	.read = xive_esc_read,
+};
+static const struct opal_debug_ops xive_vpt_ops = {
+	.read = xive_vpt_read,
+};
+
+static const struct {
+	const char *name;
+	const struct opal_debug_ops *ops;
+} xive_debug_handlers[] = {
+	{ "xive-ivt", &xive_ivt_ops, },
+	{ "xive-eqt", &xive_eqt_ops, },
+	{ "xive-esc", &xive_esc_ops, },
+	{ "xive-vpt", &xive_vpt_ops, },
+};
+
+static void xive_init_debug(struct xive *x)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(xive_debug_handlers); i++) {
+		struct opal_debug *d;
+		d = opal_debug_create(xive_debug_handlers[i].name,
+				      xive_debug_handlers[i].ops, x);
+
+		dt_add_property_cells(d->node, "ibm,chip-id", x->chip_id);
+	}
 }
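
Not part of the patch: a minimal, self-contained sketch of the lookup
chain the commit message describes (a source number is looked up in the
IVT, which points to an event queue, whose EQDT entry points to an NVT).
All structure layouts and names below (struct ive, struct eq, struct nvt,
route()) are invented simplifications for illustration only; the real
IVT/EQDT/NVTT entries in hw/xive.c carry far more state.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Toy IVT entry: maps an interrupt source to an event queue. */
struct ive { uint32_t eq_idx; };
/* Toy EQDT entry: maps an event queue to a notification virtual target. */
struct eq { uint32_t nvt_idx; };
/* Toy NVTT entry: backing store for the target's thread context. */
struct nvt { uint32_t thread_ctx[8]; };

static struct ive ivt[4]  = { { 0 }, { 1 }, { 1 }, { 2 } };
static struct eq  eqdt[3] = { { 7 }, { 7 }, { 9 } };
static struct nvt nvtt[16];

/* Walk the tables: source -> EQ -> NVT. */
static struct nvt *route(uint32_t src)
{
	struct ive *ive = &ivt[src];
	struct eq *eq = &eqdt[ive->eq_idx];

	return &nvtt[eq->nvt_idx];
}

int main(void)
{
	uint32_t src;

	for (src = 0; src < 4; src++)
		printf("source %" PRIu32 " -> EQ %" PRIu32 " -> NVT %td\n",
		       src, ivt[src].eq_idx, route(src) - nvtt);

	return 0;
}

With the patch applied, the real table contents can be dumped per chip
through the new xive-ivt, xive-eqt, xive-esc and xive-vpt debug entries
created by xive_init_debug(), each tagged with an "ibm,chip-id" property.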