From patchwork Thu Sep 17 16:35:42 2020
X-Patchwork-Submitter: Cédric Le Goater <clg@kaod.org>
X-Patchwork-Id: 1366263
From: Cédric Le Goater <clg@kaod.org>
To: skiboot@lists.ozlabs.org
Date: Thu, 17 Sep 2020 18:35:42 +0200
Message-ID: <20200917163544.142593-6-clg@kaod.org>
In-Reply-To: <20200917163544.142593-1-clg@kaod.org>
References: <20200917163544.142593-1-clg@kaod.org>
X-Mailer: git-send-email 2.25.4
Subject: [Skiboot] [RFC PATCH 5/6] xive/p9: Add statistics for HW procedures

Common XIVE HW procedures are cache updates and synchronization operations
that ensure pending interrupts have reached their event queues. In some
scenarios these can run frequently. Collect statistics for these procedures
and expose the results through a debug handler that can be read from Linux.
The write handler resets the statistics.

Signed-off-by: Cédric Le Goater <clg@kaod.org>
---
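Note on the helpers: struct stat, stat_init(), stat_call() and stat_printf()
come from the stat infrastructure introduced earlier in this series and are
not part of this patch. For readers without that patch at hand, a minimal
sketch of the interface assumed here could look like the block below. The
field names and the over-limit accounting are guesses inferred from the call
sites in this patch, not the actual implementation; only the signatures of
stat_init(), stat_call() and stat_printf() follow from how they are used.

#include <stdint.h>
#include <stdio.h>      /* snprintf() from skiboot's libc */
#include <timebase.h>   /* mftb(), usecs_to_tb() */

/*
 * Hypothetical sketch only -- the real definitions live in the stat
 * helper patch earlier in this series.
 */
struct stat {
        const char *name;
        uint64_t count;         /* total number of calls */
        uint64_t max_count;     /* calls slower than the limit */
        uint64_t max_tb;        /* time limit in timebase ticks */
};

static inline void stat_init(struct stat *s, const char *name,
                             uint64_t max_usecs)
{
        s->name = name;
        s->count = 0;
        s->max_count = 0;
        s->max_tb = usecs_to_tb(max_usecs);
}

/* Evaluate 'call', account it in 's' and return its value unchanged */
#define stat_call(call, s)                                      \
({                                                              \
        uint64_t __t0 = mftb();                                 \
        int64_t __rc = (call);                                  \
        (s)->count++;                                           \
        if ((mftb() - __t0) > (s)->max_tb)                      \
                (s)->max_count++;                               \
        __rc;                                                   \
})

static inline int stat_printf(struct stat *s, char *buf, size_t size)
{
        return snprintf(buf, size, "%-24s %llu calls, %llu over limit\n",
                        s->name, (unsigned long long)s->count,
                        (unsigned long long)s->max_count);
}

The idea is simply that stat_call() times the wrapped call against a
per-procedure threshold (XIVE_STAT_MAX_TIME, 50 usecs here) and returns the
call's result unchanged.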
 hw/xive.c | 107 +++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 98 insertions(+), 9 deletions(-)

diff --git a/hw/xive.c b/hw/xive.c
index b13beb575ba1..abdb2115a1a2 100644
--- a/hw/xive.c
+++ b/hw/xive.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 
 /* Always notify from EQ to VP (no EOI on EQs). Will speed up
  * EOIs at the expense of potentially higher powerbus traffic.
@@ -357,6 +358,32 @@ static inline void log_print(struct xive_cpu_state *xs __unused) { }
 
 #endif /* XIVE_PERCPU_LOG */
 
+/*
+ * Statistics
+ */
+enum {
+        XIVE_IVC_SCRUB,
+        XIVE_VPC_SCRUB,
+        XIVE_VPC_SCRUB_CLEAN,
+        XIVE_EQC_SCRUB,
+        XIVE_SYNC,
+        XIVE_SYNC_NOLOCK,
+        XIVE_VC_CACHE_KILL,
+        XIVE_PC_CACHE_KILL,
+        XIVE_STAT_LAST,
+};
+
+static const char *xive_stat_names[] = {
+        "XIVE_IVC_SCRUB",
+        "XIVE_VPC_SCRUB",
+        "XIVE_VPC_SCRUB_CLEAN",
+        "XIVE_EQC_SCRUB",
+        "XIVE_SYNC",
+        "XIVE_SYNC_NOLOCK",
+        "XIVE_VC_CACHE_KILL",
+        "XIVE_PC_CACHE_KILL",
+};
+
 struct xive {
         uint32_t chip_id;
         uint32_t block_id;
@@ -463,6 +490,8 @@ struct xive {
 
         /* In memory queue overflow */
         void *q_ovf;
+
+        struct stat stat[XIVE_STAT_LAST];
 };
 
 #define XIVE_CAN_STORE_EOI(x) XIVE_STORE_EOI_ENABLED
@@ -1202,26 +1231,34 @@ static int64_t __xive_cache_scrub(struct xive *x, enum xive_cache_type ctype,
         return 0;
 }
 
-static int64_t xive_ivc_scrub(struct xive *x, uint64_t block, uint64_t idx)
+static int64_t __xive_ivc_scrub(struct xive *x, uint64_t block, uint64_t idx)
 {
         /* IVC has no "want_inval" bit, it always invalidates */
         return __xive_cache_scrub(x, xive_cache_ivc, block, idx, false, false);
 }
+#define xive_ivc_scrub(x, b, i) \
+        stat_call(__xive_ivc_scrub(x, b, i), &x->stat[XIVE_IVC_SCRUB])
 
-static int64_t xive_vpc_scrub(struct xive *x, uint64_t block, uint64_t idx)
+static int64_t __xive_vpc_scrub(struct xive *x, uint64_t block, uint64_t idx)
 {
         return __xive_cache_scrub(x, xive_cache_vpc, block, idx, false, false);
 }
+#define xive_vpc_scrub(x, b, i) \
+        stat_call(__xive_vpc_scrub(x, b, i), &x->stat[XIVE_VPC_SCRUB])
 
-static int64_t xive_vpc_scrub_clean(struct xive *x, uint64_t block, uint64_t idx)
+static int64_t __xive_vpc_scrub_clean(struct xive *x, uint64_t block, uint64_t idx)
 {
         return __xive_cache_scrub(x, xive_cache_vpc, block, idx, true, false);
 }
+#define xive_vpc_scrub_clean(x, b, i) \
+        stat_call(__xive_vpc_scrub_clean(x, b, i), &x->stat[XIVE_VPC_SCRUB_CLEAN])
 
-static int64_t xive_eqc_scrub(struct xive *x, uint64_t block, uint64_t idx)
+static int64_t __xive_eqc_scrub(struct xive *x, uint64_t block, uint64_t idx)
 {
         return __xive_cache_scrub(x, xive_cache_eqc, block, idx, false, false);
 }
+#define xive_eqc_scrub(x, b, i) \
+        stat_call(__xive_eqc_scrub(x, b, i), &x->stat[XIVE_EQC_SCRUB])
 
 static int64_t __xive_cache_watch(struct xive *x, enum xive_cache_type ctype,
                                   uint64_t block, uint64_t idx,
@@ -2280,13 +2317,11 @@ static void xive_update_irq_mask(struct xive_src *s, uint32_t idx, bool masked)
         in_be64(mmio_base + offset);
 }
 
-static int64_t xive_sync(struct xive *x)
+static int64_t __xive_sync_nolock(struct xive *x)
 {
         uint64_t r;
         void *p;
 
-        lock(&x->lock);
-
         /* Second 2K range of second page */
         p = x->ic_base + (1 << x->ic_shift) + 0x800;
 
@@ -2316,10 +2351,20 @@ static int64_t xive_sync(struct xive *x)
         /* Workaround HW issue, read back before allowing a new sync */
         xive_regr(x, VC_GLOBAL_CONFIG);
 
+        return 0;
+}
+#define xive_sync_nolock(x) \
+        stat_call(__xive_sync_nolock(x), &x->stat[XIVE_SYNC_NOLOCK])
+
+static int64_t __xive_sync(struct xive *x)
+{
+        lock(&x->lock);
+        xive_sync_nolock(x);
         unlock(&x->lock);
 
         return 0;
 }
+#define xive_sync(x) stat_call(__xive_sync(x), &x->stat[XIVE_SYNC])
 
 static int64_t __xive_set_irq_config(struct irq_source *is, uint32_t girq,
                                      uint64_t vp, uint8_t prio, uint32_t lirq,
@@ -2586,6 +2631,16 @@ void xive_register_ipi_source(uint32_t base, uint32_t count, void *data,
                                flags, false, data, ops);
 }
 
+#define XIVE_STAT_MAX_TIME 50 /* usecs */
+
+static void xive_stat_init(struct xive *x)
+{
+        int i;
+
+        for (i = 0; i < ARRAY_SIZE(x->stat); i++)
+                stat_init(&x->stat[i], xive_stat_names[i], XIVE_STAT_MAX_TIME);
+}
+
 static struct xive *init_one_xive(struct dt_node *np)
 {
         struct xive *x;
@@ -2695,6 +2750,8 @@ static struct xive *init_one_xive(struct dt_node *np)
                                false, NULL, NULL);
 
+        xive_stat_init(x);
+
         return x;
  fail:
         xive_err(x, "Initialization failed...\n");
@@ -4357,7 +4414,7 @@ static void xive_cleanup_cpu_tima(struct cpu_thread *c)
         xive_regw(x, PC_TCTXT_INDIR0, 0);
 }
 
-static int64_t xive_vc_ind_cache_kill(struct xive *x, uint64_t type)
+static int64_t __xive_vc_ind_cache_kill(struct xive *x, uint64_t type)
 {
         uint64_t val;
 
@@ -4378,8 +4435,10 @@ static int64_t xive_vc_ind_cache_kill(struct xive *x, uint64_t type)
         }
         return 0;
 }
+#define xive_vc_ind_cache_kill(x, type) \
+        stat_call(__xive_vc_ind_cache_kill(x, type), &x->stat[XIVE_VC_CACHE_KILL])
 
-static int64_t xive_pc_ind_cache_kill(struct xive *x)
+static int64_t __xive_pc_ind_cache_kill(struct xive *x)
 {
         uint64_t val;
 
@@ -4399,6 +4458,8 @@ static int64_t xive_pc_ind_cache_kill(struct xive *x)
         }
         return 0;
 }
+#define xive_pc_ind_cache_kill(x) \
+        stat_call(__xive_pc_ind_cache_kill(x), &x->stat[XIVE_PC_CACHE_KILL])
 
 static void xive_cleanup_vp_ind(struct xive *x)
 {
@@ -5380,6 +5441,29 @@ static int xive_perf_read(struct opal_debug *d, void *buf, uint64_t size)
         return n;
 }
 
+static int xive_stat_read(struct opal_debug *d, void *buf, uint64_t size)
+{
+        struct xive *x = d->private;
+        int n = 0;
+        int i;
+
+        for (i = 0; i < ARRAY_SIZE(x->stat); i++)
+                n += stat_printf(&x->stat[i], buf + n, size - n);
+        return n;
+}
+
+static int xive_stat_write(struct opal_debug *d, void *buf, uint64_t size)
+{
+        struct xive *x = d->private;
+
+        if (!strncmp(buf, "reset", size)) {
+                xive_stat_init(x);
+                return OPAL_SUCCESS;
+        } else {
+                return OPAL_PARAMETER;
+        }
+}
+
 static const struct opal_debug_ops xive_ivt_ops = {
         .read = xive_ivt_read,
 };
@@ -5395,6 +5479,10 @@ static const struct opal_debug_ops xive_vpt_ops = {
 static const struct opal_debug_ops xive_perf_ops = {
         .read = xive_perf_read,
 };
+static const struct opal_debug_ops xive_stat_ops = {
+        .read = xive_stat_read,
+        .write = xive_stat_write,
+};
 
 static const struct {
         const char *name;
@@ -5405,6 +5493,7 @@ static const struct {
         { "xive-esc", &xive_esc_ops, },
         { "xive-vpt", &xive_vpt_ops, },
         { "xive-perf", &xive_perf_ops, },
+        { "xive-stat", &xive_stat_ops, },
 };
 
 static void xive_init_debug(struct xive *x)
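One design note on the wrappers: each #define keeps the original function
name and return value, so existing call sites need no change; only the
definitions are renamed with a __ prefix. A hypothetical caller inside
hw/xive.c (example_eq_flush() below is illustrative, not part of skiboot)
still reads:

/* Hypothetical call site -- unchanged by this patch.  xive_eqc_scrub()
 * and xive_sync() now expand to stat_call(__xive_eqc_scrub(...), ...)
 * and stat_call(__xive_sync(...), ...), so both calls are accounted in
 * x->stat[XIVE_EQC_SCRUB] and x->stat[XIVE_SYNC] transparently. */
static int64_t example_eq_flush(struct xive *x, uint32_t blk, uint32_t idx)
{
        int64_t rc;

        rc = xive_eqc_scrub(x, blk, idx);
        if (rc)
                return rc;

        return xive_sync(x);
}

The collected counters then show up in the new "xive-stat" debug entry
alongside "xive-perf", and writing "reset" to that entry re-runs
xive_stat_init() to clear them.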