From patchwork Tue Jan 9 10:44:53 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 1884365
From: Clément Léger
To: opensbi@lists.infradead.org
Cc: Clément Léger, Atish Patra, Deepak Gupta, Anup Patel, Himanshu Chauhan, Xiang W, Jessica Clarke
Subject: [RFC PATCH v2 1/4] lib: sbi: provides regs to sbi_ipi_process()
Date: Tue, 9 Jan 2024 11:44:53 +0100
Message-ID: <20240109104500.2080121-2-cleger@rivosinc.com>
In-Reply-To: <20240109104500.2080121-1-cleger@rivosinc.com>
References: <20240109104500.2080121-1-cleger@rivosinc.com>
In order to implement SSE on IPI notifications, provide regs to
sbi_ipi_process(). This will be used in two cases: a global event
triggered from one CPU to another, and IPI injection to a specific hart.

Signed-off-by: Clément Léger
Reviewed-by: Anup Patel
---
 include/sbi/sbi_ipi.h     |  6 ++++--
 lib/sbi/sbi_ipi.c         | 12 +++++++-----
 lib/sbi/sbi_tlb.c         |  2 +-
 lib/sbi/sbi_trap.c        |  4 ++--
 lib/utils/irqchip/imsic.c |  2 +-
 5 files changed, 15 insertions(+), 11 deletions(-)

diff --git a/include/sbi/sbi_ipi.h b/include/sbi/sbi_ipi.h
index d396233..0b32194 100644
--- a/include/sbi/sbi_ipi.h
+++ b/include/sbi/sbi_ipi.h
@@ -11,6 +11,7 @@
 #define __SBI_IPI_H__
 
 #include
+#include
 
 /* clang-format off */
 
@@ -68,7 +69,8 @@ struct sbi_ipi_event_ops {
	 * Note: This is a mandatory callback and it is called on the
	 * remote HART after IPI is triggered.
	 */
-	void (* process)(struct sbi_scratch *scratch);
+	void (* process)(struct sbi_scratch *scratch,
+			 struct sbi_trap_regs *regs);
 };
 
 int sbi_ipi_send_many(ulong hmask, ulong hbase, u32 event, void *data);
@@ -83,7 +85,7 @@ void sbi_ipi_clear_smode(void);
 
 int sbi_ipi_send_halt(ulong hmask, ulong hbase);
 
-void sbi_ipi_process(void);
+void sbi_ipi_process(struct sbi_trap_regs *regs);
 
 int sbi_ipi_raw_send(u32 hartindex);
 
diff --git a/lib/sbi/sbi_ipi.c b/lib/sbi/sbi_ipi.c
index 048aaa6..269d48a 100644
--- a/lib/sbi/sbi_ipi.c
+++ b/lib/sbi/sbi_ipi.c
@@ -186,7 +186,8 @@ void sbi_ipi_event_destroy(u32 event)
 	ipi_ops_array[event] = NULL;
 }
 
-static void sbi_ipi_process_smode(struct sbi_scratch *scratch)
+static void sbi_ipi_process_smode(struct sbi_scratch *scratch,
+				  struct sbi_trap_regs *regs)
 {
 	csr_set(CSR_MIP, MIP_SSIP);
 }
@@ -208,7 +209,8 @@ void sbi_ipi_clear_smode(void)
 	csr_clear(CSR_MIP, MIP_SSIP);
 }
 
-static void sbi_ipi_process_halt(struct sbi_scratch *scratch)
+static void sbi_ipi_process_halt(struct sbi_scratch *scratch,
+				 struct sbi_trap_regs *regs)
 {
 	sbi_hsm_hart_stop(scratch, true);
 }
@@ -225,7 +227,7 @@ int sbi_ipi_send_halt(ulong hmask, ulong hbase)
 	return sbi_ipi_send_many(hmask, hbase, ipi_halt_event, NULL);
 }
 
-void sbi_ipi_process(void)
+void sbi_ipi_process(struct sbi_trap_regs *regs)
 {
 	unsigned long ipi_type;
 	unsigned int ipi_event;
@@ -244,7 +246,7 @@ void sbi_ipi_process(void)
 		if (ipi_type & 1UL) {
 			ipi_ops = ipi_ops_array[ipi_event];
 			if (ipi_ops)
-				ipi_ops->process(scratch);
+				ipi_ops->process(scratch, regs);
 		}
 		ipi_type = ipi_type >> 1;
 		ipi_event++;
@@ -349,7 +351,7 @@ void sbi_ipi_exit(struct sbi_scratch *scratch)
 	csr_clear(CSR_MIE, MIP_MSIP);
 
 	/* Process pending IPIs */
-	sbi_ipi_process();
+	sbi_ipi_process(NULL);
 
 	/* Platform exit */
 	sbi_platform_ipi_exit(sbi_platform_ptr(scratch));
diff --git a/lib/sbi/sbi_tlb.c b/lib/sbi/sbi_tlb.c
index cca319f..3fff519 100644
--- a/lib/sbi/sbi_tlb.c
+++ b/lib/sbi/sbi_tlb.c
@@ -240,7 +240,7 @@ static bool tlb_process_once(struct sbi_scratch *scratch)
 	return false;
 }
 
-static void tlb_process(struct sbi_scratch *scratch)
+static void tlb_process(struct sbi_scratch *scratch, struct sbi_trap_regs *regs)
 {
 	while (tlb_process_once(scratch));
 }
diff --git a/lib/sbi/sbi_trap.c b/lib/sbi/sbi_trap.c
index dbf307c..d574ab1 100644
--- a/lib/sbi/sbi_trap.c
+++ b/lib/sbi/sbi_trap.c
@@ -206,7 +206,7 @@ static int sbi_trap_nonaia_irq(struct sbi_trap_regs *regs, ulong mcause)
 		sbi_timer_process();
 		break;
 	case IRQ_M_SOFT:
-		sbi_ipi_process();
+		sbi_ipi_process(regs);
 		break;
 	case IRQ_M_EXT:
 		return sbi_irqchip_process(regs);
@@ -229,7 +229,7 @@ static int sbi_trap_aia_irq(struct sbi_trap_regs *regs, ulong mcause)
 		sbi_timer_process();
 		break;
 	case IRQ_M_SOFT:
-		sbi_ipi_process();
+		sbi_ipi_process(regs);
 		break;
 	case IRQ_M_EXT:
 		rc = sbi_irqchip_process(regs);
diff --git a/lib/utils/irqchip/imsic.c b/lib/utils/irqchip/imsic.c
index 36ef66c..a207dbc 100644
--- a/lib/utils/irqchip/imsic.c
+++ b/lib/utils/irqchip/imsic.c
@@ -149,7 +149,7 @@ static int imsic_external_irqfn(struct sbi_trap_regs *regs)
 
 	switch (mirq) {
 	case IMSIC_IPI_ID:
-		sbi_ipi_process();
+		sbi_ipi_process(regs);
 		break;
 	default:
 		sbi_printf("%s: unhandled IRQ%d\n",

From patchwork Tue Jan 9 10:44:54 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 1884368
From: Clément Léger
To: opensbi@lists.infradead.org
Cc: Clément Léger, Atish Patra, Deepak Gupta, Anup Patel, Himanshu Chauhan, Xiang W, Jessica Clarke
Subject: [RFC PATCH v2 2/4] lib: sbi: add support for Supervisor Software Events extension
Date: Tue, 9 Jan 2024 11:44:54 +0100
Message-ID: <20240109104500.2080121-3-cleger@rivosinc.com>
In-Reply-To: <20240109104500.2080121-1-cleger@rivosinc.com>
References: <20240109104500.2080121-1-cleger@rivosinc.com>

This extension [1] allows the SBI to deliver events to the supervisor via
a software mechanism. This extension defines events (either local or
global) which are signaled by the SBI on specific signal sources (IRQs,
traps, etc.) and are injected to be executed in supervisor mode.
[1] https://lists.riscv.org/g/tech-prs/message/744

Signed-off-by: Clément Léger
---
 include/sbi/sbi_ecall_interface.h |   48 +-
 include/sbi/sbi_error.h           |    4 +-
 include/sbi/sbi_sse.h             |   93 +++
 lib/sbi/Kconfig                   |    4 +
 lib/sbi/objects.mk                |    4 +
 lib/sbi/sbi_ecall_sse.c           |   58 ++
 lib/sbi/sbi_init.c                |   13 +
 lib/sbi/sbi_ipi.c                 |    2 +-
 lib/sbi/sbi_sse.c                 | 1078 +++++++++++++++++++++++++++++
 9 files changed, 1301 insertions(+), 3 deletions(-)
 create mode 100644 include/sbi/sbi_sse.h
 create mode 100644 lib/sbi/sbi_ecall_sse.c
 create mode 100644 lib/sbi/sbi_sse.c

diff --git a/include/sbi/sbi_ecall_interface.h b/include/sbi/sbi_ecall_interface.h
index d8c646d..a5f3edf 100644
--- a/include/sbi/sbi_ecall_interface.h
+++ b/include/sbi/sbi_ecall_interface.h
@@ -32,6 +32,7 @@
 #define SBI_EXT_DBCN				0x4442434E
 #define SBI_EXT_SUSP				0x53555350
 #define SBI_EXT_CPPC				0x43505043
+#define SBI_EXT_SSE				0x535345
 
 /* SBI function IDs for BASE extension*/
 #define SBI_EXT_BASE_GET_SPEC_VERSION		0x0
@@ -293,6 +294,48 @@ enum sbi_cppc_reg_id {
 	SBI_CPPC_NON_ACPI_LAST	= SBI_CPPC_TRANSITION_LATENCY,
 };
 
+/* SBI Function IDs for SSE extension */
+#define SBI_EXT_SSE_READ_ATTR		0x00000000
+#define SBI_EXT_SSE_WRITE_ATTR		0x00000001
+#define SBI_EXT_SSE_REGISTER		0x00000002
+#define SBI_EXT_SSE_UNREGISTER		0x00000003
+#define SBI_EXT_SSE_ENABLE		0x00000004
+#define SBI_EXT_SSE_DISABLE		0x00000005
+#define SBI_EXT_SSE_COMPLETE		0x00000006
+#define SBI_EXT_SSE_INJECT		0x00000007
+
+/* SBI SSE Event Attributes. */
+#define SBI_SSE_ATTR_STATUS		0x00000000
+#define SBI_SSE_ATTR_PRIO		0x00000001
+#define SBI_SSE_ATTR_CONFIG		0x00000002
+#define SBI_SSE_ATTR_PREFERRED_HART	0x00000003
+#define SBI_SSE_ATTR_ENTRY_PC		0x00000004
+#define SBI_SSE_ATTR_ENTRY_A0		0x00000005
+#define SBI_SSE_ATTR_ENTRY_A6		0x00000006
+#define SBI_SSE_ATTR_ENTRY_A7		0x00000007
+#define SBI_SSE_ATTR_INTERRUPTED_MODE	0x00000008
+#define SBI_SSE_ATTR_INTERRUPTED_PC	0x00000009
+#define SBI_SSE_ATTR_INTERRUPTED_A0	0x0000000A
+#define SBI_SSE_ATTR_INTERRUPTED_A6	0x0000000B
+#define SBI_SSE_ATTR_INTERRUPTED_A7	0x0000000C
+
+#define SBI_SSE_ATTR_STATUS_STATE_OFFSET	0
+#define SBI_SSE_ATTR_STATUS_STATE_MASK		0x3
+#define SBI_SSE_ATTR_STATUS_PENDING_OFFSET	2
+#define SBI_SSE_ATTR_STATUS_INJECT_OFFSET	3
+
+#define SBI_SSE_ATTR_CONFIG_ONESHOT	(1 << 0)
+
+/* SBI SSE Event IDs. */
+#define SBI_SSE_EVENT_LOCAL_RAS		0x00000000
+#define SBI_SSE_EVENT_GLOBAL_RAS	0x00008000
+#define SBI_SSE_EVENT_LOCAL_PMU		0x00010000
+#define SBI_SSE_EVENT_LOCAL_SOFTWARE	0xffff0000
+#define SBI_SSE_EVENT_GLOBAL_SOFTWARE	0xffff8000
+
+#define SBI_SSE_EVENT_GLOBAL		(1 << 15)
+#define SBI_SSE_EVENT_PLATFORM		(1 << 14)
+
 /* SBI base specification related macros */
 #define SBI_SPEC_VERSION_MAJOR_OFFSET		24
 #define SBI_SPEC_VERSION_MAJOR_MASK		0x7f
@@ -313,8 +356,11 @@ enum sbi_cppc_reg_id {
 #define SBI_ERR_ALREADY_STARTED			-7
 #define SBI_ERR_ALREADY_STOPPED			-8
 #define SBI_ERR_NO_SHMEM			-9
+#define SBI_ERR_INVALID_STATE			-10
+#define SBI_ERR_BAD_RANGE			-11
+#define SBI_ERR_BUSY				-12
 
-#define SBI_LAST_ERR				SBI_ERR_NO_SHMEM
+#define SBI_LAST_ERR				SBI_ERR_BUSY
 
 /* clang-format on */
 
diff --git a/include/sbi/sbi_error.h b/include/sbi/sbi_error.h
index a77e3f8..5efb3b9 100644
--- a/include/sbi/sbi_error.h
+++ b/include/sbi/sbi_error.h
@@ -24,6 +24,9 @@
 #define SBI_EALREADY_STARTED	SBI_ERR_ALREADY_STARTED
 #define SBI_EALREADY_STOPPED	SBI_ERR_ALREADY_STOPPED
 #define SBI_ENO_SHMEM		SBI_ERR_NO_SHMEM
+#define SBI_EINVALID_STATE	SBI_ERR_INVALID_STATE
+#define SBI_EBAD_RANGE		SBI_ERR_BAD_RANGE
+#define SBI_EBUSY		SBI_ERR_BUSY
 
 #define SBI_ENODEV		-1000
 #define SBI_ENOSYS		-1001
@@ -34,7 +37,6 @@
 #define SBI_ENOMEM		-1006
 #define SBI_EUNKNOWN		-1007
 #define SBI_ENOENT		-1008
-
 /* clang-format on */
 
 #endif
diff --git a/include/sbi/sbi_sse.h b/include/sbi/sbi_sse.h
new file mode 100644
index 0000000..ed1b138
--- /dev/null
+++ b/include/sbi/sbi_sse.h
@@ -0,0 +1,93 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2023 Rivos Systems.
+ */
+
+#ifndef __SBI_SSE_H__
+#define __SBI_SSE_H__
+
+#include
+#include
+#include
+
+struct sbi_scratch;
+
+#define EXC_MODE_PP_SHIFT		0
+#define EXC_MODE_PP			BIT(EXC_MODE_PP_SHIFT)
+#define EXC_MODE_PV_SHIFT		1
+#define EXC_MODE_PV			BIT(EXC_MODE_PV_SHIFT)
+#define EXC_MODE_SSTATUS_SPIE_SHIFT	2
+#define EXC_MODE_SSTATUS_SPIE		BIT(EXC_MODE_SSTATUS_SPIE_SHIFT)
+
+
+struct sbi_sse_cb_ops {
+	/**
+	 * Called when hart_id is changed on the event.
+	 */
+	void (*set_hartid_cb)(uint32_t event_id, unsigned long hart_id);
+
+	/**
+	 * Called when the SBI_EXT_SSE_COMPLETE is invoked on the event.
+	 */
+	void (*complete_cb)(uint32_t event_id);
+
+	/**
+	 * Called when the SBI_EXT_SSE_REGISTER is invoked on the event.
+	 */
+	void (*register_cb)(uint32_t event_id);
+
+	/**
+	 * Called when the SBI_EXT_SSE_UNREGISTER is invoked on the event.
+	 */
+	void (*unregister_cb)(uint32_t event_id);
+
+	/**
+	 * Called when the SBI_EXT_SSE_ENABLE is invoked on the event.
+	 */
+	void (*enable_cb)(uint32_t event_id);
+
+	/**
+	 * Called when the SBI_EXT_SSE_DISABLE is invoked on the event.
+	 */
+	void (*disable_cb)(uint32_t event_id);
+};
+
+/* Set the callback operations for an event
+ * @param event_id Event identifier (SBI_SSE_EVENT_*)
+ * @param cb_ops Callback operations
+ * @return 0 on success, error otherwise
+ */
+int sbi_sse_set_cb_ops(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops);
+
+/* Inject an event to the current hart
+ * @param event_id Event identifier (SBI_SSE_EVENT_*)
+ * @param regs Registers that were used on SBI entry
+ * @return 0 on success, error otherwise
+ */
+int sbi_sse_inject_event(uint32_t event_id, struct sbi_trap_regs *regs);
+
+int sbi_sse_init(struct sbi_scratch *scratch, bool cold_boot);
+void sbi_sse_exit(struct sbi_scratch *scratch);
+
+/* Interface called from sbi_ecall_sse.c */
+int sbi_sse_register(uint32_t event_id, unsigned long handler_entry_pc,
+		     unsigned long handler_entry_a0,
+		     unsigned long handler_entry_a6,
+		     unsigned long handler_entry_a7);
+int sbi_sse_unregister(uint32_t event_id);
+int sbi_sse_enable(uint32_t event_id);
+int sbi_sse_disable(uint32_t event_id);
+int sbi_sse_complete(uint32_t event_id, struct sbi_trap_regs *regs,
+		     struct sbi_ecall_return *out);
+int sbi_sse_inject_from_ecall(uint32_t event_id, unsigned long hart_id,
+			      struct sbi_trap_regs *regs,
+			      struct sbi_ecall_return *out);
+int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id,
+		       uint32_t attr_count, unsigned long output_phys_lo,
+		       unsigned long output_phys_hi);
+int sbi_sse_write_attrs(uint32_t event_id, uint32_t base_attr_id,
+			uint32_t attr_count, unsigned long input_phys_lo,
+			unsigned long input_phys_hi);
+
+#endif
diff --git a/lib/sbi/Kconfig b/lib/sbi/Kconfig
index 477775e..1b713e9 100644
--- a/lib/sbi/Kconfig
+++ b/lib/sbi/Kconfig
@@ -46,4 +46,8 @@ config SBI_ECALL_VENDOR
 	bool "Platform-defined vendor extensions"
 	default y
 
+config SBI_ECALL_SSE
+	bool "SSE extension"
+	default y
+
 endmenu
diff --git a/lib/sbi/objects.mk b/lib/sbi/objects.mk
index c699187..011c824 100644
--- a/lib/sbi/objects.mk
+++ b/lib/sbi/objects.mk
@@ -52,6 +52,9 @@ libsbi-objs-$(CONFIG_SBI_ECALL_LEGACY) += sbi_ecall_legacy.o
 carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_VENDOR) += ecall_vendor
 libsbi-objs-$(CONFIG_SBI_ECALL_VENDOR) += sbi_ecall_vendor.o
 
+carray-sbi_ecall_exts-$(CONFIG_SBI_ECALL_SSE) += ecall_sse
+libsbi-objs-$(CONFIG_SBI_ECALL_SSE) += sbi_ecall_sse.o
+
 libsbi-objs-y += sbi_bitmap.o
 libsbi-objs-y += sbi_bitops.o
 libsbi-objs-y += sbi_console.o
@@ -71,6 +74,7 @@ libsbi-objs-y += sbi_misaligned_ldst.o
 libsbi-objs-y += sbi_platform.o
 libsbi-objs-y += sbi_pmu.o
 libsbi-objs-y += sbi_scratch.o
+libsbi-objs-y += sbi_sse.o
 libsbi-objs-y += sbi_string.o
 libsbi-objs-y += sbi_system.o
 libsbi-objs-y += sbi_timer.o
diff --git a/lib/sbi/sbi_ecall_sse.c b/lib/sbi/sbi_ecall_sse.c
new file mode 100644
index 0000000..15c1a65
--- /dev/null
+++ b/lib/sbi/sbi_ecall_sse.c
@@ -0,0 +1,58 @@
+#include
+#include
+#include
+#include
+
+static int sbi_ecall_sse_handler(unsigned long extid, unsigned long funcid,
+				 struct sbi_trap_regs *regs,
+				 struct sbi_ecall_return *out)
+{
+	int ret;
+
+	switch (funcid) {
+	case SBI_EXT_SSE_READ_ATTR:
+		ret = sbi_sse_read_attrs(regs->a0, regs->a1, regs->a2, regs->a3,
+					 regs->a4);
+		break;
+	case SBI_EXT_SSE_WRITE_ATTR:
+		ret = sbi_sse_write_attrs(regs->a0, regs->a1, regs->a2,
+					  regs->a3, regs->a4);
+		break;
+	case SBI_EXT_SSE_REGISTER:
+		ret = sbi_sse_register(regs->a0, regs->a1, regs->a2, regs->a3,
+				       regs->a4);
+		break;
+	case SBI_EXT_SSE_UNREGISTER:
+		ret = sbi_sse_unregister(regs->a0);
+		break;
+	case SBI_EXT_SSE_ENABLE:
+		ret = sbi_sse_enable(regs->a0);
+		break;
+	case SBI_EXT_SSE_DISABLE:
+		ret = sbi_sse_disable(regs->a0);
+		break;
+	case SBI_EXT_SSE_COMPLETE:
+		ret = sbi_sse_complete(regs->a0, regs, out);
+		break;
+	case SBI_EXT_SSE_INJECT:
+		ret = sbi_sse_inject_from_ecall(regs->a0, regs->a1, regs, out);
+		break;
+	default:
+		ret = SBI_ENOTSUPP;
+	}
+	return ret;
+}
+
+struct sbi_ecall_extension ecall_sse;
+
+static int sbi_ecall_sse_register_extensions(void)
+{
+	return sbi_ecall_register_extension(&ecall_sse);
+}
+
+struct sbi_ecall_extension ecall_sse = {
+	.extid_start		= SBI_EXT_SSE,
+	.extid_end		= SBI_EXT_SSE,
+	.register_extensions	= sbi_ecall_sse_register_extensions,
+	.handle			= sbi_ecall_sse_handler,
+};
diff --git a/lib/sbi/sbi_init.c b/lib/sbi/sbi_init.c
index 6a98e13..f9e6bb9 100644
--- a/lib/sbi/sbi_init.c
+++ b/lib/sbi/sbi_init.c
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
 #include
@@ -315,6 +316,12 @@ static void __noreturn init_coldboot(struct sbi_scratch *scratch, u32 hartid)
 	if (rc)
 		sbi_hart_hang();
 
+	rc = sbi_sse_init(scratch, true);
+	if (rc) {
+		sbi_printf("%s: sse init failed (error %d)\n", __func__, rc);
+		sbi_hart_hang();
+	}
+
 	rc = sbi_pmu_init(scratch, true);
 	if (rc) {
 		sbi_printf("%s: pmu init failed (error %d)\n",
@@ -435,6 +442,10 @@ static void __noreturn init_warm_startup(struct sbi_scratch *scratch,
 	if (rc)
 		sbi_hart_hang();
 
+	rc = sbi_sse_init(scratch, false);
+	if (rc)
+		sbi_hart_hang();
+
 	rc = sbi_pmu_init(scratch, false);
 	if (rc)
 		sbi_hart_hang();
@@ -639,6 +650,8 @@ void __noreturn sbi_exit(struct sbi_scratch *scratch)
 
 	sbi_platform_early_exit(plat);
 
+	sbi_sse_exit(scratch);
+
 	sbi_pmu_exit(scratch);
 
 	sbi_timer_exit(scratch);
diff --git a/lib/sbi/sbi_ipi.c b/lib/sbi/sbi_ipi.c
index 269d48a..9967016 100644
--- a/lib/sbi/sbi_ipi.c
+++ b/lib/sbi/sbi_ipi.c
@@ -66,7 +66,7 @@ static int sbi_ipi_send(struct sbi_scratch *scratch, u32 remote_hartindex,
 	 * SBI_IPI_UPDATE_BREAK for self-IPIs. For other events, check
 	 * for self-IPI and execute the callback directly here.
 	 */
-	ipi_ops->process(scratch);
+	ipi_ops->process(scratch, NULL);
 
 	return 0;
 }
diff --git a/lib/sbi/sbi_sse.c b/lib/sbi/sbi_sse.c
new file mode 100644
index 0000000..7dc7881
--- /dev/null
+++ b/lib/sbi/sbi_sse.c
@@ -0,0 +1,1078 @@
+/*
+ * SPDX-License-Identifier: BSD-2-Clause
+ *
+ * Copyright (c) 2023 Rivos Systems Inc.
+ *
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include
+
+#define sse_get_hart_state_ptr(__scratch) \
+	sbi_scratch_read_type((__scratch), void *, shs_ptr_off)
+
+#define sse_thishart_state_ptr() \
+	sse_get_hart_state_ptr(sbi_scratch_thishart_ptr())
+
+#define sse_set_hart_state_ptr(__scratch, __sse_state) \
+	sbi_scratch_write_type((__scratch), void *, shs_ptr_off, (__sse_state))
+
+/*
+ * Rather than using memcpy to copy the context (which does it byte-per-byte),
+ * copy each field which generates ld/lw.
+ */
+#define sse_regs_copy(dst, src) \
+	(dst)->a0 = (src)->a0; \
+	(dst)->a6 = (src)->a6; \
+	(dst)->a7 = (src)->a7
+
+#define EVENT_IS_GLOBAL(__event_id) ((__event_id) & SBI_SSE_EVENT_GLOBAL)
+
+static const uint32_t supported_events[] =
+{
+	SBI_SSE_EVENT_LOCAL_RAS,
+	SBI_SSE_EVENT_GLOBAL_RAS,
+	SBI_SSE_EVENT_LOCAL_PMU,
+	SBI_SSE_EVENT_LOCAL_SOFTWARE,
+	SBI_SSE_EVENT_GLOBAL_SOFTWARE,
+};
+
+#define EVENT_COUNT array_size(supported_events)
+
+#define sse_event_invoke_cb(_event, _cb, ...)				\
+{									\
+	if (_event->cb_ops && _event->cb_ops->_cb)			\
+		_event->cb_ops->_cb(_event->event_id, ##__VA_ARGS__);	\
+}
+
+#define SSE_EVENT_STATE(__e)	   __e->attrs.status.state
+#define SSE_EVENT_PENDING(__e)	   __e->attrs.status.pending
+#define SSE_EVENT_CAN_INJECT(__e)  __e->attrs.status.inject
+#define SSE_EVENT_HARTID(__e)	   __e->attrs.hartid
+#define SSE_EVENT_PRIO(__e)	   __e->attrs.prio
+#define SSE_EVENT_CONFIG(__e)	   __e->attrs.config
+#define SSE_EVENT_ENTRY(__e)	   __e->attrs.entry
+#define SSE_EVENT_INTERRUPTED(__e) __e->attrs.interrupted
+
+struct sse_entry_state {
+	/** Entry program counter */
+	unsigned long pc;
+	/** a0 register state */
+	unsigned long a0;
+	/** a6 register state */
+	unsigned long a6;
+	/** a7 register state */
+	unsigned long a7;
+};
+
+struct sse_interrupted_state {
+	/** Exception mode */
+	unsigned long exc_mode;
+	/** Interrupted program counter */
+	unsigned long pc;
+	/** a0 register state */
+	unsigned long a0;
+	/** a6 register state */
+	unsigned long a6;
+	/** a7 register state */
+	unsigned long a7;
+};
+
+enum sbi_sse_state {
+	SSE_STATE_UNUSED = 0,
+	SSE_STATE_REGISTERED = 1,
+	SSE_STATE_ENABLED = 2,
+	SSE_STATE_RUNNING = 3,
+};
+
+struct sse_ipi_inject_data {
+	uint32_t event_id;
+};
+
+struct sbi_sse_event_status {
+	unsigned long state:2;
+	unsigned long pending:1;
+	unsigned long inject:1;
+} __packed;
+
+struct sbi_sse_event_attrs {
+	struct sbi_sse_event_status status;
+	unsigned long prio;
+	unsigned long config;
+	unsigned long hartid;
+	struct sse_entry_state entry;
+	struct sse_interrupted_state interrupted;
+};
+
+/* Make sure all attributes are packed for direct memcpy in ATTR_READ */
+#define assert_field_offset(field, attr_offset)				\
+	_Static_assert(							\
+		((offsetof(struct sbi_sse_event_attrs, field)) /	\
+		 sizeof(unsigned long)) == attr_offset,			\
+		"field "#field" from struct sbi_sse_event_attrs invalid offset, expected "#attr_offset \
+	)
+
+assert_field_offset(status, SBI_SSE_ATTR_STATUS);
+assert_field_offset(prio, SBI_SSE_ATTR_PRIO);
+assert_field_offset(config, SBI_SSE_ATTR_CONFIG);
+assert_field_offset(hartid, SBI_SSE_ATTR_PREFERRED_HART);
+assert_field_offset(entry.pc, SBI_SSE_ATTR_ENTRY_PC);
+assert_field_offset(entry.a0, SBI_SSE_ATTR_ENTRY_A0);
+assert_field_offset(entry.a6, SBI_SSE_ATTR_ENTRY_A6);
+assert_field_offset(entry.a7, SBI_SSE_ATTR_ENTRY_A7);
+assert_field_offset(interrupted.exc_mode, SBI_SSE_ATTR_INTERRUPTED_MODE);
+assert_field_offset(interrupted.pc, SBI_SSE_ATTR_INTERRUPTED_PC);
+assert_field_offset(interrupted.a0, SBI_SSE_ATTR_INTERRUPTED_A0);
+assert_field_offset(interrupted.a6, SBI_SSE_ATTR_INTERRUPTED_A6);
+assert_field_offset(interrupted.a7, SBI_SSE_ATTR_INTERRUPTED_A7);
+
+struct sbi_sse_event {
+	struct sbi_sse_event_attrs attrs;
+	uint32_t event_id;
+	const struct sbi_sse_cb_ops *cb_ops;
+	struct sbi_dlist node;
+	/* Only global events are using the lock, local ones don't need it */
+	spinlock_t lock;
+};
+
+struct sse_hart_state {
+	struct sbi_dlist event_list;
+	spinlock_t list_lock;
+	struct sbi_sse_event *local_events;
+};
+
+static unsigned int local_event_count;
+static unsigned int global_event_count;
+static struct sbi_sse_event *global_events;
+
+static unsigned long sse_inject_fifo_off;
+static unsigned long sse_inject_fifo_mem_off;
+/* Offset of pointer to SSE HART state in scratch space */
+static unsigned long shs_ptr_off;
+
+static u32 sse_ipi_inject_event = SBI_IPI_EVENT_MAX;
+
+static int sse_ipi_inject_send(unsigned long hartid, uint32_t event_id);
+
+static bool sse_event_is_global(struct sbi_sse_event *e)
+{
+	return EVENT_IS_GLOBAL(e->event_id);
+}
+
+static void sse_global_event_lock(struct sbi_sse_event *e)
+{
+	if (sse_event_is_global(e))
+		spin_lock(&e->lock);
+}
+
+static void sse_global_event_unlock(struct sbi_sse_event *e)
+{
+	if (sse_event_is_global(e))
+		spin_unlock(&e->lock);
+}
+
+static void sse_event_set_state(struct sbi_sse_event *e,
+				enum sbi_sse_state new_state)
+{
+	enum sbi_sse_state prev_state = SSE_EVENT_STATE(e);
+
+	if ((new_state - prev_state == 1) || (prev_state - new_state == 1)) {
+		SSE_EVENT_STATE(e) = new_state;
+		return;
+	}
+
+	sbi_panic("Invalid SSE state transition: %d -> %d\n", prev_state,
+		  new_state);
+}
+
+static struct sbi_sse_event *sse_event_get(uint32_t event)
+{
+	unsigned int i;
+	struct sbi_sse_event *events, *e;
+	unsigned int count;
+	struct sse_hart_state *shs;
+
+	if (EVENT_IS_GLOBAL(event)) {
+		count = global_event_count;
+		events = global_events;
+	} else {
+		count = local_event_count;
+		shs = sse_thishart_state_ptr();
+		events = shs->local_events;
+	}
+
+	for (i = 0; i < count; i++) {
+		e = &events[i];
+		if (e->event_id == event)
+			return e;
+	}
+
+	return NULL;
+}
+
+static struct sse_hart_state *sse_event_get_hart_state(struct sbi_sse_event *e)
+{
+	struct sbi_scratch *s = sbi_hartid_to_scratch(SSE_EVENT_HARTID(e));
+
+	return sse_get_hart_state_ptr(s);
+}
+
+static void sse_event_remove_from_list(struct sbi_sse_event *e)
+{
+	struct sse_hart_state *state = sse_event_get_hart_state(e);
+
+	spin_lock(&state->list_lock);
+	sbi_list_del(&e->node);
+	spin_unlock(&state->list_lock);
+}
+
+static void sse_event_add_to_list(struct sbi_sse_event *e)
+{
+	struct sse_hart_state *state = sse_event_get_hart_state(e);
+	struct sbi_sse_event *tmp;
+
+	spin_lock(&state->list_lock);
+	sbi_list_for_each_entry(tmp, &state->event_list, node) {
+		if (SSE_EVENT_PRIO(e) < SSE_EVENT_PRIO(tmp))
+			break;
+		if (SSE_EVENT_PRIO(e) == SSE_EVENT_PRIO(tmp) &&
+		    e->event_id < tmp->event_id)
+			break;
+	}
+	sbi_list_add_tail(&e->node, &tmp->node);
+
+	spin_unlock(&state->list_lock);
+}
+
+static int sse_event_disable(struct sbi_sse_event *e)
+{
+	if (SSE_EVENT_STATE(e) != SSE_STATE_ENABLED)
+		return SBI_EINVALID_STATE;
+
+	sse_event_invoke_cb(e, disable_cb);
+
+	sse_event_remove_from_list(e);
+	sse_event_set_state(e, SSE_STATE_REGISTERED);
+
+	return SBI_OK;
+}
+
+static int sse_event_set_hart_id_check(struct sbi_sse_event *e,
+				       unsigned long new_hartid)
+{
+ int hstate; + unsigned int hartid = (uint32_t) new_hartid; + struct sbi_domain * hd = sbi_domain_thishart_ptr(); + + if (!sse_event_is_global(e)) + return SBI_EDENIED; + + if (SSE_EVENT_STATE(e) >= SSE_STATE_ENABLED) + return SBI_EBUSY; + + if (!sbi_domain_is_assigned_hart(hd, new_hartid)) + return SBI_EINVAL; + + hstate = sbi_hsm_hart_get_state(hd, hartid); + if (hstate != SBI_HSM_STATE_STARTED) + return SBI_EINVAL; + + return SBI_OK; +} + +static int sse_event_set_attr_check(struct sbi_sse_event *e, uint32_t attr_id, + unsigned long val) +{ + int ret = SBI_OK; + + switch (attr_id) { + case SBI_SSE_ATTR_CONFIG: + if (val & ~SBI_SSE_ATTR_CONFIG_ONESHOT) + ret = SBI_ERR_INVALID_PARAM; + break; + case SBI_SSE_ATTR_PRIO: + if (SSE_EVENT_STATE(e) >= SSE_STATE_ENABLED) + ret = SBI_EINVALID_STATE; + break; + case SBI_SSE_ATTR_PREFERRED_HART: + ret = sse_event_set_hart_id_check(e, val); + break; + default: + ret = SBI_EBAD_RANGE; + break; + } + + return ret; +} + +static void sse_event_set_attr(struct sbi_sse_event *e, uint32_t attr_id, + unsigned long val) +{ + switch (attr_id) { + case SBI_SSE_ATTR_CONFIG: + SSE_EVENT_CONFIG(e) = val; + break; + case SBI_SSE_ATTR_PRIO: + SSE_EVENT_PRIO(e) = (uint32_t)val; + break; + case SBI_SSE_ATTR_PREFERRED_HART: + SSE_EVENT_HARTID(e) = val; + sse_event_invoke_cb(e, set_hartid_cb, val); + break; + } +} + +static int sse_event_register(struct sbi_sse_event *e, + unsigned long handler_entry_pc, + unsigned long handler_entry_a0, + unsigned long handler_entry_a6, + unsigned long handler_entry_a7) +{ + if (sse_event_is_global(e) && SSE_EVENT_HARTID(e) != current_hartid()) + return SBI_EINVAL; + + if (SSE_EVENT_STATE(e) != SSE_STATE_UNUSED) + return SBI_EINVALID_STATE; + + SSE_EVENT_ENTRY(e).a0 = handler_entry_a0; + SSE_EVENT_ENTRY(e).a6 = handler_entry_a6; + SSE_EVENT_ENTRY(e).a7 = handler_entry_a7; + SSE_EVENT_ENTRY(e).pc = handler_entry_pc; + + sse_event_set_state(e, SSE_STATE_REGISTERED); + + sse_event_invoke_cb(e, register_cb); + + 
return 0; +} + +static int sse_event_unregister(struct sbi_sse_event *e) +{ + if (SSE_EVENT_STATE(e) != SSE_STATE_REGISTERED) + return SBI_EINVALID_STATE; + + sse_event_invoke_cb(e, unregister_cb); + + sse_event_set_state(e, SSE_STATE_UNUSED); + + return 0; +} + +static void sse_event_inject(struct sbi_sse_event *e, + struct sbi_sse_event *prev_e, + struct sbi_trap_regs *regs) +{ + ulong prev_smode, prev_virt; + struct sse_interrupted_state *i_ctx = &SSE_EVENT_INTERRUPTED(e); + struct sse_interrupted_state *prev_i_ctx; + struct sse_entry_state *e_ctx = &SSE_EVENT_ENTRY(e); + + sse_event_set_state(e, SSE_STATE_RUNNING); + SSE_EVENT_PENDING(e) = false; + + if (prev_e) { + /* back-to-back injection after another event, copy previous + * event context for correct restoration + */ + prev_i_ctx = &SSE_EVENT_INTERRUPTED(prev_e); + + sse_regs_copy(i_ctx, prev_i_ctx); + i_ctx->exc_mode = prev_i_ctx->exc_mode; + i_ctx->pc = prev_i_ctx->pc; + } else { + sse_regs_copy(i_ctx, regs); + + prev_smode = (regs->mstatus & MSTATUS_MPP) >> MSTATUS_MPP_SHIFT; + #if __riscv_xlen == 32 + prev_virt = (regs->mstatusH & MSTATUSH_MPV) ? 1 : 0; + #else + prev_virt = (regs->mstatus & MSTATUS_MPV) ? 1 : 0; + #endif + + i_ctx->exc_mode = prev_smode << EXC_MODE_PP_SHIFT; + i_ctx->exc_mode |= prev_virt << EXC_MODE_PV_SHIFT; + if (regs->mstatus & MSTATUS_SPIE) + i_ctx->exc_mode |= EXC_MODE_SSTATUS_SPIE; + i_ctx->pc = regs->mepc; + + /* We only want to set SPIE for the first event injected after + * entering M-Mode. For the event injected right after another + * event (after calling sse_event_complete(), we will keep the + * saved SPIE). 
+ */ + regs->mstatus &= ~MSTATUS_SPIE; + if (regs->mstatus & MSTATUS_SIE) + regs->mstatus |= MSTATUS_SPIE; + } + + sse_regs_copy(regs, e_ctx); + regs->mepc = e_ctx->pc; + + regs->mstatus &= ~MSTATUS_MPP; + regs->mstatus |= (PRV_S << MSTATUS_MPP_SHIFT); + + #if __riscv_xlen == 32 + regs->mstatusH &= ~MSTATUSH_MPV; + #else + regs->mstatus &= ~MSTATUS_MPV; + #endif + + regs->mstatus &= ~MSTATUS_SIE; +} + +static void sse_event_resume(struct sbi_sse_event *e, struct sbi_trap_regs *regs) +{ + struct sse_interrupted_state *i_ctx = &SSE_EVENT_INTERRUPTED(e); + + sse_regs_copy(regs, i_ctx); + + /* Restore previous virtualization state */ +#if __riscv_xlen == 32 + regs->mstatusH &= ~MSTATUSH_MPV; + if (i_ctx->exc_mode & EXC_MODE_PV) + regs->mstatusH |= MSTATUSH_MPV; +#else + regs->mstatus &= ~MSTATUS_MPV; + if (i_ctx->exc_mode & EXC_MODE_PV) + regs->mstatus |= MSTATUS_MPV; +#endif + + regs->mstatus &= ~MSTATUS_MPP; + if (i_ctx->exc_mode & EXC_MODE_PP) + regs->mstatus |= (PRV_S << MSTATUS_MPP_SHIFT); + + regs->mstatus &= ~MSTATUS_SIE; + if (regs->mstatus & MSTATUS_SPIE) + regs->mstatus |= MSTATUS_SIE; + + regs->mstatus &= ~MSTATUS_SPIE; + if (i_ctx->exc_mode & EXC_MODE_SSTATUS_SPIE) + regs->mstatus |= MSTATUS_SPIE; + + regs->mepc = i_ctx->pc; +} + +static bool sse_event_is_ready(struct sbi_sse_event *e) +{ + if (!SSE_EVENT_PENDING(e) || SSE_EVENT_STATE(e) != SSE_STATE_ENABLED || + SSE_EVENT_HARTID(e) != current_hartid()) { + return false; + } + + return true; +} + +/* Return true if an event has been injected, false otherwise */ +static bool sse_process_pending_events(struct sbi_sse_event *prev_e, + struct sbi_trap_regs *regs) +{ + struct sbi_sse_event *e, *to_run = NULL; + struct sse_hart_state *state = sse_thishart_state_ptr(); + +retry: + spin_lock(&state->list_lock); + + if (sbi_list_empty(&state->event_list)) { + spin_unlock(&state->list_lock); + return false; + } + + sbi_list_for_each_entry(e, &state->event_list, node) { + /* + * List of event is ordered by priority, 
stop at first running + * event since all other events after this one are of lower + * priority. + */ + if (SSE_EVENT_STATE(e) == SSE_STATE_RUNNING) + break; + + if (sse_event_is_ready(e)) { + to_run = e; + break; + } + } + + spin_unlock(&state->list_lock); + + if (!to_run) + return false; + + sse_global_event_lock(e); + /* + * If the event is global, the event might have been moved to another + * hart or disabled, evaluate readiness again. + */ + if (sse_event_is_global(e) && !sse_event_is_ready(e)) { + sse_global_event_unlock(e); + goto retry; + } + + sse_event_inject(e, prev_e, regs); + sse_global_event_unlock(e); + + return true; +} + +static int sse_event_set_pending(struct sbi_sse_event *e) +{ + if (SSE_EVENT_STATE(e) != SSE_STATE_RUNNING && + SSE_EVENT_STATE(e) != SSE_STATE_ENABLED) + return SBI_ERR_INVALID_STATE; + + SSE_EVENT_PENDING(e) = true; + + return SBI_OK; +} + +static void sse_ipi_inject_process(struct sbi_scratch *scratch, + struct sbi_trap_regs *regs) +{ + struct sbi_sse_event *e; + struct sse_ipi_inject_data evt; + struct sbi_fifo *sse_inject_fifo_r = + sbi_scratch_offset_ptr(scratch, sse_inject_fifo_off); + + /* This can be the case when sbi_exit() is called */ + if (!regs) + return; + + /* Mark all queued events as pending */ + while(!sbi_fifo_dequeue(sse_inject_fifo_r, &evt)) { + e = sse_event_get(evt.event_id); + if (!e) + continue; + + sse_global_event_lock(e); + sse_event_set_pending(e); + sse_global_event_unlock(e); + } + + sse_process_pending_events(NULL, regs); +} + +static struct sbi_ipi_event_ops sse_ipi_inject_ops = { + .name = "IPI_SSE_INJECT", + .process = sse_ipi_inject_process, +}; + +static int sse_ipi_inject_send(unsigned long hartid, uint32_t event_id) +{ + int ret; + struct sbi_scratch *remote_scratch = NULL; + struct sse_ipi_inject_data evt = {event_id}; + struct sbi_fifo *sse_inject_fifo_r; + + remote_scratch = sbi_hartid_to_scratch(hartid); + if (!remote_scratch) + return SBI_EINVAL; + sse_inject_fifo_r = 
sbi_scratch_offset_ptr(remote_scratch, + sse_inject_fifo_off); + + ret = sbi_fifo_enqueue(sse_inject_fifo_r, &evt); + if (ret) + return SBI_EFAIL; + + ret = sbi_ipi_send_many(1, hartid, sse_ipi_inject_event, NULL); + if (ret) + return SBI_EFAIL; + + return SBI_OK; +} + +static int sse_inject_event(uint32_t event_id, unsigned long hartid, + struct sbi_trap_regs *regs, + struct sbi_ecall_return *out, bool from_ecall) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + + /* In case of a global event, the provided hartid is ignored */ + if (sse_event_is_global(e)) + hartid = SSE_EVENT_HARTID(e); + + /* Event is for another hart, send it through IPI */ + if (hartid != current_hartid()) { + sse_global_event_unlock(e); + return sse_ipi_inject_send(hartid, event_id); + } + + ret = sse_event_set_pending(e); + sse_global_event_unlock(e); + if (ret) + return ret; + + if (from_ecall) { + /* Since we need to modify regs here before injecting the + * event (regs are copied by the event), we must set + * out->skip_regs_update to true so that these modifications + * are not overwritten afterwards. Moreover, the injection + * might not happen right now if another event of higher + * priority is already running, so always set + * out->skip_regs_update to true here.
+ */ + regs->mepc += 4; + regs->a0 = SBI_OK; + out->skip_regs_update = true; + } + + if (sse_process_pending_events(NULL, regs)) + out->skip_regs_update = true; + + return SBI_OK; +} + +static int sse_event_enable(struct sbi_sse_event *e) +{ + if (SSE_EVENT_STATE(e) != SSE_STATE_REGISTERED) + return SBI_EINVALID_STATE; + + sse_event_set_state(e, SSE_STATE_ENABLED); + sse_event_add_to_list(e); + + if (SSE_EVENT_PENDING(e)) + sbi_ipi_send_many(1, SSE_EVENT_HARTID(e), sse_ipi_inject_event, + NULL); + + sse_event_invoke_cb(e, enable_cb); + + return SBI_OK; +} + +static int sse_event_complete(struct sbi_sse_event *e, + struct sbi_trap_regs *regs, + struct sbi_ecall_return *out) +{ + bool inject; + + if (SSE_EVENT_STATE(e) != SSE_STATE_RUNNING) + return SBI_EINVALID_STATE; + + if (SSE_EVENT_HARTID(e) != current_hartid()) + return SBI_EDENIED; + + if (SSE_EVENT_CONFIG(e) & SBI_SSE_ATTR_CONFIG_ONESHOT) + sse_event_disable(e); + else + sse_event_set_state(e, SSE_STATE_ENABLED); + + sse_event_invoke_cb(e, complete_cb); + + inject = sse_process_pending_events(e, regs); + if (!inject) + sse_event_resume(e, regs); + + out->skip_regs_update = true; + + return SBI_OK; +} + +int sbi_sse_complete(uint32_t event_id, struct sbi_trap_regs *regs, + struct sbi_ecall_return *out) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + ret = sse_event_complete(e, regs, out); + sse_global_event_unlock(e); + + return ret; +} + +int sbi_sse_enable(uint32_t event_id) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + ret = sse_event_enable(e); + sse_global_event_unlock(e); + + return ret; +} + +int sbi_sse_disable(uint32_t event_id) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + ret = sse_event_disable(e); + sse_global_event_unlock(e); + + 
return ret; +} + +int sbi_sse_inject_from_ecall(uint32_t event_id, unsigned long hartid, + struct sbi_trap_regs *regs, + struct sbi_ecall_return *out) +{ + if (!sbi_domain_is_assigned_hart(sbi_domain_thishart_ptr(), hartid)) + return SBI_EINVAL; + + return sse_inject_event(event_id, hartid, regs, out, true); +} + +int sbi_sse_inject_event(uint32_t event_id, struct sbi_trap_regs *regs) +{ + /* We don't really care about return value here */ + struct sbi_ecall_return out; + + return sse_inject_event(event_id, current_hartid(), regs, &out, false); +} + +int sbi_sse_set_cb_ops(uint32_t event_id, const struct sbi_sse_cb_ops *cb_ops) +{ + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + if (cb_ops->set_hartid_cb && !sse_event_is_global(e)) + return SBI_EINVAL; + + sse_global_event_lock(e); + e->cb_ops = cb_ops; + sse_global_event_unlock(e); + + return SBI_OK; +} + +int sbi_sse_attr_check(uint32_t base_attr_id, uint32_t attr_count, + unsigned long phys_lo, + unsigned long phys_hi, + unsigned long access) +{ + const unsigned align = __riscv_xlen >> 3; + /* Avoid 32 bits overflow */ + uint64_t end_id = (uint64_t)base_attr_id + attr_count; + + if (end_id > SBI_SSE_ATTR_INTERRUPTED_A7) + return SBI_EBAD_RANGE; + + if (phys_lo & (align - 1)) + return SBI_EINVALID_ADDR; + + /* + * On RV32, the M-mode can only access the first 4GB of + * the physical address space because M-mode does not have + * MMU to access full 34-bit physical address space. + * + * Based on above, we simply fail if the upper 32bits of + * the physical address (i.e. a2 register) is non-zero on + * RV32. 
+ */ + if (phys_hi) + return SBI_EINVALID_ADDR; + + if (!sbi_domain_check_addr_range(sbi_domain_thishart_ptr(), phys_lo, + sizeof(unsigned long) * attr_count, 1, + access)) + return SBI_EINVALID_ADDR; + + return SBI_OK; +} + +static void copy_attrs(unsigned long *out, const unsigned long *in, + unsigned int long_count) +{ + unsigned int i; + + /* + * sbi_memcpy() does a byte-per-byte copy, using this loop yields a + * long-per-long copy + */ + for (i = 0; i < long_count; i++) + out[i] = in[i]; +} + +int sbi_sse_read_attrs(uint32_t event_id, uint32_t base_attr_id, + uint32_t attr_count, unsigned long output_phys_lo, + unsigned long output_phys_hi) +{ + int ret; + unsigned long *e_attrs; + struct sbi_sse_event *e; + unsigned long *attrs; + + ret = sbi_sse_attr_check(base_attr_id, attr_count, output_phys_lo, + output_phys_hi, SBI_DOMAIN_WRITE); + if (ret) + return ret; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + + sbi_hart_map_saddr(output_phys_lo, sizeof(unsigned long) * attr_count); + + /* + * Copy all attributes at once since struct sbi_sse_event_attrs matches + * the SBI_SSE_ATTR_* attribute layout. While the WRITE_ATTR attribute + * is not used in the S-mode SSE handling path, READ_ATTR is used to + * retrieve the value of registers when interrupted. Rather than doing + * multiple SBI calls, a single one is done, allowing to retrieve them + * all at once.
+ */ + e_attrs = (unsigned long *)&e->attrs; + attrs = (unsigned long *)output_phys_lo; + copy_attrs(attrs, &e_attrs[base_attr_id], attr_count); + + sbi_hart_unmap_saddr(); + + sse_global_event_unlock(e); + + return SBI_OK; +} + +int sbi_sse_write_attrs(uint32_t event_id, uint32_t base_attr_id, + uint32_t attr_count, unsigned long input_phys_lo, + unsigned long input_phys_hi) +{ + int ret; + struct sbi_sse_event *e; + unsigned long attr = 0, val; + uint32_t id, end_id = base_attr_id + attr_count; + unsigned long *attrs = (unsigned long *)input_phys_lo; + + ret = sbi_sse_attr_check(base_attr_id, attr_count, input_phys_lo, + input_phys_hi, SBI_DOMAIN_READ); + if (ret) + return ret; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + + sbi_hart_map_saddr(input_phys_lo, sizeof(unsigned long) * attr_count); + + for (id = base_attr_id; id < end_id; id++) { + val = attrs[attr++]; + ret = sse_event_set_attr_check(e, id, val); + if (ret) + goto out; + } + + attr = 0; + for (id = base_attr_id; id < end_id; id++) { + val = attrs[attr++]; + sse_event_set_attr(e, id, val); + } +out: + sbi_hart_unmap_saddr(); + + sse_global_event_unlock(e); + + return SBI_OK; +} + +int sbi_sse_register(uint32_t event_id, unsigned long handler_entry_pc, + unsigned long handler_entry_a0, + unsigned long handler_entry_a6, + unsigned long handler_entry_a7) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + ret = sse_event_register(e, handler_entry_pc, handler_entry_a0, + handler_entry_a6, handler_entry_a7); + sse_global_event_unlock(e); + + return ret; +} + +int sbi_sse_unregister(uint32_t event_id) +{ + int ret; + struct sbi_sse_event *e; + + e = sse_event_get(event_id); + if (!e) + return SBI_EINVAL; + + sse_global_event_lock(e); + ret = sse_event_unregister(e); + sse_global_event_unlock(e); + + return ret; +} + +static void sse_event_init(struct sbi_sse_event *e, 
uint32_t event_id) +{ + e->event_id = event_id; + SSE_EVENT_HARTID(e) = current_hartid(); + /* Declare all events as injectable */ + SSE_EVENT_CAN_INJECT(e) = 1; +} + +static int sse_global_init() +{ + struct sbi_sse_event *e; + unsigned int i, ev = 0; + + for (i = 0; i < EVENT_COUNT; i++) { + if (EVENT_IS_GLOBAL(supported_events[i])) + global_event_count++; + else + local_event_count++; + } + + global_events = sbi_zalloc(sizeof(*global_events) * global_event_count); + if (!global_events) + return SBI_ENOMEM; + + for (i = 0; i < EVENT_COUNT; i++) { + if (!EVENT_IS_GLOBAL(supported_events[i])) + continue; + + e = &global_events[ev]; + sse_event_init(e, supported_events[i]); + SPIN_LOCK_INIT(e->lock); + + ev++; + } + + return 0; +} + +static void sse_local_init(struct sse_hart_state *shs) +{ + unsigned int i, ev = 0; + + SBI_INIT_LIST_HEAD(&shs->event_list); + SPIN_LOCK_INIT(shs->list_lock); + + for (i = 0; i < EVENT_COUNT; i++) { + if (EVENT_IS_GLOBAL(supported_events[i])) + continue; + + sse_event_init(&shs->local_events[ev++], supported_events[i]); + } +} + +int sbi_sse_init(struct sbi_scratch *scratch, bool cold_boot) +{ + int ret; + void *sse_inject_mem; + struct sse_hart_state *shs; + struct sbi_fifo *sse_inject_q; + + if (cold_boot) { + ret = sse_global_init(); + if (ret) + return ret; + + shs_ptr_off = sbi_scratch_alloc_offset(sizeof(void *)); + if (!shs_ptr_off) + return SBI_ENOMEM; + + sse_inject_fifo_off = sbi_scratch_alloc_offset( + sizeof(*sse_inject_q)); + if (!sse_inject_fifo_off) { + sbi_scratch_free_offset(shs_ptr_off); + return SBI_ENOMEM; + } + + sse_inject_fifo_mem_off = sbi_scratch_alloc_offset( + EVENT_COUNT * sizeof(struct sse_ipi_inject_data)); + if (!sse_inject_fifo_mem_off) { + sbi_scratch_free_offset(sse_inject_fifo_off); + sbi_scratch_free_offset(shs_ptr_off); + return SBI_ENOMEM; + } + + ret = sbi_ipi_event_create(&sse_ipi_inject_ops); + if (ret < 0) { + sbi_scratch_free_offset(shs_ptr_off); + return ret; + } + sse_ipi_inject_event = ret; 
+ } + + shs = sse_get_hart_state_ptr(scratch); + if (!shs) { + /* Allocate per hart state and local events at once */ + shs = sbi_zalloc(sizeof(*shs) + + sizeof(struct sbi_sse_event) * local_event_count); + if (!shs) + return SBI_ENOMEM; + + shs->local_events = (struct sbi_sse_event *)(shs + 1); + + sse_set_hart_state_ptr(scratch, shs); + } + + sse_local_init(shs); + + sse_inject_q = sbi_scratch_offset_ptr(scratch, sse_inject_fifo_off); + sse_inject_mem = sbi_scratch_offset_ptr(scratch, + sse_inject_fifo_mem_off); + + sbi_fifo_init(sse_inject_q, sse_inject_mem, EVENT_COUNT, + sizeof(struct sse_ipi_inject_data)); + + return 0; +} + +void sbi_sse_exit(struct sbi_scratch *scratch) +{ + int i; + struct sbi_sse_event *e; + + for (i = 0; i < EVENT_COUNT; i++) { + e = sse_event_get(supported_events[i]); + + if (SSE_EVENT_HARTID(e) != current_hartid()) + continue; + + if (SSE_EVENT_STATE(e) > SSE_STATE_REGISTERED) + sbi_printf("Event %d in invalid state at exit\n", i); + } }

From patchwork Tue Jan 9 10:44:55 2024
X-Patchwork-Submitter: Clément Léger
X-Patchwork-Id: 1884366
From: Clément Léger <cleger@rivosinc.com>
To: opensbi@lists.infradead.org
Cc: Atish Patra, Deepak Gupta, Anup Patel, Himanshu Chauhan, Xiang W, Jessica Clarke
Subject: [RFC PATCH v2 3/4] lib: sbi: trigger SSE PMU event on PMU overflow IRQ
Date: Tue, 9 Jan 2024 11:44:55 +0100
Message-ID: <20240109104500.2080121-4-cleger@rivosinc.com>
In-Reply-To: <20240109104500.2080121-1-cleger@rivosinc.com>
References: <20240109104500.2080121-1-cleger@rivosinc.com>

In order to send the SSE local PMU event and receive the PMU interrupt in OpenSBI, we need to disable interrupt delegation for PMU interrupts and send the SSE event upon LCOFIP IRQ.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 include/sbi/sbi_pmu.h |  3 +++
 lib/sbi/sbi_pmu.c     | 49 +++++++++++++++++++++++++++++++++++++++++++
 lib/sbi/sbi_trap.c    |  6 ++++++
 3 files changed, 58 insertions(+)

diff --git a/include/sbi/sbi_pmu.h b/include/sbi/sbi_pmu.h
index 7d32a4d..c5b4e51 100644
--- a/include/sbi/sbi_pmu.h
+++ b/include/sbi/sbi_pmu.h
@@ -11,6 +11,7 @@
 #define __SBI_PMU_H__
 
 #include
+#include
 
 struct sbi_scratch;
 
@@ -150,4 +151,6 @@ int sbi_pmu_ctr_cfg_match(unsigned long cidx_base, unsigned long cidx_mask,
 
 int sbi_pmu_ctr_incr_fw(enum sbi_pmu_fw_event_code_id fw_id);
 
+void sbi_pmu_ovf_irq(struct sbi_trap_regs *regs);
+
 #endif
diff --git a/lib/sbi/sbi_pmu.c b/lib/sbi/sbi_pmu.c
index 6209ccc..fe2eda1 100644
--- a/lib/sbi/sbi_pmu.c
+++ b/lib/sbi/sbi_pmu.c
@@ -17,6 +17,7 @@
 #include
 #include
 #include
+#include
 
 /** Information about hardware counters */
 struct sbi_pmu_hw_event {
@@ -62,6 +63,8 @@ struct sbi_pmu_hart_state {
 	uint32_t active_events[SBI_PMU_HW_CTR_MAX + SBI_PMU_FW_CTR_MAX];
 	/* Bitmap of firmware counters started */
 	unsigned long fw_counters_started;
+	/* if true, SSE is enabled */
+	bool sse_enabled;
 	/*
 	 * Counter values for SBI firmware events and event codes
 	 * for platform firmware events. Both are mutually exclusive
@@ -297,6 +300,22 @@ int sbi_pmu_add_raw_event_counter_map(uint64_t select, uint64_t select_mask, u32
 				  SBI_PMU_EVENT_RAW_IDX, cmap, select, select_mask);
 }
 
+void sbi_pmu_irq_clear(void)
+{
+	csr_set(CSR_MIE, MIP_LCOFIP);
+	csr_clear(CSR_MIP, MIP_LCOFIP);
+}
+
+void sbi_pmu_ovf_irq(struct sbi_trap_regs *regs)
+{
+	/*
+	 * We need to disable LCOFIP before returning to S-mode or we will loop
+	 * on LCOFIP being triggered
+	 */
+	csr_clear(CSR_MIE, MIP_LCOFIP);
+	sbi_sse_inject_event(SBI_SSE_EVENT_LOCAL_PMU, regs);
+}
+
 static int pmu_ctr_enable_irq_hw(int ctr_idx)
 {
 	unsigned long mhpmevent_csr;
@@ -564,6 +583,9 @@ int sbi_pmu_ctr_stop(unsigned long cbase, unsigned long cmask,
 		}
 	}
 
+	if (phs->sse_enabled)
+		sbi_pmu_irq_clear();
+
 	return ret;
 }
 
@@ -944,6 +966,7 @@ static void pmu_reset_event_map(struct sbi_pmu_hart_state *phs)
 	for (j = 0; j < SBI_PMU_FW_CTR_MAX; j++)
 		phs->fw_counters_data[j] = 0;
 	phs->fw_counters_started = 0;
+	phs->sse_enabled = 0;
 }
 
 const struct sbi_pmu_device *sbi_pmu_get_device(void)
@@ -970,6 +993,30 @@ void sbi_pmu_exit(struct sbi_scratch *scratch)
 	pmu_reset_event_map(pmu_get_hart_state_ptr(scratch));
 }
 
+static void pmu_sse_enable(uint32_t event_id)
+{
+	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+
+	phs->sse_enabled = true;
+	csr_clear(CSR_MIDELEG, sbi_pmu_irq_bit());
+	csr_clear(CSR_MIP, MIP_LCOFIP);
+	csr_set(CSR_MIE, MIP_LCOFIP);
+}
+
+static void pmu_sse_disable(uint32_t event_id)
+{
+	struct sbi_pmu_hart_state *phs = pmu_thishart_state_ptr();
+
+	phs->sse_enabled = false;
+	csr_clear(CSR_MIE, MIP_LCOFIP);
+	csr_set(CSR_MIDELEG, sbi_pmu_irq_bit());
+}
+
+static const struct sbi_sse_cb_ops pmu_sse_cb_ops = {
+	.enable_cb = pmu_sse_enable,
+	.disable_cb = pmu_sse_disable,
+};
+
 int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
 {
 	int hpm_count = sbi_fls(sbi_hart_mhpm_mask(scratch));
@@ -1009,6 +1056,8 @@ int sbi_pmu_init(struct sbi_scratch *scratch, bool cold_boot)
 		total_ctrs = num_hw_ctrs + SBI_PMU_FW_CTR_MAX;
 	}
 
+	sbi_sse_set_cb_ops(SBI_SSE_EVENT_LOCAL_PMU, &pmu_sse_cb_ops);
+
 	phs = pmu_get_hart_state_ptr(scratch);
 	if (!phs) {
 		phs = sbi_zalloc(sizeof(*phs));
diff --git a/lib/sbi/sbi_trap.c b/lib/sbi/sbi_trap.c
index d574ab1..10117b7 100644
--- a/lib/sbi/sbi_trap.c
+++ b/lib/sbi/sbi_trap.c
@@ -208,6 +208,9 @@ static int sbi_trap_nonaia_irq(struct sbi_trap_regs *regs, ulong mcause)
 	case IRQ_M_SOFT:
 		sbi_ipi_process(regs);
 		break;
+	case IRQ_PMU_OVF:
+		sbi_pmu_ovf_irq(regs);
+		break;
 	case IRQ_M_EXT:
 		return sbi_irqchip_process(regs);
 	default:
@@ -231,6 +234,9 @@ static int sbi_trap_aia_irq(struct sbi_trap_regs *regs, ulong mcause)
 	case IRQ_M_SOFT:
 		sbi_ipi_process(regs);
 		break;
+	case IRQ_PMU_OVF:
+		sbi_pmu_ovf_irq(regs);
+		break;
 	case IRQ_M_EXT:
 		rc = sbi_irqchip_process(regs);
 		if (rc)

From patchwork Tue Jan 9 10:44:56 2024
From: Clément Léger <cleger@rivosinc.com>
To: opensbi@lists.infradead.org
Cc: Atish Patra, Deepak Gupta, Anup Patel, Himanshu Chauhan, Xiang W, Jessica Clarke
Subject: [RFC PATCH v2 4/4] lib: sbi: add SBI_EXT_PMU_IRQ_CLEAR
Date: Tue, 9 Jan 2024 11:44:56 +0100
Message-ID: <20240109104500.2080121-5-cleger@rivosinc.com>
In-Reply-To: <20240109104500.2080121-1-cleger@rivosinc.com>
References: <20240109104500.2080121-1-cleger@rivosinc.com>
Now that we have SSE support for this interrupt, the IRQ can only be cleared by the SBI. In some cases, the S-mode OS might need to clear the pending IRQ. Add an SBI_EXT_PMU_IRQ_CLEAR ecall to clear such an IRQ.

NOTE: this is temporary until the counter delegation support from Atish lands in Linux/OpenSBI.
Signed-off-by: Clément Léger <cleger@rivosinc.com>
---
 include/sbi/sbi_ecall_interface.h | 1 +
 include/sbi/sbi_pmu.h             | 2 ++
 lib/sbi/sbi_ecall_pmu.c           | 3 +++
 3 files changed, 6 insertions(+)

diff --git a/include/sbi/sbi_ecall_interface.h b/include/sbi/sbi_ecall_interface.h
index a5f3edf..3815b42 100644
--- a/include/sbi/sbi_ecall_interface.h
+++ b/include/sbi/sbi_ecall_interface.h
@@ -105,6 +105,7 @@
 #define SBI_EXT_PMU_COUNTER_FW_READ	0x5
 #define SBI_EXT_PMU_COUNTER_FW_READ_HI	0x6
 #define SBI_EXT_PMU_SNAPSHOT_SET_SHMEM	0x7
+#define SBI_EXT_PMU_IRQ_CLEAR		0x8
 
 /** General pmu event codes specified in SBI PMU extension */
 enum sbi_pmu_hw_generic_events_t {
diff --git a/include/sbi/sbi_pmu.h b/include/sbi/sbi_pmu.h
index c5b4e51..6bed6c9 100644
--- a/include/sbi/sbi_pmu.h
+++ b/include/sbi/sbi_pmu.h
@@ -153,4 +153,6 @@ int sbi_pmu_ctr_incr_fw(enum sbi_pmu_fw_event_code_id fw_id);
 
 void sbi_pmu_ovf_irq(struct sbi_trap_regs *regs);
 
+void sbi_pmu_irq_clear(void);
+
 #endif
diff --git a/lib/sbi/sbi_ecall_pmu.c b/lib/sbi/sbi_ecall_pmu.c
index 40a63a6..5b55eb5 100644
--- a/lib/sbi/sbi_ecall_pmu.c
+++ b/lib/sbi/sbi_ecall_pmu.c
@@ -73,6 +73,9 @@ static int sbi_ecall_pmu_handler(unsigned long extid, unsigned long funcid,
 	case SBI_EXT_PMU_COUNTER_STOP:
 		ret = sbi_pmu_ctr_stop(regs->a0, regs->a1, regs->a2);
 		break;
+	case SBI_EXT_PMU_IRQ_CLEAR:
+		sbi_pmu_irq_clear();
+		break;
 	case SBI_EXT_PMU_SNAPSHOT_SET_SHMEM:
 		/* fallthrough as OpenSBI doesn't support snapshot yet */
 	default: