From patchwork Thu Jul 2 07:48:32 2020
X-Patchwork-Submitter: Nicholas Piggin <npiggin@gmail.com>
X-Patchwork-Id: 1321008
From: Nicholas Piggin <npiggin@gmail.com>
Cc: Nicholas Piggin, Will Deacon, Peter Zijlstra, Boqun Feng,
    Ingo Molnar, Waiman Long, Anton Blanchard,
    linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org,
    virtualization@lists.linux-foundation.org, kvm-ppc@vger.kernel.org,
    linux-arch@vger.kernel.org
Subject: [PATCH 1/8] powerpc/powernv: must include hvcall.h to get PAPR defines
Date: Thu, 2 Jul 2020 17:48:32 +1000
Message-Id: <20200702074839.1057733-2-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>
References: <20200702074839.1057733-1-npiggin@gmail.com>

An include this file relies on indirectly goes away in a later patch
of this series, which would break compilation without this explicit
include.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/platforms/powernv/pci-ioda-tce.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/powerpc/platforms/powernv/pci-ioda-tce.c b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
index f923359d8afc..8eba6ece7808 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda-tce.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda-tce.c
@@ -15,6 +15,7 @@
 
 #include <asm/iommu.h>
 #include <asm/tce.h>
+#include <asm/hvcall.h>	/* share error returns with PAPR */
 #include "pci.h"
 
 unsigned long pnv_ioda_parse_tce_sizes(struct pnv_phb *phb)
From patchwork Thu Jul 2 07:48:33 2020
X-Patchwork-Id: 1321009
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH 2/8] powerpc/pseries: use smp_rmb() in H_CONFER spin yield
Date: Thu, 2 Jul 2020 17:48:33 +1000
Message-Id: <20200702074839.1057733-3-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>

There is no need for the full rmb() barrier here; smp_rmb() is
sufficient, which allows the faster lwsync instruction to be used.
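[ For reference, an illustrative sketch of why this is cheaper --
  approximate powerpc barrier definitions of this era, not part of
  the patch:

	#define rmb()		__asm__ __volatile__ ("sync"   : : : "memory")
	#define smp_rmb()	__asm__ __volatile__ ("lwsync" : : : "memory")

  The yield path only needs to order two cacheable loads (yield_count,
  then the lock word), which the lighter lwsync handles; the full sync
  is only required when also ordering non-cacheable accesses. ]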
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/lib/locks.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 6440d5943c00..47a530de733e 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -30,7 +30,7 @@ void splpar_spin_yield(arch_spinlock_t *lock)
 	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
-	rmb();
+	smp_rmb();
 	if (lock->slock != lock_value)
 		return;		/* something has changed */
 	plpar_hcall_norets(H_CONFER,
@@ -56,7 +56,7 @@ void splpar_rw_yield(arch_rwlock_t *rw)
 	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
-	rmb();
+	smp_rmb();
 	if (rw->lock != lock_value)
 		return;		/* something has changed */
 	plpar_hcall_norets(H_CONFER,
From patchwork Thu Jul 2 07:48:34 2020
X-Patchwork-Id: 1321010
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH 3/8] powerpc/pseries: move some PAPR paravirt functions to their own file
Date: Thu, 2 Jul 2020 17:48:34 +1000
Message-Id: <20200702074839.1057733-4-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/paravirt.h | 61 +++++++++++++++++++++++++++++
 arch/powerpc/include/asm/spinlock.h | 24 +-----------
 arch/powerpc/lib/locks.c            | 12 +++---
 3 files changed, 68 insertions(+), 29 deletions(-)
 create mode 100644 arch/powerpc/include/asm/paravirt.h

diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
new file mode 100644
index 000000000000..7a8546660a63
--- /dev/null
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -0,0 +1,61 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_PARAVIRT_H
+#define __ASM_PARAVIRT_H
+#ifdef __KERNEL__
+
+#include <linux/jump_label.h>
+#include <asm/smp.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#include <asm/hvcall.h>
+#endif
+
+#ifdef CONFIG_PPC_SPLPAR
+DECLARE_STATIC_KEY_FALSE(shared_processor);
+
+static inline bool is_shared_processor(void)
+{
+	return static_branch_unlikely(&shared_processor);
+}
+
+/* If bit 0 is set, the cpu has been preempted */
+static inline u32 yield_count_of(int cpu)
+{
+	__be32 yield_count = READ_ONCE(lppaca_of(cpu).yield_count);
+	return be32_to_cpu(yield_count);
+}
+
+static inline void yield_to_preempted(int cpu, u32 yield_count)
+{
+	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
+}
+#else
+static inline bool is_shared_processor(void)
+{
+	return false;
+}
+
+static inline u32 yield_count_of(int cpu)
+{
+	return 0;
+}
+
+extern void ___bad_yield_to_preempted(void);
+static inline void yield_to_preempted(int cpu, u32 yield_count)
+{
+	___bad_yield_to_preempted(); /* This would be a bug */
+}
+#endif
+
+#define vcpu_is_preempted vcpu_is_preempted
+static inline bool vcpu_is_preempted(int cpu)
+{
+	if (!is_shared_processor())
+		return false;
+	if (yield_count_of(cpu) & 1)
+		return true;
+	return false;
+}
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_PARAVIRT_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 2d620896cdae..79be9bb10bbb 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -15,11 +15,10 @@
  *
  * (the type definitions are in asm/spinlock_types.h)
  */
-#include <linux/jump_label.h>
 #include <linux/irqflags.h>
+#include <asm/paravirt.h>
 #ifdef CONFIG_PPC64
 #include <asm/paca.h>
-#include <asm/hvcall.h>
 #endif
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
@@ -35,18 +34,6 @@
 #define LOCK_TOKEN	1
 #endif

-#ifdef CONFIG_PPC_PSERIES
-DECLARE_STATIC_KEY_FALSE(shared_processor);
-
-#define vcpu_is_preempted vcpu_is_preempted
-static inline bool vcpu_is_preempted(int cpu)
-{
-	if (!static_branch_unlikely(&shared_processor))
-		return false;
-	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
-}
-#endif
-
 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
 	return lock.slock == 0;
@@ -110,15 +97,6 @@ static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
 static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
 #endif

-static inline bool is_shared_processor(void)
-{
-#ifdef CONFIG_PPC_SPLPAR
-	return static_branch_unlikely(&shared_processor);
-#else
-	return false;
-#endif
-}
-
 static inline void spin_yield(arch_spinlock_t *lock)
 {
 	if (is_shared_processor())
diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
index 47a530de733e..e35fd1a16992 100644
--- a/arch/powerpc/lib/locks.c
+++ b/arch/powerpc/lib/locks.c
@@ -27,14 +27,14 @@ void splpar_spin_yield(arch_spinlock_t *lock)
 		return;
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+	yield_count = yield_count_of(holder_cpu);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	smp_rmb();
 	if (lock->slock != lock_value)
 		return;		/* something has changed */
-	plpar_hcall_norets(H_CONFER,
-		get_hard_smp_processor_id(holder_cpu), yield_count);
+	yield_to_preempted(holder_cpu, yield_count);
 }
 EXPORT_SYMBOL_GPL(splpar_spin_yield);

@@ -53,13 +53,13 @@ void splpar_rw_yield(arch_rwlock_t *rw)
 		return;		/* no write lock at present */
 	holder_cpu = lock_value & 0xffff;
 	BUG_ON(holder_cpu >= NR_CPUS);
-	yield_count = be32_to_cpu(lppaca_of(holder_cpu).yield_count);
+
+	yield_count = yield_count_of(holder_cpu);
 	if ((yield_count & 1) == 0)
 		return;		/* virtual cpu is currently running */
 	smp_rmb();
 	if (rw->lock != lock_value)
 		return;		/* something has changed */
-	plpar_hcall_norets(H_CONFER,
-		get_hard_smp_processor_id(holder_cpu), yield_count);
+	yield_to_preempted(holder_cpu, yield_count);
 }
 #endif
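[ Illustrative note: a minimal sketch of how the new asm/paravirt.h
  helpers compose, mirroring what splpar_spin_yield() above does.  The
  function name here is hypothetical; yield_count_of() and
  yield_to_preempted() are from the patch:

	static void example_yield_if_preempted(int holder_cpu)
	{
		u32 yield_count = yield_count_of(holder_cpu);

		/* bit 0 set means the holder vCPU has been preempted */
		if (yield_count & 1)
			yield_to_preempted(holder_cpu, yield_count); /* H_CONFER */
		/* bit 0 clear means it is running; keep spinning instead */
	}
]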
From patchwork Thu Jul 2 07:48:35 2020
X-Patchwork-Id: 1321011
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH 4/8] powerpc: move spinlock implementation to simple_spinlock
Date: Thu, 2 Jul 2020 17:48:35 +1000
Message-Id: <20200702074839.1057733-5-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>

To prepare for queued spinlocks. This is a simple rename, except for
updating the preprocessor guard name and a file reference.
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/simple_spinlock.h    | 292 ++++++++++++++++++
 .../include/asm/simple_spinlock_types.h       |  21 ++
 arch/powerpc/include/asm/spinlock.h           | 285 +----------------
 arch/powerpc/include/asm/spinlock_types.h     |  12 +-
 4 files changed, 315 insertions(+), 295 deletions(-)
 create mode 100644 arch/powerpc/include/asm/simple_spinlock.h
 create mode 100644 arch/powerpc/include/asm/simple_spinlock_types.h

diff --git a/arch/powerpc/include/asm/simple_spinlock.h b/arch/powerpc/include/asm/simple_spinlock.h
new file mode 100644
index 000000000000..e048c041c4a9
--- /dev/null
+++ b/arch/powerpc/include/asm/simple_spinlock.h
@@ -0,0 +1,292 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_SIMPLE_SPINLOCK_H
+#define __ASM_SIMPLE_SPINLOCK_H
+#ifdef __KERNEL__
+
+/*
+ * Simple spin lock operations.
+ *
+ * Copyright (C) 2001-2004 Paul Mackerras, IBM
+ * Copyright (C) 2001 Anton Blanchard, IBM
+ * Copyright (C) 2002 Dave Engebretsen, IBM
+ *	Rework to support virtual processors
+ *
+ * Type of int is used as a full 64b word is not necessary.
+ *
+ * (the type definitions are in asm/simple_spinlock_types.h)
+ */
+#include <linux/irqflags.h>
+#include <asm/paravirt.h>
+#ifdef CONFIG_PPC64
+#include <asm/paca.h>
+#endif
+#include <asm/synch.h>
+#include <asm/ppc-opcode.h>
+
+#ifdef CONFIG_PPC64
+/* use 0x800000yy when locked, where yy == CPU number */
+#ifdef __BIG_ENDIAN__
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
+#else
+#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
+#endif
+#else
+#define LOCK_TOKEN	1
+#endif
+
+static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
+{
+	return lock.slock == 0;
+}
+
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
+{
+	smp_mb();
+	return !arch_spin_value_unlocked(*lock);
+}
+
+/*
+ * This returns the old value in the lock, so we succeeded
+ * in getting the lock if the return value is 0.
+ */
+static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
+{
+	unsigned long tmp, token;
+
+	token = LOCK_TOKEN;
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%2,1) "\n\
+	cmpwi		0,%0,0\n\
+	bne-		2f\n\
+	stwcx.		%1,0,%2\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"
+	: "=&r" (tmp)
+	: "r" (token), "r" (&lock->slock)
+	: "cr0", "memory");
+
+	return tmp;
+}
+
+static inline int arch_spin_trylock(arch_spinlock_t *lock)
+{
+	return __arch_spin_trylock(lock) == 0;
+}
+
+/*
+ * On a system with shared processors (that is, where a physical
+ * processor is multiplexed between several virtual processors),
+ * there is no point spinning on a lock if the holder of the lock
+ * isn't currently scheduled on a physical processor. Instead
+ * we detect this situation and ask the hypervisor to give the
+ * rest of our timeslice to the lock holder.
+ *
+ * So that we can tell which virtual processor is holding a lock,
+ * we put 0x80000000 | smp_processor_id() in the lock when it is
+ * held. Conveniently, we have a word in the paca that holds this
+ * value.
+ */
+
+#if defined(CONFIG_PPC_SPLPAR)
+/* We only yield to the hypervisor if we are in shared processor mode */
+void splpar_spin_yield(arch_spinlock_t *lock);
+void splpar_rw_yield(arch_rwlock_t *lock);
+#else /* SPLPAR */
+static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
+static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
+#endif
+
+static inline void spin_yield(arch_spinlock_t *lock)
+{
+	if (is_shared_processor())
+		splpar_spin_yield(lock);
+	else
+		barrier();
+}
+
+static inline void rw_yield(arch_rwlock_t *lock)
+{
+	if (is_shared_processor())
+		splpar_rw_yield(lock);
+	else
+		barrier();
+}
+
+static inline void arch_spin_lock(arch_spinlock_t *lock)
+{
+	while (1) {
+		if (likely(__arch_spin_trylock(lock) == 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_spin_yield(lock);
+		} while (unlikely(lock->slock != 0));
+		HMT_medium();
+	}
+}
+
+static inline
+void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
+{
+	unsigned long flags_dis;
+
+	while (1) {
+		if (likely(__arch_spin_trylock(lock) == 0))
+			break;
+		local_save_flags(flags_dis);
+		local_irq_restore(flags);
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_spin_yield(lock);
+		} while (unlikely(lock->slock != 0));
+		HMT_medium();
+		local_irq_restore(flags_dis);
+	}
+}
+#define arch_spin_lock_flags arch_spin_lock_flags
+
+static inline void arch_spin_unlock(arch_spinlock_t *lock)
+{
+	__asm__ __volatile__("# arch_spin_unlock\n\t"
+				PPC_RELEASE_BARRIER: : :"memory");
+	lock->slock = 0;
+}
+
+/*
+ * Read-write spinlocks, allowing multiple readers
+ * but only one writer.
+ *
+ * NOTE! it is quite common to have readers in interrupts
+ * but no interrupt writers. For those circumstances we
+ * can "mix" irq-safe locks - any writer needs to get a
+ * irq-safe write-lock, but readers can get non-irqsafe
+ * read-locks.
+ */
+
+#ifdef CONFIG_PPC64
+#define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
+#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
+#else
+#define __DO_SIGN_EXTEND
+#define WRLOCK_TOKEN		(-1)
+#endif
+
+/*
+ * This returns the old value in the lock + 1,
+ * so we got a read lock if the return value is > 0.
+ */
+static inline long __arch_read_trylock(arch_rwlock_t *rw)
+{
+	long tmp;
+
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%1,1) "\n"
+	__DO_SIGN_EXTEND
+"	addic.		%0,%0,1\n\
+	ble-		2f\n"
+"	stwcx.		%0,0,%1\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"	: "=&r" (tmp)
+	: "r" (&rw->lock)
+	: "cr0", "xer", "memory");
+
+	return tmp;
+}
+
+/*
+ * This returns the old value in the lock,
+ * so we got the write lock if the return value is 0.
+ */
+static inline long __arch_write_trylock(arch_rwlock_t *rw)
+{
+	long tmp, token;
+
+	token = WRLOCK_TOKEN;
+	__asm__ __volatile__(
+"1:	" PPC_LWARX(%0,0,%2,1) "\n\
+	cmpwi		0,%0,0\n\
+	bne-		2f\n"
+"	stwcx.		%1,0,%2\n\
+	bne-		1b\n"
+	PPC_ACQUIRE_BARRIER
+"2:"	: "=&r" (tmp)
+	: "r" (token), "r" (&rw->lock)
+	: "cr0", "memory");
+
+	return tmp;
+}
+
+static inline void arch_read_lock(arch_rwlock_t *rw)
+{
+	while (1) {
+		if (likely(__arch_read_trylock(rw) > 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_rw_yield(rw);
+		} while (unlikely(rw->lock < 0));
+		HMT_medium();
+	}
+}
+
+static inline void arch_write_lock(arch_rwlock_t *rw)
+{
+	while (1) {
+		if (likely(__arch_write_trylock(rw) == 0))
+			break;
+		do {
+			HMT_low();
+			if (is_shared_processor())
+				splpar_rw_yield(rw);
+		} while (unlikely(rw->lock != 0));
+		HMT_medium();
+	}
+}
+
+static inline int arch_read_trylock(arch_rwlock_t *rw)
+{
+	return __arch_read_trylock(rw) > 0;
+}
+
+static inline int arch_write_trylock(arch_rwlock_t *rw)
+{
+	return __arch_write_trylock(rw) == 0;
+}
+
+static inline void arch_read_unlock(arch_rwlock_t *rw)
+{
+	long tmp;
+
+	__asm__ __volatile__(
+	"# read_unlock\n\t"
+	PPC_RELEASE_BARRIER
+"1:	lwarx		%0,0,%1\n\
+	addic		%0,%0,-1\n"
+"	stwcx.		%0,0,%1\n\
+	bne-		1b"
+	: "=&r"(tmp)
+	: "r"(&rw->lock)
+	: "cr0", "xer", "memory");
+}
+
+static inline void arch_write_unlock(arch_rwlock_t *rw)
+{
+	__asm__ __volatile__("# write_unlock\n\t"
+				PPC_RELEASE_BARRIER: : :"memory");
+	rw->lock = 0;
+}
+
+#define arch_spin_relax(lock)	spin_yield(lock)
+#define arch_read_relax(lock)	rw_yield(lock)
+#define arch_write_relax(lock)	rw_yield(lock)
+
+/* See include/linux/spinlock.h */
+#define smp_mb__after_spinlock()	smp_mb()
+
+#endif /* __KERNEL__ */
+#endif /* __ASM_SIMPLE_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/simple_spinlock_types.h b/arch/powerpc/include/asm/simple_spinlock_types.h
new file mode 100644
index 000000000000..7c2b48ce62dc
--- /dev/null
+++ b/arch/powerpc/include/asm/simple_spinlock_types.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+#define _ASM_POWERPC_SIMPLE_SPINLOCK_TYPES_H
+
+#ifndef __LINUX_SPINLOCK_TYPES_H
+# error "please don't include this file directly"
+#endif
+
+typedef struct {
+	volatile unsigned int slock;
+} arch_spinlock_t;
+
+#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
+
+typedef struct {
+	volatile signed int lock;
+} arch_rwlock_t;
+
+#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+
+#endif
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 79be9bb10bbb..21357fe05fe0 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,290 +3,7 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__

-/*
- * Simple spin lock operations.
- *
- * Copyright (C) 2001-2004 Paul Mackerras, IBM
- * Copyright (C) 2001 Anton Blanchard, IBM
- * Copyright (C) 2002 Dave Engebretsen, IBM
- *	Rework to support virtual processors
- *
- * Type of int is used as a full 64b word is not necessary.
- *
- * (the type definitions are in asm/spinlock_types.h)
- */
-#include <linux/irqflags.h>
-#include <asm/paravirt.h>
-#ifdef CONFIG_PPC64
-#include <asm/paca.h>
-#endif
-#include <asm/synch.h>
-#include <asm/ppc-opcode.h>
-
-#ifdef CONFIG_PPC64
-/* use 0x800000yy when locked, where yy == CPU number */
-#ifdef __BIG_ENDIAN__
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
-#else
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
-#endif
-#else
-#define LOCK_TOKEN	1
-#endif
-
-static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
-{
-	return lock.slock == 0;
-}
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-	smp_mb();
-	return !arch_spin_value_unlocked(*lock);
-}
-
-/*
- * This returns the old value in the lock, so we succeeded
- * in getting the lock if the return value is 0.
- */
-static inline unsigned long __arch_spin_trylock(arch_spinlock_t *lock)
-{
-	unsigned long tmp, token;
-
-	token = LOCK_TOKEN;
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%2,1) "\n\
-	cmpwi		0,%0,0\n\
-	bne-		2f\n\
-	stwcx.		%1,0,%2\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"
-	: "=&r" (tmp)
-	: "r" (token), "r" (&lock->slock)
-	: "cr0", "memory");
-
-	return tmp;
-}
-
-static inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-	return __arch_spin_trylock(lock) == 0;
-}
-
-/*
- * On a system with shared processors (that is, where a physical
- * processor is multiplexed between several virtual processors),
- * there is no point spinning on a lock if the holder of the lock
- * isn't currently scheduled on a physical processor. Instead
- * we detect this situation and ask the hypervisor to give the
- * rest of our timeslice to the lock holder.
- *
- * So that we can tell which virtual processor is holding a lock,
- * we put 0x80000000 | smp_processor_id() in the lock when it is
- * held. Conveniently, we have a word in the paca that holds this
- * value.
- */
-
-#if defined(CONFIG_PPC_SPLPAR)
-/* We only yield to the hypervisor if we are in shared processor mode */
-void splpar_spin_yield(arch_spinlock_t *lock);
-void splpar_rw_yield(arch_rwlock_t *lock);
-#else /* SPLPAR */
-static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
-static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
-#endif
-
-static inline void spin_yield(arch_spinlock_t *lock)
-{
-	if (is_shared_processor())
-		splpar_spin_yield(lock);
-	else
-		barrier();
-}
-
-static inline void rw_yield(arch_rwlock_t *lock)
-{
-	if (is_shared_processor())
-		splpar_rw_yield(lock);
-	else
-		barrier();
-}
-
-static inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-	while (1) {
-		if (likely(__arch_spin_trylock(lock) == 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
-		} while (unlikely(lock->slock != 0));
-		HMT_medium();
-	}
-}
-
-static inline
-void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
-{
-	unsigned long flags_dis;
-
-	while (1) {
-		if (likely(__arch_spin_trylock(lock) == 0))
-			break;
-		local_save_flags(flags_dis);
-		local_irq_restore(flags);
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
-		} while (unlikely(lock->slock != 0));
-		HMT_medium();
-		local_irq_restore(flags_dis);
-	}
-}
-#define arch_spin_lock_flags arch_spin_lock_flags
-
-static inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-	__asm__ __volatile__("# arch_spin_unlock\n\t"
-				PPC_RELEASE_BARRIER: : :"memory");
-	lock->slock = 0;
-}
-
-/*
- * Read-write spinlocks, allowing multiple readers
- * but only one writer.
- *
- * NOTE! it is quite common to have readers in interrupts
- * but no interrupt writers. For those circumstances we
- * can "mix" irq-safe locks - any writer needs to get a
- * irq-safe write-lock, but readers can get non-irqsafe
- * read-locks.
- */
-
-#ifdef CONFIG_PPC64
-#define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
-#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
-#else
-#define __DO_SIGN_EXTEND
-#define WRLOCK_TOKEN		(-1)
-#endif
-
-/*
- * This returns the old value in the lock + 1,
- * so we got a read lock if the return value is > 0.
- */
-static inline long __arch_read_trylock(arch_rwlock_t *rw)
-{
-	long tmp;
-
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%1,1) "\n"
-	__DO_SIGN_EXTEND
-"	addic.		%0,%0,1\n\
-	ble-		2f\n"
-"	stwcx.		%0,0,%1\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"	: "=&r" (tmp)
-	: "r" (&rw->lock)
-	: "cr0", "xer", "memory");
-
-	return tmp;
-}
-
-/*
- * This returns the old value in the lock,
- * so we got the write lock if the return value is 0.
- */
-static inline long __arch_write_trylock(arch_rwlock_t *rw)
-{
-	long tmp, token;
-
-	token = WRLOCK_TOKEN;
-	__asm__ __volatile__(
-"1:	" PPC_LWARX(%0,0,%2,1) "\n\
-	cmpwi		0,%0,0\n\
-	bne-		2f\n"
-"	stwcx.		%1,0,%2\n\
-	bne-		1b\n"
-	PPC_ACQUIRE_BARRIER
-"2:"	: "=&r" (tmp)
-	: "r" (token), "r" (&rw->lock)
-	: "cr0", "memory");
-
-	return tmp;
-}
-
-static inline void arch_read_lock(arch_rwlock_t *rw)
-{
-	while (1) {
-		if (likely(__arch_read_trylock(rw) > 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
-		} while (unlikely(rw->lock < 0));
-		HMT_medium();
-	}
-}
-
-static inline void arch_write_lock(arch_rwlock_t *rw)
-{
-	while (1) {
-		if (likely(__arch_write_trylock(rw) == 0))
-			break;
-		do {
-			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
-		} while (unlikely(rw->lock != 0));
-		HMT_medium();
-	}
-}
-
-static inline int arch_read_trylock(arch_rwlock_t *rw)
-{
-	return __arch_read_trylock(rw) > 0;
-}
-
-static inline int arch_write_trylock(arch_rwlock_t *rw)
-{
-	return __arch_write_trylock(rw) == 0;
-}
-
-static inline void arch_read_unlock(arch_rwlock_t *rw)
-{
-	long tmp;
-
-	__asm__ __volatile__(
-	"# read_unlock\n\t"
-	PPC_RELEASE_BARRIER
-"1:	lwarx		%0,0,%1\n\
-	addic		%0,%0,-1\n"
-"	stwcx.		%0,0,%1\n\
-	bne-		1b"
-	: "=&r"(tmp)
-	: "r"(&rw->lock)
-	: "cr0", "xer", "memory");
-}
-
-static inline void arch_write_unlock(arch_rwlock_t *rw)
-{
-	__asm__ __volatile__("# write_unlock\n\t"
-				PPC_RELEASE_BARRIER: : :"memory");
-	rw->lock = 0;
-}
-
-#define arch_spin_relax(lock)	spin_yield(lock)
-#define arch_read_relax(lock)	rw_yield(lock)
-#define arch_write_relax(lock)	rw_yield(lock)
-
-/* See include/linux/spinlock.h */
-#define smp_mb__after_spinlock()	smp_mb()
+#include <asm/simple_spinlock.h>

 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
index 87adaf13b7e8..3906f52dae65 100644
--- a/arch/powerpc/include/asm/spinlock_types.h
+++ b/arch/powerpc/include/asm/spinlock_types.h
@@ -6,16 +6,6 @@
 # error "please don't include this file directly"
 #endif

-typedef struct {
-	volatile unsigned int slock;
-} arch_spinlock_t;
-
-#define __ARCH_SPIN_LOCK_UNLOCKED	{ 0 }
-
-typedef struct {
-	volatile signed int lock;
-} arch_rwlock_t;
-
-#define __ARCH_RW_LOCK_UNLOCKED		{ 0 }
+#include <asm/simple_spinlock_types.h>

 #endif
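[ Illustrative note: the lock word encoding described in the comment
  above ("0x800000yy when locked, where yy == CPU number") is what lets
  splpar_spin_yield() find the holder.  A hypothetical decoder:

	static int example_lock_holder(arch_spinlock_t *lock)
	{
		unsigned int val = lock->slock;	/* e.g. 0x80000004 when held */

		if (val == 0)
			return -1;		/* unlocked */
		return val & 0xffff;		/* low bits hold the CPU number */
	}
]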
From patchwork Thu Jul 2 07:48:36 2020
X-Patchwork-Id: 1321012
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH 5/8] powerpc/64s: implement queued spinlocks and rwlocks
Date: Thu, 2 Jul 2020 17:48:36 +1000
Message-Id: <20200702074839.1057733-6-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>

These have shown significantly improved performance and fairness when
spinlock contention is moderate to high on very large systems.

[ Numbers hopefully forthcoming after more testing, but initial
  results look good ]

Thanks to the fast path, single threaded performance is not noticeably
hurt.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/Kconfig                      | 13 +++++++++++++
 arch/powerpc/include/asm/Kbuild           |  2 ++
 arch/powerpc/include/asm/qspinlock.h      | 20 ++++++++++++++++++++
 arch/powerpc/include/asm/spinlock.h       |  5 +++++
 arch/powerpc/include/asm/spinlock_types.h |  5 +++++
 arch/powerpc/lib/Makefile                 |  3 +++
 include/asm-generic/qspinlock.h           |  2 ++
 7 files changed, 50 insertions(+)
 create mode 100644 arch/powerpc/include/asm/qspinlock.h

diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 9fa23eb320ff..b17575109876 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -145,6 +145,8 @@ config PPC
 	select ARCH_SUPPORTS_ATOMIC_RMW
 	select ARCH_USE_BUILTIN_BSWAP
 	select ARCH_USE_CMPXCHG_LOCKREF		if PPC64
+	select ARCH_USE_QUEUED_RWLOCKS		if PPC_QUEUED_SPINLOCKS
+	select ARCH_USE_QUEUED_SPINLOCKS	if PPC_QUEUED_SPINLOCKS
 	select ARCH_WANT_IPC_PARSE_VERSION
 	select ARCH_WEAK_RELEASE_ACQUIRE
 	select BINFMT_ELF
@@ -490,6 +492,17 @@ config HOTPLUG_CPU

 	  Say N if you are unsure.

+config PPC_QUEUED_SPINLOCKS
+	bool "Queued spinlocks"
+	depends on SMP
+	default "y" if PPC_BOOK3S_64
+	help
+	  Say Y here to use queued spinlocks, which are more complex
+	  but give better scalability and fairness on large SMP and
+	  NUMA systems.
+
+	  If unsure, say "Y" if you have lots of cores, otherwise "N".
+
 config ARCH_CPU_PROBE_RELEASE
 	def_bool y
 	depends on HOTPLUG_CPU
diff --git a/arch/powerpc/include/asm/Kbuild b/arch/powerpc/include/asm/Kbuild
index dadbcf3a0b1e..1dd8b6adff5e 100644
--- a/arch/powerpc/include/asm/Kbuild
+++ b/arch/powerpc/include/asm/Kbuild
@@ -6,5 +6,7 @@ generated-y += syscall_table_spu.h
 generic-y += export.h
 generic-y += local64.h
 generic-y += mcs_spinlock.h
+generic-y += qrwlock.h
+generic-y += qspinlock.h
 generic-y += vtime.h
 generic-y += early_ioremap.h
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
new file mode 100644
index 000000000000..f84da77b6bb7
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _ASM_POWERPC_QSPINLOCK_H
+#define _ASM_POWERPC_QSPINLOCK_H
+
+#include <asm-generic/qspinlock_types.h>
+
+#define _Q_PENDING_LOOPS	(1 << 9) /* not tuned */
+
+#define smp_mb__after_spinlock()   smp_mb()
+
+static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
+{
+	smp_mb();
+	return atomic_read(&lock->val);
+}
+#define queued_spin_is_locked queued_spin_is_locked
+
+#include <asm-generic/qspinlock.h>
+
+#endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 21357fe05fe0..434615f1d761 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -3,7 +3,12 @@
 #define __ASM_SPINLOCK_H
 #ifdef __KERNEL__

+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm/qspinlock.h>
+#include <asm/qrwlock.h>
+#else
 #include <asm/simple_spinlock.h>
+#endif

 #endif /* __KERNEL__ */
 #endif /* __ASM_SPINLOCK_H */
diff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h
index 3906f52dae65..c5d742f18021 100644
--- a/arch/powerpc/include/asm/spinlock_types.h
+++ b/arch/powerpc/include/asm/spinlock_types.h
@@ -6,6 +6,11 @@
 # error "please don't include this file directly"
 #endif

+#ifdef CONFIG_PPC_QUEUED_SPINLOCKS
+#include <asm-generic/qspinlock_types.h>
+#include <asm-generic/qrwlock_types.h>
+#else
 #include <asm/simple_spinlock_types.h>
+#endif

 #endif
diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index 5e994cda8e40..d66a645503eb 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -41,7 +41,10 @@ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
 obj64-y	+= copypage_64.o copyuser_64.o mem_64.o hweight_64.o \
 	   memcpy_64.o memcpy_mcsafe_64.o

+ifndef CONFIG_PPC_QUEUED_SPINLOCKS
 obj64-$(CONFIG_SMP)	+= locks.o
+endif
+
 obj64-$(CONFIG_ALTIVEC)	+= vmx-helper.o
 obj64-$(CONFIG_KPROBES_SANITY_TEST) += test_emulate_step.o \
 				       test_emulate_step_exec_instr.o
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fde943d180e0..fb0a814d4395 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -12,6 +12,7 @@

 #include <asm-generic/qspinlock_types.h>

+#ifndef queued_spin_is_locked
 /**
  * queued_spin_is_locked - is the spinlock locked?
  * @lock: Pointer to queued spinlock structure
@@ -25,6 +26,7 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
+#endif

 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
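[ Illustrative note: the #ifndef added to the generic header is what
  makes the arch override work.  A sketch of the pattern, with header
  names per the diff:

	/* arch/powerpc/include/asm/qspinlock.h */
	static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
	{
		smp_mb();	/* stronger ordering than the generic version */
		return atomic_read(&lock->val);
	}
	#define queued_spin_is_locked queued_spin_is_locked

	#include <asm-generic/qspinlock.h>	/* its copy is now compiled out */
]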
From patchwork Thu Jul 2 07:48:37 2020
X-Patchwork-Id: 1321013
From: Nicholas Piggin <npiggin@gmail.com>
Subject: [PATCH 6/8] powerpc/pseries: implement paravirt qspinlocks for SPLPAR
Date: Thu, 2 Jul 2020 17:48:37 +1000
Message-Id: <20200702074839.1057733-7-npiggin@gmail.com>
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>

Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 arch/powerpc/include/asm/paravirt.h           | 23 ++++++++
 arch/powerpc/include/asm/qspinlock.h          | 55 +++++++++++++++++++
 arch/powerpc/include/asm/qspinlock_paravirt.h |  5 ++
 arch/powerpc/platforms/pseries/Kconfig        |  5 ++
 arch/powerpc/platforms/pseries/setup.c        |  6 +-
 include/asm-generic/qspinlock.h               |  2 +
 6 files changed, 95 insertions(+), 1 deletion(-)
 create mode 100644 arch/powerpc/include/asm/qspinlock_paravirt.h

diff --git a/arch/powerpc/include/asm/paravirt.h b/arch/powerpc/include/asm/paravirt.h
index 7a8546660a63..5fae9dfa6fe9 100644
--- a/arch/powerpc/include/asm/paravirt.h
+++ b/arch/powerpc/include/asm/paravirt.h
@@ -29,6 +29,16 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	plpar_hcall_norets(H_CONFER, get_hard_smp_processor_id(cpu), yield_count);
 }
+
+static inline void prod_cpu(int cpu)
+{
+	plpar_hcall_norets(H_PROD, get_hard_smp_processor_id(cpu));
+}
+
+static inline void yield_to_any(void)
+{
+	plpar_hcall_norets(H_CONFER, -1, 0);
+}
 #else
 static inline bool is_shared_processor(void)
 {
@@ -45,6 +55,19 @@ static inline void yield_to_preempted(int cpu, u32 yield_count)
 {
 	___bad_yield_to_preempted(); /* This would be a bug */
 }
+
+extern void ___bad_yield_to_any(void);
+static inline void yield_to_any(void)
+{
+	___bad_yield_to_any(); /* This would be a bug */
+}
+
+extern void ___bad_prod_cpu(void);
+static inline void prod_cpu(int cpu)
+{
+	___bad_prod_cpu(); /* This would be a bug */
+}
+
 #endif

 #define vcpu_is_preempted vcpu_is_preempted
diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index f84da77b6bb7..997a9a32df77 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -3,9 +3,36 @@
 #define _ASM_POWERPC_QSPINLOCK_H

 #include <asm-generic/qspinlock_types.h>
+#include <asm/paravirt.h>

 #define _Q_PENDING_LOOPS	(1 << 9) /* not tuned */

+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+extern void native_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+extern void __pv_queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+
+static __always_inline void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
+{
+	if (!is_shared_processor())
+		native_queued_spin_lock_slowpath(lock, val);
+	else
+		__pv_queued_spin_lock_slowpath(lock, val);
+}
+#else
+extern void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);
+#endif
+
+static __always_inline void queued_spin_lock(struct qspinlock *lock)
+{
+	u32 val = 0;
+
+	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
+		return;
+
+	queued_spin_lock_slowpath(lock, val);
+}
+#define queued_spin_lock queued_spin_lock
+
 #define smp_mb__after_spinlock()   smp_mb()

 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
@@ -15,6 +42,34 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 }
 #define queued_spin_is_locked queued_spin_is_locked

+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define SPIN_THRESHOLD (1<<15) /* not tuned */
+
+static __always_inline void pv_wait(u8 *ptr, u8 val)
+{
+	if (*ptr != val)
+		return;
+	yield_to_any();
+	/*
+	 * We could pass in a CPU here if waiting in the queue and yield to
+	 * the previous CPU in the queue.
+	 */
+}
+
+static __always_inline void pv_kick(int cpu)
+{
+	prod_cpu(cpu);
+}
+
+extern void __pv_init_lock_hash(void);
+
+static inline void pv_spinlocks_init(void)
+{
+	__pv_init_lock_hash();
+}
+
+#endif
+
 #include <asm-generic/qspinlock.h>

 #endif /* _ASM_POWERPC_QSPINLOCK_H */
diff --git a/arch/powerpc/include/asm/qspinlock_paravirt.h b/arch/powerpc/include/asm/qspinlock_paravirt.h
new file mode 100644
index 000000000000..6dbdb8a4f84f
--- /dev/null
+++ b/arch/powerpc/include/asm/qspinlock_paravirt.h
@@ -0,0 +1,5 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef __ASM_QSPINLOCK_PARAVIRT_H
+#define __ASM_QSPINLOCK_PARAVIRT_H
+
+#endif /* __ASM_QSPINLOCK_PARAVIRT_H */
diff --git a/arch/powerpc/platforms/pseries/Kconfig b/arch/powerpc/platforms/pseries/Kconfig
index 24c18362e5ea..756e727b383f 100644
--- a/arch/powerpc/platforms/pseries/Kconfig
+++ b/arch/powerpc/platforms/pseries/Kconfig
@@ -25,9 +25,14 @@ config PPC_PSERIES
 	select SWIOTLB
 	default y

+config PARAVIRT_SPINLOCKS
+	bool
+	default n
+
 config PPC_SPLPAR
 	depends on PPC_PSERIES
 	bool "Support for shared-processor logical partitions"
+	select PARAVIRT_SPINLOCKS if PPC_QUEUED_SPINLOCKS
 	help
 	  Enabling this option will make the kernel run more efficiently
 	  on logically-partitioned pSeries systems which use shared
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 2db8469e475f..747a203d9453 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -771,8 +771,12 @@ static void __init pSeries_setup_arch(void)
 	if (firmware_has_feature(FW_FEATURE_LPAR)) {
 		vpa_init(boot_cpuid);

-		if (lppaca_shared_proc(get_lppaca()))
+		if (lppaca_shared_proc(get_lppaca())) {
 			static_branch_enable(&shared_processor);
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+			pv_spinlocks_init();
+#endif
+		}

 		ppc_md.power_save = pseries_lpar_idle;
 		ppc_md.enable_pmcs = pseries_lpar_enable_pmcs;
diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index fb0a814d4395..38ca14e79a86 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -69,6 +69,7 @@ static __always_inline int queued_spin_trylock(struct qspinlock *lock)

 extern void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val);

+#ifndef queued_spin_lock
 /**
  * queued_spin_lock - acquire a queued spinlock
  * @lock: Pointer to queued spinlock structure
@@ -82,6 +83,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)

 	queued_spin_lock_slowpath(lock, val);
 }
+#endif

 #ifndef queued_spin_unlock
 /**
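[ Illustrative note: the hypercall mapping behind the paravirt hooks
  added above, as invoked by the generic pv qspinlock slowpath:

	/*
	 * pv_wait(ptr, val) -> yield_to_any() -> H_CONFER(-1, 0)
	 *	a queued waiter that has spun SPIN_THRESHOLD times donates
	 *	the rest of its timeslice; "-1" means confer to any vCPU
	 *	of the partition rather than to a specific lock holder.
	 *
	 * pv_kick(cpu) -> prod_cpu(cpu) -> H_PROD(hwcpu)
	 *	the unlocker makes the chosen waiter vCPU runnable again.
	 */
]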
From patchwork Thu Jul 2 07:48:38 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1321014
From: Nicholas Piggin
Cc: Nicholas Piggin , Will Deacon , Peter Zijlstra , Boqun Feng , Ingo Molnar , Waiman Long , Anton Blanchard , linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [PATCH 7/8] powerpc/qspinlock: optimised atomic_try_cmpxchg_lock that adds the lock hint
Date: Thu, 2 Jul 2020 17:48:38 +1000
Message-Id: <20200702074839.1057733-8-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>
References: <20200702074839.1057733-1-npiggin@gmail.com>
MIME-Version: 1.0
To: unlisted-recipients:; (no To-header on input)
Sender: kvm-ppc-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org

This brings the behaviour of the uncontended fast path back to being
roughly equivalent to simple spinlocks -- a single atomic op with a
lock hint.

Signed-off-by: Nicholas Piggin
---
 arch/powerpc/include/asm/atomic.h    | 28 ++++++++++++++++++++++++++++
 arch/powerpc/include/asm/qspinlock.h |  2 +-
 2 files changed, 29 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
index 498785ffc25f..f6a3d145ffb7 100644
--- a/arch/powerpc/include/asm/atomic.h
+++ b/arch/powerpc/include/asm/atomic.h
@@ -193,6 +193,34 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))

+/*
+ * We don't want to override the generic atomic_try_cmpxchg_acquire,
+ * because this variant adds a lock hint to the lwarx, which may not be
+ * wanted for the _acquire case (and is not used by the other _acquire
+ * variants, so it would be a surprise).
+ */
+static __always_inline bool
+atomic_try_cmpxchg_lock(atomic_t *v, int *old, int new)
+{
+	int r, o = *old;
+
+	__asm__ __volatile__ (
+"1:\t"	PPC_LWARX(%0,0,%2,1) "	# atomic_try_cmpxchg_lock	\n"
+"	cmpw	0,%0,%3						\n"
+"	bne-	2f						\n"
+"	stwcx.	%4,0,%2						\n"
+"	bne-	1b						\n"
+"\t"	PPC_ACQUIRE_BARRIER "					\n"
+"2:								\n"
+	: "=&r" (r), "+m" (v->counter)
+	: "r" (&v->counter), "r" (o), "r" (new)
+	: "cr0", "memory");
+
+	if (unlikely(r != o))
+		*old = r;
+	return likely(r == o);
+}
+
 /**
  * atomic_fetch_add_unless - add unless the number is a given value
  * @v: pointer of type atomic_t

diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h
index 997a9a32df77..7091f1ceec3d 100644
--- a/arch/powerpc/include/asm/qspinlock.h
+++ b/arch/powerpc/include/asm/qspinlock.h
@@ -26,7 +26,7 @@ static __always_inline void queued_spin_lock(struct qspinlock *lock)
 {
 	u32 val = 0;

-	if (likely(atomic_try_cmpxchg_acquire(&lock->val, &val, _Q_LOCKED_VAL)))
+	if (likely(atomic_try_cmpxchg_lock(&lock->val, &val, _Q_LOCKED_VAL)))
 		return;

 	queued_spin_lock_slowpath(lock, val);
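The try-cmpxchg contract this patch relies on: the primitive returns a bool, and on failure it writes the value it actually observed back through *old, which is why queued_spin_lock() can hand val straight to the slowpath without re-reading the lock word. The EH=1 "taking a lock" hint on lwarx has no portable C equivalent; as a sketch of just the calling convention (hypothetical toy_ name, C11, not the kernel primitive):

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Sketch of the try_cmpxchg calling convention used above.  C11
 * compare_exchange already updates *old with the value observed when
 * the exchange fails, so the caller sees exactly what the lock word
 * held and needs no separate re-read before entering the slowpath.
 * The lwarx EH=1 lock hint is a hardware hint with no C-level
 * counterpart, so it is simply absent here.
 */
static inline bool toy_try_cmpxchg_lock(atomic_uint *v, unsigned int *old,
					unsigned int new)
{
	return atomic_compare_exchange_strong_explicit(v, old, new,
						       memory_order_acquire,
						       memory_order_relaxed);
}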
From patchwork Thu Jul 2 07:48:39 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Nicholas Piggin
X-Patchwork-Id: 1321015
From: Nicholas Piggin
Cc: Nicholas Piggin , Will Deacon , Peter Zijlstra , Boqun Feng , Ingo Molnar , Waiman Long , Anton Blanchard , linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org, kvm-ppc@vger.kernel.org, linux-arch@vger.kernel.org
Subject: [PATCH 8/8] powerpc/64s: remove paravirt from simple spinlocks (RFC only)
Date: Thu, 2 Jul 2020 17:48:39 +1000
Message-Id: <20200702074839.1057733-9-npiggin@gmail.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200702074839.1057733-1-npiggin@gmail.com>
References: <20200702074839.1057733-1-npiggin@gmail.com>
MIME-Version: 1.0
To: unlisted-recipients:; (no To-header on input)
Sender: kvm-ppc-owner@vger.kernel.org
Precedence: bulk
List-ID: X-Mailing-List: kvm-ppc@vger.kernel.org

This remains RFC until we settle on queued spinlocks for 64s and remove
the option to go back to simple locks. If other sub-archs want to keep
simple spinlocks, the code can still be simplified nicely.
---
 arch/powerpc/include/asm/simple_spinlock.h | 61 +-------------------
 arch/powerpc/kvm/book3s_hv_rm_mmu.c        |  6 --
 arch/powerpc/lib/Makefile                  |  4 --
 arch/powerpc/lib/locks.c                   | 65 ----------------------
 4 files changed, 2 insertions(+), 134 deletions(-)
 delete mode 100644 arch/powerpc/lib/locks.c

diff --git a/arch/powerpc/include/asm/simple_spinlock.h b/arch/powerpc/include/asm/simple_spinlock.h
index e048c041c4a9..5f0980dea001 100644
--- a/arch/powerpc/include/asm/simple_spinlock.h
+++ b/arch/powerpc/include/asm/simple_spinlock.h
@@ -16,23 +16,10 @@
  * (the type definitions are in asm/simple_spinlock_types.h)
  */
 #include
-#include
-#ifdef CONFIG_PPC64
-#include
-#endif
 #include
 #include

-#ifdef CONFIG_PPC64
-/* use 0x800000yy when locked, where yy == CPU number */
-#ifdef __BIG_ENDIAN__
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
-#else
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
-#endif
-#else
 #define LOCK_TOKEN	1
-#endif

 static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 {
@@ -74,43 +61,14 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	return __arch_spin_trylock(lock) == 0;
 }

-/*
- * On a system with shared processors (that is, where a physical
- * processor is multiplexed between several virtual processors),
- * there is no point spinning on a lock if the holder of the lock
- * isn't currently scheduled on a physical processor. Instead
- * we detect this situation and ask the hypervisor to give the
- * rest of our timeslice to the lock holder.
- *
- * So that we can tell which virtual processor is holding a lock,
- * we put 0x80000000 | smp_processor_id() in the lock when it is
- * held. Conveniently, we have a word in the paca that holds this
- * value.
- */
-
-#if defined(CONFIG_PPC_SPLPAR)
-/* We only yield to the hypervisor if we are in shared processor mode */
-void splpar_spin_yield(arch_spinlock_t *lock);
-void splpar_rw_yield(arch_rwlock_t *lock);
-#else /* SPLPAR */
-static inline void splpar_spin_yield(arch_spinlock_t *lock) {};
-static inline void splpar_rw_yield(arch_rwlock_t *lock) {};
-#endif
-
 static inline void spin_yield(arch_spinlock_t *lock)
 {
-	if (is_shared_processor())
-		splpar_spin_yield(lock);
-	else
-		barrier();
+	barrier();
 }

 static inline void rw_yield(arch_rwlock_t *lock)
 {
-	if (is_shared_processor())
-		splpar_rw_yield(lock);
-	else
-		barrier();
+	barrier();
 }

 static inline void arch_spin_lock(arch_spinlock_t *lock)
@@ -120,8 +78,6 @@ static inline void arch_spin_lock(arch_spinlock_t *lock)
 			break;
 		do {
 			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
 		} while (unlikely(lock->slock != 0));
 		HMT_medium();
 	}
@@ -139,8 +95,6 @@ void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)
 		local_irq_restore(flags);
 		do {
 			HMT_low();
-			if (is_shared_processor())
-				splpar_spin_yield(lock);
 		} while (unlikely(lock->slock != 0));
 		HMT_medium();
 		local_irq_restore(flags_dis);
@@ -166,13 +120,7 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
  * read-locks.
  */

-#ifdef CONFIG_PPC64
-#define __DO_SIGN_EXTEND	"extsw	%0,%0\n"
-#define WRLOCK_TOKEN		LOCK_TOKEN	/* it's negative */
-#else
-#define __DO_SIGN_EXTEND
 #define WRLOCK_TOKEN		(-1)
-#endif

 /*
  * This returns the old value in the lock + 1,
@@ -184,7 +132,6 @@ static inline long __arch_read_trylock(arch_rwlock_t *rw)

 	__asm__ __volatile__(
"1:	" PPC_LWARX(%0,0,%1,1) "\n"
-	__DO_SIGN_EXTEND
"	addic.	%0,%0,1\n\
	ble-	2f\n"
"	stwcx.	%0,0,%1\n\
@@ -227,8 +174,6 @@ static inline void arch_read_lock(arch_rwlock_t *rw)
 			break;
 		do {
 			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
 		} while (unlikely(rw->lock < 0));
 		HMT_medium();
 	}
@@ -241,8 +186,6 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
 			break;
 		do {
 			HMT_low();
-			if (is_shared_processor())
-				splpar_rw_yield(rw);
 		} while (unlikely(rw->lock != 0));
 		HMT_medium();
 	}

diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 88da2764c1bb..909025083161 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -410,12 +410,6 @@ long kvmppc_h_enter(struct kvm_vcpu *vcpu, unsigned long flags,
 				 &vcpu->arch.regs.gpr[4]);
 }

-#ifdef __BIG_ENDIAN__
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->lock_token))
-#else
-#define LOCK_TOKEN	(*(u32 *)(&get_paca()->paca_index))
-#endif
-
 static inline int is_mmio_hpte(unsigned long v, unsigned long r)
 {
 	return ((v & HPTE_V_ABSENT) &&

diff --git a/arch/powerpc/lib/Makefile b/arch/powerpc/lib/Makefile
index d66a645503eb..158e71abc14c 100644
--- a/arch/powerpc/lib/Makefile
+++ b/arch/powerpc/lib/Makefile
@@ -41,10 +41,6 @@ obj-$(CONFIG_PPC_BOOK3S_64) += copyuser_power7.o copypage_power7.o \
 obj64-y	+= copypage_64.o copyuser_64.o mem_64.o hweight_64.o \
	   memcpy_64.o memcpy_mcsafe_64.o

-ifndef CONFIG_PPC_QUEUED_SPINLOCKS
-obj64-$(CONFIG_SMP)	+= locks.o
-endif
-
 obj64-$(CONFIG_ALTIVEC)	+= vmx-helper.o
 obj64-$(CONFIG_KPROBES_SANITY_TEST) += test_emulate_step.o \
				       test_emulate_step_exec_instr.o

diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
deleted file mode 100644
index e35fd1a16992..000000000000
--- a/arch/powerpc/lib/locks.c
+++ /dev/null
@@ -1,65 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Spin and read/write lock operations.
- *
- * Copyright (C) 2001-2004 Paul Mackerras , IBM
- * Copyright (C) 2001 Anton Blanchard , IBM
- * Copyright (C) 2002 Dave Engebretsen , IBM
- *   Rework to support virtual processors
- */
-
-#include
-#include
-#include
-#include
-
-/* waiting for a spinlock... */
-#if defined(CONFIG_PPC_SPLPAR)
-#include
-#include
-
-void splpar_spin_yield(arch_spinlock_t *lock)
-{
-	unsigned int lock_value, holder_cpu, yield_count;
-
-	lock_value = lock->slock;
-	if (lock_value == 0)
-		return;
-	holder_cpu = lock_value & 0xffff;
-	BUG_ON(holder_cpu >= NR_CPUS);
-
-	yield_count = yield_count_of(holder_cpu);
-	if ((yield_count & 1) == 0)
-		return;		/* virtual cpu is currently running */
-	smp_rmb();
-	if (lock->slock != lock_value)
-		return;		/* something has changed */
-	yield_to_preempted(holder_cpu, yield_count);
-}
-EXPORT_SYMBOL_GPL(splpar_spin_yield);
-
-/*
- * Waiting for a read lock or a write lock on a rwlock...
- * This turns out to be the same for read and write locks, since
- * we only know the holder if it is write-locked.
- */
-void splpar_rw_yield(arch_rwlock_t *rw)
-{
-	int lock_value;
-	unsigned int holder_cpu, yield_count;
-
-	lock_value = rw->lock;
-	if (lock_value >= 0)
-		return;		/* no write lock at present */
-	holder_cpu = lock_value & 0xffff;
-	BUG_ON(holder_cpu >= NR_CPUS);
-
-	yield_count = yield_count_of(holder_cpu);
-	if ((yield_count & 1) == 0)
-		return;		/* virtual cpu is currently running */
-	smp_rmb();
-	if (rw->lock != lock_value)
-		return;		/* something has changed */
-	yield_to_preempted(holder_cpu, yield_count);
-}
-#endif
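For reference, the convention the deleted locks.c code was built on: the hypervisor bumps a per-vCPU yield count on every preemption and again on every resumption, so an odd count means "currently preempted" and an even count means "currently running". A toy model of just that parity convention (hypothetical toy_* names, user-space C11, not the kernel or hypervisor code):

#include <stdatomic.h>
#include <stdbool.h>

/*
 * Toy model of the yield-count parity convention used by the deleted
 * splpar_spin_yield()/splpar_rw_yield().  The count starts even;
 * preemption makes it odd, resumption makes it even again, so
 * (count & 1) != 0 means the holder is not running right now.
 */
static atomic_uint toy_yield_count;	/* one per vCPU in reality */

static void toy_hv_preempt(void) { atomic_fetch_add(&toy_yield_count, 1); }
static void toy_hv_resume(void)  { atomic_fetch_add(&toy_yield_count, 1); }

static bool toy_holder_preempted(void)
{
	return (atomic_load(&toy_yield_count) & 1) != 0;
}

Passing the observed count along with the confer (as yield_to_preempted() did via H_CONFER) lets the hypervisor discard the request if the holder has been dispatched again in the meantime, closing the race between the parity check and the hypercall.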