From patchwork Fri Dec 29 20:07:26 2023
X-Patchwork-Submitter: Sam Ravnborg
X-Patchwork-Id: 1881162
Date: Fri, 29 Dec 2023 21:07:26 +0100
From: Sam Ravnborg
To: "David S. Miller", Arnd Bergmann, Andreas Larsson, sparclinux@vger.kernel.org
Subject: [PATCH 1/4] sparc32: Add support for specifying -mcpu
Message-ID: <20231229200726.GA4034411@ravnborg.org>
References: <20231229200604.GA4033529@ravnborg.org>
In-Reply-To: <20231229200604.GA4033529@ravnborg.org>

Add support for selecting the CPU architecture. The default is leon3,
which is the minimum required, as the kernel uses CAS instructions.

Inspired by (from gaisler-buildroot-2023.02-1.0):
0001-sparc32-leon-Build-with-mcpu-leon3-for-SPARC_LEON.patch
0028-sparc32-leon-Make-what-mcpu-to-be-used-configurable-.patch

Signed-off-by: Sam Ravnborg
Cc: Andreas Larsson
Cc: Arnd Bergmann
Cc: "David S. Miller"
---
 arch/sparc/Kconfig  | 24 ++++++++++++++++++++++++
 arch/sparc/Makefile | 13 +++++--------
 2 files changed, 29 insertions(+), 8 deletions(-)

diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 1b9cf7f3c500..e94783ceb409 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -161,6 +161,30 @@ config ARCH_SUPPORTS_UPROBES
 
 menu "Processor type and features"
 
+choice
+	prompt "LEON architecture"
+	default SPARC_CPU_LEON3
+	help
+	  Select the architecture the kernel shall be built for
+
+config SPARC_CPU_LEON3
+	prompt "LEON 3"
+	help
+	  Build the kernel for the LEON 3 architecture
+
+config SPARC_CPU_LEON5
+	prompt "LEON 5"
+	help
+	  Build the kernel for the LEON 5 architecture
+
+config SPARC_CPU_DEFAULT
+	bool "Toolchain default"
+	help
+	  Build the kernel with no -mcpu option, getting the default
+	  for the toolchain that is being used.
+
+endchoice
+
 config SMP
 	bool "Symmetric multi-processing support"
 	help
diff --git a/arch/sparc/Makefile b/arch/sparc/Makefile
index 5f6035936131..3c3a1fd8c873 100644
--- a/arch/sparc/Makefile
+++ b/arch/sparc/Makefile
@@ -25,14 +25,11 @@ KBUILD_LDFLAGS := -m elf32_sparc
 export BITS    := 32
 UTS_MACHINE    := sparc
 
-# We are adding -Wa,-Av8 to KBUILD_CFLAGS to deal with a specs bug in some
-# versions of gcc. Some gcc versions won't pass -Av8 to binutils when you
-# give -mcpu=v8. This silently worked with older bintutils versions but
-# does not any more.
-KBUILD_CFLAGS += -m32 -mcpu=v8 -pipe -mno-fpu -fcall-used-g5 -fcall-used-g7
-KBUILD_CFLAGS += -Wa,-Av8
-
-KBUILD_AFLAGS += -m32 -Wa,-Av8
+cpuflags-$(CONFIG_SPARC_CPU_LEON3) := -mcpu=leon3
+cpuflags-$(CONFIG_SPARC_CPU_LEON5) := -mcpu=leon5
+
+KBUILD_CFLAGS += -m32 $(cpuflags-y) -pipe -mno-fpu -fcall-used-g5 -fcall-used-g7
+KBUILD_AFLAGS += -m32 $(cpuflags-y)
 
 else
 #####

From patchwork Fri Dec 29 20:07:58 2023
X-Patchwork-Submitter: Sam Ravnborg
X-Patchwork-Id: 1881163
Date: Fri, 29 Dec 2023 21:07:58 +0100
From: Sam Ravnborg
To: "David S. Miller", Arnd Bergmann, Andreas Larsson, sparclinux@vger.kernel.org
Subject: [PATCH 2/4] sparc32: Add cmpxchg support using CAS
Message-ID: <20231229200758.GB4034411@ravnborg.org>
References: <20231229200604.GA4033529@ravnborg.org>
In-Reply-To: <20231229200604.GA4033529@ravnborg.org>

Utilize the casa instruction to implement cmpxchg support.
The implementation is based on the patch
0002-sparc32-leon-Add-support-for-atomic-operations-with-.patch
included in gaisler-buildroot-2023.02-1.0.

Drop the emulated version, as the minimum supported CPU is leon3, and
leon3 has CAS support.

Signed-off-by: Sam Ravnborg
Cc: "David S. Miller"
Cc: Andreas Larsson
Cc: Arnd Bergmann
---
 arch/sparc/include/asm/cmpxchg_32.h | 72 +++++++++++++++++------------
 arch/sparc/lib/atomic32.c           | 42 -----------------
 2 files changed, 42 insertions(+), 72 deletions(-)

diff --git a/arch/sparc/include/asm/cmpxchg_32.h b/arch/sparc/include/asm/cmpxchg_32.h
index d0af82c240b7..a35f2aa5d2ce 100644
--- a/arch/sparc/include/asm/cmpxchg_32.h
+++ b/arch/sparc/include/asm/cmpxchg_32.h
@@ -12,10 +12,21 @@
 #ifndef __ARCH_SPARC_CMPXCHG__
 #define __ARCH_SPARC_CMPXCHG__
 
-unsigned long __xchg_u32(volatile u32 *m, u32 new);
-void __xchg_called_with_bad_pointer(void);
+void __xchg_called_with_bad_pointer(void)
+	__compiletime_error("Bad argument size for xchg");
 
-static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ void * ptr, int size)
+static __always_inline
+unsigned long __xchg_u32(volatile unsigned long *m, unsigned long val)
+{
+	asm volatile("swap [%2], %0"
+		     : "=&r" (val)
+		     : "0" (val), "r" (m)
+		     : "memory");
+	return val;
+}
+
+static __always_inline
+unsigned long __arch_xchg(unsigned long x, volatile void * ptr, int size)
 {
 	switch (size) {
 	case 4:
@@ -25,25 +36,31 @@ static __always_inline unsigned long __arch_xchg(unsigned long x, __volatile__ v
 	return x;
 }
 
-#define arch_xchg(ptr,x) ({(__typeof__(*(ptr)))__arch_xchg((unsigned long)(x),(ptr),sizeof(*(ptr)));})
+#define arch_xchg(ptr,x)						\
+({									\
+	(__typeof__(*(ptr))) __arch_xchg((unsigned long)(x),		\
+					 (ptr),				\
+					 sizeof(*(ptr)));		\
+})
 
-/* Emulate cmpxchg() the same way we emulate atomics,
- * by hashing the object address and indexing into an array
- * of spinlocks to get a bit of performance...
- *
- * See arch/sparc/lib/atomic32.c for implementation.
- *
- * Cribbed from
- */
+void __cmpxchg_called_with_bad_pointer(void)
+	__compiletime_error("Bad argument size for cmpxchg");
 
-/* bug catcher for when unsupported size is used - won't link */
-void __cmpxchg_called_with_bad_pointer(void);
 /* we only need to support cmpxchg of a u32 on sparc */
-unsigned long __cmpxchg_u32(volatile u32 *m, u32 old, u32 new_);
+static __always_inline
+unsigned long __cmpxchg_u32(volatile int *m, int old, int new)
+{
+	asm volatile("casa [%2] 0xb, %3, %0"
+		     : "=&r" (new)
+		     : "0" (new), "r" (m), "r" (old)
+		     : "memory");
+
+	return new;
+}
 
 /* don't worry...optimizer will get rid of most of this */
-static inline unsigned long
-__cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
+static __always_inline
+unsigned long __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 {
 	switch (size) {
 	case 4:
@@ -52,6 +69,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 		__cmpxchg_called_with_bad_pointer();
 		break;
 	}
+
 	return old;
 }
 
@@ -59,22 +77,16 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
 ({									\
 	__typeof__(*(ptr)) _o_ = (o);					\
 	__typeof__(*(ptr)) _n_ = (n);					\
-	(__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,	\
-			(unsigned long)_n_, sizeof(*(ptr)));		\
+									\
+	(__typeof__(*(ptr))) __cmpxchg((ptr),				\
+				       (unsigned long)_o_,		\
+				       (unsigned long)_n_,		\
+				       sizeof(*(ptr)));			\
 })
 
-u64 __cmpxchg_u64(u64 *ptr, u64 old, u64 new);
-#define arch_cmpxchg64(ptr, old, new)	__cmpxchg_u64(ptr, old, new)
-
-#include
-
 /*
- * cmpxchg_local and cmpxchg64_local are atomic wrt current CPU. Always make
- * them available.
+ * We can not support 64-bit cmpxchg using LEON CASA. Better fail to link than
+ * pretend we can support something that is not atomic towards 64-bit writes.
  */
-#define arch_cmpxchg_local(ptr, o, n)					\
-	((__typeof__(*(ptr)))__generic_cmpxchg_local((ptr), (unsigned long)(o),\
-			(unsigned long)(n), sizeof(*(ptr))))
-#define arch_cmpxchg64_local(ptr, o, n) __generic_cmpxchg64_local((ptr), (o), (n))
 
 #endif /* __ARCH_SPARC_CMPXCHG__ */
diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c
index cf80d1ae352b..f378471adeca 100644
--- a/arch/sparc/lib/atomic32.c
+++ b/arch/sparc/lib/atomic32.c
@@ -158,45 +158,3 @@ unsigned long sp32___change_bit(unsigned long *addr, unsigned long mask)
 	return old & mask;
 }
 EXPORT_SYMBOL(sp32___change_bit);
-
-unsigned long __cmpxchg_u32(volatile u32 *ptr, u32 old, u32 new)
-{
-	unsigned long flags;
-	u32 prev;
-
-	spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
-	if ((prev = *ptr) == old)
-		*ptr = new;
-	spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
-
-	return (unsigned long)prev;
-}
-EXPORT_SYMBOL(__cmpxchg_u32);
-
-u64 __cmpxchg_u64(u64 *ptr, u64 old, u64 new)
-{
-	unsigned long flags;
-	u64 prev;
-
-	spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
-	if ((prev = *ptr) == old)
-		*ptr = new;
-	spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
-
-	return prev;
-}
-EXPORT_SYMBOL(__cmpxchg_u64);
-
-unsigned long __xchg_u32(volatile u32 *ptr, u32 new)
-{
-	unsigned long flags;
-	u32 prev;
-
-	spin_lock_irqsave(ATOMIC_HASH(ptr), flags);
-	prev = *ptr;
-	*ptr = new;
-	spin_unlock_irqrestore(ATOMIC_HASH(ptr), flags);
-
-	return (unsigned long)prev;
-}
-EXPORT_SYMBOL(__xchg_u32);

From patchwork Fri Dec 29 20:08:25 2023
X-Patchwork-Submitter: Sam Ravnborg
X-Patchwork-Id: 1881161
Date: Fri, 29 Dec 2023 21:08:25 +0100
From: Sam Ravnborg
To: "David S. Miller", Arnd Bergmann, Andreas Larsson, sparclinux@vger.kernel.org
Subject: [PATCH 3/4] sparc32: Add atomic bitops support using CAS
Message-ID: <20231229200825.GC4034411@ravnborg.org>
References: <20231229200604.GA4033529@ravnborg.org>
In-Reply-To: <20231229200604.GA4033529@ravnborg.org>

This implements the atomic bit operations using the CAS instruction, so
they are atomic.

The implementation uses a single asm helper to make the code as
readable as possible.

The implementation is done inline in bitops/atomic.h to mirror the
structure used in asm-generic. As an added benefit, the bitops can be
instrumented.

The generated code is more compact with the majority implemented in C,
as this allows the compiler to do optimizations, especially when the
arguments passed are constant.

The old emulated bitops implementation is no longer used and is
deleted.

Signed-off-by: Sam Ravnborg
Cc: Andreas Larsson
Cc: Arnd Bergmann
Cc: "David S. Miller"
---
 arch/sparc/include/asm/bitops/atomic_32.h | 124 ++++++++++++++++++++++
 arch/sparc/include/asm/bitops_32.h        |  71 +------------
 arch/sparc/lib/atomic32.c                 |  39 -------
 3 files changed, 125 insertions(+), 109 deletions(-)
 create mode 100644 arch/sparc/include/asm/bitops/atomic_32.h

diff --git a/arch/sparc/include/asm/bitops/atomic_32.h b/arch/sparc/include/asm/bitops/atomic_32.h
new file mode 100644
index 000000000000..b9e33d21b58d
--- /dev/null
+++ b/arch/sparc/include/asm/bitops/atomic_32.h
@@ -0,0 +1,124 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __ASM_SPARC_BITOPS_ATOMIC_H_
+#define __ASM_SPARC_BITOPS_ATOMIC_H_
+
+#include
+#include
+
+#include
+#include
+
+static __always_inline
+int __boa_casa(volatile unsigned long *p,
+	       unsigned long check,
+	       unsigned long swap)
+{
+	// casa [p], check, swap
+	// check == swap for success, otherwise try again
+	asm volatile("casa [%2] 0xb, %3, %0"
+		     : "=&r" (swap)
+		     : "0" (swap), "r" (p), "r" (check)
+		     : "memory");
+
+	return swap;
+}
+
+static __always_inline void
+arch_set_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check | mask;
+	} while (__boa_casa(p, check, swap) != check);
+}
+
+static __always_inline void
+arch_clear_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check & ~mask;
+	} while (__boa_casa(p, check, swap) != check);
+}
+
+static __always_inline void
+arch_change_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check ^ mask;
+	} while (__boa_casa(p, check, swap) != check);
+}
+
+static __always_inline int
+arch_test_and_set_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check | mask;
+	} while (__boa_casa(p, check, swap) != check);
+
+	return !!(check & mask);
+}
+
+static __always_inline int
+arch_test_and_clear_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check & ~mask;
+	} while (__boa_casa(p, check, swap) != check);
+
+	return !!(check & mask);
+}
+
+static __always_inline int
+arch_test_and_change_bit(unsigned int nr, volatile unsigned long *p)
+{
+	unsigned long mask = BIT_MASK(nr);
+	unsigned long check;
+	unsigned long swap;
+
+	p += BIT_WORD(nr);
+
+	do {
+		check = *p;
+		swap = check ^ mask;
+	} while (__boa_casa(p, check, swap) != check);
+
+	return !!(check & mask);
+}
+
+#include
+
+#endif /* __ASM_SPARC_BITOPS_ATOMIC_H_ */
diff --git a/arch/sparc/include/asm/bitops_32.h b/arch/sparc/include/asm/bitops_32.h
index 3448c191b484..34279e9572a4 100644
--- a/arch/sparc/include/asm/bitops_32.h
+++ b/arch/sparc/include/asm/bitops_32.h
@@ -19,76 +19,7 @@
 #error only can be included directly
 #endif
 
-unsigned long sp32___set_bit(unsigned long *addr, unsigned long mask);
-unsigned long sp32___clear_bit(unsigned long *addr, unsigned long mask);
-unsigned long sp32___change_bit(unsigned long *addr, unsigned long mask);
-
-/*
- * Set bit 'nr' in 32-bit quantity at address 'addr' where bit '0'
- * is in the highest of the four bytes and bit '31' is the high bit
- * within the first byte. Sparc is BIG-Endian. Unless noted otherwise
- * all bit-ops return 0 if bit was previously clear and != 0 otherwise.
- */
-static inline int test_and_set_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	return sp32___set_bit(ADDR, mask) != 0;
-}
-
-static inline void set_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	(void) sp32___set_bit(ADDR, mask);
-}
-
-static inline int test_and_clear_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	return sp32___clear_bit(ADDR, mask) != 0;
-}
-
-static inline void clear_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	(void) sp32___clear_bit(ADDR, mask);
-}
-
-static inline int test_and_change_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	return sp32___change_bit(ADDR, mask) != 0;
-}
-
-static inline void change_bit(unsigned long nr, volatile unsigned long *addr)
-{
-	unsigned long *ADDR, mask;
-
-	ADDR = ((unsigned long *) addr) + (nr >> 5);
-	mask = 1 << (nr & 31);
-
-	(void) sp32___change_bit(ADDR, mask);
-}
-
+#include
 #include
 #include
diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c
index f378471adeca..ed778f7ebe97 100644
--- a/arch/sparc/lib/atomic32.c
+++ b/arch/sparc/lib/atomic32.c
@@ -119,42 +119,3 @@ void arch_atomic_set(atomic_t *v, int i)
 	spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
 }
 EXPORT_SYMBOL(arch_atomic_set);
-
-unsigned long sp32___set_bit(unsigned long *addr, unsigned long mask)
-{
-	unsigned long old, flags;
-
-	spin_lock_irqsave(ATOMIC_HASH(addr), flags);
-	old = *addr;
-	*addr = old | mask;
-	spin_unlock_irqrestore(ATOMIC_HASH(addr), flags);
-
-	return old & mask;
-}
-EXPORT_SYMBOL(sp32___set_bit);
-
-unsigned long sp32___clear_bit(unsigned long *addr, unsigned long mask)
-{
-	unsigned long old, flags;
-
-	spin_lock_irqsave(ATOMIC_HASH(addr), flags);
-	old = *addr;
-	*addr = old & ~mask;
-	spin_unlock_irqrestore(ATOMIC_HASH(addr), flags);
-
-	return old & mask;
-}
-EXPORT_SYMBOL(sp32___clear_bit);
-
-unsigned long sp32___change_bit(unsigned long *addr, unsigned long mask)
-{
-	unsigned long old, flags;
-
-	spin_lock_irqsave(ATOMIC_HASH(addr), flags);
-	old = *addr;
-	*addr = old ^ mask;
-	spin_unlock_irqrestore(ATOMIC_HASH(addr), flags);
-
-	return old & mask;
-}
-EXPORT_SYMBOL(sp32___change_bit);

From patchwork Fri Dec 29 20:08:56 2023
X-Patchwork-Submitter: Sam Ravnborg
X-Patchwork-Id: 1881164
Date: Fri, 29 Dec 2023 21:08:56 +0100
From: Sam Ravnborg
To: "David S. Miller" , Arnd Bergmann , Andreas Larsson , sparclinux@vger.kernel.org
Subject: [PATCH 4/4] sparc32: Add atomic support using CAS
Message-ID: <20231229200856.GD4034411@ravnborg.org>
References: <20231229200604.GA4033529@ravnborg.org>
In-Reply-To: <20231229200604.GA4033529@ravnborg.org>

Implement the atomic operations using the LEON casa instruction.

The implementation uses a single asm helper to keep the code as
readable as possible. The generated code is more compact, with the
majority implemented in C, as this lets the compiler optimize,
especially when the arguments passed are constant.

The old emulated atomic implementation is no longer used and is
deleted.

Signed-off-by: Sam Ravnborg
Cc: Andreas Larsson
Cc: "David S. Miller"
Cc: Arnd Bergmann
---
 arch/sparc/include/asm/atomic_32.h | 151 +++++++++++++++++++----------
 arch/sparc/lib/Makefile            |   2 +-
 arch/sparc/lib/atomic32.c          | 121 -----------------------
 3 files changed, 100 insertions(+), 174 deletions(-)
 delete mode 100644 arch/sparc/lib/atomic32.c

diff --git a/arch/sparc/include/asm/atomic_32.h b/arch/sparc/include/asm/atomic_32.h
index 60ce2fe57fcd..54f39148c492 100644
--- a/arch/sparc/include/asm/atomic_32.h
+++ b/arch/sparc/include/asm/atomic_32.h
@@ -1,61 +1,108 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-/* atomic.h: These still suck, but the I-cache hit rate is higher.
- *
- * Copyright (C) 1996 David S. Miller (davem@davemloft.net)
- * Copyright (C) 2000 Anton Blanchard (anton@linuxcare.com.au)
- * Copyright (C) 2007 Kyle McMartin (kyle@parisc-linux.org)
- *
- * Additions by Keith M Wesolowski (wesolows@foobazco.org) based
- * on asm-parisc/atomic.h Copyright (C) 2000 Philipp Rumpf .
- */
-
 #ifndef __ARCH_SPARC_ATOMIC__
 #define __ARCH_SPARC_ATOMIC__

 #include
 #include
-#include
-#include
-
-int arch_atomic_add_return(int, atomic_t *);
-#define arch_atomic_add_return arch_atomic_add_return
-
-int arch_atomic_fetch_add(int, atomic_t *);
-#define arch_atomic_fetch_add arch_atomic_fetch_add
-
-int arch_atomic_fetch_and(int, atomic_t *);
-#define arch_atomic_fetch_and arch_atomic_fetch_and
-
-int arch_atomic_fetch_or(int, atomic_t *);
-#define arch_atomic_fetch_or arch_atomic_fetch_or
-
-int arch_atomic_fetch_xor(int, atomic_t *);
-#define arch_atomic_fetch_xor arch_atomic_fetch_xor
-
-int arch_atomic_cmpxchg(atomic_t *, int, int);
-#define arch_atomic_cmpxchg arch_atomic_cmpxchg
-
-int arch_atomic_xchg(atomic_t *, int);
-#define arch_atomic_xchg arch_atomic_xchg
-
-int arch_atomic_fetch_add_unless(atomic_t *, int, int);
-#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless
-
-void arch_atomic_set(atomic_t *, int);
-
-#define arch_atomic_set_release(v, i) arch_atomic_set((v), (i))
-
-#define arch_atomic_read(v) READ_ONCE((v)->counter)
-
-#define arch_atomic_add(i, v) ((void)arch_atomic_add_return( (int)(i), (v)))
-#define arch_atomic_sub(i, v) ((void)arch_atomic_add_return(-(int)(i), (v)))
-
-#define arch_atomic_and(i, v) ((void)arch_atomic_fetch_and((i), (v)))
-#define arch_atomic_or(i, v) ((void)arch_atomic_fetch_or((i), (v)))
-#define arch_atomic_xor(i, v) ((void)arch_atomic_fetch_xor((i), (v)))
-
-#define arch_atomic_sub_return(i, v) (arch_atomic_add_return(-(int)(i), (v)))
-#define arch_atomic_fetch_sub(i, v) (arch_atomic_fetch_add (-(int)(i), (v)))
+#include
+
+static __always_inline int arch_atomic_read(const atomic_t *v)
+{
+        return READ_ONCE(v->counter);
+}
+
+static __always_inline void arch_atomic_set(atomic_t *v, int i)
+{
+        WRITE_ONCE(v->counter, i);
+}
+
+static __always_inline
+int __atomic_casa(volatile int *p, int check, int swap)
+{
+        // casa [p], check, swap
+        // check == swap for success, otherwise try again
+        asm volatile("casa [%2] 0xb, %3, %0"
+                     : "=&r" (swap)
+                     : "0" (swap), "r" (p), "r" (check)
+                     : "memory");
+
+        return swap;
+}
+
+/* Do v->counter c_op i */
+#define ATOMIC_OP(op, c_op) \
+static inline void arch_atomic_##op(int i, atomic_t *v) \
+{ \
+        int check; \
+        int swap; \
+ \
+        do { \
+                check = v->counter; \
+                swap = check c_op i; \
+        } while (__atomic_casa(&v->counter, check, swap) != check); \
+}
+
+/* Do v->counter c_op i, and return the result */
+#define ATOMIC_OP_RETURN(op, c_op) \
+static inline int arch_atomic_##op##_return(int i, atomic_t *v) \
+{ \
+        int check; \
+        int swap; \
+ \
+        do { \
+                check = v->counter; \
+                swap = check c_op i; \
+        } while (__atomic_casa(&v->counter, check, swap) != check); \
+ \
+        return swap; \
+}
+
+/* Do v->counter c_op i, and return the original v->counter value */
+#define ATOMIC_FETCH_OP(op, c_op) \
+static inline int arch_atomic_fetch_##op(int i, atomic_t *v) \
+{ \
+        int check; \
+        int swap; \
+ \
+        do { \
+                check = v->counter; \
+                swap = check c_op i; \
+        } while (__atomic_casa(&v->counter, check, swap) != check); \
+ \
+        return check; \
+}
+
+ATOMIC_OP_RETURN(add, +)
+ATOMIC_OP_RETURN(sub, -)
+
+ATOMIC_FETCH_OP(add, +)
+ATOMIC_FETCH_OP(sub, -)
+ATOMIC_FETCH_OP(and, &)
+ATOMIC_FETCH_OP(or, |)
+ATOMIC_FETCH_OP(xor, ^)
+
+ATOMIC_OP(add, +)
+ATOMIC_OP(sub, -)
+ATOMIC_OP(and, &)
+ATOMIC_OP(or, |)
+ATOMIC_OP(xor, ^)
+
+#undef ATOMIC_FETCH_OP
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
+#define arch_atomic_add_return arch_atomic_add_return
+#define arch_atomic_sub_return arch_atomic_sub_return
+#define arch_atomic_fetch_add arch_atomic_fetch_add
+#define arch_atomic_fetch_sub arch_atomic_fetch_sub
+#define arch_atomic_fetch_and arch_atomic_fetch_and
+#define arch_atomic_fetch_or arch_atomic_fetch_or
+#define arch_atomic_fetch_xor arch_atomic_fetch_xor
+#define arch_atomic_add arch_atomic_add
+#define arch_atomic_sub arch_atomic_sub
+#define arch_atomic_and arch_atomic_and
+#define arch_atomic_or arch_atomic_or
+#define arch_atomic_xor arch_atomic_xor

 #endif /* !(__ARCH_SPARC_ATOMIC__) */

diff --git a/arch/sparc/lib/Makefile b/arch/sparc/lib/Makefile
index 063556fe2cb1..907f497bfcec 100644
--- a/arch/sparc/lib/Makefile
+++ b/arch/sparc/lib/Makefile
@@ -52,5 +52,5 @@ lib-$(CONFIG_SPARC64) += copy_in_user.o memmove.o
 lib-$(CONFIG_SPARC64) += mcount.o ipcsum.o xor.o hweight.o ffs.o

 obj-$(CONFIG_SPARC64) += iomap.o
-obj-$(CONFIG_SPARC32) += atomic32.o ucmpdi2.o
+obj-$(CONFIG_SPARC32) += ucmpdi2.o
 obj-$(CONFIG_SPARC64) += PeeCeeI.o

diff --git a/arch/sparc/lib/atomic32.c b/arch/sparc/lib/atomic32.c
deleted file mode 100644
index ed778f7ebe97..000000000000
--- a/arch/sparc/lib/atomic32.c
+++ /dev/null
@@ -1,121 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0
-/*
- * atomic32.c: 32-bit atomic_t implementation
- *
- * Copyright (C) 2004 Keith M Wesolowski
- * Copyright (C) 2007 Kyle McMartin
- *
- * Based on asm-parisc/atomic.h Copyright (C) 2000 Philipp Rumpf
- */
-
-#include
-#include
-#include
-
-#ifdef CONFIG_SMP
-#define ATOMIC_HASH_SIZE 4
-#define ATOMIC_HASH(a) (&__atomic_hash[(((unsigned long)a)>>8) & (ATOMIC_HASH_SIZE-1)])
-
-spinlock_t __atomic_hash[ATOMIC_HASH_SIZE] = {
-        [0 ... (ATOMIC_HASH_SIZE-1)] = __SPIN_LOCK_UNLOCKED(__atomic_hash)
-};
-
-#else /* SMP */
-
-static DEFINE_SPINLOCK(dummy);
-#define ATOMIC_HASH_SIZE 1
-#define ATOMIC_HASH(a) (&dummy)
-
-#endif /* SMP */
-
-#define ATOMIC_FETCH_OP(op, c_op) \
-int arch_atomic_fetch_##op(int i, atomic_t *v) \
-{ \
-        int ret; \
-        unsigned long flags; \
-        spin_lock_irqsave(ATOMIC_HASH(v), flags); \
- \
-        ret = v->counter; \
-        v->counter c_op i; \
- \
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags); \
-        return ret; \
-} \
-EXPORT_SYMBOL(arch_atomic_fetch_##op);
-
-#define ATOMIC_OP_RETURN(op, c_op) \
-int arch_atomic_##op##_return(int i, atomic_t *v) \
-{ \
-        int ret; \
-        unsigned long flags; \
-        spin_lock_irqsave(ATOMIC_HASH(v), flags); \
- \
-        ret = (v->counter c_op i); \
- \
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags); \
-        return ret; \
-} \
-EXPORT_SYMBOL(arch_atomic_##op##_return);
-
-ATOMIC_OP_RETURN(add, +=)
-
-ATOMIC_FETCH_OP(add, +=)
-ATOMIC_FETCH_OP(and, &=)
-ATOMIC_FETCH_OP(or, |=)
-ATOMIC_FETCH_OP(xor, ^=)
-
-#undef ATOMIC_FETCH_OP
-#undef ATOMIC_OP_RETURN
-
-int arch_atomic_xchg(atomic_t *v, int new)
-{
-        int ret;
-        unsigned long flags;
-
-        spin_lock_irqsave(ATOMIC_HASH(v), flags);
-        ret = v->counter;
-        v->counter = new;
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
-        return ret;
-}
-EXPORT_SYMBOL(arch_atomic_xchg);
-
-int arch_atomic_cmpxchg(atomic_t *v, int old, int new)
-{
-        int ret;
-        unsigned long flags;
-
-        spin_lock_irqsave(ATOMIC_HASH(v), flags);
-        ret = v->counter;
-        if (likely(ret == old))
-                v->counter = new;
-
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
-        return ret;
-}
-EXPORT_SYMBOL(arch_atomic_cmpxchg);
-
-int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
-{
-        int ret;
-        unsigned long flags;
-
-        spin_lock_irqsave(ATOMIC_HASH(v), flags);
-        ret = v->counter;
-        if (ret != u)
-                v->counter += a;
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
-        return ret;
-}
-EXPORT_SYMBOL(arch_atomic_fetch_add_unless);
-
-/* Atomic operations are already serializing */
-void arch_atomic_set(atomic_t *v, int i)
-{
-        unsigned long flags;
-
-        spin_lock_irqsave(ATOMIC_HASH(v), flags);
-        v->counter = i;
-        spin_unlock_irqrestore(ATOMIC_HASH(v), flags);
-}
-EXPORT_SYMBOL(arch_atomic_set);
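For readers unfamiliar with the pattern, the ATOMIC_OP* macros in the patch all reduce to the same compare-and-swap retry loop: snapshot the counter, compute the new value, and attempt to publish it, retrying if another CPU changed the counter in between. The sketch below shows the same loop in user-space C, using the GCC/Clang `__atomic_compare_exchange_n` builtin in place of the LEON `casa` instruction; the function name is illustrative, not part of the patch or the kernel API.

```c
/* User-space sketch of the CAS retry loop that the patch's
 * ATOMIC_OP_RETURN macro wraps around the LEON casa instruction.
 * atomic_add_return_cas() is an illustrative name, not kernel API.
 */
static int atomic_add_return_cas(int *p, int i)
{
        int check, swap;

        do {
                /* snapshot the current value */
                check = __atomic_load_n(p, __ATOMIC_RELAXED);
                /* compute the value we want to publish */
                swap = check + i;
                /* attempt the swap: it succeeds only if *p still equals
                 * check; on failure another thread raced us, so retry */
        } while (!__atomic_compare_exchange_n(p, &check, swap, 0,
                                              __ATOMIC_SEQ_CST,
                                              __ATOMIC_SEQ_CST));

        return swap;
}
```

Because the CAS only succeeds when the snapshot is still current, the computed result in `swap` is exactly the value stored, which is why the `_return` variant returns `swap` while the `fetch_` variants return the snapshot `check`.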