
lib: sbi: Remove xchg/cmpxchg implemented via lr/sc

Message ID 20231108032859.159198-1-wxjstz@126.com
State Accepted
Series lib: sbi: Remove xchg/cmpxchg implemented via lr/sc

Commit Message

Xiang W Nov. 8, 2023, 3:28 a.m. UTC
lr/sc is part of the A extension. If the A extension is not supported,
lr/sc cannot be used. So remove xchg/cmpxchg.

Signed-off-by: Xiang W <wxjstz@126.com>
---
 lib/sbi/riscv_atomic.c | 106 ++---------------------------------------
 1 file changed, 4 insertions(+), 102 deletions(-)
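The guard relies on the __riscv_atomic predefined macro, which RISC-V GCC
and Clang set whenever the A extension is part of -march. A stand-alone
probe (a sketch; the file name is arbitrary) fails the build in the same
way on a toolchain without the A extension:

/* probe_a_ext.c: compiles only when -march includes 'a'
 * (e.g. rv64imac), mirroring the guard this patch adds. */
#ifndef __riscv_atomic
#error "A extension not enabled in -march"
#endif

int main(void) { return 0; }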

Comments

Bo Gan Nov. 8, 2023, 8:09 a.m. UTC | #1
On 11/7/23 7:28 PM, Xiang W wrote:
> lr/sc is part of the A extension. If the A extension is not supported,
> lr/sc cannot be used. So remove xchg/cmpxchg.
>
> Signed-off-by: Xiang W <wxjstz@126.com>
> ---
>   lib/sbi/riscv_atomic.c | 106 ++---------------------------------------
>   1 file changed, 4 insertions(+), 102 deletions(-)
>
> diff --git a/lib/sbi/riscv_atomic.c b/lib/sbi/riscv_atomic.c
> index 528686f..a143218 100644
> --- a/lib/sbi/riscv_atomic.c
> +++ b/lib/sbi/riscv_atomic.c
> @@ -12,6 +12,10 @@
>  #include <sbi/riscv_atomic.h>
>  #include <sbi/riscv_barrier.h>
> +#ifndef __riscv_atomic
> +#error "opensbi strongly relies on the A extension of RISC-V"
> +#endif
> +
>  long atomic_read(atomic_t *atom)
>  {
Hi Xiang,

I like this patch. I'm new to the OpenSBI mailing list, but I've been studying the JH7110 (U74) for a while.
I just want to add one more thing. Even if the A extension is supported, we might still not be able to use
lr/sc. The U74MC has 1 S7 (M/U, no MMU) core and 4 U7 (M/S/U) cores. In the U74MC core complex manual,
https://starfivetech.com/uploads/u74mc_core_complex_manual_21G1.pdf, Chapter 3.6 states that the S7
core does not have a data cache and LR/SC will generate a precise access exception, so the only way
for the S7 and U7 cores to synchronize is via AMO instructions.
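For reference, an exchange built on a single AMO needs no lr/sc at all. A
sketch using GNU inline assembly (not code from this patch):

/* Swap in newval and return the old value with one amoswap;
 * usable on harts that trap on LR/SC, such as the S7. */
static inline unsigned int amo_xchg_u32(volatile unsigned int *ptr,
					unsigned int newval)
{
	unsigned int ret;

	__asm__ __volatile__("amoswap.w.aqrl %0, %2, %1"
			     : "=r"(ret), "+A"(*ptr)
			     : "r"(newval)
			     : "memory");
	return ret;
}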

My advice is that if, for any reason, we decide to keep lr/sc, perhaps we can make it an
optional feature that is disabled by default but can be enabled at build time -- to make everyone happy.

Bo
Xiang W Nov. 8, 2023, 9:26 a.m. UTC | #2
On Wed, 2023-11-08 at 00:09 -0800, Bo Gan wrote:
> On 11/7/23 7:28 PM, Xiang W wrote:
> > [...]
> Hi Xiang,
> 
> I like this patch. I'm new to the OpenSBI mailing list, but I've been studying the JH7110 (U74) for a while.
> I just want to add one more thing. Even if the A extension is supported, we might still not be able to use
> lr/sc. The U74MC has 1 S7 (M/U, no MMU) core and 4 U7 (M/S/U) cores. In the U74MC core complex manual,
> https://starfivetech.com/uploads/u74mc_core_complex_manual_21G1.pdf, Chapter 3.6 states that the S7
> core does not have a data cache and LR/SC will generate a precise access exception, so the only way
> for the S7 and U7 cores to synchronize is via AMO instructions.
Thanks for the feedback.

I'm a bit confused: doesn't your example prove that lr/sc doesn't work on a specific platform?
Why do you advise keeping it?

Regards,
Xiang W
> 
> My advice is that if, for any reason, we decide to keep lr/sc, perhaps we can make it an
> optional feature that is disabled by default but can be enabled at build time -- to make everyone happy.
> 
> Bo
>
Bo Gan Nov. 8, 2023, 10:26 a.m. UTC | #3
On 11/8/23 1:26 AM, Xiang W wrote:
> Thanks for the feedback.
> 
> I'm a bit confused: doesn't your example prove that lr/sc doesn't work on a specific platform?
> Why do you advise keeping it?
> 
> Regards,
> Xiang W

I have no preference on whether it should be removed. If there's no objection, go ahead.
If other people find it useful and are against the removal, then I'd prefer lr/sc to be
made optional, with the AMO-based implementation used by default.
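A rough sketch of such a switch (the option name is hypothetical), keeping
the existing axchg as the default:

/* Opt in to the lr/sc implementation at build time; everyone
 * else gets the AMO-based axchg. */
#ifdef CONFIG_SBI_XCHG_LRSC
#define xchg(ptr, n)	__xchg((ptr), (n), sizeof(*(ptr)))
#else
#define xchg(ptr, n)	axchg((ptr), (n))
#endif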
Anup Patel Dec. 8, 2023, 10:59 a.m. UTC | #4
On Wed, Nov 8, 2023 at 9:00 AM Xiang W <wxjstz@126.com> wrote:
>
> lr/sc is part of the A extension. If the A extension is not supported,
> lr/sc cannot be used. So remove xchg/cmpxchg.
>
> Signed-off-by: Xiang W <wxjstz@126.com>

Looks good to me.

Reviewed-by: Anup Patel <anup@brainfault.org>

Applied this patch to the riscv/opensbi repo.

Thanks,
Anup

> [...]

Patch

diff --git a/lib/sbi/riscv_atomic.c b/lib/sbi/riscv_atomic.c
index 528686f..a143218 100644
--- a/lib/sbi/riscv_atomic.c
+++ b/lib/sbi/riscv_atomic.c
@@ -12,6 +12,10 @@ 
 #include <sbi/riscv_atomic.h>
 #include <sbi/riscv_barrier.h>
 
+#ifndef __riscv_atomic
+#error "opensbi strongly relies on the A extension of RISC-V"
+#endif
+
 long atomic_read(atomic_t *atom)
 {
 	long ret = atom->counter;
@@ -79,131 +83,29 @@ long atomic_sub_return(atomic_t *atom, long value)
 		(__typeof__(*(ptr))) __axchg((ptr), _x_, sizeof(*(ptr)));	\
 	})
 
-
-#define __xchg(ptr, new, size)                                            \
-	({                                                                \
-		__typeof__(ptr) __ptr	 = (ptr);                         \
-		__typeof__(*(ptr)) __new = (new);                         \
-		__typeof__(*(ptr)) __ret;                                 \
-		register unsigned int __rc;                               \
-		switch (size) {                                           \
-		case 4:                                                   \
-			__asm__ __volatile__("0:	lr.w %0, %2\n"    \
-					     "	sc.w.rl %1, %z3, %2\n"    \
-					     "	bnez %1, 0b\n"            \
-					     "	fence rw, rw\n"           \
-					     : "=&r"(__ret), "=&r"(__rc), \
-					       "+A"(*__ptr)               \
-					     : "rJ"(__new)                \
-					     : "memory");                 \
-			break;                                            \
-		case 8:                                                   \
-			__asm__ __volatile__("0:	lr.d %0, %2\n"    \
-					     "	sc.d.rl %1, %z3, %2\n"    \
-					     "	bnez %1, 0b\n"            \
-					     "	fence rw, rw\n"           \
-					     : "=&r"(__ret), "=&r"(__rc), \
-					       "+A"(*__ptr)               \
-					     : "rJ"(__new)                \
-					     : "memory");                 \
-			break;                                            \
-		default:                                                  \
-			break;                                            \
-		}                                                         \
-		__ret;                                                    \
-	})
-
-#define xchg(ptr, n)                                                     \
-	({                                                               \
-		__typeof__(*(ptr)) _n_ = (n);                            \
-		(__typeof__(*(ptr))) __xchg((ptr), _n_, sizeof(*(ptr))); \
-	})
-
-#define __cmpxchg(ptr, old, new, size)                                    \
-	({                                                                \
-		__typeof__(ptr) __ptr	 = (ptr);                         \
-		__typeof__(*(ptr)) __old = (old);                         \
-		__typeof__(*(ptr)) __new = (new);                         \
-		__typeof__(*(ptr)) __ret;                                 \
-		register unsigned int __rc;                               \
-		switch (size) {                                           \
-		case 4:                                                   \
-			__asm__ __volatile__("0:	lr.w %0, %2\n"    \
-					     "	bne  %0, %z3, 1f\n"       \
-					     "	sc.w.rl %1, %z4, %2\n"    \
-					     "	bnez %1, 0b\n"            \
-					     "	fence rw, rw\n"           \
-					     "1:\n"                       \
-					     : "=&r"(__ret), "=&r"(__rc), \
-					       "+A"(*__ptr)               \
-					     : "rJ"(__old), "rJ"(__new)   \
-					     : "memory");                 \
-			break;                                            \
-		case 8:                                                   \
-			__asm__ __volatile__("0:	lr.d %0, %2\n"    \
-					     "	bne %0, %z3, 1f\n"        \
-					     "	sc.d.rl %1, %z4, %2\n"    \
-					     "	bnez %1, 0b\n"            \
-					     "	fence rw, rw\n"           \
-					     "1:\n"                       \
-					     : "=&r"(__ret), "=&r"(__rc), \
-					       "+A"(*__ptr)               \
-					     : "rJ"(__old), "rJ"(__new)   \
-					     : "memory");                 \
-			break;                                            \
-		default:                                                  \
-			break;                                            \
-		}                                                         \
-		__ret;                                                    \
-	})
-
-#define cmpxchg(ptr, o, n)                                          \
-	({                                                          \
-		__typeof__(*(ptr)) _o_ = (o);                       \
-		__typeof__(*(ptr)) _n_ = (n);                       \
-		(__typeof__(*(ptr)))                                \
-			__cmpxchg((ptr), _o_, _n_, sizeof(*(ptr))); \
-	})
-
 long atomic_cmpxchg(atomic_t *atom, long oldval, long newval)
 {
-#ifdef __riscv_atomic
 	return __sync_val_compare_and_swap(&atom->counter, oldval, newval);
-#else
-	return cmpxchg(&atom->counter, oldval, newval);
-#endif
 }
 
 long atomic_xchg(atomic_t *atom, long newval)
 {
 	/* Atomically set new value and return old value. */
-#ifdef __riscv_atomic
 	return axchg(&atom->counter, newval);
-#else
-	return xchg(&atom->counter, newval);
-#endif
 }
 
 unsigned int atomic_raw_xchg_uint(volatile unsigned int *ptr,
 				  unsigned int newval)
 {
 	/* Atomically set new value and return old value. */
-#ifdef __riscv_atomic
 	return axchg(ptr, newval);
-#else
-	return xchg(ptr, newval);
-#endif
 }
 
 unsigned long atomic_raw_xchg_ulong(volatile unsigned long *ptr,
 				    unsigned long newval)
 {
 	/* Atomically set new value and return old value. */
-#ifdef __riscv_atomic
 	return axchg(ptr, newval);
-#else
-	return xchg(ptr, newval);
-#endif
 }
 
 #if (__SIZEOF_POINTER__ == 8)
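
Callers of the surviving primitives are unaffected by this change. A usage
sketch (hypothetical caller code, not part of the patch) building a trivial
spinlock on atomic_cmpxchg() and atomic_xchg():

#include <sbi/riscv_atomic.h>

static atomic_t lock = { .counter = 0 };

static void lock_acquire(void)
{
	/* Swap 0 -> 1; spin while another hart holds the lock. */
	while (atomic_cmpxchg(&lock, 0, 1) != 0)
		;
}

static void lock_release(void)
{
	/* Release by swapping the lock word back to 0. */
	atomic_xchg(&lock, 0);
}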