Message ID: 20210804191554.1252776-1-vgupta@synopsys.com
Series: ARC atomics update
On Wed, Aug 04, 2021 at 12:15:43PM -0700, Vineet Gupta wrote:

> Vineet Gupta (10):
>   ARC: atomics: disintegrate header
>   ARC: atomic: !LLSC: remove hack in atomic_set() for for UP
>   ARC: atomic: !LLSC: use int data type consistently
>   ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return
>   ARC: atomics: implement relaxed variants
>   ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines
>   ARC: xchg: !LLSC: remove UP micro-optimization/hack
>   ARC: cmpxchg/xchg: rewrite as macros to make type safe
>   ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)
>   ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
>
> Will Deacon (1):
>   ARC: switch to generic bitops

Didn't see any weird things:

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
On 8/5/21 2:02 AM, Peter Zijlstra wrote:
> On Wed, Aug 04, 2021 at 12:15:43PM -0700, Vineet Gupta wrote:
>
>> Vineet Gupta (10):
>>   ARC: atomics: disintegrate header
>>   ARC: atomic: !LLSC: remove hack in atomic_set() for for UP
>>   ARC: atomic: !LLSC: use int data type consistently
>>   ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return
>>   ARC: atomics: implement relaxed variants
>>   ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines
>>   ARC: xchg: !LLSC: remove UP micro-optimization/hack
>>   ARC: cmpxchg/xchg: rewrite as macros to make type safe
>>   ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)
>>   ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
>>
>> Will Deacon (1):
>>   ARC: switch to generic bitops
>
> Didn't see any weird things:
>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>

Thx Peter. A lot of this is your code anyways ;-)

Any initial thoughts/comments on patch 06/11 - is there an obvious reason that the generic bitops take a signed @nr, or is the hurdle that any change needs to be done consistently across architectures?

-Vineet
On Thu, Aug 05, 2021 at 04:18:29PM +0000, Vineet Gupta wrote:
> On 8/5/21 2:02 AM, Peter Zijlstra wrote:
> > On Wed, Aug 04, 2021 at 12:15:43PM -0700, Vineet Gupta wrote:
> >
> >> Vineet Gupta (10):
> >>   ARC: atomics: disintegrate header
> >>   ARC: atomic: !LLSC: remove hack in atomic_set() for for UP
> >>   ARC: atomic: !LLSC: use int data type consistently
> >>   ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return
> >>   ARC: atomics: implement relaxed variants
> >>   ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines
> >>   ARC: xchg: !LLSC: remove UP micro-optimization/hack
> >>   ARC: cmpxchg/xchg: rewrite as macros to make type safe
> >>   ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)
> >>   ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
> >>
> >> Will Deacon (1):
> >>   ARC: switch to generic bitops
> >
> > Didn't see any weird things:
> >
> > Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
>
> Thx Peter. A lot of this is your code anyways ;-)
>
> Any initial thoughts/comments on patch 06/11 - is there an obvious
> reason that generic bitops take signed @nr or the hurdle is need to be
> done consistently cross-arch.

That does indeed seem daft and ready for a cleanup. Will, any recollection from when you touched this?

AFAICT bitops/atomic.h is consistently 'unsigned int nr', but bitops/non-atomic.h is 'int nr' while bitops/instrumented-non-atomic.h is consistently 'long nr'.

I'm thinking 'unsigned int nr' is the most sensible all round, but I've not gone through all the cases.
On Thu, Aug 05, 2021 at 07:04:32PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 05, 2021 at 04:18:29PM +0000, Vineet Gupta wrote:
> > On 8/5/21 2:02 AM, Peter Zijlstra wrote:
> > > On Wed, Aug 04, 2021 at 12:15:43PM -0700, Vineet Gupta wrote:
> > >
> > >> Vineet Gupta (10):
> > >>   ARC: atomics: disintegrate header
> > >>   ARC: atomic: !LLSC: remove hack in atomic_set() for for UP
> > >>   ARC: atomic: !LLSC: use int data type consistently
> > >>   ARC: atomic64: LLSC: elide unused atomic_{and,or,xor,andnot}_return
> > >>   ARC: atomics: implement relaxed variants
> > >>   ARC: bitops: fls/ffs to take int (vs long) per asm-generic defines
> > >>   ARC: xchg: !LLSC: remove UP micro-optimization/hack
> > >>   ARC: cmpxchg/xchg: rewrite as macros to make type safe
> > >>   ARC: cmpxchg/xchg: implement relaxed variants (LLSC config only)
> > >>   ARC: atomic_cmpxchg/atomic_xchg: implement relaxed variants
> > >>
> > >> Will Deacon (1):
> > >>   ARC: switch to generic bitops
> > >
> > > Didn't see any weird things:
> > >
> > > Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> >
> > Thx Peter. A lot of this is your code anyways ;-)
> >
> > Any initial thoughts/comments on patch 06/11 - is there an obvious
> > reason that generic bitops take signed @nr or the hurdle is need to be
> > done consistently cross-arch.
>
> That does indeed seem daft and ready for a cleanup. Will, any
> recollection from when you touched this?

I had a patch to fix this but it blew up in the robot and I didn't get round to reworking it:

https://lore.kernel.org/patchwork/patch/1245555/

Will