
Improve performance of libc locks

Message ID AM5PR0801MB16680E6787BC0553E1847A0B83649@AM5PR0801MB1668.eurprd08.prod.outlook.com
State New

Commit Message

Wilco Dijkstra Aug. 11, 2022, 4:22 p.m. UTC
Improve performance of libc locks by adding a fast path for the
single-threaded case.

On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
performance is unchanged.

Passes regress on AArch64, OK for commit?

---

Comments

Carlos O'Donell Aug. 15, 2022, 2:07 p.m. UTC | #1
On 8/11/22 12:22, Wilco Dijkstra via Libc-alpha wrote:
> Improve performance of libc locks by adding a fast path for the
> single-threaded case.
> 
> On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
> performance is unchanged.
> 
> Passes regress on AArch64, OK for commit?

This impacts all architectures.

Are you able to run the microbenchmarks to show that this improves one of them?

If you can, then we can at least ask the other machine maintainers to verify that
the patch shows gains there too. Conceptually I don't see why it wouldn't improve
the performance of all architectures, but having a baseline at the point of the
patch is good for recording the performance for future discussions.

If we don't have a benchmark that shows this specific case of ST vs MT and
internal __libc_lock_lock locks then we should add one. Improving the internal
locking for our algorithms is always going to be a point of interest for IHVs.

Thanks.
 
> ---
> 
> diff --git a/sysdeps/nptl/libc-lockP.h b/sysdeps/nptl/libc-lockP.h
> index d3a6837fd212f3f5dfd80f46d0e9ce365042ae0c..ccdb11fee6f14069d0b936be93d0f0fa6d8bc30b 100644
> --- a/sysdeps/nptl/libc-lockP.h
> +++ b/sysdeps/nptl/libc-lockP.h
> @@ -108,7 +108,14 @@ _Static_assert (LLL_LOCK_INITIALIZER == 0, "LLL_LOCK_INITIALIZER != 0");
>  #define __libc_rwlock_fini(NAME) ((void) 0)
>  
>  /* Lock the named lock variable.  */
> -#define __libc_lock_lock(NAME) ({ lll_lock (NAME, LLL_PRIVATE); 0; })
> +#define __libc_lock_lock(NAME)						\
> + ({									\
> +    if (SINGLE_THREAD_P)						\
> +      (NAME) = LLL_LOCK_INITIALIZER_LOCKED;				\
> +    else								\
> +      lll_lock (NAME, LLL_PRIVATE);					\
> +    0;									\
> +  })
>  #define __libc_rwlock_rdlock(NAME) __pthread_rwlock_rdlock (&(NAME))
>  #define __libc_rwlock_wrlock(NAME) __pthread_rwlock_wrlock (&(NAME))
>  
> @@ -116,7 +123,14 @@ _Static_assert (LLL_LOCK_INITIALIZER == 0, "LLL_LOCK_INITIALIZER != 0");
>  #define __libc_lock_trylock(NAME) lll_trylock (NAME)
>  
>  /* Unlock the named lock variable.  */
> -#define __libc_lock_unlock(NAME) lll_unlock (NAME, LLL_PRIVATE)
> +#define __libc_lock_unlock(NAME)					\
> + ({									\
> +    if (SINGLE_THREAD_P)						\
> +      (NAME) = LLL_LOCK_INITIALIZER;					\
> +    else								\
> +      lll_unlock (NAME, LLL_PRIVATE);					\
> +    0;									\
> + })
>  #define __libc_rwlock_unlock(NAME) __pthread_rwlock_unlock (&(NAME))
>  
>  #if IS_IN (rtld)
> 
> 
>
Wilco Dijkstra Aug. 15, 2022, 5:35 p.m. UTC | #2
Hi Carlos,

> This impacts all architectures.

That was the goal indeed - we should add single-threaded optimizations in a
generic way.

> If we don't have a benchmark that shows this specific case of ST vs MT and
> internal __libc_lock_lock locks then we should add one. Improving the internal
> locking for our algorithms is always going to be a point of interest for IHVs.

I can easily wrap my rand() microbench in json and add it to the benchtests.
I think it would be harder to do more tests on internal locks/headers since they
are not easily usable from benchtest infrastructure (just including libc-lock.h
results in lots of errors...).

Cheers,
Wilco
Noah Goldstein Aug. 16, 2022, 7:26 a.m. UTC | #3
On Tue, Aug 16, 2022 at 1:35 AM Wilco Dijkstra via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
>  Hi Carlos,
>
> > This impacts all architectures.
>
> That was the goal indeed - we should add single-threaded optimizations in a
> generic way.
>
> > If we don't have a benchmark that shows this specific case of ST vs MT and
> > internal __libc_lock_lock locks then we should add one. Improving the internal
> > locking for our algorithms is always going to be a point of interest for IHVs.
>
> I can easily wrap my rand() microbench in json and add it to the benchtests.

I think that would be good, so we can easily measure on other architectures.

> I think it would be harder to do more tests on internal locks/headers since they
> are not easily usable from benchtest infrastructure (just including libc-lock.h
> results in lots of errors...).
>
> Cheers,
> Wilco
Cristian Rodríguez Nov. 15, 2022, 8:17 p.m. UTC | #4
On Thu, Aug 11, 2022 at 12:23 PM Wilco Dijkstra via Libc-alpha
<libc-alpha@sourceware.org> wrote:
>
> Improve performance of libc locks by adding a fast path for the
> single-threaded case.
>
> On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
> performance is unchanged.
>
> Passes regress on AArch64, OK for commit?
>
> ---
>

Ping? I saw the stdio one was committed, but what happened with this one?
Wilco Dijkstra Dec. 9, 2022, 2:10 p.m. UTC | #5
Hi Cristian,

>> Improve performance of libc locks by adding a fast path for the
>> single-threaded case.
>>
>> On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
>> performance is unchanged.
>>
>> Passes regress on AArch64, OK for commit?
>
> Ping? I saw the stdio one was committed, but what happened with this one?

It is waiting on the locking benchmark being accepted. I've pinged that
(https://sourceware.org/pipermail/libc-alpha/2022-December/143944.html)
since it would be great to get all this in the next GLIBC.

Cheers,
Wilco
Cristian Rodríguez Nov. 23, 2023, 5:16 p.m. UTC | #6
On Fri, Dec 9, 2022 at 11:10 AM Wilco Dijkstra <Wilco.Dijkstra@arm.com>
wrote:

> Hi Cristian,
>
> >> Improve performance of libc locks by adding a fast path for the
> >> single-threaded case.
> >>
> >> On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
> >> performance is unchanged.
> >>
> >> Passes regress on AArch64, OK for commit?
> >
> > Ping? I saw the stdio one was committed, but what happened with this one?
>
> It is waiting on the locking benchmark being accepted. I've pinged that
> (https://sourceware.org/pipermail/libc-alpha/2022-December/143944.html)
> since it would be great to get all this in the next GLIBC.
>
> Cheers,
> Wilco


Ping? Did this move forward?
Florian Weimer Nov. 23, 2023, 6:17 p.m. UTC | #7
* Wilco Dijkstra via Libc-alpha:

> Improve performance of libc locks by adding a fast path for the
> single-threaded case.
>
> On Neoverse V1, a loop using rand() improved 3.6 times. Multithreaded
> performance is unchanged.
>
> Passes regress on AArch64, OK for commit?
>
> ---
>
> diff --git a/sysdeps/nptl/libc-lockP.h b/sysdeps/nptl/libc-lockP.h
> index d3a6837fd212f3f5dfd80f46d0e9ce365042ae0c..ccdb11fee6f14069d0b936be93d0f0fa6d8bc30b 100644
> --- a/sysdeps/nptl/libc-lockP.h
> +++ b/sysdeps/nptl/libc-lockP.h
> @@ -108,7 +108,14 @@ _Static_assert (LLL_LOCK_INITIALIZER == 0, "LLL_LOCK_INITIALIZER != 0");
>  #define __libc_rwlock_fini(NAME) ((void) 0)
>  
>  /* Lock the named lock variable.  */
> -#define __libc_lock_lock(NAME) ({ lll_lock (NAME, LLL_PRIVATE); 0; })
> +#define __libc_lock_lock(NAME)						\
> + ({									\
> +    if (SINGLE_THREAD_P)						\
> +      (NAME) = LLL_LOCK_INITIALIZER_LOCKED;				\
> +    else								\
> +      lll_lock (NAME, LLL_PRIVATE);					\
> +    0;									\
> +  })

We already have SINGLE_THREAD_P checks around locking in several places.
This makes the __libc_lock_lock check redundant in those cases.  I
believe this was done deliberately because in many cases, we can also
skip cancellation handling at the same time.

The rand performance issue could be addressed with a similar localized
change.  I believe that would be far less controversial.

Thanks,
Florian
Wilco Dijkstra Nov. 24, 2023, 1:47 p.m. UTC | #8
Hi Florian, 

> We already have SINGLE_THREAD_P checks around locking in several places.
> This makes the __libc_lock_lock check redundant in those cases.  I
> believe this was done deliberately because in many cases, we can also
> skip cancellation handling at the same time.

Yes, this is best for the most performance-critical cases. However, there are lots of
functions that use locks, and many will improve with a single-thread check at a
higher level. You're right that this would add extra checks in cases that are not
performance critical.

Maybe a solution would be to introduce __libc_lock_fast() or similar? That way one
can improve performance critical code easily without having to add special fast
paths. Today it would use SINGLE_THREAD_P, but it could potentially use RSEQ in
the future.
 
> The rand performance issue could be addressed with a similar localized
> change.  I believe that would be far less controversial.

I can send a patch that adds fast paths to rand() if that helps unblock this.

Cheers,
Wilco
Carlos O'Donell Nov. 24, 2023, 4:29 p.m. UTC | #9
On 11/24/23 08:47, Wilco Dijkstra wrote:
> Hi Florian, 
> 
>> We already have SINGLE_THREAD_P checks around locking in several places.
>> This makes the __libc_lock_lock check redundant in those cases.  I
>> believe this was done deliberately because in many cases, we can also
>> skip cancellation handling at the same time.
> 
> Yes, this is best for the most performance-critical cases. However, there are lots of
> functions that use locks, and many will improve with a single-thread check at a
> higher level. You're right that this would add extra checks in cases that are not
> performance critical.

Right.

> Maybe a solution would be to introduce __libc_lock_fast() or similar? That way one
> can improve performance critical code easily without having to add special fast
> paths. Today it would use SINGLE_THREAD_P, but it could potentially use RSEQ in
> the future.

I would prefer a solution like this because you can actively audit and migrate
the callers rather than adding a hidden (to the developer) change in the macro.

>> The rand performance issue could be addressed with a similar localized
>> change.  I believe that would be far less controversial.
> 
> I can send a patch that adds fast paths to rand() if that helps unblock this.

I think it would. Add the fast path to rand(), and a microbenchmark to show that this
is good for performance on your machine, so that we don't regress this change
in the future when we work on rand(). I'm amenable to not having a microbenchmark, but
every time we talk about performance, adding a little bit to the corpus helps ensure
we don't lose track of the gains.