[v4,00/15] Add futex2 syscalls

Message ID: 20210603195924.361327-1-andrealmeid@collabora.com

André Almeida June 3, 2021, 7:59 p.m. UTC
Hi,

This patch series introduces the futex2 syscalls.

* What happened to the current futex()?

For some years now, developers have been trying to add new features to
futex, but maintainers have been reluctant to accept them, since the
multiplexed interface is full of legacy features and makes big changes
tricky. Some of the problems people have tried to address with
patchsets are: NUMA-awareness[0], smaller sized futexes[1] and waiting
on multiple futexes[2]. NUMA, for instance, just doesn't fit the
current API in a reasonable way. Given that, it's not possible to merge
new features into the current futex.

 ** The NUMA problem

 In the current implementation, all futex kernel-side infrastructure
 is stored on a single NUMA node. Because of that, futex() calls issued
 by processors located on other nodes pay a memory access penalty.

 ** The 32bit sized futex problem

 Futexes are used to implement atomic operations in userspace.
 Supporting 8, 16, 32 and 64 bit sized futexes allows user libraries to
 implement all those sizes in a performant way. Thanks to the Boost
 developers for the feedback:
 https://lists.boost.org/Archives/boost/2021/05/251508.php

 Embedded systems, or anything with memory constraints, could benefit
 from using smaller sizes for the futex userspace integer.
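
 To make the size point concrete, here is a minimal sketch of a mutex
 built around an 8-bit futex word; the futex2_wait()/futex2_wake()
 wrappers and the FUTEX_8 size flag below are assumed names modeled on
 this series, not the authoritative interface:

 #include <stdatomic.h>
 #include <stdint.h>
 #include <time.h>

 /* Assumed futex2 wrappers; the real prototypes live in the patches. */
 int futex2_wait(void *uaddr, unsigned int val, unsigned int flags,
                 struct timespec *timo);
 int futex2_wake(void *uaddr, unsigned int nr_wake, unsigned int flags);
 #define FUTEX_8 0x0 /* assumed size flag */

 struct mutex8 { _Atomic uint8_t state; }; /* 0 = free, 1 = locked */

 static void mutex8_lock(struct mutex8 *m)
 {
         uint8_t expected = 0;

         /* Fast path: a single userspace CAS takes the lock. */
         while (!atomic_compare_exchange_weak(&m->state, &expected, 1)) {
                 /* Slow path: sleep while the word still reads 1. */
                 futex2_wait(&m->state, 1, FUTEX_8, NULL);
                 expected = 0;
         }
 }

 static void mutex8_unlock(struct mutex8 *m)
 {
         atomic_store(&m->state, 0);
         /* Simplified: always wakes, even when nobody is queued. */
         futex2_wake(&m->state, 1, FUTEX_8);
 }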

 ** The wait on multiple problem

 The use case lies in the Wine implementation of the Windows NT
 interface WaitForMultipleObjects. This Windows API function allows a
 thread to sleep waiting for the first of a set of event sources
 (mutexes, timers, signals, console input, etc) to signal. Since this
 is a primitive synchronization operation for Windows applications,
 being able to quickly signal events on the producer side, and to
 quickly go to sleep on the consumer side, is essential for good
 performance of applications running over Wine.
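
 As a sketch of how such a wait-any could look with the futex_waitv()
 syscall introduced here (the struct layout and return convention are
 assumptions; patch 3 has the authoritative definitions):

 #include <stdint.h>
 #include <time.h>

 /* Assumed layout, modeled on this series. */
 struct futex_waitv {
         void *uaddr;        /* userspace futex word */
         unsigned int val;   /* expected value */
         unsigned int flags; /* per-futex size/shared flags */
 };

 int futex_waitv(struct futex_waitv *waiters, unsigned int nr_waiters,
                 unsigned int flags, struct timespec *timo);

 static uint32_t events[3]; /* one word per event source */

 static int wait_for_any(void)
 {
         struct futex_waitv w[3];

         for (int i = 0; i < 3; i++) {
                 w[i].uaddr = &events[i];
                 w[i].val = 0;   /* sleep while the word still reads 0 */
                 w[i].flags = 0; /* assumed: 32-bit private futex */
         }
         /* Assumed to return the index of the woken futex on success. */
         return futex_waitv(w, 3, 0, NULL);
 }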

[0] https://lore.kernel.org/lkml/20160505204230.932454245@linutronix.de/
[1] https://lore.kernel.org/lkml/20191221155659.3159-2-malteskarupke@web.de/
[2] https://lore.kernel.org/lkml/20200213214525.183689-1-andrealmeid@collabora.com/

* The solution

As proposed by Peter Zijlstra and Florian Weimer[3], a new interface
is required to solve this, designed with those features in mind.
futex2() is that interface. As opposed to the current multiplexed
interface, the new one should have one syscall per operation. This
keeps the API maintainable as it gets extended, and helps users with
type checking of arguments.

In particular, the new interface is extended to support the ability to
wait on any of a list of futexes at a time, which could be seen as a
vectored extension of the FUTEX_WAIT semantics.

[3] https://lore.kernel.org/lkml/20200303120050.GC2596@hirez.programming.kicks-ass.net/

* The interface

The new interface can be seen in detail in the following patches, but
this is a high-level summary of what the interface can do:

 - Supports wake/wait semantics, as in futex()
 - Supports requeue operations, similar to FUTEX_CMP_REQUEUE, but with
   individual flags for each address
 - Supports waiting on a vector of futexes, using a new syscall named
   futex_waitv()
 - Supports variable-sized futexes (8, 16, 32 and 64 bits)
 - Supports NUMA-aware operations, where the user can specify the
   memory node they would like to operate on
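
A rough sketch of the resulting per-operation entry points (the
prototypes below are assumptions distilled from this summary; the
authoritative signatures are in the patches themselves):

/* Assumed prototypes: one syscall per operation. */
int futex_wait(void *uaddr, unsigned int val, unsigned int flags,
               struct timespec *timo);
int futex_wake(void *uaddr, unsigned int nr_wake, unsigned int flags);
int futex_waitv(struct futex_waitv *waiters, unsigned int nr_waiters,
                unsigned int flags, struct timespec *timo);

/* Each requeue target carries its own flags (size, shared/private). */
struct futex_requeue {
        void *uaddr;
        unsigned int flags;
};
int futex_requeue(struct futex_requeue *rq1, struct futex_requeue *rq2,
                  unsigned int nr_wake, unsigned int nr_requeue,
                  unsigned int flags);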

* Implementation

The internal implementation follows a design similar to the original
futex. Given that we want to replicate the external behavior of the
current futex, this should be expected. For some functions, like the
init and the code to get a shared key, I literally copied code and
comments from kernel/futex.c. I decided to do so instead of exposing
the original functions publicly, so that we can freely modify our
implementation if required, without any impact on the old futex. Also,
the comments precisely describe the details and corner cases of the
implementation.

Each patch contains a brief description of implementation, but patch 7
"docs: locking: futex2: Add documentation" adds a more complete document
about it.

* The patchset

This patchset can be also found at my git tree:

https://gitlab.collabora.com/tonyk/linux/-/tree/futex2-dev

  - Patch 1: Implements wait/wake, and the basic foundations of futex2

  - Patches 2-5: Implement the remaining features (shared, waitv,
    requeue, sizes).

  - Patch 6: Adds the x86_x32 ABI handling. I kept it in a separate
    patch since I'm not sure if x86_x32 is still a thing, or if it
    should return -ENOSYS.

  - Patch 7: Adds a documentation file which details the interface and
    the internal implementation.

  - Patches 8-14: Selftests for all operations along with perf
    support for futex2.

  - Patch 15: While working on porting glibc to futex2, I found out
    that there's a futex_wake() call at the user thread exit path if
    that thread was created with clone(..., CLONE_CHILD_CLEARTID, ...).
    To make pthreads work with futex2, it was necessary to add this
    patch. Note that this is more a proof-of-concept of what we will
    need to do in the future, rather than part of the interface, and
    shouldn't be merged as it is.
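
    For reference, a simplified sketch of the mechanism this patch
    hooks into: on thread exit the kernel zeroes the TID word
    registered via CLONE_CHILD_CLEARTID and issues a futex wake on it,
    which is what a joiner sleeps on (plain futex(2) shown here):

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <sys/types.h>
    #include <unistd.h>

    static _Atomic pid_t child_tid; /* clone() stores the child TID here */

    static void join_child(void)
    {
            pid_t tid;

            /* The kernel writes 0 to child_tid on exit and wakes it;
             * keep sleeping while the word still holds the child TID. */
            while ((tid = atomic_load(&child_tid)) != 0)
                    syscall(SYS_futex, &child_tid, FUTEX_WAIT, tid,
                            NULL, NULL, 0);
    }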

* Testing:

This patchset provides selftests for each operation and their flags.
Along with that, the following work was done:

 ** Stability

 To stress the interface in "real world scenarios":

 - glibc[4]: nptl's low-level locking was modified to use the futex2
   API (except for robust and PI bits). All relevant nptl/ tests passed.

 - Wine[5]: Proton/Wine was modified to use futex2() for the emulation
   of Windows NT sync mechanisms based on futex, called "fsync".
   Triple-A games with heavy CPU loads and tons of parallel jobs worked
   as expected when compared with the previous FUTEX_WAIT_MULTIPLE
   implementation in futex(). Some games issue 42k futex2() calls
   per second.

 - Full GNU/Linux distro: I installed the modified glibc on my host
   machine, so all pthread programs would use futex2(). After tweaking
   systemd[6] to allow futex2() calls in seccomp, everything worked as
   expected (web browsers do some syscall sandboxing and need some
   configuration as well).

 - perf: The perf benchmark tests can also be used to stress the
   interface; they can be found in this patchset.

 ** Performance

 - To compare futex() and futex2() performance, I used the artificial
   benchmarks implemented in perf (wake, wake-parallel, hash and
   requeue). The setup was 200 runs for each test, using 8, 80, 800 and
   8000 threads. Note that for this test I'm not using patch 15
   ("kernel: Enable waitpid() for futex2"), for the reasons explained
   in "The patchset" section.

 - For the first three, I measured an average 4% gain in performance.
   This is not a big step, but it shows that the new interface is at
   least comparable in performance with the current one.

 - For requeue, I measured an average 21% decrease in performance
   compared to the original futex implementation. This is expected,
   given the new design with individual flags. The performance
   trade-offs are explained in patch 4 ("futex2: Implement requeue
   operation").

[4] https://gitlab.collabora.com/tonyk/glibc/-/tree/futex2
[5] https://gitlab.collabora.com/tonyk/wine/-/tree/proton_5.13
[6] https://gitlab.collabora.com/tonyk/systemd

* FAQ

 ** "Where's the code for NUMA?"

 NUMA support will be implemented in future versions of this patchset,
 and like the size feature, it will require working with futex users to
 get feedback about it.

 ** "Where's the PI/robust stuff?"

 As said by Peter Zijlstra in [3], all those new features are related
 to the "simple" futex interface, which doesn't use PI or robust
 futexes. Do we want to have this complexity in futex2(), and if so,
 should it be part of this patchset or can it be future work?

Thanks,
	André

* Changelog

Changes from v3:
- Implemented variable sized futexes
v3: https://lore.kernel.org/lkml/20210427231248.220501-1-andrealmeid@collabora.com/

Changes from v2:
- API now supports 64bit futexes, in addition to 8, 16 and 32.
- This API change will break the glibc[4] and Proton[5] ports for now.
- Refactored futex2_wait and futex2_waitv selftests
v2: https://lore.kernel.org/lkml/20210304004219.134051-1-andrealmeid@collabora.com/

Changes from v1:
- Unified futex_set_timer_and_wait and __futex_wait code
- Dropped _carefull from linked list function calls
- Fixed typos on docs patch
- uAPI flags are now added as features are introduced, instead of all flags
  in patch 1
- Removed struct futex_single_waiter in favor of an anon struct
v1: https://lore.kernel.org/lkml/20210215152404.250281-1-andrealmeid@collabora.com/

André Almeida (15):
  futex2: Implement wait and wake functions
  futex2: Add support for shared futexes
  futex2: Implement vectorized wait
  futex2: Implement requeue operation
  futex2: Implement support for different futex sizes
  futex2: Add compatibility entry point for x86_x32 ABI
  docs: locking: futex2: Add documentation
  selftests: futex2: Add wake/wait test
  selftests: futex2: Add timeout test
  selftests: futex2: Add wouldblock test
  selftests: futex2: Add waitv test
  selftests: futex2: Add requeue test
  selftests: futex2: Add futex sizes test
  perf bench: Add futex2 benchmark tests
  kernel: Enable waitpid() for futex2

 Documentation/locking/futex2.rst              |  198 +++
 Documentation/locking/index.rst               |    1 +
 MAINTAINERS                                   |    2 +-
 arch/arm/tools/syscall.tbl                    |    4 +
 arch/arm64/include/asm/unistd.h               |    2 +-
 arch/arm64/include/asm/unistd32.h             |    8 +
 arch/x86/entry/syscalls/syscall_32.tbl        |    4 +
 arch/x86/entry/syscalls/syscall_64.tbl        |    4 +
 fs/inode.c                                    |    1 +
 include/linux/compat.h                        |   26 +
 include/linux/fs.h                            |    1 +
 include/linux/syscalls.h                      |   17 +
 include/uapi/asm-generic/unistd.h             |   14 +-
 include/uapi/linux/futex.h                    |   34 +
 init/Kconfig                                  |    7 +
 kernel/Makefile                               |    1 +
 kernel/fork.c                                 |    2 +
 kernel/futex2.c                               | 1289 +++++++++++++++++
 kernel/sys_ni.c                               |    9 +
 tools/arch/x86/include/asm/unistd_64.h        |   12 +
 tools/include/uapi/asm-generic/unistd.h       |   11 +-
 .../arch/x86/entry/syscalls/syscall_64.tbl    |    4 +
 tools/perf/bench/bench.h                      |    4 +
 tools/perf/bench/futex-hash.c                 |   24 +-
 tools/perf/bench/futex-requeue.c              |   57 +-
 tools/perf/bench/futex-wake-parallel.c        |   41 +-
 tools/perf/bench/futex-wake.c                 |   37 +-
 tools/perf/bench/futex.h                      |   47 +
 tools/perf/builtin-bench.c                    |   18 +-
 .../selftests/futex/functional/.gitignore     |    4 +
 .../selftests/futex/functional/Makefile       |    7 +-
 .../futex/functional/futex2_requeue.c         |  164 +++
 .../selftests/futex/functional/futex2_sizes.c |  146 ++
 .../selftests/futex/functional/futex2_wait.c  |  195 +++
 .../selftests/futex/functional/futex2_waitv.c |  154 ++
 .../futex/functional/futex_wait_timeout.c     |   58 +-
 .../futex/functional/futex_wait_wouldblock.c  |   33 +-
 .../testing/selftests/futex/functional/run.sh |    6 +
 .../selftests/futex/include/futex2test.h      |  113 ++
 39 files changed, 2707 insertions(+), 52 deletions(-)
 create mode 100644 Documentation/locking/futex2.rst
 create mode 100644 kernel/futex2.c
 create mode 100644 tools/testing/selftests/futex/functional/futex2_requeue.c
 create mode 100644 tools/testing/selftests/futex/functional/futex2_sizes.c
 create mode 100644 tools/testing/selftests/futex/functional/futex2_wait.c
 create mode 100644 tools/testing/selftests/futex/functional/futex2_waitv.c
 create mode 100644 tools/testing/selftests/futex/include/futex2test.h

Comments

Zebediah Figura June 4, 2021, 4:51 a.m. UTC | #1
On 6/3/21 2:59 PM, André Almeida wrote:
>   ** The wait on multiple problem
> 
>   The use case lies in the Wine implementation of the Windows NT interface
> WaitForMultipleObjects. This Windows API function allows a thread to sleep
>   waiting on the first of a set of event sources (mutexes, timers, signal,
>   console input, etc) to signal.  Considering this is a primitive
>   synchronization operation for Windows applications, being able to quickly
>   signal events on the producer side, and quickly go to sleep on the
>   consumer side is essential for good performance of those running over Wine.
> 

I know this is part of the cover letter, but I really do want to clarify 
that this isn't really accurate. The use case that this is referring to 
is not "the Wine implementation of WaitForMultipleObjects", it is an 
out-of-tree implementation of WaitForMultipleObjects that provides 
improved performance compared to the in-tree implementation.

This is especially salient because:

(1) this out-of-tree implementation is only in a small handful of cases 
any more performant than a different out-of-tree implementation which 
uses eventfd and poll() instead;

(2) these implementations will remain out-of-tree due to compatibility 
and robustness problems;

(3) I believe there is potential for an upstreamable implementation 
which does not rely on futex or futex2.
Nicholas Piggin June 4, 2021, 11:36 a.m. UTC | #2
Excerpts from André Almeida's message of June 4, 2021 5:59 am:
> [...]
> * The interface
> 
> The new interface can be seen in details in the following patches, but
> this is a high level summary of what the interface can do:
> 
>  - Supports wake/wait semantics, as in futex()
>  - Supports requeue operations, similarly as FUTEX_CMP_REQUEUE, but with
>    individual flags for each address
>  - Supports waiting for a vector of futexes, using a new syscall named
>    futex_waitv()
>  - Supports variable sized futexes (8bits, 16bits, 32bits and 64bits)
>  - Supports NUMA-awareness operations, where the user can specify on
>    which memory node would like to operate

A few comments.

- Do you need a syscall for wait and for waitv, or can wait just be a
degenerate case of waitv (which could use the stack variable)?  I guess
it does save the user access unlock.

- Did you consider a wakev interface? An example is a read-write mutex 
which has read-blocking futexes split (e.g., per-node) for scalability 
then the writer may unlock and wake all readers with one call. We 
actually have some scalability challenges of this nature with certain 
large database programs.

- Great to see 64-bit support in, it is helpful to do some interesting 
things with locks without hacks (e.g., putting an address in the lock 
word).

- Are we really keen on squashing node ID into flags in this day and age?
I guess okay but seems like it would be nice to allow a bit more space
in general for the operations. I don't want to turn it into a whole big
multiplexing nightmare again with lots of such flags, or propose
complexity with no code behind it, but I think we need a bit of leeway
for unforeseen locking innovations to be added carefully. The pthread
locking today is still fairly primitive really, I don't think we know
what will work best for the next 10 years.

One scalability issue we are starting to hit and will only get worse is 
futex queue spinlock contention. Perhaps this is better addressed in 
userspace but the kernel could play a part so I like to leave some doors
open. One example is that the wait (or wake) side may like to depend not
just on the memory value, but on the success of a cmpxchg to avoid 
unqueueing and queueing spuriously, which increases lock contention but
also ends up putting the poor task on the back of the list -- yes RT
priorities can help the realtime case, but non-RT cases can get bad
outlier latencies if lock stealing is allowed (which can be very good
for performance).

- The private global futex hash table sucks for various reasons, and
having 128 waiters per thread makes it orders of magnitude easier for
userspace to DoS stuff with hash collisions. NUMA doesn't fix that, the
per process hashing that Thomas suggested does fix the DoS but the
non-deterministic hash collisions still seem to be a problem for real
time response, and at the other end of the scale some apps (certain 
databases, etc) can have ten thousand futex waiters at once, so the
birthday paradox can also lead to guaranteed (low-level) variable
behaviour within a single process.

I know the kernel in general is not very robust against this kind of 
DoS/nondeterminism, but it's a bit sad to introduce new APIs with the 
problem still there. Yes we could address it later, but I think it's 
better done first because the solution might influence what the best 
syscall API is.

For example the idea of using the address as the handle for the wait 
queue _and_ the value is very convenient but hashing is annoying for
all the above reasons and the shared wait queue case is pretty clunky. 
It's also constraining in some corner cases to have the wait queue 
associated with the address 1:1. For example a type of NUMA mutex might 
want to have per-node waitqueues associated with a lock word, and wake
local node waiters 99% of the time. Not trivial to do with futexes and
seems to at least require bouncing of more cache lines, possibly more
atomics, etc.

Could anything else be done?

I'll be burned at the stake for suggesting it but it would be great if 
we could use file descriptors. At least for the shared futex, maybe 
private could use a per-process futex allocator. It solves all of the
above, although I'm sure it has many of its own problems. It may not play
so nicely with the pthread mutex API because of the whole static
initialiser problem, but the first futex proposal did use fds. But it's
an example of an alternate API.

And.. thanks for the great work, apologies if I missed some discussion
related to the above comments, I did some reading but I know there has
been a lot of discussions going on.

Thanks,
Nick
André Almeida June 4, 2021, 5:04 p.m. UTC | #3
At 01:51 on 04/06/21, Zebediah Figura wrote:
> On 6/3/21 2:59 PM, André Almeida wrote:
>>   ** The wait on multiple problem
>>
>>   The use case lies in the Wine implementation of the Windows NT
>> interface
>>   WaitForMultipleObjects. This Windows API function allows a thread to sleep
>>   waiting on the first of a set of event sources (mutexes, timers,
>> signal,
>>   console input, etc) to signal.  Considering this is a primitive
>>   synchronization operation for Windows applications, being able to
>> quickly
>>   signal events on the producer side, and quickly go to sleep on the
>>   consumer side is essential for good performance of those running
>> over Wine.
>>
> 
> I know this is part of the cover letter, but I really do want to clarify
> that this isn't really accurate. The use case that this is referring to
> is not "the Wine implementation of WaitForMultipleObjects", it is an
> out-of-tree implementation of WaitForMultipleObjects that provides
> improved performance compared to the in-tree implementation.
> 
> This is especially salient because:
> 
> (1) this out-of-tree implementation is only in a small handful of cases
> any more performant than a different out-of-tree implementation which
> uses eventfd and poll() instead;
> 
> (2) these implementations will remain out-of-tree due to compatibility
> and robustness problems;
> 
> (3) I believe there is potential for an upstreamable implementation
> which does not rely on futex or futex2.

I'll make it clearer next time that this applies to Proton's Wine, and
not to upstream Wine.

Along with that, wait on multiple will be useful for other workloads,
such as the ones that use Boost's mass locking algorithms, and for
native game engines, for instance.
André Almeida June 4, 2021, 8:01 p.m. UTC | #4
At 08:36 on 04/06/21, Nicholas Piggin wrote:
> Excerpts from André Almeida's message of June 4, 2021 5:59 am:
>> [...]
> 
> A few comments.
> 
> - Do you need a syscall for wait and for waitv, or can wait just be a
> degenerate case of waitv (which could use the stack variable)?  I guess
> it does save the user access unlock.
> 

Yes. waitv with one element has an overhead compared to wait, given
the extra user_copy(). Given that waiting on a single futex is the
common case, I think it's worth having it.

> - Did you consider a wakev interface? An example is a read-write mutex 
> which has read-blocking futexes split (e.g., per-node) for scalability 
> then the writer may unlock and wake all readers with one call. We 
> actually have some scalability challenges of this nature with certain 
> large database programs.
> 

Not really, I haven't heard any use case for that until now. It should
be easy to implement, though, and I think you have an interesting use
case here. Could you point me to some of those database programs?

> - Great to see 64-bit support in, it is helpful to do some interesting 
> things with locks without hacks (e.g., putting an address in the lock 
> word).
> 
> - Are we really keen on squashing node ID into flags in this day and age?
> I guess okay but seems like it would be nice to allow a bit more space
> in general for the operations. I don't want to turn it into a whole big
> multiplexing nightmare again with lots of such flags, or propose
> complexity with no code behind it, but I think we need a bit of leeway
> for unforeseen locking innovations to be added carefully. The pthread
> locking today is still fairly primitive really, I don't think we know
> what will work best for the next 10 years.

In the interface that I proposed, the node ID isn't part of the flags.
You have a flag, FUTEX_FLAG_NUMA, and when it is used, you pass in
`void *uaddr` a pointer to a `struct futex_numa { int value; int hint;
}`, where hint is the node ID you would like to work on and value is
just the userspace futex. This is documented in more detail in patch 7
("docs: locking: futex2: Add documentation").

If you have any feedback on how this NUMA interface looks, I would
like to hear it.

Also, did something in my writing indicate that the node ID would be
part of the flags? I'll improve it if so.

> 
> One scalability issue we are starting to hit and will only get worse is 
> futex queue spinlock contention. Perhaps this is better addressed in 
> userspace but the kernel could play a part so I like to leave some doors
> open. One example is that the wait (or wake) side may like to depend not
> just on the memory value, but on the success of a cmpxchg to avoid 
> unqueueing and queueing spuriously, which increases lock contention but
> also ends up putting the poor task on the back of the list -- yes RT
> priorities can help the realtime case, but non-RT cases can get bad
> outlier latencies if lock stealing is allowed (which can be very good
> for performance).
> 

Sorry, I'm not sure what you mean here. Are you proposing to have a
cmpxchg on the kernel side, so the lock would be taken by the kernel,
and not by userspace like it is now?

> - The private global futex hash table sucks for various reasons, and
> having 128 waiters per thread makes it orders of magnitude easier for
> userspace to DoS stuff with hash collisions. NUMA doesn't fix that, the
> per process hashing that Thomas suggested does fix the DoS but the
> non-deterministic hash collisions still seem to be a problem for real
> time response, and at the other end of the scale some apps (certain 
> databases, etc) can have ten thousand futex waiters at once so birthday
> paradox can also lead to guaranteed (low level) variable beahviour 
> within a single process.
> 
> I know the kernel in general is not very robust against this kind of 
> DoS/nondeterminism, but it's a bit sad to introduce new APIs with the 
> problem still there. Yes we could address it later, but I think it's 
> better done first because the solution might influence what the best 
> syscall API is.
> 
> For example the idea of using the address as the handle for the wait 
> queue _and_ the value is very convenient but hashing is annoying for
> all the above reasons and the shared wait queue case is pretty clunky. 
> It's also constraining in some corner cases to have the wait queue 
> associated with the address 1:1. For example a type of NUMA mutex might 
> want to have per-node waitqueues associated with a lock word, and wake
> local node waiters 99% of the time. Not trivial to do with futexes and
> seems to at least require bouncing of more cache lines, possibly more
> atomics, etc.
> 
> Could anything else be done?

I wasn't aware that a userspace DoS is something to be concerned about
from the kernel's point of view. Does this scenario consider a
malicious actor? If so, there are plenty of resources to be denied, so
I'm not sure how futex could be protected from this. Or is this just a
program that uses tons of futexes?

> 
> I'll be burned at the stake for suggesting it but it would be great if 
> we could use file descriptors. At least for the shared futex, maybe 
> private could use a per-process futex allocator. It solves all of the
> above, although I'm sure has many of its own problem. It may not play
> so nicely with the pthread mutex API because of the whole static
> initialiser problem, but the first futex proposal did use fds. But it's
> an example of an alternate API.
> 

FDs and futexes don't play well together, because for futex_wait() you
need to tell the kernel the expected value at the futex address to
avoid sleeping on a free lock. FD operations (poll, select) don't have
this `value` argument, so they could sleep forever, but I'm not sure if
you had taken this into consideration.
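
To make that concrete, here is a minimal sketch with the existing
futex(2) call (via syscall(2), since glibc provides no wrapper); the
kernel's re-check of the expected value is what closes the race between
a failed CAS and the sleep:

#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/syscall.h>
#include <unistd.h>

static _Atomic int lock_word; /* 0 = free, 1 = taken */

static void lock_contended(void)
{
        int expected = 0;

        while (!atomic_compare_exchange_weak(&lock_word, &expected, 1)) {
                /* The kernel only sleeps if the word still reads 1;
                 * without that check, a wake issued between the failed
                 * CAS and this call would be lost. */
                syscall(SYS_futex, &lock_word, FUTEX_WAIT_PRIVATE, 1,
                        NULL, NULL, 0);
                expected = 0;
        }
}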

> And.. thanks for the great work, apologies if I missed some discussion
> related to the above comments, I did some reading but I know there has
> been a lot of discussions going on.
> 

:) and thank you for the valuable feedback and detailed ideas!

> Thanks,
> Nick
>
Nicholas Piggin June 5, 2021, 1:09 a.m. UTC | #5
Excerpts from André Almeida's message of June 5, 2021 6:01 am:
> At 08:36 on 04/06/21, Nicholas Piggin wrote:
>> Excerpts from André Almeida's message of June 4, 2021 5:59 am:
>>> [...]
>> 
>> A few comments.
>> 
>> - Do you need a syscall for wait and for waitv, or can wait just be a
>> degenerate case of waitv (which could use the stack variable)?  I guess
>> it does save the user access unlock.
>> 
> 
> Yes. waitv with one element has a overhead compared to wait, given the
> extra user_copy(). Given that waiting on a single futex is the common
> case, I think it's worth to have it.

Okay.

>> - Did you consider a wakev interface? An example is a read-write mutex 
>> which has read-blocking futexes split (e.g., per-node) for scalability 
>> then the writer may unlock and wake all readers with one call. We 
>> actually have some scalability challenges of this nature with certain 
>> large database programs.
>> 
> 
> Not really, I haven't heard any use case for that until now. It should
> be easy to implement it, though, and I think you have an interesting use
> case here. Could you point me some of those database programs?

Not source code unfortunately. I know that's not a very good answer, but 
they are far ahead of what most open source apps are doing scalability 
wise today, and they end up rolling their own complex locking. Hopefully
the example I give is simple enough to understand.

> 
>> - Great to see 64-bit support in, it is helpful to do some interesting 
>> things with locks without hacks (e.g., putting an address in the lock 
>> word).
>> 
>> - Are we really keen on squashing node ID into flags in this day and age?
>> I guess okay but seems like it would be nice to allow a bit more space
>> in general for the operations. I don't want to turn it into a whole big
>> multiplexing nightmare again with lots of such flags, or propose
>> complexity with no code behind it, but I think we need a bit of leeway
>> for unforeseen locking innovations to be added carefully. The pthread
>> locking today is still fairly primitive really, I don't think we know
>> what will work best for the next 10 years.
> 
> In the interface that I'd proposed, the node ID isn't part of the flags.
> You have a flag FUTEX_FLAG_NUMA, and when that is used, you pass in
> `void *uaddr` a pointer to a `struct futex_numa { int value, int hint
> }`, where hint should be the node ID you would like to work on, and
> value is just the userspace futex. This is documented in more details in
> patch 7 "docs: locking: futex2: Add documentation".
> 
> If you have any feedback about how this NUMA interface looks like, I
> would like to hear.
> 
> Also, did something in my writing indicated that the node ID would be
> part of the flags? I'll improve this it if so.

Oh, I did miss this, thank you. No, it wasn't your writing; I think it
was me trying to read through a lot of messages and getting confused
with some earlier conversations.

I'll look a bit more at the NUMA interface.

> 
>> 
>> One scalability issue we are starting to hit and will only get worse is 
>> futex queue spinlock contention. Perhaps this is better addressed in 
>> userspace but the kernel could play a part so I like to leave some doors
>> open. One example is that the wait (or wake) side may like to depend not
>> just on the memory value, but on the success of a cmpxchg to avoid 
>> unqueueing and queueing spuriously, which increases lock contention but
>> also ends up putting the poor task on the back of the list -- yes RT
>> priorities can help the realtime case, but non-RT cases can get bad
>> outlier latencies if lock stealing is allowed (which can be very good
>> for performance).
>> 
> 
> Sorry, I'm not sure what do you mean here. Are you proposing to have a
> cmpxchg in kernel side, so the lock would be taken by the kernel, and
> not by the userspace like it's now?

Yes. Only in slow paths, of course, to reduce starvation / erratic
latencies and spurious wait queue manipulations.

Actually one other scalability thing while I remember it:

futex_wait currently requires that the lock word is tested under the 
queue spin lock (to avoid consuming a wakeup). The problem with this is 
that the lock word can be a very hot cache line if you have a lot of
concurrency, so accessing it under the queue lock can increase queue
lock hold time.

I would prefer if the new API was relaxed to avoid this restriction
(e.g., any wait call may consume a wakeup so it's up to userspace to
avoid that if it is a problem).

>> - The private global futex hash table sucks for various reasons, and
>> having 128 waiters per thread makes it orders of magnitude easier for
>> userspace to DoS stuff with hash collisions. NUMA doesn't fix that, the
>> per process hashing that Thomas suggested does fix the DoS but the
>> non-deterministic hash collisions still seem to be a problem for real
>> time response, and at the other end of the scale some apps (certain 
>> databases, etc) can have ten thousand futex waiters at once so birthday
>> paradox can also lead to guaranteed (low level) variable beahviour 
>> within a single process.
>> 
>> I know the kernel in general is not very robust against this kind of 
>> DoS/nondeterminism, but it's a bit sad to introduce new APIs with the 
>> problem still there. Yes we could address it later, but I think it's 
>> better done first because the solution might influence what the best 
>> syscall API is.
>> 
>> For example the idea of using the address as the handle for the wait 
>> queue _and_ the value is very convenient but hashing is annoying for
>> all the above reasons and the shared wait queue case is pretty clunky. 
>> It's also constraining in some corner cases to have the wait queue 
>> associated with the address 1:1. For example a type of NUMA mutex might 
>> want to have per-node waitqueues associated with a lock word, and wake
>> local node waiters 99% of the time. Not trivial to do with futexes and
>> seems to at least require bouncing of more cache lines, possibly more
>> atomics, etc.
>> 
>> Could anything else be done?
> 
> I wasn't aware that userspace doing DoS is something to be concerned
> from the kernel point of view. Is this scenario considering a malicious
> actor? If so, there are plenty of resources to be denied, so not sure
> how futex could be protected of this. Or is this just a program that
> uses tons of futexes?

Both really. AFAIKS one of the efforts that prompted the futex 
modernisation work was the RT latency issues from Thomas in 2016 when 
the per process table was proposed.


>> I'll be burned at the stake for suggesting it but it would be great if 
>> we could use file descriptors. At least for the shared futex, maybe 
>> private could use a per-process futex allocator. It solves all of the
>> above, although I'm sure has many of its own problem. It may not play
>> so nicely with the pthread mutex API because of the whole static
>> initialiser problem, but the first futex proposal did use fds. But it's
>> an example of an alternate API.
>> 
> 
> FDs and futex doesn't play well, because for futex_wait() you need to
> tell the kernel the expected value in the futex address to avoid
> sleeping in a free lock. FD operations (poll, select) don't have this
> `value` argument, so they could sleep forever, but I'm not sure if you
> had taken this in consideration.

I had. The futex wait API would additionally take an fd. The only
difference is that the waitqueue used when a sleep or wake is required
is derived from the fd, not from an address.

I think the bigger sticking points would be if it's too heavyweight an 
object to use (which could be somewhat mitigated with a simpler ida
allocator although that's difficult to do with shared), and whether libc
could sanely use them due to the static initialiser problem of pthread
mutexes.

Thanks,
Nick
Andrey Semashev June 5, 2021, 8:56 a.m. UTC | #6
On 6/5/21 4:09 AM, Nicholas Piggin wrote:
> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>> At 08:36 on 04/06/21, Nicholas Piggin wrote:
> 
>>> I'll be burned at the stake for suggesting it but it would be great if
>>> we could use file descriptors. At least for the shared futex, maybe
>>> private could use a per-process futex allocator. It solves all of the
>>> above, although I'm sure has many of its own problem. It may not play
>>> so nicely with the pthread mutex API because of the whole static
>>> initialiser problem, but the first futex proposal did use fds. But it's
>>> an example of an alternate API.
>>>
>>
>> FDs and futex doesn't play well, because for futex_wait() you need to
>> tell the kernel the expected value in the futex address to avoid
>> sleeping in a free lock. FD operations (poll, select) don't have this
>> `value` argument, so they could sleep forever, but I'm not sure if you
>> had taken this in consideration.
> 
> I had. The futex wait API would take a fd additional. The only
> difference is the waitqueue that is used when a sleep or wake is
> required is derived from the fd, not from an address.
> 
> I think the bigger sticking points would be if it's too heavyweight an
> object to use (which could be somewhat mitigated with a simpler ida
> allocator although that's difficult to do with shared), and whether libc
> could sanely use them due to the static initialiser problem of pthread
> mutexes.

The static initialization feature is not the only benefit of the
current futex design, and probably not the most important one. You can
work around the static initialization in userspace, e.g. by
initializing the fd to an invalid value and creating a valid fd upon
first use, although that would still incur a performance penalty and
add a new source of failure.

What is more important is that waiting on an fd always requires a
kernel call. This will be terrible for the performance of uncontended
locks, which is the majority of the time.

Another important point is that a futex that is not being waited on
consumes zero kernel resources, while an fd is a limited resource even
when not used. You can have millions of futexes in userspace and you
are guaranteed not to exhaust any limit as long as you have memory.
That is an important feature, and current userspace relies on it by
assuming that creating mutexes and condition variables is cheap.

Having futex fd would be useful in some cases to be able to integrate 
futexes with IO. I did have use cases where I would have liked to have 
FUTEX_FD in the past. These cases arise when you already have a thread 
that operates on fds and you want to avoid having a separate thread that 
blocks on futexes in a similar fashion. But, IMO, that should be an 
optional opt-in feature. By far, not every futex needs to have an fd. 
For just waiting on multiple futexes, the native support that futex2 
provides is superior.

PS: I'm not asking FUTEX_FD to be implemented as part of futex2 API. 
futex2 would be great even without it.
Nicholas Piggin June 6, 2021, 11:57 a.m. UTC | #7
Excerpts from Andrey Semashev's message of June 5, 2021 6:56 pm:
> On 6/5/21 4:09 AM, Nicholas Piggin wrote:
>> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>>> At 08:36 on 04/06/21, Nicholas Piggin wrote:
>> 
>>>> I'll be burned at the stake for suggesting it but it would be great if
>>>> we could use file descriptors. At least for the shared futex, maybe
>>>> private could use a per-process futex allocator. It solves all of the
>>>> above, although I'm sure has many of its own problem. It may not play
>>>> so nicely with the pthread mutex API because of the whole static
>>>> initialiser problem, but the first futex proposal did use fds. But it's
>>>> an example of an alternate API.
>>>>
>>>
>>> FDs and futex doesn't play well, because for futex_wait() you need to
>>> tell the kernel the expected value in the futex address to avoid
>>> sleeping in a free lock. FD operations (poll, select) don't have this
>>> `value` argument, so they could sleep forever, but I'm not sure if you
>>> had taken this in consideration.
>> 
>> I had. The futex wait API would take a fd additional. The only
>> difference is the waitqueue that is used when a sleep or wake is
>> required is derived from the fd, not from an address.
>> 
>> I think the bigger sticking points would be if it's too heavyweight an
>> object to use (which could be somewhat mitigated with a simpler ida
>> allocator although that's difficult to do with shared), and whether libc
>> could sanely use them due to the static initialiser problem of pthread
>> mutexes.
> 
> The static initialization feature is not the only benefit of the current 
> futex design, and probably not the most important one. You can work 
> around the static initialization in userspace, e.g. by initializing fd 
> to an invalid value and creating a valid fd upon the first use. Although 
> that would still incur a performance penalty and add a new source of 
> failure.

Sounds like a serious problem, but maybe it isn't. On the other hand,
maybe we don't have to support pthread mutexes as they are anyway 
because futex already does that fairly well.

> What is more important is that waiting on fd always requires a kernel 
> call. This will be terrible for performance of uncontended locks, which 
> is the majority of time.

No. As I said just before, it would be the same except the waitqueue is 
derived from fd rather than address.

> 
> Another important point is that a futex that is not being waited on 
> consumes zero kernel resources while fd is a limited resource even when 
> not used. You can have millions futexes in userspace and you are 
> guaranteed not to exhaust any limit as long as you have memory. That is 
> an important feature, and the current userspace is relying on it by 
> assuming that creating mutexes and condition variables is cheap.

Is it an important feature? Would 1 byte of kernel memory per uncontended
futex be okay? 10? 100?

I do see it's very nice the current design that requires no 
initialization for uncontended, I'm just asking questions to get an idea 
of what constraints we're working with. We have a pretty good API 
already which can support unlimited uncontended futexes, so I'm 
wondering do we really need another very very similar API that doesn't
fix the really difficult problems of the existing one?

Thanks,
Nick

> Having futex fd would be useful in some cases to be able to integrate 
> futexes with IO. I did have use cases where I would have liked to have 
> FUTEX_FD in the past. These cases arise when you already have a thread 
> that operates on fds and you want to avoid having a separate thread that 
> blocks on futexes in a similar fashion. But, IMO, that should be an 
> optional opt-in feature. By far, not every futex needs to have an fd. 
> For just waiting on multiple futexes, the native support that futex2 
> provides is superior.
> 
> PS: I'm not asking FUTEX_FD to be implemented as part of futex2 API. 
> futex2 would be great even without it.
Andrey Semashev June 6, 2021, 1:15 p.m. UTC | #8
On 6/6/21 2:57 PM, Nicholas Piggin wrote:
> Excerpts from Andrey Semashev's message of June 5, 2021 6:56 pm:
>> On 6/5/21 4:09 AM, Nicholas Piggin wrote:
>>> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>>>> At 08:36 on 04/06/21, Nicholas Piggin wrote:
>>>
>>>>> I'll be burned at the stake for suggesting it but it would be great if
>>>>> we could use file descriptors. At least for the shared futex, maybe
>>>>> private could use a per-process futex allocator. It solves all of the
>>>>> above, although I'm sure has many of its own problem. It may not play
>>>>> so nicely with the pthread mutex API because of the whole static
>>>>> initialiser problem, but the first futex proposal did use fds. But it's
>>>>> an example of an alternate API.
>>>>>
>>>>
>>>> FDs and futex doesn't play well, because for futex_wait() you need to
>>>> tell the kernel the expected value in the futex address to avoid
>>>> sleeping in a free lock. FD operations (poll, select) don't have this
>>>> `value` argument, so they could sleep forever, but I'm not sure if you
>>>> had taken this in consideration.
>>>
>>> I had. The futex wait API would take a fd additional. The only
>>> difference is the waitqueue that is used when a sleep or wake is
>>> required is derived from the fd, not from an address.
>>>
>>> I think the bigger sticking points would be if it's too heavyweight an
>>> object to use (which could be somewhat mitigated with a simpler ida
>>> allocator although that's difficult to do with shared), and whether libc
>>> could sanely use them due to the static initialiser problem of pthread
>>> mutexes.
>>
>> The static initialization feature is not the only benefit of the current
>> futex design, and probably not the most important one. You can work
>> around the static initialization in userspace, e.g. by initializing fd
>> to an invalid value and creating a valid fd upon the first use. Although
>> that would still incur a performance penalty and add a new source of
>> failure.
> 
> Sounds like a serious problem, but maybe it isn't. On the other hand,
> maybe we don't have to support pthread mutexes as they are anyway
> because futex already does that fairly well.
> 
>> What is more important is that waiting on fd always requires a kernel
>> call. This will be terrible for performance of uncontended locks, which
>> is the majority of time.
> 
> No. As I said just before, it would be the same except the waitqueue is
> derived from fd rather than address.

Sorry, in that case I'm not sure I understand how that would work. You
do need to allocate an fd, don't you?

>> Another important point is that a futex that is not being waited on
>> consumes zero kernel resources while fd is a limited resource even when
>> not used. You can have millions futexes in userspace and you are
>> guaranteed not to exhaust any limit as long as you have memory. That is
>> an important feature, and the current userspace is relying on it by
>> assuming that creating mutexes and condition variables is cheap.
> 
> Is it an important feture? Would 1 byte of kernel memory per uncontended
> futex be okay? 10? 100?
> 
> I do see it's very nice the current design that requires no
> initialization for uncontended, I'm just asking questions to get an idea
> of what constraints we're working with. We have a pretty good API
> already which can support unlimited uncontended futexes, so I'm
> wondering do we really need another very very similar API that doesn't
> fix the really difficult problems of the existing one?

It does provide the very much needed features that are missing in the 
current futex. Namely, more futex sizes and wait for multiple. So the 
argument of "why have two similar APIs" is not quite fair. It would be, 
if there was feature parity with futex.

I believe, the low cost of a futex is an important feature, and was one 
of the reasons for its original design and introduction. Otherwise we 
would be using eventfds in mutexes.

One other feature that I didn't mention earlier and which follows from 
its "address in memory" design is the ability to use futexes in 
process-shared memory. This is important for process-shared pthread 
components, too, but has its own value even without this, if you use 
futexes directly. With fds, you can't place the fd in shared memory,
since every process needs to have its own fd referring to the same
kernel object, and passing fds cannot be done without a UNIX socket.
This is incompatible with the pthreads API design and would require
non-trivial design changes to applications using futexes directly.
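
As a concrete illustration of the process-shared case (plain POSIX
shared memory plus the existing futex(2) call; error handling omitted,
and nothing futex2-specific is assumed):

#include <fcntl.h>
#include <linux/futex.h>
#include <stdatomic.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
        /* Any process opening "/demo" maps the same futex word. */
        int fd = shm_open("/demo", O_CREAT | O_RDWR, 0600);
        ftruncate(fd, sizeof(_Atomic int));
        _Atomic int *word = mmap(NULL, sizeof(*word),
                                 PROT_READ | PROT_WRITE, MAP_SHARED,
                                 fd, 0);

        /* A shared (non-PRIVATE) wait is keyed on the inode and offset,
         * so another process mapping the same segment can wake us; an
         * fd-based handle would instead have to be passed over a UNIX
         * socket. */
        syscall(SYS_futex, word, FUTEX_WAIT, 0, NULL, NULL, 0);
        return 0;
}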
André Almeida June 7, 2021, 3:40 p.m. UTC | #9
At 22:09 on 04/06/21, Nicholas Piggin wrote:
> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>> At 08:36 on 04/06/21, Nicholas Piggin wrote:
>>> Excerpts from André Almeida's message of June 4, 2021 5:59 am:
>>> - Did you consider a wakev interface? An example is a read-write mutex 
>>> which has read-blocking futexes split (e.g., per-node) for scalability 
>>> then the writer may unlock and wake all readers with one call. We 
>>> actually have some scalability challenges of this nature with certain 
>>> large database programs.
>>>
>>
>> Not really, I haven't heard any use case for that until now. It should
>> be easy to implement it, though, and I think you have an interesting use
>> case here. Could you point me some of those database programs?
> 
> Not source code unfortunately. I know that's not a very good answer, but 
> they are far ahead of what most open source apps are doing scalability 
> wise today, and they end up rolling their own complex locking. Hopefully
> the example I give is simple enough to understand.
> 

I see, that makes things a bit harder. I understood the use case, and
wakev can be implemented without affecting the rest of the API, so I
think I'll get back to it later, for now.
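
For the record, I'd expect a wakev call to simply mirror the waitv side;
a purely hypothetical sketch (no such syscall exists in this series):

  /* Hypothetical vectored wake, mirroring futex_waitv(); illustrative only. */
  struct futex_wakev {
          void *uaddr;            /* futex to wake */
          unsigned int nr_wake;   /* how many waiters to wake on this futex */
          unsigned int flags;     /* size, shared/private, ... */
  };

  long futex_wakev(struct futex_wakev *wakers, unsigned int nr_futexes,
                   unsigned int flags);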

>>>
>>> - Are we really keen on squashing node ID into flags in this day and age?
>>> I guess okay but seems like it would be nice to allow a bit more space
>>> in general for the operations. I don't want to turn it into a whole big
>>> multiplexing nightmare again with lots of such flags, or propose
>>> complexity with no code behind it, but I think we need a bit of leeway
>>> for unforeseen locking innovations to be added carefully. The pthread
>>> locking today is still fairly primitive really, I don't think we know
>>> what will work best for the next 10 years.
>>
>> In the interface that I'd proposed, the node ID isn't part of the flags.
>> You have a flag FUTEX_FLAG_NUMA, and when that is used, you pass in
>> `void *uaddr` a pointer to a `struct futex_numa { int value, int hint
>> }`, where hint should be the node ID you would like to work on, and
>> value is just the userspace futex. This is documented in more details in
>> patch 7 "docs: locking: futex2: Add documentation".
>>
>> If you have any feedback about how this NUMA interface looks like, I
>> would like to hear.
>>
>> Also, did something in my writing indicate that the node ID would be
>> part of the flags? I'll improve it if so.
> 
> Oh I did miss this, thank you. No it wasn't your writing, I think it was 
> me trying to read through a lot of messages and got confused with some
> earlier conversations.
> 
> I'll look a bit more at the NUMA interface.
> 

Thanks!
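
For quick reference, this is the shape of the NUMA operand proposed in
this series (struct as in patch 7; the wait call at the end is only an
illustration of how the flag and the operand combine):

  /* NUMA-aware futex operand, as documented in patch 7 of this series. */
  struct futex_numa {
          int value;      /* the futex word itself */
          int hint;       /* NUMA node id the caller wants to operate on */
  };

  /* With FUTEX_FLAG_NUMA set, uaddr points to a struct futex_numa: */
  struct futex_numa fn = { .value = 0, .hint = 1 };
  futex_wait(&fn, 0, FUTEX_32 | FUTEX_FLAG_NUMA, NULL);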

>>
>>>
>>> One scalability issue we are starting to hit and will only get worse is 
>>> futex queue spinlock contention. Perhaps this is better addressed in 
>>> userspace but the kernel could play a part so I like to leave some doors
>>> open. One example is that the wait (or wake) side may like to depend not
>>> just on the memory value, but on the success of a cmpxchg to avoid 
>>> unqueueing and queueing spuriously, which increases lock contention but
>>> also ends up putting the poor task on the back of the list -- yes RT
>>> priorities can help the realtime case, but non-RT cases can get bad
>>> outlier latencies if lock stealing is allowed (which can be very good
>>> for performance).
>>>
>>
>> Sorry, I'm not sure what you mean here. Are you proposing to have a
>> cmpxchg on the kernel side, so the lock would be taken by the kernel, and
>> not by userspace like it is now?
> 
> Yes. Only in slow paths, of course, to reduce starvation / erratic
> latencies and spurious wait queue manipulations.

Right, so if we need to go into the kernel to do the cmpxchg, we can't
take a free lock without a syscall, and this goes against the futex
semantics: the "strength" of this interface is to not require a context
switch in uncontended cases.

It's not a bad thing in itself to go into the kernel to get a lock; other
operating systems do that, and if the kernel has more knowledge about who
holds the lock, it can even make some smart decisions. But this is not
futex; this probably belongs to another interface (one that's probably
slower in the common case than futex).
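
To make this concrete, the fast path at stake looks roughly like this,
after Drepper's "Futexes Are Tricky" (a schematic sketch, not code from
this series; unlock would set the word back to 0 and call FUTEX_WAKE if
it saw 2):

  #include <stdatomic.h>
  #include <linux/futex.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* 0 = free, 1 = locked, 2 = locked with (possible) waiters. */
  static void lock(_Atomic unsigned int *futex)
  {
          unsigned int zero = 0;

          /* Fast path: 0 -> 1 with one atomic op; the kernel is never
           * involved in the uncontended case. */
          if (atomic_compare_exchange_strong(futex, &zero, 1))
                  return;

          /* Slow path: mark the lock contended and sleep until woken. */
          while (atomic_exchange(futex, 2) != 0)
                  syscall(SYS_futex, futex, FUTEX_WAIT, 2, NULL, NULL, 0);
  }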

> 
> Actually one other scalability thing while I remember it:
> 
> futex_wait currently requires that the lock word is tested under the 
> queue spin lock (to avoid consuming a wakeup). The problem with this is 
> that the lock word can be a very hot cache line if you have a lot of
> concurrency, so accessing it under the queue lock can increase queue
> lock hold time.
> 
> I would prefer if the new API was relaxed to avoid this restriction
> (e.g., any wait call may consume a wakeup so it's up to userspace to
> avoid that if it is a problem).

Maybe I'm wrong, but AFAIK the goal of checking the lock word inside the
spin lock is to avoid sleeping forever (in other words, wrongly assuming
that the lock is taken and missing a wakeup call), not to avoid
consuming wakeups. Or at least this is my interpretation of this long
comment in futex.c:

https://elixir.bootlin.com/linux/v5.12.9/source/kernel/futex.c#L51

So removing this requirement of checking the futex word with the lock
taken could lead to undesirable behavior.
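
For reference, the interleaving that the check-under-the-lock closes, as
I read that comment (schematic):

  /*
   *   waiter (in futex_wait)            waker
   *   ----------------------            -----
   *   reads *futex, sees val (0)
   *                                     *futex = 1;
   *                                     futex_wake();  (queue still empty,
   *                                                     so it is a no-op)
   *   enqueues itself and sleeps
   *     -> the wakeup is lost. Doing the read and the enqueue under the
   *        same queue lock (which futex_wake() also takes) closes the
   *        window.
   */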

> 
>>> - The private global futex hash table sucks for various reasons, and
>>> having 128 waiters per thread makes it orders of magnitude easier for
>>> userspace to DoS stuff with hash collisions. NUMA doesn't fix that, the
>>> per process hashing that Thomas suggested does fix the DoS but the
>>> non-deterministic hash collisions still seem to be a problem for real
>>> time response, and at the other end of the scale some apps (certain 
>>> databases, etc) can have ten thousand futex waiters at once so birthday
>>> paradox can also lead to guaranteed (low level) variable behaviour 
>>> within a single process.
>>>
>>> I know the kernel in general is not very robust against this kind of 
>>> DoS/nondeterminism, but it's a bit sad to introduce new APIs with the 
>>> problem still there. Yes we could address it later, but I think it's 
>>> better done first because the solution might influence what the best 
>>> syscall API is.
>>>
>>> For example the idea of using the address as the handle for the wait 
>>> queue _and_ the value is very convenient but hashing is annoying for
>>> all the above reasons and the shared wait queue case is pretty clunky. 
>>> It's also constraining in some corner cases to have the wait queue 
>>> associated with the address 1:1. For example a type of NUMA mutex might 
>>> want to have per-node waitqueues associated with a lock word, and wake
>>> local node waiters 99% of the time. Not trivial to do with futexes and
>>> seems to at least require bouncing of more cache lines, possibly more
>>> atomics, etc.
>>>
>>> Could anything else be done?
>>
>> I wasn't aware that userspace doing DoS is something to be concerned
>> from the kernel point of view. Is this scenario considering a malicious
>> actor? If so, there are plenty of resources to be denied, so not sure
>> how futex could be protected of this. Or is this just a program that
>> uses tons of futexes?
> 
> Both really. AFAIKS one of the efforts that prompted the futex 
> modernisation work was the RT latency issues from Thomas in 2016 when 
> the per process table was proposed.
> 

When I first read Thomas' proposal for the per-process table, I thought that
the main goal there was to solve NUMA locality issues, not RT latency,
but I think you are right. However, re-reading the thread at [0], it
seems that the RT problems were not completely solved in that
interface; maybe the people involved with that patchset can help to shed
some light on it.

Otherwise, this same proposal could be integrated in futex2, given that
we would only need to provide to userland some extra flags and add some
`if`s around the hash table code (in a very similar way the NUMA code
will be implemented in futex2).

[0] https://lore.kernel.org/lkml/20160505204230.932454245@linutronix.de/

> 
>>> I'll be burned at the stake for suggesting it but it would be great if 
>>> we could use file descriptors. At least for the shared futex, maybe 
>>> private could use a per-process futex allocator. It solves all of the
>>> above, although I'm sure has many of its own problem. It may not play
>>> so nicely with the pthread mutex API because of the whole static
>>> initialiser problem, but the first futex proposal did use fds. But it's
>>> an example of an alternate API.
>>>
>>
>> FDs and futexes don't play well together, because for futex_wait() you
>> need to tell the kernel the expected value at the futex address to avoid
>> sleeping on a free lock. FD operations (poll, select) don't have this
>> `value` argument, so they could sleep forever, but I'm not sure if you
>> had taken this into consideration.
> 
> I had. The futex wait API would take an additional fd. The only
> difference is that the waitqueue used when a sleep or wake is
> required is derived from the fd, not from an address.
> 
> I think the bigger sticking points would be if it's too heavyweight an 
> object to use (which could be somewhat mitigated with a simpler ida
> allocator although that's difficult to do with shared), and whether libc
> could sanely use them due to the static initialiser problem of pthread
> mutexes.
> 
> Thanks,
> Nick
>
Nicholas Piggin June 8, 2021, 1:25 a.m. UTC | #10
Excerpts from Andrey Semashev's message of June 6, 2021 11:15 pm:
> On 6/6/21 2:57 PM, Nicholas Piggin wrote:
>> Excerpts from Andrey Semashev's message of June 5, 2021 6:56 pm:
>>> On 6/5/21 4:09 AM, Nicholas Piggin wrote:
>>>> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>>>>> On 04/06/21 08:36, Nicholas Piggin wrote:
>>>>
>>>>>> I'll be burned at the stake for suggesting it but it would be great if
>>>>>> we could use file descriptors. At least for the shared futex, maybe
>>>>>> private could use a per-process futex allocator. It solves all of the
>>>>>> above, although I'm sure has many of its own problem. It may not play
>>>>>> so nicely with the pthread mutex API because of the whole static
>>>>>> initialiser problem, but the first futex proposal did use fds. But it's
>>>>>> an example of an alternate API.
>>>>>>
>>>>>
>>>>> FDs and futexes don't play well together, because for futex_wait() you
>>>>> need to tell the kernel the expected value at the futex address to avoid
>>>>> sleeping on a free lock. FD operations (poll, select) don't have this
>>>>> `value` argument, so they could sleep forever, but I'm not sure if you
>>>>> had taken this into consideration.
>>>>
>>>> I had. The futex wait API would take an additional fd. The only
>>>> difference is that the waitqueue used when a sleep or wake is
>>>> required is derived from the fd, not from an address.
>>>>
>>>> I think the bigger sticking points would be if it's too heavyweight an
>>>> object to use (which could be somewhat mitigated with a simpler ida
>>>> allocator although that's difficult to do with shared), and whether libc
>>>> could sanely use them due to the static initialiser problem of pthread
>>>> mutexes.
>>>
>>> The static initialization feature is not the only benefit of the current
>>> futex design, and probably not the most important one. You can work
>>> around the static initialization in userspace, e.g. by initializing fd
>>> to an invalid value and creating a valid fd upon the first use. Although
>>> that would still incur a performance penalty and add a new source of
>>> failure.
>> 
>> Sounds like a serious problem, but maybe it isn't. On the other hand,
>> maybe we don't have to support pthread mutexes as they are anyway
>> because futex already does that fairly well.
>> 
>>> What is more important is that waiting on fd always requires a kernel
>>> call. This will be terrible for performance of uncontended locks, which
>>> is the majority of time.
>> 
>> No. As I said just before, it would be the same except the waitqueue is
>> derived from fd rather than address.
> 
> Sorry, in that case I'm not sure I understand how that would work. You
> do need to allocate an fd, don't you?

Yes. As I said, imagine a futex_wait API that also takes an fd. The
wait queue is derived from that fd rather than the hash table.
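
So, purely hypothetically, something of this shape (the fd only names the
waitqueue; the value check against *uaddr stays exactly as it is today):

  /* Hypothetical, for illustration only. */
  long futex_wait_fd(int fd, unsigned int *uaddr, unsigned int val,
                     unsigned int flags, struct timespec *timeout);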

>>> Another important point is that a futex that is not being waited on
>>> consumes zero kernel resources while an fd is a limited resource even when
>>> not used. You can have millions of futexes in userspace and you are
>>> guaranteed not to exhaust any limit as long as you have memory. That is
>>> an important feature, and the current userspace is relying on it by
>>> assuming that creating mutexes and condition variables is cheap.
>> 
>> Is it an important feature? Would 1 byte of kernel memory per uncontended
>> futex be okay? 10? 100?
>> 
>> I do see it's very nice the current design that requires no
>> initialization for uncontended, I'm just asking questions to get an idea
>> of what constraints we're working with. We have a pretty good API
>> already which can support unlimited uncontended futexes, so I'm
>> wondering do we really need another very very similar API that doesn't
>> fix the really difficult problems of the existing one?
> 
> It does provide the very much needed features that are missing in the 
> current futex. Namely, more futex sizes and wait for multiple. So the 
> argument of "why have two similar APIs" is not quite fair. It would be, 
> if there was feature parity with futex.

It does provide some extra features, sure, with some straightforward 
extension of the existing API. The really interesting or tricky part of
the API is left unchanged though.

My line of thinking is that while we're changing the API anyway, we 
should see if it can be changed to help those other problems too.

> I believe, the low cost of a futex is an important feature, and was one 
> of the reasons for its original design and introduction.

It is of course. The first futex proposal did use fds, interestingly.
I didn't look back further into the libc side of that thing, but maybe
I should.

> Otherwise we 
> would be using eventfds in mutexes.

I don't think so, not even if eventfd came before the futex syscall.

> 
> One other feature that I didn't mention earlier and which follows from 
> its "address in memory" design is the ability to use futexes in 
> process-shared memory. This is important for process-shared pthread 
> components, too, but has its own value even without this, if you use 
> futexes directly. With fds, you can't place the fd in a shared memory 
> since every process needs to have its own fd referring to the same 
> kernel object, and passing fds cannot be done without a UNIX socket. 
> This is incompatible with pthreads API design and would require 
> non-trivial design changes to the applications using futexes directly.
> 

That may be true. A file is a natural object for sharing such a resource,
but the means to share the fd is not so easy. OTOH you could also use a
syscall to open the same file and get a new fd.

Are shared pthread mutexes using existing pthread APIs that are today
implemented okay with futex1 system call a good reason to constrain 
futex2 I wonder? Or do we have an opportunity to make a bigger change
to the API so it suffers less from non deterministic latency (for
example)?

I don't want to limit it to just files vs addresses, fds was an example 
of something that could solve some of the problems.

Thanks,
Nick
Nicholas Piggin June 8, 2021, 1:31 a.m. UTC | #11
Excerpts from André Almeida's message of June 8, 2021 1:40 am:
> On 04/06/21 22:09, Nicholas Piggin wrote:
>> Excerpts from André Almeida's message of June 5, 2021 6:01 am:
>>> On 04/06/21 08:36, Nicholas Piggin wrote:
>>>> Excerpts from André Almeida's message of June 4, 2021 5:59 am:
>>>> - Did you consider a wakev interface? An example is a read-write mutex 
>>>> which has read-blocking futexes split (e.g., per-node) for scalability 
>>>> then the writer may unlock and wake all readers with one call. We 
>>>> actually have some scalability challenges of this nature with certain 
>>>> large database programs.
>>>>
>>>
>>> Not really, I haven't heard any use case for that until now. It should
>>> be easy to implement it, though, and I think you have an interesting use
>>> case here. Could you point me some of those database programs?
>> 
>> Not source code unfortunately. I know that's not a very good answer, but 
>> they are far ahead of what most open source apps are doing scalability 
>> wise today, and they end up rolling their own complex locking. Hopefully
>> the example I give is simple enough to understand.
>> 
> 
> I see, that makes things a bit harder. I understood the use case, and
> wakev can be implemented without affecting the rest of the API, so I
> think I'll get back to it later, for now.

Yeah that's fine.

>>>> - Are we really keen on squashing node ID into flags in this day and age?
>>>> I guess okay but seems like it would be nice to allow a bit more space
>>>> in general for the operations. I don't want to turn it into a whole big
>>>> multiplexing nightmare again with lots of such flags, or propose
>>>> complexity with no code behind it, but I think we need a bit of leeway
>>>> for unforeseen locking innovations to be added carefully. The pthread
>>>> locking today is still fairly primitive really, I don't think we know
>>>> what will work best for the next 10 years.
>>>
>>> In the interface that I'd proposed, the node ID isn't part of the flags.
>>> You have a flag FUTEX_FLAG_NUMA, and when that is used, you pass in
>>> `void *uaddr` a pointer to a `struct futex_numa { int value, int hint
>>> }`, where hint should be the node ID you would like to work on, and
>>> value is just the userspace futex. This is documented in more details in
>>> patch 7 "docs: locking: futex2: Add documentation".
>>>
>>> If you have any feedback about how this NUMA interface looks like, I
>>> would like to hear.
>>>
>>> Also, did something in my writing indicate that the node ID would be
>>> part of the flags? I'll improve it if so.
>> 
>> Oh I did miss this, thank you. No it wasn't your writing, I think it was 
>> me trying to read through a lot of messages and got confused with some
>> earlier conversations.
>> 
>> I'll look a bit more at the NUMA interface.
>> 
> 
> Thanks!
> 
>>>
>>>>
>>>> One scalability issue we are starting to hit and will only get worse is 
>>>> futex queue spinlock contention. Perhaps this is better addressed in 
>>>> userspace but the kernel could play a part so I like to leave some doors
>>>> open. One example is that the wait (or wake) side may like to depend not
>>>> just on the memory value, but on the success of a cmpxchg to avoid 
>>>> unqueueing and queueing spuriously, which increases lock contention but
>>>> also ends up putting the poor task on the back of the list -- yes RT
>>>> priorities can help the realtime case, but non-RT cases can get bad
>>>> outlier latencies if lock stealing is allowed (which can be very good
>>>> for performance).
>>>>
>>>
>>> Sorry, I'm not sure what you mean here. Are you proposing to have a
>>> cmpxchg on the kernel side, so the lock would be taken by the kernel, and
>>> not by userspace like it is now?
>> 
>> Yes. Only in slow paths, of course, to reduce starvation / erratic
>> latencies and spurious wait queue manipulations.
> 
> Right, so if we need to go into the kernel to do the cmpxchg, we can't
> take a free lock without a syscall,

Yes you can.

> and this goes against the futex
> semantics: the "strength" of this interface is to not require a context
> switch in uncontended cases.
> 
> It's not a bad thing in itself to go into the kernel to get a lock; other
> operating systems do that, and if the kernel has more knowledge about who
> holds the lock, it can even make some smart decisions. But this is not
> futex; this probably belongs to another interface (one that's probably
> slower in the common case than futex).
> 
>> 
>> Actually one other scalability thing while I remember it:
>> 
>> futex_wait currently requires that the lock word is tested under the 
>> queue spin lock (to avoid consuming a wakeup). The problem with this is 
>> that the lock word can be a very hot cache line if you have a lot of
>> concurrency, so accessing it under the queue lock can increase queue
>> lock hold time.
>> 
>> I would prefer if the new API was relaxed to avoid this restriction
>> (e.g., any wait call may consume a wakeup so it's up to userspace to
>> avoid that if it is a problem).
> 
> Maybe I'm wrong, but AFAIK the goal of checking the lock word inside the
> spin lock is to avoid sleeping forever (in other words, wrongly assuming
> that the lock is taken and missing a wakeup call), not to avoid
> consuming wakeups. Or at least this is my interpretation of this long
> comment in futex.c:
> 
> https://elixir.bootlin.com/linux/v5.12.9/source/kernel/futex.c#L51
> 
> So removing this requirement of checking the futex word with the lock
> taken could lead to undesirable behavior.

No, there are two requirements. Obviously you need to avoid the missed
wakeup at minimum. You don't need to check under the lock unless you
want to avoid consuming an extra wakeup, though. It could possibly be
done in more complex ways, like detecting whether you took a wakeup and,
if so, looking at the queue to see if you can pass it on, or having some
extra flag to signal you are ready for wake-up, but those all seem more
complex and fragile and possibly have weird corner cases; better to just
set out that userspace should deal with it.

> 
>> 
>>>> - The private global futex hash table sucks for various reasons, and
>>>> having 128 waiters per thread makes it orders of magnitude easier for
>>>> userspace to DoS stuff with hash collisions. NUMA doesn't fix that, the
>>>> per process hashing that Thomas suggested does fix the DoS but the
>>>> non-deterministic hash collisions still seem to be a problem for real
>>>> time response, and at the other end of the scale some apps (certain 
>>>> databases, etc) can have ten thousand futex waiters at once so birthday
>>>> paradox can also lead to guaranteed (low level) variable behaviour 
>>>> within a single process.
>>>>
>>>> I know the kernel in general is not very robust against this kind of 
>>>> DoS/nondeterminism, but it's a bit sad to introduce new APIs with the 
>>>> problem still there. Yes we could address it later, but I think it's 
>>>> better done first because the solution might influence what the best 
>>>> syscall API is.
>>>>
>>>> For example the idea of using the address as the handle for the wait 
>>>> queue _and_ the value is very convenient but hashing is annoying for
>>>> all the above reasons and the shared wait queue case is pretty clunky. 
>>>> It's also constraining in some corner cases to have the wait queue 
>>>> associated with the address 1:1. For example a type of NUMA mutex might 
>>>> want to have per-node waitqueues associated with a lock word, and wake
>>>> local node waiters 99% of the time. Not trivial to do with futexes and
>>>> seems to at least require bouncing of more cache lines, possibly more
>>>> atomics, etc.
>>>>
>>>> Could anything else be done?
>>>
>>> I wasn't aware that userspace doing DoS is something to be concerned
>>> from the kernel point of view. Is this scenario considering a malicious
>>> actor? If so, there are plenty of resources to be denied, so not sure
>>> how futex could be protected of this. Or is this just a program that
>>> uses tons of futexes?
>> 
>> Both really. AFAIKS one of the efforts that prompted the futex 
>> modernisation work was the RT latency issues from Thomas in 2016 when 
>> the per process table was proposed.
>> 
> 
> When I first read Thomas' proposal for the per-process table, I thought that
> the main goal there was to solve NUMA locality issues, not RT latency,
> but I think you are right. However, re-reading the thread at [0], it
> seems that the RT problems were not completely solved in that
> interface; maybe the people involved with that patchset can help to shed
> some light on it.
> 
> Otherwise, this same proposal could be integrated in futex2, given that
> we would only need to provide to userland some extra flags and add some
> `if`s around the hash table code (in a very similar way the NUMA code
> will be implemented in futex2).
> 
> [0] https://lore.kernel.org/lkml/20160505204230.932454245@linutronix.de/

Right. The problems of accidental collisions within a process (or
accidental/deliberate collisions with shared futexes) remain.

Thanks,
Nick
Davidlohr Bueso June 8, 2021, 2:33 a.m. UTC | #12
On Mon, 07 Jun 2021, André Almeida wrote:

>On 04/06/21 22:09, Nicholas Piggin wrote:
>> Actually one other scalability thing while I remember it:
>>
>> futex_wait currently requires that the lock word is tested under the
>> queue spin lock (to avoid consuming a wakeup). The problem with this is
>> that the lock word can be a very hot cache line if you have a lot of
>> concurrency, so accessing it under the queue lock can increase queue
>> lock hold time.
>>
>> I would prefer if the new API was relaxed to avoid this restriction
>> (e.g., any wait call may consume a wakeup so it's up to userspace to
>> avoid that if it is a problem).
>
>Maybe I'm wrong, but AFAIK the goal of checking the lock word inside the
>spin lock is to avoid sleeping forever (in other words, wrongly assuming
>that the lock is taken and missing a wakeup call), not to avoid
>consuming wakeups. Or at least this is my interpretation of this long
>comment in futex.c:
>
>https://elixir.bootlin.com/linux/v5.12.9/source/kernel/futex.c#L51

I think what Nick is referring to is that futex_wait() could return 0
instead of EAGAIN upon a uval != val condition if the check is done
without the hb lock. The value could have changed between when userspace
did the condition check and called into futex(2) to block in the slowpath.

But such spurious scenarios should be pretty rare, and while I agree that
the cacheline can be hot, I'm not sure how much of a performance issue this
really is(?), compared to other issues, certainly not to govern futex2
design. Changing such semantics would be a _huge_ difference between futex1
and futex2.

At least compared, for example, to the hb collisions serializing independent
futexes, affecting both performance and determinism. And I agree that a new
interface should address this problem - albeit most of the workloads I have
seen in production use but a handful of futexes and larger thread counts.
One thing that crossed my mind (but I have not actually sat down to look
at it) would be to use rhashtables for the dynamic resizing, but of course
that would probably add a decent amount of overhead to the simple hashing
we currently have.

Thanks,
Davidlohr
Nicholas Piggin June 8, 2021, 4:45 a.m. UTC | #13
Excerpts from Davidlohr Bueso's message of June 8, 2021 12:33 pm:
> On Mon, 07 Jun 2021, André Almeida wrote:
> 
>>On 04/06/21 22:09, Nicholas Piggin wrote:
>>> Actually one other scalability thing while I remember it:
>>>
>>> futex_wait currently requires that the lock word is tested under the
>>> queue spin lock (to avoid consuming a wakeup). The problem with this is
>>> that the lock word can be a very hot cache line if you have a lot of
>>> concurrency, so accessing it under the queue lock can increase queue
>>> lock hold time.
>>>
>>> I would prefer if the new API was relaxed to avoid this restriction
>>> (e.g., any wait call may consume a wakeup so it's up to userspace to
>>> avoid that if it is a problem).
>>
>>Maybe I'm wrong, but AFAIK the goal of checking the lock word inside the
>>spin lock is to avoid sleeping forever (in other words, wrongly assuming
>>that the lock is taken and missing a wakeup call), not to avoid
>>consuming wakeups. Or at least this is my interpretation of this long
>>comment in futex.c:
>>
>>https://elixir.bootlin.com/linux/v5.12.9/source/kernel/futex.c#L51
> 
> I think what Nick is referring to is that futex_wait() could return 0
> instead of EAGAIN upon a uval != val condition if the check is done
> without the hb lock. The value could have changed between when userspace
> did the condition check and called into futex(2) to block in the slowpath.

I just mean the check could be done after queueing ourselves on the wait 
queue (and unlocking the waitqueue lock, not checking while holding the
lock). That is the standard pattern used everywhere else by the kernel:

  prepare_to_wait(&wq, &wait, TASK_INTERRUPTIBLE); /* -> lock; add_wait_queue; unlock; */
  if (!check_again())            /* re-check the condition after queueing */
          schedule();
  finish_wait(&wq, &wait);

It can still return EAGAIN if there is a reasonable use for it, but I'd
be wary about user code that cares about this -- it's racy: you could
arrive right before the value changes or right after it changes, so I
would be suspicious of any user code checking this (though I'm willing
to see a use case that really cares).

> 
> But such spurious scenarios should be pretty rare, and while I agree that
> the cacheline can be hot, I'm not sure how much of a performance issue this
> really is(?),

It's not a spurious anything. The problem is that contention on the
lock word cacheline means it can take a relatively long time just to
perform that one load instruction. Mandating that it must be done while
holding the lock translates to increased lock hold times.

This matters particularly in situations that have lock stealing,
optimistic spinning, reader-writer locks or more exotic kinds of things
that allow some common types of critical section to go through while
others are blocking. And particularly when such things hash collide with
other futexes that share the same hash lock.

> compared to other issues, certainly not to govern futex2
> design. Changing such semantics would be a _huge_ difference between futex1
> and futex2.

futex1 behaviour should not govern futex2 design. That's the only nice 
thing you get with an API change, so we should take full advantage of 
it. I'm not saying make changes for no reason, but I gave a reason, so 
that should be countered with a better reason to not change.

Thanks,
Nick

> 
> At least compared, for example, to the hb collisions serializing independent
> futexes, affecting both performance and determinism. And I agree that a new
> interface should address this problem - albeit most of the workloads I have
> seen in production use but a handful of futexes and larger thread counts.
> One thing that crossed my mind (but I have not actually sat down to look
> at it) would be to use rhashtables for the dynamic resizing, but of course
> that would probably add a decent amount of overhead to the simple hashing
> we currently have.
Andrey Semashev June 8, 2021, 11:03 a.m. UTC | #14
On 6/8/21 4:25 AM, Nicholas Piggin wrote:
> 
> Are shared pthread mutexes using existing pthread APIs that are today
> implemented okay with futex1 system call a good reason to constrain
> futex2 I wonder? Or do we have an opportunity to make a bigger change
> to the API so it suffers less from non deterministic latency (for
> example)?

If futex2 is not able to cover futex1 use cases then it cannot be viewed 
as a replacement. In the long term this means futex1 cannot be 
deprecated and has to be maintained. My impression was that futex1 was 
basically unmaintainable(*) and futex2 was an evolution of futex1 so 
that users of futex1 could migrate relatively easily and futex1 
eventually removed. Maybe my impression was wrong, but I would like to 
see futex2 as a replacement and extension of futex1, so the latter can 
be deprecated at some point.

In any case, creating a new API should consider requirements of its 
potential users. If futex2 is intended to eventually replace futex1 then 
all current futex1 users are potential users of futex2. If not, then the 
futex2 submission should list its intended users, at least in general 
terms, and their requirements that led to the proposed API design.

(*) I use "unmaintainable" in a broad sense here. It exists and works in 
newer kernel versions and may receive code changes that are necessary to 
keep it working, but maintainers refuse any extensions or modifications 
of the code, mostly because of its complexity.
Greg Kroah-Hartman June 8, 2021, 11:13 a.m. UTC | #15
On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
> On 6/8/21 4:25 AM, Nicholas Piggin wrote:
> > 
> > Are shared pthread mutexes using existing pthread APIs that are today
> > implemented okay with futex1 system call a good reason to constrain
> > futex2 I wonder? Or do we have an opportunity to make a bigger change
> > to the API so it suffers less from non deterministic latency (for
> > example)?
> 
> If futex2 is not able to cover futex1 use cases then it cannot be viewed as
> a replacement. In the long term this means futex1 cannot be deprecated and
> has to be maintained. My impression was that futex1 was basically
> unmaintainable(*) and futex2 was an evolution of futex1 so that users of
> futex1 could migrate relatively easily and futex1 eventually removed. Maybe
> my impression was wrong, but I would like to see futex2 as a replacement and
> extension of futex1, so the latter can be deprecated at some point.

You can never delete a kernel system call, so even if you "deprecate"
it, it still needs to be supported for forever.

Best of all would be if internally your "futex2" code would replace the
"futex1" code so that there is no two different code bases.  That would
be the only sane way forward, having 2 code bases to work with is just
insane.

> (*) I use "unmaintainable" in a broad sense here. It exists and works in
> newer kernel versions and may receive code changes that are necessary to
> keep it working, but maintainers refuse any extensions or modifications of
> the code, mostly because of its complexity.

Adding additional complexity for no good reason is not a good idea,
especially if you are asking others to maintain and support that
complexity.  Would you want to have to do that work?

So what's keeping the futex2 code from doing all that futex1 does so
that the futex1 code can be deleted internally?

thanks,

greg k-h
Peter Zijlstra June 8, 2021, 11:44 a.m. UTC | #16
On Tue, Jun 08, 2021 at 01:13:30PM +0200, Greg KH wrote:
> On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
> So what's keeping the futex2 code from doing all that futex1 does so
> that the futex1 code can be deleted internally?

I'd much rather see it the other way around. Much of what futex2 does
can already be done with the futex1 code-base. And then we can add
features on top.

I've been moaning about this for the past many versions, even older
versions actually implemented some of the new features in futex1,
showing it can be done.

We just wanted a saner futex interface because the existing futex
syscall is a monster and making it even worse seemed like a bad idea.

So *again*: please add the new syscalls on top of futex1 and then
extend.  Do not re-implement.
Andrey Semashev June 8, 2021, 12:06 p.m. UTC | #17
On 6/8/21 2:13 PM, Greg KH wrote:
> On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
>> On 6/8/21 4:25 AM, Nicholas Piggin wrote:
>>>
>>> Are shared pthread mutexes using existing pthread APIs that are today
>>> implemented okay with futex1 system call a good reason to constrain
>>> futex2 I wonder? Or do we have an opportunity to make a bigger change
>>> to the API so it suffers less from non deterministic latency (for
>>> example)?
>>
>> If futex2 is not able to cover futex1 use cases then it cannot be viewed as
>> a replacement. In the long term this means futex1 cannot be deprecated and
>> has to be maintained. My impression was that futex1 was basically
>> unmaintainable(*) and futex2 was an evolution of futex1 so that users of
>> futex1 could migrate relatively easily and futex1 eventually removed. Maybe
>> my impression was wrong, but I would like to see futex2 as a replacement and
>> extension of futex1, so the latter can be deprecated at some point.
> 
> You can never delete a kernel system call, so even if you "deprecate"
> it, it still needs to be supported for forever.

If I'm not mistaken, some syscalls were dropped from the kernel in the
past, after it was established they were no longer used. So it is not
impossible, though it might be more difficult specifically with futex.

> Best of all would be if internally your "futex2" code would replace the
> "futex1" code so that there is no two different code bases.  That would
> be the only sane way forward, having 2 code bases to work with is just
> insane.

Yes, implementing futex1 in terms of futex2 internally is a possible way 
forward. Though I'm not sure it is reasonable to require that to be done 
in the initial futex2 submission. This requires all of the futex1 
functionality to be implemented in futex2 from the start, which I think is 
too much to ask. Even with some futex1 features missing, futex2 would be 
already very much useful to users, and it is easier to implement the 
missing bits incrementally over time.

Also, one other point I'd like to make is that not all futex1 features 
might need to be reimplemented if futex2 provides a better alternative. 
For example, as a user, I would like to see a different approach to 
robust futexes that does not mandate a single user (libc) and allows
using robust futexes directly.

>> (*) I use "unmaintainable" in a broad sense here. It exists and works in
>> newer kernel versions and may receive code changes that are necessary to
>> keep it working, but maintainers refuse any extensions or modifications of
>> the code, mostly because of its complexity.
> 
> Adding additional complexity for no good reason is not a good idea,
> especially if you are asking others to maintain and support that
> complexity.  Would you want to have to do that work?
> 
> So what's keeping the futex2 code from doing all that futex1 does so
> that the futex1 code can be deleted internally?

I think, André will answer this, but my guess is, as stated above, this 
is a lot of work and time while the intermediate version is already useful.
Sebastian Andrzej Siewior June 8, 2021, 12:26 p.m. UTC | #18
On 2021-06-07 12:40:54 [-0300], André Almeida wrote:
> 
> When I first read Thomas proposal for per table process, I thought that
> the main goal there was to solve NUMA locality issues, not RT latency,
> but I think you are right. However, re-reading the thread at [0], it
> seems that the RT problems where not completely solved in that
> interface, maybe the people involved with that patchset can help to shed
> some light on it.
> 
> Otherwise, this same proposal could be integrated in futex2, given that
> we would only need to provide to userland some extra flags and add some
> `if`s around the hash table code (in a very similar way the NUMA code
> will be implemented in futex2).

There are slides at [0] describing some attempts and the kernel tree [1]
from that time.

The process-table solves the problem to some degree that two random
processes don't collide on the same hash bucket. But as Peter Zijlstra
pointed out back then two threads from the same task could collide on
the same hash bucket (and with ASLR not always). So the collision is
there but limited and this was not perfect.

All the attempts with API extensions didn't go well because glibc did
not want to change a bit. This starts with a mutex that has a static
initializer which has to work (I don't remember why the first
pthread_mutex_lock() could not fail with -ENOMEM but there was
something) and ends with glibc's struct mutex which is full and has no
room for additional data storage.

The additional data in the user's struct mutex + init would have the
benefit that, instead of uaddr (which is hashed for the in-kernel
lookup), a cookie could be used for a hash-less lookup (and a NUMA
pointer saying where the memory should be stored).
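
I.e. roughly this kind of layout (hypothetical, precisely because glibc's
struct is full and an init step would be needed):

  /* Hypothetical user-side mutex layout enabling a hash-less lookup. */
  struct mutex_with_cookie {
          unsigned int value;        /* the futex word, as today */
          unsigned long long cookie; /* kernel-assigned at init; looked up
                                      * directly instead of hashing uaddr,
                                      * could also encode the NUMA node */
  };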

So. We couldn't change a thing back then, so nothing happened. We
didn't want to create a new interface and a library implementing it plus
all the functionality around it (like pthread_cond, pthread_barrier, …).
Not to mention that if glibc continues to use the "old" locking
internally then the application is still affected by the hash-collision
locking (or the NUMA problem) should it block on the lock.

[0] https://linutronix.de/PDF/2016_futex_Siewior.pdf
[1] https://git.kernel.org/pub/scm/linux/kernel/git/bigeasy/linux-futex.git/

Sebastian
Greg Kroah-Hartman June 8, 2021, 12:33 p.m. UTC | #19
On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
> On 6/8/21 2:13 PM, Greg KH wrote:
> > On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
> > > On 6/8/21 4:25 AM, Nicholas Piggin wrote:
> > > > 
> > > > Are shared pthread mutexes using existing pthread APIs that are today
> > > > implemented okay with futex1 system call a good reason to constrain
> > > > futex2 I wonder? Or do we have an opportunity to make a bigger change
> > > > to the API so it suffers less from non deterministic latency (for
> > > > example)?
> > > 
> > > If futex2 is not able to cover futex1 use cases then it cannot be viewed as
> > > a replacement. In the long term this means futex1 cannot be deprecated and
> > > has to be maintained. My impression was that futex1 was basically
> > > unmaintainable(*) and futex2 was an evolution of futex1 so that users of
> > > futex1 could migrate relatively easily and futex1 eventually removed. Maybe
> > > my impression was wrong, but I would like to see futex2 as a replacement and
> > > extension of futex1, so the latter can be deprecated at some point.
> > 
> > You can never delete a kernel system call, so even if you "deprecate"
> > it, it still needs to be supported for forever.
> 
> If I'm not mistaken, some syscalls were dropped from the kernel in the past,
> after it was established they were no longer used. So it is not impossible,
> though it might be more difficult specifically with futex.

Those syscalls were all "compatible with other obsolete operating
system" syscalls from what I remember.  You can still run binaries
built in 1995 just fine on your system today (I have a few around here
somewhere...)

Thinking that you can drop futex() in the next 10+ years is very wishful
thinking given just how slow userspace applications are ever updated,
sorry.

greg k-h
Greg Kroah-Hartman June 8, 2021, 12:35 p.m. UTC | #20
On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
> On 6/8/21 2:13 PM, Greg KH wrote:
> > On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
> > > On 6/8/21 4:25 AM, Nicholas Piggin wrote:
> > > > 
> > > > Are shared pthread mutexes using existing pthread APIs that are today
> > > > implemented okay with futex1 system call a good reason to constrain
> > > > futex2 I wonder? Or do we have an opportunity to make a bigger change
> > > > to the API so it suffers less from non deterministic latency (for
> > > > example)?
> > > 
> > > If futex2 is not able to cover futex1 use cases then it cannot be viewed as
> > > a replacement. In the long term this means futex1 cannot be deprecated and
> > > has to be maintained. My impression was that futex1 was basically
> > > unmaintainable(*) and futex2 was an evolution of futex1 so that users of
> > > futex1 could migrate relatively easily and futex1 eventually removed. Maybe
> > > my impression was wrong, but I would like to see futex2 as a replacement and
> > > extension of futex1, so the latter can be deprecated at some point.
> > 
> > You can never delete a kernel system call, so even if you "deprecate"
> > it, it still needs to be supported for forever.
> 
> If I'm not mistaken, some syscalls were dropped from the kernel in the past,
> after it was established they were no longer used. So it is not impossible,
> though it might be more difficult specifically with futex.
> 
> > Best of all would be if internally your "futex2" code would replace the
> > "futex1" code so that there is no two different code bases.  That would
> > be the only sane way forward, having 2 code bases to work with is just
> > insane.
> 
> Yes, implementing futex1 in terms of futex2 internally is a possible way
> forward. Though I'm not sure it is reasonable to require that to be done in
> the initial futex2 submission. This requires all of the futex1 functionality
> to be implemented in futex2 from the start, which I think is too much to ask.
> Even with some futex1 features missing, futex2 would be already very much
> useful to users, and it is easier to implement the missing bits
> incrementally over time.

Then do it the other way around, as Peter points out.

> > So what's keeping the futex2 code from doing all that futex1 does so
> > that the futex1 code can be deleted internally?
> 
> I think, André will answer this, but my guess is, as stated above, this is a
> lot of work and time while the intermediate version is already useful.

useful to who?  I still do not understand what users will be needing
this.  All I can tell is a single userspace program wants to use it, and
that is a fork from the real project it was based on and that the
maintainers have no plan to merge it back.

So who does need/want this?

thanks,

greg k-h
Andrey Semashev June 8, 2021, 1:18 p.m. UTC | #21
On 6/8/21 3:35 PM, Greg KH wrote:
> On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
>> On 6/8/21 2:13 PM, Greg KH wrote:
> 
>>> So what's keeping the futex2 code from doing all that futex1 does so
>>> that the futex1 code can be deleted internally?
>>
>> I think, André will answer this, but my guess is, as stated above, this is a
>> lot of work and time while the intermediate version is already useful.
> 
> useful to who?  I still do not understand what users will be needing
> this.  All I can tell is a single userspace program wants to use it, and
> that is a fork from the real project it was based on and that the
> maintainers have no plan to merge it back.
> 
> So who does need/want this?

I mentioned C++ std::atomic and Boost.Atomic before. Those need variable 
sized futexes.

The project you mention is probably Wine and its derivatives. Those need 
variable sized futexes and a "wait for multiple" operation. I'm not sure 
about the "no plan to merge it back" part, I probably missed it in an 
earlier discussion. There are multiple different patches and versions 
out there, and I don't know which one it refers to. But WaitOnAddress 
and WaitForMultipleObjects APIs are very important and I would assume 
Wine wants to emulate those with best efficiency.

I have a media processing engine application that needs 64-bit futexes
and would benefit from a "wait for multiple" function. Its source code is
not open currently, so I'm not sure if you can count it as a valid user.

There is a generic std::lock algorithm in C++ and an equivalent in 
Boost.Thread that is supposed to lock multiple lockables (a mutex-like 
object). Those could benefit from the "wait for multiple" function in 
some cases, e.g. when the objects are actually futex-based mutexes, and 
the function can access the internal futex. I'm not saying this will 
definitely be implemented; it just looks like a possible optimization to me.

I think someone mentioned databases earlier in the discussion, but I 
don't know the details. I hope someone will be able to expand.
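
For all of these, the call site would look roughly like this (field and
syscall names as in this patchset; treat the exact prototype as
illustrative):

  /* Wait until either of two futexes is woken, whichever comes first. */
  struct futex_waitv waiters[2] = {
          { .uaddr = &a, .val = expected_a, .flags = FUTEX_32 },
          { .uaddr = &b, .val = expected_b, .flags = FUTEX_32 },
  };

  long ret = futex_waitv(waiters, 2, 0, NULL);
  /* On wakeup, ret is the index into waiters[] of the woken futex. */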
Greg Kroah-Hartman June 8, 2021, 1:27 p.m. UTC | #22
On Tue, Jun 08, 2021 at 04:18:42PM +0300, Andrey Semashev wrote:
> On 6/8/21 3:35 PM, Greg KH wrote:
> > On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
> > > On 6/8/21 2:13 PM, Greg KH wrote:
> > 
> > > > So what's keeping the futex2 code from doing all that futex1 does so
> > > > that the futex1 code can be deleted internally?
> > > 
> > > I think, André will answer this, but my guess is, as stated above, this is a
> > > lot of work and time while the intermediate version is already useful.
> > 
> > useful to who?  I still do not understand what users will be needing
> > this.  All I can tell is a single userspace program wants to use it, and
> > that is a fork from the real project it was based on and that the
> > maintainers have no plan to merge it back.
> > 
> > So who does need/want this?
> 
> I mentioned C++ std::atomic and Boost.Atomic before. Those need variable
> sized futexes.

And has anyone converted them to use this new api to see if it works
well or not?

As was pointed out to me numerous times when I tried to propose
readfile(), you need a real user that can show and prove it is needed
before we can take new syscalls, especially complex beasts like this
one.

thanks,

greg k-h
Andrey Semashev June 8, 2021, 1:41 p.m. UTC | #23
On 6/8/21 4:27 PM, Greg KH wrote:
> On Tue, Jun 08, 2021 at 04:18:42PM +0300, Andrey Semashev wrote:
>> On 6/8/21 3:35 PM, Greg KH wrote:
>>> On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
>>>> On 6/8/21 2:13 PM, Greg KH wrote:
>>>
>>>>> So what's keeping the futex2 code from doing all that futex1 does so
>>>>> that the futex1 code can be deleted internally?
>>>>
>>>> I think, André will answer this, but my guess is, as stated above, this is a
>>>> lot of work and time while the intermediate version is already useful.
>>>
>>> useful to who?  I still do not understand what users will be needing
>>> this.  All I can tell is a single userspace program wants to use it, and
>>> that is a fork from the real project it was based on and that the
>>> maintainers have no plan to merge it back.
>>>
>>> So who does need/want this?
>>
>> I mentioned C++ std::atomic and Boost.Atomic before. Those need variable
>> sized futexes.
> 
> And has anyone converted them to use this new api to see if it works
> well or not?
> 
> As was pointed out to me numerous times when I tried to propose
> readfile(), you need a real user that can show and prove it is needed
> before we can take new syscalls, especially complex beasts like this
> one.

André has mentioned that he tested the patch set with patched Wine and 
glibc.

I didn't patch Boost.Atomic or std::atomic, but it doesn't look to be 
problematic. The only difference it would make there is to enable 
futex2-based implementation for multiple atomic sizes and set up flags 
to indicate the futex size, instead of only enabling futex-based 
implementation for 32-bit atomics.
André Almeida June 8, 2021, 2:14 p.m. UTC | #24
Hi Greg,

On 08/06/21 08:13, Greg KH wrote:
> On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
>> On 6/8/21 4:25 AM, Nicholas Piggin wrote:
>>>
>>> Are shared pthread mutexes using existing pthread APIs that are today
>>> implemented okay with futex1 system call a good reason to constrain
>>> futex2 I wonder? Or do we have an opportunity to make a bigger change
>>> to the API so it suffers less from non deterministic latency (for
>>> example)?
>>
>> If futex2 is not able to cover futex1 use cases then it cannot be viewed as
>> a replacement. In the long term this means futex1 cannot be deprecated and
>> has to be maintained. My impression was that futex1 was basically
>> unmaintainable(*) and futex2 was an evolution of futex1 so that users of
>> futex1 could migrate relatively easily and futex1 eventually removed. Maybe
>> my impression was wrong, but I would like to see futex2 as a replacement and
>> extension of futex1, so the latter can be deprecated at some point.
> 
> You can never delete a kernel system call, so even if you "deprecate"
> it, it still needs to be supported for forever.
> 
> Best of all would be if internally your "futex2" code would replace the
> "futex1" code so that there is no two different code bases.  That would
> be the only sane way forward, having 2 code bases to work with is just
> insane.
> 
>> (*) I use "unmaintainable" in a broad sense here. It exists and works in
>> newer kernel versions and may receive code changes that are necessary to
>> keep it working, but maintainers refuse any extensions or modifications of
>> the code, mostly because of its complexity.
> 
> Adding additional complexity for no good reason is not a good idea,
> especially if you are asking others to maintain and support that
> complexity.  Would you want to have to do that work?
> 
> So what's keeping the futex2 code from doing all that futex1 does so
> that the futex1 code can be deleted internally?
> 

My very first submission of futex2[0] was just an overlay on top of
futex.c; I didn't get much feedback at that time, but I think this is
what you and Peter are thinking of?

After that, last year at Plumbers' RT MC, I presented a talk called
"futex2: A New Interface" and my conclusion after the discussion on this
talk + responses I got from my FUTEX_WAIT_MULTIPLE patchset[1] was that
this work couldn't be done in futex.c, given how fragile things are
there. futex.c would be "feature frozen" and no new major changes would
happen there.

This is the context where this new futex2 code base comes from. So,
which one is it? Happy to go either way but I'm getting conflicting
messages here.

Thanks,
	André

[0]
https://lore.kernel.org/lkml/20200612185122.327860-2-andrealmeid@collabora.com/

[1]
https://lore.kernel.org/lkml/20200213214525.183689-1-andrealmeid@collabora.com/

> thanks,
> 
> greg k-h
>
Peter Zijlstra June 8, 2021, 2:23 p.m. UTC | #25
On Tue, Jun 08, 2021 at 02:26:22PM +0200, Sebastian Andrzej Siewior wrote:
> On 2021-06-07 12:40:54 [-0300], André Almeida wrote:
> > 
> > When I first read Thomas' proposal for the per-process table, I thought that
> > the main goal there was to solve NUMA locality issues, not RT latency,
> > but I think you are right. However, re-reading the thread at [0], it
> > seems that the RT problems were not completely solved in that
> > interface; maybe the people involved with that patchset can help to shed
> > some light on it.
> > 
> > Otherwise, this same proposal could be integrated in futex2, given that
> > we would only need to provide to userland some extra flags and add some
> > `if`s around the hash table code (in a very similar way the NUMA code
> > will be implemented in futex2).
> 
> There are slides at [0] describing some attempts and the kernel tree [1]
> from that time.
> 
> The process-table solves the problem to some degree that two random
> processes don't collide on the same hash bucket. But as Peter Zijlstra
> pointed out back then two threads from the same task could collide on
> the same hash bucket (and with ASLR not always). So the collision is
> there but limited and this was not perfect.
> 
> All the attempts with API extensions didn't go well because glibc did
> not want to change a bit. This starts with a mutex that has a static
> initializer which has to work (I don't remember why the first
> pthread_mutex_lock() could not fail with -ENOMEM but there was
> something) and ends with glibc's struct mutex which is full and has no
> room for additional data storage.
> 
> The additional data in the user's struct mutex + init would have the
> benefit that, instead of uaddr (which is hashed for the in-kernel
> lookup), a cookie could be used for a hash-less lookup (and a NUMA
> pointer saying where the memory should be stored).
> 
> So. We couldn't change a thing back then, so nothing happened. We
> didn't want to create a new interface and a library implementing it plus
> all the functionality around it (like pthread_cond, pthread_barrier, …).
> Not to mention that if glibc continues to use the "old" locking
> internally then the application is still affected by the hash-collision
> locking (or the NUMA problem) should it block on the lock.

There's more futex users than glibc, and some of them are really hurting
because of the NUMA issue. Oracle used to (I've no idea what they do or
do not do these days) use sysvsem because the futex hash table was a
massive bottleneck for them.

And as Nick said, other vendors are having the same problems.

And if you don't extend the futex to store the nid you put the waiter in
(see all the problems above) you will have to do wakeups on all nodes,
which is both slower than it is today, and scales possibly even worse.

The whole numa-aware qspinlock saga is in part because of futex.


That said; if we're going to do the whole futex-vector thing, we really
do need a new interface, because the futex multiplex monster is about to
crumble (see the fun wrt timeouts for example).

And if we're going to do a new interface, we ought to make one that can
solve all these problems. Now, ideally glibc will bring forth some
opinions, but if they don't want to play, we'll go back to the good old
days of non-standard locking libraries... we're halfway there already due
to glibc not wanting to break with POSIX where we know POSIX was just
dead wrong.

See: https://github.com/dvhart/librtpi
Davidlohr Bueso June 8, 2021, 2:31 p.m. UTC | #26
On Tue, 08 Jun 2021, Peter Zijlstra wrote:

>On Tue, Jun 08, 2021 at 01:13:30PM +0200, Greg KH wrote:
>> On Tue, Jun 08, 2021 at 02:03:50PM +0300, Andrey Semashev wrote:
>> So what's keeping the futex2 code from doing all that futex1 does so
>> that the futex1 code can be deleted internally?
>
>I'd much rather see it the other way around. Much of what futex2 does
>can already be done with the futex1 code-base. And then we can add
>features on top.

Yeah, furthermore, how can futex2 be thought of as replacing futex1 in
the future when the PI aspects haven't even been mentioned?

Thanks,
Davidlohr
Sebastian Andrzej Siewior June 8, 2021, 2:57 p.m. UTC | #27
On 2021-06-08 16:23:45 [+0200], Peter Zijlstra wrote:
> There are more futex users than glibc, and some of them are really hurting
> because of the NUMA issue. Oracle used to (I've no idea what they do or
> do not do these days) use sysvsem because the futex hash table was a
> massive bottleneck for them.
> 
> And as Nick said, other vendors are having the same problems.

I just wanted to do a brief summary of the earlier events. The
implementation tglx did, with the cookie providing a quick lookup, did
not have any downsides except that the user API had to change, which
glibc couldn't do. So if we are back to square one, why not start with that?
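
To illustrate the shape of that idea (every name below is invented; this
is not tglx's actual API): registration returns a cookie indexing
per-process kernel state, and later operations pass the cookie so uaddr
never has to be hashed.

#include <stdint.h>

/* Hypothetical sketch of a cookie-based, hash-less lookup. */
struct futex_register_args {
	void *uaddr;		/* the futex word in userspace */
	uint32_t node;		/* preferred NUMA node for kernel state */
	uint64_t cookie;	/* out: index into a per-process table */
};

/*
 * Flow (pseudo-syscalls, not real ones):
 *   futex_register(&args);           kernel allocates state, fills cookie
 *   futex_wait_cookie(args.cookie, expected_val, timeout);
 *   futex_wake_cookie(args.cookie, nr_wake);
 * The cookie goes straight to the task's own table entry, so unrelated
 * locks can never collide the way they do in the shared global hash.
 */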

> And if you don't extend the futex to store the nid you put the waiter in
> (see all the problems above) you will have to do wakeups on all nodes,
> which is both slower than it is today, and scales possibly even worse.
> 
> The whole numa-aware qspinlock saga is in part because of futex.

sure.

> That said; if we're going to do the whole futex-vector thing, we really
> do need a new interface, because the futex multiplex monster is about to
> crumble (see the fun wrt timeouts for example).

This might have been a series of unfortunate events leading to this. The
sad part is that glibc has a comment saying the kernel does not support
this, and nobody bothered to change it (until recently).

> And if we're going to do a new interface, we ought to make one that can
> solve all these problems. Now, ideally glibc will bring forth some
> opinions, but if they don't want to play, we'll go back to the good old
> days of non-standard locking libraries.. we're halfway there already due
> to glibc not wanting to break with POSIX where we know POSIX is just
> dead wrong.
> 
> See: https://github.com/dvhart/librtpi

I'm aware of that, I hacked on it, too :) It was the unfortunate result
of an ~8-year-old bug which was not fixed; instead, part of the code was
rewritten and a bit-spinlock was added in userland. You may remember the
discussion regarding spins in userland…
That said, REQUEUE_PI is no longer used by glibc.

Sebastian
André Almeida June 8, 2021, 3:04 p.m. UTC | #28
On 08/06/21 11:23, Peter Zijlstra wrote:
> On Tue, Jun 08, 2021 at 02:26:22PM +0200, Sebastian Andrzej Siewior wrote:
>> On 2021-06-07 12:40:54 [-0300], André Almeida wrote:
>>>
>>> When I first read Thomas' proposal for a per-process table, I thought
>>> that the main goal there was to solve NUMA locality issues, not RT
>>> latency, but I think you are right. However, re-reading the thread at
>>> [0], it seems that the RT problems were not completely solved in that
>>> interface; maybe the people involved with that patchset can help shed
>>> some light on it.
>>>
>>> Otherwise, this same proposal could be integrated in futex2, given that
>>> we would only need to provide userland with some extra flags and add
>>> some `if`s around the hash table code (in a very similar way to how
>>> the NUMA code will be implemented in futex2).
>>
>> There are slides at [0] describing some attempts and the kernel tree [1]
>> from that time.
>>
>> The per-process table solves, to some degree, the problem of two random
>> processes colliding on the same hash bucket. But as Peter Zijlstra
>> pointed out back then, two threads from the same task could still
>> collide on the same hash bucket (and with ASLR, not always). So the
>> collision is still there, just limited, and this was not perfect.
>>
>> All the attempts with API extensions didn't go well because glibc did
>> not want to change a bit. This starts with a mutex that has a static
>> initializer which has to work (I don't remember exactly why the first
>> pthread_mutex_lock() could not fail with -ENOMEM, but there was
>> something) and ends with glibc's struct mutex, which is full and has no
>> room for additional data storage.
>>
>> The additional data in the user's struct mutex, plus an init call,
>> would have the benefit that instead of uaddr (which is hashed for the
>> in-kernel lookup) a cookie could be used for a hash-less lookup (plus a
>> NUMA hint saying where the kernel-side memory should be stored).
>>
>> So: we couldn't change a thing back then, so nothing happened. We
>> didn't want to create a new interface and a library implementing it plus
>> all the functionality around it (like pthread_cond, pthread_barrier, …).
>> Not to mention that if glibc continues to use the "old" locking
>> internally, then the application is still affected by the hash-collision
>> locking (or the NUMA problem) should it block on the lock.
> 
> There are more futex users than glibc, and some of them are really hurting
> because of the NUMA issue. Oracle used to (I've no idea what they do or
> do not do these days) use sysvsem because the futex hash table was a
> massive bottleneck for them.
> 
> And as Nick said, other vendors are having the same problems.

Since we're talking about NUMA: which userspace communities would be
able to provide feedback on the futex2() NUMA-aware feature, to check
whether this interface would help solve those issues?

> 
> And if you don't extend the futex to store the nid you put the waiter in
> (see all the problems above) you will have to do wakeups on all nodes,
> which is both slower than it is today, and scales possibly even worse.
> 
> The whole numa-aware qspinlock saga is in part because of futex.
> 
> 
> That said; if we're going to do the whole futex-vector thing, we really
> do need a new interface, because the futex multiplex monster is about to
> crumble (see the fun wrt timeouts for example).
> 
> And if we're going to do a new interface, we ought to make one that can
> solve all these problems. Now, ideally glibc will bring forth some
> opinions, but if they don't want to play, we'll go back to the good old
> days of non-standard locking libraries.. we're halfway there already due
> to glibc not wanting to break with POSIX where we know POSIX is just
> dead wrong.
> 
> See: https://github.com/dvhart/librtpi
Zebediah Figura June 8, 2021, 5:06 p.m. UTC | #29
On 6/8/21 8:18 AM, Andrey Semashev wrote:
> On 6/8/21 3:35 PM, Greg KH wrote:
>> On Tue, Jun 08, 2021 at 03:06:48PM +0300, Andrey Semashev wrote:
>>> On 6/8/21 2:13 PM, Greg KH wrote:
>>
>>>> So what's keeping the futex2 code from doing all that futex1 does so
>>>> that the futex1 code can be deleted internally?
>>>
>>> I think André will answer this, but my guess is that, as stated
>>> above, this is a lot of work and time, while the intermediate
>>> version is already useful.
>>
>> Useful to whom?  I still do not understand which users will be needing
>> this.  All I can tell is that a single userspace program wants to use
>> it, and that it is a fork of the real project it was based on, and that
>> the maintainers have no plan to merge it back.
>>
>> So who does need/want this?
> 
> I mentioned C++ std::atomic and Boost.Atomic before. Those need
> variable-sized futexes.
> 
> The project you mention is probably Wine and its derivatives. Those
> need variable-sized futexes and a "wait for multiple" operation. I'm
> not sure about the "no plan to merge it back" part; I probably missed
> it in an earlier discussion. There are multiple different patches and
> versions out there, and I don't know which one it refers to. But the
> WaitOnAddress and WaitForMultipleObjects APIs are very important, and I
> would assume Wine wants to emulate those with the best efficiency.

See [0]. The short version is that we can't use futexes the way that 
out-of-tree patch set does, due to compatibility and robustness 
problems. I wrote said patch set and I'm currently working on a 
different solution for upstreaming.

We also can't exactly use futexes to back WaitOnAddress() directly. We 
actually do currently, but for various complex reasons that needs to 
change, and none of the proposals for futex2 help the new implementation.

Farewell,
Zebediah

[0] https://lore.kernel.org/lkml/dab34fd2-b494-8686-bcd7-68beeba4f386@gmail.com/
Adhemerval Zanella June 8, 2021, 6:08 p.m. UTC | #30
> All the attempts with API extensions didn't go well because glibc did
> not want to change a bit. This starts with a mutex that has a static
> initializer which has to work (I don't remember exactly why the first
> pthread_mutex_lock() could not fail with -ENOMEM, but there was
> something) and ends with glibc's struct mutex, which is full and has no
> room for additional data storage.

Yes, we have binary compatibility constraints that prevent us from simply
breaking old binaries.  This is quite true for static initialization,
which imposes even harder constraints, different from the pthread_mutex_t
size, which we can work around with symbol versioning.  But even then we
hear from users that our pthread_mutex_t is still way too large, especially
for fine-grained locking, so I am not sure we want to extend it.
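
The static-initializer constraint, concretely (a plain POSIX sketch,
nothing glibc-internal):

#include <pthread.h>

/* A mutex must be fully usable from a compile-time initializer: there
 * is no init call at which the kernel could allocate per-lock state
 * (a table slot, NUMA-local storage) and report failure. */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void critical_section(void)
{
	pthread_mutex_lock(&m);		/* first use; no -ENOMEM path here */
	/* ... */
	pthread_mutex_unlock(&m);
}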

> That said; if we're going to do the whole futex-vector thing, we really
> do need a new interface, because the futex multiplex monster is about to
> crumble (see the fun wrt timeouts for example).
> 
> And if we're going to do a new interface, we ought to make one that can
> solve all these problems. Now, ideally glibc will bring forth some
> opinions, but if they don't want to play, we'll go back to the good old
> days of non-standard locking libraries.. we're halfway there already due
> to glibc not wanting to break with POSIX where we know POSIX is just
> dead wrong.
> 
> See: https://github.com/dvhart/librtpi

You are right, we don't really want to break POSIX requirements in this
regard, because users constantly come up with scenarios where they do
expect our implementation to be conformant [1].  And even now, there are
cases we don't get fully right [2], and it is really hard to fix such issues.

If I recall correctly from a Plumbers presentation a couple of years ago
about librtpi, the presenters stated that their implementation does not
follow the POSIX standard by design.  It suits their required work, but
it is not what we really aim for in glibc.  We *might* try to provide it
as an extension, but even then I am not sure it would be fully possible
due to API constraints.

So, regarding futex2, we might try to support it eventually; but if this
newer interface is not really a superset of futex1, we won't put much
effort into it.  Supporting a newer syscall requires extra effort from
glibc: we need to keep a fallback for older kernels, and it also imposes
runtime costs.

Also, currently we don't have a specific usage.  The proposed patch to
add 'pthread_mutex_lock_any' and 'pthread_mutex_timedlock_any' [3] also
did not give much detail on real-world usages or how it could be
leveraged.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=13165
[2] https://sourceware.org/bugzilla/show_bug.cgi?id=25847
[3] https://sourceware.org/pipermail/libc-alpha/2019-July/105422.html
Florian Weimer June 8, 2021, 6:19 p.m. UTC | #31
* Adhemerval Zanella:

> Also, currently we don't have a specific usage.  The proposed patch to
> add 'pthread_mutex_lock_any' and 'pthread_mutex_timedlock_any' [3] also
> did not give much detail on real-world usages or how it could be
> leveraged.

The current rwlock implementation in glibc uses a torn 32-bit futex read
which is part of an atomically updated 64-bit word.  That's just really,
really ugly, and I suspect we could make that go away with futex2.
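
The shape of the problem, as an illustrative sketch (not glibc's actual
field layout):

#include <stdatomic.h>
#include <stdint.h>

/* Illustrative only: the lock state is one 64-bit word updated with
 * 64-bit atomics, but futex() can only wait on 32 bits, so the waiter
 * does a torn 32-bit read of one half (which half depends on
 * endianness). A 64-bit futex would let the wait cover the whole word. */
union rwlock_state {
	_Atomic uint64_t whole;		/* what the algorithm updates */
	struct {
		uint32_t futex_half;	/* what futex() actually waits on */
		uint32_t other_half;
	} parts;
};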

Thanks,
Florian
Adhemerval Zanella June 8, 2021, 6:22 p.m. UTC | #32
On 08/06/2021 15:19, Florian Weimer wrote:
> * Adhemerval Zanella:
> 
>> Also, currently we don't have a specific usage.  The proposed patch to
>> add 'pthread_mutex_lock_any' and 'pthread_mutex_timedlock_any' [3] also
>> did not give much detail on real-world usages or how it could be
>> leveraged.
> 
> The current rwlock implementation in glibc uses a torn 32-bit futex read
> which is part of an atomically updated 64-bit word.  That's just really,
> really ugly, and I suspect we could make that go away with futex2.

You are right, I had in mind the multiple-wait proposed by this patch
and by the glibc RFC one.  Not only rwlock: the POSIX semaphore might
also be simplified on 32 bits, I think, along the lines sketched below.
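
A hypothetical sketch of that simplification with a 64-bit futex (the
layout is invented here, not a glibc plan): value and waiter count live
in one atomically updated word that a 64-bit wait could cover directly,
instead of the split 32-bit protocol needed today on 32-bit targets.

#include <stdatomic.h>
#include <stdint.h>

struct sem64 {
	/* low 32 bits: semaphore value; high 32 bits: nwaiters */
	_Atomic uint64_t state;
};

#define SEM_VALUE(s)	((uint32_t)(s))
#define SEM_NWAITERS(s)	((uint32_t)((s) >> 32))

/* A waiter would observe 'state', bump nwaiters with one 64-bit CAS,
 * and then (hypothetically) futex-wait on the whole 64-bit word. */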
David Laight June 9, 2021, 4:26 p.m. UTC | #33
From: Peter Zijlstra
> Sent: 08 June 2021 15:24
...
> And if we're going to do a new interface, we ought to make one that can
> solve all these problems. Now, ideally glibc will bring forth some
> opinions, but if they don't want to play, we'll go back to the good old
> days of non-standard locking libraries.. we're halfway there already due
> to glibc not wanting to break with POSIX where we know POSIX is just
> dead wrong.

I had a problem with pthread_cond_broadcast().
I've got multiple threads, each bound to a different cpu, and
I really do want to wake them all up at once.
Now, I know they'll spin on the mutex for a bit - but that
is soon released (the 'adaptive' spin is probably long enough).

But what actually happens (probably because of the way glibc
is constrained by the futex system call) is:
1) The first thread is woken.
2) Its cpu comes out of sleep.
3) Once running, it wakes the second thread.
4) Its cpu comes out of sleep.
...
So you get a very slow ripple of the threads starting.

Worse still, if a thread can't be scheduled (e.g. because
its cpu is running all the network-stack softirq code)
then none of the later threads start running.

I've mitigated it by using a separate cv for each thread
and looping to wake them all - but it shouldn't really
be necessary.
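
Roughly what that workaround looks like (a sketch of the mitigation
described above; names and the thread count are illustrative):

#include <pthread.h>

#define NTHREADS 8

/* One cv (and lock) per bound thread: the waker signals each thread's
 * private cv in a loop, so every wakeup is issued immediately instead
 * of rippling thread-by-thread through a shared cv's mutex. */
struct worker {
	pthread_mutex_t lock;
	pthread_cond_t  cv;
	int             go;
};

static struct worker workers[NTHREADS];

static void workers_init(void)
{
	for (int i = 0; i < NTHREADS; i++) {
		pthread_mutex_init(&workers[i].lock, NULL);
		pthread_cond_init(&workers[i].cv, NULL);
		workers[i].go = 0;
	}
}

static void wake_all(void)
{
	for (int i = 0; i < NTHREADS; i++) {
		pthread_mutex_lock(&workers[i].lock);
		workers[i].go = 1;
		pthread_cond_signal(&workers[i].cv);
		pthread_mutex_unlock(&workers[i].lock);
	}
}

static void wait_for_go(struct worker *w)
{
	pthread_mutex_lock(&w->lock);
	while (!w->go)
		pthread_cond_wait(&w->cv, &w->lock);
	w->go = 0;
	pthread_mutex_unlock(&w->lock);
}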

	David
