{"id":2226303,"url":"http://patchwork.ozlabs.org/api/1.2/patches/2226303/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.11@forge-stage.sourceware.org/","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/1.2/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.11@forge-stage.sourceware.org>","list_archive_url":null,"date":"2026-04-22T10:44:26","name":"[v1,11/16] libstdc++: Move atomic wait/notify entry points into the library","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"283371cecfcdac1a0bd7d07a4717b5aecad8c031","submitter":{"id":93210,"url":"http://patchwork.ozlabs.org/api/1.2/people/93210/?format=json","name":"Jonathan Wakely via Sourceware Forge","email":"forge-bot+redi@forge-stage.sourceware.org"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.11@forge-stage.sourceware.org/mbox/","series":[{"id":500987,"url":"http://patchwork.ozlabs.org/api/1.2/series/500987/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/list/?series=500987","date":"2026-04-22T10:44:17","name":"atomic wait/notify ABI 
stabilization","version":1,"mbox":"http://patchwork.ozlabs.org/series/500987/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2226303/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2226303/checks/","tags":{},"related":[],"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=38.145.34.32; helo=vm01.sourceware.org;\n envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org; dmarc=none (p=none dis=none)\n header.from=forge-stage.sourceware.org","sourceware.org;\n spf=pass smtp.mailfrom=forge-stage.sourceware.org","server2.sourceware.org;\n arc=none smtp.remote-ip=38.145.34.39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org [38.145.34.32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g0xf82qTmz1yD5\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 21:22:40 +1000 (AEST)","from vm01.sourceware.org (localhost [127.0.0.1])\n\tby sourceware.org (Postfix) with ESMTP id 5C1E64314212\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 11:22:38 +0000 (GMT)","from forge-stage.sourceware.org (vm08.sourceware.org [38.145.34.39])\n by sourceware.org (Postfix) with ESMTPS id 03282409599E\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 10:46:09 +0000 (GMT)","from forge-stage.sourceware.org (localhost [IPv6:::1])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange x25519 server-signature ECDSA (prime256v1) 
server-digest SHA256)\n (No client certificate requested)\n by forge-stage.sourceware.org (Postfix) with ESMTPS id 1838B42BBA\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 10:46:02 +0000 (UTC)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org 5C1E64314212","OpenDKIM Filter v2.11.0 sourceware.org 03282409599E"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 03282409599E","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org 03282409599E","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1776854769; cv=none;\n b=KxxiMAk9nll/E/iy5gU7OY8nXSSsa6JFoMvDUSjXpsrSXuyoCsUckGLhokKyZs9YIBvag83bfHSJ8C+RxUILNJD+JuJVAoFuX0KXlPzPswiVT8P5oIm1AX3I3rWH4uyZ5qPjqo2ZQbEjuKSSWHJ0N7xDfAR6h2O9Na7cOboNJz4=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1776854769; c=relaxed/simple;\n bh=myEqJbLxuMswlZymhJrLm8oJ9VnOHpRQcxk5Tp+Wz7I=;\n h=From:Date:Subject:To:Message-ID;\n b=ZWyKOd6cu7C1tai65sSMkBt/NJevl7sWgLXagiBWYqhbnvXNIcS7cWXDbYRWLztYi67EZyqUJ6Hu9VolX177b+Z/ClWSHSjJLn/MMHSljk9EbSTRq8Ad8rVPWVoCwUUzdc4cyNQU7oUDPucDu2hYTm4yYDPg84NaZCRsbmqgqMI=","ARC-Authentication-Results":"i=1; server2.sourceware.org","From":"Jonathan Wakely via Sourceware Forge\n <forge-bot+redi@forge-stage.sourceware.org>","Date":"Wed, 22 Apr 2026 10:44:26 +0000","Subject":"[PATCH v1 11/16] libstdc++: Move atomic wait/notify entry points into\n the library","To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>","Message-ID":"\n <bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.11@forge-stage.sourceware.org>","X-Mailer":"batrachomyomachia","X-Pull-Request-Organization":"gcc","X-Pull-Request-Repository":"gcc-TEST","X-Pull-Request":"https://forge.sourceware.org/gcc/gcc-TEST/pulls/31","References":"\n <bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.0@forge-stage.sourceware.org>","In-Reply-To":"\n <bmm.hhubrmqub2.gcc.gcc-TEST.redi.31.1.0@forge-stage.sourceware.org>","X-Patch-URL":"\n 
https://forge.sourceware.org/redi/gcc/commit/1fe264f2bb58a36e94585faf97821886989b9fe2","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Reply-To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>, redi@gcc.gnu.org","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"From: Jonathan Wakely <jwakely@redhat.com>\n\nThis moves the implementation details of atomic wait/notify functions\ninto the library, so that only a small API surface is exposed to users.\n\nThis also fixes some race conditions present in the design for proxied\nwaits:\n\n- The stores to _M_ver in __notify_impl must be protected by the mutex,\n  and the loads from _M_ver in __wait_impl and __wait_until_impl to\n  check for changes must also be protected by the mutex. This ensures\n  that checking _M_ver for updates and waiting on the condition_variable\n  happens atomically. Otherwise it's possible to have: _M_ver == old\n  happens-before {++_M_ver; cv.notify;} which happens-before cv.wait.\n  That scenario results in a missed notification, and so the waiting\n  function never wakes. 
This wasn't a problem for Linux, because the\n  futex wait call re-checks the _M_ver value before sleeping, so the\n  increment cannot interleave between the check and the wait.\n\n- The initial load from _M_ver that reads the 'old' value used for the\n  _M_ver == old checks must be done before loading and checking the\n  value of the atomic variable. Otherwise it's possible to have:\n  var.load() == val happens-before {++_M_ver; _M_cv.notify_all();}\n  happens-before {old = _M_ver; lock mutex; if (_M_ver == old) cv.wait}.\n  This results in the waiting thread seeing the already-incremented\n  value of _M_ver and then waiting for it to change again, which doesn't\n  happen. This race was present even for Linux, because using a futex\n  instead of mutex+condvar doesn't prevent the increment from happening\n  before the waiting threads checks for the increment.\n\nThe first race can be solved locally in the waiting and notifying\nfunctions, by acquiring the mutex lock earlier in the function. The\nsecond race cannot be fixed locally, because the load of the atomic\nvariable and the check for updates to _M_ver happen in different\nfunctions (one in a function template in the headers and one in the\nlibrary). We do have an _M_old data member in the __wait_args_base\nstruct which was previously only used for non-proxy waits using a futex.\nWe can add a new entry point into the library to look up the waitable\nstate for the address and then load its _M_ver into the _M_old member.\nThis allows the inline function template to ensure that loading _M_ver\nhappens-before testing whether the atomic variable has been changed, so\nthat we can reliably tell if _M_ver changes after we've already tested\nthe atomic variable. This isn't 100% reliable, because _M_ver could be\nincremented 2^32 times and wrap back to the same value, but that seems\nunlikely in practice. 
If/when we support waiting on user-defined\npredicates (which could execute long enough for _M_ver to wrap) we might\nwant to always wait with a timeout, so that we get a chance to re-check\nthe predicate even in the rare case that _M_ver wraps.\n\nAnother change is to make the __wait_until_impl function take a\n__wait_clock_t::duration instead of a __wait_clock_t::time_point, so\nthat the __wait_until_impl function doesn't depend on the symbol name of\nchrono::steady_clock. Inside the library it can be converted back to a\ntime_point for the clock. This would potentially allow using a different\nclock, if we made a different __abi_version in the __wait_args imply\nwaiting with a different clock.\n\nThis also adds a void* to the __wait_args_base structure, so that\n__wait_impl can store the __waitable_state* in there the first time it's\nlooked up for a given wait, so that it doesn't need to be retrieved\nagain on each loop. This requires passing the __wait_args_base structure\nby non-const reference.\n\nThe __waitable_state::_S_track function can be removed now that it's all\ninternal to the library, and namespace-scope RAII types added for\nlocking and tracking contention.\n\nlibstdc++-v3/ChangeLog:\n\n\t* config/abi/pre/gnu.ver: Add new symbol exports.\n\t* include/bits/atomic_timed_wait.h (__platform_wait_until): Move\n\tto atomic.cc.\n\t(__cond_wait_until, __spin_until_impl): Likewise.\n\t(__wait_until_impl): Likewise. Change __wait_args_base parameter\n\tto non-const reference and change third parameter to\n\t__wait_clock_t::duration.\n\t(__wait_until): Change __wait_args_base parameter to non-const\n\treference. Change Call time_since_epoch() to get duration from\n\ttime_point.\n\t(__wait_for): Change __wait_args_base parameter to non-const\n\treference.\n\t(__atomic_wait_address_until): Call _M_prep_for_wait_on on args.\n\t(__atomic_wait_address_for): Likewise.\n\t(__atomic_wait_address_until_v): Qualify call to avoid ADL. 
Do\n\tnot forward __vfn.\n\t* include/bits/atomic_wait.h (__platform_wait_uses_type): Use\n\talignof(T) not alignof(T*).\n\t(__futex_wait_flags, __platform_wait, __platform_notify)\n\t(__waitable_state, __spin_impl, __notify_impl): Move to\n\tatomic.cc.\n\t(__wait_impl): Likewise. Change __wait_args_base parameter to\n\tnon-const reference.\n\t(__wait_args_base::_M_wait_state): New data member.\n\t(__wait_args_base::_M_prep_for_wait_on): New member function.\n\t(__wait_args_base::_M_load_proxy_wait_val): New member\n\tfunction.\n\t(__wait_args_base::_S_memory_order_for): Remove member function.\n\t(__atomic_wait_address): Call _M_prep_for_wait_on on args.\n\t(__atomic_wait_address_v): Qualify call to avoid ADL.\n\t* src/c++20/Makefile.am: Add new file.\n\t* src/c++20/Makefile.in: Regenerate.\n\t* src/c++20/atomic.cc: New file.\n\t* testsuite/17_intro/headers/c++1998/49745.cc: Remove XFAIL for\n\tC++20 and later.\n\t* testsuite/29_atomics/atomic/wait_notify/100334.cc: Remove use\n\tof internal implementation details.\n---\n libstdc++-v3/config/abi/pre/gnu.ver           |   6 +\n libstdc++-v3/include/bits/atomic_timed_wait.h | 164 +-----\n libstdc++-v3/include/bits/atomic_wait.h       | 312 ++----------\n libstdc++-v3/src/c++20/Makefile.am            |   2 +-\n libstdc++-v3/src/c++20/Makefile.in            |   4 +-\n libstdc++-v3/src/c++20/atomic.cc              | 468 ++++++++++++++++++\n .../17_intro/headers/c++1998/49745.cc         |   2 -\n .../29_atomics/atomic/wait_notify/100334.cc   |   2 +\n 8 files changed, 532 insertions(+), 428 deletions(-)\n create mode 100644 libstdc++-v3/src/c++20/atomic.cc","diff":"diff --git a/libstdc++-v3/config/abi/pre/gnu.ver b/libstdc++-v3/config/abi/pre/gnu.ver\nindex 29bc7d86256e..6ae0144335b1 100644\n--- a/libstdc++-v3/config/abi/pre/gnu.ver\n+++ b/libstdc++-v3/config/abi/pre/gnu.ver\n@@ -2544,6 +2544,12 @@ GLIBCXX_3.4.34 {\n     # void std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> 
>::_M_construct<bool>(char const*, size_t)\n     # and wide char version\n     _ZNSt7__cxx1112basic_stringI[cw]St11char_traitsI[cw]ESaI[cw]EE12_M_constructILb[01]EEEvPK[cw][jmy];\n+\n+    _ZNSt8__detail11__wait_implEPKvRNS_16__wait_args_baseE;\n+    _ZNSt8__detail13__notify_implEPKvbRKNS_16__wait_args_baseE;\n+    _ZNSt8__detail17__wait_until_implEPKvRNS_16__wait_args_baseERKNSt6chrono8durationI[lx]St5ratioIL[lx]1EL[lx]1000000000EEEE;\n+    _ZNSt8__detail11__wait_args22_M_load_proxy_wait_valEPKv;\n+\n } GLIBCXX_3.4.33;\n \n # Symbols in the support library (libsupc++) have their own tag.\ndiff --git a/libstdc++-v3/include/bits/atomic_timed_wait.h b/libstdc++-v3/include/bits/atomic_timed_wait.h\nindex 19a0225c63b2..3e25607b7d4c 100644\n--- a/libstdc++-v3/include/bits/atomic_timed_wait.h\n+++ b/libstdc++-v3/include/bits/atomic_timed_wait.h\n@@ -37,7 +37,6 @@\n #include <bits/atomic_wait.h>\n \n #if __glibcxx_atomic_wait\n-#include <bits/functional_hash.h>\n #include <bits/this_thread_sleep.h>\n #include <bits/chrono.h>\n \n@@ -78,154 +77,25 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \n #ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n #define _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n-    // returns true if wait ended before timeout\n-    bool\n-    __platform_wait_until(const __platform_wait_t* __addr,\n-\t\t\t  __platform_wait_t __old,\n-\t\t\t  const __wait_clock_t::time_point& __atime) noexcept\n-    {\n-      auto __s = chrono::time_point_cast<chrono::seconds>(__atime);\n-      auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);\n-\n-      struct timespec __rt =\n-\t{\n-\t  static_cast<std::time_t>(__s.time_since_epoch().count()),\n-\t  static_cast<long>(__ns.count())\n-\t};\n-\n-      auto __e = syscall (SYS_futex, __addr,\n-\t\t\t  static_cast<int>(__futex_wait_flags::__wait_bitset_private),\n-\t\t\t  __old, &__rt, nullptr,\n-\t\t\t  static_cast<int>(__futex_wait_flags::__bitset_match_any));\n-      if (__e)\n-\t{\n-\t  if (errno == ETIMEDOUT)\n-\t    return 
false;\n-\t  if (errno != EINTR && errno != EAGAIN)\n-\t    __throw_system_error(errno);\n-\t}\n-      return true;\n-    }\n #else\n // define _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT and implement __platform_wait_until\n // if there is a more efficient primitive supported by the platform\n // (e.g. __ulock_wait) which is better than pthread_cond_clockwait.\n #endif // ! HAVE_LINUX_FUTEX\n \n-#ifdef _GLIBCXX_HAS_GTHREADS\n-    // Returns true if wait ended before timeout.\n-    inline bool\n-    __cond_wait_until(__condvar& __cv, mutex& __mx,\n-\t\t      const __wait_clock_t::time_point& __atime)\n-    {\n-      auto __s = chrono::time_point_cast<chrono::seconds>(__atime);\n-      auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);\n-\n-      __gthread_time_t __ts =\n-\t{\n-\t  static_cast<std::time_t>(__s.time_since_epoch().count()),\n-\t  static_cast<long>(__ns.count())\n-\t};\n-\n-#ifdef _GLIBCXX_USE_PTHREAD_COND_CLOCKWAIT\n-      if constexpr (is_same_v<chrono::steady_clock, __wait_clock_t>)\n-\t__cv.wait_until(__mx, CLOCK_MONOTONIC, __ts);\n-      else\n-#endif\n-\t__cv.wait_until(__mx, __ts);\n-      return __wait_clock_t::now() < __atime;\n-    }\n-#endif // _GLIBCXX_HAS_GTHREADS\n-\n-    inline __wait_result_type\n-    __spin_until_impl(const __platform_wait_t* __addr,\n-\t\t      const __wait_args_base& __args,\n-\t\t      const __wait_clock_t::time_point& __deadline)\n-    {\n-      auto __t0 = __wait_clock_t::now();\n-      using namespace literals::chrono_literals;\n-\n-      __platform_wait_t __val{};\n-      auto __now = __wait_clock_t::now();\n-      for (; __now < __deadline; __now = __wait_clock_t::now())\n-\t{\n-\t  auto __elapsed = __now - __t0;\n-#ifndef _GLIBCXX_NO_SLEEP\n-\t  if (__elapsed > 128ms)\n-\t    this_thread::sleep_for(64ms);\n-\t  else if (__elapsed > 64us)\n-\t    this_thread::sleep_for(__elapsed / 2);\n-\t  else\n-#endif\n-\t  if (__elapsed > 4us)\n-\t    __thread_yield();\n-\t  else if (auto __res = 
__detail::__spin_impl(__addr, __args); __res.first)\n-\t    return __res;\n-\n-\t  __atomic_load(__addr, &__val, __args._M_order);\n-\t  if (__val != __args._M_old)\n-\t    return { true, __val };\n-\t}\n-      return { false, __val };\n-    }\n-\n-    inline __wait_result_type\n-    __wait_until_impl(const void* __addr, const __wait_args_base& __a,\n-\t\t      const __wait_clock_t::time_point& __atime)\n-    {\n-      __wait_args_base __args = __a;\n-      __waitable_state* __state = nullptr;\n-      const __platform_wait_t* __wait_addr;\n-      if (__args & __wait_flags::__proxy_wait)\n-\t{\n-\t  __state = &__waitable_state::_S_state_for(__addr);\n-\t  __wait_addr = &__state->_M_ver;\n-\t  __atomic_load(__wait_addr, &__args._M_old, __args._M_order);\n-\t}\n-      else\n-\t__wait_addr = static_cast<const __platform_wait_t*>(__addr);\n-\n-      if (__args & __wait_flags::__do_spin)\n-\t{\n-\t  auto __res = __detail::__spin_until_impl(__wait_addr, __args, __atime);\n-\t  if (__res.first)\n-\t    return __res;\n-\t  if (__args & __wait_flags::__spin_only)\n-\t    return __res;\n-\t}\n-\n-      auto __tracker = __waitable_state::_S_track(__state, __args, __addr);\n-\n-#ifdef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n-      if (__platform_wait_until(__wait_addr, __args._M_old, __atime))\n-\treturn { true, __args._M_old };\n-      else\n-\treturn { false, __args._M_old };\n-#else\n-      __platform_wait_t __val{};\n-      __atomic_load(__wait_addr, &__val, __args._M_order);\n-      if (__val == __args._M_old)\n-\t{\n-\t  if (!__state)\n-\t    __state = &__waitable_state::_S_state_for(__addr);\n-\t  lock_guard<mutex> __l{ __state->_M_mtx };\n-\t  __atomic_load(__wait_addr, &__val, __args._M_order);\n-\t  if (__val == __args._M_old\n-\t\t&& __cond_wait_until(__state->_M_cv, __state->_M_mtx, __atime))\n-\t    return { true, __val };\n-\t}\n-      return { false, __val };\n-#endif\n-    }\n+    __wait_result_type\n+    __wait_until_impl(const void* __addr, __wait_args_base& 
__args,\n+\t\t      const __wait_clock_t::duration& __atime);\n \n     // Returns {true, val} if wait ended before a timeout.\n     template<typename _Clock, typename _Dur>\n       __wait_result_type\n-      __wait_until(const void* __addr, const __wait_args_base& __args,\n+      __wait_until(const void* __addr, __wait_args_base& __args,\n \t\t   const chrono::time_point<_Clock, _Dur>& __atime) noexcept\n       {\n \tauto __at = __detail::__to_wait_clock(__atime);\n-\tauto __res = __detail::__wait_until_impl(__addr, __args, __at);\n+\tauto __res = __detail::__wait_until_impl(__addr, __args,\n+\t\t\t\t\t\t __at.time_since_epoch());\n \n \tif constexpr (!is_same_v<__wait_clock_t, _Clock>)\n \t  if (!__res.first)\n@@ -242,15 +112,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n     // Returns {true, val} if wait ended before a timeout.\n     template<typename _Rep, typename _Period>\n       __wait_result_type\n-      __wait_for(const void* __addr, const __wait_args_base& __args,\n+      __wait_for(const void* __addr, __wait_args_base& __args,\n \t\t const chrono::duration<_Rep, _Period>& __rtime) noexcept\n       {\n \tif (!__rtime.count())\n \t  {\n-\t    __wait_args_base __a = __args;\n \t    // no rtime supplied, just spin a bit\n-\t    __a._M_flags |= __wait_flags::__do_spin | __wait_flags::__spin_only;\n-\t    return __detail::__wait_impl(__addr, __a);\n+\t    __args._M_flags |= __wait_flags::__do_spin | __wait_flags::__spin_only;\n+\t    return __detail::__wait_impl(__addr, __args);\n \t  }\n \n \tauto const __reltime = chrono::ceil<__wait_clock_t::duration>(__rtime);\n@@ -270,14 +139,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \t\t\t\tbool __bare_wait = false) noexcept\n     {\n       __detail::__wait_args __args{ __addr, __bare_wait };\n-      _Tp __val = __vfn();\n+      _Tp __val = __args._M_prep_for_wait_on(__addr, __vfn);\n       while (!__pred(__val))\n \t{\n \t  auto __res = __detail::__wait_until(__addr, __args, __atime);\n \t  if (!__res.first)\n \t    // timed 
out\n \t    return __res.first; // C++26 will also return last observed __val\n-\t  __val = __vfn();\n+\t  __val = __args._M_prep_for_wait_on(__addr, __vfn);\n \t}\n       return true; // C++26 will also return last observed __val\n     }\n@@ -298,15 +167,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n   template<typename _Tp, typename _ValFn,\n \t   typename _Clock, typename _Dur>\n     bool\n-    __atomic_wait_address_until_v(const _Tp* __addr, _Tp&& __old, _ValFn&& __vfn,\n+    __atomic_wait_address_until_v(const _Tp* __addr, _Tp&& __old,\n+\t\t\t\t  _ValFn&& __vfn,\n \t\t\t\t  const chrono::time_point<_Clock, _Dur>& __atime,\n \t\t\t\t  bool __bare_wait = false) noexcept\n     {\n       auto __pfn = [&](const _Tp& __val) {\n \treturn !__detail::__atomic_eq(__old, __val);\n       };\n-      return __atomic_wait_address_until(__addr, __pfn, forward<_ValFn>(__vfn),\n-\t\t\t\t\t __atime, __bare_wait);\n+      return std::__atomic_wait_address_until(__addr, __pfn, __vfn, __atime,\n+\t\t\t\t\t      __bare_wait);\n     }\n \n   template<typename _Tp,\n@@ -319,14 +189,14 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \t\t\t      bool __bare_wait = false) noexcept\n     {\n       __detail::__wait_args __args{ __addr, __bare_wait };\n-      _Tp __val = __vfn();\n+      _Tp __val = __args._M_prep_for_wait_on(__addr, __vfn);\n       while (!__pred(__val))\n \t{\n \t  auto __res = __detail::__wait_for(__addr, __args, __rtime);\n \t  if (!__res.first)\n \t    // timed out\n \t    return __res.first; // C++26 will also return last observed __val\n-\t  __val = __vfn();\n+\t  __val = __args._M_prep_for_wait_on(__addr, __vfn);\n \t}\n       return true; // C++26 will also return last observed __val\n     }\ndiff --git a/libstdc++-v3/include/bits/atomic_wait.h b/libstdc++-v3/include/bits/atomic_wait.h\nindex bdc8677e9ea9..33e8d3202566 100644\n--- a/libstdc++-v3/include/bits/atomic_wait.h\n+++ b/libstdc++-v3/include/bits/atomic_wait.h\n@@ -37,21 +37,10 @@\n #include <bits/version.h>\n \n #if 
__glibcxx_atomic_wait\n-#include <cstdint>\n-#include <bits/functional_hash.h>\n #include <bits/gthr.h>\n #include <ext/numeric_traits.h>\n \n-#ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n-# include <cerrno>\n-# include <climits>\n-# include <unistd.h>\n-# include <syscall.h>\n-# include <bits/functexcept.h>\n-#endif\n-\n #include <bits/stl_pair.h>\n-#include <bits/std_mutex.h>  // std::mutex, std::__condvar\n \n namespace std _GLIBCXX_VISIBILITY(default)\n {\n@@ -82,55 +71,13 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n #ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n       = is_scalar_v<_Tp>\n \t&& ((sizeof(_Tp) == sizeof(__detail::__platform_wait_t))\n-\t&& (alignof(_Tp*) >= __detail::__platform_wait_alignment));\n+\t&& (alignof(_Tp) >= __detail::__platform_wait_alignment));\n #else\n       = false;\n #endif\n \n   namespace __detail\n   {\n-#ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n-    enum class __futex_wait_flags : int\n-    {\n-#ifdef _GLIBCXX_HAVE_LINUX_FUTEX_PRIVATE\n-      __private_flag = 128,\n-#else\n-      __private_flag = 0,\n-#endif\n-      __wait = 0,\n-      __wake = 1,\n-      __wait_bitset = 9,\n-      __wake_bitset = 10,\n-      __wait_private = __wait | __private_flag,\n-      __wake_private = __wake | __private_flag,\n-      __wait_bitset_private = __wait_bitset | __private_flag,\n-      __wake_bitset_private = __wake_bitset | __private_flag,\n-      __bitset_match_any = -1\n-    };\n-\n-    // If the futex *__addr is equal to __val, wait on the futex until woken.\n-    inline void\n-    __platform_wait(const int* __addr, int __val) noexcept\n-    {\n-      auto __e = syscall (SYS_futex, __addr,\n-\t\t\t  static_cast<int>(__futex_wait_flags::__wait_private),\n-\t\t\t  __val, nullptr);\n-      if (!__e || errno == EAGAIN)\n-\treturn;\n-      if (errno != EINTR)\n-\t__throw_system_error(errno);\n-    }\n-\n-    // Wake threads waiting on the futex *__addr.\n-    inline void\n-    __platform_notify(const int* __addr, bool __all) noexcept\n-    {\n-      syscall (SYS_futex, 
__addr,\n-\t       static_cast<int>(__futex_wait_flags::__wake_private),\n-\t       __all ? INT_MAX : 1);\n-    }\n-#endif\n-\n     inline void\n     __thread_yield() noexcept\n     {\n@@ -149,9 +96,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n #endif\n     }\n \n-    inline constexpr auto __atomic_spin_count_relax = 12;\n-    inline constexpr auto __atomic_spin_count = 16;\n-\n     // return true if equal\n     template<typename _Tp>\n       inline bool\n@@ -161,65 +105,6 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \treturn __builtin_memcmp(&__a, &__b, sizeof(_Tp)) == 0;\n       }\n \n-    struct __wait_args_base;\n-\n-    // The state used by atomic waiting and notifying functions.\n-    struct __waitable_state\n-    {\n-      // Don't use std::hardware_destructive_interference_size here because we\n-      // don't want the layout of library types to depend on compiler options.\n-      static constexpr auto _S_align = 64;\n-\n-      // Count of threads blocked waiting on this state.\n-      alignas(_S_align) __platform_wait_t _M_waiters = 0;\n-\n-#ifndef _GLIBCXX_HAVE_PLATFORM_WAIT\n-      mutex _M_mtx;\n-#endif\n-\n-      // If we can't do a platform wait on the atomic variable itself,\n-      // we use this member as a proxy for the atomic variable and we\n-      // use this for waiting and notifying functions instead.\n-      alignas(_S_align) __platform_wait_t _M_ver = 0;\n-\n-#ifndef _GLIBCXX_HAVE_PLATFORM_WAIT\n-      __condvar _M_cv;\n-#endif\n-\n-      __waitable_state() = default;\n-\n-      void\n-      _M_enter_wait() noexcept\n-      { __atomic_fetch_add(&_M_waiters, 1, __ATOMIC_SEQ_CST); }\n-\n-      void\n-      _M_leave_wait() noexcept\n-      { __atomic_fetch_sub(&_M_waiters, 1, __ATOMIC_RELEASE); }\n-\n-      bool\n-      _M_waiting() const noexcept\n-      {\n-\t__platform_wait_t __res;\n-\t__atomic_load(&_M_waiters, &__res, __ATOMIC_SEQ_CST);\n-\treturn __res != 0;\n-      }\n-\n-      static __waitable_state&\n-      _S_state_for(const void* __addr) 
noexcept\n-      {\n-\tconstexpr __UINTPTR_TYPE__ __ct = 16;\n-\tstatic __waitable_state __w[__ct];\n-\tauto __key = ((__UINTPTR_TYPE__)__addr >> 2) % __ct;\n-\treturn __w[__key];\n-      }\n-\n-      // Return an RAII type that calls _M_enter_wait() on construction\n-      // and _M_leave_wait() on destruction.\n-      static auto\n-      _S_track(__waitable_state*& __state, const __wait_args_base& __args,\n-\t       const void* __addr) noexcept;\n-    };\n-\n     enum class __wait_flags : __UINT_LEAST32_TYPE__\n     {\n        __abi_version = 0,\n@@ -250,6 +135,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       __wait_flags _M_flags;\n       int _M_order = __ATOMIC_ACQUIRE;\n       __platform_wait_t _M_old = 0;\n+      void* _M_wait_state = nullptr;\n \n       // Test whether _M_flags & __flags is non-zero.\n       bool\n@@ -277,7 +163,33 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       __wait_args(const __wait_args&) noexcept = default;\n       __wait_args& operator=(const __wait_args&) noexcept = default;\n \n+      template<typename _ValFn,\n+\t       typename _Tp = decay_t<decltype(std::declval<_ValFn&>()())>>\n+\t_Tp\n+\t_M_prep_for_wait_on(const void* __addr, _ValFn __vfn)\n+\t{\n+\t  if constexpr (__platform_wait_uses_type<_Tp>)\n+\t    {\n+\t      _Tp __val = __vfn();\n+\t      // If the wait is not proxied, set the value that we're waiting\n+\t      // to change.\n+\t      _M_old = __builtin_bit_cast(__platform_wait_t, __val);\n+\t      return __val;\n+\t    }\n+\t  else\n+\t    {\n+\t      // Otherwise, it's a proxy wait and the proxy's _M_ver is used.\n+\t      // This load must happen before the one done by __vfn().\n+\t      _M_load_proxy_wait_val(__addr);\n+\t      return __vfn();\n+\t    }\n+\t}\n+\n     private:\n+      // Populates _M_wait_state and _M_old from the proxy for __addr.\n+      void\n+      _M_load_proxy_wait_val(const void* __addr);\n+\n       template<typename _Tp>\n \tstatic constexpr __wait_flags\n \t_S_flags_for(const _Tp*, bool 
__bare_wait) noexcept\n@@ -290,161 +202,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \t    __res |= __proxy_wait;\n \t  return __res;\n \t}\n-\n-      // XXX what is this for? It's never used.\n-      template<typename _Tp>\n-\tstatic int\n-\t_S_memory_order_for(const _Tp*, int __order) noexcept\n-\t{\n-\t  if constexpr (__platform_wait_uses_type<_Tp>)\n-\t    return __order;\n-\t  return __ATOMIC_ACQUIRE;\n-\t}\n     };\n \n-    inline auto\n-    __waitable_state::_S_track(__waitable_state*& __state,\n-\t\t\t       const __wait_args_base& __args,\n-\t\t\t       const void* __addr) noexcept\n-    {\n-      struct _Tracker\n-      {\n-\t_Tracker() noexcept : _M_st(nullptr) { }\n-\n-\t[[__gnu__::__nonnull__]]\n-\texplicit\n-\t_Tracker(__waitable_state* __st) noexcept\n-\t: _M_st(__st)\n-\t{ __st->_M_enter_wait(); }\n-\n-\t_Tracker(const _Tracker&) = delete;\n-\t_Tracker& operator=(const _Tracker&) = delete;\n-\n-\t~_Tracker() { if (_M_st) _M_st->_M_leave_wait(); }\n-\n-\t__waitable_state* _M_st;\n-      };\n-\n-      if (__args & __wait_flags::__track_contention)\n-\t{\n-\t  // Caller does not externally track contention,\n-\t  // so we want to increment+decrement __state->_M_waiters\n-\n-\t  // First make sure we have a waitable state for the address.\n-\t  if (!__state)\n-\t    __state = &__waitable_state::_S_state_for(__addr);\n-\n-\t  // This object will increment the number of waiters and\n-\t  // decrement it again on destruction.\n-\t  return _Tracker{__state};\n-\t}\n-      return _Tracker{}; // For bare waits caller tracks waiters.\n-    }\n-\n     using __wait_result_type = pair<bool, __platform_wait_t>;\n \n-    inline __wait_result_type\n-    __spin_impl(const __platform_wait_t* __addr, const __wait_args_base& __args)\n-    {\n-      __platform_wait_t __val;\n-      for (auto __i = 0; __i < __atomic_spin_count; ++__i)\n-\t{\n-\t  __atomic_load(__addr, &__val, __args._M_order);\n-\t  if (__val != __args._M_old)\n-\t    return { true, __val };\n-\t  if (__i < 
__atomic_spin_count_relax)\n-\t    __detail::__thread_relax();\n-\t  else\n-\t    __detail::__thread_yield();\n-\t}\n-      return { false, __val };\n-    }\n+    __wait_result_type\n+    __wait_impl(const void* __addr, __wait_args_base&);\n \n-    inline __wait_result_type\n-    __wait_impl(const void* __addr, const __wait_args_base& __a)\n-    {\n-      __wait_args_base __args = __a;\n-      __waitable_state* __state = nullptr;\n-\n-      const __platform_wait_t* __wait_addr;\n-      if (__args & __wait_flags::__proxy_wait)\n-\t{\n-\t  __state = &__waitable_state::_S_state_for(__addr);\n-\t  __wait_addr = &__state->_M_ver;\n-\t  __atomic_load(__wait_addr, &__args._M_old, __args._M_order);\n-\t}\n-      else\n-\t__wait_addr = static_cast<const __platform_wait_t*>(__addr);\n-\n-      if (__args & __wait_flags::__do_spin)\n-\t{\n-\t  auto __res = __detail::__spin_impl(__wait_addr, __args);\n-\t  if (__res.first)\n-\t    return __res;\n-\t  if (__args & __wait_flags::__spin_only)\n-\t    return __res;\n-\t}\n-\n-      auto __tracker = __waitable_state::_S_track(__state, __args, __addr);\n-\n-#ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n-      __platform_wait(__wait_addr, __args._M_old);\n-      return { false, __args._M_old };\n-#else\n-      __platform_wait_t __val;\n-      __atomic_load(__wait_addr, &__val, __args._M_order);\n-      if (__val == __args._M_old)\n-\t{\n-\t  if (!__state)\n-\t    __state = &__waitable_state::_S_state_for(__addr);\n-\t  lock_guard<mutex> __l{ __state->_M_mtx };\n-\t  __atomic_load(__wait_addr, &__val, __args._M_order);\n-\t  if (__val == __args._M_old)\n-\t    __state->_M_cv.wait(__state->_M_mtx);\n-\t}\n-      return { false, __val };\n-#endif\n-    }\n-\n-    inline void\n-    __notify_impl(const void* __addr, [[maybe_unused]] bool __all,\n-\t\t  const __wait_args_base& __args)\n-    {\n-      __waitable_state* __state = nullptr;\n-\n-      const __platform_wait_t* __wait_addr;\n-      if (__args & __wait_flags::__proxy_wait)\n-\t{\n-\t  
__state = &__waitable_state::_S_state_for(__addr);\n-\t  // Waiting for *__addr is actually done on the proxy's _M_ver.\n-\t  __wait_addr = &__state->_M_ver;\n-\t  __atomic_fetch_add(&__state->_M_ver, 1, __ATOMIC_RELAXED);\n-\t  // Because the proxy might be shared by several waiters waiting\n-\t  // on different atomic variables, we need to wake them all so\n-\t  // they can re-evaluate their conditions to see if they should\n-\t  // stop waiting or should wait again.\n-\t  __all = true;\n-\t}\n-      else // Use the atomic variable's own address.\n-\t__wait_addr = static_cast<const __platform_wait_t*>(__addr);\n-\n-      if (__args & __wait_flags::__track_contention)\n-\t{\n-\t  if (!__state)\n-\t    __state = &__waitable_state::_S_state_for(__addr);\n-\t  if (!__state->_M_waiting())\n-\t    return;\n-\t}\n-\n-#ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n-      __platform_notify(__wait_addr, __all);\n-#else\n-      if (!__state)\n-\t__state = &__waitable_state::_S_state_for(__addr);\n-      lock_guard<mutex> __l{ __state->_M_mtx };\n-      __state->_M_cv.notify_all();\n-#endif\n-    }\n+    void\n+    __notify_impl(const void* __addr, bool __all, const __wait_args_base&);\n   } // namespace __detail\n \n   // Wait on __addr while __pred(__vfn()) is false.\n@@ -456,18 +222,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \t\t\t  bool __bare_wait = false) noexcept\n     {\n       __detail::__wait_args __args{ __addr, __bare_wait };\n-      _Tp __val = __vfn();\n+      _Tp __val = __args._M_prep_for_wait_on(__addr, __vfn);\n       while (!__pred(__val))\n \t{\n-\t  // If the wait is not proxied, set the value that we're waiting\n-\t  // to change.\n-\t  if constexpr (__platform_wait_uses_type<_Tp>)\n-\t    __args._M_old = __builtin_bit_cast(__detail::__platform_wait_t,\n-\t\t\t\t\t       __val);\n-\t  // Otherwise, it's a proxy wait and the proxy's _M_ver is used.\n-\n \t  __detail::__wait_impl(__addr, __args);\n-\t  __val = __vfn();\n+\t  __val = __args._M_prep_for_wait_on(__addr, 
__vfn);\n \t}\n       // C++26 will return __val\n     }\n@@ -490,7 +249,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n     {\n       auto __pfn = [&](const _Tp& __val)\n \t  { return !__detail::__atomic_eq(__old, __val); };\n-      __atomic_wait_address(__addr, __pfn, forward<_ValFn>(__vfn));\n+      std::__atomic_wait_address(__addr, __pfn, forward<_ValFn>(__vfn));\n     }\n \n   template<typename _Tp>\n@@ -501,6 +260,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       __detail::__wait_args __args{ __addr, __bare_wait };\n       __detail::__notify_impl(__addr, __all, __args);\n     }\n+\n _GLIBCXX_END_NAMESPACE_VERSION\n } // namespace std\n #endif // __glibcxx_atomic_wait\ndiff --git a/libstdc++-v3/src/c++20/Makefile.am b/libstdc++-v3/src/c++20/Makefile.am\nindex 4f7a6d12a6cf..15e6f3445fb6 100644\n--- a/libstdc++-v3/src/c++20/Makefile.am\n+++ b/libstdc++-v3/src/c++20/Makefile.am\n@@ -36,7 +36,7 @@ else\n inst_sources =\n endif\n \n-sources = tzdb.cc format.cc\n+sources = tzdb.cc format.cc atomic.cc\n \n vpath % $(top_srcdir)/src/c++20\n \ndiff --git a/libstdc++-v3/src/c++20/Makefile.in b/libstdc++-v3/src/c++20/Makefile.in\nindex d759b8dcc7cd..d9e1615bbca8 100644\n--- a/libstdc++-v3/src/c++20/Makefile.in\n+++ b/libstdc++-v3/src/c++20/Makefile.in\n@@ -121,7 +121,7 @@ CONFIG_CLEAN_FILES =\n CONFIG_CLEAN_VPATH_FILES =\n LTLIBRARIES = $(noinst_LTLIBRARIES)\n libc__20convenience_la_LIBADD =\n-am__objects_1 = tzdb.lo format.lo\n+am__objects_1 = tzdb.lo format.lo atomic.lo\n @ENABLE_EXTERN_TEMPLATE_TRUE@am__objects_2 = sstream-inst.lo\n @GLIBCXX_HOSTED_TRUE@am_libc__20convenience_la_OBJECTS =  \\\n @GLIBCXX_HOSTED_TRUE@\t$(am__objects_1) $(am__objects_2)\n@@ -432,7 +432,7 @@ headers =\n @ENABLE_EXTERN_TEMPLATE_TRUE@inst_sources = \\\n @ENABLE_EXTERN_TEMPLATE_TRUE@\tsstream-inst.cc\n \n-sources = tzdb.cc format.cc\n+sources = tzdb.cc format.cc atomic.cc\n @GLIBCXX_HOSTED_FALSE@libc__20convenience_la_SOURCES = \n @GLIBCXX_HOSTED_TRUE@libc__20convenience_la_SOURCES = $(sources)  
$(inst_sources)\n \ndiff --git a/libstdc++-v3/src/c++20/atomic.cc b/libstdc++-v3/src/c++20/atomic.cc\nnew file mode 100644\nindex 000000000000..b9ad66b1ec30\n--- /dev/null\n+++ b/libstdc++-v3/src/c++20/atomic.cc\n@@ -0,0 +1,468 @@\n+// Definitions for <atomic> wait/notify -*- C++ -*-\n+\n+// Copyright (C) 2020-2025 Free Software Foundation, Inc.\n+//\n+// This file is part of the GNU ISO C++ Library.  This library is free\n+// software; you can redistribute it and/or modify it under the\n+// terms of the GNU General Public License as published by the\n+// Free Software Foundation; either version 3, or (at your option)\n+// any later version.\n+\n+// This library is distributed in the hope that it will be useful,\n+// but WITHOUT ANY WARRANTY; without even the implied warranty of\n+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the\n+// GNU General Public License for more details.\n+\n+// Under Section 7 of GPL version 3, you are granted additional\n+// permissions described in the GCC Runtime Library Exception, version\n+// 3.1, as published by the Free Software Foundation.\n+\n+// You should have received a copy of the GNU General Public License and\n+// a copy of the GCC Runtime Library Exception along with this program;\n+// see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see\n+// <http://www.gnu.org/licenses/>.\n+\n+#include <bits/version.h>\n+\n+#if __glibcxx_atomic_wait\n+#include <atomic>\n+#include <bits/atomic_timed_wait.h>\n+#include <bits/functional_hash.h>\n+#include <cstdint>\n+#include <bits/std_mutex.h>  // std::mutex, std::__condvar\n+\n+#ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n+# include <cerrno>\n+# include <climits>\n+# include <unistd.h>\n+# include <syscall.h>\n+# include <bits/functexcept.h>\n+# include <sys/time.h>\n+#endif\n+\n+#ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n+# ifndef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n+// __waitable_state assumes that we consistently use the same implementation\n+// (i.e. 
futex vs mutex+condvar) for timed and untimed waiting.\n+#  error \"This configuration is not currently supported\"\n+# endif\n+#endif\n+\n+namespace std\n+{\n+_GLIBCXX_BEGIN_NAMESPACE_VERSION\n+namespace __detail\n+{\n+namespace\n+{\n+#ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n+  enum class __futex_wait_flags : int\n+  {\n+#ifdef _GLIBCXX_HAVE_LINUX_FUTEX_PRIVATE\n+    __private_flag = 128,\n+#else\n+    __private_flag = 0,\n+#endif\n+    __wait = 0,\n+    __wake = 1,\n+    __wait_bitset = 9,\n+    __wake_bitset = 10,\n+    __wait_private = __wait | __private_flag,\n+    __wake_private = __wake | __private_flag,\n+    __wait_bitset_private = __wait_bitset | __private_flag,\n+    __wake_bitset_private = __wake_bitset | __private_flag,\n+    __bitset_match_any = -1\n+  };\n+\n+  void\n+  __platform_wait(const int* __addr, int __val) noexcept\n+  {\n+    auto __e = syscall (SYS_futex, __addr,\n+\t\t\tstatic_cast<int>(__futex_wait_flags::__wait_private),\n+\t\t\t__val, nullptr);\n+    if (!__e || errno == EAGAIN)\n+      return;\n+    if (errno != EINTR)\n+      __throw_system_error(errno);\n+  }\n+\n+  void\n+  __platform_notify(const int* __addr, bool __all) noexcept\n+  {\n+    syscall (SYS_futex, __addr,\n+\t     static_cast<int>(__futex_wait_flags::__wake_private),\n+\t     __all ? 
INT_MAX : 1);\n+  }\n+#endif\n+\n+  // The state used by atomic waiting and notifying functions.\n+  struct __waitable_state\n+  {\n+    // Don't use std::hardware_destructive_interference_size here because we\n+    // don't want the layout of library types to depend on compiler options.\n+    static constexpr auto _S_align = 64;\n+\n+    // Count of threads blocked waiting on this state.\n+    alignas(_S_align) __platform_wait_t _M_waiters = 0;\n+\n+#ifndef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n+    mutex _M_mtx;\n+\n+    // This type meets the Cpp17BasicLockable requirements.\n+    void lock() { _M_mtx.lock(); }\n+    void unlock() { _M_mtx.unlock(); }\n+#else\n+    void lock() { }\n+    void unlock() { }\n+#endif\n+\n+    // If we can't do a platform wait on the atomic variable itself,\n+    // we use this member as a proxy for the atomic variable and we\n+    // use this for waiting and notifying functions instead.\n+    alignas(_S_align) __platform_wait_t _M_ver = 0;\n+\n+#ifndef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n+    __condvar _M_cv;\n+#endif\n+\n+    __waitable_state() = default;\n+\n+    void\n+    _M_enter_wait() noexcept\n+    { __atomic_fetch_add(&_M_waiters, 1, __ATOMIC_SEQ_CST); }\n+\n+    void\n+    _M_leave_wait() noexcept\n+    { __atomic_fetch_sub(&_M_waiters, 1, __ATOMIC_RELEASE); }\n+\n+    bool\n+    _M_waiting() const noexcept\n+    {\n+      __platform_wait_t __res;\n+      __atomic_load(&_M_waiters, &__res, __ATOMIC_SEQ_CST);\n+      return __res != 0;\n+    }\n+\n+    static __waitable_state&\n+    _S_state_for(const void* __addr) noexcept\n+    {\n+      constexpr __UINTPTR_TYPE__ __ct = 16;\n+      static __waitable_state __w[__ct];\n+      auto __key = ((__UINTPTR_TYPE__)__addr >> 2) % __ct;\n+      return __w[__key];\n+    }\n+  };\n+\n+  // Scope-based contention tracking.\n+  struct scoped_wait\n+  {\n+    // pre: if track_contention is in flags, then args._M_wait_state != nullptr\n+    explicit\n+    scoped_wait(const __wait_args_base& 
args) : _M_state(nullptr)\n+    {\n+      if (args & __wait_flags::__track_contention)\n+\t{\n+\t  _M_state = static_cast<__waitable_state*>(args._M_wait_state);\n+\t  _M_state->_M_enter_wait();\n+\t}\n+    }\n+\n+    ~scoped_wait()\n+    {\n+      if (_M_state)\n+\t_M_state->_M_leave_wait();\n+    }\n+\n+    scoped_wait(scoped_wait&&) = delete;\n+\n+    __waitable_state* _M_state;\n+  };\n+\n+  // Scoped lock type\n+  struct waiter_lock\n+  {\n+    // pre: args._M_wait_state != nullptr\n+    explicit\n+    waiter_lock(const __wait_args_base& args)\n+    : _M_state(*static_cast<__waitable_state*>(args._M_wait_state)),\n+      _M_track_contention(args & __wait_flags::__track_contention)\n+    {\n+      _M_state.lock();\n+      if (_M_track_contention)\n+\t_M_state._M_enter_wait();\n+    }\n+\n+    waiter_lock(waiter_lock&&) = delete;\n+\n+    ~waiter_lock()\n+    {\n+      if (_M_track_contention)\n+\t_M_state._M_leave_wait();\n+      _M_state.unlock();\n+    }\n+\n+    __waitable_state& _M_state;\n+    bool _M_track_contention;\n+  };\n+\n+  constexpr auto __atomic_spin_count_relax = 12;\n+  constexpr auto __atomic_spin_count = 16;\n+\n+  __wait_result_type\n+  __spin_impl(const __platform_wait_t* __addr, const __wait_args_base& __args)\n+  {\n+    __platform_wait_t __val;\n+    for (auto __i = 0; __i < __atomic_spin_count; ++__i)\n+      {\n+\t__atomic_load(__addr, &__val, __args._M_order);\n+\tif (__val != __args._M_old)\n+\t  return { true, __val };\n+\tif (__i < __atomic_spin_count_relax)\n+\t  __thread_relax();\n+\telse\n+\t  __thread_yield();\n+      }\n+    return { false, __val };\n+  }\n+\n+  inline __waitable_state*\n+  set_wait_state(const void* addr, __wait_args_base& args)\n+  {\n+    if (args._M_wait_state == nullptr)\n+      args._M_wait_state = &__waitable_state::_S_state_for(addr);\n+    return static_cast<__waitable_state*>(args._M_wait_state);\n+  }\n+\n+} // namespace\n+\n+// Called for a proxy wait\n+void\n+__wait_args::_M_load_proxy_wait_val(const 
void* addr)\n+{\n+  // __glibcxx_assert( *this & __wait_flags::__proxy_wait );\n+\n+  // We always need a waitable state for proxy waits.\n+  auto state = set_wait_state(addr, *this);\n+\n+  // Read the value of the _M_ver counter.\n+  __atomic_load(&state->_M_ver, &_M_old, __ATOMIC_ACQUIRE);\n+}\n+\n+__wait_result_type\n+__wait_impl(const void* __addr, __wait_args_base& __args)\n+{\n+  auto __state = static_cast<__waitable_state*>(__args._M_wait_state);\n+\n+  const __platform_wait_t* __wait_addr;\n+\n+  if (__args & __wait_flags::__proxy_wait)\n+    __wait_addr = &__state->_M_ver;\n+  else\n+    __wait_addr = static_cast<const __platform_wait_t*>(__addr);\n+\n+  if (__args & __wait_flags::__do_spin)\n+    {\n+      auto __res = __detail::__spin_impl(__wait_addr, __args);\n+      if (__res.first)\n+\treturn __res;\n+      if (__args & __wait_flags::__spin_only)\n+\treturn __res;\n+    }\n+\n+#ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n+  if (__args & __wait_flags::__track_contention)\n+    set_wait_state(__addr, __args);\n+  scoped_wait s(__args);\n+  __platform_wait(__wait_addr, __args._M_old);\n+  return { false, __args._M_old };\n+#else\n+  waiter_lock l(__args);\n+  __platform_wait_t __val;\n+  __atomic_load(__wait_addr, &__val, __args._M_order);\n+  if (__val == __args._M_old)\n+    __state->_M_cv.wait(__state->_M_mtx);\n+  return { false, __val };\n+#endif\n+}\n+\n+void\n+__notify_impl(const void* __addr, [[maybe_unused]] bool __all,\n+\t      const __wait_args_base& __args)\n+{\n+  auto __state = static_cast<__waitable_state*>(__args._M_wait_state);\n+  if (!__state)\n+    __state = &__waitable_state::_S_state_for(__addr);\n+\n+  [[maybe_unused]] const __platform_wait_t* __wait_addr;\n+\n+  // Lock mutex so that proxied waiters cannot race with incrementing _M_ver\n+  // and see the old value, then sleep after the increment and notify_all().\n+  lock_guard __l{ *__state };\n+\n+  if (__args & __wait_flags::__proxy_wait)\n+    {\n+      // Waiting for *__addr is 
actually done on the proxy's _M_ver.\n+      __wait_addr = &__state->_M_ver;\n+\n+      // Increment _M_ver so that waiting threads see something changed.\n+      // This has to be atomic because the load in _M_load_proxy_wait_val\n+      // is done without the mutex locked.\n+      __atomic_fetch_add(&__state->_M_ver, 1, __ATOMIC_RELEASE);\n+\n+      // Because the proxy might be shared by several waiters waiting\n+      // on different atomic variables, we need to wake them all so\n+      // they can re-evaluate their conditions to see if they should\n+      // stop waiting or should wait again.\n+      __all = true;\n+    }\n+  else // Use the atomic variable's own address.\n+    __wait_addr = static_cast<const __platform_wait_t*>(__addr);\n+\n+  if (__args & __wait_flags::__track_contention)\n+    {\n+      if (!__state->_M_waiting())\n+\treturn;\n+    }\n+\n+#ifdef _GLIBCXX_HAVE_PLATFORM_WAIT\n+  __platform_notify(__wait_addr, __all);\n+#else\n+  __state->_M_cv.notify_all();\n+#endif\n+}\n+\n+// Timed atomic waiting functions\n+\n+namespace\n+{\n+#ifdef _GLIBCXX_HAVE_LINUX_FUTEX\n+// returns true if wait ended before timeout\n+bool\n+__platform_wait_until(const __platform_wait_t* __addr,\n+\t\t      __platform_wait_t __old,\n+\t\t      const __wait_clock_t::time_point& __atime) noexcept\n+{\n+  auto __s = chrono::time_point_cast<chrono::seconds>(__atime);\n+  auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);\n+\n+  struct timespec __rt =\n+  {\n+    static_cast<std::time_t>(__s.time_since_epoch().count()),\n+    static_cast<long>(__ns.count())\n+  };\n+\n+  if (syscall (SYS_futex, __addr,\n+\t       static_cast<int>(__futex_wait_flags::__wait_bitset_private),\n+\t       __old, &__rt, nullptr,\n+\t       static_cast<int>(__futex_wait_flags::__bitset_match_any)))\n+    {\n+      if (errno == ETIMEDOUT)\n+\treturn false;\n+      if (errno != EINTR && errno != EAGAIN)\n+\t__throw_system_error(errno);\n+    }\n+  return true;\n+}\n+#endif // 
HAVE_LINUX_FUTEX\n+\n+#ifndef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n+bool\n+__cond_wait_until(__condvar& __cv, mutex& __mx,\n+\t\t  const __wait_clock_t::time_point& __atime)\n+{\n+  auto __s = chrono::time_point_cast<chrono::seconds>(__atime);\n+  auto __ns = chrono::duration_cast<chrono::nanoseconds>(__atime - __s);\n+\n+  __gthread_time_t __ts =\n+  {\n+    static_cast<std::time_t>(__s.time_since_epoch().count()),\n+    static_cast<long>(__ns.count())\n+  };\n+\n+#ifdef _GLIBCXX_USE_PTHREAD_COND_CLOCKWAIT\n+  if constexpr (is_same_v<chrono::steady_clock, __wait_clock_t>)\n+    __cv.wait_until(__mx, CLOCK_MONOTONIC, __ts);\n+  else\n+#endif\n+    __cv.wait_until(__mx, __ts);\n+  return __wait_clock_t::now() < __atime;\n+}\n+#endif // ! HAVE_PLATFORM_TIMED_WAIT\n+\n+__wait_result_type\n+__spin_until_impl(const __platform_wait_t* __addr,\n+\t\t  const __wait_args_base& __args,\n+\t\t  const __wait_clock_t::time_point& __deadline)\n+{\n+  auto __t0 = __wait_clock_t::now();\n+  using namespace literals::chrono_literals;\n+\n+  __platform_wait_t __val{};\n+  auto __now = __wait_clock_t::now();\n+  for (; __now < __deadline; __now = __wait_clock_t::now())\n+    {\n+      auto __elapsed = __now - __t0;\n+#ifndef _GLIBCXX_NO_SLEEP\n+      if (__elapsed > 128ms)\n+\tthis_thread::sleep_for(64ms);\n+      else if (__elapsed > 64us)\n+\tthis_thread::sleep_for(__elapsed / 2);\n+      else\n+#endif\n+      if (__elapsed > 4us)\n+\t__thread_yield();\n+      else if (auto __res = __detail::__spin_impl(__addr, __args); __res.first)\n+\treturn __res;\n+\n+      __atomic_load(__addr, &__val, __args._M_order);\n+      if (__val != __args._M_old)\n+\treturn { true, __val };\n+    }\n+  return { false, __val };\n+}\n+} // namespace\n+\n+__wait_result_type\n+__wait_until_impl(const void* __addr, __wait_args_base& __args,\n+\t\t  const __wait_clock_t::duration& __time)\n+{\n+  const __wait_clock_t::time_point __atime(__time);\n+  auto __state = 
static_cast<__waitable_state*>(__args._M_wait_state);\n+  const __platform_wait_t* __wait_addr;\n+  if (__args & __wait_flags::__proxy_wait)\n+    __wait_addr = &__state->_M_ver;\n+  else\n+    __wait_addr = static_cast<const __platform_wait_t*>(__addr);\n+\n+  if (__args & __wait_flags::__do_spin)\n+    {\n+      auto __res = __detail::__spin_until_impl(__wait_addr, __args, __atime);\n+      if (__res.first)\n+\treturn __res;\n+      if (__args & __wait_flags::__spin_only)\n+\treturn __res;\n+    }\n+\n+#ifdef _GLIBCXX_HAVE_PLATFORM_TIMED_WAIT\n+  if (__args & __wait_flags::__track_contention)\n+    set_wait_state(__addr, __args);\n+  scoped_wait s(__args);\n+  if (__platform_wait_until(__wait_addr, __args._M_old, __atime))\n+    return { true, __args._M_old };\n+  else\n+    return { false, __args._M_old };\n+#else\n+  waiter_lock l(__args);\n+  __platform_wait_t __val;\n+  __atomic_load(__wait_addr, &__val, __args._M_order);\n+  if (__val == __args._M_old\n+\t&& __cond_wait_until(__state->_M_cv, __state->_M_mtx, __atime))\n+    return { true, __val };\n+  return { false, __val };\n+#endif\n+}\n+\n+} // namespace __detail\n+_GLIBCXX_END_NAMESPACE_VERSION\n+} // namespace std\n+#endif\ndiff --git a/libstdc++-v3/testsuite/17_intro/headers/c++1998/49745.cc b/libstdc++-v3/testsuite/17_intro/headers/c++1998/49745.cc\nindex 7fafe7b64b0c..3b9d2ebd910e 100644\n--- a/libstdc++-v3/testsuite/17_intro/headers/c++1998/49745.cc\n+++ b/libstdc++-v3/testsuite/17_intro/headers/c++1998/49745.cc\n@@ -131,5 +131,3 @@\n #endif\n \n int truncate = 0;\n-\n-// { dg-xfail-if \"PR libstdc++/99995\" { c++20 } }\ndiff --git a/libstdc++-v3/testsuite/29_atomics/atomic/wait_notify/100334.cc b/libstdc++-v3/testsuite/29_atomics/atomic/wait_notify/100334.cc\nindex 58a0da6e6def..21ff570ce20b 100644\n--- a/libstdc++-v3/testsuite/29_atomics/atomic/wait_notify/100334.cc\n+++ b/libstdc++-v3/testsuite/29_atomics/atomic/wait_notify/100334.cc\n@@ -53,9 +53,11 @@ main()\n     atom->store(0);\n   }\n 
\n+#if 0\n   auto a = &std::__detail::__waitable_state::_S_state_for((void*)(atomics.a[0]));\n   auto b = &std::__detail::__waitable_state::_S_state_for((void*)(atomics.a[1]));\n   VERIFY( a == b );\n+#endif\n \n   auto fut0 = std::async(std::launch::async, [&] { atomics.a[0]->wait(0); });\n   auto fut1 = std::async(std::launch::async, [&] { atomics.a[1]->wait(0); });\n","prefixes":["v1","11/16"]}