{"id":2226767,"url":"http://patchwork.ozlabs.org/api/patches/2226767/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.2@forge-stage.sourceware.org/","project":{"id":17,"url":"http://patchwork.ozlabs.org/api/projects/17/?format=json","name":"GNU Compiler Collection","link_name":"gcc","list_id":"gcc-patches.gcc.gnu.org","list_email":"gcc-patches@gcc.gnu.org","web_url":null,"scm_url":null,"webscm_url":null,"list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.2@forge-stage.sourceware.org>","list_archive_url":null,"date":"2026-04-22T18:49:39","name":"[v1,02/10] Merged __atomic_ref_base into single specialization.","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"f1845c84d1d7f2e6074f78e36e28735179b9c39b","submitter":{"id":93223,"url":"http://patchwork.ozlabs.org/api/people/93223/?format=json","name":"tkaminsk via Sourceware Forge","email":"forge-bot+tkaminsk@forge-stage.sourceware.org"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/gcc/patch/bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.2@forge-stage.sourceware.org/mbox/","series":[{"id":501094,"url":"http://patchwork.ozlabs.org/api/series/501094/?format=json","web_url":"http://patchwork.ozlabs.org/project/gcc/list/?series=501094","date":"2026-04-22T18:49:39","name":"WIP: libstdc++: add support for cv-qualified types in atomic_ref 
(P3323R1)","version":1,"mbox":"http://patchwork.ozlabs.org/series/501094/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2226767/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2226767/checks/","tags":{},"related":[],"headers":{"Return-Path":"<gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org>","X-Original-To":["incoming@patchwork.ozlabs.org","gcc-patches@gcc.gnu.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","gcc-patches@gcc.gnu.org"],"Authentication-Results":["legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=gcc.gnu.org\n (client-ip=2620:52:6:3111::32; helo=vm01.sourceware.org;\n envelope-from=gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org;\n receiver=patchwork.ozlabs.org)","sourceware.org; dmarc=none (p=none dis=none)\n header.from=forge-stage.sourceware.org","sourceware.org;\n spf=pass smtp.mailfrom=forge-stage.sourceware.org","server2.sourceware.org;\n arc=none smtp.remote-ip=38.145.34.39"],"Received":["from vm01.sourceware.org (vm01.sourceware.org\n [IPv6:2620:52:6:3111::32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g18S14CBwz1yGs\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 23 Apr 2026 05:29:37 +1000 (AEST)","from vm01.sourceware.org (localhost [127.0.0.1])\n\tby sourceware.org (Postfix) with ESMTP id 9478B4332F0A\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 22 Apr 2026 19:29:35 +0000 (GMT)","from forge-stage.sourceware.org (vm08.sourceware.org [38.145.34.39])\n by sourceware.org (Postfix) with ESMTPS id 1451D40AB68E\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 18:51:08 +0000 (GMT)","from forge-stage.sourceware.org (localhost [IPv6:::1])\n (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n key-exchange x25519 server-signature ECDSA 
(prime256v1) server-digest SHA256)\n (No client certificate requested)\n by forge-stage.sourceware.org (Postfix) with ESMTPS id E481443591\n for <gcc-patches@gcc.gnu.org>; Wed, 22 Apr 2026 18:51:07 +0000 (UTC)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org 9478B4332F0A","OpenDKIM Filter v2.11.0 sourceware.org 1451D40AB68E"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org 1451D40AB68E","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org 1451D40AB68E","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1776883868; cv=none;\n b=o7SG9CwRXTgdvGO+SfswgcxH9TvFew4npeB0M3a7tAAPOgYWBkk1PznjUuo2X1MZxWHxc0OYYBuoS7/UYbt4CyvjWEuJmxPDB3DKThRluAb3TUKA4poExs2UM9Om0g1CmKbso+evgsJD2eLRzbSew7jNKJDSmQU2lSFUNdQRvyk=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1776883868; c=relaxed/simple;\n bh=4Ch9ZDAjq6XiF0P9vAIHwq+uOChsKHFTXhRE59iRaVc=;\n h=From:Date:Subject:MIME-Version:To:Message-ID;\n b=JDh2RSHtVFLS7q2kVM9EjpCai/97dhxUGXHCjBlrXDOnVpnV6wp6YFM2hIZ8uASEKPabHdjoxQVYKGjqho+ZiODCCmWgMePXTvNXBL8Q6g92LoVMtK+K6xgLR1FmkZcOWhLXH+aSsYfV+nx/6fnKhT3Nm1ga3EInB8FlLThf5t4=","ARC-Authentication-Results":"i=1; server2.sourceware.org","From":"tkaminsk via Sourceware Forge\n <forge-bot+tkaminsk@forge-stage.sourceware.org>","Date":"Wed, 22 Apr 2026 18:49:39 +0000","Subject":"[PATCH v1 02/10] Merged __atomic_ref_base into single specialization.","MIME-Version":"1.0","Content-Type":"text/plain; charset=UTF-8","Content-Transfer-Encoding":"8bit","To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>","Message-ID":"\n <bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.2@forge-stage.sourceware.org>","X-Mailer":"batrachomyomachia","X-Pull-Request-Organization":"gcc","X-Pull-Request-Repository":"gcc-TEST","X-Pull-Request":"https://forge.sourceware.org/gcc/gcc-TEST/pulls/85","References":"\n <bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.0@forge-stage.sourceware.org>","In-Reply-To":"\n 
<bmm.hhup55wx16.gcc.gcc-TEST.tkaminsk.85.1.0@forge-stage.sourceware.org>","X-Patch-URL":"\n https://forge.sourceware.org/tkaminsk/gcc/commit/2fea9df38cc655df0de4227cbebb60f538170bb1","X-BeenThere":"gcc-patches@gcc.gnu.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Gcc-patches mailing list <gcc-patches.gcc.gnu.org>","List-Unsubscribe":"<https://gcc.gnu.org/mailman/options/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=unsubscribe>","List-Archive":"<https://gcc.gnu.org/pipermail/gcc-patches/>","List-Post":"<mailto:gcc-patches@gcc.gnu.org>","List-Help":"<mailto:gcc-patches-request@gcc.gnu.org?subject=help>","List-Subscribe":"<https://gcc.gnu.org/mailman/listinfo/gcc-patches>,\n <mailto:gcc-patches-request@gcc.gnu.org?subject=subscribe>","Reply-To":"gcc-patches mailing list <gcc-patches@gcc.gnu.org>,\n tkaminsk@gcc.gnu.org","Errors-To":"gcc-patches-bounces~incoming=patchwork.ozlabs.org@gcc.gnu.org"},"content":"From: Tomasz Kamiński <tkaminsk@redhat.com>\n\nThese specializations only differed in their is_always_lock_free and\nrequired_alignment values. 
Their computation is now handled in the\nbase class by the _S_is_always_lock_free and _S_required_aligment functions.\n---\n libstdc++-v3/include/bits/atomic_base.h       | 362 ++++++------------\n .../testsuite/29_atomics/atomic_ref/115402.cc |   3 +-\n 2 files changed, 118 insertions(+), 247 deletions(-)","diff":"diff --git a/libstdc++-v3/include/bits/atomic_base.h b/libstdc++-v3/include/bits/atomic_base.h\nindex 1763a64ab981..b8744ac49b45 100644\n--- a/libstdc++-v3/include/bits/atomic_base.h\n+++ b/libstdc++-v3/include/bits/atomic_base.h\n@@ -1515,54 +1515,47 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n   // __atomic ref adds on top the APIs for non-const types, thus implementing\n   // the various constraints in [atomic.ref].\n \n-  template<typename _Tp,\n-           bool = is_const_v<_Tp>,\n-           bool = is_integral_v<_Tp> && !is_same_v<remove_cv_t<_Tp>, bool>,\n-           bool = is_floating_point_v<_Tp>,\n-           bool = is_pointer_v<_Tp>>\n-    struct __atomic_ref;\n-\n-  template<typename _Tp,\n-           bool _IsIntegral,\n-           bool _IsFloatingPoint,\n-           bool _IsPointer>\n+  template<typename _Tp>\n     struct __atomic_ref_base;\n \n-  // Const types\n-  template<typename _Tp, bool _IsIntegral, bool _IsFloatingPoint, bool _IsPointer>\n-    struct __atomic_ref<_Tp, true, _IsIntegral, _IsFloatingPoint, _IsPointer>\n-      : __atomic_ref_base<_Tp, _IsIntegral, _IsFloatingPoint, _IsPointer>\n-    {\n-      __atomic_ref() = delete;\n-      __atomic_ref& operator=(const __atomic_ref&) = delete;\n-\n-      explicit\n-      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, _IsIntegral, _IsFloatingPoint, _IsPointer>(__t)\n-      { }\n-    };\n \n-  // Non-integral, non-floating-point, non-pointer types\n   template<typename _Tp>\n-    struct __atomic_ref_base<_Tp, false, false, false>\n+    class __atomic_ref_base\n     {\n-      static_assert(is_trivially_copyable_v<_Tp>);\n+      using _Vt = remove_cv_t<_Tp>;\n \n-      // 1/2/4/8/16-byte types must be 
aligned to at least their size.\n-      static constexpr int _S_min_alignment\n-\t= (sizeof(_Tp) & (sizeof(_Tp) - 1)) || sizeof(_Tp) > 16\n-\t? 0 : sizeof(_Tp);\n+      static consteval bool\n+      _S_is_always_lock_free()\n+      {\n+\tif constexpr (is_pointer_v<_Vt>)\n+\t  return ATOMIC_POINTER_LOCK_FREE == 2;\n+\telse\n+\t  return __atomic_always_lock_free(sizeof(_Vt), 0);\n+      }\n \n-    public:\n-      using value_type = remove_cv_t<_Tp>;\n+      static consteval int\n+      _S_required_aligment()\n+      {\n+\tif constexpr (is_floating_point_v<_Vt> || is_pointer_v<_Vt>)\n+\t  return alignof(_Vt);\n+\telse if constexpr ((sizeof(_Vt) & (sizeof(_Vt) - 1)) || sizeof(_Vt) > 16)\n+\t  return alignof(_Vt);\n+\telse\n+\t  // 1/2/4/8/16-byte types, including integral types,\n+\t  // must be aligned to at least their size.\n+\t  return (sizeof(_Vt) > alignof(_Vt)) ? sizeof(_Vt) : alignof(_Vt);\n+      }\n \n-      static constexpr bool is_always_lock_free\n-\t= __atomic_always_lock_free(sizeof(_Tp), 0);\n+    public:\n+      using value_type = _Vt;\n+      static_assert(is_trivially_copyable_v<value_type>);\n \n+      static constexpr bool is_always_lock_free = _S_is_always_lock_free();\n       static_assert(is_always_lock_free || !is_volatile_v<_Tp>);\n \n-      static constexpr size_t required_alignment\n-\t= _S_min_alignment > alignof(_Tp) ? 
_S_min_alignment : alignof(_Tp);\n+      static constexpr size_t required_alignment = _S_required_aligment();\n \n+      __atomic_ref_base() = delete;\n       __atomic_ref_base& operator=(const __atomic_ref_base&) = delete;\n \n       explicit\n@@ -1595,17 +1588,54 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       _Tp* _M_ptr;\n     };\n \n+  template<typename _Tp,\n+\t   bool = is_const_v<_Tp>,\n+\t   bool = is_integral_v<_Tp> && !is_same_v<remove_cv_t<_Tp>, bool>,\n+\t   bool = is_floating_point_v<_Tp>,\n+\t   bool = is_pointer_v<_Tp>>\n+    struct __atomic_ref;\n+\n+  // base classes for const qualified types\n   template<typename _Tp>\n-    struct __atomic_ref<_Tp, false, false, false, false>\n-      : __atomic_ref_base<_Tp, false, false, false>\n+    struct __atomic_ref<_Tp, true, false, false, false>\n+      : __atomic_ref_base<_Tp>\n+    {\n+      explicit\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n+      { }\n+    };\n+\n+  template<typename _Tp>\n+    struct __atomic_ref<_Tp, true, false, false, true>\n+      : __atomic_ref_base<_Tp>\n+    {\n+      using difference_type = ptrdiff_t;\n+\n+      explicit\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n+      { }\n+    };\n+\n+  template<typename _Tp, bool _IsIntegral, bool _IsFloatingPoint>\n+    struct __atomic_ref<_Tp, true, _IsIntegral, _IsFloatingPoint, false>\n+      : __atomic_ref_base<_Tp>\n     {\n-      using value_type = typename __atomic_ref_base<_Tp, false, false, false>::value_type;\n+      using difference_type = typename __atomic_ref_base<_Tp>::value_type;\n+\n+      explicit\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n+      { }\n+    };\n \n-      __atomic_ref() = delete;\n-      __atomic_ref& operator=(const __atomic_ref&) = delete;\n \n+  // base class for non-integral, non-floating-point, non-pointer types\n+  template<typename _Tp>\n+    struct __atomic_ref<_Tp, false, false, false, false>\n+      : __atomic_ref_base<_Tp>\n+    {\n+      
using value_type = typename __atomic_ref_base<_Tp>::value_type;\n       explicit\n-      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, false, false, false>(__t)\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n       { }\n \n       void\n@@ -1675,71 +1705,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n #endif // __glibcxx_atomic_wait\n     };\n \n-\n-  // Integral types (except cv-bool)\n-  template<typename _Tp>\n-    struct __atomic_ref_base<_Tp, true, false, false>\n-    {\n-      static_assert(is_integral_v<_Tp>);\n-\n-    public:\n-      using value_type = remove_cv_t<_Tp>;\n-      using difference_type = value_type;\n-\n-      static constexpr bool is_always_lock_free\n-\t= __atomic_always_lock_free(sizeof(_Tp), 0);\n-\n-      static_assert(is_always_lock_free || !is_volatile_v<_Tp>);\n-\n-      static constexpr size_t required_alignment\n-\t= sizeof(_Tp) > alignof(_Tp) ? sizeof(_Tp) : alignof(_Tp);\n-\n-      __atomic_ref_base() = delete;\n-      __atomic_ref_base& operator=(const __atomic_ref_base&) = delete;\n-\n-      explicit\n-      __atomic_ref_base(_Tp& __t) : _M_ptr(&__t)\n-      {\n-\t__glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0);\n-      }\n-\n-      __atomic_ref_base(const __atomic_ref_base&) noexcept = default;\n-\n-      operator value_type() const noexcept { return this->load(); }\n-\n-      bool\n-      is_lock_free() const noexcept\n-      {\n-\treturn __atomic_impl::is_lock_free<sizeof(_Tp), required_alignment>();\n-      }\n-\n-      value_type\n-      load(memory_order __m = memory_order_seq_cst) const noexcept\n-      { return __atomic_impl::load(_M_ptr, __m); }\n-\n-#if __glibcxx_atomic_wait\n-      _GLIBCXX_ALWAYS_INLINE void\n-      wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept\n-      { __atomic_impl::wait(_M_ptr, __old, __m); }\n-\n-      // TODO add const volatile overload\n-#endif // __glibcxx_atomic_wait\n-\n-    protected:\n-      _Tp* _M_ptr;\n-    };\n-\n+  // 
base class for atomic_ref<integral-type>\n   template<typename _Tp>\n     struct __atomic_ref<_Tp, false, true, false, false>\n-      : __atomic_ref_base<_Tp, true, false, false>\n+      : __atomic_ref_base<_Tp>\n     {\n-      using value_type = typename __atomic_ref_base<_Tp, true, false, false>::value_type;\n-\n-      __atomic_ref() = delete;\n-      __atomic_ref& operator=(const __atomic_ref&) = delete;\n+      using value_type = typename __atomic_ref_base<_Tp>::value_type;\n+      using difference_type = value_type;\n \n       explicit\n-      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, true, false, false>(__t)\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n       { }\n \n       value_type\n@@ -1870,69 +1845,16 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       { return __atomic_impl::__xor_fetch(this->_M_ptr, __i); }\n     };\n \n-  // Floating-point types\n-  template<typename _Fp>\n-    struct __atomic_ref_base<_Fp, false, true, false>\n-    {\n-      static_assert(is_floating_point_v<_Fp>);\n-\n-    public:\n-      using value_type = remove_cv_t<_Fp>;\n-      using difference_type = value_type;\n-\n-      static constexpr bool is_always_lock_free\n-\t= __atomic_always_lock_free(sizeof(_Fp), 0);\n-\n-      static_assert(is_always_lock_free || !is_volatile_v<_Fp>);\n-\n-      static constexpr size_t required_alignment = __alignof__(_Fp);\n-\n-      __atomic_ref_base() = delete;\n-      __atomic_ref_base& operator=(const __atomic_ref_base&) = delete;\n-\n-      explicit\n-      __atomic_ref_base(_Fp& __t) : _M_ptr(&__t)\n-      {\n-\t__glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0);\n-      }\n-\n-      __atomic_ref_base(const __atomic_ref_base&) noexcept = default;\n-\n-      operator value_type() const noexcept { return this->load(); }\n-\n-      bool\n-      is_lock_free() const noexcept\n-      {\n-\treturn __atomic_impl::is_lock_free<sizeof(_Fp), required_alignment>();\n-      }\n-\n-      _Fp\n-      load(memory_order 
__m = memory_order_seq_cst) const noexcept\n-      { return __atomic_impl::load(_M_ptr, __m); }\n-\n-#if __glibcxx_atomic_wait\n-      _GLIBCXX_ALWAYS_INLINE void\n-      wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept\n-      { __atomic_impl::wait(_M_ptr, __old, __m); }\n-\n-      // TODO add const volatile overload\n-#endif // __glibcxx_atomic_wait\n-\n-    protected:\n-      _Fp* _M_ptr;\n-    };\n-\n+  // base class for atomic_ref<floating-point-type>\n   template<typename _Fp>\n     struct __atomic_ref<_Fp, false, false, true, false>\n-      : __atomic_ref_base<_Fp, false, true, false>\n+      : __atomic_ref_base<_Fp>\n     {\n-      using value_type = typename __atomic_ref_base<_Fp, false, true, false>::value_type;\n-\n-      __atomic_ref() = delete;\n-      __atomic_ref& operator=(const __atomic_ref&) = delete;\n+      using value_type = typename __atomic_ref_base<_Fp>::value_type;\n+      using difference_type = value_type;\n \n       explicit\n-      __atomic_ref(_Fp& __t) : __atomic_ref_base<_Fp, false, true, false>(__t)\n+      __atomic_ref(_Fp& __t) : __atomic_ref_base<_Fp>(__t)\n       { }\n \n       value_type\n@@ -2020,93 +1942,18 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       { return __atomic_impl::__sub_fetch_flt(this->_M_ptr, __i); }\n     };\n \n-  // Pointer types\n-  template<typename _Tp>\n-    struct __atomic_ref_base<_Tp, false, false, true>\n-    {\n-      static_assert(is_pointer_v<_Tp>);\n-\n-    public:\n-      using value_type = remove_cv_t<_Tp>;\n-      using difference_type = ptrdiff_t;\n-\n-      static constexpr bool is_always_lock_free = ATOMIC_POINTER_LOCK_FREE == 2;\n-\n-      static_assert(is_always_lock_free || !is_volatile_v<_Tp>);\n-\n-      static constexpr size_t required_alignment = __alignof__(_Tp);\n-\n-      __atomic_ref_base() = delete;\n-      __atomic_ref_base& operator=(const __atomic_ref_base&) = delete;\n-\n-      explicit\n-      __atomic_ref_base(_Tp& __t) : 
_M_ptr(std::__addressof(__t))\n-      {\n-\t__glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0);\n-      }\n-\n-      __atomic_ref_base(const __atomic_ref_base&) noexcept = default;\n-\n-      operator value_type() const noexcept { return this->load(); }\n-\n-      bool\n-      is_lock_free() const noexcept\n-      {\n-\treturn __atomic_impl::is_lock_free<sizeof(_Tp*), required_alignment>();\n-      }\n-\n-      value_type\n-      load(memory_order __m = memory_order_seq_cst) const noexcept\n-      { return __atomic_impl::load(_M_ptr, __m); }\n-\n-#if __glibcxx_atomic_wait\n-      _GLIBCXX_ALWAYS_INLINE void\n-      wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept\n-      { __atomic_impl::wait(_M_ptr, __old, __m); }\n-\n-      // TODO add const volatile overload\n-#endif // __glibcxx_atomic_wait\n-\n-    protected:\n-      static constexpr ptrdiff_t\n-      _S_type_size(ptrdiff_t __d) noexcept\n-      {\n-\tusing _PointedType = remove_pointer_t<_Tp>;\n-\tstatic_assert(is_object_v<_PointedType>);\n-\treturn __d * sizeof(_PointedType);\n-      }\n-\n-      _Tp* _M_ptr;\n-    };\n-\n+  // base class for atomic_ref<pointer-type>\n   template<typename _Tp>\n     struct __atomic_ref<_Tp, false, false, false, true>\n-      : __atomic_ref_base<_Tp, false, false, true>\n+      : __atomic_ref_base<_Tp>\n     {\n-      using value_type = typename __atomic_ref_base<_Tp, false, false, true>::value_type;\n-      using difference_type = typename __atomic_ref_base<_Tp, false, false, true>::difference_type;\n-\n-      __atomic_ref() = delete;\n-      __atomic_ref& operator=(const __atomic_ref&) = delete;\n+      using value_type = typename __atomic_ref_base<_Tp>::value_type;\n+      using difference_type = ptrdiff_t;\n \n       explicit\n-      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, false, false, true>(__t)\n+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp>(__t)\n       { }\n \n-#if __glibcxx_atomic_wait\n-      
_GLIBCXX_ALWAYS_INLINE void\n-      notify_one() const noexcept\n-      { __atomic_impl::notify_one(this->_M_ptr); }\n-\n-      // TODO add const volatile overload\n-\n-      _GLIBCXX_ALWAYS_INLINE void\n-      notify_all() const noexcept\n-      { __atomic_impl::notify_all(this->_M_ptr); }\n-\n-      // TODO add const volatile overload\n-#endif // __glibcxx_atomic_wait\n-\n       value_type\n       operator=(value_type __t) const noexcept\n       {\n@@ -2159,15 +2006,29 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n \t\t\t\t       __cmpexch_failure_order(__order));\n       }\n \n+#if __glibcxx_atomic_wait\n+      _GLIBCXX_ALWAYS_INLINE void\n+      notify_one() const noexcept\n+      { __atomic_impl::notify_one(this->_M_ptr); }\n+\n+      // TODO add const volatile overload\n+\n+      _GLIBCXX_ALWAYS_INLINE void\n+      notify_all() const noexcept\n+      { __atomic_impl::notify_all(this->_M_ptr); }\n+\n+      // TODO add const volatile overload\n+#endif // __glibcxx_atomic_wait\n+\n       _GLIBCXX_ALWAYS_INLINE value_type\n       fetch_add(difference_type __d,\n \t\tmemory_order __m = memory_order_seq_cst) const noexcept\n-      { return __atomic_impl::fetch_add(this->_M_ptr, this->_S_type_size(__d), __m); }\n+      { return __atomic_impl::fetch_add(this->_M_ptr, _S_type_size(__d), __m); }\n \n       _GLIBCXX_ALWAYS_INLINE value_type\n       fetch_sub(difference_type __d,\n \t\tmemory_order __m = memory_order_seq_cst) const noexcept\n-      { return __atomic_impl::fetch_sub(this->_M_ptr, this->_S_type_size(__d), __m); }\n+      { return __atomic_impl::fetch_sub(this->_M_ptr, _S_type_size(__d), __m); }\n \n       value_type\n       operator++(int) const noexcept\n@@ -2180,25 +2041,34 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION\n       value_type\n       operator++() const noexcept\n       {\n-\treturn __atomic_impl::__add_fetch(this->_M_ptr, this->_S_type_size(1));\n+\treturn __atomic_impl::__add_fetch(this->_M_ptr, _S_type_size(1));\n       }\n \n       value_type\n       
operator--() const noexcept\n       {\n-\treturn __atomic_impl::__sub_fetch(this->_M_ptr, this->_S_type_size(1));\n+\treturn __atomic_impl::__sub_fetch(this->_M_ptr, _S_type_size(1));\n       }\n \n       value_type\n       operator+=(difference_type __d) const noexcept\n       {\n-\treturn __atomic_impl::__add_fetch(this->_M_ptr, this->_S_type_size(__d));\n+\treturn __atomic_impl::__add_fetch(this->_M_ptr, _S_type_size(__d));\n       }\n \n       value_type\n       operator-=(difference_type __d) const noexcept\n       {\n-\treturn __atomic_impl::__sub_fetch(this->_M_ptr, this->_S_type_size(__d));\n+\treturn __atomic_impl::__sub_fetch(this->_M_ptr, _S_type_size(__d));\n+      }\n+\n+    private:\n+      static constexpr ptrdiff_t\n+      _S_type_size(ptrdiff_t __d) noexcept\n+      {\n+\tusing _Et = remove_pointer_t<value_type>;\n+\tstatic_assert(is_object_v<_Et>);\n+\treturn __d * sizeof(_Et);\n       }\n     };\n #endif // C++2a\ndiff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/115402.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/115402.cc\nindex ca449c243c49..615834400174 100644\n--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/115402.cc\n+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/115402.cc\n@@ -12,5 +12,6 @@ main()\n   vref.exchange(val);\n   vref.compare_exchange_weak(val, 0);\n   vref.compare_exchange_strong(val, 0);\n-  vref.wait(0);\n+  // TODO volatile waits are not supported\n+  // vref.wait(0);\n }\n","prefixes":["v1","02/10"]}