From patchwork Mon Dec 9 11:52:12 2024
X-Patchwork-Submitter: Giuseppe D'Angelo
X-Patchwork-Id: 2019999
Message-ID: <56ae032b-27af-46dd-a532-4dac2f9793dc@kdab.com>
Date: Mon, 9 Dec 2024 12:52:12 +0100
To: libstdc++, gcc-patches@gcc.gnu.org
From: Giuseppe D'Angelo
Subject: [PATCH] libstdc++: add support for cv-qualified types in atomic_ref (P3323R1)
Organization: KDAB (France) S.A.S., a KDAB Group company

Hello,

The
attached patch implements P3323R1, a DR against C++20 (11?), which fixes LWG 4069 and 3508 by clarifying that std::atomic_ref<cv T> is meant to be supported (whereas std::atomic<cv T> is meant to be unsupported).

I've tried to keep the refactorings to a minimum and to follow the structure of the existing code, where atomic_ref dispatched to a __atomic_ref base class which had specializations for various kinds of T (integral, floating-point, pointer). The mutating operations on an atomic_ref are now constrained on whether T is const or not, so I split __atomic_ref into a further subclass (__atomic_ref_base) that contains the non-mutating operations. __atomic_ref inherits from that base and adds the mutating operations if T is not const.

Thanks,

From c75e170548dd40c58940cb18d05f413c76d21d34 Mon Sep 17 00:00:00 2001
From: Giuseppe D'Angelo
Date: Mon, 9 Dec 2024 01:32:27 +0100
Subject: [PATCH] libstdc++: add support for cv-qualified types in atomic_ref
 (P3323R1)

P3323R1 (a DR for C++20/C++11, fixing LWG 4069 and 3508) clarifies that
std::atomic_ref<cv T> is meant to be supported. This commit implements it
by splitting the __atomic_ref class (that atomic_ref inherits from) into
a further base class (__atomic_ref_base):

* __atomic_ref_base implements the atomic API common to const and
  non-const Ts (with specializations for integrals, floating points,
  pointers);

* __atomic_ref inherits from __atomic_ref_base; if T is non-const, it
  adds on top the "mutating" atomic APIs like store(), exchange(), and
  so on; same discussion w.r.t. the specializations.

The primary atomic_ref template is now meant to be used for cv-qualified
bool, not just bool, so amend the detection accordingly.

At the same time, disable support for cv-qualified types in std::atomic
(for instance, std::atomic<volatile T> isn't meaningful; one should use
volatile std::atomic<T>), again as per the paper.
libstdc++-v3/ChangeLog:

	* include/bits/atomic_base.h: Add support for cv-qualified types
	in atomic_ref: refactor __atomic_ref into a further subclass in
	order to implement the constraints on atomic_ref mutating APIs;
	change _Tp in various function signatures to be value_type
	instead.
	* include/std/atomic: Add a static_assert to std::atomic, as per
	P3323R1, complementing the existing ones.
	* testsuite/29_atomics/atomic_ref/bool.cc: Add tests for cv types
	in atomic_ref.
	* testsuite/29_atomics/atomic_ref/deduction.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/float.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/generic.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/integral.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/pointer.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/requirements.cc: Likewise.
	* testsuite/29_atomics/atomic_ref/wait_notify.cc: Likewise.

Signed-off-by: Giuseppe D'Angelo
---
 libstdc++-v3/include/bits/atomic_base.h       | 507 +++++++++++-------
 libstdc++-v3/include/std/atomic               |   1 +
 .../testsuite/29_atomics/atomic_ref/bool.cc   |  18 +
 .../29_atomics/atomic_ref/deduction.cc        |  33 +-
 .../testsuite/29_atomics/atomic_ref/float.cc  |  21 +-
 .../29_atomics/atomic_ref/generic.cc          |   6 +
 .../29_atomics/atomic_ref/integral.cc         |   6 +
 .../29_atomics/atomic_ref/pointer.cc          |   6 +
 .../29_atomics/atomic_ref/requirements.cc     |  70 ++-
 .../29_atomics/atomic_ref/wait_notify.cc      |  10 +
 10 files changed, 440 insertions(+), 238 deletions(-)

diff --git a/libstdc++-v3/include/bits/atomic_base.h b/libstdc++-v3/include/bits/atomic_base.h
index 72cc4bae6cf..df642716ce8 100644
--- a/libstdc++-v3/include/bits/atomic_base.h
+++ b/libstdc++-v3/include/bits/atomic_base.h
@@ -1473,14 +1473,42 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     };
 #undef _GLIBCXX20_INIT

+  // atomic_ref inherits from __atomic_ref;
+  // __atomic_ref inherits from __atomic_ref_base.
+  //
+  // __atomic_ref_base provides the common APIs for const and non-const types;
+  // __atomic_ref adds on top the APIs for non-const types, thus implementing
+  // the various constraints in [atomic.ref].
+
   template<typename _Tp,
-	   bool = is_integral_v<_Tp> && !is_same_v<_Tp, bool>,
-	   bool = is_floating_point_v<_Tp>>
+	   bool = is_const_v<_Tp>,
+	   bool = is_integral_v<_Tp> && !is_same_v<remove_cv_t<_Tp>, bool>,
+	   bool = is_floating_point_v<_Tp>,
+	   bool = is_pointer_v<_Tp>>
     struct __atomic_ref;

-  // base class for non-integral, non-floating-point, non-pointer types
+  template<typename _Tp, bool _IsIntegral, bool _IsFloatingPoint,
+	   bool _IsPointer>
+    struct __atomic_ref_base;
+
+  // Const types
+  template<typename _Tp, bool _IsIntegral, bool _IsFloatingPoint,
+	   bool _IsPointer>
+    struct __atomic_ref<_Tp, true, _IsIntegral, _IsFloatingPoint, _IsPointer>
+    : __atomic_ref_base<_Tp, _IsIntegral, _IsFloatingPoint, _IsPointer>
+    {
+      __atomic_ref() = delete;
+      __atomic_ref& operator=(const __atomic_ref&) = delete;
+
+      explicit
+      __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, _IsIntegral, _IsFloatingPoint, _IsPointer>(__t)
+      { }
+    };
+
+  // Non-integral, non-floating-point, non-pointer types
   template<typename _Tp>
-    struct __atomic_ref<_Tp, false, false>
+    struct __atomic_ref_base<_Tp, false, false, false>
     {
       static_assert(is_trivially_copyable_v<_Tp>);
@@ -1490,70 +1518,97 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
	? 0 : sizeof(_Tp);

     public:
-      using value_type = _Tp;
+      using value_type = remove_cv_t<_Tp>;

       static constexpr bool is_always_lock_free
	= __atomic_always_lock_free(sizeof(_Tp), 0);

+      static_assert(is_always_lock_free || !is_volatile_v<_Tp>);
+
       static constexpr size_t required_alignment
	= _S_min_alignment > alignof(_Tp) ?
_S_min_alignment : alignof(_Tp); - __atomic_ref& operator=(const __atomic_ref&) = delete; + __atomic_ref_base& operator=(const __atomic_ref_base&) = delete; explicit - __atomic_ref(_Tp& __t) : _M_ptr(std::__addressof(__t)) + __atomic_ref_base(_Tp& __t) : _M_ptr(std::__addressof(__t)) { __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0); } - __atomic_ref(const __atomic_ref&) noexcept = default; + __atomic_ref_base(const __atomic_ref_base&) noexcept = default; - _Tp - operator=(_Tp __t) const noexcept - { - this->store(__t); - return __t; - } - - operator _Tp() const noexcept { return this->load(); } + operator value_type() const noexcept { return this->load(); } bool is_lock_free() const noexcept { return __atomic_impl::is_lock_free(); } - void - store(_Tp __t, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::store(_M_ptr, __t, __m); } - - _Tp + value_type load(memory_order __m = memory_order_seq_cst) const noexcept { return __atomic_impl::load(_M_ptr, __m); } - _Tp - exchange(_Tp __desired, memory_order __m = memory_order_seq_cst) +#if __glibcxx_atomic_wait + _GLIBCXX_ALWAYS_INLINE void + wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::wait(_M_ptr, __old, __m); } + + // TODO add const volatile overload +#endif // __glibcxx_atomic_wait + + protected: + _Tp* _M_ptr; + }; + + template + struct __atomic_ref<_Tp, false, false, false, false> + : __atomic_ref_base<_Tp, false, false, false> + { + using value_type = typename __atomic_ref_base<_Tp, false, false, false>::value_type; + + __atomic_ref() = delete; + __atomic_ref& operator=(const __atomic_ref&) = delete; + + explicit + __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, false, false, false>(__t) + { } + + void + store(value_type __t, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::store(this->_M_ptr, __t, __m); } + + value_type + operator=(value_type __t) const noexcept + { + this->store(__t); + 
return __t; + } + + value_type + exchange(value_type __desired, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::exchange(_M_ptr, __desired, __m); } + { return __atomic_impl::exchange(this->_M_ptr, __desired, __m); } bool - compare_exchange_weak(_Tp& __expected, _Tp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_weak( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_strong(_Tp& __expected, _Tp __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_strong( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_weak(_Tp& __expected, _Tp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1562,7 +1617,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } bool - compare_exchange_strong(_Tp& __expected, _Tp __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1571,64 +1626,51 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } #if __glibcxx_atomic_wait - _GLIBCXX_ALWAYS_INLINE void - wait(_Tp __old, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::wait(_M_ptr, __old, __m); } - - // TODO add const volatile overload - _GLIBCXX_ALWAYS_INLINE void notify_one() const noexcept - { __atomic_impl::notify_one(_M_ptr); } + { __atomic_impl::notify_one(this->_M_ptr); } // TODO add const volatile overload _GLIBCXX_ALWAYS_INLINE void notify_all() const noexcept - { __atomic_impl::notify_all(_M_ptr); } + { 
__atomic_impl::notify_all(this->_M_ptr); } // TODO add const volatile overload #endif // __glibcxx_atomic_wait - - private: - _Tp* _M_ptr; }; - // base class for atomic_ref + + // Integral types (except cv-bool) template - struct __atomic_ref<_Tp, true, false> + struct __atomic_ref_base<_Tp, true, false, false> { static_assert(is_integral_v<_Tp>); public: - using value_type = _Tp; + using value_type = remove_cv_t<_Tp>; using difference_type = value_type; static constexpr bool is_always_lock_free = __atomic_always_lock_free(sizeof(_Tp), 0); + static_assert(is_always_lock_free || !is_volatile_v<_Tp>); + static constexpr size_t required_alignment = sizeof(_Tp) > alignof(_Tp) ? sizeof(_Tp) : alignof(_Tp); - __atomic_ref() = delete; - __atomic_ref& operator=(const __atomic_ref&) = delete; + __atomic_ref_base() = delete; + __atomic_ref_base& operator=(const __atomic_ref_base&) = delete; explicit - __atomic_ref(_Tp& __t) : _M_ptr(&__t) + __atomic_ref_base(_Tp& __t) : _M_ptr(&__t) { __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0); } - __atomic_ref(const __atomic_ref&) noexcept = default; - - _Tp - operator=(_Tp __t) const noexcept - { - this->store(__t); - return __t; - } + __atomic_ref_base(const __atomic_ref_base&) noexcept = default; - operator _Tp() const noexcept { return this->load(); } + operator value_type() const noexcept { return this->load(); } bool is_lock_free() const noexcept @@ -1636,39 +1678,71 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return __atomic_impl::is_lock_free(); } - void - store(_Tp __t, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::store(_M_ptr, __t, __m); } - - _Tp + value_type load(memory_order __m = memory_order_seq_cst) const noexcept { return __atomic_impl::load(_M_ptr, __m); } - _Tp - exchange(_Tp __desired, +#if __glibcxx_atomic_wait + _GLIBCXX_ALWAYS_INLINE void + wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::wait(_M_ptr, __old, __m); } + + 
// TODO add const volatile overload +#endif // __glibcxx_atomic_wait + + protected: + _Tp* _M_ptr; + }; + + template + struct __atomic_ref<_Tp, false, true, false, false> + : __atomic_ref_base<_Tp, true, false, false> + { + using value_type = typename __atomic_ref_base<_Tp, true, false, false>::value_type; + + __atomic_ref() = delete; + __atomic_ref& operator=(const __atomic_ref&) = delete; + + explicit + __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, true, false, false>(__t) + { } + + value_type + operator=(value_type __t) const noexcept + { + this->store(__t); + return __t; + } + + void + store(value_type __t, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::store(this->_M_ptr, __t, __m); } + + value_type + exchange(value_type __desired, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::exchange(_M_ptr, __desired, __m); } + { return __atomic_impl::exchange(this->_M_ptr, __desired, __m); } bool - compare_exchange_weak(_Tp& __expected, _Tp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_weak( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_strong(_Tp& __expected, _Tp __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_strong( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_weak(_Tp& __expected, _Tp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1677,7 +1751,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } bool - compare_exchange_strong(_Tp& __expected, _Tp 
__desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1686,21 +1760,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } #if __glibcxx_atomic_wait - _GLIBCXX_ALWAYS_INLINE void - wait(_Tp __old, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::wait(_M_ptr, __old, __m); } - - // TODO add const volatile overload - _GLIBCXX_ALWAYS_INLINE void notify_one() const noexcept - { __atomic_impl::notify_one(_M_ptr); } + { __atomic_impl::notify_one(this->_M_ptr); } // TODO add const volatile overload _GLIBCXX_ALWAYS_INLINE void notify_all() const noexcept - { __atomic_impl::notify_all(_M_ptr); } + { __atomic_impl::notify_all(this->_M_ptr); } // TODO add const volatile overload #endif // __glibcxx_atomic_wait @@ -1708,27 +1776,27 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION value_type fetch_add(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_add(_M_ptr, __i, __m); } + { return __atomic_impl::fetch_add(this->_M_ptr, __i, __m); } value_type fetch_sub(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_sub(_M_ptr, __i, __m); } + { return __atomic_impl::fetch_sub(this->_M_ptr, __i, __m); } value_type fetch_and(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_and(_M_ptr, __i, __m); } + { return __atomic_impl::fetch_and(this->_M_ptr, __i, __m); } value_type fetch_or(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_or(_M_ptr, __i, __m); } + { return __atomic_impl::fetch_or(this->_M_ptr, __i, __m); } value_type fetch_xor(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_xor(_M_ptr, __i, __m); } + { return __atomic_impl::fetch_xor(this->_M_ptr, __i, __m); } _GLIBCXX_ALWAYS_INLINE value_type operator++(int) const noexcept 
@@ -1740,70 +1808,62 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION value_type operator++() const noexcept - { return __atomic_impl::__add_fetch(_M_ptr, value_type(1)); } + { return __atomic_impl::__add_fetch(this->_M_ptr, value_type(1)); } value_type operator--() const noexcept - { return __atomic_impl::__sub_fetch(_M_ptr, value_type(1)); } + { return __atomic_impl::__sub_fetch(this->_M_ptr, value_type(1)); } value_type operator+=(value_type __i) const noexcept - { return __atomic_impl::__add_fetch(_M_ptr, __i); } + { return __atomic_impl::__add_fetch(this->_M_ptr, __i); } value_type operator-=(value_type __i) const noexcept - { return __atomic_impl::__sub_fetch(_M_ptr, __i); } + { return __atomic_impl::__sub_fetch(this->_M_ptr, __i); } value_type operator&=(value_type __i) const noexcept - { return __atomic_impl::__and_fetch(_M_ptr, __i); } + { return __atomic_impl::__and_fetch(this->_M_ptr, __i); } value_type operator|=(value_type __i) const noexcept - { return __atomic_impl::__or_fetch(_M_ptr, __i); } + { return __atomic_impl::__or_fetch(this->_M_ptr, __i); } value_type operator^=(value_type __i) const noexcept - { return __atomic_impl::__xor_fetch(_M_ptr, __i); } - - private: - _Tp* _M_ptr; + { return __atomic_impl::__xor_fetch(this->_M_ptr, __i); } }; - // base class for atomic_ref + // Floating-point types template - struct __atomic_ref<_Fp, false, true> + struct __atomic_ref_base<_Fp, false, true, false> { static_assert(is_floating_point_v<_Fp>); public: - using value_type = _Fp; + using value_type = remove_cv_t<_Fp>; using difference_type = value_type; static constexpr bool is_always_lock_free = __atomic_always_lock_free(sizeof(_Fp), 0); + static_assert(is_always_lock_free || !is_volatile_v<_Fp>); + static constexpr size_t required_alignment = __alignof__(_Fp); - __atomic_ref() = delete; - __atomic_ref& operator=(const __atomic_ref&) = delete; + __atomic_ref_base() = delete; + __atomic_ref_base& operator=(const __atomic_ref_base&) = delete; explicit - 
__atomic_ref(_Fp& __t) : _M_ptr(&__t) + __atomic_ref_base(_Fp& __t) : _M_ptr(&__t) { __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0); } - __atomic_ref(const __atomic_ref&) noexcept = default; + __atomic_ref_base(const __atomic_ref_base&) noexcept = default; - _Fp - operator=(_Fp __t) const noexcept - { - this->store(__t); - return __t; - } - - operator _Fp() const noexcept { return this->load(); } + operator value_type() const noexcept { return this->load(); } bool is_lock_free() const noexcept @@ -1811,39 +1871,71 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return __atomic_impl::is_lock_free(); } - void - store(_Fp __t, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::store(_M_ptr, __t, __m); } - _Fp load(memory_order __m = memory_order_seq_cst) const noexcept { return __atomic_impl::load(_M_ptr, __m); } +#if __glibcxx_atomic_wait + _GLIBCXX_ALWAYS_INLINE void + wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::wait(_M_ptr, __old, __m); } + + // TODO add const volatile overload +#endif // __glibcxx_atomic_wait + + protected: + _Fp* _M_ptr; + }; + + template + struct __atomic_ref<_Fp, false, false, true, false> + : __atomic_ref_base<_Fp, false, true, false> + { + using value_type = typename __atomic_ref_base<_Fp, false, true, false>::value_type; + + __atomic_ref() = delete; + __atomic_ref& operator=(const __atomic_ref&) = delete; + + explicit + __atomic_ref(_Fp& __t) : __atomic_ref_base<_Fp, false, true, false>(__t) + { } + + value_type + operator=(value_type __t) const noexcept + { + this->store(__t); + return __t; + } + + void + store(value_type __t, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::store(this->_M_ptr, __t, __m); } + _Fp - exchange(_Fp __desired, + exchange(value_type __desired, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::exchange(_M_ptr, __desired, __m); } + { return 
__atomic_impl::exchange(this->_M_ptr, __desired, __m); } bool - compare_exchange_weak(_Fp& __expected, _Fp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_weak( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_strong(_Fp& __expected, _Fp __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_strong( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_weak(_Fp& __expected, _Fp __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1852,7 +1944,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } bool - compare_exchange_strong(_Fp& __expected, _Fp __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1861,21 +1953,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } #if __glibcxx_atomic_wait - _GLIBCXX_ALWAYS_INLINE void - wait(_Fp __old, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::wait(_M_ptr, __old, __m); } - - // TODO add const volatile overload - _GLIBCXX_ALWAYS_INLINE void notify_one() const noexcept - { __atomic_impl::notify_one(_M_ptr); } + { __atomic_impl::notify_one(this->_M_ptr); } // TODO add const volatile overload _GLIBCXX_ALWAYS_INLINE void notify_all() const noexcept - { __atomic_impl::notify_all(_M_ptr); } + { __atomic_impl::notify_all(this->_M_ptr); } // TODO add const volatile overload #endif // __glibcxx_atomic_wait @@ -1883,56 +1969,50 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION value_type fetch_add(value_type __i, 
memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::__fetch_add_flt(_M_ptr, __i, __m); } + { return __atomic_impl::__fetch_add_flt(this->_M_ptr, __i, __m); } value_type fetch_sub(value_type __i, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::__fetch_sub_flt(_M_ptr, __i, __m); } + { return __atomic_impl::__fetch_sub_flt(this->_M_ptr, __i, __m); } value_type operator+=(value_type __i) const noexcept - { return __atomic_impl::__add_fetch_flt(_M_ptr, __i); } + { return __atomic_impl::__add_fetch_flt(this->_M_ptr, __i); } value_type operator-=(value_type __i) const noexcept - { return __atomic_impl::__sub_fetch_flt(_M_ptr, __i); } - - private: - _Fp* _M_ptr; + { return __atomic_impl::__sub_fetch_flt(this->_M_ptr, __i); } }; - // base class for atomic_ref + // Pointer types template - struct __atomic_ref<_Tp*, false, false> + struct __atomic_ref_base<_Tp, false, false, true> { + static_assert(is_pointer_v<_Tp>); + public: - using value_type = _Tp*; + using value_type = remove_cv_t<_Tp>; using difference_type = ptrdiff_t; static constexpr bool is_always_lock_free = ATOMIC_POINTER_LOCK_FREE == 2; - static constexpr size_t required_alignment = __alignof__(_Tp*); + static_assert(is_always_lock_free || !is_volatile_v<_Tp>); - __atomic_ref() = delete; - __atomic_ref& operator=(const __atomic_ref&) = delete; + static constexpr size_t required_alignment = __alignof__(_Tp); + + __atomic_ref_base() = delete; + __atomic_ref_base& operator=(const __atomic_ref_base&) = delete; explicit - __atomic_ref(_Tp*& __t) : _M_ptr(std::__addressof(__t)) + __atomic_ref_base(_Tp& __t) : _M_ptr(std::__addressof(__t)) { __glibcxx_assert(((__UINTPTR_TYPE__)_M_ptr % required_alignment) == 0); } - __atomic_ref(const __atomic_ref&) noexcept = default; - - _Tp* - operator=(_Tp* __t) const noexcept - { - this->store(__t); - return __t; - } + __atomic_ref_base(const __atomic_ref_base&) noexcept = default; - operator _Tp*() const noexcept { 
return this->load(); } + operator value_type() const noexcept { return this->load(); } bool is_lock_free() const noexcept @@ -1940,39 +2020,94 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION return __atomic_impl::is_lock_free(); } - void - store(_Tp* __t, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::store(_M_ptr, __t, __m); } - - _Tp* + value_type load(memory_order __m = memory_order_seq_cst) const noexcept { return __atomic_impl::load(_M_ptr, __m); } - _Tp* - exchange(_Tp* __desired, +#if __glibcxx_atomic_wait + _GLIBCXX_ALWAYS_INLINE void + wait(value_type __old, memory_order __m = memory_order_seq_cst) const noexcept + { __atomic_impl::wait(_M_ptr, __old, __m); } + + // TODO add const volatile overload +#endif // __glibcxx_atomic_wait + + protected: + static constexpr ptrdiff_t + _S_type_size(ptrdiff_t __d) noexcept + { + using _PointedType = remove_pointer_t<_Tp>; + static_assert(is_object_v<_PointedType>); + return __d * sizeof(_PointedType); + } + + _Tp* _M_ptr; + }; + + template + struct __atomic_ref<_Tp, false, false, false, true> + : __atomic_ref_base<_Tp, false, false, true> + { + using value_type = typename __atomic_ref_base<_Tp, false, false, true>::value_type; + using difference_type = typename __atomic_ref_base<_Tp, false, false, true>::difference_type; + + __atomic_ref() = delete; + __atomic_ref& operator=(const __atomic_ref&) = delete; + + explicit + __atomic_ref(_Tp& __t) : __atomic_ref_base<_Tp, false, false, true>(__t) + { } + +#if __glibcxx_atomic_wait + _GLIBCXX_ALWAYS_INLINE void + notify_one() const noexcept + { __atomic_impl::notify_one(this->_M_ptr); } + + // TODO add const volatile overload + + _GLIBCXX_ALWAYS_INLINE void + notify_all() const noexcept + { __atomic_impl::notify_all(this->_M_ptr); } + + // TODO add const volatile overload +#endif // __glibcxx_atomic_wait + + value_type + operator=(value_type __t) const noexcept + { + this->store(__t); + return __t; + } + + void + store(value_type __t, memory_order __m = 
memory_order_seq_cst) const noexcept + { __atomic_impl::store(this->_M_ptr, __t, __m); } + + value_type + exchange(value_type __desired, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::exchange(_M_ptr, __desired, __m); } + { return __atomic_impl::exchange(this->_M_ptr, __desired, __m); } bool - compare_exchange_weak(_Tp*& __expected, _Tp* __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_weak( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_strong(_Tp*& __expected, _Tp* __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __success, memory_order __failure) const noexcept { return __atomic_impl::compare_exchange_strong( - _M_ptr, __expected, __desired, __success, __failure); + this->_M_ptr, __expected, __desired, __success, __failure); } bool - compare_exchange_weak(_Tp*& __expected, _Tp* __desired, + compare_exchange_weak(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1981,7 +2116,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION } bool - compare_exchange_strong(_Tp*& __expected, _Tp* __desired, + compare_exchange_strong(value_type& __expected, value_type __desired, memory_order __order = memory_order_seq_cst) const noexcept { @@ -1989,35 +2124,15 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION __cmpexch_failure_order(__order)); } -#if __glibcxx_atomic_wait - _GLIBCXX_ALWAYS_INLINE void - wait(_Tp* __old, memory_order __m = memory_order_seq_cst) const noexcept - { __atomic_impl::wait(_M_ptr, __old, __m); } - - // TODO add const volatile overload - - _GLIBCXX_ALWAYS_INLINE void - notify_one() const noexcept - { __atomic_impl::notify_one(_M_ptr); } - - // TODO add const volatile overload - - _GLIBCXX_ALWAYS_INLINE void - 
notify_all() const noexcept - { __atomic_impl::notify_all(_M_ptr); } - - // TODO add const volatile overload -#endif // __glibcxx_atomic_wait - _GLIBCXX_ALWAYS_INLINE value_type fetch_add(difference_type __d, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_add(_M_ptr, _S_type_size(__d), __m); } + { return __atomic_impl::fetch_add(this->_M_ptr, this->_S_type_size(__d), __m); } _GLIBCXX_ALWAYS_INLINE value_type fetch_sub(difference_type __d, memory_order __m = memory_order_seq_cst) const noexcept - { return __atomic_impl::fetch_sub(_M_ptr, _S_type_size(__d), __m); } + { return __atomic_impl::fetch_sub(this->_M_ptr, this->_S_type_size(__d), __m); } value_type operator++(int) const noexcept @@ -2030,36 +2145,26 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION value_type operator++() const noexcept { - return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(1)); + return __atomic_impl::__add_fetch(this->_M_ptr, this->_S_type_size(1)); } value_type operator--() const noexcept { - return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(1)); + return __atomic_impl::__sub_fetch(this->_M_ptr, this->_S_type_size(1)); } value_type operator+=(difference_type __d) const noexcept { - return __atomic_impl::__add_fetch(_M_ptr, _S_type_size(__d)); + return __atomic_impl::__add_fetch(this->_M_ptr, this->_S_type_size(__d)); } value_type operator-=(difference_type __d) const noexcept { - return __atomic_impl::__sub_fetch(_M_ptr, _S_type_size(__d)); + return __atomic_impl::__sub_fetch(this->_M_ptr, this->_S_type_size(__d)); } - - private: - static constexpr ptrdiff_t - _S_type_size(ptrdiff_t __d) noexcept - { - static_assert(is_object_v<_Tp>); - return __d * sizeof(_Tp); - } - - _Tp** _M_ptr; }; #endif // C++2a diff --git a/libstdc++-v3/include/std/atomic b/libstdc++-v3/include/std/atomic index 503dca945d3..0b83643fca4 100644 --- a/libstdc++-v3/include/std/atomic +++ b/libstdc++-v3/include/std/atomic @@ -222,6 +222,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION 
       static_assert(is_move_constructible_v<_Tp>);
       static_assert(is_copy_assignable_v<_Tp>);
       static_assert(is_move_assignable_v<_Tp>);
+      static_assert(is_same_v<_Tp, remove_cv_t<_Tp>>);
 #endif
 
     public:
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
index 4702932627e..7b362737afb 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/bool.cc
@@ -13,3 +13,21 @@ static_assert( not has_or<std::atomic_ref<bool>> );
 static_assert( not has_xor<std::atomic_ref<bool>> );
 static_assert( not has_fetch_add<std::atomic_ref<bool>> );
 static_assert( not has_fetch_sub<std::atomic_ref<bool>> );
+
+static_assert( not has_and<std::atomic_ref<const bool>> );
+static_assert( not has_or<std::atomic_ref<const bool>> );
+static_assert( not has_xor<std::atomic_ref<const bool>> );
+static_assert( not has_fetch_add<std::atomic_ref<const bool>> );
+static_assert( not has_fetch_sub<std::atomic_ref<const bool>> );
+
+static_assert( not has_and<std::atomic_ref<volatile bool>> );
+static_assert( not has_or<std::atomic_ref<volatile bool>> );
+static_assert( not has_xor<std::atomic_ref<volatile bool>> );
+static_assert( not has_fetch_add<std::atomic_ref<volatile bool>> );
+static_assert( not has_fetch_sub<std::atomic_ref<volatile bool>> );
+
+static_assert( not has_and<std::atomic_ref<const volatile bool>> );
+static_assert( not has_or<std::atomic_ref<const volatile bool>> );
+static_assert( not has_xor<std::atomic_ref<const volatile bool>> );
+static_assert( not has_fetch_add<std::atomic_ref<const volatile bool>> );
+static_assert( not has_fetch_sub<std::atomic_ref<const volatile bool>> );
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
index 642c867e33b..f8fc03e3426 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/deduction.cc
@@ -19,22 +19,29 @@
 
 #include <atomic>
 
+template<typename T>
 void
-test01()
+test_impl(T v)
 {
-  int i = 0;
-  std::atomic_ref a0(i);
-  static_assert(std::is_same_v<decltype(a0), std::atomic_ref<int>>);
-
-  float f = 1.0f;
-  std::atomic_ref a1(f);
-  static_assert(std::is_same_v<decltype(a1), std::atomic_ref<float>>);
+  std::atomic_ref a(v);
+  static_assert(std::is_same_v<decltype(a), std::atomic_ref<T>>);
+}
 
-  int* p = &i;
-  std::atomic_ref a2(p);
-  static_assert(std::is_same_v<decltype(a2), std::atomic_ref<int*>>);
+template<typename T>
+void
+test(T v)
+{
+  test_impl<T>(v);
+  test_impl<const T>(v);
+  test_impl<volatile T>(v);
+  test_impl<const volatile T>(v);
+}
 
+int main()
+{
+  test(0);
+  test(1.0f);
+  test(nullptr);
   struct X { } x;
-  std::atomic_ref a3(x);
-  static_assert(std::is_same_v<decltype(a3), std::atomic_ref<X>>);
+  test(x);
 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
index fe2fe238128..8d192989272 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/float.cc
@@ -299,14 +299,19 @@ test04()
 {
   if constexpr (std::atomic_ref<float>::is_always_lock_free)
   {
-    float i = 0;
-    float* ptr = 0;
-    std::atomic_ref<float*> a0(ptr);
-    std::atomic_ref<float*> a1(ptr);
-    std::atomic_ref<float*> a2(a0);
-    a0 = &i;
-    VERIFY( a1 == &i );
-    VERIFY( a2 == &i );
+    float i = 0.0f;
+    std::atomic_ref<float> a0(i);
+    std::atomic_ref<float> a1(i);
+    std::atomic_ref<const float> a1c(i);
+    std::atomic_ref<volatile float> a1v(i);
+    std::atomic_ref<const volatile float> a1cv(i);
+    std::atomic_ref<float> a2(a0);
+    a0 = 1.0f;
+    VERIFY( a1 == 1.0f );
+    VERIFY( a1c == 1.0f );
+    VERIFY( a1v == 1.0f );
+    VERIFY( a1cv == 1.0f );
+    VERIFY( a2 == 1.0f );
   }
 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
index 5d989e3bb68..b2041c1e555 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/generic.cc
@@ -108,9 +108,15 @@ test02()
   X i;
   std::atomic_ref<X> a0(i);
   std::atomic_ref<X> a1(i);
+  std::atomic_ref<const X> a1c(i);
+  std::atomic_ref<volatile X> a1v(i);
+  std::atomic_ref<const volatile X> a1cv(i);
   std::atomic_ref<X> a2(a0);
   a0 = 42;
   VERIFY( a1.load() == 42 );
+  VERIFY( a1c.load() == 42 );
+  VERIFY( a1v.load() == 42 );
+  VERIFY( a1cv.load() == 42 );
   VERIFY( a2.load() == 42 );
 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
index e5e6364dad9..5b7ee8bc15a 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/integral.cc
@@ -302,9 +302,15 @@ test03()
   int i = 0;
   std::atomic_ref<int> a0(i);
   std::atomic_ref<int> a1(i);
+  std::atomic_ref<const int> a1c(i);
+  std::atomic_ref<volatile int> a1v(i);
+  std::atomic_ref<const volatile int> a1cv(i);
   std::atomic_ref<int> a2(a0);
   a0 = 42;
   VERIFY( a1 == 42 );
+  VERIFY( a1c == 42 );
+  VERIFY( a1v == 42 );
+  VERIFY( a1cv == 42 );
   VERIFY( a2 == 42 );
 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
index cd75f9821ea..ba251c2974f 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/pointer.cc
@@ -210,9 +210,15 @@ test03()
   int* ptr = 0;
   std::atomic_ref<int*> a0(ptr);
   std::atomic_ref<int*> a1(ptr);
+  std::atomic_ref<int* const> a1c(ptr);
+  std::atomic_ref<int* volatile> a1v(ptr);
+  std::atomic_ref<int* const volatile> a1cv(ptr);
   std::atomic_ref<int*> a2(a0);
   a0 = &i;
   VERIFY( a1 == &i );
+  VERIFY( a1c == &i );
+  VERIFY( a1v == &i );
+  VERIFY( a1cv == &i );
   VERIFY( a2 == &i );
 }
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
index 96d003f99b4..6c318e66f58 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/requirements.cc
@@ -18,56 +18,94 @@
 // { dg-do compile { target c++20 } }
 
 #include <atomic>
+#include <type_traits>
 
+template<typename T>
 void
-test01()
+test_generic()
 {
-  struct X { int c; };
-  using A = std::atomic_ref<X>;
+  using A = std::atomic_ref<T>;
   static_assert( std::is_standard_layout_v<A> );
   static_assert( std::is_nothrow_copy_constructible_v<A> );
   static_assert( std::is_trivially_destructible_v<A> );
-  static_assert( std::is_same_v<A::value_type, X> );
+  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
   static_assert( !std::is_copy_assignable_v<A> );
   static_assert( !std::is_move_assignable_v<A> );
 }
 
+template<typename T>
 void
-test02()
+test_integral()
 {
-  using A = std::atomic_ref<int>;
+  static_assert( std::is_integral_v<T> );
+  using A = std::atomic_ref<T>;
   static_assert( std::is_standard_layout_v<A> );
   static_assert( std::is_nothrow_copy_constructible_v<A> );
   static_assert( std::is_trivially_destructible_v<A> );
-  static_assert( std::is_same_v<A::value_type, int> );
-  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
+  static_assert( std::is_same_v<typename A::difference_type, typename A::value_type> );
   static_assert( !std::is_copy_assignable_v<A> );
   static_assert( !std::is_move_assignable_v<A> );
 }
 
+template<typename T>
 void
-test03()
+test_floating_point()
 {
-  using A = std::atomic_ref<float>;
+  static_assert( std::is_floating_point_v<T> );
+  using A = std::atomic_ref<T>;
   static_assert( std::is_standard_layout_v<A> );
   static_assert( std::is_nothrow_copy_constructible_v<A> );
   static_assert( std::is_trivially_destructible_v<A> );
-  static_assert( std::is_same_v<A::value_type, float> );
-  static_assert( std::is_same_v<A::difference_type, A::value_type> );
+  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
+  static_assert( std::is_same_v<typename A::difference_type, typename A::value_type> );
   static_assert( !std::is_copy_assignable_v<A> );
   static_assert( !std::is_move_assignable_v<A> );
 }
 
+template<typename T>
 void
-test04()
+test_pointer()
 {
-  using A = std::atomic_ref<int*>;
+  static_assert( std::is_pointer_v<T> );
+  using A = std::atomic_ref<T>;
   static_assert( std::is_standard_layout_v<A> );
   static_assert( std::is_nothrow_copy_constructible_v<A> );
   static_assert( std::is_trivially_destructible_v<A> );
-  static_assert( std::is_same_v<A::value_type, int*> );
-  static_assert( std::is_same_v<A::difference_type, std::ptrdiff_t> );
+  static_assert( std::is_same_v<typename A::value_type, std::remove_cv_t<T>> );
+  static_assert( std::is_same_v<typename A::difference_type, std::ptrdiff_t> );
   static_assert( std::is_nothrow_copy_constructible_v<A> );
   static_assert( !std::is_copy_assignable_v<A> );
   static_assert( !std::is_move_assignable_v<A> );
 }
+
+int
+main()
+{
+  struct X { int c; };
+  test_generic<X>();
+  test_generic<const X>();
+  test_generic<volatile X>();
+  test_generic<const volatile X>();
+
+  // atomic_ref excludes (cv) `bool` from the set of integral types
+  test_generic<bool>();
+  test_generic<const bool>();
+  test_generic<volatile bool>();
+  test_generic<const volatile bool>();
+
+  test_integral<int>();
+  test_integral<const int>();
+  test_integral<volatile int>();
+  test_integral<const volatile int>();
+
+  test_floating_point<float>();
+  test_floating_point<const float>();
+  test_floating_point<volatile float>();
+  test_floating_point<const volatile float>();
+
+  test_pointer<int*>();
+  test_pointer<int* const>();
+  test_pointer<int* volatile>();
+  test_pointer<int* const volatile>();
+}
\ No newline at end of file
diff --git a/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc b/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
index 503d20d0044..922af652d2c 100644
--- a/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
+++ b/libstdc++-v3/testsuite/29_atomics/atomic_ref/wait_notify.cc
@@ -41,6 +41,16 @@ template
       });
     a.wait(va);
     t.join();
+
+    std::atomic_ref b{ aa };
+    b.wait(va);
+    std::thread t2([&]
+      {
+	a.store(va);
+	a.notify_one();
+      });
+    b.wait(vb);
+    t2.join();
   }
 }
-- 
2.34.1