From patchwork Thu Nov  8 18:02:29 2018
X-Patchwork-Submitter: Jonathan Wakely
X-Patchwork-Id: 995091
Date: Thu, 8 Nov 2018 18:02:29 +0000
From: Jonathan Wakely
To: libstdc++@gcc.gnu.org, gcc-patches@gcc.gnu.org
Subject: [PATCH] Implement std::pmr::synchronized_pool_resource
Message-ID: <20181108180229.GA31340@redhat.com>
X-Clacks-Overhead: GNU Terry Pratchett
User-Agent: Mutt/1.9.2 (2017-12-15)

Define the thread-safe pool resource, using a shared_mutex to allow
multiple threads to concurrently allocate from thread-specific pools.

Define new weak symbols for the pthread_rwlock_t functions, to avoid
making libstdc++.so depend on libpthread.so.

	* config/abi/pre/gnu.ver: Add new symbols.
	* include/std/memory_resource [_GLIBCXX_HAS_GTHREADS]
	(synchronized_pool_resource): New class.
	* include/std/shared_mutex (__glibcxx_rwlock_rdlock)
	(__glibcxx_rwlock_tryrdlock, __glibcxx_rwlock_wrlock)
	(__glibcxx_rwlock_trywrlock, __glibcxx_rwlock_unlock)
	(__glibcxx_rwlock_destroy, __glibcxx_rwlock_init)
	(__glibcxx_rwlock_timedrdlock, __glibcxx_rwlock_timedwrlock):
	Define weak symbols for POSIX rwlock functions.
	(__shared_mutex_pthread): Use weak symbols.
	* src/c++17/memory_resource.cc [_GLIBCXX_HAS_GTHREADS]
	(synchronized_pool_resource::_TPools): New class.
	(destroy_TPools): New function for pthread_key_create destructor.
	(synchronized_pool_resource::synchronized_pool_resource)
	(synchronized_pool_resource::~synchronized_pool_resource)
	(synchronized_pool_resource::release)
	(synchronized_pool_resource::do_allocate)
	(synchronized_pool_resource::do_deallocate): Define public members.
	(synchronized_pool_resource::_M_thread_specific_pools)
	(synchronized_pool_resource::_M_alloc_tpools)
	(synchronized_pool_resource::_M_alloc_shared_tpools): Define
	private members.

The performance of this implementation is ... not great. But it's better
than simply adding a mutex to unsynchronized_pool_resource and locking
that around every allocation and deallocation.

I am not committing this yet, because I still need to test on non-GNU
targets, where the new weak wrappers around the pthread_rwlock_t
functions might not work properly.

commit adf32f3dd71a8312f212aadb7df2b6d32d8fe59b
Author: Jonathan Wakely
Date:   Sat Nov 3 23:13:26 2018 +0000

    Implement std::pmr::synchronized_pool_resource

    Define the thread-safe pool resource, using a shared_mutex to allow
    multiple threads to concurrently allocate from thread-specific pools.
    Define new weak symbols for the pthread_rwlock_t functions, to avoid
    making libstdc++.so depend on libpthread.so.

    	* config/abi/pre/gnu.ver: Add new symbols.
    	* include/std/memory_resource [_GLIBCXX_HAS_GTHREADS]
    	(synchronized_pool_resource): New class.
    	* include/std/shared_mutex (__glibcxx_rwlock_rdlock)
    	(__glibcxx_rwlock_tryrdlock, __glibcxx_rwlock_wrlock)
    	(__glibcxx_rwlock_trywrlock, __glibcxx_rwlock_unlock)
    	(__glibcxx_rwlock_destroy, __glibcxx_rwlock_init)
    	(__glibcxx_rwlock_timedrdlock, __glibcxx_rwlock_timedwrlock):
    	Define weak symbols for POSIX rwlock functions.
    	(__shared_mutex_pthread): Use weak symbols.
    	* src/c++17/memory_resource.cc [_GLIBCXX_HAS_GTHREADS]
    	(synchronized_pool_resource::_TPools): New class.
    	(destroy_TPools): New function for pthread_key_create destructor.
    	(synchronized_pool_resource::synchronized_pool_resource)
    	(synchronized_pool_resource::~synchronized_pool_resource)
    	(synchronized_pool_resource::release)
    	(synchronized_pool_resource::do_allocate)
    	(synchronized_pool_resource::do_deallocate): Define public members.
    	(synchronized_pool_resource::_M_thread_specific_pools)
    	(synchronized_pool_resource::_M_alloc_tpools)
    	(synchronized_pool_resource::_M_alloc_shared_tpools): Define
    	private members.

diff --git a/libstdc++-v3/config/abi/pre/gnu.ver b/libstdc++-v3/config/abi/pre/gnu.ver
index 9d66f908e1a..c301fc31afd 100644
--- a/libstdc++-v3/config/abi/pre/gnu.ver
+++ b/libstdc++-v3/config/abi/pre/gnu.ver
@@ -2039,13 +2039,6 @@ GLIBCXX_3.4.26 {
     _ZNSt7__cxx1118basic_stringstreamI[cw]St11char_traitsI[cw]ESaI[cw]EEC[12]Ev;
     _ZNSt7__cxx1119basic_[io]stringstreamI[cw]St11char_traitsI[cw]ESaI[cw]EEC[12]Ev;

-    _ZNSt3pmr19new_delete_resourceEv;
-    _ZNSt3pmr20null_memory_resourceEv;
-    _ZNSt3pmr20get_default_resourceEv;
-    _ZNSt3pmr20set_default_resourceEPNS_15memory_resourceE;
-    _ZNSt3pmr25monotonic_buffer_resource13_M_new_bufferE[jmy][jmy];
-    _ZNSt3pmr25monotonic_buffer_resource18_M_release_buffersEv;
-
     # std::__throw_ios_failure(const char*, int);
     _ZSt19__throw_ios_failurePKci;

@@ -2057,6 +2050,18 @@ GLIBCXX_3.4.26 {
     _ZN11__gnu_debug25_Safe_local_iterator_base16_M_attach_singleEPNS_19_Safe_sequence_baseEb;

     # members
+    _ZNSt3pmr19new_delete_resourceEv;
+    _ZNSt3pmr20null_memory_resourceEv;
+    _ZNSt3pmr20get_default_resourceEv;
+    _ZNSt3pmr20set_default_resourceEPNS_15memory_resourceE;
+    _ZNSt3pmr25monotonic_buffer_resource13_M_new_bufferE[jmy][jmy];
+    _ZNSt3pmr25monotonic_buffer_resource18_M_release_buffersEv;
+    _ZTINSt3pmr26synchronized_pool_resourceE;
+    _ZNSt3pmr26synchronized_pool_resourceC1ERKNS_12pool_optionsEPNS_15memory_resourceE;
+    _ZNSt3pmr26synchronized_pool_resourceD[12]Ev;
+    _ZNSt3pmr26synchronized_pool_resource7releaseEv;
+    _ZNSt3pmr26synchronized_pool_resource11do_allocateE[jmy][jmy];
+    _ZNSt3pmr26synchronized_pool_resource13do_deallocateEPv[jmy][jmy];
     _ZTINSt3pmr28unsynchronized_pool_resourceE;
     _ZNSt3pmr28unsynchronized_pool_resourceC[12]ERKNS_12pool_optionsEPNS_15memory_resourceE;
     _ZNSt3pmr28unsynchronized_pool_resourceD[12]Ev;
diff --git a/libstdc++-v3/include/std/memory_resource b/libstdc++-v3/include/std/memory_resource
index 40486af82fe..86866f6c4d5 100644
--- a/libstdc++-v3/include/std/memory_resource
+++ b/libstdc++-v3/include/std/memory_resource
@@ -37,6 +37,7 @@
 #include <utility>			// pair, index_sequence
 #include <vector>			// vector
 #include <cstddef>			// size_t, max_align_t
+#include <shared_mutex>			// shared_mutex
 #include <debug/assertions.h>

 namespace std _GLIBCXX_VISIBILITY(default)
@@ -338,7 +339,72 @@ namespace pmr
     const int _M_npools;
   };

-  // TODO class synchronized_pool_resource
+#ifdef _GLIBCXX_HAS_GTHREADS
+  /// A thread-safe memory resource that manages pools of fixed-size blocks.
+  class synchronized_pool_resource : public memory_resource
+  {
+  public:
+    synchronized_pool_resource(const pool_options& __opts,
+			       memory_resource* __upstream)
+    __attribute__((__nonnull__));
+
+    synchronized_pool_resource()
+    : synchronized_pool_resource(pool_options(), get_default_resource())
+    { }
+
+    explicit
+    synchronized_pool_resource(memory_resource* __upstream)
+    __attribute__((__nonnull__))
+    : synchronized_pool_resource(pool_options(), __upstream)
+    { }
+
+    explicit
+    synchronized_pool_resource(const pool_options& __opts)
+    : synchronized_pool_resource(__opts, get_default_resource())
+    { }
+
+    synchronized_pool_resource(const synchronized_pool_resource&) = delete;
+
+    virtual ~synchronized_pool_resource();
+
+    synchronized_pool_resource&
+    operator=(const synchronized_pool_resource&) = delete;
+
+    void release();
+
+    memory_resource*
+    upstream_resource() const noexcept
+    __attribute__((__returns_nonnull__))
+    { return _M_impl.resource(); }
+
+    pool_options options() const noexcept { return _M_impl._M_opts; }
+
+  protected:
+    void*
+    do_allocate(size_t __bytes, size_t __alignment) override;
+
+    void
+    do_deallocate(void* __p, size_t __bytes, size_t __alignment) override;
+
+    bool
+    do_is_equal(const memory_resource& __other) const noexcept override
+    { return this == &__other; }
+
+  public:
+    // Thread-specific pools (only public for access by implementation details)
+    struct _TPools;
+
+  private:
+    _TPools* _M_alloc_tpools(lock_guard<shared_mutex>&);
+    _TPools* _M_alloc_shared_tpools(lock_guard<shared_mutex>&);
+    auto _M_thread_specific_pools() noexcept;
+
+    __pool_resource _M_impl;
+    __gthread_key_t _M_key;
+    // Linked list of thread-specific pools. All threads share _M_tpools[0].
+    _TPools* _M_tpools = nullptr;
+    mutable shared_mutex _M_mx;
+  };
+#endif

   /// A non-thread-safe memory resource that manages pools of fixed-size blocks.
   class unsynchronized_pool_resource : public memory_resource
diff --git a/libstdc++-v3/include/std/shared_mutex b/libstdc++-v3/include/std/shared_mutex
index dce97f48a3f..8aa6e9f0d4f 100644
--- a/libstdc++-v3/include/std/shared_mutex
+++ b/libstdc++-v3/include/std/shared_mutex
@@ -57,6 +57,90 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   class shared_timed_mutex;

 #if _GLIBCXX_USE_PTHREAD_RWLOCK_T
+#ifdef __gthrw
+#define _GLIBCXX_GTHRW(name) \
+  __gthrw(pthread_ ## name); \
+  static inline int \
+  __glibcxx_ ## name (pthread_rwlock_t *__rwlock) \
+  { \
+    if (__gthread_active_p ()) \
+      return __gthrw_(pthread_ ## name) (__rwlock); \
+    else \
+      return 0; \
+  }
+  _GLIBCXX_GTHRW(rwlock_rdlock)
+  _GLIBCXX_GTHRW(rwlock_tryrdlock)
+  _GLIBCXX_GTHRW(rwlock_wrlock)
+  _GLIBCXX_GTHRW(rwlock_trywrlock)
+  _GLIBCXX_GTHRW(rwlock_unlock)
+# ifndef PTHREAD_RWLOCK_INITIALIZER
+  _GLIBCXX_GTHRW(rwlock_destroy)
+  __gthrw(pthread_rwlock_init);
+  static inline int
+  __glibcxx_rwlock_init (pthread_rwlock_t *__rwlock)
+  {
+    if (__gthread_active_p ())
+      return __gthrw_(pthread_rwlock_init) (__rwlock, NULL);
+    else
+      return 0;
+  }
+# endif
+# if _GTHREAD_USE_MUTEX_TIMEDLOCK
+  __gthrw(pthread_rwlock_timedrdlock);
+  static inline int
+  __glibcxx_rwlock_timedrdlock (pthread_rwlock_t *__rwlock,
+				const timespec *__ts)
+  {
+    if (__gthread_active_p ())
+      return __gthrw_(pthread_rwlock_timedrdlock) (__rwlock, __ts);
+    else
+      return 0;
+  }
+  __gthrw(pthread_rwlock_timedwrlock);
+  static inline int
+  __glibcxx_rwlock_timedwrlock (pthread_rwlock_t *__rwlock,
+				const timespec *__ts)
+  {
+    if (__gthread_active_p ())
+      return __gthrw_(pthread_rwlock_timedwrlock) (__rwlock, __ts);
+    else
+      return 0;
+  }
+# endif
+#else
+  static inline int
+  __glibcxx_rwlock_rdlock (pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_rdlock (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_tryrdlock (pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_tryrdlock (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_wrlock
+  (pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_wrlock (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_trywrlock (pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_trywrlock (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_unlock (pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_unlock (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_destroy(pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_destroy (__rwlock); }
+  static inline int
+  __glibcxx_rwlock_init(pthread_rwlock_t *__rwlock)
+  { return pthread_rwlock_init (__rwlock, NULL); }
+# if _GTHREAD_USE_MUTEX_TIMEDLOCK
+  static inline int
+  __glibcxx_rwlock_timedrdlock (pthread_rwlock_t *__rwlock,
+				const timespec *__ts)
+  { return pthread_rwlock_timedrdlock (__rwlock, __ts); }
+  static inline int
+  __glibcxx_rwlock_timedwrlock (pthread_rwlock_t *__rwlock,
+				const timespec *__ts)
+  { return pthread_rwlock_timedwrlock (__rwlock, __ts); }
+# endif
+#endif
+
   /// A shared mutex type implemented using pthread_rwlock_t.
   class __shared_mutex_pthread
   {
@@ -74,7 +158,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
   public:
     __shared_mutex_pthread()
     {
-      int __ret = pthread_rwlock_init(&_M_rwlock, NULL);
+      int __ret = __glibcxx_rwlock_init(&_M_rwlock);
       if (__ret == ENOMEM)
	__throw_bad_alloc();
       else if (__ret == EAGAIN)
@@ -87,7 +171,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION

     ~__shared_mutex_pthread()
     {
-      int __ret __attribute((__unused__)) = pthread_rwlock_destroy(&_M_rwlock);
+      int __ret __attribute((__unused__)) = __glibcxx_rwlock_destroy(&_M_rwlock);
       // Errors not handled: EBUSY, EINVAL
       __glibcxx_assert(__ret == 0);
     }
@@ -99,7 +183,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     void
     lock()
     {
-      int __ret = pthread_rwlock_wrlock(&_M_rwlock);
+      int __ret = __glibcxx_rwlock_wrlock(&_M_rwlock);
       if (__ret == EDEADLK)
	__throw_system_error(int(errc::resource_deadlock_would_occur));
       // Errors not handled: EINVAL
@@ -109,7 +193,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     bool
     try_lock()
     {
-      int __ret = pthread_rwlock_trywrlock(&_M_rwlock);
+      int __ret = __glibcxx_rwlock_trywrlock(&_M_rwlock);
       if (__ret == EBUSY) return false;
       // Errors not handled: EINVAL
       __glibcxx_assert(__ret == 0);
@@ -119,7 +203,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     void
     unlock()
     {
-      int __ret __attribute((__unused__)) = pthread_rwlock_unlock(&_M_rwlock);
+      int __ret __attribute((__unused__)) = __glibcxx_rwlock_unlock(&_M_rwlock);
       // Errors not handled: EPERM, EBUSY, EINVAL
       __glibcxx_assert(__ret == 0);
     }
@@ -135,7 +219,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // is okay based on the current specification of forward progress
       // guarantees by the standard.
       do
-	__ret = pthread_rwlock_rdlock(&_M_rwlock);
+	__ret = __glibcxx_rwlock_rdlock(&_M_rwlock);
       while (__ret == EAGAIN);
       if (__ret == EDEADLK)
	__throw_system_error(int(errc::resource_deadlock_would_occur));
@@ -146,7 +230,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
     bool
     try_lock_shared()
     {
-      int __ret = pthread_rwlock_tryrdlock(&_M_rwlock);
+      int __ret = __glibcxx_rwlock_tryrdlock(&_M_rwlock);
       // If the maximum number of read locks has been exceeded, we just fail
       // to acquire the lock.  Unlike for lock(), we are not allowed to throw
       // an exception.
@@ -413,7 +497,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
	  static_cast<long>(__ns.count())
	};

-      int __ret = pthread_rwlock_timedwrlock(&_M_rwlock, &__ts);
+      int __ret = __glibcxx_rwlock_timedwrlock(&_M_rwlock, &__ts);
       // On self-deadlock, we just fail to acquire the lock.  Technically,
       // the program violated the precondition.
       if (__ret == ETIMEDOUT || __ret == EDEADLK)
@@ -466,7 +550,7 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
       // mistaken for a spurious failure, which might help users realise
       // there is a deadlock.
       do
-	__ret = pthread_rwlock_timedrdlock(&_M_rwlock, &__ts);
+	__ret = __glibcxx_rwlock_timedrdlock(&_M_rwlock, &__ts);
       while (__ret == EAGAIN || __ret == EDEADLK);
       if (__ret == ETIMEDOUT)
	return false;
diff --git a/libstdc++-v3/src/c++17/memory_resource.cc b/libstdc++-v3/src/c++17/memory_resource.cc
index 781bdada381..c34b817abe0 100644
--- a/libstdc++-v3/src/c++17/memory_resource.cc
+++ b/libstdc++-v3/src/c++17/memory_resource.cc
@@ -863,6 +863,11 @@ namespace pmr
     return 1;
   }

+#ifdef _GLIBCXX_HAS_GTHREADS
+  using shared_lock = std::shared_lock<shared_mutex>;
+  using exclusive_lock = lock_guard<shared_mutex>;
+#endif
+
 } // namespace

 __pool_resource::
@@ -948,6 +953,292 @@ namespace pmr
     return p;
   }

+#ifdef _GLIBCXX_HAS_GTHREADS
+  // synchronized_pool_resource members.
+
+  /* Notes on implementation and thread safety:
+   *
+   * Each synchronized_pool_resource manages a linked list of N+1 _TPools
+   * objects, where N is the number of threads using the pool resource.
+   * Each _TPools object has its own set of pools, with their own chunks.
+   * The first element of the list, _M_tpools[0], can be used by any thread.
+   * The rest of the list contains a _TPools object for each thread,
+   * accessed via the thread-specific key _M_key (and referred to for
+   * exposition as _M_tpools[_M_key]).
+   * The first element, _M_tpools[0], contains "orphaned chunks" which were
+   * allocated by a thread which has since exited, and so there is no
+   * _M_tpools[_M_key] for that thread.
+   * A thread can access its own thread-specific set of pools via _M_key
+   * while holding a shared lock on _M_mx. Accessing _M_impl._M_unpooled
+   * or _M_tpools[0] or any other thread's _M_tpools[_M_key] requires an
+   * exclusive lock.
+   * The upstream_resource() pointer can be obtained without a lock, but
+   * any dereference of that pointer requires an exclusive lock.
+   * The _M_impl._M_opts and _M_impl._M_npools members are immutable,
+   * and can safely be accessed concurrently.
+   */
+
+  extern "C" {
+    static void destroy_TPools(void*);
+  }
+
+  struct synchronized_pool_resource::_TPools
+  {
+    // Exclusive lock must be held in the thread where this constructor runs.
+    explicit
+    _TPools(synchronized_pool_resource& owner, exclusive_lock&)
+    : owner(owner), pools(owner._M_impl._M_alloc_pools())
+    {
+      // __builtin_printf("%p constructing\n", this);
+      __glibcxx_assert(pools);
+    }
+
+    // Exclusive lock must be held in the thread where this destructor runs.
+    ~_TPools()
+    {
+      __glibcxx_assert(pools);
+      if (pools)
+	{
+	  memory_resource* r = owner.upstream_resource();
+	  for (int i = 0; i < owner._M_impl._M_npools; ++i)
+	    pools[i].release(r);
+	  std::destroy_n(pools, owner._M_impl._M_npools);
+	  polymorphic_allocator<__pool_resource::_Pool> a(r);
+	  a.deallocate(pools, owner._M_impl._M_npools);
+	}
+      if (prev)
+	prev->next = next;
+      if (next)
+	next->prev = prev;
+    }
+
+    // Exclusive lock must be held in the thread where this function runs.
+    void move_nonempty_chunks()
+    {
+      __glibcxx_assert(pools);
+      if (!pools)
+	return;
+      memory_resource* r = owner.upstream_resource();
+      // move all non-empty chunks to the shared _TPools
+      for (int i = 0; i < owner._M_impl._M_npools; ++i)
+	for (auto& c : pools[i]._M_chunks)
+	  if (!c.empty())
+	    owner._M_tpools->pools[i]._M_chunks.insert(std::move(c), r);
+    }
+
+    synchronized_pool_resource& owner;
+    __pool_resource::_Pool* pools = nullptr;
+    _TPools* prev = nullptr;
+    _TPools* next = nullptr;
+
+    static void destroy(_TPools* p)
+    {
+      exclusive_lock l(p->owner._M_mx);
+      // __glibcxx_assert(p != p->owner._M_tpools);
+      p->move_nonempty_chunks();
+      polymorphic_allocator<_TPools> a(p->owner.upstream_resource());
+      p->~_TPools();
+      a.deallocate(p, 1);
+    }
+  };
+
+  // Called when a thread exits
+  extern "C" {
+    static void destroy_TPools(void* p)
+    {
+      using _TPools = synchronized_pool_resource::_TPools;
+      _TPools::destroy(static_cast<_TPools*>(p));
+    }
+  }
+
+  // Constructor
+  synchronized_pool_resource::
+  synchronized_pool_resource(const pool_options& opts,
+			     memory_resource* upstream)
+  : _M_impl(opts, upstream)
+  {
+    if (int err = __gthread_key_create(&_M_key, destroy_TPools))
+      __throw_system_error(err);
+    exclusive_lock l(_M_mx);
+    _M_tpools = _M_alloc_shared_tpools(l);
+  }
+
+  // Destructor
+  synchronized_pool_resource::~synchronized_pool_resource()
+  {
+    release();
+    __gthread_key_delete(_M_key); // does not run destroy_TPools
+  }
+
+  void
+  synchronized_pool_resource::release()
+  {
+    exclusive_lock l(_M_mx);
+    if (_M_tpools)
+      {
+	__gthread_key_delete(_M_key); // does not run destroy_TPools
+	__gthread_key_create(&_M_key, destroy_TPools);
+	polymorphic_allocator<_TPools> a(upstream_resource());
+	// destroy+deallocate each _TPools
+	do
+	  {
+	    _TPools* p = _M_tpools;
+	    _M_tpools = _M_tpools->next;
+	    p->~_TPools();
+	    a.deallocate(p, 1);
+	  }
+	while (_M_tpools);
+      }
+    // release unpooled memory
+    _M_impl.release();
+  }
+
+  // Caller must hold shared or exclusive lock to ensure the pointer
+  // isn't invalidated before it can be used.
+  auto
+  synchronized_pool_resource::_M_thread_specific_pools() noexcept
+  {
+    __pool_resource::_Pool* pools = nullptr;
+    if (auto tp = static_cast<_TPools*>(__gthread_getspecific(_M_key)))
+      {
+	pools = tp->pools;
+	__glibcxx_assert(tp->pools);
+      }
+    return pools;
+  }
+
+  // Override for memory_resource::do_allocate
+  void*
+  synchronized_pool_resource::
+  do_allocate(size_t bytes, size_t alignment)
+  {
+    const auto block_size = std::max(bytes, alignment);
+    if (block_size <= _M_impl._M_opts.largest_required_pool_block)
+      {
+	const ptrdiff_t index = pool_index(block_size, _M_impl._M_npools);
+	memory_resource* const r = upstream_resource();
+	const pool_options opts = _M_impl._M_opts;
+	{
+	  // Try to allocate from the thread-specific pool:
+	  shared_lock l(_M_mx);
+	  if (auto pools = _M_thread_specific_pools()) // [[likely]]
+	    {
+	      // Need exclusive lock to replenish so use try_allocate:
+	      if (void* p = pools[index].try_allocate())
+		return p;
+	      // Need to take exclusive lock and replenish pool.
+	    }
+	  // Need to allocate or replenish thread-specific pools using
+	  // upstream resource, so need to hold exclusive lock.
+	}
+	// N.B. Another thread could call release() now the lock is not held.
+	exclusive_lock excl(_M_mx);
+	if (!_M_tpools) // [[unlikely]]
+	  _M_tpools = _M_alloc_shared_tpools(excl);
+	auto pools = _M_thread_specific_pools();
+	if (!pools)
+	  pools = _M_alloc_tpools(excl)->pools;
+	return pools[index].allocate(r, opts);
+      }
+    exclusive_lock l(_M_mx);
+    return _M_impl.allocate(bytes, alignment); // unpooled allocation
+  }
+
+  // Override for memory_resource::do_deallocate
+  void
+  synchronized_pool_resource::
+  do_deallocate(void* p, size_t bytes, size_t alignment)
+  {
+    size_t block_size = std::max(bytes, alignment);
+    if (block_size <= _M_impl._M_opts.largest_required_pool_block)
+      {
+	const ptrdiff_t index = pool_index(block_size, _M_impl._M_npools);
+	__glibcxx_assert(index != -1);
+	{
+	  shared_lock l(_M_mx);
+	  auto pools = _M_thread_specific_pools();
+	  if (pools)
+	    {
+	      // No need to lock here, no other thread is accessing this pool.
+	      if (pools[index].deallocate(upstream_resource(), p))
+		return;
+	    }
+	  // Block might have come from a different thread's pool,
+	  // take exclusive lock and check every pool.
+	}
+	// TODO store {p, bytes, alignment} somewhere and defer returning
+	// the block to the correct thread-specific pool until we next
+	// take the exclusive lock.
+	exclusive_lock excl(_M_mx);
+	for (_TPools* t = _M_tpools; t != nullptr; t = t->next)
+	  {
+	    if (t->pools) // [[likely]]
+	      {
+		if (t->pools[index].deallocate(upstream_resource(), p))
+		  return;
+	      }
+	  }
+      }
+    exclusive_lock l(_M_mx);
+    _M_impl.deallocate(p, bytes, alignment);
+  }
+
+  // Allocate a thread-specific _TPools object and add it to the linked list.
+  auto
+  synchronized_pool_resource::_M_alloc_tpools(exclusive_lock& l)
+  -> _TPools*
+  {
+    __glibcxx_assert(_M_tpools != nullptr);
+    // dump_list(_M_tpools);
+    polymorphic_allocator<_TPools> a(upstream_resource());
+    _TPools* p = a.allocate(1);
+    bool constructed = false;
+    __try
+      {
+	a.construct(p, *this, l);
+	constructed = true;
+	// __glibcxx_assert(__gthread_getspecific(_M_key) == nullptr);
+	if (int err = __gthread_setspecific(_M_key, p))
+	  __throw_system_error(err);
+      }
+    __catch(...)
+      {
+	if (constructed)
+	  a.destroy(p);
+	a.deallocate(p, 1);
+	__throw_exception_again;
+      }
+    p->prev = _M_tpools;
+    p->next = _M_tpools->next;
+    _M_tpools->next = p;
+    if (p->next)
+      p->next->prev = p;
+    return p;
+  }
+
+  // Allocate the shared _TPools object, _M_tpools[0]
+  auto
+  synchronized_pool_resource::_M_alloc_shared_tpools(exclusive_lock& l)
+  -> _TPools*
+  {
+    __glibcxx_assert(_M_tpools == nullptr);
+    polymorphic_allocator<_TPools> a(upstream_resource());
+    _TPools* p = a.allocate(1);
+    __try
+      {
+	a.construct(p, *this, l);
+      }
+    __catch(...)
+      {
+	a.deallocate(p, 1);
+	__throw_exception_again;
+      }
+    // __glibcxx_assert(p->next == nullptr);
+    // __glibcxx_assert(p->prev == nullptr);
+    return p;
+  }
+#endif // _GLIBCXX_HAS_GTHREADS
+
   // unsynchronized_pool_resource member functions

   // Constructor
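[Editorial illustration, not part of the patch.] From the user's side, the class above is just another pmr resource: several threads can allocate from one synchronized_pool_resource concurrently, each hitting its own thread-specific pools. A minimal usage sketch, assuming a libstdc++ new enough to ship this resource (the function names here are illustrative, not from the patch):

```cpp
#include <cassert>
#include <cstddef>
#include <memory_resource>
#include <thread>
#include <vector>

// Fill a pmr::vector from the given resource and sum it. Each thread that
// calls this through the same synchronized_pool_resource allocates from its
// own thread-specific pools, so contention on the shared mutex is rare.
std::size_t fill_and_sum(std::pmr::memory_resource* r)
{
  std::pmr::vector<int> v(r);
  for (int i = 0; i < 1000; ++i)
    v.push_back(i);
  std::size_t sum = 0;
  for (int i : v)
    sum += i;
  return sum;
}

int run_demo()
{
  // Default-constructed: pool_options() and get_default_resource() upstream.
  std::pmr::synchronized_pool_resource pool;
  std::vector<std::thread> threads;
  for (int t = 0; t < 4; ++t)
    threads.emplace_back([&pool] { fill_and_sum(&pool); });
  for (auto& t : threads)
    t.join();
  // 0 + 1 + ... + 999 == 499500, independent of which pool served us.
  return fill_and_sum(&pool) == 499500 ? 0 : 1;
}
```

When a worker thread exits, its pools are torn down by the pthread-key destructor and any non-empty chunks migrate to the shared _M_tpools[0] list, so blocks freed later by other threads are still found.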
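[Editorial illustration, not part of the patch.] The locking discipline in do_allocate is the classic shared-then-exclusive dance: try the cheap read-side path under a shared lock, drop it, then retry under an exclusive lock, re-checking the state because another thread may have changed it in the gap. A standalone sketch using plain std::shared_mutex (the `cache`/`get_or_init` names are invented for this example):

```cpp
#include <cassert>
#include <mutex>
#include <shared_mutex>

struct cache
{
  std::shared_mutex mx;
  int value = 0;   // stands in for the thread-specific pool state

  int get_or_init(int init)
  {
    {
      // Fast path: many threads may hold the shared lock at once.
      std::shared_lock<std::shared_mutex> l(mx);
      if (value != 0)
        return value;
    }
    // N.B. another thread could have initialised value between releasing
    // the shared lock and acquiring the exclusive one, so re-check it
    // after taking the exclusive lock, exactly as do_allocate re-checks
    // _M_tpools and the thread-specific pools.
    std::lock_guard<std::shared_mutex> l(mx);
    if (value == 0)
      value = init;
    return value;
  }
};
```

The same pattern cannot simply upgrade the shared lock in place, which is why the patch releases it and re-validates everything under the exclusive lock.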
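[Editorial illustration, not part of the patch.] The __gthrw machinery boils down to weak references: declare the pthread function weak, and at run time call it only if the dynamic linker actually resolved it, falling back to a trivial single-threaded result otherwise. That is what lets libstdc++.so avoid a hard DT_NEEDED dependency on libpthread.so. A toy sketch of the idea, assuming GCC or Clang on an ELF target (`hypothetical_hook` is deliberately undefined and purely illustrative):

```cpp
#include <cassert>

// A weak reference to a symbol no library defines. Unlike a normal extern
// declaration, a weak undefined symbol is not a link error: its address is
// simply null at run time.
extern "C" int hypothetical_hook(int) __attribute__((weak));

int call_hook_or_default(int x)
{
  if (hypothetical_hook != nullptr)   // library providing the hook is loaded
    return hypothetical_hook(x);
  return 0;                           // single-threaded / absent-library fallback
}
```

The real wrappers add one refinement: they consult __gthread_active_p() rather than testing each symbol's address directly, so the whole family of rwlock calls switches on or off together.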