From patchwork Tue Sep 1 15:42:04 2015
X-Patchwork-Submitter: Dmitry Vyukov
X-Patchwork-Id: 512892
Mailing-List: gcc-patches@gcc.gnu.org
From: Dmitry Vyukov
Date: Tue, 1 Sep 2015 17:42:04 +0200
Subject: Re: [Patch, libstdc++] Fix data races in basic_string implementation
To: Jonathan Wakely
Cc: GCC Patches, libstdc++@gcc.gnu.org, Alexander Potapenko, Kostya Serebryany, Torvald Riegel

On Tue, Sep 1, 2015 at 5:08 PM, Jonathan Wakely wrote:
> On 01/09/15 16:56 +0200, Dmitry Vyukov wrote:
>>
>> I don't understand how a new gcc may not support __atomic builtins on
>> ints. How is it even possible? That's a portable API provided by
>> recent gcc's...
>
> The built-in function is always defined, but it might expand to a call
> to an external function in libatomic, and it would be a regression for
> code using std::string to start requiring libatomic (although maybe it
> would be necessary if it's the only way to make the code correct).
>
> I don't know if there are any targets that define __GTHREADS and also
> don't support __atomic_load(int*, ...) without libatomic. If such
> targets exist then adding a new configure check that only depends on
> __atomic_load(int*, ...) would mean we keep supporting those targets.
>
> Another option would be to simply do:
>
>       bool
>       _M_is_shared() const _GLIBCXX_NOEXCEPT
> #if defined(__GTHREADS)
> +      { return __atomic_load(&this->_M_refcount, __ATOMIC_ACQUIRE) > 0; }
> +#else
>       { return this->_M_refcount > 0; }
> +#endif
>
> and see if anyone complains!

I like this option! If a platform uses multithreading and has
non-inlined atomic loads, then the way to fix this is to provide
inlined atomic loads rather than to fix all call sites.

Attaching the new patch. Please take another look.

Index: include/bits/basic_string.h
===================================================================
--- include/bits/basic_string.h	(revision 227363)
+++ include/bits/basic_string.h	(working copy)
@@ -2601,11 +2601,32 @@
       bool
       _M_is_leaked() const _GLIBCXX_NOEXCEPT
-      { return this->_M_refcount < 0; }
+      {
+#if defined(__GTHREADS)
+	// _M_refcount is mutated concurrently by _M_refcopy/_M_dispose,
+	// so we need to use an atomic load. However, the _M_is_leaked
+	// predicate does not change concurrently (i.e. the string is
+	// either leaked or not), so a relaxed load is enough.
+	return __atomic_load_n(&this->_M_refcount, __ATOMIC_RELAXED) < 0;
+#else
+	return this->_M_refcount < 0;
+#endif
+      }

       bool
       _M_is_shared() const _GLIBCXX_NOEXCEPT
-      { return this->_M_refcount > 0; }
+      {
+#if defined(__GTHREADS)
+	// _M_refcount is mutated concurrently by _M_refcopy/_M_dispose,
+	// so we need to use an atomic load. Another thread can drop the
+	// last-but-one reference concurrently with this check, so this
+	// load needs to be acquire to synchronize with the release
+	// fetch_and_add in _M_dispose.
+	return __atomic_load_n(&this->_M_refcount, __ATOMIC_ACQUIRE) > 0;
+#else
+	return this->_M_refcount > 0;
+#endif
+      }

       void
       _M_set_leaked() _GLIBCXX_NOEXCEPT
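For readers following along, the acquire/release pairing the patch relies on can be sketched with std::atomic instead of the raw __atomic builtins. This is a hypothetical stand-in (the `RefCount` type and method names are mine, not libstdc++'s): `_M_refcount == 0` means a single owner, dropping a reference is a release fetch-and-sub, and the sharedness check is an acquire load, so a thread that observes the count dropping also observes the prior owner's writes.

```cpp
#include <atomic>

// Hypothetical model of the basic_string rep's reference counting;
// the real code operates on _M_refcount with __atomic builtins.
struct RefCount {
    std::atomic<int> count{0};   // 0 means exactly one owner

    // Like _M_refcopy: taking another reference can be relaxed, since
    // the new owner already holds a valid reference through the copy.
    void acquire_ref() { count.fetch_add(1, std::memory_order_relaxed); }

    // Like _M_dispose: dropping a reference is a release operation, so
    // the dropping thread's writes happen-before anyone who later
    // synchronizes with this decrement. Returns true for the last owner
    // (old value <= 0), who may then deallocate.
    bool release_ref() {
        return count.fetch_sub(1, std::memory_order_release) <= 0;
    }

    // Like the patched _M_is_shared: the acquire load pairs with the
    // release fetch_sub above, so seeing count <= 0 here also makes the
    // former sharer's writes visible.
    bool is_shared() const {
        return count.load(std::memory_order_acquire) > 0;
    }

    // Like the patched _M_is_leaked: leaked-ness (count < 0) never
    // changes concurrently, so a relaxed load suffices.
    bool is_leaked() const {
        return count.load(std::memory_order_relaxed) < 0;
    }
};
```

The single-threaded behavior follows the convention above: a fresh object is unshared, one extra reference makes it shared, and only the final `release_ref()` reports that the caller was the last owner.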