From patchwork Fri Jun 26 12:07:09 2015
X-Patchwork-Submitter: Matthew Wahab
X-Patchwork-Id: 488794
Message-ID: <558D406D.3070407@arm.com>
Date: Fri, 26 Jun 2015 13:07:09 +0100
From: Matthew Wahab
To: gcc-patches@gcc.gnu.org
Subject: [PATCH 2/4][PR target/65697][5.1][Aarch64] Backport stronger barriers
 for __sync, fetch-op builtins.
References: <558D3FFB.8080207@arm.com>
In-Reply-To: <558D3FFB.8080207@arm.com>

This patch backports the changes made to strengthen the barriers emitted for
the __sync fetch-and-op/op-and-fetch builtins.

The trunk patch submission is at
https://gcc.gnu.org/ml/gcc-patches/2015-05/msg01989.html
The commit is at
https://gcc.gnu.org/ml/gcc-cvs/2015-06/msg00076.html

Tested the series for aarch64-none-linux-gnu with check-gcc.

Ok for the branch?
Matthew

2015-06-26  Matthew Wahab

	Backport from trunk.
	2015-06-01  Matthew Wahab

	PR target/65697
	* config/aarch64/aarch64.c (aarch64_emit_post_barrier): New.
	(aarch64_split_atomic_op): Check for __sync memory models, emit
	appropriate initial loads and final barriers.

From d6d3351b4547d0ad52e4d7e9955fafdced11491a Mon Sep 17 00:00:00 2001
From: mwahab
Date: Mon, 1 Jun 2015 15:18:19 +0000
Subject: [PATCH 2/4] [Aarch64][5.1] Strengthen barriers for sync-fetch-op
 builtin.
	PR target/65697
	* config/aarch64/aarch64.c (aarch64_emit_post_barrier): New.
	(aarch64_split_atomic_op): Check for __sync memory models, emit
	appropriate initial loads and final barriers.

git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@223983 138bc75d-0d04-0410-961f-82ee72b054a4

Conflicts:
	gcc/ChangeLog
	gcc/config/aarch64/aarch64.c

Change-Id: I45600c4dd0002b4c2d48de36d695c83581fe50da
---
 gcc/config/aarch64/aarch64.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/gcc/config/aarch64/aarch64.c b/gcc/config/aarch64/aarch64.c
index b8b37b8..708fc23 100644
--- a/gcc/config/aarch64/aarch64.c
+++ b/gcc/config/aarch64/aarch64.c
@@ -9066,6 +9066,23 @@ aarch64_expand_compare_and_swap (rtx operands[])
   emit_insn (gen_rtx_SET (VOIDmode, bval, x));
 }
 
+/* Emit a barrier, that is appropriate for memory model MODEL, at the end of a
+   sequence implementing an atomic operation.  */
+
+static void
+aarch64_emit_post_barrier (enum memmodel model)
+{
+  const enum memmodel base_model = memmodel_base (model);
+
+  if (is_mm_sync (model)
+      && (base_model == MEMMODEL_ACQUIRE
+	  || base_model == MEMMODEL_ACQ_REL
+	  || base_model == MEMMODEL_SEQ_CST))
+    {
+      emit_insn (gen_mem_thread_fence (GEN_INT (MEMMODEL_SEQ_CST)));
+    }
+}
+
 /* Split a compare and swap pattern.  */
 
 void
@@ -9128,6 +9145,8 @@ aarch64_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
 {
   machine_mode mode = GET_MODE (mem);
   machine_mode wmode = (mode == DImode ? DImode : SImode);
+  const enum memmodel model = memmodel_from_int (INTVAL (model_rtx));
+  const bool is_sync = is_mm_sync (model);
   rtx_code_label *label;
   rtx x;
 
@@ -9142,7 +9161,13 @@ aarch64_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
     old_out = new_out;
   value = simplify_gen_subreg (wmode, value, mode, 0);
 
-  aarch64_emit_load_exclusive (mode, old_out, mem, model_rtx);
+  /* The initial load can be relaxed for a __sync operation since a final
+     barrier will be emitted to stop code hoisting.  */
+  if (is_sync)
+    aarch64_emit_load_exclusive (mode, old_out, mem,
+				 GEN_INT (MEMMODEL_RELAXED));
+  else
+    aarch64_emit_load_exclusive (mode, old_out, mem, model_rtx);
 
   switch (code)
     {
@@ -9178,6 +9203,10 @@ aarch64_split_atomic_op (enum rtx_code code, rtx old_out, rtx new_out, rtx mem,
   x = gen_rtx_IF_THEN_ELSE (VOIDmode, x,
			    gen_rtx_LABEL_REF (Pmode, label), pc_rtx);
   aarch64_emit_unlikely_jump (gen_rtx_SET (VOIDmode, pc_rtx, x));
+
+  /* Emit any final barrier needed for a __sync operation.  */
+  if (is_sync)
+    aarch64_emit_post_barrier (model);
 }
 
 static void
-- 
1.9.1