From patchwork Thu Dec  1 00:44:06 2011
X-Patchwork-Submitter: Richard Henderson <rth@redhat.com>
X-Patchwork-Id: 128614
From: Richard Henderson <rth@redhat.com>
To: gcc-patches@gcc.gnu.org
Cc: richard.earnshaw@arm.com, ramana.radhakrishnan@arm.com,
	joseph@codesourcery.com
Subject: [PATCH 2/5] arm: Emit swp for pre-armv6.
Date: Wed, 30 Nov 2011 16:44:06 -0800
Message-Id: <1322700249-4693-3-git-send-email-rth@redhat.com>
In-Reply-To: <1322700249-4693-1-git-send-email-rth@redhat.com>
References: <1322700249-4693-1-git-send-email-rth@redhat.com>

---
 gcc/config/arm/arm.h   |    6 ++++
 gcc/config/arm/sync.md |   63 +++++++++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 68 insertions(+), 1 deletions(-)

diff --git a/gcc/config/arm/arm.h b/gcc/config/arm/arm.h
index 31f4856..33e5b8e 100644
--- a/gcc/config/arm/arm.h
+++ b/gcc/config/arm/arm.h
@@ -276,6 +276,12 @@ extern void (*arm_lang_output_object_attributes_hook)(void);
 /* Nonzero if this chip implements a memory barrier instruction.  */
 #define TARGET_HAVE_MEMORY_BARRIER (TARGET_HAVE_DMB || TARGET_HAVE_DMB_MCR)
 
+/* Nonzero if this chip supports swp and swpb.  These are technically present
+   post-armv6, but deprecated.  Never use them if we have OS support, as swp
+   is not well-defined on SMP systems.  */
+#define TARGET_HAVE_SWP \
+  (TARGET_ARM && arm_arch4 && !arm_arch6 && arm_abi != ARM_ABI_AAPCS_LINUX)
+
 /* Nonzero if this chip supports ldrex and strex */
 #define TARGET_HAVE_LDREX	((arm_arch6 && TARGET_ARM) || arm_arch7)
 
diff --git a/gcc/config/arm/sync.md b/gcc/config/arm/sync.md
index 124ebf0..72e7181 100644
--- a/gcc/config/arm/sync.md
+++ b/gcc/config/arm/sync.md
@@ -26,6 +26,10 @@
    (DI "TARGET_HAVE_LDREXD && ARM_DOUBLEWORD_ALIGN
 	&& TARGET_HAVE_MEMORY_BARRIER")])
 
+(define_mode_attr swp_predtab
+  [(QI "TARGET_HAVE_SWP") (HI "false")
+   (SI "TARGET_HAVE_SWP") (DI "false")])
+
 (define_code_iterator syncop [plus minus ior xor and])
 
 (define_code_attr sync_optab
@@ -132,7 +136,41 @@
   DONE;
 })
 
-(define_insn_and_split "atomic_exchange<mode>"
+(define_expand "atomic_exchange<mode>"
+  [(match_operand:QHSD 0 "s_register_operand" "")
+   (match_operand:QHSD 1 "mem_noofs_operand" "")
+   (match_operand:QHSD 2 "s_register_operand" "r")
+   (match_operand:SI 3 "const_int_operand" "")]
+  "<sync_predtab> || <swp_predtab>"
+{
+  if (<sync_predtab>)
+    emit_insn (gen_atomic_exchange<mode>_rex (operands[0], operands[1],
+					      operands[2], operands[3]));
+  else
+    {
+      /* Memory barriers are introduced in armv6, which also gains the
+	 ldrex insns.  Therefore we can ignore the memory model argument
+	 when issuing a SWP instruction.  */
+      gcc_checking_assert (!TARGET_HAVE_MEMORY_BARRIER);
+
+      if (<MODE>mode == QImode)
+	{
+	  rtx x = gen_reg_rtx (SImode);
+	  emit_insn (gen_atomic_exchangeqi_swp (x, operands[1], operands[2]));
+	  emit_move_insn (operands[0], gen_lowpart (QImode, x));
+	}
+      else if (<MODE>mode == SImode)
+	{
+	  emit_insn (gen_atomic_exchangesi_swp
+		     (operands[0], operands[1], operands[2]));
+	}
+      else
+	gcc_unreachable ();
+    }
+  DONE;
+})
+
+(define_insn_and_split "atomic_exchange<mode>_rex"
   [(set (match_operand:QHSD 0 "s_register_operand" "=&r")	;; output
 	(match_operand:QHSD 1 "mem_noofs_operand" "+Ua"))	;; memory
    (set (match_dup 1)
@@ -152,6 +190,29 @@
   DONE;
 })
 
+(define_insn "atomic_exchangeqi_swp"
+  [(set (match_operand:SI 0 "s_register_operand" "=&r")	;; output
+	(zero_extend:SI
+	  (match_operand:QI 1 "mem_noofs_operand" "+Ua")))	;; memory
+   (set (match_dup 1)
+	(unspec_volatile:QI
+	  [(match_operand:QI 2 "s_register_operand" "r")]	;; input
+	  VUNSPEC_ATOMIC_XCHG))]
+  "TARGET_HAVE_SWP"
+  "swpb%?\t%0, %2, %C1"
+  [(set_attr "predicable" "yes")])
+
+(define_insn "atomic_exchangesi_swp"
+  [(set (match_operand:SI 0 "s_register_operand" "=&r")	;; output
+	(match_operand:SI 1 "mem_noofs_operand" "+Ua"))	;; memory
+   (set (match_dup 1)
+	(unspec_volatile:SI
+	  [(match_operand:SI 2 "s_register_operand" "r")]	;; input
+	  VUNSPEC_ATOMIC_XCHG))]
+  "TARGET_HAVE_SWP"
+  "swp%?\t%0, %2, %C1"
+  [(set_attr "predicable" "yes")])
+
 (define_mode_attr atomic_op_operand
   [(QI "reg_or_int_operand")
    (HI "reg_or_int_operand")
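
As a usage sketch (my own illustration, not part of the patch; the toolchain
and flags are assumptions -- any pre-armv6, non-AAPCS-Linux configuration,
e.g. a bare-metal arm-eabi compiler with -march=armv5te, should satisfy
TARGET_HAVE_SWP), C code like the following ought to exercise the new
expander:

/* swp-xchg.c -- hypothetical example; compile with something like
   arm-eabi-gcc -O2 -march=armv5te -S swp-xchg.c  */
#include <stdint.h>

uint32_t
xchg_word (uint32_t *p, uint32_t v)
{
  /* SImode: should go through atomic_exchangesi_swp and assemble
     to a single "swp" instruction.  */
  return __atomic_exchange_n (p, v, __ATOMIC_SEQ_CST);
}

uint8_t
xchg_byte (uint8_t *p, uint8_t v)
{
  /* QImode: should go through atomic_exchangeqi_swp ("swpb"); the
     expander zero-extends the result into an SImode temporary and
     takes its low part, matching the QImode arm of the expander.  */
  return __atomic_exchange_n (p, v, __ATOMIC_SEQ_CST);
}

HImode and DImode are deliberately left out (swp_predtab maps them to
"false"), so those modes continue down the generic expansion path.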