From patchwork Fri Aug  6 08:57:58 2010
X-Patchwork-Submitter: Ramana Radhakrishnan
X-Patchwork-Id: 61075
Subject: [Patch ARM] Cortex A9 VFP Pipeline description.
From: Ramana Radhakrishnan
Reply-To: ramana.radhakrishnan@arm.com
To: gcc-patches@gcc.gnu.org
Cc: Richard Earnshaw
Date: Fri, 06 Aug 2010 09:57:58 +0100
Message-Id: <1281085078.20364.17.camel@e102325-lin.cambridge.arm.com>

Hi,

This patch implements the floating-point pipeline description for the
Cortex-A9 and improves performance overall by about 3% on SPECfp2000 at
-O3. There was a 1% regression on facerec, but every other benchmark
improved significantly, so I believe this is a reasonable improvement
for the A9. The compilers used for benchmarking were configured with
--with-cpu=cortex-a9 --with-fpu=vfpv3-d16 --with-float=softfp.

I have bootstrapped this patch on trunk on an A9 board; regression
tests are still running.

OK to commit to trunk?

cheers
Ramana

2010-08-06  Ramana Radhakrishnan  <ramana.radhakrishnan@arm.com>

	* config/arm/cortex-a9.md: Rewrite VFP pipeline description.
	* config/arm/arm.c (arm_xscale_tune): Initialize sched_adjust_cost.
	(arm_fastmul_tune, arm_slowmul_tune, arm_9e_tune): Likewise.
	(arm_adjust_cost): Split into xscale_sched_adjust_cost and a
	generic part.
	(cortex_a9_sched_adjust_cost): New function.
	(xscale_sched_adjust_cost): New function.
	* config/arm/arm-protos.h (struct tune_params): New field
	sched_adjust_cost.
	* config/arm/arm-cores.def: Adjust costs for cortex-a9.
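To make the new contract concrete before the patch itself: each core may
now supply a sched_adjust_cost callback in its tune_params.
arm_adjust_cost invokes that hook first; if the hook returns false, the
value it wrote through its cost pointer is final, and if it returns true
the generic adjustments still run. Below is a minimal, editorial sketch
of a conforming hook (the core and its anti-dependency rule are
hypothetical; the real implementations are cortex_a9_sched_adjust_cost
and xscale_sched_adjust_cost in the patch that follows):

/* Hypothetical per-core hook, shown only to illustrate the contract.
   Write the adjusted cost through COST and return false to make it
   final; return true to fall through to the generic code in
   arm_adjust_cost.  */
static bool
example_core_sched_adjust_cost (rtx insn ATTRIBUTE_UNUSED, rtx link,
				rtx dep ATTRIBUTE_UNUSED, int *cost)
{
  /* Pretend anti-dependencies are free on this imaginary core.  */
  if (REG_NOTE_KIND (link) == REG_DEP_ANTI)
    {
      *cost = 0;
      return false;
    }

  /* Leave *cost alone and let the generic adjustments run.  */
  return true;
}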
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c	(revision 162914)
+++ gcc/config/arm/arm.c	(working copy)
@@ -228,6 +228,8 @@ static void arm_asm_trampoline_template
 static void arm_trampoline_init (rtx, tree, rtx);
 static rtx arm_trampoline_adjust_address (rtx);
 static rtx arm_pic_static_addr (rtx orig, rtx reg);
+static bool cortex_a9_sched_adjust_cost (rtx, rtx, rtx, int *);
+static bool xscale_sched_adjust_cost (rtx, rtx, rtx, int *);
 
 /* Table of machine attributes.  */
@@ -766,27 +768,39 @@ struct processors
 const struct tune_params arm_slowmul_tune =
 {
   arm_slowmul_rtx_costs,
+  NULL,
   3
 };
 
 const struct tune_params arm_fastmul_tune =
 {
   arm_fastmul_rtx_costs,
+  NULL,
   1
 };
 
 const struct tune_params arm_xscale_tune =
 {
   arm_xscale_rtx_costs,
+  xscale_sched_adjust_cost,
   2
 };
 
 const struct tune_params arm_9e_tune =
 {
   arm_9e_rtx_costs,
+  NULL,
   1
 };
 
+const struct tune_params arm_cortex_a9_tune =
+{
+  arm_9e_rtx_costs,
+  cortex_a9_sched_adjust_cost,
+  1
+};
+
+
 /* Not all of these give usefully different compilation alternatives,
    but there is no simple way of generalizing them.  */
 static const struct processors all_cores[] =
@@ -7691,30 +7705,14 @@ arm_address_cost (rtx x, bool speed ATTR
 {
   return TARGET_32BIT ? arm_arm_address_cost (x) : arm_thumb_address_cost (x);
 }
 
-/* This function implements the target macro TARGET_SCHED_ADJUST_COST.
-   It corrects the value of COST based on the relationship between
-   INSN and DEP through the dependence LINK.  It returns the new
-   value.  */
-
-static int
-arm_adjust_cost (rtx insn, rtx link, rtx dep, int cost)
-{
-  rtx i_pat, d_pat;
-
-  /* When generating Thumb-1 code, we want to place flag-setting operations
-     close to a conditional branch which depends on them, so that we can
-     omit the comparison. */
-  if (TARGET_THUMB1
-      && REG_NOTE_KIND (link) == 0
-      && recog_memoized (insn) == CODE_FOR_cbranchsi4_insn
-      && recog_memoized (dep) >= 0
-      && get_attr_conds (dep) == CONDS_SET)
-    return 0;
+/* Adjust cost hook for XScale.  */
+static bool
+xscale_sched_adjust_cost (rtx insn, rtx link, rtx dep, int * cost)
+{
   /* Some true dependencies can have a higher cost depending
      on precisely how certain input operands are used.  */
-  if (arm_tune_xscale
-      && REG_NOTE_KIND (link) == 0
+  if (REG_NOTE_KIND(link) == 0
      && recog_memoized (insn) >= 0
      && recog_memoized (dep) >= 0)
    {
@@ -7748,10 +7746,116 @@ arm_adjust_cost (rtx insn, rtx link, rtx
 	  if (reg_overlap_mentioned_p (recog_data.operand[opno],
 				       shifted_operand))
-	    return 2;
+	    {
+	      *cost = 2;
+	      return false;
+	    }
 	}
     }
   }
+  return true;
+}
+
+/* Adjust cost hook for Cortex A9.  */
+static bool
+cortex_a9_sched_adjust_cost (rtx insn, rtx link, rtx dep, int * cost)
+{
+  switch (REG_NOTE_KIND (link))
+    {
+    case REG_DEP_ANTI:
+      *cost = 0;
+      return false;
+
+    case REG_DEP_TRUE:
+    case REG_DEP_OUTPUT:
+      if (recog_memoized (insn) >= 0
+	  && recog_memoized (dep) >= 0)
+	{
+	  if (GET_CODE (PATTERN (insn)) == SET)
+	    {
+	      if (GET_MODE_CLASS
+		  (GET_MODE (SET_DEST (PATTERN (insn)))) == MODE_FLOAT
+		  || GET_MODE_CLASS
+		  (GET_MODE (SET_SRC (PATTERN (insn)))) == MODE_FLOAT)
+		{
+		  enum attr_type attr_type_insn = get_attr_type (insn);
+		  enum attr_type attr_type_dep = get_attr_type (dep);
+
+		  /* By default all dependencies of the form
+		     s0 = s0 <op> s1
+		     s0 = s0 <op> s2
+		     have an extra latency of 1 cycle because
+		     of the input and output dependency in this
+		     case.  However this gets modeled as a true
+		     dependency and hence all these checks.  */
+		  if (REG_P (SET_DEST (PATTERN (insn)))
+		      && REG_P (SET_DEST (PATTERN (dep)))
+		      && reg_overlap_mentioned_p (SET_DEST (PATTERN (insn)),
+						  SET_DEST (PATTERN (dep))))
+		    {
+		      /* FMACS is a special case where the dependent
+			 instruction can be issued 3 cycles before
+			 the normal latency in case of an output
+			 dependency.  */
+		      if ((attr_type_insn == TYPE_FMACS
+			   || attr_type_insn == TYPE_FMACD)
+			  && (attr_type_dep == TYPE_FMACS
+			      || attr_type_dep == TYPE_FMACD))
+			{
+			  if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
+			    *cost = insn_default_latency (dep) - 3;
+			  else
+			    *cost = insn_default_latency (dep);
+			  return false;
+			}
+		      else
+			{
+			  if (REG_NOTE_KIND (link) == REG_DEP_OUTPUT)
+			    *cost = insn_default_latency (dep) + 1;
+			  else
+			    *cost = insn_default_latency (dep);
+			}
+		      return false;
+		    }
+		}
+	    }
+	}
+      break;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  return true;
+}
+
+/* This function implements the target macro TARGET_SCHED_ADJUST_COST.
+   It corrects the value of COST based on the relationship between
+   INSN and DEP through the dependence LINK.  It returns the new
+   value.  There is a per-core adjust_cost hook that may completely
+   override the generic adjustments; only put bits of code into
+   arm_adjust_cost that are common across all cores.  */
+static int
+arm_adjust_cost (rtx insn, rtx link, rtx dep, int cost)
+{
+  rtx i_pat, d_pat;
+
+  /* When generating Thumb-1 code, we want to place flag-setting operations
+     close to a conditional branch which depends on them, so that we can
+     omit the comparison.  */
+  if (TARGET_THUMB1
+      && REG_NOTE_KIND (link) == 0
+      && recog_memoized (insn) == CODE_FOR_cbranchsi4_insn
+      && recog_memoized (dep) >= 0
+      && get_attr_conds (dep) == CONDS_SET)
+    return 0;
+
+  if (current_tune->sched_adjust_cost != NULL)
+    {
+      if (!current_tune->sched_adjust_cost (insn, link, dep, &cost))
+	return cost;
+    }
 
   /* XXX This is not strictly true for the FPA.  */
   if (REG_NOTE_KIND (link) == REG_DEP_ANTI
@@ -7774,7 +7878,8 @@ arm_adjust_cost (rtx insn, rtx link, rtx
 	 constant pool are cached, and that others will miss.  This is a
 	 hack.  */
 
-      if ((GET_CODE (src_mem) == SYMBOL_REF && CONSTANT_POOL_ADDRESS_P (src_mem))
+      if ((GET_CODE (src_mem) == SYMBOL_REF
+	   && CONSTANT_POOL_ADDRESS_P (src_mem))
 	  || reg_mentioned_p (stack_pointer_rtx, src_mem)
 	  || reg_mentioned_p (frame_pointer_rtx, src_mem)
 	  || reg_mentioned_p (hard_frame_pointer_rtx, src_mem))
Index: gcc/config/arm/arm-cores.def
===================================================================
--- gcc/config/arm/arm-cores.def	(revision 162914)
+++ gcc/config/arm/arm-cores.def	(working copy)
@@ -120,7 +120,7 @@ ARM_CORE("arm1156t2-s",	  arm1156t2s,	6T
 ARM_CORE("arm1156t2f-s",  arm1156t2fs,  6T2, FL_LDSCHED | FL_VFPV2, 9e)
 ARM_CORE("cortex-a5",	  cortexa5,	7A,  FL_LDSCHED, 9e)
 ARM_CORE("cortex-a8",	  cortexa8,	7A,  FL_LDSCHED, 9e)
-ARM_CORE("cortex-a9",	  cortexa9,	7A,  FL_LDSCHED, 9e)
+ARM_CORE("cortex-a9",	  cortexa9,	7A,  FL_LDSCHED, cortex_a9)
 ARM_CORE("cortex-r4",	  cortexr4,	7R,  FL_LDSCHED, 9e)
 ARM_CORE("cortex-r4f",	  cortexr4f,	7R,  FL_LDSCHED, 9e)
 ARM_CORE("cortex-m4",	  cortexm4,	7EM, FL_LDSCHED, 9e)
Index: gcc/config/arm/arm-protos.h
===================================================================
--- gcc/config/arm/arm-protos.h	(revision 162914)
+++ gcc/config/arm/arm-protos.h	(working copy)
@@ -216,6 +216,7 @@ extern void arm_order_regs_for_local_all
 struct tune_params
 {
   bool (*rtx_costs) (rtx, RTX_CODE, RTX_CODE, int *, bool);
+  bool (*sched_adjust_cost) (rtx, rtx, rtx, int *);
   int constant_limit;
 };
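(Editorial note, not part of the patch: with the new field in place, a
core opts in by naming its hook in its tuning structure, or leaves the
field NULL to keep only the generic adjustments. A hypothetical instance,
mirroring arm_cortex_a9_tune in the arm.c hunk above:

/* Hypothetical tuning structure; arm_cortex_a9_tune above is the real
   instance of this pattern.  */
const struct tune_params arm_example_tune =
{
  arm_9e_rtx_costs,                /* rtx_costs */
  example_core_sched_adjust_cost,  /* sched_adjust_cost, or NULL */
  1                                /* constant_limit */
};

The matching entry in arm-cores.def then selects this tuning by its
suffix, as the cortex-a9 line above does with cortex_a9.)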
Index: gcc/config/arm/cortex-a9.md
===================================================================
--- gcc/config/arm/cortex-a9.md	(revision 162914)
+++ gcc/config/arm/cortex-a9.md	(working copy)
@@ -2,8 +2,10 @@
 ;; Copyright (C) 2008, 2009 Free Software Foundation, Inc.
 ;; Originally written by CodeSourcery for VFP.
 ;;
-;; Integer core pipeline description contributed by ARM Ltd.
-;;
+;; Rewritten by Ramana Radhakrishnan
+;; Integer pipeline description contributed by ARM Ltd.
+;; VFP pipeline description rewritten and contributed by ARM Ltd.
+
 ;; This file is part of GCC.
 ;;
 ;; GCC is free software; you can redistribute it and/or modify it
@@ -22,28 +24,27 @@
 
 (define_automaton "cortex_a9")
 
-;; The Cortex-A9 integer core is modelled as a dual issue pipeline that has
+;; The Cortex-A9 core is modelled as a dual issue pipeline that has
 ;; the following components.
 ;; 1. 1 Load Store Pipeline.
 ;; 2. P0 / main pipeline for data processing instructions.
 ;; 3. P1 / Dual pipeline for Data processing instructions.
 ;; 4. MAC pipeline for multiply as well as multiply
 ;;    and accumulate instructions.
-;; 5. 1 VFP / Neon pipeline.
-;; The Load/Store and VFP/Neon pipeline are multiplexed.
+;; 5. 1 VFP and an optional Neon unit.
+;; The Load/Store, VFP and Neon issue pipelines are multiplexed.
 ;; The P0 / main pipeline and M1 stage of the MAC pipeline are
 ;; multiplexed.
 ;; The P1 / dual pipeline and M2 stage of the MAC pipeline are
 ;; multiplexed.
-;; There are only 4 register read ports and hence at any point of
+;; There are only 4 integer register read ports and hence at any point of
 ;; time we can't have issue down the E1 and the E2 ports unless
 ;; of course there are bypass paths that get exercised.
 ;; Both P0 and P1 have 2 stages E1 and E2.
 ;; Data processing instructions issue to E1 or E2 depending on
 ;; whether they have an early shift or not.
-
-(define_cpu_unit "cortex_a9_vfp, cortex_a9_ls" "cortex_a9")
+(define_cpu_unit "ca9_issue_vfp_neon, cortex_a9_ls" "cortex_a9")
 (define_cpu_unit "cortex_a9_p0_e1, cortex_a9_p0_e2" "cortex_a9")
 (define_cpu_unit "cortex_a9_p1_e1, cortex_a9_p1_e2" "cortex_a9")
 (define_cpu_unit "cortex_a9_p0_wb, cortex_a9_p1_wb" "cortex_a9")
@@ -71,11 +72,7 @@ cortex_a9_p1_e2 + cortex_a9_p0_e1 + cort
 
 ;; Issue at the same time along the load store pipeline and
 ;; the VFP / Neon pipeline is not possible.
-;; FIXME:: At some point we need to model the issue
-;; of the load store and the vfp being shared rather than anything else.
-
-(exclusion_set "cortex_a9_ls" "cortex_a9_vfp")
-
+(exclusion_set "cortex_a9_ls" "ca9_issue_vfp_neon")
 
 ;; Default data processing instruction without any shift
 ;; The only exception to this is the mov instruction
@@ -101,18 +98,13 @@ cortex_a9_p1_e2 + cortex_a9_p0_e1 + cort
 
 (define_insn_reservation "cortex_a9_load1_2" 4
   (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "load1, load2, load_byte"))
+       (eq_attr "type" "load1, load2, load_byte, f_loads, f_loadd"))
   "cortex_a9_ls")
 
 ;; Loads multiples and store multiples can't be issued for 2 cycles in a
 ;; row.  The description below assumes that addresses are 64 bit aligned.
 ;; If not, there is an extra cycle latency which is not modelled.
 
-;; FIXME:: This bit might need to be reworked when we get to
-;; tuning for the VFP because strictly speaking the ldm
-;; is sent to the LSU unit as is and there is only an
-;; issue restriction between the LSU and the VFP/ Neon unit.
-
 (define_insn_reservation "cortex_a9_load3_4" 5
   (and (eq_attr "tune" "cortexa9")
        (eq_attr "type" "load3, load4"))
 
@@ -120,12 +112,13 @@ cortex_a9_p1_e2 + cortex_a9_p0_e1 + cort
 (define_insn_reservation "cortex_a9_store1_2" 0
   (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "store1, store2"))
+       (eq_attr "type" "store1, store2, f_stores, f_stored"))
   "cortex_a9_ls")
 
 ;; Almost all our store multiples use an auto-increment
 ;; form.  Don't issue back to back load and store multiples
 ;; because the load store unit will stall.
+
 (define_insn_reservation "cortex_a9_store3_4" 0
   (and (eq_attr "tune" "cortexa9")
        (eq_attr "type" "store3, store4"))
@@ -193,47 +186,79 @@ cortex_a9_store3_4, cortex_a9_store1_2,
 (define_insn_reservation "cortex_a9_call" 0
   (and (eq_attr "tune" "cortexa9")
        (eq_attr "type" "call"))
-  "cortex_a9_issue_branch + cortex_a9_multcycle1 + cortex_a9_ls + cortex_a9_vfp")
+  "cortex_a9_issue_branch + cortex_a9_multcycle1 + cortex_a9_ls + ca9_issue_vfp_neon")
 
 ;; Pipelining for VFP instructions.
+;; Issue happens either along the load store unit or the VFP / Neon unit.
+;; Pipeline instruction classification:
+;; FPS    - fcpys, ffariths, ffarithd, r_2_f, f_2_r
+;; FP_ADD - fadds, faddd, fcmps (1)
+;; FPMUL  - fmul{s,d}, fmac{s,d}
+;; FPDIV  - fdiv{s,d}
+(define_cpu_unit "ca9fps" "cortex_a9")
+(define_cpu_unit "ca9fp_add1, ca9fp_add2, ca9fp_add3, ca9fp_add4" "cortex_a9")
+(define_cpu_unit "ca9fp_mul1, ca9fp_mul2, ca9fp_mul3, ca9fp_mul4" "cortex_a9")
+(define_cpu_unit "ca9fp_ds1" "cortex_a9")
 
-(define_insn_reservation "cortex_a9_ffarith" 1
+
+;; fmrs, fmrrd, fmstat and fmrx - The data is available after 1 cycle.
+(define_insn_reservation "cortex_a9_fps" 2
   (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "fcpys,ffariths,ffarithd,fcmps,fcmpd,fconsts,fconstd"))
-  "cortex_a9_vfp")
+       (eq_attr "type" "fcpys, fconsts, fconstd, ffariths, ffarithd, r_2_f, f_2_r, f_flag"))
+  "ca9_issue_vfp_neon + ca9fps")
+
+(define_bypass 1
+  "cortex_a9_fps"
+  "cortex_a9_fadd, cortex_a9_fps, cortex_a9_fcmp, cortex_a9_dp, cortex_a9_dp_shift, cortex_a9_multiply")
+
+;; Scheduling on the FP_ADD pipeline.
+(define_reservation "ca9fp_add"
+  "ca9_issue_vfp_neon + ca9fp_add1, ca9fp_add2, ca9fp_add3, ca9fp_add4")
 
 (define_insn_reservation "cortex_a9_fadd" 4
-  (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "fadds,faddd,f_cvt"))
-  "cortex_a9_vfp")
+  (and (eq_attr "tune" "cortexa9")
+       (eq_attr "type" "fadds, faddd, f_cvt"))
+  "ca9fp_add")
 
-(define_insn_reservation "cortex_a9_fmuls" 5
-  (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "fmuls"))
-  "cortex_a9_vfp")
+(define_insn_reservation "cortex_a9_fcmp" 1
+  (and (eq_attr "tune" "cortexa9")
+       (eq_attr "type" "fcmps, fcmpd"))
+  "ca9_issue_vfp_neon + ca9fp_add1")
 
-(define_insn_reservation "cortex_a9_fmuld" 6
-  (and (eq_attr "tune" "cortexa9")
-       (eq_attr "type" "fmuld"))
-  "cortex_a9_vfp*2")
+;; Scheduling for the Multiply and MAC instructions.
+(define_reservation "ca9fmuls" + "ca9fp_mul1 + ca9_issue_vfp_neon, ca9fp_mul2, ca9fp_mul3, ca9fp_mul4") + +(define_reservation "ca9fmuld" + "ca9fp_mul1 + ca9_issue_vfp_neon, (ca9fp_mul1 + ca9fp_mul2), ca9fp_mul2, ca9fp_mul3, ca9fp_mul4") + +(define_insn_reservation "cortex_a9_fmuls" 4 + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fmuls")) + "ca9fmuls") + +(define_insn_reservation "cortex_a9_fmuld" 5 + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fmuld")) + "ca9fmuld") (define_insn_reservation "cortex_a9_fmacs" 8 - (and (eq_attr "tune" "cortexa9") - (eq_attr "type" "fmacs")) - "cortex_a9_vfp") + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fmacs")) + "ca9fmuls, ca9fp_add") -(define_insn_reservation "cortex_a9_fmacd" 8 - (and (eq_attr "tune" "cortexa9") - (eq_attr "type" "fmacd")) - "cortex_a9_vfp*2") +(define_insn_reservation "cortex_a9_fmacd" 9 + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fmacd")) + "ca9fmuld, ca9fp_add") +;; Division pipeline description. (define_insn_reservation "cortex_a9_fdivs" 15 - (and (eq_attr "tune" "cortexa9") - (eq_attr "type" "fdivs")) - "cortex_a9_vfp*10") + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fdivs")) + "ca9fp_ds1 + ca9_issue_vfp_neon, nothing*14") (define_insn_reservation "cortex_a9_fdivd" 25 - (and (eq_attr "tune" "cortexa9") - (eq_attr "type" "fdivd")) - "cortex_a9_vfp*20") + (and (eq_attr "tune" "cortexa9") + (eq_attr "type" "fdivd")) + "ca9fp_ds1 + ca9_issue_vfp_neon, nothing*24")