From patchwork Thu Aug 19 14:36:29 2010
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 62163
Date: Thu, 19 Aug 2010 07:36:29 -0700
From: "H.J. Lu"
To: gcc-patches@gcc.gnu.org
Subject: [ix86/gcc-4_5-branch] PATCH: Always avoid lea if possible on x86
Message-ID: <20100819143629.GA27895@intel.com>
In-Reply-To: <20100817144925.GA26996@intel.com>
References: <20100817144925.GA26996@intel.com>
Reply-To: "H.J. Lu"
User-Agent: Mutt/1.5.20 (2009-12-10)

On Tue, Aug 17, 2010 at 07:49:25AM -0700, H.J. Lu wrote:
> Hi,
>
> We added ix86_lea_for_add_ok and modified *add<mode>_1 to make sure
> that we use LEA on addresses and ADD on non-addresses for
> TARGET_OPT_AGU.  It turned out that ADD is always faster than LEA,
> except on TARGET_OPT_AGU processors.  This patch changes *add<mode>_1
> and ix86_lea_for_add_ok to avoid lea for !TARGET_OPT_AGU processors.
> OK for trunk?
>
> Thanks.
>
>
> H.J.
> ---
> 2010-08-17  H.J. Lu
>
> 	* config/i386/i386.c (ix86_lea_for_add_ok): For !TARGET_OPT_AGU
> 	or optimizing for size, always avoid lea if possible.
>
> 	* config/i386/i386.md (*add<mode>_1): Always avoid lea if
> 	possible.
>

I backported it to ix86/gcc-4_5-branch.


H.J.

diff --git a/gcc/ChangeLog.ix86 b/gcc/ChangeLog.ix86
index c5ba9c9..116f87a 100644
--- a/gcc/ChangeLog.ix86
+++ b/gcc/ChangeLog.ix86
@@ -1,3 +1,14 @@
+2010-08-19  H.J. Lu
+
+	Backport from mainline
+	2010-08-17  H.J. Lu
+
+	* config/i386/i386.c (ix86_lea_for_add_ok): For !TARGET_OPT_AGU
+	or optimizing for size, always avoid lea if possible.
+
+	* config/i386/i386.md (*add<mode>_1): Always avoid lea if
+	possible.
+
 2010-08-12  H.J. Lu
 
 	Backport from mainline
diff --git a/gcc/config/i386/i386.c b/gcc/config/i386/i386.c
index 4b7a061..7265465 100644
--- a/gcc/config/i386/i386.c
+++ b/gcc/config/i386/i386.c
@@ -13902,10 +13902,10 @@ distance_agu_use (unsigned int regno0, rtx insn)
 #define IX86_LEA_PRIORITY 2
 
 /* Return true if it is ok to optimize an ADD operation to LEA
-   operation to avoid flag register consumation.  For the processors
-   like ATOM, if the destination register of LEA holds an actual
-   address which will be used soon, LEA is better and otherwise ADD
-   is better. */
+   operation to avoid flag register consumation.  For most processors,
+   ADD is faster than LEA.  For the processors like ATOM, if the
+   destination register of LEA holds an actual address which will be
+   used soon, LEA is better and otherwise ADD is better. */
 
 bool
 ix86_lea_for_add_ok (enum rtx_code code ATTRIBUTE_UNUSED,
@@ -13913,17 +13913,15 @@ ix86_lea_for_add_ok (enum rtx_code code ATTRIBUTE_UNUSED,
 {
   unsigned int regno0 = true_regnum (operands[0]);
   unsigned int regno1 = true_regnum (operands[1]);
-  unsigned int regno2;
-
-  if (!TARGET_OPT_AGU || optimize_function_for_size_p (cfun))
-    return regno0 != regno1;
-
-  regno2 = true_regnum (operands[2]);
+  unsigned int regno2 = true_regnum (operands[2]);
 
   /* If a = b + c, (a!=b && a!=c), must use lea form. */
   if (regno0 != regno1 && regno0 != regno2)
     return true;
-  else
+
+  if (!TARGET_OPT_AGU || optimize_function_for_size_p (cfun))
+    return false;
+  else
     {
       int dist_define, dist_use;
       dist_define = distance_non_agu_define (regno1, regno2, insn);
diff --git a/gcc/config/i386/i386.md b/gcc/config/i386/i386.md
index d092a4a..2f1b25e 100644
--- a/gcc/config/i386/i386.md
+++ b/gcc/config/i386/i386.md
@@ -6015,8 +6015,10 @@
 	}
 
     default:
-      /* Use add as much as possible to replace lea for AGU optimization. */
-      if (which_alternative == 2 && TARGET_OPT_AGU)
+      /* This alternative was added for TARGET_OPT_AGU to use add as
+	 much as possible.  But add is also faster than lea for
+	 !TARGET_OPT_AGU.  */
+      if (which_alternative == 2)
         return "add{<imodesuffix>}\t{%1, %0|%0, %1}";
 
       gcc_assert (rtx_equal_p (operands[0], operands[1]));
@@ -6038,10 +6040,7 @@
     }
 }
   [(set (attr "type")
-     (cond [(and (eq_attr "alternative" "2")
-		 (eq (symbol_ref "TARGET_OPT_AGU") (const_int 0)))
-	      (const_string "lea")
-	    (eq_attr "alternative" "3")
+     (cond [(eq_attr "alternative" "3")
 	      (const_string "lea")
 	    ; Current assemblers are broken and do not allow @GOTOFF in
 	    ; ought but a memory context.
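
For readers following the change, the decision made by the patched
ix86_lea_for_add_ok boils down to the sketch below.  This is only an
illustration, not GCC code: the helper name use_lea_p and its parameters
are made up, plain ints stand in for register numbers, the two flags
stand in for TARGET_OPT_AGU and optimize_function_for_size_p, and the
def/use distance heuristic built on distance_non_agu_define and
distance_agu_use is reduced to a single hypothetical flag.

#include <stdbool.h>

/* Illustrative sketch: keep lea only when it is the only way to
   express the operation, or when an AGU-optimized target (e.g. Atom)
   is expected to benefit from it.  */
static bool
use_lea_p (int regno0, int regno1, int regno2,
           bool target_opt_agu, bool optimize_size,
           bool result_used_as_address_soon)
{
  /* a = b + c with a != b and a != c: the two-operand add cannot
     express this, so the lea form must be used.  */
  if (regno0 != regno1 && regno0 != regno2)
    return true;

  /* Otherwise add is at least as fast and typically no larger, so
     avoid lea unless the target prefers feeding the AGU.  */
  if (!target_opt_agu || optimize_size)
    return false;

  /* On TARGET_OPT_AGU processors, keep lea only when its result is
     going to be used as an address soon; the real code compares
     def/use distances, a flag stands in for that heuristic here.  */
  return result_used_as_address_soon;
}

Compared with the pre-patch code, the !TARGET_OPT_AGU (or optimizing
for size) early-out now happens after the must-use-lea check and
returns false instead of regno0 != regno1, so add is also chosen when
the destination matches only the second operand (PLUS is commutative,
so the operands can be swapped).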