From patchwork Fri Oct 19 19:23:53 2012
X-Patchwork-Submitter: Joern Rennecke
X-Patchwork-Id: 192795
Message-ID: <20121019152353.58plpczcgso4og8o-nzlynne@webmail.spamcop.net>
Date: Fri, 19 Oct 2012 15:23:53 -0400
From: Joern Rennecke
To: Richard Sandiford
Cc: gcc-patches@gcc.gnu.org, Eric Botcazou
Subject: Re: Ping: RFA: add lock_length attribute to break branch-shortening cycles
References: <20121015013935.v6k1ge59foks8s88-nzlynne@webmail.spamcop.net>
 <87391flcjy.fsf@talisman.home>
 <20121016153515.0nmsft8o2soo4s40-nzlynne@webmail.spamcop.net>
 <87lif3k246.fsf@talisman.home>
 <20121018162901.49c9o85e00gswks0-nzlynne@webmail.spamcop.net>
 <87haprjxx6.fsf@talisman.home>
In-Reply-To: <87haprjxx6.fsf@talisman.home>
Quoting Richard Sandiford:

> Joern Rennecke writes:
>> When the condition is not fulfilled, we want to keep the length from
>> the previous iteration.
>
> Right, that's what I mean.  So we need to make sure that the difference
> between the address of the current instruction and the address of the
> next instruction also doesn't change.  The code as posted was:
>
>       else
>         {
>           new_length = insn_current_length (insn);
>           insn_current_address += new_length;
>         }
>
> #ifdef ADJUST_INSN_LENGTH
>       /* If needed, do any adjustment.  */
>       tmp_length = new_length;
>       ADJUST_INSN_LENGTH (insn, new_length);
>       insn_current_address += (new_length - tmp_length);
> #endif
>
>       if (new_length != insn_lengths[uid]
>           && (new_length > insn_lengths[uid] || first_pass))
>         {
>           insn_lengths[uid] = new_length;
>           something_changed = 1;
>         }
>
> which always leaves insn_current_address based off new_length,
> even if new_length is out of kilter with insn_lengths[uid].

Indeed.

I have attached a patch which I believe addresses the issues you raised.

Because making the length sticky is problematic when UIDs aren't unique,
I have tested this patch on top of that patch:
http://gcc.gnu.org/ml/gcc-patches/2012-10/msg01806.html

Regression tested on i686-pc-linux-gnu X cris-elf and
i686-pc-linux-gnu X mipsel-elf.

2012-10-19  Jörn Rennecke

	* final.c (shorten_branches): When optimizing, start with small
	length and increase from there, and don't decrease lengths.
Index: final.c
===================================================================
--- final.c	(revision 192573)
+++ final.c	(working copy)
@@ -1066,7 +1066,15 @@ shorten_branches (rtx first ATTRIBUTE_UN
     }
 #endif /* CASE_VECTOR_SHORTEN_MODE */
 
+  /* When optimizing, we start assuming minimum length, and keep increasing
+     lengths as we find the need for this, till nothing changes.
+     When not optimizing, we start assuming maximum lengths, and
+     do a single pass to update the lengths.  */
+  bool increasing = optimize != 0;
+
   /* Compute initial lengths, addresses, and varying flags for each insn.  */
+  int (*length_fun) (rtx) = increasing ? insn_min_length : insn_default_length;
+
   for (insn_current_address = 0, insn = first;
        insn != 0;
        insn_current_address += insn_lengths[uid], insn = NEXT_INSN (insn))
@@ -1117,6 +1125,8 @@ shorten_branches (rtx first ATTRIBUTE_UN
 #else
       const_delay_slots = 0;
 #endif
+      int (*inner_length_fun) (rtx)
+	= const_delay_slots ? length_fun : insn_default_length;
       /* Inside a delay slot sequence, we do not do any branch shortening
 	 if the shortening could change the number of delay slots
 	 of the branch.  */
@@ -1131,7 +1141,7 @@ shorten_branches (rtx first ATTRIBUTE_UN
 	    inner_length = (asm_insn_count (PATTERN (inner_insn))
 			    * insn_default_length (inner_insn));
 	  else
-	    inner_length = insn_default_length (inner_insn);
+	    inner_length = inner_length_fun (inner_insn);
 
 	  insn_lengths[inner_uid] = inner_length;
 	  if (const_delay_slots)
@@ -1149,7 +1159,7 @@ shorten_branches (rtx first ATTRIBUTE_UN
 	}
       else if (GET_CODE (body) != USE && GET_CODE (body) != CLOBBER)
 	{
-	  insn_lengths[uid] = insn_default_length (insn);
+	  insn_lengths[uid] = length_fun (insn);
 	  varying_length[uid] = insn_variable_length_p (insn);
 	}
 
@@ -1165,6 +1175,7 @@ shorten_branches (rtx first ATTRIBUTE_UN
      get the current insn length.  If it has changed, reflect the change.
      When nothing changes for a full pass, we are done.  */
 
+  bool first_pass ATTRIBUTE_UNUSED = true;
   while (something_changed)
     {
       something_changed = 0;
@@ -1220,6 +1231,7 @@ shorten_branches (rtx first ATTRIBUTE_UN
 	      rtx prev;
 	      int rel_align = 0;
 	      addr_diff_vec_flags flags;
+	      enum machine_mode vec_mode;
 
 	      /* Avoid automatic aggregate initialization.  */
 	      flags = ADDR_DIFF_VEC_FLAGS (body);
@@ -1298,9 +1310,12 @@ shorten_branches (rtx first ATTRIBUTE_UN
 	      else
 		max_addr += align_fuzz (max_lab, rel_lab, 0, 0);
-	      PUT_MODE (body, CASE_VECTOR_SHORTEN_MODE (min_addr - rel_addr,
-							max_addr - rel_addr,
-							body));
+	      vec_mode = CASE_VECTOR_SHORTEN_MODE (min_addr - rel_addr,
+						   max_addr - rel_addr, body);
+	      if (first_pass
+		  || (GET_MODE_SIZE (vec_mode)
+		      >= GET_MODE_SIZE (GET_MODE (body))))
+		PUT_MODE (body, vec_mode);
 
 	      if (JUMP_TABLES_IN_TEXT_SECTION
 		  || readonly_data_section == text_section)
 		{
@@ -1362,10 +1377,15 @@ shorten_branches (rtx first ATTRIBUTE_UN
 
 		  if (inner_length != insn_lengths[inner_uid])
 		    {
-		      insn_lengths[inner_uid] = inner_length;
-		      something_changed = 1;
+		      if (!increasing || inner_length > insn_lengths[inner_uid])
+			{
+			  insn_lengths[inner_uid] = inner_length;
+			  something_changed = 1;
+			}
+		      else
+			inner_length = insn_lengths[inner_uid];
 		    }
-		  insn_current_address += insn_lengths[inner_uid];
+		  insn_current_address += inner_length;
 		  new_length += inner_length;
 		}
 	    }
@@ -1382,15 +1402,19 @@ shorten_branches (rtx first ATTRIBUTE_UN
 	  insn_current_address += (new_length - tmp_length);
 #endif
 
-	  if (new_length != insn_lengths[uid])
+	  if (new_length != insn_lengths[uid]
+	      && (!increasing || new_length > insn_lengths[uid]))
 	    {
 	      insn_lengths[uid] = new_length;
 	      something_changed = 1;
 	    }
+	  else
+	    insn_current_address += insn_lengths[uid] - new_length;
 	}
 
       /* For a non-optimizing compile, do only a single pass.  */
-      if (!optimize)
+      if (!increasing)
 	break;
+      first_pass = false;
     }
 
   free (varying_length);