From patchwork Wed Dec 21 20:33:33 2011
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 132731
Message-ID: <4EF2429D.9080802@redhat.com>
Date: Wed, 21 Dec 2011 12:33:33 -0800
From: Richard Henderson
To: GCC Patches <gcc-patches@gcc.gnu.org>
Subject: Fix PR target/51552

As I say in the PR, this is partially a bfin backend bug.
But as it is also a debug/eh_frame size regression for all targets,
I'm fixing it anyway.  This has the side-effect of re-hiding the bfin
backend bug and allowing the build to finish for bfin-rtems.

For x86_64, cc1:

before: .eh_frame  000af284  00000000010ed300  00000000010ed300  00ced300  2**3
after:  .eh_frame  000af16c  00000000010ed260  00000000010ed260  00ced260  2**3

cc1plus:

before: .eh_frame  000c3a6c  00000000012a4008  00000000012a4008  00ea4008  2**3
after:  .eh_frame  000c3954  00000000012a3f68  00000000012a3f68  00ea3f68  2**3

The reason for the minimal size decrease is that almost all of the time
the space reclaimed by eliding the advance is taken back by alignment at
the end of the FDE.  Only occasionally do we get lucky and have the FDE
size decrease below a multiple of the word size.  Still, progress is
progress, and there are indeed fewer opcodes that need to be processed
at runtime.


r~

	PR target/51552
	* dwarf2cfi.c (dwarf2out_frame_debug): Move any_cfis_emitted code...
	(scan_trace): ... here.

diff --git a/gcc/dwarf2cfi.c b/gcc/dwarf2cfi.c
index 69e6f21..b2721e8 100644
--- a/gcc/dwarf2cfi.c
+++ b/gcc/dwarf2cfi.c
@@ -1930,9 +1930,6 @@ dwarf2out_frame_debug (rtx insn)
 {
   rtx note, n;
   bool handled_one = false;
-  bool need_flush = false;
-
-  any_cfis_emitted = false;
 
   for (note = REG_NOTES (insn); note; note = XEXP (note, 1))
     switch (REG_NOTE_KIND (note))
@@ -2020,8 +2017,7 @@ dwarf2out_frame_debug (rtx insn)
 	  break;
 
 	case REG_CFA_FLUSH_QUEUE:
-	  /* The actual flush happens below.  */
-	  need_flush = true;
+	  /* The actual flush happens elsewhere.  */
 	  handled_one = true;
 	  break;
 
@@ -2029,13 +2025,7 @@ dwarf2out_frame_debug (rtx insn)
 	  break;
       }
 
-  if (handled_one)
-    {
-      /* Minimize the number of advances by emitting the entire queue
-	 once anything is emitted.  */
-      need_flush |= any_cfis_emitted;
-    }
-  else
+  if (!handled_one)
     {
       insn = PATTERN (insn);
     do_frame_expr:
@@ -2044,12 +2034,9 @@ dwarf2out_frame_debug (rtx insn)
       /* Check again.  A parallel can save and update the same register.
	 We could probably check just once, here, but this is safer than
	 removing the check at the start of the function.  */
-      if (any_cfis_emitted || clobbers_queued_reg_save (insn))
-	need_flush = true;
+      if (clobbers_queued_reg_save (insn))
+	dwarf2out_flush_queued_reg_saves ();
     }
-
-  if (need_flush)
-    dwarf2out_flush_queued_reg_saves ();
 }
 
 /* Emit CFI info to change the state from OLD_ROW to NEW_ROW.  */
@@ -2489,6 +2476,7 @@ scan_trace (dw_trace_info *trace)
 
 	  /* Make sure any register saves are visible at the jump target.  */
 	  dwarf2out_flush_queued_reg_saves ();
+	  any_cfis_emitted = false;
 
 	  /* However, if there is some adjustment on the call itself, e.g.
	     a call_pop, that action should be considered to happen after
@@ -2508,6 +2496,7 @@ scan_trace (dw_trace_info *trace)
	      || clobbers_queued_reg_save (insn)
	      || find_reg_note (insn, REG_CFA_FLUSH_QUEUE, NULL))
	    dwarf2out_flush_queued_reg_saves ();
+	  any_cfis_emitted = false;
 
	  add_cfi_insn = insn;
	  scan_insn_after (insn);
@@ -2518,6 +2507,12 @@ scan_trace (dw_trace_info *trace)
	     emitted two cfa adjustments.  Do it now.  */
	  def_cfa_1 (&this_cfa);
 
+	  /* Minimize the number of advances by emitting the entire queue
+	     once anything is emitted.  */
+	  if (any_cfis_emitted
+	      || find_reg_note (insn, REG_CFA_FLUSH_QUEUE, NULL))
+	    dwarf2out_flush_queued_reg_saves ();
+
	  /* Note that a test for control_flow_insn_p does exactly
	     the same tests as are done to actually create the
	     edges.  So always call the routine and let it not
	     create edges for