From patchwork Wed Mar 23 14:55:33 2011
From: Bernd Schmidt
Date: Wed, 23 Mar 2011 15:55:33 +0100
Subject: [PATCH 5/6] Generate more shrink-wrapping opportunities
To: GCC Patches
Message-ID: <4D8A09E5.7070105@codesourcery.com>
In-Reply-To: <4D8A0703.9090306@codesourcery.com>

The first basic block contains insns to move incoming argument
registers to pseudos.  When these pseudos live across calls, they get
allocated to call-saved registers.
This in turn disables shrink-wrapping, since the move instruction
requires the prologue (saving the call-saved reg) to occur before it.
This patch addresses the problem by moving such copies downward
through the CFG until we find a place where the destination is used or
the incoming argument is clobbered.


Bernd

	* function.c (prepare_shrink_wrap): New function.
	(thread_prologue_and_epilogue_insns): Call it.

Index: gcc/function.c
===================================================================
--- gcc.orig/function.c
+++ gcc/function.c
@@ -5299,6 +5299,127 @@ requires_stack_frame_p (rtx insn)
       return true;
   return false;
 }
+
+/* Look for sets of call-saved registers in the first block of the
+   function, and move them down into successor blocks if the register
+   is used only on one path.  This exposes more opportunities for
+   shrink-wrapping.
+   These kinds of sets often occur when incoming argument registers are
+   moved to call-saved registers because their values are live across
+   one or more calls during the function.  */
+
+static void
+prepare_shrink_wrap (basic_block entry_block)
+{
+  rtx insn, curr;
+  FOR_BB_INSNS_SAFE (entry_block, insn, curr)
+    {
+      basic_block next_bb;
+      edge e, live_edge;
+      edge_iterator ei;
+      rtx set, scan;
+      unsigned destreg, srcreg;
+
+      if (!NONDEBUG_INSN_P (insn))
+	continue;
+      set = single_set (insn);
+      if (!set)
+	continue;
+
+      if (!REG_P (SET_SRC (set)) || !REG_P (SET_DEST (set)))
+	continue;
+      srcreg = REGNO (SET_SRC (set));
+      destreg = REGNO (SET_DEST (set));
+      if (hard_regno_nregs[srcreg][GET_MODE (SET_SRC (set))] > 1
+	  || hard_regno_nregs[destreg][GET_MODE (SET_DEST (set))] > 1)
+	continue;
+
+      next_bb = entry_block;
+      scan = insn;
+
+      for (;;)
+	{
+	  live_edge = NULL;
+	  FOR_EACH_EDGE (e, ei, next_bb->succs)
+	    {
+	      if (REGNO_REG_SET_P (df_get_live_in (e->dest), destreg))
+		{
+		  if (live_edge)
+		    {
+		      live_edge = NULL;
+		      break;
+		    }
+		  live_edge = e;
+		}
+	    }
+	  if (!live_edge)
+	    break;
+	  /* We can sometimes encounter dead code.  Don't try to move it
+	     into the exit block.  */
+	  if (live_edge->dest == EXIT_BLOCK_PTR)
+	    break;
+	  if (EDGE_COUNT (live_edge->dest->preds) > 1)
+	    break;
+	  while (scan != BB_END (next_bb))
+	    {
+	      scan = NEXT_INSN (scan);
+	      if (NONDEBUG_INSN_P (scan))
+		{
+		  rtx link;
+		  HARD_REG_SET set_regs;
+
+		  CLEAR_HARD_REG_SET (set_regs);
+		  note_stores (PATTERN (scan), record_hard_reg_sets,
+			       &set_regs);
+		  if (CALL_P (scan))
+		    IOR_HARD_REG_SET (set_regs, call_used_reg_set);
+		  for (link = REG_NOTES (scan); link; link = XEXP (link, 1))
+		    if (REG_NOTE_KIND (link) == REG_INC)
+		      record_hard_reg_sets (XEXP (link, 0), NULL, &set_regs);
+
+		  if (TEST_HARD_REG_BIT (set_regs, srcreg)
+		      || reg_referenced_p (SET_DEST (set),
+					   PATTERN (scan)))
+		    {
+		      scan = NULL_RTX;
+		      break;
+		    }
+		  if (CALL_P (scan))
+		    {
+		      rtx link = CALL_INSN_FUNCTION_USAGE (scan);
+		      while (link)
+			{
+			  rtx tmp = XEXP (link, 0);
+			  if (GET_CODE (tmp) == USE
+			      && reg_referenced_p (SET_DEST (set), tmp))
+			    break;
+			  link = XEXP (link, 1);
+			}
+		      if (link)
+			{
+			  scan = NULL_RTX;
+			  break;
+			}
+		    }
+		}
+	    }
+	  if (!scan)
+	    break;
+	  next_bb = live_edge->dest;
+	}
+
+      if (next_bb != entry_block)
+	{
+	  rtx after = BB_HEAD (next_bb);
+	  while (!NOTE_P (after)
+		 || NOTE_KIND (after) != NOTE_INSN_BASIC_BLOCK)
+	    after = NEXT_INSN (after);
+	  emit_insn_after (PATTERN (insn), after);
+	  delete_insn (insn);
+	}
+    }
+}
+
 #endif
 
 #ifdef HAVE_return
@@ -5499,6 +5620,8 @@ thread_prologue_and_epilogue_insns (void
   bitmap_head bb_antic_flags;
   bitmap_head bb_on_list;
 
+  prepare_shrink_wrap (entry_edge->dest);
+
   bitmap_initialize (&bb_antic_flags, &bitmap_default_obstack);
   bitmap_initialize (&bb_on_list, &bitmap_default_obstack);