From patchwork Fri Nov 8 15:10:35 2013
X-Patchwork-Submitter: Steven Bosscher
X-Patchwork-Id: 289856
From: Steven Bosscher
Date: Fri, 8 Nov 2013 16:10:35 +0100
Subject: some prep work to make JUMP_TABLE_DATA a non-active_insn_p object
To: GCC Patches

Hello,

I'd like to make JUMP_TABLE_DATA a non-active insn before the end of
stage1.  Most of the work required for this is pretty simple: it involves
finding and fixing the few places where insns are walked across basic
block boundaries, ignoring barriers.  Ah, the madness of that! :-)
Fortunately almost no code in the shared RTL middle end does this, and
most targets are also safe.

This is the first of what I think will be four patches to fix those few
places.

Bootstrapped & tested on powerpc64-unknown-linux-gnu.  Also built SH to
be sure.  OK for trunk?

Ciao!
Steven

	* cfgrtl.c (can_fallthru): Reorder code to move the tablejump check
	up, and make that check explicit.  BB_HEAD cannot be NULL, so remove
	the check for it.
	* haifa-sched.c (ready_remove_first_dispatch): Check INSN_P before
	looking at INSN_CODE.
	* reload1.c (delete_dead_insn): Do not expect JUMP_TABLE_DATA to be
	an active_insn_p object; respect basic block boundaries.
	* reorg.c (follow_jumps): Use the invariant that JUMP_TABLE_DATA
	always follows immediately after the jump table's label.
	* config/nds32/nds32.c (nds32_output_casesi_pc_relative): Likewise.
	* config/sh/sh.c (barrier_align): Likewise.  Rearrange code so that
	JUMP_TABLE_DATA is not expected to be an active_insn_p object.
Index: cfgrtl.c
===================================================================
--- cfgrtl.c	(revision 204543)
+++ cfgrtl.c	(working copy)
@@ -610,7 +610,7 @@ forwarder_block_p (const_basic_block bb)
 }
 
 /* Return nonzero if we can reach target from src by falling through.  */
-/* FIXME: Make this a cfg hook.  */
+/* FIXME: Make this a cfg hook, the result is only valid in legacy cfgrtl mode.  */
 bool
 can_fallthru (basic_block src, basic_block target)
@@ -623,17 +623,21 @@ can_fallthru (basic_block src, basic_block target)
   if (target == EXIT_BLOCK_PTR)
     return true;
   if (src->next_bb != target)
-    return 0;
+    return false;
+
+  /* ??? Later we may add code to move jump tables offline.  */
+  if (tablejump_p (insn, NULL, NULL))
+    return false;
+
   FOR_EACH_EDGE (e, ei, src->succs)
     if (e->dest == EXIT_BLOCK_PTR && e->flags & EDGE_FALLTHRU)
-      return 0;
+      return false;
 
   insn2 = BB_HEAD (target);
-  if (insn2 && !active_insn_p (insn2))
+  if (!active_insn_p (insn2))
     insn2 = next_active_insn (insn2);
 
-  /* ??? Later we may add code to move jump tables offline.  */
   return next_active_insn (insn) == insn2;
 }
Index: haifa-sched.c
===================================================================
--- haifa-sched.c	(revision 204543)
+++ haifa-sched.c	(working copy)
@@ -8589,8 +8589,8 @@ ready_remove_first_dispatch (struct ready_list *re
   rtx insn = ready_element (ready, 0);
 
   if (ready->n_ready == 1
+      || !INSN_P (insn)
       || INSN_CODE (insn) < 0
-      || !INSN_P (insn)
       || !active_insn_p (insn)
       || targetm.sched.dispatch (insn, FITS_DISPATCH_WINDOW))
     return ready_remove_first (ready);
@@ -8599,8 +8599,8 @@ ready_remove_first_dispatch (struct ready_list *re
     {
       insn = ready_element (ready, i);
 
-      if (INSN_CODE (insn) < 0
-	  || !INSN_P (insn)
+      if (!INSN_P (insn)
+	  || INSN_CODE (insn) < 0
	  || !active_insn_p (insn))
	continue;
@@ -8619,8 +8619,8 @@ ready_remove_first_dispatch (struct ready_list *re
     {
       insn = ready_element (ready, i);
 
-      if (INSN_CODE (insn) < 0
-	  || !INSN_P (insn)
+      if (!INSN_P (insn)
+	  || INSN_CODE (insn) < 0
	  || !active_insn_p (insn))
	continue;
Index: reload1.c
===================================================================
--- reload1.c	(revision 204543)
+++ reload1.c	(working copy)
@@ -2123,7 +2123,8 @@ delete_dead_insn (rtx insn)
     block local equivalences.  Instead of trying to figure out the exact
     circumstances where we can delete the potentially dead insns, just
     let DCE do the job.  */
-  if (prev && GET_CODE (PATTERN (prev)) == SET
+  if (prev && BLOCK_FOR_INSN (prev) == BLOCK_FOR_INSN (insn)
+      && GET_CODE (PATTERN (prev)) == SET
       && (prev_dest = SET_DEST (PATTERN (prev)), REG_P (prev_dest))
       && reg_mentioned_p (prev_dest, PATTERN (insn))
       && find_regno_note (insn, REG_DEAD, REGNO (prev_dest))
Index: reorg.c
===================================================================
--- reorg.c	(revision 204543)
+++ reorg.c	(working copy)
@@ -2302,15 +2302,16 @@ follow_jumps (rtx label, rtx jump, bool *crossing)
        depth++)
     {
       rtx this_label = JUMP_LABEL (insn);
-      rtx tem;
 
       /* If we have found a cycle, make the insn jump to itself.  */
       if (this_label == label)
	return label;
+
+      /* Cannot follow returns and cannot look through tablejumps.  */
       if (ANY_RETURN_P (this_label))
	return this_label;
 
-      tem = next_active_insn (this_label);
-      if (tem && JUMP_TABLE_DATA_P (tem))
+      if (NEXT_INSN (this_label)
+	  && JUMP_TABLE_DATA_P (NEXT_INSN (this_label)))
	break;
 
       if (!targetm.can_follow_jump (jump, insn))
Index: config/nds32/nds32.c
===================================================================
--- config/nds32/nds32.c	(revision 204543)
+++ config/nds32/nds32.c	(working copy)
@@ -4677,7 +4677,7 @@ nds32_output_casesi_pc_relative (rtx *operands)
   enum machine_mode mode;
   rtx diff_vec;
 
-  diff_vec = PATTERN (next_active_insn (operands[1]));
+  diff_vec = PATTERN (NEXT_INSN (operands[1]));
 
   gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
Index: config/sh/sh.c
===================================================================
--- config/sh/sh.c	(revision 204543)
+++ config/sh/sh.c	(working copy)
@@ -5774,24 +5774,18 @@ fixup_addr_diff_vecs (rtx first)
 int
 barrier_align (rtx barrier_or_label)
 {
-  rtx next = next_active_insn (barrier_or_label), pat, prev;
-
-  if (! next)
-    return 0;
-
-  pat = PATTERN (next);
-
-  if (GET_CODE (pat) == ADDR_DIFF_VEC)
-    return 2;
-
-  if (GET_CODE (pat) == UNSPEC_VOLATILE && XINT (pat, 1) == UNSPECV_ALIGN)
-    /* This is a barrier in front of a constant table.  */
-    return 0;
-
-  prev = prev_active_insn (barrier_or_label);
-  if (GET_CODE (PATTERN (prev)) == ADDR_DIFF_VEC)
-    {
-      pat = PATTERN (prev);
+  rtx next, pat;
+
+  if (LABEL_P (barrier_or_label)
+      && NEXT_INSN (barrier_or_label)
+      && JUMP_TABLE_DATA_P (NEXT_INSN (barrier_or_label)))
+    return 2;
+
+  if (BARRIER_P (barrier_or_label)
+      && PREV_INSN (barrier_or_label)
+      && JUMP_TABLE_DATA_P (PREV_INSN (barrier_or_label)))
+    {
+      pat = PATTERN (PREV_INSN (barrier_or_label));
 
       /* If this is a very small table, we want to keep the alignment after
	 the table to the minimum for proper code alignment.  */
       return ((optimize_size
@@ -5800,6 +5794,17 @@ barrier_align (rtx barrier_or_label)
	       ? 1 << TARGET_SHMEDIA
	       : align_jumps_log);
     }
 
+  next = next_active_insn (barrier_or_label);
+
+  if (! next)
+    return 0;
+
+  pat = PATTERN (next);
+
+  if (GET_CODE (pat) == UNSPEC_VOLATILE && XINT (pat, 1) == UNSPECV_ALIGN)
+    /* This is a barrier in front of a constant table.  */
+    return 0;
+
   if (optimize_size)
     return 0;
@@ -5824,13 +5829,12 @@ barrier_align (rtx barrier_or_label)
	 (fill_eager_delay_slots) and the branch is to the insn after the
	 insn after the barrier.  */
 
-      /* PREV is presumed to be the JUMP_INSN for the barrier under
-	 investigation.  Skip to the insn before it.  */
-
       int slot, credit;
       bool jump_to_next = false;
 
-      prev = prev_real_insn (prev);
+      /* Skip to the insn before the JUMP_INSN before the barrier under
+	 investigation.  */
+      rtx prev = prev_real_insn (prev_active_insn (barrier_or_label));
 
       for (slot = 2, credit = (1 << (CACHE_LOG - 2)) + 2;
	   credit >= 0 && prev && NONJUMP_INSN_P (prev);