From patchwork Tue Apr 21 13:24:11 2015
X-Patchwork-Submitter: tbsaunde+gcc@tbsaunde.org
X-Patchwork-Id: 463128
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
Delivered-To: mailing list gcc-patches@gcc.gnu.org
From: tbsaunde+gcc@tbsaunde.org
To: gcc-patches@gcc.gnu.org
Cc: Trevor Saunders
Subject: [PATCH 05/12] make some HAVE_cc0 code always compiled
Date: Tue, 21 Apr 2015 09:24:11 -0400
Message-Id: <1429622658-9034-6-git-send-email-tbsaunde+gcc@tbsaunde.org>
In-Reply-To: <1429622658-9034-1-git-send-email-tbsaunde+gcc@tbsaunde.org>
References: <1429622658-9034-1-git-send-email-tbsaunde+gcc@tbsaunde.org>

From: Trevor Saunders

gcc/ChangeLog:

2015-04-21  Trevor Saunders  <tbsaunde+gcc@tbsaunde.org>

	* cfgrtl.c (rtl_merge_blocks): Change #if HAVE_cc0 to if (HAVE_cc0).
	(try_redirect_by_replacing_jump): Likewise.
	(rtl_tidy_fallthru_edge): Likewise.
	* combine.c (insn_a_feeds_b): Likewise.
	(find_split_point): Likewise.
	(simplify_set): Likewise.
	* cprop.c (cprop_jump): Likewise.
	* cse.c (cse_extended_basic_block): Likewise.
	* df-problems.c (can_move_insns_across): Likewise.
	* function.c (emit_use_return_register_into_block): Likewise.
	* haifa-sched.c (sched_init): Likewise.
	* ira.c (find_moveable_pseudos): Likewise.
	* loop-invariant.c (find_invariant_insn): Likewise.
	* lra-constraints.c (curr_insn_transform): Likewise.
	* postreload.c (reload_combine_recognize_const_pattern): Likewise.
	* reload.c (find_reloads): Likewise.
	* reorg.c (delete_scheduled_jump): Likewise.
	(steal_delay_list_from_target): Likewise.
	(steal_delay_list_from_fallthrough): Likewise.
	(redundant_insn): Likewise.
	(fill_simple_delay_slots): Likewise.
	(fill_slots_from_thread): Likewise.
	(delete_computation): Likewise.
	* sched-rgn.c (add_branch_dependences): Likewise.
---
 gcc/cfgrtl.c          | 12 +++---------
 gcc/combine.c         | 10 ++--------
 gcc/cprop.c           |  4 +---
 gcc/cse.c             |  4 +---
 gcc/df-problems.c     |  4 +---
 gcc/function.c        |  5 ++---
 gcc/haifa-sched.c     |  3 +--
 gcc/ira.c             |  5 ++---
 gcc/loop-invariant.c  |  4 +---
 gcc/lra-constraints.c |  6 ++----
 gcc/postreload.c      |  4 +---
 gcc/reload.c          | 10 +++-------
 gcc/reorg.c           | 32 ++++++++------------------
 gcc/sched-rgn.c       |  4 +---
 14 files changed, 29 insertions(+), 78 deletions(-)

diff --git a/gcc/cfgrtl.c b/gcc/cfgrtl.c
index 4c1708f..d93a49e 100644
--- a/gcc/cfgrtl.c
+++ b/gcc/cfgrtl.c
@@ -893,10 +893,9 @@ rtl_merge_blocks (basic_block a, basic_block b)
 	  del_first = a_end;
 
-#if HAVE_cc0
 	  /* If this was a conditional jump, we need to also delete
 	     the insn that set cc0.  */
-	  if (only_sets_cc0_p (prev))
+	  if (HAVE_cc0 && only_sets_cc0_p (prev))
 	    {
 	      rtx_insn *tmp = prev;
@@ -905,7 +904,6 @@
 		prev = BB_HEAD (a);
 	      del_first = tmp;
 	    }
-#endif
 
       a_end = PREV_INSN (del_first);
     }
@@ -1064,11 +1062,9 @@ try_redirect_by_replacing_jump (edge e, basic_block target, bool in_cfglayout)
       /* In case we zap a conditional jump, we'll need to kill
	 the cc0 setter too.  */
       kill_from = insn;
-#if HAVE_cc0
-      if (reg_mentioned_p (cc0_rtx, PATTERN (insn))
+      if (HAVE_cc0 && reg_mentioned_p (cc0_rtx, PATTERN (insn))
	  && only_sets_cc0_p (PREV_INSN (insn)))
	kill_from = PREV_INSN (insn);
-#endif
 
       /* See if we can create the fallthru edge.  */
       if (in_cfglayout || can_fallthru (src, target))
@@ -1825,12 +1821,10 @@ rtl_tidy_fallthru_edge (edge e)
	  delete_insn (table);
	}
 
-#if HAVE_cc0
       /* If this was a conditional jump, we need to also delete
	 the insn that set cc0.  */
-      if (any_condjump_p (q) && only_sets_cc0_p (PREV_INSN (q)))
+      if (HAVE_cc0 && any_condjump_p (q) && only_sets_cc0_p (PREV_INSN (q)))
	q = PREV_INSN (q);
-#endif
 
       q = PREV_INSN (q);
     }

diff --git a/gcc/combine.c b/gcc/combine.c
index 430084e..d71f863 100644
--- a/gcc/combine.c
+++ b/gcc/combine.c
@@ -1141,10 +1141,8 @@ insn_a_feeds_b (rtx_insn *a, rtx_insn *b)
   FOR_EACH_LOG_LINK (links, b)
     if (links->insn == a)
       return true;
-#if HAVE_cc0
-  if (sets_cc0_p (a))
+  if (HAVE_cc0 && sets_cc0_p (a))
     return true;
-#endif
   return false;
 }
@@ -4816,7 +4814,6 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
       break;
 
     case SET:
-#if HAVE_cc0
      /* If SET_DEST is CC0 and SET_SRC is not an operand, a COMPARE, or a
	 ZERO_EXTRACT, the most likely reason why this doesn't match is that
	 we need to put the operand into a register.  So split at that
@@ -4829,7 +4826,6 @@ find_split_point (rtx *loc, rtx_insn *insn, bool set_src)
	  && ! (GET_CODE (SET_SRC (x)) == SUBREG
		&& OBJECT_P (SUBREG_REG (SET_SRC (x)))))
	return &SET_SRC (x);
-#endif
 
      /* See if we can split SET_SRC as it stands.  */
      split = find_split_point (&SET_SRC (x), insn, true);
@@ -6582,13 +6578,12 @@ simplify_set (rtx x)
      else
	compare_mode = SELECT_CC_MODE (new_code, op0, op1);
 
-#if !HAVE_cc0
      /* If the mode changed, we have to change SET_DEST, the mode in the
	 compare, and the mode in the place SET_DEST is used.  If SET_DEST is
	 a hard register, just build new versions with the proper mode.  If it
	 is a pseudo, we lose unless it is only time we set the pseudo, in
	 which case we can safely change its mode.  */
-      if (compare_mode != GET_MODE (dest))
+      if (!HAVE_cc0 && compare_mode != GET_MODE (dest))
	{
	  if (can_change_dest_mode (dest, 0, compare_mode))
	    {
@@ -6610,7 +6605,6 @@ simplify_set (rtx x)
	      dest = new_dest;
	    }
	}
-#endif /* cc0 */
 #endif /* SELECT_CC_MODE */
 
  /* If the code changed, we have to build a new comparison in

diff --git a/gcc/cprop.c b/gcc/cprop.c
index b1caabb..0103686 100644
--- a/gcc/cprop.c
+++ b/gcc/cprop.c
@@ -965,11 +965,9 @@ cprop_jump (basic_block bb, rtx_insn *setcc, rtx_insn *jump, rtx from, rtx src)
	  remove_note (jump, note);
	}
 
-#if HAVE_cc0
      /* Delete the cc0 setter.  */
-      if (setcc != NULL && CC0_P (SET_DEST (single_set (setcc))))
+      if (HAVE_cc0 && setcc != NULL && CC0_P (SET_DEST (single_set (setcc))))
	delete_insn (setcc);
-#endif
 
      global_const_prop_count++;
      if (dump_file != NULL)

diff --git a/gcc/cse.c b/gcc/cse.c
index 52f5a16..ee407ad 100644
--- a/gcc/cse.c
+++ b/gcc/cse.c
@@ -6515,8 +6515,7 @@ cse_extended_basic_block (struct cse_basic_block_data *ebb_data)
	      && check_for_label_ref (insn))
	    recorded_label_ref = true;
 
-#if HAVE_cc0
-	  if (NONDEBUG_INSN_P (insn))
+	  if (HAVE_cc0 && NONDEBUG_INSN_P (insn))
	    {
	      /* If the previous insn sets CC0 and this insn no
		 longer references CC0, delete the previous insn.
@@ -6543,7 +6542,6 @@ cse_extended_basic_block (struct cse_basic_block_data *ebb_data)
	      prev_insn_cc0_mode = this_insn_cc0_mode;
	    }
	}
-#endif
     }
 }

diff --git a/gcc/df-problems.c b/gcc/df-problems.c
index d213455..22fcfa6 100644
--- a/gcc/df-problems.c
+++ b/gcc/df-problems.c
@@ -3820,9 +3820,7 @@ can_move_insns_across (rtx_insn *from, rtx_insn *to,
	  if (bitmap_intersect_p (merge_set, test_use)
	      || bitmap_intersect_p (merge_use, test_set))
	    break;
-#if HAVE_cc0
-	  if (!sets_cc0_p (insn))
-#endif
+	  if (!HAVE_cc0 || !sets_cc0_p (insn))
	    max_to = insn;
	}
      next = NEXT_INSN (insn);

diff --git a/gcc/function.c b/gcc/function.c
index 70d20ef..dd146aa 100644
--- a/gcc/function.c
+++ b/gcc/function.c
@@ -5634,10 +5634,9 @@ emit_use_return_register_into_block (basic_block bb)
   seq = get_insns ();
   end_sequence ();
   insn = BB_END (bb);
-#if HAVE_cc0
-  if (reg_mentioned_p (cc0_rtx, PATTERN (insn)))
+  if (HAVE_cc0 && reg_mentioned_p (cc0_rtx, PATTERN (insn)))
     insn = prev_cc0_setter (insn);
-#endif
+
   emit_insn_before (seq, insn);
 }

diff --git a/gcc/haifa-sched.c b/gcc/haifa-sched.c
index 286bd1b..64c89e8 100644
--- a/gcc/haifa-sched.c
+++ b/gcc/haifa-sched.c
@@ -7184,9 +7184,8 @@ void
 sched_init (void)
 {
   /* Disable speculative loads in their presence if cc0 defined.  */
-#if HAVE_cc0
+  if (HAVE_cc0)
     flag_schedule_speculative_load = 0;
-#endif
 
   if (targetm.sched.dispatch (NULL, IS_DISPATCH_ON))
     targetm.sched.dispatch_do (NULL, DISPATCH_INIT);

diff --git a/gcc/ira.c b/gcc/ira.c
index 819d702..0750d11 100644
--- a/gcc/ira.c
+++ b/gcc/ira.c
@@ -4641,15 +4641,14 @@ find_moveable_pseudos (void)
		     ? " (no unique first use)" : "");
	  continue;
	}
-#if HAVE_cc0
-      if (reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
+      if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (closest_use)))
	{
	  if (dump_file)
	    fprintf (dump_file, "Reg %d: closest user uses cc0\n", regno);
	  continue;
	}
-#endif
+
      bitmap_set_bit (&interesting, regno);
      /* If we get here, we know closest_use is a non-NULL insn
	 (as opposed to const_0_rtx).  */

diff --git a/gcc/loop-invariant.c b/gcc/loop-invariant.c
index ef956a4..7a433bc 100644
--- a/gcc/loop-invariant.c
+++ b/gcc/loop-invariant.c
@@ -922,11 +922,9 @@ find_invariant_insn (rtx_insn *insn, bool always_reached, bool always_executed)
   bool simple = true;
   struct invariant *inv;
 
-#if HAVE_cc0
   /* We can't move a CC0 setter without the user.  */
-  if (sets_cc0_p (insn))
+  if (HAVE_cc0 && sets_cc0_p (insn))
     return;
-#endif
 
   set = single_set (insn);
   if (!set)

diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
index 39cb036..fcd0621 100644
--- a/gcc/lra-constraints.c
+++ b/gcc/lra-constraints.c
@@ -3354,12 +3354,10 @@ curr_insn_transform (bool check_only_p)
   if (JUMP_P (curr_insn) || CALL_P (curr_insn))
     no_output_reloads_p = true;
 
-#if HAVE_cc0
-  if (reg_referenced_p (cc0_rtx, PATTERN (curr_insn)))
+  if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (curr_insn)))
     no_input_reloads_p = true;
-  if (reg_set_p (cc0_rtx, PATTERN (curr_insn)))
+  if (HAVE_cc0 && reg_set_p (cc0_rtx, PATTERN (curr_insn)))
     no_output_reloads_p = true;
-#endif
 
   n_operands = curr_static_id->n_operands;
   n_alternatives = curr_static_id->n_alternatives;

diff --git a/gcc/postreload.c b/gcc/postreload.c
index 68443ab..948fcbd 100644
--- a/gcc/postreload.c
+++ b/gcc/postreload.c
@@ -1032,11 +1032,9 @@ reload_combine_recognize_const_pattern (rtx_insn *insn)
	  && reg_state[clobbered_regno].real_store_ruid >= use_ruid)
	break;
 
-#if HAVE_cc0
      /* Do not separate cc0 setter and cc0 user on HAVE_cc0 targets.  */
-      if (must_move_add && sets_cc0_p (PATTERN (use_insn)))
+      if (HAVE_cc0 && must_move_add && sets_cc0_p (PATTERN (use_insn)))
	break;
-#endif
 
      gcc_assert (reg_state[regno].store_ruid <= use_ruid);
      /* Avoid moving a use of ADDREG past a point where it is stored.  */

diff --git a/gcc/reload.c b/gcc/reload.c
index 8b253b8..bb5dae7 100644
--- a/gcc/reload.c
+++ b/gcc/reload.c
@@ -2706,12 +2706,10 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
   if (JUMP_P (insn) || CALL_P (insn))
     no_output_reloads = 1;
 
-#if HAVE_cc0
-  if (reg_referenced_p (cc0_rtx, PATTERN (insn)))
+  if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (insn)))
     no_input_reloads = 1;
-  if (reg_set_p (cc0_rtx, PATTERN (insn)))
+  if (HAVE_cc0 && reg_set_p (cc0_rtx, PATTERN (insn)))
     no_output_reloads = 1;
-#endif
 
 #ifdef SECONDARY_MEMORY_NEEDED
  /* The eliminated forms of any secondary memory locations are per-insn, so
@@ -4579,16 +4577,14 @@ find_reloads (rtx_insn *insn, int replace, int ind_levels, int live_known,
	rld[j].in = 0;
      }
 
-#if HAVE_cc0
  /* If we made any reloads for addresses, see if they violate a
     "no input reloads" requirement for this insn.  But loads that we
     do after the insn (such as for output addresses) are fine.  */
-  if (no_input_reloads)
+  if (HAVE_cc0 && no_input_reloads)
    for (i = 0; i < n_reloads; i++)
      gcc_assert (rld[i].in == 0
		  || rld[i].when_needed == RELOAD_FOR_OUTADDR_ADDRESS
		  || rld[i].when_needed == RELOAD_FOR_OUTPUT_ADDRESS);
-#endif
 
  /* Compute reload_mode and reload_nregs.  */
  for (i = 0; i < n_reloads; i++)

diff --git a/gcc/reorg.c b/gcc/reorg.c
index f059504..fedefcc 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -182,7 +182,6 @@ skip_consecutive_labels (rtx label_or_return)
   return label;
 }
 
-#if HAVE_cc0
 /* INSN uses CC0 and is being moved into a delay slot.  Set up REG_CC_SETTER
    and REG_CC_USER notes so we can find it.  */
 
@@ -197,7 +196,6 @@ link_cc0_insns (rtx insn)
   add_reg_note (user, REG_CC_SETTER, insn);
   add_reg_note (insn, REG_CC_USER, user);
 }
-#endif
 
 /* Insns which have delay slots that have not yet been filled.  */
 
@@ -699,8 +697,7 @@ delete_scheduled_jump (rtx_insn *insn)
     be other insns that became dead anyway, which we wouldn't know to
     delete.  */
 
-#if HAVE_cc0
-  if (reg_mentioned_p (cc0_rtx, insn))
+  if (HAVE_cc0 && reg_mentioned_p (cc0_rtx, insn))
    {
      rtx note = find_reg_note (insn, REG_CC_SETTER, NULL_RTX);
@@ -730,7 +727,6 @@
	  delete_from_delay_slot (trial);
	}
    }
-#endif
 
  delete_related_insns (insn);
 }
@@ -1171,11 +1167,9 @@ steal_delay_list_from_target (rtx_insn *insn, rtx condition, rtx_sequence *seq,
      if (insn_references_resource_p (trial, sets, false)
	  || insn_sets_resource_p (trial, needed, false)
	  || insn_sets_resource_p (trial, sets, false)
-#if HAVE_cc0
	  /* If TRIAL sets CC0, we can't copy it, so we can't steal this
	     delay list.  */
-	  || find_reg_note (trial, REG_CC_USER, NULL_RTX)
-#endif
+	  || (HAVE_cc0 && find_reg_note (trial, REG_CC_USER, NULL_RTX))
	  /* If TRIAL is from the fallthrough code of an annulled branch insn
	     in SEQ, we cannot use it.  */
	  || (INSN_ANNULLED_BRANCH_P (seq->insn (0))
@@ -1279,10 +1273,7 @@ steal_delay_list_from_fallthrough (rtx_insn *insn, rtx condition,
      if (insn_references_resource_p (trial, sets, false)
	  || insn_sets_resource_p (trial, needed, false)
	  || insn_sets_resource_p (trial, sets, false)
-#if HAVE_cc0
-	  || sets_cc0_p (PATTERN (trial))
-#endif
-	  )
+	  || (HAVE_cc0 && sets_cc0_p (PATTERN (trial))))
	break;
@@ -1613,9 +1604,7 @@ redundant_insn (rtx insn, rtx_insn *target, rtx delay_list)
    target_main = XVECEXP (PATTERN (target), 0, 0);
 
  if (resource_conflicts_p (&needed, &set)
-#if HAVE_cc0
-      || reg_mentioned_p (cc0_rtx, ipat)
-#endif
+      || (HAVE_cc0 && reg_mentioned_p (cc0_rtx, ipat))
      /* The insn requiring the delay may not set anything needed or set by
	 INSN.  */
      || insn_sets_resource_p (target_main, &needed, true)
@@ -2254,10 +2243,9 @@ fill_simple_delay_slots (int non_jumps_p)
	    {
	      next_trial = next_nonnote_insn (trial);
	      delay_list = add_to_delay_list (trial, delay_list);
-#if HAVE_cc0
-	      if (reg_mentioned_p (cc0_rtx, pat))
+	      if (HAVE_cc0 && reg_mentioned_p (cc0_rtx, pat))
		link_cc0_insns (trial);
-#endif
+
	      delete_related_insns (trial);
	      if (slots_to_fill == ++slots_filled)
		break;
@@ -2589,10 +2577,8 @@ fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx thread_or_return,
	    must_annul = 1;
	winner:
 
-#if HAVE_cc0
-	  if (reg_mentioned_p (cc0_rtx, pat))
+	  if (HAVE_cc0 && reg_mentioned_p (cc0_rtx, pat))
	    link_cc0_insns (trial);
-#endif
 
	  /* If we own this thread, delete the insn.  If this is the
	     destination of a branch, show that a basic block status
@@ -3145,8 +3131,7 @@ delete_computation (rtx insn)
 {
   rtx note, next;
 
-#if HAVE_cc0
-  if (reg_referenced_p (cc0_rtx, PATTERN (insn)))
+  if (HAVE_cc0 && reg_referenced_p (cc0_rtx, PATTERN (insn)))
    {
      rtx prev = prev_nonnote_insn (insn);
      /* We assume that at this stage
@@ -3166,7 +3151,6 @@ delete_computation (rtx insn)
	  add_reg_note (prev, REG_UNUSED, cc0_rtx);
	}
    }
-#endif
 
  for (note = REG_NOTES (insn); note; note = next)
    {

diff --git a/gcc/sched-rgn.c b/gcc/sched-rgn.c
index 33261fc..7efd4ad 100644
--- a/gcc/sched-rgn.c
+++ b/gcc/sched-rgn.c
@@ -2487,9 +2487,7 @@ add_branch_dependences (rtx_insn *head, rtx_insn *tail)
	      && (GET_CODE (PATTERN (insn)) == USE
		  || GET_CODE (PATTERN (insn)) == CLOBBER
		  || can_throw_internal (insn)
-#if HAVE_cc0
-		  || sets_cc0_p (PATTERN (insn))
-#endif
+		  || (HAVE_cc0 && sets_cc0_p (PATTERN (insn)))
		  || (!reload_completed
		      && sets_likely_spilled (PATTERN (insn)))))
	 || NOTE_P (insn)