From: Steven Bosscher
Date: Tue, 2 Apr 2013 12:02:24 +0200
Subject: [patch] replace a bunch of equivalent checks for asm operands with a new function
To: GCC Patches
Cc: Eric Botcazou

Hello,

This idiom:

  "if (GET_CODE (body) == ASM_INPUT || asm_noperands (body) >= 0)"

appears in multiple places.  There is even one place where the idiom
above is used with the two checks in reverse order (which makes the
GET_CODE check redundant).  A few more places do the equivalent by
checking extract_asm_operands != NULL.

It made sense to me, at least, to replace those equivalent checks with
a new function: insn_with_asm_operands_p ().
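For illustration only (a sketch, not one of the hunks below; handle_asm
is a made-up stand-in for whatever a caller actually does), a typical
caller changes from the open-coded test to the new predicate like this:

  /* Before: the open-coded idiom.  */
  rtx body = PATTERN (insn);
  if (GET_CODE (body) == ASM_INPUT
      || asm_noperands (body) >= 0)
    handle_asm (insn);  /* hypothetical caller action */

  /* After: the new predicate added to rtlanal.c.  */
  if (insn_with_asm_operands_p (insn))
    handle_asm (insn);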
Bootstrapped&tested on ia64-unknown-linux-gnu and
powerpc64-unknown-linux-gnu.  OK for trunk?

Ciao!
Steven


	* rtl.h (insn_with_asm_operands_p): New prototype.
	* rtlanal.c (insn_with_asm_operands_p): New function.
	* cfgrtl.c (rtl_split_edge): Use it instead of extract_asm_operands.
	(fixup_reorder_chain): Likewise.
	* compare-elim.c (arithmetic_flags_clobber_p): Likewise.
	* dwarf2out.c (dwarf2out_var_location): Use it instead of
	open-coded equivalent checks.
	* final.c (get_attr_length_1, shorten_branches): Likewise.
	* haifa-sched.c (prune_ready_list, schedule_block): Likewise.
	* hw-doloop.c (scan_loop): Likewise.
	* reorg.c (stop_search_p): Likewise.
	(fill_slots_from_thread): Likewise.
	* sel-sched-ir.c (init_global_and_expr_for_insn): Likewise.
	* config/bfin/bfin.c (hwloop_optimize, workaround_rts_anomaly,
	workaround_speculation, add_sched_insns_for_speculation): Likewise.
	* config/c6x/c6x.c (c6x_sched_reorder_1): Likewise.
	* config/i386/i386.c (ix86_i387_mode_needed, min_insn_size): Likewise.
	* config/ia64/ia64.c (rtx_needs_barrier): Use it instead of
	extract_asm_operands.
	(ia64_dfa_sched_reorder): Use it instead of open-coded checks.
	* config/m68k/m68k.c (m68k_sched_variable_issue): Likewise.
	* config/mips/mips.c (mips_avoid_hazard): Likewise.
	* config/pa/pa.c (branch_to_delay_slot_p, branch_needs_nop_p,
	use_skip_p): Likewise.
	* config/spu/spu.c (get_branch_target): Use it instead of
	extract_asm_operands.

Index: rtl.h
===================================================================
--- rtl.h	(revision 197307)
+++ rtl.h	(working copy)
@@ -2058,6 +2058,7 @@ extern void remove_reg_equal_equiv_notes_for_regno
 extern int side_effects_p (const_rtx);
 extern int volatile_refs_p (const_rtx);
 extern int volatile_insn_p (const_rtx);
+extern bool insn_with_asm_operands_p (const_rtx);
 extern int may_trap_p_1 (const_rtx, unsigned);
 extern int may_trap_p (const_rtx);
 extern int may_trap_or_fault_p (const_rtx);
Index: rtlanal.c
===================================================================
--- rtlanal.c	(revision 197307)
+++ rtlanal.c	(working copy)
@@ -2273,6 +2273,18 @@ side_effects_p (const_rtx x)
     }
   return 0;
 }
+
+/* Return true if INSN has asm operands.  */
+
+bool
+insn_with_asm_operands_p (const_rtx insn)
+{
+  if (!INSN_P (insn))
+    return false;
+  return (GET_CODE (PATTERN (insn)) == ASM_INPUT
+          || asm_noperands (PATTERN (insn)) >= 0);
+}
+
 /* Return nonzero if evaluating rtx X might cause a trap.
    FLAGS controls how to consider MEMs.  A nonzero means the context
Index: cfgrtl.c
===================================================================
--- cfgrtl.c	(revision 197307)
+++ cfgrtl.c	(working copy)
@@ -1718,7 +1718,7 @@ rtl_split_edge (edge edge_in)
       if (last
           && JUMP_P (last)
           && edge_in->dest != EXIT_BLOCK_PTR
-          && extract_asm_operands (PATTERN (last)) != NULL_RTX
+          && insn_with_asm_operands_p (last)
           && patch_jump_insn (last, before, bb))
         df_set_bb_dirty (edge_in->src);
     }
@@ -3308,7 +3308,7 @@ fixup_reorder_chain (void)
                  continue;
                }
            }
-         else if (extract_asm_operands (PATTERN (bb_end_insn)) != NULL)
+         else if (insn_with_asm_operands_p (bb_end_insn))
            {
              /* If the old fallthru is still next or if
                 asm goto doesn't have a fallthru (e.g. when followed by
Index: compare-elim.c
===================================================================
--- compare-elim.c	(revision 197307)
+++ compare-elim.c	(working copy)
@@ -162,10 +162,10 @@ arithmetic_flags_clobber_p (rtx insn)
   if (!NONJUMP_INSN_P (insn))
     return false;
 
-  pat = PATTERN (insn);
-  if (extract_asm_operands (pat))
+  if (insn_with_asm_operands_p (insn))
     return false;
 
+  pat = PATTERN (insn);
   if (GET_CODE (pat) == PARALLEL && XVECLEN (pat, 0) == 2)
     {
       x = XVECEXP (pat, 0, 0);
Index: dwarf2out.c
===================================================================
--- dwarf2out.c	(revision 197307)
+++ dwarf2out.c	(working copy)
@@ -20798,8 +20798,7 @@ dwarf2out_var_location (rtx loc_note)
          if (GET_CODE (body) == USE || GET_CODE (body) == CLOBBER)
            continue;
          /* Inline asm could occupy zero bytes.  */
-         else if (GET_CODE (body) == ASM_INPUT
-                  || asm_noperands (body) >= 0)
+         else if (insn_with_asm_operands_p (insn))
            continue;
 #ifdef HAVE_attr_length
          else if (get_attr_min_length (insn) == 0)
Index: final.c
===================================================================
--- final.c	(revision 197307)
+++ final.c	(working copy)
@@ -400,7 +400,7 @@ get_attr_length_1 (rtx insn, int (*fallback_fn) (r
       if (GET_CODE (body) == USE || GET_CODE (body) == CLOBBER)
         return 0;
-      else if (GET_CODE (body) == ASM_INPUT || asm_noperands (body) >= 0)
+      else if (insn_with_asm_operands_p (insn))
         length = asm_insn_count (body) * fallback_fn (insn);
       else if (GET_CODE (body) == SEQUENCE)
         for (i = 0; i < XVECLEN (body, 0); i++)
@@ -1095,7 +1095,7 @@ shorten_branches (rtx first)
                               * GET_MODE_SIZE (GET_MODE (body)));
          /* Alignment is handled by ADDR_VEC_ALIGN.  */
        }
-      else if (GET_CODE (body) == ASM_INPUT || asm_noperands (body) >= 0)
+      else if (insn_with_asm_operands_p (insn))
        insn_lengths[uid] = asm_insn_count (body) * insn_default_length (insn);
      else if (GET_CODE (body) == SEQUENCE)
        {
@@ -1117,8 +1117,7 @@ shorten_branches (rtx first)
              int inner_uid = INSN_UID (inner_insn);
              int inner_length;
 
-             if (GET_CODE (body) == ASM_INPUT
-                 || asm_noperands (PATTERN (XVECEXP (body, 0, i))) >= 0)
+             if (insn_with_asm_operands_p (inner_insn))
                inner_length = (asm_insn_count (PATTERN (inner_insn))
                                * insn_default_length (inner_insn));
              else
Index: haifa-sched.c
===================================================================
--- haifa-sched.c	(revision 197307)
+++ haifa-sched.c	(working copy)
@@ -5705,9 +5705,8 @@ prune_ready_list (state_t temp_state, bool first_c
        }
      else if (recog_memoized (insn) < 0)
        {
-         if (!first_cycle_insn_p
-             && (GET_CODE (PATTERN (insn)) == ASM_INPUT
-                 || asm_noperands (PATTERN (insn)) >= 0))
+         if (!first_cycle_insn_p
+             && insn_with_asm_operands_p (insn))
            cost = 1;
          reason = "asm";
        }
@@ -6235,8 +6234,7 @@ schedule_block (basic_block *target_bb, state_t in
              asm_p = false;
            }
          else
-           asm_p = (GET_CODE (PATTERN (insn)) == ASM_INPUT
-                    || asm_noperands (PATTERN (insn)) >= 0);
+           asm_p = insn_with_asm_operands_p (insn);
 
          if (targetm.sched.variable_issue)
            ls.can_issue_more =
Index: hw-doloop.c
===================================================================
--- hw-doloop.c	(revision 197307)
+++ hw-doloop.c	(working copy)
@@ -125,9 +125,8 @@ scan_loop (hwloop_info loop)
       if (!NONDEBUG_INSN_P (insn))
        continue;
 
-      if (recog_memoized (insn) < 0
-         && (GET_CODE (PATTERN (insn)) == ASM_INPUT
-             || asm_noperands (PATTERN (insn)) >= 0))
+      if (recog_memoized (insn) < 0
+         && insn_with_asm_operands_p (insn))
        loop->has_asm = true;
 
      CLEAR_HARD_REG_SET (set_this_insn);
Index: reorg.c
===================================================================
--- reorg.c	(revision 197307)
+++ reorg.c	(working copy)
@@ -261,8 +261,7 @@ stop_search_p (rtx insn, int labels_p)
       /* OK unless it contains a delay slot or is an `asm' insn of
         some type.  We don't know anything about these.  */
       return (GET_CODE (PATTERN (insn)) == SEQUENCE
-             || GET_CODE (PATTERN (insn)) == ASM_INPUT
-             || asm_noperands (PATTERN (insn)) >= 0);
+             || insn_with_asm_operands_p (insn));
 
     default:
       gcc_unreachable ();
@@ -2714,8 +2713,7 @@ fill_slots_from_thread (rtx insn, rtx condition, r
       && new_thread && !ANY_RETURN_P (new_thread)
       && NONJUMP_INSN_P (new_thread)
       && !RTX_FRAME_RELATED_P (new_thread)
-      && GET_CODE (PATTERN (new_thread)) != ASM_INPUT
-      && asm_noperands (PATTERN (new_thread)) < 0)
+      && !insn_with_asm_operands_p (new_thread))
     {
       rtx pat = PATTERN (new_thread);
       rtx dest;
Index: sel-sched-ir.c
===================================================================
--- sel-sched-ir.c	(revision 197307)
+++ sel-sched-ir.c	(working copy)
@@ -2950,8 +2950,7 @@ init_global_and_expr_for_insn (insn_t insn)
      else
        init_global_data.prev_insn = NULL_RTX;
 
-      if (GET_CODE (PATTERN (insn)) == ASM_INPUT
-         || asm_noperands (PATTERN (insn)) >= 0)
+      if (insn_with_asm_operands_p (insn))
        /* Mark INSN as an asm.  */
        INSN_ASM_P (insn) = true;
Index: config/bfin/bfin.c
===================================================================
--- config/bfin/bfin.c	(revision 197307)
+++ config/bfin/bfin.c	(working copy)
@@ -3616,8 +3616,7 @@ hwloop_optimize (hwloop_info loop)
       || get_attr_type (last_insn) == TYPE_CALL
       || get_attr_seq_insns (last_insn) == SEQ_INSNS_MULTI
       || recog_memoized (last_insn) == CODE_FOR_return_internal
-      || GET_CODE (PATTERN (last_insn)) == ASM_INPUT
-      || asm_noperands (PATTERN (last_insn)) >= 0)
+      || insn_with_asm_operands_p (last_insn))
     {
       if (loop->length + 2 > MAX_LOOP_LENGTH)
        {
@@ -4091,8 +4090,7 @@ workaround_rts_anomaly (void)
        first_insn = insn;
       pat = PATTERN (insn);
       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER
-         || GET_CODE (pat) == ASM_INPUT
-         || asm_noperands (pat) >= 0)
+         || insn_with_asm_operands_p (insn))
        continue;
 
      if (CALL_P (insn))
@@ -4293,7 +4291,7 @@ workaround_speculation (void)
       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER)
        continue;
 
-      if (GET_CODE (pat) == ASM_INPUT || asm_noperands (pat) >= 0)
+      if (insn_with_asm_operands_p (insn))
        {
          np_check_regno = -1;
          continue;
@@ -4443,8 +4441,7 @@ workaround_speculation (void)
              pat = PATTERN (target);
              if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER
-                 || GET_CODE (pat) == ASM_INPUT
-                 || asm_noperands (pat) >= 0)
+                 || insn_with_asm_operands_p (target))
                continue;
 
              if (NONDEBUG_INSN_P (target))
@@ -4522,8 +4519,7 @@ add_sched_insns_for_speculation (void)
       pat = PATTERN (insn);
       if (GET_CODE (pat) == USE || GET_CODE (pat) == CLOBBER
-         || GET_CODE (pat) == ASM_INPUT
-         || asm_noperands (pat) >= 0)
+         || insn_with_asm_operands_p (insn))
        continue;
 
      if (JUMP_P (insn))
Index: config/c6x/c6x.c
===================================================================
--- config/c6x/c6x.c	(revision 197307)
+++ config/c6x/c6x.c	(working copy)
@@ -4140,9 +4140,7 @@ c6x_sched_reorder_1 (rtx *ready, int *pn_ready, in
     {
       rtx insn = *insnp;
       int icode = recog_memoized (insn);
-      bool is_asm = (icode < 0
-                    && (GET_CODE (PATTERN (insn)) == ASM_INPUT
-                        || asm_noperands (PATTERN (insn)) >= 0));
+      bool is_asm = (icode < 0 && insn_with_asm_operands_p (insn));
       bool no_parallel = (is_asm || icode == CODE_FOR_sploop
                          || (icode >= 0
                              && get_attr_type (insn) == TYPE_ATOMIC));
@@ -4201,9 +4199,7 @@ c6x_sched_reorder_1 (rtx *ready, int *pn_ready, in
     {
       rtx insn = *insnp;
       int icode = recog_memoized (insn);
-      bool is_asm = (icode < 0
-                    && (GET_CODE (PATTERN (insn)) == ASM_INPUT
-                        || asm_noperands (PATTERN (insn)) >= 0));
+      bool is_asm = (icode < 0 && insn_with_asm_operands_p (insn));
       int this_cycles, rsrv_cycles;
       enum attr_type type;
Index: config/i386/i386.c
===================================================================
--- config/i386/i386.c	(revision 197307)
+++ config/i386/i386.c	(working copy)
@@ -15335,9 +15335,7 @@ ix86_i387_mode_needed (int entity, rtx insn)
     bits we are interested in.  */
 
  if (CALL_P (insn)
-      || (NONJUMP_INSN_P (insn)
-         && (asm_noperands (PATTERN (insn)) >= 0
-             || GET_CODE (PATTERN (insn)) == ASM_INPUT)))
+      || (NONJUMP_INSN_P (insn) && insn_with_asm_operands_p (insn)))
    return I387_CW_UNINITIALIZED;
 
  if (recog_memoized (insn) < 0)
@@ -35136,8 +35134,7 @@ min_insn_size (rtx insn)
       switch (type)
        {
        case TYPE_MULTI:
-         if (GET_CODE (PATTERN (insn)) == ASM_INPUT
-             || asm_noperands (PATTERN (insn)) >= 0)
+         if (insn_with_asm_operands_p (insn))
            return 0;
          break;
        case TYPE_OTHER:
Index: config/ia64/ia64.c
===================================================================
--- config/ia64/ia64.c	(revision 197307)
+++ config/ia64/ia64.c	(working copy)
@@ -6520,7 +6520,7 @@ rtx_needs_barrier (rtx x, struct reg_flags flags,
        case CLOBBER:
          if (REG_P (XEXP (pat, 0))
-             && extract_asm_operands (x) != NULL_RTX
+             && asm_noperands (x) >= 0
              && REGNO (XEXP (pat, 0)) != AR_UNAT_REGNUM)
            {
              new_flags.is_write = 1;
@@ -7350,8 +7350,7 @@ ia64_dfa_sched_reorder (FILE *dump, int sched_verb
          enum attr_type t = ia64_safe_type (insn);
          if (t == TYPE_UNKNOWN)
            {
-             if (GET_CODE (PATTERN (insn)) == ASM_INPUT
-                 || asm_noperands (PATTERN (insn)) >= 0)
+             if (insn_with_asm_operands_p (insn))
                {
                  rtx lowest = ready[n_asms];
                  ready[n_asms] = insn;
Index: config/m68k/m68k.c
===================================================================
--- config/m68k/m68k.c	(revision 197307)
+++ config/m68k/m68k.c	(working copy)
@@ -6081,8 +6081,7 @@ m68k_sched_variable_issue (FILE *sched_dump ATTRIB
          --can_issue_more;
        }
-      else if (GET_CODE (PATTERN (insn)) == ASM_INPUT
-              || asm_noperands (PATTERN (insn)) >= 0)
+      else if (insn_with_asm_operands_p (insn))
        insn_size = sched_ib.filled;
      else
        insn_size = 0;
Index: config/mips/mips.c
===================================================================
--- config/mips/mips.c	(revision 197307)
+++ config/mips/mips.c	(working copy)
@@ -16006,7 +16006,7 @@ mips_avoid_hazard (rtx after, rtx insn, int *hilo_
  /* Do not put the whole function in .set noreorder if it contains
     an asm statement.  We don't know whether there will be hazards
     between the asm statement and the gcc-generated code.  */
-  if (GET_CODE (pattern) == ASM_INPUT || asm_noperands (pattern) >= 0)
+  if (insn_with_asm_operands_p (insn))
    cfun->machine->all_noreorder_p = false;
 
  /* Ignore zero-length instructions (barriers and the like).  */
Index: config/pa/pa.c
===================================================================
--- config/pa/pa.c	(revision 197307)
+++ config/pa/pa.c	(working copy)
@@ -6341,8 +6341,7 @@ branch_to_delay_slot_p (rtx insn)
       /* We can't rely on the length of asms.  So, we return FALSE
         when the branch is followed by an asm.  */
       if (!insn
-         || GET_CODE (PATTERN (insn)) == ASM_INPUT
-         || extract_asm_operands (PATTERN (insn)) != NULL_RTX
+         || insn_with_asm_operands_p (insn)
          || get_attr_length (insn) > 0)
        break;
     }
@@ -6372,8 +6371,7 @@ branch_needs_nop_p (rtx insn)
       if (!insn || jump_insn == insn)
        return TRUE;
 
-      if (!(GET_CODE (PATTERN (insn)) == ASM_INPUT
-           || extract_asm_operands (PATTERN (insn)) != NULL_RTX)
+      if (!insn_with_asm_operands_p (insn)
          && get_attr_length (insn) > 0)
        break;
     }
@@ -6395,9 +6393,7 @@ use_skip_p (rtx insn)
       insn = next_active_insn (insn);
 
      /* We can't rely on the length of asms, so we can't skip asms.  */
-      if (!insn
-         || GET_CODE (PATTERN (insn)) == ASM_INPUT
-         || extract_asm_operands (PATTERN (insn)) != NULL_RTX)
+      if (!insn || insn_with_asm_operands_p (insn))
        break;
      if (get_attr_length (insn) == 4
          && jump_insn == next_active_insn (insn))
Index: config/spu/spu.c
===================================================================
--- config/spu/spu.c	(revision 197307)
+++ config/spu/spu.c	(working copy)
@@ -2171,8 +2171,8 @@ get_branch_target (rtx branch)
       if (GET_CODE (PATTERN (branch)) == RETURN)
        return gen_rtx_REG (SImode, LINK_REGISTER_REGNUM);
 
-      /* ASM GOTOs.  */
-      if (extract_asm_operands (PATTERN (branch)) != NULL)
+      /* ASM GOTOs.  */
+      if (insn_with_asm_operands_p (branch))
        return NULL;
 
      set = single_set (branch);