From patchwork Sat Oct 5 06:36:34 2019
X-Patchwork-Submitter: Bernd Edlinger
X-Patchwork-Id: 1172131
From: Bernd Edlinger
To: "gcc-patches@gcc.gnu.org"
Subject: [PATCH, OBVIOUS] Fix -Wshadow=local warnings in gcc/[a-c]*.c
Date: Sat, 5 Oct 2019 06:36:34 +0000

Hi,

this fixes -Wshadow=local warnings in the following files:

M  gcc/asan.c
M  gcc/attribs.c
M  gcc/auto-inc-dec.c
M  gcc/bb-reorder.c
M  gcc/builtins.c
M  gcc/caller-save.c
M  gcc/calls.c
M  gcc/cfgbuild.c
M  gcc/cfg.c
M  gcc/cfgcleanup.c
M  gcc/cfgexpand.c
M  gcc/cfgloopmanip.c
M  gcc/cfgrtl.c
M  gcc/cgraphbuild.c
M  gcc/cgraph.c
M  gcc/cgraphclones.c
M  gcc/cgraphunit.c
M  gcc/combine.c
M  gcc/cprop.c
M  gcc/cse.c
M  gcc/cselib.c

I consider these changes obvious, but I will not commit them before Monday,
to give you all a chance to look at them and request changes if you like --
better variable names, for instance.

Bootstrapped and reg-tested on x86_64-pc-linux-gnu.

Thanks,
Bernd.


2019-10-05  Bernd Edlinger

	* asan.c (asan_emit_stack_protection, asan_expand_check_ifn):
	Remove shadowing local vars.
	* attribs.c (decl_attributes): Rename local var.
	* auto-inc-dec.c (find_inc): Remove shadowing local var.
	* bb-reorder.c (find_traces_1_round): Remove shadowing local vars.
	Avoid truncating the return value of bb_to_key from long to int.
	Rename local vars.
	(connect_traces): Rename local var.
	* builtins.c (compute_objsize): Rename local var.
	* caller-save.c (save_call_clobbered_regs): Remove shadowing
	local var.
	* calls.c (initialize_argument_information): Rename local var.
	(expand_call): Rename local var.  Remove shadowing local var.
	* cfgbuild.c (make_edges): Rename local var.
	* cfg.c (dump_edge_info): Rename local var.
	* cfgcleanup.c (mentions_nonequal_regs): Rename local var.
	(outgoing_edges_match): Remove shadowing local var.  Rename
	local var.
	(try_crossjump_to_edge): Rename local var.
	* cfgexpand.c (update_alias_info_with_stack_vars, expand_used_vars,
	pass_expand::execute): Remove shadowing local vars.
	* cfgloopmanip.c (duplicate_loop_to_header_edge): Remove shadowing
	local var.
	* cfgrtl.c (create_basic_block_structure): Rename parameter.
	(force_nonfallthru_and_redirect): Remove shadowing local var.
	(purge_dead_edges): Remove shadowing local vars.
	* cgraphbuild.c (pass_build_cgraph_edges::execute): Remove shadowing
	local var.
	* cgraph.c (cgraph_edge::redirect_call_stmt_to_callee,
	cgraph_node::remove, set_nothrow_flag_1, cgraph_node::verify_node):
	Remove shadowing local var.
	* cgraphclones.c (cgraph_node::set_call_stmt_including_clones,
	cgraph_node::create_edge_including_clones): Remove shadowing
	local var.
	* cgraphunit.c (analyze_functions): Rename local var.  Remove
	shadowing local var.
	(symbol_table::compile): Rename local var.
	* combine.c (combine_instructions, can_combine_p): Rename local vars.
	(update_cfg_for_uncondjump, try_combine): Remove shadowing local vars.
	(find_split_point): Rename local vars.
	(combine_simplify_rtx, make_compound_operation, if_then_else_cond,
	simplify_shift_const_1): Remove shadowing local vars.
	(simplify_comparison): Rename local var.  Remove shadowing local var.
	(get_last_value_validate): Remove shadowing local vars.
	(move_deaths): Rename local vars.
	* cprop.c (local_cprop_pass): Remove shadowing local var.
	* cse.c (approx_reg_cost, mention_regs, check_dependence,
	hash_rtx_cb, exp_equiv_p): Rename local var.
	(cse_insn): Remove shadowing local vars.  Rename local var.
	(cse_find_path): Remove shadowing local var.
	(cse_change_cc_mode): Rename local var.
	* cselib.c (cselib_hash_rtx): Rename local var.

Index: gcc/asan.c
===================================================================
--- gcc/asan.c	(revision 276598)
+++ gcc/asan.c	(working copy)
@@ -1395,7 +1395,7 @@ asan_emit_stack_protection (rtx base, rtx pbase, u
       pp_space (&asan_pp);
       for (l = length - 2; l; l -= 2)
 	{
-	  tree decl = decls[l / 2 - 1];
+	  decl = decls[l / 2 - 1];
 	  pp_wide_integer (&asan_pp, offsets[l] - base_offset);
 	  pp_space (&asan_pp);
 	  pp_wide_integer (&asan_pp, offsets[l - 1] - offsets[l]);
@@ -3297,8 +3297,8 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter,
   if (use_calls)
     {
       /* Instrument using callbacks.  */
-      gimple *g = gimple_build_assign (make_ssa_name (pointer_sized_int_node),
-				       NOP_EXPR, base);
+      g = gimple_build_assign (make_ssa_name (pointer_sized_int_node),
+			       NOP_EXPR, base);
       gimple_set_location (g, loc);
       gsi_insert_before (iter, g, GSI_SAME_STMT);
       tree base_addr = gimple_assign_lhs (g);
@@ -3433,10 +3433,10 @@ asan_expand_check_ifn (gimple_stmt_iterator *iter,
       gsi_insert_after (&gsi, g, GSI_NEW_STMT);
       tree base_end_addr = gimple_assign_lhs (g);

-      tree shadow = build_shadow_mem_access (&gsi, loc, base_end_addr,
-					     shadow_ptr_type);
-      gimple *shadow_test = build_assign (NE_EXPR, shadow, 0);
-      gimple_seq seq = NULL;
+      shadow = build_shadow_mem_access (&gsi, loc, base_end_addr,
+					shadow_ptr_type);
+      shadow_test = build_assign (NE_EXPR, shadow, 0);
+      seq = NULL;
       gimple_seq_add_stmt (&seq, shadow_test);
       gimple_seq_add_stmt (&seq, build_assign (BIT_AND_EXPR,
					       base_end_addr, 7));
Index: gcc/attribs.c
===================================================================
--- gcc/attribs.c	(revision 276598)
+++ gcc/attribs.c	(working copy)
@@ -605,8 +605,8 @@ decl_attributes (tree *node, tree attributes, int
		    | (int) ATTR_FLAG_ARRAY_NEXT))
	    {
	      /* Pass on this attribute to be tried again.  */
-	      tree attr = tree_cons (name, args, NULL_TREE);
-	      returned_attrs = chainon (returned_attrs, attr);
+	      tree attr1 = tree_cons (name, args, NULL_TREE);
+	      returned_attrs = chainon (returned_attrs, attr1);
	      continue;
	    }
	  else
@@ -651,8 +651,8 @@ decl_attributes (tree *node, tree attributes, int
	  else if (flags & (int) ATTR_FLAG_FUNCTION_NEXT)
	    {
	      /* Pass on this attribute to be tried again.  */
-	      tree attr = tree_cons (name, args, NULL_TREE);
-	      returned_attrs = chainon (returned_attrs, attr);
+	      tree attr1 = tree_cons (name, args, NULL_TREE);
+	      returned_attrs = chainon (returned_attrs, attr1);
	      continue;
	    }
Index: gcc/auto-inc-dec.c
===================================================================
--- gcc/auto-inc-dec.c	(revision 276598)
+++ gcc/auto-inc-dec.c	(working copy)
@@ -1120,9 +1120,9 @@ find_inc (bool first_try)
	{
	  /* Make sure that there is no insn that assigns to inc_insn.res
	     between the mem_insn and the inc_insn.  */
-	  rtx_insn *other_insn = get_next_ref (REGNO (inc_insn.reg_res),
-					       BLOCK_FOR_INSN (mem_insn.insn),
-					       reg_next_def);
+	  other_insn = get_next_ref (REGNO (inc_insn.reg_res),
+				     BLOCK_FOR_INSN (mem_insn.insn),
+				     reg_next_def);
	  if (other_insn != inc_insn.insn)
	    {
	      if (dump_file)
@@ -1211,7 +1211,6 @@ find_inc (bool first_try)
	 then we just abandon this.  */
      int luid = DF_INSN_LUID (inc_insn.insn);
-      rtx_insn *other_insn;

      /* Make sure this reg appears only once in this insn.  */
      if (count_occurrences (PATTERN (mem_insn.insn), mem_insn.reg1, 1) != 1)
Index: gcc/bb-reorder.c
===================================================================
--- gcc/bb-reorder.c	(revision 276598)
+++ gcc/bb-reorder.c	(working copy)
@@ -479,13 +479,13 @@ find_traces_1_round (int branch_th, profile_count
	      && push_to_next_round_p (bb, round, number_of_rounds, count_th))
	    {
-	      int key = bb_to_key (bb);
+	      key = bb_to_key (bb);
	      bbd[bb->index].heap = new_heap;
	      bbd[bb->index].node = new_heap->insert (key, bb);

	      if (dump_file)
		fprintf (dump_file,
-			 "  Possible start point of next round: %d (key: %d)\n",
+			 "  Possible start point of next round: %d (key: %ld)\n",
			 bb->index, key);
	      continue;
	    }
@@ -577,10 +577,10 @@ find_traces_1_round (int branch_th, profile_count
		  && copy_bb_p (best_edge->dest, 0))
		{
		  bool only_crossing_preds = true;
-		  edge e;
-		  edge_iterator ei;
-		  FOR_EACH_EDGE (e, ei, best_edge->dest->preds)
-		    if (e != best_edge && !(e->flags & EDGE_CROSSING))
+		  edge e2;
+		  edge_iterator ei2;
+		  FOR_EACH_EDGE (e2, ei2, best_edge->dest->preds)
+		    if (e2 != best_edge && !(e2->flags & EDGE_CROSSING))
		      {
			only_crossing_preds = false;
			break;
@@ -1251,7 +1251,7 @@ connect_traces (int n_traces, struct trace *traces
		      && !(e->flags & EDGE_COMPLEX)
		      && (!best || e->probability > best->probability))
		    {
-		      edge_iterator ei;
+		      edge_iterator ei2;
		      edge best2 = NULL;
		      int best2_len = 0;
@@ -1267,7 +1267,7 @@ connect_traces (int n_traces, struct trace *traces
			  continue;
			}

-		      FOR_EACH_EDGE (e2, ei, e->dest->succs)
+		      FOR_EACH_EDGE (e2, ei2, e->dest->succs)
			{
			  int di = e2->dest->index;
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c	(revision 276598)
+++ gcc/builtins.c	(working copy)
@@ -3582,13 +3582,13 @@ compute_objsize (tree dest, int ostype, tree *pdec
   if (!pdecl)
     pdecl = &dummy;

-  unsigned HOST_WIDE_INT size;
+  unsigned HOST_WIDE_INT hwisize;

   /* Only the two least significant bits are meaningful.  */
   ostype &= 3;

-  if (compute_builtin_object_size (dest, ostype, &size))
-    return build_int_cst (sizetype, size);
+  if (compute_builtin_object_size (dest, ostype, &hwisize))
+    return build_int_cst (sizetype, hwisize);

   if (TREE_CODE (dest) == SSA_NAME)
     {
Index: gcc/caller-save.c
===================================================================
--- gcc/caller-save.c	(revision 276598)
+++ gcc/caller-save.c	(working copy)
@@ -754,7 +754,6 @@ save_call_clobbered_regs (void)
	  if (n_regs_saved)
	    {
	      int regno;
-	      HARD_REG_SET this_insn_sets;

	      if (code == JUMP_INSN)
		/* Restore all registers if this is a JUMP_INSN.  */
Index: gcc/calls.c
===================================================================
--- gcc/calls.c	(revision 276598)
+++ gcc/calls.c	(working copy)
@@ -1969,10 +1969,10 @@ initialize_argument_information (int num_actuals A
  if (tree alloc_size = lookup_attribute ("alloc_size",
					  TYPE_ATTRIBUTES (fntype)))
    {
-      tree args = TREE_VALUE (alloc_size);
-      alloc_idx[0] = TREE_INT_CST_LOW (TREE_VALUE (args)) - 1;
-      if (TREE_CHAIN (args))
-	alloc_idx[1] = TREE_INT_CST_LOW (TREE_VALUE (TREE_CHAIN (args))) - 1;
+      tree args1 = TREE_VALUE (alloc_size);
+      alloc_idx[0] = TREE_INT_CST_LOW (TREE_VALUE (args1)) - 1;
+      if (TREE_CHAIN (args1))
+	alloc_idx[1] = TREE_INT_CST_LOW (TREE_VALUE (TREE_CHAIN (args1))) - 1;
    }

  /* Array for up to the two attribute alloc_size arguments.  */
@@ -3503,16 +3503,16 @@ expand_call (tree exp, rtx target, int ignore)
  preferred_stack_boundary = PREFERRED_STACK_BOUNDARY;
  if (fndecl)
    {
-      struct cgraph_rtl_info *i = cgraph_node::rtl_info (fndecl);
+      struct cgraph_rtl_info *j = cgraph_node::rtl_info (fndecl);
      /* Without automatic stack alignment, we can't increase preferred
	 stack boundary.  With automatic stack alignment, it is
	 unnecessary since unless we can guarantee that all callers will
	 align the outgoing stack properly, callee has to align its
	 stack anyway.  */
-      if (i
-	  && i->preferred_incoming_stack_boundary
-	  && i->preferred_incoming_stack_boundary < preferred_stack_boundary)
-	preferred_stack_boundary = i->preferred_incoming_stack_boundary;
+      if (j
+	  && j->preferred_incoming_stack_boundary
+	  && j->preferred_incoming_stack_boundary < preferred_stack_boundary)
+	preferred_stack_boundary = j->preferred_incoming_stack_boundary;
    }

  /* Operand 0 is a pointer-to-function; get the type of the function.  */
@@ -4337,7 +4337,7 @@ expand_call (tree exp, rtx target, int ignore)
      if (pass && (flags & ECF_MALLOC))
	{
	  rtx temp = gen_reg_rtx (GET_MODE (valreg));
-	  rtx_insn *last, *insns;
+	  rtx_insn *last;

	  /* The return value from a malloc-like function is a pointer.  */
	  if (TREE_CODE (rettype) == POINTER_TYPE)
Index: gcc/cfgbuild.c
===================================================================
--- gcc/cfgbuild.c	(revision 276598)
+++ gcc/cfgbuild.c	(working copy)
@@ -282,10 +282,10 @@ make_edges (basic_block min, basic_block max, int
	     everything on the forced_labels list.  */
	  else if (computed_jump_p (insn))
	    {
-	      rtx_insn *insn;
+	      rtx_insn *insn1;
	      unsigned int i;

-	      FOR_EACH_VEC_SAFE_ELT (forced_labels, i, insn)
-		make_label_edge (edge_cache, bb, insn, EDGE_ABNORMAL);
+	      FOR_EACH_VEC_SAFE_ELT (forced_labels, i, insn1)
+		make_label_edge (edge_cache, bb, insn1, EDGE_ABNORMAL);
	    }

	  /* Returns create an exit out.  */
Index: gcc/cfg.c
===================================================================
--- gcc/cfg.c	(revision 276598)
+++ gcc/cfg.c	(working copy)
@@ -524,14 +524,14 @@ dump_edge_info (FILE *file, edge e, dump_flags_t f
#undef DEF_EDGE_FLAG
	};
      bool comma = false;
-      int i, flags = e->flags;
+      int i, flag = e->flags;

-      gcc_assert (e->flags <= EDGE_ALL_FLAGS);
+      gcc_assert (flag <= EDGE_ALL_FLAGS);
      fputs (" (", file);
-      for (i = 0; flags; i++)
-	if (flags & (1 << i))
+      for (i = 0; flag; i++)
+	if (flag & (1 << i))
	  {
-	    flags &= ~(1 << i);
+	    flag &= ~(1 << i);

	    if (comma)
	      fputc (',', file);
Index: gcc/cfgcleanup.c
===================================================================
--- gcc/cfgcleanup.c	(revision 276598)
+++ gcc/cfgcleanup.c	(working copy)
@@ -231,11 +231,11 @@ mentions_nonequal_regs (const_rtx x, regset nonequ
  subrtx_iterator::array_type array;
  FOR_EACH_SUBRTX (iter, array, x, NONCONST)
    {
-      const_rtx x = *iter;
-      if (REG_P (x))
+      const_rtx y = *iter;
+      if (REG_P (y))
	{
-	  unsigned int end_regno = END_REGNO (x);
-	  for (unsigned int regno = REGNO (x); regno < end_regno; ++regno)
+	  unsigned int end_regno = END_REGNO (y);
+	  for (unsigned int regno = REGNO (y); regno < end_regno; ++regno)
	    if (REGNO_REG_SET_P (nonequal, regno))
	      return true;
	}
@@ -1896,14 +1896,13 @@ outgoing_edges_match (int mode, basic_block bb1, b
	 version of sequence abstraction.  */
      FOR_EACH_EDGE (e1, ei, bb2->succs)
	{
-	  edge e2;
-	  edge_iterator ei;
+	  edge_iterator ei2;
	  basic_block d1 = e1->dest;

	  if (FORWARDER_BLOCK_P (d1))
	    d1 = EDGE_SUCC (d1, 0)->dest;

-	  FOR_EACH_EDGE (e2, ei, bb1->succs)
+	  FOR_EACH_EDGE (e2, ei2, bb1->succs)
	    {
	      basic_block d2 = e2->dest;
	      if (FORWARDER_BLOCK_P (d2))
@@ -2098,13 +2097,13 @@ try_crossjump_to_edge (int mode, edge e1, edge e2,
  FOR_EACH_EDGE (s, ei, redirect_edges_to->succs)
    {
      edge s2;
-      edge_iterator ei;
+      edge_iterator ei2;
      basic_block d = s->dest;

      if (FORWARDER_BLOCK_P (d))
	d = single_succ (d);

-      FOR_EACH_EDGE (s2, ei, src1->succs)
+      FOR_EACH_EDGE (s2, ei2, src1->succs)
	{
	  basic_block d2 = s2->dest;
	  if (FORWARDER_BLOCK_P (d2))
Index: gcc/cfgexpand.c
===================================================================
--- gcc/cfgexpand.c	(revision 276598)
+++ gcc/cfgexpand.c	(working copy)
@@ -773,10 +773,9 @@ static void
update_alias_info_with_stack_vars (void)
{
  part_hashmap *decls_to_partitions = NULL;
-  size_t i, j;
  tree var = NULL_TREE;

-  for (i = 0; i < stack_vars_num; i++)
+  for (size_t i = 0; i < stack_vars_num; i++)
    {
      bitmap part = NULL;
      tree name;
@@ -803,7 +802,7 @@ update_alias_info_with_stack_vars (void)
      /* Create bitmaps representing partitions.  They will be used for
	 points-to sets later, so use GGC alloc.  */
      part = BITMAP_GGC_ALLOC ();
-      for (j = i; j != EOC; j = stack_vars[j].next)
+      for (size_t j = i; j != EOC; j = stack_vars[j].next)
	{
	  tree decl = stack_vars[j].decl;
	  unsigned int uid = DECL_PT_UID (decl);
@@ -2096,7 +2095,7 @@ expand_used_vars (void)
	  if (bitmap_bit_p (SA.partitions_for_parm_default_defs, i))
	    continue;

-	  tree var = partition_to_var (SA.map, i);
+	  var = partition_to_var (SA.map, i);

	  gcc_assert (!virtual_operand_p (var));
@@ -6604,7 +6603,7 @@ pass_expand::execute (function *fun)
     after parm birth, but before NOTE_INSNS_FUNCTION_BEG.  */
  if (single_succ_p (ENTRY_BLOCK_PTR_FOR_FN (fun)))
    {
-      edge e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (fun));
+      e = single_succ_edge (ENTRY_BLOCK_PTR_FOR_FN (fun));
      if (e->insns.r)
	{
	  rtx_insn *insns = e->insns.r;
@@ -6628,25 +6627,21 @@ pass_expand::execute (function *fun)

  FOR_BB_BETWEEN (bb, ENTRY_BLOCK_PTR_FOR_FN (fun)->next_bb,
		  EXIT_BLOCK_PTR_FOR_FN (fun), next_bb)
-    {
-      edge e;
-      edge_iterator ei;
-      for (ei = ei_start (bb->succs); (e = ei_safe_edge (ei)); )
-	{
-	  /* Clear EDGE_EXECUTABLE.  This flag is never used in the backend.  */
-	  e->flags &= ~EDGE_EXECUTABLE;
+    for (ei = ei_start (bb->succs); (e = ei_safe_edge (ei)); )
+      {
+	/* Clear EDGE_EXECUTABLE.  This flag is never used in the backend.  */
+	e->flags &= ~EDGE_EXECUTABLE;

-	  /* At the moment not all abnormal edges match the RTL
-	     representation.  It is safe to remove them here as
-	     find_many_sub_basic_blocks will rediscover them.
-	     In the future we should get this fixed properly.  */
-	  if ((e->flags & EDGE_ABNORMAL)
-	      && !(e->flags & EDGE_SIBCALL))
-	    remove_edge (e);
-	  else
-	    ei_next (&ei);
-	}
-    }
+	/* At the moment not all abnormal edges match the RTL
+	   representation.  It is safe to remove them here as
+	   find_many_sub_basic_blocks will rediscover them.
+	   In the future we should get this fixed properly.  */
+	if ((e->flags & EDGE_ABNORMAL)
+	    && !(e->flags & EDGE_SIBCALL))
+	  remove_edge (e);
+	else
+	  ei_next (&ei);
+      }

  auto_sbitmap blocks (last_basic_block_for_fn (fun));
  bitmap_ones (blocks);
Index: gcc/cfgloopmanip.c
===================================================================
--- gcc/cfgloopmanip.c	(revision 276598)
+++ gcc/cfgloopmanip.c	(working copy)
@@ -1415,7 +1415,6 @@ duplicate_loop_to_header_edge (class loop *loop, e
    {
      basic_block dominated, dom_bb;
      vec<basic_block> dom_bbs;
-      unsigned j;

      bb = bbs[i];
      bb->aux = 0;
Index: gcc/cfgrtl.c
===================================================================
--- gcc/cfgrtl.c	(revision 276598)
+++ gcc/cfgrtl.c	(working copy)
@@ -274,11 +274,11 @@ delete_insn_chain (rtx start, rtx_insn *finish, bo
   BASIC_BLOCK chain and should be used directly only by CFG construction code.
   END can be NULL in to create new empty basic block before HEAD.  Both END
   and HEAD can be NULL to create basic block at the end of INSN chain.
-   AFTER is the basic block we should be put after.  */
+   INSERT_AFTER is the basic block we should be put after.  */

basic_block
create_basic_block_structure (rtx_insn *head, rtx_insn *end, rtx_note *bb_note,
-			      basic_block after)
+			      basic_block insert_after)
{
  basic_block bb;
@@ -336,7 +336,7 @@ create_basic_block_structure (rtx_insn *head, rtx_
  BB_END (bb) = end;
  bb->index = last_basic_block_for_fn (cfun)++;
  bb->flags = BB_NEW | BB_RTL;
-  link_block (bb, after);
+  link_block (bb, insert_after);
  SET_BASIC_BLOCK_FOR_FN (cfun, bb->index, bb);
  df_bb_refs_record (bb->index, false);
  update_bb_for_insn (bb);
@@ -1509,7 +1509,6 @@ force_nonfallthru_and_redirect (edge e, basic_bloc
      && any_condjump_p (BB_END (e->src))
      && JUMP_LABEL (BB_END (e->src)) == BB_HEAD (e->dest))
    {
-      rtx note;
      edge b = unchecked_make_edge (e->src, target, 0);
      bool redirected;
@@ -1606,7 +1605,6 @@ force_nonfallthru_and_redirect (edge e, basic_bloc
  if (adjust_jump_target)
    {
      rtx_insn *insn = BB_END (e->src);
-      rtx note;
      rtx_insn *old_label = BB_HEAD (e->dest);
      rtx_insn *new_label = BB_HEAD (target);
@@ -3145,9 +3143,7 @@ purge_dead_edges (basic_block bb)

  if (JUMP_P (insn))
    {
-      rtx note;
      edge b,f;
-      edge_iterator ei;

      /* We do care only about conditional jumps and simplejumps.  */
      if (!any_condjump_p (insn)
Index: gcc/cgraphbuild.c
===================================================================
--- gcc/cgraphbuild.c	(revision 276598)
+++ gcc/cgraphbuild.c	(working copy)
@@ -310,7 +310,6 @@ pass_build_cgraph_edges::execute (function *fun)
      for (gsi = gsi_start_bb (bb); !gsi_end_p (gsi); gsi_next (&gsi))
	{
	  gimple *stmt = gsi_stmt (gsi);
-	  tree decl;

	  if (is_gimple_debug (stmt))
	    continue;
Index: gcc/cgraph.c
===================================================================
--- gcc/cgraph.c	(revision 276598)
+++ gcc/cgraph.c	(working copy)
@@ -1278,7 +1278,6 @@ cgraph_edge::redirect_call_stmt_to_callee (void)
  if (e->speculative)
    {
      cgraph_edge *e2;
-      gcall *new_stmt;
      ipa_ref *ref;

      e->speculative_call_info (e, e2, ref);
@@ -1749,7 +1748,7 @@ cgraph_node::remove (void)
    next_sibling_clone->prev_sibling_clone = prev_sibling_clone;
  if (clones)
    {
-      cgraph_node *n, *next;
+      cgraph_node *n;

      if (clone_of)
	{
@@ -2391,7 +2390,7 @@ set_nothrow_flag_1 (cgraph_node *node, bool nothro
      if (!nothrow || alias->get_availability () > AVAIL_INTERPOSABLE)
	set_nothrow_flag_1 (alias, nothrow, non_call, changed);
    }
-  for (cgraph_edge *e = node->callers; e; e = e->next_caller)
+  for (e = node->callers; e; e = e->next_caller)
    if (e->caller->thunk.thunk_p
	&& (!nothrow || e->caller->get_availability () > AVAIL_INTERPOSABLE))
      set_nothrow_flag_1 (e->caller, nothrow, non_call, changed);
@@ -3335,8 +3334,8 @@ cgraph_node::verify_node (void)
	  stmts.add (stmt);
	  if (is_gimple_call (stmt))
	    {
-	      cgraph_edge *e = get_edge (stmt);
	      tree decl = gimple_call_fndecl (stmt);
+	      e = get_edge (stmt);
	      if (e)
		{
		  if (e->aux)
Index: gcc/cgraphclones.c
===================================================================
--- gcc/cgraphclones.c	(revision 276598)
+++ gcc/cgraphclones.c	(working copy)
@@ -697,7 +697,7 @@ cgraph_node::set_call_stmt_including_clones (gimpl
  if (node)
    while (node != this)
      {
-	cgraph_edge *edge = node->get_edge (old_stmt);
+	edge = node->get_edge (old_stmt);
	if (edge)
	  {
	    edge->set_call_stmt (new_stmt, update_speculative);
@@ -758,7 +758,7 @@ cgraph_node::create_edge_including_clones (cgraph_
	/* Thunk clones do not get updated while copying inline
	   function body.  */
	if (!node->thunk.thunk_p)
	  {
-	    cgraph_edge *edge = node->get_edge (old_stmt);
+	    edge = node->get_edge (old_stmt);

	    /* It is possible that clones already contain the edge while
	       master didn't.  Either we promoted indirect call into direct
Index: gcc/cgraphunit.c
===================================================================
--- gcc/cgraphunit.c	(revision 276598)
+++ gcc/cgraphunit.c	(working copy)
@@ -1144,11 +1144,11 @@ analyze_functions (bool first_time)
	      if (opt_for_fn (cnode->decl, optimize)
		  && opt_for_fn (cnode->decl, flag_devirtualize))
		{
-		  cgraph_edge *next;
+		  cgraph_edge *next1;

-		  for (edge = cnode->indirect_calls; edge; edge = next)
+		  for (edge = cnode->indirect_calls; edge; edge = next1)
		    {
-		      next = edge->next_callee;
+		      next1 = edge->next_callee;
		      if (edge->indirect_info->polymorphic)
			walk_polymorphic_call_targets (&reachable_call_targets,
						       edge);
@@ -1182,7 +1182,6 @@ analyze_functions (bool first_time)

	  if (node->same_comdat_group)
	    {
-	      symtab_node *next;
	      for (next = node->same_comdat_group;
		   next != node;
		   next = next->same_comdat_group)
@@ -2706,15 +2705,15 @@ symbol_table::compile (void)
     function bodies have been released from memory.  */
  if (!seen_error ())
    {
-      cgraph_node *node;
+      cgraph_node *node1;
      bool error_found = false;

-      FOR_EACH_DEFINED_FUNCTION (node)
-	if (node->global.inlined_to
-	    || gimple_has_body_p (node->decl))
+      FOR_EACH_DEFINED_FUNCTION (node1)
+	if (node1->global.inlined_to
+	    || gimple_has_body_p (node1->decl))
	  {
	    error_found = true;
-	    node->debug ();
+	    node1->debug ();
	  }
      if (error_found)
	internal_error ("nodes with unreleased memory found");
Index: gcc/combine.c
===================================================================
--- gcc/combine.c	(revision 276598)
+++ gcc/combine.c	(working copy)
@@ -1219,7 +1219,7 @@ combine_instructions (rtx_insn *f, unsigned int nr
	FOR_BB_INSNS (this_basic_block, insn)
	  if (INSN_P (insn) && BLOCK_FOR_INSN (insn))
	    {
-	      rtx links;
+	      rtx link;

	      subst_low_luid = DF_INSN_LUID (insn);
	      subst_insn = insn;
@@ -1228,9 +1228,9 @@ combine_instructions (rtx_insn *f, unsigned int nr
	      record_dead_and_set_regs (insn);

	      if (AUTO_INC_DEC)
-		for (links = REG_NOTES (insn); links; links = XEXP (links, 1))
-		  if (REG_NOTE_KIND (links) == REG_INC)
-		    set_nonzero_bits_and_sign_copies (XEXP (links, 0), NULL_RTX,
+		for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
+		  if (REG_NOTE_KIND (link) == REG_INC)
+		    set_nonzero_bits_and_sign_copies (XEXP (link, 0), NULL_RTX,
						      insn);

	      /* Record the current insn_cost of this instruction.  */
@@ -1542,10 +1542,10 @@ retry:
  reg_stat.release ();

  {
-    struct undo *undo, *next;
-    for (undo = undobuf.frees; undo; undo = next)
+    struct undo *undo, *next1;
+    for (undo = undobuf.frees; undo; undo = next1)
      {
-	next = undo->next;
+	next1 = undo->next;
	free (undo);
      }
    undobuf.frees = 0;
@@ -1899,12 +1899,12 @@ can_combine_p (rtx_insn *insn, rtx_insn *i3, rtx_i
	      && GET_CODE (PATTERN (i3)) == PARALLEL)
	    {
	      rtx i3pat = PATTERN (i3);
-	      int i = XVECLEN (i3pat, 0) - 1;
+	      int j = XVECLEN (i3pat, 0) - 1;
	      unsigned int regno = REGNO (XEXP (elt, 0));

	      do
		{
-		  rtx i3elt = XVECEXP (i3pat, 0, i);
+		  rtx i3elt = XVECEXP (i3pat, 0, j);

		  if (GET_CODE (i3elt) == USE
		      && REG_P (XEXP (i3elt, 0))
@@ -1914,7 +1914,7 @@ can_combine_p (rtx_insn *insn, rtx_insn *i3, rtx_i
			  : regno >= FIRST_PSEUDO_REGISTER))
		    return 0;
		}
-	      while (--i >= 0);
+	      while (--j >= 0);
	    }
	  break;
@@ -2546,8 +2546,6 @@ update_cfg_for_uncondjump (rtx_insn *insn)
  delete_insn (insn);
  if (EDGE_COUNT (bb->succs) == 1)
    {
-      rtx_insn *insn;
-
      single_succ_edge (bb)->flags |= EDGE_FALLTHRU;

      /* Remove barriers from the footer if there are any.  */
@@ -2714,7 +2712,6 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
  int maxreg;
  rtx_insn *temp_insn;
  rtx temp_expr;
-  struct insn_link *link;
  rtx other_pat = 0;
  rtx new_other_notes;
  int i;
@@ -2730,7 +2727,6 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
     binary operations involving a constant.  */
  if (i0)
    {
-      int i;
      int ngood = 0;
      int nshift = 0;
      rtx set0, set3;
@@ -4504,11 +4500,15 @@ try_combine (rtx_insn *i3, rtx_insn *i2, rtx_insn
	       && (this_basic_block->next_bb == EXIT_BLOCK_PTR_FOR_FN (cfun)
		   || BB_HEAD (this_basic_block) != temp_insn);
	       temp_insn = NEXT_INSN (temp_insn))
-	    if (temp_insn != i3 && NONDEBUG_INSN_P (temp_insn))
-	      FOR_EACH_LOG_LINK (link, temp_insn)
-		if (link->insn == i2)
-		  link->insn = i3;
+	    {
+	      struct insn_link *link;
+	      if (temp_insn != i3 && NONDEBUG_INSN_P (temp_insn))
+		FOR_EACH_LOG_LINK (link, temp_insn)
+		  if (link->insn == i2)
+		    link->insn = i3;
+	    }
+
  if (i3notes)
    {
      rtx link = i3notes;
@@ -5128,24 +5128,24 @@ find_split_point (rtx *loc, rtx_insn *insn, bool s
		   <= GET_MODE_PRECISION (inner_mode))
	  && ! side_effects_p (XEXP (SET_DEST (x), 0)))
	{
-	  HOST_WIDE_INT pos = INTVAL (XEXP (SET_DEST (x), 2));
-	  unsigned HOST_WIDE_INT len = INTVAL (XEXP (SET_DEST (x), 1));
+	  HOST_WIDE_INT pos1 = INTVAL (XEXP (SET_DEST (x), 2));
+	  unsigned HOST_WIDE_INT len1 = INTVAL (XEXP (SET_DEST (x), 1));
	  unsigned HOST_WIDE_INT src = INTVAL (SET_SRC (x));
	  rtx dest = XEXP (SET_DEST (x), 0);
	  unsigned HOST_WIDE_INT mask
-	    = (HOST_WIDE_INT_1U << len) - 1;
+	    = (HOST_WIDE_INT_1U << len1) - 1;
	  rtx or_mask;

	  if (BITS_BIG_ENDIAN)
-	    pos = GET_MODE_PRECISION (inner_mode) - len - pos;
+	    pos1 = GET_MODE_PRECISION (inner_mode) - len1 - pos1;

-	  or_mask = gen_int_mode (src << pos, inner_mode);
+	  or_mask = gen_int_mode (src << pos1, inner_mode);
	  if (src == mask)
	    SUBST (SET_SRC (x),
		   simplify_gen_binary (IOR, inner_mode, dest, or_mask));
	  else
	    {
-	      rtx negmask = gen_int_mode (~(mask << pos), inner_mode);
+	      rtx negmask = gen_int_mode (~(mask << pos1), inner_mode);
	      SUBST (SET_SRC (x),
		     simplify_gen_binary (IOR, inner_mode,
					  simplify_gen_binary (AND, inner_mode,
@@ -5205,11 +5205,11 @@ find_split_point (rtx *loc, rtx_insn *insn, bool s
					  GET_MODE (XEXP (SET_SRC (x), 0)))))
	      >= 1))
	{
-	  machine_mode mode = GET_MODE (XEXP (SET_SRC (x), 0));
-	  rtx pos_rtx = gen_int_shift_amount (mode, pos);
+	  machine_mode mode1 = GET_MODE (XEXP (SET_SRC (x), 0));
+	  rtx pos_rtx = gen_int_shift_amount (mode1, pos);
	  SUBST (SET_SRC (x),
-		 gen_rtx_NEG (mode,
-			      gen_rtx_LSHIFTRT (mode,
+		 gen_rtx_NEG (mode1,
+			      gen_rtx_LSHIFTRT (mode1,
						XEXP (SET_SRC (x), 0),
						pos_rtx)));
@@ -5356,14 +5356,14 @@ find_split_point (rtx *loc, rtx_insn *insn, bool s
	  && GET_CODE (XEXP (XEXP (x, 1), 1)) == CONST_INT
	  && !pow2p_hwi (INTVAL (XEXP (XEXP (x, 1), 1))))
	{
-	  machine_mode mode = GET_MODE (x);
+	  machine_mode mode1 = GET_MODE (x);
	  unsigned HOST_WIDE_INT this_int = INTVAL (XEXP (XEXP (x, 1), 1));
-	  HOST_WIDE_INT other_int = trunc_int_for_mode (-this_int, mode);
-	  SUBST (*loc, gen_rtx_PLUS (mode,
-				     gen_rtx_MULT (mode,
+	  HOST_WIDE_INT other_int = trunc_int_for_mode (-this_int, mode1);
+	  SUBST (*loc, gen_rtx_PLUS (mode1,
+				     gen_rtx_MULT (mode1,
						   XEXP (XEXP (x, 1), 0),
						   gen_int_mode (other_int,
-								 mode)),
+								 mode1)),
				     XEXP (x, 0)));
	  return find_split_point (loc, insn, set_src);
	}
@@ -5984,7 +5984,6 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode
      if (GET_MODE_CLASS (GET_MODE (SUBREG_REG (x))) == MODE_CC)
	break;
      {
-	rtx temp;
	temp = simplify_subreg (mode, SUBREG_REG (x), op0_mode,
				SUBREG_BYTE (x));
	if (temp)
@@ -5991,7 +5990,7 @@ combine_simplify_rtx (rtx x, machine_mode op0_mode
	  return temp;

	/* If op is known to have all lower bits zero, the result is zero.  */
-	scalar_int_mode int_mode, int_op0_mode;
+	scalar_int_mode int_op0_mode;
	if (!in_dest
	    && is_a <scalar_int_mode> (mode, &int_mode)
	    && is_a <scalar_int_mode> (op0_mode, &int_op0_mode)
@@ -8506,8 +8505,7 @@ make_compound_operation (rtx x, enum rtx_code in_c
  scalar_int_mode mode;
  if (is_a <scalar_int_mode> (GET_MODE (x), &mode))
    {
-      rtx new_rtx = make_compound_operation_int (mode, &x, in_code,
-						 &next_code);
+      new_rtx = make_compound_operation_int (mode, &x, in_code, &next_code);
      if (new_rtx)
	return new_rtx;
      code = GET_CODE (x);
@@ -9391,8 +9389,8 @@ if_then_else_cond (rtx x, rtx *ptrue, rtx *pfalse)
	    || code == UMAX)
	   && GET_CODE (XEXP (x, 0)) == MULT
	   && GET_CODE (XEXP (x, 1)) == MULT)
    {
-      rtx op0 = XEXP (XEXP (x, 0), 1);
-      rtx op1 = XEXP (XEXP (x, 1), 1);
+      op0 = XEXP (XEXP (x, 0), 1);
+      op1 = XEXP (XEXP (x, 1), 1);

      cond0 = XEXP (XEXP (x, 0), 0);
      cond1 = XEXP (XEXP (x, 1), 0);
@@ -10999,9 +10997,9 @@ simplify_shift_const_1 (enum rtx_code code, machin
	    break;

	  rtx count_rtx = gen_int_shift_amount (int_result_mode, count);
-	  rtx new_rtx = simplify_const_binary_operation (code, int_mode,
-							 XEXP (varop, 0),
-							 count_rtx);
+	  new_rtx = simplify_const_binary_operation (code, int_mode,
						     XEXP (varop, 0),
						     count_rtx);
	  varop = gen_rtx_fmt_ee (code, int_mode, new_rtx, XEXP (varop, 1));
	  count = 0;
	  continue;
@@ -12103,8 +12101,8 @@ simplify_comparison (enum rtx_code code, rtx *pop0
	  && INTVAL (XEXP (op0, 1)) < HOST_BITS_PER_WIDE_INT
	  && XEXP (op0, 1) == XEXP (op1, 1))
	{
-	  machine_mode mode = GET_MODE (op0);
-	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode);
+	  machine_mode mode0 = GET_MODE (op0);
+	  unsigned HOST_WIDE_INT mask = GET_MODE_MASK (mode0);
	  int shift_count = INTVAL (XEXP (op0, 1));

	  if (GET_CODE (op0) == LSHIFTRT || GET_CODE (op0) == ASHIFTRT)
@@ -12112,8 +12110,8 @@ simplify_comparison (enum rtx_code code, rtx *pop0
	  else if (GET_CODE (op0) == ASHIFT)
	    mask = (mask & (mask << shift_count)) >> shift_count;

-	  if ((nonzero_bits (XEXP (op0, 0), mode) & ~mask) == 0
-	      && (nonzero_bits (XEXP (op1, 0), mode) & ~mask) == 0)
+	  if ((nonzero_bits (XEXP (op0, 0), mode0) & ~mask) == 0
+	      && (nonzero_bits (XEXP (op1, 0), mode0) & ~mask) == 0)
	    op0 = XEXP (op0, 0), op1 = XEXP (op1, 0);
	  else
	    break;
@@ -12285,7 +12283,7 @@ simplify_comparison (enum rtx_code code, rtx *pop0
      if (raw_mode == VOIDmode)
	break;

-      scalar_int_mode mode = as_a <scalar_int_mode> (raw_mode);
+      mode = as_a <scalar_int_mode> (raw_mode);

      /* Now try cases based on the opcode of OP0.  If none of the cases
	 does a "continue", we exit this loop immediately after the
@@ -13662,7 +13660,7 @@ get_last_value_validate (rtx *loc, rtx_insn *insn,
  rtx x = *loc;
  const char *fmt = GET_RTX_FORMAT (GET_CODE (x));
  int len = GET_RTX_LENGTH (GET_CODE (x));
-  int i, j;
+  int i;

  if (REG_P (x))
    {
@@ -13744,7 +13742,7 @@ get_last_value_validate (rtx *loc, rtx_insn *insn,
	    return 0;
	}
      else if (fmt[i] == 'E')
-	for (j = 0; j < XVECLEN (x, i); j++)
+	for (int j = 0; j < XVECLEN (x, i); j++)
	  if (get_last_value_validate (&XVECEXP (x, i, j),
				       insn, tick, replace) == 0)
	    return 0;
@@ -14092,11 +14090,11 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_
	      unsigned int deadregno = REGNO (XEXP (note, 0));
	      unsigned int deadend = END_REGNO (XEXP (note, 0));
	      unsigned int ourend = END_REGNO (x);
-	      unsigned int i;
+	      unsigned int j;

-	      for (i = deadregno; i < deadend; i++)
-		if (i < regno || i >= ourend)
-		  add_reg_note (where_dead, REG_DEAD, regno_reg_rtx[i]);
+	      for (j = deadregno; j < deadend; j++)
+		if (j < regno || j >= ourend)
+		  add_reg_note (where_dead, REG_DEAD, regno_reg_rtx[j]);
	    }

	  /* If we didn't find any note, or if we found a REG_DEAD note that
@@ -14112,7 +14110,7 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_
	      && REG_NREGS (x) > 1)
	    {
	      unsigned int ourend = END_REGNO (x);
-	      unsigned int i, offset;
+	      unsigned int j, offset;
	      rtx oldnotes = 0;

	      if (note)
@@ -14120,8 +14118,8 @@ move_deaths (rtx x, rtx maybe_kill_insn, int from_
	      else
		offset = 1;

-	      for (i = regno + offset; i < ourend; i++)
-		move_deaths (regno_reg_rtx[i],
+	      for (j = regno + offset; j < ourend; j++)
+		move_deaths (regno_reg_rtx[j],
maybe_kill_insn, from_luid, to_insn, &oldnotes); } Index: gcc/cprop.c =================================================================== --- gcc/cprop.c (revision 276598) +++ gcc/cprop.c (working copy) @@ -1301,7 +1301,7 @@ local_cprop_pass (void) while (!uncond_traps.is_empty ()) { - rtx_insn *insn = uncond_traps.pop (); + insn = uncond_traps.pop (); basic_block to_split = BLOCK_FOR_INSN (insn); remove_edge (split_block (to_split, insn)); emit_barrier_after_bb (to_split); Index: gcc/cse.c =================================================================== --- gcc/cse.c (revision 276598) +++ gcc/cse.c (working copy) @@ -660,15 +660,15 @@ approx_reg_cost (const_rtx x) subrtx_iterator::array_type array; FOR_EACH_SUBRTX (iter, array, x, NONCONST) { - const_rtx x = *iter; - if (REG_P (x)) + const_rtx y = *iter; + if (REG_P (y)) { - unsigned int regno = REGNO (x); + unsigned int regno = REGNO (y); if (!CHEAP_REGNO (regno)) { if (regno < FIRST_PSEUDO_REGISTER) { - if (targetm.small_register_classes_for_mode_p (GET_MODE (x))) + if (targetm.small_register_classes_for_mode_p (GET_MODE (y))) return MAX_COST; cost += 2; } @@ -1004,15 +1004,15 @@ mention_regs (rtx x) { unsigned int regno = REGNO (x); unsigned int endregno = END_REGNO (x); - unsigned int i; + unsigned int k; - for (i = regno; i < endregno; i++) + for (k = regno; k < endregno; k++) { - if (REG_IN_TABLE (i) >= 0 && REG_IN_TABLE (i) != REG_TICK (i)) - remove_invalid_refs (i); + if (REG_IN_TABLE (k) >= 0 && REG_IN_TABLE (k) != REG_TICK (k)) + remove_invalid_refs (k); - REG_IN_TABLE (i) = REG_TICK (i); - SUBREG_TICKED (i) = -1; + REG_IN_TABLE (k) = REG_TICK (k); + SUBREG_TICKED (k) = -1; } return 0; @@ -1024,24 +1024,24 @@ mention_regs (rtx x) if (code == SUBREG && REG_P (SUBREG_REG (x)) && REGNO (SUBREG_REG (x)) >= FIRST_PSEUDO_REGISTER) { - unsigned int i = REGNO (SUBREG_REG (x)); + unsigned int k = REGNO (SUBREG_REG (x)); - if (REG_IN_TABLE (i) >= 0 && REG_IN_TABLE (i) != REG_TICK (i)) + if (REG_IN_TABLE (k) >= 
0 && REG_IN_TABLE (k) != REG_TICK (k)) { - /* If REG_IN_TABLE (i) differs from REG_TICK (i) by one, and + /* If REG_IN_TABLE (k) differs from REG_TICK (k) by one, and the last store to this register really stored into this subreg, then remove the memory of this subreg. Otherwise, remove any memory of the entire register and all its subregs from the table. */ - if (REG_TICK (i) - REG_IN_TABLE (i) > 1 - || SUBREG_TICKED (i) != REGNO (SUBREG_REG (x))) - remove_invalid_refs (i); + if (REG_TICK (k) - REG_IN_TABLE (k) > 1 + || SUBREG_TICKED (k) != REGNO (SUBREG_REG (x))) + remove_invalid_refs (k); else - remove_invalid_subreg_refs (i, SUBREG_BYTE (x), GET_MODE (x)); + remove_invalid_subreg_refs (k, SUBREG_BYTE (x), GET_MODE (x)); } - REG_IN_TABLE (i) = REG_TICK (i); - SUBREG_TICKED (i) = REGNO (SUBREG_REG (x)); + REG_IN_TABLE (k) = REG_TICK (k); + SUBREG_TICKED (k) = REGNO (SUBREG_REG (x)); return 0; } @@ -1813,8 +1813,8 @@ check_dependence (const_rtx x, rtx exp, machine_mo subrtx_iterator::array_type array; FOR_EACH_SUBRTX (iter, array, x, NONCONST) { - const_rtx x = *iter; - if (MEM_P (x) && canon_anti_dependence (x, true, exp, mode, addr)) + const_rtx y = *iter; + if (MEM_P (y) && canon_anti_dependence (y, true, exp, mode, addr)) return true; } return false; @@ -2339,8 +2339,8 @@ hash_rtx_cb (const_rtx x, machine_mode mode, { inchash::hash h; h.add_int (hash); - for (unsigned int i = 0; i < NUM_POLY_INT_COEFFS; ++i) - h.add_wide_int (CONST_POLY_INT_COEFFS (x)[i]); + for (unsigned int k = 0; k < NUM_POLY_INT_COEFFS; ++k) + h.add_wide_int (CONST_POLY_INT_COEFFS (x)[k]); return h.end (); } @@ -2651,7 +2651,6 @@ exp_equiv_p (const_rtx x, const_rtx y, int validat else { unsigned int regno = REGNO (y); - unsigned int i; unsigned int endregno = END_REGNO (y); /* If the quantities are not the same, the expressions are not @@ -2664,8 +2663,8 @@ exp_equiv_p (const_rtx x, const_rtx y, int validat if (! 
validate) return 1; - for (i = regno; i < endregno; i++) - if (REG_IN_TABLE (i) != REG_TICK (i)) + for (unsigned int k = regno; k < endregno; k++) + if (REG_IN_TABLE (k) != REG_TICK (k)) return 0; return 1; @@ -5763,11 +5762,12 @@ cse_insn (rtx_insn *insn) { if (sets[i].rtl) { - rtx x = sets[i].inner_dest; struct table_elt *elt; machine_mode mode; unsigned hash; + x = sets[i].inner_dest; + if (MEM_P (x)) { x = XEXP (x, 0); @@ -5859,7 +5859,7 @@ cse_insn (rtx_insn *insn) { if (sets[i].rtl) { - rtx x = SET_DEST (sets[i].rtl); + x = SET_DEST (sets[i].rtl); if (!REG_P (x)) mention_regs (x); @@ -5879,14 +5879,14 @@ cse_insn (rtx_insn *insn) it leaves reg_in_table as -1 . */ unsigned int regno = REGNO (x); unsigned int endregno = END_REGNO (x); - unsigned int i; + unsigned int j; - for (i = regno; i < endregno; i++) + for (j = regno; j < endregno; j++) { - if (REG_IN_TABLE (i) >= 0) + if (REG_IN_TABLE (j) >= 0) { - remove_invalid_refs (i); - REG_IN_TABLE (i) = -1; + remove_invalid_refs (j); + REG_IN_TABLE (j) = -1; } } } @@ -6004,7 +6004,7 @@ cse_insn (rtx_insn *insn) && sets[i].src_elt != 0) { machine_mode new_mode = GET_MODE (SUBREG_REG (dest)); - struct table_elt *elt, *classp = 0; + struct table_elt *classp = 0; for (elt = sets[i].src_elt->first_same_value; elt; elt = elt->next_same_value) @@ -6364,7 +6364,6 @@ cse_find_path (basic_block first_bb, struct cse_ba while (path_size >= 2) { basic_block last_bb_in_path, previous_bb_in_path; - edge e; --path_size; last_bb_in_path = data->path[path_size].bb; @@ -7247,14 +7246,14 @@ cse_change_cc_mode (subrtx_ptr_iterator::array_typ { FOR_EACH_SUBRTX_PTR (iter, array, loc, NONCONST) { - rtx *loc = *iter; - rtx x = *loc; + rtx *loc1 = *iter; + rtx x = *loc1; if (x && REG_P (x) && REGNO (x) == REGNO (newreg) && GET_MODE (x) != GET_MODE (newreg)) { - validate_change (insn, loc, newreg, 1); + validate_change (insn, loc1, newreg, 1); iter.skip_subrtxes (); } } Index: gcc/cselib.c 
=================================================================== --- gcc/cselib.c (revision 276598) +++ gcc/cselib.c (working copy) @@ -1139,8 +1139,8 @@ cselib_hash_rtx (rtx x, int create, machine_mode m { inchash::hash h; h.add_int (hash); - for (unsigned int i = 0; i < NUM_POLY_INT_COEFFS; ++i) - h.add_wide_int (CONST_POLY_INT_COEFFS (x)[i]); + for (unsigned int k = 0; k < NUM_POLY_INT_COEFFS; ++k) + h.add_wide_int (CONST_POLY_INT_COEFFS (x)[k]); return h.end (); }