From patchwork Mon Jun 28 10:42:51 2010
X-Patchwork-Submitter: Richard Biener
X-Patchwork-Id: 57129
Date: Mon, 28 Jun 2010 12:42:51 +0200 (CEST)
From: Richard Guenther
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] mem-ref2 merge, optimization passes (or rather, all the rest)

Most of the changes are mechanical; the exceptions are the memory
builtin folding (which also fixes PR42834) and the address propagation
in forwprop.

Richard.
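As a quick illustration (my example, not part of the patch itself): after
the mem-ref2 merge a pointer dereference at a constant offset is a single
MEM_REF in GIMPLE rather than an INDIRECT_REF of a separate pointer
arithmetic statement, which is why the passes below now match MEM_REF and
read the constant offset from its second operand.

    /* Illustration only, not from the patch; the dump syntax is
       approximate.  */
    int
    load_third (int *p)
    {
      /* trunk GIMPLE (roughly):    D.1 = *D.2;  with  D.2 = p + 8;
         mem-ref2 GIMPLE (roughly): D.1 = MEM[(int *)p + 8B];  */
      return p[2];
    }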
	* tree-vrp.c (check_array_ref): Handle MEM_REF.
	(search_for_addr_array): Likewise.
	(check_array_bounds): Likewise.
	(vrp_stmt_computes_nonzero): Adjust for MEM_REF.
	* tree-ssa-loop-im.c (for_each_index): Handle MEM_REF.
	(ref_always_accessed_p): Likewise.
	(gen_lsm_tmp_name): Likewise.  Handle ADDR_EXPR.
	* tree-complex.c (extract_component): Do not handle INDIRECT_REF.
	Handle MEM_REF.
	* cgraphbuild.c (mark_load): Properly check for NULL result
	from get_base_address.
	(mark_store): Likewise.
	* tree-ssa-loop-niter.c (array_at_struct_end_p): Handle MEM_REF.
	* tree-loop-distribution.c (generate_builtin): Exchange
	INDIRECT_REF handling for MEM_REF.
	* tree-scalar-evolution.c (follow_ssa_edge_expr): Handle
	&MEM[ptr + CST] similar to POINTER_PLUS_EXPR.
	* builtins.c (stabilize_va_list_loc): Use the function ABI
	valist type if we couldn't canonicalize the argument type.
	Always dereference with the canonical va-list type.
	(maybe_emit_free_warning): Handle MEM_REF.
	(fold_builtin_memory_op): Simplify and handle MEM_REFs in
	folding memmove to memcpy.

	PR middle-end/42834
	* builtins.c (fold_builtin_memory_op): Use ref-all types
	for all memcpy foldings.

	* omp-low.c (build_receiver_ref): Adjust for MEM_REF.
	(build_outer_var_ref): Likewise.
	(scan_omp_1_op): Likewise.
	(lower_rec_input_clauses): Likewise.
	(lower_lastprivate_clauses): Likewise.
	(lower_reduction_clauses): Likewise.
	(lower_copyprivate_clauses): Likewise.
	(expand_omp_atomic_pipeline): Likewise.
	(expand_omp_atomic_mutex): Likewise.
	(create_task_copyfn): Likewise.
	* tree-ssa-sccvn.c (copy_reference_ops_from_ref): Handle MEM_REF.
	Remove old union trick.  Initialize constant offsets.
	(ao_ref_init_from_vn_reference): Likewise.  Do not handle
	INDIRECT_REF.  Init base_alias_set properly.
	(vn_reference_lookup_3): Replace INDIRECT_REF handling with
	MEM_REF.
	(vn_reference_fold_indirect): Adjust for MEM_REFs.
	(valueize_refs): Fold MEM_REFs.  Re-evaluate constant offset
	for ARRAY_REFs.
	(may_insert): Remove.
	(visit_reference_op_load): Do not test may_insert.
	(run_scc_vn): Remove parameter, do not fiddle with may_insert.
	* tree-ssa-sccvn.h (struct vn_reference_op_struct): Add a field
	to store the constant offset this op applies.
	(run_scc_vn): Adjust prototype.
	* cgraphunit.c (thunk_adjust): Adjust for MEM_REF.
	* tree-ssa-ccp.c (ccp_fold): Replace INDIRECT_REF folding with
	MEM_REF.  Propagate &foo + CST as &MEM[&foo, CST].  Do not
	bother about volatile qualifiers on pointers.
	(fold_const_aggregate_ref): Handle MEM_REF, do not handle
	INDIRECT_REF.
	* tree-ssa-loop-ivopts.c (determine_base_object): Adjust
	for MEM_REF.
	(strip_offset_1): Likewise.
	(find_interesting_uses_address): Replace INDIRECT_REF handling
	with MEM_REF handling.
	(get_computation_cost_at): Likewise.
	* ipa-pure-const.c (check_op): Handle MEM_REF.
	* tree-stdarg.c (check_all_va_list_escapes): Adjust for MEM_REF.
	* tree-ssa-sink.c (is_hidden_global_store): Handle MEM_REF
	and constants.
	* ipa-inline.c (likely_eliminated_by_inlining_p): Handle MEM_REF.
	* tree-parloops.c (take_address_of): Adjust for MEM_REF.
	(eliminate_local_variables_1): Likewise.
	(create_call_for_reduction_1): Likewise.
	(create_loads_for_reductions): Likewise.
	(create_loads_and_stores_for_name): Likewise.
	* matrix-reorg.c (may_flatten_matrices_1): Sanitize.
	(ssa_accessed_in_tree): Handle MEM_REF.
	(ssa_accessed_in_assign_rhs): Likewise.
	(update_type_size): Likewise.
	(analyze_accesses_for_call_stmt): Likewise.
	(analyze_accesses_for_assign_stmt): Likewise.
	(transform_access_sites): Likewise.
	(transform_allocation_sites): Likewise.
	* tree-affine.c (tree_to_aff_combination): Handle MEM_REF.
	* tree-vect-data-refs.c (vect_create_addr_base_for_vector_ref):
	Do not handle INDIRECT_REF.
	* tree-ssa-phiopt.c (add_or_mark_expr): Handle MEM_REF.
	(cond_store_replacement): Likewise.
	* tree-ssa-pre.c (create_component_ref_by_pieces_1): Handle
	MEM_REF, do not handle INDIRECT_REFs.
	(insert_into_preds_of_block): Properly initialize avail.
	(phi_translate_1): Fold MEM_REFs.  Re-evaluate constant offset
	for ARRAY_REFs.  Properly handle reference lookups that
	require a bit re-interpretation.
	(can_PRE_operation): Do not handle INDIRECT_REF.  Handle MEM_REF.
	* tree-sra.c (build_access_from_expr_1): Handle MEM_REF.
	(build_ref_for_offset_1): Remove.
	(build_ref_for_offset): Build MEM_REFs.
	(gate_intra_sra): Disable for now.
	(sra_ipa_modify_expr): Handle MEM_REF.
	(ipa_early_sra_gate): Disable for now.
	* tree-sra.c (create_access): Swap INDIRECT_REF handling for
	MEM_REF handling.
	(disqualify_base_of_expr): Likewise.
	(ptr_parm_has_direct_uses): Swap INDIRECT_REF handling for
	MEM_REF handling.
	(sra_ipa_modify_expr): Remove INDIRECT_REF handling.
	Use mem_ref_offset.  Remove bogus folding.
	(build_access_from_expr_1): Properly handle MEM_REF for
	non IPA-SRA.
	(make_fancy_name_1): Add support for MEM_REF.
	* tree-predcom.c (ref_at_iteration): Handle MEM_REFs.
	* tree-mudflap.c (mf_xform_derefs_1): Adjust for MEM_REF.
	* ipa-prop.c (compute_complex_assign_jump_func): Handle MEM_REF.
	(compute_complex_ancestor_jump_func): Likewise.
	(ipa_analyze_virtual_call_uses): Likewise.
	* tree-ssa-forwprop.c (forward_propagate_addr_expr_1): Replace
	INDIRECT_REF folding with more generalized MEM_REF folding.
	(tree_ssa_forward_propagate_single_use_vars): Adjust accordingly.
	(forward_propagate_addr_into_variable_array_index): Also handle
	&ARRAY + I in addition to &ARRAY[0] + I.
	* tree-ssa-dce.c (ref_may_be_aliased): Handle MEM_REF.
	* tree-ssa-ter.c (find_replaceable_in_bb): Avoid TER if that
	creates assignments with overlap.
	* tree-nested.c (get_static_chain): Adjust for MEM_REF.
	(get_frame_field): Likewise.
	(get_nonlocal_debug_decl): Likewise.
	(convert_nonlocal_reference_op): Likewise.
	(struct nesting_info): Add mem_refs pointer-set.
	(create_nesting_tree): Allocate it.
	(convert_local_reference_op): Insert to-be-folded mem-refs.
	(fold_mem_refs): New function.
	(finalize_nesting_tree_1): Perform deferred folding of mem-refs.
	(free_nesting_tree): Free the pointer-set.
	* tree-vect-stmts.c (vectorizable_store): Adjust for MEM_REF.
	(vectorizable_load): Likewise.
	* tree-ssa-phiprop.c (phiprop_insert_phi): Adjust for MEM_REF.
	(propagate_with_phi): Likewise.
	* tree-object-size.c (addr_object_size): Handle MEM_REFs instead
	of INDIRECT_REFs.
	(compute_object_offset): Handle MEM_REF.
	(plus_stmt_object_size): Handle MEM_REF.
	(collect_object_sizes_for): Dispatch to plus_stmt_object_size
	for &MEM_REF.
	* tree-flow.h (get_addr_base_and_unit_offset): Declare.
	(symbol_marked_for_renaming): Likewise.
	* Makefile.in (tree-dfa.o): Add $(TOPLEV_H).
	(fold-const.o): Add $(TREE_FLOW_H).
	* tree-ssa-structalias.c (get_constraint_for_1): Handle MEM_REF.
	(find_func_clobbers): Likewise.
	* ipa-struct-reorg.c (decompose_indirect_ref_acc): Handle MEM_REF.
	(decompose_access): Likewise.
	(replace_field_acc): Likewise.
	(replace_field_access_stmt): Likewise.
	(insert_new_var_in_stmt): Likewise.
	(get_stmt_accesses): Likewise.
	(reorg_structs_drive): Disable.
	* config/i386/i386.c (ix86_va_start): Adjust for MEM_REF.
	(ix86_canonical_va_list_type): Likewise.

	cp/
	* cp-gimplify.c (cp_gimplify_expr): Open-code the rhs predicate
	we are looking for, allow non-gimplified INDIRECT_REFs.
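As a hint at what the PR middle-end/42834 part is about (a hypothetical
testcase of mine, not taken from the PR): when fold_builtin_memory_op turns
a memcpy between objects of unrelated types into a direct assignment, the
resulting access must not inherit either object's alias set, or TBAA could
reorder it against other accesses; hence the ref-all MEM_REFs.

    #include <string.h>

    /* Hypothetical example in the spirit of PR42834: the memcpy may be
       folded to a single load/store pair; building that access with a
       ref-all pointer type keeps it from being treated as a 'long'
       access that cannot alias the 'double'.  */
    long
    bits_of (double d)
    {
      long l;
      memcpy (&l, &d, sizeof l);
      return l;
    }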
Index: gcc/tree-vrp.c
===================================================================
--- gcc/tree-vrp.c	(.../trunk)	(revision 161367)
+++ gcc/tree-vrp.c	(.../branches/mem-ref2)	(revision 161369)
@@ -987,7 +987,7 @@ vrp_stmt_computes_nonzero (gimple stmt,
       tree base = get_base_address (TREE_OPERAND (expr, 0));

       if (base != NULL_TREE
-	  && TREE_CODE (base) == INDIRECT_REF
+	  && TREE_CODE (base) == MEM_REF
	  && TREE_CODE (TREE_OPERAND (base, 0)) == SSA_NAME)
	{
	  value_range_t *vr = get_value_range (TREE_OPERAND (base, 0));
@@ -5075,8 +5075,7 @@ check_array_ref (location_t location, tr
   /* Accesses to trailing arrays via pointers may access storage
      beyond the types array bounds.  */
   base = get_base_address (ref);
-  if (base
-      && INDIRECT_REF_P (base))
+  if (base && TREE_CODE (base) == MEM_REF)
     {
       tree cref, next = NULL_TREE;

@@ -5175,6 +5174,51 @@ search_for_addr_array (tree t, location_
       t = TREE_OPERAND (t, 0);
     }
   while (handled_component_p (t));
+
+  if (TREE_CODE (t) == MEM_REF
+      && TREE_CODE (TREE_OPERAND (t, 0)) == ADDR_EXPR
+      && !TREE_NO_WARNING (t))
+    {
+      tree tem = TREE_OPERAND (TREE_OPERAND (t, 0), 0);
+      tree low_bound, up_bound, el_sz;
+      double_int idx;
+      if (TREE_CODE (TREE_TYPE (tem)) != ARRAY_TYPE
+	  || TREE_CODE (TREE_TYPE (TREE_TYPE (tem))) == ARRAY_TYPE
+	  || !TYPE_DOMAIN (TREE_TYPE (tem)))
+	return;
+
+      low_bound = TYPE_MIN_VALUE (TYPE_DOMAIN (TREE_TYPE (tem)));
+      up_bound = TYPE_MAX_VALUE (TYPE_DOMAIN (TREE_TYPE (tem)));
+      el_sz = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (tem)));
+      if (!low_bound
+	  || TREE_CODE (low_bound) != INTEGER_CST
+	  || !up_bound
+	  || TREE_CODE (up_bound) != INTEGER_CST
+	  || !el_sz
+	  || TREE_CODE (el_sz) != INTEGER_CST)
+	return;
+
+      idx = mem_ref_offset (t);
+      idx = double_int_sdiv (idx, tree_to_double_int (el_sz), TRUNC_DIV_EXPR);
+      if (double_int_scmp (idx, double_int_zero) < 0)
+	{
+	  warning_at (location, OPT_Warray_bounds,
+		      "array subscript is below array bounds");
+	  TREE_NO_WARNING (t) = 1;
+	}
+      else if (double_int_scmp (idx,
+				double_int_add
+				  (double_int_add
+				    (tree_to_double_int (up_bound),
+				     double_int_neg
+				       (tree_to_double_int (low_bound))),
+				   double_int_one)) > 0)
+	{
+	  warning_at (location, OPT_Warray_bounds,
+		      "array subscript is above array bounds");
+	  TREE_NO_WARNING (t) = 1;
+	}
+    }
 }

 /* walk_tree() callback that checks if *TP is
@@ -5203,7 +5247,7 @@ check_array_bounds (tree *tp, int *walk_
   if (TREE_CODE (t) == ARRAY_REF)
     check_array_ref (location, t, false /*ignore_off_by_one*/);

-  if (TREE_CODE (t) == INDIRECT_REF
+  if (TREE_CODE (t) == MEM_REF
      || (TREE_CODE (t) == RETURN_EXPR && TREE_OPERAND (t, 0)))
    search_for_addr_array (TREE_OPERAND (t, 0), location);

Index: gcc/tree-ssa-loop-im.c
===================================================================
--- gcc/tree-ssa-loop-im.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-loop-im.c	(.../branches/mem-ref2)	(revision 161369)
@@ -274,7 +274,7 @@ for_each_index (tree *addr_p, bool (*cbc
    case MISALIGNED_INDIRECT_REF:
    case ALIGN_INDIRECT_REF:
-    case INDIRECT_REF:
+    case MEM_REF:
      nxt = &TREE_OPERAND (*addr_p, 0);
      return cbck (*addr_p, nxt, data);

@@ -1985,11 +1985,15 @@ gen_lsm_tmp_name (tree ref)
    {
    case MISALIGNED_INDIRECT_REF:
    case ALIGN_INDIRECT_REF:
-    case INDIRECT_REF:
+    case MEM_REF:
      gen_lsm_tmp_name (TREE_OPERAND (ref, 0));
      lsm_tmp_name_add ("_");
      break;

+    case ADDR_EXPR:
+      gen_lsm_tmp_name (TREE_OPERAND (ref, 0));
+      break;
+
    case BIT_FIELD_REF:
    case VIEW_CONVERT_EXPR:
    case ARRAY_RANGE_REF:
@@ -2150,7 +2154,8 @@ ref_always_accessed_p (struct loop *loop
  tree base;

  base = get_base_address (ref->mem);
-  if (INDIRECT_REF_P (base))
+  if (INDIRECT_REF_P (base)
+      || TREE_CODE (base) == MEM_REF)
    base = TREE_OPERAND (base, 0);

  get_all_locs_in_loop (loop, ref, &locs);
@@ -2169,7 +2174,8 @@ ref_always_accessed_p (struct loop *loop
      lhs = get_base_address (gimple_get_lhs (loc->stmt));
      if (!lhs)
	continue;
-      if (INDIRECT_REF_P (lhs))
+      if (INDIRECT_REF_P (lhs)
+	  || TREE_CODE (lhs) == MEM_REF)
	lhs = TREE_OPERAND (lhs, 0);
      if (lhs != base)
	continue;

Index: gcc/tree-complex.c
===================================================================
--- gcc/tree-complex.c	(.../trunk)	(revision 161367)
+++ gcc/tree-complex.c	(.../branches/mem-ref2)	(revision 161369)
@@ -596,10 +596,10 @@ extract_component (gimple_stmt_iterator
    case VAR_DECL:
    case RESULT_DECL:
    case PARM_DECL:
-    case INDIRECT_REF:
    case COMPONENT_REF:
    case ARRAY_REF:
    case VIEW_CONVERT_EXPR:
+    case MEM_REF:
      {
	tree inner_type = TREE_TYPE (TREE_TYPE (t));

Index: gcc/cgraphbuild.c
===================================================================
--- gcc/cgraphbuild.c	(.../trunk)	(revision 161367)
+++ gcc/cgraphbuild.c	(.../branches/mem-ref2)	(revision 161369)
@@ -275,7 +275,7 @@ mark_load (gimple stmt ATTRIBUTE_UNUSED,
	   void *data ATTRIBUTE_UNUSED)
 {
   t = get_base_address (t);
-  if (TREE_CODE (t) == VAR_DECL
+  if (t && TREE_CODE (t) == VAR_DECL
       && (TREE_STATIC (t) || DECL_EXTERNAL (t)))
     {
       struct varpool_node *vnode = varpool_node (t);
@@ -300,7 +300,7 @@ mark_store (gimple stmt ATTRIBUTE_UNUSED
	    void *data ATTRIBUTE_UNUSED)
 {
   t = get_base_address (t);
-  if (TREE_CODE (t) == VAR_DECL
+  if (t && TREE_CODE (t) == VAR_DECL
       && (TREE_STATIC (t) || DECL_EXTERNAL (t)))
     {
       struct varpool_node *vnode = varpool_node (t);

Index: gcc/tree-ssa-loop-niter.c
===================================================================
--- gcc/tree-ssa-loop-niter.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-loop-niter.c	(.../branches/mem-ref2)	(revision 161369)
@@ -2625,7 +2625,7 @@ array_at_struct_end_p (tree ref)
   /* Unless the reference is through a pointer, the size of the array matches
      its declaration.  */
-  if (!base || !INDIRECT_REF_P (base))
+  if (!base || (!INDIRECT_REF_P (base) && TREE_CODE (base) != MEM_REF))
     return false;

   for (;handled_component_p (ref); ref = parent)
@@ -2651,7 +2651,6 @@ array_at_struct_end_p (tree ref)
	 Therefore, continue checking.  */
     }

-  gcc_assert (INDIRECT_REF_P (ref));
   return true;
 }

Index: gcc/tree-loop-distribution.c
===================================================================
--- gcc/tree-loop-distribution.c	(.../trunk)	(revision 161367)
+++ gcc/tree-loop-distribution.c	(.../branches/mem-ref2)	(revision 161369)
@@ -395,7 +395,7 @@ generate_builtin (struct loop *loop, bit
   op1 = gimple_assign_rhs1 (write);

   if (!(TREE_CODE (op0) == ARRAY_REF
-	|| TREE_CODE (op0) == INDIRECT_REF))
+	|| TREE_CODE (op0) == MEM_REF))
     goto end;

   /* The new statements will be placed before LOOP.  */

Index: gcc/tree-scalar-evolution.c
===================================================================
--- gcc/tree-scalar-evolution.c	(.../trunk)	(revision 161367)
+++ gcc/tree-scalar-evolution.c	(.../branches/mem-ref2)	(revision 161369)
@@ -1170,6 +1170,24 @@ follow_ssa_edge_expr (struct loop *loop,
				halting_phi, evolution_of_loop, limit);
      break;

+    case ADDR_EXPR:
+      /* Handle &MEM[ptr + CST] which is equivalent to POINTER_PLUS_EXPR.  */
+      if (TREE_CODE (TREE_OPERAND (expr, 0)) == MEM_REF)
+	{
+	  expr = TREE_OPERAND (expr, 0);
+	  rhs0 = TREE_OPERAND (expr, 0);
+	  rhs1 = TREE_OPERAND (expr, 1);
+	  type = TREE_TYPE (rhs0);
+	  STRIP_USELESS_TYPE_CONVERSION (rhs0);
+	  STRIP_USELESS_TYPE_CONVERSION (rhs1);
+	  res = follow_ssa_edge_binary (loop, at_stmt, type,
+					rhs0, POINTER_PLUS_EXPR, rhs1,
+					halting_phi, evolution_of_loop, limit);
+	}
+      else
+	res = t_false;
+      break;
+
    case ASSERT_EXPR:
      /* This assignment is of the form: "a_1 = ASSERT_EXPR <a_2, ...>"
	 It must be handled as a copy assignment of the form a_1 = a_2.  */
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c	(.../trunk)	(revision 161367)
+++ gcc/builtins.c	(.../branches/mem-ref2)	(revision 161369)
@@ -4455,7 +4455,10 @@ stabilize_va_list_loc (location_t loc, t
 {
   tree vatype = targetm.canonical_va_list_type (TREE_TYPE (valist));

-  gcc_assert (vatype != NULL_TREE);
+  /* The current way of determining the type of valist is completely
+     bogus.  We should have the information on the va builtin instead.  */
+  if (!vatype)
+    vatype = targetm.fn_abi_va_list (cfun->decl);

   if (TREE_CODE (vatype) == ARRAY_TYPE)
     {
@@ -4474,21 +4477,21 @@ stabilize_va_list_loc (location_t loc, t
     }
   else
     {
-      tree pt;
+      tree pt = build_pointer_type (vatype);

       if (! needs_lvalue)
	{
	  if (! TREE_SIDE_EFFECTS (valist))
	    return valist;

-	  pt = build_pointer_type (vatype);
	  valist = fold_build1_loc (loc, ADDR_EXPR, pt, valist);
	  TREE_SIDE_EFFECTS (valist) = 1;
	}

       if (TREE_SIDE_EFFECTS (valist))
	valist = save_expr (valist);
-      valist = build_fold_indirect_ref_loc (loc, valist);
+      valist = fold_build2_loc (loc, MEM_REF,
+				vatype, valist, build_int_cst (pt, 0));
     }

   return valist;
@@ -8346,6 +8349,7 @@ fold_builtin_memory_op (location_t loc,
     {
       tree srctype, desttype;
       int src_align, dest_align;
+      tree off0;

       if (endp == 3)
	{
@@ -8371,37 +8375,26 @@ fold_builtin_memory_op (location_t loc,
	    }

	  /* If *src and *dest can't overlap, optimize into memcpy as well.  */
-	  srcvar = build_fold_indirect_ref_loc (loc, src);
-	  destvar = build_fold_indirect_ref_loc (loc, dest);
-	  if (srcvar
-	      && !TREE_THIS_VOLATILE (srcvar)
-	      && destvar
-	      && !TREE_THIS_VOLATILE (destvar))
+	  if (TREE_CODE (src) == ADDR_EXPR
+	      && TREE_CODE (dest) == ADDR_EXPR)
	    {
	      tree src_base, dest_base, fn;
	      HOST_WIDE_INT src_offset = 0, dest_offset = 0;
	      HOST_WIDE_INT size = -1;
	      HOST_WIDE_INT maxsize = -1;

-	      src_base = srcvar;
-	      if (handled_component_p (src_base))
-		src_base = get_ref_base_and_extent (src_base, &src_offset,
-						    &size, &maxsize);
-	      dest_base = destvar;
-	      if (handled_component_p (dest_base))
-		dest_base = get_ref_base_and_extent (dest_base, &dest_offset,
-						     &size, &maxsize);
+	      srcvar = TREE_OPERAND (src, 0);
+	      src_base = get_ref_base_and_extent (srcvar, &src_offset,
+						  &size, &maxsize);
+	      destvar = TREE_OPERAND (dest, 0);
+	      dest_base = get_ref_base_and_extent (destvar, &dest_offset,
						   &size, &maxsize);
	      if (host_integerp (len, 1))
-		{
-		  maxsize = tree_low_cst (len, 1);
-		  if (maxsize
-		      > INTTYPE_MAXIMUM (HOST_WIDE_INT) / BITS_PER_UNIT)
-		    maxsize = -1;
-		  else
-		    maxsize *= BITS_PER_UNIT;
-		}
+		maxsize = tree_low_cst (len, 1);
	      else
		maxsize = -1;
+	      src_offset /= BITS_PER_UNIT;
+	      dest_offset /= BITS_PER_UNIT;
	      if (SSA_VAR_P (src_base)
		  && SSA_VAR_P (dest_base))
		{
@@ -8410,13 +8403,25 @@ fold_builtin_memory_op (location_t loc,
				       dest_offset, maxsize))
		    return NULL_TREE;
		}
-	      else if (TREE_CODE (src_base) == INDIRECT_REF
-		       && TREE_CODE (dest_base) == INDIRECT_REF)
+	      else if (TREE_CODE (src_base) == MEM_REF
+		       && TREE_CODE (dest_base) == MEM_REF)
		{
+		  double_int off;
		  if (! operand_equal_p (TREE_OPERAND (src_base, 0),
-					 TREE_OPERAND (dest_base, 0), 0)
-		      || ranges_overlap_p (src_offset, maxsize,
-					   dest_offset, maxsize))
+					 TREE_OPERAND (dest_base, 0), 0))
+		    return NULL_TREE;
+		  off = double_int_add (mem_ref_offset (src_base),
+					shwi_to_double_int (src_offset));
+		  if (!double_int_fits_in_shwi_p (off))
+		    return NULL_TREE;
+		  src_offset = off.low;
+		  off = double_int_add (mem_ref_offset (dest_base),
+					shwi_to_double_int (dest_offset));
+		  if (!double_int_fits_in_shwi_p (off))
+		    return NULL_TREE;
+		  dest_offset = off.low;
+		  if (ranges_overlap_p (src_offset, maxsize,
+					dest_offset, maxsize))
		    return NULL_TREE;
		}
	      else
@@ -8472,12 +8477,12 @@ fold_builtin_memory_op (location_t loc,
	  dest = build1 (NOP_EXPR, build_pointer_type (desttype), dest);
	}
       if (!srctype || !desttype
+	  || TREE_ADDRESSABLE (srctype)
+	  || TREE_ADDRESSABLE (desttype)
	  || !TYPE_SIZE_UNIT (srctype)
	  || !TYPE_SIZE_UNIT (desttype)
	  || TREE_CODE (TYPE_SIZE_UNIT (srctype)) != INTEGER_CST
-	  || TREE_CODE (TYPE_SIZE_UNIT (desttype)) != INTEGER_CST
-	  || TYPE_VOLATILE (srctype)
-	  || TYPE_VOLATILE (desttype))
+	  || TREE_CODE (TYPE_SIZE_UNIT (desttype)) != INTEGER_CST)
	return NULL_TREE;

       src_align = get_pointer_alignment (src, BIGGEST_ALIGNMENT);
@@ -8489,97 +8494,44 @@ fold_builtin_memory_op (location_t loc,
       if (!ignore)
	dest = builtin_save_expr (dest);

-      srcvar = NULL_TREE;
-      if (tree_int_cst_equal (TYPE_SIZE_UNIT (srctype), len))
-	{
-	  srcvar = build_fold_indirect_ref_loc (loc, src);
-	  if (TREE_THIS_VOLATILE (srcvar))
-	    return NULL_TREE;
-	  else if (!tree_int_cst_equal (tree_expr_size (srcvar), len))
-	    srcvar = NULL_TREE;
-	  /* With memcpy, it is possible to bypass aliasing rules, so without
-	     this check i.e. execute/20060930-2.c would be misoptimized,
-	     because it use conflicting alias set to hold argument for the
-	     memcpy call.  This check is probably unnecessary with
-	     -fno-strict-aliasing.  Similarly for destvar.  See also
-	     PR29286.  */
-	  else if (!var_decl_component_p (srcvar))
-	    srcvar = NULL_TREE;
-	}
+      /* Build accesses at offset zero with a ref-all character type.  */
+      off0 = build_int_cst (build_pointer_type_for_mode (char_type_node,
+							 ptr_mode, true), 0);
+
+      destvar = dest;
+      STRIP_NOPS (destvar);
+      if (TREE_CODE (destvar) == ADDR_EXPR
+	  && var_decl_component_p (TREE_OPERAND (destvar, 0))
+	  && tree_int_cst_equal (TYPE_SIZE_UNIT (desttype), len))
+	destvar = fold_build2 (MEM_REF, desttype, destvar, off0);
+      else
+	destvar = NULL_TREE;

-      destvar = NULL_TREE;
-      if (tree_int_cst_equal (TYPE_SIZE_UNIT (desttype), len))
-	{
-	  destvar = build_fold_indirect_ref_loc (loc, dest);
-	  if (TREE_THIS_VOLATILE (destvar))
-	    return NULL_TREE;
-	  else if (!tree_int_cst_equal (tree_expr_size (destvar), len))
-	    destvar = NULL_TREE;
-	  else if (!var_decl_component_p (destvar))
-	    destvar = NULL_TREE;
-	}
+      srcvar = src;
+      STRIP_NOPS (srcvar);
+      if (TREE_CODE (srcvar) == ADDR_EXPR
+	  && var_decl_component_p (TREE_OPERAND (srcvar, 0))
+	  && tree_int_cst_equal (TYPE_SIZE_UNIT (srctype), len))
+	srcvar = fold_build2 (MEM_REF, destvar ? desttype : srctype,
+			      srcvar, off0);
+      else
+	srcvar = NULL_TREE;

       if (srcvar == NULL_TREE && destvar == NULL_TREE)
	return NULL_TREE;

       if (srcvar == NULL_TREE)
	{
-	  tree srcptype;
-	  if (TREE_ADDRESSABLE (TREE_TYPE (destvar)))
-	    return NULL_TREE;
-
-	  srctype = build_qualified_type (desttype, 0);
-	  if (src_align < (int) TYPE_ALIGN (srctype))
-	    {
-	      if (AGGREGATE_TYPE_P (srctype)
-		  || SLOW_UNALIGNED_ACCESS (TYPE_MODE (srctype), src_align))
-		return NULL_TREE;
-
-	      srctype = build_variant_type_copy (srctype);
-	      TYPE_ALIGN (srctype) = src_align;
-	      TYPE_USER_ALIGN (srctype) = 1;
-	      TYPE_PACKED (srctype) = 1;
-	    }
-	  srcptype = build_pointer_type_for_mode (srctype, ptr_mode, true);
-	  src = fold_convert_loc (loc, srcptype, src);
-	  srcvar = build_fold_indirect_ref_loc (loc, src);
+	  STRIP_NOPS (src);
+	  srcvar = fold_build2 (MEM_REF, desttype, src, off0);
	}
       else if (destvar == NULL_TREE)
	{
-	  tree destptype;
-	  if (TREE_ADDRESSABLE (TREE_TYPE (srcvar)))
-	    return NULL_TREE;
-
-	  desttype = build_qualified_type (srctype, 0);
-	  if (dest_align < (int) TYPE_ALIGN (desttype))
-	    {
-	      if (AGGREGATE_TYPE_P (desttype)
-		  || SLOW_UNALIGNED_ACCESS (TYPE_MODE (desttype), dest_align))
-		return NULL_TREE;
-
-	      desttype = build_variant_type_copy (desttype);
-	      TYPE_ALIGN (desttype) = dest_align;
-	      TYPE_USER_ALIGN (desttype) = 1;
-	      TYPE_PACKED (desttype) = 1;
-	    }
-	  destptype = build_pointer_type_for_mode (desttype, ptr_mode, true);
-	  dest = fold_convert_loc (loc, destptype, dest);
-	  destvar = build_fold_indirect_ref_loc (loc, dest);
+	  STRIP_NOPS (dest);
+	  destvar = fold_build2 (MEM_REF, srctype, dest, off0);
	}

-      if (srctype == desttype
-	  || (gimple_in_ssa_p (cfun)
-	      && useless_type_conversion_p (desttype, srctype)))
-	expr = srcvar;
-      else if ((INTEGRAL_TYPE_P (TREE_TYPE (srcvar))
-	   || POINTER_TYPE_P (TREE_TYPE (srcvar)))
-	  && (INTEGRAL_TYPE_P (TREE_TYPE (destvar))
-	      || POINTER_TYPE_P (TREE_TYPE (destvar))))
-	expr = fold_convert_loc (loc, TREE_TYPE (destvar), srcvar);
-      else
-	expr = fold_build1_loc (loc, VIEW_CONVERT_EXPR,
-				TREE_TYPE (destvar), srcvar);
-      expr = build2 (MODIFY_EXPR, TREE_TYPE (destvar), destvar, expr);
+      expr = build2 (MODIFY_EXPR, TREE_TYPE (destvar), destvar, srcvar);
     }

   if (ignore)
@@ -12068,7 +12020,7 @@ maybe_emit_free_warning (tree exp)
     return;

   arg = get_base_address (TREE_OPERAND (arg, 0));
-  if (arg == NULL || INDIRECT_REF_P (arg))
+  if (arg == NULL || INDIRECT_REF_P (arg) || TREE_CODE (arg) == MEM_REF)
     return;

   if (SSA_VAR_P (arg))

Index: gcc/omp-low.c
===================================================================
--- gcc/omp-low.c	(.../trunk)	(revision 161367)
+++ gcc/omp-low.c	(.../branches/mem-ref2)	(revision 161369)
@@ -864,10 +864,10 @@ build_receiver_ref (tree var, bool by_re
   if (x != NULL)
     field = x;

-  x = build_fold_indirect_ref (ctx->receiver_decl);
+  x = build_simple_mem_ref (ctx->receiver_decl);
   x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL);
   if (by_ref)
-    x = build_fold_indirect_ref (x);
+    x = build_simple_mem_ref (x);

   return x;
 }
@@ -887,7 +887,7 @@ build_outer_var_ref (tree var, omp_conte
     {
       x = TREE_OPERAND (DECL_VALUE_EXPR (var), 0);
       x = build_outer_var_ref (x, ctx);
-      x = build_fold_indirect_ref (x);
+      x = build_simple_mem_ref (x);
     }
   else if (is_taskreg_ctx (ctx))
     {
@@ -904,7 +904,7 @@ build_outer_var_ref (tree var, omp_conte
     gcc_unreachable ();

   if (is_reference (var))
-    x = build_fold_indirect_ref (x);
+    x = build_simple_mem_ref (x);

   return x;
 }
@@ -1916,7 +1916,18 @@ scan_omp_1_op (tree *tp, int *walk_subtr
	{
	  *walk_subtrees = 1;
	  if (ctx)
-	    TREE_TYPE (t) = remap_type (TREE_TYPE (t), &ctx->cb);
+	    {
+	      tree tem = remap_type (TREE_TYPE (t), &ctx->cb);
+	      if (tem != TREE_TYPE (t))
+		{
+		  if (TREE_CODE (t) == INTEGER_CST)
+		    *tp = build_int_cst_wide (tem,
+					      TREE_INT_CST_LOW (t),
+					      TREE_INT_CST_HIGH (t));
+		  else
+		    TREE_TYPE (t) = tem;
+		}
+	    }
	}
       break;
     }
@@ -2337,7 +2348,7 @@ lower_rec_input_clauses (tree clauses, g
		  x = fold_convert_loc (clause_loc, TREE_TYPE (new_var), x);
		  gimplify_assign (new_var, x, ilist);

-		  new_var = build_fold_indirect_ref_loc (clause_loc, new_var);
+		  new_var = build_simple_mem_ref_loc (clause_loc, new_var);
		}
	      else if (c_kind == OMP_CLAUSE_REDUCTION
		       && OMP_CLAUSE_REDUCTION_PLACEHOLDER (c))
@@ -2555,7 +2566,7 @@ lower_lastprivate_clauses (tree clauses,
	  x = build_outer_var_ref (var, ctx);

	  if (is_reference (var))
-	    new_var = build_fold_indirect_ref_loc (clause_loc, new_var);
+	    new_var = build_simple_mem_ref_loc (clause_loc, new_var);
	  x = lang_hooks.decls.omp_clause_assign_op (c, x, new_var);
	  gimplify_and_add (x, stmt_list);
	}
@@ -2622,7 +2633,7 @@ lower_reduction_clauses (tree clauses, g
       var = OMP_CLAUSE_DECL (c);
       new_var = lookup_decl (var, ctx);
       if (is_reference (var))
-	new_var = build_fold_indirect_ref_loc (clause_loc, new_var);
+	new_var = build_simple_mem_ref_loc (clause_loc, new_var);
       ref = build_outer_var_ref (var, ctx);
       code = OMP_CLAUSE_REDUCTION_CODE (c);
@@ -2714,8 +2725,8 @@ lower_copyprivate_clauses (tree clauses,
       if (is_reference (var))
	{
	  ref = fold_convert_loc (clause_loc, TREE_TYPE (new_var), ref);
-	  ref = build_fold_indirect_ref_loc (clause_loc, ref);
-	  new_var = build_fold_indirect_ref_loc (clause_loc, new_var);
+	  ref = build_simple_mem_ref_loc (clause_loc, ref);
+	  new_var = build_simple_mem_ref_loc (clause_loc, new_var);
	}
       x = lang_hooks.decls.omp_clause_assign_op (c, new_var, ref);
       gimplify_and_add (x, rlist);
@@ -5067,8 +5078,12 @@ expand_omp_atomic_pipeline (basic_block
       loadedi = loaded_val;
     }

-  initial = force_gimple_operand_gsi (&si, build_fold_indirect_ref (iaddr),
-				      true, NULL_TREE, true, GSI_SAME_STMT);
+  initial
+    = force_gimple_operand_gsi (&si,
+				build2 (MEM_REF, TREE_TYPE (TREE_TYPE (iaddr)),
+					iaddr,
+					build_int_cst (TREE_TYPE (iaddr), 0)),
+				true, NULL_TREE, true, GSI_SAME_STMT);

   /* Move the value to the LOADEDI temporary.  */
   if (gimple_in_ssa_p (cfun))
@@ -5212,15 +5227,15 @@ expand_omp_atomic_mutex (basic_block loa
   t = build_function_call_expr (UNKNOWN_LOCATION, t, 0);
   force_gimple_operand_gsi (&si, t, true, NULL_TREE, true, GSI_SAME_STMT);

-  stmt = gimple_build_assign (loaded_val, build_fold_indirect_ref (addr));
+  stmt = gimple_build_assign (loaded_val, build_simple_mem_ref (addr));
   gsi_insert_before (&si, stmt, GSI_SAME_STMT);
   gsi_remove (&si, true);

   si = gsi_last_bb (store_bb);
   gcc_assert (gimple_code (gsi_stmt (si)) == GIMPLE_OMP_ATOMIC_STORE);

-  stmt = gimple_build_assign (build_fold_indirect_ref (unshare_expr (addr)),
-			      stored_val);
+  stmt = gimple_build_assign (build_simple_mem_ref (unshare_expr (addr)),
+			      stored_val);
   gsi_insert_before (&si, stmt, GSI_SAME_STMT);

   t = built_in_decls[BUILT_IN_GOMP_ATOMIC_END];
@@ -6269,7 +6284,7 @@ create_task_copyfn (gimple task_stmt, om
	n = splay_tree_lookup (ctx->sfield_map, (splay_tree_key) decl);
	sf = (tree) n->value;
	sf = *(tree *) pointer_map_contains (tcctx.cb.decl_map, sf);
-	src = build_fold_indirect_ref_loc (loc, sarg);
+	src = build_simple_mem_ref_loc (loc, sarg);
	src = build3 (COMPONENT_REF, TREE_TYPE (sf), src, sf, NULL);
	t = build2 (MODIFY_EXPR, TREE_TYPE (*p), *p, src);
	append_to_statement_list (t, &list);
@@ -6292,9 +6307,9 @@ create_task_copyfn (gimple task_stmt, om
	sf = (tree) n->value;
	if (tcctx.cb.decl_map)
	  sf = *(tree *) pointer_map_contains (tcctx.cb.decl_map, sf);
-	src = build_fold_indirect_ref_loc (loc, sarg);
+	src = build_simple_mem_ref_loc (loc, sarg);
	src = build3 (COMPONENT_REF, TREE_TYPE (sf), src, sf, NULL);
-	dst = build_fold_indirect_ref_loc (loc, arg);
+	dst = build_simple_mem_ref_loc (loc, arg);
	dst = build3 (COMPONENT_REF, TREE_TYPE (f), dst, f, NULL);
	t = build2 (MODIFY_EXPR, TREE_TYPE (dst), dst, src);
	append_to_statement_list (t, &list);
@@ -6315,14 +6330,14 @@ create_task_copyfn (gimple task_stmt, om
	    sf = (tree) n->value;
	    if (tcctx.cb.decl_map)
	      sf = *(tree *) pointer_map_contains (tcctx.cb.decl_map, sf);
-	    src = build_fold_indirect_ref_loc (loc, sarg);
+	    src = build_simple_mem_ref_loc (loc, sarg);
	    src = build3 (COMPONENT_REF, TREE_TYPE (sf), src, sf, NULL);
	    if (use_pointer_for_field (decl, NULL) || is_reference (decl))
-	      src = build_fold_indirect_ref_loc (loc, src);
+	      src = build_simple_mem_ref_loc (loc, src);
	  }
	else
	  src = decl;
-	dst = build_fold_indirect_ref_loc (loc, arg);
+	dst = build_simple_mem_ref_loc (loc, arg);
	dst = build3 (COMPONENT_REF, TREE_TYPE (f), dst, f, NULL);
	t = lang_hooks.decls.omp_clause_copy_ctor (c, dst, src);
	append_to_statement_list (t, &list);
@@ -6341,14 +6356,14 @@ create_task_copyfn (gimple task_stmt, om
	    sf = (tree) n->value;
	    if (tcctx.cb.decl_map)
	      sf = *(tree *) pointer_map_contains (tcctx.cb.decl_map, sf);
-	    src = build_fold_indirect_ref_loc (loc, sarg);
+	    src = build_simple_mem_ref_loc (loc, sarg);
	    src = build3 (COMPONENT_REF, TREE_TYPE (sf), src, sf, NULL);
	    if (use_pointer_for_field (decl, NULL))
-	      src = build_fold_indirect_ref_loc (loc, src);
+	      src = build_simple_mem_ref_loc (loc, src);
	  }
	else
	  src = decl;
-	dst = build_fold_indirect_ref_loc (loc, arg);
+	dst = build_simple_mem_ref_loc (loc, arg);
	dst = build3 (COMPONENT_REF, TREE_TYPE (f), dst, f, NULL);
	t = build2 (MODIFY_EXPR, TREE_TYPE (dst), dst, src);
	append_to_statement_list (t, &list);
@@ -6380,10 +6395,10 @@ create_task_copyfn (gimple task_stmt, om
				  (splay_tree_key) TREE_OPERAND (ind, 0));
	  sf = (tree) n->value;
	  sf = *(tree *) pointer_map_contains (tcctx.cb.decl_map, sf);
-	  src = build_fold_indirect_ref_loc (loc, sarg);
+	  src = build_simple_mem_ref_loc (loc, sarg);
	  src = build3 (COMPONENT_REF, TREE_TYPE (sf), src, sf, NULL);
-	  src = build_fold_indirect_ref_loc (loc, src);
-	  dst = build_fold_indirect_ref_loc (loc, arg);
+	  src = build_simple_mem_ref_loc (loc, src);
+	  dst = build_simple_mem_ref_loc (loc, arg);
	  dst = build3 (COMPONENT_REF, TREE_TYPE (f), dst, f, NULL);
	  t = lang_hooks.decls.omp_clause_copy_ctor (c, dst, src);
	  append_to_statement_list (t, &list);
@@ -6391,7 +6406,7 @@ create_task_copyfn (gimple task_stmt, om
				  (splay_tree_key) TREE_OPERAND (ind, 0));
	  df = (tree) n->value;
	  df = *(tree *) pointer_map_contains (tcctx.cb.decl_map, df);
-	  ptr = build_fold_indirect_ref_loc (loc, arg);
+	  ptr = build_simple_mem_ref_loc (loc, arg);
	  ptr = build3 (COMPONENT_REF, TREE_TYPE (df), ptr, df, NULL);
	  t = build2 (MODIFY_EXPR, TREE_TYPE (ptr), ptr,
		      build_fold_addr_expr_loc (loc, dst));

Index: gcc/tree-ssa-sccvn.c
===================================================================
--- gcc/tree-ssa-sccvn.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-sccvn.c	(.../branches/mem-ref2)	(revision 161369)
@@ -156,8 +156,6 @@ static unsigned int next_value_id;
 static unsigned int next_dfs_num;
 static VEC (tree, heap) *sccstack;

-static bool may_insert;
-
 DEF_VEC_P(vn_ssa_aux_t);
 DEF_VEC_ALLOC_P(vn_ssa_aux_t, heap);
@@ -431,9 +429,41 @@ vn_reference_compute_hash (const vn_refe
   hashval_t result = 0;
   int i;
   vn_reference_op_t vro;
+  HOST_WIDE_INT off = -1;
+  bool deref = false;

   for (i = 0; VEC_iterate (vn_reference_op_s, vr1->operands, i, vro); i++)
-    result = vn_reference_op_compute_hash (vro, result);
+    {
+      if (vro->opcode == MEM_REF)
+	deref = true;
+      else if (vro->opcode != ADDR_EXPR)
+	deref = false;
+      if (vro->off != -1)
+	{
+	  if (off == -1)
+	    off = 0;
+	  off += vro->off;
+	}
+      else
+	{
+	  if (off != -1
+	      && off != 0)
+	    result = iterative_hash_hashval_t (off, result);
+	  off = -1;
+	  if (deref
+	      && vro->opcode == ADDR_EXPR)
+	    {
+	      if (vro->op0)
+		{
+		  tree op = TREE_OPERAND (vro->op0, 0);
+		  result = iterative_hash_hashval_t (TREE_CODE (op), result);
+		  result = iterative_hash_expr (op, result);
+		}
+	    }
+	  else
+	    result = vn_reference_op_compute_hash (vro, result);
+	}
+    }
   if (vr1->vuse)
     result += SSA_NAME_VERSION (vr1->vuse);

@@ -446,8 +476,7 @@ vn_reference_compute_hash (const vn_refe
 int
 vn_reference_eq (const void *p1, const void *p2)
 {
-  int i;
-  vn_reference_op_t vro;
+  unsigned i, j;

   const_vn_reference_t const vr1 = (const_vn_reference_t) p1;
   const_vn_reference_t const vr2 = (const_vn_reference_t) p2;
@@ -466,17 +495,58 @@ vn_reference_eq (const void *p1, const v
   if (vr1->operands == vr2->operands)
     return true;

-  /* We require that address operands be canonicalized in a way that
-     two memory references will have the same operands if they are
-     equivalent.  */
-  if (VEC_length (vn_reference_op_s, vr1->operands)
-      != VEC_length (vn_reference_op_s, vr2->operands))
+  if (!expressions_equal_p (TYPE_SIZE (vr1->type), TYPE_SIZE (vr2->type)))
     return false;

-  for (i = 0; VEC_iterate (vn_reference_op_s, vr1->operands, i, vro); i++)
-    if (!vn_reference_op_eq (VEC_index (vn_reference_op_s, vr2->operands, i),
-			     vro))
-      return false;
+  i = 0;
+  j = 0;
+  do
+    {
+      HOST_WIDE_INT off1 = 0, off2 = 0;
+      vn_reference_op_t vro1, vro2;
+      vn_reference_op_s tem1, tem2;
+      bool deref1 = false, deref2 = false;
+      for (; VEC_iterate (vn_reference_op_s, vr1->operands, i, vro1); i++)
+	{
+	  if (vro1->opcode == MEM_REF)
+	    deref1 = true;
+	  if (vro1->off == -1)
+	    break;
+	  off1 += vro1->off;
+	}
+      for (; VEC_iterate (vn_reference_op_s, vr2->operands, j, vro2); j++)
+	{
+	  if (vro2->opcode == MEM_REF)
+	    deref2 = true;
+	  if (vro2->off == -1)
+	    break;
+	  off2 += vro2->off;
+	}
+      if (off1 != off2)
+	return false;
+      if (deref1 && vro1->opcode == ADDR_EXPR)
+	{
+	  memset (&tem1, 0, sizeof (tem1));
+	  tem1.op0 = TREE_OPERAND (vro1->op0, 0);
+	  tem1.type = TREE_TYPE (tem1.op0);
+	  tem1.opcode = TREE_CODE (tem1.op0);
+	  vro1 = &tem1;
+	}
+      if (deref2 && vro2->opcode == ADDR_EXPR)
+	{
+	  memset (&tem2, 0, sizeof (tem2));
+	  tem2.op0 = TREE_OPERAND (vro2->op0, 0);
+	  tem2.type = TREE_TYPE (tem2.op0);
+	  tem2.opcode = TREE_CODE (tem2.op0);
+	  vro2 = &tem2;
+	}
+      if (!vn_reference_op_eq (vro1, vro2))
+	return false;
+      ++j;
+      ++i;
+    }
+  while (VEC_length (vn_reference_op_s, vr1->operands) != i
+	 || VEC_length (vn_reference_op_s, vr2->operands) != j);

   return true;
 }
@@ -503,6 +573,7 @@ copy_reference_ops_from_ref (tree ref, V
       temp.op0 = TMR_INDEX (ref);
       temp.op1 = TMR_STEP (ref);
       temp.op2 = TMR_OFFSET (ref);
+      temp.off = -1;
       VEC_safe_push (vn_reference_op_s, heap, *result, &temp);

       memset (&temp, 0, sizeof (temp));
@@ -510,6 +581,7 @@ copy_reference_ops_from_ref (tree ref, V
       temp.opcode = TREE_CODE (base);
       temp.op0 = base;
       temp.op1 = TMR_ORIGINAL (ref);
+      temp.off = -1;
       VEC_safe_push (vn_reference_op_s, heap, *result, &temp);
       return;
     }
@@ -524,17 +596,23 @@ copy_reference_ops_from_ref (tree ref, V
       /* We do not care for spurious type qualifications.  */
       temp.type = TYPE_MAIN_VARIANT (TREE_TYPE (ref));
       temp.opcode = TREE_CODE (ref);
+      temp.off = -1;

       switch (temp.opcode)
	{
	case ALIGN_INDIRECT_REF:
-	case INDIRECT_REF:
	  /* The only operand is the address, which gets its own
	     vn_reference_op_s structure.  */
	  break;
	case MISALIGNED_INDIRECT_REF:
	  temp.op0 = TREE_OPERAND (ref, 1);
	  break;
+	case MEM_REF:
+	  /* The base address gets its own vn_reference_op_s structure.  */
+	  temp.op0 = TREE_OPERAND (ref, 1);
+	  if (host_integerp (TREE_OPERAND (ref, 1), 0))
+	    temp.off = TREE_INT_CST_LOW (TREE_OPERAND (ref, 1));
+	  break;
	case BIT_FIELD_REF:
	  /* Record bits and position.  */
	  temp.op0 = TREE_OPERAND (ref, 1);
@@ -547,17 +625,25 @@ copy_reference_ops_from_ref (tree ref, V
	  temp.type = NULL_TREE;
	  temp.op0 = TREE_OPERAND (ref, 1);
	  temp.op1 = TREE_OPERAND (ref, 2);
-	  /* If this is a reference to a union member, record the union
-	     member size as operand.  Do so only if we are doing
-	     expression insertion (during FRE), as PRE currently gets
-	     confused with this.  */
-	  if (may_insert
-	      && temp.op1 == NULL_TREE
-	      && TREE_CODE (DECL_CONTEXT (temp.op0)) == UNION_TYPE
-	      && integer_zerop (DECL_FIELD_OFFSET (temp.op0))
-	      && integer_zerop (DECL_FIELD_BIT_OFFSET (temp.op0))
-	      && host_integerp (DECL_SIZE (temp.op0), 0))
-	    temp.op0 = DECL_SIZE (temp.op0);
+	  {
+	    tree this_offset = component_ref_field_offset (ref);
+	    if (this_offset
+		&& TREE_CODE (this_offset) == INTEGER_CST)
+	      {
+		tree bit_offset = DECL_FIELD_BIT_OFFSET (TREE_OPERAND (ref, 1));
+		if (TREE_INT_CST_LOW (bit_offset) % BITS_PER_UNIT == 0)
+		  {
+		    double_int off
+		      = double_int_add (tree_to_double_int (this_offset),
+					double_int_sdiv
+					  (tree_to_double_int (bit_offset),
+					   uhwi_to_double_int (BITS_PER_UNIT),
+					   TRUNC_DIV_EXPR));
+		    if (double_int_fits_in_shwi_p (off))
+		      temp.off = off.low;
+		  }
+	      }
+	  }
	  break;
	case ARRAY_RANGE_REF:
	case ARRAY_REF:
@@ -566,6 +652,18 @@ copy_reference_ops_from_ref (tree ref, V
	  /* Always record lower bounds and element size.  */
	  temp.op1 = array_ref_low_bound (ref);
	  temp.op2 = array_ref_element_size (ref);
+	  if (TREE_CODE (temp.op0) == INTEGER_CST
+	      && TREE_CODE (temp.op1) == INTEGER_CST
+	      && TREE_CODE (temp.op2) == INTEGER_CST)
+	    {
+	      double_int off = tree_to_double_int (temp.op0);
+	      off = double_int_add (off,
+				    double_int_neg
+				      (tree_to_double_int (temp.op1)));
+	      off = double_int_mul (off, tree_to_double_int (temp.op2));
+	      if (double_int_fits_in_shwi_p (off))
+		temp.off = off.low;
+	    }
	  break;
	case STRING_CST:
	case INTEGER_CST:
@@ -592,9 +690,13 @@ copy_reference_ops_from_ref (tree ref, V
	     ref in the chain of references (IE they require an
	     operand), so we don't have to put anything
	     for op* as it will be handled by the iteration  */
-	case IMAGPART_EXPR:
	case REALPART_EXPR:
	case VIEW_CONVERT_EXPR:
+	  temp.off = 0;
+	  break;
+	case IMAGPART_EXPR:
+	  /* This is only interesting for its constant offset.  */
+	  temp.off = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (TREE_TYPE (ref)));
	  break;
	default:
	  gcc_unreachable ();
@@ -627,16 +729,12 @@ ao_ref_init_from_vn_reference (ao_ref *r
   HOST_WIDE_INT max_size;
   HOST_WIDE_INT size = -1;
   tree size_tree = NULL_TREE;
+  alias_set_type base_alias_set = -1;

   /* First get the final access size from just the outermost expression.  */
   op = VEC_index (vn_reference_op_s, ops, 0);
   if (op->opcode == COMPONENT_REF)
-    {
-      if (TREE_CODE (op->op0) == INTEGER_CST)
-	size_tree = op->op0;
-      else
-	size_tree = DECL_SIZE (op->op0);
-    }
+    size_tree = DECL_SIZE (op->op0);
   else if (op->opcode == BIT_FIELD_REF)
     size_tree = op->op0;
   else
@@ -667,13 +765,31 @@ ao_ref_init_from_vn_reference (ao_ref *r
	{
	  /* These may be in the reference ops, but we cannot do anything
	     sensible with them here.  */
-	case CALL_EXPR:
	case ADDR_EXPR:
+	  /* Apart from ADDR_EXPR arguments to MEM_REF.  */
+	  if (base != NULL_TREE
+	      && TREE_CODE (base) == MEM_REF
+	      && op->op0
+	      && DECL_P (TREE_OPERAND (op->op0, 0)))
+	    {
+	      vn_reference_op_t pop = VEC_index (vn_reference_op_s, ops, i-1);
+	      base = TREE_OPERAND (op->op0, 0);
+	      if (pop->off == -1)
+		{
+		  max_size = -1;
+		  offset = 0;
+		}
+	      else
+		offset += pop->off * BITS_PER_UNIT;
+	      op0_p = NULL;
+	      break;
+	    }
+	  /* Fallthru.  */
+	case CALL_EXPR:
	  return false;

	  /* Record the base objects.  */
	case ALIGN_INDIRECT_REF:
-	case INDIRECT_REF:
	  *op0_p = build1 (op->opcode, op->type, NULL_TREE);
	  op0_p = &TREE_OPERAND (*op0_p, 0);
	  break;
@@ -684,11 +800,19 @@ ao_ref_init_from_vn_reference (ao_ref *r
	  op0_p = &TREE_OPERAND (*op0_p, 0);
	  break;

+	case MEM_REF:
+	  base_alias_set = get_deref_alias_set (op->op0);
+	  *op0_p = build2 (MEM_REF, op->type,
+			   NULL_TREE, op->op0);
+	  op0_p = &TREE_OPERAND (*op0_p, 0);
+	  break;
+
	case VAR_DECL:
	case PARM_DECL:
	case RESULT_DECL:
	case SSA_NAME:
	  *op0_p = op->op0;
+	  op0_p = NULL;
	  break;

	  /* And now the usual component-reference style ops.  */
@@ -703,11 +827,8 @@ ao_ref_init_from_vn_reference (ao_ref *r
	       cannot use component_ref_field_offset.  Do the interesting
	       parts manually.  */

-	    /* Our union trick, done for offset zero only.  */
-	    if (TREE_CODE (field) == INTEGER_CST)
-	      ;
-	    else if (op->op1
-		     || !host_integerp (DECL_FIELD_OFFSET (field), 1))
+	    if (op->op1
+		|| !host_integerp (DECL_FIELD_OFFSET (field), 1))
	      max_size = -1;
	    else
	      {
@@ -768,7 +889,10 @@ ao_ref_init_from_vn_reference (ao_ref *r
   ref->size = size;
   ref->max_size = max_size;
   ref->ref_alias_set = set;
-  ref->base_alias_set = -1;
+  if (base_alias_set != -1)
+    ref->base_alias_set = base_alias_set;
+  else
+    ref->base_alias_set = get_alias_set (base);

   return true;
 }
@@ -789,6 +913,7 @@ copy_reference_ops_from_call (gimple cal
   temp.opcode = CALL_EXPR;
   temp.op0 = gimple_call_fn (call);
   temp.op1 = gimple_call_chain (call);
+  temp.off = -1;
   VEC_safe_push (vn_reference_op_s, heap, *result, &temp);

   /* Copy the call arguments.  As they can be references as well,
@@ -830,62 +955,30 @@ void
 vn_reference_fold_indirect (VEC (vn_reference_op_s, heap) **ops,
			    unsigned int *i_p)
 {
-  VEC(vn_reference_op_s, heap) *mem = NULL;
-  vn_reference_op_t op;
   unsigned int i = *i_p;
-  unsigned int j;
-
-  /* Get ops for the addressed object.  */
-  op = VEC_index (vn_reference_op_s, *ops, i);
-  /* ??? If this is our usual typeof &ARRAY vs. &ARRAY[0] problem, work
-     around it to avoid later ICEs.  */
-  if (TREE_CODE (TREE_TYPE (TREE_OPERAND (op->op0, 0))) == ARRAY_TYPE
-      && TREE_CODE (TREE_TYPE (TREE_TYPE (op->op0))) != ARRAY_TYPE)
-    {
-      vn_reference_op_s aref;
-      tree dom;
-      aref.type = TYPE_MAIN_VARIANT (TREE_TYPE (TREE_TYPE (op->op0)));
-      aref.opcode = ARRAY_REF;
-      aref.op0 = integer_zero_node;
-      if ((dom = TYPE_DOMAIN (TREE_TYPE (TREE_OPERAND (op->op0, 0))))
-	  && TYPE_MIN_VALUE (dom))
-	aref.op0 = TYPE_MIN_VALUE (dom);
-      aref.op1 = aref.op0;
-      aref.op2 = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (op->op0)));
-      VEC_safe_push (vn_reference_op_s, heap, mem, &aref);
-    }
-  copy_reference_ops_from_ref (TREE_OPERAND (op->op0, 0), &mem);
-
-  /* Do the replacement - we should have at least one op in mem now.  */
-  if (VEC_length (vn_reference_op_s, mem) == 1)
-    {
-      VEC_replace (vn_reference_op_s, *ops, i - 1,
-		   VEC_index (vn_reference_op_s, mem, 0));
-      VEC_ordered_remove (vn_reference_op_s, *ops, i);
-      i--;
-    }
-  else if (VEC_length (vn_reference_op_s, mem) == 2)
-    {
-      VEC_replace (vn_reference_op_s, *ops, i - 1,
-		   VEC_index (vn_reference_op_s, mem, 0));
-      VEC_replace (vn_reference_op_s, *ops, i,
-		   VEC_index (vn_reference_op_s, mem, 1));
-    }
-  else if (VEC_length (vn_reference_op_s, mem) > 2)
-    {
-      VEC_replace (vn_reference_op_s, *ops, i - 1,
-		   VEC_index (vn_reference_op_s, mem, 0));
-      VEC_replace (vn_reference_op_s, *ops, i,
-		   VEC_index (vn_reference_op_s, mem, 1));
-      /* ??? There is no VEC_splice.  */
-      for (j = 2; VEC_iterate (vn_reference_op_s, mem, j, op); j++)
-	VEC_safe_insert (vn_reference_op_s, heap, *ops, ++i, op);
+  vn_reference_op_t op = VEC_index (vn_reference_op_s, *ops, i);
+  vn_reference_op_t mem_op = VEC_index (vn_reference_op_s, *ops, i - 1);
+  tree addr_base;
+  HOST_WIDE_INT addr_offset;
+
+  /* The only thing we have to do is from &OBJ.foo.bar add the offset
+     from .foo.bar to the preceeding MEM_REF offset and replace the
+     address with &OBJ.  */
+  addr_base = get_addr_base_and_unit_offset (TREE_OPERAND (op->op0, 0),
+					     &addr_offset);
+  gcc_checking_assert (addr_base && TREE_CODE (addr_base) != MEM_REF);
+  if (addr_base != op->op0)
+    {
+      double_int off = tree_to_double_int (mem_op->op0);
+      off = double_int_sext (off, TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
+      off = double_int_add (off, shwi_to_double_int (addr_offset));
+      mem_op->op0 = double_int_to_tree (TREE_TYPE (mem_op->op0), off);
+      op->op0 = build_fold_addr_expr (addr_base);
+      if (host_integerp (mem_op->op0, 0))
+	mem_op->off = TREE_INT_CST_LOW (mem_op->op0);
+      else
+	mem_op->off = -1;
     }
-  else
-    gcc_unreachable ();
-
-  VEC_free (vn_reference_op_s, heap, mem);
-  *i_p = i;
 }

 /* Optimize the reference REF to a constant if possible or return
@@ -978,20 +1071,35 @@ valueize_refs (VEC (vn_reference_op_s, h
	     the opcode.  */
	  if (TREE_CODE (vro->op0) != SSA_NAME && vro->opcode == SSA_NAME)
	    vro->opcode = TREE_CODE (vro->op0);
-	  /* If it transforms from an SSA_NAME to an address, fold with
-	     a preceding indirect reference.  */
-	  if (i > 0 && TREE_CODE (vro->op0) == ADDR_EXPR
-	      && VEC_index (vn_reference_op_s,
-			    orig, i - 1)->opcode == INDIRECT_REF)
-	    {
-	      vn_reference_fold_indirect (&orig, &i);
-	      continue;
-	    }
	}
       if (vro->op1 && TREE_CODE (vro->op1) == SSA_NAME)
	vro->op1 = SSA_VAL (vro->op1);
       if (vro->op2 && TREE_CODE (vro->op2) == SSA_NAME)
	vro->op2 = SSA_VAL (vro->op2);
+      /* If it transforms from an SSA_NAME to an address, fold with
+	 a preceding indirect reference.  */
+      if (i > 0
+	  && vro->op0
+	  && TREE_CODE (vro->op0) == ADDR_EXPR
+	  && VEC_index (vn_reference_op_s,
+			orig, i - 1)->opcode == MEM_REF)
+	vn_reference_fold_indirect (&orig, &i);
+      /* If it transforms a non-constant ARRAY_REF into a constant
+	 one, adjust the constant offset.  */
+      else if (vro->opcode == ARRAY_REF
+	       && vro->off == -1
+	       && TREE_CODE (vro->op0) == INTEGER_CST
+	       && TREE_CODE (vro->op1) == INTEGER_CST
+	       && TREE_CODE (vro->op2) == INTEGER_CST)
+	{
+	  double_int off = tree_to_double_int (vro->op0);
+	  off = double_int_add (off,
+				double_int_neg
+				  (tree_to_double_int (vro->op1)));
+	  off = double_int_mul (off, tree_to_double_int (vro->op2));
+	  if (double_int_fits_in_shwi_p (off))
+	    vro->off = off.low;
+	}
     }

   return orig;
@@ -1172,7 +1280,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree
      the copy kills ref.  */
   else if (gimple_assign_single_p (def_stmt)
	   && (DECL_P (gimple_assign_rhs1 (def_stmt))
-	       || INDIRECT_REF_P (gimple_assign_rhs1 (def_stmt))
+	       || TREE_CODE (gimple_assign_rhs1 (def_stmt)) == MEM_REF
	       || handled_component_p (gimple_assign_rhs1 (def_stmt))))
     {
       tree base2;
@@ -2092,9 +2200,9 @@ visit_reference_op_load (tree lhs, tree
       result = vn_nary_op_lookup (val, NULL);
       /* If the expression is not yet available, value-number lhs to
	  a new SSA_NAME we create.  */
-      if (!result && may_insert)
+      if (!result)
	{
-	  result = make_ssa_name (SSA_NAME_VAR (lhs), NULL);
+	  result = make_ssa_name (SSA_NAME_VAR (lhs), gimple_build_nop ());
	  /* Initialize value-number information properly.  */
	  VN_INFO_GET (result)->valnum = result;
	  VN_INFO (result)->value_id = get_next_value_id ();
@@ -3266,14 +3374,12 @@ set_hashtable_value_ids (void)
   due to resource constraints.  */

 bool
-run_scc_vn (bool may_insert_arg)
+run_scc_vn (void)
 {
   size_t i;
   tree param;
   bool changed = true;

-  may_insert = may_insert_arg;
-
   init_scc_vn ();
   current_info = valid_info;

@@ -3297,7 +3403,6 @@ run_scc_vn (bool may_insert_arg)
       if (!DFS (name))
	{
	  free_scc_vn ();
-	  may_insert = false;
	  return false;
	}
     }
@@ -3359,7 +3464,6 @@ run_scc_vn (bool may_insert_arg)
	}
     }

-  may_insert = false;
   return true;
 }

Index: gcc/tree-ssa-sccvn.h
===================================================================
--- gcc/tree-ssa-sccvn.h	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-sccvn.h	(.../branches/mem-ref2)	(revision 161369)
@@ -72,6 +72,8 @@ typedef const struct vn_phi_s *const_vn_
 typedef struct vn_reference_op_struct
 {
   enum tree_code opcode;
+  /* Constant offset this op adds or -1 if it is variable.  */
+  HOST_WIDE_INT off;
   tree type;
   tree op0;
   tree op1;
@@ -167,7 +169,7 @@ typedef struct vn_ssa_aux
 extern vn_ssa_aux_t VN_INFO (tree);
 extern vn_ssa_aux_t VN_INFO_GET (tree);
 tree vn_get_expr_for (tree);
-bool run_scc_vn (bool);
+bool run_scc_vn (void);
 void free_scc_vn (void);
 tree vn_nary_op_lookup (tree, vn_nary_op_t *);
 tree vn_nary_op_lookup_stmt (gimple, vn_nary_op_t *);

Index: gcc/cgraphunit.c
===================================================================
--- gcc/cgraphunit.c	(.../trunk)	(revision 161367)
+++ gcc/cgraphunit.c	(.../branches/mem-ref2)	(revision 161369)
@@ -1364,8 +1364,7 @@ thunk_adjust (gimple_stmt_iterator * bsi
       vtabletmp2 = create_tmp_var (TREE_TYPE (TREE_TYPE (vtabletmp)),
				   "vtableaddr");
       stmt = gimple_build_assign (vtabletmp2,
-				  build1 (INDIRECT_REF,
-					  TREE_TYPE (vtabletmp2), vtabletmp));
+				  build_simple_mem_ref (vtabletmp));
       gsi_insert_after (bsi, stmt, GSI_NEW_STMT);
       mark_symbols_for_renaming (stmt);
       find_referenced_vars_in (stmt);
@@ -1384,9 +1383,7 @@ thunk_adjust (gimple_stmt_iterator * bsi
       vtabletmp3 = create_tmp_var (TREE_TYPE (TREE_TYPE (vtabletmp2)),
				   "vcalloffset");
       stmt = gimple_build_assign (vtabletmp3,
-				  build1 (INDIRECT_REF,
-					  TREE_TYPE (vtabletmp3),
-					  vtabletmp2));
+				  build_simple_mem_ref (vtabletmp2));
       gsi_insert_after (bsi, stmt, GSI_NEW_STMT);
       mark_symbols_for_renaming (stmt);
       find_referenced_vars_in (stmt);

Index: gcc/cp/cp-gimplify.c
===================================================================
--- gcc/cp/cp-gimplify.c	(.../trunk)	(revision 161367)
+++ gcc/cp/cp-gimplify.c	(.../branches/mem-ref2)	(revision 161369)
@@ -575,7 +575,7 @@ cp_gimplify_expr (tree *expr_p, gimple_s
	  TREE_OPERAND (*expr_p, 1)
	    = build1 (VIEW_CONVERT_EXPR, TREE_TYPE (op0), op1);
-      else if ((rhs_predicate_for (op0)) (op1)
+      else if ((is_gimple_lvalue (op1) || INDIRECT_REF_P (op1))
	       && !(TREE_CODE (op1) == CALL_EXPR
		    && CALL_EXPR_RETURN_SLOT_OPT (op1))
	       && is_really_empty_class (TREE_TYPE (op0)))

Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-ccp.c	(.../branches/mem-ref2)	(revision 161369)
@@ -896,20 +896,22 @@ ccp_fold (gimple stmt)
	      base = &TREE_OPERAND (rhs, 0);
	      while (handled_component_p (*base))
		base = &TREE_OPERAND (*base, 0);
-	      if (TREE_CODE (*base) == INDIRECT_REF
+	      if (TREE_CODE (*base) == MEM_REF
		  && TREE_CODE (TREE_OPERAND (*base, 0)) == SSA_NAME)
		{
		  prop_value_t *val = get_value (TREE_OPERAND (*base, 0));
		  if (val->lattice_val == CONSTANT
-		      && TREE_CODE (val->value) == ADDR_EXPR
-		      && may_propagate_address_into_dereference
-			   (val->value, *base))
+		      && TREE_CODE (val->value) == ADDR_EXPR)
		    {
+		      tree ret, save = *base;
+		      tree new_base;
+		      new_base = fold_build2 (MEM_REF, TREE_TYPE (*base),
+					      unshare_expr (val->value),
+					      TREE_OPERAND (*base, 1));
		      /* We need to return a new tree, not modify the IL
			 or share parts of it.  So play some tricks to
			 avoid manually building it.  */
-		      tree ret, save = *base;
-		      *base = TREE_OPERAND (val->value, 0);
+		      *base = new_base;
		      ret = unshare_expr (rhs);
		      recompute_tree_invariant_for_addr_expr (ret);
		      *base = save;
@@ -955,15 +957,19 @@ ccp_fold (gimple stmt)
					     TREE_CODE (rhs),
					     TREE_TYPE (rhs), val->value);
	    }
-	  else if (TREE_CODE (rhs) == INDIRECT_REF
+	  else if (TREE_CODE (rhs) == MEM_REF
		   && TREE_CODE (TREE_OPERAND (rhs, 0)) == SSA_NAME)
	    {
	      prop_value_t *val = get_value (TREE_OPERAND (rhs, 0));
	      if (val->lattice_val == CONSTANT
-		  && TREE_CODE (val->value) == ADDR_EXPR
-		  && useless_type_conversion_p (TREE_TYPE (rhs),
-						TREE_TYPE (TREE_TYPE (val->value))))
-		rhs = TREE_OPERAND (val->value, 0);
+		  && TREE_CODE (val->value) == ADDR_EXPR)
+		{
+		  tree tem = fold_build2 (MEM_REF, TREE_TYPE (rhs),
+					  unshare_expr (val->value),
+					  TREE_OPERAND (rhs, 1));
+		  if (tem)
+		    rhs = tem;
+		}
	    }
	  return fold_const_aggregate_ref (rhs);
	}
@@ -987,16 +993,10 @@ ccp_fold (gimple stmt)
	     allowed places.  */
	  if (CONVERT_EXPR_CODE_P (subcode)
	      && POINTER_TYPE_P (TREE_TYPE (lhs))
-	      && POINTER_TYPE_P (TREE_TYPE (op0))
-	      /* Do not allow differences in volatile qualification
-		 as this might get us confused as to whether a
-		 propagation destination statement is volatile
-		 or not.  See PR36988.  */
-	      && (TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (lhs)))
-		  == TYPE_VOLATILE (TREE_TYPE (TREE_TYPE (op0)))))
+	      && POINTER_TYPE_P (TREE_TYPE (op0)))
	    {
	      tree tem;
-	      /* Still try to generate a constant of correct type.  */
+	      /* Try to re-construct array references on-the-fly.  */
	      if (!useless_type_conversion_p (TREE_TYPE (lhs),
					      TREE_TYPE (op0))
		  && ((tem = maybe_fold_offset_to_address
@@ -1018,19 +1018,21 @@ ccp_fold (gimple stmt)
	  tree op0 = get_rhs_assign_op_for_ccp (stmt, 1);
	  tree op1 = get_rhs_assign_op_for_ccp (stmt, 2);

-	  /* Fold &foo + CST into an invariant reference if possible.  */
+	  /* Translate &x + CST into an invariant form suitable for
+	     further propagation.  */
	  if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR
	      && TREE_CODE (op0) == ADDR_EXPR
	      && TREE_CODE (op1) == INTEGER_CST)
	    {
-	      tree tem = maybe_fold_offset_to_address
-		(loc, op0, op1, TREE_TYPE (op0));
-	      if (tem != NULL_TREE)
-		return tem;
+	      tree off = fold_convert (ptr_type_node, op1);
+	      return build_fold_addr_expr
+		       (fold_build2 (MEM_REF,
+				     TREE_TYPE (TREE_TYPE (op0)),
+				     unshare_expr (op0), off));
	    }

	  return fold_binary_loc (loc, subcode,
-				  gimple_expr_type (stmt), op0, op1);
+				  gimple_expr_type (stmt), op0, op1);
	}

      case GIMPLE_TERNARY_RHS:
@@ -1299,18 +1301,97 @@ fold_const_aggregate_ref (tree t)
	break;
      }

-    case INDIRECT_REF:
-      {
-	tree base = TREE_OPERAND (t, 0);
-	if (TREE_CODE (base) == SSA_NAME
-	    && (value = get_value (base))
-	    && value->lattice_val == CONSTANT
-	    && TREE_CODE (value->value) == ADDR_EXPR
-	    && useless_type_conversion_p (TREE_TYPE (t),
-					  TREE_TYPE (TREE_TYPE (value->value))))
-	  return fold_const_aggregate_ref (TREE_OPERAND (value->value, 0));
-	break;
-      }
+    case MEM_REF:
+      /* Get the base object we are accessing.  */
+      base = TREE_OPERAND (t, 0);
+      if (TREE_CODE (base) == SSA_NAME
+	  && (value = get_value (base))
+	  && value->lattice_val == CONSTANT)
+	base = value->value;
+      if (TREE_CODE (base) != ADDR_EXPR)
+	return NULL_TREE;
+      base = TREE_OPERAND (base, 0);
+      switch (TREE_CODE (base))
+	{
+	case VAR_DECL:
+	  if (DECL_P (base)
+	      && !AGGREGATE_TYPE_P (TREE_TYPE (base))
+	      && integer_zerop (TREE_OPERAND (t, 1)))
+	    return get_symbol_constant_value (base);
+
+	  if (!TREE_READONLY (base)
+	      || TREE_CODE (TREE_TYPE (base)) != ARRAY_TYPE
+	      || !targetm.binds_local_p (base))
+	    return NULL_TREE;
+
+	  ctor = DECL_INITIAL (base);
+	  break;
+
+	case STRING_CST:
+	case CONSTRUCTOR:
+	  ctor = base;
+	  break;
+
+	default:
+	  return NULL_TREE;
+	}
+
+      if (ctor == NULL_TREE
+	  || (TREE_CODE (ctor) != CONSTRUCTOR
+	      && TREE_CODE (ctor) != STRING_CST)
+	  || !TREE_STATIC (ctor))
+	return NULL_TREE;
+
+      /* Get the byte offset.  */
+      idx = TREE_OPERAND (t, 1);
+
+      /* Fold read from constant string.  */
+      if (TREE_CODE (ctor) == STRING_CST)
+	{
+	  if ((TYPE_MODE (TREE_TYPE (t))
+	       == TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
+	      && (GET_MODE_CLASS (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
+		  == MODE_INT)
+	      && GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor)))) == 1
+	      && compare_tree_int (idx, TREE_STRING_LENGTH (ctor)) < 0)
+	    return build_int_cst_type (TREE_TYPE (t),
+				       (TREE_STRING_POINTER (ctor)
+					[TREE_INT_CST_LOW (idx)]));
+	  return NULL_TREE;
+	}
+
+      /* ??? Implement byte-offset indexing into a non-array CONSTRUCTOR.  */
+      if (TREE_CODE (TREE_TYPE (ctor)) == ARRAY_TYPE
+	  && (TYPE_MODE (TREE_TYPE (t))
+	      == TYPE_MODE (TREE_TYPE (TREE_TYPE (ctor))))
+	  && GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (t))) != 0
+	  && integer_zerop
+	       (int_const_binop
+		  (TRUNC_MOD_EXPR, idx,
+		   size_int (GET_MODE_SIZE (TYPE_MODE (TREE_TYPE (t)))), 0)))
+	{
+	  idx = int_const_binop (TRUNC_DIV_EXPR, idx,
+				 size_int (GET_MODE_SIZE
+					     (TYPE_MODE (TREE_TYPE (t)))), 0);
+	  FOR_EACH_CONSTRUCTOR_ELT (CONSTRUCTOR_ELTS (ctor), cnt, cfield, cval)
+	    if (tree_int_cst_equal (cfield, idx))
+	      {
+		STRIP_NOPS (cval);
+		if (TREE_CODE (cval) == ADDR_EXPR)
+		  {
+		    tree base = get_base_address (TREE_OPERAND (cval, 0));
+		    if (base && TREE_CODE (base) == VAR_DECL)
+		      add_referenced_var (base);
+		  }
+		if (useless_type_conversion_p (TREE_TYPE (t), TREE_TYPE (cval)))
+		  return cval;
+		else if (CONSTANT_CLASS_P (cval))
+		  return fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (t), cval);
+		else
+		  return NULL_TREE;
+	      }
+	}
+      break;

     default:
       break;
@@ -1498,7 +1579,7 @@ ccp_fold_stmt (gimple_stmt_iterator *gsi
	{
	  tree rhs = unshare_expr (val->value);
	  if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
-	    rhs = fold_convert (TREE_TYPE (lhs), rhs);
+	    rhs = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (lhs), rhs);
	  gimple_assign_set_rhs_from_tree (gsi, rhs);
	  return true;
	}

Index: gcc/tree-ssa-loop-ivopts.c
===================================================================
--- gcc/tree-ssa-loop-ivopts.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-loop-ivopts.c	(.../branches/mem-ref2)	(revision 161369)
@@ -813,7 +813,7 @@ determine_base_object (tree expr)
      if (!base)
	return expr;

-      if (TREE_CODE (base) == INDIRECT_REF)
+      if (TREE_CODE (base) == MEM_REF)
	return determine_base_object (TREE_OPERAND (base, 0));

      return fold_convert (ptr_type_node,
@@ -1694,9 +1694,11 @@ find_interesting_uses_address (struct iv
	  tree *ref = &TREE_OPERAND (base, 0);
	  while (handled_component_p (*ref))
	    ref = &TREE_OPERAND (*ref, 0);
-	  if (TREE_CODE (*ref) == INDIRECT_REF)
+	  if (TREE_CODE (*ref) == MEM_REF)
	    {
gimple_fold_indirect_ref (TREE_OPERAND (*ref, 0)); + tree tem = fold_binary (MEM_REF, TREE_TYPE (*ref), + TREE_OPERAND (*ref, 0), + TREE_OPERAND (*ref, 1)); if (tem) *ref = tem; } @@ -2018,7 +2020,8 @@ strip_offset_1 (tree expr, bool inside_a expr = build_fold_addr_expr (op0); return fold_convert (orig_type, expr); - case INDIRECT_REF: + case MEM_REF: + /* ??? Offset operand? */ inside_addr = false; break; @@ -3889,7 +3892,7 @@ fallback: return infinite_cost; if (address_p) - comp = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (comp)), comp); + comp = build_simple_mem_ref (comp); return new_cost (computation_cost (comp, speed), 0); } Index: gcc/ipa-pure-const.c =================================================================== --- gcc/ipa-pure-const.c (.../trunk) (revision 161367) +++ gcc/ipa-pure-const.c (.../branches/mem-ref2) (revision 161369) @@ -324,7 +324,7 @@ check_op (funct_state local, tree t, boo return; } else if (t - && INDIRECT_REF_P (t) + && (INDIRECT_REF_P (t) || TREE_CODE (t) == MEM_REF) && TREE_CODE (TREE_OPERAND (t, 0)) == SSA_NAME && !ptr_deref_may_alias_global_p (TREE_OPERAND (t, 0))) { Index: gcc/tree-stdarg.c =================================================================== --- gcc/tree-stdarg.c (.../trunk) (revision 161367) +++ gcc/tree-stdarg.c (.../branches/mem-ref2) (revision 161369) @@ -512,7 +512,7 @@ check_all_va_list_escapes (struct stdarg enum tree_code rhs_code = gimple_assign_rhs_code (stmt); /* x = *ap_temp; */ - if (gimple_assign_rhs_code (stmt) == INDIRECT_REF + if (gimple_assign_rhs_code (stmt) == MEM_REF && TREE_OPERAND (rhs, 0) == use && TYPE_SIZE_UNIT (TREE_TYPE (rhs)) && host_integerp (TYPE_SIZE_UNIT (TREE_TYPE (rhs)), 1) @@ -522,6 +522,7 @@ check_all_va_list_escapes (struct stdarg tree access_size = TYPE_SIZE_UNIT (TREE_TYPE (rhs)); gpr_size = si->offsets[SSA_NAME_VERSION (use)] + + tree_low_cst (TREE_OPERAND (rhs, 1), 0) + tree_low_cst (access_size, 1); if (gpr_size >= VA_LIST_MAX_GPR_SIZE) cfun->va_list_gpr_size = VA_LIST_MAX_GPR_SIZE; Index: gcc/tree-ssa-sink.c =================================================================== --- gcc/tree-ssa-sink.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-sink.c (.../branches/mem-ref2) (revision 161369) @@ -190,8 +190,11 @@ is_hidden_global_store (gimple stmt) return true; } - else if (INDIRECT_REF_P (lhs)) + else if (INDIRECT_REF_P (lhs) + || TREE_CODE (lhs) == MEM_REF) return ptr_deref_may_alias_global_p (TREE_OPERAND (lhs, 0)); + else if (CONSTANT_CLASS_P (lhs)) + return true; else gcc_unreachable (); } Index: gcc/ipa-inline.c =================================================================== --- gcc/ipa-inline.c (.../trunk) (revision 161367) +++ gcc/ipa-inline.c (.../branches/mem-ref2) (revision 161369) @@ -1830,10 +1830,12 @@ likely_eliminated_by_inlining_p (gimple bool rhs_free = false; bool lhs_free = false; - while (handled_component_p (inner_lhs) || TREE_CODE (inner_lhs) == INDIRECT_REF) + while (handled_component_p (inner_lhs) + || TREE_CODE (inner_lhs) == MEM_REF) inner_lhs = TREE_OPERAND (inner_lhs, 0); while (handled_component_p (inner_rhs) - || TREE_CODE (inner_rhs) == ADDR_EXPR || TREE_CODE (inner_rhs) == INDIRECT_REF) + || TREE_CODE (inner_rhs) == ADDR_EXPR + || TREE_CODE (inner_rhs) == MEM_REF) inner_rhs = TREE_OPERAND (inner_rhs, 0); @@ -1853,7 +1855,8 @@ likely_eliminated_by_inlining_p (gimple || (TREE_CODE (inner_lhs) == SSA_NAME && TREE_CODE (SSA_NAME_VAR (inner_lhs)) == RESULT_DECL)) lhs_free = true; - if (lhs_free && (is_gimple_reg (rhs) || is_gimple_min_invariant (rhs))) + if 
(lhs_free + && (is_gimple_reg (rhs) || is_gimple_min_invariant (rhs))) rhs_free = true; if (lhs_free && rhs_free) return true; Index: gcc/tree-parloops.c =================================================================== --- gcc/tree-parloops.c (.../trunk) (revision 161367) +++ gcc/tree-parloops.c (.../branches/mem-ref2) (revision 161369) @@ -357,7 +357,7 @@ take_address_of (tree obj, tree type, ed if (var_p != &obj) { - *var_p = build1 (INDIRECT_REF, TREE_TYPE (*var_p), name); + *var_p = build_simple_mem_ref (name); name = force_gimple_operand (build_addr (obj, current_function_decl), &stmts, true, NULL_TREE); if (!gimple_seq_empty_p (stmts)) @@ -456,7 +456,7 @@ eliminate_local_variables_1 (tree *tp, i type = TREE_TYPE (t); addr_type = build_pointer_type (type); addr = take_address_of (t, addr_type, dta->entry, dta->decl_address); - *tp = build1 (INDIRECT_REF, TREE_TYPE (*tp), addr); + *tp = build_simple_mem_ref (addr); dta->changed = true; return NULL_TREE; @@ -857,7 +857,6 @@ create_call_for_reduction_1 (void **slot struct clsn_data *const clsn_data = (struct clsn_data *) data; gimple_stmt_iterator gsi; tree type = TREE_TYPE (PHI_RESULT (reduc->reduc_phi)); - tree struct_type = TREE_TYPE (TREE_TYPE (clsn_data->load)); tree load_struct; basic_block bb; basic_block new_bb; @@ -866,7 +865,7 @@ create_call_for_reduction_1 (void **slot tree tmp_load, name; gimple load; - load_struct = fold_build1 (INDIRECT_REF, struct_type, clsn_data->load); + load_struct = build_simple_mem_ref (clsn_data->load); t = build3 (COMPONENT_REF, type, load_struct, reduc->field, NULL_TREE); addr = build_addr (t, current_function_decl); @@ -925,13 +924,12 @@ create_loads_for_reductions (void **slot gimple stmt; gimple_stmt_iterator gsi; tree type = TREE_TYPE (gimple_assign_lhs (red->reduc_stmt)); - tree struct_type = TREE_TYPE (TREE_TYPE (clsn_data->load)); tree load_struct; tree name; tree x; gsi = gsi_after_labels (clsn_data->load_bb); - load_struct = fold_build1 (INDIRECT_REF, struct_type, clsn_data->load); + load_struct = build_simple_mem_ref (clsn_data->load); load_struct = build3 (COMPONENT_REF, type, load_struct, red->field, NULL_TREE); @@ -1012,7 +1010,6 @@ create_loads_and_stores_for_name (void * gimple stmt; gimple_stmt_iterator gsi; tree type = TREE_TYPE (elt->new_name); - tree struct_type = TREE_TYPE (TREE_TYPE (clsn_data->load)); tree load_struct; gsi = gsi_last_bb (clsn_data->store_bb); @@ -1022,7 +1019,7 @@ create_loads_and_stores_for_name (void * gsi_insert_after (&gsi, stmt, GSI_NEW_STMT); gsi = gsi_last_bb (clsn_data->load_bb); - load_struct = fold_build1 (INDIRECT_REF, struct_type, clsn_data->load); + load_struct = build_simple_mem_ref (clsn_data->load); t = build3 (COMPONENT_REF, type, load_struct, elt->field, NULL_TREE); stmt = gimple_build_assign (elt->new_name, t); SSA_NAME_DEF_STMT (elt->new_name) = stmt; Index: gcc/matrix-reorg.c =================================================================== --- gcc/matrix-reorg.c (.../trunk) (revision 161367) +++ gcc/matrix-reorg.c (.../branches/mem-ref2) (revision 161369) @@ -218,7 +218,7 @@ collect_data_for_malloc_call (gimple stm initial address and index of each dimension. */ struct access_site_info { - /* The statement (INDIRECT_REF or POINTER_PLUS_EXPR). */ + /* The statement (MEM_REF or POINTER_PLUS_EXPR). */ gimple stmt; /* In case of POINTER_PLUS_EXPR, what is the offset. */ @@ -334,7 +334,7 @@ struct ssa_acc_in_tree /* The variable whose accesses in the tree we are looking for. 
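(The tree-parloops hunks above all build the same shape: the outlined loop body receives the address of a record holding the shared locals and accesses them through a plain MEM_REF plus COMPONENT_REF instead of the old INDIRECT_REF. A rough source-level analogue, with hypothetical names -- clsn, outlined_body -- standing in for what the pass generates:)

  #include <stdio.h>

  struct clsn { int sum; int n; };      /* hypothetical capture record */

  static void
  outlined_body (void *data)
  {
    struct clsn *p = data;              /* load_struct = MEM[data] */
    for (int i = 0; i < p->n; i++)
      p->sum += i;                      /* COMPONENT_REF on top of MEM_REF */
  }

  int
  main (void)
  {
    struct clsn c = { 0, 5 };
    outlined_body (&c);                 /* normally run by a worker thread */
    printf ("%d\n", c.sum);             /* prints 10 */
    return 0;
  }
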
*/ tree ssa_var; /* The tree and code inside it the ssa_var is accessed, currently - it could be an INDIRECT_REF or CALL_EXPR. + it could be a MEM_REF or CALL_EXPR. */ enum tree_code t_code; tree t_tree; /* The place in the containing tree. */ @@ -413,33 +413,18 @@ mtt_info_eq (const void *mtt1, const voi static bool may_flatten_matrices_1 (gimple stmt) { - tree t; - switch (gimple_code (stmt)) { case GIMPLE_ASSIGN: - if (!gimple_assign_cast_p (stmt)) + case GIMPLE_CALL: + if (!gimple_has_lhs (stmt)) return true; - - t = gimple_assign_rhs1 (stmt); - while (CONVERT_EXPR_P (t)) + if (TREE_CODE (TREE_TYPE (gimple_get_lhs (stmt))) == VECTOR_TYPE) { - if (TREE_TYPE (t) && POINTER_TYPE_P (TREE_TYPE (t))) - { - tree pointee; - - pointee = TREE_TYPE (t); - while (POINTER_TYPE_P (pointee)) - pointee = TREE_TYPE (pointee); - if (TREE_CODE (pointee) == VECTOR_TYPE) - { - if (dump_file) - fprintf (dump_file, - "Found vector type, don't flatten matrix\n"); - return false; - } - } - t = TREE_OPERAND (t, 0); + if (dump_file) + fprintf (dump_file, + "Found vector type, don't flatten matrix\n"); + return false; } break; case GIMPLE_ASM: @@ -602,7 +587,7 @@ mark_min_matrix_escape_level (struct mat /* Find if the SSA variable is accessed inside the tree and record the tree containing it. The only relevant uses are the case of SSA_NAME, or SSA inside - INDIRECT_REF, PLUS_EXPR, POINTER_PLUS_EXPR, MULT_EXPR. */ + MEM_REF, PLUS_EXPR, POINTER_PLUS_EXPR, MULT_EXPR. */ static void ssa_accessed_in_tree (tree t, struct ssa_acc_in_tree *a) { @@ -613,7 +598,7 @@ ssa_accessed_in_tree (tree t, struct ssa if (t == a->ssa_var) a->var_found = true; break; - case INDIRECT_REF: + case MEM_REF: if (SSA_VAR_P (TREE_OPERAND (t, 0)) && TREE_OPERAND (t, 0) == a->ssa_var) a->var_found = true; @@ -660,7 +645,7 @@ ssa_accessed_in_assign_rhs (gimple stmt, tree op1, op2; case SSA_NAME: - case INDIRECT_REF: + case MEM_REF: CASE_CONVERT: case VIEW_CONVERT_EXPR: ssa_accessed_in_tree (gimple_assign_rhs1 (stmt), a); @@ -984,7 +969,7 @@ get_index_from_offset (tree offset, gimp /* update MI->dimension_type_size[CURRENT_INDIRECT_LEVEL] with the size of the type related to the SSA_VAR, or the type related to the - lhs of STMT, in the case that it is an INDIRECT_REF. */ + lhs of STMT, in the case that it is a MEM_REF. */ static void update_type_size (struct matrix_info *mi, gimple stmt, tree ssa_var, int current_indirect_level) { @@ -992,9 +977,9 @@ update_type_size (struct matrix_info *mi tree lhs; HOST_WIDE_INT type_size; - /* Update type according to the type of the INDIRECT_REF expr. */ + /* Update type according to the type of the MEM_REF expr. */ if (is_gimple_assign (stmt) - && TREE_CODE (gimple_assign_lhs (stmt)) == INDIRECT_REF + && TREE_CODE (gimple_assign_lhs (stmt)) == MEM_REF) { lhs = gimple_assign_lhs (stmt); gcc_assert (POINTER_TYPE_P @@ -1073,7 +1058,7 @@ analyze_accesses_for_call_stmt (struct m at this level because in this case we cannot calculate the address correctly. 
*/ if ((lhs_acc.var_found && rhs_acc.var_found - && lhs_acc.t_code == INDIRECT_REF) + && lhs_acc.t_code == MEM_REF) || (!rhs_acc.var_found && !lhs_acc.var_found)) { mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt); @@ -1087,7 +1072,7 @@ analyze_accesses_for_call_stmt (struct m { int l = current_indirect_level + 1; - gcc_assert (lhs_acc.t_code == INDIRECT_REF); + gcc_assert (lhs_acc.t_code == MEM_REF); mark_min_matrix_escape_level (mi, l, use_stmt); return current_indirect_level; } @@ -1213,7 +1198,7 @@ analyze_accesses_for_assign_stmt (struct at this level because in this case we cannot calculate the address correctly. */ if ((lhs_acc.var_found && rhs_acc.var_found - && lhs_acc.t_code == INDIRECT_REF) + && lhs_acc.t_code == MEM_REF) || (!rhs_acc.var_found && !lhs_acc.var_found)) { mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt); @@ -1227,7 +1212,7 @@ analyze_accesses_for_assign_stmt (struct { int l = current_indirect_level + 1; - gcc_assert (lhs_acc.t_code == INDIRECT_REF); + gcc_assert (lhs_acc.t_code == MEM_REF); if (!(gimple_assign_copy_p (use_stmt) || gimple_assign_cast_p (use_stmt)) @@ -1248,7 +1233,7 @@ analyze_accesses_for_assign_stmt (struct is used. */ if (rhs_acc.var_found) { - if (rhs_acc.t_code != INDIRECT_REF + if (rhs_acc.t_code != MEM_REF && rhs_acc.t_code != POINTER_PLUS_EXPR && rhs_acc.t_code != SSA_NAME) { mark_min_matrix_escape_level (mi, current_indirect_level, use_stmt); @@ -1256,7 +1241,7 @@ analyze_accesses_for_assign_stmt (struct } /* If the access in the RHS has an indirection increase the indirection level. */ - if (rhs_acc.t_code == INDIRECT_REF) + if (rhs_acc.t_code == MEM_REF) { if (record_accesses) record_access_alloc_site_info (mi, use_stmt, NULL_TREE, @@ -1309,7 +1294,7 @@ analyze_accesses_for_assign_stmt (struct } /* If we are storing this level of indirection mark it as escaping. */ - if (lhs_acc.t_code == INDIRECT_REF || TREE_CODE (lhs) != SSA_NAME) + if (lhs_acc.t_code == MEM_REF || TREE_CODE (lhs) != SSA_NAME) { int l = current_indirect_level; @@ -1369,8 +1354,8 @@ analyze_matrix_accesses (struct matrix_i return; /* Now go over the uses of the SSA_NAME and check how it is used in - each one of them. We are mainly looking for the pattern INDIRECT_REF, - then a POINTER_PLUS_EXPR, then INDIRECT_REF etc. while in between there could + each one of them. We are mainly looking for the pattern MEM_REF, + then a POINTER_PLUS_EXPR, then MEM_REF etc. while in between there could be any number of copies and casts. */ gcc_assert (TREE_CODE (ssa_var) == SSA_NAME); @@ -1856,7 +1841,7 @@ transform_access_sites (void **slot, voi gimple new_stmt; gcc_assert (gimple_assign_rhs_code (acc_info->stmt) - == INDIRECT_REF); + == MEM_REF); /* Emit convert statement to convert to type of use. */ tmp = create_tmp_var (TREE_TYPE (lhs), "new"); add_referenced_var (tmp); @@ -1878,10 +1863,10 @@ transform_access_sites (void **slot, voi continue; } code = gimple_assign_rhs_code (acc_info->stmt); - if (code == INDIRECT_REF + if (code == MEM_REF && acc_info->level < min_escape_l - 1) { - /* Replace the INDIRECT_REF with NOP (cast) usually we are casting + /* Replace the MEM_REF with NOP (cast) usually we are casting from "pointer to type" to "type". 
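(For reference, the access chain the pass matches corresponds to an ordinary pointer-to-pointer matrix access. A standalone sketch; the byte offsets in the comment assume 8-byte pointers and 4-byte int:)

  #include <stdio.h>
  #include <stdlib.h>

  int
  main (void)
  {
    int **m = malloc (2 * sizeof *m);
    for (int i = 0; i < 2; i++)
      m[i] = calloc (3, sizeof **m);
    m[1][2] = 7;

    /* m[1][2] in GIMPLE is now
         row_1 = MEM[m_2 + 8];    <- MEM_REF
         p_3 = row_1 + 8;         <- POINTER_PLUS_EXPR
         x_4 = MEM[p_3];          <- MEM_REF
       exactly the MEM_REF / POINTER_PLUS_EXPR / MEM_REF chain
       analyze_matrix_accesses walks.  */
    int *row = *(int **) ((char *) m + 1 * sizeof *m);
    int x = *(int *) ((char *) row + 2 * sizeof *row);
    printf ("%d\n", x);

    free (m[0]);
    free (m[1]);
    free (m);
    return 0;
  }
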
*/ tree t = build1 (NOP_EXPR, TREE_TYPE (gimple_assign_rhs1 (acc_info->stmt)), @@ -2206,7 +2191,6 @@ transform_allocation_sites (void **slot, for (i = 1; i < mi->min_indirect_level_escape; i++) { gimple_stmt_iterator gsi; - gimple use_stmt1 = NULL; gimple call_stmt = mi->malloc_for_level[i]; gcc_assert (is_gimple_call (call_stmt)); @@ -2216,17 +2200,9 @@ transform_allocation_sites (void **slot, gsi = gsi_for_stmt (call_stmt); /* Remove the call stmt. */ gsi_remove (&gsi, true); - /* remove the type cast stmt. */ - FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, - gimple_call_lhs (call_stmt)) - { - use_stmt1 = use_stmt; - gsi = gsi_for_stmt (use_stmt); - gsi_remove (&gsi, true); - } /* Remove the assignment of the allocated area. */ FOR_EACH_IMM_USE_STMT (use_stmt, imm_iter, - gimple_get_lhs (use_stmt1)) + gimple_call_lhs (call_stmt)) { gsi = gsi_for_stmt (use_stmt); gsi_remove (&gsi, true); Index: gcc/tree-affine.c =================================================================== --- gcc/tree-affine.c (.../trunk) (revision 161367) +++ gcc/tree-affine.c (.../branches/mem-ref2) (revision 161369) @@ -309,6 +309,15 @@ tree_to_aff_combination (tree expr, tree return; case ADDR_EXPR: + /* Handle &MEM[ptr + CST] which is equivalent to POINTER_PLUS_EXPR. */ + if (TREE_CODE (TREE_OPERAND (expr, 0)) == MEM_REF) + { + expr = TREE_OPERAND (expr, 0); + tree_to_aff_combination (TREE_OPERAND (expr, 0), type, comb); + tree_to_aff_combination (TREE_OPERAND (expr, 1), sizetype, &tmp); + aff_combination_add (comb, &tmp); + return; + } core = get_inner_reference (TREE_OPERAND (expr, 0), &bitsize, &bitpos, &toffset, &mode, &unsignedp, &volatilep, false); @@ -331,6 +340,25 @@ tree_to_aff_combination (tree expr, tree } return; + case MEM_REF: + if (TREE_CODE (TREE_OPERAND (expr, 0)) == ADDR_EXPR) + tree_to_aff_combination (TREE_OPERAND (TREE_OPERAND (expr, 0), 0), + type, comb); + else if (integer_zerop (TREE_OPERAND (expr, 1))) + { + aff_combination_elt (comb, type, expr); + return; + } + else + aff_combination_elt (comb, type, + build2 (MEM_REF, TREE_TYPE (expr), + TREE_OPERAND (expr, 0), + build_int_cst + (TREE_TYPE (TREE_OPERAND (expr, 1)), 0))); + tree_to_aff_combination (TREE_OPERAND (expr, 1), sizetype, &tmp); + aff_combination_add (comb, &tmp); + return; + default: break; } Index: gcc/tree-vect-data-refs.c =================================================================== --- gcc/tree-vect-data-refs.c (.../trunk) (revision 161367) +++ gcc/tree-vect-data-refs.c (.../branches/mem-ref2) (revision 161369) @@ -2398,12 +2398,9 @@ vect_create_addr_base_for_vector_ref (gi data_ref_base, base_offset); else { - if (TREE_CODE (DR_REF (dr)) == INDIRECT_REF) - addr_base = unshare_expr (TREE_OPERAND (DR_REF (dr), 0)); - else - addr_base = build1 (ADDR_EXPR, - build_pointer_type (TREE_TYPE (DR_REF (dr))), - unshare_expr (DR_REF (dr))); + addr_base = build1 (ADDR_EXPR, + build_pointer_type (TREE_TYPE (DR_REF (dr))), + unshare_expr (DR_REF (dr))); } vect_ptr_type = build_pointer_type (STMT_VINFO_VECTYPE (stmt_info)); Index: gcc/tree-ssa-phiopt.c =================================================================== --- gcc/tree-ssa-phiopt.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-phiopt.c (.../branches/mem-ref2) (revision 161369) @@ -994,10 +994,10 @@ abs_replacement (basic_block cond_bb, ba /* Auxiliary functions to determine the set of memory accesses which can't trap because they are preceded by accesses to the same memory - portion. We do that for INDIRECT_REFs, so we only need to track + portion. 
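(Coming back to the tree-affine hunks above: the new ADDR_EXPR and MEM_REF cases simply decompose &MEM[ptr + CST] the same way as a POINTER_PLUS_EXPR. The identity relied on, sketched in C -- it holds on the usual flat-address targets:)

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  struct elt { int pad; int f; };

  int
  main (void)
  {
    static struct elt a[8];
    int i = 5;
    /* &a[i].f is the affine combination
         (uintptr_t) a + i * sizeof (struct elt) + offsetof (struct elt, f)
       which tree_to_aff_combination now also recovers from the
       &MEM[ptr + CST] form.  */
    assert ((uintptr_t) &a[i].f
            == (uintptr_t) a + i * sizeof a[0] + offsetof (struct elt, f));
    return 0;
  }
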
We do that for MEM_REFs, so we only need to track the SSA_NAME of the pointer indirectly referenced. The algorithm simply is a walk over all instructions in dominator order. When - we see an INDIRECT_REF we determine if we've already seen a same + we see a MEM_REF we determine if we've already seen the same ref anywhere up to the root of the dominator tree. If we do the current access can't trap. If we don't see any dominating access the current access might trap, but might also make later accesses @@ -1011,7 +1011,7 @@ abs_replacement (basic_block cond_bb, ba trap even if a store doesn't (write-only memory). This probably is overly conservative. */ -/* A hash-table of SSA_NAMEs, and in which basic block an INDIRECT_REF +/* A hash-table of SSA_NAMEs, and in which basic block a MEM_REF through it was seen, which would constitute a no-trap region for same accesses. */ struct name_to_bb @@ -1024,7 +1024,7 @@ struct name_to_bb /* The hash table for remembering what we've seen. */ static htab_t seen_ssa_names; -/* The set of INDIRECT_REFs which can't trap. */ +/* The set of MEM_REFs which can't trap. */ static struct pointer_set_t *nontrap_set; /* The hash function, based on the pointer to the pointer SSA_NAME. */ @@ -1047,7 +1047,7 @@ name_to_bb_eq (const void *p1, const voi } /* We see the expression EXP in basic block BB. If it's an interesting - expression (an INDIRECT_REF through an SSA_NAME) possibly insert the + expression (a MEM_REF through an SSA_NAME) possibly insert the expression into the set NONTRAP or the hash table of seen expressions. STORE is true if this expression is on the LHS, otherwise it's on the RHS. */ @@ -1055,7 +1055,7 @@ static void add_or_mark_expr (basic_block bb, tree exp, struct pointer_set_t *nontrap, bool store) { - if (INDIRECT_REF_P (exp) + if (TREE_CODE (exp) == MEM_REF && TREE_CODE (TREE_OPERAND (exp, 0)) == SSA_NAME) { tree name = TREE_OPERAND (exp, 0); @@ -1064,7 +1064,7 @@ add_or_mark_expr (basic_block bb, tree e struct name_to_bb *n2bb; basic_block found_bb = 0; - /* Try to find the last seen INDIRECT_REF through the same + /* Try to find the last seen MEM_REF through the same SSA_NAME, which can trap. */ map.ssa_name = name; map.bb = 0; @@ -1074,7 +1074,7 @@ add_or_mark_expr (basic_block bb, tree e if (n2bb) found_bb = n2bb->bb; - /* If we've found a trapping INDIRECT_REF, _and_ it dominates EXP + /* If we've found a trapping MEM_REF, _and_ it dominates EXP (it's in a basic block on the path from us to the dominator root) then we can't trap. */ if (found_bb && found_bb->aux == (void *)1) @@ -1135,7 +1135,7 @@ nt_fini_block (struct dom_walk_data *dat /* This is the entry point of gathering non trapping memory accesses. It will do a dominator walk over the whole function, and it will make use of the bb->aux pointers. It returns a set of trees - (the INDIRECT_REFs itself) which can't trap. */ + (the MEM_REFs themselves) which can't trap. */ static struct pointer_set_t * get_non_trapping (void) { @@ -1200,7 +1200,8 @@ cond_store_replacement (basic_block midd locus = gimple_location (assign); lhs = gimple_assign_lhs (assign); rhs = gimple_assign_rhs1 (assign); - if (!INDIRECT_REF_P (lhs)) + if (TREE_CODE (lhs) != MEM_REF + || TREE_CODE (TREE_OPERAND (lhs, 0)) != SSA_NAME) return false; /* RHS is either a single SSA_NAME or a constant. 
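(The transform all of this guards, sketched in C rather than GIMPLE: a conditional store becomes an unconditional store of a selected value, which is only safe when a dominating access proves the location cannot trap. Function names are illustrative.)

  #include <assert.h>

  /* Before: the store happens only on the taken path.  */
  int
  before (int *p, int c, int v)
  {
    int t = *p;          /* dominating load: *p cannot trap below */
    if (c)
      *p = v;
    return t;
  }

  /* After cond_store_replacement: one unconditional store of either
     the new value or the old one.  */
  int
  after (int *p, int c, int v)
  {
    int t = *p;
    *p = c ? v : t;      /* store now executes unconditionally */
    return t;
  }

  int
  main (void)
  {
    int x = 1;
    assert (before (&x, 1, 5) == 1 && x == 5);
    x = 1;
    assert (after (&x, 1, 5) == 1 && x == 5);
    return 0;
  }
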
*/ Index: gcc/tree-ssa-pre.c =================================================================== --- gcc/tree-ssa-pre.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-pre.c (.../branches/mem-ref2) (revision 161369) @@ -1629,12 +1629,28 @@ phi_translate_1 (pre_expr expr, bitmap_s newop.op0 = op0; newop.op1 = op1; newop.op2 = op2; + /* If it transforms a non-constant ARRAY_REF into a constant + one, adjust the constant offset. */ + if (newop.opcode == ARRAY_REF + && newop.off == -1 + && TREE_CODE (op0) == INTEGER_CST + && TREE_CODE (op1) == INTEGER_CST + && TREE_CODE (op2) == INTEGER_CST) + { + double_int off = tree_to_double_int (op0); + off = double_int_add (off, + double_int_neg + (tree_to_double_int (op1))); + off = double_int_mul (off, tree_to_double_int (op2)); + if (double_int_fits_in_shwi_p (off)) + newop.off = off.low; + } VEC_replace (vn_reference_op_s, newoperands, j, &newop); /* If it transforms from an SSA_NAME to an address, fold with a preceding indirect reference. */ if (j > 0 && op0 && TREE_CODE (op0) == ADDR_EXPR && VEC_index (vn_reference_op_s, - newoperands, j - 1)->opcode == INDIRECT_REF) + newoperands, j - 1)->opcode == MEM_REF) vn_reference_fold_indirect (&newoperands, &j); } if (i != VEC_length (vn_reference_op_s, operands)) @@ -1661,6 +1677,7 @@ phi_translate_1 (pre_expr expr, bitmap_s { unsigned int new_val_id; pre_expr constant; + bool converted = false; tree result = vn_reference_lookup_pieces (newvuse, ref->set, ref->type, @@ -1669,6 +1686,13 @@ phi_translate_1 (pre_expr expr, bitmap_s if (result) VEC_free (vn_reference_op_s, heap, newoperands); + if (result + && !useless_type_conversion_p (ref->type, TREE_TYPE (result))) + { + result = fold_build1 (VIEW_CONVERT_EXPR, ref->type, result); + converted = true; + } + if (result && is_gimple_min_invariant (result)) { gcc_assert (!newoperands); @@ -1679,7 +1703,54 @@ phi_translate_1 (pre_expr expr, bitmap_s expr->kind = REFERENCE; expr->id = 0; - if (newref) + if (converted) + { + vn_nary_op_t nary; + tree nresult; + + gcc_assert (CONVERT_EXPR_P (result) + || TREE_CODE (result) == VIEW_CONVERT_EXPR); + + nresult = vn_nary_op_lookup_pieces (1, TREE_CODE (result), + TREE_TYPE (result), + TREE_OPERAND (result, 0), + NULL_TREE, NULL_TREE, + NULL_TREE, + &nary); + if (nresult && is_gimple_min_invariant (nresult)) + return get_or_alloc_expr_for_constant (nresult); + + expr->kind = NARY; + if (nary) + { + PRE_EXPR_NARY (expr) = nary; + constant = fully_constant_expression (expr); + if (constant != expr) + return constant; + + new_val_id = nary->value_id; + get_or_alloc_expression_id (expr); + } + else + { + new_val_id = get_next_value_id (); + VEC_safe_grow_cleared (bitmap_set_t, heap, + value_expressions, + get_max_value_id() + 1); + nary = vn_nary_op_insert_pieces (1, TREE_CODE (result), + TREE_TYPE (result), + TREE_OPERAND (result, 0), + NULL_TREE, NULL_TREE, + NULL_TREE, NULL_TREE, + new_val_id); + PRE_EXPR_NARY (expr) = nary; + constant = fully_constant_expression (expr); + if (constant != expr) + return constant; + get_or_alloc_expression_id (expr); + } + } + else if (newref) { PRE_EXPR_REFERENCE (expr) = newref; constant = fully_constant_expression (expr); @@ -2598,7 +2669,7 @@ can_PRE_operation (tree op) return UNARY_CLASS_P (op) || BINARY_CLASS_P (op) || COMPARISON_CLASS_P (op) - || TREE_CODE (op) == INDIRECT_REF + || TREE_CODE (op) == MEM_REF || TREE_CODE (op) == COMPONENT_REF || TREE_CODE (op) == VIEW_CONVERT_EXPR || TREE_CODE (op) == CALL_EXPR @@ -2674,6 +2745,29 @@ create_component_ref_by_pieces_1 (basic_ return 
folded; } break; + case MEM_REF: + { + tree baseop = create_component_ref_by_pieces_1 (block, ref, operand, + stmts, domstmt); + tree offset = currop->op0; + if (!baseop) + return NULL_TREE; + if (TREE_CODE (baseop) == ADDR_EXPR + && handled_component_p (TREE_OPERAND (baseop, 0))) + { + HOST_WIDE_INT off; + tree base; + base = get_addr_base_and_unit_offset (TREE_OPERAND (baseop, 0), + &off); + gcc_assert (base); + offset = int_const_binop (PLUS_EXPR, offset, + build_int_cst (TREE_TYPE (offset), + off), 0); + baseop = build_fold_addr_expr (base); + } + return fold_build2 (MEM_REF, currop->type, baseop, offset); + } + break; case TARGET_MEM_REF: { vn_reference_op_t nextop = VEC_index (vn_reference_op_s, ref->operands, @@ -2728,7 +2822,6 @@ create_component_ref_by_pieces_1 (basic_ break; case ALIGN_INDIRECT_REF: case MISALIGNED_INDIRECT_REF: - case INDIRECT_REF: { tree folded; tree genop1 = create_component_ref_by_pieces_1 (block, ref, @@ -2880,7 +2973,7 @@ create_component_ref_by_pieces_1 (basic_ } /* For COMPONENT_REF's and ARRAY_REF's, we can't have any intermediates for the - COMPONENT_REF or INDIRECT_REF or ARRAY_REF portion, because we'd end up with + COMPONENT_REF or MEM_REF or ARRAY_REF portion, because we'd end up with trying to rename aggregates into ssa form directly, which is a no no. Thus, this routine doesn't create temporaries, it just builds a @@ -3131,7 +3224,7 @@ create_expression_by_pieces (basic_block VN_INFO (name)->value_id = value_id; nameexpr = get_or_alloc_expr_for_name (name); add_to_value (value_id, nameexpr); - if (!in_fre) + if (NEW_SETS (block)) bitmap_value_replace_in_set (NEW_SETS (block), nameexpr); bitmap_value_replace_in_set (AVAIL_OUT (block), nameexpr); @@ -3310,6 +3403,8 @@ insert_into_preds_of_block (basic_block avail[bprime->index] = get_or_alloc_expr_for_name (forcedexpr); } } + else + avail[bprime->index] = get_or_alloc_expr_for_constant (builtexpr); } } else if (eprime->kind == NAME) @@ -4723,7 +4818,7 @@ execute_pre (bool do_fre) if (!do_fre) loop_optimizer_init (LOOPS_NORMAL); - if (!run_scc_vn (do_fre)) + if (!run_scc_vn ()) { if (!do_fre) loop_optimizer_finalize (); Index: gcc/tree-sra.c =================================================================== --- gcc/tree-sra.c (.../trunk) (revision 161367) +++ gcc/tree-sra.c (.../branches/mem-ref2) (revision 161369) @@ -751,7 +751,8 @@ create_access (tree expr, gimple stmt, b base = get_ref_base_and_extent (expr, &offset, &size, &max_size); - if (sra_mode == SRA_MODE_EARLY_IPA && INDIRECT_REF_P (base)) + if (sra_mode == SRA_MODE_EARLY_IPA + && TREE_CODE (base) == MEM_REF) { base = get_ssa_base_param (TREE_OPERAND (base, 0)); if (!base) @@ -885,15 +886,10 @@ completely_scalarize_record (tree base, static void disqualify_base_of_expr (tree t, const char *reason) { - while (handled_component_p (t)) - t = TREE_OPERAND (t, 0); - - if (sra_mode == SRA_MODE_EARLY_IPA) - { - if (INDIRECT_REF_P (t)) - t = TREE_OPERAND (t, 0); - t = get_ssa_base_param (t); - } + t = get_base_address (t); + if (sra_mode == SRA_MODE_EARLY_IPA + && TREE_CODE (t) == MEM_REF) + t = get_ssa_base_param (TREE_OPERAND (t, 0)); if (t && DECL_P (t)) disqualify_candidate (t, reason); @@ -935,8 +931,9 @@ build_access_from_expr_1 (tree expr, gim switch (TREE_CODE (expr)) { - case INDIRECT_REF: - if (sra_mode != SRA_MODE_EARLY_IPA) + case MEM_REF: + if (TREE_CODE (TREE_OPERAND (expr, 0)) != ADDR_EXPR + && sra_mode != SRA_MODE_EARLY_IPA) return NULL; /* fall through */ case VAR_DECL: @@ -1285,7 +1282,21 @@ make_fancy_name_1 (tree expr) break; 
sprintf (buffer, HOST_WIDE_INT_PRINT_DEC, TREE_INT_CST_LOW (index)); obstack_grow (&name_obstack, buffer, strlen (buffer)); + break; + case ADDR_EXPR: + make_fancy_name_1 (TREE_OPERAND (expr, 0)); + break; + + case MEM_REF: + make_fancy_name_1 (TREE_OPERAND (expr, 0)); + if (!integer_zerop (TREE_OPERAND (expr, 1))) + { + obstack_1grow (&name_obstack, '$'); + sprintf (buffer, HOST_WIDE_INT_PRINT_DEC, + TREE_INT_CST_LOW (TREE_OPERAND (expr, 1))); + obstack_grow (&name_obstack, buffer, strlen (buffer)); + } break; case BIT_FIELD_REF: @@ -1308,7 +1319,11 @@ make_fancy_name (tree expr) return XOBFINISH (&name_obstack, char *); } -/* Helper function for build_ref_for_offset. */ +/* Helper function for build_ref_for_offset. + + FIXME: Eventually this should be rewritten to either re-use the + original access expression unshared (which is good for alias + analysis) or to build a MEM_REF expression. */ static bool build_ref_for_offset_1 (tree *res, tree type, HOST_WIDE_INT offset, @@ -1406,12 +1421,7 @@ build_ref_for_offset_1 (tree *res, tree type TYPE at the given OFFSET of the type EXP_TYPE. If EXPR is NULL, the function only determines whether it can build such a reference without actually doing it, otherwise, the tree it points to is unshared first and - then used as a base for furhter sub-references. - - FIXME: Eventually this should be replaced with - maybe_fold_offset_to_reference() from tree-ssa-ccp.c but that requires a - minor rewrite of fold_stmt. - */ + then used as a base for furhter sub-references. */ bool build_ref_for_offset (tree *expr, tree type, HOST_WIDE_INT offset, @@ -1426,7 +1436,7 @@ build_ref_for_offset (tree *expr, tree t { type = TREE_TYPE (type); if (expr) - *expr = fold_build1_loc (loc, INDIRECT_REF, type, *expr); + *expr = build_simple_mem_ref_loc (loc, *expr); } return build_ref_for_offset_1 (expr, type, offset, exp_type); @@ -3026,8 +3036,11 @@ ptr_parm_has_direct_uses (tree parm) tree lhs = gimple_get_lhs (stmt); while (handled_component_p (lhs)) lhs = TREE_OPERAND (lhs, 0); - if (INDIRECT_REF_P (lhs) - && TREE_OPERAND (lhs, 0) == name) + if (TREE_CODE (lhs) == MEM_REF + && TREE_OPERAND (lhs, 0) == name + && integer_zerop (TREE_OPERAND (lhs, 1)) + && types_compatible_p (TREE_TYPE (lhs), + TREE_TYPE (TREE_TYPE (name)))) uses_ok++; } if (gimple_assign_single_p (stmt)) @@ -3035,8 +3048,11 @@ ptr_parm_has_direct_uses (tree parm) tree rhs = gimple_assign_rhs1 (stmt); while (handled_component_p (rhs)) rhs = TREE_OPERAND (rhs, 0); - if (INDIRECT_REF_P (rhs) - && TREE_OPERAND (rhs, 0) == name) + if (TREE_CODE (rhs) == MEM_REF + && TREE_OPERAND (rhs, 0) == name + && integer_zerop (TREE_OPERAND (rhs, 1)) + && types_compatible_p (TREE_TYPE (rhs), + TREE_TYPE (TREE_TYPE (name)))) uses_ok++; } else if (is_gimple_call (stmt)) @@ -3047,8 +3063,11 @@ ptr_parm_has_direct_uses (tree parm) tree arg = gimple_call_arg (stmt, i); while (handled_component_p (arg)) arg = TREE_OPERAND (arg, 0); - if (INDIRECT_REF_P (arg) - && TREE_OPERAND (arg, 0) == name) + if (TREE_CODE (arg) == MEM_REF + && TREE_OPERAND (arg, 0) == name + && integer_zerop (TREE_OPERAND (arg, 1)) + && types_compatible_p (TREE_TYPE (arg), + TREE_TYPE (TREE_TYPE (name)))) uses_ok++; } } @@ -3917,8 +3936,11 @@ sra_ipa_modify_expr (tree *expr, bool co if (!base || size == -1 || max_size == -1) return false; - if (INDIRECT_REF_P (base)) - base = TREE_OPERAND (base, 0); + if (TREE_CODE (base) == MEM_REF) + { + offset += mem_ref_offset (base).low * BITS_PER_UNIT; + base = TREE_OPERAND (base, 0); + } base = get_ssa_base_param 
(base); if (!base || TREE_CODE (base) != PARM_DECL) @@ -3939,14 +3961,7 @@ sra_ipa_modify_expr (tree *expr, bool co return false; if (cand->by_ref) - { - tree folded; - src = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (cand->reduction)), - cand->reduction); - folded = gimple_fold_indirect_ref (src); - if (folded) - src = folded; - } + src = build_simple_mem_ref (cand->reduction); else src = cand->reduction; Index: gcc/tree-predcom.c =================================================================== --- gcc/tree-predcom.c (.../trunk) (revision 161367) +++ gcc/tree-predcom.c (.../branches/mem-ref2) (revision 161369) @@ -1345,14 +1345,16 @@ ref_at_iteration (struct loop *loop, tre if (!op0) return NULL_TREE; } - else if (!INDIRECT_REF_P (ref)) + else if (!INDIRECT_REF_P (ref) + && TREE_CODE (ref) != MEM_REF) return unshare_expr (ref); - if (INDIRECT_REF_P (ref)) + if (INDIRECT_REF_P (ref) + || TREE_CODE (ref) == MEM_REF) { - /* Take care for INDIRECT_REF and MISALIGNED_INDIRECT_REF at + /* Take care for MEM_REF and MISALIGNED_INDIRECT_REF at the same time. */ - ret = copy_node (ref); + ret = unshare_expr (ref); idx = TREE_OPERAND (ref, 0); idx_p = &TREE_OPERAND (ret, 0); } Index: gcc/tree-mudflap.c =================================================================== --- gcc/tree-mudflap.c (.../trunk) (revision 161367) +++ gcc/tree-mudflap.c (.../branches/mem-ref2) (revision 161369) @@ -790,7 +790,8 @@ mf_xform_derefs_1 (gimple_stmt_iterator } else if (TREE_CODE (var) == COMPONENT_REF) var = TREE_OPERAND (var, 0); - else if (INDIRECT_REF_P (var)) + else if (INDIRECT_REF_P (var) + || TREE_CODE (var) == MEM_REF) { base = TREE_OPERAND (var, 0); break; @@ -863,6 +864,18 @@ mf_xform_derefs_1 (gimple_stmt_iterator base = addr; limit = fold_build2_loc (location, POINTER_PLUS_EXPR, ptr_type_node, fold_build2_loc (location, + POINTER_PLUS_EXPR, ptr_type_node, base, + size), + size_int (-1)); + break; + + case MEM_REF: + addr = build2 (POINTER_PLUS_EXPR, TREE_TYPE (TREE_OPERAND (t, 1)), + TREE_OPERAND (t, 0), + fold_convert (sizetype, TREE_OPERAND (t, 1))); + base = addr; + limit = fold_build2_loc (location, POINTER_PLUS_EXPR, ptr_type_node, + fold_build2_loc (location, POINTER_PLUS_EXPR, ptr_type_node, base, size), size_int (-1)); Index: gcc/ipa-prop.c =================================================================== --- gcc/ipa-prop.c (.../trunk) (revision 161367) +++ gcc/ipa-prop.c (.../branches/mem-ref2) (revision 161369) @@ -486,11 +486,12 @@ compute_complex_assign_jump_func (struct if (TREE_CODE (type) != RECORD_TYPE) return; op1 = get_ref_base_and_extent (op1, &offset, &size, &max_size); - if (TREE_CODE (op1) != INDIRECT_REF + if (TREE_CODE (op1) != MEM_REF /* If this is a varying address, punt. */ || max_size == -1 || max_size != size) return; + offset += mem_ref_offset (op1).low * BITS_PER_UNIT; op1 = TREE_OPERAND (op1, 0); if (TREE_CODE (op1) != SSA_NAME || !SSA_NAME_IS_DEFAULT_DEF (op1)) @@ -562,11 +563,12 @@ compute_complex_ancestor_jump_func (stru expr = TREE_OPERAND (expr, 0); expr = get_ref_base_and_extent (expr, &offset, &size, &max_size); - if (TREE_CODE (expr) != INDIRECT_REF + if (TREE_CODE (expr) != MEM_REF /* If this is a varying address, punt. 
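(The new MEM_REF case in mf_xform_derefs_1 above computes the checked region exactly like the INDIRECT_REF one, just with the constant offset added in first. The byte arithmetic, as a sketch with a hypothetical helper mf_range:)

  #include <stdio.h>
  #include <stdint.h>

  /* For an access MEM[ptr + off] of size bytes, the instrumented
     check covers the inclusive range [base, limit].  */
  static void
  mf_range (uintptr_t ptr, intptr_t off, size_t size,
            uintptr_t *base, uintptr_t *limit)
  {
    *base = ptr + off;            /* POINTER_PLUS_EXPR of ptr and offset */
    *limit = *base + size - 1;    /* base + size, then size_int (-1) */
  }

  int
  main (void)
  {
    uintptr_t base, limit;
    mf_range (0x1000, 8, 4, &base, &limit);   /* a 4-byte load at +8 */
    printf ("[%#lx, %#lx]\n", (unsigned long) base, (unsigned long) limit);
    return 0;
  }
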
*/ || max_size == -1 || max_size != size) return; + offset += mem_ref_offset (expr).low * BITS_PER_UNIT; parm = TREE_OPERAND (expr, 0); if (TREE_CODE (parm) != SSA_NAME || !SSA_NAME_IS_DEFAULT_DEF (parm)) @@ -1204,7 +1206,7 @@ ipa_analyze_virtual_call_uses (struct cg obj = TREE_OPERAND (obj, 0); } while (TREE_CODE (obj) == COMPONENT_REF); - if (TREE_CODE (obj) != INDIRECT_REF) + if (TREE_CODE (obj) != MEM_REF) return; obj = TREE_OPERAND (obj, 0); } Index: gcc/tree-ssa-forwprop.c =================================================================== --- gcc/tree-ssa-forwprop.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-forwprop.c (.../branches/mem-ref2) (revision 161369) @@ -628,9 +628,14 @@ forward_propagate_addr_into_variable_arr { tree index, tunit; gimple offset_def, use_stmt = gsi_stmt (*use_stmt_gsi); - tree tmp; + tree new_rhs, tmp; - tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (def_rhs))); + if (TREE_CODE (TREE_OPERAND (def_rhs, 0)) == ARRAY_REF) + tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (def_rhs))); + else if (TREE_CODE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))) == ARRAY_TYPE) + tunit = TYPE_SIZE_UNIT (TREE_TYPE (TREE_TYPE (TREE_TYPE (def_rhs)))); + else + return false; if (!host_integerp (tunit, 1)) return false; @@ -697,10 +702,28 @@ forward_propagate_addr_into_variable_arr /* Replace the pointer addition with array indexing. */ index = force_gimple_operand_gsi (use_stmt_gsi, index, true, NULL_TREE, true, GSI_SAME_STMT); - gimple_assign_set_rhs_from_tree (use_stmt_gsi, unshare_expr (def_rhs)); + if (TREE_CODE (TREE_OPERAND (def_rhs, 0)) == ARRAY_REF) + { + new_rhs = unshare_expr (def_rhs); + TREE_OPERAND (TREE_OPERAND (new_rhs, 0), 1) = index; + } + else + { + new_rhs = build4 (ARRAY_REF, TREE_TYPE (TREE_TYPE (TREE_TYPE (def_rhs))), + unshare_expr (TREE_OPERAND (def_rhs, 0)), + index, integer_zero_node, NULL_TREE); + new_rhs = build_fold_addr_expr (new_rhs); + if (!useless_type_conversion_p (TREE_TYPE (gimple_assign_lhs (use_stmt)), + TREE_TYPE (new_rhs))) + { + new_rhs = force_gimple_operand_gsi (use_stmt_gsi, new_rhs, true, + NULL_TREE, true, GSI_SAME_STMT); + new_rhs = fold_convert (TREE_TYPE (gimple_assign_lhs (use_stmt)), + new_rhs); + } + } + gimple_assign_set_rhs_from_tree (use_stmt_gsi, new_rhs); use_stmt = gsi_stmt (*use_stmt_gsi); - TREE_OPERAND (TREE_OPERAND (gimple_assign_rhs1 (use_stmt), 0), 1) - = index; /* That should have created gimple, so there is no need to record information to undo the propagation. */ @@ -725,11 +748,9 @@ forward_propagate_addr_expr_1 (tree name bool single_use_p) { tree lhs, rhs, rhs2, array_ref; - tree *rhsp, *lhsp; gimple use_stmt = gsi_stmt (*use_stmt_gsi); enum tree_code rhs_code; bool res = true; - bool addr_p = false; gcc_assert (TREE_CODE (def_rhs) == ADDR_EXPR); @@ -767,31 +788,120 @@ forward_propagate_addr_expr_1 (tree name return true; } + /* Propagate through constant pointer adjustments. */ + if (TREE_CODE (lhs) == SSA_NAME + && rhs_code == POINTER_PLUS_EXPR + && rhs == name + && TREE_CODE (gimple_assign_rhs2 (use_stmt)) == INTEGER_CST) + { + tree new_def_rhs; + /* As we come here with non-invariant addresses in def_rhs we need + to make sure we can build a valid constant offsetted address + for further propagation. Simply rely on fold building that + and check after the fact. 
*/ + new_def_rhs = fold_build2 (MEM_REF, TREE_TYPE (TREE_TYPE (rhs)), + def_rhs, + fold_convert (ptr_type_node, + gimple_assign_rhs2 (use_stmt))); + if (TREE_CODE (new_def_rhs) == MEM_REF + && TREE_CODE (TREE_OPERAND (new_def_rhs, 0)) == ADDR_EXPR + && !DECL_P (TREE_OPERAND (TREE_OPERAND (new_def_rhs, 0), 0)) + && !CONSTANT_CLASS_P (TREE_OPERAND (TREE_OPERAND (new_def_rhs, 0), 0))) + return false; + new_def_rhs = build_fold_addr_expr_with_type (new_def_rhs, + TREE_TYPE (rhs)); + + /* Recurse. If we could propagate into all uses of lhs do not + bother to replace into the current use but just pretend we did. */ + if (TREE_CODE (new_def_rhs) == ADDR_EXPR + && forward_propagate_addr_expr (lhs, new_def_rhs)) + return true; + + if (useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (new_def_rhs))) + gimple_assign_set_rhs_with_ops (use_stmt_gsi, TREE_CODE (new_def_rhs), + new_def_rhs, NULL_TREE); + else if (is_gimple_min_invariant (new_def_rhs)) + gimple_assign_set_rhs_with_ops (use_stmt_gsi, NOP_EXPR, + new_def_rhs, NULL_TREE); + else + return false; + gcc_assert (gsi_stmt (*use_stmt_gsi) == use_stmt); + update_stmt (use_stmt); + return true; + } + /* Now strip away any outer COMPONENT_REF/ARRAY_REF nodes from the LHS. ADDR_EXPR will not appear on the LHS. */ - lhsp = gimple_assign_lhs_ptr (use_stmt); - while (handled_component_p (*lhsp)) - lhsp = &TREE_OPERAND (*lhsp, 0); - lhs = *lhsp; + lhs = gimple_assign_lhs (use_stmt); + while (handled_component_p (lhs)) + lhs = TREE_OPERAND (lhs, 0); - /* Now see if the LHS node is an INDIRECT_REF using NAME. If so, + /* Now see if the LHS node is a MEM_REF using NAME. If so, propagate the ADDR_EXPR into the use of NAME and fold the result. */ - if (TREE_CODE (lhs) == INDIRECT_REF + if (TREE_CODE (lhs) == MEM_REF && TREE_OPERAND (lhs, 0) == name) { - if (may_propagate_address_into_dereference (def_rhs, lhs) - && (lhsp != gimple_assign_lhs_ptr (use_stmt) - || useless_type_conversion_p - (TREE_TYPE (TREE_OPERAND (def_rhs, 0)), TREE_TYPE (rhs)))) - { - *lhsp = unshare_expr (TREE_OPERAND (def_rhs, 0)); - fold_stmt_inplace (use_stmt); + tree def_rhs_base; + HOST_WIDE_INT def_rhs_offset; + /* If the address is invariant we can always fold it. */ + if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0), + &def_rhs_offset))) + { + double_int off = mem_ref_offset (lhs); + tree new_ptr; + off = double_int_add (off, + shwi_to_double_int (def_rhs_offset)); + if (TREE_CODE (def_rhs_base) == MEM_REF) + { + off = double_int_add (off, mem_ref_offset (def_rhs_base)); + new_ptr = TREE_OPERAND (def_rhs_base, 0); + } + else + new_ptr = build_fold_addr_expr (def_rhs_base); + TREE_OPERAND (lhs, 0) = new_ptr; + TREE_OPERAND (lhs, 1) + = double_int_to_tree (TREE_TYPE (TREE_OPERAND (lhs, 1)), off); tidy_after_forward_propagate_addr (use_stmt); - /* Continue propagating into the RHS if this was not the only use. */ if (single_use_p) return true; } + /* If the LHS is a plain dereference and the value type is the same as + that of the pointed-to type of the address we can put the + dereferenced address on the LHS preserving the original alias-type. 
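(The double_int arithmetic in the surrounding hunks is plain byte-offset accumulation: when the propagated address has a constant base and a known unit offset, the MEM_REF offset and the base offset are simply added. The underlying identity, with illustrative values; the 4B/8B offsets in the comment assume 4-byte int:)

  #include <assert.h>

  int
  main (void)
  {
    int a[4] = { 0, 1, 2, 3 };
    int *p = &a[1];      /* p_1 = &MEM[&a, 4] */
    /* MEM[p_1 + 4] and MEM[&a + 8] name the same location; forwprop
       now rewrites the former into the latter by summing the two
       constant offsets.  */
    assert (*(int *) ((char *) p + sizeof a[0]) == a[2]);
    return 0;
  }
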
*/ + else if (gimple_assign_lhs (use_stmt) == lhs + && useless_type_conversion_p + (TREE_TYPE (TREE_OPERAND (def_rhs, 0)), + TREE_TYPE (gimple_assign_rhs1 (use_stmt)))) + { + tree *def_rhs_basep = &TREE_OPERAND (def_rhs, 0); + tree new_offset, new_base, saved; + while (handled_component_p (*def_rhs_basep)) + def_rhs_basep = &TREE_OPERAND (*def_rhs_basep, 0); + saved = *def_rhs_basep; + if (TREE_CODE (*def_rhs_basep) == MEM_REF) + { + new_base = TREE_OPERAND (*def_rhs_basep, 0); + new_offset + = int_const_binop (PLUS_EXPR, TREE_OPERAND (lhs, 1), + TREE_OPERAND (*def_rhs_basep, 1), 0); + } + else + { + new_base = build_fold_addr_expr (*def_rhs_basep); + new_offset = TREE_OPERAND (lhs, 1); + } + *def_rhs_basep = build2 (MEM_REF, TREE_TYPE (*def_rhs_basep), + new_base, new_offset); + gimple_assign_set_lhs (use_stmt, + unshare_expr (TREE_OPERAND (def_rhs, 0))); + *def_rhs_basep = saved; + tidy_after_forward_propagate_addr (use_stmt); + /* Continue propagating into the RHS if this was not the + only use. */ + if (single_use_p) + return true; + } else /* We can have a struct assignment dereferencing our name twice. Note that we didn't propagate into the lhs to not falsely @@ -801,78 +911,75 @@ forward_propagate_addr_expr_1 (tree name /* Strip away any outer COMPONENT_REF, ARRAY_REF or ADDR_EXPR nodes from the RHS. */ - rhsp = gimple_assign_rhs1_ptr (use_stmt); - if (TREE_CODE (*rhsp) == ADDR_EXPR) - { - rhsp = &TREE_OPERAND (*rhsp, 0); - addr_p = true; - } - while (handled_component_p (*rhsp)) - rhsp = &TREE_OPERAND (*rhsp, 0); - rhs = *rhsp; + rhs = gimple_assign_rhs1 (use_stmt); + if (TREE_CODE (rhs) == ADDR_EXPR) + rhs = TREE_OPERAND (rhs, 0); + while (handled_component_p (rhs)) + rhs = TREE_OPERAND (rhs, 0); - /* Now see if the RHS node is an INDIRECT_REF using NAME. If so, + /* Now see if the RHS node is a MEM_REF using NAME. If so, propagate the ADDR_EXPR into the use of NAME and fold the result. */ - if (TREE_CODE (rhs) == INDIRECT_REF - && TREE_OPERAND (rhs, 0) == name - && may_propagate_address_into_dereference (def_rhs, rhs)) - { - *rhsp = unshare_expr (TREE_OPERAND (def_rhs, 0)); - fold_stmt_inplace (use_stmt); - tidy_after_forward_propagate_addr (use_stmt); - return res; - } - - /* Now see if the RHS node is an INDIRECT_REF using NAME. If so, - propagate the ADDR_EXPR into the use of NAME and try to - create a VCE and fold the result. */ - if (TREE_CODE (rhs) == INDIRECT_REF - && TREE_OPERAND (rhs, 0) == name - && TYPE_SIZE (TREE_TYPE (rhs)) - && TYPE_SIZE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))) - /* Function decls should not be used for VCE either as it could be a - function descriptor that we want and not the actual function code. */ - && TREE_CODE (TREE_OPERAND (def_rhs, 0)) != FUNCTION_DECL - /* We should not convert volatile loads to non volatile loads. */ - && !TYPE_VOLATILE (TREE_TYPE (rhs)) - && !TYPE_VOLATILE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))) - && operand_equal_p (TYPE_SIZE (TREE_TYPE (rhs)), - TYPE_SIZE (TREE_TYPE (TREE_OPERAND (def_rhs, 0))), 0) - /* Make sure we only do TBAA compatible replacements. */ - && get_alias_set (TREE_OPERAND (def_rhs, 0)) == get_alias_set (rhs)) - { - tree def_rhs_base, new_rhs = unshare_expr (TREE_OPERAND (def_rhs, 0)); - new_rhs = fold_build1 (VIEW_CONVERT_EXPR, TREE_TYPE (rhs), new_rhs); - if (TREE_CODE (new_rhs) != VIEW_CONVERT_EXPR) - { - /* If we have folded the VIEW_CONVERT_EXPR then the result is only - valid if we can replace the whole rhs of the use statement. 
*/ - if (rhs != gimple_assign_rhs1 (use_stmt)) - return false; - new_rhs = force_gimple_operand_gsi (use_stmt_gsi, new_rhs, true, NULL, - true, GSI_NEW_STMT); - gimple_assign_set_rhs1 (use_stmt, new_rhs); - tidy_after_forward_propagate_addr (use_stmt); - return res; - } - /* If the defining rhs comes from an indirect reference, then do not - convert into a VIEW_CONVERT_EXPR. Likewise if we'll end up taking - the address of a V_C_E of a constant. */ - def_rhs_base = TREE_OPERAND (def_rhs, 0); - while (handled_component_p (def_rhs_base)) - def_rhs_base = TREE_OPERAND (def_rhs_base, 0); - if (!INDIRECT_REF_P (def_rhs_base) - && (!addr_p - || !is_gimple_min_invariant (def_rhs))) - { - /* We may have arbitrary VIEW_CONVERT_EXPRs in a nested component - reference. Place it there and fold the thing. */ - *rhsp = new_rhs; - fold_stmt_inplace (use_stmt); - tidy_after_forward_propagate_addr (use_stmt); - return res; - } - } + if (TREE_CODE (rhs) == MEM_REF + && TREE_OPERAND (rhs, 0) == name) + { + tree def_rhs_base; + HOST_WIDE_INT def_rhs_offset; + if ((def_rhs_base = get_addr_base_and_unit_offset (TREE_OPERAND (def_rhs, 0), + &def_rhs_offset))) + { + double_int off = mem_ref_offset (rhs); + tree new_ptr; + off = double_int_add (off, + shwi_to_double_int (def_rhs_offset)); + if (TREE_CODE (def_rhs_base) == MEM_REF) + { + off = double_int_add (off, mem_ref_offset (def_rhs_base)); + new_ptr = TREE_OPERAND (def_rhs_base, 0); + } + else + new_ptr = build_fold_addr_expr (def_rhs_base); + TREE_OPERAND (rhs, 0) = new_ptr; + TREE_OPERAND (rhs, 1) + = double_int_to_tree (TREE_TYPE (TREE_OPERAND (rhs, 1)), off); + fold_stmt_inplace (use_stmt); + tidy_after_forward_propagate_addr (use_stmt); + return res; + } + /* If the LHS is a plain dereference and the value type is the same as + that of the pointed-to type of the address we can put the + dereferenced address on the LHS preserving the original alias-type. */ + else if (gimple_assign_rhs1 (use_stmt) == rhs + && useless_type_conversion_p + (TREE_TYPE (gimple_assign_lhs (use_stmt)), + TREE_TYPE (TREE_OPERAND (def_rhs, 0)))) + { + tree *def_rhs_basep = &TREE_OPERAND (def_rhs, 0); + tree new_offset, new_base, saved; + while (handled_component_p (*def_rhs_basep)) + def_rhs_basep = &TREE_OPERAND (*def_rhs_basep, 0); + saved = *def_rhs_basep; + if (TREE_CODE (*def_rhs_basep) == MEM_REF) + { + new_base = TREE_OPERAND (*def_rhs_basep, 0); + new_offset + = int_const_binop (PLUS_EXPR, TREE_OPERAND (rhs, 1), + TREE_OPERAND (*def_rhs_basep, 1), 0); + } + else + { + new_base = build_fold_addr_expr (*def_rhs_basep); + new_offset = TREE_OPERAND (rhs, 1); + } + *def_rhs_basep = build2 (MEM_REF, TREE_TYPE (*def_rhs_basep), + new_base, new_offset); + gimple_assign_set_rhs1 (use_stmt, + unshare_expr (TREE_OPERAND (def_rhs, 0))); + *def_rhs_basep = saved; + fold_stmt_inplace (use_stmt); + tidy_after_forward_propagate_addr (use_stmt); + return res; + } + } /* If the use of the ADDR_EXPR is not a POINTER_PLUS_EXPR, there is nothing to do. */ @@ -885,9 +992,10 @@ forward_propagate_addr_expr_1 (tree name element zero in an array. If that is not the case then there is nothing to do. 
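(The relaxed test below lets a bare &a serve as the base, not only a literal &a[0]. The rewrite itself is the usual equivalence between byte arithmetic and array indexing -- a minimal check, assuming nothing beyond standard C:)

  #include <assert.h>

  int
  main (void)
  {
    int a[8];
    long i = 3;
    /* p = &a + i*sizeof(int) (a POINTER_PLUS_EXPR in GIMPLE) is
       turned back into the ARRAY_REF form &a[i].  */
    int *p = (int *) ((char *) a + i * sizeof a[0]);
    assert (p == &a[i]);
    return 0;
  }
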
*/ array_ref = TREE_OPERAND (def_rhs, 0); - if (TREE_CODE (array_ref) != ARRAY_REF - || TREE_CODE (TREE_TYPE (TREE_OPERAND (array_ref, 0))) != ARRAY_TYPE - || TREE_CODE (TREE_OPERAND (array_ref, 1)) != INTEGER_CST) + if ((TREE_CODE (array_ref) != ARRAY_REF + || TREE_CODE (TREE_TYPE (TREE_OPERAND (array_ref, 0))) != ARRAY_TYPE + || TREE_CODE (TREE_OPERAND (array_ref, 1)) != INTEGER_CST) + && TREE_CODE (TREE_TYPE (array_ref)) != ARRAY_TYPE) return false; rhs2 = gimple_assign_rhs2 (use_stmt); @@ -923,7 +1031,8 @@ forward_propagate_addr_expr_1 (tree name array elements, then the result is converted into the proper type for the arithmetic. */ if (TREE_CODE (rhs2) == SSA_NAME - && integer_zerop (TREE_OPERAND (array_ref, 1)) + && (TREE_CODE (array_ref) != ARRAY_REF + || integer_zerop (TREE_OPERAND (array_ref, 1))) && useless_type_conversion_p (TREE_TYPE (name), TREE_TYPE (def_rhs)) /* Avoid problems with IVopts creating PLUS_EXPRs with a different type than their operands. */ @@ -1300,13 +1409,35 @@ tree_ssa_forward_propagate_single_use_va else gsi_next (&gsi); } - else if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR - && is_gimple_min_invariant (rhs)) + else if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR) { - /* Make sure to fold &a[0] + off_1 here. */ - fold_stmt_inplace (stmt); - update_stmt (stmt); - if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR) + if (TREE_CODE (gimple_assign_rhs2 (stmt)) == INTEGER_CST + /* ??? Better adjust the interface to that function + instead of building new trees here. */ + && forward_propagate_addr_expr + (lhs, + build1 (ADDR_EXPR, + TREE_TYPE (rhs), + fold_build2 (MEM_REF, + TREE_TYPE (TREE_TYPE (rhs)), + rhs, + fold_convert + (ptr_type_node, + gimple_assign_rhs2 (stmt)))))) + { + release_defs (stmt); + todoflags |= TODO_remove_unused_locals; + gsi_remove (&gsi, true); + } + else if (is_gimple_min_invariant (rhs)) + { + /* Make sure to fold &a[0] + off_1 here. */ + fold_stmt_inplace (stmt); + update_stmt (stmt); + if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR) + gsi_next (&gsi); + } + else gsi_next (&gsi); } else if ((gimple_assign_rhs_code (stmt) == BIT_NOT_EXPR Index: gcc/tree-ssa-dce.c =================================================================== --- gcc/tree-ssa-dce.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-dce.c (.../branches/mem-ref2) (revision 161369) @@ -499,6 +499,9 @@ ref_may_be_aliased (tree ref) { while (handled_component_p (ref)) ref = TREE_OPERAND (ref, 0); + if (TREE_CODE (ref) == MEM_REF + && TREE_CODE (TREE_OPERAND (ref, 0)) == ADDR_EXPR) + ref = TREE_OPERAND (TREE_OPERAND (ref, 0), 0); return !(DECL_P (ref) && !may_be_aliased (ref)); } Index: gcc/tree-ssa-ter.c =================================================================== --- gcc/tree-ssa-ter.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-ter.c (.../branches/mem-ref2) (revision 161369) @@ -616,6 +616,24 @@ find_replaceable_in_bb (temp_expr_table_ } } + /* If the stmt does a memory store and the replacement + is a load aliasing it avoid creating overlapping + assignments which we cannot expand correctly. 
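(The hazard the new TER check below avoids is easiest to see in source form. If the temporary were substituted into its use, the load would move past a possibly-aliasing store and read the wrong value -- a sketch:)

  #include <assert.h>

  void
  copy_then_clobber (int *p, int *q, int *r)
  {
    int t = *p;   /* must not be moved past the next store */
    *q = 0;       /* may alias *p */
    *r = t;       /* has to receive the old value of *p */
  }

  int
  main (void)
  {
    int x = 7, y;
    copy_then_clobber (&x, &x, &y);   /* p and q do alias here */
    assert (y == 7);                  /* naive substitution would store 0 */
    return 0;
  }
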
*/ + if (gimple_vdef (stmt) + && gimple_assign_single_p (stmt)) + { + gimple def_stmt = SSA_NAME_DEF_STMT (use); + while (is_gimple_assign (def_stmt) + && gimple_assign_rhs_code (def_stmt) == SSA_NAME) + def_stmt + = SSA_NAME_DEF_STMT (gimple_assign_rhs1 (def_stmt)); + if (gimple_vuse (def_stmt) + && gimple_assign_single_p (def_stmt) + && refs_may_alias_p (gimple_assign_lhs (stmt), + gimple_assign_rhs1 (def_stmt))) + same_root_var = true; + } + /* Mark expression as replaceable unless stmt is volatile or the def variable has the same root variable as something in the substitution list. */ Index: gcc/tree-nested.c =================================================================== --- gcc/tree-nested.c (.../trunk) (revision 161367) +++ gcc/tree-nested.c (.../branches/mem-ref2) (revision 161369) @@ -84,6 +84,7 @@ struct nesting_info struct pointer_map_t *field_map; struct pointer_map_t *var_map; + struct pointer_set_t *mem_refs; bitmap suppress_expansion; tree context; @@ -717,6 +718,7 @@ create_nesting_tree (struct cgraph_node struct nesting_info *info = XCNEW (struct nesting_info); info->field_map = pointer_map_create (); info->var_map = pointer_map_create (); + info->mem_refs = pointer_set_create (); info->suppress_expansion = BITMAP_ALLOC (&nesting_info_bitmap_obstack); info->context = cgn->decl; @@ -758,7 +760,7 @@ get_static_chain (struct nesting_info *i { tree field = get_chain_field (i); - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL_TREE); x = init_tmp_var (info, x, gsi); } @@ -793,12 +795,12 @@ get_frame_field (struct nesting_info *in { tree field = get_chain_field (i); - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL_TREE); x = init_tmp_var (info, x, gsi); } - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); } x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL_TREE); @@ -841,16 +843,16 @@ get_nonlocal_debug_decl (struct nesting_ for (i = info->outer; i->context != target_context; i = i->outer) { field = get_chain_field (i); - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL_TREE); } - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); } field = lookup_field_for_decl (i, decl, INSERT); x = build3 (COMPONENT_REF, TREE_TYPE (field), x, field, NULL_TREE); if (use_pointer_in_frame (decl)) - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); /* ??? We should be remapping types as well, surely. */ new_decl = build_decl (DECL_SOURCE_LOCATION (decl), @@ -927,7 +929,7 @@ convert_nonlocal_reference_op (tree *tp, if (use_pointer_in_frame (t)) { x = init_tmp_var (info, x, &wi->gsi); - x = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (x)), x); + x = build_simple_mem_ref (x); } } @@ -1498,6 +1500,21 @@ convert_local_reference_op (tree *tp, in wi->val_only = save_val_only; break; + case MEM_REF: + save_val_only = wi->val_only; + wi->val_only = true; + wi->is_lhs = false; + walk_tree (&TREE_OPERAND (t, 0), convert_local_reference_op, + wi, NULL); + /* We need to re-fold the MEM_REF as component references as + part of a ADDR_EXPR address are not allowed. But we cannot + fold here, as the chain record type is not yet finalized. 
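(The build_simple_mem_ref calls in the tree-nested hunks above materialize the static-chain walks for nested functions. In source terms -- nested functions are a GNU C extension, so this sketch needs GCC:)

  #include <stdio.h>

  int
  outer (void)
  {
    int x = 41;

    int inner (void)
    {
      /* The read of x is lowered to a walk through the static chain,
         roughly COMPONENT_REF (MEM_REF (chain), x) after this patch.  */
      return x + 1;
    }

    return inner ();
  }

  int
  main (void)
  {
    printf ("%d\n", outer ());   /* prints 42 */
    return 0;
  }
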
*/ + if (TREE_CODE (TREE_OPERAND (t, 0)) == ADDR_EXPR + && !DECL_P (TREE_OPERAND (TREE_OPERAND (t, 0), 0))) + pointer_set_insert (info->mem_refs, tp); + wi->val_only = save_val_only; + break; + case VIEW_CONVERT_EXPR: /* Just request to look at the subtrees, leaving val_only and lhs untouched. This might actually be for !val_only + lhs, in which @@ -2247,6 +2264,15 @@ remap_vla_decls (tree block, struct nest pointer_map_destroy (id.cb.decl_map); } +/* Fold the MEM_REF *E. */ +static bool +fold_mem_refs (const void *e, void *data ATTRIBUTE_UNUSED) +{ + tree *ref_p = CONST_CAST2(tree *, const tree *, (const tree *)e); + *ref_p = fold (*ref_p); + return true; +} + /* Do "everything else" to clean up or complete state collected by the various walking passes -- lay out the types and decls, generate code to initialize the frame decl, store critical expressions in the @@ -2461,6 +2487,9 @@ finalize_nesting_tree_1 (struct nesting_ root->debug_var_chain); } + /* Fold the rewritten MEM_REF trees. */ + pointer_set_traverse (root->mem_refs, fold_mem_refs, NULL); + /* Dump the translated tree function. */ if (dump_file) { @@ -2514,6 +2543,7 @@ free_nesting_tree (struct nesting_info * next = iter_nestinfo_next (node); pointer_map_destroy (node->var_map); pointer_map_destroy (node->field_map); + pointer_set_destroy (node->mem_refs); free (node); node = next; } Index: gcc/tree-vect-stmts.c =================================================================== --- gcc/tree-vect-stmts.c (.../trunk) (revision 161367) +++ gcc/tree-vect-stmts.c (.../branches/mem-ref2) (revision 161369) @@ -3026,7 +3026,8 @@ vectorizable_store (gimple stmt, gimple_ && TREE_CODE (scalar_dest) != INDIRECT_REF && TREE_CODE (scalar_dest) != COMPONENT_REF && TREE_CODE (scalar_dest) != IMAGPART_EXPR - && TREE_CODE (scalar_dest) != REALPART_EXPR) + && TREE_CODE (scalar_dest) != REALPART_EXPR + && TREE_CODE (scalar_dest) != MEM_REF) return false; gcc_assert (gimple_assign_single_p (stmt)); @@ -3282,7 +3283,7 @@ vectorizable_store (gimple stmt, gimple_ vec_oprnd = VEC_index (tree, result_chain, i); if (aligned_access_p (first_dr)) - data_ref = build_fold_indirect_ref (dataref_ptr); + data_ref = build_simple_mem_ref (dataref_ptr); else { int mis = DR_MISALIGNMENT (first_dr); @@ -3421,7 +3422,8 @@ vectorizable_load (gimple stmt, gimple_s && code != INDIRECT_REF && code != COMPONENT_REF && code != IMAGPART_EXPR - && code != REALPART_EXPR) + && code != REALPART_EXPR + && code != MEM_REF) return false; if (!STMT_VINFO_DATA_REF (stmt_info)) @@ -3659,7 +3661,7 @@ vectorizable_load (gimple stmt, gimple_s { case dr_aligned: gcc_assert (aligned_access_p (first_dr)); - data_ref = build_fold_indirect_ref (dataref_ptr); + data_ref = build_simple_mem_ref (dataref_ptr); break; case dr_unaligned_supported: { Index: gcc/tree-ssa-phiprop.c =================================================================== --- gcc/tree-ssa-phiprop.c (.../trunk) (revision 161367) +++ gcc/tree-ssa-phiprop.c (.../branches/mem-ref2) (revision 161369) @@ -139,7 +139,7 @@ phiprop_insert_phi (basic_block bb, gimp edge e; gcc_assert (is_gimple_assign (use_stmt) - && gimple_assign_rhs_code (use_stmt) == INDIRECT_REF); + && gimple_assign_rhs_code (use_stmt) == MEM_REF); /* Build a new PHI node to replace the definition of the indirect reference lhs. */ @@ -295,8 +295,11 @@ propagate_with_phi (basic_block bb, gimp /* Check whether this is a load of *ptr. 
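(phiprop's subject transform in source form, before and after -- the extra offset and type checks above make sure every arm really is a plain, compatible dereference of the PHI result:)

  /* Before: one load through a PHI of two pointers.  */
  int
  before (int c, int *p, int *q)
  {
    int *r = c ? p : q;   /* r_1 = PHI <p, q> */
    return *r;            /* x_2 = MEM[r_1] */
  }

  /* After phiprop: a load in each predecessor, then a PHI of the
     loaded values.  */
  int
  after (int c, int *p, int *q)
  {
    int x;
    if (c)
      x = *p;
    else
      x = *q;
    return x;             /* x_3 = PHI <x_1, x_2> */
  }
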
Index: gcc/tree-object-size.c
===================================================================
--- gcc/tree-object-size.c	(.../trunk)	(revision 161367)
+++ gcc/tree-object-size.c	(.../branches/mem-ref2)	(revision 161369)
@@ -141,6 +141,10 @@ compute_object_offset (const_tree expr,
       off = size_binop (MULT_EXPR, TYPE_SIZE_UNIT (TREE_TYPE (expr)), t);
       break;
 
+    case MEM_REF:
+      gcc_assert (TREE_CODE (TREE_OPERAND (expr, 0)) == ADDR_EXPR);
+      return TREE_OPERAND (expr, 1);
+
     default:
       return error_mark_node;
     }
@@ -166,15 +170,21 @@ addr_object_size (struct object_size_inf
     pt_var = get_base_address (pt_var);
 
   if (pt_var
-      && TREE_CODE (pt_var) == INDIRECT_REF
+      && TREE_CODE (pt_var) == MEM_REF
       && TREE_CODE (TREE_OPERAND (pt_var, 0)) == SSA_NAME
       && POINTER_TYPE_P (TREE_TYPE (TREE_OPERAND (pt_var, 0))))
     {
       unsigned HOST_WIDE_INT sz;
 
       if (!osi || (object_size_type & 1) != 0)
-	sz = compute_builtin_object_size (TREE_OPERAND (pt_var, 0),
-					  object_size_type & ~1);
+	{
+	  sz = compute_builtin_object_size (TREE_OPERAND (pt_var, 0),
+					    object_size_type & ~1);
+	  if (host_integerp (TREE_OPERAND (pt_var, 1), 0))
+	    sz -= TREE_INT_CST_LOW (TREE_OPERAND (pt_var, 1));
+	  else
+	    sz = offset_limit;
+	}
       else
	{
	  tree var = TREE_OPERAND (pt_var, 0);
@@ -185,6 +195,10 @@ addr_object_size (struct object_size_inf
	    sz = object_sizes[object_size_type][SSA_NAME_VERSION (var)];
	  else
	    sz = unknown[object_size_type];
+	  if (host_integerp (TREE_OPERAND (pt_var, 1), 0))
+	    sz -= TREE_INT_CST_LOW (TREE_OPERAND (pt_var, 1));
+	  else
+	    sz = offset_limit;
	}
 
       if (sz != unknown[object_size_type] && sz < offset_limit)
@@ -225,7 +239,7 @@ addr_object_size (struct object_size_inf
	      && tree_int_cst_lt (pt_var_size,
				  TYPE_SIZE_UNIT (TREE_TYPE (var)))))
	var = pt_var;
-      else if (var != pt_var && TREE_CODE (pt_var) == INDIRECT_REF)
+      else if (var != pt_var && TREE_CODE (pt_var) == MEM_REF)
	{
	  tree v = var;
	  /* For &X->fld, compute object size only if fld isn't the last
@@ -328,12 +342,14 @@ addr_object_size (struct object_size_inf
     }
 
   if (var != pt_var && pt_var_size
-      && TREE_CODE (pt_var) == INDIRECT_REF
+      && TREE_CODE (pt_var) == MEM_REF
       && bytes != error_mark_node)
     {
       tree bytes2 = compute_object_offset (TREE_OPERAND (ptr, 0), pt_var);
       if (bytes2 != error_mark_node)
	{
+	  bytes2 = size_binop (PLUS_EXPR, bytes2,
+			       TREE_OPERAND (pt_var, 1));
	  if (TREE_CODE (bytes2) == INTEGER_CST
	      && tree_int_cst_lt (pt_var_size, bytes2))
	    bytes2 = size_zero_node;
@@ -746,10 +762,20 @@ plus_stmt_object_size (struct object_siz
   unsigned HOST_WIDE_INT bytes;
   tree op0, op1;
 
-  gcc_assert (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR);
-
-  op0 = gimple_assign_rhs1 (stmt);
-  op1 = gimple_assign_rhs2 (stmt);
+  if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
+    {
+      op0 = gimple_assign_rhs1 (stmt);
+      op1 = gimple_assign_rhs2 (stmt);
+    }
+  else if (gimple_assign_rhs_code (stmt) == ADDR_EXPR)
+    {
+      tree rhs = TREE_OPERAND (gimple_assign_rhs1 (stmt), 0);
+      gcc_assert (TREE_CODE (rhs) == MEM_REF);
+      op0 = TREE_OPERAND (rhs, 0);
+      op1 = TREE_OPERAND (rhs, 1);
+    }
+  else
+    gcc_unreachable ();
 
   if (object_sizes[object_size_type][varno] == unknown[object_size_type])
     return false;
@@ -897,13 +923,14 @@ collect_object_sizes_for (struct object_
     {
     case GIMPLE_ASSIGN:
       {
-	if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR)
+	tree rhs = gimple_assign_rhs1 (stmt);
+	if (gimple_assign_rhs_code (stmt) == POINTER_PLUS_EXPR
+	    || (gimple_assign_rhs_code (stmt) == ADDR_EXPR
+		&& TREE_CODE (TREE_OPERAND (rhs, 0)) == MEM_REF))
	  reexamine = plus_stmt_object_size (osi, var, stmt);
	else if (gimple_assign_single_p (stmt)
		 || gimple_assign_unary_nop_p (stmt))
	  {
-	    tree rhs = gimple_assign_rhs1 (stmt);
-
	    if (TREE_CODE (rhs) == SSA_NAME
		&& POINTER_TYPE_P (TREE_TYPE (rhs)))
	      reexamine = merge_object_sizes (osi, var, rhs, 0);
Index: gcc/tree-flow.h
===================================================================
--- gcc/tree-flow.h	(.../trunk)	(revision 161367)
+++ gcc/tree-flow.h	(.../branches/mem-ref2)	(revision 161369)
@@ -520,6 +520,7 @@ extern tree gimple_default_def (struct f
 extern bool stmt_references_abnormal_ssa_name (gimple);
 extern tree get_ref_base_and_extent (tree, HOST_WIDE_INT *,
				     HOST_WIDE_INT *, HOST_WIDE_INT *);
+extern tree get_addr_base_and_unit_offset (tree, HOST_WIDE_INT *);
 extern void find_referenced_vars_in (gimple);
 
 /* In tree-phinodes.c  */
@@ -602,6 +603,7 @@ void release_ssa_name_after_update_ssa (
 void compute_global_livein (bitmap, bitmap);
 void mark_sym_for_renaming (tree);
 void mark_set_for_renaming (bitmap);
+bool symbol_marked_for_renaming (tree);
 tree get_current_def (tree);
 void set_current_def (tree, tree);
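The tree-object-size changes let the constant MEM_REF offset take part
in the size computation.  The user-visible effect, sketched (assumes
the usual lowering of constant pointer arithmetic to &MEM[...] on the
branch):

  char buf[16];
  char *p = &buf[8];   /* lowered to p_1 = &MEM[(char *)&buf + 8B] */
  /* __builtin_object_size (p, 0) now evaluates to 8: the size of buf
     minus the 8-byte MEM_REF offset handled in plus_stmt_object_size.  */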
Index: gcc/Makefile.in
===================================================================
--- gcc/Makefile.in	(.../trunk)	(revision 161367)
+++ gcc/Makefile.in	(.../branches/mem-ref2)	(revision 161369)
@@ -2508,7 +2508,7 @@ tree-dfa.o : tree-dfa.c $(TREE_FLOW_H) $
    $(TREE_INLINE_H) $(HASHTAB_H) pointer-set.h $(FLAGS_H) $(FUNCTION_H) \
    $(TIMEVAR_H) convert.h $(TM_H) coretypes.h langhooks.h $(TREE_DUMP_H) \
    $(TREE_PASS_H) $(PARAMS_H) $(CGRAPH_H) $(BASIC_BLOCK_H) $(GIMPLE_H) \
-   tree-pretty-print.h
+   tree-pretty-print.h $(TOPLEV_H)
 tree-ssa-operands.o : tree-ssa-operands.c $(TREE_FLOW_H) $(CONFIG_H) \
    $(SYSTEM_H) $(TREE_H) $(GGC_H) $(DIAGNOSTIC_H) $(TREE_INLINE_H) \
    $(FLAGS_H) $(FUNCTION_H) $(TM_H) $(TIMEVAR_H) $(TREE_PASS_H) $(TOPLEV_H) \
@@ -2787,7 +2787,7 @@ tree-diagnostic.o : tree-diagnostic.c $(
 fold-const.o : fold-const.c $(CONFIG_H) $(SYSTEM_H) coretypes.h $(TM_H) \
    $(TREE_H) $(FLAGS_H) $(TOPLEV_H) $(HASHTAB_H) $(EXPR_H) $(RTL_H) \
    $(GGC_H) $(TM_P_H) langhooks.h $(MD5_H) intl.h $(TARGET_H) \
-   $(GIMPLE_H) realmpfr.h
+   $(GIMPLE_H) realmpfr.h $(TREE_FLOW_H)
 diagnostic.o : diagnostic.c $(CONFIG_H) $(SYSTEM_H) coretypes.h \
    version.h $(INPUT_H) intl.h $(DIAGNOSTIC_H) diagnostic.def
 opts.o : opts.c opts.h options.h $(TOPLEV_H) $(CONFIG_H) $(SYSTEM_H) \
Index: gcc/tree-ssa-structalias.c
===================================================================
--- gcc/tree-ssa-structalias.c	(.../trunk)	(revision 161367)
+++ gcc/tree-ssa-structalias.c	(.../branches/mem-ref2)	(revision 161369)
@@ -3107,7 +3107,8 @@ get_constraint_for_component_ref (tree t
      &0->a.b */
   forzero = t;
   while (handled_component_p (forzero)
-	 || INDIRECT_REF_P (forzero))
+	 || INDIRECT_REF_P (forzero)
+	 || TREE_CODE (forzero) == MEM_REF)
     forzero = TREE_OPERAND (forzero, 0);
 
   if (CONSTANT_CLASS_P (forzero) && integer_zerop (forzero))
@@ -3334,9 +3335,10 @@ get_constraint_for_1 (tree t, VEC (ce_s,
     {
       switch (TREE_CODE (t))
	{
-	case INDIRECT_REF:
+	case MEM_REF:
	  {
-	    get_constraint_for_1 (TREE_OPERAND (t, 0), results, address_p);
+	    get_constraint_for_ptr_offset (TREE_OPERAND (t, 0),
+					   TREE_OPERAND (t, 1), results);
	    do_deref (results);
	    return;
	  }
@@ -4572,7 +4574,11 @@ find_func_clobbers (gimple origt)
	tem = TREE_OPERAND (tem, 0);
       if ((DECL_P (tem)
	   && !auto_var_in_fn_p (tem, cfun->decl))
-	  || INDIRECT_REF_P (tem))
+	  || INDIRECT_REF_P (tem)
+	  || (TREE_CODE (tem) == MEM_REF
+	      && !(TREE_CODE (TREE_OPERAND (tem, 0)) == ADDR_EXPR
+		   && auto_var_in_fn_p
+			(TREE_OPERAND (TREE_OPERAND (tem, 0), 0), cfun->decl))))
	{
	  struct constraint_expr lhsc, *rhsp;
	  unsigned i;
@@ -4596,7 +4602,11 @@ find_func_clobbers (gimple origt)
	tem = TREE_OPERAND (tem, 0);
       if ((DECL_P (tem)
	   && !auto_var_in_fn_p (tem, cfun->decl))
-	  || INDIRECT_REF_P (tem)
+	  || INDIRECT_REF_P (tem)
+	  || (TREE_CODE (tem) == MEM_REF
+	      && !(TREE_CODE (TREE_OPERAND (tem, 0)) == ADDR_EXPR
+		   && auto_var_in_fn_p
+			(TREE_OPERAND (TREE_OPERAND (tem, 0), 0), cfun->decl))))
	{
	  struct constraint_expr lhs, *rhsp;
	  unsigned i;
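In get_constraint_for_1 the dereference and its constant offset now
arrive together as one MEM_REF, so get_constraint_for_ptr_offset can
keep the points-to solution field-sensitive where INDIRECT_REF only
ever produced a plain *p constraint.  For example (illustrative; the
offset assumes a 4-byte int):

  struct S { int a; int b; };

  int
  load_b (struct S *p)
  {
    /* On the branch this is MEM[(struct S *)p + 4B]; the constraint
       machinery sees the +4 offset together with the dereference.  */
    return p->b;
  }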
Index: gcc/ipa-struct-reorg.c
===================================================================
--- gcc/ipa-struct-reorg.c	(.../trunk)	(revision 161367)
+++ gcc/ipa-struct-reorg.c	(.../branches/mem-ref2)	(revision 161369)
@@ -421,6 +421,10 @@ decompose_indirect_ref_acc (tree str_dec
   if (!is_result_of_mult (before_cast, &acc->num, struct_size))
     return false;
 
+  /* ??? Add TREE_OPERAND (acc->ref, 1) to acc->offset.  */
+  if (!integer_zerop (TREE_OPERAND (acc->ref, 1)))
+    return false;
+
   return true;
 }
 
@@ -434,7 +438,7 @@ decompose_access (tree str_decl, struct
 {
   gcc_assert (acc->ref);
 
-  if (TREE_CODE (acc->ref) == INDIRECT_REF)
+  if (TREE_CODE (acc->ref) == MEM_REF)
     return decompose_indirect_ref_acc (str_decl, acc);
   else if (TREE_CODE (acc->ref) == ARRAY_REF)
     return true;
@@ -969,12 +973,12 @@ replace_field_acc (struct field_access_s
   type_wrapper_t *wr_p = NULL;
   struct ref_pos r_pos;
 
-  while (TREE_CODE (ref_var) == INDIRECT_REF
+  while (TREE_CODE (ref_var) == MEM_REF
	 || TREE_CODE (ref_var) == ARRAY_REF)
     {
       type_wrapper_t wr;
 
-      if ( TREE_CODE (ref_var) == INDIRECT_REF)
+      if (TREE_CODE (ref_var) == MEM_REF)
	{
	  wr.wrap = 0;
	  wr.domain = 0;
@@ -1001,7 +1005,7 @@ replace_field_acc (struct field_access_s
	new_ref = build4 (ARRAY_REF, type, new_ref,
			  wr_p->domain, NULL_TREE, NULL_TREE);
       else /* Pointer.  */
-	new_ref = build1 (INDIRECT_REF, type, new_ref);
+	new_ref = build_simple_mem_ref (new_ref);
       VEC_pop (type_wrapper_t, wrapper);
     }
 
@@ -1041,7 +1045,7 @@ static void
 replace_field_access_stmt (struct field_access_site *acc, tree new_type)
 {
 
-  if (TREE_CODE (acc->ref) == INDIRECT_REF
+  if (TREE_CODE (acc->ref) == MEM_REF
       ||TREE_CODE (acc->ref) == ARRAY_REF
       ||TREE_CODE (acc->ref) == VAR_DECL)
     replace_field_acc (acc, new_type);
@@ -1277,13 +1281,11 @@ insert_new_var_in_stmt (gimple stmt, tre
   pos = find_pos_in_stmt (stmt, var, &r_pos);
   gcc_assert (pos);
 
-  while (r_pos.container && (TREE_CODE(r_pos.container) == INDIRECT_REF
+  while (r_pos.container && (TREE_CODE(r_pos.container) == MEM_REF
			     || TREE_CODE(r_pos.container) == ADDR_EXPR))
     {
-      tree type = TREE_TYPE (TREE_TYPE (new_var));
-
-      if (TREE_CODE(r_pos.container) == INDIRECT_REF)
-	new_var = build1 (INDIRECT_REF, type, new_var);
+      if (TREE_CODE(r_pos.container) == MEM_REF)
+	new_var = build_simple_mem_ref (new_var);
       else
	new_var = build_fold_addr_expr (new_var);
       pos = find_pos_in_stmt (stmt, r_pos.container, &r_pos);
@@ -2530,7 +2532,7 @@ get_stmt_accesses (tree *tp, int *walk_s
	tree field_decl = TREE_OPERAND (t, 1);
 
-	if ((TREE_CODE (ref) == INDIRECT_REF
+	if ((TREE_CODE (ref) == MEM_REF
	     || TREE_CODE (ref) == ARRAY_REF
	     || TREE_CODE (ref) == VAR_DECL)
	    && TREE_CODE (field_decl) == FIELD_DECL)
@@ -4031,7 +4033,10 @@ reorg_structs (void)
 static unsigned int
 reorg_structs_drive (void)
 {
-  reorg_structs ();
+  /* IPA struct-reorg is completely broken - its analysis phase is
+     non-conservative (which is not the only reason it is broken).  */
+  if (0)
+    reorg_structs ();
   return 0;
 }
Index: gcc/config/i386/i386.c
===================================================================
--- gcc/config/i386/i386.c	(.../trunk)	(revision 161367)
+++ gcc/config/i386/i386.c	(.../branches/mem-ref2)	(revision 161369)
@@ -7051,11 +7051,17 @@ ix86_va_start (tree valist, rtx nextarg)
   f_ovf = TREE_CHAIN (f_fpr);
   f_sav = TREE_CHAIN (f_ovf);
 
-  valist = build1 (INDIRECT_REF, TREE_TYPE (TREE_TYPE (valist)), valist);
-  gpr = build3 (COMPONENT_REF, TREE_TYPE (f_gpr), valist, f_gpr, NULL_TREE);
-  fpr = build3 (COMPONENT_REF, TREE_TYPE (f_fpr), valist, f_fpr, NULL_TREE);
-  ovf = build3 (COMPONENT_REF, TREE_TYPE (f_ovf), valist, f_ovf, NULL_TREE);
-  sav = build3 (COMPONENT_REF, TREE_TYPE (f_sav), valist, f_sav, NULL_TREE);
+  valist = build_simple_mem_ref (valist);
+  TREE_TYPE (valist) = TREE_TYPE (sysv_va_list_type_node);
+  /* The following should be folded into the MEM_REF offset.  */
+  gpr = build3 (COMPONENT_REF, TREE_TYPE (f_gpr), unshare_expr (valist),
+		f_gpr, NULL_TREE);
+  fpr = build3 (COMPONENT_REF, TREE_TYPE (f_fpr), unshare_expr (valist),
+		f_fpr, NULL_TREE);
+  ovf = build3 (COMPONENT_REF, TREE_TYPE (f_ovf), unshare_expr (valist),
+		f_ovf, NULL_TREE);
+  sav = build3 (COMPONENT_REF, TREE_TYPE (f_sav), unshare_expr (valist),
+		f_sav, NULL_TREE);
 
   /* Count number of gp and fp argument registers used.  */
   words = crtl->args.info.words;
@@ -30578,6 +30584,8 @@ ix86_canonical_va_list_type (tree type)
     type = TREE_TYPE (type);
   else if (POINTER_TYPE_P (type) && POINTER_TYPE_P (TREE_TYPE(type)))
     type = TREE_TYPE (type);
+  else if (POINTER_TYPE_P (type) && TREE_CODE (TREE_TYPE (type)) == ARRAY_TYPE)
+    type = TREE_TYPE (type);
 
   if (TARGET_64BIT)
     {
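The new ARRAY_TYPE case in ix86_canonical_va_list_type covers pointers
to the va_list array type, which show up as soon as a va_list is passed
by address (illustrative):

  #include <stdarg.h>

  void consume (va_list *ap);   /* on x86-64: __va_list_tag (*)[1] */

  void
  wrap (int last, ...)
  {
    va_list ap;                 /* array type __va_list_tag[1] */
    va_start (ap, last);
    consume (&ap);              /* pointer to ARRAY_TYPE seen here */
    va_end (ap);
  }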