From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, rdsandiford@googlemail.com
Subject: [4/4] The rest of the tree_to_[su]hwi changes
References: <871u2cpq4u.fsf@talisman.default>
Date: Tue, 19 Nov 2013 12:10:51 +0000
In-Reply-To: <871u2cpq4u.fsf@talisman.default> (Richard Sandiford's message of "Tue, 19 Nov 2013 11:50:09 +0000")
Message-ID: <87hab8oalw.fsf@talisman.default>

This patch just changes TREE_INT_CST_LOW to tree_to_[su]hwi in cases
where there is already a protecting tree_fits_[su]hwi_p.  I've upped
the number of context lines in case that helps, but there are still
some hunks where the tree_fits_* call is too high up.
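For readers unfamiliar with the new accessors, every hunk below has the
same shape.  A minimal before/after sketch (here `t' and `use_value'
are illustrative placeholders, not code from the patch):

    /* Before: TREE_INT_CST_LOW extracts the raw low word of the
       INTEGER_CST, with no check that the value actually fits.  */
    if (tree_fits_shwi_p (t))
      use_value (TREE_INT_CST_LOW (t));

    /* After: tree_to_shwi does the same extraction but asserts
       tree_fits_shwi_p (t), so the guard and the access can no
       longer drift apart.  tree_fits_uhwi_p/tree_to_uhwi are the
       unsigned counterparts.  */
    if (tree_fits_shwi_p (t))
      use_value (tree_to_shwi (t));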
Thanks,
Richard


gcc/ada/
2013-11-19  Kenneth Zadeck
	    Mike Stump
	    Richard Sandiford

	* gcc-interface/cuintp.c (UI_From_gnu): Use tree_to_shwi.
	* gcc-interface/decl.c (gnat_to_gnu_entity): Use tree_to_uhwi.
	* gcc-interface/utils.c (make_packable_type): Likewise.

gcc/c-family/
2013-11-19  Kenneth Zadeck
	    Mike Stump
	    Richard Sandiford

	* c-ada-spec.c (is_simple_enum): Use tree_to_shwi and tree_to_uhwi
	instead of TREE_INT_CST_LOW, in cases where there is a protecting
	tree_fits_shwi_p or tree_fits_uhwi_p.
	(dump_generic_ada_node): Likewise.
	* c-format.c (check_format_arg): Likewise.
	* c-pretty-print.c (pp_c_integer_constant): Likewise.

gcc/
2013-11-19  Kenneth Zadeck
	    Mike Stump
	    Richard Sandiford

	* alias.c (ao_ref_from_mem): Use tree_to_shwi and tree_to_uhwi
	instead of TREE_INT_CST_LOW, in cases where there is a protecting
	tree_fits_shwi_p or tree_fits_uhwi_p.
	* builtins.c (fold_builtin_powi): Likewise.
	* config/epiphany/epiphany.c (epiphany_special_round_type_align):
	Likewise.
	* dbxout.c (dbxout_symbol): Likewise.
	* expr.c (expand_expr_real_1): Likewise.
	* fold-const.c (fold_single_bit_test, fold_plusminus_mult_expr)
	(fold_binary_loc): Likewise.
	* gimple-fold.c (fold_const_aggregate_ref_1): Likewise.
	* gimple-ssa-strength-reduction.c (stmt_cost): Likewise.
	* omp-low.c (lower_omp_for_lastprivate): Likewise.
	* simplify-rtx.c (delegitimize_mem_from_attrs): Likewise.
	* stor-layout.c (compute_record_mode): Likewise.
	* tree-cfg.c (verify_expr): Likewise.
	* tree-dfa.c (get_ref_base_and_extent): Likewise.
	* tree-pretty-print.c (dump_array_domain): Likewise.
	* tree-sra.c (build_user_friendly_ref_for_offset): Likewise.
	* tree-ssa-ccp.c (fold_builtin_alloca_with_align): Likewise.
	* tree-ssa-loop-ivopts.c (get_loop_invariant_expr_id): Likewise.
	* tree-ssa-math-opts.c (execute_cse_sincos): Likewise.
	* tree-ssa-phiopt.c (hoist_adjacent_loads): Likewise.
	* tree-ssa-reassoc.c (acceptable_pow_call): Likewise.
	* tree-ssa-sccvn.c (copy_reference_ops_from_ref): Likewise.
	(ao_ref_init_from_vn_reference, vn_reference_fold_indirect):
	Likewise.
	(vn_reference_lookup_3, simplify_binary_expression): Likewise.
	* tree-ssa-structalias.c (bitpos_of_field): Likewise.
	(get_constraint_for_1, push_fields_onto_fieldstack): Likewise.
	(create_variable_info_for_1): Likewise.
	* tree-vect-data-refs.c (vect_compute_data_ref_alignment): Likewise.
	(vect_verify_datarefs_alignment): Likewise.
	(vect_analyze_data_ref_accesses): Likewise.
	(vect_prune_runtime_alias_test_list): Likewise.
	* tree-vectorizer.h (NITERS_KNOWN_P): Likewise.

Index: gcc/ada/gcc-interface/cuintp.c
===================================================================
--- gcc/ada/gcc-interface/cuintp.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/ada/gcc-interface/cuintp.c	2013-11-19 12:09:07.933676448 +0000
@@ -150,28 +150,28 @@ UI_From_gnu (tree Input)
   Int_Vector vec;
 
 #if HOST_BITS_PER_WIDE_INT == 64
   /* On 64-bit hosts, tree_fits_shwi_p tells whether the input fits in
     a signed 64-bit integer.  Then a truncation tells whether it fits
     in a signed 32-bit integer.  */
   if (tree_fits_shwi_p (Input))
     {
-      HOST_WIDE_INT hw_input = TREE_INT_CST_LOW (Input);
+      HOST_WIDE_INT hw_input = tree_to_shwi (Input);
       if (hw_input == (int) hw_input)
         return UI_From_Int (hw_input);
     }
   else
     return No_Uint;
 #else
   /* On 32-bit hosts, tree_fits_shwi_p tells whether the input fits in
     a signed 32-bit integer.  Then a sign test tells whether it fits
     in a signed 64-bit integer.  */
   if (tree_fits_shwi_p (Input))
-    return UI_From_Int (TREE_INT_CST_LOW (Input));
+    return UI_From_Int (tree_to_shwi (Input));
   else if (TREE_INT_CST_HIGH (Input) < 0 && TYPE_UNSIGNED (gnu_type))
     return No_Uint;
 #endif
 
   gnu_base = build_int_cst (gnu_type, UI_Base);
   gnu_temp = Input;
 
   for (i = Max_For_Dint - 1; i >= 0; i--)
Index: gcc/ada/gcc-interface/decl.c
===================================================================
--- gcc/ada/gcc-interface/decl.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/ada/gcc-interface/decl.c	2013-11-19 12:09:07.934676456 +0000
@@ -4918,17 +4918,17 @@ gnat_to_gnu_entity (Entity_Id gnat_entit
               && !TYPE_FAT_POINTER_P (gnu_type))
             size = rm_size (gnu_type);
           else
             size = TYPE_SIZE (gnu_type);
 
           /* Consider an alignment as suspicious if the alignment/size
              ratio is greater or equal to the byte/bit ratio.  */
           if (tree_fits_uhwi_p (size)
-              && align >= TREE_INT_CST_LOW (size) * BITS_PER_UNIT)
+              && align >= tree_to_uhwi (size) * BITS_PER_UNIT)
             post_error_ne ("?suspiciously large alignment specified for&",
                            Expression (Alignment_Clause (gnat_entity)),
                            gnat_entity);
         }
     }
   else if (Is_Atomic (gnat_entity) && !gnu_size
            && tree_fits_uhwi_p (TYPE_SIZE (gnu_type))
            && integer_pow2p (TYPE_SIZE (gnu_type)))
Index: gcc/ada/gcc-interface/utils.c
===================================================================
--- gcc/ada/gcc-interface/utils.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/ada/gcc-interface/utils.c	2013-11-19 12:09:07.935676464 +0000
@@ -806,17 +806,17 @@ make_packable_type (tree type, bool in_r
       /* Do not try to shrink the size if the RM size is not constant.  */
       if (TYPE_CONTAINS_TEMPLATE_P (type)
           || !tree_fits_uhwi_p (TYPE_ADA_SIZE (type)))
         return type;
 
       /* Round the RM size up to a unit boundary to get the minimal size
          for a BLKmode record.  Give up if it's already the size.  */
-      new_size = TREE_INT_CST_LOW (TYPE_ADA_SIZE (type));
+      new_size = tree_to_uhwi (TYPE_ADA_SIZE (type));
       new_size = (new_size + BITS_PER_UNIT - 1) & -BITS_PER_UNIT;
       if (new_size == size)
         return type;
 
       align = new_size & -new_size;
       TYPE_ALIGN (new_type) = MIN (TYPE_ALIGN (type), align);
     }
Index: gcc/c-family/c-ada-spec.c
===================================================================
--- gcc/c-family/c-ada-spec.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/c-family/c-ada-spec.c	2013-11-19 12:09:07.938676488 +0000
@@ -1798,29 +1798,29 @@ dump_ada_template (pretty_printer *buffe
 }
 
 /* Return true if NODE is a simple enum types, that can be mapped to an
    Ada enum type directly.  */
 
 static bool
 is_simple_enum (tree node)
 {
-  unsigned HOST_WIDE_INT count = 0;
+  HOST_WIDE_INT count = 0;
   tree value;
 
   for (value = TYPE_VALUES (node); value; value = TREE_CHAIN (value))
     {
       tree int_val = TREE_VALUE (value);
 
       if (TREE_CODE (int_val) != INTEGER_CST)
         int_val = DECL_INITIAL (int_val);
 
       if (!tree_fits_shwi_p (int_val))
         return false;
-      else if (TREE_INT_CST_LOW (int_val) != count)
+      else if (tree_to_shwi (int_val) != count)
         return false;
 
       count++;
     }
 
   return true;
 }
 
@@ -2201,19 +2201,19 @@ dump_generic_ada_node (pretty_printer *b
 
     case INTEGER_CST:
       /* We treat the upper half of the sizetype range as negative.  This
         is consistent with the internal treatment and makes it possible
         to generate the (0 .. -1) range for flexible array members.  */
       if (TREE_TYPE (node) == sizetype)
         node = fold_convert (ssizetype, node);
       if (tree_fits_shwi_p (node))
-        pp_wide_integer (buffer, TREE_INT_CST_LOW (node));
+        pp_wide_integer (buffer, tree_to_shwi (node));
       else if (tree_fits_uhwi_p (node))
-        pp_unsigned_wide_integer (buffer, TREE_INT_CST_LOW (node));
+        pp_unsigned_wide_integer (buffer, tree_to_uhwi (node));
       else
         {
           tree val = node;
           unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (val);
           HOST_WIDE_INT high = TREE_INT_CST_HIGH (val);
 
           if (tree_int_cst_sgn (val) < 0)
             {
Index: gcc/c-family/c-format.c
===================================================================
--- gcc/c-family/c-format.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/c-family/c-format.c	2013-11-19 12:09:07.938676488 +0000
@@ -1534,17 +1534,17 @@ check_format_arg (void *ctx, tree format
   format_length = TREE_STRING_LENGTH (format_tree);
   if (array_size != 0)
     {
       /* Variable length arrays can't be initialized.  */
       gcc_assert (TREE_CODE (array_size) == INTEGER_CST);
 
       if (tree_fits_shwi_p (array_size))
         {
-          HOST_WIDE_INT array_size_value = TREE_INT_CST_LOW (array_size);
+          HOST_WIDE_INT array_size_value = tree_to_shwi (array_size);
           if (array_size_value > 0
               && array_size_value == (int) array_size_value
               && format_length > array_size_value)
             format_length = array_size_value;
         }
     }
   if (offset)
     {
Index: gcc/c-family/c-pretty-print.c
===================================================================
--- gcc/c-family/c-pretty-print.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/c-family/c-pretty-print.c	2013-11-19 12:09:07.939676496 +0000
@@ -911,19 +911,19 @@ pp_c_integer_constant (c_pretty_printer
   /* We are going to compare the type of I to other types using
      pointer comparison so we need to use its canonical type.  */
   tree type = TYPE_CANONICAL (TREE_TYPE (i))
              ? TYPE_CANONICAL (TREE_TYPE (i)) : TREE_TYPE (i);
 
   if (tree_fits_shwi_p (i))
-    pp_wide_integer (pp, TREE_INT_CST_LOW (i));
+    pp_wide_integer (pp, tree_to_shwi (i));
   else if (tree_fits_uhwi_p (i))
-    pp_unsigned_wide_integer (pp, TREE_INT_CST_LOW (i));
+    pp_unsigned_wide_integer (pp, tree_to_uhwi (i));
   else
     {
       unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (i);
       HOST_WIDE_INT high = TREE_INT_CST_HIGH (i);
       if (tree_int_cst_sgn (i) < 0)
         {
           pp_minus (pp);
           high = ~high + !low;
Index: gcc/alias.c
===================================================================
--- gcc/alias.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/alias.c	2013-11-19 12:09:07.954676615 +0000
@@ -334,18 +334,18 @@ ao_ref_from_mem (ao_ref *ref, const_rtx
   ref->max_size = ref->size;
 
   /* If MEM_OFFSET and MEM_SIZE get us outside of the base object of
     the MEM_EXPR punt.  This happens for STRICT_ALIGNMENT targets a lot.  */
   if (MEM_EXPR (mem) != get_spill_slot_decl (false)
       && (ref->offset < 0
          || (DECL_P (ref->base)
              && (!tree_fits_uhwi_p (DECL_SIZE (ref->base))
-                 || (TREE_INT_CST_LOW (DECL_SIZE ((ref->base)))
-                     < (unsigned HOST_WIDE_INT)(ref->offset + ref->size))))))
+                 || (tree_to_uhwi (DECL_SIZE (ref->base))
+                     < (unsigned HOST_WIDE_INT) (ref->offset + ref->size))))))
     return false;
 
   return true;
 }
 
 /* Query the alias-oracle on whether the two memory rtx X and MEM may
   alias.  If TBAA_P is set also apply TBAA.  Returns true if the
   two rtxen may alias, false otherwise.  */
Index: gcc/builtins.c
===================================================================
--- gcc/builtins.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/builtins.c	2013-11-19 12:09:07.937676480 +0000
@@ -8544,17 +8544,17 @@ fold_builtin_powi (location_t loc, tree
     return NULL_TREE;
 
   /* Optimize pow(1.0,y) = 1.0.  */
   if (real_onep (arg0))
     return omit_one_operand_loc (loc, type, build_real (type, dconst1), arg1);
 
   if (tree_fits_shwi_p (arg1))
     {
-      HOST_WIDE_INT c = TREE_INT_CST_LOW (arg1);
+      HOST_WIDE_INT c = tree_to_shwi (arg1);
 
       /* Evaluate powi at compile-time.  */
       if (TREE_CODE (arg0) == REAL_CST
          && !TREE_OVERFLOW (arg0))
        {
          REAL_VALUE_TYPE x;
          x = TREE_REAL_CST (arg0);
          real_powi (&x, TYPE_MODE (type), &x, c);
Index: gcc/config/epiphany/epiphany.c
===================================================================
--- gcc/config/epiphany/epiphany.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/config/epiphany/epiphany.c	2013-11-19 12:09:07.939676496 +0000
@@ -2754,20 +2754,20 @@ epiphany_special_round_type_align (tree
       tree offset, size;
 
       if (TREE_CODE (field) != FIELD_DECL
          || TREE_TYPE (field) == error_mark_node)
        continue;
       offset = bit_position (field);
       size = DECL_SIZE (field);
       if (!tree_fits_uhwi_p (offset) || !tree_fits_uhwi_p (size)
-         || TREE_INT_CST_LOW (offset) >= try_align
-         || TREE_INT_CST_LOW (size) >= try_align)
+         || tree_to_uhwi (offset) >= try_align
+         || tree_to_uhwi (size) >= try_align)
        return try_align;
-      total = TREE_INT_CST_LOW (offset) + TREE_INT_CST_LOW (size);
+      total = tree_to_uhwi (offset) + tree_to_uhwi (size);
       if (total > max)
        max = total;
     }
   if (max >= (HOST_WIDE_INT) try_align)
     align = try_align;
   else if (try_align > 32 && max >= 32)
     align = max > 32 ? 64 : 32;
   return align;
Index: gcc/dbxout.c
===================================================================
--- gcc/dbxout.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/dbxout.c	2013-11-19 12:09:07.954676615 +0000
@@ -2919,17 +2919,17 @@ dbxout_symbol (tree decl, int local ATTR
               || TREE_CODE (DECL_CONTEXT (decl)) == NAMESPACE_DECL)
           && TREE_PUBLIC (decl) == 0)
         {
           /* The sun4 assembler does not grok this.  */
 
           if (TREE_CODE (TREE_TYPE (decl)) == INTEGER_TYPE
               || TREE_CODE (TREE_TYPE (decl)) == ENUMERAL_TYPE)
             {
-              HOST_WIDE_INT ival = TREE_INT_CST_LOW (DECL_INITIAL (decl));
+              HOST_WIDE_INT ival = tree_to_shwi (DECL_INITIAL (decl));
 
               dbxout_begin_complex_stabs ();
               dbxout_symbol_name (decl, NULL, 'c');
               stabstr_S ("=i");
               stabstr_D (ival);
               dbxout_finish_complex_stabs (0, N_LSYM, 0, 0, 0);
               DBXOUT_DECR_NESTING;
               return 1;
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/expr.c	2013-11-19 12:09:07.941676512 +0000
@@ -9630,17 +9630,17 @@ expand_expr_real_1 (tree exp, rtx target
          might end up in a register.  */
       if (mem_ref_refers_to_non_mem_p (exp))
         {
           HOST_WIDE_INT offset = mem_ref_offset (exp).low;
           base = TREE_OPERAND (base, 0);
           if (offset == 0
               && tree_fits_uhwi_p (TYPE_SIZE (type))
               && (GET_MODE_BITSIZE (DECL_MODE (base))
-                  == TREE_INT_CST_LOW (TYPE_SIZE (type))))
+                  == tree_to_uhwi (TYPE_SIZE (type))))
             return expand_expr (build1 (VIEW_CONVERT_EXPR, type, base),
                                 target, tmode, modifier);
           if (TYPE_MODE (type) == BLKmode)
             {
               temp = assign_stack_temp (DECL_MODE (base),
                                         GET_MODE_SIZE (DECL_MODE (base)));
               store_expr (base, temp, 0, false);
               temp = adjust_address (temp, BLKmode, offset);
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/fold-const.c	2013-11-19 12:09:07.943676528 +0000
@@ -6640,20 +6640,20 @@ fold_single_bit_test (location_t loc, en
      Similarly for (A & C) == 0.  */
 
   /* If INNER is a right shift of a constant and it plus BITNUM does
     not overflow, adjust BITNUM and INNER.  */
   if (TREE_CODE (inner) == RSHIFT_EXPR
       && TREE_CODE (TREE_OPERAND (inner, 1)) == INTEGER_CST
       && tree_fits_uhwi_p (TREE_OPERAND (inner, 1))
       && bitnum < TYPE_PRECISION (type)
-      && (TREE_INT_CST_LOW (TREE_OPERAND (inner, 1))
+      && (tree_to_uhwi (TREE_OPERAND (inner, 1))
          < (unsigned) (TYPE_PRECISION (type) - bitnum)))
     {
-      bitnum += TREE_INT_CST_LOW (TREE_OPERAND (inner, 1));
+      bitnum += tree_to_uhwi (TREE_OPERAND (inner, 1));
       inner = TREE_OPERAND (inner, 0);
     }
 
   /* If we are going to be able to omit the AND below, we must do our
     operations as unsigned.  If we must use the AND, we have a choice.
     Normally unsigned is faster, but for some machines signed is.  */
 #ifdef LOAD_EXTEND_OP
   ops_unsigned = (LOAD_EXTEND_OP (operand_mode) == SIGN_EXTEND
@@ -7256,18 +7256,18 @@ fold_plusminus_mult_expr (location_t loc
     power-of-two factor in non-power-of-two multiplies.  This
     can help in multi-dimensional array access.  */
   else if (tree_fits_shwi_p (arg01)
          && tree_fits_shwi_p (arg11))
     {
       HOST_WIDE_INT int01, int11, tmp;
       bool swap = false;
       tree maybe_same;
-      int01 = TREE_INT_CST_LOW (arg01);
-      int11 = TREE_INT_CST_LOW (arg11);
+      int01 = tree_to_shwi (arg01);
+      int11 = tree_to_shwi (arg11);
 
       /* Move min of absolute values to int11.  */
       if (absu_hwi (int01) < absu_hwi (int11))
         {
          tmp = int01, int01 = int11, int11 = tmp;
          alt0 = arg00, arg00 = arg10, arg10 = alt0;
          maybe_same = arg01;
          swap = true;
@@ -12011,17 +12011,17 @@ fold_binary_loc (location_t loc,
        }
 
       /* If arg0 is derived from the address of an object or function, we may
        be able to fold this expression using the object or function's
        alignment.  */
       if (POINTER_TYPE_P (TREE_TYPE (arg0)) && tree_fits_uhwi_p (arg1))
        {
          unsigned HOST_WIDE_INT modulus, residue;
-         unsigned HOST_WIDE_INT low = TREE_INT_CST_LOW (arg1);
+         unsigned HOST_WIDE_INT low = tree_to_uhwi (arg1);
 
          modulus = get_pointer_modulus_and_residue (arg0, &residue,
                                                     integer_onep (arg1));
 
          /* This works because modulus is a power of 2.  If this weren't the
            case, we'd have to replace it by its greatest power-of-2
            divisor: modulus & -modulus.  */
          if (low < modulus)
@@ -12642,22 +12642,22 @@ fold_binary_loc (location_t loc,
         don't try to compute it in the compiler.  */
       if (TREE_CODE (arg1) == INTEGER_CST && tree_int_cst_sgn (arg1) < 0)
        return NULL_TREE;
 
       prec = element_precision (type);
 
       /* Turn (a OP c1) OP c2 into a OP (c1+c2).  */
       if (TREE_CODE (op0) == code && tree_fits_uhwi_p (arg1)
-         && TREE_INT_CST_LOW (arg1) < prec
+         && tree_to_uhwi (arg1) < prec
          && tree_fits_uhwi_p (TREE_OPERAND (arg0, 1))
-         && TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1)) < prec)
+         && tree_to_uhwi (TREE_OPERAND (arg0, 1)) < prec)
        {
-         unsigned int low = (TREE_INT_CST_LOW (TREE_OPERAND (arg0, 1))
-                             + TREE_INT_CST_LOW (arg1));
+         unsigned int low = (tree_to_uhwi (TREE_OPERAND (arg0, 1))
+                             + tree_to_uhwi (arg1));
 
          /* Deal with a OP (c1 + c2) being undefined but (a OP c1) OP c2
            being well defined.  */
          if (low >= prec)
            {
              if (code == LROTATE_EXPR || code == RROTATE_EXPR)
                low = low % prec;
              else if (TYPE_UNSIGNED (type) || code == LSHIFT_EXPR)
Index: gcc/gimple-fold.c
===================================================================
--- gcc/gimple-fold.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/gimple-fold.c	2013-11-19 12:09:07.955676623 +0000
@@ -3058,33 +3058,33 @@ fold_const_aggregate_ref_1 (tree t, tree
              TREE_CODE (low_bound) == INTEGER_CST)
          && (unit_size = array_ref_element_size (t),
              tree_fits_uhwi_p (unit_size))
          && (doffset = (TREE_INT_CST (idx) - TREE_INT_CST (low_bound))
                        .sext (TYPE_PRECISION (TREE_TYPE (idx))),
              doffset.fits_shwi ()))
        {
          offset = doffset.to_shwi ();
-         offset *= TREE_INT_CST_LOW (unit_size);
+         offset *= tree_to_uhwi (unit_size);
          offset *= BITS_PER_UNIT;
 
          base = TREE_OPERAND (t, 0);
          ctor = get_base_constructor (base, &offset, valueize);
          /* Empty constructor.  Always fold to 0.  */
          if (ctor == error_mark_node)
            return build_zero_cst (TREE_TYPE (t));
          /* Out of bound array access.  Value is undefined,
            but don't fold.  */
          if (offset < 0)
            return NULL_TREE;
          /* We can not determine ctor.  */
          if (!ctor)
            return NULL_TREE;
          return fold_ctor_reference (TREE_TYPE (t), ctor, offset,
-                                     TREE_INT_CST_LOW (unit_size)
+                                     tree_to_uhwi (unit_size)
                                      * BITS_PER_UNIT, base);
        }
     }
   /* Fallthru.  */
 
 case COMPONENT_REF:
 case BIT_FIELD_REF:
Index: gcc/gimple-ssa-strength-reduction.c
===================================================================
--- gcc/gimple-ssa-strength-reduction.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/gimple-ssa-strength-reduction.c	2013-11-19 12:09:07.944676536 +0000
@@ -605,17 +605,17 @@ stmt_cost (gimple gs, bool speed)
   lhs_mode = TYPE_MODE (TREE_TYPE (lhs));
 
   switch (gimple_assign_rhs_code (gs))
     {
     case MULT_EXPR:
       rhs2 = gimple_assign_rhs2 (gs);
 
       if (tree_fits_shwi_p (rhs2))
-       return mult_by_coeff_cost (TREE_INT_CST_LOW (rhs2), lhs_mode, speed);
+       return mult_by_coeff_cost (tree_to_shwi (rhs2), lhs_mode, speed);
 
       gcc_assert (TREE_CODE (rhs1) != INTEGER_CST);
       return mul_cost (speed, lhs_mode);
 
     case PLUS_EXPR:
     case POINTER_PLUS_EXPR:
     case MINUS_EXPR:
       return add_cost (speed, lhs_mode);
Index: gcc/omp-low.c
===================================================================
--- gcc/omp-low.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/omp-low.c	2013-11-19 12:09:07.945676544 +0000
@@ -8828,17 +8828,17 @@ lower_omp_for_lastprivate (struct omp_fo
   cond_code = fd->loop.cond_code;
   cond_code = cond_code == LT_EXPR ? GE_EXPR : LE_EXPR;
 
   /* When possible, use a strict equality expression.  This can let VRP
     type optimizations deduce the value and remove a copy.  */
   if (tree_fits_shwi_p (fd->loop.step))
     {
-      HOST_WIDE_INT step = TREE_INT_CST_LOW (fd->loop.step);
+      HOST_WIDE_INT step = tree_to_shwi (fd->loop.step);
       if (step == 1 || step == -1)
        cond_code = EQ_EXPR;
     }
 
   cond = build2 (cond_code, boolean_type_node, fd->loop.v, fd->loop.n2);
 
   clauses = gimple_omp_for_clauses (fd->for_stmt);
   stmts = NULL;
Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/simplify-rtx.c	2013-11-19 12:09:07.956676631 +0000
@@ -299,17 +299,17 @@ delegitimize_mem_from_attrs (rtx x)
            if (bitsize != GET_MODE_BITSIZE (mode)
                || (bitpos % BITS_PER_UNIT)
                || (toffset && !tree_fits_shwi_p (toffset)))
              decl = NULL;
            else
              {
                offset += bitpos / BITS_PER_UNIT;
                if (toffset)
-                 offset += TREE_INT_CST_LOW (toffset);
+                 offset += tree_to_shwi (toffset);
              }
            break;
          }
       }
 
     if (decl
        && mode == GET_MODE (x)
        && TREE_CODE (decl) == VAR_DECL
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/stor-layout.c	2013-11-19 12:09:07.946676552 +0000
@@ -1616,17 +1616,17 @@ compute_record_mode (tree type)
       return;
     }
 
   /* If we only have one real field; use its mode if that mode's size
     matches the type's size.  This only applies to RECORD_TYPE.  This
     does not apply to unions.  */
   if (TREE_CODE (type) == RECORD_TYPE && mode != VOIDmode
       && tree_fits_uhwi_p (TYPE_SIZE (type))
-      && GET_MODE_BITSIZE (mode) == TREE_INT_CST_LOW (TYPE_SIZE (type)))
+      && GET_MODE_BITSIZE (mode) == tree_to_uhwi (TYPE_SIZE (type)))
     SET_TYPE_MODE (type, mode);
   else
     SET_TYPE_MODE (type, mode_for_size_tree (TYPE_SIZE (type), MODE_INT, 1));
 
   /* If structure's known alignment is less than what the scalar
     mode would need, and it matters, then stick with BLKmode.  */
   if (TYPE_MODE (type) != BLKmode
       && STRICT_ALIGNMENT
Index: gcc/tree-cfg.c
===================================================================
--- gcc/tree-cfg.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-cfg.c	2013-11-19 12:09:07.947676560 +0000
@@ -2704,26 +2704,26 @@ #define CHECK_OP(N, MSG) \
          if (!tree_fits_uhwi_p (TREE_OPERAND (t, 1))
              || !tree_fits_uhwi_p (TREE_OPERAND (t, 2)))
            {
              error ("invalid position or size operand to BIT_FIELD_REF");
              return t;
            }
          if (INTEGRAL_TYPE_P (TREE_TYPE (t))
              && (TYPE_PRECISION (TREE_TYPE (t))
-                 != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
+                 != tree_to_uhwi (TREE_OPERAND (t, 1))))
            {
              error ("integral result type precision does not match "
                     "field size of BIT_FIELD_REF");
              return t;
            }
          else if (!INTEGRAL_TYPE_P (TREE_TYPE (t))
                   && TYPE_MODE (TREE_TYPE (t)) != BLKmode
                   && (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (t)))
-                      != TREE_INT_CST_LOW (TREE_OPERAND (t, 1))))
+                      != tree_to_uhwi (TREE_OPERAND (t, 1))))
            {
              error ("mode precision of non-integral result does not "
                     "match field size of BIT_FIELD_REF");
              return t;
            }
        }
 
       t = TREE_OPERAND (t, 0);
Index: gcc/tree-dfa.c
===================================================================
--- gcc/tree-dfa.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-dfa.c	2013-11-19 12:09:07.947676560 +0000
@@ -405,17 +405,17 @@ get_ref_base_and_extent (tree exp, HOST_
       else
        bitsize = GET_MODE_BITSIZE (mode);
     }
   if (size_tree != NULL_TREE)
     {
       if (! tree_fits_uhwi_p (size_tree))
        bitsize = -1;
       else
-       bitsize = TREE_INT_CST_LOW (size_tree);
+       bitsize = tree_to_uhwi (size_tree);
     }
 
   /* Initially, maxsize is the same as the accessed element size.
     In the following it will only grow (or become -1).  */
   maxsize = bitsize;
 
   /* Compute cumulative bit-offset for nested component-refs and array-refs,
     and find the ultimate containing object.  */
@@ -453,18 +453,18 @@ get_ref_base_and_extent (tree exp, HOST_
                if (!next || TREE_CODE (stype) != RECORD_TYPE)
                  {
                    tree fsize = DECL_SIZE_UNIT (field);
                    tree ssize = TYPE_SIZE_UNIT (stype);
                    if (tree_fits_shwi_p (fsize)
                        && tree_fits_shwi_p (ssize)
                        && doffset.fits_shwi ())
-                     maxsize += ((TREE_INT_CST_LOW (ssize)
-                                  - TREE_INT_CST_LOW (fsize))
+                     maxsize += ((tree_to_shwi (ssize)
+                                  - tree_to_shwi (fsize))
                                  * BITS_PER_UNIT
                                  - doffset.to_shwi ());
                    else
                      maxsize = -1;
                  }
              }
          }
        else
@@ -472,18 +472,17 @@ get_ref_base_and_extent (tree exp, HOST_
            tree csize = TYPE_SIZE (TREE_TYPE (TREE_OPERAND (exp, 0)));
            /* We need to adjust maxsize to the whole structure bitsize.
              But we can subtract any constant offset seen so far,
              because that would get us out of the structure otherwise.  */
            if (maxsize != -1
                && csize
                && tree_fits_uhwi_p (csize)
                && bit_offset.fits_shwi ())
-             maxsize = TREE_INT_CST_LOW (csize)
-                       - bit_offset.to_shwi ();
+             maxsize = tree_to_uhwi (csize) - bit_offset.to_shwi ();
            else
              maxsize = -1;
          }
        }
        break;
 
      case ARRAY_REF:
      case ARRAY_RANGE_REF:
@@ -516,18 +515,17 @@ get_ref_base_and_extent (tree exp, HOST_
            tree asize = TYPE_SIZE (TREE_TYPE (TREE_OPERAND (exp, 0)));
            /* We need to adjust maxsize to the whole array bitsize.
              But we can subtract any constant offset seen so far,
              because that would get us outside of the array otherwise.  */
            if (maxsize != -1
                && asize
                && tree_fits_uhwi_p (asize)
                && bit_offset.fits_shwi ())
-             maxsize = TREE_INT_CST_LOW (asize)
-                       - bit_offset.to_shwi ();
+             maxsize = tree_to_uhwi (asize) - bit_offset.to_shwi ();
            else
              maxsize = -1;
 
            /* Remember that we have seen an array ref with a variable
              index.  */
            seen_variable_array_ref = true;
          }
       }
@@ -566,17 +564,17 @@ get_ref_base_and_extent (tree exp, HOST_
          is to punt in the case that offset + maxsize reaches the
          base type boundary.  This needs to include possible trailing
          padding that is there for alignment purposes.  */
       if (seen_variable_array_ref
          && maxsize != -1
          && (!bit_offset.fits_shwi ()
              || !tree_fits_uhwi_p (TYPE_SIZE (TREE_TYPE (exp)))
              || (bit_offset.to_shwi () + maxsize
-                 == (HOST_WIDE_INT) TREE_INT_CST_LOW
+                 == (HOST_WIDE_INT) tree_to_uhwi
                       (TYPE_SIZE (TREE_TYPE (exp))))))
        maxsize = -1;
 
       /* Hand back the decl for MEM[&decl, off].  */
       if (TREE_CODE (TREE_OPERAND (exp, 0)) == ADDR_EXPR)
        {
          if (integer_zerop (TREE_OPERAND (exp, 1)))
            exp = TREE_OPERAND (TREE_OPERAND (exp, 0), 0);
@@ -603,17 +601,17 @@ get_ref_base_and_extent (tree exp, HOST_
     }
 
   /* We need to deal with variable arrays ending structures.  */
   if (seen_variable_array_ref
       && maxsize != -1
       && (!bit_offset.fits_shwi ()
          || !tree_fits_uhwi_p (TYPE_SIZE (TREE_TYPE (exp)))
          || (bit_offset.to_shwi () + maxsize
-             == (HOST_WIDE_INT) TREE_INT_CST_LOW
+             == (HOST_WIDE_INT) tree_to_uhwi
                   (TYPE_SIZE (TREE_TYPE (exp))))))
     maxsize = -1;
 
 done:
   if (!bit_offset.fits_shwi ())
     {
      *poffset = 0;
      *psize = bitsize;
@@ -627,25 +625,25 @@ get_ref_base_and_extent (tree exp, HOST_
 
   /* In case of a decl or constant base object we can do better.  */
 
   if (DECL_P (exp))
     {
       /* If maxsize is unknown adjust it according to the size of the
         base decl.  */
       if (maxsize == -1
          && tree_fits_uhwi_p (DECL_SIZE (exp)))
-       maxsize = TREE_INT_CST_LOW (DECL_SIZE (exp)) - hbit_offset;
+       maxsize = tree_to_uhwi (DECL_SIZE (exp)) - hbit_offset;
     }
   else if (CONSTANT_CLASS_P (exp))
     {
       /* If maxsize is unknown adjust it according to the size of the
         base type constant.  */
       if (maxsize == -1
          && tree_fits_uhwi_p (TYPE_SIZE (TREE_TYPE (exp))))
-       maxsize = TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (exp))) - hbit_offset;
+       maxsize = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (exp))) - hbit_offset;
     }
 
   /* ???  Due to negative offsets in ARRAY_REF we can end up with
     negative bit_offset here.  We might want to store a zero offset
     in this case.  */
   *poffset = hbit_offset;
   *psize = bitsize;
   *pmax_size = maxsize;
Index: gcc/tree-pretty-print.c
===================================================================
--- gcc/tree-pretty-print.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-pretty-print.c	2013-11-19 12:09:07.947676560 +0000
@@ -268,17 +268,17 @@ dump_array_domain (pretty_printer *buffe
   if (domain)
     {
       tree min = TYPE_MIN_VALUE (domain);
       tree max = TYPE_MAX_VALUE (domain);
 
       if (min && max
          && integer_zerop (min)
          && tree_fits_shwi_p (max))
-       pp_wide_integer (buffer, TREE_INT_CST_LOW (max) + 1);
+       pp_wide_integer (buffer, tree_to_shwi (max) + 1);
       else
        {
          if (min)
            dump_generic_node (buffer, min, spc, flags, false);
          pp_colon (buffer);
          if (max)
            dump_generic_node (buffer, max, spc, flags, false);
        }
Index: gcc/tree-sra.c
===================================================================
--- gcc/tree-sra.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-sra.c	2013-11-19 12:09:07.948676568 +0000
@@ -1648,22 +1648,22 @@ build_user_friendly_ref_for_offset (tree
          tree tr_pos, expr, *expr_ptr;
          if (TREE_CODE (fld) != FIELD_DECL)
            continue;
 
          tr_pos = bit_position (fld);
          if (!tr_pos || !tree_fits_uhwi_p (tr_pos))
            continue;
-         pos = TREE_INT_CST_LOW (tr_pos);
+         pos = tree_to_uhwi (tr_pos);
          gcc_assert (TREE_CODE (type) == RECORD_TYPE || pos == 0);
          tr_size = DECL_SIZE (fld);
          if (!tr_size || !tree_fits_uhwi_p (tr_size))
            continue;
-         size = TREE_INT_CST_LOW (tr_size);
+         size = tree_to_uhwi (tr_size);
          if (size == 0)
            {
              if (pos != offset)
                continue;
            }
          else if (pos > offset || (pos + size) <= offset)
            continue;
Index: gcc/tree-ssa-ccp.c
===================================================================
--- gcc/tree-ssa-ccp.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-ccp.c	2013-11-19 12:09:07.956676631 +0000
@@ -1879,17 +1879,17 @@ fold_builtin_alloca_with_align (gimple s
   /* Detect constant argument.  */
   arg = get_constant_value (gimple_call_arg (stmt, 0));
   if (arg == NULL_TREE
       || TREE_CODE (arg) != INTEGER_CST
       || !tree_fits_uhwi_p (arg))
     return NULL_TREE;
 
-  size = TREE_INT_CST_LOW (arg);
+  size = tree_to_uhwi (arg);
 
   /* Heuristic: don't fold large allocas.  */
   threshold = (unsigned HOST_WIDE_INT)PARAM_VALUE (PARAM_LARGE_STACK_FRAME);
   /* In case the alloca is located at function entry, it has the same lifetime
     as a declared array, so we allow a larger size.  */
   block = gimple_block (stmt);
   if (!(cfun->after_inlining
        && TREE_CODE (BLOCK_SUPERCONTEXT (block)) == FUNCTION_DECL))
Index: gcc/tree-ssa-loop-ivopts.c
===================================================================
--- gcc/tree-ssa-loop-ivopts.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-loop-ivopts.c	2013-11-19 12:09:07.949676576 +0000
@@ -3966,25 +3966,25 @@ get_loop_invariant_expr_id (struct ivopt
       usym = TREE_OPERAND (ubase, 0);
       csym = TREE_OPERAND (cbase, 0);
       if (TREE_CODE (usym) == ARRAY_REF)
        {
          tree ind = TREE_OPERAND (usym, 1);
          if (TREE_CODE (ind) == INTEGER_CST
              && tree_fits_shwi_p (ind)
-             && TREE_INT_CST_LOW (ind) == 0)
+             && tree_to_shwi (ind) == 0)
            usym = TREE_OPERAND (usym, 0);
        }
       if (TREE_CODE (csym) == ARRAY_REF)
        {
          tree ind = TREE_OPERAND (csym, 1);
          if (TREE_CODE (ind) == INTEGER_CST
              && tree_fits_shwi_p (ind)
-             && TREE_INT_CST_LOW (ind) == 0)
+             && tree_to_shwi (ind) == 0)
            csym = TREE_OPERAND (csym, 0);
        }
       if (operand_equal_p (usym, csym, 0))
        return -1;
     }
   /* Now do more complex comparison  */
   tree_to_aff_combination (ubase, TREE_TYPE (ubase), &ubase_aff);
   tree_to_aff_combination (cbase, TREE_TYPE (cbase), &cbase_aff);
Index: gcc/tree-ssa-math-opts.c
===================================================================
--- gcc/tree-ssa-math-opts.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-math-opts.c	2013-11-19 12:09:07.949676576 +0000
@@ -1500,17 +1500,17 @@ execute_cse_sincos (void)
                  gimple_set_location (stmt, loc);
                  gsi_insert_before (&gsi, stmt, GSI_SAME_STMT);
                }
              else
                {
                  if (!tree_fits_shwi_p (arg1))
                    break;
 
-                 n = TREE_INT_CST_LOW (arg1);
+                 n = tree_to_shwi (arg1);
                  result = gimple_expand_builtin_powi (&gsi, loc, arg0, n);
                }
 
              if (result)
                {
                  tree lhs = gimple_get_lhs (stmt);
                  gimple new_stmt = gimple_build_assign (lhs, result);
                  gimple_set_location (new_stmt, loc);
Index: gcc/tree-ssa-phiopt.c
===================================================================
--- gcc/tree-ssa-phiopt.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-phiopt.c	2013-11-19 12:09:07.950676584 +0000
@@ -1975,19 +1975,19 @@ hoist_adjacent_loads (basic_block bb0, b
       tree_offset2 = bit_position (field2);
       tree_size2 = DECL_SIZE (field2);
 
       if (!tree_fits_uhwi_p (tree_offset1)
          || !tree_fits_uhwi_p (tree_offset2)
          || !tree_fits_uhwi_p (tree_size2))
        continue;
 
-      offset1 = TREE_INT_CST_LOW (tree_offset1);
-      offset2 = TREE_INT_CST_LOW (tree_offset2);
-      size2 = TREE_INT_CST_LOW (tree_size2);
+      offset1 = tree_to_uhwi (tree_offset1);
+      offset2 = tree_to_uhwi (tree_offset2);
+      size2 = tree_to_uhwi (tree_size2);
       align1 = DECL_ALIGN (field1) % param_align_bits;
 
       if (offset1 % BITS_PER_UNIT != 0)
        continue;
 
       /* For profitability, the two field references should fit within
        a single cache line.  */
       if (align1 + offset2 - offset1 + size2 > param_align_bits)
Index: gcc/tree-ssa-reassoc.c
===================================================================
--- gcc/tree-ssa-reassoc.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-reassoc.c	2013-11-19 12:09:07.951676592 +0000
@@ -3633,17 +3633,17 @@ acceptable_pow_call (gimple stmt, tree *
     CASE_FLT_FN (BUILT_IN_POWI):
       *base = gimple_call_arg (stmt, 0);
       arg1 = gimple_call_arg (stmt, 1);
 
       if (!tree_fits_shwi_p (arg1))
        return false;
 
-      *exponent = TREE_INT_CST_LOW (arg1);
+      *exponent = tree_to_shwi (arg1);
       break;
 
     default:
       return false;
     }
 
   /* Expanding negative exponents is generally unproductive, so we don't
     complicate matters with those.
     Exponents of zero and one should
Index: gcc/tree-ssa-sccvn.c
===================================================================
--- gcc/tree-ssa-sccvn.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-sccvn.c	2013-11-19 12:09:07.951676592 +0000
@@ -778,17 +778,17 @@ copy_reference_ops_from_ref (tree ref, v
        case WITH_SIZE_EXPR:
          temp.op0 = TREE_OPERAND (ref, 1);
          temp.off = 0;
          break;
        case MEM_REF:
          /* The base address gets its own vn_reference_op_s structure.  */
          temp.op0 = TREE_OPERAND (ref, 1);
          if (tree_fits_shwi_p (TREE_OPERAND (ref, 1)))
-           temp.off = TREE_INT_CST_LOW (TREE_OPERAND (ref, 1));
+           temp.off = tree_to_shwi (TREE_OPERAND (ref, 1));
          break;
        case BIT_FIELD_REF:
          /* Record bits and position.  */
          temp.op0 = TREE_OPERAND (ref, 1);
          temp.op1 = TREE_OPERAND (ref, 2);
          break;
        case COMPONENT_REF:
          /* The field decl is enough to unambiguously specify the field,
@@ -934,17 +934,17 @@ ao_ref_init_from_vn_reference (ao_ref *r
       else
        size = GET_MODE_BITSIZE (mode);
     }
   if (size_tree != NULL_TREE)
     {
       if (!tree_fits_uhwi_p (size_tree))
        size = -1;
       else
-       size = TREE_INT_CST_LOW (size_tree);
+       size = tree_to_uhwi (size_tree);
     }
 
   /* Initially, maxsize is the same as the accessed element size.
     In the following it will only grow (or become -1).  */
   max_size = size;
 
   /* Compute cumulative bit-offset for nested component-refs and array-refs,
     and find the ultimate containing object.  */
@@ -1005,35 +1005,35 @@ ao_ref_init_from_vn_reference (ao_ref *r
              cannot use component_ref_field_offset.  Do the interesting
              parts manually.  */
 
            if (op->op1
                || !tree_fits_uhwi_p (DECL_FIELD_OFFSET (field)))
              max_size = -1;
            else
              {
-               offset += (TREE_INT_CST_LOW (DECL_FIELD_OFFSET (field))
+               offset += (tree_to_uhwi (DECL_FIELD_OFFSET (field))
                           * BITS_PER_UNIT);
                offset += TREE_INT_CST_LOW (DECL_FIELD_BIT_OFFSET (field));
              }
            break;
          }
        case ARRAY_RANGE_REF:
        case ARRAY_REF:
          /* We recorded the lower bound and the element size.  */
          if (!tree_fits_shwi_p (op->op0)
              || !tree_fits_shwi_p (op->op1)
              || !tree_fits_shwi_p (op->op2))
            max_size = -1;
          else
            {
-             HOST_WIDE_INT hindex = TREE_INT_CST_LOW (op->op0);
-             hindex -= TREE_INT_CST_LOW (op->op1);
-             hindex *= TREE_INT_CST_LOW (op->op2);
+             HOST_WIDE_INT hindex = tree_to_shwi (op->op0);
+             hindex -= tree_to_shwi (op->op1);
+             hindex *= tree_to_shwi (op->op2);
              hindex *= BITS_PER_UNIT;
              offset += hindex;
            }
          break;
 
        case REALPART_EXPR:
          break;
@@ -1152,17 +1152,17 @@ vn_reference_fold_indirect (vec<vn_reference_op_s>
   if (addr_base != TREE_OPERAND (op->op0, 0))
     {
       double_int off = tree_to_double_int (mem_op->op0);
       off = off.sext (TYPE_PRECISION (TREE_TYPE (mem_op->op0)));
       off += double_int::from_shwi (addr_offset);
       mem_op->op0 = double_int_to_tree (TREE_TYPE (mem_op->op0), off);
       op->op0 = build_fold_addr_expr (addr_base);
       if (tree_fits_shwi_p (mem_op->op0))
-       mem_op->off = TREE_INT_CST_LOW (mem_op->op0);
+       mem_op->off = tree_to_shwi (mem_op->op0);
       else
        mem_op->off = -1;
     }
 }
 
 /* Fold *& at position *I_P in a vn_reference_op_s vector *OPS.  Updates
   *I_P to point to the last element of the replacement.  */
 static void
@@ -1217,17 +1217,17 @@ vn_reference_maybe_forwprop_address (vec<vn_reference_op_s>
            return;
          off += tree_to_double_int (ptroff);
          op->op0 = ptr;
        }
 
      mem_op->op0 = double_int_to_tree (TREE_TYPE (mem_op->op0), off);
      if (tree_fits_shwi_p (mem_op->op0))
-       mem_op->off = TREE_INT_CST_LOW (mem_op->op0);
+       mem_op->off = tree_to_shwi (mem_op->op0);
      else
        mem_op->off = -1;
 
      if (TREE_CODE (op->op0) == SSA_NAME)
        op->op0 = SSA_VAL (op->op0);
      if (TREE_CODE (op->op0) != SSA_NAME)
        op->opcode = TREE_CODE (op->op0);
 
   /* And recurse.  */
@@ -1588,19 +1588,19 @@ vn_reference_lookup_3 (ao_ref *ref, tree
       && integer_zerop (gimple_call_arg (def_stmt, 1))
       && tree_fits_uhwi_p (gimple_call_arg (def_stmt, 2))
       && TREE_CODE (gimple_call_arg (def_stmt, 0)) == ADDR_EXPR)
     {
       tree ref2 = TREE_OPERAND (gimple_call_arg (def_stmt, 0), 0);
       tree base2;
       HOST_WIDE_INT offset2, size2, maxsize2;
       base2 = get_ref_base_and_extent (ref2, &offset2, &size2, &maxsize2);
-      size2 = TREE_INT_CST_LOW (gimple_call_arg (def_stmt, 2)) * 8;
+      size2 = tree_to_uhwi (gimple_call_arg (def_stmt, 2)) * 8;
       if ((unsigned HOST_WIDE_INT)size2 / 8
-         == TREE_INT_CST_LOW (gimple_call_arg (def_stmt, 2))
+         == tree_to_uhwi (gimple_call_arg (def_stmt, 2))
          && maxsize2 != -1
          && operand_equal_p (base, base2, 0)
          && offset2 <= offset
          && offset2 + size2 >= offset + maxsize)
        {
          tree val = build_zero_cst (vr->type);
          return vn_reference_lookup_or_insert_for_pieces
                   (vuse, vr->set, vr->type, vr->operands, val);
@@ -1860,17 +1860,17 @@ vn_reference_lookup_3 (ao_ref *ref, tree
          tree tem = get_addr_base_and_unit_offset (TREE_OPERAND (lhs, 0),
                                                    &lhs_offset);
          if (!tem)
            return (void *)-1;
          if (TREE_CODE (tem) == MEM_REF
              && tree_fits_uhwi_p (TREE_OPERAND (tem, 1)))
            {
              lhs = TREE_OPERAND (tem, 0);
-             lhs_offset += TREE_INT_CST_LOW (TREE_OPERAND (tem, 1));
+             lhs_offset += tree_to_uhwi (TREE_OPERAND (tem, 1));
            }
          else if (DECL_P (tem))
            lhs = build_fold_addr_expr (tem);
          else
            return (void *)-1;
        }
       if (TREE_CODE (lhs) != SSA_NAME
          && TREE_CODE (lhs) != ADDR_EXPR)
@@ -1886,44 +1886,44 @@ vn_reference_lookup_3 (ao_ref *ref, tree
          tree tem = get_addr_base_and_unit_offset (TREE_OPERAND (rhs, 0),
                                                    &rhs_offset);
          if (!tem)
            return (void *)-1;
          if (TREE_CODE (tem) == MEM_REF
              && tree_fits_uhwi_p (TREE_OPERAND (tem, 1)))
            {
              rhs = TREE_OPERAND (tem, 0);
-             rhs_offset += TREE_INT_CST_LOW (TREE_OPERAND (tem, 1));
+             rhs_offset += tree_to_uhwi (TREE_OPERAND (tem, 1));
            }
          else if (DECL_P (tem))
            rhs = build_fold_addr_expr (tem);
          else
            return (void *)-1;
        }
       if (TREE_CODE (rhs) != SSA_NAME
          && TREE_CODE (rhs) != ADDR_EXPR)
        return (void *)-1;
 
-      copy_size = TREE_INT_CST_LOW (gimple_call_arg (def_stmt, 2));
+      copy_size = tree_to_uhwi (gimple_call_arg (def_stmt, 2));
 
       /* The bases of the destination and the references have to agree.  */
       if ((TREE_CODE (base) != MEM_REF
          && !DECL_P (base))
         || (TREE_CODE (base) == MEM_REF
             && (TREE_OPERAND (base, 0) != lhs
                 || !tree_fits_uhwi_p (TREE_OPERAND (base, 1))))
         || (DECL_P (base)
             && (TREE_CODE (lhs) != ADDR_EXPR
                 || TREE_OPERAND (lhs, 0) != base)))
        return (void *)-1;
 
      /* And the access has to be contained within the memcpy destination.  */
      at = offset / BITS_PER_UNIT;
      if (TREE_CODE (base) == MEM_REF)
-       at += TREE_INT_CST_LOW (TREE_OPERAND (base, 1));
+       at += tree_to_uhwi (TREE_OPERAND (base, 1));
      if (lhs_offset > at
         || lhs_offset + copy_size < at + maxsize / BITS_PER_UNIT)
        return (void *)-1;
 
      /* Make room for 2 operands in the new reference.  */
      if (vr->operands.length () < 2)
        {
          vec<vn_reference_op_s> old = vr->operands;
@@ -3221,17 +3221,17 @@ simplify_binary_expression (gimple stmt)
 
   /* Pointer plus constant can be represented as invariant address.
     Do so to allow further propatation, see also tree forwprop.  */
   if (code == POINTER_PLUS_EXPR
       && tree_fits_uhwi_p (op1)
       && TREE_CODE (op0) == ADDR_EXPR
       && is_gimple_min_invariant (op0))
     return build_invariant_address (TREE_TYPE (op0),
                                    TREE_OPERAND (op0, 0),
-                                   TREE_INT_CST_LOW (op1));
+                                   tree_to_uhwi (op1));
 
   /* Avoid folding if nothing changed.  */
   if (op0 == gimple_assign_rhs1 (stmt)
       && op1 == gimple_assign_rhs2 (stmt))
     return NULL_TREE;
 
   fold_defer_overflow_warnings ();
Index: gcc/tree-ssa-structalias.c
===================================================================
--- gcc/tree-ssa-structalias.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-ssa-structalias.c	2013-11-19 12:09:07.952676600 +0000
@@ -2993,18 +2993,18 @@ process_constraint (constraint_t t)
 
 static HOST_WIDE_INT
 bitpos_of_field (const tree fdecl)
 {
   if (!tree_fits_shwi_p (DECL_FIELD_OFFSET (fdecl))
       || !tree_fits_shwi_p (DECL_FIELD_BIT_OFFSET (fdecl)))
     return -1;
 
-  return (TREE_INT_CST_LOW (DECL_FIELD_OFFSET (fdecl)) * BITS_PER_UNIT
-         + TREE_INT_CST_LOW (DECL_FIELD_BIT_OFFSET (fdecl)));
+  return (tree_to_shwi (DECL_FIELD_OFFSET (fdecl)) * BITS_PER_UNIT
+         + tree_to_shwi (DECL_FIELD_BIT_OFFSET (fdecl)));
 }
 
 
 /* Get constraint expressions for offsetting PTR by OFFSET.  Stores the
   resulting constraint expressions in *RESULTS.  */
 
 static void
 get_constraint_for_ptr_offset (tree ptr, tree offset,
@@ -3425,17 +3425,17 @@ get_constraint_for_1 (tree t, vec<ce_s>
          vi = get_varinfo (cs.var);
          curr = vi_next (vi);
          if (!vi->is_full_var
              && curr)
            {
              unsigned HOST_WIDE_INT size;
              if (tree_fits_uhwi_p (TYPE_SIZE (TREE_TYPE (t))))
-               size = TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (t)));
+               size = tree_to_uhwi (TYPE_SIZE (TREE_TYPE (t)));
              else
                size = -1;
              for (; curr; curr = vi_next (curr))
                {
                  if (curr->offset - vi->offset < size)
                    {
                      cs.var = curr->id;
                      results->safe_push (cs);
@@ -5355,25 +5355,25 @@ push_fields_onto_fieldstack (tree type,
          must_have_pointers_p = field_must_have_pointers (field);
          if (pair
              && !has_unknown_size
              && !must_have_pointers_p
              && !pair->must_have_pointers
              && !pair->has_unknown_size
              && pair->offset + (HOST_WIDE_INT)pair->size == offset + foff)
            {
-             pair->size += TREE_INT_CST_LOW (DECL_SIZE (field));
+             pair->size += tree_to_uhwi (DECL_SIZE (field));
            }
          else
            {
              fieldoff_s e;
              e.offset = offset + foff;
              e.has_unknown_size = has_unknown_size;
              if (!has_unknown_size)
-               e.size = TREE_INT_CST_LOW (DECL_SIZE (field));
+               e.size = tree_to_uhwi (DECL_SIZE (field));
              else
                e.size = -1;
              e.must_have_pointers = must_have_pointers_p;
              e.may_have_pointers = true;
              e.only_restrict_pointers
                = (!has_unknown_size
                   && POINTER_TYPE_P (TREE_TYPE (field))
                   && TYPE_RESTRICT (TREE_TYPE (field)));
@@ -5680,25 +5680,25 @@ create_variable_info_for_1 (tree decl, c
   /* If we didn't end up collecting sub-variables create a full
     variable for the decl.  */
   if (fieldstack.length () <= 1
       || fieldstack.length () > MAX_FIELDS_FOR_FIELD_SENSITIVE)
     {
       vi = new_var_info (decl, name);
       vi->offset = 0;
       vi->may_have_pointers = true;
-      vi->fullsize = TREE_INT_CST_LOW (declsize);
+      vi->fullsize = tree_to_uhwi (declsize);
       vi->size = vi->fullsize;
       vi->is_full_var = true;
       fieldstack.release ();
       return vi;
     }
 
   vi = new_var_info (decl, name);
-  vi->fullsize = TREE_INT_CST_LOW (declsize);
+  vi->fullsize = tree_to_uhwi (declsize);
   for (i = 0, newvi = vi;
       fieldstack.iterate (i, &fo);
       ++i, newvi = vi_next (newvi))
     {
       const char *newname = "NULL";
       char *tempname;
 
       if (dump_file)
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-vect-data-refs.c	2013-11-19 12:09:07.953676607 +0000
@@ -776,17 +776,17 @@ vect_compute_data_ref_alignment (struct
     {
       /* Negative or overflowed misalignment value.  */
       if (dump_enabled_p ())
        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                         "unexpected misalign value\n");
       return false;
     }
 
-  SET_DR_MISALIGNMENT (dr, TREE_INT_CST_LOW (misalign));
+  SET_DR_MISALIGNMENT (dr, tree_to_uhwi (misalign));
 
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
       dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
       dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
     }
@@ -958,17 +958,17 @@ vect_verify_datarefs_alignment (loop_vec
   than its size.  */
 
 static bool
 not_size_aligned (tree exp)
 {
   if (!tree_fits_uhwi_p (TYPE_SIZE (TREE_TYPE (exp))))
     return true;
 
-  return (TREE_INT_CST_LOW (TYPE_SIZE (TREE_TYPE (exp)))
+  return (tree_to_uhwi (TYPE_SIZE (TREE_TYPE (exp)))
          > get_object_alignment (exp));
 }
 
 /* Function vector_alignment_reachable_p
 
   Return true if vector alignment for DR is reachable by peeling
   a few loop iterations.  Return false otherwise.  */
 
@@ -2564,23 +2564,23 @@ vect_analyze_data_ref_accesses (loop_vec
          /* Sorting has ensured that DR_INIT (dra) <= DR_INIT (drb).  */
          HOST_WIDE_INT init_a = TREE_INT_CST_LOW (DR_INIT (dra));
          HOST_WIDE_INT init_b = TREE_INT_CST_LOW (DR_INIT (drb));
          gcc_assert (init_a < init_b);
 
          /* If init_b == init_a + the size of the type * k, we have an
            interleaving, and DRA is accessed before DRB.  */
-         HOST_WIDE_INT type_size_a = TREE_INT_CST_LOW (sza);
+         HOST_WIDE_INT type_size_a = tree_to_uhwi (sza);
 
          if ((init_b - init_a) % type_size_a != 0)
            break;
 
          /* The step (if not zero) is greater than the difference between
            data-refs' inits.  This splits groups into suitable sizes.  */
-         HOST_WIDE_INT step = TREE_INT_CST_LOW (DR_STEP (dra));
+         HOST_WIDE_INT step = tree_to_shwi (DR_STEP (dra));
          if (step != 0 && step <= (init_b - init_a))
            break;
 
          if (dump_enabled_p ())
            {
              dump_printf_loc (MSG_NOTE, vect_location,
                               "Detected interleaving ");
              dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dra));
@@ -2876,18 +2876,18 @@ vect_prune_runtime_alias_test_list (loop
       if (!operand_equal_p (DR_BASE_ADDRESS (dr_a1->dr),
                            DR_BASE_ADDRESS (dr_a2->dr),
                            0)
          || !tree_fits_shwi_p (dr_a1->offset)
          || !tree_fits_shwi_p (dr_a2->offset))
        continue;
 
-      HOST_WIDE_INT diff = TREE_INT_CST_LOW (dr_a2->offset)
-                          - TREE_INT_CST_LOW (dr_a1->offset);
+      HOST_WIDE_INT diff = (tree_to_shwi (dr_a2->offset)
+                           - tree_to_shwi (dr_a1->offset));
 
 
       /* Now we check if the following condition is satisfied:
 
        DIFF - SEGMENT_LENGTH_A < SEGMENT_LENGTH_B
 
       where DIFF = DR_A2->OFFSET - DR_A1->OFFSET.  However,
       SEGMENT_LENGTH_A or SEGMENT_LENGTH_B may not be constant so we
Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2013-11-19 11:59:43.285326264 +0000
+++ gcc/tree-vectorizer.h	2013-11-19 12:09:07.953676607 +0000
@@ -384,17 +384,17 @@ #define LOOP_VINFO_OPERANDS_SWAPPED(L)
 #define LOOP_REQUIRES_VERSIONING_FOR_ALIGNMENT(L) \
 (L)->may_misalign_stmts.length () > 0
 #define LOOP_REQUIRES_VERSIONING_FOR_ALIAS(L)     \
 (L)->may_alias_ddrs.length () > 0
 
 #define NITERS_KNOWN_P(n)                     \
 (tree_fits_shwi_p ((n))                       \
-&& TREE_INT_CST_LOW ((n)) > 0)
+&& tree_to_shwi ((n)) > 0)
 
 #define LOOP_VINFO_NITERS_KNOWN_P(L)          \
 NITERS_KNOWN_P ((L)->num_iters)
 
 static inline loop_vec_info
 loop_vec_info_for_loop (struct loop *loop)
 {
   return (loop_vec_info) loop->aux;