From patchwork Thu May  2 17:21:54 2013
X-Patchwork-Submitter: Kenneth Zadeck
X-Patchwork-Id: 241043
Mailing-List: contact gcc-patches-help@gcc.gnu.org; run by ezmlm
Delivered-To: mailing list gcc-patches@gcc.gnu.org
Message-ID: <5182A0B2.2080703@naturalbridge.com>
Date: Thu, 02 May 2013 13:21:54 -0400
From: Kenneth Zadeck
To: Richard Biener
CC: Mike Stump, gcc-patches, Lawrence Crowl, rdsandiford@googlemail.com, Ian Lance Taylor
Subject: Re: patch to fix constant math -5th patch, rtl

On 04/24/2013 08:09 AM, Richard Biener wrote:
> On Tue, Apr 16, 2013 at 10:17 PM, Kenneth Zadeck wrote:
>> Here is a refreshed version of the rtl changes for wide-int.  the only
>> change from the previous versions is that the wide-int binary
>> operations have been simplified to use the new wide-int binary
>> templates.
> Looking for from_rtx calls (to see where we get the mode/precision
> from) I see for example
>
> -      o = rtx_to_double_int (outer);
> -      i = rtx_to_double_int (inner);
> -
> -      m = double_int::mask (width);
> -      i &= m;
> -      m = m.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
> -      i = i.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
> -      o = o.and_not (m) | i;
> -
> +
> +      o = (wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp)))
> +	   .insert (wide_int::from_rtx (inner, GET_MODE (dest)),
> +		    offset, width));
>
> where I'd rather have the original code preserved as much as possible
> and not introduce a new primitive wide_int::insert for this.  The
> conversion and review process will be much more error-prone if we do
> multiple things at once (and it might keep the wide_int initial
> interface leaner).

This code is doing an insertion.  The old code does it in a roundabout
way, since there was no insert primitive in double-int.  I do not think
that we should penalize ourselves forever because double-int did not
have a robust interface.

> Btw, the wide_int::insert implementation doesn't assert anything about
> the input's precision.  Instead it reads
>
> +  if (start + width >= precision)
> +    width = precision - start;
> +
> +  mask = shifted_mask (start, width, false, precision);
> +  tmp = op0.lshift (start, 0, precision, NONE);
> +  result = tmp & mask;
> +
> +  tmp = and_not (mask);
> +  result = result | tmp;
>
> which eventually ends up performing everything in target precision.  So
> we don't really care about the mode or precision of inner.

I added checking to make sure that the width of the value being inserted
is less than or equal to its precision.  But there is no reason to make
sure that both operands are the same.  I do believe that it is the
correct decision to have the target precision be the same as that of the
first operand.
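The mask/shift/combine sequence quoted above can be illustrated outside
of wide-int.  The helper below is a hypothetical standalone sketch over
a single 64-bit word (the real wide_int::insert works on arrays of HWIs
at arbitrary target precisions), but the bit manipulation is the same:
build a mask for the field, shift the inserted value into place, and
merge it into the destination.

```c
#include <stdint.h>

/* Insert the low WIDTH bits of VAL into DEST starting at bit START.
   A single-word sketch of the mask/shift/combine idiom; names are
   invented for illustration.  */
static uint64_t
insert_bits (uint64_t dest, uint64_t val, unsigned start, unsigned width)
{
  /* Mask covering bits [start, start + width).  The width == 64 case
     must avoid the undefined full-width shift.  */
  uint64_t mask = ((width < 64 ? (UINT64_C (1) << width) : 0) - 1) << start;
  return (dest & ~mask) | ((val << start) & mask);
}
```

For example, `insert_bits (0xFF00, 0x3, 4, 4)` clears the field at bits
4..7 and deposits 0x3 there, giving 0xFF30.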
> Then I see
>
> diff --git a/gcc/dwarf2out.h b/gcc/dwarf2out.h
> index ad03a34..531a7c1 100644
> @@ -180,6 +182,7 @@ typedef struct GTY(()) dw_val_struct {
>        HOST_WIDE_INT GTY ((default)) val_int;
>        unsigned HOST_WIDE_INT GTY ((tag
> ("dw_val_class_unsigned_const"))) val_unsigned;
>        double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
> +      wide_int GTY ((tag ("dw_val_class_wide_int"))) val_wide;
>        dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
>        struct dw_val_die_union
>        {
>
> ick.  That makes dw_val_struct really large ... (and thus
> dw_attr_struct).  You need to make this a pointer to a wide_int at
> least.

This was an incredible mess to fix, but we did it.  It required
enhancements to the gty parser.  The changes, to gengtype-lex.l and
gengtype-parse.c, are in the new patch.

> -/* Return a CONST_INT or CONST_DOUBLE corresponding to target reading
> +/* Return a constant integer corresponding to target reading
>     GET_MODE_BITSIZE (MODE) bits from string constant STR.  */
>
>  static rtx
>  c_readstr (const char *str, enum machine_mode mode)
>  {
> -  HOST_WIDE_INT c[2];
> +  wide_int c;
> ...
> -  return immed_double_const (c[0], c[1], mode);
> +
> +  c = wide_int::from_array (tmp, len, mode);
> +  return immed_wide_int_const (c, mode);
>  }
>
> err - what's this good for?  It doesn't look necessary as part of the
> initial wide-int conversion at least.  (please audit your patches for
> such cases)

It is good so that we never trip over this in the future (where the
future is defined as what I need right now with wider modes, and that is
just around the corner for other ports that are starting to use OImode
types).  This patch makes the portable back end immune from issues that
arise because of large types.  As I have said before, there is nothing
in gcc that should be limited by the size of two HWIs.
> @@ -4994,12 +4999,12 @@ expand_builtin_signbit (tree exp, rtx target)
>
>        if (bitpos < GET_MODE_BITSIZE (rmode))
>          {
> -          double_int mask = double_int_zero.set_bit (bitpos);
> +          wide_int mask = wide_int::set_bit_in_zero (bitpos, rmode);
>
>            if (GET_MODE_SIZE (imode) > GET_MODE_SIZE (rmode))
>              temp = gen_lowpart (rmode, temp);
>            temp = expand_binop (rmode, and_optab, temp,
> -                               immed_double_int_const (mask, rmode),
> +                               immed_wide_int_const (mask, rmode),
>                                 NULL_RTX, 1, OPTAB_LIB_WIDEN);
>          }
>        else
>
> Likewise.  I suppose you remove immed_double_int_const but I see no
> reason to do that.  It just makes your patch larger than necessary.

As if that code with the double-int would work for OImode.

> [what was the reason again to have TARGET_SUPPORTS_WIDE_INT at all?
> It's supposed to be a no-op conversion, right?]

If TARGET_SUPPORTS_WIDE_INT is nonzero, then the port has been
converted so that constant integers requiring more than
HOST_BITS_PER_WIDE_INT bits to represent are expected to be in a
CONST_WIDE_INT rather than sharing their representation with floating
point in a CONST_DOUBLE.  If the target has not been converted, then it
shares the representation of these larger integers with CONST_DOUBLE
and is not expected to see any constants that cannot be represented in
two HWIs.

So this patch covers the portable parts of the rtl code and converts all
of the math so that there are no places where the portable parts will
choke on large math, but it does that conversion in a way that lets
ports that have not yet been converted continue to work properly.

Note that I did not say "see no change".  There are a fair number of
places in the compiler where we had code that checked whether the
precision was less than or equal to HOST_BITS_PER_WIDE_INT and only did
the transformation if that was true.  Unless I could see that this was
only applied to values that are always small, like string lengths, I
have converted them to use the wide-int api.
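The split described above — small constants stay CONST_INT, anything
wider goes in a CONST_WIDE_INT, and redundant sign-extension HWIs are
never stored — hinges on a canonicalization rule.  The function below is
a hypothetical standalone sketch of that rule (the real code lives in
the wide-int and emit-rtl machinery): drop every high element that is
just the sign extension of the element below it.

```c
typedef long long HWI;          /* stand-in for HOST_WIDE_INT */

/* Return the number of HWIs needed to hold VAL[0..LEN-1]
   (little-endian, sign-extended) once high elements that merely
   repeat the sign of the element below are dropped.  A sketch of
   the CONST_WIDE_INT compression rule; names are invented.  */
static int
canonical_len (const HWI *val, int len)
{
  while (len > 1)
    {
      HWI ext = val[len - 2] < 0 ? -1 : 0;
      if (val[len - 1] != ext)
        break;
      len--;
    }
  return len;
}
```

Under this rule a value whose canonical length is one HWI would be the
kind of constant a CONST_INT can carry; only longer values need the
CONST_WIDE_INT form.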
My goal here is to transform gcc from a compiler that does pretty well
on things 64 bits or smaller, a lot less well on things in the range
65-128 bits, and blows up or gets the wrong answer on things larger,
into one that works uniformly well on all precisions that are supported
by the target.

You said in a branch of this thread that:

> I'd rather not have this patch-set introduce such subtle differences.

The changes are not subtle.  They are substantial and deliberate.  The
point of my work was never to introduce a prettier api; it was to make
the compiler do reasonable things when the precision is larger than 64
bits, and to keep it from doing brain-dead things when the precision is
larger than 128.  That is my primary concern here.  The rest of this is
what had to be done to get there.

> @@ -95,38 +95,9 @@ plus_constant (enum machine_mode mode, rtx x,
> HOST_WIDE_INT c)
>
>    switch (code)
>      {
> -    case CONST_INT:
> -      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
> -        {
> -          double_int di_x = double_int::from_shwi (INTVAL (x));
> -          double_int di_c = double_int::from_shwi (c);
> -
> -          bool overflow;
> -          double_int v = di_x.add_with_sign (di_c, false, &overflow);
> -          if (overflow)
> -            gcc_unreachable ();
> -
> -          return immed_double_int_const (v, VOIDmode);
> -        }
> -
> -      return GEN_INT (INTVAL (x) + c);
> -
> -    case CONST_DOUBLE:
> -      {
> -        double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
> -                                                 CONST_DOUBLE_LOW (x));
> -        double_int di_c = double_int::from_shwi (c);
> -
> -        bool overflow;
> -        double_int v = di_x.add_with_sign (di_c, false, &overflow);
> -        if (overflow)
> -          /* Sorry, we have no way to represent overflows this wide.
> -             To fix, add constant support wider than CONST_DOUBLE.  */
> -          gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
> -
> -        return immed_double_int_const (v, VOIDmode);
> -      }
> -
> +    CASE_CONST_SCALAR_INT:
> +      return immed_wide_int_const (wide_int::from_rtx (x, mode)
> +                                   + wide_int::from_shwi (c, mode), mode);
>
> you said you didn't want to convert CONST_INT to wide-int.  But the
> above is certainly a lot less efficient than before - given your change
> to support operator+ RTX even less efficient than possible.  The above
> also shows three 'mode' arguments while one to immed_wide_int_const
> would be enough (given it would truncate the arbitrary precision result
> from the addition to the mode's precision).

I did not say this, and I certainly have no intention not to do this.
What I did say was that I am going to preserve the rtl design decision
to have two representations of rtl integer constants: one for the ones
that fit into a single HWI, and another for the ones that do not.
However, I believe that my code is faster than this code.  Note that my
current patch looks more like

+    CASE_CONST_SCALAR_INT:
+      return immed_wide_int_const (wide_int::from_rtx (x, mode)
+                                   + c, mode);

This code turns into (when the constant is a CONST_INT):

1) convert x to a wide int;
2) do an inline add of the first element of the array in the wide-int
   with c;
3) sign extend past the precision if the mode's precision is less than
   HOST_BITS_PER_WIDE_INT;
4) convert back.

I claim that it is cheaper because:

1) only one value is converted into a wide int, while in the old code
   two values are converted to double-int;
2) step 2 of the wide-int is cheaper than what is done in double-int.
   There is no carry code, because we do not need to do two HWIs of
   work, and the conversion of c is basically free.

The cost of the rest of the steps is about the same.  This is one of
the reasons that Richard and I are so vehemently opposed to your
infinite precision idea.  If we did infinite precision, then you would
likely be correct: this could be more expensive.
But the finite precision implementation that we do means that we can
short-circuit the 64-bit-or-less case with an inline add of two HWIs and
be done.  Note that almost every operation in my wide-int has such a
short circuit.  And of course that 64-bit-or-less case happens 99% of
the time.

> That is, I see no reason to remove the CONST_INT case or the
> CONST_DOUBLE case.  [why is the above not in any way guarded with
> TARGET_SUPPORTS_WIDE_INT?]

The CASE_CONST_SCALAR_INT covers all possibilities.

> What happens with overflows in the wide-int case?  The double-int case
> asserted that there is no overflow across 2 * hwi precision, the
> wide-int case does not.  Still the wide-int case now truncates to
> 'mode' precision while the CONST_DOUBLE case did not.
>
> That's a change in behavior, no?  Effectively the code for CONST_INT
> and CONST_DOUBLE did "arbitrary" precision arithmetic (up to the
> precision they can encode) which wide-int changes.
>
> Can we in such cases please do a preparatory patch and change the
> CONST_INT/CONST_DOUBLE paths to do an explicit [sz]ext to mode
> precision first?  What does wide-int do with VOIDmode mode inputs?  It
> seems to ICE on them for from_rtx and use garbage (0) for from_shwi.
> Ugh.
>
> Btw, plus_constant asserts that mode is either VOIDmode (I suppose
> semantically do "arbitrary precision") or the same mode as the mode of
> x (I suppose semantically do "mode precision").  Neither the current
> nor your implementation seems to do something consistent here :/
>
> So please, for those cases (I suppose there are many more, eventually
> one of the reasons why you think that requiring a mode for all
> CONST_DOUBLEs is impossible), can we massage the current code to 1)
> document what is desired, 2) follow that specification with regard to
> computation mode / precision and result mode / precision?

What Richard said.  The only thing for me to add is to remind you that
getting rid of that assert is why I am doing this work.
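The single-HWI fast path described above — one inline add followed by a
sign extension back to the mode's precision — can be sketched in
isolation.  This is a hypothetical standalone version (the real wide-int
fast path is inline in the class and tracks precision per value), with
values kept in sign-extended canonical form:

```c
#include <stdint.h>

/* Add C to VAL, then sign-extend the result to PREC bits (1 <= PREC
   <= 64).  A sketch of the short-circuit add for values that fit in
   one HWI; names are invented for illustration.  */
static int64_t
fast_add (int64_t val, int64_t c, unsigned prec)
{
  /* Do the add in unsigned arithmetic so wraparound is well defined.  */
  uint64_t sum = (uint64_t) val + (uint64_t) c;
  if (prec < 64)
    {
      uint64_t sign = UINT64_C (1) << (prec - 1);
      sum &= sign + sign - 1;                  /* keep low PREC bits */
      return (int64_t) (sum ^ sign) - (int64_t) sign;  /* sign-extend */
    }
  return (int64_t) sum;                        /* full width: no-op */
}
```

For example, in an 8-bit precision `fast_add (127, 1, 8)` wraps to -128,
exactly the truncate-to-mode-precision behavior discussed above.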
> Thanks,
> Richard.

kenny

2013-05-02  Kenneth Zadeck

	* alias.c (rtx_equal_for_memref_p): Fixed comment.
	* builtins.c (c_getstr, c_readstr, expand_builtin_signbit): Make
	to work with any size int.
	* combine.c (try_combine, subst): Changed to support any size
	integer.
	* coretypes.h (hwivec_def, hwivec, const_hwivec): New.
	* cse.c (hash_rtx_cb): Added CONST_WIDE_INT case and modified
	DOUBLE_INT case.
	* cselib.c (rtx_equal_for_cselib_1): Converted cases to
	CASE_CONST_UNIQUE.
	(cselib_hash_rtx): Added CONST_WIDE_INT case.
	* defaults.h (TARGET_SUPPORTS_WIDE_INT): New.
	* doc/rtl.texi (CONST_DOUBLE, CONST_WIDE_INT): Updated.
	* doc/tm.texi (TARGET_SUPPORTS_WIDE_INT): New.
	* doc/tm.texi.in (TARGET_SUPPORTS_WIDE_INT): New.
	* dojump.c (prefer_and_bit_test): Use wide-int api.
	* dwarf2out.c (get_full_len): New function.
	(dw_val_equal_p, size_of_loc_descr, output_loc_operands,
	print_die, attr_checksum, same_dw_val_p, size_of_die,
	value_format, output_die, mem_loc_descriptor, loc_descriptor,
	extract_int, add_const_value_attribute, hash_loc_operands,
	compare_loc_operands): Add support for wide-ints.
	(add_AT_wide): New function.
	* dwarf2out.h (enum dw_val_class): Added dw_val_class_wide_int.
	* emit-rtl.c (const_wide_int_htab): Add marking.
	(const_wide_int_htab_hash, const_wide_int_htab_eq,
	lookup_const_wide_int, immed_wide_int_const): New functions.
	(const_double_htab_hash, const_double_htab_eq,
	rtx_to_double_int, immed_double_const): Conditionally changed
	CONST_DOUBLE behavior.
	(immed_double_const, init_emit_once): Changed to support
	wide-int.
	* explow.c (plus_constant): Now uses wide-int api.
	* expmed.c (mask_rtx, lshift_value): Now uses wide-int.
	(expand_mult, expand_smod_pow2): Make to work with any size int.
	(make_tree): Added CONST_WIDE_INT case.
	* expr.c (convert_modes): Added support for any size int.
	(emit_group_load_1): Added todo for place that still does not
	allow large ints.
	(store_expr, expand_constructor): Fixed comments.
	(expand_expr_real_2, expand_expr_real_1,
	reduce_to_bit_field_precision, const_vector_from_tree):
	Converted to use wide-int api.
	* final.c (output_addr_const): Added CONST_WIDE_INT case.
	* genemit.c (gen_exp): Added CONST_WIDE_INT case.
	* gengenrtl.c (excluded_rtx): Added CONST_WIDE_INT case.
	* gengtype-lex.l (CXX_KEYWORD): Added static.  () added "^".
	* gengtype-parse.c (require_template_declaration): Added enum
	case.
	* gengtype.c (wide-int): New type.
	* genpreds.c (write_one_predicate_function): Fixed comment.
	(add_constraint): Added CONST_WIDE_INT test.
	(write_tm_constrs_h): Do not emit hval or lval if target
	supports wide integers.
	* gensupport.c (std_preds): Added const_wide_int_operand and
	const_scalar_int_operand.
	* optabs.c (expand_subword_shift, expand_doubleword_shift,
	expand_absneg_bit, expand_absneg_bit, expand_copysign_absneg,
	expand_copysign_bit): Made to work with any size int.
	* postreload.c (reload_cse_simplify_set): Now uses wide-int api.
	* print-rtl.c (print_rtx): Added CONST_WIDE_INT case.
	* read-rtl.c (validate_const_wide_int): New function.
	(read_rtx_code): Added CONST_WIDE_INT case.
	* recog.c (const_scalar_int_operand, const_double_operand): New
	versions if target supports wide integers.
	(const_wide_int_operand): New function.
	* rtl.c (DEF_RTL_EXPR): Added CONST_WIDE_INT case.
	(rtx_size): Ditto.
	(rtx_alloc_stat, hwivec_output_hex,
	hwivec_check_failed_bounds): New functions.
	(iterative_hash_rtx): Added CONST_WIDE_INT case.
	* rtl.def (CONST_WIDE_INT): New.
	* rtl.h (hwivec_def): New function.
	(HWI_GET_NUM_ELEM, HWI_PUT_NUM_ELEM, CONST_WIDE_INT_P,
	CONST_SCALAR_INT_P, XHWIVEC_ELT, HWIVEC_CHECK,
	CONST_WIDE_INT_VEC, CONST_WIDE_INT_NUNITS, CONST_WIDE_INT_ELT,
	rtx_alloc_v): New macros.
	(chain_next): Added hwiv case.
	(CASE_CONST_SCALAR_INT, CONST_INT, CONST_WIDE_INT): Added new
	defs if target supports wide ints.
	* rtlanal.c (commutative_operand_precedence, split_double):
	Added CONST_WIDE_INT case.
	* sched-vis.c (print_value): Added CONST_WIDE_INT case and
	modified DOUBLE_INT case.
	* sel-sched-ir.c (lhs_and_rhs_separable_p): Fixed comment.
	* simplify-rtx.c (mode_signbit_p,
	simplify_const_unary_operation, simplify_binary_operation_1,
	simplify_const_binary_operation,
	simplify_const_relational_operation, simplify_immed_subreg):
	Make work with any size int.
	* tree-ssa-address.c (addr_for_mem_ref): Changes to use
	wide-int rather than double-int.
	* tree.c (wide_int_to_tree): New function.
	* var-tracking.c (loc_cmp): Added CONST_WIDE_INT case.
	* varasm.c (const_rtx_hash_1): Added CONST_WIDE_INT case.

diff --git a/gcc/alias.c b/gcc/alias.c
index ef11c6a..ed5ceb4 100644
--- a/gcc/alias.c
+++ b/gcc/alias.c
@@ -1471,9 +1471,7 @@ rtx_equal_for_memref_p (const_rtx x, const_rtx y)
 
     case VALUE:
     CASE_CONST_UNIQUE:
-      /* There's no need to compare the contents of CONST_DOUBLEs or
-	 CONST_INTs because pointer equality is a good enough
-	 comparison for these nodes.  */
+      /* Pointer equality guarantees equality for these nodes.  */
       return 0;
 
     default:
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 1fbd2f3..0c587d1 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -672,20 +672,24 @@ c_getstr (tree src)
   return TREE_STRING_POINTER (src) + tree_low_cst (offset_node, 1);
 }
 
-/* Return a CONST_INT or CONST_DOUBLE corresponding to target reading
+/* Return a constant integer corresponding to target reading
    GET_MODE_BITSIZE (MODE) bits from string constant STR.
*/ static rtx c_readstr (const char *str, enum machine_mode mode) { - HOST_WIDE_INT c[2]; + wide_int c; HOST_WIDE_INT ch; unsigned int i, j; + HOST_WIDE_INT tmp[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT]; + unsigned int len = (GET_MODE_PRECISION (mode) + HOST_BITS_PER_WIDE_INT - 1) + / HOST_BITS_PER_WIDE_INT; + + for (i = 0; i < len; i++) + tmp[i] = 0; gcc_assert (GET_MODE_CLASS (mode) == MODE_INT); - c[0] = 0; - c[1] = 0; ch = 1; for (i = 0; i < GET_MODE_SIZE (mode); i++) { @@ -696,13 +700,14 @@ c_readstr (const char *str, enum machine_mode mode) && GET_MODE_SIZE (mode) >= UNITS_PER_WORD) j = j + UNITS_PER_WORD - 2 * (j % UNITS_PER_WORD) - 1; j *= BITS_PER_UNIT; - gcc_assert (j < HOST_BITS_PER_DOUBLE_INT); if (ch) ch = (unsigned char) str[i]; - c[j / HOST_BITS_PER_WIDE_INT] |= ch << (j % HOST_BITS_PER_WIDE_INT); + tmp[j / HOST_BITS_PER_WIDE_INT] |= ch << (j % HOST_BITS_PER_WIDE_INT); } - return immed_double_const (c[0], c[1], mode); + + c = wide_int::from_array (tmp, len, GET_MODE_PRECISION (mode)); + return immed_wide_int_const (c, mode); } /* Cast a target constant CST to target CHAR and if that value fits into @@ -4994,12 +4999,12 @@ expand_builtin_signbit (tree exp, rtx target) if (bitpos < GET_MODE_BITSIZE (rmode)) { - double_int mask = double_int_zero.set_bit (bitpos); + wide_int mask = wide_int::set_bit_in_zero (bitpos, rmode); if (GET_MODE_SIZE (imode) > GET_MODE_SIZE (rmode)) temp = gen_lowpart (rmode, temp); temp = expand_binop (rmode, and_optab, temp, - immed_double_int_const (mask, rmode), + immed_wide_int_const (mask, rmode), NULL_RTX, 1, OPTAB_LIB_WIDEN); } else diff --git a/gcc/combine.c b/gcc/combine.c index 6d58b19..bfa151d 100644 --- a/gcc/combine.c +++ b/gcc/combine.c @@ -2669,23 +2669,15 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p, offset = -1; } - if (offset >= 0 - && (GET_MODE_PRECISION (GET_MODE (SET_DEST (temp))) - <= HOST_BITS_PER_DOUBLE_INT)) + if (offset >= 0) { - double_int m, o, i; + wide_int o; rtx 
inner = SET_SRC (PATTERN (i3)); rtx outer = SET_SRC (temp); - - o = rtx_to_double_int (outer); - i = rtx_to_double_int (inner); - - m = double_int::mask (width); - i &= m; - m = m.llshift (offset, HOST_BITS_PER_DOUBLE_INT); - i = i.llshift (offset, HOST_BITS_PER_DOUBLE_INT); - o = o.and_not (m) | i; - + + o = (wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp))) + .insert (wide_int::from_rtx (inner, GET_MODE (dest)), + offset, width)); combine_merges++; subst_insn = i3; subst_low_luid = DF_INSN_LUID (i2); @@ -2696,8 +2688,8 @@ try_combine (rtx i3, rtx i2, rtx i1, rtx i0, int *new_direct_jump_p, /* Replace the source in I2 with the new constant and make the resulting insn the new pattern for I3. Then skip to where we validate the pattern. Everything was set up above. */ - SUBST (SET_SRC (temp), - immed_double_int_const (o, GET_MODE (SET_DEST (temp)))); + SUBST (SET_SRC (temp), + immed_wide_int_const (o, GET_MODE (SET_DEST (temp)))); newpat = PATTERN (i2); @@ -5112,7 +5104,7 @@ subst (rtx x, rtx from, rtx to, int in_dest, int in_cond, int unique_copy) if (! 
x) x = gen_rtx_CLOBBER (mode, const0_rtx); } - else if (CONST_INT_P (new_rtx) + else if (CONST_SCALAR_INT_P (new_rtx) && GET_CODE (x) == ZERO_EXTEND) { x = simplify_unary_operation (ZERO_EXTEND, GET_MODE (x), diff --git a/gcc/coretypes.h b/gcc/coretypes.h index 71d031d..c2b6983 100644 --- a/gcc/coretypes.h +++ b/gcc/coretypes.h @@ -55,6 +55,9 @@ typedef const struct rtx_def *const_rtx; struct rtvec_def; typedef struct rtvec_def *rtvec; typedef const struct rtvec_def *const_rtvec; +struct hwivec_def; +typedef struct hwivec_def *hwivec; +typedef const struct hwivec_def *const_hwivec; union tree_node; typedef union tree_node *tree; typedef const union tree_node *const_tree; diff --git a/gcc/cse.c b/gcc/cse.c index f2c8f63..8e3bb88 100644 --- a/gcc/cse.c +++ b/gcc/cse.c @@ -2331,15 +2331,23 @@ hash_rtx_cb (const_rtx x, enum machine_mode mode, + (unsigned int) INTVAL (x)); return hash; + case CONST_WIDE_INT: + { + int i; + for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++) + hash += CONST_WIDE_INT_ELT (x, i); + } + return hash; + case CONST_DOUBLE: /* This is like the general case, except that it only counts the integers representing the constant. */ hash += (unsigned int) code + (unsigned int) GET_MODE (x); - if (GET_MODE (x) != VOIDmode) - hash += real_hash (CONST_DOUBLE_REAL_VALUE (x)); - else + if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode) hash += ((unsigned int) CONST_DOUBLE_LOW (x) + (unsigned int) CONST_DOUBLE_HIGH (x)); + else + hash += real_hash (CONST_DOUBLE_REAL_VALUE (x)); return hash; case CONST_FIXED: @@ -3756,6 +3764,7 @@ equiv_constant (rtx x) /* See if we previously assigned a constant value to this SUBREG. 
*/ if ((new_rtx = lookup_as_function (x, CONST_INT)) != 0 + || (new_rtx = lookup_as_function (x, CONST_WIDE_INT)) != 0 || (new_rtx = lookup_as_function (x, CONST_DOUBLE)) != 0 || (new_rtx = lookup_as_function (x, CONST_FIXED)) != 0) return new_rtx; diff --git a/gcc/cselib.c b/gcc/cselib.c index 589e41e..2a3b11d 100644 --- a/gcc/cselib.c +++ b/gcc/cselib.c @@ -926,8 +926,7 @@ rtx_equal_for_cselib_1 (rtx x, rtx y, enum machine_mode memmode) /* These won't be handled correctly by the code below. */ switch (GET_CODE (x)) { - case CONST_DOUBLE: - case CONST_FIXED: + CASE_CONST_UNIQUE: case DEBUG_EXPR: return 0; @@ -1121,15 +1120,23 @@ cselib_hash_rtx (rtx x, int create, enum machine_mode memmode) hash += ((unsigned) CONST_INT << 7) + INTVAL (x); return hash ? hash : (unsigned int) CONST_INT; + case CONST_WIDE_INT: + { + int i; + for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++) + hash += CONST_WIDE_INT_ELT (x, i); + } + return hash; + case CONST_DOUBLE: /* This is like the general case, except that it only counts the integers representing the constant. */ hash += (unsigned) code + (unsigned) GET_MODE (x); - if (GET_MODE (x) != VOIDmode) - hash += real_hash (CONST_DOUBLE_REAL_VALUE (x)); - else + if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode) hash += ((unsigned) CONST_DOUBLE_LOW (x) + (unsigned) CONST_DOUBLE_HIGH (x)); + else + hash += real_hash (CONST_DOUBLE_REAL_VALUE (x)); return hash ? hash : (unsigned int) CONST_DOUBLE; case CONST_FIXED: diff --git a/gcc/defaults.h b/gcc/defaults.h index 4f43f6f0..0801073 100644 --- a/gcc/defaults.h +++ b/gcc/defaults.h @@ -1404,6 +1404,14 @@ see the files COPYING3 and COPYING.RUNTIME respectively. If not, see #define SWITCHABLE_TARGET 0 #endif +/* If the target supports integers that are wider than two + HOST_WIDE_INTs on the host compiler, then the target should define + TARGET_SUPPORTS_WIDE_INT and make the appropriate fixups. + Otherwise the compiler really is not robust. 
 */
+#ifndef TARGET_SUPPORTS_WIDE_INT
+#define TARGET_SUPPORTS_WIDE_INT 0
+#endif
+
 #endif /* GCC_INSN_FLAGS_H */
 
 #endif  /* ! GCC_DEFAULTS_H */
diff --git a/gcc/doc/rtl.texi b/gcc/doc/rtl.texi
index 8829b0e..2254e2f 100644
--- a/gcc/doc/rtl.texi
+++ b/gcc/doc/rtl.texi
@@ -1525,17 +1525,22 @@ Similarly, there is only one object for the integer whose value is
 @findex const_double
 @item (const_double:@var{m} @var{i0} @var{i1} @dots{})
-Represents either a floating-point constant of mode @var{m} or an
-integer constant too large to fit into @code{HOST_BITS_PER_WIDE_INT}
-bits but small enough to fit within twice that number of bits (GCC
-does not provide a mechanism to represent even larger constants).  In
-the latter case, @var{m} will be @code{VOIDmode}.  For integral
-constants for modes with more bits than twice the number in
-@code{HOST_WIDE_INT} the implied high order bits of that constant are
-copies of the top bit of @code{CONST_DOUBLE_HIGH}.  Note however that
-integral values are neither inherently signed nor inherently unsigned;
-where necessary, signedness is determined by the rtl operation
-instead.
+This represents either a floating-point constant of mode @var{m} or
+(on older ports that do not define
+@code{TARGET_SUPPORTS_WIDE_INT}) an integer constant too large to fit
+into @code{HOST_BITS_PER_WIDE_INT} bits but small enough to fit within
+twice that number of bits (GCC does not provide a mechanism to
+represent even larger constants).  In the latter case, @var{m} will be
+@code{VOIDmode}.  For integral constants for modes with more bits
+than twice the number in @code{HOST_WIDE_INT}, the implied high order
+bits of that constant are copies of the top bit of
+@code{CONST_DOUBLE_HIGH}.  Note however that integral values are
+neither inherently signed nor inherently unsigned; where necessary,
+signedness is determined by the rtl operation instead.
+
+On more modern ports, @code{CONST_DOUBLE} only represents floating
+point values.  New ports define @code{TARGET_SUPPORTS_WIDE_INT} to
+make this designation.
 
 @findex CONST_DOUBLE_LOW
 If @var{m} is @code{VOIDmode}, the bits of the value are stored in
@@ -1550,6 +1555,37 @@ machine's or host machine's floating point format.  To convert them to
 the precise bit pattern used by the target machine, use the macro
 @code{REAL_VALUE_TO_TARGET_DOUBLE} and friends (@pxref{Data Output}).
 
+@findex CONST_WIDE_INT
+@item (const_wide_int:@var{m} @var{nunits} @var{elt0} @dots{})
+This contains an array of @code{HOST_WIDE_INT}s that is large enough
+to hold any constant that can be represented on the target.  This form
+of rtl is only used on targets that define
+@code{TARGET_SUPPORTS_WIDE_INT} to be nonzero; on those targets,
+@code{CONST_DOUBLE}s are only used to hold floating point values.  If
+the target leaves @code{TARGET_SUPPORTS_WIDE_INT} defined as 0,
+@code{CONST_WIDE_INT}s are not used and @code{CONST_DOUBLE}s are as
+they were before.
+
+The values are stored in a compressed format.  The higher order
+0s or -1s are not represented if they are just the logical sign
+extension of the number that is represented.
+
+@findex CONST_WIDE_INT_VEC
+@item CONST_WIDE_INT_VEC (@var{code})
+Returns the entire array of @code{HOST_WIDE_INT}s that are used to
+store the value.  This macro should be rarely used.
+
+@findex CONST_WIDE_INT_NUNITS
+@item CONST_WIDE_INT_NUNITS (@var{code})
+The number of @code{HOST_WIDE_INT}s used to represent the number.
+Note that this will generally be smaller than the number of
+@code{HOST_WIDE_INT}s implied by the mode size.
+
+@findex CONST_WIDE_INT_ELT
+@item CONST_WIDE_INT_ELT (@var{code},@var{i})
+Returns the @var{i}th element of the array.  Element 0 contains
+the low order bits of the constant.
+
 @findex const_fixed
 @item (const_fixed:@var{m} @dots{})
 Represents a fixed-point constant of mode @var{m}.
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi index ec7ef75..fb06ddd 100644 --- a/gcc/doc/tm.texi +++ b/gcc/doc/tm.texi @@ -11360,3 +11360,50 @@ It returns true if the target supports GNU indirect functions. The support includes the assembler, linker and dynamic linker. The default value of this hook is based on target's libc. @end deftypefn + +@defmac TARGET_SUPPORTS_WIDE_INT + +On older ports, large integers are stored in @code{CONST_DOUBLE} rtl +objects. Newer ports define @code{TARGET_SUPPORTS_WIDE_INT} to be non +zero to indicate that large integers are stored in +@code{CONST_WIDE_INT} rtl objects. The @code{CONST_WIDE_INT} allows +very large integer constants to be represented. @code{CONST_DOUBLE} +are limited to twice the size of host's @code{HOST_WIDE_INT} +representation. + +Converting a port mostly requires looking for the places where +@code{CONST_DOUBLES} are used with @code{VOIDmode} and replacing that +code with code that accesses @code{CONST_WIDE_INT}s. @samp{"grep -i +const_double"} at the port level gets you to 95% of the changes that +need to be made. There are a few places that require a deeper look. + +@itemize @bullet +@item +There is no equivalent to @code{hval} and @code{lval} for +@code{CONST_WIDE_INT}s. This would be difficult to express in the md +language since there are a variable number of elements. + +Most ports only check that @code{hval} is either 0 or -1 to see if the +value is small. As mentioned above, this will no longer be necessary +since small constants are always @code{CONST_INT}. Of course there +are still a few exceptions, the alpha's constraint used by the zap +instruction certainly requires careful examination by C code. +However, all the current code does is pass the hval and lval to C +code, so evolving the c code to look at the @code{CONST_WIDE_INT} is +not really a large change. 
+ +@item +Because there is no standard template that ports use to materialize +constants, there is likely to be some futzing that is unique to each +port in this code. + +@item +The rtx costs may have to be adjusted to properly account for larger +constants that are represented as @code{CONST_WIDE_INT}. +@end itemize + +All in all, it does not take long to convert ports that the +maintainer is familiar with. + +@end defmac + diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in index a418733..105b980 100644 --- a/gcc/doc/tm.texi.in +++ b/gcc/doc/tm.texi.in @@ -11185,3 +11185,50 @@ memory model bits are allowed. @hook TARGET_ATOMIC_TEST_AND_SET_TRUEVAL @hook TARGET_HAS_IFUNC_P + +@defmac TARGET_SUPPORTS_WIDE_INT + +On older ports, large integers are stored in @code{CONST_DOUBLE} rtl +objects. Newer ports define @code{TARGET_SUPPORTS_WIDE_INT} to be +nonzero to indicate that large integers are stored in +@code{CONST_WIDE_INT} rtl objects. The @code{CONST_WIDE_INT} form +allows very large integer constants to be represented. +@code{CONST_DOUBLE}s are limited to twice the size of the host's +@code{HOST_WIDE_INT} representation. + +Converting a port mostly requires looking for the places where +@code{CONST_DOUBLE}s are used with @code{VOIDmode} and replacing that +code with code that accesses @code{CONST_WIDE_INT}s. @samp{grep -i +const_double} at the port level gets you to 95% of the changes that +need to be made. There are a few places that require a deeper look. + +@itemize @bullet +@item +There is no equivalent to @code{hval} and @code{lval} for +@code{CONST_WIDE_INT}s. This would be difficult to express in the md +language since there are a variable number of elements. + +Most ports only check that @code{hval} is either 0 or -1 to see if the +value is small. As mentioned above, this will no longer be necessary +since small constants are always @code{CONST_INT}s.
Of course there +are still a few exceptions; the Alpha's constraint used by the +@code{zap} instruction certainly requires careful examination by C +code. However, all the current code does is pass @code{hval} and +@code{lval} to C code, so evolving the C code to look at the +@code{CONST_WIDE_INT} is not really a large change. + +@item +Because there is no standard template that ports use to materialize +constants, there is likely to be some futzing that is unique to each +port in this code. + +@item +The rtx costs may have to be adjusted to properly account for larger +constants that are represented as @code{CONST_WIDE_INT}. +@end itemize + +All in all, it does not take long to convert ports that the +maintainer is familiar with. + +@end defmac + diff --git a/gcc/dojump.c b/gcc/dojump.c index 3f04eac..ecbec40 100644 --- a/gcc/dojump.c +++ b/gcc/dojump.c @@ -142,6 +142,7 @@ static bool prefer_and_bit_test (enum machine_mode mode, int bitnum) { bool speed_p; + wide_int mask = wide_int::set_bit_in_zero (bitnum, mode); if (and_test == 0) { @@ -162,8 +163,7 @@ prefer_and_bit_test (enum machine_mode mode, int bitnum) } /* Fill in the integers. */ - XEXP (and_test, 1) - = immed_double_int_const (double_int_zero.set_bit (bitnum), mode); + XEXP (and_test, 1) = immed_wide_int_const (mask, mode); XEXP (XEXP (shift_test, 0), 1) = GEN_INT (bitnum); speed_p = optimize_insn_for_speed_p (); diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c index de69cc8..5b8088d 100644 --- a/gcc/dwarf2out.c +++ b/gcc/dwarf2out.c @@ -346,6 +346,17 @@ dump_struct_debug (tree type, enum debug_info_usage usage, #endif + +/* Get the number of HOST_WIDE_INTs needed to represent the precision + of the number.
*/ + +static unsigned int +get_full_len (const wide_int &op) +{ + return ((op.get_precision () + HOST_BITS_PER_WIDE_INT - 1) + / HOST_BITS_PER_WIDE_INT); +} + static bool should_emit_struct_debug (tree type, enum debug_info_usage usage) { @@ -1377,6 +1388,9 @@ dw_val_equal_p (dw_val_node *a, dw_val_node *b) return (a->v.val_double.high == b->v.val_double.high && a->v.val_double.low == b->v.val_double.low); + case dw_val_class_wide_int: + return *a->v.val_wide == *b->v.val_wide; + case dw_val_class_vec: { size_t a_len = a->v.val_vec.elt_size * a->v.val_vec.length; @@ -1633,6 +1647,10 @@ size_of_loc_descr (dw_loc_descr_ref loc) case dw_val_class_const_double: size += HOST_BITS_PER_DOUBLE_INT / BITS_PER_UNIT; break; + case dw_val_class_wide_int: + size += (get_full_len (*loc->dw_loc_oprnd2.v.val_wide) + * HOST_BITS_PER_WIDE_INT / BITS_PER_UNIT); + break; default: gcc_unreachable (); } @@ -1810,6 +1828,20 @@ output_loc_operands (dw_loc_descr_ref loc, int for_eh_or_skip) second, NULL); } break; + case dw_val_class_wide_int: + { + int i; + int len = get_full_len (*val2->v.val_wide); + if (WORDS_BIG_ENDIAN) + for (i = len - 1; i >= 0; --i) + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, + val2->v.val_wide->elt (i), NULL); + else + for (i = 0; i < len; ++i) + dw2_asm_output_data (HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR, + val2->v.val_wide->elt (i), NULL); + } + break; case dw_val_class_addr: gcc_assert (val1->v.val_unsigned == DWARF2_ADDR_SIZE); dw2_asm_output_addr_rtx (DWARF2_ADDR_SIZE, val2->v.val_addr, NULL); @@ -2019,6 +2051,21 @@ output_loc_operands (dw_loc_descr_ref loc, int for_eh_or_skip) dw2_asm_output_data (l, second, NULL); } break; + case dw_val_class_wide_int: + { + int i; + int len = get_full_len (*val2->v.val_wide); + l = HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR; + + dw2_asm_output_data (1, len * l, NULL); + if (WORDS_BIG_ENDIAN) + for (i = len - 1; i >= 0; --i) + dw2_asm_output_data (l, val2->v.val_wide->elt (i), NULL); + else + for (i =
0; i < len; ++i) + dw2_asm_output_data (l, val2->v.val_wide->elt (i), NULL); + } + break; default: gcc_unreachable (); } @@ -3110,7 +3157,7 @@ static void add_AT_location_description (dw_die_ref, enum dwarf_attribute, static void add_data_member_location_attribute (dw_die_ref, tree); static bool add_const_value_attribute (dw_die_ref, rtx); static void insert_int (HOST_WIDE_INT, unsigned, unsigned char *); -static void insert_double (double_int, unsigned char *); +static void insert_wide_int (const wide_int &, unsigned char *); static void insert_float (const_rtx, unsigned char *); static rtx rtl_for_decl_location (tree); static bool add_location_or_const_value_attribute (dw_die_ref, tree, bool, @@ -3735,6 +3782,21 @@ AT_unsigned (dw_attr_ref a) return a->dw_attr_val.v.val_unsigned; } +/* Add an unsigned wide integer attribute value to a DIE. */ + +static inline void +add_AT_wide (dw_die_ref die, enum dwarf_attribute attr_kind, + const wide_int& w) +{ + dw_attr_node attr; + + attr.dw_attr = attr_kind; + attr.dw_attr_val.val_class = dw_val_class_wide_int; + attr.dw_attr_val.v.val_wide = ggc_alloc_cleared_wide_int (); + *attr.dw_attr_val.v.val_wide = w; + add_dwarf_attr (die, &attr); +} + /* Add an unsigned double integer attribute value to a DIE. 
*/ static inline void @@ -5299,6 +5361,19 @@ print_die (dw_die_ref die, FILE *outfile) a->dw_attr_val.v.val_double.high, a->dw_attr_val.v.val_double.low); break; + case dw_val_class_wide_int: + { + int i = a->dw_attr_val.v.val_wide->get_len (); + fprintf (outfile, "constant ("); + gcc_assert (i > 0); + if (a->dw_attr_val.v.val_wide->elt (i - 1) == 0) + fprintf (outfile, "0x"); + fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, a->dw_attr_val.v.val_wide->elt (--i)); + while (--i >= 0) + fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, a->dw_attr_val.v.val_wide->elt (i)); + fprintf (outfile, ")"); + break; + } case dw_val_class_vec: fprintf (outfile, "floating-point or vector constant"); break; @@ -5470,6 +5545,9 @@ attr_checksum (dw_attr_ref at, struct md5_ctx *ctx, int *mark) case dw_val_class_const_double: CHECKSUM (at->dw_attr_val.v.val_double); break; + case dw_val_class_wide_int: + CHECKSUM (*at->dw_attr_val.v.val_wide); + break; case dw_val_class_vec: CHECKSUM (at->dw_attr_val.v.val_vec); break; @@ -5740,6 +5818,12 @@ attr_checksum_ordered (enum dwarf_tag tag, dw_attr_ref at, CHECKSUM (at->dw_attr_val.v.val_double); break; + case dw_val_class_wide_int: + CHECKSUM_ULEB128 (DW_FORM_block); + CHECKSUM_ULEB128 (sizeof (*at->dw_attr_val.v.val_wide)); + CHECKSUM (*at->dw_attr_val.v.val_wide); + break; + case dw_val_class_vec: CHECKSUM_ULEB128 (DW_FORM_block); CHECKSUM_ULEB128 (sizeof (at->dw_attr_val.v.val_vec)); @@ -6204,6 +6288,8 @@ same_dw_val_p (const dw_val_node *v1, const dw_val_node *v2, int *mark) case dw_val_class_const_double: return v1->v.val_double.high == v2->v.val_double.high && v1->v.val_double.low == v2->v.val_double.low; + case dw_val_class_wide_int: + return *v1->v.val_wide == *v2->v.val_wide; case dw_val_class_vec: if (v1->v.val_vec.length != v2->v.val_vec.length || v1->v.val_vec.elt_size != v2->v.val_vec.elt_size) @@ -7676,6 +7762,13 @@ size_of_die (dw_die_ref die) if (HOST_BITS_PER_WIDE_INT >= 64) size++; /* block */ break; + case dw_val_class_wide_int:
+ size += (get_full_len (*a->dw_attr_val.v.val_wide) + * HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR); + if (get_full_len (*a->dw_attr_val.v.val_wide) * HOST_BITS_PER_WIDE_INT + > 64) + size++; /* block */ + break; case dw_val_class_vec: size += constant_size (a->dw_attr_val.v.val_vec.length * a->dw_attr_val.v.val_vec.elt_size) @@ -8014,6 +8107,20 @@ value_format (dw_attr_ref a) default: return DW_FORM_block1; } + case dw_val_class_wide_int: + switch (get_full_len (*a->dw_attr_val.v.val_wide) * HOST_BITS_PER_WIDE_INT) + { + case 8: + return DW_FORM_data1; + case 16: + return DW_FORM_data2; + case 32: + return DW_FORM_data4; + case 64: + return DW_FORM_data8; + default: + return DW_FORM_block1; + } case dw_val_class_vec: switch (constant_size (a->dw_attr_val.v.val_vec.length * a->dw_attr_val.v.val_vec.elt_size)) @@ -8453,6 +8560,32 @@ output_die (dw_die_ref die) } break; + case dw_val_class_wide_int: + { + int i; + int len = get_full_len (*a->dw_attr_val.v.val_wide); + int l = HOST_BITS_PER_WIDE_INT / HOST_BITS_PER_CHAR; + if (len * HOST_BITS_PER_WIDE_INT > 64) + dw2_asm_output_data (1, len * l, NULL); + + if (WORDS_BIG_ENDIAN) + for (i = len - 1; i >= 0; --i) + { + dw2_asm_output_data (l, a->dw_attr_val.v.val_wide->elt (i), + name); + name = NULL; + } + else + for (i = 0; i < len; ++i) + { + dw2_asm_output_data (l, a->dw_attr_val.v.val_wide->elt (i), + name); + name = NULL; + } + } + break; + case dw_val_class_vec: { unsigned int elt_size = a->dw_attr_val.v.val_vec.elt_size; @@ -11603,9 +11736,8 @@ clz_loc_descriptor (rtx rtl, enum machine_mode mode, msb = GEN_INT ((unsigned HOST_WIDE_INT) 1 << (GET_MODE_BITSIZE (mode) - 1)); else - msb = immed_double_const (0, (unsigned HOST_WIDE_INT) 1 - << (GET_MODE_BITSIZE (mode) - - HOST_BITS_PER_WIDE_INT - 1), mode); + msb = immed_wide_int_const + (wide_int::set_bit_in_zero (GET_MODE_PRECISION (mode) - 1, mode), mode); if (GET_CODE (msb) == CONST_INT && INTVAL (msb) < 0) tmp = new_loc_descr
(HOST_BITS_PER_WIDE_INT == 32 ? DW_OP_const4u : HOST_BITS_PER_WIDE_INT == 64 @@ -12546,7 +12678,16 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref; mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die; mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0; - if (SCALAR_FLOAT_MODE_P (mode)) +#if TARGET_SUPPORTS_WIDE_INT == 0 + if (!SCALAR_FLOAT_MODE_P (mode)) + { + mem_loc_result->dw_loc_oprnd2.val_class + = dw_val_class_const_double; + mem_loc_result->dw_loc_oprnd2.v.val_double + = rtx_to_double_int (rtl); + } + else +#endif { unsigned int length = GET_MODE_SIZE (mode); unsigned char *array @@ -12558,13 +12699,27 @@ mem_loc_descriptor (rtx rtl, enum machine_mode mode, mem_loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4; mem_loc_result->dw_loc_oprnd2.v.val_vec.array = array; } - else - { - mem_loc_result->dw_loc_oprnd2.val_class - = dw_val_class_const_double; - mem_loc_result->dw_loc_oprnd2.v.val_double - = rtx_to_double_int (rtl); - } + } + break; + + case CONST_WIDE_INT: + if (!dwarf_strict) + { + dw_die_ref type_die; + + type_die = base_type_for_mode (mode, + GET_MODE_CLASS (mode) == MODE_INT); + if (type_die == NULL) + return NULL; + mem_loc_result = new_loc_descr (DW_OP_GNU_const_type, 0, 0); + mem_loc_result->dw_loc_oprnd1.val_class = dw_val_class_die_ref; + mem_loc_result->dw_loc_oprnd1.v.val_die_ref.die = type_die; + mem_loc_result->dw_loc_oprnd1.v.val_die_ref.external = 0; + mem_loc_result->dw_loc_oprnd2.val_class + = dw_val_class_wide_int; + mem_loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc_cleared_wide_int (); + *mem_loc_result->dw_loc_oprnd2.v.val_wide + = wide_int::from_rtx (rtl, mode); } break; @@ -13035,7 +13190,15 @@ loc_descriptor (rtx rtl, enum machine_mode mode, adequately represented. We output CONST_DOUBLEs as blocks. 
*/ loc_result = new_loc_descr (DW_OP_implicit_value, GET_MODE_SIZE (mode), 0); - if (SCALAR_FLOAT_MODE_P (mode)) +#if TARGET_SUPPORTS_WIDE_INT == 0 + if (!SCALAR_FLOAT_MODE_P (mode)) + { + loc_result->dw_loc_oprnd2.val_class = dw_val_class_const_double; + loc_result->dw_loc_oprnd2.v.val_double + = rtx_to_double_int (rtl); + } + else +#endif { unsigned int length = GET_MODE_SIZE (mode); unsigned char *array @@ -13047,12 +13210,27 @@ loc_descriptor (rtx rtl, enum machine_mode mode, loc_result->dw_loc_oprnd2.v.val_vec.elt_size = 4; loc_result->dw_loc_oprnd2.v.val_vec.array = array; } - else - { - loc_result->dw_loc_oprnd2.val_class = dw_val_class_const_double; - loc_result->dw_loc_oprnd2.v.val_double - = rtx_to_double_int (rtl); - } + } + break; + + case CONST_WIDE_INT: + if (mode == VOIDmode) + mode = GET_MODE (rtl); + + if (mode != VOIDmode && (dwarf_version >= 4 || !dwarf_strict)) + { + gcc_assert (mode == GET_MODE (rtl) || VOIDmode == GET_MODE (rtl)); + + /* A CONST_WIDE_INT is always an integer constant that requires + more than one word in order to be adequately represented, so + we output it as a block.
*/ + loc_result = new_loc_descr (DW_OP_implicit_value, + GET_MODE_SIZE (mode), 0); + loc_result->dw_loc_oprnd2.val_class = dw_val_class_wide_int; + loc_result->dw_loc_oprnd2.v.val_wide = ggc_alloc_cleared_wide_int (); + *loc_result->dw_loc_oprnd2.v.val_wide + = wide_int::from_rtx (rtl, mode); } break; @@ -13068,6 +13246,7 @@ loc_descriptor (rtx rtl, enum machine_mode mode, ggc_alloc_atomic (length * elt_size); unsigned int i; unsigned char *p; + enum machine_mode imode = GET_MODE_INNER (mode); gcc_assert (mode == GET_MODE (rtl) || VOIDmode == GET_MODE (rtl)); switch (GET_MODE_CLASS (mode)) @@ -13076,15 +13255,8 @@ loc_descriptor (rtx rtl, enum machine_mode mode, for (i = 0, p = array; i < length; i++, p += elt_size) { rtx elt = CONST_VECTOR_ELT (rtl, i); - double_int val = rtx_to_double_int (elt); - - if (elt_size <= sizeof (HOST_WIDE_INT)) - insert_int (val.to_shwi (), elt_size, p); - else - { - gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT)); - insert_double (val, p); - } + wide_int val = wide_int::from_rtx (elt, imode); + insert_wide_int (val, p); } break; @@ -14709,22 +14881,27 @@ extract_int (const unsigned char *src, unsigned int size) return val; } -/* Writes double_int values to dw_vec_const array. */ +/* Writes wide_int values to dw_vec_const array. 
*/ static void -insert_double (double_int val, unsigned char *dest) +insert_wide_int (const wide_int &val, unsigned char *dest) { - unsigned char *p0 = dest; - unsigned char *p1 = dest + sizeof (HOST_WIDE_INT); + int i; if (WORDS_BIG_ENDIAN) - { - p0 = p1; - p1 = dest; - } - - insert_int ((HOST_WIDE_INT) val.low, sizeof (HOST_WIDE_INT), p0); - insert_int ((HOST_WIDE_INT) val.high, sizeof (HOST_WIDE_INT), p1); + for (i = (int)get_full_len (val) - 1; i >= 0; i--) + { + insert_int ((HOST_WIDE_INT) val.elt (i), + sizeof (HOST_WIDE_INT), dest); + dest += sizeof (HOST_WIDE_INT); + } + else + for (i = 0; i < (int)get_full_len (val); i++) + { + insert_int ((HOST_WIDE_INT) val.elt (i), + sizeof (HOST_WIDE_INT), dest); + dest += sizeof (HOST_WIDE_INT); + } } /* Writes floating point values to dw_vec_const array. */ @@ -14769,6 +14946,11 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) } return true; + case CONST_WIDE_INT: + add_AT_wide (die, DW_AT_const_value, + wide_int::from_rtx (rtl, GET_MODE (rtl))); + return true; + case CONST_DOUBLE: /* Note that a CONST_DOUBLE rtx could represent either an integer or a floating-point constant. 
A CONST_DOUBLE is used whenever the @@ -14777,7 +14959,10 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) { enum machine_mode mode = GET_MODE (rtl); - if (SCALAR_FLOAT_MODE_P (mode)) + if (TARGET_SUPPORTS_WIDE_INT == 0 && !SCALAR_FLOAT_MODE_P (mode)) + add_AT_double (die, DW_AT_const_value, + CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl)); + else { unsigned int length = GET_MODE_SIZE (mode); unsigned char *array = (unsigned char *) ggc_alloc_atomic (length); @@ -14785,9 +14970,6 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) insert_float (rtl, array); add_AT_vec (die, DW_AT_const_value, length / 4, 4, array); } - else - add_AT_double (die, DW_AT_const_value, - CONST_DOUBLE_HIGH (rtl), CONST_DOUBLE_LOW (rtl)); } return true; @@ -14800,6 +14982,7 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) (length * elt_size); unsigned int i; unsigned char *p; + enum machine_mode imode = GET_MODE_INNER (mode); switch (GET_MODE_CLASS (mode)) { @@ -14807,15 +14990,8 @@ add_const_value_attribute (dw_die_ref die, rtx rtl) for (i = 0, p = array; i < length; i++, p += elt_size) { rtx elt = CONST_VECTOR_ELT (rtl, i); - double_int val = rtx_to_double_int (elt); - - if (elt_size <= sizeof (HOST_WIDE_INT)) - insert_int (val.to_shwi (), elt_size, p); - else - { - gcc_assert (elt_size == 2 * sizeof (HOST_WIDE_INT)); - insert_double (val, p); - } + wide_int val = wide_int::from_rtx (elt, imode); + insert_wide_int (val, p); } break; @@ -23185,6 +23361,9 @@ hash_loc_operands (dw_loc_descr_ref loc, hashval_t hash) hash = iterative_hash_object (val2->v.val_double.low, hash); hash = iterative_hash_object (val2->v.val_double.high, hash); break; + case dw_val_class_wide_int: + hash = iterative_hash_object (*val2->v.val_wide, hash); + break; case dw_val_class_addr: hash = iterative_hash_rtx (val2->v.val_addr, hash); break; @@ -23274,6 +23453,9 @@ hash_loc_operands (dw_loc_descr_ref loc, hashval_t hash) hash = iterative_hash_object (val2->v.val_double.low, hash); hash = 
iterative_hash_object (val2->v.val_double.high, hash); break; + case dw_val_class_wide_int: + hash = iterative_hash_object (*val2->v.val_wide, hash); + break; default: gcc_unreachable (); } @@ -23422,6 +23604,8 @@ compare_loc_operands (dw_loc_descr_ref x, dw_loc_descr_ref y) case dw_val_class_const_double: return valx2->v.val_double.low == valy2->v.val_double.low && valx2->v.val_double.high == valy2->v.val_double.high; + case dw_val_class_wide_int: + return *valx2->v.val_wide == *valy2->v.val_wide; case dw_val_class_addr: return rtx_equal_p (valx2->v.val_addr, valy2->v.val_addr); default: @@ -23465,6 +23649,8 @@ compare_loc_operands (dw_loc_descr_ref x, dw_loc_descr_ref y) case dw_val_class_const_double: return valx2->v.val_double.low == valy2->v.val_double.low && valx2->v.val_double.high == valy2->v.val_double.high; + case dw_val_class_wide_int: + return *valx2->v.val_wide == *valy2->v.val_wide; default: gcc_unreachable (); } diff --git a/gcc/dwarf2out.h b/gcc/dwarf2out.h index ad03a34..d6af85b 100644 --- a/gcc/dwarf2out.h +++ b/gcc/dwarf2out.h @@ -21,6 +21,7 @@ along with GCC; see the file COPYING3. If not see #define GCC_DWARF2OUT_H 1 #include "dwarf2.h" /* ??? Remove this once only used by dwarf2foo.c. 
*/ +#include "wide-int.h" typedef struct die_struct *dw_die_ref; typedef const struct die_struct *const_dw_die_ref; @@ -29,6 +30,7 @@ typedef struct dw_val_struct *dw_val_ref; typedef struct dw_cfi_struct *dw_cfi_ref; typedef struct dw_loc_descr_struct *dw_loc_descr_ref; typedef struct dw_loc_list_struct *dw_loc_list_ref; +typedef struct wide_int *wide_int_ref; /* Call frames are described using a sequence of Call Frame @@ -139,6 +141,7 @@ enum dw_val_class dw_val_class_const, dw_val_class_unsigned_const, dw_val_class_const_double, + dw_val_class_wide_int, dw_val_class_vec, dw_val_class_flag, dw_val_class_die_ref, @@ -180,6 +183,7 @@ typedef struct GTY(()) dw_val_struct { HOST_WIDE_INT GTY ((default)) val_int; unsigned HOST_WIDE_INT GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned; double_int GTY ((tag ("dw_val_class_const_double"))) val_double; + wide_int_ref GTY ((tag ("dw_val_class_wide_int"))) val_wide; dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec; struct dw_val_die_union { diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c index 538b1ec..747735f 100644 --- a/gcc/emit-rtl.c +++ b/gcc/emit-rtl.c @@ -124,6 +124,9 @@ rtx cc0_rtx; static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def))) htab_t const_int_htab; +static GTY ((if_marked ("ggc_marked_p"), param_is (struct rtx_def))) + htab_t const_wide_int_htab; + /* A hash table storing memory attribute structures. 
*/ static GTY ((if_marked ("ggc_marked_p"), param_is (struct mem_attrs))) htab_t mem_attrs_htab; @@ -149,6 +152,11 @@ static void set_used_decls (tree); static void mark_label_nuses (rtx); static hashval_t const_int_htab_hash (const void *); static int const_int_htab_eq (const void *, const void *); +#if TARGET_SUPPORTS_WIDE_INT +static hashval_t const_wide_int_htab_hash (const void *); +static int const_wide_int_htab_eq (const void *, const void *); +static rtx lookup_const_wide_int (rtx); +#endif static hashval_t const_double_htab_hash (const void *); static int const_double_htab_eq (const void *, const void *); static rtx lookup_const_double (rtx); @@ -185,6 +193,43 @@ const_int_htab_eq (const void *x, const void *y) return (INTVAL ((const_rtx) x) == *((const HOST_WIDE_INT *) y)); } +#if TARGET_SUPPORTS_WIDE_INT +/* Returns a hash code for X (which is really a CONST_WIDE_INT). */ + +static hashval_t +const_wide_int_htab_hash (const void *x) +{ + int i; + HOST_WIDE_INT hash = 0; + const_rtx xr = (const_rtx) x; + + for (i = 0; i < CONST_WIDE_INT_NUNITS (xr); i++) + hash += CONST_WIDE_INT_ELT (xr, i); + + return (hashval_t) hash; +} + +/* Returns nonzero if the value represented by X (which is really a + CONST_WIDE_INT) is the same as that given by Y (which is really a + CONST_WIDE_INT). */ + +static int +const_wide_int_htab_eq (const void *x, const void *y) +{ + int i; + const_rtx xr = (const_rtx) x; + const_rtx yr = (const_rtx) y; + if (CONST_WIDE_INT_NUNITS (xr) != CONST_WIDE_INT_NUNITS (yr)) + return 0; + + for (i = 0; i < CONST_WIDE_INT_NUNITS (xr); i++) + if (CONST_WIDE_INT_ELT (xr, i) != CONST_WIDE_INT_ELT (yr, i)) + return 0; + + return 1; +} +#endif + /* Returns a hash code for X (which is really a CONST_DOUBLE).
*/ static hashval_t const_double_htab_hash (const void *x) @@ -192,7 +237,7 @@ const_double_htab_hash (const void *x) const_rtx const value = (const_rtx) x; hashval_t h; - if (GET_MODE (value) == VOIDmode) + if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (value) == VOIDmode) h = CONST_DOUBLE_LOW (value) ^ CONST_DOUBLE_HIGH (value); else { @@ -212,7 +257,7 @@ const_double_htab_eq (const void *x, const void *y) if (GET_MODE (a) != GET_MODE (b)) return 0; - if (GET_MODE (a) == VOIDmode) + if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (a) == VOIDmode) return (CONST_DOUBLE_LOW (a) == CONST_DOUBLE_LOW (b) && CONST_DOUBLE_HIGH (a) == CONST_DOUBLE_HIGH (b)); else @@ -478,6 +523,7 @@ const_fixed_from_fixed_value (FIXED_VALUE_TYPE value, enum machine_mode mode) return lookup_const_fixed (fixed); } +#if TARGET_SUPPORTS_WIDE_INT == 0 /* Constructs double_int from rtx CST. */ double_int @@ -497,17 +543,60 @@ rtx_to_double_int (const_rtx cst) return r; } +#endif +#if TARGET_SUPPORTS_WIDE_INT +/* Determine whether WIDE_INT already exists in the hash table. If + so, return its counterpart; otherwise add it to the hash table and + return it. */ + +static rtx +lookup_const_wide_int (rtx wint) +{ + void **slot = htab_find_slot (const_wide_int_htab, wint, INSERT); + if (*slot == 0) + *slot = wint; -/* Return a CONST_DOUBLE or CONST_INT for a value specified as - a double_int. */ + return (rtx) *slot; +} +#endif +/* V contains a wide_int. Produce a CONST_INT, a CONST_WIDE_INT (if + TARGET_SUPPORTS_WIDE_INT is defined), or a CONST_DOUBLE (if it is + not), based on the number of HOST_WIDE_INTs that are necessary to + represent the value in compact form.
*/ rtx -immed_double_int_const (double_int i, enum machine_mode mode) +immed_wide_int_const (const wide_int &v, enum machine_mode mode) { - return immed_double_const (i.low, i.high, mode); + unsigned int len = v.get_len (); + + if (len < 2) + return gen_int_mode (v.elt (0), mode); + + gcc_assert (GET_MODE_PRECISION (mode) == v.get_precision ()); + +#if TARGET_SUPPORTS_WIDE_INT + { + rtx value = const_wide_int_alloc (len); + unsigned int i; + + /* It is so tempting to just put the mode in here. Must control + myself ... */ + PUT_MODE (value, VOIDmode); + HWI_PUT_NUM_ELEM (CONST_WIDE_INT_VEC (value), len); + + for (i = 0; i < len; i++) + CONST_WIDE_INT_ELT (value, i) = v.elt (i); + + return lookup_const_wide_int (value); + } +#else + return immed_double_const (v.elt (0), v.elt (1), mode); +#endif } +#if TARGET_SUPPORTS_WIDE_INT == 0 /* Return a CONST_DOUBLE or CONST_INT for a value specified as a pair of ints: I0 is the low-order word and I1 is the high-order word. For values that are larger than HOST_BITS_PER_DOUBLE_INT, the @@ -559,6 +648,7 @@ immed_double_const (HOST_WIDE_INT i0, HOST_WIDE_INT i1, enum machine_mode mode) return lookup_const_double (value); } +#endif rtx gen_rtx_REG (enum machine_mode mode, unsigned int regno) @@ -5694,11 +5784,15 @@ init_emit_once (void) enum machine_mode mode; enum machine_mode double_mode; - /* Initialize the CONST_INT, CONST_DOUBLE, CONST_FIXED, and memory attribute - hash tables. */ + /* Initialize the CONST_INT, CONST_WIDE_INT, CONST_DOUBLE, + CONST_FIXED, and memory attribute hash tables. 
*/ const_int_htab = htab_create_ggc (37, const_int_htab_hash, const_int_htab_eq, NULL); +#if TARGET_SUPPORTS_WIDE_INT + const_wide_int_htab = htab_create_ggc (37, const_wide_int_htab_hash, + const_wide_int_htab_eq, NULL); +#endif const_double_htab = htab_create_ggc (37, const_double_htab_hash, const_double_htab_eq, NULL); diff --git a/gcc/explow.c b/gcc/explow.c index 08a6653..7eebef5 100644 --- a/gcc/explow.c +++ b/gcc/explow.c @@ -95,38 +95,8 @@ plus_constant (enum machine_mode mode, rtx x, HOST_WIDE_INT c) switch (code) { - case CONST_INT: - if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT) - { - double_int di_x = double_int::from_shwi (INTVAL (x)); - double_int di_c = double_int::from_shwi (c); - - bool overflow; - double_int v = di_x.add_with_sign (di_c, false, &overflow); - if (overflow) - gcc_unreachable (); - - return immed_double_int_const (v, VOIDmode); - } - - return GEN_INT (INTVAL (x) + c); - - case CONST_DOUBLE: - { - double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x), - CONST_DOUBLE_LOW (x)); - double_int di_c = double_int::from_shwi (c); - - bool overflow; - double_int v = di_x.add_with_sign (di_c, false, &overflow); - if (overflow) - /* Sorry, we have no way to represent overflows this wide. - To fix, add constant support wider than CONST_DOUBLE. */ - gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT); - - return immed_double_int_const (v, VOIDmode); - } - + CASE_CONST_SCALAR_INT: + return immed_wide_int_const (wide_int::from_rtx (x, mode) + c, mode); case MEM: /* If this is a reference to the constant pool, try replacing it with a reference to a new constant. 
If the resulting address isn't diff --git a/gcc/expmed.c b/gcc/expmed.c index 3c3a179..ae726c2 100644 --- a/gcc/expmed.c +++ b/gcc/expmed.c @@ -55,7 +55,6 @@ static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT, static rtx extract_fixed_bit_field (enum machine_mode, rtx, unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT, rtx, int, bool); -static rtx mask_rtx (enum machine_mode, int, int, int); static rtx lshift_value (enum machine_mode, rtx, int, int); static rtx extract_split_bit_field (rtx, unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT, int); @@ -63,6 +62,18 @@ static void do_cmp_and_jump (rtx, rtx, enum rtx_code, enum machine_mode, rtx); static rtx expand_smod_pow2 (enum machine_mode, rtx, HOST_WIDE_INT); static rtx expand_sdiv_pow2 (enum machine_mode, rtx, HOST_WIDE_INT); +/* Return a constant integer mask value of mode MODE with BITSIZE ones + followed by BITPOS zeros, or the complement of that if COMPLEMENT. + The mask is truncated if necessary to the width of mode MODE. The + mask is zero-extended if BITSIZE+BITPOS is too small for MODE. */ + +static inline rtx +mask_rtx (enum machine_mode mode, int bitpos, int bitsize, bool complement) +{ + return immed_wide_int_const + (wide_int::shifted_mask (bitpos, bitsize, complement, mode), mode); +} + /* Test whether a value is zero of a power of two. */ #define EXACT_POWER_OF_2_OR_ZERO_P(x) \ (((x) & ((x) - (unsigned HOST_WIDE_INT) 1)) == 0) @@ -1832,39 +1843,15 @@ extract_fixed_bit_field (enum machine_mode tmode, rtx op0, return expand_shift (RSHIFT_EXPR, mode, op0, GET_MODE_BITSIZE (mode) - bitsize, target, 0); } - -/* Return a constant integer (CONST_INT or CONST_DOUBLE) mask value - of mode MODE with BITSIZE ones followed by BITPOS zeros, or the - complement of that if COMPLEMENT. The mask is truncated if - necessary to the width of mode MODE. The mask is zero-extended if - BITSIZE+BITPOS is too small for MODE. 
*/ - -static rtx -mask_rtx (enum machine_mode mode, int bitpos, int bitsize, int complement) -{ - double_int mask; - - mask = double_int::mask (bitsize); - mask = mask.llshift (bitpos, HOST_BITS_PER_DOUBLE_INT); - - if (complement) - mask = ~mask; - - return immed_double_int_const (mask, mode); -} - -/* Return a constant integer (CONST_INT or CONST_DOUBLE) rtx with the value - VALUE truncated to BITSIZE bits and then shifted left BITPOS bits. */ +/* Return a constant integer rtx with the value VALUE truncated to + BITSIZE bits and then shifted left BITPOS bits. */ static rtx lshift_value (enum machine_mode mode, rtx value, int bitpos, int bitsize) { - double_int val; - - val = double_int::from_uhwi (INTVAL (value)).zext (bitsize); - val = val.llshift (bitpos, HOST_BITS_PER_DOUBLE_INT); - - return immed_double_int_const (val, mode); + return + immed_wide_int_const (wide_int::from_rtx (value, mode) + .zext (bitsize).lshift (bitpos), mode); } /* Extract a bit field that is split across two words @@ -3069,37 +3056,41 @@ expand_mult (enum machine_mode mode, rtx op0, rtx op1, rtx target, only if the constant value exactly fits in an `unsigned int' without any truncation. This means that multiplying by negative values does not work; results are off by 2^32 on a 32 bit machine. */ - if (CONST_INT_P (scalar_op1)) { coeff = INTVAL (scalar_op1); is_neg = coeff < 0; } +#if TARGET_SUPPORTS_WIDE_INT + else if (CONST_WIDE_INT_P (scalar_op1)) +#else else if (CONST_DOUBLE_AS_INT_P (scalar_op1)) +#endif { - /* If we are multiplying in DImode, it may still be a win - to try to work with shifts and adds. */ - if (CONST_DOUBLE_HIGH (scalar_op1) == 0 - && (CONST_DOUBLE_LOW (scalar_op1) > 0 - || (CONST_DOUBLE_LOW (scalar_op1) < 0 - && EXACT_POWER_OF_2_OR_ZERO_P - (CONST_DOUBLE_LOW (scalar_op1))))) + int p = GET_MODE_PRECISION (mode); + wide_int val = wide_int::from_rtx (scalar_op1, mode); + int shift = val.exact_log2 ().to_shwi (); + /* Perfect power of 2. 
*/ + is_neg = false; + if (shift > 0) { - coeff = CONST_DOUBLE_LOW (scalar_op1); - is_neg = false; + /* Do the shift count truncation against the bitsize, not + the precision. See the comment above + wide-int.c:trunc_shift for details. */ + if (SHIFT_COUNT_TRUNCATED) + shift &= GET_MODE_BITSIZE (mode) - 1; + /* We could consider adding just a move of 0 to target + if the shift >= p. */ + if (shift < p) + return expand_shift (LSHIFT_EXPR, mode, op0, + shift, target, unsignedp); + /* Any positive number that fits in a word. */ + coeff = CONST_WIDE_INT_ELT (scalar_op1, 0); } - else if (CONST_DOUBLE_LOW (scalar_op1) == 0) + else if (val.sign_mask () == 0) { - coeff = CONST_DOUBLE_HIGH (scalar_op1); - if (EXACT_POWER_OF_2_OR_ZERO_P (coeff)) - { - int shift = floor_log2 (coeff) + HOST_BITS_PER_WIDE_INT; - if (shift < HOST_BITS_PER_DOUBLE_INT - 1 - || mode_bitsize <= HOST_BITS_PER_DOUBLE_INT) - return expand_shift (LSHIFT_EXPR, mode, op0, - shift, target, unsignedp); - } - goto skip_synth; + /* Any positive number that fits in a word. */ + coeff = CONST_WIDE_INT_ELT (scalar_op1, 0); } else goto skip_synth; @@ -3601,9 +3592,10 @@ expmed_mult_highpart (enum machine_mode mode, rtx op0, rtx op1, static rtx expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d) { - unsigned HOST_WIDE_INT masklow, maskhigh; rtx result, temp, shift, label; int logd; + wide_int mask; + int prec = GET_MODE_PRECISION (mode); logd = floor_log2 (d); result = gen_reg_rtx (mode); @@ -3616,8 +3608,8 @@ expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d) mode, 0, -1); if (signmask) { + HOST_WIDE_INT masklow = ((HOST_WIDE_INT) 1 << logd) - 1; signmask = force_reg (mode, signmask); - masklow = ((HOST_WIDE_INT) 1 << logd) - 1; shift = GEN_INT (GET_MODE_BITSIZE (mode) - logd); /* Use the rtx_cost of a LSHIFTRT instruction to determine @@ -3662,19 +3654,11 @@ expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d) modulus.
By including the signbit in the operation, many targets can avoid an explicit compare operation in the following comparison against zero. */ - - masklow = ((HOST_WIDE_INT) 1 << logd) - 1; - if (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_WIDE_INT) - { - masklow |= (HOST_WIDE_INT) -1 << (GET_MODE_BITSIZE (mode) - 1); - maskhigh = -1; - } - else - maskhigh = (HOST_WIDE_INT) -1 - << (GET_MODE_BITSIZE (mode) - HOST_BITS_PER_WIDE_INT - 1); + mask = wide_int::mask (logd, false, mode); + mask = mask.set_bit (prec - 1); temp = expand_binop (mode, and_optab, op0, - immed_double_const (masklow, maskhigh, mode), + immed_wide_int_const (mask, mode), result, 1, OPTAB_LIB_WIDEN); if (temp != result) emit_move_insn (result, temp); @@ -3684,10 +3668,10 @@ expand_smod_pow2 (enum machine_mode mode, rtx op0, HOST_WIDE_INT d) temp = expand_binop (mode, sub_optab, result, const1_rtx, result, 0, OPTAB_LIB_WIDEN); - masklow = (HOST_WIDE_INT) -1 << logd; - maskhigh = -1; + + mask = wide_int::mask (logd, true, mode); temp = expand_binop (mode, ior_optab, temp, - immed_double_const (masklow, maskhigh, mode), + immed_wide_int_const (mask, mode), result, 1, OPTAB_LIB_WIDEN); temp = expand_binop (mode, add_optab, temp, const1_rtx, result, 0, OPTAB_LIB_WIDEN); @@ -4940,8 +4924,12 @@ make_tree (tree type, rtx x) return t; } + case CONST_WIDE_INT: + t = wide_int_to_tree (type, wide_int::from_rtx (x, TYPE_MODE (type))); + return t; + case CONST_DOUBLE: - if (GET_MODE (x) == VOIDmode) + if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (x) == VOIDmode) t = build_int_cst_wide (type, CONST_DOUBLE_LOW (x), CONST_DOUBLE_HIGH (x)); else diff --git a/gcc/expr.c b/gcc/expr.c index e3fb0b6..9011d46 100644 --- a/gcc/expr.c +++ b/gcc/expr.c @@ -710,23 +710,23 @@ convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int uns if (mode == oldmode) return x; - /* There is one case that we must handle specially: If we are converting - a CONST_INT into a mode whose size is twice HOST_BITS_PER_WIDE_INT 
and
-     we are to interpret the constant as unsigned, gen_lowpart will do
-     the wrong if the constant appears negative.  What we want to do is
-     make the high-order word of the constant zero, not all ones.  */
+  /* There is one case that we must handle specially: If we are
+     converting a CONST_INT into a mode whose size is larger than
+     HOST_BITS_PER_WIDE_INT and we are to interpret the constant as
+     unsigned, gen_lowpart will do the wrong thing if the constant
+     appears negative.  What we want to do is make the high-order
+     word of the constant zero, not all ones.  */
 
   if (unsignedp && GET_MODE_CLASS (mode) == MODE_INT
-      && GET_MODE_BITSIZE (mode) == HOST_BITS_PER_DOUBLE_INT
+      && GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT
       && CONST_INT_P (x) && INTVAL (x) < 0)
     {
-      double_int val = double_int::from_uhwi (INTVAL (x));
-
+      HOST_WIDE_INT val = INTVAL (x);
       /* We need to zero extend VAL.  */
       if (oldmode != VOIDmode)
-	val = val.zext (GET_MODE_BITSIZE (oldmode));
+	val &= GET_MODE_MASK (oldmode);
 
-      return immed_double_int_const (val, mode);
+      return immed_wide_int_const (wide_int::from_uhwi (val, mode), mode);
     }
 
   /* We can do this with a gen_lowpart if both desired and current modes
@@ -738,7 +744,11 @@ convert_modes (enum machine_mode mode, enum machine_mode oldmode, rtx x, int uns
 	  && GET_MODE_PRECISION (mode) <= HOST_BITS_PER_WIDE_INT)
       || (GET_MODE_CLASS (mode) == MODE_INT
 	  && GET_MODE_CLASS (oldmode) == MODE_INT
-	  && (CONST_DOUBLE_AS_INT_P (x)
+#if TARGET_SUPPORTS_WIDE_INT
+	  && (CONST_WIDE_INT_P (x)
+#else
+	  && (CONST_DOUBLE_AS_INT_P (x)
+#endif
	      || (GET_MODE_PRECISION (mode) <= GET_MODE_PRECISION (oldmode)
		  && ((MEM_P (x) && ! MEM_VOLATILE_P (x)
		       && direct_load[(int) mode])
@@ -1743,6 +1747,7 @@ emit_group_load_1 (rtx *tmps, rtx dst, rtx orig_src, tree type, int ssize)
 	    {
 	      rtx first, second;
 
+	      /* TODO: const_wide_int can have sizes other than this...
*/ gcc_assert (2 * len == ssize); split_double (src, &first, &second); if (i) @@ -5239,10 +5244,10 @@ store_expr (tree exp, rtx target, int call_param_p, bool nontemporal) &alt_rtl); } - /* If TEMP is a VOIDmode constant and the mode of the type of EXP is not - the same as that of TARGET, adjust the constant. This is needed, for - example, in case it is a CONST_DOUBLE and we want only a word-sized - value. */ + /* If TEMP is a VOIDmode constant and the mode of the type of EXP is + not the same as that of TARGET, adjust the constant. This is + needed, for example, in case it is a CONST_DOUBLE or + CONST_WIDE_INT and we want only a word-sized value. */ if (CONSTANT_P (temp) && GET_MODE (temp) == VOIDmode && TREE_CODE (exp) != ERROR_MARK && GET_MODE (target) != TYPE_MODE (TREE_TYPE (exp))) @@ -7741,11 +7746,12 @@ expand_constructor (tree exp, rtx target, enum expand_modifier modifier, /* All elts simple constants => refer to a constant in memory. But if this is a non-BLKmode mode, let it store a field at a time - since that should make a CONST_INT or CONST_DOUBLE when we - fold. Likewise, if we have a target we can use, it is best to - store directly into the target unless the type is large enough - that memcpy will be used. If we are making an initializer and - all operands are constant, put it in memory as well. + since that should make a CONST_INT, CONST_WIDE_INT or + CONST_DOUBLE when we fold. Likewise, if we have a target we can + use, it is best to store directly into the target unless the type + is large enough that memcpy will be used. If we are making an + initializer and all operands are constant, put it in memory as + well. FIXME: Avoid trying to fill vector constructors piece-meal. 
Output them with output_constant_def below unless we're sure @@ -8215,17 +8221,18 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode, && TREE_CONSTANT (treeop1)) { rtx constant_part; + HOST_WIDE_INT wc; + enum machine_mode wmode = TYPE_MODE (TREE_TYPE (treeop1)); op1 = expand_expr (treeop1, subtarget, VOIDmode, EXPAND_SUM); - /* Use immed_double_const to ensure that the constant is + /* Use wide_int::from_shwi to ensure that the constant is truncated according to the mode of OP1, then sign extended to a HOST_WIDE_INT. Using the constant directly can result in non-canonical RTL in a 64x32 cross compile. */ - constant_part - = immed_double_const (TREE_INT_CST_LOW (treeop0), - (HOST_WIDE_INT) 0, - TYPE_MODE (TREE_TYPE (treeop1))); + wc = TREE_INT_CST_LOW (treeop0); + constant_part + = immed_wide_int_const (wide_int::from_shwi (wc, wmode), wmode); op1 = plus_constant (mode, op1, INTVAL (constant_part)); if (modifier != EXPAND_SUM && modifier != EXPAND_INITIALIZER) op1 = force_operand (op1, target); @@ -8237,7 +8244,8 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode, && TREE_CONSTANT (treeop0)) { rtx constant_part; - + HOST_WIDE_INT wc; + enum machine_mode wmode = TYPE_MODE (TREE_TYPE (treeop0)); op0 = expand_expr (treeop0, subtarget, VOIDmode, (modifier == EXPAND_INITIALIZER ? EXPAND_INITIALIZER : EXPAND_SUM)); @@ -8251,14 +8259,13 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode, return simplify_gen_binary (PLUS, mode, op0, op1); goto binop2; } - /* Use immed_double_const to ensure that the constant is + /* Use wide_int::from_shwi to ensure that the constant is truncated according to the mode of OP1, then sign extended to a HOST_WIDE_INT. Using the constant directly can result in non-canonical RTL in a 64x32 cross compile. 
*/ - constant_part - = immed_double_const (TREE_INT_CST_LOW (treeop1), - (HOST_WIDE_INT) 0, - TYPE_MODE (TREE_TYPE (treeop0))); + wc = TREE_INT_CST_LOW (treeop1); + constant_part + = immed_wide_int_const (wide_int::from_shwi (wc, wmode), wmode); op0 = plus_constant (mode, op0, INTVAL (constant_part)); if (modifier != EXPAND_SUM && modifier != EXPAND_INITIALIZER) op0 = force_operand (op0, target); @@ -8760,10 +8767,13 @@ expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode, for unsigned bitfield expand this as XOR with a proper constant instead. */ if (reduce_bit_field && TYPE_UNSIGNED (type)) - temp = expand_binop (mode, xor_optab, op0, - immed_double_int_const - (double_int::mask (TYPE_PRECISION (type)), mode), - target, 1, OPTAB_LIB_WIDEN); + { + wide_int mask = wide_int::mask (TYPE_PRECISION (type), false, mode); + + temp = expand_binop (mode, xor_optab, op0, + immed_wide_int_const (mask, mode), + target, 1, OPTAB_LIB_WIDEN); + } else temp = expand_unop (mode, one_cmpl_optab, op0, target, 1); gcc_assert (temp); @@ -9396,9 +9406,8 @@ expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode, return decl_rtl; case INTEGER_CST: - temp = immed_double_const (TREE_INT_CST_LOW (exp), - TREE_INT_CST_HIGH (exp), mode); - + temp = immed_wide_int_const (wide_int::from_tree (exp), + TYPE_MODE (TREE_TYPE (exp))); return temp; case VECTOR_CST: @@ -9630,8 +9639,9 @@ expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode, op0 = memory_address_addr_space (address_mode, op0, as); if (!integer_zerop (TREE_OPERAND (exp, 1))) { - rtx off - = immed_double_int_const (mem_ref_offset (exp), address_mode); + wide_int wi = wide_int::from_double_int + (mem_ref_offset (exp), GET_MODE_PRECISION (address_mode)); + rtx off = immed_wide_int_const (wi, address_mode); op0 = simplify_gen_binary (PLUS, address_mode, op0, off); } op0 = memory_address_addr_space (mode, op0, as); @@ -10510,9 +10520,10 @@ reduce_to_bit_field_precision (rtx exp, rtx target, tree type) } 
else if (TYPE_UNSIGNED (type)) { - rtx mask = immed_double_int_const (double_int::mask (prec), - GET_MODE (exp)); - return expand_and (GET_MODE (exp), exp, mask, target); + enum machine_mode mode = GET_MODE (exp); + rtx mask = immed_wide_int_const + (wide_int::mask (prec, false, mode), mode); + return expand_and (mode, exp, mask, target); } else { @@ -11084,8 +11095,9 @@ const_vector_from_tree (tree exp) RTVEC_ELT (v, i) = CONST_FIXED_FROM_FIXED_VALUE (TREE_FIXED_CST (elt), inner); else - RTVEC_ELT (v, i) = immed_double_int_const (tree_to_double_int (elt), - inner); + RTVEC_ELT (v, i) + = immed_wide_int_const (wide_int::from_tree (elt), + TYPE_MODE (TREE_TYPE (elt))); } return gen_rtx_CONST_VECTOR (mode, v); diff --git a/gcc/final.c b/gcc/final.c index f6974f4..053aebc 100644 --- a/gcc/final.c +++ b/gcc/final.c @@ -3789,8 +3789,16 @@ output_addr_const (FILE *file, rtx x) output_addr_const (file, XEXP (x, 0)); break; + case CONST_WIDE_INT: + /* This should be ok for a while. */ + gcc_assert (CONST_WIDE_INT_NUNITS (x) == 2); + fprintf (file, HOST_WIDE_INT_PRINT_DOUBLE_HEX, + (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, 1), + (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, 0)); + break; + case CONST_DOUBLE: - if (GET_MODE (x) == VOIDmode) + if (CONST_DOUBLE_AS_INT_P (x)) { /* We can use %d if the number is one word and positive. */ if (CONST_DOUBLE_HIGH (x)) diff --git a/gcc/genemit.c b/gcc/genemit.c index 692ef52..7b1e471 100644 --- a/gcc/genemit.c +++ b/gcc/genemit.c @@ -204,6 +204,7 @@ gen_exp (rtx x, enum rtx_code subroutine_type, char *used) case CONST_DOUBLE: case CONST_FIXED: + case CONST_WIDE_INT: /* These shouldn't be written in MD files. Instead, the appropriate routines in varasm.c should be called. 
*/ gcc_unreachable (); diff --git a/gcc/gengenrtl.c b/gcc/gengenrtl.c index 5b5a3ca..1f93dd5 100644 --- a/gcc/gengenrtl.c +++ b/gcc/gengenrtl.c @@ -142,6 +142,7 @@ static int excluded_rtx (int idx) { return ((strcmp (defs[idx].enumname, "CONST_DOUBLE") == 0) + || (strcmp (defs[idx].enumname, "CONST_WIDE_INT") == 0) || (strcmp (defs[idx].enumname, "CONST_FIXED") == 0)); } diff --git a/gcc/gengtype-lex.l b/gcc/gengtype-lex.l index f46cd17..7ece2ab 100644 --- a/gcc/gengtype-lex.l +++ b/gcc/gengtype-lex.l @@ -57,7 +57,7 @@ ITYPE {IWORD}({WS}{IWORD})* /* Include '::' in identifiers to capture C++ scope qualifiers. */ ID {CID}({HWS}::{HWS}{CID})* EOID [^[:alnum:]_] -CXX_KEYWORD inline|public:|private:|protected:|template|operator|friend +CXX_KEYWORD inline|public:|private:|protected:|template|operator|friend|static %x in_struct in_struct_comment in_comment %option warn noyywrap nounput nodefault perf-report @@ -110,6 +110,7 @@ CXX_KEYWORD inline|public:|private:|protected:|template|operator|friend "const"/{EOID} /* don't care */ {CXX_KEYWORD}/{EOID} | "~" | +"^" | "&" { *yylval = XDUPVAR (const char, yytext, yyleng, yyleng + 1); return IGNORABLE_CXX_KEYWORD; diff --git a/gcc/gengtype-parse.c b/gcc/gengtype-parse.c index 68d372e..e1c3c65 100644 --- a/gcc/gengtype-parse.c +++ b/gcc/gengtype-parse.c @@ -230,6 +230,12 @@ require_template_declaration (const char *tmpl_name) /* Read the comma-separated list of identifiers. 
*/ while (token () != '>') { + if (token () == ENUM) + { + advance (); + str = concat (str, "enum ", (char *) 0); + continue; + } const char *id = require2 (ID, ','); if (id == NULL) id = ","; diff --git a/gcc/gengtype.c b/gcc/gengtype.c index eede798..ca2ee25 100644 --- a/gcc/gengtype.c +++ b/gcc/gengtype.c @@ -5442,6 +5442,7 @@ main (int argc, char **argv) POS_HERE (do_scalar_typedef ("REAL_VALUE_TYPE", &pos)); POS_HERE (do_scalar_typedef ("FIXED_VALUE_TYPE", &pos)); POS_HERE (do_scalar_typedef ("double_int", &pos)); + POS_HERE (do_scalar_typedef ("wide_int", &pos)); POS_HERE (do_scalar_typedef ("uint64_t", &pos)); POS_HERE (do_scalar_typedef ("uint8", &pos)); POS_HERE (do_scalar_typedef ("uintptr_t", &pos)); diff --git a/gcc/genpreds.c b/gcc/genpreds.c index 98488e3..29fafbe 100644 --- a/gcc/genpreds.c +++ b/gcc/genpreds.c @@ -612,7 +612,7 @@ write_one_predicate_function (struct pred_data *p) add_mode_tests (p); /* A normal predicate can legitimately not look at enum machine_mode - if it accepts only CONST_INTs and/or CONST_DOUBLEs. */ + if it accepts only CONST_INTs and/or CONST_WIDE_INT and/or CONST_DOUBLEs. */ printf ("int\n%s (rtx op, enum machine_mode mode ATTRIBUTE_UNUSED)\n{\n", p->name); write_predicate_stmts (p->exp); @@ -809,8 +809,11 @@ add_constraint (const char *name, const char *regclass, if (is_const_int || is_const_dbl) { enum rtx_code appropriate_code +#if TARGET_SUPPORTS_WIDE_INT + = is_const_int ? CONST_INT : CONST_WIDE_INT; +#else = is_const_int ? CONST_INT : CONST_DOUBLE; - +#endif /* Consider relaxing this requirement in the future. 
*/ if (regclass || GET_CODE (exp) != AND @@ -1075,12 +1078,17 @@ write_tm_constrs_h (void) if (needs_ival) puts (" if (CONST_INT_P (op))\n" " ival = INTVAL (op);"); +#if TARGET_SUPPORTS_WIDE_INT + if (needs_lval || needs_hval) + error ("you can't use lval or hval"); +#else if (needs_hval) puts (" if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)" " hval = CONST_DOUBLE_HIGH (op);"); if (needs_lval) puts (" if (GET_CODE (op) == CONST_DOUBLE && mode == VOIDmode)" " lval = CONST_DOUBLE_LOW (op);"); +#endif if (needs_rval) puts (" if (GET_CODE (op) == CONST_DOUBLE && mode != VOIDmode)" " rval = CONST_DOUBLE_REAL_VALUE (op);"); diff --git a/gcc/gensupport.c b/gcc/gensupport.c index 9b9a03e..638e051 100644 --- a/gcc/gensupport.c +++ b/gcc/gensupport.c @@ -2775,7 +2775,13 @@ static const struct std_pred_table std_preds[] = { {"scratch_operand", false, false, {SCRATCH, REG}}, {"immediate_operand", false, true, {UNKNOWN}}, {"const_int_operand", false, false, {CONST_INT}}, +#if TARGET_SUPPORTS_WIDE_INT + {"const_wide_int_operand", false, false, {CONST_WIDE_INT}}, + {"const_scalar_int_operand", false, false, {CONST_INT, CONST_WIDE_INT}}, + {"const_double_operand", false, false, {CONST_DOUBLE}}, +#else {"const_double_operand", false, false, {CONST_INT, CONST_DOUBLE}}, +#endif {"nonimmediate_operand", false, false, {SUBREG, REG, MEM}}, {"nonmemory_operand", false, true, {SUBREG, REG}}, {"push_operand", false, false, {MEM}}, diff --git a/gcc/optabs.c b/gcc/optabs.c index a3051ad..3b534b2 100644 --- a/gcc/optabs.c +++ b/gcc/optabs.c @@ -851,7 +851,8 @@ expand_subword_shift (enum machine_mode op1_mode, optab binoptab, if (CONSTANT_P (op1) || shift_mask >= BITS_PER_WORD) { carries = outof_input; - tmp = immed_double_const (BITS_PER_WORD, 0, op1_mode); + tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD, + op1_mode), op1_mode); tmp = simplify_expand_binop (op1_mode, sub_optab, tmp, op1, 0, true, methods); } @@ -866,13 +867,14 @@ expand_subword_shift (enum 
machine_mode op1_mode, optab binoptab, outof_input, const1_rtx, 0, unsignedp, methods); if (shift_mask == BITS_PER_WORD - 1) { - tmp = immed_double_const (-1, -1, op1_mode); + tmp = immed_wide_int_const (wide_int::minus_one (op1_mode), op1_mode); tmp = simplify_expand_binop (op1_mode, xor_optab, op1, tmp, 0, true, methods); } else { - tmp = immed_double_const (BITS_PER_WORD - 1, 0, op1_mode); + tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD - 1, + op1_mode), op1_mode); tmp = simplify_expand_binop (op1_mode, sub_optab, tmp, op1, 0, true, methods); } @@ -1035,7 +1037,8 @@ expand_doubleword_shift (enum machine_mode op1_mode, optab binoptab, is true when the effective shift value is less than BITS_PER_WORD. Set SUPERWORD_OP1 to the shift count that should be used to shift OUTOF_INPUT into INTO_TARGET when the condition is false. */ - tmp = immed_double_const (BITS_PER_WORD, 0, op1_mode); + tmp = immed_wide_int_const (wide_int::from_shwi (BITS_PER_WORD, op1_mode), + op1_mode); if (!CONSTANT_P (op1) && shift_mask == BITS_PER_WORD - 1) { /* Set CMP1 to OP1 & BITS_PER_WORD. The result is zero iff OP1 @@ -2885,7 +2888,7 @@ expand_absneg_bit (enum rtx_code code, enum machine_mode mode, const struct real_format *fmt; int bitpos, word, nwords, i; enum machine_mode imode; - double_int mask; + wide_int mask; rtx temp, insns; /* The format has to have a simple sign bit. */ @@ -2921,7 +2924,7 @@ expand_absneg_bit (enum rtx_code code, enum machine_mode mode, nwords = (GET_MODE_BITSIZE (mode) + BITS_PER_WORD - 1) / BITS_PER_WORD; } - mask = double_int_zero.set_bit (bitpos); + mask = wide_int::set_bit_in_zero (bitpos, imode); if (code == ABS) mask = ~mask; @@ -2943,7 +2946,7 @@ expand_absneg_bit (enum rtx_code code, enum machine_mode mode, { temp = expand_binop (imode, code == ABS ? 
and_optab : xor_optab, op0_piece, - immed_double_int_const (mask, imode), + immed_wide_int_const (mask, imode), targ_piece, 1, OPTAB_LIB_WIDEN); if (temp != targ_piece) emit_move_insn (targ_piece, temp); @@ -2961,7 +2964,7 @@ expand_absneg_bit (enum rtx_code code, enum machine_mode mode, { temp = expand_binop (imode, code == ABS ? and_optab : xor_optab, gen_lowpart (imode, op0), - immed_double_int_const (mask, imode), + immed_wide_int_const (mask, imode), gen_lowpart (imode, target), 1, OPTAB_LIB_WIDEN); target = lowpart_subreg_maybe_copy (mode, temp, imode); @@ -3560,7 +3563,7 @@ expand_copysign_absneg (enum machine_mode mode, rtx op0, rtx op1, rtx target, } else { - double_int mask; + wide_int mask; if (GET_MODE_SIZE (mode) <= UNITS_PER_WORD) { @@ -3582,10 +3585,9 @@ expand_copysign_absneg (enum machine_mode mode, rtx op0, rtx op1, rtx target, op1 = operand_subword_force (op1, word, mode); } - mask = double_int_zero.set_bit (bitpos); - + mask = wide_int::set_bit_in_zero (bitpos, imode); sign = expand_binop (imode, and_optab, op1, - immed_double_int_const (mask, imode), + immed_wide_int_const (mask, imode), NULL_RTX, 1, OPTAB_LIB_WIDEN); } @@ -3629,7 +3631,7 @@ expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target, int bitpos, bool op0_is_abs) { enum machine_mode imode; - double_int mask; + wide_int mask, nmask; int word, nwords, i; rtx temp, insns; @@ -3653,7 +3655,7 @@ expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target, nwords = (GET_MODE_BITSIZE (mode) + BITS_PER_WORD - 1) / BITS_PER_WORD; } - mask = double_int_zero.set_bit (bitpos); + mask = wide_int::set_bit_in_zero (bitpos, imode); if (target == 0 || target == op0 @@ -3673,14 +3675,16 @@ expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target, if (i == word) { if (!op0_is_abs) - op0_piece - = expand_binop (imode, and_optab, op0_piece, - immed_double_int_const (~mask, imode), - NULL_RTX, 1, OPTAB_LIB_WIDEN); - + { + nmask = ~mask; + op0_piece + = 
expand_binop (imode, and_optab, op0_piece, + immed_wide_int_const (nmask, imode), + NULL_RTX, 1, OPTAB_LIB_WIDEN); + } op1 = expand_binop (imode, and_optab, operand_subword_force (op1, i, mode), - immed_double_int_const (mask, imode), + immed_wide_int_const (mask, imode), NULL_RTX, 1, OPTAB_LIB_WIDEN); temp = expand_binop (imode, ior_optab, op0_piece, op1, @@ -3700,15 +3704,17 @@ expand_copysign_bit (enum machine_mode mode, rtx op0, rtx op1, rtx target, else { op1 = expand_binop (imode, and_optab, gen_lowpart (imode, op1), - immed_double_int_const (mask, imode), + immed_wide_int_const (mask, imode), NULL_RTX, 1, OPTAB_LIB_WIDEN); op0 = gen_lowpart (imode, op0); if (!op0_is_abs) - op0 = expand_binop (imode, and_optab, op0, - immed_double_int_const (~mask, imode), - NULL_RTX, 1, OPTAB_LIB_WIDEN); - + { + nmask = ~mask; + op0 = expand_binop (imode, and_optab, op0, + immed_wide_int_const (nmask, imode), + NULL_RTX, 1, OPTAB_LIB_WIDEN); + } temp = expand_binop (imode, ior_optab, op0, op1, gen_lowpart (imode, target), 1, OPTAB_LIB_WIDEN); target = lowpart_subreg_maybe_copy (mode, temp, imode); diff --git a/gcc/postreload.c b/gcc/postreload.c index 33462e4..b899fe1 100644 --- a/gcc/postreload.c +++ b/gcc/postreload.c @@ -295,27 +295,25 @@ reload_cse_simplify_set (rtx set, rtx insn) #ifdef LOAD_EXTEND_OP if (extend_op != UNKNOWN) { - HOST_WIDE_INT this_val; + wide_int result; - /* ??? I'm lazy and don't wish to handle CONST_DOUBLE. Other - constants, such as SYMBOL_REF, cannot be extended. */ - if (!CONST_INT_P (this_rtx)) + if (!CONST_SCALAR_INT_P (this_rtx)) continue; - this_val = INTVAL (this_rtx); switch (extend_op) { case ZERO_EXTEND: - this_val &= GET_MODE_MASK (GET_MODE (src)); + result = (wide_int::from_rtx (this_rtx, GET_MODE (src)) + .zext (word_mode)); break; case SIGN_EXTEND: - /* ??? In theory we're already extended. 
*/
-	  if (this_val == trunc_int_for_mode (this_val, GET_MODE (src)))
-	    break;
+	  result = (wide_int::from_rtx (this_rtx, GET_MODE (src))
+		    .sext (word_mode));
+	  break;
+
 	default:
 	  gcc_unreachable ();
 	}
-      this_rtx = GEN_INT (this_val);
+      this_rtx = immed_wide_int_const (result, GET_MODE (src));
     }
 #endif
 
       this_cost = set_src_cost (this_rtx, speed);
diff --git a/gcc/print-rtl.c b/gcc/print-rtl.c
index d2bda9e..3620bd6 100644
--- a/gcc/print-rtl.c
+++ b/gcc/print-rtl.c
@@ -612,6 +612,12 @@ print_rtx (const_rtx in_rtx)
 	    fprintf (outfile, " [%s]", s);
 	}
       break;
+
+    case CONST_WIDE_INT:
+      if (! flag_simple)
+	fprintf (outfile, " ");
+      hwivec_output_hex (outfile, CONST_WIDE_INT_VEC (in_rtx));
+      break;
 #endif
 
     case CODE_LABEL:
diff --git a/gcc/read-rtl.c b/gcc/read-rtl.c
index cd58b1f..a73a41b 100644
--- a/gcc/read-rtl.c
+++ b/gcc/read-rtl.c
@@ -806,6 +806,29 @@ validate_const_int (const char *string)
     fatal_with_file_and_line ("invalid decimal constant \"%s\"\n", string);
 }
 
+static void
+validate_const_wide_int (const char *string)
+{
+  const char *cp;
+  int valid = 1;
+
+  cp = string;
+  while (*cp && ISSPACE (*cp))
+    cp++;
+  /* Skip the leading 0x.  */
+  if (cp[0] == '0' && cp[1] == 'x')
+    cp += 2;
+  else
+    valid = 0;
+  if (*cp == 0)
+    valid = 0;
+  for (; *cp; cp++)
+    if (! ISXDIGIT (*cp))
+      valid = 0;
+  if (!valid)
+    fatal_with_file_and_line ("invalid hex constant \"%s\"\n", string);
+}
+
 /* Record that PTR uses iterator ITERATOR.  */
 
 static void
@@ -1319,6 +1342,56 @@ read_rtx_code (const char *code_name)
       gcc_unreachable ();
     }
 
+  if (CONST_WIDE_INT_P (return_rtx))
+    {
+      read_name (&name);
+      validate_const_wide_int (name.string);
+      {
+	hwivec hwiv;
+	const char *s = name.string;
+	int len;
+	int index = 0;
+	int gs = HOST_BITS_PER_WIDE_INT/4;
+	int pos;
+	char * buf = XALLOCAVEC (char, gs + 1);
+	unsigned HOST_WIDE_INT wi;
+	int wlen;
+
+	/* Skip the leading spaces.  */
+	while (*s && ISSPACE (*s))
+	  s++;
+
+	/* Skip the leading 0x.
*/
+	gcc_assert (s[0] == '0');
+	gcc_assert (s[1] == 'x');
+	s += 2;
+
+	len = strlen (s);
+	pos = len - gs;
+	wlen = (len + gs - 1) / gs;	/* Number of words needed */
+
+	return_rtx = const_wide_int_alloc (wlen);
+
+	hwiv = CONST_WIDE_INT_VEC (return_rtx);
+	while (pos > 0)
+	  {
+#if HOST_BITS_PER_WIDE_INT == 64
+	    sscanf (s + pos, "%16" HOST_WIDE_INT_PRINT "x", &wi);
+#else
+	    sscanf (s + pos, "%8" HOST_WIDE_INT_PRINT "x", &wi);
+#endif
+	    XHWIVEC_ELT (hwiv, index++) = wi;
+	    pos -= gs;
+	  }
+	/* The leading group holds the gs + pos digits left over once
+	   the loop above has consumed the full tail groups.  */
+	strncpy (buf, s, gs + pos);
+	buf[gs + pos] = 0;
+	sscanf (buf, "%" HOST_WIDE_INT_PRINT "x", &wi);
+	XHWIVEC_ELT (hwiv, index++) = wi;
+	/* TODO: After reading, do we want to canonicalize with:
+	   value = lookup_const_wide_int (value); ? */
+      }
+    }
+
   c = read_skip_spaces ();
 
 /* Syntactic sugar for AND and IOR, allowing Lisp-like
    arbitrary number of arguments for them.  */
diff --git a/gcc/recog.c b/gcc/recog.c
index 75d1113..fc93d1a 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -1145,7 +1145,7 @@ immediate_operand (rtx op, enum machine_mode mode)
 	     : mode, op));
 }
 
-/* Returns 1 if OP is an operand that is a CONST_INT.  */
+/* Returns 1 if OP is an operand that is a CONST_INT of mode MODE.  */
 
 int
 const_int_operand (rtx op, enum machine_mode mode)
@@ -1160,8 +1160,64 @@ const_int_operand (rtx op, enum machine_mode mode)
   return 1;
 }
 
+#if TARGET_SUPPORTS_WIDE_INT
+/* Returns 1 if OP is an operand that is a CONST_INT or CONST_WIDE_INT
+   of mode MODE.  */
+int
+const_scalar_int_operand (rtx op, enum machine_mode mode)
+{
+  if (!CONST_SCALAR_INT_P (op))
+    return 0;
+
+  if (CONST_INT_P (op))
+    return const_int_operand (op, mode);
+
+  if (mode != VOIDmode)
+    {
+      int prec = GET_MODE_PRECISION (mode);
+      int bitsize = GET_MODE_BITSIZE (mode);
+
+      if (CONST_WIDE_INT_NUNITS (op) * HOST_BITS_PER_WIDE_INT > bitsize)
+	return 0;
+
+      if (prec == bitsize)
+	return 1;
+      else
+	{
+	  /* Multiword partial int.
*/
+	  HOST_WIDE_INT x
+	    = CONST_WIDE_INT_ELT (op, CONST_WIDE_INT_NUNITS (op) - 1);
+	  return (wide_int::sext (x, prec & (HOST_BITS_PER_WIDE_INT - 1))
+		  == x);
+	}
+    }
+  return 1;
+}
+
+/* Returns 1 if OP is an operand that is a CONST_WIDE_INT of mode
+   MODE.  This most likely is not as useful as
+   const_scalar_int_operand, but is here for consistency.  */
+int
+const_wide_int_operand (rtx op, enum machine_mode mode)
+{
+  if (!CONST_WIDE_INT_P (op))
+    return 0;
+
+  return const_scalar_int_operand (op, mode);
+}
+
 /* Returns 1 if OP is an operand that is a constant integer or constant
-   floating-point number.  */
+   floating-point number of MODE.  */
+
+int
+const_double_operand (rtx op, enum machine_mode mode)
+{
+  return (GET_CODE (op) == CONST_DOUBLE)
+	  && (GET_MODE (op) == mode || mode == VOIDmode);
+}
+#else
+/* Returns 1 if OP is an operand that is a constant integer or constant
+   floating-point number of MODE.  */
 
 int
 const_double_operand (rtx op, enum machine_mode mode)
@@ -1177,8 +1233,9 @@ const_double_operand (rtx op, enum machine_mode mode)
 	  && (mode == VOIDmode || GET_MODE (op) == mode
 	      || GET_MODE (op) == VOIDmode));
 }
-
-/* Return 1 if OP is a general operand that is not an immediate operand.  */
+#endif
+/* Return 1 if OP is a general operand that is not an immediate
+   operand of mode MODE.  */
 
 int
 nonimmediate_operand (rtx op, enum machine_mode mode)
@@ -1186,7 +1243,8 @@ nonimmediate_operand (rtx op, enum machine_mode mode)
   return (general_operand (op, mode) && ! CONSTANT_P (op));
 }
 
-/* Return 1 if OP is a register reference or immediate value of mode MODE.  */
+/* Return 1 if OP is a register reference or immediate value of mode
+   MODE.
*/ int nonmemory_operand (rtx op, enum machine_mode mode) diff --git a/gcc/rtl.c b/gcc/rtl.c index b2d88f7..074e425 100644 --- a/gcc/rtl.c +++ b/gcc/rtl.c @@ -109,7 +109,7 @@ const enum rtx_class rtx_class[NUM_RTX_CODE] = { const unsigned char rtx_code_size[NUM_RTX_CODE] = { #define DEF_RTL_EXPR(ENUM, NAME, FORMAT, CLASS) \ (((ENUM) == CONST_INT || (ENUM) == CONST_DOUBLE \ - || (ENUM) == CONST_FIXED) \ + || (ENUM) == CONST_FIXED || (ENUM) == CONST_WIDE_INT) \ ? RTX_HDR_SIZE + (sizeof FORMAT - 1) * sizeof (HOST_WIDE_INT) \ : RTX_HDR_SIZE + (sizeof FORMAT - 1) * sizeof (rtunion)), @@ -181,18 +181,24 @@ shallow_copy_rtvec (rtvec vec) unsigned int rtx_size (const_rtx x) { + if (CONST_WIDE_INT_P (x)) + return (RTX_HDR_SIZE + + sizeof (struct hwivec_def) + + ((CONST_WIDE_INT_NUNITS (x) - 1) + * sizeof (HOST_WIDE_INT))); if (GET_CODE (x) == SYMBOL_REF && SYMBOL_REF_HAS_BLOCK_INFO_P (x)) return RTX_HDR_SIZE + sizeof (struct block_symbol); return RTX_CODE_SIZE (GET_CODE (x)); } -/* Allocate an rtx of code CODE. The CODE is stored in the rtx; - all the rest is initialized to zero. */ +/* Allocate an rtx of code CODE with EXTRA bytes in it. The CODE is + stored in the rtx; all the rest is initialized to zero. */ rtx -rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL) +rtx_alloc_stat_v (RTX_CODE code MEM_STAT_DECL, int extra) { - rtx rt = ggc_alloc_rtx_def_stat (RTX_CODE_SIZE (code) PASS_MEM_STAT); + rtx rt = ggc_alloc_rtx_def_stat (RTX_CODE_SIZE (code) + extra + PASS_MEM_STAT); /* We want to clear everything up to the FLD array. Normally, this is one int, but we don't want to assume that and it isn't very @@ -210,6 +216,29 @@ rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL) return rt; } +/* Allocate an rtx of code CODE. The CODE is stored in the rtx; + all the rest is initialized to zero. */ + +rtx +rtx_alloc_stat (RTX_CODE code MEM_STAT_DECL) +{ + return rtx_alloc_stat_v (code PASS_MEM_STAT, 0); +} + +/* Write the wide constant OP0 to OUTFILE. 
*/ + +void +hwivec_output_hex (FILE *outfile, const_hwivec op0) +{ + int i = HWI_GET_NUM_ELEM (op0); + gcc_assert (i > 0); + if (XHWIVEC_ELT (op0, i-1) == 0) + fprintf (outfile, "0x"); + fprintf (outfile, HOST_WIDE_INT_PRINT_HEX, XHWIVEC_ELT (op0, --i)); + while (--i >= 0) + fprintf (outfile, HOST_WIDE_INT_PRINT_PADDED_HEX, XHWIVEC_ELT (op0, i)); +} + /* Return true if ORIG is a sharable CONST. */ @@ -428,7 +457,6 @@ rtx_equal_p_cb (const_rtx x, const_rtx y, rtx_equal_p_callback_function cb) if (XWINT (x, i) != XWINT (y, i)) return 0; break; - case 'n': case 'i': if (XINT (x, i) != XINT (y, i)) @@ -646,6 +674,10 @@ iterative_hash_rtx (const_rtx x, hashval_t hash) return iterative_hash_object (i, hash); case CONST_INT: return iterative_hash_object (INTVAL (x), hash); + case CONST_WIDE_INT: + for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++) + hash = iterative_hash_object (CONST_WIDE_INT_ELT (x, i), hash); + return hash; case SYMBOL_REF: if (XSTR (x, 0)) return iterative_hash (XSTR (x, 0), strlen (XSTR (x, 0)) + 1, @@ -811,6 +843,16 @@ rtl_check_failed_block_symbol (const char *file, int line, const char *func) /* XXX Maybe print the vector? */ void +hwivec_check_failed_bounds (const_hwivec r, int n, const char *file, int line, + const char *func) +{ + internal_error + ("RTL check: access of hwi elt %d of vector with last elt %d in %s, at %s:%d", + n, GET_NUM_ELEM (r) - 1, func, trim_filename (file), line); +} + +/* XXX Maybe print the vector? 
*/ +void rtvec_check_failed_bounds (const_rtvec r, int n, const char *file, int line, const char *func) { diff --git a/gcc/rtl.def b/gcc/rtl.def index f8aea32..4c5eb00 100644 --- a/gcc/rtl.def +++ b/gcc/rtl.def @@ -342,6 +342,9 @@ DEF_RTL_EXPR(TRAP_IF, "trap_if", "ee", RTX_EXTRA) /* numeric integer constant */ DEF_RTL_EXPR(CONST_INT, "const_int", "w", RTX_CONST_OBJ) +/* numeric integer constant */ +DEF_RTL_EXPR(CONST_WIDE_INT, "const_wide_int", "", RTX_CONST_OBJ) + /* fixed-point constant */ DEF_RTL_EXPR(CONST_FIXED, "const_fixed", "www", RTX_CONST_OBJ) diff --git a/gcc/rtl.h b/gcc/rtl.h index e9013ec..d019d9f 100644 --- a/gcc/rtl.h +++ b/gcc/rtl.h @@ -249,6 +251,14 @@ struct GTY(()) object_block { vec *anchors; }; +struct GTY((variable_size)) hwivec_def { + int num_elem; /* number of elements */ + HOST_WIDE_INT elem[1]; +}; + +#define HWI_GET_NUM_ELEM(HWIVEC) ((HWIVEC)->num_elem) +#define HWI_PUT_NUM_ELEM(HWIVEC, NUM) ((HWIVEC)->num_elem = (NUM)) + /* RTL expression ("rtx"). */ struct GTY((chain_next ("RTX_NEXT (&%h)"), @@ -344,6 +354,7 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"), struct block_symbol block_sym; struct real_value rv; struct fixed_value fv; + struct hwivec_def hwiv; } GTY ((special ("rtx_def"), desc ("GET_CODE (&%0)"))) u; }; @@ -383,13 +394,13 @@ struct GTY((chain_next ("RTX_NEXT (&%h)"), for a variable number of things. The principle use is inside PARALLEL expressions. */ +#define NULL_RTVEC (rtvec) 0 + struct GTY((variable_size)) rtvec_def { int num_elem; /* number of elements */ rtx GTY ((length ("%h.num_elem"))) elem[1]; }; -#define NULL_RTVEC (rtvec) 0 - #define GET_NUM_ELEM(RTVEC) ((RTVEC)->num_elem) #define PUT_NUM_ELEM(RTVEC, NUM) ((RTVEC)->num_elem = (NUM)) @@ -399,12 +410,38 @@ struct GTY((variable_size)) rtvec_def { /* Predicate yielding nonzero iff X is an rtx for a memory location. */ #define MEM_P(X) (GET_CODE (X) == MEM) +#if TARGET_SUPPORTS_WIDE_INT + +/* Match CONST_*s that can represent compile-time constant integers. 
*/ +#define CASE_CONST_SCALAR_INT \ + case CONST_INT: \ + case CONST_WIDE_INT + +/* Match CONST_*s for which pointer equality corresponds to value + equality. */ +#define CASE_CONST_UNIQUE \ + case CONST_INT: \ + case CONST_WIDE_INT: \ + case CONST_DOUBLE: \ + case CONST_FIXED + +/* Match all CONST_* rtxes. */ +#define CASE_CONST_ANY \ + case CONST_INT: \ + case CONST_WIDE_INT: \ + case CONST_DOUBLE: \ + case CONST_FIXED: \ + case CONST_VECTOR + +#else + /* Match CONST_*s that can represent compile-time constant integers. */ #define CASE_CONST_SCALAR_INT \ case CONST_INT: \ case CONST_DOUBLE -/* Match CONST_*s for which pointer equality corresponds to value equality. */ +/* Match CONST_*s for which pointer equality corresponds to value +equality. */ #define CASE_CONST_UNIQUE \ case CONST_INT: \ case CONST_DOUBLE: \ @@ -416,10 +453,17 @@ struct GTY((variable_size)) rtvec_def { case CONST_DOUBLE: \ case CONST_FIXED: \ case CONST_VECTOR +#endif + + + /* Predicate yielding nonzero iff X is an rtx for a constant integer. */ #define CONST_INT_P(X) (GET_CODE (X) == CONST_INT) +/* Predicate yielding nonzero iff X is an rtx for a constant integer. */ +#define CONST_WIDE_INT_P(X) (GET_CODE (X) == CONST_WIDE_INT) + /* Predicate yielding nonzero iff X is an rtx for a constant fixed-point. */ #define CONST_FIXED_P(X) (GET_CODE (X) == CONST_FIXED) @@ -432,8 +476,13 @@ struct GTY((variable_size)) rtvec_def { (GET_CODE (X) == CONST_DOUBLE && GET_MODE (X) == VOIDmode) /* Predicate yielding true iff X is an rtx for a integer const. */ +#if TARGET_SUPPORTS_WIDE_INT +#define CONST_SCALAR_INT_P(X) \ + (CONST_INT_P (X) || CONST_WIDE_INT_P (X)) +#else #define CONST_SCALAR_INT_P(X) \ (CONST_INT_P (X) || CONST_DOUBLE_AS_INT_P (X)) +#endif /* Predicate yielding true iff X is an rtx for a double-int. 
*/ #define CONST_DOUBLE_AS_FLOAT_P(X) \ @@ -594,6 +643,13 @@ struct GTY((variable_size)) rtvec_def { __FUNCTION__); \ &_rtx->u.hwint[_n]; })) +#define XHWIVEC_ELT(HWIVEC, I) __extension__ \ +(*({ __typeof (HWIVEC) const _hwivec = (HWIVEC); const int _i = (I); \ + if (_i < 0 || _i >= HWI_GET_NUM_ELEM (_hwivec)) \ + hwivec_check_failed_bounds (_hwivec, _i, __FILE__, __LINE__, \ + __FUNCTION__); \ + &_hwivec->elem[_i]; })) + #define XCWINT(RTX, N, C) __extension__ \ (*({ __typeof (RTX) const _rtx = (RTX); \ if (GET_CODE (_rtx) != (C)) \ @@ -630,6 +686,11 @@ struct GTY((variable_size)) rtvec_def { __FUNCTION__); \ &_symbol->u.block_sym; }) +#define HWIVEC_CHECK(RTX,C) __extension__ \ +({ __typeof (RTX) const _symbol = (RTX); \ + RTL_CHECKC1 (_symbol, 0, C); \ + &_symbol->u.hwiv; }) + extern void rtl_check_failed_bounds (const_rtx, int, const char *, int, const char *) ATTRIBUTE_NORETURN; @@ -650,6 +711,9 @@ extern void rtl_check_failed_code_mode (const_rtx, enum rtx_code, enum machine_m ATTRIBUTE_NORETURN; extern void rtl_check_failed_block_symbol (const char *, int, const char *) ATTRIBUTE_NORETURN; +extern void hwivec_check_failed_bounds (const_hwivec, int, const char *, int, + const char *) + ATTRIBUTE_NORETURN; extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int, const char *) ATTRIBUTE_NORETURN; @@ -662,12 +726,14 @@ extern void rtvec_check_failed_bounds (const_rtvec, int, const char *, int, #define RTL_CHECKC2(RTX, N, C1, C2) ((RTX)->u.fld[N]) #define RTVEC_ELT(RTVEC, I) ((RTVEC)->elem[I]) #define XWINT(RTX, N) ((RTX)->u.hwint[N]) +#define XHWIVEC_ELT(HWIVEC, I) ((HWIVEC)->elem[I]) #define XCWINT(RTX, N, C) ((RTX)->u.hwint[N]) #define XCMWINT(RTX, N, C, M) ((RTX)->u.hwint[N]) #define XCNMWINT(RTX, N, C, M) ((RTX)->u.hwint[N]) #define XCNMPRV(RTX, C, M) (&(RTX)->u.rv) #define XCNMPFV(RTX, C, M) (&(RTX)->u.fv) #define BLOCK_SYMBOL_CHECK(RTX) (&(RTX)->u.block_sym) +#define HWIVEC_CHECK(RTX,C) (&(RTX)->u.hwiv) #endif @@ -810,8 +876,8 @@ extern
void rtl_check_failed_flag (const char *, const_rtx, const char *, #define XCCFI(RTX, N, C) (RTL_CHECKC1 (RTX, N, C).rt_cfi) #define XCCSELIB(RTX, N, C) (RTL_CHECKC1 (RTX, N, C).rt_cselib) -#define XCVECEXP(RTX, N, M, C) RTVEC_ELT (XCVEC (RTX, N, C), M) -#define XCVECLEN(RTX, N, C) GET_NUM_ELEM (XCVEC (RTX, N, C)) +#define XCVECEXP(RTX, N, M, C) RTVEC_ELT (XCVEC (RTX, N, C), M) +#define XCVECLEN(RTX, N, C) GET_NUM_ELEM (XCVEC (RTX, N, C)) #define XC2EXP(RTX, N, C1, C2) (RTL_CHECKC2 (RTX, N, C1, C2).rt_rtx) @@ -1152,9 +1218,19 @@ rhs_regno (const_rtx x) #define INTVAL(RTX) XCWINT(RTX, 0, CONST_INT) #define UINTVAL(RTX) ((unsigned HOST_WIDE_INT) INTVAL (RTX)) +/* For a CONST_WIDE_INT, CONST_WIDE_INT_NUNITS is the number of + elements actually needed to represent the constant. + CONST_WIDE_INT_ELT gets one of the elements. 0 is the least + significant HOST_WIDE_INT. */ +#define CONST_WIDE_INT_VEC(RTX) HWIVEC_CHECK (RTX, CONST_WIDE_INT) +#define CONST_WIDE_INT_NUNITS(RTX) HWI_GET_NUM_ELEM (CONST_WIDE_INT_VEC (RTX)) +#define CONST_WIDE_INT_ELT(RTX, N) XHWIVEC_ELT (CONST_WIDE_INT_VEC (RTX), N) + /* For a CONST_DOUBLE: +#if TARGET_SUPPORTS_WIDE_INT == 0 For a VOIDmode, there are two integers CONST_DOUBLE_LOW is the low-order word and ..._HIGH the high-order. +#endif For a float, there is a REAL_VALUE_TYPE structure, and CONST_DOUBLE_REAL_VALUE(r) is a pointer to it. 
*/ #define CONST_DOUBLE_LOW(r) XCMWINT (r, 0, CONST_DOUBLE, VOIDmode) @@ -1764,6 +1889,12 @@ extern rtx plus_constant (enum machine_mode, rtx, HOST_WIDE_INT); /* In rtl.c */ extern rtx rtx_alloc_stat (RTX_CODE MEM_STAT_DECL); #define rtx_alloc(c) rtx_alloc_stat (c MEM_STAT_INFO) +extern rtx rtx_alloc_stat_v (RTX_CODE MEM_STAT_DECL, int); +#define rtx_alloc_v(c, SZ) rtx_alloc_stat_v (c MEM_STAT_INFO, SZ) +#define const_wide_int_alloc(NWORDS) \ + rtx_alloc_v (CONST_WIDE_INT, \ + (sizeof (struct hwivec_def) \ + + ((NWORDS)-1) * sizeof (HOST_WIDE_INT))) \ extern rtvec rtvec_alloc (int); extern rtvec shallow_copy_rtvec (rtvec); @@ -1820,10 +1951,17 @@ extern void start_sequence (void); extern void push_to_sequence (rtx); extern void push_to_sequence2 (rtx, rtx); extern void end_sequence (void); +#if TARGET_SUPPORTS_WIDE_INT == 0 extern double_int rtx_to_double_int (const_rtx); -extern rtx immed_double_int_const (double_int, enum machine_mode); +#endif +extern void hwivec_output_hex (FILE *, const_hwivec); +#ifndef GENERATOR_FILE +extern rtx immed_wide_int_const (const wide_int &cst, enum machine_mode mode); +#endif +#if TARGET_SUPPORTS_WIDE_INT == 0 extern rtx immed_double_const (HOST_WIDE_INT, HOST_WIDE_INT, enum machine_mode); +#endif /* In loop-iv.c */ diff --git a/gcc/rtlanal.c b/gcc/rtlanal.c index b198685..0fe1d0e 100644 --- a/gcc/rtlanal.c +++ b/gcc/rtlanal.c @@ -3091,6 +3091,8 @@ commutative_operand_precedence (rtx op) /* Constants always come the second operand. Prefer "nice" constants. 
*/ if (code == CONST_INT) return -8; + if (code == CONST_WIDE_INT) + return -8; if (code == CONST_DOUBLE) return -7; if (code == CONST_FIXED) @@ -3103,6 +3105,8 @@ commutative_operand_precedence (rtx op) case RTX_CONST_OBJ: if (code == CONST_INT) return -6; + if (code == CONST_WIDE_INT) + return -6; if (code == CONST_DOUBLE) return -5; if (code == CONST_FIXED) @@ -5289,7 +5293,10 @@ get_address_mode (rtx mem) /* Split up a CONST_DOUBLE or integer constant rtx into two rtx's for single words, storing in *FIRST the word that comes first in memory in the target - and in *SECOND the other. */ + and in *SECOND the other. + + TODO: This function needs to be rewritten to work on any size + integer. */ void split_double (rtx value, rtx *first, rtx *second) @@ -5366,6 +5373,22 @@ split_double (rtx value, rtx *first, rtx *second) } } } + else if (GET_CODE (value) == CONST_WIDE_INT) + { + /* All of this is scary code and needs to be converted to + properly work with any size integer. */ + gcc_assert (CONST_WIDE_INT_NUNITS (value) == 2); + if (WORDS_BIG_ENDIAN) + { + *first = GEN_INT (CONST_WIDE_INT_ELT (value, 1)); + *second = GEN_INT (CONST_WIDE_INT_ELT (value, 0)); + } + else + { + *first = GEN_INT (CONST_WIDE_INT_ELT (value, 0)); + *second = GEN_INT (CONST_WIDE_INT_ELT (value, 1)); + } + } else if (!CONST_DOUBLE_P (value)) { if (WORDS_BIG_ENDIAN) diff --git a/gcc/sched-vis.c b/gcc/sched-vis.c index 763230c..979aab1 100644 --- a/gcc/sched-vis.c +++ b/gcc/sched-vis.c @@ -432,6 +432,23 @@ print_value (pretty_printer *pp, const_rtx x, int verbose) pp_scalar (pp, HOST_WIDE_INT_PRINT_HEX, (unsigned HOST_WIDE_INT) INTVAL (x)); break; + + case CONST_WIDE_INT: + { + const char *sep = "<"; + int i; + for (i = CONST_WIDE_INT_NUNITS (x) - 1; i >= 0; i--) + { + pp_string (pp, sep); + sep = ","; + sprintf (tmp, HOST_WIDE_INT_PRINT_HEX, + (unsigned HOST_WIDE_INT) CONST_WIDE_INT_ELT (x, i)); + pp_string (pp, tmp); + } + pp_greater (pp); + } + break; + case CONST_DOUBLE: if (FLOAT_MODE_P 
(GET_MODE (x))) { diff --git a/gcc/sel-sched-ir.c b/gcc/sel-sched-ir.c index 47e7695..828cac3 100644 --- a/gcc/sel-sched-ir.c +++ b/gcc/sel-sched-ir.c @@ -1141,10 +1141,10 @@ lhs_and_rhs_separable_p (rtx lhs, rtx rhs) if (lhs == NULL || rhs == NULL) return false; - /* Do not schedule CONST, CONST_INT and CONST_DOUBLE etc as rhs: no point - to use reg, if const can be used. Moreover, scheduling const as rhs may - lead to mode mismatch cause consts don't have modes but they could be - merged from branches where the same const used in different modes. */ + /* Do not schedule constants as rhs: there is no point in using a + reg if a const can be used. Moreover, scheduling a const as rhs + may lead to a mode mismatch, because consts don't have modes but + could be merged from branches that use the same const in different modes. */ if (CONSTANT_P (rhs)) return false; diff --git a/gcc/simplify-rtx.c b/gcc/simplify-rtx.c index 791f91a..8c7c9a4 100644 --- a/gcc/simplify-rtx.c +++ b/gcc/simplify-rtx.c @@ -86,6 +86,22 @@ mode_signbit_p (enum machine_mode mode, const_rtx x) if (width <= HOST_BITS_PER_WIDE_INT && CONST_INT_P (x)) val = INTVAL (x); +#if TARGET_SUPPORTS_WIDE_INT + else if (CONST_WIDE_INT_P (x)) + { + unsigned int i; + unsigned int elts = CONST_WIDE_INT_NUNITS (x); + if (elts != (width + HOST_BITS_PER_WIDE_INT - 1) / HOST_BITS_PER_WIDE_INT) + return false; + for (i = 0; i < elts - 1; i++) + if (CONST_WIDE_INT_ELT (x, i) != 0) + return false; + val = CONST_WIDE_INT_ELT (x, elts - 1); + width %= HOST_BITS_PER_WIDE_INT; + if (width == 0) + width = HOST_BITS_PER_WIDE_INT; + } +#else else if (width <= HOST_BITS_PER_DOUBLE_INT && CONST_DOUBLE_AS_INT_P (x) && CONST_DOUBLE_LOW (x) == 0) @@ -93,8 +109,9 @@ mode_signbit_p (enum machine_mode mode, const_rtx x) val = CONST_DOUBLE_HIGH (x); width -= HOST_BITS_PER_WIDE_INT; } +#endif else - /* FIXME: We don't yet have a representation for wider modes. */ + /* X is not an integer constant.
*/ return false; if (width < HOST_BITS_PER_WIDE_INT) @@ -1496,7 +1513,6 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, rtx op, enum machine_mode op_mode) { unsigned int width = GET_MODE_PRECISION (mode); - unsigned int op_width = GET_MODE_PRECISION (op_mode); if (code == VEC_DUPLICATE) { @@ -1570,8 +1586,19 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, if (CONST_INT_P (op)) lv = INTVAL (op), hv = HWI_SIGN_EXTEND (lv); else +#if TARGET_SUPPORTS_WIDE_INT + { + /* The conversion code to floats really wants exactly 2 HWIs. + This needs to be fixed. For now, if the constant is + really big, just return 0, which is safe. */ + if (CONST_WIDE_INT_NUNITS (op) > 2) + return 0; + lv = CONST_WIDE_INT_ELT (op, 0); + hv = CONST_WIDE_INT_ELT (op, 1); + } +#else lv = CONST_DOUBLE_LOW (op), hv = CONST_DOUBLE_HIGH (op); - +#endif REAL_VALUE_FROM_INT (d, lv, hv, mode); d = real_value_truncate (mode, d); return CONST_DOUBLE_FROM_REAL_VALUE (d, mode); @@ -1584,8 +1611,19 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, if (CONST_INT_P (op)) lv = INTVAL (op), hv = HWI_SIGN_EXTEND (lv); else +#if TARGET_SUPPORTS_WIDE_INT + { + /* The conversion code to floats really wants exactly 2 HWIs. + This needs to be fixed. For now, if the constant is + really big, just return 0, which is safe. */ + if (CONST_WIDE_INT_NUNITS (op) > 2) + return 0; + lv = CONST_WIDE_INT_ELT (op, 0); + hv = CONST_WIDE_INT_ELT (op, 1); + } +#else lv = CONST_DOUBLE_LOW (op), hv = CONST_DOUBLE_HIGH (op); - +#endif if (op_mode == VOIDmode || GET_MODE_PRECISION (op_mode) > HOST_BITS_PER_DOUBLE_INT) /* We should never get a negative number.
*/ @@ -1598,302 +1636,82 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, return CONST_DOUBLE_FROM_REAL_VALUE (d, mode); } - if (CONST_INT_P (op) - && width <= HOST_BITS_PER_WIDE_INT && width > 0) + if (CONST_SCALAR_INT_P (op) && width > 0) { - HOST_WIDE_INT arg0 = INTVAL (op); - HOST_WIDE_INT val; + wide_int result; + enum machine_mode imode = op_mode == VOIDmode ? mode : op_mode; + wide_int op0 = wide_int::from_rtx (op, imode); + +#if TARGET_SUPPORTS_WIDE_INT == 0 + /* This assert keeps the simplification from producing a result + that cannot be represented in a CONST_DOUBLE, but a lot of + upstream callers expect that this function never fails to + simplify something, so if you added this to the test above, + the code would just die later anyway. If this assert fires, + you just need to make the port support wide int. */ + gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT); +#endif switch (code) { case NOT: - val = ~ arg0; + result = ~op0; break; case NEG: - val = - arg0; + result = op0.neg (); break; case ABS: - val = (arg0 >= 0 ? arg0 : - arg0); + result = op0.abs (); break; case FFS: - arg0 &= GET_MODE_MASK (mode); - val = ffs_hwi (arg0); + result = op0.ffs (); break; case CLZ: - arg0 &= GET_MODE_MASK (mode); - if (arg0 == 0 && CLZ_DEFINED_VALUE_AT_ZERO (mode, val)) - ; - else - val = GET_MODE_PRECISION (mode) - floor_log2 (arg0) - 1; + result = op0.clz (); break; case CLRSB: - arg0 &= GET_MODE_MASK (mode); - if (arg0 == 0) - val = GET_MODE_PRECISION (mode) - 1; - else if (arg0 >= 0) - val = GET_MODE_PRECISION (mode) - floor_log2 (arg0) - 2; - else if (arg0 < 0) - val = GET_MODE_PRECISION (mode) - floor_log2 (~arg0) - 2; + result = op0.clrsb (); break; - + case CTZ: - arg0 &= GET_MODE_MASK (mode); - if (arg0 == 0) - { - /* Even if the value at zero is undefined, we have to come - up with some replacement. Seems good enough. */ - if (!
CTZ_DEFINED_VALUE_AT_ZERO (mode, val)) - val = GET_MODE_PRECISION (mode); - } - else - val = ctz_hwi (arg0); + result = op0.ctz (); break; case POPCOUNT: - arg0 &= GET_MODE_MASK (mode); - val = 0; - while (arg0) - val++, arg0 &= arg0 - 1; + result = op0.popcount (); break; case PARITY: - arg0 &= GET_MODE_MASK (mode); - val = 0; - while (arg0) - val++, arg0 &= arg0 - 1; - val &= 1; + result = op0.parity (); break; case BSWAP: - { - unsigned int s; - - val = 0; - for (s = 0; s < width; s += 8) - { - unsigned int d = width - s - 8; - unsigned HOST_WIDE_INT byte; - byte = (arg0 >> s) & 0xff; - val |= byte << d; - } - } + result = op0.bswap (); break; case TRUNCATE: - val = arg0; + result = op0.zforce_to_size (width); break; case ZERO_EXTEND: - /* When zero-extending a CONST_INT, we need to know its - original mode. */ - gcc_assert (op_mode != VOIDmode); - if (op_width == HOST_BITS_PER_WIDE_INT) - { - /* If we were really extending the mode, - we would have to distinguish between zero-extension - and sign-extension. */ - gcc_assert (width == op_width); - val = arg0; - } - else if (GET_MODE_BITSIZE (op_mode) < HOST_BITS_PER_WIDE_INT) - val = arg0 & GET_MODE_MASK (op_mode); - else - return 0; + result = op0.zforce_to_size (width); break; case SIGN_EXTEND: - if (op_mode == VOIDmode) - op_mode = mode; - op_width = GET_MODE_PRECISION (op_mode); - if (op_width == HOST_BITS_PER_WIDE_INT) - { - /* If we were really extending the mode, - we would have to distinguish between zero-extension - and sign-extension. 
*/ - gcc_assert (width == op_width); - val = arg0; - } - else if (op_width < HOST_BITS_PER_WIDE_INT) - { - val = arg0 & GET_MODE_MASK (op_mode); - if (val_signbit_known_set_p (op_mode, val)) - val |= ~GET_MODE_MASK (op_mode); - } - else - return 0; + result = op0.sforce_to_size (width); break; case SQRT: - case FLOAT_EXTEND: - case FLOAT_TRUNCATE: - case SS_TRUNCATE: - case US_TRUNCATE: - case SS_NEG: - case US_NEG: - case SS_ABS: - return 0; - - default: - gcc_unreachable (); - } - - return gen_int_mode (val, mode); - } - - /* We can do some operations on integer CONST_DOUBLEs. Also allow - for a DImode operation on a CONST_INT. */ - else if (width <= HOST_BITS_PER_DOUBLE_INT - && (CONST_DOUBLE_AS_INT_P (op) || CONST_INT_P (op))) - { - double_int first, value; - - if (CONST_DOUBLE_AS_INT_P (op)) - first = double_int::from_pair (CONST_DOUBLE_HIGH (op), - CONST_DOUBLE_LOW (op)); - else - first = double_int::from_shwi (INTVAL (op)); - - switch (code) - { - case NOT: - value = ~first; - break; - - case NEG: - value = -first; - break; - - case ABS: - if (first.is_negative ()) - value = -first; - else - value = first; - break; - - case FFS: - value.high = 0; - if (first.low != 0) - value.low = ffs_hwi (first.low); - else if (first.high != 0) - value.low = HOST_BITS_PER_WIDE_INT + ffs_hwi (first.high); - else - value.low = 0; - break; - - case CLZ: - value.high = 0; - if (first.high != 0) - value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.high) - 1 - - HOST_BITS_PER_WIDE_INT; - else if (first.low != 0) - value.low = GET_MODE_PRECISION (mode) - floor_log2 (first.low) - 1; - else if (! CLZ_DEFINED_VALUE_AT_ZERO (mode, value.low)) - value.low = GET_MODE_PRECISION (mode); - break; - - case CTZ: - value.high = 0; - if (first.low != 0) - value.low = ctz_hwi (first.low); - else if (first.high != 0) - value.low = HOST_BITS_PER_WIDE_INT + ctz_hwi (first.high); - else if (! 
CTZ_DEFINED_VALUE_AT_ZERO (mode, value.low)) - value.low = GET_MODE_PRECISION (mode); - break; - - case POPCOUNT: - value = double_int_zero; - while (first.low) - { - value.low++; - first.low &= first.low - 1; - } - while (first.high) - { - value.low++; - first.high &= first.high - 1; - } - break; - - case PARITY: - value = double_int_zero; - while (first.low) - { - value.low++; - first.low &= first.low - 1; - } - while (first.high) - { - value.low++; - first.high &= first.high - 1; - } - value.low &= 1; - break; - - case BSWAP: - { - unsigned int s; - - value = double_int_zero; - for (s = 0; s < width; s += 8) - { - unsigned int d = width - s - 8; - unsigned HOST_WIDE_INT byte; - - if (s < HOST_BITS_PER_WIDE_INT) - byte = (first.low >> s) & 0xff; - else - byte = (first.high >> (s - HOST_BITS_PER_WIDE_INT)) & 0xff; - - if (d < HOST_BITS_PER_WIDE_INT) - value.low |= byte << d; - else - value.high |= byte << (d - HOST_BITS_PER_WIDE_INT); - } - } - break; - - case TRUNCATE: - /* This is just a change-of-mode, so do nothing. 
*/ - value = first; - break; - - case ZERO_EXTEND: - gcc_assert (op_mode != VOIDmode); - - if (op_width > HOST_BITS_PER_WIDE_INT) - return 0; - - value = double_int::from_uhwi (first.low & GET_MODE_MASK (op_mode)); - break; - - case SIGN_EXTEND: - if (op_mode == VOIDmode - || op_width > HOST_BITS_PER_WIDE_INT) - return 0; - else - { - value.low = first.low & GET_MODE_MASK (op_mode); - if (val_signbit_known_set_p (op_mode, value.low)) - value.low |= ~GET_MODE_MASK (op_mode); - - value.high = HWI_SIGN_EXTEND (value.low); - } - break; - - case SQRT: - return 0; - default: return 0; } - return immed_double_int_const (value, mode); + return immed_wide_int_const (result, mode); } else if (CONST_DOUBLE_AS_FLOAT_P (op) @@ -1945,7 +1763,6 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, } return CONST_DOUBLE_FROM_REAL_VALUE (d, mode); } - else if (CONST_DOUBLE_AS_FLOAT_P (op) && SCALAR_FLOAT_MODE_P (GET_MODE (op)) && GET_MODE_CLASS (mode) == MODE_INT @@ -1958,9 +1775,12 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, /* This was formerly used only for non-IEEE float. eggert@twinsun.com says it is safe for IEEE also. 
*/ - HOST_WIDE_INT xh, xl, th, tl; + HOST_WIDE_INT th, tl; REAL_VALUE_TYPE x, t; + wide_int wc; REAL_VALUE_FROM_CONST_DOUBLE (x, op); + HOST_WIDE_INT tmp[2]; + switch (code) { case FIX: @@ -1982,8 +1802,8 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, real_from_integer (&t, VOIDmode, tl, th, 0); if (REAL_VALUES_LESS (t, x)) { - xh = th; - xl = tl; + tmp[1] = th; + tmp[0] = tl; break; } @@ -2002,11 +1822,11 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, real_from_integer (&t, VOIDmode, tl, th, 0); if (REAL_VALUES_LESS (x, t)) { - xh = th; - xl = tl; + tmp[1] = th; + tmp[0] = tl; break; } - REAL_VALUE_TO_INT (&xl, &xh, x); + REAL_VALUE_TO_INT (&tmp[0], &tmp[1], x); break; case UNSIGNED_FIX: @@ -2033,18 +1853,19 @@ simplify_const_unary_operation (enum rtx_code code, enum machine_mode mode, real_from_integer (&t, VOIDmode, tl, th, 1); if (REAL_VALUES_LESS (t, x)) { - xh = th; - xl = tl; + tmp[1] = th; + tmp[0] = tl; break; } - REAL_VALUE_TO_INT (&xl, &xh, x); + REAL_VALUE_TO_INT (&tmp[0], &tmp[1], x); break; default: gcc_unreachable (); } - return immed_double_const (xl, xh, mode); + wc = wide_int::from_array (tmp, 2, GET_MODE_PRECISION (mode)); + return immed_wide_int_const (wc, mode); } return NULL_RTX; @@ -2204,49 +2025,50 @@ simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode, if (SCALAR_INT_MODE_P (mode)) { - double_int coeff0, coeff1; + wide_int coeff0; + wide_int coeff1; rtx lhs = op0, rhs = op1; - coeff0 = double_int_one; - coeff1 = double_int_one; + coeff0 = wide_int::one (GET_MODE_PRECISION (mode)); + coeff1 = wide_int::one (GET_MODE_PRECISION (mode)); if (GET_CODE (lhs) == NEG) { - coeff0 = double_int_minus_one; + coeff0 = wide_int::minus_one (GET_MODE_PRECISION (mode)); lhs = XEXP (lhs, 0); } else if (GET_CODE (lhs) == MULT - && CONST_INT_P (XEXP (lhs, 1))) + && CONST_SCALAR_INT_P (XEXP (lhs, 1))) { - coeff0 = double_int::from_shwi (INTVAL (XEXP (lhs, 1))); + coeff0 = 
wide_int::from_rtx (XEXP (lhs, 1), mode); lhs = XEXP (lhs, 0); } else if (GET_CODE (lhs) == ASHIFT && CONST_INT_P (XEXP (lhs, 1)) && INTVAL (XEXP (lhs, 1)) >= 0 - && INTVAL (XEXP (lhs, 1)) < HOST_BITS_PER_WIDE_INT) + && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode)) { - coeff0 = double_int_zero.set_bit (INTVAL (XEXP (lhs, 1))); + coeff0 = wide_int::set_bit_in_zero (INTVAL (XEXP (lhs, 1)), mode); lhs = XEXP (lhs, 0); } if (GET_CODE (rhs) == NEG) { - coeff1 = double_int_minus_one; + coeff1 = wide_int::minus_one (GET_MODE_PRECISION (mode)); rhs = XEXP (rhs, 0); } else if (GET_CODE (rhs) == MULT && CONST_INT_P (XEXP (rhs, 1))) { - coeff1 = double_int::from_shwi (INTVAL (XEXP (rhs, 1))); + coeff1 = wide_int::from_rtx (XEXP (rhs, 1), mode); rhs = XEXP (rhs, 0); } else if (GET_CODE (rhs) == ASHIFT && CONST_INT_P (XEXP (rhs, 1)) && INTVAL (XEXP (rhs, 1)) >= 0 - && INTVAL (XEXP (rhs, 1)) < HOST_BITS_PER_WIDE_INT) + && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode)) { - coeff1 = double_int_zero.set_bit (INTVAL (XEXP (rhs, 1))); + coeff1 = wide_int::set_bit_in_zero (INTVAL (XEXP (rhs, 1)), mode); rhs = XEXP (rhs, 0); } @@ -2254,11 +2076,9 @@ simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode, { rtx orig = gen_rtx_PLUS (mode, op0, op1); rtx coeff; - double_int val; bool speed = optimize_function_for_speed_p (cfun); - val = coeff0 + coeff1; - coeff = immed_double_int_const (val, mode); + coeff = immed_wide_int_const (coeff0 + coeff1, mode); tem = simplify_gen_binary (MULT, mode, lhs, coeff); return set_src_cost (tem, speed) <= set_src_cost (orig, speed) @@ -2380,50 +2200,52 @@ simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode, if (SCALAR_INT_MODE_P (mode)) { - double_int coeff0, negcoeff1; + wide_int coeff0; + wide_int negcoeff1; rtx lhs = op0, rhs = op1; - coeff0 = double_int_one; - negcoeff1 = double_int_minus_one; + coeff0 = wide_int::one (GET_MODE_PRECISION (mode)); + negcoeff1 = wide_int::minus_one (GET_MODE_PRECISION 
(mode)); if (GET_CODE (lhs) == NEG) { - coeff0 = double_int_minus_one; + coeff0 = wide_int::minus_one (GET_MODE_PRECISION (mode)); lhs = XEXP (lhs, 0); } else if (GET_CODE (lhs) == MULT - && CONST_INT_P (XEXP (lhs, 1))) + && CONST_SCALAR_INT_P (XEXP (lhs, 1))) { - coeff0 = double_int::from_shwi (INTVAL (XEXP (lhs, 1))); + coeff0 = wide_int::from_rtx (XEXP (lhs, 1), mode); lhs = XEXP (lhs, 0); } else if (GET_CODE (lhs) == ASHIFT && CONST_INT_P (XEXP (lhs, 1)) && INTVAL (XEXP (lhs, 1)) >= 0 - && INTVAL (XEXP (lhs, 1)) < HOST_BITS_PER_WIDE_INT) + && INTVAL (XEXP (lhs, 1)) < GET_MODE_PRECISION (mode)) { - coeff0 = double_int_zero.set_bit (INTVAL (XEXP (lhs, 1))); + coeff0 = wide_int::set_bit_in_zero (INTVAL (XEXP (lhs, 1)), mode); lhs = XEXP (lhs, 0); } if (GET_CODE (rhs) == NEG) { - negcoeff1 = double_int_one; + negcoeff1 = wide_int::one (GET_MODE_PRECISION (mode)); rhs = XEXP (rhs, 0); } else if (GET_CODE (rhs) == MULT && CONST_INT_P (XEXP (rhs, 1))) { - negcoeff1 = double_int::from_shwi (-INTVAL (XEXP (rhs, 1))); + negcoeff1 = wide_int::from_rtx (XEXP (rhs, 1), mode).neg (); rhs = XEXP (rhs, 0); } else if (GET_CODE (rhs) == ASHIFT && CONST_INT_P (XEXP (rhs, 1)) && INTVAL (XEXP (rhs, 1)) >= 0 - && INTVAL (XEXP (rhs, 1)) < HOST_BITS_PER_WIDE_INT) + && INTVAL (XEXP (rhs, 1)) < GET_MODE_PRECISION (mode)) { - negcoeff1 = double_int_zero.set_bit (INTVAL (XEXP (rhs, 1))); - negcoeff1 = -negcoeff1; + negcoeff1 = wide_int::set_bit_in_zero (INTVAL (XEXP (rhs, 1)), + mode); + negcoeff1 = negcoeff1.neg (); rhs = XEXP (rhs, 0); } @@ -2431,11 +2253,9 @@ simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode, { rtx orig = gen_rtx_MINUS (mode, op0, op1); rtx coeff; - double_int val; bool speed = optimize_function_for_speed_p (cfun); - val = coeff0 + negcoeff1; - coeff = immed_double_int_const (val, mode); + coeff = immed_wide_int_const (coeff0 + negcoeff1, mode); tem = simplify_gen_binary (MULT, mode, lhs, coeff); return set_src_cost (tem, speed) <= set_src_cost 
(orig, speed) @@ -2587,26 +2407,13 @@ simplify_binary_operation_1 (enum rtx_code code, enum machine_mode mode, && trueop1 == CONST1_RTX (mode)) return op0; - /* Convert multiply by constant power of two into shift unless - we are still generating RTL. This test is a kludge. */ - if (CONST_INT_P (trueop1) - && (val = exact_log2 (UINTVAL (trueop1))) >= 0 - /* If the mode is larger than the host word size, and the - uppermost bit is set, then this isn't a power of two due - to implicit sign extension. */ - && (width <= HOST_BITS_PER_WIDE_INT - || val != HOST_BITS_PER_WIDE_INT - 1)) - return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); - - /* Likewise for multipliers wider than a word. */ - if (CONST_DOUBLE_AS_INT_P (trueop1) - && GET_MODE (op0) == mode - && CONST_DOUBLE_LOW (trueop1) == 0 - && (val = exact_log2 (CONST_DOUBLE_HIGH (trueop1))) >= 0 - && (val < HOST_BITS_PER_DOUBLE_INT - 1 - || GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT)) - return simplify_gen_binary (ASHIFT, mode, op0, - GEN_INT (val + HOST_BITS_PER_WIDE_INT)); + /* Convert multiply by constant power of two into shift. */ + if (CONST_SCALAR_INT_P (trueop1)) + { + val = wide_int::from_rtx (trueop1, mode).exact_log2 ().to_shwi (); + if (val >= 0 && val < GET_MODE_BITSIZE (mode)) + return simplify_gen_binary (ASHIFT, mode, op0, GEN_INT (val)); + } /* x*2 is x+x and x*(-1) is -x */ if (CONST_DOUBLE_AS_FLOAT_P (trueop1) @@ -3682,9 +3489,9 @@ rtx simplify_const_binary_operation (enum rtx_code code, enum machine_mode mode, rtx op0, rtx op1) { - HOST_WIDE_INT arg0, arg1, arg0s, arg1s; - HOST_WIDE_INT val; +#if TARGET_SUPPORTS_WIDE_INT == 0 unsigned int width = GET_MODE_PRECISION (mode); +#endif if (VECTOR_MODE_P (mode) && code != VEC_CONCAT @@ -3877,299 +3684,129 @@ simplify_const_binary_operation (enum rtx_code code, enum machine_mode mode, /* We can fold some multi-word operations. 
*/ if (GET_MODE_CLASS (mode) == MODE_INT - && width == HOST_BITS_PER_DOUBLE_INT - && (CONST_DOUBLE_AS_INT_P (op0) || CONST_INT_P (op0)) - && (CONST_DOUBLE_AS_INT_P (op1) || CONST_INT_P (op1))) + && CONST_SCALAR_INT_P (op0) + && CONST_SCALAR_INT_P (op1)) { - double_int o0, o1, res, tmp; - bool overflow; - - o0 = rtx_to_double_int (op0); - o1 = rtx_to_double_int (op1); - + wide_int result; + wide_int wop0 = wide_int::from_rtx (op0, mode); + bool overflow = false; + unsigned int bitsize = GET_MODE_BITSIZE (mode); + rtx_mode_t pop1 = std::make_pair (op1, mode); + +#if TARGET_SUPPORTS_WIDE_INT == 0 + /* This assert keeps the simplification from producing a result + that cannot be represented in a CONST_DOUBLE, but a lot of + upstream callers expect that this function never fails to + simplify something, so if you added this to the test above, + the code would just die later anyway. If this assert fires, + you just need to make the port support wide int. */ + gcc_assert (width <= HOST_BITS_PER_DOUBLE_INT); +#endif switch (code) { case MINUS: - /* A - B == A + (-B). */ - o1 = -o1; - - /* Fall through....
*/ + result = wop0 - pop1; + break; case PLUS: - res = o0 + o1; + result = wop0 + pop1; break; case MULT: - res = o0 * o1; + result = wop0 * pop1; break; case DIV: - res = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR, - &tmp, &overflow); + result = wop0.div_trunc (pop1, SIGNED, &overflow); if (overflow) - return 0; + return NULL_RTX; break; - + case MOD: - tmp = o0.divmod_with_overflow (o1, false, TRUNC_DIV_EXPR, - &res, &overflow); + result = wop0.mod_trunc (pop1, SIGNED, &overflow); if (overflow) - return 0; + return NULL_RTX; break; case UDIV: - res = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR, - &tmp, &overflow); + result = wop0.div_trunc (pop1, UNSIGNED, &overflow); if (overflow) - return 0; + return NULL_RTX; break; case UMOD: - tmp = o0.divmod_with_overflow (o1, true, TRUNC_DIV_EXPR, - &res, &overflow); + result = wop0.mod_trunc (pop1, UNSIGNED, &overflow); if (overflow) - return 0; + return NULL_RTX; break; case AND: - res = o0 & o1; + result = wop0 & pop1; break; case IOR: - res = o0 | o1; + result = wop0 | pop1; break; case XOR: - res = o0 ^ o1; + result = wop0 ^ pop1; break; case SMIN: - res = o0.smin (o1); + result = wop0.smin (pop1); break; case SMAX: - res = o0.smax (o1); + result = wop0.smax (pop1); break; case UMIN: - res = o0.umin (o1); + result = wop0.umin (pop1); break; case UMAX: - res = o0.umax (o1); - break; - - case LSHIFTRT: case ASHIFTRT: - case ASHIFT: - case ROTATE: case ROTATERT: - { - unsigned HOST_WIDE_INT cnt; - - if (SHIFT_COUNT_TRUNCATED) - { - o1.high = 0; - o1.low &= GET_MODE_PRECISION (mode) - 1; - } - - if (!o1.fits_uhwi () - || o1.to_uhwi () >= GET_MODE_PRECISION (mode)) - return 0; - - cnt = o1.to_uhwi (); - unsigned short prec = GET_MODE_PRECISION (mode); - - if (code == LSHIFTRT || code == ASHIFTRT) - res = o0.rshift (cnt, prec, code == ASHIFTRT); - else if (code == ASHIFT) - res = o0.alshift (cnt, prec); - else if (code == ROTATE) - res = o0.lrotate (cnt, prec); - else /* code == ROTATERT */ - res = o0.rrotate 
(cnt, prec); - } - break; - - default: - return 0; - } - - return immed_double_int_const (res, mode); - } - - if (CONST_INT_P (op0) && CONST_INT_P (op1) - && width <= HOST_BITS_PER_WIDE_INT && width != 0) - { - /* Get the integer argument values in two forms: - zero-extended in ARG0, ARG1 and sign-extended in ARG0S, ARG1S. */ - - arg0 = INTVAL (op0); - arg1 = INTVAL (op1); - - if (width < HOST_BITS_PER_WIDE_INT) - { - arg0 &= GET_MODE_MASK (mode); - arg1 &= GET_MODE_MASK (mode); - - arg0s = arg0; - if (val_signbit_known_set_p (mode, arg0s)) - arg0s |= ~GET_MODE_MASK (mode); - - arg1s = arg1; - if (val_signbit_known_set_p (mode, arg1s)) - arg1s |= ~GET_MODE_MASK (mode); - } - else - { - arg0s = arg0; - arg1s = arg1; - } - - /* Compute the value of the arithmetic. */ - - switch (code) - { - case PLUS: - val = arg0s + arg1s; - break; - - case MINUS: - val = arg0s - arg1s; - break; - - case MULT: - val = arg0s * arg1s; - break; - - case DIV: - if (arg1s == 0 - || ((unsigned HOST_WIDE_INT) arg0s - == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1) - && arg1s == -1)) - return 0; - val = arg0s / arg1s; - break; - - case MOD: - if (arg1s == 0 - || ((unsigned HOST_WIDE_INT) arg0s - == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1) - && arg1s == -1)) - return 0; - val = arg0s % arg1s; + result = wop0.umax (pop1); break; - case UDIV: - if (arg1 == 0 - || ((unsigned HOST_WIDE_INT) arg0s - == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1) - && arg1s == -1)) - return 0; - val = (unsigned HOST_WIDE_INT) arg0 / arg1; - break; - - case UMOD: - if (arg1 == 0 - || ((unsigned HOST_WIDE_INT) arg0s - == (unsigned HOST_WIDE_INT) 1 << (HOST_BITS_PER_WIDE_INT - 1) - && arg1s == -1)) - return 0; - val = (unsigned HOST_WIDE_INT) arg0 % arg1; - break; - - case AND: - val = arg0 & arg1; - break; - - case IOR: - val = arg0 | arg1; - break; + case LSHIFTRT: + if (wide_int::from_rtx (op1, mode).neg_p ()) + return NULL_RTX; - case XOR: - val = arg0 ^ arg1; + 
+	  result = wop0.rshiftu (pop1, bitsize, TRUNC);
 	  break;
-
-	case LSHIFTRT:
-	case ASHIFT:
+
 	case ASHIFTRT:
-	  /* Truncate the shift if SHIFT_COUNT_TRUNCATED, otherwise make sure
-	     the value is in range.  We can't return any old value for
-	     out-of-range arguments because either the middle-end (via
-	     shift_truncation_mask) or the back-end might be relying on
-	     target-specific knowledge.  Nor can we rely on
-	     shift_truncation_mask, since the shift might not be part of an
-	     ashlM3, lshrM3 or ashrM3 instruction.  */
-	  if (SHIFT_COUNT_TRUNCATED)
-	    arg1 = (unsigned HOST_WIDE_INT) arg1 % width;
-	  else if (arg1 < 0 || arg1 >= GET_MODE_BITSIZE (mode))
-	    return 0;
-
-	  val = (code == ASHIFT
-		 ? ((unsigned HOST_WIDE_INT) arg0) << arg1
-		 : ((unsigned HOST_WIDE_INT) arg0) >> arg1);
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;

-	  /* Sign-extend the result for arithmetic right shifts.  */
-	  if (code == ASHIFTRT && arg0s < 0 && arg1 > 0)
-	    val |= ((unsigned HOST_WIDE_INT) (-1)) << (width - arg1);
+	  result = wop0.rshifts (pop1, bitsize, TRUNC);
 	  break;
+
+	case ASHIFT:
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;

-	case ROTATERT:
-	  if (arg1 < 0)
-	    return 0;
-
-	  arg1 %= width;
-	  val = ((((unsigned HOST_WIDE_INT) arg0) << (width - arg1))
-		 | (((unsigned HOST_WIDE_INT) arg0) >> arg1));
+	  result = wop0.lshift (pop1, bitsize, TRUNC);
 	  break;
-
+
 	case ROTATE:
-	  if (arg1 < 0)
-	    return 0;
-
-	  arg1 %= width;
-	  val = ((((unsigned HOST_WIDE_INT) arg0) << arg1)
-		 | (((unsigned HOST_WIDE_INT) arg0) >> (width - arg1)));
-	  break;
-
-	case COMPARE:
-	  /* Do nothing here.  */
-	  return 0;
-
-	case SMIN:
-	  val = arg0s <= arg1s ? arg0s : arg1s;
-	  break;
-
-	case UMIN:
-	  val = ((unsigned HOST_WIDE_INT) arg0
-		 <= (unsigned HOST_WIDE_INT) arg1 ? arg0 : arg1);
-	  break;
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;

-	case SMAX:
-	  val = arg0s > arg1s ? arg0s : arg1s;
+	  result = wop0.lrotate (pop1);
 	  break;
+
+	case ROTATERT:
+	  if (wide_int::from_rtx (op1, mode).neg_p ())
+	    return NULL_RTX;

-	case UMAX:
-	  val = ((unsigned HOST_WIDE_INT) arg0
-		 > (unsigned HOST_WIDE_INT) arg1 ? arg0 : arg1);
+	  result = wop0.rrotate (pop1);
 	  break;

-	case SS_PLUS:
-	case US_PLUS:
-	case SS_MINUS:
-	case US_MINUS:
-	case SS_MULT:
-	case US_MULT:
-	case SS_DIV:
-	case US_DIV:
-	case SS_ASHIFT:
-	case US_ASHIFT:
-	  /* ??? There are simplifications that can be done.  */
-	  return 0;
-
 	default:
-	  gcc_unreachable ();
+	  return NULL_RTX;
 	}
-
-      return gen_int_mode (val, mode);
+      return immed_wide_int_const (result, mode);
     }

   return NULL_RTX;
@@ -4837,10 +4474,11 @@ comparison_result (enum rtx_code code, int known_results)
     }
 }

-/* Check if the given comparison (done in the given MODE) is actually a
-   tautology or a contradiction.
-   If no simplification is possible, this function returns zero.
-   Otherwise, it returns either const_true_rtx or const0_rtx.  */
+/* Check if the given comparison (done in the given MODE) is actually
+   a tautology or a contradiction.  If the mode is VOIDmode, the
+   comparison is done in "infinite precision".  If no simplification
+   is possible, this function returns zero.  Otherwise, it returns
+   either const_true_rtx or const0_rtx.  */

 rtx
 simplify_const_relational_operation (enum rtx_code code,
@@ -4964,59 +4602,24 @@ simplify_const_relational_operation (enum rtx_code code,
   /* Otherwise, see if the operands are both integers.  */
   if ((GET_MODE_CLASS (mode) == MODE_INT || mode == VOIDmode)
-      && (CONST_DOUBLE_AS_INT_P (trueop0) || CONST_INT_P (trueop0))
-      && (CONST_DOUBLE_AS_INT_P (trueop1) || CONST_INT_P (trueop1)))
+      && CONST_SCALAR_INT_P (trueop0) && CONST_SCALAR_INT_P (trueop1))
     {
-      int width = GET_MODE_PRECISION (mode);
-      HOST_WIDE_INT l0s, h0s, l1s, h1s;
-      unsigned HOST_WIDE_INT l0u, h0u, l1u, h1u;
-
-      /* Get the two words comprising each integer constant.  */
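The wide-int version folds a constant comparison by classifying the operand pair once under a signed and once under an unsigned reading, and then letting comparison_result pick the answer for the particular condition code. A standalone C sketch of that classification step (the CMP_* bit values match the ones simplify-rtx.c uses; int64_t stands in for the wide values):

```c
#include <stdint.h>

/* Flag bits as used by comparison_result in simplify-rtx.c.  */
enum { CMP_EQ = 1, CMP_LT = 2, CMP_GT = 4, CMP_LTU = 8, CMP_GTU = 16 };

/* Classify the relation between two constants under both the signed
   and the unsigned interpretation of the same bits, in the style of
   the lts_p/ltu_p pair.  Sketch only, not GCC code.  */
static int
classify_comparison (int64_t a, int64_t b)
{
  if (a == b)
    return CMP_EQ;
  int cr = a < b ? CMP_LT : CMP_GT;
  cr |= (uint64_t) a < (uint64_t) b ? CMP_LTU : CMP_GTU;
  return cr;
}
```

For example, -1 compares below 1 as a signed value but above it as an unsigned one, so both facts are recorded: classify_comparison (-1, 1) yields CMP_LT | CMP_GTU.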
-      if (CONST_DOUBLE_AS_INT_P (trueop0))
-	{
-	  l0u = l0s = CONST_DOUBLE_LOW (trueop0);
-	  h0u = h0s = CONST_DOUBLE_HIGH (trueop0);
-	}
-      else
-	{
-	  l0u = l0s = INTVAL (trueop0);
-	  h0u = h0s = HWI_SIGN_EXTEND (l0s);
-	}
-
-      if (CONST_DOUBLE_AS_INT_P (trueop1))
-	{
-	  l1u = l1s = CONST_DOUBLE_LOW (trueop1);
-	  h1u = h1s = CONST_DOUBLE_HIGH (trueop1);
-	}
-      else
-	{
-	  l1u = l1s = INTVAL (trueop1);
-	  h1u = h1s = HWI_SIGN_EXTEND (l1s);
-	}
-
-      /* If WIDTH is nonzero and smaller than HOST_BITS_PER_WIDE_INT,
-	 we have to sign or zero-extend the values.  */
-      if (width != 0 && width < HOST_BITS_PER_WIDE_INT)
-	{
-	  l0u &= GET_MODE_MASK (mode);
-	  l1u &= GET_MODE_MASK (mode);
-
-	  if (val_signbit_known_set_p (mode, l0s))
-	    l0s |= ~GET_MODE_MASK (mode);
-
-	  if (val_signbit_known_set_p (mode, l1s))
-	    l1s |= ~GET_MODE_MASK (mode);
-	}
-      if (width != 0 && width <= HOST_BITS_PER_WIDE_INT)
-	h0u = h1u = 0, h0s = HWI_SIGN_EXTEND (l0s), h1s = HWI_SIGN_EXTEND (l1s);
-
-      if (h0u == h1u && l0u == l1u)
+      enum machine_mode cmode = mode;
+      wide_int wo0;
+      rtx_mode_t ptrueop1 = std::make_pair (trueop1, cmode);
+
+      /* It would be nice if we really had a mode here.  However, the
+	 largest int representable on the target is as good as
+	 infinite.  */
+      if (mode == VOIDmode)
+	cmode = MAX_MODE_INT;
+      wo0 = wide_int::from_rtx (trueop0, cmode);
+      if (wo0 == ptrueop1)
 	return comparison_result (code, CMP_EQ);
       else
 	{
-	  int cr;
-	  cr = (h0s < h1s || (h0s == h1s && l0u < l1u)) ? CMP_LT : CMP_GT;
-	  cr |= (h0u < h1u || (h0u == h1u && l0u < l1u)) ? CMP_LTU : CMP_GTU;
+	  int cr = wo0.lts_p (ptrueop1) ? CMP_LT : CMP_GT;
+	  cr |= wo0.ltu_p (ptrueop1) ? CMP_LTU : CMP_GTU;
 	  return comparison_result (code, cr);
 	}
     }
@@ -5472,9 +5075,9 @@ simplify_ternary_operation (enum rtx_code code, enum machine_mode mode,
   return 0;
 }

-/* Evaluate a SUBREG of a CONST_INT or CONST_DOUBLE or CONST_FIXED
-   or CONST_VECTOR,
-   returning another CONST_INT or CONST_DOUBLE or CONST_FIXED or CONST_VECTOR.
+/* Evaluate a SUBREG of a CONST_INT or CONST_WIDE_INT or CONST_DOUBLE
+   or CONST_FIXED or CONST_VECTOR, returning another CONST_INT or
+   CONST_WIDE_INT or CONST_DOUBLE or CONST_FIXED or CONST_VECTOR.

    Works by unpacking OP into a collection of 8-bit values represented
    as a little-endian array of 'unsigned char', selecting by BYTE,
@@ -5484,13 +5087,11 @@
 static rtx
 simplify_immed_subreg (enum machine_mode outermode, rtx op,
		       enum machine_mode innermode, unsigned int byte)
 {
-  /* We support up to 512-bit values (for V8DFmode).  */
   enum {
-    max_bitsize = 512,
     value_bit = 8,
     value_mask = (1 << value_bit) - 1
   };
-  unsigned char value[max_bitsize / value_bit];
+  unsigned char value[MAX_BITSIZE_MODE_ANY_MODE / value_bit];
   int value_start;
   int i;
   int elem;
@@ -5502,6 +5103,7 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
   rtvec result_v = NULL;
   enum mode_class outer_class;
   enum machine_mode outer_submode;
+  int max_bitsize;

   /* Some ports misuse CCmode.  */
   if (GET_MODE_CLASS (outermode) == MODE_CC && CONST_INT_P (op))
@@ -5511,6 +5113,10 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
   if (COMPLEX_MODE_P (outermode))
     return NULL_RTX;

+  /* We support any size mode.  */
+  max_bitsize = MAX (GET_MODE_BITSIZE (outermode),
+		     GET_MODE_BITSIZE (innermode));
+
   /* Unpack the value.  */

   if (GET_CODE (op) == CONST_VECTOR)
@@ -5560,8 +5166,20 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
	    *vp++ = INTVAL (el) < 0 ? -1 : 0;
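simplify_immed_subreg's strategy, as the comment above says, is to explode the inner constant into a little-endian array of 8-bit "value" chunks, select the chunks named by BYTE, and reassemble them in the outer mode. A self-contained C sketch of that select-and-repack step on a 64-bit value (the names and fixed sizes here are illustrative, not GCC's):

```c
#include <stdint.h>

enum { VALUE_BIT = 8, VALUE_MASK = (1 << VALUE_BIT) - 1 };

/* Sketch of simplify_immed_subreg's approach: unpack OP into a
   little-endian array of 8-bit chunks, then repack the OUTER_BYTES
   chunks starting at offset BYTE.  Assumes
   byte + outer_bytes <= inner_bytes <= 8.  */
static uint32_t
extract_subreg (uint64_t op, unsigned inner_bytes, unsigned byte,
                unsigned outer_bytes)
{
  unsigned char value[8];
  unsigned i;

  /* Unpack, least significant chunk first.  */
  for (i = 0; i < inner_bytes; i++)
    value[i] = (op >> (i * VALUE_BIT)) & VALUE_MASK;

  /* Select by BYTE and repack into the (narrower) outer value.  */
  uint32_t result = 0;
  for (i = 0; i < outer_bytes; i++)
    result |= (uint32_t) value[byte + i] << (i * VALUE_BIT);
  return result;
}
```

On this representation a lowpart SUBREG is just the first chunks, and a highpart SUBREG is the chunks at a nonzero BYTE offset.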
	  break;

+	case CONST_WIDE_INT:
+	  {
+	    wide_int val = wide_int::from_rtx (el, innermode);
+	    unsigned char extend = val.sign_mask ();
+
+	    for (i = 0; i < val.get_precision () && i < elem_bitsize;
+		 i += value_bit)
+	      *vp++ = val.extract_to_hwi (i, value_bit);
+	    for (; i < elem_bitsize; i += value_bit)
+	      *vp++ = extend;
+	  }
+	  break;
+
 	case CONST_DOUBLE:
-	  if (GET_MODE (el) == VOIDmode)
+	  if (TARGET_SUPPORTS_WIDE_INT == 0 && GET_MODE (el) == VOIDmode)
	    {
	      unsigned char extend = 0;
	      /* If this triggers, someone should have generated a
@@ -5584,7 +5202,8 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
	    }
	  else
	    {
-	      long tmp[max_bitsize / 32];
+	      /* This is big enough for anything on the platform.  */
+	      long tmp[MAX_BITSIZE_MODE_ANY_MODE / 32];
	      int bitsize = GET_MODE_BITSIZE (GET_MODE (el));

	      gcc_assert (SCALAR_FLOAT_MODE_P (GET_MODE (el)));
@@ -5704,24 +5323,27 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
	case MODE_INT:
	case MODE_PARTIAL_INT:
	  {
-	    unsigned HOST_WIDE_INT hi = 0, lo = 0;
-
-	    for (i = 0;
-		 i < HOST_BITS_PER_WIDE_INT && i < elem_bitsize;
-		 i += value_bit)
-	      lo |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask) << i;
-	    for (; i < elem_bitsize; i += value_bit)
-	      hi |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask)
-		    << (i - HOST_BITS_PER_WIDE_INT);
-
-	    /* immed_double_const doesn't call trunc_int_for_mode.  I don't
-	       know why.  */
-	    if (elem_bitsize <= HOST_BITS_PER_WIDE_INT)
-	      elems[elem] = gen_int_mode (lo, outer_submode);
-	    else if (elem_bitsize <= HOST_BITS_PER_DOUBLE_INT)
-	      elems[elem] = immed_double_const (lo, hi, outer_submode);
-	    else
-	      return NULL_RTX;
+	    int u;
+	    int base = 0;
+	    int units
+	      = (GET_MODE_BITSIZE (outer_submode) + HOST_BITS_PER_WIDE_INT - 1)
+		/ HOST_BITS_PER_WIDE_INT;
+	    HOST_WIDE_INT tmp[MAX_BITSIZE_MODE_ANY_INT / HOST_BITS_PER_WIDE_INT];
+	    wide_int r;
+
+	    for (u = 0; u < units; u++)
+	      {
+		unsigned HOST_WIDE_INT buf = 0;
+		for (i = 0;
+		     i < HOST_BITS_PER_WIDE_INT && base + i < elem_bitsize;
+		     i += value_bit)
+		  buf |= (unsigned HOST_WIDE_INT)(*vp++ & value_mask) << i;
+
+		tmp[u] = buf;
+		base += HOST_BITS_PER_WIDE_INT;
+	      }
+	    r = wide_int::from_array (tmp, units,
+				      GET_MODE_PRECISION (outer_submode));
+	    elems[elem] = immed_wide_int_const (r, outer_submode);
	  }
	  break;

@@ -5729,7 +5351,7 @@ simplify_immed_subreg (enum machine_mode outermode, rtx op,
	case MODE_DECIMAL_FLOAT:
	  {
	    REAL_VALUE_TYPE r;
-	    long tmp[max_bitsize / 32];
+	    long tmp[MAX_BITSIZE_MODE_ANY_MODE / 32];

	    /* real_from_target wants its input in words affected by
	       FLOAT_WORDS_BIG_ENDIAN.  However, we ignore this,
diff --git a/gcc/testsuite/ada/acats/ada95.lst b/gcc/testsuite/ada/acats/ada95.lst
new file mode 100644
index 0000000..839d5df
--- /dev/null
+++ b/gcc/testsuite/ada/acats/ada95.lst
@@ -0,0 +1,31 @@
+ac3106a
+c34005p
+c34005r
+c34005s
+c34005u
+c34005v
+c34006g
+c34006j
+c34006l
+c34008a
+c3a0014
+c41103b
+c41203b
+c41306a
+c460a01
+c650001
+c74302b
+c74306a
+c85014a
+c85014b
+c85014c
+c87b26b
+c87b41a
+c99004a
+cb40005
+cc3019c
+cc51b03
+cc51d02
+cd10002
+cdd2a03
+cxac005
diff --git a/gcc/testsuite/gcc.c-torture/execute/pr33992.x b/gcc/testsuite/gcc.c-torture/execute/pr33992.x
new file mode 100644
index 0000000..57e9840
--- /dev/null
+++ b/gcc/testsuite/gcc.c-torture/execute/pr33992.x
@@ -0,0 +1,7 @@
+load_lib target-supports.exp
+
+if { [ check_effective_target_nonpic ] } {
+  return 0
+}
+
+return 1
diff --git a/gcc/testsuite/gfortran.dg/module_md5_1.f90 b/gcc/testsuite/gfortran.dg/module_md5_1.f90
new file mode 100644
index 0000000..1f522cb
--- /dev/null
+++ b/gcc/testsuite/gfortran.dg/module_md5_1.f90
@@ -0,0 +1,13 @@
+! Check that we can write a module file, that it has a correct MD5 sum,
+! and that we can read it back.
+!
+! { dg-do compile }
+module foo
+  integer(kind=4), parameter :: pi = 3_4
+end module foo
+
+program test
+  use foo
+  print *, pi
+end program test
+! { dg-final { scan-module "foo" "MD5:510304affe70481794fecdb22fc9ca0c" } }
diff --git a/gcc/tree-ssa-address.c b/gcc/tree-ssa-address.c
index cfd42ad..33fe8df 100644
--- a/gcc/tree-ssa-address.c
+++ b/gcc/tree-ssa-address.c
@@ -189,15 +189,18 @@ addr_for_mem_ref (struct mem_address *addr, addr_space_t as,
   struct mem_addr_template *templ;

   if (addr->step && !integer_onep (addr->step))
-    st = immed_double_int_const (tree_to_double_int (addr->step), pointer_mode);
+    st = immed_wide_int_const (wide_int::from_tree (addr->step),
+			       TYPE_MODE (TREE_TYPE (addr->step)));
   else
     st = NULL_RTX;

   if (addr->offset && !integer_zerop (addr->offset))
-    off = immed_double_int_const
-      (tree_to_double_int (addr->offset)
-       .sext (TYPE_PRECISION (TREE_TYPE (addr->offset))),
-       pointer_mode);
+    {
+      wide_int dc = wide_int::from_tree (addr->offset);
+      dc = dc.sforce_to_size (TYPE_PRECISION (TREE_TYPE (addr->offset)));
+      off = immed_wide_int_const (dc,
+				  TYPE_MODE (TREE_TYPE (addr->offset)));
+    }
   else
     off = NULL_RTX;
diff --git a/gcc/tree.c b/gcc/tree.c
index d8f2424..22fc583 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -59,6 +59,7 @@ along with GCC; see the file COPYING3.  If not see
 #include "except.h"
 #include "debug.h"
 #include "intl.h"
+#include "wide-int.h"

 /* Tree code classes.  */

@@ -1067,6 +1068,33 @@ double_int_to_tree (tree type, double_int cst)
   return build_int_cst_wide (type, cst.low, cst.high);
 }

+/* Constructs a tree in type TYPE with value given by CST.  Signedness
+   of CST is assumed to be the same as the signedness of TYPE.  */
+
+tree
+wide_int_to_tree (tree type, const wide_int &cst)
+{
+  wide_int v;
+  unsigned int new_prec = TYPE_PRECISION (type);
+
+  gcc_assert (cst.get_len () <= 2);
+  SignOp sgn = TYPE_UNSIGNED (type) ? UNSIGNED : SIGNED;
+
+  /* This is something of a temporary hack.  The current rep of an
+     INT_CST looks at all of the bits, even those past the precision
+     of the type.  So we have to accommodate this.  The first test
+     checks to see if the type we want to make this is shorter than
+     the current rep, but the second block just goes and extends what
+     is there to the full size of the INT_CST.  */
+  if (new_prec < cst.get_precision ())
+    v = cst.zext (TYPE_PRECISION (type))
+	   .force_to_size (HOST_BITS_PER_DOUBLE_INT, sgn);
+  else
+    v = cst.force_to_size (HOST_BITS_PER_DOUBLE_INT, sgn);
+
+  return build_int_cst_wide (type, v.elt (0), v.elt (1));
+}
+
 /* Returns true if CST fits into range of TYPE.  Signedness of CST
    is assumed to be the same as the signedness of TYPE.  */
diff --git a/gcc/var-tracking.c b/gcc/var-tracking.c
index 8108413..5bff082 100644
--- a/gcc/var-tracking.c
+++ b/gcc/var-tracking.c
@@ -3523,6 +3523,23 @@ loc_cmp (rtx x, rtx y)
     default:
       gcc_unreachable ();
     }
+  if (CONST_WIDE_INT_P (x))
+    {
+      /* Compare the vector length first.  */
+      if (CONST_WIDE_INT_NUNITS (x) > CONST_WIDE_INT_NUNITS (y))
+	return 1;
+      else if (CONST_WIDE_INT_NUNITS (x) < CONST_WIDE_INT_NUNITS (y))
+	return -1;
+
+      /* Compare the vector's elements.  */
+      for (j = CONST_WIDE_INT_NUNITS (x) - 1; j >= 0; j--)
+	{
+	  if (CONST_WIDE_INT_ELT (x, j) < CONST_WIDE_INT_ELT (y, j))
+	    return -1;
+	  if (CONST_WIDE_INT_ELT (x, j) > CONST_WIDE_INT_ELT (y, j))
+	    return 1;
+	}
+    }

   return 0;
 }
diff --git a/gcc/varasm.c b/gcc/varasm.c
index 2532d80..7cca674 100644
--- a/gcc/varasm.c
+++ b/gcc/varasm.c
@@ -3406,6 +3406,7 @@ const_rtx_hash_1 (rtx *xp, void *data)
   enum rtx_code code;
   hashval_t h, *hp;
   rtx x;
+  int i;

   x = *xp;
   code = GET_CODE (x);
@@ -3416,12 +3417,12 @@ const_rtx_hash_1 (rtx *xp, void *data)
     {
     case CONST_INT:
       hwi = INTVAL (x);
+
     fold_hwi:
       {
	int shift = sizeof (hashval_t) * CHAR_BIT;
	const int n = sizeof (HOST_WIDE_INT) / sizeof (hashval_t);
-	int i;
-
+
	h ^= (hashval_t) hwi;
	for (i = 1; i < n; ++i)
	  {
@@ -3431,8 +3432,16 @@ const_rtx_hash_1 (rtx *xp, void *data)
	  }
	break;

+    case CONST_WIDE_INT:
+      hwi = GET_MODE_PRECISION (mode);
+      {
+	for (i = 0; i < CONST_WIDE_INT_NUNITS (x); i++)
+	  hwi ^= CONST_WIDE_INT_ELT (x, i);
+	goto fold_hwi;
+      }
+
     case CONST_DOUBLE:
-      if (mode == VOIDmode)
+      if (TARGET_SUPPORTS_WIDE_INT == 0 && mode == VOIDmode)
	{
	  hwi = CONST_DOUBLE_LOW (x) ^ CONST_DOUBLE_HIGH (x);
	  goto fold_hwi;
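The loc_cmp hunk orders CONST_WIDE_INTs by element-vector length first and then element-wise from the most significant end. A standalone C sketch of that ordering, with uint64_t arrays standing in for the CONST_WIDE_INT element vectors:

```c
#include <stdint.h>

/* Sketch of the CONST_WIDE_INT ordering used in loc_cmp: a shorter
   element vector sorts first; equal-length vectors are compared
   lexicographically from the most significant element down.
   Returns -1, 0 or 1.  */
static int
wide_int_loc_cmp (const uint64_t *x, int xn, const uint64_t *y, int yn)
{
  int j;

  if (xn > yn)
    return 1;
  if (xn < yn)
    return -1;

  for (j = xn - 1; j >= 0; j--)
    {
      if (x[j] < y[j])
	return -1;
      if (x[j] > y[j])
	return 1;
    }
  return 0;
}
```

Note that the length test must be strict: with a non-strict comparison, two equal-length vectors would never reach the element loop and equal constants could not compare as equal.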