Add a full_integral_type_p helper function

Message ID 87mv6xmh3b.fsf@linaro.org
State New

Commit Message

Richard Sandiford Aug. 18, 2017, 8:10 a.m. UTC
There are several places that test whether:

    TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t))

for some integer type T.  With SVE variable-length modes, this would
need to become:

    TYPE_PRECISION (t) == GET_MODE_PRECISION (SCALAR_TYPE_MODE (t))

(or SCALAR_INT_TYPE_MODE, it doesn't matter which in this case).
But rather than add the "SCALAR_" everywhere, it seemed neater to
introduce a new helper function that tests whether T is an integral
type that has the same number of bits as its underlying mode.  This
patch does that, calling it full_integral_type_p.

It isn't possible to use TYPE_MODE in tree.h because vector_type_mode
is defined in stor-layout.h, so for now the function accesses the mode
field directly.  After the 77-patch machine_mode series (thanks again
Jeff for the reviews) it would use SCALAR_TYPE_MODE instead.
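
Concretely, the pair would then be expected to look something like
this (a sketch of the post-series form, not the patch itself):

    inline bool
    scalar_type_is_full_p (const_tree t)
    {
      /* SCALAR_TYPE_MODE checks that T is scalar, avoiding the
         vector_type_mode problem described above.  */
      return (GET_MODE_PRECISION (SCALAR_TYPE_MODE (t))
              == TYPE_PRECISION (t));
    }

    inline bool
    full_integral_type_p (const_tree t)
    {
      return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
    }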

Of the changes that didn't previously have an INTEGRAL_TYPE_P check:

- for fold_single_bit_test_into_sign_test it is obvious from the
  integer_foop tests that this is restricted to integral types.

- vect_recog_vector_vector_shift_pattern is inherently restricted
  to integral types.

- the register_edge_assert_for_2 hunk is dominated by:

      TREE_CODE (val) == INTEGER_CST

- the ubsan_instrument_shift hunk is preceded by an early exit:

      if (!INTEGRAL_TYPE_P (type0))
	return NULL_TREE;

- the second and third match.pd hunks are from the following fold
  (a worked example appears after this list):

    /* Fold (X << C1) & C2 into (X << C1) & (C2 | ((1 << C1) - 1))
            (X >> C1) & C2 into (X >> C1) & (C2 | ~((type) -1 >> C1))
       if the new mask might be further optimized.  */
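
As a standalone illustration of that fold (not from the patch): with
C1 == 4 and C2 == 0xF0, the low four bits of (x << 4) are known to be
zero, so widening the mask to 0xF0 | 0xF == 0xFF cannot change the
result:

    #include <assert.h>

    int
    main (void)
    {
      for (unsigned int x = 0; x < 0x10000; x++)
        assert (((x << 4) & 0xF0u) == ((x << 4) & 0xFFu));
      return 0;
    }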

I'm a bit confused about:

/* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
   when profitable.
   For bitwise binary operations apply operand conversions to the
   binary operation result instead of to the operands.  This allows
   to combine successive conversions and bitwise binary operations.
   We combine the above two cases by using a conditional convert.  */
(for bitop (bit_and bit_ior bit_xor)
 (simplify
  (bitop (convert @0) (convert? @1))
  (if (((TREE_CODE (@1) == INTEGER_CST
	 && INTEGRAL_TYPE_P (TREE_TYPE (@0))
	 && int_fits_type_p (@1, TREE_TYPE (@0)))
	|| types_match (@0, @1))
       /* ???  This transform conflicts with fold-const.c doing
	  Convert (T)(x & c) into (T)x & (T)c, if c is an integer
	  constants (if x has signed type, the sign bit cannot be set
	  in c).  This folds extension into the BIT_AND_EXPR.
	  Restrict it to GIMPLE to avoid endless recursions.  */
       && (bitop != BIT_AND_EXPR || GIMPLE)
       && (/* That's a good idea if the conversion widens the operand, thus
	      after hoisting the conversion the operation will be narrower.  */
	   TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
	   /* It's also a good idea if the conversion is to a non-integer
	      mode.  */
	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
	   /* Or if the precision of TO is not the same as the precision
	      of its mode.  */
	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
   (convert (bitop @0 (convert @1))))))

though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
rely on @0 and @1 being integral (although conversions from float would
use FLOAT_EXPR), but then what is:

	   /* It's also a good idea if the conversion is to a non-integer
	      mode.  */
	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT

letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those
it would be better to apply the scalar rules to the element type.

Either way, having allowed all non-INT modes, using full_integral_type_p
for the remaining condition seems correct.
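
For a concrete case where the predicate is false, take a narrow
bit-field (illustration only):

    /* The 3-bit type of F has TYPE_PRECISION 3 but TYPE_MODE QImode,
       whose GET_MODE_PRECISION is 8, so full_integral_type_p is
       false for it.  */
    struct s { unsigned int f : 3; };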

If the feeling is that this isn't a useful abstraction, I can just update
each site individually to cope with variable-sized modes.

Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?

Richard


2017-08-18  Richard Sandiford  <richard.sandiford@linaro.org>
	    Alan Hayward  <alan.hayward@arm.com>
	    David Sherwood  <david.sherwood@arm.com>

gcc/
	* tree.h (scalar_type_is_full_p): New function.
	(full_integral_type_p): Likewise.
	* fold-const.c (fold_single_bit_test_into_sign_test): Use
	full_integral_type_p.
	* match.pd: Likewise.
	* tree-ssa-forwprop.c (simplify_rotate): Likewise.
	* tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
	(vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
	(adjust_bool_pattern): Likewise.
	* tree-vrp.c (register_edge_assert_for_2): Likewise.
	* ubsan.c (instrument_si_overflow): Likewise.

gcc/c-family/
	* c-ubsan.c (ubsan_instrument_shift): Use full_integral_type_p.

Comments

Richard Biener Aug. 18, 2017, 10:45 a.m. UTC | #1
On Fri, Aug 18, 2017 at 10:10 AM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> I'm a bit confused about:
>
> /* Try to fold (type) X op CST -> (type) (X op ((type-x) CST))
>    when profitable.
>    For bitwise binary operations apply operand conversions to the
>    binary operation result instead of to the operands.  This allows
>    to combine successive conversions and bitwise binary operations.
>    We combine the above two cases by using a conditional convert.  */
> (for bitop (bit_and bit_ior bit_xor)
>  (simplify
>   (bitop (convert @0) (convert? @1))
>   (if (((TREE_CODE (@1) == INTEGER_CST
>          && INTEGRAL_TYPE_P (TREE_TYPE (@0))
>          && int_fits_type_p (@1, TREE_TYPE (@0)))
>         || types_match (@0, @1))
>        /* ???  This transform conflicts with fold-const.c doing
>           Convert (T)(x & c) into (T)x & (T)c, if c is an integer
>           constants (if x has signed type, the sign bit cannot be set
>           in c).  This folds extension into the BIT_AND_EXPR.
>           Restrict it to GIMPLE to avoid endless recursions.  */
>        && (bitop != BIT_AND_EXPR || GIMPLE)
>        && (/* That's a good idea if the conversion widens the operand, thus
>               after hoisting the conversion the operation will be narrower.  */
>            TYPE_PRECISION (TREE_TYPE (@0)) < TYPE_PRECISION (type)
>            /* It's also a good idea if the conversion is to a non-integer
>               mode.  */
>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>            /* Or if the precision of TO is not the same as the precision
>               of its mode.  */
>            || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
>    (convert (bitop @0 (convert @1))))))
>
> though.  The "INTEGRAL_TYPE_P (TREE_TYPE (@0))" suggests that we can't
> rely on @0 and @1 being integral (although conversions from float would
> use FLOAT_EXPR), but then what is:

bit_and is valid on POINTER_TYPE and vector integer types
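
(For example, the vector case is easy to produce in GNU C --
illustration only:)

    typedef int v4si __attribute__ ((vector_size (16)));

    v4si
    vec_and (v4si a, v4si b)
    {
      /* A BIT_AND_EXPR whose type is a VECTOR_TYPE with a
         MODE_VECTOR_INT mode, so not INTEGRAL_TYPE_P.  */
      return a & b;
    }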

>
>            /* It's also a good idea if the conversion is to a non-integer
>               mode.  */
>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>
> letting through?  MODE_PARTIAL_INT maybe, but that's a sort of integer
> mode too.  MODE_COMPLEX_INT or MODE_VECTOR_INT?  I thought for those
> it would be better to apply the scalar rules to the element type.

I suppose extra caution ;)  I think I have seen BLKmode for integer
types that are not naturally aligned, at least on strict-align targets.
The code is a copy of the original code in tree-ssa-forwprop.c.

> Either way, having allowed all non-INT modes, using full_integral_type_p
> for the remaining condition seems correct.
>
> If the feeling is that this isn't a useful abstraction, I can just update
> each site individually to cope with variable-sized modes.

I think "full_integral_type_p" is a name from which I cannot infer
its meaning.  Maybe type_has_mode_precision_p?  Or
type_matches_mode_p?  Does TYPE_PRECISION == GET_MODE_PRECISION
imply TYPE_SIZE == GET_MODE_BITSIZE btw?

Richard.

> Tested on aarch64-linux-gnu and x86_64-linux-gnu.  OK to install?
>
> Richard
>
>
> 2017-08-18  Richard Sandiford  <richard.sandiford@linaro.org>
>             Alan Hayward  <alan.hayward@arm.com>
>             David Sherwood  <david.sherwood@arm.com>
>
> gcc/
>         * tree.h (scalar_type_is_full_p): New function.
>         (full_integral_type_p): Likewise.
>         * fold-const.c (fold_single_bit_test_into_sign_test): Likewise.
>         * match.pd: Likewise.
>         * tree-ssa-forwprop.c (simplify_rotate): Likewise.
>         * tree-vect-patterns.c (vect_recog_vector_vector_shift_pattern)
>         (vect_recog_mult_pattern, vect_recog_divmod_pattern): Likewise.
>         (adjust_bool_pattern): Likewise.
>         * tree-vrp.c (register_edge_assert_for_2): Likewise.
>         * ubsan.c (instrument_si_overflow): Likewise.
>
> gcc/c-family/
>         * c-ubsan.c (ubsan_instrument_shift): Use full_integral_type_p.
>
> diff --git a/gcc/c-family/c-ubsan.c b/gcc/c-family/c-ubsan.c
> index b1386db..20f78e7 100644
> --- a/gcc/c-family/c-ubsan.c
> +++ b/gcc/c-family/c-ubsan.c
> @@ -131,8 +131,8 @@ ubsan_instrument_shift (location_t loc, enum tree_code code,
>
>    /* If this is not a signed operation, don't perform overflow checks.
>       Also punt on bit-fields.  */
> -  if (TYPE_OVERFLOW_WRAPS (type0)
> -      || GET_MODE_BITSIZE (TYPE_MODE (type0)) != TYPE_PRECISION (type0)
> +  if (!full_integral_type_p (type0)
> +      || TYPE_OVERFLOW_WRAPS (type0)
>        || !sanitize_flags_p (SANITIZE_SHIFT_BASE))
>      ;
>
> diff --git a/gcc/fold-const.c b/gcc/fold-const.c
> index 0a5b168..1985a14 100644
> --- a/gcc/fold-const.c
> +++ b/gcc/fold-const.c
> @@ -6672,8 +6672,7 @@ fold_single_bit_test_into_sign_test (location_t loc,
>        if (arg00 != NULL_TREE
>           /* This is only a win if casting to a signed type is cheap,
>              i.e. when arg00's type is not a partial mode.  */
> -         && TYPE_PRECISION (TREE_TYPE (arg00))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
> +         && full_integral_type_p (TREE_TYPE (arg00)))
>         {
>           tree stype = signed_type_for (TREE_TYPE (arg00));
>           return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
> diff --git a/gcc/match.pd b/gcc/match.pd
> index 0e36f46..9ad9930 100644
> --- a/gcc/match.pd
> +++ b/gcc/match.pd
> @@ -992,7 +992,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>            || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
>            /* Or if the precision of TO is not the same as the precision
>               of its mode.  */
> -          || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
> +          || !full_integral_type_p (type)))
>     (convert (bitop @0 (convert @1))))))
>
>  (for bitop (bit_and bit_ior)
> @@ -1920,8 +1920,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>         if (shift == LSHIFT_EXPR)
>          zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
>         else if (shift == RSHIFT_EXPR
> -               && (TYPE_PRECISION (shift_type)
> -                   == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
> +               && full_integral_type_p (shift_type))
>          {
>            prec = TYPE_PRECISION (TREE_TYPE (@3));
>            tree arg00 = @0;
> @@ -1931,8 +1930,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>                && TYPE_UNSIGNED (TREE_TYPE (@0)))
>              {
>                tree inner_type = TREE_TYPE (@0);
> -              if ((TYPE_PRECISION (inner_type)
> -                   == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
> +              if (full_integral_type_p (inner_type)
>                    && TYPE_PRECISION (inner_type) < prec)
>                  {
>                    prec = TYPE_PRECISION (inner_type);
> @@ -3225,9 +3223,7 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>       ncmp (ge lt)
>   (simplify
>    (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
> -  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -       && (TYPE_PRECISION (TREE_TYPE (@0))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> +  (if (full_integral_type_p (TREE_TYPE (@0))
>         && element_precision (@2) >= element_precision (@0)
>         && wi::only_sign_bit_p (@1, element_precision (@0)))
>     (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
> @@ -4021,19 +4017,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (for op (plus minus)
>    (simplify
>      (convert (op:s (convert@2 @0) (convert?@3 @1)))
> -    (if (INTEGRAL_TYPE_P (type)
> -        /* We check for type compatibility between @0 and @1 below,
> -           so there's no need to check that @1/@3 are integral types.  */
> -        && INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -        && INTEGRAL_TYPE_P (TREE_TYPE (@2))
> +    (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
>          /* The precision of the type of each operand must match the
>             precision of the mode of each operand, similarly for the
>             result.  */
> -        && (TYPE_PRECISION (TREE_TYPE (@0))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> -        && (TYPE_PRECISION (TREE_TYPE (@1))
> -            == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
> -        && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
> +        && full_integral_type_p (TREE_TYPE (@0))
> +        && full_integral_type_p (TREE_TYPE (@1))
> +        && full_integral_type_p (type)
>          /* The inner conversion must be a widening conversion.  */
>          && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
>          && types_match (@0, type)
> @@ -4055,19 +4045,13 @@ DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
>  (for op (minus plus)
>   (simplify
>    (bit_and (op:s (convert@2 @0) (convert@3 @1)) INTEGER_CST@4)
> -  (if (INTEGRAL_TYPE_P (type)
> -       /* We check for type compatibility between @0 and @1 below,
> -         so there's no need to check that @1/@3 are integral types.  */
> -       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
> -       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
> +  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
>         /* The precision of the type of each operand must match the
>           precision of the mode of each operand, similarly for the
>           result.  */
> -       && (TYPE_PRECISION (TREE_TYPE (@0))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
> -       && (TYPE_PRECISION (TREE_TYPE (@1))
> -          == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
> -       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
> +       && full_integral_type_p (TREE_TYPE (@0))
> +       && full_integral_type_p (TREE_TYPE (@1))
> +       && full_integral_type_p (type)
>         /* The inner conversion must be a widening conversion.  */
>         && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
>         && types_match (@0, @1)
> diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
> index 5719b99..20d5c86 100644
> --- a/gcc/tree-ssa-forwprop.c
> +++ b/gcc/tree-ssa-forwprop.c
> @@ -1528,8 +1528,7 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>
>    /* Only create rotates in complete modes.  Other cases are not
>       expanded properly.  */
> -  if (!INTEGRAL_TYPE_P (rtype)
> -      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
> +  if (!full_integral_type_p (rtype))
>      return false;
>
>    for (i = 0; i < 2; i++)
> @@ -1606,11 +1605,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>           defcodefor_name (def_arg2[i], &cdef_code[i],
>                            &cdef_arg1[i], &cdef_arg2[i]);
>           if (CONVERT_EXPR_CODE_P (cdef_code[i])
> -             && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
> +             && full_integral_type_p (TREE_TYPE (cdef_arg1[i]))
>               && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
> -                > floor_log2 (TYPE_PRECISION (rtype))
> -             && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
> +                > floor_log2 (TYPE_PRECISION (rtype)))
>             {
>               def_arg2_alt[i] = cdef_arg1[i];
>               defcodefor_name (def_arg2_alt[i], &cdef_code[i],
> @@ -1636,11 +1633,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>               }
>             defcodefor_name (cdef_arg2[i], &code, &tem, NULL);
>             if (CONVERT_EXPR_CODE_P (code)
> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +               && full_integral_type_p (TREE_TYPE (tem))
>                 && TYPE_PRECISION (TREE_TYPE (tem))
>                  > floor_log2 (TYPE_PRECISION (rtype))
> -               && TYPE_PRECISION (TREE_TYPE (tem))
> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
>                 && (tem == def_arg2[1 - i]
>                     || tem == def_arg2_alt[1 - i]))
>               {
> @@ -1664,11 +1659,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>
>             defcodefor_name (cdef_arg1[i], &code, &tem, NULL);
>             if (CONVERT_EXPR_CODE_P (code)
> -               && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +               && full_integral_type_p (TREE_TYPE (tem))
>                 && TYPE_PRECISION (TREE_TYPE (tem))
> -                > floor_log2 (TYPE_PRECISION (rtype))
> -               && TYPE_PRECISION (TREE_TYPE (tem))
> -                == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
> +                > floor_log2 (TYPE_PRECISION (rtype)))
>               defcodefor_name (tem, &code, &tem, NULL);
>
>             if (code == NEGATE_EXPR)
> @@ -1680,11 +1673,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>                   }
>                 defcodefor_name (tem, &code, &tem, NULL);
>                 if (CONVERT_EXPR_CODE_P (code)
> -                   && INTEGRAL_TYPE_P (TREE_TYPE (tem))
> +                   && full_integral_type_p (TREE_TYPE (tem))
>                     && TYPE_PRECISION (TREE_TYPE (tem))
>                        > floor_log2 (TYPE_PRECISION (rtype))
> -                   && TYPE_PRECISION (TREE_TYPE (tem))
> -                      == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
>                     && (tem == def_arg2[1 - i]
>                         || tem == def_arg2_alt[1 - i]))
>                   {
> diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
> index 17d1083..a96f784 100644
> --- a/gcc/tree-vect-patterns.c
> +++ b/gcc/tree-vect-patterns.c
> @@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != SSA_NAME
>        || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
> -      || TYPE_PRECISION (TREE_TYPE (oprnd1))
> -        != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
> +      || !full_integral_type_p (TREE_TYPE (oprnd1))
>        || TYPE_PRECISION (TREE_TYPE (lhs))
>          != TYPE_PRECISION (TREE_TYPE (oprnd0)))
>      return NULL;
> @@ -2469,8 +2468,7 @@ vect_recog_mult_pattern (vec<gimple *> *stmts,
>
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != INTEGER_CST
> -      || !INTEGRAL_TYPE_P (itype)
> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
> +      || !full_integral_type_p (itype))
>      return NULL;
>
>    vectype = get_vectype_for_scalar_type (itype);
> @@ -2584,8 +2582,7 @@ vect_recog_divmod_pattern (vec<gimple *> *stmts,
>    itype = TREE_TYPE (oprnd0);
>    if (TREE_CODE (oprnd0) != SSA_NAME
>        || TREE_CODE (oprnd1) != INTEGER_CST
> -      || TREE_CODE (itype) != INTEGER_TYPE
> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
> +      || !full_integral_type_p (itype))
>      return NULL;
>
>    vectype = get_vectype_for_scalar_type (itype);
> @@ -3385,9 +3382,8 @@ adjust_bool_pattern (tree var, tree out_type,
>      do_compare:
>        gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
>        if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
> -         || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
> -         || (TYPE_PRECISION (TREE_TYPE (rhs1))
> -             != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
> +         || !full_integral_type_p (TREE_TYPE (rhs1))
> +         || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>         {
>           machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
>           itype
> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> index 657a8d1..cfa9a97 100644
> --- a/gcc/tree-vrp.c
> +++ b/gcc/tree-vrp.c
> @@ -5245,7 +5245,7 @@ register_edge_assert_for_2 (tree name, edge e,
>               && tree_fits_uhwi_p (cst2)
>               && INTEGRAL_TYPE_P (TREE_TYPE (name2))
>               && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
> -             && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
> +             && full_integral_type_p (TREE_TYPE (val)))
>             {
>               mask = wi::mask (tree_to_uhwi (cst2), false, prec);
>               val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
> diff --git a/gcc/tree.h b/gcc/tree.h
> index 46debc1..237f234 100644
> --- a/gcc/tree.h
> +++ b/gcc/tree.h
> @@ -5394,4 +5394,25 @@ struct builtin_structptr_type
>    const char *str;
>  };
>  extern const builtin_structptr_type builtin_structptr_types[6];
> +
> +/* Return true if the mode underlying scalar type T has the same number
> +   of bits as T does.  Examples of when this is false include bitfields
> +   that are narrower than the mode that contains them.  */
> +
> +inline bool
> +scalar_type_is_full_p (const_tree t)
> +{
> +  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
> +         == TYPE_PRECISION (t));
> +}
> +
> +/* Return true if T is an integral type that has the same number of bits
> +   as its underlying mode.  */
> +
> +inline bool
> +full_integral_type_p (const_tree t)
> +{
> +  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
> +}
> +
>  #endif  /* GCC_TREE_H  */
> diff --git a/gcc/ubsan.c b/gcc/ubsan.c
> index 49e38fa..40f5f3e 100644
> --- a/gcc/ubsan.c
> +++ b/gcc/ubsan.c
> @@ -1582,9 +1582,8 @@ instrument_si_overflow (gimple_stmt_iterator gsi)
>
>    /* If this is not a signed operation, don't instrument anything here.
>       Also punt on bit-fields.  */
> -  if (!INTEGRAL_TYPE_P (lhsinner)
> -      || TYPE_OVERFLOW_WRAPS (lhsinner)
> -      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
> +  if (!full_integral_type_p (lhsinner)
> +      || TYPE_OVERFLOW_WRAPS (lhsinner))
>      return;
>
>    switch (code)
Richard Sandiford Aug. 18, 2017, 11:04 a.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
>> If the feeling is that this isn't a useful abstraction, I can just update
>> each site individually to cope with variable-sized modes.
>
> I think "full_integral_type_p" is a name from which I cannot infer
> its meaning.  Maybe type_has_mode_precision_p?  Or
> type_matches_mode_p?

type_has_mode_precision_p sounds good.  With that name I guess it
should be written to cope with all types (even those with variable-
width modes), so I think we'd need to continue using TYPE_MODE.
The VECTOR_MODE_P check should get optimised away in most of
the cases touched by the patch though.

Would it be OK to move the declaration of vector_type_mode to tree.h
so that type_has_mode_precision_p can be inline?
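
Something like this, say (a sketch, assuming TYPE_MODE does become
usable from tree.h):

    inline bool
    type_has_mode_precision_p (const_tree t)
    {
      return TYPE_PRECISION (t) == GET_MODE_PRECISION (TYPE_MODE (t));
    }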

> Does TYPE_PRECISION == GET_MODE_PRECISION
> imply TYPE_SIZE == GET_MODE_BITSIZE btw?

Good question :-)  You'd know better than me.  The only case I can
think of is if we ever tried to use BImode for Fortran logicals
(regardless of kind).  But I guess we wouldn't do that?

Thanks,
Richard

Richard Biener Aug. 18, 2017, 12:18 p.m. UTC | #3
On Fri, Aug 18, 2017 at 1:04 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> Richard Biener <richard.guenther@gmail.com> writes:
>>> If the feeling is that this isn't a useful abstraction, I can just update
>>> each site individually to cope with variable-sized modes.
>>
>> I think "full_integral_type_p" is a name from which I cannot infer
>> its meaning.  Maybe type_has_mode_precision_p?  Or
>> type_matches_mode_p?
>
> type_has_mode_precision_p sounds good.  With that name I guess it
> should be written to cope with all types (even those with variable-
> width modes), so I think we'd need to continue using TYPE_MODE.
> The VECTOR_MODE_P check should get optimised away in most of
> the cases touched by the patch though.
>
> Would it be OK to move the declaration of vector_type_mode to tree.h
> so that type_has_mode_precision_p can be inline?

Just move the implementation to tree.c then.

>> Does TYPE_PRECISION == GET_MODE_PRECISION
>> imply TYPE_SIZE == GET_MODE_BITSIZE btw?
>
> Good question :-)  You'd know better than me.  The only case I can
> think of is if we ever tried to use BImode for Fortran logicals
> (regardless of kind).  But I guess we wouldn't do that?

OTOH TYPE_SIZE should always be == GET_MODE_BITSIZE, and if
it is not, the type should get BLKmode.

Might grep for such checks and replace them with an assert ... ;)
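
(Roughly: turning such checks into something like the following --
a sketch only:)

    gcc_assert (tree_to_uhwi (TYPE_SIZE (t))
                == GET_MODE_BITSIZE (TYPE_MODE (t)));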

Richard.

>>> +                > floor_log2 (TYPE_PRECISION (rtype)))
>>>               defcodefor_name (tem, &code, &tem, NULL);
>>>
>>>             if (code == NEGATE_EXPR)
>>> @@ -1680,11 +1673,9 @@ simplify_rotate (gimple_stmt_iterator *gsi)
>>>                   }
>>>                 defcodefor_name (tem, &code, &tem, NULL);
>>>                 if (CONVERT_EXPR_CODE_P (code)
>>> -                   && INTEGRAL_TYPE_P (TREE_TYPE (tem))
>>> +                   && full_integral_type_p (TREE_TYPE (tem))
>>>                     && TYPE_PRECISION (TREE_TYPE (tem))
>>>                        > floor_log2 (TYPE_PRECISION (rtype))
>>> -                   && TYPE_PRECISION (TREE_TYPE (tem))
>>> -                      == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
>>>                     && (tem == def_arg2[1 - i]
>>>                         || tem == def_arg2_alt[1 - i]))
>>>                   {
>>> diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
>>> index 17d1083..a96f784 100644
>>> --- a/gcc/tree-vect-patterns.c
>>> +++ b/gcc/tree-vect-patterns.c
>>> @@ -2067,8 +2067,7 @@ vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
>>>    if (TREE_CODE (oprnd0) != SSA_NAME
>>>        || TREE_CODE (oprnd1) != SSA_NAME
>>>        || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
>>> -      || TYPE_PRECISION (TREE_TYPE (oprnd1))
>>> -        != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
>>> +      || !full_integral_type_p (TREE_TYPE (oprnd1))
>>>        || TYPE_PRECISION (TREE_TYPE (lhs))
>>>          != TYPE_PRECISION (TREE_TYPE (oprnd0)))
>>>      return NULL;
>>> @@ -2469,8 +2468,7 @@ vect_recog_mult_pattern (vec<gimple *> *stmts,
>>>
>>>    if (TREE_CODE (oprnd0) != SSA_NAME
>>>        || TREE_CODE (oprnd1) != INTEGER_CST
>>> -      || !INTEGRAL_TYPE_P (itype)
>>> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
>>> +      || !full_integral_type_p (itype))
>>>      return NULL;
>>>
>>>    vectype = get_vectype_for_scalar_type (itype);
>>> @@ -2584,8 +2582,7 @@ vect_recog_divmod_pattern (vec<gimple *> *stmts,
>>>    itype = TREE_TYPE (oprnd0);
>>>    if (TREE_CODE (oprnd0) != SSA_NAME
>>>        || TREE_CODE (oprnd1) != INTEGER_CST
>>> -      || TREE_CODE (itype) != INTEGER_TYPE
>>> -      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
>>> +      || !full_integral_type_p (itype))
>>>      return NULL;
>>>
>>>    vectype = get_vectype_for_scalar_type (itype);
>>> @@ -3385,9 +3382,8 @@ adjust_bool_pattern (tree var, tree out_type,
>>>      do_compare:
>>>        gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
>>>        if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
>>> -         || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
>>> -         || (TYPE_PRECISION (TREE_TYPE (rhs1))
>>> -             != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
>>> +         || !full_integral_type_p (TREE_TYPE (rhs1))
>>> +         || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
>>>         {
>>>           machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
>>>           itype
>>> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
>>> index 657a8d1..cfa9a97 100644
>>> --- a/gcc/tree-vrp.c
>>> +++ b/gcc/tree-vrp.c
>>> @@ -5245,7 +5245,7 @@ register_edge_assert_for_2 (tree name, edge e,
>>>               && tree_fits_uhwi_p (cst2)
>>>               && INTEGRAL_TYPE_P (TREE_TYPE (name2))
>>>               && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
>>> -             && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
>>> +             && full_integral_type_p (TREE_TYPE (val)))
>>>             {
>>>               mask = wi::mask (tree_to_uhwi (cst2), false, prec);
>>>               val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
>>> diff --git a/gcc/tree.h b/gcc/tree.h
>>> index 46debc1..237f234 100644
>>> --- a/gcc/tree.h
>>> +++ b/gcc/tree.h
>>> @@ -5394,4 +5394,25 @@ struct builtin_structptr_type
>>>    const char *str;
>>>  };
>>>  extern const builtin_structptr_type builtin_structptr_types[6];
>>> +
>>> +/* Return true if the mode underlying scalar type T has the same number
>>> +   of bits as T does.  Examples of when this is false include bitfields
>>> +   that are narrower than the mode that contains them.  */
>>> +
>>> +inline bool
>>> +scalar_type_is_full_p (const_tree t)
>>> +{
>>> +  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
>>> +         == TYPE_PRECISION (t));
>>> +}
>>> +
>>> +/* Return true if T is an integral type that has the same number of bits
>>> +   as its underlying mode.  */
>>> +
>>> +inline bool
>>> +full_integral_type_p (const_tree t)
>>> +{
>>> +  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
>>> +}
>>> +
>>>  #endif  /* GCC_TREE_H  */
>>> diff --git a/gcc/ubsan.c b/gcc/ubsan.c
>>> index 49e38fa..40f5f3e 100644
>>> --- a/gcc/ubsan.c
>>> +++ b/gcc/ubsan.c
>>> @@ -1582,9 +1582,8 @@ instrument_si_overflow (gimple_stmt_iterator gsi)
>>>
>>>    /* If this is not a signed operation, don't instrument anything here.
>>>       Also punt on bit-fields.  */
>>> -  if (!INTEGRAL_TYPE_P (lhsinner)
>>> -      || TYPE_OVERFLOW_WRAPS (lhsinner)
>>> -      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
>>> +  if (!full_integral_type_p (lhsinner)
>>> +      || TYPE_OVERFLOW_WRAPS (lhsinner))
>>>      return;
>>>
>>>    switch (code)
diff mbox

Patch
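
To make the new predicate easy to review in isolation: outside tree.h it
is equivalent to the following sketch.  full_integral_type_p_spelled_out
is a hypothetical name used only for illustration; the inline function in
the tree.h hunk below reads the mode field directly rather than going
through TYPE_MODE.

    /* Illustrative sketch only: true iff T is an integral type that
       occupies every bit of its underlying scalar mode.  */
    static bool
    full_integral_type_p_spelled_out (const_tree t)
    {
      return (INTEGRAL_TYPE_P (t)
              && (TYPE_PRECISION (t)
                  == GET_MODE_PRECISION (TYPE_MODE (t))));
    }

On a typical target where int is 32 bits and maps to SImode, a plain int
satisfies the predicate, since the type and its mode agree on a precision
of 32.  The 3-bit type of a bit-field such as "unsigned int f : 3" does
not: its precision is 3 while the mode that contains it is wider.  This
is why the call sites below can replace their separate INTEGRAL_TYPE_P
and TYPE_PRECISION == GET_MODE_PRECISION checks with a single call.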

diff --git a/gcc/c-family/c-ubsan.c b/gcc/c-family/c-ubsan.c
index b1386db..20f78e7 100644
--- a/gcc/c-family/c-ubsan.c
+++ b/gcc/c-family/c-ubsan.c
@@ -131,8 +131,8 @@  ubsan_instrument_shift (location_t loc, enum tree_code code,
 
   /* If this is not a signed operation, don't perform overflow checks.
      Also punt on bit-fields.  */
-  if (TYPE_OVERFLOW_WRAPS (type0)
-      || GET_MODE_BITSIZE (TYPE_MODE (type0)) != TYPE_PRECISION (type0)
+  if (!full_integral_type_p (type0)
+      || TYPE_OVERFLOW_WRAPS (type0)
       || !sanitize_flags_p (SANITIZE_SHIFT_BASE))
     ;
 
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 0a5b168..1985a14 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -6672,8 +6672,7 @@  fold_single_bit_test_into_sign_test (location_t loc,
       if (arg00 != NULL_TREE
 	  /* This is only a win if casting to a signed type is cheap,
 	     i.e. when arg00's type is not a partial mode.  */
-	  && TYPE_PRECISION (TREE_TYPE (arg00))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (arg00))))
+	  && full_integral_type_p (TREE_TYPE (arg00)))
 	{
 	  tree stype = signed_type_for (TREE_TYPE (arg00));
 	  return fold_build2_loc (loc, code == EQ_EXPR ? GE_EXPR : LT_EXPR,
diff --git a/gcc/match.pd b/gcc/match.pd
index 0e36f46..9ad9930 100644
--- a/gcc/match.pd
+++ b/gcc/match.pd
@@ -992,7 +992,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	   || GET_MODE_CLASS (TYPE_MODE (type)) != MODE_INT
 	   /* Or if the precision of TO is not the same as the precision
 	      of its mode.  */
-	   || TYPE_PRECISION (type) != GET_MODE_PRECISION (TYPE_MODE (type))))
+	   || !full_integral_type_p (type)))
    (convert (bitop @0 (convert @1))))))
 
 (for bitop (bit_and bit_ior)
@@ -1920,8 +1920,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
        if (shift == LSHIFT_EXPR)
 	 zerobits = ((HOST_WIDE_INT_1U << shiftc) - 1);
        else if (shift == RSHIFT_EXPR
-		&& (TYPE_PRECISION (shift_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (shift_type))))
+		&& full_integral_type_p (shift_type))
 	 {
 	   prec = TYPE_PRECISION (TREE_TYPE (@3));
 	   tree arg00 = @0;
@@ -1931,8 +1930,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 	       && TYPE_UNSIGNED (TREE_TYPE (@0)))
 	     {
 	       tree inner_type = TREE_TYPE (@0);
-	       if ((TYPE_PRECISION (inner_type)
-		    == GET_MODE_PRECISION (TYPE_MODE (inner_type)))
+	       if (full_integral_type_p (inner_type)
 		   && TYPE_PRECISION (inner_type) < prec)
 		 {
 		   prec = TYPE_PRECISION (inner_type);
@@ -3225,9 +3223,7 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
      ncmp (ge lt)
  (simplify
   (cmp (bit_and (convert?@2 @0) integer_pow2p@1) integer_zerop)
-  (if (INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
+  (if (full_integral_type_p (TREE_TYPE (@0))
        && element_precision (@2) >= element_precision (@0)
        && wi::only_sign_bit_p (@1, element_precision (@0)))
    (with { tree stype = signed_type_for (TREE_TYPE (@0)); }
@@ -4021,19 +4017,13 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (plus minus)
   (simplify
     (convert (op:s (convert@2 @0) (convert?@3 @1)))
-    (if (INTEGRAL_TYPE_P (type)
-	 /* We check for type compatibility between @0 and @1 below,
-	    so there's no need to check that @1/@3 are integral types.  */
-	 && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-	 && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+    (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
 	 /* The precision of the type of each operand must match the
 	    precision of the mode of each operand, similarly for the
 	    result.  */
-	 && (TYPE_PRECISION (TREE_TYPE (@0))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-	 && (TYPE_PRECISION (TREE_TYPE (@1))
-	     == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-	 && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+	 && full_integral_type_p (TREE_TYPE (@0))
+	 && full_integral_type_p (TREE_TYPE (@1))
+	 && full_integral_type_p (type)
 	 /* The inner conversion must be a widening conversion.  */
 	 && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
 	 && types_match (@0, type)
@@ -4055,19 +4045,13 @@  DEFINE_INT_AND_FLOAT_ROUND_FN (RINT)
 (for op (minus plus)
  (simplify
   (bit_and (op:s (convert@2 @0) (convert@3 @1)) INTEGER_CST@4)
-  (if (INTEGRAL_TYPE_P (type)
-       /* We check for type compatibility between @0 and @1 below,
-	  so there's no need to check that @1/@3 are integral types.  */
-       && INTEGRAL_TYPE_P (TREE_TYPE (@0))
-       && INTEGRAL_TYPE_P (TREE_TYPE (@2))
+  (if (INTEGRAL_TYPE_P (TREE_TYPE (@2))
        /* The precision of the type of each operand must match the
 	  precision of the mode of each operand, similarly for the
 	  result.  */
-       && (TYPE_PRECISION (TREE_TYPE (@0))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@0))))
-       && (TYPE_PRECISION (TREE_TYPE (@1))
-	   == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (@1))))
-       && TYPE_PRECISION (type) == GET_MODE_PRECISION (TYPE_MODE (type))
+       && full_integral_type_p (TREE_TYPE (@0))
+       && full_integral_type_p (TREE_TYPE (@1))
+       && full_integral_type_p (type)
        /* The inner conversion must be a widening conversion.  */
        && TYPE_PRECISION (TREE_TYPE (@2)) > TYPE_PRECISION (TREE_TYPE (@0))
        && types_match (@0, @1)
diff --git a/gcc/tree-ssa-forwprop.c b/gcc/tree-ssa-forwprop.c
index 5719b99..20d5c86 100644
--- a/gcc/tree-ssa-forwprop.c
+++ b/gcc/tree-ssa-forwprop.c
@@ -1528,8 +1528,7 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 
   /* Only create rotates in complete modes.  Other cases are not
      expanded properly.  */
-  if (!INTEGRAL_TYPE_P (rtype)
-      || TYPE_PRECISION (rtype) != GET_MODE_PRECISION (TYPE_MODE (rtype)))
+  if (!full_integral_type_p (rtype))
     return false;
 
   for (i = 0; i < 2; i++)
@@ -1606,11 +1605,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 	  defcodefor_name (def_arg2[i], &cdef_code[i],
 			   &cdef_arg1[i], &cdef_arg2[i]);
 	  if (CONVERT_EXPR_CODE_P (cdef_code[i])
-	      && INTEGRAL_TYPE_P (TREE_TYPE (cdef_arg1[i]))
+	      && full_integral_type_p (TREE_TYPE (cdef_arg1[i]))
 	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-	      && TYPE_PRECISION (TREE_TYPE (cdef_arg1[i]))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (cdef_arg1[i]))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
 	    {
 	      def_arg2_alt[i] = cdef_arg1[i];
 	      defcodefor_name (def_arg2_alt[i], &cdef_code[i],
@@ -1636,11 +1633,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 	      }
 	    defcodefor_name (cdef_arg2[i], &code, &tem, NULL);
 	    if (CONVERT_EXPR_CODE_P (code)
-		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		&& full_integral_type_p (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
 		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
 		&& (tem == def_arg2[1 - i]
 		    || tem == def_arg2_alt[1 - i]))
 	      {
@@ -1664,11 +1659,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 
 	    defcodefor_name (cdef_arg1[i], &code, &tem, NULL);
 	    if (CONVERT_EXPR_CODE_P (code)
-		&& INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		&& full_integral_type_p (TREE_TYPE (tem))
 		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 > floor_log2 (TYPE_PRECISION (rtype))
-		&& TYPE_PRECISION (TREE_TYPE (tem))
-		 == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem))))
+		 > floor_log2 (TYPE_PRECISION (rtype)))
 	      defcodefor_name (tem, &code, &tem, NULL);
 
 	    if (code == NEGATE_EXPR)
@@ -1680,11 +1673,9 @@  simplify_rotate (gimple_stmt_iterator *gsi)
 		  }
 		defcodefor_name (tem, &code, &tem, NULL);
 		if (CONVERT_EXPR_CODE_P (code)
-		    && INTEGRAL_TYPE_P (TREE_TYPE (tem))
+		    && full_integral_type_p (TREE_TYPE (tem))
 		    && TYPE_PRECISION (TREE_TYPE (tem))
 		       > floor_log2 (TYPE_PRECISION (rtype))
-		    && TYPE_PRECISION (TREE_TYPE (tem))
-		       == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (tem)))
 		    && (tem == def_arg2[1 - i]
 			|| tem == def_arg2_alt[1 - i]))
 		  {
diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
index 17d1083..a96f784 100644
--- a/gcc/tree-vect-patterns.c
+++ b/gcc/tree-vect-patterns.c
@@ -2067,8 +2067,7 @@  vect_recog_vector_vector_shift_pattern (vec<gimple *> *stmts,
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != SSA_NAME
       || TYPE_MODE (TREE_TYPE (oprnd0)) == TYPE_MODE (TREE_TYPE (oprnd1))
-      || TYPE_PRECISION (TREE_TYPE (oprnd1))
-	 != GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (oprnd1)))
+      || !full_integral_type_p (TREE_TYPE (oprnd1))
       || TYPE_PRECISION (TREE_TYPE (lhs))
 	 != TYPE_PRECISION (TREE_TYPE (oprnd0)))
     return NULL;
@@ -2469,8 +2468,7 @@  vect_recog_mult_pattern (vec<gimple *> *stmts,
 
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || !INTEGRAL_TYPE_P (itype)
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -2584,8 +2582,7 @@  vect_recog_divmod_pattern (vec<gimple *> *stmts,
   itype = TREE_TYPE (oprnd0);
   if (TREE_CODE (oprnd0) != SSA_NAME
       || TREE_CODE (oprnd1) != INTEGER_CST
-      || TREE_CODE (itype) != INTEGER_TYPE
-      || TYPE_PRECISION (itype) != GET_MODE_PRECISION (TYPE_MODE (itype)))
+      || !full_integral_type_p (itype))
     return NULL;
 
   vectype = get_vectype_for_scalar_type (itype);
@@ -3385,9 +3382,8 @@  adjust_bool_pattern (tree var, tree out_type,
     do_compare:
       gcc_assert (TREE_CODE_CLASS (rhs_code) == tcc_comparison);
       if (TREE_CODE (TREE_TYPE (rhs1)) != INTEGER_TYPE
-	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1))
-	  || (TYPE_PRECISION (TREE_TYPE (rhs1))
-	      != GET_MODE_BITSIZE (TYPE_MODE (TREE_TYPE (rhs1)))))
+	  || !full_integral_type_p (TREE_TYPE (rhs1))
+	  || !TYPE_UNSIGNED (TREE_TYPE (rhs1)))
 	{
 	  machine_mode mode = TYPE_MODE (TREE_TYPE (rhs1));
 	  itype
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 657a8d1..cfa9a97 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -5245,7 +5245,7 @@  register_edge_assert_for_2 (tree name, edge e,
 	      && tree_fits_uhwi_p (cst2)
 	      && INTEGRAL_TYPE_P (TREE_TYPE (name2))
 	      && IN_RANGE (tree_to_uhwi (cst2), 1, prec - 1)
-	      && prec == GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (val))))
+	      && full_integral_type_p (TREE_TYPE (val)))
 	    {
 	      mask = wi::mask (tree_to_uhwi (cst2), false, prec);
 	      val2 = fold_binary (LSHIFT_EXPR, TREE_TYPE (val), val, cst2);
diff --git a/gcc/tree.h b/gcc/tree.h
index 46debc1..237f234 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -5394,4 +5394,25 @@  struct builtin_structptr_type
   const char *str;
 };
 extern const builtin_structptr_type builtin_structptr_types[6];
+
+/* Return true if the mode underlying scalar type T has the same number
+   of bits as T does.  Examples of when this is false include bitfields
+   that are narrower than the mode that contains them.  */
+
+inline bool
+scalar_type_is_full_p (const_tree t)
+{
+  return (GET_MODE_PRECISION (TYPE_CHECK (t)->type_common.mode)
+	  == TYPE_PRECISION (t));
+}
+
+/* Return true if T is an integral type that has the same number of bits
+   as its underlying mode.  */
+
+inline bool
+full_integral_type_p (const_tree t)
+{
+  return INTEGRAL_TYPE_P (t) && scalar_type_is_full_p (t);
+}
+
 #endif  /* GCC_TREE_H  */
diff --git a/gcc/ubsan.c b/gcc/ubsan.c
index 49e38fa..40f5f3e 100644
--- a/gcc/ubsan.c
+++ b/gcc/ubsan.c
@@ -1582,9 +1582,8 @@  instrument_si_overflow (gimple_stmt_iterator gsi)
 
   /* If this is not a signed operation, don't instrument anything here.
      Also punt on bit-fields.  */
-  if (!INTEGRAL_TYPE_P (lhsinner)
-      || TYPE_OVERFLOW_WRAPS (lhsinner)
-      || GET_MODE_BITSIZE (TYPE_MODE (lhsinner)) != TYPE_PRECISION (lhsinner))
+  if (!full_integral_type_p (lhsinner)
+      || TYPE_OVERFLOW_WRAPS (lhsinner))
     return;
 
   switch (code)
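
As a usage sketch (an illustration, not part of the patch): once a caller
has checked full_integral_type_p, TYPE_PRECISION and the mode precision
are interchangeable, so mode-wide bit manipulation such as the wi::mask
call in the tree-vrp.c hunk above is safe.  full_type_mask below is a
hypothetical helper name:

    /* Hypothetical example: build a mask covering every value bit of T's
       mode.  Valid because full_integral_type_p guarantees that T's
       precision equals the precision of its mode.  */
    static wide_int
    full_type_mask (const_tree t)
    {
      gcc_assert (full_integral_type_p (t));
      unsigned int prec = TYPE_PRECISION (t);
      return wi::mask (prec, false, prec);
    }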