Message ID: alpine.DEB.2.02.1306092132530.24602@stedding.saclay.inria.fr

State: New


On 06/09/13 13:43, Marc Glisse wrote:
> Hello,
>
> just adapting yet another function so it also works with vectors.
>
> It seemed convenient to add a new macro. The name sucks (it doesn't
> match the semantics of INTEGRAL_TYPE_P), but I didn't want to name it
> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P and didn't have any good idea for a
> short name.

I'd just use a long name. I can easily see someone not being aware that
INTEGRAL_CST_P returns true for vectors and as a result doing something
inappropriate.

INTEGER_CST_OR_VECTOR_INTEGER_TYPE_P?

Jeff

On Tue, 11 Jun 2013, Jeff Law wrote:
> On 06/09/13 13:43, Marc Glisse wrote:
>> Hello,
>>
>> just adapting yet another function so it also works with vectors.
>>
>> It seemed convenient to add a new macro. The name sucks (it doesn't
>> match the semantics of INTEGRAL_TYPE_P), but I didn't want to name it
>> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P and didn't have any good idea for a
>> short name.
>
> I'd just use a long name. I can easily see someone not being aware that
> INTEGRAL_CST_P returns true for vectors and as a result doing something
> inappropriate.
>
> INTEGER_CST_OR_VECTOR_INTEGER_TYPE_P?

Having TYPE in there seems confusing, and
INTEGER_SCALAR_OR_VECTOR_CONSTANT_P is at least one character shorter ;-)
Oh, you probably meant INTEGER_CST_OR_VECTOR_INTEGER_CST_P?

Compacting could give INT_OR_VECINT_CST_P (or INTVEC instead of VECINT,
I don't know which order sounds best).

I don't really mind the name, so if you want
INTEGER_CST_OR_VECTOR_INTEGER_CST_P that's ok with me.

Thanks for the comments on the 2 patches,

On Tue, Jun 11, 2013 at 9:44 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
> On Tue, 11 Jun 2013, Jeff Law wrote:
>> On 06/09/13 13:43, Marc Glisse wrote:
>>> Hello,
>>>
>>> just adapting yet another function so it also works with vectors.
>>>
>>> It seemed convenient to add a new macro. The name sucks (it doesn't
>>> match the semantics of INTEGRAL_TYPE_P), but I didn't want to name it
>>> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P and didn't have any good idea for a
>>> short name.
>>
>> I'd just use a long name. I can easily see someone not being aware that
>> INTEGRAL_CST_P returns true for vectors and as a result doing something
>> inappropriate.
>>
>> INTEGER_CST_OR_VECTOR_INTEGER_TYPE_P?
>
> Having TYPE in there seems confusing, and
> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P is at least one character shorter ;-)
> Oh, you probably meant INTEGER_CST_OR_VECTOR_INTEGER_CST_P?
>
> Compacting could give INT_OR_VECINT_CST_P (or INTVEC instead of VECINT,
> I don't know which order sounds best).
>
> I don't really mind the name, so if you want
> INTEGER_CST_OR_VECTOR_INTEGER_CST_P that's ok with me.

How about just adding VECTOR_INTEGER_CST_P and using

  TREE_CODE (x) == INTEGER_CST || VECTOR_INTEGER_CST_P (x)

in the code?

I suppose it's explicitly not allowing complex integer constants?

Richard.

> Thanks for the comments on the 2 patches,
>
> --
> Marc Glisse

On Wed, 12 Jun 2013, Richard Biener wrote:
> On Tue, Jun 11, 2013 at 9:44 PM, Marc Glisse <marc.glisse@inria.fr> wrote:
>> On Tue, 11 Jun 2013, Jeff Law wrote:
>>> On 06/09/13 13:43, Marc Glisse wrote:
>>>> Hello,
>>>>
>>>> just adapting yet another function so it also works with vectors.
>>>>
>>>> It seemed convenient to add a new macro. The name sucks (it doesn't
>>>> match the semantics of INTEGRAL_TYPE_P), but I didn't want to name it
>>>> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P and didn't have any good idea
>>>> for a short name.
>>>
>>> I'd just use a long name. I can easily see someone not being aware that
>>> INTEGRAL_CST_P returns true for vectors and as a result doing something
>>> inappropriate.
>>>
>>> INTEGER_CST_OR_VECTOR_INTEGER_TYPE_P?
>>
>> Having TYPE in there seems confusing, and
>> INTEGER_SCALAR_OR_VECTOR_CONSTANT_P is at least one character shorter ;-)
>> Oh, you probably meant INTEGER_CST_OR_VECTOR_INTEGER_CST_P?
>>
>> Compacting could give INT_OR_VECINT_CST_P (or INTVEC instead of VECINT,
>> I don't know which order sounds best).
>>
>> I don't really mind the name, so if you want
>> INTEGER_CST_OR_VECTOR_INTEGER_CST_P that's ok with me.
>
> How about just adding VECTOR_INTEGER_CST_P and using
>   TREE_CODE (x) == INTEGER_CST || VECTOR_INTEGER_CST_P (x)
> in the code?

That's a bit heavy, but I can live with that.

> I suppose it's explicitly not allowing complex integer constants?

Hmm... Thanks, I keep forgetting complex :-( Testing for CONSTANT_CLASS_P
seems sufficient here. Some transformations also seem valid for complex,
and the others are already restricted by the fact that they involve
operators like AND or XOR, or because we exit early for FLOAT_TYPE_P and
FIXED_POINT_TYPE_P. I'll test that (no new macro for now then).

On Wed, 12 Jun 2013, Marc Glisse wrote:
> On Wed, 12 Jun 2013, Richard Biener wrote:
>> I suppose it's explicitly not allowing complex integer constants?
>
> Hmm... Thanks, I keep forgetting complex :-(

Do we want A+~A -> -1-i for integer complex types? Is using BIT_NOT_EXPR
on them even legal? Currently we restrict the transform to
INTEGRAL_TYPE_P (TREE_TYPE (rhs1)), but it looks like any type on which
BIT_NOT_EXPR is legal should be ok (with an all_ones constant, not the
current minus_one constant).

Index: tree.h
===================================================================
--- tree.h	(revision 199867)
+++ tree.h	(working copy)
@@ -1021,20 +1021,26 @@ extern void omp_clause_range_check_faile
 #define COMPLEX_FLOAT_TYPE_P(TYPE) \
   (TREE_CODE (TYPE) == COMPLEX_TYPE \
    && TREE_CODE (TREE_TYPE (TYPE)) == REAL_TYPE)
 
 /* Nonzero if TYPE represents a vector integer type.  */
 
 #define VECTOR_INTEGER_TYPE_P(TYPE) \
   (VECTOR_TYPE_P (TYPE) \
    && TREE_CODE (TREE_TYPE (TYPE)) == INTEGER_TYPE)
 
+/* Nonzero if NODE represents a scalar or vector integer constant.  */
+
+#define INTEGRAL_CST_P(NODE) \
+  (TREE_CODE (NODE) == INTEGER_CST \
+   || (TREE_CODE (NODE) == VECTOR_CST \
+       && VECTOR_INTEGER_TYPE_P (TREE_TYPE (NODE))))
 
 /* Nonzero if TYPE represents a vector floating-point type.  */
 
 #define VECTOR_FLOAT_TYPE_P(TYPE) \
   (VECTOR_TYPE_P (TYPE) \
    && TREE_CODE (TREE_TYPE (TYPE)) == REAL_TYPE)
 
 /* Nonzero if TYPE represents a floating-point type, including complex
    and vector floating-point types.  The vector and complex check does
    not use the previous two macros to enable early folding.  */
Index: tree-ssa-forwprop.c
===================================================================
--- tree-ssa-forwprop.c	(revision 199867)
+++ tree-ssa-forwprop.c	(working copy)
@@ -1971,22 +1971,22 @@ simplify_bitwise_binary (gimple_stmt_ite
 	  gimple_assign_set_rhs2 (stmt, b);
 	  gimple_assign_set_rhs_code (stmt, def1_code);
 	  update_stmt (stmt);
 	  return true;
 	}
     }
 
   /* (a | CST1) & CST2 -> (a & CST2) | (CST1 & CST2).  */
   if (code == BIT_AND_EXPR
       && def1_code == BIT_IOR_EXPR
-      && TREE_CODE (arg2) == INTEGER_CST
-      && TREE_CODE (def1_arg2) == INTEGER_CST)
+      && INTEGRAL_CST_P (arg2)
+      && INTEGRAL_CST_P (def1_arg2))
     {
       tree cst = fold_build2 (BIT_AND_EXPR, TREE_TYPE (arg2),
			      arg2, def1_arg2);
       tree tem;
       gimple newop;
       if (integer_zerop (cst))
	{
	  gimple_assign_set_rhs1 (stmt, def1_arg1);
	  update_stmt (stmt);
	  return true;
@@ -2002,34 +2002,33 @@ simplify_bitwise_binary (gimple_stmt_ite
       gimple_assign_set_rhs_code (stmt, BIT_IOR_EXPR);
       update_stmt (stmt);
       return true;
     }
 
   /* Combine successive equal operations with constants.  */
   if ((code == BIT_AND_EXPR
        || code == BIT_IOR_EXPR
        || code == BIT_XOR_EXPR)
       && def1_code == code
-      && TREE_CODE (arg2) == INTEGER_CST
-      && TREE_CODE (def1_arg2) == INTEGER_CST)
+      && INTEGRAL_CST_P (arg2)
+      && INTEGRAL_CST_P (def1_arg2))
     {
       tree cst = fold_build2 (code, TREE_TYPE (arg2),
			      arg2, def1_arg2);
       gimple_assign_set_rhs1 (stmt, def1_arg1);
       gimple_assign_set_rhs2 (stmt, cst);
       update_stmt (stmt);
       return true;
     }
 
   /* Canonicalize X ^ ~0 to ~X.  */
   if (code == BIT_XOR_EXPR
-      && TREE_CODE (arg2) == INTEGER_CST
       && integer_all_onesp (arg2))
     {
       gimple_assign_set_rhs_with_ops (gsi, BIT_NOT_EXPR, arg1, NULL_TREE);
       gcc_assert (gsi_stmt (*gsi) == stmt);
       update_stmt (stmt);
       return true;
     }
 
   /* Try simple folding for X op !X, and X op X.  */
   res = simplify_bitwise_binary_1 (code, TREE_TYPE (arg1), arg1, arg2);
@@ -2472,66 +2471,68 @@ associate_plusminus (gimple_stmt_iterato
	      && code != def_code)
	    {
	      /* (A +- B) -+ B -> A.  */
	      code = TREE_CODE (def_rhs1);
	      rhs1 = def_rhs1;
	      rhs2 = NULL_TREE;
	      gimple_assign_set_rhs_with_ops (gsi, code, rhs1, NULL_TREE);
	      gcc_assert (gsi_stmt (*gsi) == stmt);
	      gimple_set_modified (stmt, true);
	    }
-	  else if (TREE_CODE (rhs2) == INTEGER_CST
-		   && TREE_CODE (def_rhs1) == INTEGER_CST)
+	  else if (INTEGRAL_CST_P (rhs2)
+		   && INTEGRAL_CST_P (def_rhs1))
	    {
	      /* (CST +- A) +- CST -> CST +- A.  */
	      tree cst = fold_binary (code, TREE_TYPE (rhs1),
				      def_rhs1, rhs2);
	      if (cst && !TREE_OVERFLOW (cst))
		{
		  code = def_code;
		  gimple_assign_set_rhs_code (stmt, code);
		  rhs1 = cst;
		  gimple_assign_set_rhs1 (stmt, rhs1);
		  rhs2 = def_rhs2;
		  gimple_assign_set_rhs2 (stmt, rhs2);
		  gimple_set_modified (stmt, true);
		}
	    }
-	  else if (TREE_CODE (rhs2) == INTEGER_CST
-		   && TREE_CODE (def_rhs2) == INTEGER_CST
+	  else if (INTEGRAL_CST_P (rhs2)
+		   && INTEGRAL_CST_P (def_rhs2)
		   && def_code == PLUS_EXPR)
	    {
	      /* (A + CST) +- CST -> A + CST.  */
	      tree cst = fold_binary (code, TREE_TYPE (rhs1),
				      def_rhs2, rhs2);
	      if (cst && !TREE_OVERFLOW (cst))
		{
		  code = PLUS_EXPR;
		  gimple_assign_set_rhs_code (stmt, code);
		  rhs1 = def_rhs1;
		  gimple_assign_set_rhs1 (stmt, rhs1);
		  rhs2 = cst;
		  gimple_assign_set_rhs2 (stmt, rhs2);
		  gimple_set_modified (stmt, true);
		}
	    }
	}
      else if (def_code == BIT_NOT_EXPR
-	       && INTEGRAL_TYPE_P (TREE_TYPE (rhs1)))
+	       && (INTEGRAL_TYPE_P (TREE_TYPE (rhs1))
+		   || VECTOR_INTEGER_TYPE_P (TREE_TYPE (rhs1))))
	{
	  tree def_rhs1 = gimple_assign_rhs1 (def_stmt);
	  if (code == PLUS_EXPR
	      && operand_equal_p (def_rhs1, rhs2, 0))
	    {
	      /* ~A + A -> -1.  */
-	      code = INTEGER_CST;
-	      rhs1 = build_int_cst_type (TREE_TYPE (rhs2), -1);
+	      code = VECTOR_TYPE_P (TREE_TYPE (rhs2)) ? VECTOR_CST
+						      : INTEGER_CST;
+	      rhs1 = build_minus_one_cst (TREE_TYPE (rhs2));
	      rhs2 = NULL_TREE;
	      gimple_assign_set_rhs_with_ops (gsi, code, rhs1, NULL_TREE);
	      gcc_assert (gsi_stmt (*gsi) == stmt);
	      gimple_set_modified (stmt, true);
	    }
	  else if (code == PLUS_EXPR
		   && integer_onep (rhs1))
	    {
	      /* ~A + 1 -> -A.  */
	      code = NEGATE_EXPR;
@@ -2573,65 +2574,67 @@ associate_plusminus (gimple_stmt_iterato
	    {
	      /* A +- (B +- A) -> +- B.  */
	      code = ((code == PLUS_EXPR)
		      ? TREE_CODE (def_rhs1) : NEGATE_EXPR);
	      rhs1 = def_rhs1;
	      rhs2 = NULL_TREE;
	      gimple_assign_set_rhs_with_ops (gsi, code, rhs1, NULL_TREE);
	      gcc_assert (gsi_stmt (*gsi) == stmt);
	      gimple_set_modified (stmt, true);
	    }
-	  else if (TREE_CODE (rhs1) == INTEGER_CST
-		   && TREE_CODE (def_rhs1) == INTEGER_CST)
+	  else if (INTEGRAL_CST_P (rhs1)
+		   && INTEGRAL_CST_P (def_rhs1))
	    {
	      /* CST +- (CST +- A) -> CST +- A.  */
	      tree cst = fold_binary (code, TREE_TYPE (rhs2),
				      rhs1, def_rhs1);
	      if (cst && !TREE_OVERFLOW (cst))
		{
		  code = (code == def_code ? PLUS_EXPR : MINUS_EXPR);
		  gimple_assign_set_rhs_code (stmt, code);
		  rhs1 = cst;
		  gimple_assign_set_rhs1 (stmt, rhs1);
		  rhs2 = def_rhs2;
		  gimple_assign_set_rhs2 (stmt, rhs2);
		  gimple_set_modified (stmt, true);
		}
	    }
-	  else if (TREE_CODE (rhs1) == INTEGER_CST
-		   && TREE_CODE (def_rhs2) == INTEGER_CST)
+	  else if (INTEGRAL_CST_P (rhs1)
+		   && INTEGRAL_CST_P (def_rhs2))
	    {
	      /* CST +- (A +- CST) -> CST +- A.  */
	      tree cst = fold_binary (def_code == code
				      ? PLUS_EXPR : MINUS_EXPR,
				      TREE_TYPE (rhs2), rhs1, def_rhs2);
	      if (cst && !TREE_OVERFLOW (cst))
		{
		  rhs1 = cst;
		  gimple_assign_set_rhs1 (stmt, rhs1);
		  rhs2 = def_rhs1;
		  gimple_assign_set_rhs2 (stmt, rhs2);
		  gimple_set_modified (stmt, true);
		}
	    }
	}
      else if (def_code == BIT_NOT_EXPR
-	       && INTEGRAL_TYPE_P (TREE_TYPE (rhs2)))
+	       && (INTEGRAL_TYPE_P (TREE_TYPE (rhs2))
+		   || VECTOR_INTEGER_TYPE_P (TREE_TYPE (rhs2))))
	{
	  tree def_rhs1 = gimple_assign_rhs1 (def_stmt);
	  if (code == PLUS_EXPR
	      && operand_equal_p (def_rhs1, rhs1, 0))
	    {
	      /* A + ~A -> -1.  */
-	      code = INTEGER_CST;
-	      rhs1 = build_int_cst_type (TREE_TYPE (rhs1), -1);
+	      code = VECTOR_TYPE_P (TREE_TYPE (rhs1)) ? VECTOR_CST
+						      : INTEGER_CST;
+	      rhs1 = build_minus_one_cst (TREE_TYPE (rhs1));
	      rhs2 = NULL_TREE;
	      gimple_assign_set_rhs_with_ops (gsi, code, rhs1, NULL_TREE);
	      gcc_assert (gsi_stmt (*gsi) == stmt);
	      gimple_set_modified (stmt, true);
	    }
	}
    }
 out: