
[18/46] Make SLP_TREE_SCALAR_STMTS a vec<stmt_vec_info>

Message ID 87r2jtoswr.fsf@arm.com
State New
Series Remove vinfo_for_stmt etc.

Commit Message

Richard Sandiford July 24, 2018, 10 a.m. UTC
This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
a vec<stmt_vec_info>.  It's longer than the previous conversions
but mostly mechanical.
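
For reference, a minimal before/after sketch of the access pattern this
conversion changes, condensed from the vect_mark_slp_stmts hunk below:
callers used to iterate over gimple stmts and look up the stmt_vec_info
with vinfo_for_stmt; they now iterate over stmt_vec_infos directly and
reach the underlying statement through stmt_info->stmt.

  /* Before: SLP_TREE_SCALAR_STMTS (node) is a vec<gimple *>, so the
     stmt_vec_info has to be looked up explicitly.  */
  gimple *stmt;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
    STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;

  /* After: the vector holds stmt_vec_infos; the gimple statement is
     still available as stmt_info->stmt where needed.  */
  stmt_vec_info stmt_info;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
    STMT_SLP_TYPE (stmt_info) = mark;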


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
	to a vec<stmt_vec_info>.
	* tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
	(vect_create_new_slp_node): Take a vec<stmt_vec_info> instead of a
	vec<gimple *>.
	(_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
	to a vec<stmt_vec_info>.
	(bst_traits::value_type, bst_traits::compare_type): Likewise.
	(bst_traits::hash): Update accordingly.
	(vect_get_and_check_slp_defs): Change the stmts parameter from
	a vec<gimple *> to a vec<stmt_vec_info>.
	(vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
	(vect_build_slp_tree): Likewise.
	(vect_build_slp_tree_2): Likewise.  Update uses of
	SLP_TREE_SCALAR_STMTS.
	(vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
	(vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
	(vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
	(vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
	(vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
	(vect_slp_analyze_node_operations, vect_slp_analyze_operations)
	(vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
	(vect_get_constant_vectors, vect_get_slp_defs)
	(vect_transform_slp_perm_load, vect_schedule_slp_instance)
	(vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
	(vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
	instead of gimple stmts.
	* tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
	the stores parameter from a vec<gimple *> to a vec<stmt_vec_info>.
	(vect_slp_analyze_instance_dependence): Update uses of
	SLP_TREE_SCALAR_STMTS.
	(vect_slp_analyze_and_verify_node_alignment): Likewise.
	(vect_slp_analyze_and_verify_instance_alignment): Likewise.
	* tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
	(get_initial_defs_for_reduction): Likewise.
	(vect_create_epilog_for_reduction): Likewise.
	(vectorize_fold_left_reduction): Likewise.
	* tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
	(vect_model_simple_cost, vectorizable_shift, vectorizable_load)
	(can_vectorize_live_stmts): Likewise.

Comments

Richard Biener July 25, 2018, 9:26 a.m. UTC | #1
On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
> a vec<stmt_vec_info>.  It's longer than the previous conversions
> but mostly mechanical.

OK.  I don't remember exactly but vect_external_def SLP nodes have an
empty stmts vector then?  I realize we only have those for defs that
are in the vectorized region.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
>         to a vec<stmt_vec_info>.
>         * tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
>         (vect_create_new_slp_node): Take a vec<stmt_vec_info> instead of a
>         vec<gimple *>.
>         (_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
>         to a vec<stmt_vec_info>.
>         (bst_traits::value_type, bst_traits::compare_type): Likewise.
>         (bst_traits::hash): Update accordingly.
>         (vect_get_and_check_slp_defs): Change the stmts parameter from
>         a vec<gimple *> to a vec<stmt_vec_info>.
>         (vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
>         (vect_build_slp_tree): Likewise.
>         (vect_build_slp_tree_2): Likewise.  Update uses of
>         SLP_TREE_SCALAR_STMTS.
>         (vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
>         (vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
>         (vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
>         (vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
>         (vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
>         (vect_slp_analyze_node_operations, vect_slp_analyze_operations)
>         (vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
>         (vect_get_constant_vectors, vect_get_slp_defs)
>         (vect_transform_slp_perm_load, vect_schedule_slp_instance)
>         (vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
>         (vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
>         instead of gimple stmts.
>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
>         the stores parameter from a vec<gimple *> to a vec<stmt_vec_info>.
>         (vect_slp_analyze_instance_dependence): Update uses of
>         SLP_TREE_SCALAR_STMTS.
>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>         (vect_slp_analyze_and_verify_instance_alignment): Likewise.
>         * tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
>         (get_initial_defs_for_reduction): Likewise.
>         (vect_create_epilog_for_reduction): Likewise.
>         (vectorize_fold_left_reduction): Likewise.
>         * tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
>         (vect_model_simple_cost, vectorizable_shift, vectorizable_load)
>         (can_vectorize_live_stmts): Likewise.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:22:57.277070390 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:00.401042649 +0100
> @@ -138,7 +138,7 @@ struct _slp_tree {
>    /* Nodes that contain def-stmts of this node statements operands.  */
>    vec<slp_tree> children;
>    /* A group of scalar stmts to be vectorized together.  */
> -  vec<gimple *> stmts;
> +  vec<stmt_vec_info> stmts;
>    /* Load permutation relative to the stores, NULL if there is no
>       permutation.  */
>    vec<unsigned> load_permutation;
> Index: gcc/tree-vect-slp.c
> ===================================================================
> --- gcc/tree-vect-slp.c 2018-07-24 10:22:57.277070390 +0100
> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:00.401042649 +0100
> @@ -66,11 +66,11 @@ vect_free_slp_tree (slp_tree node, bool
>       statements would be redundant.  */
>    if (!final_p)
>      {
> -      gimple *stmt;
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +      stmt_vec_info stmt_info;
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>         {
> -         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
> -         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
> +         gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
> +         STMT_VINFO_NUM_SLP_USES (stmt_info)--;
>         }
>      }
>
> @@ -99,21 +99,21 @@ vect_free_slp_instance (slp_instance ins
>  /* Create an SLP node for SCALAR_STMTS.  */
>
>  static slp_tree
> -vect_create_new_slp_node (vec<gimple *> scalar_stmts)
> +vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
>  {
>    slp_tree node;
> -  gimple *stmt = scalar_stmts[0];
> +  stmt_vec_info stmt_info = scalar_stmts[0];
>    unsigned int nops;
>
> -  if (is_gimple_call (stmt))
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      nops = gimple_call_num_args (stmt);
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        nops = gimple_num_ops (stmt) - 1;
>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>         nops++;
>      }
> -  else if (gimple_code (stmt) == GIMPLE_PHI)
> +  else if (is_a <gphi *> (stmt_info->stmt))
>      nops = 0;
>    else
>      return NULL;
> @@ -128,8 +128,8 @@ vect_create_new_slp_node (vec<gimple *>
>    SLP_TREE_DEF_TYPE (node) = vect_internal_def;
>
>    unsigned i;
> -  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
> -    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
> +  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
> +    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
>
>    return node;
>  }
> @@ -141,7 +141,7 @@ vect_create_new_slp_node (vec<gimple *>
>  typedef struct _slp_oprnd_info
>  {
>    /* Def-stmts for the operands.  */
> -  vec<gimple *> def_stmts;
> +  vec<stmt_vec_info> def_stmts;
>    /* Information about the first statement, its vector def-type, type, the
>       operand itself in case it's constant, and an indication if it's a pattern
>       stmt.  */
> @@ -297,10 +297,10 @@ can_duplicate_and_interleave_p (unsigned
>     ok return 0.  */
>  static int
>  vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
> -                            vec<gimple *> stmts, unsigned stmt_num,
> +                            vec<stmt_vec_info> stmts, unsigned stmt_num,
>                              vec<slp_oprnd_info> *oprnds_info)
>  {
> -  gimple *stmt = stmts[stmt_num];
> +  stmt_vec_info stmt_info = stmts[stmt_num];
>    tree oprnd;
>    unsigned int i, number_of_oprnds;
>    enum vect_def_type dt = vect_uninitialized_def;
> @@ -312,12 +312,12 @@ vect_get_and_check_slp_defs (vec_info *v
>    bool first = stmt_num == 0;
>    bool second = stmt_num == 1;
>
> -  if (is_gimple_call (stmt))
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      {
>        number_of_oprnds = gimple_call_num_args (stmt);
>        first_op_idx = 3;
>      }
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        enum tree_code code = gimple_assign_rhs_code (stmt);
>        number_of_oprnds = gimple_num_ops (stmt) - 1;
> @@ -347,12 +347,13 @@ vect_get_and_check_slp_defs (vec_info *v
>           int *map = maps[*swap];
>
>           if (i < 2)
> -           oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
> +           oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
> +                                            first_op_idx), map[i]);
>           else
> -           oprnd = gimple_op (stmt, map[i]);
> +           oprnd = gimple_op (stmt_info->stmt, map[i]);
>         }
>        else
> -       oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
> +       oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
>
>        oprnd_info = (*oprnds_info)[i];
>
> @@ -518,18 +519,20 @@ vect_get_and_check_slp_defs (vec_info *v
>      {
>        /* If there are already uses of this stmt in a SLP instance then
>           we've committed to the operand order and can't swap it.  */
> -      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
> +      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
>         {
>           if (dump_enabled_p ())
>             {
>               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                "Build SLP failed: cannot swap operands of "
>                                "shared stmt ");
> -             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> +                               stmt_info->stmt, 0);
>             }
>           return -1;
>         }
>
> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>        if (first_op_cond)
>         {
>           tree cond = gimple_assign_rhs1 (stmt);
> @@ -655,8 +658,9 @@ vect_record_max_nunits (vec_info *vinfo,
>     would be permuted.  */
>
>  static bool
> -vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
> -                              tree vectype, tree_code alt_stmt_code)
> +vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
> +                              unsigned int group_size, tree vectype,
> +                              tree_code alt_stmt_code)
>  {
>    unsigned HOST_WIDE_INT count;
>    if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
> @@ -666,7 +670,8 @@ vect_two_operations_perm_ok_p (vec<gimpl
>    for (unsigned int i = 0; i < count; ++i)
>      {
>        unsigned int elt = i;
> -      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
> +      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
> +      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
>         elt += count;
>        sel.quick_push (elt);
>      }
> @@ -690,12 +695,12 @@ vect_two_operations_perm_ok_p (vec<gimpl
>
>  static bool
>  vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits, bool *matches,
>                        bool *two_operators)
>  {
>    unsigned int i;
> -  gimple *first_stmt = stmts[0], *stmt = stmts[0];
> +  stmt_vec_info first_stmt_info = stmts[0];
>    enum tree_code first_stmt_code = ERROR_MARK;
>    enum tree_code alt_stmt_code = ERROR_MARK;
>    enum tree_code rhs_code = ERROR_MARK;
> @@ -710,9 +715,10 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>    gimple *first_load = NULL, *prev_first_load = NULL;
>
>    /* For every stmt in NODE find its def stmt/s.  */
> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
> +  stmt_vec_info stmt_info;
> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>      {
> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +      gimple *stmt = stmt_info->stmt;
>        swap[i] = 0;
>        matches[i] = false;
>
> @@ -723,7 +729,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>         }
>
>        /* Fail to vectorize statements marked as unvectorizable.  */
> -      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
> +      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
>          {
>            if (dump_enabled_p ())
>              {
> @@ -755,7 +761,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>        if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
>                                            &nunits_vectype)
>           || (nunits_vectype
> -             && !vect_record_max_nunits (vinfo, stmt, group_size,
> +             && !vect_record_max_nunits (vinfo, stmt_info, group_size,
>                                           nunits_vectype, max_nunits)))
>         {
>           /* Fatal mismatch.  */
> @@ -877,7 +883,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                    && (alt_stmt_code == PLUS_EXPR
>                        || alt_stmt_code == MINUS_EXPR)
>                    && rhs_code == alt_stmt_code)
> -              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
> +             && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
>                     && (first_stmt_code == ARRAY_REF
>                         || first_stmt_code == BIT_FIELD_REF
>                         || first_stmt_code == INDIRECT_REF
> @@ -893,7 +899,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                    "original stmt ");
>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                   first_stmt, 0);
> +                                   first_stmt_info->stmt, 0);
>                 }
>               /* Mismatch.  */
>               continue;
> @@ -915,8 +921,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>
>           if (rhs_code == CALL_EXPR)
>             {
> -             gimple *first_stmt = stmts[0];
> -             if (!compatible_calls_p (as_a <gcall *> (first_stmt),
> +             if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
>                                        as_a <gcall *> (stmt)))
>                 {
>                   if (dump_enabled_p ())
> @@ -933,7 +938,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>         }
>
>        /* Grouped store or load.  */
> -      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         {
>           if (REFERENCE_CLASS_P (lhs))
>             {
> @@ -943,7 +948,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>           else
>             {
>               /* Load.  */
> -              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
> +             first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
>                if (prev_first_load)
>                  {
>                    /* Check that there are no loads from different interleaving
> @@ -1061,7 +1066,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                                              vectype, alt_stmt_code))
>         {
>           for (i = 0; i < group_size; ++i)
> -           if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
> +           if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
>               {
>                 matches[i] = false;
>                 if (dump_enabled_p ())
> @@ -1070,11 +1075,11 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>                                      "Build SLP failed: different operation "
>                                      "in stmt ");
>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                     stmts[i], 0);
> +                                     stmts[i]->stmt, 0);
>                     dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                                      "original stmt ");
>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                     first_stmt, 0);
> +                                     first_stmt_info->stmt, 0);
>                   }
>               }
>           return false;
> @@ -1090,8 +1095,8 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>     need a special value for deleted that differs from empty.  */
>  struct bst_traits
>  {
> -  typedef vec <gimple *> value_type;
> -  typedef vec <gimple *> compare_type;
> +  typedef vec <stmt_vec_info> value_type;
> +  typedef vec <stmt_vec_info> compare_type;
>    static inline hashval_t hash (value_type);
>    static inline bool equal (value_type existing, value_type candidate);
>    static inline bool is_empty (value_type x) { return !x.exists (); }
> @@ -1105,7 +1110,7 @@ bst_traits::hash (value_type x)
>  {
>    inchash::hash h;
>    for (unsigned i = 0; i < x.length (); ++i)
> -    h.add_int (gimple_uid (x[i]));
> +    h.add_int (gimple_uid (x[i]->stmt));
>    return h.end ();
>  }
>  inline bool
> @@ -1128,7 +1133,7 @@ typedef hash_map <vec <gimple *>, slp_tr
>
>  static slp_tree
>  vect_build_slp_tree_2 (vec_info *vinfo,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits,
>                        vec<slp_tree> *loads,
>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
> @@ -1136,7 +1141,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>
>  static slp_tree
>  vect_build_slp_tree (vec_info *vinfo,
> -                    vec<gimple *> stmts, unsigned int group_size,
> +                    vec<stmt_vec_info> stmts, unsigned int group_size,
>                      poly_uint64 *max_nunits, vec<slp_tree> *loads,
>                      bool *matches, unsigned *npermutes, unsigned *tree_size,
>                      unsigned max_tree_size)
> @@ -1151,7 +1156,7 @@ vect_build_slp_tree (vec_info *vinfo,
>       scalars, see PR81723.  */
>    if (! res)
>      {
> -      vec <gimple *> x;
> +      vec <stmt_vec_info> x;
>        x.create (stmts.length ());
>        x.splice (stmts);
>        bst_fail->add (x);
> @@ -1168,7 +1173,7 @@ vect_build_slp_tree (vec_info *vinfo,
>
>  static slp_tree
>  vect_build_slp_tree_2 (vec_info *vinfo,
> -                      vec<gimple *> stmts, unsigned int group_size,
> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>                        poly_uint64 *max_nunits,
>                        vec<slp_tree> *loads,
>                        bool *matches, unsigned *npermutes, unsigned *tree_size,
> @@ -1176,53 +1181,54 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>  {
>    unsigned nops, i, this_tree_size = 0;
>    poly_uint64 this_max_nunits = *max_nunits;
> -  gimple *stmt;
>    slp_tree node;
>
>    matches[0] = false;
>
> -  stmt = stmts[0];
> -  if (is_gimple_call (stmt))
> +  stmt_vec_info stmt_info = stmts[0];
> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>      nops = gimple_call_num_args (stmt);
> -  else if (is_gimple_assign (stmt))
> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>      {
>        nops = gimple_num_ops (stmt) - 1;
>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>         nops++;
>      }
> -  else if (gimple_code (stmt) == GIMPLE_PHI)
> +  else if (is_a <gphi *> (stmt_info->stmt))
>      nops = 0;
>    else
>      return NULL;
>
>    /* If the SLP node is a PHI (induction or reduction), terminate
>       the recursion.  */
> -  if (gimple_code (stmt) == GIMPLE_PHI)
> +  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
>      {
>        tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
>        tree vectype = get_vectype_for_scalar_type (scalar_type);
> -      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
> +      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
>                                    max_nunits))
>         return NULL;
>
> -      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
> +      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
>        /* Induction from different IVs is not supported.  */
>        if (def_type == vect_induction_def)
>         {
> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
> -           if (stmt != stmts[0])
> +         stmt_vec_info other_info;
> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
> +           if (stmt_info != other_info)
>               return NULL;
>         }
>        else
>         {
>           /* Else def types have to match.  */
> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
> +         stmt_vec_info other_info;
> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>             {
>               /* But for reduction chains only check on the first stmt.  */
> -             if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
> -                 && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
> +             if (REDUC_GROUP_FIRST_ELEMENT (other_info)
> +                 && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
>                 continue;
> -             if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
> +             if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
>                 return NULL;
>             }
>         }
> @@ -1238,8 +1244,8 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>      return NULL;
>
>    /* If the SLP node is a load, terminate the recursion.  */
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
> -      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> +      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
>      {
>        *max_nunits = this_max_nunits;
>        node = vect_create_new_slp_node (stmts);
> @@ -1250,7 +1256,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>    /* Get at the operands, verifying they are compatible.  */
>    vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
>    slp_oprnd_info oprnd_info;
> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>      {
>        int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
>                                              stmts, i, &oprnds_info);
> @@ -1269,7 +1275,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>    auto_vec<slp_tree, 4> children;
>    auto_vec<slp_tree> this_loads;
>
> -  stmt = stmts[0];
> +  stmt_info = stmts[0];
>
>    if (tree_size)
>      max_tree_size -= *tree_size;
> @@ -1307,8 +1313,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>                  do extra work to cancel the pattern so the uses see the
>                  scalar version.  */
> -             && !is_pattern_stmt_p
> -                   (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
> +             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>             {
>               slp_tree grandchild;
>
> @@ -1352,7 +1357,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>              do extra work to cancel the pattern so the uses see the
>              scalar version.  */
> -         && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
> +         && !is_pattern_stmt_p (stmt_info))
>         {
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "Building vector operands from scalars\n");
> @@ -1373,7 +1378,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>              as well as the arms under some constraints.  */
>           && nops == 2
>           && oprnds_info[1]->first_dt == vect_internal_def
> -         && is_gimple_assign (stmt)
> +         && is_gimple_assign (stmt_info->stmt)
>           /* Do so only if the number of not successful permutes was nor more
>              than a cut-ff as re-trying the recursive match on
>              possibly each level of the tree would expose exponential
> @@ -1389,9 +1394,10 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                 {
>                   if (matches[j] != !swap_not_matching)
>                     continue;
> -                 gimple *stmt = stmts[j];
> +                 stmt_vec_info stmt_info = stmts[j];
>                   /* Verify if we can swap operands of this stmt.  */
> -                 if (!is_gimple_assign (stmt)
> +                 gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
> +                 if (!stmt
>                       || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
>                     {
>                       if (!swap_not_matching)
> @@ -1406,7 +1412,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                      node and temporarily do that when processing it
>                      (or wrap operand accessors in a helper).  */
>                   else if (swap[j] != 0
> -                          || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
> +                          || STMT_VINFO_NUM_SLP_USES (stmt_info))
>                     {
>                       if (!swap_not_matching)
>                         {
> @@ -1417,7 +1423,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                                                "Build SLP failed: cannot swap "
>                                                "operands of shared stmt ");
>                               dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
> -                                               TDF_SLIM, stmts[j], 0);
> +                                               TDF_SLIM, stmts[j]->stmt, 0);
>                             }
>                           goto fail;
>                         }
> @@ -1454,31 +1460,23 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                  if we end up building the operand from scalars as
>                  we'll continue to process swapped operand two.  */
>               for (j = 0; j < group_size; ++j)
> -               {
> -                 gimple *stmt = stmts[j];
> -                 gimple_set_plf (stmt, GF_PLF_1, false);
> -               }
> +               gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
>               for (j = 0; j < group_size; ++j)
> -               {
> -                 gimple *stmt = stmts[j];
> -                 if (matches[j] == !swap_not_matching)
> -                   {
> -                     /* Avoid swapping operands twice.  */
> -                     if (gimple_plf (stmt, GF_PLF_1))
> -                       continue;
> -                     swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
> -                                        gimple_assign_rhs2_ptr (stmt));
> -                     gimple_set_plf (stmt, GF_PLF_1, true);
> -                   }
> -               }
> +               if (matches[j] == !swap_not_matching)
> +                 {
> +                   gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
> +                   /* Avoid swapping operands twice.  */
> +                   if (gimple_plf (stmt, GF_PLF_1))
> +                     continue;
> +                   swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
> +                                      gimple_assign_rhs2_ptr (stmt));
> +                   gimple_set_plf (stmt, GF_PLF_1, true);
> +                 }
>               /* Verify we swap all duplicates or none.  */
>               if (flag_checking)
>                 for (j = 0; j < group_size; ++j)
> -                 {
> -                   gimple *stmt = stmts[j];
> -                   gcc_assert (gimple_plf (stmt, GF_PLF_1)
> -                               == (matches[j] == !swap_not_matching));
> -                 }
> +                 gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
> +                             == (matches[j] == !swap_not_matching));
>
>               /* If we have all children of child built up from scalars then
>                  just throw that away and build it up this node from scalars.  */
> @@ -1486,8 +1484,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>                      to do extra work to cancel the pattern so the uses see the
>                      scalar version.  */
> -                 && !is_pattern_stmt_p
> -                       (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
> +                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>                 {
>                   unsigned int j;
>                   slp_tree grandchild;
> @@ -1550,16 +1547,16 @@ vect_print_slp_tree (dump_flags_t dump_k
>                      slp_tree node)
>  {
>    int i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
>    dump_printf_loc (dump_kind, loc, "node%s\n",
>                    SLP_TREE_DEF_TYPE (node) != vect_internal_def
>                    ? " (external)" : "");
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
>        dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
> -      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
>      }
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_print_slp_tree (dump_kind, loc, child);
> @@ -1575,15 +1572,15 @@ vect_print_slp_tree (dump_flags_t dump_k
>  vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
>  {
>    int i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>      return;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      if (j < 0 || i == j)
> -      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
> +      STMT_SLP_TYPE (stmt_info) = mark;
>
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_mark_slp_stmts (child, mark, j);
> @@ -1596,16 +1593,14 @@ vect_mark_slp_stmts (slp_tree node, enum
>  vect_mark_slp_stmts_relevant (slp_tree node)
>  {
>    int i;
> -  gimple *stmt;
>    stmt_vec_info stmt_info;
>    slp_tree child;
>
>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>      return;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> -      stmt_info = vinfo_for_stmt (stmt);
>        gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
>                    || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
>        STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
> @@ -1622,8 +1617,8 @@ vect_mark_slp_stmts_relevant (slp_tree n
>  vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
>                            vec<unsigned> permutation)
>  {
> -  gimple *stmt;
> -  vec<gimple *> tmp_stmts;
> +  stmt_vec_info stmt_info;
> +  vec<stmt_vec_info> tmp_stmts;
>    unsigned int i;
>    slp_tree child;
>
> @@ -1634,8 +1629,8 @@ vect_slp_rearrange_stmts (slp_tree node,
>    tmp_stmts.create (group_size);
>    tmp_stmts.quick_grow_cleared (group_size);
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> -    tmp_stmts[permutation[i]] = stmt;
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
> +    tmp_stmts[permutation[i]] = stmt_info;
>
>    SLP_TREE_SCALAR_STMTS (node).release ();
>    SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
> @@ -1696,13 +1691,14 @@ vect_attempt_slp_rearrange_stmts (slp_in
>    poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>      {
> -      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
> +      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +      first_stmt_info
> +       = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
>        /* But we have to keep those permutations that are required because
>           of handling of gaps.  */
>        if (known_eq (unrolling_factor, 1U)
> -         || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
> -             && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
> +         || (group_size == DR_GROUP_SIZE (first_stmt_info)
> +             && DR_GROUP_GAP (first_stmt_info) == 0))
>         SLP_TREE_LOAD_PERMUTATION (node).release ();
>        else
>         for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
> @@ -1721,7 +1717,7 @@ vect_supported_load_permutation_p (slp_i
>    unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
>    unsigned int i, j, k, next;
>    slp_tree node;
> -  gimple *stmt, *load, *next_load;
> +  gimple *next_load;
>
>    if (dump_enabled_p ())
>      {
> @@ -1750,18 +1746,18 @@ vect_supported_load_permutation_p (slp_i
>        return false;
>
>    node = SLP_INSTANCE_TREE (slp_instn);
> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>
>    /* Reduction (there are no data-refs in the root).
>       In reduction chain the order of the loads is not important.  */
> -  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  if (!STMT_VINFO_DATA_REF (stmt_info)
> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      vect_attempt_slp_rearrange_stmts (slp_instn);
>
>    /* In basic block vectorization we allow any subchain of an interleaving
>       chain.
>       FORNOW: not supported in loop SLP because of realignment compications.  */
> -  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_BB_VINFO (stmt_info))
>      {
>        /* Check whether the loads in an instance form a subchain and thus
>           no permutation is necessary.  */
> @@ -1771,24 +1767,25 @@ vect_supported_load_permutation_p (slp_i
>             continue;
>           bool subchain_p = true;
>            next_load = NULL;
> -          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
> -            {
> -              if (j != 0
> -                 && (next_load != load
> -                     || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
> +         stmt_vec_info load_info;
> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
> +           {
> +             if (j != 0
> +                 && (next_load != load_info
> +                     || DR_GROUP_GAP (load_info) != 1))
>                 {
>                   subchain_p = false;
>                   break;
>                 }
> -              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
> -            }
> +             next_load = DR_GROUP_NEXT_ELEMENT (load_info);
> +           }
>           if (subchain_p)
>             SLP_TREE_LOAD_PERMUTATION (node).release ();
>           else
>             {
> -             stmt_vec_info group_info
> -               = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
> -             group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
> +             stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +             group_info
> +               = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>               unsigned HOST_WIDE_INT nunits;
>               unsigned k, maxk = 0;
>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
> @@ -1831,7 +1828,7 @@ vect_supported_load_permutation_p (slp_i
>    poly_uint64 test_vf
>      = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
>                              LOOP_VINFO_VECT_FACTOR
> -                            (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
> +                            (STMT_VINFO_LOOP_VINFO (stmt_info)));
>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>      if (node->load_permutation.exists ()
>         && !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
> @@ -1847,15 +1844,15 @@ vect_supported_load_permutation_p (slp_i
>  gimple *
>  vect_find_last_scalar_stmt_in_slp (slp_tree node)
>  {
> -  gimple *last = NULL, *stmt;
> +  gimple *last = NULL;
> +  stmt_vec_info stmt_vinfo;
>
> -  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
> +  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>      {
> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>        if (is_pattern_stmt_p (stmt_vinfo))
>         last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
>        else
> -       last = get_later_stmt (stmt, last);
> +       last = get_later_stmt (stmt_vinfo, last);
>      }
>
>    return last;
> @@ -1926,6 +1923,7 @@ calculate_unrolling_factor (poly_uint64
>  vect_analyze_slp_instance (vec_info *vinfo,
>                            gimple *stmt, unsigned max_tree_size)
>  {
> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>    slp_instance new_instance;
>    slp_tree node;
>    unsigned int group_size;
> @@ -1934,25 +1932,25 @@ vect_analyze_slp_instance (vec_info *vin
>    stmt_vec_info next_info;
>    unsigned int i;
>    vec<slp_tree> loads;
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
> -  vec<gimple *> scalar_stmts;
> +  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> +  vec<stmt_vec_info> scalar_stmts;
>
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
>        scalar_type = TREE_TYPE (DR_REF (dr));
>        vectype = get_vectype_for_scalar_type (scalar_type);
> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
> +      group_size = DR_GROUP_SIZE (stmt_info);
>      }
> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        gcc_assert (is_a <loop_vec_info> (vinfo));
> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> -      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
> +      group_size = REDUC_GROUP_SIZE (stmt_info);
>      }
>    else
>      {
>        gcc_assert (is_a <loop_vec_info> (vinfo));
> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>        group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
>      }
>
> @@ -1973,38 +1971,38 @@ vect_analyze_slp_instance (vec_info *vin
>    /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
>    scalar_stmts.create (group_size);
>    next = stmt;
> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>      {
>        /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
>        while (next)
>          {
> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
> -           scalar_stmts.safe_push (
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
> +         next_info = vinfo_for_stmt (next);
> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
> +             && STMT_VINFO_RELATED_STMT (next_info))
> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
> -            scalar_stmts.safe_push (next);
> +           scalar_stmts.safe_push (next_info);
>            next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>          }
>      }
> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>      {
>        /* Collect the reduction stmts and store them in
>          SLP_TREE_SCALAR_STMTS.  */
>        while (next)
>          {
> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
> -           scalar_stmts.safe_push (
> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
> +         next_info = vinfo_for_stmt (next);
> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
> +             && STMT_VINFO_RELATED_STMT (next_info))
> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>           else
> -            scalar_stmts.safe_push (next);
> +           scalar_stmts.safe_push (next_info);
>            next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>          }
>        /* Mark the first element of the reduction chain as reduction to properly
>          transform the node.  In the reduction analysis phase only the last
>          element of the chain is marked as reduction.  */
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
> +      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>      }
>    else
>      {
> @@ -2068,15 +2066,16 @@ vect_analyze_slp_instance (vec_info *vin
>         {
>           vec<unsigned> load_permutation;
>           int j;
> -         gimple *load, *first_stmt;
> +         stmt_vec_info load_info;
> +         gimple *first_stmt;
>           bool this_load_permuted = false;
>           load_permutation.create (group_size);
>           first_stmt = DR_GROUP_FIRST_ELEMENT
> -             (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
> -         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
> +           (SLP_TREE_SCALAR_STMTS (load_node)[0]);
> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
>             {
> -                 int load_place = vect_get_place_in_interleaving_chain
> -                                    (load, first_stmt);
> +             int load_place = vect_get_place_in_interleaving_chain
> +               (load_info, first_stmt);
>               gcc_assert (load_place != -1);
>               if (load_place != j)
>                 this_load_permuted = true;
> @@ -2124,7 +2123,7 @@ vect_analyze_slp_instance (vec_info *vin
>           FOR_EACH_VEC_ELT (loads, i, load_node)
>             {
>               gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
> -                 (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
> +               (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>               stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
>                   /* Use SLP for strided accesses (or if we
>                      can't load-lanes).  */
> @@ -2307,10 +2306,10 @@ vect_make_slp_decision (loop_vec_info lo
>  static void
>  vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
> +  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
>    imm_use_iterator imm_iter;
>    gimple *use_stmt;
> -  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
> +  stmt_vec_info use_vinfo;
>    slp_tree child;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>    int j;
> @@ -2326,6 +2325,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>        gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
>        /* If we get a pattern stmt here we have to use the LHS of the
>           original stmt for immediate uses.  */
> +      gimple *stmt = stmt_vinfo->stmt;
>        if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
>           && STMT_VINFO_RELATED_STMT (stmt_vinfo))
>         stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
> @@ -2366,7 +2366,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>        if (dump_enabled_p ())
>         {
>           dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
>         }
>        STMT_SLP_TYPE (stmt_vinfo) = hybrid;
>      }
> @@ -2525,9 +2525,8 @@ vect_slp_analyze_node_operations_1 (vec_
>                                     slp_instance node_instance,
>                                     stmt_vector_for_cost *cost_vec)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> -  gcc_assert (stmt_info);
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> +  gimple *stmt = stmt_info->stmt;
>    gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
>
>    /* For BB vectorization vector types are assigned here.
> @@ -2551,10 +2550,10 @@ vect_slp_analyze_node_operations_1 (vec_
>             return false;
>         }
>
> -      gimple *sstmt;
> +      stmt_vec_info sstmt_info;
>        unsigned int i;
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
> -       STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
> +       STMT_VINFO_VECTYPE (sstmt_info) = vectype;
>      }
>
>    /* Calculate the number of vector statements to be created for the
> @@ -2626,14 +2625,14 @@ vect_slp_analyze_node_operations (vec_in
>    /* Push SLP node def-type to stmt operands.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>         = SLP_TREE_DEF_TYPE (child);
>    bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
>                                                  cost_vec);
>    /* Restore def-types.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>         = vect_internal_def;
>    if (! res)
>      return false;
> @@ -2665,11 +2664,11 @@ vect_slp_analyze_operations (vec_info *v
>                                              instance, visited, &lvisited,
>                                              &cost_vec))
>          {
> +         slp_tree node = SLP_INSTANCE_TREE (instance);
> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "removing SLP instance operations starting from: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
> -                           SLP_TREE_SCALAR_STMTS
> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>           vect_free_slp_instance (instance, false);
>            vinfo->slp_instances.ordered_remove (i);
>           cost_vec.release ();
> @@ -2701,14 +2700,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
>                          stmt_vector_for_cost *cost_vec)
>  {
>    unsigned i;
> -  gimple *stmt;
> +  stmt_vec_info stmt_info;
>    slp_tree child;
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> +      gimple *stmt = stmt_info->stmt;
>        ssa_op_iter op_iter;
>        def_operand_p def_p;
> -      stmt_vec_info stmt_info;
>
>        if ((*life)[i])
>         continue;
> @@ -2724,8 +2723,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
>           gimple *use_stmt;
>           FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
>             if (!is_gimple_debug (use_stmt)
> -               && (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
> -                                            use_stmt)
> +               && (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
>                     || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
>               {
>                 (*life)[i] = true;
> @@ -2740,7 +2738,6 @@ vect_bb_slp_scalar_cost (basic_block bb,
>         continue;
>        gimple_set_visited (stmt, true);
>
> -      stmt_info = vinfo_for_stmt (stmt);
>        vect_cost_for_stmt kind;
>        if (STMT_VINFO_DATA_REF (stmt_info))
>          {
> @@ -2944,11 +2941,11 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
>        if (! vect_slp_analyze_and_verify_instance_alignment (instance)
>           || ! vect_slp_analyze_instance_dependence (instance))
>         {
> +         slp_tree node = SLP_INSTANCE_TREE (instance);
> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "removing SLP instance operations starting from: ");
> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
> -                           SLP_TREE_SCALAR_STMTS
> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>           vect_free_slp_instance (instance, false);
>           BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
>           continue;
> @@ -3299,9 +3296,9 @@ vect_get_constant_vectors (tree op, slp_
>                             vec<tree> *vec_oprnds,
>                            unsigned int op_num, unsigned int number_of_vectors)
>  {
> -  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> -  gimple *stmt = stmts[0];
> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
> +  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
> +  stmt_vec_info stmt_vinfo = stmts[0];
> +  gimple *stmt = stmt_vinfo->stmt;
>    unsigned HOST_WIDE_INT nunits;
>    tree vec_cst;
>    unsigned j, number_of_places_left_in_vector;
> @@ -3320,7 +3317,7 @@ vect_get_constant_vectors (tree op, slp_
>
>    /* Check if vector type is a boolean vector.  */
>    if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
> -      && vect_mask_constant_operand_p (stmt, op_num))
> +      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
>      vector_type
>        = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
>    else
> @@ -3366,8 +3363,9 @@ vect_get_constant_vectors (tree op, slp_
>    bool place_after_defs = false;
>    for (j = 0; j < number_of_copies; j++)
>      {
> -      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
> +      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
>          {
> +         stmt = stmt_vinfo->stmt;
>            if (is_store)
>              op = gimple_assign_rhs1 (stmt);
>            else
> @@ -3496,10 +3494,12 @@ vect_get_constant_vectors (tree op, slp_
>                 {
>                   gsi = gsi_for_stmt
>                           (vect_find_last_scalar_stmt_in_slp (slp_node));
> -                 init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
> +                 init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
> +                                          &gsi);
>                 }
>               else
> -               init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
> +               init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
> +                                        NULL);
>               if (ctor_seq != NULL)
>                 {
>                   gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
> @@ -3612,15 +3612,14 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>           /* We have to check both pattern and original def, if available.  */
>           if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
>             {
> -             gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
> -             stmt_vec_info related
> -               = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
> +             stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
> +             stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
>               tree first_def_op;
>
> -             if (gimple_code (first_def) == GIMPLE_PHI)
> +             if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
>                 first_def_op = gimple_phi_result (first_def);
>               else
> -               first_def_op = gimple_get_lhs (first_def);
> +               first_def_op = gimple_get_lhs (first_def_info->stmt);
>               if (operand_equal_p (oprnd, first_def_op, 0)
>                   || (related
>                       && operand_equal_p (oprnd,
> @@ -3686,8 +3685,7 @@ vect_transform_slp_perm_load (slp_tree n
>                               slp_instance slp_node_instance, bool analyze_only,
>                               unsigned *n_perms)
>  {
> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>    vec_info *vinfo = stmt_info->vinfo;
>    tree mask_element_type = NULL_TREE, mask_type;
>    int vec_index = 0;
> @@ -3779,7 +3777,7 @@ vect_transform_slp_perm_load (slp_tree n
>                                    "permutation requires at "
>                                    "least three vectors ");
>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                                   stmt, 0);
> +                                   stmt_info->stmt, 0);
>                 }
>               gcc_assert (analyze_only);
>               return false;
> @@ -3832,6 +3830,7 @@ vect_transform_slp_perm_load (slp_tree n
>                   stmt_vec_info perm_stmt_info;
>                   if (! noop_p)
>                     {
> +                     gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>                       tree perm_dest
>                         = vect_create_destination_var (gimple_assign_lhs (stmt),
>                                                        vectype);
> @@ -3841,7 +3840,8 @@ vect_transform_slp_perm_load (slp_tree n
>                                                first_vec, second_vec,
>                                                mask_vec);
>                       perm_stmt_info
> -                       = vect_finish_stmt_generation (stmt, perm_stmt, gsi);
> +                       = vect_finish_stmt_generation (stmt_info, perm_stmt,
> +                                                      gsi);
>                     }
>                   else
>                     /* If mask was NULL_TREE generate the requested
> @@ -3870,7 +3870,6 @@ vect_transform_slp_perm_load (slp_tree n
>  vect_schedule_slp_instance (slp_tree node, slp_instance instance,
>                             scalar_stmts_to_slp_tree_map_t *bst_map)
>  {
> -  gimple *stmt;
>    bool grouped_store, is_store;
>    gimple_stmt_iterator si;
>    stmt_vec_info stmt_info;
> @@ -3897,11 +3896,13 @@ vect_schedule_slp_instance (slp_tree nod
>    /* Push SLP node def-type to stmts.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
> +      {
> +       stmt_vec_info child_stmt_info;
> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
> +      }
>
> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
> -  stmt_info = vinfo_for_stmt (stmt);
> +  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>
>    /* VECTYPE is the type of the destination.  */
>    vectype = STMT_VINFO_VECTYPE (stmt_info);
> @@ -3916,7 +3917,7 @@ vect_schedule_slp_instance (slp_tree nod
>      {
>        dump_printf_loc (MSG_NOTE,vect_location,
>                        "------>vectorizing SLP node starting from: ");
> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>      }
>
>    /* Vectorized stmts go before the last scalar stmt which is where
> @@ -3928,7 +3929,7 @@ vect_schedule_slp_instance (slp_tree nod
>       chain is marked as reduction.  */
>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>        && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
> -      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
> +      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>      {
>        STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>        STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
> @@ -3938,29 +3939,33 @@ vect_schedule_slp_instance (slp_tree nod
>       both operations and then performing a merge.  */
>    if (SLP_TREE_TWO_OPERATORS (node))
>      {
> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>        enum tree_code code0 = gimple_assign_rhs_code (stmt);
>        enum tree_code ocode = ERROR_MARK;
> -      gimple *ostmt;
> +      stmt_vec_info ostmt_info;
>        vec_perm_builder mask (group_size, group_size, 1);
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
> -       if (gimple_assign_rhs_code (ostmt) != code0)
> -         {
> -           mask.quick_push (1);
> -           ocode = gimple_assign_rhs_code (ostmt);
> -         }
> -       else
> -         mask.quick_push (0);
> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
> +       {
> +         gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
> +         if (gimple_assign_rhs_code (ostmt) != code0)
> +           {
> +             mask.quick_push (1);
> +             ocode = gimple_assign_rhs_code (ostmt);
> +           }
> +         else
> +           mask.quick_push (0);
> +       }
>        if (ocode != ERROR_MARK)
>         {
>           vec<stmt_vec_info> v0;
>           vec<stmt_vec_info> v1;
>           unsigned j;
>           tree tmask = NULL_TREE;
> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>           v0 = SLP_TREE_VEC_STMTS (node).copy ();
>           SLP_TREE_VEC_STMTS (node).truncate (0);
>           gimple_assign_set_rhs_code (stmt, ocode);
> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>           gimple_assign_set_rhs_code (stmt, code0);
>           v1 = SLP_TREE_VEC_STMTS (node).copy ();
>           SLP_TREE_VEC_STMTS (node).truncate (0);
> @@ -3998,20 +4003,24 @@ vect_schedule_slp_instance (slp_tree nod
>                                            gimple_assign_lhs (v1[j]->stmt),
>                                            tmask);
>               SLP_TREE_VEC_STMTS (node).quick_push
> -               (vect_finish_stmt_generation (stmt, vstmt, &si));
> +               (vect_finish_stmt_generation (stmt_info, vstmt, &si));
>             }
>           v0.release ();
>           v1.release ();
>           return false;
>         }
>      }
> -  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
> +  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
> +                                 instance);
>
>    /* Restore stmt def-types.  */
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
> +      {
> +       stmt_vec_info child_stmt_info;
> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
> +      }
>
>    return is_store;
>  }
> @@ -4024,7 +4033,7 @@ vect_schedule_slp_instance (slp_tree nod
>  static void
>  vect_remove_slp_scalar_calls (slp_tree node)
>  {
> -  gimple *stmt, *new_stmt;
> +  gimple *new_stmt;
>    gimple_stmt_iterator gsi;
>    int i;
>    slp_tree child;
> @@ -4037,13 +4046,12 @@ vect_remove_slp_scalar_calls (slp_tree n
>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>      vect_remove_slp_scalar_calls (child);
>
> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>      {
> -      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
> +      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
> +      if (!stmt || gimple_bb (stmt) == NULL)
>         continue;
> -      stmt_info = vinfo_for_stmt (stmt);
> -      if (stmt_info == NULL_STMT_VEC_INFO
> -         || is_pattern_stmt_p (stmt_info)
> +      if (is_pattern_stmt_p (stmt_info)
>           || !PURE_SLP_STMT (stmt_info))
>         continue;
>        lhs = gimple_call_lhs (stmt);
> @@ -4085,7 +4093,7 @@ vect_schedule_slp (vec_info *vinfo)
>    FOR_EACH_VEC_ELT (slp_instances, i, instance)
>      {
>        slp_tree root = SLP_INSTANCE_TREE (instance);
> -      gimple *store;
> +      stmt_vec_info store_info;
>        unsigned int j;
>        gimple_stmt_iterator gsi;
>
> @@ -4099,20 +4107,20 @@ vect_schedule_slp (vec_info *vinfo)
>        if (is_a <loop_vec_info> (vinfo))
>         vect_remove_slp_scalar_calls (root);
>
> -      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
> +      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
>                    && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
>          {
> -          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
> -            break;
> +         if (!STMT_VINFO_DATA_REF (store_info))
> +           break;
>
> -         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
> -           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
> -          /* Free the attached stmt_vec_info and remove the stmt.  */
> -          gsi = gsi_for_stmt (store);
> -         unlink_stmt_vdef (store);
> -          gsi_remove (&gsi, true);
> -         release_defs (store);
> -          free_stmt_vec_info (store);
> +         if (is_pattern_stmt_p (store_info))
> +           store_info = STMT_VINFO_RELATED_STMT (store_info);
> +         /* Free the attached stmt_vec_info and remove the stmt.  */
> +         gsi = gsi_for_stmt (store_info);
> +         unlink_stmt_vdef (store_info);
> +         gsi_remove (&gsi, true);
> +         release_defs (store_info);
> +         free_stmt_vec_info (store_info);
>          }
>      }
>
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:00.397042684 +0100
> @@ -665,7 +665,8 @@ vect_slp_analyze_data_ref_dependence (st
>
>  static bool
>  vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
> -                                  vec<gimple *> stores, gimple *last_store)
> +                                  vec<stmt_vec_info> stores,
> +                                  gimple *last_store)
>  {
>    /* This walks over all stmts involved in the SLP load/store done
>       in NODE verifying we can sink them up to the last stmt in the
> @@ -673,13 +674,13 @@ vect_slp_analyze_node_dependences (slp_i
>    gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>      {
> -      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
> -      if (access == last_access)
> +      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
> +      if (access_info == last_access)
>         continue;
> -      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
> +      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
>        ao_ref ref;
>        bool ref_initialized_p = false;
> -      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
> +      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
>            gsi_stmt (gsi) != last_access; gsi_next (&gsi))
Richard Sandiford July 31, 2018, 3:03 p.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:01 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch changes SLP_TREE_SCALAR_STMTS from a vec<gimple *> to
>> a vec<stmt_vec_info>.  It's longer than the previous conversions
>> but mostly mechanical.
>
> OK.  I don't remember exactly but vect_external_def SLP nodes have
> empty stmts vector then?  I realize we only have those for defs that
> are in the vectorized region.

Yeah, for this the thing we care about is that the def is part of the
vectorisable region.  I'm not sure how much stuff we hang off
a vect_external_def SLP stmt_vec_info, but we do need at least
STMT_VINFO_DEF_TYPE as well as STMT_VINFO_STMT itself.
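
To make that concrete, the kind of access the conversion gives us looks
roughly like this (just an illustrative sketch, not code from the patch;
"node" here stands for an arbitrary SLP node):

  stmt_vec_info stmt_info;
  unsigned int i;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
    {
      /* Each element is now a stmt_vec_info, so the def type and the
	 underlying gimple statement are available directly, without a
	 vinfo_for_stmt lookup.  */
      vect_def_type dt = STMT_VINFO_DEF_TYPE (stmt_info);
      gimple *stmt = stmt_info->stmt;
      gcc_assert (stmt && dt != vect_uninitialized_def);
    }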

Thanks,
Richard

>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (_slp_tree::stmts): Change from a vec<gimple *>
>>         to a vec<stmt_vec_info>.
>>         * tree-vect-slp.c (vect_free_slp_tree): Update accordingly.
>>         (vect_create_new_slp_node): Take a vec<gimple *> instead of a
>>         vec<stmt_vec_info>.
>>         (_slp_oprnd_info::def_stmts): Change from a vec<gimple *>
>>         to a vec<stmt_vec_info>.
>>         (bst_traits::value_type, bst_traits::value_type): Likewise.
>>         (bst_traits::hash): Update accordingly.
>>         (vect_get_and_check_slp_defs): Change the stmts parameter from
>>         a vec<gimple *> to a vec<stmt_vec_info>.
>>         (vect_two_operations_perm_ok_p, vect_build_slp_tree_1): Likewise.
>>         (vect_build_slp_tree): Likewise.
>>         (vect_build_slp_tree_2): Likewise.  Update uses of
>>         SLP_TREE_SCALAR_STMTS.
>>         (vect_print_slp_tree): Update uses of SLP_TREE_SCALAR_STMTS.
>>         (vect_mark_slp_stmts, vect_mark_slp_stmts_relevant)
>>         (vect_slp_rearrange_stmts, vect_attempt_slp_rearrange_stmts)
>>         (vect_supported_load_permutation_p, vect_find_last_scalar_stmt_in_slp)
>>         (vect_detect_hybrid_slp_stmts, vect_slp_analyze_node_operations_1)
>>         (vect_slp_analyze_node_operations, vect_slp_analyze_operations)
>>         (vect_bb_slp_scalar_cost, vect_slp_analyze_bb_1)
>>         (vect_get_constant_vectors, vect_get_slp_defs)
>>         (vect_transform_slp_perm_load, vect_schedule_slp_instance)
>>         (vect_remove_slp_scalar_calls, vect_schedule_slp): Likewise.
>>         (vect_analyze_slp_instance): Build up a vec of stmt_vec_infos
>>         instead of gimple stmts.
>>         * tree-vect-data-refs.c (vect_slp_analyze_node_dependences): Change
>>         the stores parameter for a vec<gimple *> to a vec<stmt_vec_info>.
>>         (vect_slp_analyze_instance_dependence): Update uses of
>>         SLP_TREE_SCALAR_STMTS.
>>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>>         (vect_slp_analyze_and_verify_instance_alignment): Likewise.
>>         * tree-vect-loop.c (neutral_op_for_slp_reduction): Likewise.
>>         (get_initial_defs_for_reduction): Likewise.
>>         (vect_create_epilog_for_reduction): Likewise.
>>         (vectorize_fold_left_reduction): Likewise.
>>         * tree-vect-stmts.c (vect_prologue_cost_for_slp_op): Likewise.
>>         (vect_model_simple_cost, vectorizable_shift, vectorizable_load)
>>         (can_vectorize_live_stmts): Likewise.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:22:57.277070390 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:23:00.401042649 +0100
>> @@ -138,7 +138,7 @@ struct _slp_tree {
>>    /* Nodes that contain def-stmts of this node statements operands.  */
>>    vec<slp_tree> children;
>>    /* A group of scalar stmts to be vectorized together.  */
>> -  vec<gimple *> stmts;
>> +  vec<stmt_vec_info> stmts;
>>    /* Load permutation relative to the stores, NULL if there is no
>>       permutation.  */
>>    vec<unsigned> load_permutation;
>> Index: gcc/tree-vect-slp.c
>> ===================================================================
>> --- gcc/tree-vect-slp.c 2018-07-24 10:22:57.277070390 +0100
>> +++ gcc/tree-vect-slp.c 2018-07-24 10:23:00.401042649 +0100
>> @@ -66,11 +66,11 @@ vect_free_slp_tree (slp_tree node, bool
>>       statements would be redundant.  */
>>    if (!final_p)
>>      {
>> -      gimple *stmt;
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +      stmt_vec_info stmt_info;
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>         {
>> -         gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
>> -         STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
>> +         gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
>> +         STMT_VINFO_NUM_SLP_USES (stmt_info)--;
>>         }
>>      }
>>
>> @@ -99,21 +99,21 @@ vect_free_slp_instance (slp_instance ins
>>  /* Create an SLP node for SCALAR_STMTS.  */
>>
>>  static slp_tree
>> -vect_create_new_slp_node (vec<gimple *> scalar_stmts)
>> +vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
>>  {
>>    slp_tree node;
>> -  gimple *stmt = scalar_stmts[0];
>> +  stmt_vec_info stmt_info = scalar_stmts[0];
>>    unsigned int nops;
>>
>> -  if (is_gimple_call (stmt))
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      nops = gimple_call_num_args (stmt);
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        nops = gimple_num_ops (stmt) - 1;
>>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>>         nops++;
>>      }
>> -  else if (gimple_code (stmt) == GIMPLE_PHI)
>> +  else if (is_a <gphi *> (stmt_info->stmt))
>>      nops = 0;
>>    else
>>      return NULL;
>> @@ -128,8 +128,8 @@ vect_create_new_slp_node (vec<gimple *>
>>    SLP_TREE_DEF_TYPE (node) = vect_internal_def;
>>
>>    unsigned i;
>> -  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
>> -    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
>> +  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
>> +    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
>>
>>    return node;
>>  }
>> @@ -141,7 +141,7 @@ vect_create_new_slp_node (vec<gimple *>
>>  typedef struct _slp_oprnd_info
>>  {
>>    /* Def-stmts for the operands.  */
>> -  vec<gimple *> def_stmts;
>> +  vec<stmt_vec_info> def_stmts;
>>    /* Information about the first statement, its vector def-type, type, the
>>      operand itself in case it's constant, and an indication if it's a pattern
>>       stmt.  */
>> @@ -297,10 +297,10 @@ can_duplicate_and_interleave_p (unsigned
>>     ok return 0.  */
>>  static int
>>  vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
>> -                            vec<gimple *> stmts, unsigned stmt_num,
>> +                            vec<stmt_vec_info> stmts, unsigned stmt_num,
>>                              vec<slp_oprnd_info> *oprnds_info)
>>  {
>> -  gimple *stmt = stmts[stmt_num];
>> +  stmt_vec_info stmt_info = stmts[stmt_num];
>>    tree oprnd;
>>    unsigned int i, number_of_oprnds;
>>    enum vect_def_type dt = vect_uninitialized_def;
>> @@ -312,12 +312,12 @@ vect_get_and_check_slp_defs (vec_info *v
>>    bool first = stmt_num == 0;
>>    bool second = stmt_num == 1;
>>
>> -  if (is_gimple_call (stmt))
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      {
>>        number_of_oprnds = gimple_call_num_args (stmt);
>>        first_op_idx = 3;
>>      }
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        enum tree_code code = gimple_assign_rhs_code (stmt);
>>        number_of_oprnds = gimple_num_ops (stmt) - 1;
>> @@ -347,12 +347,13 @@ vect_get_and_check_slp_defs (vec_info *v
>>           int *map = maps[*swap];
>>
>>           if (i < 2)
>> -           oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
>> +           oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
>> +                                            first_op_idx), map[i]);
>>           else
>> -           oprnd = gimple_op (stmt, map[i]);
>> +           oprnd = gimple_op (stmt_info->stmt, map[i]);
>>         }
>>        else
>> -       oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
>> +       oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
>>
>>        oprnd_info = (*oprnds_info)[i];
>>
>> @@ -518,18 +519,20 @@ vect_get_and_check_slp_defs (vec_info *v
>>      {
>>        /* If there are already uses of this stmt in a SLP instance then
>>           we've committed to the operand order and can't swap it.  */
>> -      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
>> +      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
>>         {
>>           if (dump_enabled_p ())
>>             {
>>               dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                "Build SLP failed: cannot swap operands of "
>>                                "shared stmt ");
>> -             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
>> +             dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> +                               stmt_info->stmt, 0);
>>             }
>>           return -1;
>>         }
>>
>> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>        if (first_op_cond)
>>         {
>>           tree cond = gimple_assign_rhs1 (stmt);
>> @@ -655,8 +658,9 @@ vect_record_max_nunits (vec_info *vinfo,
>>     would be permuted.  */
>>
>>  static bool
>> -vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
>> -                              tree vectype, tree_code alt_stmt_code)
>> +vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
>> +                              unsigned int group_size, tree vectype,
>> +                              tree_code alt_stmt_code)
>>  {
>>    unsigned HOST_WIDE_INT count;
>>    if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
>> @@ -666,7 +670,8 @@ vect_two_operations_perm_ok_p (vec<gimpl
>>    for (unsigned int i = 0; i < count; ++i)
>>      {
>>        unsigned int elt = i;
>> -      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
>> +      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
>> +      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
>>         elt += count;
>>        sel.quick_push (elt);
>>      }
>> @@ -690,12 +695,12 @@ vect_two_operations_perm_ok_p (vec<gimpl
>>
>>  static bool
>>  vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits, bool *matches,
>>                        bool *two_operators)
>>  {
>>    unsigned int i;
>> -  gimple *first_stmt = stmts[0], *stmt = stmts[0];
>> +  stmt_vec_info first_stmt_info = stmts[0];
>>    enum tree_code first_stmt_code = ERROR_MARK;
>>    enum tree_code alt_stmt_code = ERROR_MARK;
>>    enum tree_code rhs_code = ERROR_MARK;
>> @@ -710,9 +715,10 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>    gimple *first_load = NULL, *prev_first_load = NULL;
>>
>>    /* For every stmt in NODE find its def stmt/s.  */
>> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +  stmt_vec_info stmt_info;
>> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>>      {
>> -      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> +      gimple *stmt = stmt_info->stmt;
>>        swap[i] = 0;
>>        matches[i] = false;
>>
>> @@ -723,7 +729,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>         }
>>
>>        /* Fail to vectorize statements marked as unvectorizable.  */
>> -      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
>> +      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
>>          {
>>            if (dump_enabled_p ())
>>              {
>> @@ -755,7 +761,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>        if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
>>                                            &nunits_vectype)
>>           || (nunits_vectype
>> -             && !vect_record_max_nunits (vinfo, stmt, group_size,
>> +             && !vect_record_max_nunits (vinfo, stmt_info, group_size,
>>                                           nunits_vectype, max_nunits)))
>>         {
>>           /* Fatal mismatch.  */
>> @@ -877,7 +883,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                    && (alt_stmt_code == PLUS_EXPR
>>                        || alt_stmt_code == MINUS_EXPR)
>>                    && rhs_code == alt_stmt_code)
>> -              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
>> +             && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
>>                     && (first_stmt_code == ARRAY_REF
>>                         || first_stmt_code == BIT_FIELD_REF
>>                         || first_stmt_code == INDIRECT_REF
>> @@ -893,7 +899,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                   dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                    "original stmt ");
>>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                   first_stmt, 0);
>> +                                   first_stmt_info->stmt, 0);
>>                 }
>>               /* Mismatch.  */
>>               continue;
>> @@ -915,8 +921,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>
>>           if (rhs_code == CALL_EXPR)
>>             {
>> -             gimple *first_stmt = stmts[0];
>> -             if (!compatible_calls_p (as_a <gcall *> (first_stmt),
>> +             if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
>>                                        as_a <gcall *> (stmt)))
>>                 {
>>                   if (dump_enabled_p ())
>> @@ -933,7 +938,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>         }
>>
>>        /* Grouped store or load.  */
>> -      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         {
>>           if (REFERENCE_CLASS_P (lhs))
>>             {
>> @@ -943,7 +948,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>           else
>>             {
>>               /* Load.  */
>> -              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
>> +             first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
>>                if (prev_first_load)
>>                  {
>> /* Check that there are no loads from different interleaving
>> @@ -1061,7 +1066,7 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                                              vectype, alt_stmt_code))
>>         {
>>           for (i = 0; i < group_size; ++i)
>> -           if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
>> +           if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
>>               {
>>                 matches[i] = false;
>>                 if (dump_enabled_p ())
>> @@ -1070,11 +1075,11 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>                                      "Build SLP failed: different operation "
>>                                      "in stmt ");
>>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                     stmts[i], 0);
>> +                                     stmts[i]->stmt, 0);
>>                     dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>>                                      "original stmt ");
>>                     dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                     first_stmt, 0);
>> +                                     first_stmt_info->stmt, 0);
>>                   }
>>               }
>>           return false;
>> @@ -1090,8 +1095,8 @@ vect_build_slp_tree_1 (vec_info *vinfo,
>>     need a special value for deleted that differs from empty.  */
>>  struct bst_traits
>>  {
>> -  typedef vec <gimple *> value_type;
>> -  typedef vec <gimple *> compare_type;
>> +  typedef vec <stmt_vec_info> value_type;
>> +  typedef vec <stmt_vec_info> compare_type;
>>    static inline hashval_t hash (value_type);
>>    static inline bool equal (value_type existing, value_type candidate);
>>    static inline bool is_empty (value_type x) { return !x.exists (); }
>> @@ -1105,7 +1110,7 @@ bst_traits::hash (value_type x)
>>  {
>>    inchash::hash h;
>>    for (unsigned i = 0; i < x.length (); ++i)
>> -    h.add_int (gimple_uid (x[i]));
>> +    h.add_int (gimple_uid (x[i]->stmt));
>>    return h.end ();
>>  }
>>  inline bool
>> @@ -1128,7 +1133,7 @@ typedef hash_map <vec <gimple *>, slp_tr
>>
>>  static slp_tree
>>  vect_build_slp_tree_2 (vec_info *vinfo,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits,
>>                        vec<slp_tree> *loads,
>> bool *matches, unsigned *npermutes, unsigned *tree_size,
>> @@ -1136,7 +1141,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>
>>  static slp_tree
>>  vect_build_slp_tree (vec_info *vinfo,
>> -                    vec<gimple *> stmts, unsigned int group_size,
>> +                    vec<stmt_vec_info> stmts, unsigned int group_size,
>>                      poly_uint64 *max_nunits, vec<slp_tree> *loads,
>>                      bool *matches, unsigned *npermutes, unsigned *tree_size,
>>                      unsigned max_tree_size)
>> @@ -1151,7 +1156,7 @@ vect_build_slp_tree (vec_info *vinfo,
>>       scalars, see PR81723.  */
>>    if (! res)
>>      {
>> -      vec <gimple *> x;
>> +      vec <stmt_vec_info> x;
>>        x.create (stmts.length ());
>>        x.splice (stmts);
>>        bst_fail->add (x);
>> @@ -1168,7 +1173,7 @@ vect_build_slp_tree (vec_info *vinfo,
>>
>>  static slp_tree
>>  vect_build_slp_tree_2 (vec_info *vinfo,
>> -                      vec<gimple *> stmts, unsigned int group_size,
>> +                      vec<stmt_vec_info> stmts, unsigned int group_size,
>>                        poly_uint64 *max_nunits,
>>                        vec<slp_tree> *loads,
>> bool *matches, unsigned *npermutes, unsigned *tree_size,
>> @@ -1176,53 +1181,54 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>  {
>>    unsigned nops, i, this_tree_size = 0;
>>    poly_uint64 this_max_nunits = *max_nunits;
>> -  gimple *stmt;
>>    slp_tree node;
>>
>>    matches[0] = false;
>>
>> -  stmt = stmts[0];
>> -  if (is_gimple_call (stmt))
>> +  stmt_vec_info stmt_info = stmts[0];
>> +  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
>>      nops = gimple_call_num_args (stmt);
>> -  else if (is_gimple_assign (stmt))
>> +  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
>>      {
>>        nops = gimple_num_ops (stmt) - 1;
>>        if (gimple_assign_rhs_code (stmt) == COND_EXPR)
>>         nops++;
>>      }
>> -  else if (gimple_code (stmt) == GIMPLE_PHI)
>> +  else if (is_a <gphi *> (stmt_info->stmt))
>>      nops = 0;
>>    else
>>      return NULL;
>>
>>    /* If the SLP node is a PHI (induction or reduction), terminate
>>       the recursion.  */
>> -  if (gimple_code (stmt) == GIMPLE_PHI)
>> +  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
>>      {
>>        tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
>>        tree vectype = get_vectype_for_scalar_type (scalar_type);
>> -      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
>> +      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
>>                                    max_nunits))
>>         return NULL;
>>
>> -      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
>> +      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
>>        /* Induction from different IVs is not supported.  */
>>        if (def_type == vect_induction_def)
>>         {
>> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
>> -           if (stmt != stmts[0])
>> +         stmt_vec_info other_info;
>> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>> +           if (stmt_info != other_info)
>>               return NULL;
>>         }
>>        else
>>         {
>>           /* Else def types have to match.  */
>> -         FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +         stmt_vec_info other_info;
>> +         FOR_EACH_VEC_ELT (stmts, i, other_info)
>>             {
>>               /* But for reduction chains only check on the first stmt.  */
>> -             if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
>> - && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
>> +             if (REDUC_GROUP_FIRST_ELEMENT (other_info)
>> +                 && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
>>                 continue;
>> -             if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
>> +             if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
>>                 return NULL;
>>             }
>>         }
>> @@ -1238,8 +1244,8 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>      return NULL;
>>
>>    /* If the SLP node is a load, terminate the recursion.  */
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
>> -      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> +      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
>>      {
>>        *max_nunits = this_max_nunits;
>>        node = vect_create_new_slp_node (stmts);
>> @@ -1250,7 +1256,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>    /* Get at the operands, verifying they are compatible.  */
>>    vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
>>    slp_oprnd_info oprnd_info;
>> -  FOR_EACH_VEC_ELT (stmts, i, stmt)
>> +  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
>>      {
>>        int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
>>                                              stmts, i, &oprnds_info);
>> @@ -1269,7 +1275,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>    auto_vec<slp_tree, 4> children;
>>    auto_vec<slp_tree> this_loads;
>>
>> -  stmt = stmts[0];
>> +  stmt_info = stmts[0];
>>
>>    if (tree_size)
>>      max_tree_size -= *tree_size;
>> @@ -1307,8 +1313,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>               /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>                  do extra work to cancel the pattern so the uses see the
>>                  scalar version.  */
>> -             && !is_pattern_stmt_p
>> -                   (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
>> +             && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>>             {
>>               slp_tree grandchild;
>>
>> @@ -1352,7 +1357,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>           /* ???  Rejecting patterns this way doesn't work.  We'd have to
>>              do extra work to cancel the pattern so the uses see the
>>              scalar version.  */
>> -         && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
>> +         && !is_pattern_stmt_p (stmt_info))
>>         {
>>           dump_printf_loc (MSG_NOTE, vect_location,
>>                            "Building vector operands from scalars\n");
>> @@ -1373,7 +1378,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>              as well as the arms under some constraints.  */
>>           && nops == 2
>>           && oprnds_info[1]->first_dt == vect_internal_def
>> -         && is_gimple_assign (stmt)
>> +         && is_gimple_assign (stmt_info->stmt)
>>           /* Do so only if the number of not successful permutes was nor more
>>              than a cut-ff as re-trying the recursive match on
>>              possibly each level of the tree would expose exponential
>> @@ -1389,9 +1394,10 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                 {
>>                   if (matches[j] != !swap_not_matching)
>>                     continue;
>> -                 gimple *stmt = stmts[j];
>> +                 stmt_vec_info stmt_info = stmts[j];
>>                   /* Verify if we can swap operands of this stmt.  */
>> -                 if (!is_gimple_assign (stmt)
>> +                 gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
>> +                 if (!stmt
>> || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
>>                     {
>>                       if (!swap_not_matching)
>> @@ -1406,7 +1412,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                      node and temporarily do that when processing it
>>                      (or wrap operand accessors in a helper).  */
>>                   else if (swap[j] != 0
>> -                          || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
>> +                          || STMT_VINFO_NUM_SLP_USES (stmt_info))
>>                     {
>>                       if (!swap_not_matching)
>>                         {
>> @@ -1417,7 +1423,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>> "Build SLP failed: cannot swap "
>>                                                "operands of shared stmt ");
>>                               dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
>> -                                               TDF_SLIM, stmts[j], 0);
>> +                                               TDF_SLIM, stmts[j]->stmt, 0);
>>                             }
>>                           goto fail;
>>                         }
>> @@ -1454,31 +1460,23 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                  if we end up building the operand from scalars as
>>                  we'll continue to process swapped operand two.  */
>>               for (j = 0; j < group_size; ++j)
>> -               {
>> -                 gimple *stmt = stmts[j];
>> -                 gimple_set_plf (stmt, GF_PLF_1, false);
>> -               }
>> +               gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
>>               for (j = 0; j < group_size; ++j)
>> -               {
>> -                 gimple *stmt = stmts[j];
>> -                 if (matches[j] == !swap_not_matching)
>> -                   {
>> -                     /* Avoid swapping operands twice.  */
>> -                     if (gimple_plf (stmt, GF_PLF_1))
>> -                       continue;
>> -                     swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
>> -                                        gimple_assign_rhs2_ptr (stmt));
>> -                     gimple_set_plf (stmt, GF_PLF_1, true);
>> -                   }
>> -               }
>> +               if (matches[j] == !swap_not_matching)
>> +                 {
>> +                   gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
>> +                   /* Avoid swapping operands twice.  */
>> +                   if (gimple_plf (stmt, GF_PLF_1))
>> +                     continue;
>> +                   swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
>> +                                      gimple_assign_rhs2_ptr (stmt));
>> +                   gimple_set_plf (stmt, GF_PLF_1, true);
>> +                 }
>>               /* Verify we swap all duplicates or none.  */
>>               if (flag_checking)
>>                 for (j = 0; j < group_size; ++j)
>> -                 {
>> -                   gimple *stmt = stmts[j];
>> -                   gcc_assert (gimple_plf (stmt, GF_PLF_1)
>> -                               == (matches[j] == !swap_not_matching));
>> -                 }
>> +                 gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
>> +                             == (matches[j] == !swap_not_matching));
>>
>>               /* If we have all children of child built up from scalars then
>> just throw that away and build it up this node from scalars.  */
>> @@ -1486,8 +1484,7 @@ vect_build_slp_tree_2 (vec_info *vinfo,
>>                   /* ???  Rejecting patterns this way doesn't work.  We'd have
>> to do extra work to cancel the pattern so the uses see the
>>                      scalar version.  */
>> -                 && !is_pattern_stmt_p
>> -                       (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
>> +                 && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
>>                 {
>>                   unsigned int j;
>>                   slp_tree grandchild;
>> @@ -1550,16 +1547,16 @@ vect_print_slp_tree (dump_flags_t dump_k
>>                      slp_tree node)
>>  {
>>    int i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    dump_printf_loc (dump_kind, loc, "node%s\n",
>>                    SLP_TREE_DEF_TYPE (node) != vect_internal_def
>>                    ? " (external)" : "");
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>>        dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
>> -      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
>> +      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
>>      }
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_print_slp_tree (dump_kind, loc, child);
>> @@ -1575,15 +1572,15 @@ vect_print_slp_tree (dump_flags_t dump_k
>>  vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
>>  {
>>    int i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>>      return;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      if (j < 0 || i == j)
>> -      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
>> +      STMT_SLP_TYPE (stmt_info) = mark;
>>
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_mark_slp_stmts (child, mark, j);
>> @@ -1596,16 +1593,14 @@ vect_mark_slp_stmts (slp_tree node, enum
>>  vect_mark_slp_stmts_relevant (slp_tree node)
>>  {
>>    int i;
>> -  gimple *stmt;
>>    stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>>    if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
>>      return;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> -      stmt_info = vinfo_for_stmt (stmt);
>>        gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
>>                    || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
>>        STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
>> @@ -1622,8 +1617,8 @@ vect_mark_slp_stmts_relevant (slp_tree n
>>  vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
>>                            vec<unsigned> permutation)
>>  {
>> -  gimple *stmt;
>> -  vec<gimple *> tmp_stmts;
>> +  stmt_vec_info stmt_info;
>> +  vec<stmt_vec_info> tmp_stmts;
>>    unsigned int i;
>>    slp_tree child;
>>
>> @@ -1634,8 +1629,8 @@ vect_slp_rearrange_stmts (slp_tree node,
>>    tmp_stmts.create (group_size);
>>    tmp_stmts.quick_grow_cleared (group_size);
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> -    tmp_stmts[permutation[i]] = stmt;
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>> +    tmp_stmts[permutation[i]] = stmt_info;
>>
>>    SLP_TREE_SCALAR_STMTS (node).release ();
>>    SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
>> @@ -1696,13 +1691,14 @@ vect_attempt_slp_rearrange_stmts (slp_in
>>    poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
>>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>>      {
>> -      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
>> +      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +      first_stmt_info
>> +       = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
>>        /* But we have to keep those permutations that are required because
>>           of handling of gaps.  */
>>        if (known_eq (unrolling_factor, 1U)
>> -         || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
>> -             && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
>> +         || (group_size == DR_GROUP_SIZE (first_stmt_info)
>> +             && DR_GROUP_GAP (first_stmt_info) == 0))
>>         SLP_TREE_LOAD_PERMUTATION (node).release ();
>>        else
>>         for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
>> @@ -1721,7 +1717,7 @@ vect_supported_load_permutation_p (slp_i
>>    unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
>>    unsigned int i, j, k, next;
>>    slp_tree node;
>> -  gimple *stmt, *load, *next_load;
>> +  gimple *next_load;
>>
>>    if (dump_enabled_p ())
>>      {
>> @@ -1750,18 +1746,18 @@ vect_supported_load_permutation_p (slp_i
>>        return false;
>>
>>    node = SLP_INSTANCE_TREE (slp_instn);
>> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>
>>    /* Reduction (there are no data-refs in the root).
>>       In reduction chain the order of the loads is not important.  */
>> -  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
>> -      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  if (!STMT_VINFO_DATA_REF (stmt_info)
>> +      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      vect_attempt_slp_rearrange_stmts (slp_instn);
>>
>>    /* In basic block vectorization we allow any subchain of an interleaving
>>       chain.
>>       FORNOW: not supported in loop SLP because of realignment compications.  */
>> -  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_BB_VINFO (stmt_info))
>>      {
>>        /* Check whether the loads in an instance form a subchain and thus
>>           no permutation is necessary.  */
>> @@ -1771,24 +1767,25 @@ vect_supported_load_permutation_p (slp_i
>>             continue;
>>           bool subchain_p = true;
>>            next_load = NULL;
>> -          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
>> -            {
>> -              if (j != 0
>> -                 && (next_load != load
>> -                     || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
>> +         stmt_vec_info load_info;
>> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
>> +           {
>> +             if (j != 0
>> +                 && (next_load != load_info
>> +                     || DR_GROUP_GAP (load_info) != 1))
>>                 {
>>                   subchain_p = false;
>>                   break;
>>                 }
>> -              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
>> -            }
>> +             next_load = DR_GROUP_NEXT_ELEMENT (load_info);
>> +           }
>>           if (subchain_p)
>>             SLP_TREE_LOAD_PERMUTATION (node).release ();
>>           else
>>             {
>> -             stmt_vec_info group_info
>> -               = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
>> - group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>> +             stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +             group_info
>> +               = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
>>               unsigned HOST_WIDE_INT nunits;
>>               unsigned k, maxk = 0;
>>               FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
>> @@ -1831,7 +1828,7 @@ vect_supported_load_permutation_p (slp_i
>>    poly_uint64 test_vf
>>      = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
>>                              LOOP_VINFO_VECT_FACTOR
>> -                            (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
>> +                            (STMT_VINFO_LOOP_VINFO (stmt_info)));
>>    FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
>>      if (node->load_permutation.exists ()
>>         && !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
>> @@ -1847,15 +1844,15 @@ vect_supported_load_permutation_p (slp_i
>>  gimple *
>>  vect_find_last_scalar_stmt_in_slp (slp_tree node)
>>  {
>> -  gimple *last = NULL, *stmt;
>> +  gimple *last = NULL;
>> +  stmt_vec_info stmt_vinfo;
>>
>> -  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
>> +  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
>>      {
>> -      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>>        if (is_pattern_stmt_p (stmt_vinfo))
>>         last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
>>        else
>> -       last = get_later_stmt (stmt, last);
>> +       last = get_later_stmt (stmt_vinfo, last);
>>      }
>>
>>    return last;
>> @@ -1926,6 +1923,7 @@ calculate_unrolling_factor (poly_uint64
>>  vect_analyze_slp_instance (vec_info *vinfo,
>>                            gimple *stmt, unsigned max_tree_size)
>>  {
>> +  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>>    slp_instance new_instance;
>>    slp_tree node;
>>    unsigned int group_size;
>> @@ -1934,25 +1932,25 @@ vect_analyze_slp_instance (vec_info *vin
>>    stmt_vec_info next_info;
>>    unsigned int i;
>>    vec<slp_tree> loads;
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
>> -  vec<gimple *> scalar_stmts;
>> +  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> +  vec<stmt_vec_info> scalar_stmts;
>>
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>      {
>>        scalar_type = TREE_TYPE (DR_REF (dr));
>>        vectype = get_vectype_for_scalar_type (scalar_type);
>> -      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
>> +      group_size = DR_GROUP_SIZE (stmt_info);
>>      }
>> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      {
>>        gcc_assert (is_a <loop_vec_info> (vinfo));
>> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
>> -      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
>> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>> +      group_size = REDUC_GROUP_SIZE (stmt_info);
>>      }
>>    else
>>      {
>>        gcc_assert (is_a <loop_vec_info> (vinfo));
>> -      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
>> +      vectype = STMT_VINFO_VECTYPE (stmt_info);
>>        group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
>>      }
>>
>> @@ -1973,38 +1971,38 @@ vect_analyze_slp_instance (vec_info *vin
>>    /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
>>    scalar_stmts.create (group_size);
>>    next = stmt;
>> -  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
>> +  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>      {
>>        /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
>>        while (next)
>>          {
>> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
>> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
>> -           scalar_stmts.safe_push (
>> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
>> +         next_info = vinfo_for_stmt (next);
>> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
>> +             && STMT_VINFO_RELATED_STMT (next_info))
>> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>>           else
>> -            scalar_stmts.safe_push (next);
>> +           scalar_stmts.safe_push (next_info);
>>            next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>>          }
>>      }
>> -  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
>> +  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
>>      {
>>        /* Collect the reduction stmts and store them in
>>          SLP_TREE_SCALAR_STMTS.  */
>>        while (next)
>>          {
>> -         if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
>> -             && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
>> -           scalar_stmts.safe_push (
>> -                 STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
>> +         next_info = vinfo_for_stmt (next);
>> +         if (STMT_VINFO_IN_PATTERN_P (next_info)
>> +             && STMT_VINFO_RELATED_STMT (next_info))
>> +           scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
>>           else
>> -            scalar_stmts.safe_push (next);
>> +           scalar_stmts.safe_push (next_info);
>>            next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
>>          }
>>       /* Mark the first element of the reduction chain as reduction to properly
>>          transform the node.  In the reduction analysis phase only the last
>>          element of the chain is marked as reduction.  */
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
>> +      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>>      }
>>    else
>>      {
>> @@ -2068,15 +2066,16 @@ vect_analyze_slp_instance (vec_info *vin
>>         {
>>           vec<unsigned> load_permutation;
>>           int j;
>> -         gimple *load, *first_stmt;
>> +         stmt_vec_info load_info;
>> +         gimple *first_stmt;
>>           bool this_load_permuted = false;
>>           load_permutation.create (group_size);
>>           first_stmt = DR_GROUP_FIRST_ELEMENT
>> -             (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
>> -         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
>> +           (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>> +         FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
>>             {
>> -                 int load_place = vect_get_place_in_interleaving_chain
>> -                                    (load, first_stmt);
>> +             int load_place = vect_get_place_in_interleaving_chain
>> +               (load_info, first_stmt);
>>               gcc_assert (load_place != -1);
>>               if (load_place != j)
>>                 this_load_permuted = true;
>> @@ -2124,7 +2123,7 @@ vect_analyze_slp_instance (vec_info *vin
>>           FOR_EACH_VEC_ELT (loads, i, load_node)
>>             {
>>               gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
>> -                 (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
>> +               (SLP_TREE_SCALAR_STMTS (load_node)[0]);
>>               stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
>>                   /* Use SLP for strided accesses (or if we
>>                      can't load-lanes).  */
>> @@ -2307,10 +2306,10 @@ vect_make_slp_decision (loop_vec_info lo
>>  static void
>>  vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
>> +  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
>>    imm_use_iterator imm_iter;
>>    gimple *use_stmt;
>> -  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
>> +  stmt_vec_info use_vinfo;
>>    slp_tree child;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
>>    int j;
>> @@ -2326,6 +2325,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>>        gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
>>        /* If we get a pattern stmt here we have to use the LHS of the
>>           original stmt for immediate uses.  */
>> +      gimple *stmt = stmt_vinfo->stmt;
>>        if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
>>           && STMT_VINFO_RELATED_STMT (stmt_vinfo))
>>         stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
>> @@ -2366,7 +2366,7 @@ vect_detect_hybrid_slp_stmts (slp_tree n
>>        if (dump_enabled_p ())
>>         {
>>           dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
>>         }
>>        STMT_SLP_TYPE (stmt_vinfo) = hybrid;
>>      }
>> @@ -2525,9 +2525,8 @@ vect_slp_analyze_node_operations_1 (vec_
>>                                     slp_instance node_instance,
>>                                     stmt_vector_for_cost *cost_vec)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> -  gcc_assert (stmt_info);
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> +  gimple *stmt = stmt_info->stmt;
>>    gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
>>
>>    /* For BB vectorization vector types are assigned here.
>> @@ -2551,10 +2550,10 @@ vect_slp_analyze_node_operations_1 (vec_
>>             return false;
>>         }
>>
>> -      gimple *sstmt;
>> +      stmt_vec_info sstmt_info;
>>        unsigned int i;
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
>> -       STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
>> +       STMT_VINFO_VECTYPE (sstmt_info) = vectype;
>>      }
>>
>>    /* Calculate the number of vector statements to be created for the
>> @@ -2626,14 +2625,14 @@ vect_slp_analyze_node_operations (vec_in
>>    /* Push SLP node def-type to stmt operands.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>>         = SLP_TREE_DEF_TYPE (child);
>>    bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
>>                                                  cost_vec);
>>    /* Restore def-types.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
>> +      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
>>         = vect_internal_def;
>>    if (! res)
>>      return false;
>> @@ -2665,11 +2664,11 @@ vect_slp_analyze_operations (vec_info *v
>>                                              instance, visited, &lvisited,
>>                                              &cost_vec))
>>          {
>> +         slp_tree node = SLP_INSTANCE_TREE (instance);
>> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>           dump_printf_loc (MSG_NOTE, vect_location,
>> "removing SLP instance operations starting from: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>> -                           SLP_TREE_SCALAR_STMTS
>> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>           vect_free_slp_instance (instance, false);
>>            vinfo->slp_instances.ordered_remove (i);
>>           cost_vec.release ();
>> @@ -2701,14 +2700,14 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>                          stmt_vector_for_cost *cost_vec)
>>  {
>>    unsigned i;
>> -  gimple *stmt;
>> +  stmt_vec_info stmt_info;
>>    slp_tree child;
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> +      gimple *stmt = stmt_info->stmt;
>>        ssa_op_iter op_iter;
>>        def_operand_p def_p;
>> -      stmt_vec_info stmt_info;
>>
>>        if ((*life)[i])
>>         continue;
>> @@ -2724,8 +2723,7 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>           gimple *use_stmt;
>>           FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
>>             if (!is_gimple_debug (use_stmt)
>> -               && (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
>> -                                            use_stmt)
>> +               && (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
>>                     || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
>>               {
>>                 (*life)[i] = true;
>> @@ -2740,7 +2738,6 @@ vect_bb_slp_scalar_cost (basic_block bb,
>>         continue;
>>        gimple_set_visited (stmt, true);
>>
>> -      stmt_info = vinfo_for_stmt (stmt);
>>        vect_cost_for_stmt kind;
>>        if (STMT_VINFO_DATA_REF (stmt_info))
>>          {
>> @@ -2944,11 +2941,11 @@ vect_slp_analyze_bb_1 (gimple_stmt_itera
>>        if (! vect_slp_analyze_and_verify_instance_alignment (instance)
>>           || ! vect_slp_analyze_instance_dependence (instance))
>>         {
>> +         slp_tree node = SLP_INSTANCE_TREE (instance);
>> +         stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>           dump_printf_loc (MSG_NOTE, vect_location,
>> "removing SLP instance operations starting from: ");
>> -         dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
>> -                           SLP_TREE_SCALAR_STMTS
>> -                             (SLP_INSTANCE_TREE (instance))[0], 0);
>> +         dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>           vect_free_slp_instance (instance, false);
>>           BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
>>           continue;
>> @@ -3299,9 +3296,9 @@ vect_get_constant_vectors (tree op, slp_
>>                             vec<tree> *vec_oprnds,
>>                             unsigned int op_num, unsigned int number_of_vectors)
>>  {
>> -  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
>> -  gimple *stmt = stmts[0];
>> -  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
>> +  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
>> +  stmt_vec_info stmt_vinfo = stmts[0];
>> +  gimple *stmt = stmt_vinfo->stmt;
>>    unsigned HOST_WIDE_INT nunits;
>>    tree vec_cst;
>>    unsigned j, number_of_places_left_in_vector;
>> @@ -3320,7 +3317,7 @@ vect_get_constant_vectors (tree op, slp_
>>
>>    /* Check if vector type is a boolean vector.  */
>>    if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
>> -      && vect_mask_constant_operand_p (stmt, op_num))
>> +      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
>>      vector_type
>>        = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
>>    else
>> @@ -3366,8 +3363,9 @@ vect_get_constant_vectors (tree op, slp_
>>    bool place_after_defs = false;
>>    for (j = 0; j < number_of_copies; j++)
>>      {
>> -      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
>> +      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
>>          {
>> +         stmt = stmt_vinfo->stmt;
>>            if (is_store)
>>              op = gimple_assign_rhs1 (stmt);
>>            else
>> @@ -3496,10 +3494,12 @@ vect_get_constant_vectors (tree op, slp_
>>                 {
>>                   gsi = gsi_for_stmt
>>                           (vect_find_last_scalar_stmt_in_slp (slp_node));
>> -                 init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
>> +                 init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
>> +                                          &gsi);
>>                 }
>>               else
>> -               init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
>> +               init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
>> +                                        NULL);
>>               if (ctor_seq != NULL)
>>                 {
>>                   gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
>> @@ -3612,15 +3612,14 @@ vect_get_slp_defs (vec<tree> ops, slp_tr
>>           /* We have to check both pattern and original def, if available.  */
>>           if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
>>             {
>> -             gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
>> -             stmt_vec_info related
>> -               = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
>> +             stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
>> +             stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
>>               tree first_def_op;
>>
>> -             if (gimple_code (first_def) == GIMPLE_PHI)
>> +             if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
>>                 first_def_op = gimple_phi_result (first_def);
>>               else
>> -               first_def_op = gimple_get_lhs (first_def);
>> +               first_def_op = gimple_get_lhs (first_def_info->stmt);
>>               if (operand_equal_p (oprnd, first_def_op, 0)
>>                   || (related
>>                       && operand_equal_p (oprnd,
>> @@ -3686,8 +3685,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                               slp_instance slp_node_instance, bool analyze_only,
>>                               unsigned *n_perms)
>>  {
>> -  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
>> +  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>    vec_info *vinfo = stmt_info->vinfo;
>>    tree mask_element_type = NULL_TREE, mask_type;
>>    int vec_index = 0;
>> @@ -3779,7 +3777,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                                    "permutation requires at "
>>                                    "least three vectors ");
>>                   dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
>> -                                   stmt, 0);
>> +                                   stmt_info->stmt, 0);
>>                 }
>>               gcc_assert (analyze_only);
>>               return false;
>> @@ -3832,6 +3830,7 @@ vect_transform_slp_perm_load (slp_tree n
>>                   stmt_vec_info perm_stmt_info;
>>                   if (! noop_p)
>>                     {
>> +                     gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>                       tree perm_dest
>>                         = vect_create_destination_var (gimple_assign_lhs (stmt),
>>                                                        vectype);
>> @@ -3841,7 +3840,8 @@ vect_transform_slp_perm_load (slp_tree n
>>                                                first_vec, second_vec,
>>                                                mask_vec);
>>                       perm_stmt_info
>> -                       = vect_finish_stmt_generation (stmt, perm_stmt, gsi);
>> +                       = vect_finish_stmt_generation (stmt_info, perm_stmt,
>> +                                                      gsi);
>>                     }
>>                   else
>>                     /* If mask was NULL_TREE generate the requested
>> @@ -3870,7 +3870,6 @@ vect_transform_slp_perm_load (slp_tree n
>>  vect_schedule_slp_instance (slp_tree node, slp_instance instance,
>>                             scalar_stmts_to_slp_tree_map_t *bst_map)
>>  {
>> -  gimple *stmt;
>>    bool grouped_store, is_store;
>>    gimple_stmt_iterator si;
>>    stmt_vec_info stmt_info;
>> @@ -3897,11 +3896,13 @@ vect_schedule_slp_instance (slp_tree nod
>>    /* Push SLP node def-type to stmts.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
>> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
>> +      {
>> +       stmt_vec_info child_stmt_info;
>> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
>> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
>> +      }
>>
>> -  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  stmt_info = vinfo_for_stmt (stmt);
>> +  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>>
>>    /* VECTYPE is the type of the destination.  */
>>    vectype = STMT_VINFO_VECTYPE (stmt_info);
>> @@ -3916,7 +3917,7 @@ vect_schedule_slp_instance (slp_tree nod
>>      {
>>        dump_printf_loc (MSG_NOTE,vect_location,
>>                        "------>vectorizing SLP node starting from: ");
>> -      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
>> +      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
>>      }
>>
>>    /* Vectorized stmts go before the last scalar stmt which is where
>> @@ -3928,7 +3929,7 @@ vect_schedule_slp_instance (slp_tree nod
>>       chain is marked as reduction.  */
>>    if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>>        && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
>> -      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
>> +      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
>>      {
>>        STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
>>        STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
>> @@ -3938,29 +3939,33 @@ vect_schedule_slp_instance (slp_tree nod
>>       both operations and then performing a merge.  */
>>    if (SLP_TREE_TWO_OPERATORS (node))
>>      {
>> +      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
>>        enum tree_code code0 = gimple_assign_rhs_code (stmt);
>>        enum tree_code ocode = ERROR_MARK;
>> -      gimple *ostmt;
>> +      stmt_vec_info ostmt_info;
>>        vec_perm_builder mask (group_size, group_size, 1);
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
>> -       if (gimple_assign_rhs_code (ostmt) != code0)
>> -         {
>> -           mask.quick_push (1);
>> -           ocode = gimple_assign_rhs_code (ostmt);
>> -         }
>> -       else
>> -         mask.quick_push (0);
>> +      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
>> +       {
>> +         gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
>> +         if (gimple_assign_rhs_code (ostmt) != code0)
>> +           {
>> +             mask.quick_push (1);
>> +             ocode = gimple_assign_rhs_code (ostmt);
>> +           }
>> +         else
>> +           mask.quick_push (0);
>> +       }
>>        if (ocode != ERROR_MARK)
>>         {
>>           vec<stmt_vec_info> v0;
>>           vec<stmt_vec_info> v1;
>>           unsigned j;
>>           tree tmask = NULL_TREE;
>> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>>           v0 = SLP_TREE_VEC_STMTS (node).copy ();
>>           SLP_TREE_VEC_STMTS (node).truncate (0);
>>           gimple_assign_set_rhs_code (stmt, ocode);
>> -         vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +         vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
>>           gimple_assign_set_rhs_code (stmt, code0);
>>           v1 = SLP_TREE_VEC_STMTS (node).copy ();
>>           SLP_TREE_VEC_STMTS (node).truncate (0);
>> @@ -3998,20 +4003,24 @@ vect_schedule_slp_instance (slp_tree nod
>>                                            gimple_assign_lhs (v1[j]->stmt),
>>                                            tmask);
>>               SLP_TREE_VEC_STMTS (node).quick_push
>> -               (vect_finish_stmt_generation (stmt, vstmt, &si));
>> +               (vect_finish_stmt_generation (stmt_info, vstmt, &si));
>>             }
>>           v0.release ();
>>           v1.release ();
>>           return false;
>>         }
>>      }
>> -  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
>> +  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
>> +                                 instance);
>>
>>    /* Restore stmt def-types.  */
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
>> -      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
>> -       STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
>> +      {
>> +       stmt_vec_info child_stmt_info;
>> +       FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
>> +         STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
>> +      }
>>
>>    return is_store;
>>  }
>> @@ -4024,7 +4033,7 @@ vect_schedule_slp_instance (slp_tree nod
>>  static void
>>  vect_remove_slp_scalar_calls (slp_tree node)
>>  {
>> -  gimple *stmt, *new_stmt;
>> +  gimple *new_stmt;
>>    gimple_stmt_iterator gsi;
>>    int i;
>>    slp_tree child;
>> @@ -4037,13 +4046,12 @@ vect_remove_slp_scalar_calls (slp_tree n
>>    FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
>>      vect_remove_slp_scalar_calls (child);
>>
>> -  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
>> +  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
>>      {
>> -      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
>> +      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
>> +      if (!stmt || gimple_bb (stmt) == NULL)
>>         continue;
>> -      stmt_info = vinfo_for_stmt (stmt);
>> -      if (stmt_info == NULL_STMT_VEC_INFO
>> -         || is_pattern_stmt_p (stmt_info)
>> +      if (is_pattern_stmt_p (stmt_info)
>>           || !PURE_SLP_STMT (stmt_info))
>>         continue;
>>        lhs = gimple_call_lhs (stmt);
>> @@ -4085,7 +4093,7 @@ vect_schedule_slp (vec_info *vinfo)
>>    FOR_EACH_VEC_ELT (slp_instances, i, instance)
>>      {
>>        slp_tree root = SLP_INSTANCE_TREE (instance);
>> -      gimple *store;
>> +      stmt_vec_info store_info;
>>        unsigned int j;
>>        gimple_stmt_iterator gsi;
>>
>> @@ -4099,20 +4107,20 @@ vect_schedule_slp (vec_info *vinfo)
>>        if (is_a <loop_vec_info> (vinfo))
>>         vect_remove_slp_scalar_calls (root);
>>
>> -      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
>> +      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
>>                    && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
>>          {
>> -          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
>> -            break;
>> +         if (!STMT_VINFO_DATA_REF (store_info))
>> +           break;
>>
>> -         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
>> -           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
>> -          /* Free the attached stmt_vec_info and remove the stmt.  */
>> -          gsi = gsi_for_stmt (store);
>> -         unlink_stmt_vdef (store);
>> -          gsi_remove (&gsi, true);
>> -         release_defs (store);
>> -          free_stmt_vec_info (store);
>> +         if (is_pattern_stmt_p (store_info))
>> +           store_info = STMT_VINFO_RELATED_STMT (store_info);
>> +         /* Free the attached stmt_vec_info and remove the stmt.  */
>> +         gsi = gsi_for_stmt (store_info);
>> +         unlink_stmt_vdef (store_info);
>> +         gsi_remove (&gsi, true);
>> +         release_defs (store_info);
>> +         free_stmt_vec_info (store_info);
>>          }
>>      }
>>
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:22:47.485157343 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:23:00.397042684 +0100
>> @@ -665,7 +665,8 @@ vect_slp_analyze_data_ref_dependence (st
>>
>>  static bool
>>  vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
>> -                                  vec<gimple *> stores, gimple *last_store)
>> +                                  vec<stmt_vec_info> stores,
>> +                                  gimple *last_store)
>>  {
>>    /* This walks over all stmts involved in the SLP load/store done
>>       in NODE verifying we can sink them up to the last stmt in the
>> @@ -673,13 +674,13 @@ vect_slp_analyze_node_dependences (slp_i
>>    gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
>>    for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
>>      {
>> -      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
>> -      if (access == last_access)
>> +      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
>> +      if (access_info == last_access)
>>         continue;
>> -      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
>> +      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
>>        ao_ref ref;
>>        bool ref_initialized_p = false;
>> -      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
>> +      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
>>            gsi_stmt (gsi) != last_access; gsi_next (&gsi))
diff mbox series

Patch
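
For orientation when reading the hunks below, the conversion follows one
mechanical pattern throughout.  The fragment that follows is only an
illustration distilled from the vect_slp_analyze_node_operations_1 hunk,
not an extra change in the series:

  /* Old style: the node holds bare gimple statements, so each walk has
     to map the statement to its stmt_vec_info by hand.  */
  gimple *stmt;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
    STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt)) = vectype;

  /* New style: the node holds stmt_vec_infos directly; the underlying
     gimple statement, where still needed, is reached via stmt_info->stmt.  */
  stmt_vec_info stmt_info;
  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
    STMT_VINFO_VECTYPE (stmt_info) = vectype;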

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:22:57.277070390 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:23:00.401042649 +0100
@@ -138,7 +138,7 @@  struct _slp_tree {
   /* Nodes that contain def-stmts of this node statements operands.  */
   vec<slp_tree> children;
   /* A group of scalar stmts to be vectorized together.  */
-  vec<gimple *> stmts;
+  vec<stmt_vec_info> stmts;
   /* Load permutation relative to the stores, NULL if there is no
      permutation.  */
   vec<unsigned> load_permutation;
Index: gcc/tree-vect-slp.c
===================================================================
--- gcc/tree-vect-slp.c	2018-07-24 10:22:57.277070390 +0100
+++ gcc/tree-vect-slp.c	2018-07-24 10:23:00.401042649 +0100
@@ -66,11 +66,11 @@  vect_free_slp_tree (slp_tree node, bool
      statements would be redundant.  */
   if (!final_p)
     {
-      gimple *stmt;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+      stmt_vec_info stmt_info;
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
 	{
-	  gcc_assert (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) > 0);
-	  STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))--;
+	  gcc_assert (STMT_VINFO_NUM_SLP_USES (stmt_info) > 0);
+	  STMT_VINFO_NUM_SLP_USES (stmt_info)--;
 	}
     }
 
@@ -99,21 +99,21 @@  vect_free_slp_instance (slp_instance ins
 /* Create an SLP node for SCALAR_STMTS.  */
 
 static slp_tree
-vect_create_new_slp_node (vec<gimple *> scalar_stmts)
+vect_create_new_slp_node (vec<stmt_vec_info> scalar_stmts)
 {
   slp_tree node;
-  gimple *stmt = scalar_stmts[0];
+  stmt_vec_info stmt_info = scalar_stmts[0];
   unsigned int nops;
 
-  if (is_gimple_call (stmt))
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     nops = gimple_call_num_args (stmt);
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       nops = gimple_num_ops (stmt) - 1;
       if (gimple_assign_rhs_code (stmt) == COND_EXPR)
 	nops++;
     }
-  else if (gimple_code (stmt) == GIMPLE_PHI)
+  else if (is_a <gphi *> (stmt_info->stmt))
     nops = 0;
   else
     return NULL;
@@ -128,8 +128,8 @@  vect_create_new_slp_node (vec<gimple *>
   SLP_TREE_DEF_TYPE (node) = vect_internal_def;
 
   unsigned i;
-  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt)
-    STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt))++;
+  FOR_EACH_VEC_ELT (scalar_stmts, i, stmt_info)
+    STMT_VINFO_NUM_SLP_USES (stmt_info)++;
 
   return node;
 }
@@ -141,7 +141,7 @@  vect_create_new_slp_node (vec<gimple *>
 typedef struct _slp_oprnd_info
 {
   /* Def-stmts for the operands.  */
-  vec<gimple *> def_stmts;
+  vec<stmt_vec_info> def_stmts;
   /* Information about the first statement, its vector def-type, type, the
      operand itself in case it's constant, and an indication if it's a pattern
      stmt.  */
@@ -297,10 +297,10 @@  can_duplicate_and_interleave_p (unsigned
    ok return 0.  */
 static int
 vect_get_and_check_slp_defs (vec_info *vinfo, unsigned char *swap,
-			     vec<gimple *> stmts, unsigned stmt_num,
+			     vec<stmt_vec_info> stmts, unsigned stmt_num,
 			     vec<slp_oprnd_info> *oprnds_info)
 {
-  gimple *stmt = stmts[stmt_num];
+  stmt_vec_info stmt_info = stmts[stmt_num];
   tree oprnd;
   unsigned int i, number_of_oprnds;
   enum vect_def_type dt = vect_uninitialized_def;
@@ -312,12 +312,12 @@  vect_get_and_check_slp_defs (vec_info *v
   bool first = stmt_num == 0;
   bool second = stmt_num == 1;
 
-  if (is_gimple_call (stmt))
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     {
       number_of_oprnds = gimple_call_num_args (stmt);
       first_op_idx = 3;
     }
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       enum tree_code code = gimple_assign_rhs_code (stmt);
       number_of_oprnds = gimple_num_ops (stmt) - 1;
@@ -347,12 +347,13 @@  vect_get_and_check_slp_defs (vec_info *v
 	  int *map = maps[*swap];
 
 	  if (i < 2)
-	    oprnd = TREE_OPERAND (gimple_op (stmt, first_op_idx), map[i]);
+	    oprnd = TREE_OPERAND (gimple_op (stmt_info->stmt,
+					     first_op_idx), map[i]);
 	  else
-	    oprnd = gimple_op (stmt, map[i]);
+	    oprnd = gimple_op (stmt_info->stmt, map[i]);
 	}
       else
-	oprnd = gimple_op (stmt, first_op_idx + (swapped ? !i : i));
+	oprnd = gimple_op (stmt_info->stmt, first_op_idx + (swapped ? !i : i));
 
       oprnd_info = (*oprnds_info)[i];
 
@@ -518,18 +519,20 @@  vect_get_and_check_slp_defs (vec_info *v
     {
       /* If there are already uses of this stmt in a SLP instance then
          we've committed to the operand order and can't swap it.  */
-      if (STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)) != 0)
+      if (STMT_VINFO_NUM_SLP_USES (stmt_info) != 0)
 	{
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			       "Build SLP failed: cannot swap operands of "
 			       "shared stmt ");
-	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM, stmt, 0);
+	      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
+				stmt_info->stmt, 0);
 	    }
 	  return -1;
 	}
 
+      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
       if (first_op_cond)
 	{
 	  tree cond = gimple_assign_rhs1 (stmt);
@@ -655,8 +658,9 @@  vect_record_max_nunits (vec_info *vinfo,
    would be permuted.  */
 
 static bool
-vect_two_operations_perm_ok_p (vec<gimple *> stmts, unsigned int group_size,
-			       tree vectype, tree_code alt_stmt_code)
+vect_two_operations_perm_ok_p (vec<stmt_vec_info> stmts,
+			       unsigned int group_size, tree vectype,
+			       tree_code alt_stmt_code)
 {
   unsigned HOST_WIDE_INT count;
   if (!TYPE_VECTOR_SUBPARTS (vectype).is_constant (&count))
@@ -666,7 +670,8 @@  vect_two_operations_perm_ok_p (vec<gimpl
   for (unsigned int i = 0; i < count; ++i)
     {
       unsigned int elt = i;
-      if (gimple_assign_rhs_code (stmts[i % group_size]) == alt_stmt_code)
+      gassign *stmt = as_a <gassign *> (stmts[i % group_size]->stmt);
+      if (gimple_assign_rhs_code (stmt) == alt_stmt_code)
 	elt += count;
       sel.quick_push (elt);
     }
@@ -690,12 +695,12 @@  vect_two_operations_perm_ok_p (vec<gimpl
 
 static bool
 vect_build_slp_tree_1 (vec_info *vinfo, unsigned char *swap,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits, bool *matches,
 		       bool *two_operators)
 {
   unsigned int i;
-  gimple *first_stmt = stmts[0], *stmt = stmts[0];
+  stmt_vec_info first_stmt_info = stmts[0];
   enum tree_code first_stmt_code = ERROR_MARK;
   enum tree_code alt_stmt_code = ERROR_MARK;
   enum tree_code rhs_code = ERROR_MARK;
@@ -710,9 +715,10 @@  vect_build_slp_tree_1 (vec_info *vinfo,
   gimple *first_load = NULL, *prev_first_load = NULL;
 
   /* For every stmt in NODE find its def stmt/s.  */
-  FOR_EACH_VEC_ELT (stmts, i, stmt)
+  stmt_vec_info stmt_info;
+  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
     {
-      stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+      gimple *stmt = stmt_info->stmt;
       swap[i] = 0;
       matches[i] = false;
 
@@ -723,7 +729,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 	}
 
       /* Fail to vectorize statements marked as unvectorizable.  */
-      if (!STMT_VINFO_VECTORIZABLE (vinfo_for_stmt (stmt)))
+      if (!STMT_VINFO_VECTORIZABLE (stmt_info))
         {
           if (dump_enabled_p ())
             {
@@ -755,7 +761,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
       if (!vect_get_vector_types_for_stmt (stmt_info, &vectype,
 					   &nunits_vectype)
 	  || (nunits_vectype
-	      && !vect_record_max_nunits (vinfo, stmt, group_size,
+	      && !vect_record_max_nunits (vinfo, stmt_info, group_size,
 					  nunits_vectype, max_nunits)))
 	{
 	  /* Fatal mismatch.  */
@@ -877,7 +883,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 		   && (alt_stmt_code == PLUS_EXPR
 		       || alt_stmt_code == MINUS_EXPR)
 		   && rhs_code == alt_stmt_code)
-              && !(STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
+	      && !(STMT_VINFO_GROUPED_ACCESS (stmt_info)
                    && (first_stmt_code == ARRAY_REF
                        || first_stmt_code == BIT_FIELD_REF
                        || first_stmt_code == INDIRECT_REF
@@ -893,7 +899,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 		  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				   "original stmt ");
 		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				    first_stmt, 0);
+				    first_stmt_info->stmt, 0);
 		}
 	      /* Mismatch.  */
 	      continue;
@@ -915,8 +921,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 
 	  if (rhs_code == CALL_EXPR)
 	    {
-	      gimple *first_stmt = stmts[0];
-	      if (!compatible_calls_p (as_a <gcall *> (first_stmt),
+	      if (!compatible_calls_p (as_a <gcall *> (stmts[0]->stmt),
 				       as_a <gcall *> (stmt)))
 		{
 		  if (dump_enabled_p ())
@@ -933,7 +938,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 	}
 
       /* Grouped store or load.  */
-      if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+      if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	{
 	  if (REFERENCE_CLASS_P (lhs))
 	    {
@@ -943,7 +948,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 	  else
 	    {
 	      /* Load.  */
-              first_load = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt));
+	      first_load = DR_GROUP_FIRST_ELEMENT (stmt_info);
               if (prev_first_load)
                 {
                   /* Check that there are no loads from different interleaving
@@ -1061,7 +1066,7 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 					     vectype, alt_stmt_code))
 	{
 	  for (i = 0; i < group_size; ++i)
-	    if (gimple_assign_rhs_code (stmts[i]) == alt_stmt_code)
+	    if (gimple_assign_rhs_code (stmts[i]->stmt) == alt_stmt_code)
 	      {
 		matches[i] = false;
 		if (dump_enabled_p ())
@@ -1070,11 +1075,11 @@  vect_build_slp_tree_1 (vec_info *vinfo,
 				     "Build SLP failed: different operation "
 				     "in stmt ");
 		    dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				      stmts[i], 0);
+				      stmts[i]->stmt, 0);
 		    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 				     "original stmt ");
 		    dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				      first_stmt, 0);
+				      first_stmt_info->stmt, 0);
 		  }
 	      }
 	  return false;
@@ -1090,8 +1095,8 @@  vect_build_slp_tree_1 (vec_info *vinfo,
    need a special value for deleted that differs from empty.  */
 struct bst_traits
 {
-  typedef vec <gimple *> value_type;
-  typedef vec <gimple *> compare_type;
+  typedef vec <stmt_vec_info> value_type;
+  typedef vec <stmt_vec_info> compare_type;
   static inline hashval_t hash (value_type);
   static inline bool equal (value_type existing, value_type candidate);
   static inline bool is_empty (value_type x) { return !x.exists (); }
@@ -1105,7 +1110,7 @@  bst_traits::hash (value_type x)
 {
   inchash::hash h;
   for (unsigned i = 0; i < x.length (); ++i)
-    h.add_int (gimple_uid (x[i]));
+    h.add_int (gimple_uid (x[i]->stmt));
   return h.end ();
 }
 inline bool
@@ -1128,7 +1133,7 @@  typedef hash_map <vec <gimple *>, slp_tr
 
 static slp_tree
 vect_build_slp_tree_2 (vec_info *vinfo,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits,
 		       vec<slp_tree> *loads,
 		       bool *matches, unsigned *npermutes, unsigned *tree_size,
@@ -1136,7 +1141,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 
 static slp_tree
 vect_build_slp_tree (vec_info *vinfo,
-		     vec<gimple *> stmts, unsigned int group_size,
+		     vec<stmt_vec_info> stmts, unsigned int group_size,
 		     poly_uint64 *max_nunits, vec<slp_tree> *loads,
 		     bool *matches, unsigned *npermutes, unsigned *tree_size,
 		     unsigned max_tree_size)
@@ -1151,7 +1156,7 @@  vect_build_slp_tree (vec_info *vinfo,
      scalars, see PR81723.  */
   if (! res)
     {
-      vec <gimple *> x;
+      vec <stmt_vec_info> x;
       x.create (stmts.length ());
       x.splice (stmts);
       bst_fail->add (x);
@@ -1168,7 +1173,7 @@  vect_build_slp_tree (vec_info *vinfo,
 
 static slp_tree
 vect_build_slp_tree_2 (vec_info *vinfo,
-		       vec<gimple *> stmts, unsigned int group_size,
+		       vec<stmt_vec_info> stmts, unsigned int group_size,
 		       poly_uint64 *max_nunits,
 		       vec<slp_tree> *loads,
 		       bool *matches, unsigned *npermutes, unsigned *tree_size,
@@ -1176,53 +1181,54 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 {
   unsigned nops, i, this_tree_size = 0;
   poly_uint64 this_max_nunits = *max_nunits;
-  gimple *stmt;
   slp_tree node;
 
   matches[0] = false;
 
-  stmt = stmts[0];
-  if (is_gimple_call (stmt))
+  stmt_vec_info stmt_info = stmts[0];
+  if (gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt))
     nops = gimple_call_num_args (stmt);
-  else if (is_gimple_assign (stmt))
+  else if (gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt))
     {
       nops = gimple_num_ops (stmt) - 1;
       if (gimple_assign_rhs_code (stmt) == COND_EXPR)
 	nops++;
     }
-  else if (gimple_code (stmt) == GIMPLE_PHI)
+  else if (is_a <gphi *> (stmt_info->stmt))
     nops = 0;
   else
     return NULL;
 
   /* If the SLP node is a PHI (induction or reduction), terminate
      the recursion.  */
-  if (gimple_code (stmt) == GIMPLE_PHI)
+  if (gphi *stmt = dyn_cast <gphi *> (stmt_info->stmt))
     {
       tree scalar_type = TREE_TYPE (PHI_RESULT (stmt));
       tree vectype = get_vectype_for_scalar_type (scalar_type);
-      if (!vect_record_max_nunits (vinfo, stmt, group_size, vectype,
+      if (!vect_record_max_nunits (vinfo, stmt_info, group_size, vectype,
 				   max_nunits))
 	return NULL;
 
-      vect_def_type def_type = STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt));
+      vect_def_type def_type = STMT_VINFO_DEF_TYPE (stmt_info);
       /* Induction from different IVs is not supported.  */
       if (def_type == vect_induction_def)
 	{
-	  FOR_EACH_VEC_ELT (stmts, i, stmt)
-	    if (stmt != stmts[0])
+	  stmt_vec_info other_info;
+	  FOR_EACH_VEC_ELT (stmts, i, other_info)
+	    if (stmt_info != other_info)
 	      return NULL;
 	}
       else
 	{
 	  /* Else def types have to match.  */
-	  FOR_EACH_VEC_ELT (stmts, i, stmt)
+	  stmt_vec_info other_info;
+	  FOR_EACH_VEC_ELT (stmts, i, other_info)
 	    {
 	      /* But for reduction chains only check on the first stmt.  */
-	      if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt))
-		  && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)) != stmt)
+	      if (REDUC_GROUP_FIRST_ELEMENT (other_info)
+		  && REDUC_GROUP_FIRST_ELEMENT (other_info) != stmt_info)
 		continue;
-	      if (STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) != def_type)
+	      if (STMT_VINFO_DEF_TYPE (other_info) != def_type)
 		return NULL;
 	    }
 	}
@@ -1238,8 +1244,8 @@  vect_build_slp_tree_2 (vec_info *vinfo,
     return NULL;
 
   /* If the SLP node is a load, terminate the recursion.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt))
-      && DR_IS_READ (STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
+      && DR_IS_READ (STMT_VINFO_DATA_REF (stmt_info)))
     {
       *max_nunits = this_max_nunits;
       node = vect_create_new_slp_node (stmts);
@@ -1250,7 +1256,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
   /* Get at the operands, verifying they are compatible.  */
   vec<slp_oprnd_info> oprnds_info = vect_create_oprnd_info (nops, group_size);
   slp_oprnd_info oprnd_info;
-  FOR_EACH_VEC_ELT (stmts, i, stmt)
+  FOR_EACH_VEC_ELT (stmts, i, stmt_info)
     {
       int res = vect_get_and_check_slp_defs (vinfo, &swap[i],
 					     stmts, i, &oprnds_info);
@@ -1269,7 +1275,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
   auto_vec<slp_tree, 4> children;
   auto_vec<slp_tree> this_loads;
 
-  stmt = stmts[0];
+  stmt_info = stmts[0];
 
   if (tree_size)
     max_tree_size -= *tree_size;
@@ -1307,8 +1313,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 	      /* ???  Rejecting patterns this way doesn't work.  We'd have to
 		 do extra work to cancel the pattern so the uses see the
 		 scalar version.  */
-	      && !is_pattern_stmt_p
-	            (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
+	      && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
 	    {
 	      slp_tree grandchild;
 
@@ -1352,7 +1357,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 	  /* ???  Rejecting patterns this way doesn't work.  We'd have to
 	     do extra work to cancel the pattern so the uses see the
 	     scalar version.  */
-	  && !is_pattern_stmt_p (vinfo_for_stmt (stmt)))
+	  && !is_pattern_stmt_p (stmt_info))
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "Building vector operands from scalars\n");
@@ -1373,7 +1378,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 	     as well as the arms under some constraints.  */
 	  && nops == 2
 	  && oprnds_info[1]->first_dt == vect_internal_def
-	  && is_gimple_assign (stmt)
+	  && is_gimple_assign (stmt_info->stmt)
 	  /* Do so only if the number of not successful permutes was nor more
 	     than a cut-ff as re-trying the recursive match on
 	     possibly each level of the tree would expose exponential
@@ -1389,9 +1394,10 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 		{
 		  if (matches[j] != !swap_not_matching)
 		    continue;
-		  gimple *stmt = stmts[j];
+		  stmt_vec_info stmt_info = stmts[j];
 		  /* Verify if we can swap operands of this stmt.  */
-		  if (!is_gimple_assign (stmt)
+		  gassign *stmt = dyn_cast <gassign *> (stmt_info->stmt);
+		  if (!stmt
 		      || !commutative_tree_code (gimple_assign_rhs_code (stmt)))
 		    {
 		      if (!swap_not_matching)
@@ -1406,7 +1412,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 		     node and temporarily do that when processing it
 		     (or wrap operand accessors in a helper).  */
 		  else if (swap[j] != 0
-			   || STMT_VINFO_NUM_SLP_USES (vinfo_for_stmt (stmt)))
+			   || STMT_VINFO_NUM_SLP_USES (stmt_info))
 		    {
 		      if (!swap_not_matching)
 			{
@@ -1417,7 +1423,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 					       "Build SLP failed: cannot swap "
 					       "operands of shared stmt ");
 			      dump_gimple_stmt (MSG_MISSED_OPTIMIZATION,
-						TDF_SLIM, stmts[j], 0);
+						TDF_SLIM, stmts[j]->stmt, 0);
 			    }
 			  goto fail;
 			}
@@ -1454,31 +1460,23 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 		 if we end up building the operand from scalars as
 		 we'll continue to process swapped operand two.  */
 	      for (j = 0; j < group_size; ++j)
-		{
-		  gimple *stmt = stmts[j];
-		  gimple_set_plf (stmt, GF_PLF_1, false);
-		}
+		gimple_set_plf (stmts[j]->stmt, GF_PLF_1, false);
 	      for (j = 0; j < group_size; ++j)
-		{
-		  gimple *stmt = stmts[j];
-		  if (matches[j] == !swap_not_matching)
-		    {
-		      /* Avoid swapping operands twice.  */
-		      if (gimple_plf (stmt, GF_PLF_1))
-			continue;
-		      swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
-					 gimple_assign_rhs2_ptr (stmt));
-		      gimple_set_plf (stmt, GF_PLF_1, true);
-		    }
-		}
+		if (matches[j] == !swap_not_matching)
+		  {
+		    gassign *stmt = as_a <gassign *> (stmts[j]->stmt);
+		    /* Avoid swapping operands twice.  */
+		    if (gimple_plf (stmt, GF_PLF_1))
+		      continue;
+		    swap_ssa_operands (stmt, gimple_assign_rhs1_ptr (stmt),
+				       gimple_assign_rhs2_ptr (stmt));
+		    gimple_set_plf (stmt, GF_PLF_1, true);
+		  }
 	      /* Verify we swap all duplicates or none.  */
 	      if (flag_checking)
 		for (j = 0; j < group_size; ++j)
-		  {
-		    gimple *stmt = stmts[j];
-		    gcc_assert (gimple_plf (stmt, GF_PLF_1)
-				== (matches[j] == !swap_not_matching));
-		  }
+		  gcc_assert (gimple_plf (stmts[j]->stmt, GF_PLF_1)
+			      == (matches[j] == !swap_not_matching));
 
 	      /* If we have all children of child built up from scalars then
 		 just throw that away and build it up this node from scalars.  */
@@ -1486,8 +1484,7 @@  vect_build_slp_tree_2 (vec_info *vinfo,
 		  /* ???  Rejecting patterns this way doesn't work.  We'd have
 		     to do extra work to cancel the pattern so the uses see the
 		     scalar version.  */
-		  && !is_pattern_stmt_p
-			(vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0])))
+		  && !is_pattern_stmt_p (SLP_TREE_SCALAR_STMTS (child)[0]))
 		{
 		  unsigned int j;
 		  slp_tree grandchild;
@@ -1550,16 +1547,16 @@  vect_print_slp_tree (dump_flags_t dump_k
 		     slp_tree node)
 {
   int i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
   dump_printf_loc (dump_kind, loc, "node%s\n",
 		   SLP_TREE_DEF_TYPE (node) != vect_internal_def
 		   ? " (external)" : "");
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
       dump_printf_loc (dump_kind, loc, "\tstmt %d ", i);
-      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (dump_kind, TDF_SLIM, stmt_info->stmt, 0);
     }
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_print_slp_tree (dump_kind, loc, child);
@@ -1575,15 +1572,15 @@  vect_print_slp_tree (dump_flags_t dump_k
 vect_mark_slp_stmts (slp_tree node, enum slp_vect_type mark, int j)
 {
   int i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
   if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
     return;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     if (j < 0 || i == j)
-      STMT_SLP_TYPE (vinfo_for_stmt (stmt)) = mark;
+      STMT_SLP_TYPE (stmt_info) = mark;
 
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_mark_slp_stmts (child, mark, j);
@@ -1596,16 +1593,14 @@  vect_mark_slp_stmts (slp_tree node, enum
 vect_mark_slp_stmts_relevant (slp_tree node)
 {
   int i;
-  gimple *stmt;
   stmt_vec_info stmt_info;
   slp_tree child;
 
   if (SLP_TREE_DEF_TYPE (node) != vect_internal_def)
     return;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
-      stmt_info = vinfo_for_stmt (stmt);
       gcc_assert (!STMT_VINFO_RELEVANT (stmt_info)
                   || STMT_VINFO_RELEVANT (stmt_info) == vect_used_in_scope);
       STMT_VINFO_RELEVANT (stmt_info) = vect_used_in_scope;
@@ -1622,8 +1617,8 @@  vect_mark_slp_stmts_relevant (slp_tree n
 vect_slp_rearrange_stmts (slp_tree node, unsigned int group_size,
                           vec<unsigned> permutation)
 {
-  gimple *stmt;
-  vec<gimple *> tmp_stmts;
+  stmt_vec_info stmt_info;
+  vec<stmt_vec_info> tmp_stmts;
   unsigned int i;
   slp_tree child;
 
@@ -1634,8 +1629,8 @@  vect_slp_rearrange_stmts (slp_tree node,
   tmp_stmts.create (group_size);
   tmp_stmts.quick_grow_cleared (group_size);
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
-    tmp_stmts[permutation[i]] = stmt;
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
+    tmp_stmts[permutation[i]] = stmt_info;
 
   SLP_TREE_SCALAR_STMTS (node).release ();
   SLP_TREE_SCALAR_STMTS (node) = tmp_stmts;
@@ -1696,13 +1691,14 @@  vect_attempt_slp_rearrange_stmts (slp_in
   poly_uint64 unrolling_factor = SLP_INSTANCE_UNROLLING_FACTOR (slp_instn);
   FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
     {
-      gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-      first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
+      stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+      first_stmt_info
+	= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (first_stmt_info));
       /* But we have to keep those permutations that are required because
          of handling of gaps.  */
       if (known_eq (unrolling_factor, 1U)
-	  || (group_size == DR_GROUP_SIZE (vinfo_for_stmt (first_stmt))
-	      && DR_GROUP_GAP (vinfo_for_stmt (first_stmt)) == 0))
+	  || (group_size == DR_GROUP_SIZE (first_stmt_info)
+	      && DR_GROUP_GAP (first_stmt_info) == 0))
 	SLP_TREE_LOAD_PERMUTATION (node).release ();
       else
 	for (j = 0; j < SLP_TREE_LOAD_PERMUTATION (node).length (); ++j)
@@ -1721,7 +1717,7 @@  vect_supported_load_permutation_p (slp_i
   unsigned int group_size = SLP_INSTANCE_GROUP_SIZE (slp_instn);
   unsigned int i, j, k, next;
   slp_tree node;
-  gimple *stmt, *load, *next_load;
+  gimple *next_load;
 
   if (dump_enabled_p ())
     {
@@ -1750,18 +1746,18 @@  vect_supported_load_permutation_p (slp_i
       return false;
 
   node = SLP_INSTANCE_TREE (slp_instn);
-  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 
   /* Reduction (there are no data-refs in the root).
      In reduction chain the order of the loads is not important.  */
-  if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt))
-      && !REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  if (!STMT_VINFO_DATA_REF (stmt_info)
+      && !REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     vect_attempt_slp_rearrange_stmts (slp_instn);
 
   /* In basic block vectorization we allow any subchain of an interleaving
      chain.
      FORNOW: not supported in loop SLP because of realignment compications.  */
-  if (STMT_VINFO_BB_VINFO (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_BB_VINFO (stmt_info))
     {
       /* Check whether the loads in an instance form a subchain and thus
          no permutation is necessary.  */
@@ -1771,24 +1767,25 @@  vect_supported_load_permutation_p (slp_i
 	    continue;
 	  bool subchain_p = true;
           next_load = NULL;
-          FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load)
-            {
-              if (j != 0
-		  && (next_load != load
-		      || DR_GROUP_GAP (vinfo_for_stmt (load)) != 1))
+	  stmt_vec_info load_info;
+	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), j, load_info)
+	    {
+	      if (j != 0
+		  && (next_load != load_info
+		      || DR_GROUP_GAP (load_info) != 1))
 		{
 		  subchain_p = false;
 		  break;
 		}
-              next_load = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (load));
-            }
+	      next_load = DR_GROUP_NEXT_ELEMENT (load_info);
+	    }
 	  if (subchain_p)
 	    SLP_TREE_LOAD_PERMUTATION (node).release ();
 	  else
 	    {
-	      stmt_vec_info group_info
-		= vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
-	      group_info = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
+	      stmt_vec_info group_info = SLP_TREE_SCALAR_STMTS (node)[0];
+	      group_info
+		= vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (group_info));
 	      unsigned HOST_WIDE_INT nunits;
 	      unsigned k, maxk = 0;
 	      FOR_EACH_VEC_ELT (SLP_TREE_LOAD_PERMUTATION (node), j, k)
@@ -1831,7 +1828,7 @@  vect_supported_load_permutation_p (slp_i
   poly_uint64 test_vf
     = force_common_multiple (SLP_INSTANCE_UNROLLING_FACTOR (slp_instn),
 			     LOOP_VINFO_VECT_FACTOR
-			     (STMT_VINFO_LOOP_VINFO (vinfo_for_stmt (stmt))));
+			     (STMT_VINFO_LOOP_VINFO (stmt_info)));
   FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (slp_instn), i, node)
     if (node->load_permutation.exists ()
 	&& !vect_transform_slp_perm_load (node, vNULL, NULL, test_vf,
@@ -1847,15 +1844,15 @@  vect_supported_load_permutation_p (slp_i
 gimple *
 vect_find_last_scalar_stmt_in_slp (slp_tree node)
 {
-  gimple *last = NULL, *stmt;
+  gimple *last = NULL;
+  stmt_vec_info stmt_vinfo;
 
-  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt); i++)
+  for (int i = 0; SLP_TREE_SCALAR_STMTS (node).iterate (i, &stmt_vinfo); i++)
     {
-      stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
       if (is_pattern_stmt_p (stmt_vinfo))
 	last = get_later_stmt (STMT_VINFO_RELATED_STMT (stmt_vinfo), last);
       else
-	last = get_later_stmt (stmt, last);
+	last = get_later_stmt (stmt_vinfo, last);
     }
 
   return last;
@@ -1926,6 +1923,7 @@  calculate_unrolling_factor (poly_uint64
 vect_analyze_slp_instance (vec_info *vinfo,
 			   gimple *stmt, unsigned max_tree_size)
 {
+  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
   slp_instance new_instance;
   slp_tree node;
   unsigned int group_size;
@@ -1934,25 +1932,25 @@  vect_analyze_slp_instance (vec_info *vin
   stmt_vec_info next_info;
   unsigned int i;
   vec<slp_tree> loads;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (stmt));
-  vec<gimple *> scalar_stmts;
+  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  vec<stmt_vec_info> scalar_stmts;
 
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
       scalar_type = TREE_TYPE (DR_REF (dr));
       vectype = get_vectype_for_scalar_type (scalar_type);
-      group_size = DR_GROUP_SIZE (vinfo_for_stmt (stmt));
+      group_size = DR_GROUP_SIZE (stmt_info);
     }
-  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       gcc_assert (is_a <loop_vec_info> (vinfo));
-      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
-      group_size = REDUC_GROUP_SIZE (vinfo_for_stmt (stmt));
+      vectype = STMT_VINFO_VECTYPE (stmt_info);
+      group_size = REDUC_GROUP_SIZE (stmt_info);
     }
   else
     {
       gcc_assert (is_a <loop_vec_info> (vinfo));
-      vectype = STMT_VINFO_VECTYPE (vinfo_for_stmt (stmt));
+      vectype = STMT_VINFO_VECTYPE (stmt_info);
       group_size = as_a <loop_vec_info> (vinfo)->reductions.length ();
     }
 
@@ -1973,38 +1971,38 @@  vect_analyze_slp_instance (vec_info *vin
   /* Create a node (a root of the SLP tree) for the packed grouped stores.  */
   scalar_stmts.create (group_size);
   next = stmt;
-  if (STMT_VINFO_GROUPED_ACCESS (vinfo_for_stmt (stmt)))
+  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
     {
       /* Collect the stores and store them in SLP_TREE_SCALAR_STMTS.  */
       while (next)
         {
-	  if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
-	      && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
-	    scalar_stmts.safe_push (
-		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
+	  next_info = vinfo_for_stmt (next);
+	  if (STMT_VINFO_IN_PATTERN_P (next_info)
+	      && STMT_VINFO_RELATED_STMT (next_info))
+	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
-            scalar_stmts.safe_push (next);
+	    scalar_stmts.safe_push (next_info);
           next = DR_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
         }
     }
-  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
+  else if (!dr && REDUC_GROUP_FIRST_ELEMENT (stmt_info))
     {
       /* Collect the reduction stmts and store them in
 	 SLP_TREE_SCALAR_STMTS.  */
       while (next)
         {
-	  if (STMT_VINFO_IN_PATTERN_P (vinfo_for_stmt (next))
-	      && STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)))
-	    scalar_stmts.safe_push (
-		  STMT_VINFO_RELATED_STMT (vinfo_for_stmt (next)));
+	  next_info = vinfo_for_stmt (next);
+	  if (STMT_VINFO_IN_PATTERN_P (next_info)
+	      && STMT_VINFO_RELATED_STMT (next_info))
+	    scalar_stmts.safe_push (STMT_VINFO_RELATED_STMT (next_info));
 	  else
-            scalar_stmts.safe_push (next);
+	    scalar_stmts.safe_push (next_info);
           next = REDUC_GROUP_NEXT_ELEMENT (vinfo_for_stmt (next));
         }
       /* Mark the first element of the reduction chain as reduction to properly
 	 transform the node.  In the reduction analysis phase only the last
 	 element of the chain is marked as reduction.  */
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_reduction_def;
+      STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
     }
   else
     {
@@ -2068,15 +2066,16 @@  vect_analyze_slp_instance (vec_info *vin
 	{
 	  vec<unsigned> load_permutation;
 	  int j;
-	  gimple *load, *first_stmt;
+	  stmt_vec_info load_info;
+	  gimple *first_stmt;
 	  bool this_load_permuted = false;
 	  load_permutation.create (group_size);
 	  first_stmt = DR_GROUP_FIRST_ELEMENT
-	      (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
-	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load)
+	    (SLP_TREE_SCALAR_STMTS (load_node)[0]);
+	  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (load_node), j, load_info)
 	    {
-		  int load_place = vect_get_place_in_interleaving_chain
-				     (load, first_stmt);
+	      int load_place = vect_get_place_in_interleaving_chain
+		(load_info, first_stmt);
 	      gcc_assert (load_place != -1);
 	      if (load_place != j)
 		this_load_permuted = true;
@@ -2124,7 +2123,7 @@  vect_analyze_slp_instance (vec_info *vin
 	  FOR_EACH_VEC_ELT (loads, i, load_node)
 	    {
 	      gimple *first_stmt = DR_GROUP_FIRST_ELEMENT
-		  (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (load_node)[0]));
+		(SLP_TREE_SCALAR_STMTS (load_node)[0]);
 	      stmt_vec_info stmt_vinfo = vinfo_for_stmt (first_stmt);
 		  /* Use SLP for strided accesses (or if we
 		     can't load-lanes).  */
@@ -2307,10 +2306,10 @@  vect_make_slp_decision (loop_vec_info lo
 static void
 vect_detect_hybrid_slp_stmts (slp_tree node, unsigned i, slp_vect_type stype)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[i];
+  stmt_vec_info stmt_vinfo = SLP_TREE_SCALAR_STMTS (node)[i];
   imm_use_iterator imm_iter;
   gimple *use_stmt;
-  stmt_vec_info use_vinfo, stmt_vinfo = vinfo_for_stmt (stmt);
+  stmt_vec_info use_vinfo;
   slp_tree child;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_vinfo);
   int j;
@@ -2326,6 +2325,7 @@  vect_detect_hybrid_slp_stmts (slp_tree n
       gcc_checking_assert (PURE_SLP_STMT (stmt_vinfo));
       /* If we get a pattern stmt here we have to use the LHS of the
          original stmt for immediate uses.  */
+      gimple *stmt = stmt_vinfo->stmt;
       if (! STMT_VINFO_IN_PATTERN_P (stmt_vinfo)
 	  && STMT_VINFO_RELATED_STMT (stmt_vinfo))
 	stmt = STMT_VINFO_RELATED_STMT (stmt_vinfo)->stmt;
@@ -2366,7 +2366,7 @@  vect_detect_hybrid_slp_stmts (slp_tree n
       if (dump_enabled_p ())
 	{
 	  dump_printf_loc (MSG_NOTE, vect_location, "marking hybrid: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_vinfo->stmt, 0);
 	}
       STMT_SLP_TYPE (stmt_vinfo) = hybrid;
     }
@@ -2525,9 +2525,8 @@  vect_slp_analyze_node_operations_1 (vec_
 				    slp_instance node_instance,
 				    stmt_vector_for_cost *cost_vec)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
-  gcc_assert (stmt_info);
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *stmt = stmt_info->stmt;
   gcc_assert (STMT_SLP_TYPE (stmt_info) != loop_vect);
 
   /* For BB vectorization vector types are assigned here.
@@ -2551,10 +2550,10 @@  vect_slp_analyze_node_operations_1 (vec_
 	    return false;
 	}
 
-      gimple *sstmt;
+      stmt_vec_info sstmt_info;
       unsigned int i;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt)
-	STMT_VINFO_VECTYPE (vinfo_for_stmt (sstmt)) = vectype;
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, sstmt_info)
+	STMT_VINFO_VECTYPE (sstmt_info) = vectype;
     }
 
   /* Calculate the number of vector statements to be created for the
@@ -2626,14 +2625,14 @@  vect_slp_analyze_node_operations (vec_in
   /* Push SLP node def-type to stmt operands.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
+      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
 	= SLP_TREE_DEF_TYPE (child);
   bool res = vect_slp_analyze_node_operations_1 (vinfo, node, node_instance,
 						 cost_vec);
   /* Restore def-types.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), j, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      STMT_VINFO_DEF_TYPE (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (child)[0]))
+      STMT_VINFO_DEF_TYPE (SLP_TREE_SCALAR_STMTS (child)[0])
 	= vect_internal_def;
   if (! res)
     return false;
@@ -2665,11 +2664,11 @@  vect_slp_analyze_operations (vec_info *v
 					     instance, visited, &lvisited,
 					     &cost_vec))
         {
+	  slp_tree node = SLP_INSTANCE_TREE (instance);
+	  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "removing SLP instance operations starting from: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
-			    SLP_TREE_SCALAR_STMTS
-			      (SLP_INSTANCE_TREE (instance))[0], 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	  vect_free_slp_instance (instance, false);
           vinfo->slp_instances.ordered_remove (i);
 	  cost_vec.release ();
@@ -2701,14 +2700,14 @@  vect_bb_slp_scalar_cost (basic_block bb,
 			 stmt_vector_for_cost *cost_vec)
 {
   unsigned i;
-  gimple *stmt;
+  stmt_vec_info stmt_info;
   slp_tree child;
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
+      gimple *stmt = stmt_info->stmt;
       ssa_op_iter op_iter;
       def_operand_p def_p;
-      stmt_vec_info stmt_info;
 
       if ((*life)[i])
 	continue;
@@ -2724,8 +2723,7 @@  vect_bb_slp_scalar_cost (basic_block bb,
 	  gimple *use_stmt;
 	  FOR_EACH_IMM_USE_STMT (use_stmt, use_iter, DEF_FROM_PTR (def_p))
 	    if (!is_gimple_debug (use_stmt)
-		&& (! vect_stmt_in_region_p (vinfo_for_stmt (stmt)->vinfo,
-					     use_stmt)
+		&& (! vect_stmt_in_region_p (stmt_info->vinfo, use_stmt)
 		    || ! PURE_SLP_STMT (vinfo_for_stmt (use_stmt))))
 	      {
 		(*life)[i] = true;
@@ -2740,7 +2738,6 @@  vect_bb_slp_scalar_cost (basic_block bb,
 	continue;
       gimple_set_visited (stmt, true);
 
-      stmt_info = vinfo_for_stmt (stmt);
       vect_cost_for_stmt kind;
       if (STMT_VINFO_DATA_REF (stmt_info))
         {
@@ -2944,11 +2941,11 @@  vect_slp_analyze_bb_1 (gimple_stmt_itera
       if (! vect_slp_analyze_and_verify_instance_alignment (instance)
 	  || ! vect_slp_analyze_instance_dependence (instance))
 	{
+	  slp_tree node = SLP_INSTANCE_TREE (instance);
+	  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 	  dump_printf_loc (MSG_NOTE, vect_location,
 			   "removing SLP instance operations starting from: ");
-	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM,
-			    SLP_TREE_SCALAR_STMTS
-			      (SLP_INSTANCE_TREE (instance))[0], 0);
+	  dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
 	  vect_free_slp_instance (instance, false);
 	  BB_VINFO_SLP_INSTANCES (bb_vinfo).ordered_remove (i);
 	  continue;
@@ -3299,9 +3296,9 @@  vect_get_constant_vectors (tree op, slp_
                            vec<tree> *vec_oprnds,
 			   unsigned int op_num, unsigned int number_of_vectors)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
+  gimple *stmt = stmt_vinfo->stmt;
   unsigned HOST_WIDE_INT nunits;
   tree vec_cst;
   unsigned j, number_of_places_left_in_vector;
@@ -3320,7 +3317,7 @@  vect_get_constant_vectors (tree op, slp_
 
   /* Check if vector type is a boolean vector.  */
   if (VECT_SCALAR_BOOLEAN_TYPE_P (TREE_TYPE (op))
-      && vect_mask_constant_operand_p (stmt, op_num))
+      && vect_mask_constant_operand_p (stmt_vinfo, op_num))
     vector_type
       = build_same_sized_truth_vector_type (STMT_VINFO_VECTYPE (stmt_vinfo));
   else
@@ -3366,8 +3363,9 @@  vect_get_constant_vectors (tree op, slp_
   bool place_after_defs = false;
   for (j = 0; j < number_of_copies; j++)
     {
-      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
+      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
         {
+	  stmt = stmt_vinfo->stmt;
           if (is_store)
             op = gimple_assign_rhs1 (stmt);
           else
@@ -3496,10 +3494,12 @@  vect_get_constant_vectors (tree op, slp_
 		{
 		  gsi = gsi_for_stmt
 		          (vect_find_last_scalar_stmt_in_slp (slp_node));
-		  init = vect_init_vector (stmt, vec_cst, vector_type, &gsi);
+		  init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
+					   &gsi);
 		}
 	      else
-		init = vect_init_vector (stmt, vec_cst, vector_type, NULL);
+		init = vect_init_vector (stmt_vinfo, vec_cst, vector_type,
+					 NULL);
 	      if (ctor_seq != NULL)
 		{
 		  gsi = gsi_for_stmt (SSA_NAME_DEF_STMT (init));
@@ -3612,15 +3612,14 @@  vect_get_slp_defs (vec<tree> ops, slp_tr
 	  /* We have to check both pattern and original def, if available.  */
 	  if (SLP_TREE_DEF_TYPE (child) == vect_internal_def)
 	    {
-	      gimple *first_def = SLP_TREE_SCALAR_STMTS (child)[0];
-	      stmt_vec_info related
-		= STMT_VINFO_RELATED_STMT (vinfo_for_stmt (first_def));
+	      stmt_vec_info first_def_info = SLP_TREE_SCALAR_STMTS (child)[0];
+	      stmt_vec_info related = STMT_VINFO_RELATED_STMT (first_def_info);
 	      tree first_def_op;
 
-	      if (gimple_code (first_def) == GIMPLE_PHI)
+	      if (gphi *first_def = dyn_cast <gphi *> (first_def_info->stmt))
 		first_def_op = gimple_phi_result (first_def);
 	      else
-		first_def_op = gimple_get_lhs (first_def);
+		first_def_op = gimple_get_lhs (first_def_info->stmt);
 	      if (operand_equal_p (oprnd, first_def_op, 0)
 		  || (related
 		      && operand_equal_p (oprnd,
@@ -3686,8 +3685,7 @@  vect_transform_slp_perm_load (slp_tree n
 			      slp_instance slp_node_instance, bool analyze_only,
 			      unsigned *n_perms)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
   vec_info *vinfo = stmt_info->vinfo;
   tree mask_element_type = NULL_TREE, mask_type;
   int vec_index = 0;
@@ -3779,7 +3777,7 @@  vect_transform_slp_perm_load (slp_tree n
 				   "permutation requires at "
 				   "least three vectors ");
 		  dump_gimple_stmt (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-				    stmt, 0);
+				    stmt_info->stmt, 0);
 		}
 	      gcc_assert (analyze_only);
 	      return false;
@@ -3832,6 +3830,7 @@  vect_transform_slp_perm_load (slp_tree n
 		  stmt_vec_info perm_stmt_info;
 		  if (! noop_p)
 		    {
+		      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
 		      tree perm_dest
 			= vect_create_destination_var (gimple_assign_lhs (stmt),
 						       vectype);
@@ -3841,7 +3840,8 @@  vect_transform_slp_perm_load (slp_tree n
 					       first_vec, second_vec,
 					       mask_vec);
 		      perm_stmt_info
-			= vect_finish_stmt_generation (stmt, perm_stmt, gsi);
+			= vect_finish_stmt_generation (stmt_info, perm_stmt,
+						       gsi);
 		    }
 		  else
 		    /* If mask was NULL_TREE generate the requested
@@ -3870,7 +3870,6 @@  vect_transform_slp_perm_load (slp_tree n
 vect_schedule_slp_instance (slp_tree node, slp_instance instance,
 			    scalar_stmts_to_slp_tree_map_t *bst_map)
 {
-  gimple *stmt;
   bool grouped_store, is_store;
   gimple_stmt_iterator si;
   stmt_vec_info stmt_info;
@@ -3897,11 +3896,13 @@  vect_schedule_slp_instance (slp_tree nod
   /* Push SLP node def-type to stmts.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
-	STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = SLP_TREE_DEF_TYPE (child);
+      {
+	stmt_vec_info child_stmt_info;
+	FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
+	  STMT_VINFO_DEF_TYPE (child_stmt_info) = SLP_TREE_DEF_TYPE (child);
+      }
 
-  stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  stmt_info = vinfo_for_stmt (stmt);
+  stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
 
   /* VECTYPE is the type of the destination.  */
   vectype = STMT_VINFO_VECTYPE (stmt_info);
@@ -3916,7 +3917,7 @@  vect_schedule_slp_instance (slp_tree nod
     {
       dump_printf_loc (MSG_NOTE,vect_location,
 		       "------>vectorizing SLP node starting from: ");
-      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt, 0);
+      dump_gimple_stmt (MSG_NOTE, TDF_SLIM, stmt_info->stmt, 0);
     }
 
   /* Vectorized stmts go before the last scalar stmt which is where
@@ -3928,7 +3929,7 @@  vect_schedule_slp_instance (slp_tree nod
      chain is marked as reduction.  */
   if (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
       && REDUC_GROUP_FIRST_ELEMENT (stmt_info)
-      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt)
+      && REDUC_GROUP_FIRST_ELEMENT (stmt_info) == stmt_info)
     {
       STMT_VINFO_DEF_TYPE (stmt_info) = vect_reduction_def;
       STMT_VINFO_TYPE (stmt_info) = reduc_vec_info_type;
@@ -3938,29 +3939,33 @@  vect_schedule_slp_instance (slp_tree nod
      both operations and then performing a merge.  */
   if (SLP_TREE_TWO_OPERATORS (node))
     {
+      gassign *stmt = as_a <gassign *> (stmt_info->stmt);
       enum tree_code code0 = gimple_assign_rhs_code (stmt);
       enum tree_code ocode = ERROR_MARK;
-      gimple *ostmt;
+      stmt_vec_info ostmt_info;
       vec_perm_builder mask (group_size, group_size, 1);
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt)
-	if (gimple_assign_rhs_code (ostmt) != code0)
-	  {
-	    mask.quick_push (1);
-	    ocode = gimple_assign_rhs_code (ostmt);
-	  }
-	else
-	  mask.quick_push (0);
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, ostmt_info)
+	{
+	  gassign *ostmt = as_a <gassign *> (ostmt_info->stmt);
+	  if (gimple_assign_rhs_code (ostmt) != code0)
+	    {
+	      mask.quick_push (1);
+	      ocode = gimple_assign_rhs_code (ostmt);
+	    }
+	  else
+	    mask.quick_push (0);
+	}
       if (ocode != ERROR_MARK)
 	{
 	  vec<stmt_vec_info> v0;
 	  vec<stmt_vec_info> v1;
 	  unsigned j;
 	  tree tmask = NULL_TREE;
-	  vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+	  vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
 	  v0 = SLP_TREE_VEC_STMTS (node).copy ();
 	  SLP_TREE_VEC_STMTS (node).truncate (0);
 	  gimple_assign_set_rhs_code (stmt, ocode);
-	  vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+	  vect_transform_stmt (stmt_info, &si, &grouped_store, node, instance);
 	  gimple_assign_set_rhs_code (stmt, code0);
 	  v1 = SLP_TREE_VEC_STMTS (node).copy ();
 	  SLP_TREE_VEC_STMTS (node).truncate (0);
@@ -3998,20 +4003,24 @@  vect_schedule_slp_instance (slp_tree nod
 					   gimple_assign_lhs (v1[j]->stmt),
 					   tmask);
 	      SLP_TREE_VEC_STMTS (node).quick_push
-		(vect_finish_stmt_generation (stmt, vstmt, &si));
+		(vect_finish_stmt_generation (stmt_info, vstmt, &si));
 	    }
 	  v0.release ();
 	  v1.release ();
 	  return false;
 	}
     }
-  is_store = vect_transform_stmt (stmt, &si, &grouped_store, node, instance);
+  is_store = vect_transform_stmt (stmt_info, &si, &grouped_store, node,
+				  instance);
 
   /* Restore stmt def-types.  */
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     if (SLP_TREE_DEF_TYPE (child) != vect_internal_def)
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, stmt)
-	STMT_VINFO_DEF_TYPE (vinfo_for_stmt (stmt)) = vect_internal_def;
+      {
+	stmt_vec_info child_stmt_info;
+	FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (child), j, child_stmt_info)
+	  STMT_VINFO_DEF_TYPE (child_stmt_info) = vect_internal_def;
+      }
 
   return is_store;
 }
@@ -4024,7 +4033,7 @@  vect_schedule_slp_instance (slp_tree nod
 static void
 vect_remove_slp_scalar_calls (slp_tree node)
 {
-  gimple *stmt, *new_stmt;
+  gimple *new_stmt;
   gimple_stmt_iterator gsi;
   int i;
   slp_tree child;
@@ -4037,13 +4046,12 @@  vect_remove_slp_scalar_calls (slp_tree n
   FOR_EACH_VEC_ELT (SLP_TREE_CHILDREN (node), i, child)
     vect_remove_slp_scalar_calls (child);
 
-  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt)
+  FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (node), i, stmt_info)
     {
-      if (!is_gimple_call (stmt) || gimple_bb (stmt) == NULL)
+      gcall *stmt = dyn_cast <gcall *> (stmt_info->stmt);
+      if (!stmt || gimple_bb (stmt) == NULL)
 	continue;
-      stmt_info = vinfo_for_stmt (stmt);
-      if (stmt_info == NULL_STMT_VEC_INFO
-	  || is_pattern_stmt_p (stmt_info)
+      if (is_pattern_stmt_p (stmt_info)
 	  || !PURE_SLP_STMT (stmt_info))
 	continue;
       lhs = gimple_call_lhs (stmt);
@@ -4085,7 +4093,7 @@  vect_schedule_slp (vec_info *vinfo)
   FOR_EACH_VEC_ELT (slp_instances, i, instance)
     {
       slp_tree root = SLP_INSTANCE_TREE (instance);
-      gimple *store;
+      stmt_vec_info store_info;
       unsigned int j;
       gimple_stmt_iterator gsi;
 
@@ -4099,20 +4107,20 @@  vect_schedule_slp (vec_info *vinfo)
       if (is_a <loop_vec_info> (vinfo))
 	vect_remove_slp_scalar_calls (root);
 
-      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store)
+      for (j = 0; SLP_TREE_SCALAR_STMTS (root).iterate (j, &store_info)
                   && j < SLP_INSTANCE_GROUP_SIZE (instance); j++)
         {
-          if (!STMT_VINFO_DATA_REF (vinfo_for_stmt (store)))
-            break;
+	  if (!STMT_VINFO_DATA_REF (store_info))
+	    break;
 
-         if (is_pattern_stmt_p (vinfo_for_stmt (store)))
-           store = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (store));
-          /* Free the attached stmt_vec_info and remove the stmt.  */
-          gsi = gsi_for_stmt (store);
-	  unlink_stmt_vdef (store);
-          gsi_remove (&gsi, true);
-	  release_defs (store);
-          free_stmt_vec_info (store);
+	  if (is_pattern_stmt_p (store_info))
+	    store_info = STMT_VINFO_RELATED_STMT (store_info);
+	  /* Free the attached stmt_vec_info and remove the stmt.  */
+	  gsi = gsi_for_stmt (store_info);
+	  unlink_stmt_vdef (store_info);
+	  gsi_remove (&gsi, true);
+	  release_defs (store_info);
+	  free_stmt_vec_info (store_info);
         }
     }
 
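As a rough illustration of the access pattern the tree-vect-slp.c hunks above switch to, here is a minimal standalone sketch: indexing the scalar-stmts vector now yields a stmt_vec_info directly, and the underlying gimple statement is reached through ->stmt instead of the old vinfo_for_stmt lookup in the other direction.  The types are simplified stand-ins (std::vector instead of GCC's vec<>, toy structs instead of the real gimple and stmt_vec_info classes), not the vectorizer's actual API.

#include <cstdio>
#include <vector>

struct gimple { int uid; };                 /* stand-in for a gimple statement */
struct stmt_vec_info_ { gimple *stmt; };    /* stand-in: vectorizer info wrapping a stmt */
typedef stmt_vec_info_ *stmt_vec_info;

struct slp_tree_
{
  /* After the patch the scalar statements are stored as stmt_vec_infos,
     so looking one up no longer needs a separate vinfo_for_stmt call.  */
  std::vector<stmt_vec_info> stmts;
};

int
main ()
{
  gimple g = { 42 };
  stmt_vec_info_ info = { &g };
  slp_tree_ node;
  node.stmts.push_back (&info);

  /* New style: the element is already a stmt_vec_info...  */
  stmt_vec_info stmt_info = node.stmts[0];
  /* ...and the raw statement is recovered via ->stmt when needed.  */
  std::printf ("uid of first scalar stmt: %d\n", stmt_info->stmt->uid);
  return 0;
}
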
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:22:47.485157343 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:23:00.397042684 +0100
@@ -665,7 +665,8 @@  vect_slp_analyze_data_ref_dependence (st
 
 static bool
 vect_slp_analyze_node_dependences (slp_instance instance, slp_tree node,
-				   vec<gimple *> stores, gimple *last_store)
+				   vec<stmt_vec_info> stores,
+				   gimple *last_store)
 {
   /* This walks over all stmts involved in the SLP load/store done
      in NODE verifying we can sink them up to the last stmt in the
@@ -673,13 +674,13 @@  vect_slp_analyze_node_dependences (slp_i
   gimple *last_access = vect_find_last_scalar_stmt_in_slp (node);
   for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
     {
-      gimple *access = SLP_TREE_SCALAR_STMTS (node)[k];
-      if (access == last_access)
+      stmt_vec_info access_info = SLP_TREE_SCALAR_STMTS (node)[k];
+      if (access_info == last_access)
 	continue;
-      data_reference *dr_a = STMT_VINFO_DATA_REF (vinfo_for_stmt (access));
+      data_reference *dr_a = STMT_VINFO_DATA_REF (access_info);
       ao_ref ref;
       bool ref_initialized_p = false;
-      for (gimple_stmt_iterator gsi = gsi_for_stmt (access);
+      for (gimple_stmt_iterator gsi = gsi_for_stmt (access_info->stmt);
 	   gsi_stmt (gsi) != last_access; gsi_next (&gsi))
 	{
 	  gimple *stmt = gsi_stmt (gsi);
@@ -712,11 +713,10 @@  vect_slp_analyze_node_dependences (slp_i
 	      if (stmt != last_store)
 		continue;
 	      unsigned i;
-	      gimple *store;
-	      FOR_EACH_VEC_ELT (stores, i, store)
+	      stmt_vec_info store_info;
+	      FOR_EACH_VEC_ELT (stores, i, store_info)
 		{
-		  data_reference *store_dr
-		    = STMT_VINFO_DATA_REF (vinfo_for_stmt (store));
+		  data_reference *store_dr = STMT_VINFO_DATA_REF (store_info);
 		  ddr_p ddr = initialize_data_dependence_relation
 				(dr_a, store_dr, vNULL);
 		  dependent = vect_slp_analyze_data_ref_dependence (ddr);
@@ -753,7 +753,7 @@  vect_slp_analyze_instance_dependence (sl
 
   /* The stores of this instance are at the root of the SLP tree.  */
   slp_tree store = SLP_INSTANCE_TREE (instance);
-  if (! STMT_VINFO_DATA_REF (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (store)[0])))
+  if (! STMT_VINFO_DATA_REF (SLP_TREE_SCALAR_STMTS (store)[0]))
     store = NULL;
 
   /* Verify we can sink stores to the vectorized stmt insert location.  */
@@ -766,7 +766,7 @@  vect_slp_analyze_instance_dependence (sl
       /* Mark stores in this instance and remember the last one.  */
       last_store = vect_find_last_scalar_stmt_in_slp (store);
       for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
-	gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k], true);
+	gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, true);
     }
 
   bool res = true;
@@ -788,7 +788,7 @@  vect_slp_analyze_instance_dependence (sl
   /* Unset the visited flag.  */
   if (store)
     for (unsigned k = 0; k < SLP_INSTANCE_GROUP_SIZE (instance); ++k)
-      gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k], false);
+      gimple_set_visited (SLP_TREE_SCALAR_STMTS (store)[k]->stmt, false);
 
   return res;
 }
@@ -2389,10 +2389,11 @@  vect_slp_analyze_and_verify_node_alignme
   /* We vectorize from the first scalar stmt in the node unless
      the node is permuted in which case we start from the first
      element in the group.  */
-  gimple *first_stmt = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
+  stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *first_stmt = first_stmt_info->stmt;
+  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
-    first_stmt = DR_GROUP_FIRST_ELEMENT (vinfo_for_stmt (first_stmt));
+    first_stmt = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
 
   data_reference_p dr = STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt));
   vect_compute_data_ref_alignment (dr);
@@ -2429,7 +2430,7 @@  vect_slp_analyze_and_verify_instance_ali
       return false;
 
   node = SLP_INSTANCE_TREE (instance);
-  if (STMT_VINFO_DATA_REF (vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]))
+  if (STMT_VINFO_DATA_REF (SLP_TREE_SCALAR_STMTS (node)[0])
       && ! vect_slp_analyze_and_verify_node_alignment
 	     (SLP_INSTANCE_TREE (instance)))
     return false;
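
The tree-vect-data-refs.c hunks repeat the same idea inside loops: walk the stores and scalar stmts as stmt_vec_infos and only dereference ->stmt where a raw statement is required (gsi_for_stmt, gimple_set_visited).  A small standalone model of that loop shape, again with invented stand-in types and a hypothetical mark_group_visited helper rather than real GCC code:

#include <vector>

struct gimple { bool visited = false; };    /* stand-in for a gimple statement */
struct stmt_vec_info_ { gimple *stmt; };    /* stand-in for the vectorizer's per-stmt info */
typedef stmt_vec_info_ *stmt_vec_info;

/* Set the visited flag on the statements underlying a group of
   stmt_vec_infos, mirroring the SLP_TREE_SCALAR_STMTS (store)[k]->stmt
   accesses in vect_slp_analyze_instance_dependence above.  */
static void
mark_group_visited (const std::vector<stmt_vec_info> &stmts, bool flag)
{
  for (stmt_vec_info info : stmts)
    info->stmt->visited = flag;
}

int
main ()
{
  gimple g1, g2;
  stmt_vec_info_ i1 = { &g1 }, i2 = { &g2 };
  std::vector<stmt_vec_info> stmts = { &i1, &i2 };
  mark_group_visited (stmts, true);
  return (g1.visited && g2.visited) ? 0 : 1;
}
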
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-24 10:22:57.273070426 +0100
+++ gcc/tree-vect-loop.c	2018-07-24 10:23:00.397042684 +0100
@@ -2186,8 +2186,7 @@  vect_analyze_loop_2 (loop_vec_info loop_
   FOR_EACH_VEC_ELT (LOOP_VINFO_SLP_INSTANCES (loop_vinfo), i, instance)
     {
       stmt_vec_info vinfo;
-      vinfo = vinfo_for_stmt
-	  (SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0]);
+      vinfo = SLP_TREE_SCALAR_STMTS (SLP_INSTANCE_TREE (instance))[0];
       if (! STMT_VINFO_GROUPED_ACCESS (vinfo))
 	continue;
       vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
@@ -2199,7 +2198,7 @@  vect_analyze_loop_2 (loop_vec_info loop_
        return false;
       FOR_EACH_VEC_ELT (SLP_INSTANCE_LOADS (instance), j, node)
 	{
-	  vinfo = vinfo_for_stmt (SLP_TREE_SCALAR_STMTS (node)[0]);
+	  vinfo = SLP_TREE_SCALAR_STMTS (node)[0];
 	  vinfo = vinfo_for_stmt (DR_GROUP_FIRST_ELEMENT (vinfo));
 	  bool single_element_p = !DR_GROUP_NEXT_ELEMENT (vinfo);
 	  size = DR_GROUP_SIZE (vinfo);
@@ -2442,12 +2441,11 @@  reduction_fn_for_scalar_code (enum tree_
 neutral_op_for_slp_reduction (slp_tree slp_node, tree_code code,
 			      bool reduc_chain)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
   tree vector_type = STMT_VINFO_VECTYPE (stmt_vinfo);
   tree scalar_type = TREE_TYPE (vector_type);
-  struct loop *loop = gimple_bb (stmt)->loop_father;
+  struct loop *loop = gimple_bb (stmt_vinfo->stmt)->loop_father;
   gcc_assert (loop);
 
   switch (code)
@@ -2473,7 +2471,8 @@  neutral_op_for_slp_reduction (slp_tree s
 	 has only a single initial value, so that value is neutral for
 	 all statements.  */
       if (reduc_chain)
-	return PHI_ARG_DEF_FROM_EDGE (stmt, loop_preheader_edge (loop));
+	return PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt,
+				      loop_preheader_edge (loop));
       return NULL_TREE;
 
     default:
@@ -4182,9 +4181,8 @@  get_initial_defs_for_reduction (slp_tree
 				unsigned int number_of_vectors,
 				bool reduc_chain, tree neutral_op)
 {
-  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-  gimple *stmt = stmts[0];
-  stmt_vec_info stmt_vinfo = vinfo_for_stmt (stmt);
+  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+  stmt_vec_info stmt_vinfo = stmts[0];
   unsigned HOST_WIDE_INT nunits;
   unsigned j, number_of_places_left_in_vector;
   tree vector_type;
@@ -4201,7 +4199,7 @@  get_initial_defs_for_reduction (slp_tree
 
   gcc_assert (STMT_VINFO_DEF_TYPE (stmt_vinfo) == vect_reduction_def);
 
-  loop = (gimple_bb (stmt))->loop_father;
+  loop = (gimple_bb (stmt_vinfo->stmt))->loop_father;
   gcc_assert (loop);
   edge pe = loop_preheader_edge (loop);
 
@@ -4234,7 +4232,7 @@  get_initial_defs_for_reduction (slp_tree
   elts.quick_grow (nunits);
   for (j = 0; j < number_of_copies; j++)
     {
-      for (i = group_size - 1; stmts.iterate (i, &stmt); i--)
+      for (i = group_size - 1; stmts.iterate (i, &stmt_vinfo); i--)
         {
 	  tree op;
 	  /* Get the def before the loop.  In reduction chain we have only
@@ -4244,7 +4242,7 @@  get_initial_defs_for_reduction (slp_tree
 	      && neutral_op)
 	    op = neutral_op;
 	  else
-	    op = PHI_ARG_DEF_FROM_EDGE (stmt, pe);
+	    op = PHI_ARG_DEF_FROM_EDGE (stmt_vinfo->stmt, pe);
 
           /* Create 'vect_ = {op0,op1,...,opn}'.  */
           number_of_places_left_in_vector--;
@@ -5128,7 +5126,8 @@  vect_create_epilog_for_reduction (vec<tr
       gcc_assert (pow2p_hwi (group_size));
 
       slp_tree orig_phis_slp_node = slp_node_instance->reduc_phis;
-      vec<gimple *> orig_phis = SLP_TREE_SCALAR_STMTS (orig_phis_slp_node);
+      vec<stmt_vec_info> orig_phis
+	= SLP_TREE_SCALAR_STMTS (orig_phis_slp_node);
       gimple_seq seq = NULL;
 
       /* Build a vector {0, 1, 2, ...}, with the same number of elements
@@ -5159,7 +5158,7 @@  vect_create_epilog_for_reduction (vec<tr
 	  if (!neutral_op)
 	    {
 	      tree scalar_value
-		= PHI_ARG_DEF_FROM_EDGE (orig_phis[i],
+		= PHI_ARG_DEF_FROM_EDGE (orig_phis[i]->stmt,
 					 loop_preheader_edge (loop));
 	      vector_identity = gimple_build_vector_from_val (&seq, vectype,
 							      scalar_value);
@@ -5572,12 +5571,13 @@  vect_create_epilog_for_reduction (vec<tr
      the loop exit phi node.  */
   if (REDUC_GROUP_FIRST_ELEMENT (vinfo_for_stmt (stmt)))
     {
-      gimple *dest_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
+      stmt_vec_info dest_stmt_info
+	= SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
       /* Handle reduction patterns.  */
-      if (STMT_VINFO_RELATED_STMT (vinfo_for_stmt (dest_stmt)))
-	dest_stmt = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (dest_stmt));
+      if (STMT_VINFO_RELATED_STMT (dest_stmt_info))
+	dest_stmt_info = STMT_VINFO_RELATED_STMT (dest_stmt_info);
 
-      scalar_dest = gimple_assign_lhs (dest_stmt);
+      scalar_dest = gimple_assign_lhs (dest_stmt_info->stmt);
       group_size = 1;
     }
 
@@ -5607,13 +5607,12 @@  vect_create_epilog_for_reduction (vec<tr
 
       if (slp_reduc)
         {
-	  gimple *current_stmt = SLP_TREE_SCALAR_STMTS (slp_node)[k];
+	  stmt_vec_info scalar_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[k];
 
-	  orig_stmt_info
-	    = STMT_VINFO_RELATED_STMT (vinfo_for_stmt (current_stmt));
+	  orig_stmt_info = STMT_VINFO_RELATED_STMT (scalar_stmt_info);
 	  /* SLP statements can't participate in patterns.  */
 	  gcc_assert (!orig_stmt_info);
-	  scalar_dest = gimple_assign_lhs (current_stmt);
+	  scalar_dest = gimple_assign_lhs (scalar_stmt_info->stmt);
         }
 
       phis.create (3);
@@ -5881,23 +5880,23 @@  vectorize_fold_left_reduction (gimple *s
   tree op0 = ops[1 - reduc_index];
 
   int group_size = 1;
-  gimple *scalar_dest_def;
+  stmt_vec_info scalar_dest_def_info;
   auto_vec<tree> vec_oprnds0;
   if (slp_node)
     {
       vect_get_vec_defs (op0, NULL_TREE, stmt, &vec_oprnds0, NULL, slp_node);
       group_size = SLP_TREE_SCALAR_STMTS (slp_node).length ();
-      scalar_dest_def = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
+      scalar_dest_def_info = SLP_TREE_SCALAR_STMTS (slp_node)[group_size - 1];
     }
   else
     {
       tree loop_vec_def0 = vect_get_vec_def_for_operand (op0, stmt);
       vec_oprnds0.create (1);
       vec_oprnds0.quick_push (loop_vec_def0);
-      scalar_dest_def = stmt;
+      scalar_dest_def_info = stmt_info;
     }
 
-  tree scalar_dest = gimple_assign_lhs (scalar_dest_def);
+  tree scalar_dest = gimple_assign_lhs (scalar_dest_def_info->stmt);
   tree scalar_type = TREE_TYPE (scalar_dest);
   tree reduc_var = gimple_phi_result (reduc_def_stmt);
 
@@ -5964,10 +5963,11 @@  vectorize_fold_left_reduction (gimple *s
       if (i == vec_num - 1)
 	{
 	  gimple_set_lhs (new_stmt, scalar_dest);
-	  new_stmt_info = vect_finish_replace_stmt (scalar_dest_def, new_stmt);
+	  new_stmt_info = vect_finish_replace_stmt (scalar_dest_def_info,
+						    new_stmt);
 	}
       else
-	new_stmt_info = vect_finish_stmt_generation (scalar_dest_def,
+	new_stmt_info = vect_finish_stmt_generation (scalar_dest_def_info,
 						     new_stmt, gsi);
 
       if (slp_node)
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:22:47.489157307 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:23:00.401042649 +0100
@@ -806,7 +806,7 @@  vect_prologue_cost_for_slp_op (slp_tree
 			       unsigned opno, enum vect_def_type dt,
 			       stmt_vector_for_cost *cost_vec)
 {
-  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+  gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
   tree op = gimple_op (stmt, opno);
   unsigned prologue_cost = 0;
 
@@ -838,11 +838,11 @@  vect_prologue_cost_for_slp_op (slp_tree
     {
       unsigned si = j % group_size;
       if (nelt == 0)
-	elt = gimple_op (SLP_TREE_SCALAR_STMTS (node)[si], opno);
+	elt = gimple_op (SLP_TREE_SCALAR_STMTS (node)[si]->stmt, opno);
       /* ???  We're just tracking whether all operands of a single
 	 vector initializer are the same, ideally we'd check if
 	 we emitted the same one already.  */
-      else if (elt != gimple_op (SLP_TREE_SCALAR_STMTS (node)[si],
+      else if (elt != gimple_op (SLP_TREE_SCALAR_STMTS (node)[si]->stmt,
 				 opno))
 	elt = NULL_TREE;
       nelt++;
@@ -889,7 +889,7 @@  vect_model_simple_cost (stmt_vec_info st
       /* Scan operands and account for prologue cost of constants/externals.
 	 ???  This over-estimates cost for multiple uses and should be
 	 re-engineered.  */
-      gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0];
+      gimple *stmt = SLP_TREE_SCALAR_STMTS (node)[0]->stmt;
       tree lhs = gimple_get_lhs (stmt);
       for (unsigned i = 0; i < gimple_num_ops (stmt); ++i)
 	{
@@ -5532,12 +5532,15 @@  vectorizable_shift (gimple *stmt, gimple
 	 a scalar shift.  */
       if (slp_node)
 	{
-	  vec<gimple *> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
-	  gimple *slpstmt;
+	  vec<stmt_vec_info> stmts = SLP_TREE_SCALAR_STMTS (slp_node);
+	  stmt_vec_info slpstmt_info;
 
-	  FOR_EACH_VEC_ELT (stmts, k, slpstmt)
-	    if (!operand_equal_p (gimple_assign_rhs2 (slpstmt), op1, 0))
-	      scalar_shift_arg = false;
+	  FOR_EACH_VEC_ELT (stmts, k, slpstmt_info)
+	    {
+	      gassign *slpstmt = as_a <gassign *> (slpstmt_info->stmt);
+	      if (!operand_equal_p (gimple_assign_rhs2 (slpstmt), op1, 0))
+		scalar_shift_arg = false;
+	    }
 	}
 
       /* If the shift amount is computed by a pattern stmt we cannot
@@ -7421,7 +7424,7 @@  vectorizable_load (gimple *stmt, gimple_
   vec<tree> dr_chain = vNULL;
   bool grouped_load = false;
   gimple *first_stmt;
-  gimple *first_stmt_for_drptr = NULL;
+  stmt_vec_info first_stmt_info_for_drptr = NULL;
   bool inv_p;
   bool compute_in_loop = false;
   struct loop *at_loop;
@@ -7930,7 +7933,7 @@  vectorizable_load (gimple *stmt, gimple_
       /* For BB vectorization always use the first stmt to base
 	 the data ref pointer on.  */
       if (bb_vinfo)
-	first_stmt_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
+	first_stmt_info_for_drptr = SLP_TREE_SCALAR_STMTS (slp_node)[0];
 
       /* Check if the chain of loads is already vectorized.  */
       if (STMT_VINFO_VEC_STMT (vinfo_for_stmt (first_stmt))
@@ -8180,17 +8183,17 @@  vectorizable_load (gimple *stmt, gimple_
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
-	  else if (first_stmt_for_drptr
-		   && first_stmt != first_stmt_for_drptr)
+	  else if (first_stmt_info_for_drptr
+		   && first_stmt != first_stmt_info_for_drptr)
 	    {
 	      dataref_ptr
-		= vect_create_data_ref_ptr (first_stmt_for_drptr, aggr_type,
-					    at_loop, offset, &dummy, gsi,
-					    &ptr_incr, simd_lane_access_p,
+		= vect_create_data_ref_ptr (first_stmt_info_for_drptr,
+					    aggr_type, at_loop, offset, &dummy,
+					    gsi, &ptr_incr, simd_lane_access_p,
 					    &inv_p, byte_offset, bump);
 	      /* Adjust the pointer by the difference to first_stmt.  */
 	      data_reference_p ptrdr
-		= STMT_VINFO_DATA_REF (vinfo_for_stmt (first_stmt_for_drptr));
+		= STMT_VINFO_DATA_REF (first_stmt_info_for_drptr);
 	      tree diff = fold_convert (sizetype,
 					size_binop (MINUS_EXPR,
 						    DR_INIT (first_dr),
@@ -9391,13 +9394,12 @@  can_vectorize_live_stmts (gimple *stmt,
 {
   if (slp_node)
     {
-      gimple *slp_stmt;
+      stmt_vec_info slp_stmt_info;
       unsigned int i;
-      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (slp_node), i, slp_stmt)
+      FOR_EACH_VEC_ELT (SLP_TREE_SCALAR_STMTS (slp_node), i, slp_stmt_info)
 	{
-	  stmt_vec_info slp_stmt_info = vinfo_for_stmt (slp_stmt);
 	  if (STMT_VINFO_LIVE_P (slp_stmt_info)
-	      && !vectorizable_live_operation (slp_stmt, gsi, slp_node, i,
+	      && !vectorizable_live_operation (slp_stmt_info, gsi, slp_node, i,
 					       vec_stmt, cost_vec))
 	    return false;
 	}
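
Across the tree-vect-loop.c and tree-vect-stmts.c hunks the recurring loop rewrite is the same: the loop variable's element type changes from gimple * to stmt_vec_info and the per-element vinfo_for_stmt lookup disappears.  A hedged sketch of the resulting shape, using a plain range-for over std::vector as a stand-in for FOR_EACH_VEC_ELT over GCC's vec<>; count_live is an invented example, not a vectorizer function.

#include <vector>

struct gimple { };                                  /* stand-in statement */
struct stmt_vec_info_ { gimple *stmt; bool live; }; /* stand-in per-stmt info */
typedef stmt_vec_info_ *stmt_vec_info;

/* Count the "live" scalar stmts of an SLP node; this is the shape the
   can_vectorize_live_stmts loop takes after the conversion.  */
static unsigned
count_live (const std::vector<stmt_vec_info> &scalar_stmts)
{
  unsigned n = 0;
  for (stmt_vec_info slp_stmt_info : scalar_stmts)
    if (slp_stmt_info->live)  /* previously: STMT_VINFO_LIVE_P (vinfo_for_stmt (slp_stmt)) */
      ++n;
  return n;
}

int
main ()
{
  gimple g;
  stmt_vec_info_ a = { &g, true }, b = { &g, false };
  std::vector<stmt_vec_info> stmts = { &a, &b };
  return count_live (stmts) == 1 ? 0 : 1;
}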