[38/46] Pass stmt_vec_infos instead of data_references where relevant

Message ID 87d0vdlze8.fsf@arm.com
State New
Series: Remove vinfo_for_stmt etc.

Commit Message

Richard Sandiford July 24, 2018, 10:08 a.m. UTC
This patch makes various routines (mostly in tree-vect-data-refs.c)
take stmt_vec_infos rather than data_references.  The affected routines
are really dealing with the way that an access is going to be vectorised
for a particular stmt_vec_info, rather than with the original scalar
access described by the data_reference.
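
As a concrete illustration of the pattern (taken from the first hunk of
the tree-vect-data-refs.c diff below), each converted routine now takes
the stmt_vec_info; routines that still need the scalar access recover it
with STMT_VINFO_DATA_REF (stmt_info):

    /* Before: identify the access by its data_reference and look up
       the statement being vectorised.  */
    static unsigned int
    vect_calculate_target_alignment (struct data_reference *dr)
    {
      stmt_vec_info stmt_info = vect_dr_stmt (dr);
      tree vectype = STMT_VINFO_VECTYPE (stmt_info);
      return targetm.vectorize.preferred_vector_alignment (vectype);
    }

    /* After: identify the access by its statement.  */
    static unsigned int
    vect_calculate_target_alignment (stmt_vec_info stmt_info)
    {
      tree vectype = STMT_VINFO_VECTYPE (stmt_info);
      return targetm.vectorize.preferred_vector_alignment (vectype);
    }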


2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (vect_supportable_dr_alignment): Take
	a stmt_vec_info rather than a data_reference.
	* tree-vect-data-refs.c (vect_calculate_target_alignment)
	(vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
	(verify_data_ref_alignment, vector_alignment_reachable_p)
	(vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
	(vect_peeling_supportable, vect_analyze_group_access_1)
	(vect_analyze_group_access, vect_analyze_data_ref_access)
	(vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
	(vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
	(vect_supportable_dr_alignment): Likewise.  Update calls to other
	functions for which the same change is being made.
	(vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
	(vect_analyze_data_refs_alignment): Update calls accordingly.
	(vect_slp_analyze_and_verify_node_alignment): Likewise.
	(vect_analyze_data_ref_accesses): Likewise.
	(vect_prune_runtime_alias_test_list): Likewise.
	(vect_create_addr_base_for_vector_ref): Likewise.
	(vect_create_data_ref_ptr): Likewise.
	(_vect_peel_info::dr): Replace with...
	(_vect_peel_info::stmt_info): ...this new field.
	(vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
	accordingly, and update after above interface changes.
	(vect_peeling_hash_get_lowest_cost): Likewise.
	(vect_peeling_hash_choose_best_peeling): Likewise.
	(vect_enhance_data_refs_alignment): Likewise.
	(vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
	rather than a data_reference.
	* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
	(get_negative_load_store_type): Update calls to
	vect_supportable_dr_alignment.
	(vect_get_data_ptr_increment, ensure_base_align): Take a
	stmt_vec_info instead of a data_reference.
	(vectorizable_store, vectorizable_load): Update calls after
	above interface changes.

Comments

Richard Biener July 25, 2018, 10:20 a.m. UTC | #1
On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> This patch makes various routines (mostly in tree-vect-data-refs.c)
> take stmt_vec_infos rather than data_references.  The affected routines
> are really dealing with the way that an access is going to be vectorised
> for a particular stmt_vec_info, rather than with the original scalar
> access described by the data_reference.

Similar.  Doesn't it make more sense to pass both stmt_info and DR to
the functions?

We currently cannot handle aggregate copies in the to-be-vectorized IL
but rely on SRA and friends to elide those.  That's the only two-DR
stmt I can think of for vectorization.  Maybe also aggregate by-value
arguments / return values of function calls with OMP SIMD, if that
supports this somehow.
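
For example, an aggregate copy like the following is a single statement
with two data references, a read of *s and a write of *d; as said, we
currently rely on such copies being removed before vectorization:

    struct pair { int lo, hi; };

    void
    copy_pair (struct pair *d, const struct pair *s)
    {
      *d = *s;
    }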

Richard.

>
> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (vect_supportable_dr_alignment): Take
>         a stmt_vec_info rather than a data_reference.
>         * tree-vect-data-refs.c (vect_calculate_target_alignment)
>         (vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
>         (verify_data_ref_alignment, vector_alignment_reachable_p)
>         (vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
>         (vect_peeling_supportable, vect_analyze_group_access_1)
>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>         (vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
>         (vect_supportable_dr_alignment): Likewise.  Update calls to other
>         functions for which the same change is being made.
>         (vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
>         (vect_analyze_data_refs_alignment): Update calls accordingly.
>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>         (vect_analyze_data_ref_accesses): Likewise.
>         (vect_prune_runtime_alias_test_list): Likewise.
>         (vect_create_addr_base_for_vector_ref): Likewise.
>         (vect_create_data_ref_ptr): Likewise.
>         (_vect_peel_info::dr): Replace with...
>         (_vect_peel_info::stmt_info): ...this new field.
>         (vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
>         accordingly, and update after above interface changes.
>         (vect_peeling_hash_get_lowest_cost): Likewise.
>         (vect_peeling_hash_choose_best_peeling): Likewise.
>         (vect_enhance_data_refs_alignment): Likewise.
>         (vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
>         rather than a data_reference.
>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>         (get_negative_load_store_type): Update calls to
>         vect_supportable_dr_alignment.
>         (vect_get_data_ptr_increment, ensure_base_align): Take a
>         stmt_vec_info instead of a data_reference.
>         (vectorizable_store, vectorizable_load): Update calls after
>         above interface changes.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-24 10:24:05.744462369 +0100
> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:08.924434128 +0100
> @@ -1541,7 +1541,7 @@ extern tree vect_get_mask_type_for_stmt
>  /* In tree-vect-data-refs.c.  */
>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>  extern enum dr_alignment_support vect_supportable_dr_alignment
> -                                           (struct data_reference *, bool);
> +  (stmt_vec_info, bool);
>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>                                             HOST_WIDE_INT *);
>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-24 10:24:05.740462405 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:08.924434128 +0100
> @@ -858,19 +858,19 @@ vect_record_base_alignments (vec_info *v
>      }
>  }
>
> -/* Return the target alignment for the vectorized form of DR.  */
> +/* Return the target alignment for the vectorized form of the load or store
> +   in STMT_INFO.  */
>
>  static unsigned int
> -vect_calculate_target_alignment (struct data_reference *dr)
> +vect_calculate_target_alignment (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    return targetm.vectorize.preferred_vector_alignment (vectype);
>  }
>
>  /* Function vect_compute_data_ref_alignment
>
> -   Compute the misalignment of the data reference DR.
> +   Compute the misalignment of the load or store in STMT_INFO.
>
>     Output:
>     1. dr_misalignment (STMT_INFO) is defined.
> @@ -879,9 +879,9 @@ vect_calculate_target_alignment (struct
>     only for trivial cases. TODO.  */
>
>  static void
> -vect_compute_data_ref_alignment (struct data_reference *dr)
> +vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> @@ -905,7 +905,7 @@ vect_compute_data_ref_alignment (struct
>    bool step_preserves_misalignment_p;
>
>    unsigned HOST_WIDE_INT vector_alignment
> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
> +    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
>    STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
>
>    /* No step for BB vectorization.  */
> @@ -1053,28 +1053,28 @@ vect_compute_data_ref_alignment (struct
>  }
>
>  /* Function vect_update_misalignment_for_peel.
> -   Sets DR's misalignment
> -   - to 0 if it has the same alignment as DR_PEEL,
> +   Sets the misalignment of the load or store in STMT_INFO
> +   - to 0 if it has the same alignment as PEEL_STMT_INFO,
>     - to the misalignment computed using NPEEL if DR's misalignment is known,
>     - to -1 (unknown) otherwise.
>
> -   DR - the data reference whose misalignment is to be adjusted.
> -   DR_PEEL - the data reference whose misalignment is being made
> -             zero in the vector loop by the peel.
> +   STMT_INFO - the load or store whose misalignment is to be adjusted.
> +   PEEL_STMT_INFO - the load or store whose misalignment is being made
> +                   zero in the vector loop by the peel.
>     NPEEL - the number of iterations in the peel loop if the misalignment
> -           of DR_PEEL is known at compile time.  */
> +          of PEEL_STMT_INFO is known at compile time.  */
>
>  static void
> -vect_update_misalignment_for_peel (struct data_reference *dr,
> -                                   struct data_reference *dr_peel, int npeel)
> +vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
> +                                  stmt_vec_info peel_stmt_info, int npeel)
>  {
>    unsigned int i;
>    vec<dr_p> same_aligned_drs;
>    struct data_reference *current_dr;
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> +  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
>    int dr_size = vect_get_scalar_dr_size (dr);
>    int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
>
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
> @@ -1085,7 +1085,7 @@ vect_update_misalignment_for_peel (struc
>
>    /* It can be assumed that the data refs with the same alignment as dr_peel
>       are aligned in the vector loop.  */
> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>      {
>        if (current_dr != dr)
> @@ -1118,13 +1118,15 @@ vect_update_misalignment_for_peel (struc
>
>  /* Function verify_data_ref_alignment
>
> -   Return TRUE if DR can be handled with respect to alignment.  */
> +   Return TRUE if the load or store in STMT_INFO can be handled with
> +   respect to alignment.  */
>
>  static bool
> -verify_data_ref_alignment (data_reference_p dr)
> +verify_data_ref_alignment (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    enum dr_alignment_support supportable_dr_alignment
> -    = vect_supportable_dr_alignment (dr, false);
> +    = vect_supportable_dr_alignment (stmt_info, false);
>    if (!supportable_dr_alignment)
>      {
>        if (dump_enabled_p ())
> @@ -1181,7 +1183,7 @@ vect_verify_datarefs_alignment (loop_vec
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      if (! verify_data_ref_alignment (dr))
> +      if (! verify_data_ref_alignment (stmt_info))
>         return false;
>      }
>
> @@ -1203,13 +1205,13 @@ not_size_aligned (tree exp)
>
>  /* Function vector_alignment_reachable_p
>
> -   Return true if vector alignment for DR is reachable by peeling
> -   a few loop iterations.  Return false otherwise.  */
> +   Return true if the vector alignment is reachable for the load or store
> +   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
>
>  static bool
> -vector_alignment_reachable_p (struct data_reference *dr)
> +vector_alignment_reachable_p (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -1270,16 +1272,16 @@ vector_alignment_reachable_p (struct dat
>  }
>
>
> -/* Calculate the cost of the memory access represented by DR.  */
> +/* Calculate the cost of the memory access in STMT_INFO.  */
>
>  static void
> -vect_get_data_access_cost (struct data_reference *dr,
> +vect_get_data_access_cost (stmt_vec_info stmt_info,
>                             unsigned int *inside_cost,
>                             unsigned int *outside_cost,
>                            stmt_vector_for_cost *body_cost_vec,
>                            stmt_vector_for_cost *prologue_cost_vec)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    int ncopies;
>
> @@ -1303,7 +1305,7 @@ vect_get_data_access_cost (struct data_r
>
>  typedef struct _vect_peel_info
>  {
> -  struct data_reference *dr;
> +  stmt_vec_info stmt_info;
>    int npeel;
>    unsigned int count;
>  } *vect_peel_info;
> @@ -1337,16 +1339,17 @@ peel_info_hasher::equal (const _vect_pee
>  }
>
>
> -/* Insert DR into peeling hash table with NPEEL as key.  */
> +/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
>
>  static void
>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
> +                         loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
>                            int npeel)
>  {
>    struct _vect_peel_info elem, *slot;
>    _vect_peel_info **new_slot;
> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> +  bool supportable_dr_alignment
> +    = vect_supportable_dr_alignment (stmt_info, true);
>
>    elem.npeel = npeel;
>    slot = peeling_htab->find (&elem);
> @@ -1356,7 +1359,7 @@ vect_peeling_hash_insert (hash_table<pee
>      {
>        slot = XNEW (struct _vect_peel_info);
>        slot->npeel = npeel;
> -      slot->dr = dr;
> +      slot->stmt_info = stmt_info;
>        slot->count = 1;
>        new_slot = peeling_htab->find_slot (slot, INSERT);
>        *new_slot = slot;
> @@ -1383,19 +1386,19 @@ vect_peeling_hash_get_most_frequent (_ve
>      {
>        max->peel_info.npeel = elem->npeel;
>        max->peel_info.count = elem->count;
> -      max->peel_info.dr = elem->dr;
> +      max->peel_info.stmt_info = elem->stmt_info;
>      }
>
>    return 1;
>  }
>
>  /* Get the costs of peeling NPEEL iterations checking data access costs
> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
> -   misalignment will be zero after peeling.  */
> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
> +   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
>
>  static void
>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
> -                               struct data_reference *dr0,
> +                               stmt_vec_info peel_stmt_info,
>                                 unsigned int *inside_cost,
>                                 unsigned int *outside_cost,
>                                 stmt_vector_for_cost *body_cost_vec,
> @@ -1403,8 +1406,6 @@ vect_get_peeling_costs_all_drs (vec<data
>                                 unsigned int npeel,
>                                 bool unknown_misalignment)
>  {
> -  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
> -                                 : NULL_STMT_VEC_INFO);
>    unsigned i;
>    data_reference *dr;
>
> @@ -1433,8 +1434,8 @@ vect_get_peeling_costs_all_drs (vec<data
>        else if (unknown_misalignment && stmt_info == peel_stmt_info)
>         set_dr_misalignment (stmt_info, 0);
>        else
> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
> +       vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
> +      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
>                                  body_cost_vec, prologue_cost_vec);
>        set_dr_misalignment (stmt_info, save_misalignment);
>      }
> @@ -1450,7 +1451,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    vect_peel_info elem = *slot;
>    int dummy;
>    unsigned int inside_cost = 0, outside_cost = 0;
> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
> +  stmt_vec_info stmt_info = elem->stmt_info;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>                        epilogue_cost_vec;
> @@ -1460,7 +1461,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    epilogue_cost_vec.create (2);
>
>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
> -                                 elem->dr, &inside_cost, &outside_cost,
> +                                 elem->stmt_info, &inside_cost, &outside_cost,
>                                   &body_cost_vec, &prologue_cost_vec,
>                                   elem->npeel, false);
>
> @@ -1484,7 +1485,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>      {
>        min->inside_cost = inside_cost;
>        min->outside_cost = outside_cost;
> -      min->peel_info.dr = elem->dr;
> +      min->peel_info.stmt_info = elem->stmt_info;
>        min->peel_info.npeel = elem->npeel;
>        min->peel_info.count = elem->count;
>      }
> @@ -1503,7 +1504,7 @@ vect_peeling_hash_choose_best_peeling (h
>  {
>     struct _vect_peel_extended_info res;
>
> -   res.peel_info.dr = NULL;
> +   res.peel_info.stmt_info = NULL;
>
>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>       {
> @@ -1527,8 +1528,8 @@ vect_peeling_hash_choose_best_peeling (h
>  /* Return true if the new peeling NPEEL is supported.  */
>
>  static bool
> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
> -                         unsigned npeel)
> +vect_peeling_supportable (loop_vec_info loop_vinfo,
> +                         stmt_vec_info peel_stmt_info, unsigned npeel)
>  {
>    unsigned i;
>    struct data_reference *dr = NULL;
> @@ -1540,10 +1541,10 @@ vect_peeling_supportable (loop_vec_info
>      {
>        int save_misalignment;
>
> -      if (dr == dr0)
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      if (stmt_info == peel_stmt_info)
>         continue;
>
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        /* For interleaving, only the alignment of the first access
>          matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> @@ -1557,8 +1558,9 @@ vect_peeling_supportable (loop_vec_info
>         continue;
>
>        save_misalignment = dr_misalignment (stmt_info);
> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (stmt_info, false);
>        set_dr_misalignment (stmt_info, save_misalignment);
>
>        if (!supportable_dr_alignment)
> @@ -1665,8 +1667,9 @@ vect_enhance_data_refs_alignment (loop_v
>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    enum dr_alignment_support supportable_dr_alignment;
> -  struct data_reference *dr0 = NULL, *first_store = NULL;
>    struct data_reference *dr;
> +  stmt_vec_info peel_stmt_info = NULL;
> +  stmt_vec_info first_store_info = NULL;
>    unsigned int i, j;
>    bool do_peeling = false;
>    bool do_versioning = false;
> @@ -1675,7 +1678,7 @@ vect_enhance_data_refs_alignment (loop_v
>    bool one_misalignment_known = false;
>    bool one_misalignment_unknown = false;
>    bool one_dr_unsupportable = false;
> -  struct data_reference *unsupportable_dr = NULL;
> +  stmt_vec_info unsupportable_stmt_info = NULL;
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>    unsigned possible_npeel_number = 1;
>    tree vectype;
> @@ -1745,8 +1748,9 @@ vect_enhance_data_refs_alignment (loop_v
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> -      do_peeling = vector_alignment_reachable_p (dr);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (stmt_info, true);
> +      do_peeling = vector_alignment_reachable_p (stmt_info);
>        if (do_peeling)
>          {
>           if (known_alignment_for_access_p (stmt_info))
> @@ -1796,7 +1800,7 @@ vect_enhance_data_refs_alignment (loop_v
>                for (j = 0; j < possible_npeel_number; j++)
>                  {
>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
> -                                           dr, npeel_tmp);
> +                                           stmt_info, npeel_tmp);
>                   npeel_tmp += target_align / dr_size;
>                  }
>
> @@ -1810,11 +1814,11 @@ vect_enhance_data_refs_alignment (loop_v
>                   stores over load.  */
>               unsigned same_align_drs
>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
> -             if (!dr0
> +             if (!peel_stmt_info
>                   || same_align_drs_max < same_align_drs)
>                 {
>                   same_align_drs_max = same_align_drs;
> -                 dr0 = dr;
> +                 peel_stmt_info = stmt_info;
>                 }
>               /* For data-refs with the same number of related
>                  accesses prefer the one where the misalign
> @@ -1822,6 +1826,7 @@ vect_enhance_data_refs_alignment (loop_v
>               else if (same_align_drs_max == same_align_drs)
>                 {
>                   struct loop *ivloop0, *ivloop;
> +                 data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>                   ivloop0 = outermost_invariant_loop_for_expr
>                     (loop, DR_BASE_ADDRESS (dr0));
>                   ivloop = outermost_invariant_loop_for_expr
> @@ -1829,7 +1834,7 @@ vect_enhance_data_refs_alignment (loop_v
>                   if ((ivloop && !ivloop0)
>                       || (ivloop && ivloop0
>                           && flow_loop_nested_p (ivloop, ivloop0)))
> -                   dr0 = dr;
> +                   peel_stmt_info = stmt_info;
>                 }
>
>               one_misalignment_unknown = true;
> @@ -1839,11 +1844,11 @@ vect_enhance_data_refs_alignment (loop_v
>               if (!supportable_dr_alignment)
>               {
>                 one_dr_unsupportable = true;
> -               unsupportable_dr = dr;
> +               unsupportable_stmt_info = stmt_info;
>               }
>
> -             if (!first_store && DR_IS_WRITE (dr))
> -               first_store = dr;
> +             if (!first_store_info && DR_IS_WRITE (dr))
> +               first_store_info = stmt_info;
>              }
>          }
>        else
> @@ -1886,16 +1891,16 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
> +      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
>                                       &load_inside_cost,
>                                       &load_outside_cost,
>                                       &dummy, &dummy, estimated_npeels, true);
>        dummy.release ();
>
> -      if (first_store)
> +      if (first_store_info)
>         {
>           dummy.create (2);
> -         vect_get_peeling_costs_all_drs (datarefs, first_store,
> +         vect_get_peeling_costs_all_drs (datarefs, first_store_info,
>                                           &store_inside_cost,
>                                           &store_outside_cost,
>                                           &dummy, &dummy,
> @@ -1912,7 +1917,7 @@ vect_enhance_data_refs_alignment (loop_v
>           || (load_inside_cost == store_inside_cost
>               && load_outside_cost > store_outside_cost))
>         {
> -         dr0 = first_store;
> +         peel_stmt_info = first_store_info;
>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>         }
> @@ -1936,18 +1941,18 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        peel_for_unknown_alignment.peel_info.count = 1
> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
> +       + STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
>      }
>
>    peel_for_unknown_alignment.peel_info.npeel = 0;
> -  peel_for_unknown_alignment.peel_info.dr = dr0;
> +  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
>
>    best_peel = peel_for_unknown_alignment;
>
>    peel_for_known_alignment.inside_cost = INT_MAX;
>    peel_for_known_alignment.outside_cost = INT_MAX;
>    peel_for_known_alignment.peel_info.count = 0;
> -  peel_for_known_alignment.peel_info.dr = NULL;
> +  peel_for_known_alignment.peel_info.stmt_info = NULL;
>
>    if (do_peeling && one_misalignment_known)
>      {
> @@ -1959,7 +1964,7 @@ vect_enhance_data_refs_alignment (loop_v
>      }
>
>    /* Compare costs of peeling for known and unknown alignment. */
> -  if (peel_for_known_alignment.peel_info.dr != NULL
> +  if (peel_for_known_alignment.peel_info.stmt_info
>        && peel_for_unknown_alignment.inside_cost
>        >= peel_for_known_alignment.inside_cost)
>      {
> @@ -1976,7 +1981,7 @@ vect_enhance_data_refs_alignment (loop_v
>       since we'd have to discard a chosen peeling except when it accidentally
>       aligned the unsupportable data ref.  */
>    if (one_dr_unsupportable)
> -    dr0 = unsupportable_dr;
> +    peel_stmt_info = unsupportable_stmt_info;
>    else if (do_peeling)
>      {
>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
> @@ -2007,7 +2012,7 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        npeel = best_peel.peel_info.npeel;
> -      dr0 = best_peel.peel_info.dr;
> +      peel_stmt_info = best_peel.peel_info.stmt_info;
>
>        /* If no peeling is not more expensive than the best peeling we
>          have so far, don't perform any peeling.  */
> @@ -2017,8 +2022,8 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
>        vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
> +      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>
>        if (known_alignment_for_access_p (peel_stmt_info))
>          {
> @@ -2052,7 +2057,7 @@ vect_enhance_data_refs_alignment (loop_v
>          }
>
>        /* Ensure that all datarefs can be vectorized after the peel.  */
> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
> +      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
>         do_peeling = false;
>
>        /* Check if all datarefs are supportable and log.  */
> @@ -2125,7 +2130,8 @@ vect_enhance_data_refs_alignment (loop_v
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
>
> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
> +               vect_update_misalignment_for_peel (stmt_info,
> +                                                  peel_stmt_info, npeel);
>               }
>
>            LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
> @@ -2188,7 +2194,8 @@ vect_enhance_data_refs_alignment (loop_v
>               break;
>             }
>
> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +         supportable_dr_alignment
> +           = vect_supportable_dr_alignment (stmt_info, false);
>
>            if (!supportable_dr_alignment)
>              {
> @@ -2203,7 +2210,6 @@ vect_enhance_data_refs_alignment (loop_v
>                    break;
>                  }
>
> -             stmt_info = vect_dr_stmt (dr);
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>               gcc_assert (vectype);
>
> @@ -2314,9 +2320,9 @@ vect_find_same_alignment_drs (struct dat
>    if (maybe_ne (diff, 0))
>      {
>        /* Get the wider of the two alignments.  */
> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
> +      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
>                               / BITS_PER_UNIT);
> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
> +      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
>                               / BITS_PER_UNIT);
>        unsigned int max_align = MAX (align_a, align_b);
>
> @@ -2366,7 +2372,7 @@ vect_analyze_data_refs_alignment (loop_v
>      {
>        stmt_vec_info stmt_info = vect_dr_stmt (dr);
>        if (STMT_VINFO_VECTORIZABLE (stmt_info))
> -       vect_compute_data_ref_alignment (dr);
> +       vect_compute_data_ref_alignment (stmt_info);
>      }
>
>    return true;
> @@ -2382,17 +2388,16 @@ vect_slp_analyze_and_verify_node_alignme
>       the node is permuted in which case we start from the first
>       element in the group.  */
>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  stmt_vec_info stmt_info = first_stmt_info;
>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
> -    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
> +    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>
> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
> -  vect_compute_data_ref_alignment (dr);
> +  vect_compute_data_ref_alignment (stmt_info);
>    /* For creating the data-ref pointer we need alignment of the
>       first element anyway.  */
> -  if (dr != first_dr)
> -    vect_compute_data_ref_alignment (first_dr);
> -  if (! verify_data_ref_alignment (dr))
> +  if (stmt_info != first_stmt_info)
> +    vect_compute_data_ref_alignment (first_stmt_info);
> +  if (! verify_data_ref_alignment (first_stmt_info))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -2430,19 +2435,19 @@ vect_slp_analyze_and_verify_instance_ali
>  }
>
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> -   accesses of legal size, step, etc.  Detect gaps, single element
> -   interleaving, and other special cases. Set grouped access info.
> -   Collect groups of strided stores for further use in SLP analysis.
> -   Worker for vect_analyze_group_access.  */
> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
> +   single element interleaving, and other special cases.  Set grouped
> +   access info.  Collect groups of strided stores for further use in
> +   SLP analysis.  Worker for vect_analyze_group_access.  */
>
>  static bool
> -vect_analyze_group_access_1 (struct data_reference *dr)
> +vect_analyze_group_access_1 (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    HOST_WIDE_INT dr_step = -1;
> @@ -2519,7 +2524,7 @@ vect_analyze_group_access_1 (struct data
>        if (bb_vinfo)
>         {
>           /* Mark the statement as unvectorizable.  */
> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>           return true;
>         }
>
> @@ -2667,18 +2672,18 @@ vect_analyze_group_access_1 (struct data
>    return true;
>  }
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> -   accesses of legal size, step, etc.  Detect gaps, single element
> -   interleaving, and other special cases. Set grouped access info.
> -   Collect groups of strided stores for further use in SLP analysis.  */
> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
> +   single element interleaving, and other special cases.  Set grouped
> +   access info.  Collect groups of strided stores for further use in
> +   SLP analysis.  */
>
>  static bool
> -vect_analyze_group_access (struct data_reference *dr)
> +vect_analyze_group_access (stmt_vec_info stmt_info)
>  {
> -  if (!vect_analyze_group_access_1 (dr))
> +  if (!vect_analyze_group_access_1 (stmt_info))
>      {
>        /* Dissolve the group if present.  */
> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
>        while (stmt_info)
>         {
>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> @@ -2691,16 +2696,16 @@ vect_analyze_group_access (struct data_r
>    return true;
>  }
>
> -/* Analyze the access pattern of the data-reference DR.
> +/* Analyze the access pattern of the load or store in STMT_INFO.
>     In case of non-consecutive accesses call vect_analyze_group_access() to
>     analyze groups of accesses.  */
>
>  static bool
> -vect_analyze_data_ref_access (struct data_reference *dr)
> +vect_analyze_data_ref_access (stmt_vec_info stmt_info)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>
> @@ -2780,10 +2785,10 @@ vect_analyze_data_ref_access (struct dat
>    if (TREE_CODE (step) != INTEGER_CST)
>      return (STMT_VINFO_STRIDED_P (stmt_info)
>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -               || vect_analyze_group_access (dr)));
> +               || vect_analyze_group_access (stmt_info)));
>
>    /* Not consecutive access - check if it's a part of interleaving group.  */
> -  return vect_analyze_group_access (dr);
> +  return vect_analyze_group_access (stmt_info);
>  }
>
>  /* Compare two data-references DRA and DRB to group them into chunks
> @@ -3062,25 +3067,28 @@ vect_analyze_data_ref_accesses (vec_info
>      }
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
> -        && !vect_analyze_data_ref_access (dr))
> -      {
> -       if (dump_enabled_p ())
> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                          "not vectorized: complicated access pattern.\n");
> +    {
> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      if (STMT_VINFO_VECTORIZABLE (stmt_info)
> +         && !vect_analyze_data_ref_access (stmt_info))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "not vectorized: complicated access pattern.\n");
>
> -        if (is_a <bb_vec_info> (vinfo))
> -         {
> -           /* Mark the statement as not vectorizable.  */
> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> -           continue;
> -         }
> -        else
> -         {
> -           datarefs_copy.release ();
> -           return false;
> -         }
> -      }
> +         if (is_a <bb_vec_info> (vinfo))
> +           {
> +             /* Mark the statement as not vectorizable.  */
> +             STMT_VINFO_VECTORIZABLE (stmt_info) = false;
> +             continue;
> +           }
> +         else
> +           {
> +             datarefs_copy.release ();
> +             return false;
> +           }
> +       }
> +    }
>
>    datarefs_copy.release ();
>    return true;
> @@ -3089,7 +3097,7 @@ vect_analyze_data_ref_accesses (vec_info
>  /* Function vect_vfa_segment_size.
>
>     Input:
> -     DR: The data reference.
> +     STMT_INFO: the load or store statement.
>       LENGTH_FACTOR: segment length to consider.
>
>     Return a value suitable for the dr_with_seg_len::seg_len field.
> @@ -3098,8 +3106,9 @@ vect_analyze_data_ref_accesses (vec_info
>     the size of the access; in effect it only describes the first byte.  */
>
>  static tree
> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
> +vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
>  {
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    length_factor = size_binop (MINUS_EXPR,
>                               fold_convert (sizetype, length_factor),
>                               size_one_node);
> @@ -3107,23 +3116,23 @@ vect_vfa_segment_size (struct data_refer
>                      length_factor);
>  }
>
> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
> +/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
>     gives the worst-case number of bytes covered by the segment.  */
>
>  static unsigned HOST_WIDE_INT
> -vect_vfa_access_size (data_reference *dr)
> +vect_vfa_access_size (stmt_vec_info stmt_vinfo)
>  {
> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
>    tree ref_type = TREE_TYPE (DR_REF (dr));
>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>    unsigned HOST_WIDE_INT access_size = ref_size;
>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>      {
> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>      }
>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
> -      && (vect_supportable_dr_alignment (dr, false)
> +      && (vect_supportable_dr_alignment (stmt_vinfo, false)
>           == dr_explicit_realign_optimized))
>      {
>        /* We might access a full vector's worth.  */
> @@ -3281,13 +3290,14 @@ vect_check_lower_bound (loop_vec_info lo
>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>  }
>
> -/* Return true if it's unlikely that the step of the vectorized form of DR
> -   will span fewer than GAP bytes.  */
> +/* Return true if it's unlikely that the step of the vectorized form of
> +   the load or store in STMT_INFO will span fewer than GAP bytes.  */
>
>  static bool
> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
> +vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
> @@ -3295,16 +3305,20 @@ vect_small_gap_p (loop_vec_info loop_vin
>    return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
>  }
>
> -/* Return true if we know that there is no alias between DR_A and DR_B
> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
> -   *LOWER_BOUND_OUT to this N.  */
> +/* Return true if we know that there is no alias between the loads and
> +   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
> +   STMT_INFO_A's access is >= some N.  When returning true,
> +   set *LOWER_BOUND_OUT to this N.  */
>
>  static bool
> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
> +vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
> +                               stmt_vec_info stmt_info_b,
>                                 poly_uint64 *lower_bound_out)
>  {
>    /* Check that there is a constant gap of known sign between DR_A
>       and DR_B.  */
> +  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
> +  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
>    poly_int64 init_a, init_b;
>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
> @@ -3324,8 +3338,7 @@ vectorizable_with_step_bound_p (data_ref
>    /* If the two accesses could be dependent within a scalar iteration,
>       make sure that we'd retain their order.  */
>    if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
> -                                        vect_dr_stmt (dr_b)))
> +      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
>      return false;
>
>    /* There is no alias if abs (DR_STEP) is greater than or equal to
> @@ -3426,7 +3439,8 @@ vect_prune_runtime_alias_test_list (loop
>          and intra-iteration dependencies are guaranteed to be honored.  */
>        if (ignore_step_p
>           && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
> -             || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
> +             || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
> +                                                &lower_bound)))
>         {
>           if (dump_enabled_p ())
>             {
> @@ -3446,9 +3460,10 @@ vect_prune_runtime_alias_test_list (loop
>          than the number of bytes handled by one vector iteration.)  */
>        if (!ignore_step_p
>           && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
> -         && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
> -         && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
> -             || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
> +         && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
> +                                            &lower_bound)
> +         && (vect_small_gap_p (stmt_info_a, lower_bound)
> +             || vect_small_gap_p (stmt_info_b, lower_bound)))
>         {
>           bool unsigned_p = dr_known_forward_stride_p (dr_a);
>           if (dump_enabled_p ())
> @@ -3501,11 +3516,13 @@ vect_prune_runtime_alias_test_list (loop
>             length_factor = scalar_loop_iters;
>           else
>             length_factor = size_int (vect_factor);
> -         segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
> -         segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
> +         segment_length_a = vect_vfa_segment_size (stmt_info_a,
> +                                                   length_factor);
> +         segment_length_b = vect_vfa_segment_size (stmt_info_b,
> +                                                   length_factor);
>         }
> -      access_size_a = vect_vfa_access_size (dr_a);
> -      access_size_b = vect_vfa_access_size (dr_b);
> +      access_size_a = vect_vfa_access_size (stmt_info_a);
> +      access_size_b = vect_vfa_access_size (stmt_info_b);
>        align_a = vect_vfa_align (dr_a);
>        align_b = vect_vfa_align (dr_b);
>
> @@ -4463,12 +4480,12 @@ vect_get_new_ssa_name (tree type, enum v
>    return new_vect_var;
>  }
>
> -/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
> +/* Duplicate ptr info and set alignment/misaligment on NAME from STMT_INFO.  */
>
>  static void
> -vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
> +vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
>    int misalign = dr_misalignment (stmt_info);
>    if (misalign == DR_MISALIGNMENT_UNKNOWN)
> @@ -4579,7 +4596,7 @@ vect_create_addr_base_for_vector_ref (st
>        && TREE_CODE (addr_base) == SSA_NAME
>        && !SSA_NAME_PTR_INFO (addr_base))
>      {
> -      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
> +      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
>        if (offset || byte_offset)
>         mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
>      }
> @@ -4845,8 +4862,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
>         {
> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>         }
>        if (ptr_incr)
>         *ptr_incr = incr;
> @@ -4875,8 +4892,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>        /* Copy the points-to information if it exists. */
>        if (DR_PTR_INFO (dr))
>         {
> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>         }
>        if (ptr_incr)
>         *ptr_incr = incr;
> @@ -6434,17 +6451,17 @@ vect_can_force_dr_alignment_p (const_tre
>  }
>
>
> -/* Return whether the data reference DR is supported with respect to its
> -   alignment.
> +/* Return whether the load or store in STMT_INFO is supported with
> +   respect to its alignment.
>     If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
>     it is aligned, i.e., check if it is possible to vectorize it with different
>     alignment.  */
>
>  enum dr_alignment_support
> -vect_supportable_dr_alignment (struct data_reference *dr,
> +vect_supportable_dr_alignment (stmt_vec_info stmt_info,
>                                 bool check_aligned_accesses)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>    machine_mode mode = TYPE_MODE (vectype);
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
> Index: gcc/tree-vect-stmts.c
> ===================================================================
> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:05.744462369 +0100
> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:08.924434128 +0100
> @@ -1057,8 +1057,8 @@ vect_get_store_cost (stmt_vec_info stmt_
>                      unsigned int *inside_cost,
>                      stmt_vector_for_cost *body_cost_vec)
>  {
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  int alignment_support_scheme
> +    = vect_supportable_dr_alignment (stmt_info, false);
>
>    switch (alignment_support_scheme)
>      {
> @@ -1237,8 +1237,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
>                     stmt_vector_for_cost *body_cost_vec,
>                     bool record_prologue_costs)
>  {
> -  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  int alignment_support_scheme
> +    = vect_supportable_dr_alignment (stmt_info, false);
>
>    switch (alignment_support_scheme)
>      {
> @@ -2340,7 +2340,6 @@ get_negative_load_store_type (stmt_vec_i
>                               vec_load_store_type vls_type,
>                               unsigned int ncopies)
>  {
> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>    dr_alignment_support alignment_support_scheme;
>
>    if (ncopies > 1)
> @@ -2351,7 +2350,7 @@ get_negative_load_store_type (stmt_vec_i
>        return VMAT_ELEMENTWISE;
>      }
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
> +  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
>    if (alignment_support_scheme != dr_aligned
>        && alignment_support_scheme != dr_unaligned_supported)
>      {
> @@ -2924,15 +2923,14 @@ vect_get_strided_load_store_ops (stmt_ve
>  }
>
>  /* Return the amount that should be added to a vector pointer to move
> -   to the next or previous copy of AGGR_TYPE.  DR is the data reference
> -   being vectorized and MEMORY_ACCESS_TYPE describes the type of
> +   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
> +   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
>     vectorization.  */
>
>  static tree
> -vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
> +vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
>                              vect_memory_access_type memory_access_type)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    if (memory_access_type == VMAT_INVARIANT)
>      return size_zero_node;
>
> @@ -6171,12 +6169,12 @@ vectorizable_operation (stmt_vec_info st
>    return true;
>  }
>
> -/* A helper function to ensure data reference DR's base alignment.  */
> +/* If we decided to increase the base alignment for the memory access in
> +   STMT_INFO, but haven't increased it yet, do so now.  */
>
>  static void
> -ensure_base_align (struct data_reference *dr)
> +ensure_base_align (stmt_vec_info stmt_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>    if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
>      return;
>
> @@ -6439,7 +6437,7 @@ vectorizable_store (stmt_vec_info stmt_i
>
>    /* Transform.  */
>
> -  ensure_base_align (dr);
> +  ensure_base_align (stmt_info);
>
>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>      {
> @@ -6882,7 +6880,8 @@ vectorizable_store (stmt_vec_info stmt_i
>    auto_vec<tree> dr_chain (group_size);
>    oprnds.create (group_size);
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
> +  alignment_support_scheme
> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> @@ -6920,7 +6919,8 @@ vectorizable_store (stmt_vec_info stmt_i
>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>        else
>         aggr_type = vectype;
> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
> +                                         memory_access_type);
>      }
>
>    if (mask)
> @@ -7667,7 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
>
>    /* Transform.  */
>
> -  ensure_base_align (dr);
> +  ensure_base_align (stmt_info);
>
>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>      {
> @@ -7990,7 +7990,8 @@ vectorizable_load (stmt_vec_info stmt_in
>        ref_type = reference_alias_ptr_type (DR_REF (first_dr));
>      }
>
> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
> +  alignment_support_scheme
> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>    gcc_assert (alignment_support_scheme);
>    vec_loop_masks *loop_masks
>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
> @@ -8155,7 +8156,8 @@ vectorizable_load (stmt_vec_info stmt_in
>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>        else
>         aggr_type = vectype;
> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
> +                                         memory_access_type);
>      }
>
>    tree vec_mask = NULL_TREE;
Richard Sandiford July 25, 2018, 11:21 a.m. UTC | #2
Richard Biener <richard.guenther@gmail.com> writes:
> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> <richard.sandiford@arm.com> wrote:
>>
>> This patch makes various routines (mostly in tree-vect-data-refs.c)
>> take stmt_vec_infos rather than data_references.  The affected routines
>> are really dealing with the way that an access is going to be vectorised
>> for a particular stmt_vec_info, rather than with the original scalar
>> access described by the data_reference.
>
> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
> the functions?

Not sure.  If we...

> We currently cannot handle aggregate copies in the to-be-vectorized IL
> but rely on SRA and friends to elide those.  That's the only two-DR
> stmt I can think of for vectorization.  Maybe aggregate by-value / return
> function calls with OMP SIMD if that supports this somehow.

...did this then I don't think a data_reference would be the natural
way of identifying a DR within a stmt_vec_info.  Presumably the
stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
If both of those were vectors then a (stmt_vec_info, index) pair
might make more sense than (stmt_vec_info, data_reference).

Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
so that there's a back-pointer to the DR, add a stmt_vec_info
field to dataref_aux too, and then use dataref_aux instead of
stmt_vec_info as the key.
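
Something like this, as a rough sketch only (field names illustrative):

    /* dataref_aux would carry back-pointers to both the scalar access
       and the statement that contains it, so it could act as the key
       on its own.  */
    struct dataref_aux
    {
      data_reference *dr;     /* back-pointer to the data reference */
      stmt_vec_info stmt;     /* statement containing the access */
      int misalignment;       /* existing per-DR info, as now */
    };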

Thanks,
Richard

>
> Richard.
>
>>
>> 2018-07-24  Richard Sandiford  <richard.sandiford@arm.com>
>>
>> gcc/
>>         * tree-vectorizer.h (vect_supportable_dr_alignment): Take
>>         a stmt_vec_info rather than a data_reference.
>>         * tree-vect-data-refs.c (vect_calculate_target_alignment)
>>         (vect_compute_data_ref_alignment, vect_update_misalignment_for_peel)
>>         (verify_data_ref_alignment, vector_alignment_reachable_p)
>>         (vect_get_data_access_cost, vect_get_peeling_costs_all_drs)
>>         (vect_peeling_supportable, vect_analyze_group_access_1)
>>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>>         (vect_vfa_segment_size, vect_vfa_access_size, vect_small_gap_p)
>>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
>>         (vect_supportable_dr_alignment): Likewise.  Update calls to other
>>         functions for which the same change is being made.
>>         (vect_verify_datarefs_alignment, vect_find_same_alignment_drs)
>>         (vect_analyze_data_refs_alignment): Update calls accordingly.
>>         (vect_slp_analyze_and_verify_node_alignment): Likewise.
>>         (vect_analyze_data_ref_accesses): Likewise.
>>         (vect_prune_runtime_alias_test_list): Likewise.
>>         (vect_create_addr_base_for_vector_ref): Likewise.
>>         (vect_create_data_ref_ptr): Likewise.
>>         (_vect_peel_info::dr): Replace with...
>>         (_vect_peel_info::stmt_info): ...this new field.
>>         (vect_peeling_hash_get_most_frequent): Update _vect_peel_info uses
>>         accordingly, and update after above interface changes.
>>         (vect_peeling_hash_get_lowest_cost): Likewise.
>>         (vect_peeling_hash_choose_best_peeling): Likewise.
>>         (vect_enhance_data_refs_alignment): Likewise.
>>         (vect_peeling_hash_insert): Likewise.  Take a stmt_vec_info
>>         rather than a data_reference.
>>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>>         (get_negative_load_store_type): Update calls to
>>         vect_supportable_dr_alignment.
>>         (vect_get_data_ptr_increment, ensure_base_align): Take a
>>         stmt_vec_info instead of a data_reference.
>>         (vectorizable_store, vectorizable_load): Update calls after
>>         above interface changes.
>>
>> Index: gcc/tree-vectorizer.h
>> ===================================================================
>> --- gcc/tree-vectorizer.h       2018-07-24 10:24:05.744462369 +0100
>> +++ gcc/tree-vectorizer.h       2018-07-24 10:24:08.924434128 +0100
>> @@ -1541,7 +1541,7 @@ extern tree vect_get_mask_type_for_stmt
>>  /* In tree-vect-data-refs.c.  */
>>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>>  extern enum dr_alignment_support vect_supportable_dr_alignment
>> -                                           (struct data_reference *, bool);
>> +  (stmt_vec_info, bool);
>>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>>                                             HOST_WIDE_INT *);
>>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
>> Index: gcc/tree-vect-data-refs.c
>> ===================================================================
>> --- gcc/tree-vect-data-refs.c   2018-07-24 10:24:05.740462405 +0100
>> +++ gcc/tree-vect-data-refs.c   2018-07-24 10:24:08.924434128 +0100
>> @@ -858,19 +858,19 @@ vect_record_base_alignments (vec_info *v
>>      }
>>  }
>>
>> -/* Return the target alignment for the vectorized form of DR.  */
>> +/* Return the target alignment for the vectorized form of the load or store
>> +   in STMT_INFO.  */
>>
>>  static unsigned int
>> -vect_calculate_target_alignment (struct data_reference *dr)
>> +vect_calculate_target_alignment (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>    return targetm.vectorize.preferred_vector_alignment (vectype);
>>  }
>>
>>  /* Function vect_compute_data_ref_alignment
>>
>> -   Compute the misalignment of the data reference DR.
>> +   Compute the misalignment of the load or store in STMT_INFO.
>>
>>     Output:
>>     1. dr_misalignment (STMT_INFO) is defined.
>> @@ -879,9 +879,9 @@ vect_calculate_target_alignment (struct
>>     only for trivial cases. TODO.  */
>>
>>  static void
>> -vect_compute_data_ref_alignment (struct data_reference *dr)
>> +vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    struct loop *loop = NULL;
>> @@ -905,7 +905,7 @@ vect_compute_data_ref_alignment (struct
>>    bool step_preserves_misalignment_p;
>>
>>    unsigned HOST_WIDE_INT vector_alignment
>> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
>> +    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
>>    STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
>>
>>    /* No step for BB vectorization.  */
>> @@ -1053,28 +1053,28 @@ vect_compute_data_ref_alignment (struct
>>  }
>>
>>  /* Function vect_update_misalignment_for_peel.
>> -   Sets DR's misalignment
>> -   - to 0 if it has the same alignment as DR_PEEL,
>> +   Sets the misalignment of the load or store in STMT_INFO
>> +   - to 0 if it has the same alignment as PEEL_STMT_INFO,
>>     - to the misalignment computed using NPEEL if DR's misalignment is known,
>>     - to -1 (unknown) otherwise.
>>
>> -   DR - the data reference whose misalignment is to be adjusted.
>> -   DR_PEEL - the data reference whose misalignment is being made
>> -             zero in the vector loop by the peel.
>> +   STMT_INFO - the load or store whose misalignment is to be adjusted.
>> +   PEEL_STMT_INFO - the load or store whose misalignment is being made
>> +                   zero in the vector loop by the peel.
>>     NPEEL - the number of iterations in the peel loop if the misalignment
>> -           of DR_PEEL is known at compile time.  */
>> +          of PEEL_STMT_INFO is known at compile time.  */
>>
>>  static void
>> -vect_update_misalignment_for_peel (struct data_reference *dr,
>> -                                   struct data_reference *dr_peel, int npeel)
>> +vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
>> +                                  stmt_vec_info peel_stmt_info, int npeel)
>>  {
>>    unsigned int i;
>>    vec<dr_p> same_aligned_drs;
>>    struct data_reference *current_dr;
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> +  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
>>    int dr_size = vect_get_scalar_dr_size (dr);
>>    int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
>>
>>   /* For interleaved data accesses the step in the loop must be multiplied by
>>       the size of the interleaving group.  */
>> @@ -1085,7 +1085,7 @@ vect_update_misalignment_for_peel (struc
>>
>>    /* It can be assumed that the data refs with the same alignment as dr_peel
>>       are aligned in the vector loop.  */
>> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
>> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>>      {
>>        if (current_dr != dr)
>> @@ -1118,13 +1118,15 @@ vect_update_misalignment_for_peel (struc
>>
>>  /* Function verify_data_ref_alignment
>>
>> -   Return TRUE if DR can be handled with respect to alignment.  */
>> +   Return TRUE if the load or store in STMT_INFO can be handled with
>> +   respect to alignment.  */
>>
>>  static bool
>> -verify_data_ref_alignment (data_reference_p dr)
>> +verify_data_ref_alignment (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    enum dr_alignment_support supportable_dr_alignment
>> -    = vect_supportable_dr_alignment (dr, false);
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>    if (!supportable_dr_alignment)
>>      {
>>        if (dump_enabled_p ())
>> @@ -1181,7 +1183,7 @@ vect_verify_datarefs_alignment (loop_vec
>>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         continue;
>>
>> -      if (! verify_data_ref_alignment (dr))
>> +      if (! verify_data_ref_alignment (stmt_info))
>>         return false;
>>      }
>>
>> @@ -1203,13 +1205,13 @@ not_size_aligned (tree exp)
>>
>>  /* Function vector_alignment_reachable_p
>>
>> -   Return true if vector alignment for DR is reachable by peeling
>> -   a few loop iterations.  Return false otherwise.  */
>> +   Return true if the vector alignment is reachable for the load or store
>> +   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
>>
>>  static bool
>> -vector_alignment_reachable_p (struct data_reference *dr)
>> +vector_alignment_reachable_p (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>
>>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>> @@ -1270,16 +1272,16 @@ vector_alignment_reachable_p (struct dat
>>  }
>>
>>
>> -/* Calculate the cost of the memory access represented by DR.  */
>> +/* Calculate the cost of the memory access in STMT_INFO.  */
>>
>>  static void
>> -vect_get_data_access_cost (struct data_reference *dr,
>> +vect_get_data_access_cost (stmt_vec_info stmt_info,
>>                             unsigned int *inside_cost,
>>                             unsigned int *outside_cost,
>>                            stmt_vector_for_cost *body_cost_vec,
>>                            stmt_vector_for_cost *prologue_cost_vec)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    int ncopies;
>>
>> @@ -1303,7 +1305,7 @@ vect_get_data_access_cost (struct data_r
>>
>>  typedef struct _vect_peel_info
>>  {
>> -  struct data_reference *dr;
>> +  stmt_vec_info stmt_info;
>>    int npeel;
>>    unsigned int count;
>>  } *vect_peel_info;
>> @@ -1337,16 +1339,17 @@ peel_info_hasher::equal (const _vect_pee
>>  }
>>
>>
>> -/* Insert DR into peeling hash table with NPEEL as key.  */
>> +/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
>>
>>  static void
>>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
>> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
>> +                         loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
>>                            int npeel)
>>  {
>>    struct _vect_peel_info elem, *slot;
>>    _vect_peel_info **new_slot;
>> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
>> +  bool supportable_dr_alignment
>> +    = vect_supportable_dr_alignment (stmt_info, true);
>>
>>    elem.npeel = npeel;
>>    slot = peeling_htab->find (&elem);
>> @@ -1356,7 +1359,7 @@ vect_peeling_hash_insert (hash_table<pee
>>      {
>>        slot = XNEW (struct _vect_peel_info);
>>        slot->npeel = npeel;
>> -      slot->dr = dr;
>> +      slot->stmt_info = stmt_info;
>>        slot->count = 1;
>>        new_slot = peeling_htab->find_slot (slot, INSERT);
>>        *new_slot = slot;
>> @@ -1383,19 +1386,19 @@ vect_peeling_hash_get_most_frequent (_ve
>>      {
>>        max->peel_info.npeel = elem->npeel;
>>        max->peel_info.count = elem->count;
>> -      max->peel_info.dr = elem->dr;
>> +      max->peel_info.stmt_info = elem->stmt_info;
>>      }
>>
>>    return 1;
>>  }
>>
>>  /* Get the costs of peeling NPEEL iterations checking data access costs
>> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
>> -   misalignment will be zero after peeling.  */
>> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
>> +   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
>>
>>  static void
>>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
>> -                               struct data_reference *dr0,
>> +                               stmt_vec_info peel_stmt_info,
>>                                 unsigned int *inside_cost,
>>                                 unsigned int *outside_cost,
>>                                 stmt_vector_for_cost *body_cost_vec,
>> @@ -1403,8 +1406,6 @@ vect_get_peeling_costs_all_drs (vec<data
>>                                 unsigned int npeel,
>>                                 bool unknown_misalignment)
>>  {
>> -  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
>> -                                 : NULL_STMT_VEC_INFO);
>>    unsigned i;
>>    data_reference *dr;
>>
>> @@ -1433,8 +1434,8 @@ vect_get_peeling_costs_all_drs (vec<data
>>        else if (unknown_misalignment && stmt_info == peel_stmt_info)
>>         set_dr_misalignment (stmt_info, 0);
>>        else
>> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
>> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
>> +       vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
>> +      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
>>                                  body_cost_vec, prologue_cost_vec);
>>        set_dr_misalignment (stmt_info, save_misalignment);
>>      }
>> @@ -1450,7 +1451,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>    vect_peel_info elem = *slot;
>>    int dummy;
>>    unsigned int inside_cost = 0, outside_cost = 0;
>> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
>> +  stmt_vec_info stmt_info = elem->stmt_info;
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>>                        epilogue_cost_vec;
>> @@ -1460,7 +1461,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>    epilogue_cost_vec.create (2);
>>
>>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
>> -                                 elem->dr, &inside_cost, &outside_cost,
>> +                                 elem->stmt_info, &inside_cost, &outside_cost,
>>                                   &body_cost_vec, &prologue_cost_vec,
>>                                   elem->npeel, false);
>>
>> @@ -1484,7 +1485,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>>      {
>>        min->inside_cost = inside_cost;
>>        min->outside_cost = outside_cost;
>> -      min->peel_info.dr = elem->dr;
>> +      min->peel_info.stmt_info = elem->stmt_info;
>>        min->peel_info.npeel = elem->npeel;
>>        min->peel_info.count = elem->count;
>>      }
>> @@ -1503,7 +1504,7 @@ vect_peeling_hash_choose_best_peeling (h
>>  {
>>     struct _vect_peel_extended_info res;
>>
>> -   res.peel_info.dr = NULL;
>> +   res.peel_info.stmt_info = NULL;
>>
>>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>>       {
>> @@ -1527,8 +1528,8 @@ vect_peeling_hash_choose_best_peeling (h
>>  /* Return true if the new peeling NPEEL is supported.  */
>>
>>  static bool
>> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
>> -                         unsigned npeel)
>> +vect_peeling_supportable (loop_vec_info loop_vinfo,
>> +                         stmt_vec_info peel_stmt_info, unsigned npeel)
>>  {
>>    unsigned i;
>>    struct data_reference *dr = NULL;
>> @@ -1540,10 +1541,10 @@ vect_peeling_supportable (loop_vec_info
>>      {
>>        int save_misalignment;
>>
>> -      if (dr == dr0)
>> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +      if (stmt_info == peel_stmt_info)
>>         continue;
>>
>> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>        /* For interleaving, only the alignment of the first access
>>          matters.  */
>>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> @@ -1557,8 +1558,9 @@ vect_peeling_supportable (loop_vec_info
>>         continue;
>>
>>        save_misalignment = dr_misalignment (stmt_info);
>> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
>> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
>> +      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
>> +      supportable_dr_alignment
>> +       = vect_supportable_dr_alignment (stmt_info, false);
>>        set_dr_misalignment (stmt_info, save_misalignment);
>>
>>        if (!supportable_dr_alignment)
>> @@ -1665,8 +1667,9 @@ vect_enhance_data_refs_alignment (loop_v
>>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>>    enum dr_alignment_support supportable_dr_alignment;
>> -  struct data_reference *dr0 = NULL, *first_store = NULL;
>>    struct data_reference *dr;
>> +  stmt_vec_info peel_stmt_info = NULL;
>> +  stmt_vec_info first_store_info = NULL;
>>    unsigned int i, j;
>>    bool do_peeling = false;
>>    bool do_versioning = false;
>> @@ -1675,7 +1678,7 @@ vect_enhance_data_refs_alignment (loop_v
>>    bool one_misalignment_known = false;
>>    bool one_misalignment_unknown = false;
>>    bool one_dr_unsupportable = false;
>> -  struct data_reference *unsupportable_dr = NULL;
>> +  stmt_vec_info unsupportable_stmt_info = NULL;
>>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>>    unsigned possible_npeel_number = 1;
>>    tree vectype;
>> @@ -1745,8 +1748,9 @@ vect_enhance_data_refs_alignment (loop_v
>>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>         continue;
>>
>> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
>> -      do_peeling = vector_alignment_reachable_p (dr);
>> +      supportable_dr_alignment
>> +       = vect_supportable_dr_alignment (stmt_info, true);
>> +      do_peeling = vector_alignment_reachable_p (stmt_info);
>>        if (do_peeling)
>>          {
>>           if (known_alignment_for_access_p (stmt_info))
>> @@ -1796,7 +1800,7 @@ vect_enhance_data_refs_alignment (loop_v
>>                for (j = 0; j < possible_npeel_number; j++)
>>                  {
>>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
>> -                                           dr, npeel_tmp);
>> +                                           stmt_info, npeel_tmp);
>>                   npeel_tmp += target_align / dr_size;
>>                  }
>>
>> @@ -1810,11 +1814,11 @@ vect_enhance_data_refs_alignment (loop_v
>>                   stores over load.  */
>>               unsigned same_align_drs
>>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
>> -             if (!dr0
>> +             if (!peel_stmt_info
>>                   || same_align_drs_max < same_align_drs)
>>                 {
>>                   same_align_drs_max = same_align_drs;
>> -                 dr0 = dr;
>> +                 peel_stmt_info = stmt_info;
>>                 }
>>               /* For data-refs with the same number of related
>>                  accesses prefer the one where the misalign
>> @@ -1822,6 +1826,7 @@ vect_enhance_data_refs_alignment (loop_v
>>               else if (same_align_drs_max == same_align_drs)
>>                 {
>>                   struct loop *ivloop0, *ivloop;
>> +                 data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>>                   ivloop0 = outermost_invariant_loop_for_expr
>>                     (loop, DR_BASE_ADDRESS (dr0));
>>                   ivloop = outermost_invariant_loop_for_expr
>> @@ -1829,7 +1834,7 @@ vect_enhance_data_refs_alignment (loop_v
>>                   if ((ivloop && !ivloop0)
>>                       || (ivloop && ivloop0
>>                           && flow_loop_nested_p (ivloop, ivloop0)))
>> -                   dr0 = dr;
>> +                   peel_stmt_info = stmt_info;
>>                 }
>>
>>               one_misalignment_unknown = true;
>> @@ -1839,11 +1844,11 @@ vect_enhance_data_refs_alignment (loop_v
>>               if (!supportable_dr_alignment)
>>               {
>>                 one_dr_unsupportable = true;
>> -               unsupportable_dr = dr;
>> +               unsupportable_stmt_info = stmt_info;
>>               }
>>
>> -             if (!first_store && DR_IS_WRITE (dr))
>> -               first_store = dr;
>> +             if (!first_store_info && DR_IS_WRITE (dr))
>> +               first_store_info = stmt_info;
>>              }
>>          }
>>        else
>> @@ -1886,16 +1891,16 @@ vect_enhance_data_refs_alignment (loop_v
>>
>>        stmt_vector_for_cost dummy;
>>        dummy.create (2);
>> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
>> +      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
>>                                       &load_inside_cost,
>>                                       &load_outside_cost,
>>                                       &dummy, &dummy, estimated_npeels, true);
>>        dummy.release ();
>>
>> -      if (first_store)
>> +      if (first_store_info)
>>         {
>>           dummy.create (2);
>> -         vect_get_peeling_costs_all_drs (datarefs, first_store,
>> +         vect_get_peeling_costs_all_drs (datarefs, first_store_info,
>>                                           &store_inside_cost,
>>                                           &store_outside_cost,
>>                                           &dummy, &dummy,
>> @@ -1912,7 +1917,7 @@ vect_enhance_data_refs_alignment (loop_v
>>           || (load_inside_cost == store_inside_cost
>>               && load_outside_cost > store_outside_cost))
>>         {
>> -         dr0 = first_store;
>> +         peel_stmt_info = first_store_info;
>>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>>         }
>> @@ -1936,18 +1941,18 @@ vect_enhance_data_refs_alignment (loop_v
>>        epilogue_cost_vec.release ();
>>
>>        peel_for_unknown_alignment.peel_info.count = 1
>> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
>> +       + STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
>>      }
>>
>>    peel_for_unknown_alignment.peel_info.npeel = 0;
>> -  peel_for_unknown_alignment.peel_info.dr = dr0;
>> +  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
>>
>>    best_peel = peel_for_unknown_alignment;
>>
>>    peel_for_known_alignment.inside_cost = INT_MAX;
>>    peel_for_known_alignment.outside_cost = INT_MAX;
>>    peel_for_known_alignment.peel_info.count = 0;
>> -  peel_for_known_alignment.peel_info.dr = NULL;
>> +  peel_for_known_alignment.peel_info.stmt_info = NULL;
>>
>>    if (do_peeling && one_misalignment_known)
>>      {
>> @@ -1959,7 +1964,7 @@ vect_enhance_data_refs_alignment (loop_v
>>      }
>>
>>    /* Compare costs of peeling for known and unknown alignment. */
>> -  if (peel_for_known_alignment.peel_info.dr != NULL
>> +  if (peel_for_known_alignment.peel_info.stmt_info
>>        && peel_for_unknown_alignment.inside_cost
>>        >= peel_for_known_alignment.inside_cost)
>>      {
>> @@ -1976,7 +1981,7 @@ vect_enhance_data_refs_alignment (loop_v
>>       since we'd have to discard a chosen peeling except when it accidentally
>>       aligned the unsupportable data ref.  */
>>    if (one_dr_unsupportable)
>> -    dr0 = unsupportable_dr;
>> +    peel_stmt_info = unsupportable_stmt_info;
>>    else if (do_peeling)
>>      {
>>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
>> @@ -2007,7 +2012,7 @@ vect_enhance_data_refs_alignment (loop_v
>>        epilogue_cost_vec.release ();
>>
>>        npeel = best_peel.peel_info.npeel;
>> -      dr0 = best_peel.peel_info.dr;
>> +      peel_stmt_info = best_peel.peel_info.stmt_info;
>>
>>        /* If no peeling is not more expensive than the best peeling we
>>          have so far, don't perform any peeling.  */
>> @@ -2017,8 +2022,8 @@ vect_enhance_data_refs_alignment (loop_v
>>
>>    if (do_peeling)
>>      {
>> -      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
>>        vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
>> +      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
>>
>>        if (known_alignment_for_access_p (peel_stmt_info))
>>          {
>> @@ -2052,7 +2057,7 @@ vect_enhance_data_refs_alignment (loop_v
>>          }
>>
>>        /* Ensure that all datarefs can be vectorized after the peel.  */
>> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
>> +      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
>>         do_peeling = false;
>>
>>        /* Check if all datarefs are supportable and log.  */
>> @@ -2125,7 +2130,8 @@ vect_enhance_data_refs_alignment (loop_v
>>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>>                   continue;
>>
>> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
>> +               vect_update_misalignment_for_peel (stmt_info,
>> +                                                  peel_stmt_info, npeel);
>>               }
>>
>>            LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
>> @@ -2188,7 +2194,8 @@ vect_enhance_data_refs_alignment (loop_v
>>               break;
>>             }
>>
>> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
>> +         supportable_dr_alignment
>> +           = vect_supportable_dr_alignment (stmt_info, false);
>>
>>            if (!supportable_dr_alignment)
>>              {
>> @@ -2203,7 +2210,6 @@ vect_enhance_data_refs_alignment (loop_v
>>                    break;
>>                  }
>>
>> -             stmt_info = vect_dr_stmt (dr);
>>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>>               gcc_assert (vectype);
>>
>> @@ -2314,9 +2320,9 @@ vect_find_same_alignment_drs (struct dat
>>    if (maybe_ne (diff, 0))
>>      {
>>        /* Get the wider of the two alignments.  */
>> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
>> +      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
>>                               / BITS_PER_UNIT);
>> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
>> +      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
>>                               / BITS_PER_UNIT);
>>        unsigned int max_align = MAX (align_a, align_b);
>>
>> @@ -2366,7 +2372,7 @@ vect_analyze_data_refs_alignment (loop_v
>>      {
>>        stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>        if (STMT_VINFO_VECTORIZABLE (stmt_info))
>> -       vect_compute_data_ref_alignment (dr);
>> +       vect_compute_data_ref_alignment (stmt_info);
>>      }
>>
>>    return true;
>> @@ -2382,17 +2388,16 @@ vect_slp_analyze_and_verify_node_alignme
>>       the node is permuted in which case we start from the first
>>       element in the group.  */
>>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
>> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
>> +  stmt_vec_info stmt_info = first_stmt_info;
>>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
>> -    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>> +    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
>>
>> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
>> -  vect_compute_data_ref_alignment (dr);
>> +  vect_compute_data_ref_alignment (stmt_info);
>>    /* For creating the data-ref pointer we need alignment of the
>>       first element anyway.  */
>> -  if (dr != first_dr)
>> -    vect_compute_data_ref_alignment (first_dr);
>> -  if (! verify_data_ref_alignment (dr))
>> +  if (stmt_info != first_stmt_info)
>> +    vect_compute_data_ref_alignment (first_stmt_info);
>> +  if (! verify_data_ref_alignment (first_stmt_info))
>>      {
>>        if (dump_enabled_p ())
>>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> @@ -2430,19 +2435,19 @@ vect_slp_analyze_and_verify_instance_ali
>>  }
>>
>>
>> -/* Analyze groups of accesses: check that DR belongs to a group of
>> -   accesses of legal size, step, etc.  Detect gaps, single element
>> -   interleaving, and other special cases. Set grouped access info.
>> -   Collect groups of strided stores for further use in SLP analysis.
>> -   Worker for vect_analyze_group_access.  */
>> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
>> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
>> +   single element interleaving, and other special cases.  Set grouped
>> +   access info.  Collect groups of strided stores for further use in
>> +   SLP analysis.  Worker for vect_analyze_group_access.  */
>>
>>  static bool
>> -vect_analyze_group_access_1 (struct data_reference *dr)
>> +vect_analyze_group_access_1 (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree step = DR_STEP (dr);
>>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>>    HOST_WIDE_INT dr_step = -1;
>> @@ -2519,7 +2524,7 @@ vect_analyze_group_access_1 (struct data
>>        if (bb_vinfo)
>>         {
>>           /* Mark the statement as unvectorizable.  */
>> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
>> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>>           return true;
>>         }
>>
>> @@ -2667,18 +2672,18 @@ vect_analyze_group_access_1 (struct data
>>    return true;
>>  }
>>
>> -/* Analyze groups of accesses: check that DR belongs to a group of
>> -   accesses of legal size, step, etc.  Detect gaps, single element
>> -   interleaving, and other special cases. Set grouped access info.
>> -   Collect groups of strided stores for further use in SLP analysis.  */
>> +/* Analyze groups of accesses: check that the load or store in STMT_INFO
>> +   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
>> +   single element interleaving, and other special cases.  Set grouped
>> +   access info.  Collect groups of strided stores for further use in
>> +   SLP analysis.  */
>>
>>  static bool
>> -vect_analyze_group_access (struct data_reference *dr)
>> +vect_analyze_group_access (stmt_vec_info stmt_info)
>>  {
>> -  if (!vect_analyze_group_access_1 (dr))
>> +  if (!vect_analyze_group_access_1 (stmt_info))
>>      {
>>        /* Dissolve the group if present.  */
>> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
>>        while (stmt_info)
>>         {
>>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
>> @@ -2691,16 +2696,16 @@ vect_analyze_group_access (struct data_r
>>    return true;
>>  }
>>
>> -/* Analyze the access pattern of the data-reference DR.
>> +/* Analyze the access pattern of the load or store in STMT_INFO.
>>     In case of non-consecutive accesses call vect_analyze_group_access() to
>>     analyze groups of accesses.  */
>>
>>  static bool
>> -vect_analyze_data_ref_access (struct data_reference *dr)
>> +vect_analyze_data_ref_access (stmt_vec_info stmt_info)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree step = DR_STEP (dr);
>>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>>    struct loop *loop = NULL;
>>
>> @@ -2780,10 +2785,10 @@ vect_analyze_data_ref_access (struct dat
>>    if (TREE_CODE (step) != INTEGER_CST)
>>      return (STMT_VINFO_STRIDED_P (stmt_info)
>>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
>> -               || vect_analyze_group_access (dr)));
>> +               || vect_analyze_group_access (stmt_info)));
>>
>>    /* Not consecutive access - check if it's a part of interleaving group.  */
>> -  return vect_analyze_group_access (dr);
>> +  return vect_analyze_group_access (stmt_info);
>>  }
>>
>>  /* Compare two data-references DRA and DRB to group them into chunks
>> @@ -3062,25 +3067,28 @@ vect_analyze_data_ref_accesses (vec_info
>>      }
>>
>>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
>> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
>> -        && !vect_analyze_data_ref_access (dr))
>> -      {
>> -       if (dump_enabled_p ())
>> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> -                          "not vectorized: complicated access pattern.\n");
>> +    {
>> +      stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +      if (STMT_VINFO_VECTORIZABLE (stmt_info)
>> +         && !vect_analyze_data_ref_access (stmt_info))
>> +       {
>> +         if (dump_enabled_p ())
>> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>> +                            "not vectorized: complicated access pattern.\n");
>>
>> -        if (is_a <bb_vec_info> (vinfo))
>> -         {
>> -           /* Mark the statement as not vectorizable.  */
>> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
>> -           continue;
>> -         }
>> -        else
>> -         {
>> -           datarefs_copy.release ();
>> -           return false;
>> -         }
>> -      }
>> +         if (is_a <bb_vec_info> (vinfo))
>> +           {
>> +             /* Mark the statement as not vectorizable.  */
>> +             STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>> +             continue;
>> +           }
>> +         else
>> +           {
>> +             datarefs_copy.release ();
>> +             return false;
>> +           }
>> +       }
>> +    }
>>
>>    datarefs_copy.release ();
>>    return true;
>> @@ -3089,7 +3097,7 @@ vect_analyze_data_ref_accesses (vec_info
>>  /* Function vect_vfa_segment_size.
>>
>>     Input:
>> -     DR: The data reference.
>> +     STMT_INFO: the load or store statement.
>>       LENGTH_FACTOR: segment length to consider.
>>
>>     Return a value suitable for the dr_with_seg_len::seg_len field.
>> @@ -3098,8 +3106,9 @@ vect_analyze_data_ref_accesses (vec_info
>>     the size of the access; in effect it only describes the first byte.  */
>>
>>  static tree
>> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
>> +vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
>>  {
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    length_factor = size_binop (MINUS_EXPR,
>>                               fold_convert (sizetype, length_factor),
>>                               size_one_node);
>> @@ -3107,23 +3116,23 @@ vect_vfa_segment_size (struct data_refer
>>                      length_factor);
>>  }
>>
>> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
>> +/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
>>     gives the worst-case number of bytes covered by the segment.  */
>>
>>  static unsigned HOST_WIDE_INT
>> -vect_vfa_access_size (data_reference *dr)
>> +vect_vfa_access_size (stmt_vec_info stmt_vinfo)
>>  {
>> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
>>    tree ref_type = TREE_TYPE (DR_REF (dr));
>>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>>    unsigned HOST_WIDE_INT access_size = ref_size;
>>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>>      {
>> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
>> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>>      }
>>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
>> -      && (vect_supportable_dr_alignment (dr, false)
>> +      && (vect_supportable_dr_alignment (stmt_vinfo, false)
>>           == dr_explicit_realign_optimized))
>>      {
>>        /* We might access a full vector's worth.  */
>> @@ -3281,13 +3290,14 @@ vect_check_lower_bound (loop_vec_info lo
>>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>>  }
>>
>> -/* Return true if it's unlikely that the step of the vectorized form of DR
>> -   will span fewer than GAP bytes.  */
>> +/* Return true if it's unlikely that the step of the vectorized form of
>> +   the load or store in STMT_INFO will span fewer than GAP bytes.  */
>>
>>  static bool
>> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
>> +vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    HOST_WIDE_INT count
>>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
>> @@ -3295,16 +3305,20 @@ vect_small_gap_p (loop_vec_info loop_vin
>>    return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
>>  }
>>
>> -/* Return true if we know that there is no alias between DR_A and DR_B
>> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
>> -   *LOWER_BOUND_OUT to this N.  */
>> +/* Return true if we know that there is no alias between the loads and
>> +   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
>> +   STMT_INFO_A's access is >= some N.  When returning true,
>> +   set *LOWER_BOUND_OUT to this N.  */
>>
>>  static bool
>> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
>> +vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
>> +                               stmt_vec_info stmt_info_b,
>>                                 poly_uint64 *lower_bound_out)
>>  {
>>    /* Check that there is a constant gap of known sign between DR_A
>>       and DR_B.  */
>> +  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
>> +  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
>>    poly_int64 init_a, init_b;
>>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
>> @@ -3324,8 +3338,7 @@ vectorizable_with_step_bound_p (data_ref
>>    /* If the two accesses could be dependent within a scalar iteration,
>>       make sure that we'd retain their order.  */
>>    if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
>> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
>> -                                        vect_dr_stmt (dr_b)))
>> +      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
>>      return false;
>>
>>    /* There is no alias if abs (DR_STEP) is greater than or equal to
>> @@ -3426,7 +3439,8 @@ vect_prune_runtime_alias_test_list (loop
>>          and intra-iteration dependencies are guaranteed to be honored.  */
>>        if (ignore_step_p
>>           && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
>> -             || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
>> +             || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
>> +                                                &lower_bound)))
>>         {
>>           if (dump_enabled_p ())
>>             {
>> @@ -3446,9 +3460,10 @@ vect_prune_runtime_alias_test_list (loop
>>          than the number of bytes handled by one vector iteration.)  */
>>        if (!ignore_step_p
>>           && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
>> -         && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
>> -         && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
>> -             || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
>> +         && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
>> +                                            &lower_bound)
>> +         && (vect_small_gap_p (stmt_info_a, lower_bound)
>> +             || vect_small_gap_p (stmt_info_b, lower_bound)))
>>         {
>>           bool unsigned_p = dr_known_forward_stride_p (dr_a);
>>           if (dump_enabled_p ())
>> @@ -3501,11 +3516,13 @@ vect_prune_runtime_alias_test_list (loop
>>             length_factor = scalar_loop_iters;
>>           else
>>             length_factor = size_int (vect_factor);
>> -         segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
>> -         segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
>> +         segment_length_a = vect_vfa_segment_size (stmt_info_a,
>> +                                                   length_factor);
>> +         segment_length_b = vect_vfa_segment_size (stmt_info_b,
>> +                                                   length_factor);
>>         }
>> -      access_size_a = vect_vfa_access_size (dr_a);
>> -      access_size_b = vect_vfa_access_size (dr_b);
>> +      access_size_a = vect_vfa_access_size (stmt_info_a);
>> +      access_size_b = vect_vfa_access_size (stmt_info_b);
>>        align_a = vect_vfa_align (dr_a);
>>        align_b = vect_vfa_align (dr_b);
>>
>> @@ -4463,12 +4480,12 @@ vect_get_new_ssa_name (tree type, enum v
>>    return new_vect_var;
>>  }
>>
>> -/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
>> +/* Duplicate ptr info and set alignment/misaligment on NAME from STMT_INFO.  */
>>
>>  static void
>> -vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
>> +vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
>>    int misalign = dr_misalignment (stmt_info);
>>    if (misalign == DR_MISALIGNMENT_UNKNOWN)
>> @@ -4579,7 +4596,7 @@ vect_create_addr_base_for_vector_ref (st
>>        && TREE_CODE (addr_base) == SSA_NAME
>>        && !SSA_NAME_PTR_INFO (addr_base))
>>      {
>> -      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
>> +      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
>>        if (offset || byte_offset)
>>         mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
>>      }
>> @@ -4845,8 +4862,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>>        /* Copy the points-to information if it exists. */
>>        if (DR_PTR_INFO (dr))
>>         {
>> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
>> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
>> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
>> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>>         }
>>        if (ptr_incr)
>>         *ptr_incr = incr;
>> @@ -4875,8 +4892,8 @@ vect_create_data_ref_ptr (stmt_vec_info
>>        /* Copy the points-to information if it exists. */
>>        if (DR_PTR_INFO (dr))
>>         {
>> -         vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
>> -         vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
>> +         vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
>> +         vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
>>         }
>>        if (ptr_incr)
>>         *ptr_incr = incr;
>> @@ -6434,17 +6451,17 @@ vect_can_force_dr_alignment_p (const_tre
>>  }
>>
>>
>> -/* Return whether the data reference DR is supported with respect to its
>> -   alignment.
>> +/* Return whether the load or store in STMT_INFO is supported with
>> +   respect to its alignment.
>>     If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
>>     it is aligned, i.e., check if it is possible to vectorize it with different
>>     alignment.  */
>>
>>  enum dr_alignment_support
>> -vect_supportable_dr_alignment (struct data_reference *dr,
>> +vect_supportable_dr_alignment (stmt_vec_info stmt_info,
>>                                 bool check_aligned_accesses)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>> +  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>>    machine_mode mode = TYPE_MODE (vectype);
>>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>> Index: gcc/tree-vect-stmts.c
>> ===================================================================
>> --- gcc/tree-vect-stmts.c       2018-07-24 10:24:05.744462369 +0100
>> +++ gcc/tree-vect-stmts.c       2018-07-24 10:24:08.924434128 +0100
>> @@ -1057,8 +1057,8 @@ vect_get_store_cost (stmt_vec_info stmt_
>>                      unsigned int *inside_cost,
>>                      stmt_vector_for_cost *body_cost_vec)
>>  {
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  int alignment_support_scheme
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>
>>    switch (alignment_support_scheme)
>>      {
>> @@ -1237,8 +1237,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
>>                     stmt_vector_for_cost *body_cost_vec,
>>                     bool record_prologue_costs)
>>  {
>> -  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>> -  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  int alignment_support_scheme
>> +    = vect_supportable_dr_alignment (stmt_info, false);
>>
>>    switch (alignment_support_scheme)
>>      {
>> @@ -2340,7 +2340,6 @@ get_negative_load_store_type (stmt_vec_i
>>                               vec_load_store_type vls_type,
>>                               unsigned int ncopies)
>>  {
>> -  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
>>    dr_alignment_support alignment_support_scheme;
>>
>>    if (ncopies > 1)
>> @@ -2351,7 +2350,7 @@ get_negative_load_store_type (stmt_vec_i
>>        return VMAT_ELEMENTWISE;
>>      }
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
>> +  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
>>    if (alignment_support_scheme != dr_aligned
>>        && alignment_support_scheme != dr_unaligned_supported)
>>      {
>> @@ -2924,15 +2923,14 @@ vect_get_strided_load_store_ops (stmt_ve
>>  }
>>
>>  /* Return the amount that should be added to a vector pointer to move
>> -   to the next or previous copy of AGGR_TYPE.  DR is the data reference
>> -   being vectorized and MEMORY_ACCESS_TYPE describes the type of
>> +   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
>> +   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
>>     vectorization.  */
>>
>>  static tree
>> -vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
>> +vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
>>                              vect_memory_access_type memory_access_type)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    if (memory_access_type == VMAT_INVARIANT)
>>      return size_zero_node;
>>
>> @@ -6171,12 +6169,12 @@ vectorizable_operation (stmt_vec_info st
>>    return true;
>>  }
>>
>> -/* A helper function to ensure data reference DR's base alignment.  */
>> +/* If we decided to increase the base alignment for the memory access in
>> +   STMT_INFO, but haven't increased it yet, do so now.  */
>>
>>  static void
>> -ensure_base_align (struct data_reference *dr)
>> +ensure_base_align (stmt_vec_info stmt_info)
>>  {
>> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
>>    if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
>>      return;
>>
>> @@ -6439,7 +6437,7 @@ vectorizable_store (stmt_vec_info stmt_i
>>
>>    /* Transform.  */
>>
>> -  ensure_base_align (dr);
>> +  ensure_base_align (stmt_info);
>>
>>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>>      {
>> @@ -6882,7 +6880,8 @@ vectorizable_store (stmt_vec_info stmt_i
>>    auto_vec<tree> dr_chain (group_size);
>>    oprnds.create (group_size);
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
>> +  alignment_support_scheme
>> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>>    gcc_assert (alignment_support_scheme);
>>    vec_loop_masks *loop_masks
>>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
>> @@ -6920,7 +6919,8 @@ vectorizable_store (stmt_vec_info stmt_i
>>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>>        else
>>         aggr_type = vectype;
>> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
>> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
>> +                                         memory_access_type);
>>      }
>>
>>    if (mask)
>> @@ -7667,7 +7667,7 @@ vectorizable_load (stmt_vec_info stmt_in
>>
>>    /* Transform.  */
>>
>> -  ensure_base_align (dr);
>> +  ensure_base_align (stmt_info);
>>
>>    if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
>>      {
>> @@ -7990,7 +7990,8 @@ vectorizable_load (stmt_vec_info stmt_in
>>        ref_type = reference_alias_ptr_type (DR_REF (first_dr));
>>      }
>>
>> -  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
>> +  alignment_support_scheme
>> +    = vect_supportable_dr_alignment (first_stmt_info, false);
>>    gcc_assert (alignment_support_scheme);
>>    vec_loop_masks *loop_masks
>>      = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
>> @@ -8155,7 +8156,8 @@ vectorizable_load (stmt_vec_info stmt_in
>>         aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
>>        else
>>         aggr_type = vectype;
>> -      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
>> +      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
>> +                                         memory_access_type);
>>      }
>>
>>    tree vec_mask = NULL_TREE;
Richard Sandiford July 26, 2018, 11:05 a.m. UTC | #3
Richard Sandiford <richard.sandiford@arm.com> writes:
> Richard Biener <richard.guenther@gmail.com> writes:
>> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
>> <richard.sandiford@arm.com> wrote:
>>>
>>> This patch makes various routines (mostly in tree-vect-data-refs.c)
>>> take stmt_vec_infos rather than data_references.  The affected routines
>>> are really dealing with the way that an access is going to vectorised
>>> for a particular stmt_vec_info, rather than with the original scalar
>>> access described by the data_reference.
>>
>> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
>> the functions?
>
> Not sure.  If we...
>
>> We currently cannot handle aggregate copies in the to-be-vectorized IL
>> but rely on SRA and friends to elide those.  That's the only two-DR
>> stmt I can think of for vectorization.  Maybe aggregate by-value / return
>> function calls with OMP SIMD if that supports this somehow.
>
> ...did this then I don't think a data_reference would be the natural
> way of identifying a DR within a stmt_vec_info.  Presumably the
> stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
> If both of those were vectors then a (stmt_vec_info, index) pair
> might make more sense than (stmt_vec_info, data_reference).
>
> Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
> so that there's a back-pointer to the DR, add a stmt_vec_info
> field to dataref_aux too, and then use dataref_aux instead of
> stmt_vec_info as the key.

New patch 37/46 does that.  The one below goes through and uses
dr_vec_info instead of data_reference in code that is dealing
with the way that a reference is going to be vectorised.
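
For reference, code that starts from a data_reference now reaches the
vectorisation-specific information through DR_VECT_AUX, along these
lines (a sketch; the wrapper function itself is hypothetical):

    /* Hypothetical helper: query alignment of DR's access via its
       dr_vec_info rather than via the data_reference itself.  */
    static bool
    example_access_aligned_p (data_reference *dr)
    {
      dr_vec_info *dr_info = DR_VECT_AUX (dr);
      return (known_alignment_for_access_p (dr_info)
              && aligned_access_p (dr_info));
    }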

Thanks,
Richard


2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* tree-vectorizer.h (set_dr_misalignment, dr_misalignment)
	(DR_TARGET_ALIGNMENT, aligned_access_p, known_alignment_for_access_p)
	(vect_known_alignment_in_bytes, vect_dr_behavior)
	(vect_get_scalar_dr_size): Take references as dr_vec_infos
	instead of data_references.  Update calls to other routines for
	which the same change has been made.
	* tree-vect-data-refs.c (vect_preserves_scalar_order_p): Take
	dr_vec_infos instead of stmt_vec_infos.
	(vect_analyze_data_ref_dependence): Update call accordingly.
	(vect_slp_analyze_data_ref_dependence)
	(vect_record_base_alignments): Use DR_VECT_AUX.
	(vect_calculate_target_alignment, vect_compute_data_ref_alignment)
	(vect_update_misalignment_for_peel, verify_data_ref_alignment)
	(vector_alignment_reachable_p, vect_get_data_access_cost)
	(vect_peeling_supportable, vect_analyze_group_access_1)
	(vect_analyze_group_access, vect_analyze_data_ref_access)
	(vect_vfa_segment_size, vect_vfa_access_size, vect_vfa_align)
	(vect_compile_time_alias, vect_small_gap_p)
	(vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
	(vect_supportable_dr_alignment): Take references as dr_vec_infos
	instead of data_references.  Update calls to other routines for
	which the same change has been made.
	(vect_verify_datarefs_alignment, vect_get_peeling_costs_all_drs)
	(vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
	(vect_slp_analyze_and_verify_node_alignment)
	(vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
	(vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
	(vect_setup_realignment): Use dr_vec_infos.  Update calls after
	above changes.
	(_vect_peel_info::dr): Replace with...
	(_vect_peel_info::dr_info): ...this new field.
	(vect_peeling_hash_get_most_frequent)
	(vect_peeling_hash_choose_best_peeling): Update accordingly.
	(vect_peeling_hash_get_lowest_cost)
	(vect_enhance_data_refs_alignment): Likewise.  Update calls to other
	routines for which the same change has been made.
	(vect_peeling_hash_insert): Likewise.  Take a dr_vec_info instead of a
	data_reference.
	* tree-vect-loop-manip.c (get_misalign_in_elems)
	(vect_gen_prolog_loop_niters): Use dr_vec_infos.  Update calls after
	above changes.
	* tree-vect-loop.c (vect_analyze_loop_2): Likewise.
	* tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
	(vect_truncate_gather_scatter_offset, compare_step_with_zero)
	(get_group_load_store_type, get_negative_load_store_type)
	(vect_get_data_ptr_increment, vectorizable_store)
	(vectorizable_load): Likewise.
	(ensure_base_align): Take a dr_vec_info instead of a data_reference.
	Update calls to other routines for which the same change has been made.

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-26 11:30:56.197256524 +0100
+++ gcc/tree-vectorizer.h	2018-07-26 11:42:19.035663718 +0100
@@ -1294,15 +1294,15 @@ #define DR_MISALIGNMENT_UNKNOWN (-1)
 #define DR_MISALIGNMENT_UNINITIALIZED (-2)
 
 inline void
-set_dr_misalignment (struct data_reference *dr, int val)
+set_dr_misalignment (dr_vec_info *dr_info, int val)
 {
-  DR_VECT_AUX (dr)->misalignment = val;
+  dr_info->misalignment = val;
 }
 
 inline int
-dr_misalignment (struct data_reference *dr)
+dr_misalignment (dr_vec_info *dr_info)
 {
-  int misalign = DR_VECT_AUX (dr)->misalignment;
+  int misalign = dr_info->misalignment;
   gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
   return misalign;
 }
@@ -1313,52 +1313,51 @@ #define DR_MISALIGNMENT(DR) dr_misalignm
 #define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
 
 /* Only defined once DR_MISALIGNMENT is defined.  */
-#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
+#define DR_TARGET_ALIGNMENT(DR) ((DR)->target_alignment)
 
-/* Return true if data access DR is aligned to its target alignment
+/* Return true if data access DR_INFO is aligned to its target alignment
    (which may be less than a full vector).  */
 
 static inline bool
-aligned_access_p (struct data_reference *data_ref_info)
+aligned_access_p (dr_vec_info *dr_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) == 0);
+  return (DR_MISALIGNMENT (dr_info) == 0);
 }
 
 /* Return TRUE if the alignment of the data access is known, and FALSE
    otherwise.  */
 
 static inline bool
-known_alignment_for_access_p (struct data_reference *data_ref_info)
+known_alignment_for_access_p (dr_vec_info *dr_info)
 {
-  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
+  return (DR_MISALIGNMENT (dr_info) != DR_MISALIGNMENT_UNKNOWN);
 }
 
 /* Return the minimum alignment in bytes that the vectorized version
-   of DR is guaranteed to have.  */
+   of DR_INFO is guaranteed to have.  */
 
 static inline unsigned int
-vect_known_alignment_in_bytes (struct data_reference *dr)
+vect_known_alignment_in_bytes (dr_vec_info *dr_info)
 {
-  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
-    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
-  if (DR_MISALIGNMENT (dr) == 0)
-    return DR_TARGET_ALIGNMENT (dr);
-  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
+  if (DR_MISALIGNMENT (dr_info) == DR_MISALIGNMENT_UNKNOWN)
+    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
+  if (DR_MISALIGNMENT (dr_info) == 0)
+    return DR_TARGET_ALIGNMENT (dr_info);
+  return DR_MISALIGNMENT (dr_info) & -DR_MISALIGNMENT (dr_info);
 }
 
-/* Return the behavior of DR with respect to the vectorization context
+/* Return the behavior of DR_INFO with respect to the vectorization context
    (which for outer loop vectorization might not be the behavior recorded
-   in DR itself).  */
+   in DR_INFO itself).  */
 
 static inline innermost_loop_behavior *
-vect_dr_behavior (data_reference *dr)
+vect_dr_behavior (dr_vec_info *dr_info)
 {
-  gimple *stmt = DR_STMT (dr);
-  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   if (loop_vinfo == NULL
       || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
-    return &DR_INNERMOST (dr);
+    return &DR_INNERMOST (dr_info->dr);
   else
     return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
 }
@@ -1451,17 +1450,17 @@ vect_max_vf (loop_vec_info loop_vinfo)
   return MAX_VECTORIZATION_FACTOR;
 }
 
-/* Return the size of the value accessed by unvectorized data reference DR.
-   This is only valid once STMT_VINFO_VECTYPE has been calculated for the
-   associated gimple statement, since that guarantees that DR accesses
-   either a scalar or a scalar equivalent.  ("Scalar equivalent" here
-   includes things like V1SI, which can be vectorized in the same way
+/* Return the size of the value accessed by unvectorized data reference
+   DR_INFO.  This is only valid once STMT_VINFO_VECTYPE has been calculated
+   for the associated gimple statement, since that guarantees that DR_INFO
+   accesses either a scalar or a scalar equivalent.  ("Scalar equivalent"
+   here includes things like V1SI, which can be vectorized in the same way
    as a plain SI.)  */
 
 inline unsigned int
-vect_get_scalar_dr_size (struct data_reference *dr)
+vect_get_scalar_dr_size (dr_vec_info *dr_info)
 {
-  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr))));
+  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr_info->dr))));
 }
 
 /* Source location + hotness information. */
@@ -1561,7 +1560,7 @@ extern tree vect_get_mask_type_for_stmt
 /* In tree-vect-data-refs.c.  */
 extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
 extern enum dr_alignment_support vect_supportable_dr_alignment
-                                           (struct data_reference *, bool);
+                                           (dr_vec_info *, bool);
 extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
                                            HOST_WIDE_INT *);
 extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-26 11:30:56.193256600 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-26 11:42:19.031663762 +0100
@@ -192,14 +192,16 @@ vect_check_nonzero_value (loop_vec_info
   LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
 }
 
-/* Return true if we know that the order of vectorized STMTINFO_A and
-   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
-   STMTINFO_B.  At least one of the statements is a write.  */
+/* Return true if we know that the order of vectorized DR_INFO_A and
+   vectorized DR_INFO_B will be the same as the order of DR_INFO_A and
+   DR_INFO_B.  At least one of the accesses is a write.  */
 
 static bool
-vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
-			       stmt_vec_info stmtinfo_b)
+vect_preserves_scalar_order_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b)
 {
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
+
   /* Single statements are always kept in their original order.  */
   if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
       && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
@@ -294,8 +296,10 @@ vect_analyze_data_ref_dependence (struct
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
   lambda_vector dist_v;
   unsigned int loop_depth;
 
@@ -471,7 +475,7 @@ vect_analyze_data_ref_dependence (struct
 		... = a[i];
 		a[i+1] = ...;
 	     where loads from the group interleave with the store.  */
-	  if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
+	  if (!vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
 	    {
 	      if (dump_enabled_p ())
 		dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -609,6 +613,8 @@ vect_slp_analyze_data_ref_dependence (st
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
 
   /* We need to check dependences of statements marked as unvectorizable
      as well, they still can prohibit vectorization.  */
@@ -626,9 +632,9 @@ vect_slp_analyze_data_ref_dependence (st
 
   /* If dra and drb are part of the same interleaving chain consider
      them independent.  */
-  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
-      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
-	  == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
+  if (STMT_VINFO_GROUPED_ACCESS (dr_info_a->stmt)
+      && (DR_GROUP_FIRST_ELEMENT (dr_info_a->stmt)
+	  == DR_GROUP_FIRST_ELEMENT (dr_info_b->stmt)))
     return false;
 
   /* Unknown data dependence.  */
@@ -842,7 +848,8 @@ vect_record_base_alignments (vec_info *v
   unsigned int i;
   FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       if (!DR_IS_CONDITIONAL_IN_STMT (dr)
 	  && STMT_VINFO_VECTORIZABLE (stmt_info)
 	  && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
@@ -858,34 +865,33 @@ vect_record_base_alignments (vec_info *v
     }
 }
 
-/* Return the target alignment for the vectorized form of DR.  */
+/* Return the target alignment for the vectorized form of DR_INFO.  */
 
 static unsigned int
-vect_calculate_target_alignment (struct data_reference *dr)
+vect_calculate_target_alignment (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
+  tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
   return targetm.vectorize.preferred_vector_alignment (vectype);
 }
 
 /* Function vect_compute_data_ref_alignment
 
-   Compute the misalignment of the data reference DR.
+   Compute the misalignment of the data reference DR_INFO.
 
    Output:
-   1. DR_MISALIGNMENT (DR) is defined.
+   1. DR_MISALIGNMENT (DR_INFO) is defined.
 
    FOR NOW: No analysis is actually performed. Misalignment is calculated
    only for trivial cases. TODO.  */
 
 static void
-vect_compute_data_ref_alignment (struct data_reference *dr)
+vect_compute_data_ref_alignment (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
-  tree ref = DR_REF (dr);
+  tree ref = DR_REF (dr_info->dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (dump_enabled_p ())
@@ -896,17 +902,17 @@ vect_compute_data_ref_alignment (struct
     loop = LOOP_VINFO_LOOP (loop_vinfo);
 
   /* Initialize misalignment to unknown.  */
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
 
   if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
     return;
 
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
   bool step_preserves_misalignment_p;
 
   unsigned HOST_WIDE_INT vector_alignment
-    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
-  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
+    = vect_calculate_target_alignment (dr_info) / BITS_PER_UNIT;
+  DR_TARGET_ALIGNMENT (dr_info) = vector_alignment;
 
   /* No step for BB vectorization.  */
   if (!loop)
@@ -924,7 +930,7 @@ vect_compute_data_ref_alignment (struct
   else if (nested_in_vect_loop_p (loop, stmt_info))
     {
       step_preserves_misalignment_p
-	= (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
+	= (DR_STEP_ALIGNMENT (dr_info->dr) % vector_alignment) == 0;
 
       if (dump_enabled_p ())
 	{
@@ -946,7 +952,7 @@ vect_compute_data_ref_alignment (struct
     {
       poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
       step_preserves_misalignment_p
-	= multiple_p (DR_STEP_ALIGNMENT (dr) * vf, vector_alignment);
+	= multiple_p (DR_STEP_ALIGNMENT (dr_info->dr) * vf, vector_alignment);
 
       if (!step_preserves_misalignment_p && dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1009,8 +1015,8 @@ vect_compute_data_ref_alignment (struct
           dump_printf (MSG_NOTE, "\n");
         }
 
-      DR_VECT_AUX (dr)->base_decl = base;
-      DR_VECT_AUX (dr)->base_misaligned = true;
+      dr_info->base_decl = base;
+      dr_info->base_misaligned = true;
       base_misalignment = 0;
     }
   poly_int64 misalignment
@@ -1038,12 +1044,13 @@ vect_compute_data_ref_alignment (struct
       return;
     }
 
-  SET_DR_MISALIGNMENT (dr, const_misalignment);
+  SET_DR_MISALIGNMENT (dr_info, const_misalignment);
 
   if (dump_enabled_p ())
     {
       dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
+		       "misalign = %d bytes of ref ",
+		       DR_MISALIGNMENT (dr_info));
       dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
       dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
     }
@@ -1052,28 +1059,28 @@ vect_compute_data_ref_alignment (struct
 }
 
 /* Function vect_update_misalignment_for_peel.
-   Sets DR's misalignment
-   - to 0 if it has the same alignment as DR_PEEL,
-   - to the misalignment computed using NPEEL if DR's salignment is known,
+   Sets DR_INFO's misalignment
+   - to 0 if it has the same alignment as DR_PEEL_INFO,
+   - to the misalignment computed using NPEEL if DR_INFO's alignment is known,
    - to -1 (unknown) otherwise.
 
-   DR - the data reference whose misalignment is to be adjusted.
-   DR_PEEL - the data reference whose misalignment is being made
-             zero in the vector loop by the peel.
+   DR_INFO - the data reference whose misalignment is to be adjusted.
+   DR_PEEL_INFO - the data reference whose misalignment is being made
+		  zero in the vector loop by the peel.
    NPEEL - the number of iterations in the peel loop if the misalignment
-           of DR_PEEL is known at compile time.  */
+           of DR_PEEL_INFO is known at compile time.  */
 
 static void
-vect_update_misalignment_for_peel (struct data_reference *dr,
-                                   struct data_reference *dr_peel, int npeel)
+vect_update_misalignment_for_peel (dr_vec_info *dr_info,
+				   dr_vec_info *dr_peel_info, int npeel)
 {
   unsigned int i;
   vec<dr_p> same_aligned_drs;
   struct data_reference *current_dr;
-  int dr_size = vect_get_scalar_dr_size (dr);
-  int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
+  int dr_size = vect_get_scalar_dr_size (dr_info);
+  int dr_peel_size = vect_get_scalar_dr_size (dr_peel_info);
+  stmt_vec_info stmt_info = dr_info->stmt;
+  stmt_vec_info peel_stmt_info = dr_peel_info->stmt;
 
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
@@ -1084,51 +1091,52 @@ vect_update_misalignment_for_peel (struc
 
   /* It can be assumed that the data refs with the same alignment as dr_peel
      are aligned in the vector loop.  */
-  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
+  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
   FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
     {
-      if (current_dr != dr)
+      if (current_dr != dr_info->dr)
         continue;
-      gcc_assert (!known_alignment_for_access_p (dr)
-		  || !known_alignment_for_access_p (dr_peel)
-		  || (DR_MISALIGNMENT (dr) / dr_size
-		      == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
-      SET_DR_MISALIGNMENT (dr, 0);
+      gcc_assert (!known_alignment_for_access_p (dr_info)
+		  || !known_alignment_for_access_p (dr_peel_info)
+		  || (DR_MISALIGNMENT (dr_info) / dr_size
+		      == DR_MISALIGNMENT (dr_peel_info) / dr_peel_size));
+      SET_DR_MISALIGNMENT (dr_info, 0);
       return;
     }
 
-  if (known_alignment_for_access_p (dr)
-      && known_alignment_for_access_p (dr_peel))
+  if (known_alignment_for_access_p (dr_info)
+      && known_alignment_for_access_p (dr_peel_info))
     {
-      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
-      int misal = DR_MISALIGNMENT (dr);
+      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					    size_zero_node) < 0;
+      int misal = DR_MISALIGNMENT (dr_info);
       misal += negative ? -npeel * dr_size : npeel * dr_size;
-      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
-      SET_DR_MISALIGNMENT (dr, misal);
+      misal &= DR_TARGET_ALIGNMENT (dr_info) - 1;
+      SET_DR_MISALIGNMENT (dr_info, misal);
       return;
     }
 
   if (dump_enabled_p ())
     dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
 		     "to unknown (-1).\n");
-  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
+  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
 }
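
The known-alignment case above can be read in isolation as: peeling
NPEEL scalar iterations moves the access by NPEEL * DR_SIZE bytes, and
the new misalignment is that offset taken modulo the power-of-two
target alignment.  A minimal sketch with made-up names, assuming a
positive step (not patch code):

    /* Misalignment after peeling NPEEL iterations, given a known
       misalignment MISAL, DR_SIZE bytes per iteration and a
       power-of-two TARGET_ALIGN.  */
    static int
    misalignment_after_peel (int misal, int npeel, int dr_size,
                             int target_align)
    {
      misal += npeel * dr_size;
      return misal & (target_align - 1);
    }

For instance, misal == 8, npeel == 2 and dr_size == 4 with
target_align == 16 gives (8 + 8) & 15 == 0, i.e. the peel makes the
access aligned.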
 
 
 /* Function verify_data_ref_alignment
 
-   Return TRUE if DR can be handled with respect to alignment.  */
+   Return TRUE if DR_INFO can be handled with respect to alignment.  */
 
 static bool
-verify_data_ref_alignment (data_reference_p dr)
+verify_data_ref_alignment (dr_vec_info *dr_info)
 {
   enum dr_alignment_support supportable_dr_alignment
-    = vect_supportable_dr_alignment (dr, false);
+    = vect_supportable_dr_alignment (dr_info, false);
   if (!supportable_dr_alignment)
     {
       if (dump_enabled_p ())
 	{
-	  if (DR_IS_READ (dr))
+	  if (DR_IS_READ (dr_info->dr))
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 			     "not vectorized: unsupported unaligned load.");
 	  else
@@ -1137,7 +1145,7 @@ verify_data_ref_alignment (data_referenc
 			     "store.");
 
 	  dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
-			     DR_REF (dr));
+			     DR_REF (dr_info->dr));
 	  dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
 	}
       return false;
@@ -1164,7 +1172,8 @@ vect_verify_datarefs_alignment (loop_vec
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1180,7 +1189,7 @@ vect_verify_datarefs_alignment (loop_vec
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      if (! verify_data_ref_alignment (dr))
+      if (! verify_data_ref_alignment (dr_info))
 	return false;
     }
 
@@ -1202,13 +1211,13 @@ not_size_aligned (tree exp)
 
 /* Function vector_alignment_reachable_p
 
-   Return true if vector alignment for DR is reachable by peeling
+   Return true if vector alignment for DR_INFO is reachable by peeling
    a few loop iterations.  Return false otherwise.  */
 
 static bool
-vector_alignment_reachable_p (struct data_reference *dr)
+vector_alignment_reachable_p (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -1219,13 +1228,13 @@ vector_alignment_reachable_p (struct dat
       int elem_size, mis_in_elements;
 
       /* FORNOW: handle only known alignment.  */
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	return false;
 
       poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
       poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
       elem_size = vector_element_size (vector_size, nelements);
-      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
+      mis_in_elements = DR_MISALIGNMENT (dr_info) / elem_size;
 
       if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
 	return false;
@@ -1233,7 +1242,7 @@ vector_alignment_reachable_p (struct dat
 
   /* If misalignment is known at the compile time then allow peeling
      only if natural alignment is reachable through peeling.  */
-  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
+  if (known_alignment_for_access_p (dr_info) && !aligned_access_p (dr_info))
     {
       HOST_WIDE_INT elmsize =
 		int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
@@ -1242,9 +1251,9 @@ vector_alignment_reachable_p (struct dat
 	  dump_printf_loc (MSG_NOTE, vect_location,
 	                   "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
 	  dump_printf (MSG_NOTE,
-	               ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
+	               ". misalignment = %d.\n", DR_MISALIGNMENT (dr_info));
 	}
-      if (DR_MISALIGNMENT (dr) % elmsize)
+      if (DR_MISALIGNMENT (dr_info) % elmsize)
 	{
 	  if (dump_enabled_p ())
 	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1253,10 +1262,10 @@ vector_alignment_reachable_p (struct dat
 	}
     }
 
-  if (!known_alignment_for_access_p (dr))
+  if (!known_alignment_for_access_p (dr_info))
     {
-      tree type = TREE_TYPE (DR_REF (dr));
-      bool is_packed = not_size_aligned (DR_REF (dr));
+      tree type = TREE_TYPE (DR_REF (dr_info->dr));
+      bool is_packed = not_size_aligned (DR_REF (dr_info->dr));
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
 	                 "Unknown misalignment, %snaturally aligned\n",
@@ -1268,16 +1277,16 @@ vector_alignment_reachable_p (struct dat
 }
 
 
-/* Calculate the cost of the memory access represented by DR.  */
+/* Calculate the cost of the memory access represented by DR_INFO.  */
 
 static void
-vect_get_data_access_cost (struct data_reference *dr,
+vect_get_data_access_cost (dr_vec_info *dr_info,
                            unsigned int *inside_cost,
                            unsigned int *outside_cost,
 			   stmt_vector_for_cost *body_cost_vec,
 			   stmt_vector_for_cost *prologue_cost_vec)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   int ncopies;
 
@@ -1286,7 +1295,7 @@ vect_get_data_access_cost (struct data_r
   else
     ncopies = vect_get_num_copies (loop_vinfo, STMT_VINFO_VECTYPE (stmt_info));
 
-  if (DR_IS_READ (dr))
+  if (DR_IS_READ (dr_info->dr))
     vect_get_load_cost (stmt_info, ncopies, true, inside_cost, outside_cost,
 			prologue_cost_vec, body_cost_vec, false);
   else
@@ -1301,7 +1310,7 @@ vect_get_data_access_cost (struct data_r
 
 typedef struct _vect_peel_info
 {
-  struct data_reference *dr;
+  dr_vec_info *dr_info;
   int npeel;
   unsigned int count;
 } *vect_peel_info;
@@ -1335,16 +1344,17 @@ peel_info_hasher::equal (const _vect_pee
 }
 
 
-/* Insert DR into peeling hash table with NPEEL as key.  */
+/* Insert DR_INFO into peeling hash table with NPEEL as key.  */
 
 static void
 vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
-			  loop_vec_info loop_vinfo, struct data_reference *dr,
+			  loop_vec_info loop_vinfo, dr_vec_info *dr_info,
                           int npeel)
 {
   struct _vect_peel_info elem, *slot;
   _vect_peel_info **new_slot;
-  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
+  bool supportable_dr_alignment
+    = vect_supportable_dr_alignment (dr_info, true);
 
   elem.npeel = npeel;
   slot = peeling_htab->find (&elem);
@@ -1354,7 +1364,7 @@ vect_peeling_hash_insert (hash_table<pee
     {
       slot = XNEW (struct _vect_peel_info);
       slot->npeel = npeel;
-      slot->dr = dr;
+      slot->dr_info = dr_info;
       slot->count = 1;
       new_slot = peeling_htab->find_slot (slot, INSERT);
       *new_slot = slot;
@@ -1381,19 +1391,19 @@ vect_peeling_hash_get_most_frequent (_ve
     {
       max->peel_info.npeel = elem->npeel;
       max->peel_info.count = elem->count;
-      max->peel_info.dr = elem->dr;
+      max->peel_info.dr_info = elem->dr_info;
     }
 
   return 1;
 }
 
 /* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
+   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
    misalignment will be zero after peeling.  */
 
 static void
 vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
-				struct data_reference *dr0,
+				dr_vec_info *dr0_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
 				stmt_vector_for_cost *body_cost_vec,
@@ -1406,7 +1416,8 @@ vect_get_peeling_costs_all_drs (vec<data
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
 
@@ -1423,16 +1434,16 @@ vect_get_peeling_costs_all_drs (vec<data
 	continue;
 
       int save_misalignment;
-      save_misalignment = DR_MISALIGNMENT (dr);
+      save_misalignment = DR_MISALIGNMENT (dr_info);
       if (npeel == 0)
 	;
-      else if (unknown_misalignment && dr == dr0)
-	SET_DR_MISALIGNMENT (dr, 0);
+      else if (unknown_misalignment && dr_info == dr0_info)
+	SET_DR_MISALIGNMENT (dr_info, 0);
       else
-	vect_update_misalignment_for_peel (dr, dr0, npeel);
-      vect_get_data_access_cost (dr, inside_cost, outside_cost,
+	vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
+      vect_get_data_access_cost (dr_info, inside_cost, outside_cost,
 				 body_cost_vec, prologue_cost_vec);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
     }
 }
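
The loop above uses a save/modify/restore pattern to cost a
hypothetical peeling without committing to it.  The skeleton of that
pattern, with made-up names (a sketch, not patch code):

    /* Evaluate COST_FN as if *MISALIGNMENT were WHAT_IF_VALUE,
       leaving the recorded value unchanged afterwards.  */
    static unsigned int
    cost_with_misalignment (int *misalignment, int what_if_value,
                            unsigned int (*cost_fn) (void))
    {
      int saved = *misalignment;
      *misalignment = what_if_value;
      unsigned int cost = cost_fn ();
      *misalignment = saved;
      return cost;
    }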
 
@@ -1446,7 +1457,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   vect_peel_info elem = *slot;
   int dummy;
   unsigned int inside_cost = 0, outside_cost = 0;
-  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
+  stmt_vec_info stmt_info = elem->dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
 		       epilogue_cost_vec;
@@ -1456,7 +1467,7 @@ vect_peeling_hash_get_lowest_cost (_vect
   epilogue_cost_vec.create (2);
 
   vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->dr, &inside_cost, &outside_cost,
+				  elem->dr_info, &inside_cost, &outside_cost,
 				  &body_cost_vec, &prologue_cost_vec,
 				  elem->npeel, false);
 
@@ -1480,7 +1491,7 @@ vect_peeling_hash_get_lowest_cost (_vect
     {
       min->inside_cost = inside_cost;
       min->outside_cost = outside_cost;
-      min->peel_info.dr = elem->dr;
+      min->peel_info.dr_info = elem->dr_info;
       min->peel_info.npeel = elem->npeel;
       min->peel_info.count = elem->count;
     }
@@ -1499,7 +1510,7 @@ vect_peeling_hash_choose_best_peeling (h
 {
    struct _vect_peel_extended_info res;
 
-   res.peel_info.dr = NULL;
+   res.peel_info.dr_info = NULL;
 
    if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
      {
@@ -1523,7 +1534,7 @@ vect_peeling_hash_choose_best_peeling (h
 /* Return true if the new peeling NPEEL is supported.  */
 
 static bool
-vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
+vect_peeling_supportable (loop_vec_info loop_vinfo, dr_vec_info *dr0_info,
 			  unsigned npeel)
 {
   unsigned i;
@@ -1536,10 +1547,11 @@ vect_peeling_supportable (loop_vec_info
     {
       int save_misalignment;
 
-      if (dr == dr0)
+      if (dr == dr0_info->dr)
 	continue;
 
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
       /* For interleaving, only the alignment of the first access
 	 matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
@@ -1552,10 +1564,11 @@ vect_peeling_supportable (loop_vec_info
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      save_misalignment = DR_MISALIGNMENT (dr);
-      vect_update_misalignment_for_peel (dr, dr0, npeel);
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
-      SET_DR_MISALIGNMENT (dr, save_misalignment);
+      save_misalignment = DR_MISALIGNMENT (dr_info);
+      vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (dr_info, false);
+      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
 
       if (!supportable_dr_alignment)
 	return false;
@@ -1661,7 +1674,8 @@ vect_enhance_data_refs_alignment (loop_v
   vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   enum dr_alignment_support supportable_dr_alignment;
-  struct data_reference *dr0 = NULL, *first_store = NULL;
+  dr_vec_info *first_store = NULL;
+  dr_vec_info *dr0_info = NULL;
   struct data_reference *dr;
   unsigned int i, j;
   bool do_peeling = false;
@@ -1671,7 +1685,7 @@ vect_enhance_data_refs_alignment (loop_v
   bool one_misalignment_known = false;
   bool one_misalignment_unknown = false;
   bool one_dr_unsupportable = false;
-  struct data_reference *unsupportable_dr = NULL;
+  dr_vec_info *unsupportable_dr_info = NULL;
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
   unsigned possible_npeel_number = 1;
   tree vectype;
@@ -1718,7 +1732,8 @@ vect_enhance_data_refs_alignment (loop_v
 
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      stmt_vec_info stmt_info = dr_info->stmt;
 
       if (!STMT_VINFO_RELEVANT_P (stmt_info))
 	continue;
@@ -1741,21 +1756,23 @@ vect_enhance_data_refs_alignment (loop_v
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
-      do_peeling = vector_alignment_reachable_p (dr);
+      supportable_dr_alignment = vect_supportable_dr_alignment (dr_info, true);
+      do_peeling = vector_alignment_reachable_p (dr_info);
       if (do_peeling)
         {
-          if (known_alignment_for_access_p (dr))
+          if (known_alignment_for_access_p (dr_info))
             {
 	      unsigned int npeel_tmp = 0;
 	      bool negative = tree_int_cst_compare (DR_STEP (dr),
 						    size_zero_node) < 0;
 
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
-	      unsigned int dr_size = vect_get_scalar_dr_size (dr);
-	      mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
-	      if (DR_MISALIGNMENT (dr) != 0)
+	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
+	      unsigned int dr_size = vect_get_scalar_dr_size (dr_info);
+	      mis = (negative
+		     ? DR_MISALIGNMENT (dr_info)
+		     : -DR_MISALIGNMENT (dr_info));
+	      if (DR_MISALIGNMENT (dr_info) != 0)
 		npeel_tmp = (mis & (target_align - 1)) / dr_size;
 
               /* For multiple types, it is possible that the bigger type access
@@ -1780,7 +1797,7 @@ vect_enhance_data_refs_alignment (loop_v
 
 		  /* NPEEL_TMP is 0 when there is no misalignment, but also
 		     allow peeling NELEMENTS.  */
-		  if (DR_MISALIGNMENT (dr) == 0)
+		  if (DR_MISALIGNMENT (dr_info) == 0)
 		    possible_npeel_number++;
 		}
 
@@ -1789,7 +1806,7 @@ vect_enhance_data_refs_alignment (loop_v
               for (j = 0; j < possible_npeel_number; j++)
                 {
                   vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
-					    dr, npeel_tmp);
+					    dr_info, npeel_tmp);
 		  npeel_tmp += target_align / dr_size;
                 }
 
@@ -1803,11 +1820,11 @@ vect_enhance_data_refs_alignment (loop_v
                  stores over load.  */
 	      unsigned same_align_drs
 		= STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
-	      if (!dr0
+	      if (!dr0_info
 		  || same_align_drs_max < same_align_drs)
 		{
 		  same_align_drs_max = same_align_drs;
-		  dr0 = dr;
+		  dr0_info = dr_info;
 		}
 	      /* For data-refs with the same number of related
 		 accesses prefer the one where the misalign
@@ -1816,13 +1833,13 @@ vect_enhance_data_refs_alignment (loop_v
 		{
 		  struct loop *ivloop0, *ivloop;
 		  ivloop0 = outermost_invariant_loop_for_expr
-		    (loop, DR_BASE_ADDRESS (dr0));
+		    (loop, DR_BASE_ADDRESS (dr0_info->dr));
 		  ivloop = outermost_invariant_loop_for_expr
 		    (loop, DR_BASE_ADDRESS (dr));
 		  if ((ivloop && !ivloop0)
 		      || (ivloop && ivloop0
 			  && flow_loop_nested_p (ivloop, ivloop0)))
-		    dr0 = dr;
+		    dr0_info = dr_info;
 		}
 
 	      one_misalignment_unknown = true;
@@ -1832,16 +1849,16 @@ vect_enhance_data_refs_alignment (loop_v
 	      if (!supportable_dr_alignment)
 	      {
 		one_dr_unsupportable = true;
-		unsupportable_dr = dr;
+		unsupportable_dr_info = dr_info;
 	      }
 
 	      if (!first_store && DR_IS_WRITE (dr))
-		first_store = dr;
+		first_store = dr_info;
             }
         }
       else
         {
-          if (!aligned_access_p (dr))
+          if (!aligned_access_p (dr_info))
             {
               if (dump_enabled_p ())
                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -1879,7 +1896,7 @@ vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, dr0,
+      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
@@ -1905,7 +1922,7 @@ vect_enhance_data_refs_alignment (loop_v
 	  || (load_inside_cost == store_inside_cost
 	      && load_outside_cost > store_outside_cost))
 	{
-	  dr0 = first_store;
+	  dr0_info = first_store;
 	  peel_for_unknown_alignment.inside_cost = store_inside_cost;
 	  peel_for_unknown_alignment.outside_cost = store_outside_cost;
 	}
@@ -1929,18 +1946,18 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       peel_for_unknown_alignment.peel_info.count = 1
-	+ STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
+	+ STMT_VINFO_SAME_ALIGN_REFS (dr0_info->stmt).length ();
     }
 
   peel_for_unknown_alignment.peel_info.npeel = 0;
-  peel_for_unknown_alignment.peel_info.dr = dr0;
+  peel_for_unknown_alignment.peel_info.dr_info = dr0_info;
 
   best_peel = peel_for_unknown_alignment;
 
   peel_for_known_alignment.inside_cost = INT_MAX;
   peel_for_known_alignment.outside_cost = INT_MAX;
   peel_for_known_alignment.peel_info.count = 0;
-  peel_for_known_alignment.peel_info.dr = NULL;
+  peel_for_known_alignment.peel_info.dr_info = NULL;
 
   if (do_peeling && one_misalignment_known)
     {
@@ -1952,7 +1969,7 @@ vect_enhance_data_refs_alignment (loop_v
     }
 
   /* Compare costs of peeling for known and unknown alignment. */
-  if (peel_for_known_alignment.peel_info.dr != NULL
+  if (peel_for_known_alignment.peel_info.dr_info != NULL
       && peel_for_unknown_alignment.inside_cost
       >= peel_for_known_alignment.inside_cost)
     {
@@ -1969,7 +1986,7 @@ vect_enhance_data_refs_alignment (loop_v
      since we'd have to discard a chosen peeling except when it accidentally
      aligned the unsupportable data ref.  */
   if (one_dr_unsupportable)
-    dr0 = unsupportable_dr;
+    dr0_info = unsupportable_dr_info;
   else if (do_peeling)
     {
       /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
@@ -2000,7 +2017,7 @@ vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       npeel = best_peel.peel_info.npeel;
-      dr0 = best_peel.peel_info.dr;
+      dr0_info = best_peel.peel_info.dr_info;
 
       /* If no peeling is not more expensive than the best peeling we
 	 have so far, don't perform any peeling.  */
@@ -2010,12 +2027,12 @@ vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
+      stmt_vec_info stmt_info = dr0_info->stmt;
       vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-      if (known_alignment_for_access_p (dr0))
+      if (known_alignment_for_access_p (dr0_info))
         {
-	  bool negative = tree_int_cst_compare (DR_STEP (dr0),
+	  bool negative = tree_int_cst_compare (DR_STEP (dr0_info->dr),
 						size_zero_node) < 0;
           if (!npeel)
             {
@@ -2024,16 +2041,17 @@ vect_enhance_data_refs_alignment (loop_v
                  updating DR_MISALIGNMENT values.  The peeling factor is the
                  vectorization factor minus the misalignment as an element
                  count.  */
-	      mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
-	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
+	      mis = (negative
+		     ? DR_MISALIGNMENT (dr0_info)
+		     : -DR_MISALIGNMENT (dr0_info));
+	      unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
 	      npeel = ((mis & (target_align - 1))
-		       / vect_get_scalar_dr_size (dr0));
+		       / vect_get_scalar_dr_size (dr0_info));
             }
 
 	  /* For interleaved data access every iteration accesses all the
 	     members of the group, therefore we divide the number of iterations
 	     by the group size.  */
-	  stmt_info = vect_dr_stmt (dr0);
 	  if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	    npeel /= DR_GROUP_SIZE (stmt_info);
 
@@ -2043,11 +2061,11 @@ vect_enhance_data_refs_alignment (loop_v
         }
 
       /* Ensure that all datarefs can be vectorized after the peel.  */
-      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
+      if (!vect_peeling_supportable (loop_vinfo, dr0_info, npeel))
 	do_peeling = false;
 
       /* Check if all datarefs are supportable and log.  */
-      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
+      if (do_peeling && known_alignment_for_access_p (dr0_info) && npeel == 0)
         {
           stat = vect_verify_datarefs_alignment (loop_vinfo);
           if (!stat)
@@ -2066,8 +2084,9 @@ vect_enhance_data_refs_alignment (loop_v
               unsigned max_peel = npeel;
               if (max_peel == 0)
                 {
-		  unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
-		  max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
+		  unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
+		  max_peel = (target_align
+			      / vect_get_scalar_dr_size (dr0_info) - 1);
                 }
               if (max_peel > max_allowed_peel)
                 {
@@ -2103,25 +2122,26 @@ vect_enhance_data_refs_alignment (loop_v
              vectorization factor times the size).  Otherwise, the
              misalignment of DR_i must be set to unknown.  */
 	  FOR_EACH_VEC_ELT (datarefs, i, dr)
-	    if (dr != dr0)
+	    if (dr != dr0_info->dr)
 	      {
 		/* Strided accesses perform only component accesses, alignment
 		   is irrelevant for them.  */
-		stmt_info = vect_dr_stmt (dr);
+		dr_vec_info *dr_info = DR_VECT_AUX (dr);
+		stmt_info = dr_info->stmt;
 		if (STMT_VINFO_STRIDED_P (stmt_info)
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
 
-		vect_update_misalignment_for_peel (dr, dr0, npeel);
+		vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
 	      }
 
-          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
+          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
           if (npeel)
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
           else
             LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
-	      = DR_MISALIGNMENT (dr0);
-	  SET_DR_MISALIGNMENT (dr0, 0);
+	      = DR_MISALIGNMENT (dr0_info);
+	  SET_DR_MISALIGNMENT (dr0_info, 0);
 	  if (dump_enabled_p ())
             {
               dump_printf_loc (MSG_NOTE, vect_location,
@@ -2156,11 +2176,12 @@ vect_enhance_data_refs_alignment (loop_v
     {
       FOR_EACH_VEC_ELT (datarefs, i, dr)
         {
-	  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+	  dr_vec_info *dr_info = DR_VECT_AUX (dr);
+	  stmt_vec_info stmt_info = dr_info->stmt;
 
 	  /* For interleaving, only the alignment of the first access
 	     matters.  */
-	  if (aligned_access_p (dr)
+	  if (aligned_access_p (dr_info)
 	      || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
 		  && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
 	    continue;
@@ -2175,14 +2196,15 @@ vect_enhance_data_refs_alignment (loop_v
 	      break;
 	    }
 
-	  supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+	  supportable_dr_alignment
+	    = vect_supportable_dr_alignment (dr_info, false);
 
           if (!supportable_dr_alignment)
             {
               int mask;
               tree vectype;
 
-              if (known_alignment_for_access_p (dr)
+              if (known_alignment_for_access_p (dr_info)
                   || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
                      >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
                 {
@@ -2190,7 +2212,6 @@ vect_enhance_data_refs_alignment (loop_v
                   break;
                 }
 
-	      stmt_info = vect_dr_stmt (dr);
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
 	      gcc_assert (vectype);
 
@@ -2241,8 +2262,8 @@ vect_enhance_data_refs_alignment (loop_v
          of the loop being vectorized.  */
       FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
         {
-          dr = STMT_VINFO_DATA_REF (stmt_info);
-	  SET_DR_MISALIGNMENT (dr, 0);
+	  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+	  SET_DR_MISALIGNMENT (dr_info, 0);
 	  if (dump_enabled_p ())
             dump_printf_loc (MSG_NOTE, vect_location,
                              "Alignment of access forced using versioning.\n");
@@ -2278,8 +2299,10 @@ vect_find_same_alignment_drs (struct dat
 {
   struct data_reference *dra = DDR_A (ddr);
   struct data_reference *drb = DDR_B (ddr);
-  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
-  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
+  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
     return;
@@ -2302,9 +2325,9 @@ vect_find_same_alignment_drs (struct dat
   if (maybe_ne (diff, 0))
     {
       /* Get the wider of the two alignments.  */
-      unsigned int align_a = (vect_calculate_target_alignment (dra)
+      unsigned int align_a = (vect_calculate_target_alignment (dr_info_a)
 			      / BITS_PER_UNIT);
-      unsigned int align_b = (vect_calculate_target_alignment (drb)
+      unsigned int align_b = (vect_calculate_target_alignment (dr_info_b)
 			      / BITS_PER_UNIT);
       unsigned int max_align = MAX (align_a, align_b);
 
@@ -2352,9 +2375,9 @@ vect_analyze_data_refs_alignment (loop_v
   vect_record_base_alignments (vinfo);
   FOR_EACH_VEC_ELT (datarefs, i, dr)
     {
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
-      if (STMT_VINFO_VECTORIZABLE (stmt_info))
-	vect_compute_data_ref_alignment (dr);
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
+	vect_compute_data_ref_alignment (dr_info);
     }
 
   return true;
@@ -2370,17 +2393,17 @@ vect_slp_analyze_and_verify_node_alignme
      the node is permuted in which case we start from the first
      element in the group.  */
   stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
     first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
 
-  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
-  vect_compute_data_ref_alignment (dr);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
+  vect_compute_data_ref_alignment (dr_info);
   /* For creating the data-ref pointer we need alignment of the
      first element anyway.  */
-  if (dr != first_dr)
-    vect_compute_data_ref_alignment (first_dr);
-  if (! verify_data_ref_alignment (dr))
+  if (dr_info != first_dr_info)
+    vect_compute_data_ref_alignment (first_dr_info);
+  if (! verify_data_ref_alignment (dr_info))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2418,19 +2441,20 @@ vect_slp_analyze_and_verify_instance_ali
 }
 
 
-/* Analyze groups of accesses: check that DR belongs to a group of
+/* Analyze groups of accesses: check that DR_INFO belongs to a group of
    accesses of legal size, step, etc.  Detect gaps, single element
    interleaving, and other special cases. Set grouped access info.
    Collect groups of strided stores for further use in SLP analysis.
    Worker for vect_analyze_group_access.  */
 
 static bool
-vect_analyze_group_access_1 (struct data_reference *dr)
+vect_analyze_group_access_1 (dr_vec_info *dr_info)
 {
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
   HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   HOST_WIDE_INT dr_step = -1;
@@ -2507,7 +2531,7 @@ vect_analyze_group_access_1 (struct data
       if (bb_vinfo)
 	{
 	  /* Mark the statement as unvectorizable.  */
-	  STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	  STMT_VINFO_VECTORIZABLE (stmt_info) = false;
 	  return true;
 	}
 
@@ -2655,18 +2679,18 @@ vect_analyze_group_access_1 (struct data
   return true;
 }
 
-/* Analyze groups of accesses: check that DR belongs to a group of
+/* Analyze groups of accesses: check that DR_INFO belongs to a group of
    accesses of legal size, step, etc.  Detect gaps, single element
    interleaving, and other special cases. Set grouped access info.
    Collect groups of strided stores for further use in SLP analysis.  */
 
 static bool
-vect_analyze_group_access (struct data_reference *dr)
+vect_analyze_group_access (dr_vec_info *dr_info)
 {
-  if (!vect_analyze_group_access_1 (dr))
+  if (!vect_analyze_group_access_1 (dr_info))
     {
       /* Dissolve the group if present.  */
-      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
+      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (dr_info->stmt);
       while (stmt_info)
 	{
 	  stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
@@ -2679,16 +2703,17 @@ vect_analyze_group_access (struct data_r
   return true;
 }
 
-/* Analyze the access pattern of the data-reference DR.
+/* Analyze the access pattern of the data-reference DR_INFO.
    In case of non-consecutive accesses call vect_analyze_group_access() to
    analyze groups of accesses.  */
 
 static bool
-vect_analyze_data_ref_access (struct data_reference *dr)
+vect_analyze_data_ref_access (dr_vec_info *dr_info)
 {
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
 
@@ -2768,10 +2793,10 @@ vect_analyze_data_ref_access (struct dat
   if (TREE_CODE (step) != INTEGER_CST)
     return (STMT_VINFO_STRIDED_P (stmt_info)
 	    && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
-		|| vect_analyze_group_access (dr)));
+		|| vect_analyze_group_access (dr_info)));
 
   /* Not consecutive access - check if it's a part of interleaving group.  */
-  return vect_analyze_group_access (dr);
+  return vect_analyze_group_access (dr_info);
 }
 
 /* Compare two data-references DRA and DRB to group them into chunks
@@ -2916,7 +2941,8 @@ vect_analyze_data_ref_accesses (vec_info
   for (i = 0; i < datarefs_copy.length () - 1;)
     {
       data_reference_p dra = datarefs_copy[i];
-      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
+      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
+      stmt_vec_info stmtinfo_a = dr_info_a->stmt;
       stmt_vec_info lastinfo = NULL;
       if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
 	  || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
@@ -2927,7 +2953,8 @@ vect_analyze_data_ref_accesses (vec_info
       for (i = i + 1; i < datarefs_copy.length (); ++i)
 	{
 	  data_reference_p drb = datarefs_copy[i];
-	  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
+	  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
+	  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
 	  if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
 	      || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
 	    break;
@@ -3050,25 +3077,28 @@ vect_analyze_data_ref_accesses (vec_info
     }
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
-    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
-        && !vect_analyze_data_ref_access (dr))
-      {
-	if (dump_enabled_p ())
-	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-	                   "not vectorized: complicated access pattern.\n");
+    {
+      dr_vec_info *dr_info = DR_VECT_AUX (dr);
+      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
+	  && !vect_analyze_data_ref_access (dr_info))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "not vectorized: complicated access pattern.\n");
 
-        if (is_a <bb_vec_info> (vinfo))
-	  {
-	    /* Mark the statement as not vectorizable.  */
-	    STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
-	    continue;
-	  }
-        else
-	  {
-	    datarefs_copy.release ();
-	    return false;
-	  }
-      }
+	  if (is_a <bb_vec_info> (vinfo))
+	    {
+	      /* Mark the statement as not vectorizable.  */
+	      STMT_VINFO_VECTORIZABLE (dr_info->stmt) = false;
+	      continue;
+	    }
+	  else
+	    {
+	      datarefs_copy.release ();
+	      return false;
+	    }
+	}
+    }
 
   datarefs_copy.release ();
   return true;
@@ -3077,7 +3107,7 @@ vect_analyze_data_ref_accesses (vec_info
 /* Function vect_vfa_segment_size.
 
    Input:
-     DR: The data reference.
+     DR_INFO: The data reference.
      LENGTH_FACTOR: segment length to consider.
 
    Return a value suitable for the dr_with_seg_len::seg_len field.
@@ -3086,32 +3116,32 @@ vect_analyze_data_ref_accesses (vec_info
    the size of the access; in effect it only describes the first byte.  */
 
 static tree
-vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
+vect_vfa_segment_size (dr_vec_info *dr_info, tree length_factor)
 {
   length_factor = size_binop (MINUS_EXPR,
 			      fold_convert (sizetype, length_factor),
 			      size_one_node);
-  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr)),
+  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr_info->dr)),
 		     length_factor);
 }
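
Note that the value above runs from the first access of the segment to
the start of its last access; the bytes actually read or written are
added separately by vect_vfa_access_size.  As a sketch of the
arithmetic for a constant step (made-up names, not patch code):

    /* Distance in bytes from the first access of a segment to the
       start of its last access, for LENGTH_FACTOR scalar iterations
       with a constant STEP in bytes.  */
    static long
    vfa_segment_size (long step, long length_factor)
    {
      return (length_factor - 1) * step;
    }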
 
-/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
+/* Return a value that, when added to abs (vect_vfa_segment_size (DR_INFO)),
    gives the worst-case number of bytes covered by the segment.  */
 
 static unsigned HOST_WIDE_INT
-vect_vfa_access_size (data_reference *dr)
+vect_vfa_access_size (dr_vec_info *dr_info)
 {
-  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
-  tree ref_type = TREE_TYPE (DR_REF (dr));
+  stmt_vec_info stmt_vinfo = dr_info->stmt;
+  tree ref_type = TREE_TYPE (DR_REF (dr_info->dr));
   unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
   unsigned HOST_WIDE_INT access_size = ref_size;
   if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
     {
-      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
+      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
       access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
     }
   if (STMT_VINFO_VEC_STMT (stmt_vinfo)
-      && (vect_supportable_dr_alignment (dr, false)
+      && (vect_supportable_dr_alignment (dr_info, false)
 	  == dr_explicit_realign_optimized))
     {
       /* We might access a full vector's worth.  */
@@ -3121,12 +3151,13 @@ vect_vfa_access_size (data_reference *dr
   return access_size;
 }
 
-/* Get the minimum alignment for all the scalar accesses that DR describes.  */
+/* Get the minimum alignment for all the scalar accesses that DR_INFO
+   describes.  */
 
 static unsigned int
-vect_vfa_align (const data_reference *dr)
+vect_vfa_align (dr_vec_info *dr_info)
 {
-  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
+  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
 }
 
 /* Function vect_no_alias_p.
@@ -3139,27 +3170,27 @@ vect_vfa_align (const data_reference *dr
    of dr_with_seg_len::{seg_len,access_size} for A and B.  */
 
 static int
-vect_compile_time_alias (struct data_reference *a, struct data_reference *b,
+vect_compile_time_alias (dr_vec_info *a, dr_vec_info *b,
 			 tree segment_length_a, tree segment_length_b,
 			 unsigned HOST_WIDE_INT access_size_a,
 			 unsigned HOST_WIDE_INT access_size_b)
 {
-  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a));
-  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b));
+  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a->dr));
+  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b->dr));
   poly_uint64 const_length_a;
   poly_uint64 const_length_b;
 
   /* For negative step, we need to adjust address range by TYPE_SIZE_UNIT
      bytes, e.g., int a[3] -> a[1] range is [a+4, a+16) instead of
      [a, a+12) */
-  if (tree_int_cst_compare (DR_STEP (a), size_zero_node) < 0)
+  if (tree_int_cst_compare (DR_STEP (a->dr), size_zero_node) < 0)
     {
       const_length_a = (-wi::to_poly_wide (segment_length_a)).force_uhwi ();
       offset_a = (offset_a + access_size_a) - const_length_a;
     }
   else
     const_length_a = tree_to_poly_uint64 (segment_length_a);
-  if (tree_int_cst_compare (DR_STEP (b), size_zero_node) < 0)
+  if (tree_int_cst_compare (DR_STEP (b->dr), size_zero_node) < 0)
     {
       const_length_b = (-wi::to_poly_wide (segment_length_b)).force_uhwi ();
       offset_b = (offset_b + access_size_b) - const_length_b;
@@ -3269,30 +3300,34 @@ vect_check_lower_bound (loop_vec_info lo
   LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
 }
 
-/* Return true if it's unlikely that the step of the vectorized form of DR
+/* Return true if it's unlikely that the step of the vectorized form of DR_INFO
    will span fewer than GAP bytes.  */
 
 static bool
-vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
+vect_small_gap_p (loop_vec_info loop_vinfo, dr_vec_info *dr_info,
+		  poly_int64 gap)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
     count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
-  return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
+  return (estimated_poly_value (gap)
+	  <= count * vect_get_scalar_dr_size (dr_info));
 }
 
-/* Return true if we know that there is no alias between DR_A and DR_B
-   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
-   *LOWER_BOUND_OUT to this N.  */
+/* Return true if we know that there is no alias between DR_INFO_A and
+   DR_INFO_B when abs (DR_STEP (DR_INFO_A->dr)) >= N for some N.
+   When returning true, set *LOWER_BOUND_OUT to this N.  */
 
 static bool
-vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
+vectorizable_with_step_bound_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b,
 				poly_uint64 *lower_bound_out)
 {
   /* Check that there is a constant gap of known sign between DR_A
      and DR_B.  */
+  data_reference *dr_a = dr_info_a->dr;
+  data_reference *dr_b = dr_info_b->dr;
   poly_int64 init_a, init_b;
   if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
       || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
@@ -3306,19 +3341,19 @@ vectorizable_with_step_bound_p (data_ref
   if (maybe_lt (init_b, init_a))
     {
       std::swap (init_a, init_b);
+      std::swap (dr_info_a, dr_info_b);
       std::swap (dr_a, dr_b);
     }
 
   /* If the two accesses could be dependent within a scalar iteration,
      make sure that we'd retain their order.  */
-  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
-      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
-					 vect_dr_stmt (dr_b)))
+  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_info_a), init_b)
+      && !vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
     return false;
 
   /* There is no alias if abs (DR_STEP) is greater than or equal to
      the bytes spanned by the combination of the two accesses.  */
-  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_b) - init_a;
+  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_info_b) - init_a;
   return true;
 }
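
In other words, once the accesses share a base and offset and
INIT_A <= INIT_B, each iteration steps past the bytes spanned by both
accesses whenever abs (step) >= INIT_B + SIZE_B - INIT_A, so no
cross-iteration alias is possible.  A sketch of that final test with
made-up names (not patch code):

    /* True if accesses at constant byte offsets INIT_A <= INIT_B,
       the second of size SIZE_B, cannot alias for a given STEP;
       LOWER_BOUND mirrors the value stored in *LOWER_BOUND_OUT
       above.  */
    static int
    no_alias_with_step_bound_p (long init_a, long init_b, long size_b,
                                long step)
    {
      long lower_bound = init_b + size_b - init_a;
      return (step < 0 ? -step : step) >= lower_bound;
    }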
 
@@ -3376,7 +3411,6 @@ vect_prune_runtime_alias_test_list (loop
     {
       int comp_res;
       poly_uint64 lower_bound;
-      struct data_reference *dr_a, *dr_b;
       tree segment_length_a, segment_length_b;
       unsigned HOST_WIDE_INT access_size_a, access_size_b;
       unsigned int align_a, align_b;
@@ -3404,25 +3438,26 @@ vect_prune_runtime_alias_test_list (loop
 	  continue;
 	}
 
-      dr_a = DDR_A (ddr);
-      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
+      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
+      stmt_vec_info stmt_info_a = dr_info_a->stmt;
 
-      dr_b = DDR_B (ddr);
-      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
+      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
+      stmt_vec_info stmt_info_b = dr_info_b->stmt;
 
       /* Skip the pair if inter-iteration dependencies are irrelevant
 	 and intra-iteration dependencies are guaranteed to be honored.  */
       if (ignore_step_p
-	  && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
-	      || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
+	  && (vect_preserves_scalar_order_p (dr_info_a, dr_info_b)
+	      || vectorizable_with_step_bound_p (dr_info_a, dr_info_b,
+						 &lower_bound)))
 	{
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
 			       "no need for alias check between ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      dump_printf (MSG_NOTE, " when VF is 1\n");
 	    }
 	  continue;
@@ -3433,20 +3468,21 @@ vect_prune_runtime_alias_test_list (loop
 	 (It might not be, for example, if the minimum step is much larger
 	 than the number of bytes handled by one vector iteration.)  */
       if (!ignore_step_p
-	  && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
-	  && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
-	  && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
-	      || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
+	  && TREE_CODE (DR_STEP (dr_info_a->dr)) != INTEGER_CST
+	  && vectorizable_with_step_bound_p (dr_info_a, dr_info_b,
+					     &lower_bound)
+	  && (vect_small_gap_p (loop_vinfo, dr_info_a, lower_bound)
+	      || vect_small_gap_p (loop_vinfo, dr_info_b, lower_bound)))
 	{
-	  bool unsigned_p = dr_known_forward_stride_p (dr_a);
+	  bool unsigned_p = dr_known_forward_stride_p (dr_info_a->dr);
 	  if (dump_enabled_p ())
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location, "no alias between ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      dump_printf (MSG_NOTE, " when the step ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_STEP (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_STEP (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " is outside ");
 	      if (unsigned_p)
 		dump_printf (MSG_NOTE, "[0");
@@ -3459,8 +3495,8 @@ vect_prune_runtime_alias_test_list (loop
 	      dump_dec (MSG_NOTE, lower_bound);
 	      dump_printf (MSG_NOTE, ")\n");
 	    }
-	  vect_check_lower_bound (loop_vinfo, DR_STEP (dr_a), unsigned_p,
-				  lower_bound);
+	  vect_check_lower_bound (loop_vinfo, DR_STEP (dr_info_a->dr),
+				  unsigned_p, lower_bound);
 	  continue;
 	}
 
@@ -3468,14 +3504,14 @@ vect_prune_runtime_alias_test_list (loop
       if (dr_group_first_a)
 	{
 	  stmt_info_a = dr_group_first_a;
-	  dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
+	  dr_info_a = STMT_VINFO_DR_INFO (stmt_info_a);
 	}
 
       stmt_vec_info dr_group_first_b = DR_GROUP_FIRST_ELEMENT (stmt_info_b);
       if (dr_group_first_b)
 	{
 	  stmt_info_b = dr_group_first_b;
-	  dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
+	  dr_info_b = STMT_VINFO_DR_INFO (stmt_info_b);
 	}
 
       if (ignore_step_p)
@@ -3485,32 +3521,33 @@ vect_prune_runtime_alias_test_list (loop
 	}
       else
 	{
-	  if (!operand_equal_p (DR_STEP (dr_a), DR_STEP (dr_b), 0))
+	  if (!operand_equal_p (DR_STEP (dr_info_a->dr),
+				DR_STEP (dr_info_b->dr), 0))
 	    length_factor = scalar_loop_iters;
 	  else
 	    length_factor = size_int (vect_factor);
-	  segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
-	  segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
+	  segment_length_a = vect_vfa_segment_size (dr_info_a, length_factor);
+	  segment_length_b = vect_vfa_segment_size (dr_info_b, length_factor);
 	}
-      access_size_a = vect_vfa_access_size (dr_a);
-      access_size_b = vect_vfa_access_size (dr_b);
-      align_a = vect_vfa_align (dr_a);
-      align_b = vect_vfa_align (dr_b);
+      access_size_a = vect_vfa_access_size (dr_info_a);
+      access_size_b = vect_vfa_access_size (dr_info_b);
+      align_a = vect_vfa_align (dr_info_a);
+      align_b = vect_vfa_align (dr_info_b);
 
-      comp_res = data_ref_compare_tree (DR_BASE_ADDRESS (dr_a),
-					DR_BASE_ADDRESS (dr_b));
+      comp_res = data_ref_compare_tree (DR_BASE_ADDRESS (dr_info_a->dr),
+					DR_BASE_ADDRESS (dr_info_b->dr));
       if (comp_res == 0)
-	comp_res = data_ref_compare_tree (DR_OFFSET (dr_a),
-					  DR_OFFSET (dr_b));
+	comp_res = data_ref_compare_tree (DR_OFFSET (dr_info_a->dr),
+					  DR_OFFSET (dr_info_b->dr));
 
       /* See whether the alias is known at compilation time.  */
       if (comp_res == 0
-	  && TREE_CODE (DR_STEP (dr_a)) == INTEGER_CST
-	  && TREE_CODE (DR_STEP (dr_b)) == INTEGER_CST
+	  && TREE_CODE (DR_STEP (dr_info_a->dr)) == INTEGER_CST
+	  && TREE_CODE (DR_STEP (dr_info_b->dr)) == INTEGER_CST
 	  && poly_int_tree_p (segment_length_a)
 	  && poly_int_tree_p (segment_length_b))
 	{
-	  int res = vect_compile_time_alias (dr_a, dr_b,
+	  int res = vect_compile_time_alias (dr_info_a, dr_info_b,
 					     segment_length_a,
 					     segment_length_b,
 					     access_size_a,
@@ -3519,9 +3556,9 @@ vect_prune_runtime_alias_test_list (loop
 	    {
 	      dump_printf_loc (MSG_NOTE, vect_location,
 			       "can tell at compile time that ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_a));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_a->dr));
 	      dump_printf (MSG_NOTE, " and ");
-	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_b));
+	      dump_generic_expr (MSG_NOTE, TDF_SLIM, DR_REF (dr_info_b->dr));
 	      if (res == 0)
 		dump_printf (MSG_NOTE, " do not alias\n");
 	      else
@@ -3541,8 +3578,10 @@ vect_prune_runtime_alias_test_list (loop
 	}
 
       dr_with_seg_len_pair_t dr_with_seg_len_pair
-	(dr_with_seg_len (dr_a, segment_length_a, access_size_a, align_a),
-	 dr_with_seg_len (dr_b, segment_length_b, access_size_b, align_b));
+	(dr_with_seg_len (dr_info_a->dr, segment_length_a,
+			  access_size_a, align_a),
+	 dr_with_seg_len (dr_info_b->dr, segment_length_b,
+			  access_size_b, align_b));
 
       /* Canonicalize pairs by sorting the two DR members.  */
       if (comp_res > 0)
@@ -4451,18 +4490,18 @@ vect_get_new_ssa_name (tree type, enum v
   return new_vect_var;
 }
 
-/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
+/* Duplicate ptr info and set alignment/misalignment on NAME from DR_INFO.  */
 
 static void
-vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
+vect_duplicate_ssa_name_ptr_info (tree name, dr_vec_info *dr_info)
 {
-  duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
-  int misalign = DR_MISALIGNMENT (dr);
+  duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr_info->dr));
+  int misalign = DR_MISALIGNMENT (dr_info);
   if (misalign == DR_MISALIGNMENT_UNKNOWN)
     mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (name));
   else
     set_ptr_info_alignment (SSA_NAME_PTR_INFO (name),
-			    DR_TARGET_ALIGNMENT (dr), misalign);
+			    DR_TARGET_ALIGNMENT (dr_info), misalign);
 }
 
 /* Function vect_create_addr_base_for_vector_ref.
@@ -4505,7 +4544,8 @@ vect_create_addr_base_for_vector_ref (st
 				      tree offset,
 				      tree byte_offset)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   const char *base_name;
   tree addr_base;
   tree dest;
@@ -4513,7 +4553,7 @@ vect_create_addr_base_for_vector_ref (st
   tree vect_ptr_type;
   tree step = TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr)));
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  innermost_loop_behavior *drb = vect_dr_behavior (dr);
+  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
 
   tree data_ref_base = unshare_expr (drb->base_address);
   tree base_offset = unshare_expr (drb->offset);
@@ -4566,7 +4606,7 @@ vect_create_addr_base_for_vector_ref (st
       && TREE_CODE (addr_base) == SSA_NAME
       && !SSA_NAME_PTR_INFO (addr_base))
     {
-      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
+      vect_duplicate_ssa_name_ptr_info (addr_base, dr_info);
       if (offset || byte_offset)
 	mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
     }
@@ -4658,7 +4698,8 @@ vect_create_data_ref_ptr (stmt_vec_info
   edge pe = NULL;
   basic_block new_bb;
   tree aggr_ptr_init;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   tree aptr;
   gimple_stmt_iterator incr_gsi;
   bool insert_after;
@@ -4687,7 +4728,7 @@ vect_create_data_ref_ptr (stmt_vec_info
 
   /* Check the step (evolution) of the load in LOOP, and record
      whether it's invariant.  */
-  step = vect_dr_behavior (dr)->step;
+  step = vect_dr_behavior (dr_info)->step;
   if (integer_zerop (step))
     *inv_p = true;
   else
@@ -4832,8 +4873,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -4862,8 +4903,8 @@ vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -5406,7 +5447,8 @@ vect_setup_realignment (stmt_vec_info st
 {
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  struct data_reference *dr = dr_info->dr;
   struct loop *loop = NULL;
   edge pe = NULL;
   tree scalar_dest = gimple_assign_lhs (stmt_info->stmt);
@@ -5519,7 +5561,7 @@ vect_setup_realignment (stmt_vec_info st
 	new_temp = copy_ssa_name (ptr);
       else
 	new_temp = make_ssa_name (TREE_TYPE (ptr));
-      unsigned int align = DR_TARGET_ALIGNMENT (dr);
+      unsigned int align = DR_TARGET_ALIGNMENT (dr_info);
       new_stmt = gimple_build_assign
 		   (new_temp, BIT_AND_EXPR, ptr,
 		    build_int_cst (TREE_TYPE (ptr), -(HOST_WIDE_INT) align));
@@ -6421,24 +6463,25 @@ vect_can_force_dr_alignment_p (const_tre
 }
 
 
-/* Return whether the data reference DR is supported with respect to its
+/* Return whether the data reference DR_INFO is supported with respect to its
    alignment.
    If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
    it is aligned, i.e., check if it is possible to vectorize it with different
    alignment.  */
 
 enum dr_alignment_support
-vect_supportable_dr_alignment (struct data_reference *dr,
+vect_supportable_dr_alignment (dr_vec_info *dr_info,
                                bool check_aligned_accesses)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = dr_info->dr;
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   machine_mode mode = TYPE_MODE (vectype);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *vect_loop = NULL;
   bool nested_in_vect_loop = false;
 
-  if (aligned_access_p (dr) && !check_aligned_accesses)
+  if (aligned_access_p (dr_info) && !check_aligned_accesses)
     return dr_aligned;
 
   /* For now assume all conditional loads/stores support unaligned
@@ -6546,11 +6589,11 @@ vect_supportable_dr_alignment (struct da
 	  else
 	    return dr_explicit_realign_optimized;
 	}
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
       if (targetm.vectorize.support_vector_misalignment
-	    (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	    (mode, type, DR_MISALIGNMENT (dr_info), is_packed))
 	/* Can't software pipeline the loads, but can at least do them.  */
 	return dr_unaligned_supported;
     }
@@ -6559,11 +6602,11 @@ vect_supportable_dr_alignment (struct da
       bool is_packed = false;
       tree type = (TREE_TYPE (DR_REF (dr)));
 
-      if (!known_alignment_for_access_p (dr))
+      if (!known_alignment_for_access_p (dr_info))
 	is_packed = not_size_aligned (DR_REF (dr));
 
      if (targetm.vectorize.support_vector_misalignment
-	   (mode, type, DR_MISALIGNMENT (dr), is_packed))
+	   (mode, type, DR_MISALIGNMENT (dr_info), is_packed))
        return dr_unaligned_supported;
     }
 
Index: gcc/tree-vect-loop-manip.c
===================================================================
--- gcc/tree-vect-loop-manip.c	2018-07-26 11:28:07.929273995 +0100
+++ gcc/tree-vect-loop-manip.c	2018-07-26 11:42:19.031663762 +0100
@@ -1560,14 +1560,15 @@ vect_update_ivs_after_vectorizer (loop_v
 static tree
 get_misalign_in_elems (gimple **seq, loop_vec_info loop_vinfo)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
   gcc_assert (target_align != 0);
 
-  bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
+  bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					size_zero_node) < 0;
   tree offset = (negative
 		 ? size_int (-TYPE_VECTOR_SUBPARTS (vectype) + 1)
 		 : size_zero_node);
@@ -1626,14 +1627,14 @@ get_misalign_in_elems (gimple **seq, loo
 vect_gen_prolog_loop_niters (loop_vec_info loop_vinfo,
 			     basic_block bb, int *bound)
 {
-  struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
+  dr_vec_info *dr_info = DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
   tree var;
   tree niters_type = TREE_TYPE (LOOP_VINFO_NITERS (loop_vinfo));
   gimple_seq stmts = NULL, new_stmts = NULL;
   tree iters, iters_name;
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  stmt_vec_info stmt_info = dr_info->stmt;
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
-  unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
+  unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
 
   if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) > 0)
     {
@@ -1658,7 +1659,8 @@ vect_gen_prolog_loop_niters (loop_vec_in
 
       /* Create:  (niters_type) ((align_in_elems - misalign_in_elems)
 				 & (align_in_elems - 1)).  */
-      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
+      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
+					    size_zero_node) < 0;
       if (negative)
 	iters = fold_build2 (MINUS_EXPR, type, misalign_in_elems,
 			     align_in_elems_tree);
Index: gcc/tree-vect-loop.c
===================================================================
--- gcc/tree-vect-loop.c	2018-07-26 11:28:12.000000000 +0100
+++ gcc/tree-vect-loop.c	2018-07-26 11:42:19.031663762 +0100
@@ -2142,8 +2142,9 @@ vect_analyze_loop_2 (loop_vec_info loop_
 	  /* Niters for peeled prolog loop.  */
 	  if (LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) < 0)
 	    {
-	      struct data_reference *dr = LOOP_VINFO_UNALIGNED_DR (loop_vinfo);
-	      tree vectype = STMT_VINFO_VECTYPE (vect_dr_stmt (dr));
+	      dr_vec_info *dr_info
+		= DR_VECT_AUX (LOOP_VINFO_UNALIGNED_DR (loop_vinfo));
+	      tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
 	      niters_th += TYPE_VECTOR_SUBPARTS (vectype) - 1;
 	    }
 	  else
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-26 11:30:56.197256524 +0100
+++ gcc/tree-vect-stmts.c	2018-07-26 11:42:19.035663718 +0100
@@ -1057,8 +1057,9 @@ vect_get_store_cost (stmt_vec_info stmt_
 		     unsigned int *inside_cost,
 		     stmt_vector_for_cost *body_cost_vec)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (dr_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1079,7 +1080,8 @@ vect_get_store_cost (stmt_vec_info stmt_
         /* Here, we assign an additional cost for the unaligned store.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_store, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  DR_MISALIGNMENT (dr_info),
+					  vect_body);
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
                            "vect_model_store_cost: unaligned supported by "
@@ -1236,8 +1238,9 @@ vect_get_load_cost (stmt_vec_info stmt_i
 		    stmt_vector_for_cost *body_cost_vec,
 		    bool record_prologue_costs)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (dr_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1257,7 +1260,8 @@ vect_get_load_cost (stmt_vec_info stmt_i
         /* Here, we assign an additional cost for the unaligned load.  */
 	*inside_cost += record_stmt_cost (body_cost_vec, ncopies,
 					  unaligned_load, stmt_info,
-					  DR_MISALIGNMENT (dr), vect_body);
+					  DR_MISALIGNMENT (dr_info),
+					  vect_body);
 
         if (dump_enabled_p ())
           dump_printf_loc (MSG_NOTE, vect_location,
@@ -1975,7 +1979,8 @@ vect_truncate_gather_scatter_offset (stm
 				     loop_vec_info loop_vinfo, bool masked_p,
 				     gather_scatter_info *gs_info)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  data_reference *dr = dr_info->dr;
   tree step = DR_STEP (dr);
   if (TREE_CODE (step) != INTEGER_CST)
     {
@@ -2003,7 +2008,7 @@ vect_truncate_gather_scatter_offset (stm
     count = max_iters.to_shwi ();
 
   /* Try scales of 1 and the element size.  */
-  int scales[] = { 1, vect_get_scalar_dr_size (dr) };
+  int scales[] = { 1, vect_get_scalar_dr_size (dr_info) };
   wi::overflow_type overflow = wi::OVF_NONE;
   for (int i = 0; i < 2; ++i)
     {
@@ -2102,8 +2107,8 @@ vect_use_strided_gather_scatters_p (stmt
 static int
 compare_step_with_zero (stmt_vec_info stmt_info)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  return tree_int_cst_compare (vect_dr_behavior (dr)->step,
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
+  return tree_int_cst_compare (vect_dr_behavior (dr_info)->step,
 			       size_zero_node);
 }
 
@@ -2166,7 +2171,7 @@ get_group_load_store_type (stmt_vec_info
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = loop_vinfo ? LOOP_VINFO_LOOP (loop_vinfo) : NULL;
   stmt_vec_info first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-  data_reference *first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
   unsigned int group_size = DR_GROUP_SIZE (first_stmt_info);
   bool single_element_p = (stmt_info == first_stmt_info
 			   && !DR_GROUP_NEXT_ELEMENT (stmt_info));
@@ -2218,8 +2223,8 @@ get_group_load_store_type (stmt_vec_info
 	     be a multiple of B and so we are guaranteed to access a
 	     non-gap element in the same B-sized block.  */
 	  if (overrun_p
-	      && gap < (vect_known_alignment_in_bytes (first_dr)
-			/ vect_get_scalar_dr_size (first_dr)))
+	      && gap < (vect_known_alignment_in_bytes (first_dr_info)
+			/ vect_get_scalar_dr_size (first_dr_info)))
 	    overrun_p = false;
 	  if (overrun_p && !can_overrun_p)
 	    {
@@ -2246,8 +2251,8 @@ get_group_load_store_type (stmt_vec_info
 	 same B-sized block.  */
       if (would_overrun_p
 	  && !masked_p
-	  && gap < (vect_known_alignment_in_bytes (first_dr)
-		    / vect_get_scalar_dr_size (first_dr)))
+	  && gap < (vect_known_alignment_in_bytes (first_dr_info)
+		    / vect_get_scalar_dr_size (first_dr_info)))
 	would_overrun_p = false;
 
       if (!STMT_VINFO_STRIDED_P (stmt_info)
@@ -2339,7 +2344,7 @@ get_negative_load_store_type (stmt_vec_i
 			      vec_load_store_type vls_type,
 			      unsigned int ncopies)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
   dr_alignment_support alignment_support_scheme;
 
   if (ncopies > 1)
@@ -2350,7 +2355,7 @@ get_negative_load_store_type (stmt_vec_i
       return VMAT_ELEMENTWISE;
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  alignment_support_scheme = vect_supportable_dr_alignment (dr_info, false);
   if (alignment_support_scheme != dr_aligned
       && alignment_support_scheme != dr_unaligned_supported)
     {
@@ -2923,19 +2928,19 @@ vect_get_strided_load_store_ops (stmt_ve
 }
 
 /* Return the amount that should be added to a vector pointer to move
-   to the next or previous copy of AGGR_TYPE.  DR is the data reference
+   to the next or previous copy of AGGR_TYPE.  DR_INFO is the data reference
    being vectorized and MEMORY_ACCESS_TYPE describes the type of
    vectorization.  */
 
 static tree
-vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
+vect_get_data_ptr_increment (dr_vec_info *dr_info, tree aggr_type,
 			     vect_memory_access_type memory_access_type)
 {
   if (memory_access_type == VMAT_INVARIANT)
     return size_zero_node;
 
   tree iv_step = TYPE_SIZE_UNIT (aggr_type);
-  tree step = vect_dr_behavior (dr)->step;
+  tree step = vect_dr_behavior (dr_info)->step;
   if (tree_int_cst_sgn (step) == -1)
     iv_step = fold_build1 (NEGATE_EXPR, TREE_TYPE (iv_step), iv_step);
   return iv_step;
@@ -6169,19 +6174,20 @@ vectorizable_operation (stmt_vec_info st
   return true;
 }
 
-/* A helper function to ensure data reference DR's base alignment.  */
+/* A helper function to ensure data reference DR_INFO's base alignment.  */
 
 static void
-ensure_base_align (struct data_reference *dr)
+ensure_base_align (dr_vec_info *dr_info)
 {
-  if (DR_VECT_AUX (dr)->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
+  if (dr_info->misalignment == DR_MISALIGNMENT_UNINITIALIZED)
     return;
 
-  if (DR_VECT_AUX (dr)->base_misaligned)
+  if (dr_info->base_misaligned)
     {
-      tree base_decl = DR_VECT_AUX (dr)->base_decl;
+      tree base_decl = dr_info->base_decl;
 
-      unsigned int align_base_to = DR_TARGET_ALIGNMENT (dr) * BITS_PER_UNIT;
+      unsigned int align_base_to
+	= DR_TARGET_ALIGNMENT (dr_info) * BITS_PER_UNIT;
 
       if (decl_in_symtab_p (base_decl))
 	symtab_node::get (base_decl)->increase_alignment (align_base_to);
@@ -6190,7 +6196,7 @@ ensure_base_align (struct data_reference
 	  SET_DECL_ALIGN (base_decl, align_base_to);
           DECL_USER_ALIGN (base_decl) = 1;
 	}
-      DR_VECT_AUX (dr)->base_misaligned = false;
+      dr_info->base_misaligned = false;
     }
 }
 
@@ -6239,7 +6245,6 @@ vectorizable_store (stmt_vec_info stmt_i
   tree data_ref;
   tree op;
   tree vec_oprnd = NULL_TREE;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -6401,19 +6406,20 @@ vectorizable_store (stmt_vec_info stmt_i
 	return false;
     }
 
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info), *first_dr_info = NULL;
   grouped_store = (STMT_VINFO_GROUPED_ACCESS (stmt_info)
 		   && memory_access_type != VMAT_GATHER_SCATTER
 		   && (slp || memory_access_type != VMAT_CONTIGUOUS));
   if (grouped_store)
     {
       first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+      first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
       group_size = DR_GROUP_SIZE (first_stmt_info);
     }
   else
     {
       first_stmt_info = stmt_info;
-      first_dr = dr;
+      first_dr_info = dr_info;
       group_size = vec_num = 1;
     }
 
@@ -6435,7 +6441,7 @@ vectorizable_store (stmt_vec_info stmt_i
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (dr_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -6614,7 +6620,7 @@ vectorizable_store (stmt_vec_info stmt_i
 	  first_stmt_info = SLP_TREE_SCALAR_STMTS (slp_node)[0];
 	  gcc_assert (DR_GROUP_FIRST_ELEMENT (first_stmt_info)
 		      == first_stmt_info);
-	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+	  first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
 	  op = vect_get_store_rhs (first_stmt_info);
         } 
       else
@@ -6625,7 +6631,7 @@ vectorizable_store (stmt_vec_info stmt_i
       ref_type = get_group_alias_ptr_type (first_stmt_info);
     }
   else
-    ref_type = reference_alias_ptr_type (DR_REF (first_dr));
+    ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
 
   if (dump_enabled_p ())
     dump_printf_loc (MSG_NOTE, vect_location,
@@ -6651,11 +6657,11 @@ vectorizable_store (stmt_vec_info stmt_i
 
       stride_base
 	= fold_build_pointer_plus
-	    (DR_BASE_ADDRESS (first_dr),
+	    (DR_BASE_ADDRESS (first_dr_info->dr),
 	     size_binop (PLUS_EXPR,
-			 convert_to_ptrofftype (DR_OFFSET (first_dr)),
-			 convert_to_ptrofftype (DR_INIT (first_dr))));
-      stride_step = fold_convert (sizetype, DR_STEP (first_dr));
+			 convert_to_ptrofftype (DR_OFFSET (first_dr_info->dr)),
+			 convert_to_ptrofftype (DR_INIT (first_dr_info->dr))));
+      stride_step = fold_convert (sizetype, DR_STEP (first_dr_info->dr));
 
       /* For a store with loop-invariant (but other than power-of-2)
          stride (i.e. not a grouped access) like so:
@@ -6835,7 +6841,7 @@ vectorizable_store (stmt_vec_info stmt_i
 						 group_el * elsz);
 		  newref = build2 (MEM_REF, ltype,
 				   running_off, this_off);
-		  vect_copy_ref_info (newref, DR_REF (first_dr));
+		  vect_copy_ref_info (newref, DR_REF (first_dr_info->dr));
 
 		  /* And store it to *running_off.  */
 		  assign = gimple_build_assign (newref, elem);
@@ -6878,7 +6884,8 @@ vectorizable_store (stmt_vec_info stmt_i
   auto_vec<tree> dr_chain (group_size);
   oprnds.create (group_size);
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_dr_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -6916,7 +6923,8 @@ vectorizable_store (stmt_vec_info stmt_i
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (dr_info, aggr_type,
+					  memory_access_type);
     }
 
   if (mask)
@@ -7011,14 +7019,14 @@ vectorizable_store (stmt_vec_info stmt_i
 	  bool simd_lane_access_p
 	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info);
 	  if (simd_lane_access_p
-	      && TREE_CODE (DR_BASE_ADDRESS (first_dr)) == ADDR_EXPR
-	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr), 0))
-	      && integer_zerop (DR_OFFSET (first_dr))
-	      && integer_zerop (DR_INIT (first_dr))
+	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
+	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
+	      && integer_zerop (DR_OFFSET (first_dr_info->dr))
+	      && integer_zerop (DR_INIT (first_dr_info->dr))
 	      && alias_sets_conflict_p (get_alias_set (aggr_type),
 					get_alias_set (TREE_TYPE (ref_type))))
 	    {
-	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
+	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
@@ -7175,16 +7183,16 @@ vectorizable_store (stmt_vec_info stmt_i
 		   vect_permute_store_chain().  */
 		vec_oprnd = result_chain[i];
 
-	      align = DR_TARGET_ALIGNMENT (first_dr);
-	      if (aligned_access_p (first_dr))
+	      align = DR_TARGET_ALIGNMENT (first_dr_info);
+	      if (aligned_access_p (first_dr_info))
 		misalign = 0;
-	      else if (DR_MISALIGNMENT (first_dr) == -1)
+	      else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		{
-		  align = dr_alignment (vect_dr_behavior (first_dr));
+		  align = dr_alignment (vect_dr_behavior (first_dr_info));
 		  misalign = 0;
 		}
 	      else
-		misalign = DR_MISALIGNMENT (first_dr);
+		misalign = DR_MISALIGNMENT (first_dr_info);
 	      if (dataref_offset == NULL_TREE
 		  && TREE_CODE (dataref_ptr) == SSA_NAME)
 		set_ptr_info_alignment (get_ptr_info (dataref_ptr), align,
@@ -7227,9 +7235,9 @@ vectorizable_store (stmt_vec_info stmt_i
 					  dataref_offset
 					  ? dataref_offset
 					  : build_int_cst (ref_type, 0));
-		  if (aligned_access_p (first_dr))
+		  if (aligned_access_p (first_dr_info))
 		    ;
-		  else if (DR_MISALIGNMENT (first_dr) == -1)
+		  else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		    TREE_TYPE (data_ref)
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    align * BITS_PER_UNIT);
@@ -7237,7 +7245,7 @@ vectorizable_store (stmt_vec_info stmt_i
 		    TREE_TYPE (data_ref)
 		      = build_aligned_type (TREE_TYPE (data_ref),
 					    TYPE_ALIGN (elem_type));
-		  vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		  vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		  gassign *new_stmt
 		    = gimple_build_assign (data_ref, vec_oprnd);
 		  new_stmt_info
@@ -7400,7 +7408,6 @@ vectorizable_load (stmt_vec_info stmt_in
   struct loop *loop = NULL;
   struct loop *containing_loop = gimple_bb (stmt_info->stmt)->loop_father;
   bool nested_in_vect_loop = false;
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info), *first_dr = NULL;
   tree elem_type;
   tree new_temp;
   machine_mode mode;
@@ -7663,7 +7670,8 @@ vectorizable_load (stmt_vec_info stmt_in
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info), *first_dr_info = NULL;
+  ensure_base_align (dr_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -7692,12 +7700,12 @@ vectorizable_load (stmt_vec_info stmt_in
       if (grouped_load)
 	{
 	  first_stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
-	  first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+	  first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
 	}
       else
 	{
 	  first_stmt_info = stmt_info;
-	  first_dr = dr;
+	  first_dr_info = dr_info;
 	}
       if (slp && grouped_load)
 	{
@@ -7712,16 +7720,16 @@ vectorizable_load (stmt_vec_info stmt_in
 		 * vect_get_place_in_interleaving_chain (stmt_info,
 							 first_stmt_info));
 	  group_size = 1;
-	  ref_type = reference_alias_ptr_type (DR_REF (dr));
+	  ref_type = reference_alias_ptr_type (DR_REF (dr_info->dr));
 	}
 
       stride_base
 	= fold_build_pointer_plus
-	    (DR_BASE_ADDRESS (first_dr),
+	    (DR_BASE_ADDRESS (first_dr_info->dr),
 	     size_binop (PLUS_EXPR,
-			 convert_to_ptrofftype (DR_OFFSET (first_dr)),
-			 convert_to_ptrofftype (DR_INIT (first_dr))));
-      stride_step = fold_convert (sizetype, DR_STEP (first_dr));
+			 convert_to_ptrofftype (DR_OFFSET (first_dr_info->dr)),
+			 convert_to_ptrofftype (DR_INIT (first_dr_info->dr))));
+      stride_step = fold_convert (sizetype, DR_STEP (first_dr_info->dr));
 
       /* For a load with loop-invariant (but other than power-of-2)
          stride (i.e. not a grouped access) like so:
@@ -7850,7 +7858,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	      tree this_off = build_int_cst (TREE_TYPE (alias_off),
 					     group_el * elsz + cst_offset);
 	      tree data_ref = build2 (MEM_REF, ltype, running_off, this_off);
-	      vect_copy_ref_info (data_ref, DR_REF (first_dr));
+	      vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 	      gassign *new_stmt
 		= gimple_build_assign (make_ssa_name (ltype), data_ref);
 	      new_stmt_info
@@ -7946,7 +7954,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	  *vec_stmt = STMT_VINFO_VEC_STMT (stmt_info);
 	  return true;
 	}
-      first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+      first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
       group_gap_adj = 0;
 
       /* VEC_NUM is the number of vect stmts to be created for this group.  */
@@ -7980,13 +7988,14 @@ vectorizable_load (stmt_vec_info stmt_in
   else
     {
       first_stmt_info = stmt_info;
-      first_dr = dr;
+      first_dr_info = dr_info;
       group_size = vec_num = 1;
       group_gap_adj = 0;
-      ref_type = reference_alias_ptr_type (DR_REF (first_dr));
+      ref_type = reference_alias_ptr_type (DR_REF (first_dr_info->dr));
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_dr_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -8105,7 +8114,7 @@ vectorizable_load (stmt_vec_info stmt_in
      nested within an outer-loop that is being vectorized.  */
 
   if (nested_in_vect_loop
-      && !multiple_p (DR_STEP_ALIGNMENT (dr),
+      && !multiple_p (DR_STEP_ALIGNMENT (dr_info->dr),
 		      GET_MODE_SIZE (TYPE_MODE (vectype))))
     {
       gcc_assert (alignment_support_scheme != dr_explicit_realign_optimized);
@@ -8151,7 +8160,8 @@ vectorizable_load (stmt_vec_info stmt_in
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (dr_info, aggr_type,
+					  memory_access_type);
     }
 
   tree vec_mask = NULL_TREE;
@@ -8166,16 +8176,16 @@ vectorizable_load (stmt_vec_info stmt_in
 	  bool simd_lane_access_p
 	    = STMT_VINFO_SIMD_LANE_ACCESS_P (stmt_info);
 	  if (simd_lane_access_p
-	      && TREE_CODE (DR_BASE_ADDRESS (first_dr)) == ADDR_EXPR
-	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr), 0))
-	      && integer_zerop (DR_OFFSET (first_dr))
-	      && integer_zerop (DR_INIT (first_dr))
+	      && TREE_CODE (DR_BASE_ADDRESS (first_dr_info->dr)) == ADDR_EXPR
+	      && VAR_P (TREE_OPERAND (DR_BASE_ADDRESS (first_dr_info->dr), 0))
+	      && integer_zerop (DR_OFFSET (first_dr_info->dr))
+	      && integer_zerop (DR_INIT (first_dr_info->dr))
 	      && alias_sets_conflict_p (get_alias_set (aggr_type),
 					get_alias_set (TREE_TYPE (ref_type)))
 	      && (alignment_support_scheme == dr_aligned
 		  || alignment_support_scheme == dr_unaligned_supported))
 	    {
-	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr));
+	      dataref_ptr = unshare_expr (DR_BASE_ADDRESS (first_dr_info->dr));
 	      dataref_offset = build_int_cst (ref_type, 0);
 	      inv_p = false;
 	    }
@@ -8190,10 +8200,11 @@ vectorizable_load (stmt_vec_info stmt_in
 	      /* Adjust the pointer by the difference to first_stmt.  */
 	      data_reference_p ptrdr
 		= STMT_VINFO_DATA_REF (first_stmt_info_for_drptr);
-	      tree diff = fold_convert (sizetype,
-					size_binop (MINUS_EXPR,
-						    DR_INIT (first_dr),
-						    DR_INIT (ptrdr)));
+	      tree diff
+		= fold_convert (sizetype,
+				size_binop (MINUS_EXPR,
+					    DR_INIT (first_dr_info->dr),
+					    DR_INIT (ptrdr)));
 	      dataref_ptr = bump_vector_ptr (dataref_ptr, ptr_incr, gsi,
 					     stmt_info, diff);
 	    }
@@ -8326,19 +8337,20 @@ vectorizable_load (stmt_vec_info stmt_in
 			break;
 		      }
 
-		    align = DR_TARGET_ALIGNMENT (dr);
+		    align = DR_TARGET_ALIGNMENT (dr_info);
 		    if (alignment_support_scheme == dr_aligned)
 		      {
-			gcc_assert (aligned_access_p (first_dr));
+			gcc_assert (aligned_access_p (first_dr_info));
 			misalign = 0;
 		      }
-		    else if (DR_MISALIGNMENT (first_dr) == -1)
+		    else if (DR_MISALIGNMENT (first_dr_info) == -1)
 		      {
-			align = dr_alignment (vect_dr_behavior (first_dr));
+			align = dr_alignment
+			  (vect_dr_behavior (first_dr_info));
 			misalign = 0;
 		      }
 		    else
-		      misalign = DR_MISALIGNMENT (first_dr);
+		      misalign = DR_MISALIGNMENT (first_dr_info);
 		    if (dataref_offset == NULL_TREE
 			&& TREE_CODE (dataref_ptr) == SSA_NAME)
 		      set_ptr_info_alignment (get_ptr_info (dataref_ptr),
@@ -8365,7 +8377,7 @@ vectorizable_load (stmt_vec_info stmt_in
 					 : build_int_cst (ref_type, 0));
 			if (alignment_support_scheme == dr_aligned)
 			  ;
-			else if (DR_MISALIGNMENT (first_dr) == -1)
+			else if (DR_MISALIGNMENT (first_dr_info) == -1)
 			  TREE_TYPE (data_ref)
 			    = build_aligned_type (TREE_TYPE (data_ref),
 						  align * BITS_PER_UNIT);
@@ -8392,7 +8404,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		      ptr = copy_ssa_name (dataref_ptr);
 		    else
 		      ptr = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr_info);
 		    new_stmt = gimple_build_assign
 				 (ptr, BIT_AND_EXPR, dataref_ptr,
 				  build_int_cst
@@ -8402,7 +8414,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		    data_ref
 		      = build2 (MEM_REF, vectype, ptr,
 				build_int_cst (ref_type, 0));
-		    vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		    vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		    vec_dest = vect_create_destination_var (scalar_dest,
 							    vectype);
 		    new_stmt = gimple_build_assign (vec_dest, data_ref);
@@ -8436,7 +8448,7 @@ vectorizable_load (stmt_vec_info stmt_in
 		      new_temp = copy_ssa_name (dataref_ptr);
 		    else
 		      new_temp = make_ssa_name (TREE_TYPE (dataref_ptr));
-		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr);
+		    unsigned int align = DR_TARGET_ALIGNMENT (first_dr_info);
 		    new_stmt = gimple_build_assign
 		      (new_temp, BIT_AND_EXPR, dataref_ptr,
 		       build_int_cst (TREE_TYPE (dataref_ptr),
@@ -8454,7 +8466,7 @@ vectorizable_load (stmt_vec_info stmt_in
 	      /* DATA_REF is null if we've already built the statement.  */
 	      if (data_ref)
 		{
-		  vect_copy_ref_info (data_ref, DR_REF (first_dr));
+		  vect_copy_ref_info (data_ref, DR_REF (first_dr_info->dr));
 		  new_stmt = gimple_build_assign (vec_dest, data_ref);
 		}
 	      new_temp = make_ssa_name (vec_dest, new_stmt);
Richard Biener July 26, 2018, 11:13 a.m. UTC | #4
On Thu, Jul 26, 2018 at 1:05 PM Richard Sandiford
<richard.sandiford@arm.com> wrote:
>
> Richard Sandiford <richard.sandiford@arm.com> writes:
> > Richard Biener <richard.guenther@gmail.com> writes:
> >> On Tue, Jul 24, 2018 at 12:08 PM Richard Sandiford
> >> <richard.sandiford@arm.com> wrote:
> >>>
> >>> This patch makes various routines (mostly in tree-vect-data-refs.c)
> >>> take stmt_vec_infos rather than data_references.  The affected routines
> >>> are really dealing with the way that an access is going to vectorised
> >>> for a particular stmt_vec_info, rather than with the original scalar
> >>> access described by the data_reference.
> >>
> >> Similar.  Doesn't it make more sense to pass both stmt_info and DR to
> >> the functions?
> >
> > Not sure.  If we...
> >
> >> We currently cannot handle aggregate copies in the to-be-vectorized IL
> >> but rely on SRA and friends to elide those.  That's the only two-DR
> >> stmt I can think of for vectorization.  Maybe aggregate by-value / return
> >> function calls with OMP SIMD if that supports this somehow.
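(A hedged illustration of the two-DR statement described above, not taken
from the patch: an aggregate copy is a single gimple statement carrying
both a read and a write data reference, which SRA is expected to
scalarize away before vectorization.)

  struct s { int x[4]; };
  struct s a, b;

  void
  f (void)
  {
    a = b;  /* One stmt, two data references: a load of b, a store to a.  */
  }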
> >
> > ...did this then I don't think a data_reference would be the natural
> > way of identifying a DR within a stmt_vec_info.  Presumably the
> > stmt_vec_info would need multiple STMT_VINFO_DATA_REFS and dr_auxs.
> > If both of those were vectors then a (stmt_vec_info, index) pair
> > might make more sense than (stmt_vec_info, data_reference).
> >
> > Alternatively we could move STMT_VINFO_DATA_REF into dataref_aux,
> > so that there's a back-pointer to the DR, add a stmt_vec_info
> > field to dataref_aux too, and then use dataref_aux instead of
> > stmt_vec_info as the key.
>
> New patch 37/46 does that.  The one below goes through and uses
> dr_vec_info instead of data_reference in code that is dealing
> with the way that a reference is going to be vectorised.
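(A minimal sketch of the dr_vec_info layout being relied on here; the
field names are taken from the hunks below, not the complete GCC
definition:)

  struct dr_vec_info
  {
    data_reference *dr;            /* Back-pointer to the scalar DR.  */
    stmt_vec_info stmt;            /* Statement containing the access.  */
    int misalignment;              /* Value behind DR_MISALIGNMENT.  */
    unsigned int target_alignment; /* Value behind DR_TARGET_ALIGNMENT.  */
    bool base_misaligned;          /* Base alignment needs increasing.  */
    tree base_decl;                /* Decl to realign if base_misaligned.  */
  };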

OK.

> Thanks,
> Richard
>
>
> 2018-07-26  Richard Sandiford  <richard.sandiford@arm.com>
>
> gcc/
>         * tree-vectorizer.h (set_dr_misalignment, dr_misalignment)
>         (DR_TARGET_ALIGNMENT, aligned_access_p, known_alignment_for_access_p)
>         (vect_known_alignment_in_bytes, vect_dr_behavior)
>         (vect_get_scalar_dr_size): Take references as dr_vec_infos
>         instead of data_references.  Update calls to other routines for
>         which the same change has been made.
>         * tree-vect-data-refs.c (vect_preserves_scalar_order_p): Take
>         dr_vec_infos instead of stmt_vec_infos.
>         (vect_analyze_data_ref_dependence): Update call accordingly.
>         (vect_slp_analyze_data_ref_dependence)
>         (vect_record_base_alignments): Use DR_VECT_AUX.
>         (vect_calculate_target_alignment, vect_compute_data_ref_alignment)
>         (vect_update_misalignment_for_peel, verify_data_ref_alignment)
>         (vector_alignment_reachable_p, vect_get_data_access_cost)
>         (vect_peeling_supportable, vect_analyze_group_access_1)
>         (vect_analyze_group_access, vect_analyze_data_ref_access)
>         (vect_vfa_segment_size, vect_vfa_access_size, vect_vfa_align)
>         (vect_compile_time_alias, vect_small_gap_p)
>         (vectorizable_with_step_bound_p, vect_duplicate_ssa_name_ptr_info)
>         (vect_supportable_dr_alignment): Take references as dr_vec_infos
>         instead of data_references.  Update calls to other routines for
>         which the same change has been made.
>         (vect_verify_datarefs_alignment, vect_get_peeling_costs_all_drs)
>         (vect_find_same_alignment_drs, vect_analyze_data_refs_alignment)
>         (vect_slp_analyze_and_verify_node_alignment)
>         (vect_analyze_data_ref_accesses, vect_prune_runtime_alias_test_list)
>         (vect_create_addr_base_for_vector_ref, vect_create_data_ref_ptr)
>         (vect_setup_realignment): Use dr_vec_infos.  Update calls after
>         above changes.
>         (_vect_peel_info::dr): Replace with...
>         (_vect_peel_info::dr_info): ...this new field.
>         (vect_peeling_hash_get_most_frequent)
>         (vect_peeling_hash_choose_best_peeling): Update accordingly.
>         (vect_peeling_hash_get_lowest_cost)
>         (vect_enhance_data_refs_alignment): Likewise.  Update calls to other
>         routines for which the same change has been made.
>         (vect_peeling_hash_insert): Likewise.  Take a dr_vec_info instead of a
>         data_reference.
>         * tree-vect-loop-manip.c (get_misalign_in_elems)
>         (vect_gen_prolog_loop_niters): Use dr_vec_infos.  Update calls after
>         above changes.
>         * tree-vect-loop.c (vect_analyze_loop_2): Likewise.
>         * tree-vect-stmts.c (vect_get_store_cost, vect_get_load_cost)
>         (vect_truncate_gather_scatter_offset, compare_step_with_zero)
>         (get_group_load_store_type, get_negative_load_store_type)
>         (vect_get_data_ptr_increment, vectorizable_store)
>         (vectorizable_load): Likewise.
>         (ensure_base_align): Take a dr_vec_info instead of a data_reference.
>         Update calls to other routines for which the same change has been made.
>
> Index: gcc/tree-vectorizer.h
> ===================================================================
> --- gcc/tree-vectorizer.h       2018-07-26 11:30:56.197256524 +0100
> +++ gcc/tree-vectorizer.h       2018-07-26 11:42:19.035663718 +0100
> @@ -1294,15 +1294,15 @@ #define DR_MISALIGNMENT_UNKNOWN (-1)
>  #define DR_MISALIGNMENT_UNINITIALIZED (-2)
>
>  inline void
> -set_dr_misalignment (struct data_reference *dr, int val)
> +set_dr_misalignment (dr_vec_info *dr_info, int val)
>  {
> -  DR_VECT_AUX (dr)->misalignment = val;
> +  dr_info->misalignment = val;
>  }
>
>  inline int
> -dr_misalignment (struct data_reference *dr)
> +dr_misalignment (dr_vec_info *dr_info)
>  {
> -  int misalign = DR_VECT_AUX (dr)->misalignment;
> +  int misalign = dr_info->misalignment;
>    gcc_assert (misalign != DR_MISALIGNMENT_UNINITIALIZED);
>    return misalign;
>  }
> @@ -1313,52 +1313,51 @@ #define DR_MISALIGNMENT(DR) dr_misalignm
>  #define SET_DR_MISALIGNMENT(DR, VAL) set_dr_misalignment (DR, VAL)
>
>  /* Only defined once DR_MISALIGNMENT is defined.  */
> -#define DR_TARGET_ALIGNMENT(DR) DR_VECT_AUX (DR)->target_alignment
> +#define DR_TARGET_ALIGNMENT(DR) ((DR)->target_alignment)
>
> -/* Return true if data access DR is aligned to its target alignment
> +/* Return true if data access DR_INFO is aligned to its target alignment
>     (which may be less than a full vector).  */
>
>  static inline bool
> -aligned_access_p (struct data_reference *data_ref_info)
> +aligned_access_p (dr_vec_info *dr_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) == 0);
> +  return (DR_MISALIGNMENT (dr_info) == 0);
>  }
>
>  /* Return TRUE if the alignment of the data access is known, and FALSE
>     otherwise.  */
>
>  static inline bool
> -known_alignment_for_access_p (struct data_reference *data_ref_info)
> +known_alignment_for_access_p (dr_vec_info *dr_info)
>  {
> -  return (DR_MISALIGNMENT (data_ref_info) != DR_MISALIGNMENT_UNKNOWN);
> +  return (DR_MISALIGNMENT (dr_info) != DR_MISALIGNMENT_UNKNOWN);
>  }
>
>  /* Return the minimum alignment in bytes that the vectorized version
> -   of DR is guaranteed to have.  */
> +   of DR_INFO is guaranteed to have.  */
>
>  static inline unsigned int
> -vect_known_alignment_in_bytes (struct data_reference *dr)
> +vect_known_alignment_in_bytes (dr_vec_info *dr_info)
>  {
> -  if (DR_MISALIGNMENT (dr) == DR_MISALIGNMENT_UNKNOWN)
> -    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
> -  if (DR_MISALIGNMENT (dr) == 0)
> -    return DR_TARGET_ALIGNMENT (dr);
> -  return DR_MISALIGNMENT (dr) & -DR_MISALIGNMENT (dr);
> +  if (DR_MISALIGNMENT (dr_info) == DR_MISALIGNMENT_UNKNOWN)
> +    return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
> +  if (DR_MISALIGNMENT (dr_info) == 0)
> +    return DR_TARGET_ALIGNMENT (dr_info);
> +  return DR_MISALIGNMENT (dr_info) & -DR_MISALIGNMENT (dr_info);
>  }
>
> -/* Return the behavior of DR with respect to the vectorization context
> +/* Return the behavior of DR_INFO with respect to the vectorization context
>     (which for outer loop vectorization might not be the behavior recorded
> -   in DR itself).  */
> +   in DR_INFO itself).  */
>
>  static inline innermost_loop_behavior *
> -vect_dr_behavior (data_reference *dr)
> +vect_dr_behavior (dr_vec_info *dr_info)
>  {
> -  gimple *stmt = DR_STMT (dr);
> -  stmt_vec_info stmt_info = vinfo_for_stmt (stmt);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    if (loop_vinfo == NULL
>        || !nested_in_vect_loop_p (LOOP_VINFO_LOOP (loop_vinfo), stmt_info))
> -    return &DR_INNERMOST (dr);
> +    return &DR_INNERMOST (dr_info->dr);
>    else
>      return &STMT_VINFO_DR_WRT_VEC_LOOP (stmt_info);
>  }
> @@ -1451,17 +1450,17 @@ vect_max_vf (loop_vec_info loop_vinfo)
>    return MAX_VECTORIZATION_FACTOR;
>  }
>
> -/* Return the size of the value accessed by unvectorized data reference DR.
> -   This is only valid once STMT_VINFO_VECTYPE has been calculated for the
> -   associated gimple statement, since that guarantees that DR accesses
> -   either a scalar or a scalar equivalent.  ("Scalar equivalent" here
> -   includes things like V1SI, which can be vectorized in the same way
> +/* Return the size of the value accessed by unvectorized data reference
> +   DR_INFO.  This is only valid once STMT_VINFO_VECTYPE has been calculated
> +   for the associated gimple statement, since that guarantees that DR_INFO
> +   accesses either a scalar or a scalar equivalent.  ("Scalar equivalent"
> +   here includes things like V1SI, which can be vectorized in the same way
>     as a plain SI.)  */
>
>  inline unsigned int
> -vect_get_scalar_dr_size (struct data_reference *dr)
> +vect_get_scalar_dr_size (dr_vec_info *dr_info)
>  {
> -  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr))));
> +  return tree_to_uhwi (TYPE_SIZE_UNIT (TREE_TYPE (DR_REF (dr_info->dr))));
>  }
>
>  /* Source location + hotness information. */
> @@ -1561,7 +1560,7 @@ extern tree vect_get_mask_type_for_stmt
>  /* In tree-vect-data-refs.c.  */
>  extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
>  extern enum dr_alignment_support vect_supportable_dr_alignment
> -                                           (struct data_reference *, bool);
> +                                           (dr_vec_info *, bool);
>  extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
>                                             HOST_WIDE_INT *);
>  extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
> Index: gcc/tree-vect-data-refs.c
> ===================================================================
> --- gcc/tree-vect-data-refs.c   2018-07-26 11:30:56.193256600 +0100
> +++ gcc/tree-vect-data-refs.c   2018-07-26 11:42:19.031663762 +0100
> @@ -192,14 +192,16 @@ vect_check_nonzero_value (loop_vec_info
>    LOOP_VINFO_CHECK_NONZERO (loop_vinfo).safe_push (value);
>  }
>
> -/* Return true if we know that the order of vectorized STMTINFO_A and
> -   vectorized STMTINFO_B will be the same as the order of STMTINFO_A and
> -   STMTINFO_B.  At least one of the statements is a write.  */
> +/* Return true if we know that the order of vectorized DR_INFO_A and
> +   vectorized DR_INFO_B will be the same as the order of DR_INFO_A and
> +   DR_INFO_B.  At least one of the accesses is a write.  */
>
>  static bool
> -vect_preserves_scalar_order_p (stmt_vec_info stmtinfo_a,
> -                              stmt_vec_info stmtinfo_b)
> +vect_preserves_scalar_order_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b)
>  {
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
> +
>    /* Single statements are always kept in their original order.  */
>    if (!STMT_VINFO_GROUPED_ACCESS (stmtinfo_a)
>        && !STMT_VINFO_GROUPED_ACCESS (stmtinfo_b))
> @@ -294,8 +296,10 @@ vect_analyze_data_ref_dependence (struct
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> -  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>    lambda_vector dist_v;
>    unsigned int loop_depth;
>
> @@ -471,7 +475,7 @@ vect_analyze_data_ref_dependence (struct
>                 ... = a[i];
>                 a[i+1] = ...;
>              where loads from the group interleave with the store.  */
> -         if (!vect_preserves_scalar_order_p (stmtinfo_a, stmtinfo_b))
> +         if (!vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
>             {
>               if (dump_enabled_p ())
>                 dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -609,6 +613,8 @@ vect_slp_analyze_data_ref_dependence (st
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
>
>    /* We need to check dependences of statements marked as unvectorizable
>       as well, they still can prohibit vectorization.  */
> @@ -626,9 +632,9 @@ vect_slp_analyze_data_ref_dependence (st
>
>    /* If dra and drb are part of the same interleaving chain consider
>       them independent.  */
> -  if (STMT_VINFO_GROUPED_ACCESS (vect_dr_stmt (dra))
> -      && (DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dra))
> -         == DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (drb))))
> +  if (STMT_VINFO_GROUPED_ACCESS (dr_info_a->stmt)
> +      && (DR_GROUP_FIRST_ELEMENT (dr_info_a->stmt)
> +         == DR_GROUP_FIRST_ELEMENT (dr_info_b->stmt)))
>      return false;
>
>    /* Unknown data dependence.  */
> @@ -842,7 +848,8 @@ vect_record_base_alignments (vec_info *v
>    unsigned int i;
>    FOR_EACH_VEC_ELT (vinfo->shared->datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        if (!DR_IS_CONDITIONAL_IN_STMT (dr)
>           && STMT_VINFO_VECTORIZABLE (stmt_info)
>           && !STMT_VINFO_GATHER_SCATTER_P (stmt_info))
> @@ -858,34 +865,33 @@ vect_record_base_alignments (vec_info *v
>      }
>  }
>
> -/* Return the target alignment for the vectorized form of DR.  */
> +/* Return the target alignment for the vectorized form of DR_INFO.  */
>
>  static unsigned int
> -vect_calculate_target_alignment (struct data_reference *dr)
> +vect_calculate_target_alignment (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  tree vectype = STMT_VINFO_VECTYPE (stmt_info);
> +  tree vectype = STMT_VINFO_VECTYPE (dr_info->stmt);
>    return targetm.vectorize.preferred_vector_alignment (vectype);
>  }
>
>  /* Function vect_compute_data_ref_alignment
>
> -   Compute the misalignment of the data reference DR.
> +   Compute the misalignment of the data reference DR_INFO.
>
>     Output:
> -   1. DR_MISALIGNMENT (DR) is defined.
> +   1. DR_MISALIGNMENT (DR_INFO) is defined.
>
>     FOR NOW: No analysis is actually performed. Misalignment is calculated
>     only for trivial cases. TODO.  */
>
>  static void
> -vect_compute_data_ref_alignment (struct data_reference *dr)
> +vect_compute_data_ref_alignment (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
> -  tree ref = DR_REF (dr);
> +  tree ref = DR_REF (dr_info->dr);
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (dump_enabled_p ())
> @@ -896,17 +902,17 @@ vect_compute_data_ref_alignment (struct
>      loop = LOOP_VINFO_LOOP (loop_vinfo);
>
>    /* Initialize misalignment to unknown.  */
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
>
>    if (STMT_VINFO_GATHER_SCATTER_P (stmt_info))
>      return;
>
> -  innermost_loop_behavior *drb = vect_dr_behavior (dr);
> +  innermost_loop_behavior *drb = vect_dr_behavior (dr_info);
>    bool step_preserves_misalignment_p;
>
>    unsigned HOST_WIDE_INT vector_alignment
> -    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
> -  DR_TARGET_ALIGNMENT (dr) = vector_alignment;
> +    = vect_calculate_target_alignment (dr_info) / BITS_PER_UNIT;
> +  DR_TARGET_ALIGNMENT (dr_info) = vector_alignment;
>
>    /* No step for BB vectorization.  */
>    if (!loop)
> @@ -924,7 +930,7 @@ vect_compute_data_ref_alignment (struct
>    else if (nested_in_vect_loop_p (loop, stmt_info))
>      {
>        step_preserves_misalignment_p
> -       = (DR_STEP_ALIGNMENT (dr) % vector_alignment) == 0;
> +       = (DR_STEP_ALIGNMENT (dr_info->dr) % vector_alignment) == 0;
>
>        if (dump_enabled_p ())
>         {
> @@ -946,7 +952,7 @@ vect_compute_data_ref_alignment (struct
>      {
>        poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>        step_preserves_misalignment_p
> -       = multiple_p (DR_STEP_ALIGNMENT (dr) * vf, vector_alignment);
> +       = multiple_p (DR_STEP_ALIGNMENT (dr_info->dr) * vf, vector_alignment);
>
>        if (!step_preserves_misalignment_p && dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1009,8 +1015,8 @@ vect_compute_data_ref_alignment (struct
>            dump_printf (MSG_NOTE, "\n");
>          }
>
> -      DR_VECT_AUX (dr)->base_decl = base;
> -      DR_VECT_AUX (dr)->base_misaligned = true;
> +      dr_info->base_decl = base;
> +      dr_info->base_misaligned = true;
>        base_misalignment = 0;
>      }
>    poly_int64 misalignment
> @@ -1038,12 +1044,13 @@ vect_compute_data_ref_alignment (struct
>        return;
>      }
>
> -  SET_DR_MISALIGNMENT (dr, const_misalignment);
> +  SET_DR_MISALIGNMENT (dr_info, const_misalignment);
>
>    if (dump_enabled_p ())
>      {
>        dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                       "misalign = %d bytes of ref ", DR_MISALIGNMENT (dr));
> +                      "misalign = %d bytes of ref ",
> +                      DR_MISALIGNMENT (dr_info));
>        dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM, ref);
>        dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>      }
> @@ -1052,28 +1059,28 @@ vect_compute_data_ref_alignment (struct
>  }
>
>  /* Function vect_update_misalignment_for_peel.
> -   Sets DR's misalignment
> -   - to 0 if it has the same alignment as DR_PEEL,
> -   - to the misalignment computed using NPEEL if DR's salignment is known,
> +   Sets DR_INFO's misalignment
> +   - to 0 if it has the same alignment as DR_PEEL_INFO,
> +   - to the misalignment computed using NPEEL if DR_INFO's alignment is known,
>     - to -1 (unknown) otherwise.
>
> -   DR - the data reference whose misalignment is to be adjusted.
> -   DR_PEEL - the data reference whose misalignment is being made
> -             zero in the vector loop by the peel.
> +   DR_INFO - the data reference whose misalignment is to be adjusted.
> +   DR_PEEL_INFO - the data reference whose misalignment is being made
> +                 zero in the vector loop by the peel.
>     NPEEL - the number of iterations in the peel loop if the misalignment
> -           of DR_PEEL is known at compile time.  */
> +           of DR_PEEL_INFO is known at compile time.  */
>
>  static void
> -vect_update_misalignment_for_peel (struct data_reference *dr,
> -                                   struct data_reference *dr_peel, int npeel)
> +vect_update_misalignment_for_peel (dr_vec_info *dr_info,
> +                                  dr_vec_info *dr_peel_info, int npeel)
>  {
>    unsigned int i;
>    vec<dr_p> same_aligned_drs;
>    struct data_reference *current_dr;
> -  int dr_size = vect_get_scalar_dr_size (dr);
> -  int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
> +  int dr_size = vect_get_scalar_dr_size (dr_info);
> +  int dr_peel_size = vect_get_scalar_dr_size (dr_peel_info);
> +  stmt_vec_info stmt_info = dr_info->stmt;
> +  stmt_vec_info peel_stmt_info = dr_peel_info->stmt;
>
>   /* For interleaved data accesses the step in the loop must be multiplied by
>       the size of the interleaving group.  */
> @@ -1084,51 +1091,52 @@ vect_update_misalignment_for_peel (struc
>
>    /* It can be assumed that the data refs with the same alignment as dr_peel
>       are aligned in the vector loop.  */
> -  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
> +  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
>    FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
>      {
> -      if (current_dr != dr)
> +      if (current_dr != dr_info->dr)
>          continue;
> -      gcc_assert (!known_alignment_for_access_p (dr)
> -                 || !known_alignment_for_access_p (dr_peel)
> -                 || (DR_MISALIGNMENT (dr) / dr_size
> -                     == DR_MISALIGNMENT (dr_peel) / dr_peel_size));
> -      SET_DR_MISALIGNMENT (dr, 0);
> +      gcc_assert (!known_alignment_for_access_p (dr_info)
> +                 || !known_alignment_for_access_p (dr_peel_info)
> +                 || (DR_MISALIGNMENT (dr_info) / dr_size
> +                     == DR_MISALIGNMENT (dr_peel_info) / dr_peel_size));
> +      SET_DR_MISALIGNMENT (dr_info, 0);
>        return;
>      }
>
> -  if (known_alignment_for_access_p (dr)
> -      && known_alignment_for_access_p (dr_peel))
> +  if (known_alignment_for_access_p (dr_info)
> +      && known_alignment_for_access_p (dr_peel_info))
>      {
> -      bool negative = tree_int_cst_compare (DR_STEP (dr), size_zero_node) < 0;
> -      int misal = DR_MISALIGNMENT (dr);
> +      bool negative = tree_int_cst_compare (DR_STEP (dr_info->dr),
> +                                           size_zero_node) < 0;
> +      int misal = DR_MISALIGNMENT (dr_info);
>        misal += negative ? -npeel * dr_size : npeel * dr_size;
> -      misal &= DR_TARGET_ALIGNMENT (dr) - 1;
> -      SET_DR_MISALIGNMENT (dr, misal);
> +      misal &= DR_TARGET_ALIGNMENT (dr_info) - 1;
> +      SET_DR_MISALIGNMENT (dr_info, misal);
>        return;
>      }
>
>    if (dump_enabled_p ())
>      dump_printf_loc (MSG_NOTE, vect_location, "Setting misalignment " \
>                      "to unknown (-1).\n");
> -  SET_DR_MISALIGNMENT (dr, DR_MISALIGNMENT_UNKNOWN);
> +  SET_DR_MISALIGNMENT (dr_info, DR_MISALIGNMENT_UNKNOWN);
>  }
>
>
>  /* Function verify_data_ref_alignment
>
> -   Return TRUE if DR can be handled with respect to alignment.  */
> +   Return TRUE if DR_INFO can be handled with respect to alignment.  */
>
>  static bool
> -verify_data_ref_alignment (data_reference_p dr)
> +verify_data_ref_alignment (dr_vec_info *dr_info)
>  {
>    enum dr_alignment_support supportable_dr_alignment
> -    = vect_supportable_dr_alignment (dr, false);
> +    = vect_supportable_dr_alignment (dr_info, false);
>    if (!supportable_dr_alignment)
>      {
>        if (dump_enabled_p ())
>         {
> -         if (DR_IS_READ (dr))
> +         if (DR_IS_READ (dr_info->dr))
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                              "not vectorized: unsupported unaligned load.");
>           else
> @@ -1137,7 +1145,7 @@ verify_data_ref_alignment (data_referenc
>                              "store.");
>
>           dump_generic_expr (MSG_MISSED_OPTIMIZATION, TDF_SLIM,
> -                            DR_REF (dr));
> +                            DR_REF (dr_info->dr));
>           dump_printf (MSG_MISSED_OPTIMIZATION, "\n");
>         }
>        return false;
> @@ -1164,7 +1172,8 @@ vect_verify_datarefs_alignment (loop_vec
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1180,7 +1189,7 @@ vect_verify_datarefs_alignment (loop_vec
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      if (! verify_data_ref_alignment (dr))
> +      if (! verify_data_ref_alignment (dr_info))
>         return false;
>      }
>
> @@ -1202,13 +1211,13 @@ not_size_aligned (tree exp)
>
>  /* Function vector_alignment_reachable_p
>
> -   Return true if vector alignment for DR is reachable by peeling
> +   Return true if vector alignment for DR_INFO is reachable by peeling
>     a few loop iterations.  Return false otherwise.  */
>
>  static bool
> -vector_alignment_reachable_p (struct data_reference *dr)
> +vector_alignment_reachable_p (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    tree vectype = STMT_VINFO_VECTYPE (stmt_info);
>
>    if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
> @@ -1219,13 +1228,13 @@ vector_alignment_reachable_p (struct dat
>        int elem_size, mis_in_elements;
>
>        /* FORNOW: handle only known alignment.  */
> -      if (!known_alignment_for_access_p (dr))
> +      if (!known_alignment_for_access_p (dr_info))
>         return false;
>
>        poly_uint64 nelements = TYPE_VECTOR_SUBPARTS (vectype);
>        poly_uint64 vector_size = GET_MODE_SIZE (TYPE_MODE (vectype));
>        elem_size = vector_element_size (vector_size, nelements);
> -      mis_in_elements = DR_MISALIGNMENT (dr) / elem_size;
> +      mis_in_elements = DR_MISALIGNMENT (dr_info) / elem_size;
>
>        if (!multiple_p (nelements - mis_in_elements, DR_GROUP_SIZE (stmt_info)))
>         return false;
> @@ -1233,7 +1242,7 @@ vector_alignment_reachable_p (struct dat
>
>    /* If misalignment is known at the compile time then allow peeling
>       only if natural alignment is reachable through peeling.  */
> -  if (known_alignment_for_access_p (dr) && !aligned_access_p (dr))
> +  if (known_alignment_for_access_p (dr_info) && !aligned_access_p (dr_info))
>      {
>        HOST_WIDE_INT elmsize =
>                 int_cst_value (TYPE_SIZE_UNIT (TREE_TYPE (vectype)));
> @@ -1242,9 +1251,9 @@ vector_alignment_reachable_p (struct dat
>           dump_printf_loc (MSG_NOTE, vect_location,
>                            "data size =" HOST_WIDE_INT_PRINT_DEC, elmsize);
>           dump_printf (MSG_NOTE,
> -                      ". misalignment = %d.\n", DR_MISALIGNMENT (dr));
> +                      ". misalignment = %d.\n", DR_MISALIGNMENT (dr_info));
>         }
> -      if (DR_MISALIGNMENT (dr) % elmsize)
> +      if (DR_MISALIGNMENT (dr_info) % elmsize)
>         {
>           if (dump_enabled_p ())
>             dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1253,10 +1262,10 @@ vector_alignment_reachable_p (struct dat
>         }
>      }
>
> -  if (!known_alignment_for_access_p (dr))
> +  if (!known_alignment_for_access_p (dr_info))
>      {
> -      tree type = TREE_TYPE (DR_REF (dr));
> -      bool is_packed = not_size_aligned (DR_REF (dr));
> +      tree type = TREE_TYPE (DR_REF (dr_info->dr));
> +      bool is_packed = not_size_aligned (DR_REF (dr_info->dr));
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
>                          "Unknown misalignment, %snaturally aligned\n",
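
The elmsize test above is the crux of reachability: peeling advances
the access in whole scalar elements, so a misalignment that is not an
element multiple can never be peeled away.  A quick stand-alone check,
with values invented for illustration:

#include <stdbool.h>
#include <stdio.h>

static bool
reachable_by_peeling (int misalignment, int elmsize)
{
  return misalignment % elmsize == 0;
}

int
main (void)
{
  printf ("%d\n", reachable_by_peeling (8, 4)); /* 1: peel two elements */
  printf ("%d\n", reachable_by_peeling (6, 4)); /* 0: stuck at offset 2 */
  return 0;
}
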
> @@ -1268,16 +1277,16 @@ vector_alignment_reachable_p (struct dat
>  }
>
>
> -/* Calculate the cost of the memory access represented by DR.  */
> +/* Calculate the cost of the memory access represented by DR_INFO.  */
>
>  static void
> -vect_get_data_access_cost (struct data_reference *dr,
> +vect_get_data_access_cost (dr_vec_info *dr_info,
>                             unsigned int *inside_cost,
>                             unsigned int *outside_cost,
>                            stmt_vector_for_cost *body_cost_vec,
>                            stmt_vector_for_cost *prologue_cost_vec)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    int ncopies;
>
> @@ -1286,7 +1295,7 @@ vect_get_data_access_cost (struct data_r
>    else
>      ncopies = vect_get_num_copies (loop_vinfo, STMT_VINFO_VECTYPE (stmt_info));
>
> -  if (DR_IS_READ (dr))
> +  if (DR_IS_READ (dr_info->dr))
>      vect_get_load_cost (stmt_info, ncopies, true, inside_cost, outside_cost,
>                         prologue_cost_vec, body_cost_vec, false);
>    else
> @@ -1301,7 +1310,7 @@ vect_get_data_access_cost (struct data_r
>
>  typedef struct _vect_peel_info
>  {
> -  struct data_reference *dr;
> +  dr_vec_info *dr_info;
>    int npeel;
>    unsigned int count;
>  } *vect_peel_info;
> @@ -1335,16 +1344,17 @@ peel_info_hasher::equal (const _vect_pee
>  }
>
>
> -/* Insert DR into peeling hash table with NPEEL as key.  */
> +/* Insert DR_INFO into peeling hash table with NPEEL as key.  */
>
>  static void
>  vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
> -                         loop_vec_info loop_vinfo, struct data_reference *dr,
> +                         loop_vec_info loop_vinfo, dr_vec_info *dr_info,
>                            int npeel)
>  {
>    struct _vect_peel_info elem, *slot;
>    _vect_peel_info **new_slot;
> -  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> +  bool supportable_dr_alignment
> +    = vect_supportable_dr_alignment (dr_info, true);
>
>    elem.npeel = npeel;
>    slot = peeling_htab->find (&elem);
> @@ -1354,7 +1364,7 @@ vect_peeling_hash_insert (hash_table<pee
>      {
>        slot = XNEW (struct _vect_peel_info);
>        slot->npeel = npeel;
> -      slot->dr = dr;
> +      slot->dr_info = dr_info;
>        slot->count = 1;
>        new_slot = peeling_htab->find_slot (slot, INSERT);
>        *new_slot = slot;
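
For readers following the renaming: the table is keyed by npeel and
simply counts how many data refs would become aligned by that peel
amount, so the later traversals can pick a winner.  Roughly, as a toy
model (tie-breaking and the cost-based variant omitted):

struct peel_vote { int npeel; unsigned count; };

static const struct peel_vote *
most_frequent (const struct peel_vote *votes, int n)
{
  const struct peel_vote *best = &votes[0];
  for (int i = 1; i < n; i++)
    if (votes[i].count > best->count)
      best = &votes[i];
  return best;
}
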
> @@ -1381,19 +1391,19 @@ vect_peeling_hash_get_most_frequent (_ve
>      {
>        max->peel_info.npeel = elem->npeel;
>        max->peel_info.count = elem->count;
> -      max->peel_info.dr = elem->dr;
> +      max->peel_info.dr_info = elem->dr_info;
>      }
>
>    return 1;
>  }
>
>  /* Get the costs of peeling NPEEL iterations checking data access costs
> -   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
> +   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0_INFO's
>     misalignment will be zero after peeling.  */
>
>  static void
>  vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
> -                               struct data_reference *dr0,
> +                               dr_vec_info *dr0_info,
>                                 unsigned int *inside_cost,
>                                 unsigned int *outside_cost,
>                                 stmt_vector_for_cost *body_cost_vec,
> @@ -1406,7 +1416,8 @@ vect_get_peeling_costs_all_drs (vec<data
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
>
> @@ -1423,16 +1434,16 @@ vect_get_peeling_costs_all_drs (vec<data
>         continue;
>
>        int save_misalignment;
> -      save_misalignment = DR_MISALIGNMENT (dr);
> +      save_misalignment = DR_MISALIGNMENT (dr_info);
>        if (npeel == 0)
>         ;
> -      else if (unknown_misalignment && dr == dr0)
> -       SET_DR_MISALIGNMENT (dr, 0);
> +      else if (unknown_misalignment && dr_info == dr0_info)
> +       SET_DR_MISALIGNMENT (dr_info, 0);
>        else
> -       vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      vect_get_data_access_cost (dr, inside_cost, outside_cost,
> +       vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
> +      vect_get_data_access_cost (dr_info, inside_cost, outside_cost,
>                                  body_cost_vec, prologue_cost_vec);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
>      }
>  }
>
> @@ -1446,7 +1457,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    vect_peel_info elem = *slot;
>    int dummy;
>    unsigned int inside_cost = 0, outside_cost = 0;
> -  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
> +  stmt_vec_info stmt_info = elem->dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
>                        epilogue_cost_vec;
> @@ -1456,7 +1467,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>    epilogue_cost_vec.create (2);
>
>    vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
> -                                 elem->dr, &inside_cost, &outside_cost,
> +                                 elem->dr_info, &inside_cost, &outside_cost,
>                                   &body_cost_vec, &prologue_cost_vec,
>                                   elem->npeel, false);
>
> @@ -1480,7 +1491,7 @@ vect_peeling_hash_get_lowest_cost (_vect
>      {
>        min->inside_cost = inside_cost;
>        min->outside_cost = outside_cost;
> -      min->peel_info.dr = elem->dr;
> +      min->peel_info.dr_info = elem->dr_info;
>        min->peel_info.npeel = elem->npeel;
>        min->peel_info.count = elem->count;
>      }
> @@ -1499,7 +1510,7 @@ vect_peeling_hash_choose_best_peeling (h
>  {
>     struct _vect_peel_extended_info res;
>
> -   res.peel_info.dr = NULL;
> +   res.peel_info.dr_info = NULL;
>
>     if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
>       {
> @@ -1523,7 +1534,7 @@ vect_peeling_hash_choose_best_peeling (h
>  /* Return true if the new peeling NPEEL is supported.  */
>
>  static bool
> -vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
> +vect_peeling_supportable (loop_vec_info loop_vinfo, dr_vec_info *dr0_info,
>                           unsigned npeel)
>  {
>    unsigned i;
> @@ -1536,10 +1547,11 @@ vect_peeling_supportable (loop_vec_info
>      {
>        int save_misalignment;
>
> -      if (dr == dr0)
> +      if (dr == dr0_info->dr)
>         continue;
>
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>        /* For interleaving, only the alignment of the first access
>          matters.  */
>        if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
> @@ -1552,10 +1564,11 @@ vect_peeling_supportable (loop_vec_info
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      save_misalignment = DR_MISALIGNMENT (dr);
> -      vect_update_misalignment_for_peel (dr, dr0, npeel);
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> -      SET_DR_MISALIGNMENT (dr, save_misalignment);
> +      save_misalignment = DR_MISALIGNMENT (dr_info);
> +      vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
> +      supportable_dr_alignment
> +       = vect_supportable_dr_alignment (dr_info, false);
> +      SET_DR_MISALIGNMENT (dr_info, save_misalignment);
>
>        if (!supportable_dr_alignment)
>         return false;
> @@ -1661,7 +1674,8 @@ vect_enhance_data_refs_alignment (loop_v
>    vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
>    struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
>    enum dr_alignment_support supportable_dr_alignment;
> -  struct data_reference *dr0 = NULL, *first_store = NULL;
> +  dr_vec_info *first_store = NULL;
> +  dr_vec_info *dr0_info = NULL;
>    struct data_reference *dr;
>    unsigned int i, j;
>    bool do_peeling = false;
> @@ -1671,7 +1685,7 @@ vect_enhance_data_refs_alignment (loop_v
>    bool one_misalignment_known = false;
>    bool one_misalignment_unknown = false;
>    bool one_dr_unsupportable = false;
> -  struct data_reference *unsupportable_dr = NULL;
> +  dr_vec_info *unsupportable_dr_info = NULL;
>    poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
>    unsigned possible_npeel_number = 1;
>    tree vectype;
> @@ -1718,7 +1732,8 @@ vect_enhance_data_refs_alignment (loop_v
>
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      stmt_vec_info stmt_info = dr_info->stmt;
>
>        if (!STMT_VINFO_RELEVANT_P (stmt_info))
>         continue;
> @@ -1741,21 +1756,23 @@ vect_enhance_data_refs_alignment (loop_v
>           && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>         continue;
>
> -      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
> -      do_peeling = vector_alignment_reachable_p (dr);
> +      supportable_dr_alignment = vect_supportable_dr_alignment (dr_info, true);
> +      do_peeling = vector_alignment_reachable_p (dr_info);
>        if (do_peeling)
>          {
> -          if (known_alignment_for_access_p (dr))
> +          if (known_alignment_for_access_p (dr_info))
>              {
>               unsigned int npeel_tmp = 0;
>               bool negative = tree_int_cst_compare (DR_STEP (dr),
>                                                     size_zero_node) < 0;
>
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr);
> -             unsigned int dr_size = vect_get_scalar_dr_size (dr);
> -             mis = (negative ? DR_MISALIGNMENT (dr) : -DR_MISALIGNMENT (dr));
> -             if (DR_MISALIGNMENT (dr) != 0)
> +             unsigned int target_align = DR_TARGET_ALIGNMENT (dr_info);
> +             unsigned int dr_size = vect_get_scalar_dr_size (dr_info);
> +             mis = (negative
> +                    ? DR_MISALIGNMENT (dr_info)
> +                    : -DR_MISALIGNMENT (dr_info));
> +             if (DR_MISALIGNMENT (dr_info) != 0)
>                 npeel_tmp = (mis & (target_align - 1)) / dr_size;
>
>                /* For multiple types, it is possible that the bigger type access
> @@ -1780,7 +1797,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>                   /* NPEEL_TMP is 0 when there is no misalignment, but also
>                      allow peeling NELEMENTS.  */
> -                 if (DR_MISALIGNMENT (dr) == 0)
> +                 if (DR_MISALIGNMENT (dr_info) == 0)
>                     possible_npeel_number++;
>                 }
>
> @@ -1789,7 +1806,7 @@ vect_enhance_data_refs_alignment (loop_v
>                for (j = 0; j < possible_npeel_number; j++)
>                  {
>                    vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
> -                                           dr, npeel_tmp);
> +                                           dr_info, npeel_tmp);
>                   npeel_tmp += target_align / dr_size;
>                  }
>
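
The loop just above enumerates every peel amount that could align this
ref: with 4-byte elements, a 16-byte target alignment and misalignment
8, the candidates are npeel = 2, 6, 10, ...  A stand-alone rendering of
that enumeration, constants invented:

#include <stdio.h>

int
main (void)
{
  int dr_size = 4, target_align = 16, misalignment = 8;
  /* First candidate: elements needed to reach the target alignment.  */
  int npeel = (-misalignment & (target_align - 1)) / dr_size; /* 2 */
  /* Each further candidate peels one extra vector's worth.  */
  for (int j = 0; j < 3; j++, npeel += target_align / dr_size)
    printf ("candidate npeel = %d\n", npeel); /* 2, 6, 10 */
  return 0;
}
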
> @@ -1803,11 +1820,11 @@ vect_enhance_data_refs_alignment (loop_v
>                   stores over load.  */
>               unsigned same_align_drs
>                 = STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
> -             if (!dr0
> +             if (!dr0_info
>                   || same_align_drs_max < same_align_drs)
>                 {
>                   same_align_drs_max = same_align_drs;
> -                 dr0 = dr;
> +                 dr0_info = dr_info;
>                 }
>               /* For data-refs with the same number of related
>                  accesses prefer the one where the misalign
> @@ -1816,13 +1833,13 @@ vect_enhance_data_refs_alignment (loop_v
>                 {
>                   struct loop *ivloop0, *ivloop;
>                   ivloop0 = outermost_invariant_loop_for_expr
> -                   (loop, DR_BASE_ADDRESS (dr0));
> +                   (loop, DR_BASE_ADDRESS (dr0_info->dr));
>                   ivloop = outermost_invariant_loop_for_expr
>                     (loop, DR_BASE_ADDRESS (dr));
>                   if ((ivloop && !ivloop0)
>                       || (ivloop && ivloop0
>                           && flow_loop_nested_p (ivloop, ivloop0)))
> -                   dr0 = dr;
> +                   dr0_info = dr_info;
>                 }
>
>               one_misalignment_unknown = true;
> @@ -1832,16 +1849,16 @@ vect_enhance_data_refs_alignment (loop_v
>               if (!supportable_dr_alignment)
>               {
>                 one_dr_unsupportable = true;
> -               unsupportable_dr = dr;
> +               unsupportable_dr_info = dr_info;
>               }
>
>               if (!first_store && DR_IS_WRITE (dr))
> -               first_store = dr;
> +               first_store = dr_info;
>              }
>          }
>        else
>          {
> -          if (!aligned_access_p (dr))
> +          if (!aligned_access_p (dr_info))
>              {
>                if (dump_enabled_p ())
>                  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -1879,7 +1896,7 @@ vect_enhance_data_refs_alignment (loop_v
>
>        stmt_vector_for_cost dummy;
>        dummy.create (2);
> -      vect_get_peeling_costs_all_drs (datarefs, dr0,
> +      vect_get_peeling_costs_all_drs (datarefs, dr0_info,
>                                       &load_inside_cost,
>                                       &load_outside_cost,
>                                       &dummy, &dummy, estimated_npeels, true);
> @@ -1905,7 +1922,7 @@ vect_enhance_data_refs_alignment (loop_v
>           || (load_inside_cost == store_inside_cost
>               && load_outside_cost > store_outside_cost))
>         {
> -         dr0 = first_store;
> +         dr0_info = first_store;
>           peel_for_unknown_alignment.inside_cost = store_inside_cost;
>           peel_for_unknown_alignment.outside_cost = store_outside_cost;
>         }
> @@ -1929,18 +1946,18 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        peel_for_unknown_alignment.peel_info.count = 1
> -       + STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
> +       + STMT_VINFO_SAME_ALIGN_REFS (dr0_info->stmt).length ();
>      }
>
>    peel_for_unknown_alignment.peel_info.npeel = 0;
> -  peel_for_unknown_alignment.peel_info.dr = dr0;
> +  peel_for_unknown_alignment.peel_info.dr_info = dr0_info;
>
>    best_peel = peel_for_unknown_alignment;
>
>    peel_for_known_alignment.inside_cost = INT_MAX;
>    peel_for_known_alignment.outside_cost = INT_MAX;
>    peel_for_known_alignment.peel_info.count = 0;
> -  peel_for_known_alignment.peel_info.dr = NULL;
> +  peel_for_known_alignment.peel_info.dr_info = NULL;
>
>    if (do_peeling && one_misalignment_known)
>      {
> @@ -1952,7 +1969,7 @@ vect_enhance_data_refs_alignment (loop_v
>      }
>
>    /* Compare costs of peeling for known and unknown alignment. */
> -  if (peel_for_known_alignment.peel_info.dr != NULL
> +  if (peel_for_known_alignment.peel_info.dr_info != NULL
>        && peel_for_unknown_alignment.inside_cost
>        >= peel_for_known_alignment.inside_cost)
>      {
> @@ -1969,7 +1986,7 @@ vect_enhance_data_refs_alignment (loop_v
>       since we'd have to discard a chosen peeling except when it accidentally
>       aligned the unsupportable data ref.  */
>    if (one_dr_unsupportable)
> -    dr0 = unsupportable_dr;
> +    dr0_info = unsupportable_dr_info;
>    else if (do_peeling)
>      {
>        /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
> @@ -2000,7 +2017,7 @@ vect_enhance_data_refs_alignment (loop_v
>        epilogue_cost_vec.release ();
>
>        npeel = best_peel.peel_info.npeel;
> -      dr0 = best_peel.peel_info.dr;
> +      dr0_info = best_peel.peel_info.dr_info;
>
>        /* If no peeling is not more expensive than the best peeling we
>          have so far, don't perform any peeling.  */
> @@ -2010,12 +2027,12 @@ vect_enhance_data_refs_alignment (loop_v
>
>    if (do_peeling)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr0);
> +      stmt_vec_info stmt_info = dr0_info->stmt;
>        vectype = STMT_VINFO_VECTYPE (stmt_info);
>
> -      if (known_alignment_for_access_p (dr0))
> +      if (known_alignment_for_access_p (dr0_info))
>          {
> -         bool negative = tree_int_cst_compare (DR_STEP (dr0),
> +         bool negative = tree_int_cst_compare (DR_STEP (dr0_info->dr),
>                                                 size_zero_node) < 0;
>            if (!npeel)
>              {
> @@ -2024,16 +2041,17 @@ vect_enhance_data_refs_alignment (loop_v
>                   updating DR_MISALIGNMENT values.  The peeling factor is the
>                   vectorization factor minus the misalignment as an element
>                   count.  */
> -             mis = negative ? DR_MISALIGNMENT (dr0) : -DR_MISALIGNMENT (dr0);
> -             unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> +             mis = (negative
> +                    ? DR_MISALIGNMENT (dr0_info)
> +                    : -DR_MISALIGNMENT (dr0_info));
> +             unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
>               npeel = ((mis & (target_align - 1))
> -                      / vect_get_scalar_dr_size (dr0));
> +                      / vect_get_scalar_dr_size (dr0_info));
>              }
>
>           /* For interleaved data access every iteration accesses all the
>              members of the group, therefore we divide the number of iterations
>              by the group size.  */
> -         stmt_info = vect_dr_stmt (dr0);
>           if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
>             npeel /= DR_GROUP_SIZE (stmt_info);
>
> @@ -2043,11 +2061,11 @@ vect_enhance_data_refs_alignment (loop_v
>          }
>
>        /* Ensure that all datarefs can be vectorized after the peel.  */
> -      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
> +      if (!vect_peeling_supportable (loop_vinfo, dr0_info, npeel))
>         do_peeling = false;
>
>        /* Check if all datarefs are supportable and log.  */
> -      if (do_peeling && known_alignment_for_access_p (dr0) && npeel == 0)
> +      if (do_peeling && known_alignment_for_access_p (dr0_info) && npeel == 0)
>          {
>            stat = vect_verify_datarefs_alignment (loop_vinfo);
>            if (!stat)
> @@ -2066,8 +2084,9 @@ vect_enhance_data_refs_alignment (loop_v
>                unsigned max_peel = npeel;
>                if (max_peel == 0)
>                  {
> -                 unsigned int target_align = DR_TARGET_ALIGNMENT (dr0);
> -                 max_peel = target_align / vect_get_scalar_dr_size (dr0) - 1;
> +                 unsigned int target_align = DR_TARGET_ALIGNMENT (dr0_info);
> +                 max_peel = (target_align
> +                             / vect_get_scalar_dr_size (dr0_info) - 1);
>                  }
>                if (max_peel > max_allowed_peel)
>                  {
> @@ -2103,25 +2122,26 @@ vect_enhance_data_refs_alignment (loop_v
>               vectorization factor times the size).  Otherwise, the
>               misalignment of DR_i must be set to unknown.  */
>           FOR_EACH_VEC_ELT (datarefs, i, dr)
> -           if (dr != dr0)
> +           if (dr != dr0_info->dr)
>               {
>                 /* Strided accesses perform only component accesses, alignment
>                    is irrelevant for them.  */
> -               stmt_info = vect_dr_stmt (dr);
> +               dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +               stmt_info = dr_info->stmt;
>                 if (STMT_VINFO_STRIDED_P (stmt_info)
>                     && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
>                   continue;
>
> -               vect_update_misalignment_for_peel (dr, dr0, npeel);
> +               vect_update_misalignment_for_peel (dr_info, dr0_info, npeel);
>               }
>
> -          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
> +          LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0_info->dr;
>            if (npeel)
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo) = npeel;
>            else
>              LOOP_VINFO_PEELING_FOR_ALIGNMENT (loop_vinfo)
> -             = DR_MISALIGNMENT (dr0);
> -         SET_DR_MISALIGNMENT (dr0, 0);
> +             = DR_MISALIGNMENT (dr0_info);
> +         SET_DR_MISALIGNMENT (dr0_info, 0);
>           if (dump_enabled_p ())
>              {
>                dump_printf_loc (MSG_NOTE, vect_location,
> @@ -2156,11 +2176,12 @@ vect_enhance_data_refs_alignment (loop_v
>      {
>        FOR_EACH_VEC_ELT (datarefs, i, dr)
>          {
> -         stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +         dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +         stmt_vec_info stmt_info = dr_info->stmt;
>
>           /* For interleaving, only the alignment of the first access
>              matters.  */
> -         if (aligned_access_p (dr)
> +         if (aligned_access_p (dr_info)
>               || (STMT_VINFO_GROUPED_ACCESS (stmt_info)
>                   && DR_GROUP_FIRST_ELEMENT (stmt_info) != stmt_info))
>             continue;
> @@ -2175,14 +2196,15 @@ vect_enhance_data_refs_alignment (loop_v
>               break;
>             }
>
> -         supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
> +         supportable_dr_alignment
> +           = vect_supportable_dr_alignment (dr_info, false);
>
>            if (!supportable_dr_alignment)
>              {
>                int mask;
>                tree vectype;
>
> -              if (known_alignment_for_access_p (dr)
> +              if (known_alignment_for_access_p (dr_info)
>                    || LOOP_VINFO_MAY_MISALIGN_STMTS (loop_vinfo).length ()
>                       >= (unsigned) PARAM_VALUE (PARAM_VECT_MAX_VERSION_FOR_ALIGNMENT_CHECKS))
>                  {
> @@ -2190,7 +2212,6 @@ vect_enhance_data_refs_alignment (loop_v
>                    break;
>                  }
>
> -             stmt_info = vect_dr_stmt (dr);
>               vectype = STMT_VINFO_VECTYPE (stmt_info);
>               gcc_assert (vectype);
>
> @@ -2241,8 +2262,8 @@ vect_enhance_data_refs_alignment (loop_v
>           of the loop being vectorized.  */
>        FOR_EACH_VEC_ELT (may_misalign_stmts, i, stmt_info)
>          {
> -          dr = STMT_VINFO_DATA_REF (stmt_info);
> -         SET_DR_MISALIGNMENT (dr, 0);
> +         dr_vec_info *dr_info = STMT_VINFO_DR_INFO (stmt_info);
> +         SET_DR_MISALIGNMENT (dr_info, 0);
>           if (dump_enabled_p ())
>              dump_printf_loc (MSG_NOTE, vect_location,
>                               "Alignment of access forced using versioning.\n");
> @@ -2278,8 +2299,10 @@ vect_find_same_alignment_drs (struct dat
>  {
>    struct data_reference *dra = DDR_A (ddr);
>    struct data_reference *drb = DDR_B (ddr);
> -  stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> -  stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +  dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +  dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +  stmt_vec_info stmtinfo_a = dr_info_a->stmt;
> +  stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>
>    if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
>      return;
> @@ -2302,9 +2325,9 @@ vect_find_same_alignment_drs (struct dat
>    if (maybe_ne (diff, 0))
>      {
>        /* Get the wider of the two alignments.  */
> -      unsigned int align_a = (vect_calculate_target_alignment (dra)
> +      unsigned int align_a = (vect_calculate_target_alignment (dr_info_a)
>                               / BITS_PER_UNIT);
> -      unsigned int align_b = (vect_calculate_target_alignment (drb)
> +      unsigned int align_b = (vect_calculate_target_alignment (dr_info_b)
>                               / BITS_PER_UNIT);
>        unsigned int max_align = MAX (align_a, align_b);
>
> @@ -2352,9 +2375,9 @@ vect_analyze_data_refs_alignment (loop_v
>    vect_record_base_alignments (vinfo);
>    FOR_EACH_VEC_ELT (datarefs, i, dr)
>      {
> -      stmt_vec_info stmt_info = vect_dr_stmt (dr);
> -      if (STMT_VINFO_VECTORIZABLE (stmt_info))
> -       vect_compute_data_ref_alignment (dr);
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt))
> +       vect_compute_data_ref_alignment (dr_info);
>      }
>
>    return true;
> @@ -2370,17 +2393,17 @@ vect_slp_analyze_and_verify_node_alignme
>       the node is permuted in which case we start from the first
>       element in the group.  */
>    stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
> -  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
> +  dr_vec_info *first_dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
>    if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
>      first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
>
> -  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
> -  vect_compute_data_ref_alignment (dr);
> +  dr_vec_info *dr_info = STMT_VINFO_DR_INFO (first_stmt_info);
> +  vect_compute_data_ref_alignment (dr_info);
>    /* For creating the data-ref pointer we need alignment of the
>       first element anyway.  */
> -  if (dr != first_dr)
> -    vect_compute_data_ref_alignment (first_dr);
> -  if (! verify_data_ref_alignment (dr))
> +  if (dr_info != first_dr_info)
> +    vect_compute_data_ref_alignment (first_dr_info);
> +  if (! verify_data_ref_alignment (dr_info))
>      {
>        if (dump_enabled_p ())
>         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> @@ -2418,19 +2441,20 @@ vect_slp_analyze_and_verify_instance_ali
>  }
>
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> +/* Analyze groups of accesses: check that DR_INFO belongs to a group of
>     accesses of legal size, step, etc.  Detect gaps, single element
>     interleaving, and other special cases. Set grouped access info.
>     Collect groups of strided stores for further use in SLP analysis.
>     Worker for vect_analyze_group_access.  */
>
>  static bool
> -vect_analyze_group_access_1 (struct data_reference *dr)
> +vect_analyze_group_access_1 (dr_vec_info *dr_info)
>  {
> +  data_reference *dr = dr_info->dr;
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
>    HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
>    HOST_WIDE_INT dr_step = -1;
> @@ -2507,7 +2531,7 @@ vect_analyze_group_access_1 (struct data
>        if (bb_vinfo)
>         {
>           /* Mark the statement as unvectorizable.  */
> -         STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> +         STMT_VINFO_VECTORIZABLE (stmt_info) = false;
>           return true;
>         }
>
> @@ -2655,18 +2679,18 @@ vect_analyze_group_access_1 (struct data
>    return true;
>  }
>
> -/* Analyze groups of accesses: check that DR belongs to a group of
> +/* Analyze groups of accesses: check that DR_INFO belongs to a group of
>     accesses of legal size, step, etc.  Detect gaps, single element
>     interleaving, and other special cases. Set grouped access info.
>     Collect groups of strided stores for further use in SLP analysis.  */
>
>  static bool
> -vect_analyze_group_access (struct data_reference *dr)
> +vect_analyze_group_access (dr_vec_info *dr_info)
>  {
> -  if (!vect_analyze_group_access_1 (dr))
> +  if (!vect_analyze_group_access_1 (dr_info))
>      {
>        /* Dissolve the group if present.  */
> -      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
> +      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (dr_info->stmt);
>        while (stmt_info)
>         {
>           stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
> @@ -2679,16 +2703,17 @@ vect_analyze_group_access (struct data_r
>    return true;
>  }
>
> -/* Analyze the access pattern of the data-reference DR.
> +/* Analyze the access pattern of the data-reference DR_INFO.
>     In case of non-consecutive accesses call vect_analyze_group_access() to
>     analyze groups of accesses.  */
>
>  static bool
> -vect_analyze_data_ref_access (struct data_reference *dr)
> +vect_analyze_data_ref_access (dr_vec_info *dr_info)
>  {
> +  data_reference *dr = dr_info->dr;
>    tree step = DR_STEP (dr);
>    tree scalar_type = TREE_TYPE (DR_REF (dr));
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
>    struct loop *loop = NULL;
>
> @@ -2768,10 +2793,10 @@ vect_analyze_data_ref_access (struct dat
>    if (TREE_CODE (step) != INTEGER_CST)
>      return (STMT_VINFO_STRIDED_P (stmt_info)
>             && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
> -               || vect_analyze_group_access (dr)));
> +               || vect_analyze_group_access (dr_info)));
>
>    /* Not consecutive access - check if it's a part of interleaving group.  */
> -  return vect_analyze_group_access (dr);
> +  return vect_analyze_group_access (dr_info);
>  }
>
>  /* Compare two data-references DRA and DRB to group them into chunks
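
For anyone new to this code, the kind of group being analyzed here is
e.g. the pair of loads below: both DRs share a base and have step
2 * sizeof (int), so vect_analyze_group_access treats them as one
interleaved access with DR_GROUP_SIZE 2 (a minimal example written for
illustration, not taken from the testsuite):

/* Two data refs with step 8 over 4-byte elements: a group of size 2.  */
void
deinterleave (int *restrict a, int *restrict even, int *restrict odd,
              int n)
{
  for (int i = 0; i < n; i++)
    {
      even[i] = a[2 * i];     /* first DR of the group */
      odd[i] = a[2 * i + 1];  /* second DR of the group */
    }
}
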
> @@ -2916,7 +2941,8 @@ vect_analyze_data_ref_accesses (vec_info
>    for (i = 0; i < datarefs_copy.length () - 1;)
>      {
>        data_reference_p dra = datarefs_copy[i];
> -      stmt_vec_info stmtinfo_a = vect_dr_stmt (dra);
> +      dr_vec_info *dr_info_a = DR_VECT_AUX (dra);
> +      stmt_vec_info stmtinfo_a = dr_info_a->stmt;
>        stmt_vec_info lastinfo = NULL;
>        if (!STMT_VINFO_VECTORIZABLE (stmtinfo_a)
>           || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_a))
> @@ -2927,7 +2953,8 @@ vect_analyze_data_ref_accesses (vec_info
>        for (i = i + 1; i < datarefs_copy.length (); ++i)
>         {
>           data_reference_p drb = datarefs_copy[i];
> -         stmt_vec_info stmtinfo_b = vect_dr_stmt (drb);
> +         dr_vec_info *dr_info_b = DR_VECT_AUX (drb);
> +         stmt_vec_info stmtinfo_b = dr_info_b->stmt;
>           if (!STMT_VINFO_VECTORIZABLE (stmtinfo_b)
>               || STMT_VINFO_GATHER_SCATTER_P (stmtinfo_b))
>             break;
> @@ -3050,25 +3077,28 @@ vect_analyze_data_ref_accesses (vec_info
>      }
>
>    FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
> -    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
> -        && !vect_analyze_data_ref_access (dr))
> -      {
> -       if (dump_enabled_p ())
> -         dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> -                          "not vectorized: complicated access pattern.\n");
> +    {
> +      dr_vec_info *dr_info = DR_VECT_AUX (dr);
> +      if (STMT_VINFO_VECTORIZABLE (dr_info->stmt)
> +         && !vect_analyze_data_ref_access (dr_info))
> +       {
> +         if (dump_enabled_p ())
> +           dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
> +                            "not vectorized: complicated access pattern.\n");
>
> -        if (is_a <bb_vec_info> (vinfo))
> -         {
> -           /* Mark the statement as not vectorizable.  */
> -           STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
> -           continue;
> -         }
> -        else
> -         {
> -           datarefs_copy.release ();
> -           return false;
> -         }
> -      }
> +         if (is_a <bb_vec_info> (vinfo))
> +           {
> +             /* Mark the statement as not vectorizable.  */
> +             STMT_VINFO_VECTORIZABLE (dr_info->stmt) = false;
> +             continue;
> +           }
> +         else
> +           {
> +             datarefs_copy.release ();
> +             return false;
> +           }
> +       }
> +    }
>
>    datarefs_copy.release ();
>    return true;
> @@ -3077,7 +3107,7 @@ vect_analyze_data_ref_accesses (vec_info
>  /* Function vect_vfa_segment_size.
>
>     Input:
> -     DR: The data reference.
> +     DR_INFO: The data reference.
>       LENGTH_FACTOR: segment length to consider.
>
>     Return a value suitable for the dr_with_seg_len::seg_len field.
> @@ -3086,32 +3116,32 @@ vect_analyze_data_ref_accesses (vec_info
>     the size of the access; in effect it only describes the first byte.  */
>
>  static tree
> -vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
> +vect_vfa_segment_size (dr_vec_info *dr_info, tree length_factor)
>  {
>    length_factor = size_binop (MINUS_EXPR,
>                               fold_convert (sizetype, length_factor),
>                               size_one_node);
> -  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr)),
> +  return size_binop (MULT_EXPR, fold_convert (sizetype, DR_STEP (dr_info->dr)),
>                      length_factor);
>  }
>
> -/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
> +/* Return a value that, when added to abs (vect_vfa_segment_size (DR_INFO)),
>     gives the worst-case number of bytes covered by the segment.  */
>
>  static unsigned HOST_WIDE_INT
> -vect_vfa_access_size (data_reference *dr)
> +vect_vfa_access_size (dr_vec_info *dr_info)
>  {
> -  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
> -  tree ref_type = TREE_TYPE (DR_REF (dr));
> +  stmt_vec_info stmt_vinfo = dr_info->stmt;
> +  tree ref_type = TREE_TYPE (DR_REF (dr_info->dr));
>    unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
>    unsigned HOST_WIDE_INT access_size = ref_size;
>    if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
>      {
> -      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
> +      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
>        access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
>      }
>    if (STMT_VINFO_VEC_STMT (stmt_vinfo)
> -      && (vect_supportable_dr_alignment (dr, false)
> +      && (vect_supportable_dr_alignment (dr_info, false)
>           == dr_explicit_realign_optimized))
>      {
>        /* We might access a full vector's worth.  */
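
To make the two quantities concrete: the segment length tracks how far
the first byte moves over the loop, and the access size is added
separately to get the worst-case byte span.  A stand-alone calculation
with invented values:

#include <stdio.h>

int
main (void)
{
  long step = 4, length_factor = 100; /* e.g. the loop iteration count */
  long ref_size = 4, group_size = 2, group_gap = 0;

  /* vect_vfa_segment_size: only the first byte of each access.  */
  long seg_len = (length_factor - 1) * step;              /* 396 */
  /* vect_vfa_access_size: widened for a whole interleaving group.  */
  long access_size = ref_size * (group_size - group_gap); /* 8 */

  printf ("worst-case span = %ld bytes\n", seg_len + access_size);
  return 0;
}
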
> @@ -3121,12 +3151,13 @@ vect_vfa_access_size (data_reference *dr
>    return access_size;
>  }
>
> -/* Get the minimum alignment for all the scalar accesses that DR describes.  */
> +/* Get the minimum alignment for all the scalar accesses that DR_INFO
> +   describes.  */
>
>  static unsigned int
> -vect_vfa_align (const data_reference *dr)
> +vect_vfa_align (dr_vec_info *dr_info)
>  {
> -  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr)));
> +  return TYPE_ALIGN_UNIT (TREE_TYPE (DR_REF (dr_info->dr)));
>  }
>
>  /* Function vect_no_alias_p.
> @@ -3139,27 +3170,27 @@ vect_vfa_align (const data_reference *dr
>     of dr_with_seg_len::{seg_len,access_size} for A and B.  */
>
>  static int
> -vect_compile_time_alias (struct data_reference *a, struct data_reference *b,
> +vect_compile_time_alias (dr_vec_info *a, dr_vec_info *b,
>                          tree segment_length_a, tree segment_length_b,
>                          unsigned HOST_WIDE_INT access_size_a,
>                          unsigned HOST_WIDE_INT access_size_b)
>  {
> -  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a));
> -  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b));
> +  poly_offset_int offset_a = wi::to_poly_offset (DR_INIT (a->dr));
> +  poly_offset_int offset_b = wi::to_poly_offset (DR_INIT (b->dr));
>    poly_uint64 const_length_a;
>    poly_uint64 const_length_b;
>
>    /* For negative step, we need to adjust address range by TYPE_SIZE_UNIT
>       bytes, e.g., int a[3] -> a[1] range is [a+4, a+16) instead of
>       [a, a+12) */
> -  if (tree_int_cst_compare (DR_STEP (a), size_zero_node) < 0)
> +  if (tree_int_cst_compare (DR_STEP (a->dr), size_zero_node) < 0)
>      {
>        const_length_a = (-wi::to_poly_wide (segment_length_a)).force_uhwi ();
>        offset_a = (offset_a + access_size_a) - const_length_a;
>      }
>    else
>      const_length_a = tree_to_poly_uint64 (segment_length_a);
> -  if (tree_int_cst_compare (DR_STEP (b), size_zero_node) < 0)
> +  if (tree_int_cst_compare (DR_STEP (b->dr), size_zero_node) < 0)
>      {
>        const_length_b = (-wi::to_poly_wide (segment_length_b)).force_uhwi ();
>        offset_b = (offset_b + access_size_b) - const_length_b;
> @@ -3269,30 +3300,34 @@ vect_check_lower_bound (loop_vec_info lo
>    LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
>  }
>
> -/* Return true if it's unlikely that the step of the vectorized form of DR
> +/* Return true if it's unlikely that the step of the vectorized form of DR_INFO
>     will span fewer than GAP bytes.  */
>
>  static bool
> -vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
> +vect_small_gap_p (loop_vec_info loop_vinfo, dr_vec_info *dr_info,
> +                 poly_int64 gap)
>  {
> -  stmt_vec_info stmt_info = vect_dr_stmt (dr);
> +  stmt_vec_info stmt_info = dr_info->stmt;
>    HOST_WIDE_INT count
>      = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
>    if (DR_GROUP_FIRST_ELEMENT (stmt_info))
>      count *= DR_GROUP_SIZE (DR_GROUP_FIRST_ELEMENT (stmt_info));
> -  return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
> +  return (estimated_poly_value (gap)
> +         <= count * vect_get_scalar_dr_size (dr_info));
>  }
>
> -/* Return true if we know that there is no alias between DR_A and DR_B
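
A scalar rendering of the vect_small_gap_p test above: the gap is
"small" when one vectorized step (COUNT scalar accesses) is expected to
cover it; with an estimated VF of 4, group size 2 and 4-byte scalars,
any gap up to 32 bytes passes.  Sketch, names invented:

#include <stdbool.h>

static bool
small_gap_p (long gap, long vf_estimate, long group_size,
             long scalar_size)
{
  return gap <= vf_estimate * group_size * scalar_size;
}
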
> -   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
> -   *LOWER_BOUND_OUT to this N.  */
> +/* Return true if we know that there is no alias between DR_INFO_A and
> +   DR_INFO_B when abs (DR_STEP (DR_INFO_A->dr)) >= N for some N.
> +   When returning true, set *LOWER_BOUND_OUT to this N.  */
>
>  static bool
> -vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
> +vectorizable_with_step_bound_p (dr_vec_info *dr_info_a, dr_vec_info *dr_info_b,
>                                 poly_uint64 *lower_bound_out)
>  {
>    /* Check that there is a constant gap of known sign between DR_A
>       and DR_B.  */
> +  data_reference *dr_a = dr_info_a->dr;
> +  data_reference *dr_b = dr_info_b->dr;
>    poly_int64 init_a, init_b;
>    if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
>        || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
> @@ -3306,19 +3341,19 @@ vectorizable_with_step_bound_p (data_ref
>    if (maybe_lt (init_b, init_a))
>      {
>        std::swap (init_a, init_b);
> +      std::swap (dr_info_a, dr_info_b);
>        std::swap (dr_a, dr_b);
>      }
>
>    /* If the two accesses could be dependent within a scalar iteration,
>       make sure that we'd retain their order.  */
> -  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
> -      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
> -                                        vect_dr_stmt (dr_b)))
> +  if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_info_a), init_b)
> +      && !vect_preserves_scalar_order_p (dr_info_a, dr_info_b))
>      return false;
>
>    /* There is no alias if abs (DR_STEP) is greater than or equal to
>       the bytes spanned by the combination of the two accesses.  */
> -  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_b) - init_a;
> +  *lower_bound_out = init_b + vect_get_scalar_dr_size (dr_info_b) - init_a;
>    return true;
>  }
>
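
Concretely: for two refs off the same base with inits 0 and 16 and
4-byte scalars, the lower bound comes out as 16 + 4 - 0 = 20, i.e. any
|step| >= 20 walks both refs past each other without overlap.  As
stand-alone arithmetic:

#include <stdio.h>

int
main (void)
{
  long init_a = 0, init_b = 16, size_b = 4; /* init_a <= init_b */
  long lower_bound = init_b + size_b - init_a;
  printf ("no alias if abs (step) >= %ld\n", lower_bound); /* 20 */
  return 0;
}
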
> @@ -3376,7 +3411,6 @@ vect_prune_runtime_alias_test_list (loop
>      {
>        int comp_res;
>        poly_uint64 lower_bound;
> -      struct data_reference *dr_a, *dr_b;
>        tree segment_length_a, segment_length_b;
>        unsigned HOST_WIDE_INT access_size_a, access_size_b;
>        unsigned int align_a, align_b;
> @@ -3404,25 +3438,26 @@ vect_prune_runtime_alias_test_list (loop
>           continue;
>         }
>
> -      dr_a = DDR_A (ddr);
> -      stmt_vec_info stmt_info_a = vect_dr_stmt (DDR_A (ddr));
> +      dr_vec_info *dr_info_a = DR_VECT_AUX (DDR_A (ddr));
> +      stmt_vec_info stmt_info_a = dr_info_a->stmt;
>
> -      dr_b = DDR_B (ddr);
> -      stmt_vec_info stmt_info_b = vect_dr_stmt (DDR_B (ddr));
> +      dr_vec_info *dr_info_b = DR_VECT_AUX (DDR_B (ddr));
> +      stmt_vec_info stmt_info_b = dr_info_b->stmt;
>
>        /* Skip the pair if inter-iteration dependencies are irrelevant
>          and intra-iteration dependencies are guaranteed to be honored.  */
>        if (ignore_step_p
> -         && (vect_pr

Patch

Index: gcc/tree-vectorizer.h
===================================================================
--- gcc/tree-vectorizer.h	2018-07-24 10:24:05.744462369 +0100
+++ gcc/tree-vectorizer.h	2018-07-24 10:24:08.924434128 +0100
@@ -1541,7 +1541,7 @@  extern tree vect_get_mask_type_for_stmt
 /* In tree-vect-data-refs.c.  */
 extern bool vect_can_force_dr_alignment_p (const_tree, unsigned int);
 extern enum dr_alignment_support vect_supportable_dr_alignment
-                                           (struct data_reference *, bool);
+  (stmt_vec_info, bool);
 extern tree vect_get_smallest_scalar_type (stmt_vec_info, HOST_WIDE_INT *,
                                            HOST_WIDE_INT *);
 extern bool vect_analyze_data_ref_dependences (loop_vec_info, unsigned int *);
Index: gcc/tree-vect-data-refs.c
===================================================================
--- gcc/tree-vect-data-refs.c	2018-07-24 10:24:05.740462405 +0100
+++ gcc/tree-vect-data-refs.c	2018-07-24 10:24:08.924434128 +0100
@@ -858,19 +858,19 @@  vect_record_base_alignments (vec_info *v
     }
 }
 
-/* Return the target alignment for the vectorized form of DR.  */
+/* Return the target alignment for the vectorized form of the load or store
+   in STMT_INFO.  */
 
 static unsigned int
-vect_calculate_target_alignment (struct data_reference *dr)
+vect_calculate_target_alignment (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   return targetm.vectorize.preferred_vector_alignment (vectype);
 }
 
 /* Function vect_compute_data_ref_alignment
 
-   Compute the misalignment of the data reference DR.
+   Compute the misalignment of the load or store in STMT_INFO.
 
    Output:
    1. dr_misalignment (STMT_INFO) is defined.
@@ -879,9 +879,9 @@  vect_calculate_target_alignment (struct
    only for trivial cases. TODO.  */
 
 static void
-vect_compute_data_ref_alignment (struct data_reference *dr)
+vect_compute_data_ref_alignment (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   vec_base_alignments *base_alignments = &stmt_info->vinfo->base_alignments;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
@@ -905,7 +905,7 @@  vect_compute_data_ref_alignment (struct
   bool step_preserves_misalignment_p;
 
   unsigned HOST_WIDE_INT vector_alignment
-    = vect_calculate_target_alignment (dr) / BITS_PER_UNIT;
+    = vect_calculate_target_alignment (stmt_info) / BITS_PER_UNIT;
   STMT_VINFO_TARGET_ALIGNMENT (stmt_info) = vector_alignment;
 
   /* No step for BB vectorization.  */
@@ -1053,28 +1053,28 @@  vect_compute_data_ref_alignment (struct
 }
 
 /* Function vect_update_misalignment_for_peel.
-   Sets DR's misalignment
-   - to 0 if it has the same alignment as DR_PEEL,
+   Sets the misalignment of the load or store in STMT_INFO
+   - to 0 if it has the same alignment as PEEL_STMT_INFO,
   - to the misalignment computed using NPEEL if DR's misalignment is known,
    - to -1 (unknown) otherwise.
 
-   DR - the data reference whose misalignment is to be adjusted.
-   DR_PEEL - the data reference whose misalignment is being made
-             zero in the vector loop by the peel.
+   STMT_INFO - the load or store whose misalignment is to be adjusted.
+   PEEL_STMT_INFO - the load or store whose misalignment is being made
+		    zero in the vector loop by the peel.
    NPEEL - the number of iterations in the peel loop if the misalignment
-           of DR_PEEL is known at compile time.  */
+	   of PEEL_STMT_INFO is known at compile time.  */
 
 static void
-vect_update_misalignment_for_peel (struct data_reference *dr,
-                                   struct data_reference *dr_peel, int npeel)
+vect_update_misalignment_for_peel (stmt_vec_info stmt_info,
+				   stmt_vec_info peel_stmt_info, int npeel)
 {
   unsigned int i;
   vec<dr_p> same_aligned_drs;
   struct data_reference *current_dr;
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
+  data_reference *dr_peel = STMT_VINFO_DATA_REF (peel_stmt_info);
   int dr_size = vect_get_scalar_dr_size (dr);
   int dr_peel_size = vect_get_scalar_dr_size (dr_peel);
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
-  stmt_vec_info peel_stmt_info = vect_dr_stmt (dr_peel);
 
  /* For interleaved data accesses the step in the loop must be multiplied by
      the size of the interleaving group.  */
@@ -1085,7 +1085,7 @@  vect_update_misalignment_for_peel (struc
 
   /* It can be assumed that the data refs with the same alignment as dr_peel
      are aligned in the vector loop.  */
-  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr_peel));
+  same_aligned_drs = STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info);
   FOR_EACH_VEC_ELT (same_aligned_drs, i, current_dr)
     {
       if (current_dr != dr)
@@ -1118,13 +1118,15 @@  vect_update_misalignment_for_peel (struc
 
 /* Function verify_data_ref_alignment
 
-   Return TRUE if DR can be handled with respect to alignment.  */
+   Return TRUE if the load or store in STMT_INFO can be handled with
+   respect to alignment.  */
 
 static bool
-verify_data_ref_alignment (data_reference_p dr)
+verify_data_ref_alignment (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   enum dr_alignment_support supportable_dr_alignment
-    = vect_supportable_dr_alignment (dr, false);
+    = vect_supportable_dr_alignment (stmt_info, false);
   if (!supportable_dr_alignment)
     {
       if (dump_enabled_p ())
@@ -1181,7 +1183,7 @@  vect_verify_datarefs_alignment (loop_vec
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      if (! verify_data_ref_alignment (dr))
+      if (! verify_data_ref_alignment (stmt_info))
 	return false;
     }
 
@@ -1203,13 +1205,13 @@  not_size_aligned (tree exp)
 
 /* Function vector_alignment_reachable_p
 
-   Return true if vector alignment for DR is reachable by peeling
-   a few loop iterations.  Return false otherwise.  */
+   Return true if the vector alignment is reachable for the load or store
+   in STMT_INFO by peeling a few loop iterations.  Return false otherwise.  */
 
 static bool
-vector_alignment_reachable_p (struct data_reference *dr)
+vector_alignment_reachable_p (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
 
   if (STMT_VINFO_GROUPED_ACCESS (stmt_info))
@@ -1270,16 +1272,16 @@  vector_alignment_reachable_p (struct dat
 }
 
 
-/* Calculate the cost of the memory access represented by DR.  */
+/* Calculate the cost of the memory access in STMT_INFO.  */
 
 static void
-vect_get_data_access_cost (struct data_reference *dr,
+vect_get_data_access_cost (stmt_vec_info stmt_info,
                            unsigned int *inside_cost,
                            unsigned int *outside_cost,
 			   stmt_vector_for_cost *body_cost_vec,
 			   stmt_vector_for_cost *prologue_cost_vec)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   int ncopies;
 
@@ -1303,7 +1305,7 @@  vect_get_data_access_cost (struct data_r
 
 typedef struct _vect_peel_info
 {
-  struct data_reference *dr;
+  stmt_vec_info stmt_info;
   int npeel;
   unsigned int count;
 } *vect_peel_info;
@@ -1337,16 +1339,17 @@  peel_info_hasher::equal (const _vect_pee
 }
 
 
-/* Insert DR into peeling hash table with NPEEL as key.  */
+/* Insert STMT_INFO into peeling hash table with NPEEL as key.  */
 
 static void
 vect_peeling_hash_insert (hash_table<peel_info_hasher> *peeling_htab,
-			  loop_vec_info loop_vinfo, struct data_reference *dr,
+			  loop_vec_info loop_vinfo, stmt_vec_info stmt_info,
                           int npeel)
 {
   struct _vect_peel_info elem, *slot;
   _vect_peel_info **new_slot;
-  bool supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
+  bool supportable_dr_alignment
+    = vect_supportable_dr_alignment (stmt_info, true);
 
   elem.npeel = npeel;
   slot = peeling_htab->find (&elem);
@@ -1356,7 +1359,7 @@  vect_peeling_hash_insert (hash_table<pee
     {
       slot = XNEW (struct _vect_peel_info);
       slot->npeel = npeel;
-      slot->dr = dr;
+      slot->stmt_info = stmt_info;
       slot->count = 1;
       new_slot = peeling_htab->find_slot (slot, INSERT);
       *new_slot = slot;
@@ -1383,19 +1386,19 @@  vect_peeling_hash_get_most_frequent (_ve
     {
       max->peel_info.npeel = elem->npeel;
       max->peel_info.count = elem->count;
-      max->peel_info.dr = elem->dr;
+      max->peel_info.stmt_info = elem->stmt_info;
     }
 
   return 1;
 }
 
 /* Get the costs of peeling NPEEL iterations checking data access costs
-   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume DR0's
-   misalignment will be zero after peeling.  */
+   for all data refs.  If UNKNOWN_MISALIGNMENT is true, we assume
+   PEEL_STMT_INFO's misalignment will be zero after peeling.  */
 
 static void
 vect_get_peeling_costs_all_drs (vec<data_reference_p> datarefs,
-				struct data_reference *dr0,
+				stmt_vec_info peel_stmt_info,
 				unsigned int *inside_cost,
 				unsigned int *outside_cost,
 				stmt_vector_for_cost *body_cost_vec,
@@ -1403,8 +1406,6 @@  vect_get_peeling_costs_all_drs (vec<data
 				unsigned int npeel,
 				bool unknown_misalignment)
 {
-  stmt_vec_info peel_stmt_info = (dr0 ? vect_dr_stmt (dr0)
-				  : NULL_STMT_VEC_INFO);
   unsigned i;
   data_reference *dr;
 
@@ -1433,8 +1434,8 @@  vect_get_peeling_costs_all_drs (vec<data
       else if (unknown_misalignment && stmt_info == peel_stmt_info)
 	set_dr_misalignment (stmt_info, 0);
       else
-	vect_update_misalignment_for_peel (dr, dr0, npeel);
-      vect_get_data_access_cost (dr, inside_cost, outside_cost,
+	vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
+      vect_get_data_access_cost (stmt_info, inside_cost, outside_cost,
 				 body_cost_vec, prologue_cost_vec);
       set_dr_misalignment (stmt_info, save_misalignment);
     }
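
Worth noting for reviewers: the loop above costs a hypothetical peel by
temporarily overwriting each ref's misalignment and restoring it
afterwards, so nothing leaks into later analysis.  Reduced to
stand-alone C, with update_for_peel and cost_access as invented
stand-ins for vect_update_misalignment_for_peel and
vect_get_data_access_cost:

static void
update_for_peel (int *misalignment, int npeel)
{
  *misalignment = (*misalignment + npeel * 4) & 15; /* toy update */
}

static unsigned
cost_access (int misalignment)
{
  return misalignment == 0 ? 1 : 3; /* toy cost model */
}

static unsigned
cost_of_hypothetical_peel (int *misalignment, int npeel)
{
  int saved = *misalignment;
  update_for_peel (misalignment, npeel);
  unsigned cost = cost_access (*misalignment);
  *misalignment = saved; /* leave the analysis state untouched */
  return cost;
}
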
@@ -1450,7 +1451,7 @@  vect_peeling_hash_get_lowest_cost (_vect
   vect_peel_info elem = *slot;
   int dummy;
   unsigned int inside_cost = 0, outside_cost = 0;
-  stmt_vec_info stmt_info = vect_dr_stmt (elem->dr);
+  stmt_vec_info stmt_info = elem->stmt_info;
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   stmt_vector_for_cost prologue_cost_vec, body_cost_vec,
 		       epilogue_cost_vec;
@@ -1460,7 +1461,7 @@  vect_peeling_hash_get_lowest_cost (_vect
   epilogue_cost_vec.create (2);
 
   vect_get_peeling_costs_all_drs (LOOP_VINFO_DATAREFS (loop_vinfo),
-				  elem->dr, &inside_cost, &outside_cost,
+				  elem->stmt_info, &inside_cost, &outside_cost,
 				  &body_cost_vec, &prologue_cost_vec,
 				  elem->npeel, false);
 
@@ -1484,7 +1485,7 @@  vect_peeling_hash_get_lowest_cost (_vect
     {
       min->inside_cost = inside_cost;
       min->outside_cost = outside_cost;
-      min->peel_info.dr = elem->dr;
+      min->peel_info.stmt_info = elem->stmt_info;
       min->peel_info.npeel = elem->npeel;
       min->peel_info.count = elem->count;
     }
@@ -1503,7 +1504,7 @@  vect_peeling_hash_choose_best_peeling (h
 {
    struct _vect_peel_extended_info res;
 
-   res.peel_info.dr = NULL;
+   res.peel_info.stmt_info = NULL;
 
    if (!unlimited_cost_model (LOOP_VINFO_LOOP (loop_vinfo)))
      {
@@ -1527,8 +1528,8 @@  vect_peeling_hash_choose_best_peeling (h
 /* Return true if the new peeling NPEEL is supported.  */
 
 static bool
-vect_peeling_supportable (loop_vec_info loop_vinfo, struct data_reference *dr0,
-			  unsigned npeel)
+vect_peeling_supportable (loop_vec_info loop_vinfo,
+			  stmt_vec_info peel_stmt_info, unsigned npeel)
 {
   unsigned i;
   struct data_reference *dr = NULL;
@@ -1540,10 +1541,10 @@  vect_peeling_supportable (loop_vec_info
     {
       int save_misalignment;
 
-      if (dr == dr0)
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      if (stmt_info == peel_stmt_info)
 	continue;
 
-      stmt_vec_info stmt_info = vect_dr_stmt (dr);
       /* For interleaving, only the alignment of the first access
 	 matters.  */
       if (STMT_VINFO_GROUPED_ACCESS (stmt_info)
@@ -1557,8 +1558,9 @@  vect_peeling_supportable (loop_vec_info
 	continue;
 
       save_misalignment = dr_misalignment (stmt_info);
-      vect_update_misalignment_for_peel (dr, dr0, npeel);
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+      vect_update_misalignment_for_peel (stmt_info, peel_stmt_info, npeel);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (stmt_info, false);
       set_dr_misalignment (stmt_info, save_misalignment);
 
       if (!supportable_dr_alignment)
@@ -1665,8 +1667,9 @@  vect_enhance_data_refs_alignment (loop_v
   vec<data_reference_p> datarefs = LOOP_VINFO_DATAREFS (loop_vinfo);
   struct loop *loop = LOOP_VINFO_LOOP (loop_vinfo);
   enum dr_alignment_support supportable_dr_alignment;
-  struct data_reference *dr0 = NULL, *first_store = NULL;
   struct data_reference *dr;
+  stmt_vec_info peel_stmt_info = NULL;
+  stmt_vec_info first_store_info = NULL;
   unsigned int i, j;
   bool do_peeling = false;
   bool do_versioning = false;
@@ -1675,7 +1678,7 @@  vect_enhance_data_refs_alignment (loop_v
   bool one_misalignment_known = false;
   bool one_misalignment_unknown = false;
   bool one_dr_unsupportable = false;
-  struct data_reference *unsupportable_dr = NULL;
+  stmt_vec_info unsupportable_stmt_info = NULL;
   poly_uint64 vf = LOOP_VINFO_VECT_FACTOR (loop_vinfo);
   unsigned possible_npeel_number = 1;
   tree vectype;
@@ -1745,8 +1748,9 @@  vect_enhance_data_refs_alignment (loop_v
 	  && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 	continue;
 
-      supportable_dr_alignment = vect_supportable_dr_alignment (dr, true);
-      do_peeling = vector_alignment_reachable_p (dr);
+      supportable_dr_alignment
+	= vect_supportable_dr_alignment (stmt_info, true);
+      do_peeling = vector_alignment_reachable_p (stmt_info);
       if (do_peeling)
         {
 	  if (known_alignment_for_access_p (stmt_info))
@@ -1796,7 +1800,7 @@  vect_enhance_data_refs_alignment (loop_v
               for (j = 0; j < possible_npeel_number; j++)
                 {
                   vect_peeling_hash_insert (&peeling_htab, loop_vinfo,
-					    dr, npeel_tmp);
+					    stmt_info, npeel_tmp);
 		  npeel_tmp += target_align / dr_size;
                 }
 
@@ -1810,11 +1814,11 @@  vect_enhance_data_refs_alignment (loop_v
                  stores over load.  */
 	      unsigned same_align_drs
 		= STMT_VINFO_SAME_ALIGN_REFS (stmt_info).length ();
-	      if (!dr0
+	      if (!peel_stmt_info
 		  || same_align_drs_max < same_align_drs)
 		{
 		  same_align_drs_max = same_align_drs;
-		  dr0 = dr;
+		  peel_stmt_info = stmt_info;
 		}
 	      /* For data-refs with the same number of related
 		 accesses prefer the one where the misalign
@@ -1822,6 +1826,7 @@  vect_enhance_data_refs_alignment (loop_v
 	      else if (same_align_drs_max == same_align_drs)
 		{
 		  struct loop *ivloop0, *ivloop;
+		  data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
 		  ivloop0 = outermost_invariant_loop_for_expr
 		    (loop, DR_BASE_ADDRESS (dr0));
 		  ivloop = outermost_invariant_loop_for_expr
@@ -1829,7 +1834,7 @@  vect_enhance_data_refs_alignment (loop_v
 		  if ((ivloop && !ivloop0)
 		      || (ivloop && ivloop0
 			  && flow_loop_nested_p (ivloop, ivloop0)))
-		    dr0 = dr;
+		    peel_stmt_info = stmt_info;
 		}
 
 	      one_misalignment_unknown = true;
@@ -1839,11 +1844,11 @@  vect_enhance_data_refs_alignment (loop_v
 	      if (!supportable_dr_alignment)
 	      {
 		one_dr_unsupportable = true;
-		unsupportable_dr = dr;
+		unsupportable_stmt_info = stmt_info;
 	      }
 
-	      if (!first_store && DR_IS_WRITE (dr))
-		first_store = dr;
+	      if (!first_store_info && DR_IS_WRITE (dr))
+		first_store_info = stmt_info;
             }
         }
       else
@@ -1886,16 +1891,16 @@  vect_enhance_data_refs_alignment (loop_v
 
       stmt_vector_for_cost dummy;
       dummy.create (2);
-      vect_get_peeling_costs_all_drs (datarefs, dr0,
+      vect_get_peeling_costs_all_drs (datarefs, peel_stmt_info,
 				      &load_inside_cost,
 				      &load_outside_cost,
 				      &dummy, &dummy, estimated_npeels, true);
       dummy.release ();
 
-      if (first_store)
+      if (first_store_info)
 	{
 	  dummy.create (2);
-	  vect_get_peeling_costs_all_drs (datarefs, first_store,
+	  vect_get_peeling_costs_all_drs (datarefs, first_store_info,
 					  &store_inside_cost,
 					  &store_outside_cost,
 					  &dummy, &dummy,
@@ -1912,7 +1917,7 @@  vect_enhance_data_refs_alignment (loop_v
 	  || (load_inside_cost == store_inside_cost
 	      && load_outside_cost > store_outside_cost))
 	{
-	  dr0 = first_store;
+	  peel_stmt_info = first_store_info;
 	  peel_for_unknown_alignment.inside_cost = store_inside_cost;
 	  peel_for_unknown_alignment.outside_cost = store_outside_cost;
 	}
@@ -1936,18 +1941,18 @@  vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       peel_for_unknown_alignment.peel_info.count = 1
-	+ STMT_VINFO_SAME_ALIGN_REFS (vect_dr_stmt (dr0)).length ();
+	+ STMT_VINFO_SAME_ALIGN_REFS (peel_stmt_info).length ();
     }
 
   peel_for_unknown_alignment.peel_info.npeel = 0;
-  peel_for_unknown_alignment.peel_info.dr = dr0;
+  peel_for_unknown_alignment.peel_info.stmt_info = peel_stmt_info;
 
   best_peel = peel_for_unknown_alignment;
 
   peel_for_known_alignment.inside_cost = INT_MAX;
   peel_for_known_alignment.outside_cost = INT_MAX;
   peel_for_known_alignment.peel_info.count = 0;
-  peel_for_known_alignment.peel_info.dr = NULL;
+  peel_for_known_alignment.peel_info.stmt_info = NULL;
 
   if (do_peeling && one_misalignment_known)
     {
@@ -1959,7 +1964,7 @@  vect_enhance_data_refs_alignment (loop_v
     }
 
   /* Compare costs of peeling for known and unknown alignment. */
-  if (peel_for_known_alignment.peel_info.dr != NULL
+  if (peel_for_known_alignment.peel_info.stmt_info
       && peel_for_unknown_alignment.inside_cost
       >= peel_for_known_alignment.inside_cost)
     {
@@ -1976,7 +1981,7 @@  vect_enhance_data_refs_alignment (loop_v
      since we'd have to discard a chosen peeling except when it accidentally
      aligned the unsupportable data ref.  */
   if (one_dr_unsupportable)
-    dr0 = unsupportable_dr;
+    peel_stmt_info = unsupportable_stmt_info;
   else if (do_peeling)
     {
       /* Calculate the penalty for no peeling, i.e. leaving everything as-is.
@@ -2007,7 +2012,7 @@  vect_enhance_data_refs_alignment (loop_v
       epilogue_cost_vec.release ();
 
       npeel = best_peel.peel_info.npeel;
-      dr0 = best_peel.peel_info.dr;
+      peel_stmt_info = best_peel.peel_info.stmt_info;
 
       /* If no peeling is not more expensive than the best peeling we
 	 have so far, don't perform any peeling.  */
@@ -2017,8 +2022,8 @@  vect_enhance_data_refs_alignment (loop_v
 
   if (do_peeling)
     {
-      stmt_vec_info peel_stmt_info = vect_dr_stmt (dr0);
       vectype = STMT_VINFO_VECTYPE (peel_stmt_info);
+      data_reference *dr0 = STMT_VINFO_DATA_REF (peel_stmt_info);
 
       if (known_alignment_for_access_p (peel_stmt_info))
         {
@@ -2052,7 +2057,7 @@  vect_enhance_data_refs_alignment (loop_v
         }
 
       /* Ensure that all datarefs can be vectorized after the peel.  */
-      if (!vect_peeling_supportable (loop_vinfo, dr0, npeel))
+      if (!vect_peeling_supportable (loop_vinfo, peel_stmt_info, npeel))
 	do_peeling = false;
 
       /* Check if all datarefs are supportable and log.  */
@@ -2125,7 +2130,8 @@  vect_enhance_data_refs_alignment (loop_v
 		    && !STMT_VINFO_GROUPED_ACCESS (stmt_info))
 		  continue;
 
-		vect_update_misalignment_for_peel (dr, dr0, npeel);
+		vect_update_misalignment_for_peel (stmt_info,
+						   peel_stmt_info, npeel);
 	      }
 
           LOOP_VINFO_UNALIGNED_DR (loop_vinfo) = dr0;
@@ -2188,7 +2194,8 @@  vect_enhance_data_refs_alignment (loop_v
 	      break;
 	    }
 
-	  supportable_dr_alignment = vect_supportable_dr_alignment (dr, false);
+	  supportable_dr_alignment
+	    = vect_supportable_dr_alignment (stmt_info, false);
 
           if (!supportable_dr_alignment)
             {
@@ -2203,7 +2210,6 @@  vect_enhance_data_refs_alignment (loop_v
                   break;
                 }
 
-	      stmt_info = vect_dr_stmt (dr);
 	      vectype = STMT_VINFO_VECTYPE (stmt_info);
 	      gcc_assert (vectype);
 
@@ -2314,9 +2320,9 @@  vect_find_same_alignment_drs (struct dat
   if (maybe_ne (diff, 0))
     {
       /* Get the wider of the two alignments.  */
-      unsigned int align_a = (vect_calculate_target_alignment (dra)
+      unsigned int align_a = (vect_calculate_target_alignment (stmtinfo_a)
 			      / BITS_PER_UNIT);
-      unsigned int align_b = (vect_calculate_target_alignment (drb)
+      unsigned int align_b = (vect_calculate_target_alignment (stmtinfo_b)
 			      / BITS_PER_UNIT);
       unsigned int max_align = MAX (align_a, align_b);
 
@@ -2366,7 +2372,7 @@  vect_analyze_data_refs_alignment (loop_v
     {
       stmt_vec_info stmt_info = vect_dr_stmt (dr);
       if (STMT_VINFO_VECTORIZABLE (stmt_info))
-	vect_compute_data_ref_alignment (dr);
+	vect_compute_data_ref_alignment (stmt_info);
     }
 
   return true;
@@ -2382,17 +2388,16 @@  vect_slp_analyze_and_verify_node_alignme
      the node is permuted in which case we start from the first
      element in the group.  */
   stmt_vec_info first_stmt_info = SLP_TREE_SCALAR_STMTS (node)[0];
-  data_reference_p first_dr = STMT_VINFO_DATA_REF (first_stmt_info);
+  stmt_vec_info stmt_info = first_stmt_info;
   if (SLP_TREE_LOAD_PERMUTATION (node).exists ())
-    first_stmt_info = DR_GROUP_FIRST_ELEMENT (first_stmt_info);
+    stmt_info = DR_GROUP_FIRST_ELEMENT (stmt_info);
 
-  data_reference_p dr = STMT_VINFO_DATA_REF (first_stmt_info);
-  vect_compute_data_ref_alignment (dr);
+  vect_compute_data_ref_alignment (stmt_info);
   /* For creating the data-ref pointer we need alignment of the
      first element anyway.  */
-  if (dr != first_dr)
-    vect_compute_data_ref_alignment (first_dr);
-  if (! verify_data_ref_alignment (dr))
+  if (stmt_info != first_stmt_info)
+    vect_compute_data_ref_alignment (first_stmt_info);
+  if (! verify_data_ref_alignment (first_stmt_info))
     {
       if (dump_enabled_p ())
 	dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
@@ -2430,19 +2435,19 @@  vect_slp_analyze_and_verify_instance_ali
 }
 
 
-/* Analyze groups of accesses: check that DR belongs to a group of
-   accesses of legal size, step, etc.  Detect gaps, single element
-   interleaving, and other special cases. Set grouped access info.
-   Collect groups of strided stores for further use in SLP analysis.
-   Worker for vect_analyze_group_access.  */
+/* Analyze groups of accesses: check that the load or store in STMT_INFO
+   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
+   single element interleaving, and other special cases.  Set grouped
+   access info.  Collect groups of strided stores for further use in
+   SLP analysis.  Worker for vect_analyze_group_access.  */
 
 static bool
-vect_analyze_group_access_1 (struct data_reference *dr)
+vect_analyze_group_access_1 (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
   HOST_WIDE_INT type_size = TREE_INT_CST_LOW (TYPE_SIZE_UNIT (scalar_type));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   bb_vec_info bb_vinfo = STMT_VINFO_BB_VINFO (stmt_info);
   HOST_WIDE_INT dr_step = -1;
@@ -2519,7 +2524,7 @@  vect_analyze_group_access_1 (struct data
       if (bb_vinfo)
 	{
 	  /* Mark the statement as unvectorizable.  */
-	  STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
+	  STMT_VINFO_VECTORIZABLE (stmt_info) = false;
 	  return true;
 	}
 
@@ -2667,18 +2672,18 @@  vect_analyze_group_access_1 (struct data
   return true;
 }
 
-/* Analyze groups of accesses: check that DR belongs to a group of
-   accesses of legal size, step, etc.  Detect gaps, single element
-   interleaving, and other special cases. Set grouped access info.
-   Collect groups of strided stores for further use in SLP analysis.  */
+/* Analyze groups of accesses: check that the load or store in STMT_INFO
+   belongs to a group of accesses of legal size, step, etc.  Detect gaps,
+   single element interleaving, and other special cases.  Set grouped
+   access info.  Collect groups of strided stores for further use in
+   SLP analysis.  */
 
 static bool
-vect_analyze_group_access (struct data_reference *dr)
+vect_analyze_group_access (stmt_vec_info stmt_info)
 {
-  if (!vect_analyze_group_access_1 (dr))
+  if (!vect_analyze_group_access_1 (stmt_info))
     {
       /* Dissolve the group if present.  */
-      stmt_vec_info stmt_info = DR_GROUP_FIRST_ELEMENT (vect_dr_stmt (dr));
       while (stmt_info)
 	{
 	  stmt_vec_info next = DR_GROUP_NEXT_ELEMENT (stmt_info);
@@ -2691,16 +2696,16 @@  vect_analyze_group_access (struct data_r
   return true;
 }
 
-/* Analyze the access pattern of the data-reference DR.
+/* Analyze the access pattern of the load or store in STMT_INFO.
    In case of non-consecutive accesses call vect_analyze_group_access() to
    analyze groups of accesses.  */
 
 static bool
-vect_analyze_data_ref_access (struct data_reference *dr)
+vect_analyze_data_ref_access (stmt_vec_info stmt_info)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree step = DR_STEP (dr);
   tree scalar_type = TREE_TYPE (DR_REF (dr));
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
   struct loop *loop = NULL;
 
@@ -2780,10 +2785,10 @@  vect_analyze_data_ref_access (struct dat
   if (TREE_CODE (step) != INTEGER_CST)
     return (STMT_VINFO_STRIDED_P (stmt_info)
 	    && (!STMT_VINFO_GROUPED_ACCESS (stmt_info)
-		|| vect_analyze_group_access (dr)));
+		|| vect_analyze_group_access (stmt_info)));
 
   /* Not consecutive access - check if it's a part of interleaving group.  */
-  return vect_analyze_group_access (dr);
+  return vect_analyze_group_access (stmt_info);
 }
 
 /* Compare two data-references DRA and DRB to group them into chunks
@@ -3062,25 +3067,28 @@  vect_analyze_data_ref_accesses (vec_info
     }
 
   FOR_EACH_VEC_ELT (datarefs_copy, i, dr)
-    if (STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr))
-        && !vect_analyze_data_ref_access (dr))
-      {
-	if (dump_enabled_p ())
-	  dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
-	                   "not vectorized: complicated access pattern.\n");
+    {
+      stmt_vec_info stmt_info = vect_dr_stmt (dr);
+      if (STMT_VINFO_VECTORIZABLE (stmt_info)
+	  && !vect_analyze_data_ref_access (stmt_info))
+	{
+	  if (dump_enabled_p ())
+	    dump_printf_loc (MSG_MISSED_OPTIMIZATION, vect_location,
+			     "not vectorized: complicated access pattern.\n");
 
-        if (is_a <bb_vec_info> (vinfo))
-	  {
-	    /* Mark the statement as not vectorizable.  */
-	    STMT_VINFO_VECTORIZABLE (vect_dr_stmt (dr)) = false;
-	    continue;
-	  }
-        else
-	  {
-	    datarefs_copy.release ();
-	    return false;
-	  }
-      }
+	  if (is_a <bb_vec_info> (vinfo))
+	    {
+	      /* Mark the statement as not vectorizable.  */
+	      STMT_VINFO_VECTORIZABLE (stmt_info) = false;
+	      continue;
+	    }
+	  else
+	    {
+	      datarefs_copy.release ();
+	      return false;
+	    }
+	}
+    }
 
   datarefs_copy.release ();
   return true;
@@ -3089,7 +3097,7 @@  vect_analyze_data_ref_accesses (vec_info
 /* Function vect_vfa_segment_size.
 
    Input:
-     DR: The data reference.
+     STMT_INFO: the load or store statement.
      LENGTH_FACTOR: segment length to consider.
 
    Return a value suitable for the dr_with_seg_len::seg_len field.
@@ -3098,8 +3106,9 @@  vect_analyze_data_ref_accesses (vec_info
    the size of the access; in effect it only describes the first byte.  */
 
 static tree
-vect_vfa_segment_size (struct data_reference *dr, tree length_factor)
+vect_vfa_segment_size (stmt_vec_info stmt_info, tree length_factor)
 {
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   length_factor = size_binop (MINUS_EXPR,
 			      fold_convert (sizetype, length_factor),
 			      size_one_node);
@@ -3107,23 +3116,23 @@  vect_vfa_segment_size (struct data_refer
 		     length_factor);
 }
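(A worked instance of the relationship documented above, with assumed
numbers: for a contiguous 4-byte access with step 4 and a length factor
of 16, the segment length describes (16 - 1) * 4 = 60 bytes of pointer
travel; vect_vfa_access_size below contributes the size of the access
itself, for a worst case of 60 + 4 = 64 bytes covered.)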
 
-/* Return a value that, when added to abs (vect_vfa_segment_size (dr)),
+/* Return a value that, when added to abs (vect_vfa_segment_size (STMT_INFO)),
    gives the worst-case number of bytes covered by the segment.  */
 
 static unsigned HOST_WIDE_INT
-vect_vfa_access_size (data_reference *dr)
+vect_vfa_access_size (stmt_vec_info stmt_vinfo)
 {
-  stmt_vec_info stmt_vinfo = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_vinfo);
   tree ref_type = TREE_TYPE (DR_REF (dr));
   unsigned HOST_WIDE_INT ref_size = tree_to_uhwi (TYPE_SIZE_UNIT (ref_type));
   unsigned HOST_WIDE_INT access_size = ref_size;
   if (DR_GROUP_FIRST_ELEMENT (stmt_vinfo))
     {
-      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == vect_dr_stmt (dr));
+      gcc_assert (DR_GROUP_FIRST_ELEMENT (stmt_vinfo) == stmt_vinfo);
       access_size *= DR_GROUP_SIZE (stmt_vinfo) - DR_GROUP_GAP (stmt_vinfo);
     }
   if (STMT_VINFO_VEC_STMT (stmt_vinfo)
-      && (vect_supportable_dr_alignment (dr, false)
+      && (vect_supportable_dr_alignment (stmt_vinfo, false)
 	  == dr_explicit_realign_optimized))
     {
       /* We might access a full vector's worth.  */
@@ -3281,13 +3290,14 @@  vect_check_lower_bound (loop_vec_info lo
   LOOP_VINFO_LOWER_BOUNDS (loop_vinfo).safe_push (lower_bound);
 }
 
-/* Return true if it's unlikely that the step of the vectorized form of DR
-   will span fewer than GAP bytes.  */
+/* Return true if it's unlikely that the step of the vectorized form of
+   the load or store in STMT_INFO will span fewer than GAP bytes.  */
 
 static bool
-vect_small_gap_p (loop_vec_info loop_vinfo, data_reference *dr, poly_int64 gap)
+vect_small_gap_p (stmt_vec_info stmt_info, poly_int64 gap)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   HOST_WIDE_INT count
     = estimated_poly_value (LOOP_VINFO_VECT_FACTOR (loop_vinfo));
   if (DR_GROUP_FIRST_ELEMENT (stmt_info))
@@ -3295,16 +3305,20 @@  vect_small_gap_p (loop_vec_info loop_vin
   return estimated_poly_value (gap) <= count * vect_get_scalar_dr_size (dr);
 }
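(Worked numbers for the bound above, illustrative only: with an
estimated vectorization factor of 16 and a 4-byte scalar access, a gap
of up to 16 * 4 = 64 bytes counts as small; for the first statement of
a group, the count is further scaled by the group size, so a group of
4 would raise the bound to 256 bytes.)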
 
-/* Return true if we know that there is no alias between DR_A and DR_B
-   when abs (DR_STEP (DR_A)) >= N for some N.  When returning true, set
-   *LOWER_BOUND_OUT to this N.  */
+/* Return true if we know that there is no alias between the loads and
+   stores in STMT_INFO_A and STMT_INFO_B when the absolute step of
+   STMT_INFO_A's access is >= some N.  When returning true,
+   set *LOWER_BOUND_OUT to this N.  */
 
 static bool
-vectorizable_with_step_bound_p (data_reference *dr_a, data_reference *dr_b,
+vectorizable_with_step_bound_p (stmt_vec_info stmt_info_a,
+				stmt_vec_info stmt_info_b,
 				poly_uint64 *lower_bound_out)
 {
   /* Check that there is a constant gap of known sign between DR_A
      and DR_B.  */
+  data_reference *dr_a = STMT_VINFO_DATA_REF (stmt_info_a);
+  data_reference *dr_b = STMT_VINFO_DATA_REF (stmt_info_b);
   poly_int64 init_a, init_b;
   if (!operand_equal_p (DR_BASE_ADDRESS (dr_a), DR_BASE_ADDRESS (dr_b), 0)
       || !operand_equal_p (DR_OFFSET (dr_a), DR_OFFSET (dr_b), 0)
@@ -3324,8 +3338,7 @@  vectorizable_with_step_bound_p (data_ref
   /* If the two accesses could be dependent within a scalar iteration,
      make sure that we'd retain their order.  */
   if (maybe_gt (init_a + vect_get_scalar_dr_size (dr_a), init_b)
-      && !vect_preserves_scalar_order_p (vect_dr_stmt (dr_a),
-					 vect_dr_stmt (dr_b)))
+      && !vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b))
     return false;
 
   /* There is no alias if abs (DR_STEP) is greater than or equal to
@@ -3426,7 +3439,8 @@  vect_prune_runtime_alias_test_list (loop
 	 and intra-iteration dependencies are guaranteed to be honored.  */
       if (ignore_step_p
 	  && (vect_preserves_scalar_order_p (stmt_info_a, stmt_info_b)
-	      || vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)))
+	      || vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
+						 &lower_bound)))
 	{
 	  if (dump_enabled_p ())
 	    {
@@ -3446,9 +3460,10 @@  vect_prune_runtime_alias_test_list (loop
 	 than the number of bytes handled by one vector iteration.)  */
       if (!ignore_step_p
 	  && TREE_CODE (DR_STEP (dr_a)) != INTEGER_CST
-	  && vectorizable_with_step_bound_p (dr_a, dr_b, &lower_bound)
-	  && (vect_small_gap_p (loop_vinfo, dr_a, lower_bound)
-	      || vect_small_gap_p (loop_vinfo, dr_b, lower_bound)))
+	  && vectorizable_with_step_bound_p (stmt_info_a, stmt_info_b,
+					     &lower_bound)
+	  && (vect_small_gap_p (stmt_info_a, lower_bound)
+	      || vect_small_gap_p (stmt_info_b, lower_bound)))
 	{
 	  bool unsigned_p = dr_known_forward_stride_p (dr_a);
 	  if (dump_enabled_p ())
@@ -3501,11 +3516,13 @@  vect_prune_runtime_alias_test_list (loop
 	    length_factor = scalar_loop_iters;
 	  else
 	    length_factor = size_int (vect_factor);
-	  segment_length_a = vect_vfa_segment_size (dr_a, length_factor);
-	  segment_length_b = vect_vfa_segment_size (dr_b, length_factor);
+	  segment_length_a = vect_vfa_segment_size (stmt_info_a,
+						    length_factor);
+	  segment_length_b = vect_vfa_segment_size (stmt_info_b,
+						    length_factor);
 	}
-      access_size_a = vect_vfa_access_size (dr_a);
-      access_size_b = vect_vfa_access_size (dr_b);
+      access_size_a = vect_vfa_access_size (stmt_info_a);
+      access_size_b = vect_vfa_access_size (stmt_info_b);
       align_a = vect_vfa_align (dr_a);
       align_b = vect_vfa_align (dr_b);
 
@@ -4463,12 +4480,12 @@  vect_get_new_ssa_name (tree type, enum v
   return new_vect_var;
 }
 
-/* Duplicate ptr info and set alignment/misaligment on NAME from DR.  */
+/* Duplicate ptr info and set alignment/misalignment on NAME from STMT_INFO.  */
 
 static void
-vect_duplicate_ssa_name_ptr_info (tree name, data_reference *dr)
+vect_duplicate_ssa_name_ptr_info (tree name, stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   duplicate_ssa_name_ptr_info (name, DR_PTR_INFO (dr));
   int misalign = dr_misalignment (stmt_info);
   if (misalign == DR_MISALIGNMENT_UNKNOWN)
@@ -4579,7 +4596,7 @@  vect_create_addr_base_for_vector_ref (st
       && TREE_CODE (addr_base) == SSA_NAME
       && !SSA_NAME_PTR_INFO (addr_base))
     {
-      vect_duplicate_ssa_name_ptr_info (addr_base, dr);
+      vect_duplicate_ssa_name_ptr_info (addr_base, stmt_info);
       if (offset || byte_offset)
 	mark_ptr_info_alignment_unknown (SSA_NAME_PTR_INFO (addr_base));
     }
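(To make the effect concrete, with assumed numbers: if dr_misalignment
(stmt_info) is 4 against a 16-byte target alignment, the duplicated
pointer info records that addr_base % 16 == 4, while a misalignment of
DR_MISALIGNMENT_UNKNOWN instead marks the alignment unknown, as the
offset case above does.)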
@@ -4845,8 +4862,8 @@  vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -4875,8 +4892,8 @@  vect_create_data_ref_ptr (stmt_vec_info
       /* Copy the points-to information if it exists. */
       if (DR_PTR_INFO (dr))
 	{
-	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, dr);
-	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, dr);
+	  vect_duplicate_ssa_name_ptr_info (indx_before_incr, stmt_info);
+	  vect_duplicate_ssa_name_ptr_info (indx_after_incr, stmt_info);
 	}
       if (ptr_incr)
 	*ptr_incr = incr;
@@ -6434,17 +6451,17 @@  vect_can_force_dr_alignment_p (const_tre
 }
 
 
-/* Return whether the data reference DR is supported with respect to its
-   alignment.
+/* Return whether the load or store in STMT_INFO is supported with
+   respect to its alignment.
-   If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
-   it is aligned, i.e., check if it is possible to vectorize it with different
-   alignment.  */
+   If CHECK_ALIGNED_ACCESSES is TRUE, check if the access is supported even
+   if it is aligned, i.e., check if it is possible to vectorize it with
+   different alignment.  */
 
 enum dr_alignment_support
-vect_supportable_dr_alignment (struct data_reference *dr,
+vect_supportable_dr_alignment (stmt_vec_info stmt_info,
                                bool check_aligned_accesses)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
+  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   tree vectype = STMT_VINFO_VECTYPE (stmt_info);
   machine_mode mode = TYPE_MODE (vectype);
   loop_vec_info loop_vinfo = STMT_VINFO_LOOP_VINFO (stmt_info);
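(A hypothetical caller, for illustration only: the data_reference no
longer appears at the call site, since everything is derived from the
statement.  dr_unaligned_unsupported is one of the existing
dr_alignment_support values.)

  if (vect_supportable_dr_alignment (stmt_info, false)
      == dr_unaligned_unsupported)
    return false;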
Index: gcc/tree-vect-stmts.c
===================================================================
--- gcc/tree-vect-stmts.c	2018-07-24 10:24:05.744462369 +0100
+++ gcc/tree-vect-stmts.c	2018-07-24 10:24:08.924434128 +0100
@@ -1057,8 +1057,8 @@  vect_get_store_cost (stmt_vec_info stmt_
 		     unsigned int *inside_cost,
 		     stmt_vector_for_cost *body_cost_vec)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (stmt_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -1237,8 +1237,8 @@  vect_get_load_cost (stmt_vec_info stmt_i
 		    stmt_vector_for_cost *body_cost_vec,
 		    bool record_prologue_costs)
 {
-  data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
-  int alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  int alignment_support_scheme
+    = vect_supportable_dr_alignment (stmt_info, false);
 
   switch (alignment_support_scheme)
     {
@@ -2340,7 +2340,6 @@  get_negative_load_store_type (stmt_vec_i
 			      vec_load_store_type vls_type,
 			      unsigned int ncopies)
 {
-  struct data_reference *dr = STMT_VINFO_DATA_REF (stmt_info);
   dr_alignment_support alignment_support_scheme;
 
   if (ncopies > 1)
@@ -2351,7 +2350,7 @@  get_negative_load_store_type (stmt_vec_i
       return VMAT_ELEMENTWISE;
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (dr, false);
+  alignment_support_scheme = vect_supportable_dr_alignment (stmt_info, false);
   if (alignment_support_scheme != dr_aligned
       && alignment_support_scheme != dr_unaligned_supported)
     {
@@ -2924,15 +2923,14 @@  vect_get_strided_load_store_ops (stmt_ve
 }
 
 /* Return the amount that should be added to a vector pointer to move
-   to the next or previous copy of AGGR_TYPE.  DR is the data reference
-   being vectorized and MEMORY_ACCESS_TYPE describes the type of
+   to the next or previous copy of AGGR_TYPE.  STMT_INFO is the load or
+   store being vectorized and MEMORY_ACCESS_TYPE describes the type of
    vectorization.  */
 
 static tree
-vect_get_data_ptr_increment (data_reference *dr, tree aggr_type,
+vect_get_data_ptr_increment (stmt_vec_info stmt_info, tree aggr_type,
 			     vect_memory_access_type memory_access_type)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   if (memory_access_type == VMAT_INVARIANT)
     return size_zero_node;
 
@@ -6171,12 +6169,12 @@  vectorizable_operation (stmt_vec_info st
   return true;
 }
 
-/* A helper function to ensure data reference DR's base alignment.  */
+/* If we decided to increase the base alignment for the memory access in
+   STMT_INFO, but haven't increased it yet, do so now.  */
 
 static void
-ensure_base_align (struct data_reference *dr)
+ensure_base_align (stmt_vec_info stmt_info)
 {
-  stmt_vec_info stmt_info = vect_dr_stmt (dr);
   if (stmt_info->dr_aux.misalignment == DR_MISALIGNMENT_UNINITIALIZED)
     return;
 
@@ -6439,7 +6437,7 @@  vectorizable_store (stmt_vec_info stmt_i
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (stmt_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -6882,7 +6880,8 @@  vectorizable_store (stmt_vec_info stmt_i
   auto_vec<tree> dr_chain (group_size);
   oprnds.create (group_size);
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_stmt_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -6920,7 +6919,8 @@  vectorizable_store (stmt_vec_info stmt_i
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
+					  memory_access_type);
     }
 
   if (mask)
@@ -7667,7 +7667,7 @@  vectorizable_load (stmt_vec_info stmt_in
 
   /* Transform.  */
 
-  ensure_base_align (dr);
+  ensure_base_align (stmt_info);
 
   if (memory_access_type == VMAT_GATHER_SCATTER && gs_info.decl)
     {
@@ -7990,7 +7990,8 @@  vectorizable_load (stmt_vec_info stmt_in
       ref_type = reference_alias_ptr_type (DR_REF (first_dr));
     }
 
-  alignment_support_scheme = vect_supportable_dr_alignment (first_dr, false);
+  alignment_support_scheme
+    = vect_supportable_dr_alignment (first_stmt_info, false);
   gcc_assert (alignment_support_scheme);
   vec_loop_masks *loop_masks
     = (loop_vinfo && LOOP_VINFO_FULLY_MASKED_P (loop_vinfo)
@@ -8155,7 +8156,8 @@  vectorizable_load (stmt_vec_info stmt_in
 	aggr_type = build_array_type_nelts (elem_type, vec_num * nunits);
       else
 	aggr_type = vectype;
-      bump = vect_get_data_ptr_increment (dr, aggr_type, memory_access_type);
+      bump = vect_get_data_ptr_increment (stmt_info, aggr_type,
+					  memory_access_type);
     }
 
   tree vec_mask = NULL_TREE;