Patchwork [2/2] Reimplementation of build_ref_for_offset

Submitter Martin Jambor
Date Sept. 8, 2010, 4:43 p.m.
Message ID <20100908164349.484185230@virgil.suse.cz>
Permalink /patch/64177/

Comments

Richard Guenther - Sept. 9, 2010, 9:29 a.m.
On Wed, 8 Sep 2010, Martin Jambor wrote:

> Hi,
> 
> this patch reimplements build_ref_for_offset so that it simply creates
> a MEM_REF rather than trying to figure out what combination of
> component and array refs are necessary.  The main advantage of this
> approach is that this can never fail, allowing us to be more
> aggressive and remove a number of checks.
> 
> There were two main problems with this, though.  First is that
> MEM_REFs are not particularly readable by users.  This would be a
> problem when we are creating a reference that might be displayed to
> them in a warning or a debugger which is what we do with
> DECL_DEBUG_EXPR expressions.  We sometimes construct these
> artificially when propagating accesses across assignments.  So for
> those cases I retained the old implementation and only simplified it a
> bit - it is now called build_user_friendly_ref_for_offset.
> 
> The other problem was bit-fields.  Constructing accesses to them was
> difficult enough but then I realized that I was not even able to
> detect the cases when I was accessing a bit field if their offset
> happened to be on a byte boundary.  I thought I would be able to
> figure this out from TYPE_SIZE and TYPE_PRECISION of exp_type but
> combinations that signal a bit-field in one language may not be
> applied in another (in C, small TYPE_PRECISION denotes bit-fields and
> TYPE_SIZE is big, but for example Fortran booleans have the precision
> set to one even though they are not bit-fields).

From types alone we can't see whether the object is part of a
bitfield or not; only DECL_BIT_FIELD on a FIELD_DECL specifies it.
Thus recognizing and remembering this during access analysis
is indeed the correct way of making sure to handle them
correctly.

Indeed, a MEM_REF will cause accesses of byte-granular size as
specified by TYPE_SIZE_UNIT (TREE_TYPE (mem-ref)).  While this
might not be a problem for bit-field loads that are aligned
to byte boundaries, bit-field stores will clobber whatever else
lives in the byte(s) they cover.

Until we lower all bit field accesses (I have plans to resurrect
a pass doing that from the old mem-ref branch), bitfield accesses
should be done by wrapping a BIT_FIELD_REF around generated
MEM_REF trees.

> So in the end I based the detection on the access structures that
> represented the thing being loaded or stored which I knew had their
> sizes correct because they are based on field sizes.  Since I use the
> access, the simplest way to actually create the reference to the bit
> field is to re-use the last component ref of its expression - that is
> what build_ref_for_model (meaning a model access) does.  Separating
> this from build_ref_for_offset (which cannot handle bit-fields) makes
> the code a bit cleaner and keeps the latter function for other users
> which know nothing about SRA access structures.
> 
> I hope that you'll find these approaches reasonable.  The patch was
> bootstrapped and tested on x86_64-linux without any issues.  I'd like
> to commit it to trunk but I'm sure there will be comments and
> suggestions.

Comments inline.

> Thanks,
> 
> Martin
> 
> 
> 
> 2010-09-08  Martin Jambor  <mjambor@suse.cz>
> 
> 	PR tree-optimization/44972
> 	* tree-sra.c: Include toplev.h.
> 	(build_ref_for_offset): Entirely reimplemented.
> 	(build_ref_for_model): New function.
> 	(build_user_friendly_ref_for_offset): New function.
> 	(analyze_access_subtree): Removed build_ref_for_offset check.
> 	(propagate_subaccesses_across_link): Likewise.
> 	(create_artificial_child_access): Use
> 	build_user_friendly_ref_for_offset.
> 	(propagate_subaccesses_across_link): Likewise.
> 	(ref_expr_for_all_replacements_p): Removed.
> 	(generate_subtree_copies): Updated comment.  Use build_ref_for_model.
> 	(sra_modify_expr): Use build_ref_for_model.
> 	(load_assign_lhs_subreplacements): Likewise.
> 	(sra_modify_assign): Removed ref_expr_for_all_replacements_p checks,
> 	checks for return values of build_ref_for_offset.
> 	* ipa-cp.c (ipcp_lattice_from_jfunc): No need to check return value of
> 	build_ref_for_offset.
> 	* ipa-prop.h: Include gimple.h
> 	* ipa-prop.c (ipa_compute_jump_functions): Update to look for MEM_REFs.
> 	(ipa_analyze_indirect_call_uses): Update comment.
> 	* Makefile.in (tree-sra.o): Add $(GIMPLE_H) to dependencies.
> 	(IPA_PROP_H): Likewise.
> 
> 	* testsuite/gcc.dg/ipa/ipa-sra-1.c: Adjust scanning expressions.
> 	* testsuite/gcc.dg/tree-ssa/pr45144.c: Likewise.
> 	* testsuite/gcc.dg/tree-ssa/forwprop-5.c: Likewise and scan optimized
> 	dump instead.
> 	* testsuite/g++.dg/torture/pr34850.C: Remove expected warning.
> 	* testsuite/g++.dg/torture/pr44972.C: New test.
> 
> Index: mine/gcc/tree-sra.c
> ===================================================================
> --- mine.orig/gcc/tree-sra.c
> +++ mine/gcc/tree-sra.c
> @@ -76,6 +76,7 @@ along with GCC; see the file COPYING3.
>  #include "coretypes.h"
>  #include "alloc-pool.h"
>  #include "tm.h"
> +#include "toplev.h"
>  #include "tree.h"
>  #include "gimple.h"
>  #include "cgraph.h"
> @@ -1320,15 +1321,114 @@ make_fancy_name (tree expr)
>    return XOBFINISH (&name_obstack, char *);
>  }
>  
> -/* Helper function for build_ref_for_offset.
> +/* Construct a MEM_REF that would reference a part of aggregate BASE of type
> +   EXP_TYPE at the given OFFSET.  If BASE is something for which
> +   get_addr_base_and_unit_offset returns NULL, gsi must be non-NULL and is used
> +   to insert new statements either before or below the current one as specified
> +   by INSERT_AFTER.  If offset is not aligned to bytes or EXP_TYPE is a
> +   bit_field.

.. then?  (The last sentence looks incomplete)

> +   This function is not capable of handling bitfields.  */
> +
> +tree
> +build_ref_for_offset (tree base, HOST_WIDE_INT offset,
> +		      tree exp_type, gimple_stmt_iterator *gsi,
> +		      bool insert_after)
> +{
> +  tree prev_base = base;
> +  tree off;
> +  location_t loc = EXPR_LOCATION (base);
> +  HOST_WIDE_INT base_offset;
>  
> -   FIXME: Eventually this should be rewritten to either re-use the
> -   original access expression unshared (which is good for alias
> -   analysis) or to build a MEM_REF expression.  */
> +  gcc_checking_assert (offset % BITS_PER_UNIT == 0);
> +
> +  base = get_addr_base_and_unit_offset (base, &base_offset);
> +  if (!base)
> +    {

Can you explain when we get here?  Can base be a reference
with a variable offset like a[i]?  I suppose it can't be a
bit-field-ref, as on top of that we should never build another
ref.

> +      gimple stmt;
> +      tree tmp, addr;
> +
> +      gcc_checking_assert (gsi);
> +      tmp = create_tmp_reg (build_pointer_type (TREE_TYPE (prev_base)), NULL);
> +      add_referenced_var (tmp);
> +      tmp = make_ssa_name (tmp, NULL);
> +      addr = build_fold_addr_expr (unshare_expr (prev_base));
> +      stmt = gimple_build_assign (tmp, addr);
> +      SSA_NAME_DEF_STMT (tmp) = stmt;
> +      if (insert_after)
> +	gsi_insert_after (gsi, stmt, GSI_NEW_STMT);
> +      else
> +	gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
> +

I think you need to update_stmt (stmt) here.

> +      off = build_int_cst (reference_alias_ptr_type (prev_base),
> +			   offset / BITS_PER_UNIT);
> +      base = tmp;
> +    }
> +  else if (TREE_CODE (base) == MEM_REF)
> +    {
> +      off = build_int_cst (TREE_TYPE (TREE_OPERAND (base, 1)),
> +			   base_offset + offset / BITS_PER_UNIT);
> +      off = int_const_binop (PLUS_EXPR, TREE_OPERAND (base, 1), off, 0);
> +      base = unshare_expr (TREE_OPERAND (base, 0));
> +    }
> +  else
> +    {
> +      off = build_int_cst (reference_alias_ptr_type (base),
> +			   base_offset + offset / BITS_PER_UNIT);
> +      base = build_fold_addr_expr (unshare_expr (base));
> +    }
> +
> +  return fold_build2_loc (loc, MEM_REF, exp_type, base, off);
> +}

Ok so far.

> +/* Construct a memory reference to a part of an aggregate BASE at the given
> +   OFFSET and of the same type as MODEL.  In case this is a reference to a
> +   bit-field, the function will replicate the last component_ref of model's
> +   expr to access it.  GSI and INSERT_AFTER have the same meaning as in
> +   build_ref_for_offset.  */
> +
> +static tree
> +build_ref_for_model (tree base, HOST_WIDE_INT offset,
> +		     struct access *model, gimple_stmt_iterator *gsi,
> +		     bool insert_after)
> +{
> +  tree t, exp_type;
> +  bool bitfield;
> +
> +  if (offset % BITS_PER_UNIT != 0
> +      || model->size < BITS_PER_UNIT
> +      || exact_log2 (model->size) == -1)
> +    {
> +      /* This access looks like a bitfield.  */
> +      gcc_checking_assert (TREE_CODE (model->expr) == COMPONENT_REF);

If you have the COMPONENT_REF, why not do

   if (TREE_CODE (model->expr) == COMPONENT_REF
       && DECL_BIT_FIELD (TREE_OPERAND (model->expr, 1)))

instead of the offset/size checks?  That way it is 100% obvious.

> +      offset -= int_bit_position (TREE_OPERAND (model->expr, 1));
> +      gcc_assert (offset % BITS_PER_UNIT == 0);

We're doing that check in build_ref_for_offset.

> +      exp_type = TREE_TYPE (TREE_OPERAND (model->expr, 0));
> +      bitfield = true;
> +    }
> +  else
> +    {
> +      exp_type = model->type;
> +      bitfield = false;
> +    }
> +
> +  t = build_ref_for_offset (base, offset, exp_type, gsi, insert_after);

Replicating this call in both arms above would make it easier to
distinguish both cases.  Which makes me wonder if in either of
the case we can pass NULL for gsi?  (relates to the question in
build_ref_for_offset)

> +  if (bitfield)
> +    t = fold_build3_loc (EXPR_LOCATION (base), COMPONENT_REF, model->type, t,

You use base location here, shouldn't this be model->expr location
instead?  If there are more callers to build_ref_for_offset I think
it would make sense to pass the location to use for the built MEM_REF
to it, rather than hard-coding the base location inside.

And indeed a COMPONENT_REF also works (in fact it always works, unlike
a BIT_FIELD_REF, which doesn't work on the LHS).  Nice obvious idea ;)

> +			 TREE_OPERAND (model->expr, 1), NULL_TREE);
> +
> +  return t;
> +}
> +
> +/* Construct a memory reference consisting of component_refs and array_refs to
> +   a part of an aggregate *RES (which is of type TYPE).  The requested part
> +   should have type EXP_TYPE at the given OFFSET.  This function might not
> +   succeed, it returns true when it does and only then *RES points to something
> +   meaningful.  This function should be used only to build expressions that we
> +   might need to present to user (e.g. in warnings).  In all other situations,
> +   build_ref_for_model or build_ref_for_offset should be used instead.  */
>  
>  static bool
> -build_ref_for_offset_1 (tree *res, tree type, HOST_WIDE_INT offset,
> -			tree exp_type)
> +build_user_friendly_ref_for_offset (tree *res, tree type, HOST_WIDE_INT offset,
> +				    tree exp_type)
>  {
>    while (1)
>      {
> @@ -1367,19 +1467,13 @@ build_ref_for_offset_1 (tree *res, tree
>  	      else if (pos > offset || (pos + size) <= offset)
>  		continue;
>  
> -	      if (res)
> +	      expr = build3 (COMPONENT_REF, TREE_TYPE (fld), *res, fld,
> +			     NULL_TREE);
> +	      expr_ptr = &expr;
> +	      if (build_user_friendly_ref_for_offset (expr_ptr, TREE_TYPE (fld),
> +						      offset - pos, exp_type))
>  		{
> -		  expr = build3 (COMPONENT_REF, TREE_TYPE (fld), *res, fld,
> -				 NULL_TREE);
> -		  expr_ptr = &expr;
> -		}
> -	      else
> -		expr_ptr = NULL;
> -	      if (build_ref_for_offset_1 (expr_ptr, TREE_TYPE (fld),
> -					  offset - pos, exp_type))
> -		{
> -		  if (res)
> -		    *res = expr;
> +		  *res = expr;
>  		  return true;
>  		}
>  	    }
> @@ -1394,14 +1488,11 @@ build_ref_for_offset_1 (tree *res, tree
>  	  minidx = TYPE_MIN_VALUE (TYPE_DOMAIN (type));
>  	  if (TREE_CODE (minidx) != INTEGER_CST || el_size == 0)
>  	    return false;
> -	  if (res)
> -	    {
> -	      index = build_int_cst (TYPE_DOMAIN (type), offset / el_size);
> -	      if (!integer_zerop (minidx))
> -		index = int_const_binop (PLUS_EXPR, index, minidx, 0);
> -	      *res = build4 (ARRAY_REF, TREE_TYPE (type), *res, index,
> -			     NULL_TREE, NULL_TREE);
> -	    }
> +	  index = build_int_cst (TYPE_DOMAIN (type), offset / el_size);
> +	  if (!integer_zerop (minidx))
> +	    index = int_const_binop (PLUS_EXPR, index, minidx, 0);
> +	  *res = build4 (ARRAY_REF, TREE_TYPE (type), *res, index,
> +			 NULL_TREE, NULL_TREE);
>  	  offset = offset % el_size;
>  	  type = TREE_TYPE (type);
>  	  break;
> @@ -1418,31 +1509,6 @@ build_ref_for_offset_1 (tree *res, tree
>      }
>  }

Ok.  I see the point about warnings, but when are MEM_REF trees
displayed in a debugger?

> -/* Construct an expression that would reference a part of aggregate *EXPR of
> -   type TYPE at the given OFFSET of the type EXP_TYPE.  If EXPR is NULL, the
> -   function only determines whether it can build such a reference without
> -   actually doing it, otherwise, the tree it points to is unshared first and
> -   then used as a base for furhter sub-references.  */
> -
> -bool
> -build_ref_for_offset (tree *expr, tree type, HOST_WIDE_INT offset,
> -		      tree exp_type, bool allow_ptr)
> -{
> -  location_t loc = expr ? EXPR_LOCATION (*expr) : UNKNOWN_LOCATION;
> -
> -  if (expr)
> -    *expr = unshare_expr (*expr);
> -
> -  if (allow_ptr && POINTER_TYPE_P (type))
> -    {
> -      type = TREE_TYPE (type);
> -      if (expr)
> -	*expr = build_simple_mem_ref_loc (loc, *expr);
> -    }
> -
> -  return build_ref_for_offset_1 (expr, type, offset, exp_type);
> -}
> -
>  /* Return true iff TYPE is stdarg va_list type.  */
>  
>  static inline bool
> @@ -1823,13 +1889,7 @@ analyze_access_subtree (struct access *r
>  
>    if (allow_replacements && scalar && !root->first_child
>        && (root->grp_hint
> -	  || (root->grp_write && (direct_read || root->grp_assignment_read)))
> -      /* We must not ICE later on when trying to build an access to the
> -	 original data within the aggregate even when it is impossible to do in
> -	 a defined way like in the PR 42703 testcase.  Therefore we check
> -	 pre-emptively here that we will be able to do that.  */
> -      && build_ref_for_offset (NULL, TREE_TYPE (root->base), root->offset,
> -			       root->type, false))
> +	  || (root->grp_write && (direct_read || root->grp_assignment_read))))
>      {
>        if (dump_file && (dump_flags & TDF_DETAILS))
>  	{
> @@ -1914,12 +1974,11 @@ create_artificial_child_access (struct a
>  {
>    struct access *access;
>    struct access **child;
> -  tree expr = parent->base;;
> +  tree expr = parent->base;
>  
>    gcc_assert (!model->grp_unscalarizable_region);
> -
> -  if (!build_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
> -			     model->type, false))
> +  if (!build_user_friendly_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
> +					   model->type))
>      return NULL;

Hm, so we don't fall back to creating a MEM_REF here?  Or is this
one case of relaxing checks that you want to postpone for later
(which I think is fine)?

>    access = (struct access *) pool_alloc (access_pool);
> @@ -1964,8 +2023,8 @@ propagate_subaccesses_across_link (struc
>      {
>        tree t = lacc->base;
>  
> -      if (build_ref_for_offset (&t, TREE_TYPE (t), lacc->offset, racc->type,
> -				false))
> +      if (build_user_friendly_ref_for_offset (&t, TREE_TYPE (t), lacc->offset,
> +					      racc->type))
>  	{
>  	  lacc->expr = t;
>  	  lacc->type = racc->type;
> @@ -1994,13 +2053,6 @@ propagate_subaccesses_across_link (struc
>  	  continue;
>  	}
>  
> -      /* If a (part of) a union field is on the RHS of an assignment, it can
> -	 have sub-accesses which do not make sense on the LHS (PR 40351).
> -	 Check that this is not the case.  */
> -      if (!build_ref_for_offset (NULL, TREE_TYPE (lacc->base), norm_offset,
> -				 rchild->type, false))
> -	continue;
> -
>        rchild->grp_hint = 1;
>        new_acc = create_artificial_child_access (lacc, rchild, norm_offset);
>        if (new_acc)
> @@ -2124,48 +2176,19 @@ analyze_all_variable_accesses (void)
>      return false;
>  }
>  
> -/* Return true iff a reference statement into aggregate AGG can be built for
> -   every single to-be-replaced accesses that is a child of ACCESS, its sibling
> -   or a child of its sibling. TOP_OFFSET is the offset from the processed
> -   access subtree that has to be subtracted from offset of each access.  */
> -
> -static bool
> -ref_expr_for_all_replacements_p (struct access *access, tree agg,
> -				 HOST_WIDE_INT top_offset)
> -{
> -  do
> -    {
> -      if (access->grp_to_be_replaced
> -	  && !build_ref_for_offset (NULL, TREE_TYPE (agg),
> -				    access->offset - top_offset,
> -				    access->type, false))
> -	return false;
> -
> -      if (access->first_child
> -	  && !ref_expr_for_all_replacements_p (access->first_child, agg,
> -					       top_offset))
> -	return false;
> -
> -      access = access->next_sibling;
> -    }
> -  while (access);
> -
> -  return true;
> -}
> -
>  /* Generate statements copying scalar replacements of accesses within a subtree
>     into or out of AGG.  ACCESS is the first child of the root of the subtree to
>     be processed.  AGG is an aggregate type expression (can be a declaration but
> -   does not have to be, it can for example also be an indirect_ref).
> -   TOP_OFFSET is the offset of the processed subtree which has to be subtracted
> -   from offsets of individual accesses to get corresponding offsets for AGG.
> -   If CHUNK_SIZE is non-null, copy only replacements in the interval
> -   <start_offset, start_offset + chunk_size>, otherwise copy all.  GSI is a
> -   statement iterator used to place the new statements.  WRITE should be true
> -   when the statements should write from AGG to the replacement and false if
> -   vice versa.  if INSERT_AFTER is true, new statements will be added after the
> -   current statement in GSI, they will be added before the statement
> -   otherwise.  */
> +   does not have to be, it can for example also be a mem_ref or a series of
> +   handled components).  TOP_OFFSET is the offset of the processed subtree
> +   which has to be subtracted from offsets of individual accesses to get
> +   corresponding offsets for AGG.  If CHUNK_SIZE is non-null, copy only
> +   replacements in the interval <start_offset, start_offset + chunk_size>,
> +   otherwise copy all.  GSI is a statement iterator used to place the new
> +   statements.  WRITE should be true when the statements should write from AGG
> +   to the replacement and false if vice versa.  if INSERT_AFTER is true, new
> +   statements will be added after the current statement in GSI, they will be
> +   added before the statement otherwise.  */
>  
>  static void
>  generate_subtree_copies (struct access *access, tree agg,
> @@ -2176,8 +2199,6 @@ generate_subtree_copies (struct access *
>  {
>    do
>      {
> -      tree expr = agg;
> -
>        if (chunk_size && access->offset >= start_offset + chunk_size)
>  	return;
>  
> @@ -2185,14 +2206,11 @@ generate_subtree_copies (struct access *
>  	  && (chunk_size == 0
>  	      || access->offset + access->size > start_offset))
>  	{
> -	  tree repl = get_access_replacement (access);
> -	  bool ref_found;
> +	  tree expr, repl = get_access_replacement (access);
>  	  gimple stmt;
>  
> -	  ref_found = build_ref_for_offset (&expr, TREE_TYPE (agg),
> -					     access->offset - top_offset,
> -					     access->type, false);
> -	  gcc_assert (ref_found);
> +	  expr = build_ref_for_model (agg, access->offset - top_offset,
> +				      access, gsi, insert_after);
>  
>  	  if (write)
>  	    {
> @@ -2329,12 +2347,10 @@ sra_modify_expr (tree *expr, gimple_stmt
>           in assembler statements (see PR42398).  */
>        if (!useless_type_conversion_p (type, access->type))
>  	{
> -	  tree ref = access->base;
> -	  bool ok;
> +	  tree ref;
>  
> -	  ok = build_ref_for_offset (&ref, TREE_TYPE (ref),
> -				     access->offset, access->type, false);
> -	  gcc_assert (ok);
> +	  ref = build_ref_for_model (access->base, access->offset, access,
> +				     NULL, false);
>  
>  	  if (write)
>  	    {
> @@ -2458,25 +2474,11 @@ load_assign_lhs_subreplacements (struct
>  								  lhs, old_gsi);
>  
>  	      if (*refreshed == SRA_UDH_LEFT)
> -		{
> -		  bool repl_found;
> -
> -		  rhs = lacc->base;
> -		  repl_found = build_ref_for_offset (&rhs, TREE_TYPE (rhs),
> -						     lacc->offset, lacc->type,
> -						     false);
> -		  gcc_assert (repl_found);
> -		}
> +		rhs = build_ref_for_model (lacc->base, lacc->offset, lacc,
> +					    new_gsi, true);
>  	      else
> -		{
> -		  bool repl_found;
> -
> -		  rhs = top_racc->base;
> -		  repl_found = build_ref_for_offset (&rhs,
> -						     TREE_TYPE (top_racc->base),
> -						     offset, lacc->type, false);
> -		  gcc_assert (repl_found);
> -		}
> +		rhs = build_ref_for_model (top_racc->base, offset, lacc,
> +					    new_gsi, true);
>  	    }
>  
>  	  stmt = gimple_build_assign (get_access_replacement (lacc), rhs);
> @@ -2633,25 +2635,18 @@ sra_modify_assign (gimple *stmt, gimple_
>  	  if (AGGREGATE_TYPE_P (TREE_TYPE (lhs))
>  	      && !access_has_children_p (lacc))
>  	    {
> -	      tree expr = lhs;
> -	      if (build_ref_for_offset (&expr, TREE_TYPE (lhs), 0,
> -					TREE_TYPE (rhs), false))
> -		{
> -		  lhs = expr;
> -		  gimple_assign_set_lhs (*stmt, expr);
> -		}
> +	      lhs = build_ref_for_offset (lhs, 0, TREE_TYPE (rhs), gsi, false);
> +	      gimple_assign_set_lhs (*stmt, lhs);
>  	    }
>  	  else if (AGGREGATE_TYPE_P (TREE_TYPE (rhs))
> +		   && !contains_view_convert_expr_p (rhs)
>  		   && !access_has_children_p (racc))
> -	    {
> -	      tree expr = rhs;
> -	      if (build_ref_for_offset (&expr, TREE_TYPE (rhs), 0,
> -					TREE_TYPE (lhs), false))
> -		rhs = expr;
> -	    }
> +	    rhs = build_ref_for_offset (rhs, 0, TREE_TYPE (lhs), gsi, false);
> +
>  	  if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
>  	    {
> -	      rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, TREE_TYPE (lhs), rhs);
> +	      rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, TREE_TYPE (lhs),
> +				     rhs);
>  	      if (is_gimple_reg_type (TREE_TYPE (lhs))
>  		  && TREE_CODE (lhs) != SSA_NAME)
>  		force_gimple_rhs = true;
> @@ -2694,11 +2689,7 @@ sra_modify_assign (gimple *stmt, gimple_
>  
>    if (gimple_has_volatile_ops (*stmt)
>        || contains_view_convert_expr_p (rhs)
> -      || contains_view_convert_expr_p (lhs)
> -      || (access_has_children_p (racc)
> -	  && !ref_expr_for_all_replacements_p (racc, lhs, racc->offset))
> -      || (access_has_children_p (lacc)
> -	  && !ref_expr_for_all_replacements_p (lacc, rhs, lacc->offset)))
> +      || contains_view_convert_expr_p (lhs))
>      {
>        if (access_has_children_p (racc))
>  	generate_subtree_copies (racc->first_child, racc->base, 0, 0, 0,
> Index: mine/gcc/ipa-cp.c
> ===================================================================
> --- mine.orig/gcc/ipa-cp.c
> +++ mine/gcc/ipa-cp.c
> @@ -327,7 +327,6 @@ ipcp_lattice_from_jfunc (struct ipa_node
>      {
>        struct ipcp_lattice *caller_lat;
>        tree t;
> -      bool ok;
>  
>        caller_lat = ipcp_get_lattice (info, jfunc->value.ancestor.formal_id);
>        lat->type = caller_lat->type;
> @@ -340,16 +339,9 @@ ipcp_lattice_from_jfunc (struct ipa_node
>  	  return;
>  	}
>        t = TREE_OPERAND (caller_lat->constant, 0);
> -      ok = build_ref_for_offset (&t, TREE_TYPE (t),
> -				 jfunc->value.ancestor.offset,
> -				 jfunc->value.ancestor.type, false);
> -      if (!ok)
> -	{
> -	  lat->type = IPA_BOTTOM;
> -	  lat->constant = NULL_TREE;
> -	}
> -      else
> -	lat->constant = build_fold_addr_expr (t);
> +      t = build_ref_for_offset (t, jfunc->value.ancestor.offset,
> +				jfunc->value.ancestor.type, NULL, false);
> +      lat->constant = build_fold_addr_expr (t);
>      }
>    else
>      lat->type = IPA_BOTTOM;
> Index: mine/gcc/ipa-prop.h
> ===================================================================
> --- mine.orig/gcc/ipa-prop.h
> +++ mine/gcc/ipa-prop.h
> @@ -24,6 +24,7 @@ along with GCC; see the file COPYING3.
>  #include "tree.h"
>  #include "vec.h"
>  #include "cgraph.h"
> +#include "gimple.h"
>  
>  /* The following definitions and interfaces are used by
>     interprocedural analyses or parameters.  */
> @@ -511,6 +512,7 @@ void ipa_prop_read_jump_functions (void)
>  void ipa_update_after_lto_read (void);
>  
>  /* From tree-sra.c:  */
> -bool build_ref_for_offset (tree *, tree, HOST_WIDE_INT, tree, bool);
> +tree build_ref_for_offset (tree, HOST_WIDE_INT, tree, gimple_stmt_iterator *,
> +			   bool);
>  
>  #endif /* IPA_PROP_H */
> Index: mine/gcc/Makefile.in
> ===================================================================
> --- mine.orig/gcc/Makefile.in
> +++ mine/gcc/Makefile.in
> @@ -968,7 +968,7 @@ EBITMAP_H = ebitmap.h sbitmap.h
>  LTO_STREAMER_H = lto-streamer.h $(LINKER_PLUGIN_API_H) $(TARGET_H) \
>  		$(CGRAPH_H) $(VEC_H) vecprim.h $(TREE_H) $(GIMPLE_H)
>  TREE_VECTORIZER_H = tree-vectorizer.h $(TREE_DATA_REF_H)
> -IPA_PROP_H = ipa-prop.h $(TREE_H) $(VEC_H) $(CGRAPH_H)
> +IPA_PROP_H = ipa-prop.h $(TREE_H) $(VEC_H) $(CGRAPH_H) $(GIMPLE_H)
>  GSTAB_H = gstab.h stab.def
>  BITMAP_H = bitmap.h $(HASHTAB_H) statistics.h
>  GCC_PLUGIN_H = gcc-plugin.h highlev-plugin-common.h $(CONFIG_H) $(SYSTEM_H) \
> @@ -3142,10 +3142,10 @@ tree-ssa-ccp.o : tree-ssa-ccp.c $(TREE_F
>     tree-ssa-propagate.h value-prof.h $(FLAGS_H) $(TARGET_H) $(TOPLEV_H) $(DIAGNOSTIC_CORE_H) \
>     $(DBGCNT_H) tree-pretty-print.h gimple-pretty-print.h
>  tree-sra.o : tree-sra.c $(CONFIG_H) $(SYSTEM_H) coretypes.h alloc-pool.h \
> -   $(TM_H) $(TREE_H) $(GIMPLE_H) $(CGRAPH_H) $(TREE_FLOW_H) $(IPA_PROP_H) \
> -   $(DIAGNOSTIC_H) statistics.h $(TREE_DUMP_H) $(TIMEVAR_H) $(PARAMS_H) \
> -   $(TARGET_H) $(FLAGS_H) $(EXPR_H) tree-pretty-print.h $(DBGCNT_H) \
> -   $(TREE_INLINE_H) gimple-pretty-print.h
> +   $(TM_H) $(TOPLEV_H) $(TREE_H) $(GIMPLE_H) $(CGRAPH_H) $(TREE_FLOW_H) \
> +   $(IPA_PROP_H) $(DIAGNOSTIC_H) statistics.h $(TREE_DUMP_H) $(TIMEVAR_H) \
> +   $(PARAMS_H) $(TARGET_H) $(FLAGS_H) $(EXPR_H) tree-pretty-print.h \
> +   $(DBGCNT_H) $(TREE_INLINE_H) gimple-pretty-print.h
>  tree-switch-conversion.o : tree-switch-conversion.c $(CONFIG_H) $(SYSTEM_H) \
>      $(TREE_H) $(TM_P_H) $(TREE_FLOW_H) $(DIAGNOSTIC_H) $(TREE_INLINE_H) \
>      $(TIMEVAR_H) $(TM_H) coretypes.h $(TREE_DUMP_H) $(GIMPLE_H) \
> Index: mine/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
> ===================================================================
> --- mine.orig/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
> +++ mine/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
> @@ -36,6 +36,5 @@ main (int argc, char *argv[])
>    return 0;
>  }
>  
> -/* { dg-final { scan-tree-dump "About to replace expr cow.green with ISRA" "eipa_sra"  } } */
> -/* { dg-final { scan-tree-dump "About to replace expr cow.blue with ISRA" "eipa_sra"  } } */
> +/* { dg-final { scan-tree-dump-times "About to replace expr" 2 "eipa_sra" } } */
>  /* { dg-final { cleanup-tree-dump "eipa_sra" } } */
> Index: mine/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
> ===================================================================
> --- mine.orig/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
> +++ mine/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
> @@ -1,5 +1,5 @@
>  /* { dg-do compile } */
> -/* { dg-options "-O1 -fdump-tree-esra -w" } */
> +/* { dg-options "-O1 -fdump-tree-optimized -w" } */
>  
>  #define vector __attribute__((vector_size(16) ))
>  struct VecClass
> @@ -11,12 +11,9 @@ vector float foo( vector float v )
>  {
>      vector float x = v;
>      x = x + x;
> -    struct VecClass y = *(struct VecClass*)&x;
> -    return y.v;
> +    struct VecClass disappear = *(struct VecClass*)&x;
> +    return disappear.v;
>  }
>  
> -/* We should be able to remove the intermediate struct and directly
> -   return x.  As we do not fold VIEW_CONVERT_EXPR<struct VecClass>(x).v
> -   that doesn't happen right now.  */
> -/* { dg-final { scan-tree-dump-times "VIEW_CONVERT_EXPR" 1 "esra"} } */
> -/* { dg-final { cleanup-tree-dump "esra" } } */
> +/* { dg-final { scan-tree-dump-times "disappear" 0 "optimized"} } */
> +/* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: mine/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
> ===================================================================
> --- mine.orig/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
> +++ mine/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
> @@ -42,5 +42,5 @@ bar (unsigned orig, unsigned *new)
>    *new = foo (&a);
>  }
>  
> -/* { dg-final { scan-tree-dump "x = a;" "optimized"} } */
> +/* { dg-final { scan-tree-dump " = VIEW_CONVERT_EXPR<unsigned int>\\(a\\);" "optimized"} } */

Nice.

>  /* { dg-final { cleanup-tree-dump "optimized" } } */
> Index: mine/gcc/ipa-prop.c
> ===================================================================
> --- mine.orig/gcc/ipa-prop.c
> +++ mine/gcc/ipa-prop.c
> @@ -916,23 +916,27 @@ ipa_compute_jump_functions (struct cgrap
>  static tree
>  ipa_get_member_ptr_load_param (tree rhs, bool use_delta)
>  {
> -  tree rec, fld;
> +  tree rec, ref_offset, fld_offset;
>    tree ptr_field;
>    tree delta_field;
>  
> -  if (TREE_CODE (rhs) != COMPONENT_REF)
> +  if (TREE_CODE (rhs) != MEM_REF)
>      return NULL_TREE;

Are you sure we never have a COMPONENT_REF here?  We are not
generally lowering them to MEM_REFs.

The patch looks ok in general, minus my minor comments.

Thanks,
Richard.

> -
>    rec = TREE_OPERAND (rhs, 0);
> +  if (TREE_CODE (rec) != ADDR_EXPR)
> +    return NULL_TREE;
> +  rec = TREE_OPERAND (rec, 0);
>    if (TREE_CODE (rec) != PARM_DECL
>        || !type_like_member_ptr_p (TREE_TYPE (rec), &ptr_field, &delta_field))
>      return NULL_TREE;
>  
> -  fld = TREE_OPERAND (rhs, 1);
> -  if (use_delta ? (fld == delta_field) : (fld == ptr_field))
> -    return rec;
> +  ref_offset = TREE_OPERAND (rhs, 1);
> +  if (use_delta)
> +    fld_offset = byte_position (delta_field);
>    else
> -    return NULL_TREE;
> +    fld_offset = byte_position (ptr_field);
> +
> +  return tree_int_cst_equal (ref_offset, fld_offset) ? rec : NULL_TREE;
>  }
>  
>  /* If STMT looks like a statement loading a value from a member pointer formal
> @@ -999,8 +1003,8 @@ ipa_note_param_call (struct cgraph_node
>     below, the call is on the last line:
>  
>       <bb 2>:
> -       f$__delta_5 = f.__delta;
> -       f$__pfn_24 = f.__pfn;
> +       f$__delta_5 = MEM[(struct  *)&f];
> +       f$__pfn_24 = MEM[(struct  *)&f + 4B];
>  
>       ...
>  
> Index: mine/gcc/testsuite/g++.dg/torture/pr44972.C
> ===================================================================
> --- /dev/null
> +++ mine/gcc/testsuite/g++.dg/torture/pr44972.C
> @@ -0,0 +1,142 @@
> +/* { dg-do compile } */
> +
> +#include<cassert>
> +#include<new>
> +#include<utility>
> +
> +namespace boost {
> +
> +template<class T>
> +class optional;
> +
> +class aligned_storage
> +{
> +	char data[ 1000 ];
> +  public:
> +    void const* address() const { return &data[0]; }
> +    void      * address()       { return &data[0]; }
> +} ;
> +
> +
> +template<class T>
> +class optional_base
> +{
> +  protected :
> +    optional_base(){}
> +    optional_base ( T const& val )
> +    {
> +      construct(val);
> +    }
> +
> +    template<class U>
> +    void assign ( optional<U> const& rhs )
> +    {
> +      if (!is_initialized())
> +        if ( rhs.is_initialized() )
> +          construct(T());
> +    }
> +
> +  public :
> +
> +    bool is_initialized() const { return m_initialized ; }
> +
> +  protected :
> +
> +    void construct ( T const& val )
> +     {
> +       new (m_storage.address()) T(val) ;
> +     }
> +
> +    T const* get_ptr_impl() const
> +    { return static_cast<T const*>(m_storage.address()); }
> +
> +  private :
> +
> +    bool m_initialized ;
> +    aligned_storage  m_storage ;
> +} ;
> +
> +
> +template<class T>
> +class optional : public optional_base<T>
> +{
> +    typedef optional_base<T> base ;
> +
> +  public :
> +
> +    optional() : base() {}
> +    optional ( T const& val ) : base(val) {}
> +    optional& operator= ( optional const& rhs )
> +      {
> +        this->assign( rhs ) ;
> +        return *this ;
> +      }
> +
> +    T const& get() const ;
> +
> +    T const* operator->() const { ((this->is_initialized()) ? static_cast<void> (0) : __assert_fail ("this->is_initialized()", "pr44972.C", 78, __PRETTY_FUNCTION__)) ; return this->get_ptr_impl() ; }
> +
> +} ;
> +
> +
> +} // namespace boost
> +
> +
> +namespace std
> +{
> +
> +  template<typename _Tp, std::size_t _Nm>
> +    struct array
> +    {
> +      typedef _Tp 	    			      value_type;
> +      typedef const value_type*			      const_iterator;
> +
> +      value_type _M_instance[_Nm];
> +
> +    };
> +}
> +
> +
> +class NT
> +{
> +  double _inf, _sup;
> +};
> +
> +
> +template < typename T > inline
> +std::array<T, 1>
> +make_array(const T& b1)
> +{
> +  std::array<T, 1> a = { { b1 } };
> +  return a;
> +}
> +
> +class V
> +{
> +  typedef std::array<NT, 1>               Base;
> +  Base base;
> +
> +public:
> +  V() {}
> +  V(const NT &x)
> +    : base(make_array(x)) {}
> +
> +};
> +
> +using boost::optional ;
> +
> +optional< std::pair< NT, NT > >
> +  linsolve_pointC2() ;
> +
> +optional< V > construct_normal_offset_lines_isecC2 ( )
> +{
> +  optional< std::pair<NT,NT> > ip;
> +
> +  ip = linsolve_pointC2();
> +
> +  V a(ip->first) ;
> +  return a;
> +}
> +
> +
> +
> 
>
H.J. Lu - Oct. 23, 2010, 5:12 p.m.
On Wed, Sep 8, 2010 at 9:43 AM, Martin Jambor <mjambor@suse.cz> wrote:
> Hi,
>
> this patch reimplements build_ref_for_offset so that it simply creates
> a MEM_REF rather than trying to figure out what combination of
> component and array refs are necessary.  The main advantage of this
> approach is that this can never fail, allowing us to be more
> aggressive and remove a number of checks.
>
> There were two main problems with this, though.  First is that
> MEM_REFs are not particularly readable to users.  This would be a
> problem when we are creating a reference that might be displayed to
> them in a warning or a debugger which is what we do with
> DECL_DEBUG_EXPR expressions.  We sometimes construct these
> artificially when propagating accesses across assignments.  So for
> those cases I retained the old implementation and only simplified it a
> bit - it is now called build_user_friendly_ref_for_offset.
>
> The other problem was bit-fields.  Constructing accesses to them was
> difficult enough but then I realized that I was not even able to
> detect the cases when I was accessing a bit field if their offset
> happened to be on a byte boundary.  I thought I would be able to
> figure this out from TYPE_SIZE and TYPE_PRECISION of exp_type but
> combinations that signal a bit-field in one language may not be
> applied in another (in C, small TYPE_PRECISION denotes bit-fields and
> TYPE_SIZE is big, but for example Fortran booleans have the precision
> set to one even though they are not bit-fields).
>
> So in the end I based the detection on the access structures that
> represented the thing being loaded or stored which I knew had their
> sizes correct because they are based on field sizes.  Since I use the
> access, the simplest way to actually create the reference to the bit
> field is to re-use the last component ref of its expression - that is
> what build_ref_for_model (meaning a model access) does.  Separating
> this from build_ref_for_offset (which cannot handle bit-fields) makes
> the code a bit cleaner and keeps the latter function for other users
> which know nothing about SRA access structures.
>
> I hope that you'll find these approaches reasonable.  The patch was
> bootstrapped and tested on x86_64-linux without any issues.  I'd like
> to commit it to trunk but I'm sure there will be comments and
> suggestions.
>
> Thanks,
>
> Martin
>
>
>
> 2010-09-08  Martin Jambor  <mjambor@suse.cz>
>
>        PR tree-optimization/44972
>        * tree-sra.c: Include toplev.h.
>        (build_ref_for_offset): Entirely reimplemented.
>        (build_ref_for_model): New function.
>        (build_user_friendly_ref_for_offset): New function.
>        (analyze_access_subtree): Removed build_ref_for_offset check.
>        (propagate_subaccesses_across_link): Likewise.
>        (create_artificial_child_access): Use
>        build_user_friendly_ref_for_offset.
>        (propagate_subaccesses_across_link): Likewise.
>        (ref_expr_for_all_replacements_p): Removed.
>        (generate_subtree_copies): Updated comment.  Use build_ref_for_model.
>        (sra_modify_expr): Use build_ref_for_model.
>        (load_assign_lhs_subreplacements): Likewise.
>        (sra_modify_assign): Removed ref_expr_for_all_replacements_p checks,
>        checks for return values of build_ref_for_offset.
>        * ipa-cp.c (ipcp_lattice_from_jfunc): No need to check return value of
>        build_ref_for_offset.
>        * ipa-prop.h: Include gimple.h
>        * ipa-prop.c (ipa_compute_jump_functions): Update to look for MEM_REFs.
>        (ipa_analyze_indirect_call_uses): Update comment.
>        * Makefile.in (tree-sra.o): Add $(GIMPLE_H) to dependencies.
>        (IPA_PROP_H): Likewise.

This caused:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46150


H.J.
H.J. Lu - May 18, 2011, 2:36 p.m.
On Sat, Oct 23, 2010 at 10:12 AM, H.J. Lu <hjl.tools@gmail.com> wrote:
> On Wed, Sep 8, 2010 at 9:43 AM, Martin Jambor <mjambor@suse.cz> wrote:
>> Hi,
>>
>> [...]
>
> This caused:
>
> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=46150
>

This also caused:

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=49039

Patch

difficult enough but then I realized that I was not even able to
detect the cases when I was accessing a bit field if their offset
happened to be on a byte boundary.  I thought I would be able to
figure this out from TYPE_SIZE and TYPE_PRECISION of exp_type but
combinations that signal a bit-field in one language may not be
applied in another (in C, small TYPE_PRECISION denotes bit-fields and
TYPE_SIZE is big, but for example Fortran booleans have the precision
set to one even though they are not bit-fields).

So in the end I based the detection on the access structures that
represented the thing being loaded or stored which I knew had their
sizes correct because they are based on field sizes.  Since I use the
access, the simplest way to actually create the reference to the bit
field is to re-use the last component ref of its expression - that is
what build_ref_for_model (meaning a model access) does.  Separating
this from build_ref_for_offset (which cannot handle bit-fields) makes
the code a bit cleaner and keeps the latter function for other users
which know nothing about SRA access structures.

I hope that you'll find these approaches reasonable.  The patch was
bootstrapped and tested on x86_64-linux without any issues.  I'd like
to commit it to trunk but I'm sure there will be comments and
suggestions.

Thanks,

Martin



2010-09-08  Martin Jambor  <mjambor@suse.cz>

	PR tree-optimization/44972
	* tree-sra.c: Include toplev.h.
	(build_ref_for_offset): Entirely reimplemented.
	(build_ref_for_model): New function.
	(build_user_friendly_ref_for_offset): New function.
	(analyze_access_subtree): Removed build_ref_for_offset check.
	(propagate_subaccesses_across_link): Likewise.
	(create_artificial_child_access): Use
	build_user_friendly_ref_for_offset.
	(propagate_subaccesses_across_link): Likewise.
	(ref_expr_for_all_replacements_p): Removed.
	(generate_subtree_copies): Updated comment.  Use build_ref_for_model.
	(sra_modify_expr): Use build_ref_for_model.
	(load_assign_lhs_subreplacements): Likewise.
	(sra_modify_assign): Removed ref_expr_for_all_replacements_p checks,
	checks for return values of build_ref_for_offset.
	* ipa-cp.c (ipcp_lattice_from_jfunc): No need to check return value of
	build_ref_for_offset.
	* ipa-prop.h: Include gimple.h.
	* ipa-prop.c (ipa_compute_jump_functions): Update to look for MEM_REFs.
	(ipa_analyze_indirect_call_uses): Update comment.
	* Makefile.in (tree-sra.o): Add $(GIMPLE_H) to dependencies.
	(IPA_PROP_H): Likewise.

	* testsuite/gcc.dg/ipa/ipa-sra-1.c: Adjust scanning expressions.
	* testsuite/gcc.dg/tree-ssa/pr45144.c: Likewise.
	* testsuite/gcc.dg/tree-ssa/forwprop-5.c: Likewise and scan optimized
	dump instead.
	* testsuite/g++.dg/torture/pr34850.C: Remove expected warning.
	* testsuite/g++.dg/torture/pr44972.C: New test.

Index: mine/gcc/tree-sra.c
===================================================================
--- mine.orig/gcc/tree-sra.c
+++ mine/gcc/tree-sra.c
@@ -76,6 +76,7 @@  along with GCC; see the file COPYING3.
 #include "coretypes.h"
 #include "alloc-pool.h"
 #include "tm.h"
+#include "toplev.h"
 #include "tree.h"
 #include "gimple.h"
 #include "cgraph.h"
@@ -1320,15 +1321,114 @@  make_fancy_name (tree expr)
   return XOBFINISH (&name_obstack, char *);
 }
 
-/* Helper function for build_ref_for_offset.
+/* Construct a MEM_REF that would reference a part of aggregate BASE of type
+   EXP_TYPE at the given OFFSET.  If BASE is something for which
+   get_addr_base_and_unit_offset returns NULL, GSI must be non-NULL and is used
+   to insert new statements either before or below the current one as specified
+   by INSERT_AFTER.  Note that OFFSET must be byte-aligned; this function is
+   not capable of handling bit-fields.  */
+
+tree
+build_ref_for_offset (tree base, HOST_WIDE_INT offset,
+		      tree exp_type, gimple_stmt_iterator *gsi,
+		      bool insert_after)
+{
+  tree prev_base = base;
+  tree off;
+  location_t loc = EXPR_LOCATION (base);
+  HOST_WIDE_INT base_offset;
 
-   FIXME: Eventually this should be rewritten to either re-use the
-   original access expression unshared (which is good for alias
-   analysis) or to build a MEM_REF expression.  */
+  gcc_checking_assert (offset % BITS_PER_UNIT == 0);
+
+  base = get_addr_base_and_unit_offset (base, &base_offset);
+  if (!base)
+    {
+      gimple stmt;
+      tree tmp, addr;
+
+      gcc_checking_assert (gsi);
+      tmp = create_tmp_reg (build_pointer_type (TREE_TYPE (prev_base)), NULL);
+      add_referenced_var (tmp);
+      tmp = make_ssa_name (tmp, NULL);
+      addr = build_fold_addr_expr (unshare_expr (prev_base));
+      stmt = gimple_build_assign (tmp, addr);
+      SSA_NAME_DEF_STMT (tmp) = stmt;
+      if (insert_after)
+	gsi_insert_after (gsi, stmt, GSI_NEW_STMT);
+      else
+	gsi_insert_before (gsi, stmt, GSI_SAME_STMT);
+
+      off = build_int_cst (reference_alias_ptr_type (prev_base),
+			   offset / BITS_PER_UNIT);
+      base = tmp;
+    }
+  else if (TREE_CODE (base) == MEM_REF)
+    {
+      off = build_int_cst (TREE_TYPE (TREE_OPERAND (base, 1)),
+			   base_offset + offset / BITS_PER_UNIT);
+      off = int_const_binop (PLUS_EXPR, TREE_OPERAND (base, 1), off, 0);
+      base = unshare_expr (TREE_OPERAND (base, 0));
+    }
+  else
+    {
+      off = build_int_cst (reference_alias_ptr_type (base),
+			   base_offset + offset / BITS_PER_UNIT);
+      base = build_fold_addr_expr (unshare_expr (base));
+    }
+
+  return fold_build2_loc (loc, MEM_REF, exp_type, base, off);
+}
+
+/* Construct a memory reference to a part of an aggregate BASE at the given
+   OFFSET and of the same type as MODEL.  In case this is a reference to a
+   bit-field, the function will replicate the last component_ref of model's
+   expr to access it.  GSI and INSERT_AFTER have the same meaning as in
+   build_ref_for_offset.  */
+
+static tree
+build_ref_for_model (tree base, HOST_WIDE_INT offset,
+		     struct access *model, gimple_stmt_iterator *gsi,
+		     bool insert_after)
+{
+  tree t, exp_type;
+  bool bitfield;
+
+  if (offset % BITS_PER_UNIT != 0
+      || model->size < BITS_PER_UNIT
+      || exact_log2 (model->size) == -1)
+    {
+      gcc_checking_assert (TREE_CODE (model->expr) == COMPONENT_REF);
+      offset -= int_bit_position (TREE_OPERAND (model->expr, 1));
+      gcc_assert (offset % BITS_PER_UNIT == 0);
+      exp_type = TREE_TYPE (TREE_OPERAND (model->expr, 0));
+      bitfield = true;
+    }
+  else
+    {
+      exp_type = model->type;
+      bitfield = false;
+    }
+
+  t = build_ref_for_offset (base, offset, exp_type, gsi, insert_after);
+
+  if (bitfield)
+    t = fold_build3_loc (EXPR_LOCATION (base), COMPONENT_REF, model->type, t,
+			 TREE_OPERAND (model->expr, 1), NULL_TREE);
+
+  return t;
+}
+
+/* Construct a memory reference consisting of component_refs and array_refs to
+   a part of an aggregate *RES (which is of type TYPE).  The requested part
+   should have type EXP_TYPE at the given OFFSET.  This function might not
+   succeed; it returns true when it does, and only then does *RES point to
+   something meaningful.  It should be used only to build expressions that we
+   might need to present to the user (e.g. in warnings).  In all other
+   situations, build_ref_for_model or build_ref_for_offset should be used.  */
 
 static bool
-build_ref_for_offset_1 (tree *res, tree type, HOST_WIDE_INT offset,
-			tree exp_type)
+build_user_friendly_ref_for_offset (tree *res, tree type, HOST_WIDE_INT offset,
+				    tree exp_type)
 {
   while (1)
     {
@@ -1367,19 +1467,13 @@  build_ref_for_offset_1 (tree *res, tree
 	      else if (pos > offset || (pos + size) <= offset)
 		continue;
 
-	      if (res)
+	      expr = build3 (COMPONENT_REF, TREE_TYPE (fld), *res, fld,
+			     NULL_TREE);
+	      expr_ptr = &expr;
+	      if (build_user_friendly_ref_for_offset (expr_ptr, TREE_TYPE (fld),
+						      offset - pos, exp_type))
 		{
-		  expr = build3 (COMPONENT_REF, TREE_TYPE (fld), *res, fld,
-				 NULL_TREE);
-		  expr_ptr = &expr;
-		}
-	      else
-		expr_ptr = NULL;
-	      if (build_ref_for_offset_1 (expr_ptr, TREE_TYPE (fld),
-					  offset - pos, exp_type))
-		{
-		  if (res)
-		    *res = expr;
+		  *res = expr;
 		  return true;
 		}
 	    }
@@ -1394,14 +1488,11 @@  build_ref_for_offset_1 (tree *res, tree
 	  minidx = TYPE_MIN_VALUE (TYPE_DOMAIN (type));
 	  if (TREE_CODE (minidx) != INTEGER_CST || el_size == 0)
 	    return false;
-	  if (res)
-	    {
-	      index = build_int_cst (TYPE_DOMAIN (type), offset / el_size);
-	      if (!integer_zerop (minidx))
-		index = int_const_binop (PLUS_EXPR, index, minidx, 0);
-	      *res = build4 (ARRAY_REF, TREE_TYPE (type), *res, index,
-			     NULL_TREE, NULL_TREE);
-	    }
+	  index = build_int_cst (TYPE_DOMAIN (type), offset / el_size);
+	  if (!integer_zerop (minidx))
+	    index = int_const_binop (PLUS_EXPR, index, minidx, 0);
+	  *res = build4 (ARRAY_REF, TREE_TYPE (type), *res, index,
+			 NULL_TREE, NULL_TREE);
 	  offset = offset % el_size;
 	  type = TREE_TYPE (type);
 	  break;
@@ -1418,31 +1509,6 @@  build_ref_for_offset_1 (tree *res, tree
     }
 }
 
-/* Construct an expression that would reference a part of aggregate *EXPR of
-   type TYPE at the given OFFSET of the type EXP_TYPE.  If EXPR is NULL, the
-   function only determines whether it can build such a reference without
-   actually doing it, otherwise, the tree it points to is unshared first and
-   then used as a base for furhter sub-references.  */
-
-bool
-build_ref_for_offset (tree *expr, tree type, HOST_WIDE_INT offset,
-		      tree exp_type, bool allow_ptr)
-{
-  location_t loc = expr ? EXPR_LOCATION (*expr) : UNKNOWN_LOCATION;
-
-  if (expr)
-    *expr = unshare_expr (*expr);
-
-  if (allow_ptr && POINTER_TYPE_P (type))
-    {
-      type = TREE_TYPE (type);
-      if (expr)
-	*expr = build_simple_mem_ref_loc (loc, *expr);
-    }
-
-  return build_ref_for_offset_1 (expr, type, offset, exp_type);
-}
-
 /* Return true iff TYPE is stdarg va_list type.  */
 
 static inline bool
@@ -1823,13 +1889,7 @@  analyze_access_subtree (struct access *r
 
   if (allow_replacements && scalar && !root->first_child
       && (root->grp_hint
-	  || (root->grp_write && (direct_read || root->grp_assignment_read)))
-      /* We must not ICE later on when trying to build an access to the
-	 original data within the aggregate even when it is impossible to do in
-	 a defined way like in the PR 42703 testcase.  Therefore we check
-	 pre-emptively here that we will be able to do that.  */
-      && build_ref_for_offset (NULL, TREE_TYPE (root->base), root->offset,
-			       root->type, false))
+	  || (root->grp_write && (direct_read || root->grp_assignment_read))))
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
 	{
@@ -1914,12 +1974,11 @@  create_artificial_child_access (struct a
 {
   struct access *access;
   struct access **child;
-  tree expr = parent->base;;
+  tree expr = parent->base;
 
   gcc_assert (!model->grp_unscalarizable_region);
-
-  if (!build_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
-			     model->type, false))
+  if (!build_user_friendly_ref_for_offset (&expr, TREE_TYPE (expr), new_offset,
+					   model->type))
     return NULL;
 
   access = (struct access *) pool_alloc (access_pool);
@@ -1964,8 +2023,8 @@  propagate_subaccesses_across_link (struc
     {
       tree t = lacc->base;
 
-      if (build_ref_for_offset (&t, TREE_TYPE (t), lacc->offset, racc->type,
-				false))
+      if (build_user_friendly_ref_for_offset (&t, TREE_TYPE (t), lacc->offset,
+					      racc->type))
 	{
 	  lacc->expr = t;
 	  lacc->type = racc->type;
@@ -1994,13 +2053,6 @@  propagate_subaccesses_across_link (struc
 	  continue;
 	}
 
-      /* If a (part of) a union field is on the RHS of an assignment, it can
-	 have sub-accesses which do not make sense on the LHS (PR 40351).
-	 Check that this is not the case.  */
-      if (!build_ref_for_offset (NULL, TREE_TYPE (lacc->base), norm_offset,
-				 rchild->type, false))
-	continue;
-
       rchild->grp_hint = 1;
       new_acc = create_artificial_child_access (lacc, rchild, norm_offset);
       if (new_acc)
@@ -2124,48 +2176,19 @@  analyze_all_variable_accesses (void)
     return false;
 }
 
-/* Return true iff a reference statement into aggregate AGG can be built for
-   every single to-be-replaced accesses that is a child of ACCESS, its sibling
-   or a child of its sibling. TOP_OFFSET is the offset from the processed
-   access subtree that has to be subtracted from offset of each access.  */
-
-static bool
-ref_expr_for_all_replacements_p (struct access *access, tree agg,
-				 HOST_WIDE_INT top_offset)
-{
-  do
-    {
-      if (access->grp_to_be_replaced
-	  && !build_ref_for_offset (NULL, TREE_TYPE (agg),
-				    access->offset - top_offset,
-				    access->type, false))
-	return false;
-
-      if (access->first_child
-	  && !ref_expr_for_all_replacements_p (access->first_child, agg,
-					       top_offset))
-	return false;
-
-      access = access->next_sibling;
-    }
-  while (access);
-
-  return true;
-}
-
 /* Generate statements copying scalar replacements of accesses within a subtree
    into or out of AGG.  ACCESS is the first child of the root of the subtree to
    be processed.  AGG is an aggregate type expression (can be a declaration but
-   does not have to be, it can for example also be an indirect_ref).
-   TOP_OFFSET is the offset of the processed subtree which has to be subtracted
-   from offsets of individual accesses to get corresponding offsets for AGG.
-   If CHUNK_SIZE is non-null, copy only replacements in the interval
-   <start_offset, start_offset + chunk_size>, otherwise copy all.  GSI is a
-   statement iterator used to place the new statements.  WRITE should be true
-   when the statements should write from AGG to the replacement and false if
-   vice versa.  if INSERT_AFTER is true, new statements will be added after the
-   current statement in GSI, they will be added before the statement
-   otherwise.  */
+   does not have to be; it can, for example, also be a mem_ref or a series of
+   handled components).  TOP_OFFSET is the offset of the processed subtree
+   which has to be subtracted from offsets of individual accesses to get
+   corresponding offsets for AGG.  If CHUNK_SIZE is non-null, copy only
+   replacements in the interval <start_offset, start_offset + chunk_size>,
+   otherwise copy all.  GSI is a statement iterator used to place the new
+   statements.  WRITE should be true when the statements should write from AGG
+   to the replacement and false if vice versa.  If INSERT_AFTER is true, new
+   statements will be added after the current statement in GSI, they will be
+   added before the statement otherwise.  */
 
 static void
 generate_subtree_copies (struct access *access, tree agg,
@@ -2176,8 +2199,6 @@  generate_subtree_copies (struct access *
 {
   do
     {
-      tree expr = agg;
-
       if (chunk_size && access->offset >= start_offset + chunk_size)
 	return;
 
@@ -2185,14 +2206,11 @@  generate_subtree_copies (struct access *
 	  && (chunk_size == 0
 	      || access->offset + access->size > start_offset))
 	{
-	  tree repl = get_access_replacement (access);
-	  bool ref_found;
+	  tree expr, repl = get_access_replacement (access);
 	  gimple stmt;
 
-	  ref_found = build_ref_for_offset (&expr, TREE_TYPE (agg),
-					     access->offset - top_offset,
-					     access->type, false);
-	  gcc_assert (ref_found);
+	  expr = build_ref_for_model (agg, access->offset - top_offset,
+				      access, gsi, insert_after);
 
 	  if (write)
 	    {
@@ -2329,12 +2347,10 @@  sra_modify_expr (tree *expr, gimple_stmt
          in assembler statements (see PR42398).  */
       if (!useless_type_conversion_p (type, access->type))
 	{
-	  tree ref = access->base;
-	  bool ok;
+	  tree ref;
 
-	  ok = build_ref_for_offset (&ref, TREE_TYPE (ref),
-				     access->offset, access->type, false);
-	  gcc_assert (ok);
+	  ref = build_ref_for_model (access->base, access->offset, access,
+				     NULL, false);
 
 	  if (write)
 	    {
@@ -2458,25 +2474,11 @@  load_assign_lhs_subreplacements (struct
 								  lhs, old_gsi);
 
 	      if (*refreshed == SRA_UDH_LEFT)
-		{
-		  bool repl_found;
-
-		  rhs = lacc->base;
-		  repl_found = build_ref_for_offset (&rhs, TREE_TYPE (rhs),
-						     lacc->offset, lacc->type,
-						     false);
-		  gcc_assert (repl_found);
-		}
+		rhs = build_ref_for_model (lacc->base, lacc->offset, lacc,
+					    new_gsi, true);
 	      else
-		{
-		  bool repl_found;
-
-		  rhs = top_racc->base;
-		  repl_found = build_ref_for_offset (&rhs,
-						     TREE_TYPE (top_racc->base),
-						     offset, lacc->type, false);
-		  gcc_assert (repl_found);
-		}
+		rhs = build_ref_for_model (top_racc->base, offset, lacc,
+					    new_gsi, true);
 	    }
 
 	  stmt = gimple_build_assign (get_access_replacement (lacc), rhs);
@@ -2633,25 +2635,18 @@  sra_modify_assign (gimple *stmt, gimple_
 	  if (AGGREGATE_TYPE_P (TREE_TYPE (lhs))
 	      && !access_has_children_p (lacc))
 	    {
-	      tree expr = lhs;
-	      if (build_ref_for_offset (&expr, TREE_TYPE (lhs), 0,
-					TREE_TYPE (rhs), false))
-		{
-		  lhs = expr;
-		  gimple_assign_set_lhs (*stmt, expr);
-		}
+	      lhs = build_ref_for_offset (lhs, 0, TREE_TYPE (rhs), gsi, false);
+	      gimple_assign_set_lhs (*stmt, lhs);
 	    }
 	  else if (AGGREGATE_TYPE_P (TREE_TYPE (rhs))
+		   && !contains_view_convert_expr_p (rhs)
 		   && !access_has_children_p (racc))
-	    {
-	      tree expr = rhs;
-	      if (build_ref_for_offset (&expr, TREE_TYPE (rhs), 0,
-					TREE_TYPE (lhs), false))
-		rhs = expr;
-	    }
+	    rhs = build_ref_for_offset (rhs, 0, TREE_TYPE (lhs), gsi, false);
+
 	  if (!useless_type_conversion_p (TREE_TYPE (lhs), TREE_TYPE (rhs)))
 	    {
-	      rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, TREE_TYPE (lhs), rhs);
+	      rhs = fold_build1_loc (loc, VIEW_CONVERT_EXPR, TREE_TYPE (lhs),
+				     rhs);
 	      if (is_gimple_reg_type (TREE_TYPE (lhs))
 		  && TREE_CODE (lhs) != SSA_NAME)
 		force_gimple_rhs = true;
@@ -2694,11 +2689,7 @@  sra_modify_assign (gimple *stmt, gimple_
 
   if (gimple_has_volatile_ops (*stmt)
       || contains_view_convert_expr_p (rhs)
-      || contains_view_convert_expr_p (lhs)
-      || (access_has_children_p (racc)
-	  && !ref_expr_for_all_replacements_p (racc, lhs, racc->offset))
-      || (access_has_children_p (lacc)
-	  && !ref_expr_for_all_replacements_p (lacc, rhs, lacc->offset)))
+      || contains_view_convert_expr_p (lhs))
     {
       if (access_has_children_p (racc))
 	generate_subtree_copies (racc->first_child, racc->base, 0, 0, 0,
Index: mine/gcc/ipa-cp.c
===================================================================
--- mine.orig/gcc/ipa-cp.c
+++ mine/gcc/ipa-cp.c
@@ -327,7 +327,6 @@  ipcp_lattice_from_jfunc (struct ipa_node
     {
       struct ipcp_lattice *caller_lat;
       tree t;
-      bool ok;
 
       caller_lat = ipcp_get_lattice (info, jfunc->value.ancestor.formal_id);
       lat->type = caller_lat->type;
@@ -340,16 +339,9 @@  ipcp_lattice_from_jfunc (struct ipa_node
 	  return;
 	}
       t = TREE_OPERAND (caller_lat->constant, 0);
-      ok = build_ref_for_offset (&t, TREE_TYPE (t),
-				 jfunc->value.ancestor.offset,
-				 jfunc->value.ancestor.type, false);
-      if (!ok)
-	{
-	  lat->type = IPA_BOTTOM;
-	  lat->constant = NULL_TREE;
-	}
-      else
-	lat->constant = build_fold_addr_expr (t);
+      t = build_ref_for_offset (t, jfunc->value.ancestor.offset,
+				jfunc->value.ancestor.type, NULL, false);
+      lat->constant = build_fold_addr_expr (t);
     }
   else
     lat->type = IPA_BOTTOM;
Index: mine/gcc/ipa-prop.h
===================================================================
--- mine.orig/gcc/ipa-prop.h
+++ mine/gcc/ipa-prop.h
@@ -24,6 +24,7 @@  along with GCC; see the file COPYING3.
 #include "tree.h"
 #include "vec.h"
 #include "cgraph.h"
+#include "gimple.h"
 
 /* The following definitions and interfaces are used by
    interprocedural analyses or parameters.  */
@@ -511,6 +512,7 @@  void ipa_prop_read_jump_functions (void)
 void ipa_update_after_lto_read (void);
 
 /* From tree-sra.c:  */
-bool build_ref_for_offset (tree *, tree, HOST_WIDE_INT, tree, bool);
+tree build_ref_for_offset (tree, HOST_WIDE_INT, tree, gimple_stmt_iterator *,
+			   bool);
 
 #endif /* IPA_PROP_H */
Index: mine/gcc/Makefile.in
===================================================================
--- mine.orig/gcc/Makefile.in
+++ mine/gcc/Makefile.in
@@ -968,7 +968,7 @@  EBITMAP_H = ebitmap.h sbitmap.h
 LTO_STREAMER_H = lto-streamer.h $(LINKER_PLUGIN_API_H) $(TARGET_H) \
 		$(CGRAPH_H) $(VEC_H) vecprim.h $(TREE_H) $(GIMPLE_H)
 TREE_VECTORIZER_H = tree-vectorizer.h $(TREE_DATA_REF_H)
-IPA_PROP_H = ipa-prop.h $(TREE_H) $(VEC_H) $(CGRAPH_H)
+IPA_PROP_H = ipa-prop.h $(TREE_H) $(VEC_H) $(CGRAPH_H) $(GIMPLE_H)
 GSTAB_H = gstab.h stab.def
 BITMAP_H = bitmap.h $(HASHTAB_H) statistics.h
 GCC_PLUGIN_H = gcc-plugin.h highlev-plugin-common.h $(CONFIG_H) $(SYSTEM_H) \
@@ -3142,10 +3142,10 @@  tree-ssa-ccp.o : tree-ssa-ccp.c $(TREE_F
    tree-ssa-propagate.h value-prof.h $(FLAGS_H) $(TARGET_H) $(TOPLEV_H) $(DIAGNOSTIC_CORE_H) \
    $(DBGCNT_H) tree-pretty-print.h gimple-pretty-print.h
 tree-sra.o : tree-sra.c $(CONFIG_H) $(SYSTEM_H) coretypes.h alloc-pool.h \
-   $(TM_H) $(TREE_H) $(GIMPLE_H) $(CGRAPH_H) $(TREE_FLOW_H) $(IPA_PROP_H) \
-   $(DIAGNOSTIC_H) statistics.h $(TREE_DUMP_H) $(TIMEVAR_H) $(PARAMS_H) \
-   $(TARGET_H) $(FLAGS_H) $(EXPR_H) tree-pretty-print.h $(DBGCNT_H) \
-   $(TREE_INLINE_H) gimple-pretty-print.h
+   $(TM_H) $(TOPLEV_H) $(TREE_H) $(GIMPLE_H) $(CGRAPH_H) $(TREE_FLOW_H) \
+   $(IPA_PROP_H) $(DIAGNOSTIC_H) statistics.h $(TREE_DUMP_H) $(TIMEVAR_H) \
+   $(PARAMS_H) $(TARGET_H) $(FLAGS_H) $(EXPR_H) tree-pretty-print.h \
+   $(DBGCNT_H) $(TREE_INLINE_H) gimple-pretty-print.h
 tree-switch-conversion.o : tree-switch-conversion.c $(CONFIG_H) $(SYSTEM_H) \
     $(TREE_H) $(TM_P_H) $(TREE_FLOW_H) $(DIAGNOSTIC_H) $(TREE_INLINE_H) \
     $(TIMEVAR_H) $(TM_H) coretypes.h $(TREE_DUMP_H) $(GIMPLE_H) \
Index: mine/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
===================================================================
--- mine.orig/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
+++ mine/gcc/testsuite/gcc.dg/ipa/ipa-sra-1.c
@@ -36,6 +36,5 @@  main (int argc, char *argv[])
   return 0;
 }
 
-/* { dg-final { scan-tree-dump "About to replace expr cow.green with ISRA" "eipa_sra"  } } */
-/* { dg-final { scan-tree-dump "About to replace expr cow.blue with ISRA" "eipa_sra"  } } */
+/* { dg-final { scan-tree-dump-times "About to replace expr" 2 "eipa_sra" } } */
 /* { dg-final { cleanup-tree-dump "eipa_sra" } } */
Index: mine/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
===================================================================
--- mine.orig/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
+++ mine/gcc/testsuite/gcc.dg/tree-ssa/forwprop-5.c
@@ -1,5 +1,5 @@ 
 /* { dg-do compile } */
-/* { dg-options "-O1 -fdump-tree-esra -w" } */
+/* { dg-options "-O1 -fdump-tree-optimized -w" } */
 
 #define vector __attribute__((vector_size(16) ))
 struct VecClass
@@ -11,12 +11,9 @@  vector float foo( vector float v )
 {
     vector float x = v;
     x = x + x;
-    struct VecClass y = *(struct VecClass*)&x;
-    return y.v;
+    struct VecClass disappear = *(struct VecClass*)&x;
+    return disappear.v;
 }
 
-/* We should be able to remove the intermediate struct and directly
-   return x.  As we do not fold VIEW_CONVERT_EXPR<struct VecClass>(x).v
-   that doesn't happen right now.  */
-/* { dg-final { scan-tree-dump-times "VIEW_CONVERT_EXPR" 1 "esra"} } */
-/* { dg-final { cleanup-tree-dump "esra" } } */
+/* { dg-final { scan-tree-dump-times "disappear" 0 "optimized"} } */
+/* { dg-final { cleanup-tree-dump "optimized" } } */
Index: mine/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
===================================================================
--- mine.orig/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
+++ mine/gcc/testsuite/gcc.dg/tree-ssa/pr45144.c
@@ -42,5 +42,5 @@  bar (unsigned orig, unsigned *new)
   *new = foo (&a);
 }
 
-/* { dg-final { scan-tree-dump "x = a;" "optimized"} } */
+/* { dg-final { scan-tree-dump " = VIEW_CONVERT_EXPR<unsigned int>\\(a\\);" "optimized"} } */
 /* { dg-final { cleanup-tree-dump "optimized" } } */
Index: mine/gcc/ipa-prop.c
===================================================================
--- mine.orig/gcc/ipa-prop.c
+++ mine/gcc/ipa-prop.c
@@ -916,23 +916,27 @@  ipa_compute_jump_functions (struct cgrap
 static tree
 ipa_get_member_ptr_load_param (tree rhs, bool use_delta)
 {
-  tree rec, fld;
+  tree rec, ref_offset, fld_offset;
   tree ptr_field;
   tree delta_field;
 
-  if (TREE_CODE (rhs) != COMPONENT_REF)
+  if (TREE_CODE (rhs) != MEM_REF)
     return NULL_TREE;
-
   rec = TREE_OPERAND (rhs, 0);
+  if (TREE_CODE (rec) != ADDR_EXPR)
+    return NULL_TREE;
+  rec = TREE_OPERAND (rec, 0);
   if (TREE_CODE (rec) != PARM_DECL
       || !type_like_member_ptr_p (TREE_TYPE (rec), &ptr_field, &delta_field))
     return NULL_TREE;
 
-  fld = TREE_OPERAND (rhs, 1);
-  if (use_delta ? (fld == delta_field) : (fld == ptr_field))
-    return rec;
+  ref_offset = TREE_OPERAND (rhs, 1);
+  if (use_delta)
+    fld_offset = byte_position (delta_field);
   else
-    return NULL_TREE;
+    fld_offset = byte_position (ptr_field);
+
+  return tree_int_cst_equal (ref_offset, fld_offset) ? rec : NULL_TREE;
 }
 
 /* If STMT looks like a statement loading a value from a member pointer formal
@@ -999,8 +1003,8 @@  ipa_note_param_call (struct cgraph_node
    below, the call is on the last line:
 
      <bb 2>:
-       f$__delta_5 = f.__delta;
-       f$__pfn_24 = f.__pfn;
+       f$__delta_5 = MEM[(struct  *)&f];
+       f$__pfn_24 = MEM[(struct  *)&f + 4B];
 
      ...
 
Index: mine/gcc/testsuite/g++.dg/torture/pr44972.C
===================================================================
--- /dev/null
+++ mine/gcc/testsuite/g++.dg/torture/pr44972.C
@@ -0,0 +1,142 @@ 
+/* { dg-do compile } */
+
+#include<cassert>
+#include<new>
+#include<utility>
+
+namespace boost {
+
+template<class T>
+class optional;
+
+class aligned_storage
+{
+	char data[ 1000 ];
+  public:
+    void const* address() const { return &data[0]; }
+    void      * address()       { return &data[0]; }
+} ;
+
+
+template<class T>
+class optional_base
+{
+  protected :
+    optional_base(){}
+    optional_base ( T const& val )
+    {
+      construct(val);
+    }
+
+    template<class U>
+    void assign ( optional<U> const& rhs )
+    {
+      if (!is_initialized())
+        if ( rhs.is_initialized() )
+          construct(T());
+    }
+
+  public :
+
+    bool is_initialized() const { return m_initialized ; }
+
+  protected :
+
+    void construct ( T const& val )
+     {
+       new (m_storage.address()) T(val) ;
+     }
+
+    T const* get_ptr_impl() const
+    { return static_cast<T const*>(m_storage.address()); }
+
+  private :
+
+    bool m_initialized ;
+    aligned_storage  m_storage ;
+} ;
+
+
+template<class T>
+class optional : public optional_base<T>
+{
+    typedef optional_base<T> base ;
+
+  public :
+
+    optional() : base() {}
+    optional ( T const& val ) : base(val) {}
+    optional& operator= ( optional const& rhs )
+      {
+        this->assign( rhs ) ;
+        return *this ;
+      }
+
+    T const& get() const ;
+
+    T const* operator->() const { ((this->is_initialized()) ? static_cast<void> (0) : __assert_fail ("this->is_initialized()", "pr44972.C", 78, __PRETTY_FUNCTION__)) ; return this->get_ptr_impl() ; }
+
+} ;
+
+
+} // namespace boost
+
+
+namespace std
+{
+
+  template<typename _Tp, std::size_t _Nm>
+    struct array
+    {
+      typedef _Tp 	    			      value_type;
+      typedef const value_type*			      const_iterator;
+
+      value_type _M_instance[_Nm];
+
+    };
+}
+
+
+class NT
+{
+  double _inf, _sup;
+};
+
+
+template < typename T > inline
+std::array<T, 1>
+make_array(const T& b1)
+{
+  std::array<T, 1> a = { { b1 } };
+  return a;
+}
+
+class V
+{
+  typedef std::array<NT, 1>               Base;
+  Base base;
+
+public:
+  V() {}
+  V(const NT &x)
+    : base(make_array(x)) {}
+
+};
+
+using boost::optional ;
+
+optional< std::pair< NT, NT > >
+  linsolve_pointC2() ;
+
+optional< V > construct_normal_offset_lines_isecC2 ( )
+{
+  optional< std::pair<NT,NT> > ip;
+
+  ip = linsolve_pointC2();
+
+  V a(ip->first) ;
+  return a;
+}
+
+
+