Minor TARGET_MEM_REF cleanup

Submitter Eric Botcazou
Date Sept. 29, 2012, 11:17 a.m.
Message ID <1612032.RubVKZplSl@polaris>
Permalink /patch/188028/
State New

Comments

Eric Botcazou - Sept. 29, 2012, 11:17 a.m.
Hi,

for simple loops like:

extern int a[];
extern int b[];

void foo (int l)
{
  int i;

  for (i = 0; i < l; i++)
    a[i] = b[i];
}

you get in the .lim3 dump:

Unanalyzed memory reference 0: _5 = MEM[symbol: b, index: ivtmp.3_1, step: 4, 
offset: 0B];
Memory reference 1: MEM[symbol: a, index: ivtmp.3_1, step: 4, offset: 0B]

so the pass analyzes the store but not the load, which seems an oversight.
The patch also folds copy_mem_ref_info into its only user and removes it.

Tested on x86_64-suse-linux, OK for mainline?


2012-09-29  Eric Botcazou  <ebotcazou@adacore.com>

	* tree.h (copy_mem_ref_info): Delete.
	* tree-ssa-address.c (copy_mem_ref_info): Likewise.
	(maybe_fold_tmr): Copy flags manually.
	* tree-ssa-loop-im.c (simple_mem_ref_in_stmt): Accept TARGET_MEM_REF
	on the RHS as well.
Richard Guenther - Oct. 1, 2012, 9:38 a.m.
On Sat, Sep 29, 2012 at 1:17 PM, Eric Botcazou <ebotcazou@adacore.com> wrote:
> [...]
> Tested on x86_64-suse-linux, OK for mainline?

Please take the opportunity to clean up simple_mem_ref_in_stmt some more.
Both loads and stores in assignments require gimple_assign_single_p, so first do

  if (!gimple_assign_single_p (stmt))
    return NULL;

before deciding on store/load.  To decide that the stmt is a load, check

  if (TREE_CODE (*lhs) == SSA_NAME
      && gimple_vuse (stmt))

it is a store if

  gimple_vdef (stmt)
  && (TREE_CODE (*rhs) == SSA_NAME
      || is_gimple_min_invariant (*rhs))

else it may still be an aggregate copy, but LIM doesn't handle those
(though they may still be interesting for disambiguation ...)
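Taken together, the suggested shape of the function would be roughly the following (an untested sketch that merely stitches the snippets above together against the GCC internal API; it is not a posted patch):

  static tree *
  simple_mem_ref_in_stmt (gimple stmt, bool *is_store)
  {
    tree *lhs, *rhs;

    /* Recognize MEM = (SSA_NAME | invariant) and SSA_NAME = MEM patterns.  */
    if (!gimple_assign_single_p (stmt))
      return NULL;

    lhs = gimple_assign_lhs_ptr (stmt);
    rhs = gimple_assign_rhs1_ptr (stmt);

    if (TREE_CODE (*lhs) == SSA_NAME && gimple_vuse (stmt))
      {
        *is_store = false;
        return rhs;
      }
    else if (gimple_vdef (stmt)
	     && (TREE_CODE (*rhs) == SSA_NAME
	         || is_gimple_min_invariant (*rhs)))
      {
        *is_store = true;
        return lhs;
      }

    /* Possibly an aggregate copy, which LIM does not handle.  */
    return NULL;
  }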

The tree-ssa-address parts are ok as-is.

Thanks,
Richard.



Patch

Index: tree-ssa-loop-im.c
===================================================================
--- tree-ssa-loop-im.c	(revision 191796)
+++ tree-ssa-loop-im.c	(working copy)
@@ -638,7 +638,7 @@  outermost_indep_loop (struct loop *outer
 static tree *
 simple_mem_ref_in_stmt (gimple stmt, bool *is_store)
 {
-  tree *lhs;
+  tree *lhs, *rhs;
   enum tree_code code;
 
   /* Recognize MEM = (SSA_NAME | invariant) and SSA_NAME = MEM patterns.  */
@@ -648,19 +648,21 @@  simple_mem_ref_in_stmt (gimple stmt, boo
   code = gimple_assign_rhs_code (stmt);
 
   lhs = gimple_assign_lhs_ptr (stmt);
+  rhs = gimple_assign_rhs1_ptr (stmt);
 
   if (TREE_CODE (*lhs) == SSA_NAME)
     {
       if (get_gimple_rhs_class (code) != GIMPLE_SINGLE_RHS
-	  || !is_gimple_addressable (gimple_assign_rhs1 (stmt)))
+	  || (!is_gimple_addressable (*rhs)
+	      && TREE_CODE (*rhs) != TARGET_MEM_REF))
 	return NULL;
 
       *is_store = false;
-      return gimple_assign_rhs1_ptr (stmt);
+      return rhs;
     }
   else if (code == SSA_NAME
 	   || (get_gimple_rhs_class (code) == GIMPLE_SINGLE_RHS
-	       && is_gimple_min_invariant (gimple_assign_rhs1 (stmt))))
+	       && is_gimple_min_invariant (*rhs)))
     {
       *is_store = true;
       return lhs;
Index: tree.h
===================================================================
--- tree.h	(revision 191796)
+++ tree.h	(working copy)
@@ -6263,7 +6263,6 @@  tree target_for_debug_bind (tree);
 
 /* In tree-ssa-address.c.  */
 extern tree tree_mem_ref_addr (tree, tree);
-extern void copy_mem_ref_info (tree, tree);
 extern void copy_ref_info (tree, tree);
 
 /* In tree-vrp.c */
Index: tree-ssa-address.c
===================================================================
--- tree-ssa-address.c	(revision 191796)
+++ tree-ssa-address.c	(working copy)
@@ -821,16 +821,6 @@  get_address_description (tree op, struct
   addr->offset = TMR_OFFSET (op);
 }
 
-/* Copies the additional information attached to target_mem_ref FROM to TO.  */
-
-void
-copy_mem_ref_info (tree to, tree from)
-{
-  /* And the info about the original reference.  */
-  TREE_SIDE_EFFECTS (to) = TREE_SIDE_EFFECTS (from);
-  TREE_THIS_VOLATILE (to) = TREE_THIS_VOLATILE (from);
-}
-
 /* Copies the reference information from OLD_REF to NEW_REF, where
    NEW_REF should be either a MEM_REF or a TARGET_MEM_REF.  */
 
@@ -901,7 +891,7 @@  maybe_fold_tmr (tree ref)
 {
   struct mem_address addr;
   bool changed = false;
-  tree ret, off;
+  tree new_ref, off;
 
   get_address_description (ref, &addr);
 
@@ -962,10 +952,11 @@  maybe_fold_tmr (tree ref)
      ended up folding it, always create a new TARGET_MEM_REF regardless
      if it is valid in this for on the target - the propagation result
      wouldn't be anyway.  */
-  ret = create_mem_ref_raw (TREE_TYPE (ref),
-			    TREE_TYPE (addr.offset), &addr, false);
-  copy_mem_ref_info (ret, ref);
-  return ret;
+  new_ref = create_mem_ref_raw (TREE_TYPE (ref),
+			        TREE_TYPE (addr.offset), &addr, false);
+  TREE_SIDE_EFFECTS (new_ref) = TREE_SIDE_EFFECTS (ref);
+  TREE_THIS_VOLATILE (new_ref) = TREE_THIS_VOLATILE (ref);
+  return new_ref;
 }
 
 /* Dump PARTS to FILE.  */