
Split parts of cse_insn out to a few new functions

Message ID: CABu31nPvRfLWD4aR4nMW+dA4WLK64yd2aeJTr=_aDdXWrgU8qQ@mail.gmail.com
State: New

Commit Message

Steven Bosscher March 21, 2012, 11:09 p.m. UTC
On Wed, Mar 21, 2012 at 1:13 AM, Ian Lance Taylor wrote:
> On Tue, Mar 20, 2012 at 2:06 PM, Steven Bosscher wrote:
>>
>> This patch splits a couple of pieces of cse_insn out to new functions.
>> There are no functional changes, and no code generation differences as
>> far as I could tell on x86_64 (-m64 and -m32).

Likewise for the attached patch.

>> The purpose of the patch is to hopefully make cse_insn easier
>> to understand. In a follow-up patch, I will make canonicalize_insn run
>> only once per insn (it currently, i.e. before and after this patch,
>> runs multiple times for CSE on extended basic blocks if a block is in
>> multiple extended basic blocks).

That is what the attached patch does.

Bootstrapped&tested on x86_64-unknown-linux-gnu.
OK for trunk?

Ciao!
Steven

	* cse.c (cse_canonicalized_basic_blocks): New simple bitmap to
	tag basic blocks that have already been traversed at least once,
	so that all insns have been canonicalized.
	(cse_insn): Call canonicalize_insn only if the basic block that
	contains insn is visited for the first time.
	(cse_extended_basic_block): After visiting all insns in a basic
	block, mark the block in cse_canonicalized_basic_blocks.
	(cse_main): Set up and destroy cse_canonicalized_basic_blocks.

	(cse_find_path): Micro-optimization, reorder one condition to
	avoid a reference to cfun.
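
To make the ChangeLog above more concrete, here is a small standalone sketch in
plain C (not GCC code; the block numbers, the bool array, and the path arrays
are invented for illustration) of why canonicalization could previously run more
than once per insn, and how a per-block "done" bitmap such as
cse_canonicalized_basic_blocks avoids that. In a diamond-shaped CFG the join
block lies on two extended basic blocks, so it is walked twice; the
TEST_BIT/SET_BIT pair of the real patch is modelled here by a plain bool array.

#include <stdbool.h>
#include <stdio.h>

#define N_BLOCKS 4

/* Stand-in for the cse_canonicalized_basic_blocks sbitmap.  */
static bool canonicalized[N_BLOCKS];

/* The one-time, per-insn canonicalization work (canonicalize_insn),
   collapsed here to one call per block.  */
static void
canonicalize_block (int bb)
{
  printf ("canonicalizing block %d\n", bb);
}

/* Walk one extended-basic-block path, the way cse_extended_basic_block
   walks the blocks collected by cse_find_path.  */
static void
process_path (const int *path, int len)
{
  int i;
  for (i = 0; i < len; i++)
    {
      int bb = path[i];
      if (!canonicalized[bb])	/* TEST_BIT analogue */
	canonicalize_block (bb);
      /* ... other CSE work that may legitimately run again ...  */
      canonicalized[bb] = true;	/* SET_BIT analogue */
    }
}

int
main (void)
{
  /* Diamond CFG 0 -> {1, 2} -> 3: block 3 lies on both paths.  */
  int path1[] = { 0, 1, 3 };
  int path2[] = { 0, 2, 3 };
  process_path (path1, 3);
  process_path (path2, 3);	/* block 3 is not canonicalized again */
  return 0;
}

Without the "canonicalized" check, block 3 would be canonicalized once per path;
with it, only the first visit does that work, which is what the sbitmap buys in
cse.c.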

Comments

Steven Bosscher March 21, 2012, 11:15 p.m. UTC | #1
On Thu, Mar 22, 2012 at 12:09 AM, Steven Bosscher <stevenb.gcc@gmail.com> wrote:

>        (cse_find_path): Micro-optimization, reorder one condition to
>        avoid a reference to cfun.

Ah, and please ignore this bit. I don't know what I was thinking...
Richard Sandiford March 26, 2012, 7:02 p.m. UTC | #2
Steven Bosscher <stevenb.gcc@gmail.com> writes:
> On Wed, Mar 21, 2012 at 1:13 AM, Ian Lance Taylor wrote:
>> On Tue, Mar 20, 2012 at 2:06 PM, Steven Bosscher wrote:
>>>
>>> This patch splits a couple of pieces of cse_insn out to new functions.
>>> There are no functional changes, and no code generation differences as
>>> far as I could tell on x86_64 (-m64 and -m32).
>
> Likewise for the attached patch.
>
>>> The purpose of the patch is to hopefully make cse_insn easier
>>> to understand. In a follow-up patch, I will make canonicalize_insn run
>>> only once per insn (it currently, i.e. before and after this patch,
>>> runs multiple times for CSE on extended basic blocks if a block is in
>>> multiple extended basic blocks).
>
> That is what the attached patch does.
>
> Bootstrapped&tested on x86_64-unknown-linux-gnu.
> OK for trunk?
>
> Ciao!
> Steven
>
> 	* cse.c (cse_canonicalized_basic_blocks): New simple bitmap to
> 	tag basic blocks that have already been traversed at least once,
> 	so that all insns have been canonicalized.
> 	(cse_insn): Call canonicalize_insn only if the basic block that
> 	contains insn is visited for the first time.
> 	(cse_extended_basic_block): After visiting all insns in a basic
> 	block, mark the block in cse_canonicalized_basic_blocks.
> 	(cse_main): Setup and destroy cse_canonicalized_basic_blocks.

OK, thanks (without the microoptimisation, as you say).

Out of curiosity, do you still see this bit as useful:

      /* We potentially will process this insn many times.  Therefore,
	 drop the REG_EQUAL note if it is equal to the SET_SRC of the
	 unique set in INSN.

	 Do not do so if the REG_EQUAL note is for a STRICT_LOW_PART,
	 because cse_insn handles those specially.  */

?  Does "many times" mean in CSE, or later?

Richard
Steven Bosscher March 26, 2012, 7:12 p.m. UTC | #3
On Mon, Mar 26, 2012 at 9:02 PM, Richard Sandiford
<rdsandiford@googlemail.com> wrote:
>>       * cse.c (cse_canonicalized_basic_blocks): New simple bitmap to
>>       tag basic blocks that have already been traversed at least once,
>>       so that all insns have been canonicalized.
>>       (cse_insn): Call canonicalize_insn only if the basic block that
>>       contains insn is visited for the first time.
>>       (cse_extended_basic_block): After visiting all insns in a basic
>>       block, mark the block in cse_canonicalized_basic_blocks.
>>       (cse_main): Setup and destroy cse_canonicalized_basic_blocks.
>
> OK, thanks (without the microoptimisation, as you say).

Thanks, will commit!


> Out of curiosity, do you still see this bit as useful:
>
>      /* We potentially will process this insn many times.  Therefore,
>         drop the REG_EQUAL note if it is equal to the SET_SRC of the
>         unique set in INSN.
>
>         Do not do so if the REG_EQUAL note is for a STRICT_LOW_PART,
>         because cse_insn handles those specially.  */
>
> ?  Does "many times" mean in CSE, or later?

It means "in CSE", yes. But with the patch to canonicalize only once,
I suppose this can go away again.
Having said that, I do believe it would be good to avoid having
REG_EQUAL notes that are rtx_equal_p to the SET_SRC. This happens
quite frequently after CSE. I'm not sure how to clean them up.

Ciao!
Steven
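
As an aside to the exchange above: one possible, untested shape for such a
cleanup, sketched purely as an illustration (the function name is invented, and
where such a pass would run is left open), is to walk all insns and drop a
REG_EQUAL note whenever it is rtx_equal_p to the SET_SRC of the insn's unique
set, skipping STRICT_LOW_PART destinations since cse_insn treats those notes
specially:

/* Untested sketch, assuming the usual cse.c context (rtl.h,
   basic-block.h, etc.).  Drop REG_EQUAL notes that merely duplicate
   the SET_SRC of an insn's unique set.  */
static void
remove_redundant_reg_equal_notes (void)
{
  basic_block bb;
  rtx insn;

  FOR_EACH_BB (bb)
    FOR_BB_INSNS (bb, insn)
      if (INSN_P (insn))
	{
	  rtx set = single_set (insn);
	  rtx note = find_reg_note (insn, REG_EQUAL, NULL_RTX);

	  if (set != NULL_RTX
	      && note != NULL_RTX
	      /* cse_insn handles STRICT_LOW_PART REG_EQUAL notes
		 specially, so leave those alone.  */
	      && GET_CODE (SET_DEST (set)) != STRICT_LOW_PART
	      && rtx_equal_p (XEXP (note, 0), SET_SRC (set)))
	    remove_note (insn, note);
	}
}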

Patch

	* cse.c (cse_canonicalized_basic_blocks): New simple bitmap to
	tag basic blocks that have already been traversed at least once,
	so that all insns have been canonicalized.
	(cse_insn): Call canonicalize_insn only if the basic block that
	contains insn is visited for the first time.
	(cse_extended_basic_block): After visiting all insns in a basic
	block, mark the block in cse_canonicalized_basic_blocks.
	(cse_main): Set up and destroy cse_canonicalized_basic_blocks.

	(cse_find_path): Micro-optimization, reorder one condition to
	avoid a reference to cfun.

Index: cse.c
===================================================================
--- cse.c	(revision 185622)
+++ cse.c	(working copy)
@@ -551,6 +551,10 @@  static bitmap cse_ebb_live_in, cse_ebb_l
    already as part of an already processed extended basic block.  */
 static sbitmap cse_visited_basic_blocks;
 
+/* A simple bitmap to track for which basic blocks all insns have been
+   canonicalized already.  */
+static sbitmap cse_canonicalized_basic_blocks;
+
 static bool fixed_base_plus_p (rtx x);
 static int notreg_cost (rtx, enum rtx_code, int);
 static int approx_reg_cost_1 (rtx *, void *);
@@ -4492,8 +4496,10 @@  cse_insn (rtx insn)
   /* Record all the SETs in this instruction.  */
   n_sets = find_sets_in_insn (insn, &sets);
 
-  /* Substitute the canonical register where possible.  */
-  canonicalize_insn (insn, &sets, n_sets);
+  /* If we have not visited this block before (as part of another extended
+     basic block), substitute the canonical register where possible.  */
+  if (!TEST_BIT (cse_canonicalized_basic_blocks, BLOCK_FOR_INSN (insn)->index))
+    canonicalize_insn (insn, &sets, n_sets);
 
   /* If this insn has a REG_EQUAL note, store the equivalent value in SRC_EQV,
      if different, or if the DEST is a STRICT_LOW_PART.  The latter condition
@@ -6254,10 +6260,9 @@  cse_find_path (basic_block first_bb, str
 	  else
 	    e = NULL;
 
-	  if (e
-	      && !((e->flags & EDGE_ABNORMAL_CALL) && cfun->has_nonlocal_label)
-	      && e->dest != EXIT_BLOCK_PTR
+	  if (e && e->dest != EXIT_BLOCK_PTR
 	      && single_pred_p (e->dest)
+	      && !((e->flags & EDGE_ABNORMAL_CALL) && cfun->has_nonlocal_label)
 	      /* Avoid visiting basic blocks twice.  The large comment
 		 above explains why this can happen.  */
 	      && !TEST_BIT (cse_visited_basic_blocks, e->dest->index))
@@ -6452,6 +6457,9 @@  cse_extended_basic_block (struct cse_bas
 	    }
 	}
 
+      /* We have now canonicalized all insns in this basic block.  */
+      SET_BIT (cse_canonicalized_basic_blocks, bb->index);
+
       /* With non-call exceptions, we are not always able to update
 	 the CFG properly inside cse_insn.  So clean up possibly
 	 redundant EH edges here.  */
@@ -6555,6 +6563,10 @@  cse_main (rtx f ATTRIBUTE_UNUSED, int nr
   cse_visited_basic_blocks = sbitmap_alloc (last_basic_block);
   sbitmap_zero (cse_visited_basic_blocks);
 
+  /* Set up the table of already canonicalized basic blocks.  */
+  cse_canonicalized_basic_blocks = sbitmap_alloc (last_basic_block);
+  sbitmap_zero (cse_canonicalized_basic_blocks);
+
   /* Loop over basic blocks in reverse completion order (RPO),
      excluding the ENTRY and EXIT blocks.  */
   n_blocks = pre_and_rev_post_order_compute (NULL, rc_order, false);
@@ -6598,6 +6610,7 @@  cse_main (rtx f ATTRIBUTE_UNUSED, int nr
   free (reg_eqv_table);
   free (ebb_data.path);
   sbitmap_free (cse_visited_basic_blocks);
+  sbitmap_free (cse_canonicalized_basic_blocks);
   free (rc_order);
   rtl_hooks = general_rtl_hooks;