[PATCH, PR target/46631] Change operands order so we can use 16bit AND instead of 32bit in thumb2

Submitter Richard Earnshaw
Date Dec. 8, 2010, 4:46 p.m.
Message ID <1291826813.6768.28.camel@e102346-lin.cambridge.arm.com>
Permalink /patch/74748/
State New

Comments

Richard Earnshaw - Dec. 8, 2010, 4:46 p.m.
On Wed, 2010-12-01 at 17:54 -0800, Carrot Wei wrote:
> So you mean pattern "*andsi3_compare0"?
> 
> It doesn't work, since the original instruction "and    r2, r3, r2"
> also doesn't clobber the condition code. So we must add the clobber
> explicitly. It looks like peephole2 is still the best solution.
> 
> thanks
> Guozhi

No, having looked again at the source code I notice that Bernd took out
the peephole2 patches for this sort of code earlier this year.  The
reason was that doing it in a peephole2 prevented if-conversion from
applying (because of the clobber of the condition codes).  Instead the
equivalent code for <op> Rd, Rd, Rm is now done in thumb2_reorg.

The patch below now extends that code to handle Rd, Rn, Rd when op is
commutative.  This fixes up the case you've been looking at and also the
other logical commutative operations (xor, or).

Tested on arm-eabi and committed to trunk along with your test case.
Sorry it's taken so long to get this resolved.

R.

2010-12-08  Richard Earnshaw  <rearnsha@arm.com>

	PR target/46631
	* arm.c (thumb2_reorg): Also try to reduce <commutative_op> Rd, Rn, Rd
	into a 16-bit instruction.

> 
> On Wed, Dec 1, 2010 at 4:26 PM, Richard Earnshaw
> <richard.earnshaw@buzzard.freeserve.co.uk> wrote:
> > Absolutely not!  That pattern does not clobber the condition codes.  Look for the existing pattern that generates a two-register AND operation for thumb2 and update that.
> >
> > R.
> >
> > On 1 Dec 2010, at 19:12, "Carrot Wei" <carrot@google.com> wrote:
> >
> >> On Wed, Dec 1, 2010 at 4:04 AM, Richard Earnshaw <rearnsha@arm.com> wrote:
> >>>
> >>> On Tue, 2010-11-30 at 15:21 -0800, Carrot Wei wrote:
> >>>> Hi
> >>>>
> >>>> An instruction of the following style is 32 bit in thumb2.
> >>>>    and    r2, r3, r2
> >>>>
> >>>> If we change the source operands order, it will be 16 bit.
> >>>>    ands    r2, r2, r3
> >>>>
> >>>> This patch contains a new peephole2 to detect the situation where all 3
> >>>> operands of AND are low registers and the target is the same as the second
> >>>> source, then replace it with another AND with its source operands exchanged.
> >>>>
> >>>> This patch passed the regression test on arm qemu.
> >>>>
> >>>
> >>> So gas will already generate a 16-bit instruction from
> >>>       ands    r2, r3, r2
> >>>
> >>> So it should be possible to just extend the existing peephole/split to
> >>> handle this case (and for all other commutative operations).
> >>>
> >>
> >> Do you mean I should do the following modification to pattern
> >> "*arm_andsi3_insn" ?
> >>
> >> Index: arm.md
> >> ===================================================================
> >> --- arm.md    (revision 167257)
> >> +++ arm.md    (working copy)
> >> @@ -2094,15 +2094,26 @@
> >>   and%?\\t%0, %1, %2
> >>   bic%?\\t%0, %1, #%B2
> >>   #"
> >> -  "TARGET_32BIT
> >> -   && GET_CODE (operands[2]) == CONST_INT
> >> -   && !(const_ok_for_arm (INTVAL (operands[2]))
> >> -    || const_ok_for_arm (~INTVAL (operands[2])))"
> >> -  [(clobber (const_int 0))]
> >> +  "TARGET_32BIT"
> >> +  [(parallel
> >> +    [(set (match_dup 0)
> >> +          (and:SI (match_dup 0)
> >> +                  (match_dup 1)))
> >> +     (clobber (reg:CC CC_REGNUM))])]
> >>  "
> >> -  arm_split_constant  (AND, SImode, curr_insn,
> >> -                   INTVAL (operands[2]), operands[0], operands[1], 0);
> >> -  DONE;
> >> +  if (GET_CODE (operands[2]) == CONST_INT
> >> +      && !(const_ok_for_arm (INTVAL (operands[2]))
> >> +           || const_ok_for_arm (~INTVAL (operands[2]))))
> >> +    {
> >> +      arm_split_constant  (AND, SImode, curr_insn,
> >> +                       INTVAL (operands[2]), operands[0], operands[1], 0);
> >> +      DONE;
> >> +    }
> >> +  else if (TARGET_THUMB2 && rtx_equal_p (operands[0], operands[2])
> >> +           && peep2_regno_dead_p (0, CC_REGNUM))
> >> +    {
> >> +    }
> >> +  FAIL;
> >>  "
> >>  [(set_attr "length" "4,4,16")
> >>   (set_attr "predicable" "yes")]
> >>
> >> And do the similar to patterns "*iorsi3_insn" and "*arm_xorsi3"?
> >>
> >> thanks
> >> Guozhi
> >
> >
> >
> >

Patch

*** arm.c	(revision 167602)
--- arm.c	(local)
*************** thumb2_reorg (void)
*** 12183,12188 ****
--- 12183,12189 ----
    FOR_EACH_BB (bb)
      {
        rtx insn;
+ 
        COPY_REG_SET (&live, DF_LR_OUT (bb));
        df_simulate_initialize_backwards (bb, &live);
        FOR_BB_INSNS_REVERSE (bb, insn)
*************** thumb2_reorg (void)
*** 12200,12220 ****
--- 12201,12243 ----
  		  rtx dst = XEXP (pat, 0);
  		  rtx src = XEXP (pat, 1);
  		  rtx op0 = XEXP (src, 0);
+ 		  rtx op1 = (GET_RTX_CLASS (GET_CODE (src)) == RTX_COMM_ARITH
+ 			     ? XEXP (src, 1) : NULL);
+ 
  		  if (rtx_equal_p (dst, op0)
  		      || GET_CODE (src) == PLUS || GET_CODE (src) == MINUS)
  		    {
  		      rtx ccreg = gen_rtx_REG (CCmode, CC_REGNUM);
  		      rtx clobber = gen_rtx_CLOBBER (VOIDmode, ccreg);
  		      rtvec vec = gen_rtvec (2, pat, clobber);
+ 
+ 		      PATTERN (insn) = gen_rtx_PARALLEL (VOIDmode, vec);
+ 		      INSN_CODE (insn) = -1;
+ 		    }
+ 		  /* We can also handle a commutative operation where the
+ 		     second operand matches the destination.  */
+ 		  else if (op1 && rtx_equal_p (dst, op1))
+ 		    {
+ 		      rtx ccreg = gen_rtx_REG (CCmode, CC_REGNUM);
+ 		      rtx clobber = gen_rtx_CLOBBER (VOIDmode, ccreg);
+ 		      rtvec vec;
+ 
+ 		      src = copy_rtx (src);
+ 		      XEXP (src, 0) = op1;
+ 		      XEXP (src, 1) = op0;
+ 		      pat = gen_rtx_SET (VOIDmode, dst, src);
+ 		      vec = gen_rtvec (2, pat, clobber);
  		      PATTERN (insn) = gen_rtx_PARALLEL (VOIDmode, vec);
  		      INSN_CODE (insn) = -1;
  		    }
  		}
  	    }
+ 
  	  if (NONDEBUG_INSN_P (insn))
  	    df_simulate_one_insn_backwards (bb, insn, &live);
  	}
      }
+ 
    CLEAR_REG_SET (&live);
  }