Patchwork [PING,^,2,1/n] Add conditional compare support

Submitter Zhenqiang Chen
Date Nov. 28, 2013, 9:53 a.m.
Message ID <000101ceec1f$a41bf500$ec53df00$@arm.com>
Permalink /patch/294841/
State New

Comments

Zhenqiang Chen - Nov. 28, 2013, 9:53 a.m.
Hi,

The patch is rebased against the latest trunk, with these changes:
* simplify_while_replacing (recog.c) should not swap operands of compares in CCMP.
* make sure that no instruction other than the compares can clobber CC when expanding CCMP.

For readability, I add the following description (in expr.c).

  The following functions expand conditional compare (CCMP) instructions.
   Here is a short description of the overall algorithm:
     * ccmp_candidate_p is used to identify a CCMP candidate.

     * expand_ccmp_expr is the main entry, which calls expand_ccmp_expr_1
       to expand CCMP.

     * expand_ccmp_expr_1 uses a recursive algorithm to expand CCMP.
       It calls two target hooks gen_ccmp_first and gen_ccmp_next to generate
       CCMP instructions.
         - gen_ccmp_first expands the first compare in CCMP.
         - gen_ccmp_next expands the following compares.

       Another hook select_ccmp_cmp_order is called to determine which compare
       is done first, since not all combinations of compares are legal on some
       targets, such as ARM.  Swapping the compares may give us more chances.

       During expanding, we must make sure that no instruction can clobber the
       CC reg except the compares.  So clobber_cc_p and check_clobber_cc are
       introduced to do the check.

     * If the final result is not used in a COND_EXPR (checked by function
       used_in_cond_stmt_p), it calls cstorecc4 pattern to store the CC to a
       general register.
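To make the intended input concrete, here is a small standalone C example (illustrative only, not part of the patch) of the kind of source that ccmp_candidate_p is meant to match: integer comparisons combined with && or || whose combined result is either stored or branched on.

```c
#include <assert.h>

/* On a CCMP-capable target the two comparisons can become one compare
   plus one conditional compare; since the result is returned rather
   than branched on, the CC register would be read back through the
   cstorecc4 pattern.  */
int both_positive (int a, int b)
{
  return a > 0 && b > 0;
}

/* Here the combined result feeds a branch, so the CC register can be
   consumed directly by cbranchcc4 and no cstorecc4 is needed.  */
int max_or_zero (int a, int b)
{
  if (a > 0 && b > 0)
    return a > b ? a : b;
  return 0;
}
```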

Bootstrapped and no make check regression on x86-64 and an ARM Chromebook.

ChangeLog:
2013-11-28  Zhenqiang Chen  <zhenqiang.chen@linaro.org>

	* config/arm/arm-protos.h (arm_select_dominance_ccmp_mode,
	arm_ccmode_to_code): New prototypes.
	* config/arm/arm.c (arm_select_dominance_cc_mode_1): New function
	extracted from arm_select_dominance_cc_mode.
	(arm_ccmode_to_code, arm_code_to_ccmode, arm_convert_to_SImode,
	arm_select_dominance_ccmp_mode): New functions.
	(arm_select_ccmp_cmp_order, arm_gen_ccmp_first, arm_gen_ccmp_next):
	New hooks.
	(arm_select_dominance_cc_mode): Call arm_select_dominance_cc_mode_1.
	* config/arm/arm.md (cbranchcc4, cstorecc4, ccmp_and, ccmp_ior): New
	instruction patterns.
	* doc/md.texi (ccmp): New index.
	* doc/tm.texi (TARGET_SELECT_CCMP_CMP_ORDER, TARGET_GEN_CCMP_FIRST,
	TARGET_GEN_CCMP_NEXT): New hooks.
	* doc/tm.texi.in (TARGET_SELECT_CCMP_CMP_ORDER, TARGET_GEN_CCMP_FIRST,
	TARGET_GEN_CCMP_NEXT): New hooks.
	* expmed.c (emit_cstore): Make it global.
	* expr.c: Include tree-phinodes.h and ssa-iterators.h.
	(ccmp_candidate_p, used_in_cond_stmt_p, check_clobber_cc, clobber_cc_p,
	gen_ccmp_next, expand_ccmp_expr_1, expand_ccmp_expr): New functions.
	(expand_expr_real_1): Handle conditional compare.
	* optabs.c (get_rtx_code): Make it global and handle BIT_AND_EXPR and
	BIT_IOR_EXPR.
	* optabs.h (get_rtx_code, emit_cstore): New prototypes.
	* recog.c (ccmp_insn_p): New function.
	(simplify_while_replacing): Do not swap conditional compare insn.
	* target.def (select_ccmp_cmp_order, gen_ccmp_first, gen_ccmp_next):
	Define hooks.
	* targhooks.c (default_select_ccmp_cmp_order): New function.
	* targhooks.h (default_select_ccmp_cmp_order): New prototype.




> -----Original Message-----
> From: gcc-patches-owner@gcc.gnu.org [mailto:gcc-patches-
> owner@gcc.gnu.org] On Behalf Of Zhenqiang Chen
> Sent: Wednesday, November 20, 2013 4:05 PM
> To: 'Richard Henderson'; Richard Earnshaw
> Cc: 'Richard Biener'; GCC Patches
> Subject: [PING] [PATCH 1/n] Add conditional compare support
> 
> Ping?
> 
> Thanks!
> -Zhenqiang
> 
> > -----Original Message-----
> > From: gcc-patches-owner@gcc.gnu.org [mailto:gcc-patches-
> > owner@gcc.gnu.org] On Behalf Of Zhenqiang Chen
> > Sent: Wednesday, November 06, 2013 3:39 PM
> > To: 'Richard Henderson'
> > Cc: Richard Earnshaw; 'Richard Biener'; GCC Patches
> > Subject: RE: [PATCH 1/n] Add conditional compare support
> >
> >
> > > -----Original Message-----
> > > From: Richard Henderson [mailto:rth@redhat.com]
> > > Sent: Tuesday, November 05, 2013 4:39 AM
> > > To: Zhenqiang Chen
> > > Cc: Richard Earnshaw; 'Richard Biener'; GCC Patches
> > > Subject: Re: [PATCH 1/n] Add conditional compare support
> > >
> > > On 11/04/2013 08:00 PM, Zhenqiang Chen wrote:
> > > > Thanks. I add a new hook. The default function will return -1 if
> > > > the target does not care about the order.
> > > >
> > > > +DEFHOOK
> > > > +(select_ccmp_cmp_order,
> > > > + "For some target (like ARM), the order of two compares is\n\
> > > > +sensitive for conditional compare.  cmp0-cmp1 might be an invalid\n\
> > > > +combination.  But when swapping the order, cmp1-cmp0 is valid.\n\
> > > > +The function will return\n\
> > > > +  -1: if @code{code1} and @code{code2} are valid combination.\n\
> > > > +   1: if @code{code2} and @code{code1} are valid combination.\n\
> > > > +   0: both are invalid.",
> > > > + int, (int code1, int code2),
> > > > + default_select_ccmp_cmp_order)
> > >
> > > Fair enough.  I'd originally been thinking that returning a
> > > tri-state value akin to the comparison callback to qsort would allow
> > > easy sorting of a whole list of comparisons.  But probably just as
> > > easy to open-code while checking for invalid combinations.
> > >
> > > Checking for invalid while sorting means that we can then disallow
> > > returning NULL from the other two hooks.  Because the backend has
> > > already had a chance to indicate failure.
> >
> > The check is only for the first two compares; the following compares are
> > not checked.  In addition, the backend might check more things
> > (e.g. arm_select_dominance_cc_mode) to generate a valid compare
> > instruction.
> >
> > > > For gen_ccmp_next, I add another parameter CC_P to indicate the
> > > > result is used as CC or not. If CC_P is false, the gen_ccmp_next
> > > > will return a general register. This is for code like
> > > >
> > > > int test (int a, int b)
> > > > {
> > > >   return a > 0 && b > 0;
> > > > }
> > > > During expand, there might be no branch at all, so gen_ccmp_next
> > > > cannot return CC for "a > 0 && b > 0".
> > >
> > > Uh, no, this is a terrible idea.  There's no need for gen_ccmp_next
> > > to re-do the work of cstore_optab.
> > >
> > > I believe you can use emit_store_flag as a high-level interface
> > > here, since there are technically vagaries due to STORE_FLAG_VALUE.
> > > If that turns out to crash or fail in some way, we can talk about
> > > using cstore_optab directly given some restrictions.
> >
> > emit_store_flag does too many checks.  I use cstore_optab to emit the insn.
> >
> > +      icode = optab_handler (cstore_optab, CCmode);
> > +      if (icode != CODE_FOR_nothing)
> > +	{
> > +	  rtx target = gen_reg_rtx (word_mode);
> > +	  tmp = emit_cstore (target, icode, NE, CCmode, CCmode,
> > +			     0, tmp, const0_rtx, 1, word_mode);
> > +	  if (tmp)
> > +	    return tmp;
> > +	}
> >
> > > It also means that you shouldn't need all of and_scc_scc,
> > > ior_scc_scc, ccmp_and_scc_scc, ccmp_ior_scc_scc.
> >
> > Yes. We only need ccmp_and and ccmp_ior now.
> >
> > I will verify to remove the existing and_scc_scc, ior_scc_scc,
> > and_scc_scc_cmp, ior_scc_scc_cmp once conditional compare is enabled.
> >
> > > Although I don't see cstorecc4 defined for ARM, so there is
> > > something missing.
> >
> > cstorecc4 is added.
> >
> > > > +static int
> > > > +arm_select_ccmp_cmp_order (int cond1, int cond2)
> > > > +{
> > > > +  if (cond1 == cond2)
> > > > +    return -1;
> > > > +  if (comparison_dominates_p ((enum rtx_code) cond1,
> > > > +			      (enum rtx_code) cond2))
> > > > +    return 1;
> > > > +  if (comparison_dominates_p ((enum rtx_code) cond2,
> > > > +			      (enum rtx_code) cond1))
> > > > +    return -1;
> > > > +  return 0;
> > > > +}
> > >
> > > This sort does not look stable.  In particular,
> > >
> > >   if (cond1 == cond2)
> > >     return 1;
> > >
> > > would seem to better preserve the original order of the comparisons.
> >
> > -1 is to keep the original order.  Anyway, I changed the function to:
> >
> > +/* COND1 and COND2 should be enum rtx_code, which represent two compares.
> > +   Their order is sensitive for conditional compare.  It returns
> > +      1: Keep current order.
> > +     -1: Swap the two compares.
> > +      0: Invalid combination.  */
> > +
> > +static int
> > +arm_select_ccmp_cmp_order (int cond1, int cond2)
> > +{
> > +  /* THUMB1 does not support conditional compare.  */
> > +  if (TARGET_THUMB1)
> > +    return 0;
> > +
> > +  if (cond1 == cond2)
> > +    return 1;
> > +  if (comparison_dominates_p ((enum rtx_code) cond1, (enum rtx_code) cond2))
> > +    return -1;
> > +  if (comparison_dominates_p ((enum rtx_code) cond2, (enum rtx_code) cond1))
> > +    return 1;
> > +
> > +  return 0;
> > +}
> >
> > Thanks!
> > -Zhenqiang
> 
> 
>
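To illustrate the ordering discussion above, here is a standalone C sketch of the decision logic in the final arm_select_ccmp_cmp_order (with the TARGET_THUMB1 check omitted). The dominates() helper is a deliberately simplified stand-in for GCC's comparison_dominates_p, covering only a few signed comparison codes; it is an assumption for illustration, not the real implementation.

```c
#include <assert.h>
#include <stdbool.h>

enum cmp_code { EQ, NE, LT, GT, LE, GE };

/* Simplified stand-in for comparison_dominates_p: true when C1 being
   true implies C2 is also true (signed comparisons only).  */
static bool dominates (enum cmp_code c1, enum cmp_code c2)
{
  if (c1 == c2)
    return true;
  switch (c1)
    {
    case EQ: return c2 == LE || c2 == GE;
    case LT: return c2 == LE || c2 == NE;
    case GT: return c2 == GE || c2 == NE;
    default: return false;
    }
}

/* Mirror of the hook's return convention:
      1: keep the current order,
     -1: swap the two compares,
      0: invalid combination.  */
static int select_order (enum cmp_code c1, enum cmp_code c2)
{
  if (c1 == c2)
    return 1;
  if (dominates (c1, c2))
    return -1;
  if (dominates (c2, c1))
    return 1;
  return 0;
}
```

The asymmetry matters: if the first compare dominates the second, the pair must be swapped so the dominating compare comes second, matching the dominance CC modes the backend can actually encode.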

Patch

diff --git a/gcc/config/arm/arm-protos.h b/gcc/config/arm/arm-protos.h
index c5b16da..e3162c1 100644
--- a/gcc/config/arm/arm-protos.h
+++ b/gcc/config/arm/arm-protos.h
@@ -117,6 +117,9 @@  extern bool gen_movmem_ldrd_strd (rtx *);
 extern enum machine_mode arm_select_cc_mode (RTX_CODE, rtx, rtx);
 extern enum machine_mode arm_select_dominance_cc_mode (rtx, rtx,
 						       HOST_WIDE_INT);
+extern enum machine_mode arm_select_dominance_ccmp_mode (rtx, enum machine_mode,
+							 HOST_WIDE_INT);
+extern enum rtx_code arm_ccmode_to_code (enum machine_mode mode);
 extern rtx arm_gen_compare_reg (RTX_CODE, rtx, rtx, rtx);
 extern rtx arm_gen_return_addr_mask (void);
 extern void arm_reload_in_hi (rtx *);
diff --git a/gcc/config/arm/arm.c b/gcc/config/arm/arm.c
index 129e428..b0fc4f4 100644
--- a/gcc/config/arm/arm.c
+++ b/gcc/config/arm/arm.c
@@ -287,6 +287,12 @@  static unsigned arm_add_stmt_cost (void *data, int count,
 static void arm_canonicalize_comparison (int *code, rtx *op0, rtx *op1,
 					 bool op0_preserve_value);
 static unsigned HOST_WIDE_INT arm_asan_shadow_offset (void);
+static int arm_select_ccmp_cmp_order (int, int);
+static rtx arm_gen_ccmp_first (int, rtx, rtx);
+static rtx arm_gen_ccmp_next (rtx, int, rtx, rtx, int);
+static enum machine_mode arm_select_dominance_cc_mode_1 (enum rtx_code cond1,
+							 enum rtx_code cond2,
+							 HOST_WIDE_INT);
 

 /* Table of machine attributes.  */
 static const struct attribute_spec arm_attribute_table[] =
@@ -675,6 +681,15 @@  static const struct attribute_spec arm_attribute_table[] =
 #undef TARGET_CAN_USE_DOLOOP_P
 #define TARGET_CAN_USE_DOLOOP_P can_use_doloop_if_innermost
 
+#undef TARGET_SELECT_CCMP_CMP_ORDER
+#define TARGET_SELECT_CCMP_CMP_ORDER arm_select_ccmp_cmp_order
+
+#undef TARGET_GEN_CCMP_FIRST
+#define TARGET_GEN_CCMP_FIRST arm_gen_ccmp_first
+
+#undef TARGET_GEN_CCMP_NEXT
+#define TARGET_GEN_CCMP_NEXT arm_gen_ccmp_next
+
 struct gcc_target targetm = TARGET_INITIALIZER;
 

 /* Obstack for minipool constant handling.  */
@@ -933,7 +948,6 @@  struct processors
   const struct tune_params *const tune;
 };
 
-
 #define ARM_PREFETCH_NOT_BENEFICIAL 0, -1, -1
 #define ARM_PREFETCH_BENEFICIAL(prefetch_slots,l1_size,l1_line_size) \
   prefetch_slots, \
@@ -14314,7 +14328,13 @@  arm_select_dominance_cc_mode (rtx x, rtx y, HOST_WIDE_INT cond_or)
       cond1 = cond2;
       cond2 = temp;
     }
+  return arm_select_dominance_cc_mode_1 (cond1, cond2, cond_or);
+}
 
+static enum machine_mode
+arm_select_dominance_cc_mode_1 (enum rtx_code cond1, enum rtx_code cond2,
+				HOST_WIDE_INT cond_or)
+{
   switch (cond1)
     {
     case EQ:
@@ -14395,8 +14415,7 @@  arm_select_dominance_cc_mode (rtx x, rtx y, HOST_WIDE_INT cond_or)
 	  gcc_unreachable ();
 	}
 
-    /* The remaining cases only occur when both comparisons are the
-       same.  */
+    /* The remaining cases only occur when both comparisons are the same.  */
     case NE:
       gcc_assert (cond1 == cond2);
       return CC_DNEmode;
@@ -14422,6 +14441,194 @@  arm_select_dominance_cc_mode (rtx x, rtx y, HOST_WIDE_INT cond_or)
     }
 }
 
+enum rtx_code
+arm_ccmode_to_code (enum machine_mode mode)
+{
+  switch (mode)
+    {
+    case CC_DNEmode:
+      return NE;
+    case CC_DEQmode:
+      return EQ;
+    case CC_DLEmode:
+      return LE;
+    case CC_DLTmode:
+      return LT;
+    case CC_DGEmode:
+      return GE;
+    case CC_DGTmode:
+      return GT;
+    case CC_DLEUmode:
+      return LEU;
+    case CC_DLTUmode:
+      return LTU;
+    case CC_DGEUmode:
+      return GEU;
+    case CC_DGTUmode:
+      return GTU;
+    default:
+      return UNKNOWN;
+    }
+}
+
+static enum machine_mode
+arm_code_to_ccmode (enum rtx_code code)
+{
+  switch (code)
+    {
+    case NE:
+      return CC_DNEmode;
+    case EQ:
+      return CC_DEQmode;
+    case LE:
+      return CC_DLEmode;
+    case LT:
+      return CC_DLTmode;
+    case GE:
+      return CC_DGEmode;
+    case GT:
+      return CC_DGTmode;
+    case LEU:
+      return CC_DLEUmode;
+    case LTU:
+      return CC_DLTUmode;
+    case GEU:
+      return CC_DGEUmode;
+    case GTU:
+      return CC_DGTUmode;
+    default:
+      return CCmode;
+    }
+}
+
+/* MODE is the CC mode result of the previous conditional compare.
+   X is the next compare.  */
+enum machine_mode
+arm_select_dominance_ccmp_mode (rtx x, enum machine_mode mode,
+			 	HOST_WIDE_INT cond_or)
+{
+  enum rtx_code cond1 = arm_ccmode_to_code (mode);
+  enum rtx_code cond2;
+
+  if (cond1 == UNKNOWN)
+    return CCmode;
+
+  /* Currently we will probably get the wrong result if the individual
+     comparisons are not simple.  */
+  if (arm_select_cc_mode (cond2 = GET_CODE (x), XEXP (x, 0), XEXP (x, 1))
+      != CCmode)
+    return CCmode;
+
+  /* If the comparisons are not equal, and one doesn't dominate the other,
+     then we can't do this.  Since there is a conditional compare before
+     the current insn, we cannot swap the compares.  So we have to check the
+     dominate relation separately for DOM_CC_X_OR_Y and DOM_CC_X_AND_Y.  */
+  if (cond1 != cond2
+      && !(cond_or == DOM_CC_X_OR_Y ? comparison_dominates_p (cond1, cond2)
+				      : comparison_dominates_p (cond2, cond1)))
+    return CCmode;
+
+  if (cond_or == DOM_CC_X_OR_Y)
+    return arm_select_dominance_cc_mode_1 (cond1, cond2, cond_or);
+  else
+    return arm_select_dominance_cc_mode_1 (cond2, cond1, cond_or);
+}
+
+/* COND1 and COND2 should be enum rtx_code, which represent two compares.
+   Their order is sensitive for conditional compare.  It returns
+      1: Keep current order.
+     -1: Swap the two compares.
+      0: Invalid combination.  */
+
+static int
+arm_select_ccmp_cmp_order (int cond1, int cond2)
+{
+  /* THUMB1 does not support conditional compare.  */
+  if (TARGET_THUMB1)
+    return 0;
+
+  if (cond1 == cond2)
+    return 1;
+  if (comparison_dominates_p ((enum rtx_code) cond1, (enum rtx_code) cond2))
+    return -1;
+  if (comparison_dominates_p ((enum rtx_code) cond2, (enum rtx_code) cond1))
+    return 1;
+
+  return 0;
+}
+
+static void
+arm_convert_to_SImode (rtx* op0, rtx* op1, int unsignedp)
+{
+  enum machine_mode mode;
+
+  mode = GET_MODE (*op0);
+  if (mode == VOIDmode)
+    mode = GET_MODE (*op1);
+
+  if (mode == QImode || mode == HImode)
+    {
+      *op0 = convert_modes (SImode, mode, *op0, unsignedp);
+      *op1 = convert_modes (SImode, mode, *op1, unsignedp);
+    }
+}
+
+static rtx
+arm_gen_ccmp_first (int code, rtx op0, rtx op1)
+{
+  enum machine_mode mode;
+  rtx cmp, target;
+  int unsignedp = code == LTU || code == LEU || code == GTU || code == GEU;
+
+  arm_convert_to_SImode (&op0, &op1, unsignedp);
+  if (!s_register_operand (op0, SImode) || !arm_add_operand (op1, SImode))
+     /* Do we need to convert the operands to registers?  Converting them to
+	registers would add more overhead for conditional compare.  */
+    return NULL_RTX;
+
+  mode = arm_code_to_ccmode ((enum rtx_code) code);
+  if (mode == CCmode)
+    return NULL_RTX;
+
+  cmp = gen_rtx_fmt_ee (COMPARE, CCmode, op0, op1);
+  target = gen_rtx_REG (mode, CC_REGNUM);
+  emit_insn (gen_rtx_SET (VOIDmode, gen_rtx_REG (CCmode, CC_REGNUM), cmp));
+  return target;
+}
+
+static rtx
+arm_gen_ccmp_next (rtx prev, int cmp_code, rtx op0, rtx op1, int bit_code)
+{
+  rtx cmp0, cmp1, target, bit_op;
+  HOST_WIDE_INT cond_or;
+  enum machine_mode mode;
+  int unsignedp = cmp_code == LTU || cmp_code == LEU
+		  || cmp_code == GTU || cmp_code == GEU;
+
+  arm_convert_to_SImode (&op0, &op1, unsignedp);
+  if (!s_register_operand (op0, SImode) || !arm_add_operand (op1, SImode))
+     /* Do we need to convert the operands to registers?  Converting them to
+	registers would add more overhead for conditional compare.  */
+    return NULL_RTX;
+
+  cmp1 = gen_rtx_fmt_ee ((enum rtx_code) cmp_code, SImode, op0, op1);
+  cond_or = bit_code == AND ? DOM_CC_X_AND_Y : DOM_CC_X_OR_Y;
+  mode = arm_select_dominance_ccmp_mode (cmp1, GET_MODE (prev), cond_or);
+  if (mode == CCmode)
+    return NULL_RTX;
+
+  cmp0 = gen_rtx_fmt_ee (NE, SImode, prev, const0_rtx);
+
+  bit_op = gen_rtx_fmt_ee ((enum rtx_code) bit_code, SImode, cmp0, cmp1);
+
+  /* Generate insn to match ccmp_and/ccmp_ior.  */
+  target = gen_rtx_REG (mode, CC_REGNUM);
+  emit_insn (gen_rtx_SET (VOIDmode, target,
+			  gen_rtx_fmt_ee (COMPARE, VOIDmode,
+					  bit_op, const0_rtx)));
+  return target;
+}
+
 enum machine_mode
 arm_select_cc_mode (enum rtx_code op, rtx x, rtx y)
 {
diff --git a/gcc/config/arm/arm.md b/gcc/config/arm/arm.md
index dd73366..61f98af 100644
--- a/gcc/config/arm/arm.md
+++ b/gcc/config/arm/arm.md
@@ -12894,3 +12894,122 @@ 
 (include "sync.md")
 ;; Fixed-point patterns
 (include "arm-fixed.md")
+
+(define_expand "cbranchcc4"
+  [(set (pc) (if_then_else
+	      (match_operator 0 "expandable_comparison_operator"
+	       [(match_operand 1 "dominant_cc_register" "")
+		(const_int 0)])
+	      (label_ref (match_operand 3 "" ""))
+	      (pc)))]
+  "TARGET_32BIT"
+  " ")
+
+(define_expand "cstorecc4"
+  [(set (match_operand:SI 0 "s_register_operand")
+	(match_operator 1 "" [(match_operand 2 "")
+			      (match_operand 3 "")]))]
+  "TARGET_32BIT"
+"{
+  enum machine_mode mode = GET_MODE (operands[2]);
+  if (mode != CCmode)
+    {
+      operands[2] = gen_rtx_REG (CCmode, CC_REGNUM);
+      operands[3] = const0_rtx;
+      operands[1] = gen_rtx_fmt_ee (arm_ccmode_to_code (mode),
+				    SImode, operands[2], operands[3]);
+    }
+  emit_insn (gen_rtx_SET (SImode, operands[0], operands[1]));
+  DONE;
+}")
+
+;; The first compare in this pattern is the result of a previous CCMP.
+;; We cannot swap it, and we only need its flag.
+(define_insn "*ccmp_and"
+  [(set (match_operand 6 "dominant_cc_register" "")
+	(compare
+	 (and:SI
+	  (match_operator 4 "expandable_comparison_operator"
+	   [(match_operand 0 "dominant_cc_register" "")
+	    (match_operand:SI 1 "arm_add_operand" "")])
+	  (match_operator:SI 5 "arm_comparison_operator"
+	   [(match_operand:SI 2 "s_register_operand"
+	        "l,r,r,r,r")
+	    (match_operand:SI 3 "arm_add_operand"
+	        "lPy,rI,L,rI,L")]))
+	 (const_int 0)))]
+  "TARGET_32BIT"
+  {
+    static const char *const cmp2[2] =
+    {
+      "cmp%d4\t%2, %3",
+      "cmn%d4\t%2, #%n3"
+    };
+    static const char *const ite = "it\t%d4";
+    static const int cmp_idx[5] = {0, 0, 1, 0, 1};
+
+    if (TARGET_THUMB2)
+      output_asm_insn (ite, operands);
+
+    output_asm_insn (cmp2[cmp_idx[which_alternative]], operands);
+    return "";
+  }
+  [(set_attr "conds" "set")
+   (set_attr "predicable" "no")
+   (set_attr "arch" "t2,t2,t2,any,any")
+   (set_attr_alternative "length"
+      [(const_int 4)
+       (const_int 6)
+       (const_int 6)
+       (if_then_else (eq_attr "is_thumb" "no")
+           (const_int 4)
+           (const_int 6))
+       (if_then_else (eq_attr "is_thumb" "no")
+           (const_int 4)
+           (const_int 6))])]
+)
+
+;; The first compare in this pattern is the result of a previous CCMP.
+;; We cannot swap it, and we only need its flag.
+(define_insn "*ccmp_ior"
+  [(set (match_operand 6 "dominant_cc_register" "")
+	(compare
+	 (ior:SI
+	  (match_operator 4 "expandable_comparison_operator"
+	   [(match_operand 0 "dominant_cc_register" "")
+	    (match_operand:SI 1 "arm_add_operand" "")])
+	  (match_operator:SI 5 "arm_comparison_operator"
+	   [(match_operand:SI 2 "s_register_operand"
+	        "l,r,r,r,r")
+	    (match_operand:SI 3 "arm_add_operand"
+	        "lPy,rI,L,rI,L")]))
+	 (const_int 0)))]
+  "TARGET_32BIT"
+  {
+    static const char *const cmp2[2] =
+    {
+      "cmp%D4\t%2, %3",
+      "cmn%D4\t%2, #%n3"
+    };
+    static const char *const ite = "it\t%D4";
+    static const int cmp_idx[5] = {0, 0, 1, 0, 1};
+
+    if (TARGET_THUMB2)
+      output_asm_insn (ite, operands);
+
+    output_asm_insn (cmp2[cmp_idx[which_alternative]], operands);
+    return "";
+  }
+  [(set_attr "conds" "set")
+   (set_attr "arch" "t2,t2,t2,any,any")
+   (set_attr_alternative "length"
+      [(const_int 4)
+       (const_int 6)
+       (const_int 6)
+       (if_then_else (eq_attr "is_thumb" "no")
+           (const_int 4)
+           (const_int 6))
+       (if_then_else (eq_attr "is_thumb" "no")
+           (const_int 4)
+           (const_int 6))])]
+)
diff --git a/gcc/doc/md.texi b/gcc/doc/md.texi
index 44a9183..f9ac8e8 100644
--- a/gcc/doc/md.texi
+++ b/gcc/doc/md.texi
@@ -6159,6 +6159,42 @@  A typical @code{ctrap} pattern looks like
   "@dots{}")
 @end smallexample
 
+@cindex @code{ccmp} instruction pattern
+@item @samp{ccmp}
+Conditional compare instruction.  Operands 2 and 5 are RTLs that perform
+two comparisons.  Operand 1 is AND or IOR, which operates on the results of
+operands 2 and 5.  Operand 0 is the result of operand 1.
+A recursive method is used to support more than two compares.  E.g.:
+
+  CC0 = CMP (a, b);
+  CC1 = CCMP (NE (CC0, 0), CMP (e, f));
+  ...
+  CCn = CCMP (NE (CCn-1, 0), CMP (...));
+
+Two target hooks are used to generate conditional compares.  GEN_CCMP_FIRST
+is used to generate the first CMP, and GEN_CCMP_NEXT is used to generate the
+following CCMPs.  Operand 1 is AND or IOR.  Operand 3 is the result of
+GEN_CCMP_FIRST or a previous GEN_CCMP_NEXT.  Operand 2 is NE.
+Operands 4, 5 and 6 form another compare expression.
+
+A typical CCMP pattern looks like
+
+@smallexample
+(define_insn "*ccmp_and_ior"
+  [(set (match_operand 6 "dominant_cc_register" "")
+        (compare
+         (match_operator 1
+          (match_operator 2 "comparison_operator"
+           [(match_operand 3 "dominant_cc_register")
+            (const_int 0)])
+          (match_operator 4 "comparison_operator"
+           [(match_operand 5 "register_operand")
+            (match_operand 6 "compare_operand")]))
+         (const_int 0)))]
+  ""
+  "@dots{}")
+@end smallexample
+
 @cindex @code{prefetch} instruction pattern
 @item @samp{prefetch}
 
diff --git a/gcc/doc/tm.texi b/gcc/doc/tm.texi
index 68b59b9..f1ef345 100644
--- a/gcc/doc/tm.texi
+++ b/gcc/doc/tm.texi
@@ -11302,6 +11302,32 @@  This target hook is required only when the target has several different
 modes and they have different conditional execution capability, such as ARM.
 @end deftypefn
 
+@deftypefn {Target Hook} int TARGET_SELECT_CCMP_CMP_ORDER (int @var{code1}, int @var{code2})
+For some targets (like ARM), the order of two compares is sensitive for
+conditional compare: cmp0-cmp1 might be an invalid combination, but when
+the order is swapped, cmp1-cmp0 is valid.  The function will return
+   1: Keep current order.
+  -1: Swap the two compares.
+   0: Invalid combination.
+@end deftypefn
+
+@deftypefn {Target Hook} rtx TARGET_GEN_CCMP_FIRST (int @var{code}, rtx @var{op0}, rtx @var{op1})
+This function emits a comparison insn for the first of a sequence of
+ conditional comparisons.  It returns a comparison expression appropriate
+ for passing to @code{gen_ccmp_next} or to @code{cbranch_optab}.
+ @code{unsignedp} is used when converting @code{op0} and @code{op1}'s mode.
+@end deftypefn
+
+@deftypefn {Target Hook} rtx TARGET_GEN_CCMP_NEXT (rtx @var{prev}, int @var{cmp_code}, rtx @var{op0}, rtx @var{op1}, int @var{bit_code})
+This function emits a conditional comparison within a sequence of
+ conditional comparisons.  The @code{prev} expression is the result of a
+ prior call to @code{gen_ccmp_first} or @code{gen_ccmp_next}.  It may return
+ @code{NULL} if the combination of @code{prev} and this comparison is
+ not supported, otherwise the result must be appropriate for passing to
+ @code{gen_ccmp_next} or @code{cbranch_optab}.  @code{bit_code}
+ is AND or IOR, which is the op on the two compares.
+@end deftypefn
+
 @deftypefn {Target Hook} unsigned TARGET_LOOP_UNROLL_ADJUST (unsigned @var{nunroll}, struct loop *@var{loop})
 This target hook returns a new value for the number of times @var{loop}
 should be unrolled. The parameter @var{nunroll} is the number of times
diff --git a/gcc/doc/tm.texi.in b/gcc/doc/tm.texi.in
index 1bb3806..c69b620 100644
--- a/gcc/doc/tm.texi.in
+++ b/gcc/doc/tm.texi.in
@@ -8298,6 +8298,12 @@  build_type_attribute_variant (@var{mdecl},
 
 @hook TARGET_HAVE_CONDITIONAL_EXECUTION
 
+@hook TARGET_SELECT_CCMP_CMP_ORDER
+
+@hook TARGET_GEN_CCMP_FIRST
+
+@hook TARGET_GEN_CCMP_NEXT
+
 @hook TARGET_LOOP_UNROLL_ADJUST
 
 @defmac POWI_MAX_MULTS
diff --git a/gcc/expmed.c b/gcc/expmed.c
index c5123cb..6a8e184 100644
--- a/gcc/expmed.c
+++ b/gcc/expmed.c
@@ -5089,7 +5089,7 @@  expand_and (enum machine_mode mode, rtx op0, rtx op1, rtx target)
 }
 
 /* Helper function for emit_store_flag.  */
-static rtx
+rtx
 emit_cstore (rtx target, enum insn_code icode, enum rtx_code code,
 	     enum machine_mode mode, enum machine_mode compare_mode,
 	     int unsignedp, rtx x, rtx y, int normalizep,
diff --git a/gcc/expr.c b/gcc/expr.c
index 4815c88..2dac9b8 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -67,6 +67,8 @@  along with GCC; see the file COPYING3.  If not see
 #include "params.h"
 #include "tree-ssa-address.h"
 #include "cfgexpand.h"
+#include "tree-phinodes.h"
+#include "ssa-iterators.h"
 
 /* Decide whether a function's arguments should be processed
    from first to last or from last to first.
@@ -9204,6 +9206,299 @@  expand_expr_real_2 (sepops ops, rtx target, enum machine_mode tmode,
 }
 #undef REDUCE_BIT_FIELD
 
+/* The following functions expand conditional compare (CCMP) instructions.
+   Here is a short description of the overall algorithm:
+     * ccmp_candidate_p is used to identify a CCMP candidate.
+
+     * expand_ccmp_expr is the main entry, which calls expand_ccmp_expr_1
+       to expand CCMP.
+
+     * expand_ccmp_expr_1 uses a recursive algorithm to expand CCMP.
+       It calls two target hooks gen_ccmp_first and gen_ccmp_next to generate
+       CCMP instructions.
+	 - gen_ccmp_first expands the first compare in CCMP.
+	 - gen_ccmp_next expands the following compares.
+
+       Another hook select_ccmp_cmp_order is called to determine which compare
+       is done first, since not all combinations of compares are legal on some
+       targets, such as ARM.  Swapping the compares may give us more chances.
+
+       During expanding, we must make sure that no instruction can clobber the
+       CC reg except the compares.  So clobber_cc_p and check_clobber_cc are
+       introduced to do the check.
+
+     * If the final result is not used in a COND_EXPR (checked by function
+       used_in_cond_stmt_p), it calls cstorecc4 pattern to store the CC to a
+       general register.  */
+
+/* Check whether G is a potential conditional compare candidate.  */
+static bool
+ccmp_candidate_p (gimple g)
+{
+  tree rhs = gimple_assign_rhs_to_tree (g);
+  tree lhs, op0, op1;
+  gimple gs0, gs1;
+  enum tree_code tcode, tcode0, tcode1;
+  tcode = TREE_CODE (rhs);
+
+  if (tcode != BIT_AND_EXPR && tcode != BIT_IOR_EXPR)
+    return false;
+
+  lhs = gimple_assign_lhs (g);
+  op0 = TREE_OPERAND (rhs, 0);
+  op1 = TREE_OPERAND (rhs, 1);
+
+  if ((TREE_CODE (op0) != SSA_NAME) || (TREE_CODE (op1) != SSA_NAME)
+      || !has_single_use (lhs))
+    return false;
+
+  gs0 = get_gimple_for_ssa_name (op0);
+  gs1 = get_gimple_for_ssa_name (op1);
+  if (!gs0 || !gs1 || !is_gimple_assign (gs0) || !is_gimple_assign (gs1)
+      /* g, gs0 and gs1 must be in the same basic block, since the current
+	 stage is out-of-ssa.  We cannot guarantee correctness when forwarding
+	 gs0 and gs1 into g without dataflow analysis.  */
+      || gimple_bb (gs0) != gimple_bb (gs1)
+      || gimple_bb (gs0) != gimple_bb (g))
+    return false;
+
+  if (!(INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (gs0)))
+       || POINTER_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (gs0))))
+      || !(INTEGRAL_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (gs1)))
+	   || POINTER_TYPE_P (TREE_TYPE (gimple_assign_rhs1 (gs1)))))
+    return false;
+
+  tcode0 = gimple_assign_rhs_code (gs0);
+  tcode1 = gimple_assign_rhs_code (gs1);
+  if (TREE_CODE_CLASS (tcode0) == tcc_comparison
+      && TREE_CODE_CLASS (tcode1) == tcc_comparison)
+    return true;
+  if (TREE_CODE_CLASS (tcode0) == tcc_comparison
+      && ccmp_candidate_p (gs1))
+    return true;
+  else if (TREE_CODE_CLASS (tcode1) == tcc_comparison
+	   && ccmp_candidate_p (gs0))
+    return true;
+  /* We skip ccmp_candidate_p (gs1) && ccmp_candidate_p (gs0) since
+     there is no way to set the CC flag.  */
+  return false;
+}
+
+/* Check whether EXP is used in a GIMPLE_COND statement or not.  */
+static bool
+used_in_cond_stmt_p (tree exp)
+{
+  bool expand_cond = false;
+  imm_use_iterator ui;
+  gimple use_stmt;
+  FOR_EACH_IMM_USE_STMT (use_stmt, ui, exp)
+    if (gimple_code (use_stmt) == GIMPLE_COND)
+      {
+	tree op1 = gimple_cond_rhs (use_stmt);
+	/* TBD: If we can convert all
+	    _Bool t;
+
+	    if (t == 1)
+	      goto <bb 3>;
+	    else
+	      goto <bb 4>;
+	   to
+	    if (t != 0)
+	      goto <bb 3>;
+	    else
+	      goto <bb 4>;
+	   we can remove the following check.  */
+	if (integer_zerop (op1))
+	  expand_cond = true;
+	BREAK_FROM_IMM_USE_STMT (ui);
+      }
+  return expand_cond;
+}
+
+/* If SETTER clobbers the CC reg, set DATA to TRUE.  */
+
+static void
+check_clobber_cc (rtx reg, const_rtx setter, void *data)
+{
+  if (GET_CODE (setter) == CLOBBER && GET_MODE (reg) == CCmode)
+    *(bool *)data = true;
+}
+
+/* Check whether INSN or any of its following insns clobbers the CC reg.  */
+
+static bool
+clobber_cc_p (rtx insn)
+{
+  bool clobber = false;
+  for (; insn; insn = NEXT_INSN (insn))
+    {
+      note_stores (PATTERN (insn), check_clobber_cc, &clobber);
+      if (clobber)
+	return true;
+    }
+  return false;
+}
+
+/* Helper function to generate a conditional compare.  PREV is the result of
+   GEN_CCMP_FIRST or GEN_CCMP_NEXT.  G is the next compare.
+   CODE is BIT_AND_EXPR or BIT_IOR_EXPR.  */
+
+static rtx
+gen_ccmp_next (rtx prev, gimple g, enum tree_code code)
+{
+  rtx op0, op1;
+  int unsignedp = TYPE_UNSIGNED (TREE_TYPE (gimple_assign_rhs1 (g)));
+  enum rtx_code rcode = get_rtx_code (gimple_assign_rhs_code (g), unsignedp);
+  rtx last = get_last_insn ();
+
+  expand_operands (gimple_assign_rhs1 (g),
+		   gimple_assign_rhs2 (g),
+		   NULL_RTX, &op0, &op1, EXPAND_NORMAL);
+
+  /* If expanding any operand clobbers the CC reg, give up.  */
+  if (clobber_cc_p (NEXT_INSN (last)))
+    return NULL_RTX;
+
+  return targetm.gen_ccmp_next (prev, rcode, op0, op1, get_rtx_code (code, 0));
+}
+
+/* Expand conditional compare gimple G.  A typical CCMP sequence is like:
+
+     CC0 = CMP (a, b);
+     CC1 = CCMP (NE (CC0, 0), CMP (e, f));
+     ...
+     CCn = CCMP (NE (CCn-1, 0), CMP (...));
+
+   hook gen_ccmp_first is used to expand the first compare.
+   hook gen_ccmp_next is used to expand the following CCMP.  */
+
+static rtx
+expand_ccmp_expr_1 (gimple g)
+{
+  tree exp = gimple_assign_rhs_to_tree (g);
+  enum tree_code code = TREE_CODE (exp);
+  gimple gs0 = get_gimple_for_ssa_name (TREE_OPERAND (exp, 0));
+  gimple gs1 = get_gimple_for_ssa_name (TREE_OPERAND (exp, 1));
+  rtx tmp;
+
+  gcc_assert (code == BIT_AND_EXPR || code == BIT_IOR_EXPR);
+  gcc_assert (gs0 && gs1 && is_gimple_assign (gs0) && is_gimple_assign (gs1));
+  enum tree_code code0 = gimple_assign_rhs_code (gs0);
+  enum tree_code code1 = gimple_assign_rhs_code (gs1);
+
+  if (TREE_CODE_CLASS (code0) == tcc_comparison)
+    {
+      if (TREE_CODE_CLASS (code1) == tcc_comparison)
+	{
+	  int unsignedp0, unsignedp1, cmp_order;
+	  enum rtx_code rcode0, rcode1, rcode;
+	  rtx op0, op1, op2, op3;
+	  gimple first, next;
+
+	  unsignedp0 = TYPE_UNSIGNED (TREE_TYPE (gimple_assign_rhs1 (gs0)));
+	  rcode0 = get_rtx_code (code0, unsignedp0);
+	  unsignedp1 = TYPE_UNSIGNED (TREE_TYPE (gimple_assign_rhs1 (gs1)));
+	  rcode1 = get_rtx_code (code1, unsignedp1);
+
+	  /* For some targets (such as ARM), the order of the two compares
+	     matters for conditional compare: cmp0-cmp1 might be an invalid
+	     combination, while the swapped cmp1-cmp0 is valid.  The target
+	     hook select_ccmp_cmp_order returns
+		 1: Keep the current order.
+		-1: Swap the two compares.
+		 0: Invalid combination.  */
+
+	  cmp_order = targetm.select_ccmp_cmp_order (rcode0, rcode1);
+	  /* Invalid combination.  */
+	  if (!cmp_order)
+	    return NULL_RTX;
+	  /* Swap the compare order.  Do gs1 first.  */
+	  if (cmp_order == -1)
+	    {
+	      first = gs1;
+	      next = gs0;
+	      rcode = rcode1;
+	    }
+	  else
+	    {
+	      first = gs0;
+	      next = gs1;
+	      rcode = rcode0;
+	    }
+	  expand_operands (gimple_assign_rhs1 (first),
+			   gimple_assign_rhs2 (first),
+			   NULL_RTX, &op0, &op1, EXPAND_NORMAL);
+
+	  /* Since expanding the operands of NEXT might clobber the CC reg,
+	     expand them before calling GEN_CCMP_FIRST.  */
+	  expand_operands (gimple_assign_rhs1 (next),
+			   gimple_assign_rhs2 (next),
+			   NULL_RTX, &op2, &op3, EXPAND_NORMAL);
+	  tmp = targetm.gen_ccmp_first (rcode, op0, op1);
+	  if (!tmp)
+	    return NULL_RTX;
+
+	  return targetm.gen_ccmp_next (tmp,
+					first == gs1 ? rcode0 : rcode1,
+					op2, op3, get_rtx_code (code, 0));
+	}
+      gcc_assert (code1 == BIT_AND_EXPR || code1 == BIT_IOR_EXPR);
+      tmp = expand_ccmp_expr_1 (gs1);
+      if (tmp)
+	return gen_ccmp_next (tmp, gs0, code);
+    }
+  else
+    {
+      gcc_assert (gimple_assign_rhs_code (gs0) == BIT_AND_EXPR
+                  || gimple_assign_rhs_code (gs0) == BIT_IOR_EXPR);
+      if (TREE_CODE_CLASS (gimple_assign_rhs_code (gs1)) == tcc_comparison)
+	{
+	  tmp = expand_ccmp_expr_1 (gs0);
+	  if (tmp)
+	    return gen_ccmp_next (tmp, gs1, code);
+	}
+      else
+	{
+	  gcc_assert (gimple_assign_rhs_code (gs1) == BIT_AND_EXPR
+		      || gimple_assign_rhs_code (gs1) == BIT_IOR_EXPR);
+	}
+    }
+
+  return NULL_RTX;
+}
+
+static rtx
+expand_ccmp_expr (gimple g)
+{
+  rtx last = get_last_insn ();
+  tree lhs = gimple_assign_lhs (g);
+  rtx tmp = expand_ccmp_expr_1 (g);
+
+  if (tmp)
+    {
+      enum insn_code icode;
+      /* TMP should be the CC reg.  If it is used in a GIMPLE_COND, just
+	 return it.  Note: the target needs to define "cbranchcc4".  */
+      if (used_in_cond_stmt_p (lhs))
+	return tmp;
+
+      /* If TMP is not used in a GIMPLE_COND, store it to a general register
+	 with the cstore_optab.  Note: the target needs to define "cstorecc4".  */
+      icode = optab_handler (cstore_optab, CCmode);
+      if (icode != CODE_FOR_nothing)
+	{
+	  rtx target = gen_reg_rtx (word_mode);
+	  tmp = emit_cstore (target, icode, NE, CCmode, CCmode,
+			     0, tmp, const0_rtx, 1, word_mode);
+	  if (tmp)
+	    return tmp;
+	}
+    }
+
+  /* Clean up.  */
+  delete_insns_since (last);
+  return NULL_RTX;
+}
 
 /* Return TRUE if expression STMT is suitable for replacement.  
    Never consider memory loads as replaceable, because those don't ever lead 
@@ -9367,10 +9662,20 @@  expand_expr_real_1 (tree exp, rtx target, enum machine_mode tmode,
 	{
 	  rtx r;
 	  location_t saved_loc = curr_insn_location ();
+	  tree rhs = gimple_assign_rhs_to_tree (g);
 
 	  set_curr_insn_location (gimple_location (g));
-	  r = expand_expr_real (gimple_assign_rhs_to_tree (g), target,
-				tmode, modifier, NULL);
+
+	  if (targetm.gen_ccmp_first && ccmp_candidate_p (g))
+	    {
+	      gcc_checking_assert (targetm.gen_ccmp_next != NULL);
+	      r = expand_ccmp_expr (g);
+	      if (!r)
+		r = expand_expr_real (rhs, target, tmode, modifier, NULL);
+	    }
+	  else
+	    r = expand_expr_real (rhs, target, tmode, modifier, NULL);
+
 	  set_curr_insn_location (saved_loc);
 	  if (REG_P (r) && !REG_EXPR (r))
 	    set_reg_attrs_for_decl_rtl (SSA_NAME_VAR (exp), r);
diff --git a/gcc/optabs.c b/gcc/optabs.c
index dcef480..aad2635 100644
--- a/gcc/optabs.c
+++ b/gcc/optabs.c
@@ -6394,7 +6394,7 @@  gen_cond_trap (enum rtx_code code, rtx op1, rtx op2, rtx tcode)
 /* Return rtx code for TCODE. Use UNSIGNEDP to select signed
    or unsigned operation code.  */
 
-static enum rtx_code
+enum rtx_code
 get_rtx_code (enum tree_code tcode, bool unsignedp)
 {
   enum rtx_code code;
@@ -6444,6 +6444,12 @@  get_rtx_code (enum tree_code tcode, bool unsignedp)
       code = LTGT;
       break;
 
+    case BIT_AND_EXPR:
+      code = AND;
+      break;
+    case BIT_IOR_EXPR:
+      code = IOR;
+      break;
     default:
       gcc_unreachable ();
     }
diff --git a/gcc/optabs.h b/gcc/optabs.h
index 6a5ec19..e0804a4 100644
--- a/gcc/optabs.h
+++ b/gcc/optabs.h
@@ -91,7 +91,7 @@  extern rtx expand_widen_pattern_expr (sepops ops, rtx op0, rtx op1, rtx wide_op,
 extern rtx expand_ternary_op (enum machine_mode mode, optab ternary_optab,
 			      rtx op0, rtx op1, rtx op2, rtx target,
 			      int unsignedp);
-
+extern enum rtx_code get_rtx_code (enum tree_code tcode, bool unsignedp);
 /* Expand a binary operation given optab and rtx operands.  */
 extern rtx expand_binop (enum machine_mode, optab, rtx, rtx, rtx, int,
 			 enum optab_methods);
@@ -553,4 +553,9 @@  extern void gen_satfractuns_conv_libfunc (convert_optab, const char *,
 					  enum machine_mode);
 extern void init_tree_optimization_optabs (tree);
 
+extern rtx emit_cstore (rtx target, enum insn_code icode, enum rtx_code code,
+			enum machine_mode mode, enum machine_mode compare_mode,
+			int unsignedp, rtx x, rtx y, int normalizep,
+			enum machine_mode target_mode);
+
 #endif /* GCC_OPTABS_H */
diff --git a/gcc/recog.c b/gcc/recog.c
index 5c0ec16..8913b3b 100644
--- a/gcc/recog.c
+++ b/gcc/recog.c
@@ -557,6 +557,22 @@  cancel_changes (int num)
 #define CODE_FOR_extzv	CODE_FOR_nothing
 #endif
 
+/* Return TRUE if OBJECT is a conditional compare instruction: a SET whose
+   source is a COMPARE of an IOR or AND expression.  */
+
+static bool
+ccmp_insn_p (rtx object)
+{
+  rtx x = PATTERN (object);
+  if (targetm.gen_ccmp_first
+      && GET_CODE (x) == SET
+      && GET_CODE (XEXP (x, 1)) == COMPARE
+      && (GET_CODE (XEXP (XEXP (x, 1), 0)) == IOR
+	  || GET_CODE (XEXP (XEXP (x, 1), 0)) == AND))
+    return true;
+  return false;
+}
+
 /* A subroutine of validate_replace_rtx_1 that tries to simplify the resulting
    rtx.  */
 
@@ -568,7 +581,8 @@  simplify_while_replacing (rtx *loc, rtx to, rtx object,
   enum rtx_code code = GET_CODE (x);
   rtx new_rtx;
 
-  if (SWAPPABLE_OPERANDS_P (x)
+  /* Do not swap commutative operands within a conditional compare insn.  */
+  if (SWAPPABLE_OPERANDS_P (x) && !ccmp_insn_p (object)
       && swap_commutative_operands_p (XEXP (x, 0), XEXP (x, 1)))
     {
       validate_unshare_change (object, loc,
diff --git a/gcc/target.def b/gcc/target.def
index ca1d250..a016c9d 100644
--- a/gcc/target.def
+++ b/gcc/target.def
@@ -2432,6 +2432,38 @@  modes and they have different conditional execution capability, such as ARM.",
  bool, (void),
  default_have_conditional_execution)
 
+DEFHOOK
+(select_ccmp_cmp_order,
+ "For some targets (such as ARM), the order of the two compares matters for\n\
+conditional compare: cmp0-cmp1 might be an invalid combination, but with the\n\
+order swapped, cmp1-cmp0 is valid.  This hook returns\n\
+   1: Keep the current order.\n\
+  -1: Swap the two compares.\n\
+   0: Invalid combination.",
+ int, (int code1, int code2),
+ default_select_ccmp_cmp_order)
+
+DEFHOOK
+(gen_ccmp_first,
+ "This function emits a comparison insn for the first of a sequence of\n\
+ conditional comparisons.  It returns a comparison expression appropriate\n\
+ for passing to @code{gen_ccmp_next} or to @code{cbranch_optab}.\n\
+ @code{code} is the @code{rtx_code} of the comparison.",
+ rtx, (int code, rtx op0, rtx op1),
+ NULL)
+
+DEFHOOK
+(gen_ccmp_next,
+ "This function emits a conditional comparison within a sequence of\n\
+ conditional comparisons.  The @code{prev} expression is the result of a\n\
+ prior call to @code{gen_ccmp_first} or @code{gen_ccmp_next}.  It may return\n\
+ @code{NULL} if the combination of @code{prev} and this comparison is\n\
+ not supported, otherwise the result must be appropriate for passing to\n\
+ @code{gen_ccmp_next} or @code{cbranch_optab}.  @code{bit_code}\n\
+ is @code{AND} or @code{IOR}, the operation combining the two compares.",
+ rtx, (rtx prev, int cmp_code, rtx op0, rtx op1, int bit_code),
+ NULL)
+
 /* Return a new value for loop unroll size.  */
 DEFHOOK
 (loop_unroll_adjust,
diff --git a/gcc/targhooks.c b/gcc/targhooks.c
index 1f158b8..4c8e817 100644
--- a/gcc/targhooks.c
+++ b/gcc/targhooks.c
@@ -76,6 +76,12 @@  along with GCC; see the file COPYING3.  If not see
 #include "tree-ssanames.h"
 #include "insn-codes.h"
 
+/* Default select_ccmp_cmp_order hook: keep the current order.  */
+int
+default_select_ccmp_cmp_order (int, int)
+{
+  return 1;
+}
 
 bool
 default_legitimate_address_p (enum machine_mode mode ATTRIBUTE_UNUSED,
diff --git a/gcc/targhooks.h b/gcc/targhooks.h
index 1ba0c1d..2651d23 100644
--- a/gcc/targhooks.h
+++ b/gcc/targhooks.h
@@ -215,3 +215,4 @@  extern enum machine_mode default_chkp_bound_mode (void);
 extern tree default_builtin_chkp_function (unsigned int);
 extern bool can_use_doloop_if_innermost (double_int, double_int,
 					 unsigned int, bool);
+extern int default_select_ccmp_cmp_order (int, int);