Fix folding of UBSAN_CHECK_SUB (x, 0) etc.

Submitter Jakub Jelinek
Date March 25, 2014, 7:20 p.m.
Message ID <20140325192015.GU1817@tucnak.redhat.com>
Permalink /patch/333695/
State New

Comments

Jakub Jelinek - March 25, 2014, 7:20 p.m.
On Tue, Mar 25, 2014 at 10:15:37AM +0100, Richard Biener wrote:
> On Tue, 25 Mar 2014, Richard Biener wrote:
> 
> > On Tue, 25 Mar 2014, Jakub Jelinek wrote:
> > 
> > > Hi!
> > > 
> > > While working on the previous patch, I've noticed a severe omission in the
> > > -fsanitize=signed-integer-overflow support.  Nothing ever folds
> > > addition/subtraction of 0 or multiplication by [0, 1]; none of these can
> > > ever overflow, and thus we can turn them into normal operations and let
> > > those be folded like any other operation.
> > > 
> > > Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
> > > --with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?
> > 
> > I think you only need to adjust gimple_fold_stmt_to_constant_1.
> 
> Actually there only for a * 0 and a - a, as others don't result in
> constants.  Still the other cases should be handled in fold_stmt,
> not only in VRP.

Ok, here is an updated patch that optimizes this in VRP,
gimple_fold_stmt_to_constant_1 and gimple_fold_call.

Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

2014-03-25  Jakub Jelinek  <jakub@redhat.com>

	* tree-vrp.c (simplify_internal_call_using_ranges): If only
	one range is range_int_cst_p, but not both, at least optimize
	addition/subtraction of 0 and multiplication by 0 or 1.
	* gimple-fold.c (gimple_fold_call): Fold
	IFN_UBSAN_CHECK_{ADD,SUB,MUL}.
	(gimple_fold_stmt_to_constant_1): If both op0 and op1 aren't
	INTEGER_CSTs, try to fold at least x * 0 and y - y.



	Jakub
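
As a rough illustration (a made-up example, not taken from the patch or the
GCC testsuite): compiled with -fsanitize=signed-integer-overflow -O2, the
helpers below are inlined, leaving IFN_UBSAN_CHECK_MUL/ADD calls whose second
operand is a literal 0 or 1; with this patch those checks can be folded into
plain operations or constants, since none of them can overflow.

    /* Hypothetical testcase, assuming -fsanitize=signed-integer-overflow -O2.  */
    static inline int mul (int a, int b) { return a * b; }
    static inline int add (int a, int b) { return a + b; }

    int f0 (int y) { return mul (y, 0); }  /* UBSAN_CHECK_MUL (y, 0) folds to 0  */
    int f1 (int y) { return mul (y, 1); }  /* UBSAN_CHECK_MUL (y, 1) folds to y  */
    int f2 (int y) { return add (y, 0); }  /* UBSAN_CHECK_ADD (y, 0) folds to y  */
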
Richard Guenther - March 26, 2014, 9:02 a.m.
On Tue, 25 Mar 2014, Jakub Jelinek wrote:

> Ok, here is an updated patch that optimizes this in VRP,
> gimple_fold_stmt_to_constant_1 and gimple_fold_call.
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, ok for trunk?

Ok.

Thanks,
Richard.


Patch

--- gcc/tree-vrp.c.jj	2014-03-25 09:22:01.352151925 +0100
+++ gcc/tree-vrp.c	2014-03-25 11:25:44.898562545 +0100
@@ -9336,31 +9336,58 @@  simplify_internal_call_using_ranges (gim
   else if (TREE_CODE (op0) == INTEGER_CST)
     set_value_range_to_value (&vr0, op0, NULL);
   else
-    return false;
+    set_value_range_to_varying (&vr0);
 
   if (TREE_CODE (op1) == SSA_NAME)
     vr1 = *get_value_range (op1);
   else if (TREE_CODE (op1) == INTEGER_CST)
     set_value_range_to_value (&vr1, op1, NULL);
   else
-    return false;
+    set_value_range_to_varying (&vr1);
 
-  if (!range_int_cst_p (&vr0) || !range_int_cst_p (&vr1))
-    return false;
-
-  tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
-  tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
-  if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
-      || r2 == NULL_TREE || TREE_OVERFLOW (r2))
-    return false;
-  if (subcode == MULT_EXPR)
+  if (!range_int_cst_p (&vr0))
+    {
+      /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
+	 optimize at least x = y + 0; x = y - 0; x = y * 0;
+	 and x = y * 1; which never overflow.  */
+      if (!range_int_cst_p (&vr1))
+	return false;
+      if (tree_int_cst_sgn (vr1.min) == -1)
+	return false;
+      if (compare_tree_int (vr1.max, subcode == MULT_EXPR) == 1)
+	return false;
+    }
+  else if (!range_int_cst_p (&vr1))
+    {
+      /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
+	 optimize at least x = 0 + y; x = 0 * y; and x = 1 * y;
+	 which never overflow.  */
+      if (subcode == MINUS_EXPR)
+	return false;
+      if (!range_int_cst_p (&vr0))
+	return false;
+      if (tree_int_cst_sgn (vr0.min) == -1)
+	return false;
+      if (compare_tree_int (vr0.max, subcode == MULT_EXPR) == 1)
+	return false;
+    }
+  else
     {
-      tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
-      tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
-      if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
-	  || r4 == NULL_TREE || TREE_OVERFLOW (r4))
+      tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
+      tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
+      if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
+	  || r2 == NULL_TREE || TREE_OVERFLOW (r2))
 	return false;
+      if (subcode == MULT_EXPR)
+	{
+	  tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
+	  tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
+	  if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
+	      || r4 == NULL_TREE || TREE_OVERFLOW (r4))
+	    return false;
+	}
     }
+
   gimple g = gimple_build_assign_with_ops (subcode, gimple_call_lhs (stmt),
 					   op0, op1);
   gsi_replace (gsi, g, false);
--- gcc/gimple-fold.c.jj	2014-03-18 17:08:48.000000000 +0100
+++ gcc/gimple-fold.c	2014-03-25 12:04:07.277342445 +0100
@@ -1186,13 +1186,56 @@  gimple_fold_call (gimple_stmt_iterator *
       else if (gimple_call_builtin_p (stmt, BUILT_IN_MD))
 	changed |= targetm.gimple_fold_builtin (gsi);
     }
-  else if (gimple_call_internal_p (stmt)
-	   && gimple_call_internal_fn (stmt) == IFN_BUILTIN_EXPECT)
+  else if (gimple_call_internal_p (stmt))
     {
-      tree result = fold_builtin_expect (gimple_location (stmt),
-					 gimple_call_arg (stmt, 0),
-					 gimple_call_arg (stmt, 1),
-					 gimple_call_arg (stmt, 2));
+      enum tree_code subcode = ERROR_MARK;
+      tree result = NULL_TREE;
+      switch (gimple_call_internal_fn (stmt))
+	{
+	case IFN_BUILTIN_EXPECT:
+	  result = fold_builtin_expect (gimple_location (stmt),
+					gimple_call_arg (stmt, 0),
+					gimple_call_arg (stmt, 1),
+					gimple_call_arg (stmt, 2));
+	  break;
+	case IFN_UBSAN_CHECK_ADD:
+	  subcode = PLUS_EXPR;
+	  break;
+	case IFN_UBSAN_CHECK_SUB:
+	  subcode = MINUS_EXPR;
+	  break;
+	case IFN_UBSAN_CHECK_MUL:
+	  subcode = MULT_EXPR;
+	  break;
+	default:
+	  break;
+	}
+      if (subcode != ERROR_MARK)
+	{
+	  tree arg0 = gimple_call_arg (stmt, 0);
+	  tree arg1 = gimple_call_arg (stmt, 1);
+	  /* x = y + 0; x = y - 0; x = y * 0; */
+	  if (integer_zerop (arg1))
+	    result = subcode == MULT_EXPR
+		     ? build_zero_cst (TREE_TYPE (arg0))
+		     : arg0;
+	  /* x = 0 + y; x = 0 * y; */
+	  else if (subcode != MINUS_EXPR && integer_zerop (arg0))
+	    result = subcode == MULT_EXPR
+		     ? build_zero_cst (TREE_TYPE (arg0))
+		     : arg1;
+	  /* x = y - y; */
+	  else if (subcode == MINUS_EXPR && operand_equal_p (arg0, arg1, 0))
+	    result = build_zero_cst (TREE_TYPE (arg0));
+	  /* x = y * 1; x = 1 * y; */
+	  else if (subcode == MULT_EXPR)
+	    {
+	      if (integer_onep (arg1))
+		result = arg0;
+	      else if (integer_onep (arg0))
+		result = arg1;
+	    }
+	}
       if (result)
 	{
 	  if (!update_call_from_tree (gsi, result))
@@ -2688,15 +2731,32 @@  gimple_fold_stmt_to_constant_1 (gimple s
 	      default:
 		return NULL_TREE;
 	      }
-	    tree op0 = (*valueize) (gimple_call_arg (stmt, 0));
-	    tree op1 = (*valueize) (gimple_call_arg (stmt, 1));
+	    tree arg0 = gimple_call_arg (stmt, 0);
+	    tree arg1 = gimple_call_arg (stmt, 1);
+	    tree op0 = (*valueize) (arg0);
+	    tree op1 = (*valueize) (arg1);
 
 	    if (TREE_CODE (op0) != INTEGER_CST
 		|| TREE_CODE (op1) != INTEGER_CST)
-	      return NULL_TREE;
-	    tree res = fold_binary_loc (loc, subcode,
-					TREE_TYPE (gimple_call_arg (stmt, 0)),
-					op0, op1);
+	      {
+		switch (subcode)
+		  {
+		  case MULT_EXPR:
+		    /* x * 0 = 0 * x = 0 without overflow.  */
+		    if (integer_zerop (op0) || integer_zerop (op1))
+		      return build_zero_cst (TREE_TYPE (arg0));
+		    break;
+		  case MINUS_EXPR:
+		    /* y - y = 0 without overflow.  */
+		    if (operand_equal_p (op0, op1, 0))
+		      return build_zero_cst (TREE_TYPE (arg0));
+		    break;
+		  default:
+		    break;
+		  }
+	      }
+	    tree res
+	      = fold_binary_loc (loc, subcode, TREE_TYPE (arg0), op0, op1);
 	    if (res
 		&& TREE_CODE (res) == INTEGER_CST
 		&& !TREE_OVERFLOW (res))
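
A note on the range test in the tree-vrp.c hunk above: subcode == MULT_EXPR
evaluates to 1 for multiplication and to 0 for addition/subtraction, so the
compare_tree_int call rejects any known operand range outside [0, 1] for
MULT_EXPR and outside [0, 0] otherwise, i.e. it only accepts constant operands
that can never make the operation overflow.  A plain-C paraphrase of that
check (for exposition only, not GCC code; the names are made up):

    /* Paraphrase of the new operand-range test, assuming the known range
       [min, max] fits in a plain integer type.  */
    static int
    operand_range_is_safe (long min, long max, int is_mult)
    {
      if (min < 0)                    /* tree_int_cst_sgn (vr.min) == -1      */
        return 0;
      if (max > (is_mult ? 1 : 0))    /* compare_tree_int (vr.max, ...) == 1  */
        return 0;
      return 1;                       /* {0} for +/-, {0, 1} for *            */
    }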