Patchwork: Fix folding of UBSAN_CHECK_SUB (x, 0) etc.

Submitter Jakub Jelinek
Date March 25, 2014, 7:28 a.m.
Message ID <20140325072800.GL1817@tucnak.redhat.com>
Permalink /patch/333301/
State New
Headers show

Comments

Jakub Jelinek - March 25, 2014, 7:28 a.m.
Hi!

While working on the previous patch, I noticed a severe omission in the
-fsanitize=signed-integer-overflow support.  Nothing ever
folds addition/subtraction of 0 or multiplication by [0, 1]; none of these
can ever overflow, and thus we can fold them into normal operations and let
those be simplified as normal operations.

Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
--with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?

2014-03-25  Jakub Jelinek  <jakub@redhat.com>

	* tree-vrp.c (simplify_internal_call_using_ranges): If only
	one range is range_int_cst_p, but not both, at least optimize
	addition/subtraction of 0 and multiplication by 0 or 1.


	Jakub
Richard Guenther - March 25, 2014, 9:11 a.m.
On Tue, 25 Mar 2014, Jakub Jelinek wrote:

> Hi!
> 
> While working on the previous patch, I noticed a severe omission in the
> -fsanitize=signed-integer-overflow support.  Nothing ever
> folds addition/subtraction of 0 or multiplication by [0, 1]; none of these
> can ever overflow, and thus we can fold them into normal operations and let
> those be simplified as normal operations.
> 
> Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
> --with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?

I think you only need to adjust gimple_fold_stmt_to_constant_1.

Richard.

> 2014-03-25  Jakub Jelinek  <jakub@redhat.com>
> 
> 	* tree-vrp.c (simplify_internal_call_using_ranges): If only
> 	one range is range_int_cst_p, but not both, at least optimize
> 	addition/subtraction of 0 and multiplication by 0 or 1.
Richard Guenther - March 25, 2014, 9:15 a.m.
On Tue, 25 Mar 2014, Richard Biener wrote:

> On Tue, 25 Mar 2014, Jakub Jelinek wrote:
> 
> > Hi!
> > 
> > While working on the previous patch, I noticed a severe omission in the
> > -fsanitize=signed-integer-overflow support.  Nothing ever
> > folds addition/subtraction of 0 or multiplication by [0, 1]; none of these
> > can ever overflow, and thus we can fold them into normal operations and let
> > those be simplified as normal operations.
> > 
> > Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
> > --with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?
> 
> I think you only need to adjust gimple_fold_stmt_to_constant_1.

Actually, there it is only needed for a * 0 and a - a, as the other cases
don't result in constants.  Still, the other cases should be handled in
fold_stmt, not only in VRP.

Richard.

> Richard.
> 
> > 2014-03-25  Jakub Jelinek  <jakub@redhat.com>
> > 
> > 	* tree-vrp.c (simplify_internal_call_using_ranges): If only
> > 	one range is range_int_cst_p, but not both, at least optimize
> > 	addition/subtraction of 0 and multiplication by 0 or 1.
Jakub Jelinek - March 25, 2014, 9:46 a.m.
On Tue, Mar 25, 2014 at 10:15:37AM +0100, Richard Biener wrote:
> On Tue, 25 Mar 2014, Richard Biener wrote:
> > > While working on the previous patch, I noticed a severe omission in the
> > > -fsanitize=signed-integer-overflow support.  Nothing ever
> > > folds addition/subtraction of 0 or multiplication by [0, 1]; none of these
> > > can ever overflow, and thus we can fold them into normal operations and let
> > > those be simplified as normal operations.
> > > 
> > > Bootstrapped/regtested on x86_64-linux and i686-linux, tested with
> > > --with-build-config=bootstrap-ubsan bootstrap on i686-linux, ok for trunk?
> > 
> > I think you only need to adjust gimple_fold_stmt_to_constant_1.
> 
> Actually there only for a * 0 and a - a, as others don't result in
> constants.  Still the other cases should be handled in fold_stmt,
> not only in VRP.

IMHO we need the VRP case too, at least for multiplication; otherwise
we wouldn't optimize, say:
int
baz (int x, int y)
{
  if (y < 32 || y > 33)
    return 6;
  return x * (y - 32);
}
where y - 32 is not known to be a constant, but is known to be in [0, 1],
which is still fine to optimize into a normal multiplication, because it
never overflows.

Anyway, I'll look into handling it also in gimple-fold.c.

	Jakub

Patch

--- gcc/tree-vrp.c.jj	2014-03-12 10:14:18.000000000 +0100
+++ gcc/tree-vrp.c	2014-03-24 16:14:50.822674164 +0100
@@ -9336,31 +9336,58 @@  simplify_internal_call_using_ranges (gim
   else if (TREE_CODE (op0) == INTEGER_CST)
     set_value_range_to_value (&vr0, op0, NULL);
   else
-    return false;
+    set_value_range_to_varying (&vr0);
 
   if (TREE_CODE (op1) == SSA_NAME)
     vr1 = *get_value_range (op1);
   else if (TREE_CODE (op1) == INTEGER_CST)
     set_value_range_to_value (&vr1, op1, NULL);
   else
-    return false;
+    set_value_range_to_varying (&vr1);
 
-  if (!range_int_cst_p (&vr0) || !range_int_cst_p (&vr1))
-    return false;
-
-  tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
-  tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
-  if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
-      || r2 == NULL_TREE || TREE_OVERFLOW (r2))
-    return false;
-  if (subcode == MULT_EXPR)
+  if (!range_int_cst_p (&vr0))
+    {
+      /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
+	 optimize at least x = y + 0; x = y - 0; x = y * 0;
+	 and x = y * 1; which never overflow.  */
+      if (!range_int_cst_p (&vr1))
+	return false;
+      if (tree_int_cst_sgn (vr1.min) == -1)
+	return false;
+      if (compare_tree_int (vr1.max, subcode == MULT_EXPR) == 1)
+	return false;
+    }
+  else if (!range_int_cst_p (&vr1))
+    {
+      /* If one range is VR_ANTI_RANGE, VR_VARYING etc.,
+	 optimize at least x = 0 + y; x = 0 * y; and x = 1 * y;
+	 which never overflow.  */
+      if (subcode == MINUS_EXPR)
+	return false;
+      if (!range_int_cst_p (&vr0))
+	return false;
+      if (tree_int_cst_sgn (vr0.min) == -1)
+	return false;
+      if (compare_tree_int (vr0.max, subcode == MULT_EXPR) == 1)
+	return false;
+    }
+  else
     {
-      tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
-      tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
-      if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
-	  || r4 == NULL_TREE || TREE_OVERFLOW (r4))
+      tree r1 = int_const_binop (subcode, vr0.min, vr1.min);
+      tree r2 = int_const_binop (subcode, vr0.max, vr1.max);
+      if (r1 == NULL_TREE || TREE_OVERFLOW (r1)
+	  || r2 == NULL_TREE || TREE_OVERFLOW (r2))
 	return false;
+      if (subcode == MULT_EXPR)
+	{
+	  tree r3 = int_const_binop (subcode, vr0.min, vr1.max);
+	  tree r4 = int_const_binop (subcode, vr0.max, vr1.min);
+	  if (r3 == NULL_TREE || TREE_OVERFLOW (r3)
+	      || r4 == NULL_TREE || TREE_OVERFLOW (r4))
+	    return false;
+	}
     }
+
   gimple g = gimple_build_assign_with_ops (subcode, gimple_call_lhs (stmt),
 					   op0, op1);
   gsi_replace (gsi, g, false);