Patchwork [RFC] Bit CCP and pointer alignment propagation

Submitter Paolo Bonzini
Date July 30, 2010, 1:29 p.m.
Message ID <4C52D3BE.5080100@gnu.org>
Permalink /patch/60356/
State New

Comments

Paolo Bonzini - July 30, 2010, 1:29 p.m.
On 07/30/2010 03:15 PM, Richard Guenther wrote:
> I think we can have negative shift counts (at least the constant folding
> code suggests so), this is why I have the code as-is.

No, that seems very weird.  Certainly expand does not handle it, and the 
implementation-defined section of the manual does not mention it.  I'm 
more inclined to consider it a historical wart, given this comment:

   /* Previously detected shift-counts computed by NEGATE_EXPR
      and shifted in the other direction; but that does not work
      on all machines.  */

dating back to the beginning of the GCC repo.  I wonder what the 
attached patch would do.

BTW the SHIFT_COUNT_TRUNCATED handling is not needed because you get it 
from lshift_double.  However, this opens another small can of worms. 
lshift_double does

   if (SHIFT_COUNT_TRUNCATED)
     count %= prec;

which makes bit-level analysis totally wrong for non-power-of-two 
precisions, including IIRC bitfields bigger than sizeof(int).  I think 
the shifting functions are wrong, however, and should round prec up to 
the next power of two before applying the truncation.

Paolo
Richard Guenther - July 30, 2010, 1:39 p.m.
On Fri, 30 Jul 2010, Paolo Bonzini wrote:

> On 07/30/2010 03:15 PM, Richard Guenther wrote:
> > I think we can have negative shift counts (at least the constant folding
> > code suggests so), this is why I have the code as-is.
> 
> No, that seems very weird.  Sure expand does not handle it, and the
> implementation-defined section of the manual does not mention it.  I'm more
> inclined to consider it a historical wart, given this comment:
> 
>   /* Previously detected shift-counts computed by NEGATE_EXPR
>      and shifted in the other direction; but that does not work
>      on all machines.  */
> 
> dating back to the beginning of the GCC repo.  I wonder what the attached
> patch would do.

Maybe - at least with 2 << -1 we only warn:

t.c:3: warning: left shift count is negative

and happily continue, doing a constant right shift.  With -1 put
into a temporary we get sall with a negative shift at -O0 (and
zero as a result), and 1 as the result when optimizing.

> BTW the SHIFT_COUNT_TRUNCATED handling is not needed because you get it from
> lshift_double.  However, this opens another small can of worms. lshift_double
> does
> 
>   if (SHIFT_COUNT_TRUNCATED)
>     count %= prec;
> 
> which makes bit-level analysis totally wrong for non-power-of-two precisions,
> including IIRC bitfields bigger than sizeof(int).  I think the shifting
> functions are wrong however, and should round prec up to the next power of two
> before applying the truncation.

Ick.  Indeed ...

Richard.
Paolo Bonzini - July 30, 2010, 1:42 p.m.
On 07/30/2010 03:39 PM, Richard Guenther wrote:
> On Fri, 30 Jul 2010, Paolo Bonzini wrote:
>
>> On 07/30/2010 03:15 PM, Richard Guenther wrote:
>>> I think we can have negative shift counts (at least the constant folding
>>> code suggests so), this is why I have the code as-is.
>>
>> No, that seems very weird.  Sure expand does not handle it, and the
>> implementation-defined section of the manual does not mention it.  I'm more
>> inclined to consider it a historical wart, given this comment:
>>
>>    /* Previously detected shift-counts computed by NEGATE_EXPR
>>       and shifted in the other direction; but that does not work
>>       on all machines.  */
>>
>> dating back to the beginning of the GCC repo.  I wonder what the attached
>> patch would do.
>
> Maybe - at least with 2 << -1 we only warn:
>
> t.c:3: warning: left shift count is negative
>
> and happily continue, doing a constant right shift.

So the patch would crash.  The right solution is probably to change the 
shift count to unsigned and fix the fallout.

Paolo
Richard Guenther - July 30, 2010, 2:47 p.m.
On Fri, 30 Jul 2010, Paolo Bonzini wrote:

> On 07/30/2010 03:39 PM, Richard Guenther wrote:
> > On Fri, 30 Jul 2010, Paolo Bonzini wrote:
> > 
> > > On 07/30/2010 03:15 PM, Richard Guenther wrote:
> > > > I think we can have negative shift counts (at least the constant folding
> > > > code suggests so), this is why I have the code as-is.
> > > 
> > > No, that seems very weird.  Sure expand does not handle it, and the
> > > implementation-defined section of the manual does not mention it.  I'm
> > > more
> > > inclined to consider it a historical wart, given this comment:
> > > 
> > >    /* Previously detected shift-counts computed by NEGATE_EXPR
> > >       and shifted in the other direction; but that does not work
> > >       on all machines.  */
> > > 
> > > dating back to the beginning of the GCC repo.  I wonder what the attached
> > > patch would do.
> > 
> > Maybe - at least with 2 << -1 we only warn:
> > 
> > t.c:3: warning: left shift count is negative
> > 
> > and happily continue, doing a constant right shift.
> 
> So the patch would crash.  The right solution is probably to change the shift
> count to unsigned and fix the fallout.

Yes.  Btw, the intel compiler warns the same way but treats the shift
count as unsigned (at least it produces zero, not 1).

Richard.

Patch

Index: double-int.c
===================================================================
--- double-int.c	(revision 160609)
+++ double-int.c	(working copy)
@@ -314,12 +314,7 @@  lshift_double (unsigned HOST_WIDE_INT l1
 {
   unsigned HOST_WIDE_INT signmask;
 
-  if (count < 0)
-    {
-      rshift_double (l1, h1, -count, prec, lv, hv, arith);
-      return;
-    }
-
+  gcc_assert (count >= 0);
   if (SHIFT_COUNT_TRUNCATED)
     count %= prec;
 
@@ -377,12 +372,7 @@  rshift_double (unsigned HOST_WIDE_INT l1
 {
   unsigned HOST_WIDE_INT signmask;
 
-  if (count < 0)
-    {
-      lshift_double (l1, h1, -count, prec, lv, hv, arith);
-      return;
-    }
-
+  gcc_assert (count >= 0);
   signmask = (arith
 	      ? -((unsigned HOST_WIDE_INT) h1 >> (HOST_BITS_PER_WIDE_INT - 1))
 	      : 0);
@@ -445,6 +435,7 @@  lrotate_double (unsigned HOST_WIDE_INT l
   unsigned HOST_WIDE_INT s1l, s2l;
   HOST_WIDE_INT s1h, s2h;
 
+  gcc_assert (count >= 0);
   count %= prec;
   if (count < 0)
     count += prec;
@@ -467,6 +458,7 @@  rrotate_double (unsigned HOST_WIDE_INT l
   unsigned HOST_WIDE_INT s1l, s2l;
   HOST_WIDE_INT s1h, s2h;
 
+  gcc_assert (count >= 0);
   count %= prec;
   if (count < 0)
     count += prec;
Index: fold-const.c
===================================================================
--- fold-const.c	(revision 160609)
+++ fold-const.c	(working copy)
@@ -957,7 +957,10 @@  int_const_binop (enum tree_code code, co
       break;
 
     case RSHIFT_EXPR:
-      int2l = -int2l;
+      rshift_double (int1l, int1h, int2l, TYPE_PRECISION (type),
+		     &low, &hi, !uns);
+      break;
+
     case LSHIFT_EXPR:
       /* It's unclear from the C standard whether shifts can overflow.
 	 The following code ignores overflow; perhaps a C standard
@@ -967,7 +970,10 @@  int_const_binop (enum tree_code code, co
       break;
 
     case RROTATE_EXPR:
-      int2l = - int2l;
+      rrotate_double (int1l, int1h, int2l, TYPE_PRECISION (type),
+		      &low, &hi);
+      break;
+
     case LROTATE_EXPR:
       lrotate_double (int1l, int1h, int2l, TYPE_PRECISION (type),
 		      &low, &hi);