From patchwork Fri Jul 30 13:29:34 2010
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 60356
Message-ID: <4C52D3BE.5080100@gnu.org>
Date: Fri, 30 Jul 2010 15:29:34 +0200
From: Paolo Bonzini
To: Richard Guenther
CC: gcc-patches@gcc.gnu.org
Subject: Re: [PATCH][RFC] Bit CCP and pointer alignment propagation
References: <4C52C01E.4010202@gnu.org> <4C52CB87.6060409@gnu.org>
In-Reply-To: 
On 07/30/2010 03:15 PM, Richard Guenther wrote:
> I think we can have negative shift counts (at least the constant folding
> code suggests so), this is why I have the code as-is.

No, that seems very weird.  Sure, expand does not handle it, and the
implementation-defined section of the manual does not mention it.  I'm
more inclined to consider it a historical wart, given this comment:

  /* Previously detected shift-counts computed by NEGATE_EXPR
     and shifted in the other direction; but that does not work
     on all machines.  */

dating back to the beginning of the GCC repo.  I wonder what the
attached patch would do.

BTW the SHIFT_COUNT_TRUNCATED handling is not needed because you get it
from lshift_double.  However, this opens another small can of worms:
lshift_double does

  if (SHIFT_COUNT_TRUNCATED)
    count %= prec;

which makes bit-level analysis totally wrong for non-power-of-two
precisions, including IIRC bitfields bigger than sizeof(int).  I think
the shifting functions are wrong, however, and should round prec up to
the next power of two before applying the truncation.

Paolo

Index: double-int.c
===================================================================
--- double-int.c	(revision 160609)
+++ double-int.c	(working copy)
@@ -314,12 +314,7 @@ lshift_double (unsigned HOST_WIDE_INT l1
 {
   unsigned HOST_WIDE_INT signmask;
 
-  if (count < 0)
-    {
-      rshift_double (l1, h1, -count, prec, lv, hv, arith);
-      return;
-    }
-
+  gcc_assert (count >= 0);
   if (SHIFT_COUNT_TRUNCATED)
     count %= prec;
 
@@ -377,12 +372,7 @@ rshift_double (unsigned HOST_WIDE_INT l1
 {
   unsigned HOST_WIDE_INT signmask;
 
-  if (count < 0)
-    {
-      lshift_double (l1, h1, -count, prec, lv, hv, arith);
-      return;
-    }
-
+  gcc_assert (count >= 0);
   signmask = (arith
	      ? -((unsigned HOST_WIDE_INT) h1 >> (HOST_BITS_PER_WIDE_INT - 1))
	      : 0);
@@ -445,6 +435,7 @@ lrotate_double (unsigned HOST_WIDE_INT l
   unsigned HOST_WIDE_INT s1l, s2l;
   HOST_WIDE_INT s1h, s2h;
 
+  gcc_assert (count >= 0);
   count %= prec;
   if (count < 0)
     count += prec;
@@ -467,6 +458,7 @@ rrotate_double (unsigned HOST_WIDE_INT l
   unsigned HOST_WIDE_INT s1l, s2l;
   HOST_WIDE_INT s1h, s2h;
 
+  gcc_assert (count >= 0);
   count %= prec;
   if (count < 0)
     count += prec;
Index: fold-const.c
===================================================================
--- fold-const.c	(revision 160609)
+++ fold-const.c	(working copy)
@@ -957,7 +957,10 @@ int_const_binop (enum tree_code code, co
       break;
 
     case RSHIFT_EXPR:
-      int2l = -int2l;
+      rshift_double (int1l, int1h, int2l, TYPE_PRECISION (type),
+		     &low, &hi, !uns);
+      break;
+
     case LSHIFT_EXPR:
       /* It's unclear from the C standard whether shifts can overflow.
	 The following code ignores overflow; perhaps a C standard
@@ -967,7 +970,10 @@ int_const_binop (enum tree_code code, co
       break;
 
     case RROTATE_EXPR:
-      int2l = - int2l;
+      rrotate_double (int1l, int1h, int2l, TYPE_PRECISION (type),
+		      &low, &hi);
+      break;
+
     case LROTATE_EXPR:
       lrotate_double (int1l, int1h, int2l, TYPE_PRECISION (type),
		      &low, &hi);