From patchwork Fri Jul 1 17:27:37 2011
From: Bernd Schmidt
Date: Fri, 01 Jul 2011 19:27:37 +0200
Subject: [1/11] Use targetm.shift_truncation_mask more consistently
To: GCC Patches <gcc-patches@gcc.gnu.org>
Message-ID: <4E0E0389.5040505@codesourcery.com>
In-Reply-To: <4E0E0310.60406@codesourcery.com>
References: <4E0E0310.60406@codesourcery.com>
X-Patchwork-Id: 102930

At some point we've grown a shift_truncation_mask hook, but we're not
using it everywhere we're masking shift counts.  This patch changes the
instances I found.
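
For reference, the default definition of the hook is essentially the
following (a sketch along the lines of targhooks.c, not the verbatim
source):

    /* Mask of the significant shift-count bits for MODE: all ones
       below the mode width if the target truncates shift counts,
       0 (meaning "no truncation") otherwise.  */
    unsigned HOST_WIDE_INT
    default_shift_truncation_mask (enum machine_mode mode)
    {
      return SHIFT_COUNT_TRUNCATED ? GET_MODE_BITSIZE (mode) - 1 : 0;
    }

so on a SHIFT_COUNT_TRUNCATED target the hook yields the same
power-of-two-minus-one mask that the code below currently constructs
by hand.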

Bernd

	* simplify-rtx.c (simplify_const_binary_operation): Use the
	shift_truncation_mask hook instead of performing modulo by width.
	Compare against mode precision, not bitsize.
	* combine.c (combine_simplify_rtx, simplify_shift_const_1): Use
	shift_truncation_mask instead of constructing the value manually.

Index: gcc/simplify-rtx.c
===================================================================
--- gcc/simplify-rtx.c.orig
+++ gcc/simplify-rtx.c
@@ -3704,8 +3704,8 @@ simplify_const_binary_operation (enum rt
	     shift_truncation_mask, since the shift might not be part of an
	     ashlM3, lshrM3 or ashrM3 instruction.  */
	  if (SHIFT_COUNT_TRUNCATED)
-	    arg1 = (unsigned HOST_WIDE_INT) arg1 % width;
-	  else if (arg1 < 0 || arg1 >= GET_MODE_BITSIZE (mode))
+	    arg1 &= targetm.shift_truncation_mask (mode);
+	  else if (arg1 < 0 || arg1 >= GET_MODE_PRECISION (mode))
	    return 0;

	  val = (code == ASHIFT
Index: gcc/combine.c
===================================================================
--- gcc/combine.c.orig
+++ gcc/combine.c
@@ -5941,9 +5941,7 @@ combine_simplify_rtx (rtx x, enum machin
      else if (SHIFT_COUNT_TRUNCATED && !REG_P (XEXP (x, 1)))
	SUBST (XEXP (x, 1),
	       force_to_mode (XEXP (x, 1), GET_MODE (XEXP (x, 1)),
-			      ((unsigned HOST_WIDE_INT) 1
-			       << exact_log2 (GET_MODE_BITSIZE (GET_MODE (x))))
-			      - 1,
+			      targetm.shift_truncation_mask (GET_MODE (x)),
			      0));
      break;

@@ -9896,7 +9894,7 @@ simplify_shift_const_1 (enum rtx_code co
     want to do this inside the loop as it makes it more difficult to
     combine shifts.  */
  if (SHIFT_COUNT_TRUNCATED)
-    orig_count &= GET_MODE_BITSIZE (mode) - 1;
+    orig_count &= targetm.shift_truncation_mask (mode);

  /* If we were given an invalid count, don't do anything except
     exactly what was requested.  */
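
The simplify-rtx.c change relies on the mask form and the modulo form
agreeing whenever the mask is width - 1 and the width is a power of
two.  A standalone check of that identity (plain C for illustration,
not GCC code):

    #include <assert.h>
    #include <stdio.h>

    /* For a power-of-two WIDTH, "count % WIDTH" and
       "count & (WIDTH - 1)" truncate a nonnegative shift count
       identically, which is why the hook's width-1 mask can replace
       the explicit modulo.  */
    int
    main (void)
    {
      const unsigned int width = 32;   /* power of two, e.g. SImode bits */
      unsigned int count;

      for (count = 0; count < 4 * width; count++)
        assert (count % width == (count & (width - 1)));
      printf ("mask and modulo agree for counts 0..%u\n", 4 * width - 1);
      return 0;
    }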