From patchwork Tue Apr 11 17:24:56 2017
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 749576
Date: Tue, 11 Apr 2017 19:24:56 +0200
From: Jakub Jelinek
To: gcc-patches@gcc.gnu.org
Subject: [committed] Fix UB in simplify-rtx.c (PR middle-end/80100)
Message-ID: <20170411172456.GU1809@tucnak>
Reply-To: Jakub Jelinek

Hi!

The following testcase triggers UB in simplify_binary_operation_1: in
particular, trueop1 is 2 and it is shifted up by 63.  Later we want to
shift it down (arithmetically) again by 63, compare against the original
value, and only optimize if there is a match, i.e. if trueop1 can be
safely shifted up.  In cases where it can't, we don't want to trigger UB,
so the following patch just uses an unsigned shift, which is well defined,
and then the implementation-defined conversion to a signed type that we
rely on everywhere.  We still want mask to be signed so that the right
shift is arithmetic.

Bootstrapped/regtested on x86_64-linux and i686-linux, committed to trunk
as obvious.

2017-04-11  Jakub Jelinek  <jakub@redhat.com>

	PR middle-end/80100
	* simplify-rtx.c (simplify_binary_operation_1) <case IOR>: Perform
	left shift in unsigned HOST_WIDE_INT type.

	* gcc.dg/pr80100.c: New test.

	Jakub

--- gcc/simplify-rtx.c.jj	2017-04-11 16:09:22.003071899 +0200
+++ gcc/simplify-rtx.c	2017-04-11 16:01:44.350830295 +0200
@@ -2741,8 +2741,8 @@ simplify_binary_operation_1 (enum rtx_co
 	  && CONST_INT_P (XEXP (op0, 1))
 	  && INTVAL (XEXP (op0, 1)) < HOST_BITS_PER_WIDE_INT)
 	{
-	  int count = INTVAL (XEXP (op0, 1));
-	  HOST_WIDE_INT mask = INTVAL (trueop1) << count;
+	  int count = INTVAL (XEXP (op0, 1));
+	  HOST_WIDE_INT mask = UINTVAL (trueop1) << count;
 
 	  if (mask >> count == INTVAL (trueop1)
 	      && trunc_int_for_mode (mask, mode) == mask
--- gcc/testsuite/gcc.dg/pr80100.c.jj	2017-04-11 16:22:42.706047192 +0200
+++ gcc/testsuite/gcc.dg/pr80100.c	2017-04-11 16:22:29.000000000 +0200
@@ -0,0 +1,9 @@
+/* PR middle-end/80100 */
+/* { dg-do compile } */
+/* { dg-options "-O2" } */
+
+long int
+foo (long int x)
+{
+  return 2L | ((x - 1L) >> (__SIZEOF_LONG__ * __CHAR_BIT__ - 1));
+}
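
To illustrate the shift issue described above outside of GCC internals,
here is a small standalone C sketch.  It is not part of the patch; the
names value, count and mask, and the use of long long in place of
HOST_WIDE_INT, are stand-ins chosen for the example.

/* Standalone illustration only, not part of the patch.  long long stands
   in for HOST_WIDE_INT; value and count are made-up names.  */
#include <stdio.h>

int
main (void)
{
  long long value = 2;   /* plays the role of INTVAL (trueop1) */
  int count = 63;        /* plays the role of INTVAL (XEXP (op0, 1)) */

  /* value << count would be undefined behavior here: 2 shifted left by 63
     does not fit in a signed 64-bit type.  Doing the shift on the value
     converted to an unsigned type is well defined; converting the result
     back to a signed type is implementation-defined, which GCC relies on
     throughout.  */
  long long mask = (long long) ((unsigned long long) value << count);

  /* mask is kept signed so the right shift is arithmetic (on GCC); the
     round-trip check then detects whether the left shift lost bits.  */
  if (mask >> count == value)
    printf ("shifting %lld left by %d is safe\n", value, count);
  else
    printf ("shifting %lld left by %d would lose bits\n", value, count);
  return 0;
}

With these values the round-trip check fails, which matches the scenario
in the PR: the (mask >> count == INTVAL (trueop1)) guard rejects the
transformation instead of invoking UB while testing it.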