From patchwork Thu Mar  5 11:00:16 2015
X-Patchwork-Submitter: Bernd Edlinger
X-Patchwork-Id: 446674
From: Bernd Edlinger
To: Richard Biener
Cc: "gcc-patches@gcc.gnu.org", Eric Botcazou
Subject: RE: [PATCH] Fix another wrong-code bug with -fstrict-volatile-bitfields
Date: Thu, 5 Mar 2015 12:00:16 +0100

Hi,

On Thu, 5 Mar 2015 10:00:26, Richard Biener wrote:
>
> On Wed, Mar 4, 2015 at 3:13 PM, Bernd Edlinger wrote:
>> bounced... again, without html.
>>
>> Hi Richard,
>>
>> while working on another bug in the area of -fstrict-volatile-bitfields
>> I became aware of another example where -fstrict-volatile-bitfields may
>> generate wrong code.  This is reproducible on a !STRICT_ALIGNMENT target
>> like x86_64.
>>
>> The problem is that strict_volatile_bitfield_p tries to allow more than
>> necessary if !STRICT_ALIGNMENT.  Everything works OK on ARM, for instance.
>>
>> If this function returns true, we may later call narrow_bit_field_mem,
>> and the check in strict_volatile_bitfield_p should mirror the logic
>> there: narrow_bit_field_mem just uses GET_MODE_BITSIZE (mode) and does
>> not care about STRICT_ALIGNMENT, so in the end *new_bitnum + bitsize may
>> reach beyond the end of the region.  This causes store_fixed_bit_field_1
>> to silently fail to generate correct code.
> Hmm, but the comment sounds like using GET_MODE_ALIGNMENT would be
> more correct (even for !strict-alignment) - if mode is SImode and the
> mode alignment is 16 (HImode aligned), then we don't need to split the
> load if bitnum is 16 and bitsize is 32.
>
> So maybe narrow_bit_field_mem needs to be fixed as well?
>

I'd rather not touch that function....  In the whole of expmed.c the only
place where GET_MODE_ALIGNMENT is used is in simple_mem_bitfield_p, and
only if SLOW_UNALIGNED_ACCESS returns 1, which is the case on only a few
targets.

Do you know of any targets where GET_MODE_ALIGNMENT may be less than
GET_MODE_BITSIZE?

Maybe one thing is missing from strict_volatile_bitfield_p; I am not sure.
Maybe it should check that

  MEM_ALIGN (op0) >= GET_MODE_ALIGNMENT (fieldmode)

because the strict volatile bitfields handling will inevitably try to use
the fieldmode to access the memory.

Or would it be better to say

  MEM_ALIGN (op0) >= GET_MODE_BITSIZE (fieldmode)

because that is easier to explain when someone asks when we guarantee the
semantics of strict volatile bitfields?

Probably there is already something in the logic in expr.c that prevents
these cases, because otherwise it would be way too easy to find an example
of unaligned accesses to unaligned memory on STRICT_ALIGNMENT targets.

OK, what would you think about this variant?  Trying to use an atomic
access to a device register is pointless if the memory context is not
aligned to the MODE_BITSIZE; that has nothing to do with MODE_ALIGNMENT,
right?

Thanks
Bernd.

--- expmed.c.jj	2015-01-16 11:20:40.000000000 +0100
+++ expmed.c	2015-03-05 11:50:09.400444416 +0100
@@ -472,9 +472,11 @@ strict_volatile_bitfield_p (rtx op0, uns
     return false;
 
   /* Check for cases of unaligned fields that must be split.  */
-  if (bitnum % BITS_PER_UNIT + bitsize > modesize
-      || (STRICT_ALIGNMENT
-	  && bitnum % GET_MODE_ALIGNMENT (fieldmode) + bitsize > modesize))
+  if (bitnum % modesize + bitsize > modesize)
+    return false;
+
+  /* Check that the memory is sufficiently aligned.  */
+  if (MEM_ALIGN (op0) < modesize)
     return false;
 
   /* Check for cases where the C++ memory model applies.  */