From patchwork Wed Nov 28 13:58:02 2012
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 202460
From: Richard Sandiford
To: ramrad01@arm.com
Cc: gcc-patches@gcc.gnu.org
Subject: Re: [0/8] Add optabs alternatives for insv, extv and extzv
Date: Wed, 28 Nov 2012 13:58:02 +0000
Message-ID: <87lidlvnh1.fsf@sandifor-thinkpad.stglab.manchester.uk.ibm.com>

Ramana Radhakrishnan writes:
> On Tue, Nov 27, 2012 at 8:22 PM, Richard Sandiford wrote:
>> Ramana Radhakrishnan writes:
>>>> Tested on x86_64-linux-gnu, powerpc64-linux-gnu and mipsisa64-elf.
>>>> Also tested by making sure that there were no changes in assembly
>>>> output for a set of gcc .ii files.  On the other hand, the
>>>> -march=octeon output for a set of mips64-linux-gnu gcc .ii files
>>>> showed the optimisation kicking in as hoped.
>>>
>>> This sequence of patches caused a regression in
>>> gcc.target/arm/volatile-bitfields-1.c.  I haven't reviewed the patch
>>> stack in great detail yet, but it looks like this sequence of patches
>>> doesn't take the ARM volatile bitfields into account.  (193600 is
>>> fine, 193606 is not.)
>>
>> Looks like I was wrong to drop the conditions from patch 5.
>> Please could you try the attached?
>
> Fixes volatile-bitfields-1.c but appears to break volatile-bitfields-4.c :(.
This is because the structure we are given is:

    (mem/v/j:SI (reg/v/f:SI 110 [ t ]) [3 t_2(D)->a+0 S1 A32])

i.e. a 1-byte SImode reference.  The strange size comes from the
set_mem_attributes_minus_bitpos code that was mentioned earlier in the
series, where we set the size based on the type even if it doesn't
match the mode.

The original rules for forcing strict-volatile bitfields from memory
to registers were different (or at least written in a different way)
between store_bit_field_1 -- where we force to a register in an
attempt to use register insvs -- and store_fixed_bit_field -- where we
use the fallback shifts and masks -- even though logically I thought
they'd be the same.

In store_bit_field_1 the mode was chosen like this:

      /* Get the mode to use for inserting into this field.  If OP0 is
         BLKmode, get the smallest mode consistent with the alignment.  If
         OP0 is a non-BLKmode object that is no wider than OP_MODE, use its
         mode.  Otherwise, use the smallest mode containing the field.  */

      if (GET_MODE (op0) == BLKmode
          || (op_mode != MAX_MACHINE_MODE
              && GET_MODE_SIZE (GET_MODE (op0)) > GET_MODE_SIZE (op_mode)))
        bestmode = get_best_mode (bitsize, bitnum, MEM_ALIGN (op0),
                                  (op_mode == MAX_MACHINE_MODE
                                   ? VOIDmode : op_mode),
                                  MEM_VOLATILE_P (op0));
      else
        bestmode = GET_MODE (op0);

      if (bestmode != VOIDmode
          && GET_MODE_SIZE (bestmode) >= GET_MODE_SIZE (fieldmode)
          && !(SLOW_UNALIGNED_ACCESS (bestmode, MEM_ALIGN (op0))
               && GET_MODE_BITSIZE (bestmode) > MEM_ALIGN (op0)))
        {

i.e. no explicit test of flag_strict_volatile_bitfields.
In store_fixed_bit_field we had:

  mode = GET_MODE (op0);
  if (GET_MODE_BITSIZE (mode) == 0
      || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
    mode = word_mode;

  if (MEM_VOLATILE_P (op0)
      && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
      && flag_strict_volatile_bitfields > 0)
    mode = GET_MODE (op0);
  else
    mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
                          MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));

The same thing applied to extract_bit_field_1 and
extract_fixed_bit_field, with the latter using:

      if (MEM_VOLATILE_P (op0)
          && flag_strict_volatile_bitfields > 0)
        {
          if (GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
            mode = GET_MODE (op0);
          else if (target && GET_MODE_BITSIZE (GET_MODE (target)) > 0)
            mode = GET_MODE (target);
          else
            mode = tmode;
        }
      else
        mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
                              MEM_ALIGN (op0), word_mode,
                              MEM_VOLATILE_P (op0));

This patch seems to fix the volatile bitfield tests, both with the
default arm-linux-gnueabi (which already works) and with
-mcpu=cortex-a8.  Could you give it a proper test?

Richard


gcc/
	* expmed.c (adjust_bit_field_mem_for_reg): Add handling of
	volatile bitfields.

Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 193881)
+++ gcc/expmed.c	(working copy)
@@ -372,6 +372,17 @@
 				bitregion_end, MEM_ALIGN (op0),
 				MEM_VOLATILE_P (op0));
   enum machine_mode best_mode;
+  if (MEM_VOLATILE_P (op0) && flag_strict_volatile_bitfields > 0)
+    {
+      unsigned int size = GET_MODE_BITSIZE (GET_MODE (op0));
+      if (size > 0)
+	while (iter.next_mode (&best_mode))
+	  if (GET_MODE_BITSIZE (best_mode) == size)
+	    return narrow_bit_field_mem (op0, best_mode, bitsize, bitnum,
+					 new_bitnum);
+      return NULL_RTX;
+    }
+
   if (iter.next_mode (&best_mode))
     {
       /* We can use a memory in BEST_MODE.  See whether this is true for