From patchwork Tue Nov 27 20:19:34 2012
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 202297
From: Richard Sandiford
To: Eric Botcazou
Cc: gcc-patches@gcc.gnu.org
Subject: Re: PR 55438: Incorrect use of BIGGEST_ALIGNMENT
References: <871ufi7svh.fsf@talisman.default> <2385141.taqiEgospT@polaris>
 <87txscchrx.fsf@talisman.default> <2788010.lbKgRQuCzs@polaris>
Date: Tue, 27 Nov 2012 20:19:34 +0000
In-Reply-To: <2788010.lbKgRQuCzs@polaris> (Eric Botcazou's message of
 "Mon, 26 Nov 2012 22:38:03 +0100")
Message-ID: <87d2yydcix.fsf@talisman.default>
Eric Botcazou writes:
>> OK, how about:
>>
>>   /* ??? For historical reasons, reject modes that would normally
>>      receive greater alignment, even if unaligned accesses are
>>      acceptable.  This has both advantages and disadvantages.
>
> Fine with me.
>
>> I think here we really do want unit (i.e. the GET_MODE_BITSIZE).
>> We're dividing the bitfield into unit-sized chunks and want to know
>> whether those chunks are aligned or not.  If they aren't aligned,
>> we need to know whether unaligned accesses are OK.
>
> We could conceivably have an aligned mode with bitsize 32 and alignment 16
> (i.e. STRICT_ALIGNMENT 1 and BIGGEST_ALIGNMENT 16), meaning that we can use
> 16-bit aligned 32-bit chunks in this mode.  Why would the bitsize matter
> (assuming that it's a multiple of the alignment)?

I suppose I was putting too much store by the expmed.c code.
How does this version look?  Tested on x86_64-linux-gnu, powerpc64-linux-gnu
and cris-elf.

Richard


gcc/
	PR middle-end/55438
	* expmed.c (simple_mem_bitfield_p): New function, extracted from
	store_bit_field_1 and extract_bit_field_1.  Use GET_MODE_ALIGNMENT
	rather than bitsize when checking the alignment.
	* stor-layout.c (bit_field_mode_iterator::bit_field_mode_iterator):
	Don't limit ALIGN_.  Assume that memory is mapped in chunks of at
	least word size, regardless of BIGGEST_ALIGNMENT.
	(bit_field_mode_iterator::get_mode): Use GET_MODE_ALIGNMENT
	rather than unit when checking the alignment.
	(get_best_mode): Use GET_MODE_ALIGNMENT.

Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	2012-11-20 19:49:07.000000000 +0000
+++ gcc/expmed.c	2012-11-27 17:27:38.381139339 +0000
@@ -416,6 +416,21 @@ lowpart_bit_field_p (unsigned HOST_WIDE_
   else
     return bitnum % BITS_PER_WORD == 0;
 }
+
+/* Return true if OP is a memory and if a bitfield of size BITSIZE at
+   bit number BITNUM can be treated as a simple value of mode MODE.  */
+
+static bool
+simple_mem_bitfield_p (rtx op0, unsigned HOST_WIDE_INT bitsize,
+		       unsigned HOST_WIDE_INT bitnum, enum machine_mode mode)
+{
+  return (MEM_P (op0)
+	  && bitnum % BITS_PER_UNIT == 0
+	  && bitsize == GET_MODE_BITSIZE (mode)
+	  && (!SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (op0))
+	      || (bitnum % GET_MODE_ALIGNMENT (mode) == 0
+		  && MEM_ALIGN (op0) % GET_MODE_ALIGNMENT (mode) == 0)));
+}
 
 /* Try to use instruction INSV to store VALUE into a field of OP0.
    BITSIZE and BITNUM are as for store_bit_field.  */
@@ -624,12 +639,7 @@ store_bit_field_1 (rtx str_rtx, unsigned
   /* If the target is memory, storing any naturally aligned field can be
      done with a simple store.  For targets that support fast unaligned
      memory, any naturally sized, unit aligned field can be done directly.  */
-  if (MEM_P (op0)
-      && bitnum % BITS_PER_UNIT == 0
-      && bitsize == GET_MODE_BITSIZE (fieldmode)
-      && (!SLOW_UNALIGNED_ACCESS (fieldmode, MEM_ALIGN (op0))
-	  || (bitnum % bitsize == 0
-	      && MEM_ALIGN (op0) % bitsize == 0)))
+  if (simple_mem_bitfield_p (op0, bitsize, bitnum, fieldmode))
     {
       op0 = adjust_bitfield_address (op0, fieldmode, bitnum / BITS_PER_UNIT);
       emit_move_insn (op0, value);
@@ -1454,12 +1464,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
   /* Extraction of a full MODE1 value can be done with a load as long as
      the field is on a byte boundary and is sufficiently aligned.  */
-  if (MEM_P (op0)
-      && bitnum % BITS_PER_UNIT == 0
-      && bitsize == GET_MODE_BITSIZE (mode1)
-      && (!SLOW_UNALIGNED_ACCESS (mode1, MEM_ALIGN (op0))
-	  || (bitnum % bitsize == 0
-	      && MEM_ALIGN (op0) % bitsize == 0)))
+  if (simple_mem_bitfield_p (op0, bitsize, bitnum, mode1))
     {
       op0 = adjust_bitfield_address (op0, mode1, bitnum / BITS_PER_UNIT);
       return convert_extracted_bit_field (op0, mode, tmode, unsignedp);
Index: gcc/stor-layout.c
===================================================================
--- gcc/stor-layout.c	2012-11-24 12:16:28.947902915 +0000
+++ gcc/stor-layout.c	2012-11-26 23:16:36.273308430 +0000
@@ -2643,15 +2643,17 @@ fixup_unsigned_type (tree type)
 					   unsigned int align, bool volatilep)
 : mode_ (GET_CLASS_NARROWEST_MODE (MODE_INT)), bitsize_ (bitsize),
   bitpos_ (bitpos), bitregion_start_ (bitregion_start),
-  bitregion_end_ (bitregion_end), align_ (MIN (align, BIGGEST_ALIGNMENT)),
+  bitregion_end_ (bitregion_end), align_ (align),
   volatilep_ (volatilep), count_ (0)
 {
   if (!bitregion_end_)
     {
-      /* We can assume that any aligned chunk of ALIGN_ bits that overlaps
+      /* We can assume that any aligned chunk of UNITS bits that overlaps
	 the bitfield is mapped and won't trap.  */
-      bitregion_end_ = bitpos + bitsize + align_ - 1;
-      bitregion_end_ -= bitregion_end_ % align_ + 1;
+      unsigned HOST_WIDE_INT units = MIN (align, MAX (BIGGEST_ALIGNMENT,
+						      BITS_PER_WORD));
+      bitregion_end_ = bitpos + bitsize + units - 1;
+      bitregion_end_ -= bitregion_end_ % units + 1;
     }
 }
 
@@ -2694,7 +2696,8 @@ bit_field_mode_iterator::next_mode (enum
	break;
 
       /* Stop if the mode requires too much alignment.  */
-      if (unit > align_ && SLOW_UNALIGNED_ACCESS (mode_, align_))
+      if (GET_MODE_ALIGNMENT (mode_) > align_
+	  && SLOW_UNALIGNED_ACCESS (mode_, align_))
	break;
 
       *out_mode = mode_;
@@ -2753,8 +2756,9 @@ get_best_mode (int bitsize, int bitpos,
   enum machine_mode widest_mode = VOIDmode;
   enum machine_mode mode;
   while (iter.next_mode (&mode)
-	 /* ??? For historical reasons, reject modes that are wider than
-	    the alignment.  This has both advantages and disadvantages.
+	 /* ??? For historical reasons, reject modes that would normally
+	    receive greater alignment, even if unaligned accesses are
+	    acceptable.  This has both advantages and disadvantages.
	    Removing this check means that something like:
 
	       struct s { unsigned int x; unsigned int y; };
@@ -2808,7 +2812,7 @@ get_best_mode (int bitsize, int bitpos,
	    causes store_bit_field to keep a 128-bit memory reference,
	    so that the final bitfield reference still has a MEM_EXPR
	    and MEM_OFFSET.  */
-	 && GET_MODE_BITSIZE (mode) <= align
+	 && GET_MODE_ALIGNMENT (mode) <= align
	 && (largest_mode == VOIDmode
	     || GET_MODE_SIZE (mode) <= GET_MODE_SIZE (largest_mode)))
     {
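
(Not part of the patch -- just an illustrative sketch for readers who want to
see the kind of access simple_mem_bitfield_p is aimed at.  The struct, field
names and function below are made up for the example, and the exact expansion
path can differ by target and optimisation level.)

    /* A naturally sized 32-bit field that starts on a byte boundary
       but not necessarily on a 32-bit boundary.  */
    struct s
    {
      unsigned int tag : 8;
      unsigned int val : 32;	/* bitnum == 8, bitsize == 32 */
    } __attribute__ ((packed));

    unsigned int
    read_val (struct s *p)
    {
      /* Where SLOW_UNALIGNED_ACCESS (SImode, MEM_ALIGN) is false, the
	 predicate allows this to be a single (possibly unaligned) SImode
	 load.  Otherwise the condition is now phrased as
	 bitnum % GET_MODE_ALIGNMENT (mode) == 0 instead of
	 bitnum % bitsize == 0, which matters on targets where a mode's
	 alignment is smaller than its size.  */
      return p->val;
    }

The point of testing against GET_MODE_ALIGNMENT rather than the bitsize is
exactly the case Eric describes above: a target where a 32-bit mode only
requires 16-bit alignment.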