From patchwork Thu Oct 18 21:19:55 2012
X-Patchwork-Submitter: Richard Sandiford
X-Patchwork-Id: 192470
From: Richard Sandiford
To: gcc-patches@gcc.gnu.org
Mail-Followup-To: gcc-patches@gcc.gnu.org, rdsandiford@googlemail.com
Subject: Re: Tidy extract_bit_field_1 & co.
References: <87k3usltoj.fsf@talisman.home>
Date: Thu, 18 Oct 2012 22:19:55 +0100
In-Reply-To: <87k3usltoj.fsf@talisman.home> (Richard Sandiford's message of
 "Sun, 14 Oct 2012 20:50:20 +0100")
Message-ID: <878vb3jx50.fsf@talisman.home>

Richard Sandiford writes:
> Partnering the store_bit_field_1 patch that I just posted, this patch
> tidies up the extract_bit_field code in the same way.
>
> There is one deliberate behavioural change here.  The old code had a
> single check for cases where the extraction could be done as a simple
> move.  It started:
>
>   if (((bitsize >= BITS_PER_WORD && bitsize == GET_MODE_BITSIZE (mode)
>         && bitpos % BITS_PER_WORD == 0)
>        || (mode1 != BLKmode
>            /* ??? The big endian test here is wrong.  This is correct
>               if the value is in a register, and if mode_for_size is not
>               the same mode as op0.  This causes us to get unnecessarily
>               inefficient code from the Thumb port when -mbig-endian.  */
>            && (BYTES_BIG_ENDIAN
>                ? bitpos + bitsize == BITS_PER_WORD
>                : bitpos == 0)))
>
> The BYTES_BIG_ENDIAN check didn't make sense for memory operands though,
> because bitpos was based on byte units in that case.  That might well be
> what the comment was complaining about; I'm not sure.
>
> Also, I made the MODE1 computation take failures of mode_for_size
> into account.
>
> Tested on x86_64-linux-gnu, powerpc64-linux-gnu, mipsisa64-elf (both -EL
> and -EB) and mipsisa32-elf (also both -EL and -EB).  OK to install?

Here's a version with the corresponding fixes from Eric's review of the
store_bit_field_1 patch.  Tested as before.

gcc/
	* expmed.c (store_split_bit_field): Update the calls to
	extract_fixed_bit_field.  In the big-endian case, always use
	the mode of VALUE to count the number of significant bits.
	(extract_bit_field_1): Remove unit, offset, bitpos and
	byte_offset from the outermost scope.  Express conditions in
	terms of bitnum rather than offset, bitpos and byte_offset.
	Move the computation of MODE1 to the block that needs it.
	Use MODE unless the TMODE-based mode_for_size calculation
	succeeds.  Split the plain move cases into two, one for memory
	accesses and one for register accesses.  Generalize the memory
	case, freeing it from the old register-based endian checks.
	Move the INT_MODE calculation above the code that needs it.
	Use simplify_gen_subreg to handle multiword OP0s.  If the field
	still spans several words, pass it directly to
	extract_split_bit_field.  Assume after that point that both
	targets and register sources fit within a word.  Replace
	x-prefixed variables with non-prefixed forms.  Compute the
	bitpos for ext(z)v register operands directly in the chosen
	unit size, rather than going through an intermediate
	BITS_PER_WORD unit size.  Simplify the containment check
	used when forcing OP0 into a register.  Update the call to
	extract_fixed_bit_field.
	(extract_fixed_bit_field): Replace the bitpos and offset
	parameters with a single bitnum parameter, of the same form
	as extract_bit_field.  Assume that OP0 contains the full field.
	Simplify the memory offset calculation and containment check
	for volatile bitfields.  Make the offset explicit when volatile
	bitfields force a misaligned access.  Remove WARNED and fix
	long lines.  Assert that the processed OP0 has an integral mode.
	(extract_split_bit_field): Update the call to
	extract_fixed_bit_field.
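As an aside for readers following the new subreg fast path: the condition it
relies on (the least significant bit of the field being the least significant
bit of OP0 or of a word of OP0) can be illustrated with a small standalone
predicate.  This is only a sketch of the idea, not the lowpart_bit_field_p
helper the patch calls (that helper belongs to the companion
store_bit_field_1 changes); the function name and free-standing parameters
below are made up for the example.

    #include <stdbool.h>

    /* Illustration only: true if a field of BITSIZE bits starting at bit
       BITNUM of a structure has its least significant bit at the least
       significant bit of a WORD_BITS-wide word.  BITNUM counts from the
       msb end when BYTES_BIG_ENDIAN is true and from the lsb end otherwise,
       and the field is assumed not to cross a word boundary.  */
    static bool
    example_lowpart_of_word_p (unsigned int bitnum, unsigned int bitsize,
                               unsigned int word_bits, bool bytes_big_endian)
    {
      if (bitnum % word_bits + bitsize > word_bits)
        /* The field crosses a word boundary; handled elsewhere.  */
        return false;
      if (bytes_big_endian)
        /* Big-endian: the field ends at the word's lsb when it runs up
           to the end of the word.  */
        return (bitnum + bitsize) % word_bits == 0;
      /* Little-endian: the field must start at the word's lsb.  */
      return bitnum % word_bits == 0;
    }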
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	2012-10-18 19:10:29.268718181 +0100
+++ gcc/expmed.c	2012-10-18 19:13:24.134708442 +0100
@@ -57,7 +57,6 @@ static void store_split_bit_field (rtx,
 				   rtx);
 static rtx extract_fixed_bit_field (enum machine_mode, rtx,
 				    unsigned HOST_WIDE_INT,
-				    unsigned HOST_WIDE_INT,
 				    unsigned HOST_WIDE_INT, rtx, int, bool);
 static rtx mask_rtx (enum machine_mode, int, int, int);
 static rtx lshift_value (enum machine_mode, rtx, int, int);
@@ -1129,28 +1128,21 @@ store_split_bit_field (rtx op0, unsigned
 
       if (BYTES_BIG_ENDIAN)
 	{
-	  int total_bits;
-
-	  /* We must do an endian conversion exactly the same way as it is
-	     done in extract_bit_field, so that the two calls to
-	     extract_fixed_bit_field will have comparable arguments.  */
-	  if (!MEM_P (value) || GET_MODE (value) == BLKmode)
-	    total_bits = BITS_PER_WORD;
-	  else
-	    total_bits = GET_MODE_BITSIZE (GET_MODE (value));
-
 	  /* Fetch successively less significant portions.  */
 	  if (CONST_INT_P (value))
 	    part = GEN_INT (((unsigned HOST_WIDE_INT) (INTVAL (value))
 			     >> (bitsize - bitsdone - thissize))
 			    & (((HOST_WIDE_INT) 1 << thissize) - 1));
 	  else
-	    /* The args are chosen so that the last part includes the
-	       lsb.  Give extract_bit_field the value it needs (with
-	       endianness compensation) to fetch the piece we want.  */
-	    part = extract_fixed_bit_field (word_mode, value, 0, thissize,
-					    total_bits - bitsize + bitsdone,
-					    NULL_RTX, 1, false);
+	    {
+	      int total_bits = GET_MODE_BITSIZE (GET_MODE (value));
+	      /* The args are chosen so that the last part includes the
+		 lsb.  Give extract_bit_field the value it needs (with
+		 endianness compensation) to fetch the piece we want.  */
+	      part = extract_fixed_bit_field (word_mode, value, thissize,
+					      total_bits - bitsize + bitsdone,
+					      NULL_RTX, 1, false);
+	    }
 	}
       else
 	{
@@ -1160,7 +1152,7 @@ store_split_bit_field (rtx op0, unsigned
 			     >> bitsdone)
 			    & (((HOST_WIDE_INT) 1 << thissize) - 1));
 	  else
-	    part = extract_fixed_bit_field (word_mode, value, 0, thissize,
+	    part = extract_fixed_bit_field (word_mode, value, thissize,
 					    bitsdone, NULL_RTX, 1, false);
 	}
 
@@ -1241,14 +1233,10 @@ extract_bit_field_1 (rtx str_rtx, unsign
 		     enum machine_mode mode, enum machine_mode tmode,
 		     bool fallback_p)
 {
-  unsigned int unit
-    = (MEM_P (str_rtx)) ? BITS_PER_UNIT : BITS_PER_WORD;
-  unsigned HOST_WIDE_INT offset, bitpos;
   rtx op0 = str_rtx;
   enum machine_mode int_mode;
   enum machine_mode ext_mode;
   enum machine_mode mode1;
-  int byte_offset;
 
   if (tmode == VOIDmode)
     tmode = mode;
@@ -1366,37 +1354,10 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	}
     }
 
-  /* Extraction of a full-word or multi-word value from a structure
-     in a register or aligned memory can be done with just a SUBREG.
-     A subword value in the least significant part of a register
-     can also be extracted with a SUBREG.  For this, we need the
-     byte offset of the value in op0.  */
-
-  bitpos = bitnum % unit;
-  offset = bitnum / unit;
-  byte_offset = bitpos / BITS_PER_UNIT + offset * UNITS_PER_WORD;
-
-  /* If OP0 is a register, BITPOS must count within a word.
-     But as we have it, it counts within whatever size OP0 now has.
-     On a bigendian machine, these are not the same, so convert.  */
-  if (BYTES_BIG_ENDIAN
-      && !MEM_P (op0)
-      && unit > GET_MODE_BITSIZE (GET_MODE (op0)))
-    bitpos += unit - GET_MODE_BITSIZE (GET_MODE (op0));
-
   /* ??? We currently assume TARGET is at least as big as BITSIZE.
      If that's wrong, the solution is to test for it and set TARGET to 0
     if needed.  */
 
-  /* Only scalar integer modes can be converted via subregs.  There is an
-     additional problem for FP modes here in that they can have a precision
-     which is different from the size.  mode_for_size uses precision, but
-     we want a mode based on the size, so we must avoid calling it for FP
-     modes.  */
-  mode1 = (SCALAR_INT_MODE_P (tmode)
-	   ? mode_for_size (bitsize, GET_MODE_CLASS (tmode), 0)
-	   : mode);
-
   /* If the bitfield is volatile, we need to make sure the access
      remains on a type-aligned boundary.  */
   if (GET_CODE (op0) == MEM
@@ -1405,39 +1366,48 @@ extract_bit_field_1 (rtx str_rtx, unsign
       && flag_strict_volatile_bitfields > 0)
     goto no_subreg_mode_swap;
 
-  if (((bitsize >= BITS_PER_WORD && bitsize == GET_MODE_BITSIZE (mode)
-	&& bitpos % BITS_PER_WORD == 0)
-       || (mode1 != BLKmode
-	   /* ??? The big endian test here is wrong.  This is correct
-	      if the value is in a register, and if mode_for_size is not
-	      the same mode as op0.  This causes us to get unnecessarily
-	      inefficient code from the Thumb port when -mbig-endian.  */
-	   && (BYTES_BIG_ENDIAN
-	       ? bitpos + bitsize == BITS_PER_WORD
-	       : bitpos == 0)))
-      && ((!MEM_P (op0)
-	   && TRULY_NOOP_TRUNCATION_MODES_P (mode1, GET_MODE (op0))
-	   && GET_MODE_SIZE (mode1) != 0
-	   && byte_offset % GET_MODE_SIZE (mode1) == 0)
-	  || (MEM_P (op0)
-	      && (! SLOW_UNALIGNED_ACCESS (mode, MEM_ALIGN (op0))
-		  || (offset * BITS_PER_UNIT % bitsize == 0
-		      && MEM_ALIGN (op0) % bitsize == 0)))))
-    {
-      if (MEM_P (op0))
-	op0 = adjust_bitfield_address (op0, mode1, offset);
-      else if (mode1 != GET_MODE (op0))
-	{
-	  rtx sub = simplify_gen_subreg (mode1, op0, GET_MODE (op0),
-					 byte_offset);
-	  if (sub == NULL)
-	    goto no_subreg_mode_swap;
-	  op0 = sub;
-	}
-      if (mode1 != mode)
-	return convert_to_mode (tmode, op0, unsignedp);
-      return op0;
+  /* Only scalar integer modes can be converted via subregs.  There is an
+     additional problem for FP modes here in that they can have a precision
+     which is different from the size.  mode_for_size uses precision, but
+     we want a mode based on the size, so we must avoid calling it for FP
+     modes.  */
+  mode1 = mode;
+  if (SCALAR_INT_MODE_P (tmode))
+    {
+      enum machine_mode try_mode = mode_for_size (bitsize,
+						  GET_MODE_CLASS (tmode), 0);
+      if (try_mode != BLKmode)
+	mode1 = try_mode;
+    }
+  gcc_assert (mode1 != BLKmode);
+
+  /* Extraction of a full MODE1 value can be done with a subreg as long
+     as the least significant bit of the value is the least significant
+     bit of either OP0 or a word of OP0.  */
+  if (!MEM_P (op0)
+      && lowpart_bit_field_p (bitnum, bitsize, GET_MODE (op0))
+      && bitsize == GET_MODE_BITSIZE (mode1)
+      && TRULY_NOOP_TRUNCATION_MODES_P (mode1, GET_MODE (op0)))
+    {
+      rtx sub = simplify_gen_subreg (mode1, op0, GET_MODE (op0),
+				     bitnum / BITS_PER_UNIT);
+      if (sub)
+	return convert_extracted_bit_field (sub, mode, tmode, unsignedp);
+    }
+
+  /* Extraction of a full MODE1 value can be done with a load as long as
+     the field is on a byte boundary and is sufficiently aligned.  */
+  if (MEM_P (op0)
+      && bitnum % BITS_PER_UNIT == 0
+      && bitsize == GET_MODE_BITSIZE (mode1)
+      && (!SLOW_UNALIGNED_ACCESS (mode1, MEM_ALIGN (op0))
+	  || (bitnum % bitsize == 0
+	      && MEM_ALIGN (op0) % bitsize == 0)))
+    {
+      op0 = adjust_bitfield_address (op0, mode1, bitnum / BITS_PER_UNIT);
+      return convert_extracted_bit_field (op0, mode, tmode, unsignedp);
     }
+
  no_subreg_mode_swap:
 
   /* Handle fields bigger than a word.  */
@@ -1518,35 +1488,25 @@ extract_bit_field_1 (rtx str_rtx, unsign
 					    GET_MODE_BITSIZE (mode) - bitsize,
 					    NULL_RTX, 0);
     }
-  /* From here on we know the desired field is smaller than a word.  */
-
-  /* Check if there is a correspondingly-sized integer field, so we can
-     safely extract it as one size of integer, if necessary; then
-     truncate or extend to the size that is wanted; then use SUBREGs or
-     convert_to_mode to get one of the modes we really wanted.  */
-
-  int_mode = int_mode_for_mode (tmode);
-  if (int_mode == BLKmode)
-    int_mode = int_mode_for_mode (mode);
-  /* Should probably push op0 out to memory and then do a load.  */
-  gcc_assert (int_mode != BLKmode);
-
-  /* OFFSET is the number of words or bytes (UNIT says which)
-     from STR_RTX to the first word or byte containing part of the field.  */
-  if (!MEM_P (op0))
+
+  /* If OP0 is a multi-word register, narrow it to the affected word.
+     If the region spans two words, defer to extract_split_bit_field.  */
+  if (!MEM_P (op0) && GET_MODE_SIZE (GET_MODE (op0)) > UNITS_PER_WORD)
     {
-      if (offset != 0
-	  || GET_MODE_SIZE (GET_MODE (op0)) > UNITS_PER_WORD)
+      op0 = simplify_gen_subreg (word_mode, op0, GET_MODE (op0),
+				 bitnum / BITS_PER_WORD * UNITS_PER_WORD);
+      bitnum %= BITS_PER_WORD;
+      if (bitnum + bitsize > BITS_PER_WORD)
 	{
-	  if (!REG_P (op0))
-	    op0 = copy_to_reg (op0);
-	  op0 = gen_rtx_SUBREG (mode_for_size (BITS_PER_WORD, MODE_INT, 0),
-				op0, (offset * UNITS_PER_WORD));
+	  if (!fallback_p)
+	    return NULL_RTX;
+	  target = extract_split_bit_field (op0, bitsize, bitnum, unsignedp);
+	  return convert_extracted_bit_field (target, mode, tmode, unsignedp);
 	}
-      offset = 0;
     }
 
-  /* Now OFFSET is nonzero only for memory operands.  */
+  /* From here on we know the desired field is smaller than a word.
+     If OP0 is a register, it too fits within a word.  */
+
   ext_mode = mode_for_extraction (unsignedp ? EP_extzv : EP_extv, 0);
   if (ext_mode != MAX_MACHINE_MODE
       && bitsize > 0
@@ -1557,30 +1517,34 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	   && flag_strict_volatile_bitfields > 0)
       /* If op0 is a register, we need it in EXT_MODE to make it
 	 acceptable to the format of ext(z)v.  */
-      && !(GET_CODE (op0) == SUBREG && GET_MODE (op0) != ext_mode)
-      && !((REG_P (op0) || GET_CODE (op0) == SUBREG)
-	   && (bitsize + bitpos > GET_MODE_BITSIZE (ext_mode))))
+      && !(GET_CODE (op0) == SUBREG && GET_MODE (op0) != ext_mode))
     {
       struct expand_operand ops[4];
-      unsigned HOST_WIDE_INT xbitpos = bitpos, xoffset = offset;
+      unsigned HOST_WIDE_INT bitpos = bitnum;
       rtx xop0 = op0;
       rtx xtarget = target;
       rtx xspec_target = target;
       rtx xspec_target_subreg = 0;
+      unsigned unit = GET_MODE_BITSIZE (ext_mode);
 
       /* If op0 is a register, we need it in EXT_MODE to make it
 	 acceptable to the format of ext(z)v.  */
       if (REG_P (xop0) && GET_MODE (xop0) != ext_mode)
 	xop0 = gen_lowpart_SUBREG (ext_mode, xop0);
 
-      if (MEM_P (xop0))
-	/* Get ref to first byte containing part of the field.  */
-	xop0 = adjust_bitfield_address (xop0, byte_mode, xoffset);
-      /* Now convert from counting within UNIT to counting in EXT_MODE.  */
-      if (BYTES_BIG_ENDIAN && !MEM_P (xop0))
-	xbitpos += GET_MODE_BITSIZE (ext_mode) - unit;
-
-      unit = GET_MODE_BITSIZE (ext_mode);
+      if (MEM_P (xop0))
+	{
+	  /* Get a reference to the first byte of the field.  */
+	  xop0 = adjust_bitfield_address (xop0, byte_mode,
+					  bitpos / BITS_PER_UNIT);
+	  bitpos %= BITS_PER_UNIT;
+	}
+      else
+	{
+	  /* Convert from counting within OP0 to counting in EXT_MODE.  */
+	  if (BYTES_BIG_ENDIAN)
+	    bitpos += unit - GET_MODE_BITSIZE (GET_MODE (op0));
+	}
 
       /* If BITS_BIG_ENDIAN is zero on a BYTES_BIG_ENDIAN machine, we count
 	 "backwards" from the size of the unit we are extracting from.
@@ -1588,7 +1552,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	 BYTES/BITS_BIG_ENDIAN machine.  */
 
       if (BITS_BIG_ENDIAN != BYTES_BIG_ENDIAN)
-	xbitpos = unit - bitsize - xbitpos;
+	bitpos = unit - bitsize - bitpos;
 
       if (xtarget == 0)
 	xtarget = xspec_target = gen_reg_rtx (tmode);
@@ -1614,7 +1578,7 @@ extract_bit_field_1 (rtx str_rtx, unsign
       create_output_operand (&ops[0], xtarget, ext_mode);
       create_fixed_operand (&ops[1], xop0);
       create_integer_operand (&ops[2], bitsize);
-      create_integer_operand (&ops[3], xbitpos);
+      create_integer_operand (&ops[3], bitpos);
       if (maybe_expand_insn (unsignedp ? CODE_FOR_extzv : CODE_FOR_extv,
 			     4, ops))
 	{
@@ -1653,26 +1617,25 @@ extract_bit_field_1 (rtx str_rtx, unsign
 	  && !(SLOW_UNALIGNED_ACCESS (bestmode, MEM_ALIGN (op0))
 	       && GET_MODE_BITSIZE (bestmode) > MEM_ALIGN (op0)))
 	{
-	  unsigned HOST_WIDE_INT xoffset, xbitpos;
+	  unsigned HOST_WIDE_INT offset, bitpos;
 
 	  /* Compute the offset as a multiple of this unit,
	     counting in bytes.  */
-	  unit = GET_MODE_BITSIZE (bestmode);
-	  xoffset = (bitnum / unit) * GET_MODE_SIZE (bestmode);
-	  xbitpos = bitnum % unit;
+	  unsigned int unit = GET_MODE_BITSIZE (bestmode);
+	  offset = (bitnum / unit) * GET_MODE_SIZE (bestmode);
+	  bitpos = bitnum % unit;
 
 	  /* Make sure the register is big enough for the whole field.  */
-	  if (xoffset * BITS_PER_UNIT + unit
-	      >= offset * BITS_PER_UNIT + bitsize)
+	  if (bitpos + bitsize <= unit)
 	    {
 	      rtx last, result, xop0;
 
 	      last = get_last_insn ();
 
 	      /* Fetch it to a register in that size.  */
-	      xop0 = adjust_bitfield_address (op0, bestmode, xoffset);
+	      xop0 = adjust_bitfield_address (op0, bestmode, offset);
 	      xop0 = force_reg (bestmode, xop0);
-	      result = extract_bit_field_1 (xop0, bitsize, xbitpos,
+	      result = extract_bit_field_1 (xop0, bitsize, bitpos,
 					    unsignedp, packedp, target,
 					    mode, tmode, false);
 	      if (result)
@@ -1686,8 +1649,16 @@ extract_bit_field_1 (rtx str_rtx, unsign
 
   if (!fallback_p)
     return NULL;
 
-  target = extract_fixed_bit_field (int_mode, op0, offset, bitsize,
-				    bitpos, target, unsignedp, packedp);
+  /* Find a correspondingly-sized integer field, so we can apply
+     shifts and masks to it.  */
+  int_mode = int_mode_for_mode (tmode);
+  if (int_mode == BLKmode)
+    int_mode = int_mode_for_mode (mode);
+  /* Should probably push op0 out to memory and then do a load.  */
+  gcc_assert (int_mode != BLKmode);
+
+  target = extract_fixed_bit_field (int_mode, op0, bitsize, bitnum,
+				    target, unsignedp, packedp);
   return convert_extracted_bit_field (target, mode, tmode, unsignedp);
 }
 
@@ -1717,16 +1688,8 @@ extract_bit_field (rtx str_rtx, unsigned
 				 target, mode, tmode, true);
 }
 
-/* Extract a bit field using shifts and boolean operations
-   Returns an rtx to represent the value.
-   OP0 addresses a register (word) or memory (byte).
-   BITPOS says which bit within the word or byte the bit field starts in.
-   OFFSET says how many bytes farther the bit field starts;
-     it is 0 if OP0 is a register.
-   BITSIZE says how many bits long the bit field is.
-     (If OP0 is a register, it may be narrower than a full word,
-      but BITPOS still counts within a full word,
-      which is significant on bigendian machines.)
+/* Use shifts and boolean operations to extract a field of BITSIZE bits
+   from bit BITNUM of OP0.
 
    UNSIGNEDP is nonzero for an unsigned bit field (don't sign-extend value).
   PACKEDP is true if the field has the packed attribute.
@@ -1737,21 +1700,13 @@ extract_bit_field (rtx str_rtx, unsigned
 
 static rtx
 extract_fixed_bit_field (enum machine_mode tmode, rtx op0,
-			 unsigned HOST_WIDE_INT offset,
 			 unsigned HOST_WIDE_INT bitsize,
-			 unsigned HOST_WIDE_INT bitpos, rtx target,
+			 unsigned HOST_WIDE_INT bitnum, rtx target,
 			 int unsignedp, bool packedp)
 {
-  unsigned int total_bits = BITS_PER_WORD;
   enum machine_mode mode;
 
-  if (GET_CODE (op0) == SUBREG || REG_P (op0))
-    {
-      /* Special treatment for a bit field split across two registers.  */
-      if (bitsize + bitpos > BITS_PER_WORD)
-	return extract_split_bit_field (op0, bitsize, bitpos, unsignedp);
-    }
-  else
+  if (MEM_P (op0))
     {
       /* Get the proper mode to use for this field.  We want a mode that
 	 includes the entire field.  If such a mode would be larger than
@@ -1768,105 +1723,89 @@ extract_fixed_bit_field (enum machine_mo
 	    mode = tmode;
 	}
       else
-	mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT, 0, 0,
+	mode = get_best_mode (bitsize, bitnum, 0, 0,
 			      MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
 
       if (mode == VOIDmode)
 	/* The only way this should occur is if the field spans word
 	   boundaries.  */
-	return extract_split_bit_field (op0, bitsize,
-					bitpos + offset * BITS_PER_UNIT,
-					unsignedp);
-
-      total_bits = GET_MODE_BITSIZE (mode);
-
-      /* Make sure bitpos is valid for the chosen mode.  Adjust BITPOS to
-	 be in the range 0 to total_bits-1, and put any excess bytes in
-	 OFFSET.  */
-      if (bitpos >= total_bits)
-	{
-	  offset += (bitpos / total_bits) * (total_bits / BITS_PER_UNIT);
-	  bitpos -= ((bitpos / total_bits) * (total_bits / BITS_PER_UNIT)
		     * BITS_PER_UNIT);
-	}
-
-      /* If we're accessing a volatile MEM, we can't do the next
-	 alignment step if it results in a multi-word access where we
-	 otherwise wouldn't have one.  So, check for that case
-	 here.  */
+	return extract_split_bit_field (op0, bitsize, bitnum, unsignedp);
+
+      unsigned int total_bits = GET_MODE_BITSIZE (mode);
+      HOST_WIDE_INT bit_offset = bitnum - bitnum % total_bits;
+
+      /* If we're accessing a volatile MEM, we can't apply BIT_OFFSET
+	 if it results in a multi-word access where we otherwise wouldn't
+	 have one.  So, check for that case here.  */
       if (MEM_P (op0)
 	  && MEM_VOLATILE_P (op0)
 	  && flag_strict_volatile_bitfields > 0
-	  && bitpos + bitsize <= total_bits
-	  && bitpos + bitsize + (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT > total_bits)
+	  && bitnum % BITS_PER_UNIT + bitsize <= total_bits
+	  && bitnum % GET_MODE_BITSIZE (mode) + bitsize > total_bits)
 	{
 	  if (STRICT_ALIGNMENT)
 	    {
 	      static bool informed_about_misalignment = false;
-	      bool warned;
 
 	      if (packedp)
 		{
 		  if (bitsize == total_bits)
-		    warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-					 "multiple accesses to volatile structure member"
-					 " because of packed attribute");
+		    warning_at (input_location, OPT_fstrict_volatile_bitfields,
+				"multiple accesses to volatile structure"
+				" member because of packed attribute");
 		  else
-		    warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-					 "multiple accesses to volatile structure bitfield"
-					 " because of packed attribute");
+		    warning_at (input_location, OPT_fstrict_volatile_bitfields,
+				"multiple accesses to volatile structure"
+				" bitfield because of packed attribute");
 
-		  return extract_split_bit_field (op0, bitsize,
-						  bitpos + offset * BITS_PER_UNIT,
+		  return extract_split_bit_field (op0, bitsize, bitnum,
 						  unsignedp);
 		}
 
	      if (bitsize == total_bits)
-		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-				     "mis-aligned access used for structure member");
+		warning_at (input_location, OPT_fstrict_volatile_bitfields,
			    "mis-aligned access used for structure member");
 	      else
-		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-				     "mis-aligned access used for structure bitfield");
+		warning_at (input_location, OPT_fstrict_volatile_bitfields,
			    "mis-aligned access used for structure bitfield");
 
-	      if (! informed_about_misalignment && warned)
+	      if (! informed_about_misalignment)
 		{
 		  informed_about_misalignment = true;
 		  inform (input_location,
-			  "when a volatile object spans multiple type-sized locations,"
-			  " the compiler must choose between using a single mis-aligned access to"
-			  " preserve the volatility, or using multiple aligned accesses to avoid"
-			  " runtime faults; this code may fail at runtime if the hardware does"
-			  " not allow this access");
+			  "when a volatile object spans multiple type-sized"
+			  " locations, the compiler must choose between using"
+			  " a single mis-aligned access to preserve the"
+			  " volatility, or using multiple aligned accesses"
+			  " to avoid runtime faults; this code may fail at"
+			  " runtime if the hardware does not allow this"
+			  " access");
 		}
 	    }
+	  bit_offset = bitnum - bitnum % BITS_PER_UNIT;
 	}
-      else
-	{
-
-	  /* Get ref to an aligned byte, halfword, or word containing the field.
-	     Adjust BITPOS to be position within a word,
-	     and OFFSET to be the offset of that word.
-	     Then alter OP0 to refer to that word.  */
-	  bitpos += (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT;
-	  offset -= (offset % (total_bits / BITS_PER_UNIT));
-	}
-
-      op0 = adjust_bitfield_address (op0, mode, offset);
+      op0 = adjust_bitfield_address (op0, mode, bit_offset / BITS_PER_UNIT);
+      bitnum -= bit_offset;
     }
 
   mode = GET_MODE (op0);
+  gcc_assert (SCALAR_INT_MODE_P (mode));
+
+  /* Note that bitsize + bitnum can be greater than GET_MODE_BITSIZE (mode)
+     for invalid input, such as extract equivalent of f5 from
+     gcc.dg/pr48335-2.c.  */
   if (BYTES_BIG_ENDIAN)
-    /* BITPOS is the distance between our msb and that of OP0.
+    /* BITNUM is the distance between our msb and that of OP0.
       Convert it to the distance from the lsb.  */
-    bitpos = total_bits - bitsize - bitpos;
+    bitnum = GET_MODE_BITSIZE (mode) - bitsize - bitnum;
 
-  /* Now BITPOS is always the distance between the field's lsb and that of OP0.
+  /* Now BITNUM is always the distance between the field's lsb and that of OP0.
     We have reduced the big-endian case to the little-endian case.  */
 
   if (unsignedp)
     {
-      if (bitpos)
+      if (bitnum)
 	{
 	  /* If the field does not already start at the lsb,
 	     shift it so it does.  */
@@ -1874,7 +1813,7 @@ extract_fixed_bit_field (enum machine_mo
 	  rtx subtarget = (target != 0 && REG_P (target) ? target : 0);
 	  if (tmode != mode)
 	    subtarget = 0;
-	  op0 = expand_shift (RSHIFT_EXPR, mode, op0, bitpos, subtarget, 1);
+	  op0 = expand_shift (RSHIFT_EXPR, mode, op0, bitnum, subtarget, 1);
 	}
       /* Convert the value to the desired mode.  */
       if (mode != tmode)
@@ -1883,7 +1822,7 @@ extract_fixed_bit_field (enum machine_mo
       /* Unless the msb of the field used to be the msb when we shifted,
 	 mask out the upper bits.  */
 
-      if (GET_MODE_BITSIZE (mode) != bitpos + bitsize)
+      if (GET_MODE_BITSIZE (mode) != bitnum + bitsize)
 	return expand_binop (GET_MODE (op0), and_optab, op0,
 			     mask_rtx (GET_MODE (op0), 0, bitsize, 0),
 			     target, 1, OPTAB_LIB_WIDEN);
@@ -1898,7 +1837,7 @@ extract_fixed_bit_field (enum machine_mo
   for (mode = GET_CLASS_NARROWEST_MODE (MODE_INT); mode != VOIDmode;
        mode = GET_MODE_WIDER_MODE (mode))
-    if (GET_MODE_BITSIZE (mode) >= bitsize + bitpos)
+    if (GET_MODE_BITSIZE (mode) >= bitsize + bitnum)
       {
 	op0 = convert_to_mode (mode, op0, 0);
 	break;
       }
@@ -1907,9 +1846,9 @@ extract_fixed_bit_field (enum machine_mo
   if (mode != tmode)
     target = 0;
 
-  if (GET_MODE_BITSIZE (mode) != (bitsize + bitpos))
+  if (GET_MODE_BITSIZE (mode) != (bitsize + bitnum))
     {
-      int amount = GET_MODE_BITSIZE (mode) - (bitsize + bitpos);
+      int amount = GET_MODE_BITSIZE (mode) - (bitsize + bitnum);
       /* Maybe propagate the target for the shift.  */
       rtx subtarget = (target != 0 && REG_P (target) ? target : 0);
       op0 = expand_shift (LSHIFT_EXPR, mode, op0, amount, subtarget, 1);
@@ -2015,11 +1954,9 @@ extract_split_bit_field (rtx op0, unsign
 
       /* Extract the parts in bit-counting order,
 	 whose meaning is determined by BYTES_PER_UNIT.
-	 OFFSET is in UNITs, and UNIT is in bits.
-	 extract_fixed_bit_field wants offset in bytes.  */
-      part = extract_fixed_bit_field (word_mode, word,
-				      offset * unit / BITS_PER_UNIT,
-				      thissize, thispos, 0, 1, false);
+	 OFFSET is in UNITs, and UNIT is in bits.  */
+      part = extract_fixed_bit_field (word_mode, word, thissize,
+				      offset * unit + thispos, 0, 1, false);
       bitsdone += thissize;
 
       /* Shift this part into place for the result.  */
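A note on the interface change for anyone updating other callers: where the
old extract_fixed_bit_field took a byte OFFSET plus a bit position within
that byte, the new interface takes a single BITNUM.  The rewritten call in
extract_split_bit_field above shows the mapping: the old code passed
"offset * unit / BITS_PER_UNIT" and "thispos" separately, the new code passes
"offset * unit + thispos".  A minimal sketch of that mapping, with a made-up
helper name and BITS_PER_UNIT written out as the usual 8 bits per byte:

    /* Illustration only: form the single BITNUM argument from an old-style
       byte offset and bit position within that byte.  */
    static unsigned int
    example_bitnum (unsigned int byte_offset, unsigned int bitpos)
    {
      const unsigned int bits_per_unit = 8;  /* BITS_PER_UNIT on typical targets.  */
      return byte_offset * bits_per_unit + bitpos;
    }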