Patchwork [4/5] fix bugs with -fstrict-volatile-bitfields and packed structures

Submitter Sandra Loosemore
Date June 16, 2013, 7:08 p.m.
Message ID <51BE0D45.4070005@codesourcery.com>
Permalink /patch/251736/
State New

Comments

Sandra Loosemore - June 16, 2013, 7:08 p.m.
This part of the patch series fixes problems with bad code being emitted 
for unaligned bitfield accesses, as reported in PRs 48784, 56341, and 
56997.  A secondary goal of this patch was making the bitfield store and 
extract code follow similar logic, at least for the parts relating to 
-fstrict-volatile-bitfields handling.

This patch is intended to be applied on top of part 1 (they both touch 
some of the same code).

-Sandra
Sandra Loosemore - June 23, 2013, 4:17 p.m.
On 06/16/2013 01:08 PM, Sandra Loosemore wrote:
> This part of the patch series fixes problems with bad code being emitted
> for unaligned bitfield accesses, as reported in PRs 48784, 56341, and
> 56997.  A secondary goal of this patch was making the bitfield store and
> extract code follow similar logic, at least for the parts relating to
> -fstrict-volatile-bitfield handling.

Is it possible to get this part of the patch series reviewed?  Except 
for the documentation change, it is independent of the controversy 
surrounding part 3 regarding whether the target ABI or C/C++ standard 
should take precedence when they conflict, and is independent of any 
further patches to change the default -fstrict-volatile-bitfields 
setting.  If the rest of the patch is approved, I'll take care to fix up 
invoke.texi to accurately reflect the behavior of the approved patches 
before checking anything in.

http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00911.html

-Sandra
Richard Guenther - June 24, 2013, 12:31 p.m.
On Sun, Jun 23, 2013 at 6:17 PM, Sandra Loosemore
<sandra@codesourcery.com> wrote:
> On 06/16/2013 01:08 PM, Sandra Loosemore wrote:
>>
>> This part of the patch series fixes problems with bad code being emitted
>> for unaligned bitfield accesses, as reported in PRs 48784, 56341, and
>> 56997.  A secondary goal of this patch was making the bitfield store and
>> extract code follow similar logic, at least for the parts relating to
>> -fstrict-volatile-bitfield handling.
>
>
> Is it possible to get this part of the patch series reviewed?  Except for
> the documentation change, it is independent of the controversy surrounding
> part 3 regarding whether the target ABI or C/C++ standard should take
> precedence when they conflict, and is independent of any further patches to
> change the default -fstrict-volatile-bitfields setting.  If the rest of the
> patch is approved, I'll take care to fix up invoke.texi to accurately
> reflect the behavior of the approved patches before checking anything in.
>
> http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00911.html

It looks sensible to me, but I'd like Eric to take a second look, as he is
most familiar with this code.

Thanks,
Richard.

> -Sandra
>
>
Sandra Loosemore - June 24, 2013, 2:38 p.m.
On 06/24/2013 06:31 AM, Richard Biener wrote:
> On Sun, Jun 23, 2013 at 6:17 PM, Sandra Loosemore
> <sandra@codesourcery.com> wrote:
>> On 06/16/2013 01:08 PM, Sandra Loosemore wrote:
>>>
>>> This part of the patch series fixes problems with bad code being emitted
>>> for unaligned bitfield accesses, as reported in PRs 48784, 56341, and
>>> 56997.  A secondary goal of this patch was making the bitfield store and
>>> extract code follow similar logic, at least for the parts relating to
>>> -fstrict-volatile-bitfield handling.
>>
>>
>> Is it possible to get this part of the patch series reviewed?  Except for
>> the documentation change, it is independent of the controversy surrounding
>> part 3 regarding whether the target ABI or C/C++ standard should take
>> precedence when they conflict, and is independent of any further patches to
>> change the default -fstrict-volatile-bitfields setting.  If the rest of the
>> patch is approved, I'll take care to fix up invoke.texi to accurately
>> reflect the behavior of the approved patches before checking anything in.
>>
>> http://gcc.gnu.org/ml/gcc-patches/2013-06/msg00911.html
>
> It looks sensible to me but I'd like to have Eric have a 2nd look as he is most
> familiar with this code.

OK.  Meanwhile there have been some comments in PR56997 indicating that 
this patch still isn't quite right, so I am going to have to tinker with 
it a bit more.

To tell the truth, I've also been somewhat unhappy with the current 
version; I'd much rather restructure the code further so that we check 
for the -fstrict-volatile-bitfields case once at the top instead of 
having to special-case 3 or 4 places deep down in the code to do 
something different.  The current version seemed like a more 
conservative incremental change, but if it's still not correct it might 
be better to bite the bullet on the refactoring.  Anyway, I will spend a 
few more days on it and post a revised patch for review.

-Sandra
Eric Botcazou - June 24, 2013, 3:55 p.m.
> This part of the patch series fixes problems with bad code being emitted
> for unaligned bitfield accesses, as reported in PRs 48784, 56341, and
> 56997.  A secondary goal of this patch was making the bitfield store and
> extract code follow similar logic, at least for the parts relating to
> -fstrict-volatile-bitfield handling.

Thanks for tackling this.  I think we all agree that making the store and 
extract paths follow a similar logic would be a step forward.  Some remarks:

 - the extract_fixed_bit_field changes look good to me (nice cleanup!) modulo 
s/type/mode/ in the added comment,

 - the store_fixed_bit_field changes are less straightforward, as the fallback 
isn't the default behavior (contrary to what the comment says).  I presume that 
this is intended?

 - the extract_bit_field_1 changes look good to me (modulo a superfluous "of 
the field" in the first added comment) but I don't really understand the 
ChangeLog entry; I'd just write "do not short-circuit direct extraction for
flag_strict_volatile_bitfields handling" or something along these lines.

Patch

Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 199963)
+++ gcc/expmed.c	(working copy)
@@ -933,19 +933,35 @@  store_fixed_bit_field (rtx op0, unsigned
 	 a word, we won't be doing the extraction the normal way.
 	 We don't want a mode bigger than the destination.  */
 
-      mode = GET_MODE (op0);
-      if (GET_MODE_BITSIZE (mode) == 0
-	  || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
-	mode = word_mode;
-
       if (MEM_VOLATILE_P (op0)
           && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
 	  && GET_MODE_BITSIZE (GET_MODE (op0)) <= maxbits
 	  && flag_strict_volatile_bitfields > 0)
-	mode = GET_MODE (op0);
+	{
+	  unsigned int modesize;
+
+	  mode = GET_MODE (op0);
+
+	  /* Check for oversized or unaligned field that we can't write
+	     with a single access of the required type, and fall back to
+	     the default behavior.  */
+	  modesize = GET_MODE_BITSIZE (mode);
+	  if (bitnum % BITS_PER_UNIT + bitsize > modesize
+	      || (STRICT_ALIGNMENT
+		  && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize > modesize))
+	    mode = get_best_mode (bitsize, bitnum, 0, 0,
+				  MEM_ALIGN (op0), word_mode, true);
+	}
       else
-	mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
-			      MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
+	{
+	  mode = GET_MODE (op0);
+	  if (GET_MODE_BITSIZE (mode) == 0
+	      || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
+	    mode = word_mode;
+
+	  mode = get_best_mode (bitsize, bitnum, bitregion_start, bitregion_end,
+				MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
+	}
 
       if (mode == VOIDmode)
 	{
@@ -1429,24 +1445,35 @@  extract_bit_field_1 (rtx str_rtx, unsign
      If that's wrong, the solution is to test for it and set TARGET to 0
      if needed.  */
 
-  /* If the bitfield is volatile, we need to make sure the access
-     remains on a type-aligned boundary.  */
+  /* Get the mode of the field to use for atomic access or subreg
+     conversion.  */
+  mode1 = mode;
+
+  /* For the -fstrict-volatile-bitfields case, we must use the mode of the
+     field instead.  */
   if (GET_CODE (op0) == MEM
       && MEM_VOLATILE_P (op0)
       && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
       && flag_strict_volatile_bitfields > 0)
-    goto no_subreg_mode_swap;
+    {
+      if (GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
+	mode1 = GET_MODE (op0);
+      else if (target && GET_MODE_BITSIZE (GET_MODE (target)) > 0)
+	mode1 = GET_MODE (target);
+      else
+	mode1 = tmode;
+    }
 
   /* Only scalar integer modes can be converted via subregs.  There is an
      additional problem for FP modes here in that they can have a precision
      which is different from the size.  mode_for_size uses precision, but
      we want a mode based on the size, so we must avoid calling it for FP
      modes.  */
-  mode1 = mode;
-  if (SCALAR_INT_MODE_P (tmode))
+  else if (SCALAR_INT_MODE_P (tmode))
     {
       enum machine_mode try_mode = mode_for_size (bitsize,
 						  GET_MODE_CLASS (tmode), 0);
+
       if (try_mode != BLKmode)
 	mode1 = try_mode;
     }
@@ -1474,8 +1501,6 @@  extract_bit_field_1 (rtx str_rtx, unsign
       return convert_extracted_bit_field (op0, mode, tmode, unsignedp);
     }
 
- no_subreg_mode_swap:
-
   /* Handle fields bigger than a word.  */
 
   if (bitsize > BITS_PER_WORD)
@@ -1694,12 +1719,24 @@  extract_fixed_bit_field (enum machine_mo
       if (MEM_VOLATILE_P (op0)
 	  && flag_strict_volatile_bitfields > 0)
 	{
+	  unsigned int modesize;
+
 	  if (GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
 	    mode = GET_MODE (op0);
 	  else if (target && GET_MODE_BITSIZE (GET_MODE (target)) > 0)
 	    mode = GET_MODE (target);
 	  else
 	    mode = tmode;
+
+	  /* Check for oversized or unaligned field that we can't read
+	     with a single access of the required type, and fall back to
+	     the default behavior.  */
+	  modesize = GET_MODE_BITSIZE (mode);
+	  if (bitnum % BITS_PER_UNIT + bitsize > modesize
+	      || (STRICT_ALIGNMENT
+		  && bitnum % GET_MODE_ALIGNMENT (mode) + bitsize > modesize))
+	    mode = get_best_mode (bitsize, bitnum, 0, 0,
+				  MEM_ALIGN (op0), word_mode, true);
 	}
       else
 	mode = get_best_mode (bitsize, bitnum, 0, 0,
@@ -1710,26 +1747,7 @@  extract_fixed_bit_field (enum machine_mo
 	   boundaries.  */
 	return extract_split_bit_field (op0, bitsize, bitnum, unsignedp);
 
-      unsigned int total_bits = GET_MODE_BITSIZE (mode);
-      HOST_WIDE_INT bit_offset = bitnum - bitnum % total_bits;
-
-      /* If we're accessing a volatile MEM, we can't apply BIT_OFFSET
-	 if it results in a multi-word access where we otherwise wouldn't
-	 have one.  So, check for that case here.  */
-      if (MEM_P (op0)
-	  && MEM_VOLATILE_P (op0)
-	  && flag_strict_volatile_bitfields > 0
-	  && bitnum % BITS_PER_UNIT + bitsize <= total_bits
-	  && bitnum % GET_MODE_BITSIZE (mode) + bitsize > total_bits)
-	{
-	  /* If the target doesn't support unaligned access, give up and
-	     split the access into two.  */
-	  if (STRICT_ALIGNMENT)
-	    return extract_split_bit_field (op0, bitsize, bitnum, unsignedp);
-	  bit_offset = bitnum - bitnum % BITS_PER_UNIT;
-	}
-      op0 = adjust_bitfield_address (op0, mode, bit_offset / BITS_PER_UNIT);
-      bitnum -= bit_offset;
+      op0 = narrow_bit_field_mem (op0, mode, bitsize, bitnum, &bitnum);
     }
 
   mode = GET_MODE (op0);
Index: gcc/doc/invoke.texi
===================================================================
--- gcc/doc/invoke.texi	(revision 199963)
+++ gcc/doc/invoke.texi	(working copy)
@@ -20887,8 +20887,16 @@  instruction, even though that accesses b
 any portion of the bit-field, or memory-mapped registers unrelated to
 the one being updated.
 
+In some cases, such as when the @code{packed} attribute is applied to a 
+structure field, it may not be possible to access the field with a single
+read or write that is correctly aligned for the target machine.  In this
+case GCC falls back to generating multiple accesses rather than code that 
+will fault or truncate the result at run time, regardless of whether
+@option{-fstrict-volatile-bitfields} is in effect.
+
 The default value of this option is determined by the application binary
-interface for the target processor.
+interface for the target processor.  In particular, on ARM targets using
+AAPCS, @option{-fstrict-volatile-bitfields} is the default.
 
 @item -fsync-libcalls
 @opindex fsync-libcalls