Another wrong-code problem with -fstrict-volatile-bitfields

Submitter Sandra Loosemore
Date Aug. 27, 2012, 3:24 a.m.
Message ID <503AE86D.6090503@codesourcery.com>
Permalink /patch/180115/
State New

Comments

A couple days ago I wrote:

> While I was grovelling around trying to swap in more state on the
> bitfield store/extract code for the patch rewrite being discussed here:
>
> http://gcc.gnu.org/ml/gcc-patches/2012-08/msg01546.html
>
> I found a reference to PR23623 and discovered that it is broken again, but in
> a different way. On ARM EABI, the attached test case correctly does
> 32-bit reads for the volatile int bit-field with
> -fstrict-volatile-bitfields, but incorrectly does 8-bit writes. I
> thought I should try to track this down and fix it first, as part of
> making the bit-field read and extract paths consistent with each other,
> before trying to figure out a new place to hook in the packedp attribute
> stuff. The patch I previously submitted does not fix the behavior of
> this test case for writing, nor does reverting the older patch that
> added the packedp attribute for reading break that case.
>
> After I tweaked a couple of other places in store_bit_field_1 to handle
> -fstrict-volatile-bitfields consistently with extract_bit_field_1, the
> store now reaches store_fixed_bit_field, paralleling the read case,
> which goes through extract_fixed_bit_field. But there it fails to reach
> the special case for using the declared mode of the field with
> -fstrict-volatile-bitfields because it's been passed bitregion_start = 0
> and bitregion_end = 7 so it thinks it must not write more than 8 bits no
> matter what. Those values are coming in from expand_assignment, which is
> in turn getting them from get_bit_range.
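The attached test case itself is not shown on this page, but a hypothetical sketch of the kind of pattern being described might look like this (names here are illustrative, not from the actual attachment). Under -fstrict-volatile-bitfields on ARM EABI, both the load and the store of f.b should be SImode (32-bit) accesses, because the bit-field's declared type is int; the bug is that the store was being emitted as an 8-bit access:

```c
/* Hypothetical reconstruction of the failing pattern; the real test
   case attached to the message may differ.  */
struct s {
  volatile unsigned int b : 4;  /* declared type is int: 32-bit container */
};

struct s f;

/* With -fstrict-volatile-bitfields this should be a 32-bit
   read-modify-write, but the bug described above makes GCC emit an
   8-bit store because get_bit_range limits the region to bits 0..7.  */
void set_field (void)
{
  f.b = 3;
}

/* The read side already correctly emits a 32-bit load.  */
unsigned int get_field (void)
{
  return f.b;
}
```

Inspecting the generated assembly (e.g. `arm-none-eabi-gcc -O2 -fstrict-volatile-bitfields -S`) would show whether set_field uses ldr/str or ldrb/strb.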

To be more specific, here is a work-in-progress patch that takes a 
brute-force approach: it makes store_fixed_bit_field ignore the provided 
bit region in the strict volatile bitfield case, and it includes the 
other fixes needed to make the bitfield store control flow match that 
for extract.  This fixes the previously-failing test case and passes 
regression testing on arm-none-eabi, but I haven't tried it on any other 
target yet.  Is this a reasonable way to resolve the conflict, or should 
something further up the call chain take care of it?

-Sandra


2012-08-26  Sandra Loosemore  <sandra@codesourcery.com>

	gcc/
	* expmed.c (store_bit_field_1): Make handling of strict
	volatile bitfields parallel that for extraction.
	(store_fixed_bit_field): Ignore bitregion for strict volatile
	bitfields.
	(extract_bit_field_1): Make test for strict volatile bitfields
	explicit.

Patch

Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 190623)
+++ gcc/expmed.c	(working copy)
@@ -475,6 +475,19 @@  store_bit_field_1 (rtx str_rtx, unsigned
 	return true;
     }
 
+  offset = bitnum / unit;
+  bitpos = bitnum % unit;
+  byte_offset = (bitnum % BITS_PER_WORD) / BITS_PER_UNIT
+                + (offset * UNITS_PER_WORD);
+
+  /* If the bitfield is volatile, we need to make sure the access
+     remains on a type-aligned boundary.  */
+  if (GET_CODE (op0) == MEM
+      && MEM_VOLATILE_P (op0)
+      && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
+      && flag_strict_volatile_bitfields > 0)
+    goto no_subreg_mode_swap;
+
   /* If the target is a register, overwriting the entire object, or storing
      a full-word or multi-word field can be done with just a SUBREG.
 
@@ -482,11 +495,6 @@  store_bit_field_1 (rtx str_rtx, unsigned
      done with a simple store.  For targets that support fast unaligned
      memory, any naturally sized, unit aligned field can be done directly.  */
 
-  offset = bitnum / unit;
-  bitpos = bitnum % unit;
-  byte_offset = (bitnum % BITS_PER_WORD) / BITS_PER_UNIT
-                + (offset * UNITS_PER_WORD);
-
   if (bitpos == 0
       && bitsize == GET_MODE_BITSIZE (fieldmode)
       && (!MEM_P (op0)
@@ -581,6 +589,7 @@  store_bit_field_1 (rtx str_rtx, unsigned
 	    return true;
 	}
     }
+ no_subreg_mode_swap:
 
   /* Handle fields bigger than a word.  */
 
@@ -809,8 +818,11 @@  store_bit_field_1 (rtx str_rtx, unsigned
     }
 
   /* If OP0 is a memory, try copying it to a register and seeing if a
-     cheap register alternative is available.  */
-  if (HAVE_insv && MEM_P (op0))
+     cheap register alternative is available.
+     Do not try this optimization for volatile bitfields if
+     -fstrict-volatile-bitfields is in effect.  */
+  if (HAVE_insv && MEM_P (op0)
+      && !(MEM_VOLATILE_P (op0) && flag_strict_volatile_bitfields > 0))
     {
       enum machine_mode bestmode;
       unsigned HOST_WIDE_INT maxbits = MAX_FIXED_MODE_SIZE;
@@ -988,8 +1000,6 @@  store_fixed_bit_field (rtx op0, unsigned
 	mode = word_mode;
 
       if (MEM_VOLATILE_P (op0)
-          && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
-	  && GET_MODE_BITSIZE (GET_MODE (op0)) <= maxbits
 	  && flag_strict_volatile_bitfields > 0)
 	mode = GET_MODE (op0);
       else
@@ -1683,8 +1693,11 @@  extract_bit_field_1 (rtx str_rtx, unsign
     }
 
   /* If OP0 is a memory, try copying it to a register and seeing if a
-     cheap register alternative is available.  */
-  if (ext_mode != MAX_MACHINE_MODE && MEM_P (op0))
+     cheap register alternative is available.
+     Do not try this optimization for volatile bitfields if
+     -fstrict-volatile-bitfields is in effect.  */
+  if (ext_mode != MAX_MACHINE_MODE && MEM_P (op0)
+      && !(MEM_VOLATILE_P (op0) && flag_strict_volatile_bitfields > 0))
     {
       enum machine_mode bestmode;