
patch: honor volatile bitfield types

Message ID xnocfay4fx.fsf@greed.delorie.com
State New

Commit Message

DJ Delorie June 16, 2010, 10:53 p.m. UTC
Mark Mitchell <mark@codesourcery.com> writes:
> This version of the patch is OK if there are no objections in 24 hours.

Committed as attached.  Whew.

2010-06-16  DJ Delorie  <dj@redhat.com>

	* common.opt (-fstrict-volatile-bitfields): new.
	* doc/invoke.texi: Document it.
	* fold-const.c (optimize_bit_field_compare): For volatile
	bitfields, use the field's type to determine the mode, not the
	field's size.
	* expr.c (expand_assignment): Likewise.
	(get_inner_reference): Likewise.
	(expand_expr_real_1): Likewise.
	* expmed.c (store_fixed_bit_field): Likewise.
	(extract_bit_field_1): Likewise.
	(extract_fixed_bit_field): Likewise.

2010-06-16  DJ Delorie  <dj@redhat.com>

	* gcc.target/i386/volatile-bitfields-1.c: New.
	* gcc.target/i386/volatile-bitfields-2.c: New.
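
The effect is easiest to see from the two new i386 tests included in the
patch below: the struct splits its first byte between two char
bitfields, and reading the volatile field b compiles to a byte access to
the char container when -fstrict-volatile-bitfields is in effect
(volatile-bitfields-1.c scans for movzbl; on i586 the byte load is a
movb, as discussed in the comments), but to a plain 32-bit movl covering
the neighboring field when it is not (volatile-bitfields-2.c):

typedef struct {
  char a:1;
  char b:7;
  int c;
} BitStruct;

volatile BitStruct bits;

int foo ()
{
  /* With -fstrict-volatile-bitfields: a byte access to the char
     container; with -fno-strict-volatile-bitfields: a wider movl.  */
  return bits.b;
}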

Comments

Eric Botcazou June 22, 2010, 10:03 a.m. UTC | #1
> 2010-06-16  DJ Delorie  <dj@redhat.com>
>
> 	* gcc.target/i386/volatile-bitfields-1.c: New.

This one doesn't pass on i586:

foo:
        movb    bits, %al
        pushl   %ebp
        sarb    %al
        movl    %esp, %ebp
        movsbl  %al, %eax
        popl    %ebp
        ret
DJ Delorie June 22, 2010, 6:25 p.m. UTC | #2
How about this?

/* { dg-final { scan-assembler "mov[zs]bl.*bits" } } */
Eric Botcazou June 22, 2010, 6:26 p.m. UTC | #3
> How about this?
>
> /* { dg-final { scan-assembler "mov[zs]bl.*bits" } } */

This doesn't match the changed line, does it?
DJ Delorie June 22, 2010, 6:38 p.m. UTC | #4
Doh, you're right.  I was focusing on the movsbl and not the operands.

/* { dg-final { scan-assembler "(movb|mov[zs]bl).*bits" } } */

Are there any other ways to load a byte than movb and mov[sz]bl?
Eric Botcazou June 22, 2010, 6:46 p.m. UTC | #5
> Doh, you're right.  I was focusing on the movsbl and not the operands.
>
> /* { dg-final { scan-assembler "(movb|mov[zs]bl).*bits" } } */
>
> Are there any other ways to load a byte than movb and mov[sz]bl?

In my experience, there are only 2 variants of generated code in this area, 
one for i[345]86 and one for i686+, so that's probably good enough.
Jie Zhang Sept. 3, 2010, 6:34 p.m. UTC | #6
Hi DJ,

On 06/17/2010 06:53 AM, DJ Delorie wrote:
> Committed as attached.  Whew.
>
> 2010-06-16  DJ Delorie<dj@redhat.com>
>
> 	* common.opt (-fstrict-volatile-bitfields): new.
> 	* doc/invoke.texi: Document it.
> 	* fold-const.c (optimize_bit_field_compare): For volatile
> 	bitfields, use the field's type to determine the mode, not the
> 	field's size.
> 	* expr.c (expand_assignment): Likewise.
> 	(get_inner_reference): Likewise.
> 	(expand_expr_real_1): Likewise.
> 	* expmed.c (store_fixed_bit_field): Likewise.
> 	(extract_bit_field_1): Likewise.
> 	(extract_fixed_bit_field): Likewise.
>
> 2010-06-16  DJ Delorie<dj@redhat.com>
>
> 	* gcc.target/i386/volatile-bitfields-1.c: New.
> 	* gcc.target/i386/volatile-bitfields-2.c: New.
>
When I tried to make the ARM EABI target default to 
-fstrict-volatile-bitfields, I found some regressions on ARM EABI caused 
by this flag:

http://gcc.gnu.org/ml/gcc-patches/2010-09/msg00155.html

h8300, sh and rx should also see the same regressions. m32c does not 
suffer from them because it is not a STRICT_ALIGNMENT target.

There is already some discussion under that thread.  After some more 
offline discussion with Joseph, Dan and Paul, we think this flag should 
be fixed to resolve the regressions, i.e. when the packed attribute 
conflicts with -fstrict-volatile-bitfields, GCC should honor the packed 
attribute.  One reason is that -fstrict-volatile-bitfields should not 
cause any regressions in the testsuite.  Another is that this is what 
armcc does, and we had better keep compatibility with armcc in this case.
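
For example (a hypothetical layout, not one of the failing testcases), 
the conflict looks like this:

struct dev_reg {
  char pad;
  volatile unsigned int mode : 4;
} __attribute__((packed));

extern struct dev_reg r;

unsigned int get_mode (void)
{
  /* -fstrict-volatile-bitfields wants a single 32-bit access because
     the declared container type is int, but the packed attribute only
     guarantees byte alignment for the field, so on a STRICT_ALIGNMENT
     target such as ARM the two requests conflict.  Honoring the packed
     attribute means falling back to narrower, aligned accesses.  */
  return r.mode;
}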

BTW, it's a little confusing that -fstrict-volatile-bitfields causes a 
warning on code that does not use any bitfields.


Regards,

Patch

Index: gcc/doc/invoke.texi
===================================================================
--- gcc/doc/invoke.texi	(revision 160864)
+++ gcc/doc/invoke.texi	(working copy)
@@ -17722,12 +17722,38 @@  be thrown between DSOs must be explicitl
 visibility so that the @samp{type_info} nodes will be unified between
 the DSOs.
 
 An overview of these techniques, their benefits and how to use them
 is at @w{@uref{http://gcc.gnu.org/wiki/Visibility}}.
 
+@item -fstrict-volatile-bitfields
+This option should be used if accesses to volatile bitfields (or other
+structure fields, although the compiler usually honors those types
+anyway) should use a single access in a mode of the same size as the
+container's type, aligned to a natural alignment if possible.  For
+example, targets with memory-mapped peripheral registers might require
+all such accesses to be 16 bits wide; with this flag the user could
+declare all peripheral bitfields as ``unsigned short'' (assuming short
+is 16 bits on these targets) to force GCC to use 16 bit accesses
+instead of, perhaps, a more efficient 32 bit access.
+
+If this option is disabled, the compiler will use the most efficient
+instruction.  In the previous example, that might be a 32-bit load
+instruction, even though that will access bytes that do not contain
+any portion of the bitfield, or memory-mapped registers unrelated to
+the one being updated.
+
+If the target requires strict alignment, and honoring the container
+type would require violating this alignment, a warning is issued.
+However, the access happens as the user requested, under the
+assumption that the user knows something about the target hardware
+that GCC is unaware of.
+
+The default value of this option is determined by the application binary
+interface for the target processor.
+
 @end table
 
 @c man end
 
 @node Environment Variables
 @section Environment Variables Affecting GCC
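
As a concrete illustration of the memory-mapped register scenario in the 
documentation text above (a hypothetical device layout; this example is 
not part of the patch):

/* Registers of a device that only tolerates 16-bit accesses.  Declaring
   the bitfields with an unsigned short container and building with
   -fstrict-volatile-bitfields makes GCC use one 16-bit access per read
   or write instead of a possibly wider one.  */
typedef struct {
  volatile unsigned short enable   : 1;
  volatile unsigned short mode     : 3;
  volatile unsigned short reserved : 12;
} ctrl_reg;

#define CTRL ((ctrl_reg *) 0x40001000)  /* hypothetical register address */

void start (void)
{
  CTRL->mode   = 2;  /* 16-bit read-modify-write */
  CTRL->enable = 1;  /* 16-bit read-modify-write */
}
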
Index: gcc/fold-const.c
===================================================================
--- gcc/fold-const.c	(revision 160864)
+++ gcc/fold-const.c	(working copy)
@@ -3460,17 +3460,22 @@  optimize_bit_field_compare (location_t l
 	 || TREE_CODE (rinner) == PLACEHOLDER_EXPR)
        return 0;
    }
 
   /* See if we can find a mode to refer to this field.  We should be able to,
      but fail if we can't.  */
-  nmode = get_best_mode (lbitsize, lbitpos,
-			 const_p ? TYPE_ALIGN (TREE_TYPE (linner))
-			 : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
-				TYPE_ALIGN (TREE_TYPE (rinner))),
-			 word_mode, lvolatilep || rvolatilep);
+  if (lvolatilep
+      && GET_MODE_BITSIZE (lmode) > 0
+      && flag_strict_volatile_bitfields > 0)
+    nmode = lmode;
+  else
+    nmode = get_best_mode (lbitsize, lbitpos,
+			   const_p ? TYPE_ALIGN (TREE_TYPE (linner))
+			   : MIN (TYPE_ALIGN (TREE_TYPE (linner)),
+				  TYPE_ALIGN (TREE_TYPE (rinner))),
+			   word_mode, lvolatilep || rvolatilep);
   if (nmode == VOIDmode)
     return 0;
 
   /* Set signed and unsigned types of the precision of this mode for the
      shifts below.  */
   signed_type = lang_hooks.types.type_for_mode (nmode, 0);
Index: gcc/testsuite/gcc.target/i386/volatile-bitfields-2.c
===================================================================
--- gcc/testsuite/gcc.target/i386/volatile-bitfields-2.c	(revision 0)
+++ gcc/testsuite/gcc.target/i386/volatile-bitfields-2.c	(revision 0)
@@ -0,0 +1,17 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fno-strict-volatile-bitfields" } */
+
+typedef struct {
+  char a:1;
+  char b:7;
+  int c;
+} BitStruct;
+
+volatile BitStruct bits;
+
+int foo ()
+{
+  return bits.b;
+}
+
+/* { dg-final { scan-assembler "movl.*bits" } } */
Index: gcc/testsuite/gcc.target/i386/volatile-bitfields-1.c
===================================================================
--- gcc/testsuite/gcc.target/i386/volatile-bitfields-1.c	(revision 0)
+++ gcc/testsuite/gcc.target/i386/volatile-bitfields-1.c	(revision 0)
@@ -0,0 +1,17 @@ 
+/* { dg-do compile } */
+/* { dg-options "-O2 -fstrict-volatile-bitfields" } */
+
+typedef struct {
+  char a:1;
+  char b:7;
+  int c;
+} BitStruct;
+
+volatile BitStruct bits;
+
+int foo ()
+{
+  return bits.b;
+}
+
+/* { dg-final { scan-assembler "movzbl.*bits" } } */
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 160864)
+++ gcc/expr.c	(working copy)
@@ -4226,12 +4226,19 @@  expand_assignment (tree to, tree from, b
 
       /* If we are going to use store_bit_field and extract_bit_field,
 	 make sure to_rtx will be safe for multiple use.  */
 
       to_rtx = expand_normal (tem);
 
+      /* If the bitfield is volatile, we want to access it in the
+	 field's mode, not the computed mode.  */
+      if (volatilep
+	  && GET_CODE (to_rtx) == MEM
+	  && flag_strict_volatile_bitfields > 0)
+	to_rtx = adjust_address (to_rtx, mode1, 0);
+ 
       if (offset != 0)
 	{
 	  enum machine_mode address_mode;
 	  rtx offset_rtx;
 
 	  if (!MEM_P (to_rtx))
@@ -5987,12 +5994,18 @@  get_inner_reference (tree exp, HOST_WIDE
       tree field = TREE_OPERAND (exp, 1);
       size_tree = DECL_SIZE (field);
       if (!DECL_BIT_FIELD (field))
 	mode = DECL_MODE (field);
       else if (DECL_MODE (field) == BLKmode)
 	blkmode_bitfield = true;
+      else if (TREE_THIS_VOLATILE (exp)
+	       && flag_strict_volatile_bitfields > 0)
+	/* Volatile bitfields should be accessed in the mode of the
+	     field's type, not the mode computed based on the bit
+	     size.  */
+	mode = TYPE_MODE (DECL_BIT_FIELD_TYPE (field));
 
       *punsignedp = DECL_UNSIGNED (field);
     }
   else if (TREE_CODE (exp) == BIT_FIELD_REF)
     {
       size_tree = TREE_OPERAND (exp, 1);
@@ -8963,12 +8976,20 @@  expand_expr_real_1 (tree exp, rtx target
 			 VOIDmode,
 			 (modifier == EXPAND_INITIALIZER
 			  || modifier == EXPAND_CONST_ADDRESS
 			  || modifier == EXPAND_STACK_PARM)
 			 ? modifier : EXPAND_NORMAL);
 
+
+	/* If the bitfield is volatile, we want to access it in the
+	   field's mode, not the computed mode.  */
+	if (volatilep
+	    && GET_CODE (op0) == MEM
+	    && flag_strict_volatile_bitfields > 0)
+	  op0 = adjust_address (op0, mode1, 0);
+
 	mode2
 	  = CONSTANT_P (op0) ? TYPE_MODE (TREE_TYPE (tem)) : GET_MODE (op0);
 
 	/* If we have either an offset, a BLKmode result, or a reference
 	   outside the underlying object, we must force it to memory.
 	   Such a case can occur in Ada if we have unchecked conversion
@@ -9088,12 +9109,15 @@  expand_expr_real_1 (tree exp, rtx target
 	    || REG_P (op0) || GET_CODE (op0) == SUBREG
 	    || (mode1 != BLKmode && ! direct_load[(int) mode1]
 		&& GET_MODE_CLASS (mode) != MODE_COMPLEX_INT
 		&& GET_MODE_CLASS (mode) != MODE_COMPLEX_FLOAT
 		&& modifier != EXPAND_CONST_ADDRESS
 		&& modifier != EXPAND_INITIALIZER)
+	    /* If the field is volatile, we always want an aligned
+	       access.  */
+	    || (volatilep && flag_strict_volatile_bitfields > 0)
 	    /* If the field isn't aligned enough to fetch as a memref,
 	       fetch it as a bit field.  */
 	    || (mode1 != BLKmode
 		&& (((TYPE_ALIGN (TREE_TYPE (tem)) < GET_MODE_ALIGNMENT (mode)
 		      || (bitpos % GET_MODE_ALIGNMENT (mode) != 0)
 		      || (MEM_P (op0)
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 160864)
+++ gcc/expmed.c	(working copy)
@@ -900,14 +900,20 @@  store_fixed_bit_field (rtx op0, unsigned
 	 We don't want a mode bigger than the destination.  */
 
       mode = GET_MODE (op0);
       if (GET_MODE_BITSIZE (mode) == 0
 	  || GET_MODE_BITSIZE (mode) > GET_MODE_BITSIZE (word_mode))
 	mode = word_mode;
-      mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
-			    MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
+
+      if (MEM_VOLATILE_P (op0)
+          && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
+	  && flag_strict_volatile_bitfields > 0)
+	mode = GET_MODE (op0);
+      else
+	mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
+			      MEM_ALIGN (op0), mode, MEM_VOLATILE_P (op0));
 
       if (mode == VOIDmode)
 	{
 	  /* The only way this should occur is if the field spans word
 	     boundaries.  */
 	  store_split_bit_field (op0, bitsize, bitpos + offset * BITS_PER_UNIT,
@@ -1374,12 +1380,20 @@  extract_bit_field_1 (rtx str_rtx, unsign
      we want a mode based on the size, so we must avoid calling it for FP
      modes.  */
   mode1  = (SCALAR_INT_MODE_P (tmode)
 	    ? mode_for_size (bitsize, GET_MODE_CLASS (tmode), 0)
 	    : mode);
 
+  /* If the bitfield is volatile, we need to make sure the access
+     remains on a type-aligned boundary.  */
+  if (GET_CODE (op0) == MEM
+      && MEM_VOLATILE_P (op0)
+      && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
+      && flag_strict_volatile_bitfields > 0)
+    goto no_subreg_mode_swap;
+
   if (((bitsize >= BITS_PER_WORD && bitsize == GET_MODE_BITSIZE (mode)
 	&& bitpos % BITS_PER_WORD == 0)
        || (mode1 != BLKmode
 	   /* ??? The big endian test here is wrong.  This is correct
 	      if the value is in a register, and if mode_for_size is not
 	      the same mode as op0.  This causes us to get unnecessarily
@@ -1726,14 +1740,25 @@  extract_fixed_bit_field (enum machine_mo
   else
     {
       /* Get the proper mode to use for this field.  We want a mode that
 	 includes the entire field.  If such a mode would be larger than
 	 a word, we won't be doing the extraction the normal way.  */
 
-      mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
-			    MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
+      if (MEM_VOLATILE_P (op0)
+	  && flag_strict_volatile_bitfields > 0)
+	{
+	  if (GET_MODE_BITSIZE (GET_MODE (op0)) > 0)
+	    mode = GET_MODE (op0);
+	  else if (target && GET_MODE_BITSIZE (GET_MODE (target)) > 0)
+	    mode = GET_MODE (target);
+	  else
+	    mode = tmode;
+	}
+      else
+	mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
+			      MEM_ALIGN (op0), word_mode, MEM_VOLATILE_P (op0));
 
       if (mode == VOIDmode)
 	/* The only way this should occur is if the field spans word
 	   boundaries.  */
 	return extract_split_bit_field (op0, bitsize,
 					bitpos + offset * BITS_PER_UNIT,
@@ -1748,18 +1773,57 @@  extract_fixed_bit_field (enum machine_mo
 	{
 	  offset += (bitpos / total_bits) * (total_bits / BITS_PER_UNIT);
 	  bitpos -= ((bitpos / total_bits) * (total_bits / BITS_PER_UNIT)
 		     * BITS_PER_UNIT);
 	}
 
-      /* Get ref to an aligned byte, halfword, or word containing the field.
-	 Adjust BITPOS to be position within a word,
-	 and OFFSET to be the offset of that word.
-	 Then alter OP0 to refer to that word.  */
-      bitpos += (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT;
-      offset -= (offset % (total_bits / BITS_PER_UNIT));
+      /* If we're accessing a volatile MEM, we can't do the next
+	 alignment step if it results in a multi-word access where we
+	 otherwise wouldn't have one.  So, check for that case
+	 here.  */
+      if (MEM_P (op0)
+	  && MEM_VOLATILE_P (op0)
+	  && flag_strict_volatile_bitfields > 0
+	  && bitpos + bitsize <= total_bits
+	  && bitpos + bitsize + (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT > total_bits)
+	{
+	  if (STRICT_ALIGNMENT)
+	    {
+	      static bool informed_about_misalignment = false;
+	      bool warned;
+
+	      if (bitsize == total_bits)
+		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
+				     "mis-aligned access used for structure member");
+	      else
+		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
+				     "mis-aligned access used for structure bitfield");
+
+	      if (! informed_about_misalignment && warned)
+		{
+		  informed_about_misalignment = true;
+		  inform (input_location,
+			  "When a volatile object spans multiple type-sized locations,"
+			  " the compiler must choose between using a single mis-aligned access to"
+			  " preserve the volatility, or using multiple aligned accesses to avoid"
+			  " runtime faults.  This code may fail at runtime if the hardware does"
+			  " not allow this access.");
+		}
+	    }
+	}
+      else
+	{
+
+	  /* Get ref to an aligned byte, halfword, or word containing the field.
+	     Adjust BITPOS to be position within a word,
+	     and OFFSET to be the offset of that word.
+	     Then alter OP0 to refer to that word.  */
+	  bitpos += (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT;
+	  offset -= (offset % (total_bits / BITS_PER_UNIT));
+	}
+
       op0 = adjust_address (op0, mode, offset);
     }
 
   mode = GET_MODE (op0);
 
   if (BYTES_BIG_ENDIAN)
Index: gcc/common.opt
===================================================================
--- gcc/common.opt	(revision 160864)
+++ gcc/common.opt	(working copy)
@@ -626,12 +626,16 @@  Common Report Var(flag_loop_interchange)
 Enable Loop Interchange transformation
 
 floop-block
 Common Report Var(flag_loop_block) Optimization
 Enable Loop Blocking transformation
 
+fstrict-volatile-bitfields
+Common Report Var(flag_strict_volatile_bitfields) Init(-1)
+Force bitfield accesses to match their type width
+
 fguess-branch-probability
 Common Report Var(flag_guess_branch_prob) Optimization
 Enable guessing of branch probabilities
 
 ; Nonzero means ignore `#ident' directives.  0 means handle them.
 ; Generate position-independent code for executables if possible