Patchwork: fix wrong-code bug for -fstrict-volatile-bitfields

Submitter Sandra Loosemore
Date Aug. 21, 2012, 8:20 p.m.
Message ID <5033ED82.1010901@codesourcery.com>
Permalink /patch/179158/
State New

Comments

Sandra Loosemore - Aug. 21, 2012, 8:20 p.m.
This patch is a follow-up to the addition of support for 
-fstrict-volatile-bitfields (required by the ARM EABI); see this thread

http://gcc.gnu.org/ml/gcc-patches/2010-10/msg01889.html

for discussion of the original patch.

That patch only addressed the behavior when extracting the value of a 
volatile bit field, but the same problems affect storing values into a 
volatile bit field (or a field of a packed structure, which is 
effectively implemented as a bit field).  This patch makes the code for 
bitfield stores mirror that for bitfield loads.

Although the fix is in target-independent code, it's really for ARM; 
hence the test case, which (without this patch) generates wrong code. 
The code that determines the access width was correctly preserving the 
user-specified width, but then incorrectly fell through to code that 
assumes word mode.
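
For concreteness, a minimal sketch of the kind of access involved (a
hypothetical register layout, an assumption for illustration -- not the
test case attached to this patch):

```c
#include <assert.h>
#include <stdint.h>

/* Under -fstrict-volatile-bitfields, the store in set_mode() must be
   performed as a single 32-bit read-modify-write of the containing
   unit -- the width of the field's declared type -- rather than
   whatever word-mode access the buggy fall-through path would pick.  */
struct regs {
  volatile uint32_t mode   : 4;
  volatile uint32_t status : 4;
  volatile uint32_t        : 24;   /* remaining bits of the 32-bit unit */
};

void set_mode (struct regs *r, unsigned v)
{
  r->mode = v & 0xf;   /* volatile bit-field store */
}
```

With the flag in effect this is expected to lower to one 32-bit load and
one 32-bit store around the bit insertion.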

As well as regression testing on arm-none-eabi, I've bootstrapped and 
regression-tested this patch on x86_64 Linux.  Earlier versions of this 
patch have also been present in our local 4.5 and 4.6 GCC trees for some 
time, so it's been well-tested on a variety of other platforms.  OK to 
check in on mainline?

-Sandra


2012-08-21  Paul Brook  <paul@codesourcery.com>
	    Joseph Myers <joseph@codesourcery.com>
	    Sandra Loosemore  <sandra@codesourcery.com>

	gcc/
	* expr.h (store_bit_field): Add packedp parameter to prototype.
	* expmed.c (store_bit_field, store_bit_field_1): Add packedp
	parameter.  Adjust all callers.
	(warn_misaligned_bitfield): New function, split from
	extract_fixed_bit_field.
	(store_fixed_bit_field): Add packedp parameter.  Fix wrong-code
	behavior for the combination of misaligned bitfield and
	-fstrict-volatile-bitfields.  Use warn_misaligned_bitfield.
	(extract_fixed_bit_field): Use warn_misaligned_bitfield.
	* expr.c: Adjust calls to store_bit_field.
	(expand_assignment): Identify accesses to packed structures.
	(store_field): Add packedp parameter.  Adjust callers.
	* calls.c: Adjust calls to store_bit_field.
	* ifcvt.c: Likewise.
	* config/s390/s390.c: Likewise.

	gcc/testsuite/
	* gcc.target/arm/volatile-bitfields-5.c: New test case.
Richard Guenther - Aug. 22, 2012, 9:01 a.m.
On Tue, Aug 21, 2012 at 10:20 PM, Sandra Loosemore
<sandra@codesourcery.com> wrote:
> This patch is a followup to the addition of support for
> -fstrict-volatile-bitfields (required by the ARM EABI); see this thread
>
> http://gcc.gnu.org/ml/gcc-patches/2010-10/msg01889.html
>
> for discussion of the original patch.
>
> That patch only addressed the behavior when extracting the value of a
> volatile bit field, but the same problems affect storing values into a
> volatile bit field (or a field of a packed structure, which is effectively
> implemented as a bit field).  This patch makes the code for bitfield stores
> mirror that for bitfield loads.
>
> Although the fix is in target-independent code, it's really for ARM; hence
> the test case, which (without this patch) generates wrong code. Code to
> determine the access width was correctly preserving the user-specified
> width, but was incorrectly falling through to code that assumes word mode.
>
> As well as regression testing on arm-none-eabi, I've bootstrapped and
> regression-tested this patch on x86_64 Linux.  Earlier versions of this
> patch have also been present in our local 4.5 and 4.6 GCC trees for some
> time, so it's been well-tested on a variety of other platforms.  OK to check
> in on mainline?

+             bool packedp = false;
+
+             if (TREE_CODE(to) == COMPONENT_REF
+                 && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
+                     || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
+                         && DECL_PACKED (TREE_OPERAND (to, 1)))))
+               packedp = true;

note that this is not a reliable way of determining if a field is packed (yes,
the existing code is broken in the same way).  I realize that you are only
using this for diagnostic purposes, still something worth mentioning.

I dislike passing the packedp flag around throughout the expander and
warning from inside the expander.  First, the flag has a name that can be
confusing (it does not conservatively flag all packed accesses).  Second,
why don't you warn for this from inside the frontend, when the bitfield
access is generated?  This way the diagnostic won't depend on optimization
levels and you avoid uglifying the expander.

Thanks,
Richard.

> -Sandra
>
>
> 2012-08-21  Paul Brook  <paul@codesourcery.com>
>             Joseph Myers <joseph@codesourcery.com>
>             Sandra Loosemore  <sandra@codesourcery.com>
>
>         gcc/
>         * expr.h (store_bit_field): Add packedp parameter to prototype.
>         * expmed.c (store_bit_field, store_bit_field_1): Add packedp
>         parameter.  Adjust all callers.
>         (warn_misaligned_bitfield): New function, split from
>         extract_fixed_bit_field.
>         (store_fixed_bit_field): Add packedp parameter.  Fix wrong-code
>         behavior for the combination of misaligned bitfield and
>         -fstrict-volatile-bitfields.  Use warn_misaligned_bitfield.
>         (extract_fixed_bit_field): Use warn_misaligned_bitfield.
>         * expr.c: Adjust calls to store_bit_field.
>         (expand_assignment): Identify accesses to packed structures.
>         (store_field): Add packedp parameter.  Adjust callers.
>         * calls.c: Adjust calls to store_bit_field.
>         * ifcvt.c: Likewise.
>         * config/s390/s390.c: Likewise.
>
>         gcc/testsuite/
>         * gcc.target/arm/volatile-bitfields-5.c: New test case.
>
Eric Botcazou - Aug. 22, 2012, 9:27 p.m.
> +             bool packedp = false;
> +
> +             if (TREE_CODE(to) == COMPONENT_REF
> +                 && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
> +                     || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
> +                         && DECL_PACKED (TREE_OPERAND (to, 1)))))
> +               packedp = true;
> 
> note that this is not a reliable way of determining if a field is packed
> (yes, the existing code is broken in the same way).  I realize that you
> are only using this for diagnostic purposes, still something worth
> mentioning.

Indeed, ideally DECL_PACKED shouldn't be used outside stor-layout.c and 
dwarf2out.c in the middle-end.

> I dislike passing around the packedp flag throughout the expander and to
> warn inside the expander.  First, the flag has a name that can be confusing
> (it does not conservatively flag all packed accesses).  Second, why don't
> you warn for this from inside the frontend when the bitfield access is
> generated?  This way the diagnostic won't depend on optimization levels
> and you avoid uglifying the expander.

Seconded.  If the -fstrict-volatile-bitfields support is still incomplete, 
let's take this opportunity to clean it up.  Testing DECL_PACKED or issuing
a warning from the RTL expander is to be avoided.
Sandra Loosemore - Aug. 23, 2012, 3:37 a.m.
On 08/22/2012 03:27 PM, Eric Botcazou wrote:
>> +             bool packedp = false;
>> +
>> +             if (TREE_CODE(to) == COMPONENT_REF
>> +                 && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
>> +                     || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
>> +                         && DECL_PACKED (TREE_OPERAND (to, 1)))))
>> +               packedp = true;
>>
>> note that this is not a reliable way of determining if a field is packed
>> (yes, the existing code is broken in the same way).  I realize that you
>> are only using this for diagnostic purposes, still something worth
>> mentioning.
>
> Indeed, ideally DECL_PACKED shouldn't be used outside stor-layout.c and
> dwarf2out.c in the middle-end.
>
>> I dislike passing around the packedp flag throughout the expander and to
>> warn inside the expander.  First, the flag has a name that can be confusing
>> (it does not conservatively flag all packed accesses).  Second, why don't
>> you warn for this from inside the frontend when the bitfield access is
>> generated?  This way the diagnostic won't depend on optimization levels
>> and you avoid uglifying the expander.
>
> Seconded.  If the -fstrict-volatile-bitfields support is still incomplete,
> let's take this opportunity to clean it up.  Testing DECL_PACKED or issuing
> a warning from the RTL expander is to be avoided.

Well, I'm willing to give it a shot, but from a pragmatic point of view 
I've only got about a week left to wind up the current batch of patches 
I've got pending before I'm reassigned to a new project.  Can one of you 
give a more specific outline of how this should be fixed, to increase 
the chances that I can actually implement an acceptable solution in the 
time available?  In particular, the packedp flag that's being passed 
down is *not* just used for diagnostics; it changes code generation, and 
this code is scary enough that I'm worried about preserving the current 
behavior (which IIUC is required by the ARM EABI) if this is 
restructured.  If only the diagnostics should be moved elsewhere, where, 
exactly?  Or is there just a better test that can be used in the 
existing places to determine whether a field is packed?

-Sandra the confused
Richard Guenther - Aug. 23, 2012, 8:58 a.m.
On Thu, Aug 23, 2012 at 5:37 AM, Sandra Loosemore
<sandra@codesourcery.com> wrote:
> On 08/22/2012 03:27 PM, Eric Botcazou wrote:
>>>
>>> +             bool packedp = false;
>>> +
>>> +             if (TREE_CODE(to) == COMPONENT_REF
>>> +                 && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
>>> +                     || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
>>> +                         && DECL_PACKED (TREE_OPERAND (to, 1)))))
>>> +               packedp = true;
>>>
>>> note that this is not a reliable way of determining if a field is packed
>>> (yes, the existing code is broken in the same way).  I realize that you
>>> are only using this for diagnostic purposes, still something worth
>>> mentioning.
>>
>>
>> Indeed, ideally DECL_PACKED shouldn't be used outside stor-layout.c and
>> dwarf2out.c in the middle-end.
>>
>>> I dislike passing around the packedp flag throughout the expander and to
>>> warn inside the expander.  First, the flag has a name that can be
>>> confusing
>>> (it does not conservatively flag all packed accesses).  Second, why don't
>>> you warn for this from inside the frontend when the bitfield access is
>>> generated?  This way the diagnostic won't depend on optimization levels
>>> and you avoid uglifying the expander.
>>
>>
>> Seconded.  If the -fstrict-volatile-bitfields support is still incomplete,
>> let's take this opportunity to clean it up.  Testing DECL_PACKED or
>> issuing
>> a warning from the RTL expander is to be avoided.
>
>
> Well, I'm willing to give it a shot, but from a pragmatic point of view I've
> only got about a week left to wind up the current batch of patches I've got
> pending before I'm reassigned to a new project.  Can one of you give a more
> specific outline of how this should be fixed, to increase the chances that I
> can actually implement an acceptable solution in the time available?  In
> particular, the packedp flag that's being passed down is *not* just used for
> diagnostics; it changes code generation, and this code is scary enough that
> I'm worried about preserving the current behavior (which IIUC is required by
> the ARM EABI) if this is restructured.  If only the diagnostics should be
> moved elsewhere, where, exactly?  Or is there just a better test that can be
> used in the existing places to determine whether a field is packed?

First of all, the warning should probably be issued from stor-layout.c
itself; see other cases where we warn about packed structs.  Yes, that
means you'll get the warning even when there is no access, but you'll only
get it a single time.

As for detecting whether a field is not aligned according to its mode:
that's impossible if you just look at the field or its containing type
(and thus impossible for a warning without false positives / negatives
from stor-layout).  It's also impossible to determine optimally (thus, it
will say "not aligned according to its mode" in more cases than strictly
necessary).  The way you detect such misalignment, given an access to such
a field, is

  get_object_alignment (exp) < GET_MODE_ALIGNMENT (DECL_MODE (field))

when exp is a COMPONENT_REF with TREE_OPERAND (exp, 1) == field
and field has non-BLK mode.

Note that an access a.f with f being a volatile field may be represented
by a volatile MEM_REF[&a, 16] where 16 is the byte offset relative to a.

Now, if you are only interested in bitfields (note, bitfields which fit a mode
are _not_ bitfields as far as optimization passes are concerned and thus
may end up like above), then you probably want to look at
DECL_BIT_FIELD_REPRESENTATIVE of the FIELD_DECL of such field.
DECL_BIT_FIELD_REPRESENTATIVE constrains the offset / size with which said
field is to be accessed, and you should call get_object_alignment with
a COMPONENT_REF using that field instead.  You also want to make
sure DECL_BIT_FIELD_REPRESENTATIVE is computed correctly for
volatile bitfields on ARM (stor-layout computes that).
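
For illustration, a hypothetical source-level layout exhibiting the
misalignment described above (the struct is an assumption, not from the
thread; the GCC-internal checks it corresponds to are get_object_alignment
and GET_MODE_ALIGNMENT as quoted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* "val" has a 32-bit mode but sits at byte offset 1, so an access to it
   is only 1-byte aligned while its mode wants 4-byte alignment.  This
   can only be seen at the access itself, not from the FIELD_DECL or its
   containing type alone.  */
struct __attribute__((packed)) msg {
  uint8_t  tag;
  uint32_t val;   /* misaligned relative to its natural mode */
};

uint32_t read_val (const struct msg *m)
{
  return m->val;   /* GCC emits a safe (possibly piecewise) unaligned read */
}
```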

Richard.

> -Sandra the confused
>
Richard Guenther - Aug. 23, 2012, 9:08 a.m.
On Thu, Aug 23, 2012 at 10:58 AM, Richard Guenther
<richard.guenther@gmail.com> wrote:
> On Thu, Aug 23, 2012 at 5:37 AM, Sandra Loosemore
> <sandra@codesourcery.com> wrote:
>> On 08/22/2012 03:27 PM, Eric Botcazou wrote:
>>>>
>>>> +             bool packedp = false;
>>>> +
>>>> +             if (TREE_CODE(to) == COMPONENT_REF
>>>> +                 && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
>>>> +                     || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
>>>> +                         && DECL_PACKED (TREE_OPERAND (to, 1)))))
>>>> +               packedp = true;
>>>>
>>>> note that this is not a reliable way of determining if a field is packed
>>>> (yes, the existing code is broken in the same way).  I realize that you
>>>> are only using this for diagnostic purposes, still something worth
>>>> mentioning.
>>>
>>>
>>> Indeed, ideally DECL_PACKED shouldn't be used outside stor-layout.c and
>>> dwarf2out.c in the middle-end.
>>>
>>>> I dislike passing around the packedp flag throughout the expander and to
>>>> warn inside the expander.  First, the flag has a name that can be
>>>> confusing
>>>> (it does not conservatively flag all packed accesses).  Second, why don't
>>>> you warn for this from inside the frontend when the bitfield access is
>>>> generated?  This way the diagnostic won't depend on optimization levels
>>>> and you avoid uglifying the expander.
>>>
>>>
>>> Seconded.  If the -fstrict-volatile-bitfields support is still incomplete,
>>> let's take this opportunity to clean it up.  Testing DECL_PACKED or
>>> issuing
>>> a warning from the RTL expander is to be avoided.
>>
>>
>> Well, I'm willing to give it a shot, but from a pragmatic point of view I've
>> only got about a week left to wind up the current batch of patches I've got
>> pending before I'm reassigned to a new project.  Can one of you give a more
>> specific outline of how this should be fixed, to increase the chances that I
>> can actually implement an acceptable solution in the time available?  In
>> particular, the packedp flag that's being passed down is *not* just used for
>> diagnostics; it changes code generation, and this code is scary enough that
>> I'm worried about preserving the current behavior (which IIUC is required by
>> the ARM EABI) if this is restructured.  If only the diagnostics should be
>> moved elsewhere, where, exactly?  Or is there just a better test that can be
>> used in the existing places to determine whether a field is packed?
>
> First of all the warning should be probably issued from stor-layout.c
> itself - see
> other cases where we warn about packed structs.  Yes, that means you'll
> get the warning even when there is no access but you'll only get it a
> single time.
>
> As of detecting whether a field is not aligned according to its mode, well,
> that's impossible if you just look at the field or its containing type (and thus
> for a warning without false positives / negatives from stor-layout).  It's also
> impossible to determine optimally (thus, it will say "not aligned according to
> its mode" in more cases).  The way you detect such misalignment is,
> given an access to such field,
>
>   get_object_alignment (exp) < GET_MODE_ALIGNMENT (DECL_MODE (field))
>
> when exp is a COMPONENT_REF with TREE_OPERAND (exp, 1) == field
> and field has non-BLK mode.
>
> Note that an access a.f with f being a volatile field may be represented
> by a volatile MEM_REF[&a, 16] where 16 is the byte offset relative to a.
>
> Now, if you are only interested in bitfields (note, bitfields which fit a mode
> are _not_ bitfields as far as optimization passes are concerned and thus
> may end up like above), then you probably want to look at
> DECL_BIT_FIELD_REPRESENTATIVE of the FIELD_DECL of such field.
> DECL_BIT_FIELD_REPRESENTATIVE constrains the offset / size said
> field is to be accessed and you should call get_object_alignment with
> a COMPONENT_REF using that field instead.  You also want to make
> sure DECL_BIT_FIELD_REPRESENTATIVE is computed correctly for
> volatile bitfields on ARM (stor-layout computes that).

In fact, you should probably implement code-generation constraints from
within the frontends by, for strict volatile bitfields, emitting loads/stores
using DECL_BIT_FIELD_REPRESENTATIVE (doing read-modify-write
explicitly).  Or maybe you can elaborate on the code-generation effects
of strict-volatile bitfields?

Richard.
Chung-Lin Tang - Aug. 23, 2012, 9:51 a.m.
On 2012/8/23 05:08, Richard Guenther wrote:
>> First of all the warning should be probably issued from stor-layout.c
>> itself - see
>> other cases where we warn about packed structs.  Yes, that means you'll
>> get the warning even when there is no access but you'll only get it a
>> single time.
>>
>> As of detecting whether a field is not aligned according to its mode, well,
>> that's impossible if you just look at the field or its containing type (and thus
>> for a warning without false positives / negatives from stor-layout).  It's also
>> impossible to determine optimally (thus, it will say "not aligned according to
>> its mode" in more cases).  The way you detect such misalignment is,
>> given an access to such field,
>>
>>   get_object_alignment (exp) < GET_MODE_ALIGNMENT (DECL_MODE (field))
>>
>> when exp is a COMPONENT_REF with TREE_OPERAND (exp, 1) == field
>> and field has non-BLK mode.
>>
>> Note that an access a.f with f being a volatile field may be represented
>> by a volatile MEM_REF[&a, 16] where 16 is the byte offset relative to a.

WRT only the code expansion aspects in store_fixed_bit_field(), would a
test of "STRICT_ALIGNMENT && MEM_ALIGN(op0) < GET_MODE_ALIGNMENT(mode)"
be sufficient to detect this case, instead of passing a packedp parameter?

Chung-Lin

>> Now, if you are only interested in bitfields (note, bitfields which fit a mode
>> are _not_ bitfields as far as optimization passes are concerned and thus
>> may end up like above), then you probably want to look at
>> DECL_BIT_FIELD_REPRESENTATIVE of the FIELD_DECL of such field.
>> DECL_BIT_FIELD_REPRESENTATIVE constrains the offset / size said
>> field is to be accessed and you should call get_object_alignment with
>> a COMPONENT_REF using that field instead.  You also want to make
>> sure DECL_BIT_FIELD_REPRESENTATIVE is computed correctly for
>> volatile bitfields on ARM (stor-layout computes that).
> 
> In fact, you should probably implement code-generation constraints from
> within the frontends by, for strict volatile bitfields, emitting loads/stores
> using DECL_BIT_FIELD_REPRESENTATIVE (doing read-modify-write
> explicitely).  Or maybe you can elaborate on the code-generation effects
> of strict-volatile bitfields?
Richard Guenther - Aug. 23, 2012, 10:05 a.m.
On Thu, Aug 23, 2012 at 11:51 AM, Chung-Lin Tang
<cltang@codesourcery.com> wrote:
> On 2012/8/23 05:08, Richard Guenther wrote:
>>> First of all the warning should be probably issued from stor-layout.c
>>> itself - see
>>> other cases where we warn about packed structs.  Yes, that means you'll
>>> get the warning even when there is no access but you'll only get it a
>>> single time.
>>>
>>> As of detecting whether a field is not aligned according to its mode, well,
>>> that's impossible if you just look at the field or its containing type (and thus
>>> for a warning without false positives / negatives from stor-layout).  It's also
>>> impossible to determine optimally (thus, it will say "not aligned according to
>>> its mode" in more cases).  The way you detect such misalignment is,
>>> given an access to such field,
>>>
>>>   get_object_alignment (exp) < GET_MODE_ALIGNMENT (DECL_MODE (field))
>>>
>>> when exp is a COMPONENT_REF with TREE_OPERAND (exp, 1) == field
>>> and field has non-BLK mode.
>>>
>>> Note that an access a.f with f being a volatile field may be represented
>>> by a volatile MEM_REF[&a, 16] where 16 is the byte offset relative to a.
>
> WRT only the code expansion aspects in store_fixed_bit_field(), would a
> test of "STRICT_ALIGNMENT && MEM_ALIGN(op0) < GET_MODE_ALIGNMENT(mode)"
> be sufficient to detect instead of a packedp parameter?

I suppose so.  The code is there to ensure we don't use multiple smaller
accesses, right?  Accesses outside of the field should be prohibited by
properly computing DECL_BIT_FIELD_REPRESENTATIVE; all code specific to
flag_strict_volatile_bitfields, and its effect, should be "moved" to
stor-layout.c instead.

Richard.

> Chung-Lin
>
>>> Now, if you are only interested in bitfields (note, bitfields which fit a mode
>>> are _not_ bitfields as far as optimization passes are concerned and thus
>>> may end up like above), then you probably want to look at
>>> DECL_BIT_FIELD_REPRESENTATIVE of the FIELD_DECL of such field.
>>> DECL_BIT_FIELD_REPRESENTATIVE constrains the offset / size said
>>> field is to be accessed and you should call get_object_alignment with
>>> a COMPONENT_REF using that field instead.  You also want to make
>>> sure DECL_BIT_FIELD_REPRESENTATIVE is computed correctly for
>>> volatile bitfields on ARM (stor-layout computes that).
>>
>> In fact, you should probably implement code-generation constraints from
>> within the frontends by, for strict volatile bitfields, emitting loads/stores
>> using DECL_BIT_FIELD_REPRESENTATIVE (doing read-modify-write
>> explicitely).  Or maybe you can elaborate on the code-generation effects
>> of strict-volatile bitfields?
>
Sandra Loosemore - Aug. 23, 2012, 7:31 p.m.
On 08/23/2012 03:08 AM, Richard Guenther wrote:

> In fact, you should probably implement code-generation constraints from
> within the frontends by, for strict volatile bitfields, emitting loads/stores
> using DECL_BIT_FIELD_REPRESENTATIVE (doing read-modify-write
> explicitely).  Or maybe you can elaborate on the code-generation effects
> of strict-volatile bitfields?

This was documented in the GCC manual at the same time the original 
patch for extracting bit-field values was added:

@item -fstrict-volatile-bitfields
@opindex fstrict-volatile-bitfields
This option should be used if accesses to volatile bit-fields (or other
structure fields, although the compiler usually honors those types
anyway) should use a single access of the width of the
field's type, aligned to a natural alignment if possible.  For
example, targets with memory-mapped peripheral registers might require
all such accesses to be 16 bits wide; with this flag the user could
declare all peripheral bit-fields as @code{unsigned short} (assuming short
is 16 bits on these targets) to force GCC to use 16-bit accesses
instead of, perhaps, a more efficient 32-bit access.

If this option is disabled, the compiler uses the most efficient
instruction.  In the previous example, that might be a 32-bit load
instruction, even though that accesses bytes that do not contain
any portion of the bit-field, or memory-mapped registers unrelated to
the one being updated.

If the target requires strict alignment, and honoring the field
type would require violating this alignment, a warning is issued.
If the field has @code{packed} attribute, the access is done without
honoring the field type.  If the field doesn't have @code{packed}
attribute, the access is done honoring the field type.  In both cases,
GCC assumes that the user knows something about the target hardware
that it is unaware of.

The default value of this option is determined by the application binary
interface for the target processor.
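
A sketch of the declaration pattern the manual's example describes (the
peripheral layout is hypothetical):

```c
#include <assert.h>

/* Declaring the bit-fields with a 16-bit type asks GCC, under
   -fstrict-volatile-bitfields, to use single 16-bit accesses for them
   instead of a possibly more efficient but wider 32-bit access.  */
struct periph {
  volatile unsigned short enable : 1;
  volatile unsigned short irq    : 1;
  volatile unsigned short mode   : 14;
};

void periph_set_mode (struct periph *p, unsigned m)
{
  p->mode = m & 0x3fff;   /* 16-bit read-modify-write of the whole unit */
}
```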

-Sandra
Sandra Loosemore - Aug. 28, 2012, 8:41 p.m.
On 08/23/2012 03:51 AM, Chung-Lin Tang wrote:
>
> WRT only the code expansion aspects in store_fixed_bit_field(), would a
> test of "STRICT_ALIGNMENT && MEM_ALIGN(op0) < GET_MODE_ALIGNMENT(mode)"
> be sufficient to detect instead of a packedp parameter?

As an experiment, I tried putting in an assertion to compare this test 
to the packedp flag at the point in extract_fixed_bit_field where 
packedp is currently being checked, and testing showed the two 
expressions are not equivalent (e.g., on 
gcc.c-torture/execute/20010518-2.c).

-Sandra

Patch

Index: gcc/expr.h
===================================================================
--- gcc/expr.h	(revision 190541)
+++ gcc/expr.h	(working copy)
@@ -693,7 +693,7 @@  extern void store_bit_field (rtx, unsign
 			     unsigned HOST_WIDE_INT,
 			     unsigned HOST_WIDE_INT,
 			     unsigned HOST_WIDE_INT,
-			     enum machine_mode, rtx);
+			     bool, enum machine_mode, rtx);
 extern rtx extract_bit_field (rtx, unsigned HOST_WIDE_INT,
 			      unsigned HOST_WIDE_INT, int, bool, rtx,
 			      enum machine_mode, enum machine_mode);
Index: gcc/expmed.c
===================================================================
--- gcc/expmed.c	(revision 190541)
+++ gcc/expmed.c	(working copy)
@@ -50,7 +50,7 @@  static void store_fixed_bit_field (rtx, 
 				   unsigned HOST_WIDE_INT,
 				   unsigned HOST_WIDE_INT,
 				   unsigned HOST_WIDE_INT,
-				   rtx);
+				   rtx, bool);
 static void store_split_bit_field (rtx, unsigned HOST_WIDE_INT,
 				   unsigned HOST_WIDE_INT,
 				   unsigned HOST_WIDE_INT,
@@ -406,7 +406,7 @@  store_bit_field_1 (rtx str_rtx, unsigned
 		   unsigned HOST_WIDE_INT bitnum,
 		   unsigned HOST_WIDE_INT bitregion_start,
 		   unsigned HOST_WIDE_INT bitregion_end,
-		   enum machine_mode fieldmode,
+		   bool packedp, enum machine_mode fieldmode,
 		   rtx value, bool fallback_p)
 {
   unsigned int unit
@@ -638,7 +638,7 @@  store_bit_field_1 (rtx str_rtx, unsigned
 	  if (!store_bit_field_1 (op0, new_bitsize,
 				  bitnum + bit_offset,
 				  bitregion_start, bitregion_end,
-				  word_mode,
+				  false, word_mode,
 				  value_word, fallback_p))
 	    {
 	      delete_insns_since (last);
@@ -859,7 +859,7 @@  store_bit_field_1 (rtx str_rtx, unsigned
 	  tempreg = copy_to_reg (xop0);
 	  if (store_bit_field_1 (tempreg, bitsize, xbitpos,
 				 bitregion_start, bitregion_end,
-				 fieldmode, orig_value, false))
+				 false, fieldmode, orig_value, false))
 	    {
 	      emit_move_insn (xop0, tempreg);
 	      return true;
@@ -872,7 +872,7 @@  store_bit_field_1 (rtx str_rtx, unsigned
     return false;
 
   store_fixed_bit_field (op0, offset, bitsize, bitpos,
-			 bitregion_start, bitregion_end, value);
+			 bitregion_start, bitregion_end, value, packedp);
   return true;
 }
 
@@ -885,6 +885,8 @@  store_bit_field_1 (rtx str_rtx, unsigned
    These two fields are 0, if the C++ memory model does not apply,
    or we are not interested in keeping track of bitfield regions.
 
+   PACKEDP is true for fields with the packed attribute.
+
    FIELDMODE is the machine-mode of the FIELD_DECL node for this field.  */
 
 void
@@ -892,6 +894,7 @@  store_bit_field (rtx str_rtx, unsigned H
 		 unsigned HOST_WIDE_INT bitnum,
 		 unsigned HOST_WIDE_INT bitregion_start,
 		 unsigned HOST_WIDE_INT bitregion_end,
+		 bool packedp,
 		 enum machine_mode fieldmode,
 		 rtx value)
 {
@@ -924,9 +927,48 @@  store_bit_field (rtx str_rtx, unsigned H
 
   if (!store_bit_field_1 (str_rtx, bitsize, bitnum,
 			  bitregion_start, bitregion_end,
-			  fieldmode, value, true))
+			  packedp, fieldmode, value, true))
     gcc_unreachable ();
 }
+
+static void
+warn_misaligned_bitfield (bool struct_member, bool packedp)
+{
+  static bool informed_about_misalignment = false;
+  bool warned;
+
+  if (packedp)
+    {
+      if (struct_member)
+	warning_at (input_location, OPT_fstrict_volatile_bitfields,
+		    "multiple accesses to volatile structure member"
+		    " because of packed attribute");
+      else
+	warning_at (input_location, OPT_fstrict_volatile_bitfields,
+		    "multiple accesses to volatile structure bitfield"
+		    " because of packed attribute");
+    }
+  else
+    {
+      if (struct_member)
+	warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
+			     "mis-aligned access used for structure member");
+      else
+	warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
+			     "mis-aligned access used for structure bitfield");
+      if (! informed_about_misalignment && warned)
+	{
+	  informed_about_misalignment = true;
+	  inform (input_location,
+		  "When a volatile object spans multiple type-sized"
+		  " locations, the compiler must choose between using a"
+		  " single mis-aligned access to preserve the volatility,"
+		  " or using multiple aligned accesses to avoid runtime"
+		  " faults.  This code may fail at runtime if the hardware"
+		  " does not allow this access.");
+	}
+    }
+}
 
 /* Use shifts and boolean operations to store VALUE
    into a bit field of width BITSIZE
@@ -943,7 +985,8 @@  store_fixed_bit_field (rtx op0, unsigned
 		       unsigned HOST_WIDE_INT bitpos,
 		       unsigned HOST_WIDE_INT bitregion_start,
 		       unsigned HOST_WIDE_INT bitregion_end,
-		       rtx value)
+		       rtx value,
+		       bool packedp)
 {
   enum machine_mode mode;
   unsigned int total_bits = BITS_PER_WORD;
@@ -981,6 +1024,7 @@  store_fixed_bit_field (rtx op0, unsigned
 	 includes the entire field.  If such a mode would be larger than
 	 a word, we won't be doing the extraction the normal way.
 	 We don't want a mode bigger than the destination.  */
+      bool realign = true;
 
       mode = GET_MODE (op0);
       if (GET_MODE_BITSIZE (mode) == 0
@@ -991,7 +1035,25 @@  store_fixed_bit_field (rtx op0, unsigned
           && GET_MODE_BITSIZE (GET_MODE (op0)) > 0
 	  && GET_MODE_BITSIZE (GET_MODE (op0)) <= maxbits
 	  && flag_strict_volatile_bitfields > 0)
-	mode = GET_MODE (op0);
+	{
+	  /* We must use the specified access size.  */
+	  mode = GET_MODE (op0);
+	  total_bits = GET_MODE_BITSIZE (mode);
+	  if (bitpos + bitsize <= total_bits
+	      && bitpos + bitsize 
+		 + (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT
+		 > total_bits)
+	    {
+	      realign = false;
+	      if (STRICT_ALIGNMENT)
+		{
+		  warn_misaligned_bitfield (bitsize == GET_MODE_BITSIZE (mode),
+					    packedp);
+		  if (packedp)
+		    mode = VOIDmode;
+		}
+	    }
+	}
       else
 	mode = get_best_mode (bitsize, bitpos + offset * BITS_PER_UNIT,
 			      bitregion_start, bitregion_end,
@@ -1000,7 +1062,8 @@  store_fixed_bit_field (rtx op0, unsigned
       if (mode == VOIDmode)
 	{
 	  /* The only way this should occur is if the field spans word
-	     boundaries.  */
+	     boundaries, or container boundaries with
+	     -fstrict-volatile-bitfields.  */
 	  store_split_bit_field (op0, bitsize, bitpos + offset * BITS_PER_UNIT,
 				 bitregion_start, bitregion_end, value);
 	  return;
@@ -1018,12 +1081,15 @@  store_fixed_bit_field (rtx op0, unsigned
 		     * BITS_PER_UNIT);
 	}
 
-      /* Get ref to an aligned byte, halfword, or word containing the field.
-	 Adjust BITPOS to be position within a word,
-	 and OFFSET to be the offset of that word.
-	 Then alter OP0 to refer to that word.  */
-      bitpos += (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT;
-      offset -= (offset % (total_bits / BITS_PER_UNIT));
+      if (realign)
+	{
+	  /* Get ref to an aligned byte, halfword, or word containing the field.
+	     Adjust BITPOS to be position within a word,
+	     and OFFSET to be the offset of that word.
+	     Then alter OP0 to refer to that word.  */
+	  bitpos += (offset % (total_bits / BITS_PER_UNIT)) * BITS_PER_UNIT;
+	  offset -= (offset % (total_bits / BITS_PER_UNIT));
+	}
       op0 = adjust_address (op0, mode, offset);
     }
 
@@ -1250,7 +1316,8 @@  store_split_bit_field (rtx op0, unsigned
 	 it is just an out-of-bounds access.  Ignore it.  */
       if (word != const0_rtx)
 	store_fixed_bit_field (word, offset * unit / BITS_PER_UNIT, thissize,
-			       thispos, bitregion_start, bitregion_end, part);
+			       thispos, bitregion_start, bitregion_end, part,
+			       false);
       bitsdone += thissize;
     }
 }
@@ -1857,42 +1924,13 @@  extract_fixed_bit_field (enum machine_mo
 	{
 	  if (STRICT_ALIGNMENT)
 	    {
-	      static bool informed_about_misalignment = false;
-	      bool warned;
-
+	      warn_misaligned_bitfield (bitsize == total_bits, packedp);
 	      if (packedp)
 		{
-		  if (bitsize == total_bits)
-		    warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-					 "multiple accesses to volatile structure member"
-					 " because of packed attribute");
-		  else
-		    warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-					 "multiple accesses to volatile structure bitfield"
-					 " because of packed attribute");
-
 		  return extract_split_bit_field (op0, bitsize,
 						  bitpos + offset * BITS_PER_UNIT,
 						  unsignedp);
 		}
-
-	      if (bitsize == total_bits)
-		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-				     "mis-aligned access used for structure member");
-	      else
-		warned = warning_at (input_location, OPT_fstrict_volatile_bitfields,
-				     "mis-aligned access used for structure bitfield");
-
-	      if (! informed_about_misalignment && warned)
-		{
-		  informed_about_misalignment = true;
-		  inform (input_location,
-			  "when a volatile object spans multiple type-sized locations,"
-			  " the compiler must choose between using a single mis-aligned access to"
-			  " preserve the volatility, or using multiple aligned accesses to avoid"
-			  " runtime faults; this code may fail at runtime if the hardware does"
-			  " not allow this access");
-		}
 	    }
 	}
       else
Index: gcc/expr.c
===================================================================
--- gcc/expr.c	(revision 190541)
+++ gcc/expr.c	(working copy)
@@ -142,7 +142,7 @@  static void store_constructor (tree, rtx
 static rtx store_field (rtx, HOST_WIDE_INT, HOST_WIDE_INT,
 			unsigned HOST_WIDE_INT, unsigned HOST_WIDE_INT,
 			enum machine_mode,
-			tree, tree, alias_set_type, bool);
+			tree, tree, alias_set_type, bool, bool);
 
 static unsigned HOST_WIDE_INT highest_pow2_factor_for_target (const_tree, const_tree);
 
@@ -2076,7 +2076,7 @@  emit_group_store (rtx orig_dst, rtx src,
 	emit_move_insn (adjust_address (dest, mode, bytepos), tmps[i]);
       else
 	store_bit_field (dest, bytelen * BITS_PER_UNIT, bytepos * BITS_PER_UNIT,
-			 0, 0, mode, tmps[i]);
+			 0, 0, false, mode, tmps[i]);
     }
 
   /* Copy from the pseudo into the (probable) hard reg.  */
@@ -2170,7 +2170,8 @@  copy_blkmode_from_reg (rtx tgtblk, rtx s
 
       /* Use xbitpos for the source extraction (right justified) and
 	 bitpos for the destination store (left justified).  */
-      store_bit_field (dst, bitsize, bitpos % BITS_PER_WORD, 0, 0, copy_mode,
+      store_bit_field (dst, bitsize, bitpos % BITS_PER_WORD, 0, 0,
+		       false, copy_mode,
 		       extract_bit_field (src, bitsize,
 					  xbitpos % BITS_PER_WORD, 1, false,
 					  NULL_RTX, copy_mode, copy_mode));
@@ -2249,7 +2250,7 @@  copy_blkmode_to_reg (enum machine_mode m
       /* Use bitpos for the source extraction (left justified) and
 	 xbitpos for the destination store (right justified).  */
       store_bit_field (dst_word, bitsize, xbitpos % BITS_PER_WORD,
-		       0, 0, word_mode,
+		       0, 0, false, word_mode,
 		       extract_bit_field (src_word, bitsize,
 					  bitpos % BITS_PER_WORD, 1, false,
 					  NULL_RTX, word_mode, word_mode));
@@ -2933,7 +2934,8 @@  write_complex_part (rtx cplx, rtx val, b
 	gcc_assert (MEM_P (cplx) && ibitsize < BITS_PER_WORD);
     }
 
-  store_bit_field (cplx, ibitsize, imag_p ? ibitsize : 0, 0, 0, imode, val);
+  store_bit_field (cplx, ibitsize, imag_p ? ibitsize : 0, 0, 0,
+		   false, imode, val);
 }
 
 /* Extract one of the components of the complex value CPLX.  Extract the
@@ -4614,7 +4616,7 @@  expand_assignment (tree to, tree from, b
 	}
       else
 	store_bit_field (mem, GET_MODE_BITSIZE (mode),
-			 0, 0, 0, mode, reg);
+			 0, 0, 0, false, mode, reg);
       return;
     }
 
@@ -4759,14 +4761,14 @@  expand_assignment (tree to, tree from, b
 	    result = store_field (XEXP (to_rtx, 0), bitsize, bitpos,
 				  bitregion_start, bitregion_end,
 				  mode1, from, TREE_TYPE (tem),
-				  get_alias_set (to), nontemporal);
+				  get_alias_set (to), nontemporal, false);
 	  else if (bitpos >= mode_bitsize / 2)
 	    result = store_field (XEXP (to_rtx, 1), bitsize,
 				  bitpos - mode_bitsize / 2,
 				  bitregion_start, bitregion_end,
 				  mode1, from,
 				  TREE_TYPE (tem), get_alias_set (to),
-				  nontemporal);
+				  nontemporal, false);
 	  else if (bitpos == 0 && bitsize == mode_bitsize)
 	    {
 	      rtx from_rtx;
@@ -4788,7 +4790,7 @@  expand_assignment (tree to, tree from, b
 				    bitregion_start, bitregion_end,
 				    mode1, from,
 				    TREE_TYPE (tem), get_alias_set (to),
-				    nontemporal);
+				    nontemporal, false);
 	      emit_move_insn (XEXP (to_rtx, 0), read_complex_part (temp, false));
 	      emit_move_insn (XEXP (to_rtx, 1), read_complex_part (temp, true));
 	    }
@@ -4817,11 +4819,21 @@  expand_assignment (tree to, tree from, b
 					       to_rtx, to, from))
 	    result = NULL;
 	  else
-	    result = store_field (to_rtx, bitsize, bitpos,
-				  bitregion_start, bitregion_end,
-				  mode1, from,
-				  TREE_TYPE (tem), get_alias_set (to),
-				  nontemporal);
+	    {
+	      bool packedp = false;
+
+	      if (TREE_CODE (to) == COMPONENT_REF
+		  && (TYPE_PACKED (TREE_TYPE (TREE_OPERAND (to, 0)))
+		      || (TREE_CODE (TREE_OPERAND (to, 1)) == FIELD_DECL
+			  && DECL_PACKED (TREE_OPERAND (to, 1)))))
+		packedp = true;
+
+	      result = store_field (to_rtx, bitsize, bitpos,
+				    bitregion_start, bitregion_end,
+				    mode1, from,
+				    TREE_TYPE (tem), get_alias_set (to),
+				    nontemporal, packedp);
+	    }
 	}
 
       if (misalignp)
@@ -5220,7 +5232,7 @@  store_expr (tree exp, rtx target, int ca
 			      : BLOCK_OP_NORMAL));
 	  else if (GET_MODE (target) == BLKmode)
 	    store_bit_field (target, INTVAL (expr_size (exp)) * BITS_PER_UNIT,
-			     0, 0, 0, GET_MODE (temp), temp);
+			     0, 0, 0, false, GET_MODE (temp), temp);
 	  else
 	    convert_move (target, temp, unsignedp);
 	}
@@ -5688,7 +5700,7 @@  store_constructor_field (rtx target, uns
     }
   else
     store_field (target, bitsize, bitpos, 0, 0, mode, exp, type, alias_set,
-		 false);
+		 false, false);
 }
 
 /* Store the value of constructor EXP into the rtx TARGET.
@@ -6275,14 +6287,16 @@  store_constructor (tree exp, rtx target,
    (in general) be different from that for TARGET, since TARGET is a
    reference to the containing structure.
 
-   If NONTEMPORAL is true, try generating a nontemporal store.  */
+   If NONTEMPORAL is true, try generating a nontemporal store.
+
+   PACKEDP is true if this field has the packed attribute.  */
 
 static rtx
 store_field (rtx target, HOST_WIDE_INT bitsize, HOST_WIDE_INT bitpos,
 	     unsigned HOST_WIDE_INT bitregion_start,
 	     unsigned HOST_WIDE_INT bitregion_end,
 	     enum machine_mode mode, tree exp, tree type,
-	     alias_set_type alias_set, bool nontemporal)
+	     alias_set_type alias_set, bool nontemporal, bool packedp)
 {
   if (TREE_CODE (exp) == ERROR_MARK)
     return const0_rtx;
@@ -6315,7 +6329,8 @@  store_field (rtx target, HOST_WIDE_INT b
 
       store_field (blk_object, bitsize, bitpos,
 		   bitregion_start, bitregion_end,
-		   mode, exp, type, MEM_ALIAS_SET (blk_object), nontemporal);
+		   mode, exp, type, MEM_ALIAS_SET (blk_object), nontemporal,
+		   false);
 
       emit_move_insn (target, object);
 
@@ -6432,7 +6447,7 @@  store_field (rtx target, HOST_WIDE_INT b
       /* Store the value in the bitfield.  */
       store_bit_field (target, bitsize, bitpos,
 		       bitregion_start, bitregion_end,
-		       mode, temp);
+		       packedp, mode, temp);
 
       return const0_rtx;
     }
@@ -8024,7 +8039,7 @@  expand_expr_real_2 (sepops ops, rtx targ
 				 * BITS_PER_UNIT),
 				(HOST_WIDE_INT) GET_MODE_BITSIZE (mode)),
 			   0, 0, 0, TYPE_MODE (valtype), treeop0,
-			   type, 0, false);
+			   type, 0, false, false);
 	    }
 
 	  /* Return the entire union.  */
Index: gcc/calls.c
===================================================================
--- gcc/calls.c	(revision 190541)
+++ gcc/calls.c	(working copy)
@@ -1051,7 +1051,7 @@  store_unaligned_arguments_into_pseudos (
 
 	    bytes -= bitsize / BITS_PER_UNIT;
 	    store_bit_field (reg, bitsize, endian_correction, 0, 0,
-			     word_mode, word);
+			     false, word_mode, word);
 	  }
       }
 }
Index: gcc/ifcvt.c
===================================================================
--- gcc/ifcvt.c	(revision 190541)
+++ gcc/ifcvt.c	(working copy)
@@ -896,7 +896,7 @@  noce_emit_move_insn (rtx x, rtx y)
 		}
 
 	      gcc_assert (start < (MEM_P (op) ? BITS_PER_UNIT : BITS_PER_WORD));
-	      store_bit_field (op, size, start, 0, 0, GET_MODE (x), y);
+	      store_bit_field (op, size, start, 0, 0, false, GET_MODE (x), y);
 	      return;
 	    }
 
@@ -951,7 +951,7 @@  noce_emit_move_insn (rtx x, rtx y)
   outmode = GET_MODE (outer);
   bitpos = SUBREG_BYTE (outer) * BITS_PER_UNIT;
   store_bit_field (inner, GET_MODE_BITSIZE (outmode), bitpos,
-		   0, 0, outmode, y);
+		   0, 0, false, outmode, y);
 }
 
 /* Return sequence of instructions generated by if conversion.  This
Index: gcc/config/s390/s390.c
===================================================================
--- gcc/config/s390/s390.c	(revision 190541)
+++ gcc/config/s390/s390.c	(working copy)
@@ -4944,7 +4944,7 @@  s390_expand_atomic (enum machine_mode mo
     case SET:
       if (ac.aligned && MEM_P (val))
 	store_bit_field (new_rtx, GET_MODE_BITSIZE (mode), 0,
-			 0, 0, SImode, val);
+			 0, 0, false, SImode, val);
       else
 	{
 	  new_rtx = expand_simple_binop (SImode, AND, new_rtx, ac.modemaski,
Index: gcc/testsuite/gcc.target/arm/volatile-bitfields-5.c
===================================================================
--- gcc/testsuite/gcc.target/arm/volatile-bitfields-5.c	(revision 0)
+++ gcc/testsuite/gcc.target/arm/volatile-bitfields-5.c	(revision 0)
@@ -0,0 +1,37 @@ 
+/* { dg-require-effective-target arm_eabi } */
+/* { dg-do run } */
+/* { dg-options "-O2 -fno-strict-aliasing" } */
+
+#include <stdlib.h>
+
+#pragma pack(1)	// no space between structure members
+volatile typedef struct {
+  unsigned char byte;
+  /* Packed field members get converted to bitfields internally.
+     On most other targets these will either be split into smaller
+     accesses, or done as a single large aligned access.  With
+     -fstrict-volatile-bitfields, store_bit_field was assuming a single
+     aligned user-specified-size access covered the whole field.  */
+  unsigned short halfword;
+} S;
+#pragma pack() // resume default structure packing
+
+void __attribute__((noinline)) foo(S *p)
+{
+  p->halfword++; /* { dg-message "note: When a volatile" } */
+  /* { dg-warning "mis-aligned access" "" { target *-*-* } 21 } */
+}
+
+int main()
+{
+  /* Make sure the halfword is actually aligned in practice.  */
+  short buf[3];
+  buf[0] = 0x5a5a;
+  buf[1] = 0x42ff;
+  buf[2] = 0x5a5a;
+
+  foo((S *)(((char *)buf) + 1));
+  if (buf[0] != 0x5a5a || buf[1] != 0x4300 || buf[2] != 0x5a5a)
+    abort();
+  return 0;
+}