
PR middle-end/60281

Message ID 5305BA80.5080306@gmail.com
State New

Commit Message

林作健 Feb. 20, 2014, 8:19 a.m. UTC
Without realigning the asan stack base, the base will only be 64-bit aligned
on ARM machines.
But asan requires a 256-bit aligned base, because of this:
1. The right shift by ASAN_SHADOW_SHIFT (which is 3) means the lowest 3 bits
must be zero.
2. The store-multiple/load-multiple instructions require the other 2 bits to
be zero as well.

That adds up to the lowest 5 bits having to be zero, which means 32-byte
(256-bit) alignment.
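
For illustration, the same arithmetic as a minimal standalone sketch (not part
of the patch; the shadow offset value is a made-up placeholder):

    #include <stdint.h>

    #define ASAN_SHADOW_SHIFT 3            /* 8 application bytes per shadow byte.  */
    #define SHADOW_OFFSET     0x20000000u  /* placeholder, the real value is target-specific */

    /* The prologue poisons the red zones with 32-bit (SImode) shadow stores,
       so (base >> ASAN_SHADOW_SHIFT) must itself be 4-byte aligned.  That
       requires the low 3 + 2 = 5 bits of base to be clear, i.e. 32-byte
       (256-bit) alignment of base.  */
    uint32_t
    shadow_of (uint32_t base)
    {
      return (base >> ASAN_SHADOW_SHIFT) + SHADOW_OFFSET;
    }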

* asan.c (asan_emit_stack_protection): Forcing the base to align to 256 bits
* cfgexpand.c (expand_used_vars): Leaving a space in the stack frame for
alignment



Signed-off-by: lin zuojian <manjian2006@gmail.com>
---
gcc/asan.c | 6 ++++++
gcc/cfgexpand.c | 2 ++
2 files changed, 8 insertions(+)

Comments

Jakub Jelinek Feb. 20, 2014, 8:50 a.m. UTC | #1
On Thu, Feb 20, 2014 at 04:19:12PM +0800, lin zuojian wrote:
> Without realigning the asan stack base, the base will only be 64-bit aligned
> on ARM machines.
> But asan requires a 256-bit aligned base, because of this:
> 1. The right shift by ASAN_SHADOW_SHIFT (which is 3) means the lowest 3 bits
> must be zero.
> 2. The store-multiple/load-multiple instructions require the other 2 bits to
> be zero as well.
> 
> That adds up to the lowest 5 bits having to be zero, which means 32-byte
> (256-bit) alignment.
> 
> * asan.c (asan_emit_stack_protection): Forcing the base to align to 256 bits
> * cfgexpand.c (expand_used_vars): Leaving a space in the stack frame for
> alignment

This is wrong.

You certainly don't want to do any of the extra aligning
on non-strict alignment targets, so all the changes must be
if (STRICT_ALIGNMENT) guarded, as they are quite expensive (you need one
extra register in that case).

Second thing is, I think you break --param use-after-return=0 this way
and also functions with very large stack frames, because emit_move_insn
to pbase is then done before you adjust the stack.

Also, I don't think you want to align for ASAN_RED_ZONE_SIZE,
but instead (GET_MODE_ALIGNMENT (SImode) << ASAN_SHADOW_SHIFT) / BITS_PER_UNIT,
and if you do (of course for STRICT_ALIGNMENT targets only), you should
set_mem_align on shadow_mem when it is created.
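
For concreteness, plugging in the ARM/SImode numbers (illustrative arithmetic
only):

    GET_MODE_ALIGNMENT (SImode)                  = 32 bits
    (32 << ASAN_SHADOW_SHIFT) / BITS_PER_UNIT    = (32 << 3) / 8 = 32 bytes

i.e. the same 32-byte figure as in the patch description, only derived from
the mode actually used for the shadow accesses rather than from
ASAN_RED_ZONE_SIZE.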

If you do the realignment, then it would be nice if the middle-end was told
about that, so do
          base_align = crtl->max_used_stack_slot_alignment;
in both the if ((flag_sanitize & SANITIZE_ADDRESS) && ASAN_STACK && pred)
then body and else, and if sanitizing and STRICT_ALIGNMENT, set
base_align to MIN of crtl->max_used_stack_slot_alignment and the alignment
you actually guarantee.
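
A rough sketch of that bookkeeping (untested; base_align is in bits, and the
guaranteed alignment is assumed to be the SImode-based value from above):

      unsigned int base_align;
      if ((flag_sanitize & SANITIZE_ADDRESS) && ASAN_STACK && pred)
        {
          /* ... asan path, ending in asan_emit_stack_protection ...  */
          base_align = crtl->max_used_stack_slot_alignment;
          if (STRICT_ALIGNMENT)
            /* Only promise what the realigned base actually guarantees.  */
            base_align = MIN (base_align,
                              (unsigned int) GET_MODE_ALIGNMENT (SImode)
                              << ASAN_SHADOW_SHIFT);
        }
      else
        {
          /* ... expand_stack_vars (NULL, &data); ...  */
          base_align = crtl->max_used_stack_slot_alignment;
        }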

Or, if ARM supports unaligned loads/stores using special instructions,
perhaps you should also benchmark the alternative of not realigning, but
instead making sure those unaligned instructions are used for the shadow
memory loads/stores in the asan prologue/epilogue.

Please next time post the patch from a sane MUA that doesn't eat tabs or
at least as an attachment.  Comments should start with a capital letter, and
with a full stop followed by two spaces, in the ChangeLog also all entries
should end with a full stop.

	Jakub
林作健 Feb. 20, 2014, 9:13 a.m. UTC | #2
> On Thu, Feb 20, 2014 at 04:19:12PM +0800, lin zuojian wrote:
>> Without realigning the asan stack base, the base will only be 64-bit aligned
>> on ARM machines.
>> But asan requires a 256-bit aligned base, because of this:
>> 1. The right shift by ASAN_SHADOW_SHIFT (which is 3) means the lowest 3 bits
>> must be zero.
>> 2. The store-multiple/load-multiple instructions require the other 2 bits to
>> be zero as well.
>>
>> That adds up to the lowest 5 bits having to be zero, which means 32-byte
>> (256-bit) alignment.
>>
>> * asan.c (asan_emit_stack_protection): Forcing the base to align to 256 bits
>> * cfgexpand.c (expand_used_vars): Leaving a space in the stack frame for
>> alignment
> This is wrong.
>
> You certainly don't want to do any of the extra aligning
> on non-strict alignment targets, so all the changes must be
> if (STRICT_ALIGNMENT) guarded, as they are quite expensive (you need one
> extra register in that case).

I have to do the realignment.  LLVM's backend also does the realignment; I
just referred to its implementation.

>
> Second thing is, I think you break --param use-after-return=0 this way
> and also functions with very large stack frames, because emit_move_insn
> to pbase is then done before you adjust the stack.
Yes, that's a defect.
>
> Also, I don't think you want to align for ASAN_RED_ZONE_SIZE,
> but instead (GET_MODE_ALIGNMENT (SImode) << ASAN_SHADOW_SHIFT) / BITS_PER_UNIT,
> and if you do (of course for STRICT_ALIGNMENT targets only), you should
> set_mem_align on shadow_mem when it is created.

I agree with you on the alignment value.  But might invoking set_mem_align
on shadow_mem be too late at this point, given that the RTL has already been
expanded?
And again, I have to do the realignment even on non-STRICT_ALIGNMENT targets,
whatever cost it takes, or there will always be alignment faults.

>
> If you do the realignment, then it would be nice if the middle-end was told
> about that, so do
>           base_align = crtl->max_used_stack_slot_alignment;
> in both the if ((flag_sanitize & SANITIZE_ADDRESS) && ASAN_STACK && pred)
> then body and else, and if sanitizing and STRICT_ALIGNMENT, set
> base_align to MIN of crtl->max_used_stack_slot_alignment and the alignment
> you actually guarantee.

Okay, I will do that.

>
> Or, if ARM supports unaligned loads/stores using special instructions,
> perhaps you should also benchmark the alternative of not realigning, but
> instead making sure those unaligned instructions are used for the shadow
> memory loads/stores in the asan prologue/epilogue.
I have tried using -fno-peephole2 to stop the instructions from being
combined, but that makes me feel uncomfortable.
>
> Please next time post the patch from a sane MUA that doesn't eat tabs or
> at least as an attachment.  Comments should start with a capital letter, and
> with a full stop followed by two spaces, in the ChangeLog also all entries
> should end with a full stop.
Sorry about that...
BTW, I didn't modify the ChangeLog because this patch should be discussed
carefully first.
>
> 	Jakub
林作健 Feb. 20, 2014, 9:18 a.m. UTC | #3
If you don't mind my realigning on STRICT_ALIGNMENT machines, I will prepare
patch v2 soon.
>> On Thu, Feb 20, 2014 at 04:19:12PM +0800, lin zuojian wrote:
>>> Without realigning the asan stack base, the base will only be 64-bit aligned
>>> on ARM machines.
>>> But asan requires a 256-bit aligned base, because of this:
>>> 1. The right shift by ASAN_SHADOW_SHIFT (which is 3) means the lowest 3 bits
>>> must be zero.
>>> 2. The store-multiple/load-multiple instructions require the other 2 bits to
>>> be zero as well.
>>>
>>> That adds up to the lowest 5 bits having to be zero, which means 32-byte
>>> (256-bit) alignment.
>>>
>>> * asan.c (asan_emit_stack_protection): Forcing the base to align to 256 bits
>>> * cfgexpand.c (expand_used_vars): Leaving a space in the stack frame for
>>> alignment
>> This is wrong.
>>
>> You certainly don't want to do any of the extra aligning
>> on non-strict alignment targets, so all the changes must be
>> if (STRICT_ALIGNMENT) guarded, as they are quite expensive (you need one
>> extra register in that case).
> I have to do the realignment.  LLVM's backend also does the realignment; I
> just referred to its implementation.
>
>> Second thing is, I think you break --param use-after-return=0 this way
>> and also functions with very large stack frames, because emit_move_insn
>> to pbase is then done before you adjust the stack.
> Yes, that's a defect.
>> Also, I don't think you want to align for ASAN_RED_ZONE_SIZE,
>> but instead (GET_MODE_ALIGNMENT (SImode) << ASAN_SHADOW_SHIFT) / BITS_PER_UNIT,
>> and if you do (of course for STRICT_ALIGNMENT targets only), you should
>> set_mem_align on shadow_mem when it is created.
> I agree with you on the alignment value.  But might invoking set_mem_align
> on shadow_mem be too late at this point, given that the RTL has already been
> expanded?
> And again, I have to do the realignment even on non-STRICT_ALIGNMENT targets,
> whatever cost it takes, or there will always be alignment faults.
>
>> If you do the realignment, then it would be nice if the middle-end was told
>> about that, so do
>>           base_align = crtl->max_used_stack_slot_alignment;
>> in both the if ((flag_sanitize & SANITIZE_ADDRESS) && ASAN_STACK && pred)
>> then body and else, and if sanitizing and STRICT_ALIGNMENT, set
>> base_align to MIN of crtl->max_used_stack_slot_alignment and the alignment
>> you actually guarantee.
> Okay, I will do that.
>
>> Or, if ARM supports unaligned loads/stores using special instructions,
>> perhaps you should also benchmark the alternative of not realigning, but
>> instead making sure those unaligned instructions are used for the shadow
>> memory loads/stores in the asan prologue/epilogue.
> I have tried using -fno-peephole2 to stop the instructions from being
> combined, but that makes me feel uncomfortable.
>> Please next time post the patch from a sane MUA that doesn't eat tabs or
>> at least as an attachment.  Comments should start with a capital letter, and
>> with a full stop followed by two spaces, in the ChangeLog also all entries
>> should end with a full stop.
> Sorry about that...
> BTW, I didn't modify the ChangeLog because this patch should be discussed
> carefully first.
>> 	Jakub
>
Jakub Jelinek Feb. 20, 2014, 9:21 a.m. UTC | #4
On Thu, Feb 20, 2014 at 05:13:36PM +0800, lin zuojian wrote:
> > You certainly don't want to do any of the extra aligning
> > on non-strict alignment targets, so all the changes must be
> > if (STRICT_ALIGNMENT) guarded, as they are quite expensive (you need one
> > extra register in that case).
> 
> I have to do the realignment.  LLVM's backend also does the realignment; I
> just referred to its implementation.

On !STRICT_ALIGNMENT, you don't need to do the realignment, it just works
as is.  I don't care what llvm does.

> > Also, I don't think you want to align for ASAN_RED_ZONE_SIZE,
> > but instead (GET_MODE_ALIGNMENT (SImode) << ASAN_SHADOW_SHIFT) / BITS_PER_UNIT,
> > and if you do (of course for STRICT_ALIGNMENT targets only), you should
> > set_mem_align on shadow_mem when it is created.
> 
> I agree with you on the alignment value.  But might invoking set_mem_align
> on shadow_mem be too late at this point, given that the RTL has already been
> expanded?

I meant to do:
   shadow_mem = gen_rtx_MEM (SImode, shadow_base);
   set_mem_alias_set (shadow_mem, asan_shadow_set);
+  if (STRICT_ALIGNMENT)
+    set_mem_align (shadow_mem, ...);
(the default alignment is BITS_PER_UNIT).  No RTL has been expanded with
that yet at that point.
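
For illustration only (the "..." above is deliberately left open; one
plausible value, given the SImode shadow accesses, might be):

   shadow_mem = gen_rtx_MEM (SImode, shadow_base);
   set_mem_alias_set (shadow_mem, asan_shadow_set);
   if (STRICT_ALIGNMENT)
     /* Illustrative value only: promise just what the realigned base
        guarantees for the SImode shadow stores.  */
     set_mem_align (shadow_mem, GET_MODE_ALIGNMENT (SImode));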

> And again, I have to do the realignment even on non-STRICT_ALIGNMENT targets,
> whatever cost it takes, or there will always be alignment faults.

No.

> > Or, if ARM supports unaligned loads/stores using special instructions,
> > perhaps you should also benchmark the alternative of not realigning, but
> > instead making sure those unaligned instructions are used for the shadow
> > memory loads/stores in the asan prologue/epilogue.
> I have tried using -fno-peephole2 to stop the instructions from being
> combined, but that makes me feel uncomfortable.

You mean that for the prologues right now on ARM we emit unaligned store
insns during expansion and that they are later peepholed into aligned
stores?

	Jakub
Jakub Jelinek Feb. 20, 2014, 9:33 a.m. UTC | #5
On Thu, Feb 20, 2014 at 10:21:12AM +0100, Jakub Jelinek wrote:
> > > Or, if ARM supports unaligned loads/stores using special instructions,
> > > perhaps you should also benchmark the alternative of not realigning, but
> > > instead making sure those unaligned instructions are used for the shadow
> > > memory loads/stores in the asan prologue/epilogue.

Also note that apparently ARM does have unaligned_storesi pattern,
just doesn't provide movmisalignsi expander so that it could be (easily) used from
the middle-end.  So you should certainly also try it that way and test.

	Jakub
林作健 Feb. 20, 2014, 10:46 a.m. UTC | #6
Oh, you may think my patch has not violated STRICT_ALIGNMENT.  My patch
doesn't realign stack_pointer but virtual_stack_vars, although they mostly
share the same position.
As you can see, the emitted code just makes the stack 8-byte aligned.

> You mean that for the prologues right now on ARM we emit unaligned
> store insns during expansion and that they are later peepholed into
> aligned stores? 
Yes.  Because SImode is aligned to 4 bytes, and str is considered 4-byte
aligned on all ARM machines, gcc doesn't consider itself to be emitting
unaligned str instructions in the first place.  Of course, an aligned str
can be peepholed into an aligned store-multiple.

--
lin zuojian
Jakub Jelinek Feb. 20, 2014, 10:50 a.m. UTC | #7
On Thu, Feb 20, 2014 at 06:46:21PM +0800, lin zuojian wrote:
> Oh, you may think my patch has not violated STRICT_ALIGNMENT.  My patch
> doesn't realign stack_pointer but virtual_stack_vars, although they mostly
> share the same position.
> As you can see, the emitted code just makes the stack 8-byte aligned.
> 
> > You mean that for the prologues right now on ARM we emit unaligned
> > store insns during expansion and that they are later peepholed into
> > aligned stores? 
> Yes.  Because SImode is aligned to 4 bytes, and str is considered 4-byte
> aligned on all ARM machines, gcc doesn't consider itself to be emitting
> unaligned str instructions in the first place.  Of course, an aligned str
> can be peepholed into an aligned store-multiple.

ARM is a STRICT_ALIGNMENT target; I was suggesting that you do the changes
you want to do only for STRICT_ALIGNMENT targets.  They are not desirable
for !STRICT_ALIGNMENT targets like i?86/x86_64 (or say ppc/ppc64, unless
strict alignment is requested by a compiler option).

	Jakub
Ramana Radhakrishnan Feb. 20, 2014, 11:05 a.m. UTC | #8
>> Or, if ARM supports unaligned loads/stores using special instructions,
>> perhaps you should also benchmark the alternative of not realigning, but
>> instead making sure those unaligned instructions are used for the shadow
>> memory loads/stores in the asan prologue/epilogue.
> I have tried using -fno-peephole2 to stop the instructions from being
> combined, but that makes me feel uncomfortable.

That should not be required. I suspect what you need is a
movmisalignsi expander generating unaligned_store/loadsi patterns.
Notice that these put out ldr / str instructions but with UNSPECs
which means that the peepholers for ldrd / ldm will not catch them.
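
From the middle-end side, the presence of such an expander would be visible
through the usual optab query; a minimal sketch of the check (names are the
standard middle-end ones, sketch only):

  /* True if the target provides movmisalignsi, i.e. the middle-end may
     emit an SImode access it does not know to be aligned.  */
  bool have_movmisalign_si
    = optab_handler (movmisalign_optab, SImode) != CODE_FOR_nothing;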

As Jakub says later in this thread ARM is a STRICT_ALIGNMENT target.
If the compiler is peepholing unaligned ldr's and str's, that would be
a bug and the compiler needs to be fixed to avoid this.

regards
Ramana

>>
>> Please next time post the patch from a sane MUA that doesn't eat tabs or
>> at least as an attachment.  Comments should start with a capital letter, and
>> with a full stop followed by two spaces, in the ChangeLog also all entries
>> should end with a full stop.
> Sorry about that...
> BTW, I didn't modify the ChangeLog because this patch should be discussed
> carefully first.
>>
>>       Jakub
>
>
林作健 Feb. 20, 2014, 11:12 a.m. UTC | #9
On 2014/02/20 19:05, Ramana Radhakrishnan wrote:
>>> Or, if ARM supports unaligned loads/stores using special instructions,
>>> perhaps you should also benchmark the alternative of not realigning, but
>>> instead making sure those unaligned instructions are used for the shadow
>>> memory loads/stores in the asan prologue/epilogue.
>> I have tried using -fno-peephole2 to stop the instructions from being
>> combined, but that makes me feel uncomfortable.
> That should not be required. I suspect what you need is a
> movmisalignsi expander generating unaligned_store/loadsi patterns.
> Notice that these put out ldr / str instructions but with UNSPECs
> which means that the peepholers for ldrd / ldm will not catch them.
>
> As Jakub says later in this thread ARM is a STRICT_ALIGNMENT target.
> If the compiler is peepholing unaligned ldr's and str's, that would be
> a bug and the compiler needs to be fixed to avoid this.
>
> regards
> Ramana
That is not an instruction selection bug.  SImode implies that the address
is 4-byte aligned.
>
>>> Please next time post the patch from a sane MUA that doesn't eat tabs or
>>> at least as an attachment.  Comments should start with a capital letter, and
>>> with a full stop followed by two spaces, in the ChangeLog also all entries
>>> should end with a full stop.
>> Sorry about that...
>> BTW, I didn't modify the ChangeLog because this patch should be discussed
>> carefully first.
>>>       Jakub
>>

Patch

diff --git a/gcc/asan.c b/gcc/asan.c
index 53992a8..455e484 100644
--- a/gcc/asan.c
+++ b/gcc/asan.c
@@ -1019,6 +1019,12 @@ asan_emit_stack_protection (rtx base, rtx pbase, unsigned int alignb,
     }
   if (use_after_return_class == -1 && pbase)
     emit_move_insn (pbase, base);
+
+  /* align base */
+  base = expand_binop (Pmode, and_optab, base,
+                       gen_int_mode (-ASAN_RED_ZONE_SIZE, Pmode),
+                       NULL_RTX, 1, OPTAB_DIRECT);
+
   base = expand_binop (Pmode, add_optab, base,
                        gen_int_mode (base_offset - base_align_bias, Pmode),
                        NULL_RTX, 1, OPTAB_DIRECT);
diff --git a/gcc/cfgexpand.c b/gcc/cfgexpand.c
index 06d494c..9e887f7 100644
--- a/gcc/cfgexpand.c
+++ b/gcc/cfgexpand.c
@@ -1843,6 +1843,8 @@ expand_used_vars (void)
             = alloc_stack_frame_space (redzonesz, ASAN_RED_ZONE_SIZE);
           data.asan_vec.safe_push (prev_offset);
           data.asan_vec.safe_push (offset);
+          /* leave a space for alignment */
+          alloc_stack_frame_space (ASAN_RED_ZONE_SIZE, 1);
 
           var_end_seq
             = asan_emit_stack_protection (virtual_stack_vars_rtx,
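
For reference, what the new and_optab sequence in the asan.c hunk computes,
with made-up example numbers (ASAN_RED_ZONE_SIZE is 32):

    base                                       = 0x7eb00008  (only 8-byte aligned)
    gen_int_mode (-ASAN_RED_ZONE_SIZE, Pmode)  = 0xffffffe0  (on a 32-bit target)
    base & 0xffffffe0                          = 0x7eb00000  (rounded down to a 32-byte boundary)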