
[asan] Protection of stack vars

Message ID 20121012162714.GO584@tucnak.redhat.com
State New

Commit Message

Jakub Jelinek Oct. 12, 2012, 4:27 p.m. UTC
Hi!

This is not finished completely yet, but roughly implements protection
of stack variables.  As testcase I was using:
extern void *malloc (__SIZE_TYPE__);

int
main ()
{
  char buf1[16];
  char buf2[256];
  char buf3[33];
  char *p = malloc (16);
  asm ("" : "+r" (p));
  int i;
  for (i = 0; i < 18; i++)
    p[i] = 17;
  asm ("" : "=r" (p) : "0" (buf1));
  for (i = 0; i < 18; i++)
    p[i] = 17;
  asm ("" : "=r" (p) : "0" (buf2));
  for (i = 0; i < 18; i++)
    p[i] = 17;
  asm ("" : "=r" (p) : "0" (buf3));
  for (i = 0; i < 18; i++)
    p[i] = 17;
  return 0;
}
and looked at what LLVM generates.  What isn't handled yet is that
expand_used_vars returns an insn sequence that needs to be emitted
before the epilogue, but that sequence isn't actually emitted in the
right spot(s).  I wonder here whether we should disallow tail call
optimization for -fasan, or whether the (possibly lengthy) sequence to
clear the shadow stack memory should just be emitted before every tail
call and before every return spot (noreturn calls don't need to be
handled).
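For readers following along, the shadow mapping and the epilogue clearing
being discussed can be sketched in plain C.  This is a toy model only: a
static array stands in for real shadow memory, shadow_of plays the role of
(addr >> ASAN_SHADOW_SHIFT) + targetm.asan_shadow_offset (), and the names
poison/unpoison_frame are invented for the sketch, not GCC or libasan
entry points.

```c
#include <stdint.h>
#include <string.h>

#define ASAN_SHADOW_SHIFT 3      /* one shadow byte covers 8 app bytes */
#define ASAN_RED_ZONE_SIZE 32

static unsigned char shadow[1 << 12];  /* toy stand-in for shadow memory */

/* Toy model of the mapping (addr >> ASAN_SHADOW_SHIFT)
   + targetm.asan_shadow_offset (): here the "offset" is simply the
   base of the local array, and addresses are frame offsets.  */
static unsigned char *
shadow_of (uintptr_t frame_off)
{
  return &shadow[frame_off >> ASAN_SHADOW_SHIFT];
}

/* Prologue side: poison LEN bytes starting at FRAME_OFF with MAGIC
   (e.g. the red zone below the protected vars).  */
static void
poison (uintptr_t frame_off, size_t len, unsigned char magic)
{
  memset (shadow_of (frame_off), magic, len >> ASAN_SHADOW_SHIFT);
}

/* Epilogue side: the clearing sequence discussed above must re-zero
   the whole frame's shadow before every return (and before any tail
   call), or the shadow would stay poisoned after the frame is gone.  */
static void
unpoison_frame (uintptr_t frame_off, size_t frame_size)
{
  memset (shadow_of (frame_off), 0, frame_size >> ASAN_SHADOW_SHIFT);
}
```

Because one shadow byte covers 2**ASAN_SHADOW_SHIFT application bytes, the
epilogue only has to clear frame_size >> ASAN_SHADOW_SHIFT bytes, which is
what makes placing that (still possibly lengthy) sequence before each exit
point the main question.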

Comments?

2012-10-12  Jakub Jelinek  <jakub@redhat.com>

	* Makefile.in (asan.o): Depend on $(EXPR_H) $(OPTABS_H).
	* asan.c: Include expr.h and optabs.h.
	(asan_shadow_set): New variable.
	(asan_shadow_cst, asan_emit_stack_protection): New functions.
	(asan_init_shadow_ptr_types): Initialize also asan_shadow_set.
	* cfgexpand.c: Include asan.h.  Define HOST_WIDE_INT heap vector.
	(expand_stack_vars): Add asan_vec and asan_decl_vec arguments,
	fill it in and add needed padding in between variables if -fasan.
	(defer_stack_allocation): Defer everything for flag_asan.
	(expand_used_vars): Return var destruction sequence.  Call
	asan_emit_stack_protection if expand_stack_vars added anything
	to the vectors.
	* asan.h (asan_emit_stack_protection): New prototype.
	(asan_shadow_set): New decl.
	(ASAN_RED_ZONE_SIZE, ASAN_STACK_MAGIC_LEFT, ASAN_STACK_MAGIC_MIDDLE,
	ASAN_STACK_MAGIC_RIGHT, ASAN_STACK_FRAME_MAGIC): Define.
	* toplev.c (process_options): Also disable -fasan on
	!FRAME_GROWS_DOWNWARDS targets.


	Jakub

Comments

Xinliang David Li Oct. 12, 2012, 5:52 p.m. UTC | #1
On Fri, Oct 12, 2012 at 9:27 AM, Jakub Jelinek <jakub@redhat.com> wrote:
> Hi!
>
> This is not finished completely yet, but roughly implements protection
> of stack variables.  As testcase I was using:
> extern void *malloc (__SIZE_TYPE__);
>
> int
> main ()
> {
>   char buf1[16];
>   char buf2[256];
>   char buf3[33];
>   char *p = malloc (16);
>   asm ("" : "+r" (p));
>   int i;
>   for (i = 0; i < 18; i++)
>     p[i] = 17;
>   asm ("" : "=r" (p) : "0" (buf1));
>   for (i = 0; i < 18; i++)
>     p[i] = 17;
>   asm ("" : "=r" (p) : "0" (buf2));
>   for (i = 0; i < 18; i++)
>     p[i] = 17;
>   asm ("" : "=r" (p) : "0" (buf3));
>   for (i = 0; i < 18; i++)
>     p[i] = 17;
>   return 0;
> }
> and looked at what LLVM generates.  What isn't handled yet is that
> expand_used_vars returns an insn sequence that needs to be emitted
> before the epilogue, but that sequence isn't actually emitted in the
> right spot(s).  I wonder here whether we should disallow tail call
> optimization for -fasan, or whether the (possibly lengthy) sequence to
> clear the shadow stack memory should just be emitted before every tail
> call and before every return spot (noreturn calls don't need to be
> handled).

This is related to the way you implement it.  Emitting the stack
shadow initialization code in GIMPLE would solve the problem; I think
that would be cleaner.

You do need to create the guard variables earlier, in the asan pass,
and set up a mapping from each original var to its guard var in some
form.  In cfgexpand, if an original stack var has a guard, allocate it
on the stack.


>
> Comments?
>
> 2012-10-12  Jakub Jelinek  <jakub@redhat.com>
>
>         * Makefile.in (asan.o): Depend on $(EXPR_H) $(OPTABS_H).
>         * asan.c: Include expr.h and optabs.h.
>         (asan_shadow_set): New variable.
>         (asan_shadow_cst, asan_emit_stack_protection): New functions.
>         (asan_init_shadow_ptr_types): Initialize also asan_shadow_set.
>         * cfgexpand.c: Include asan.h.  Define HOST_WIDE_INT heap vector.
>         (expand_stack_vars): Add asan_vec and asan_decl_vec arguments,
>         fill it in and add needed padding in between variables if -fasan.
>         (defer_stack_allocation): Defer everything for flag_asan.
>         (expand_used_vars): Return var destruction sequence.  Call
>         asan_emit_stack_protection if expand_stack_vars added anything
>         to the vectors.
>         * asan.h (asan_emit_stack_protection): New prototype.
>         (asan_shadow_set): New decl.
>         (ASAN_RED_ZONE_SIZE, ASAN_STACK_MAGIC_LEFT, ASAN_STACK_MAGIC_MIDDLE,
>         ASAN_STACK_MAGIC_RIGHT, ASAN_STACK_FRAME_MAGIC): Define.
>         * toplev.c (process_options): Also disable -fasan on
>         !FRAME_GROWS_DOWNWARDS targets.
>
> --- gcc/Makefile.in.jj  2012-10-11 20:13:17.000000000 +0200
> +++ gcc/Makefile.in     2012-10-12 18:11:10.491807366 +0200
> @@ -2204,7 +2204,7 @@ stor-layout.o : stor-layout.c $(CONFIG_H
>  asan.o : asan.c asan.h $(CONFIG_H) pointer-set.h \
>     $(SYSTEM_H) $(TREE_H) $(GIMPLE_H) \
>     output.h $(DIAGNOSTIC_H) coretypes.h $(TREE_DUMP_H) $(FLAGS_H) \
> -   tree-pretty-print.h $(TARGET_H)
> +   tree-pretty-print.h $(TARGET_H) $(EXPR_H) $(OPTABS_H)
>  tree-ssa-tail-merge.o: tree-ssa-tail-merge.c \
>     $(SYSTEM_H) $(CONFIG_H) coretypes.h $(TM_H) $(BITMAP_H) \
>     $(FLAGS_H) $(TM_P_H) $(BASIC_BLOCK_H) \
> --- gcc/asan.c.jj       2012-10-12 10:02:10.000000000 +0200
> +++ gcc/asan.c  2012-10-12 18:00:45.529248028 +0200
> @@ -43,6 +43,8 @@ along with GCC; see the file COPYING3.
>  #include "asan.h"
>  #include "gimple-pretty-print.h"
>  #include "target.h"
> +#include "expr.h"
> +#include "optabs.h"
>
>  /*
>   AddressSanitizer finds out-of-bounds and use-after-free bugs
> @@ -79,10 +81,180 @@ along with GCC; see the file COPYING3.
>   to create redzones for stack and global object and poison them.
>  */
>
> +alias_set_type asan_shadow_set = -1;
> +
>  /* Pointer types to 1 resp. 2 byte integers in shadow memory.  A separate
>     alias set is used for all shadow memory accesses.  */
>  static GTY(()) tree shadow_ptr_types[2];
>
> +static rtx
> +asan_shadow_cst (unsigned char shadow_bytes[4])

Missing comments.

> +{
> +  int i;
> +  unsigned HOST_WIDE_INT val = 0;
> +  gcc_assert (WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN);
> +  for (i = 0; i < 4; i++)
> +    val |= (unsigned HOST_WIDE_INT) shadow_bytes[BYTES_BIG_ENDIAN ? 3 - i : i]
> +          << (BITS_PER_UNIT * i);
> +  return GEN_INT (trunc_int_for_mode (val, SImode));
> +}
> +
> +rtx
> +asan_emit_stack_protection (rtx base, HOST_WIDE_INT *offsets, tree *decls,
> +                           int length)

Missing comments.

See my previous comment -- having this in rtl form makes the code less
readable and less maintainable.

> +{
> +  rtx shadow_base, shadow_mem, ret, mem;
> +  unsigned char shadow_bytes[4];
> +  HOST_WIDE_INT base_offset = offsets[length - 1], offset, prev_offset;
> +  HOST_WIDE_INT last_offset, last_size;
> +  int l;
> +  unsigned char cur_shadow_byte = ASAN_STACK_MAGIC_LEFT;
> +  static pretty_printer pp;
> +  static bool pp_initialized;
> +  const char *buf;
> +  size_t len;
> +  tree str_cst;
> +
> +  /* First of all, prepare the description string.  */
> +  if (!pp_initialized)
> +    {
> +      pp_construct (&pp, /* prefix */NULL, /* line-width */0);
> +      pp_initialized = true;
> +    }
> +  pp_clear_output_area (&pp);
> +  if (DECL_NAME (current_function_decl))
> +    pp_base_tree_identifier (&pp, DECL_NAME (current_function_decl));
> +  else
> +    pp_string (&pp, "<unknown>");
> +  pp_space (&pp);
> +  pp_decimal_int (&pp, length / 2 - 1);
> +  pp_space (&pp);
> +  for (l = length - 2; l; l -= 2)
> +    {
> +      tree decl = decls[l / 2 - 1];
> +      pp_wide_integer (&pp, offsets[l] - base_offset);
> +      pp_space (&pp);
> +      pp_wide_integer (&pp, offsets[l - 1] - offsets[l]);
> +      pp_space (&pp);
> +      if (DECL_P (decl) && DECL_NAME (decl))
> +       {
> +         pp_decimal_int (&pp, IDENTIFIER_LENGTH (DECL_NAME (decl)));
> +         pp_space (&pp);
> +         pp_base_tree_identifier (&pp, DECL_NAME (decl));
> +       }
> +      else
> +       pp_string (&pp, "9 <unknown>");
> +      pp_space (&pp);
> +    }
> +  buf = pp_base_formatted_text (&pp);
> +  len = strlen (buf);
> +  str_cst = build_string (len + 1, buf);
> +  TREE_TYPE (str_cst)
> +    = build_array_type (char_type_node, build_index_type (size_int (len)));
> +  TREE_READONLY (str_cst) = 1;
> +  TREE_STATIC (str_cst) = 1;
> +  str_cst = build1 (ADDR_EXPR, build_pointer_type (char_type_node), str_cst);
> +
> +  /* Emit the prologue sequence.  */
> +  base = expand_binop (Pmode, add_optab, base, GEN_INT (base_offset),
> +                      NULL_RTX, 1, OPTAB_DIRECT);
> +  mem = gen_rtx_MEM (ptr_mode, base);
> +  emit_move_insn (mem, GEN_INT (ASAN_STACK_FRAME_MAGIC));
> +  mem = adjust_address (mem, VOIDmode, GET_MODE_SIZE (ptr_mode));
> +  emit_move_insn (mem, expand_normal (str_cst));
> +  shadow_base = expand_binop (Pmode, lshr_optab, base,
> +                             GEN_INT (ASAN_SHADOW_SHIFT),
> +                             NULL_RTX, 1, OPTAB_DIRECT);
> +  shadow_base = expand_binop (Pmode, add_optab, shadow_base,
> +                             GEN_INT (targetm.asan_shadow_offset ()),
> +                             NULL_RTX, 1, OPTAB_DIRECT);
> +  gcc_assert (asan_shadow_set != -1
> +             && (ASAN_RED_ZONE_SIZE >> ASAN_SHADOW_SHIFT) == 4);
> +  shadow_mem = gen_rtx_MEM (SImode, shadow_base);
> +  set_mem_alias_set (shadow_mem, asan_shadow_set);
> +  prev_offset = base_offset;
> +  for (l = length; l; l -= 2)
> +    {
> +      if (l == 2)
> +       cur_shadow_byte = ASAN_STACK_MAGIC_RIGHT;
> +      offset = offsets[l - 1];
> +      if ((offset - base_offset) & (ASAN_RED_ZONE_SIZE - 1))
> +       {
> +         int i;
> +         HOST_WIDE_INT aoff
> +           = base_offset + ((offset - base_offset)
> +                            & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
> +         shadow_mem = adjust_address (shadow_mem, VOIDmode,
> +                                      (aoff - prev_offset)
> +                                      >> ASAN_SHADOW_SHIFT);
> +         prev_offset = aoff;
> +         for (i = 0; i < 4; i++, aoff += (1 << ASAN_SHADOW_SHIFT))
> +           if (aoff < offset)
> +             {
> +               if (aoff < offset - (1 << ASAN_SHADOW_SHIFT) + 1)
> +                 shadow_bytes[i] = 0;
> +               else
> +                 shadow_bytes[i] = offset - aoff;
> +             }
> +           else
> +             shadow_bytes[i] = ASAN_STACK_MAGIC_PARTIAL;
> +         emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
> +         offset = aoff;
> +       }
> +      while (offset <= offsets[l - 2] - ASAN_RED_ZONE_SIZE)
> +       {
> +         shadow_mem = adjust_address (shadow_mem, VOIDmode,
> +                                      (offset - prev_offset)
> +                                      >> ASAN_SHADOW_SHIFT);
> +         prev_offset = offset;
> +         memset (shadow_bytes, cur_shadow_byte, 4);
> +         emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
> +         offset += ASAN_RED_ZONE_SIZE;
> +       }
> +      cur_shadow_byte = ASAN_STACK_MAGIC_MIDDLE;
> +    }
> +
> +  /* Construct epilogue sequence.  */
> +  start_sequence ();
> +
> +  shadow_mem = gen_rtx_MEM (SImode, shadow_base);
> +  set_mem_alias_set (shadow_mem, asan_shadow_set);
> +  prev_offset = base_offset;
> +  last_offset = base_offset;
> +  last_size = 0;
> +  for (l = length; l; l -= 2)
> +    {
> +      offset = base_offset + ((offsets[l - 1] - base_offset)
> +                            & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
> +      if (last_offset + last_size != offset)
> +       {
> +         shadow_mem = adjust_address (shadow_mem, VOIDmode,
> +                                      (last_offset - prev_offset)
> +                                      >> ASAN_SHADOW_SHIFT);
> +         prev_offset = last_offset;
> +         clear_storage (shadow_mem, GEN_INT (last_size >> ASAN_SHADOW_SHIFT),
> +                        BLOCK_OP_NORMAL);
> +         last_offset = offset;
> +         last_size = 0;
> +       }
> +      last_size += base_offset + ((offsets[l - 2] - base_offset)
> +                                 & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1))
> +                  - offset;
> +    }
> +  if (last_size)
> +    {
> +      shadow_mem = adjust_address (shadow_mem, VOIDmode,
> +                                  (last_offset - prev_offset)
> +                                  >> ASAN_SHADOW_SHIFT);
> +      clear_storage (shadow_mem, GEN_INT (last_size >> ASAN_SHADOW_SHIFT),
> +                    BLOCK_OP_NORMAL);
> +    }
> +
> +  ret = get_insns ();
> +  end_sequence ();
> +  return ret;
> +}
> +
>  /* Construct a function tree for __asan_report_{load,store}{1,2,4,8,16}.
>     IS_STORE is either 1 (for a store) or 0 (for a load).
>     SIZE_IN_BYTES is one of 1, 2, 4, 8, 16.  */
> @@ -401,12 +573,12 @@ asan_finish_file (void)
>  static void
>  asan_init_shadow_ptr_types (void)
>  {
> -  alias_set_type set = new_alias_set ();
> +  asan_shadow_set = new_alias_set ();
>    shadow_ptr_types[0] = build_distinct_type_copy (unsigned_char_type_node);
> -  TYPE_ALIAS_SET (shadow_ptr_types[0]) = set;
> +  TYPE_ALIAS_SET (shadow_ptr_types[0]) = asan_shadow_set;
>    shadow_ptr_types[0] = build_pointer_type (shadow_ptr_types[0]);
>    shadow_ptr_types[1] = build_distinct_type_copy (short_unsigned_type_node);
> -  TYPE_ALIAS_SET (shadow_ptr_types[1]) = set;
> +  TYPE_ALIAS_SET (shadow_ptr_types[1]) = asan_shadow_set;
>    shadow_ptr_types[1] = build_pointer_type (shadow_ptr_types[1]);
>  }
>
> --- gcc/cfgexpand.c.jj  2012-10-11 19:09:43.000000000 +0200
> +++ gcc/cfgexpand.c     2012-10-12 18:08:32.823675395 +0200
> @@ -47,6 +47,7 @@ along with GCC; see the file COPYING3.
>  #include "cfgloop.h"
>  #include "regs.h" /* For reg_renumber.  */
>  #include "insn-attr.h" /* For INSN_SCHEDULING.  */
> +#include "asan.h"
>
>  /* This variable holds information helping the rewriting of SSA trees
>     into RTL.  */
> @@ -835,12 +836,16 @@ expand_one_stack_var_at (tree decl, rtx
>    set_rtl (decl, x);
>  }
>
> +DEF_VEC_I(HOST_WIDE_INT);
> +DEF_VEC_ALLOC_I(HOST_WIDE_INT,heap);
> +
>  /* A subroutine of expand_used_vars.  Give each partition representative
>     a unique location within the stack frame.  Update each partition member
>     with that location.  */
>
>  static void
> -expand_stack_vars (bool (*pred) (tree))
> +expand_stack_vars (bool (*pred) (tree), VEC(HOST_WIDE_INT, heap) **asan_vec,
> +                  VEC(tree, heap) **asan_decl_vec)


With the current implementation, the interface change can also be
avoided --- can the asan shadow var info be stashed into 'struct
stack_var' ?


Thanks,

David

>  {
>    size_t si, i, j, n = stack_vars_num;
>    HOST_WIDE_INT large_size = 0, large_alloc = 0;
> @@ -917,7 +922,21 @@ expand_stack_vars (bool (*pred) (tree))
>        alignb = stack_vars[i].alignb;
>        if (alignb * BITS_PER_UNIT <= MAX_SUPPORTED_STACK_ALIGNMENT)
>         {
> -         offset = alloc_stack_frame_space (stack_vars[i].size, alignb);
> +         if (flag_asan)
> +           {
> +             HOST_WIDE_INT prev_offset = frame_offset;
> +             offset
> +               = alloc_stack_frame_space (stack_vars[i].size
> +                                          + ASAN_RED_ZONE_SIZE,
> +                                          MAX (alignb, ASAN_RED_ZONE_SIZE));
> +             VEC_safe_push (HOST_WIDE_INT, heap, *asan_vec,
> +                            prev_offset);
> +             VEC_safe_push (HOST_WIDE_INT, heap, *asan_vec,
> +                            offset + stack_vars[i].size);
> +             VEC_safe_push (tree, heap, *asan_decl_vec, decl);
> +           }
> +         else
> +           offset = alloc_stack_frame_space (stack_vars[i].size, alignb);
>           base = virtual_stack_vars_rtx;
>           base_align = crtl->max_used_stack_slot_alignment;
>         }
> @@ -1055,8 +1074,9 @@ static bool
>  defer_stack_allocation (tree var, bool toplevel)
>  {
>    /* If stack protection is enabled, *all* stack variables must be deferred,
> -     so that we can re-order the strings to the top of the frame.  */
> -  if (flag_stack_protect)
> +     so that we can re-order the strings to the top of the frame.
> +     Similarly for Address Sanitizer.  */
> +  if (flag_stack_protect || flag_asan)
>      return true;
>
>    /* We handle "large" alignment via dynamic allocation.  We want to handle
> @@ -1446,11 +1466,12 @@ estimated_stack_frame_size (struct cgrap
>
>  /* Expand all variables used in the function.  */
>
> -static void
> +static rtx
>  expand_used_vars (void)
>  {
>    tree var, outer_block = DECL_INITIAL (current_function_decl);
>    VEC(tree,heap) *maybe_local_decls = NULL;
> +  rtx var_end_seq = NULL_RTX;
>    struct pointer_map_t *ssa_name_decls;
>    unsigned i;
>    unsigned len;
> @@ -1601,6 +1622,9 @@ expand_used_vars (void)
>    /* Assign rtl to each variable based on these partitions.  */
>    if (stack_vars_num > 0)
>      {
> +      VEC(HOST_WIDE_INT, heap) *asan_vec = NULL;
> +      VEC(tree, heap) *asan_decl_vec = NULL;
> +
>        /* Reorder decls to be protected by iterating over the variables
>          array multiple times, and allocating out of each phase in turn.  */
>        /* ??? We could probably integrate this into the qsort we did
> @@ -1609,14 +1633,37 @@ expand_used_vars (void)
>        if (has_protected_decls)
>         {
>           /* Phase 1 contains only character arrays.  */
> -         expand_stack_vars (stack_protect_decl_phase_1);
> +         expand_stack_vars (stack_protect_decl_phase_1,
> +                            &asan_vec, &asan_decl_vec);
>
>           /* Phase 2 contains other kinds of arrays.  */
>           if (flag_stack_protect == 2)
> -           expand_stack_vars (stack_protect_decl_phase_2);
> +           expand_stack_vars (stack_protect_decl_phase_2,
> +                              &asan_vec, &asan_decl_vec);
>         }
>
> -      expand_stack_vars (NULL);
> +      expand_stack_vars (NULL, &asan_vec, &asan_decl_vec);
> +
> +      if (!VEC_empty (HOST_WIDE_INT, asan_vec))
> +       {
> +         HOST_WIDE_INT prev_offset = frame_offset;
> +         HOST_WIDE_INT offset
> +           = alloc_stack_frame_space (ASAN_RED_ZONE_SIZE,
> +                                      ASAN_RED_ZONE_SIZE);
> +         VEC_safe_push (HOST_WIDE_INT, heap, asan_vec, prev_offset);
> +         VEC_safe_push (HOST_WIDE_INT, heap, asan_vec, offset);
> +
> +         var_end_seq
> +           = asan_emit_stack_protection (virtual_stack_vars_rtx,
> +                                         VEC_address (HOST_WIDE_INT,
> +                                                      asan_vec),
> +                                         VEC_address (tree,
> +                                                      asan_decl_vec),
> +                                         VEC_length (HOST_WIDE_INT,
> +                                                     asan_vec));
> +       }
> +      VEC_free (HOST_WIDE_INT, heap, asan_vec);
> +      VEC_free (tree, heap, asan_decl_vec);
>      }
>
>    fini_vars_expansion ();
> @@ -1643,6 +1690,8 @@ expand_used_vars (void)
>         frame_offset += align - 1;
>        frame_offset &= -align;
>      }
> +
> +  return var_end_seq;
>  }
>
>
> --- gcc/asan.h.jj       2012-10-11 20:13:17.000000000 +0200
> +++ gcc/asan.h  2012-10-12 17:05:21.903904415 +0200
> @@ -21,10 +21,31 @@ along with GCC; see the file COPYING3.
>  #ifndef TREE_ASAN
>  #define TREE_ASAN
>
> -extern void asan_finish_file(void);
> +extern void asan_finish_file (void);
> +extern rtx asan_emit_stack_protection (rtx, HOST_WIDE_INT *, tree *, int);
> +
> +/* Alias set for accessing the shadow memory.  */
> +extern alias_set_type asan_shadow_set;
>
>  /* Shadow memory is found at
>     (address >> ASAN_SHADOW_SHIFT) + targetm.asan_shadow_offset ().  */
>  #define ASAN_SHADOW_SHIFT      3
>
> +/* Red zone size, stack and global variables are padded by ASAN_RED_ZONE_SIZE
> +   up to 2 * ASAN_RED_ZONE_SIZE - 1 bytes.  */
> +#define ASAN_RED_ZONE_SIZE     32
> +
> +/* Shadow memory values for stack protection.  Left is below protected vars,
> +   the first pointer in stack corresponding to that offset contains
> +   ASAN_STACK_FRAME_MAGIC word, the second pointer to a string describing
> +   the frame.  Middle is for padding in between variables, right is
> +   above the last protected variable and partial immediately after variables
> +   up to ASAN_RED_ZONE_SIZE alignment.  */
> +#define ASAN_STACK_MAGIC_LEFT          0xf1
> +#define ASAN_STACK_MAGIC_MIDDLE                0xf2
> +#define ASAN_STACK_MAGIC_RIGHT         0xf3
> +#define ASAN_STACK_MAGIC_PARTIAL       0xf4
> +
> +#define ASAN_STACK_FRAME_MAGIC 0x41b58ab3
> +
>  #endif /* TREE_ASAN */
> --- gcc/toplev.c.jj     2012-10-11 19:20:01.000000000 +0200
> +++ gcc/toplev.c        2012-10-12 12:10:46.548879467 +0200
> @@ -1544,7 +1544,9 @@ process_options (void)
>      }
>
>    /* Address Sanitizer needs porting to each target architecture.  */
> -  if (flag_asan && targetm.asan_shadow_offset == NULL)
> +  if (flag_asan
> +      && (targetm.asan_shadow_offset == NULL
> +         || !FRAME_GROWS_DOWNWARD))
>      {
>        warning (0, "-fasan not supported for this target");
>        flag_asan = 0;
>
>         Jakub
Jakub Jelinek Oct. 12, 2012, 6:10 p.m. UTC | #2
On Fri, Oct 12, 2012 at 10:52:04AM -0700, Xinliang David Li wrote:
> This is related to the way how you implement it. Emitting the stack
> shadow initialization code in GIMPLE would solve the problem. I think
> that would be cleaner.

You'd need to duplicate all the stack slot sharing code, or adjust it so
that it would be usable for both RTL and GIMPLE.  IMHO much uglier.

I don't see how it would solve the problem of where to put the cleanup
sequence.  Either you'd put it before tail calls and allow them to be
tail call optimized (but you don't know at the GIMPLE level whether
they will or won't be), or you'd just put it after the calls, but then
they would never be tail call optimized.
It is really not a big deal to put the cleanup sequence somewhere:
it is easy to disable tail call optimizations if var_end_seq is non-NULL,
if we decide that is the way to go, or it is easy to put it before the
calls.
The reason I haven't implemented it yet is that I ran out of hacking
time for today.

> > +static rtx
> > +asan_shadow_cst (unsigned char shadow_bytes[4])
> 
> Missing comments.

Will add.
> 
> > +{
> > +  int i;
> > +  unsigned HOST_WIDE_INT val = 0;
> > +  gcc_assert (WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN);
> > +  for (i = 0; i < 4; i++)
> > +    val |= (unsigned HOST_WIDE_INT) shadow_bytes[BYTES_BIG_ENDIAN ? 3 - i : i]
> > +          << (BITS_PER_UNIT * i);
> > +  return GEN_INT (trunc_int_for_mode (val, SImode));
> > +}
> > +
> > +rtx
> > +asan_emit_stack_protection (rtx base, HOST_WIDE_INT *offsets, tree *decls,
> > +                           int length)
> 
> Missing comments.

Likewise.

> See my previous comment -- having this in rtl form makes the code less
> readable and less maintainable.

I disagree about that.  I know in LLVM asan is implemented as a single pass
transforming everything at once, but I don't think that is the best thing to
do in GCC, actually I think it is better to let the instrumentation phase
just emit an artificial builtin call instead of the shadow mem address
computation, read, test, and conditional call (e.g. because then it won't
prevent vectorization), and either lower it at some later point during
GIMPLE, or just expand it to RTL directly.
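For concreteness, the kind of check such an artificial builtin could
eventually be lowered to might look roughly like this.  This is a
self-contained toy: shadow, check1, and report are invented stand-ins for
the real shadow memory, the expanded check, and the __asan_report_*
runtime entry points; the real lowering would emit this as GIMPLE or RTL,
not C.

```c
#include <stdint.h>

#define ASAN_SHADOW_SHIFT 3

static unsigned char shadow[256];  /* toy shadow memory */
static int reported;               /* stands in for __asan_report_load1 */

static void
report (uintptr_t addr)
{
  (void) addr;
  reported = 1;
}

/* What a hypothetical one-byte check builtin could expand to: load the
   shadow byte for ADDR.  Zero means the whole 8-byte granule is
   addressable; a small positive value K means only its first K bytes
   are, so the access is bad if its offset within the granule is >= K.  */
static void
check1 (uintptr_t addr)
{
  signed char s = (signed char) shadow[addr >> ASAN_SHADOW_SHIFT];
  if (s != 0 && (signed char) (addr & 7) >= s)
    report (addr);
}
```

Keeping the instrumentation as one opaque call until late also means
earlier GIMPLE passes see a single statement rather than this load, test,
and branch, which is the vectorization argument above.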

> >  /* A subroutine of expand_used_vars.  Give each partition representative
> >     a unique location within the stack frame.  Update each partition member
> >     with that location.  */
> >
> >  static void
> > -expand_stack_vars (bool (*pred) (tree))
> > +expand_stack_vars (bool (*pred) (tree), VEC(HOST_WIDE_INT, heap) **asan_vec,
> > +                  VEC(tree, heap) **asan_decl_vec)
> 
> 
> With the current implementation, the interface change can also be
> avoided --- can the asan shadow var info be stashed into 'struct
> stack_var' ?

Yeah, sure, it can be grouped into a single struct.  But it is just
an interface between a helper function and the caller anyway; the reason
for the separate expand_stack_vars function is that it needs to be called
up to 3 times for -fstack-protector support.

	Jakub
Xinliang David Li Oct. 12, 2012, 6:18 p.m. UTC | #3
On Fri, Oct 12, 2012 at 11:10 AM, Jakub Jelinek <jakub@redhat.com> wrote:
> On Fri, Oct 12, 2012 at 10:52:04AM -0700, Xinliang David Li wrote:
>> This is related to the way how you implement it. Emitting the stack
>> shadow initialization code in GIMPLE would solve the problem. I think
>> that would be cleaner.
>
> You'd need to duplicate all the stack slot sharing code, or adjust it so
> that it would be usable for both RTL and GIMPLE.  IMHO much uglier.

I don't think you can have stack slot sharing with asan, can you?
Unless the shadow initialization/clearing code is inserted at the
entry and exit of each lexical scope.

>
> I don't see how it would solve the problem where to put the cleanup
> sequence.  Either you'd put it before tail calls and allow them to be
> tail call optimized (but you don't know at the GIMPLE level whether
> they will be or won't be tail call optimized), or you'd just put them
> after the calls, but then they wouldn't be tail call optimized ever.

Yes -- that is a more natural way to disable tail calls than
checking the asan flag :).

> It is really not a big deal to put the cleanup sequence somewhere,
> it is easy to disable tail call optimizations if var_end_seq is non-NULL,
> if we decide that is the way to go, or it is easy to put it before the
> calls.
> The reason I haven't implemented it is because I run out of hacking time for
> today.
>
>> > +static rtx
>> > +asan_shadow_cst (unsigned char shadow_bytes[4])
>>
>> Missing comments.
>
> Will add.
>>
>> > +{
>> > +  int i;
>> > +  unsigned HOST_WIDE_INT val = 0;
>> > +  gcc_assert (WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN);
>> > +  for (i = 0; i < 4; i++)
>> > +    val |= (unsigned HOST_WIDE_INT) shadow_bytes[BYTES_BIG_ENDIAN ? 3 - i : i]
>> > +          << (BITS_PER_UNIT * i);
>> > +  return GEN_INT (trunc_int_for_mode (val, SImode));
>> > +}
>> > +
>> > +rtx
>> > +asan_emit_stack_protection (rtx base, HOST_WIDE_INT *offsets, tree *decls,
>> > +                           int length)
>>
>> Missing comments.
>
> Likewise.
>
>> See my previous comment -- having this in rtl form makes the code less
>> readable and less maintainable.
>
> I disagree about that.  I know in LLVM asan is implemented as a single pass
> transforming everything at once, but I don't think that is the best thing to
> do in GCC, actually I think it is better to let the instrumentation phase
> just emit an artificial builtin call instead of the shadow mem address
> computation, read, test, and conditional call (e.g. because then it won't
> prevent vectorization), and either lower it at some later point during
> GIMPLE, or just expand it to RTL directly.

The builtin idea sounds good.  Without that, splitting the asan
instrumentation code across two places sounds unreasonable to me --
especially for a high-level transformation like this.

thanks,

David

>
>> >  /* A subroutine of expand_used_vars.  Give each partition representative
>> >     a unique location within the stack frame.  Update each partition member
>> >     with that location.  */
>> >
>> >  static void
>> > -expand_stack_vars (bool (*pred) (tree))
>> > +expand_stack_vars (bool (*pred) (tree), VEC(HOST_WIDE_INT, heap) **asan_vec,
>> > +                  VEC(tree, heap) **asan_decl_vec)
>>
>>
>> With the current implementation, the interface change can also be
>> avoided --- can the asan shadow var info be stashed into 'struct
>> stack_var' ?
>
> Yeah, sure, it can be grouped into a single struct.  But it is just
> an interface between a helper function and the caller anyway, the reason
> for the separate expand_stack_vars function is that it needs to be called
> up to 3 times for -fstack-protector support.
>
>         Jakub

Patch

--- gcc/Makefile.in.jj	2012-10-11 20:13:17.000000000 +0200
+++ gcc/Makefile.in	2012-10-12 18:11:10.491807366 +0200
@@ -2204,7 +2204,7 @@  stor-layout.o : stor-layout.c $(CONFIG_H
 asan.o : asan.c asan.h $(CONFIG_H) pointer-set.h \
    $(SYSTEM_H) $(TREE_H) $(GIMPLE_H) \
    output.h $(DIAGNOSTIC_H) coretypes.h $(TREE_DUMP_H) $(FLAGS_H) \
-   tree-pretty-print.h $(TARGET_H)
+   tree-pretty-print.h $(TARGET_H) $(EXPR_H) $(OPTABS_H)
 tree-ssa-tail-merge.o: tree-ssa-tail-merge.c \
    $(SYSTEM_H) $(CONFIG_H) coretypes.h $(TM_H) $(BITMAP_H) \
    $(FLAGS_H) $(TM_P_H) $(BASIC_BLOCK_H) \
--- gcc/asan.c.jj	2012-10-12 10:02:10.000000000 +0200
+++ gcc/asan.c	2012-10-12 18:00:45.529248028 +0200
@@ -43,6 +43,8 @@  along with GCC; see the file COPYING3.
 #include "asan.h"
 #include "gimple-pretty-print.h"
 #include "target.h"
+#include "expr.h"
+#include "optabs.h"
 
 /*
  AddressSanitizer finds out-of-bounds and use-after-free bugs 
@@ -79,10 +81,180 @@  along with GCC; see the file COPYING3.
  to create redzones for stack and global object and poison them.
 */
 
+alias_set_type asan_shadow_set = -1;
+
 /* Pointer types to 1 resp. 2 byte integers in shadow memory.  A separate
    alias set is used for all shadow memory accesses.  */
 static GTY(()) tree shadow_ptr_types[2];
 
+static rtx
+asan_shadow_cst (unsigned char shadow_bytes[4])
+{
+  int i;
+  unsigned HOST_WIDE_INT val = 0;
+  gcc_assert (WORDS_BIG_ENDIAN == BYTES_BIG_ENDIAN);
+  for (i = 0; i < 4; i++)
+    val |= (unsigned HOST_WIDE_INT) shadow_bytes[BYTES_BIG_ENDIAN ? 3 - i : i]
+	   << (BITS_PER_UNIT * i);
+  return GEN_INT (trunc_int_for_mode (val, SImode));
+}
+
+rtx
+asan_emit_stack_protection (rtx base, HOST_WIDE_INT *offsets, tree *decls,
+			    int length)
+{
+  rtx shadow_base, shadow_mem, ret, mem;
+  unsigned char shadow_bytes[4];
+  HOST_WIDE_INT base_offset = offsets[length - 1], offset, prev_offset;
+  HOST_WIDE_INT last_offset, last_size;
+  int l;
+  unsigned char cur_shadow_byte = ASAN_STACK_MAGIC_LEFT;
+  static pretty_printer pp;
+  static bool pp_initialized;
+  const char *buf;
+  size_t len;
+  tree str_cst;
+
+  /* First of all, prepare the description string.  */
+  if (!pp_initialized)
+    {
+      pp_construct (&pp, /* prefix */NULL, /* line-width */0);
+      pp_initialized = true;
+    }
+  pp_clear_output_area (&pp);
+  if (DECL_NAME (current_function_decl))
+    pp_base_tree_identifier (&pp, DECL_NAME (current_function_decl));
+  else
+    pp_string (&pp, "<unknown>");
+  pp_space (&pp);
+  pp_decimal_int (&pp, length / 2 - 1);
+  pp_space (&pp);
+  for (l = length - 2; l; l -= 2)
+    {
+      tree decl = decls[l / 2 - 1];
+      pp_wide_integer (&pp, offsets[l] - base_offset);
+      pp_space (&pp);
+      pp_wide_integer (&pp, offsets[l - 1] - offsets[l]);
+      pp_space (&pp);
+      if (DECL_P (decl) && DECL_NAME (decl))
+	{
+	  pp_decimal_int (&pp, IDENTIFIER_LENGTH (DECL_NAME (decl)));
+	  pp_space (&pp);
+	  pp_base_tree_identifier (&pp, DECL_NAME (decl));
+	}
+      else
+	pp_string (&pp, "9 <unknown>");
+      pp_space (&pp);
+    }
+  buf = pp_base_formatted_text (&pp);
+  len = strlen (buf);
+  str_cst = build_string (len + 1, buf);
+  TREE_TYPE (str_cst)
+    = build_array_type (char_type_node, build_index_type (size_int (len)));
+  TREE_READONLY (str_cst) = 1;
+  TREE_STATIC (str_cst) = 1;
+  str_cst = build1 (ADDR_EXPR, build_pointer_type (char_type_node), str_cst);
+
+  /* Emit the prologue sequence.  */
+  base = expand_binop (Pmode, add_optab, base, GEN_INT (base_offset),
+		       NULL_RTX, 1, OPTAB_DIRECT);
+  mem = gen_rtx_MEM (ptr_mode, base);
+  emit_move_insn (mem, GEN_INT (ASAN_STACK_FRAME_MAGIC));
+  mem = adjust_address (mem, VOIDmode, GET_MODE_SIZE (ptr_mode));
+  emit_move_insn (mem, expand_normal (str_cst));
+  shadow_base = expand_binop (Pmode, lshr_optab, base,
+			      GEN_INT (ASAN_SHADOW_SHIFT),
+			      NULL_RTX, 1, OPTAB_DIRECT);
+  shadow_base = expand_binop (Pmode, add_optab, shadow_base,
+			      GEN_INT (targetm.asan_shadow_offset ()),
+			      NULL_RTX, 1, OPTAB_DIRECT);
+  gcc_assert (asan_shadow_set != -1
+	      && (ASAN_RED_ZONE_SIZE >> ASAN_SHADOW_SHIFT) == 4);
+  shadow_mem = gen_rtx_MEM (SImode, shadow_base);
+  set_mem_alias_set (shadow_mem, asan_shadow_set);
+  prev_offset = base_offset;
+  for (l = length; l; l -= 2)
+    {
+      if (l == 2)
+	cur_shadow_byte = ASAN_STACK_MAGIC_RIGHT;
+      offset = offsets[l - 1];
+      if ((offset - base_offset) & (ASAN_RED_ZONE_SIZE - 1))
+	{
+	  int i;
+	  HOST_WIDE_INT aoff
+	    = base_offset + ((offset - base_offset)
+			     & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
+	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
+				       (aoff - prev_offset)
+				       >> ASAN_SHADOW_SHIFT);
+	  prev_offset = aoff;
+	  for (i = 0; i < 4; i++, aoff += (1 << ASAN_SHADOW_SHIFT))
+	    if (aoff < offset)
+	      {
+		if (aoff < offset - (1 << ASAN_SHADOW_SHIFT) + 1)
+		  shadow_bytes[i] = 0;
+		else
+		  shadow_bytes[i] = offset - aoff;
+	      }
+	    else
+	      shadow_bytes[i] = ASAN_STACK_MAGIC_PARTIAL;
+	  emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
+	  offset = aoff;
+	}
+      while (offset <= offsets[l - 2] - ASAN_RED_ZONE_SIZE)
+	{
+	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
+				       (offset - prev_offset)
+				       >> ASAN_SHADOW_SHIFT);
+	  prev_offset = offset;
+	  memset (shadow_bytes, cur_shadow_byte, 4);
+	  emit_move_insn (shadow_mem, asan_shadow_cst (shadow_bytes));
+	  offset += ASAN_RED_ZONE_SIZE;
+	}
+      cur_shadow_byte = ASAN_STACK_MAGIC_MIDDLE;
+    }
+
+  /* Construct epilogue sequence.  */
+  start_sequence ();
+
+  shadow_mem = gen_rtx_MEM (SImode, shadow_base);
+  set_mem_alias_set (shadow_mem, asan_shadow_set);
+  prev_offset = base_offset;
+  last_offset = base_offset;
+  last_size = 0;
+  for (l = length; l; l -= 2)
+    {
+      offset = base_offset + ((offsets[l - 1] - base_offset)
+			     & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1));
+      if (last_offset + last_size != offset)
+	{
+	  shadow_mem = adjust_address (shadow_mem, VOIDmode,
+				       (last_offset - prev_offset)
+				       >> ASAN_SHADOW_SHIFT);
+	  prev_offset = last_offset;
+	  clear_storage (shadow_mem, GEN_INT (last_size >> ASAN_SHADOW_SHIFT),
+			 BLOCK_OP_NORMAL);
+	  last_offset = offset;
+	  last_size = 0;
+	}
+      last_size += base_offset + ((offsets[l - 2] - base_offset)
+				  & ~(ASAN_RED_ZONE_SIZE - HOST_WIDE_INT_1))
+		   - offset;
+    }
+  if (last_size)
+    {
+      shadow_mem = adjust_address (shadow_mem, VOIDmode,
+				   (last_offset - prev_offset)
+				   >> ASAN_SHADOW_SHIFT);
+      clear_storage (shadow_mem, GEN_INT (last_size >> ASAN_SHADOW_SHIFT),
+		     BLOCK_OP_NORMAL);
+    }
+
+  ret = get_insns ();
+  end_sequence ();
+  return ret;
+}
+
 /* Construct a function tree for __asan_report_{load,store}{1,2,4,8,16}.
    IS_STORE is either 1 (for a store) or 0 (for a load).
    SIZE_IN_BYTES is one of 1, 2, 4, 8, 16.  */
@@ -401,12 +573,12 @@  asan_finish_file (void)
 static void
 asan_init_shadow_ptr_types (void)
 {
-  alias_set_type set = new_alias_set ();
+  asan_shadow_set = new_alias_set ();
   shadow_ptr_types[0] = build_distinct_type_copy (unsigned_char_type_node);
-  TYPE_ALIAS_SET (shadow_ptr_types[0]) = set;
+  TYPE_ALIAS_SET (shadow_ptr_types[0]) = asan_shadow_set;
   shadow_ptr_types[0] = build_pointer_type (shadow_ptr_types[0]);
   shadow_ptr_types[1] = build_distinct_type_copy (short_unsigned_type_node);
-  TYPE_ALIAS_SET (shadow_ptr_types[1]) = set;
+  TYPE_ALIAS_SET (shadow_ptr_types[1]) = asan_shadow_set;
   shadow_ptr_types[1] = build_pointer_type (shadow_ptr_types[1]);
 }
 
--- gcc/cfgexpand.c.jj	2012-10-11 19:09:43.000000000 +0200
+++ gcc/cfgexpand.c	2012-10-12 18:08:32.823675395 +0200
@@ -47,6 +47,7 @@  along with GCC; see the file COPYING3.
 #include "cfgloop.h"
 #include "regs.h" /* For reg_renumber.  */
 #include "insn-attr.h" /* For INSN_SCHEDULING.  */
+#include "asan.h"
 
 /* This variable holds information helping the rewriting of SSA trees
    into RTL.  */
@@ -835,12 +836,16 @@  expand_one_stack_var_at (tree decl, rtx
   set_rtl (decl, x);
 }
 
+DEF_VEC_I(HOST_WIDE_INT);
+DEF_VEC_ALLOC_I(HOST_WIDE_INT,heap);
+
 /* A subroutine of expand_used_vars.  Give each partition representative
    a unique location within the stack frame.  Update each partition member
    with that location.  */
 
 static void
-expand_stack_vars (bool (*pred) (tree))
+expand_stack_vars (bool (*pred) (tree), VEC(HOST_WIDE_INT, heap) **asan_vec,
+		   VEC(tree, heap) **asan_decl_vec)
 {
   size_t si, i, j, n = stack_vars_num;
   HOST_WIDE_INT large_size = 0, large_alloc = 0;
@@ -917,7 +922,21 @@  expand_stack_vars (bool (*pred) (tree))
       alignb = stack_vars[i].alignb;
       if (alignb * BITS_PER_UNIT <= MAX_SUPPORTED_STACK_ALIGNMENT)
 	{
-	  offset = alloc_stack_frame_space (stack_vars[i].size, alignb);
+	  if (flag_asan)
+	    {
+	      HOST_WIDE_INT prev_offset = frame_offset;
+	      offset
+		= alloc_stack_frame_space (stack_vars[i].size
+					   + ASAN_RED_ZONE_SIZE,
+					   MAX (alignb, ASAN_RED_ZONE_SIZE));
+	      VEC_safe_push (HOST_WIDE_INT, heap, *asan_vec,
+			     prev_offset);
+	      VEC_safe_push (HOST_WIDE_INT, heap, *asan_vec,
+			     offset + stack_vars[i].size);
+	      VEC_safe_push (tree, heap, *asan_decl_vec, decl);
+	    }
+	  else
+	    offset = alloc_stack_frame_space (stack_vars[i].size, alignb);
 	  base = virtual_stack_vars_rtx;
 	  base_align = crtl->max_used_stack_slot_alignment;
 	}
@@ -1055,8 +1074,9 @@  static bool
 defer_stack_allocation (tree var, bool toplevel)
 {
   /* If stack protection is enabled, *all* stack variables must be deferred,
-     so that we can re-order the strings to the top of the frame.  */
-  if (flag_stack_protect)
+     so that we can re-order the strings to the top of the frame.
+     Similarly for Address Sanitizer.  */
+  if (flag_stack_protect || flag_asan)
     return true;
 
   /* We handle "large" alignment via dynamic allocation.  We want to handle
@@ -1446,11 +1466,12 @@  estimated_stack_frame_size (struct cgrap
 
 /* Expand all variables used in the function.  */
 
-static void
+static rtx
 expand_used_vars (void)
 {
   tree var, outer_block = DECL_INITIAL (current_function_decl);
   VEC(tree,heap) *maybe_local_decls = NULL;
+  rtx var_end_seq = NULL_RTX;
   struct pointer_map_t *ssa_name_decls;
   unsigned i;
   unsigned len;
@@ -1601,6 +1622,9 @@  expand_used_vars (void)
   /* Assign rtl to each variable based on these partitions.  */
   if (stack_vars_num > 0)
     {
+      VEC(HOST_WIDE_INT, heap) *asan_vec = NULL;
+      VEC(tree, heap) *asan_decl_vec = NULL;
+
       /* Reorder decls to be protected by iterating over the variables
 	 array multiple times, and allocating out of each phase in turn.  */
       /* ??? We could probably integrate this into the qsort we did
@@ -1609,14 +1633,37 @@  expand_used_vars (void)
       if (has_protected_decls)
 	{
 	  /* Phase 1 contains only character arrays.  */
-	  expand_stack_vars (stack_protect_decl_phase_1);
+	  expand_stack_vars (stack_protect_decl_phase_1,
+			     &asan_vec, &asan_decl_vec);
 
 	  /* Phase 2 contains other kinds of arrays.  */
 	  if (flag_stack_protect == 2)
-	    expand_stack_vars (stack_protect_decl_phase_2);
+	    expand_stack_vars (stack_protect_decl_phase_2,
+			       &asan_vec, &asan_decl_vec);
 	}
 
-      expand_stack_vars (NULL);
+      expand_stack_vars (NULL, &asan_vec, &asan_decl_vec);
+
+      if (!VEC_empty (HOST_WIDE_INT, asan_vec))
+	{
+	  HOST_WIDE_INT prev_offset = frame_offset;
+	  HOST_WIDE_INT offset
+	    = alloc_stack_frame_space (ASAN_RED_ZONE_SIZE,
+				       ASAN_RED_ZONE_SIZE);
+	  VEC_safe_push (HOST_WIDE_INT, heap, asan_vec, prev_offset);
+	  VEC_safe_push (HOST_WIDE_INT, heap, asan_vec, offset);
+
+	  var_end_seq
+	    = asan_emit_stack_protection (virtual_stack_vars_rtx,
+					  VEC_address (HOST_WIDE_INT,
+						       asan_vec),
+					  VEC_address (tree,
+						       asan_decl_vec),
+					  VEC_length (HOST_WIDE_INT,
+						      asan_vec));
+	}
+      VEC_free (HOST_WIDE_INT, heap, asan_vec);
+      VEC_free (tree, heap, asan_decl_vec);
     }
 
   fini_vars_expansion ();
@@ -1643,6 +1690,8 @@  expand_used_vars (void)
 	frame_offset += align - 1;
       frame_offset &= -align;
     }
+
+  return var_end_seq;
 }
 
 
--- gcc/asan.h.jj	2012-10-11 20:13:17.000000000 +0200
+++ gcc/asan.h	2012-10-12 17:05:21.903904415 +0200
@@ -21,10 +21,31 @@  along with GCC; see the file COPYING3.
 #ifndef TREE_ASAN
 #define TREE_ASAN
 
-extern void asan_finish_file(void);
+extern void asan_finish_file (void);
+extern rtx asan_emit_stack_protection (rtx, HOST_WIDE_INT *, tree *, int);
+
+/* Alias set for accessing the shadow memory.  */
+extern alias_set_type asan_shadow_set;
 
 /* Shadow memory is found at
    (address >> ASAN_SHADOW_SHIFT) + targetm.asan_shadow_offset ().  */
 #define ASAN_SHADOW_SHIFT	3
 
+/* Red zone size; stack and global variables are padded with between
+   ASAN_RED_ZONE_SIZE and 2 * ASAN_RED_ZONE_SIZE - 1 bytes of red zone.  */
+#define ASAN_RED_ZONE_SIZE	32
+
+/* Shadow memory values for stack protection.  LEFT marks the red zone
+   below the protected vars; the first pointer at that stack offset holds
+   the ASAN_STACK_FRAME_MAGIC word and the second points to a string
+   describing the frame.  MIDDLE marks padding between variables, RIGHT
+   marks the red zone above the last protected variable, and PARTIAL marks
+   the bytes after a variable up to ASAN_RED_ZONE_SIZE alignment.  */
+#define ASAN_STACK_MAGIC_LEFT		0xf1
+#define ASAN_STACK_MAGIC_MIDDLE		0xf2
+#define ASAN_STACK_MAGIC_RIGHT		0xf3
+#define ASAN_STACK_MAGIC_PARTIAL	0xf4
+
+#define ASAN_STACK_FRAME_MAGIC	0x41b58ab3
+
 #endif /* TREE_ASAN */
--- gcc/toplev.c.jj	2012-10-11 19:20:01.000000000 +0200
+++ gcc/toplev.c	2012-10-12 12:10:46.548879467 +0200
@@ -1544,7 +1544,9 @@  process_options (void)
     }
 
   /* Address Sanitizer needs porting to each target architecture.  */
-  if (flag_asan && targetm.asan_shadow_offset == NULL)
+  if (flag_asan
+      && (targetm.asan_shadow_offset == NULL
+	  || !FRAME_GROWS_DOWNWARD))
     {
       warning (0, "-fasan not supported for this target");
       flag_asan = 0;