
[committed] Fix probe into the red zone on aarch64

Message ID 42eaa7fa-ebad-6b4d-d85a-44045f5b9cec@gmail.com

Commit Message

Jeff Law Nov. 15, 2017, 6:31 a.m. UTC
Testing within Red Hat of the aarch64 stack clash bits turned up an
additional problem, one I probably should have expected.

aarch64 is a bit odd in that we may need to emit a probe in the residual
alloca space to enforce certain probing rules related to the outgoing
argument space.

When the size of the alloca space is known (and nonzero) at compile
time, we can just probe *sp as we know it's just been allocated and has
no live data.

If the size of the alloca space is not known at compile time, we have to
account for the possibility that it is zero at runtime.  Prior to this
patch we would actually probe into the red zone (the existence of which
is ABI dependent).

I didn't expect it to matter much in practice, but it turns out that
certain code in process teardown within glibc has a dynamic allocation
whose size is not a compile-time constant and is often zero at runtime.

As a result valgrind complains regularly on stack-clash-protected code
for aarch64.  Given the utility of clean valgrind runs and the
relatively low cost of a probe compared to the alloca setup, it seems
best to just emit a guarded probe of *sp when the size of the alloca
space is not known at compile time.
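The runtime effect of the guarded sequence can be sketched in C (illustrative only; the real code is emitted as RTL by anti_adjust_stack_and_probe_stack_clash, and probe_sp here is a hypothetical stand-in for the single store to *sp):

```c
#include <stddef.h>

static int probes_emitted;	/* counts simulated probes */

/* Hypothetical stand-in for the emitted probe: a real probe is a
   store through the stack pointer that touches the newly allocated
   space.  */
static void
probe_sp (void)
{
  probes_emitted++;
}

/* Runtime effect of the sequence the patch emits after a dynamic
   stack allocation of SIZE bytes: probe *sp only when SIZE != 0, so
   a zero-sized allocation never touches memory below the live stack
   (i.e. never probes the red zone).  */
static void
probe_residual (size_t size)
{
  if (size != 0)	/* the emit_cmp_and_jump_insns guard */
    probe_sp ();
}
```

The guard corresponds to the compare-and-jump the patch emits around the probe when SIZE is not a compile-time constant.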

Bootstrapped and regression tested on aarch64.  Also verified that glibc
built with -fstack-clash-protection builds and does not regress its
testsuite, and that valgrind's testsuite builds and runs with that
just-built glibc.

Installing on the trunk.

Jeff
commit 0618a201f59699d48fd68edac10d9ad9da6b4c54
Author: law <law@138bc75d-0d04-0410-961f-82ee72b054a4>
Date:   Wed Nov 15 06:30:31 2017 +0000

            * explow.c (anti_adjust_stack_and_probe_stack_clash): Avoid probing
            the red zone for stack_clash_protection_final_dynamic_probe targets
            when the total dynamic stack size is zero bytes.
    
    git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@254753 138bc75d-0d04-0410-961f-82ee72b054a4

Patch

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index c404eb8e5a7..08642663d95 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,5 +1,9 @@ 
 2017-11-14  Jeff Law  <law@redhat.com>
 
+	* explow.c (anti_adjust_stack_and_probe_stack_clash): Avoid probing
+	the red zone for stack_clash_protection_final_dynamic_probe targets
+	when the total dynamic stack size is zero bytes.
+
 	* tree-ssa-threadupdate.c (thread_through_all_blocks): Thread
 	blocks is post order.
 
diff --git a/gcc/explow.c b/gcc/explow.c
index 662865d2808..96334b2b448 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -1999,6 +1999,13 @@  anti_adjust_stack_and_probe_stack_clash (rtx size)
   if (size != CONST0_RTX (Pmode)
       && targetm.stack_clash_protection_final_dynamic_probe (residual))
     {
+      /* SIZE could be zero at runtime and in that case *sp could hold
+	 live data.  Furthermore, we don't want to probe into the red
+	 zone.
+
+	 Go ahead and just guard a probe at *sp on SIZE != 0 at runtime
+	 if SIZE is not a compile time constant.  */
+
       /* Ideally we would just probe at *sp.  However, if SIZE is not
 	 a compile-time constant, but is zero at runtime, then *sp
 	 might hold live data.  So probe at *sp if we know that
@@ -2011,9 +2018,12 @@  anti_adjust_stack_and_probe_stack_clash (rtx size)
 	}
       else
 	{
-	  emit_stack_probe (plus_constant (Pmode, stack_pointer_rtx,
-					   -GET_MODE_SIZE (word_mode)));
+	  rtx label = gen_label_rtx ();
+	  emit_cmp_and_jump_insns (size, CONST0_RTX (GET_MODE (size)),
+				   EQ, NULL_RTX, Pmode, 1, label);
+	  emit_stack_probe (stack_pointer_rtx);
 	  emit_insn (gen_blockage ());
+	  emit_label (label);
 	}
     }
 }