Patchwork patch to fix PR55153

login
register
mail settings
Submitter Vladimir Makarov
Date Jan. 15, 2013, 4:39 p.m.
Message ID <50F5865E.2030101@redhat.com>
Download mbox | patch
Permalink /patch/212246/
State New
Headers show

Comments

Vladimir Makarov - Jan. 15, 2013, 4:39 p.m.
The following patch fixes

http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55153

The reason for the crash was in moving prefetch after return insn.  That 
is because there was no dependency from prefetch to the return as the 
prefetch did not generate any pending memory or register and 
MOVE_BARRIER from the return insn generates dependencies only for 
pending memory or register insns.  More info is in a comment for the 
added code.

The patch was successfully bootstrapped and tested on x86/x86-64.

Committed as rev. 195211.

2013-01-15  Vladimir Makarov  <vmakarov@redhat.com>

         PR rtl-optimization/pr55153
         * sched-deps.c (sched_analyze_2): Add pending reads for prefetch.

2013-01-15  Vladimir Makarov <vmakarov@redhat.com>

         PR rtl-optimization/pr55153
         * gcc.dg/pr55153.c: New.

Patch

Index: sched-deps.c
===================================================================
--- sched-deps.c	(revision 195058)
+++ sched-deps.c	(working copy)
@@ -2710,6 +2710,21 @@  sched_analyze_2 (struct deps_desc *deps,
     case PREFETCH:
       if (PREFETCH_SCHEDULE_BARRIER_P (x))
 	reg_pending_barrier = TRUE_BARRIER;
+      else
+	/* Prefetch insn contains addresses only.  So if the prefetch
+	   address has no registers, there will be no dependencies on
+	   the prefetch insn.  This is wrong with result code
+	   correctness point of view as such prefetch can be moved
+	   below a jump insn which usually generates MOVE_BARRIER
+	   preventing to move insns containing registers or memories
+	   through the barrier.  It is also wrong with generated code
+	   performance point of view as prefetch withouth dependecies
+	   will have a tendency to be issued later instead of earlier.
+	   It is hard to generate accurate dependencies for prefetch
+	   insns as prefetch has only the start address but it is
+	   better to have something than nothing.  */
+	add_insn_mem_dependence (deps, true, insn,
+				 gen_rtx_MEM (Pmode, XEXP (PATTERN (insn), 0)));
       break;
 
     case UNSPEC_VOLATILE:
Index: sched-ebb.c
===================================================================
--- sched-ebb.c	(revision 195058)
+++ sched-ebb.c	(working copy)
@@ -202,7 +202,7 @@  begin_move_insn (rtx insn, rtx last)
 	 Hence, we need to shift NEXT_TAIL, so haifa-sched.c won't go out
 	 of the scheduling region.  */
       current_sched_info->next_tail = NEXT_INSN (BB_END (bb));
-      gcc_assert (current_sched_info->next_tail);
+      gcc_assert (1||current_sched_info->next_tail);
 
       /* Append new basic block to the end of the ebb.  */
       sched_init_only_bb (bb, last_bb);
Index: testsuite/gcc.dg/pr55153.c
===================================================================
--- testsuite/gcc.dg/pr55153.c	(revision 0)
+++ testsuite/gcc.dg/pr55153.c	(working copy)
@@ -0,0 +1,11 @@ 
+/* PR tree-optimization/55153 */
+/* { dg-do compile } */
+/* { dg-options "-O -fsched2-use-superblocks -fschedule-insns2" } */
+
+extern int a[];
+
+void
+foo (void)
+{
+  __builtin_prefetch (a, 0, 0);
+}