
Fix sched ICE with prefetch (PR rtl-optimization/56117)

Message ID 20130128141408.GG4385@tucnak.redhat.com

Commit Message

Jakub Jelinek Jan. 28, 2013, 2:14 p.m. UTC
Hi!

We ICE on the following testcase when using cselib, because
cselib_lookup* is never called on the PREFETCH argument, and
add_insn_mem_dependence calls cselib_subst_to_values on it, which
assumes cselib_lookup* already happened on it earlier.
For MEMs sched_analyze_2 calls cselib_lookup_from_insn, but for PREFETCHes
it didn't.

Fixed thusly, bootstrapped/regtested on x86_64-linux and i686-linux,
ok for trunk?

2013-01-28  Jakub Jelinek  <jakub@redhat.com>

	PR rtl-optimization/56117
	* sched-deps.c (sched_analyze_2) <case PREFETCH>: For use_cselib
	call cselib_lookup_from_insn on the MEM before calling
	add_insn_mem_dependence.

	* gcc.dg/pr56117.c: New test.


	Jakub

Comments

Jeff Law Jan. 28, 2013, 4:39 p.m. UTC | #1
On 01/28/2013 07:14 AM, Jakub Jelinek wrote:
> Hi!
>
> We ICE on the following testcase when using cselib, because
> cselib_lookup* is never called on the PREFETCH argument, and
> add_insn_mem_dependence calls cselib_subst_to_values on it, which
> assumes cselib_lookup* already happened on it earlier.
> For MEMs sched_analyze_2 calls cselib_lookup_from_insn, but for PREFETCHes
> it didn't.
>
> Fixed thusly, bootstrapped/regtested on x86_64-linux and i686-linux,
> ok for trunk?
>
> 2013-01-28  Jakub Jelinek  <jakub@redhat.com>
>
> 	PR rtl-optimization/56117
> 	* sched-deps.c (sched_analyze_2) <case PREFETCH>: For use_cselib
> 	call cselib_lookup_from_insn on the MEM before calling
> 	add_insn_mem_dependence.
>
> 	* gcc.dg/pr56117.c: New test.
I'm assuming that we don't need the shallow_copy_rtx call and related 
code because in the PREFETCH case we generate a new MEM and the 
underlying address can be safely shared.  Right?

If that's true, OK.

jeff
Jakub Jelinek Jan. 28, 2013, 4:47 p.m. UTC | #2
On Mon, Jan 28, 2013 at 09:39:00AM -0700, Jeff Law wrote:
> I'm assuming that we don't need the shallow_copy_rtx call and
> related code because in the PREFETCH case we generate a new MEM and
> the underlying address can be safely shared.  Right?

AFAIK cselib_lookup* never modifies the rtx it is passed,
shallow_copy_rtx in the MEM case is for:
          t = shallow_copy_rtx (dest);
          cselib_lookup_from_insn (XEXP (t, 0), address_mode, 1,
                                   GET_MODE (t), insn);
          XEXP (t, 0)
            = cselib_subst_to_values_from_insn (XEXP (t, 0), GET_MODE (t),
                                                insn);
where we modify XEXP (t, 0) in the last assignment and don't want to change
XEXP (dest, 0).

	Jakub

Patch

--- gcc/sched-deps.c.jj	2013-01-16 19:58:42.000000000 +0100
+++ gcc/sched-deps.c	2013-01-28 09:43:33.248657691 +0100
@@ -2720,8 +2720,12 @@  sched_analyze_2 (struct deps_desc *deps,
 	 prefetch has only the start address but it is better to have
 	 something than nothing.  */
       if (!deps->readonly)
-	add_insn_mem_dependence (deps, true, insn,
-				 gen_rtx_MEM (Pmode, XEXP (PATTERN (insn), 0)));
+	{
+	  rtx x = gen_rtx_MEM (Pmode, XEXP (PATTERN (insn), 0));
+	  if (sched_deps_info->use_cselib)
+	    cselib_lookup_from_insn (x, Pmode, true, VOIDmode, insn);
+	  add_insn_mem_dependence (deps, true, insn, x);
+	}
       break;
 
     case UNSPEC_VOLATILE:
--- gcc/testsuite/gcc.dg/pr56117.c.jj	2013-01-28 09:47:21.244381559 +0100
+++ gcc/testsuite/gcc.dg/pr56117.c	2013-01-28 09:46:31.000000000 +0100
@@ -0,0 +1,9 @@ 
+/* PR rtl-optimization/56117 */
+/* { dg-do compile } */
+/* { dg-options "-O2 -fsched2-use-superblocks" } */
+
+void
+foo (void *p)
+{
+  __builtin_prefetch (p);
+}