
[RFC] : Next stage1, refactoring: propagating rtx subclasses

Message ID 555113DE.1080201@gmail.com
State New

Commit Message

Mikhail Maltsev May 11, 2015, 8:41 p.m. UTC
On 09.05.2015 0:54, Jeff Law wrote:
> 
> Both patches are approved.  Please install onto the trunk.
> 
> jeff
> 

Sorry for the delay. When I started working on this task, I wrote that
I would test the patches on a couple of other platforms (not just x86).
I probably should have done that earlier, because I missed a couple of
important details which could have broken the build. Fortunately, I ran
several tests before merging into trunk, and I think I need some advice
on testing (or maybe some reworking).

First, I didn't realize that a lot of code in GCC (besides the targets
in gcc/config and the generated code) is compiled conditionally (that
is, guarded by #ifdefs). I also missed a couple of places in target
code that use rtx_jump_insn-related functions, which were affected by
the change of prototypes. I have attached the changes that should be
added to the patch. I don't think they can be considered "obvious", so
I'm sending them for review. I have not included the ChangeLog yet, but
the patch will need some fixing and rebasing anyway, so I'll update it
later.
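
To make the nature of these fixes concrete, here is a minimal sketch of
the conversion pattern (hypothetical code, assuming "insn" and
"new_label" are in scope; the real hunks are in the patch below). Where
the old code tested JUMP_P on a plain rtx_insn *, the new prototypes
want an rtx_jump_insn *, which dyn_cast provides:

  /* Before: JUMP_P checked the insn code at runtime, but invert_jump
     now takes an rtx_jump_insn *, so this no longer type-checks.  */
  if (JUMP_P (insn) && any_condjump_p (insn))
    invert_jump (insn, new_label, false);

  /* After: dyn_cast returns the subclass pointer, or NULL for
     non-jumps, so one test replaces JUMP_P and also satisfies the
     stricter prototype.  */
  rtx_jump_insn *jump_insn;
  if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
      && any_condjump_p (jump_insn))
    invert_jump (jump_insn, new_label, false);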

In general, is there a recommended set of targets that covers most
conditionally compiled code? Also, the GCC Wiki mentions some automated
test services and a compile farm. Is it possible to use these to test a
patch on many targets?

Finally, I could try to break the patch into smaller pieces, though I
don't know if it's worth the effort.

P.S. Bootstrapped/regtested on x86_64-unknown-linux-gnu {,-m32}
(C,C++,lto,objc,fortran,go), cross-compiled and regtested (C and C++
testsuites) on sh-elf, mips-elf, powerpc-eabisim and arm-eabi simulators.

Comments

Joseph Myers May 11, 2015, 9:21 p.m. UTC | #1
On Mon, 11 May 2015, Mikhail Maltsev wrote:

> In general, is there a recommended set of targets that covers most
> conditionally compiled code? Also, the GCC Wiki mentions some automated

See contrib/config-list.mk (note that some of those targets may have 
pre-existing build failures, and note that you need to start with a 
current trunk native compiler so that --enable-werror-always works; don't 
try to build all those cross compilers using an older GCC).
Jeff Law May 12, 2015, 8:10 p.m. UTC | #2
On 05/11/2015 02:41 PM, Mikhail Maltsev wrote:
> On 09.05.2015 0:54, Jeff Law wrote:
>>
>> Both patches are approved.  Please install onto the trunk.
>>
>> jeff
>>
>
> Sorry for the delay. When I started working on this task, I wrote that
> I would test the patches on a couple of other platforms (not just x86).
> I probably should have done that earlier, because I missed a couple of
> important details which could have broken the build. Fortunately, I ran
> several tests before merging into trunk, and I think I need some advice
> on testing (or maybe some reworking).
It happens.  This kind of problem is part of what Trevor's patches are 
improving for us.

For many years, the preferred style of coding in GCC was to create 
target macros, then conditionalize code based on those macros.  That 
results in a lot of code in GCC that is rarely actually compiled.
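
The reorg.c hunk in the patch below is a concrete instance: the
optimize_skip prototype that had to be updated is itself inside such a
guard, so it is only compiled for delay-slot targets that define one of
the ANNUL_* macros, and an x86-only bootstrap never even parses it:

  #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
  static rtx_insn_list *optimize_skip (rtx_jump_insn *);
  #endif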

>
> In general, is there a recommended set of targets that covers most
> conditionally compiled code? Also, the GCC Wiki mentions some automated
> test services and a compile farm. Is it possible to use these to test a
> patch on many targets?
There's a makefile fragment in contrib which will build a large number 
of targets; you might find it helpful.  Of course, without some baseline 
to compare against, it's of less value.


>
> Finally, I could try to break the patch into smaller pieces, though I
> don't know if it's worth the effort.
I doubt it's worth the effort at this point.


>
> P.S. Bootstrapped/regtested on x86_64-unknown-linux-gnu {,-m32}
> (C,C++,lto,objc,fortran,go), cross-compiled and regtested (C and C++
> testsuites) on sh-elf, mips-elf, powerpc-eabisim and arm-eabi simulators.
This seems like a reasonable set of targets, especially if you could add 
one cc0 target (h8/300, v850, and m68k come to mind as candidates).  I 
also doubt you need to do full testing with simulators for this work. 
I'd think that bootstrapping one target, then just building the cross 
tools for the others, would be fine.

jeff

Patch

diff --git a/gcc/config/bfin/bfin.c b/gcc/config/bfin/bfin.c
index 2768266..37f4ded 100644
--- a/gcc/config/bfin/bfin.c
+++ b/gcc/config/bfin/bfin.c
@@ -3844,7 +3844,8 @@  hwloop_optimize (hwloop_info loop)
 
   delete_insn (loop->loop_end);
   /* Insert the loop end label before the last instruction of the loop.  */
-  emit_label_before (loop->end_label, loop->last_insn);
+  emit_label_before (as_a <rtx_code_label *> (loop->end_label),
+		     loop->last_insn);
 
   return true;
 }
diff --git a/gcc/config/mips/mips.c b/gcc/config/mips/mips.c
index 16ed5f0..280738c 100644
--- a/gcc/config/mips/mips.c
+++ b/gcc/config/mips/mips.c
@@ -16799,13 +16799,14 @@  mips16_split_long_branches (void)
   do
     {
       rtx_insn *insn;
+      rtx_jump_insn *jump_insn;
 
       shorten_branches (get_insns ());
       something_changed = false;
       for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
-	if (JUMP_P (insn)
-	    && get_attr_length (insn) > 4
-	    && (any_condjump_p (insn) || any_uncondjump_p (insn)))
+	if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	    && get_attr_length (jump_insn) > 4
+	    && (any_condjump_p (jump_insn) || any_uncondjump_p (jump_insn)))
 	  {
 	    rtx old_label, temp, saved_temp;
 	    rtx_code_label *new_label;
@@ -16820,7 +16821,7 @@  mips16_split_long_branches (void)
 	    emit_move_insn (saved_temp, temp);
 
 	    /* Load the branch target into TEMP.  */
-	    old_label = JUMP_LABEL (insn);
+	    old_label = JUMP_LABEL (jump_insn);
 	    target = gen_rtx_LABEL_REF (Pmode, old_label);
 	    mips16_load_branch_target (temp, target);
 
@@ -16835,7 +16836,7 @@  mips16_split_long_branches (void)
 	       a PC-relative constant pool.  */
 	    mips16_lay_out_constants (false);
 
-	    if (simplejump_p (insn))
+	    if (simplejump_p (jump_insn))
 	      /* We're going to replace INSN with a longer form.  */
 	      new_label = NULL;
 	    else
@@ -16849,11 +16850,11 @@  mips16_split_long_branches (void)
 	    jump_sequence = get_insns ();
 	    end_sequence ();
 
-	    emit_insn_after (jump_sequence, insn);
+	    emit_insn_after (jump_sequence, jump_insn);
 	    if (new_label)
-	      invert_jump (insn, new_label, false);
+	      invert_jump (jump_insn, new_label, false);
 	    else
-	      delete_insn (insn);
+	      delete_insn (jump_insn);
 	    something_changed = true;
 	  }
     }
diff --git a/gcc/config/sh/sh.c b/gcc/config/sh/sh.c
index 9bcb423..bc1ce24 100644
--- a/gcc/config/sh/sh.c
+++ b/gcc/config/sh/sh.c
@@ -5876,7 +5876,7 @@  static void
 gen_far_branch (struct far_branch *bp)
 {
   rtx_insn *insn = bp->insert_place;
-  rtx_insn *jump;
+  rtx_jump_insn *jump;
   rtx_code_label *label = gen_label_rtx ();
   int ok;
 
@@ -5907,7 +5907,7 @@  gen_far_branch (struct far_branch *bp)
       JUMP_LABEL (jump) = pat;
     }
 
-  ok = invert_jump (insn, label, 1);
+  ok = invert_jump (as_a <rtx_jump_insn *> (insn), label, 1);
   gcc_assert (ok);
 
   /* If we are branching around a jump (rather than a return), prevent
@@ -6700,7 +6700,7 @@  split_branches (rtx_insn *first)
 		    bp->insert_place = insn;
 		    bp->address = addr;
 		  }
-		ok = redirect_jump (insn, label, 0);
+		ok = redirect_jump (as_a <rtx_jump_insn *> (insn), label, 0);
 		gcc_assert (ok);
 	      }
 	    else
@@ -6775,7 +6775,7 @@  split_branches (rtx_insn *first)
 			JUMP_LABEL (insn) = far_label;
 			LABEL_NUSES (far_label)++;
 		      }
-		    redirect_jump (insn, ret_rtx, 1);
+		    redirect_jump (as_a <rtx_jump_insn *> (insn), ret_rtx, 1);
 		    far_label = 0;
 		  }
 	      }
diff --git a/gcc/emit-rtl.c b/gcc/emit-rtl.c
index d297380..a7338ce 100644
--- a/gcc/emit-rtl.c
+++ b/gcc/emit-rtl.c
@@ -4401,11 +4401,12 @@  emit_insn_before_noloc (rtx x, rtx_insn *before, basic_block bb)
 /* Make an instruction with body X and code JUMP_INSN
    and output it before the instruction BEFORE.  */
 
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before_noloc (rtx x, rtx_insn *before)
 {
-  return emit_pattern_before_noloc (x, before, NULL_RTX, NULL,
-				    make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+		emit_pattern_before_noloc (x, before, NULL_RTX, NULL,
+					   make_jump_insn_raw));
 }
 
 /* Make an instruction with body X and code CALL_INSN
@@ -4445,12 +4446,12 @@  emit_barrier_before (rtx before)
 /* Emit the label LABEL before the insn BEFORE.  */
 
 rtx_code_label *
-emit_label_before (rtx_code_label *label, rtx_insn *before)
+emit_label_before (rtx label, rtx_insn *before)
 {
   gcc_checking_assert (INSN_UID (label) == 0);
   INSN_UID (label) = cur_insn_uid++;
   add_insn_before (label, before, NULL);
-  return label;
+  return as_a <rtx_code_label *> (label);
 }
 
 /* Helper for emit_insn_after, handles lists of instructions
@@ -4552,10 +4553,11 @@  emit_insn_after_noloc (rtx x, rtx after, basic_block bb)
 /* Make an insn of code JUMP_INSN with body X
    and output it after the insn AFTER.  */
 
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after_noloc (rtx x, rtx after)
 {
-  return emit_pattern_after_noloc (x, after, NULL, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+		emit_pattern_after_noloc (x, after, NULL, make_jump_insn_raw));
 }
 
 /* Make an instruction with body X and code CALL_INSN
@@ -4727,17 +4729,19 @@  emit_insn_after (rtx pattern, rtx after)
 }
 
 /* Like emit_jump_insn_after_noloc, but set INSN_LOCATION according to LOC.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after_setloc (rtx pattern, rtx after, int loc)
 {
-  return emit_pattern_after_setloc (pattern, after, loc, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_after_setloc (pattern, after, loc, make_jump_insn_raw));
 }
 
 /* Like emit_jump_insn_after_noloc, but set INSN_LOCATION according to AFTER.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_after (rtx pattern, rtx after)
 {
-  return emit_pattern_after (pattern, after, true, make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_after (pattern, after, true, make_jump_insn_raw));
 }
 
 /* Like emit_call_insn_after_noloc, but set INSN_LOCATION according to LOC.  */
@@ -4842,19 +4846,21 @@  emit_insn_before (rtx pattern, rtx before)
 }
 
 /* like emit_insn_before_noloc, but set INSN_LOCATION according to LOC.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before_setloc (rtx pattern, rtx_insn *before, int loc)
 {
-  return emit_pattern_before_setloc (pattern, before, loc, false,
-				     make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_before_setloc (pattern, before, loc, false,
+				    make_jump_insn_raw));
 }
 
 /* Like emit_jump_insn_before_noloc, but set INSN_LOCATION according to BEFORE.  */
-rtx_insn *
+rtx_jump_insn *
 emit_jump_insn_before (rtx pattern, rtx before)
 {
-  return emit_pattern_before (pattern, before, true, false,
-			      make_jump_insn_raw);
+  return as_a <rtx_jump_insn *> (
+	emit_pattern_before (pattern, before, true, false,
+			     make_jump_insn_raw));
 }
 
 /* Like emit_insn_before_noloc, but set INSN_LOCATION according to LOC.  */
diff --git a/gcc/explow.c b/gcc/explow.c
index 57cb767..c4427a8 100644
--- a/gcc/explow.c
+++ b/gcc/explow.c
@@ -984,7 +984,7 @@  emit_stack_save (enum save_level save_level, rtx *psave)
 {
   rtx sa = *psave;
   /* The default is that we use a move insn and save in a Pmode object.  */
-  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx (*fcn) (rtx, rtx) = gen_move_insn_uncast;
   machine_mode mode = STACK_SAVEAREA_MODE (save_level);
 
   /* See if this machine has anything special to do for this kind of save.  */
@@ -1039,7 +1039,7 @@  void
 emit_stack_restore (enum save_level save_level, rtx sa)
 {
   /* The default is that we use a move insn.  */
-  rtx_insn * (*fcn) (rtx, rtx) = gen_move_insn;
+  rtx (*fcn) (rtx, rtx) = gen_move_insn_uncast;
 
   /* If stack_realign_drap, the x86 backend emits a prologue that aligns both
      STACK_POINTER and HARD_FRAME_POINTER.
diff --git a/gcc/expr.c b/gcc/expr.c
index a789024..395cafb 100644
--- a/gcc/expr.c
+++ b/gcc/expr.c
@@ -3664,6 +3664,15 @@  gen_move_insn (rtx x, rtx y)
   return seq;
 }
 
+/* Same as above, but return rtx (used as a callback, which must have
+   prototype compatible with other functions returning rtx).  */
+
+rtx
+gen_move_insn_uncast (rtx x, rtx y)
+{
+  return gen_move_insn (x, y);
+}
+
 /* If Y is representable exactly in a narrower mode, and the target can
    perform the extension directly from constant or memory, then emit the
    move as an extension.  */
diff --git a/gcc/expr.h b/gcc/expr.h
index 6c4afc4..e3afa8d 100644
--- a/gcc/expr.h
+++ b/gcc/expr.h
@@ -204,6 +204,7 @@  extern rtx store_by_pieces (rtx, unsigned HOST_WIDE_INT,
 /* Emit insns to set X from Y.  */
 extern rtx_insn *emit_move_insn (rtx, rtx);
 extern rtx_insn *gen_move_insn (rtx, rtx);
+extern rtx gen_move_insn_uncast (rtx, rtx);
 
 /* Emit insns to set X from Y, with no frills.  */
 extern rtx_insn *emit_move_insn_1 (rtx, rtx);
diff --git a/gcc/loop-doloop.c b/gcc/loop-doloop.c
index b5adbac..afd1da0 100644
--- a/gcc/loop-doloop.c
+++ b/gcc/loop-doloop.c
@@ -365,7 +365,7 @@  static bool
 add_test (rtx cond, edge *e, basic_block dest)
 {
   rtx_insn *seq, *jump;
-  rtx label;
+  rtx_code_label *label;
   machine_mode mode;
   rtx op0 = XEXP (cond, 0), op1 = XEXP (cond, 1);
   enum rtx_code code = GET_CODE (cond);
@@ -379,8 +379,7 @@  add_test (rtx cond, edge *e, basic_block dest)
   op0 = force_operand (op0, NULL_RTX);
   op1 = force_operand (op1, NULL_RTX);
   label = block_label (dest);
-  do_compare_rtx_and_jump (op0, op1, code, 0, mode, NULL_RTX,
-			   NULL_RTX, label, -1);
+  do_compare_rtx_and_jump (op0, op1, code, 0, mode, NULL_RTX, NULL, label, -1);
 
   jump = get_last_insn ();
   if (!jump || !JUMP_P (jump))
@@ -432,7 +431,7 @@  doloop_modify (struct loop *loop, struct niter_desc *desc,
   rtx tmp, noloop = NULL_RTX;
   rtx_insn *sequence;
   rtx_insn *jump_insn;
-  rtx jump_label;
+  rtx_code_label *jump_label;
   int nonneg = 0;
   bool increment_count;
   basic_block loop_end = desc->out_edge->src;
@@ -627,7 +626,7 @@  doloop_optimize (struct loop *loop)
   rtx doloop_seq, doloop_pat, doloop_reg;
   rtx count;
   widest_int iterations, iterations_max;
-  rtx start_label;
+  rtx_code_label *start_label;
   rtx condition;
   unsigned level, est_niter;
   int max_cost;
diff --git a/gcc/reorg.c b/gcc/reorg.c
index 4b41f7e..e085290 100644
--- a/gcc/reorg.c
+++ b/gcc/reorg.c
@@ -236,7 +236,7 @@  static rtx_insn *delete_from_delay_slot (rtx_insn *);
 static void delete_scheduled_jump (rtx_insn *);
 static void note_delay_statistics (int, int);
 #if defined(ANNUL_IFFALSE_SLOTS) || defined(ANNUL_IFTRUE_SLOTS)
-static rtx_insn_list *optimize_skip (rtx_insn *);
+static rtx_insn_list *optimize_skip (rtx_jump_insn *);
 #endif
 static int get_jump_flags (const rtx_insn *, rtx);
 static int mostly_true_jump (rtx);
@@ -264,12 +264,12 @@  static void try_merge_delay_insns (rtx_insn *, rtx_insn *);
 static rtx redundant_insn (rtx, rtx_insn *, rtx);
 static int own_thread_p (rtx, rtx, int);
 static void update_block (rtx_insn *, rtx);
-static int reorg_redirect_jump (rtx_insn *, rtx);
+static int reorg_redirect_jump (rtx_jump_insn *, rtx);
 static void update_reg_dead_notes (rtx_insn *, rtx_insn *);
 static void fix_reg_dead_note (rtx, rtx);
 static void update_reg_unused_notes (rtx, rtx);
 static void fill_simple_delay_slots (int);
-static rtx_insn_list *fill_slots_from_thread (rtx_insn *, rtx, rtx, rtx,
+static rtx_insn_list *fill_slots_from_thread (rtx_jump_insn *, rtx, rtx, rtx,
 					      int, int, int, int,
 					      int *, rtx_insn_list *);
 static void fill_eager_delay_slots (void);
@@ -779,7 +779,7 @@  note_delay_statistics (int slots_filled, int index)
    of delay slots required.  */
 
 static rtx_insn_list *
-optimize_skip (rtx_insn *insn)
+optimize_skip (rtx_jump_insn *insn)
 {
   rtx_insn *trial = next_nonnote_insn (insn);
   rtx_insn *next_trial = next_active_insn (trial);
@@ -1789,7 +1789,7 @@  update_block (rtx_insn *insn, rtx where)
    the basic block containing the jump.  */
 
 static int
-reorg_redirect_jump (rtx_insn *jump, rtx nlabel)
+reorg_redirect_jump (rtx_jump_insn *jump, rtx nlabel)
 {
   incr_ticks_for_insn (jump);
   return redirect_jump (jump, nlabel, 1);
@@ -2147,7 +2147,7 @@  fill_simple_delay_slots (int non_jumps_p)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
 	  && !ANY_RETURN_P (JUMP_LABEL (insn)))
 	{
-	  delay_list = optimize_skip (insn);
+	  delay_list = optimize_skip (as_a <rtx_jump_insn *> (insn));
 	  if (delay_list)
 	    slots_filled += 1;
 	}
@@ -2296,18 +2296,20 @@  fill_simple_delay_slots (int non_jumps_p)
 		    = add_to_delay_list (copy_delay_slot_insn (next_trial),
 					 delay_list);
 		  slots_filled++;
-		  reorg_redirect_jump (trial, new_label);
+		  reorg_redirect_jump (as_a <rtx_jump_insn *> (trial),
+				       new_label);
 		}
 	    }
 	}
 
       /* If this is an unconditional jump, then try to get insns from the
 	 target of the jump.  */
-      if (JUMP_P (insn)
-	  && simplejump_p (insn)
+      rtx_jump_insn *jump_insn;
+      if ((jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	  && simplejump_p (jump_insn)
 	  && slots_filled != slots_to_fill)
 	delay_list
-	  = fill_slots_from_thread (insn, const_true_rtx,
+	  = fill_slots_from_thread (jump_insn, const_true_rtx,
 				    next_active_insn (JUMP_LABEL (insn)),
 				    NULL, 1, 1,
 				    own_thread_p (JUMP_LABEL (insn),
@@ -2411,10 +2413,9 @@  follow_jumps (rtx label, rtx_insn *jump, bool *crossing)
    slot.  We then adjust the jump to point after the insns we have taken.  */
 
 static rtx_insn_list *
-fill_slots_from_thread (rtx_insn *insn, rtx condition, rtx thread_or_return,
-			rtx opposite_thread, int likely,
-			int thread_if_true,
-			int own_thread, int slots_to_fill,
+fill_slots_from_thread (rtx_jump_insn *insn, rtx condition,
+			rtx thread_or_return, rtx opposite_thread, int likely,
+			int thread_if_true, int own_thread, int slots_to_fill,
 			int *pslots_filled, rtx_insn_list *delay_list)
 {
   rtx new_thread;
@@ -2883,6 +2884,7 @@  fill_eager_delay_slots (void)
       rtx target_label, insn_at_target;
       rtx_insn *fallthrough_insn;
       rtx_insn_list *delay_list = 0;
+      rtx_jump_insn *jump_insn;
       int own_target;
       int own_fallthrough;
       int prediction, slots_to_fill, slots_filled;
@@ -2890,11 +2892,11 @@  fill_eager_delay_slots (void)
       insn = unfilled_slots_base[i];
       if (insn == 0
 	  || insn->deleted ()
-	  || !JUMP_P (insn)
-	  || ! (condjump_p (insn) || condjump_in_parallel_p (insn)))
+	  || ! (jump_insn = dyn_cast <rtx_jump_insn *> (insn))
+	  || ! (condjump_p (jump_insn) || condjump_in_parallel_p (jump_insn)))
 	continue;
 
-      slots_to_fill = num_delay_slots (insn);
+      slots_to_fill = num_delay_slots (jump_insn);
       /* Some machine description have defined instructions to have
 	 delay slots only in certain circumstances which may depend on
 	 nearby insns (which change due to reorg's actions).
@@ -2910,8 +2912,8 @@  fill_eager_delay_slots (void)
 	continue;
 
       slots_filled = 0;
-      target_label = JUMP_LABEL (insn);
-      condition = get_branch_condition (insn, target_label);
+      target_label = JUMP_LABEL (jump_insn);
+      condition = get_branch_condition (jump_insn, target_label);
 
       if (condition == 0)
 	continue;
@@ -2931,9 +2933,9 @@  fill_eager_delay_slots (void)
 	}
       else
 	{
-	  fallthrough_insn = next_active_insn (insn);
-	  own_fallthrough = own_thread_p (NEXT_INSN (insn), NULL_RTX, 1);
-	  prediction = mostly_true_jump (insn);
+	  fallthrough_insn = next_active_insn (jump_insn);
+	  own_fallthrough = own_thread_p (NEXT_INSN (jump_insn), NULL_RTX, 1);
+	  prediction = mostly_true_jump (jump_insn);
 	}
 
       /* If this insn is expected to branch, first try to get insns from our
@@ -2943,7 +2945,7 @@  fill_eager_delay_slots (void)
       if (prediction > 0)
 	{
 	  delay_list
-	    = fill_slots_from_thread (insn, condition, insn_at_target,
+	    = fill_slots_from_thread (jump_insn, condition, insn_at_target,
 				      fallthrough_insn, prediction == 2, 1,
 				      own_target,
 				      slots_to_fill, &slots_filled, delay_list);
@@ -2954,11 +2956,12 @@  fill_eager_delay_slots (void)
 		 we might have found a redundant insn which we deleted
 		 from the thread that was filled.  So we have to recompute
 		 the next insn at the target.  */
-	      target_label = JUMP_LABEL (insn);
+	      target_label = JUMP_LABEL (jump_insn);
 	      insn_at_target = first_active_target_insn (target_label);
 
 	      delay_list
-		= fill_slots_from_thread (insn, condition, fallthrough_insn,
+		= fill_slots_from_thread (jump_insn, condition,
+					  fallthrough_insn,
 					  insn_at_target, 0, 0,
 					  own_fallthrough,
 					  slots_to_fill, &slots_filled,
@@ -2969,7 +2972,7 @@  fill_eager_delay_slots (void)
 	{
 	  if (own_fallthrough)
 	    delay_list
-	      = fill_slots_from_thread (insn, condition, fallthrough_insn,
+	      = fill_slots_from_thread (jump_insn, condition, fallthrough_insn,
 					insn_at_target, 0, 0,
 					own_fallthrough,
 					slots_to_fill, &slots_filled,
@@ -2977,7 +2980,7 @@  fill_eager_delay_slots (void)
 
 	  if (delay_list == 0)
 	    delay_list
-	      = fill_slots_from_thread (insn, condition, insn_at_target,
+	      = fill_slots_from_thread (jump_insn, condition, insn_at_target,
 					next_active_insn (insn), 0, 1,
 					own_target,
 					slots_to_fill, &slots_filled,
@@ -2986,7 +2989,7 @@  fill_eager_delay_slots (void)
 
       if (delay_list)
 	unfilled_slots_base[i]
-	  = emit_delay_sequence (insn, delay_list, slots_filled);
+	  = emit_delay_sequence (jump_insn, delay_list, slots_filled);
 
       if (slots_to_fill == slots_filled)
 	unfilled_slots_base[i] = 0;
@@ -3222,40 +3225,41 @@  relax_delay_slots (rtx_insn *first)
       /* If this is a jump insn, see if it now jumps to a jump, jumps to
 	 the next insn, or jumps to a label that is not the last of a
 	 group of consecutive labels.  */
-      if (JUMP_P (insn)
+      if (is_a <rtx_jump_insn *> (insn)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
 	  && !ANY_RETURN_P (target_label = JUMP_LABEL (insn)))
 	{
+	  rtx_jump_insn *jump_insn = as_a <rtx_jump_insn *> (insn);
 	  target_label
-	    = skip_consecutive_labels (follow_jumps (target_label, insn,
+	    = skip_consecutive_labels (follow_jumps (target_label, jump_insn,
 						     &crossing));
 	  if (ANY_RETURN_P (target_label))
 	    target_label = find_end_label (target_label);
 
 	  if (target_label && next_active_insn (target_label) == next
-	      && ! condjump_in_parallel_p (insn)
-	      && ! (next && switch_text_sections_between_p (insn, next)))
+	      && ! condjump_in_parallel_p (jump_insn)
+	      && ! (next && switch_text_sections_between_p (jump_insn, next)))
 	    {
-	      delete_jump (insn);
+	      delete_jump (jump_insn);
 	      continue;
 	    }
 
-	  if (target_label && target_label != JUMP_LABEL (insn))
+	  if (target_label && target_label != JUMP_LABEL (jump_insn))
 	    {
-	      reorg_redirect_jump (insn, target_label);
+	      reorg_redirect_jump (jump_insn, target_label);
 	      if (crossing)
-		CROSSING_JUMP_P (insn) = 1;
+		CROSSING_JUMP_P (jump_insn) = 1;
 	    }
 
 	  /* See if this jump conditionally branches around an unconditional
 	     jump.  If so, invert this jump and point it to the target of the
 	     second jump.  Check if it's possible on the target.  */
 	  if (next && simplejump_or_return_p (next)
-	      && any_condjump_p (insn)
+	      && any_condjump_p (jump_insn)
 	      && target_label
 	      && next_active_insn (target_label) == next_active_insn (next)
-	      && no_labels_between_p (insn, next)
-	      && targetm.can_follow_jump (insn, next))
+	      && no_labels_between_p (jump_insn, next)
+	      && targetm.can_follow_jump (jump_insn, next))
 	    {
 	      rtx label = JUMP_LABEL (next);
 
@@ -3270,10 +3274,10 @@  relax_delay_slots (rtx_insn *first)
 	      if (!ANY_RETURN_P (label))
 		++LABEL_NUSES (label);
 
-	      if (invert_jump (insn, label, 1))
+	      if (invert_jump (jump_insn, label, 1))
 		{
 		  delete_related_insns (next);
-		  next = insn;
+		  next = jump_insn;
 		}
 
 	      if (!ANY_RETURN_P (label))
@@ -3303,8 +3307,8 @@  relax_delay_slots (rtx_insn *first)
 	  rtx other_target = JUMP_LABEL (other);
 	  target_label = JUMP_LABEL (insn);
 
-	  if (invert_jump (other, target_label, 0))
-	    reorg_redirect_jump (insn, other_target);
+	  if (invert_jump (as_a <rtx_jump_insn *> (other), target_label, 0))
+	    reorg_redirect_jump (as_a <rtx_jump_insn *> (insn), other_target);
 	}
 
       /* Now look only at cases where we have a filled delay slot.  */
@@ -3369,25 +3373,28 @@  relax_delay_slots (rtx_insn *first)
 	}
 
       /* Now look only at the cases where we have a filled JUMP_INSN.  */
-      if (!JUMP_P (delay_insn)
-	  || !(condjump_p (delay_insn) || condjump_in_parallel_p (delay_insn)))
+      rtx_jump_insn *delay_jump_insn =
+		dyn_cast <rtx_jump_insn *> (delay_insn);
+      if (! delay_jump_insn || !(condjump_p (delay_jump_insn)
+	  || condjump_in_parallel_p (delay_jump_insn)))
 	continue;
 
-      target_label = JUMP_LABEL (delay_insn);
+      target_label = JUMP_LABEL (delay_jump_insn);
       if (target_label && ANY_RETURN_P (target_label))
 	continue;
 
       /* If this jump goes to another unconditional jump, thread it, but
 	 don't convert a jump into a RETURN here.  */
-      trial = skip_consecutive_labels (follow_jumps (target_label, delay_insn,
+      trial = skip_consecutive_labels (follow_jumps (target_label,
+						     delay_jump_insn,
 						     &crossing));
       if (ANY_RETURN_P (trial))
 	trial = find_end_label (trial);
 
       if (trial && trial != target_label
-	  && redirect_with_delay_slots_safe_p (delay_insn, trial, insn))
+	  && redirect_with_delay_slots_safe_p (delay_jump_insn, trial, insn))
 	{
-	  reorg_redirect_jump (delay_insn, trial);
+	  reorg_redirect_jump (delay_jump_insn, trial);
 	  target_label = trial;
 	  if (crossing)
 	    CROSSING_JUMP_P (insn) = 1;
@@ -3419,7 +3426,7 @@  relax_delay_slots (rtx_insn *first)
 	      /* Now emit a label before the special USE insn, and
 		 redirect our jump to the new label.  */
 	      target_label = get_label_before (PREV_INSN (tmp), target_label);
-	      reorg_redirect_jump (delay_insn, target_label);
+	      reorg_redirect_jump (delay_jump_insn, target_label);
 	      next = insn;
 	      continue;
 	    }
@@ -3440,19 +3447,19 @@  relax_delay_slots (rtx_insn *first)
 	    target_label = find_end_label (target_label);
 	  
 	  if (target_label
-	      && redirect_with_delay_slots_safe_p (delay_insn, target_label,
-						   insn))
+	      && redirect_with_delay_slots_safe_p (delay_jump_insn,
+						   target_label, insn))
 	    {
 	      update_block (trial_seq->insn (1), insn);
-	      reorg_redirect_jump (delay_insn, target_label);
+	      reorg_redirect_jump (delay_jump_insn, target_label);
 	      next = insn;
 	      continue;
 	    }
 	}
 
       /* See if we have a simple (conditional) jump that is useless.  */
-      if (! INSN_ANNULLED_BRANCH_P (delay_insn)
-	  && ! condjump_in_parallel_p (delay_insn)
+      if (! INSN_ANNULLED_BRANCH_P (delay_jump_insn)
+	  && ! condjump_in_parallel_p (delay_jump_insn)
 	  && prev_active_insn (target_label) == insn
 	  && ! BARRIER_P (prev_nonnote_insn (target_label))
 #if HAVE_cc0
@@ -3489,11 +3496,11 @@  relax_delay_slots (rtx_insn *first)
 	  trial = PREV_INSN (insn);
 	  delete_related_insns (insn);
 	  gcc_assert (GET_CODE (pat) == SEQUENCE);
-	  add_insn_after (delay_insn, trial, NULL);
-	  after = delay_insn;
+	  add_insn_after (delay_jump_insn, trial, NULL);
+	  after = delay_jump_insn;
 	  for (i = 1; i < pat->len (); i++)
 	    after = emit_copy_of_insn_after (pat->insn (i), after);
-	  delete_scheduled_jump (delay_insn);
+	  delete_scheduled_jump (delay_jump_insn);
 	  continue;
 	}
 
@@ -3515,14 +3522,14 @@  relax_delay_slots (rtx_insn *first)
 	 this jump and point it to the target of the second jump.  We cannot
 	 do this for annulled jumps, though.  Again, don't convert a jump to
 	 a RETURN here.  */
-      if (! INSN_ANNULLED_BRANCH_P (delay_insn)
-	  && any_condjump_p (delay_insn)
+      if (! INSN_ANNULLED_BRANCH_P (delay_jump_insn)
+	  && any_condjump_p (delay_jump_insn)
 	  && next && simplejump_or_return_p (next)
 	  && next_active_insn (target_label) == next_active_insn (next)
 	  && no_labels_between_p (insn, next))
 	{
 	  rtx label = JUMP_LABEL (next);
-	  rtx old_label = JUMP_LABEL (delay_insn);
+	  rtx old_label = JUMP_LABEL (delay_jump_insn);
 
 	  if (ANY_RETURN_P (label))
 	    label = find_end_label (label);
@@ -3530,7 +3537,8 @@  relax_delay_slots (rtx_insn *first)
 	  /* find_end_label can generate a new label. Check this first.  */
 	  if (label
 	      && no_labels_between_p (insn, next)
-	      && redirect_with_delay_slots_safe_p (delay_insn, label, insn))
+	      && redirect_with_delay_slots_safe_p (delay_jump_insn,
+						   label, insn))
 	    {
 	      /* Be careful how we do this to avoid deleting code or labels
 		 that are momentarily dead.  See similar optimization in
@@ -3538,7 +3546,7 @@  relax_delay_slots (rtx_insn *first)
 	      if (old_label)
 		++LABEL_NUSES (old_label);
 
-	      if (invert_jump (delay_insn, label, 1))
+	      if (invert_jump (delay_jump_insn, label, 1))
 		{
 		  int i;
 
@@ -3585,7 +3593,7 @@  static void
 make_return_insns (rtx_insn *first)
 {
   rtx_insn *insn;
-  rtx_insn *jump_insn;
+  rtx_jump_insn *jump_insn;
   rtx real_return_label = function_return_label;
   rtx real_simple_return_label = function_simple_return_label;
   int slots, i;
@@ -3645,7 +3653,7 @@  make_return_insns (rtx_insn *first)
       else
 	continue;
 
-      jump_insn = pat->insn (0);
+      jump_insn = as_a <rtx_jump_insn *> (pat->insn (0));
 
       /* If we can't make the jump into a RETURN, try to redirect it to the best
 	 RETURN and go on to the next insn.  */
@@ -3783,7 +3791,7 @@  dbr_schedule (rtx_insn *first)
 	  && !ANY_RETURN_P (JUMP_LABEL (insn))
 	  && ((target = skip_consecutive_labels (JUMP_LABEL (insn)))
 	      != JUMP_LABEL (insn)))
-	redirect_jump (insn, target, 1);
+	redirect_jump (as_a <rtx_jump_insn *> (insn), target, 1);
     }
 
   init_resource_info (epilogue_insn);
diff --git a/gcc/rtl.h b/gcc/rtl.h
index 12052b8..f236fa0 100644
--- a/gcc/rtl.h
+++ b/gcc/rtl.h
@@ -2701,9 +2701,9 @@  extern void decide_function_section (tree);
 extern rtx_insn *emit_insn_before (rtx, rtx);
 extern rtx_insn *emit_insn_before_noloc (rtx, rtx_insn *, basic_block);
 extern rtx_insn *emit_insn_before_setloc (rtx, rtx_insn *, int);
-extern rtx_insn *emit_jump_insn_before (rtx, rtx);
-extern rtx_insn *emit_jump_insn_before_noloc (rtx, rtx_insn *);
-extern rtx_insn *emit_jump_insn_before_setloc (rtx, rtx_insn *, int);
+extern rtx_jump_insn *emit_jump_insn_before (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_before_noloc (rtx, rtx_insn *);
+extern rtx_jump_insn *emit_jump_insn_before_setloc (rtx, rtx_insn *, int);
 extern rtx_insn *emit_call_insn_before (rtx, rtx_insn *);
 extern rtx_insn *emit_call_insn_before_noloc (rtx, rtx_insn *);
 extern rtx_insn *emit_call_insn_before_setloc (rtx, rtx_insn *, int);
@@ -2711,14 +2711,14 @@  extern rtx_insn *emit_debug_insn_before (rtx, rtx_insn *);
 extern rtx_insn *emit_debug_insn_before_noloc (rtx, rtx);
 extern rtx_insn *emit_debug_insn_before_setloc (rtx, rtx, int);
 extern rtx_barrier *emit_barrier_before (rtx);
-extern rtx_code_label *emit_label_before (rtx_code_label *, rtx_insn *);
+extern rtx_code_label *emit_label_before (rtx, rtx_insn *);
 extern rtx_note *emit_note_before (enum insn_note, rtx_insn *);
 extern rtx_insn *emit_insn_after (rtx, rtx);
 extern rtx_insn *emit_insn_after_noloc (rtx, rtx, basic_block);
 extern rtx_insn *emit_insn_after_setloc (rtx, rtx, int);
-extern rtx_insn *emit_jump_insn_after (rtx, rtx);
-extern rtx_insn *emit_jump_insn_after_noloc (rtx, rtx);
-extern rtx_insn *emit_jump_insn_after_setloc (rtx, rtx, int);
+extern rtx_jump_insn *emit_jump_insn_after (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_after_noloc (rtx, rtx);
+extern rtx_jump_insn *emit_jump_insn_after_setloc (rtx, rtx, int);
 extern rtx_insn *emit_call_insn_after (rtx, rtx);
 extern rtx_insn *emit_call_insn_after_noloc (rtx, rtx);
 extern rtx_insn *emit_call_insn_after_setloc (rtx, rtx, int);