
[4/6] Shrink-wrapping

Message ID 4E2766B8.80904@codesourcery.com
State New

Commit Message

Bernd Schmidt July 20, 2011, 11:37 p.m. UTC
On 07/07/11 16:34, Richard Sandiford wrote:
> Is JUMP_LABEL ever null after this change?  (In fully-complete rtl
> sequences, I mean.)  It looked like some of the null checks in the
> patch might not be necessary any more.

It turns out that computed jumps can have a NULL JUMP_LABEL, and so can
JUMP_INSNs holding ADDR_VECs.

> I know it's a pain, but it'd really help if you could split the
> "JUMP_LABEL == a return rtx" stuff out.

Done with this patch. I've looked at all the target code that uses
JUMP_LABELs and either convinced myself that they are safe, or changed
them, but I haven't tested them all. Bootstrapped and tested on
i686-linux, and also with a mips64-elf cross compiler (using the
workaround patch in PR49735). Also verified that there are no changes in
code generation for any of my collected .i files on mips64-elf. Ok?

This only deals with JUMP_LABELs, not (return) occurring in patterns -
another patch will be needed to change these kinds of tests to
ANY_RETURN_P to allow the introduction of (simple_return).


Bernd
	* rtlanal.c (tablejump_p): False for returns.
	* reorg.c (active_insn_after): New static function.
	(find_end_label): Set JUMP_LABEL for a new returnjump.
	(optimize_skip, get_jump_flags, rare_destination,
	mostly_true_jump, get_branch_condition,
	steal_delay_list_from_target, own_thread_p,
	fill_simple_delay_slots, follow_jumps, fill_slots_from_thread,
	fill_eager_delay_slots, relax_delay_slots, make_return_insns,
	dbr_schedule): Adjust to handle ret_rtx in JUMP_LABELs.
	* jump.c (delete_related_insns): Likewise.
	(redirect_target): New static function.
	(redirect_exp_1): Use it.  Adjust to handle ret_rtx in JUMP_LABELs.
	(redirect_jump_1): Assert that the new label is nonnull.
	(redirect_jump): Likewise.
	(redirect_jump_2): Check for ANY_RETURN_P rather than NULL labels.
	* ifcvt.c (find_if_case_1): Take care when redirecting jumps to the
	exit block.
	(dead_or_predicable): Change NEW_DEST arg to DEST_EDGE.  All callers
	changed.  Ensure that the right label is passed to redirect_jump.
	* function.c (emit_return_into_block,
	thread_prologue_and_epilogue_insns): Ensure new returnjumps have
	ret_rtx in their JUMP_LABEL.
	* print-rtl.c (print_rtx): Handle ret_rtx in a JUMP_LABEL.
	* emit-rtl.c (skip_consecutive_labels): Allow the caller to
	pass ret_rtx as label.
	* cfglayout.c (fixup_reorder_chain): Use
	force_nonfallthru_and_redirect rather than force_nonfallthru.
	(duplicate_insn_chain): Copy JUMP_LABELs for returns.
	* rtl.h (ANY_RETURN_P): New macro.
	* dwarf2cfi.c (compute_barrier_args_size_1): Check JUMP_LABEL
	for ret_rtx.
	(create_cfi_notes): Skip ADDR_VECs and ADDR_DIFF_VECs early.
	* resource.c (find_dead_or_set_registers): Handle ret_rtx in
	JUMP_LABELs.
	(mark_target_live_regs): Likewise.
	* basic-block.h (force_nonfallthru_and_redirect): Declare.
	* cfgrtl.c (force_nonfallthru_and_redirect): No longer static.
	* config/alpha/alpha.c (alpha_tablejump_addr_vec,
	alpha_tablejump_best_label): Remove functions.
	* config/alpha/alpha-protos.h (alpha_tablejump_addr_vec,
	alpha_tablejump_best_label): Remove declarations.
	* config/sh/sh.c (barrier_align, split_branches): Adjust for
	ret_rtx in JUMP_LABELs.
	* config/arm/arm.c (is_jump_table): Likewise.

Comments

Richard Sandiford July 21, 2011, 9:52 a.m. UTC | #1
Bernd Schmidt <bernds@codesourcery.com> writes:
> On 07/07/11 16:34, Richard Sandiford wrote:
>> Is JUMP_LABEL ever null after this change?  (In fully-complete rtl
>> sequences, I mean.)  It looked like some of the null checks in the
>> patch might not be necessary any more.
>
> It turns out that computed jumps can have a NULL JUMP_LABEL, and so can
> JUMP_INSNs holding ADDR_VECs.

Bleh.  Thanks for checking.

> +/* A wrapper around next_active_insn which takes care to return ret_rtx
> +   unchanged.  */
> +
> +static rtx
> +active_insn_after (rtx insn)
> +{
> +  if (ANY_RETURN_P (insn))
> +    return insn;
> +  return next_active_insn (insn);
> +}

The name "active_insn_after" seems a bit too similar to "next_active_insn"
for the difference to be obvious.  How about something like
"first_active_target_insn" instead?

It wasn't clear to me whether this should return null instead of "insn"
for the ANY_RETURN_P code.  In things like:

     insn_at_target = active_insn_after (target_label);

it introduces a new "INSN_P or RETURN" rtx choice, rather than the
"label or RETURN" choice seen in JUMP_LABELs.  So it might seem at a
glance that PATTERN could be directly applied to a nonnull insn_at_target,
whereas you actually need to test ANY_RETURN_P first.

But the existing code seems inconsistent.  Sometimes it passes
JUMP_LABELs directly to functions like own_thread_p, whereas sometimes
it passes the first active insn instead.  So if you returned null here,
you'd probably have three-way "null or RETURN or LABEL" checks where you
otherwise wouldn't.

All in all, I agree it's probably better this way.

> @@ -921,7 +933,7 @@ rare_destination (rtx insn)
>    int jump_count = 0;
>    rtx next;
>  
> -  for (; insn; insn = next)
> +  for (; insn && !ANY_RETURN_P (insn); insn = next)
>      {
>        if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == SEQUENCE)
>  	insn = XVECEXP (PATTERN (insn), 0, 0);

Since ANY_RETURN_P looks for patterns, while this loop iterates over insns,
I think it'd be more obvious to have:

  if (insn && ANY_RETURN_P (insn))
    return 1;

above the loop instead, as you did in follow_jumps and
skip_consecutive_labels.

> Index: gcc/jump.c
> ===================================================================
> --- gcc/jump.c	(revision 176230)
> +++ gcc/jump.c	(working copy)
> @@ -1217,7 +1217,7 @@ delete_related_insns (rtx insn)
>    /* If deleting a jump, decrement the count of the label,
>       and delete the label if it is now unused.  */
>  
> -  if (JUMP_P (insn) && JUMP_LABEL (insn))
> +  if (JUMP_P (insn) && !ANY_RETURN_P (JUMP_LABEL (insn)))
>      {
>        rtx lab = JUMP_LABEL (insn), lab_next;
>  

Given what you said above, and given that this is a public function,
I think we should keep the null check.

This pattern came up in reorg.c too, so maybe it would be worth having
a jump_to_label_p inline function somewhere, such as:

static bool
jump_to_label_p (rtx insn)
{
  return JUMP_P (insn) && JUMP_LABEL (insn) && LABEL_P (JUMP_LABEL (insn));
}

And maybe also:

static rtx
jump_target_insn (rtx insn)
{
  return jump_to_label_p (insn) ? JUMP_LABEL (insn) : NULL_RTX;
}

It might help avoid the sprinkling of ANY_RETURN_Ps.  Just a suggestion
though, not going to insist.

>  /* Throughout LOC, redirect OLABEL to NLABEL.  Treat null OLABEL or
>     NLABEL as a return.  Accrue modifications into the change group.  */
>  
> @@ -1359,37 +1371,19 @@ redirect_exp_1 (rtx *loc, rtx olabel, rt
>    int i;
>    const char *fmt;
>  
> -  if (code == LABEL_REF)
> -    {
> -      if (XEXP (x, 0) == olabel)
> -	{
> -	  rtx n;
> -	  if (nlabel)
> -	    n = gen_rtx_LABEL_REF (Pmode, nlabel);
> -	  else
> -	    n = ret_rtx;
> -
> -	  validate_change (insn, loc, n, 1);
> -	  return;
> -	}
> -    }
> -  else if (code == RETURN && olabel == 0)
> +  if ((code == LABEL_REF && XEXP (x, 0) == olabel)
> +      || x == olabel)
>      {
> -      if (nlabel)
> -	x = gen_rtx_LABEL_REF (Pmode, nlabel);
> -      else
> -	x = ret_rtx;
> -      if (loc == &PATTERN (insn))
> -	x = gen_rtx_SET (VOIDmode, pc_rtx, x);
> -      validate_change (insn, loc, x, 1);
> +      validate_change (insn, loc, redirect_target (nlabel), 1);
>        return;

It looks like the old code tried to allow returns to be redirected
to a label -- (return) to (set (pc) (label_ref)) -- whereas the new
code doesn't.  (Then again, it looks like the old code would create
(set (pc) (return)) when "redirecting" a return to a return.
That doesn't seem like a good idea, and it ought to be dead
anyway with the olabel == nlabel shortcuts.)

How about:

      x = redirect_target (nlabel);
      if (GET_CODE (x) == LABEL_REF && loc == &PATTERN (insn))
	x = gen_rtx_SET (VOIDmode, pc_rtx, x);
      validate_change (insn, loc, x, 1);

I realise this doesn't help for PARALLELs though (just as it didn't
for the old code).

> @@ -4126,6 +4129,18 @@ dead_or_predicable (basic_block test_bb,
>      }
>  
>   no_body:
> +  if (JUMP_P (BB_END (dest_edge->src)))
> +    new_dest_label = JUMP_LABEL (BB_END (dest_edge->src));
> +  else if (other_bb != new_dest)
> +    {
> +      if (new_dest == EXIT_BLOCK_PTR)
> +	new_dest_label = ret_rtx;
> +      else
> +	new_dest_label = block_label (new_dest);
> +    }
> +  else
> +    new_dest_label = NULL_RTX;
> +

I found the placement of this code a bit confusing as things stand.
new_dest_label is only meaningful if other_bb != new_dest, so it seemed
like something that should directly replace the existing new_label
assignment.  It's OK if it makes the shrink-wrap stuff easier though.

> @@ -1195,6 +1195,9 @@ duplicate_insn_chain (rtx from, rtx to)
>  	      break;
>  	    }
>  	  copy = emit_copy_of_insn_after (insn, get_last_insn ());
> +	  if (JUMP_P (insn) && JUMP_LABEL (insn) != NULL_RTX
> +	      && ANY_RETURN_P (JUMP_LABEL (insn)))
> +	    JUMP_LABEL (copy) = JUMP_LABEL (insn);

I think this should go in emit_copy_of_insn_after instead.

> @@ -2294,6 +2294,8 @@ create_cfi_notes (void)
>  	  dwarf2out_frame_debug (insn, false);
>  	  continue;
>  	}
> +      if (GET_CODE (pat) == ADDR_VEC || GET_CODE (pat) == ADDR_DIFF_VEC)
> +	continue;
>  
>        if (GET_CODE (pat) == SEQUENCE)
>  	{

rth better approve this bit...

Looks good to me otherwise.

Richard

Patch

Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	(revision 176230)
+++ gcc/rtlanal.c	(working copy)
@@ -2660,8 +2660,11 @@  tablejump_p (const_rtx insn, rtx *labelp
 {
   rtx label, table;
 
-  if (JUMP_P (insn)
-      && (label = JUMP_LABEL (insn)) != NULL_RTX
+  if (!JUMP_P (insn))
+    return false;
+
+  label = JUMP_LABEL (insn);
+  if (label != NULL_RTX && !ANY_RETURN_P (label)
       && (table = next_active_insn (label)) != NULL_RTX
       && JUMP_TABLE_DATA_P (table))
     {
Index: gcc/reorg.c
===================================================================
--- gcc/reorg.c	(revision 176230)
+++ gcc/reorg.c	(working copy)
@@ -220,6 +220,17 @@  static void relax_delay_slots (rtx);
 static void make_return_insns (rtx);
 #endif
 
+/* A wrapper around next_active_insn which takes care to return ret_rtx
+   unchanged.  */
+
+static rtx
+active_insn_after (rtx insn)
+{
+  if (ANY_RETURN_P (insn))
+    return insn;
+  return next_active_insn (insn);
+}
+
 /* Return TRUE if this insn should stop the search for insn to fill delay
    slots.  LABELS_P indicates that labels should terminate the search.
    In all cases, jumps terminate the search.  */
@@ -437,6 +448,7 @@  find_end_label (void)
 	      /* The return we make may have delay slots too.  */
 	      rtx insn = gen_return ();
 	      insn = emit_jump_insn (insn);
+	      JUMP_LABEL (insn) = ret_rtx;
 	      emit_barrier ();
 	      if (num_delay_slots (insn) > 0)
 		obstack_ptr_grow (&unfilled_slots_obstack, insn);
@@ -824,7 +836,7 @@  optimize_skip (rtx insn)
 	      || GET_CODE (PATTERN (next_trial)) == RETURN))
 	{
 	  rtx target_label = JUMP_LABEL (next_trial);
-	  if (target_label == 0)
+	  if (ANY_RETURN_P (target_label))
 	    target_label = find_end_label ();
 
 	  if (target_label)
@@ -866,7 +878,7 @@  get_jump_flags (rtx insn, rtx label)
   if (JUMP_P (insn)
       && (condjump_p (insn) || condjump_in_parallel_p (insn))
       && INSN_UID (insn) <= max_uid
-      && label != 0
+      && !ANY_RETURN_P (label)
       && INSN_UID (label) <= max_uid)
     flags
       = (uid_to_ruid[INSN_UID (label)] > uid_to_ruid[INSN_UID (insn)])
@@ -921,7 +933,7 @@  rare_destination (rtx insn)
   int jump_count = 0;
   rtx next;
 
-  for (; insn; insn = next)
+  for (; insn && !ANY_RETURN_P (insn); insn = next)
     {
       if (NONJUMP_INSN_P (insn) && GET_CODE (PATTERN (insn)) == SEQUENCE)
 	insn = XVECEXP (PATTERN (insn), 0, 0);
@@ -1017,7 +1029,7 @@  mostly_true_jump (rtx jump_insn, rtx con
   /* Predict backward branches usually take, forward branches usually not.  If
      we don't know whether this is forward or backward, assume the branch
      will be taken, since most are.  */
-  return (target_label == 0 || INSN_UID (jump_insn) > max_uid
+  return (ANY_RETURN_P (target_label) || INSN_UID (jump_insn) > max_uid
 	  || INSN_UID (target_label) > max_uid
 	  || (uid_to_ruid[INSN_UID (jump_insn)]
 	      > uid_to_ruid[INSN_UID (target_label)]));
@@ -1037,10 +1049,10 @@  get_branch_condition (rtx insn, rtx targ
   if (condjump_in_parallel_p (insn))
     pat = XVECEXP (pat, 0, 0);
 
-  if (GET_CODE (pat) == RETURN)
-    return target == 0 ? const_true_rtx : 0;
+  if (ANY_RETURN_P (pat))
+    return pat == target ? const_true_rtx : 0;
 
-  else if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx)
+  if (GET_CODE (pat) != SET || SET_DEST (pat) != pc_rtx)
     return 0;
 
   src = SET_SRC (pat);
@@ -1048,16 +1060,12 @@  get_branch_condition (rtx insn, rtx targ
     return const_true_rtx;
 
   else if (GET_CODE (src) == IF_THEN_ELSE
-	   && ((target == 0 && GET_CODE (XEXP (src, 1)) == RETURN)
-	       || (GET_CODE (XEXP (src, 1)) == LABEL_REF
-		   && XEXP (XEXP (src, 1), 0) == target))
+	   && XEXP (XEXP (src, 1), 0) == target
 	   && XEXP (src, 2) == pc_rtx)
     return XEXP (src, 0);
 
   else if (GET_CODE (src) == IF_THEN_ELSE
-	   && ((target == 0 && GET_CODE (XEXP (src, 2)) == RETURN)
-	       || (GET_CODE (XEXP (src, 2)) == LABEL_REF
-		   && XEXP (XEXP (src, 2), 0) == target))
+	   && XEXP (XEXP (src, 2), 0) == target
 	   && XEXP (src, 1) == pc_rtx)
     {
       enum rtx_code rev;
@@ -1318,7 +1326,7 @@  steal_delay_list_from_target (rtx insn,
     }
 
   /* Show the place to which we will be branching.  */
-  *pnew_thread = next_active_insn (JUMP_LABEL (XVECEXP (seq, 0, 0)));
+  *pnew_thread = active_insn_after (JUMP_LABEL (XVECEXP (seq, 0, 0)));
 
   /* Add any new insns to the delay list and update the count of the
      number of slots filled.  */
@@ -1827,7 +1835,7 @@  own_thread_p (rtx thread, rtx label, int
   rtx insn;
 
   /* We don't own the function end.  */
-  if (thread == 0)
+  if (thread == 0 || ANY_RETURN_P (thread))
     return 0;
 
   /* Get the first active insn, or THREAD, if it is an active insn.  */
@@ -2245,7 +2253,7 @@  fill_simple_delay_slots (int non_jumps_p
 	  && (!JUMP_P (insn)
 	      || ((condjump_p (insn) || condjump_in_parallel_p (insn))
 		  && ! simplejump_p (insn)
-		  && JUMP_LABEL (insn) != 0)))
+		  && !ANY_RETURN_P (JUMP_LABEL (insn)))))
 	{
 	  /* Invariant: If insn is a JUMP_INSN, the insn's jump
 	     label.  Otherwise, zero.  */
@@ -2270,7 +2278,7 @@  fill_simple_delay_slots (int non_jumps_p
 		target = JUMP_LABEL (insn);
 	    }
 
-	  if (target == 0)
+	  if (target == 0 || ANY_RETURN_P (target))
 	    for (trial = next_nonnote_insn (insn); !stop_search_p (trial, 1);
 		 trial = next_trial)
 	      {
@@ -2346,6 +2354,7 @@  fill_simple_delay_slots (int non_jumps_p
 	      && JUMP_P (trial)
 	      && simplejump_p (trial)
 	      && (target == 0 || JUMP_LABEL (trial) == target)
+	      && !ANY_RETURN_P (JUMP_LABEL (trial))
 	      && (next_trial = next_active_insn (JUMP_LABEL (trial))) != 0
 	      && ! (NONJUMP_INSN_P (next_trial)
 		    && GET_CODE (PATTERN (next_trial)) == SEQUENCE)
@@ -2500,7 +2509,7 @@  fill_simple_delay_slots (int non_jumps_p
 
 /* Follow any unconditional jump at LABEL;
    return the ultimate label reached by any such chain of jumps.
-   Return null if the chain ultimately leads to a return instruction.
+   Return ret_rtx if the chain ultimately leads to a return instruction.
    If LABEL is not followed by a jump, return LABEL.
    If the chain loops or we can't find end, return LABEL,
    since that tells caller to avoid changing the insn.  */
@@ -2513,29 +2522,34 @@  follow_jumps (rtx label)
   rtx value = label;
   int depth;
 
+  if (ANY_RETURN_P (label))
+    return label;
   for (depth = 0;
        (depth < 10
 	&& (insn = next_active_insn (value)) != 0
 	&& JUMP_P (insn)
-	&& ((JUMP_LABEL (insn) != 0 && any_uncondjump_p (insn)
-	     && onlyjump_p (insn))
+	&& JUMP_LABEL (insn) != NULL_RTX
+	&& ((any_uncondjump_p (insn) && onlyjump_p (insn))
 	    || GET_CODE (PATTERN (insn)) == RETURN)
 	&& (next = NEXT_INSN (insn))
 	&& BARRIER_P (next));
        depth++)
     {
+      rtx this_label = JUMP_LABEL (insn);
       rtx tem;
 
       /* If we have found a cycle, make the insn jump to itself.  */
-      if (JUMP_LABEL (insn) == label)
+      if (this_label == label)
 	return label;
-
-      tem = next_active_insn (JUMP_LABEL (insn));
-      if (tem && (GET_CODE (PATTERN (tem)) == ADDR_VEC
-		  || GET_CODE (PATTERN (tem)) == ADDR_DIFF_VEC))
+      if (ANY_RETURN_P (this_label))
+	return this_label;
+      tem = next_active_insn (this_label);
+      if (tem
+	  && (GET_CODE (PATTERN (tem)) == ADDR_VEC
+	      || GET_CODE (PATTERN (tem)) == ADDR_DIFF_VEC))
 	break;
 
-      value = JUMP_LABEL (insn);
+      value = this_label;
     }
   if (depth == 10)
     return label;
@@ -2587,7 +2601,7 @@  fill_slots_from_thread (rtx insn, rtx co
 
   /* If our thread is the end of subroutine, we can't get any delay
      insns from that.  */
-  if (thread == 0)
+  if (thread == NULL_RTX || ANY_RETURN_P (thread))
     return delay_list;
 
   /* If this is an unconditional branch, nothing is needed at the
@@ -2757,7 +2771,8 @@  fill_slots_from_thread (rtx insn, rtx co
 			      gcc_assert (REG_NOTE_KIND (note)
 					  == REG_LABEL_OPERAND);
 			  }
-		      if (JUMP_P (trial) && JUMP_LABEL (trial))
+		      if (JUMP_P (trial) && JUMP_LABEL (trial)
+			  && !ANY_RETURN_P (JUMP_LABEL (trial)))
 			LABEL_NUSES (JUMP_LABEL (trial))++;
 
 		      delete_related_insns (trial);
@@ -2776,7 +2791,8 @@  fill_slots_from_thread (rtx insn, rtx co
 			      gcc_assert (REG_NOTE_KIND (note)
 					  == REG_LABEL_OPERAND);
 			  }
-		      if (JUMP_P (trial) && JUMP_LABEL (trial))
+		      if (JUMP_P (trial) && JUMP_LABEL (trial)
+			  && !ANY_RETURN_P (JUMP_LABEL (trial)))
 			LABEL_NUSES (JUMP_LABEL (trial))--;
 		    }
 		  else
@@ -2897,7 +2913,8 @@  fill_slots_from_thread (rtx insn, rtx co
      depend on the destination register.  If so, try to place the opposite
      arithmetic insn after the jump insn and put the arithmetic insn in the
      delay slot.  If we can't do this, return.  */
-  if (delay_list == 0 && likely && new_thread
+  if (delay_list == 0 && likely
+      && new_thread && !ANY_RETURN_P (new_thread)
       && NONJUMP_INSN_P (new_thread)
       && GET_CODE (PATTERN (new_thread)) != ASM_INPUT
       && asm_noperands (PATTERN (new_thread)) < 0)
@@ -2990,7 +3007,7 @@  fill_slots_from_thread (rtx insn, rtx co
 					      delay_list))
 	new_thread = follow_jumps (JUMP_LABEL (new_thread));
 
-      if (new_thread == 0)
+      if (ANY_RETURN_P (new_thread))
 	label = find_end_label ();
       else if (LABEL_P (new_thread))
 	label = new_thread;
@@ -3063,7 +3080,7 @@  fill_eager_delay_slots (void)
 	 them.  Then see whether the branch is likely true.  We don't need
 	 to do a lot of this for unconditional branches.  */
 
-      insn_at_target = next_active_insn (target_label);
+      insn_at_target = active_insn_after (target_label);
       own_target = own_thread_p (target_label, target_label, 0);
 
       if (condition == const_true_rtx)
@@ -3098,7 +3115,7 @@  fill_eager_delay_slots (void)
 		 from the thread that was filled.  So we have to recompute
 		 the next insn at the target.  */
 	      target_label = JUMP_LABEL (insn);
-	      insn_at_target = next_active_insn (target_label);
+	      insn_at_target = active_insn_after (target_label);
 
 	      delay_list
 		= fill_slots_from_thread (insn, condition, fallthrough_insn,
@@ -3337,10 +3354,10 @@  relax_delay_slots (rtx first)
 	 group of consecutive labels.  */
       if (JUMP_P (insn)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
-	  && (target_label = JUMP_LABEL (insn)) != 0)
+	  && !ANY_RETURN_P (target_label = JUMP_LABEL (insn)))
 	{
 	  target_label = skip_consecutive_labels (follow_jumps (target_label));
-	  if (target_label == 0)
+	  if (ANY_RETURN_P (target_label))
 	    target_label = find_end_label ();
 
 	  if (target_label && next_active_insn (target_label) == next
@@ -3373,7 +3390,7 @@  relax_delay_slots (rtx first)
 		 invert_jump fails.  */
 
 	      ++LABEL_NUSES (target_label);
-	      if (label)
+	      if (!ANY_RETURN_P (label))
 		++LABEL_NUSES (label);
 
 	      if (invert_jump (insn, label, 1))
@@ -3382,7 +3399,7 @@  relax_delay_slots (rtx first)
 		  next = insn;
 		}
 
-	      if (label)
+	      if (!ANY_RETURN_P (label))
 		--LABEL_NUSES (label);
 
 	      if (--LABEL_NUSES (target_label) == 0)
@@ -3485,12 +3502,12 @@  relax_delay_slots (rtx first)
 
       target_label = JUMP_LABEL (delay_insn);
 
-      if (target_label)
+      if (!ANY_RETURN_P (target_label))
 	{
 	  /* If this jump goes to another unconditional jump, thread it, but
 	     don't convert a jump into a RETURN here.  */
 	  trial = skip_consecutive_labels (follow_jumps (target_label));
-	  if (trial == 0)
+	  if (ANY_RETURN_P (trial))
 	    trial = find_end_label ();
 
 	  if (trial && trial != target_label
@@ -3540,7 +3557,7 @@  relax_delay_slots (rtx first)
 	      && redundant_insn (XVECEXP (PATTERN (trial), 0, 1), insn, 0))
 	    {
 	      target_label = JUMP_LABEL (XVECEXP (PATTERN (trial), 0, 0));
-	      if (target_label == 0)
+	      if (ANY_RETURN_P (target_label))
 		target_label = find_end_label ();
 
 	      if (target_label
@@ -3627,7 +3644,7 @@  relax_delay_slots (rtx first)
 	  rtx label = JUMP_LABEL (next);
 	  rtx old_label = JUMP_LABEL (delay_insn);
 
-	  if (label == 0)
+	  if (ANY_RETURN_P (label))
 	    label = find_end_label ();
 
 	  /* find_end_label can generate a new label. Check this first.  */
@@ -3737,7 +3754,7 @@  make_return_insns (rtx first)
 
       /* If we can't make the jump into a RETURN, try to redirect it to the best
 	 RETURN and go on to the next insn.  */
-      if (! reorg_redirect_jump (jump_insn, NULL_RTX))
+      if (! reorg_redirect_jump (jump_insn, ret_rtx))
 	{
 	  /* Make sure redirecting the jump will not invalidate the delay
 	     slot insns.  */
@@ -3866,7 +3883,7 @@  dbr_schedule (rtx first)
       /* Ensure all jumps go to the last of a set of consecutive labels.  */
       if (JUMP_P (insn)
 	  && (condjump_p (insn) || condjump_in_parallel_p (insn))
-	  && JUMP_LABEL (insn) != 0
+	  && !ANY_RETURN_P (JUMP_LABEL (insn))
 	  && ((target = skip_consecutive_labels (JUMP_LABEL (insn)))
 	      != JUMP_LABEL (insn)))
 	redirect_jump (insn, target, 1);
Index: gcc/jump.c
===================================================================
--- gcc/jump.c	(revision 176230)
+++ gcc/jump.c	(working copy)
@@ -1217,7 +1217,7 @@  delete_related_insns (rtx insn)
   /* If deleting a jump, decrement the count of the label,
      and delete the label if it is now unused.  */
 
-  if (JUMP_P (insn) && JUMP_LABEL (insn))
+  if (JUMP_P (insn) && !ANY_RETURN_P (JUMP_LABEL (insn)))
     {
       rtx lab = JUMP_LABEL (insn), lab_next;
 
@@ -1348,6 +1348,18 @@  delete_for_peephole (rtx from, rtx to)
      is also an unconditional jump in that case.  */
 }
 
+/* A helper function for redirect_exp_1; examines its input X and returns
+   either a LABEL_REF around a label, or a RETURN if X was NULL.  */
+static rtx
+redirect_target (rtx x)
+{
+  if (x == NULL_RTX)
+    return ret_rtx;
+  if (!ANY_RETURN_P (x))
+    return gen_rtx_LABEL_REF (Pmode, x);
+  return x;
+}
+
 /* Throughout LOC, redirect OLABEL to NLABEL.  Treat null OLABEL or
    NLABEL as a return.  Accrue modifications into the change group.  */
 
@@ -1359,37 +1371,19 @@  redirect_exp_1 (rtx *loc, rtx olabel, rt
   int i;
   const char *fmt;
 
-  if (code == LABEL_REF)
-    {
-      if (XEXP (x, 0) == olabel)
-	{
-	  rtx n;
-	  if (nlabel)
-	    n = gen_rtx_LABEL_REF (Pmode, nlabel);
-	  else
-	    n = ret_rtx;
-
-	  validate_change (insn, loc, n, 1);
-	  return;
-	}
-    }
-  else if (code == RETURN && olabel == 0)
+  if ((code == LABEL_REF && XEXP (x, 0) == olabel)
+      || x == olabel)
     {
-      if (nlabel)
-	x = gen_rtx_LABEL_REF (Pmode, nlabel);
-      else
-	x = ret_rtx;
-      if (loc == &PATTERN (insn))
-	x = gen_rtx_SET (VOIDmode, pc_rtx, x);
-      validate_change (insn, loc, x, 1);
+      validate_change (insn, loc, redirect_target (nlabel), 1);
       return;
     }
 
-  if (code == SET && nlabel == 0 && SET_DEST (x) == pc_rtx
+  if (code == SET && SET_DEST (x) == pc_rtx
+      && ANY_RETURN_P (nlabel)
       && GET_CODE (SET_SRC (x)) == LABEL_REF
       && XEXP (SET_SRC (x), 0) == olabel)
     {
-      validate_change (insn, loc, ret_rtx, 1);
+      validate_change (insn, loc, nlabel, 1);
       return;
     }
 
@@ -1426,6 +1420,7 @@  redirect_jump_1 (rtx jump, rtx nlabel)
   int ochanges = num_validated_changes ();
   rtx *loc, asmop;
 
+  gcc_assert (nlabel != NULL_RTX);
   asmop = extract_asm_operands (PATTERN (jump));
   if (asmop)
     {
@@ -1447,17 +1442,20 @@  redirect_jump_1 (rtx jump, rtx nlabel)
    jump target label is unused as a result, it and the code following
    it may be deleted.
 
-   If NLABEL is zero, we are to turn the jump into a (possibly conditional)
-   RETURN insn.
+   Normally, NLABEL will be a label, but it may also be a RETURN rtx;
+   in that case we are to turn the jump into a (possibly conditional)
+   return insn.
 
    The return value will be 1 if the change was made, 0 if it wasn't
-   (this can only occur for NLABEL == 0).  */
+   (this can only occur when trying to produce return insns).  */
 
 int
 redirect_jump (rtx jump, rtx nlabel, int delete_unused)
 {
   rtx olabel = JUMP_LABEL (jump);
 
+  gcc_assert (nlabel != NULL_RTX);
+
   if (nlabel == olabel)
     return 1;
 
@@ -1485,13 +1483,14 @@  redirect_jump_2 (rtx jump, rtx olabel, r
      about this.  */
   gcc_assert (delete_unused >= 0);
   JUMP_LABEL (jump) = nlabel;
-  if (nlabel)
+  if (!ANY_RETURN_P (nlabel))
     ++LABEL_NUSES (nlabel);
 
   /* Update labels in any REG_EQUAL note.  */
   if ((note = find_reg_note (jump, REG_EQUAL, NULL_RTX)) != NULL_RTX)
     {
-      if (!nlabel || (invert && !invert_exp_1 (XEXP (note, 0), jump)))
+      if (ANY_RETURN_P (nlabel)
+	  || (invert && !invert_exp_1 (XEXP (note, 0), jump)))
 	remove_note (jump, note);
       else
 	{
@@ -1500,7 +1499,8 @@  redirect_jump_2 (rtx jump, rtx olabel, r
 	}
     }
 
-  if (olabel && --LABEL_NUSES (olabel) == 0 && delete_unused > 0
+  if (!ANY_RETURN_P (olabel)
+      && --LABEL_NUSES (olabel) == 0 && delete_unused > 0
       /* Undefined labels will remain outside the insn stream.  */
       && INSN_UID (olabel))
     delete_related_insns (olabel);
Index: gcc/ifcvt.c
===================================================================
--- gcc/ifcvt.c	(revision 176230)
+++ gcc/ifcvt.c	(working copy)
@@ -104,7 +104,7 @@  static int cond_exec_find_if_block (ce_i
 static int find_if_case_1 (basic_block, edge, edge);
 static int find_if_case_2 (basic_block, edge, edge);
 static int dead_or_predicable (basic_block, basic_block, basic_block,
-			       basic_block, int);
+			       edge, int);
 static void noce_emit_move_insn (rtx, rtx);
 static rtx block_has_only_trap (basic_block);
 
@@ -3846,7 +3846,7 @@  find_if_case_1 (basic_block test_bb, edg
 
   /* Registers set are dead, or are predicable.  */
   if (! dead_or_predicable (test_bb, then_bb, else_bb,
-			    single_succ (then_bb), 1))
+			    single_succ_edge (then_bb), 1))
     return FALSE;
 
   /* Conversion went ok, including moving the insns and fixing up the
@@ -3961,7 +3961,7 @@  find_if_case_2 (basic_block test_bb, edg
     return FALSE;
 
   /* Registers set are dead, or are predicable.  */
-  if (! dead_or_predicable (test_bb, else_bb, then_bb, else_succ->dest, 0))
+  if (! dead_or_predicable (test_bb, else_bb, then_bb, else_succ, 0))
     return FALSE;
 
   /* Conversion went ok, including moving the insns and fixing up the
@@ -3984,18 +3984,21 @@  find_if_case_2 (basic_block test_bb, edg
    Return TRUE if successful.
 
    TEST_BB is the block containing the conditional branch.  MERGE_BB
-   is the block containing the code to manipulate.  NEW_DEST is the
-   label TEST_BB should be branching to after the conversion.
+   is the block containing the code to manipulate.  DEST_EDGE is an
+   edge representing a jump to the join block; after the conversion,
+   TEST_BB should be branching to its destination.
    REVERSEP is true if the sense of the branch should be reversed.  */
 
 static int
 dead_or_predicable (basic_block test_bb, basic_block merge_bb,
-		    basic_block other_bb, basic_block new_dest, int reversep)
+		    basic_block other_bb, edge dest_edge, int reversep)
 {
-  rtx head, end, jump, earliest = NULL_RTX, old_dest, new_label = NULL_RTX;
+  basic_block new_dest = dest_edge->dest;
+  rtx head, end, jump, earliest = NULL_RTX, old_dest;
   bitmap merge_set = NULL;
   /* Number of pending changes.  */
   int n_validated_changes = 0;
+  rtx new_dest_label;
 
   jump = BB_END (test_bb);
 
@@ -4126,6 +4129,18 @@  dead_or_predicable (basic_block test_bb,
     }
 
  no_body:
+  if (JUMP_P (BB_END (dest_edge->src)))
+    new_dest_label = JUMP_LABEL (BB_END (dest_edge->src));
+  else if (other_bb != new_dest)
+    {
+      if (new_dest == EXIT_BLOCK_PTR)
+	new_dest_label = ret_rtx;
+      else
+	new_dest_label = block_label (new_dest);
+    }
+  else
+    new_dest_label = NULL_RTX;
+
   /* We don't want to use normal invert_jump or redirect_jump because
      we don't want to delete_insn called.  Also, we want to do our own
      change group management.  */
@@ -4133,10 +4148,9 @@  dead_or_predicable (basic_block test_bb,
   old_dest = JUMP_LABEL (jump);
   if (other_bb != new_dest)
     {
-      new_label = block_label (new_dest);
       if (reversep
-	  ? ! invert_jump_1 (jump, new_label)
-	  : ! redirect_jump_1 (jump, new_label))
+	  ? ! invert_jump_1 (jump, new_dest_label)
+	  : ! redirect_jump_1 (jump, new_dest_label))
 	goto cancel;
     }
 
@@ -4147,7 +4161,7 @@  dead_or_predicable (basic_block test_bb,
 
   if (other_bb != new_dest)
     {
-      redirect_jump_2 (jump, old_dest, new_label, 0, reversep);
+      redirect_jump_2 (jump, old_dest, new_dest_label, 0, reversep);
 
       redirect_edge_succ (BRANCH_EDGE (test_bb), new_dest);
       if (reversep)
Index: gcc/function.c
===================================================================
--- gcc/function.c	(revision 176230)
+++ gcc/function.c	(working copy)
@@ -5309,7 +5309,8 @@  emit_use_return_register_into_block (bas
 static void
 emit_return_into_block (basic_block bb)
 {
-  emit_jump_insn_after (gen_return (), BB_END (bb));
+  rtx jump = emit_jump_insn_after (gen_return (), BB_END (bb));
+  JUMP_LABEL (jump) = ret_rtx;
 }
 #endif /* HAVE_return */
 
@@ -5468,7 +5469,7 @@  thread_prologue_and_epilogue_insns (void
 		 that with a conditional return instruction.  */
 	      else if (condjump_p (jump))
 		{
-		  if (! redirect_jump (jump, 0, 0))
+		  if (! redirect_jump (jump, ret_rtx, 0))
 		    {
 		      ei_next (&ei2);
 		      continue;
@@ -5551,6 +5552,8 @@  thread_prologue_and_epilogue_insns (void
 #ifdef HAVE_epilogue
   if (HAVE_epilogue)
     {
+      rtx returnjump;
+
       start_sequence ();
       epilogue_end = emit_note (NOTE_INSN_EPILOGUE_BEG);
       seq = gen_epilogue ();
@@ -5561,11 +5564,25 @@  thread_prologue_and_epilogue_insns (void
       record_insns (seq, NULL, &epilogue_insn_hash);
       set_insn_locators (seq, epilogue_locator);
 
+      returnjump = get_last_insn ();
       seq = get_insns ();
       end_sequence ();
 
       insert_insn_on_edge (seq, e);
       inserted = true;
+
+      if (JUMP_P (returnjump))
+	{
+	  rtx pat = PATTERN (returnjump);
+	  if (GET_CODE (pat) == PARALLEL)
+	    pat = XVECEXP (pat, 0, 0);
+	  if (ANY_RETURN_P (pat))
+	    JUMP_LABEL (returnjump) = pat;
+	  else
+	    JUMP_LABEL (returnjump) = ret_rtx;
+	}
+      else
+	returnjump = NULL_RTX;
     }
   else
 #endif
Index: gcc/print-rtl.c
===================================================================
--- gcc/print-rtl.c	(revision 176230)
+++ gcc/print-rtl.c	(working copy)
@@ -323,9 +323,14 @@  print_rtx (const_rtx in_rtx)
 	      }
 	  }
 	else if (i == 8 && JUMP_P (in_rtx) && JUMP_LABEL (in_rtx) != NULL)
-	  /* Output the JUMP_LABEL reference.  */
-	  fprintf (outfile, "\n%s%*s -> %d", print_rtx_head, indent * 2, "",
-		   INSN_UID (JUMP_LABEL (in_rtx)));
+	  {
+	    /* Output the JUMP_LABEL reference.  */
+	    fprintf (outfile, "\n%s%*s -> ", print_rtx_head, indent * 2, "");
+	    if (GET_CODE (JUMP_LABEL (in_rtx)) == RETURN)
+	      fprintf (outfile, "return");
+	    else
+	      fprintf (outfile, "%d", INSN_UID (JUMP_LABEL (in_rtx)));
+	  }
 	else if (i == 0 && GET_CODE (in_rtx) == VALUE)
 	  {
 #ifndef GENERATOR_FILE
Index: gcc/emit-rtl.c
===================================================================
--- gcc/emit-rtl.c	(revision 176230)
+++ gcc/emit-rtl.c	(working copy)
@@ -3265,14 +3265,17 @@  prev_label (rtx insn)
   return insn;
 }
 
-/* Return the last label to mark the same position as LABEL.  Return null
-   if LABEL itself is null.  */
+/* Return the last label to mark the same position as LABEL.  Return LABEL
+   itself if it is null or any return rtx.  */
 
 rtx
 skip_consecutive_labels (rtx label)
 {
   rtx insn;
 
+  if (label && ANY_RETURN_P (label))
+    return label;
+
   for (insn = label; insn != 0 && !INSN_P (insn); insn = NEXT_INSN (insn))
     if (LABEL_P (insn))
       label = insn;
Index: gcc/cfglayout.c
===================================================================
--- gcc/cfglayout.c	(revision 176230)
+++ gcc/cfglayout.c	(working copy)
@@ -899,7 +899,7 @@  fixup_reorder_chain (void)
 	 Note force_nonfallthru can delete E_FALL and thus we have to
 	 save E_FALL->src prior to the call to force_nonfallthru.  */
       src_bb = e_fall->src;
-      nb = force_nonfallthru (e_fall);
+      nb = force_nonfallthru_and_redirect (e_fall, e_fall->dest);
       if (nb)
 	{
 	  nb->il.rtl->visited = 1;
@@ -1195,6 +1195,9 @@  duplicate_insn_chain (rtx from, rtx to)
 	      break;
 	    }
 	  copy = emit_copy_of_insn_after (insn, get_last_insn ());
+	  if (JUMP_P (insn) && JUMP_LABEL (insn) != NULL_RTX
+	      && ANY_RETURN_P (JUMP_LABEL (insn)))
+	    JUMP_LABEL (copy) = JUMP_LABEL (insn);
           maybe_copy_prologue_epilogue_insn (insn, copy);
 	  break;
 
Index: gcc/rtl.h
===================================================================
--- gcc/rtl.h	(revision 176230)
+++ gcc/rtl.h	(working copy)
@@ -413,6 +413,9 @@  struct GTY((variable_size)) rtvec_def {
   (JUMP_P (INSN) && (GET_CODE (PATTERN (INSN)) == ADDR_VEC || \
 		     GET_CODE (PATTERN (INSN)) == ADDR_DIFF_VEC))
 
+/* Predicate yielding nonzero iff X is a return.  */
+#define ANY_RETURN_P(X) ((X) == ret_rtx)
+
 /* 1 if X is a unary operator.  */
 
 #define UNARY_P(X)   \
Index: gcc/dwarf2cfi.c
===================================================================
--- gcc/dwarf2cfi.c	(revision 176230)
+++ gcc/dwarf2cfi.c	(working copy)
@@ -678,7 +678,7 @@  compute_barrier_args_size_1 (rtx insn, H
     {
       rtx dest = JUMP_LABEL (insn);
 
-      if (dest)
+      if (dest != NULL_RTX && !ANY_RETURN_P (dest))
 	{
 	  if (barrier_args_size [INSN_UID (dest)] < 0)
 	    {
@@ -2294,6 +2294,8 @@  create_cfi_notes (void)
 	  dwarf2out_frame_debug (insn, false);
 	  continue;
 	}
+      if (GET_CODE (pat) == ADDR_VEC || GET_CODE (pat) == ADDR_DIFF_VEC)
+	continue;
 
       if (GET_CODE (pat) == SEQUENCE)
 	{
Index: gcc/resource.c
===================================================================
--- gcc/resource.c	(revision 176230)
+++ gcc/resource.c	(working copy)
@@ -495,6 +495,8 @@  find_dead_or_set_registers (rtx target,
 		  || GET_CODE (PATTERN (this_jump_insn)) == RETURN)
 		{
 		  next = JUMP_LABEL (this_jump_insn);
+		  if (ANY_RETURN_P (next))
+		    next = NULL_RTX;
 		  if (jump_insn == 0)
 		    {
 		      jump_insn = insn;
@@ -562,9 +564,10 @@  find_dead_or_set_registers (rtx target,
 		  AND_COMPL_HARD_REG_SET (scratch, needed.regs);
 		  AND_COMPL_HARD_REG_SET (fallthrough_res.regs, scratch);
 
-		  find_dead_or_set_registers (JUMP_LABEL (this_jump_insn),
-					      &target_res, 0, jump_count,
-					      target_set, needed);
+		  if (!ANY_RETURN_P (JUMP_LABEL (this_jump_insn)))
+		    find_dead_or_set_registers (JUMP_LABEL (this_jump_insn),
+						&target_res, 0, jump_count,
+						target_set, needed);
 		  find_dead_or_set_registers (next,
 					      &fallthrough_res, 0, jump_count,
 					      set, needed);
@@ -878,7 +881,7 @@  mark_target_live_regs (rtx insns, rtx ta
   struct resources set, needed;
 
   /* Handle end of function.  */
-  if (target == 0)
+  if (target == 0 || ANY_RETURN_P (target))
     {
       *res = end_of_function_needs;
       return;
@@ -1097,8 +1100,9 @@  mark_target_live_regs (rtx insns, rtx ta
       struct resources new_resources;
       rtx stop_insn = next_active_insn (jump_insn);
 
-      mark_target_live_regs (insns, next_active_insn (jump_target),
-			     &new_resources);
+      if (!ANY_RETURN_P (jump_target))
+	jump_target = next_active_insn (jump_target);
+      mark_target_live_regs (insns, jump_target, &new_resources);
       CLEAR_RESOURCE (&set);
       CLEAR_RESOURCE (&needed);
 
Index: gcc/basic-block.h
===================================================================
--- gcc/basic-block.h	(revision 176230)
+++ gcc/basic-block.h	(working copy)
@@ -799,6 +799,7 @@  extern rtx block_label (basic_block);
 extern bool purge_all_dead_edges (void);
 extern bool purge_dead_edges (basic_block);
 extern bool fixup_abnormal_edges (void);
+extern basic_block force_nonfallthru_and_redirect (edge, basic_block);
 
 /* In cfgbuild.c.  */
 extern void find_many_sub_basic_blocks (sbitmap);
Index: gcc/config/alpha/alpha.c
===================================================================
--- gcc/config/alpha/alpha.c	(revision 176230)
+++ gcc/config/alpha/alpha.c	(working copy)
@@ -571,59 +571,6 @@  direct_return (void)
 	  && crtl->args.pretend_args_size == 0);
 }
 
-/* Return the ADDR_VEC associated with a tablejump insn.  */
-
-rtx
-alpha_tablejump_addr_vec (rtx insn)
-{
-  rtx tmp;
-
-  tmp = JUMP_LABEL (insn);
-  if (!tmp)
-    return NULL_RTX;
-  tmp = NEXT_INSN (tmp);
-  if (!tmp)
-    return NULL_RTX;
-  if (JUMP_P (tmp)
-      && GET_CODE (PATTERN (tmp)) == ADDR_DIFF_VEC)
-    return PATTERN (tmp);
-  return NULL_RTX;
-}
-
-/* Return the label of the predicted edge, or CONST0_RTX if we don't know.  */
-
-rtx
-alpha_tablejump_best_label (rtx insn)
-{
-  rtx jump_table = alpha_tablejump_addr_vec (insn);
-  rtx best_label = NULL_RTX;
-
-  /* ??? Once the CFG doesn't keep getting completely rebuilt, look
-     there for edge frequency counts from profile data.  */
-
-  if (jump_table)
-    {
-      int n_labels = XVECLEN (jump_table, 1);
-      int best_count = -1;
-      int i, j;
-
-      for (i = 0; i < n_labels; i++)
-	{
-	  int count = 1;
-
-	  for (j = i + 1; j < n_labels; j++)
-	    if (XEXP (XVECEXP (jump_table, 1, i), 0)
-		== XEXP (XVECEXP (jump_table, 1, j), 0))
-	      count++;
-
-	  if (count > best_count)
-	    best_count = count, best_label = XVECEXP (jump_table, 1, i);
-	}
-    }
-
-  return best_label ? best_label : const0_rtx;
-}
-
 /* Return the TLS model to use for SYMBOL.  */
 
 static enum tls_model
Index: gcc/config/alpha/alpha-protos.h
===================================================================
--- gcc/config/alpha/alpha-protos.h	(revision 176230)
+++ gcc/config/alpha/alpha-protos.h	(working copy)
@@ -31,9 +31,6 @@  extern void alpha_expand_prologue (void)
 extern void alpha_expand_epilogue (void);
 extern void alpha_output_filename (FILE *, const char *);
 
-extern rtx alpha_tablejump_addr_vec (rtx);
-extern rtx alpha_tablejump_best_label (rtx);
-
 extern bool alpha_legitimate_constant_p (enum machine_mode, rtx);
 extern rtx alpha_legitimize_reload_address (rtx, enum machine_mode,
 					    int, int, int);
Index: gcc/config/sh/sh.c
===================================================================
--- gcc/config/sh/sh.c	(revision 176230)
+++ gcc/config/sh/sh.c	(working copy)
@@ -5276,7 +5276,8 @@  barrier_align (rtx barrier_or_label)
 	}
       if (prev
 	  && JUMP_P (prev)
-	  && JUMP_LABEL (prev))
+	  && JUMP_LABEL (prev) != NULL_RTX
+	  && !ANY_RETURN_P (JUMP_LABEL (prev)))
 	{
 	  rtx x;
 	  if (jump_to_next
@@ -5975,7 +5976,7 @@  split_branches (rtx first)
 			JUMP_LABEL (insn) = far_label;
 			LABEL_NUSES (far_label)++;
 		      }
-		    redirect_jump (insn, NULL_RTX, 1);
+		    redirect_jump (insn, ret_rtx, 1);
 		    far_label = 0;
 		  }
 	      }
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c	(revision 176230)
+++ gcc/config/arm/arm.c	(working copy)
@@ -11466,6 +11466,7 @@  is_jump_table (rtx insn)
 
   if (GET_CODE (insn) == JUMP_INSN
       && JUMP_LABEL (insn) != NULL
+      && !ANY_RETURN_P (JUMP_LABEL (insn))
       && ((table = next_real_insn (JUMP_LABEL (insn)))
 	  == next_real_insn (insn))
       && table != NULL
Index: gcc/cfgrtl.c
===================================================================
--- gcc/cfgrtl.c	(revision 176230)
+++ gcc/cfgrtl.c	(working copy)
@@ -1119,7 +1119,7 @@  rtl_redirect_edge_and_branch (edge e, ba
 /* Like force_nonfallthru below, but additionally performs redirection
    Used by redirect_edge_and_branch_force.  */
 
-static basic_block
+basic_block
 force_nonfallthru_and_redirect (edge e, basic_block target)
 {
   basic_block jump_block, new_bb = NULL, src = e->src;