
[4/9] Replace call_used_reg_set with call_used_or_fixed_regs

Message ID mptwoeg2jvz.fsf@arm.com
State New
Series Reduce the amount of global ABI state

Commit Message

Richard Sandiford Sept. 10, 2019, 4:31 p.m. UTC
CALL_USED_REGISTERS and call_used_regs infamously contain all fixed
registers (hence the need for CALL_REALLY_USED_REGISTERS etc.).
We try to recover from this to some extent with:

  /* Contains 1 for registers that are set or clobbered by calls.  */
  /* ??? Ideally, this would be just call_used_regs plus global_regs, but
     for someone's bright idea to have call_used_regs strictly include
     fixed_regs.  Which leaves us guessing as to the set of fixed_regs
     that are actually preserved.  We know for sure that those associated
     with the local stack frame are safe, but scant others.  */
  HARD_REG_SET x_regs_invalidated_by_call;

Since global registers are added to fixed_reg_set and call_used_reg_set
too, it's always the case that:

  call_used_reg_set == regs_invalidated_by_call | fixed_reg_set

This patch replaces all uses of call_used_reg_set with a new macro
call_used_or_fixed_regs to make this clearer.

This is part of a series that allows call_used_regs to be what is
now call_really_used_regs.  It's a purely mechanical replacement;
later patches clean up obvious oddities like
"call_used_or_fixed_regs & ~fixed_regs".


2019-09-10  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* hard-reg-set.h (target_hard_regs::x_call_used_reg_set): Delete.
	(call_used_reg_set): Delete.
	(call_used_or_fixed_regs): New macro.
	* reginfo.c (init_reg_sets_1, globalize_reg): Remove initialization
	of call_used_reg_set.
	* caller-save.c (setup_save_areas): Use call_used_or_fixed_regs
	instead of call_used_regs.
	(save_call_clobbered_regs): Likewise.
	* cfgcleanup.c (old_insns_match_p): Likewise.
	* config/c6x/c6x.c (c6x_call_saved_register_used): Likewise.
	* config/epiphany/epiphany.c (epiphany_conditional_register_usage):
	Likewise.
	* config/frv/frv.c (frv_ifcvt_modify_tests): Likewise.
	* config/sh/sh.c (output_stack_adjust): Likewise.
	* final.c (collect_fn_hard_reg_usage): Likewise.
	* ira-build.c (ira_build): Likewise.
	* ira-color.c (calculate_saved_nregs): Likewise.
	(allocno_reload_assign, calculate_spill_cost): Likewise.
	* ira-conflicts.c (ira_build_conflicts): Likewise.
	* ira-costs.c (ira_tune_allocno_costs): Likewise.
	* ira-lives.c (process_bb_node_lives): Likewise.
	* ira.c (setup_reg_renumber): Likewise.
	* lra-assigns.c (find_hard_regno_for_1, lra_assign): Likewise.
	* lra-constraints.c (need_for_call_save_p): Likewise.
	(need_for_split_p, inherit_in_ebb): Likewise.
	* lra-lives.c (process_bb_lives): Likewise.
	* lra-remat.c (call_used_input_regno_present_p): Likewise.
	* postreload.c (reload_combine): Likewise.
	* regrename.c (find_rename_reg): Likewise.
	* reload1.c (reload_as_needed): Likewise.
	* rtlanal.c (find_all_hard_reg_sets): Likewise.
	* sel-sched.c (mark_unavailable_hard_regs): Likewise.
	* shrink-wrap.c (requires_stack_frame_p): Likewise.

Comments

Jeff Law Sept. 10, 2019, 6:32 p.m. UTC | #1
On 9/10/19 10:31 AM, Richard Sandiford wrote:
> [...]
OK
jeff

Patch

Index: gcc/hard-reg-set.h
===================================================================
--- gcc/hard-reg-set.h	2019-09-10 17:22:40.938445344 +0100
+++ gcc/hard-reg-set.h	2019-09-10 17:22:44.694419214 +0100
@@ -397,9 +397,6 @@  struct target_hard_regs {
 
   char x_call_really_used_regs[FIRST_PSEUDO_REGISTER];
 
-  /* The same info as a HARD_REG_SET.  */
-  HARD_REG_SET x_call_used_reg_set;
-
   /* For targets that use reload rather than LRA, this is the set
      of registers that we are able to save and restore around calls
      (i.e. those for which we know a suitable mode and set of
@@ -480,12 +477,12 @@  #define call_used_regs \
   (this_target_hard_regs->x_call_used_regs)
 #define call_really_used_regs \
   (this_target_hard_regs->x_call_really_used_regs)
-#define call_used_reg_set \
-  (this_target_hard_regs->x_call_used_reg_set)
 #define savable_regs \
   (this_target_hard_regs->x_savable_regs)
 #define regs_invalidated_by_call \
   (this_target_hard_regs->x_regs_invalidated_by_call)
+#define call_used_or_fixed_regs \
+  (regs_invalidated_by_call | fixed_reg_set)
 #define reg_alloc_order \
   (this_target_hard_regs->x_reg_alloc_order)
 #define inv_reg_alloc_order \
Index: gcc/reginfo.c
===================================================================
--- gcc/reginfo.c	2019-09-10 17:22:37.774467353 +0100
+++ gcc/reginfo.c	2019-09-10 17:22:44.698419188 +0100
@@ -350,7 +350,6 @@  init_reg_sets_1 (void)
   /* Initialize "constant" tables.  */
 
   CLEAR_HARD_REG_SET (fixed_reg_set);
-  CLEAR_HARD_REG_SET (call_used_reg_set);
   CLEAR_HARD_REG_SET (regs_invalidated_by_call);
 
   operand_reg_set &= accessible_reg_set;
@@ -384,9 +383,6 @@  init_reg_sets_1 (void)
       if (fixed_regs[i])
 	SET_HARD_REG_BIT (fixed_reg_set, i);
 
-      if (call_used_regs[i])
-	SET_HARD_REG_BIT (call_used_reg_set, i);
-
       /* There are a couple of fixed registers that we know are safe to
 	 exclude from being clobbered by calls:
 
@@ -426,7 +422,6 @@  init_reg_sets_1 (void)
 	{
 	  fixed_regs[i] = call_used_regs[i] = 1;
 	  SET_HARD_REG_BIT (fixed_reg_set, i);
-	  SET_HARD_REG_BIT (call_used_reg_set, i);
 	}
     }
 
@@ -779,7 +774,6 @@  globalize_reg (tree decl, int i)
 #endif
 
   SET_HARD_REG_BIT (fixed_reg_set, i);
-  SET_HARD_REG_BIT (call_used_reg_set, i);
 
   reinit_regs ();
 }
Index: gcc/caller-save.c
===================================================================
--- gcc/caller-save.c	2019-09-10 17:22:40.938445344 +0100
+++ gcc/caller-save.c	2019-09-10 17:22:44.686419271 +0100
@@ -426,7 +426,7 @@  setup_save_areas (void)
       freq = REG_FREQ_FROM_BB (BLOCK_FOR_INSN (insn));
       REG_SET_TO_HARD_REG_SET (hard_regs_to_save,
 			       &chain->live_throughout);
-      get_call_reg_set_usage (insn, &used_regs, call_used_reg_set);
+      get_call_reg_set_usage (insn, &used_regs, call_used_or_fixed_regs);
 
       /* Record all registers set in this call insn.  These don't
 	 need to be saved.  N.B. the call insn might set a subreg
@@ -509,7 +509,7 @@  setup_save_areas (void)
 
 	  REG_SET_TO_HARD_REG_SET (hard_regs_to_save,
 				   &chain->live_throughout);
-	  get_call_reg_set_usage (insn, &used_regs, call_used_reg_set);
+	  get_call_reg_set_usage (insn, &used_regs, call_used_or_fixed_regs);
 
 	  /* Record all registers set in this call insn.  These don't
 	     need to be saved.  N.B. the call insn might set a subreg
@@ -839,7 +839,7 @@  save_call_clobbered_regs (void)
 				     | hard_regs_saved);
 	      hard_regs_to_save &= savable_regs;
 	      get_call_reg_set_usage (insn, &call_def_reg_set,
-				      call_used_reg_set);
+				      call_used_or_fixed_regs);
 	      hard_regs_to_save &= call_def_reg_set;
 
 	      for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
@@ -855,7 +855,8 @@  save_call_clobbered_regs (void)
 	      
 	      if (cheap
 		  && HARD_REGISTER_P (cheap)
-		  && TEST_HARD_REG_BIT (call_used_reg_set, REGNO (cheap)))
+		  && TEST_HARD_REG_BIT (call_used_or_fixed_regs,
+					REGNO (cheap)))
 		{
 		  rtx dest, newpat;
 		  rtx pat = PATTERN (insn);
Index: gcc/cfgcleanup.c
===================================================================
--- gcc/cfgcleanup.c	2019-09-09 18:59:33.967970674 +0100
+++ gcc/cfgcleanup.c	2019-09-10 17:22:44.686419271 +0100
@@ -1228,8 +1228,8 @@  old_insns_match_p (int mode ATTRIBUTE_UN
 
       HARD_REG_SET i1_used, i2_used;
 
-      get_call_reg_set_usage (i1, &i1_used, call_used_reg_set);
-      get_call_reg_set_usage (i2, &i2_used, call_used_reg_set);
+      get_call_reg_set_usage (i1, &i1_used, call_used_or_fixed_regs);
+      get_call_reg_set_usage (i2, &i2_used, call_used_or_fixed_regs);
 
       if (i1_used != i2_used)
         return dir_none;
Index: gcc/config/c6x/c6x.c
===================================================================
--- gcc/config/c6x/c6x.c	2019-09-10 17:22:34.518490002 +0100
+++ gcc/config/c6x/c6x.c	2019-09-10 17:22:44.686419271 +0100
@@ -1094,7 +1094,7 @@  c6x_call_saved_register_used (tree call_
   INIT_CUMULATIVE_ARGS (cum_v, NULL, NULL, 0, 0);
   cum = pack_cumulative_args (&cum_v);
 
-  call_saved_regset = ~call_used_reg_set;
+  call_saved_regset = ~call_used_or_fixed_regs;
   for (i = 0; i < call_expr_nargs (call_expr); i++)
     {
       parameter = CALL_EXPR_ARG (call_expr, i);
Index: gcc/config/epiphany/epiphany.c
===================================================================
--- gcc/config/epiphany/epiphany.c	2019-09-09 18:59:20.840063365 +0100
+++ gcc/config/epiphany/epiphany.c	2019-09-10 17:22:44.690419243 +0100
@@ -2242,7 +2242,7 @@  epiphany_conditional_register_usage (voi
     CLEAR_HARD_REG_SET (reg_class_contents[SHORT_INSN_REGS]);
   reg_class_contents[SIBCALL_REGS] = reg_class_contents[GENERAL_REGS];
   /* It would be simpler and quicker if we could just use
-     &~, alas, call_used_reg_set is yet uninitialized;
+     &~, alas, call_used_or_fixed_regs is yet uninitialized;
      it is set up later by our caller.  */
   for (i = 0; i < FIRST_PSEUDO_REGISTER; i++)
     if (!call_used_regs[i])
Index: gcc/config/frv/frv.c
===================================================================
--- gcc/config/frv/frv.c	2019-09-09 18:59:20.844063337 +0100
+++ gcc/config/frv/frv.c	2019-09-10 17:22:44.690419243 +0100
@@ -5201,7 +5201,7 @@  frv_ifcvt_modify_tests (ce_if_block *ce_
      not fixed.  However, allow the ICC/ICR temporary registers to be allocated
      if we did not need to use them in reloading other registers.  */
   memset (&tmp_reg->regs, 0, sizeof (tmp_reg->regs));
-  tmp_reg->regs = call_used_reg_set &~ fixed_reg_set;
+  tmp_reg->regs = call_used_or_fixed_regs &~ fixed_reg_set;
   SET_HARD_REG_BIT (tmp_reg->regs, ICC_TEMP);
   SET_HARD_REG_BIT (tmp_reg->regs, ICR_TEMP);
 
Index: gcc/config/sh/sh.c
===================================================================
--- gcc/config/sh/sh.c	2019-09-10 17:22:37.774467353 +0100
+++ gcc/config/sh/sh.c	2019-09-10 17:22:44.694419214 +0100
@@ -6707,7 +6707,7 @@  output_stack_adjust (int size, rtx reg,
 	    temp = -1;
 	  if (temp < 0 && ! current_function_interrupt && epilogue_p >= 0)
 	    {
-	      HARD_REG_SET temps = (call_used_reg_set
+	      HARD_REG_SET temps = (call_used_or_fixed_regs
 				    & ~fixed_reg_set
 				    & savable_regs);
 	      if (epilogue_p > 0)
Index: gcc/final.c
===================================================================
--- gcc/final.c	2019-09-09 18:59:15.984097650 +0100
+++ gcc/final.c	2019-09-10 17:22:44.694419214 +0100
@@ -5007,7 +5007,7 @@  collect_fn_hard_reg_usage (void)
 	  && !self_recursive_call_p (insn))
 	{
 	  if (!get_call_reg_set_usage (insn, &insn_used_regs,
-				       call_used_reg_set))
+				       call_used_or_fixed_regs))
 	    return;
 
 	  function_used_regs |= insn_used_regs;
@@ -5030,7 +5030,7 @@  collect_fn_hard_reg_usage (void)
 
   /* The information we have gathered is only interesting if it exposes a
      register from the call_used_regs that is not used in this function.  */
-  if (hard_reg_set_subset_p (call_used_reg_set, function_used_regs))
+  if (hard_reg_set_subset_p (call_used_or_fixed_regs, function_used_regs))
     return;
 
   node = cgraph_node::rtl_info (current_function_decl);
Index: gcc/ira-build.c
===================================================================
--- gcc/ira-build.c	2019-09-09 18:59:51.239848733 +0100
+++ gcc/ira-build.c	2019-09-10 17:22:44.694419214 +0100
@@ -3462,7 +3462,7 @@  ira_build (void)
 	 allocno crossing calls.  */
       FOR_EACH_ALLOCNO (a, ai)
 	if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	  ior_hard_reg_conflicts (a, call_used_reg_set);
+	  ior_hard_reg_conflicts (a, call_used_or_fixed_regs);
     }
   if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
     print_copies (ira_dump_file);
Index: gcc/ira-color.c
===================================================================
--- gcc/ira-color.c	2019-09-09 18:59:33.971970646 +0100
+++ gcc/ira-color.c	2019-09-10 17:22:44.694419214 +0100
@@ -1650,7 +1650,7 @@  calculate_saved_nregs (int hard_regno, m
   ira_assert (hard_regno >= 0);
   for (i = hard_regno_nregs (hard_regno, mode) - 1; i >= 0; i--)
     if (!allocated_hardreg_p[hard_regno + i]
-	&& !TEST_HARD_REG_BIT (call_used_reg_set, hard_regno + i)
+	&& !TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno + i)
 	&& !LOCAL_REGNO (hard_regno + i))
       nregs++;
   return nregs;
@@ -4379,7 +4379,7 @@  allocno_reload_assign (ira_allocno_t a,
       saved[i] = OBJECT_TOTAL_CONFLICT_HARD_REGS (obj);
       OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= forbidden_regs;
       if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_reg_set;
+	OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
     }
   ALLOCNO_ASSIGNED_P (a) = false;
   aclass = ALLOCNO_CLASS (a);
@@ -4400,7 +4400,7 @@  allocno_reload_assign (ira_allocno_t a,
 					    [aclass][hard_regno]]));
       if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
 	  && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
-					      call_used_reg_set))
+					      call_used_or_fixed_regs))
 	{
 	  ira_assert (flag_caller_saves);
 	  caller_save_needed = 1;
@@ -4715,7 +4715,7 @@  calculate_spill_cost (int *regnos, rtx i
       cost += ALLOCNO_MEMORY_COST (a) - ALLOCNO_CLASS_COST (a);
       nregs = hard_regno_nregs (hard_regno, ALLOCNO_MODE (a));
       for (j = 0; j < nregs; j++)
-	if (! TEST_HARD_REG_BIT (call_used_reg_set, hard_regno + j))
+	if (! TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno + j))
 	  break;
       if (j == nregs)
 	count++;
Index: gcc/ira-conflicts.c
===================================================================
--- gcc/ira-conflicts.c	2019-09-10 17:22:40.938445344 +0100
+++ gcc/ira-conflicts.c	2019-09-10 17:22:44.694419214 +0100
@@ -740,7 +740,7 @@  ira_build_conflicts (void)
   else
     temp_hard_reg_set = (reg_class_contents[base]
 			 & ~ira_no_alloc_regs
-			 & call_used_reg_set);
+			 & call_used_or_fixed_regs);
   FOR_EACH_ALLOCNO (a, ai)
     {
       int i, n = ALLOCNO_NUM_OBJECTS (a);
@@ -760,13 +760,13 @@  ira_build_conflicts (void)
 		  && REG_USERVAR_P (allocno_reg)
 		  && ! reg_is_parm_p (allocno_reg)))
 	    {
-	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_reg_set;
-	      OBJECT_CONFLICT_HARD_REGS (obj) |= call_used_reg_set;
+	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
+	      OBJECT_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
 	    }
 	  else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
 	    {
 	      HARD_REG_SET no_caller_save_reg_set
-		= (call_used_reg_set & ~savable_regs);
+		= (call_used_or_fixed_regs & ~savable_regs);
 	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= no_caller_save_reg_set;
 	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= temp_hard_reg_set;
 	      OBJECT_CONFLICT_HARD_REGS (obj) |= no_caller_save_reg_set;
@@ -805,7 +805,7 @@  ira_build_conflicts (void)
 	      /* Allocnos bigger than the saved part of call saved
 		 regs must conflict with them.  */
 	      for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
-		if (!TEST_HARD_REG_BIT (call_used_reg_set, regno)
+		if (!TEST_HARD_REG_BIT (call_used_or_fixed_regs, regno)
 		    && targetm.hard_regno_call_part_clobbered (NULL, regno,
 							       obj_mode))
 		  {
Index: gcc/ira-costs.c
===================================================================
--- gcc/ira-costs.c	2019-09-09 18:59:20.856063253 +0100
+++ gcc/ira-costs.c	2019-09-10 17:22:44.694419214 +0100
@@ -2380,7 +2380,7 @@  ira_tune_allocno_costs (void)
 	      if (ira_hard_reg_set_intersection_p (regno, mode,
 						   *crossed_calls_clobber_regs)
 		  && (ira_hard_reg_set_intersection_p (regno, mode,
-						       call_used_reg_set)
+						       call_used_or_fixed_regs)
 		      || targetm.hard_regno_call_part_clobbered (NULL, regno,
 								 mode)))
 		cost += (ALLOCNO_CALL_FREQ (a)
Index: gcc/ira-lives.c
===================================================================
--- gcc/ira-lives.c	2019-09-09 18:59:20.856063253 +0100
+++ gcc/ira-lives.c	2019-09-10 17:22:44.698419188 +0100
@@ -1257,7 +1257,7 @@  process_bb_node_lives (ira_loop_tree_nod
 		  HARD_REG_SET this_call_used_reg_set;
 
 		  get_call_reg_set_usage (insn, &this_call_used_reg_set,
-					  call_used_reg_set);
+					  call_used_or_fixed_regs);
 
 		  /* Don't allocate allocnos that cross setjmps or any
 		     call, if this function receives a nonlocal
Index: gcc/ira.c
===================================================================
--- gcc/ira.c	2019-09-09 18:59:33.971970646 +0100
+++ gcc/ira.c	2019-09-10 17:22:44.698419188 +0100
@@ -2370,7 +2370,7 @@  setup_reg_renumber (void)
 	    }
 	  if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
 	      && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
-						  call_used_reg_set))
+						  call_used_or_fixed_regs))
 	    {
 	      ira_assert (!optimize || flag_caller_saves
 			  || (ALLOCNO_CALLS_CROSSED_NUM (a)
Index: gcc/lra-assigns.c
===================================================================
--- gcc/lra-assigns.c	2019-09-09 18:59:26.356024419 +0100
+++ gcc/lra-assigns.c	2019-09-10 17:22:44.698419188 +0100
@@ -654,7 +654,7 @@  find_hard_regno_for_1 (int regno, int *c
 	  for (j = 0;
 	       j < hard_regno_nregs (hard_regno, PSEUDO_REGNO_MODE (regno));
 	       j++)
-	    if (! TEST_HARD_REG_BIT (call_used_reg_set, hard_regno + j)
+	    if (! TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno + j)
 		&& ! df_regs_ever_live_p (hard_regno + j))
 	      /* It needs save restore.	 */
 	      hard_regno_costs[hard_regno]
@@ -1641,7 +1641,7 @@  lra_assign (bool &fails_p)
     for (i = FIRST_PSEUDO_REGISTER; i < max_regno; i++)
       if (lra_reg_info[i].nrefs != 0 && reg_renumber[i] >= 0
 	  && lra_reg_info[i].call_insn
-	  && overlaps_hard_reg_set_p (call_used_reg_set,
+	  && overlaps_hard_reg_set_p (call_used_or_fixed_regs,
 				      PSEUDO_REGNO_MODE (i), reg_renumber[i]))
 	gcc_unreachable ();
   /* Setup insns to process on the next constraint pass.  */
Index: gcc/lra-constraints.c
===================================================================
--- gcc/lra-constraints.c	2019-09-09 18:59:20.860063224 +0100
+++ gcc/lra-constraints.c	2019-09-10 17:22:44.698419188 +0100
@@ -5439,7 +5439,7 @@  need_for_call_save_p (int regno)
 	      ((flag_ipa_ra &&
 		! hard_reg_set_empty_p (lra_reg_info[regno].actual_call_used_reg_set))
 	       ? lra_reg_info[regno].actual_call_used_reg_set
-	       : call_used_reg_set,
+	       : call_used_or_fixed_regs,
 	       PSEUDO_REGNO_MODE (regno), reg_renumber[regno])
 	      || (targetm.hard_regno_call_part_clobbered
 		  (lra_reg_info[regno].call_insn,
@@ -5483,7 +5483,7 @@  need_for_split_p (HARD_REG_SET potential
 	      true) the assign pass assumes that all pseudos living
 	      through calls are assigned to call saved hard regs.  */
 	   && (regno >= FIRST_PSEUDO_REGISTER
-	       || ! TEST_HARD_REG_BIT (call_used_reg_set, regno)
+	       || ! TEST_HARD_REG_BIT (call_used_or_fixed_regs, regno)
 	       || usage_insns[regno].calls_num == calls_num)
 	   /* We need at least 2 reloads to make pseudo splitting
 	      profitable.  We should provide hard regno splitting in
@@ -6458,7 +6458,7 @@  inherit_in_ebb (rtx_insn *head, rtx_insn
 		  /* If there are pending saves/restores, the
 		     optimization is not worth.	 */
 		  && usage_insns[regno].calls_num == calls_num - 1
-		  && TEST_HARD_REG_BIT (call_used_reg_set, hard_regno))
+		  && TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno))
 		{
 		  /* Restore the pseudo from the call result as
 		     REG_RETURNED note says that the pseudo value is
Index: gcc/lra-lives.c
===================================================================
--- gcc/lra-lives.c	2019-09-09 18:59:33.971970646 +0100
+++ gcc/lra-lives.c	2019-09-10 17:22:44.698419188 +0100
@@ -928,12 +928,12 @@  process_bb_lives (basic_block bb, int &c
 	{
 	  call_insn = curr_insn;
 	  if (! flag_ipa_ra && ! targetm.return_call_with_max_clobbers)
-	    last_call_used_reg_set = call_used_reg_set;
+	    last_call_used_reg_set = call_used_or_fixed_regs;
 	  else
 	    {
 	      HARD_REG_SET this_call_used_reg_set;
 	      get_call_reg_set_usage (curr_insn, &this_call_used_reg_set,
-				      call_used_reg_set);
+				      call_used_or_fixed_regs);
 
 	      bool flush = (! hard_reg_set_empty_p (last_call_used_reg_set)
 			    && (last_call_used_reg_set
Index: gcc/lra-remat.c
===================================================================
--- gcc/lra-remat.c	2019-08-07 19:31:48.420992240 +0100
+++ gcc/lra-remat.c	2019-09-10 17:22:44.698419188 +0100
@@ -69,9 +69,9 @@  Software Foundation; either version 3, o
 /* Number of candidates for rematerialization.  */
 static unsigned int cands_num;
 
-/* The following is used for representation of call_used_reg_set in
+/* The following is used for representation of call_used_or_fixed_regs in
    form array whose elements are hard register numbers with nonzero bit
-   in CALL_USED_REG_SET. */
+   in CALL_USED_OR_FIXED_REGS. */
 static int call_used_regs_arr_len;
 static int call_used_regs_arr[FIRST_PSEUDO_REGISTER];
 
@@ -710,7 +710,7 @@  call_used_input_regno_present_p (rtx_ins
 	 reg != NULL;
 	 reg = reg->next)
       if (reg->type == OP_IN && reg->regno < FIRST_PSEUDO_REGISTER
-	  && TEST_HARD_REG_BIT (call_used_reg_set, reg->regno))
+	  && TEST_HARD_REG_BIT (call_used_or_fixed_regs, reg->regno))
 	return true;
   return false;
 }
Index: gcc/postreload.c
===================================================================
--- gcc/postreload.c	2019-09-09 18:59:15.992097594 +0100
+++ gcc/postreload.c	2019-09-10 17:22:44.698419188 +0100
@@ -1332,7 +1332,7 @@  reload_combine (void)
 	  rtx link;
 	  HARD_REG_SET used_regs;
 
-	  get_call_reg_set_usage (insn, &used_regs, call_used_reg_set);
+	  get_call_reg_set_usage (insn, &used_regs, call_used_or_fixed_regs);
 
 	  for (r = 0; r < FIRST_PSEUDO_REGISTER; r++)
 	    if (TEST_HARD_REG_BIT (used_regs, r))
Index: gcc/regrename.c
===================================================================
--- gcc/regrename.c	2019-09-09 18:59:26.356024419 +0100
+++ gcc/regrename.c	2019-09-10 17:22:44.698419188 +0100
@@ -367,7 +367,7 @@  find_rename_reg (du_head_p this_head, en
      If the chain needs a call-saved register, mark the call-used
      registers as unavailable.  */
   if (this_head->need_caller_save_reg)
-    *unavailable |= call_used_reg_set;
+    *unavailable |= call_used_or_fixed_regs;
 
   /* Mark registers that overlap this chain's lifetime as unavailable.  */
   merge_overlapping_regs (unavailable, this_head);
Index: gcc/reload1.c
===================================================================
--- gcc/reload1.c	2019-09-09 18:59:26.356024419 +0100
+++ gcc/reload1.c	2019-09-10 17:22:44.702419160 +0100
@@ -4784,7 +4784,7 @@  reload_as_needed (int live_known)
          be partially clobbered by the call.  */
       else if (CALL_P (insn))
 	{
-	  reg_reloaded_valid &= ~(call_used_reg_set
+	  reg_reloaded_valid &= ~(call_used_or_fixed_regs
 				  | reg_reloaded_call_part_clobbered);
 
 	  /* If this is a call to a setjmp-type function, we must not
Index: gcc/rtlanal.c
===================================================================
--- gcc/rtlanal.c	2019-09-10 17:18:59.955982709 +0100
+++ gcc/rtlanal.c	2019-09-10 17:22:44.702419160 +0100
@@ -1477,7 +1477,7 @@  find_all_hard_reg_sets (const rtx_insn *
   CLEAR_HARD_REG_SET (*pset);
   note_stores (insn, record_hard_reg_sets, pset);
   if (CALL_P (insn) && implicit)
-    *pset |= call_used_reg_set;
+    *pset |= call_used_or_fixed_regs;
   for (link = REG_NOTES (insn); link; link = XEXP (link, 1))
     if (REG_NOTE_KIND (link) == REG_INC)
       record_hard_reg_sets (XEXP (link, 0), NULL, pset);
Index: gcc/sel-sched.c
===================================================================
--- gcc/sel-sched.c	2019-09-09 18:59:20.864063196 +0100
+++ gcc/sel-sched.c	2019-09-10 17:22:44.702419160 +0100
@@ -1224,10 +1224,10 @@  mark_unavailable_hard_regs (def_t def, s
     reg_rename_p->unavailable_hard_regs |= sel_hrd.stack_regs;
 #endif
 
-  /* If there's a call on this path, make regs from call_used_reg_set
+  /* If there's a call on this path, make regs from call_used_or_fixed_regs
      unavailable.  */
   if (def->crosses_call)
-    reg_rename_p->unavailable_hard_regs |= call_used_reg_set;
+    reg_rename_p->unavailable_hard_regs |= call_used_or_fixed_regs;
 
   /* Stop here before reload: we need FRAME_REGS, STACK_REGS, and crosses_call,
      but not register classes.  */
Index: gcc/shrink-wrap.c
===================================================================
--- gcc/shrink-wrap.c	2019-09-09 18:59:20.864063196 +0100
+++ gcc/shrink-wrap.c	2019-09-10 17:22:44.702419160 +0100
@@ -76,7 +76,7 @@  requires_stack_frame_p (rtx_insn *insn,
     }
   if (hard_reg_set_intersect_p (hardregs, prologue_used))
     return true;
-  hardregs &= ~call_used_reg_set;
+  hardregs &= ~call_used_or_fixed_regs;
   for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
     if (TEST_HARD_REG_BIT (hardregs, regno)
 	&& df_regs_ever_live_p (regno))