
[19/32] Remove global call sets: IRA

Message ID mpt8squwssx.fsf@arm.com
State New
Series Support multiple ABIs in the same translation unit

Commit Message

Richard Sandiford Sept. 11, 2019, 7:12 p.m. UTC
For -fipa-ra, IRA already keeps track of which specific registers
are call-clobbered in a region, rather than using global information.
The patch generalises this so that it tracks which ABIs are used
by calls in the region.

We can then use the new ABI descriptors to handle partially-clobbered
registers in the same way as fully-clobbered registers, without having
special code for targetm.hard_regno_call_part_clobbered.  This in turn
makes -fipa-ra work for partially-clobbered registers too.
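The region-level approximation can be sketched in a few lines.  This is
a toy model (plain bitmasks rather than GCC's HARD_REG_SET, and made-up
per-ABI clobber sets), but it mirrors the shape of the new
call_clobbers_in_region: union the clobbers of every ABI seen in the
region, then intersect with the registers the region's calls are known
to clobber.

```cpp
#include <cassert>
#include <cstdint>

/* Toy model: a "hard reg set" is a 32-bit mask; the per-ABI clobber
   sets below are invented for illustration.  In GCC this data would
   come from function_abis[id].mode_clobbers (mode).  */
typedef std::uint32_t hard_reg_set;

static const hard_reg_set toy_mode_clobbers[2] = {
  0x0000000fu,  /* ABI 0 clobbers regs 0-3.  */
  0x00000003u,  /* ABI 1 (e.g. a vector PCS) clobbers only regs 0-1.  */
};

/* Union the clobbers of every ABI whose bit is set in ABIS, then
   restrict the result to MASK, the registers that calls in the
   region actually clobber (fully or partially).  */
hard_reg_set
toy_call_clobbers_in_region (unsigned int abis, hard_reg_set mask)
{
  hard_reg_set result = 0;
  for (unsigned int id = 0; abis; abis >>= 1, ++id)
    if (abis & 1)
      result |= toy_mode_clobbers[id];
  return result & mask;
}
```

In this toy setup, a region containing only ABI-1 calls conflicts with
far fewer registers than a blanket call_used_or_fixed_regs set would
suggest, which is the point of tracking ABIs per region.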

A side-effect of allowing multiple ABIs is that we no longer have
an obvious set of conflicting registers for the self-described
"fragile hack" in ira-conflicts.c.  This code kicks in for
user-defined registers that aren't live across a call at -O0,
and it tries to avoid allocating a call-clobbered register to them.
Here I've used the set of call-clobbered registers in the current
function's ABI, applied on top of any registers that are clobbered by
called functions.  This is enough to keep gcc.dg/debug/dwarf2/pr5948.c
happy.
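In the same toy encoding as above (plain bitmasks, invented values),
the conflict set described here is just a union; the two parameters
stand in for crtl->abi's call-clobbered set and for the registers the
crossed calls are known to clobber, and are not real GCC interfaces.

```cpp
#include <cassert>
#include <cstdint>

typedef std::uint32_t hard_reg_set;

/* Sketch of the -O0 conflict set for a user variable: avoid any
   register the current function's ABI treats as call-clobbered,
   on top of whatever the called functions themselves clobber.  */
hard_reg_set
toy_fragile_hack_conflicts (hard_reg_set current_abi_clobbers,
                            hard_reg_set crossed_call_clobbers)
{
  return current_abi_clobbers | crossed_call_clobbers;
}
```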

The handling of GENERIC_STACK_CHECK in do_reload seemed to have
a reversed condition:

      for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
	if (df_regs_ever_live_p (i)
	    && !fixed_regs[i]
	    && call_used_or_fixed_reg_p (i))
	  size += UNITS_PER_WORD;

The final part of the condition counts registers that don't need to be
saved in the prologue, but I think the opposite was intended.
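A minimal sketch of the corrected test (the parameter names are
stand-ins for df_regs_ever_live_p, fixed_regs and the new crtl->abi
query, not real GCC interfaces): a register adds to the frame-size
estimate only when the prologue would actually have to save it.

```cpp
#include <cassert>

/* Return true if a register with the given properties would need a
   prologue save, and so should count toward GENERIC_STACK_CHECK's
   frame-size estimate.  */
bool
toy_needs_prologue_save (bool ever_live, bool is_fixed,
                         bool abi_clobbers_full_reg)
{
  return ever_live && !is_fixed && !abi_clobbers_full_reg;
}
```

The pre-patch code effectively tested the last condition without the
negation, counting exactly the registers that need no prologue save.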


2019-09-11  Richard Sandiford  <richard.sandiford@arm.com>

gcc/
	* function-abi.h (call_clobbers_in_region): Declare.
	(call_clobbered_in_region_p): New function.
	* function-abi.cc (call_clobbers_in_region): Likewise.
	* ira-int.h: Include function-abi.h.
	(ira_allocno::crossed_calls_abis): New field.
	(ALLOCNO_CROSSED_CALLS_ABIS): New macro.
	(ira_need_caller_save_regs): New function.
	(ira_need_caller_save_p): Likewise.
	* ira.c (setup_reg_renumber): Use ira_need_caller_save_p instead
	of call_used_or_fixed_regs.
	(do_reload): Use crtl->abi to test whether the current function
	needs to save a register in the prologue.  Count registers that
	need to be saved rather than registers that don't.
	* ira-build.c (create_cap_allocno): Copy ALLOCNO_CROSSED_CALLS_ABIS.
	Remove unnecessary | from ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
	(propagate_allocno_info): Merge ALLOCNO_CROSSED_CALLS_ABIS too.
	(propagate_some_info_from_allocno): Likewise.
	(copy_info_to_removed_store_destinations): Likewise.
	(ira_flattening): Say that ALLOCNO_CROSSED_CALLS_ABIS and
	ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS are handled conservatively.
	(ira_build): Use ira_need_caller_save_regs instead of
	call_used_or_fixed_regs.
	* ira-color.c (calculate_saved_nregs): Use crtl->abi to test
	whether the current function would need to save a register
	before using it.
	(calculate_spill_cost): Likewise.
	(allocno_reload_assign): Use ira_need_caller_save_regs and
	ira_need_caller_save_p instead of call_used_or_fixed_regs.
	* ira-conflicts.c (ira_build_conflicts): Use
	ira_need_caller_save_regs rather than call_used_or_fixed_regs
	as the set of call-clobbered registers.  Remove the
	call_used_or_fixed_regs mask from the calculation of
	temp_hard_reg_set and mask its use instead.  Remove special
	handling of partially-clobbered registers.
	* ira-costs.c (ira_tune_allocno_costs): Use ira_need_caller_save_p.
	* ira-lives.c (process_bb_node_lives): Use mode_clobbers to
	calculate the set of conflicting registers for calls that
	can throw.  Record the ABIs of calls in ALLOCNO_CROSSED_CALLS_ABIS.
	Use full_and_partial_reg_clobbers rather than full_reg_clobbers
	for the calculation of ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS.
	Use eh_edge_abi to calculate the set of registers that could
	be clobbered by an EH edge.  Include partially-clobbered as
	well as fully-clobbered registers.

Comments

Jeff Law Sept. 30, 2019, 3:16 p.m. UTC
On 9/11/19 1:12 PM, Richard Sandiford wrote:
> [...]
> The handling of GENERIC_STACK_CHECK in do_reload seemed to have
> a reversed condition:
> 
>       for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
> 	if (df_regs_ever_live_p (i)
> 	    && !fixed_regs[i]
> 	    && call_used_or_fixed_reg_p (i))
> 	  size += UNITS_PER_WORD;
> 
> The final part of the condition counts registers that don't need to be
> saved in the prologue, but I think the opposite was intended.
Agreed.  Given it's just used to emit a diagnostic and that in reality
it's only used for Ada, I'm confident this code isn't getting exercised
in any significant way right now.

OK
jeff

Patch

Index: gcc/function-abi.h
===================================================================
--- gcc/function-abi.h	2019-09-11 19:47:24.418262673 +0100
+++ gcc/function-abi.h	2019-09-11 19:48:31.709788491 +0100
@@ -265,6 +265,32 @@  #define default_function_abi \
   (this_target_function_abi_info->x_function_abis[0])
 #define eh_edge_abi default_function_abi
 
+extern HARD_REG_SET call_clobbers_in_region (unsigned int, const_hard_reg_set,
+					     machine_mode mode);
+
+/* Return true if (reg:MODE REGNO) might be clobbered by one of the
+   calls in a region described by ABIS and MASK, where:
+
+   * Bit ID of ABIS is set if the region contains a call with
+     function_abi identifier ID.
+
+   * MASK contains all the registers that are fully or partially
+     clobbered by calls in the region.
+
+   This is not quite as accurate as testing each individual call,
+   but it's a close and conservatively-correct approximation.
+   It's much better for some targets than:
+
+     overlaps_hard_reg_set_p (MASK, MODE, REGNO).  */
+
+inline bool
+call_clobbered_in_region_p (unsigned int abis, const_hard_reg_set mask,
+			    machine_mode mode, unsigned int regno)
+{
+  HARD_REG_SET clobbers = call_clobbers_in_region (abis, mask, mode);
+  return overlaps_hard_reg_set_p (clobbers, mode, regno);
+}
+
 extern const predefined_function_abi &fntype_abi (const_tree);
 extern function_abi fndecl_abi (const_tree);
 extern function_abi call_insn_abi (const rtx_insn *);
Index: gcc/function-abi.cc
===================================================================
--- gcc/function-abi.cc	2019-09-11 19:47:32.898202916 +0100
+++ gcc/function-abi.cc	2019-09-11 19:48:31.709788491 +0100
@@ -126,6 +126,31 @@  predefined_function_abi::add_full_reg_cl
     SET_HARD_REG_BIT (m_mode_clobbers[i], regno);
 }
 
+/* Return the set of registers that cannot be used to hold a value of
+   mode MODE across the calls in a region described by ABIS and MASK, where:
+
+   * Bit ID of ABIS is set if the region contains a call with
+     function_abi identifier ID.
+
+   * MASK contains all the registers that are fully or partially
+     clobbered by calls in the region.
+
+   This is not quite as accurate as testing each individual call,
+   but it's a close and conservatively-correct approximation.
+   It's much better for some targets than just using MASK.  */
+
+HARD_REG_SET
+call_clobbers_in_region (unsigned int abis, const_hard_reg_set mask,
+			 machine_mode mode)
+{
+  HARD_REG_SET result;
+  CLEAR_HARD_REG_SET (result);
+  for (unsigned int id = 0; abis; abis >>= 1, ++id)
+    if (abis & 1)
+      result |= function_abis[id].mode_clobbers (mode);
+  return result & mask;
+}
+
 /* Return the predefined ABI used by functions with type TYPE.  */
 
 const predefined_function_abi &
Index: gcc/ira-int.h
===================================================================
--- gcc/ira-int.h	2019-09-09 18:59:51.239848733 +0100
+++ gcc/ira-int.h	2019-09-11 19:48:31.713788462 +0100
@@ -22,6 +22,7 @@  Software Foundation; either version 3, o
 #define GCC_IRA_INT_H
 
 #include "recog.h"
+#include "function-abi.h"
 
 /* To provide consistency in naming, all IRA external variables,
    functions, common typedefs start with prefix ira_.  */
@@ -287,6 +288,9 @@  struct ira_allocno
   /* Register class which should be used for allocation for given
      allocno.  NO_REGS means that we should use memory.  */
   ENUM_BITFIELD (reg_class) aclass : 16;
+  /* A bitmask of the ABIs used by calls that occur while the allocno
+     is live.  */
+  unsigned int crossed_calls_abis : NUM_ABI_IDS;
   /* During the reload, value TRUE means that we should not reassign a
      hard register to the allocno got memory earlier.  It is set up
      when we removed memory-memory move insn before each iteration of
@@ -423,6 +427,7 @@  #define ALLOCNO_HARD_REGNO(A) ((A)->hard
 #define ALLOCNO_CALL_FREQ(A) ((A)->call_freq)
 #define ALLOCNO_CALLS_CROSSED_NUM(A) ((A)->calls_crossed_num)
 #define ALLOCNO_CHEAP_CALLS_CROSSED_NUM(A) ((A)->cheap_calls_crossed_num)
+#define ALLOCNO_CROSSED_CALLS_ABIS(A) ((A)->crossed_calls_abis)
 #define ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS(A) \
   ((A)->crossed_calls_clobbered_regs)
 #define ALLOCNO_MEM_OPTIMIZED_DEST(A) ((A)->mem_optimized_dest)
@@ -1510,4 +1515,28 @@  ira_allocate_and_set_or_copy_costs (int
 extern rtx ira_create_new_reg (rtx);
 extern int first_moveable_pseudo, last_moveable_pseudo;
 
+/* Return the set of registers that would need a caller save if allocno A
+   overlapped them.  */
+
+inline HARD_REG_SET
+ira_need_caller_save_regs (ira_allocno_t a)
+{
+  return call_clobbers_in_region (ALLOCNO_CROSSED_CALLS_ABIS (a),
+				  ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a),
+				  ALLOCNO_MODE (a));
+}
+
+/* Return true if we would need to save allocno A around a call if we
+   assigned hard register REGNO.  */
+
+inline bool
+ira_need_caller_save_p (ira_allocno_t a, unsigned int regno)
+{
+  if (ALLOCNO_CALLS_CROSSED_NUM (a) == 0)
+    return false;
+  return call_clobbered_in_region_p (ALLOCNO_CROSSED_CALLS_ABIS (a),
+				     ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a),
+				     ALLOCNO_MODE (a), regno);
+}
+
 #endif /* GCC_IRA_INT_H */
Index: gcc/ira.c
===================================================================
--- gcc/ira.c	2019-09-10 19:56:45.357177891 +0100
+++ gcc/ira.c	2019-09-11 19:48:31.713788462 +0100
@@ -2368,9 +2368,7 @@  setup_reg_renumber (void)
 	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj)
 		|= ~reg_class_contents[pclass];
 	    }
-	  if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
-	      && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
-						  call_used_or_fixed_regs))
+	  if (ira_need_caller_save_p (a, hard_regno))
 	    {
 	      ira_assert (!optimize || flag_caller_saves
 			  || (ALLOCNO_CALLS_CROSSED_NUM (a)
@@ -5591,7 +5589,7 @@  do_reload (void)
       for (int i = 0; i < FIRST_PSEUDO_REGISTER; i++)
 	if (df_regs_ever_live_p (i)
 	    && !fixed_regs[i]
-	    && call_used_or_fixed_reg_p (i))
+	    && !crtl->abi->clobbers_full_reg_p (i))
 	  size += UNITS_PER_WORD;
 
       if (constant_lower_bound (size) > STACK_CHECK_MAX_FRAME_SIZE)
Index: gcc/ira-build.c
===================================================================
--- gcc/ira-build.c	2019-09-10 19:56:32.569268148 +0100
+++ gcc/ira-build.c	2019-09-11 19:48:31.709788491 +0100
@@ -903,8 +903,9 @@  create_cap_allocno (ira_allocno_t a)
 
   ALLOCNO_CALLS_CROSSED_NUM (cap) = ALLOCNO_CALLS_CROSSED_NUM (a);
   ALLOCNO_CHEAP_CALLS_CROSSED_NUM (cap) = ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a);
+  ALLOCNO_CROSSED_CALLS_ABIS (cap) = ALLOCNO_CROSSED_CALLS_ABIS (a);
   ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (cap)
-    |= ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a);
+    = ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a);
   if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
     {
       fprintf (ira_dump_file, "    Creating cap ");
@@ -2032,6 +2033,8 @@  propagate_allocno_info (void)
 	    += ALLOCNO_CALLS_CROSSED_NUM (a);
 	  ALLOCNO_CHEAP_CALLS_CROSSED_NUM (parent_a)
 	    += ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a);
+	  ALLOCNO_CROSSED_CALLS_ABIS (parent_a)
+	    |= ALLOCNO_CROSSED_CALLS_ABIS (a);
 	  ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (parent_a)
 	    |= ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a);
 	  ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (parent_a)
@@ -2415,6 +2418,7 @@  propagate_some_info_from_allocno (ira_al
   ALLOCNO_CALLS_CROSSED_NUM (a) += ALLOCNO_CALLS_CROSSED_NUM (from_a);
   ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a)
     += ALLOCNO_CHEAP_CALLS_CROSSED_NUM (from_a);
+  ALLOCNO_CROSSED_CALLS_ABIS (a) |= ALLOCNO_CROSSED_CALLS_ABIS (from_a);
   ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a)
     |= ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (from_a);
 
@@ -3056,6 +3060,8 @@  copy_info_to_removed_store_destinations
 	+= ALLOCNO_CALLS_CROSSED_NUM (a);
       ALLOCNO_CHEAP_CALLS_CROSSED_NUM (parent_a)
 	+= ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a);
+      ALLOCNO_CROSSED_CALLS_ABIS (parent_a)
+	|= ALLOCNO_CROSSED_CALLS_ABIS (a);
       ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (parent_a)
 	|= ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a);
       ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (parent_a)
@@ -3155,6 +3161,9 @@  ira_flattening (int max_regno_before_emi
 		-= ALLOCNO_CALLS_CROSSED_NUM (a);
 	      ALLOCNO_CHEAP_CALLS_CROSSED_NUM (parent_a)
 		-= ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a);
+	      /* Assume that ALLOCNO_CROSSED_CALLS_ABIS and
+		 ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS stay the same.
+		 We'd need to rebuild the IR to do better.  */
 	      ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (parent_a)
 		-= ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a);
 	      ira_assert (ALLOCNO_CALLS_CROSSED_NUM (parent_a) >= 0
@@ -3462,7 +3471,7 @@  ira_build (void)
 	 allocno crossing calls.  */
       FOR_EACH_ALLOCNO (a, ai)
 	if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	  ior_hard_reg_conflicts (a, call_used_or_fixed_regs);
+	  ior_hard_reg_conflicts (a, ira_need_caller_save_regs (a));
     }
   if (internal_flag_ira_verbose > 2 && ira_dump_file != NULL)
     print_copies (ira_dump_file);
Index: gcc/ira-color.c
===================================================================
--- gcc/ira-color.c	2019-09-10 19:56:32.569268148 +0100
+++ gcc/ira-color.c	2019-09-11 19:48:31.709788491 +0100
@@ -1650,7 +1650,7 @@  calculate_saved_nregs (int hard_regno, m
   ira_assert (hard_regno >= 0);
   for (i = hard_regno_nregs (hard_regno, mode) - 1; i >= 0; i--)
     if (!allocated_hardreg_p[hard_regno + i]
-	&& !TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno + i)
+	&& !crtl->abi->clobbers_full_reg_p (hard_regno + i)
 	&& !LOCAL_REGNO (hard_regno + i))
       nregs++;
   return nregs;
@@ -4379,7 +4379,7 @@  allocno_reload_assign (ira_allocno_t a,
       saved[i] = OBJECT_TOTAL_CONFLICT_HARD_REGS (obj);
       OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= forbidden_regs;
       if (! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
+	OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= ira_need_caller_save_regs (a);
     }
   ALLOCNO_ASSIGNED_P (a) = false;
   aclass = ALLOCNO_CLASS (a);
@@ -4398,9 +4398,7 @@  allocno_reload_assign (ira_allocno_t a,
 	       ? ALLOCNO_CLASS_COST (a)
 	       : ALLOCNO_HARD_REG_COSTS (a)[ira_class_hard_reg_index
 					    [aclass][hard_regno]]));
-      if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0
-	  && ira_hard_reg_set_intersection_p (hard_regno, ALLOCNO_MODE (a),
-					      call_used_or_fixed_regs))
+      if (ira_need_caller_save_p (a, regno))
 	{
 	  ira_assert (flag_caller_saves);
 	  caller_save_needed = 1;
@@ -4687,16 +4685,16 @@  ira_mark_new_stack_slot (rtx x, int regn
    given IN and OUT for INSN.  Return also number points (through
    EXCESS_PRESSURE_LIVE_LENGTH) where the pseudo-register lives and
    the register pressure is high, number of references of the
-   pseudo-registers (through NREFS), number of callee-clobbered
-   hard-registers occupied by the pseudo-registers (through
-   CALL_USED_COUNT), and the first hard regno occupied by the
+   pseudo-registers (through NREFS), the number of pseudo registers
+   whose allocated register wouldn't need saving in the prologue
+   (through CALL_USED_COUNT), and the first hard regno occupied by the
    pseudo-registers (through FIRST_HARD_REGNO).  */
 static int
 calculate_spill_cost (int *regnos, rtx in, rtx out, rtx_insn *insn,
 		      int *excess_pressure_live_length,
 		      int *nrefs, int *call_used_count, int *first_hard_regno)
 {
-  int i, cost, regno, hard_regno, j, count, saved_cost, nregs;
+  int i, cost, regno, hard_regno, count, saved_cost;
   bool in_p, out_p;
   int length;
   ira_allocno_t a;
@@ -4713,11 +4711,8 @@  calculate_spill_cost (int *regnos, rtx i
       a = ira_regno_allocno_map[regno];
       length += ALLOCNO_EXCESS_PRESSURE_POINTS_NUM (a) / ALLOCNO_NUM_OBJECTS (a);
       cost += ALLOCNO_MEMORY_COST (a) - ALLOCNO_CLASS_COST (a);
-      nregs = hard_regno_nregs (hard_regno, ALLOCNO_MODE (a));
-      for (j = 0; j < nregs; j++)
-	if (! TEST_HARD_REG_BIT (call_used_or_fixed_regs, hard_regno + j))
-	  break;
-      if (j == nregs)
+      if (in_hard_reg_set_p (crtl->abi->full_reg_clobbers (),
+			     ALLOCNO_MODE (a), hard_regno))
 	count++;
       in_p = in && REG_P (in) && (int) REGNO (in) == hard_regno;
       out_p = out && REG_P (out) && (int) REGNO (out) == hard_regno;
Index: gcc/ira-conflicts.c
===================================================================
--- gcc/ira-conflicts.c	2019-09-11 19:47:32.898202916 +0100
+++ gcc/ira-conflicts.c	2019-09-11 19:48:31.709788491 +0100
@@ -738,9 +738,7 @@  ira_build_conflicts (void)
   if (! targetm.class_likely_spilled_p (base))
     CLEAR_HARD_REG_SET (temp_hard_reg_set);
   else
-    temp_hard_reg_set = (reg_class_contents[base]
-			 & ~ira_no_alloc_regs
-			 & call_used_or_fixed_regs);
+    temp_hard_reg_set = reg_class_contents[base] & ~ira_no_alloc_regs;
   FOR_EACH_ALLOCNO (a, ai)
     {
       int i, n = ALLOCNO_NUM_OBJECTS (a);
@@ -748,29 +746,28 @@  ira_build_conflicts (void)
       for (i = 0; i < n; i++)
 	{
 	  ira_object_t obj = ALLOCNO_OBJECT (a, i);
-	  machine_mode obj_mode = obj->allocno->mode;
 	  rtx allocno_reg = regno_reg_rtx [ALLOCNO_REGNO (a)];
 
-	  if ((! flag_caller_saves && ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	      /* For debugging purposes don't put user defined variables in
-		 callee-clobbered registers.  However, do allow parameters
-		 in callee-clobbered registers to improve debugging.  This
-		 is a bit of a fragile hack.  */
-	      || (optimize == 0
-		  && REG_USERVAR_P (allocno_reg)
-		  && ! reg_is_parm_p (allocno_reg)))
+	  /* For debugging purposes don't put user defined variables in
+	     callee-clobbered registers.  However, do allow parameters
+	     in callee-clobbered registers to improve debugging.  This
+	     is a bit of a fragile hack.  */
+	  if (optimize == 0
+	      && REG_USERVAR_P (allocno_reg)
+	      && ! reg_is_parm_p (allocno_reg))
 	    {
-	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
-	      OBJECT_CONFLICT_HARD_REGS (obj) |= call_used_or_fixed_regs;
+	      HARD_REG_SET new_conflict_regs = crtl->abi->full_reg_clobbers ();
+	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= new_conflict_regs;
+	      OBJECT_CONFLICT_HARD_REGS (obj) |= new_conflict_regs;
 	    }
-	  else if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
+
+	  if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
 	    {
-	      HARD_REG_SET no_caller_save_reg_set
-		= (call_used_or_fixed_regs & ~savable_regs);
-	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= no_caller_save_reg_set;
-	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= temp_hard_reg_set;
-	      OBJECT_CONFLICT_HARD_REGS (obj) |= no_caller_save_reg_set;
-	      OBJECT_CONFLICT_HARD_REGS (obj) |= temp_hard_reg_set;
+	      HARD_REG_SET new_conflict_regs = ira_need_caller_save_regs (a);
+	      if (flag_caller_saves)
+		new_conflict_regs &= (~savable_regs | temp_hard_reg_set);
+	      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj) |= new_conflict_regs;
+	      OBJECT_CONFLICT_HARD_REGS (obj) |= new_conflict_regs;
 	    }
 
 	  /* Now we deal with paradoxical subreg cases where certain registers
@@ -797,23 +794,6 @@  ira_build_conflicts (void)
 		     }
 		}
 	    }
-
-	  if (ALLOCNO_CALLS_CROSSED_NUM (a) != 0)
-	    {
-	      int regno;
-
-	      /* Allocnos bigger than the saved part of call saved
-		 regs must conflict with them.  */
-	      for (regno = 0; regno < FIRST_PSEUDO_REGISTER; regno++)
-		if (!TEST_HARD_REG_BIT (call_used_or_fixed_regs, regno)
-		    && targetm.hard_regno_call_part_clobbered (0, regno,
-							       obj_mode))
-		  {
-		    SET_HARD_REG_BIT (OBJECT_CONFLICT_HARD_REGS (obj), regno);
-		    SET_HARD_REG_BIT (OBJECT_TOTAL_CONFLICT_HARD_REGS (obj),
-				      regno);
-		  }
-	    }
 	}
     }
   if (optimize && ira_conflicts_p
Index: gcc/ira-costs.c
===================================================================
--- gcc/ira-costs.c	2019-09-11 19:47:32.898202916 +0100
+++ gcc/ira-costs.c	2019-09-11 19:48:31.713788462 +0100
@@ -2340,7 +2340,6 @@  ira_tune_allocno_costs (void)
   ira_allocno_object_iterator oi;
   ira_object_t obj;
   bool skip_p;
-  HARD_REG_SET *crossed_calls_clobber_regs;
 
   FOR_EACH_ALLOCNO (a, ai)
     {
@@ -2375,14 +2374,7 @@  ira_tune_allocno_costs (void)
 		continue;
 	      rclass = REGNO_REG_CLASS (regno);
 	      cost = 0;
-	      crossed_calls_clobber_regs
-		= &(ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a));
-	      if (ira_hard_reg_set_intersection_p (regno, mode,
-						   *crossed_calls_clobber_regs)
-		  && (ira_hard_reg_set_intersection_p (regno, mode,
-						       call_used_or_fixed_regs)
-		      || targetm.hard_regno_call_part_clobbered (0, regno,
-								 mode)))
+	      if (ira_need_caller_save_p (a, regno))
 		cost += (ALLOCNO_CALL_FREQ (a)
 			 * (ira_memory_move_cost[mode][rclass][0]
 			    + ira_memory_move_cost[mode][rclass][1]));
Index: gcc/ira-lives.c
===================================================================
--- gcc/ira-lives.c	2019-09-11 19:47:24.418262673 +0100
+++ gcc/ira-lives.c	2019-09-11 19:48:31.713788462 +0100
@@ -1255,11 +1255,7 @@  process_bb_node_lives (ira_loop_tree_nod
 		  ira_object_t obj = ira_object_id_map[i];
 		  a = OBJECT_ALLOCNO (obj);
 		  int num = ALLOCNO_NUM (a);
-		  HARD_REG_SET this_call_used_reg_set
-		    = call_insn_abi (insn).full_reg_clobbers ();
-		  /* ??? This preserves traditional behavior; it might not be
-		     needed.  */
-		  this_call_used_reg_set |= fixed_reg_set;
+		  function_abi abi = call_insn_abi (insn);
 
 		  /* Don't allocate allocnos that cross setjmps or any
 		     call, if this function receives a nonlocal
@@ -1275,9 +1271,9 @@  process_bb_node_lives (ira_loop_tree_nod
 		  if (can_throw_internal (insn))
 		    {
 		      OBJECT_CONFLICT_HARD_REGS (obj)
-			|= this_call_used_reg_set;
+			|= abi.mode_clobbers (ALLOCNO_MODE (a));
 		      OBJECT_TOTAL_CONFLICT_HARD_REGS (obj)
-			|= this_call_used_reg_set;
+			|= abi.mode_clobbers (ALLOCNO_MODE (a));
 		    }
 
 		  if (sparseset_bit_p (allocnos_processed, num))
@@ -1294,8 +1290,9 @@  process_bb_node_lives (ira_loop_tree_nod
 		  /* Mark it as saved at the next call.  */
 		  allocno_saved_at_call[num] = last_call_num + 1;
 		  ALLOCNO_CALLS_CROSSED_NUM (a)++;
+		  ALLOCNO_CROSSED_CALLS_ABIS (a) |= 1 << abi.id ();
 		  ALLOCNO_CROSSED_CALLS_CLOBBERED_REGS (a)
-		    |= this_call_used_reg_set;
+		    |= abi.full_and_partial_reg_clobbers ();
 		  if (cheap_reg != NULL_RTX
 		      && ALLOCNO_REGNO (a) == (int) REGNO (cheap_reg))
 		    ALLOCNO_CHEAP_CALLS_CROSSED_NUM (a)++;
@@ -1359,10 +1356,11 @@  process_bb_node_lives (ira_loop_tree_nod
 	  }
 
       /* Allocnos can't go in stack regs at the start of a basic block
-	 that is reached by an abnormal edge. Likewise for call
-	 clobbered regs, because caller-save, fixup_abnormal_edges and
-	 possibly the table driven EH machinery are not quite ready to
-	 handle such allocnos live across such edges.  */
+	 that is reached by an abnormal edge. Likewise for registers
+	 that are at least partly call clobbered, because caller-save,
+	 fixup_abnormal_edges and possibly the table driven EH machinery
+	 are not quite ready to handle such allocnos live across such
+	 edges.  */
       if (bb_has_abnormal_pred (bb))
 	{
 #ifdef STACK_REGS
@@ -1382,7 +1380,7 @@  process_bb_node_lives (ira_loop_tree_nod
 	  if (!cfun->has_nonlocal_label
 	      && has_abnormal_call_or_eh_pred_edge_p (bb))
 	    for (px = 0; px < FIRST_PSEUDO_REGISTER; px++)
-	      if (call_used_or_fixed_reg_p (px)
+	      if (eh_edge_abi.clobbers_at_least_part_of_reg_p (px)
 #ifdef REAL_PIC_OFFSET_TABLE_REGNUM
 		  /* We should create a conflict of PIC pseudo with
 		     PIC hard reg as PIC hard reg can have a wrong