[COMMITTED] vrange_storage overhaul

Message ID 20230501062906.564803-1-aldyh@redhat.com
State New
Series [COMMITTED] vrange_storage overhaul

Commit Message

Aldy Hernandez May 1, 2023, 6:28 a.m. UTC
[tl;dr: This is a rewrite of value-range-storage.* such that global
ranges and the internal ranger cache can use the same efficient
storage mechanism.  It is optimized such that when wide_ints are
dropped into irange, the copying back and forth from storage will be
very fast, while being able to hold any number of sub-ranges
dynamically allocated at run-time.  This replaces the global storage
mechanism which was limited to 6-subranges.]

Previously we had a vrange allocator for use in the ranger cache.  It
worked with trees and could be used in place (fast), but it was not
memory efficient.  With the upcoming switch to wide_ints for irange,
we can't afford to allocate ranges that can be used in place, because
an irange will be significantly larger, as it will hold full
wide_ints.  We need a trailing_wide_int mechanism similar to what we
use for global ranges, but fast enough to use in the ranger's cache.

The global ranges had another allocation mechanism that was
trailing_wide_int based.  It was memory efficient but slow given the
constant conversions from trees to wide_ints.

This patch gets us the best of both worlds by providing a storage
mechanism with a custom trailing wide int interface, while at the same
time being fast enough to use in the ranger cache.

We use a custom trailing wide_int mechanism that is more flexible than
trailing_wide_int, since the latter has compile-time fixed-size
wide_ints.  The original TWI structure keeps the current length of
each wide_int in a static portion preceding the variable-length part:

template <int N>
struct GTY((user)) trailing_wide_ints
{
...
...
  /* The current length of each number.
     that will, in turn, turn off TBAA on gimple, trees and RTL.  */
  struct {unsigned char len;} m_len[N];

  /* The variable-length part of the structure, which always contains
     at least one HWI.  Element I starts at index I * M_MAX_LEN.  */
  HOST_WIDE_INT m_val[1];
};

We need both m_len[] and m_val[] to be variable length at run-time.
In the previous incarnation of the storage mechanism, the static
m_len[] meant we were limited to whatever [N] would fit in the unused
bits of the TWI control word.  In practice this meant we were limited
to 6 sub-ranges.  This worked fine for global ranges, but is a no-go
for our internal cache, where we must represent things exactly (ranges
for switches, etc).

The new implementation removes this restriction by making both m_len[]
and m_val[] variable length.  Also, rolling our own allows future
optimization by using some of the leftover bits in the control word.

Also, in preparation for the wide_int conversion, vrange_storage is
now optimized to blast the bits directly into the ultimate irange
instead of going through the irange API.  So ultimately copying back
and forth between the ranger cache and the storage mechanism is just a
matter of copying a few bits for the control word, and copying an
array of HOST_WIDE_INTs.  These changes were heavily profiled, and
yielded a good chunk of the overall speedup for the wide_int
conversion.
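The copy helpers at the heart of this are write_wide_int and
read_wide_int, shown later in the patch: each stores or reads only the
live HWIs of a number plus its one-byte length, advancing cursors into
the flat arrays.  A standalone round-trip sketch of the same loop
(fake_wide_int is a made-up stand-in, since the real wide_int API is
not available outside the tree):

```cpp
#include <cassert>

typedef long long hwi;   // stand-in for HOST_WIDE_INT

// Made-up stand-in for wide_int: up to 4 HWIs plus a live count.
struct fake_wide_int
{
  hwi elts[4];
  unsigned char len;
};

// Mirrors the patch's write_wide_int: record the length, copy only
// the live HWIs, and advance both cursors for the next number.
static void
write_wide_int (hwi *&val, unsigned char *&len, const fake_wide_int &w)
{
  *len = w.len;
  for (unsigned i = 0; i < *len; ++i)
    *val++ = w.elts[i];
  ++len;
}

// The mirror image, as read_wide_int does against the real API.
static void
read_wide_int (fake_wide_int &w, const hwi *&val,
	       const unsigned char *&len)
{
  w.len = *len++;
  for (unsigned i = 0; i < w.len; ++i)
    w.elts[i] = *val++;
}
```

Since only the live HWIs are written, the value array stays dense and
the restore path is a pair of tight sequential copies rather than a
walk through the irange API.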

Finally, vrange_storage is now a first class structure with GTY
markers and all, thus alleviating the void * hack in struct
tree_ssa_name and friends.  This removes a few warts in the API and
looks cleaner overall.

gcc/ChangeLog:

	* gimple-fold.cc (maybe_fold_comparisons_from_match_pd): Adjust
	for vrange_storage.
	* gimple-range-cache.cc (sbr_vector::sbr_vector): Same.
	(sbr_vector::grow): Same.
	(sbr_vector::set_bb_range): Same.
	(sbr_vector::get_bb_range): Same.
	(sbr_sparse_bitmap::sbr_sparse_bitmap): Same.
	(sbr_sparse_bitmap::set_bb_range): Same.
	(sbr_sparse_bitmap::get_bb_range): Same.
	(block_range_cache::block_range_cache): Same.
	(ssa_global_cache::ssa_global_cache): Same.
	(ssa_global_cache::get_global_range): Same.
	(ssa_global_cache::set_global_range): Same.
	* gimple-range-cache.h: Same.
	* gimple-range-edge.cc
	(gimple_outgoing_range::gimple_outgoing_range): Same.
	(gimple_outgoing_range::switch_edge_range): Same.
	(gimple_outgoing_range::calc_switch_ranges): Same.
	* gimple-range-edge.h: Same.
	* gimple-range-infer.cc
	(infer_range_manager::infer_range_manager): Same.
	(infer_range_manager::get_nonzero): Same.
	(infer_range_manager::maybe_adjust_range): Same.
	(infer_range_manager::add_range): Same.
	* gimple-range-infer.h: Rename obstack_vrange_allocator to
	vrange_allocator.
	* tree-core.h (struct irange_storage_slot): Remove.
	(struct tree_ssa_name): Remove irange_info and frange_info.  Make
	range_info a pointer to vrange_storage.
	* tree-ssanames.cc (range_info_fits_p): Adjust for vrange_storage.
	(range_info_alloc): Same.
	(range_info_free): Same.
	(range_info_get_range): Same.
	(range_info_set_range): Same.
	(get_nonzero_bits): Same.
	* value-query.cc (get_ssa_name_range_info): Same.
	* value-range-storage.cc (class vrange_internal_alloc): New.
	(class vrange_obstack_alloc): New.
	(class vrange_ggc_alloc): New.
	(vrange_allocator::vrange_allocator): New.
	(vrange_allocator::~vrange_allocator): New.
	(vrange_storage::alloc_slot): New.
	(vrange_allocator::alloc): New.
	(vrange_allocator::free): New.
	(vrange_allocator::clone): New.
	(vrange_allocator::clone_varying): New.
	(vrange_allocator::clone_undefined): New.
	(vrange_storage::alloc): New.
	(vrange_storage::set_vrange): Remove slot argument.
	(vrange_storage::get_vrange): Same.
	(vrange_storage::fits_p): Same.
	(vrange_storage::equal_p): New.
	(irange_storage::write_lengths_address): New.
	(irange_storage::lengths_address): New.
	(irange_storage_slot::alloc_slot): Remove.
	(irange_storage::alloc): New.
	(irange_storage_slot::irange_storage_slot): Remove.
	(irange_storage::irange_storage): New.
	(write_wide_int): New.
	(irange_storage_slot::set_irange): Remove.
	(irange_storage::set_irange): New.
	(read_wide_int): New.
	(irange_storage_slot::get_irange): Remove.
	(irange_storage::get_irange): New.
	(irange_storage_slot::size): Remove.
	(irange_storage::equal_p): New.
	(irange_storage_slot::num_wide_ints_needed): Remove.
	(irange_storage::size): New.
	(irange_storage_slot::fits_p): Remove.
	(irange_storage::fits_p): New.
	(irange_storage_slot::dump): Remove.
	(irange_storage::dump): New.
	(frange_storage_slot::alloc_slot): Remove.
	(frange_storage::alloc): New.
	(frange_storage_slot::set_frange): Remove.
	(frange_storage::set_frange): New.
	(frange_storage_slot::get_frange): Remove.
	(frange_storage::get_frange): New.
	(frange_storage_slot::fits_p): Remove.
	(frange_storage::equal_p): New.
	(frange_storage::fits_p): New.
	(ggc_vrange_allocator): New.
	(ggc_alloc_vrange_storage): New.
	* value-range-storage.h (class vrange_storage): Rewrite.
	(class irange_storage): Rewrite.
	(class frange_storage): Rewrite.
	(class obstack_vrange_allocator): Remove.
	(class ggc_vrange_allocator): Remove.
	(vrange_allocator::alloc_vrange): Remove.
	(vrange_allocator::alloc_irange): Remove.
	(vrange_allocator::alloc_frange): Remove.
	(ggc_alloc_vrange_storage): New.
	* value-range.h (class irange): Rename vrange_allocator to
	irange_storage.
	(class frange): Same.
---
 gcc/gimple-fold.cc         |   4 +-
 gcc/gimple-range-cache.cc  |  61 +++--
 gcc/gimple-range-cache.h   |   2 +-
 gcc/gimple-range-edge.cc   |  23 +-
 gcc/gimple-range-edge.h    |   4 +-
 gcc/gimple-range-infer.cc  |  25 +-
 gcc/gimple-range-infer.h   |   2 +-
 gcc/tree-core.h            |  16 +-
 gcc/tree-ssanames.cc       |  28 +--
 gcc/value-query.cc         |   7 +-
 gcc/value-range-storage.cc | 478 +++++++++++++++++++++++++++++--------
 gcc/value-range-storage.h  | 226 +++++-------------
 gcc/value-range.h          |   4 +-
 13 files changed, 518 insertions(+), 362 deletions(-)

Patch

diff --git a/gcc/gimple-fold.cc b/gcc/gimple-fold.cc
index 2b6855d1205..1d0e4c32c40 100644
--- a/gcc/gimple-fold.cc
+++ b/gcc/gimple-fold.cc
@@ -6919,7 +6919,7 @@  and_comparisons_1 (tree type, enum tree_code code1, tree op1a, tree op1b,
 }
 
 static basic_block fosa_bb;
-static vec<std::pair<tree, void *> > *fosa_unwind;
+static vec<std::pair<tree, vrange_storage *> > *fosa_unwind;
 static tree
 follow_outer_ssa_edges (tree val)
 {
@@ -7006,7 +7006,7 @@  maybe_fold_comparisons_from_match_pd (tree type, enum tree_code code,
 		      type, gimple_assign_lhs (stmt1),
 		      gimple_assign_lhs (stmt2));
   fosa_bb = outer_cond_bb;
-  auto_vec<std::pair<tree, void *>, 8> unwind_stack;
+  auto_vec<std::pair<tree, vrange_storage *>, 8> unwind_stack;
   fosa_unwind = &unwind_stack;
   if (op.resimplify (NULL, (!outer_cond_bb
 			    ? follow_all_ssa_edges : follow_outer_ssa_edges)))
diff --git a/gcc/gimple-range-cache.cc b/gcc/gimple-range-cache.cc
index 5510efba1ca..92622fc5000 100644
--- a/gcc/gimple-range-cache.cc
+++ b/gcc/gimple-range-cache.cc
@@ -85,10 +85,10 @@  public:
   virtual bool get_bb_range (vrange &r, const_basic_block bb) override;
   virtual bool bb_range_p (const_basic_block bb) override;
 protected:
-  vrange **m_tab;	// Non growing vector.
+  vrange_storage **m_tab;	// Non growing vector.
   int m_tab_size;
-  vrange *m_varying;
-  vrange *m_undefined;
+  vrange_storage *m_varying;
+  vrange_storage *m_undefined;
   tree m_type;
   vrange_allocator *m_range_allocator;
   bool m_zero_p;
@@ -106,16 +106,14 @@  sbr_vector::sbr_vector (tree t, vrange_allocator *allocator, bool zero_p)
   m_zero_p = zero_p;
   m_range_allocator = allocator;
   m_tab_size = last_basic_block_for_fn (cfun) + 1;
-  m_tab = static_cast <vrange **>
-    (allocator->alloc (m_tab_size * sizeof (vrange *)));
+  m_tab = static_cast <vrange_storage **>
+    (allocator->alloc (m_tab_size * sizeof (vrange_storage *)));
   if (zero_p)
     memset (m_tab, 0, m_tab_size * sizeof (vrange *));
 
   // Create the cached type range.
-  m_varying = m_range_allocator->alloc_vrange (t);
-  m_undefined = m_range_allocator->alloc_vrange (t);
-  m_varying->set_varying (t);
-  m_undefined->set_undefined ();
+  m_varying = m_range_allocator->clone_varying (t);
+  m_undefined = m_range_allocator->clone_undefined (t);
 }
 
 // Grow the vector when the CFG has increased in size.
@@ -132,11 +130,11 @@  sbr_vector::grow ()
   int new_size = inc + curr_bb_size;
 
   // Allocate new memory, copy the old vector and clear the new space.
-  vrange **t = static_cast <vrange **>
-    (m_range_allocator->alloc (new_size * sizeof (vrange *)));
-  memcpy (t, m_tab, m_tab_size * sizeof (vrange *));
+  vrange_storage **t = static_cast <vrange_storage **>
+    (m_range_allocator->alloc (new_size * sizeof (vrange_storage *)));
+  memcpy (t, m_tab, m_tab_size * sizeof (vrange_storage *));
   if (m_zero_p)
-    memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange *));
+    memset (t + m_tab_size, 0, (new_size - m_tab_size) * sizeof (vrange_storage *));
 
   m_tab = t;
   m_tab_size = new_size;
@@ -147,7 +145,7 @@  sbr_vector::grow ()
 bool
 sbr_vector::set_bb_range (const_basic_block bb, const vrange &r)
 {
-  vrange *m;
+  vrange_storage *m;
   if (bb->index >= m_tab_size)
     grow ();
   if (r.varying_p ())
@@ -168,10 +166,10 @@  sbr_vector::get_bb_range (vrange &r, const_basic_block bb)
 {
   if (bb->index >= m_tab_size)
     return false;
-  vrange *m = m_tab[bb->index];
+  vrange_storage *m = m_tab[bb->index];
   if (m)
     {
-      r = *m;
+      m->get_vrange (r, m_type);
       return true;
     }
   return false;
@@ -255,7 +253,7 @@  private:
   void bitmap_set_quad (bitmap head, int quad, int quad_value);
   int bitmap_get_quad (const_bitmap head, int quad);
   vrange_allocator *m_range_allocator;
-  vrange *m_range[SBR_NUM];
+  vrange_storage *m_range[SBR_NUM];
   bitmap_head bitvec;
   tree m_type;
 };
@@ -272,15 +270,16 @@  sbr_sparse_bitmap::sbr_sparse_bitmap (tree t, vrange_allocator *allocator,
   bitmap_tree_view (&bitvec);
   m_range_allocator = allocator;
   // Pre-cache varying.
-  m_range[0] = m_range_allocator->alloc_vrange (t);
-  m_range[0]->set_varying (t);
+  m_range[0] = m_range_allocator->clone_varying (t);
   // Pre-cache zero and non-zero values for pointers.
   if (POINTER_TYPE_P (t))
     {
-      m_range[1] = m_range_allocator->alloc_vrange (t);
-      m_range[1]->set_nonzero (t);
-      m_range[2] = m_range_allocator->alloc_vrange (t);
-      m_range[2]->set_zero (t);
+      int_range<2> nonzero;
+      nonzero.set_nonzero (t);
+      m_range[1] = m_range_allocator->clone (nonzero);
+      int_range<2> zero;
+      zero.set_zero (t);
+      m_range[2] = m_range_allocator->clone (zero);
     }
   else
     m_range[1] = m_range[2] = NULL;
@@ -321,7 +320,7 @@  sbr_sparse_bitmap::set_bb_range (const_basic_block bb, const vrange &r)
 
   // Loop thru the values to see if R is already present.
   for (int x = 0; x < SBR_NUM; x++)
-    if (!m_range[x] || r == *(m_range[x]))
+    if (!m_range[x] || m_range[x]->equal_p (r, m_type))
       {
 	if (!m_range[x])
 	  m_range[x] = m_range_allocator->clone (r);
@@ -348,7 +347,7 @@  sbr_sparse_bitmap::get_bb_range (vrange &r, const_basic_block bb)
   if (value == SBR_UNDEF)
     r.set_undefined ();
   else
-    r = *(m_range[value - 1]);
+    m_range[value - 1]->get_vrange (r, m_type);
   return true;
 }
 
@@ -369,7 +368,7 @@  block_range_cache::block_range_cache ()
   bitmap_obstack_initialize (&m_bitmaps);
   m_ssa_ranges.create (0);
   m_ssa_ranges.safe_grow_cleared (num_ssa_names);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Remove any m_block_caches which have been created.
@@ -535,7 +534,7 @@  block_range_cache::dump (FILE *f, basic_block bb, bool print_varying)
 ssa_cache::ssa_cache ()
 {
   m_tab.create (0);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Deconstruct an ssa cache.
@@ -567,10 +566,10 @@  ssa_cache::get_range (vrange &r, tree name) const
   if (v >= m_tab.length ())
     return false;
 
-  vrange *stow = m_tab[v];
+  vrange_storage *stow = m_tab[v];
   if (!stow)
     return false;
-  r = *stow;
+  stow->get_vrange (r, TREE_TYPE (name));
   return true;
 }
 
@@ -584,9 +583,9 @@  ssa_cache::set_range (tree name, const vrange &r)
   if (v >= m_tab.length ())
     m_tab.safe_grow_cleared (num_ssa_names + 1);
 
-  vrange *m = m_tab[v];
+  vrange_storage *m = m_tab[v];
   if (m && m->fits_p (r))
-    *m = r;
+    m->set_vrange (r);
   else
     m_tab[v] = m_range_allocator->clone (r);
   return m != NULL;
diff --git a/gcc/gimple-range-cache.h b/gcc/gimple-range-cache.h
index 9032df9e3e3..946fbc51465 100644
--- a/gcc/gimple-range-cache.h
+++ b/gcc/gimple-range-cache.h
@@ -65,7 +65,7 @@  public:
   void dump (FILE *f = stderr);
 protected:
   virtual bool dump_range_query (vrange &r, tree name) const;
-  vec<vrange *> m_tab;
+  vec<vrange_storage *> m_tab;
   vrange_allocator *m_range_allocator;
 };
 
diff --git a/gcc/gimple-range-edge.cc b/gcc/gimple-range-edge.cc
index 8fedac58fe6..22fb709c9b1 100644
--- a/gcc/gimple-range-edge.cc
+++ b/gcc/gimple-range-edge.cc
@@ -69,7 +69,7 @@  gimple_outgoing_range::gimple_outgoing_range (int max_sw_edges)
 {
   m_edge_table = NULL;
   m_max_edges = max_sw_edges;
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 
@@ -97,16 +97,16 @@  gimple_outgoing_range::switch_edge_range (irange &r, gswitch *sw, edge e)
     return false;
 
    if (!m_edge_table)
-     m_edge_table = new hash_map<edge, irange *> (n_edges_for_fn (cfun));
+     m_edge_table = new hash_map<edge, vrange_storage *> (n_edges_for_fn (cfun));
 
-   irange **val = m_edge_table->get (e);
+   vrange_storage **val = m_edge_table->get (e);
    if (!val)
      {
        calc_switch_ranges (sw);
        val = m_edge_table->get (e);
        gcc_checking_assert (val);
      }
-    r = **val;
+   (*val)->get_vrange (r, TREE_TYPE (gimple_switch_index (sw)));
   return true;
 }
 
@@ -150,29 +150,30 @@  gimple_outgoing_range::calc_switch_ranges (gswitch *sw)
       // Create/union this case with anything on else on the edge.
       int_range_max case_range (low, high);
       range_cast (case_range, type);
-      irange *&slot = m_edge_table->get_or_insert (e, &existed);
+      vrange_storage *&slot = m_edge_table->get_or_insert (e, &existed);
       if (existed)
 	{
 	  // If this doesn't change the value, move on.
-	  if (!case_range.union_ (*slot))
+	  int_range_max tmp;
+	  slot->get_vrange (tmp, type);
+	  if (!case_range.union_ (tmp))
 	   continue;
 	  if (slot->fits_p (case_range))
 	    {
-	      *slot = case_range;
+	      slot->set_vrange (case_range);
 	      continue;
 	    }
 	}
       // If there was an existing range and it doesn't fit, we lose the memory.
       // It'll get reclaimed when the obstack is freed.  This seems less
       // intrusive than allocating max ranges for each case.
-      slot = m_range_allocator->clone <irange> (case_range);
+      slot = m_range_allocator->clone (case_range);
     }
 
-  irange *&slot = m_edge_table->get_or_insert (default_edge, &existed);
+  vrange_storage *&slot = m_edge_table->get_or_insert (default_edge, &existed);
   // This should be the first call into this switch.
   gcc_checking_assert (!existed);
-  irange *dr = m_range_allocator->clone <irange> (default_range);
-  slot = dr;
+  slot = m_range_allocator->clone (default_range);
 }
 
 
diff --git a/gcc/gimple-range-edge.h b/gcc/gimple-range-edge.h
index bb0de1b1d3e..86b9c0cbcb1 100644
--- a/gcc/gimple-range-edge.h
+++ b/gcc/gimple-range-edge.h
@@ -46,8 +46,8 @@  private:
   bool switch_edge_range (irange &r, gswitch *sw, edge e);
 
   int m_max_edges;
-  hash_map<edge, irange *> *m_edge_table;
-  class obstack_vrange_allocator *m_range_allocator;
+  hash_map<edge, vrange_storage *> *m_edge_table;
+  class vrange_allocator *m_range_allocator;
 };
 
 // If there is a range control statement at the end of block BB, return it.
diff --git a/gcc/gimple-range-infer.cc b/gcc/gimple-range-infer.cc
index 14ccd7347e6..a6f7d4e7991 100644
--- a/gcc/gimple-range-infer.cc
+++ b/gcc/gimple-range-infer.cc
@@ -181,7 +181,7 @@  class exit_range
 {
 public:
   tree name;
-  vrange *range;
+  vrange_storage *range;
   exit_range *next;
 };
 
@@ -221,7 +221,7 @@  infer_range_manager::infer_range_manager (bool do_search)
   // Non-zero elements are very common, so cache them for each ssa-name.
   m_nonzero.create (0);
   m_nonzero.safe_grow_cleared (num_ssa_names + 1);
-  m_range_allocator = new obstack_vrange_allocator;
+  m_range_allocator = new vrange_allocator;
 }
 
 // Destruct a range infer manager.
@@ -246,7 +246,8 @@  infer_range_manager::get_nonzero (tree name)
     m_nonzero.safe_grow_cleared (num_ssa_names + 20);
   if (!m_nonzero[v])
     {
-      m_nonzero[v] = m_range_allocator->alloc_vrange (TREE_TYPE (name));
+      m_nonzero[v]
+	= (irange *) m_range_allocator->alloc (sizeof (int_range <2>));
       m_nonzero[v]->set_nonzero (TREE_TYPE (name));
     }
   return *(m_nonzero[v]);
@@ -292,7 +293,10 @@  infer_range_manager::maybe_adjust_range (vrange &r, tree name, basic_block bb)
   exit_range *ptr = m_on_exit[bb->index].find_ptr (name);
   gcc_checking_assert (ptr);
   // Return true if this exit range changes R, otherwise false.
-  return r.intersect (*(ptr->range));
+  tree type = TREE_TYPE (name);
+  Value_Range tmp (type);
+  ptr->range->get_vrange (tmp, type);
+  return r.intersect (tmp);
 }
 
 // Add range R as an inferred range for NAME in block BB.
@@ -320,17 +324,16 @@  infer_range_manager::add_range (tree name, basic_block bb, const vrange &r)
   exit_range *ptr = m_on_exit[bb->index].find_ptr (name);
   if (ptr)
     {
-      Value_Range cur (r);
+      tree type = TREE_TYPE (name);
+      Value_Range cur (r), name_range (type);
+      ptr->range->get_vrange (name_range, type);
       // If no new info is added, just return.
-      if (!cur.intersect (*(ptr->range)))
+      if (!cur.intersect (name_range))
 	return;
       if (ptr->range->fits_p (cur))
-	*(ptr->range) = cur;
+	ptr->range->set_vrange (cur);
       else
-	{
-	  vrange &v = cur;
-	  ptr->range = m_range_allocator->clone (v);
-	}
+	ptr->range = m_range_allocator->clone (cur);
       return;
     }
 
diff --git a/gcc/gimple-range-infer.h b/gcc/gimple-range-infer.h
index 3c85e29c0bd..34716ca6402 100644
--- a/gcc/gimple-range-infer.h
+++ b/gcc/gimple-range-infer.h
@@ -80,7 +80,7 @@  private:
   bitmap m_seen;
   bitmap_obstack m_bitmaps;
   struct obstack m_list_obstack;
-  class obstack_vrange_allocator *m_range_allocator;
+  class vrange_allocator *m_range_allocator;
 };
 
 #endif // GCC_GIMPLE_RANGE_SIDE_H
diff --git a/gcc/tree-core.h b/gcc/tree-core.h
index fd2be57b78c..847f0b1e994 100644
--- a/gcc/tree-core.h
+++ b/gcc/tree-core.h
@@ -33,7 +33,6 @@  struct function;
 struct real_value;
 struct fixed_value;
 struct ptr_info_def;
-struct irange_storage_slot;
 struct die_struct;
 
 
@@ -1605,17 +1604,12 @@  struct GTY(()) tree_ssa_name {
 
   /* Value range information.  */
   union ssa_name_info_type {
-    /* Ranges for integers.  */
-    struct GTY ((tag ("0"))) irange_storage_slot *irange_info;
-    /* Ranges for floating point numbers.  */
-    struct GTY ((tag ("1"))) frange_storage_slot *frange_info;
-    /* Pointer attributes used for alias analysis.  */
-    struct GTY ((tag ("2"))) ptr_info_def *ptr_info;
-    /* This holds any range info supported by ranger (except ptr_info
-       above) and is managed by vrange_storage.  */
-    void * GTY ((skip)) range_info;
+    /* Range and aliasing info for pointers.  */
+    struct GTY ((tag ("0"))) ptr_info_def *ptr_info;
+    /* Range info for everything else.  */
+    struct GTY ((tag ("1"))) vrange_storage * range_info;
   } GTY ((desc ("%1.typed.type ?" \
-		"(POINTER_TYPE_P (TREE_TYPE ((tree)&%1)) ? 2 : SCALAR_FLOAT_TYPE_P (TREE_TYPE ((tree)&%1))) : 3"))) info;
+		"!POINTER_TYPE_P (TREE_TYPE ((tree)&%1)) : 2"))) info;
   /* Immediate uses list for this SSA_NAME.  */
   struct ssa_use_operand_t imm_uses;
 };
diff --git a/gcc/tree-ssanames.cc b/gcc/tree-ssanames.cc
index 08aa166ef17..b6cbf97b878 100644
--- a/gcc/tree-ssanames.cc
+++ b/gcc/tree-ssanames.cc
@@ -72,9 +72,6 @@  unsigned int ssa_name_nodes_created;
 #define FREE_SSANAMES(fun) (fun)->gimple_df->free_ssanames
 #define FREE_SSANAMES_QUEUE(fun) (fun)->gimple_df->free_ssanames_queue
 
-static ggc_vrange_allocator ggc_allocator;
-static vrange_storage vstore (&ggc_allocator);
-
 /* Return TRUE if NAME has global range info.  */
 
 inline bool
@@ -89,8 +86,8 @@  inline bool
 range_info_fits_p (tree name, const vrange &r)
 {
   gcc_checking_assert (range_info_p (name));
-  void *mem = SSA_NAME_RANGE_INFO (name);
-  return vrange_storage::fits_p (mem, r);
+  vrange_storage *mem = SSA_NAME_RANGE_INFO (name);
+  return mem->fits_p (r);
 }
 
 /* Allocate a new global range for NAME and set it to R.  Return the
@@ -99,7 +96,7 @@  range_info_fits_p (tree name, const vrange &r)
 inline void *
 range_info_alloc (tree name, const vrange &r)
 {
-  void *mem = vstore.alloc_slot (r);
+  vrange_storage *mem = ggc_alloc_vrange_storage (r);
   SSA_NAME_RANGE_INFO (name) = mem;
   return mem;
 }
@@ -109,16 +106,16 @@  range_info_alloc (tree name, const vrange &r)
 inline void
 range_info_free (tree name)
 {
-  void *mem = SSA_NAME_RANGE_INFO (name);
-  vstore.free (mem);
+  vrange_storage *mem = SSA_NAME_RANGE_INFO (name);
+  ggc_free (mem);
 }
 
 /* Return the global range for NAME in R.  */
 
 inline void
-range_info_get_range (tree name, vrange &r)
+range_info_get_range (const_tree name, vrange &r)
 {
-  vstore.get_vrange (SSA_NAME_RANGE_INFO (name), r, TREE_TYPE (name));
+  SSA_NAME_RANGE_INFO (name)->get_vrange (r, TREE_TYPE (name));
 }
 
 /* Set the global range for NAME from R.  Return TRUE if successfull,
@@ -136,7 +133,7 @@  range_info_set_range (tree name, const vrange &r)
     }
   else
     {
-      vstore.set_vrange (SSA_NAME_RANGE_INFO (name), r);
+      SSA_NAME_RANGE_INFO (name)->set_vrange (r);
       return true;
     }
 }
@@ -492,12 +489,9 @@  get_nonzero_bits (const_tree name)
   if (!range_info_p (name) || !irange::supports_p (TREE_TYPE (name)))
     return wi::shwi (-1, precision);
 
-  /* Optimization to get at the nonzero bits because we know the
-     storage type.  This saves us measurable time compared to going
-     through vrange_storage.  */
-  irange_storage_slot *ri
-    = static_cast <irange_storage_slot *> (SSA_NAME_RANGE_INFO (name));
-  return ri->get_nonzero_bits ();
+  int_range_max tmp;
+  range_info_get_range (name, tmp);
+  return tmp.get_nonzero_bits ();
 }
 
 /* Return TRUE is OP, an SSA_NAME has a range of values [0..1], false
diff --git a/gcc/value-query.cc b/gcc/value-query.cc
index 538cfad19b1..8ccdc9f8852 100644
--- a/gcc/value-query.cc
+++ b/gcc/value-query.cc
@@ -261,13 +261,10 @@  get_ssa_name_range_info (vrange &r, const_tree name)
   gcc_checking_assert (!POINTER_TYPE_P (type));
   gcc_checking_assert (TREE_CODE (name) == SSA_NAME);
 
-  void *ri = SSA_NAME_RANGE_INFO (name);
+  vrange_storage *ri = SSA_NAME_RANGE_INFO (name);
 
   if (ri)
-    {
-      vrange_storage vstore (NULL);
-      vstore.get_vrange (ri, r, TREE_TYPE (name));
-    }
+    ri->get_vrange (r, TREE_TYPE (name));
   else
     r.set_varying (type);
 }
diff --git a/gcc/value-range-storage.cc b/gcc/value-range-storage.cc
index bf23f6dd476..98a6d99af78 100644
--- a/gcc/value-range-storage.cc
+++ b/gcc/value-range-storage.cc
@@ -30,35 +30,137 @@  along with GCC; see the file COPYING3.  If not see
 #include "gimple-range.h"
 #include "value-range-storage.h"
 
-// Return a newly allocated slot holding R, or NULL if storing a range
-// of R's type is not supported.
+// Generic memory allocator to share one interface between GC and
+// obstack allocators.
+
+class vrange_internal_alloc
+{
+public:
+  vrange_internal_alloc () { }
+  virtual ~vrange_internal_alloc () { }
+  virtual void *alloc (size_t size) = 0;
+  virtual void free (void *) = 0;
+private:
+  DISABLE_COPY_AND_ASSIGN (vrange_internal_alloc);
+};
+
+class vrange_obstack_alloc final: public vrange_internal_alloc
+{
+public:
+  vrange_obstack_alloc ()
+  {
+    obstack_init (&m_obstack);
+  }
+  virtual ~vrange_obstack_alloc () final override
+  {
+    obstack_free (&m_obstack, NULL);
+  }
+  virtual void *alloc (size_t size) final override
+  {
+    return obstack_alloc (&m_obstack, size);
+  }
+  virtual void free (void *) final override { }
+private:
+  obstack m_obstack;
+};
+
+class vrange_ggc_alloc final: public vrange_internal_alloc
+{
+public:
+  vrange_ggc_alloc () { }
+  virtual ~vrange_ggc_alloc () final override { }
+  virtual void *alloc (size_t size) final override
+  {
+    return ggc_internal_alloc (size);
+  }
+  virtual void free (void *p) final override
+  {
+    return ggc_free (p);
+  }
+};
+
+vrange_allocator::vrange_allocator (bool gc)
+  : m_gc (gc)
+{
+  if (gc)
+    m_alloc = new vrange_ggc_alloc;
+  else
+    m_alloc = new vrange_obstack_alloc;
+}
+
+vrange_allocator::~vrange_allocator ()
+{
+  delete m_alloc;
+}
 
 void *
-vrange_storage::alloc_slot (const vrange &r)
+vrange_allocator::alloc (size_t size)
+{
+  return m_alloc->alloc (size);
+}
+
+void
+vrange_allocator::free (void *p)
+{
+  m_alloc->free (p);
+}
+
+// Allocate a new vrange_storage object initialized to R and return
+// it.
+
+vrange_storage *
+vrange_allocator::clone (const vrange &r)
+{
+  return vrange_storage::alloc (*m_alloc, r);
+}
+
+vrange_storage *
+vrange_allocator::clone_varying (tree type)
 {
-  gcc_checking_assert (m_alloc);
+  if (irange::supports_p (type))
+    return irange_storage::alloc (*m_alloc, int_range <1> (type));
+  if (frange::supports_p (type))
+    return frange_storage::alloc (*m_alloc, frange (type));
+  return NULL;
+}
 
+vrange_storage *
+vrange_allocator::clone_undefined (tree type)
+{
+  if (irange::supports_p (type))
+    return irange_storage::alloc (*m_alloc, int_range<1> ());
+  if (frange::supports_p (type))
+    return frange_storage::alloc  (*m_alloc, frange ());
+  return NULL;
+}
+
+// Allocate a new vrange_storage object initialized to R and return
+// it.  Return NULL if R is unsupported.
+
+vrange_storage *
+vrange_storage::alloc (vrange_internal_alloc &allocator, const vrange &r)
+{
   if (is_a <irange> (r))
-    return irange_storage_slot::alloc_slot (*m_alloc, as_a <irange> (r));
+    return irange_storage::alloc (allocator, as_a <irange> (r));
   if (is_a <frange> (r))
-    return frange_storage_slot::alloc_slot (*m_alloc, as_a <frange> (r));
+    return frange_storage::alloc (allocator, as_a <frange> (r));
   return NULL;
 }
 
-// Set SLOT to R.
+// Set storage to R.
 
 void
-vrange_storage::set_vrange (void *slot, const vrange &r)
+vrange_storage::set_vrange (const vrange &r)
 {
   if (is_a <irange> (r))
     {
-      irange_storage_slot *s = static_cast <irange_storage_slot *> (slot);
+      irange_storage *s = static_cast <irange_storage *> (this);
       gcc_checking_assert (s->fits_p (as_a <irange> (r)));
       s->set_irange (as_a <irange> (r));
     }
   else if (is_a <frange> (r))
     {
-      frange_storage_slot *s = static_cast <frange_storage_slot *> (slot);
+      frange_storage *s = static_cast <frange_storage *> (this);
       gcc_checking_assert (s->fits_p (as_a <frange> (r)));
       s->set_frange (as_a <frange> (r));
     }
@@ -66,188 +168,324 @@  vrange_storage::set_vrange (void *slot, const vrange &r)
     gcc_unreachable ();
 }
 
-// Restore R from SLOT.  TYPE is the type of R.
+// Restore R from storage.
 
 void
-vrange_storage::get_vrange (const void *slot, vrange &r, tree type)
+vrange_storage::get_vrange (vrange &r, tree type) const
 {
   if (is_a <irange> (r))
     {
-      const irange_storage_slot *s
-	= static_cast <const irange_storage_slot *> (slot);
+      const irange_storage *s = static_cast <const irange_storage *> (this);
       s->get_irange (as_a <irange> (r), type);
     }
   else if (is_a <frange> (r))
     {
-      const frange_storage_slot *s
-	= static_cast <const frange_storage_slot *> (slot);
+      const frange_storage *s = static_cast <const frange_storage *> (this);
       s->get_frange (as_a <frange> (r), type);
     }
   else
     gcc_unreachable ();
 }
 
-// Return TRUE if SLOT can fit R.
+// Return TRUE if storage can fit R.
 
 bool
-vrange_storage::fits_p (const void *slot, const vrange &r)
+vrange_storage::fits_p (const vrange &r) const
 {
   if (is_a <irange> (r))
     {
-      const irange_storage_slot *s
-	= static_cast <const irange_storage_slot *> (slot);
+      const irange_storage *s = static_cast <const irange_storage *> (this);
       return s->fits_p (as_a <irange> (r));
     }
   if (is_a <frange> (r))
     {
-      const frange_storage_slot *s
-	= static_cast <const frange_storage_slot *> (slot);
+      const frange_storage *s = static_cast <const frange_storage *> (this);
       return s->fits_p (as_a <frange> (r));
     }
   gcc_unreachable ();
   return false;
 }
 
-// Factory that creates a new irange_storage_slot object containing R.
-// This is the only way to construct an irange slot as stack creation
-// is disallowed.
+// Return TRUE if the range in storage is equal to R.
+
+bool
+vrange_storage::equal_p (const vrange &r, tree type) const
+{
+  if (is_a <irange> (r))
+    {
+      const irange_storage *s = static_cast <const irange_storage *> (this);
+      return s->equal_p (as_a <irange> (r), type);
+    }
+  if (is_a <frange> (r))
+    {
+      const frange_storage *s = static_cast <const frange_storage *> (this);
+      return s->equal_p (as_a <frange> (r), type);
+    }
+  gcc_unreachable ();
+}
+
+//============================================================================
+// irange_storage implementation
+//============================================================================
+
+unsigned char *
+irange_storage::write_lengths_address ()
+{
+  return (unsigned char *)&m_val[(m_num_ranges * 2 + 1)
+				 * WIDE_INT_MAX_HWIS (m_precision)];
+}
+
+const unsigned char *
+irange_storage::lengths_address () const
+{
+  return const_cast <irange_storage *> (this)->write_lengths_address ();
+}
 
-irange_storage_slot *
-irange_storage_slot::alloc_slot (vrange_allocator &allocator, const irange &r)
+// Allocate a new irange_storage object initialized to R.
+
+irange_storage *
+irange_storage::alloc (vrange_internal_alloc &allocator, const irange &r)
 {
-  size_t size = irange_storage_slot::size (r);
-  irange_storage_slot *p
-    = static_cast <irange_storage_slot *> (allocator.alloc (size));
-  new (p) irange_storage_slot (r);
+  size_t size = irange_storage::size (r);
+  irange_storage *p = static_cast <irange_storage *> (allocator.alloc (size));
+  new (p) irange_storage (r);
   return p;
 }
 
-// Initialize the current slot with R.
+// Initialize the storage with R.
 
-irange_storage_slot::irange_storage_slot (const irange &r)
+irange_storage::irange_storage (const irange &r)
+  : m_max_ranges (r.num_pairs ())
 {
-  gcc_checking_assert (!r.undefined_p ());
+  m_num_ranges = m_max_ranges;
+  set_irange (r);
+}
 
-  unsigned prec = TYPE_PRECISION (r.type ());
-  unsigned n = num_wide_ints_needed (r);
-  if (n > MAX_INTS)
-    {
-      int_range<MAX_PAIRS> squash (r);
-      m_ints.set_precision (prec, num_wide_ints_needed (squash));
-      set_irange (squash);
-    }
-  else
-    {
-      m_ints.set_precision (prec, n);
-      set_irange (r);
-    }
+static inline void
+write_wide_int (HOST_WIDE_INT *&val, unsigned char *&len, const wide_int &w)
+{
+  *len = w.get_len ();
+  for (unsigned i = 0; i < *len; ++i)
+    *val++ = w.elt (i);
+  ++len;
 }
 
-// Store R into the current slot.
+// Store R into the current storage.
 
 void
-irange_storage_slot::set_irange (const irange &r)
+irange_storage::set_irange (const irange &r)
 {
   gcc_checking_assert (fits_p (r));
 
-  m_ints[0] = r.get_nonzero_bits ();
+  if (r.undefined_p ())
+    {
+      m_kind = VR_UNDEFINED;
+      return;
+    }
+  if (r.varying_p ())
+    {
+      m_kind = VR_VARYING;
+      return;
+    }
+
+  m_precision = TYPE_PRECISION (r.type ());
+  m_num_ranges = r.num_pairs ();
+  m_kind = VR_RANGE;
+
+  HOST_WIDE_INT *val = &m_val[0];
+  unsigned char *len = write_lengths_address ();
+
+  for (unsigned i = 0; i < r.num_pairs (); ++i)
+    {
+      write_wide_int (val, len, r.lower_bound (i));
+      write_wide_int (val, len, r.upper_bound (i));
+    }
+  if (r.m_nonzero_mask)
+    write_wide_int (val, len, wi::to_wide (r.m_nonzero_mask));
+  else
+    write_wide_int (val, len, wi::minus_one (m_precision));
 
-  unsigned pairs = r.num_pairs ();
-  for (unsigned i = 0; i < pairs; ++i)
+  if (flag_checking)
     {
-      m_ints[i*2 + 1] = r.lower_bound (i);
-      m_ints[i*2 + 2] = r.upper_bound (i);
+      int_range_max tmp;
+      get_irange (tmp, r.type ());
+      gcc_checking_assert (tmp == r);
     }
 }
 
-// Restore a range of TYPE from the current slot into R.
+static inline void
+read_wide_int (wide_int &w,
+	       const HOST_WIDE_INT *val, unsigned char len, unsigned prec)
+{
+  trailing_wide_int_storage stow (prec, &len,
+				  const_cast <HOST_WIDE_INT *> (val));
+  w = trailing_wide_int (stow);
+}
+
+// Restore a range of TYPE from storage into R.
 
 void
-irange_storage_slot::get_irange (irange &r, tree type) const
+irange_storage::get_irange (irange &r, tree type) const
 {
-  gcc_checking_assert (TYPE_PRECISION (type) == m_ints.get_precision ());
+  if (m_kind == VR_UNDEFINED)
+    {
+      r.set_undefined ();
+      return;
+    }
+  if (m_kind == VR_VARYING)
+    {
+      r.set_varying (type);
+      return;
+    }
 
-  r.set_undefined ();
-  unsigned nelements = m_ints.num_elements ();
-  for (unsigned i = 1; i < nelements; i += 2)
+  gcc_checking_assert (TYPE_PRECISION (type) == m_precision);
+  const HOST_WIDE_INT *val = &m_val[0];
+  const unsigned char *len = lengths_address ();
+  wide_int w;
+
+  // Handle the common case where R can fit the new range.
+  if (r.m_max_ranges >= m_num_ranges)
     {
-      int_range<2> tmp (type, m_ints[i], m_ints[i + 1]);
-      r.union_ (tmp);
+      r.m_kind = VR_RANGE;
+      r.m_num_ranges = m_num_ranges;
+      for (unsigned i = 0; i < m_num_ranges * 2; ++i)
+	{
+	  read_wide_int (w, val, *len, m_precision);
+	  r.m_base[i] = wide_int_to_tree (type, w);
+	  val += *len++;
+	}
+    }
+  // Otherwise build the range piecewise.
+  else
+    {
+      r.set_undefined ();
+      for (unsigned i = 0; i < m_num_ranges; ++i)
+	{
+	  wide_int lb, ub;
+	  read_wide_int (lb, val, *len, m_precision);
+	  val += *len++;
+	  read_wide_int (ub, val, *len, m_precision);
+	  val += *len++;
+	  int_range<1> tmp (type, lb, ub);
+	  r.union_ (tmp);
+	}
+    }
+  read_wide_int (w, val, *len, m_precision);
+  if (w == -1)
+    r.m_nonzero_mask = NULL;
+  else
+    {
+      r.m_nonzero_mask = wide_int_to_tree (type, w);
+      if (r.m_kind == VR_VARYING)
+	r.m_kind = VR_RANGE;
     }
-  r.set_nonzero_bits (get_nonzero_bits ());
-}
 
-// Return the size in bytes to allocate a slot that can hold R.
+  if (flag_checking)
+    r.verify_range ();
+}
 
-size_t
-irange_storage_slot::size (const irange &r)
+bool
+irange_storage::equal_p (const irange &r, tree type) const
 {
-  gcc_checking_assert (!r.undefined_p ());
-
-  unsigned prec = TYPE_PRECISION (r.type ());
-  unsigned n = num_wide_ints_needed (r);
-  if (n > MAX_INTS)
-    n = MAX_INTS;
-  return (sizeof (irange_storage_slot)
-	  + trailing_wide_ints<MAX_INTS>::extra_size (prec, n));
+  if (m_kind == VR_UNDEFINED || r.undefined_p ())
+    return m_kind == r.m_kind;
+  if (m_kind == VR_VARYING || r.varying_p ())
+    return m_kind == r.m_kind && types_compatible_p (r.type (), type);
+
+  tree rtype = r.type ();
+  if (!types_compatible_p (rtype, type))
+    return false;
+
+  // ?? We could make this faster by doing the comparison in place,
+  // without going through get_irange.
+  int_range_max tmp;
+  get_irange (tmp, rtype);
+  return tmp == r;
 }
 
-// Return the number of wide ints needed to represent R.
+// Return the size in bytes to allocate storage that can hold R.
 
-unsigned int
-irange_storage_slot::num_wide_ints_needed (const irange &r)
+size_t
+irange_storage::size (const irange &r)
 {
-  return r.num_pairs () * 2 + 1;
+  if (r.undefined_p ())
+    return sizeof (irange_storage);
+
+  unsigned prec = TYPE_PRECISION (r.type ());
+  unsigned n = r.num_pairs () * 2 + 1;
+  unsigned hwi_size = ((n * WIDE_INT_MAX_HWIS (prec) - 1)
+		       * sizeof (HOST_WIDE_INT));
+  unsigned len_size = n;
+  return sizeof (irange_storage) + hwi_size + len_size;
 }
 
-// Return TRUE if R fits in the current slot.
+// Return TRUE if R fits in the current storage.
 
 bool
-irange_storage_slot::fits_p (const irange &r) const
+irange_storage::fits_p (const irange &r) const
 {
-  return m_ints.num_elements () >= num_wide_ints_needed (r);
+  return m_max_ranges >= r.num_pairs ();
 }
 
-// Dump the current slot.
-
 void
-irange_storage_slot::dump () const
+irange_storage::dump () const
 {
-  fprintf (stderr, "raw irange_storage_slot:\n");
-  for (unsigned i = 1; i < m_ints.num_elements (); i += 2)
+  fprintf (stderr, "irange_storage (prec=%d, ranges=%d):\n",
+	   m_precision, m_num_ranges);
+
+  if (m_num_ranges == 0)
+    return;
+
+  const HOST_WIDE_INT *val = &m_val[0];
+  const unsigned char *len = lengths_address ();
+  int i, j;
+
+  fprintf (stderr, "  lengths = [ ");
+  for (i = 0; i < m_num_ranges * 2 + 1; ++i)
+    fprintf (stderr, "%d ", len[i]);
+  fprintf (stderr, "]\n");
+
+  for (i = 0; i < m_num_ranges; ++i)
     {
-      m_ints[i].dump ();
-      m_ints[i + 1].dump ();
+      for (j = 0; j < *len; ++j)
+	fprintf (stderr, "  [PAIR %d] LB " HOST_WIDE_INT_PRINT_DEC "\n", i,
+		 *val++);
+      ++len;
+      for (j = 0; j < *len; ++j)
+	fprintf (stderr, "  [PAIR %d] UB " HOST_WIDE_INT_PRINT_DEC "\n", i,
+		 *val++);
+      ++len;
     }
-  fprintf (stderr, "NONZERO ");
-  wide_int nz = get_nonzero_bits ();
-  nz.dump ();
+  for (j = 0; j < *len; ++j)
+    fprintf (stderr, "  [NZ] " HOST_WIDE_INT_PRINT_DEC "\n", *val++);
 }
 
 DEBUG_FUNCTION void
-debug (const irange_storage_slot &storage)
+debug (const irange_storage &storage)
 {
   storage.dump ();
   fprintf (stderr, "\n");
 }
 
-// Implementation of frange_storage_slot.
+//============================================================================
+// frange_storage implementation
+//============================================================================
 
-frange_storage_slot *
-frange_storage_slot::alloc_slot (vrange_allocator &allocator, const frange &r)
+// Allocate a new frange_storage object initialized to R.
+
+frange_storage *
+frange_storage::alloc (vrange_internal_alloc &allocator, const frange &r)
 {
-  size_t size = sizeof (frange_storage_slot);
-  frange_storage_slot *p
-    = static_cast <frange_storage_slot *> (allocator.alloc (size));
-  new (p) frange_storage_slot (r);
+  size_t size = sizeof (frange_storage);
+  frange_storage *p = static_cast <frange_storage *> (allocator.alloc (size));
+  new (p) frange_storage (r);
   return p;
 }
 
 void
-frange_storage_slot::set_frange (const frange &r)
+frange_storage::set_frange (const frange &r)
 {
   gcc_checking_assert (fits_p (r));
-  gcc_checking_assert (!r.undefined_p ());
 
   m_kind = r.m_kind;
   m_min = r.m_min;
@@ -257,7 +495,7 @@  frange_storage_slot::set_frange (const frange &r)
 }
 
 void
-frange_storage_slot::get_frange (frange &r, tree type) const
+frange_storage::get_frange (frange &r, tree type) const
 {
   gcc_checking_assert (r.supports_type_p (type));
 
@@ -275,6 +513,11 @@  frange_storage_slot::get_frange (frange &r, tree type) const
 	r.set_undefined ();
       return;
     }
+  if (m_kind == VR_UNDEFINED)
+    {
+      r.set_undefined ();
+      return;
+    }
 
   // We use the constructor to create the new range instead of writing
   // out the bits into the frange directly, because the global range
@@ -293,7 +536,34 @@  frange_storage_slot::get_frange (frange &r, tree type) const
 }
 
 bool
-frange_storage_slot::fits_p (const frange &) const
+frange_storage::equal_p (const frange &r, tree type) const
+{
+  if (r.undefined_p ())
+    return m_kind == VR_UNDEFINED;
+
+  tree rtype = r.type ();
+  if (!types_compatible_p (rtype, type))
+    return false;
+
+  frange tmp;
+  get_frange (tmp, rtype);
+  return tmp == r;
+}
+
+bool
+frange_storage::fits_p (const frange &) const
 {
   return true;
 }
+
+static vrange_allocator ggc_vrange_allocator (true);
+
+vrange_storage *ggc_alloc_vrange_storage (tree type)
+{
+  return ggc_vrange_allocator.clone_varying (type);
+}
+
+vrange_storage *ggc_alloc_vrange_storage (const vrange &r)
+{
+  return ggc_vrange_allocator.clone (r);
+}
diff --git a/gcc/value-range-storage.h b/gcc/value-range-storage.h
index 070b85c5739..4ec0da73059 100644
--- a/gcc/value-range-storage.h
+++ b/gcc/value-range-storage.h
@@ -21,97 +21,101 @@  along with GCC; see the file COPYING3.  If not see
 #ifndef GCC_VALUE_RANGE_STORAGE_H
 #define GCC_VALUE_RANGE_STORAGE_H
 
-// This class is used to allocate the minimum amount of storage needed
-// for a given range.  Storage is automatically freed at destruction
-// of the class.
+// This class is used to allocate chunks of memory that can store
+// ranges as memory efficiently as possible.
 
 class vrange_allocator
 {
 public:
-  vrange_allocator () { }
-  virtual ~vrange_allocator () { }
-  // Allocate a range of TYPE.
-  vrange *alloc_vrange (tree type);
-  // Allocate a memory block of BYTES.
-  virtual void *alloc (unsigned bytes) = 0;
-  virtual void free (void *p) = 0;
-  // Return a clone of SRC.
-  template <typename T> T *clone (const T &src);
+  // Use GC memory when GC is true, otherwise use obstacks.
+  vrange_allocator (bool gc = false);
+  ~vrange_allocator ();
+  class vrange_storage *clone (const vrange &r);
+  vrange_storage *clone_varying (tree type);
+  vrange_storage *clone_undefined (tree type);
+  void *alloc (size_t size);
+  void free (void *);
 private:
-  irange *alloc_irange (unsigned pairs);
-  frange *alloc_frange ();
-  void operator= (const vrange_allocator &) = delete;
+  DISABLE_COPY_AND_ASSIGN (vrange_allocator);
+  class vrange_internal_alloc *m_alloc;
+  bool m_gc;
 };
 
-// This class is used to allocate chunks of memory that can store
-// ranges as memory efficiently as possible.  It is meant to be used
-// when long term storage of a range is needed.  The class can be used
-// with any vrange_allocator (i.e. alloca or GC).
+// Efficient memory storage for a vrange.
+//
+// The GTY marker here does nothing but get gengtype to generate the
+// ggc_test_and_set_mark calls.  We ignore the derived classes, since
+// they don't contain any pointers.
 
-class vrange_storage
+class GTY(()) vrange_storage
 {
 public:
-  vrange_storage (vrange_allocator *alloc) : m_alloc (alloc) { }
-  void *alloc_slot (const vrange &r);
-  void free (void *slot) { m_alloc->free (slot); }
-  void get_vrange (const void *slot, vrange &r, tree type);
-  void set_vrange (void *slot, const vrange &r);
-  static bool fits_p (const void *slot, const vrange &r);
-private:
-  DISABLE_COPY_AND_ASSIGN (vrange_storage);
-  vrange_allocator *m_alloc;
+  static vrange_storage *alloc (vrange_internal_alloc &, const vrange &);
+  void get_vrange (vrange &r, tree type) const;
+  void set_vrange (const vrange &r);
+  bool fits_p (const vrange &r) const;
+  bool equal_p (const vrange &r, tree type) const;
+protected:
+  // Stack initialization disallowed.
+  vrange_storage () { }
 };
 
-// A chunk of memory pointing to an irange storage.
+// Efficient memory storage for an irange.
 
-class GTY ((variable_size)) irange_storage_slot
+class irange_storage : public vrange_storage
 {
 public:
-  static irange_storage_slot *alloc_slot (vrange_allocator &, const irange &r);
+  static irange_storage *alloc (vrange_internal_alloc &, const irange &);
   void set_irange (const irange &r);
   void get_irange (irange &r, tree type) const;
-  wide_int get_nonzero_bits () const { return m_ints[0]; }
+  bool equal_p (const irange &r, tree type) const;
   bool fits_p (const irange &r) const;
-  static size_t size (const irange &r);
   void dump () const;
 private:
-  DISABLE_COPY_AND_ASSIGN (irange_storage_slot);
-  friend void gt_ggc_mx_irange_storage_slot (void *);
-  friend void gt_pch_p_19irange_storage_slot (void *, void *,
+  DISABLE_COPY_AND_ASSIGN (irange_storage);
+  static size_t size (const irange &r);
+  const unsigned char *lengths_address () const;
+  unsigned char *write_lengths_address ();
+  friend void gt_ggc_mx_irange_storage (void *);
+  friend void gt_pch_p_14irange_storage (void *, void *,
 					      gt_pointer_operator, void *);
-  friend void gt_pch_nx_irange_storage_slot (void *);
+  friend void gt_pch_nx_irange_storage (void *);
+
+  // The shared precision of each number.
+  unsigned short m_precision;
 
-  // This is the maximum number of wide_int's allowed in the trailing
-  // ints structure, without going over 16 bytes (128 bits) in the
-  // control word that precedes the HOST_WIDE_INTs in
-  // trailing_wide_ints::m_val[].
-  static const unsigned MAX_INTS = 12;
+  // The max number of sub-ranges that fit in this storage.
+  const unsigned char m_max_ranges;
 
-  // Maximum number of range pairs we can handle, considering the
-  // nonzero bits take one wide_int.
-  static const unsigned MAX_PAIRS = (MAX_INTS - 1) / 2;
+  // The number of stored sub-ranges.
+  unsigned char m_num_ranges;
 
-  // Constructor is private to disallow stack initialization.  Use
-  // alloc_slot() to create objects.
-  irange_storage_slot (const irange &r);
+  enum value_range_kind m_kind : 3;
 
-  static unsigned num_wide_ints_needed (const irange &r);
+  // The length of this is m_num_ranges * 2 + 1 to accommodate the nonzero bits.
+  HOST_WIDE_INT m_val[1];
 
-  trailing_wide_ints<MAX_INTS> m_ints;
+  // Another variable-length part of the structure following the HWIs.
+  // This is the length of each wide_int in m_val.
+  //
+  // unsigned char m_len[];
+
+  irange_storage (const irange &r);
 };
 
-// A chunk of memory to store an frange to long term memory.
+// Efficient memory storage for an frange.
 
-class GTY (()) frange_storage_slot
+class frange_storage : public vrange_storage
 {
  public:
-  static frange_storage_slot *alloc_slot (vrange_allocator &, const frange &r);
+  static frange_storage *alloc (vrange_internal_alloc &, const frange &r);
   void set_frange (const frange &r);
   void get_frange (frange &r, tree type) const;
+  bool equal_p (const frange &r, tree type) const;
   bool fits_p (const frange &) const;
  private:
-  frange_storage_slot (const frange &r) { set_frange (r); }
-  DISABLE_COPY_AND_ASSIGN (frange_storage_slot);
+  frange_storage (const frange &r) { set_frange (r); }
+  DISABLE_COPY_AND_ASSIGN (frange_storage);
 
   enum value_range_kind m_kind;
   REAL_VALUE_TYPE m_min;
@@ -120,113 +124,7 @@  class GTY (()) frange_storage_slot
   bool m_neg_nan;
 };
 
-class obstack_vrange_allocator final: public vrange_allocator
-{
-public:
-  obstack_vrange_allocator ()
-  {
-    obstack_init (&m_obstack);
-  }
-  virtual ~obstack_vrange_allocator () final override
-  {
-    obstack_free (&m_obstack, NULL);
-  }
-  virtual void *alloc (unsigned bytes) final override
-  {
-    return obstack_alloc (&m_obstack, bytes);
-  }
-  virtual void free (void *) final override { }
-private:
-  obstack m_obstack;
-};
-
-class ggc_vrange_allocator final: public vrange_allocator
-{
-public:
-  ggc_vrange_allocator () { }
-  virtual ~ggc_vrange_allocator () final override { }
-  virtual void *alloc (unsigned bytes) final override
-  {
-    return ggc_internal_alloc (bytes);
-  }
-  virtual void free (void *p) final override
-  {
-    return ggc_free (p);
-  }
-};
-
-// Return a new range to hold ranges of TYPE.  The newly allocated
-// range is initialized to VR_UNDEFINED.
-
-inline vrange *
-vrange_allocator::alloc_vrange (tree type)
-{
-  if (irange::supports_p (type))
-    return alloc_irange (2);
-  if (frange::supports_p (type))
-    return alloc_frange ();
-  return NULL;
-  gcc_unreachable ();
-}
-
-// Return a new range with NUM_PAIRS.
-
-inline irange *
-vrange_allocator::alloc_irange (unsigned num_pairs)
-{
-  // Never allocate 0 pairs.
-  if (num_pairs < 1)
-    num_pairs = 2;
-
-  size_t nbytes = sizeof (tree) * 2 * num_pairs;
-
-  // Allocate the irange and required memory for the vector.
-  void *r = alloc (sizeof (irange));
-  tree *mem = static_cast <tree *> (alloc (nbytes));
-  return new (r) irange (mem, num_pairs);
-}
-
-inline frange *
-vrange_allocator::alloc_frange ()
-{
-  void *r = alloc (sizeof (frange));
-  return new (r) frange ();
-}
-
-// Return a clone of an irange.
-
-template <>
-inline irange *
-vrange_allocator::clone <irange> (const irange &src)
-{
-  irange *r = alloc_irange (src.num_pairs ());
-  *r = src;
-  return r;
-}
-
-// Return a clone of an frange.
-
-template <>
-inline frange *
-vrange_allocator::clone <frange> (const frange &src)
-{
-  frange *r = alloc_frange ();
-  *r = src;
-  return r;
-}
-
-// Return a clone of a vrange.
-
-template <>
-inline vrange *
-vrange_allocator::clone <vrange> (const vrange &src)
-{
-  if (is_a <irange> (src))
-    return clone <irange> (as_a <irange> (src));
-  if (is_a <frange> (src))
-    return clone <frange> (as_a <frange> (src));
-  return NULL;
-  gcc_unreachable ();
-}
+extern vrange_storage *ggc_alloc_vrange_storage (tree type);
+extern vrange_storage *ggc_alloc_vrange_storage (const vrange &);
 
 #endif // GCC_VALUE_RANGE_STORAGE_H
diff --git a/gcc/value-range.h b/gcc/value-range.h
index 0b61341e5c4..9d485fbbe77 100644
--- a/gcc/value-range.h
+++ b/gcc/value-range.h
@@ -119,7 +119,7 @@  namespace inchash
 class GTY((user)) irange : public vrange
 {
   friend value_range_kind get_legacy_range (const irange &, tree &, tree &);
-  friend class vrange_allocator;
+  friend class irange_storage;
 public:
   // In-place setters.
   virtual void set (tree, tree, value_range_kind = VR_RANGE) override;
@@ -310,7 +310,7 @@  nan_state::neg_p () const
 
 class GTY((user)) frange : public vrange
 {
-  friend class frange_storage_slot;
+  friend class frange_storage;
   friend class vrange_printer;
   friend void gt_ggc_mx (frange *);
   friend void gt_pch_nx (frange *);