diff mbox series

range-ops contribution

Message ID 50a1cde1-adb5-1643-ad44-2250261a6e4d@redhat.com
State New
Headers show
Series range-ops contribution | expand

Commit Message

Aldy Hernandez Oct. 1, 2019, 5:11 p.m. UTC
Hi folks.

Here is my official submission of the range-ops part of the ranger to 
mainline.

I realize that I could have split this patch up into 2-3 separate ones, 
but I don't want to run into the chicken-and-egg scenario of last time, 
where I had 4 inter-connected patches that were hard to review 
independently.

A few notes that may help in reviewing.

The range-ops proper is in range-op.*.

The range.* files are separate files containing some simple auxiliary 
functions that will have irange and value_range_base counterparts.  Our 
development branch will have #define value_range_base irange, and some 
auxiliary glue, but none of that will be in trunk.  As promised, trunk 
is all value_range_base.

* The changes to tree-vrp.* are:

1. New constructors to align the irange and value_range_base APIs.  We 
discussed this a few months ago, and these were the agreed upon changes 
to the API.

2. Extracting the symbolic handling of PLUS/MINUS and POINTER_PLUS_EXPR 
into separate functions (extract_range_from_plus_minus_expr and 
extract_range_from_pointer_plus_expr).

3. New range_fold_unary_expr and range_fold_binary_expr functions. These 
are the VRP entry point into range-ops.  They normalize symbolics and do 
some minor pre-shuffling before calling range-ops to do the actual range 
operation.

(I have some minor shuffling of these two functions that I'd like to 
post as separate clean-ups, but I didn't want to pollute this patchset 
with them: Fedora taking forever to test and all.)

4. Removing the remaining extract_range_from_*ary_expr() code, since 
everything not symbolic is now obsolete and handled by range-ops.

5. Removing the wide-int-range.* files.  Most of the code is now 
in-lined into range-op.cc with the exception of 
wide_int_range_set_zero_nonzero_bits which has been moved into tree-vrp.c.

6. Minor performance tweak: Revamp value_range_base::singleton_p() so 
that it doesn't create a value_range_base until it's absolute sure it 
needs to call ranges_from_anti_range.

I think that's all!

For testing this patchset, I left the old extract_*ary_expr_code in, and 
added comparison code that trapped if there were any differences between 
what VRP was getting and what range-ops calculated.  I found no 
regressions in either a full bootstrap/tests (all languages), or with a 
full Fedora build.  As a bonus, we found quite a few cases where 
range-ops was getting better results.

Only after verifying we were in violent agreement, did I remove the 
extract_range_from_*ary_expr and wide-int-range.* code.

(Note: At the last minute, Jeff found one regression in the multi-day 
Fedora build.  I will fix this as a follow-up.  BTW, it does not cause 
any regressions in a bootstrap or GCC test-run, just a discrepancy on 
one specific corner case between VRP and range-ops.)

I measured VRP and evrp times in trunk with and without this patch.  I 
used 242 .ii files I keep around from an old compiler bootstrap, and ran 
and re-ran them 4-5 times with each compiler.  I am happy to report, 
there is nothing but noise there:

	Range-ops is slower than evrp by 0.58%.  To put this into
	perspective, this is a cumulative 0.009967 second degradation
	over the course of 242 files.

	OTOH, range-ops is faster than VRP by 0.26%.  Again, this is a
	0.01746 second improvement over the said files.  All noise.

Interestingly, these numbers include all the converting back and forth 
from trees and wide-ints.  We hope to get a slight performance gain 
if/when we move to a strict wide-int range representation (irange?).

The attached patch is based off of trunk from a few weeks ago.  If 
approved, I will merge and re-test again with latest trunk.  I won't 
however, test all of Fedora :-P.

May I be so bold as to suggest that if there are minor suggestions that 
arise from this review, that they be done as follow-ups?  I'd like to 
get as much testing as possible in this stage1.

Thanks.
Aldy

Comments

Jeff Law Oct. 1, 2019, 6:07 p.m. UTC | #1
On 10/1/19 11:11 AM, Aldy Hernandez wrote:
> Hi folks.
> 
> Here is my official submission of the range-ops part of the ranger to
> mainline.
> 
> I realize that I could have split this patch up into 2-3 separate ones,
> but I don't want to run into the chicken-and-egg scenario of last time,
> where I had 4 inter-connected patches that were hard to review
> independently.
It might have helped a bit, but it was pretty easy to find the mapping
from bits in wide-int-range.cc into range-op.cc -- the comments were
copied :-)

> 
> A few notes that may help in reviewing.
> 
> The range-ops proper is in range-op.*.
> 
> The range.* files are separate files containing some simple auxiliary
> functions that will have irange and value_range_base counterparts.  Our
> development branch will have #define value_range_base irange, and some
> auxiliary glue, but none of that will be in trunk.  As promised, trunk
> is all value_range_base.
> 
> * The changes to tree-vrp.* are:
> 
> 1. New constructors to align the irange and value_range_base APIs.  We
> discussed this a few months ago, and these were the agreed upon changes
> to the API.
Right.

> 
> 2. Extracting the symbolic handling of PLUS/MINUS and POINTER_PLUS_EXPR
> into separate functions (extract_range_from_plus_minus_expr and
> extract_range_from_pointer_plus_expr).
In retrospect we should have broken down that function in the old vrp
code.  I suspect that function started out relatively small and just
kept expanding over time into the horrid mess that became.

THere were a number of places where you ended up pulling code from two
existing locations into a single point in range-ops.  But again, it was
just a matter of finding the multiple original source points and mapping
then into their new location in range-ops.cc, using the copied comments
as a guide.

> 
> 3. New range_fold_unary_expr and range_fold_binary_expr functions. These
> are the VRP entry point into range-ops.  They normalize symbolics and do
> some minor pre-shuffling before calling range-ops to do the actual range
> operation.
Right.  I see these as primarily an adapter between existing code and
the new range ops.

> 
> (I have some minor shuffling of these two functions that I'd like to
> post as separate clean-ups, but I didn't want to pollute this patchset
> with them: Fedora taking forever to test and all.)
Works for me.


> 5. Removing the wide-int-range.* files.  Most of the code is now
> in-lined into range-op.cc with the exception of
> wide_int_range_set_zero_nonzero_bits which has been moved into tree-vrp.c.
Right.  Largely follows from #2 above.

> 
> I think that's all!
> 
> For testing this patchset, I left the old extract_*ary_expr_code in, and
> added comparison code that trapped if there were any differences between
> what VRP was getting and what range-ops calculated.  I found no
> regressions in either a full bootstrap/tests (all languages), or with a
> full Fedora build.  As a bonus, we found quite a few cases where
> range-ops was getting better results.
So to provide a bit more info here.  We ran tests back in the spring
which resulted in various bugfixes/improvements.  Aldy asked me to
re-run with their more recent branch.  That run exposed one very clear
ranger bug which Aldy fixed prior to submitting this patch as well as
several cases where the results differed.  We verified each and every
one of them was a case where Ranger was getting better results.

> (Note: At the last minute, Jeff found one regression in the multi-day
> Fedora build.  I will fix this as a follow-up.  BTW, it does not cause
> any regressions in a bootstrap or GCC test-run, just a discrepancy on
> one specific corner case between VRP and range-ops.)
Right.  WHat happened was there was a package that failed to build due
to the Fortran front-end getting tighter in its handling of argument
checking.  Once that (and various other issues related to using a gcc-10
snapshot) was worked around I rebuilt the failing packages.  That in
turn exposed another case where ranger and vrp differed in their results
(it's a MULT+overflow case IIRC)  ANyway, I'm leaving it to you to
analyze :-)


[ ... ]

> 
> The attached patch is based off of trunk from a few weeks ago.  If
> approved, I will merge and re-test again with latest trunk.  I won't
> however, test all of Fedora :-P.
Agreed, I don't think that's necessary.  FWIW, using a month-old branch
for testing was amazingly helpful in other respects.  We found ~100
packages that need updating for gcc-10 as well as a few bugs unrelated
to Ranger.  I've actually got Sunday's snapshot spinning now and fully
expect to be spinning Fedora builds with snapshots for the next several
months.  So I don't expect a Fedora build just to test after ranger
integration, but instead that it'll "just happen" on a subsequent snapshot.

> 
> May I be so bold as to suggest that if there are minor suggestions that
> arise from this review, that they be done as follow-ups?  I'd like to
> get as much testing as possible in this stage1.
There's a variety of small, obvious things that should be fixed.
Comment typos and the like.  There's one question on inversion that may
require some discussion.

See inline comments...


> 
> Thanks.
> Aldy
> 
> 
> range-ops.patch
> 
> diff --git a/gcc/ChangeLog b/gcc/ChangeLog
> index 65f9db966d0..9aa46c087b8 100644
> --- a/gcc/ChangeLog
> +++ b/gcc/ChangeLog
> @@ -1,3 +1,68 @@
> +2019-09-25  Aldy Hernandez  <aldyh@redhat.com>
> +
> +	* Makefile.in (OBJS): Add range.o and range-op.o.
> +	Remove wide-int-range.o.
> +	(GTFILES): Add range.h.
> +	* function-tests.c (test_ranges): New.
> +	(function_tests_c_tests): Call test_ranges.
> +	* ipa-cp.c (ipa_vr_operation_and_type_effects): Call
> +	range_fold_unary_expr instead of extract_range_from_unary_expr.
> +	* ipa-prop.c (ipa_compute_jump_functions_for_edge): Same.
> +	* range-op.cc: New file.
> +	* range-op.h: New file.
> +	* range.cc: New file.
> +	* range.h: New file.
> +	* selftest.h (range_tests): New prototype.
> +	* ssa.h: Include range.h.
> +	* tree-vrp.c (value_range_base::value_range_base): New
> +	constructors.
> +	(value_range_base::singleton_p): Do not call
> +	ranges_from_anti_range until sure we will need to.
> +	(value_range_base::type): Rename gcc_assert to
> +	gcc_checking_assert.
> +	(vrp_val_is_max): New argument.
> +	(vrp_val_is_min): Same.
> +	(wide_int_range_set_zero_nonzero_bits): Move from
> +	wide-int-range.cc.
> +	(extract_range_into_wide_ints): Remove.
> +	(extract_range_from_multiplicative_op): Remove.
> +	(extract_range_from_pointer_plus_expr): Abstract POINTER_PLUS code
> +	from extract_range_from_binary_expr.
> +	(extract_range_from_plus_minus_expr): Abstract PLUS/MINUS code
> +	from extract_range_from_binary_expr.
> +	(extract_range_from_binary_expr): Remove.
> +	(normalize_for_range_ops): New.
> +	(range_fold_binary_expr): New.
> +	(range_fold_unary_expr): New.
> +	(value_range_base::num_pairs): New.
> +	(value_range_base::lower_bound): New.
> +	(value_range_base::upper_bound): New.
> +	(value_range_base::upper_bound): New.
> +	(value_range_base::contains_p): New.
> +	(value_range_base::invert): New.
> +	(value_range_base::union_): New.
> +	(value_range_base::intersect): New.
> +	(range_compatible_p): New.
> +	(value_range_base::operator==): New.
> +	(determine_value_range_1): Call range_fold_*expr instead of
> +	extract_range_from_*expr.
> +	* tree-vrp.h (class value_range_base): Add new constructors.
> +	Add methods for union_, intersect, operator==, contains_p,
> +	num_pairs, lower_bound, upper_bound, invert.
> +	(vrp_val_is_min): Add handle_pointers argument.
> +	(vrp_val_is_max): Same.
> +	(extract_range_from_unary_expr): Remove.
> +	(extract_range_from_binary_expr): Remove.
> +	(range_fold_unary_expr): New.
> +	(range_fold_binary_expr): New.
> +	* vr-values.c (vr_values::extract_range_from_binary_expr): Call
> +	range_fold_binary_expr instead of extract_range_from_binary_expr.
> +	(vr_values::extract_range_basic): Same.
> +	(vr_values::extract_range_from_unary_expr): Call
> +	range_fold_unary_expr instead of extract_range_from_unary_expr.
> +	* wide-int-range.cc: Remove.
> +	* wide-int-range.h: Remove.
> +
>  2019-08-27  Richard Biener  <rguenther@suse.de>
>  
>  	* config/i386/i386-features.h
> diff --git a/gcc/Makefile.in b/gcc/Makefile.in
> index 597dc01328b..d9549710d8e 100644
> --- a/gcc/Makefile.in
> +++ b/gcc/Makefile.in
[ ... ]
> @@ -2548,6 +2549,7 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
>    $(srcdir)/stringpool.c $(srcdir)/tree.c $(srcdir)/varasm.c \
>    $(srcdir)/gimple.h \
>    $(srcdir)/gimple-ssa.h \
> +  $(srcdir)/range.h $(srcdir)/range.cc \
>    $(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
>    $(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
>    $(srcdir)/tree-dfa.c \
I didn't see any GTY thingies in range.h/range.cc, so I don't think this
is strictly necessary.  BUt I'm not going to stress if you leave it in.

> diff --git a/gcc/range-op.cc b/gcc/range-op.cc
> new file mode 100644
> index 00000000000..a21520df355
> --- /dev/null
> +++ b/gcc/range-op.cc
[ ... ]
> +
> +// Return a value_range_base instance that is a boolean FALSE.
> +
> +static inline value_range_base
> +range_true_and_false (tree type)
> +{
> +  unsigned prec = TYPE_PRECISION (type);
> +  return value_range_base (type, wi::zero (prec), wi::one (prec));
> +}
I think the function comment is wrong.  We're creating a range with both
true and false values.  Effectively a BRS_FULL range I believe.

> +// Return the summary information about boolean range LHS.  Return an
> +// "interesting" range in R.  For EMPTY or FULL, return the equivilent
> +// range for TYPE, for BRS_TRUE and BRS false, return the negation of
> +// the bool range.
s/equivilent/equivalent/

> +
> +static bool_range_state
> +get_bool_state (value_range_base &r,
> +		const value_range_base &lhs, tree val_type)
> +{
> +  // If there is no result, then this is unexectuable.
s/unexectuable/unexecutable/

[ ... ]


> +bool
> +operator_equal::op1_range (value_range_base &r, tree type,
> +			   const value_range_base &lhs,
> +			   const value_range_base &op2) const
> +{
> +  switch (get_bool_state (r, lhs, type))
> +    {
> +      case BRS_FALSE:
> +        // If the result is false, the only time we know anything is
> +	// if OP2 is a constant.
> +	if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
> +	  r = range_invert (op2);
> +	else
> +	  r.set_varying (type);
> +	break;
Looks like you've got spaces vs tabs wrong above.  It repeats in other
BRS_FALSE cases.  I suspect some global search/replace is the right
thing to do here.

[ ... ]
+
> +value_range_base
> +operator_lt::fold_range (tree type,
> +			 const value_range_base &op1,
> +			 const value_range_base &op2) const
> +{
> +  value_range_base r;
> +  if (empty_range_check (r, op1, op2))
> +    return r;
> +
> +  signop sign = TYPE_SIGN (op1.type ());
> +  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
> +
> +  if (wi::lt_p (op1.upper_bound (), op2.lower_bound (), sign))
> +    r = range_true (type);
> +  else
> +    if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
> +      r = range_false (type);
> +    else
> +      r = range_true_and_false (type);
> +  return r;
> +}
So formatting goof here.  It should be

  if (...)
    blah
  else if (...)
    blah2
  else
    blah3

This repeats in other operator_XX member functions.

[ ... ]

> +

> +value_range_base
> +operator_lshift::wi_fold (tree type,
> +			  const wide_int &lh_lb, const wide_int &lh_ub,
> +			  const wide_int &rh_lb, const wide_int &rh_ub) const
> +{
[ ... ]

> +	{
> +	  // For non-negative numbers, we're shifting out only zeroes,
> +	  // the value increases monotonically.  For negative numbers,
> +	  // we're shifting out only ones, the value decreases
> +	  // monotomically.
s/monotomically/monotonically/


> +
> +bool
> +operator_cast::op1_range (value_range_base &r, tree type,
> +			  const value_range_base &lhs,
> +			  const value_range_base &op2) const
> +{
[ ... ]


> +  // If the LHS precision is greater than the rhs precision, the LHS
> +  // range is resticted to the range of the RHS by this
> +  // assignment.
s/resticted/restricted/



> diff --git a/gcc/range.cc b/gcc/range.cc
> new file mode 100644
> index 00000000000..5e4d90436f2
> --- /dev/null
> +++ b/gcc/range.cc
[ ... ]
> +
> +value_range_base
> +range_intersect (const value_range_base &r1, const value_range_base &r2)
> +{
> +  value_range_base tmp (r1);
> +  tmp.intersect (r2);
> +  return tmp;
> +}
So low level question here.  This code looks particularly well suited
for the NRV optimization.  Can you check if NVR (named-value-return) is
triggering here, and if not why.  ISTM these are likely used heavily and
NVR would be a win.


> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> index 5ec4d17f23b..269a3cb090e 100644
> --- a/gcc/tree-vrp.c
> +++ b/gcc/tree-vrp.c
[ ... ]

> @@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see

> +
> +/* Return the inverse of a range.  */
> +
> +void
> +value_range_base::invert ()
> +{
> +  if (undefined_p ())
> +    return;
> +  if (varying_p ())
> +    set_undefined ();
> +  else if (m_kind == VR_RANGE)
> +    m_kind = VR_ANTI_RANGE;
> +  else if (m_kind == VR_ANTI_RANGE)
> +    m_kind = VR_RANGE;
> +  else
> +    gcc_unreachable ();
> +}
I don't think this is right for varying_p.  ISTM that if something is
VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
particularly bad given its optimistic treatment elsewhere.


So there's a handful of nits in there.  The only functional concern I
have is value_range_base::invert.

jeff
Richard Biener Oct. 2, 2019, 10:52 a.m. UTC | #2
On Tue, Oct 1, 2019 at 8:07 PM Jeff Law <law@redhat.com> wrote:
>
> On 10/1/19 11:11 AM, Aldy Hernandez wrote:
> > Hi folks.
> >
> > Here is my official submission of the range-ops part of the ranger to
> > mainline.
> >
> > I realize that I could have split this patch up into 2-3 separate ones,
> > but I don't want to run into the chicken-and-egg scenario of last time,
> > where I had 4 inter-connected patches that were hard to review
> > independently.
> It might have helped a bit, but it was pretty easy to find the mapping
> from bits in wide-int-range.cc into range-op.cc -- the comments were
> copied :-)
>
> >
> > A few notes that may help in reviewing.
> >
> > The range-ops proper is in range-op.*.
> >
> > The range.* files are separate files containing some simple auxiliary
> > functions that will have irange and value_range_base counterparts.  Our
> > development branch will have #define value_range_base irange, and some
> > auxiliary glue, but none of that will be in trunk.  As promised, trunk
> > is all value_range_base.
> >
> > * The changes to tree-vrp.* are:
> >
> > 1. New constructors to align the irange and value_range_base APIs.  We
> > discussed this a few months ago, and these were the agreed upon changes
> > to the API.
> Right.
>
> >
> > 2. Extracting the symbolic handling of PLUS/MINUS and POINTER_PLUS_EXPR
> > into separate functions (extract_range_from_plus_minus_expr and
> > extract_range_from_pointer_plus_expr).
> In retrospect we should have broken down that function in the old vrp
> code.  I suspect that function started out relatively small and just
> kept expanding over time into the horrid mess that became.
>
> THere were a number of places where you ended up pulling code from two
> existing locations into a single point in range-ops.  But again, it was
> just a matter of finding the multiple original source points and mapping
> then into their new location in range-ops.cc, using the copied comments
> as a guide.
>
> >
> > 3. New range_fold_unary_expr and range_fold_binary_expr functions. These
> > are the VRP entry point into range-ops.  They normalize symbolics and do
> > some minor pre-shuffling before calling range-ops to do the actual range
> > operation.
> Right.  I see these as primarily an adapter between existing code and
> the new range ops.
>
> >
> > (I have some minor shuffling of these two functions that I'd like to
> > post as separate clean-ups, but I didn't want to pollute this patchset
> > with them: Fedora taking forever to test and all.)
> Works for me.
>
>
> > 5. Removing the wide-int-range.* files.  Most of the code is now
> > in-lined into range-op.cc with the exception of
> > wide_int_range_set_zero_nonzero_bits which has been moved into tree-vrp.c.
> Right.  Largely follows from #2 above.
>
> >
> > I think that's all!
> >
> > For testing this patchset, I left the old extract_*ary_expr_code in, and
> > added comparison code that trapped if there were any differences between
> > what VRP was getting and what range-ops calculated.  I found no
> > regressions in either a full bootstrap/tests (all languages), or with a
> > full Fedora build.  As a bonus, we found quite a few cases where
> > range-ops was getting better results.
> So to provide a bit more info here.  We ran tests back in the spring
> which resulted in various bugfixes/improvements.  Aldy asked me to
> re-run with their more recent branch.  That run exposed one very clear
> ranger bug which Aldy fixed prior to submitting this patch as well as
> several cases where the results differed.  We verified each and every
> one of them was a case where Ranger was getting better results.
>
> > (Note: At the last minute, Jeff found one regression in the multi-day
> > Fedora build.  I will fix this as a follow-up.  BTW, it does not cause
> > any regressions in a bootstrap or GCC test-run, just a discrepancy on
> > one specific corner case between VRP and range-ops.)
> Right.  WHat happened was there was a package that failed to build due
> to the Fortran front-end getting tighter in its handling of argument
> checking.  Once that (and various other issues related to using a gcc-10
> snapshot) was worked around I rebuilt the failing packages.  That in
> turn exposed another case where ranger and vrp differed in their results
> (it's a MULT+overflow case IIRC)  ANyway, I'm leaving it to you to
> analyze :-)
>
>
> [ ... ]
>
> >
> > The attached patch is based off of trunk from a few weeks ago.  If
> > approved, I will merge and re-test again with latest trunk.  I won't
> > however, test all of Fedora :-P.
> Agreed, I don't think that's necessary.  FWIW, using a month-old branch
> for testing was amazingly helpful in other respects.  We found ~100
> packages that need updating for gcc-10 as well as a few bugs unrelated
> to Ranger.  I've actually got Sunday's snapshot spinning now and fully
> expect to be spinning Fedora builds with snapshots for the next several
> months.  So I don't expect a Fedora build just to test after ranger
> integration, but instead that it'll "just happen" on a subsequent snapshot.
>
> >
> > May I be so bold as to suggest that if there are minor suggestions that
> > arise from this review, that they be done as follow-ups?  I'd like to
> > get as much testing as possible in this stage1.
> There's a variety of small, obvious things that should be fixed.
> Comment typos and the like.  There's one question on inversion that may
> require some discussion.
>
> See inline comments...
>
>
> >
> > Thanks.
> > Aldy
> >
> >
> > range-ops.patch
> >
> > diff --git a/gcc/ChangeLog b/gcc/ChangeLog
> > index 65f9db966d0..9aa46c087b8 100644
> > --- a/gcc/ChangeLog
> > +++ b/gcc/ChangeLog
> > @@ -1,3 +1,68 @@
> > +2019-09-25  Aldy Hernandez  <aldyh@redhat.com>
> > +
> > +     * Makefile.in (OBJS): Add range.o and range-op.o.
> > +     Remove wide-int-range.o.
> > +     (GTFILES): Add range.h.
> > +     * function-tests.c (test_ranges): New.
> > +     (function_tests_c_tests): Call test_ranges.
> > +     * ipa-cp.c (ipa_vr_operation_and_type_effects): Call
> > +     range_fold_unary_expr instead of extract_range_from_unary_expr.
> > +     * ipa-prop.c (ipa_compute_jump_functions_for_edge): Same.
> > +     * range-op.cc: New file.
> > +     * range-op.h: New file.
> > +     * range.cc: New file.
> > +     * range.h: New file.
> > +     * selftest.h (range_tests): New prototype.
> > +     * ssa.h: Include range.h.
> > +     * tree-vrp.c (value_range_base::value_range_base): New
> > +     constructors.
> > +     (value_range_base::singleton_p): Do not call
> > +     ranges_from_anti_range until sure we will need to.
> > +     (value_range_base::type): Rename gcc_assert to
> > +     gcc_checking_assert.
> > +     (vrp_val_is_max): New argument.
> > +     (vrp_val_is_min): Same.
> > +     (wide_int_range_set_zero_nonzero_bits): Move from
> > +     wide-int-range.cc.
> > +     (extract_range_into_wide_ints): Remove.
> > +     (extract_range_from_multiplicative_op): Remove.
> > +     (extract_range_from_pointer_plus_expr): Abstract POINTER_PLUS code
> > +     from extract_range_from_binary_expr.
> > +     (extract_range_from_plus_minus_expr): Abstract PLUS/MINUS code
> > +     from extract_range_from_binary_expr.
> > +     (extract_range_from_binary_expr): Remove.
> > +     (normalize_for_range_ops): New.
> > +     (range_fold_binary_expr): New.
> > +     (range_fold_unary_expr): New.
> > +     (value_range_base::num_pairs): New.
> > +     (value_range_base::lower_bound): New.
> > +     (value_range_base::upper_bound): New.
> > +     (value_range_base::upper_bound): New.
> > +     (value_range_base::contains_p): New.
> > +     (value_range_base::invert): New.
> > +     (value_range_base::union_): New.
> > +     (value_range_base::intersect): New.
> > +     (range_compatible_p): New.
> > +     (value_range_base::operator==): New.
> > +     (determine_value_range_1): Call range_fold_*expr instead of
> > +     extract_range_from_*expr.
> > +     * tree-vrp.h (class value_range_base): Add new constructors.
> > +     Add methods for union_, intersect, operator==, contains_p,
> > +     num_pairs, lower_bound, upper_bound, invert.
> > +     (vrp_val_is_min): Add handle_pointers argument.
> > +     (vrp_val_is_max): Same.
> > +     (extract_range_from_unary_expr): Remove.
> > +     (extract_range_from_binary_expr): Remove.
> > +     (range_fold_unary_expr): New.
> > +     (range_fold_binary_expr): New.
> > +     * vr-values.c (vr_values::extract_range_from_binary_expr): Call
> > +     range_fold_binary_expr instead of extract_range_from_binary_expr.
> > +     (vr_values::extract_range_basic): Same.
> > +     (vr_values::extract_range_from_unary_expr): Call
> > +     range_fold_unary_expr instead of extract_range_from_unary_expr.
> > +     * wide-int-range.cc: Remove.
> > +     * wide-int-range.h: Remove.
> > +
> >  2019-08-27  Richard Biener  <rguenther@suse.de>
> >
> >       * config/i386/i386-features.h
> > diff --git a/gcc/Makefile.in b/gcc/Makefile.in
> > index 597dc01328b..d9549710d8e 100644
> > --- a/gcc/Makefile.in
> > +++ b/gcc/Makefile.in
> [ ... ]
> > @@ -2548,6 +2549,7 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
> >    $(srcdir)/stringpool.c $(srcdir)/tree.c $(srcdir)/varasm.c \
> >    $(srcdir)/gimple.h \
> >    $(srcdir)/gimple-ssa.h \
> > +  $(srcdir)/range.h $(srcdir)/range.cc \
> >    $(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
> >    $(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
> >    $(srcdir)/tree-dfa.c \
> I didn't see any GTY thingies in range.h/range.cc, so I don't think this
> is strictly necessary.  BUt I'm not going to stress if you leave it in.
>
> > diff --git a/gcc/range-op.cc b/gcc/range-op.cc
> > new file mode 100644
> > index 00000000000..a21520df355
> > --- /dev/null
> > +++ b/gcc/range-op.cc
> [ ... ]
> > +
> > +// Return a value_range_base instance that is a boolean FALSE.
> > +
> > +static inline value_range_base
> > +range_true_and_false (tree type)
> > +{
> > +  unsigned prec = TYPE_PRECISION (type);
> > +  return value_range_base (type, wi::zero (prec), wi::one (prec));
> > +}
> I think the function comment is wrong.  We're creating a range with both
> true and false values.  Effectively a BRS_FULL range I believe.
>
> > +// Return the summary information about boolean range LHS.  Return an
> > +// "interesting" range in R.  For EMPTY or FULL, return the equivilent
> > +// range for TYPE, for BRS_TRUE and BRS false, return the negation of
> > +// the bool range.
> s/equivilent/equivalent/
>
> > +
> > +static bool_range_state
> > +get_bool_state (value_range_base &r,
> > +             const value_range_base &lhs, tree val_type)
> > +{
> > +  // If there is no result, then this is unexectuable.
> s/unexectuable/unexecutable/
>
> [ ... ]
>
>
> > +bool
> > +operator_equal::op1_range (value_range_base &r, tree type,
> > +                        const value_range_base &lhs,
> > +                        const value_range_base &op2) const
> > +{
> > +  switch (get_bool_state (r, lhs, type))
> > +    {
> > +      case BRS_FALSE:
> > +        // If the result is false, the only time we know anything is
> > +     // if OP2 is a constant.
> > +     if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
> > +       r = range_invert (op2);
> > +     else
> > +       r.set_varying (type);
> > +     break;
> Looks like you've got spaces vs tabs wrong above.  It repeats in other
> BRS_FALSE cases.  I suspect some global search/replace is the right
> thing to do here.
>
> [ ... ]
> +
> > +value_range_base
> > +operator_lt::fold_range (tree type,
> > +                      const value_range_base &op1,
> > +                      const value_range_base &op2) const
> > +{
> > +  value_range_base r;
> > +  if (empty_range_check (r, op1, op2))
> > +    return r;
> > +
> > +  signop sign = TYPE_SIGN (op1.type ());
> > +  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
> > +
> > +  if (wi::lt_p (op1.upper_bound (), op2.lower_bound (), sign))
> > +    r = range_true (type);
> > +  else
> > +    if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
> > +      r = range_false (type);
> > +    else
> > +      r = range_true_and_false (type);
> > +  return r;
> > +}
> So formatting goof here.  It should be
>
>   if (...)
>     blah
>   else if (...)
>     blah2
>   else
>     blah3
>
> This repeats in other operator_XX member functions.
>
> [ ... ]
>
> > +
>
> > +value_range_base
> > +operator_lshift::wi_fold (tree type,
> > +                       const wide_int &lh_lb, const wide_int &lh_ub,
> > +                       const wide_int &rh_lb, const wide_int &rh_ub) const
> > +{
> [ ... ]
>
> > +     {
> > +       // For non-negative numbers, we're shifting out only zeroes,
> > +       // the value increases monotonically.  For negative numbers,
> > +       // we're shifting out only ones, the value decreases
> > +       // monotomically.
> s/monotomically/monotonically/
>
>
> > +
> > +bool
> > +operator_cast::op1_range (value_range_base &r, tree type,
> > +                       const value_range_base &lhs,
> > +                       const value_range_base &op2) const
> > +{
> [ ... ]
>
>
> > +  // If the LHS precision is greater than the rhs precision, the LHS
> > +  // range is resticted to the range of the RHS by this
> > +  // assignment.
> s/resticted/restricted/
>
>
>
> > diff --git a/gcc/range.cc b/gcc/range.cc
> > new file mode 100644
> > index 00000000000..5e4d90436f2
> > --- /dev/null
> > +++ b/gcc/range.cc
> [ ... ]
> > +
> > +value_range_base
> > +range_intersect (const value_range_base &r1, const value_range_base &r2)
> > +{
> > +  value_range_base tmp (r1);
> > +  tmp.intersect (r2);
> > +  return tmp;
> > +}
> So low level question here.  This code looks particularly well suited
> for the NRV optimization.  Can you check if NVR (named-value-return) is
> triggering here, and if not why.  ISTM these are likely used heavily and
> NVR would be a win.
>
>
> > diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
> > index 5ec4d17f23b..269a3cb090e 100644
> > --- a/gcc/tree-vrp.c
> > +++ b/gcc/tree-vrp.c
> [ ... ]
>
> > @@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
>
> > +
> > +/* Return the inverse of a range.  */
> > +
> > +void
> > +value_range_base::invert ()
> > +{
> > +  if (undefined_p ())
> > +    return;
> > +  if (varying_p ())
> > +    set_undefined ();
> > +  else if (m_kind == VR_RANGE)
> > +    m_kind = VR_ANTI_RANGE;
> > +  else if (m_kind == VR_ANTI_RANGE)
> > +    m_kind = VR_RANGE;
> > +  else
> > +    gcc_unreachable ();
> > +}
> I don't think this is right for varying_p.  ISTM that if something is
> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
> particularly bad given its optimistic treatment elsewhere.

VR_VARYING isn't a range, it's a lattice state (likewise for VR_UNDEFINED).
It doesn't make sense to invert a lattice state.  How you treat
VR_VARYING/VR_UNDEFINED
depends on context and so depends what 'invert' would do.  I suggest to assert
that varying/undefined is never inverted.

Richard.

>
> So there's a handful of nits in there.  The only functional concern I
> have is value_range_base::invert.
>
> jeff
Andrew MacLeod Oct. 2, 2019, 12:19 p.m. UTC | #3
On 10/2/19 6:52 AM, Richard Biener wrote:
>
>>> +
>>> +/* Return the inverse of a range.  */
>>> +
>>> +void
>>> +value_range_base::invert ()
>>> +{
>>> +  if (undefined_p ())
>>> +    return;
>>> +  if (varying_p ())
>>> +    set_undefined ();
>>> +  else if (m_kind == VR_RANGE)
>>> +    m_kind = VR_ANTI_RANGE;
>>> +  else if (m_kind == VR_ANTI_RANGE)
>>> +    m_kind = VR_RANGE;
>>> +  else
>>> +    gcc_unreachable ();
>>> +}
>> I don't think this is right for varying_p.  ISTM that if something is
>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>> particularly bad given its optimistic treatment elsewhere.
> VR_VARYING isn't a range, it's a lattice state (likewise for VR_UNDEFINED).
> It doesn't make sense to invert a lattice state.  How you treat
> VR_VARYING/VR_UNDEFINED
> depends on context and so depends what 'invert' would do.  I suggest to assert
> that varying/undefined is never inverted.
>
>
True for a lattice state, not true for a range in the new context of the 
ranger where
  a) varying == range for type and
  b) undefined == unreachable

This is a carry over from really old code where we only got part of it 
fixed right a while ago.
invert ( varying ) == varying    because we still know nothing about it, 
its still range for type.
invert (undefined) == undefined     because undefined is unreachable 
which is viral.

So indeed, varying should return varying... So If its undefined or 
varying, we should just return from the invert call. ie, not do anything 
to the range.    in the case of a lattice state, doing nothing to it 
should not be harmful.  I also expect it will never get called for a 
pure lattice state since its only invoked from range-ops, at which point 
we only are dealing with the range it represents.


I took a look and this bug hasn't been triggered because its only used 
in a couple of places.
1)  op1_range for EQ_EXPR and NE_EXPR when we have a true OR false 
constant condition in both the LHS and OP2 position, it sometimes 
inverts it via this call..  so its only when there is a specific boolean 
range of [TRUE,TRUE]  or [FALSE,FALSE].      when any range is undefined 
or varying in those routines, theres a different path for the result

2) fold() for logical NOT, which also has a preliminary check for 
varying or undefined and does nothing in those cases ie, returns the 
existing value.   IN fact, you can probably remove the special casing in 
logical_not with this fix, which is indicative that it is correct.



Andrew
Aldy Hernandez Oct. 2, 2019, 12:23 p.m. UTC | #4
On 10/2/19 8:19 AM, Andrew MacLeod wrote:
> On 10/2/19 6:52 AM, Richard Biener wrote:
>>
>>>> +
>>>> +/* Return the inverse of a range.  */
>>>> +
>>>> +void
>>>> +value_range_base::invert ()
>>>> +{
>>>> +  if (undefined_p ())
>>>> +    return;
>>>> +  if (varying_p ())
>>>> +    set_undefined ();
>>>> +  else if (m_kind == VR_RANGE)
>>>> +    m_kind = VR_ANTI_RANGE;
>>>> +  else if (m_kind == VR_ANTI_RANGE)
>>>> +    m_kind = VR_RANGE;
>>>> +  else
>>>> +    gcc_unreachable ();
>>>> +}
>>> I don't think this is right for varying_p.  ISTM that if something is
>>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>>> particularly bad given its optimistic treatment elsewhere.
>> VR_VARYING isn't a range, it's a lattice state (likewise for 
>> VR_UNDEFINED).
>> It doesn't make sense to invert a lattice state.  How you treat
>> VR_VARYING/VR_UNDEFINED
>> depends on context and so depends what 'invert' would do.  I suggest 
>> to assert
>> that varying/undefined is never inverted.
>>
>>
> True for a lattice state, not true for a range in the new context of the 
> ranger where
>   a) varying == range for type and
>   b) undefined == unreachable
> 
> This is a carry over from really old code where we only got part of it 
> fixed right a while ago.
> invert ( varying ) == varying    because we still know nothing about it, 
> its still range for type.
> invert (undefined) == undefined     because undefined is unreachable 
> which is viral.
> 
> So indeed, varying should return varying... So If its undefined or 
> varying, we should just return from the invert call. ie, not do anything 
> to the range.    in the case of a lattice state, doing nothing to it 
> should not be harmful.  I also expect it will never get called for a 
> pure lattice state since its only invoked from range-ops, at which point 
> we only are dealing with the range it represents.
> 
> 
> I took a look and this bug hasn't been triggered because its only used 
> in a couple of places.
> 1)  op1_range for EQ_EXPR and NE_EXPR when we have a true OR false 
> constant condition in both the LHS and OP2 position, it sometimes 
> inverts it via this call..  so its only when there is a specific boolean 
> range of [TRUE,TRUE]  or [FALSE,FALSE].      when any range is undefined 
> or varying in those routines, theres a different path for the result
> 
> 2) fold() for logical NOT, which also has a preliminary check for 
> varying or undefined and does nothing in those cases ie, returns the 
> existing value.   IN fact, you can probably remove the special casing in 
> logical_not with this fix, which is indicative that it is correct.

Good idea.  I've removed the special casing and am testing a new patch.

Thanks.
Aldy
Aldy Hernandez Oct. 2, 2019, 12:56 p.m. UTC | #5
On 10/1/19 2:07 PM, Jeff Law wrote:
> On 10/1/19 11:11 AM, Aldy Hernandez wrote:
>> Hi folks.
>>
>> Here is my official submission of the range-ops part of the ranger to
>> mainline.
>>
>> I realize that I could have split this patch up into 2-3 separate ones,
>> but I don't want to run into the chicken-and-egg scenario of last time,
>> where I had 4 inter-connected patches that were hard to review
>> independently.
> It might have helped a bit, but it was pretty easy to find the mapping
> from bits in wide-int-range.cc into range-op.cc -- the comments were
> copied :-)

On purpose :).

> 
>>
>> A few notes that may help in reviewing.
>>
>> The range-ops proper is in range-op.*.
>>
>> The range.* files are separate files containing some simple auxiliary
>> functions that will have irange and value_range_base counterparts.  Our
>> development branch will have #define value_range_base irange, and some
>> auxiliary glue, but none of that will be in trunk.  As promised, trunk
>> is all value_range_base.
>>
>> * The changes to tree-vrp.* are:
>>
>> 1. New constructors to align the irange and value_range_base APIs.  We
>> discussed this a few months ago, and these were the agreed upon changes
>> to the API.
> Right.
> 
>>
>> 2. Extracting the symbolic handling of PLUS/MINUS and POINTER_PLUS_EXPR
>> into separate functions (extract_range_from_plus_minus_expr and
>> extract_range_from_pointer_plus_expr).
> In retrospect we should have broken down that function in the old vrp
> code.  I suspect that function started out relatively small and just
> kept expanding over time into the horrid mess that became.
> 
> THere were a number of places where you ended up pulling code from two
> existing locations into a single point in range-ops.  But again, it was
> just a matter of finding the multiple original source points and mapping
> then into their new location in range-ops.cc, using the copied comments
> as a guide.

Yeah, we've been trying to merge 10 different places of doing something 
into one localized spot.

> 
>>
>> 3. New range_fold_unary_expr and range_fold_binary_expr functions. These
>> are the VRP entry point into range-ops.  They normalize symbolics and do
>> some minor pre-shuffling before calling range-ops to do the actual range
>> operation.
> Right.  I see these as primarily an adapter between existing code and
> the new range ops.
> 
>>
>> (I have some minor shuffling of these two functions that I'd like to
>> post as separate clean-ups, but I didn't want to pollute this patchset
>> with them: Fedora taking forever to test and all.)
> Works for me.
> 
> 
>> 5. Removing the wide-int-range.* files.  Most of the code is now
>> in-lined into range-op.cc with the exception of
>> wide_int_range_set_zero_nonzero_bits which has been moved into tree-vrp.c.
> Right.  Largely follows from #2 above.
> 
>>
>> I think that's all!
>>
>> For testing this patchset, I left the old extract_*ary_expr_code in, and
>> added comparison code that trapped if there were any differences between
>> what VRP was getting and what range-ops calculated.  I found no
>> regressions in either a full bootstrap/tests (all languages), or with a
>> full Fedora build.  As a bonus, we found quite a few cases where
>> range-ops was getting better results.
> So to provide a bit more info here.  We ran tests back in the spring
> which resulted in various bugfixes/improvements.  Aldy asked me to
> re-run with their more recent branch.  That run exposed one very clear
> ranger bug which Aldy fixed prior to submitting this patch as well as
> several cases where the results differed.  We verified each and every
> one of them was a case where Ranger was getting better results.
> 
>> (Note: At the last minute, Jeff found one regression in the multi-day
>> Fedora build.  I will fix this as a follow-up.  BTW, it does not cause
>> any regressions in a bootstrap or GCC test-run, just a discrepancy on
>> one specific corner case between VRP and range-ops.)
> Right.  WHat happened was there was a package that failed to build due
> to the Fortran front-end getting tighter in its handling of argument
> checking.  Once that (and various other issues related to using a gcc-10
> snapshot) was worked around I rebuilt the failing packages.  That in
> turn exposed another case where ranger and vrp differed in their results
> (it's a MULT+overflow case IIRC)  ANyway, I'm leaving it to you to
> analyze :-)

Yeah, looks like a simple bug.  I'll get to it, as soon as we're done 
here :).

> 
> 
> [ ... ]
> 
>>
>> The attached patch is based off of trunk from a few weeks ago.  If
>> approved, I will merge and re-test again with latest trunk.  I won't
>> however, test all of Fedora :-P.
> Agreed, I don't think that's necessary.  FWIW, using a month-old branch
> for testing was amazingly helpful in other respects.  We found ~100
> packages that need updating for gcc-10 as well as a few bugs unrelated
> to Ranger.  I've actually got Sunday's snapshot spinning now and fully
> expect to be spinning Fedora builds with snapshots for the next several
> months.  So I don't expect a Fedora build just to test after ranger
> integration, but instead that it'll "just happen" on a subsequent snapshot.

The attached patch has been re-tested with current trunk.

> 
>>
>> May I be so bold as to suggest that if there are minor suggestions that
>> arise from this review, that they be done as follow-ups?  I'd like to
>> get as much testing as possible in this stage1.
> There's a variety of small, obvious things that should be fixed.
> Comment typos and the like.  There's one question on inversion that may
> require some discussion.
> 
> See inline comments...
> 
> 
>>
>> Thanks.
>> Aldy
>>
>>
>> range-ops.patch
>>
>> diff --git a/gcc/ChangeLog b/gcc/ChangeLog
>> index 65f9db966d0..9aa46c087b8 100644
>> --- a/gcc/ChangeLog
>> +++ b/gcc/ChangeLog
>> @@ -1,3 +1,68 @@
>> +2019-09-25  Aldy Hernandez  <aldyh@redhat.com>
>> +
>> +	* Makefile.in (OBJS): Add range.o and range-op.o.
>> +	Remove wide-int-range.o.
>> +	(GTFILES): Add range.h.
>> +	* function-tests.c (test_ranges): New.
>> +	(function_tests_c_tests): Call test_ranges.
>> +	* ipa-cp.c (ipa_vr_operation_and_type_effects): Call
>> +	range_fold_unary_expr instead of extract_range_from_unary_expr.
>> +	* ipa-prop.c (ipa_compute_jump_functions_for_edge): Same.
>> +	* range-op.cc: New file.
>> +	* range-op.h: New file.
>> +	* range.cc: New file.
>> +	* range.h: New file.
>> +	* selftest.h (range_tests): New prototype.
>> +	* ssa.h: Include range.h.
>> +	* tree-vrp.c (value_range_base::value_range_base): New
>> +	constructors.
>> +	(value_range_base::singleton_p): Do not call
>> +	ranges_from_anti_range until sure we will need to.
>> +	(value_range_base::type): Rename gcc_assert to
>> +	gcc_checking_assert.
>> +	(vrp_val_is_max): New argument.
>> +	(vrp_val_is_min): Same.
>> +	(wide_int_range_set_zero_nonzero_bits): Move from
>> +	wide-int-range.cc.
>> +	(extract_range_into_wide_ints): Remove.
>> +	(extract_range_from_multiplicative_op): Remove.
>> +	(extract_range_from_pointer_plus_expr): Abstract POINTER_PLUS code
>> +	from extract_range_from_binary_expr.
>> +	(extract_range_from_plus_minus_expr): Abstract PLUS/MINUS code
>> +	from extract_range_from_binary_expr.
>> +	(extract_range_from_binary_expr): Remove.
>> +	(normalize_for_range_ops): New.
>> +	(range_fold_binary_expr): New.
>> +	(range_fold_unary_expr): New.
>> +	(value_range_base::num_pairs): New.
>> +	(value_range_base::lower_bound): New.
>> +	(value_range_base::upper_bound): New.
>> +	(value_range_base::upper_bound): New.
>> +	(value_range_base::contains_p): New.
>> +	(value_range_base::invert): New.
>> +	(value_range_base::union_): New.
>> +	(value_range_base::intersect): New.
>> +	(range_compatible_p): New.
>> +	(value_range_base::operator==): New.
>> +	(determine_value_range_1): Call range_fold_*expr instead of
>> +	extract_range_from_*expr.
>> +	* tree-vrp.h (class value_range_base): Add new constructors.
>> +	Add methods for union_, intersect, operator==, contains_p,
>> +	num_pairs, lower_bound, upper_bound, invert.
>> +	(vrp_val_is_min): Add handle_pointers argument.
>> +	(vrp_val_is_max): Same.
>> +	(extract_range_from_unary_expr): Remove.
>> +	(extract_range_from_binary_expr): Remove.
>> +	(range_fold_unary_expr): New.
>> +	(range_fold_binary_expr): New.
>> +	* vr-values.c (vr_values::extract_range_from_binary_expr): Call
>> +	range_fold_binary_expr instead of extract_range_from_binary_expr.
>> +	(vr_values::extract_range_basic): Same.
>> +	(vr_values::extract_range_from_unary_expr): Call
>> +	range_fold_unary_expr instead of extract_range_from_unary_expr.
>> +	* wide-int-range.cc: Remove.
>> +	* wide-int-range.h: Remove.
>> +
>>   2019-08-27  Richard Biener  <rguenther@suse.de>
>>   
>>   	* config/i386/i386-features.h
>> diff --git a/gcc/Makefile.in b/gcc/Makefile.in
>> index 597dc01328b..d9549710d8e 100644
>> --- a/gcc/Makefile.in
>> +++ b/gcc/Makefile.in
> [ ... ]
>> @@ -2548,6 +2549,7 @@ GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
>>     $(srcdir)/stringpool.c $(srcdir)/tree.c $(srcdir)/varasm.c \
>>     $(srcdir)/gimple.h \
>>     $(srcdir)/gimple-ssa.h \
>> +  $(srcdir)/range.h $(srcdir)/range.cc \
>>     $(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
>>     $(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
>>     $(srcdir)/tree-dfa.c \
> I didn't see any GTY thingies in range.h/range.cc, so I don't think this
> is strictly necessary.  BUt I'm not going to stress if you leave it in.

Whoops, that was leftover from the branch, which has it's own irange 
implementation with GT markers.  Removed.

> 
>> diff --git a/gcc/range-op.cc b/gcc/range-op.cc
>> new file mode 100644
>> index 00000000000..a21520df355
>> --- /dev/null
>> +++ b/gcc/range-op.cc
> [ ... ]
>> +
>> +// Return a value_range_base instance that is a boolean FALSE.
>> +
>> +static inline value_range_base
>> +range_true_and_false (tree type)
>> +{
>> +  unsigned prec = TYPE_PRECISION (type);
>> +  return value_range_base (type, wi::zero (prec), wi::one (prec));
>> +}
> I think the function comment is wrong.  We're creating a range with both
> true and false values.  Effectively a BRS_FULL range I believe.
> 
>> +// Return the summary information about boolean range LHS.  Return an
>> +// "interesting" range in R.  For EMPTY or FULL, return the equivilent
>> +// range for TYPE, for BRS_TRUE and BRS false, return the negation of
>> +// the bool range.
> s/equivilent/equivalent/
> 
>> +
>> +static bool_range_state
>> +get_bool_state (value_range_base &r,
>> +		const value_range_base &lhs, tree val_type)
>> +{
>> +  // If there is no result, then this is unexectuable.
> s/unexectuable/unexecutable/
> 
> [ ... ]
> 
> 
>> +bool
>> +operator_equal::op1_range (value_range_base &r, tree type,
>> +			   const value_range_base &lhs,
>> +			   const value_range_base &op2) const
>> +{
>> +  switch (get_bool_state (r, lhs, type))
>> +    {
>> +      case BRS_FALSE:
>> +        // If the result is false, the only time we know anything is
>> +	// if OP2 is a constant.
>> +	if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
>> +	  r = range_invert (op2);
>> +	else
>> +	  r.set_varying (type);
>> +	break;
> Looks like you've got spaces vs tabs wrong above.  It repeats in other
> BRS_FALSE cases.  I suspect some global search/replace is the right
> thing to do here.
> 
> [ ... ]
> +
>> +value_range_base
>> +operator_lt::fold_range (tree type,
>> +			 const value_range_base &op1,
>> +			 const value_range_base &op2) const
>> +{
>> +  value_range_base r;
>> +  if (empty_range_check (r, op1, op2))
>> +    return r;
>> +
>> +  signop sign = TYPE_SIGN (op1.type ());
>> +  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
>> +
>> +  if (wi::lt_p (op1.upper_bound (), op2.lower_bound (), sign))
>> +    r = range_true (type);
>> +  else
>> +    if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
>> +      r = range_false (type);
>> +    else
>> +      r = range_true_and_false (type);
>> +  return r;
>> +}
> So formatting goof here.  It should be
> 
>    if (...)
>      blah
>    else if (...)
>      blah2
>    else
>      blah3
> 
> This repeats in other operator_XX member functions.
> 
> [ ... ]
> 
>> +
> 
>> +value_range_base
>> +operator_lshift::wi_fold (tree type,
>> +			  const wide_int &lh_lb, const wide_int &lh_ub,
>> +			  const wide_int &rh_lb, const wide_int &rh_ub) const
>> +{
> [ ... ]
> 
>> +	{
>> +	  // For non-negative numbers, we're shifting out only zeroes,
>> +	  // the value increases monotonically.  For negative numbers,
>> +	  // we're shifting out only ones, the value decreases
>> +	  // monotomically.
> s/monotomically/monotonically/
> 
> 
>> +
>> +bool
>> +operator_cast::op1_range (value_range_base &r, tree type,
>> +			  const value_range_base &lhs,
>> +			  const value_range_base &op2) const
>> +{
> [ ... ]
> 
> 
>> +  // If the LHS precision is greater than the rhs precision, the LHS
>> +  // range is resticted to the range of the RHS by this
>> +  // assignment.
> s/resticted/restricted/

The typos and comments were all Andrew :).  I've fixed them.

> 
> 
> 
>> diff --git a/gcc/range.cc b/gcc/range.cc
>> new file mode 100644
>> index 00000000000..5e4d90436f2
>> --- /dev/null
>> +++ b/gcc/range.cc
> [ ... ]
>> +
>> +value_range_base
>> +range_intersect (const value_range_base &r1, const value_range_base &r2)
>> +{
>> +  value_range_base tmp (r1);
>> +  tmp.intersect (r2);
>> +  return tmp;
>> +}
> So low level question here.  This code looks particularly well suited
> for the NRV optimization.  Can you check if NVR (named-value-return) is
> triggering here, and if not why.  ISTM these are likely used heavily and
> NVR would be a win.

AFAICT, even before the NRV pass, we have already lined up 
value_range_base::intersect to put its return value into RETVAL:

   const struct value_range_base & r1_2(D) = r1;
   const struct value_range_base & r2_4(D) = r2;
   <bb 2> [local count: 1073741824]:
   <retval> = *r1_2(D);
   value_range_base::intersect (&<retval>, r2_4(D));
   return <retval>;

So...all good?

> 
> 
>> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
>> index 5ec4d17f23b..269a3cb090e 100644
>> --- a/gcc/tree-vrp.c
>> +++ b/gcc/tree-vrp.c
> [ ... ]
> 
>> @@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
> 
>> +
>> +/* Return the inverse of a range.  */
>> +
>> +void
>> +value_range_base::invert ()
>> +{
>> +  if (undefined_p ())
>> +    return;
>> +  if (varying_p ())
>> +    set_undefined ();
>> +  else if (m_kind == VR_RANGE)
>> +    m_kind = VR_ANTI_RANGE;
>> +  else if (m_kind == VR_ANTI_RANGE)
>> +    m_kind = VR_RANGE;
>> +  else
>> +    gcc_unreachable ();
>> +}
> I don't think this is right for varying_p.  ISTM that if something is
> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
> particularly bad given its optimistic treatment elsewhere.

Done.  Andrew agreed, as per his reply.

Retested for all languages on x86-64 Linux.

OK?

Aldy
Richard Sandiford Oct. 2, 2019, 1:36 p.m. UTC | #6
Andrew MacLeod <amacleod@redhat.com> writes:
> On 10/2/19 6:52 AM, Richard Biener wrote:
>>
>>>> +
>>>> +/* Return the inverse of a range.  */
>>>> +
>>>> +void
>>>> +value_range_base::invert ()
>>>> +{
>>>> +  if (undefined_p ())
>>>> +    return;
>>>> +  if (varying_p ())
>>>> +    set_undefined ();
>>>> +  else if (m_kind == VR_RANGE)
>>>> +    m_kind = VR_ANTI_RANGE;
>>>> +  else if (m_kind == VR_ANTI_RANGE)
>>>> +    m_kind = VR_RANGE;
>>>> +  else
>>>> +    gcc_unreachable ();
>>>> +}
>>> I don't think this is right for varying_p.  ISTM that if something is
>>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>>> particularly bad given its optimistic treatment elsewhere.
>> VR_VARYING isn't a range, it's a lattice state (likewise for VR_UNDEFINED).
>> It doesn't make sense to invert a lattice state.  How you treat
>> VR_VARYING/VR_UNDEFINED
>> depends on context and so depends what 'invert' would do.  I suggest to assert
>> that varying/undefined is never inverted.
>>
>>
> True for a lattice state, not true for a range in the new context of the 
> ranger where
>   a) varying == range for type and
>   b) undefined == unreachable
>
> This is a carry over from really old code where we only got part of it 
> fixed right a while ago.
> invert ( varying ) == varying    because we still know nothing about it, 
> its still range for type.
> invert (undefined) == undefined     because undefined is unreachable 
> which is viral.
>
> So indeed, varying should return varying... So If its undefined or 
> varying, we should just return from the invert call. ie, not do anything 
> to the range.    in the case of a lattice state, doing nothing to it 
> should not be harmful.  I also expect it will never get called for a 
> pure lattice state since its only invoked from range-ops, at which point 
> we only are dealing with the range it represents.
>
>
> I took a look and this bug hasn't been triggered because its only used 
> in a couple of places.
> 1)  op1_range for EQ_EXPR and NE_EXPR when we have a true OR false 
> constant condition in both the LHS and OP2 position, it sometimes 
> inverts it via this call..  so its only when there is a specific boolean 
> range of [TRUE,TRUE]  or [FALSE,FALSE].      when any range is undefined 
> or varying in those routines, theres a different path for the result
>
> 2) fold() for logical NOT, which also has a preliminary check for 
> varying or undefined and does nothing in those cases ie, returns the 
> existing value.   IN fact, you can probably remove the special casing in 
> logical_not with this fix, which is indicative that it is correct.

IMO that makes invert a bit of a dangerous operation.  E.g. for
ranges of unsigned bytes:

  invert ({}) = invert(UNDEFINED) = UNDEFINED = {}
  invert ([255, 255]) = ~[255, 255] = [0, 254]
  ...
  invert ([3, 255]) = ~[3, 255] = [0, 2]
  invert ([2, 255]) = ~[2, 255] = [0, 1]
  invert ([1, 255]) = ~[1, 255] = [0, 0]
  invert ([0, 255]) = invert(VARYING) = VARYING = [0, 255]

where there's no continuity at the extremes.  Maybe it would be
better to open-code it in those two places instead?

Richard
Andrew MacLeod Oct. 2, 2019, 2:31 p.m. UTC | #7
On 10/2/19 9:36 AM, Richard Sandiford wrote:
> Andrew MacLeod <amacleod@redhat.com> writes:
>> On 10/2/19 6:52 AM, Richard Biener wrote:
>>>>> +
>>>>> +/* Return the inverse of a range.  */
>>>>> +
>>>>> +void
>>>>> +value_range_base::invert ()
>>>>> +{
>>>>> +  if (undefined_p ())
>>>>> +    return;
>>>>> +  if (varying_p ())
>>>>> +    set_undefined ();
>>>>> +  else if (m_kind == VR_RANGE)
>>>>> +    m_kind = VR_ANTI_RANGE;
>>>>> +  else if (m_kind == VR_ANTI_RANGE)
>>>>> +    m_kind = VR_RANGE;
>>>>> +  else
>>>>> +    gcc_unreachable ();
>>>>> +}
>>>> I don't think this is right for varying_p.  ISTM that if something is
>>>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>>>> particularly bad given its optimistic treatment elsewhere.
>>> VR_VARYING isn't a range, it's a lattice state (likewise for VR_UNDEFINED).
>>> It doesn't make sense to invert a lattice state.  How you treat
>>> VR_VARYING/VR_UNDEFINED
>>> depends on context and so depends what 'invert' would do.  I suggest to assert
>>> that varying/undefined is never inverted.
>>>
>>>
>> True for a lattice state, not true for a range in the new context of the
>> ranger where
>>    a) varying == range for type and
>>    b) undefined == unreachable
>>
>> This is a carry over from really old code where we only got part of it
>> fixed right a while ago.
>> invert ( varying ) == varying    because we still know nothing about it,
>> its still range for type.
>> invert (undefined) == undefined     because undefined is unreachable
>> which is viral.
>>
>> So indeed, varying should return varying... So If its undefined or
>> varying, we should just return from the invert call. ie, not do anything
>> to the range.    in the case of a lattice state, doing nothing to it
>> should not be harmful.  I also expect it will never get called for a
>> pure lattice state since its only invoked from range-ops, at which point
>> we only are dealing with the range it represents.
>>
>>
>> I took a look and this bug hasn't been triggered because its only used
>> in a couple of places.
>> 1)  op1_range for EQ_EXPR and NE_EXPR when we have a true OR false
>> constant condition in both the LHS and OP2 position, it sometimes
>> inverts it via this call..  so its only when there is a specific boolean
>> range of [TRUE,TRUE]  or [FALSE,FALSE].      when any range is undefined
>> or varying in those routines, theres a different path for the result
>>
>> 2) fold() for logical NOT, which also has a preliminary check for
>> varying or undefined and does nothing in those cases ie, returns the
>> existing value.   IN fact, you can probably remove the special casing in
>> logical_not with this fix, which is indicative that it is correct.
> IMO that makes invert a bit of a dangerous operation.  E.g. for
> ranges of unsigned bytes:
>
>    invert ({}) = invert(UNDEFINED) = UNDEFINED = {}
>    invert ([255, 255]) = ~[255, 255] = [0, 254]
>    ...
>    invert ([3, 255]) = ~[3, 255] = [0, 2]
>    invert ([2, 255]) = ~[2, 255] = [0, 1]
>    invert ([1, 255]) = ~[1, 255] = [0, 0]
>    invert ([0, 255]) = invert(VARYING) = VARYING = [0, 255]
>
> where there's no continuity at the extremes.  Maybe it would be
> better to open-code it in those two places instead?
>
> Richard

Im not sure that the continuity is important since ranges are a little 
bit odd at the edges anyway :-)

however, I will take the point that perhaps invert () has potentially 
different meanings at the edges depending no how you look at it.  
Ultimately that is why we ended up getting it wrong in the first place...

So, I audited all the current uses of invert() (it is not commonly used 
either) , and we already special case the varying or undefined behaviour 
when its appropriate before invert is invoked.   So I think we can 
reduce everyones concern about these edge cases by simply doing as you 
guys suggest and gcc_assert()  that the ranges are not undefined or 
varying for invert().


Andrew
Aldy Hernandez Oct. 2, 2019, 2:48 p.m. UTC | #8
On 10/2/19 10:31 AM, Andrew MacLeod wrote:
> On 10/2/19 9:36 AM, Richard Sandiford wrote:
>> Andrew MacLeod <amacleod@redhat.com> writes:
>>> On 10/2/19 6:52 AM, Richard Biener wrote:
>>>>>> +
>>>>>> +/* Return the inverse of a range.  */
>>>>>> +
>>>>>> +void
>>>>>> +value_range_base::invert ()
>>>>>> +{
>>>>>> +  if (undefined_p ())
>>>>>> +    return;
>>>>>> +  if (varying_p ())
>>>>>> +    set_undefined ();
>>>>>> +  else if (m_kind == VR_RANGE)
>>>>>> +    m_kind = VR_ANTI_RANGE;
>>>>>> +  else if (m_kind == VR_ANTI_RANGE)
>>>>>> +    m_kind = VR_RANGE;
>>>>>> +  else
>>>>>> +    gcc_unreachable ();
>>>>>> +}
>>>>> I don't think this is right for varying_p.  ISTM that if something is
>>>>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>>>>> particularly bad given its optimistic treatment elsewhere.
>>>> VR_VARYING isn't a range, it's a lattice state (likewise for 
>>>> VR_UNDEFINED).
>>>> It doesn't make sense to invert a lattice state.  How you treat
>>>> VR_VARYING/VR_UNDEFINED
>>>> depends on context and so depends what 'invert' would do.  I suggest 
>>>> to assert
>>>> that varying/undefined is never inverted.
>>>>
>>>>
>>> True for a lattice state, not true for a range in the new context of the
>>> ranger where
>>>    a) varying == range for type and
>>>    b) undefined == unreachable
>>>
>>> This is a carry over from really old code where we only got part of it
>>> fixed right a while ago.
>>> invert ( varying ) == varying    because we still know nothing about it,
>>> its still range for type.
>>> invert (undefined) == undefined     because undefined is unreachable
>>> which is viral.
>>>
>>> So indeed, varying should return varying... So If its undefined or
>>> varying, we should just return from the invert call. ie, not do anything
>>> to the range.    in the case of a lattice state, doing nothing to it
>>> should not be harmful.  I also expect it will never get called for a
>>> pure lattice state since its only invoked from range-ops, at which point
>>> we only are dealing with the range it represents.
>>>
>>>
>>> I took a look and this bug hasn't been triggered because its only used
>>> in a couple of places.
>>> 1)  op1_range for EQ_EXPR and NE_EXPR when we have a true OR false
>>> constant condition in both the LHS and OP2 position, it sometimes
>>> inverts it via this call..  so its only when there is a specific boolean
>>> range of [TRUE,TRUE]  or [FALSE,FALSE].      when any range is undefined
>>> or varying in those routines, theres a different path for the result
>>>
>>> 2) fold() for logical NOT, which also has a preliminary check for
>>> varying or undefined and does nothing in those cases ie, returns the
>>> existing value.   IN fact, you can probably remove the special casing in
>>> logical_not with this fix, which is indicative that it is correct.
>> IMO that makes invert a bit of a dangerous operation.  E.g. for
>> ranges of unsigned bytes:
>>
>>    invert ({}) = invert(UNDEFINED) = UNDEFINED = {}
>>    invert ([255, 255]) = ~[255, 255] = [0, 254]
>>    ...
>>    invert ([3, 255]) = ~[3, 255] = [0, 2]
>>    invert ([2, 255]) = ~[2, 255] = [0, 1]
>>    invert ([1, 255]) = ~[1, 255] = [0, 0]
>>    invert ([0, 255]) = invert(VARYING) = VARYING = [0, 255]
>>
>> where there's no continuity at the extremes.  Maybe it would be
>> better to open-code it in those two places instead?
>>
>> Richard
> 
> Im not sure that the continuity is important since ranges are a little 
> bit odd at the edges anyway :-)
> 
> however, I will take the point that perhaps invert () has potentially 
> different meanings at the edges depending no how you look at it. 
> Ultimately that is why we ended up getting it wrong in the first place...
> 
> So, I audited all the current uses of invert() (it is not commonly used 
> either) , and we already special case the varying or undefined behaviour 
> when its appropriate before invert is invoked.   So I think we can 
> reduce everyones concern about these edge cases by simply doing as you 
> guys suggest and gcc_assert()  that the ranges are not undefined or 
> varying for invert().

Patch adjusted (attached).

Bootstrap succeeds without triggering the gcc_unreachable.

Tests are running.

Aldy
Jeff Law Oct. 2, 2019, 6:18 p.m. UTC | #9
On 10/2/19 6:56 AM, Aldy Hernandez wrote:
>>> diff --git a/gcc/range.cc b/gcc/range.cc
>>> new file mode 100644
>>> index 00000000000..5e4d90436f2
>>> --- /dev/null
>>> +++ b/gcc/range.cc
>> [ ... ]
>>> +
>>> +value_range_base
>>> +range_intersect (const value_range_base &r1, const value_range_base
>>> &r2)
>>> +{
>>> +  value_range_base tmp (r1);
>>> +  tmp.intersect (r2);
>>> +  return tmp;
>>> +}
>> So low level question here.  This code looks particularly well suited
>> for the NRV optimization.  Can you check if NVR (named-value-return) is
>> triggering here, and if not why.  ISTM these are likely used heavily and
>> NVR would be a win.
> 
> AFAICT, even before the NRV pass, we have already lined up
> value_range_base::intersect to put its return value into RETVAL:
> 
>   const struct value_range_base & r1_2(D) = r1;
>   const struct value_range_base & r2_4(D) = r2;
>   <bb 2> [local count: 1073741824]:
>   <retval> = *r1_2(D);
>   value_range_base::intersect (&<retval>, r2_4(D));
>   return <retval>;
> 
> So...all good?
Yup. We construct into <retval> and return it.

> 
>>
>>
>>> diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
>>> index 5ec4d17f23b..269a3cb090e 100644
>>> --- a/gcc/tree-vrp.c
>>> +++ b/gcc/tree-vrp.c
>> [ ... ]
>>
>>> @@ -67,7 +67,7 @@ along with GCC; see the file COPYING3.  If not see
>>
>>> +
>>> +/* Return the inverse of a range.  */
>>> +
>>> +void
>>> +value_range_base::invert ()
>>> +{
>>> +  if (undefined_p ())
>>> +    return;
>>> +  if (varying_p ())
>>> +    set_undefined ();
>>> +  else if (m_kind == VR_RANGE)
>>> +    m_kind = VR_ANTI_RANGE;
>>> +  else if (m_kind == VR_ANTI_RANGE)
>>> +    m_kind = VR_RANGE;
>>> +  else
>>> +    gcc_unreachable ();
>>> +}
>> I don't think this is right for varying_p.  ISTM that if something is
>> VR_VARYING that inverting it is still VR_VARYING.  VR_UNDEFINED seems
>> particularly bad given its optimistic treatment elsewhere.
> 
> Done.  Andrew agreed, as per his reply.
> 
> Retested for all languages on x86-64 Linux.
> 
> OK?
So the final version has asserts to ensure we don't get into the invert
method with varying/undefined.  That works for me.

OK for the trunk.

Onward!

jeff
diff mbox series

Patch

diff --git a/gcc/ChangeLog b/gcc/ChangeLog
index 65f9db966d0..9aa46c087b8 100644
--- a/gcc/ChangeLog
+++ b/gcc/ChangeLog
@@ -1,3 +1,68 @@ 
+2019-09-25  Aldy Hernandez  <aldyh@redhat.com>
+
+	* Makefile.in (OBJS): Add range.o and range-op.o.
+	Remove wide-int-range.o.
+	(GTFILES): Add range.h.
+	* function-tests.c (test_ranges): New.
+	(function_tests_c_tests): Call test_ranges.
+	* ipa-cp.c (ipa_vr_operation_and_type_effects): Call
+	range_fold_unary_expr instead of extract_range_from_unary_expr.
+	* ipa-prop.c (ipa_compute_jump_functions_for_edge): Same.
+	* range-op.cc: New file.
+	* range-op.h: New file.
+	* range.cc: New file.
+	* range.h: New file.
+	* selftest.h (range_tests): New prototype.
+	* ssa.h: Include range.h.
+	* tree-vrp.c (value_range_base::value_range_base): New
+	constructors.
+	(value_range_base::singleton_p): Do not call
+	ranges_from_anti_range until sure we will need to.
+	(value_range_base::type): Rename gcc_assert to
+	gcc_checking_assert.
+	(vrp_val_is_max): New argument.
+	(vrp_val_is_min): Same.
+	(wide_int_range_set_zero_nonzero_bits): Move from
+	wide-int-range.cc.
+	(extract_range_into_wide_ints): Remove.
+	(extract_range_from_multiplicative_op): Remove.
+	(extract_range_from_pointer_plus_expr): Abstract POINTER_PLUS code
+	from extract_range_from_binary_expr.
+	(extract_range_from_plus_minus_expr): Abstract PLUS/MINUS code
+	from extract_range_from_binary_expr.
+	(extract_range_from_binary_expr): Remove.
+	(normalize_for_range_ops): New.
+	(range_fold_binary_expr): New.
+	(range_fold_unary_expr): New.
+	(value_range_base::num_pairs): New.
+	(value_range_base::lower_bound): New.
+	(value_range_base::upper_bound): New.
+	(value_range_base::upper_bound): New.
+	(value_range_base::contains_p): New.
+	(value_range_base::invert): New.
+	(value_range_base::union_): New.
+	(value_range_base::intersect): New.
+	(range_compatible_p): New.
+	(value_range_base::operator==): New.
+	(determine_value_range_1): Call range_fold_*expr instead of
+	extract_range_from_*expr.
+	* tree-vrp.h (class value_range_base): Add new constructors.
+	Add methods for union_, intersect, operator==, contains_p,
+	num_pairs, lower_bound, upper_bound, invert.
+	(vrp_val_is_min): Add handle_pointers argument.
+	(vrp_val_is_max): Same.
+	(extract_range_from_unary_expr): Remove.
+	(extract_range_from_binary_expr): Remove.
+	(range_fold_unary_expr): New.
+	(range_fold_binary_expr): New.
+	* vr-values.c (vr_values::extract_range_from_binary_expr): Call
+	range_fold_binary_expr instead of extract_range_from_binary_expr.
+	(vr_values::extract_range_basic): Same.
+	(vr_values::extract_range_from_unary_expr): Call
+	range_fold_unary_expr instead of extract_range_from_unary_expr.
+	* wide-int-range.cc: Remove.
+	* wide-int-range.h: Remove.
+
 2019-08-27  Richard Biener  <rguenther@suse.de>
 
 	* config/i386/i386-features.h
diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 597dc01328b..d9549710d8e 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1452,6 +1452,8 @@  OBJS = \
 	print-tree.o \
 	profile.o \
 	profile-count.o \
+	range.o \
+	range-op.o \
 	read-md.o \
 	read-rtl.o \
 	read-rtl-function.o \
@@ -1610,7 +1612,6 @@  OBJS = \
 	web.o \
 	wide-int.o \
 	wide-int-print.o \
-	wide-int-range.o \
 	xcoffout.o \
 	$(out_object_file) \
 	$(EXTRA_OBJS) \
@@ -2548,6 +2549,7 @@  GTFILES = $(CPPLIB_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
   $(srcdir)/stringpool.c $(srcdir)/tree.c $(srcdir)/varasm.c \
   $(srcdir)/gimple.h \
   $(srcdir)/gimple-ssa.h \
+  $(srcdir)/range.h $(srcdir)/range.cc \
   $(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
   $(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
   $(srcdir)/tree-dfa.c \
diff --git a/gcc/function-tests.c b/gcc/function-tests.c
index f1e29e49ee1..2440dd6820b 100644
--- a/gcc/function-tests.c
+++ b/gcc/function-tests.c
@@ -570,6 +570,19 @@  test_conversion_to_ssa ()
   ASSERT_EQ (SSA_NAME, TREE_CODE (gimple_return_retval (return_stmt)));
 }
 
+/* Test range folding.  We must start this here because we need cfun
+   set.  */
+
+static void
+test_ranges ()
+{
+  tree fndecl = build_trivial_high_gimple_function ();
+  function *fun = DECL_STRUCT_FUNCTION (fndecl);
+  push_cfun (fun);
+  range_tests ();
+  pop_cfun ();
+}
+
 /* Test of expansion from gimple-ssa to RTL.  */
 
 static void
@@ -674,6 +687,7 @@  function_tests_c_tests ()
   test_gimplification ();
   test_building_cfg ();
   test_conversion_to_ssa ();
+  test_ranges ();
   test_expansion_to_rtl ();
 }
 
diff --git a/gcc/ipa-cp.c b/gcc/ipa-cp.c
index 0046064fea1..33d297d8e6e 100644
--- a/gcc/ipa-cp.c
+++ b/gcc/ipa-cp.c
@@ -1901,8 +1901,7 @@  ipa_vr_operation_and_type_effects (value_range_base *dst_vr,
 				   enum tree_code operation,
 				   tree dst_type, tree src_type)
 {
-  extract_range_from_unary_expr (dst_vr, operation, dst_type,
-				 src_vr, src_type);
+  range_fold_unary_expr (dst_vr, operation, dst_type, src_vr, src_type);
   if (dst_vr->varying_p () || dst_vr->undefined_p ())
     return false;
   return true;
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 1a0e12e6c0c..304cc107226 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -1923,8 +1923,8 @@  ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
 	      value_range_base tmpvr (type,
 				      wide_int_to_tree (TREE_TYPE (arg), min),
 				      wide_int_to_tree (TREE_TYPE (arg), max));
-	      extract_range_from_unary_expr (&resvr, NOP_EXPR, param_type,
-					     &tmpvr, TREE_TYPE (arg));
+	      range_fold_unary_expr (&resvr, NOP_EXPR, param_type,
+				     &tmpvr, TREE_TYPE (arg));
 	      if (!resvr.undefined_p () && !resvr.varying_p ())
 		ipa_set_jfunc_vr (jfunc, &resvr);
 	      else
diff --git a/gcc/range-op.cc b/gcc/range-op.cc
new file mode 100644
index 00000000000..a21520df355
--- /dev/null
+++ b/gcc/range-op.cc
@@ -0,0 +1,3273 @@ 
+/* Code for range operators.
+   Copyright (C) 2017-2019 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "insn-codes.h"
+#include "rtl.h"
+#include "tree.h"
+#include "gimple.h"
+#include "cfghooks.h"
+#include "tree-pass.h"
+#include "ssa.h"
+#include "optabs-tree.h"
+#include "gimple-pretty-print.h"
+#include "diagnostic-core.h"
+#include "flags.h"
+#include "fold-const.h"
+#include "stor-layout.h"
+#include "calls.h"
+#include "cfganal.h"
+#include "gimple-fold.h"
+#include "tree-eh.h"
+#include "gimple-iterator.h"
+#include "gimple-walk.h"
+#include "tree-cfg.h"
+#include "wide-int.h"
+#include "range-op.h"
+
+// Return the upper limit for a type.
+
+static inline wide_int
+max_limit (const_tree type)
+{
+  return wi::max_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+}
+
+// Return the lower limit for a type.
+
+static inline wide_int
+min_limit (const_tree type)
+{
+  return wi::min_value (TYPE_PRECISION (type) , TYPE_SIGN (type));
+}
+
+// If the range of either op1 or op2 is undefined, set the result to
+// undefined and return TRUE.
+
+inline bool
+empty_range_check (value_range_base &r,
+		   const value_range_base &op1,
+		   const value_range_base & op2)
+{
+  if (op1.undefined_p () || op2.undefined_p ())
+    {
+      r.set_undefined ();
+      return true;
+    }
+  else
+    return false;
+}
+
+// Return TRUE if shifting by OP is undefined behavior, and set R to
+// the appropriate range.
+
+static inline bool
+undefined_shift_range_check (value_range_base &r, tree type,
+			     value_range_base op)
+{
+  if (op.undefined_p ())
+    {
+      r = value_range_base ();
+      return true;
+    }
+
+  // Shifting by any values outside [0..prec-1], gets undefined
+  // behavior from the shift operation.  We cannot even trust
+  // SHIFT_COUNT_TRUNCATED at this stage, because that applies to rtl
+  // shifts, and the operation at the tree level may be widened.
+  if (wi::lt_p (op.lower_bound (), 0, TYPE_SIGN (op.type ()))
+      || wi::ge_p (op.upper_bound (),
+		   TYPE_PRECISION (type), TYPE_SIGN (op.type ())))
+    {
+      r = value_range_base (type);
+      return true;
+    }
+  return false;
+}
+
+// Return TRUE if 0 is within [WMIN, WMAX].
+
+static inline bool
+wi_includes_zero_p (tree type, const wide_int &wmin, const wide_int &wmax)
+{
+  signop sign = TYPE_SIGN (type);
+  return wi::le_p (wmin, 0, sign) && wi::ge_p (wmax, 0, sign);
+}
+
+// Return TRUE if [WMIN, WMAX] is the singleton 0.
+
+static inline bool
+wi_zero_p (tree type, const wide_int &wmin, const wide_int &wmax)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return wmin == wmax && wi::eq_p (wmin, wi::zero (prec));
+}
+
+// Default wide_int fold operation returns [MIN, MAX].
+
+value_range_base
+range_operator::wi_fold (tree type,
+			 const wide_int &lh_lb ATTRIBUTE_UNUSED,
+			 const wide_int &lh_ub ATTRIBUTE_UNUSED,
+			 const wide_int &rh_lb ATTRIBUTE_UNUSED,
+			 const wide_int &rh_ub ATTRIBUTE_UNUSED) const
+{
+  return value_range_base (type);
+}
+
+// The default for fold is to break all ranges into sub-ranges and
+// invoke the wi_fold method on each sub-range pair.
+
+value_range_base
+range_operator::fold_range (tree type,
+			    const value_range_base &lh,
+			    const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  for (unsigned x = 0; x < lh.num_pairs (); ++x)
+    for (unsigned y = 0; y < rh.num_pairs (); ++y)
+      {
+	wide_int lh_lb = lh.lower_bound (x);
+	wide_int lh_ub = lh.upper_bound (x);
+	wide_int rh_lb = rh.lower_bound (y);
+	wide_int rh_ub = rh.upper_bound (y);
+	r.union_ (wi_fold (type, lh_lb, lh_ub, rh_lb, rh_ub));
+	if (r.varying_p ())
+	  return r;
+      }
+  return r;
+}
+
+// The default for op1_range is to return false.
+
+bool
+range_operator::op1_range (value_range_base &r ATTRIBUTE_UNUSED,
+			   tree type ATTRIBUTE_UNUSED,
+			   const value_range_base &lhs ATTRIBUTE_UNUSED,
+			   const value_range_base &op2 ATTRIBUTE_UNUSED) const
+{
+  return false;
+}
+
+// The default for op2_range is to return false.
+
+bool
+range_operator::op2_range (value_range_base &r ATTRIBUTE_UNUSED,
+			   tree type ATTRIBUTE_UNUSED,
+			   const value_range_base &lhs ATTRIBUTE_UNUSED,
+			   const value_range_base &op1 ATTRIBUTE_UNUSED) const
+{
+  return false;
+}
+
+
+// Called when there is either an overflow OR an underflow... which
+// means an anti range must be created to compensate.  This does not
+// cover the case where there are 2 possible overflows, or none.
+
+static value_range_base
+adjust_overflow_bound (tree type, const wide_int &wmin, const wide_int &wmax)
+{
+  const signop sgn = TYPE_SIGN (type);
+  const unsigned int prec = TYPE_PRECISION (type);
+
+  wide_int tmin = wide_int::from (wmin, prec, sgn);
+  wide_int tmax = wide_int::from (wmax, prec, sgn);
+
+  bool covers = false;
+  wide_int tem = tmin;
+  tmin = tmax + 1;
+  if (wi::cmp (tmin, tmax, sgn) < 0)
+    covers = true;
+  tmax = tem - 1;
+  if (wi::cmp (tmax, tem, sgn) > 0)
+    covers = true;
+
+  // If the anti-range would cover nothing, drop to varying.
+  // Likewise if the anti-range bounds are outside of the types
+  // values.
+  if (covers || wi::cmp (tmin, tmax, sgn) > 0)
+    return value_range_base (type);
+
+  return value_range_base (VR_ANTI_RANGE, type, tmin, tmax);
+}
+
+// Given a newly calculated lbound and ubound, examine their
+// respective overflow bits to determine how to create a range.
+// Return said range.
+
+static value_range_base
+create_range_with_overflow (tree type,
+			    const wide_int &wmin, const wide_int &wmax,
+			    wi::overflow_type min_ovf = wi::OVF_NONE,
+			    wi::overflow_type max_ovf = wi::OVF_NONE)
+{
+  const signop sgn = TYPE_SIGN (type);
+  const unsigned int prec = TYPE_PRECISION (type);
+  const bool overflow_wraps = TYPE_OVERFLOW_WRAPS (type);
+
+  // For one bit precision if max != min, then the range covers all
+  // values.
+  if (prec == 1 && wi::ne_p (wmax, wmin))
+    return value_range_base (type);
+
+  if (overflow_wraps)
+    {
+      // If overflow wraps, truncate the values and adjust the range,
+      // kind, and bounds appropriately.
+      if ((min_ovf != wi::OVF_NONE) == (max_ovf != wi::OVF_NONE))
+	{
+	  wide_int tmin = wide_int::from (wmin, prec, sgn);
+	  wide_int tmax = wide_int::from (wmax, prec, sgn);
+	  // If the limits are swapped, we wrapped around and cover
+	  // the entire range.
+	  if (wi::gt_p (tmin, tmax, sgn))
+	    return value_range_base (type);
+
+	  // No overflow or both overflow or underflow.  The range
+	  // kind stays normal.
+	  return value_range_base (type, tmin, tmax);
+	}
+
+      if ((min_ovf == wi::OVF_UNDERFLOW && max_ovf == wi::OVF_NONE)
+	  || (max_ovf == wi::OVF_OVERFLOW && min_ovf == wi::OVF_NONE))
+	return adjust_overflow_bound (type, wmin, wmax);
+
+      // Other underflow and/or overflow, drop to VR_VARYING.
+      return value_range_base (type);
+    }
+  else
+    {
+      // If overflow does not wrap, saturate to [MIN, MAX].
+      wide_int new_lb, new_ub;
+      if (min_ovf == wi::OVF_UNDERFLOW)
+	new_lb = wi::min_value (prec, sgn);
+      else if (min_ovf == wi::OVF_OVERFLOW)
+	new_lb = wi::max_value (prec, sgn);
+      else
+        new_lb = wmin;
+
+      if (max_ovf == wi::OVF_UNDERFLOW)
+	new_ub = wi::min_value (prec, sgn);
+      else if (max_ovf == wi::OVF_OVERFLOW)
+	new_ub = wi::max_value (prec, sgn);
+      else
+        new_ub = wmax;
+
+      return value_range_base (type, new_lb, new_ub);
+    }
+}
+
+// Like above, but canonicalize the case where the bounds are swapped
+// and overflow may wrap.  In which case, we transform [10,5] into
+// [MIN,5][10,MAX].
+
+static inline value_range_base
+create_possibly_reversed_range (tree type,
+				const wide_int &new_lb, const wide_int &new_ub)
+{
+  signop s = TYPE_SIGN (type);
+  // If the bounds are swapped, treat the result as if an overflow occured.
+  if (wi::gt_p (new_lb, new_ub, s))
+    return adjust_overflow_bound (type, new_lb, new_ub);
+
+  // Otherwise its just a normal range.
+  return value_range_base (type, new_lb, new_ub);
+}
+
+// Return a value_range_base instance that is a boolean TRUE.
+
+static inline value_range_base
+range_true (tree type)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return value_range_base (type, wi::one (prec), wi::one (prec));
+}
+
+// Return a value_range_base instance that is a boolean FALSE.
+
+static inline value_range_base
+range_false (tree type)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return value_range_base (type, wi::zero (prec), wi::zero (prec));
+}
+
+// Return a value_range_base instance that is a boolean FALSE.
+
+static inline value_range_base
+range_true_and_false (tree type)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  return value_range_base (type, wi::zero (prec), wi::one (prec));
+}
+
+enum bool_range_state { BRS_FALSE, BRS_TRUE, BRS_EMPTY, BRS_FULL };
+
+// Return the summary information about boolean range LHS.  Return an
+// "interesting" range in R.  For EMPTY or FULL, return the equivilent
+// range for TYPE, for BRS_TRUE and BRS false, return the negation of
+// the bool range.
+
+static bool_range_state
+get_bool_state (value_range_base &r,
+		const value_range_base &lhs, tree val_type)
+{
+  // If there is no result, then this is unexectuable.
+  if (lhs.undefined_p ())
+    {
+      r.set_undefined ();
+      return BRS_EMPTY;
+    }
+
+  // If the bounds aren't the same, then it's not a constant.
+  if (!wi::eq_p (lhs.upper_bound (), lhs.lower_bound ()))
+    {
+      r.set_varying (val_type);
+      return BRS_FULL;
+    }
+
+  if (lhs.zero_p ())
+    return BRS_FALSE;
+
+  return BRS_TRUE;
+}
+
+
+class operator_equal : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &val) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &val) const;
+} op_equal;
+
+value_range_base
+operator_equal::fold_range (tree type,
+			    const value_range_base &op1,
+			    const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  // We can be sure the values are always equal or not if both ranges
+  // consist of a single value, and then compare them.
+  if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
+      && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+    {
+      if (wi::eq_p (op1.lower_bound (), op2.upper_bound()))
+	r = range_true (type);
+      else
+	r = range_false (type);
+    }
+  else
+    {
+      // If ranges do not intersect, we know the range is not equal,
+      // otherwise we don't know anything for sure.
+      r = range_intersect (op1, op2);
+      if (r.undefined_p ())
+	r = range_false (type);
+      else
+	r = range_true_and_false (type);
+    }
+
+  return r;
+}
+
+bool
+operator_equal::op1_range (value_range_base &r, tree type,
+			   const value_range_base &lhs,
+			   const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_FALSE:
+        // If the result is false, the only time we know anything is
+	// if OP2 is a constant.
+	if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
+	  r = range_invert (op2);
+	else
+	  r.set_varying (type);
+	break;
+
+      case BRS_TRUE:
+        // If it's true, the result is the same as OP2.
+        r = op2;
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+bool
+operator_equal::op2_range (value_range_base &r, tree type,
+			   const value_range_base &lhs,
+			   const value_range_base &op1) const
+{
+  return operator_equal::op1_range (r, type, lhs, op1);
+}
+
+
+class operator_not_equal : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_not_equal;
+
+value_range_base
+operator_not_equal::fold_range (tree type,
+				const value_range_base &op1,
+				const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  // We can be sure the values are always equal or not if both ranges
+  // consist of a single value, and then compare them.
+  if (wi::eq_p (op1.lower_bound (), op1.upper_bound ())
+      && wi::eq_p (op2.lower_bound (), op2.upper_bound ()))
+    {
+      if (wi::ne_p (op1.lower_bound (), op2.upper_bound()))
+	r = range_true (type);
+      else
+	r = range_false (type);
+    }
+  else
+    {
+      // If ranges do not intersect, we know the range is not equal,
+      // otherwise we don't know anything for sure.
+      r = range_intersect (op1, op2);
+      if (r.undefined_p ())
+	r = range_true (type);
+      else
+	r = range_true_and_false (type);
+    }
+
+  return r;
+}
+
+bool
+operator_not_equal::op1_range (value_range_base &r, tree type,
+			       const value_range_base &lhs,
+			       const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_TRUE:
+        // If the result is true, the only time we know anything is if
+	// OP2 is a constant.
+	if (wi::eq_p (op2.lower_bound(), op2.upper_bound()))
+	  r = range_invert (op2);
+	else
+	  r.set_varying (type);
+	break;
+
+      case BRS_FALSE:
+        // If its true, the result is the same as OP2.
+        r = op2;
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+
+bool
+operator_not_equal::op2_range (value_range_base &r, tree type,
+			       const value_range_base &lhs,
+			       const value_range_base &op1) const
+{
+  return operator_not_equal::op1_range (r, type, lhs, op1);
+}
+
+// (X < VAL) produces the range of [MIN, VAL - 1].
+
+static void
+build_lt (value_range_base &r, tree type, const wide_int &val)
+{
+  wi::overflow_type ov;
+  wide_int lim = wi::sub (val, 1, TYPE_SIGN (type), &ov);
+
+  // If val - 1 underflows, check if X < MIN, which is an empty range.
+  if (ov)
+    r.set_undefined ();
+  else
+    r = value_range_base (type, min_limit (type), lim);
+}
+
+// (X <= VAL) produces the range of [MIN, VAL].
+
+static void
+build_le (value_range_base &r, tree type, const wide_int &val)
+{
+  r = value_range_base (type, min_limit (type), val);
+}
+
+// (X > VAL) produces the range of [VAL + 1, MAX].
+
+static void
+build_gt (value_range_base &r, tree type, const wide_int &val)
+{
+  wi::overflow_type ov;
+  wide_int lim = wi::add (val, 1, TYPE_SIGN (type), &ov);
+  // If val + 1 overflows, check is for X > MAX, which is an empty range.
+  if (ov)
+    r.set_undefined ();
+  else
+    r = value_range_base (type, lim, max_limit (type));
+}
+
+// (X >= val) produces the range of [VAL, MAX].
+
+static void
+build_ge (value_range_base &r, tree type, const wide_int &val)
+{
+  r = value_range_base (type, val, max_limit (type));
+}
+
+
+class operator_lt :  public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_lt;
+
+value_range_base
+operator_lt::fold_range (tree type,
+			 const value_range_base &op1,
+			 const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  signop sign = TYPE_SIGN (op1.type ());
+  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+  if (wi::lt_p (op1.upper_bound (), op2.lower_bound (), sign))
+    r = range_true (type);
+  else
+    if (!wi::lt_p (op1.lower_bound (), op2.upper_bound (), sign))
+      r = range_false (type);
+    else
+      r = range_true_and_false (type);
+  return r;
+}
+
+bool
+operator_lt::op1_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_TRUE:
+	build_lt (r, type, op2.upper_bound ());
+	break;
+
+      case BRS_FALSE:
+	build_ge (r, type, op2.lower_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+bool
+operator_lt::op2_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op1) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_FALSE:
+	build_le (r, type, op1.upper_bound ());
+	break;
+
+      case BRS_TRUE:
+	build_gt (r, type, op1.lower_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+
+class operator_le :  public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_le;
+
+value_range_base
+operator_le::fold_range (tree type,
+			 const value_range_base &op1,
+			 const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  signop sign = TYPE_SIGN (op1.type ());
+  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+  if (wi::le_p (op1.upper_bound (), op2.lower_bound (), sign))
+    r = range_true (type);
+  else
+    if (!wi::le_p (op1.lower_bound (), op2.upper_bound (), sign))
+      r = range_false (type);
+    else
+      r = range_true_and_false (type);
+  return r;
+}
+
+bool
+operator_le::op1_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_TRUE:
+	build_le (r, type, op2.upper_bound ());
+	break;
+
+      case BRS_FALSE:
+	build_gt (r, type, op2.lower_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+bool
+operator_le::op2_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op1) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_FALSE:
+	build_lt (r, type, op1.upper_bound ());
+	break;
+
+      case BRS_TRUE:
+	build_ge (r, type, op1.lower_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+
+class operator_gt :  public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_gt;
+
+value_range_base
+operator_gt::fold_range (tree type,
+			 const value_range_base &op1,
+			 const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  signop sign = TYPE_SIGN (op1.type ());
+  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+  if (wi::gt_p (op1.lower_bound (), op2.upper_bound (), sign))
+    r = range_true (type);
+  else
+    if (!wi::gt_p (op1.upper_bound (), op2.lower_bound (), sign))
+      r = range_false (type);
+    else
+      r = range_true_and_false (type);
+  return r;
+}
+
+bool
+operator_gt::op1_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_TRUE:
+	build_gt (r, type, op2.lower_bound ());
+	break;
+
+      case BRS_FALSE:
+	build_le (r, type, op2.upper_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+bool
+operator_gt::op2_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op1) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_FALSE:
+	build_ge (r, type, op1.lower_bound ());
+	break;
+
+      case BRS_TRUE:
+	build_lt (r, type, op1.upper_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+
+class operator_ge :  public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_ge;
+
+value_range_base
+operator_ge::fold_range (tree type,
+			 const value_range_base &op1,
+			 const value_range_base &op2) const
+{
+  value_range_base r;
+  if (empty_range_check (r, op1, op2))
+    return r;
+
+  signop sign = TYPE_SIGN (op1.type ());
+  gcc_checking_assert (sign == TYPE_SIGN (op2.type ()));
+
+  if (wi::ge_p (op1.lower_bound (), op2.upper_bound (), sign))
+    r = range_true (type);
+  else if (!wi::ge_p (op1.upper_bound (), op2.lower_bound (), sign))
+    r = range_false (type);
+  else
+    r = range_true_and_false (type);
+  return r;
+}
+
+bool
+operator_ge::op1_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op2) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_TRUE:
+	build_ge (r, type, op2.lower_bound ());
+	break;
+
+      case BRS_FALSE:
+	build_lt (r, type, op2.upper_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+bool
+operator_ge::op2_range (value_range_base &r, tree type,
+			const value_range_base &lhs,
+			const value_range_base &op1) const
+{
+  switch (get_bool_state (r, lhs, type))
+    {
+      case BRS_FALSE:
+	build_gt (r, type, op1.lower_bound ());
+	break;
+
+      case BRS_TRUE:
+	build_le (r, type, op1.upper_bound ());
+	break;
+
+      default:
+        break;
+    }
+  return true;
+}
+
+
+class operator_plus : public range_operator
+{
+public:
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_plus;
+
+value_range_base
+operator_plus::wi_fold (tree type,
+			const wide_int &lh_lb, const wide_int &lh_ub,
+			const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  wi::overflow_type ov_lb, ov_ub;
+  signop s = TYPE_SIGN (type);
+  wide_int new_lb = wi::add (lh_lb, rh_lb, s, &ov_lb);
+  wide_int new_ub = wi::add (lh_ub, rh_ub, s, &ov_ub);
+  return create_range_with_overflow (type, new_lb, new_ub, ov_lb, ov_ub);
+}
+
+bool
+operator_plus::op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const
+{
+  r = range_op_handler (MINUS_EXPR, type)->fold_range (type, lhs, op2);
+  return true;
+}
+
+bool
+operator_plus::op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const
+{
+  r = range_op_handler (MINUS_EXPR, type)->fold_range (type, lhs, op1);
+  return true;
+}
+
+
+class operator_minus : public range_operator
+{
+public:
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_minus;
+
+value_range_base
+operator_minus::wi_fold (tree type,
+			 const wide_int &lh_lb, const wide_int &lh_ub,
+			 const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  wi::overflow_type ov_lb, ov_ub;
+  signop s = TYPE_SIGN (type);
+  wide_int new_lb = wi::sub (lh_lb, rh_ub, s, &ov_lb);
+  wide_int new_ub = wi::sub (lh_ub, rh_lb, s, &ov_ub);
+  return create_range_with_overflow (type, new_lb, new_ub, ov_lb, ov_ub);
+}
+
+bool
+operator_minus::op1_range (value_range_base &r, tree type,
+			   const value_range_base &lhs,
+			   const value_range_base &op2) const
+{
+  r = range_op_handler (PLUS_EXPR, type)->fold_range (type, lhs, op2);
+  return true;
+}
+
+bool
+operator_minus::op2_range (value_range_base &r, tree type,
+			   const value_range_base &lhs,
+			   const value_range_base &op1) const
+{
+  r = fold_range (type, op1, lhs);
+  return true;
+}
+
+
+class operator_min : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_min;
+
+value_range_base
+operator_min::wi_fold (tree type,
+		       const wide_int &lh_lb, const wide_int &lh_ub,
+		       const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  signop s = TYPE_SIGN (type);
+  wide_int new_lb = wi::min (lh_lb, rh_lb, s);
+  wide_int new_ub = wi::min (lh_ub, rh_ub, s);
+  return create_range_with_overflow (type, new_lb, new_ub);
+}
+
+
+class operator_max : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_max;
+
+value_range_base
+operator_max::wi_fold (tree type,
+		       const wide_int &lh_lb, const wide_int &lh_ub,
+		       const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  signop s = TYPE_SIGN (type);
+  wide_int new_lb = wi::max (lh_lb, rh_lb, s);
+  wide_int new_ub = wi::max (lh_ub, rh_ub, s);
+  return create_range_with_overflow (type, new_lb, new_ub);
+}
+
+
+class cross_product_operator : public range_operator
+{
+public:
+  // Perform an operation between two wide-ints and place the result
+  // in R.  Return true if the operation overflowed.
+  virtual bool wi_op_overflows (wide_int &r,
+				tree type,
+				const wide_int &,
+				const wide_int &) const = 0;
+
+  // Calculate the cross product of two sets of sub-ranges and return it.
+  value_range_base wi_cross_product (tree type,
+				     const wide_int &lh_lb,
+				     const wide_int &lh_ub,
+				     const wide_int &rh_lb,
+				     const wide_int &rh_ub) const;
+};
+
+// Calculate the cross product of two sets of ranges and return it.
+//
+// Multiplications, divisions and shifts are a bit tricky to handle,
+// depending on the mix of signs we have in the two ranges, we need to
+// operate on different values to get the minimum and maximum values
+// for the new range.  One approach is to figure out all the
+// variations of range combinations and do the operations.
+//
+// However, this involves several calls to compare_values and it is
+// pretty convoluted.  It's simpler to do the 4 operations (MIN0 OP
+// MIN1, MIN0 OP MAX1, MAX0 OP MIN1 and MAX0 OP MAX0 OP MAX1) and then
+// figure the smallest and largest values to form the new range.
+
+value_range_base
+cross_product_operator::wi_cross_product (tree type,
+					  const wide_int &lh_lb,
+					  const wide_int &lh_ub,
+					  const wide_int &rh_lb,
+					  const wide_int &rh_ub) const
+{
+  wide_int cp1, cp2, cp3, cp4;
+
+  // Compute the 4 cross operations, bailing if we get an overflow we
+  // can't handle.
+  if (wi_op_overflows (cp1, type, lh_lb, rh_lb))
+    return value_range_base (type);
+  if (wi::eq_p (lh_lb, lh_ub))
+    cp3 = cp1;
+  else if (wi_op_overflows (cp3, type, lh_ub, rh_lb))
+    return value_range_base (type);
+  if (wi::eq_p (rh_lb, rh_ub))
+    cp2 = cp1;
+  else if (wi_op_overflows (cp2, type, lh_lb, rh_ub))
+    return value_range_base (type);
+  if (wi::eq_p (lh_lb, lh_ub))
+    cp4 = cp2;
+  else if (wi_op_overflows (cp4, type, lh_ub, rh_ub))
+    return value_range_base (type);
+
+  // Order pairs.
+  signop sign = TYPE_SIGN (type);
+  if (wi::gt_p (cp1, cp2, sign))
+    std::swap (cp1, cp2);
+  if (wi::gt_p (cp3, cp4, sign))
+    std::swap (cp3, cp4);
+
+  // Choose min and max from the ordered pairs.
+  wide_int res_lb = wi::min (cp1, cp3, sign);
+  wide_int res_ub = wi::max (cp2, cp4, sign);
+  return create_range_with_overflow (type, res_lb, res_ub);
+}
+
+
+class operator_mult : public cross_product_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+  virtual bool wi_op_overflows (wide_int &res,
+				tree type,
+				const wide_int &w0,
+				const wide_int &w1) const;
+} op_mult;
+
+bool
+operator_mult::wi_op_overflows (wide_int &res,
+				tree type,
+				const wide_int &w0,
+				const wide_int &w1) const
+{
+  wi::overflow_type overflow = wi::OVF_NONE;
+  signop sign = TYPE_SIGN (type);
+  res = wi::mul (w0, w1, sign, &overflow);
+   if (overflow && TYPE_OVERFLOW_UNDEFINED (type))
+     {
+       // For multiplication, the sign of the overflow is given
+       // by the comparison of the signs of the operands.
+       if (sign == UNSIGNED || w0.sign_mask () == w1.sign_mask ())
+	 res = wi::max_value (w0.get_precision (), sign);
+       else
+	 res = wi::min_value (w0.get_precision (), sign);
+       return false;
+     }
+   return overflow;
+}
+
+value_range_base
+operator_mult::wi_fold (tree type,
+			const wide_int &lh_lb, const wide_int &lh_ub,
+			const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  if (TYPE_OVERFLOW_UNDEFINED (type))
+    return wi_cross_product (type, lh_lb, lh_ub, rh_lb, rh_ub);
+
+  // Multiply the ranges when overflow wraps.  This is basically fancy
+  // code so we don't drop to varying with an unsigned
+  // [-3,-1]*[-3,-1].
+  //
+  // This test requires 2*prec bits if both operands are signed and
+  // 2*prec + 2 bits if either is not.  Therefore, extend the values
+  // using the sign of the result to PREC2.  From here on out,
+  // everthing is just signed math no matter what the input types
+  // were.
+
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+  widest2_int min0 = widest2_int::from (lh_lb, sign);
+  widest2_int max0 = widest2_int::from (lh_ub, sign);
+  widest2_int min1 = widest2_int::from (rh_lb, sign);
+  widest2_int max1 = widest2_int::from (rh_ub, sign);
+  widest2_int sizem1 = wi::mask <widest2_int> (prec, false);
+  widest2_int size = sizem1 + 1;
+
+  // Canonicalize the intervals.
+  if (sign == UNSIGNED)
+    {
+      if (wi::ltu_p (size, min0 + max0))
+	{
+	  min0 -= size;
+	  max0 -= size;
+	}
+      if (wi::ltu_p (size, min1 + max1))
+	{
+	  min1 -= size;
+	  max1 -= size;
+	}
+    }
+
+  // Sort the 4 products so that min is in prod0 and max is in
+  // prod3.
+  widest2_int prod0 = min0 * min1;
+  widest2_int prod1 = min0 * max1;
+  widest2_int prod2 = max0 * min1;
+  widest2_int prod3 = max0 * max1;
+
+  // min0min1 > max0max1
+  if (prod0 > prod3)
+    std::swap (prod0, prod3);
+
+  // min0max1 > max0min1
+  if (prod1 > prod2)
+    std::swap (prod1, prod2);
+
+  if (prod0 > prod1)
+    std::swap (prod0, prod1);
+
+  if (prod2 > prod3)
+    std::swap (prod2, prod3);
+
+  // diff = max - min
+  prod2 = prod3 - prod0;
+  if (wi::geu_p (prod2, sizem1))
+    // The range covers all values.
+    return value_range_base (type);
+
+  wide_int new_lb = wide_int::from (prod0, prec, sign);
+  wide_int new_ub = wide_int::from (prod3, prec, sign);
+  return create_possibly_reversed_range (type, new_lb, new_ub);
+}
+
+
+class operator_div : public cross_product_operator
+{
+public:
+  operator_div (enum tree_code c)  { code = c; }
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+  virtual bool wi_op_overflows (wide_int &res,
+				tree type,
+				const wide_int &,
+				const wide_int &) const;
+private:
+  enum tree_code code;
+};
+
+bool
+operator_div::wi_op_overflows (wide_int &res,
+			       tree type,
+			       const wide_int &w0,
+			       const wide_int &w1) const
+{
+  if (w1 == 0)
+    return true;
+
+  wi::overflow_type overflow = wi::OVF_NONE;
+  signop sign = TYPE_SIGN (type);
+
+  switch (code)
+    {
+    case EXACT_DIV_EXPR:
+      // EXACT_DIV_EXPR is implemented as TRUNC_DIV_EXPR in
+      // operator_exact_divide.  No need to handle it here.
+      gcc_unreachable ();
+      break;
+    case TRUNC_DIV_EXPR:
+      res = wi::div_trunc (w0, w1, sign, &overflow);
+      break;
+    case FLOOR_DIV_EXPR:
+      res = wi::div_floor (w0, w1, sign, &overflow);
+      break;
+    case ROUND_DIV_EXPR:
+      res = wi::div_round (w0, w1, sign, &overflow);
+      break;
+    case CEIL_DIV_EXPR:
+      res = wi::div_ceil (w0, w1, sign, &overflow);
+      break;
+    default:
+      gcc_unreachable ();
+    }
+
+  if (overflow && TYPE_OVERFLOW_UNDEFINED (type))
+    {
+      // For division, the only case is -INF / -1 = +INF.
+      res = wi::max_value (w0.get_precision (), sign);
+      return false;
+    }
+  return overflow;
+}
+
+value_range_base
+operator_div::wi_fold (tree type,
+		       const wide_int &lh_lb, const wide_int &lh_ub,
+		       const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  // If we know we will divide by zero, return undefined.
+  if (rh_lb == 0 && rh_ub == 0)
+    return value_range_base ();
+
+  const wide_int dividend_min = lh_lb;
+  const wide_int dividend_max = lh_ub;
+  const wide_int divisor_min = rh_lb;
+  const wide_int divisor_max = rh_ub;
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+  wide_int extra_min, extra_max;
+
+  // If we know we won't divide by zero, just do the division.
+  if (!wi_includes_zero_p (type, divisor_min, divisor_max))
+    return wi_cross_product (type, dividend_min, dividend_max,
+			     divisor_min, divisor_max);
+
+  // If flag_non_call_exceptions, we must not eliminate a division by zero.
+  if (cfun->can_throw_non_call_exceptions)
+    return value_range_base (type);
+
+  // If we're definitely dividing by zero, there's nothing to do.
+  if (wi_zero_p (type, divisor_min, divisor_max))
+    return value_range_base ();
+
+  // Perform the division in 2 parts, [LB, -1] and [1, UB], which will
+  // skip any division by zero.
+
+  // First divide by the negative numbers, if any.
+  value_range_base r;
+  if (wi::neg_p (divisor_min, sign))
+    r = wi_cross_product (type, dividend_min, dividend_max,
+			  divisor_min, wi::minus_one (prec));
+  // Then divide by the non-zero positive numbers, if any.
+  if (wi::gt_p (divisor_max, wi::zero (prec), sign))
+    {
+      value_range_base tmp;
+      tmp = wi_cross_product (type, dividend_min, dividend_max,
+			      wi::one (prec), divisor_max);
+      r.union_ (tmp);
+    }
+  return r;
+}
+
+operator_div op_trunc_div (TRUNC_DIV_EXPR);
+operator_div op_floor_div(FLOOR_DIV_EXPR);
+operator_div op_round_div (ROUND_DIV_EXPR);
+operator_div op_ceil_div (CEIL_DIV_EXPR);
+
+
+class operator_exact_divide : public operator_div
+{
+public:
+  operator_exact_divide () : operator_div (TRUNC_DIV_EXPR) { }
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+
+} op_exact_div;
+
+bool
+operator_exact_divide::op1_range (value_range_base &r, tree type,
+				  const value_range_base &lhs,
+				  const value_range_base &op2) const
+{
+  tree offset;
+  // [2, 4] = op1 / [3,3]   since its exact divide, no need to worry about
+  // remainders in the endpoints, so op1 = [2,4] * [3,3] = [6,12].
+  // We wont bother trying to enumerate all the in between stuff :-P
+  // TRUE accuraacy is [6,6][9,9][12,12].  This is unlikely to matter most of
+  // the time however.
+  // If op2 is a multiple of 2, we would be able to set some non-zero bits.
+  if (op2.singleton_p (&offset)
+      && !integer_zerop (offset))
+    {
+      r = range_op_handler (MULT_EXPR, type)->fold_range (type, lhs, op2);
+      return true;
+    }
+  return false;
+}
+
+
+class operator_lshift : public cross_product_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+
+  virtual value_range_base wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const;
+  virtual bool wi_op_overflows (wide_int &res,
+				tree type,
+				const wide_int &,
+				const wide_int &) const;
+} op_lshift;
+
+value_range_base
+operator_lshift::fold_range (tree type,
+			     const value_range_base &op1,
+			     const value_range_base &op2) const
+{
+  value_range_base r;
+  if (undefined_shift_range_check (r, type, op2))
+    return r;
+
+  // Transform left shifts by constants into multiplies.
+  if (op2.singleton_p ())
+    {
+      unsigned shift = op2.lower_bound ().to_uhwi ();
+      wide_int tmp = wi::set_bit_in_zero (shift, TYPE_PRECISION (type));
+      value_range_base mult (type, tmp, tmp);
+
+      // Force wrapping multiplication.
+      bool saved_flag_wrapv = flag_wrapv;
+      bool saved_flag_wrapv_pointer = flag_wrapv_pointer;
+      flag_wrapv = 1;
+      flag_wrapv_pointer = 1;
+      r = range_op_handler (MULT_EXPR, type)->fold_range (type, op1, mult);
+      flag_wrapv = saved_flag_wrapv;
+      flag_wrapv_pointer = saved_flag_wrapv_pointer;
+      return r;
+    }
+
+  // Otherwise, invoke the generic fold routine.
+  return range_operator::fold_range (type, op1, op2);
+}
+
+value_range_base
+operator_lshift::wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+  int overflow_pos = sign == SIGNED ? prec - 1 : prec;
+  int bound_shift = overflow_pos - rh_ub.to_shwi ();
+  // If bound_shift == HOST_BITS_PER_WIDE_INT, the llshift can
+  // overflow.  However, for that to happen, rh.max needs to be zero,
+  // which means rh is a singleton range of zero, which means it
+  // should be handled by the lshift fold_range above.
+  wide_int bound = wi::set_bit_in_zero (bound_shift, prec);
+  wide_int complement = ~(bound - 1);
+  wide_int low_bound, high_bound;
+  bool in_bounds = false;
+
+  if (sign == UNSIGNED)
+    {
+      low_bound = bound;
+      high_bound = complement;
+      if (wi::ltu_p (lh_ub, low_bound))
+	{
+	  // [5, 6] << [1, 2] == [10, 24].
+	  // We're shifting out only zeroes, the value increases
+	  // monotonically.
+	  in_bounds = true;
+	}
+      else if (wi::ltu_p (high_bound, lh_lb))
+	{
+	  // [0xffffff00, 0xffffffff] << [1, 2]
+	  // == [0xfffffc00, 0xfffffffe].
+	  // We're shifting out only ones, the value decreases
+	  // monotonically.
+	  in_bounds = true;
+	}
+    }
+  else
+    {
+      // [-1, 1] << [1, 2] == [-4, 4]
+      low_bound = complement;
+      high_bound = bound;
+      if (wi::lts_p (lh_ub, high_bound)
+	  && wi::lts_p (low_bound, lh_lb))
+	{
+	  // For non-negative numbers, we're shifting out only zeroes,
+	  // the value increases monotonically.  For negative numbers,
+	  // we're shifting out only ones, the value decreases
+	  // monotomically.
+	  in_bounds = true;
+	}
+    }
+
+  if (in_bounds)
+    return wi_cross_product (type, lh_lb, lh_ub, rh_lb, rh_ub);
+
+  return value_range_base (type);
+}
+
+bool
+operator_lshift::wi_op_overflows (wide_int &res,
+				  tree type,
+				  const wide_int &w0,
+				  const wide_int &w1) const
+{
+  signop sign = TYPE_SIGN (type);
+  if (wi::neg_p (w1))
+    {
+      // It's unclear from the C standard whether shifts can overflow.
+      // The following code ignores overflow; perhaps a C standard
+      // interpretation ruling is needed.
+      res = wi::rshift (w0, -w1, sign);
+    }
+  else
+    res = wi::lshift (w0, w1);
+  return false;
+}
+
+
+class operator_rshift : public cross_product_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual value_range_base wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const;
+  virtual bool wi_op_overflows (wide_int &res,
+				tree type,
+				const wide_int &w0,
+				const wide_int &w1) const;
+} op_rshift;
+
+bool
+operator_rshift::wi_op_overflows (wide_int &res,
+				  tree type,
+				  const wide_int &w0,
+				  const wide_int &w1) const
+{
+  signop sign = TYPE_SIGN (type);
+  if (wi::neg_p (w1))
+    res = wi::lshift (w0, -w1);
+  else
+    {
+      // It's unclear from the C standard whether shifts can overflow.
+      // The following code ignores overflow; perhaps a C standard
+      // interpretation ruling is needed.
+      res = wi::rshift (w0, w1, sign);
+    }
+  return false;
+}
+
+value_range_base
+operator_rshift::fold_range (tree type,
+			     const value_range_base &op1,
+			     const value_range_base &op2) const
+{
+  value_range_base r;
+  if (undefined_shift_range_check (r, type, op2))
+    return r;
+
+  // Otherwise, invoke the generic fold routine.
+  return range_operator::fold_range (type, op1, op2);
+}
+
+value_range_base
+operator_rshift::wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const
+{
+  return wi_cross_product (type, lh_lb, lh_ub, rh_lb, rh_ub);
+}
+
+
+class operator_cast: public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+
+} op_convert;
+
+value_range_base
+operator_cast::fold_range (tree type ATTRIBUTE_UNUSED,
+			   const value_range_base &lh,
+			   const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  tree inner = lh.type ();
+  tree outer = rh.type ();
+  gcc_checking_assert (rh.varying_p ());
+  gcc_checking_assert (types_compatible_p (outer, type));
+  signop inner_sign = TYPE_SIGN (inner);
+  signop outer_sign = TYPE_SIGN (outer);
+  unsigned inner_prec = TYPE_PRECISION (inner);
+  unsigned outer_prec = TYPE_PRECISION (outer);
+
+  for (unsigned x = 0; x < lh.num_pairs (); ++x)
+    {
+      wide_int lh_lb = lh.lower_bound (x);
+      wide_int lh_ub = lh.upper_bound (x);
+
+      // If the conversion is not truncating we can convert the min
+      // and max values and canonicalize the resulting range.
+      // Otherwise, we can do the conversion if the size of the range
+      // is less than what the precision of the target type can
+      // represent.
+      if (outer_prec >= inner_prec
+	  || wi::rshift (wi::sub (lh_ub, lh_lb),
+			 wi::uhwi (outer_prec, inner_prec),
+			 inner_sign) == 0)
+	{
+	  wide_int min = wide_int::from (lh_lb, outer_prec, inner_sign);
+	  wide_int max = wide_int::from (lh_ub, outer_prec, inner_sign);
+	  if (!wi::eq_p (min, wi::min_value (outer_prec, outer_sign))
+	      || !wi::eq_p (max, wi::max_value (outer_prec, outer_sign)))
+	    {
+	      value_range_base tmp;
+	      tmp = create_possibly_reversed_range (type, min, max);
+	      r.union_ (tmp);
+	      continue;
+	    }
+	}
+      return value_range_base (type);
+    }
+  return r;
+}
+
+bool
+operator_cast::op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const
+{
+  tree lhs_type = lhs.type ();
+  gcc_checking_assert (types_compatible_p (op2.type(), type));
+
+  // If the precision of the LHS is smaller than the precision of the
+  // RHS, then there would be truncation of the value on the RHS, and
+  // so we can tell nothing about it.
+  if (TYPE_PRECISION (lhs_type) < TYPE_PRECISION (type))
+    {
+      // If we've been passed an actual value for the RHS rather than
+      // the type, see if it fits the LHS, and if so, then we can allow
+      // it.
+      r = op2;
+      r = fold_range (lhs_type, r, value_range_base (lhs_type));
+      r = fold_range (type, r, value_range_base (type));
+      if (r == op2)
+        {
+	  // We know the value of the RHS fits in the LHS type, so
+	  // convert the LHS and remove any values that arent in OP2.
+	  r = lhs;
+	  r = fold_range (type, r, value_range_base (type));
+	  r.intersect (op2);
+	  return true;
+	}
+      // Special case if the LHS is a boolean.  A 0 means the RHS is
+      // zero, and a 1 means the RHS is non-zero.
+      if (TREE_CODE (lhs_type) == BOOLEAN_TYPE)
+	{
+	  // If the LHS is unknown, the result is whatever op2 already is.
+	  if (!lhs.singleton_p ())
+	    {
+	      r = op2;
+	      return true;
+	    }
+	  // Boolean casts are weird in GCC. It's actually an implied
+	  // mask with 0x01, so all that is known is whether the
+	  // rightmost bit is 0 or 1, which implies the only value
+	  // *not* in the RHS is 0 or -1.
+	  unsigned prec = TYPE_PRECISION (type);
+	  if (lhs.zero_p ())
+	    r = value_range_base (VR_ANTI_RANGE, type,
+			wi::minus_one (prec), wi::minus_one (prec));
+	  else
+	    r = value_range_base (VR_ANTI_RANGE, type,
+			wi::zero (prec), wi::zero (prec));
+	  // And intersect it with what we know about op2.
+	  r.intersect (op2);
+	}
+      else
+	// Otherwise we'll have to assume it's whatever we know about op2.
+	r = op2;
+      return true;
+    }
+
+  // If the LHS precision is greater than the rhs precision, the LHS
+  // range is resticted to the range of the RHS by this
+  // assignment.
+  if (TYPE_PRECISION (lhs_type) > TYPE_PRECISION (type))
+    {
+      // Cast the range of the RHS to the type of the LHS.
+      value_range_base op_type (type);
+      op_type = fold_range (lhs_type, op_type, value_range_base (lhs_type));
+
+      // Intersect this with the LHS range will produce the RHS range.
+      r = range_intersect (lhs, op_type);
+    }
+  else
+    r = lhs;
+
+  // Cast the calculated range to the type of the RHS.
+  r = fold_range (type, r, value_range_base (type));
+  return true;
+}
+
+
+class operator_logical_and : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &lh,
+				       const value_range_base &rh) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_logical_and;
+
+
+value_range_base
+operator_logical_and::fold_range (tree type,
+				  const value_range_base &lh,
+				  const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  // 0 && anything is 0.
+  if ((wi::eq_p (lh.lower_bound (), 0) && wi::eq_p (lh.upper_bound (), 0))
+      || (wi::eq_p (lh.lower_bound (), 0) && wi::eq_p (rh.upper_bound (), 0)))
+    return range_false (type);
+
+  // To reach this point, there must be a logical 1 on each side, and
+  // the only remaining question is whether there is a zero or not.
+  if (lh.contains_p (build_zero_cst (lh.type ()))
+      || rh.contains_p (build_zero_cst (rh.type ())))
+    return range_true_and_false (type);
+
+  return range_true (type);
+}
+
+bool
+operator_logical_and::op1_range (value_range_base &r, tree type,
+				 const value_range_base &lhs,
+				 const value_range_base &op2 ATTRIBUTE_UNUSED) const
+{
+   switch (get_bool_state (r, lhs, type))
+     {
+       // A true result means both sides of the AND must be true.
+       case BRS_TRUE:
+         r = range_true (type);
+	 break;
+       // Any other result means only one side has to be false, the
+       // other side can be anything. So we cannott be sure of any
+       // result here.
+      default:
+	r = range_true_and_false (type);
+	break;
+    }
+  return true;
+}
+
+bool
+operator_logical_and::op2_range (value_range_base &r, tree type,
+				 const value_range_base &lhs,
+				 const value_range_base &op1) const
+{
+  return operator_logical_and::op1_range (r, type, lhs, op1);
+}
+
+
+class operator_bitwise_and : public range_operator
+{
+public:
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_bitwise_and;
+
+// Optimize BIT_AND_EXPR and BIT_IOR_EXPR in terms of a mask if
+// possible.  Basically, see if we can optimize:
+//
+//	[LB, UB] op Z
+//   into:
+//	[LB op Z, UB op Z]
+//
+// If the optimization was successful, accumulate the range in R and
+// return TRUE.
+
+static bool
+wi_optimize_and_or (value_range_base &r,
+		    enum tree_code code,
+		    tree type,
+		    const wide_int &lh_lb, const wide_int &lh_ub,
+		    const wide_int &rh_lb, const wide_int &rh_ub)
+{
+  // Calculate the singleton mask among the ranges, if any.
+  wide_int lower_bound, upper_bound, mask;
+  if (wi::eq_p (rh_lb, rh_ub))
+    {
+      mask = rh_lb;
+      lower_bound = lh_lb;
+      upper_bound = lh_ub;
+    }
+  else if (wi::eq_p (lh_lb, lh_ub))
+    {
+      mask = lh_lb;
+      lower_bound = rh_lb;
+      upper_bound = rh_ub;
+    }
+  else
+    return false;
+
+  // If Z is a constant which (for op | its bitwise not) has n
+  // consecutive least significant bits cleared followed by m 1
+  // consecutive bits set immediately above it and either
+  // m + n == precision, or (x >> (m + n)) == (y >> (m + n)).
+  //
+  // The least significant n bits of all the values in the range are
+  // cleared or set, the m bits above it are preserved and any bits
+  // above these are required to be the same for all values in the
+  // range.
+  wide_int w = mask;
+  int m = 0, n = 0;
+  if (code == BIT_IOR_EXPR)
+    w = ~w;
+  if (wi::eq_p (w, 0))
+    n = w.get_precision ();
+  else
+    {
+      n = wi::ctz (w);
+      w = ~(w | wi::mask (n, false, w.get_precision ()));
+      if (wi::eq_p (w, 0))
+	m = w.get_precision () - n;
+      else
+	m = wi::ctz (w) - n;
+    }
+  wide_int new_mask = wi::mask (m + n, true, w.get_precision ());
+  if ((new_mask & lower_bound) != (new_mask & upper_bound))
+    return false;
+
+  wide_int res_lb, res_ub;
+  if (code == BIT_AND_EXPR)
+    {
+      res_lb = wi::bit_and (lower_bound, mask);
+      res_ub = wi::bit_and (upper_bound, mask);
+    }
+  else if (code == BIT_IOR_EXPR)
+    {
+      res_lb = wi::bit_or (lower_bound, mask);
+      res_ub = wi::bit_or (upper_bound, mask);
+    }
+  else
+    gcc_unreachable ();
+  r = create_range_with_overflow (type, res_lb, res_ub);
+  return true;
+}
+
+// For range [LB, UB] compute two wide_int bit masks.
+//
+// In the MAYBE_NONZERO bit mask, if some bit is unset, it means that
+// for all numbers in the range the bit is 0, otherwise it might be 0
+// or 1.
+//
+// In the MUSTBE_NONZERO bit mask, if some bit is set, it means that
+// for all numbers in the range the bit is 1, otherwise it might be 0
+// or 1.
+
+static void
+wi_set_zero_nonzero_bits (tree type,
+			  const wide_int &lb, const wide_int &ub,
+			  wide_int &maybe_nonzero,
+			  wide_int &mustbe_nonzero)
+{
+  signop sign = TYPE_SIGN (type);
+
+  if (wi::eq_p (lb, ub))
+    maybe_nonzero = mustbe_nonzero = lb;
+  else if (wi::ge_p (lb, 0, sign) || wi::lt_p (ub, 0, sign))
+    {
+      wide_int xor_mask = lb ^ ub;
+      maybe_nonzero = lb | ub;
+      mustbe_nonzero = lb & ub;
+      if (xor_mask != 0)
+	{
+	  wide_int mask = wi::mask (wi::floor_log2 (xor_mask), false,
+				    maybe_nonzero.get_precision ());
+	  maybe_nonzero = maybe_nonzero | mask;
+	  mustbe_nonzero = wi::bit_and_not (mustbe_nonzero, mask);
+	}
+    }
+  else
+    {
+      maybe_nonzero = wi::minus_one (lb.get_precision ());
+      mustbe_nonzero = wi::zero (lb.get_precision ());
+    }
+}
+
+value_range_base
+operator_bitwise_and::wi_fold (tree type,
+			       const wide_int &lh_lb,
+			       const wide_int &lh_ub,
+			       const wide_int &rh_lb,
+			       const wide_int &rh_ub) const
+{
+  value_range_base r;
+  if (wi_optimize_and_or (r, BIT_AND_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
+    return r;
+
+  wide_int maybe_nonzero_lh, mustbe_nonzero_lh;
+  wide_int maybe_nonzero_rh, mustbe_nonzero_rh;
+  wi_set_zero_nonzero_bits (type, lh_lb, lh_ub,
+			    maybe_nonzero_lh, mustbe_nonzero_lh);
+  wi_set_zero_nonzero_bits (type, rh_lb, rh_ub,
+			    maybe_nonzero_rh, mustbe_nonzero_rh);
+
+  wide_int new_lb = mustbe_nonzero_lh & mustbe_nonzero_rh;
+  wide_int new_ub = maybe_nonzero_lh & maybe_nonzero_rh;
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+  // If both input ranges contain only negative values, we can
+  // truncate the result range maximum to the minimum of the
+  // input range maxima.
+  if (wi::lt_p (lh_ub, 0, sign) && wi::lt_p (rh_ub, 0, sign))
+    {
+      new_ub = wi::min (new_ub, lh_ub, sign);
+      new_ub = wi::min (new_ub, rh_ub, sign);
+    }
+  // If either input range contains only non-negative values
+  // we can truncate the result range maximum to the respective
+  // maximum of the input range.
+  if (wi::ge_p (lh_lb, 0, sign))
+    new_ub = wi::min (new_ub, lh_ub, sign);
+  if (wi::ge_p (rh_lb, 0, sign))
+    new_ub = wi::min (new_ub, rh_ub, sign);
+  // PR68217: In case of signed & sign-bit-CST should
+  // result in [-INF, 0] instead of [-INF, INF].
+  if (wi::gt_p (new_lb, new_ub, sign))
+    {
+      wide_int sign_bit = wi::set_bit_in_zero (prec - 1, prec);
+      if (sign == SIGNED
+	  && ((wi::eq_p (lh_lb, lh_ub)
+	       && !wi::cmps (lh_lb, sign_bit))
+	      || (wi::eq_p (rh_lb, rh_ub)
+		  && !wi::cmps (rh_lb, sign_bit))))
+	{
+	  new_lb = wi::min_value (prec, sign);
+	  new_ub = wi::zero (prec);
+	}
+    }
+  // If the limits got swapped around, return varying.
+  if (wi::gt_p (new_lb, new_ub,sign))
+    return value_range_base (type);
+
+  return create_range_with_overflow (type, new_lb, new_ub);
+}
+
+bool
+operator_bitwise_and::op1_range (value_range_base &r, tree type,
+				 const value_range_base &lhs,
+				 const value_range_base &op2) const
+{
+  // If this is really a logical wi_fold, call that.
+  if (types_compatible_p (type, boolean_type_node))
+    return op_logical_and.op1_range (r, type, lhs, op2);
+
+  // For now do nothing with bitwise AND of value_range's.
+  r.set_varying (type);
+  return true;
+}
+
+bool
+operator_bitwise_and::op2_range (value_range_base &r, tree type,
+				 const value_range_base &lhs,
+				 const value_range_base &op1) const
+{
+  return operator_bitwise_and::op1_range (r, type, lhs, op1);
+}
+
+
+class operator_logical_or : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &lh,
+				       const value_range_base &rh) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+} op_logical_or;
+
+value_range_base
+operator_logical_or::fold_range (tree type ATTRIBUTE_UNUSED,
+				 const value_range_base &lh,
+				 const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  return range_union (lh, rh);
+}
+
+bool
+operator_logical_or::op1_range (value_range_base &r, tree type,
+				const value_range_base &lhs,
+				const value_range_base &op2 ATTRIBUTE_UNUSED) const
+{
+   switch (get_bool_state (r, lhs, type))
+     {
+       // A false result means both sides of the OR must be false.
+       case BRS_FALSE:
+         r = range_false (type);
+	 break;
+       // Any other result means only one side has to be true, the
+       // other side can be anything. so we can't be sure of any result
+       // here.
+      default:
+	r = range_true_and_false (type);
+	break;
+    }
+  return true;
+}
+
+bool
+operator_logical_or::op2_range (value_range_base &r, tree type,
+				const value_range_base &lhs,
+				const value_range_base &op1) const
+{
+  return operator_logical_or::op1_range (r, type, lhs, op1);
+}
+
+
+class operator_bitwise_or : public range_operator
+{
+public:
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_bitwise_or;
+
+value_range_base
+operator_bitwise_or::wi_fold (tree type,
+			      const wide_int &lh_lb,
+			      const wide_int &lh_ub,
+			      const wide_int &rh_lb,
+			      const wide_int &rh_ub) const
+{
+  value_range_base r;
+  if (wi_optimize_and_or (r, BIT_IOR_EXPR, type, lh_lb, lh_ub, rh_lb, rh_ub))
+    return r;
+
+  wide_int maybe_nonzero_lh, mustbe_nonzero_lh;
+  wide_int maybe_nonzero_rh, mustbe_nonzero_rh;
+  wi_set_zero_nonzero_bits (type, lh_lb, lh_ub,
+			    maybe_nonzero_lh, mustbe_nonzero_lh);
+  wi_set_zero_nonzero_bits (type, rh_lb, rh_ub,
+			    maybe_nonzero_rh, mustbe_nonzero_rh);
+  wide_int new_lb = mustbe_nonzero_lh | mustbe_nonzero_rh;
+  wide_int new_ub = maybe_nonzero_lh | maybe_nonzero_rh;
+  signop sign = TYPE_SIGN (type);
+  // If the input ranges contain only positive values we can
+  // truncate the minimum of the result range to the maximum
+  // of the input range minima.
+  if (wi::ge_p (lh_lb, 0, sign)
+      && wi::ge_p (rh_lb, 0, sign))
+    {
+      new_lb = wi::max (new_lb, lh_lb, sign);
+      new_lb = wi::max (new_lb, rh_lb, sign);
+    }
+  // If either input range contains only negative values
+  // we can truncate the minimum of the result range to the
+  // respective minimum range.
+  if (wi::lt_p (lh_ub, 0, sign))
+    new_lb = wi::max (new_lb, lh_lb, sign);
+  if (wi::lt_p (rh_ub, 0, sign))
+    new_lb = wi::max (new_lb, rh_lb, sign);
+  // If the limits got swapped around, return varying.
+  if (wi::gt_p (new_lb, new_ub,sign))
+    return value_range_base (type);
+
+  return create_range_with_overflow (type, new_lb, new_ub);
+}
+
+bool
+operator_bitwise_or::op1_range (value_range_base &r, tree type,
+				const value_range_base &lhs,
+				const value_range_base &op2) const
+{
+  // If this is really a logical wi_fold, call that.
+  if (types_compatible_p (type, boolean_type_node))
+    return op_logical_or.op1_range (r, type, lhs, op2);
+
+  // For now do nothing with bitwise OR of value_range's.
+  r.set_varying (type);
+  return true;
+}
+
+bool
+operator_bitwise_or::op2_range (value_range_base &r, tree type,
+				const value_range_base &lhs,
+				const value_range_base &op1) const
+{
+  return operator_bitwise_or::op1_range (r, type, lhs, op1);
+}
+
+
+class operator_bitwise_xor : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_bitwise_xor;
+
+value_range_base
+operator_bitwise_xor::wi_fold (tree type,
+			       const wide_int &lh_lb,
+			       const wide_int &lh_ub,
+			       const wide_int &rh_lb,
+			       const wide_int &rh_ub) const
+{
+  signop sign = TYPE_SIGN (type);
+  wide_int maybe_nonzero_lh, mustbe_nonzero_lh;
+  wide_int maybe_nonzero_rh, mustbe_nonzero_rh;
+  wi_set_zero_nonzero_bits (type, lh_lb, lh_ub,
+			    maybe_nonzero_lh, mustbe_nonzero_lh);
+  wi_set_zero_nonzero_bits (type, rh_lb, rh_ub,
+			    maybe_nonzero_rh, mustbe_nonzero_rh);
+
+  wide_int result_zero_bits = ((mustbe_nonzero_lh & mustbe_nonzero_rh)
+			       | ~(maybe_nonzero_lh | maybe_nonzero_rh));
+  wide_int result_one_bits
+    = (wi::bit_and_not (mustbe_nonzero_lh, maybe_nonzero_rh)
+       | wi::bit_and_not (mustbe_nonzero_rh, maybe_nonzero_lh));
+  wide_int new_ub = ~result_zero_bits;
+  wide_int new_lb = result_one_bits;
+
+  // If the range has all positive or all negative values, the result
+  // is better than VARYING.
+  if (wi::lt_p (new_lb, 0, sign) || wi::ge_p (new_ub, 0, sign))
+    return create_range_with_overflow (type, new_lb, new_ub);
+
+  return value_range_base (type);
+}
+
+
+class operator_trunc_mod : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+} op_trunc_mod;
+
+value_range_base
+operator_trunc_mod::wi_fold (tree type,
+			     const wide_int &lh_lb,
+			     const wide_int &lh_ub,
+			     const wide_int &rh_lb,
+			     const wide_int &rh_ub) const
+{
+  wide_int new_lb, new_ub, tmp;
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+
+  // Mod 0 is undefined.  Return undefined.
+  if (wi_zero_p (type, rh_lb, rh_ub))
+    return value_range_base ();
+
+  // ABS (A % B) < ABS (B) and either 0 <= A % B <= A or A <= A % B <= 0.
+  new_ub = rh_ub - 1;
+  if (sign == SIGNED)
+    {
+      tmp = -1 - rh_lb;
+      new_ub = wi::smax (new_ub, tmp);
+    }
+
+  if (sign == UNSIGNED)
+    new_lb = wi::zero (prec);
+  else
+    {
+      new_lb = -new_ub;
+      tmp = lh_lb;
+      if (wi::gts_p (tmp, 0))
+	tmp = wi::zero (prec);
+      new_lb = wi::smax (new_lb, tmp);
+    }
+  tmp = lh_ub;
+  if (sign == SIGNED && wi::neg_p (tmp))
+    tmp = wi::zero (prec);
+  new_ub = wi::min (new_ub, tmp, sign);
+
+  return create_range_with_overflow (type, new_lb, new_ub);
+}
+
+
+class operator_logical_not : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &lh,
+				       const value_range_base &rh) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_logical_not;
+
+// Folding a logical NOT, oddly enough, involves doing nothing on the
+// forward pass through.  During the initial walk backwards, the
+// logical NOT reversed the desired outcome on the way back, so on the
+// way forward all we do is pass the range forward.
+//
+// 	b_2 = x_1 < 20
+// 	b_3 = !b_2
+// 	if (b_3)
+//  to determine the TRUE branch, walking  backward
+//       if (b_3)		if ([1,1])
+//       b_3 = !b_2		[1,1] = ![0,0]
+// 	 b_2 = x_1 < 20		[0,0] = x_1 < 20,   false, so x_1 == [20, 255]
+//   which is the result we are looking for.. so.. pass it through.
+
+value_range_base
+operator_logical_not::fold_range (tree type,
+				  const value_range_base &lh,
+				  const value_range_base &rh ATTRIBUTE_UNUSED) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  if (lh.varying_p () || lh.undefined_p ())
+    r = lh;
+  else
+    r = range_invert (lh);
+  gcc_checking_assert (lh.type() == type);
+  return r;
+}
+
+bool
+operator_logical_not::op1_range (value_range_base &r,
+				 tree type ATTRIBUTE_UNUSED,
+				 const value_range_base &lhs,
+				 const value_range_base &op2 ATTRIBUTE_UNUSED) const
+{
+  if (lhs.varying_p () || lhs.undefined_p ())
+    r = lhs;
+  else
+    r = range_invert (lhs);
+  return true;
+}
+
+
+class operator_bitwise_not : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &lh,
+				       const value_range_base &rh) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_bitwise_not;
+
+value_range_base
+operator_bitwise_not::fold_range (tree type,
+				  const value_range_base &lh,
+				  const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  // ~X is simply -1 - X.
+  value_range_base minusone (type,
+		   wi::minus_one (TYPE_PRECISION (type)),
+		   wi::minus_one (TYPE_PRECISION (type)));
+  r = range_op_handler (MINUS_EXPR, type)->fold_range (type, minusone, lh);
+  return r;
+}
+
+bool
+operator_bitwise_not::op1_range (value_range_base &r, tree type,
+				 const value_range_base &lhs,
+				 const value_range_base &op2) const
+{
+  // ~X is -1 - X and since bitwise NOT is involutary...do it again.
+  r = fold_range (type, lhs, op2);
+  return true;
+}
+
+
+class operator_cst : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+} op_integer_cst;
+
+value_range_base
+operator_cst::fold_range (tree type ATTRIBUTE_UNUSED,
+			  const value_range_base &lh,
+			  const value_range_base &rh ATTRIBUTE_UNUSED) const
+{
+  return lh;
+}
+
+
+class operator_identity : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_identity;
+
+value_range_base
+operator_identity::fold_range (tree type ATTRIBUTE_UNUSED,
+			       const value_range_base &lh,
+			       const value_range_base &rh ATTRIBUTE_UNUSED) const
+{
+  return lh;
+}
+
+bool
+operator_identity::op1_range (value_range_base &r, tree type ATTRIBUTE_UNUSED,
+			      const value_range_base &lhs,
+			      const value_range_base &op2 ATTRIBUTE_UNUSED) const
+{
+  r = lhs;
+  return true;
+}
+
+
+class operator_abs : public range_operator
+{
+ public:
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_abs;
+
+value_range_base
+operator_abs::wi_fold (tree type,
+		       const wide_int &lh_lb, const wide_int &lh_ub,
+		       const wide_int &rh_lb ATTRIBUTE_UNUSED,
+		       const wide_int &rh_ub ATTRIBUTE_UNUSED) const
+{
+  wide_int min, max;
+  signop sign = TYPE_SIGN (type);
+  unsigned prec = TYPE_PRECISION (type);
+
+  // Pass through LH for the easy cases.
+  if (sign == UNSIGNED || wi::ge_p (lh_lb, 0, sign))
+    return value_range_base (type, lh_lb, lh_ub);
+
+  // -TYPE_MIN_VALUE = TYPE_MIN_VALUE with flag_wrapv so we can't get
+  // a useful range.
+  wide_int min_value = wi::min_value (prec, sign);
+  wide_int max_value = wi::max_value (prec, sign);
+  if (!TYPE_OVERFLOW_UNDEFINED (type) && wi::eq_p (lh_lb, min_value))
+    return value_range_base (type);
+
+  // ABS_EXPR may flip the range around, if the original range
+  // included negative values.
+  if (wi::eq_p (lh_lb, min_value))
+    min = max_value;
+  else
+    min = wi::abs (lh_lb);
+  if (wi::eq_p (lh_ub, min_value))
+    max = max_value;
+  else
+    max = wi::abs (lh_ub);
+
+  // If the range contains zero then we know that the minimum value in the
+  // range will be zero.
+  if (wi::le_p (lh_lb, 0, sign) && wi::ge_p (lh_ub, 0, sign))
+    {
+      if (wi::gt_p (min, max, sign))
+	max = min;
+      min = wi::zero (prec);
+    }
+  else
+    {
+      // If the range was reversed, swap MIN and MAX.
+      if (wi::gt_p (min, max, sign))
+	std::swap (min, max);
+    }
+
+  // If the new range has its limits swapped around (MIN > MAX), then
+  // the operation caused one of them to wrap around.  The only thing
+  // we know is that the result is positive.
+  if (wi::gt_p (min, max, sign))
+    {
+      min = wi::zero (prec);
+      max = max_value;
+    }
+  return value_range_base (type, min, max);
+}
+
+bool
+operator_abs::op1_range (value_range_base &r, tree type,
+			 const value_range_base &lhs,
+			 const value_range_base &op2) const
+{
+  if (empty_range_check (r, lhs, op2))
+    return true;
+  if (TYPE_UNSIGNED (type))
+    {
+      r = lhs;
+      return true;
+    }
+  // Start with the positives because negatives are an impossible result.
+  value_range_base positives = range_positives (type);
+  positives.intersect (lhs);
+  r = positives;
+  // Then add the negative of each pair:
+  // ABS(op1) = [5,20] would yield op1 => [-20,-5][5,20].
+  for (unsigned i = 0; i < positives.num_pairs (); ++i)
+    r.union_ (value_range_base (type,
+		      -positives.upper_bound (i),
+		      -positives.lower_bound (i)));
+  return true;
+}
+
+
+class operator_absu : public range_operator
+{
+ public:
+  virtual value_range_base wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const;
+} op_absu;
+
+value_range_base
+operator_absu::wi_fold (tree type,
+			const wide_int &lh_lb, const wide_int &lh_ub,
+			const wide_int &rh_lb ATTRIBUTE_UNUSED,
+			const wide_int &rh_ub ATTRIBUTE_UNUSED) const
+{
+  wide_int new_lb, new_ub;
+
+  // Pass through VR0 the easy cases.
+  if (wi::ges_p (lh_lb, 0))
+    {
+      new_lb = lh_lb;
+      new_ub = lh_ub;
+    }
+  else
+    {
+      new_lb = wi::abs (lh_lb);
+      new_ub = wi::abs (lh_ub);
+
+      // If the range contains zero then we know that the minimum
+      // value in the range will be zero.
+      if (wi::ges_p (lh_ub, 0))
+	{
+	  if (wi::gtu_p (new_lb, new_ub))
+	    new_ub = new_lb;
+	  new_lb = wi::zero (TYPE_PRECISION (type));
+	}
+      else
+	std::swap (new_lb, new_ub);
+    }
+
+  gcc_checking_assert (TYPE_UNSIGNED (type));
+  return value_range_base (type, new_lb, new_ub);
+}
+
+
+class operator_negate : public range_operator
+{
+ public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_negate;
+
+value_range_base
+operator_negate::fold_range (tree type,
+			     const value_range_base &lh,
+			     const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+  // -X is simply 0 - X.
+  return
+    range_op_handler (MINUS_EXPR, type)->fold_range (type,
+						     range_zero (type), lh);
+}
+
+bool
+operator_negate::op1_range (value_range_base &r, tree type,
+			    const value_range_base &lhs,
+			    const value_range_base &op2) const
+{
+  // NEGATE is involutory.
+  r = fold_range (type, lhs, op2);
+  return true;
+}
+
+
+class operator_addr_expr : public range_operator
+{
+public:
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &op1,
+				       const value_range_base &op2) const;
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+} op_addr;
+
+value_range_base
+operator_addr_expr::fold_range (tree type,
+				const value_range_base &lh,
+				const value_range_base &rh) const
+{
+  value_range_base r;
+  if (empty_range_check (r, lh, rh))
+    return r;
+
+  // Return a non-null pointer of the LHS type (passed in op2).
+  if (lh.zero_p ())
+    return range_zero (type);
+  if (!lh.contains_p (build_zero_cst (lh.type ())))
+    return range_nonzero (type);
+  return value_range_base (type);
+}
+
+bool
+operator_addr_expr::op1_range (value_range_base &r, tree type,
+			       const value_range_base &lhs,
+			       const value_range_base &op2) const
+{
+  r = operator_addr_expr::fold_range (type, lhs, op2);
+  return true;
+}
+
+
+class pointer_plus_operator : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+                          const wide_int &lh_lb, const wide_int &lh_ub,
+                          const wide_int &rh_lb, const wide_int &rh_ub) const;
+} op_pointer_plus;
+
+value_range_base
+pointer_plus_operator::wi_fold (tree type,
+				const wide_int &lh_lb,
+				const wide_int &lh_ub,
+				const wide_int &rh_lb,
+				const wide_int &rh_ub) const
+{
+  // For pointer types, we are really only interested in asserting
+  // whether the expression evaluates to non-NULL.
+  //
+  // With -fno-delete-null-pointer-checks we need to be more
+  // conservative.  As some object might reside at address 0,
+  // then some offset could be added to it and the same offset
+  // subtracted again and the result would be NULL.
+  // E.g.
+  // static int a[12]; where &a[0] is NULL and
+  // ptr = &a[6];
+  // ptr -= 6;
+  // ptr will be NULL here, even when there is POINTER_PLUS_EXPR
+  // where the first range doesn't include zero and the second one
+  // doesn't either.  As the second operand is sizetype (unsigned),
+  // consider all ranges where the MSB could be set as possible
+  // subtractions where the result might be NULL.
+  if ((!wi_includes_zero_p (type, lh_lb, lh_ub)
+       || !wi_includes_zero_p (type, rh_lb, rh_ub))
+      && !TYPE_OVERFLOW_WRAPS (type)
+      && (flag_delete_null_pointer_checks
+	  || !wi::sign_mask (rh_ub)))
+    return range_nonzero (type);
+  if (lh_lb == lh_ub && lh_lb == 0
+      && rh_lb == rh_ub && rh_lb == 0)
+    return range_zero (type);
+  return value_range_base (type);
+}
+
+
+class pointer_min_max_operator : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+                          const wide_int &lh_lb, const wide_int &lh_ub,
+                          const wide_int &rh_lb, const wide_int &rh_ub) const;
+} op_ptr_min_max;
+
+value_range_base
+pointer_min_max_operator::wi_fold (tree type,
+				   const wide_int &lh_lb,
+				   const wide_int &lh_ub,
+				   const wide_int &rh_lb,
+				   const wide_int &rh_ub) const
+{
+  // For MIN/MAX expressions with pointers, we only care about
+  // nullness.  If both are non null, then the result is nonnull.
+  // If both are null, then the result is null.  Otherwise they
+  // are varying.
+  if (!wi_includes_zero_p (type, lh_lb, lh_ub)
+      && !wi_includes_zero_p (type, rh_lb, rh_ub))
+    return range_nonzero (type);
+  if (wi_zero_p (type, lh_lb, lh_ub) && wi_zero_p (type, rh_lb, rh_ub))
+    return range_zero (type);
+  return value_range_base (type);
+}
+
+
+class pointer_and_operator : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const;
+} op_pointer_and;
+
+value_range_base
+pointer_and_operator::wi_fold (tree type,
+			       const wide_int &lh_lb,
+			       const wide_int &lh_ub,
+			       const wide_int &rh_lb,
+			       const wide_int &rh_ub) const
+{
+  // For pointer types, we are really only interested in asserting
+  // whether the expression evaluates to non-NULL.
+  if (!wi_includes_zero_p (type, lh_lb, lh_ub)
+      && !wi_includes_zero_p (type, rh_lb, rh_ub))
+    return range_nonzero (type);
+  if (wi_zero_p (type, lh_lb, lh_ub) || wi_zero_p (type, lh_lb, lh_ub))
+    return range_zero (type);
+
+  return value_range_base (type);
+}
+
+
+class pointer_or_operator : public range_operator
+{
+public:
+  virtual value_range_base wi_fold (tree type,
+			  const wide_int &lh_lb, const wide_int &lh_ub,
+			  const wide_int &rh_lb, const wide_int &rh_ub) const;
+} op_pointer_or;
+
+value_range_base
+pointer_or_operator::wi_fold (tree type,
+			      const wide_int &lh_lb,
+			      const wide_int &lh_ub,
+			      const wide_int &rh_lb,
+			      const wide_int &rh_ub) const
+{
+  // For pointer types, we are really only interested in asserting
+  // whether the expression evaluates to non-NULL.
+  if (!wi_includes_zero_p (type, lh_lb, lh_ub)
+      && !wi_includes_zero_p (type, rh_lb, rh_ub))
+    return range_nonzero (type);
+  if (wi_zero_p (type, lh_lb, lh_ub) && wi_zero_p (type, rh_lb, rh_ub))
+    return range_zero (type);
+  return value_range_base (type);
+}
+
+// This implements the range operator tables as local objects in this file.
+
+class range_op_table
+{
+public:
+  inline range_operator *operator[] (enum tree_code code);
+protected:
+  void set (enum tree_code code, range_operator &op);
+private:
+  range_operator *m_range_tree[MAX_TREE_CODES];
+};
+
+// Return a pointer to the range_operator instance, if there is one
+// associated with tree_code CODE.
+
+range_operator *
+range_op_table::operator[] (enum tree_code code)
+{
+  gcc_checking_assert (code > 0 && code < MAX_TREE_CODES);
+  return m_range_tree[code];
+}
+
+// Add OP to the handler table for CODE.
+
+void
+range_op_table::set (enum tree_code code, range_operator &op)
+{
+  gcc_checking_assert (m_range_tree[code] == NULL);
+  m_range_tree[code] = &op;
+}
+
+// Instantiate a range op table for integral operations.
+
+class integral_table : public range_op_table
+{
+public:
+  integral_table ();
+} integral_tree_table;
+
+integral_table::integral_table ()
+{
+  set (EQ_EXPR, op_equal);
+  set (NE_EXPR, op_not_equal);
+  set (LT_EXPR, op_lt);
+  set (LE_EXPR, op_le);
+  set (GT_EXPR, op_gt);
+  set (GE_EXPR, op_ge);
+  set (PLUS_EXPR, op_plus);
+  set (MINUS_EXPR, op_minus);
+  set (MIN_EXPR, op_min);
+  set (MAX_EXPR, op_max);
+  set (MULT_EXPR, op_mult);
+  set (TRUNC_DIV_EXPR, op_trunc_div);
+  set (FLOOR_DIV_EXPR, op_floor_div);
+  set (ROUND_DIV_EXPR, op_round_div);
+  set (CEIL_DIV_EXPR, op_ceil_div);
+  set (EXACT_DIV_EXPR, op_exact_div);
+  set (LSHIFT_EXPR, op_lshift);
+  set (RSHIFT_EXPR, op_rshift);
+  set (NOP_EXPR, op_convert);
+  set (CONVERT_EXPR, op_convert);
+  set (TRUTH_AND_EXPR, op_logical_and);
+  set (BIT_AND_EXPR, op_bitwise_and);
+  set (TRUTH_OR_EXPR, op_logical_or);
+  set (BIT_IOR_EXPR, op_bitwise_or);
+  set (BIT_XOR_EXPR, op_bitwise_xor);
+  set (TRUNC_MOD_EXPR, op_trunc_mod);
+  set (TRUTH_NOT_EXPR, op_logical_not);
+  set (BIT_NOT_EXPR, op_bitwise_not);
+  set (INTEGER_CST, op_integer_cst);
+  set (SSA_NAME, op_identity);
+  set (PAREN_EXPR, op_identity);
+  set (OBJ_TYPE_REF, op_identity);
+  set (ABS_EXPR, op_abs);
+  set (ABSU_EXPR, op_absu);
+  set (NEGATE_EXPR, op_negate);
+  set (ADDR_EXPR, op_addr);
+}
+
+// Instantiate a range op table for pointer operations.
+
+class pointer_table : public range_op_table
+{
+public:
+  pointer_table ();
+} pointer_tree_table;
+
+pointer_table::pointer_table ()
+{
+  set (BIT_AND_EXPR, op_pointer_and);
+  set (BIT_IOR_EXPR, op_pointer_or);
+  set (MIN_EXPR, op_ptr_min_max);
+  set (MAX_EXPR, op_ptr_min_max);
+  set (POINTER_PLUS_EXPR, op_pointer_plus);
+
+  set (EQ_EXPR, op_equal);
+  set (NE_EXPR, op_not_equal);
+  set (LT_EXPR, op_lt);
+  set (LE_EXPR, op_le);
+  set (GT_EXPR, op_gt);
+  set (GE_EXPR, op_ge);
+  set (SSA_NAME, op_identity);
+  set (ADDR_EXPR, op_addr);
+  set (NOP_EXPR, op_convert);
+  set (CONVERT_EXPR, op_convert);
+
+  set (BIT_NOT_EXPR, op_bitwise_not);
+  set (BIT_XOR_EXPR, op_bitwise_xor);
+}
+
+// The tables are hidden and accessed via a simple extern function.
+
+range_operator *
+range_op_handler (enum tree_code code, tree type)
+{
+  // First check if there is apointer specialization.
+  if (POINTER_TYPE_P (type))
+    return pointer_tree_table[code];
+  return integral_tree_table[code];
+}
+
+// Cast the range in R to TYPE.
+
+void
+range_cast (value_range_base &r, tree type)
+{
+  range_operator *op = range_op_handler (CONVERT_EXPR, type);
+  r = op->fold_range (type, r, value_range_base (type));
+}
+
+#if CHECKING_P
+#include "selftest.h"
+#include "stor-layout.h"
+
+// Ideally this should go in namespace selftest, but range_tests
+// needs to be a friend of class value_range_base so it can access
+// value_range_base::m_max_pairs.
+
+#define INT(N) build_int_cst (integer_type_node, (N))
+#define UINT(N) build_int_cstu (unsigned_type_node, (N))
+#define INT16(N) build_int_cst (short_integer_type_node, (N))
+#define UINT16(N) build_int_cstu (short_unsigned_type_node, (N))
+#define INT64(N) build_int_cstu (long_long_integer_type_node, (N))
+#define UINT64(N) build_int_cstu (long_long_unsigned_type_node, (N))
+#define UINT128(N) build_int_cstu (u128_type, (N))
+#define UCHAR(N) build_int_cstu (unsigned_char_type_node, (N))
+#define SCHAR(N) build_int_cst (signed_char_type_node, (N))
+
+#define RANGE3(A,B,C,D,E,F)		\
+( i1 = value_range_base (INT (A), INT (B)),	\
+  i2 = value_range_base (INT (C), INT (D)),	\
+  i3 = value_range_base (INT (E), INT (F)),	\
+  i1.union_ (i2),			\
+  i1.union_ (i3),			\
+  i1 )
+
+// Run all of the selftests within this file.
+
+void
+range_tests ()
+{
+  tree u128_type = build_nonstandard_integer_type (128, /*unsigned=*/1);
+  value_range_base i1, i2, i3;
+  value_range_base r0, r1, rold;
+
+  // Test that NOT(255) is [0..254] in 8-bit land.
+  value_range_base not_255 (VR_ANTI_RANGE, UCHAR (255), UCHAR (255));
+  ASSERT_TRUE (not_255 == value_range_base (UCHAR (0), UCHAR (254)));
+
+  // Test that NOT(0) is [1..255] in 8-bit land.
+  value_range_base not_zero = range_nonzero (unsigned_char_type_node);
+  ASSERT_TRUE (not_zero == value_range_base (UCHAR (1), UCHAR (255)));
+
+  // Check that [0,127][0x..ffffff80,0x..ffffff]
+  //  => ~[128, 0x..ffffff7f].
+  r0 = value_range_base (UINT128 (0), UINT128 (127));
+  tree high = build_minus_one_cst (u128_type);
+  // low = -1 - 127 => 0x..ffffff80.
+  tree low = fold_build2 (MINUS_EXPR, u128_type, high, UINT128(127));
+  r1 = value_range_base (low, high); // [0x..ffffff80, 0x..ffffffff]
+  // r0 = [0,127][0x..ffffff80,0x..fffffff].
+  r0.union_ (r1);
+  // r1 = [128, 0x..ffffff7f].
+  r1 = value_range_base (UINT128(128),
+			 fold_build2 (MINUS_EXPR, u128_type,
+				      build_minus_one_cst (u128_type),
+				      UINT128(128)));
+  r0.invert ();
+  ASSERT_TRUE (r0 == r1);
+
+  r0.set_varying (integer_type_node);
+  tree minint = wide_int_to_tree (integer_type_node, r0.lower_bound ());
+  tree maxint = wide_int_to_tree (integer_type_node, r0.upper_bound ());
+
+  r0.set_varying (short_integer_type_node);
+  tree minshort = wide_int_to_tree (short_integer_type_node, r0.lower_bound ());
+  tree maxshort = wide_int_to_tree (short_integer_type_node, r0.upper_bound ());
+
+  r0.set_varying (unsigned_type_node);
+  tree maxuint = wide_int_to_tree (unsigned_type_node, r0.upper_bound ());
+
+  // Check that ~[0,5] => [6,MAX] for unsigned int.
+  r0 = value_range_base (UINT (0), UINT (5));
+  r0.invert ();
+  ASSERT_TRUE (r0 == value_range_base (UINT(6), maxuint));
+
+  // Check that ~[10,MAX] => [0,9] for unsigned int.
+  r0 = value_range_base (VR_RANGE, UINT(10), maxuint);
+  r0.invert ();
+  ASSERT_TRUE (r0 == value_range_base (UINT (0), UINT (9)));
+
+  // Check that ~[0,5] => [6,MAX] for unsigned 128-bit numbers.
+  r0 = value_range_base (VR_ANTI_RANGE, UINT128 (0), UINT128 (5));
+  r1 = value_range_base (UINT128(6), build_minus_one_cst (u128_type));
+  ASSERT_TRUE (r0 == r1);
+
+  // Check that [~5] is really [-MIN,4][6,MAX].
+  r0 = value_range_base (VR_ANTI_RANGE, INT (5), INT (5));
+  r1 = value_range_base (minint, INT (4));
+  r1.union_ (value_range_base (INT (6), maxint));
+  ASSERT_FALSE (r1.undefined_p ());
+  ASSERT_TRUE (r0 == r1);
+
+  r1 = value_range_base (INT (5), INT (5));
+  r1.check ();
+  value_range_base r2 (r1);
+  ASSERT_TRUE (r1 == r2);
+
+  r1 = value_range_base (INT (5), INT (10));
+  r1.check ();
+
+  r1 = value_range_base (integer_type_node,
+	       wi::to_wide (INT (5)), wi::to_wide (INT (10)));
+  r1.check ();
+  ASSERT_TRUE (r1.contains_p (INT (7)));
+
+  r1 = value_range_base (SCHAR (0), SCHAR (20));
+  ASSERT_TRUE (r1.contains_p (SCHAR(15)));
+  ASSERT_FALSE (r1.contains_p (SCHAR(300)));
+
+  // If a range is in any way outside of the range for the converted
+  // to range, default to the range for the new type.
+  r1 = value_range_base (integer_zero_node, maxint);
+  range_cast (r1, short_integer_type_node);
+  ASSERT_TRUE (r1.lower_bound () == wi::to_wide (minshort)
+	       && r1.upper_bound() == wi::to_wide (maxshort));
+
+  // (unsigned char)[-5,-1] => [251,255].
+  r0 = rold = value_range_base (SCHAR (-5), SCHAR (-1));
+  range_cast (r0, unsigned_char_type_node);
+  ASSERT_TRUE (r0 == value_range_base (UCHAR (251), UCHAR (255)));
+  range_cast (r0, signed_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  // (signed char)[15, 150] => [-128,-106][15,127].
+  r0 = rold = value_range_base (UCHAR (15), UCHAR (150));
+  range_cast (r0, signed_char_type_node);
+  r1 = value_range_base (SCHAR (15), SCHAR (127));
+  r2 = value_range_base (SCHAR (-128), SCHAR (-106));
+  r1.union_ (r2);
+  ASSERT_TRUE (r1 == r0);
+  range_cast (r0, unsigned_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  // (unsigned char)[-5, 5] => [0,5][251,255].
+  r0 = rold = value_range_base (SCHAR (-5), SCHAR (5));
+  range_cast (r0, unsigned_char_type_node);
+  r1 = value_range_base (UCHAR (251), UCHAR (255));
+  r2 = value_range_base (UCHAR (0), UCHAR (5));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+  range_cast (r0, signed_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  // (unsigned char)[-5,5] => [0,5][251,255].
+  r0 = value_range_base (INT (-5), INT (5));
+  range_cast (r0, unsigned_char_type_node);
+  r1 = value_range_base (UCHAR (0), UCHAR (5));
+  r1.union_ (value_range_base (UCHAR (251), UCHAR (255)));
+  ASSERT_TRUE (r0 == r1);
+
+  // (unsigned char)[5U,1974U] => [0,255].
+  r0 = value_range_base (UINT (5), UINT (1974));
+  range_cast (r0, unsigned_char_type_node);
+  ASSERT_TRUE (r0 == value_range_base (UCHAR (0), UCHAR (255)));
+  range_cast (r0, integer_type_node);
+  // Going to a wider range should not sign extend.
+  ASSERT_TRUE (r0 == value_range_base (INT (0), INT (255)));
+
+  // (unsigned char)[-350,15] => [0,255].
+  r0 = value_range_base (INT (-350), INT (15));
+  range_cast (r0, unsigned_char_type_node);
+  ASSERT_TRUE (r0 == (value_range_base
+		      (TYPE_MIN_VALUE (unsigned_char_type_node),
+		       TYPE_MAX_VALUE (unsigned_char_type_node))));
+
+  // Casting [-120,20] from signed char to unsigned short.
+  // => [0, 20][0xff88, 0xffff].
+  r0 = value_range_base (SCHAR (-120), SCHAR (20));
+  range_cast (r0, short_unsigned_type_node);
+  r1 = value_range_base (UINT16 (0), UINT16 (20));
+  r2 = value_range_base (UINT16 (0xff88), UINT16 (0xffff));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+  // A truncating cast back to signed char will work because [-120, 20]
+  // is representable in signed char.
+  range_cast (r0, signed_char_type_node);
+  ASSERT_TRUE (r0 == value_range_base (SCHAR (-120), SCHAR (20)));
+
+  // unsigned char -> signed short
+  //	(signed short)[(unsigned char)25, (unsigned char)250]
+  // => [(signed short)25, (signed short)250]
+  r0 = rold = value_range_base (UCHAR (25), UCHAR (250));
+  range_cast (r0, short_integer_type_node);
+  r1 = value_range_base (INT16 (25), INT16 (250));
+  ASSERT_TRUE (r0 == r1);
+  range_cast (r0, unsigned_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  // Test casting a wider signed [-MIN,MAX] to a nar`rower unsigned.
+  r0 = value_range_base (TYPE_MIN_VALUE (long_long_integer_type_node),
+	       TYPE_MAX_VALUE (long_long_integer_type_node));
+  range_cast (r0, short_unsigned_type_node);
+  r1 = value_range_base (TYPE_MIN_VALUE (short_unsigned_type_node),
+	       TYPE_MAX_VALUE (short_unsigned_type_node));
+  ASSERT_TRUE (r0 == r1);
+
+  // NOT([10,20]) ==> [-MIN,9][21,MAX].
+  r0 = r1 = value_range_base (INT (10), INT (20));
+  r2 = value_range_base (minint, INT(9));
+  r2.union_ (value_range_base (INT(21), maxint));
+  ASSERT_FALSE (r2.undefined_p ());
+  r1.invert ();
+  ASSERT_TRUE (r1 == r2);
+  // Test that NOT(NOT(x)) == x.
+  r2.invert ();
+  ASSERT_TRUE (r0 == r2);
+
+  // NOT(-MIN,+MAX) is the empty set and should return false.
+  r0 = value_range_base (minint, maxint);
+  r0.invert ();
+  ASSERT_TRUE (r0.undefined_p ());
+  r1.set_undefined ();
+  ASSERT_TRUE (r0 == r1);
+
+  // Test that booleans and their inverse work as expected.
+  r0 = range_zero (boolean_type_node);
+  ASSERT_TRUE (r0 == value_range_base (build_zero_cst (boolean_type_node),
+				       build_zero_cst (boolean_type_node)));
+  r0.invert ();
+  ASSERT_TRUE (r0 == value_range_base (build_one_cst (boolean_type_node),
+				       build_one_cst (boolean_type_node)));
+
+  // Casting NONZERO to a narrower type will wrap/overflow so
+  // it's just the entire range for the narrower type.
+  //
+  // "NOT 0 at signed 32-bits" ==> [-MIN_32,-1][1, +MAX_32].  This is
+  // is outside of the range of a smaller range, return the full
+  // smaller range.
+  r0 = range_nonzero (integer_type_node);
+  range_cast (r0, short_integer_type_node);
+  r1 = value_range_base (TYPE_MIN_VALUE (short_integer_type_node),
+			 TYPE_MAX_VALUE (short_integer_type_node));
+  ASSERT_TRUE (r0 == r1);
+
+  // Casting NONZERO from a narrower signed to a wider signed.
+  //
+  // NONZERO signed 16-bits is [-MIN_16,-1][1, +MAX_16].
+  // Converting this to 32-bits signed is [-MIN_16,-1][1, +MAX_16].
+  r0 = range_nonzero (short_integer_type_node);
+  range_cast (r0, integer_type_node);
+  r1 = value_range_base (INT (-32768), INT (-1));
+  r2 = value_range_base (INT (1), INT (32767));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // ([10,20] U [5,8]) U [1,3] ==> [1,3][5,8][10,20].
+      r0 = value_range_base (INT (10), INT (20));
+      r1 = value_range_base (INT (5), INT (8));
+      r0.union_ (r1);
+      r1 = value_range_base (INT (1), INT (3));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (1, 3, 5, 8, 10, 20));
+
+      // [1,3][5,8][10,20] U [-5,0] => [-5,3][5,8][10,20].
+      r1 = value_range_base (INT (-5), INT (0));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (-5, 3, 5, 8, 10, 20));
+    }
+
+  // [10,20] U [30,40] ==> [10,20][30,40].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (30), INT (40));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == range_union (value_range_base (INT (10), INT (20)),
+				  value_range_base (INT (30), INT (40))));
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // [10,20][30,40] U [50,60] ==> [10,20][30,40][50,60].
+      r1 = value_range_base (INT (50), INT (60));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 60));
+      // [10,20][30,40][50,60] U [70, 80] ==> [10,20][30,40][50,60][70,80].
+      r1 = value_range_base (INT (70), INT (80));
+      r0.union_ (r1);
+
+      r2 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r2.union_ (value_range_base (INT (70), INT (80)));
+      ASSERT_TRUE (r0 == r2);
+    }
+
+  // Make sure NULL and non-NULL of pointer types work, and that
+  // inverses of them are consistent.
+  tree voidp = build_pointer_type (void_type_node);
+  r0 = range_zero (voidp);
+  r1 = r0;
+  r0.invert ();
+  r0.invert ();
+  ASSERT_TRUE (r0 == r1);
+
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // [10,20][30,40][50,60] U [6,35] => [6,40][50,60].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (6), INT (35));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == range_union (value_range_base (INT (6), INT (40)),
+				      value_range_base (INT (50), INT (60))));
+
+      // [10,20][30,40][50,60] U [6,60] => [6,60].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (6), INT (60));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == value_range_base (INT (6), INT (60)));
+
+      // [10,20][30,40][50,60] U [6,70] => [6,70].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (6), INT (70));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == value_range_base (INT (6), INT (70)));
+
+      // [10,20][30,40][50,60] U [35,70] => [10,20][30,70].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (35), INT (70));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == range_union (value_range_base (INT (10), INT (20)),
+				      value_range_base (INT (30), INT (70))));
+    }
+
+  // [10,20][30,40] U [25,70] => [10,70].
+  r0 = range_union (value_range_base (INT (10), INT (20)),
+		     value_range_base (INT (30), INT (40)));
+  r1 = value_range_base (INT (25), INT (70));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == range_union (value_range_base (INT (10), INT (20)),
+				  value_range_base (INT (25), INT (70))));
+
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // [10,20][30,40][50,60] U [15,35] => [10,40][50,60].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (15), INT (35));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == range_union (value_range_base (INT (10), INT (40)),
+				      value_range_base (INT (50), INT (60))));
+    }
+
+  // [10,20] U [15, 30] => [10, 30].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (15), INT (30));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (10), INT (30)));
+
+  // [10,20] U [25,25] => [10,20][25,25].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (25), INT (25));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == range_union (value_range_base (INT (10), INT (20)),
+				  value_range_base (INT (25), INT (25))));
+
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // [10,20][30,40][50,60] U [35,35] => [10,20][30,40][50,60].
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = value_range_base (INT (35), INT (35));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 60));
+    }
+
+  // [15,40] U [] => [15,40].
+  r0 = value_range_base (INT (15), INT (40));
+  r1.set_undefined ();
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (15), INT (40)));
+
+  // [10,20] U [10,10] => [10,20].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (10), INT (10));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (10), INT (20)));
+
+  // [10,20] U [9,9] => [9,20].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (9), INT (9));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (9), INT (20)));
+
+  if (value_range_base::m_max_pairs > 2)
+    {
+      // [10,10][12,12][20,100] ^ [15,200].
+      r0 = RANGE3 (10, 10, 12, 12, 20, 100);
+      r1 = value_range_base (INT (15), INT (200));
+      r0.intersect (r1);
+      ASSERT_TRUE (r0 == value_range_base (INT (20), INT (100)));
+
+      // [10,20][30,40][50,60] ^ [15,25][38,51][55,70]
+      // => [15,20][38,40][50,51][55,60]
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = RANGE3 (15, 25, 38, 51, 55, 70);
+      r0.intersect (r1);
+      if (value_range_base::m_max_pairs == 3)
+	{
+	  // When pairs==3, we don't have enough space, so
+	  //  conservatively handle things.  Thus, the ...[50,60].
+	  ASSERT_TRUE (r0 == RANGE3 (15, 20, 38, 40, 50, 60));
+	}
+      else
+	{
+	  r2 = RANGE3 (15, 20, 38, 40, 50, 51);
+	  r2.union_ (value_range_base (INT (55), INT (60)));
+	  ASSERT_TRUE (r0 == r2);
+	}
+
+      // [15,20][30,40][50,60] ^ [15,35][40,90][100,200]
+      // => [15,20][30,35][40,60]
+      r0 = RANGE3 (15, 20, 30, 40, 50, 60);
+      r1 = RANGE3 (15, 35, 40, 90, 100, 200);
+      r0.intersect (r1);
+      if (value_range_base::m_max_pairs == 3)
+	{
+	  // When pairs==3, we don't have enough space, so
+	  // conservatively handle things.
+	  ASSERT_TRUE (r0 == RANGE3 (15, 20, 30, 35, 40, 60));
+	}
+      else
+	{
+	  r2 = RANGE3 (15, 20, 30, 35, 40, 40);
+	  r2.union_ (value_range_base (INT (50), INT (60)));
+	  ASSERT_TRUE (r0 == r2);
+	}
+
+      // Test cases where a union inserts a sub-range inside a larger
+      // range.
+      //
+      // [8,10][135,255] U [14,14] => [8,10][14,14][135,255]
+      r0 = range_union (value_range_base (INT (8), INT (10)),
+			 value_range_base (INT (135), INT (255)));
+      r1 = value_range_base (INT (14), INT (14));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (8, 10, 14, 14, 135, 255));
+    }
+
+  // [10,20] ^ [15,30] => [15,20].
+  r0 = value_range_base (INT (10), INT (20));
+  r1 = value_range_base (INT (15), INT (30));
+  r0.intersect (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (15), INT (20)));
+
+  // [10,20][30,40] ^ [40,50] => [40,40].
+  r0 = range_union (value_range_base (INT (10), INT (20)),
+		     value_range_base (INT (30), INT (40)));
+  r1 = value_range_base (INT (40), INT (50));
+  r0.intersect (r1);
+  ASSERT_TRUE (r0 == value_range_base (INT (40), INT (40)));
+
+  // Test non-destructive intersection.
+  r0 = rold = value_range_base (INT (10), INT (20));
+  ASSERT_FALSE (range_intersect (r0, value_range_base (INT (15),
+					     INT (30))).undefined_p ());
+  ASSERT_TRUE (r0 == rold);
+
+  // Test the internal sanity of wide_int's wrt HWIs.
+  ASSERT_TRUE (wi::max_value (TYPE_PRECISION (boolean_type_node),
+			      TYPE_SIGN (boolean_type_node))
+	       == wi::uhwi (1, TYPE_PRECISION (boolean_type_node)));
+
+  // Test zero_p().
+  r0 = value_range_base (INT (0), INT (0));
+  ASSERT_TRUE (r0.zero_p ());
+
+  // Test nonzero_p().
+  r0 = value_range_base (INT (0), INT (0));
+  r0.invert ();
+  ASSERT_TRUE (r0.nonzero_p ());
+}
+#endif // CHECKING_P
diff --git a/gcc/range-op.h b/gcc/range-op.h
new file mode 100644
index 00000000000..f6510758163
--- /dev/null
+++ b/gcc/range-op.h
@@ -0,0 +1,88 @@ 
+/* Header file for range operator class.
+   Copyright (C) 2017-2019 Free Software Foundation, Inc.
+   Contributed by Andrew MacLeod <amacleod@redhat.com>
+   and Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RANGE_OP_H
+#define GCC_RANGE_OP_H
+
+// This class is implemented for each kind of operator supported by
+// the range generator.  It serves various purposes.
+//
+// 1 - Generates range information for the specific operation between
+//     two ranges.  This provides the ability to fold ranges for an
+//     expression.
+//
+// 2 - Performs range algebra on the expression such that a range can be
+//     adjusted in terms of one of the operands:
+//
+//       def = op1 + op2
+//
+//     Given a range for def, we can adjust the range so that it is in
+//     terms of either operand.
+//
+//     op1_range (def_range, op2) will adjust the range in place so it
+//     is in terms of op1.  Since op1 = def - op2, it will subtract
+//     op2 from each element of the range.
+//
+// 3 - Creates a range for an operand based on whether the result is 0 or
+//     non-zero.  This is mostly for logical true false, but can serve other
+//     purposes.
+//       ie   0 = op1 - op2 implies op2 has the same range as op1.
+
+class range_operator
+{
+public:
+  // Perform an operation between 2 ranges and return it.
+  virtual value_range_base fold_range (tree type,
+				       const value_range_base &lh,
+				       const value_range_base &rh) const;
+
+  // Return the range for op[12] in the general case.  LHS is the range for
+  // the LHS of the expression, OP[12]is the range for the other
+  //
+  // The operand and the result is returned in R.
+  //
+  // TYPE is the expected type of the range.
+  //
+  // Return TRUE if the operation is performed and a valid range is available.
+  //
+  // i.e.  [LHS] = ??? + OP2
+  // is re-formed as R = [LHS] - OP2.
+  virtual bool op1_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op2) const;
+  virtual bool op2_range (value_range_base &r, tree type,
+			  const value_range_base &lhs,
+			  const value_range_base &op1) const;
+
+protected:
+  // Perform an operation between 2 sub-ranges and return it.
+  virtual value_range_base wi_fold (tree type,
+				    const wide_int &lh_lb,
+				    const wide_int &lh_ub,
+				    const wide_int &rh_lb,
+				    const wide_int &rh_ub) const;
+};
+
+extern range_operator *range_op_handler (enum tree_code code, tree type);
+
+extern void range_cast (value_range_base &, tree type);
+
+#endif // GCC_RANGE_OP_H
diff --git a/gcc/range.cc b/gcc/range.cc
new file mode 100644
index 00000000000..5e4d90436f2
--- /dev/null
+++ b/gcc/range.cc
@@ -0,0 +1,89 @@ 
+/* Misc range functions.
+   Copyright (C) 2017-2019 Free Software Foundation, Inc.
+   Contributed by Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "gimple-pretty-print.h"
+#include "fold-const.h"
+#include "ssa.h"
+#include "range.h"
+
+value_range_base
+range_intersect (const value_range_base &r1, const value_range_base &r2)
+{
+  value_range_base tmp (r1);
+  tmp.intersect (r2);
+  return tmp;
+}
+
+value_range_base
+range_invert (const value_range_base &r1)
+{
+  value_range_base tmp (r1);
+  tmp.invert ();
+  return tmp;
+}
+
+value_range_base
+range_union (const value_range_base &r1, const value_range_base &r2)
+{
+  value_range_base tmp (r1);
+  tmp.union_ (r2);
+  return tmp;
+}
+
+value_range_base
+range_zero (tree type)
+{
+  return value_range_base (build_zero_cst (type), build_zero_cst (type));
+}
+
+value_range_base
+range_nonzero (tree type)
+{
+  return value_range_base (VR_ANTI_RANGE,
+			   build_zero_cst (type), build_zero_cst (type));
+}
+
+value_range_base
+range_positives (tree type)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  signop sign = TYPE_SIGN (type);
+  return value_range_base (type, wi::zero (prec), wi::max_value (prec, sign));
+}
+
+value_range_base
+range_negatives (tree type)
+{
+  unsigned prec = TYPE_PRECISION (type);
+  signop sign = TYPE_SIGN (type);
+  value_range_base r;
+  if (sign == UNSIGNED)
+    r.set_undefined ();
+  else
+    r = value_range_base (type, wi::min_value (prec, sign),
+			  wi::minus_one (prec));
+  return r;
+}
diff --git a/gcc/range.h b/gcc/range.h
new file mode 100644
index 00000000000..3983171f51d
--- /dev/null
+++ b/gcc/range.h
@@ -0,0 +1,33 @@ 
+/* Header file for misc range functions. -*- C++ -*-
+   Copyright (C) 2017-2019 Free Software Foundation, Inc.
+   Contributed by Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+ for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RANGE_H
+#define GCC_RANGE_H
+
+value_range_base range_zero (tree type);
+value_range_base range_nonzero (tree type);
+value_range_base range_intersect (const value_range_base &,
+				  const value_range_base &);
+value_range_base range_union (const value_range_base &,
+			      const value_range_base &);
+value_range_base range_invert (const value_range_base &);
+value_range_base range_positives (tree type);
+value_range_base range_negatives (tree type);
+#endif // GCC_RANGE_H
diff --git a/gcc/selftest.h b/gcc/selftest.h
index 75b2cd836e1..6f2c2afde9d 100644
--- a/gcc/selftest.h
+++ b/gcc/selftest.h
@@ -259,6 +259,10 @@  extern int num_passes;
 
 } /* end of namespace selftest.  */
 
+/* This is outside of the selftest namespace because it's a friend of
+   value_range_base.  */
+extern void range_tests ();
+
 /* Macros for writing tests.  */
 
 /* Evaluate EXPR and coerce to bool, calling
diff --git a/gcc/ssa.h b/gcc/ssa.h
index 56a8d103965..2fe4addedf2 100644
--- a/gcc/ssa.h
+++ b/gcc/ssa.h
@@ -26,6 +26,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "stringpool.h"
 #include "gimple-ssa.h"
 #include "tree-vrp.h"
+#include "range.h"
 #include "tree-ssanames.h"
 #include "tree-phinodes.h"
 #include "ssa-iterators.h" 
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 5ec4d17f23b..269a3cb090e 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -67,7 +67,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "attribs.h"
 #include "vr-values.h"
 #include "builtins.h"
-#include "wide-int-range.h"
+#include "range-op.h"
 
 static bool
 ranges_from_anti_range (const value_range_base *ar,
@@ -131,6 +131,36 @@  value_range::value_range (const value_range_base &other)
   set (other.kind (), other.min(), other.max (), NULL);
 }
 
+value_range_base::value_range_base (tree type)
+{
+  set_varying (type);
+}
+
+value_range_base::value_range_base (enum value_range_kind kind,
+				    tree type,
+				    const wide_int &wmin,
+				    const wide_int &wmax)
+{
+  tree min = wide_int_to_tree (type, wmin);
+  tree max = wide_int_to_tree (type, wmax);
+  gcc_checking_assert (kind == VR_RANGE || kind == VR_ANTI_RANGE);
+  set (kind, min, max);
+}
+
+value_range_base::value_range_base (tree type,
+				    const wide_int &wmin,
+				    const wide_int &wmax)
+{
+  tree min = wide_int_to_tree (type, wmin);
+  tree max = wide_int_to_tree (type, wmax);
+  set (VR_RANGE, min, max);
+}
+
+value_range_base::value_range_base (tree min, tree max)
+{
+  set (VR_RANGE, min, max);
+}
+
 /* Like set, but keep the equivalences in place.  */
 
 void
@@ -350,10 +380,14 @@  value_range_base::singleton_p (tree *result) const
 	  return false;
 	}
 
-      value_range_base vr0, vr1;
-      return (ranges_from_anti_range (this, &vr0, &vr1, true)
-	      && vr1.undefined_p ()
-	      && vr0.singleton_p (result));
+      /* An anti-range that includes an extreme, is just a range with
+	 one sub-range.  Use the one sub-range.  */
+      if (vrp_val_is_min (m_min, true) || vrp_val_is_max (m_max, true))
+	{
+	  value_range_base vr0, vr1;
+	  ranges_from_anti_range (this, &vr0, &vr1, true);
+	  return vr0.singleton_p (result);
+	}
     }
   if (m_kind == VR_RANGE
       && vrp_operand_equal_p (min (), max ())
@@ -369,7 +403,7 @@  value_range_base::singleton_p (tree *result) const
 tree
 value_range_base::type () const
 {
-  gcc_assert (m_min);
+  gcc_checking_assert (m_min);
   return TREE_TYPE (min ());
 }
 
@@ -573,9 +607,9 @@  vrp_val_min (const_tree type, bool handle_pointers)
    is not == to the integer constant with the same value in the type.  */
 
 bool
-vrp_val_is_max (const_tree val)
+vrp_val_is_max (const_tree val, bool handle_pointers)
 {
-  tree type_max = vrp_val_max (TREE_TYPE (val));
+  tree type_max = vrp_val_max (TREE_TYPE (val), handle_pointers);
   return (val == type_max
 	  || (type_max != NULL_TREE
 	      && operand_equal_p (val, type_max, 0)));
@@ -584,9 +618,9 @@  vrp_val_is_max (const_tree val)
 /* Return whether VAL is equal to the minimum value of its type.  */
 
 bool
-vrp_val_is_min (const_tree val)
+vrp_val_is_min (const_tree val, bool handle_pointers)
 {
-  tree type_min = vrp_val_min (TREE_TYPE (val));
+  tree type_min = vrp_val_min (TREE_TYPE (val), handle_pointers);
   return (val == type_min
 	  || (type_min != NULL_TREE
 	      && operand_equal_p (val, type_min, 0)));
@@ -1220,9 +1254,46 @@  value_range_base::value_inside_range (tree val) const
     return !!cmp2;
 }
 
-/* Value range wrapper for wide_int_range_set_zero_nonzero_bits.
+/* For range [LB, UB] compute two wide_int bit masks.
+
+   In the MAY_BE_NONZERO bit mask, if some bit is unset, it means that
+   for all numbers in the range the bit is 0, otherwise it might be 0
+   or 1.
+
+   In the MUST_BE_NONZERO bit mask, if some bit is set, it means that
+   for all numbers in the range the bit is 1, otherwise it might be 0
+   or 1.  */
+
+static inline void
+wide_int_range_set_zero_nonzero_bits (signop sign,
+				      const wide_int &lb, const wide_int &ub,
+				      wide_int &may_be_nonzero,
+				      wide_int &must_be_nonzero)
+{
+  may_be_nonzero = wi::minus_one (lb.get_precision ());
+  must_be_nonzero = wi::zero (lb.get_precision ());
+
+  if (wi::eq_p (lb, ub))
+    {
+      may_be_nonzero = lb;
+      must_be_nonzero = may_be_nonzero;
+    }
+  else if (wi::ge_p (lb, 0, sign) || wi::lt_p (ub, 0, sign))
+    {
+      wide_int xor_mask = lb ^ ub;
+      may_be_nonzero = lb | ub;
+      must_be_nonzero = lb & ub;
+      if (xor_mask != 0)
+	{
+	  wide_int mask = wi::mask (wi::floor_log2 (xor_mask), false,
+				    may_be_nonzero.get_precision ());
+	  may_be_nonzero = may_be_nonzero | mask;
+	  must_be_nonzero = wi::bit_and_not (must_be_nonzero, mask);
+	}
+    }
+}
 
-   Compute MAY_BE_NONZERO and MUST_BE_NONZERO bit masks for range in VR.
+/* value_range wrapper for wide_int_range_set_zero_nonzero_bits above.
 
    Return TRUE if VR was a constant range and we were able to compute
    the bit masks.  */
@@ -1288,87 +1359,6 @@  ranges_from_anti_range (const value_range_base *ar,
   return !vr0->undefined_p ();
 }
 
-/* Extract the components of a value range into a pair of wide ints in
-   [WMIN, WMAX], after having normalized any symbolics from the input.  */
-
-static void inline
-extract_range_into_wide_ints (const value_range_base *vr_,
-			      tree type, wide_int &wmin, wide_int &wmax)
-{
-  signop sign = TYPE_SIGN (type);
-  unsigned int prec = TYPE_PRECISION (type);
-  gcc_assert (vr_->kind () != VR_ANTI_RANGE || vr_->symbolic_p ());
-  value_range vr = vr_->normalize_symbolics ();
-  if (range_int_cst_p (&vr))
-    {
-      wmin = wi::to_wide (vr.min ());
-      wmax = wi::to_wide (vr.max ());
-    }
-  else
-    {
-      wmin = wi::min_value (prec, sign);
-      wmax = wi::max_value (prec, sign);
-    }
-}
-
-/* Value range wrapper for wide_int_range_multiplicative_op:
-
-     *VR = *VR0 .CODE. *VR1.  */
-
-static void
-extract_range_from_multiplicative_op (value_range_base *vr,
-				      enum tree_code code, tree type,
-				      const value_range_base *vr0,
-				      const value_range_base *vr1)
-{
-  gcc_assert (code == MULT_EXPR
-	      || code == TRUNC_DIV_EXPR
-	      || code == FLOOR_DIV_EXPR
-	      || code == CEIL_DIV_EXPR
-	      || code == EXACT_DIV_EXPR
-	      || code == ROUND_DIV_EXPR
-	      || code == RSHIFT_EXPR
-	      || code == LSHIFT_EXPR);
-  if (!range_int_cst_p (vr1))
-    {
-      vr->set_varying (type);
-      return;
-    }
-
-  /* Even if vr0 is VARYING or otherwise not usable, we can derive
-     useful ranges just from the shift count.  E.g.
-     x >> 63 for signed 64-bit x is always [-1, 0].  */
-  value_range_base tem = vr0->normalize_symbolics ();
-  tree vr0_min, vr0_max;
-  if (tem.kind () == VR_RANGE)
-    {
-      vr0_min = tem.min ();
-      vr0_max = tem.max ();
-    }
-  else
-    {
-      vr0_min = vrp_val_min (type);
-      vr0_max = vrp_val_max (type);
-    }
-
-  wide_int res_lb, res_ub;
-  wide_int vr0_lb = wi::to_wide (vr0_min);
-  wide_int vr0_ub = wi::to_wide (vr0_max);
-  wide_int vr1_lb = wi::to_wide (vr1->min ());
-  wide_int vr1_ub = wi::to_wide (vr1->max ());
-  bool overflow_undefined = TYPE_OVERFLOW_UNDEFINED (type);
-  unsigned prec = TYPE_PRECISION (type);
-
-  if (wide_int_range_multiplicative_op (res_lb, res_ub,
-					code, TYPE_SIGN (type), prec,
-					vr0_lb, vr0_ub, vr1_lb, vr1_ub,
-					overflow_undefined))
-    vr->set (VR_RANGE, wide_int_to_tree (type, res_lb),
-	     wide_int_to_tree (type, res_ub));
-  else
-    vr->set_varying (type);
-}
-
 /* If BOUND will include a symbolic bound, adjust it accordingly,
    otherwise leave it as is.
 
@@ -1484,8 +1474,7 @@  set_value_range_with_overflow (value_range_kind &kind, tree &min, tree &max,
       if ((min_ovf != wi::OVF_NONE) == (max_ovf != wi::OVF_NONE))
 	{
 	  /* If the limits are swapped, we wrapped around and cover
-	     the entire range.  We have a similar check at the end of
-	     extract_range_from_binary_expr.  */
+	     the entire range.  */
 	  if (wi::gt_p (tmin, tmax, sgn))
 	    kind = VR_VARYING;
 	  else
@@ -1554,91 +1543,71 @@  set_value_range_with_overflow (value_range_kind &kind, tree &min, tree &max,
     }
 }
 
-/* Extract range information from a binary operation CODE based on
-   the ranges of each of its operands *VR0 and *VR1 with resulting
-   type EXPR_TYPE.  The resulting range is stored in *VR.  */
+/* Fold two value range's of a POINTER_PLUS_EXPR into VR.  */
 
-void
-extract_range_from_binary_expr (value_range_base *vr,
-				enum tree_code code, tree expr_type,
-				const value_range_base *vr0_,
-				const value_range_base *vr1_)
+static void
+extract_range_from_pointer_plus_expr (value_range_base *vr,
+				      enum tree_code code,
+				      tree expr_type,
+				      const value_range_base *vr0,
+				      const value_range_base *vr1)
 {
-  signop sign = TYPE_SIGN (expr_type);
-  unsigned int prec = TYPE_PRECISION (expr_type);
-  value_range_base vr0 = *vr0_, vr1 = *vr1_;
-  value_range_base vrtem0, vrtem1;
-  enum value_range_kind type;
-  tree min = NULL_TREE, max = NULL_TREE;
-  int cmp;
-
-  if (!INTEGRAL_TYPE_P (expr_type)
-      && !POINTER_TYPE_P (expr_type))
-    {
-      vr->set_varying (expr_type);
-      return;
-    }
+  gcc_checking_assert (POINTER_TYPE_P (expr_type)
+		       && code == POINTER_PLUS_EXPR);
+  /* For pointer types, we are really only interested in asserting
+     whether the expression evaluates to non-NULL.
+     With -fno-delete-null-pointer-checks we need to be more
+     conservative.  As some object might reside at address 0,
+     then some offset could be added to it and the same offset
+     subtracted again and the result would be NULL.
+     E.g.
+     static int a[12]; where &a[0] is NULL and
+     ptr = &a[6];
+     ptr -= 6;
+     ptr will be NULL here, even when there is POINTER_PLUS_EXPR
+     where the first range doesn't include zero and the second one
+     doesn't either.  As the second operand is sizetype (unsigned),
+     consider all ranges where the MSB could be set as possible
+     subtractions where the result might be NULL.  */
+  if ((!range_includes_zero_p (vr0)
+       || !range_includes_zero_p (vr1))
+      && !TYPE_OVERFLOW_WRAPS (expr_type)
+      && (flag_delete_null_pointer_checks
+	  || (range_int_cst_p (vr1)
+	      && !tree_int_cst_sign_bit (vr1->max ()))))
+    vr->set_nonzero (expr_type);
+  else if (vr0->zero_p () && vr1->zero_p ())
+    vr->set_zero (expr_type);
+  else
+    vr->set_varying (expr_type);
+}
 
-  /* Not all binary expressions can be applied to ranges in a
-     meaningful way.  Handle only arithmetic operations.  */
-  if (code != PLUS_EXPR
-      && code != MINUS_EXPR
-      && code != POINTER_PLUS_EXPR
-      && code != MULT_EXPR
-      && code != TRUNC_DIV_EXPR
-      && code != FLOOR_DIV_EXPR
-      && code != CEIL_DIV_EXPR
-      && code != EXACT_DIV_EXPR
-      && code != ROUND_DIV_EXPR
-      && code != TRUNC_MOD_EXPR
-      && code != RSHIFT_EXPR
-      && code != LSHIFT_EXPR
-      && code != MIN_EXPR
-      && code != MAX_EXPR
-      && code != BIT_AND_EXPR
-      && code != BIT_IOR_EXPR
-      && code != BIT_XOR_EXPR)
-    {
-      vr->set_varying (expr_type);
-      return;
-    }
+/* Extract range information from a PLUS/MINUS_EXPR and store the
+   result in *VR.  */
 
-  /* If both ranges are UNDEFINED, so is the result.  */
-  if (vr0.undefined_p () && vr1.undefined_p ())
-    {
-      vr->set_undefined ();
-      return;
-    }
-  /* If one of the ranges is UNDEFINED drop it to VARYING for the following
-     code.  At some point we may want to special-case operations that
-     have UNDEFINED result for all or some value-ranges of the not UNDEFINED
-     operand.  */
-  else if (vr0.undefined_p ())
-    vr0.set_varying (expr_type);
-  else if (vr1.undefined_p ())
-    vr1.set_varying (expr_type);
+static void
+extract_range_from_plus_minus_expr (value_range_base *vr,
+				    enum tree_code code,
+				    tree expr_type,
+				    const value_range_base *vr0_,
+				    const value_range_base *vr1_)
+{
+  gcc_checking_assert (code == PLUS_EXPR || code == MINUS_EXPR);
 
-  /* We get imprecise results from ranges_from_anti_range when
-     code is EXACT_DIV_EXPR.  We could mask out bits in the resulting
-     range, but then we also need to hack up vrp_union.  It's just
-     easier to special case when vr0 is ~[0,0] for EXACT_DIV_EXPR.  */
-  if (code == EXACT_DIV_EXPR && vr0.nonzero_p ())
-    {
-      vr->set_nonzero (expr_type);
-      return;
-    }
+  value_range_base vr0 = *vr0_, vr1 = *vr1_;
+  value_range_base vrtem0, vrtem1;
 
   /* Now canonicalize anti-ranges to ranges when they are not symbolic
      and express ~[] op X as ([]' op X) U ([]'' op X).  */
   if (vr0.kind () == VR_ANTI_RANGE
       && ranges_from_anti_range (&vr0, &vrtem0, &vrtem1))
     {
-      extract_range_from_binary_expr (vr, code, expr_type, &vrtem0, vr1_);
+      extract_range_from_plus_minus_expr (vr, code, expr_type, &vrtem0, vr1_);
       if (!vrtem1.undefined_p ())
 	{
 	  value_range_base vrres;
-	  extract_range_from_binary_expr (&vrres, code, expr_type,
-					  &vrtem1, vr1_);
+	  extract_range_from_plus_minus_expr (&vrres, code, expr_type,
+					      &vrtem1, vr1_);
 	  vr->union_ (&vrres);
 	}
       return;
@@ -1647,424 +1616,129 @@  extract_range_from_binary_expr (value_range_base *vr,
   if (vr1.kind () == VR_ANTI_RANGE
       && ranges_from_anti_range (&vr1, &vrtem0, &vrtem1))
     {
-      extract_range_from_binary_expr (vr, code, expr_type, vr0_, &vrtem0);
+      extract_range_from_plus_minus_expr (vr, code, expr_type, vr0_, &vrtem0);
       if (!vrtem1.undefined_p ())
 	{
 	  value_range_base vrres;
-	  extract_range_from_binary_expr (&vrres, code, expr_type,
-					  vr0_, &vrtem1);
+	  extract_range_from_plus_minus_expr (&vrres, code, expr_type,
+					      vr0_, &vrtem1);
 	  vr->union_ (&vrres);
 	}
       return;
     }
 
-  /* The type of the resulting value range defaults to VR0.TYPE.  */
-  type = vr0.kind ();
-
-  /* Refuse to operate on VARYING ranges, ranges of different kinds
-     and symbolic ranges.  As an exception, we allow BIT_{AND,IOR}
-     because we may be able to derive a useful range even if one of
-     the operands is VR_VARYING or symbolic range.  Similarly for
-     divisions, MIN/MAX and PLUS/MINUS.
-
-     TODO, we may be able to derive anti-ranges in some cases.  */
-  if (code != BIT_AND_EXPR
-      && code != BIT_IOR_EXPR
-      && code != TRUNC_DIV_EXPR
-      && code != FLOOR_DIV_EXPR
-      && code != CEIL_DIV_EXPR
-      && code != EXACT_DIV_EXPR
-      && code != ROUND_DIV_EXPR
-      && code != TRUNC_MOD_EXPR
-      && code != MIN_EXPR
-      && code != MAX_EXPR
-      && code != PLUS_EXPR
-      && code != MINUS_EXPR
-      && code != RSHIFT_EXPR
-      && code != POINTER_PLUS_EXPR
-      && (vr0.varying_p ()
-	  || vr1.varying_p ()
-	  || vr0.kind () != vr1.kind ()
-	  || vr0.symbolic_p ()
-	  || vr1.symbolic_p ()))
-    {
-      vr->set_varying (expr_type);
-      return;
-    }
-
-  /* Now evaluate the expression to determine the new range.  */
-  if (POINTER_TYPE_P (expr_type))
-    {
-      if (code == MIN_EXPR || code == MAX_EXPR)
-	{
-	  /* For MIN/MAX expressions with pointers, we only care about
-	     nullness, if both are non null, then the result is nonnull.
-	     If both are null, then the result is null. Otherwise they
-	     are varying.  */
-	  if (!range_includes_zero_p (&vr0) && !range_includes_zero_p (&vr1))
-	    vr->set_nonzero (expr_type);
-	  else if (vr0.zero_p () && vr1.zero_p ())
-	    vr->set_zero (expr_type);
-	  else
-	    vr->set_varying (expr_type);
-	}
-      else if (code == POINTER_PLUS_EXPR)
-	{
-	  /* For pointer types, we are really only interested in asserting
-	     whether the expression evaluates to non-NULL.
-	     With -fno-delete-null-pointer-checks we need to be more
-	     conservative.  As some object might reside at address 0,
-	     then some offset could be added to it and the same offset
-	     subtracted again and the result would be NULL.
-	     E.g.
-	     static int a[12]; where &a[0] is NULL and
-	     ptr = &a[6];
-	     ptr -= 6;
-	     ptr will be NULL here, even when there is POINTER_PLUS_EXPR
-	     where the first range doesn't include zero and the second one
-	     doesn't either.  As the second operand is sizetype (unsigned),
-	     consider all ranges where the MSB could be set as possible
-	     subtractions where the result might be NULL.  */
-	  if ((!range_includes_zero_p (&vr0)
-	       || !range_includes_zero_p (&vr1))
-	      && !TYPE_OVERFLOW_WRAPS (expr_type)
-	      && (flag_delete_null_pointer_checks
-		  || (range_int_cst_p (&vr1)
-		      && !tree_int_cst_sign_bit (vr1.max ()))))
-	    vr->set_nonzero (expr_type);
-	  else if (vr0.zero_p () && vr1.zero_p ())
-	    vr->set_zero (expr_type);
-	  else
-	    vr->set_varying (expr_type);
-	}
-      else if (code == BIT_AND_EXPR)
-	{
-	  /* For pointer types, we are really only interested in asserting
-	     whether the expression evaluates to non-NULL.  */
-	  if (!range_includes_zero_p (&vr0) && !range_includes_zero_p (&vr1))
-	    vr->set_nonzero (expr_type);
-	  else if (vr0.zero_p () || vr1.zero_p ())
-	    vr->set_zero (expr_type);
-	  else
-	    vr->set_varying (expr_type);
-	}
-      else
-	vr->set_varying (expr_type);
-
-      return;
-    }
-
-  /* For integer ranges, apply the operation to each end of the
-     range and see what we end up with.  */
-  if (code == PLUS_EXPR || code == MINUS_EXPR)
+  value_range_kind kind;
+  value_range_kind vr0_kind = vr0.kind (), vr1_kind = vr1.kind ();
+  tree vr0_min = vr0.min (), vr0_max = vr0.max ();
+  tree vr1_min = vr1.min (), vr1_max = vr1.max ();
+  tree min = NULL, max = NULL;
+
+  /* This will normalize things such that calculating
+     [0,0] - VR_VARYING is not dropped to varying, but is
+     calculated as [MIN+1, MAX].  */
+  if (vr0.varying_p ())
+    {
+      vr0_kind = VR_RANGE;
+      vr0_min = vrp_val_min (expr_type);
+      vr0_max = vrp_val_max (expr_type);
+    }
+  if (vr1.varying_p ())
+    {
+      vr1_kind = VR_RANGE;
+      vr1_min = vrp_val_min (expr_type);
+      vr1_max = vrp_val_max (expr_type);
+    }
+
+  const bool minus_p = (code == MINUS_EXPR);
+  tree min_op0 = vr0_min;
+  tree min_op1 = minus_p ? vr1_max : vr1_min;
+  tree max_op0 = vr0_max;
+  tree max_op1 = minus_p ? vr1_min : vr1_max;
+  tree sym_min_op0 = NULL_TREE;
+  tree sym_min_op1 = NULL_TREE;
+  tree sym_max_op0 = NULL_TREE;
+  tree sym_max_op1 = NULL_TREE;
+  bool neg_min_op0, neg_min_op1, neg_max_op0, neg_max_op1;
+
+  neg_min_op0 = neg_min_op1 = neg_max_op0 = neg_max_op1 = false;
+
+  /* If we have a PLUS or MINUS with two VR_RANGEs, either constant or
+     single-symbolic ranges, try to compute the precise resulting range,
+     but only if we know that this resulting range will also be constant
+     or single-symbolic.  */
+  if (vr0_kind == VR_RANGE && vr1_kind == VR_RANGE
+      && (TREE_CODE (min_op0) == INTEGER_CST
+	  || (sym_min_op0
+	      = get_single_symbol (min_op0, &neg_min_op0, &min_op0)))
+      && (TREE_CODE (min_op1) == INTEGER_CST
+	  || (sym_min_op1
+	      = get_single_symbol (min_op1, &neg_min_op1, &min_op1)))
+      && (!(sym_min_op0 && sym_min_op1)
+	  || (sym_min_op0 == sym_min_op1
+	      && neg_min_op0 == (minus_p ? neg_min_op1 : !neg_min_op1)))
+      && (TREE_CODE (max_op0) == INTEGER_CST
+	  || (sym_max_op0
+	      = get_single_symbol (max_op0, &neg_max_op0, &max_op0)))
+      && (TREE_CODE (max_op1) == INTEGER_CST
+	  || (sym_max_op1
+	      = get_single_symbol (max_op1, &neg_max_op1, &max_op1)))
+      && (!(sym_max_op0 && sym_max_op1)
+	  || (sym_max_op0 == sym_max_op1
+	      && neg_max_op0 == (minus_p ? neg_max_op1 : !neg_max_op1))))
     {
-      value_range_kind vr0_kind = vr0.kind (), vr1_kind = vr1.kind ();
-      tree vr0_min = vr0.min (), vr0_max = vr0.max ();
-      tree vr1_min = vr1.min (), vr1_max = vr1.max ();
-      /* This will normalize things such that calculating
-	 [0,0] - VR_VARYING is not dropped to varying, but is
-	 calculated as [MIN+1, MAX].  */
-      if (vr0.varying_p ())
-	{
-	  vr0_kind = VR_RANGE;
-	  vr0_min = vrp_val_min (expr_type);
-	  vr0_max = vrp_val_max (expr_type);
-	}
-      if (vr1.varying_p ())
-	{
-	  vr1_kind = VR_RANGE;
-	  vr1_min = vrp_val_min (expr_type);
-	  vr1_max = vrp_val_max (expr_type);
-	}
-
-      const bool minus_p = (code == MINUS_EXPR);
-      tree min_op0 = vr0_min;
-      tree min_op1 = minus_p ? vr1_max : vr1_min;
-      tree max_op0 = vr0_max;
-      tree max_op1 = minus_p ? vr1_min : vr1_max;
-      tree sym_min_op0 = NULL_TREE;
-      tree sym_min_op1 = NULL_TREE;
-      tree sym_max_op0 = NULL_TREE;
-      tree sym_max_op1 = NULL_TREE;
-      bool neg_min_op0, neg_min_op1, neg_max_op0, neg_max_op1;
-
-      neg_min_op0 = neg_min_op1 = neg_max_op0 = neg_max_op1 = false;
-
-      /* If we have a PLUS or MINUS with two VR_RANGEs, either constant or
-	 single-symbolic ranges, try to compute the precise resulting range,
-	 but only if we know that this resulting range will also be constant
-	 or single-symbolic.  */
-      if (vr0_kind == VR_RANGE && vr1_kind == VR_RANGE
-	  && (TREE_CODE (min_op0) == INTEGER_CST
-	      || (sym_min_op0
-		  = get_single_symbol (min_op0, &neg_min_op0, &min_op0)))
-	  && (TREE_CODE (min_op1) == INTEGER_CST
-	      || (sym_min_op1
-		  = get_single_symbol (min_op1, &neg_min_op1, &min_op1)))
-	  && (!(sym_min_op0 && sym_min_op1)
-	      || (sym_min_op0 == sym_min_op1
-		  && neg_min_op0 == (minus_p ? neg_min_op1 : !neg_min_op1)))
-	  && (TREE_CODE (max_op0) == INTEGER_CST
-	      || (sym_max_op0
-		  = get_single_symbol (max_op0, &neg_max_op0, &max_op0)))
-	  && (TREE_CODE (max_op1) == INTEGER_CST
-	      || (sym_max_op1
-		  = get_single_symbol (max_op1, &neg_max_op1, &max_op1)))
-	  && (!(sym_max_op0 && sym_max_op1)
-	      || (sym_max_op0 == sym_max_op1
-		  && neg_max_op0 == (minus_p ? neg_max_op1 : !neg_max_op1))))
-	{
-	  wide_int wmin, wmax;
-	  wi::overflow_type min_ovf = wi::OVF_NONE;
-	  wi::overflow_type max_ovf = wi::OVF_NONE;
-
-	  /* Build the bounds.  */
-	  combine_bound (code, wmin, min_ovf, expr_type, min_op0, min_op1);
-	  combine_bound (code, wmax, max_ovf, expr_type, max_op0, max_op1);
-
-	  /* If we have overflow for the constant part and the resulting
-	     range will be symbolic, drop to VR_VARYING.  */
-	  if (((bool)min_ovf && sym_min_op0 != sym_min_op1)
-	      || ((bool)max_ovf && sym_max_op0 != sym_max_op1))
-	    {
-	      vr->set_varying (expr_type);
-	      return;
-	    }
+      wide_int wmin, wmax;
+      wi::overflow_type min_ovf = wi::OVF_NONE;
+      wi::overflow_type max_ovf = wi::OVF_NONE;
 
-	  /* Adjust the range for possible overflow.  */
-	  min = NULL_TREE;
-	  max = NULL_TREE;
-	  set_value_range_with_overflow (type, min, max, expr_type,
-					 wmin, wmax, min_ovf, max_ovf);
-	  if (type == VR_VARYING)
-	    {
-	      vr->set_varying (expr_type);
-	      return;
-	    }
+      /* Build the bounds.  */
+      combine_bound (code, wmin, min_ovf, expr_type, min_op0, min_op1);
+      combine_bound (code, wmax, max_ovf, expr_type, max_op0, max_op1);
 
-	  /* Build the symbolic bounds if needed.  */
-	  adjust_symbolic_bound (min, code, expr_type,
-				 sym_min_op0, sym_min_op1,
-				 neg_min_op0, neg_min_op1);
-	  adjust_symbolic_bound (max, code, expr_type,
-				 sym_max_op0, sym_max_op1,
-				 neg_max_op0, neg_max_op1);
-	}
-      else
+      /* If we have overflow for the constant part and the resulting
+	 range will be symbolic, drop to VR_VARYING.  */
+      if (((bool)min_ovf && sym_min_op0 != sym_min_op1)
+	  || ((bool)max_ovf && sym_max_op0 != sym_max_op1))
 	{
-	  /* For other cases, for example if we have a PLUS_EXPR with two
-	     VR_ANTI_RANGEs, drop to VR_VARYING.  It would take more effort
-	     to compute a precise range for such a case.
-	     ???  General even mixed range kind operations can be expressed
-	     by for example transforming ~[3, 5] + [1, 2] to range-only
-	     operations and a union primitive:
-	       [-INF, 2] + [1, 2]  U  [5, +INF] + [1, 2]
-	           [-INF+1, 4]     U    [6, +INF(OVF)]
-	     though usually the union is not exactly representable with
-	     a single range or anti-range as the above is
-		 [-INF+1, +INF(OVF)] intersected with ~[5, 5]
-	     but one could use a scheme similar to equivalences for this. */
 	  vr->set_varying (expr_type);
 	  return;
 	}
-    }
-  else if (code == MIN_EXPR
-	   || code == MAX_EXPR)
-    {
-      wide_int wmin, wmax;
-      wide_int vr0_min, vr0_max;
-      wide_int vr1_min, vr1_max;
-      extract_range_into_wide_ints (&vr0, expr_type, vr0_min, vr0_max);
-      extract_range_into_wide_ints (&vr1, expr_type, vr1_min, vr1_max);
-      if (wide_int_range_min_max (wmin, wmax, code, sign, prec,
-				  vr0_min, vr0_max, vr1_min, vr1_max))
-	vr->set (VR_RANGE, wide_int_to_tree (expr_type, wmin),
-		 wide_int_to_tree (expr_type, wmax));
-      else
-	vr->set_varying (expr_type);
-      return;
-    }
-  else if (code == MULT_EXPR)
-    {
-      if (!range_int_cst_p (&vr0)
-	  || !range_int_cst_p (&vr1))
-	{
-	  vr->set_varying (expr_type);
-	  return;
-	}
-      extract_range_from_multiplicative_op (vr, code, expr_type, &vr0, &vr1);
-      return;
-    }
-  else if (code == RSHIFT_EXPR
-	   || code == LSHIFT_EXPR)
-    {
-      if (range_int_cst_p (&vr1)
-	  && !wide_int_range_shift_undefined_p
-		(TYPE_SIGN (TREE_TYPE (vr1.min ())),
-		 prec,
-		 wi::to_wide (vr1.min ()),
-		 wi::to_wide (vr1.max ())))
-	{
-	  if (code == RSHIFT_EXPR)
-	    {
-	      extract_range_from_multiplicative_op (vr, code, expr_type,
-						    &vr0, &vr1);
-	      return;
-	    }
-	  else if (code == LSHIFT_EXPR
-		   && range_int_cst_p (&vr0))
-	    {
-	      wide_int res_lb, res_ub;
-	      if (wide_int_range_lshift (res_lb, res_ub, sign, prec,
-					 wi::to_wide (vr0.min ()),
-					 wi::to_wide (vr0.max ()),
-					 wi::to_wide (vr1.min ()),
-					 wi::to_wide (vr1.max ()),
-					 TYPE_OVERFLOW_UNDEFINED (expr_type)))
-		{
-		  min = wide_int_to_tree (expr_type, res_lb);
-		  max = wide_int_to_tree (expr_type, res_ub);
-		  vr->set (VR_RANGE, min, max);
-		  return;
-		}
-	    }
-	}
-      vr->set_varying (expr_type);
-      return;
-    }
-  else if (code == TRUNC_DIV_EXPR
-	   || code == FLOOR_DIV_EXPR
-	   || code == CEIL_DIV_EXPR
-	   || code == EXACT_DIV_EXPR
-	   || code == ROUND_DIV_EXPR)
-    {
-      wide_int dividend_min, dividend_max, divisor_min, divisor_max;
-      wide_int wmin, wmax, extra_min, extra_max;
-      bool extra_range_p;
-
-      /* Special case explicit division by zero as undefined.  */
-      if (vr1.zero_p ())
-	{
-	  vr->set_undefined ();
-	  return;
-	}
 
-      /* First, normalize ranges into constants we can handle.  Note
-	 that VR_ANTI_RANGE's of constants were already normalized
-	 before arriving here.
-
-	 NOTE: As a future improvement, we may be able to do better
-	 with mixed symbolic (anti-)ranges like [0, A].  See note in
-	 ranges_from_anti_range.  */
-      extract_range_into_wide_ints (&vr0, expr_type,
-				    dividend_min, dividend_max);
-      extract_range_into_wide_ints (&vr1, expr_type,
-				    divisor_min, divisor_max);
-      if (!wide_int_range_div (wmin, wmax, code, sign, prec,
-			       dividend_min, dividend_max,
-			       divisor_min, divisor_max,
-			       TYPE_OVERFLOW_UNDEFINED (expr_type),
-			       extra_range_p, extra_min, extra_max))
+      /* Adjust the range for possible overflow.  */
+      min = NULL_TREE;
+      max = NULL_TREE;
+      set_value_range_with_overflow (kind, min, max, expr_type,
+				     wmin, wmax, min_ovf, max_ovf);
+      if (kind == VR_VARYING)
 	{
 	  vr->set_varying (expr_type);
 	  return;
 	}
-      vr->set (VR_RANGE, wide_int_to_tree (expr_type, wmin),
-	       wide_int_to_tree (expr_type, wmax));
-      if (extra_range_p)
-	{
-	  value_range_base
-	    extra_range (VR_RANGE, wide_int_to_tree (expr_type, extra_min),
-			 wide_int_to_tree (expr_type, extra_max));
-	  vr->union_ (&extra_range);
-	}
-      return;
+
+      /* Build the symbolic bounds if needed.  */
+      adjust_symbolic_bound (min, code, expr_type,
+			     sym_min_op0, sym_min_op1,
+			     neg_min_op0, neg_min_op1);
+      adjust_symbolic_bound (max, code, expr_type,
+			     sym_max_op0, sym_max_op1,
+			     neg_max_op0, neg_max_op1);
     }
-  else if (code == TRUNC_MOD_EXPR)
+  else
     {
-      if (vr1.zero_p ())
-	{
-	  vr->set_undefined ();
-	  return;
-	}
-      wide_int wmin, wmax, tmp;
-      wide_int vr0_min, vr0_max, vr1_min, vr1_max;
-      extract_range_into_wide_ints (&vr0, expr_type, vr0_min, vr0_max);
-      extract_range_into_wide_ints (&vr1, expr_type, vr1_min, vr1_max);
-      wide_int_range_trunc_mod (wmin, wmax, sign, prec,
-				vr0_min, vr0_max, vr1_min, vr1_max);
-      min = wide_int_to_tree (expr_type, wmin);
-      max = wide_int_to_tree (expr_type, wmax);
-      vr->set (VR_RANGE, min, max);
+      /* For other cases, for example if we have a PLUS_EXPR with two
+	 VR_ANTI_RANGEs, drop to VR_VARYING.  It would take more effort
+	 to compute a precise range for such a case.
+	 ???  General even mixed range kind operations can be expressed
+	 by for example transforming ~[3, 5] + [1, 2] to range-only
+	 operations and a union primitive:
+	 [-INF, 2] + [1, 2]  U  [5, +INF] + [1, 2]
+	 [-INF+1, 4]     U    [6, +INF(OVF)]
+	 though usually the union is not exactly representable with
+	 a single range or anti-range as the above is
+	 [-INF+1, +INF(OVF)] intersected with ~[5, 5]
+	 but one could use a scheme similar to equivalences for this. */
+      vr->set_varying (expr_type);
       return;
     }
-  else if (code == BIT_AND_EXPR || code == BIT_IOR_EXPR || code == BIT_XOR_EXPR)
-    {
-      wide_int may_be_nonzero0, may_be_nonzero1;
-      wide_int must_be_nonzero0, must_be_nonzero1;
-      wide_int wmin, wmax;
-      wide_int vr0_min, vr0_max, vr1_min, vr1_max;
-      vrp_set_zero_nonzero_bits (expr_type, &vr0,
-				 &may_be_nonzero0, &must_be_nonzero0);
-      vrp_set_zero_nonzero_bits (expr_type, &vr1,
-				 &may_be_nonzero1, &must_be_nonzero1);
-      extract_range_into_wide_ints (&vr0, expr_type, vr0_min, vr0_max);
-      extract_range_into_wide_ints (&vr1, expr_type, vr1_min, vr1_max);
-      if (code == BIT_AND_EXPR)
-	{
-	  if (wide_int_range_bit_and (wmin, wmax, sign, prec,
-				      vr0_min, vr0_max,
-				      vr1_min, vr1_max,
-				      must_be_nonzero0,
-				      may_be_nonzero0,
-				      must_be_nonzero1,
-				      may_be_nonzero1))
-	    {
-	      min = wide_int_to_tree (expr_type, wmin);
-	      max = wide_int_to_tree (expr_type, wmax);
-	      vr->set (VR_RANGE, min, max);
-	    }
-	  else
-	    vr->set_varying (expr_type);
-	  return;
-	}
-      else if (code == BIT_IOR_EXPR)
-	{
-	  if (wide_int_range_bit_ior (wmin, wmax, sign,
-				      vr0_min, vr0_max,
-				      vr1_min, vr1_max,
-				      must_be_nonzero0,
-				      may_be_nonzero0,
-				      must_be_nonzero1,
-				      may_be_nonzero1))
-	    {
-	      min = wide_int_to_tree (expr_type, wmin);
-	      max = wide_int_to_tree (expr_type, wmax);
-	      vr->set (VR_RANGE, min, max);
-	    }
-	  else
-	    vr->set_varying (expr_type);
-	  return;
-	}
-      else if (code == BIT_XOR_EXPR)
-	{
-	  if (wide_int_range_bit_xor (wmin, wmax, sign, prec,
-				      must_be_nonzero0,
-				      may_be_nonzero0,
-				      must_be_nonzero1,
-				      may_be_nonzero1))
-	    {
-	      min = wide_int_to_tree (expr_type, wmin);
-	      max = wide_int_to_tree (expr_type, wmax);
-	      vr->set (VR_RANGE, min, max);
-	    }
-	  else
-	    vr->set_varying (expr_type);
-	  return;
-	}
-    }
-  else
-    gcc_unreachable ();
 
   /* If either MIN or MAX overflowed, then set the resulting range to
      VARYING.  */
@@ -2077,16 +1751,7 @@  extract_range_from_binary_expr (value_range_base *vr,
       return;
     }
 
-  /* We punt for [-INF, +INF].
-     We learn nothing when we have INF on both sides.
-     Note that we do accept [-INF, -INF] and [+INF, +INF].  */
-  if (vrp_val_is_min (min) && vrp_val_is_max (max))
-    {
-      vr->set_varying (expr_type);
-      return;
-    }
-
-  cmp = compare_values (min, max);
+  int cmp = compare_values (min, max);
   if (cmp == -2 || cmp == 1)
     {
       /* If the new range has its limits swapped around (MIN > MAX),
@@ -2095,166 +1760,162 @@  extract_range_from_binary_expr (value_range_base *vr,
       vr->set_varying (expr_type);
     }
   else
-    vr->set (type, min, max);
+    vr->set (kind, min, max);
 }
 
-/* Extract range information from a unary operation CODE based on
-   the range of its operand *VR0 with type OP0_TYPE with resulting type TYPE.
-   The resulting range is stored in *VR.  */
+/* Normalize a value_range for use in range_ops and return it.  */
 
-void
-extract_range_from_unary_expr (value_range_base *vr,
-			       enum tree_code code, tree type,
-			       const value_range_base *vr0_, tree op0_type)
+static value_range_base
+normalize_for_range_ops (const value_range_base &vr)
 {
-  signop sign = TYPE_SIGN (type);
-  unsigned int prec = TYPE_PRECISION (type);
-  value_range_base vr0 = *vr0_;
-  value_range_base vrtem0, vrtem1;
+  tree type = vr.type ();
 
-  /* VRP only operates on integral and pointer types.  */
-  if (!(INTEGRAL_TYPE_P (op0_type)
-	|| POINTER_TYPE_P (op0_type))
-      || !(INTEGRAL_TYPE_P (type)
-	   || POINTER_TYPE_P (type)))
+  /* This will return ~[0,0] for [&var, &var].  */
+  if (POINTER_TYPE_P (type) && !range_includes_zero_p (&vr))
     {
-      vr->set_varying (type);
-      return;
+      value_range_base temp;
+      temp.set_nonzero (type);
+      return temp;
     }
+  if (vr.symbolic_p ())
+    return normalize_for_range_ops (vr.normalize_symbolics ());
+  if (TREE_CODE (vr.min ()) == INTEGER_CST
+      && TREE_CODE (vr.max ()) == INTEGER_CST)
+    return vr;
+  /* Anything not strictly numeric at this point becomes varying.  */
+  return value_range_base (vr.type ());
+}
 
-  /* If VR0 is UNDEFINED, so is the result.  */
-  if (vr0.undefined_p ())
-    {
-      vr->set_undefined ();
-      return;
-    }
+/* Fold a binary expression of two value_range's with range-ops.  */
 
-  /* Handle operations that we express in terms of others.  */
-  if (code == PAREN_EXPR)
+void
+range_fold_binary_expr (value_range_base *vr,
+			enum tree_code code,
+			tree expr_type,
+			const value_range_base *vr0_,
+			const value_range_base *vr1_)
+{
+  if (!value_range_base::supports_type_p (expr_type)
+      || (!vr0_->undefined_p ()
+	  && !value_range_base::supports_type_p (vr0_->type ()))
+      || (!vr1_->undefined_p ()
+	  && !value_range_base::supports_type_p (vr1_->type ())))
     {
-      /* PAREN_EXPR and OBJ_TYPE_REF are simple copies.  */
-      *vr = vr0;
+      vr->set_varying (expr_type);
       return;
     }
-  else if (code == NEGATE_EXPR)
+  if (vr0_->undefined_p () && vr1_->undefined_p ())
     {
-      /* -X is simply 0 - X, so re-use existing code that also handles
-         anti-ranges fine.  */
-      value_range_base zero;
-      zero.set (build_int_cst (type, 0));
-      extract_range_from_binary_expr (vr, MINUS_EXPR, type, &zero, &vr0);
+      vr->set_undefined ();
       return;
     }
-  else if (code == BIT_NOT_EXPR)
+  range_operator *op = range_op_handler (code, expr_type);
+  if (!op)
     {
-      /* ~X is simply -1 - X, so re-use existing code that also handles
-         anti-ranges fine.  */
-      value_range_base minusone;
-      minusone.set (build_int_cst (type, -1));
-      extract_range_from_binary_expr (vr, MINUS_EXPR, type, &minusone, &vr0);
+      vr->set_varying (expr_type);
       return;
     }
 
-  /* Now canonicalize anti-ranges to ranges when they are not symbolic
-     and express op ~[]  as (op []') U (op []'').  */
-  if (vr0.kind () == VR_ANTI_RANGE
-      && ranges_from_anti_range (&vr0, &vrtem0, &vrtem1))
+  /* Mimic any behavior users of extract_range_from_binary_expr may
+     expect.  */
+  value_range_base vr0 = *vr0_, vr1 = *vr1_;
+  if (vr0.undefined_p ())
+    vr0.set_varying (expr_type);
+  else if (vr1.undefined_p ())
+    vr1.set_varying (expr_type);
+
+  /* Handle symbolics.  */
+  if (vr0.symbolic_p () || vr1.symbolic_p ())
     {
-      extract_range_from_unary_expr (vr, code, type, &vrtem0, op0_type);
-      if (!vrtem1.undefined_p ())
+      if ((code == PLUS_EXPR || code == MINUS_EXPR))
 	{
-	  value_range_base vrres;
-	  extract_range_from_unary_expr (&vrres, code, type,
-					 &vrtem1, op0_type);
-	  vr->union_ (&vrres);
+	  extract_range_from_plus_minus_expr (vr, code, expr_type,
+					      &vr0, &vr1);
+	  return;
+	}
+      if (POINTER_TYPE_P (expr_type) && code == POINTER_PLUS_EXPR)
+	{
+	  extract_range_from_pointer_plus_expr (vr, code, expr_type,
+						&vr0, &vr1);
+	  return;
 	}
-      return;
     }
 
-  if (CONVERT_EXPR_CODE_P (code))
-    {
-      tree inner_type = op0_type;
-      tree outer_type = type;
+  /* Do the range-ops dance.  */
+  value_range_base n0 = normalize_for_range_ops (vr0);
+  value_range_base n1 = normalize_for_range_ops (vr1);
+  *vr = op->fold_range (expr_type, n0, n1);
+}
 
-      /* If the expression involves a pointer, we are only interested in
-	 determining if it evaluates to NULL [0, 0] or non-NULL (~[0, 0]).
+/* Fold a unary expression of a value_range with range-ops.  */
 
-	 This may lose precision when converting (char *)~[0,2] to
-	 int, because we'll forget that the pointer can also not be 1
-	 or 2.  In practice we don't care, as this is some idiot
-	 storing a magic constant to a pointer.  */
-      if (POINTER_TYPE_P (type) || POINTER_TYPE_P (op0_type))
+void
+range_fold_unary_expr (value_range_base *vr,
+		       enum tree_code code, tree expr_type,
+		       const value_range_base *vr0,
+		       tree vr0_type)
+{
+  /* Mimic any behavior users of extract_range_from_unary_expr may
+     expect.  */
+  if (!value_range_base::supports_type_p (expr_type)
+      || !value_range_base::supports_type_p (vr0_type))
+    {
+      vr->set_varying (expr_type);
+      return;
+    }
+  if (vr0->undefined_p ())
+    {
+      vr->set_undefined ();
+      return;
+    }
+  range_operator *op = range_op_handler (code, expr_type);
+  if (!op)
+    {
+      vr->set_varying (expr_type);
+      return;
+    }
+
+  /* Handle symbolics.  */
+  if (vr0->symbolic_p ())
+    {
+      if (code == NEGATE_EXPR)
 	{
-	  if (!range_includes_zero_p (&vr0))
-	    vr->set_nonzero (type);
-	  else if (vr0.zero_p ())
-	    vr->set_zero (type);
-	  else
-	    vr->set_varying (type);
+	  /* -X is simply 0 - X.  */
+	  value_range_base zero;
+	  zero.set_zero (vr0->type ());
+	  range_fold_binary_expr (vr, MINUS_EXPR, expr_type, &zero, vr0);
 	  return;
 	}
-
-      /* The POINTER_TYPE_P code above will have dealt with all
-	 pointer anti-ranges.  Any remaining anti-ranges at this point
-	 will be integer conversions from SSA names that will be
-	 normalized into VARYING.  For instance: ~[x_55, x_55].  */
-      gcc_assert (vr0.kind () != VR_ANTI_RANGE
-		  || TREE_CODE (vr0.min ()) != INTEGER_CST);
-
-      /* NOTES: Previously we were returning VARYING for all symbolics, but
-	 we can do better by treating them as [-MIN, +MAX].  For
-	 example, converting [SYM, SYM] from INT to LONG UNSIGNED,
-	 we can return: ~[0x8000000, 0xffffffff7fffffff].
-
-	 We were also failing to convert ~[0,0] from char* to unsigned,
-	 instead choosing to return VR_VARYING.  Now we return ~[0,0].  */
-      wide_int vr0_min, vr0_max, wmin, wmax;
-      signop inner_sign = TYPE_SIGN (inner_type);
-      signop outer_sign = TYPE_SIGN (outer_type);
-      unsigned inner_prec = TYPE_PRECISION (inner_type);
-      unsigned outer_prec = TYPE_PRECISION (outer_type);
-      extract_range_into_wide_ints (&vr0, inner_type, vr0_min, vr0_max);
-      if (wide_int_range_convert (wmin, wmax,
-				  inner_sign, inner_prec,
-				  outer_sign, outer_prec,
-				  vr0_min, vr0_max))
+      if (code == BIT_NOT_EXPR)
 	{
-	  tree min = wide_int_to_tree (outer_type, wmin);
-	  tree max = wide_int_to_tree (outer_type, wmax);
-	  vr->set (VR_RANGE, min, max);
+	  /* ~X is simply -1 - X.  */
+	  value_range_base minusone;
+	  minusone.set (build_int_cst (vr0->type (), -1));
+	  range_fold_binary_expr (vr, MINUS_EXPR, expr_type, &minusone, vr0);
+	  return;
 	}
-      else
-	vr->set_varying (outer_type);
+      *vr = op->fold_range (expr_type,
+			    normalize_for_range_ops (*vr0),
+			    value_range_base (expr_type));
       return;
     }
-  else if (code == ABS_EXPR)
+  if (CONVERT_EXPR_CODE_P (code) && (POINTER_TYPE_P (expr_type)
+				     || POINTER_TYPE_P (vr0->type ())))
     {
-      wide_int wmin, wmax;
-      wide_int vr0_min, vr0_max;
-      extract_range_into_wide_ints (&vr0, type, vr0_min, vr0_max);
-      if (wide_int_range_abs (wmin, wmax, sign, prec, vr0_min, vr0_max,
-			      TYPE_OVERFLOW_UNDEFINED (type)))
-	vr->set (VR_RANGE, wide_int_to_tree (type, wmin),
-		 wide_int_to_tree (type, wmax));
+      /* This handles symbolic conversions such such as [25, x_4].  */
+      if (!range_includes_zero_p (vr0))
+	vr->set_nonzero (expr_type);
+      else if (vr0->zero_p ())
+	vr->set_zero (expr_type);
       else
-	vr->set_varying (type);
-      return;
-    }
-  else if (code == ABSU_EXPR)
-    {
-      wide_int wmin, wmax;
-      wide_int vr0_min, vr0_max;
-      tree signed_type = make_signed_type (TYPE_PRECISION (type));
-      extract_range_into_wide_ints (&vr0, signed_type, vr0_min, vr0_max);
-      wide_int_range_absu (wmin, wmax, prec, vr0_min, vr0_max);
-      vr->set (VR_RANGE, wide_int_to_tree (type, wmin),
-	       wide_int_to_tree (type, wmax));
+	vr->set_varying (expr_type);
       return;
     }
 
-  /* For unhandled operations fall back to varying.  */
-  vr->set_varying (type);
-  return;
+  /* Do the range-ops dance.  */
+  value_range_base n0 = normalize_for_range_ops (*vr0);
+  value_range_base n1 (expr_type);
+  *vr = op->fold_range (expr_type, n0, n1);
 }
 
 /* Given a COND_EXPR COND of the form 'V OP W', and an SSA name V,
@@ -6353,18 +6014,18 @@  value_range_base::normalize_symbolics () const
     {
       // [SYM, NUM] -> [-MIN, NUM]
       if (min_symbolic)
-	return value_range_base (VR_RANGE, vrp_val_min (ttype), max ());
+	return value_range_base (VR_RANGE, vrp_val_min (ttype, true), max ());
       // [NUM, SYM] -> [NUM, +MAX]
-      return value_range_base (VR_RANGE, min (), vrp_val_max (ttype));
+      return value_range_base (VR_RANGE, min (), vrp_val_max (ttype, true));
     }
-  gcc_assert (kind () == VR_ANTI_RANGE);
+  gcc_checking_assert (kind () == VR_ANTI_RANGE);
   // ~[SYM, NUM] -> [NUM + 1, +MAX]
   if (min_symbolic)
     {
       if (!vrp_val_is_max (max ()))
 	{
 	  tree n = wide_int_to_tree (ttype, wi::to_wide (max ()) + 1);
-	  return value_range_base (VR_RANGE, n, vrp_val_max (ttype));
+	  return value_range_base (VR_RANGE, n, vrp_val_max (ttype, true));
 	}
       value_range_base var;
       var.set_varying (ttype);
@@ -6374,13 +6035,182 @@  value_range_base::normalize_symbolics () const
   if (!vrp_val_is_min (min ()))
     {
       tree n = wide_int_to_tree (ttype, wi::to_wide (min ()) - 1);
-      return value_range_base (VR_RANGE, vrp_val_min (ttype), n);
+      return value_range_base (VR_RANGE, vrp_val_min (ttype, true), n);
     }
   value_range_base var;
   var.set_varying (ttype);
   return var;
 }
 
+/* Return the number of sub-ranges in a range.  */
+
+unsigned
+value_range_base::num_pairs () const
+{
+  if (undefined_p ())
+    return 0;
+  if (varying_p ())
+    return 1;
+  if (symbolic_p ())
+    return normalize_symbolics ().num_pairs ();
+  if (m_kind == VR_ANTI_RANGE)
+    {
+      // ~[MIN, X] has one sub-range of [X+1, MAX], and
+      // ~[X, MAX] has one sub-range of [MIN, X-1].
+      if (vrp_val_is_min (m_min, true) || vrp_val_is_max (m_max, true))
+	return 1;
+      return 2;
+    }
+  return 1;
+}
+
+/* Return the lower bound for a sub-range.  PAIR is the sub-range in
+   question.  */
+
+wide_int
+value_range_base::lower_bound (unsigned pair) const
+{
+  if (symbolic_p ())
+    return normalize_symbolics ().lower_bound (pair);
+
+  gcc_checking_assert (!undefined_p ());
+  gcc_checking_assert (pair + 1 <= num_pairs ());
+  tree t = NULL;
+  if (m_kind == VR_ANTI_RANGE)
+    {
+      tree typ = type ();
+      if (pair == 1 || vrp_val_is_min (m_min, true))
+	t = wide_int_to_tree (typ, wi::to_wide (m_max) + 1);
+      else
+	t = vrp_val_min (typ, true);
+    }
+  else
+    t = m_min;
+  return wi::to_wide (t);
+}
+
+/* Return the upper bound for a sub-range.  PAIR is the sub-range in
+   question.  */
+
+wide_int
+value_range_base::upper_bound (unsigned pair) const
+{
+  if (symbolic_p ())
+    return normalize_symbolics ().upper_bound (pair);
+
+  gcc_checking_assert (!undefined_p ());
+  gcc_checking_assert (pair + 1 <= num_pairs ());
+  tree t = NULL;
+  if (m_kind == VR_ANTI_RANGE)
+    {
+      tree typ = type ();
+      if (pair == 1 || vrp_val_is_min (m_min, true))
+	t = vrp_val_max (typ, true);
+      else
+	t = wide_int_to_tree (typ, wi::to_wide (m_min) - 1);
+    }
+  else
+    t = m_max;
+  return wi::to_wide (t);
+}
+
+/* Return the highest bound in a range.  */
+
+wide_int
+value_range_base::upper_bound () const
+{
+  unsigned pairs = num_pairs ();
+  gcc_checking_assert (pairs > 0);
+  return upper_bound (pairs - 1);
+}
+
+/* Return TRUE if range contains INTEGER_CST.  */
+
+bool
+value_range_base::contains_p (tree cst) const
+{
+  gcc_checking_assert (TREE_CODE (cst) == INTEGER_CST);
+  if (symbolic_p ())
+    return normalize_symbolics ().contains_p (cst);
+  return value_inside_range (cst) == 1;
+}
+
+/* Return the inverse of a range.  */
+
+void
+value_range_base::invert ()
+{
+  if (undefined_p ())
+    return;
+  if (varying_p ())
+    set_undefined ();
+  else if (m_kind == VR_RANGE)
+    m_kind = VR_ANTI_RANGE;
+  else if (m_kind == VR_ANTI_RANGE)
+    m_kind = VR_RANGE;
+  else
+    gcc_unreachable ();
+}
+
+/* Range union, but for references.  */
+
+void
+value_range_base::union_ (const value_range_base &r)
+{
+  /* Disable details for now, because it makes the ranger dump
+     unnecessarily verbose.  */
+  bool details = dump_flags & TDF_DETAILS;
+  if (details)
+    dump_flags &= ~TDF_DETAILS;
+  union_ (&r);
+  if (details)
+    dump_flags |= TDF_DETAILS;
+}
+
+/* Range intersect, but for references.  */
+
+void
+value_range_base::intersect (const value_range_base &r)
+{
+  /* Disable details for now, because it makes the ranger dump
+     unnecessarily verbose.  */
+  bool details = dump_flags & TDF_DETAILS;
+  if (details)
+    dump_flags &= ~TDF_DETAILS;
+  intersect (&r);
+  if (details)
+    dump_flags |= TDF_DETAILS;
+}
+
+/* Return TRUE if two types are compatible for range operations.  */
+
+static bool
+range_compatible_p (tree t1, tree t2)
+{
+  if (POINTER_TYPE_P (t1) && POINTER_TYPE_P (t2))
+    return true;
+
+  return types_compatible_p (t1, t2);
+}
+
+bool
+value_range_base::operator== (const value_range_base &r) const
+{
+  if (undefined_p ())
+    return r.undefined_p ();
+
+  if (num_pairs () != r.num_pairs ()
+      || !range_compatible_p (type (), r.type ()))
+    return false;
+
+  for (unsigned p = 0; p < num_pairs (); p++)
+    if (wi::ne_p (lower_bound (p), r.lower_bound (p))
+	|| wi::ne_p (upper_bound (p), r.upper_bound (p)))
+      return false;
+
+  return true;
+}
+
 /* Visit all arguments for PHI node PHI that flow through executable
    edges.  If a valid value range can be derived from all the incoming
    value ranges, set a new range for the LHS of PHI.  */
@@ -7031,15 +6861,15 @@  determine_value_range_1 (value_range_base *vr, tree expr)
       value_range_base vr0, vr1;
       determine_value_range_1 (&vr0, TREE_OPERAND (expr, 0));
       determine_value_range_1 (&vr1, TREE_OPERAND (expr, 1));
-      extract_range_from_binary_expr (vr, TREE_CODE (expr), TREE_TYPE (expr),
-				      &vr0, &vr1);
+      range_fold_binary_expr (vr, TREE_CODE (expr), TREE_TYPE (expr),
+			      &vr0, &vr1);
     }
   else if (UNARY_CLASS_P (expr))
     {
       value_range_base vr0;
       determine_value_range_1 (&vr0, TREE_OPERAND (expr, 0));
-      extract_range_from_unary_expr (vr, TREE_CODE (expr), TREE_TYPE (expr),
-				     &vr0, TREE_TYPE (TREE_OPERAND (expr, 0)));
+      range_fold_unary_expr (vr, TREE_CODE (expr), TREE_TYPE (expr),
+			     &vr0, TREE_TYPE (TREE_OPERAND (expr, 0)));
     }
   else if (TREE_CODE (expr) == INTEGER_CST)
     vr->set (expr);
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index cf236fa6264..d20d0043ba3 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -35,14 +35,19 @@  enum value_range_kind
   VR_LAST
 };
 
-
 /* Range of values that can be associated with an SSA_NAME after VRP
    has executed.  */
 class GTY((for_user)) value_range_base
 {
+  friend void range_tests ();
 public:
   value_range_base ();
   value_range_base (value_range_kind, tree, tree);
+  value_range_base (tree, tree);
+  value_range_base (value_range_kind,
+		    tree type, const wide_int &, const wide_int &);
+  value_range_base (tree type, const wide_int &, const wide_int &);
+  value_range_base (tree type);
 
   void set (value_range_kind, tree, tree);
   void set (tree);
@@ -63,8 +68,10 @@  public:
 
   void union_ (const value_range_base *);
   void intersect (const value_range_base *);
+  void union_ (const value_range_base &);
+  void intersect (const value_range_base &);
 
-  bool operator== (const value_range_base &) const /* = delete */;
+  bool operator== (const value_range_base &) const;
   bool operator!= (const value_range_base &) const /* = delete */;
   bool equal_p (const value_range_base &) const;
 
@@ -80,6 +87,14 @@  public:
   static bool supports_type_p (tree);
   value_range_base normalize_symbolics () const;
 
+  static const unsigned int m_max_pairs = 2;
+  bool contains_p (tree) const;
+  unsigned num_pairs () const;
+  wide_int lower_bound (unsigned = 0) const;
+  wide_int upper_bound (unsigned) const;
+  wide_int upper_bound () const;
+  void invert ();
+
 protected:
   void check ();
   static value_range_base union_helper (const value_range_base *,
@@ -281,21 +296,17 @@  extern bool range_int_cst_singleton_p (const value_range_base *);
 extern int compare_values (tree, tree);
 extern int compare_values_warnv (tree, tree, bool *);
 extern int operand_less_p (tree, tree);
-extern bool vrp_val_is_min (const_tree);
-extern bool vrp_val_is_max (const_tree);
+extern bool vrp_val_is_min (const_tree, bool handle_pointers = false);
+extern bool vrp_val_is_max (const_tree, bool handle_pointers = false);
 
 extern tree vrp_val_min (const_tree, bool handle_pointers = false);
 extern tree vrp_val_max (const_tree, bool handle_pointers = false);
 
-extern void extract_range_from_unary_expr (value_range_base *vr,
-					   enum tree_code code,
-					   tree type,
-					   const value_range_base *vr0_,
-					   tree op0_type);
-extern void extract_range_from_binary_expr (value_range_base *,
-					    enum tree_code,
-					    tree, const value_range_base *,
-					    const value_range_base *);
+void range_fold_unary_expr (value_range_base *, enum tree_code, tree type,
+			    const value_range_base *, tree op0_type);
+void range_fold_binary_expr (value_range_base *, enum tree_code, tree type,
+			     const value_range_base *,
+			     const value_range_base *);
 
 extern bool vrp_operand_equal_p (const_tree, const_tree);
 extern enum value_range_kind intersect_range_with_nonzero_bits
diff --git a/gcc/vr-values.c b/gcc/vr-values.c
index 96c764c987b..9e868ed46ef 100644
--- a/gcc/vr-values.c
+++ b/gcc/vr-values.c
@@ -46,8 +46,10 @@  along with GCC; see the file COPYING3.  If not see
 #include "case-cfn-macros.h"
 #include "alloc-pool.h"
 #include "attribs.h"
+#include "range.h"
 #include "vr-values.h"
 #include "cfghooks.h"
+#include "range-op.h"
 
 /* Set value range VR to a non-negative range of type TYPE.  */
 
@@ -803,7 +805,7 @@  vr_values::extract_range_from_binary_expr (value_range *vr,
 			   vrp_val_max (expr_type));
     }
 
-  ::extract_range_from_binary_expr (vr, code, expr_type, &vr0, &vr1);
+  range_fold_binary_expr (vr, code, expr_type, &vr0, &vr1);
 
   /* Set value_range for n in following sequence:
      def = __builtin_memchr (arg, 0, sz)
@@ -864,7 +866,7 @@  vr_values::extract_range_from_binary_expr (value_range *vr,
       else
 	n_vr1.set (VR_RANGE, op1, op1);
 
-      ::extract_range_from_binary_expr (vr, code, expr_type, &vr0, &n_vr1);
+      range_fold_binary_expr (vr, code, expr_type, &vr0, &n_vr1);
     }
 
   if (vr->varying_p ()
@@ -888,7 +890,7 @@  vr_values::extract_range_from_binary_expr (value_range *vr,
       else
 	n_vr0.set (op0);
 
-      ::extract_range_from_binary_expr (vr, code, expr_type, &n_vr0, &vr1);
+      range_fold_binary_expr (vr, code, expr_type, &n_vr0, &vr1);
     }
 
   /* If we didn't derive a range for MINUS_EXPR, and
@@ -929,7 +931,7 @@  vr_values::extract_range_from_unary_expr (value_range *vr, enum tree_code code,
   else
     vr0.set_varying (type);
 
-  ::extract_range_from_unary_expr (vr, code, type, &vr0, TREE_TYPE (op0));
+  range_fold_unary_expr (vr, code, type, &vr0, TREE_TYPE (op0));
 }
 
 
@@ -1429,8 +1431,7 @@  vr_values::extract_range_basic (value_range *vr, gimple *stmt)
 						     type, op0);
 		      extract_range_from_unary_expr (&vr1, NOP_EXPR,
 						     type, op1);
-		      ::extract_range_from_binary_expr (vr, subcode, type,
-							&vr0, &vr1);
+		      range_fold_binary_expr (vr, subcode, type, &vr0, &vr1);
 		      flag_wrapv = saved_flag_wrapv;
 		    }
 		  return;
diff --git a/gcc/wide-int-range.cc b/gcc/wide-int-range.cc
deleted file mode 100644
index 90c58f6bb6e..00000000000
--- a/gcc/wide-int-range.cc
+++ /dev/null
@@ -1,865 +0,0 @@ 
-/* Support routines for range operations on wide ints.
-   Copyright (C) 2018-2019 Free Software Foundation, Inc.
-
-This file is part of GCC.
-
-GCC is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 3, or (at your option)
-any later version.
-
-GCC is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with GCC; see the file COPYING3.  If not see
-<http://www.gnu.org/licenses/>.  */
-
-#include "config.h"
-#include "system.h"
-#include "coretypes.h"
-#include "tree.h"
-#include "function.h"
-#include "fold-const.h"
-#include "wide-int-range.h"
-
-/* Wrapper around wide_int_binop that adjusts for overflow.
-
-   Return true if we can compute the result; i.e. if the operation
-   doesn't overflow or if the overflow is undefined.  In the latter
-   case (if the operation overflows and overflow is undefined), then
-   adjust the result to be -INF or +INF depending on CODE, VAL1 and
-   VAL2.  Return the value in *RES.
-
-   Return false for division by zero, for which the result is
-   indeterminate.  */
-
-static bool
-wide_int_binop_overflow (wide_int &res,
-			 enum tree_code code,
-			 const wide_int &w0, const wide_int &w1,
-			 signop sign, bool overflow_undefined)
-{
-  wi::overflow_type overflow;
-  if (!wide_int_binop (res, code, w0, w1, sign, &overflow))
-    return false;
-
-  /* If the operation overflowed return -INF or +INF depending on the
-     operation and the combination of signs of the operands.  */
-  if (overflow && overflow_undefined)
-    {
-      switch (code)
-	{
-	case MULT_EXPR:
-	  /* For multiplication, the sign of the overflow is given
-	     by the comparison of the signs of the operands.  */
-	  if (sign == UNSIGNED || w0.sign_mask () == w1.sign_mask ())
-	    res = wi::max_value (w0.get_precision (), sign);
-	  else
-	    res = wi::min_value (w0.get_precision (), sign);
-	  return true;
-
-	case TRUNC_DIV_EXPR:
-	case FLOOR_DIV_EXPR:
-	case CEIL_DIV_EXPR:
-	case EXACT_DIV_EXPR:
-	case ROUND_DIV_EXPR:
-	  /* For division, the only case is -INF / -1 = +INF.  */
-	  res = wi::max_value (w0.get_precision (), sign);
-	  return true;
-
-	default:
-	  gcc_unreachable ();
-	}
-    }
-  return !overflow;
-}
-
-/* For range [LB, UB] compute two wide_int bit masks.
-
-   In the MAY_BE_NONZERO bit mask, if some bit is unset, it means that
-   for all numbers in the range the bit is 0, otherwise it might be 0
-   or 1.
-
-   In the MUST_BE_NONZERO bit mask, if some bit is set, it means that
-   for all numbers in the range the bit is 1, otherwise it might be 0
-   or 1.  */
-
-void
-wide_int_range_set_zero_nonzero_bits (signop sign,
-				      const wide_int &lb, const wide_int &ub,
-				      wide_int &may_be_nonzero,
-				      wide_int &must_be_nonzero)
-{
-  may_be_nonzero = wi::minus_one (lb.get_precision ());
-  must_be_nonzero = wi::zero (lb.get_precision ());
-
-  if (wi::eq_p (lb, ub))
-    {
-      may_be_nonzero = lb;
-      must_be_nonzero = may_be_nonzero;
-    }
-  else if (wi::ge_p (lb, 0, sign) || wi::lt_p (ub, 0, sign))
-    {
-      wide_int xor_mask = lb ^ ub;
-      may_be_nonzero = lb | ub;
-      must_be_nonzero = lb & ub;
-      if (xor_mask != 0)
-	{
-	  wide_int mask = wi::mask (wi::floor_log2 (xor_mask), false,
-				    may_be_nonzero.get_precision ());
-	  may_be_nonzero = may_be_nonzero | mask;
-	  must_be_nonzero = wi::bit_and_not (must_be_nonzero, mask);
-	}
-    }
-}
-
-/* Order 2 sets of wide int ranges (w0/w1, w2/w3) and set MIN/MAX
-   accordingly.  */
-
-static void
-wide_int_range_order_set (wide_int &min, wide_int &max,
-			  wide_int &w0, wide_int &w1,
-			  wide_int &w2, wide_int &w3,
-			  signop sign)
-{
-  /* Order pairs w0,w1 and w2,w3.  */
-  if (wi::gt_p (w0, w1, sign))
-    std::swap (w0, w1);
-  if (wi::gt_p (w2, w3, sign))
-    std::swap (w2, w3);
-
-  /* Choose min and max from the ordered pairs.  */
-  min = wi::min (w0, w2, sign);
-  max = wi::max (w1, w3, sign);
-}
-
-/* Calculate the cross product of two sets of ranges (VR0 and VR1) and
-   store the result in [RES_LB, RES_UB].
-
-   CODE is the operation to perform with sign SIGN.
-
-   OVERFLOW_UNDEFINED is set if overflow is undefined for the operation type.
-
-   Return TRUE if we were able to calculate the cross product.  */
-
-bool
-wide_int_range_cross_product (wide_int &res_lb, wide_int &res_ub,
-			      enum tree_code code, signop sign,
-			      const wide_int &vr0_lb, const wide_int &vr0_ub,
-			      const wide_int &vr1_lb, const wide_int &vr1_ub,
-			      bool overflow_undefined)
-{
-  wide_int cp1, cp2, cp3, cp4;
-
-  /* Compute the 4 cross operations, bailing if we get an overflow we
-     can't handle.  */
-
-  if (!wide_int_binop_overflow (cp1, code, vr0_lb, vr1_lb, sign,
-				overflow_undefined))
-    return false;
-
-  if (wi::eq_p (vr0_lb, vr0_ub))
-    cp3 = cp1;
-  else if (!wide_int_binop_overflow (cp3, code, vr0_ub, vr1_lb, sign,
-				     overflow_undefined))
-    return false;
-
-  if (wi::eq_p (vr1_lb, vr1_ub))
-    cp2 = cp1;
-  else if (!wide_int_binop_overflow (cp2, code, vr0_lb, vr1_ub, sign,
-				     overflow_undefined))
-    return false;
-
-  if (wi::eq_p (vr0_lb, vr0_ub))
-    cp4 = cp2;
-  else if (!wide_int_binop_overflow (cp4, code, vr0_ub, vr1_ub, sign,
-				     overflow_undefined))
-    return false;
-
-  wide_int_range_order_set (res_lb, res_ub, cp1, cp2, cp3, cp4, sign);
-  return true;
-}
-
-/* Multiply two ranges when TYPE_OVERFLOW_WRAPS:
-
-     [RES_LB, RES_UB] = [MIN0, MAX0] * [MIN1, MAX1]
-
-   This is basically fancy code so we don't drop to varying with an
-   unsigned [-3,-1]*[-3,-1].
-
-   Return TRUE if we were able to perform the operation.  */
-
-bool
-wide_int_range_mult_wrapping (wide_int &res_lb,
-			      wide_int &res_ub,
-			      signop sign,
-			      unsigned prec,
-			      const wide_int &min0_,
-			      const wide_int &max0_,
-			      const wide_int &min1_,
-			      const wide_int &max1_)
-{
-  /* This test requires 2*prec bits if both operands are signed and
-     2*prec + 2 bits if either is not.  Therefore, extend the values
-     using the sign of the result to PREC2.  From here on out,
-     everthing is just signed math no matter what the input types
-     were.  */
-  widest2_int min0 = widest2_int::from (min0_, sign);
-  widest2_int max0 = widest2_int::from (max0_, sign);
-  widest2_int min1 = widest2_int::from (min1_, sign);
-  widest2_int max1 = widest2_int::from (max1_, sign);
-  widest2_int sizem1 = wi::mask <widest2_int> (prec, false);
-  widest2_int size = sizem1 + 1;
-
-  /* Canonicalize the intervals.  */
-  if (sign == UNSIGNED)
-    {
-      if (wi::ltu_p (size, min0 + max0))
-	{
-	  min0 -= size;
-	  max0 -= size;
-	}
-
-      if (wi::ltu_p (size, min1 + max1))
-	{
-	  min1 -= size;
-	  max1 -= size;
-	}
-    }
-
-  widest2_int prod0 = min0 * min1;
-  widest2_int prod1 = min0 * max1;
-  widest2_int prod2 = max0 * min1;
-  widest2_int prod3 = max0 * max1;
-
-  /* Sort the 4 products so that min is in prod0 and max is in
-     prod3.  */
-  /* min0min1 > max0max1 */
-  if (prod0 > prod3)
-    std::swap (prod0, prod3);
-
-  /* min0max1 > max0min1 */
-  if (prod1 > prod2)
-    std::swap (prod1, prod2);
-
-  if (prod0 > prod1)
-    std::swap (prod0, prod1);
-
-  if (prod2 > prod3)
-    std::swap (prod2, prod3);
-
-  /* diff = max - min.  */
-  prod2 = prod3 - prod0;
-  if (wi::geu_p (prod2, sizem1))
-    /* The range covers all values.  */
-    return false;
-
-  res_lb = wide_int::from (prod0, prec, sign);
-  res_ub = wide_int::from (prod3, prec, sign);
-  return true;
-}
-
-/* Perform multiplicative operation CODE on two ranges:
-
-     [RES_LB, RES_UB] = [VR0_LB, VR0_UB] .CODE. [VR1_LB, VR1_LB]
-
-   Return TRUE if we were able to perform the operation.
-
-   NOTE: If code is MULT_EXPR and !TYPE_OVERFLOW_UNDEFINED, the resulting
-   range must be canonicalized by the caller because its components
-   may be swapped.  */
-
-bool
-wide_int_range_multiplicative_op (wide_int &res_lb, wide_int &res_ub,
-				  enum tree_code code,
-				  signop sign,
-				  unsigned prec,
-				  const wide_int &vr0_lb,
-				  const wide_int &vr0_ub,
-				  const wide_int &vr1_lb,
-				  const wide_int &vr1_ub,
-				  bool overflow_undefined)
-{
-  /* Multiplications, divisions and shifts are a bit tricky to handle,
-     depending on the mix of signs we have in the two ranges, we
-     need to operate on different values to get the minimum and
-     maximum values for the new range.  One approach is to figure
-     out all the variations of range combinations and do the
-     operations.
-
-     However, this involves several calls to compare_values and it
-     is pretty convoluted.  It's simpler to do the 4 operations
-     (MIN0 OP MIN1, MIN0 OP MAX1, MAX0 OP MIN1 and MAX0 OP MAX0 OP
-     MAX1) and then figure the smallest and largest values to form
-     the new range.  */
-  if (code == MULT_EXPR && !overflow_undefined)
-    return wide_int_range_mult_wrapping (res_lb, res_ub,
-					 sign, prec,
-					 vr0_lb, vr0_ub, vr1_lb, vr1_ub);
-  return wide_int_range_cross_product (res_lb, res_ub,
-				       code, sign,
-				       vr0_lb, vr0_ub, vr1_lb, vr1_ub,
-				       overflow_undefined);
-}
-
-/* Perform a left shift operation on two ranges:
-
-     [RES_LB, RES_UB] = [VR0_LB, VR0_UB] << [VR1_LB, VR1_LB]
-
-   Return TRUE if we were able to perform the operation.
-
-   NOTE: The resulting range must be canonicalized by the caller
-   because its contents components may be swapped.  */
-
-bool
-wide_int_range_lshift (wide_int &res_lb, wide_int &res_ub,
-		       signop sign, unsigned prec,
-		       const wide_int &vr0_lb, const wide_int &vr0_ub,
-		       const wide_int &vr1_lb, const wide_int &vr1_ub,
-		       bool overflow_undefined)
-{
-  /* Transform left shifts by constants into multiplies.  */
-  if (wi::eq_p (vr1_lb, vr1_ub))
-    {
-      unsigned shift = vr1_ub.to_uhwi ();
-      wide_int tmp = wi::set_bit_in_zero (shift, prec);
-      return wide_int_range_multiplicative_op (res_lb, res_ub,
-					       MULT_EXPR, sign, prec,
-					       vr0_lb, vr0_ub, tmp, tmp,
-					       /*overflow_undefined=*/false);
-    }
-
-  int overflow_pos = prec;
-  if (sign == SIGNED)
-    overflow_pos -= 1;
-  int bound_shift = overflow_pos - vr1_ub.to_shwi ();
-  /* If bound_shift == HOST_BITS_PER_WIDE_INT, the llshift can
-     overflow.  However, for that to happen, vr1.max needs to be
-     zero, which means vr1 is a singleton range of zero, which
-     means it should be handled by the previous LSHIFT_EXPR
-     if-clause.  */
-  wide_int bound = wi::set_bit_in_zero (bound_shift, prec);
-  wide_int complement = ~(bound - 1);
-  wide_int low_bound, high_bound;
-  bool in_bounds = false;
-  if (sign == UNSIGNED)
-    {
-      low_bound = bound;
-      high_bound = complement;
-      if (wi::ltu_p (vr0_ub, low_bound))
-	{
-	  /* [5, 6] << [1, 2] == [10, 24].  */
-	  /* We're shifting out only zeroes, the value increases
-	     monotonically.  */
-	  in_bounds = true;
-	}
-      else if (wi::ltu_p (high_bound, vr0_lb))
-	{
-	  /* [0xffffff00, 0xffffffff] << [1, 2]
-	     == [0xfffffc00, 0xfffffffe].  */
-	  /* We're shifting out only ones, the value decreases
-	     monotonically.  */
-	  in_bounds = true;
-	}
-    }
-  else
-    {
-      /* [-1, 1] << [1, 2] == [-4, 4].  */
-      low_bound = complement;
-      high_bound = bound;
-      if (wi::lts_p (vr0_ub, high_bound)
-	  && wi::lts_p (low_bound, vr0_lb))
-	{
-	  /* For non-negative numbers, we're shifting out only
-	     zeroes, the value increases monotonically.
-	     For negative numbers, we're shifting out only ones, the
-	     value decreases monotomically.  */
-	  in_bounds = true;
-	}
-    }
-  if (in_bounds)
-    return wide_int_range_multiplicative_op (res_lb, res_ub,
-					     LSHIFT_EXPR, sign, prec,
-					     vr0_lb, vr0_ub,
-					     vr1_lb, vr1_ub,
-					     overflow_undefined);
-  return false;
-}
-
-/* Return TRUE if a bit operation on two ranges can be easily
-   optimized in terms of a mask.
-
-   Basically, for BIT_AND_EXPR or BIT_IOR_EXPR see if we can optimize:
-
-	[LB, UB] op Z
-   into:
-	[LB op Z, UB op Z]
-
-   It is up to the caller to perform the actual folding above.  */
-
-static bool
-wide_int_range_can_optimize_bit_op (tree_code code,
-				    const wide_int &lb, const wide_int &ub,
-				    const wide_int &mask)
-
-{
-  if (code != BIT_AND_EXPR && code != BIT_IOR_EXPR)
-    return false;
-  /* If Z is a constant which (for op | its bitwise not) has n
-     consecutive least significant bits cleared followed by m 1
-     consecutive bits set immediately above it and either
-     m + n == precision, or (x >> (m + n)) == (y >> (m + n)).
-
-     The least significant n bits of all the values in the range are
-     cleared or set, the m bits above it are preserved and any bits
-     above these are required to be the same for all values in the
-     range.  */
-
-  wide_int w = mask;
-  int m = 0, n = 0;
-  if (code == BIT_IOR_EXPR)
-    w = ~w;
-  if (wi::eq_p (w, 0))
-    n = w.get_precision ();
-  else
-    {
-      n = wi::ctz (w);
-      w = ~(w | wi::mask (n, false, w.get_precision ()));
-      if (wi::eq_p (w, 0))
-	m = w.get_precision () - n;
-      else
-	m = wi::ctz (w) - n;
-    }
-  wide_int new_mask = wi::mask (m + n, true, w.get_precision ());
-  if ((new_mask & lb) == (new_mask & ub))
-    return true;
-
-  return false;
-}
-
-/* Helper function for wide_int_range_optimize_bit_op.
-
-   Calculates bounds and mask for a pair of ranges.  The mask is the
-   singleton range among the ranges, if any.  The bounds are the
-   bounds for the remaining range.  */
-
-bool
-wide_int_range_get_mask_and_bounds (wide_int &mask,
-				    wide_int &lower_bound,
-				    wide_int &upper_bound,
-				    const wide_int &vr0_min,
-				    const wide_int &vr0_max,
-				    const wide_int &vr1_min,
-				    const wide_int &vr1_max)
-{
-  if (wi::eq_p (vr1_min, vr1_max))
-    {
-      mask = vr1_min;
-      lower_bound = vr0_min;
-      upper_bound = vr0_max;
-      return true;
-    }
-  else if (wi::eq_p (vr0_min, vr0_max))
-    {
-      mask = vr0_min;
-      lower_bound = vr1_min;
-      upper_bound = vr1_max;
-      return true;
-    }
-  return false;
-}
-
-/* Optimize a bit operation (BIT_AND_EXPR or BIT_IOR_EXPR) if
-   possible.  If so, return TRUE and store the result in
-   [RES_LB, RES_UB].  */
-
-bool
-wide_int_range_optimize_bit_op (wide_int &res_lb, wide_int &res_ub,
-				enum tree_code code,
-				signop sign,
-				const wide_int &vr0_min,
-				const wide_int &vr0_max,
-				const wide_int &vr1_min,
-				const wide_int &vr1_max)
-{
-  gcc_assert (code == BIT_AND_EXPR || code == BIT_IOR_EXPR);
-
-  wide_int lower_bound, upper_bound, mask;
-  if (!wide_int_range_get_mask_and_bounds (mask, lower_bound, upper_bound,
-					   vr0_min, vr0_max, vr1_min, vr1_max))
-    return false;
-  if (wide_int_range_can_optimize_bit_op (code,
-					  lower_bound, upper_bound, mask))
-    {
-      wi::overflow_type ovf;
-      wide_int_binop (res_lb, code, lower_bound, mask, sign, &ovf);
-      wide_int_binop (res_ub, code, upper_bound, mask, sign, &ovf);
-      return true;
-    }
-  return false;
-}
-
-/* Calculate the XOR of two ranges and store the result in [WMIN,WMAX].
-   The two input ranges are described by their MUST_BE_NONZERO and
-   MAY_BE_NONZERO bit masks.
-
-   Return TRUE if we were able to successfully calculate the new range.  */
-
-bool
-wide_int_range_bit_xor (wide_int &wmin, wide_int &wmax,
-			signop sign,
-			unsigned prec,
-			const wide_int &must_be_nonzero0,
-			const wide_int &may_be_nonzero0,
-			const wide_int &must_be_nonzero1,
-			const wide_int &may_be_nonzero1)
-{
-  wide_int result_zero_bits = ((must_be_nonzero0 & must_be_nonzero1)
-			       | ~(may_be_nonzero0 | may_be_nonzero1));
-  wide_int result_one_bits
-    = (wi::bit_and_not (must_be_nonzero0, may_be_nonzero1)
-       | wi::bit_and_not (must_be_nonzero1, may_be_nonzero0));
-  wmax = ~result_zero_bits;
-  wmin = result_one_bits;
-  /* If the range has all positive or all negative values, the result
-     is better than VARYING.  */
-  if (wi::lt_p (wmin, 0, sign) || wi::ge_p (wmax, 0, sign))
-    return true;
-  wmin = wi::min_value (prec, sign);
-  wmax = wi::max_value (prec, sign);
-  return false;
-}
-
-/* Calculate the IOR of two ranges and store the result in [WMIN,WMAX].
-   Return TRUE if we were able to successfully calculate the new range.  */
-
-bool
-wide_int_range_bit_ior (wide_int &wmin, wide_int &wmax,
-			signop sign,
-			const wide_int &vr0_min,
-			const wide_int &vr0_max,
-			const wide_int &vr1_min,
-			const wide_int &vr1_max,
-			const wide_int &must_be_nonzero0,
-			const wide_int &may_be_nonzero0,
-			const wide_int &must_be_nonzero1,
-			const wide_int &may_be_nonzero1)
-{
-  if (wide_int_range_optimize_bit_op (wmin, wmax, BIT_IOR_EXPR, sign,
-				      vr0_min, vr0_max,
-				      vr1_min, vr1_max))
-    return true;
-  wmin = must_be_nonzero0 | must_be_nonzero1;
-  wmax = may_be_nonzero0 | may_be_nonzero1;
-  /* If the input ranges contain only positive values we can
-     truncate the minimum of the result range to the maximum
-     of the input range minima.  */
-  if (wi::ge_p (vr0_min, 0, sign)
-      && wi::ge_p (vr1_min, 0, sign))
-    {
-      wmin = wi::max (wmin, vr0_min, sign);
-      wmin = wi::max (wmin, vr1_min, sign);
-    }
-  /* If either input range contains only negative values
-     we can truncate the minimum of the result range to the
-     respective minimum range.  */
-  if (wi::lt_p (vr0_max, 0, sign))
-    wmin = wi::max (wmin, vr0_min, sign);
-  if (wi::lt_p (vr1_max, 0, sign))
-    wmin = wi::max (wmin, vr1_min, sign);
-  /* If the limits got swapped around, indicate error so we can adjust
-     the range to VARYING.  */
-  if (wi::gt_p (wmin, wmax,sign))
-    return false;
-  return true;
-}
-
-/* Calculate the bitwise AND of two ranges and store the result in [WMIN,WMAX].
-   Return TRUE if we were able to successfully calculate the new range.  */
-
-bool
-wide_int_range_bit_and (wide_int &wmin, wide_int &wmax,
-			signop sign,
-			unsigned prec,
-			const wide_int &vr0_min,
-			const wide_int &vr0_max,
-			const wide_int &vr1_min,
-			const wide_int &vr1_max,
-			const wide_int &must_be_nonzero0,
-			const wide_int &may_be_nonzero0,
-			const wide_int &must_be_nonzero1,
-			const wide_int &may_be_nonzero1)
-{
-  if (wide_int_range_optimize_bit_op (wmin, wmax, BIT_AND_EXPR, sign,
-				      vr0_min, vr0_max,
-				      vr1_min, vr1_max))
-    return true;
-  wmin = must_be_nonzero0 & must_be_nonzero1;
-  wmax = may_be_nonzero0 & may_be_nonzero1;
-  /* If both input ranges contain only negative values we can
-     truncate the result range maximum to the minimum of the
-     input range maxima.  */
-  if (wi::lt_p (vr0_max, 0, sign) && wi::lt_p (vr1_max, 0, sign))
-    {
-      wmax = wi::min (wmax, vr0_max, sign);
-      wmax = wi::min (wmax, vr1_max, sign);
-    }
-  /* If either input range contains only non-negative values
-     we can truncate the result range maximum to the respective
-     maximum of the input range.  */
-  if (wi::ge_p (vr0_min, 0, sign))
-    wmax = wi::min (wmax, vr0_max, sign);
-  if (wi::ge_p (vr1_min, 0, sign))
-    wmax = wi::min (wmax, vr1_max, sign);
-  /* PR68217: In case of signed & sign-bit-CST should
-     result in [-INF, 0] instead of [-INF, INF].  */
-  if (wi::gt_p (wmin, wmax, sign))
-    {
-      wide_int sign_bit = wi::set_bit_in_zero (prec - 1, prec);
-      if (sign == SIGNED
-	  && ((wi::eq_p (vr0_min, vr0_max)
-	       && !wi::cmps (vr0_min, sign_bit))
-	      || (wi::eq_p (vr1_min, vr1_max)
-		  && !wi::cmps (vr1_min, sign_bit))))
-	{
-	  wmin = wi::min_value (prec, sign);
-	  wmax = wi::zero (prec);
-	}
-    }
-  /* If the limits got swapped around, indicate error so we can adjust
-     the range to VARYING.  */
-  if (wi::gt_p (wmin, wmax,sign))
-    return false;
-  return true;
-}
-
-/* Calculate TRUNC_MOD_EXPR on two ranges and store the result in
-   [WMIN,WMAX].  */
-
-void
-wide_int_range_trunc_mod (wide_int &wmin, wide_int &wmax,
-			  signop sign,
-			  unsigned prec,
-			  const wide_int &vr0_min,
-			  const wide_int &vr0_max,
-			  const wide_int &vr1_min,
-			  const wide_int &vr1_max)
-{
-  wide_int tmp;
-
-  /* ABS (A % B) < ABS (B) and either
-     0 <= A % B <= A or A <= A % B <= 0.  */
-  wmax = vr1_max - 1;
-  if (sign == SIGNED)
-    {
-      tmp = -1 - vr1_min;
-      wmax = wi::smax (wmax, tmp);
-    }
-
-  if (sign == UNSIGNED)
-    wmin = wi::zero (prec);
-  else
-    {
-      wmin = -wmax;
-      tmp = vr0_min;
-      if (wi::gts_p (tmp, 0))
-	tmp = wi::zero (prec);
-      wmin = wi::smax (wmin, tmp);
-    }
-  tmp = vr0_max;
-  if (sign == SIGNED && wi::neg_p (tmp))
-    tmp = wi::zero (prec);
-  wmax = wi::min (wmax, tmp, sign);
-}
-
-/* Calculate ABS_EXPR on a range and store the result in [MIN, MAX].  */
-
-bool
-wide_int_range_abs (wide_int &min, wide_int &max,
-		    signop sign, unsigned prec,
-		    const wide_int &vr0_min, const wide_int &vr0_max,
-		    bool overflow_undefined)
-{
-  /* Pass through VR0 the easy cases.  */
-  if (sign == UNSIGNED || wi::ge_p (vr0_min, 0, sign))
-    {
-      min = vr0_min;
-      max = vr0_max;
-      return true;
-    }
-
-  /* -TYPE_MIN_VALUE = TYPE_MIN_VALUE with flag_wrapv so we can't get a
-     useful range.  */
-  wide_int min_value = wi::min_value (prec, sign);
-  wide_int max_value = wi::max_value (prec, sign);
-  if (!overflow_undefined && wi::eq_p (vr0_min, min_value))
-    return false;
-
-  /* ABS_EXPR may flip the range around, if the original range
-     included negative values.  */
-  if (wi::eq_p (vr0_min, min_value))
-    min = max_value;
-  else
-    min = wi::abs (vr0_min);
-  if (wi::eq_p (vr0_max, min_value))
-    max = max_value;
-  else
-    max = wi::abs (vr0_max);
-
-  /* If the range contains zero then we know that the minimum value in the
-     range will be zero.  */
-  if (wi::le_p (vr0_min, 0, sign) && wi::ge_p (vr0_max, 0, sign))
-    {
-      if (wi::gt_p (min, max, sign))
-	max = min;
-      min = wi::zero (prec);
-    }
-  else
-    {
-      /* If the range was reversed, swap MIN and MAX.  */
-      if (wi::gt_p (min, max, sign))
-	std::swap (min, max);
-    }
-
-  /* If the new range has its limits swapped around (MIN > MAX), then
-     the operation caused one of them to wrap around.  The only thing
-     we know is that the result is positive.  */
-  if (wi::gt_p (min, max, sign))
-    {
-      min = wi::zero (prec);
-      max = max_value;
-    }
-  return true;
-}
-
-/* Calculate ABSU_EXPR on a range and store the result in [MIN, MAX].  */
-
-void
-wide_int_range_absu (wide_int &min, wide_int &max,
-		     unsigned prec, const wide_int &vr0_min,
-		     const wide_int &vr0_max)
-{
-  /* Pass through VR0 the easy cases.  */
-  if (wi::ges_p (vr0_min, 0))
-    {
-      min = vr0_min;
-      max = vr0_max;
-      return;
-    }
-
-  min = wi::abs (vr0_min);
-  max = wi::abs (vr0_max);
-
-  /* If the range contains zero then we know that the minimum value in the
-     range will be zero.  */
-  if (wi::ges_p (vr0_max, 0))
-    {
-      if (wi::gtu_p (min, max))
-	max = min;
-      min = wi::zero (prec);
-    }
-  else
-    /* Otherwise, swap MIN and MAX.  */
-    std::swap (min, max);
-}
-
-/* Convert range in [VR0_MIN, VR0_MAX] with INNER_SIGN and INNER_PREC,
-   to a range in [MIN, MAX] with OUTER_SIGN and OUTER_PREC.
-
-   Return TRUE if we were able to successfully calculate the new range.
-
-   Caller is responsible for canonicalizing the resulting range.  */
-
-bool
-wide_int_range_convert (wide_int &min, wide_int &max,
-			signop inner_sign,
-			unsigned inner_prec,
-			signop outer_sign,
-			unsigned outer_prec,
-			const wide_int &vr0_min,
-			const wide_int &vr0_max)
-{
-  /* If the conversion is not truncating we can convert the min and
-     max values and canonicalize the resulting range.  Otherwise we
-     can do the conversion if the size of the range is less than what
-     the precision of the target type can represent.  */
-  if (outer_prec >= inner_prec
-      || wi::rshift (wi::sub (vr0_max, vr0_min),
-		     wi::uhwi (outer_prec, inner_prec),
-		     inner_sign) == 0)
-    {
-      min = wide_int::from (vr0_min, outer_prec, inner_sign);
-      max = wide_int::from (vr0_max, outer_prec, inner_sign);
-      return (!wi::eq_p (min, wi::min_value (outer_prec, outer_sign))
-	      || !wi::eq_p (max, wi::max_value (outer_prec, outer_sign)));
-    }
-  return false;
-}
-
-/* Calculate a division operation on two ranges and store the result in
-   [WMIN, WMAX] U [EXTRA_MIN, EXTRA_MAX].
-
-   If EXTRA_RANGE_P is set upon return, EXTRA_MIN/EXTRA_MAX hold
-   meaningful information, otherwise they should be ignored.
-
-   Return TRUE if we were able to successfully calculate the new range.  */
-
-bool
-wide_int_range_div (wide_int &wmin, wide_int &wmax,
-		    tree_code code, signop sign, unsigned prec,
-		    const wide_int &dividend_min, const wide_int &dividend_max,
-		    const wide_int &divisor_min, const wide_int &divisor_max,
-		    bool overflow_undefined,
-		    bool &extra_range_p,
-		    wide_int &extra_min, wide_int &extra_max)
-{
-  extra_range_p = false;
-
-  /* If we know we won't divide by zero, just do the division.  */
-  if (!wide_int_range_includes_zero_p (divisor_min, divisor_max, sign))
-    return wide_int_range_multiplicative_op (wmin, wmax, code, sign, prec,
-					     dividend_min, dividend_max,
-					     divisor_min, divisor_max,
-					     overflow_undefined);
-
-  /* If flag_non_call_exceptions, we must not eliminate a division
-     by zero.  */
-  if (cfun->can_throw_non_call_exceptions)
-    return false;
-
-  /* If we're definitely dividing by zero, there's nothing to do.  */
-  if (wide_int_range_zero_p (divisor_min, divisor_max, prec))
-    return false;
-
-  /* Perform the division in 2 parts, [LB, -1] and [1, UB],
-     which will skip any division by zero.
-
-     First divide by the negative numbers, if any.  */
-  if (wi::neg_p (divisor_min, sign))
-    {
-      if (!wide_int_range_multiplicative_op (wmin, wmax,
-					     code, sign, prec,
-					     dividend_min, dividend_max,
-					     divisor_min, wi::minus_one (prec),
-					     overflow_undefined))
-	return false;
-      extra_range_p = true;
-    }
-  /* Then divide by the non-zero positive numbers, if any.  */
-  if (wi::gt_p (divisor_max, wi::zero (prec), sign))
-    {
-      if (!wide_int_range_multiplicative_op (extra_range_p ? extra_min : wmin,
-					     extra_range_p ? extra_max : wmax,
-					     code, sign, prec,
-					     dividend_min, dividend_max,
-					     wi::one (prec), divisor_max,
-					     overflow_undefined))
-	return false;
-    }
-  else
-    extra_range_p = false;
-  return true;
-}
diff --git a/gcc/wide-int-range.h b/gcc/wide-int-range.h
deleted file mode 100644
index fc9af72b127..00000000000
--- a/gcc/wide-int-range.h
+++ /dev/null
@@ -1,188 +0,0 @@ 
-/* Support routines for range operations on wide ints.
-   Copyright (C) 2018-2019 Free Software Foundation, Inc.
-
-This file is part of GCC.
-
-GCC is free software; you can redistribute it and/or modify
-it under the terms of the GNU General Public License as published by
-the Free Software Foundation; either version 3, or (at your option)
-any later version.
-
-GCC is distributed in the hope that it will be useful,
-but WITHOUT ANY WARRANTY; without even the implied warranty of
-MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-GNU General Public License for more details.
-
-You should have received a copy of the GNU General Public License
-along with GCC; see the file COPYING3.  If not see
-<http://www.gnu.org/licenses/>.  */
-
-#ifndef GCC_WIDE_INT_RANGE_H
-#define GCC_WIDE_INT_RANGE_H
-
-extern bool wide_int_range_cross_product (wide_int &res_lb, wide_int &res_ub,
-					  enum tree_code code, signop sign,
-					  const wide_int &, const wide_int &,
-					  const wide_int &, const wide_int &,
-					  bool overflow_undefined);
-extern bool wide_int_range_mult_wrapping (wide_int &res_lb,
-					  wide_int &res_ub,
-					  signop sign,
-					  unsigned prec,
-					  const wide_int &min0_,
-					  const wide_int &max0_,
-					  const wide_int &min1_,
-					  const wide_int &max1_);
-extern bool wide_int_range_multiplicative_op (wide_int &res_lb,
-					      wide_int &res_ub,
-					      enum tree_code code,
-					      signop sign,
-					      unsigned prec,
-					      const wide_int &vr0_lb,
-					      const wide_int &vr0_ub,
-					      const wide_int &vr1_lb,
-					      const wide_int &vr1_ub,
-					      bool overflow_undefined);
-extern bool wide_int_range_lshift (wide_int &res_lb, wide_int &res_ub,
-				   signop sign, unsigned prec,
-				   const wide_int &, const wide_int &,
-				   const wide_int &, const wide_int &,
-				   bool overflow_undefined);
-extern void wide_int_range_set_zero_nonzero_bits (signop,
-						  const wide_int &lb,
-						  const wide_int &ub,
-						  wide_int &may_be_nonzero,
-						  wide_int &must_be_nonzero);
-extern bool wide_int_range_optimize_bit_op (wide_int &res_lb, wide_int &res_ub,
-					    enum tree_code code,
-					    signop sign,
-					    const wide_int &vr0_lb,
-					    const wide_int &vr0_ub,
-					    const wide_int &vr1_lb,
-					    const wide_int &vr1_ub);
-extern bool wide_int_range_get_mask_and_bounds (wide_int &mask,
-						wide_int &lower_bound,
-						wide_int &upper_bound,
-						const wide_int &vr0_min,
-						const wide_int &vr0_max,
-						const wide_int &vr1_min,
-						const wide_int &vr1_max);
-extern bool wide_int_range_bit_xor (wide_int &wmin, wide_int &wmax,
-				    signop sign,
-				    unsigned prec,
-				    const wide_int &must_be_nonzero0,
-				    const wide_int &may_be_nonzero0,
-				    const wide_int &must_be_nonzero1,
-				    const wide_int &may_be_nonzero1);
-extern bool wide_int_range_bit_ior (wide_int &wmin, wide_int &wmax,
-				    signop sign,
-				    const wide_int &vr0_min,
-				    const wide_int &vr0_max,
-				    const wide_int &vr1_min,
-				    const wide_int &vr1_max,
-				    const wide_int &must_be_nonzero0,
-				    const wide_int &may_be_nonzero0,
-				    const wide_int &must_be_nonzero1,
-				    const wide_int &may_be_nonzero1);
-extern bool wide_int_range_bit_and (wide_int &wmin, wide_int &wmax,
-				    signop sign,
-				    unsigned prec,
-				    const wide_int &vr0_min,
-				    const wide_int &vr0_max,
-				    const wide_int &vr1_min,
-				    const wide_int &vr1_max,
-				    const wide_int &must_be_nonzero0,
-				    const wide_int &may_be_nonzero0,
-				    const wide_int &must_be_nonzero1,
-				    const wide_int &may_be_nonzero1);
-extern void wide_int_range_trunc_mod (wide_int &wmin, wide_int &wmax,
-				      signop sign,
-				      unsigned prec,
-				      const wide_int &vr0_min,
-				      const wide_int &vr0_max,
-				      const wide_int &vr1_min,
-				      const wide_int &vr1_max);
-extern bool wide_int_range_abs (wide_int &min, wide_int &max,
-				signop sign, unsigned prec,
-				const wide_int &vr0_min,
-				const wide_int &vr0_max,
-				bool overflow_undefined);
-extern void wide_int_range_absu (wide_int &min, wide_int &max,
-				 unsigned prec,
-				 const wide_int &vr0_min,
-				 const wide_int &vr0_max);
-extern bool wide_int_range_convert (wide_int &min, wide_int &max,
-				    signop inner_sign,
-				    unsigned inner_prec,
-				    signop outer_sign,
-				    unsigned outer_prec,
-				    const wide_int &vr0_min,
-				    const wide_int &vr0_max);
-extern bool wide_int_range_div (wide_int &wmin, wide_int &wmax,
-				enum tree_code code,
-				signop sign, unsigned prec,
-				const wide_int &dividend_min,
-				const wide_int &dividend_max,
-				const wide_int &divisor_min,
-				const wide_int &divisor_max,
-				bool overflow_undefined,
-				bool &extra_range_p,
-				wide_int &extra_min, wide_int &extra_max);
-
-/* Return TRUE if shifting by range [MIN, MAX] is undefined behavior,
-   interpreting MIN and MAX according to SIGN.  */
-
-inline bool
-wide_int_range_shift_undefined_p (signop sign, unsigned prec,
-				  const wide_int &min, const wide_int &max)
-{
-  /* ?? Note: The original comment said this only applied to
-     RSHIFT_EXPR, but it was being applied to both left and right
-     shifts.  */
-
-  /* Shifting by any values outside [0..prec-1], gets undefined
-     behavior from the shift operation.  We cannot even trust
-     SHIFT_COUNT_TRUNCATED at this stage, because that applies to rtl
-     shifts, and the operation at the tree level may be widened.  */
-  return wi::lt_p (min, 0, sign) || wi::ge_p (max, prec, sign);
-}
-
-/* Calculate MIN/MAX_EXPR of two ranges and store the result in [MIN, MAX].  */
-
-inline bool
-wide_int_range_min_max (wide_int &min, wide_int &max,
-			tree_code code,
-			signop sign, unsigned prec,
-			const wide_int &vr0_min, const wide_int &vr0_max,
-			const wide_int &vr1_min, const wide_int &vr1_max)
-{
-  wi::overflow_type overflow;
-  wide_int_binop (min, code, vr0_min, vr1_min, sign, &overflow);
-  wide_int_binop (max, code, vr0_max, vr1_max, sign, &overflow);
-  /* If the new range covers the entire domain, that's really no range
-     at all.  */
-  if (min == wi::min_value (prec, sign)
-      && max == wi::max_value (prec, sign))
-    return false;
-  return true;
-}
-
-/* Return TRUE if 0 is within [WMIN, WMAX].  */
-
-inline bool
-wide_int_range_includes_zero_p (const wide_int &wmin, const wide_int &wmax,
-				signop sign)
-{
-  return wi::le_p (wmin, 0, sign) && wi::ge_p (wmax, 0, sign);
-}
-
-/* Return TRUE if [WMIN, WMAX] is the singleton 0.  */
-
-inline bool
-wide_int_range_zero_p (const wide_int &wmin, const wide_int &wmax,
-		       unsigned prec)
-{
-  return wmin == wmax && wi::eq_p (wmin, wi::zero (prec));
-}
-
-#endif /* GCC_WIDE_INT_RANGE_H */