
PING^2: Fwd: SSA range class and removal of VR_ANTI_RANGEs

Message ID CAGm3qMWqyuY+j2dq2ej3=W45M1S2_4cgOOk3cxmYt==t-3MQeg@mail.gmail.com

Commit Message

Aldy Hernandez July 17, 2017, 6:51 a.m. UTC
PING PING

Hi folks.

The following is another iteration of the SSA range class, taking into
account many of the suggestions posted on this thread, especially the
addition of a memory-efficient class for storage, folding non-zero
bits back into the range information, C++ suggestions by Martin, and
some minor suggestions.

Most importantly, I have included an irange_storage class that uses
trailing_wide_ints<5>.  This produces far better results than my
previous incarnation with wide_int[6] :).

The storage class is basically this:

class GTY((variable_size)) irange_storage
{
  friend class irange;
 public:
  /* Maximum number of pairs of ranges allowed.  */
  static const unsigned int max_pairs = 2;
  /* These are the pairs of subranges for the irange.  The last
     wide_int allocated is a mask representing which bits in an
     integer are known to be non-zero.  */
  trailing_wide_ints<max_pairs * 2 + 1> trailing_bounds;
};

Compare this with mainline which has trailing_wide_ints<3>.  The extra
2 items in this patchset chew up two 64-bit words, for an additional
16 bytes per range in SSA_NAME_RANGE_INFO.  No additional storage is
needed for SSA_NAMEs per se.

I looked at Jakub's suggestion of compiling insn-recog.c.  Although I
don't see the 4M SSA_NAME nodes created that Jakub sees, I do see a
little over a million when building with:

./cc1plus insn-recog.ii -fno-PIE -O2 -fno-exceptions -fno-rtti
-fasynchronous-unwind-tables  -quiet -fsanitize=address,undefined
-fmem-report

I explored 3 different ways of measuring memory consumption:

1. /usr/bin/time -f "%M", which measures maximum RSS usage.  This
produced results within the noise.  The RSS usage differed slightly
between runs, with no consistent difference between mainline and the
patch.

2. valgrind --tool=massif: no difference.  Perhaps the overhead
of our GC hides any difference?

3. --enable-gather-detailed-mem-stats and -fmem-report ...

        Total Allocated before: 2351658176
        Total Allocated  after: 2353199328
        diff: 1541152 (0.06%)

        SSA_NAME nodes allocated: 1026694

AFAICT with -fmem-report, a compilation that allocates 2.35 GB in
total allocates only 1.5 MB more.  This is cumulative allocation, and
some of it gets cleaned up during GC, so the peak impact is probably
smaller.  Unless there is another preferred way of measuring memory
usage, I think memory is a non-issue with this approach.

Note, this is even before my pending patch avoiding generation of
useless range information
(https://gcc.gnu.org/ml/gcc-patches/2017-06/msg01068.html).

How does this look?

Aldy

Comments

Richard Biener July 17, 2017, 10:23 a.m. UTC | #1
On Mon, Jul 17, 2017 at 8:51 AM, Aldy Hernandez <aldyh@redhat.com> wrote:
> How does this look?

It's a change that on its own doesn't look worthwhile to me.

So please post the changes that will build ontop of this.  Like removing
anti-ranges from VRP or your on-demand value-range stuff.

Thanks,
Richard.

Aldy Hernandez July 21, 2017, 7:30 p.m. UTC | #2
On Mon, Jul 17, 2017 at 6:23 AM, Richard Biener
<richard.guenther@gmail.com> wrote:
> On Mon, Jul 17, 2017 at 8:51 AM, Aldy Hernandez <aldyh@redhat.com> wrote:

>> How does this look?
>
> It's a change that on its own doesn't look worthwhile to me.
>
> So please post the changes that will build ontop of this.  Like removing
> anti-ranges from VRP or your on-demand value-range stuff.
>
> Thanks,
> Richard.

From the looks of it, we can have a variety of VRP ranges that are not
representable at all with an integer range class.  For instance, I
see the following ranges built in set_value_range():

[INT, (nop_expr SSA)]

[INT, (plus_expr SSA INT)]

[(negate_expr SSA), (negate_expr SSA)]

[(plus_expr (negate_expr SSA INT)),
 (plus_expr (negate_expr SSA) INT)]

[SSA, SSA]

So...I guess the first suggestion is out of the question ;-).

Aldy
Richard Biener July 25, 2017, 7:12 a.m. UTC | #3
On Fri, Jul 21, 2017 at 9:30 PM, Aldy Hernandez <aldyh@redhat.com> wrote:
> From the looks of it, we can have a variety of VRP ranges that are not
> representable at all with the an integer range class.  For instance, I
> see the following ranges built in set_value_range():
>
> [INT, (nop_expr SSA)]
>
> [INT, (plus_expr SSA INT)]
>
> [(negate_expr SSA), (negate_expr SSA)]
>
> [(plus_expr (negate_expr SSA INT)),
>  (plus_expr (negate_expr SSA) INT)]
>
> [SSA, SSA]
>
> So...I guess the first suggestion is out of the question ;-).

Well.  We do not want range operations implemented twice (really!),
so range operations need to work on both representations.  So we
should think of a way to get rid of anti-ranges in VRP, which, frankly,
means that for the sake of symbolic ranges we have to trade them
for handling open/closed ranges, which I'm not sure will be less of
a hassle to handle.

Richard.

Andrew MacLeod July 25, 2017, 2:50 p.m. UTC | #4
On 07/25/2017 03:12 AM, Richard Biener wrote:
> Well.  We do not want range operations implemented twice (really!)
> so range operations need to work on both representations.  So we
> should think of a way to get rid of anti-ranges in VRP which, frankly,
> means that for the sake symbolic ranges we have to trade them
> with handling open/closed ranges which I'm not sure will be less of
> a hassle to handle?

I originally looked at ranges with symbolic expressions, but as soon as
there are ranges comprised of more than one subrange, symbolics became a
nightmare.  At best you can do endpoints, and even then you have to
start adding complexity: [0, 100] Union [5, x_4] now has to become
[0, max(100, x_4)], and chained operations were making the expressions
more and more complicated.  Trying to maintain these expressions across
optimizations was also very problematic, as the IL changes and these
expressions are not in the IL and don't suffer updates easily.

At which point one asks themselves whether it actually makes sense to 
integrate that into the range representation, or try a different tactic.

Seems to me it's cleaner to have an integral range, and when appropriate
track symbols separately to determine if their values can be refined.
If at some point you can resolve those symbols to an integral value,
then you simply update the integral range with the new range you've
determined.

VRP chooses to do this by creating a completely different structure for 
ranges, and representing endpoints as expression trees. It then updates 
the integral ranges at the end of the pass. It uses anti-ranges as a 
shorthand to handle a special case of a range comprised of 2 ranges.  As 
it stands right now, VRP's version of union and intersection can never 
have much in common with a general integral range implementation. They 
are 2 very different beasts.

So we can't do much about VRP internally yet without some significant
surgery.

This issue seems orthogonal to me though.  irange is not intended to
ever do anything with symbols, thus can never replace VRP's internal
value_range class.  We're simply replacing "struct range_info_def"
with a new range structure and API which can support more precise ranges 
than a pair of integral endpoints.  We'll then build on that by 
providing routines which can generate more precise range information as 
well.

For the moment we provide an implementation with the same precision to:
   a) ensure code generation remains the same,
   b) allow the compiler to use it for a while to make sure no bugs have
      been introduced,
   c) and because there is nothing that would utilize the additional
      precision yet anyway.

I just think it's important to get it in the code base so it's being used
and we can be sure it's correct.

On its own it is an improvement to "struct range_info_def" as it
encapsulates the activity in one place and provides a standard API any
optimization can utilize.  It allows us to improve the precision of
ranges in the future without changing any client code, and it does not
appear to have any measurable compile-time storage or speed issues.  It
is simply an infrastructure improvement.

Andrew
Richard Sandiford July 25, 2017, 5:13 p.m. UTC | #5
Aldy Hernandez <aldyh@redhat.com> writes:
> +  /* Implicit conversion to `unsigned int' returns the number of pairs.  */
> +  operator unsigned () const { return num_pairs (); }

Yeah, I know I've said it before, but it's a bit of a hobby horse, so...
please, please don't do this!  I think it would be far clearer if users
wanting the number of pairs used num_pairs () and if users wanting to
test for an empty range used is_empty () (as for vec.h).

Thanks,
Richard
Richard Biener July 26, 2017, 11:20 a.m. UTC | #6
On Tue, Jul 25, 2017 at 4:50 PM, Andrew MacLeod <amacleod@redhat.com> wrote:
> I originally looked at ranges with symbolic expressions, but as soon as
> there are ranges comprised of more than 1 range, symbolics became a
> nightmare.  At best you can do endpoints, and even then you have to start
> adding complexity..  [0, 100] Union  [5, x_4]  now has to become  [0,
> max(100, x_4)] , and chained operations were making the expressions more and
> more complicated.  Trying to maintain these expression across optimizations
> was also very problematic as the IL changes and these expressions are not in
> the IL and don't suffer updates easily.
>
> At which point one asks themselves whether it actually makes sense to
> integrate that into the range representation, or try a different tactic.
>
> Seems to me its cleaner to have an integral range, and when appropriate
> track symbols separately to determine if their values can be refined.    If
> at some point you can resolve those symbols to an integral value,  then you
> simply update the integral range with the new range you've determined.
>
> VRP chooses to do this by creating a completely different structure for
> ranges, and representing endpoints as expression trees. It then updates the
> integral ranges at the end of the pass. It uses anti-ranges as a shorthand
> to handle a special case of a range comprised of 2 ranges.  As it stands
> right now, VRP's version of union and intersection can never have much in
> common with a general integral range implementation. They are 2 very
> different beasts.
>
> So we cant do much about VRP internally yet without some significant
> surgery.
>
> This issue seems orthogonal to me though.  irange is not intended to ever do
> anything with symbols, thus can never replace VRPs internal value_range
> class.    We're simply replacing "struct range_info_def" with a new range
> structure and API which can support more precise ranges than a pair of
> integral endpoints.  We'll then build on that by providing routines which
> can generate more precise range information as well.
>
> For the moment we provide an implementation with the same precision to
>   a) ensure code generation remains the same
>   b) allow the compiler to use it for a while to make sure no bugs have been
> introduced
>   c) and there is nothing that would utilize the additional precision yet
> anyway.
>
> I just think its important to get it in the code base so its being used and
> we can be sure its correct.

But nothing uses all the methods.

And we're ending up with two variants of range + range, etc.

irange is nearly a subset of what VRP can handle, so it should be able to re-use
VRP workers.  The 'nearly' part is that VRP currently doesn't handle multiple
ranges.  For an unknown reason you didn't start with teaching VRP that bit.

Yes, symbolic ranges complicate that conversion a bit (getting rid of
anti-ranges in VRP), but you could have either kept anti-ranges for
symbolic ranges only or have open/closed ranges.  The issue with
symbolic ranges here is that from

  if (a != b)

we derive a symbolic range for a that is ~[b, b].  If you want to make that
a union of two ranges you have [MIN, b - 1] U [b + 1, MAX] _but_ both
b - 1 and b + 1 might actually overflow!  So you need to use [MIN, b[ U ]b, MAX]
here (which means either part can be empty if b == MIN or b == MAX).

As said I don't like introducing new stuff without fixing existing
stuff.  Handling
open/closed ranges shouldn't be too difficult if you assert that only a symbolic
end can be open.  Then you can re-use the workers from VRP in irange.

Double-win.

> On its own it is an improvement to "struct range_info_def" as it
> encapsulates the activity in one place and provides a standard API any
> optimization can utilize.  It allows us to improve the precision of ranges
> in the future without changing any client code., and it does not appear to
> have any measurable compile time storage or speed issues.  It is simply an
> infrastructure improvement.

How does it allow us to improve ranges when the very first range propagation
pass that computes the ranges is limited to 1 sub-range?!

Richard.

Jeff Law Sept. 19, 2017, 11:10 p.m. UTC | #7
On 07/26/2017 05:20 AM, Richard Biener wrote:
> 
> But nothing uses all the methods.
> 
> And we're ending up with two variants of range + range, etc.
But I think this is inevitable.  Consider for example floating point
ranges.  There's a belief that a limited form of VRP would be useful
(essentially tracking if the FP value could have the value 0 and perhaps
tracking NaNs as well).  We're not likely to be tracking actual values,
but boolean states about what values an object may or may not have.

I could also well see building another set of evaluation routines which
track zero, nonzero and unknown bits for integer values on top of the
basic framework Andrew is working on.

To some degree symbolics feel the same in that I suspect that we most
often are going to want to work with them as expressions.

Which leads me to wonder if we really need to templateize all the new
code so that we can have a different storage for the ranges and
different ways to extract ranges from operations we see.

Additionally, Andrew claims that handling symbolics really isn't needed
-- I'm still wrapping my head around all the implications of the
terminal names and how we can utilize them.


The more I think about it, the more I believe the general desire to
eliminate the range extraction and propagation step from VRP and instead
build on top of this framework is going to require doing something with
symbolics, plain and simple.  We're not going to be able to remove those
hunks of VRP if we don't do *something* for symbolics.

So I come back to: do we template-ize irange so that we get frange,
srange and possibly others, do we change the underlying storage to
trees, or do we have a polymorphic data structure that adjusts to the
integer, floating and symbolic handling?


> 
> irange is a nearly subset of what VRP can handle so it should be able to re-use
> VRP workers.  The 'nearly' part is that VRP currently doesn't handle multiple
> ranges.  For an unknown reason you didn't start with teaching VRP that bit.
Perhaps, but the general belief is that VRP as we know it today wouldn't
exist in this new world.  Computation of range information would be
separate from utilizing range information for optimization/analysis
purposes.


One approach would be to take the existing bits from VRP and try to pull
them out into its own little module, then extending it to allow the
kinds of queries we need to do.  We felt it was actually going to be
better to start over, building a system to answer the queries we want
from the ground up.

The belief is that what we're going to end up with would replace huge
amounts of VRP (eliminating ASSERT_EXPRs and anti-ranges along the way).

> 
> Yes, symbolic ranges complicate that conversion a bit (getting rid of
> anti-ranges
> in VRP) but you could have either kept anti-ranges for symbolic ranges only
> or have open/closed ranges.  The issue with symbolic ranges here is that
> from
> 
>   if (a != b)
> 
> we derive a symbolic range for a that is ~[b, b].  If you want to make that
> a union of two ranges you have [MIN, b - 1] U [b + 1, MAX] _but_ both
> b - 1 or b + 1 might actually overflow!  So you need to use [MIN, b[ U ]b, MAX]
> here (which means both can be empty if b == MIN or b == MAX).
But how often is this useful in practice and how much are we willing to
pay to get this level of generality?  I wonder if just the ability to
record simple symbolics and very simple manipulations is sufficient to
do what we need in practice.


Jeff
Richard Biener Sept. 21, 2017, 4:53 p.m. UTC | #8
On September 20, 2017 1:10:25 AM GMT+02:00, Jeff Law <law@redhat.com> wrote:
>On 07/26/2017 05:20 AM, Richard Biener wrote:
>> 
>> But nothing uses all the methods.
>> 
>> And we're ending up with two variants of range + range, etc.
>But I think this is inevitable.  Consider for example floating point
>ranges.  There's a belief that a limited form of VRP would be useful
>(essentially tracking if the FP value could have the value 0 and perhaps
>tracking NaNs as well).  We're not likely to be tracking actual values,
>but boolean states about what values an object may or may not have.

Sure.  But they both handle integer constants.  So factor that out of the
VRP workers.

>I could also well see building another set of evaluation routines which
>track zero, nonzero and unknown bits for integer values on top of the
>basic framework Andrew is working on.
>
>To some degree symbolics feel the same in that I suspect that we most
>often are going to want to work with them as expressions.

I don't say you necessarily need to track symbolics and constants in the
same lattice.  But then you need to split it, and I fear also allow
cross-effects.

>Which leads me to wonder if we really need to templateize all the new
>code so that we can have a different storage for the ranges and
>different ways to extract ranges from operations we see.
>
>Additionally, Andrew claims that handling symbolics really isn't needed
>-- I'm still wrapping my head around all the implications of the
>terminal names and how we can utilize them.
>
>
>The more I think about it, the general desire to eliminate the range
>extraction and propagation step from VRP and instead build on top of

Huh, I never intended to do that. 

>this framework is going to require doing something with symbolics,
>plain and simple.  We're not going to be able to remove those hunks of
>VRP if we don't do *something* for symbolics.

Indeed. 

>So I come back to: do we template-ize irange so that we get frange,
>srange and possibly others; do we change the underlying storage to
>trees; or do we have a polymorphic data structure that adjusts to the
>integer, floating and symbolic handling?

No.  Have separate workers operating on specific types, like we have
those for tracking one/zero bits in CCP.
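The CCP-style one/zero bit tracking Richard mentions can be modeled in a small standalone sketch (hypothetical code, not the actual tree-ssa-ccp.c implementation): a VALUE/MASK pair where a set MASK bit marks that bit as unknown.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical bit lattice: MASK has a 1 for every bit whose value is
// unknown; VALUE holds the bit's value wherever the mask bit is 0.
struct bit_lattice
{
  uint64_t value;
  uint64_t mask;
};

// Bitwise AND worker: a result bit is known when both operand bits are
// known, or when either operand bit is a known zero (value 0, mask 0).
static bit_lattice
bit_and (bit_lattice a, bit_lattice b)
{
  bit_lattice r;
  r.value = a.value & b.value;  // Value bits under a set mask bit are don't-care.
  // Unknown iff some input bit is unknown and neither operand forces a
  // known zero: a side with (value | mask) clear at a bit is a known zero.
  r.mask = (a.mask | b.mask) & (a.value | a.mask) & (b.value | b.mask);
  return r;
}
```

For example, ANDing a value whose low two bits are unknown with a fully known constant leaves only the bits unknown where the constant has a 1.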

>
>> 
>> irange is nearly a subset of what VRP can handle, so it should be able
>> to re-use VRP workers.  The 'nearly' part is that VRP currently
>> doesn't handle multiple ranges.  For an unknown reason you didn't
>> start with teaching VRP that bit.
>Perhaps, but the general belief is that VRP as we know it today
>wouldn't exist in this new world.  Computation of range information
>would be separate from utilizing range information for
>optimization/analysis purposes.

VRP is an optimization pass in itself and setting ranges on SSA names is only a secondary purpose. 

>
>
>One approach would be to take the existing bits from VRP and try to
>pull them out into their own little module, then extend it to allow the
>kinds of queries we need to do.  We felt it was actually going to be
>better to start over, building a system to answer the queries we want
>from the ground up.

And probably repeat all the wrong-code bugs we eliminated from VRP... 

Also a separate pass always results in more test coverage than you'd get for (unused) infrastructure. 

>The belief is that what we're going to end up with would replace huge
>amounts of VRP (eliminating ASSERT_EXPRs and anti-ranges along the
>way).

I'll believe it when I see it.  I've only seen a sketch of a range storage class so far. 

>> 
>> Yes, symbolic ranges complicate that conversion a bit (getting rid of
>> anti-ranges in VRP), but you could have either kept anti-ranges for
>> symbolic ranges only or have open/closed ranges.  The issue with
>> symbolic ranges here is that from
>> 
>>   if (a != b)
>> 
>> we derive a symbolic range for a that is ~[b, b].  If you want to make
>> that a union of two ranges you have [MIN, b - 1] U [b + 1, MAX] _but_
>> either b - 1 or b + 1 might actually overflow!  So you need to use
>> [MIN, b[ U ]b, MAX] here (where either subrange can be empty, when
>> b == MIN or b == MAX respectively).
>But how often is this useful in practice and how much are we willing to
>pay to get this level of generality?  I wonder if just the ability to
>record simple symbolics and very simple manipulations is sufficient to
>do what we need in practice.
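Richard's overflow point can be seen in a small standalone sketch (hypothetical code, not GCC's): computing ~[b, b] as a union of closed subranges must drop the subrange whose bound would wrap when b sits at a domain boundary.

```cpp
#include <cstdint>
#include <limits>
#include <utility>
#include <vector>

typedef std::pair<uint32_t, uint32_t> urange;  // Closed range [first, second].

// Invert the anti-range ~[b, b] over the full 32-bit unsigned domain,
// i.e. compute [MIN, b-1] U [b+1, MAX], dropping any subrange whose
// bound computation would wrap around.
static std::vector<urange>
invert_point (uint32_t b)
{
  const uint32_t min = std::numeric_limits<uint32_t>::min ();
  const uint32_t max = std::numeric_limits<uint32_t>::max ();
  std::vector<urange> out;
  if (b != min)			// b - 1 would wrap when b == MIN.
    out.push_back (urange (min, b - 1));
  if (b != max)			// b + 1 would wrap when b == MAX.
    out.push_back (urange (b + 1, max));
  return out;
}
```

With b == MIN or b == MAX one of the two subranges is empty, which is exactly what the half-open [MIN, b[ U ]b, MAX] notation captures without any risk of overflow.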

It's quite important for removing range-checking code. 

Richard. 

>
>Jeff

Patch

diff --git a/gcc/Makefile.in b/gcc/Makefile.in
index 8ace3c2..5e48d6e 100644
--- a/gcc/Makefile.in
+++ b/gcc/Makefile.in
@@ -1416,6 +1416,7 @@  OBJS = \
 	print-rtl-function.o \
 	print-tree.o \
 	profile.o \
+	range.o \
 	read-md.o \
 	read-rtl.o \
 	read-rtl-function.o \
@@ -2484,6 +2485,7 @@  GTFILES = $(CPP_ID_DATA_H) $(srcdir)/input.h $(srcdir)/coretypes.h \
   $(srcdir)/gimple.h \
   $(srcdir)/gimple-ssa.h \
   $(srcdir)/tree-chkp.c \
+  $(srcdir)/range.h $(srcdir)/range.c \
   $(srcdir)/tree-ssanames.c $(srcdir)/tree-eh.c $(srcdir)/tree-ssa-address.c \
   $(srcdir)/tree-cfg.c $(srcdir)/tree-ssa-loop-ivopts.c \
   $(srcdir)/tree-dfa.c \
diff --git a/gcc/builtins.c b/gcc/builtins.c
index 4f6c9c4..b5c9eb0 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -34,6 +34,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "tm_p.h"
 #include "stringpool.h"
 #include "tree-vrp.h"
+#include "range.h"
 #include "tree-ssanames.h"
 #include "expmed.h"
 #include "optabs.h"
@@ -2894,6 +2895,52 @@  builtin_memcpy_read_str (void *data, HOST_WIDE_INT offset,
   return c_readstr (str + offset, mode);
 }
 
+/* If a range IR may have wrapped in such a way that we can guess the
+   range is positive, return TRUE and set PROBABLE_MAX_SIZE.
+   Otherwise, return FALSE and leave PROBABLE_MAX_SIZE unchanged.  */
+
+static bool
+range_may_have_wrapped (irange ir,
+			unsigned HOST_WIDE_INT *probable_max_size)
+{
+  /* Code like:
+
+       signed int n;
+       if (n < 100)
+         {
+           # RANGE [0, 99][0xffff8000, 0xffffffff]
+	   _1 = (unsigned) n;
+	   memcpy (a, b, _1)
+         }
+
+     Produce a range allowing negative values of N.  We can still use
+     the information and make a guess that N is not negative.  */
+  if (ir.num_pairs () != 2
+      || ir.lower_bound () != 0)
+    return false;
+
+  const_tree type = ir.get_type ();
+  unsigned precision = TYPE_PRECISION (type);
+  gcc_assert (TYPE_UNSIGNED (type));
+
+  /* Build a range with all the negatives: [0xffff8000, 0xffffffff].  */
+  wide_int minus_one = wi::bit_not (wide_int::from (0, precision, UNSIGNED));
+  wide_int smallest_neg = wi::lshift (minus_one, precision / 2 - 1);
+  irange negatives (type, smallest_neg, minus_one);
+
+  irange orig_range = ir;
+  ir.intersect (negatives);
+  if (ir == negatives)
+    {
+      wide_int max = orig_range.upper_bound (0); // Get the 99 in [0, 99].
+      if (!wi::fits_uhwi_p (max))
+	return false;
+      *probable_max_size = max.to_uhwi ();
+      return true;
+    }
+  return false;
+}
+
 /* LEN specify length of the block of memcpy/memset operation.
    Figure out its range and put it into MIN_SIZE/MAX_SIZE. 
    In some cases we can make very likely guess on max size, then we
@@ -2913,7 +2960,6 @@  determine_block_size (tree len, rtx len_rtx,
   else
     {
       wide_int min, max;
-      enum value_range_type range_type = VR_UNDEFINED;
 
       /* Determine bounds from the type.  */
       if (tree_fits_uhwi_p (TYPE_MIN_VALUE (TREE_TYPE (len))))
@@ -2926,34 +2972,18 @@  determine_block_size (tree len, rtx len_rtx,
       else
 	*probable_max_size = *max_size = GET_MODE_MASK (GET_MODE (len_rtx));
 
-      if (TREE_CODE (len) == SSA_NAME)
-	range_type = get_range_info (len, &min, &max);
-      if (range_type == VR_RANGE)
+      if (TREE_CODE (len) == SSA_NAME && get_range_info (len, &min, &max))
 	{
-	  if (wi::fits_uhwi_p (min) && *min_size < min.to_uhwi ())
-	    *min_size = min.to_uhwi ();
-	  if (wi::fits_uhwi_p (max) && *max_size > max.to_uhwi ())
-	    *probable_max_size = *max_size = max.to_uhwi ();
-	}
-      else if (range_type == VR_ANTI_RANGE)
-	{
-	  /* Anti range 0...N lets us to determine minimal size to N+1.  */
-	  if (min == 0)
+	  irange ir (len);
+	  if (range_may_have_wrapped (ir, probable_max_size))
+	    ;
+	  else
 	    {
-	      if (wi::fits_uhwi_p (max) && max.to_uhwi () + 1 != 0)
-		*min_size = max.to_uhwi () + 1;
+	      if (wi::fits_uhwi_p (min) && *min_size < min.to_uhwi ())
+		*min_size = min.to_uhwi ();
+	      if (wi::fits_uhwi_p (max) && *max_size > max.to_uhwi ())
+		*probable_max_size = *max_size = max.to_uhwi ();
 	    }
-	  /* Code like
-
-	     int n;
-	     if (n < 100)
-	       memcpy (a, b, n)
-
-	     Produce anti range allowing negative values of N.  We still
-	     can use the information and make a guess that N is not negative.
-	     */
-	  else if (!wi::leu_p (max, 1 << 30) && wi::fits_uhwi_p (min))
-	    *probable_max_size = min.to_uhwi () - 1;
 	}
     }
   gcc_checking_assert (*max_size <=
@@ -3136,7 +3166,7 @@  check_sizes (int opt, tree exp, tree size, tree maxlen, tree src, tree objsize)
   bool exactsize = size && tree_fits_uhwi_p (size);
 
   if (size)
-    get_size_range (size, range);
+    get_size_range (size, range, /*range_starts_at=*/0);
 
   /* First check the number of bytes to be written against the maximum
      object size.  */
diff --git a/gcc/calls.c b/gcc/calls.c
index 91a4466..160d9b4 100644
--- a/gcc/calls.c
+++ b/gcc/calls.c
@@ -49,6 +49,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "rtl-iter.h"
 #include "tree-chkp.h"
 #include "tree-vrp.h"
+#include "range.h"
 #include "tree-ssanames.h"
 #include "rtl-chkp.h"
 #include "intl.h"
@@ -1256,10 +1257,12 @@  alloc_max_size (void)
    after adjusting it if necessary to make EXP a valid size argument to
    an allocation function declared with attribute alloc_size (whose
    argument may be signed), or to a string manipulation function like
-   memset.  */
+   memset.
+
+   RANGE_STARTS_AT is the lower bound at which the range starts.  */
 
 bool
-get_size_range (tree exp, tree range[2])
+get_size_range (tree exp, tree range[2], unsigned range_starts_at)
 {
   if (tree_fits_uhwi_p (exp))
     {
@@ -1269,74 +1272,41 @@  get_size_range (tree exp, tree range[2])
     }
 
   wide_int min, max;
-  enum value_range_type range_type
-    = ((TREE_CODE (exp) == SSA_NAME && INTEGRAL_TYPE_P (TREE_TYPE (exp)))
-       ? get_range_info (exp, &min, &max) : VR_VARYING);
-
-  if (range_type == VR_VARYING)
+  if (TREE_CODE (exp) != SSA_NAME
+      || !INTEGRAL_TYPE_P (TREE_TYPE (exp))
+      || !get_range_info (exp, &min, &max))
     {
       /* No range information available.  */
       range[0] = NULL_TREE;
       range[1] = NULL_TREE;
       return false;
     }
+  irange ir (exp);
 
   tree exptype = TREE_TYPE (exp);
-  unsigned expprec = TYPE_PRECISION (exptype);
-  wide_int wzero = wi::zero (expprec);
-  wide_int wmaxval = wide_int (TYPE_MAX_VALUE (exptype));
-
-  bool signed_p = !TYPE_UNSIGNED (exptype);
 
-  if (range_type == VR_ANTI_RANGE)
+  /* Remove negative numbers from the range.  */
+  irange positives;
+  range_positives (&positives, exptype, range_starts_at);
+  if (!positives.intersect (ir).empty_p ())
     {
-      if (signed_p)
-	{
-	  if (wi::les_p (max, wzero))
-	    {
-	      /* EXP is not in a strictly negative range.  That means
-		 it must be in some (not necessarily strictly) positive
-		 range which includes zero.  Since in signed to unsigned
-		 conversions negative values end up converted to large
-		 positive values, and otherwise they are not valid sizes,
-		 the resulting range is in both cases [0, TYPE_MAX].  */
-	      min = wzero;
-	      max = wmaxval;
-	    }
-	  else if (wi::les_p (min - 1, wzero))
-	    {
-	      /* EXP is not in a negative-positive range.  That means EXP
-		 is either negative, or greater than max.  Since negative
-		 sizes are invalid make the range [MAX + 1, TYPE_MAX].  */
-	      min = max + 1;
-	      max = wmaxval;
-	    }
-	  else
-	    {
-	      max = min - 1;
-	      min = wzero;
-	    }
-	}
-      else if (wi::eq_p (wzero, min - 1))
-	{
-	  /* EXP is unsigned and not in the range [1, MAX].  That means
-	     it's either zero or greater than MAX.  Even though 0 would
-	     normally be detected by -Walloc-zero set the range to
-	     [MAX, TYPE_MAX] so that when MAX is greater than the limit
-	     the whole range is diagnosed.  */
-	  min = max + 1;
-	  max = wmaxval;
-	}
-      else
-	{
-	  max = min - 1;
-	  min = wzero;
-	}
+      /* Remove the unknown parts of a multi-range.
+	 This will transform [5,10][20,MAX] into [5,10].  */
+      if (positives.num_pairs () > 1
+	  && positives.upper_bound () == wide_int (TYPE_MAX_VALUE (exptype)))
+	positives.remove_pair (positives.num_pairs () - 1);
+
+      range[0] = wide_int_to_tree (exptype, positives.lower_bound ());
+      range[1] = wide_int_to_tree (exptype, positives.upper_bound ());
+    }
+  else
+    {
+      /* If removing the negative numbers didn't give us anything
+	 back, the entire range was negative. Leave things as they
+	 were, and let the caller sort it out.  */
+      range[0] = wide_int_to_tree (exptype, min);
+      range[1] = wide_int_to_tree (exptype, max);
     }
-
-  range[0] = wide_int_to_tree (exptype, min);
-  range[1] = wide_int_to_tree (exptype, max);
-
   return true;
 }
 
diff --git a/gcc/calls.h b/gcc/calls.h
index df5817f..2b204c7 100644
--- a/gcc/calls.h
+++ b/gcc/calls.h
@@ -38,6 +38,6 @@  extern bool pass_by_reference (CUMULATIVE_ARGS *, machine_mode,
 extern bool reference_callee_copied (CUMULATIVE_ARGS *, machine_mode,
 				     tree, bool);
 extern void maybe_warn_alloc_args_overflow (tree, tree, tree[2], int[2]);
-extern bool get_size_range (tree, tree[2]);
+extern bool get_size_range (tree, tree[2], unsigned range_starts_at = 1);
 
 #endif // GCC_CALLS_H
diff --git a/gcc/fold-const.c b/gcc/fold-const.c
index 736552c..ec7b5ee 100644
--- a/gcc/fold-const.c
+++ b/gcc/fold-const.c
@@ -76,6 +76,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "md5.h"
 #include "case-cfn-macros.h"
 #include "stringpool.h"
+#include "range.h"
 #include "tree-vrp.h"
 #include "tree-ssanames.h"
 #include "selftest.h"
@@ -9063,7 +9064,6 @@  bool
 expr_not_equal_to (tree t, const wide_int &w)
 {
   wide_int min, max, nz;
-  value_range_type rtype;
   switch (TREE_CODE (t))
     {
     case INTEGER_CST:
@@ -9072,18 +9072,12 @@  expr_not_equal_to (tree t, const wide_int &w)
     case SSA_NAME:
       if (!INTEGRAL_TYPE_P (TREE_TYPE (t)))
 	return false;
-      rtype = get_range_info (t, &min, &max);
-      if (rtype == VR_RANGE)
+      if (SSA_NAME_RANGE_INFO (t))
 	{
-	  if (wi::lt_p (max, w, TYPE_SIGN (TREE_TYPE (t))))
-	    return true;
-	  if (wi::lt_p (w, min, TYPE_SIGN (TREE_TYPE (t))))
+	  irange ri (t);
+	  if (!ri.contains_p (w))
 	    return true;
 	}
-      else if (rtype == VR_ANTI_RANGE
-	       && wi::le_p (min, w, TYPE_SIGN (TREE_TYPE (t)))
-	       && wi::le_p (w, max, TYPE_SIGN (TREE_TYPE (t))))
-	return true;
       /* If T has some known zero bits and W has any of those bits set,
 	 then T is known not to be equal to W.  */
       if (wi::ne_p (wi::zext (wi::bit_and_not (w, get_nonzero_bits (t)),
diff --git a/gcc/function-tests.c b/gcc/function-tests.c
index ca30028..f1f884d 100644
--- a/gcc/function-tests.c
+++ b/gcc/function-tests.c
@@ -574,6 +574,19 @@  test_conversion_to_ssa ()
   ASSERT_EQ (SSA_NAME, TREE_CODE (gimple_return_retval (return_stmt)));
 }
 
+/* Test the irange class.  We must start this here because we need
+   cfun set.  */
+
+static void
+test_iranges ()
+{
+  tree fndecl = build_trivial_high_gimple_function ();
+  function *fun = DECL_STRUCT_FUNCTION (fndecl);
+  push_cfun (fun);
+  irange_tests ();
+  pop_cfun ();
+}
+
 /* Test of expansion from gimple-ssa to RTL.  */
 
 static void
@@ -677,6 +690,7 @@  function_tests_c_tests ()
   test_gimplification ();
   test_building_cfg ();
   test_conversion_to_ssa ();
+  test_iranges ();
   test_expansion_to_rtl ();
 }
 
diff --git a/gcc/gengtype-parse.c b/gcc/gengtype-parse.c
index f6ad398..37e678a 100644
--- a/gcc/gengtype-parse.c
+++ b/gcc/gengtype-parse.c
@@ -300,6 +300,13 @@  require_template_declaration (const char *tmpl_name)
 	{
 	  if (T.code == '*')
 	    {
+	      /* Handle simple multiplication by a constant.  Do not
+		 treat it like indirection.  */
+	      if (token () == NUM)
+		{
+		  str = concat (str, advance (), (char *) 0);
+		  continue;
+		}
 	      id = "*";
 	      if (num_indirections++)
 		parse_error ("only one level of indirection is supported"
diff --git a/gcc/gengtype.c b/gcc/gengtype.c
index b02e9ff..1c281c3 100644
--- a/gcc/gengtype.c
+++ b/gcc/gengtype.c
@@ -1715,6 +1715,7 @@  open_base_files (void)
       "optabs.h", "libfuncs.h", "debug.h", "internal-fn.h", "gimple-fold.h",
       "tree-eh.h", "gimple-iterator.h", "gimple-ssa.h", "tree-cfg.h",
       "tree-vrp.h", "tree-phinodes.h", "ssa-iterators.h", "stringpool.h",
+      "range.h",
       "tree-ssanames.h", "tree-ssa-loop.h", "tree-ssa-loop-ivopts.h",
       "tree-ssa-loop-manip.h", "tree-ssa-loop-niter.h", "tree-into-ssa.h",
       "tree-dfa.h", "tree-ssa.h", "reload.h", "cpp-id-data.h", "tree-chrec.h",
diff --git a/gcc/gimple-pretty-print.c b/gcc/gimple-pretty-print.c
index c99dfe1..658d7e6 100644
--- a/gcc/gimple-pretty-print.c
+++ b/gcc/gimple-pretty-print.c
@@ -2109,22 +2109,17 @@  dump_ssaname_info (pretty_printer *buffer, tree node, int spc)
 	}
     }
 
-  if (!POINTER_TYPE_P (TREE_TYPE (node))
-      && SSA_NAME_RANGE_INFO (node))
+  if (!POINTER_TYPE_P (TREE_TYPE (node)) && SSA_NAME_RANGE_INFO (node))
     {
-      wide_int min, max, nonzero_bits;
-      value_range_type range_type = get_range_info (node, &min, &max);
+      irange ri (node);
+      wide_int nonzero_bits;
 
-      if (range_type == VR_VARYING)
+      if (ri.empty_p ())
 	pp_printf (buffer, "# RANGE VR_VARYING");
-      else if (range_type == VR_RANGE || range_type == VR_ANTI_RANGE)
+      else
 	{
 	  pp_printf (buffer, "# RANGE ");
-	  pp_printf (buffer, "%s[", range_type == VR_RANGE ? "" : "~");
-	  pp_wide_int (buffer, min, TYPE_SIGN (TREE_TYPE (node)));
-	  pp_printf (buffer, ", ");
-	  pp_wide_int (buffer, max, TYPE_SIGN (TREE_TYPE (node)));
-	  pp_printf (buffer, "]");
+	  ri.dump (buffer);
 	}
       nonzero_bits = get_nonzero_bits (node);
       if (nonzero_bits != -1)
diff --git a/gcc/gimple-ssa-sprintf.c b/gcc/gimple-ssa-sprintf.c
index f43778b..0dd87a4 100644
--- a/gcc/gimple-ssa-sprintf.c
+++ b/gcc/gimple-ssa-sprintf.c
@@ -1132,8 +1132,7 @@  get_int_range (tree arg, HOST_WIDE_INT *pmin, HOST_WIDE_INT *pmax,
 	{
 	  /* Try to determine the range of values of the integer argument.  */
 	  wide_int min, max;
-	  enum value_range_type range_type = get_range_info (arg, &min, &max);
-	  if (range_type == VR_RANGE)
+	  if (get_range_info (arg, &min, &max))
 	    {
 	      HOST_WIDE_INT type_min
 		= (TYPE_UNSIGNED (argtype)
@@ -1432,8 +1431,7 @@  format_integer (const directive &dir, tree arg)
       /* Try to determine the range of values of the integer argument
 	 (range information is not available for pointers).  */
       wide_int min, max;
-      enum value_range_type range_type = get_range_info (arg, &min, &max);
-      if (range_type == VR_RANGE)
+      if (get_range_info (arg, &min, &max))
 	{
 	  argmin = wide_int_to_tree (argtype, min);
 	  argmax = wide_int_to_tree (argtype, max);
@@ -1449,11 +1447,7 @@  format_integer (const directive &dir, tree arg)
 	  res.argmin = argmin;
 	  res.argmax = argmax;
 	}
-      else if (range_type == VR_ANTI_RANGE)
-	{
-	  /* Handle anti-ranges if/when bug 71690 is resolved.  */
-	}
-      else if (range_type == VR_VARYING)
+      else
 	{
 	  /* The argument here may be the result of promoting the actual
 	     argument to int.  Try to determine the type of the actual
@@ -3877,9 +3871,7 @@  pass_sprintf_length::handle_gimple_call (gimple_stmt_iterator *gsi)
 	     and use the greater of the two at level 1 and the smaller
 	     of them at level 2.  */
 	  wide_int min, max;
-	  enum value_range_type range_type
-	    = get_range_info (size, &min, &max);
-	  if (range_type == VR_RANGE)
+	  if (get_range_info (size, &min, &max))
 	    {
 	      dstsize
 		= (warn_level < 2
diff --git a/gcc/gimple-ssa-warn-alloca.c b/gcc/gimple-ssa-warn-alloca.c
index ec95cc6..df506bb 100644
--- a/gcc/gimple-ssa-warn-alloca.c
+++ b/gcc/gimple-ssa-warn-alloca.c
@@ -216,13 +216,12 @@  alloca_call_type_by_arg (tree arg, tree arg_casted, edge e, unsigned max_size)
       && TREE_CODE (limit) == SSA_NAME)
     {
       wide_int min, max;
-      value_range_type range_type = get_range_info (limit, &min, &max);
 
-      if (range_type == VR_UNDEFINED || range_type == VR_VARYING)
+      if (!get_range_info (limit, &min, &max))
 	return alloca_type_and_limit (ALLOCA_BOUND_UNKNOWN);
 
       // ?? It looks like the above `if' is unnecessary, as we never
-      // get any VR_RANGE or VR_ANTI_RANGE here.  If we had a range
+      // get any range information here.  If we had a range
       // for LIMIT, I suppose we would have taken care of it in
       // alloca_call_type(), or handled above where we handle (ARG .COND. N).
       //
@@ -252,13 +251,12 @@  cast_from_signed_p (tree ssa, tree *invalid_casted_type)
   return false;
 }
 
-// Return TRUE if X has a maximum range of MAX, basically covering the
-// entire domain, in which case it's no range at all.
+// Return TRUE if MAX is the maximum representable value of TYPE.
 
 static bool
-is_max (tree x, wide_int max)
+is_max (tree type, wide_int max)
 {
-  return wi::max_value (TREE_TYPE (x)) == max;
+  return wi::max_value (type) == max;
 }
 
 // Analyze the alloca call in STMT and return the alloca type with its
@@ -284,104 +282,103 @@  alloca_call_type (gimple *stmt, bool is_vla, tree *invalid_casted_type)
 
   // Adjust warn_alloca_max_size for VLAs, by taking the underlying
   // type into account.
-  unsigned HOST_WIDE_INT max_size;
+  unsigned HOST_WIDE_INT max_user_size;
   if (is_vla)
-    max_size = (unsigned HOST_WIDE_INT) warn_vla_limit;
+    max_user_size = (unsigned HOST_WIDE_INT) warn_vla_limit;
   else
-    max_size = (unsigned HOST_WIDE_INT) warn_alloca_limit;
+    max_user_size = (unsigned HOST_WIDE_INT) warn_alloca_limit;
 
   // Check for the obviously bounded case.
   if (TREE_CODE (len) == INTEGER_CST)
     {
-      if (tree_to_uhwi (len) > max_size)
+      if (tree_to_uhwi (len) > max_user_size)
 	return alloca_type_and_limit (ALLOCA_BOUND_DEFINITELY_LARGE, len);
       if (integer_zerop (len))
 	return alloca_type_and_limit (ALLOCA_ARG_IS_ZERO);
       ret = alloca_type_and_limit (ALLOCA_OK);
     }
   // Check the range info if available.
-  else if (TREE_CODE (len) == SSA_NAME)
+  else if (TREE_CODE (len) == SSA_NAME && get_range_info (len, &min, &max))
     {
-      value_range_type range_type = get_range_info (len, &min, &max);
-      if (range_type == VR_RANGE)
+      irange r (len);
+      if (wi::leu_p (max, max_user_size))
+	ret = alloca_type_and_limit (ALLOCA_OK);
+      else if (is_max (TREE_TYPE (len), max)
+	       && !r.range_for_type_p ()
+	       && cast_from_signed_p (len, invalid_casted_type))
 	{
-	  if (wi::leu_p (max, max_size))
-	    ret = alloca_type_and_limit (ALLOCA_OK);
-	  else
-	    {
-	      // A cast may have created a range we don't care
-	      // about.  For instance, a cast from 16-bit to
-	      // 32-bit creates a range of 0..65535, even if there
-	      // is not really a determinable range in the
-	      // underlying code.  In this case, look through the
-	      // cast at the original argument, and fall through
-	      // to look at other alternatives.
-	      //
-	      // We only look at through the cast when its from
-	      // unsigned to unsigned, otherwise we may risk
-	      // looking at SIGNED_INT < N, which is clearly not
-	      // what we want.  In this case, we'd be interested
-	      // in a VR_RANGE of [0..N].
-	      //
-	      // Note: None of this is perfect, and should all go
-	      // away with better range information.  But it gets
-	      // most of the cases.
-	      gimple *def = SSA_NAME_DEF_STMT (len);
-	      if (gimple_assign_cast_p (def))
-		{
-		  tree rhs1 = gimple_assign_rhs1 (def);
-		  tree rhs1type = TREE_TYPE (rhs1);
-
-		  // Bail if the argument type is not valid.
-		  if (!INTEGRAL_TYPE_P (rhs1type))
-		    return alloca_type_and_limit (ALLOCA_OK);
-
-		  if (TYPE_UNSIGNED (rhs1type))
-		    {
-		      len_casted = rhs1;
-		      range_type = get_range_info (len_casted, &min, &max);
-		    }
-		}
-	      // An unknown range or a range of the entire domain is
-	      // really no range at all.
-	      if (range_type == VR_VARYING
-		  || (!len_casted && is_max (len, max))
-		  || (len_casted && is_max (len_casted, max)))
-		{
-		  // Fall through.
-		}
-	      else if (range_type == VR_ANTI_RANGE)
-		return alloca_type_and_limit (ALLOCA_UNBOUNDED);
-	      else if (range_type != VR_VARYING)
-		return alloca_type_and_limit (ALLOCA_BOUND_MAYBE_LARGE, max);
-	    }
-	}
-      else if (range_type == VR_ANTI_RANGE)
-	{
-	  // There may be some wrapping around going on.  Catch it
-	  // with this heuristic.  Hopefully, this VR_ANTI_RANGE
-	  // nonsense will go away, and we won't have to catch the
-	  // sign conversion problems with this crap.
+	  // A cast from signed to unsigned may cause us to have very
+	  // large numbers that can be caught with the above
+	  // heuristic.
 	  //
 	  // This is here to catch things like:
 	  // void foo(signed int n) {
 	  //   if (n < 100)
-	  //     alloca(n);
+	  //     {
+	  //       # RANGE [0,99][0xff80, 0xffff]
+	  //       unsigned int _1 = (unsigned int) n;
+	  //       alloca (_1);
+	  //     }
 	  //   ...
 	  // }
-	  if (cast_from_signed_p (len, invalid_casted_type))
+	  //
+	  // Unfortunately it also triggers:
+	  //
+	  // __SIZE_TYPE__ n = (__SIZE_TYPE__)blah;
+	  // if (n < 100)
+	  //   alloca(n);
+	  //
+	  // ...which is clearly bounded.  So, double check that
+	  // the paths leading up to the size definitely don't
+	  // have a bound.
+	  tentative_cast_from_signed = true;
+	}
+      else
+	{
+	  // A cast may have created a range we don't care
+	  // about.  For instance, a cast from 16-bit to
+	  // 32-bit creates a range of 0..65535, even if there
+	  // is not really a determinable range in the
+	  // underlying code.  In this case, look through the
+	  // cast at the original argument, and fall through
+	  // to look at other alternatives.
+	  //
+	  // We only look through the cast when it's from unsigned to
+	  // unsigned, otherwise we risk looking at SIGNED_INT < N,
+	  // which is clearly not what we want.  In this case, we'd be
+	  // interested in a VR_RANGE of [0..N].
+	  //
+	  // Note: None of this is perfect, and should all go
+	  // away with better range information.  But it gets
+	  // most of the cases.
+	  gimple *def = SSA_NAME_DEF_STMT (len);
+	  bool have_range = true;
+	  if (gimple_assign_cast_p (def))
+	    {
+	      // Look at the cast's operand.
+	      tree rhs1 = gimple_assign_rhs1 (def);
+	      tree rhs1type = TREE_TYPE (rhs1);
+
+	      // Bail if the argument type is not valid.
+	      if (!INTEGRAL_TYPE_P (rhs1type))
+		return alloca_type_and_limit (ALLOCA_OK);
+
+	      if (TYPE_UNSIGNED (rhs1type))
+		{
+		  len_casted = rhs1;
+		  have_range = get_range_info (len_casted, &min, &max);
+		}
+	    }
+	  // An unknown range or a range of the entire domain is
+	  // really no range at all.
+	  if (!have_range
+	      || (!len_casted && is_max (TREE_TYPE (len), max))
+	      || (len_casted && is_max (TREE_TYPE (len_casted), max)))
 	    {
-	      // Unfortunately this also triggers:
-	      //
-	      // __SIZE_TYPE__ n = (__SIZE_TYPE__)blah;
-	      // if (n < 100)
-	      //   alloca(n);
-	      //
-	      // ...which is clearly bounded.  So, double check that
-	      // the paths leading up to the size definitely don't
-	      // have a bound.
-	      tentative_cast_from_signed = true;
+	      // Fall through.
 	    }
+	  else
+	    return alloca_type_and_limit (ALLOCA_BOUND_MAYBE_LARGE, max);
 	}
       // No easily determined range and try other things.
     }
@@ -397,7 +394,7 @@  alloca_call_type (gimple *stmt, bool is_vla, tree *invalid_casted_type)
 	{
 	  gcc_assert (!len_casted || TYPE_UNSIGNED (TREE_TYPE (len_casted)));
 	  ret = alloca_call_type_by_arg (len, len_casted,
-					 EDGE_PRED (bb, ix), max_size);
+					 EDGE_PRED (bb, ix), max_user_size);
 	  if (ret.type != ALLOCA_OK)
 	    break;
 	}
diff --git a/gcc/internal-fn.c b/gcc/internal-fn.c
index 75fe027..df6aa0a 100644
--- a/gcc/internal-fn.c
+++ b/gcc/internal-fn.c
@@ -496,7 +496,7 @@  get_min_precision (tree arg, signop sign)
   if (TREE_CODE (arg) != SSA_NAME)
     return prec + (orig_sign != sign);
   wide_int arg_min, arg_max;
-  while (get_range_info (arg, &arg_min, &arg_max) != VR_RANGE)
+  while (!get_range_info (arg, &arg_min, &arg_max))
     {
       gimple *g = SSA_NAME_DEF_STMT (arg);
       if (is_gimple_assign (g)
diff --git a/gcc/ipa-cp.c b/gcc/ipa-cp.c
index b97d7af..94f50c1 100644
--- a/gcc/ipa-cp.c
+++ b/gcc/ipa-cp.c
@@ -981,8 +981,6 @@  ipcp_bits_lattice::set_to_constant (widest_int value, widest_int mask)
 void
 ipcp_bits_lattice::get_value_and_mask (tree operand, widest_int *valuep, widest_int *maskp)
 {
-  wide_int get_nonzero_bits (const_tree);
-
   if (TREE_CODE (operand) == INTEGER_CST)
     {
       *valuep = wi::to_widest (operand);
diff --git a/gcc/ipa-prop.c b/gcc/ipa-prop.c
index 10741a2..c1e034f 100644
--- a/gcc/ipa-prop.c
+++ b/gcc/ipa-prop.c
@@ -1896,7 +1896,7 @@  ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
 	  value_range_type type;
 	  if (TREE_CODE (arg) == SSA_NAME
 	      && param_type
-	      && (type = get_range_info (arg, &min, &max))
+	      && (type = get_range_info_as_value_range (arg, &min, &max))
 	      && (type == VR_RANGE || type == VR_ANTI_RANGE))
 	    {
 	      value_range tmpvr,resvr;
@@ -1906,6 +1906,18 @@  ipa_compute_jump_functions_for_edge (struct ipa_func_body_info *fbi,
 	      tmpvr.max = wide_int_to_tree (TREE_TYPE (arg), max);
 	      tmpvr.equiv = NULL;
 	      memset (&resvr, 0, sizeof (resvr));
+	      /* FIXME: Can we rewrite this to avoid calling the
+		 internals of VRP here?  We should really get rid of
+		 the call to get_range_info_as_value_range() above,
+		 and this value_range business throughout this file.
+
+		 At this point, we should only be looking at
+		 SSA_NAME_RANGE_INFO (through get_range_info()).
+
+		 Perhaps we could look at all the uses of value_range
+		 in this file, and if they are only used on
+		 INTEGRAL_TYPE_P's, rewrite this to use the irage
+		 class.  */
 	      extract_range_from_unary_expr (&resvr, NOP_EXPR, param_type,
 					     &tmpvr, TREE_TYPE (arg));
 	      if (resvr.type == VR_RANGE || resvr.type == VR_ANTI_RANGE)
diff --git a/gcc/range.c b/gcc/range.c
new file mode 100644
index 0000000..e2ca95c
--- /dev/null
+++ b/gcc/range.c
@@ -0,0 +1,1361 @@ 
+/* SSA range analysis implementation. -*- C++ -*-
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   Contributed by Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "backend.h"
+#include "tree.h"
+#include "gimple.h"
+#include "gimple-pretty-print.h"
+#include "fold-const.h"
+#include "ssa.h"
+#include "range.h"
+#include "selftest.h"
+
+/* Set range from a TYPE and some bounds (LBOUND and UBOUND).
+
+   RT is PLAIN if it is a normal range, or INVERSE if it is an inverse
+   range.  */
+
+void
+irange::set_range (const_tree typ, const wide_int &lbound,
+		   const wide_int &ubound, kind rt)
+{
+  gcc_assert (INTEGRAL_TYPE_P (typ) || POINTER_TYPE_P (typ));
+  gcc_assert (TYPE_PRECISION (typ) == lbound.get_precision ());
+  gcc_assert (lbound.get_precision () == ubound.get_precision ());
+  overflow = false;
+  type = typ;
+  gcc_assert (wi::le_p (lbound, ubound, TYPE_SIGN (type)));
+  if (rt == INVERSE)
+    {
+      /* We calculate INVERSE([I,J]) as [-MIN, I-1][J+1, +MAX].  */
+      bool ovf;
+      nitems = 0;
+      wide_int min = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+      wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+
+      /* If we will overflow, don't bother.  This will handle unsigned
+	 underflow which doesn't set the overflow bit.
+
+	 Note: Perhaps all these &ovf checks are unnecessary since we
+	 are manually checking for overflow with the if() below.  */
+      if (lbound != min)
+	{
+	  bounds[nitems++] = min;
+	  bounds[nitems++] = wi::sub (lbound, 1, TYPE_SIGN (type), &ovf);
+	  if (ovf)
+	    nitems = 0;
+	}
+      /* If we will overflow, don't bother.  This will handle unsigned
+	 overflow which doesn't set the overflow bit.  */
+      if (ubound != max)
+	{
+	  bounds[nitems++] = wi::add (ubound, 1, TYPE_SIGN (type), &ovf);
+	  if (ovf)
+	    nitems--;
+	  else
+	    bounds[nitems++] = max;
+	}
+
+      /* If we get here with NITEMS==0, it means we tried to calculate
+	 the inverse of [-MIN, +MAX], which is actually the empty set,
+	 and NITEMS==0 maps nicely to the empty set.  */
+    }
+  else
+    {
+      nitems = 2;
+      bounds[0] = lbound;
+      bounds[1] = ubound;
+    }
+}
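[Not part of the patch: the PLAIN/INVERSE logic above can be illustrated with a minimal standalone sketch.  It models sub-ranges as `std::pair<int,int>` over a small integer domain instead of wide_int bounds; `inverse_range` is a hypothetical name, and the overflow handling is replaced by the same "skip the sub-range when the bound is already at the domain edge" rule the patch uses.]

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Standalone model of INVERSE([lb, ub]) within the domain [dmin, dmax]:
// the result is [dmin, lb-1][ub+1, dmax], dropping either piece when it
// would step past a domain edge.  An empty vector is the empty set.
static std::vector<std::pair<int, int> >
inverse_range (int lb, int ub, int dmin, int dmax)
{
  std::vector<std::pair<int, int> > out;
  // Left piece [dmin, lb-1], unless LB is already the domain minimum.
  if (lb != dmin)
    out.push_back (std::make_pair (dmin, lb - 1));
  // Right piece [ub+1, dmax], unless UB is already the domain maximum.
  if (ub != dmax)
    out.push_back (std::make_pair (ub + 1, dmax));
  // Inverting the full domain pushes nothing: the empty set.
  return out;
}
```

For example, inverting [5,10] over a signed-char domain yields [-128,4][11,127], and inverting the full domain yields no pairs at all, matching the NITEMS==0 convention above.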
+
+// Set range from an IRANGE_STORAGE and TYPE.
+
+void
+irange::set_range (const irange_storage *storage, const_tree typ)
+{
+  overflow = false;
+  type = typ;
+  nitems = 0;
+  unsigned i = 0;
+  unsigned precision = wi::get_precision (storage->trailing_bounds[0]);
+  gcc_assert (precision == TYPE_PRECISION (typ));
+  while (i < max_pairs * 2)
+    {
+      wide_int lo = storage->trailing_bounds[i];
+      wide_int hi = storage->trailing_bounds[i + 1];
+      // A nonsensical sub-range of [1,0] marks the end of valid ranges.
+      if (lo == wi::one (precision) && hi == wi::zero (precision))
+	break;
+      bounds[i] = lo;
+      bounds[i + 1] = hi;
+      i += 2;
+    }
+  nitems = i;
+  gcc_assert (nitems <= max_pairs * 2);
+}
+
+/* Set range from an SSA_NAME's available range.  If there is no
+   available range, build a range for its entire domain.  */
+
+void
+irange::set_range (const_tree ssa)
+{
+  tree t = TREE_TYPE (ssa);
+  gcc_assert (TREE_CODE (ssa) == SSA_NAME && INTEGRAL_TYPE_P (t));
+  if (!SSA_NAME_RANGE_INFO (ssa))
+    {
+      set_range_for_type (t);
+      return;
+    }
+  irange_storage *storage = SSA_NAME_RANGE_INFO (ssa);
+  set_range (storage, t);
+}
+
+/* Set range from the full domain of type T.  */
+
+void
+irange::set_range_for_type (const_tree t)
+{
+  gcc_assert (TYPE_P (t));
+  gcc_assert (INTEGRAL_TYPE_P (t) || POINTER_TYPE_P (t));
+  wide_int min = wi::min_value (TYPE_PRECISION (t), TYPE_SIGN (t));
+  wide_int max = wi::max_value (TYPE_PRECISION (t), TYPE_SIGN (t));
+  set_range (t, min, max);
+}
+
+irange::irange (const irange &r)
+{
+  type = r.type;
+  overflow = false;
+  nitems = r.nitems;
+  for (unsigned i = 0; i < nitems; ++i)
+    bounds[i] = r.bounds[i];
+}
+
+bool
+irange::operator== (const irange &r) const
+{
+  if (type != r.type || nitems != r.nitems || overflow != r.overflow)
+    return false;
+  for (unsigned i = 0; i < nitems; ++i)
+    if (bounds[i] != r.bounds[i])
+      return false;
+  return true;
+}
+
+irange&
+irange::operator= (const irange &r)
+{
+  type = r.type;
+  nitems = r.nitems;
+  overflow = r.overflow;
+  for (unsigned i = 0; i < nitems; ++i)
+    bounds[i] = r.bounds[i];
+  return *this;
+}
+
+
+irange&
+irange::operator= (const_tree t)
+{
+  set_range (t);
+  return *this;
+}
+
+// Return true if this range is the full range for its type.
+
+bool
+irange::range_for_type_p () const
+{
+  irange tmp;
+  tmp.set_range_for_type (type);
+  return (*this == tmp);
+}
+
+
+bool
+irange::valid_p () const
+{
+  if (!nitems
+      || nitems % 2
+      || nitems > max_pairs * 2)
+    return false;
+
+  /* Check that the bounds are in the right order.
+
+     So for [a,b][c,d][e,f] we must have:
+     a <= b < c <= d < e <= f.  */
+  if (wi::gt_p (bounds[0], bounds[1], TYPE_SIGN (type)))
+    return false;
+  for (unsigned i = 2; i < nitems; i += 2)
+    {
+      if (wi::le_p (bounds[i], bounds[i-1], TYPE_SIGN (type)))
+	return false;
+      if (wi::gt_p (bounds[i], bounds[i+1], TYPE_SIGN (type)))
+	return false;
+    }
+  return true;
+}
+
+/* Convert the current range in place into a range of type NEW_TYPE.
+   The type of the original range is changed to the new type.  */
+
+void
+irange::cast (const_tree new_type)
+{
+  if (!nitems)
+    {
+      type = new_type;
+      return;
+    }
+  bool sign_change = TYPE_SIGN (new_type) != TYPE_SIGN (type);
+  unsigned new_precision = TYPE_PRECISION (new_type);
+
+  /* If nothing changed, this may be a useless type conversion between
+     two variants of the same type.  */
+  if (!sign_change && TYPE_PRECISION (type) == new_precision)
+    {
+      type = new_type;
+      return;
+    }
+
+  /* If any of the old bounds are outside of the representable range
+     for the new type, conservatively default to the entire range of
+     the new type.  */
+  if (new_precision < TYPE_PRECISION (type))
+    {
+      /* NOTE: There are some const_cast<> sprinkled throughout
+	 because the fold_convert machinery is not properly
+	 constified.  */
+      /* Get the extreme bounds for the new type, but within the old type,
+	 so we can properly compare them.  */
+      wide_int lbound = fold_convert (const_cast<tree> (type),
+				      TYPE_MIN_VALUE (new_type));
+      wide_int ubound
+	= fold_convert (const_cast <tree> (type),
+			TYPE_MAX_VALUE (new_type));
+
+      if (wi::lt_p (bounds[0], lbound, TYPE_SIGN (type))
+	  || wi::gt_p (bounds[nitems - 1], ubound, TYPE_SIGN (type)))
+	{
+	  bounds[0] = wide_int::from (lbound, new_precision,
+				      TYPE_SIGN (new_type));
+	  bounds[1] = wide_int::from (ubound, new_precision,
+				      TYPE_SIGN (new_type));
+	  type = new_type;
+	  nitems = 2;
+	  return;
+	}
+    }
+
+  wide_int orig_low = lower_bound ();
+  wide_int orig_high = upper_bound ();
+  wide_int min = wi::min_value (new_precision, TYPE_SIGN (new_type));
+  wide_int max = wi::max_value (new_precision, TYPE_SIGN (new_type));
+  for (unsigned i = 0; i < nitems; i += 2)
+    {
+      tree b0
+	= fold_convert (const_cast<tree> (new_type),
+			wide_int_to_tree (const_cast<tree> (type),
+					  bounds[i]));
+      tree b1
+	= fold_convert (const_cast<tree> (new_type),
+			wide_int_to_tree (const_cast<tree> (type),
+					  bounds[i+1]));
+      bool sbit0 = bounds[i].sign_mask () < 0;
+      bool sbit1 = bounds[i + 1].sign_mask () < 0;
+
+      /* If there is no sign change, if we are widening from unsigned
+	 to a larger signed type, or if both bounds have the same sign
+	 bit, the bounds convert directly.  */
+      if (!sign_change
+	  || (TYPE_UNSIGNED (type)
+	      && !TYPE_UNSIGNED (new_type)
+	      && new_precision > TYPE_PRECISION (type))
+	  || sbit0 == sbit1)
+	{
+	  bounds[i] = b0;
+	  bounds[i + 1] = b1;
+	}
+      else
+	{
+	  /* If we're about to go over the maximum number of ranges
+	     allowed, convert to something conservative and cast
+	     again.  */
+	  if (nitems >= max_pairs * 2)
+	    {
+	      bounds[0] = orig_low;
+	      bounds[1] = orig_high;
+	      nitems = 2;
+	      cast (new_type);
+	      return;
+	    }
+	  /* If we're about to construct [MIN, b1] where b1 == MAX,
+	     that's just the entire range.  */
+	  if ((wide_int) b1 == max)
+	    {
+	      bounds[0] = min;
+	      bounds[1] = max;
+	      nitems = 2;
+	      type = new_type;
+	      return;
+	    }
+	  /* The sign bit flips somewhere within the sub-range, so
+	     split it in two.
+	     From no sign bit to sign bit: [15, 150]
+	     => [15,127][-128,-106].
+	     From sign bit to no sign bit: [-5, 5]
+	     => [251,255][0,5].
+	     Both cases append the same bounds.  */
+	  bounds[i] = min;
+	  bounds[i + 1] = b1;
+	  bounds[nitems++] = b0;
+	  bounds[nitems++] = max;
+	}
+    }
+  type = new_type;
+  if (sign_change)
+    canonicalize ();
+  gcc_assert (valid_p ());
+}
+
+// Return TRUE if the current range contains ELEMENT.
+
+bool
+irange::contains_p (const wide_int &element) const
+{
+  for (unsigned i = 0; i < nitems; i += 2)
+    if (wi::ge_p (element, bounds[i], TYPE_SIGN (type))
+	&& wi::le_p (element, bounds[i + 1], TYPE_SIGN (type)))
+      return true;
+  return false;
+}
+
+// Like above, but ELEMENT can be an INTEGER_CST of any type.
+
+bool
+irange::contains_p (const_tree element) const
+{
+  gcc_assert (INTEGRAL_TYPE_P (TREE_TYPE (element)));
+  tree t = fold_convert (const_cast <tree> (type),
+			 const_cast <tree> (element));
+  if (TREE_OVERFLOW (t))
+    return false;
+  wide_int wi = t;
+  return contains_p (wi);
+}
+
+// Canonicalize the current range.
+
+void
+irange::canonicalize ()
+{
+  if (nitems < 2)
+    return;
+
+  /* Fix any out of order ranges: [10,20][-5,5] into [-5,5][10,20].
+
+     This is not a true sort by design because I *think* we won't
+     create any truly wacky ranges during casting.  As a temporary
+     measure, check assert(valid_p()) afterwards and if we catch
+     anything, rewrite this into a bubble sort.  */
+  for (unsigned i = 0; i < (unsigned) (nitems - 2); i += 2)
+    if (wi::gt_p (bounds[i], bounds[i + 2], TYPE_SIGN (type)))
+      {
+	wide_int x = bounds[i], y = bounds[i + 1];
+	bounds[i] = bounds[i + 2];
+	bounds[i + 1] = bounds[i + 3];
+	bounds[i + 2] = x;
+	bounds[i + 3] = y;
+      }
+  gcc_assert (valid_p ()); // See note before for(;;).
+
+  /* Merge any edges that touch.
+     [9,10][11,20] => [9,20].  */
+  for (unsigned i = 1; i < (unsigned) (nitems - 2); i += 2)
+    {
+      bool ovf;
+      wide_int x = wi::add (bounds[i], 1, TYPE_SIGN (type), &ovf);
+      /* No need to check for overflow here for the +1, since the
+	 middle ranges cannot have MAXINT.  */
+      if (x == bounds[i + 1])
+	{
+	  bounds[i] = bounds[i + 2];
+	  remove (i + 1, i + 2);
+	}
+    }
+}
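[Not part of the patch: the two canonicalization steps above (reorder out-of-order pairs, then merge sub-ranges whose edges touch) can be sketched standalone.  `canonicalize_ranges` is a hypothetical name; it uses a full sort where the patch deliberately uses a single swap pass, and plain ints in place of wide_int.]

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Standalone model of canonicalize (): order sub-ranges by lower bound,
// then merge any pair whose edges touch, e.g. [9,10][11,20] => [9,20].
static std::vector<std::pair<int, int> >
canonicalize_ranges (std::vector<std::pair<int, int> > v)
{
  std::sort (v.begin (), v.end ());
  std::vector<std::pair<int, int> > out;
  for (size_t i = 0; i < v.size (); ++i)
    {
      // Previous upper bound + 1 == next lower bound: edges touch.
      if (!out.empty () && out.back ().second + 1 == v[i].first)
	out.back ().second = v[i].second;
      else
	out.push_back (v[i]);
    }
  return out;
}
```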
+
+// Prepend [X,Y] into THIS.
+
+void
+irange::prepend (const wide_int &x, const wide_int &y)
+{
+  /* If we have enough space, shift everything to the right and
+     prepend.  */
+  if (nitems < max_pairs * 2)
+    {
+      for (unsigned i = nitems; i; i -= 2)
+	{
+	  bounds[i] = bounds[i - 2];
+	  bounds[i + 1] = bounds[i - 1];
+	}
+      bounds[0] = x;
+      bounds[1] = y;
+      nitems += 2;
+    }
+  /* Otherwise, merge it with the first entry.  */
+  else
+    bounds[0] = x;
+  canonicalize ();
+}
+
+// Place [X,Y] at the end of THIS.
+
+void
+irange::append (const wide_int &x, const wide_int &y)
+{
+  /* If we have enough space, make space at the end and append.  */
+  if (nitems < max_pairs * 2)
+    {
+      bounds[nitems++] = x;
+      bounds[nitems++] = y;
+    }
+  /* Otherwise, merge it with the last entry.  */
+  else
+    bounds[nitems - 1] = y;
+  canonicalize ();
+}
+
+// Remove the bound entries from [i..j].
+
+void
+irange::remove (unsigned i, unsigned j)
+{
+  gcc_assert (i < nitems && i < j);
+  unsigned dst = i;
+  unsigned ndeleted = j - i + 1;
+  for (++j; j < nitems; ++j)
+    bounds[dst++] = bounds[j];
+  nitems -= ndeleted;
+}
+
+// THIS = THIS U [X,Y]
+
+irange &
+irange::union_ (const wide_int &x, const wide_int &y)
+{
+  if (empty_p ())
+    {
+      bounds[0] = x;
+      bounds[1] = y;
+      nitems = 2;
+      return *this;
+    }
+
+  /* If [X,Y] comes before, put it at the front.  */
+  if (wi::lt_p (y, bounds[0], TYPE_SIGN (type)))
+    {
+      prepend (x, y);
+      return *this;
+    }
+  /* If [X,Y] comes after, put it at the end.  */
+  if (wi::gt_p (x, bounds[nitems - 1], TYPE_SIGN (type)))
+    {
+      append (x, y);
+      return *this;
+    }
+  /* Handle [X,Y] swallowing up all of THIS.  */
+  if (wi::le_p (x, bounds[0], TYPE_SIGN (type))
+      && wi::ge_p (y, bounds[nitems - 1], TYPE_SIGN (type)))
+    {
+      bounds[0] = x;
+      bounds[1] = y;
+      nitems = 2;
+      return *this;
+    }
+  /* Handle X starting before, while Y is within.
+		  Y
+     X[a,b][c,d][e,f][g,h][i,j]
+     ==> [X,Y][g,h][i,j].  */
+  if (wi::lt_p (x, bounds[0], TYPE_SIGN (type)))
+    {
+      bounds[0] = x;
+
+      /*    Y
+	 X[a,b]   => [X,b].  */
+      if (nitems == 2)
+	return *this;
+
+      for (unsigned i = 1; i < nitems; i += 2)
+	if (wi::le_p (y, bounds[i], TYPE_SIGN (type)))
+	  {
+	    if (y == bounds[i])
+	      bounds[1] = y;
+	    else
+	      bounds[1] = bounds[i];
+	    remove (2, i);
+	    return *this;
+	  }
+      gcc_unreachable ();
+    }
+  /* Handle Y being outside, while X is within.
+		  X           Y
+     [a,b][c,d][e,f][g,h][i,j]
+     ==> [a,b][c,d][e,Y].  */
+  if (wi::gt_p (y, bounds[nitems - 1], TYPE_SIGN (type)))
+    {
+      for (unsigned i = 0; i < nitems; i += 2)
+	if (wi::ge_p (bounds[i + 1], x, TYPE_SIGN (type)))
+	  {
+	    bounds[i + 1] = y;
+	    nitems = i + 2;
+	    return *this;
+	  }
+      gcc_unreachable ();
+    }
+
+  /* At this point, [X,Y] must be completely inside.
+      X           Y
+     [a,b][c,d][e,f][g,h].  */
+  gcc_assert (wi::ge_p (x, bounds[0], TYPE_SIGN (type))
+	      && wi::le_p (y, bounds[nitems - 1], TYPE_SIGN (type)));
+
+  /* Find X.  */
+  gcc_assert (nitems >= 2);
+  unsigned xpos = ~0U;
+  unsigned i = nitems;
+  do
+    {
+      i -= 2;
+      if (wi::ge_p (x, bounds[i], TYPE_SIGN (type)))
+	{
+	  xpos = i;
+	  break;
+	}
+    }
+  while (i);
+  gcc_assert (xpos != ~0U);
+
+  /* Find Y.  */
+  unsigned ypos = ~0U;
+  for (i = 1; i < nitems; i += 2)
+    if (wi::le_p (y, bounds[i], TYPE_SIGN (type)))
+      {
+	ypos = i;
+	break;
+      }
+  gcc_assert (ypos != ~0U);
+
+  /* If [x,y] is inside of subrange [xpos,ypos], there's nothing to do.  */
+  if (xpos + 1 == ypos)
+    return *this;
+
+  /* Merge the sub-ranges in between xpos and ypos.  */
+  wide_int tmp = bounds[ypos];
+  remove (xpos + 2, ypos);
+  bounds[xpos + 1] = tmp;
+
+  return *this;
+}
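[Not part of the patch: the union above has many positional cases because it works in place on a capped array.  Its overall semantics can be sketched standalone with a hypothetical `union_ranges` that concatenates, sorts, and coalesces any sub-ranges that overlap or touch; the real code additionally merges conservatively when it would exceed irange::max_pairs.]

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Standalone model of range union: combine two lists of sorted
// sub-ranges into the smallest list covering both, merging sub-ranges
// that overlap or whose edges touch.
static std::vector<std::pair<int, int> >
union_ranges (std::vector<std::pair<int, int> > a,
	      const std::vector<std::pair<int, int> > &b)
{
  a.insert (a.end (), b.begin (), b.end ());
  std::sort (a.begin (), a.end ());
  std::vector<std::pair<int, int> > out;
  for (size_t i = 0; i < a.size (); ++i)
    {
      // Next lower bound falls within (or just after) the previous
      // sub-range: extend it instead of starting a new one.
      if (!out.empty () && a[i].first <= out.back ().second + 1)
	out.back ().second = std::max (out.back ().second, a[i].second);
      else
	out.push_back (a[i]);
    }
  return out;
}
```

So [10,20] U [30,40] stays as two sub-ranges, while [10,20] U [15,25] collapses to [10,25].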
+
+// THIS = THIS U R
+
+irange &
+irange::union_ (const irange &r)
+{
+  gcc_assert (type == r.type);
+
+  if (empty_p ())
+    {
+      *this = r;
+      return *this;
+    }
+  else if (r.empty_p ())
+    return *this;
+
+  for (unsigned i = 0; i < r.nitems; i += 2)
+    union_ (r.bounds[i], r.bounds[i + 1]);
+
+  overflow |= r.overflow;
+  return *this;
+}
+
+// THIS = THIS ^ [X,Y].
+
+irange &
+irange::intersect (const wide_int &x, const wide_int &y)
+{
+  unsigned pos = 0;
+
+  for (unsigned i = 0; i < nitems; i += 2)
+    {
+      wide_int newlo = wi::max (bounds[i], x, TYPE_SIGN (type));
+      wide_int newhi = wi::min (bounds[i + 1], y, TYPE_SIGN (type));
+      if (wi::gt_p (newlo, newhi, TYPE_SIGN (type)))
+	{
+	  /* If the new sub-range doesn't make sense, it's an
+	     impossible range and must be kept out of the result.  */
+	}
+      else
+	{
+	  bounds[pos++] = newlo;
+	  bounds[pos++] = newhi;
+	}
+    }
+  nitems = pos;
+  return *this;
+}
+
+// THIS = THIS ^ R.
+
+irange &
+irange::intersect (const irange &r)
+{
+  gcc_assert (type == r.type);
+  irange orig_range (*this);
+
+  /* Intersection with an empty range is an empty range.  */
+  clear ();
+  if (orig_range.empty_p () || r.empty_p ())
+    return *this;
+
+  /* The general algorithm is as follows.
+
+     Intersect each sub-range of R with all of ORIG_RANGE one at a time, and
+     join/union the results of these intersections together.  I.e:
+
+     [10,20][30,40][50,60] ^ [15,25][38,51][55,70]
+
+     Step 1: [10,20][30,40][50,60] ^ [15,25] => [15,20]
+     Step 2: [10,20][30,40][50,60] ^ [38,51] => [38,40][50,51]
+     Step 3: [10,20][30,40][50,60] ^ [55,70] => [55,60]
+     Final:  [15,20] U [38,40][50,51] U [55,60]
+	     => [15,20][38,40][50,51][55,60]
+
+     ?? We should probably stop making a copy of ORIG_RANGE at every step.  */
+  for (unsigned i = 0; i < r.nitems; i += 2)
+    union_ (irange (orig_range).intersect (r.bounds[i], r.bounds[i + 1]));
+
+  /* Overflow is sticky only if both ranges overflowed.  */
+  overflow = (orig_range.overflow && r.overflow);
+  return *this;
+}
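[Not part of the patch: the intersection algorithm described in the comment above reduces to clamping every sub-range of one operand against every sub-range of the other and keeping the non-empty results.  A standalone sketch with a hypothetical `intersect_ranges`, using ints in place of wide_int:]

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Standalone model of range intersection: for each pair of sub-ranges,
// take max of the lower bounds and min of the upper bounds; a result
// with lo > hi is an impossible (empty) sub-range and is dropped.
static std::vector<std::pair<int, int> >
intersect_ranges (const std::vector<std::pair<int, int> > &a,
		  const std::vector<std::pair<int, int> > &b)
{
  std::vector<std::pair<int, int> > out;
  for (size_t i = 0; i < a.size (); ++i)
    for (size_t j = 0; j < b.size (); ++j)
      {
	int lo = std::max (a[i].first, b[j].first);
	int hi = std::min (a[i].second, b[j].second);
	if (lo <= hi)
	  out.push_back (std::make_pair (lo, hi));
      }
  return out;
}
```

Running the comment's example, [10,20][30,40][50,60] ^ [15,25][38,51][55,70] produces [15,20][38,40][50,51][55,60].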
+
+// Set THIS to the inverse of its range.
+
+irange &
+irange::invert ()
+{
+  /* We always need one more set of bounds to represent an inverse, so
+     if we're at the limit, we can't properly represent things.
+
+     For instance, to represent the inverse of a 2 sub-range set
+     [5, 10][20, 30], we would need a 3 sub-range set
+     [-MIN, 4][11, 19][31, MAX].
+
+     In this case convert the current range to something more
+     conservative, and return the inverse of that.
+
+     However, if any of the extremes of the range are -MIN/+MAX, we
+     know we will not need an extra bound.  For example:
+
+	INVERT([-MIN,20][30,40]) => [21,29][41,+MAX]
+	INVERT([-MIN,20][30,MAX]) => [21,29]
+  */
+  wide_int min = wi::min_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+  wide_int max = wi::max_value (TYPE_PRECISION (type), TYPE_SIGN (type));
+  if (nitems == max_pairs * 2
+      && bounds[0] != min
+      && bounds[nitems - 1] != max)
+    {
+      /* Keep the overall lower and upper bounds and drop the
+	 sub-ranges in between.  */
+      bounds[1] = bounds[nitems - 1];
+      nitems = 2;
+      return invert ();
+    }
+
+  /* The inverse of the empty set is the entire domain.  */
+  if (empty_p ())
+    {
+      set_range_for_type (type);
+      return *this;
+    }
+
+  /* The algorithm is as follows.  To calculate INVERT ([a,b][c,d]), we
+     generate [-MIN, a-1][b+1, c-1][d+1, MAX].
+
+     If there is an over/underflow in the calculation for any
+     sub-range, we eliminate that subrange.  This allows us to easily
+     calculate INVERT([-MIN, 5]) with: [-MIN, -MIN-1][6, MAX].  And since
+     we eliminate the underflow, only [6, MAX] remains.  */
+
+  unsigned i = 0;
+  bool ovf;
+
+  /* Construct leftmost range.  */
+  irange orig_range (*this);
+  nitems = 0;
+  /* If this is going to underflow on the MINUS 1, don't even bother
+     checking.  This also handles subtracting one from an unsigned 0,
+     which doesn't set the underflow bit.  */
+  if (min != orig_range.bounds[i])
+    {
+      bounds[nitems++] = min;
+      bounds[nitems++]
+	= wi::sub (orig_range.bounds[i], 1, TYPE_SIGN (type), &ovf);
+      if (ovf)
+	nitems = 0;
+    }
+  i++;
+  /* Construct middle ranges if applicable.  */
+  if (orig_range.nitems > 2)
+    {
+      unsigned j = i;
+      for (; j < (unsigned) (orig_range.nitems - 2); j += 2)
+	{
+	  /* The middle ranges cannot have MAX/MIN, so there's no need
+	     to check for unsigned overflow on the +1 and -1 here.  */
+	  bounds[nitems++]
+	    = wi::add (orig_range.bounds[j], 1, TYPE_SIGN (type), &ovf);
+	  bounds[nitems++]
+	    = wi::sub (orig_range.bounds[j + 1], 1, TYPE_SIGN (type), &ovf);
+	  if (ovf)
+	    nitems -= 2;
+	}
+      i = j;
+    }
+  /* Construct rightmost range.
+
+     However, if this will overflow on the PLUS 1, don't even bother.
+     This also handles adding one to an unsigned MAX, which doesn't
+     set the overflow bit.  */
+  if (max != orig_range.bounds[i])
+    {
+      bounds[nitems++]
+	= wi::add (orig_range.bounds[i], 1, TYPE_SIGN (type), &ovf);
+      bounds[nitems++] = max;
+      if (ovf)
+	nitems -= 2;
+    }
+
+  return *this;
+}
+
+/* Returns the upper bound of PAIR.  */
+
+wide_int
+irange::upper_bound (unsigned pair) const
+{
+  gcc_assert (nitems != 0 && pair < num_pairs ());
+  return bounds[pair * 2 + 1];
+}
+
+/* Dump the current range onto BUFFER.  */
+
+void
+irange::dump (pretty_printer *buffer)
+{
+  if (!nitems)
+    {
+      pp_string (buffer, "[]");
+      return;
+    }
+  for (unsigned i = 0; i < nitems; ++i)
+    {
+      if (i % 2 == 0)
+	pp_character (buffer, '[');
+      wide_int val = bounds[i];
+      print_hex (val, pp_buffer (buffer)->digit_buffer);
+      pp_string (buffer, pp_buffer (buffer)->digit_buffer);
+      if (i % 2 == 0)
+	pp_string (buffer, ", ");
+      else
+	pp_character (buffer, ']');
+    }
+  if (overflow)
+    pp_string (buffer, "(overflow)");
+
+  pp_string (buffer, " precision = ");
+  pp_decimal_int (buffer, TYPE_PRECISION (type));
+}
+
+/* Dump the current range onto FILE F.  */
+
+void
+irange::dump (FILE *f)
+{
+  pretty_printer buffer;
+  buffer.buffer->stream = f;
+  dump (&buffer);
+  pp_newline_and_flush (&buffer);
+}
+
+/* Like above, but dump to STDERR.
+
+   ?? You'd think we could have a default parameter for dump(FILE),
+   but gdb currently doesn't handle default parameters gracefully, if
+   at all, and this is a function we need to be able to call from the
+   debugger.  */
+
+void
+irange::dump ()
+{
+  dump (stderr);
+}
+
+/* Initialize the current irange_storage to the irange in IR.  */
+
+void
+irange_storage::set_irange (const irange &ir)
+{
+  unsigned precision = TYPE_PRECISION (ir.get_type ());
+  trailing_bounds.set_precision (precision);
+  unsigned i;
+  for (i = 0; i < ir.num_pairs () * 2; ++i)
+    trailing_bounds[i] = ir.bounds[i];
+
+  /* Build nonsensical [1,0] pairs for the remaining empty ranges.
+     These will be recognized as empty when we read the structure
+     back.  */
+  for (; i < irange::max_pairs * 2; i += 2)
+    {
+      trailing_bounds[i] = wi::one (precision);
+      trailing_bounds[i + 1] = wi::zero (precision);
+    }
+}
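[Not part of the patch: the [1,0] sentinel convention used by the storage class pairs up with irange::set_range (const irange_storage *, const_tree) above.  A standalone sketch with hypothetical `encode_ranges`/`decode_ranges` names and a plain int array in place of trailing_wide_ints:]

```cpp
#include <cassert>
#include <utility>
#include <vector>

static const unsigned k_max_pairs = 2;  // Mirrors irange::max_pairs.

// Encode up to k_max_pairs sub-ranges into a flat array, padding unused
// slots with the nonsensical pair [1,0].
static void
encode_ranges (const std::vector<std::pair<int, int> > &v,
	       int out[k_max_pairs * 2])
{
  unsigned i = 0;
  for (; i < v.size () && i < k_max_pairs; ++i)
    {
      out[i * 2] = v[i].first;
      out[i * 2 + 1] = v[i].second;
    }
  for (; i < k_max_pairs; ++i)
    {
      out[i * 2] = 1;
      out[i * 2 + 1] = 0;
    }
}

// Decode, stopping at the first [1,0] sentinel.
static std::vector<std::pair<int, int> >
decode_ranges (const int in[k_max_pairs * 2])
{
  std::vector<std::pair<int, int> > v;
  for (unsigned i = 0; i < k_max_pairs; ++i)
    {
      if (in[i * 2] == 1 && in[i * 2 + 1] == 0)
	break;
      v.push_back (std::make_pair (in[i * 2], in[i * 2 + 1]));
    }
  return v;
}
```

The sentinel works because [1,0] violates the lower <= upper invariant that valid_p () enforces, so it can never be a real sub-range.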
+
+bool
+make_irange (irange *result, const_tree lb, const_tree ub, const_tree type)
+{
+  irange r (TREE_TYPE (lb), lb, ub);
+  *result = r;
+  if (result->valid_p ())
+    {
+      if (type)
+        result->cast (type);
+      return true;
+    }
+  return false;
+}
+
+bool
+make_irange_not (irange *result, const_tree not_exp, const_tree type)
+{
+  irange r (TREE_TYPE (not_exp), not_exp, not_exp, irange::INVERSE);
+  *result = r;
+  if (result->valid_p ())
+    {
+      if (type)
+        result->cast (type);
+      return true;
+    }
+  return false;
+}
+
+void
+range_one (irange *r, tree type)
+{
+  tree one = build_int_cst (type, 1);
+  r->set_range (type, one, one);
+}
+
+void
+range_zero (irange *r, tree type)
+{
+  tree zero = build_int_cst (type, 0);
+  r->set_range (type, zero, zero);
+}
+
+bool
+range_non_zero (irange *r, tree type)
+{
+  tree zero = build_int_cst (type, 0);
+  return make_irange_not (r, zero, type);
+}
+
+/* Set the range of R to the set of positive numbers starting at START.  */
+
+void
+range_positives (irange *r, tree type, unsigned int start)
+{
+  r->set_range (type, build_int_cst (type, start), TYPE_MAX_VALUE (type));
+}
+
+#ifdef CHECKING_P
+namespace selftest {
+
+
+#define INT(N) build_int_cst (integer_type_node, (N))
+#define UINT(N) build_int_cstu (unsigned_type_node, (N))
+#define INT16(N) build_int_cst (short_integer_type_node, (N))
+#define UINT16(N) build_int_cstu (short_unsigned_type_node, (N))
+#define INT64(N) build_int_cstu (long_long_integer_type_node, (N))
+#define UINT64(N) build_int_cstu (long_long_unsigned_type_node, (N))
+#define UINT128(N) build_int_cstu (u128_type, (N))
+#define UCHAR(N) build_int_cst (unsigned_char_type_node, (N))
+#define SCHAR(N) build_int_cst (signed_char_type_node, (N))
+
+#define RANGE1(A,B) irange (integer_type_node, INT(A), INT(B))
+
+#define RANGE2(A,B,C,D)					\
+( i1 = irange (integer_type_node, INT (A), INT (B)),	\
+  i2 = irange (integer_type_node, INT (C), INT (D)),	\
+  i1.union_ (i2),					\
+  i1 )
+
+#define RANGE3(A,B,C,D,E,F) 				\
+( i1 = irange (integer_type_node, INT (A), INT (B)),	\
+  i2 = irange (integer_type_node, INT (C), INT (D)),	\
+  i3 = irange (integer_type_node, INT (E), INT (F)),	\
+  i1.union_ (i2),					\
+  i1.union_ (i3),					\
+  i1 )
+
+// Run all of the selftests within this file.
+
+void
+irange_tests ()
+{
+  tree u128_type = build_nonstandard_integer_type (128, /*unsigned=*/1);
+  irange i1, i2, i3;
+  irange r0, r1, rold;
+  ASSERT_FALSE (r0.valid_p ());
+
+  /* Test that NOT(255) is [0..254] in 8-bit land.  */
+  irange not_255;
+  make_irange_not (&not_255, UCHAR(255), unsigned_char_type_node);
+  ASSERT_TRUE (not_255 == irange (unsigned_char_type_node,
+				  UCHAR(0), UCHAR(254)));
+
+  /* Test that NOT(0) is [1..255] in 8-bit land.  */
+  irange not_zero;
+  range_non_zero (&not_zero, unsigned_char_type_node);
+  ASSERT_TRUE (not_zero == irange (unsigned_char_type_node,
+				   UCHAR(1), UCHAR(255)));
+
+  /* Check that [0,127][0x..ffffff80,0x..ffffffff]
+     => ~[128, 0x..ffffff7f].  */
+  r0 = irange (u128_type, UINT128(0), UINT128(127), irange::PLAIN);
+  tree high = build_minus_one_cst (u128_type);
+  /* low = -1 - 127 => 0x..ffffff80.  */
+  tree low = fold_build2 (MINUS_EXPR, u128_type, high, UINT128(127));
+  r1 = irange (u128_type, low, high); // [0x..ffffff80, 0x..ffffffff]
+  /* r0 = [0,127][0x..ffffff80,0x..ffffffff].  */
+  r0.union_ (r1);
+  /* r1 = [128, 0x..ffffff7f].  */
+  r1 = irange (u128_type,
+	       UINT128(128),
+	       fold_build2 (MINUS_EXPR, u128_type,
+			    build_minus_one_cst (u128_type),
+			    UINT128(128)));
+  r0.invert ();
+  ASSERT_TRUE (r0 == r1);
+
+  r0.set_range_for_type (integer_type_node);
+  tree minint = wide_int_to_tree (integer_type_node, r0.lower_bound ());
+  tree maxint = wide_int_to_tree (integer_type_node, r0.upper_bound ());
+
+  r0.set_range_for_type (short_integer_type_node);
+  tree minshort = wide_int_to_tree (short_integer_type_node, r0.lower_bound ());
+  tree maxshort = wide_int_to_tree (short_integer_type_node, r0.upper_bound ());
+
+  r0.set_range_for_type (unsigned_type_node);
+  tree maxuint = wide_int_to_tree (unsigned_type_node, r0.upper_bound ());
+
+  /* Check that ~[0,5] => [6,MAX] for unsigned int.  */
+  r0 = irange (unsigned_type_node, UINT(0), UINT(5), irange::PLAIN);
+  r0.invert ();
+  ASSERT_TRUE (r0 == irange (unsigned_type_node, UINT(6), maxuint));
+
+  /* Check that ~[10,MAX] => [0,9] for unsigned int.  */
+  r0 = irange (unsigned_type_node, UINT(10), maxuint, irange::PLAIN);
+  r0.invert ();
+  ASSERT_TRUE (r0 == irange (unsigned_type_node, UINT(0), UINT(9)));
+
+  /* Check that ~[0,5] => [6,MAX] for unsigned 128-bit numbers.  */
+  r0.set_range (u128_type, UINT128(0), UINT128(5), irange::INVERSE);
+  r1 = irange (u128_type, UINT128(6), build_minus_one_cst (u128_type));
+  ASSERT_TRUE (r0 == r1);
+
+  /* Check that [~5] is really [-MIN,4][6,MAX].  */
+  r0 = irange (integer_type_node, INT(5), INT(5), irange::INVERSE);
+  r1 = irange (integer_type_node, minint, INT(4));
+  ASSERT_TRUE (r1.union_ (irange (integer_type_node,
+				  INT(6), maxint)));
+
+  ASSERT_TRUE (r0 == r1);
+
+  r1.set_range (integer_type_node, INT(5), INT(5));
+  ASSERT_TRUE (r1.valid_p ());
+  irange r2 (r1);
+  ASSERT_TRUE (r1 == r2);
+
+  r1 = RANGE1 (5, 10);
+  ASSERT_TRUE (r1.valid_p ());
+
+  r1 = irange (integer_type_node, (wide_int) INT(5), (wide_int) INT(10));
+  ASSERT_TRUE (r1.valid_p ());
+  ASSERT_TRUE (r1.contains_p (INT (7)));
+
+  r1 = irange (signed_char_type_node, SCHAR(0), SCHAR(20));
+  ASSERT_TRUE (r1.contains_p (INT(15)));
+  ASSERT_FALSE (r1.contains_p (INT(300)));
+
+  /* If a range is in any way outside of the range for the converted
+     to range, default to the range for the new type.  */
+  r1 = irange (integer_type_node, integer_zero_node, maxint);
+  r1.cast (short_integer_type_node);
+  ASSERT_TRUE (r1.lower_bound () == minshort && r1.upper_bound() == maxshort);
+
+  /* (unsigned char)[-5,-1] => [251,255].  */
+  r0 = rold = irange (signed_char_type_node, SCHAR (-5), SCHAR(-1));
+  r0.cast (unsigned_char_type_node);
+  ASSERT_TRUE (r0 == irange (unsigned_char_type_node,
+			     UCHAR(251), UCHAR(255)));
+  r0.cast (signed_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  /* (signed char)[15, 150] => [-128,-106][15,127].  */
+  r0 = rold = irange (unsigned_char_type_node, UCHAR(15), UCHAR(150));
+  r0.cast (signed_char_type_node);
+  r1 = irange (signed_char_type_node, SCHAR(15), SCHAR(127));
+  r2 = irange (signed_char_type_node, SCHAR(-128), SCHAR(-106));
+  r1.union_ (r2);
+  ASSERT_TRUE (r1 == r0);
+  r0.cast (unsigned_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  /* (unsigned char)[-5, 5] => [0,5][251,255].  */
+  r0 = rold = irange (signed_char_type_node, SCHAR(-5), SCHAR(5));
+  r0.cast (unsigned_char_type_node);
+  r1 = irange (unsigned_char_type_node, UCHAR(251), UCHAR(255));
+  r2 = irange (unsigned_char_type_node, UCHAR(0), UCHAR(5));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+  r0.cast (signed_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  /* (unsigned char)[-5,5] => [0,255].  */
+  r0 = irange (integer_type_node, INT(-5), INT(5));
+  r0.cast (unsigned_char_type_node);
+  r1 = irange (unsigned_char_type_node,
+	       TYPE_MIN_VALUE (unsigned_char_type_node),
+	       TYPE_MAX_VALUE (unsigned_char_type_node));
+  ASSERT_TRUE (r0 == r1);
+
+  /* (unsigned char)[5U,1974U] => [0,255].  */
+  r0 = irange (unsigned_type_node, UINT(5), UINT(1974));
+  r0.cast (unsigned_char_type_node);
+  ASSERT_TRUE (r0 == irange (unsigned_char_type_node, UCHAR(0), UCHAR(255)));
+  r0.cast (integer_type_node);
+  /* Going to a wider range should not sign extend.  */
+  ASSERT_TRUE (r0 == RANGE1 (0, 255));
+
+  /* (unsigned char)[-350,15] => [0,255].  */
+  r0 = irange (integer_type_node, INT(-350), INT(15));
+  r0.cast (unsigned_char_type_node);
+  ASSERT_TRUE (r0 == irange (unsigned_char_type_node,
+			     TYPE_MIN_VALUE (unsigned_char_type_node),
+			     TYPE_MAX_VALUE (unsigned_char_type_node)));
+
+  /* Casting [-120,20] from signed char to unsigned short.
+	(unsigned)[(signed char)-120, (signed char)20]
+     => (unsigned)[0, 0x14][0x88, 0xff]
+     => [0,0x14][0xff88,0xffff].  */
+  r0 = irange (signed_char_type_node, SCHAR(-120), SCHAR(20));
+  r0.cast (short_unsigned_type_node);
+  r1 = irange (short_unsigned_type_node, UINT16(0), UINT16(0x14));
+  r2 = irange (short_unsigned_type_node, UINT16(0xff88), UINT16(0xffff));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+  /* Casting back to signed char (a smaller type) puts us outside the
+     range of the new type, so the result is the entire range of
+     signed char.  */
+  r0.cast (signed_char_type_node);
+  ASSERT_TRUE (r0 == irange (signed_char_type_node,
+			     TYPE_MIN_VALUE (signed_char_type_node),
+			     TYPE_MAX_VALUE (signed_char_type_node)));
+
+  /* unsigned char -> signed short
+	(signed short)[(unsigned char)25, (unsigned char)250]
+     => [(signed short)25, (signed short)250].  */
+  r0 = rold = irange (unsigned_char_type_node, UCHAR(25), UCHAR(250));
+  r0.cast (short_integer_type_node);
+  r1 = irange (short_integer_type_node, INT16(25), INT16(250));
+  ASSERT_TRUE (r0 == r1);
+  r0.cast (unsigned_char_type_node);
+  ASSERT_TRUE (r0 == rold);
+
+  /* Test casting a wider signed [-MIN,MAX] to a narrower unsigned.  */
+  r0 = irange (long_long_integer_type_node,
+	       TYPE_MIN_VALUE (long_long_integer_type_node),
+	       TYPE_MAX_VALUE (long_long_integer_type_node));
+  r0.cast (short_unsigned_type_node);
+  r1 = irange (short_unsigned_type_node,
+	       TYPE_MIN_VALUE (short_unsigned_type_node),
+	       TYPE_MAX_VALUE (short_unsigned_type_node));
+  ASSERT_TRUE (r0 == r1);
+
+  /* Test that casting a range with MAX_PAIRS that changes sign is
+     done conservatively.
+
+        (unsigned short)[-5,5][20,30][40,50]...
+     => (unsigned short)[-5,50]
+     => [0,50][65531,65535].  */
+  r0 = irange (short_integer_type_node, INT16(-5), INT16(5));
+  gcc_assert (r0.max_pairs * 2 * 10 + 10 < 32767);
+  unsigned i;
+  for (i = 2; i < r0.max_pairs * 2; i += 2)
+    {
+      r1 = irange (short_integer_type_node, INT16(i * 10), INT16(i * 10 + 10));
+      r0.union_ (r1);
+    }
+  r0.cast(short_unsigned_type_node);
+  r1 = irange (short_unsigned_type_node, INT16(0), INT16((i - 2) * 10 + 10));
+  r2 = irange (short_unsigned_type_node, INT16(65531), INT16(65535));
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+
+  /* NOT([10,20]) ==> [-MIN,9][21,MAX].  */
+  r0 = r1 = irange (integer_type_node, INT(10), INT(20));
+  r2 = irange (integer_type_node, minint, INT(9));
+  ASSERT_TRUE (r2.union_ (irange (integer_type_node,
+				  INT(21), maxint)));
+  r1.invert ();
+  ASSERT_TRUE (r1 == r2);
+  /* Test that NOT(NOT(x)) == x.  */
+  r2.invert ();
+  ASSERT_TRUE (r0 == r2);
+
+  /* NOT(-MIN,+MAX) is the empty set and should return false.  */
+  r0 = irange (integer_type_node, wide_int_to_tree (integer_type_node, minint),
+		  wide_int_to_tree (integer_type_node, maxint));
+  ASSERT_TRUE (!r0.invert ());
+  r1.clear ();
+  ASSERT_TRUE (r0 == r1);
+
+  /* Test that booleans and their inverse work as expected.  */
+  range_zero (&r0, boolean_type_node);
+  ASSERT_TRUE (r0 == irange (boolean_type_node,
+			     build_int_cst (boolean_type_node, 0),
+			     build_int_cst (boolean_type_node, 0)));
+  r0.invert();
+  ASSERT_TRUE (r0 == irange (boolean_type_node,
+			     build_int_cst (boolean_type_node, 1),
+			     build_int_cst (boolean_type_node, 1)));
+
+  /* Casting NONZERO to a narrower type will wrap/overflow so
+     it's just the entire range for the narrower type.
+
+     "NOT 0 at signed 32-bits" ==> [-MIN_32,-1][1, +MAX_32].  This is
+     outside of the range of the smaller type, so the result is the
+     full range of the smaller type.  */
+  range_non_zero (&r0, integer_type_node);
+  r0.cast (short_integer_type_node);
+  r1 = irange (short_integer_type_node,
+	       TYPE_MIN_VALUE (short_integer_type_node),
+	       TYPE_MAX_VALUE (short_integer_type_node));
+  ASSERT_TRUE (r0 == r1);
+
+  /* Casting NONZERO from a narrower signed to a wider signed.
+
+     NONZERO signed 16-bits is [-MIN_16,-1][1, +MAX_16].
+     Converting this to 32-bits signed is [-MIN_16,-1][1, +MAX_16].  */
+  range_non_zero (&r0, short_integer_type_node);
+  r0.cast (integer_type_node);
+  r1 = RANGE1 (-32768, -1);
+  r2 = RANGE1 (1, 32767);
+  r1.union_ (r2);
+  ASSERT_TRUE (r0 == r1);
+
+  if (irange::max_pairs > 2)
+    {
+      /* ([10,20] U [5,8]) U [1,3] ==> [1,3][5,8][10,20].  */
+      r0 = RANGE1 (10, 20);
+      r1 = RANGE1 (5, 8);
+      r0.union_ (r1);
+      r1 = RANGE1 (1, 3);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (1, 3, 5, 8, 10, 20));
+
+      /* [1,3][5,8][10,20] U [-5,0] => [-5,3][5,8][10,20].  */
+      r1 = irange (integer_type_node, INT(-5), INT(0));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (-5, 3, 5, 8, 10, 20));
+    }
+
+  /* [10,20] U [30,40] ==> [10,20][30,40].  */
+  r0 = irange (integer_type_node, INT(10), INT(20));
+  r1 = irange (integer_type_node, INT(30), INT(40));
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE2 (10, 20, 30, 40));
+  if (irange::max_pairs > 2)
+    {
+      /* [10,20][30,40] U [50,60] ==> [10,20][30,40][50,60].  */
+      r1 = irange (integer_type_node, INT(50), INT(60));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 60));
+      /* [10,20][30,40][50,60] U [70, 80] ==> [10,20][30,40][50,80].  */
+      r1 = irange (integer_type_node, INT(70), INT(80));
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (10, 20, 30, 40, 50, 80));
+    }
+
+  /* Make sure NULL and non-NULL of pointer types work, and that
+     inverses of them are consistent.  */
+  tree voidp = build_pointer_type (void_type_node);
+  range_zero (&r0, voidp);
+  r1 = r0;
+  r0.invert ();
+  r0.invert ();
+  ASSERT_TRUE (r0 == r1);
+
+  if (irange::max_pairs > 2)
+    {
+      /* [10,20][30,40][50,60] U [6,35] => [6,40][50,60].  */
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = RANGE1 (6, 35);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE2 (6,40,50,60));
+
+      /* [10,20][30,40][50,60] U [6,60] => [6,60] */
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = RANGE1 (6, 60);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE1 (6,60));
+
+      /* [10,20][30,40][50,60] U [6,70] => [6,70].  */
+      r0 = RANGE3 (10, 20, 30, 40, 50, 60);
+      r1 = RANGE1 (6, 70);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE1 (6,70));
+
+      /* [10,20][30,40][50,60] U [35,70] => [10,20][30,70].  */
+      r0 = RANGE3 (10,20,30,40,50,60);
+      r1 = RANGE1 (35,70);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE2 (10,20,30,70));
+    }
+
+  /* [10,20][30,40] U [25,70] => [10,20][25,70].  */
+  r0 = RANGE2 (10,20,30,40);
+  r1 = RANGE1 (25,70);
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE2 (10,20,25,70));
+
+  if (irange::max_pairs > 2)
+    {
+      /* [10,20][30,40][50,60] U [15,35] => [10,40][50,60].  */
+      r0 = RANGE3 (10,20,30,40,50,60);
+      r1 = RANGE1 (15,35);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE2 (10,40,50,60));
+    }
+
+  /* [10,20] U [15, 30] => [10, 30].  */
+  r0 = RANGE1 (10,20);
+  r1 = RANGE1 (15,30);
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE1 (10,30));
+
+  /* [10,20] U [25,25] => [10,20][25,25].  */
+  r0 = RANGE1 (10,20);
+  r1 = RANGE1 (25,25);
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE2 (10,20,25,25));
+
+  if (irange::max_pairs > 2)
+    {
+      /* [10,20][30,40][50,60] U [35,35] => [10,20][30,40][50,60].  */
+      r0 = RANGE3 (10,20,30,40,50,60);
+      r1 = RANGE1 (35,35);
+      r0.union_ (r1);
+      ASSERT_TRUE (r0 == RANGE3 (10,20,30,40,50,60));
+    }
+
+  /* [15,40] U [] => [15,40].  */
+  r0 = RANGE1 (15,40);
+  r1.clear ();
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE1 (15,40));
+
+  /* [10,20] U [10,10] => [10,20].  */
+  r0 = RANGE1 (10,20);
+  r1 = RANGE1 (10,10);
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE1 (10,20));
+
+  /* [10,20] U [9,9] => [9,20].  */
+  r0 = RANGE1 (10,20);
+  r1 = RANGE1 (9,9);
+  r0.union_ (r1);
+  ASSERT_TRUE (r0 == RANGE1 (9,20));
+
+  if (irange::max_pairs > 2)
+    {
+      /* [10,10][12,12][20,100] ^ [15,200].  */
+      r0 = RANGE3 (10,10,12,12,20,100);
+      r1 = RANGE1 (15,200);
+      r0.intersect (r1);
+      ASSERT_TRUE (r0 == RANGE1 (20,100));
+
+      /* [10,20][30,40][50,60] ^ [15,25][38,51][55,70]
+	 => [15,20][38,40][50,60].  */
+      r0 = RANGE3 (10,20,30,40,50,60);
+      r1 = RANGE3 (15,25,38,51,55,70);
+      r0.intersect (r1);
+      ASSERT_TRUE (r0 == RANGE3 (15,20,38,40,50,60));
+
+      /* [15,20][30,40][50,60] ^ [15,35][40,90][100,200]
+	 => [15,20][30,35][40,60].  */
+      r0 = RANGE3 (15,20,30,40,50,60);
+      r1 = RANGE3 (15,35,40,90,100,200);
+      r0.intersect (r1);
+      ASSERT_TRUE (r0 == RANGE3 (15,20,30,35,40,60));
+    }
+
+  /* [10,20] ^ [15,30] => [15,20].  */
+  r0 = RANGE1 (10, 20);
+  r1 = RANGE1 (15, 30);
+  r0.intersect (r1);
+  ASSERT_TRUE (r0 == RANGE1 (15, 20));
+
+  /* [10,20][30,40] ^ [40,50] => [40,40].  */
+  r0 = RANGE2 (10,20,30,40);
+  r1 = RANGE1 (40,50);
+  r0.intersect (r1);
+  ASSERT_TRUE (r0 == RANGE1 (40,40));
+
+  /* Test non-destructive intersection.  */
+  r0 = rold = RANGE1 (10, 20);
+  ASSERT_TRUE (irange_intersect (r0, RANGE1 (15, 30)));
+  ASSERT_TRUE (r0 == rold);
+}
+
+} // namespace selftest
+#endif // CHECKING_P
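As a side note for readers following the selftests above, the multi-range union behavior they exercise can be sketched outside of GCC with plain integer pairs.  This is a standalone simplification: `range_set` and `union_into` are illustrative names, not part of this patch, and it uses `int64_t` with no max_pairs limit where the real `irange::union_` works on wide_int bounds and must fall back to a conservative result when pairs run out.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

/* Inclusive [lo, hi] sub-ranges, kept sorted and non-overlapping,
   analogous to irange's bounds[] array (but with no pair limit).  */
typedef std::vector<std::pair<int64_t, int64_t> > range_set;

/* Union the sub-range [lo, hi] into R, coalescing any sub-ranges
   that overlap or are adjacent to it.  */
static void
union_into (range_set &r, int64_t lo, int64_t hi)
{
  range_set out;
  bool placed = false;
  for (size_t i = 0; i < r.size (); ++i)
    {
      if (r[i].second + 1 < lo)
	out.push_back (r[i]);		/* Entirely below: keep.  */
      else if (hi + 1 < r[i].first)
	{				/* Entirely above: emit new pair first.  */
	  if (!placed)
	    {
	      out.push_back (std::make_pair (lo, hi));
	      placed = true;
	    }
	  out.push_back (r[i]);
	}
      else
	{				/* Overlapping or adjacent: absorb.  */
	  lo = std::min (lo, r[i].first);
	  hi = std::max (hi, r[i].second);
	}
    }
  if (!placed)
    out.push_back (std::make_pair (lo, hi));
  r.swap (out);
}
```

With this, [10,20] U [5,8] U [1,3] yields the three disjoint pairs [1,3][5,8][10,20], matching the RANGE3 selftest above.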
diff --git a/gcc/range.h b/gcc/range.h
new file mode 100644
index 0000000..158c980
--- /dev/null
+++ b/gcc/range.h
@@ -0,0 +1,230 @@ 
+/* Header file for range analysis.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   Contributed by Aldy Hernandez <aldyh@redhat.com>.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free
+Software Foundation; either version 3, or (at your option) any later
+version.
+
+GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or
+FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
+for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_RANGE_H
+#define GCC_RANGE_H
+
+class irange_storage;
+
+/* This is a class for working with ranges, currently integer ones.
+   With it you can specify a range such as [5,10] (5 through 10
+   inclusive), or a multi-part range such as [-10,5][30,40][50,60],
+   which denotes the union of its sub-ranges.
+
+   Inverse ranges are represented as an actual range.  For instance,
+   the inverse of 0 is [-MIN,-1][1,+MAX] for a signed integer.
+
+   Methods are provided for intersecting and uniting ranges, as well
+   as converting between them.  In performing any of these operations,
+   when no efficient way can be computed, we may default to a more
+   conservative range.
+
+   For example, the inverse of [5,10][15,20][30,40] is actually
+   [-MIN,4][11,14][21,29][41,+MAX].  If this cannot be efficiently or
+   quickly computed, we may opt to represent the inverse as
+   [-MIN,4][41,+MAX] which is an equivalent conservative
+   representation.
+
+   This class is not meant to live in long term storage (GC).
+   Consequently, there are no GTY markers.  For long term storage, use
+   the irange_storage class described later.  */
+class irange
+{
+  friend class irange_storage;
+ public:
+  /* Maximum number of pairs of ranges allowed.  */
+  static const unsigned int max_pairs = 2;
+
+ private:
+  /* Number of items in bounds[].  */
+  unsigned char nitems;
+  /* Whether or not a set operation overflowed.  */
+  bool overflow;
+  /* The type of the range.  */
+  const_tree type;
+  /* The pairs of sub-ranges in the range.  */
+  wide_int bounds[max_pairs * 2];
+
+  void prepend (const wide_int &x, const wide_int &y);
+  void append (const wide_int &x, const wide_int &y);
+  void remove (unsigned i, unsigned j);
+  void canonicalize ();
+
+ public:
+  /* When constructing a range, this specifies whether this is a
+     regular range, or the inverse of a range.  */
+  enum kind { PLAIN, INVERSE };
+  irange () { type = NULL_TREE; nitems = 0; }
+  explicit irange (const_tree t) { set_range (t); }
+  irange (const_tree typ, const wide_int &lbound, const wide_int &ubound,
+	  kind rt = PLAIN)
+    { set_range (typ, lbound, ubound, rt); }
+  irange (const irange &);
+  irange (const irange_storage *stor, tree typ) { set_range (stor, typ); }
+
+  void set_range (const irange_storage *, const_tree);
+  void set_range (const_tree);
+  void set_range (const_tree, const wide_int &lbound, const wide_int &ubound,
+		  kind rt = PLAIN);
+  void set_range_for_type (const_tree);
+
+  bool overflow_p () const { return overflow && !TYPE_OVERFLOW_WRAPS (type); }
+  void set_overflow () { overflow = true; }
+  void clear_overflow () { overflow = false; }
+
+  unsigned num_pairs () const { return nitems / 2; }
+  /* Implicit conversion to `unsigned int' returns the number of pairs.  */
+  operator unsigned () const { return num_pairs (); }
+  /* Returns the lower bound of PAIR.  */
+  wide_int lower_bound (unsigned pair = 0) const
+    {
+      gcc_assert (nitems != 0 && pair < num_pairs ());
+      return bounds[pair * 2];
+    }
+  /* Returns the uppermost bound.  */
+  wide_int upper_bound () const
+    {
+      gcc_assert (nitems != 0);
+      return bounds[nitems - 1];
+    }
+  wide_int upper_bound (unsigned pair) const;
+
+  /* Remove a sub-range from a range.  PAIR is the zero-based
+     sub-range to remove.  */
+  void remove_pair (unsigned pair) { remove (pair * 2, pair * 2 + 1); }
+  void clear () { nitems = 0; }
+  void clear (const_tree t) { type = t; nitems = 0; overflow = false; }
+  bool empty_p () const { return !nitems; }
+  bool range_for_type_p () const;
+  bool simple_range_p () const { return nitems == 2; }
+
+  void dump ();
+  void dump (pretty_printer *pp);
+  void dump (FILE *);
+
+  bool valid_p () const;
+  void cast (const_tree type);
+  bool contains_p (const wide_int &element) const;
+  bool contains_p (const_tree) const;
+
+  const_tree get_type () const { return type; }
+
+  irange& operator= (const irange &r);
+  irange& operator= (const_tree t);
+
+  bool operator== (const irange &r) const;
+  bool operator!= (const irange &r) const { return !(*this == r); }
+
+  irange &union_ (const wide_int &x, const wide_int &y);
+  irange &union_ (const irange &r);
+  irange &intersect (const wide_int &x, const wide_int &y);
+  irange &intersect (const irange &r);
+  irange &invert ();
+};
+
+/* Return R1 U R2.  */
+static inline
+irange irange_union (const irange &r1, const irange &r2)
+{
+  return irange (r1).union_ (r2);
+}
+
+/* Return R1 ^ R2.  */
+static inline
+irange irange_intersect (const irange &r1, const irange &r2)
+{
+  return irange (r1).intersect (r2);
+}
+
+/* Return the inverse range of R1.  */
+static inline
+irange irange_invert (const irange &r1)
+{
+  return irange (r1).invert ();
+}
+
+void range_zero (irange *r, tree type);
+void range_one (irange *r, tree type);
+bool range_non_zero (irange *r, tree type);
+void range_positives (irange *r, tree type, unsigned int);
+
+/* An irange is inefficient when it comes to memory, so this class is
+   used to store iranges in memory (typically hanging off an
+   SSA_NAME).  It is a variable length structure containing the
+   sub-range pairs as well as the non-zero bitmask.  The number of
+   entries is irange::max_pairs * 2 + 1 (to accommodate the non-zero bits).
+
+   To store an irange class X into this memory efficient irange_storage
+   class use:
+
+	irange X;
+	...
+	irange_storage *stow = irange_storage::ggc_alloc (precision);
+	stow->set_irange (X);
+
+   To convert it back to an irange use:
+
+	tree type = ...;
+	irange X (stow, type);
+   or
+	if (SSA_NAME_RANGE_INFO (ssa)) {
+	  irange X (ssa);
+	  ...
+	}
+
+   To get at the nonzero bits use:
+
+	irange_storage *stow = ...;
+	stow->set_nonzero_bits (mask);
+	wide_int nz = stow->get_nonzero_bits ();
+*/
+
+class GTY((variable_size)) irange_storage
+{
+  friend class irange;
+ public:
+  /* These are the pair of subranges for the irange.  The last
+     wide_int allocated is a mask representing which bits in an
+     integer are known to be non-zero.  */
+  trailing_wide_ints<irange::max_pairs * 2 + 1> trailing_bounds;
+
+  void set_irange (const irange &);
+  /* Returns the size of an irange_storage with PRECISION.  */
+  static size_t size (unsigned precision)
+  { return sizeof (irange_storage)
+      /* There is a +1 for the non-zero bits field.  */
+      + trailing_wide_ints<irange::max_pairs * 2 + 1>::extra_size (precision);
+  }
+  /* Allocate GC memory for an irange_storage with PRECISION.  */
+  static irange_storage *ggc_alloc (unsigned precision)
+  { irange_storage *stow = static_cast<irange_storage *> (ggc_internal_alloc
+							  (size (precision)));
+    stow->trailing_bounds.set_precision (precision);
+    return stow;
+  }
+  /* Set the nonzero bit mask to WI.  */
+  void set_nonzero_bits (const wide_int &wi)
+  { trailing_bounds[irange::max_pairs * 2] = wi; }
+  /* Return the nonzero bits in the range.  */
+  wide_int get_nonzero_bits (void)
+  { return trailing_bounds[irange::max_pairs * 2]; }
+};
+
+#endif // GCC_RANGE_H
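The variable-size, single-allocation layout that irange_storage gets from trailing_wide_ints can be illustrated with a minimal standalone sketch.  All names here are hypothetical, and it uses malloc with a fixed 64-bit element type where GCC uses GC allocation (`ggc_internal_alloc`) and precision-dependent wide_int storage.

```cpp
#include <cassert>
#include <cstdlib>

/* One heap block holds the header plus N trailing elements: the same
   idea as irange_storage tacking trailing_wide_ints onto a single
   allocation instead of allocating each bound separately.  */
struct packed_bounds
{
  unsigned nelts;		/* Number of elements that follow.  */
  unsigned long long elts[1];	/* Trailing storage, over-allocated.  */

  /* Allocate a packed_bounds with room for N trailing elements.  */
  static packed_bounds *alloc (unsigned n)
  {
    size_t sz = sizeof (packed_bounds)
		+ (n - 1) * sizeof (unsigned long long);
    packed_bounds *p = static_cast<packed_bounds *> (malloc (sz));
    p->nelts = n;
    return p;
  }
};
```

An irange_storage for max_pairs == 2 would analogously hold the 2 * 2 bound slots plus one nonzero-bits slot in one block, which is where the irange::max_pairs * 2 + 1 count comes from.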
diff --git a/gcc/selftest.h b/gcc/selftest.h
index dad53e9..e15cb07 100644
--- a/gcc/selftest.h
+++ b/gcc/selftest.h
@@ -196,6 +196,7 @@  extern void tree_c_tests ();
 extern void tree_cfg_c_tests ();
 extern void vec_c_tests ();
 extern void wide_int_cc_tests ();
+extern void irange_tests ();
 
 extern int num_passes;
 
diff --git a/gcc/ssa.h b/gcc/ssa.h
index 4bc6b3f..f84d6e8 100644
--- a/gcc/ssa.h
+++ b/gcc/ssa.h
@@ -26,6 +26,7 @@  along with GCC; see the file COPYING3.  If not see
 #include "stringpool.h"
 #include "gimple-ssa.h"
 #include "tree-vrp.h"
+#include "range.h"
 #include "tree-ssanames.h"
 #include "tree-phinodes.h"
 #include "ssa-iterators.h" 
diff --git a/gcc/tree-core.h b/gcc/tree-core.h
index ea73477..d9a708d 100644
--- a/gcc/tree-core.h
+++ b/gcc/tree-core.h
@@ -33,7 +33,7 @@  struct function;
 struct real_value;
 struct fixed_value;
 struct ptr_info_def;
-struct range_info_def;
+class irange_storage;
 struct die_struct;
 
 
@@ -1043,9 +1043,6 @@  struct GTY(()) tree_base {
        TRANSACTION_EXPR_OUTER in
 	   TRANSACTION_EXPR
 
-       SSA_NAME_ANTI_RANGE_P in
-	   SSA_NAME
-
        MUST_TAIL_CALL in
 	   CALL_EXPR
 
@@ -1403,6 +1400,7 @@  struct GTY(()) ssa_use_operand_t {
   tree *GTY((skip(""))) use;
 };
 
+class irange;
 struct GTY(()) tree_ssa_name {
   struct tree_typed typed;
 
@@ -1416,8 +1414,8 @@  struct GTY(()) tree_ssa_name {
   union ssa_name_info_type {
     /* Pointer attributes used for alias analysis.  */
     struct GTY ((tag ("0"))) ptr_info_def *ptr_info;
-    /* Value range attributes used for zero/sign extension elimination.  */
-    struct GTY ((tag ("1"))) range_info_def *range_info;
+    /* Value range attributes.  */
+    class GTY ((tag ("1"))) irange_storage *range_info;
   } GTY ((desc ("%1.typed.type ?" \
 		"!POINTER_TYPE_P (TREE_TYPE ((tree)&%1)) : 2"))) info;
 
diff --git a/gcc/tree-scalar-evolution.c b/gcc/tree-scalar-evolution.c
index 95f65b0..f218d05 100644
--- a/gcc/tree-scalar-evolution.c
+++ b/gcc/tree-scalar-evolution.c
@@ -3332,7 +3332,7 @@  iv_can_overflow_p (struct loop *loop, tree type, tree base, tree step)
     base_min = base_max = base;
   else if (TREE_CODE (base) == SSA_NAME
 	   && INTEGRAL_TYPE_P (TREE_TYPE (base))
-	   && get_range_info (base, &base_min, &base_max) == VR_RANGE)
+	   && get_range_info (base, &base_min, &base_max))
     ;
   else
     return true;
@@ -3341,7 +3341,7 @@  iv_can_overflow_p (struct loop *loop, tree type, tree base, tree step)
     step_min = step_max = step;
   else if (TREE_CODE (step) == SSA_NAME
 	   && INTEGRAL_TYPE_P (TREE_TYPE (step))
-	   && get_range_info (step, &step_min, &step_max) == VR_RANGE)
+	   && get_range_info (step, &step_min, &step_max))
     ;
   else
     return true;
diff --git a/gcc/tree-ssa-copy.c b/gcc/tree-ssa-copy.c
index 9f0fe54..dd4f26d 100644
--- a/gcc/tree-ssa-copy.c
+++ b/gcc/tree-ssa-copy.c
@@ -545,8 +545,8 @@  fini_copy_prop (void)
 		   && !SSA_NAME_RANGE_INFO (copy_of[i].value)
 		   && var_bb == copy_of_bb)
 	    duplicate_ssa_name_range_info (copy_of[i].value,
-					   SSA_NAME_RANGE_TYPE (var),
-					   SSA_NAME_RANGE_INFO (var));
+					   SSA_NAME_RANGE_INFO (var),
+					   TREE_TYPE (var));
 	}
     }
 
diff --git a/gcc/tree-ssa-loop-niter.c b/gcc/tree-ssa-loop-niter.c
index e67cd93..67d017a 100644
--- a/gcc/tree-ssa-loop-niter.c
+++ b/gcc/tree-ssa-loop-niter.c
@@ -225,7 +225,7 @@  refine_value_range_using_guard (tree type, tree var,
     }
   else if (TREE_CODE (varc1) == SSA_NAME
 	   && INTEGRAL_TYPE_P (type)
-	   && get_range_info (varc1, &minv, &maxv) == VR_RANGE)
+	   && get_range_info (varc1, &minv, &maxv))
     {
       gcc_assert (wi::le_p (minv, maxv, sgn));
       wi::to_mpz (minv, minc1, sgn);
@@ -347,8 +347,6 @@  determine_value_range (struct loop *loop, tree type, tree var, mpz_t off,
   int cnt = 0;
   mpz_t minm, maxm;
   basic_block bb;
-  wide_int minv, maxv;
-  enum value_range_type rtype = VR_VARYING;
 
   /* If the expression is a constant, we know its value exactly.  */
   if (integer_zerop (var))
@@ -368,7 +366,8 @@  determine_value_range (struct loop *loop, tree type, tree var, mpz_t off,
       gphi_iterator gsi;
 
       /* Either for VAR itself...  */
-      rtype = get_range_info (var, &minv, &maxv);
+      wide_int minv, maxv;
+      bool have_range = get_range_info (var, &minv, &maxv);
       /* Or for PHI results in loop->header where VAR is used as
 	 PHI argument from the loop preheader edge.  */
       for (gsi = gsi_start_phis (loop->header); !gsi_end_p (gsi); gsi_next (&gsi))
@@ -376,12 +375,11 @@  determine_value_range (struct loop *loop, tree type, tree var, mpz_t off,
 	  gphi *phi = gsi.phi ();
 	  wide_int minc, maxc;
 	  if (PHI_ARG_DEF_FROM_EDGE (phi, e) == var
-	      && (get_range_info (gimple_phi_result (phi), &minc, &maxc)
-		  == VR_RANGE))
+	      && get_range_info (gimple_phi_result (phi), &minc, &maxc))
 	    {
-	      if (rtype != VR_RANGE)
+	      if (!have_range)
 		{
-		  rtype = VR_RANGE;
+		  have_range = true;
 		  minv = minc;
 		  maxv = maxc;
 		}
@@ -395,7 +393,7 @@  determine_value_range (struct loop *loop, tree type, tree var, mpz_t off,
 		     involved.  */
 		  if (wi::gt_p (minv, maxv, sgn))
 		    {
-		      rtype = get_range_info (var, &minv, &maxv);
+		      have_range = get_range_info (var, &minv, &maxv);
 		      break;
 		    }
 		}
@@ -403,17 +401,17 @@  determine_value_range (struct loop *loop, tree type, tree var, mpz_t off,
 	}
       mpz_init (minm);
       mpz_init (maxm);
-      if (rtype != VR_RANGE)
-	{
-	  mpz_set (minm, min);
-	  mpz_set (maxm, max);
-	}
-      else
+      if (have_range)
 	{
 	  gcc_assert (wi::le_p (minv, maxv, sgn));
 	  wi::to_mpz (minv, minm, sgn);
 	  wi::to_mpz (maxv, maxm, sgn);
 	}
+      else
+	{
+	  mpz_set (minm, min);
+	  mpz_set (maxm, max);
+	}
       /* Now walk the dominators of the loop header and use the entry
 	 guards to refine the estimates.  */
       for (bb = loop->header;
@@ -3140,7 +3138,7 @@  record_nonwrapping_iv (struct loop *loop, tree base, tree step, gimple *stmt,
       if (TREE_CODE (orig_base) == SSA_NAME
 	  && TREE_CODE (high) == INTEGER_CST
 	  && INTEGRAL_TYPE_P (TREE_TYPE (orig_base))
-	  && (get_range_info (orig_base, &min, &max) == VR_RANGE
+	  && (get_range_info (orig_base, &min, &max)
 	      || get_cst_init_from_scev (orig_base, &max, false))
 	  && wi::gts_p (high, max))
 	base = wide_int_to_tree (unsigned_type, max);
@@ -3158,7 +3156,7 @@  record_nonwrapping_iv (struct loop *loop, tree base, tree step, gimple *stmt,
       if (TREE_CODE (orig_base) == SSA_NAME
 	  && TREE_CODE (low) == INTEGER_CST
 	  && INTEGRAL_TYPE_P (TREE_TYPE (orig_base))
-	  && (get_range_info (orig_base, &min, &max) == VR_RANGE
+	  && (get_range_info (orig_base, &min, &max)
 	      || get_cst_init_from_scev (orig_base, &min, true))
 	  && wi::gts_p (min, low))
 	base = wide_int_to_tree (unsigned_type, min);
@@ -4397,7 +4395,6 @@  scev_var_range_cant_overflow (tree var, tree step, struct loop *loop)
 {
   tree type;
   wide_int minv, maxv, diff, step_wi;
-  enum value_range_type rtype;
 
   if (TREE_CODE (step) != INTEGER_CST || !INTEGRAL_TYPE_P (TREE_TYPE (var)))
     return false;
@@ -4408,8 +4405,7 @@  scev_var_range_cant_overflow (tree var, tree step, struct loop *loop)
   if (!def_bb || !dominated_by_p (CDI_DOMINATORS, loop->latch, def_bb))
     return false;
 
-  rtype = get_range_info (var, &minv, &maxv);
-  if (rtype != VR_RANGE)
+  if (!get_range_info (var, &minv, &maxv))
     return false;
 
   /* VAR is a scev whose evolution part is STEP and value range info
diff --git a/gcc/tree-ssa-phiopt.c b/gcc/tree-ssa-phiopt.c
index b652361..f06985b 100644
--- a/gcc/tree-ssa-phiopt.c
+++ b/gcc/tree-ssa-phiopt.c
@@ -1065,11 +1065,11 @@  value_replacement (basic_block cond_bb, basic_block middle_bb,
 	  SSA_NAME_RANGE_INFO (lhs) = NULL;
 	  /* If available, we can use VR of phi result at least.  */
 	  tree phires = gimple_phi_result (phi);
-	  struct range_info_def *phires_range_info
+	  irange_storage *phires_range_info
 	    = SSA_NAME_RANGE_INFO (phires);
 	  if (phires_range_info)
-	    duplicate_ssa_name_range_info (lhs, SSA_NAME_RANGE_TYPE (phires),
-					   phires_range_info);
+	    duplicate_ssa_name_range_info (lhs, phires_range_info,
+					   TREE_TYPE (phires));
 	}
       gimple_stmt_iterator gsi_from = gsi_for_stmt (assign);
       gsi_move_before (&gsi_from, &gsi);
diff --git a/gcc/tree-ssa-pre.c b/gcc/tree-ssa-pre.c
index 2a431c9..e69cbc1 100644
--- a/gcc/tree-ssa-pre.c
+++ b/gcc/tree-ssa-pre.c
@@ -3107,12 +3107,11 @@  insert_into_preds_of_block (basic_block block, unsigned int exprnum,
       && SSA_NAME_RANGE_INFO (expr->u.nary->op[0]))
     {
       wide_int min, max;
-      if (get_range_info (expr->u.nary->op[0], &min, &max) == VR_RANGE
+      if (get_range_info (expr->u.nary->op[0], &min, &max)
 	  && !wi::neg_p (min, SIGNED)
 	  && !wi::neg_p (max, SIGNED))
 	/* Just handle extension and sign-changes of all-positive ranges.  */
-	set_range_info (temp,
-			SSA_NAME_RANGE_TYPE (expr->u.nary->op[0]),
+	set_range_info (temp, VR_RANGE,
 			wide_int_storage::from (min, TYPE_PRECISION (type),
 						TYPE_SIGN (type)),
 			wide_int_storage::from (max, TYPE_PRECISION (type),
@@ -4326,8 +4325,8 @@  eliminate_dom_walker::before_dom_children (basic_block b)
 		       && ! VN_INFO_RANGE_INFO (sprime)
 		       && b == sprime_b)
 		duplicate_ssa_name_range_info (sprime,
-					       VN_INFO_RANGE_TYPE (lhs),
-					       VN_INFO_RANGE_INFO (lhs));
+					       VN_INFO_RANGE_INFO (lhs),
+					       TREE_TYPE (lhs));
 	    }
 
 	  /* Inhibit the use of an inserted PHI on a loop header when
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index c140c35..a991735 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -3349,24 +3349,15 @@  set_ssa_val_to (tree from, tree to)
 		{
 		  /* Save old info.  */
 		  if (! VN_INFO (to)->info.range_info)
-		    {
-		      VN_INFO (to)->info.range_info = SSA_NAME_RANGE_INFO (to);
-		      VN_INFO (to)->range_info_anti_range_p
-			= SSA_NAME_ANTI_RANGE_P (to);
-		    }
+		    VN_INFO (to)->info.range_info = SSA_NAME_RANGE_INFO (to);
 		  /* Use that from the dominator.  */
 		  SSA_NAME_RANGE_INFO (to) = SSA_NAME_RANGE_INFO (from);
-		  SSA_NAME_ANTI_RANGE_P (to) = SSA_NAME_ANTI_RANGE_P (from);
 		}
 	      else
 		{
 		  /* Save old info.  */
 		  if (! VN_INFO (to)->info.range_info)
-		    {
-		      VN_INFO (to)->info.range_info = SSA_NAME_RANGE_INFO (to);
-		      VN_INFO (to)->range_info_anti_range_p
-			= SSA_NAME_ANTI_RANGE_P (to);
-		    }
+		    VN_INFO (to)->info.range_info = SSA_NAME_RANGE_INFO (to);
 		  /* Rather than allocating memory and unioning the info
 		     just clear it.  */
 		  SSA_NAME_RANGE_INFO (to) = NULL;
@@ -4616,11 +4607,7 @@  scc_vn_restore_ssa_info (void)
 	    SSA_NAME_PTR_INFO (name) = VN_INFO (name)->info.ptr_info;
 	  else if (INTEGRAL_TYPE_P (TREE_TYPE (name))
 		   && VN_INFO (name)->info.range_info)
-	    {
-	      SSA_NAME_RANGE_INFO (name) = VN_INFO (name)->info.range_info;
-	      SSA_NAME_ANTI_RANGE_P (name)
-		= VN_INFO (name)->range_info_anti_range_p;
-	    }
+	    SSA_NAME_RANGE_INFO (name) = VN_INFO (name)->info.range_info;
 	}
     }
 }
diff --git a/gcc/tree-ssa-sccvn.h b/gcc/tree-ssa-sccvn.h
index 77d0183..6ad8e53 100644
--- a/gcc/tree-ssa-sccvn.h
+++ b/gcc/tree-ssa-sccvn.h
@@ -201,9 +201,6 @@  typedef struct vn_ssa_aux
      insertion of such with EXPR as definition is required before
      a use can be created of it.  */
   unsigned needs_insertion : 1;
-
-  /* Whether range-info is anti-range.  */
-  unsigned range_info_anti_range_p : 1;
 } *vn_ssa_aux_t;
 
 enum vn_lookup_kind { VN_NOWALK, VN_WALK, VN_WALKREWRITE };
@@ -262,7 +259,7 @@  vn_valueize (tree name)
 
 /* Get at the original range info for NAME.  */
 
-inline range_info_def *
+inline irange_storage *
 VN_INFO_RANGE_INFO (tree name)
 {
   return (VN_INFO (name)->info.range_info
@@ -270,24 +267,6 @@  VN_INFO_RANGE_INFO (tree name)
 	  : SSA_NAME_RANGE_INFO (name));
 }
 
-/* Whether the original range info of NAME is an anti-range.  */
-
-inline bool
-VN_INFO_ANTI_RANGE_P (tree name)
-{
-  return (VN_INFO (name)->info.range_info
-	  ? VN_INFO (name)->range_info_anti_range_p
-	  : SSA_NAME_ANTI_RANGE_P (name));
-}
-
-/* Get at the original range info kind for NAME.  */
-
-inline value_range_type
-VN_INFO_RANGE_TYPE (tree name)
-{
-  return VN_INFO_ANTI_RANGE_P (name) ? VR_ANTI_RANGE : VR_RANGE;
-}
-
 /* Get at the original pointer info for NAME.  */
 
 inline ptr_info_def *
diff --git a/gcc/tree-ssanames.c b/gcc/tree-ssanames.c
index 353c7b1..cf6c8c6 100644
--- a/gcc/tree-ssanames.c
+++ b/gcc/tree-ssanames.c
@@ -328,59 +328,57 @@  set_range_info (tree name, enum value_range_type range_type,
 {
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
   gcc_assert (range_type == VR_RANGE || range_type == VR_ANTI_RANGE);
-  range_info_def *ri = SSA_NAME_RANGE_INFO (name);
+  irange_storage *ri = SSA_NAME_RANGE_INFO (name);
   unsigned int precision = TYPE_PRECISION (TREE_TYPE (name));
 
   /* Allocate if not available.  */
   if (ri == NULL)
     {
-      size_t size = (sizeof (range_info_def)
-		     + trailing_wide_ints <3>::extra_size (precision));
-      ri = static_cast<range_info_def *> (ggc_internal_alloc (size));
-      ri->ints.set_precision (precision);
+      ri = irange_storage::ggc_alloc (precision);
       SSA_NAME_RANGE_INFO (name) = ri;
       ri->set_nonzero_bits (wi::shwi (-1, precision));
     }
 
-  /* Record the range type.  */
-  if (SSA_NAME_RANGE_TYPE (name) != range_type)
-    SSA_NAME_ANTI_RANGE_P (name) = (range_type == VR_ANTI_RANGE);
-
-  /* Set the values.  */
-  ri->set_min (min);
-  ri->set_max (max);
+  signop sign = TYPE_SIGN (TREE_TYPE (name));
+  irange local (TREE_TYPE (name),
+		wide_int_storage::from (min, precision, sign),
+		wide_int_storage::from (max, precision, sign),
+		range_type == VR_ANTI_RANGE ? irange::INVERSE : irange::PLAIN);
+  ri->set_irange (local);
 
   /* If it is a range, try to improve nonzero_bits from the min/max.  */
   if (range_type == VR_RANGE)
     {
-      wide_int xorv = ri->get_min () ^ ri->get_max ();
+      wide_int xorv = min ^ max;
       if (xorv != 0)
 	xorv = wi::mask (precision - wi::clz (xorv), false, precision);
-      ri->set_nonzero_bits (ri->get_nonzero_bits () & (ri->get_min () | xorv));
+      ri->set_nonzero_bits (ri->get_nonzero_bits () & (min | xorv));
     }
 }
 
 
-/* Gets range information MIN, MAX and returns enum value_range_type
-   corresponding to tree ssa_name NAME.  enum value_range_type returned
-   is used to determine if MIN and MAX are valid values.  */
+/* If there is range information available for the given ssa_name
+   NAME, returns TRUE and sets MIN, MAX accordingly.  */
 
-enum value_range_type
+bool
 get_range_info (const_tree name, wide_int *min, wide_int *max)
 {
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
   gcc_assert (min && max);
-  range_info_def *ri = SSA_NAME_RANGE_INFO (name);
 
-  /* Return VR_VARYING for SSA_NAMEs with NULL RANGE_INFO or SSA_NAMEs
-     with integral types width > 2 * HOST_BITS_PER_WIDE_INT precision.  */
-  if (!ri || (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (name)))
-	      > 2 * HOST_BITS_PER_WIDE_INT))
-    return VR_VARYING;
+  /* Return false for SSA_NAMEs with NULL RANGE_INFO or SSA_NAMEs
+     with integral types wider than 2 * HOST_BITS_PER_WIDE_INT.  */
+  if (!SSA_NAME_RANGE_INFO (name)
+      /* ??? Do we still need this precision check?  */
+      || (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (name)))
+	  > 2 * HOST_BITS_PER_WIDE_INT))
+    return false;
 
-  *min = ri->get_min ();
-  *max = ri->get_max ();
-  return SSA_NAME_RANGE_TYPE (name);
+  irange ri (name);
+  gcc_assert (ri.valid_p ());
+  *min = ri.lower_bound ();
+  *max = ri.upper_bound ();
+  return true;
 }
 
 /* Set nonnull attribute to pointer NAME.  */
@@ -415,14 +413,14 @@  get_ptr_nonnull (const_tree name)
 /* Change non-zero bits bitmask of NAME.  */
 
 void
-set_nonzero_bits (tree name, const wide_int_ref &mask)
+set_nonzero_bits (tree name, const wide_int &mask)
 {
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
   if (SSA_NAME_RANGE_INFO (name) == NULL)
     set_range_info (name, VR_RANGE,
 		    TYPE_MIN_VALUE (TREE_TYPE (name)),
 		    TYPE_MAX_VALUE (TREE_TYPE (name)));
-  range_info_def *ri = SSA_NAME_RANGE_INFO (name);
+  irange_storage *ri = SSA_NAME_RANGE_INFO (name);
   ri->set_nonzero_bits (mask);
 }
 
@@ -442,7 +440,7 @@  get_nonzero_bits (const_tree name)
       return wi::shwi (-1, precision);
     }
 
-  range_info_def *ri = SSA_NAME_RANGE_INFO (name);
+  irange_storage *ri = SSA_NAME_RANGE_INFO (name);
   if (!ri)
     return wi::shwi (-1, precision);
 
@@ -676,29 +674,38 @@  duplicate_ssa_name_ptr_info (tree name, struct ptr_info_def *ptr_info)
   SSA_NAME_PTR_INFO (name) = new_ptr_info;
 }
 
-/* Creates a duplicate of the range_info_def at RANGE_INFO of type
-   RANGE_TYPE for use by the SSA name NAME.  */
+/* Creates a duplicate of the range info at OLD_RANGE_INFO for use by the
+   SSA name NAME.  */
 void
-duplicate_ssa_name_range_info (tree name, enum value_range_type range_type,
-			       struct range_info_def *range_info)
+duplicate_ssa_name_range_info (tree name, irange_storage *old_range_info,
+			       tree old_type)
 {
-  struct range_info_def *new_range_info;
-
   gcc_assert (!POINTER_TYPE_P (TREE_TYPE (name)));
   gcc_assert (!SSA_NAME_RANGE_INFO (name));
 
-  if (!range_info)
+  if (!old_range_info)
     return;
 
   unsigned int precision = TYPE_PRECISION (TREE_TYPE (name));
-  size_t size = (sizeof (range_info_def)
-		 + trailing_wide_ints <3>::extra_size (precision));
-  new_range_info = static_cast<range_info_def *> (ggc_internal_alloc (size));
-  memcpy (new_range_info, range_info, size);
-
-  gcc_assert (range_type == VR_RANGE || range_type == VR_ANTI_RANGE);
-  SSA_NAME_ANTI_RANGE_P (name) = (range_type == VR_ANTI_RANGE);
+  irange_storage *new_range_info = irange_storage::ggc_alloc (precision);
   SSA_NAME_RANGE_INFO (name) = new_range_info;
+  /* If NAME was created with copy_ssa_name_fn(), we may have gotten
+     the TYPE_MAIN_VARIANT for the original type, which may be a
+     different typedef of the original type.  If so, convert the range
+     to be consistent.
+
+     NOTE: I have also seen tree-ssa-pre.c copy the range of an
+     'unsigned long long' onto the range of an 'unsigned long', so
+     the two types are not necessarily the same size.  */
+  if (TREE_TYPE (name) != old_type)
+    {
+      irange ir (old_range_info, old_type);
+      ir.cast (TREE_TYPE (name));
+      new_range_info->set_irange (ir);
+      new_range_info->set_nonzero_bits (old_range_info->get_nonzero_bits ());
+      return;
+    }
+  memcpy (new_range_info, old_range_info, irange_storage::size (precision));
 }
 
 
@@ -719,11 +726,11 @@  duplicate_ssa_name_fn (struct function *fn, tree name, gimple *stmt)
     }
   else
     {
-      struct range_info_def *old_range_info = SSA_NAME_RANGE_INFO (name);
+      irange_storage *old_range_info = SSA_NAME_RANGE_INFO (name);
 
       if (old_range_info)
-	duplicate_ssa_name_range_info (new_name, SSA_NAME_RANGE_TYPE (name),
-				       old_range_info);
+	duplicate_ssa_name_range_info (new_name,
+				       old_range_info, TREE_TYPE (name));
     }
 
   return new_name;
diff --git a/gcc/tree-ssanames.h b/gcc/tree-ssanames.h
index 9a18394..ab1cfe7 100644
--- a/gcc/tree-ssanames.h
+++ b/gcc/tree-ssanames.h
@@ -45,14 +45,12 @@  struct GTY(()) ptr_info_def
   unsigned int misalign;
 };
 
-/* Value range information for SSA_NAMEs representing non-pointer variables.  */
-
-struct GTY ((variable_size)) range_info_def {
-  /* Minimum, maximum and nonzero bits.  */
-  TRAILING_WIDE_INT_ACCESSOR (min, ints, 0)
-  TRAILING_WIDE_INT_ACCESSOR (max, ints, 1)
-  TRAILING_WIDE_INT_ACCESSOR (nonzero_bits, ints, 2)
-  trailing_wide_ints <3> ints;
+/* Nonzero bits information for SSA_NAMEs representing non-pointer variables.  */
+
+struct GTY ((variable_size)) nonzero_bits_def {
+  /* Mask representing which bits in an integer are known to be non-zero.  */
+  TRAILING_WIDE_INT_ACCESSOR (nonzero_bits, ints, 0)
+  trailing_wide_ints <1> ints;
 };
 
 
@@ -70,9 +68,8 @@  struct GTY ((variable_size)) range_info_def {
 extern void set_range_info (tree, enum value_range_type, const wide_int_ref &,
 			    const wide_int_ref &);
 /* Gets the value range from SSA.  */
-extern enum value_range_type get_range_info (const_tree, wide_int *,
-					     wide_int *);
-extern void set_nonzero_bits (tree, const wide_int_ref &);
+extern bool get_range_info (const_tree, wide_int *, wide_int *);
+extern void set_nonzero_bits (tree, const wide_int &);
 extern wide_int get_nonzero_bits (const_tree);
 extern bool ssa_name_has_boolean_range (tree);
 extern void init_ssanames (struct function *, int);
@@ -95,8 +92,7 @@  extern bool get_ptr_nonnull (const_tree);
 extern tree copy_ssa_name_fn (struct function *, tree, gimple *);
 extern void duplicate_ssa_name_ptr_info (tree, struct ptr_info_def *);
 extern tree duplicate_ssa_name_fn (struct function *, tree, gimple *);
-extern void duplicate_ssa_name_range_info (tree, enum value_range_type,
-					   struct range_info_def *);
+extern void duplicate_ssa_name_range_info (tree, irange_storage *, tree);
 extern void reset_flow_sensitive_info (tree);
 extern void reset_flow_sensitive_info_in_bb (basic_block);
 extern void release_defs (gimple *);
diff --git a/gcc/tree-vect-patterns.c b/gcc/tree-vect-patterns.c
index 39f0133..05bf7a3 100644
--- a/gcc/tree-vect-patterns.c
+++ b/gcc/tree-vect-patterns.c
@@ -2894,7 +2894,7 @@  vect_recog_divmod_pattern (vec<gimple *> *stmts,
 
       wide_int oprnd0_min, oprnd0_max;
       int msb = 1;
-      if (get_range_info (oprnd0, &oprnd0_min, &oprnd0_max) == VR_RANGE)
+      if (get_range_info (oprnd0, &oprnd0_min, &oprnd0_max))
 	{
 	  if (!wi::neg_p (oprnd0_min, TYPE_SIGN (itype)))
 	    msb = 0;
diff --git a/gcc/tree-vrp.c b/gcc/tree-vrp.c
index 716a7c2..92abd00 100644
--- a/gcc/tree-vrp.c
+++ b/gcc/tree-vrp.c
@@ -498,6 +498,70 @@  abs_extent_range (value_range *vr, tree min, tree max)
   set_and_canonicalize_value_range (vr, VR_RANGE, min, max, NULL);
 }
 
+/* Return TRUE if an irange is an ANTI_RANGE.  This is a temporary
+   measure offering backward compatibility with range_info_def, and
+   will go away.  */
+
+static bool
+irange_is_anti_range (irange r)
+{
+  const_tree type = r.get_type ();
+  /* Remember: VR_ANTI_RANGE([3,10]) ==> [MIN,2][11,MAX].  */
+  unsigned int precision = TYPE_PRECISION (type);
+  wide_int min = wi::min_value (precision, TYPE_SIGN (type));
+  wide_int max = wi::max_value (precision, TYPE_SIGN (type));
+  return (r.num_pairs () == 2
+          && r.lower_bound () == min
+          && r.upper_bound () == max);
+}
+
+/* Convert the range info of an SSA name into VRP's internal
+   value_range representation.  Return VR_RANGE or VR_ANTI_RANGE if
+   range info is available for the SSA name, otherwise return
+   VR_VARYING.  MIN/MAX are set only if range info is available.
+
+   FIXME: Any use of this function outside of tree-vrp must be
+   converted.  For instance, ipa-prop.c.  */
+
+enum value_range_type
+get_range_info_as_value_range (const_tree ssa, wide_int *min, wide_int *max)
+{
+  if (!SSA_NAME_RANGE_INFO (ssa)
+      || (GET_MODE_PRECISION (TYPE_MODE (TREE_TYPE (ssa)))
+	  > 2 * HOST_BITS_PER_WIDE_INT))
+    return VR_VARYING;
+
+  irange ri (ssa);
+  if (irange_is_anti_range (ri))
+    {
+      irange tmp (ri);
+      tmp.invert ();
+      gcc_assert (!tmp.overflow_p ());
+      if (tmp.num_pairs () != 1)
+       {
+         fprintf (stderr, "Inverse of anti range does not have exactly 1 pair.\n");
+         fprintf (stderr, "Type: ");
+         debug_generic_stmt (const_cast<tree> (ri.get_type ()));
+         fprintf (stderr, "Original anti range:\n");
+         ri.dump ();
+         fprintf (stderr, "\n");
+         fprintf (stderr, "Supposed inverse of anti range:\n");
+         tmp.dump ();
+         fprintf (stderr, "\n");
+         gcc_unreachable ();
+       }
+      *min = tmp.lower_bound ();
+      *max = tmp.upper_bound ();
+      return VR_ANTI_RANGE;
+    }
+
+  /* We chop off any middle ranges, because range_info_def has no use
+     for such granularity.  */
+  *min = ri.lower_bound ();
+  *max = ri.upper_bound ();
+  return VR_RANGE;
+}
+
 
 /* Return value range information for VAR.
 
@@ -555,7 +619,8 @@  get_value_range (const_tree var)
 	  else if (INTEGRAL_TYPE_P (TREE_TYPE (sym)))
 	    {
 	      wide_int min, max;
-	      value_range_type rtype = get_range_info (var, &min, &max);
+	      value_range_type rtype
+		= get_range_info_as_value_range (var, &min, &max);
 	      if (rtype == VR_RANGE || rtype == VR_ANTI_RANGE)
 		set_value_range (vr, rtype,
 				 wide_int_to_tree (TREE_TYPE (var), min),
@@ -637,7 +702,7 @@  update_value_range (const_tree var, value_range *new_vr)
   if (INTEGRAL_TYPE_P (TREE_TYPE (var)))
     {
       wide_int min, max;
-      value_range_type rtype = get_range_info (var, &min, &max);
+      value_range_type rtype = get_range_info_as_value_range (var, &min, &max);
       if (rtype == VR_RANGE || rtype == VR_ANTI_RANGE)
 	{
 	  tree nr_min, nr_max;
@@ -6887,9 +6952,10 @@  remove_range_assertions (void)
 		    && all_imm_uses_in_stmt_or_feed_cond (var, stmt,
 							  single_pred (bb)))
 		  {
-		    set_range_info (var, SSA_NAME_RANGE_TYPE (lhs),
-				    SSA_NAME_RANGE_INFO (lhs)->get_min (),
-				    SSA_NAME_RANGE_INFO (lhs)->get_max ());
+		    wide_int min, max;
+		    enum value_range_type rtype
+		      = get_range_info_as_value_range (lhs, &min, &max);
+		    set_range_info (var, rtype, min, max);
 		    maybe_set_nonzero_bits (bb, var);
 		  }
 	      }
@@ -9903,7 +9969,7 @@  simplify_conversion_using_ranges (gimple_stmt_iterator *gsi, gimple *stmt)
      case innerop was created during substitute-and-fold.  */
   wide_int imin, imax;
   if (!INTEGRAL_TYPE_P (TREE_TYPE (innerop))
-      || get_range_info (innerop, &imin, &imax) != VR_RANGE)
+      || !get_range_info (innerop, &imin, &imax))
     return false;
   innermin = widest_int::from (imin, TYPE_SIGN (TREE_TYPE (innerop)));
   innermax = widest_int::from (imax, TYPE_SIGN (TREE_TYPE (innerop)));
diff --git a/gcc/tree-vrp.h b/gcc/tree-vrp.h
index ef2c68a..99750cd 100644
--- a/gcc/tree-vrp.h
+++ b/gcc/tree-vrp.h
@@ -57,3 +57,5 @@  extern void extract_range_from_unary_expr (value_range *vr,
 					   value_range *vr0_,
 					   tree op0_type);
 
+enum value_range_type get_range_info_as_value_range (const_tree ssa,
+						     wide_int *min, wide_int *max);
diff --git a/gcc/tree.c b/gcc/tree.c
index db31620..a3b60d7 100644
--- a/gcc/tree.c
+++ b/gcc/tree.c
@@ -14277,7 +14277,6 @@  verify_type (const_tree t)
     }
 }
 
-
 /* Return 1 if ARG interpreted as signed in its precision is known to be
    always positive or 2 if ARG is known to be always negative, or 3 if
    ARG may be positive or negative.  */
@@ -14316,7 +14315,7 @@  get_range_pos_neg (tree arg)
   if (TREE_CODE (arg) != SSA_NAME)
     return 3;
   wide_int arg_min, arg_max;
-  while (get_range_info (arg, &arg_min, &arg_max) != VR_RANGE)
+  while (!get_range_info (arg, &arg_min, &arg_max))
     {
       gimple *g = SSA_NAME_DEF_STMT (arg);
       if (is_gimple_assign (g)
@@ -14326,6 +14325,8 @@  get_range_pos_neg (tree arg)
 	  if (INTEGRAL_TYPE_P (TREE_TYPE (t))
 	      && TYPE_PRECISION (TREE_TYPE (t)) <= prec)
 	    {
+	      /* A narrower value zero extended into a wider type
+		 always results in a positive value.  */
 	      if (TYPE_UNSIGNED (TREE_TYPE (t))
 		  && TYPE_PRECISION (TREE_TYPE (t)) < prec)
 		return 1;
diff --git a/gcc/tree.h b/gcc/tree.h
index c6e883c..8069302 100644
--- a/gcc/tree.h
+++ b/gcc/tree.h
@@ -1740,14 +1740,6 @@  extern void protected_set_expr_location (tree, location_t);
 #define SSA_NAME_PTR_INFO(N) \
    SSA_NAME_CHECK (N)->ssa_name.info.ptr_info
 
-/* True if SSA_NAME_RANGE_INFO describes an anti-range.  */
-#define SSA_NAME_ANTI_RANGE_P(N) \
-    SSA_NAME_CHECK (N)->base.static_flag
-
-/* The type of range described by SSA_NAME_RANGE_INFO.  */
-#define SSA_NAME_RANGE_TYPE(N) \
-    (SSA_NAME_ANTI_RANGE_P (N) ? VR_ANTI_RANGE : VR_RANGE)
-
 /* Value range info attributes for SSA_NAMEs of non pointer-type variables.  */
 #define SSA_NAME_RANGE_INFO(N) \
     SSA_NAME_CHECK (N)->ssa_name.info.range_info
diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 2115b61..1d21f64 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -1341,6 +1341,7 @@  private:
 public:
   void set_precision (unsigned int);
   trailing_wide_int operator [] (unsigned int);
+  const trailing_wide_int operator [] (unsigned int) const;
   static size_t extra_size (unsigned int);
 };
 
@@ -1413,6 +1414,17 @@  trailing_wide_ints <N>::operator [] (unsigned int index)
 				    &m_val[index * m_max_len]);
 }
 
+/* Return a read-only (const) view of element INDEX.  */
+template <int N>
+inline const trailing_wide_int
+trailing_wide_ints<N>::operator [] (unsigned int index) const
+{
+  return trailing_wide_int_storage
+    (m_precision,
+     const_cast<unsigned char *>(&m_len[index]),
+     const_cast<HOST_WIDE_INT *>(&m_val[index * m_max_len]));
+}
+
 /* Return how many extra bytes need to be added to the end of the structure
    in order to handle N wide_ints of precision PRECISION.  */
 template <int N>