Message ID: DBA11B41-3B27-4AAF-BEA4-E5D17276E5D5@comcast.net
On Fri, Oct 18, 2013 at 01:52:54PM -0700, Mike Stump wrote:
> On Oct 18, 2013, at 6:11 AM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> >>> Does this look ok? Kenny, can you double check the cases, think I have
> >>> them right, but? a double check would be good.
> >> That works for me.
> >>
> > i talked to mike about this last night, but did not follow up with an
> > email until now. The problem is that this code is wrong!!! He is working
> > to fix that and so i would expect something from him later (tomorrow for
> > those in europe).
>
> Ok, updated the patch, here is what I checked in:
>
> diff --git a/gcc/wide-int.h b/gcc/wide-int.h
> index 2ff130b..738ae11 100644
> --- a/gcc/wide-int.h
> +++ b/gcc/wide-int.h
> @@ -185,7 +185,9 @@ along with GCC; see the file COPYING3.  If not see
>
>    assuming t is a int_cst.
>
> -  Note that the bits above the precision are not defined and the
> +  Note, the bits past the precision up to the nearest HOST_WDE_INT

WIDE?

	Jakub
Mike Stump <mikestump@comcast.net> writes:
> +  // We optimize x < y, where y is 64 or fewer bits.
> +  // We have to be careful to not allow comparison to a large positive
> +  // unsigned value like 0x8000000000000000, those would be encoded
> +  // with a y.len == 2.
> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
> +      && y.len == 1)

I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
y.len must be 1.  I realise that tree constants can be stored with
TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
(so that extensions beyond TYPE_PRECISION are free).  But the
wide-int code is shielded from that by the ::decompose routine.
A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.

Thanks,
Richard
On 18 October 2013 22:55:39 Jakub Jelinek <jakub@redhat.com> wrote:
> On Fri, Oct 18, 2013 at 01:52:54PM -0700, Mike Stump wrote:
> > On Oct 18, 2013, at 6:11 AM, Kenneth Zadeck <zadeck@naturalbridge.com> wrote:
> > >>> Does this look ok? Kenny, can you double check the cases, think I
> > >>> have them right, but? a double check would be good.
> > >> That works for me.
> > > i talked to mike about this last night, but did not follow up with
> > > an email until now. The problem is that this code is wrong!!! He is
> > > working to fix that and so i would expect something from him later
> > > (tomorrow for those in europe).
> >
> > Ok, updated the patch, here is what I checked in:
> >
> > diff --git a/gcc/wide-int.h b/gcc/wide-int.h
> > index 2ff130b..738ae11 100644
> > --- a/gcc/wide-int.h
> > +++ b/gcc/wide-int.h
> > @@ -185,7 +185,9 @@ along with GCC; see the file COPYING3.  If not see
> >
> >    assuming t is a int_cst.
> >
> > -  Note that the bits above the precision are not defined and the
> > +  Note, the bits past the precision up to the nearest HOST_WDE_INT
>
> WIDE?

s/positve/positive/g falls into the same category, without commenting on
the patch itself.

Thanks,

> Jakub

Sent with AquaMail for Android
http://www.aqua-mail.com
On 10/19/2013 05:01 AM, Richard Sandiford wrote:
> Mike Stump <mikestump@comcast.net> writes:
>> +  // We optimize x < y, where y is 64 or fewer bits.
>> +  // We have to be careful to not allow comparison to a large positive
>> +  // unsigned value like 0x8000000000000000, those would be encoded
>> +  // with a y.len == 2.
>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>> +      && y.len == 1)
> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
> y.len must be 1.  I realise that tree constants can be stored with
> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
> (so that extensions beyond TYPE_PRECISION are free).  But the
> wide-int code is shielded from that by the ::decompose routine.
> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>
> Thanks,
> Richard

I think that part of this is that neither mike nor i really understand
how this stuff works anymore.

in the old version where we had precision 0, we would wait to canonicalize
a constant or a simple integer until we saw what the precision of the
other operand was.  That was what precision 0 meant.  it was kind of like
what you are now proposing with this new trait, but for the reason that we
actually did not know what to do rather than some concern about escapement.

What i do not understand is what happens when you bring in an integer
variable that is an unsigned HOST_WIDE_INT with the top bit set.  In the
precision 0 days, if the prec of the other side was 64 or less, the
incoming integer took 1 hwi, and if the precision was larger, it took two
hwis.  The canonicalization happened late enough so that there was never a
question.  Here we are trying to do this at compile time to avoid the
escape.  This is why my emails about this have continued to talk about the
unsigned HOST_WIDE_INT as a boundary case.
It is clear that if the value is a constant, you should be able to see at
compile time whether the top bit is set, but if the value is a simple
integer variable, you should still be able to do the non-escaping test as
long as the type is signed HOST_WIDE_INT or smaller.  I think that mike
certainly has not captured this yet.  But those are the issues as i see
them.
Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 10/19/2013 05:01 AM, Richard Sandiford wrote:
>> Mike Stump <mikestump@comcast.net> writes:
>>> +  // We optimize x < y, where y is 64 or fewer bits.
>>> +  // We have to be careful to not allow comparison to a large positive
>>> +  // unsigned value like 0x8000000000000000, those would be encoded
>>> +  // with a y.len == 2.
>>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>>> +      && y.len == 1)
>> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
>> y.len must be 1.  I realise that tree constants can be stored with
>> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
>> (so that extensions beyond TYPE_PRECISION are free).  But the
>> wide-int code is shielded from that by the ::decompose routine.
>> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>>
>> Thanks,
>> Richard
> I think that part of this is that neither mike nor i really understand
> how this stuff works anymore.
>
> in the old version where we had precision 0, we would wait to
> canonicalize a constant or a simple integer until we saw what the
> precision of the other operand was.  That was what precision 0 meant.
> it was kind of like what you are now proposing with this new trait, but
> for the reason that we actually did not know what to do rather than
> some concern about escapement.
>
> What i do not understand is what happens when you bring in an integer
> variable that is an unsigned HOST_WIDE_INT with the top bit set.  In
> the precision 0 days, if the prec of the other side was 64 or less, the
> incoming integer took 1 hwi and if the precision was larger, it took
> two hwis.  The canonicalization happened late enough so that there was
> never a question.

Ah, I think I know what you mean.
The original implementation was:

  template <typename T>
  static inline const HOST_WIDE_INT*
  to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T& x)
  {
    s[0] = x;
    if (signedp (x)
        || sizeof (T) < sizeof (HOST_WIDE_INT)
        || ! top_bit_set (x))
      {
        *l = 1;
      }
    else
      {
        s[1] = 0;
        *l = 2;
      }
    *p = 0;
    return s;
  }

  void
  wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
                                bool check_equal ATTRIBUTE_UNUSED,
                                bool check_zero ATTRIBUTE_UNUSED)
  {
    gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);

    if (*p1 == 0)
      *p1 = *p2;

    if (*p2 == 0)
      *p2 = *p1;

    gcc_checking_assert ((!check_equal) || *p1 == *p2);
  }

  /* Return true if C1 < C2 using signed comparisons.  */
  template <typename T1, typename T2>
  static inline bool
  lts_p (const T1 &c1, const T2 &c2)
  {
    bool result;
    HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
    HOST_WIDE_INT ws2[WIDE_INT_MAX_ELTS];
    const HOST_WIDE_INT *s1, *s2;  /* Returned data */
    unsigned int cl1, cl2;         /* array lengths */
    unsigned int p1, p2;           /* precisions */

    s1 = to_shwi1 (ws1, &cl1, &p1, c1);
    s2 = to_shwi1 (ws2, &cl2, &p2, c2);
    check_precision (&p1, &p2, false, true);

    if (p1 <= HOST_BITS_PER_WIDE_INT
        && p2 <= HOST_BITS_PER_WIDE_INT)
      {
        HOST_WIDE_INT x0 = sext_hwi (s1[0], p1);
        HOST_WIDE_INT x1 = sext_hwi (s2[0], p2);
        result = x0 < x1;
      }
    else
      result = lts_p_large (s1, cl1, p1, s2, cl2, p2);

  #ifdef DEBUG_WIDE_INT
    debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
  #endif
    return result;
  }

So if you had a 128-bit wide_int and T == unsigned HOST_WIDE_INT, this
lts_p would zero-extend the unsigned HOST_WIDE_INT to 128 bits and then do
a signed comparison.

The thing here is that the "check_equal" argument is false.  So if instead
you were comparing a 128-bit wide_int with a 64-bit tree constant, lts_p
would treat that tree constant as a signed 64-bit number, even if it was
TYPE_UNSIGNED.  Similarly if you were comparing a 128-bit tree constant
and a 64-bit tree constant.
You also allowed a comparison of a 128-bit wide_int with a 64-bit rtx,
again treating the 64-bit rtx as signed.

So when doing the wi:: conversion, I'd interpreted the desired semantics
for lts_p as being "treat both inputs as signed without extending them",
since that's what the function did in most cases.  It seemed inconsistent
to treat a 64-bit unsigned primitive integer differently from a 64-bit
unsigned tree constant.  So at the moment, it doesn't matter whether any
HOST_WIDE_INT input to lts_p is signed or unsigned, just like it didn't
and doesn't matter whether any tree input is signed or unsigned.

If instead we want lts_p to operate to a single unified precision, like
eq_p did:

  s1 = to_shwi1 (ws1, &cl1, &p1, c1);
  s2 = to_shwi1 (ws2, &cl2, &p2, c2);
  check_precision (&p1, &p2, true, false);

and still does, then that's easy enough to change.  Then all extendable
inputs will be extended according to their natural signedness and then
compared signed.  Mixed-precision rtx comparisons would be forbidden.

But that's tangential to the point I was trying to make above, which is
that the rules about valid values for "len" and post-check_precision
"precision" are still the same as in your original version.  So I think
Mike's original patch was right and that this extra "y.len == 1" check is
redundant.  That would still be true if we changed lts_p as above.

Thanks,
Richard
On 10/19/2013 10:18 AM, Richard Sandiford wrote:
> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>> On 10/19/2013 05:01 AM, Richard Sandiford wrote:
>>> Mike Stump <mikestump@comcast.net> writes:
>>>> +  // We optimize x < y, where y is 64 or fewer bits.
>>>> +  // We have to be careful to not allow comparison to a large positive
>>>> +  // unsigned value like 0x8000000000000000, those would be encoded
>>>> +  // with a y.len == 2.
>>>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>>>> +      && y.len == 1)
>>> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
>>> y.len must be 1.  I realise that tree constants can be stored with
>>> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
>>> (so that extensions beyond TYPE_PRECISION are free).  But the
>>> wide-int code is shielded from that by the ::decompose routine.
>>> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>>>
>>> Thanks,
>>> Richard
>> I think that part of this is that neither mike nor i really understand
>> how this stuff works anymore.
>>
>> in the old version where we had precision 0, we would wait to
>> canonicalize a constant or a simple integer until we saw what the
>> precision of the other operand was.  That was what precision 0 meant.
>> it was kind of like what you are now proposing with this new trait, but
>> for the reason that we actually did not know what to do rather than
>> some concern about escapement.
>>
>> What i do not understand is what happens when you bring in an integer
>> variable that is an unsigned HOST_WIDE_INT with the top bit set.  In
>> the precision 0 days, if the prec of the other side was 64 or less,
>> the incoming integer took 1 hwi and if the precision was larger, it
>> took two hwis.  The canonicalization happened late enough so that
>> there was never a question.
> Ah, I think I know what you mean.
The original implementation was:
>
> template <typename T>
> static inline const HOST_WIDE_INT*
> to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T& x)
> {
>   s[0] = x;
>   if (signedp (x)
>       || sizeof (T) < sizeof (HOST_WIDE_INT)
>       || ! top_bit_set (x))
>     {
>       *l = 1;
>     }
>   else
>     {
>       s[1] = 0;
>       *l = 2;
>     }
>   *p = 0;
>   return s;
> }
>
> void
> wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
>                               bool check_equal ATTRIBUTE_UNUSED,
>                               bool check_zero ATTRIBUTE_UNUSED)
> {
>   gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);
>
>   if (*p1 == 0)
>     *p1 = *p2;
>
>   if (*p2 == 0)
>     *p2 = *p1;
>
>   gcc_checking_assert ((!check_equal) || *p1 == *p2);
> }
>
> /* Return true if C1 < C2 using signed comparisons.  */
> template <typename T1, typename T2>
> static inline bool
> lts_p (const T1 &c1, const T2 &c2)
> {
>   bool result;
>   HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
>   HOST_WIDE_INT ws2[WIDE_INT_MAX_ELTS];
>   const HOST_WIDE_INT *s1, *s2;  /* Returned data */
>   unsigned int cl1, cl2;         /* array lengths */
>   unsigned int p1, p2;           /* precisions */
>
>   s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>   s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>   check_precision (&p1, &p2, false, true);
>
>   if (p1 <= HOST_BITS_PER_WIDE_INT
>       && p2 <= HOST_BITS_PER_WIDE_INT)
>     {
>       HOST_WIDE_INT x0 = sext_hwi (s1[0], p1);
>       HOST_WIDE_INT x1 = sext_hwi (s2[0], p2);
>       result = x0 < x1;
>     }
>   else
>     result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
>
> #ifdef DEBUG_WIDE_INT
>   debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
> #endif
>   return result;
> }

you need to be careful about asserting too much from the old code.  the
time line was:

1) we developed the stuff on x86-64
2) you did your patch
3) we ported everything to ppc and our private port.

i really only became very sensitive to this issue during step 3 because
the x86 does not exhibit these bugs.
> So if you had a 128-bit wide_int and T == unsigned HOST_WIDE_INT,
> this lts_p would zero-extend the unsigned HOST_WIDE_INT to 128 bits and
> then do a signed comparison.
>
> The thing here is that the "check_equal" argument is false.
> So if instead you were comparing a 128-bit wide_int with a 64-bit tree
> constant, lts_p would treat that tree constant as a signed 64-bit number,
> even if it was TYPE_UNSIGNED.  Similarly if you were comparing a 128-bit
> tree constant and a 64-bit tree constant.  You also allowed a comparison
> of a 128-bit wide_int with a 64-bit rtx, again treating the 64-bit rtx
> as signed.

I do not think that this is what check_equal meant because the 0 precision
was a wild card.  The 0 precision allowed the values to come in from
simple vars and constants and be converted on the fly.  However, as i said
above, i am not sure how well this worked since the testing for wide stuff
was so thin.

> So when doing the wi:: conversion, I'd interpreted the desired semantics
> for lts_p as being "treat both inputs as signed without extending them",
> since that's what the function did in most cases.  It seemed inconsistent
> to treat a 64-bit unsigned primitive integer differently from a
> 64-bit unsigned tree constant.  So at the moment, it doesn't matter

i do not see this as inconsistently as you do.  if i have a 6 in the gcc
source, i really mean that i want to compare that 6 with a 6 of any type
that happens to appear in the user's source program.  My 6 has to be
generic enough to match anything that the user might throw at it.  This
was richi's big argument against me having to write
foo.lts_p (wide_int (6, foo.get_precision ())).  The gcc source code
writer needs his 6 to be special.  Richi was right!!!!

> whether any HOST_WIDE_INT input to lts_p is signed or unsigned, just like
> it didn't and doesn't matter whether any tree input is signed or unsigned.
> If instead we want lts_p to operate to a single unified precision,
> like eq_p did:
>
>   s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>   s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>   check_precision (&p1, &p2, true, false);
>
> and still does, then that's easy enough to change.  Then all extendable
> inputs will be extended according to their natural signedness and then
> compared signed.  Mixed-precision rtx comparisons would be forbidden.
>
> But that's tangential to the point I was trying to make above,
> which is that the rules about valid values for "len" and post-
> check_precision "precision" are still the same as in your original
> version.  So I think Mike's original patch was right and that this extra
> "y.len == 1" check is redundant.  That would still be true if we changed
> lts_p as above.

the relative comparisons and the equality comparisons are different.  The
equality comparisons allowed the precision mismatch because there were
places in the front end that hashed tree-csts and so it did comparisons on
things whose types were not even similar.  We likely could have fixed this
by changing the code around to do the type comparison first, but we chose
to make the equality case more general - hence the false for the parameter
for check_equal.  We never made the similar changes for the relative
comparisons.

For this reason, the code really did depend on the precision 0 stuff
working, because when you passed in an integer variable or constant, it
would get a precision 0 and then get expanded to the proper width before
the comparison happened.  My point is that i do not see how that works now
because there is no tying of the precisions from the two operands of
binary operations.

I agree that we do not need the length test for the short circuit code,
but we do need to be at a point where (unsigned HWI) 0xfffffffffffffff
is canonicalized as two hwis if it is being compared with a number with
a 128 precision.

Kenny

> Thanks,
> Richard
Kenneth Zadeck <zadeck@naturalbridge.com> writes:
> On 10/19/2013 10:18 AM, Richard Sandiford wrote:
>> Kenneth Zadeck <zadeck@naturalbridge.com> writes:
>>> On 10/19/2013 05:01 AM, Richard Sandiford wrote:
>>>> Mike Stump <mikestump@comcast.net> writes:
>>>>> +  // We optimize x < y, where y is 64 or fewer bits.
>>>>> +  // We have to be careful to not allow comparison to a large positive
>>>>> +  // unsigned value like 0x8000000000000000, those would be encoded
>>>>> +  // with a y.len == 2.
>>>>> +  if (y.precision <= HOST_BITS_PER_WIDE_INT
>>>>> +      && y.len == 1)
>>>> I don't get this.  If y.precision <= HOST_BITS_PER_WIDE_INT then
>>>> y.len must be 1.  I realise that tree constants can be stored with
>>>> TREE_INT_CST_NUNITS > TYPE_PRECISION / HOST_BITS_PER_WIDE_INT
>>>> (so that extensions beyond TYPE_PRECISION are free).  But the
>>>> wide-int code is shielded from that by the ::decompose routine.
>>>> A wide int never has len > precision / HOST_BITS_PER_WIDE_INT.
>>>>
>>>> Thanks,
>>>> Richard
>>> I think that part of this is that neither mike nor i really understand
>>> how this stuff works anymore.
>>>
>>> in the old version where we had precision 0, we would wait to
>>> canonicalize a constant or a simple integer until we saw what the
>>> precision of the other operand was.  That was what precision 0 meant.
>>> it was kind of like what you are now proposing with this new trait,
>>> but for the reason that we actually did not know what to do rather
>>> than some concern about escapement.
>>>
>>> What i do not understand is what happens when you bring in an integer
>>> variable that is an unsigned HOST_WIDE_INT with the top bit set.  In
>>> the precision 0 days, if the prec of the other side was 64 or less,
>>> the incoming integer took 1 hwi and if the precision was larger, it
>>> took two hwis.  The canonicalization happened late enough so that
>>> there was never a question.
>> Ah, I think I know what you mean.
The original implementation was:
>>
>> template <typename T>
>> static inline const HOST_WIDE_INT*
>> to_shwi1 (HOST_WIDE_INT *s, unsigned int *l, unsigned int *p, const T& x)
>> {
>>   s[0] = x;
>>   if (signedp (x)
>>       || sizeof (T) < sizeof (HOST_WIDE_INT)
>>       || ! top_bit_set (x))
>>     {
>>       *l = 1;
>>     }
>>   else
>>     {
>>       s[1] = 0;
>>       *l = 2;
>>     }
>>   *p = 0;
>>   return s;
>> }
>>
>> void
>> wide_int_ro::check_precision (unsigned int *p1, unsigned int *p2,
>>                               bool check_equal ATTRIBUTE_UNUSED,
>>                               bool check_zero ATTRIBUTE_UNUSED)
>> {
>>   gcc_checking_assert ((!check_zero) || *p1 != 0 || *p2 != 0);
>>
>>   if (*p1 == 0)
>>     *p1 = *p2;
>>
>>   if (*p2 == 0)
>>     *p2 = *p1;
>>
>>   gcc_checking_assert ((!check_equal) || *p1 == *p2);
>> }
>>
>> /* Return true if C1 < C2 using signed comparisons.  */
>> template <typename T1, typename T2>
>> static inline bool
>> lts_p (const T1 &c1, const T2 &c2)
>> {
>>   bool result;
>>   HOST_WIDE_INT ws1[WIDE_INT_MAX_ELTS];
>>   HOST_WIDE_INT ws2[WIDE_INT_MAX_ELTS];
>>   const HOST_WIDE_INT *s1, *s2;  /* Returned data */
>>   unsigned int cl1, cl2;         /* array lengths */
>>   unsigned int p1, p2;           /* precisions */
>>
>>   s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>>   s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>>   check_precision (&p1, &p2, false, true);
>>
>>   if (p1 <= HOST_BITS_PER_WIDE_INT
>>       && p2 <= HOST_BITS_PER_WIDE_INT)
>>     {
>>       HOST_WIDE_INT x0 = sext_hwi (s1[0], p1);
>>       HOST_WIDE_INT x1 = sext_hwi (s2[0], p2);
>>       result = x0 < x1;
>>     }
>>   else
>>     result = lts_p_large (s1, cl1, p1, s2, cl2, p2);
>>
>> #ifdef DEBUG_WIDE_INT
>>   debug_vaa ("wide_int_ro:: %d = (%s lts_p %s\n", result, s1, cl1, p1, s2, cl2, p2);
>> #endif
>>   return result;
>> }

> you need to be careful about asserting too much from the old code.  the
> time line was:
>
> 1) we developed the stuff on x86-64
> 2) you did your patch
> 3) we ported everything to ppc and our private port.
>
> i really only became very sensitive to this issue during step 3 because
> the x86 does not exhibit these bugs.
>> So if you had a 128-bit wide_int and T == unsigned HOST_WIDE_INT,
>> this lts_p would zero-extend the unsigned HOST_WIDE_INT to 128 bits and
>> then do a signed comparison.
>>
>> The thing here is that the "check_equal" argument is false.
>> So if instead you were comparing a 128-bit wide_int with a 64-bit tree
>> constant, lts_p would treat that tree constant as a signed 64-bit number,
>> even if it was TYPE_UNSIGNED.  Similarly if you were comparing a 128-bit
>> tree constant and a 64-bit tree constant.  You also allowed a comparison
>> of a 128-bit wide_int with a 64-bit rtx, again treating the 64-bit rtx
>> as signed.
> I do not think that this is what check_equal meant because the 0
> precision was a wild card.  The 0 precision allowed the values to come
> in from simple vars and constants and be converted on the fly.

Right, that's what I mean.  I agree the 0 precision case did what you say
(the "128-bit wide_int and T == unsigned HOST_WIDE_INT" thing).  But the
last paragraph above was about what happened for !check_equal operations
like lts_p when p1 and p2 were different and both nonzero.  In those cases
we left both parameters in their original precision without requiring them
to be equal.  So...

>> So when doing the wi:: conversion, I'd interpreted the desired semantics
>> for lts_p as being "treat both inputs as signed without extending them",
>> since that's what the function did in most cases.  It seemed inconsistent
>> to treat a 64-bit unsigned primitive integer differently from a
>> 64-bit unsigned tree constant.  So at the moment, it doesn't matter
> i do not see this as inconsistently as you do.  if i have a 6 in the
> gcc source, i really mean that i want to compare that 6 with a 6 of any
> type that happens to appear in the user's source program.  My 6 has to
> be generic enough to match anything that the user might throw at it.
> This was richi's big argument against me having to write
> foo.lts_p (wide_int (6, foo.get_precision ())).
> The gcc source code writer needs
> his 6 to be special.  Richi was right!!!!

...using lts_p to compare (say) addr_wide_int with a:

  (unsigned HOST_WIDE_INT) -1

in GCC's source code behaved differently from using lts_p to compare
addr_wide_int with a:

  (unsigned HOST_WIDE_INT) -1

in the user's input and represented as a tree.  The former would be
zero-extended to 2 HWIs because it had precision 0.  The latter would stay
in its original HWI precision and be treated as signed.  That's the
inconsistency that bothered me.  (Of course, both cases treated
(unsigned HOST_WIDE_INT) -1 as signed when compared with a HWI-sized
input.)

All I'm saying is that I think the two should be treated the same way.
They are in the current version, but that way is to make the first case
behave like the second.  But...

>> whether any HOST_WIDE_INT input to lts_p is signed or unsigned, just like
>> it didn't and doesn't matter whether any tree input is signed or unsigned.
>>
>> If instead we want lts_p to operate to a single unified precision,
>> like eq_p did:
>>
>>   s1 = to_shwi1 (ws1, &cl1, &p1, c1);
>>   s2 = to_shwi1 (ws2, &cl2, &p2, c2);
>>   check_precision (&p1, &p2, true, false);
>>
>> and still does, then that's easy enough to change.  Then all extendable
>> inputs will be extended according to their natural signedness and then
>> compared signed.  Mixed-precision rtx comparisons would be forbidden.

...that could be changed.

>> But that's tangential to the point I was trying to make above,
>> which is that the rules about valid values for "len" and post-
>> check_precision "precision" are still the same as in your original
>> version.  So I think Mike's original patch was right and that this extra
>> "y.len == 1" check is redundant.  That would still be true if we changed
>> lts_p as above.
> the relative comparisons and the equality comparisons are different.
> The equality comparisons allowed the precision mismatch because there
> were places in the front end that hashed tree-csts and so it did
> comparisons on things whose types were not even similar.  We likely
> could have fixed this by changing the code around to do the type
> comparison first, but we chose to make the equality case more general -
> hence the false for the parameter for check_equal.  We never made the
> similar changes for the relative comparisons.

Hmm, it's the other way around, isn't it?  The eq_p code quoted above
passed "check_equal == true".  I.e. the eq_p inputs had to have the same
precision (after precision 0 was handled).  And they still do in the
current version.  It's the relative comparisons that allowed the inputs
to be different precisions (and still do).

> My point is that i do not see how that works now because there is no
> tying of the precisions from the two operands of binary operations.  I
> agree that we do not need the length test for the short circuit code,
> but we do need to be at a point where (unsigned HWI) 0xfffffffffffffff
> is canonicalized as two hwis if it is being compared with a number with
> a 128 precision.

This happens for equality and binary arithmetic through
FLEXIBLE_PRECISION.  E.g. the direct equivalent of the original:

>>   if (*p1 == 0)
>>     *p1 = *p2;
>>
>>   if (*p2 == 0)
>>     *p2 = *p1;

is:

  template <typename T1, typename T2>
  inline wide_int
  wi::int_traits <wide_int_storage>::get_binary_result (const T1 &x,
                                                        const T2 &y)
  {
    /* This shouldn't be used for two flexible-precision inputs.  */
    STATIC_ASSERT (wi::int_traits <T1>::precision_type != FLEXIBLE_PRECISION
                   || wi::int_traits <T2>::precision_type != FLEXIBLE_PRECISION);
    if (wi::int_traits <T1>::precision_type == FLEXIBLE_PRECISION)
      return wide_int::create (wi::get_precision (y));
    else
      return wide_int::create (wi::get_precision (x));
  }

The point of doing it this way was to force the conditions to be done at
compile time.

Thanks,
Richard
diff --git a/gcc/wide-int.h b/gcc/wide-int.h
index 2ff130b..738ae11 100644
--- a/gcc/wide-int.h
+++ b/gcc/wide-int.h
@@ -185,7 +185,9 @@ along with GCC; see the file COPYING3.  If not see
 
    assuming t is a int_cst.
 
-   Note that the bits above the precision are not defined and the
+   Note, the bits past the precision up to the nearest HOST_WDE_INT
+   boundary are defined to be copies of the top bit of the value,
+   however the bits above those defined bits not defined and the
    algorithms used here are careful not to depend on their value.  In
    particular, values that come in from rtx constants may have random
    bits.  When the precision is 0, all the bits in the LEN elements of
@@ -1283,7 +1285,7 @@ namespace wi
     static const bool host_dependent_precision = false;
     static unsigned int get_precision (const wi::hwi_with_prec &);
     static wi::storage_ref decompose (HOST_WIDE_INT *, unsigned int,
-				      const wi::hwi_with_prec &);
+                                      const wi::hwi_with_prec &);
   };
 }
@@ -1461,12 +1463,26 @@ wi::ne_p (const T1 &x, const T2 &y)
 inline bool
 wi::lts_p (const wide_int_ref &x, const wide_int_ref &y)
 {
-  if (x.precision <= HOST_BITS_PER_WIDE_INT
-      && y.precision <= HOST_BITS_PER_WIDE_INT)
-    return x.slow () < y.slow ();
-  else
-    return lts_p_large (x.val, x.len, x.precision, y.val, y.len,
-			y.precision);
+  // We optimize x < y, where y is 64 or fewer bits.
+  // We have to be careful to not allow comparison to a large positive
+  // unsigned value like 0x8000000000000000, those would be encoded
+  // with a y.len == 2.
+  if (y.precision <= HOST_BITS_PER_WIDE_INT
+      && y.len == 1)
+    {
+      // If x fits directly into a shwi, we can compare directly.
+      if (wi::fits_shwi_p (x))
+        return x.slow () < y.slow ();
+      // If x doesn't fit and is negative, then it must be more
+      // negative than any value in y, and hence smaller than y.
+      if (neg_p (x, SIGNED))
+        return true;
+      // If x is positve, then it must be larger than any value in y,
+      // and hence greater than y.
+      return false;
+    }
+  return lts_p_large (x.val, x.len, x.precision, y.val, y.len,
+                      y.precision);
 }
 
 /* Return true if X < Y when both are treated as unsigned values.  */