Message ID | 000001d10bbb$765ef6e0$631ce4a0$@arm.com |
---|---|
State | New |
On Wed, Oct 21, 2015 at 6:46 AM, Bin Cheng <bin.cheng@arm.com> wrote:
> Hi,
> As analyzed in PR67921, I think the issue is caused by fold_binary_loc which
> folds:
>   4 - (sizetype) &c - (sizetype) ((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4))
> into below form:
>   ((sizetype) -((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4)) - (sizetype) &c) + 4
>
> Look, the minus sizetype expression is folded as a negative pointer expression,
> which seems incorrect.  Apart from this, the direct reason of this ICE is in
> CHREC because of an oversight.  In general CHREC supports NEGATE_EXPR for
> CHREC; the only problem is it uses pointer type for CHREC_RIGHT, rather than
> sizetype, when building a pointer type CHREC.
>
> This simple patch fixes the ICE issue.  Bootstrapped and tested on x86 & x86_64.
>
> Is it OK?

Hmm, I think not - we shouldn't ever get pointer-typed multiplications.
Did you track down which is the bogus fold transform (I agree the result
above is bogus)?  It's probably related to STRIP_NOPS stripping sizetype
conversions from pointers, so we might get split_tree to build such a
negate.  Note that split_tree strips (sign!) nops itself and thus should
probably simply receive op0 and op1 instead of arg0 and arg1.

I'm testing

@@ -9505,8 +9523,8 @@ fold_binary_loc (location_t loc,
 	 then the result with variables.  This increases the chances of
 	 literals being recombined later and of generating relocatable
 	 expressions for the sum of a constant and literal.  */
-      var0 = split_tree (arg0, code, &con0, &lit0, &minus_lit0, 0);
-      var1 = split_tree (arg1, code, &con1, &lit1, &minus_lit1,
+      var0 = split_tree (op0, code, &con0, &lit0, &minus_lit0, 0);
+      var1 = split_tree (op1, code, &con1, &lit1, &minus_lit1,
 			 code == MINUS_EXPR);

       /* Recombine MINUS_EXPR operands by using PLUS_EXPR.  */

which fixes the testcase for me.

Richard.

> Note, I do think the associate logic in fold_binary_loc needs a fix, but that
> should be another patch.
>
>
> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>
>     PR tree-optimization/67921
>     * tree-chrec.c (chrec_fold_multiply): Use sizetype for CHREC_RIGHT
>     if type is pointer type.
>
> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>
>     PR tree-optimization/67921
>     * gcc.dg/ubsan/pr67921.c: New test.
On Wed, Oct 21, 2015 at 5:15 PM, Richard Biener <richard.guenther@gmail.com> wrote:
> On Wed, Oct 21, 2015 at 6:46 AM, Bin Cheng <bin.cheng@arm.com> wrote:
>> Hi,
>> As analyzed in PR67921, I think the issue is caused by fold_binary_loc which
>> folds:
>>   4 - (sizetype) &c - (sizetype) ((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4))
>> into below form:
>>   ((sizetype) -((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4)) - (sizetype) &c) + 4
>>
>> Look, the minus sizetype expression is folded as a negative pointer expression,
>> which seems incorrect.  Apart from this, the direct reason of this ICE is in
>> CHREC because of an oversight.  In general CHREC supports NEGATE_EXPR for
>> CHREC; the only problem is it uses pointer type for CHREC_RIGHT, rather than
>> sizetype, when building a pointer type CHREC.
>>
>> This simple patch fixes the ICE issue.  Bootstrapped and tested on x86 & x86_64.
>>
>> Is it OK?
>
> Hmm, I think not - we shouldn't ever get pointer-typed multiplications.
> Did you track down which is the bogus fold transform (I agree the result
> above is bogus)?  It's probably related to STRIP_NOPS stripping sizetype
> conversions from pointers, so we might get split_tree to build such a
> negate.  Note that split_tree strips (sign!) nops itself and thus should
> probably simply receive op0 and op1 instead of arg0 and arg1.

Yes, I was going to send a similar patch for the fold stuff.  Just thought
it might be useful to support POINTER chrec in *_multiply.  I will drop
this and let you test yours.

Thanks,
bin

> I'm testing
>
> @@ -9505,8 +9523,8 @@ fold_binary_loc (location_t loc,
>  	 then the result with variables.  This increases the chances of
>  	 literals being recombined later and of generating relocatable
>  	 expressions for the sum of a constant and literal.  */
> -      var0 = split_tree (arg0, code, &con0, &lit0, &minus_lit0, 0);
> -      var1 = split_tree (arg1, code, &con1, &lit1, &minus_lit1,
> +      var0 = split_tree (op0, code, &con0, &lit0, &minus_lit0, 0);
> +      var1 = split_tree (op1, code, &con1, &lit1, &minus_lit1,
>  			 code == MINUS_EXPR);
>
>        /* Recombine MINUS_EXPR operands by using PLUS_EXPR.  */
>
> which fixes the testcase for me.
>
> Richard.
>
>> Note, I do think the associate logic in fold_binary_loc needs a fix, but that
>> should be another patch.
>>
>>
>> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>>
>>     PR tree-optimization/67921
>>     * tree-chrec.c (chrec_fold_multiply): Use sizetype for CHREC_RIGHT
>>     if type is pointer type.
>>
>> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>>
>>     PR tree-optimization/67921
>>     * gcc.dg/ubsan/pr67921.c: New test.
On Wed, Oct 21, 2015 at 11:39 AM, Bin.Cheng <amker.cheng@gmail.com> wrote:
> On Wed, Oct 21, 2015 at 5:15 PM, Richard Biener
> <richard.guenther@gmail.com> wrote:
>> On Wed, Oct 21, 2015 at 6:46 AM, Bin Cheng <bin.cheng@arm.com> wrote:
>>> Hi,
>>> As analyzed in PR67921, I think the issue is caused by fold_binary_loc which
>>> folds:
>>>   4 - (sizetype) &c - (sizetype) ((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4))
>>> into below form:
>>>   ((sizetype) -((int *) p1_8(D) + ((sizetype) a_23 * 24 + 4)) - (sizetype) &c) + 4
>>>
>>> Look, the minus sizetype expression is folded as a negative pointer expression,
>>> which seems incorrect.  Apart from this, the direct reason of this ICE is in
>>> CHREC because of an oversight.  In general CHREC supports NEGATE_EXPR for
>>> CHREC; the only problem is it uses pointer type for CHREC_RIGHT, rather than
>>> sizetype, when building a pointer type CHREC.
>>>
>>> This simple patch fixes the ICE issue.  Bootstrapped and tested on x86 & x86_64.
>>>
>>> Is it OK?
>>
>> Hmm, I think not - we shouldn't ever get pointer-typed multiplications.
>> Did you track down which is the bogus fold transform (I agree the result
>> above is bogus)?  It's probably related to STRIP_NOPS stripping sizetype
>> conversions from pointers, so we might get split_tree to build such a
>> negate.  Note that split_tree strips (sign!) nops itself and thus should
>> probably simply receive op0 and op1 instead of arg0 and arg1.
>
> Yes, I was going to send a similar patch for the fold stuff.  Just thought
> it might be useful to support POINTER chrec in *_multiply.  I will drop
> this and let you test yours.

The patch regresses

Running target unix/
FAIL: gcc.dg/pr52578.c scan-tree-dump-times original "return 2;" 2
FAIL: gcc.dg/strict-overflow-5.c scan-tree-dump optimized "return 3"
FAIL: gcc.dg/tree-ssa/loop-15.c scan-tree-dump-times optimized "\\\\+" 0
FAIL: gcc.dg/tree-ssa/loop-15.c scan-tree-dump-times optimized "n_. \\\\* n_." 1
FAIL: gcc.dg/vect/vect-align-3.c -flto -ffat-lto-objects scan-tree-dump-not vect "vect_do_peeling_for_loop_bound"
FAIL: gcc.dg/vect/vect-align-3.c scan-tree-dump-not vect "vect_do_peeling_for_loop_bound"

and for -m32

FAIL: gcc.target/i386/pr67985-3.c scan-assembler movd[ \\t]%xmm[0-7], %eax
FAIL: gcc.target/i386/pr67985-3.c scan-assembler mulss

as well.  So it requires some more complicated handling.  It might need
the negation code in split_tree to be performed only implicitly during
associate_trees (then on the correct type).  But that requires (quite)
some refactoring.  I suggest a

  struct tree_with_negate { tree t; bool negate_p; }

to pass this along.

Not for me at this point (looks like I have quite a few patches to review ;))

Richard.

> Thanks,
> bin
>>
>> I'm testing
>>
>> @@ -9505,8 +9523,8 @@ fold_binary_loc (location_t loc,
>>  	 then the result with variables.  This increases the chances of
>>  	 literals being recombined later and of generating relocatable
>>  	 expressions for the sum of a constant and literal.  */
>> -      var0 = split_tree (arg0, code, &con0, &lit0, &minus_lit0, 0);
>> -      var1 = split_tree (arg1, code, &con1, &lit1, &minus_lit1,
>> +      var0 = split_tree (op0, code, &con0, &lit0, &minus_lit0, 0);
>> +      var1 = split_tree (op1, code, &con1, &lit1, &minus_lit1,
>>  			 code == MINUS_EXPR);
>>
>>        /* Recombine MINUS_EXPR operands by using PLUS_EXPR.  */
>>
>> which fixes the testcase for me.
>>
>> Richard.
>>
>>> Note, I do think the associate logic in fold_binary_loc needs a fix, but that
>>> should be another patch.
>>>
>>>
>>> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>>>
>>>     PR tree-optimization/67921
>>>     * tree-chrec.c (chrec_fold_multiply): Use sizetype for CHREC_RIGHT
>>>     if type is pointer type.
>>>
>>> 2015-10-20  Bin Cheng  <bin.cheng@arm.com>
>>>
>>>     PR tree-optimization/67921
>>>     * gcc.dg/ubsan/pr67921.c: New test.
diff --git a/gcc/tree-chrec.c b/gcc/tree-chrec.c
index 649c9fe..ef7b70b 100644
--- a/gcc/tree-chrec.c
+++ b/gcc/tree-chrec.c
@@ -436,7 +436,8 @@ chrec_fold_multiply (tree type,
 	  return build_polynomial_chrec
 	    (CHREC_VARIABLE (op0),
 	     chrec_fold_multiply (type, CHREC_LEFT (op0), op1),
-	     chrec_fold_multiply (type, CHREC_RIGHT (op0), op1));
+	     chrec_fold_multiply (POINTER_TYPE_P (type) ? sizetype : type,
+				  CHREC_RIGHT (op0), op1));
 	}

 	CASE_CONVERT:
@@ -459,7 +460,8 @@ chrec_fold_multiply (tree type,
 	return build_polynomial_chrec
 	  (CHREC_VARIABLE (op1),
 	   chrec_fold_multiply (type, CHREC_LEFT (op1), op0),
-	   chrec_fold_multiply (type, CHREC_RIGHT (op1), op0));
+	   chrec_fold_multiply (POINTER_TYPE_P (type) ? sizetype : type,
+				CHREC_RIGHT (op1), op0));

       CASE_CONVERT:
 	if (tree_contains_chrecs (op1, NULL))
diff --git a/gcc/testsuite/gcc.dg/ubsan/pr67921.c b/gcc/testsuite/gcc.dg/ubsan/pr67921.c
new file mode 100644
index 0000000..5e7d707
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/ubsan/pr67921.c
@@ -0,0 +1,22 @@
+/* { dg-do compile } */
+/* { dg-options "-fsanitize=undefined" } */
+
+typedef struct {
+  int a;
+  int arr[][6];
+}st;
+
+void bar (int);
+void foo (st *p)
+{
+  int a;
+  for (; a < 2; a++)
+    for (; p->a;)
+      {
+	int *b = p->arr[a];
+	int c[66];
+	int j = 0;
+	for (; j < 56; j++)
+	  bar (b[j] - c[j]);
+      }
+}