
Fix value numbering dealing with reverse byte order

Message ID 1579162426-9218-1-git-send-email-apinski@marvell.com
State New
Headers show
Series Fix value numbering dealing with reverse byte order

Commit Message

Andrew Pinski Jan. 16, 2020, 8:13 a.m. UTC
From: Andrew Pinski <apinski@marvell.com>

Hi,
  While working on the bit-field lowering pass, I came across this bug.
The IR looks like:
  VIEW_CONVERT_EXPR<unsigned long>(var1) = _12;
  _1 = BIT_FIELD_REF <var1, 64, 0>;

Where the BIT_FIELD_REF has REF_REVERSE_STORAGE_ORDER set on it
and var1's type has TYPE_REVERSE_STORAGE_ORDER set on it.
PRE/FRE would decide to propagate _12 into the BIT_FIELD_REF statement,
which would produce wrong code.
Note that _12 already has the correct byte order: bit-field lowering
removes the implicit byte swaps from the IR and makes them explicit,
to make later optimization easier.

This patch adds a check for storage_order_barrier_p on the lhs tree,
which returns true here because the lhs is a VIEW_CONVERT_EXPR of a
reverse-storage-order object.

gcc.c-torture/execute/pr86659-1.c was the testcase which showed the issue.

OK?  Bootstrapped and tested on x86_64-linux-gnu with no regression.

Thanks,
Andrew Pinski

ChangeLog:
	* tree-ssa-sccvn.c (vn_reference_lookup_3): Check lhs for
	!storage_order_barrier_p.

Change-Id: I7810de6fc4ff01e431033fa7f7f7b3ec95f67644
---
 gcc/tree-ssa-sccvn.c | 2 ++
 1 file changed, 2 insertions(+)

Comments

Richard Biener Jan. 16, 2020, 8:54 a.m. UTC | #1
On January 16, 2020 9:13:46 AM GMT+01:00, apinski@marvell.com wrote:
>From: Andrew Pinski <apinski@marvell.com>
>
>Hi,
>  While working on the bit-field lowering pass, I came across this bug.
>The IR looks like:
>  VIEW_CONVERT_EXPR<unsigned long>(var1) = _12;
>  _1 = BIT_FIELD_REF <var1, 64, 0>;
>
>Where the BIT_FIELD_REF has REF_REVERSE_STORAGE_ORDER set on it
>and var1's type has TYPE_REVERSE_STORAGE_ORDER set on it.
>PRE/FRE would decide to propagate _12 into the BIT_FIELD_REF statement,
>which would produce wrong code.
>Note that _12 already has the correct byte order: bit-field lowering
>removes the implicit byte swaps from the IR and makes them explicit,
>to make later optimization easier.
>
>This patch adds a check for storage_order_barrier_p on the lhs tree,
>which returns true here because the lhs is a VIEW_CONVERT_EXPR of a
>reverse-storage-order object.
>
>gcc.c-torture/execute/pr86659-1.c was the testcase which showed the
>issue.
>
>OK?  Bootstrapped and tested on x86_64-linux-gnu with no regression.

OK. 

Richard. 


Patch

diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index 3b27c50..4d13015 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -2593,6 +2593,7 @@  vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
 					 &offset2, &size2, &maxsize2, &reverse);
       if (base2
 	  && !reverse
+	  && !storage_order_barrier_p (lhs)
 	  && known_eq (maxsize2, size2)
 	  && multiple_p (size2, BITS_PER_UNIT)
 	  && multiple_p (offset2, BITS_PER_UNIT)
@@ -2695,6 +2696,7 @@  vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
 					 &offset2, &size2, &maxsize2, &reverse);
       tree def_rhs = gimple_assign_rhs1 (def_stmt);
       if (!reverse
+	  && !storage_order_barrier_p (lhs)
 	  && known_size_p (maxsize2)
 	  && known_eq (maxsize2, size2)
 	  && adjust_offsets_for_equal_base_address (base, &offset,