From patchwork Thu Mar 16 03:16:10 2017
X-Patchwork-Submitter: Jeff Law
X-Patchwork-Id: 739527
From: Jeff Law
Subject: [PATCH 1/5][P1][tree-optimization/71437] Refactoring to avoid duplication
To: gcc-patches
Date: Wed, 15 Mar 2017 21:16:10 -0600

As mentioned in the cover message, this is strictly a refactoring patch.

lookup_avail_expr should always have been a class member of the available expression stack. That (of course) allows it to be used by any instance of the class (which we'll take advantage of later from within VRP's instance of jump threading). We bring along vuse_eq as a dependency. Similarly for record_cond, which knows some internals of the hash table implementation.
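As a rough sketch of the shape of this refactor (illustrative C++ only, using toy stand-in names and types, not the actual GCC data structures):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Toy model of the refactor.  The change turns a free function that
// took the table as an explicit argument,
//   lookup_avail_expr (stmt, insert, avail_exprs_stack, tbaa_p)
// into a member function,
//   avail_exprs_stack->lookup_avail_expr (stmt, insert, tbaa_p)
// so that any instance of the stack -- e.g. one owned by VRP's jump
// threader -- can do lookups without callers threading the table through.
class avail_exprs_stack_sketch
{
public:
  // Return the recorded LHS for EXPR if seen before; otherwise
  // optionally record EXPR and return "" (the NULL_TREE analogue).
  std::string lookup_avail_expr (const std::string &expr, bool insert)
  {
    auto it = m_avail_exprs.find (expr);
    if (it != m_avail_exprs.end ())
      return it->second;           // Redundant computation: reuse the LHS.
    if (insert)
      {
        std::string lhs = "_t" + std::to_string (m_avail_exprs.size ());
        m_avail_exprs.emplace (expr, lhs);
        m_unwind.push_back (expr); // Remembered so it can be removed when
                                   // we finish this block and its children.
      }
    return "";
  }

private:
  std::unordered_map<std::string, std::string> m_avail_exprs;
  std::vector<std::string> m_unwind;
};
```

The first lookup of an expression records it and returns nothing; a later lookup of the same expression returns the previously recorded LHS, which is the redundancy-elimination step DOM builds on.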
record_conditions doesn't directly use the hash tables, but builds up a vector of conditions in hashable_expr form. We're going to want to use this within the VRP threading instance as well. We bring along build_and_record_new_cond as a dependency.

These routines all move with no significant change in their implementation, except for avoiding dom_valueize, which we open code in the one spot where it was previously used.

This has been bootstrapped and regression tested on x86_64-linux-gnu. Installing on the trunk. I'll be doing a bootstrap and regression test of patches #1 and #2 tomorrow on ppc64le for additional testing coverage.

Jeff

	PR tree-optimization/71437
	* tree-ssa-dom.c (struct cond_equivalence): Moved from here into
	tree-ssa-scopedtables.
	(lookup_avail_expr, build_and_record_new_cond): Likewise.
	(record_conditions, record_cond, vuse_eq): Likewise.
	(record_edge_info): Adjust to API tweak of record_conditions.
	(simplify_stmt_for_jump_threading): Similarly for lookup_avail_expr.
	(record_temporary_equivalences, optimize_stmt): Likewise.
	(eliminate_redundant_computations): Likewise.
	(record_equivalences_from_stmt): Likewise.
	* tree-ssa-scopedtables.c: Include options.h and params.h.
	(vuse_eq): New function, moved from tree-ssa-dom.c.
	(build_and_record_new_cond): Likewise.
	(record_conditions): Likewise.  Accept vector of conditions rather
	than edge_equivalence structure for the first argument.
	(avail_exprs_stack::lookup_avail_expr): New member function, moved
	from tree-ssa-dom.c.
	(avail_exprs_stack::record_cond): Likewise.
	* tree-ssa-scopedtables.h (struct cond_equivalence): Moved here
	from tree-ssa-dom.c.
	(avail_exprs_stack): Add new member functions lookup_avail_expr
	and record_cond.
	(record_conditions): Declare.

diff --git a/gcc/tree-ssa-dom.c b/gcc/tree-ssa-dom.c index 2ec3f97..ad71269 100644 --- a/gcc/tree-ssa-dom.c +++ b/gcc/tree-ssa-dom.c @@ -48,15 +48,6 @@ along with GCC; see the file COPYING3.
If not see /* This file implements optimizations on the dominator tree. */ -/* Structure for recording known values of a conditional expression - at the exits from its block. */ - -struct cond_equivalence -{ - struct hashable_expr cond; - tree value; -}; - /* Structure for recording edge equivalences. Computing and storing the edge equivalences instead of creating @@ -103,9 +94,6 @@ static struct opt_stats_d opt_stats; static edge optimize_stmt (basic_block, gimple_stmt_iterator, class const_and_copies *, class avail_exprs_stack *); -static tree lookup_avail_expr (gimple *, bool, class avail_exprs_stack *, - bool = true); -static void record_cond (cond_equivalence *, class avail_exprs_stack *); static void record_equality (tree, tree, class const_and_copies *); static void record_equivalences_from_phis (basic_block); static void record_equivalences_from_incoming_edge (basic_block, @@ -175,148 +163,6 @@ free_all_edge_infos (void) } } -/* Build a cond_equivalence record indicating that the comparison - CODE holds between operands OP0 and OP1 and push it to **P. */ - -static void -build_and_record_new_cond (enum tree_code code, - tree op0, tree op1, - vec *p, - bool val = true) -{ - cond_equivalence c; - struct hashable_expr *cond = &c.cond; - - gcc_assert (TREE_CODE_CLASS (code) == tcc_comparison); - - cond->type = boolean_type_node; - cond->kind = EXPR_BINARY; - cond->ops.binary.op = code; - cond->ops.binary.opnd0 = op0; - cond->ops.binary.opnd1 = op1; - - c.value = val ? boolean_true_node : boolean_false_node; - p->safe_push (c); -} - -/* Record that COND is true and INVERTED is false into the edge information - structure. Also record that any conditions dominated by COND are true - as well. - - For example, if a < b is true, then a <= b must also be true.
*/ - -static void -record_conditions (struct edge_info *edge_info, tree cond, tree inverted) -{ - tree op0, op1; - cond_equivalence c; - - if (!COMPARISON_CLASS_P (cond)) - return; - - op0 = TREE_OPERAND (cond, 0); - op1 = TREE_OPERAND (cond, 1); - - switch (TREE_CODE (cond)) - { - case LT_EXPR: - case GT_EXPR: - if (FLOAT_TYPE_P (TREE_TYPE (op0))) - { - build_and_record_new_cond (ORDERED_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (LTGT_EXPR, op0, op1, - &edge_info->cond_equivalences); - } - - build_and_record_new_cond ((TREE_CODE (cond) == LT_EXPR - ? LE_EXPR : GE_EXPR), - op0, op1, &edge_info->cond_equivalences); - build_and_record_new_cond (NE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (EQ_EXPR, op0, op1, - &edge_info->cond_equivalences, false); - break; - - case GE_EXPR: - case LE_EXPR: - if (FLOAT_TYPE_P (TREE_TYPE (op0))) - { - build_and_record_new_cond (ORDERED_EXPR, op0, op1, - &edge_info->cond_equivalences); - } - break; - - case EQ_EXPR: - if (FLOAT_TYPE_P (TREE_TYPE (op0))) - { - build_and_record_new_cond (ORDERED_EXPR, op0, op1, - &edge_info->cond_equivalences); - } - build_and_record_new_cond (LE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (GE_EXPR, op0, op1, - &edge_info->cond_equivalences); - break; - - case UNORDERED_EXPR: - build_and_record_new_cond (NE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNLE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNGE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNEQ_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNLT_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNGT_EXPR, op0, op1, - &edge_info->cond_equivalences); - break; - - case UNLT_EXPR: - case UNGT_EXPR: - build_and_record_new_cond ((TREE_CODE (cond) == UNLT_EXPR - ? 
UNLE_EXPR : UNGE_EXPR), - op0, op1, &edge_info->cond_equivalences); - build_and_record_new_cond (NE_EXPR, op0, op1, - &edge_info->cond_equivalences); - break; - - case UNEQ_EXPR: - build_and_record_new_cond (UNLE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (UNGE_EXPR, op0, op1, - &edge_info->cond_equivalences); - break; - - case LTGT_EXPR: - build_and_record_new_cond (NE_EXPR, op0, op1, - &edge_info->cond_equivalences); - build_and_record_new_cond (ORDERED_EXPR, op0, op1, - &edge_info->cond_equivalences); - break; - - default: - break; - } - - /* Now store the original true and false conditions into the first - two slots. */ - initialize_expr_from_cond (cond, &c.cond); - c.value = boolean_true_node; - edge_info->cond_equivalences.safe_push (c); - - /* It is possible for INVERTED to be the negation of a comparison, - and not a valid RHS or GIMPLE_COND condition. This happens because - invert_truthvalue may return such an expression when asked to invert - a floating-point comparison. These comparisons are not assumed to - obey the trichotomy law. */ - initialize_expr_from_cond (inverted, &c.cond); - c.value = boolean_false_node; - edge_info->cond_equivalences.safe_push (c); -} - /* We have finished optimizing BB, record any information implied by taking a specific outgoing edge from BB. 
*/ @@ -435,7 +281,7 @@ record_edge_info (basic_block bb) struct edge_info *edge_info; edge_info = allocate_edge_info (true_edge); - record_conditions (edge_info, cond, inverted); + record_conditions (&edge_info->cond_equivalences, cond, inverted); if (can_infer_simple_equiv && code == EQ_EXPR) { @@ -444,7 +290,7 @@ record_edge_info (basic_block bb) } edge_info = allocate_edge_info (false_edge); - record_conditions (edge_info, inverted, cond); + record_conditions (&edge_info->cond_equivalences, inverted, cond); if (can_infer_simple_equiv && TREE_CODE (inverted) == EQ_EXPR) { @@ -465,7 +311,7 @@ record_edge_info (basic_block bb) struct edge_info *edge_info; edge_info = allocate_edge_info (true_edge); - record_conditions (edge_info, cond, inverted); + record_conditions (&edge_info->cond_equivalences, cond, inverted); if (can_infer_simple_equiv && code == EQ_EXPR) { @@ -474,7 +320,7 @@ record_edge_info (basic_block bb) } edge_info = allocate_edge_info (false_edge); - record_conditions (edge_info, inverted, cond); + record_conditions (&edge_info->cond_equivalences, inverted, cond); if (can_infer_simple_equiv && TREE_CODE (inverted) == EQ_EXPR) { @@ -760,7 +606,7 @@ simplify_stmt_for_jump_threading (gimple *stmt, gimple *within_stmt ATTRIBUTE_UNUSED, class avail_exprs_stack *avail_exprs_stack) { - return lookup_avail_expr (stmt, false, avail_exprs_stack); + return avail_exprs_stack->lookup_avail_expr (stmt, false, true); } /* Valueize hook for gimple_fold_stmt_to_constant_1. */ @@ -865,7 +711,7 @@ record_temporary_equivalences (edge e, /* If we have 0 = COND or 1 = COND equivalences, record them into our expression hash tables. */ for (i = 0; edge_info->cond_equivalences.iterate (i, &eq); ++i) - record_cond (eq, avail_exprs_stack); + avail_exprs_stack->record_cond (eq); tree lhs = edge_info->lhs; if (!lhs || TREE_CODE (lhs) != SSA_NAME) @@ -1105,29 +951,6 @@ dump_dominator_optimization_stats (FILE *file, } -/* Enter condition equivalence P into AVAIL_EXPRS_HASH. 
- - This indicates that a conditional expression has a known - boolean value. */ - -static void -record_cond (cond_equivalence *p, - class avail_exprs_stack *avail_exprs_stack) -{ - class expr_hash_elt *element = new expr_hash_elt (&p->cond, p->value); - expr_hash_elt **slot; - - hash_table *avail_exprs = avail_exprs_stack->avail_exprs (); - slot = avail_exprs->find_slot_with_hash (element, element->hash (), INSERT); - if (*slot == NULL) - { - *slot = element; - avail_exprs_stack->record_expr (element, NULL, '1'); - } - else - delete element; -} - /* Similarly, but assume that X and Y are the two operands of an EQ_EXPR. This constrains the cases in which we may treat this as assignment. */ @@ -1426,7 +1249,7 @@ eliminate_redundant_computations (gimple_stmt_iterator* gsi, insert = false; /* Check if the expression has been computed before. */ - cached_lhs = lookup_avail_expr (stmt, insert, avail_exprs_stack); + cached_lhs = avail_exprs_stack->lookup_avail_expr (stmt, insert, true); opt_stats.num_exprs_considered++; @@ -1611,7 +1434,7 @@ record_equivalences_from_stmt (gimple *stmt, int may_optimize_p, /* Finally enter the statement into the available expression table. */ - lookup_avail_expr (new_stmt, true, avail_exprs_stack); + avail_exprs_stack->lookup_avail_expr (new_stmt, true, true); } } @@ -1865,8 +1688,8 @@ optimize_stmt (basic_block bb, gimple_stmt_iterator si, else new_stmt = gimple_build_assign (rhs, lhs); gimple_set_vuse (new_stmt, gimple_vuse (stmt)); - cached_lhs = lookup_avail_expr (new_stmt, false, avail_exprs_stack, - false); + cached_lhs = avail_exprs_stack->lookup_avail_expr (new_stmt, false, + false); if (cached_lhs && rhs == cached_lhs) { @@ -1942,125 +1765,3 @@ optimize_stmt (basic_block bb, gimple_stmt_iterator si, } return retval; } - -/* Helper for walk_non_aliased_vuses. Determine if we arrived at - the desired memory state. 
*/ - -static void * -vuse_eq (ao_ref *, tree vuse1, unsigned int cnt, void *data) -{ - tree vuse2 = (tree) data; - if (vuse1 == vuse2) - return data; - - /* This bounds the stmt walks we perform on reference lookups - to O(1) instead of O(N) where N is the number of dominating - stores leading to a candidate. We re-use the SCCVN param - for this as it is basically the same complexity. */ - if (cnt > (unsigned) PARAM_VALUE (PARAM_SCCVN_MAX_ALIAS_QUERIES_PER_ACCESS)) - return (void *)-1; - - return NULL; -} - -/* Search for an existing instance of STMT in the AVAIL_EXPRS_STACK table. - If found, return its LHS. Otherwise insert STMT in the table and - return NULL_TREE. - - Also, when an expression is first inserted in the table, it is also - is also added to AVAIL_EXPRS_STACK, so that it can be removed when - we finish processing this block and its children. */ - -static tree -lookup_avail_expr (gimple *stmt, bool insert, - class avail_exprs_stack *avail_exprs_stack, bool tbaa_p) -{ - expr_hash_elt **slot; - tree lhs; - - /* Get LHS of phi, assignment, or call; else NULL_TREE. */ - if (gimple_code (stmt) == GIMPLE_PHI) - lhs = gimple_phi_result (stmt); - else - lhs = gimple_get_lhs (stmt); - - class expr_hash_elt element (stmt, lhs); - - if (dump_file && (dump_flags & TDF_DETAILS)) - { - fprintf (dump_file, "LKUP "); - element.print (dump_file); - } - - /* Don't bother remembering constant assignments and copy operations. - Constants and copy operations are handled by the constant/copy propagator - in optimize_stmt. */ - if (element.expr()->kind == EXPR_SINGLE - && (TREE_CODE (element.expr()->ops.single.rhs) == SSA_NAME - || is_gimple_min_invariant (element.expr()->ops.single.rhs))) - return NULL_TREE; - - /* Finally try to find the expression in the main expression hash table. */ - hash_table *avail_exprs = avail_exprs_stack->avail_exprs (); - slot = avail_exprs->find_slot (&element, (insert ? 
INSERT : NO_INSERT)); - if (slot == NULL) - { - return NULL_TREE; - } - else if (*slot == NULL) - { - class expr_hash_elt *element2 = new expr_hash_elt (element); - *slot = element2; - - avail_exprs_stack->record_expr (element2, NULL, '2'); - return NULL_TREE; - } - - /* If we found a redundant memory operation do an alias walk to - check if we can re-use it. */ - if (gimple_vuse (stmt) != (*slot)->vop ()) - { - tree vuse1 = (*slot)->vop (); - tree vuse2 = gimple_vuse (stmt); - /* If we have a load of a register and a candidate in the - hash with vuse1 then try to reach its stmt by walking - up the virtual use-def chain using walk_non_aliased_vuses. - But don't do this when removing expressions from the hash. */ - ao_ref ref; - if (!(vuse1 && vuse2 - && gimple_assign_single_p (stmt) - && TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME - && (ao_ref_init (&ref, gimple_assign_rhs1 (stmt)), - ref.base_alias_set = ref.ref_alias_set = tbaa_p ? -1 : 0, true) - && walk_non_aliased_vuses (&ref, vuse2, - vuse_eq, NULL, NULL, vuse1) != NULL)) - { - if (insert) - { - class expr_hash_elt *element2 = new expr_hash_elt (element); - - /* Insert the expr into the hash by replacing the current - entry and recording the value to restore in the - avail_exprs_stack. */ - avail_exprs_stack->record_expr (element2, *slot, '2'); - *slot = element2; - } - return NULL_TREE; - } - } - - /* Extract the LHS of the assignment so that it can be used as the current - definition of another variable. */ - lhs = (*slot)->lhs (); - - lhs = dom_valueize (lhs); - - if (dump_file && (dump_flags & TDF_DETAILS)) - { - fprintf (dump_file, "FIND: "); - print_generic_expr (dump_file, lhs, 0); - fprintf (dump_file, "\n"); - } - - return lhs; -} diff --git a/gcc/tree-ssa-scopedtables.c b/gcc/tree-ssa-scopedtables.c index e5de6a5..3e72333 100644 --- a/gcc/tree-ssa-scopedtables.c +++ b/gcc/tree-ssa-scopedtables.c @@ -33,6 +33,8 @@ along with GCC; see the file COPYING3. 
If not see #include "tree-eh.h" #include "internal-fn.h" #include "tree-dfa.h" +#include "options.h" +#include "params.h" static bool hashable_expr_equal_p (const struct hashable_expr *, const struct hashable_expr *); @@ -94,6 +96,153 @@ avail_exprs_stack::record_expr (class expr_hash_elt *elt1, m_stack.safe_push (std::pair (elt1, elt2)); } +/* Helper for walk_non_aliased_vuses. Determine if we arrived at + the desired memory state. */ + +static void * +vuse_eq (ao_ref *, tree vuse1, unsigned int cnt, void *data) +{ + tree vuse2 = (tree) data; + if (vuse1 == vuse2) + return data; + + /* This bounds the stmt walks we perform on reference lookups + to O(1) instead of O(N) where N is the number of dominating + stores leading to a candidate. We re-use the SCCVN param + for this as it is basically the same complexity. */ + if (cnt > (unsigned) PARAM_VALUE (PARAM_SCCVN_MAX_ALIAS_QUERIES_PER_ACCESS)) + return (void *)-1; + + return NULL; +} + +/* Search for an existing instance of STMT in the AVAIL_EXPRS_STACK table. + If found, return its LHS. Otherwise insert STMT in the table and + return NULL_TREE. + + Also, when an expression is first inserted in the table, it is also + is also added to AVAIL_EXPRS_STACK, so that it can be removed when + we finish processing this block and its children. */ + +tree +avail_exprs_stack::lookup_avail_expr (gimple *stmt, bool insert, bool tbaa_p) +{ + expr_hash_elt **slot; + tree lhs; + + /* Get LHS of phi, assignment, or call; else NULL_TREE. */ + if (gimple_code (stmt) == GIMPLE_PHI) + lhs = gimple_phi_result (stmt); + else + lhs = gimple_get_lhs (stmt); + + class expr_hash_elt element (stmt, lhs); + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "LKUP "); + element.print (dump_file); + } + + /* Don't bother remembering constant assignments and copy operations. + Constants and copy operations are handled by the constant/copy propagator + in optimize_stmt. 
*/ + if (element.expr()->kind == EXPR_SINGLE + && (TREE_CODE (element.expr()->ops.single.rhs) == SSA_NAME + || is_gimple_min_invariant (element.expr()->ops.single.rhs))) + return NULL_TREE; + + /* Finally try to find the expression in the main expression hash table. */ + slot = m_avail_exprs->find_slot (&element, (insert ? INSERT : NO_INSERT)); + if (slot == NULL) + { + return NULL_TREE; + } + else if (*slot == NULL) + { + class expr_hash_elt *element2 = new expr_hash_elt (element); + *slot = element2; + + record_expr (element2, NULL, '2'); + return NULL_TREE; + } + + /* If we found a redundant memory operation do an alias walk to + check if we can re-use it. */ + if (gimple_vuse (stmt) != (*slot)->vop ()) + { + tree vuse1 = (*slot)->vop (); + tree vuse2 = gimple_vuse (stmt); + /* If we have a load of a register and a candidate in the + hash with vuse1 then try to reach its stmt by walking + up the virtual use-def chain using walk_non_aliased_vuses. + But don't do this when removing expressions from the hash. */ + ao_ref ref; + if (!(vuse1 && vuse2 + && gimple_assign_single_p (stmt) + && TREE_CODE (gimple_assign_lhs (stmt)) == SSA_NAME + && (ao_ref_init (&ref, gimple_assign_rhs1 (stmt)), + ref.base_alias_set = ref.ref_alias_set = tbaa_p ? -1 : 0, true) + && walk_non_aliased_vuses (&ref, vuse2, + vuse_eq, NULL, NULL, vuse1) != NULL)) + { + if (insert) + { + class expr_hash_elt *element2 = new expr_hash_elt (element); + + /* Insert the expr into the hash by replacing the current + entry and recording the value to restore in the + avail_exprs_stack. */ + record_expr (element2, *slot, '2'); + *slot = element2; + } + return NULL_TREE; + } + } + + /* Extract the LHS of the assignment so that it can be used as the current + definition of another variable. */ + lhs = (*slot)->lhs (); + + /* Valueize the result. 
*/ + if (TREE_CODE (lhs) == SSA_NAME) + { + tree tem = SSA_NAME_VALUE (lhs); + if (tem) + lhs = tem; + } + + if (dump_file && (dump_flags & TDF_DETAILS)) + { + fprintf (dump_file, "FIND: "); + print_generic_expr (dump_file, lhs, 0); + fprintf (dump_file, "\n"); + } + + return lhs; +} + +/* Enter condition equivalence P into the hash table. + + This indicates that a conditional expression has a known + boolean value. */ + +void +avail_exprs_stack::record_cond (cond_equivalence *p) +{ + class expr_hash_elt *element = new expr_hash_elt (&p->cond, p->value); + expr_hash_elt **slot; + + slot = m_avail_exprs->find_slot_with_hash (element, element->hash (), INSERT); + if (*slot == NULL) + { + *slot = element; + record_expr (element, NULL, '1'); + } + else + delete element; +} + /* Generate a hash value for a pair of expressions. This can be used iteratively by passing a previous result in HSTATE. @@ -798,3 +947,125 @@ initialize_expr_from_cond (tree cond, struct hashable_expr *expr) gcc_unreachable (); } +/* Build a cond_equivalence record indicating that the comparison + CODE holds between operands OP0 and OP1 and push it to **P. */ + +static void +build_and_record_new_cond (enum tree_code code, + tree op0, tree op1, + vec *p, + bool val = true) +{ + cond_equivalence c; + struct hashable_expr *cond = &c.cond; + + gcc_assert (TREE_CODE_CLASS (code) == tcc_comparison); + + cond->type = boolean_type_node; + cond->kind = EXPR_BINARY; + cond->ops.binary.op = code; + cond->ops.binary.opnd0 = op0; + cond->ops.binary.opnd1 = op1; + + c.value = val ? boolean_true_node : boolean_false_node; + p->safe_push (c); +} + +/* Record that COND is true and INVERTED is false into the edge information + structure. Also record that any conditions dominated by COND are true + as well. + + For example, if a < b is true, then a <= b must also be true. 
*/ + +void +record_conditions (vec *p, tree cond, tree inverted) +{ + tree op0, op1; + cond_equivalence c; + + if (!COMPARISON_CLASS_P (cond)) + return; + + op0 = TREE_OPERAND (cond, 0); + op1 = TREE_OPERAND (cond, 1); + + switch (TREE_CODE (cond)) + { + case LT_EXPR: + case GT_EXPR: + if (FLOAT_TYPE_P (TREE_TYPE (op0))) + { + build_and_record_new_cond (ORDERED_EXPR, op0, op1, p); + build_and_record_new_cond (LTGT_EXPR, op0, op1, p); + } + + build_and_record_new_cond ((TREE_CODE (cond) == LT_EXPR + ? LE_EXPR : GE_EXPR), + op0, op1, p); + build_and_record_new_cond (NE_EXPR, op0, op1, p); + build_and_record_new_cond (EQ_EXPR, op0, op1, p, false); + break; + + case GE_EXPR: + case LE_EXPR: + if (FLOAT_TYPE_P (TREE_TYPE (op0))) + { + build_and_record_new_cond (ORDERED_EXPR, op0, op1, p); + } + break; + + case EQ_EXPR: + if (FLOAT_TYPE_P (TREE_TYPE (op0))) + { + build_and_record_new_cond (ORDERED_EXPR, op0, op1, p); + } + build_and_record_new_cond (LE_EXPR, op0, op1, p); + build_and_record_new_cond (GE_EXPR, op0, op1, p); + break; + + case UNORDERED_EXPR: + build_and_record_new_cond (NE_EXPR, op0, op1, p); + build_and_record_new_cond (UNLE_EXPR, op0, op1, p); + build_and_record_new_cond (UNGE_EXPR, op0, op1, p); + build_and_record_new_cond (UNEQ_EXPR, op0, op1, p); + build_and_record_new_cond (UNLT_EXPR, op0, op1, p); + build_and_record_new_cond (UNGT_EXPR, op0, op1, p); + break; + + case UNLT_EXPR: + case UNGT_EXPR: + build_and_record_new_cond ((TREE_CODE (cond) == UNLT_EXPR + ? UNLE_EXPR : UNGE_EXPR), + op0, op1, p); + build_and_record_new_cond (NE_EXPR, op0, op1, p); + break; + + case UNEQ_EXPR: + build_and_record_new_cond (UNLE_EXPR, op0, op1, p); + build_and_record_new_cond (UNGE_EXPR, op0, op1, p); + break; + + case LTGT_EXPR: + build_and_record_new_cond (NE_EXPR, op0, op1, p); + build_and_record_new_cond (ORDERED_EXPR, op0, op1, p); + break; + + default: + break; + } + + /* Now store the original true and false conditions into the first + two slots. 
*/ + initialize_expr_from_cond (cond, &c.cond); + c.value = boolean_true_node; + p->safe_push (c); + + /* It is possible for INVERTED to be the negation of a comparison, + and not a valid RHS or GIMPLE_COND condition. This happens because + invert_truthvalue may return such an expression when asked to invert + a floating-point comparison. These comparisons are not assumed to + obey the trichotomy law. */ + initialize_expr_from_cond (inverted, &c.cond); + c.value = boolean_false_node; + p->safe_push (c); +} diff --git a/gcc/tree-ssa-scopedtables.h b/gcc/tree-ssa-scopedtables.h index 3e6798a..df304ae 100644 --- a/gcc/tree-ssa-scopedtables.h +++ b/gcc/tree-ssa-scopedtables.h @@ -47,6 +47,20 @@ struct hashable_expr } ops; }; +/* Structure for recording known value of a conditional expression. + + Clients build vectors of these objects to record known values + that occur on edges. */ + +struct cond_equivalence +{ + /* The condition, in a HASHABLE_EXPR form. */ + struct hashable_expr cond; + + /* The result of the condition (true or false). */ + tree value; +}; + /* Structure for entries in the expression hash table. */ typedef class expr_hash_elt * expr_hash_elt_t; @@ -132,6 +146,12 @@ class avail_exprs_stack hash_table *avail_exprs (void) { return m_avail_exprs; } + /* Lookup and conditionally insert an expression into the table, + recording enough information to unwind as needed. */ + tree lookup_avail_expr (gimple *, bool, bool); + + void record_cond (cond_equivalence *); + private: vec > m_stack; hash_table *m_avail_exprs; @@ -182,5 +202,6 @@ class const_and_copies }; void initialize_expr_from_cond (tree cond, struct hashable_expr *expr); +void record_conditions (vec *p, tree, tree); #endif /* GCC_TREE_SSA_SCOPED_TABLES_H */
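For readers unfamiliar with record_conditions, the kind of implication it derives for integer comparisons can be sketched like this (illustrative C++ only; comparisons are modeled as strings rather than tree codes, and the floating-point/unordered cases are omitted):

```cpp
#include <string>
#include <utility>
#include <vector>

// Sketch of the implication logic: from one comparison known true on
// an edge, record the weaker conditions it implies, each paired with
// its known boolean value.
typedef std::pair<std::string, bool> cond_equiv; // (comparison, known value)

static std::vector<cond_equiv>
implied_conditions (const std::string &code)
{
  std::vector<cond_equiv> out;
  if (code == "<" || code == ">")
    {
      // a < b implies a <= b and a != b are true, and a == b is false.
      out.push_back (cond_equiv (code == "<" ? "<=" : ">=", true));
      out.push_back (cond_equiv ("!=", true));
      out.push_back (cond_equiv ("==", false));
    }
  else if (code == "==")
    {
      // a == b implies both a <= b and a >= b.
      out.push_back (cond_equiv ("<=", true));
      out.push_back (cond_equiv (">=", true));
    }
  // The original condition itself is recorded as well; the real code
  // also records the inverted condition with value false.
  out.push_back (cond_equiv (code, true));
  return out;
}
```

Recording these derived equivalences into the hash table is what lets DOM (and, after this series, VRP's jump threader) fold a later "if (a <= b)" that is dominated by a known-true "a < b".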