Date: Tue, 8 Jun 2021 15:09:12 +0200 (CEST)
From: Richard Biener
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] tree-optimization/100923 - fix alias-ref construction wrt availability

This PR shows that building an ao_ref from value-numbers is prone to
expose bogus contextual alias info to the oracle.  The following makes
sure to construct ao_refs from SSA names available at the program point
only.  On the way it modifies the awkward valueize_refs[_1] API.

Bootstrapped and tested on x86_64-unknown-linux-gnu, pushed to trunk.

2021-06-08  Richard Biener

	PR tree-optimization/100923
	* tree-ssa-sccvn.c (valueize_refs_1): Take a pointer to the
	operand vector to be valueized.
	(valueize_refs): Likewise.
	(valueize_shared_reference_ops_from_ref): Adjust.
	(valueize_shared_reference_ops_from_call): Likewise.
	(vn_reference_lookup_3): Likewise.
	(vn_reference_lookup_pieces): Likewise.  Re-valueize with
	honoring availability when we are about to create the ao_ref
	and valueized before.
	(vn_reference_lookup): Likewise.
	(vn_reference_insert_pieces): Adjust.

	* gcc.dg/torture/pr100923.c: New testcase.
---
 gcc/testsuite/gcc.dg/torture/pr100923.c | 25 ++++++++
 gcc/tree-ssa-sccvn.c                    | 76 ++++++++++++++++---------
 2 files changed, 75 insertions(+), 26 deletions(-)
 create mode 100644 gcc/testsuite/gcc.dg/torture/pr100923.c

diff --git a/gcc/testsuite/gcc.dg/torture/pr100923.c b/gcc/testsuite/gcc.dg/torture/pr100923.c
new file mode 100644
index 00000000000..05a6341fea3
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/pr100923.c
@@ -0,0 +1,25 @@
+/* { dg-do run } */
+
+int a = 1, b, c, *d = &a, *e = &a, f;
+void g(int h) {}
+void k(int *l)
+{
+  int ***j;
+  if (c)
+    {
+      *j = &l;
+      ***j;
+    }
+  g(*l);
+  *e = f;
+  if (*l)
+    {
+      int i = b / a;
+      a = i;
+    }
+}
+int main()
+{
+  k(d);
+  return 0;
+}
diff --git a/gcc/tree-ssa-sccvn.c b/gcc/tree-ssa-sccvn.c
index e8761219460..64e3a707f5c 100644
--- a/gcc/tree-ssa-sccvn.c
+++ b/gcc/tree-ssa-sccvn.c
@@ -553,7 +553,7 @@ vuse_ssa_val (tree x)
   return x;
 }
 
-/* Similar to the above but used as callback for walk_non_aliases_vuses
+/* Similar to the above but used as callback for walk_non_aliased_vuses
    and thus should stop at unvisited VUSE to not walk across region
    boundaries.  */
 
@@ -1579,8 +1579,8 @@ contains_storage_order_barrier_p (vec<vn_reference_op_s> ops)
    the vector passed in is returned.  *VALUEIZED_ANYTHING will specify
    whether any operands were valueized.  */
 
-static vec<vn_reference_op_s>
-valueize_refs_1 (vec<vn_reference_op_s> orig, bool *valueized_anything,
+static void
+valueize_refs_1 (vec<vn_reference_op_s> *orig, bool *valueized_anything,
 		 bool with_avail = false)
 {
   vn_reference_op_t vro;
@@ -1588,7 +1588,7 @@ valueize_refs_1 (vec<vn_reference_op_s> orig, bool *valueized_anything,
 
   *valueized_anything = false;
 
-  FOR_EACH_VEC_ELT (orig, i, vro)
+  FOR_EACH_VEC_ELT (*orig, i, vro)
     {
       if (vro->opcode == SSA_NAME
 	  || (vro->op0 && TREE_CODE (vro->op0) == SSA_NAME))
@@ -1627,16 +1627,16 @@ valueize_refs_1 (vec<vn_reference_op_s> orig, bool *valueized_anything,
       if (i > 0
 	  && vro->op0
 	  && TREE_CODE (vro->op0) == ADDR_EXPR
-	  && orig[i - 1].opcode == MEM_REF)
+	  && (*orig)[i - 1].opcode == MEM_REF)
 	{
-	  if (vn_reference_fold_indirect (&orig, &i))
+	  if (vn_reference_fold_indirect (orig, &i))
 	    *valueized_anything = true;
 	}
       else if (i > 0
 	       && vro->opcode == SSA_NAME
-	       && orig[i - 1].opcode == MEM_REF)
+	       && (*orig)[i - 1].opcode == MEM_REF)
 	{
-	  if (vn_reference_maybe_forwprop_address (&orig, &i))
+	  if (vn_reference_maybe_forwprop_address (orig, &i))
 	    *valueized_anything = true;
 	}
       /* If it transforms a non-constant ARRAY_REF into a constant
@@ -1654,15 +1654,13 @@ valueize_refs_1 (vec<vn_reference_op_s> orig, bool *valueized_anything,
 	    off.to_shwi (&vro->off);
 	}
     }
-
-  return orig;
 }
 
-static vec<vn_reference_op_s>
-valueize_refs (vec<vn_reference_op_s> orig)
+static void
+valueize_refs (vec<vn_reference_op_s> *orig)
 {
   bool tem;
-  return valueize_refs_1 (orig, &tem);
+  valueize_refs_1 (orig, &tem);
 }
 
 static vec<vn_reference_op_s> shared_lookup_references;
@@ -1679,8 +1677,7 @@ valueize_shared_reference_ops_from_ref (tree ref, bool *valueized_anything)
     return vNULL;
   shared_lookup_references.truncate (0);
   copy_reference_ops_from_ref (ref, &shared_lookup_references);
-  shared_lookup_references = valueize_refs_1 (shared_lookup_references,
-					      valueized_anything);
+  valueize_refs_1 (&shared_lookup_references, valueized_anything);
   return shared_lookup_references;
 }
 
@@ -1695,7 +1692,7 @@ valueize_shared_reference_ops_from_call (gcall *call)
     return vNULL;
   shared_lookup_references.truncate (0);
   copy_reference_ops_from_call (call, &shared_lookup_references);
-  shared_lookup_references = valueize_refs (shared_lookup_references);
+  valueize_refs (&shared_lookup_references);
   return shared_lookup_references;
 }
 
@@ -2546,7 +2543,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
       if (*disambiguate_only <= TR_VALUEIZE_AND_DISAMBIGUATE)
 	{
 	  copy_reference_ops_from_ref (lhs, &lhs_ops);
-	  lhs_ops = valueize_refs_1 (lhs_ops, &valueized_anything, true);
+	  valueize_refs_1 (&lhs_ops, &valueized_anything, true);
 	}
       vn_context_bb = saved_rpo_bb;
       ao_ref_init (&lhs_ref, lhs);
@@ -3225,7 +3222,7 @@ vn_reference_lookup_3 (ao_ref *ref, tree vuse, void *data_,
       vr->operands.truncate (i + 1 + rhs.length ());
       FOR_EACH_VEC_ELT (rhs, j, vro)
 	vr->operands[i + 1 + j] = *vro;
-      vr->operands = valueize_refs (vr->operands);
+      valueize_refs (&vr->operands);
       if (old == shared_lookup_references)
 	shared_lookup_references = vr->operands;
       vr->hashcode = vn_reference_compute_hash (vr);
@@ -3526,8 +3523,9 @@ vn_reference_lookup_pieces (tree vuse, alias_set_type set,
 	  operands.address (),
 	  sizeof (vn_reference_op_s)
 	  * operands.length ());
-  vr1.operands = operands = shared_lookup_references
-    = valueize_refs (shared_lookup_references);
+  bool valueized_p;
+  valueize_refs_1 (&shared_lookup_references, &valueized_p);
+  vr1.operands = shared_lookup_references;
   vr1.type = type;
   vr1.set = set;
   vr1.base_set = base_set;
@@ -3543,13 +3541,31 @@ vn_reference_lookup_pieces (tree vuse, alias_set_type set,
       ao_ref r;
       unsigned limit = param_sccvn_max_alias_queries_per_access;
       vn_walk_cb_data data (&vr1, NULL_TREE, NULL, kind, true, NULL_TREE);
+      vec<vn_reference_op_s> ops_for_ref;
+      if (!valueized_p)
+	ops_for_ref = vr1.operands;
+      else
+	{
+	  /* For ao_ref_from_mem we have to ensure only available SSA names
+	     end up in base and the only convenient way to make this work
+	     for PRE is to re-valueize with that in mind.  */
+	  ops_for_ref.create (operands.length ());
+	  ops_for_ref.quick_grow (operands.length ());
+	  memcpy (ops_for_ref.address (),
+		  operands.address (),
+		  sizeof (vn_reference_op_s)
+		  * operands.length ());
+	  valueize_refs_1 (&ops_for_ref, &valueized_p, true);
+	}
       if (ao_ref_init_from_vn_reference (&r, set, base_set, type,
-					 vr1.operands))
+					 ops_for_ref))
 	*vnresult =
 	  ((vn_reference_t)
 	   walk_non_aliased_vuses (&r, vr1.vuse, true, vn_reference_lookup_2,
 				   vn_reference_lookup_3, vuse_valueize,
 				   limit, &data));
+      if (ops_for_ref != shared_lookup_references)
+	ops_for_ref.release ();
       gcc_checking_assert (vr1.operands == shared_lookup_references);
     }
 
@@ -3578,14 +3594,14 @@ vn_reference_lookup (tree op, tree vuse, vn_lookup_kind kind,
 {
   vec<vn_reference_op_s> operands;
   struct vn_reference_s vr1;
-  bool valuezied_anything;
+  bool valueized_anything;
 
   if (vnresult)
     *vnresult = NULL;
 
   vr1.vuse = vuse_ssa_val (vuse);
   vr1.operands = operands
-    = valueize_shared_reference_ops_from_ref (op, &valuezied_anything);
+    = valueize_shared_reference_ops_from_ref (op, &valueized_anything);
   vr1.type = TREE_TYPE (op);
   ao_ref op_ref;
   ao_ref_init (&op_ref, op);
@@ -3601,11 +3617,18 @@ vn_reference_lookup (tree op, tree vuse, vn_lookup_kind kind,
       vn_reference_t wvnresult;
       ao_ref r;
       unsigned limit = param_sccvn_max_alias_queries_per_access;
+      auto_vec<vn_reference_op_s> ops_for_ref;
+      if (valueized_anything)
+	{
+	  copy_reference_ops_from_ref (op, &ops_for_ref);
+	  bool tem;
+	  valueize_refs_1 (&ops_for_ref, &tem, true);
+	}
       /* Make sure to use a valueized reference if we valueized anything.
          Otherwise preserve the full reference for advanced TBAA.  */
-      if (!valuezied_anything
+      if (!valueized_anything
 	  || !ao_ref_init_from_vn_reference (&r, vr1.set, vr1.base_set,
-					     vr1.type, vr1.operands))
+					     vr1.type, ops_for_ref))
 	ao_ref_init (&r, op);
       vn_walk_cb_data data (&vr1, r.ref ? NULL_TREE : op, last_vuse_ptr,
 			    kind, tbaa_p, mask);
@@ -3733,7 +3756,8 @@ vn_reference_insert_pieces (tree vuse, alias_set_type set,
   vr1 = XOBNEW (&vn_tables_obstack, vn_reference_s);
   vr1->value_id = value_id;
   vr1->vuse = vuse_ssa_val (vuse);
-  vr1->operands = valueize_refs (operands);
+  vr1->operands = operands;
+  valueize_refs (&vr1->operands);
   vr1->type = type;
   vr1->punned = false;
   vr1->set = set;
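
A note for readers following the API change (not part of the patch, just an
illustration distilled from the hunks above): valueize_refs[_1] no longer
returns the valueized vector; it takes a pointer and valueizes in place, so
call sites change from the old assignment form to a plain call, e.g. at the
lhs_ops site in vn_reference_lookup_3:

  /* Before: the result had to be stored back into the vector.  */
  lhs_ops = valueize_refs_1 (lhs_ops, &valueized_anything, true);

  /* After: the vector is valueized in place through the pointer.  The
     trailing 'true' (with_avail) asks for valueization that honors
     availability, which vn_reference_lookup and vn_reference_lookup_pieces
     now use right before constructing the ao_ref.  */
  valueize_refs_1 (&lhs_ops, &valueized_anything, true);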