From patchwork Sat Nov 9 14:25:56 2019
X-Patchwork-Submitter: Giuliano Belinassi
X-Patchwork-Id: 1192430
Date: Sat, 9 Nov 2019 11:25:56 -0300
From: Giuliano Belinassi
To: gcc-patches@gcc.gnu.org
Subject: [PATCH] Refactor tree-loop-distribution for thread safety
Message-ID: <20191109142556.gtcby5qjth5vxuts@smtp.gmail.com>

Hi all,

This patch refactors tree-loop-distribution.c for thread safety without
use of the C11 __thread feature.  All global variables were moved into a
struct that is initialized at ::execute time.

I can install this patch myself on trunk if it's OK.

gcc/ChangeLog
2019-11-09  Giuliano Belinassi

	* cfgloop.c (get_loop_body_in_custom_order): New.
	* cfgloop.h (get_loop_body_in_custom_order): New prototype.
	* tree-loop-distribution.c (struct priv_pass_vars): New.
	(bb_top_order_cmp_r): New.
	(create_rdg_vertices): Update prototype.
	(stmts_from_loop): Same as above.
	(update_for_merge): Same as above.
	(partition_merge_into): Same as above.
	(get_data_dependence): Same as above.
	(data_dep_in_cycle_p): Same as above.
	(update_type_for_merge): Same as above.
	(build_rdg_partition_for_vertex): Same as above.
	(classify_builtin_ldst): Same as above.
	(classify_partition): Same as above.
	(share_memory_accesses): Same as above.
	(rdg_build_partitions): Same as above.
	(pg_add_dependence_edges): Same as above.
	(build_partition_graph): Same as above.
	(merge_dep_scc_partitions): Same as above.
	(break_alias_scc_partitions): Same as above.
	(finalize_partitions): Same as above.
	(distribute_loop): Same as above.
	(bb_top_order_init): New function.
	(bb_top_order_destroy): New function.
	(pass_loop_distribution::execute): Initialize struct priv.

Thank you,
Giuliano
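For readers who want the shape of the change without reading the whole diff,
here is a minimal, self-contained sketch of the pattern the patch applies:
former file-scope globals become members of a per-invocation struct, the
driver owns and initializes that struct, and the sort comparator receives it
through an explicit context pointer (the role gcc_sort_r plays in the patch).
This sketch is not GCC code; pass_state, topo_cmp_r, sort_r and run_pass are
illustrative names, and a small insertion sort stands in for the reentrant
sort routine.

  /* Minimal, self-contained sketch (not GCC code) of the refactoring
     pattern: state lives in a struct, never in globals, and the comparator
     gets that state via an explicit context argument.  */

  #include <stdio.h>
  #include <string.h>

  /* Former globals, now bundled per invocation.  */
  struct pass_state
  {
    int *topo_index;      /* basic-block index -> topological order  */
    int topo_index_size;
  };

  /* qsort_r-style comparator: the context comes in as the third argument
     instead of being read from a file-scope variable.  */
  static int
  topo_cmp_r (const void *x, const void *y, void *ctx)
  {
    const struct pass_state *st = (const struct pass_state *) ctx;
    int a = *(const int *) x;
    int b = *(const int *) y;
    return st->topo_index[a] - st->topo_index[b];
  }

  /* Stand-in for a reentrant sort (gcc_sort_r in the patch): a simple
     insertion sort that forwards CTX to the comparator.  */
  static void
  sort_r (int *base, size_t n,
          int (*cmp) (const void *, const void *, void *), void *ctx)
  {
    for (size_t i = 1; i < n; i++)
      for (size_t j = i; j > 0 && cmp (&base[j], &base[j - 1], ctx) < 0; j--)
        {
          int tmp = base[j];
          base[j] = base[j - 1];
          base[j - 1] = tmp;
        }
  }

  /* The ::execute-time driver owns the state; no function touches a global.  */
  static void
  run_pass (void)
  {
    struct pass_state st;
    memset (&st, 0, sizeof (st));

    int order[] = { 2, 0, 3, 1 };   /* pretend topological order per block  */
    st.topo_index = order;
    st.topo_index_size = 4;

    int blocks[] = { 0, 1, 2, 3 };
    sort_r (blocks, 4, topo_cmp_r, &st);

    for (int i = 0; i < 4; i++)
      printf ("%d ", blocks[i]);
    printf ("\n");                  /* prints: 1 3 0 2  */
  }

  int
  main (void)
  {
    run_pass ();
    return 0;
  }

Compiled as plain C this prints "1 3 0 2", the blocks ordered by their
topological index; the point is simply that nothing reads or writes a global,
which is what the patch below achieves for the loop-distribution pass.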
diff --git gcc/cfgloop.c gcc/cfgloop.c
index f18d2b3f24b..db0066ea859 100644
--- gcc/cfgloop.c
+++ gcc/cfgloop.c
@@ -980,6 +980,19 @@ get_loop_body_in_custom_order (const class loop *loop,
   return bbs;
 }
 
+/* Same as above, but use gcc_sort_r instead of qsort.  */
+
+basic_block *
+get_loop_body_in_custom_order (const class loop *loop, void *data,
+			       int (*bb_comparator) (const void *, const void *, void *))
+{
+  basic_block *bbs = get_loop_body (loop);
+
+  gcc_sort_r (bbs, loop->num_nodes, sizeof (basic_block), bb_comparator, data);
+
+  return bbs;
+}
+
 /* Get body of a LOOP in breadth first sort order.  */
 
 basic_block *
diff --git gcc/cfgloop.h gcc/cfgloop.h
index 0b0154ffd7b..6256cc01ff4 100644
--- gcc/cfgloop.h
+++ gcc/cfgloop.h
@@ -376,6 +376,8 @@ extern basic_block *get_loop_body_in_dom_order (const class loop *);
 extern basic_block *get_loop_body_in_bfs_order (const class loop *);
 extern basic_block *get_loop_body_in_custom_order (const class loop *,
 			       int (*) (const void *, const void *));
+extern basic_block *get_loop_body_in_custom_order (const class loop *, void *,
+			       int (*) (const void *, const void *, void *));
 
 extern vec<edge> get_loop_exit_edges (const class loop *);
 extern edge single_exit (const class loop *);
diff --git gcc/tree-loop-distribution.c gcc/tree-loop-distribution.c
index 81784866ad1..9edac891666 100644
--- gcc/tree-loop-distribution.c
+++ gcc/tree-loop-distribution.c
@@ -155,20 +155,55 @@ ddr_hasher::equal (const data_dependence_relation *ddr1,
   return (DDR_A (ddr1) == DDR_A (ddr2) && DDR_B (ddr1) == DDR_B (ddr2));
 }
 
-/* The loop (nest) to be distributed.  */
-static vec<loop_p> loop_nest;
+struct priv_pass_vars
+{
+  /* The loop (nest) to be distributed.  */
+  vec<loop_p> loop_nest;
 
-/* Vector of data references in the loop to be distributed.  */
-static vec<data_reference_p> datarefs_vec;
+  /* Vector of data references in the loop to be distributed.  */
+  vec<data_reference_p> datarefs_vec;
 
-/* If there is nonaddressable data reference in above vector.  */
-static bool has_nonaddressable_dataref_p;
+  /* If there is nonaddressable data reference in above vector.  */
+  bool has_nonaddressable_dataref_p;
 
-/* Store index of data reference in aux field.  */
-#define DR_INDEX(dr) ((uintptr_t) (dr)->aux)
+  /* Store index of data reference in aux field.  */
+
+  /* Hash table for data dependence relation in the loop to be distributed.  */
+  hash_table<ddr_hasher> *ddrs_table;
+
+  /* Array mapping basic block's index to its topological order.  */
+  int *bb_top_order_index;
+  /* And size of the array.  */
+  int bb_top_order_index_size;
+
+};
+
+
+/* If X has a smaller topological sort number than Y, returns -1;
+   if greater, returns 1.  */
+static int
+bb_top_order_cmp_r (const void *x, const void *y, void *priv)
+{
+  const struct priv_pass_vars *_priv =
+    (const struct priv_pass_vars *) priv;
 
-/* Hash table for data dependence relation in the loop to be distributed.  */
-static hash_table<ddr_hasher> *ddrs_table;
+  basic_block bb1 = *(const basic_block *) x;
+  basic_block bb2 = *(const basic_block *) y;
+
+  int *bb_top_order_index = _priv->bb_top_order_index;
+  int bb_top_order_index_size = _priv->bb_top_order_index_size;
+
+  gcc_assert (bb1->index < bb_top_order_index_size
+	      && bb2->index < bb_top_order_index_size);
+  gcc_assert (bb1 == bb2
+	      || bb_top_order_index[bb1->index]
+		 != bb_top_order_index[bb2->index]);
+
+  return (bb_top_order_index[bb1->index] - bb_top_order_index[bb2->index]);
+}
+
+
+#define DR_INDEX(dr) ((uintptr_t) (dr)->aux)
 
 /* A Reduced Dependence Graph (RDG) vertex representing a statement.  */
 struct rdg_vertex
@@ -440,7 +475,8 @@ create_rdg_cd_edges (struct graph *rdg, control_dependences *cd, loop_p loop)
    if that failed.  */
 
 static bool
-create_rdg_vertices (struct graph *rdg, vec<gimple *> stmts, loop_p loop)
+create_rdg_vertices (struct priv_pass_vars *priv,
+		     struct graph *rdg, vec<gimple *> stmts, loop_p loop)
 {
   int i;
   gimple *stmt;
@@ -460,56 +496,33 @@ create_rdg_vertices (struct graph *rdg, vec<gimple *> stmts, loop_p loop)
       if (gimple_code (stmt) == GIMPLE_PHI)
 	continue;
 
-      unsigned drp = datarefs_vec.length ();
-      if (!find_data_references_in_stmt (loop, stmt, &datarefs_vec))
+      unsigned drp = priv->datarefs_vec.length ();
+      if (!find_data_references_in_stmt (loop, stmt, &priv->datarefs_vec))
 	return false;
-      for (unsigned j = drp; j < datarefs_vec.length (); ++j)
+      for (unsigned j = drp; j < priv->datarefs_vec.length (); ++j)
 	{
-	  data_reference_p dr = datarefs_vec[j];
+	  data_reference_p dr = priv->datarefs_vec[j];
 	  if (DR_IS_READ (dr))
 	    RDGV_HAS_MEM_READS (v) = true;
 	  else
 	    RDGV_HAS_MEM_WRITE (v) = true;
 	  RDGV_DATAREFS (v).safe_push (dr);
-	  has_nonaddressable_dataref_p |= may_be_nonaddressable_p (dr->ref);
+	  priv->has_nonaddressable_dataref_p |= may_be_nonaddressable_p (dr->ref);
 	}
     }
   return true;
 }
 
-/* Array mapping basic block's index to its topological order.  */
-static int *bb_top_order_index;
-/* And size of the array.  */
-static int bb_top_order_index_size;
-
-/* If X has a smaller topological sort number than Y, returns -1;
-   if greater, returns 1.  */
-
-static int
-bb_top_order_cmp (const void *x, const void *y)
-{
-  basic_block bb1 = *(const basic_block *) x;
-  basic_block bb2 = *(const basic_block *) y;
-
-  gcc_assert (bb1->index < bb_top_order_index_size
-	      && bb2->index < bb_top_order_index_size);
-  gcc_assert (bb1 == bb2
-	      || bb_top_order_index[bb1->index]
-		 != bb_top_order_index[bb2->index]);
-
-  return (bb_top_order_index[bb1->index] - bb_top_order_index[bb2->index]);
-}
-
 /* Initialize STMTS with all the statements of LOOP.  We use topological
    order to discover all statements.  The order is important because
    generate_loops_for_partition is using the same traversal for identifying
    statements in loop copies.  */
 
 static void
-stmts_from_loop (class loop *loop, vec<gimple *> *stmts)
+stmts_from_loop (struct priv_pass_vars *priv, class loop *loop, vec<gimple *> *stmts)
 {
   unsigned int i;
-  basic_block *bbs = get_loop_body_in_custom_order (loop, bb_top_order_cmp);
+  basic_block *bbs = get_loop_body_in_custom_order (loop, priv, bb_top_order_cmp_r);
 
   for (i = 0; i < loop->num_nodes; i++)
     {
@@ -564,15 +577,16 @@ free_rdg (struct graph *rdg)
    collected and recorded in global data DATAREFS_VEC.  */
 
 static struct graph *
-build_rdg (class loop *loop, control_dependences *cd)
+build_rdg (struct priv_pass_vars* priv, class loop *loop,
+	   control_dependences *cd)
 {
   struct graph *rdg;
 
   /* Create the RDG vertices from the stmts of the loop nest.  */
   auto_vec<gimple *> stmts;
-  stmts_from_loop (loop, &stmts);
+  stmts_from_loop (priv, loop, &stmts);
   rdg = new_graph (stmts.length ());
-  if (!create_rdg_vertices (rdg, stmts, loop))
+  if (!create_rdg_vertices (priv, rdg, stmts, loop))
     {
       free_rdg (rdg);
       return NULL;
     }
@@ -709,14 +723,15 @@ static const char *fuse_message[] = {
   "there is no point to distribute loop"};
 
 static void
-update_type_for_merge (struct graph *, partition *, partition *);
+update_type_for_merge (struct priv_pass_vars *priv,
+		       struct graph *, partition *, partition *);
 
 /* Merge PARTITION into the partition DEST.  RDG is the reduced dependence
    graph and we update type for result partition if it is non-NULL.  */
 
 static void
-partition_merge_into (struct graph *rdg, partition *dest,
-		      partition *partition, enum fuse_type ft)
+partition_merge_into (struct priv_pass_vars *priv, struct graph *rdg,
+		      partition *dest, partition *partition, enum fuse_type ft)
 {
   if (dump_file && (dump_flags & TDF_DETAILS))
     {
@@ -738,7 +753,7 @@ partition_merge_into (struct graph *rdg, partition *dest,
   /* Further check if any data dependence prevents us from executing the
      new partition parallelly.  */
   if (dest->type == PTYPE_PARALLEL && rdg != NULL)
-    update_type_for_merge (rdg, dest, partition);
+    update_type_for_merge (priv, rdg, dest, partition);
 
   bitmap_ior_into (dest->datarefs, partition->datarefs);
 }
@@ -1208,7 +1223,8 @@ generate_code_for_partition (class loop *loop,
    it doesn't exist, compute the ddr and cache it.  */
 
 static data_dependence_relation *
-get_data_dependence (struct graph *rdg, data_reference_p a, data_reference_p b)
+get_data_dependence (struct priv_pass_vars *priv,
+		     struct graph *rdg, data_reference_p a, data_reference_p b)
 {
   struct data_dependence_relation ent, **slot;
   struct data_dependence_relation *ddr;
@@ -1218,11 +1234,11 @@ get_data_dependence (struct graph *rdg, data_reference_p a, data_reference_p b)
 	      <= rdg_vertex_for_stmt (rdg, DR_STMT (b)));
   ent.a = a;
   ent.b = b;
-  slot = ddrs_table->find_slot (&ent, INSERT);
+  slot = priv->ddrs_table->find_slot (&ent, INSERT);
   if (*slot == NULL)
     {
-      ddr = initialize_data_dependence_relation (a, b, loop_nest);
-      compute_affine_dependence (ddr, loop_nest[0]);
+      ddr = initialize_data_dependence_relation (a, b, priv->loop_nest);
+      compute_affine_dependence (ddr, priv->loop_nest[0]);
       *slot = ddr;
     }
 
@@ -1234,7 +1250,7 @@ get_data_dependence (struct graph *rdg, data_reference_p a, data_reference_p b)
    and such dependence cycle can't be resolved by runtime alias check.  */
 
 static bool
-data_dep_in_cycle_p (struct graph *rdg,
+data_dep_in_cycle_p (struct priv_pass_vars* priv, struct graph *rdg,
 		     data_reference_p dr1, data_reference_p dr2)
 {
   struct data_dependence_relation *ddr;
@@ -1244,7 +1260,7 @@ data_dep_in_cycle_p (struct graph *rdg,
       > rdg_vertex_for_stmt (rdg, DR_STMT (dr2)))
     std::swap (dr1, dr2);
 
-  ddr = get_data_dependence (rdg, dr1, dr2);
+  ddr = get_data_dependence (priv, rdg, dr1, dr2);
 
   /* In case of no data dependence.  */
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
@@ -1269,7 +1285,7 @@ data_dep_in_cycle_p (struct graph *rdg,
    PARTITION1's type after merging PARTITION2 into PARTITION1.  */
 
 static void
-update_type_for_merge (struct graph *rdg,
+update_type_for_merge (struct priv_pass_vars *priv, struct graph *rdg,
 		       partition *partition1, partition *partition2)
 {
   unsigned i, j;
@@ -1280,16 +1296,16 @@ update_type_for_merge (struct graph *rdg,
     {
       unsigned start = (partition1 == partition2) ? i + 1 : 0;
 
-      dr1 = datarefs_vec[i];
+      dr1 = priv->datarefs_vec[i];
       EXECUTE_IF_SET_IN_BITMAP (partition2->datarefs, start, j, bj)
 	{
-	  dr2 = datarefs_vec[j];
+	  dr2 = priv->datarefs_vec[j];
 	  if (DR_IS_READ (dr1) && DR_IS_READ (dr2))
 	    continue;
 
 	  /* Partition can only be executed sequentially if there is any
 	     data dependence cycle.  */
-	  if (data_dep_in_cycle_p (rdg, dr1, dr2))
+	  if (data_dep_in_cycle_p (priv, rdg, dr1, dr2))
 	    {
 	      partition1->type = PTYPE_SEQUENTIAL;
 	      return;
 	    }
@@ -1302,7 +1318,8 @@ update_type_for_merge (struct graph *rdg,
    the vertex V of the RDG, also including the loop exit conditions.  */
 
 static partition *
-build_rdg_partition_for_vertex (struct graph *rdg, int v)
+build_rdg_partition_for_vertex (struct priv_pass_vars *priv, struct graph *rdg,
+				int v)
 {
   partition *partition = partition_alloc ();
   auto_vec<int, 3> nodes;
@@ -1319,7 +1336,7 @@ build_rdg_partition_for_vertex (struct graph *rdg, int v)
       for (j = 0; RDG_DATAREFS (rdg, x).iterate (j, &dr); ++j)
 	{
 	  unsigned idx = (unsigned) DR_INDEX (dr);
-	  gcc_assert (idx < datarefs_vec.length ());
+	  gcc_assert (idx < priv->datarefs_vec.length ());
 
 	  /* Partition can only be executed sequentially if there is any
 	     unknown data reference.  */
@@ -1336,7 +1353,7 @@ build_rdg_partition_for_vertex (struct graph *rdg, int v)
 
   /* Further check if any data dependence prevents us from executing the
      partition parallelly.  */
-  update_type_for_merge (rdg, partition, partition);
+  update_type_for_merge (priv, rdg, partition, partition);
 
   return partition;
 }
@@ -1597,7 +1614,8 @@ classify_builtin_st (loop_p loop, partition *partition, data_reference_p dr)
    if it forms builtin memcpy or memmove call.  */
 
 static void
-classify_builtin_ldst (loop_p loop, struct graph *rdg, partition *partition,
+classify_builtin_ldst (struct priv_pass_vars *priv,
+		       loop_p loop, struct graph *rdg, partition *partition,
 		       data_reference_p dst_dr, data_reference_p src_dr)
 {
   tree base, size, src_base, src_size;
@@ -1624,7 +1642,7 @@ classify_builtin_ldst (loop_p loop, struct graph *rdg, partition *partition,
     return;
 
   /* Now check that if there is a dependence.  */
-  ddr_p ddr = get_data_dependence (rdg, src_dr, dst_dr);
+  ddr_p ddr = get_data_dependence (priv, rdg, src_dr, dst_dr);
 
   /* Classify as memcpy if no dependence between load and store.  */
   if (DDR_ARE_DEPENDENT (ddr) == chrec_known)
@@ -1663,7 +1681,8 @@ classify_builtin_ldst (loop_p loop, struct graph *rdg, partition *partition,
    possibly did not mark PARTITION as having one for this reason.  */
 
 static bool
-classify_partition (loop_p loop, struct graph *rdg, partition *partition,
+classify_partition (struct priv_pass_vars *priv, loop_p loop,
+		    struct graph *rdg, partition *partition,
 		    bitmap stmt_in_all_partitions)
 {
   bitmap_iterator bi;
@@ -1718,7 +1737,7 @@ classify_partition (loop_p loop, struct graph *rdg, partition *partition,
   if (single_ld == NULL)
     classify_builtin_st (loop, partition, single_st);
   else
-    classify_builtin_ldst (loop, rdg, partition, single_st, single_ld);
+    classify_builtin_ldst (priv, loop, rdg, partition, single_st, single_ld);
   return has_reduction;
 }
 
@@ -1726,7 +1745,7 @@ classify_partition (loop_p loop, struct graph *rdg, partition *partition,
    object in RDG.  */
 
 static bool
-share_memory_accesses (struct graph *rdg,
+share_memory_accesses (struct priv_pass_vars *priv, struct graph *rdg,
 		       partition *partition1, partition *partition2)
 {
   unsigned i, j;
@@ -1744,7 +1763,7 @@ share_memory_accesses (struct graph *rdg,
   /* Then check whether the two partitions access the same memory object.  */
   EXECUTE_IF_SET_IN_BITMAP (partition1->datarefs, 0, i, bi)
     {
-      dr1 = datarefs_vec[i];
+      dr1 = priv->datarefs_vec[i];
 
       if (!DR_BASE_ADDRESS (dr1)
 	  || !DR_OFFSET (dr1) || !DR_INIT (dr1) || !DR_STEP (dr1))
 	continue;
 
       EXECUTE_IF_SET_IN_BITMAP (partition2->datarefs, 0, j, bj)
 	{
-	  dr2 = datarefs_vec[j];
+	  dr2 = priv->datarefs_vec[j];
 
 	  if (!DR_BASE_ADDRESS (dr2)
 	      || !DR_OFFSET (dr2) || !DR_INIT (dr2) || !DR_STEP (dr2))
@@ -1774,7 +1793,8 @@ share_memory_accesses (struct graph *rdg,
    All partitions are recorded in PARTITIONS.  */
 
 static void
-rdg_build_partitions (struct graph *rdg,
+rdg_build_partitions (struct priv_pass_vars *priv,
+		      struct graph *rdg,
 		      vec<gimple *> starting_stmts,
 		      vec<partition *> *partitions)
 {
@@ -1795,7 +1815,7 @@ rdg_build_partitions (struct graph *rdg,
       if (bitmap_bit_p (processed, v))
 	continue;
 
-      partition *partition = build_rdg_partition_for_vertex (rdg, v);
+      partition *partition = build_rdg_partition_for_vertex (priv, rdg, v);
       bitmap_ior_into (processed, partition->stmts);
 
       if (dump_file && (dump_flags & TDF_DETAILS))
@@ -1899,7 +1919,7 @@ partition_contains_all_rw (struct graph *rdg,
    dependence as if it doesn't exist at all.  */
 
 static int
-pg_add_dependence_edges (struct graph *rdg, int dir,
+pg_add_dependence_edges (struct priv_pass_vars *priv, struct graph *rdg, int dir,
 			 bitmap drs1, bitmap drs2, vec<ddr_p> *alias_ddrs)
 {
   unsigned i, j;
@@ -1910,14 +1930,14 @@ pg_add_dependence_edges (struct graph *rdg, int dir,
      1 is forth, 2 is both (we can stop then, merging will occur).  */
   EXECUTE_IF_SET_IN_BITMAP (drs1, 0, i, bi)
     {
-      dr1 = datarefs_vec[i];
+      dr1 = priv->datarefs_vec[i];
 
       EXECUTE_IF_SET_IN_BITMAP (drs2, 0, j, bj)
 	{
 	  int res, this_dir = 1;
 	  ddr_p ddr;
 
-	  dr2 = datarefs_vec[j];
+	  dr2 = priv->datarefs_vec[j];
 
 	  /* Skip all <read, read> data dependence.  */
 	  if (DR_IS_READ (dr1) && DR_IS_READ (dr2))
@@ -1931,7 +1951,7 @@ pg_add_dependence_edges (struct graph *rdg, int dir,
 	      std::swap (dr1, dr2);
 	      this_dir = -this_dir;
 	    }
-	  ddr = get_data_dependence (rdg, dr1, dr2);
+	  ddr = get_data_dependence (priv, rdg, dr1, dr2);
 	  if (DDR_ARE_DEPENDENT (ddr) == chrec_dont_know)
 	    {
 	      this_dir = 0;
@@ -2116,7 +2136,7 @@ free_partition_graph_vdata (struct graph *pg)
    are considered.  */
 
 static struct graph *
-build_partition_graph (struct graph *rdg,
+build_partition_graph (struct priv_pass_vars *priv, struct graph *rdg,
 		       vec<struct partition *> *partitions,
 		       bool ignore_alias_p)
 {
@@ -2148,7 +2168,7 @@ build_partition_graph (struct graph *rdg,
 	  /* Cleanup the temporary vector.  */
 	  alias_ddrs.truncate (0);
 
-	  dir = pg_add_dependence_edges (rdg, dir, partition1->datarefs,
+	  dir = pg_add_dependence_edges (priv, rdg, dir, partition1->datarefs,
 					 partition2->datarefs, alias_ddrs_p);
 
 	  /* Add edge to partition graph if there exists dependence.  There
@@ -2206,13 +2226,13 @@ sort_partitions_by_post_order (struct graph *pg,
    at all; otherwise all depdendences are considered.  */
 
 static void
-merge_dep_scc_partitions (struct graph *rdg,
+merge_dep_scc_partitions (struct priv_pass_vars *priv, struct graph *rdg,
 			  vec<struct partition *> *partitions,
 			  bool ignore_alias_p)
 {
   struct partition *partition1, *partition2;
   struct pg_vdata *data;
-  graph *pg = build_partition_graph (rdg, partitions, ignore_alias_p);
+  graph *pg = build_partition_graph (priv, rdg, partitions, ignore_alias_p);
   int i, j, num_sccs = graphds_scc (pg, NULL);
 
   /* Strong connected compoenent means dependence cycle, we cannot distribute
@@ -2227,7 +2247,7 @@ merge_dep_scc_partitions (struct graph *rdg,
 	  for (j = j + 1; partitions->iterate (j, &partition2); ++j)
 	    if (pg->vertices[j].component == i)
 	      {
-		partition_merge_into (NULL, partition1,
+		partition_merge_into (priv, NULL, partition1,
 				      partition2, FUSE_SAME_SCC);
 		partition1->type = PTYPE_SEQUENTIAL;
 		(*partitions)[j] = NULL;
@@ -2283,13 +2303,14 @@ pg_collect_alias_ddrs (struct graph *g, struct graph_edge *e, void *data)
    relations for runtime alias check in ALIAS_DDRS.  */
 
 static void
-break_alias_scc_partitions (struct graph *rdg,
+break_alias_scc_partitions (struct priv_pass_vars *priv,
+			    struct graph *rdg,
 			    vec<struct partition *> *partitions,
 			    vec<ddr_p> *alias_ddrs)
 {
   int i, j, k, num_sccs, num_sccs_no_alias;
 
   /* Build partition dependence graph.  */
-  graph *pg = build_partition_graph (rdg, partitions, false);
+  graph *pg = build_partition_graph (priv, rdg, partitions, false);
   alias_ddrs->truncate (0);
 
   /* Find strong connected components in the graph, with all dependence edges
@@ -2384,7 +2405,7 @@ break_alias_scc_partitions (struct graph *rdg,
 		      gcc_assert (pg->vertices[k].post < pg->vertices[j].post);
 		      pg->vertices[j].post = pg->vertices[k].post;
 		    }
-		  partition_merge_into (NULL, first, partition, FUSE_SAME_SCC);
+		  partition_merge_into (priv, NULL, first, partition, FUSE_SAME_SCC);
 		  (*partitions)[k] = NULL;
 		  partition_free (partition);
 		  data = (struct pg_vdata *)pg->vertices[k].data;
@@ -2723,7 +2744,8 @@ fuse_memset_builtins (vec<struct partition *> *partitions)
    ALIAS_DDRS contains ddrs which need runtime alias check.  */
 
 static void
-finalize_partitions (class loop *loop, vec<struct partition *> *partitions,
+finalize_partitions (struct priv_pass_vars *priv,
+		     class loop *loop, vec<struct partition *> *partitions,
 		     vec<ddr_p> *alias_ddrs)
 {
   unsigned i;
@@ -2762,7 +2784,7 @@ finalize_partitions (class loop *loop, vec<struct partition *> *partitions,
       a = (*partitions)[0];
       for (i = 1; partitions->iterate (i, &partition); ++i)
 	{
-	  partition_merge_into (NULL, a, partition, FUSE_FINALIZE);
+	  partition_merge_into (priv, NULL, a, partition, FUSE_FINALIZE);
 	  partition_free (partition);
 	}
       partitions->truncate (1);
@@ -2780,28 +2802,28 @@ finalize_partitions (class loop *loop, vec<struct partition *> *partitions,
    Set *DESTROY_P to whether LOOP needs to be destroyed.  */
 
 static int
-distribute_loop (class loop *loop, vec<gimple *> stmts,
+distribute_loop (struct priv_pass_vars* priv, class loop *loop, vec<gimple *> stmts,
 		 control_dependences *cd, int *nb_calls, bool *destroy_p,
 		 bool only_patterns_p)
 {
-  ddrs_table = new hash_table<ddr_hasher> (389);
+  priv->ddrs_table = new hash_table<ddr_hasher> (389);
   struct graph *rdg;
   partition *partition;
   int i, nbp;
 
   *destroy_p = false;
   *nb_calls = 0;
-  loop_nest.create (0);
-  if (!find_loop_nest (loop, &loop_nest))
+  priv->loop_nest.create (0);
+  if (!find_loop_nest (loop, &priv->loop_nest))
     {
-      loop_nest.release ();
-      delete ddrs_table;
+      priv->loop_nest.release ();
+      delete priv->ddrs_table;
      return 0;
    }
 
-  datarefs_vec.create (20);
-  has_nonaddressable_dataref_p = false;
-  rdg = build_rdg (loop, cd);
+  priv->datarefs_vec.create (20);
+  priv->has_nonaddressable_dataref_p = false;
+  rdg = build_rdg (priv, loop, cd);
   if (!rdg)
     {
       if (dump_file && (dump_flags & TDF_DETAILS))
@@ -2809,13 +2831,13 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
 		 "Loop %d not distributed: failed to build the RDG.\n",
 		 loop->num);
 
-      loop_nest.release ();
-      free_data_refs (datarefs_vec);
-      delete ddrs_table;
+      priv->loop_nest.release ();
+      free_data_refs (priv->datarefs_vec);
+      delete priv->ddrs_table;
       return 0;
     }
 
-  if (datarefs_vec.length () > MAX_DATAREFS_NUM)
+  if (priv->datarefs_vec.length () > MAX_DATAREFS_NUM)
    {
      if (dump_file && (dump_flags & TDF_DETAILS))
	fprintf (dump_file,
@@ -2823,21 +2845,21 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
 		 "Loop %d not distributed: too many memory references.\n",
 		 loop->num);
 
       free_rdg (rdg);
-      loop_nest.release ();
-      free_data_refs (datarefs_vec);
-      delete ddrs_table;
+      priv->loop_nest.release ();
+      free_data_refs (priv->datarefs_vec);
+      delete priv->ddrs_table;
       return 0;
     }
 
   data_reference_p dref;
-  for (i = 0; datarefs_vec.iterate (i, &dref); ++i)
+  for (i = 0; priv->datarefs_vec.iterate (i, &dref); ++i)
     dref->aux = (void *) (uintptr_t) i;
 
   if (dump_file && (dump_flags & TDF_DETAILS))
     dump_rdg (dump_file, rdg);
 
   auto_vec<struct partition *, 3> partitions;
-  rdg_build_partitions (rdg, stmts, &partitions);
+  rdg_build_partitions (priv, rdg, stmts, &partitions);
 
   auto_vec<ddr_p> alias_ddrs;
 
@@ -2851,7 +2873,7 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
   FOR_EACH_VEC_ELT (partitions, i, partition)
     {
       reduction_in_all
-	|= classify_partition (loop, rdg, partition, stmt_in_all_partitions);
+	|= classify_partition (priv, loop, rdg, partition, stmt_in_all_partitions);
       any_builtin |= partition_builtin_p (partition);
     }
 
@@ -2877,7 +2899,7 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
 	  for (++i; partitions.iterate (i, &partition); ++i)
 	    if (!partition_builtin_p (partition))
 	      {
-		partition_merge_into (NULL, into, partition, FUSE_NON_BUILTIN);
+		partition_merge_into (priv, NULL, into, partition, FUSE_NON_BUILTIN);
 		partitions.unordered_remove (i);
 		partition_free (partition);
 		i--;
 	      }
@@ -2893,7 +2915,7 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
       for (i = i + 1; partitions.iterate (i, &partition); ++i)
 	if (partition_reduction_p (partition))
 	  {
-	    partition_merge_into (rdg, into, partition, FUSE_REDUCTION);
+	    partition_merge_into (priv, rdg, into, partition, FUSE_REDUCTION);
 	    partitions.unordered_remove (i);
 	    partition_free (partition);
 	    i--;
 	  }
@@ -2909,9 +2931,9 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
       for (int j = i + 1; partitions.iterate (j, &partition); ++j)
 	{
-	  if (share_memory_accesses (rdg, into, partition))
+	  if (share_memory_accesses (priv, rdg, into, partition))
 	    {
-	      partition_merge_into (rdg, into, partition, FUSE_SHARE_REF);
+	      partition_merge_into (priv, rdg, into, partition, FUSE_SHARE_REF);
 	      partitions.unordered_remove (j);
 	      partition_free (partition);
 	      j--;
 	    }
@@ -2949,17 +2971,17 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
 	 since it's not likely to enable many vectorization opportunities.
 	 Also if loop has any data reference which may be not addressable
 	 since alias check needs to take, compare address of the object.  */
-      if (loop->inner || has_nonaddressable_dataref_p)
-	merge_dep_scc_partitions (rdg, &partitions, false);
+      if (loop->inner || priv->has_nonaddressable_dataref_p)
+	merge_dep_scc_partitions (priv, rdg, &partitions, false);
       else
 	{
-	  merge_dep_scc_partitions (rdg, &partitions, true);
+	  merge_dep_scc_partitions (priv, rdg, &partitions, true);
 	  if (partitions.length () > 1)
-	    break_alias_scc_partitions (rdg, &partitions, &alias_ddrs);
+	    break_alias_scc_partitions (priv, rdg, &partitions, &alias_ddrs);
 	}
     }
 
-  finalize_partitions (loop, &partitions, &alias_ddrs);
+  finalize_partitions (priv, loop, &partitions, &alias_ddrs);
 
   /* If there is a reduction in all partitions make sure the last one
      is not classified for builtin code generation.  */
@@ -3003,15 +3025,15 @@ distribute_loop (class loop *loop, vec<gimple *> stmts,
     }
 
  ldist_done:
-  loop_nest.release ();
-  free_data_refs (datarefs_vec);
-  for (hash_table<ddr_hasher>::iterator iter = ddrs_table->begin ();
-       iter != ddrs_table->end (); ++iter)
+  priv->loop_nest.release ();
+  free_data_refs (priv->datarefs_vec);
+  for (hash_table<ddr_hasher>::iterator iter = priv->ddrs_table->begin ();
+       iter != priv->ddrs_table->end (); ++iter)
     {
      free_dependence_relation (*iter);
      *iter = NULL;
    }
-  delete ddrs_table;
+  delete priv->ddrs_table;
 
   FOR_EACH_VEC_ELT (partitions, i, partition)
     partition_free (partition);
@@ -3140,6 +3162,31 @@ prepare_perfect_loop_nest (class loop *loop)
   return loop;
 }
 
+/* Compute topological order for basic blocks.  Topological order is
+   needed because data dependence is computed for data references in
+   lexicographical order.  */
+
+static void bb_top_order_init (struct priv_pass_vars *priv)
+{
+  int rpo_num;
+  int *rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
+
+  priv->bb_top_order_index = XNEWVEC (int, last_basic_block_for_fn (cfun));
+  priv->bb_top_order_index_size = last_basic_block_for_fn (cfun);
+  rpo_num = pre_and_rev_post_order_compute_fn (cfun, NULL, rpo, true);
+  for (int i = 0; i < rpo_num; i++)
+    priv->bb_top_order_index[rpo[i]] = i;
+
+  free (rpo);
+}
+
+static void bb_top_order_destroy (struct priv_pass_vars *priv)
+{
+  free (priv->bb_top_order_index);
+  priv->bb_top_order_index = NULL;
+  priv->bb_top_order_index_size = 0;
+}
+
 unsigned int
 pass_loop_distribution::execute (function *fun)
 {
@@ -3149,25 +3196,13 @@ pass_loop_distribution::execute (function *fun)
   control_dependences *cd = NULL;
   auto_vec<loop_p> loops_to_be_destroyed;
 
+  struct priv_pass_vars priv;
+  memset (&priv, 0x00, sizeof (priv));
+
   if (number_of_loops (fun) <= 1)
     return 0;
 
-  /* Compute topological order for basic blocks.  Topological order is
-     needed because data dependence is computed for data references in
-     lexicographical order.  */
-  if (bb_top_order_index == NULL)
-    {
-      int rpo_num;
-      int *rpo = XNEWVEC (int, last_basic_block_for_fn (cfun));
-
-      bb_top_order_index = XNEWVEC (int, last_basic_block_for_fn (cfun));
-      bb_top_order_index_size = last_basic_block_for_fn (cfun);
-      rpo_num = pre_and_rev_post_order_compute_fn (cfun, NULL, rpo, true);
-      for (int i = 0; i < rpo_num; i++)
-	bb_top_order_index[rpo[i]] = i;
-
-      free (rpo);
-    }
+  bb_top_order_init (&priv);
 
   FOR_ALL_BB_FN (bb, fun)
     {
@@ -3215,7 +3250,7 @@ pass_loop_distribution::execute (function *fun)
 	  bool destroy_p;
 	  int nb_generated_loops, nb_generated_calls;
 	  nb_generated_loops
-	    = distribute_loop (loop, work_list, cd, &nb_generated_calls,
+	    = distribute_loop (&priv, loop, work_list, cd, &nb_generated_calls,
 			       &destroy_p, (!optimize_loop_for_speed_p (loop)
 					    || !flag_tree_loop_distribution));
 	  if (destroy_p)
@@ -3241,12 +3276,8 @@ pass_loop_distribution::execute (function *fun)
   if (cd)
     delete cd;
 
-  if (bb_top_order_index != NULL)
-    {
-      free (bb_top_order_index);
-      bb_top_order_index = NULL;
-      bb_top_order_index_size = 0;
-    }
+  if (priv.bb_top_order_index != NULL)
+    bb_top_order_destroy (&priv);
 
   if (changed)
     {