From patchwork Wed Jul 3 10:39:29 2013
X-Patchwork-Submitter: Jakub Jelinek
X-Patchwork-Id: 256590
Date: Wed, 3 Jul 2013 12:39:29 +0200
From: Jakub Jelinek
To: Richard Henderson
Cc: gcc-patches@gcc.gnu.org
Subject: [gomp4] Compiler side of the cancellation support
Message-ID: <20130703103929.GM2336@tucnak.redhat.com>

Hi!

This is the compiler side of the #pragma omp cancel and
#pragma omp cancellation point support.
On the library side what is needed is:
1) GOMP_cancellation_point now returns a bool (whether the relevant
   cancellation was observed).
2) GOMP_cancel now has two arguments instead of just one, and returns
   bool like GOMP_cancellation_point.  If the second argument is false,
   it acts just like GOMP_cancellation_point; if it is true, it cancels
   the given construct.  For both these calls the first argument is 1 for
   parallel cancellation, 2 for loop cancellation, 4 for sections
   cancellation and 8 for taskgroup cancellation.
3) GOMP_barrier_cancel, which is like GOMP_barrier, but should check for
   pending parallel cancellation and, if the parallel is cancelled,
   should return true.
4) GOMP_sections_end_cancel and GOMP_loop_end_cancel variants of the
   non-cancel libcalls, for the cancellation-checking implicit barriers.
(A sketch of the calls the compiler emits for these entry points follows
after the ChangeLog below.)

The still unsolved problems are that for firstprivate/lastprivate for and
copyin_ref we add an implicit barrier that isn't really in the standard,
and similarly for #pragma omp single copyprivate we don't use the one
barrier mandated by the standard, but actually two barriers.  Not sure
what exactly we want as the behavior for these.  Some subset of threads
can be cancelled before reaching the unofficial barrier (say one with
#pragma omp cancel parallel before reaching the omp for or omp single
copyprivate) and some others with #pragma omp cancellation point parallel,
while some threads hit the unofficial barrier before the cancellation
(and optionally some afterwards).  Do we want the library to just arrange
for all barriers to be awakened and not to block until the final barrier
at the end of the parallel is hit, with the unofficial barriers not
returning anything, while the official barriers (*_cancel suffixed)
return true to signal a jump to the end of the region with running of
dtors?  Or perhaps keep track of how many threads in the parallel have
already observed the cancellation, wait on non-*_cancel barriers only for
the rest of the threads that haven't observed it yet, and only on the
*_cancel barriers observe it for all threads?  Another issue is what
happens if the dtors executed on the way contain barriers, but that is
probably ruled out by the restriction that "A construct that may be
subject to cancellation must not encounter an orphaned cancellation
point."

Queuing this patch until we have a library implementation.

2013-07-03  Jakub Jelinek

	* gimple-pretty-print.c (dump_gimple_omp_return): Print
	gimple_omp_return_lhs if non-NULL.
	* tree-ssa-alias.c (ref_maybe_used_by_call_p_1,
	call_may_clobber_ref_p_1): Handle BUILT_IN_GOMP_BARRIER_CANCEL,
	BUILT_IN_GOMP_LOOP_END_CANCEL, BUILT_IN_GOMP_SECTIONS_END_CANCEL.
	* gimple.h (gimple_omp_return_set_lhs, gimple_omp_return_lhs,
	gimple_omp_return_lhs_ptr): New inlines.
	* gimple.def (GIMPLE_OMP_RETURN): Use GSS_OMP_ATOMIC_STORE instead
	of GSS_BASE.
	* gimple.c (walk_gimple_op): Walk lhs.
	* builtin-types.def (BT_FN_BOOL_INT, BT_FN_BOOL_INT_BOOL): New.
	* omp-builtins.def (BUILT_IN_GOMP_CANCELLATION_POINT): Use
	ATTR_NOTHROW_LEAF_LIST instead of ATTR_NULL.  Return type is now
	bool.
	(BUILT_IN_GOMP_CANCEL): Likewise.  Add second argument with bool
	type.
	(BUILT_IN_GOMP_BARRIER_CANCEL, BUILT_IN_GOMP_LOOP_END_CANCEL,
	BUILT_IN_GOMP_SECTIONS_END_CANCEL): New builtins.
	* omp-low.c (struct omp_context): Add cancel_label and cancellable
	fields.
	(extract_omp_for_data): Set have_nowait even for simd implicitly.
	(check_omp_nesting_restrictions): Verify nesting restrictions for
	#pragma omp cancel and #pragma omp cancellation point.
	Set ctx->cancellable for regions that can be cancelled or also for
	any task region that contains #pragma omp cancellation point.
	(scan_omp_1_stmt): Check nesting restrictions even if ctx == NULL.
	(build_omp_barrier): Return gimple instead of tree, add lhs
	argument, if non-NULL, build GOMP_barrier_cancel builtin instead
	and set its call lhs to lhs.
	(lower_rec_input_clauses): Adjust build_omp_barrier caller.
	(expand_omp_for_static_nochunk, expand_omp_for_static_chunk,
	expand_omp_single): Likewise.  If OMP_RETURN has lhs, pass it to
	build_omp_barrier.
	(expand_omp_for_generic): If OMP_RETURN has lhs, use
	GOMP_loop_end_cancel libcall instead of GOMP_loop_end and set its
	lhs from OMP_RETURN's lhs.
	(expand_omp_sections): If OMP_RETURN has lhs, use
	GOMP_sections_end_cancel libcall instead of GOMP_sections_end and
	set its lhs from OMP_RETURN's lhs.
	(maybe_add_implicit_barrier_cancel): New function.
	(lower_omp_sections): If ctx->cancellable, emit cancel_label
	before OMP_RETURN.  Call maybe_add_implicit_barrier_cancel.
	(lower_omp_for): Likewise.
	(lower_omp_single): Call maybe_add_implicit_barrier_cancel.
	(lower_omp_taskreg): If ctx->cancellable, emit cancel_label
	before OMP_RETURN.
	(lower_omp_1): If ctx->cancellable, create ctx->cancel_label.
	Adjust GOMP_barrier libcalls to GOMP_barrier_cancel plus
	conditional branch in cancellable regions, adjust GOMP_cancel and
	GOMP_cancellation_point in cancellable regions or remove
	GOMP_cancellation_point in non-cancellable regions.
c/
	* c-typeck.c (c_finish_omp_cancel): Pass two arguments to
	GOMP_cancel rather than just one, if no OMP_CLAUSE_IF, pass true,
	otherwise pass if clause argument.  Emit the call unconditionally.
cp/
	* semantics.c (finish_omp_cancel): Pass two arguments to
	GOMP_cancel rather than just one, if no OMP_CLAUSE_IF, pass true,
	otherwise pass if clause argument.  Emit the call unconditionally.
fortran/
	* types.def (BT_FN_BOOL_INT, BT_FN_BOOL_INT_BOOL): New.
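To make the new library interface concrete, here is a rough sketch (not
part of the patch) of what the omplower pass is described to emit inside
a cancellable parallel region.  The prototypes are my rendering of the
interface described above, the wrapper function is illustrative only, and
cancel_label stands for the ctx->cancel_label the pass creates:

#include <stdbool.h>

/* Library entry points as described above; the first argument is the
   construct mask: 1 = parallel, 2 = loop, 4 = sections, 8 = taskgroup.  */
extern bool GOMP_cancel (int which, bool do_cancel);
extern bool GOMP_cancellation_point (int which);
extern bool GOMP_barrier_cancel (void);

void
cancellable_parallel_body (int cond)
{
  /* #pragma omp cancel parallel if (cond): GOMP_cancel is called
     unconditionally, the if clause (or true when absent) becomes the
     second argument, and the bool result branches to the region's
     cancel label.  */
  if (GOMP_cancel (1, cond != 0))
    goto cancel_label;

  /* #pragma omp cancellation point parallel.  */
  if (GOMP_cancellation_point (1))
    goto cancel_label;

  /* #pragma omp barrier inside a cancellable region becomes
     GOMP_barrier_cancel plus the same conditional branch.  */
  if (GOMP_barrier_cancel ())
    goto cancel_label;

  return;

 cancel_label:
  /* Destructors of local objects run here; control then reaches the
     cancellation-checking implicit barrier at the end of the region.  */
  return;
}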
	Jakub

extern "C" int omp_get_thread_num ();

struct S { S (); ~S (); };

__attribute__((noinline)) S::S () { }
__attribute__((noinline)) S::~S () { }

void
fn0 (void)
{
  for (int i = 0; i < 100000; i++)
    ;
}

void
fn1 (int *x)
{
  S s;
  #pragma omp parallel firstprivate(x) num_threads (32)
  {
    S a;
    if (x[omp_get_thread_num ()] > 2)
      {
	S b;
	#pragma omp cancel parallel if (x[omp_get_thread_num ()] > 3)
      }
    else if (x[omp_get_thread_num ()] == 2)
      {
	S c, d;
	#pragma omp cancellation point parallel
      }
    #pragma omp barrier
    fn0 ();
  }
  #pragma omp parallel firstprivate(x) num_threads (8)
  {
    S a;
    if (x[omp_get_thread_num ()] > 3)
      {
	S b;
	#pragma omp cancel parallel
      }
    #pragma omp for schedule(runtime)
    for (int i = 0; i < 100000; i++)
      ;
    fn0 ();
  }
  #pragma omp parallel firstprivate(x) num_threads (16)
  {
    S a;
    if (x[omp_get_thread_num ()] > 3)
      {
	S b;
	#pragma omp cancel parallel
      }
    #pragma omp sections
    {
      fn0 ();
      #pragma omp section
      fn0 ();
      #pragma omp section
      fn0 ();
    }
    fn0 ();
  }
  int v;
  #pragma omp parallel firstprivate(x) num_threads (32)
  {
    S a;
    int v;
    if (x[v = omp_get_thread_num ()] > 3)
      {
	S b;
	#pragma omp cancel parallel
      }
    #pragma omp single copyprivate (v)
    fn0 ();
    fn0 ();
  }
}

void
fn2 (bool x)
{
  S s;
  #pragma omp sections
  {
    {
      #pragma omp cancellation point sections
    }
    #pragma omp section
    {
      S a;
      #pragma omp cancel sections if (x)
    }
    #pragma omp section
    {
      S b;
    }
    #pragma omp section
    {
      #pragma omp cancellation point sections
    }
  }
}

int x;

void
fn3 (int *y)
{
  #pragma omp for firstprivate (x)
  for (int i = 0; i < 1024; i++)
    {
      x += y[i] > 10;
      if (x > 10)
	{
	  #pragma omp cancel for if (x > 12)
	}
      else if (x > 8)
	{
	  #pragma omp cancellation point for
	}
    }
}

void
fn4 (int *x)
{
  #pragma omp task
  {
    if (x[omp_get_thread_num ()] > 10)
      {
	#pragma omp cancel taskgroup if (x[omp_get_thread_num ()] > 12)
      }
  }
  #pragma omp task
  {
    if (x[omp_get_thread_num ()] > 5)
      {
	#pragma omp cancellation point taskgroup
      }
  }
}

--- gcc/gimple-pretty-print.c.jj	2013-06-14 18:46:39.000000000 +0200
+++ gcc/gimple-pretty-print.c	2013-07-03 10:15:05.731853776 +0200
@@ -1441,14 +1441,26 @@ dump_gimple_omp_return (pretty_printer *
 {
   if (flags & TDF_RAW)
     {
-      dump_gimple_fmt (buffer, spc, flags, "%G ", gs,
+      dump_gimple_fmt (buffer, spc, flags, "%G ",
+		       gimple_omp_return_lhs (gs));
+      else
+	dump_gimple_fmt (buffer, spc, flags, ">");
     }
   else
     {
       pp_string (buffer, "#pragma omp return");
       if (gimple_omp_return_nowait_p (gs))
	 pp_string (buffer, "(nowait)");
+      if (gimple_omp_return_lhs (gs))
+	{
+	  pp_string (buffer, " (set ");
+	  dump_generic_node (buffer, gimple_omp_return_lhs (gs),
+			     spc, flags, false);
+	  pp_character (buffer, ')');
+	}
     }
 }
--- gcc/tree-ssa-alias.c.jj	2013-05-20 13:21:29.000000000 +0200
+++ gcc/tree-ssa-alias.c	2013-07-03 10:54:08.575261576 +0200
@@ -1512,6 +1512,7 @@ ref_maybe_used_by_call_p_1 (gimple call,
       case BUILT_IN_GOMP_ATOMIC_START:
       case BUILT_IN_GOMP_ATOMIC_END:
       case BUILT_IN_GOMP_BARRIER:
+      case BUILT_IN_GOMP_BARRIER_CANCEL:
       case BUILT_IN_GOMP_TASKWAIT:
       case BUILT_IN_GOMP_TASKGROUP_END:
       case BUILT_IN_GOMP_CRITICAL_START:
@@ -1519,9 +1520,11 @@ ref_maybe_used_by_call_p_1 (gimple call,
       case BUILT_IN_GOMP_CRITICAL_NAME_START:
       case BUILT_IN_GOMP_CRITICAL_NAME_END:
       case BUILT_IN_GOMP_LOOP_END:
+      case BUILT_IN_GOMP_LOOP_END_CANCEL:
       case BUILT_IN_GOMP_ORDERED_START:
       case BUILT_IN_GOMP_ORDERED_END:
       case BUILT_IN_GOMP_SECTIONS_END:
+      case BUILT_IN_GOMP_SECTIONS_END_CANCEL:
       case BUILT_IN_GOMP_SINGLE_COPY_START:
       case BUILT_IN_GOMP_SINGLE_COPY_END:
	return true;
@@ -1856,6 +1859,7 @@ call_may_clobber_ref_p_1 (gimple call, a
       case BUILT_IN_GOMP_ATOMIC_START:
       case BUILT_IN_GOMP_ATOMIC_END:
       case BUILT_IN_GOMP_BARRIER:
+      case BUILT_IN_GOMP_BARRIER_CANCEL:
       case BUILT_IN_GOMP_TASKWAIT:
       case BUILT_IN_GOMP_TASKGROUP_END:
       case BUILT_IN_GOMP_CRITICAL_START:
@@ -1863,9 +1867,11 @@ call_may_clobber_ref_p_1 (gimple call, a
       case BUILT_IN_GOMP_CRITICAL_NAME_START:
       case BUILT_IN_GOMP_CRITICAL_NAME_END:
       case BUILT_IN_GOMP_LOOP_END:
+      case BUILT_IN_GOMP_LOOP_END_CANCEL:
       case BUILT_IN_GOMP_ORDERED_START:
       case BUILT_IN_GOMP_ORDERED_END:
       case BUILT_IN_GOMP_SECTIONS_END:
+      case BUILT_IN_GOMP_SECTIONS_END_CANCEL:
       case BUILT_IN_GOMP_SINGLE_COPY_START:
       case BUILT_IN_GOMP_SINGLE_COPY_END:
	return true;
--- gcc/c/c-typeck.c.jj	2013-07-02 10:27:52.000000000 +0200
+++ gcc/c/c-typeck.c	2013-07-03 11:17:05.937435854 +0200
@@ -10693,12 +10693,19 @@ c_finish_omp_cancel (location_t loc, tre
		"clauses");
       return;
     }
-  tree stmt = build_call_expr_loc (loc, fn, 1,
-				   build_int_cst (integer_type_node, mask));
   tree ifc = find_omp_clause (clauses, OMP_CLAUSE_IF);
   if (ifc != NULL_TREE)
-    stmt = build3 (COND_EXPR, void_type_node, OMP_CLAUSE_IF_EXPR (ifc),
-		   stmt, NULL_TREE);
+    {
+      tree type = TREE_TYPE (OMP_CLAUSE_IF_EXPR (ifc));
+      ifc = fold_build2_loc (OMP_CLAUSE_LOCATION (ifc), NE_EXPR,
+			     boolean_type_node, OMP_CLAUSE_IF_EXPR (ifc),
+			     build_zero_cst (type));
+    }
+  else
+    ifc = boolean_true_node;
+  tree stmt = build_call_expr_loc (loc, fn, 2,
+				   build_int_cst (integer_type_node, mask),
+				   ifc);
   add_stmt (stmt);
 }
--- gcc/gimple.h.jj	2013-06-26 12:16:02.000000000 +0200
+++ gcc/gimple.h	2013-07-03 10:59:54.002554279 +0200
@@ -1739,6 +1739,36 @@ gimple_omp_return_nowait_p (const_gimple
 }

+/* Set the LHS of OMP return.  */
+
+static inline void
+gimple_omp_return_set_lhs (gimple g, tree lhs)
+{
+  GIMPLE_CHECK (g, GIMPLE_OMP_RETURN);
+  g->gimple_omp_atomic_store.val = lhs;
+}
+
+
+/* Get the LHS of OMP return.  */
+
+static inline tree
+gimple_omp_return_lhs (const_gimple g)
+{
+  GIMPLE_CHECK (g, GIMPLE_OMP_RETURN);
+  return g->gimple_omp_atomic_store.val;
+}
+
+
+/* Return a pointer to the LHS of OMP return.  */
+
+static inline tree *
+gimple_omp_return_lhs_ptr (gimple g)
+{
+  GIMPLE_CHECK (g, GIMPLE_OMP_RETURN);
+  return &g->gimple_omp_atomic_store.val;
+}
+
+
 /* Return true if OMP section statement G has the GF_OMP_SECTION_LAST
    flag set.  */
--- gcc/fortran/types.def.jj	2013-04-10 19:11:23.000000000 +0200
+++ gcc/fortran/types.def	2013-07-03 10:21:16.181699664 +0200
@@ -91,7 +91,7 @@ DEF_FUNCTION_TYPE_1 (BT_FN_VOID_VPTR, BT
 DEF_FUNCTION_TYPE_1 (BT_FN_UINT_UINT, BT_UINT, BT_UINT)
 DEF_FUNCTION_TYPE_1 (BT_FN_PTR_PTR, BT_PTR, BT_PTR)
 DEF_FUNCTION_TYPE_1 (BT_FN_VOID_INT, BT_VOID, BT_INT)
-
+DEF_FUNCTION_TYPE_1 (BT_FN_BOOL_INT, BT_BOOL, BT_INT)

 DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR, BT_FN_VOID_PTR)

@@ -119,7 +119,7 @@ DEF_FUNCTION_TYPE_2 (BT_FN_VOID_VPTR_INT
 DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_VPTR_INT, BT_BOOL, BT_VOLATILE_PTR, BT_INT)
 DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_VPTR, BT_BOOL, BT_SIZE,
		     BT_CONST_VOLATILE_PTR)
-
+DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_INT_BOOL, BT_BOOL, BT_INT, BT_BOOL)

 DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)

--- gcc/cp/semantics.c.jj	2013-07-02 10:27:52.000000000 +0200
+++ gcc/cp/semantics.c	2013-07-02 11:47:51.040824279 +0200
@@ -6091,14 +6091,21 @@ finish_omp_cancel (tree clauses)
		"%, %, % or % clauses");
       return;
     }
-  vec *vec
-    = make_tree_vector_single (build_int_cst (integer_type_node, mask));
-  tree stmt = finish_call_expr (fn, &vec, false, false, tf_warning_or_error);
-  release_tree_vector (vec);
+  vec *vec = make_tree_vector ();
   tree ifc = find_omp_clause (clauses, OMP_CLAUSE_IF);
   if (ifc != NULL_TREE)
-    stmt = build3 (COND_EXPR, void_type_node, OMP_CLAUSE_IF_EXPR (ifc),
-		   stmt, NULL_TREE);
+    {
+      tree type = TREE_TYPE (OMP_CLAUSE_IF_EXPR (ifc));
+      ifc = fold_build2_loc (OMP_CLAUSE_LOCATION (ifc), NE_EXPR,
+			     boolean_type_node, OMP_CLAUSE_IF_EXPR (ifc),
+			     build_zero_cst (type));
+    }
+  else
+    ifc = boolean_true_node;
+  vec->quick_push (build_int_cst (integer_type_node, mask));
+  vec->quick_push (ifc);
+  tree stmt = finish_call_expr (fn, &vec, false, false, tf_warning_or_error);
+  release_tree_vector (vec);
   finish_expr_stmt (stmt);
 }
--- gcc/gimple.def.jj	2013-05-27 09:22:21.000000000 +0200
+++ gcc/gimple.def	2013-07-03 10:58:39.126779615 +0200
@@ -325,7 +325,7 @@ DEFGSCODE(GIMPLE_OMP_PARALLEL, "gimple_o
 DEFGSCODE(GIMPLE_OMP_TASK, "gimple_omp_task", GSS_OMP_TASK)

 /* OMP_RETURN marks the end of an OpenMP directive.  */
-DEFGSCODE(GIMPLE_OMP_RETURN, "gimple_omp_return", GSS_BASE)
+DEFGSCODE(GIMPLE_OMP_RETURN, "gimple_omp_return", GSS_OMP_ATOMIC_STORE)

 /* OMP_SECTION represents #pragma omp section.
    BODY is the sequence of statements in the section body.  */
--- gcc/gimple.c.jj	2013-05-27 09:22:21.000000000 +0200
+++ gcc/gimple.c	2013-07-03 11:00:36.414855573 +0200
@@ -1686,10 +1686,16 @@ walk_gimple_op (gimple stmt, walk_tree_f
	return ret;
       break;

+    case GIMPLE_OMP_RETURN:
+      ret = walk_tree (gimple_omp_return_lhs_ptr (stmt), callback_op, wi,
+		       pset);
+      if (ret)
+	return ret;
+      break;
+
     /* Tuples that do not have operands.  */
     case GIMPLE_NOP:
     case GIMPLE_RESX:
-    case GIMPLE_OMP_RETURN:
     case GIMPLE_PREDICT:
       break;
--- gcc/builtin-types.def.jj	2013-06-26 12:15:18.000000000 +0200
+++ gcc/builtin-types.def	2013-07-03 10:21:07.126846249 +0200
@@ -232,6 +232,7 @@ DEF_FUNCTION_TYPE_1 (BT_FN_ULONGLONG_ULO
 DEF_FUNCTION_TYPE_1 (BT_FN_UINT16_UINT16, BT_UINT16, BT_UINT16)
 DEF_FUNCTION_TYPE_1 (BT_FN_UINT32_UINT32, BT_UINT32, BT_UINT32)
 DEF_FUNCTION_TYPE_1 (BT_FN_UINT64_UINT64, BT_UINT64, BT_UINT64)
+DEF_FUNCTION_TYPE_1 (BT_FN_BOOL_INT, BT_BOOL, BT_INT)

 DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR, BT_FN_VOID_PTR)

@@ -343,6 +344,7 @@ DEF_FUNCTION_TYPE_2 (BT_FN_VOID_VPTR_INT
 DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_VPTR_INT, BT_BOOL, BT_VOLATILE_PTR, BT_INT)
 DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_SIZE_CONST_VPTR, BT_BOOL, BT_SIZE,
		     BT_CONST_VOLATILE_PTR)
+DEF_FUNCTION_TYPE_2 (BT_FN_BOOL_INT_BOOL, BT_BOOL, BT_INT, BT_BOOL)

 DEF_POINTER_TYPE (BT_PTR_FN_VOID_PTR_PTR, BT_FN_VOID_PTR_PTR)

--- gcc/omp-builtins.def.jj	2013-06-21 09:15:13.000000000 +0200
+++ gcc/omp-builtins.def	2013-07-03 10:20:53.687076408 +0200
@@ -39,6 +39,8 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_ATOMIC_E
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER, "GOMP_barrier",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_BARRIER_CANCEL, "GOMP_barrier_cancel",
+		  BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKWAIT, "GOMP_taskwait",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKYIELD, "GOMP_taskyield",
@@ -48,9 +50,9 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKGROU
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_TASKGROUP_END, "GOMP_taskgroup_end",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_CANCEL, "GOMP_cancel",
-		  BT_FN_VOID_INT, ATTR_NULL)
+		  BT_FN_BOOL_INT_BOOL, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_CANCELLATION_POINT, "GOMP_cancellation_point",
-		  BT_FN_VOID_INT, ATTR_NULL)
+		  BT_FN_BOOL_INT, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_CRITICAL_START, "GOMP_critical_start",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_CRITICAL_END, "GOMP_critical_end",
@@ -189,6 +191,8 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL
		   ATTR_NOTHROW_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END, "GOMP_loop_end",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END_CANCEL, "GOMP_loop_end_cancel",
+		  BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_LOOP_END_NOWAIT, "GOMP_loop_end_nowait",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_ORDERED_START, "GOMP_ordered_start",
@@ -209,6 +213,9 @@ DEF_GOMP_BUILTIN (BUILT_IN_GOMP_PARALLEL
		   BT_FN_VOID_OMPFN_PTR_UINT_UINT_UINT, ATTR_NOTHROW_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS_END, "GOMP_sections_end",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
+DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS_END_CANCEL,
+		  "GOMP_sections_end_cancel",
+		  BT_FN_BOOL, ATTR_NOTHROW_LEAF_LIST)
 DEF_GOMP_BUILTIN (BUILT_IN_GOMP_SECTIONS_END_NOWAIT,
		   "GOMP_sections_end_nowait",
		   BT_FN_VOID, ATTR_NOTHROW_LEAF_LIST)
--- gcc/omp-low.c.jj	2013-06-28 17:56:45.000000000 +0200
+++ gcc/omp-low.c	2013-07-03 10:47:23.473930408 +0200
@@ -90,6 +90,10 @@ typedef struct omp_context
     construct.  In the case of a parallel, this is in the child function.  */
  tree block_vars;

+  /* Label to which GOMP_cancel{,llation_point} and explicit and implicit
+     barriers should jump to during omplower pass.  */
+  tree cancel_label;
+
  /* What to do with variables with implicitly determined sharing
     attributes.  */
  enum omp_clause_default_kind default_kind;

@@ -101,6 +105,9 @@ typedef struct omp_context

  /* True if this parallel directive is nested within another.  */
  bool is_nested;
+
+  /* True if this construct can be cancelled.  */
+  bool cancellable;
 } omp_context;

@@ -235,7 +242,7 @@ extract_omp_for_data (gimple for_stmt, s
  else
    fd->loops = &fd->loop;

-  fd->have_nowait = distribute;
+  fd->have_nowait = distribute || simd;
  fd->have_ordered = false;
  fd->sched_kind = OMP_CLAUSE_SCHEDULE_STATIC;
  fd->chunk_size = NULL_TREE;
@@ -2014,9 +2021,92 @@ check_omp_nesting_restrictions (gimple s
	  return true;
	}
      /* FALLTHRU */
+    case GIMPLE_CALL:
+      if (is_gimple_call (stmt)
+	  && (DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+	      == BUILT_IN_GOMP_CANCEL
+	      || DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+		 == BUILT_IN_GOMP_CANCELLATION_POINT))
+	{
+	  const char *bad = NULL;
+	  const char *kind = NULL;
+	  if (ctx == NULL)
+	    {
+	      error_at (gimple_location (stmt), "orphaned %qs construct",
+			DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+			== BUILT_IN_GOMP_CANCEL
+			? "#pragma omp cancel"
+			: "#pragma omp cancellation point");
+	      return false;
+	    }
+	  switch (host_integerp (gimple_call_arg (stmt, 0), 0)
+		  ? tree_low_cst (gimple_call_arg (stmt, 0), 0)
+		  : 0)
+	    {
+	    case 1:
+	      if (gimple_code (ctx->stmt) != GIMPLE_OMP_PARALLEL)
+		bad = "#pragma omp parallel";
+	      else if (DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+		       == BUILT_IN_GOMP_CANCEL
+		       && !integer_zerop (gimple_call_arg (stmt, 1)))
+		ctx->cancellable = true;
+	      kind = "parallel";
+	      break;
+	    case 2:
+	      if (gimple_code (ctx->stmt) != GIMPLE_OMP_FOR
+		  || gimple_omp_for_kind (ctx->stmt) != GF_OMP_FOR_KIND_FOR)
+		bad = "#pragma omp for";
+	      else if (DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+		       == BUILT_IN_GOMP_CANCEL
+		       && !integer_zerop (gimple_call_arg (stmt, 1)))
+		ctx->cancellable = true;
+	      kind = "for";
+	      break;
+	    case 4:
+	      if (gimple_code (ctx->stmt) != GIMPLE_OMP_SECTIONS
+		  && gimple_code (ctx->stmt) != GIMPLE_OMP_SECTION)
+		bad = "#pragma omp sections";
+	      else if (DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+		       == BUILT_IN_GOMP_CANCEL
+		       && !integer_zerop (gimple_call_arg (stmt, 1)))
+		{
+		  if (gimple_code (ctx->stmt) == GIMPLE_OMP_SECTIONS)
+		    ctx->cancellable = true;
+		  else
+		    {
+		      gcc_assert (ctx->outer
+				  && gimple_code (ctx->outer->stmt)
+				     == GIMPLE_OMP_SECTIONS);
+		      ctx->outer->cancellable = true;
+		    }
+		}
+	      kind = "sections";
+	      break;
+	    case 8:
+	      if (gimple_code (ctx->stmt) != GIMPLE_OMP_TASK)
+		bad = "#pragma omp task";
+	      else
+		ctx->cancellable = true;
+	      kind = "taskgroup";
+	      break;
+	    default:
+	      error_at (gimple_location (stmt), "invalid arguments");
+	      return false;
+	    }
+	  if (bad)
+	    {
+	      error_at (gimple_location (stmt),
+			"%<%s %s%> construct not closely nested inside of %qs",
+			DECL_FUNCTION_CODE (gimple_call_fndecl (stmt))
+			== BUILT_IN_GOMP_CANCEL
+			? "#pragma omp cancel"
+			: "#pragma omp cancellation point", kind, bad);
+	      return false;
+	    }
+	}
+      /* FALLTHRU */
     case GIMPLE_OMP_SECTIONS:
     case GIMPLE_OMP_SINGLE:
-    case GIMPLE_CALL:
      for (; ctx != NULL; ctx = ctx->outer)
	switch (gimple_code (ctx->stmt))
	  {
@@ -2191,36 +2281,33 @@ scan_omp_1_stmt (gimple_stmt_iterator *g
  input_location = gimple_location (stmt);

  /* Check the OpenMP nesting restrictions.  */
-  if (ctx != NULL)
+  bool remove = false;
+  if (is_gimple_omp (stmt))
+    remove = !check_omp_nesting_restrictions (stmt, ctx);
+  else if (is_gimple_call (stmt))
+    {
+      tree fndecl = gimple_call_fndecl (stmt);
+      if (fndecl
+	  && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL)
+	switch (DECL_FUNCTION_CODE (fndecl))
+	  {
+	  case BUILT_IN_GOMP_BARRIER:
+	  case BUILT_IN_GOMP_CANCEL:
+	  case BUILT_IN_GOMP_CANCELLATION_POINT:
+	  case BUILT_IN_GOMP_TASKYIELD:
+	  case BUILT_IN_GOMP_TASKWAIT:
+	  case BUILT_IN_GOMP_TASKGROUP_START:
+	  case BUILT_IN_GOMP_TASKGROUP_END:
+	    remove = !check_omp_nesting_restrictions (stmt, ctx);
+	    break;
+	  default:
+	    break;
+	  }
+    }
+  if (remove)
    {
-      bool remove = false;
-      if (is_gimple_omp (stmt))
-	remove = !check_omp_nesting_restrictions (stmt, ctx);
-      else if (is_gimple_call (stmt))
-	{
-	  tree fndecl = gimple_call_fndecl (stmt);
-	  if (fndecl
-	      && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL)
-	    switch (DECL_FUNCTION_CODE (fndecl))
-	      {
-	      case BUILT_IN_GOMP_BARRIER:
-	      case BUILT_IN_GOMP_CANCEL:
-	      case BUILT_IN_GOMP_CANCELLATION_POINT:
-	      case BUILT_IN_GOMP_TASKYIELD:
-	      case BUILT_IN_GOMP_TASKWAIT:
-	      case BUILT_IN_GOMP_TASKGROUP_START:
-	      case BUILT_IN_GOMP_TASKGROUP_END:
-		remove = !check_omp_nesting_restrictions (stmt, ctx);
-		break;
-	      default:
-		break;
-	      }
-	}
-      if (remove)
-	{
-	  stmt = gimple_build_nop ();
-	  gsi_replace (gsi, stmt, false);
-	}
+      stmt = gimple_build_nop ();
+      gsi_replace (gsi, stmt, false);
    }

  *handled_ops_p = true;
@@ -2301,10 +2388,15 @@ scan_omp (gimple_seq *body_p, omp_contex

 /* Build a call to GOMP_barrier.  */

-static tree
-build_omp_barrier (void)
+static gimple
+build_omp_barrier (tree lhs)
 {
-  return build_call_expr (builtin_decl_explicit (BUILT_IN_GOMP_BARRIER), 0);
+  tree fndecl = builtin_decl_explicit (lhs ? BUILT_IN_GOMP_BARRIER_CANCEL
+					   : BUILT_IN_GOMP_BARRIER);
+  gimple g = gimple_build_call (fndecl, 0);
+  if (lhs)
+    gimple_call_set_lhs (g, lhs);
+  return g;
 }

 /* If a context was created for STMT when it was scanned, return it.  */
@@ -3131,7 +3223,7 @@ lower_rec_input_clauses (tree clauses, g
	 #pragma omp distribute.  */
      if (gimple_code (ctx->stmt) != GIMPLE_OMP_FOR
	  || gimple_omp_for_kind (ctx->stmt) == GF_OMP_FOR_KIND_FOR)
-	gimplify_and_add (build_omp_barrier (), ilist);
+	gimple_seq_add_stmt (ilist, build_omp_barrier (NULL_TREE));
    }

  /* If max_vf is non-NULL, then we can use only vectorization factor
@@ -5048,9 +5140,13 @@ expand_omp_for_generic (struct omp_regio
  gsi = gsi_last_bb (exit_bb);
  if (gimple_omp_return_nowait_p (gsi_stmt (gsi)))
    t = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END_NOWAIT);
+  else if (gimple_omp_return_lhs (gsi_stmt (gsi)))
+    t = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END_CANCEL);
  else
    t = builtin_decl_explicit (BUILT_IN_GOMP_LOOP_END);
  stmt = gimple_build_call (t, 0);
+  if (gimple_omp_return_lhs (gsi_stmt (gsi)))
+    gimple_call_set_lhs (stmt, gimple_omp_return_lhs (gsi_stmt (gsi)));
  gsi_insert_after (&gsi, stmt, GSI_SAME_STMT);
  gsi_remove (&gsi, true);

@@ -5443,10 +5539,11 @@ expand_omp_for_static_nochunk (struct om

  /* Replace the GIMPLE_OMP_RETURN with a barrier, or nothing.  */
  gsi = gsi_last_bb (exit_bb);
-  if (!gimple_omp_return_nowait_p (gsi_stmt (gsi))
-      && gimple_omp_for_kind (fd->for_stmt) == GF_OMP_FOR_KIND_FOR)
-    force_gimple_operand_gsi (&gsi, build_omp_barrier (), false, NULL_TREE,
-			      false, GSI_SAME_STMT);
+  if (!gimple_omp_return_nowait_p (gsi_stmt (gsi)))
+    {
+      t = gimple_omp_return_lhs (gsi_stmt (gsi));
+      gsi_insert_after (&gsi, build_omp_barrier (t), GSI_SAME_STMT);
+    }
  gsi_remove (&gsi, true);

  /* Connect all the blocks.  */
@@ -5834,10 +5931,11 @@ expand_omp_for_static_chunk (struct omp_

  /* Replace the GIMPLE_OMP_RETURN with a barrier, or nothing.  */
  si = gsi_last_bb (exit_bb);
-  if (!gimple_omp_return_nowait_p (gsi_stmt (si))
-      && gimple_omp_for_kind (fd->for_stmt) == GF_OMP_FOR_KIND_FOR)
-    force_gimple_operand_gsi (&si, build_omp_barrier (), false, NULL_TREE,
-			      false, GSI_SAME_STMT);
+  if (!gimple_omp_return_nowait_p (gsi_stmt (si)))
+    {
+      t = gimple_omp_return_lhs (gsi_stmt (si));
+      gsi_insert_after (&si, build_omp_barrier (t), GSI_SAME_STMT);
+    }
  gsi_remove (&si, true);

  /* Connect the new blocks.  */
@@ -6540,9 +6638,13 @@ expand_omp_sections (struct omp_region *
  si = gsi_last_bb (l2_bb);
  if (gimple_omp_return_nowait_p (gsi_stmt (si)))
    t = builtin_decl_explicit (BUILT_IN_GOMP_SECTIONS_END_NOWAIT);
+  else if (gimple_omp_return_lhs (gsi_stmt (si)))
+    t = builtin_decl_explicit (BUILT_IN_GOMP_SECTIONS_END_CANCEL);
  else
    t = builtin_decl_explicit (BUILT_IN_GOMP_SECTIONS_END);
  stmt = gimple_build_call (t, 0);
+  if (gimple_omp_return_lhs (gsi_stmt (si)))
+    gimple_call_set_lhs (stmt, gimple_omp_return_lhs (gsi_stmt (si)));
  gsi_insert_after (&si, stmt, GSI_SAME_STMT);
  gsi_remove (&si, true);

@@ -6576,8 +6678,10 @@ expand_omp_single (struct omp_region *re
  si = gsi_last_bb (exit_bb);
  if (!gimple_omp_return_nowait_p (gsi_stmt (si)) || need_barrier)
-    force_gimple_operand_gsi (&si, build_omp_barrier (), false, NULL_TREE,
-			      false, GSI_SAME_STMT);
+    {
+      tree t = gimple_omp_return_lhs (gsi_stmt (si));
+      gsi_insert_after (&si, build_omp_barrier (t), GSI_SAME_STMT);
+    }
  gsi_remove (&si, true);
  single_succ_edge (exit_bb)->flags = EDGE_FALLTHRU;
 }
@@ -7434,6 +7538,32 @@ struct gimple_opt_pass pass_expand_omp =

 /* Routines to lower OpenMP directives into OMP-GIMPLE.  */

+/* If ctx is a worksharing context inside of a cancellable parallel
+   region and it isn't nowait, add lhs to its GIMPLE_OMP_RETURN
+   and conditional branch to parallel's cancel_label to handle
+   cancellation in the implicit barrier.  */
+
+static void
+maybe_add_implicit_barrier_cancel (omp_context *ctx, gimple_seq *body)
+{
+  gimple omp_return = gimple_seq_last_stmt (*body);
+  gcc_assert (gimple_code (omp_return) == GIMPLE_OMP_RETURN);
+  if (gimple_omp_return_nowait_p (omp_return))
+    return;
+  if (ctx->outer
+      && gimple_code (ctx->outer->stmt) == GIMPLE_OMP_PARALLEL
+      && ctx->outer->cancellable)
+    {
+      tree lhs = create_tmp_var (boolean_type_node, NULL);
+      gimple_omp_return_set_lhs (omp_return, lhs);
+      tree fallthru_label = create_artificial_label (UNKNOWN_LOCATION);
+      gimple g = gimple_build_cond (NE_EXPR, lhs, boolean_false_node,
+				    ctx->outer->cancel_label, fallthru_label);
+      gimple_seq_add_stmt (body, g);
+      gimple_seq_add_stmt (body, gimple_build_label (fallthru_label));
+    }
+}
+
 /* Lower the OpenMP sections directive in the current statement in GSI_P.
    CTX is the enclosing OMP context for the current statement.  */
@@ -7517,10 +7647,13 @@ lower_omp_sections (gimple_stmt_iterator

  new_body = maybe_catch_exception (new_body);

+  if (ctx->cancellable)
+    gimple_seq_add_stmt (&new_body, gimple_build_label (ctx->cancel_label));
  t = gimple_build_omp_return
	(!!find_omp_clause (gimple_omp_sections_clauses (stmt),
			    OMP_CLAUSE_NOWAIT));
  gimple_seq_add_stmt (&new_body, t);
+  maybe_add_implicit_barrier_cancel (ctx, &new_body);

  gimple_bind_set_body (new_stmt, new_body);

@@ -7681,6 +7814,7 @@ lower_omp_single (gimple_stmt_iterator *
	(!!find_omp_clause (gimple_omp_single_clauses (single_stmt),
			    OMP_CLAUSE_NOWAIT));
  gimple_seq_add_stmt (&bind_body, t);
+  maybe_add_implicit_barrier_cancel (ctx, &bind_body);
  gimple_bind_set_body (bind, bind_body);

  pop_gimplify_context (bind);
@@ -8042,7 +8176,10 @@ lower_omp_for (gimple_stmt_iterator *gsi
  body = maybe_catch_exception (body);

  /* Region exit marker goes at the end of the loop body.  */
+  if (ctx->cancellable)
+    gimple_seq_add_stmt (&body, gimple_build_label (ctx->cancel_label));
  gimple_seq_add_stmt (&body, gimple_build_omp_return (fd.have_nowait));
+  maybe_add_implicit_barrier_cancel (ctx, &body);
  pop_gimplify_context (new_stmt);

  gimple_bind_append_vars (new_stmt, ctx->block_vars);
@@ -8444,6 +8581,8 @@ lower_omp_taskreg (gimple_stmt_iterator
  gimple_seq_add_seq (&new_body, par_body);
  gimple_seq_add_seq (&new_body, par_olist);
  new_body = maybe_catch_exception (new_body);
+  if (ctx->cancellable)
+    gimple_seq_add_stmt (&new_body, gimple_build_label (ctx->cancel_label));
  gimple_seq_add_stmt (&new_body, gimple_build_omp_return (false));
  gimple_omp_set_body (stmt, new_body);

@@ -8534,16 +8673,23 @@ lower_omp_1 (gimple_stmt_iterator *gsi_p
    case GIMPLE_OMP_PARALLEL:
    case GIMPLE_OMP_TASK:
      ctx = maybe_lookup_ctx (stmt);
+      gcc_assert (ctx);
+      if (ctx->cancellable)
+	ctx->cancel_label = create_artificial_label (UNKNOWN_LOCATION);
      lower_omp_taskreg (gsi_p, ctx);
      break;
    case GIMPLE_OMP_FOR:
      ctx = maybe_lookup_ctx (stmt);
      gcc_assert (ctx);
+      if (ctx->cancellable)
+	ctx->cancel_label = create_artificial_label (UNKNOWN_LOCATION);
      lower_omp_for (gsi_p, ctx);
      break;
    case GIMPLE_OMP_SECTIONS:
      ctx = maybe_lookup_ctx (stmt);
      gcc_assert (ctx);
+      if (ctx->cancellable)
+	ctx->cancel_label = create_artificial_label (UNKNOWN_LOCATION);
      lower_omp_sections (gsi_p, ctx);
      break;
    case GIMPLE_OMP_SINGLE:
@@ -8572,6 +8718,56 @@ lower_omp_1 (gimple_stmt_iterator *gsi_p
			     lower_omp_regimplify_p,
			     ctx ? NULL : &wi, NULL))
	gimple_regimplify_operands (stmt, gsi_p);
      break;
+    case GIMPLE_CALL:
+      tree fndecl;
+      fndecl = gimple_call_fndecl (stmt);
+      if (fndecl
+	  && DECL_BUILT_IN_CLASS (fndecl) == BUILT_IN_NORMAL)
+	switch (DECL_FUNCTION_CODE (fndecl))
+	  {
+	  case BUILT_IN_GOMP_BARRIER:
+	    if (ctx == NULL)
+	      break;
+	    /* FALLTHRU */
+	  case BUILT_IN_GOMP_CANCEL:
+	  case BUILT_IN_GOMP_CANCELLATION_POINT:
+	    omp_context *cctx;
+	    cctx = ctx;
+	    if (gimple_code (cctx->stmt) == GIMPLE_OMP_SECTION)
+	      cctx = cctx->outer;
+	    gcc_assert (gimple_call_lhs (stmt) == NULL_TREE);
+	    if (!cctx->cancellable)
+	      {
+		if (DECL_FUNCTION_CODE (fndecl)
+		    == BUILT_IN_GOMP_CANCELLATION_POINT)
+		  {
+		    stmt = gimple_build_nop ();
+		    gsi_replace (gsi_p, stmt, false);
+		  }
+		break;
+	      }
+	    tree lhs;
+	    lhs = create_tmp_var (boolean_type_node, NULL);
+	    if (DECL_FUNCTION_CODE (fndecl) == BUILT_IN_GOMP_BARRIER)
+	      {
+		fndecl = builtin_decl_explicit (BUILT_IN_GOMP_BARRIER_CANCEL);
+		gimple_call_set_fndecl (stmt, fndecl);
+		gimple_call_set_fntype (stmt, TREE_TYPE (fndecl));
+	      }
+	    gimple_call_set_lhs (stmt, lhs);
+	    tree fallthru_label;
+	    fallthru_label = create_artificial_label (UNKNOWN_LOCATION);
+	    gimple g;
+	    g = gimple_build_label (fallthru_label);
+	    gsi_insert_after (gsi_p, g, GSI_SAME_STMT);
+	    g = gimple_build_cond (NE_EXPR, lhs, boolean_false_node,
+				   cctx->cancel_label, fallthru_label);
+	    gsi_insert_after (gsi_p, g, GSI_SAME_STMT);
+	    break;
+	  default:
+	    break;
+	  }
+      /* FALLTHRU */
    default:
      if ((ctx || task_shared_vars)
	  && walk_gimple_op (stmt, lower_omp_regimplify_p,