From patchwork Fri Oct 2 22:46:59 2015
X-Patchwork-Submitter: Aldy Hernandez
X-Patchwork-Id: 525843
To: Jakub Jelinek, gcc-patches
From: Aldy Hernandez
Subject: [gomp4.1] fix dependency scheduling problem
Message-ID: <560F0963.1030102@redhat.com>
Date: Fri, 2 Oct 2015 15:46:59 -0700

Hi.

As I may have mentioned earlier, I'm implementing OpenMP 4.1's task
priorities for GCC's libgomp.  This has had me looking at how we
currently implement scheduling, and I believe we have a problem in how
we rearrange dependencies.

Currently in gomp_task_maybe_wait_for_dependencies(), we have the
following code for rearranging the first dependency (tsk) from a list
of dependencies:

	else if (tsk != task->children)
	  {
	    /* Remove tsk from the sibling list...  */
	    tsk->prev_child->next_child = tsk->next_child;
	    tsk->next_child->prev_child = tsk->prev_child;
	    /* ...and insert it into the running task's children.  */
-->BAD	    tsk->prev_child = task->children;
-->BAD	    tsk->next_child = task->children->next_child;
	    task->children = tsk;
	    tsk->prev_child->next_child = tsk;
	    tsk->next_child->prev_child = tsk;
	  }

If, say, you have a parent_depends_on task PD1 that is in the
following children queue:

	C1 -> C2 -> C3 -> PD1 -> C4

(Where parent->children points to C1 and C4 wraps around to C1 as per
the circular list.)

The above code will transform the children queue into:

	PD1 -> C2 -> C3 -> C4 -> C1

The surrounding code looks sane, but this particular snippet quoted
above has us moving the first child to the end of the queue, which is
probably not what we want.  However, at least we're still keeping the
parent_depends_on tasks first; it's just that we've moved other
previously unaffected children to the end of the queue.  What we
really want is:

	PD1 -> C1 -> C2 -> C3 -> C4

The attached patch does just that.  I have also rewritten the comments
now that I actually understand what's going on :).  Eventually this
will all become clearer with my upcoming/proposed API for dealing with
all these queues.

As discussed on IRC, there is another issue you pointed out in
gomp_task_run_pre(), which I will address in a followup patch.

Tested on x86-64 Linux by running "make check-target-libgomp" with
OMP_NUM_THREADS={1,2,55}.

OK for branch?

commit 6d8f6db0583326d803c7c7abd8ea26cc842643fc
Author: Aldy Hernandez
Date:   Fri Oct 2 15:40:30 2015 -0700

	* task.c (gomp_task_maybe_wait_for_dependencies): Fix
	scheduling problem such that the first non-parent_depends_on
	task does not end up at the end of the children queue.

diff --git a/libgomp/task.c b/libgomp/task.c
index f6a67eb..5c412fc 100644
--- a/libgomp/task.c
+++ b/libgomp/task.c
@@ -1140,17 +1140,37 @@ gomp_task_maybe_wait_for_dependencies (void **depend)
 	  {
 	    tsk->parent_depends_on = true;
 	    ++num_awaited;
+	    /* If a task we need to wait for is not already
+	       running and is ready to be scheduled, move it to
+	       front, so that we run it as soon as possible.
+
+	       We rearrange the children queue such that all
+	       parent_depends_on tasks are first, and
+	       last_parent_depends_on points to the last such task
+	       we rearranged.  For example, given the following
+	       children where PD[123] are the parent_depends_on
+	       tasks:
+
+		 task->children
+		 |
+		 V
+		 C1 -> C2 -> C3 -> PD1 -> PD2 -> PD3 -> C4
+
+	       We rearrange such that:
+
+		 task->children
+		 |		      +--- last_parent_depends_on
+		 |		      |
+		 V		      V
+		 PD1 -> PD2 -> PD3 -> C1 -> C2 -> C3 -> C4
+	    */
+
 	    if (tsk->num_dependees == 0 && tsk->kind == GOMP_TASK_WAITING)
 	      {
-		/* If a task we need to wait for is not already
-		   running and is ready to be scheduled, move it
-		   to front, so that we run it as soon as possible.  */
 		if (last_parent_depends_on)
 		  {
-		    /* Remove tsk from the sibling list...  */
 		    tsk->prev_child->next_child = tsk->next_child;
 		    tsk->next_child->prev_child = tsk->prev_child;
-		    /* ...and insert it into last_parent_depends_on.  */
 		    tsk->prev_child = last_parent_depends_on;
 		    tsk->next_child = last_parent_depends_on->next_child;
 		    tsk->prev_child->next_child = tsk;
@@ -1158,21 +1178,14 @@ gomp_task_maybe_wait_for_dependencies (void **depend)
 		  }
 		else if (tsk != task->children)
 		  {
-		    /* Remove tsk from the sibling list...  */
 		    tsk->prev_child->next_child = tsk->next_child;
 		    tsk->next_child->prev_child = tsk->prev_child;
-		    /* ...and insert it into the running task's
-		       children.  */
-		    tsk->prev_child = task->children;
-		    tsk->next_child = task->children->next_child;
+		    tsk->prev_child = task->children->prev_child;
+		    tsk->next_child = task->children;
 		    task->children = tsk;
 		    tsk->prev_child->next_child = tsk;
 		    tsk->next_child->prev_child = tsk;
 		  }
-		else
-		  {
-		    /* It's already in task->children.  Nothing to do.  */;
-		  }
 		last_parent_depends_on = tsk;
 	      }
 	  }