From patchwork Wed Jan 30 21:58:24 2013
X-Patchwork-Submitter: Herton Ronaldo Krzesinski
X-Patchwork-Id: 217020
From: Herton Ronaldo Krzesinski
To: Tejun Heo
Cc: Linus Torvalds, kernel-team@lists.ubuntu.com, Arjan van de Ven
Subject: [ 3.5.y.z extended stable ] Patch "async: fix __lowest_in_progress()" has been added to staging queue
Date: Wed, 30 Jan 2013 19:58:24 -0200
Message-Id: <1359583104-18117-1-git-send-email-herton.krzesinski@canonical.com>
X-Mailer: git-send-email 1.7.9.5
X-Extended-Stable: 3.5

This is a note to let you know that I have just added a patch titled

    async: fix __lowest_in_progress()

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please
reply to this email.

For more information about the 3.5.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Herton

------

From 235327d41022f54d0c09a15de4e5e75d290f9f9a Mon Sep 17 00:00:00 2001
From: Tejun Heo
Date: Tue, 22 Jan 2013 16:15:15 -0800
Subject: [PATCH] async: fix __lowest_in_progress()

commit f56c3196f251012de9b3ebaff55732a9074fdaae upstream.

Commit 083b804c4d3e ("async: use workqueue for worker pool") made it
possible for async jobs to be moved from pending to running
out-of-order.  While pending async jobs will be queued and dispatched
for execution in the same order, nothing guarantees they'll enter
"1) move self to the running queue" of async_run_entry_fn() in the
same order.

Before the conversion, async implemented its own worker pool.  An
async worker, upon being woken up, fetched the first item from the
pending list, which kept the executing lists sorted.  The conversion
to workqueue was done by adding a work_struct to each async_entry;
async now just schedules the work item.  The queueing and dispatching
of such work items are still in order, but now each worker thread is
associated with a specific async_entry and moves that specific
async_entry to the executing list.
So, depending on which worker reaches that point earlier, which is
non-deterministic, we may end up moving an async_entry with a larger
cookie ahead of one with a smaller cookie.

This broke __lowest_in_progress().  running->domain may not be
properly sorted and is not guaranteed to contain lower cookies than
the pending list when not empty.  Fix it by sort-inserting into the
running list and by always looking at both the pending and running
lists when determining the lowest cookie.

Over time, the async synchronization implementation became quite
messy.  We would do better to restructure it so that each async_entry
is linked to two lists - one global and one per domain - and is never
moved once execution starts.  There is no reason to distinguish
pending from running; they behave the same for synchronization
purposes.

Signed-off-by: Tejun Heo
Cc: Arjan van de Ven
Signed-off-by: Linus Torvalds
[ herton: there is no struct async_domain on 3.5, adjust for using
  list_head directly instead ]
Signed-off-by: Herton Ronaldo Krzesinski
---
 kernel/async.c | 27 ++++++++++++++++++++-------
 1 file changed, 20 insertions(+), 7 deletions(-)

--
1.7.9.5

diff --git a/kernel/async.c b/kernel/async.c
index 32d8dc9..f383244 100644
--- a/kernel/async.c
+++ b/kernel/async.c
@@ -84,6 +84,8 @@ static atomic_t entry_count;
  */
 static async_cookie_t __lowest_in_progress(struct list_head *running)
 {
+	async_cookie_t first_running = next_cookie;	/* infinity value */
+	async_cookie_t first_pending = next_cookie;	/* ditto */
 	struct async_entry *entry;
 
 	if (!running) { /* just check the entry count */
@@ -93,17 +95,24 @@ static async_cookie_t __lowest_in_progress(struct list_head *running)
 		return next_cookie;
 	}
 
+	/*
+	 * Both running and pending lists are sorted but not disjoint.
+	 * Take the first cookies from both and return the min.
+	 */
 	if (!list_empty(running)) {
 		entry = list_first_entry(running,
			struct async_entry, list);
-		return entry->cookie;
+		first_running = entry->cookie;
 	}
 
-	list_for_each_entry(entry, &async_pending, list)
-		if (entry->running == running)
-			return entry->cookie;
+	list_for_each_entry(entry, &async_pending, list) {
+		if (entry->running == running) {
+			first_pending = entry->cookie;
+			break;
+		}
+	}
 
-	return next_cookie;	/* "infinity" value */
+	return min(first_running, first_pending);
 }
 
 static async_cookie_t lowest_in_progress(struct list_head *running)
@@ -124,12 +133,16 @@ static void async_run_entry_fn(struct work_struct *work)
 {
 	struct async_entry *entry =
 		container_of(work, struct async_entry, work);
+	struct async_entry *pos;
 	unsigned long flags;
 	ktime_t uninitialized_var(calltime), delta, rettime;
 
-	/* 1) move self to the running queue */
+	/* 1) move self to the running queue, make sure it stays sorted */
 	spin_lock_irqsave(&async_lock, flags);
-	list_move_tail(&entry->list, entry->running);
+	list_for_each_entry_reverse(pos, entry->running, list)
+		if (entry->cookie < pos->cookie)
+			break;
+	list_move_tail(&entry->list, &pos->list);
 	spin_unlock_irqrestore(&async_lock, flags);
 
 	/* 2) run (and print duration) */
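
For readers who do not work with the kernel list API every day, here is a
small, self-contained sketch of the idea behind the fix, written in plain
userspace C with made-up names (struct entry, sorted_insert(), list_remove(),
lowest_in_progress()); it is only an illustration of the logic, not the
kernel code. Both lists are kept sorted by cookie, and the lowest cookie
still in progress is the minimum of the two list heads, so a job that starts
running out of order can no longer hide a smaller pending cookie.

#include <stdio.h>

struct entry {
	unsigned long long cookie;
	struct entry *next;
};

/* Insert so the list stays sorted by cookie (stand-in for the
 * list_for_each_entry_reverse() walk the patch adds to async_run_entry_fn()). */
static void sorted_insert(struct entry **head, struct entry *e)
{
	while (*head && (*head)->cookie < e->cookie)
		head = &(*head)->next;
	e->next = *head;
	*head = e;
}

/* Unlink an entry from a list (stand-in for list_move_tail()'s removal step). */
static void list_remove(struct entry **head, struct entry *e)
{
	while (*head && *head != e)
		head = &(*head)->next;
	if (*head)
		*head = e->next;
}

/* Stand-in for the fixed __lowest_in_progress(): look at the head of both
 * sorted lists and return the smaller cookie, or "infinity" if both are empty. */
static unsigned long long lowest_in_progress(const struct entry *pending,
					     const struct entry *running,
					     unsigned long long next_cookie)
{
	unsigned long long first_pending = next_cookie;	/* "infinity" value */
	unsigned long long first_running = next_cookie;	/* ditto */

	if (pending)
		first_pending = pending->cookie;
	if (running)
		first_running = running->cookie;
	return first_pending < first_running ? first_pending : first_running;
}

int main(void)
{
	struct entry e1 = { 1, NULL }, e2 = { 2, NULL }, e3 = { 3, NULL };
	struct entry *pending = NULL, *running = NULL;

	/* Jobs 1-3 are queued in cookie order... */
	sorted_insert(&pending, &e1);
	sorted_insert(&pending, &e2);
	sorted_insert(&pending, &e3);

	/* ...but job 3 reaches "move self to the running queue" first. */
	list_remove(&pending, &e3);
	sorted_insert(&running, &e3);

	/* Job 1 is still pending, so the lowest cookie in progress is 1. */
	printf("lowest in progress: %llu\n",
	       lowest_in_progress(pending, running, 4 /* next_cookie */));
	return 0;
}

Built with any C compiler, this prints "lowest in progress: 1" even though
entry 3 reached the running list first, which is the property
__lowest_in_progress() lost after the workqueue conversion and regains with
this patch.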