From patchwork Fri Jan 16 02:23:23 2015
X-Patchwork-Submitter: Ying Xue
X-Patchwork-Id: 429661
X-Patchwork-Delegate: davem@davemloft.net
From: Ying Xue <ying.xue@windriver.com>
Subject: [PATCH net-next v4] rhashtable: Fix race in rhashtable_destroy() and use regular work_struct
Date: Fri, 16 Jan 2015 10:23:23 +0800
Message-ID: <1421375003-7389-1-git-send-email-ying.xue@windriver.com>
X-Mailing-List: netdev@vger.kernel.org

The deferred resize work is always queued on the global workqueue with
schedule_delayed_work() and a delay of zero, so the delayed_work
machinery buys us nothing. Define a regular work_struct in the
rhashtable structure instead of a delayed_work, and queue it with
schedule_work().

Also check whether the grow/shrink decision callbacks are set before
cancelling the work item in rhashtable_destroy(). The work is only
initialized when at least one of those callbacks is present, so this
avoids cancelling an uninitialized work item.

Lastly, rhashtable_destroy() cancelled the pending work with
cancel_delayed_work() while holding ht->mutex. To close the race with a
concurrently running worker, the work must instead be cancelled
synchronously with cancel_work_sync(), which does not return until any
running work item has completed. But the work function,
rht_deferred_worker(), also acquires ht->mutex, so waiting for it while
the mutex is held would deadlock. Move the cancellation out of the
mutex-protected region to avoid this.
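To make the ordering concrete, here is a minimal userspace sketch of
the deadlock and of the fixed ordering. A pthread mutex and a joined
thread stand in for ht->mutex and the synchronously cancelled work
item; the names and the pthread analogy are illustrative only, not the
kernel code:

/*
 * Userspace sketch of the deadlock (pthreads stand in for ht->mutex
 * and the workqueue; all names here are illustrative, not kernel API).
 * Build with: cc sketch.c -lpthread
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t ht_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Plays the role of rht_deferred_worker(): it must take ht_mutex. */
static void *deferred_worker(void *unused)
{
	pthread_mutex_lock(&ht_mutex);
	/* ... resize the table ... */
	pthread_mutex_unlock(&ht_mutex);
	return NULL;
}

int main(void)
{
	pthread_t worker;

	pthread_create(&worker, NULL, deferred_worker, NULL);

	/*
	 * Broken ordering (old rhashtable_destroy()):
	 *
	 *	pthread_mutex_lock(&ht_mutex);
	 *	pthread_join(worker, NULL);	// waits for the worker...
	 *	pthread_mutex_unlock(&ht_mutex);
	 *
	 * If the worker has not taken ht_mutex yet, it blocks on the
	 * lock we hold while we block waiting for it: deadlock.
	 *
	 * Fixed ordering (this patch): wait first, lock afterwards.
	 */
	pthread_join(worker, NULL);	/* like cancel_work_sync() */

	pthread_mutex_lock(&ht_mutex);
	/* ... free the bucket table ... */
	pthread_mutex_unlock(&ht_mutex);

	printf("destroy completed without deadlock\n");
	return 0;
}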
Fixes: 97defe1 ("rhashtable: Per bucket locks & deferred expansion/shrinking")
Signed-off-by: Ying Xue <ying.xue@windriver.com>
Cc: Thomas Graf
---
Change log:
v4:
 - Add an empty line in rhashtable_destroy()
 - Revise commit description
v3:
 - Revise patch header description from Thomas's suggestion
v2:
 - Move cancel_work_sync() out of ht->mutex lock scope from Thomas's comments

 include/linux/rhashtable.h |  2 +-
 lib/rhashtable.c           | 12 ++++++------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/include/linux/rhashtable.h b/include/linux/rhashtable.h
index 9570832..a2562ed 100644
--- a/include/linux/rhashtable.h
+++ b/include/linux/rhashtable.h
@@ -119,7 +119,7 @@ struct rhashtable {
 	atomic_t			nelems;
 	atomic_t			shift;
 	struct rhashtable_params	p;
-	struct delayed_work		run_work;
+	struct work_struct		run_work;
 	struct mutex			mutex;
 	bool				being_destroyed;
 };
diff --git a/lib/rhashtable.c b/lib/rhashtable.c
index aca6998..84a78e3 100644
--- a/lib/rhashtable.c
+++ b/lib/rhashtable.c
@@ -485,7 +485,7 @@ static void rht_deferred_worker(struct work_struct *work)
 	struct rhashtable *ht;
 	struct bucket_table *tbl;
 
-	ht = container_of(work, struct rhashtable, run_work.work);
+	ht = container_of(work, struct rhashtable, run_work);
 	mutex_lock(&ht->mutex);
 	tbl = rht_dereference(ht->tbl, ht);
 
@@ -507,7 +507,7 @@ static void rhashtable_wakeup_worker(struct rhashtable *ht)
 	if (tbl == new_tbl &&
 	    ((ht->p.grow_decision && ht->p.grow_decision(ht, size)) ||
 	     (ht->p.shrink_decision && ht->p.shrink_decision(ht, size))))
-		schedule_delayed_work(&ht->run_work, 0);
+		schedule_work(&ht->run_work);
 }
 
 static void __rhashtable_insert(struct rhashtable *ht, struct rhash_head *obj,
@@ -903,7 +903,7 @@ int rhashtable_init(struct rhashtable *ht, struct rhashtable_params *params)
 	get_random_bytes(&ht->p.hash_rnd, sizeof(ht->p.hash_rnd));
 
 	if (ht->p.grow_decision || ht->p.shrink_decision)
-		INIT_DEFERRABLE_WORK(&ht->run_work, rht_deferred_worker);
+		INIT_WORK(&ht->run_work, rht_deferred_worker);
 
 	return 0;
 }
@@ -921,11 +921,11 @@ void rhashtable_destroy(struct rhashtable *ht)
 {
 	ht->being_destroyed = true;
 
-	mutex_lock(&ht->mutex);
+	if (ht->p.grow_decision || ht->p.shrink_decision)
+		cancel_work_sync(&ht->run_work);
 
-	cancel_delayed_work(&ht->run_work);
+	mutex_lock(&ht->mutex);
 	bucket_table_free(rht_dereference(ht->tbl, ht));
-
 	mutex_unlock(&ht->mutex);
 }
 EXPORT_SYMBOL_GPL(rhashtable_destroy);