From patchwork Thu Jul 28 08:19:49 2016
X-Patchwork-Submitter: Bhaktipriya Shridhar
X-Patchwork-Id: 653642
X-Patchwork-Delegate: davem@davemloft.net
Date: Thu, 28 Jul 2016 13:49:49 +0530
From: Bhaktipriya Shridhar
To: Matan Barak, Leon Romanovsky
Cc: netdev@vger.kernel.org, linux-rdma@vger.kernel.org, linux-kernel@vger.kernel.org, Tejun Heo
Subject: [PATCH] net/mlx5_core/pagealloc: Remove deprecated create_singlethread_workqueue
Message-ID: <20160728081949.GA5561@Karyakshetra>
X-Mailing-List: netdev@vger.kernel.org

A dedicated workqueue has been used since the work items run on a memory
reclaim path. WQ_MEM_RECLAIM has been set to guarantee forward progress
under memory pressure.

The workqueue has a single work item, so alloc_workqueue() is used
instead of alloc_ordered_workqueue(): ordering is unnecessary when there
is only one work item. An explicit concurrency limit is likewise
unnecessary, since the number of work items is fixed.
Signed-off-by: Bhaktipriya Shridhar
---
 drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--
2.1.4

diff --git a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
index 9eeee05..7c85262 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/pagealloc.c
@@ -552,7 +552,8 @@ void mlx5_pagealloc_cleanup(struct mlx5_core_dev *dev)
 
 int mlx5_pagealloc_start(struct mlx5_core_dev *dev)
 {
-	dev->priv.pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
+	dev->priv.pg_wq = alloc_workqueue("mlx5_page_allocator",
+					  WQ_MEM_RECLAIM, 0);
 	if (!dev->priv.pg_wq)
 		return -ENOMEM;
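For readers unfamiliar with the workqueue API, the conversion above can be
sketched as follows. This is a minimal illustration of the two calls, not the
driver's actual code; it uses kernel-internal APIs and is not standalone
runnable, and the `start_old`/`start_new` function names are hypothetical:

```c
#include <linux/workqueue.h>

static struct workqueue_struct *pg_wq;

static int start_old(void)
{
	/* Legacy helper: a single-threaded (ordered) workqueue. It is
	 * deprecated in favor of alloc_workqueue()/alloc_ordered_workqueue(),
	 * which make the required guarantees explicit at the call site.
	 */
	pg_wq = create_singlethread_workqueue("mlx5_page_allocator");
	return pg_wq ? 0 : -ENOMEM;
}

static int start_new(void)
{
	/* WQ_MEM_RECLAIM guarantees a rescuer thread, so queued work can
	 * still make forward progress when memory pressure prevents new
	 * worker threads from being created. The final argument (0) means
	 * the default max_active limit. alloc_ordered_workqueue() is not
	 * needed because only one work item is ever queued, so ordering
	 * between items cannot matter.
	 */
	pg_wq = alloc_workqueue("mlx5_page_allocator", WQ_MEM_RECLAIM, 0);
	return pg_wq ? 0 : -ENOMEM;
}
```

The practical difference is that the new call states intent: this queue needs
reclaim-safe forward progress but not ordering, which is exactly what the
commit message argues.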