From patchwork Sun Mar 31 14:31:45 2013
X-Patchwork-Submitter: Viresh Kumar
X-Patchwork-Id: 232614
X-Patchwork-Delegate: davem@davemloft.net
From: Viresh Kumar
To: tj@kernel.org
Cc: linaro-kernel@lists.linaro.org, patches@linaro.org, robin.randhawa@arm.com,
    Steve.Bannister@arm.com, Liviu.Dudau@arm.com, charles.garcia-tobin@arm.com,
    arvind.chauhan@arm.com, davem@davemloft.net, airlied@redhat.com,
    axboe@kernel.dk, tglx@linutronix.de, peterz@infradead.org, mingo@redhat.com,
    rostedt@goodmis.org, linux-rt-users@vger.kernel.org,
    linux-kernel@vger.kernel.org, Viresh Kumar, netdev@vger.kernel.org
Subject: [PATCH V4 2/4] PHYLIB: queue work on unbound wq
Date: Sun, 31 Mar 2013 20:01:45 +0530
Message-Id: <6be4e9ad2048a2a9d60143f58931c3fe94770175.1364740180.git.viresh.kumar@linaro.org>
List-ID: netdev@vger.kernel.org

Phylib uses workqueues for multiple purposes. There is no real dependency
on running this work on the CPU that queued it. On an idle system, an idle
CPU is observed to wake up many times just to service this work. It would
be better to run it on a CPU the scheduler considers most appropriate.

This patch therefore replaces system_wq with system_unbound_wq in PHYLIB.

Cc: David S. Miller
Cc: netdev@vger.kernel.org
Signed-off-by: Viresh Kumar
Acked-by: David S. Miller
---
 drivers/net/phy/phy.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index c14f147..b2fe180 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -439,7 +439,7 @@ void phy_start_machine(struct phy_device *phydev,
 {
 	phydev->adjust_state = handler;
 
-	schedule_delayed_work(&phydev->state_queue, HZ);
+	queue_delayed_work(system_unbound_wq, &phydev->state_queue, HZ);
 }
 
 /**
@@ -500,7 +500,7 @@ static irqreturn_t phy_interrupt(int irq, void *phy_dat)
 	disable_irq_nosync(irq);
 	atomic_inc(&phydev->irq_disable);
 
-	schedule_work(&phydev->phy_queue);
+	queue_work(system_unbound_wq, &phydev->phy_queue);
 
 	return IRQ_HANDLED;
 }
@@ -655,7 +655,7 @@ static void phy_change(struct work_struct *work)
 
 	/* reschedule state queue work to run as soon as possible */
 	cancel_delayed_work_sync(&phydev->state_queue);
-	schedule_delayed_work(&phydev->state_queue, 0);
+	queue_delayed_work(system_unbound_wq, &phydev->state_queue, 0);
 
 	return;
 
@@ -918,7 +918,8 @@ void phy_state_machine(struct work_struct *work)
 	if (err < 0)
 		phy_error(phydev);
 
-	schedule_delayed_work(&phydev->state_queue, PHY_STATE_TIME * HZ);
+	queue_delayed_work(system_unbound_wq, &phydev->state_queue,
+			PHY_STATE_TIME * HZ);
 }
 
 static inline void mmd_phy_indirect(struct mii_bus *bus, int prtad, int devad,
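
For readers less familiar with the workqueue API, below is a minimal,
hypothetical sketch (not part of this patch) of the same pattern: a driver
with a periodic work item that has no CPU affinity, queued on
system_unbound_wq instead of via schedule_delayed_work(), so the scheduler
is free to place it on any suitable CPU. The poll_work/poll_fn names are
illustrative only.

	/* Hypothetical module illustrating the system_unbound_wq pattern. */
	#include <linux/module.h>
	#include <linux/workqueue.h>

	static struct delayed_work poll_work;	/* illustrative periodic work */

	static void poll_fn(struct work_struct *work)
	{
		/* ... do periodic, CPU-agnostic housekeeping here ... */

		/* Re-arm on the unbound wq: any suitable CPU may run it. */
		queue_delayed_work(system_unbound_wq, &poll_work, HZ);
	}

	static int __init demo_init(void)
	{
		INIT_DELAYED_WORK(&poll_work, poll_fn);

		/*
		 * schedule_delayed_work(&poll_work, HZ) would queue on
		 * system_wq, tying execution to the submitting (possibly
		 * idle) CPU; system_unbound_wq leaves placement to the
		 * scheduler.
		 */
		queue_delayed_work(system_unbound_wq, &poll_work, HZ);
		return 0;
	}

	static void __exit demo_exit(void)
	{
		cancel_delayed_work_sync(&poll_work);
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");

The trade-off is the one the commit message describes: per-CPU work keeps
cache locality but can needlessly wake an idle CPU, while unbound work
gives up locality so an already-awake CPU can service it.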