From patchwork Thu Sep 19 09:23:48 2019
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1164486
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Xenial/gcp][PATCH 6/11] net: allow to call netif_reset_xps_queues() under cpus_read_lock
Date: Thu, 19 Sep 2019 05:23:48 -0400
Message-Id: <20190919092353.29993-7-khalid.elmously@canonical.com>
In-Reply-To: <20190919092353.29993-1-khalid.elmously@canonical.com>
References: <20190919092353.29993-1-khalid.elmously@canonical.com>

From: Andrei Vagin

BugLink: https://bugs.launchpad.net/bugs/1810457

The definition of static_key_slow_inc() has cpus_read_lock in place.
In the virtio_net driver, XPS queues are initialized after setting
the queue:cpu affinity in virtnet_set_affinity() which is already
protected within cpus_read_lock. Lockdep prints a warning when we
are trying to acquire cpus_read_lock when it is already held.

This patch adds an ability to call __netif_set_xps_queue under
cpus_read_lock().

Acked-by: Jason Wang

============================================
WARNING: possible recursive locking detected
4.18.0-rc3-next-20180703+ #1 Not tainted
--------------------------------------------
swapper/0/1 is trying to acquire lock:
00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: static_key_slow_inc+0xe/0x20

but task is already holding lock:
00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(cpu_hotplug_lock.rw_sem);
  lock(cpu_hotplug_lock.rw_sem);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

3 locks held by swapper/0/1:
 #0: 00000000244bc7da (&dev->mutex){....}, at: __driver_attach+0x5a/0x110
 #1: 00000000cf973d46 (cpu_hotplug_lock.rw_sem){++++}, at: init_vqs+0x513/0x5a0
 #2: 000000005cd8463f (xps_map_mutex){+.+.}, at: __netif_set_xps_queue+0x8d/0xc60

v2: move cpus_read_lock() out of __netif_set_xps_queue()

Cc: "Nambiar, Amritha"
Cc: "Michael S. Tsirkin"
Cc: Jason Wang
Fixes: 8af2c06ff4b1 ("net-sysfs: Add interface for Rx queue(s) map per Tx queue")
Signed-off-by: Andrei Vagin
Signed-off-by: David S. Miller
(cherry picked from commit 4d99f6602cb552fb58db0c3b1d935bb6fa017f24)
Signed-off-by: Marcelo Henrique Cerri
Signed-off-by: Khalid Elmously
---
 drivers/net/virtio_net.c |  4 +++-
 net/core/dev.c           | 20 +++++++++++++++-----
 net/core/net-sysfs.c     |  4 ++++
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 2b6916c012d2..194085a5f372 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1739,9 +1739,11 @@ static void virtnet_set_affinity(struct virtnet_info *vi)
 
 	i = 0;
 	for_each_online_cpu(cpu) {
+		const unsigned long *mask = cpumask_bits(cpumask_of(cpu));
+
 		virtqueue_set_affinity(vi->rq[i].vq, cpu);
 		virtqueue_set_affinity(vi->sq[i].vq, cpu);
-		netif_set_xps_queue(vi->dev, cpumask_of(cpu), i);
+		__netif_set_xps_queue(vi->dev, mask, i, false);
 		i++;
 	}
 
diff --git a/net/core/dev.c b/net/core/dev.c
index 1ec9cda3c33c..96e4f57647ab 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -2153,6 +2153,7 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset,
 	if (!static_key_false(&xps_needed))
 		return;
 
+	cpus_read_lock();
 	mutex_lock(&xps_map_mutex);
 
 	if (static_key_false(&xps_rxqs_needed)) {
@@ -2176,10 +2177,11 @@ static void netif_reset_xps_queues(struct net_device *dev, u16 offset,
 out_no_maps:
 	if (static_key_enabled(&xps_rxqs_needed))
-		static_key_slow_dec(&xps_rxqs_needed);
+		static_key_slow_dec_cpuslocked(&xps_rxqs_needed);
 
-	static_key_slow_dec(&xps_needed);
+	static_key_slow_dec_cpuslocked(&xps_needed);
 	mutex_unlock(&xps_map_mutex);
+	cpus_read_unlock();
 }
 
 static void netif_reset_xps_queues_gt(struct net_device *dev, u16 index)
@@ -2227,6 +2229,7 @@ static struct xps_map *expand_xps_map(struct xps_map *map, int attr_index,
 	return new_map;
 }
 
+/* Must be called under cpus_read_lock */
 int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
 			  u16 index, bool is_rxqs_map)
 {
@@ -2287,9 +2290,9 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
 	if (!new_dev_maps)
 		goto out_no_new_maps;
 
-	static_key_slow_inc(&xps_needed);
+	static_key_slow_inc_cpuslocked(&xps_needed);
 	if (is_rxqs_map)
-		static_key_slow_inc(&xps_rxqs_needed);
+		static_key_slow_inc_cpuslocked(&xps_rxqs_needed);
 
 	for (j = -1; j = netif_attrmask_next(j, possible_mask, nr_ids),
 	     j < nr_ids;) {
@@ -2418,11 +2421,18 @@ int __netif_set_xps_queue(struct net_device *dev, const unsigned long *mask,
 	kfree(new_dev_maps);
 	return -ENOMEM;
 }
+EXPORT_SYMBOL_GPL(__netif_set_xps_queue);
 
 int netif_set_xps_queue(struct net_device *dev, const struct cpumask *mask,
 			u16 index)
 {
-	return __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
+	int ret;
+
+	cpus_read_lock();
+	ret = __netif_set_xps_queue(dev, cpumask_bits(mask), index, false);
+	cpus_read_unlock();
+
+	return ret;
 }
 EXPORT_SYMBOL(netif_set_xps_queue);
 
diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index 209ce2320f5f..85bb80ae6c30 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -26,6 +26,7 @@
 #include <linux/pm_runtime.h>
 #include <linux/of.h>
 #include <linux/of_net.h>
+#include <linux/cpu.h>
 
 #include "net-sysfs.h"
 
@@ -1394,7 +1395,10 @@ static ssize_t xps_rxqs_store(struct netdev_queue *queue, const char *buf,
 		return err;
 	}
 
+	cpus_read_lock();
 	err = __netif_set_xps_queue(dev, mask, index, true);
+	cpus_read_unlock();
+
 	kfree(mask);
 	return err ? : len;
 }