From patchwork Wed Jun 10 16:12:24 2020
X-Patchwork-Submitter: Nitesh Narayan Lal
X-Patchwork-Id: 1307066
From: Nitesh Narayan Lal
To: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, frederic@kernel.org,
 mtosatti@redhat.com, juri.lelli@redhat.com, abelits@marvell.com,
 bhelgaas@google.com, linux-pci@vger.kernel.org, rostedt@goodmis.org,
 mingo@kernel.org, peterz@infradead.org, tglx@linutronix.de
Subject: [Patch v1 1/3] lib: Restrict cpumask_local_spread to only housekeeping CPUs
Date: Wed, 10 Jun 2020 12:12:24 -0400
Message-Id: <20200610161226.424337-2-nitesh@redhat.com>
In-Reply-To: <20200610161226.424337-1-nitesh@redhat.com>
References: <20200610161226.424337-1-nitesh@redhat.com>

From: Alex Belits

The current implementation of cpumask_local_spread() does not respect
isolated CPUs: even if a CPU has been isolated for a real-time task, it
is still returned to the caller for pinning IRQ threads. Having these
unwanted IRQ threads on an isolated CPU adds a latency overhead.
Restrict the CPUs returned for spreading IRQs to only the available
housekeeping CPUs.

Signed-off-by: Alex Belits
Signed-off-by: Nitesh Narayan Lal
---
 lib/cpumask.c | 43 +++++++++++++++++++++++++------------------
 1 file changed, 25 insertions(+), 18 deletions(-)

diff --git a/lib/cpumask.c b/lib/cpumask.c
index fb22fb266f93..cc4311a8c079 100644
--- a/lib/cpumask.c
+++ b/lib/cpumask.c
@@ -6,6 +6,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>

 /**
  * cpumask_next - get the next cpu in a cpumask
@@ -205,28 +206,34 @@ void __init free_bootmem_cpumask_var(cpumask_var_t mask)
  */
 unsigned int cpumask_local_spread(unsigned int i, int node)
 {
-	int cpu;
+	int cpu, m, n, hk_flags;
+	const struct cpumask *mask;
+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	mask = housekeeping_cpumask(hk_flags);
+	m = cpumask_weight(mask);
 	/* Wrap: we always want a cpu. */
-	i %= num_online_cpus();
+	n = i % m;
+	while (m-- > 0) {
+		if (node == NUMA_NO_NODE) {
+			for_each_cpu(cpu, mask)
+				if (n-- == 0)
+					return cpu;
+		} else {
+			/* NUMA first. */
+			for_each_cpu_and(cpu, cpumask_of_node(node), mask)
+				if (n-- == 0)
+					return cpu;
-	if (node == NUMA_NO_NODE) {
-		for_each_cpu(cpu, cpu_online_mask)
-			if (i-- == 0)
-				return cpu;
-	} else {
-		/* NUMA first. */
-		for_each_cpu_and(cpu, cpumask_of_node(node), cpu_online_mask)
-			if (i-- == 0)
-				return cpu;
+			for_each_cpu(cpu, mask) {
+				/* Skip NUMA nodes, done above. */
+				if (cpumask_test_cpu(cpu,
+						     cpumask_of_node(node)))
+					continue;
-		for_each_cpu(cpu, cpu_online_mask) {
-			/* Skip NUMA nodes, done above. */
-			if (cpumask_test_cpu(cpu, cpumask_of_node(node)))
-				continue;
-
-			if (i-- == 0)
-				return cpu;
+			if (n-- == 0)
+				return cpu;
+			}
+		}
 	}
 	BUG();
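For context, a typical consumer of cpumask_local_spread() is a driver that
spreads its per-queue interrupts across CPUs; after this change every CPU it
is handed back comes from the housekeeping set. A minimal sketch of such a
caller, purely illustrative and not part of this patch (the helper name and
its arguments are made up):

#include <linux/cpumask.h>
#include <linux/interrupt.h>

/*
 * Illustrative only: pin one IRQ per queue, letting cpumask_local_spread()
 * pick the target CPU for each index.  With the change above, the CPUs it
 * returns are drawn from the housekeeping mask, so isolated CPUs never end
 * up hosting these IRQ threads.
 */
static void example_spread_queue_irqs(const int *irqs, int nr_queues, int node)
{
	int i;

	for (i = 0; i < nr_queues; i++) {
		unsigned int cpu = cpumask_local_spread(i, node);

		irq_set_affinity_hint(irqs[i], cpumask_of(cpu));
	}
}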
From patchwork Wed Jun 10 16:12:25 2020
X-Patchwork-Submitter: Nitesh Narayan Lal
X-Patchwork-Id: 1307064
From: Nitesh Narayan Lal
To: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, frederic@kernel.org,
 mtosatti@redhat.com, juri.lelli@redhat.com, abelits@marvell.com,
 bhelgaas@google.com, linux-pci@vger.kernel.org, rostedt@goodmis.org,
 mingo@kernel.org, peterz@infradead.org, tglx@linutronix.de
Subject: [Patch v1 2/3] PCI: Prevent work_on_cpu()'s probe from executing on isolated CPUs
Date: Wed, 10 Jun 2020 12:12:25 -0400
Message-Id: <20200610161226.424337-3-nitesh@redhat.com>
In-Reply-To: <20200610161226.424337-1-nitesh@redhat.com>
References: <20200610161226.424337-1-nitesh@redhat.com>

From: Alex Belits

pci_call_probe() prevents the nesting of work_on_cpu() for the scenario
where a VF device is probed from the work_on_cpu() context of its
physical function. Change the cpumask used in pci_call_probe() from all
online CPUs to only the housekeeping CPUs, ensuring that no additional
latency overhead is caused by pinning probe jobs on isolated CPUs.
Signed-off-by: Alex Belits
Signed-off-by: Nitesh Narayan Lal
Acked-by: Bjorn Helgaas
Reviewed-by: Frederic Weisbecker
---
 drivers/pci/pci-driver.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/pci/pci-driver.c b/drivers/pci/pci-driver.c
index da6510af1221..449466f71040 100644
--- a/drivers/pci/pci-driver.c
+++ b/drivers/pci/pci-driver.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>
 #include
 #include
 #include
@@ -333,6 +334,7 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 			  const struct pci_device_id *id)
 {
 	int error, node, cpu;
+	int hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
 	struct drv_dev_and_id ddi = { drv, dev, id };

 	/*
@@ -353,7 +355,8 @@ static int pci_call_probe(struct pci_driver *drv, struct pci_dev *dev,
 	    pci_physfn_is_probed(dev))
 		cpu = nr_cpu_ids;
 	else
-		cpu = cpumask_any_and(cpumask_of_node(node), cpu_online_mask);
+		cpu = cpumask_any_and(cpumask_of_node(node),
+				      housekeeping_cpumask(hk_flags));

 	if (cpu < nr_cpu_ids)
 		error = work_on_cpu(cpu, local_pci_probe, &ddi);
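To illustrate the selection this hunk changes: cpumask_any_and() now picks an
arbitrary CPU from the intersection of the device's NUMA node mask and the
housekeeping mask, and returns a value >= nr_cpu_ids when that intersection
is empty, in which case pci_call_probe() falls back to probing on the current
CPU instead of queueing the probe with work_on_cpu(). A self-contained sketch
of the same pattern, illustrative only (the helper name is made up):

#include <linux/cpumask.h>
#include <linux/sched/isolation.h>
#include <linux/topology.h>

/*
 * Illustrative only: choose a probe CPU the way pci_call_probe() does with
 * this patch applied.  A return value >= nr_cpu_ids means "no housekeeping
 * CPU on this node", and the caller is expected to run the probe locally
 * rather than hand it to work_on_cpu().
 */
static unsigned int example_pick_probe_cpu(int node)
{
	const struct cpumask *hk = housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ);

	return cpumask_any_and(cpumask_of_node(node), hk);
}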
From patchwork Wed Jun 10 16:12:26 2020
X-Patchwork-Submitter: Nitesh Narayan Lal
X-Patchwork-Id: 1307065
From: Nitesh Narayan Lal
To: linux-kernel@vger.kernel.org, linux-api@vger.kernel.org, frederic@kernel.org,
 mtosatti@redhat.com, juri.lelli@redhat.com, abelits@marvell.com,
 bhelgaas@google.com, linux-pci@vger.kernel.org, rostedt@goodmis.org,
 mingo@kernel.org, peterz@infradead.org, tglx@linutronix.de
Subject: [Patch v1 3/3] net: restrict queuing of receive packets to housekeeping CPUs
Date: Wed, 10 Jun 2020 12:12:26 -0400
Message-Id: <20200610161226.424337-4-nitesh@redhat.com>
In-Reply-To: <20200610161226.424337-1-nitesh@redhat.com>
References: <20200610161226.424337-1-nitesh@redhat.com>

From: Alex Belits

With the existing implementation of store_rps_map(), packets in the
receive path are queued on the backlog queues of other CPUs irrespective
of whether those CPUs are isolated or not. This can add a latency
overhead to any RT workload running on such a CPU. Ensure that
store_rps_map() uses only the available housekeeping CPUs when storing
the rps_map.

Signed-off-by: Alex Belits
Signed-off-by: Nitesh Narayan Lal
---
 net/core/net-sysfs.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/net/core/net-sysfs.c b/net/core/net-sysfs.c
index e353b822bb15..16e433287191 100644
--- a/net/core/net-sysfs.c
+++ b/net/core/net-sysfs.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/sched/isolation.h>
 #include
 #include
 #include
@@ -741,7 +742,7 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 {
 	struct rps_map *old_map, *map;
 	cpumask_var_t mask;
-	int err, cpu, i;
+	int err, cpu, i, hk_flags;
 	static DEFINE_MUTEX(rps_map_mutex);

 	if (!capable(CAP_NET_ADMIN))
@@ -756,6 +757,13 @@ static ssize_t store_rps_map(struct netdev_rx_queue *queue,
 		return err;
 	}

+	hk_flags = HK_FLAG_DOMAIN | HK_FLAG_WQ;
+	cpumask_and(mask, mask, housekeeping_cpumask(hk_flags));
+	if (cpumask_weight(mask) == 0) {
+		free_cpumask_var(mask);
+		return -EINVAL;
+	}
+
 	map = kzalloc(max_t(unsigned int, RPS_MAP_SIZE(cpumask_weight(mask)),
 			    L1_CACHE_BYTES), GFP_KERNEL);
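Restated outside of the sysfs plumbing, the check added above amounts to the
following: the user-supplied RPS mask is intersected with the housekeeping
mask, and the write is rejected if nothing survives, so received packets can
never be steered onto isolated CPUs. A minimal sketch, illustrative only
(the helper name is made up):

#include <linux/cpumask.h>
#include <linux/errno.h>
#include <linux/sched/isolation.h>

/*
 * Illustrative only: mirror the validation store_rps_map() performs with
 * this patch applied.  The caller's mask is reduced to its housekeeping
 * subset; an empty result means the request named only isolated CPUs and
 * is refused with -EINVAL.
 */
static int example_restrict_rps_mask(struct cpumask *mask)
{
	cpumask_and(mask, mask, housekeeping_cpumask(HK_FLAG_DOMAIN | HK_FLAG_WQ));

	return cpumask_empty(mask) ? -EINVAL : 0;
}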