From patchwork Tue May 15 13:03:07 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
X-Patchwork-Id: 914794
From: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
To: kernel-team@lists.ubuntu.com
Subject: [Bionic][PATCH v3 04/21] genirq/affinity: Move actual irq vector spreading into a helper function
Date: Tue, 15 May 2018 10:03:07 -0300
X-Mailer: git-send-email 2.17.0
In-Reply-To: <20180515130324.23815-1-joserz@linux.vnet.ibm.com>
References: <20180515130324.23815-1-joserz@linux.vnet.ibm.com>
Message-Id: <20180515130324.23815-5-joserz@linux.vnet.ibm.com>
List-Id: Kernel team discussions

From: Ming Lei <ming.lei@redhat.com>

BugLink: http://bugs.launchpad.net/bugs/1759723

No functional change, just prepare for converting to 2-stage irq
vector spreading.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Cc: Laurence Oberman <loberman@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Link: https://lkml.kernel.org/r/20180308105358.1506-3-ming.lei@redhat.com
(cherry picked from commit b3e6aaa8d94d618e685c4df08bef991a4fb43923)
Signed-off-by: Jose Ricardo Ziviani <joserz@linux.vnet.ibm.com>
---
 kernel/irq/affinity.c | 97 ++++++++++++++++++++++++-------------------
 1 file changed, 55 insertions(+), 42 deletions(-)

diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index 272c968d9ef1..a9c36904500c 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -94,50 +94,19 @@ static int get_nodes_in_cpumask(cpumask_var_t *node_to_cpumask,
 	return nodes;
 }
 
-/**
- * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
- * @nvecs:	The total number of vectors
- * @affd:	Description of the affinity requirements
- *
- * Returns the masks pointer or NULL if allocation failed.
- */
-struct cpumask *
-irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+static int irq_build_affinity_masks(int nvecs, const struct irq_affinity *affd,
+				    cpumask_var_t *node_to_cpumask,
+				    const struct cpumask *cpu_mask,
+				    struct cpumask *nmsk,
+				    struct cpumask *masks)
 {
-	int n, nodes, cpus_per_vec, extra_vecs, curvec;
 	int affv = nvecs - affd->pre_vectors - affd->post_vectors;
 	int last_affv = affv + affd->pre_vectors;
+	int curvec = affd->pre_vectors;
 	nodemask_t nodemsk = NODE_MASK_NONE;
-	struct cpumask *masks = NULL;
-	cpumask_var_t nmsk, *node_to_cpumask;
+	int n, nodes, cpus_per_vec, extra_vecs;
 
-	/*
-	 * If there aren't any vectors left after applying the pre/post
-	 * vectors don't bother with assigning affinity.
-	 */
-	if (!affv)
-		return NULL;
-
-	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return NULL;
-
-	node_to_cpumask = alloc_node_to_cpumask();
-	if (!node_to_cpumask)
-		goto outcpumsk;
-
-	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
-	if (!masks)
-		goto outnodemsk;
-
-	/* Fill out vectors at the beginning that don't need affinity */
-	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
-
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	build_node_to_cpumask(node_to_cpumask);
-	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_possible_mask,
-				     &nodemsk);
+	nodes = get_nodes_in_cpumask(node_to_cpumask, cpu_mask, &nodemsk);
 
 	/*
 	 * If the number of nodes in the mask is greater than or equal the
@@ -150,7 +119,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 			if (++curvec == last_affv)
 				break;
 		}
-		goto done;
+		goto out;
 	}
 
 	for_each_node_mask(n, nodemsk) {
@@ -160,7 +129,7 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		vecs_per_node = (affv - (curvec - affd->pre_vectors)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
-		cpumask_and(nmsk, cpu_possible_mask, node_to_cpumask[n]);
+		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
 
 		/* Calculate the number of cpus per vector */
 		ncpus = cpumask_weight(nmsk);
@@ -186,7 +155,51 @@ irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 		--nodes;
 	}
 
-done:
+out:
+	return curvec - affd->pre_vectors;
+}
+
+/**
+ * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
+ * @nvecs:	The total number of vectors
+ * @affd:	Description of the affinity requirements
+ *
+ * Returns the masks pointer or NULL if allocation failed.
+ */
+struct cpumask *
+irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
+{
+	cpumask_var_t nmsk, *node_to_cpumask;
+	struct cpumask *masks = NULL;
+	int curvec;
+
+	/*
+	 * If there aren't any vectors left after applying the pre/post
+	 * vectors don't bother with assigning affinity.
+	 */
+	if (nvecs == affd->pre_vectors + affd->post_vectors)
+		return NULL;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return NULL;
+
+	node_to_cpumask = alloc_node_to_cpumask();
+	if (!node_to_cpumask)
+		goto outcpumsk;
+
+	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
+	if (!masks)
+		goto outnodemsk;
+
+	/* Fill out vectors at the beginning that don't need affinity */
+	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
+		cpumask_copy(masks + curvec, irq_default_affinity);
+
+	/* Stabilize the cpumasks */
+	get_online_cpus();
+	build_node_to_cpumask(node_to_cpumask);
+	curvec += irq_build_affinity_masks(nvecs, affd, node_to_cpumask,
+					   cpu_possible_mask, nmsk, masks);
 	put_online_cpus();
 
 	/* Fill out vectors at the end that don't need affinity */
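
Editor's note on the shape of the refactor: the key change is that the spreading
loop now lives in irq_build_affinity_masks(), which takes the cpumask to spread
over as a parameter (the caller passes cpu_possible_mask here), so a follow-up
patch can invoke it once per stage. The user-space sketch below models just that
split under simplifying assumptions: plain unsigned-long bitmasks stand in for
struct cpumask, the wrap-around spreading policy is illustrative, and none of
the helper bodies reflect the kernel's actual NUMA-aware implementation.

#include <stdio.h>

#define MAX_CPUS 16

/*
 * Spread the "affinity" vectors round-robin over the set bits of cpu_mask.
 * Mirrors the new helper's contract: it only touches the middle vectors
 * [pre_vectors, nvecs - post_vectors) and returns how many it spread.
 */
static int build_affinity_masks(int nvecs, int pre_vectors, int post_vectors,
                                unsigned long cpu_mask, unsigned long *masks)
{
        int affv = nvecs - pre_vectors - post_vectors;
        int curvec = pre_vectors;
        int cpu;

        if (affv <= 0)
                return 0;

        for (cpu = 0; cpu < MAX_CPUS; cpu++) {
                if (!(cpu_mask & (1UL << cpu)))
                        continue;
                masks[curvec] |= 1UL << cpu;
                if (++curvec == pre_vectors + affv)
                        curvec = pre_vectors;   /* wrap: more CPUs than vectors */
        }
        return affv;
}

/*
 * Wrapper mirroring irq_create_affinity_masks(): pre/post vectors get a
 * default mask, the middle ones are delegated to the mask-parameterized
 * helper. A 2-stage caller could instead invoke the helper twice with
 * two disjoint masks.
 */
static void create_affinity_masks(int nvecs, int pre_vectors, int post_vectors,
                                  unsigned long possible_mask,
                                  unsigned long *masks)
{
        unsigned long def = (1UL << MAX_CPUS) - 1; /* "irq_default_affinity" stand-in */
        int i;

        for (i = 0; i < pre_vectors; i++)
                masks[i] = def;
        build_affinity_masks(nvecs, pre_vectors, post_vectors, possible_mask,
                             masks);
        for (i = nvecs - post_vectors; i < nvecs; i++)
                masks[i] = def;
}

int main(void)
{
        unsigned long masks[6] = { 0 };
        int i;

        /* 6 vectors: 1 pre + 4 spread + 1 post, spread over CPUs 0-7 */
        create_affinity_masks(6, 1, 1, 0xffUL, masks);
        for (i = 0; i < 6; i++)
                printf("vector %d: mask 0x%lx\n", i, masks[i]);
        return 0;
}

Running the sketch prints the pre/post vectors bound to the default mask and
CPUs 0-7 distributed across vectors 1-4, which is the behavior this patch
preserves while making the spreading loop reusable per CPU mask.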