From patchwork Wed Jun 2 21:21:44 2021
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1486885
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH] scsi: storvsc: Parameterize number hardware queues
Date: Wed, 2 Jun 2021 15:21:44 -0600
Message-Id: <20210602212144.17799-2-tim.gardner@canonical.com>
In-Reply-To: <20210602212144.17799-1-tim.gardner@canonical.com>
References: <20210602212144.17799-1-tim.gardner@canonical.com>
List-Id: Kernel team discussions
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

From: "Melanie Plageman (Microsoft)"

BugLink: https://bugs.launchpad.net/bugs/1930626

Add the ability to set the number of hardware queues with a new module
parameter, storvsc_max_hw_queues.
The default value remains the number of CPUs. This functionality is
useful in some environments (e.g. Microsoft Azure) where decreasing the
number of hardware queues has been shown to improve performance.

Link: https://lore.kernel.org/r/20210224232948.4651-1-melanieplageman@gmail.com
Reviewed-by: Michael Kelley
Signed-off-by: Melanie Plageman (Microsoft)
Signed-off-by: Martin K. Petersen
(cherry picked from commit a81a38cc6ddaf128c7ca9e3fffff21c243f33c97)
Signed-off-by: Tim Gardner
Acked-by: Colin Ian King
---
 drivers/scsi/storvsc_drv.c | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index e416a6064158..b97561eae671 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -378,10 +378,14 @@ static u32 max_outstanding_req_per_channel;
 static int storvsc_change_queue_depth(struct scsi_device *sdev, int queue_depth);
 
 static int storvsc_vcpus_per_sub_channel = 4;
+static unsigned int storvsc_max_hw_queues;
 
 module_param(storvsc_ringbuffer_size, int, S_IRUGO);
 MODULE_PARM_DESC(storvsc_ringbuffer_size, "Ring buffer size (bytes)");
 
+module_param(storvsc_max_hw_queues, uint, 0644);
+MODULE_PARM_DESC(storvsc_max_hw_queues, "Maximum number of hardware queues");
+
 module_param(storvsc_vcpus_per_sub_channel, int, S_IRUGO);
 MODULE_PARM_DESC(storvsc_vcpus_per_sub_channel,
 		"Ratio of VCPUs to subchannels");
@@ -1897,6 +1901,7 @@ static int storvsc_probe(struct hv_device *device,
 {
 	int ret;
 	int num_cpus = num_online_cpus();
+	int num_present_cpus = num_present_cpus();
 	struct Scsi_Host *host;
 	struct hv_host_device *host_dev;
 	bool dev_is_ide = ((dev_id->driver_data == IDE_GUID) ? true : false);
@@ -2010,8 +2015,17 @@ static int storvsc_probe(struct hv_device *device,
 	 * For non-IDE disks, the host supports multiple channels.
 	 * Set the number of HW queues we are supporting.
 	 */
-	if (!dev_is_ide)
-		host->nr_hw_queues = num_present_cpus();
+	if (!dev_is_ide) {
+		if (storvsc_max_hw_queues > num_present_cpus) {
+			storvsc_max_hw_queues = 0;
+			storvsc_log(device, STORVSC_LOGGING_WARN,
+				"Resetting invalid storvsc_max_hw_queues value to default.\n");
+		}
+		if (storvsc_max_hw_queues)
+			host->nr_hw_queues = storvsc_max_hw_queues;
+		else
+			host->nr_hw_queues = num_present_cpus;
+	}
 
 	/*
 	 * Set the error handler work queue.