From patchwork Fri Nov 11 01:26:24 2022
X-Patchwork-Submitter: Michael Reed
X-Patchwork-Id: 1702351
From: Michael Reed
To: kernel-team@lists.ubuntu.com
Subject: [SRU][J][PATCH 2/4] nvme-tcp: handle number of queue changes
Date: Thu, 10 Nov 2022 19:26:24 -0600
Message-Id: <20221111012626.39213-3-michael.reed@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20221111012626.39213-1-michael.reed@canonical.com>
References: <20221111012626.39213-1-michael.reed@canonical.com>

From: Daniel Wagner

On reconnect, the number of queues might have changed.

In the case where we have more queues available than previously, we try
to access queues which are not initialized yet.

In the other case, where we have fewer queues than previously, the
connection attempt will fail because the target doesn't support the old
number of queues and we end up in a reconnect loop.

Thus, only start the queues which are currently present in the tagset,
limited by the number of available queues. Then we update the tagset and
we can start any new queue.
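For illustration only (not part of the patch): a minimal user-space C
sketch of the two-phase queue start described above. The names demo_ctrl
and start_queues and the queue numbers are made up for this sketch; only
the min(nr_hw_queues + 1, queue_count) limit and the second start pass
mirror what the diff below does.

/* Illustration only -- not kernel code. */
#include <stdio.h>

struct demo_ctrl {
	int queue_count;   /* queues allocated for this (re)connect, incl. admin */
	int nr_hw_queues;  /* I/O queues the block-layer tagset currently knows */
};

static void start_queues(struct demo_ctrl *ctrl, int first, int last)
{
	for (int i = first; i < last; i++)   /* the real code can fail and unwind */
		printf("start queue %d of %d\n", i, ctrl->queue_count - 1);
}

int main(void)
{
	/* Reconnect with more queues (8) than the old tagset covers (4). */
	struct demo_ctrl ctrl = { .queue_count = 8, .nr_hw_queues = 4 };

	/* Phase 1: start only the queues the existing tagset can address. */
	int nr_queues = ctrl.nr_hw_queues + 1;
	if (nr_queues > ctrl.queue_count)
		nr_queues = ctrl.queue_count;
	start_queues(&ctrl, 1, nr_queues);

	/* ...here the driver resizes the tagset to the new queue count... */
	ctrl.nr_hw_queues = ctrl.queue_count - 1;

	/* Phase 2: start the queues that are new on this reconnect. */
	start_queues(&ctrl, nr_queues, ctrl.nr_hw_queues + 1);
	return 0;
}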
Signed-off-by: Daniel Wagner
Reviewed-by: Sagi Grimberg
Reviewed-by: Hannes Reinecke
Signed-off-by: Christoph Hellwig
(cherry picked from commit 09035f86496d8dea7a05a07f6dcb8083c0a3d885)
Signed-off-by: Michael Reed
BugLink: https://bugs.launchpad.net/bugs/1989990
---
 drivers/nvme/host/tcp.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 20138e132558..3474c080bcae 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -1720,11 +1720,12 @@ static void nvme_tcp_stop_io_queues(struct nvme_ctrl *ctrl)
 		nvme_tcp_stop_queue(ctrl, i);
 }
 
-static int nvme_tcp_start_io_queues(struct nvme_ctrl *ctrl)
+static int nvme_tcp_start_io_queues(struct nvme_ctrl *ctrl,
+				    int first, int last)
 {
 	int i, ret = 0;
 
-	for (i = 1; i < ctrl->queue_count; i++) {
+	for (i = first; i < last; i++) {
 		ret = nvme_tcp_start_queue(ctrl, i);
 		if (ret)
 			goto out_stop_queues;
@@ -1733,7 +1734,7 @@ static int nvme_tcp_start_io_queues(struct nvme_ctrl *ctrl)
 	return 0;
 
 out_stop_queues:
-	for (i--; i >= 1; i--)
+	for (i--; i >= first; i--)
 		nvme_tcp_stop_queue(ctrl, i);
 	return ret;
 }
@@ -1860,7 +1861,7 @@ static void nvme_tcp_destroy_io_queues(struct nvme_ctrl *ctrl, bool remove)
 
 static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 {
-	int ret;
+	int ret, nr_queues;
 
 	ret = nvme_tcp_alloc_io_queues(ctrl);
 	if (ret)
@@ -1880,7 +1881,13 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 		}
 	}
 
-	ret = nvme_tcp_start_io_queues(ctrl);
+	/*
+	 * Only start IO queues for which we have allocated the tagset
+	 * and limitted it to the available queues. On reconnects, the
+	 * queue number might have changed.
+	 */
+	nr_queues = min(ctrl->tagset->nr_hw_queues + 1, ctrl->queue_count);
+	ret = nvme_tcp_start_io_queues(ctrl, 1, nr_queues);
 	if (ret)
 		goto out_cleanup_connect_q;
 
@@ -1900,6 +1907,15 @@ static int nvme_tcp_configure_io_queues(struct nvme_ctrl *ctrl, bool new)
 		nvme_unfreeze(ctrl);
 	}
 
+	/*
+	 * If the number of queues has increased (reconnect case)
+	 * start all new queues now.
+	 */
+	ret = nvme_tcp_start_io_queues(ctrl, nr_queues,
+				       ctrl->tagset->nr_hw_queues + 1);
+	if (ret)
+		goto out_wait_freeze_timed_out;
+
 	return 0;
 
 out_wait_freeze_timed_out: