From patchwork Mon Jun 5 05:45:58 2023
X-Patchwork-Submitter: Gerald Yang
X-Patchwork-Id: 1790218
From: Gerald Yang
To: kernel-team@lists.ubuntu.com
Subject: [SRU][K][PATCH 5/8] sbitmap: Avoid leaving waitqueue in invalid state in __sbq_wake_up()
Date: Mon, 5 Jun 2023 13:45:58 +0800
Message-Id: <20230605054601.1410517-6-gerald.yang@canonical.com>
In-Reply-To: <20230605054601.1410517-1-gerald.yang@canonical.com>
References: <20230605054601.1410517-1-gerald.yang@canonical.com>

From: Jan Kara

When __sbq_wake_up() decrements wait_cnt to 0 but races with someone
else waking the waiter on the waitqueue (so the waitqueue becomes
empty), it exits without resetting wait_cnt to the wake_batch number.
Once wait_cnt is 0, nobody will ever reset it or wake new waiters,
resulting in possible deadlocks or busy loops. Fix the problem by
making sure we reset wait_cnt even if we didn't wake up anybody in
the end.
Fixes: 040b83fcecfb ("sbitmap: fix possible io hung due to lost wakeup")
Reported-by: Keith Busch
Signed-off-by: Jan Kara
Link: https://lore.kernel.org/r/20220908130937.2795-1-jack@suse.cz
Signed-off-by: Jens Axboe
Signed-off-by: Gerald Yang
---
 lib/sbitmap.c | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index a39b1a877366..47cd8fb894ba 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -604,6 +604,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	struct sbq_wait_state *ws;
 	unsigned int wake_batch;
 	int wait_cnt;
+	bool ret;
 
 	ws = sbq_wake_ptr(sbq);
 	if (!ws)
@@ -614,12 +615,23 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	 * For concurrent callers of this, callers should call this function
 	 * again to wakeup a new batch on a different 'ws'.
 	 */
-	if (wait_cnt < 0 || !waitqueue_active(&ws->wait))
+	if (wait_cnt < 0)
 		return true;
 
+	/*
+	 * If we decremented queue without waiters, retry to avoid lost
+	 * wakeups.
+	 */
 	if (wait_cnt > 0)
-		return false;
+		return !waitqueue_active(&ws->wait);
 
+	/*
+	 * When wait_cnt == 0, we have to be particularly careful as we are
+	 * responsible to reset wait_cnt regardless whether we've actually
+	 * woken up anybody. But in case we didn't wakeup anybody, we still
+	 * need to retry.
+	 */
+	ret = !waitqueue_active(&ws->wait);
 	wake_batch = READ_ONCE(sbq->wake_batch);
 
 	/*
@@ -648,7 +660,7 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 	sbq_index_atomic_inc(&sbq->wake_index);
 	atomic_set(&ws->wait_cnt, wake_batch);
 
-	return false;
+	return ret;
 }
 
 void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
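For reviewers who want to reason about the race outside the kernel, the decision logic of the fixed function can be sketched as a standalone userspace model. This is an illustration only, not the kernel code: `struct model_ws` and `model_wake_up` are invented names, the plain single-threaded decrement stands in for `atomic_dec_return()`, and the `waiters_active` flag stands in for `waitqueue_active(&ws->wait)`.

```c
#include <stdbool.h>

/*
 * Standalone model of the fixed __sbq_wake_up() control flow.
 * Names here are invented for illustration; the plain decrement
 * stands in for the kernel's atomic_dec_return().
 */
struct model_ws {
	int wait_cnt;		/* wakeups left in the current batch */
	bool waiters_active;	/* is anybody still sleeping on this queue? */
};

/* Returns true when the caller should retry on another waitqueue. */
static bool model_wake_up(struct model_ws *ws, int wake_batch)
{
	int wait_cnt = --ws->wait_cnt;
	bool ret;

	/* Someone else already consumed the batch and owns the reset. */
	if (wait_cnt < 0)
		return true;

	/*
	 * Batch not yet exhausted: retry only if we decremented a queue
	 * with no waiters, to avoid a lost wakeup.
	 */
	if (wait_cnt > 0)
		return !ws->waiters_active;

	/*
	 * wait_cnt hit 0: this caller owns the reset. The pre-fix code
	 * bailed out early when the waitqueue was empty and left
	 * wait_cnt stuck at 0 forever; the fix resets it unconditionally
	 * and uses the emptiness check only to decide whether to retry.
	 */
	ret = !ws->waiters_active;
	ws->wait_cnt = wake_batch;
	return ret;
}
```

In the model, the key property of the fix is visible directly: even when `waiters_active` is false on the `wait_cnt == 0` path (nobody to wake), `wait_cnt` is still replenished to `wake_batch`, so later wakeups are never lost.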