From patchwork Mon Jun 5 05:45:57 2023
X-Patchwork-Submitter: Gerald Yang
X-Patchwork-Id: 1790221
From: Gerald Yang
To: kernel-team@lists.ubuntu.com
Subject: [SRU][K][PATCH 4/8] Revert "sbitmap: fix batched wait_cnt accounting"
Date: Mon, 5 Jun 2023 13:45:57 +0800
Message-Id: <20230605054601.1410517-5-gerald.yang@canonical.com>
In-Reply-To: <20230605054601.1410517-1-gerald.yang@canonical.com>
References: <20230605054601.1410517-1-gerald.yang@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" <kernel-team-bounces@lists.ubuntu.com>

From: Jens Axboe

This reverts commit 16ede66973c84f890c03584f79158dd5b2d725f5.

This is causing issues with CPU stalls on my test box, revert it for now
until we understand what is going on. It looks like infinite looping off
sbitmap_queue_wake_up(), but hard to tell with a lot of CPUs hitting this
issue and the console scrolling infinitely.

Link: https://lore.kernel.org/linux-block/e742813b-ce5c-0d58-205b-1626f639b1bd@kernel.dk/
Signed-off-by: Jens Axboe
Signed-off-by: Gerald Yang
---
 block/blk-mq-tag.c      |  2 +-
 include/linux/sbitmap.h |  3 +--
 lib/sbitmap.c           | 31 ++++++++++++++-----------------
 3 files changed, 16 insertions(+), 20 deletions(-)

diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index 7aea93047caf..2dcd738c6952 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -200,7 +200,7 @@ unsigned int blk_mq_get_tag(struct blk_mq_alloc_data *data)
		 * other allocations on previous queue won't be starved.
		 */
		if (bt != bt_prev)
-			sbitmap_queue_wake_up(bt_prev, 1);
+			sbitmap_queue_wake_up(bt_prev);

		ws = bt_wait_ptr(bt, data->hctx);
	} while (1);

diff --git a/include/linux/sbitmap.h b/include/linux/sbitmap.h
index 4d2d5205ab58..8f5a86e210b9 100644
--- a/include/linux/sbitmap.h
+++ b/include/linux/sbitmap.h
@@ -575,9 +575,8 @@ void sbitmap_queue_wake_all(struct sbitmap_queue *sbq);
 * sbitmap_queue_wake_up() - Wake up some of waiters in one waitqueue
 * on a &struct sbitmap_queue.
 * @sbq: Bitmap queue to wake up.
- * @nr: Number of bits cleared.
 */
-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr);
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq);

 /**
 * sbitmap_queue_show() - Dump &struct sbitmap_queue information to a &struct

diff --git a/lib/sbitmap.c b/lib/sbitmap.c
index 2fedf07a9db5..a39b1a877366 100644
--- a/lib/sbitmap.c
+++ b/lib/sbitmap.c
@@ -599,38 +599,34 @@ static struct sbq_wait_state *sbq_wake_ptr(struct sbitmap_queue *sbq)
	return NULL;
 }

-static bool __sbq_wake_up(struct sbitmap_queue *sbq, int nr)
+static bool __sbq_wake_up(struct sbitmap_queue *sbq)
 {
	struct sbq_wait_state *ws;
-	int wake_batch, wait_cnt, cur;
+	unsigned int wake_batch;
+	int wait_cnt;

	ws = sbq_wake_ptr(sbq);
-	if (!ws || !nr)
+	if (!ws)
		return false;

-	wake_batch = READ_ONCE(sbq->wake_batch);
-	cur = atomic_read(&ws->wait_cnt);
-	do {
-		if (cur <= 0)
-			return true;
-		wait_cnt = cur - nr;
-	} while (!atomic_try_cmpxchg(&ws->wait_cnt, &cur, wait_cnt));
-
+	wait_cnt = atomic_dec_return(&ws->wait_cnt);
	/*
	 * For concurrent callers of this, callers should call this function
	 * again to wakeup a new batch on a different 'ws'.
	 */
-	if (!waitqueue_active(&ws->wait))
+	if (wait_cnt < 0 || !waitqueue_active(&ws->wait))
		return true;

	if (wait_cnt > 0)
		return false;

+	wake_batch = READ_ONCE(sbq->wake_batch);
+
	/*
	 * Wake up first in case that concurrent callers decrease wait_cnt
	 * while waitqueue is empty.
	 */
-	wake_up_nr(&ws->wait, max(wake_batch, nr));
+	wake_up_nr(&ws->wait, wake_batch);

	/*
	 * Pairs with the memory barrier in sbitmap_queue_resize() to
@@ -655,11 +651,12 @@ static bool __sbq_wake_up(struct sbitmap_queue *sbq, int nr)
	return false;
 }

-void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
+void sbitmap_queue_wake_up(struct sbitmap_queue *sbq)
 {
-	while (__sbq_wake_up(sbq, nr))
+	while (__sbq_wake_up(sbq))
		;
 }
+EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);

 static inline void sbitmap_update_cpu_hint(struct sbitmap *sb, int cpu, int tag)
 {
@@ -696,7 +693,7 @@ void sbitmap_queue_clear_batch(struct sbitmap_queue *sbq, int offset,
	atomic_long_andnot(mask, (atomic_long_t *) addr);

	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq, nr_tags);
+	sbitmap_queue_wake_up(sbq);
	sbitmap_update_cpu_hint(&sbq->sb, raw_smp_processor_id(),
					tags[nr_tags - 1] - offset);
 }
@@ -724,7 +721,7 @@ void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
	 * waiter. See the comment on waitqueue_active().
	 */
	smp_mb__after_atomic();
-	sbitmap_queue_wake_up(sbq, 1);
+	sbitmap_queue_wake_up(sbq);
	sbitmap_update_cpu_hint(&sbq->sb, cpu, nr);
 }
 EXPORT_SYMBOL_GPL(sbitmap_queue_clear);