From patchwork Wed Oct 21 01:59:00 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1385290
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][F][PATCH 1/3] bcache: remove member accessed from struct btree
Date: Wed, 21 Oct 2020 14:59:00 +1300
Message-Id: <20201021015902.19223-2-matthew.ruffell@canonical.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20201021015902.19223-1-matthew.ruffell@canonical.com>
References: <20201021015902.19223-1-matthew.ruffell@canonical.com>
List-Id: Kernel team discussions

From: Coly Li

BugLink: https://bugs.launchpad.net/bugs/1898786

The member 'accessed' of struct btree is used in bch_mca_scan() when
shrinking btree node
caches. The original idea is, if b->accessed is set, clear it and look at
the next btree node cache on the c->btree_cache list, and only shrink the
caches whose b->accessed is cleared. Then only cold btree node caches
will be shrunk. But when I/O pressure is high, it is very probable that
b->accessed of a btree node cache will be set again in
bch_btree_node_get() before bch_mca_scan() selects it again. Then there
is no chance for bch_mca_scan() to shrink enough memory back to the slub
or slab system.

This patch removes member accessed from struct btree; then once a btree
node cache is selected, it will be immediately shrunk. By this change,
bch_mca_scan() may release btree node caches more efficiently.

Signed-off-by: Coly Li
Signed-off-by: Jens Axboe
(cherry picked from commit 125d98edd11464c8e0ec9eaaba7d682d0f832686)
Signed-off-by: Matthew Ruffell
---
 drivers/md/bcache/btree.c | 8 ++------
 drivers/md/bcache/btree.h | 2 --
 2 files changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 8d06105fc9ff..050f1c333d4e 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -749,14 +749,12 @@ static unsigned long bch_mca_scan(struct shrinker *shrink,
 		b = list_first_entry(&c->btree_cache, struct btree, list);
 		list_rotate_left(&c->btree_cache);
 
-		if (!b->accessed &&
-		    !mca_reap(b, 0, false)) {
+		if (!mca_reap(b, 0, false)) {
 			mca_bucket_free(b);
 			mca_data_free(b);
 			rw_unlock(true, b);
 			freed++;
-		} else
-			b->accessed = 0;
+		}
 	}
 out:
 	mutex_unlock(&c->bucket_lock);
@@ -1064,7 +1062,6 @@ struct btree *bch_btree_node_get(struct cache_set *c, struct btree_op *op,
 	BUG_ON(!b->written);
 
 	b->parent = parent;
-	b->accessed = 1;
 
 	for (; i <= b->keys.nsets && b->keys.set[i].size; i++) {
 		prefetch(b->keys.set[i].tree);
@@ -1155,7 +1152,6 @@ struct btree *__bch_btree_node_alloc(struct cache_set *c, struct btree_op *op,
 		goto retry;
 	}
 
-	b->accessed = 1;
 	b->parent = parent;
 
 	bch_bset_init_next(&b->keys, b->keys.set->data,
			   bset_magic(&b->c->sb));

diff --git a/drivers/md/bcache/btree.h b/drivers/md/bcache/btree.h
index 76cfd121a486..f4dcca449391 100644
--- a/drivers/md/bcache/btree.h
+++ b/drivers/md/bcache/btree.h
@@ -121,8 +121,6 @@ struct btree {
 	/* Key/pointer for this btree node */
 	BKEY_PADDED(key);
 
-	/* Single bit - set when accessed, cleared by shrinker */
-	unsigned long accessed;
 	unsigned long seq;
 	struct rw_semaphore lock;
 	struct cache_set *c;