{"id":2229769,"url":"http://patchwork.ozlabs.org/api/1.1/patches/2229769/?format=json","web_url":"http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260428160804.281745-10-sprasad@microsoft.com/","project":{"id":12,"url":"http://patchwork.ozlabs.org/api/1.1/projects/12/?format=json","name":"Linux CIFS Client","link_name":"linux-cifs-client","list_id":"linux-cifs.vger.kernel.org","list_email":"linux-cifs@vger.kernel.org","web_url":"","scm_url":"","webscm_url":""},"msgid":"<20260428160804.281745-10-sprasad@microsoft.com>","date":"2026-04-28T16:07:55","name":"[v3,10/19] cifs: back cached_dirents with page cache","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"8aec70bd6b696e16bd3421c1ab685036aee998d8","submitter":{"id":79368,"url":"http://patchwork.ozlabs.org/api/1.1/people/79368/?format=json","name":"Shyam Prasad N","email":"nspmangalore@gmail.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260428160804.281745-10-sprasad@microsoft.com/mbox/","series":[{"id":501896,"url":"http://patchwork.ozlabs.org/api/1.1/series/501896/?format=json","web_url":"http://patchwork.ozlabs.org/project/linux-cifs-client/list/?series=501896","date":"2026-04-28T16:07:57","name":"[v3,01/19] cifs: change_conf needs to be called for session setup","version":3,"mbox":"http://patchwork.ozlabs.org/series/501896/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2229769/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2229769/checks/","tags":{},"headers":{"Return-Path":"\n <linux-cifs+bounces-11249-incoming=patchwork.ozlabs.org@vger.kernel.org>","X-Original-To":["incoming@patchwork.ozlabs.org","linux-cifs@vger.kernel.org"],"Delivered-To":"patchwork-incoming@legolas.ozlabs.org","Authentication-Results":["legolas.ozlabs.org;\n\tdkim=pass (2048-bit key;\n unprotected) header.d=gmail.com header.i=@gmail.com header.a=rsa-sha256\n header.s=20251104 
header.b=VWNhdBYd;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=104.64.211.4; helo=sin.lore.kernel.org;\n envelope-from=linux-cifs+bounces-11249-incoming=patchwork.ozlabs.org@vger.kernel.org;\n receiver=patchwork.ozlabs.org)","smtp.subspace.kernel.org;\n\tdkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com\n header.b=\"VWNhdBYd\"","smtp.subspace.kernel.org;\n arc=none smtp.client-ip=209.85.214.180","smtp.subspace.kernel.org;\n dmarc=pass (p=none dis=none) header.from=gmail.com","smtp.subspace.kernel.org;\n spf=pass smtp.mailfrom=gmail.com"],"Received":["from sin.lore.kernel.org (sin.lore.kernel.org [104.64.211.4])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4g4nXh2xjyz1xvV\n\tfor <incoming@patchwork.ozlabs.org>; Wed, 29 Apr 2026 03:31:16 +1000 (AEST)","from smtp.subspace.kernel.org (conduit.subspace.kernel.org\n [100.90.174.1])\n\tby sin.lore.kernel.org (Postfix) with ESMTP id A26F4306823B\n\tfor <incoming@patchwork.ozlabs.org>; Tue, 28 Apr 2026 16:14:40 +0000 (UTC)","from localhost.localdomain (localhost.localdomain [127.0.0.1])\n\tby smtp.subspace.kernel.org (Postfix) with ESMTP id 8735344BCAF;\n\tTue, 28 Apr 2026 16:08:33 +0000 (UTC)","from mail-pl1-f180.google.com (mail-pl1-f180.google.com\n [209.85.214.180])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))\n\t(No client certificate requested)\n\tby smtp.subspace.kernel.org (Postfix) with ESMTPS id 296DA44D011\n\tfor <linux-cifs@vger.kernel.org>; Tue, 28 Apr 2026 16:08:30 +0000 (UTC)","by mail-pl1-f180.google.com with SMTP id\n d9443c01a7336-2ab077e3f32so50630055ad.3\n        for <linux-cifs@vger.kernel.org>;\n Tue, 28 Apr 2026 09:08:30 -0700 (PDT)","from sprasad-dev1.corp.microsoft.com ([167.220.110.216])\n        by smtp.gmail.com with ESMTPSA id\n 
d9443c01a7336-2b97ac7894csm30864465ad.50.2026.04.28.09.08.27\n        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);\n        Tue, 28 Apr 2026 09:08:28 -0700 (PDT)"],"ARC-Seal":"i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;\n\tt=1777392513; cv=none;\n b=YFZEna4r37n0TY4hT9+q9O79FIgiAHFqicfWDBla4sKpsayYsfvy5bFSq9DtnVUcD5hEP1RDb6oSKJ06QkXqbiagUsH2QmxUU2WQgFv8rN8gfUA6gdoYHVqAqEODoWXClShlogIP76tA2pPSRUmB0cYdEe8rAyj8/JOfyCu045k=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=subspace.kernel.org;\n\ts=arc-20240116; t=1777392513; c=relaxed/simple;\n\tbh=KrklyM46jEjug8lEQ/glvmg/6Li5t27U9mxLYKgqqpM=;\n\th=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:\n\t MIME-Version;\n b=Z8YdWyb7TEr53AH0jNwkXuueovsANAtWW3Ytu7dgafV+D+rWyfttRWU4tNpZ3n7EiiRGcTdPb3gDVj4aG00Y+NY+G6BJT8GWzn853vHtJmpxfB+/TkdHKs/ZQOnNIu1em6UJppJld52Xzc/FJLRHqkps0CVw6vNslCJfTjiw+IA=","ARC-Authentication-Results":"i=1; smtp.subspace.kernel.org;\n dmarc=pass (p=none dis=none) header.from=gmail.com;\n spf=pass smtp.mailfrom=gmail.com;\n dkim=pass (2048-bit key) header.d=gmail.com header.i=@gmail.com\n header.b=VWNhdBYd; arc=none smtp.client-ip=209.85.214.180","DKIM-Signature":"v=1; a=rsa-sha256; c=relaxed/relaxed;\n        d=gmail.com; s=20251104; t=1777392509; x=1777997309;\n darn=vger.kernel.org;\n        h=content-transfer-encoding:mime-version:references:in-reply-to\n         :message-id:date:subject:cc:to:from:from:to:cc:subject:date\n         :message-id:reply-to;\n        bh=Rveh/HF8C3KXO4imN6PnFvTUaetb6aPc3faJz/Kya3Y=;\n        b=VWNhdBYdDFPMga9X8KG8DHWPE7mgjB2o4VEuqjVKlbODJLFRXRLv19NjupBJTUlKHK\n         2HYc5NGj3Xjt3t25+dP3U1wy/8RZR5I23qMUBMq6p56xjX+tsEZU2QEZnXDGsD4OqlTo\n         PM2mh+LYfSTqSg7zTqY5hirAdMEq7dHMgX2i32XoQmJpxhRXg52YYTQce4yJvC8TG1Rp\n         ZKwifOusbYnIe+f5eFioafpWuIUNe03QuMDDdKYedcJ/wYQ525Sx4iZ6GkpKpTBLJY4x\n         80Ybzz8IlarVIko6uRVVI7pHhtOyypjeyRBuBhUSk94UhyjNhVdpS3326JT31pUoDDMg\n         vlvQ==","X-Google-DKIM-Signature":"v=1; a=rsa-sha256; 
c=relaxed/relaxed;\n        d=1e100.net; s=20251104; t=1777392509; x=1777997309;\n        h=content-transfer-encoding:mime-version:references:in-reply-to\n         :message-id:date:subject:cc:to:from:x-gm-gg:x-gm-message-state:from\n         :to:cc:subject:date:message-id:reply-to;\n        bh=Rveh/HF8C3KXO4imN6PnFvTUaetb6aPc3faJz/Kya3Y=;\n        b=EjX9Bf1j5DeVVSrGaYYFgt8oICMierB+47r150NLq2EF+DlCTc4eCiY7M5naiIBOhz\n         1btpXTyOfgkMZTL/ZG8+KhD7hqm9bHVsw2tRmncfIb2B2Sih+wUNBU6VYPRrSlYybR8e\n         8fovjWEzuIkHFf8CMit4iGvkQiyChcWTRrm2i2m6lPayAK+7jhrVcB5f5nnWVFSUkcBP\n         ycpMIogmaYdDQZxgmOHXAS2tq/nAInqhKRjao3NyJvbDDVz9sNiFYRMbsnJA8a35x3T4\n         YWFDNOz+z0+ihdkqo+r6RgvUIEzjkFRBsSfcSV+ODC1Vasx/AHUwPfsGxo1BiBjflIyl\n         AuEw==","X-Gm-Message-State":"AOJu0YySq41qnOmMK15vc8kpTFoGRWbhNL53/4h6LEYa0QEQ3f5YFwNg\n\tQlLYhk5DdsChaQ3Xq2scfOsCzPYF+dm3m03wcR5QeFEELrrltCsfOmYBeyCscNsdrvw=","X-Gm-Gg":"AeBDievnDjlU4AAy2JECM8QdC7brume+PZmTFqnDpRhe/DnE3YFCFEp6LOvAvXuMxj3\n\th/dz1wPfprrpNrUEIp32+WsBAKGmpi4VnC7XwYCPc5ZJyGOgGSvFkF/XKK2+9f7/MfprOWV6s19\n\tiGdSTT31yt42/VUTehZBTYJ98zbmhPy6ZNu3BoujjbVA7+VtrJRMens7bx1zWF8ItmaBGFOtMvD\n\t0hiqF3NHs+cDeH2LNd7cJikFBWL9l+KKV5rZDPiW6FSlunOLiOWkqh0Bhg884CTfcvgZRdfVENO\n\tv2EKA5BzBuDJ2ln6bCAcnkX1hUyvMwHHl6Z6hEPChh9ohzXGA1lYa6iw9pcxalNd9B+FcC/HGfY\n\tN5NXcmpjD0essyjOFWreT6ZAA3x7fjjcNZtqEwRev7qjg3+02WMOAfGe78OTjpmtCg9odZPF2sn\n\teTSdf5mWMrV/weFcibs+bs/5LBwWQeDAFb4yb0tvQNcfSs74CTN/+q2NwXqbNkKkMm","X-Received":"by 2002:a17:902:e546:b0:2b2:49a7:a5bd with SMTP id\n d9443c01a7336-2b97c412025mr36526255ad.1.1777392508618;\n        Tue, 28 Apr 2026 09:08:28 -0700 (PDT)","From":"nspmangalore@gmail.com","X-Google-Original-From":"sprasad@microsoft.com","To":"linux-cifs@vger.kernel.org,\n\tsmfrench@gmail.com,\n\tpc@manguebit.org,\n\tbharathsm@microsoft.com,\n\tdhowells@redhat.com,\n\thenrique.carvalho@suse.com,\n\tematsumiya@suse.de","Cc":"Shyam Prasad N <sprasad@microsoft.com>","Subject":"[PATCH v3 10/19] cifs: back cached_dirents with page 
cache","Date":"Tue, 28 Apr 2026 21:37:55 +0530","Message-ID":"<20260428160804.281745-10-sprasad@microsoft.com>","X-Mailer":"git-send-email 2.43.0","In-Reply-To":"<20260428160804.281745-1-sprasad@microsoft.com>","References":"<20260428160804.281745-1-sprasad@microsoft.com>","Precedence":"bulk","X-Mailing-List":"linux-cifs@vger.kernel.org","List-Id":"<linux-cifs.vger.kernel.org>","List-Subscribe":"<mailto:linux-cifs+subscribe@vger.kernel.org>","List-Unsubscribe":"<mailto:linux-cifs+unsubscribe@vger.kernel.org>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit"},"content":"From: Shyam Prasad N <sprasad@microsoft.com>\n\nToday, cached_dirents is a linked list with one entry per dirent.\nThis is very inefficient in terms of both memory allocation and\nmemory management.\n\nThis change introduces a hybrid structure: cached_dirents starts out\nmaintaining a linked list of entries for small directories. When the\nsize of the directory (in number of dirents) exceeds a threshold (64),\ncached_dirents switches over to a folioq structure to store the\nentries.\n\nThe idea is to significantly reduce the number of memory allocations\nfor large directories. Additionally, this change also tries to store\nshort names (up to 64 bytes) in the folio itself, further reducing the\nmemory allocation calls.
If the namelen is greater than 64 bytes or\nif the folio does not have space to store more names, it falls back to kmalloc.\n\nSigned-off-by: Shyam Prasad N <sprasad@microsoft.com>\n---\n fs/smb/client/cached_dir.c | 1219 ++++++++++++++++++++++++++++++++----\n fs/smb/client/cached_dir.h |  141 ++++-\n fs/smb/client/cifsproto.h  |    1 +\n 3 files changed, 1236 insertions(+), 125 deletions(-)","diff":"diff --git a/fs/smb/client/cached_dir.c b/fs/smb/client/cached_dir.c\nindex 614a241393b59..7cfbe50db66f5 100644\n--- a/fs/smb/client/cached_dir.c\n+++ b/fs/smb/client/cached_dir.c\n@@ -6,22 +6,29 @@\n  */\n \n #include <linux/namei.h>\n+#include <linux/completion.h>\n+#include <linux/kmemleak.h>\n+#include <linux/hash.h>\n #include \"cifsglob.h\"\n #include \"cifsproto.h\"\n #include \"cifs_debug.h\"\n #include \"smb2proto.h\"\n #include \"cached_dir.h\"\n+#include \"trace.h\"\n \n static struct cached_fid *init_cached_dir(const char *path);\n static void free_cached_dir(struct cached_fid *cfid);\n static void smb2_close_cached_fid(struct kref *ref);\n static void cfids_laundromat_worker(struct work_struct *work);\n \n+#define CACHED_DIRENT_HASH_BITS\t7\n+\n struct cached_dir_dentry {\n \tstruct list_head entry;\n \tstruct dentry *dentry;\n };\n \n+/* Generic helpers */\n bool cached_dir_is_valid(struct cached_fid *cfid)\n {\n \tbool valid;\n@@ -53,50 +60,689 @@ bool cached_dir_copy_lease_key(struct cached_fid *cfid,\n \treturn valid;\n }\n \n+/* Cached mapping helpers */\n+static inline const char *cached_dirent_name(const struct cifs_cached_dir_mapping *cached_mapping,\n+\t\t\t\t\t     const struct cached_dirent *de)\n+{\n+\tif (de->external_name)\n+\t\treturn de->name;\n+\n+\treturn ((const char *)cached_mapping) + de->inline_name_off;\n+}\n+\n+static inline struct cifs_cached_dir_mapping *cached_dir_mapping(struct folio *folio)\n+{\n+\treturn folio_address(folio);\n+}\n+\n+static inline size_t cached_dirent_array_bytes(unsigned int entries)\n+{\n+\treturn 
struct_size((struct cifs_cached_dir_mapping *)NULL, entries, entries);\n+}\n+\n+static inline bool cached_dirent_has_space_for_record(const struct cifs_cached_dir_mapping *cached_mapping,\n+\t\t\t\t\t\t      size_t record_bytes)\n+{\n+\treturn cached_dirent_array_bytes(cached_mapping->entries_count + 1) + record_bytes <=\n+\t\tcached_mapping->name_tail_offset;\n+}\n+\n+/* for short names, try to place them inside the folio */\n+static bool cached_dirent_try_inline_name(struct folio *folio,\n+\t\t\t\t\t  struct cifs_cached_dir_mapping *cached_mapping,\n+\t\t\t\t\t  struct cached_dirent *de,\n+\t\t\t\t\t  const char *name,\n+\t\t\t\t\t  unsigned int namelen,\n+\t\t\t\t\t  const char **stored_name)\n+{\n+\tchar *base;\n+\tu32 tail;\n+\n+\tif (namelen > CIFS_CACHED_INLINE_NAME_LEN)\n+\t\treturn false;\n+\n+\t/* try to fit cached_dirent+name in the same folio (inline) */\n+\tif (!cached_dirent_has_space_for_record(cached_mapping, namelen))\n+\t\treturn false;\n+\n+\tbase = folio_address(folio);\n+\tif (!base)\n+\t\treturn false;\n+\n+\ttail = cached_mapping->name_tail_offset - namelen;\n+\tmemcpy(base + tail, name, namelen);\n+\tde->external_name = false;\n+\tde->inline_name_off = tail;\n+\tde->name = NULL;\n+\tcached_mapping->name_tail_offset = tail;\n+\t*stored_name = base + tail;\n+\treturn true;\n+}\n+\n+static unsigned int cached_dir_folio_count(struct cached_dirents *cde)\n+{\n+\tstruct folio_queue *fq;\n+\tunsigned int count = 0;\n+\n+\tfor (fq = cde->folioq; fq; fq = fq->next) {\n+\t\tcount += folioq_count(fq);\n+\t}\n+\n+\treturn count;\n+}\n+\n+/* insert cursor helpers to aid fast appends to cached_dir */\n+static void cached_dir_reset_insert_cursor_locked(struct cached_dirents *cde)\n+{\n+\tcde->insert_cursor_fq = cde->folioq;\n+\tcde->insert_cursor_slot = 0;\n+\tcde->insert_cursor_folio_index = 0;\n+}\n+\n+static void cached_dir_set_insert_cursor_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\tstruct folio_queue *fq,\n+\t\t\t\t\t\tunsigned int 
slot,\n+\t\t\t\t\t\tunsigned int folio_index)\n+{\n+\tcde->insert_cursor_fq = fq;\n+\tcde->insert_cursor_slot = slot;\n+\tcde->insert_cursor_folio_index = folio_index;\n+}\n+\n+static bool cached_dirents_use_folioq_locked(struct cached_dirents *cde)\n+{\n+\treturn cde->folioq != NULL;\n+}\n+\n+static void cached_dir_init_new_folios(struct cached_dirents *cde,\n+\t\t\t\t       unsigned int old_folio_count)\n+{\n+\tstruct folio_queue *fq;\n+\tunsigned int folio_index = 0;\n+\n+\tfor (fq = cde->folioq; fq; fq = fq->next) {\n+\t\tfor (int s = 0; s < folioq_count(fq); s++, folio_index++) {\n+\t\t\tstruct folio *folio = folioq_folio(fq, s);\n+\t\t\tvoid *base;\n+\n+\t\t\tif (folio_index < old_folio_count)\n+\t\t\t\tcontinue;\n+\n+\t\t\tbase = folio_address(folio);\n+\t\t\tif (base) {\n+\t\t\t\tmemset(base, 0, folio_size(folio));\n+\t\t\t\tcached_dir_mapping(folio)->name_tail_offset = folio_size(folio);\n+\t\t\t}\n+\t\t}\n+\t}\n+}\n+\n+/*\n+ * Expand the folioq backing store for a cached directory by one PAGE_SIZE.\n+ * Called by add_cached_dirent_folioq_locked() when no free slot is found in\n+ * the existing folios, and by convert_cached_dirents_list_to_folioq_locked()\n+ * when initializing folioq mode for the first time.\n+ *\n+ * After growing, newly added folios are zeroed and their name_tail_offset is\n+ * set to folio_size so that inline name packing starts from the tail.\n+ * The insert cursor must be reset by the caller after this returns.\n+ */\n+static int grow_cached_dirents_folioq_locked(struct cached_dirents *cde)\n+{\n+\tunsigned int old_folio_count;\n+\tsize_t old_size, target_size;\n+\tint rc;\n+\n+\told_folio_count = cached_dir_folio_count(cde);\n+\told_size = cde->folioq_size;\n+\ttarget_size = old_size + PAGE_SIZE;\n+\n+\tcifs_dbg(FYI,\n+\t\t \"cached_dir folioq alloc: old_size=%zu target_size=%zu\\n\",\n+\t\t old_size, target_size);\n+\n+\trc = netfs_alloc_folioq_buffer(NULL, &cde->folioq,\n+\t\t\t\t      &cde->folioq_size,\n+\t\t\t\t      
target_size, GFP_NOFS);\n+\tif (rc < 0)\n+\t\treturn rc;\n+\n+\tcached_dir_init_new_folios(cde, old_folio_count);\n+\n+\treturn 0;\n+}\n+\n+/* lookup cached_dirent by traversing the list */\n+static struct cached_dir_lookup_entry *lookup_cached_dirent_list_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t\t const char *name,\n+\t\t\t\t\t\t\t unsigned int namelen)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tu32 name_hash;\n+\n+\tname_hash = full_name_hash(NULL, name, namelen);\n+\n+\tlist_for_each_entry(entry, &cde->entry_list, list_node) {\n+\t\tif (entry->name_hash == name_hash &&\n+\t\t    entry->dirent &&\n+\t\t    entry->dirent->name_len == namelen &&\n+\t\t    memcmp(entry->dirent->name, name, namelen) == 0)\n+\t\t\treturn entry;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+/* lookup cached_dirent in folioq by using the hash table */\n+static struct cached_dir_lookup_entry *lookup_cached_dirent_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t\t\t   const char *name,\n+\t\t\t\t\t\t\t\t   unsigned int namelen)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct hlist_head *bucket;\n+\tu32 name_hash;\n+\n+\tif (!cde->lookup_ht)\n+\t\treturn NULL;\n+\n+\tname_hash = full_name_hash(NULL, name, namelen);\n+\tbucket = &cde->lookup_ht[hash_32(name_hash, CACHED_DIRENT_HASH_BITS)];\n+\n+\thlist_for_each_entry(entry, bucket, hash_node) {\n+\t\tif (entry->name_hash == name_hash &&\n+\t\t    entry->dirent &&\n+\t\t    entry->dirent->name_len == namelen &&\n+\t\t    memcmp(entry->dirent->name, name, namelen) == 0)\n+\t\t\treturn entry;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+/* lookup wrapper to decide if the entry is in list or folioq */\n+static struct cached_dir_lookup_entry *lookup_cached_dirent_entry_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t\t\t  const char *name,\n+\t\t\t\t\t\t\t\t  unsigned int namelen)\n+{\n+\tif (cached_dirents_use_folioq_locked(cde))\n+\t\treturn lookup_cached_dirent_locked(cde, name, namelen);\n+\n+\treturn 
lookup_cached_dirent_list_locked(cde, name, namelen);\n+}\n+\n+/* lookup the last cached_dir_mapping in the folioq */\n+static struct cifs_cached_dir_mapping *last_cached_dir_mapping_locked(struct cached_dirents *cde)\n+{\n+\tstruct folio_queue *fq;\n+\tunsigned int slot;\n+\tstruct cifs_cached_dir_mapping *last = NULL;\n+\n+\tlockdep_assert_held(&cde->de_mutex);\n+\n+\tif (!cde->folioq)\n+\t\treturn NULL;\n+\n+\t/* Fast path: the insert cursor tracks the most recent append location. */\n+\tif (cde->insert_cursor_fq) {\n+\t\tslot = cde->insert_cursor_slot;\n+\t\tif (slot < folioq_count(cde->insert_cursor_fq)) {\n+\t\t\tlast = cached_dir_mapping(folioq_folio(cde->insert_cursor_fq, slot));\n+\t\t\tif (last && last->entries_count)\n+\t\t\t\treturn last;\n+\t\t}\n+\t}\n+\n+\tfor (fq = cde->folioq; fq; fq = fq->next) {\n+\t\tfor (int s = 0; s < folioq_count(fq); s++) {\n+\t\t\tstruct cifs_cached_dir_mapping *cached_mapping;\n+\n+\t\t\tcached_mapping = cached_dir_mapping(folioq_folio(fq, s));\n+\t\t\tif (cached_mapping && cached_mapping->entries_count)\n+\t\t\t\tlast = cached_mapping;\n+\t\t}\n+\t}\n+\n+\treturn last;\n+}\n+\n+/* emit dirents from the cache, starting with the current position of ctx */\n static bool emit_cached_dirents(struct cached_dirents *cde,\n \t\t\t\tstruct dir_context *ctx)\n {\n-\tstruct cached_dirent *dirent;\n+\tstruct folio_queue *fq;\n \tbool rc;\n \n \tlockdep_assert_held(&cde->de_mutex);\n \n-\tlist_for_each_entry(dirent, &cde->entries, entry) {\n-\t\t/*\n-\t\t * Skip all early entries prior to the current lseek()\n-\t\t * position.\n-\t\t */\n-\t\tif (ctx->pos > dirent->pos)\n-\t\t\tcontinue;\n-\t\t/*\n-\t\t * We recorded the current ->pos value for the dirent\n-\t\t * when we stored it in the cache.\n-\t\t * However, this sequence of ->pos values may have holes\n-\t\t * in it, for example dot-dirs returned from the server\n-\t\t * are suppressed.\n-\t\t * Handle this by forcing ctx->pos to be the same as the\n-\t\t * ->pos of the current 
dirent we emit from the cache.\n-\t\t * This means that when we emit these entries from the cache\n-\t\t * we now emit them with the same ->pos value as in the\n-\t\t * initial scan.\n-\t\t */\n-\t\tctx->pos = dirent->pos;\n-\t\trc = dir_emit(ctx, dirent->name, dirent->namelen,\n-\t\t\t      dirent->fattr.cf_uniqueid,\n-\t\t\t      dirent->fattr.cf_dtype);\n-\t\tif (!rc)\n-\t\t\treturn rc;\n-\t\tctx->pos++;\n+\t/* if folioq is empty, this is a small dir; dirents will be found in list */\n+\tif (!cde->folioq) {\n+\t\tstruct cached_dir_lookup_entry *entry;\n+\n+\t\tlist_for_each_entry(entry, &cde->entry_list, list_node) {\n+\t\t\tstruct cached_dirent *dirent = entry->dirent;\n+\n+\t\t\tif (dirent->tombstone)\n+\t\t\t\tcontinue;\n+\t\t\tif (ctx->pos > dirent->ctx_pos)\n+\t\t\t\tcontinue;\n+\n+\t\t\tctx->pos = dirent->ctx_pos;\n+\t\t\trc = dir_emit(ctx, dirent->name, dirent->name_len,\n+\t\t\t\t      dirent->fattr.cf_uniqueid,\n+\t\t\t\t      dirent->fattr.cf_dtype);\n+\t\t\tif (!rc)\n+\t\t\t\treturn rc;\n+\t\t\tctx->pos++;\n+\t\t}\n+\n+\t\treturn cde->is_valid;\n \t}\n+\n+\t/* large dir; emit from folioq */\n+\tfor (fq = cde->folioq; fq; fq = fq->next) {\n+\t\tfor (int s = 0; s < folioq_count(fq); s++) {\n+\t\t\tstruct folio *folio = folioq_folio(fq, s);\n+\t\t\tstruct cifs_cached_dir_mapping *cached_mapping;\n+\n+\t\t\tcached_mapping = cached_dir_mapping(folio);\n+\t\t\tif (!cached_mapping)\n+\t\t\t\treturn false;\n+\n+\t\t\tfor (u32 i = 0; i < cached_mapping->entries_count; i++) {\n+\t\t\t\tstruct cached_dirent *dirent = &cached_mapping->entries[i];\n+\t\t\t\tconst char *name;\n+\n+\t\t\t\tif (dirent->tombstone)\n+\t\t\t\t\tcontinue;\n+\n+\t\t\t\tname = cached_dirent_name(cached_mapping, dirent);\n+\n+\t\t\t\t/*\n+\t\t\t\t * Skip all early entries prior to the current lseek()\n+\t\t\t\t * position.\n+\t\t\t\t */\n+\t\t\t\tif (ctx->pos > dirent->ctx_pos)\n+\t\t\t\t\tcontinue;\n+\t\t\t\t/*\n+\t\t\t\t * We recorded the current ->pos value for the dirent\n+\t\t\t\t * 
when we stored it in the cache.\n+\t\t\t\t * However, this sequence of ->pos values may have holes\n+\t\t\t\t * in it, for example dot-dirs returned from the server\n+\t\t\t\t * are suppressed.\n+\t\t\t\t * Handle this by forcing ctx->pos to be the same as the\n+\t\t\t\t * ->pos of the current dirent we emit from the cache.\n+\t\t\t\t * This means that when we emit these entries from the cache\n+\t\t\t\t * we now emit them with the same ->pos value as in the\n+\t\t\t\t * initial scan.\n+\t\t\t\t */\n+\t\t\t\tctx->pos = dirent->ctx_pos;\n+\t\t\t\trc = dir_emit(ctx, name, dirent->name_len,\n+\t\t\t\t\t      dirent->fattr.cf_uniqueid,\n+\t\t\t\t\t      dirent->fattr.cf_dtype);\n+\t\t\t\tif (!rc)\n+\t\t\t\t\treturn rc;\n+\t\t\t\tctx->pos++;\n+\t\t\t}\n+\n+\t\t\tif (cached_mapping->folio_is_eof)\n+\t\t\t\treturn true;\n+\t\t}\n+\t}\n+\treturn true;\n+}\n+\n+/* release the lookup hashtable */\n+static void release_lookup_table_locked(struct cached_dirents *cde)\n+{\n+\tint bucket;\n+\n+\tif (!cde->lookup_ht)\n+\t\treturn;\n+\n+\tfor (bucket = 0; bucket < (1 << CACHED_DIRENT_HASH_BITS); bucket++) {\n+\t\tstruct cached_dir_lookup_entry *entry;\n+\t\tstruct hlist_node *tmp;\n+\n+\t\thlist_for_each_entry_safe(entry, tmp, &cde->lookup_ht[bucket], hash_node) {\n+\t\t\thlist_del(&entry->hash_node);\n+\t\t\tkfree(entry);\n+\t\t}\n+\t}\n+\n+\tkfree(cde->lookup_ht);\n+\tcde->lookup_ht = NULL;\n+\tcde->lookup_bytes = 0;\n+}\n+\n+/* release all cached_dirents in list */\n+static void release_cached_dirents_list_locked(struct cached_dirents *cde)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dir_lookup_entry *tmp;\n+\n+\tlist_for_each_entry_safe(entry, tmp, &cde->entry_list, list_node) {\n+\t\tlist_del(&entry->list_node);\n+\t\tif (entry->dirent) {\n+\t\t\tif (entry->dirent->external_name)\n+\t\t\t\tkfree((void *)entry->dirent->name);\n+\t\t\tkfree(entry->dirent);\n+\t\t}\n+\t\tkfree(entry);\n+\t}\n+\n+\tcde->entry_list_count = 0;\n+}\n+\n+/* release all 
cached_dirents in folioq */\n+static void release_cached_dirents_folioq_locked(struct cached_dirents *cde)\n+{\n+\tstruct folio_queue *fq;\n+\n+\tlockdep_assert_held(&cde->de_mutex);\n+\n+\tfor (fq = cde->folioq; fq; fq = fq->next) {\n+\t\tfor (int s = 0; s < folioq_count(fq); s++) {\n+\t\t\tstruct folio *folio = folioq_folio(fq, s);\n+\t\t\tstruct cifs_cached_dir_mapping *cached_mapping;\n+\n+\t\t\tcached_mapping = cached_dir_mapping(folio);\n+\t\t\tif (!cached_mapping)\n+\t\t\t\tcontinue;\n+\n+\t\t\tfor (u32 i = 0; i < cached_mapping->entries_count; i++)\n+\t\t\t\tif (cached_mapping->entries[i].external_name)\n+\t\t\t\t\tkfree((void *)cached_mapping->entries[i].name);\n+\t\t}\n+\t}\n+\n+\tif (cde->folioq) {\n+\t\tcifs_dbg(FYI, \"cached_dir folioq free: old_size=%zu target_size=%d\\n\",\n+\t\t\t cde->folioq_size, 0);\n+\t\tnetfs_free_folioq_buffer(cde->folioq);\n+\t\tcde->folioq = NULL;\n+\t}\n+\n+\tcde->folioq_size = 0;\n+}\n+\n+/* release wrapper for cached_dirents */\n+static void release_cached_dirents_locked(struct cached_dirents *cde)\n+{\n+\tlockdep_assert_held(&cde->de_mutex);\n+\n+\tif (cached_dirents_use_folioq_locked(cde))\n+\t\trelease_cached_dirents_folioq_locked(cde);\n+\telse\n+\t\trelease_cached_dirents_list_locked(cde);\n+\n+\trelease_lookup_table_locked(cde);\n+\n+\tcde->entries_count = 0;\n+\tcde->external_name_bytes = 0;\n+\tcde->lookup_bytes = 0;\n+\tcde->bytes_used = 0;\n+\tcde->dir_inode = NULL;\n+\tcached_dir_reset_insert_cursor_locked(cde);\n+}\n+\n+/* invalidate cached_dirents and release resources, but keep the cache structure for reuse */\n+static void fail_cached_dir_locked(struct cached_dirents *cde)\n+{\n+\tcde->is_failed = 1;\n+\trelease_cached_dirents_locked(cde);\n+\t/*\n+\t * Reset the file pointer so the next cifs_readdir from position 0\n+\t * can claim this slot and repopulate the cache.\n+\t */\n+\tcde->file = NULL;\n+}\n+\n+/* insert cached_dirent into lookup hashtable */\n+static int insert_cached_dir_lookup_locked(struct 
cached_dirents *cde,\n+\t\t\t\t\t   const char *name,\n+\t\t\t\t\t   unsigned int namelen,\n+\t\t\t\t\t   struct cached_dirent *dirent,\n+\t\t\t\t\t   bool pending_dcache)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct hlist_head *bucket;\n+\n+\tentry = kzalloc(sizeof(*entry), GFP_KERNEL);\n+\tif (!entry)\n+\t\treturn -ENOMEM;\n+\n+\tentry->name_hash = full_name_hash(NULL, name, namelen);\n+\tentry->dirent = dirent;\n+\tentry->pending_dcache = pending_dcache;\n+\tinit_completion(&entry->dcache_complete);\n+\n+\tbucket = &cde->lookup_ht[hash_32(entry->name_hash, CACHED_DIRENT_HASH_BITS)];\n+\thlist_add_head(&entry->hash_node, bucket);\n+\tcde->lookup_bytes += sizeof(*entry);\n+\treturn 0;\n+}\n+\n+/* add cached_dirent to folioq */\n+static bool add_cached_dirent_folioq_locked(struct cached_dirents *cde,\n+\t\t\t\t\t    loff_t ctx_pos,\n+\t\t\t\t\t    const char *name,\n+\t\t\t\t\t    unsigned int namelen,\n+\t\t\t\t\t    const struct cifs_fattr *fattr,\n+\t\t\t\t\t    bool pending_dcache)\n+{\n+\tstruct cached_dirent *de;\n+\tstruct cifs_cached_dir_mapping *cached_mapping = NULL;\n+\tconst char *stored_name;\n+\tstruct folio *target_folio = NULL;\n+\tstruct folio_queue *fq;\n+\tunsigned int cur_folio;\n+\tunsigned int start_slot;\n+\tint rc;\n+\tbool grew = false;\n+\n+\tif (!cde->lookup_ht) {\n+\t\tcde->lookup_ht = kcalloc(1 << CACHED_DIRENT_HASH_BITS,\n+\t\t\t\t\t sizeof(*cde->lookup_ht), GFP_KERNEL);\n+\t\tif (!cde->lookup_ht) {\n+\t\t\tfail_cached_dir_locked(cde);\n+\t\t\treturn false;\n+\t\t}\n+\t}\n+\n+\t/* Grow phase: ensure folioq exists */\n+\tif (!cde->folioq) {\n+\t\trc = grow_cached_dirents_folioq_locked(cde);\n+\t\tif (rc < 0) {\n+\t\t\tfail_cached_dir_locked(cde);\n+\t\t\treturn false;\n+\t\t}\n+\t\tcached_dir_reset_insert_cursor_locked(cde);\n+\t}\n+\n+\tif (!cde->insert_cursor_fq)\n+\t\tcached_dir_reset_insert_cursor_locked(cde);\n+\n+retry_insert:\n+\t/* Insertion phase: try to find space in current folios */\n+\tde = NULL;\n+\tfq = 
cde->insert_cursor_fq;\n+\tstart_slot = cde->insert_cursor_slot;\n+\tcur_folio = cde->insert_cursor_folio_index;\n+\tif (!fq) {\n+\t\tfq = cde->folioq;\n+\t\tstart_slot = 0;\n+\t\tcur_folio = 0;\n+\t}\n+\n+\tfor (; fq && !de; fq = fq->next) {\n+\t\tfor (int s = start_slot; s < folioq_count(fq) && !de; s++, cur_folio++) {\n+\t\t\tstruct folio *folio = folioq_folio(fq, s);\n+\n+\t\t\tcached_mapping = cached_dir_mapping(folio);\n+\t\t\tif (!cached_mapping)\n+\t\t\t\tcontinue;\n+\n+\t\t\tif (cached_mapping->folio_full)\n+\t\t\t\tcontinue;\n+\n+\t\t\tif (cached_dirent_has_space_for_record(cached_mapping, 0)) {\n+\t\t\t\ttarget_folio = folio;\n+\t\t\t\tde = &cached_mapping->entries[cached_mapping->entries_count];\n+\t\t\t\tcached_dir_set_insert_cursor_locked(cde, fq, s, cur_folio);\n+\t\t\t\tbreak;\n+\t\t\t}\n+\n+\t\t\tcached_mapping->folio_full = 1;\n+\t\t}\n+\t\tstart_slot = 0;\n+\t}\n+\n+\t/* If no space found and haven't grown yet, grow and retry once */\n+\tif (!de && !grew) {\n+\t\trc = grow_cached_dirents_folioq_locked(cde);\n+\t\tif (rc < 0) {\n+\t\t\tfail_cached_dir_locked(cde);\n+\t\t\treturn false;\n+\t\t}\n+\n+\t\tcached_dir_reset_insert_cursor_locked(cde);\n+\t\tgrew = true;\n+\t\tgoto retry_insert;\n+\t}\n+\n+\tif (!de) {\n+\t\tfail_cached_dir_locked(cde);\n+\t\treturn false;\n+\t}\n+\n+\tmemset(de, 0, sizeof(*de));\n+\tde->name_len = namelen;\n+\tde->ctx_pos = ctx_pos;\n+\tmemcpy(&de->fattr, fattr, sizeof(*fattr));\n+\tstored_name = NULL;\n+\tif (!cached_dirent_try_inline_name(target_folio, cached_mapping, de,\n+\t\t\t\t\t      name, namelen, &stored_name)) {\n+\t\tde->name = kstrndup(name, namelen, GFP_KERNEL);\n+\t\tif (!de->name) {\n+\t\t\tfail_cached_dir_locked(cde);\n+\t\t\treturn false;\n+\t\t}\n+\t\tkmemleak_not_leak((void *)de->name);\n+\t\tde->external_name = true;\n+\t\tcde->external_name_bytes += (size_t)namelen + 1;\n+\t\tstored_name = de->name;\n+\t} else {\n+\t\tde->external_name = false;\n+\t}\n+\tde->name = stored_name;\n+\n+\tif 
(insert_cached_dir_lookup_locked(cde, stored_name, namelen,\n+\t\t\t\t   de,\n+\t\t\t\t   pending_dcache) < 0) {\n+\t\tif (de->external_name)\n+\t\t\tkfree((void *)de->name);\n+\t\tmemset(de, 0, sizeof(*de));\n+\t\tfail_cached_dir_locked(cde);\n+\t\treturn false;\n+\t}\n+\n+\tcached_mapping->entries_count++;\n+\tcde->entries_count++;\n+\tcde->bytes_used = cde->folioq_size + cde->external_name_bytes +\n+\t\t\t\t  cde->lookup_bytes;\n+\treturn true;\n+}\n+\n+/* add cached_dirent to list */\n+static bool add_cached_dirent_list_locked(struct cached_dirents *cde,\n+\t\t\t\t\t  loff_t ctx_pos,\n+\t\t\t\t\t  const char *name,\n+\t\t\t\t\t  unsigned int namelen,\n+\t\t\t\t\t  const struct cifs_fattr *fattr)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *de;\n+\n+\tentry = kzalloc(sizeof(*entry), GFP_KERNEL);\n+\tif (!entry)\n+\t\treturn false;\n+\n+\tde = kzalloc(sizeof(*de), GFP_KERNEL);\n+\tif (!de) {\n+\t\tkfree(entry);\n+\t\treturn false;\n+\t}\n+\n+\tde->name = kstrndup(name, namelen, GFP_KERNEL);\n+\tif (!de->name) {\n+\t\tkfree(de);\n+\t\tkfree(entry);\n+\t\treturn false;\n+\t}\n+\n+\tde->name_len = namelen;\n+\tde->external_name = true;\n+\tde->ctx_pos = ctx_pos;\n+\tmemcpy(&de->fattr, fattr, sizeof(*fattr));\n+\n+\tentry->dirent = de;\n+\tentry->name_hash = full_name_hash(NULL, name, namelen);\n+\tentry->pending_dcache = false;\n+\tlist_add_tail(&entry->list_node, &cde->entry_list);\n+\n+\tcde->entry_list_count++;\n+\tcde->entries_count++;\n+\tcde->external_name_bytes += (size_t)namelen + 1;\n+\tcde->bytes_used = cde->external_name_bytes +\n+\t\t\t  cde->entry_list_count * (sizeof(*entry) + sizeof(*de));\n \treturn true;\n }\n \n+/* convert cached_dirents from list to folioq format, freeing list entries */\n+static int convert_cached_dirents_list_to_folioq_locked(struct cached_dirents *cde)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dir_lookup_entry *tmp;\n+\tunsigned long restored_entries = 0;\n+\n+\tif 
(cde->folioq)\n+\t\treturn 0;\n+\n+\trelease_lookup_table_locked(cde);\n+\tcde->entries_count = 0;\n+\tcde->external_name_bytes = 0;\n+\tcde->lookup_bytes = 0;\n+\tcde->bytes_used = 0;\n+\n+\tlist_for_each_entry_safe(entry, tmp, &cde->entry_list, list_node) {\n+\t\tif (!add_cached_dirent_folioq_locked(cde, entry->dirent->ctx_pos,\n+\t\t\t\t\t\t   entry->dirent->name,\n+\t\t\t\t\t\t   entry->dirent->name_len,\n+\t\t\t\t\t\t   &entry->dirent->fattr, false)) {\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\n+\t\trestored_entries++;\n+\t\tlist_del(&entry->list_node);\n+\t\tkfree((void *)entry->dirent->name);\n+\t\tkfree(entry->dirent);\n+\t\tkfree(entry);\n+\t}\n+\n+\tcde->entry_list_count = 0;\n+\tcde->entries_count = restored_entries;\n+\tcde->bytes_used = cde->folioq_size + cde->external_name_bytes +\n+\t\t\t  cde->lookup_bytes;\n+\treturn 0;\n+}\n+\n+/* add cached_dirent, deciding whether to put it in the list or folioq */\n static bool add_cached_dirent(struct cached_dirents *cde,\n \t\t\t      struct dir_context *ctx, const char *name,\n \t\t\t      int namelen, struct cifs_fattr *fattr,\n \t\t\t      struct file *file)\n {\n-\tstruct cached_dirent *de;\n+\tint rc;\n \n \tlockdep_assert_held(&cde->de_mutex);\n \n@@ -105,32 +751,36 @@ static bool add_cached_dirent(struct cached_dirents *cde,\n \tif (cde->is_valid || cde->is_failed)\n \t\treturn false;\n \tif (ctx->pos != cde->pos) {\n-\t\tcde->is_failed = 1;\n+\t\tfail_cached_dir_locked(cde);\n \t\treturn false;\n \t}\n-\tde = kzalloc_obj(*de, GFP_KERNEL);\n-\tif (de == NULL) {\n-\t\tcde->is_failed = 1;\n-\t\treturn false;\n+\n+\tif (!cached_dirents_use_folioq_locked(cde)) {\n+\t\tif (cde->entry_list_count < CIFS_CACHED_DIRENT_LIST_THRESHOLD)\n+\t\t\treturn add_cached_dirent_list_locked(cde, ctx->pos, name,\n+\t\t\t\t\t\t     namelen, fattr);\n+\n+\t\trc = convert_cached_dirents_list_to_folioq_locked(cde);\n+\t\tif (rc < 0) {\n+\t\t\tfail_cached_dir_locked(cde);\n+\t\t\treturn false;\n+\t\t}\n \t}\n-\tde->namelen = 
namelen;\n-\tde->name = kstrndup(name, namelen, GFP_KERNEL);\n-\tif (de->name == NULL) {\n-\t\tkfree(de);\n-\t\tcde->is_failed = 1;\n+\n+\tif (!add_cached_dirent_folioq_locked(cde, ctx->pos, name, namelen, fattr,\n+\t\t\t\t\t     true)) {\n+\t\tfail_cached_dir_locked(cde);\n \t\treturn false;\n \t}\n-\tde->pos = ctx->pos;\n \n-\tmemcpy(&de->fattr, fattr, sizeof(struct cifs_fattr));\n-\n-\tlist_add_tail(&de->entry, &cde->entries);\n-\t/* update accounting */\n-\tcde->entries_count++;\n-\tcde->bytes_used += sizeof(*de) + (size_t)namelen + 1;\n \treturn true;\n }\n \n+/*\n+ * emit cached dirents for the current ctx position if the cache is valid.\n+ * If there is no ongoing population for this directory (ctx->pos == 0) then\n+ * make the ongoing readdir call responsible for populating the cache\n+ */\n bool emit_cached_dir_if_valid(struct cached_fid *cfid,\n \t\t\t      struct file *file,\n \t\t\t      struct dir_context *ctx)\n@@ -146,7 +796,15 @@ bool emit_cached_dir_if_valid(struct cached_fid *cfid,\n \t */\n \tif (ctx->pos == 0 && cfid->dirents.file == NULL) {\n \t\tcfid->dirents.file = file;\n+\t\tcfid->dirents.dir_inode = file_inode(file);\n \t\tcfid->dirents.pos = 2;\n+\t\tcached_dir_reset_insert_cursor_locked(&cfid->dirents);\n+\t\t/*\n+\t\t * A previous population attempt may have failed and left\n+\t\t * is_failed set.  
Clear it now so add_cached_dirent() will\n+\t\t * accept new entries from this readdir pass.\n+\t\t */\n+\t\tcfid->dirents.is_failed = 0;\n \t}\n \n \tif (!cfid->dirents.is_valid) {\n@@ -161,6 +819,155 @@ bool emit_cached_dir_if_valid(struct cached_fid *cfid,\n \treturn true;\n }\n \n+/* update the cached dir position during a readdir population pass */\n+static void update_cached_dirents_count(struct cached_dirents *cde,\n+\t\t\t\t\tstruct file *file)\n+{\n+\tif (cde->file != file)\n+\t\treturn;\n+\tif (cde->is_valid || cde->is_failed)\n+\t\treturn;\n+\n+\tcde->pos++;\n+}\n+\n+/* mark the cached_dirents as valid if readdir population pass completed successfully */\n+static void finished_cached_dirents_count(struct cached_dirents *cde,\n+\t\t\t\t\t  struct dir_context *ctx,\n+\t\t\t\t\t  struct file *file)\n+{\n+\tstruct cifs_cached_dir_mapping *cached_mapping;\n+\n+\tif (cde->file != file)\n+\t\treturn;\n+\tif (cde->is_valid || cde->is_failed)\n+\t\treturn;\n+\tif (ctx->pos != cde->pos)\n+\t\treturn;\n+\n+\tcached_mapping = last_cached_dir_mapping_locked(cde);\n+\tif (cached_mapping)\n+\t\tcached_mapping->folio_is_eof = 1;\n+\n+\tcde->is_valid = 1;\n+}\n+\n+/* update the cached_dirent for a given name in list */\n+static bool update_cached_dirent_list_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t     const char *name,\n+\t\t\t\t\t\t     unsigned int namelen,\n+\t\t\t\t\t\t     const struct cifs_fattr *fattr)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *dirent;\n+\n+\tentry = lookup_cached_dirent_list_locked(cde, name, namelen);\n+\tif (!entry)\n+\t\treturn false;\n+\n+\tdirent = entry->dirent;\n+\tif (!dirent)\n+\t\treturn false;\n+\n+\tmemcpy(&dirent->fattr, fattr, sizeof(dirent->fattr));\n+\tdirent->tombstone = false;\n+\treturn true;\n+}\n+\n+/* update the cached_dirent for a given name in folioq */\n+static bool update_cached_dirent_folioq_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t       const char *name,\n+\t\t\t\t\t\t   
    unsigned int namelen,\n+\t\t\t\t\t\t       const struct cifs_fattr *fattr)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *dirent;\n+\n+\tentry = lookup_cached_dirent_locked(cde, name, namelen);\n+\tif (!entry)\n+\t\treturn false;\n+\n+\tdirent = entry->dirent;\n+\tif (!dirent)\n+\t\treturn false;\n+\n+\tmemcpy(&dirent->fattr, fattr, sizeof(dirent->fattr));\n+\tdirent->tombstone = false;\n+\treturn true;\n+}\n+\n+/* update wrapper to decide if the entry is in list or folioq */\n+static bool update_cached_dirent_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\tconst char *name,\n+\t\t\t\t\t\tunsigned int namelen,\n+\t\t\t\t\t\tconst struct cifs_fattr *fattr)\n+{\n+\tif (cached_dirents_use_folioq_locked(cde))\n+\t\treturn update_cached_dirent_folioq_locked(cde, name, namelen,\n+\t\t\t\t\t\t\t  fattr);\n+\n+\treturn update_cached_dirent_list_locked(cde, name, namelen,\n+\t\t\t\t\t\t\t fattr);\n+}\n+\n+/* invalidate a cached_dirent by name in list */\n+static bool invalidate_cached_dirent_list_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t const char *name,\n+\t\t\t\t\t\t unsigned int namelen)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *dirent;\n+\n+\tentry = lookup_cached_dirent_list_locked(cde, name, namelen);\n+\tif (!entry)\n+\t\treturn true;\n+\n+\tdirent = entry->dirent;\n+\tif (!dirent)\n+\t\treturn true;\n+\n+\tdirent->tombstone = true;\n+\treturn true;\n+}\n+\n+/* invalidate a cached_dirent by name in folioq */\n+static bool invalidate_cached_dirent_folioq_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\t   const char *name,\n+\t\t\t\t\t\t   unsigned int namelen)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *dirent;\n+\n+\tentry = lookup_cached_dirent_locked(cde, name, namelen);\n+\tif (!entry)\n+\t\treturn true;\n+\n+\tdirent = entry->dirent;\n+\tif (!dirent)\n+\t\treturn false;\n+\n+\tdirent->tombstone = true;\n+\tif (entry->pending_dcache) {\n+\t\tentry->pending_dcache = 
false;\n+\t\tcomplete_all(&entry->dcache_complete);\n+\t}\n+\n+\treturn true;\n+}\n+\n+/* invalidate wrapper to decide if the entry is in list or folioq */\n+static bool invalidate_cached_dirent_locked(struct cached_dirents *cde,\n+\t\t\t\t\t\tconst char *name,\n+\t\t\t\t\t\tunsigned int namelen)\n+{\n+\tif (cached_dirents_use_folioq_locked(cde))\n+\t\treturn invalidate_cached_dirent_folioq_locked(cde, name,\n+\t\t\t\t\t\t\t      namelen);\n+\n+\treturn invalidate_cached_dirent_list_locked(cde, name, namelen);\n+}\n+\n+/* append a dirent to the cached_dir */\n bool add_to_cached_dir(struct cached_fid *cfid,\n \t\t       struct dir_context *ctx,\n \t\t       const char *name,\n@@ -168,96 +975,258 @@ bool add_to_cached_dir(struct cached_fid *cfid,\n \t\t       struct cifs_fattr *fattr,\n \t\t       struct file *file)\n {\n-\tsize_t delta_bytes;\n+\tunsigned long old_entries;\n+\tunsigned long new_entries;\n+\tu64 old_bytes;\n+\tu64 new_bytes;\n+\tlong entry_diff;\n+\tlong long bytes_diff;\n \tbool added = false;\n \n \tif (!cfid)\n \t\treturn false;\n \n-\t/* Cost of this entry */\n-\tdelta_bytes = sizeof(struct cached_dirent) + (size_t)namelen + 1;\n-\n \tmutex_lock(&cfid->dirents.de_mutex);\n+\told_entries = cfid->dirents.entries_count;\n+\told_bytes = cfid->dirents.bytes_used;\n \tadded = add_cached_dirent(&cfid->dirents, ctx, name, namelen,\n \t\t\t\t  fattr, file);\n+\tnew_entries = cfid->dirents.entries_count;\n+\tnew_bytes = cfid->dirents.bytes_used;\n \tmutex_unlock(&cfid->dirents.de_mutex);\n \n-\tif (added) {\n-\t\t/* per-tcon then global for consistency with free path */\n-\t\tatomic64_add((long long)delta_bytes, &cfid->cfids->total_dirents_bytes);\n-\t\tatomic_long_inc(&cfid->cfids->total_dirents_entries);\n-\t\tatomic64_add((long long)delta_bytes, &cifs_dircache_bytes_used);\n+\tentry_diff = (long)new_entries - (long)old_entries;\n+\tbytes_diff = (long long)new_bytes - (long long)old_bytes;\n+\n+\tif (entry_diff > 0) {\n+\t\tatomic_long_add(entry_diff, 
&cfid->cfids->total_dirents_entries);\n+\t} else if (entry_diff < 0) {\n+\t\tatomic_long_sub(-entry_diff, &cfid->cfids->total_dirents_entries);\n+\t}\n+\n+\tif (bytes_diff > 0) {\n+\t\tatomic64_add(bytes_diff, &cfid->cfids->total_dirents_bytes);\n+\t\tatomic64_add(bytes_diff, &cifs_dircache_bytes_used);\n+\t} else if (bytes_diff < 0) {\n+\t\tatomic64_sub(-bytes_diff, &cfid->cfids->total_dirents_bytes);\n+\t\tatomic64_sub(-bytes_diff, &cifs_dircache_bytes_used);\n \t}\n \n+\n \treturn added;\n }\n \n-static void update_cached_dirents_count(struct cached_dirents *cde,\n-\t\t\t\t\tstruct file *file)\n+/* update the cached_dir position during a readdir population pass */\n+void update_pos_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t      struct file *file)\n {\n-\tif (cde->file != file)\n-\t\treturn;\n-\tif (cde->is_valid || cde->is_failed)\n+\tif (!cfid)\n \t\treturn;\n \n-\tcde->pos++;\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tupdate_cached_dirents_count(&cfid->dirents, file);\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n }\n \n-static void finished_cached_dirents_count(struct cached_dirents *cde,\n-\t\t\t\t\t  struct dir_context *ctx,\n-\t\t\t\t\t  struct file *file)\n+/* signal completion of cached_dir population after a readdir pass */\n+void complete_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t\tstruct dir_context *ctx,\n+\t\t\t\t\tstruct file *file)\n {\n-\tif (cde->file != file)\n-\t\treturn;\n-\tif (cde->is_valid || cde->is_failed)\n-\t\treturn;\n-\tif (ctx->pos != cde->pos)\n+\tstruct cached_dirents *cde;\n+\n+\tif (!cfid)\n \t\treturn;\n \n-\tcde->is_valid = 1;\n+\tcde = &cfid->dirents;\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tfinished_cached_dirents_count(cde, ctx, file);\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n }\n \n-void update_pos_cached_dir(struct cached_fid *cfid,\n-\t\t\t\t      struct file *file)\n+/*\n+ * lookup a cached_dirent by name, returning -ENOENT if not found or if the\n+ * entry is a tombstone.  
The result struct is filled in with the fattr of the\n+ * found entry, and flags indicating whether the entry was found, whether the\n+ * cache was fully populated at the time of lookup, and whether there was an\n+ * active lease on the directory at the time of lookup.\n+ */\n+int lookup_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t const char *name,\n+\t\t\t\t unsigned int namelen,\n+\t\t\t\t struct cached_dirent_lookup_result *result)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tstruct cached_dirent *dirent;\n+\tbool lease_active;\n+\n+\tif (!cfid || !name || !namelen || !result)\n+\t\treturn -EINVAL;\n+\n+\tmemset(result, 0, sizeof(*result));\n+\n+\tspin_lock(&cfid->cfid_lock);\n+\tlease_active = is_valid_cached_dir(cfid);\n+\tspin_unlock(&cfid->cfid_lock);\n+\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tresult->under_active_lease = lease_active;\n+\tresult->fully_populated = cfid->dirents.is_valid;\n+\n+\tentry = lookup_cached_dirent_entry_locked(&cfid->dirents, name, namelen);\n+\tif (!entry || !entry->dirent) {\n+\t\tmutex_unlock(&cfid->dirents.de_mutex);\n+\t\treturn -ENOENT;\n+\t}\n+\n+\tdirent = entry->dirent;\n+\tif (dirent->tombstone) {\n+\t\tmutex_unlock(&cfid->dirents.de_mutex);\n+\t\treturn -ENOENT;\n+\t}\n+\n+\tresult->found = true;\n+\tmemcpy(&result->fattr, &dirent->fattr, sizeof(result->fattr));\n+\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n+\treturn 0;\n+}\n+\n+/*\n+ * Invalidate all cached_dirents for a cached_fid. We generally\n+ * try to invalidate specific entries by name. 
This is used as\n+ * a last resort when we can't invalidate specific entries\n+ */\n+void invalidate_cached_dir_contents(struct cached_fid *cfid)\n {\n \tif (!cfid)\n \t\treturn;\n \n \tmutex_lock(&cfid->dirents.de_mutex);\n-\tupdate_cached_dirents_count(&cfid->dirents, file);\n+\tfail_cached_dir_locked(&cfid->dirents);\n \tmutex_unlock(&cfid->dirents.de_mutex);\n }\n \n-void complete_cached_dir(struct cached_fid *cfid,\n-\t\t\t\t\tstruct dir_context *ctx,\n-\t\t\t\t\tstruct file *file)\n+/*\n+ * Update a cached_dirent for a given name.  Returns true if the entry was\n+ * found and updated, false if the entry was not found or if the cache is not\n+ * valid.\n+ */\n+bool update_dirent_in_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t  const char *name,\n+\t\t\t\t  unsigned int namelen,\n+\t\t\t\t  const struct cifs_fattr *fattr)\n+{\n+\tbool updated = false;\n+\n+\tif (!cfid || !name || !namelen || !fattr)\n+\t\treturn false;\n+\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tupdated = update_cached_dirent_locked(&cfid->dirents, name,\n+\t\t\t\t\t\t      namelen, fattr);\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n+\treturn updated;\n+}\n+\n+/*\n+ * Invalidate a cached_dirent for a given name.  
Returns true if the entry is\n+ * now marked as a tombstone (or was already absent), false if the cache is\n+ * not valid.\n+ */\n+bool invalidate_dirent_in_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t      const char *name,\n+\t\t\t\t      unsigned int namelen)\n {\n+\tbool invalidated = false;\n+\n+\tif (!cfid || !name || !namelen)\n+\t\treturn false;\n+\tif (!cached_dir_is_valid(cfid))\n+\t\treturn false;\n+\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tif (!cfid->dirents.is_valid || cfid->dirents.is_failed)\n+\t\tgoto out_unlock;\n+\n+\tinvalidated = invalidate_cached_dirent_locked(&cfid->dirents,\n+\t\t\t\t\t\t\t name, namelen);\n+\n+out_unlock:\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n+\treturn invalidated;\n+}\n+\n+/*\n+ * Signal completion of dcache population for a specific dirent.\n+ * Called after cifs_prime_dcache returns, on both sync and async paths.\n+ * Clears the pending_dcache flag and unblocks any waiting lookups.\n+ */\n+void cifs_complete_pending_dcache(struct cached_fid *cfid,\n+\t\tconst char *name, unsigned int namelen)\n+{\n+\tstruct cached_dir_lookup_entry *entry;\n+\tbool uses_folioq;\n+\tint ret = -ENOENT;\n+\n \tif (!cfid)\n \t\treturn;\n \n \tmutex_lock(&cfid->dirents.de_mutex);\n-\tfinished_cached_dirents_count(&cfid->dirents, ctx, file);\n+\tuses_folioq = cached_dirents_use_folioq_locked(&cfid->dirents);\n+\tentry = lookup_cached_dirent_entry_locked(&cfid->dirents, name, namelen);\n+\tif (entry) {\n+\t\tif (uses_folioq && entry->pending_dcache) {\n+\t\t\tentry->pending_dcache = false;\n+\t\t\tcomplete_all(&entry->dcache_complete);\n+\t\t}\n+\t\tret = 0;\n+\t}\n \tmutex_unlock(&cfid->dirents.de_mutex);\n+\tcifs_dbg(FYI, \"Dcache population of %.*s. 
status: %d\\n\",\n+\t\t\t\t\tnamelen, name, ret);\n }\n \n-struct cached_dirent *lookup_cached_dirent(struct cached_dirents *cde,\n-\t\t\t\t   const char *name,\n-\t\t\t\t   unsigned int namelen)\n+/*\n+ * Signal completion of dcache population for a specific dirent.\n+ * Wait for async dcache population to complete for a specific dirent.\n+ * Returns: 0 on completion or entry not pending, -ETIMEDOUT on timeout,\n+ *          -ENOENT if entry not found in the cache.\n+ */\n+int cifs_wait_for_pending_dcache(struct cached_fid *cfid,\n+\t\tconst char *name, unsigned int namelen)\n {\n-\tstruct cached_dirent *entry;\n+\tstruct cached_dir_lookup_entry *entry;\n+\tbool uses_folioq;\n+\tstruct completion *comp = NULL;\n+\tint ret = -ENOENT;\n \n-\tif (!cde)\n-\t\treturn NULL;\n+\tif (!cfid)\n+\t\treturn -ENOENT;\n \n-\tlockdep_assert_held(&cde->de_mutex);\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tuses_folioq = cached_dirents_use_folioq_locked(&cfid->dirents);\n+\tentry = lookup_cached_dirent_entry_locked(&cfid->dirents, name, namelen);\n+\tif (entry) {\n+\t\tret = 0;\n+\t\tif (uses_folioq && entry->pending_dcache)\n+\t\t\tcomp = &entry->dcache_complete;\n+\t}\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n \n-\tlist_for_each_entry(entry, &cde->entries, entry) {\n-\t\tif (entry->namelen == namelen &&\n-\t\t    memcmp(entry->name, name, namelen) == 0)\n-\t\t\treturn entry;\n+\tif (comp) {\n+\t\tif (wait_for_completion_timeout(comp, CIFS_DCACHE_WAIT_TIMEOUT) == 0) {\n+\t\t\tcifs_dbg(FYI, \"Timeout waiting for dcache population of %.*s\\n\",\n+\t\t\t\t\tnamelen, name);\n+\t\t\tret = -ETIMEDOUT;\n+\t\t} else {\n+\t\t\tcifs_dbg(FYI, \"Dcache population completed for %.*s\\n\",\n+\t\t\t\t\tnamelen, name);\n+\t\t\tret = 0;\n+\t\t}\n \t}\n \n-\treturn NULL;\n+\treturn ret;\n }\n \n static struct cached_fid *find_or_create_cached_dir(struct cached_fids *cfids,\n@@ -682,7 +1651,9 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,\n \t\t\t      struct cached_fid **ret_cfid)\n 
{\n \tstruct cached_fid *cfid;\n+\tstruct cached_fid *trace_cfid = NULL;\n \tstruct cached_fids *cfids = tcon->cfids;\n+\tint rc = -ENOENT;\n \n \tif (cfids == NULL)\n \t\treturn -EOPNOTSUPP;\n@@ -702,13 +1673,15 @@ int open_cached_dir_by_dentry(struct cifs_tcon *tcon,\n \t\t\tkref_get(&cfid->refcount);\n \t\t\t*ret_cfid = cfid;\n \t\t\tcfid->last_access_time = jiffies;\n+\t\t\trc = 0;\n+\t\t\ttrace_cfid = cfid;\n \t\t\tspin_unlock(&cfid->cfid_lock);\n \t\t\tspin_unlock(&cfids->cfid_list_lock);\n-\t\t\treturn 0;\n+\t\t\treturn rc;\n \t\t}\n \t}\n \tspin_unlock(&cfids->cfid_list_lock);\n-\treturn -ENOENT;\n+\treturn rc;\n }\n \n static void\n@@ -853,10 +1826,10 @@ void close_all_cached_dirs(struct cifs_sb_info *cifs_sb)\n }\n \n /*\n- * Invalidate all cached dirs when a TCON has been reset\n- * due to a session loss.\n+ * Queue all cached dirs for invalidation on laundromat without waiting.\n+ * Safe for callers that hold cifs_tcp_ses_lock.\n  */\n-void invalidate_all_cached_dirs(struct cifs_tcon *tcon)\n+void invalidate_all_cached_dirs_nowait(struct cifs_tcon *tcon)\n {\n \tstruct cached_fids *cfids = tcon->cfids;\n \tstruct cached_fid *cfid, *q;\n@@ -890,8 +1863,22 @@ void invalidate_all_cached_dirs(struct cifs_tcon *tcon)\n \t}\n \tspin_unlock(&cfids->cfid_list_lock);\n \n-\t/* run laundromat unconditionally now as there might have been previously queued work */\n+\t/* Run laundromat now as there might have been previously queued work. 
*/\n \tmod_delayed_work(cfid_put_wq, &cfids->laundromat_work, 0);\n+}\n+\n+/*\n+ * Invalidate all cached dirs when a TCON has been reset\n+ * due to a session loss.\n+ */\n+void invalidate_all_cached_dirs(struct cifs_tcon *tcon)\n+{\n+\tstruct cached_fids *cfids = tcon->cfids;\n+\n+\tif (!cfids)\n+\t\treturn;\n+\n+\tinvalidate_all_cached_dirs_nowait(tcon);\n \tflush_delayed_work(&cfids->laundromat_work);\n }\n \n@@ -980,7 +1967,7 @@ static struct cached_fid *init_cached_dir(const char *path)\n \tINIT_WORK(&cfid->close_work, cached_dir_offload_close);\n \tINIT_WORK(&cfid->put_work, cached_dir_put_work);\n \tINIT_LIST_HEAD(&cfid->entry);\n-\tINIT_LIST_HEAD(&cfid->dirents.entries);\n+\tINIT_LIST_HEAD(&cfid->dirents.entry_list);\n \tmutex_init(&cfid->dirents.de_mutex);\n \tmutex_init(&cfid->cfid_open_mutex);\n \tspin_lock_init(&cfid->cfid_lock);\n@@ -990,38 +1977,34 @@ static struct cached_fid *init_cached_dir(const char *path)\n \n static void free_cached_dir(struct cached_fid *cfid)\n {\n-\tstruct cached_dirent *dirent, *q;\n+\tunsigned long entries_count = 0;\n+\tu64 bytes_used = 0;\n \n \tWARN_ON(work_pending(&cfid->close_work));\n \tWARN_ON(work_pending(&cfid->put_work));\n \n+\n \tdput(cfid->dentry);\n \tcfid->dentry = NULL;\n \n-\t/*\n-\t * Delete all cached dirent names\n-\t */\n-\tlist_for_each_entry_safe(dirent, q, &cfid->dirents.entries, entry) {\n-\t\tlist_del(&dirent->entry);\n-\t\tkfree(dirent->name);\n-\t\tkfree(dirent);\n-\t}\n+\tmutex_lock(&cfid->dirents.de_mutex);\n+\tentries_count = cfid->dirents.entries_count;\n+\tbytes_used = cfid->dirents.bytes_used;\n+\trelease_cached_dirents_locked(&cfid->dirents);\n+\tmutex_unlock(&cfid->dirents.de_mutex);\n \n \t/* adjust tcon-level counters and reset per-dir accounting */\n \tif (cfid->cfids) {\n-\t\tif (cfid->dirents.entries_count)\n-\t\t\tatomic_long_sub((long)cfid->dirents.entries_count,\n+\t\tif (entries_count)\n+\t\t\tatomic_long_sub((long)entries_count,\n 
\t\t\t\t\t&cfid->cfids->total_dirents_entries);\n-\t\tif (cfid->dirents.bytes_used) {\n-\t\t\tatomic64_sub((long long)cfid->dirents.bytes_used,\n+\t\tif (bytes_used) {\n+\t\t\tatomic64_sub((long long)bytes_used,\n \t\t\t\t\t&cfid->cfids->total_dirents_bytes);\n-\t\t\tatomic64_sub((long long)cfid->dirents.bytes_used,\n+\t\t\tatomic64_sub((long long)bytes_used,\n \t\t\t\t\t&cifs_dircache_bytes_used);\n \t\t}\n \t}\n-\tcfid->dirents.entries_count = 0;\n-\tcfid->dirents.bytes_used = 0;\n-\n \tkfree(cfid->path);\n \tcfid->path = NULL;\n \tkfree(cfid);\n@@ -1041,7 +2024,7 @@ static void cfids_laundromat_worker(struct work_struct *work)\n \n \tlist_for_each_entry_safe(cfid, q, &cfids->entries, entry) {\n \t\tspin_lock(&cfid->cfid_lock);\n-\t\tif (cfid->last_access_time &&\n+\t\tif (dir_cache_timeout && cfid->last_access_time &&\n \t\t    time_after(jiffies, cfid->last_access_time + HZ * dir_cache_timeout)) {\n \t\t\tcfid->on_list = false;\n \t\t\tlist_move(&cfid->entry, &entry);\n@@ -1083,8 +2066,9 @@ static void cfids_laundromat_worker(struct work_struct *work)\n \t\t\t */\n \t\t\tclose_cached_dir(cfid);\n \t}\n-\tqueue_delayed_work(cfid_put_wq, &cfids->laundromat_work,\n-\t\t\t   dir_cache_timeout * HZ);\n+\tif (dir_cache_timeout)\n+\t\tqueue_delayed_work(cfid_put_wq, &cfids->laundromat_work,\n+\t\t\t\t   dir_cache_timeout * HZ);\n }\n \n struct cached_fids *init_cached_dirs(void)\n@@ -1099,8 +2083,9 @@ struct cached_fids *init_cached_dirs(void)\n \tINIT_LIST_HEAD(&cfids->dying);\n \n \tINIT_DELAYED_WORK(&cfids->laundromat_work, cfids_laundromat_worker);\n-\tqueue_delayed_work(cfid_put_wq, &cfids->laundromat_work,\n-\t\t\t   dir_cache_timeout * HZ);\n+\tif (dir_cache_timeout)\n+\t\tqueue_delayed_work(cfid_put_wq, &cfids->laundromat_work,\n+\t\t\t\t   dir_cache_timeout * HZ);\n \n \tatomic_long_set(&cfids->total_dirents_entries, 0);\n \tatomic64_set(&cfids->total_dirents_bytes, 0);\ndiff --git a/fs/smb/client/cached_dir.h b/fs/smb/client/cached_dir.h\nindex 
0767350b40fba..0726f25b9144a 100644\n--- a/fs/smb/client/cached_dir.h\n+++ b/fs/smb/client/cached_dir.h\n@@ -8,16 +8,107 @@\n #ifndef _CACHED_DIR_H\n #define _CACHED_DIR_H\n \n+#include <linux/completion.h>\n+#include <linux/build_bug.h>\n+#include <linux/list.h>\n+#include <linux/netfs.h>\n+\n struct cifs_search_info;\n \n+/* Timeout for waiting on async dcache population to complete */\n+#define CIFS_DCACHE_WAIT_TIMEOUT\t(HZ / 10)\n+\n+#define CIFS_CACHED_INLINE_NAME_LEN\t64\n+#define CIFS_CACHED_DIRENT_LIST_THRESHOLD\t64\n+\n struct cached_dirent {\n-\tstruct list_head entry;\n-\tchar *name;\n-\tint namelen;\n-\tloff_t pos;\n+\tconst char *name;\n+\tu32 name_len;\n+\tbool external_name;\n+\tbool tombstone;\n+\tu32 inline_name_off;\n+\tloff_t ctx_pos;\n \tstruct cifs_fattr fattr;\n };\n \n+/*\n+ * Folio-backed cached directory entry storage:\n+ *\n+ * Directory entries are stored in a folio_queue managed by cached_dirents.\n+ * Each folio's virtual address points to a cifs_cached_dir_mapping structure,\n+ * which combines directory metadata and a variable-length array of cached_dirent\n+ * entries in a single folio allocation.\n+ *\n+ * Layout within each folio:\n+ *   [cifs_cached_dir_mapping] [cached_dirent[0]] ... 
[cached_dirent[n]]\n+ *                             ^                                       ^\n+ *                             |------------ entries_count ------------|\n+ *                             |-------- name_tail_offset (growing downward) ---------|\n+ *                             Inline name data (packed at tail of the folio)\n+ *\n+ * Field meanings:\n+ *   name_tail_offset: Current start offset of inline-name storage in the folio.\n+ *                     This moves downward as inline names are packed from tail.\n+ *   folio_full: Set when this folio cannot accept another cached_dirent record\n+ *               (record array would collide with inline-name tail region).\n+ *   folio_is_eof: Set when this folio contains the last emitted dirent for the\n+ *                 cached directory stream; readers stop when this folio is seen.\n+ *\n+ * Inline name optimization:\n+ *   Names <= CIFS_CACHED_INLINE_NAME_LEN are packed at the tail of the folio,\n+ *   after the last dirent entry. This avoids per-name allocation. For longer names,\n+ *   external_name is set and a separate kstrndup'd pointer is used.\n+ *\n+ * Tracking and lookup:\n+ *   A hash table (lookup_ht) in cached_dirents indexes all entries by name.\n+ *   Each hash entry (cached_dir_lookup_entry) records:\n+ *     - name pointer (points into inline region or external memory)\n+ *     - dirent pointer (points to cached_dirent in folio or list allocation)\n+ *   This enables O(1) lookups during dirent reservation and update operations,\n+ *   while also allowing list-backed staging to reuse cached_dirent directly.\n+ *\n+ * Sequencing and position tracking:\n+ *   last_cookie tracks the directory position (ctx->pos) of the last entry added\n+ *   to this folio. 
When adding the next entry, we use last_cookie + 1 to maintain\n+ *   consistent incrementing positions used for directory iteration.\n+ */\n+struct cifs_cached_dir_mapping {\n+\tu64 last_cookie;\n+\tu32 entries_count;\n+\tu32 name_tail_offset;\n+\tu32 folio_full:1;\n+\tu32 folio_is_eof:1;\n+\tstruct cached_dirent entries[];\n+};\n+\n+struct cached_dir_lookup_entry {\n+\tstruct hlist_node hash_node;\n+\tstruct list_head list_node;\n+\tstruct completion dcache_complete;\n+\tstruct cached_dirent *dirent;\n+\tu32 name_hash;\n+\tbool pending_dcache;\n+};\n+\n+/*\n+ * Per-directory dirent cache using a two-mode storage strategy:\n+ *\n+ * Small directories (up to CIFS_CACHED_DIRENT_LIST_THRESHOLD entries):\n+ *   Entries are stored as individually allocated cached_dirent structs linked\n+ *   via cached_dir_lookup_entry nodes in entry_list. Each entry carries its\n+ *   own name allocation. This avoids folio overhead for short-lived or small\n+ *   directories.\n+ *\n+ * Large directories (above the threshold):\n+ *   The list is converted to folio-backed storage. Entries are packed into\n+ *   folios managed by folioq, with names <= CIFS_CACHED_INLINE_NAME_LEN stored\n+ *   inline at the tail of each folio to reduce per-name allocations. A hash\n+ *   table (lookup_ht) provides O(1) name lookup in this mode.\n+ *\n+ * The active mode is determined by whether folioq is non-NULL. 
All CRUD\n+ * operations (insert, lookup, update, invalidate, release) dispatch to the\n+ * appropriate list or folioq implementation via mode-dispatching helpers.\n+ */\n struct cached_dirents {\n \tbool is_valid:1;\n \tbool is_failed:1;\n@@ -25,9 +116,23 @@ struct cached_dirents {\n \t\t\t    * Used to associate the cache with a single\n \t\t\t    * open file instance.\n \t\t\t    */\n+\tstruct inode *dir_inode;\n \tstruct mutex de_mutex;\n \tloff_t pos;\t\t /* Expected ctx->pos */\n-\tstruct list_head entries;\n+\tstruct folio_queue *folioq;\n+\tstruct list_head entry_list;\n+\tunsigned int entry_list_count;\n+\t/*\n+\t * Insertion cursor used by add_cached_dirent() to avoid rescanning folioq\n+\t * from the head on every append.\n+\t */\n+\tstruct folio_queue *insert_cursor_fq;\n+\tunsigned int insert_cursor_slot;\n+\tunsigned int insert_cursor_folio_index;\n+\tsize_t folioq_size;\n+\tunsigned long external_name_bytes;\n+\tstruct hlist_head *lookup_ht;\n+\tunsigned long lookup_bytes;\n \t/* accounting for cached entries in this directory */\n \tunsigned long entries_count;\n \tunsigned long bytes_used;\n@@ -57,6 +162,13 @@ struct cached_fid {\n \tstruct smb2_file_all_info file_all_info;\n };\n \n+struct cached_dirent_lookup_result {\n+\tbool found;\n+\tbool under_active_lease;\n+\tbool fully_populated;\n+\tstruct cifs_fattr fattr;\n+};\n+\n /* default MAX_CACHED_FIDS is 16 */\n struct cached_fids {\n \t/* Must be held when:\n@@ -115,12 +227,25 @@ void update_pos_cached_dir(struct cached_fid *cfid,\n void complete_cached_dir(struct cached_fid *cfid,\n \t\t\t\t\tstruct dir_context *ctx,\n \t\t\t\t\tstruct file *file);\n-struct cached_dirent *lookup_cached_dirent(struct cached_dirents *cde,\n-\t\t\t\t   const char *name,\n-\t\t\t\t   unsigned int namelen);\n+int lookup_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t const char *name, unsigned int namelen,\n+\t\t\t\t struct cached_dirent_lookup_result *result);\n+void invalidate_cached_dir_contents(struct cached_fid 
*cfid);\n+bool update_dirent_in_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t  const char *name,\n+\t\t\t\t  unsigned int namelen,\n+\t\t\t\t  const struct cifs_fattr *fattr);\n+bool invalidate_dirent_in_cached_dir(struct cached_fid *cfid,\n+\t\t\t\t      const char *name,\n+\t\t\t\t      unsigned int namelen);\n+void cifs_complete_pending_dcache(struct cached_fid *cfid,\n+\t\t\t\t  const char *name, unsigned int namelen);\n+int cifs_wait_for_pending_dcache(struct cached_fid *cfid,\n+\t\t\t\t const char *name, unsigned int namelen);\n void drop_cached_dir_by_name(const unsigned int xid, struct cifs_tcon *tcon,\n \t\t\t     const char *name, struct cifs_sb_info *cifs_sb);\n void close_all_cached_dirs(struct cifs_sb_info *cifs_sb);\n+void invalidate_all_cached_dirs_nowait(struct cifs_tcon *tcon);\n void invalidate_all_cached_dirs(struct cifs_tcon *tcon);\n bool cached_dir_lease_break(struct cifs_tcon *tcon, __u8 lease_key[16]);\n \ndiff --git a/fs/smb/client/cifsproto.h b/fs/smb/client/cifsproto.h\nindex bbbee0ef09443..1bf34a97f051f 100644\n--- a/fs/smb/client/cifsproto.h\n+++ b/fs/smb/client/cifsproto.h\n@@ -179,6 +179,7 @@ void cifs_unix_basic_to_fattr(struct cifs_fattr *fattr,\n void cifs_dir_info_to_fattr(struct cifs_fattr *fattr,\n \t\t\t    FILE_DIRECTORY_INFO *info,\n \t\t\t    struct cifs_sb_info *cifs_sb);\n+void cifs_inode_to_fattr(struct inode *inode, struct cifs_fattr *fattr);\n int cifs_fattr_to_inode(struct inode *inode, struct cifs_fattr *fattr,\n \t\t\tbool from_readdir);\n struct inode *cifs_iget(struct super_block *sb, struct cifs_fattr *fattr);\n","prefixes":["v3","10/19"]}