
[RFC,v3,25/26] filemap: support disable large folios on active inode

Message ID 20240127015825.1608160-26-yi.zhang@huaweicloud.com
Series: [v3,01/26] ext4: refactor ext4_da_map_blocks()

Commit Message

Zhang Yi Jan. 27, 2024, 1:58 a.m. UTC
From: Zhang Yi <yi.zhang@huawei.com>

Since commit 730633f0b7f9 ("mm: Protect operations adding pages to page
cache with invalidate_lock"), mapping->invalidate_lock can prevent new
folios from being added to the page cache. This makes it possible to
disable large folio support on an active inode, even though it might be
dangerous. Filesystems can do so by dropping all of the inode's page
cache under mapping->invalidate_lock before clearing
AS_LARGE_FOLIO_SUPPORT; in addition, the folio order used by readahead
must also be determined under the same lock.
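
For example, a filesystem could use the new helper with a sequence like
the one below (a minimal sketch, not part of this patch;
fs_disable_large_folios() is a hypothetical caller):

	static void fs_disable_large_folios(struct inode *inode)
	{
		struct address_space *mapping = inode->i_mapping;

		/* Block page faults and readahead from adding new folios. */
		filemap_invalidate_lock(mapping);
		/* Drop every cached folio before clearing the flag. */
		truncate_inode_pages(mapping, 0);
		mapping_clear_large_folios(mapping);
		filemap_invalidate_unlock(mapping);
	}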

Signed-off-by: Zhang Yi <yi.zhang@huawei.com>
---
 include/linux/pagemap.h | 14 ++++++++++++++
 mm/readahead.c          |  6 ++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

Patch

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 06142ff7f9ce..7f77670960d8 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -343,6 +343,20 @@  static inline void mapping_set_large_folios(struct address_space *mapping)
 	__set_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
 }
 
+/**
+ * mapping_clear_large_folios() - Disable large folio support for a file.
+ * @mapping: The file.
+ *
+ * The filesystem has to make sure that all cached folios have been
+ * dropped under mapping->invalidate_lock, and that the lock is still
+ * held, before calling this function.
+ */
+static inline void mapping_clear_large_folios(struct address_space *mapping)
+{
+	WARN_ON_ONCE(!rwsem_is_locked(&mapping->invalidate_lock));
+	__clear_bit(AS_LARGE_FOLIO_SUPPORT, &mapping->flags);
+}
+
 /*
  * Large folio support currently depends on THP.  These dependencies are
  * being worked on but are not yet fixed.
diff --git a/mm/readahead.c b/mm/readahead.c
index 6925e6959fd3..c97eceaf7214 100644
--- a/mm/readahead.c
+++ b/mm/readahead.c
@@ -493,8 +493,11 @@  void page_cache_ra_order(struct readahead_control *ractl,
 	int err = 0;
 	gfp_t gfp = readahead_gfp_mask(mapping);
 
-	if (!mapping_large_folio_support(mapping) || ra->size < 4)
+	filemap_invalidate_lock_shared(mapping);
+	if (!mapping_large_folio_support(mapping) || ra->size < 4) {
+		filemap_invalidate_unlock_shared(mapping);
 		goto fallback;
+	}
 
 	limit = min(limit, index + ra->size - 1);
 
@@ -506,7 +509,6 @@  void page_cache_ra_order(struct readahead_control *ractl,
 			new_order--;
 	}
 
-	filemap_invalidate_lock_shared(mapping);
 	while (index <= limit) {
 		unsigned int order = new_order;