
[RFC] ext4: skip concurrent inode updates in lazytime optimization

Message ID 158031264567.6836.126132376018905207.stgit@buzz
State New
Series [RFC] ext4: skip concurrent inode updates in lazytime optimization

Commit Message

Konstantin Khlebnikov Jan. 29, 2020, 3:44 p.m. UTC
Function ext4_update_other_inodes_time() implements an optimization that
opportunistically updates timestamps for other inodes within the same
inode table block.

Currently, concurrent inode lookup by number does not scale well because
the inode hash table is protected by a single spinlock. The lock can
become very hot under concurrent writes to a fast NVMe device once the
inode cache holds enough inodes.

Someday the inode hash table may become searchable under RCU.
(see linked patchset by David Howells)

Until then, let's skip concurrent updates instead of wasting CPU time
spinning on the lock.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Link: https://lore.kernel.org/lkml/155620449631.4720.8762546550728087460.stgit@warthog.procyon.org.uk/
---
 fs/ext4/inode.c |    7 +++++++
 1 file changed, 7 insertions(+)

Comments

Andreas Dilger Jan. 29, 2020, 7:53 p.m. UTC | #1
On Jan 29, 2020, at 8:44 AM, Konstantin Khlebnikov <khlebnikov@yandex-team.ru> wrote:
> 
> Function ext4_update_other_inodes_time() implements an optimization that
> opportunistically updates timestamps for other inodes within the same
> inode table block.
> 
> Currently, concurrent inode lookup by number does not scale well because
> the inode hash table is protected by a single spinlock. The lock can
> become very hot under concurrent writes to a fast NVMe device once the
> inode cache holds enough inodes.
> 
> Someday the inode hash table may become searchable under RCU.
> (see linked patchset by David Howells)
> 
> Until then, let's skip concurrent updates instead of wasting CPU time
> spinning on the lock.

Do you have any benchmark numbers to confirm that this is an improvement?
The performance results should be included here in the commit message, so
that reviewers can make an informed decision about the patch, and so that,
if this patch is later shown to be a regression for some other workload,
we can see which workload(s) it originally improved performance on.

Cheers, Andreas

> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Link: https://lore.kernel.org/lkml/155620449631.4720.8762546550728087460.stgit@warthog.procyon.org.uk/
> ---
> fs/ext4/inode.c |    7 +++++++
> 1 file changed, 7 insertions(+)
> 
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index 629a25d999f0..dc3e1b38e3ed 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -4849,11 +4849,16 @@ static int other_inode_match(struct inode * inode, unsigned long ino,
> static void ext4_update_other_inodes_time(struct super_block *sb,
> 					  unsigned long orig_ino, char *buf)
> {
> +	static DEFINE_SPINLOCK(lock);
> 	struct other_inode oi;
> 	unsigned long ino;
> 	int i, inodes_per_block = EXT4_SB(sb)->s_inodes_per_block;
> 	int inode_size = EXT4_INODE_SIZE(sb);
> 
> +	/* Don't bother inode_hash_lock with concurrent updates. */
> +	if (!spin_trylock(&lock))
> +		return;
> +
> 	oi.orig_ino = orig_ino;
> 	/*
> 	 * Calculate the first inode in the inode table block.  Inode
> @@ -4867,6 +4872,8 @@ static void ext4_update_other_inodes_time(struct super_block *sb,
> 		oi.raw_inode = (struct ext4_inode *) buf;
> 		(void) find_inode_nowait(sb, ino, other_inode_match, &oi);
> 	}
> +
> +	spin_unlock(&lock);
> }
> 
> /*
> 


Theodore Ts'o Jan. 29, 2020, 10:15 p.m. UTC | #2
On Wed, Jan 29, 2020 at 06:44:05PM +0300, Konstantin Khlebnikov wrote:
> Function ext4_update_other_inodes_time() implements an optimization that
> opportunistically updates timestamps for other inodes within the same
> inode table block.
> 
> Currently, concurrent inode lookup by number does not scale well because
> the inode hash table is protected by a single spinlock. The lock can
> become very hot under concurrent writes to a fast NVMe device once the
> inode cache holds enough inodes.
> 
> Someday the inode hash table may become searchable under RCU.
> (see linked patchset by David Howells)
> 
> Until then, let's skip concurrent updates instead of wasting CPU time
> spinning on the lock.
> 
> Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
> Link: https://lore.kernel.org/lkml/155620449631.4720.8762546550728087460.stgit@warthog.procyon.org.uk/

Hmm.... I wonder what Al thinks of adding a variant of
find_inode_nowait() which tries to grab inode_hash_lock
using spin_trylock(), and returns ERR_PTR(-EAGAIN) if the attempt to
grab the lock fails.

This might be better since it will prevent other conflicts between
ext4_update_other_inodes_time() and other attempts to look up inodes,
which can't be skipped if things are busy.

      	       	       	  	     - Ted
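Ted's suggestion might be sketched roughly like this (a hypothetical variant against fs/inode.c; `find_inode_nowait_trylock` and its exact shape are assumptions, not an existing API, and this fragment is not standalone-compilable outside the kernel tree):

```c
/* Hypothetical trylock variant of find_inode_nowait(): back off with
 * -EAGAIN instead of spinning when inode_hash_lock is contended,
 * so opportunistic callers can simply skip their work. Sketch only. */
struct inode *find_inode_nowait_trylock(struct super_block *sb,
				unsigned long hashval,
				int (*match)(struct inode *, unsigned long,
					     void *),
				void *data)
{
	struct hlist_head *head = inode_hashtable + hash(sb, hashval);
	struct inode *inode, *ret_inode = NULL;
	int mval;

	if (!spin_trylock(&inode_hash_lock))
		return ERR_PTR(-EAGAIN);	/* contended: caller may skip */

	hlist_for_each_entry(inode, head, i_hash) {
		if (inode->i_sb != sb)
			continue;
		mval = match(inode, hashval, data);
		if (mval == 0)
			continue;
		if (mval == 1)
			ret_inode = inode;
		goto out;
	}
out:
	spin_unlock(&inode_hash_lock);
	return ret_inode;
}
```

With such a variant, ext4_update_other_inodes_time() would not need its own static spinlock: it could call the trylock variant for the first inode in the block and bail out on ERR_PTR(-EAGAIN), confining the back-off to the hash lock that is actually contended.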

Patch

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 629a25d999f0..dc3e1b38e3ed 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -4849,11 +4849,16 @@  static int other_inode_match(struct inode * inode, unsigned long ino,
 static void ext4_update_other_inodes_time(struct super_block *sb,
 					  unsigned long orig_ino, char *buf)
 {
+	static DEFINE_SPINLOCK(lock);
 	struct other_inode oi;
 	unsigned long ino;
 	int i, inodes_per_block = EXT4_SB(sb)->s_inodes_per_block;
 	int inode_size = EXT4_INODE_SIZE(sb);
 
+	/* Don't bother inode_hash_lock with concurrent updates. */
+	if (!spin_trylock(&lock))
+		return;
+
 	oi.orig_ino = orig_ino;
 	/*
 	 * Calculate the first inode in the inode table block.  Inode
@@ -4867,6 +4872,8 @@  static void ext4_update_other_inodes_time(struct super_block *sb,
 		oi.raw_inode = (struct ext4_inode *) buf;
 		(void) find_inode_nowait(sb, ino, other_inode_match, &oi);
 	}
+
+	spin_unlock(&lock);
 }
 
 /*