Patchwork cifs: implement drop_inode superblock op

Submitter Jeff Layton
Date May 25, 2010, 7:24 p.m.
Message ID <1274815488-29173-1-git-send-email-jlayton@redhat.com>
Permalink /patch/53573/
State New

Comments

Jeff Layton - May 25, 2010, 7:24 p.m.
The standard behavior for drop_inode is to delete the inode when the
last reference to it is put and the nlink count goes to 0. This helps
keep inodes that are still considered "not deleted" in cache as long as
possible even when there aren't dentries attached to them.
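
For context, the generic helpers make roughly this decision (a simplified
sketch based loosely on 2.6.34-era fs/inode.c; exact details vary by
kernel version):

static void generic_drop_inode(struct inode *inode)
{
	if (!inode->i_nlink)
		generic_delete_inode(inode);	/* unlinked: evict it now */
	else
		generic_forget_inode(inode);	/* keep it cached for reuse */
}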

When server inode numbers are disabled, it's not possible for cifs_iget
to ever match an existing inode (since inode numbers are generated via
iunique). In this situation, cifs can keep a lot of inodes in cache that
will never be used again.
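
To illustrate (a hypothetical sketch, not the actual fs/cifs lookup
code): with noserverino, each lookup passes a fresh iunique() number to
iget5_locked(), so the comparison callback can never find a match and a
new inode is allocated every time:

#include <linux/fs.h>

/* hypothetical comparison callback for iget5_locked() */
static int example_find(struct inode *inode, void *opaque)
{
	/* never true: iunique() returned a number no live inode holds */
	return inode->i_ino == *(unsigned long *)opaque;
}

/* hypothetical init callback: stamp the new inode with that number */
static int example_init(struct inode *inode, void *opaque)
{
	inode->i_ino = *(unsigned long *)opaque;
	return 0;
}

static struct inode *example_iget(struct super_block *sb)
{
	unsigned long ino = iunique(sb, 2);	/* 2: skip reserved inodes */

	/* always falls through to allocation, growing the inode cache */
	return iget5_locked(sb, ino, example_find, example_init, &ino);
}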

Implement a drop_inode routine that deletes the inode if server inode
numbers are disabled on the mount. This helps keep the cifs inode
caches down to a more manageable size when server inode numbers are
disabled.

Signed-off-by: Jeff Layton <jlayton@redhat.com>
---
 fs/cifs/cifsfs.c |   14 ++++++++++++--
 1 files changed, 12 insertions(+), 2 deletions(-)
Steve French - May 25, 2010, 10:14 p.m.
Any rough idea of performance or memory savings (even in something
artificial like a dbench run)?

On Tue, May 25, 2010 at 2:24 PM, Jeff Layton <jlayton@redhat.com> wrote:
> [snip]
Jeff Layton - May 26, 2010, 12:09 a.m.
On Tue, 25 May 2010 17:14:10 -0500
Steve French <smfrench@gmail.com> wrote:

> Any rough idea of performance or memory savings (even in something
> artificial like a dbench run)?
> 

It's more of a memory savings thing. When I mount with -o noserverino
and run fsstress on the mount, I'd regularly see the size of the
cifs_inode_cache hit 60M or more (on a client with 1G RAM). With this
patch in place, it rarely goes over 2M in size.

Eventually, memory pressure will force the size to go down, but if we
know the inodes will never be used again (which is the case with
noserverino), it's better to go ahead and just free them.

> [snip]
Raja R Harinath - May 26, 2010, 2:16 a.m.
Hi,

Jeff Layton <jlayton@redhat.com> writes:

[snip]
>  static const struct super_operations cifs_super_ops = {
>  	.put_super = cifs_put_super,
>  	.statfs = cifs_statfs,
>  	.alloc_inode = cifs_alloc_inode,
>  	.destroy_inode = cifs_destroy_inode,
> -/*	.drop_inode	    = generic_delete_inode,
> -	.delete_inode	= cifs_delete_inode,  */  /* Do not need above two
> +	.drop_inode	= cifs_drop_inode,
> +/*	.delete_inode	= cifs_delete_inode,  */  /* Do not need above two
>  	functions unless later we add lazy close of inodes or unless the
>  	kernel forgets to call us with the same number of releases (closes)
>  	as opens */

I think the comment needs to be updated, at least to decrement the "two
functions".

- Hari
Scott Lovenberg - May 26, 2010, 11:19 p.m.
>> Any rough idea of performance or memory savings (even in something
>> artificial like a dbench run)?
>>
>>      
> It's more of a memory savings thing. When I mount with -o noserverino
> and run fsstress on the mount, I'd regularly see the size of the
> cifs_inode_cache hit 60M or more (on a client with 1G RAM). With this
> patch in place, it rarely goes over 2M in size.
>
> Eventually, memory pressure will force the size to go down, but if we
> know that they'll never be used again (which is the case with
> noserverino), it's better to go ahead and just free them.
>
>    
I take it this overrides the vfs_cache_pressure behavior before memory
pressure makes reclaiming the caches necessary?
Jeff Layton - May 27, 2010, 1:38 p.m.
On Wed, 26 May 2010 19:19:11 -0400
Scott Lovenberg <scott.lovenberg@gmail.com> wrote:

> [snip]
> I take it this overrides the vfs_cache_pressure behavior before memory
> pressure makes reclaiming the caches necessary?

Not exactly. vfs_cache_pressure just governs how the VM subsystem tries
to free memory when it needs to, by adjusting its preference for
reclaiming the inode and dentry caches.
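
Roughly, the sysctl just scales what the cache shrinkers report as
reclaimable (a simplified sketch based loosely on the 2.6.34-era dcache
shrinker; the inode cache shrinker is analogous):

static int shrink_dcache_memory(int nr, gfp_t gfp_mask)
{
	if (nr) {
		if (!(gfp_mask & __GFP_FS))
			return -1;	/* might recurse into the fs */
		prune_dcache(nr);
	}
	/* 100 reports the true unused count; lower values understate
	   it (favoring the cache), higher values overstate it */
	return (dentry_stat.nr_unused / 100) * sysctl_vfs_cache_pressure;
}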

This patch just aims to delete inodes that we know will never be used
again as soon as their refcount drops to 0.
Steve French - May 27, 2010, 2:51 p.m.
Cached metadata will still be valid for 1 second - do these still have
dentries pointing to them?

On Thu, May 27, 2010 at 8:38 AM, Jeff Layton <jlayton@redhat.com> wrote:
> [snip]
Jeff Layton - May 27, 2010, 3:20 p.m.
On Thu, 27 May 2010 09:51:40 -0500
Steve French <smfrench@gmail.com> wrote:

> Cached metadata will still be valid for 1 second - do these still have
> dentries pointing to them?
> 

No. If the i_count is 0, then any dentries attached to the inode would
have gone away. These are inodes that are subject to being flushed out
of the cache at any time. This patch just makes that happen sooner.
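
That ordering follows from the usual VFS reference rules (a simplified
sketch of the dentry teardown path, not cifs-specific code): a live
dentry pins its inode, and only dentry release drops that reference:

static void dentry_iput(struct dentry *dentry)
{
	struct inode *inode = dentry->d_inode;

	if (inode) {
		dentry->d_inode = NULL;
		/* ... detach from the inode's alias list, etc. ... */
		iput(inode);	/* i_count can only reach zero after this */
	}
}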

> [snip]

Patch

diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index 78c02eb..8f647db 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -473,13 +473,23 @@  static int cifs_remount(struct super_block *sb, int *flags, char *data)
 	return 0;
 }
 
+void cifs_drop_inode(struct inode *inode)
+{
+	struct cifs_sb_info *cifs_sb = CIFS_SB(inode->i_sb);
+
+	if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)
+		return generic_drop_inode(inode);
+
+	return generic_delete_inode(inode);
+}
+
 static const struct super_operations cifs_super_ops = {
 	.put_super = cifs_put_super,
 	.statfs = cifs_statfs,
 	.alloc_inode = cifs_alloc_inode,
 	.destroy_inode = cifs_destroy_inode,
-/*	.drop_inode	    = generic_delete_inode,
-	.delete_inode	= cifs_delete_inode,  */  /* Do not need above two
+	.drop_inode	= cifs_drop_inode,
+/*	.delete_inode	= cifs_delete_inode,  */  /* Do not need above two
 	functions unless later we add lazy close of inodes or unless the
 	kernel forgets to call us with the same number of releases (closes)
 	as opens */