From patchwork Wed Sep 9 15:49:04 2009
X-Patchwork-Submitter: Jeff Layton
X-Patchwork-Id: 33199
Date: Wed, 9 Sep 2009 11:49:04 -0400
From: Jeff Layton
Message-ID: <20090909114904.2c0cc540@tlielax.poochiereds.net>
In-Reply-To: <20090908132858.3f2a8778@tlielax.poochiereds.net>
References: <1252419179-31353-1-git-send-email-jlayton@redhat.com>
	<524f69650909081012m48b862e2kf947bddfc083982d@mail.gmail.com>
	<20090908132858.3f2a8778@tlielax.poochiereds.net>
Cc: Steve French , linux-cifs-client@lists.samba.org
Subject: Re: [linux-cifs-client]
	[PATCH 0/4] cifs: alternate approach to fixing oplock queue races
List-Id: The Linux CIFS VFS client

On Tue, 8 Sep 2009 13:28:58 -0400
Jeff Layton wrote:

> On Tue, 8 Sep 2009 12:12:22 -0500
> Steve French wrote:
>
> > On Tue, Sep 8, 2009 at 9:12 AM, Jeff Layton wrote:
> > > This patchset is an alternate approach to fixing the oplock queue
> > > problems. Rather than tracking oplocks with separate structures, this
> > > patchset adds some fields to cifsFileInfo.
> > >
> > > When an oplock break comes in, is_valid_oplock_break takes an extra
> > > reference to the cifsFileInfo and queues the oplock break job to the
> > > events workqueue.
> >
> > This looks excellent - the 4th patch has to be tried carefully, but it
> > is easier to read than the alternatives. Any idea when the schedule_work
> > job would get dispatched (is the current task put to sleep immediately?)
> >
>
> When it actually runs depends on what other jobs are
> running in the queue, how many CPUs are in the box, etc. In practice, I
> don't think this set will materially change when the oplock break
> actually happens, unless there is a lot of stuff sitting in the events
> queue beforehand.
>
> It may also help performance if a lot of oplock breaks come in at once,
> since they won't be serialized. You could have one running for each CPU
> in the box.
>
> Another thing we could consider is using the new slow_work infrastructure
> that dhowells added recently. An oplock break task could take a while to
> run if there is a lot of data to be flushed, so that may be a better
> scheme (though it does add a build-time kconfig dependency to CIFS).
>
> LWN has a good writeup on the slow_work infrastructure:
>
> http://lwn.net/Articles/329464/
>

Here's a quick conversion to slow_work that I did this morning. This
patch applies on top of the set I sent yesterday. Tested with
connectathon, and it seems to work correctly (sniffing traffic showed
the oplock break response go out onto the wire).

I like this better than queueing to the events queue -- it seems less
likely to block more critical tasks. Thoughts?

From 46ae620b364db83b3db79508364303d4b01780b3 Mon Sep 17 00:00:00 2001
From: Jeff Layton
Date: Wed, 9 Sep 2009 11:44:16 -0400
Subject: [PATCH 5/4] cifs: convert oplock_break job to slow_work

Seems like a good candidate for this.

Signed-off-by: Jeff Layton
---
 fs/cifs/Kconfig     |    1 +
 fs/cifs/cifsfs.c    |    6 ++++++
 fs/cifs/cifsglob.h  |    4 +++-
 fs/cifs/cifsproto.h |    1 -
 fs/cifs/dir.c       |    3 ++-
 fs/cifs/file.c      |   27 ++++++++++++++++++++++++---
 fs/cifs/misc.c      |   13 +++++++++----
 7 files changed, 45 insertions(+), 10 deletions(-)

diff --git a/fs/cifs/Kconfig b/fs/cifs/Kconfig
index 6994a0f..80f3525 100644
--- a/fs/cifs/Kconfig
+++ b/fs/cifs/Kconfig
@@ -2,6 +2,7 @@ config CIFS
 	tristate "CIFS support (advanced network filesystem, SMBFS successor)"
 	depends on INET
 	select NLS
+	select SLOW_WORK
 	help
 	  This is the client VFS module for the Common Internet File System
 	  (CIFS) protocol which is the successor to the Server Message Block
diff --git a/fs/cifs/cifsfs.c b/fs/cifs/cifsfs.c
index c4872ad..89142b3 100644
--- a/fs/cifs/cifsfs.c
+++ b/fs/cifs/cifsfs.c
@@ -1038,8 +1038,14 @@ init_cifs(void)
 	if (rc)
 		goto out_unregister_key_type;
 #endif
+	rc = slow_work_register_user();
+	if (rc)
+		goto out_unregister_resolver_key;
+
 	return 0;
 
+ out_unregister_resolver_key:
+	unregister_key_type(&key_type_dns_resolver);
 #ifdef CONFIG_CIFS_DFS_UPCALL
 out_unregister_key_type:
 #endif
diff --git a/fs/cifs/cifsglob.h b/fs/cifs/cifsglob.h
index 2be0471..c3280fa 100644
--- a/fs/cifs/cifsglob.h
+++ b/fs/cifs/cifsglob.h
@@ -18,6 +18,7 @@
  */
 #include <linux/in.h>
 #include <linux/in6.h>
+#include <linux/slow-work.h>
 #include "cifs_fs_sb.h"
 #include "cifsacl.h"
 /*
@@ -355,7 +356,7 @@ struct cifsFileInfo {
 	atomic_t count;		/* reference count */
 	struct mutex fh_mutex; /* prevents reopen race after dead ses*/
 	struct cifs_search_info srch_inf;
-	struct work_struct oplock_break; /* workqueue job for oplock breaks */
+	struct slow_work oplock_break; /* slow_work job for oplock breaks */
 };
 
 /* Take a reference on the file private data */
@@ -722,3 +723,4 @@
 GLOBAL_EXTERN unsigned int cifs_min_rcv;    /* min size of big ntwrk buf pool */
 GLOBAL_EXTERN unsigned int cifs_min_small;  /* min size of small buf pool */
 GLOBAL_EXTERN unsigned int cifs_max_pending; /* MAX requests at once to server*/
+extern const struct slow_work_ops cifs_oplock_break_ops;
diff --git a/fs/cifs/cifsproto.h b/fs/cifs/cifsproto.h
index b063802..f2a6241 100644
--- a/fs/cifs/cifsproto.h
+++ b/fs/cifs/cifsproto.h
@@ -88,7 +88,6 @@ extern int CIFS_SessSetup(unsigned int xid, struct cifsSesInfo *ses,
 extern __u16 GetNextMid(struct TCP_Server_Info *server);
 extern struct timespec cifs_NTtimeToUnix(__le64 utc_nanoseconds_since_1601);
 extern u64 cifs_UnixTimeToNT(struct timespec);
-extern void cifs_oplock_break(struct work_struct *work);
 extern struct timespec cnvrtDosUnixTm(__le16 le_date, __le16 le_time,
 				      int offset);
diff --git a/fs/cifs/dir.c b/fs/cifs/dir.c
index 5d70d7a..ae09e5c 100644
--- a/fs/cifs/dir.c
+++ b/fs/cifs/dir.c
@@ -31,6 +31,7 @@
 #include "cifs_debug.h"
 #include "cifs_fs_sb.h"
 
+
 static void
 renew_parental_timestamps(struct dentry *direntry)
 {
@@ -155,7 +156,7 @@ cifs_fill_fileinfo(struct inode *newinode, __u16 fileHandle,
 	mutex_init(&pCifsFile->lock_mutex);
 	INIT_LIST_HEAD(&pCifsFile->llist);
 	atomic_set(&pCifsFile->count, 1);
-	INIT_WORK(&pCifsFile->oplock_break, cifs_oplock_break);
+	slow_work_init(&pCifsFile->oplock_break, &cifs_oplock_break_ops);
 
 	/* set the following in open now
 			pCifsFile->pfile = file; */
diff --git a/fs/cifs/file.c b/fs/cifs/file.c
index aae5870..04aab66 100644
--- a/fs/cifs/file.c
+++ b/fs/cifs/file.c
@@ -57,7 +57,7 @@ static inline struct cifsFileInfo *cifs_init_private(
 	/* Initialize reference count to one.  The private data is
 	freed on the release of the last reference */
 	atomic_set(&private_data->count, 1);
-	INIT_WORK(&private_data->oplock_break, cifs_oplock_break);
+	slow_work_init(&private_data->oplock_break, &cifs_oplock_break_ops);
 
 	return private_data;
 }
@@ -2311,8 +2311,8 @@ out:
 	return rc;
 }
 
-void
-cifs_oplock_break(struct work_struct *work)
+static void
+cifs_oplock_break(struct slow_work *work)
 {
 	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
 						  oplock_break);
@@ -2350,10 +2350,31 @@ cifs_oplock_break(struct work_struct *work)
 				 LOCKING_ANDX_OPLOCK_RELEASE, false);
 		cFYI(1, ("Oplock release rc = %d", rc));
 	}
+}
+
+static int
+cifs_oplock_break_get(struct slow_work *work)
+{
+	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
+						  oplock_break);
+	cifsFileInfo_get(cfile);
+	return 0;
+}
+
+static void
+cifs_oplock_break_put(struct slow_work *work)
+{
+	struct cifsFileInfo *cfile = container_of(work, struct cifsFileInfo,
+						  oplock_break);
 	cifsFileInfo_put(cfile);
 }
 
+const struct slow_work_ops cifs_oplock_break_ops = {
+	.get_ref	= cifs_oplock_break_get,
+	.put_ref	= cifs_oplock_break_put,
+	.execute	= cifs_oplock_break,
+};
+
 const struct address_space_operations cifs_addr_ops = {
 	.readpage = cifs_readpage,
 	.readpages = cifs_readpages,
diff --git a/fs/cifs/misc.c b/fs/cifs/misc.c
index 8e098e6..0241b25 100644
--- a/fs/cifs/misc.c
+++ b/fs/cifs/misc.c
@@ -499,6 +499,7 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv)
 	struct cifsTconInfo *tcon;
 	struct cifsInodeInfo *pCifsInode;
 	struct cifsFileInfo *netfile;
+	int rc;
 
 	cFYI(1, ("Checking for oplock break or dnotify response"));
 	if ((pSMB->hdr.Command == SMB_COM_NT_TRANSACT) &&
@@ -578,16 +579,20 @@ is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv)
 				return true;
 			}
 
-			cifsFileInfo_get(netfile);
-			read_unlock(&GlobalSMBSeslock);
-			read_unlock(&cifs_tcp_ses_lock);
 			cFYI(1, ("file id match, oplock break"));
 			pCifsInode = CIFS_I(netfile->pInode);
 			pCifsInode->clientCanCacheAll = false;
 			if (pSMB->OplockLevel == 0)
 				pCifsInode->clientCanCacheRead = false;
-			if (schedule_work(&netfile->oplock_break))
+			rc = slow_work_enqueue(&netfile->oplock_break);
+			if (rc) {
+				cERROR(1, ("failed to enqueue oplock "
+					   "break: %d\n", rc));
+			} else {
 				netfile->oplock_break_cancelled = false;
+			}
+			read_unlock(&GlobalSMBSeslock);
+			read_unlock(&cifs_tcp_ses_lock);
 			return true;
 		}
 	read_unlock(&GlobalSMBSeslock);
-- 
1.6.0.6