Patchwork jffs2: Re-enable write-buffering after filesystem sync

Submitter Al Viro
Date Aug. 1, 2014, 7:13 p.m.
Message ID <20140801191340.GN18016@ZenIV.linux.org.uk>
Permalink /patch/375850/
State New

Comments

Al Viro - Aug. 1, 2014, 7:13 p.m.
On Fri, Aug 01, 2014 at 01:15:45PM -0400, Jeff Harris wrote:
> On Fri, Aug 1, 2014 at 12:56 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> >
> > On Fri, Aug 01, 2014 at 12:06:12PM -0400, Jeff Harris wrote:
> >
> > > +     spin_lock(&c->wbuf_dwork_lock);
> > >       cancel_delayed_work_sync(&c->wbuf_dwork);
> >
> > Umm...  Usually ..._sync in function name is a sign of potential sleeper,
> > and calling those under a spinlock is a bad idea.
> >
> > And looking at the definition of cancel_delayed_work_sync() turns up the
> > following call chain: cancel_delayed_work_sync() -> __cancel_work_timer() ->
> > flush_work() -> wait_for_completion(), which definitely isn't something
> > you should ever do under a spinlock.
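> 
> (For readers outside the kernel tree, here is a minimal user-space sketch of
> why that ordering matters -- a pthread mutex stands in for the spinlock and
> pthread_join() for wait_for_completion(); the names and structure are
> illustrative, not the kernel code:
> 
> ```c
> #include <pthread.h>
> #include <stdio.h>
> 
> /* stands in for wbuf_dwork_lock */
> static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
> static int wbuf_queued = 1;
> 
> /* stands in for delayed_wbuf_sync(), which takes the same lock */
> static void *worker(void *arg)
> {
>     (void)arg;
>     pthread_mutex_lock(&lock);
>     wbuf_queued = 0;
>     pthread_mutex_unlock(&lock);
>     return NULL;
> }
> 
> int main(void)
> {
>     pthread_t t;
>     pthread_create(&t, NULL, worker, NULL);
> 
>     /* WRONG (can deadlock): a sleeping wait for the worker while
>      * holding the lock the worker itself needs -- the analogue of
>      * cancel_delayed_work_sync() under wbuf_dwork_lock:
>      *
>      *   pthread_mutex_lock(&lock);
>      *   pthread_join(t, NULL);
>      *   pthread_mutex_unlock(&lock);
>      */
> 
>     /* Safe ordering: finish the sleeping wait first, lock afterwards. */
>     pthread_join(t, NULL);
>     pthread_mutex_lock(&lock);
>     wbuf_queued = 0;
>     pthread_mutex_unlock(&lock);
> 
>     printf("wbuf_queued=%d\n", wbuf_queued);
>     return 0;
> }
> ```
> 
> The commented-out block is exactly the shape of the proposed hunk.)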
> 
> The jffs2_dirty_trigger function calls queue_delayed_work under the spinlock.
> I suppose the flag could be reset after the cancel so the dirty trigger
> would see the false value.

Why do we need ->wbuf_queued, anyway?  It's not as if queue_delayed_work()
wouldn't do the right thing - if the sucker already has WORK_STRUCT_PENDING_BIT
set it'll simply do nothing.  And it won't spend much time finding that
out - in fact, it'll be cheaper than what we do right now.

IOW, how about the following (completely untested):

[jffs2] kill wbuf_queued/wbuf_dwork_lock

schedule_delayed_work() happening when the work is already pending is
a cheap no-op.  Don't bother with the ->wbuf_queued logic - it's both
broken (cancelling ->wbuf_dwork leaves it set, as spotted by Jeff Harris)
and pointless.  It's cheaper to let schedule_delayed_work() handle that
case.

Reported-by: Jeff Harris <jefftharris@gmail.com>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
---
Jeff Harris - Aug. 4, 2014, 6:02 p.m.
On Fri, Aug 1, 2014 at 3:13 PM, Al Viro <viro@zeniv.linux.org.uk> wrote:
> Why do we need ->wbuf_queued, anyway?  It's not as if queue_delayed_work()
> wouldn't do the right thing - if the sucker already has WORK_STRUCT_PENDING_BIT
> set it'll simply do nothing.  And it won't spend much time finding that
> out - in fact, it'll be cheaper than what we do right now.
>
> IOW, how about the following (completely untested):
>

I tested the patch against our 3.13 device, and it appears to resolve the
original write-buffering after sync problem we were having.  I did not do
any other regression testing, though.

Jeff

Patch

diff --git a/fs/jffs2/jffs2_fs_sb.h b/fs/jffs2/jffs2_fs_sb.h
index 413ef89..046fee8 100644
--- a/fs/jffs2/jffs2_fs_sb.h
+++ b/fs/jffs2/jffs2_fs_sb.h
@@ -134,8 +134,6 @@  struct jffs2_sb_info {
 	struct rw_semaphore wbuf_sem;	/* Protects the write buffer */
 
 	struct delayed_work wbuf_dwork; /* write-buffer write-out work */
-	int wbuf_queued;                /* non-zero delayed work is queued */
-	spinlock_t wbuf_dwork_lock;     /* protects wbuf_dwork and and wbuf_queued */
 
 	unsigned char *oobbuf;
 	int oobavail; /* How many bytes are available for JFFS2 in OOB */
diff --git a/fs/jffs2/wbuf.c b/fs/jffs2/wbuf.c
index a6597d6..09ed551 100644
--- a/fs/jffs2/wbuf.c
+++ b/fs/jffs2/wbuf.c
@@ -1162,10 +1162,6 @@  static void delayed_wbuf_sync(struct work_struct *work)
 	struct jffs2_sb_info *c = work_to_sb(work);
 	struct super_block *sb = OFNI_BS_2SFFJ(c);
 
-	spin_lock(&c->wbuf_dwork_lock);
-	c->wbuf_queued = 0;
-	spin_unlock(&c->wbuf_dwork_lock);
-
 	if (!(sb->s_flags & MS_RDONLY)) {
 		jffs2_dbg(1, "%s()\n", __func__);
 		jffs2_flush_wbuf_gc(c, 0);
@@ -1180,14 +1176,9 @@  void jffs2_dirty_trigger(struct jffs2_sb_info *c)
 	if (sb->s_flags & MS_RDONLY)
 		return;
 
-	spin_lock(&c->wbuf_dwork_lock);
-	if (!c->wbuf_queued) {
+	delay = msecs_to_jiffies(dirty_writeback_interval * 10);
+	if (queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay))
 		jffs2_dbg(1, "%s()\n", __func__);
-		delay = msecs_to_jiffies(dirty_writeback_interval * 10);
-		queue_delayed_work(system_long_wq, &c->wbuf_dwork, delay);
-		c->wbuf_queued = 1;
-	}
-	spin_unlock(&c->wbuf_dwork_lock);
 }
 
 int jffs2_nand_flash_setup(struct jffs2_sb_info *c)
@@ -1211,7 +1202,6 @@  int jffs2_nand_flash_setup(struct jffs2_sb_info *c)
 
 	/* Initialise write buffer */
 	init_rwsem(&c->wbuf_sem);
-	spin_lock_init(&c->wbuf_dwork_lock);
 	INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync);
 	c->wbuf_pagesize = c->mtd->writesize;
 	c->wbuf_ofs = 0xFFFFFFFF;
@@ -1251,7 +1241,6 @@  int jffs2_dataflash_setup(struct jffs2_sb_info *c) {
 
 	/* Initialize write buffer */
 	init_rwsem(&c->wbuf_sem);
-	spin_lock_init(&c->wbuf_dwork_lock);
 	INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync);
 	c->wbuf_pagesize =  c->mtd->erasesize;
 
@@ -1311,7 +1300,6 @@  int jffs2_nor_wbuf_flash_setup(struct jffs2_sb_info *c) {
 
 	/* Initialize write buffer */
 	init_rwsem(&c->wbuf_sem);
-	spin_lock_init(&c->wbuf_dwork_lock);
 	INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync);
 
 	c->wbuf_pagesize = c->mtd->writesize;
@@ -1346,7 +1334,6 @@  int jffs2_ubivol_setup(struct jffs2_sb_info *c) {
 		return 0;
 
 	init_rwsem(&c->wbuf_sem);
-	spin_lock_init(&c->wbuf_dwork_lock);
 	INIT_DELAYED_WORK(&c->wbuf_dwork, delayed_wbuf_sync);
 
 	c->wbuf_pagesize =  c->mtd->writesize;