
[ext3] Changes to block device after an ext3 mount point has been remounted readonly

Message ID 877hq2tyg8.fsf@openvz.org
State Not Applicable, archived

Commit Message

Dmitry Monakhov Feb. 24, 2010, 4:01 p.m. UTC
Jan Kara <jack@suse.cz> writes:

>> The fact is that I've been able to reproduce the problem on LVM block
>> devices, and sd* block devices so it's definitely not a loop device
>> specific problem.
>> 
>> By the way, I tried several other things other than "echo s
>> >/proc/sysrq_trigger" I tried multiple sync followed with a one minute
>> "sleep",
>> 
>> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
>> changes" but doesn't stops them.
>   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
> /dev/loop0, I cannot reproduce the problem (was running the script for
> something like an hour).
Theoretically, some dirty pages may still exist after an rw=>ro remount
because of a generic race between write and sync, and writepage will write
them out if the page already has buffers. This does not happen in ext4,
because each writepages call tries to start a journal handle first, which
fails with EROFS on a read-only filesystem.
The race bug will be closed some day, but a new one may appear again.
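
(For illustration, the ext4 behaviour referred to above follows roughly the
pattern below; journal_start(), writepage_credits() and journal_stop() are
placeholder names for the sketch, not the real ext4 helpers.)

static int journalled_writepage_sketch(struct page *page,
				       struct writeback_control *wbc)
{
	struct inode *inode = page->mapping->host;
	handle_t *handle;

	handle = journal_start(inode, writepage_credits(inode));
	if (IS_ERR(handle)) {
		/* -EROFS once the superblock has gone read-only */
		redirty_page_for_writepage(wbc, page);
		unlock_page(page);
		return PTR_ERR(handle);
	}
	/* ... write the page out under the running handle ... */
	return journal_stop(handle);
}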

Let's be honest and change ext3's writepage as follows:
- check the RDONLY flag inside writepage
- log writepage's errors.
From a7cadf8017626cd80fcd8ea5a0e4deff4f63e02e Mon Sep 17 00:00:00 2001
From: Dmitry Monakhov <dmonakhov@openvz.org>
Date: Wed, 24 Feb 2010 18:17:58 +0300
Subject: [PATCH] ext3: add sanity checks to writeback

There is a theoretical possibility of writepage being called on a
read-only superblock. Add an explicit check for that case.

In fact, writepage may fail for a number of reasons.
These cases are rare, but they can still result in data loss.
At the very least we should log an error message.

Signed-off-by: Dmitry Monakhov <dmonakhov@openvz.org>
---
 fs/ext3/inode.c |   40 +++++++++++++++++++++++++++++++++++-----
 1 files changed, 35 insertions(+), 5 deletions(-)

Comments

Camille Moncelier Feb. 24, 2010, 4:26 p.m. UTC | #1
> Theoretically some pages may exist after rw=>ro remount
> because of generic race between write/sync, And they will be written
> in by writepage if page already has buffers. This not happen in ext4
> because. Each time it try to perform writepages it try to start_journal
> and this result in EROFS.
> The race bug will be closed some day but new one may appear again.
>
> Let's be honest and change ext3 writepage like follows:
> - check ROFS flag inside write page
> - dump writepage's errors.
>
>
I don't think I understand your patch correctly. It seems to me that
when an ext3 filesystem is remounted ro, some data may not have been
written to disk, right?

But as far as I understand, some writes are performed on the journal
during the remount-ro, before the ro flag is set. So if writepage comes
into play and writes data to disk, it may have to update the journal
again, no? If not, it would mean that the journal would reference data
that isn't available on disk?

Last question: would it be hard to implement a patch that triggers
writepage and waits for completion when remounting read-only? (I have no
expertise in filesystems in general, but I tried my best to understand
the ext3 driver.)
Jan Kara Feb. 24, 2010, 4:56 p.m. UTC | #2
On Wed 24-02-10 19:01:27, Dmitry Monakhov wrote:
> Jan Kara <jack@suse.cz> writes:
> 
> >> The fact is that I've been able to reproduce the problem on LVM block
> >> devices, and sd* block devices so it's definitely not a loop device
> >> specific problem.
> >> 
> >> By the way, I tried several other things other than "echo s
> >> >/proc/sysrq_trigger" I tried multiple sync followed with a one minute
> >> "sleep",
> >> 
> >> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
> >> changes" but doesn't stops them.
> >   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
> > /dev/loop0, I cannot reproduce the problem (was running the script for
> > something like an hour).
> Theoretically some pages may exist after rw=>ro remount
> because of generic race between write/sync, And they will be written
> in by writepage if page already has buffers. This not happen in ext4
> because. Each time it try to perform writepages it try to start_journal
> and this result in EROFS.
> The race bug will be closed some day but new one may appear again.
  OK, I see that in theory a process can open a file for writing after
fs_may_remount_ro() but before the MS_RDONLY flag gets set. That could be
really nasty. But by no means should we solve this VFS problem by spilling
error messages from the filesystem, especially because block_write_full_page
can fail for a number of legitimate reasons (ENOSPC, EDQUOT, EIO) and we
don't want to pollute the logs with such stuff.
  BTW: this isn't the race Camille could be seeing, because he did all the
writes, then sync, and then the remount-ro...

  Al, Christoph, am I missing something, or is there really nothing which
prevents a process from opening a file after the fs_may_remount_ro() check
in do_remount_sb()?

								Honza
Eric Sandeen Feb. 24, 2010, 4:57 p.m. UTC | #3
Dmitry Monakhov wrote:
> Jan Kara <jack@suse.cz> writes:
> 
>>> The fact is that I've been able to reproduce the problem on LVM block
>>> devices, and sd* block devices so it's definitely not a loop device
>>> specific problem.
>>>
>>> By the way, I tried several other things other than "echo s
>>>> /proc/sysrq_trigger" I tried multiple sync followed with a one minute
>>> "sleep",
>>>
>>> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
>>> changes" but doesn't stops them.
>>   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
>> /dev/loop0, I cannot reproduce the problem (was running the script for
>> something like an hour).
> Theoretically some pages may exist after rw=>ro remount
> because of generic race between write/sync, And they will be written
> in by writepage if page already has buffers. This not happen in ext4
> because. Each time it try to perform writepages it try to start_journal
> and this result in EROFS.
> The race bug will be closed some day but new one may appear again.
> 
> Let's be honest and change ext3 writepage like follows:
> - check ROFS flag inside write page
> - dump writepage's errors.
> 
> 

Sounds like the wrong approach to me; we really need to fix the root
cause and make remount,ro finish the job, I think.

Throwing away writes which an application already thinks are completed
just because remount,ro didn't keep up sounds like a bad idea.  I think
I would much rather have the write complete shortly after the readonly
transition, if I had to choose...

I haven't looked at these paths at all but just hand-wavily,
remount,ro should follow pretty much the same path as freeze,
I think.  And if freeze isn't getting everything on-disk we have
an even bigger problem.

-Eric
Jan Kara Feb. 24, 2010, 4:59 p.m. UTC | #4
On Wed 24-02-10 17:26:37, Camille Moncelier wrote:
> > Theoretically some pages may exist after rw=>ro remount
> > because of generic race between write/sync, And they will be written
> > in by writepage if page already has buffers. This not happen in ext4
> > because. Each time it try to perform writepages it try to start_journal
> > and this result in EROFS.
> > The race bug will be closed some day but new one may appear again.
> >
> > Let's be honest and change ext3 writepage like follows:
> > - check ROFS flag inside write page
> > - dump writepage's errors.
> >
> >
> I think I don't understand correctly your patch. For me it seems that
> when an ext3 filesystem is remounted ro, some data may not have been
> written to disk right ?
  I think that Dmitry was concerned about the fact that a process could
open a file and write to it after we synced the filesystem in
do_remount_sb().

								Honza
Jan Kara Feb. 24, 2010, 5:05 p.m. UTC | #5
On Wed 24-02-10 10:57:59, Eric Sandeen wrote:
> Dmitry Monakhov wrote:
> > Jan Kara <jack@suse.cz> writes:
> >>> The fact is that I've been able to reproduce the problem on LVM block
> >>> devices, and sd* block devices so it's definitely not a loop device
> >>> specific problem.
> >>>
> >>> By the way, I tried several other things other than "echo s
> >>>> /proc/sysrq_trigger" I tried multiple sync followed with a one minute
> >>> "sleep",
> >>>
> >>> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
> >>> changes" but doesn't stops them.
> >>   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
> >> /dev/loop0, I cannot reproduce the problem (was running the script for
> >> something like an hour).
> > Theoretically some pages may exist after rw=>ro remount
> > because of generic race between write/sync, And they will be written
> > in by writepage if page already has buffers. This not happen in ext4
> > because. Each time it try to perform writepages it try to start_journal
> > and this result in EROFS.
> > The race bug will be closed some day but new one may appear again.
> > 
> > Let's be honest and change ext3 writepage like follows:
> > - check ROFS flag inside write page
> > - dump writepage's errors.
> > 
> > 
> 
> sounds like the wrong approach to me, we really need to fix the root
> cause and make remount,ro finish the job, I think.
> 
> Throwing away writes which an application already thinks are completed
> just because remount,ro didn't keep up sounds like a bad idea.  I think
> I would much rather have the write complete shortly after the readonly
> transition, if I had to choose...
  Well, my opinion is that the VFS should take care of the rw->ro transition
so that it isn't racy...

> I haven't looked at these paths at all but just hand-wavily,
> remount,ro should follow pretty much the same path as freeze,
> I think.  And if freeze isn't getting everything on-disk we have
> an even bigger problem.
  With freeze you can still keep dirty data in the cache until the filesystem
unfreezes, so it's a different situation from the rw->ro transition.

								Honza
Dmitry Monakhov Feb. 24, 2010, 5:26 p.m. UTC | #6
Jan Kara <jack@suse.cz> writes:

> On Wed 24-02-10 10:57:59, Eric Sandeen wrote:
>> Dmitry Monakhov wrote:
>> > Jan Kara <jack@suse.cz> writes:
>> >>> The fact is that I've been able to reproduce the problem on LVM block
>> >>> devices, and sd* block devices so it's definitely not a loop device
>> >>> specific problem.
>> >>>
>> >>> By the way, I tried several other things other than "echo s
>> >>>> /proc/sysrq_trigger" I tried multiple sync followed with a one minute
>> >>> "sleep",
>> >>>
>> >>> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
>> >>> changes" but doesn't stops them.
>> >>   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
>> >> /dev/loop0, I cannot reproduce the problem (was running the script for
>> >> something like an hour).
>> > Theoretically some pages may exist after rw=>ro remount
>> > because of generic race between write/sync, And they will be written
>> > in by writepage if page already has buffers. This not happen in ext4
>> > because. Each time it try to perform writepages it try to start_journal
>> > and this result in EROFS.
>> > The race bug will be closed some day but new one may appear again.
>> > 
>> > Let's be honest and change ext3 writepage like follows:
>> > - check ROFS flag inside write page
>> > - dump writepage's errors.
>> > 
>> > 
>> 
>> sounds like the wrong approach to me, we really need to fix the root
>> cause and make remount,ro finish the job, I think.
Of course, but still, this is just a sanity check. A similar check
in ext4 helped me find the generic issue. Of course it has to be
guarded by an unlikely() statement.
>> 
>> Throwing away writes which an application already thinks are completed
>> just because remount,ro didn't keep up sounds like a bad idea.  I think
>> I would much rather have the write complete shortly after the readonly
>> transition, if I had to choose...
>   Well, my opinion is that VFS should take care about the rw->ro transition
> so that it isn't racy...
No, my patch just tries to nail the RO semantics into writepage.
Since the other places are already guarded by start_journal, writepage is
the only one which may have this weakness.
About the ENOSPC/EDQUOT spam: it may not be bad to print an error message
for the crazy person who uses mmap on a sparse file.
>
>> I haven't looked at these paths at all but just hand-wavily,
>> remount,ro should follow pretty much the same path as freeze,
>> I think.  And if freeze isn't getting everything on-disk we have
>> an even bigger problem.
>   With freeze you can still keep dirty data in cache until the filesystem
> unfreezes so it's a different situation from rw->ro transition.
In fact, freeze is also not absolutely I/O-proof :)
When I worked on a COW device I used freeze-fs for consistent
image creation, and sometimes after the filesystem was frozen
I still got bios. We did not investigate this too deeply
and just queued the bios into a pending queue.

>
> 								Honza
Jan Kara Feb. 24, 2010, 9:36 p.m. UTC | #7
On Wed 24-02-10 20:26:13, Dmitry Monakhov wrote:
> Jan Kara <jack@suse.cz> writes:
> > On Wed 24-02-10 10:57:59, Eric Sandeen wrote:
> >> Dmitry Monakhov wrote:
> >> > Jan Kara <jack@suse.cz> writes:
> >> >>> The fact is that I've been able to reproduce the problem on LVM block
> >> >>> devices, and sd* block devices so it's definitely not a loop device
> >> >>> specific problem.
> >> >>>
> >> >>> By the way, I tried several other things other than "echo s
> >> >>>> /proc/sysrq_trigger" I tried multiple sync followed with a one minute
> >> >>> "sleep",
> >> >>>
> >> >>> "echo 3 >/proc/sys/vm/drop_caches" seems to lower the chances of "hash
> >> >>> changes" but doesn't stops them.
> >> >>   Strange. When I use sync(1) in your script and use /dev/sda5 instead of a
> >> >> /dev/loop0, I cannot reproduce the problem (was running the script for
> >> >> something like an hour).
> >> > Theoretically some pages may exist after rw=>ro remount
> >> > because of generic race between write/sync, And they will be written
> >> > in by writepage if page already has buffers. This not happen in ext4
> >> > because. Each time it try to perform writepages it try to start_journal
> >> > and this result in EROFS.
> >> > The race bug will be closed some day but new one may appear again.
> >> > 
> >> > Let's be honest and change ext3 writepage like follows:
> >> > - check ROFS flag inside write page
> >> > - dump writepage's errors.
> >> > 
> >> > 
> >> 
> >> sounds like the wrong approach to me, we really need to fix the root
> >> cause and make remount,ro finish the job, I think.
> Off course, but still. This is just a sanity check. Similar check
> in ext4 help me to find the generic issue. Off course it have to
> be guarded by unlikely() statement.
  Well, I think that something like

  WARN_ON_ONCE(IS_RDONLY(inode));

at the beginning of every ext3 writepage implementation would be totally
sufficient for catching such bugs. Plus it has the advantage that it won't
lose the user's data if possible. So I'll take a patch in this direction.
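
(For illustration, that check at the top of each of the three ext3 writepage
variants would look roughly like the fragment below; this is a sketch, not a
tested patch, and "inode" is the local variable those functions already have.)

	struct inode *inode = page->mapping->host;

	/* Warn (once) if writepage runs on a read-only filesystem, but
	 * still write the data out rather than dropping it. */
	WARN_ON_ONCE(IS_RDONLY(inode));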

> >> Throwing away writes which an application already thinks are completed
> >> just because remount,ro didn't keep up sounds like a bad idea.  I think
> >> I would much rather have the write complete shortly after the readonly
> >> transition, if I had to choose...
> >   Well, my opinion is that VFS should take care about the rw->ro transition
> > so that it isn't racy...
> No, My patch just try to nail the RO semantics in to writepage.
> Since other places are already guarded by start_journal, writepage is
> the only one which may has weakness.
> About ENOSPC/EDQUOT spam. It may be not bad to print a error message
> for crazy person who use mmap for space file.
  I'm sorry but I disagree. We set the error in the mapping and return the
error in case the user calls fsync() on the file. Now I agree that most
applications will just miss that, but that's no excuse for us writing such
messages to the system log. The user just got what he told the system to
do.
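
(For illustration, the reporting path described here is roughly the sketch
below; the function is made up, only mapping_set_error() is a real kernel
helper.)

static void report_writepage_error(struct page *page, int err)
{
	/* A failed writeback flags the address_space (AS_EIO / AS_ENOSPC);
	 * the error is then returned to the application the next time it
	 * calls fsync() on the file. */
	if (err)
		mapping_set_error(page->mapping, err);
}
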
  And yes, we could be nicer to applications by making sure at page-fault
time that we have space for the mmaped write. I actually have patches for
that but they are stuck in the queue behind Nick's
truncate-calling-sequence rewrite.
								Honza
Christoph Hellwig March 2, 2010, 9:34 a.m. UTC | #8
On Wed, Feb 24, 2010 at 05:56:46PM +0100, Jan Kara wrote:
>   OK, I see that in theory a process can open file for writing after
> fs_may_remount_ro() before MS_RDONLY flag gets set. That could be really
> nasty.

Not just in theory, but also in practice.  We can easily hit this under
load with XFS.

> But by no means we should solve this VFS problem by spilling error
> messages from the filesystem.

Exactly.

>   Al, Christoph, do I miss something or there is really nothing which
> prevents a process from opening a file after the fs_may_remount_ro() check
> in do_remount_sb()?

No, there is nothing.  We really do need a multi-stage remount read-only
process:

 1) stop any writes from userland, that is opening new files writeable
 2) stop any periodic writeback from the VM or filesystem-internal
 3) write out all filesystem data and metadata
 4) mark the filesystem fully read-only
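
(For illustration, that sequence could look roughly like the sketch below;
sb_block_new_writers(), sb_allow_new_writers() and
sb_stop_periodic_writeback() are hypothetical helpers, only
sync_filesystem() and MS_RDONLY exist as such.)

static int remount_ro_staged(struct super_block *sb)
{
	int err;

	sb_block_new_writers(sb);        /* 1) no new files opened for writing */
	sb_stop_periodic_writeback(sb);  /* 2) no VM / fs-internal writeback   */

	err = sync_filesystem(sb);       /* 3) flush all data and metadata     */
	if (err) {
		sb_allow_new_writers(sb);
		return err;
	}

	sb->s_flags |= MS_RDONLY;        /* 4) filesystem is fully read-only   */
	return 0;
}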

Dmitry Monakhov March 2, 2010, 10:01 a.m. UTC | #9
Christoph Hellwig <hch@lst.de> writes:

> On Wed, Feb 24, 2010 at 05:56:46PM +0100, Jan Kara wrote:
>>   OK, I see that in theory a process can open file for writing after
>> fs_may_remount_ro() before MS_RDONLY flag gets set. That could be really
>> nasty.
>
> Not just in theory, but also in practice.  We can easily hit this under
> load with XFS.
>
>> But by no means we should solve this VFS problem by spilling error
>> messages from the filesystem.
>
> Exactly.

>
>>   Al, Christoph, do I miss something or there is really nothing which
>> prevents a process from opening a file after the fs_may_remount_ro() check
>> in do_remount_sb()?
>
> No, there is nothing.  We really do need a multi-stage remount read-only
> process:
>
>  1) stop any writes from userland, that is opening new files writeable
This is not quite a good idea, because the sync may take a really long time:
#fsstress -p32 -d /mnt/TEST -l9999999 -n99999999 -z -f creat=100 -f write=100
#sleep 60;
#killall -9 fsstress
#time mount mnt -oremount,ro
it takes several minutes to complete.
And at the end it may still fail for some other reason.
>  2) stop any periodic writeback from the VM or filesystem-internal
>  3) write out all filesystem data and metadata
>  4) mark the filesystem fully read-only

I've tried to solve the issue in a slightly different way.
Please take a look at this:
http://marc.info/?l=linux-fsdevel&m=126723036525624&w=2
1) Mark the fs as GOING_TO_REMOUNT.
2) Any new writer will clear this flag.
   This allows us to not block writers yet.
3) Check the flag before and after fssync and return EBUSY if it was cleared.
4) At this point we may block writers (this is absent in my patch).
   It is acceptable to block writers at this stage because the later stages
   don't take too long.
5) Perform the fs-specific remount method.
6) Mark the filesystem MS_RDONLY.
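
(For illustration, the scheme above amounts to roughly the sketch below;
mark_going_to_remount(), still_going_to_remount() and block_new_writers()
are made-up names, the real implementation is in the patch linked above.)

static int remount_ro_optimistic(struct super_block *sb, int *flags,
				 char *data)
{
	mark_going_to_remount(sb);       /* 1) announce the remount           */
					 /* 2) any new writer clears the flag */
	sync_filesystem(sb);             /*    sync without blocking writers  */
	if (!still_going_to_remount(sb))
		return -EBUSY;           /* 3) someone wrote in the meantime  */

	block_new_writers(sb);           /* 4) later stages are short         */
	if (sb->s_op->remount_fs)
		sb->s_op->remount_fs(sb, flags, data);  /* 5) fs-specific part */
	sb->s_flags |= MS_RDONLY;        /* 6) done                           */
	return 0;
}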
Jan Kara March 2, 2010, 1:26 p.m. UTC | #10
On Tue 02-03-10 13:01:52, Dmitry Monakhov wrote:
> Christoph Hellwig <hch@lst.de> writes:
> >>   Al, Christoph, do I miss something or there is really nothing which
> >> prevents a process from opening a file after the fs_may_remount_ro() check
> >> in do_remount_sb()?
> >
> > No, there is nothing.  We really do need a multi-stage remount read-only
> > process:
> >
> >  1) stop any writes from userland, that is opening new files writeable
> This is not quite good idea because sync may take really long time,
> #fsstress -p32 -d /mnt/TEST -l9999999 -n99999999 -z -f creat=100 -f write=100
> #sleep 60;
> #killall -9 fsstress
> #time mount mnt -oremount,ro
> it take several minutes to complete.
> And at the end it may fail but other reason.
  Two points here:
1) The current writeback code has a bug: while we are umounting/remounting,
sync_filesystem() just degrades to doing all writeback in sync mode
(because any non-sync writeback fails to get the s_umount sem for reading
and thus skips all the inodes of the superblock). This has a considerable
impact on the speed of sync during umount / remount.

2) IMHO it's not bad to block all opens for writing while remounting RO
(and thus also during the sync). It's not a performance issue (remounting
RO does not happen often), and it won't confuse any application even if we
later decide we cannot really finish the remount. Surely we'd have to come
up with a better waiting scheme than just cpu_relax() in mnt_want_write(),
but that shouldn't be hard. The only thing I'm slightly worried about is
whether we won't hit some locking issues (i.e., a caller of mnt_want_write()
holding some lock needed to finish the remount...).
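
(For illustration, a waiting scheme of the kind meant here could look
roughly like the sketch below; remount_wq and sb_remounting_ro() are
invented names and do not exist in the kernel.)

static DECLARE_WAIT_QUEUE_HEAD(remount_wq);

static int mnt_want_write_sketch(struct vfsmount *mnt)
{
	struct super_block *sb = mnt->mnt_sb;

	/* Sleep instead of spinning with cpu_relax() while a remount-ro
	 * of this superblock is in progress. */
	wait_event(remount_wq, !sb_remounting_ro(sb));

	if (sb->s_flags & MS_RDONLY)
		return -EROFS;

	return 0;	/* real mnt_want_write() bookkeeping elided */
}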

> >  2) stop any periodic writeback from the VM or filesystem-internal
> >  3) write out all filesystem data and metadata
> >  4) mark the filesystem fully read-only
> 
> I've tried to sole the issue in lightly another way
> Please take a look on this 
> http://marc.info/?l=linux-fsdevel&m=126723036525624&w=2
> 1) Mark fs as GOING_TO_REMOUNT
> 2) any new writer will clear this flag
>    This allow us to not block 
> 3) check flag before fssync and after and return EBUSY in this case. 
> 4) At this time we may to block writers (this is absent in my patch)
>    It is acceptable to block writers at this time because later stages
>    doesn't take too long.
> 5) perform fs-specific remount method.
> 6) Marks filesystem as MS_RDONLY.
  I like my solution more since with it the admin does not have to go
hunting for an application which keeps touching the filesystem while he is
trying to remount it read-only (currently, using lsof is usually enough, but
after your changes, running something like "while true; do touch /mnt/;
done" has a much larger window in which to block the remount to RO).
  But in principle your solution is acceptable to me as well.

								Honza
Joel Becker March 2, 2010, 11:10 p.m. UTC | #11
On Tue, Mar 02, 2010 at 10:34:31AM +0100, Christoph Hellwig wrote:
> No, there is nothing.  We really do need a multi-stage remount read-only
> process:
> 
>  1) stop any writes from userland, that is opening new files writeable
>  2) stop any periodic writeback from the VM or filesystem-internal
>  3) write out all filesystem data and metadata
>  4) mark the filesystem fully read-only

	If you can code this up in a happily accessible way, we can use
it in ocfs2 to handle some error cases without puking.  That would make
us very happy.  Specifically, we haven't yet taken the time to audit how
we would ensure step (2).

Joel

Patch

diff --git a/fs/ext3/inode.c b/fs/ext3/inode.c
index 455e6e6..cf0e3aa 100644
--- a/fs/ext3/inode.c
+++ b/fs/ext3/inode.c
@@ -1536,6 +1536,11 @@  static int ext3_ordered_writepage(struct page *page,
 	if (ext3_journal_current_handle())
 		goto out_fail;
 
+	if (inode->i_sb->s_flags & MS_RDONLY) {
+		err = -EROFS;
+		goto out_fail;
+	}
+
 	if (!page_has_buffers(page)) {
 		create_empty_buffers(page, inode->i_sb->s_blocksize,
 				(1 << BH_Dirty)|(1 << BH_Uptodate));
@@ -1546,7 +1551,8 @@  static int ext3_ordered_writepage(struct page *page,
 				       NULL, buffer_unmapped)) {
 			/* Provide NULL get_block() to catch bugs if buffers
 			 * weren't really mapped */
-			return block_write_full_page(page, NULL, wbc);
+			ret = block_write_full_page(page, NULL, wbc);
+			goto out;
 		}
 	}
 	handle = ext3_journal_start(inode, ext3_writepage_trans_blocks(inode));
@@ -1584,12 +1590,17 @@  static int ext3_ordered_writepage(struct page *page,
 	err = ext3_journal_stop(handle);
 	if (!ret)
 		ret = err;
+out:
+	if (ret)
+		ext3_msg(inode->i_sb, KERN_CRIT, "%s: failed "
+			"%ld pages, ino %lu; err %d\n", __func__,
+			wbc->nr_to_write, inode->i_ino, ret);
 	return ret;
 
 out_fail:
 	redirty_page_for_writepage(wbc, page);
 	unlock_page(page);
-	return ret;
+	goto out;
 }
 
 static int ext3_writeback_writepage(struct page *page,
@@ -1603,12 +1614,18 @@  static int ext3_writeback_writepage(struct page *page,
 	if (ext3_journal_current_handle())
 		goto out_fail;
 
+	if (inode->i_sb->s_flags & MS_RDONLY) {
+		err = -EROFS;
+		goto out_fail;
+	}
+
 	if (page_has_buffers(page)) {
 		if (!walk_page_buffers(NULL, page_buffers(page), 0,
 				      PAGE_CACHE_SIZE, NULL, buffer_unmapped)) {
 			/* Provide NULL get_block() to catch bugs if buffers
 			 * weren't really mapped */
-			return block_write_full_page(page, NULL, wbc);
+			ret = block_write_full_page(page, NULL, wbc);
+			goto out;
 		}
 	}
 
@@ -1626,12 +1643,17 @@  static int ext3_writeback_writepage(struct page *page,
 	err = ext3_journal_stop(handle);
 	if (!ret)
 		ret = err;
+out:
+	if (ret)
+		ext3_msg(inode->i_sb, KERN_CRIT, "%s: failed "
+			"%ld pages, ino %lu; err %d\n", __func__,
+			wbc->nr_to_write, inode->i_ino, ret);
 	return ret;
 
 out_fail:
 	redirty_page_for_writepage(wbc, page);
 	unlock_page(page);
-	return ret;
+	goto out;
 }
 
 static int ext3_journalled_writepage(struct page *page,
@@ -1645,6 +1667,11 @@  static int ext3_journalled_writepage(struct page *page,
 	if (ext3_journal_current_handle())
 		goto no_write;
 
+	if (inode->i_sb->s_flags & MS_RDONLY) {
+		err = -EROFS;
+		goto no_write;
+	}
+
 	handle = ext3_journal_start(inode, ext3_writepage_trans_blocks(inode));
 	if (IS_ERR(handle)) {
 		ret = PTR_ERR(handle);
@@ -1684,8 +1711,11 @@  static int ext3_journalled_writepage(struct page *page,
 	if (!ret)
 		ret = err;
 out:
+	if (ret)
+		ext3_msg(inode->i_sb, KERN_CRIT, "%s: failed "
+			"%ld pages, ino %lu; err %d\n", __func__,
+			wbc->nr_to_write, inode->i_ino, ret);
 	return ret;
-
 no_write:
 	redirty_page_for_writepage(wbc, page);
 out_unlock: