Message ID | 1334175816-20688-1-git-send-email-jcd@tribudubois.net
---|---
State | New, archived
On Wed, 2012-04-11 at 22:23 +0200, Jean-Christophe DUBOIS wrote:
> If the system was powered off while JFFS2 was creating or moving
> (GC) an extended attribute node, it might happen at next reboot
> that the node CRC is OK but the data (name and value) might be
> incomplete and therefore corrupted.
>
> During the mount scan we need to check the xattr data integrity to
> weed out bad ones and keep good ones (with an earlier version).
>
> Without this check the xattr data integrity problem was detected
> a lot later and was not cured automatically (-EIO was returned).
>
> Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
> ---

AFAIR, the whole idea of having 2 CRC checksums in JFFS2 nodes is to
speed up scanning. E.g., if we hit an inode node, we check only
"node_crc" to validate the "metadata", insert the inode into the
in-memory data structures, and read the next node. We do not check the
data CRC (data_crc), otherwise scanning would be much slower. If the
data are corrupted, we notice this later, when reading it.

The same is done for xattrs - during scanning we do not check the data
payload.

I do not say this is ideal design, but it is how it is. So I do not
think your patch should be merged.
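[Editorial sketch: the two-checksum layout described above can be modeled in user space as follows. The structure and function names are invented for illustration and are not JFFS2's real ones, and the software CRC-32 below only stands in for the kernel's `crc32()` helper (whose seed/finalization conventions differ).]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Plain software CRC-32 (reflected, poly 0xEDB88320); a user-space
 * stand-in for the kernel crc32() helper JFFS2 uses. */
static uint32_t crc32_sw(uint32_t crc, const uint8_t *p, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* Illustrative node layout (not JFFS2's real structures): two checksums,
 * so the mount-time scan can validate the header alone and defer the
 * payload check until the data is actually read. */
struct fake_node {
    uint8_t  hdr[16];    /* "metadata": type, length, version, ... */
    uint32_t node_crc;   /* CRC over hdr -- checked at scan time */
    uint8_t  data[64];   /* payload (file data, or xattr name + value) */
    uint32_t data_crc;   /* CRC over payload -- checked lazily, on read */
};

/* First-pass scan: cheap, header-only validation. */
static int scan_ok(const struct fake_node *n)
{
    return crc32_sw(0, n->hdr, sizeof(n->hdr)) == n->node_crc;
}

/* On access: the (potentially expensive) payload validation. */
static int read_ok(const struct fake_node *n, size_t data_len)
{
    return crc32_sw(0, n->data, data_len) == n->data_crc;
}
```

A torn write that damages only the payload passes the first check and is caught only by the second, which is exactly the failure mode the patch is about.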
On 22/04/2012 15:15, Artem Bityutskiy wrote:
> On Wed, 2012-04-11 at 22:23 +0200, Jean-Christophe DUBOIS wrote:
>> If the system was powered off while JFFS2 was creating or moving
>> (GC) an extended attribute node, it might happen at next reboot
>> that the node CRC is OK but the data (name and value) might be
>> incomplete and therefore corrupted.
>>
>> During the mount scan we need to check the xattr data integrity to
>> weed out bad ones and keep good ones (with an earlier version).
>>
>> Without this check the xattr data integrity problem was detected
>> a lot later and was not cured automatically (-EIO was returned).
>>
>> Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
>> ---
> AFAIR, the whole idea of having 2 CRC checksums in JFFS2 nodes is to
> speed up scanning. E.g., if we hit an inode node, we check only
> "node_crc" to validate the "metadata", insert the inode into the
> in-memory data structures, and read the next node. We do not check
> the data CRC (data_crc), otherwise scanning would be much slower. If
> the data are corrupted, we notice this later, when reading it.
>
> The same is done for xattrs - during scanning we do not check the
> data payload.

In the case of attributes, JFFS2 does detect later that they are
corrupted, but by then it is too late. At that point we have lost the
reference to earlier "versions" of the attribute, and I think we would
need to rescan the whole partition to find it again (assuming it has
not been deleted in the meantime, as I am not sure how GC deals with
earlier versions of the attributes). So it seems better to do the check
in the first place.

You also have to consider that attributes are generally "short" (unlike
file content), so there is not much data associated with them and the
overhead implied by scanning the attribute data is not very significant
(especially since the attribute data is already in memory at that
point). But it is true that this patch does add some overhead.

What happens for now is that if an attribute is corrupted, we will
never be able to read it again: JFFS2 returns a negative, unrecoverable
error on any access once the corruption is detected, and does not try
to cure the problem. JFFS2 will not recover an earlier version of the
attribute even if one exists, so we are basically stuck. The only way
to recover for now seems to be to delete the file associated with the
attribute and then recreate both the file and the attribute. I don't
think this is a very desirable behavior, especially on an embedded
system (the main target for JFFS2) where I would like the file system
to recover by itself if possible.

So if any of you thinks it is possible to get back to an earlier
"version" of the attribute after scan time, so that JFFS2 can recover
from corrupted attributes, I might try implementing that instead
(though I would appreciate some initial hints). For now I do not see
how to do it with the current code.

> I do not say this is ideal design, but it is how it is. So I do not
> think your patch should be merged.

This patch allows JFFS2 to recover from corrupted attributes (at least
in my test case). This comes at the expense of a slight overhead at
scan time, but for my system (an embedded device with no admin console)
that is a trade-off I am happy to accept for now, as I have no
alternative solution and I need to use attributes.

BTW, I am wondering what would happen if a "security attribute"
(SELinux, IMA, ...) got corrupted. It seems to me the result would be
quite annoying.
Hi Jean-Christophe,

On Sun, 2012-04-22 at 18:17 +0200, Jean-Christophe DUBOIS wrote:
> > The same is done for xattrs - during scanning we do not check the
> > data payload.
> In the case of attributes, JFFS2 does detect later that they are
> corrupted, but by then it is too late. At that point we have lost the
> reference to earlier "versions" of the attribute, and I think we
> would need to rescan the whole partition to find it again (assuming
> it has not been deleted in the meantime, as I am not sure how GC
> deals with earlier versions of the attributes). So it seems better to
> do the check in the first place.

Wait, I think I am starting to remember a bit more. First of all, you
surely have many good arguments. But my counter-argument is that the
same happens for data nodes: we do not check the data CRC during
scanning. Xattr nodes should be treated the same way.

JFFS2 scans in 2 phases:

1. Early scanning - it scans only the headers of the nodes, not the
payloads. No I/O on the FS is possible yet.

2. Late background scanning - the GC thread goes through every single
inode and reads it, which forces data CRC checking. JFFS2 is available
for reading, but write requests may block (if they require GC, AFAIR,
because we cannot do GC before we have checked everything; otherwise we
might lose a valuable older node version).

So I guess this is what you should do for xattrs as well - check the
payload CRC during the second-stage scanning, not the first. The idea
of this split was to do the minimum needed to make the FS available for
reading, and to continue the rest in the background. This background
pass is quite heavy, BTW, and slows down the entire system boot.

> You also have to consider that attributes are generally "short"
> (unlike file content), so there is not much data associated with them
> and the overhead implied by scanning the attribute data is not very
> significant (especially since the attribute data is already in memory
> at that point). But it is true that this patch does add some
> overhead.

> What happens for now is that if an attribute is corrupted, we will
> never be able to read it again (JFFS2 returns a negative,
> unrecoverable error on corruption detection on any access and does
> not try to cure the problem).

AFAIU, this should be dealt with during the second-stage scanning.
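[Editorial sketch: the two-phase mount described in this thread can be modeled as below. The structures, flags, and function names are invented for illustration and do not exist in JFFS2; the point is the ordering constraint, i.e. that GC (and writers needing GC) must wait for the background check to finish.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Model of a scanned node: the header was validated in phase 1; the
 * payload check is deferred to phase 2 (or to the first read). */
struct node {
    bool hdr_ok;     /* node_crc matched during the early scan */
    bool data_seen;  /* payload CRC has been forced by a read */
};

struct fs_state {
    struct node *nodes;
    size_t n;
    bool fully_checked;   /* set once the background pass finishes */
};

/* Phase 1: header-only scan. Nodes with a good header enter the
 * in-memory structures; the FS becomes readable right after this. */
static size_t phase1_scan(struct fs_state *fs)
{
    size_t kept = 0;
    for (size_t i = 0; i < fs->n; i++)
        if (fs->nodes[i].hdr_ok)
            kept++;
    return kept;
}

/* Phase 2: the background GC thread reads every node, forcing the
 * data-CRC checks that phase 1 skipped. */
static void phase2_background_check(struct fs_state *fs)
{
    for (size_t i = 0; i < fs->n; i++)
        fs->nodes[i].data_seen = true;
    fs->fully_checked = true;
}

/* GC must not reclaim space before phase 2 completes: an older node
 * version might still be needed if a newer copy turns out corrupted.
 * Writers that need GC therefore block on this predicate. */
static bool gc_may_run(const struct fs_state *fs)
{
    return fs->fully_checked;
}
```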
diff --git a/fs/jffs2/scan.c b/fs/jffs2/scan.c
index 7654e87..91f44d7 100644
--- a/fs/jffs2/scan.c
+++ b/fs/jffs2/scan.c
@@ -354,6 +354,16 @@ static int jffs2_scan_xattr_node(struct jffs2_sb_info *c, struct jffs2_erasebloc
 		return 0;
 	}
 
+	/* we also need to check the xattr data integrity to weed out bad ones */
+	crc = crc32(0, &rx->data[0], rx->name_len + 1 + je16_to_cpu(rx->value_len));
+	if (crc != je32_to_cpu(rx->data_crc)) {
+		JFFS2_WARNING("xattr data CRC failed at %#08x, read=%#08x, calc=%#08x\n",
+			      ofs, je32_to_cpu(rx->data_crc), crc);
+		if ((err = jffs2_scan_dirty_space(c, jeb, je32_to_cpu(rx->totlen))))
+			return err;
+		return 0;
+	}
+
 	xd = jffs2_setup_xattr_datum(c, xid, version);
 	if (IS_ERR(xd))
 		return PTR_ERR(xd);
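[Editorial note on the check the patch adds: the length fed to `crc32()` - `name_len + 1 + value_len` - covers the xattr name, its terminating NUL, and the value bytes. A user-space sketch, with a zlib-style software CRC-32 standing in for the kernel helper (whose seed/finalization conventions differ, so the values here are not the on-flash ones):]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Plain software CRC-32 (reflected, poly 0xEDB88320); inversion at
 * entry and exit, zlib-style, so calls can be chained. */
static uint32_t crc32_sw(uint32_t crc, const uint8_t *p, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *p++;
        for (int i = 0; i < 8; i++)
            crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
    }
    return ~crc;
}

/* The patched scan checks name_len + 1 + value_len bytes starting at
 * rx->data: name bytes, the terminating NUL, then the value bytes.
 * Chaining makes the two logical pieces of the single CRC explicit. */
static uint32_t xattr_data_crc(const char *name, uint8_t name_len,
                               const uint8_t *value, uint16_t value_len)
{
    uint32_t crc = crc32_sw(0, (const uint8_t *)name, (size_t)name_len + 1);
    return crc32_sw(crc, value, value_len);
}
```

Because the CRC chains, computing it over the two pieces gives the same result as one pass over the flat `name\0value` buffer, which is how the on-flash payload is laid out.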
If the system was powered off while JFFS2 was creating or moving (GC)
an extended attribute node, it might happen at next reboot that the
node CRC is OK but the data (name and value) might be incomplete and
therefore corrupted.

During the mount scan we need to check the xattr data integrity to weed
out bad ones and keep good ones (with an earlier version).

Without this check the xattr data integrity problem was detected a lot
later and was not cured automatically (-EIO was returned).

Signed-off-by: Jean-Christophe DUBOIS <jcd@tribudubois.net>
---
 fs/jffs2/scan.c | 10 ++++++++++
 1 files changed, 10 insertions(+), 0 deletions(-)