Patchwork [MTD-UTILS,compr_lzo.c] Fix calls to liblzo library

Submitter Carsten Schlote
Date Jan. 5, 2009, 8:12 p.m.
Message ID <1231186350.6612.11.camel@kplaws107lx>
Permalink /patch/16799/
State New
Headers show

Comments

Carsten Schlote - Jan. 5, 2009, 8:12 p.m.
This patch fixes the calls to the liblzo library. Some of these calls
should never have worked before. I noticed the problem on 64-bit Linux -
suddenly mkfs.jffs2 started segfaulting.

The -t option didn't work either.

Now LZO compression and the -t option of mkfs.jffs2 are functional again.

http://www.vahanus.net/cgi-bin/gitweb.cgi?p=mtd-utils.git;a=commit;h=5cec421106993009ad43e6188ce44906138c2d38
Carsten Schlote - Jan. 7, 2009, 10:48 a.m.
Hi Josh,

> > ...
> >+	lzo_compress_buf = malloc(2*page_size);
> 
> Don't leave comments in the code about why things are dangerous and
> then leave the code itself commented out.  A comment about why you
> need to allocate a certain size is fine, but this is pretty ugly.
> For a minute I thought you were doing two mallocs and ignoring the
> first one because I didn't see the multi-line /* comment.
> 
> Did you poke around with gdb at all to determine why it segfaults?

Not yet, due to lack of time (HW series ramp-up right now). The
comment was left in to trigger questions like yours :-)  

I came across this problem when I compiled my ptxdist projects on
Ubuntu64 at home. Suddenly mkfs.jffs2 segfaulted - it had always worked
fine on IA32.

So I spent some time fixing these problems:
- Wrong usage of the LZO API calls for decompression. This caused -t to
  fail, on IA32 as well. It is now fixed correctly by setting destlen to
  the allowed maximum before calling the 'safe' decompression function.
- Crash on AMD64 with the currently supplied work buffer - I just
  increased the buffer until it worked again. That's a hack, but a
  working one.
  Possible suspect:
    - The compiler uses different value and structure alignments on
      64-bit hosts, increasing the size of structures and arrays.

I agree that it is undesirable to 'guess' workbuffer sizes. I had no
time to check and figure out whether there is an appropriate LZO API
call to derive a safe workbuffer size for a given algorithm and
compression parameters.

Frankly speaking, the previous formula is no solution either. The
comment says: 'Taken from their FAQs'. Apart from that note, just some
hardcoded numbers are used to get a size. There is no explanation of
what these values mean, or from what and where these constants were
taken.

The correct solution is to query the size from the LZO library itself -
then, should the estimate ever be wrong, you fix the library once and
all applications work again.

If somebody knows how to get the correct information from the LZO
library - just fix it and post a patch, or tell me where to find this
information and I will fix it and post a patch. 

Carsten
Josh Boyer - Jan. 7, 2009, 1:02 p.m.
On Mon, Jan 05, 2009 at 09:12:30PM +0100, Carsten Schlote wrote:
>This patch fixes the calls to liblzo. Some things shouldn't ever worked
>before. I noticed the problem on 64bit Linux - suddently 
>mkfs.jffs2 starts segfaulting.
>
>The -t option didn't work as well.
>
>Now LZO compression and -t option of mkfs.jffs2 is functional again.
>
>http://www.vahanus.net/cgi-bin/gitweb.cgi?p=mtd-utils.git;a=commit;h=5cec421106993009ad43e6188ce44906138c2d38
>
>-- 
>Carsten Schlote <c.schlote@konzeptpark.de>
>konzeptpark.de

>commit 5cec421106993009ad43e6188ce44906138c2d38
>Author: Carsten Schlote <c.schlote@konzeptpark.de>
>Date:   Mon Jan 5 18:58:22 2009 +0100
>
>    [PATCH][compr_lzo.c] Fix calls to LZO library
>    
>    Fixed the calls to LZO library. I wonder how this could ever
>    work before.
>    
>    Now LZO compression and -t option in mkfs.jffs2 are working.
>    
>    Signed-off-by: Carsten Schlote <c.schlote@konzeptpark.de>
>
>diff --git a/compr_lzo.c b/compr_lzo.c
>index a0bb362..f4ae697 100644
>--- a/compr_lzo.c
>+++ b/compr_lzo.c
>@@ -48,17 +48,19 @@ static void *lzo_compress_buf;
> static int jffs2_lzo_cmpr(unsigned char *data_in, unsigned char *cpage_out,
> 			  uint32_t *sourcelen, uint32_t *dstlen, void *model)
> {
>-	uint32_t compress_size;
>+	lzo_uint compress_size;

What is the lzo_uint type?  If it's some odd thing that changes
size based on compile options, I'm not overly thrilled.  Anyway,
it's not immediately obvious to me why uint32_t doesn't work.

> 	int ret;
> 
> 	ret = lzo1x_999_compress(data_in, *sourcelen, lzo_compress_buf, &compress_size, lzo_mem);
> 
>-	if (ret != LZO_E_OK)
>+	if (ret != LZO_E_OK) {
>+		printf(__FILE__ ":Got compress error: ret=%d, cl=%ld, destlen=%ld\n",ret,compress_size,(long int)(*dstlen));
> 		return -1;
>+	}
> 
> 	if (compress_size > *dstlen)
> 		return -1;
>-
>+	

Unneeded whitespace change.

> 	memcpy(cpage_out, lzo_compress_buf, compress_size);
> 	*dstlen = compress_size;
> 
>@@ -69,12 +71,15 @@ static int jffs2_lzo_decompress(unsigned char *data_in, unsigned char *cpage_out
> 				 uint32_t srclen, uint32_t destlen, void *model)
> {
> 	int ret;
>-	uint32_t dl;
>+	lzo_uint dl;
> 
>+	dl = destlen;
> 	ret = lzo1x_decompress_safe(data_in,srclen,cpage_out,&dl,NULL);
> 
>-	if (ret != LZO_E_OK || dl != destlen)
>+	if ((ret != LZO_E_OK) || (dl != destlen)) {
>+		printf(__FILE__ ":Got decompress error: ret=%d, dl=%ld, destlen=%ld\n",ret,dl,(long int)destlen);
> 		return -1;
>+	}
> 
> 	return 0;
> }
>@@ -92,12 +97,22 @@ int jffs2_lzo_init(void)
> {
> 	int ret;
> 
>+	lzo_init();
>+	
> 	lzo_mem = malloc(LZO1X_999_MEM_COMPRESS);
> 	if (!lzo_mem)
> 		return -1;
> 
> 	/* Worse case LZO compression size from their FAQ */
>-	lzo_compress_buf = malloc(page_size + (page_size / 16) + 64 + 3);
>+	/* Dangerous: Seems to be wrong for 64bit intel platforms. At least it results
>+                into s SEGV.
>+
>+	lzo_compress_buf = malloc(page_size + (page_size / 16) + 64 + 3); 
>+
>+	   The line below fixes the SEGV. No further work has be done to determine
>+           the smallest possible size for a worstcase scenario (uncompressable data)
>+	*/
>+	lzo_compress_buf = malloc(2*page_size);

Don't leave comments in the code about why things are dangerous and then
leave the code itself commented out.  A comment about why you need to
allocate a certain size is fine, but this is pretty ugly.  For a minute
I thought you were doing two mallocs and ignoring the first one because
I didn't see the multi-line /* comment.

Did you poke around with gdb at all to determine why it segfaults?

josh

Patch

diff --git a/compr_lzo.c b/compr_lzo.c
index a0bb362..f4ae697 100644
--- a/compr_lzo.c
+++ b/compr_lzo.c
@@ -48,17 +48,19 @@ static void *lzo_compress_buf;
 static int jffs2_lzo_cmpr(unsigned char *data_in, unsigned char *cpage_out,
 			  uint32_t *sourcelen, uint32_t *dstlen, void *model)
 {
-	uint32_t compress_size;
+	lzo_uint compress_size;
 	int ret;
 
 	ret = lzo1x_999_compress(data_in, *sourcelen, lzo_compress_buf, &compress_size, lzo_mem);
 
-	if (ret != LZO_E_OK)
+	if (ret != LZO_E_OK) {
+		printf(__FILE__ ":Got compress error: ret=%d, cl=%ld, destlen=%ld\n",ret,compress_size,(long int)(*dstlen));
 		return -1;
+	}
 
 	if (compress_size > *dstlen)
 		return -1;
-
+	
 	memcpy(cpage_out, lzo_compress_buf, compress_size);
 	*dstlen = compress_size;
 
@@ -69,12 +71,15 @@ static int jffs2_lzo_decompress(unsigned char *data_in, unsigned char *cpage_out
 				 uint32_t srclen, uint32_t destlen, void *model)
 {
 	int ret;
-	uint32_t dl;
+	lzo_uint dl;
 
+	dl = destlen;
 	ret = lzo1x_decompress_safe(data_in,srclen,cpage_out,&dl,NULL);
 
-	if (ret != LZO_E_OK || dl != destlen)
+	if ((ret != LZO_E_OK) || (dl != destlen)) {
+		printf(__FILE__ ":Got decompress error: ret=%d, dl=%ld, destlen=%ld\n",ret,dl,(long int)destlen);
 		return -1;
+	}
 
 	return 0;
 }
@@ -92,12 +97,22 @@ int jffs2_lzo_init(void)
 {
 	int ret;
 
+	lzo_init();
+	
 	lzo_mem = malloc(LZO1X_999_MEM_COMPRESS);
 	if (!lzo_mem)
 		return -1;
 
 	/* Worse case LZO compression size from their FAQ */
-	lzo_compress_buf = malloc(page_size + (page_size / 16) + 64 + 3);
+	/* Dangerous: Seems to be wrong for 64bit intel platforms. At least it results
+                into s SEGV.
+
+	lzo_compress_buf = malloc(page_size + (page_size / 16) + 64 + 3); 
+
+	   The line below fixes the SEGV. No further work has be done to determine
+           the smallest possible size for a worstcase scenario (uncompressable data)
+	*/
+	lzo_compress_buf = malloc(2*page_size);
 	if (!lzo_compress_buf) {
 		free(lzo_mem);
 		return -1;