Message ID: 1432123016-21081-1-git-send-email-luis.henriques@canonical.com
State: New
On Wed, May 20, 2015 at 12:56:56PM +0100, Luis Henriques wrote:
> From: Lukas Czerner <lczerner@redhat.com>
>
> Currently there is a bug in zero range code which causes zero range
> calls to only allocate block aligned portion of the range, while
> ignoring the rest in some cases.
>
> In some cases, namely if the end of the range is past i_size, we do
> attempt to preallocate the last nonaligned block. However this might
> cause kernel to BUG() in some carefully designed zero range requests
> on setups where page size > block size.
>
> Fix this problem by first preallocating the entire range, including
> the nonaligned edges and converting the written extents to unwritten
> in the next step. This approach will also give us the advantage of
> having the range to be as linearly contiguous as possible.
>
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
> (cherry picked from commit 0f2af21aae11972fa924374ddcf52e88347cf5a8)
> CVE-2015-0275
> BugLink: https://bugs.launchpad.net/bugs/1425270
> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
> ---
>  fs/ext4/extents.c | 31 +++++++++++++++++++------------
>  1 file changed, 19 insertions(+), 12 deletions(-)
>
> diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
> index bed43081720f..aa522429b751 100644
> --- a/fs/ext4/extents.c
> +++ b/fs/ext4/extents.c
> @@ -4803,12 +4803,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
>  	else
>  		max_blocks -= lblk;
>
> -	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
> -		EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
> -		EXT4_EX_NOCACHE;
> -	if (mode & FALLOC_FL_KEEP_SIZE)
> -		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
> -
>  	mutex_lock(&inode->i_mutex);
>
>  	/*
> @@ -4825,15 +4819,28 @@ static long ext4_zero_range(struct file *file, loff_t offset,
>  		ret = inode_newsize_ok(inode, new_size);
>  		if (ret)
>  			goto out_mutex;
> -		/*
> -		 * If we have a partial block after EOF we have to allocate
> -		 * the entire block.
> -		 */
> -		if (partial_end)
> -			max_blocks += 1;
>  	}
>
> +	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
> +	if (mode & FALLOC_FL_KEEP_SIZE)
> +		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
> +
> +	/* Preallocate the range including the unaligned edges */
> +	if (partial_begin || partial_end) {
> +		ret = ext4_alloc_file_blocks(file,
> +				round_down(offset, 1 << blkbits) >> blkbits,
> +				(round_up((offset + len), 1 << blkbits) -
> +				 round_down(offset, 1 << blkbits)) >> blkbits,
> +				new_size, flags, mode);
> +		if (ret)
> +			goto out_mutex;
> +
> +	}
> +
> +	/* Zero range excluding the unaligned edges */
>  	if (max_blocks > 0) {
> +		flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
> +			  EXT4_EX_NOCACHE);
>
>  		/* Now release the pages and zero block aligned part of pages*/
>  		truncate_pagecache_range(inode, start, end - 1);

Ugg. This seems to do what is claimed in the description at least, and
is a clean cherry-pick against mainline:

Acked-by: Andy Whitcroft <apw@canonical.com>

-apw
On 20.05.2015 13:56, Luis Henriques wrote:
> From: Lukas Czerner <lczerner@redhat.com>
>
> Currently there is a bug in zero range code which causes zero range
> calls to only allocate block aligned portion of the range, while
> ignoring the rest in some cases.
>
> In some cases, namely if the end of the range is past i_size, we do
> attempt to preallocate the last nonaligned block. However this might
> cause kernel to BUG() in some carefully designed zero range requests
> on setups where page size > block size.
>
> Fix this problem by first preallocating the entire range, including
> the nonaligned edges and converting the written extents to unwritten
> in the next step. This approach will also give us the advantage of
> having the range to be as linearly contiguous as possible.
>
> Signed-off-by: Lukas Czerner <lczerner@redhat.com>
> Signed-off-by: Theodore Ts'o <tytso@mit.edu>
> (cherry picked from commit 0f2af21aae11972fa924374ddcf52e88347cf5a8)
> CVE-2015-0275
> BugLink: https://bugs.launchpad.net/bugs/1425270
> Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
> ---
>  fs/ext4/extents.c | 31 +++++++++++++++++++------------
>  1 file changed, 19 insertions(+), 12 deletions(-)
>
> diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
> index bed43081720f..aa522429b751 100644
> --- a/fs/ext4/extents.c
> +++ b/fs/ext4/extents.c
> @@ -4803,12 +4803,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
>  	else
>  		max_blocks -= lblk;
>
> -	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
> -		EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
> -		EXT4_EX_NOCACHE;
> -	if (mode & FALLOC_FL_KEEP_SIZE)
> -		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
> -
>  	mutex_lock(&inode->i_mutex);
>
>  	/*
> @@ -4825,15 +4819,28 @@ static long ext4_zero_range(struct file *file, loff_t offset,
>  		ret = inode_newsize_ok(inode, new_size);
>  		if (ret)
>  			goto out_mutex;
> -		/*
> -		 * If we have a partial block after EOF we have to allocate
> -		 * the entire block.
> -		 */
> -		if (partial_end)
> -			max_blocks += 1;
>  	}
>
> +	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
> +	if (mode & FALLOC_FL_KEEP_SIZE)
> +		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
> +
> +	/* Preallocate the range including the unaligned edges */
> +	if (partial_begin || partial_end) {
> +		ret = ext4_alloc_file_blocks(file,
> +				round_down(offset, 1 << blkbits) >> blkbits,
> +				(round_up((offset + len), 1 << blkbits) -
> +				 round_down(offset, 1 << blkbits)) >> blkbits,
> +				new_size, flags, mode);
> +		if (ret)
> +			goto out_mutex;
> +
> +	}
> +
> +	/* Zero range excluding the unaligned edges */
>  	if (max_blocks > 0) {
> +		flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
> +			  EXT4_EX_NOCACHE);
>
>  		/* Now release the pages and zero block aligned part of pages*/
>  		truncate_pagecache_range(inode, start, end - 1);

Seems to do what the description indicates. And clean pick.
Applied to Vivid master-next branch.
diff --git a/fs/ext4/extents.c b/fs/ext4/extents.c
index bed43081720f..aa522429b751 100644
--- a/fs/ext4/extents.c
+++ b/fs/ext4/extents.c
@@ -4803,12 +4803,6 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 	else
 		max_blocks -= lblk;

-	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT |
-		EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
-		EXT4_EX_NOCACHE;
-	if (mode & FALLOC_FL_KEEP_SIZE)
-		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
-
 	mutex_lock(&inode->i_mutex);

 	/*
@@ -4825,15 +4819,28 @@ static long ext4_zero_range(struct file *file, loff_t offset,
 		ret = inode_newsize_ok(inode, new_size);
 		if (ret)
 			goto out_mutex;
-		/*
-		 * If we have a partial block after EOF we have to allocate
-		 * the entire block.
-		 */
-		if (partial_end)
-			max_blocks += 1;
 	}

+	flags = EXT4_GET_BLOCKS_CREATE_UNWRIT_EXT;
+	if (mode & FALLOC_FL_KEEP_SIZE)
+		flags |= EXT4_GET_BLOCKS_KEEP_SIZE;
+
+	/* Preallocate the range including the unaligned edges */
+	if (partial_begin || partial_end) {
+		ret = ext4_alloc_file_blocks(file,
+				round_down(offset, 1 << blkbits) >> blkbits,
+				(round_up((offset + len), 1 << blkbits) -
+				 round_down(offset, 1 << blkbits)) >> blkbits,
+				new_size, flags, mode);
+		if (ret)
+			goto out_mutex;
+
+	}
+
+	/* Zero range excluding the unaligned edges */
 	if (max_blocks > 0) {
+		flags |= (EXT4_GET_BLOCKS_CONVERT_UNWRITTEN |
+			  EXT4_EX_NOCACHE);

 		/* Now release the pages and zero block aligned part of pages*/
 		truncate_pagecache_range(inode, start, end - 1);