Patchwork [Hardy,Maverick,CVE-2011-0463,1/1] Treat writes as new when holes span across page boundaries, CVE-2011-0463

Submitter Brad Figg
Date April 26, 2011, 4 p.m.
Message ID <1303833612-1571-1-git-send-email-brad.figg@canonical.com>
Permalink /patch/92936/
State New

Comments

Brad Figg - April 26, 2011, 4 p.m.
From: Goldwyn Rodrigues <rgoldwyn@gmail.com>

BugLink: http://bugs.launchpad.net/bugs/770483

CVE-2011-0463

When a hole spans across page boundaries, the next write forces
a read of the block. This could end up reading existing garbage
data from the disk in ocfs2_map_page_blocks. This leads to
non-zero holes. In order to avoid this, mark the writes as new
when the holes span across page boundaries.

Signed-off-by: Goldwyn Rodrigues <rgoldwyn@suse.de>
Signed-off-by: jlbec <jlbec@evilplan.org>

(cherry-pick of commit 272b62c1f0f6f742046e45b50b6fec98860208a0)
Signed-off-by: Brad Figg <brad.figg@canonical.com>
---
 fs/ocfs2/aops.c |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)
Tim Gardner - April 26, 2011, 4:08 p.m.
Acked-by: Tim Gardner <tim.gardner@canonical.com>
Leann Ogasawara - April 26, 2011, 4:11 p.m.
Acked-by: Leann Ogasawara <leann.ogasawara@canonical.com>
Tim Gardner - April 26, 2011, 4:21 p.m.
applied to Hardy/Maverick

Patch

diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c
index 0d44b77..b5d7fb9 100644
--- a/fs/ocfs2/aops.c
+++ b/fs/ocfs2/aops.c
@@ -1015,6 +1015,12 @@  static int ocfs2_prepare_page_for_write(struct inode *inode, u64 *p_blkno,
 	ocfs2_figure_cluster_boundaries(OCFS2_SB(inode->i_sb), cpos,
 					&cluster_start, &cluster_end);
 
+	/* treat the write as new if the a hole/lseek spanned across
+	 * the page boundary.
+	 */
+	new = new | ((i_size_read(inode) <= page_offset(page)) &&
+			(page_offset(page) <= user_pos));
+
 	if (page == wc->w_target_page) {
 		map_from = user_pos & (PAGE_CACHE_SIZE - 1);
 		map_to = map_from + user_len;