
[SRU,Trusty,1/1] mm/madvise.c: fix madvise() infinite loop under special circumstances

Message ID 20180420125016.10300-2-kleber.souza@canonical.com
State New
Series Fix for CVE-2017-18208

Commit Message

Kleber Sacilotto de Souza April 20, 2018, 12:50 p.m. UTC
From: chenjie <chenjie6@huawei.com>

CVE-2017-18208

MADV_WILLNEED has always been a no-op for DAX (formerly XIP) mappings.
Unfortunately, madvise_willneed() doesn't communicate this properly to
the generic madvise syscall implementation.  The calling convention
there is quite subtle: madvise_vma() is supposed to either return an
error or update *prev; otherwise the main loop never advances to the
next vma and keeps looping forever with no way to get out of the
kernel.

It seems this has been broken since its introduction.  Nobody has
noticed because nobody seems to be using MADV_WILLNEED on these DAX
mappings.
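
To see why, consider a minimal user-space model of that loop.  This is
a sketch only: toy_vma and toy_madvise_vma are made-up names standing
in for vm_area_struct and madvise_vma(), and the loop is deliberately
simplified compared with mm/madvise.c.  It only shows that a callee
which returns 0 without setting *prev leaves the caller with no way to
move on to the next vma.

/* Build with: cc -Wall toy_madvise_loop.c (hypothetical file name) */
#include <stdio.h>

/* Toy stand-in for struct vm_area_struct; purely illustrative. */
struct toy_vma {
	unsigned long vm_start;
	unsigned long vm_end;
	struct toy_vma *vm_next;
	int is_dax;			/* models the DAX/XIP case */
};

/*
 * Plays the role of madvise_willneed()/madvise_vma(): it must either
 * return an error or set *prev, otherwise the caller cannot advance.
 */
static long toy_madvise_vma(struct toy_vma *vma, struct toy_vma **prev,
			    unsigned long start, unsigned long end)
{
	(void)start; (void)end;	/* unused in this toy */
	*prev = vma;		/* the fix: record progress unconditionally */
	if (vma->is_dax)
		return 0;	/* WILLNEED is a no-op here, but prev is set */
	/* non-DAX work (readahead etc.) would happen here */
	return 0;
}

int main(void)
{
	struct toy_vma b = { 0x3000, 0x4000, NULL, 0 };
	struct toy_vma a = { 0x1000, 0x3000, &b, 1 };	/* DAX-like vma */
	struct toy_vma *vma = &a, *prev = NULL;
	unsigned long start = 0x1000, end = 0x4000, tmp;

	while (vma && start < end) {
		tmp = vma->vm_end < end ? vma->vm_end : end;
		if (toy_madvise_vma(vma, &prev, start, tmp))
			return 1;
		start = tmp;
		/*
		 * Without an updated prev, vma (and start) would never
		 * move forward here and this loop would spin forever.
		 */
		vma = prev ? prev->vm_next : vma;
		printf("advanced past 0x%lx\n", start);
	}
	return 0;
}

With the old code, the DAX branch returned 0 without touching *prev,
which in the real kernel corresponds to the main madvise() loop
re-selecting the same vma indefinitely.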

[mhocko@suse.com: rewrite changelog]
Link: http://lkml.kernel.org/r/20171127115318.911-1-guoxuenan@huawei.com
Fixes: fe77ba6f4f97 ("[PATCH] xip: madvice/fadvice: execute in place")
Signed-off-by: chenjie <chenjie6@huawei.com>
Signed-off-by: guoxuenan <guoxuenan@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: zhangyi (F) <yi.zhang@huawei.com>
Cc: Miao Xie <miaoxie@huawei.com>
Cc: Mike Rapoport <rppt@linux.vnet.ibm.com>
Cc: Shaohua Li <shli@fb.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Carsten Otte <cotte@de.ibm.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(backported from commit 6ea8d958a2c95a1d514015d4e29ba21a8c0a1a91)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
---
 mm/madvise.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

Comments

Colin Ian King April 20, 2018, 12:55 p.m. UTC | #1
On 20/04/18 13:50, Kleber Sacilotto de Souza wrote:
> From: chenjie <chenjie6@huawei.com>
> 
> CVE-2017-18208
> 
> [...]
Looks OK to me.

Acked-by: Colin Ian King <colin.king@canonical.com>

Patch

diff --git a/mm/madvise.c b/mm/madvise.c
index 539eeb96b323..08f7501b57d0 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -221,9 +221,9 @@ static long madvise_willneed(struct vm_area_struct *vma,
 {
 	struct file *file = vma->vm_file;
 
+	*prev = vma;
 #ifdef CONFIG_SWAP
 	if (!file || mapping_cap_swap_backed(file->f_mapping)) {
-		*prev = vma;
 		if (!file)
 			force_swapin_readahead(vma, start, end);
 		else
@@ -241,7 +241,6 @@ static long madvise_willneed(struct vm_area_struct *vma,
 		return 0;
 	}
 
-	*prev = vma;
 	start = ((start - vma->vm_start) >> PAGE_SHIFT) + vma->vm_pgoff;
 	if (end > vma->vm_end)
 		end = vma->vm_end;
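
For reference, the hang could be reproduced from user space roughly as
follows.  This is an untested sketch, not a canonical reproducer: the
file path is hypothetical and has to live on a filesystem mounted with
DAX (e.g. ext4/xfs on pmem with -o dax).  On an unpatched kernel the
madvise() call below is expected to never return; on a patched kernel
it returns immediately, since MADV_WILLNEED is a no-op for DAX.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	/* hypothetical path on a DAX-mounted filesystem */
	int fd = open("/mnt/dax/testfile", O_RDWR | O_CREAT, 0600);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ftruncate(fd, page) < 0) {
		perror("ftruncate");
		return 1;
	}

	void *addr = mmap(NULL, page, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/*
	 * Ask for readahead over a range that extends past the end of
	 * this vma; with the stale *prev the unpatched madvise loop
	 * keeps re-selecting the same vma and never makes progress.
	 */
	if (madvise(addr, 2 * page, MADV_WILLNEED) < 0)
		perror("madvise");

	puts("madvise returned (patched kernel, or mapping is not DAX)");
	munmap(addr, page);
	close(fd);
	return 0;
}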