[SRU,Trusty,1/1] ipc/shm: Fix shmat mmap nil-page protection

Message ID 20171130174315.2101-2-kleber.souza@canonical.com
State New
Series
  • Fix for CVE-2017-5669

Commit Message

Kleber Souza Nov. 30, 2017, 5:43 p.m.
From: Davidlohr Bueso <dave@stgolabs.net>

The issue is described here, with a nice testcase:

    https://bugzilla.kernel.org/show_bug.cgi?id=192931

The problem is that shmat() calls do_mmap_pgoff() with MAP_FIXED and
the address rounded down to 0.  For the regular mmap case, the
protection mentioned above is that the kernel gets to generate the
address -- arch_get_unmapped_area() will always check for MAP_FIXED and
return that address.  So by the time we call security_mmap_addr(0),
things get funky for shmat().

The testcase itself shows that while a regular user crashes, root has
no problem attaching a nil-page.  There are two possible fixes.  The
first, which this patch implements, is to simply allow root to crash
as well -- this is also regular mmap behavior, i.e. when hacking up
the testcase and adding mmap(... |MAP_FIXED).  While this approach is
the safer option, the second alternative is to ignore SHM_RND if the
rounded address is 0, thus keeping only the MAP_SHARED flags.  That
would make the behavior of shmat() identical to the mmap() case.  The
downside is obviously user visible, but it does make sense in that it
maintains the semantics after the round-down wrt the 0 address and mmap.

Passes shm related ltp tests.

Link: http://lkml.kernel.org/r/1486050195-18629-1-git-send-email-dave@stgolabs.net
Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
Reported-by: Gareth Evans <gareth.evans@contextis.co.uk>
Cc: Manfred Spraul <manfred@colorfullife.com>
Cc: Michael Kerrisk <mtk.manpages@googlemail.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

CVE-2017-5669
(cherry picked from commit 95e91b831f87ac8e1f8ed50c14d709089b4e01b8)
Signed-off-by: Kleber Sacilotto de Souza <kleber.souza@canonical.com>
---
 ipc/shm.c | 13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

Comments

Colin King Nov. 30, 2017, 5:47 p.m. | #1
On 30/11/17 17:43, Kleber Sacilotto de Souza wrote:
> [...]
Clean upstream cherry pick, looks sane to me.

Acked-by: Colin Ian King <colin.king@canonical.com>
Po-Hsu Lin Dec. 12, 2017, 6:24 a.m. | #2
Clean cherry-pick with positive test result.
Acked-by: Po-Hsu Lin <po-hsu.lin@canonical.com>
Thadeu Lima de Souza Cascardo Dec. 12, 2017, 4:17 p.m. | #3
Applied to trusty master-next branch.

Thanks.
Cascardo.

Applied-to: trusty/master-next

Patch

diff --git a/ipc/shm.c b/ipc/shm.c
index d7805acb44fd..06ea9ef7f54a 100644
--- a/ipc/shm.c
+++ b/ipc/shm.c
@@ -1091,8 +1091,8 @@ SYSCALL_DEFINE3(shmctl, int, shmid, int, cmd, struct shmid_ds __user *, buf)
  * "raddr" thing points to kernel space, and there has to be a wrapper around
  * this.
  */
-long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
-	      unsigned long shmlba)
+long do_shmat(int shmid, char __user *shmaddr, int shmflg,
+	      ulong *raddr, unsigned long shmlba)
 {
 	struct shmid_kernel *shp;
 	unsigned long addr;
@@ -1113,8 +1113,13 @@ long do_shmat(int shmid, char __user *shmaddr, int shmflg, ulong *raddr,
 		goto out;
 	else if ((addr = (ulong)shmaddr)) {
 		if (addr & (shmlba - 1)) {
-			if (shmflg & SHM_RND)
-				addr &= ~(shmlba - 1);	   /* round down */
+			/*
+			 * Round down to the nearest multiple of shmlba.
+			 * For sane do_mmap_pgoff() parameters, avoid
+			 * round downs that trigger nil-page and MAP_FIXED.
+			 */
+			if ((shmflg & SHM_RND) && addr >= shmlba)
+				addr &= ~(shmlba - 1);
 			else
 #ifndef __ARCH_FORCE_SHMLBA
 				if (addr & ~PAGE_MASK)