From patchwork Wed Aug 28 15:28:39 2019
X-Patchwork-Submitter: Connor Kuehl
X-Patchwork-Id: 1154548
From: Connor Kuehl
To: kernel-team@lists.ubuntu.com
Subject: [Xenial][SRU][CVE-2016-10905][PATCH 1/1] GFS2: don't set rgrp gl_object until it's inserted into rgrp tree
Date: Wed, 28 Aug 2019 08:28:39 -0700
Message-Id: <20190828152839.5463-2-connor.kuehl@canonical.com>
In-Reply-To: <20190828152839.5463-1-connor.kuehl@canonical.com>
References: <20190828152839.5463-1-connor.kuehl@canonical.com>

From: Bob Peterson

CVE-2016-10905

Before this patch, function read_rindex_entry would set a rgrp glock's gl_object pointer to itself before inserting the rgrp into the rgrp rbtree.
The problem is: if another process was also reading the rgrp in, and had already inserted its newly created rgrp, then the second call to read_rindex_entry would overwrite that value, then return a bad return code to the caller. Later, other functions would reference the now-freed rgrp memory by way of gl_object. In some cases, that could result in gfs2_rgrp_brelse being called twice for the same rgrp: once for the failed attempt and once for the "real" rgrp release. Eventually the kernel would panic.

There are also a number of other things that could go wrong when a kernel module accesses freed storage. For example, this could result in rgrp corruption, because the fake rgrp would point to a fake bitmap in memory too, causing gfs2_inplace_reserve to search some random memory for free blocks, and find some, since rgd->rd_bits was never set to NULL before being freed.

This patch fixes the problem by not setting gl_object until the rgrp has been successfully inserted into the rbtree. It also sets rd_bits to NULL as it frees them, which ensures that any accidental access to the wrong rgrp will result in a kernel panic rather than file system corruption, which is preferred.

Signed-off-by: Bob Peterson
(backported from commit 36e4ad0316c017d5b271378ed9a1c9a4b77fab5f)
[ Connor Kuehl: Minor context adjustment.
  The hunk in read_rindex_entry() expected 'PAGE_CACHE_ALIGN' to be
  'PAGE_ALIGN', but that rename was introduced in a mainline patch that
  is not in Xenial: 09cbfeaf1a5a "mm, fs: get rid of PAGE_CACHE_* and
  page_cache_{get,release} macros" ]
Signed-off-by: Connor Kuehl
Acked-by: Tyler Hicks
Acked-by: Kleber Sacilotto de Souza
---
 fs/gfs2/rgrp.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/fs/gfs2/rgrp.c b/fs/gfs2/rgrp.c
index ef24894edecc..9c159e6ad116 100644
--- a/fs/gfs2/rgrp.c
+++ b/fs/gfs2/rgrp.c
@@ -739,6 +739,7 @@ void gfs2_clear_rgrpd(struct gfs2_sbd *sdp)
 
 		gfs2_free_clones(rgd);
 		kfree(rgd->rd_bits);
+		rgd->rd_bits = NULL;
 		return_all_reservations(rgd);
 		kmem_cache_free(gfs2_rgrpd_cachep, rgd);
 	}
@@ -933,10 +934,6 @@ static int read_rindex_entry(struct gfs2_inode *ip)
 	if (error)
 		goto fail;
 
-	rgd->rd_gl->gl_object = rgd;
-	rgd->rd_gl->gl_vm.start = (rgd->rd_addr * bsize) & PAGE_CACHE_MASK;
-	rgd->rd_gl->gl_vm.end = PAGE_CACHE_ALIGN((rgd->rd_addr +
-				rgd->rd_length) * bsize) - 1;
 	rgd->rd_rgl = (struct gfs2_rgrp_lvb *)rgd->rd_gl->gl_lksb.sb_lvbptr;
 	rgd->rd_flags &= ~(GFS2_RDF_UPTODATE | GFS2_RDF_PREFERRED);
 	if (rgd->rd_data > sdp->sd_max_rg_data)
@@ -944,14 +941,20 @@ static int read_rindex_entry(struct gfs2_inode *ip)
 	spin_lock(&sdp->sd_rindex_spin);
 	error = rgd_insert(rgd);
 	spin_unlock(&sdp->sd_rindex_spin);
-	if (!error)
+	if (!error) {
+		rgd->rd_gl->gl_object = rgd;
+		rgd->rd_gl->gl_vm.start = (rgd->rd_addr * bsize) & PAGE_MASK;
+		rgd->rd_gl->gl_vm.end = PAGE_ALIGN((rgd->rd_addr +
+				rgd->rd_length) * bsize) - 1;
 		return 0;
+	}
 
 	error = 0; /* someone else read in the rgrp; free it and ignore it */
 	gfs2_glock_put(rgd->rd_gl);
 
 fail:
 	kfree(rgd->rd_bits);
+	rgd->rd_bits = NULL;
 	kmem_cache_free(gfs2_rgrpd_cachep, rgd);
 	return error;
 }
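Stripped of the filesystem details, the ordering the patch enforces can be sketched in plain C. This is a simplified model, not GFS2 code: the one-slot "tree", `struct record`, and the function names below are invented for illustration. The point is the same as the patch's: publish the back-pointer (`gl_object`) only after the insert succeeds, and NULL out a freed sub-allocation so a stale reference faults instead of reading garbage.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-ins for the rgrp and its glock; names are invented. */
struct glock { void *gl_object; };
struct record {
	int key;
	int *bits;		/* models rgd->rd_bits */
	struct glock *gl;
};

/* A one-slot "rbtree": insert fails if the key is already present,
 * just as rgd_insert() fails when another reader won the race. */
static struct record *slot;

static int tree_insert(struct record *r)
{
	if (slot && slot->key == r->key)
		return -1;	/* loser of the race */
	slot = r;
	return 0;
}

/* Mirrors the patched read_rindex_entry(): set gl_object only after a
 * successful insert, and clear ->bits before freeing. */
static int read_entry(struct glock *gl, int key)
{
	struct record *r = malloc(sizeof(*r));
	r->key = key;
	r->bits = malloc(16);
	r->gl = gl;

	if (tree_insert(r) == 0) {
		gl->gl_object = r;	/* publish only after insert */
		return 0;
	}
	/* Someone else inserted first: free our copy, but do NOT touch
	 * gl->gl_object -- it belongs to the winner.  Before the patch,
	 * the loser had already overwritten it above tree_insert(). */
	free(r->bits);
	r->bits = NULL;			/* the rd_bits = NULL fix */
	free(r);
	return 0;
}
```

With the pre-patch ordering (assign `gl_object` before the insert), the losing reader would leave `gl_object` pointing at the record it is about to free; with this ordering the winner's pointer survives both calls.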