From patchwork Tue Sep 24 07:47:10 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 1166386
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Eoan][CVE-2019-14821][PATCH] KVM: coalesced_mmio: add bounds checking
Date: Tue, 24 Sep 2019 09:47:10 +0200
Message-Id: <20190924074710.14715-1-juergh@canonical.com>
X-Mailer: git-send-email 2.20.1
From: Matt Delco

The first/last indexes are typically shared with a user app.
The app can change the 'last' index that the kernel uses to store the
next result.  This change sanity checks the index before using it for
writing to a potentially arbitrary address.

This fixes CVE-2019-14821.

Cc: stable@vger.kernel.org
Fixes: 5f94c1741bdc ("KVM: Add coalesced MMIO support (common part)")
Signed-off-by: Matt Delco
Signed-off-by: Jim Mattson
Reported-by: syzbot+983c866c3dd6efa3662a@syzkaller.appspotmail.com
[Use READ_ONCE. - Paolo]
Signed-off-by: Paolo Bonzini

CVE-2019-14821

(cherry picked from commit b60fe990c6b07ef6d4df67bc0530c7c90a62623a)
Signed-off-by: Juerg Haefliger
Acked-by: Connor Kuehl
Acked-by: Tyler Hicks
---
 virt/kvm/coalesced_mmio.c | 19 +++++++++++--------
 1 file changed, 11 insertions(+), 8 deletions(-)

diff --git a/virt/kvm/coalesced_mmio.c b/virt/kvm/coalesced_mmio.c
index 5294abb3f178..8ffd07e2a160 100644
--- a/virt/kvm/coalesced_mmio.c
+++ b/virt/kvm/coalesced_mmio.c
@@ -40,7 +40,7 @@ static int coalesced_mmio_in_range(struct kvm_coalesced_mmio_dev *dev,
 	return 1;
 }
 
-static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
+static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev, u32 last)
 {
 	struct kvm_coalesced_mmio_ring *ring;
 	unsigned avail;
@@ -52,7 +52,7 @@ static int coalesced_mmio_has_room(struct kvm_coalesced_mmio_dev *dev)
 	 * there is always one unused entry in the buffer
 	 */
 	ring = dev->kvm->coalesced_mmio_ring;
-	avail = (ring->first - ring->last - 1) % KVM_COALESCED_MMIO_MAX;
+	avail = (ring->first - last - 1) % KVM_COALESCED_MMIO_MAX;
 	if (avail == 0) {
 		/* full */
 		return 0;
@@ -67,25 +67,28 @@ static int coalesced_mmio_write(struct kvm_vcpu *vcpu,
 {
 	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
 	struct kvm_coalesced_mmio_ring *ring = dev->kvm->coalesced_mmio_ring;
+	__u32 insert;
 
 	if (!coalesced_mmio_in_range(dev, addr, len))
 		return -EOPNOTSUPP;
 
 	spin_lock(&dev->kvm->ring_lock);
 
-	if (!coalesced_mmio_has_room(dev)) {
+	insert = READ_ONCE(ring->last);
+	if (!coalesced_mmio_has_room(dev, insert) ||
+	    insert >= KVM_COALESCED_MMIO_MAX) {
 		spin_unlock(&dev->kvm->ring_lock);
 		return -EOPNOTSUPP;
 	}
 
 	/* copy data in first free entry of the ring */
 
-	ring->coalesced_mmio[ring->last].phys_addr = addr;
-	ring->coalesced_mmio[ring->last].len = len;
-	memcpy(ring->coalesced_mmio[ring->last].data, val, len);
-	ring->coalesced_mmio[ring->last].pio = dev->zone.pio;
+	ring->coalesced_mmio[insert].phys_addr = addr;
+	ring->coalesced_mmio[insert].len = len;
+	memcpy(ring->coalesced_mmio[insert].data, val, len);
+	ring->coalesced_mmio[insert].pio = dev->zone.pio;
 	smp_wmb();
-	ring->last = (ring->last + 1) % KVM_COALESCED_MMIO_MAX;
+	ring->last = (insert + 1) % KVM_COALESCED_MMIO_MAX;
 	spin_unlock(&dev->kvm->ring_lock);
 	return 0;
 }
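
For reviewers who want the pattern in isolation: the essence of the fix is to
read the shared 'last' index exactly once, bounds check that snapshot, and use
only the snapshot for the array write and the final index update. Below is a
minimal userspace sketch of that pattern, as an illustration only, not kernel
code; the structure layout and the names (struct ring, ring_push, RING_MAX)
are invented for the example, and a plain volatile read stands in for
READ_ONCE().

/* Illustrative sketch of "snapshot, validate, then use" for an index that
 * lives in memory another party can modify concurrently. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define RING_MAX 64

struct entry {
	uint64_t phys_addr;
	uint32_t len;
	uint8_t  data[8];
};

struct ring {
	volatile uint32_t first;   /* consumer index, shared with the other side */
	volatile uint32_t last;    /* producer index, shared with the other side */
	struct entry slots[RING_MAX];
};

static int ring_push(struct ring *r, uint64_t addr, const void *val, uint32_t len)
{
	/* Read the shared index exactly once (the kernel uses READ_ONCE()). */
	uint32_t insert = r->last;

	/* Reject an out-of-range index the other side may have written, and a
	 * full ring.  Never re-read r->last after this point. */
	if (insert >= RING_MAX ||
	    (r->first - insert - 1) % RING_MAX == 0)
		return -1;

	r->slots[insert].phys_addr = addr;
	r->slots[insert].len = len;
	memcpy(r->slots[insert].data, val,
	       len < sizeof(r->slots[insert].data) ? len : sizeof(r->slots[insert].data));

	/* Publish by advancing the snapshot, not the live shared value. */
	r->last = (insert + 1) % RING_MAX;
	return 0;
}

int main(void)
{
	static struct ring r;
	uint8_t v[4] = { 1, 2, 3, 4 };

	r.last = RING_MAX + 5;     /* simulate a hostile out-of-range index */
	printf("hostile index: %d\n", ring_push(&r, 0x1000, v, sizeof(v)));
	r.last = 0;
	printf("valid index:   %d\n", ring_push(&r, 0x1000, v, sizeof(v)));
	return 0;
}

Re-reading r->last anywhere after the check would reopen the window in which
the other side of the shared mapping can move the index past the end of the
array, which is exactly the out-of-bounds write the patch closes.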