From patchwork Thu Apr  8 18:40:53 2021
X-Patchwork-Submitter: Tim Gardner
X-Patchwork-Id: 1463992
From: Tim Gardner
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 2/2][OEM-5.10] netfilter: x_tables: Use correct memory barriers.
Date: Thu, 8 Apr 2021 12:40:53 -0600
Message-Id: <20210408184053.23263-9-tim.gardner@canonical.com>
In-Reply-To: <20210408184053.23263-1-tim.gardner@canonical.com>
References: <20210408184053.23263-1-tim.gardner@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" kernel-team-bounces@lists.ubuntu.com

From: Mark Tomlinson

CVE-2021-29650

When a new table value was assigned, it was followed by a write memory
barrier. This ensured that all writes before this point would complete
before any writes after this point. However, to determine whether the
rules are unused, the sequence counter is read. To ensure that all
writes have been done before these reads, a full memory barrier is
needed, not just a write memory barrier. The same argument applies when
incrementing the counter, before the rules are read.

Changing to smp_mb() instead of smp_wmb() fixes the kernel panic
reported in cc00bcaa5899 (which is still present), while still
maintaining the same speed of replacing tables.

The smp_mb() barriers potentially slow the packet path; however,
testing has shown no measurable change in performance on a 4-core
MIPS64 platform.

Fixes: 7f5c6d4f665b ("netfilter: get rid of atomic ops in fast path")
Signed-off-by: Mark Tomlinson
Signed-off-by: Pablo Neira Ayuso
(cherry picked from commit 175e476b8cdf2a4de7432583b49c871345e4f8a1)
Signed-off-by: Tim Gardner
---
 include/linux/netfilter/x_tables.h | 2 +-
 net/netfilter/x_tables.c           | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/include/linux/netfilter/x_tables.h b/include/linux/netfilter/x_tables.h
index 5deb099d156d..8ec48466410a 100644
--- a/include/linux/netfilter/x_tables.h
+++ b/include/linux/netfilter/x_tables.h
@@ -376,7 +376,7 @@ static inline unsigned int xt_write_recseq_begin(void)
 	 * since addend is most likely 1
 	 */
 	__this_cpu_add(xt_recseq.sequence, addend);
-	smp_wmb();
+	smp_mb();
 
 	return addend;
 }
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index 7df3aef39c5c..6bd31a7a27fc 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -1389,7 +1389,7 @@ xt_replace_table(struct xt_table *table,
 	table->private = newinfo;
 
 	/* make sure all cpus see new ->private value */
-	smp_wmb();
+	smp_mb();
 
 	/*
 	 * Even though table entries have now been swapped, other CPU's
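
For readers unfamiliar with the ordering problem being fixed, the
stand-alone user-space sketch below mimics the store-then-load pattern
in xt_replace_table(): publish the new ->private pointer, then read the
per-CPU sequence counters to decide whether the old rules are still in
use. It is not the kernel code itself; the structure, names and CPU
count are simplified assumptions for illustration. The point it shows
is that a write-only fence (the C11 analogue of smp_wmb()) only orders
the store against earlier writes, while a full fence (the analogue of
smp_mb()) is needed to order the store before the later loads.

/*
 * Illustrative user-space sketch (not kernel code): why a write-only
 * barrier is not enough when a store must be ordered before a load.
 * Build with: cc -std=c11 -pthread demo.c
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4			/* assumed CPU count for the demo */

struct table_info { int rules; };

static _Atomic(struct table_info *) table_private;
static atomic_uint recseq[NR_CPUS];	/* per-CPU sequence counters */

static struct table_info *replace_table(struct table_info *newinfo)
{
	struct table_info *old =
		atomic_load_explicit(&table_private, memory_order_relaxed);

	/* Publish the new table, as xt_replace_table() does. */
	atomic_store_explicit(&table_private, newinfo, memory_order_relaxed);

	/*
	 * A release fence (analogue of smp_wmb()) would only order the
	 * store above against earlier writes.  A full fence (analogue of
	 * smp_mb()) is required so the store is also ordered before the
	 * sequence-counter loads below.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	/* Wait until no CPU holds an odd (in-progress) sequence number. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		while (atomic_load_explicit(&recseq[cpu],
					    memory_order_relaxed) & 1)
			;	/* spin until that CPU leaves the old rules */
	}
	return old;
}

int main(void)
{
	static struct table_info a = { .rules = 1 }, b = { .rules = 2 };

	atomic_store_explicit(&table_private, &a, memory_order_relaxed);
	struct table_info *old = replace_table(&b);
	printf("replaced table that had %d rule(s)\n", old->rules);
	return 0;
}

The same reasoning applies in the other direction in
xt_write_recseq_begin(): the sequence-counter increment must be visible
before the subsequent reads of the rules, which again needs a full
barrier rather than a write barrier.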