From patchwork Thu Aug 30 12:52:35 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Juerg Haefliger
X-Patchwork-Id: 963857
From: Juerg Haefliger
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Trusty][PATCH v2 3/7] x86/asm: Add pud/pmd mask interfaces to
 handle large PAT bit
Date: Thu, 30 Aug 2018 14:52:35 +0200
Message-Id: <20180830125239.16775-4-juergh@canonical.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20180830125239.16775-1-juergh@canonical.com>
References: <20180822064021.17216-1-juergh@canonical.com>
 <20180830125239.16775-1-juergh@canonical.com>
List-Id: Kernel team discussions
Cc: juergh@canonical.com
Errors-To: kernel-team-bounces@lists.ubuntu.com
Sender: "kernel-team"

From: Toshi Kani

The PAT bit gets relocated to bit 12 when PUD and PMD mappings are
used.
This bit 12, however, is not covered by PTE_FLAGS_MASK, which is used
for masking pfn and flags for all levels.

Add pud/pmd mask interfaces to handle pfn and flags properly by using
P?D_PAGE_MASK when PUD/PMD mappings are used, i.e. when the PSE bit is
set.

Suggested-by: Juergen Gross
Signed-off-by: Toshi Kani
Cc: Andrew Morton
Cc: Juergen Gross
Cc: H. Peter Anvin
Cc: Ingo Molnar
Cc: Borislav Petkov
Cc: Konrad Wilk
Cc: Robert Elliot
Cc: linux-mm@kvack.org
Link: http://lkml.kernel.org/r/1442514264-12475-4-git-send-email-toshi.kani@hpe.com
Signed-off-by: Thomas Gleixner

CVE-2018-3620
CVE-2018-3646

(cherry picked from commit 4be4c1fb9a754b100466ebaec50f825be0b2050b)
Signed-off-by: Juerg Haefliger
---
 arch/x86/include/asm/pgtable_types.h | 36 ++++++++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index a0c024c7478e..a71489cc88c2 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -261,10 +261,10 @@
 
 #include <asm/x86_init.h>
 
-/* PTE_PFN_MASK extracts the PFN from a (pte|pmd|pud|pgd)val_t */
+/* Extracts the PFN from a (pte|pmd|pud|pgd)val_t of a 4KB page */
 #define PTE_PFN_MASK		((pteval_t)PHYSICAL_PAGE_MASK)
 
-/* PTE_FLAGS_MASK extracts the flags from a (pte|pmd|pud|pgd)val_t */
+/* Extracts the flags from a (pte|pmd|pud|pgd)val_t of a 4KB page */
 #define PTE_FLAGS_MASK		(~PTE_PFN_MASK)
 
 typedef struct pgprot { pgprotval_t pgprot; } pgprot_t;
@@ -328,11 +328,43 @@ static inline pmdval_t native_pmd_val(pmd_t pmd)
 }
 #endif
 
+static inline pudval_t pud_pfn_mask(pud_t pud)
+{
+	if (native_pud_val(pud) & _PAGE_PSE)
+		return PUD_PAGE_MASK & PHYSICAL_PAGE_MASK;
+	else
+		return PTE_PFN_MASK;
+}
+
+static inline pudval_t pud_flags_mask(pud_t pud)
+{
+	if (native_pud_val(pud) & _PAGE_PSE)
+		return ~(PUD_PAGE_MASK & (pudval_t)PHYSICAL_PAGE_MASK);
+	else
+		return ~PTE_PFN_MASK;
+}
+
 static inline pudval_t pud_flags(pud_t pud)
 {
 	return native_pud_val(pud) & PTE_FLAGS_MASK;
 }
 
+static inline pmdval_t pmd_pfn_mask(pmd_t pmd)
+{
+	if (native_pmd_val(pmd) & _PAGE_PSE)
+		return PMD_PAGE_MASK & PHYSICAL_PAGE_MASK;
+	else
+		return PTE_PFN_MASK;
+}
+
+static inline pmdval_t pmd_flags_mask(pmd_t pmd)
+{
+	if (native_pmd_val(pmd) & _PAGE_PSE)
+		return ~(PMD_PAGE_MASK & (pmdval_t)PHYSICAL_PAGE_MASK);
+	else
+		return ~PTE_PFN_MASK;
+}
+
 static inline pmdval_t pmd_flags(pmd_t pmd)
 {
 	return native_pmd_val(pmd) & PTE_FLAGS_MASK;