From patchwork Fri Oct 20 16:57:55 2017
X-Patchwork-Submitter: Khalid Aziz
X-Patchwork-Id: 828731
X-Patchwork-Delegate: davem@davemloft.net
From: Khalid Aziz
To: akpm@linux-foundation.org, davem@davemloft.net,
    dave.hansen@linux.intel.com, arnd@arndb.de
Cc: Khalid Aziz, kirill.shutemov@linux.intel.com, mhocko@suse.com,
    jack@suse.cz, ross.zwisler@linux.intel.com,
    aneesh.kumar@linux.vnet.ibm.com, dave.jiang@intel.com,
    willy@infradead.org, hughd@google.com, minchan@kernel.org,
    hannes@cmpxchg.org, shli@fb.com, mingo@kernel.org,
    jmarchan@redhat.com, lstoakes@gmail.com, linux-arch@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    sparclinux@vger.kernel.org
Subject: [PATCH v9 02/10] mm, swap: Add infrastructure for saving page metadata as well on swap
Date: Fri, 20 Oct 2017 10:57:55 -0600
Message-Id: <9152c4afc14316bb1b26158559964f8e8c07ab1a.1508364660.git.khalid.aziz@oracle.com>
X-Mailer: git-send-email 2.11.0
X-Mailing-List: sparclinux@vger.kernel.org

If a processor supports special metadata for a page, for example ADI
version tags on SPARC M7, this metadata must be saved when the page is
swapped out. The same metadata must be restored when the page is
swapped back in. This patch adds two new architecture-specific
functions: arch_do_swap_page(), called when a page is swapped in, and
arch_unmap_one(), called when a page is being unmapped for swap out.
These architecture hooks allow page metadata to be saved if the
architecture supports it.
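To make the contract concrete, below is a minimal sketch of what the
opt-in side could look like in an architecture's asm/pgtable.h. This is
only an illustration, not the actual SPARC code (which would be supplied
by a later patch in this series); adi_save_tags() and adi_restore_tags()
are hypothetical helpers standing in for whatever the hardware needs.
Defining __HAVE_ARCH_DO_SWAP_PAGE / __HAVE_ARCH_UNMAP_ONE suppresses the
generic no-op stubs added by this patch:

	/* In a hypothetical arch/xxx/include/asm/pgtable.h */
	#define __HAVE_ARCH_DO_SWAP_PAGE
	static inline void arch_do_swap_page(struct mm_struct *mm,
					     struct vm_area_struct *vma,
					     unsigned long addr,
					     pte_t pte, pte_t oldpte)
	{
		/* Restore per-page metadata saved at swap-out time. */
		adi_restore_tags(mm, vma, addr, pte);
	}

	#define __HAVE_ARCH_UNMAP_ONE
	static inline int arch_unmap_one(struct mm_struct *mm,
					 struct vm_area_struct *vma,
					 unsigned long addr, pte_t orig_pte)
	{
		/*
		 * Save per-page metadata before the mapping goes away.
		 * Returning < 0 makes try_to_unmap_one() restore the
		 * original PTE and abort the unmap of this page.
		 */
		return adi_save_tags(mm, vma, addr, orig_pte);
	}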
Signed-off-by: Khalid Aziz
Cc: Khalid Aziz
Acked-by: Jerome Marchand
---
v8:
	- Fixed an erroneous "}"

v6:
	- Updated parameter list for arch_do_swap_page() and
	  arch_unmap_one()

v5:
	- Replaced set_swp_pte() function with new architecture
	  functions arch_do_swap_page() and arch_unmap_one()

 include/asm-generic/pgtable.h | 36 ++++++++++++++++++++++++++++++++++++
 mm/memory.c                   |  1 +
 mm/rmap.c                     | 14 ++++++++++++++
 3 files changed, 51 insertions(+)

diff --git a/include/asm-generic/pgtable.h b/include/asm-generic/pgtable.h
index 7dfa767dc680..15668c2470b4 100644
--- a/include/asm-generic/pgtable.h
+++ b/include/asm-generic/pgtable.h
@@ -392,6 +392,42 @@ static inline int pud_same(pud_t pud_a, pud_t pud_b)
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 #endif
 
+#ifndef __HAVE_ARCH_DO_SWAP_PAGE
+/*
+ * Some architectures support metadata associated with a page. When a
+ * page is being swapped out, this metadata must be saved so it can be
+ * restored when the page is swapped back in. SPARC M7 and newer
+ * processors support an ADI (Application Data Integrity) tag for the
+ * page as metadata for the page. arch_do_swap_page() can restore this
+ * metadata when a page is swapped back in.
+ */
+static inline void arch_do_swap_page(struct mm_struct *mm,
+				     struct vm_area_struct *vma,
+				     unsigned long addr,
+				     pte_t pte, pte_t oldpte)
+{
+
+}
+#endif
+
+#ifndef __HAVE_ARCH_UNMAP_ONE
+/*
+ * Some architectures support metadata associated with a page. When a
+ * page is being swapped out, this metadata must be saved so it can be
+ * restored when the page is swapped back in. SPARC M7 and newer
+ * processors support an ADI (Application Data Integrity) tag for the
+ * page as metadata for the page. arch_unmap_one() can save this
+ * metadata on a swap-out of a page.
+ */
+static inline int arch_unmap_one(struct mm_struct *mm,
+				 struct vm_area_struct *vma,
+				 unsigned long addr,
+				 pte_t orig_pte)
+{
+	return 0;
+}
+#endif
+
 #ifndef __HAVE_ARCH_PGD_OFFSET_GATE
 #define pgd_offset_gate(mm, addr)	pgd_offset(mm, addr)
 #endif
diff --git a/mm/memory.c b/mm/memory.c
index 56e48e4593cb..ef0459938e96 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2828,6 +2828,7 @@ int do_swap_page(struct vm_fault *vmf)
 	if (pte_swp_soft_dirty(vmf->orig_pte))
 		pte = pte_mksoft_dirty(pte);
 	set_pte_at(vma->vm_mm, vmf->address, vmf->pte, pte);
+	arch_do_swap_page(vma->vm_mm, vma, vmf->address, pte, vmf->orig_pte);
 	vmf->orig_pte = pte;
 	if (page == swapcache) {
 		do_page_add_anon_rmap(page, vma, vmf->address, exclusive);
diff --git a/mm/rmap.c b/mm/rmap.c
index c570f82e6827..40a436c927c6 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -1448,6 +1448,14 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		    (flags & TTU_MIGRATION)) {
 			swp_entry_t entry;
 			pte_t swp_pte;
+
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
+
 			/*
 			 * Store the pfn of the page in a special migration
 			 * pte. do_swap_page() will wait until the migration
@@ -1498,6 +1506,12 @@ static bool try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 				page_vma_mapped_walk_done(&pvmw);
 				break;
 			}
+			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
+				set_pte_at(mm, address, pvmw.pte, pteval);
+				ret = false;
+				page_vma_mapped_walk_done(&pvmw);
+				break;
+			}
 			if (list_empty(&mm->mmlist)) {
 				spin_lock(&mmlist_lock);
 				if (list_empty(&mm->mmlist))