From patchwork Thu Jun 6 07:20:34 2013
X-Patchwork-Submitter: Andrew Morton
X-Patchwork-Id: 249300
Date: Thu, 6 Jun 2013 00:20:34 -0700
From: Andrew Morton
To: "Aneesh Kumar K.V"
Cc: paulus@samba.org, linuxppc-dev@lists.ozlabs.org
Subject: Re: [PATCH -V10 00/15] THP support for PPC64
Message-Id: <20130606002034.2b442e8a.akpm@linux-foundation.org>
In-Reply-To: <8738svyds4.fsf@linux.vnet.ibm.com>
References: <1370446119-8837-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
 <1370475066.3766.249.camel@pasglop>
 <20130605171310.065c6fe2a3313c69bcfa0fc8@linux-foundation.org>
 <8738svyds4.fsf@linux.vnet.ibm.com>
X-Mailer: Sylpheed 2.7.1 (GTK+ 2.18.9; x86_64-redhat-linux-gnu)

On Thu, 06 Jun 2013 11:35:47 +0530 "Aneesh Kumar K.V" wrote:

> Andrew Morton writes:
>
> > On Thu, 06 Jun 2013 09:31:06 +1000 Benjamin Herrenschmidt wrote:
> >
> >> On Wed, 2013-06-05 at 20:58 +0530, Aneesh Kumar K.V wrote:
> >> >
> >> > This is the second patchset needed to support THP on ppc64.  Some of the
> >> > changes included in this series are tricky in that they subtly change the
> >> > powerpc Linux page table walk.  We also overload a few of the pte flags for
> >> > ptes at the PMD level (huge page PTEs).
> >> >
> >> > The related mm/ changes are already merged to Andrew's -mm tree.
> >>
> >> If I am to put that into powerpc-next, I need the dependent mm/ changes as well.
> >>
> >> Do you have them in the form of a separate git tree that is *exactly* (same SHA1s)
> >> what is expected to go upstream via Andrew ?
> >>
> >> Andrew, are they fully acked on your side and ready to go ?
> >
> > Not being on linuxppc-dev I'm at a bit of a loss here.
> >
> > I assume we're referring to
> >
> > mm-thp-add-pmd-args-to-pgtable-deposit-and-withdraw-apis.patch
> > mm-thp-withdraw-the-pgtable-after-pmdp-related-operations.patch
> > mm-thp-withdraw-the-pgtable-after-pmdp-related-operations-fix.patch
> > mm-thp-dont-use-hpage_shift-in-transparent-hugepage-code.patch
> > mm-thp-deposit-the-transpare-huge-pgtable-before-set_pmd.patch
> >
>
> There is one more:
>
> mm/THP: Use the right function when updating access flags

mm-thp-use-the-right-function-when-updating-access-flags.patch

Hereunder.  This actually precedes the above four(+fix) patches.


From: "Aneesh Kumar K.V"
Subject: mm/thp: use the correct function when updating access flags

We should use pmdp_set_access_flags to update access flags.  Archs like
powerpc perform extra checks (_PAGE_BUSY) when updating a hugepage PTE,
which a plain set_pmd_at does not do.
set_pmd_at should only be used when installing a hugepage PTE over an
entry that is currently none.

Signed-off-by: Aneesh Kumar K.V
Cc: Andrea Arcangeli
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |    4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff -puN mm/huge_memory.c~mm-thp-use-the-right-function-when-updating-access-flags mm/huge_memory.c
--- a/mm/huge_memory.c~mm-thp-use-the-right-function-when-updating-access-flags
+++ a/mm/huge_memory.c
@@ -1265,7 +1265,9 @@ struct page *follow_trans_huge_pmd(struc
 		 * young bit, instead of the current set_pmd_at.
 		 */
 		_pmd = pmd_mkyoung(pmd_mkdirty(*pmd));
-		set_pmd_at(mm, addr & HPAGE_PMD_MASK, pmd, _pmd);
+		if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK,
+					  pmd, _pmd, 1))
+			update_mmu_cache_pmd(vma, addr, pmd);
 	}
 	if ((flags & FOLL_MLOCK) && (vma->vm_flags & VM_LOCKED)) {
 		if (page->mapping && trylock_page(page)) {
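
(For readers not familiar with these interfaces, here is the same idea in
isolation: a minimal sketch of the call pattern the hunk above switches to.
The helper name touch_huge_pmd and the includes are illustrative only and
not part of the patch; the pmdp_set_access_flags()/update_mmu_cache_pmd()
calls and arguments are the ones used in the hunk.)

#include <linux/mm.h>
#include <linux/huge_mm.h>

/*
 * Illustrative sketch only (not part of the patch): update the
 * young/dirty bits on an existing transparent huge page mapping.
 * pmdp_set_access_flags() lets the architecture run its own update
 * protocol (e.g. powerpc's _PAGE_BUSY handling) and returns non-zero
 * when the entry actually changed, in which case the MMU caches are
 * told about the new value via update_mmu_cache_pmd().
 */
static void touch_huge_pmd(struct vm_area_struct *vma,
			   unsigned long addr, pmd_t *pmd)
{
	pmd_t entry = pmd_mkyoung(pmd_mkdirty(*pmd));

	if (pmdp_set_access_flags(vma, addr & HPAGE_PMD_MASK, pmd, entry, 1))
		update_mmu_cache_pmd(vma, addr, pmd);
}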