From patchwork Tue May 16 09:24:51 2017
X-Patchwork-Submitter: "Aneesh Kumar K.V"
X-Patchwork-Id: 762863
From: "Aneesh Kumar K.V"
To: benh@kernel.crashing.org, paulus@samba.org, mpe@ellerman.id.au
Subject: [PATCH] powerpc/mm/hugetlb: Add support for reserving gigantic huge pages via kernel command line
Date: Tue, 16 May 2017 14:54:51 +0530
Message-Id: <1494926691-24664-1-git-send-email-aneesh.kumar@linux.vnet.ibm.com>
X-Mailer: git-send-email 2.7.4
List-Id: Linux on PowerPC Developers Mail List
Cc: linuxppc-dev@lists.ozlabs.org, "Aneesh Kumar K.V"

Use the kernel command line to reserve gigantic hugetlb pages. The code
duplication here is mostly to keep the implementation simple. On 64-bit
book3s we need to support a single gigantic hugepage size, either 16G or
1G, whereas the FSL_BOOK3E implementation needs to support multiple
gigantic hugepage sizes. Hence we avoid the gpage_npages array and use a
single gpage_npages count on ppc64. We also cannot use the generic code
for the gigantic page allocation, because that would require a
conditional to handle the pseries case, where the memory is already
reserved by the hypervisor. To keep things simple, book3s 64 implements
its own version that also works with pseries.

Signed-off-by: Aneesh Kumar K.V
---
 arch/powerpc/include/asm/hugetlb.h |  8 +---
 arch/powerpc/mm/hugetlbpage.c      | 78 ++++++++++++++++++++++++++++++++++++++
 2 files changed, 79 insertions(+), 7 deletions(-)

diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 7f4025a6c69e..03401a17d1da 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -218,13 +218,7 @@ static inline pte_t *hugepte_offset(hugepd_t hpd, unsigned long addr,
 }
 #endif /* CONFIG_HUGETLB_PAGE */
 
-/*
- * FSL Book3E platforms require special gpage handling - the gpages
- * are reserved early in the boot process by memblock instead of via
- * the .dts as on IBM platforms.
- */
-#if defined(CONFIG_HUGETLB_PAGE) && (defined(CONFIG_PPC_FSL_BOOK3E) || \
-	defined(CONFIG_PPC_8xx))
+#ifdef CONFIG_HUGETLB_PAGE
 extern void __init reserve_hugetlb_gpages(void);
 #else
 static inline void reserve_hugetlb_gpages(void)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index 1816b965a142..4ebaa18f2495 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -19,6 +19,7 @@
 #include
 #include
 #include
+#include <linux/string_helpers.h>
 #include
 #include
 #include
@@ -373,6 +374,83 @@ int alloc_bootmem_huge_page(struct hstate *hstate)
 	m->hstate = hstate;
 	return 1;
 }
+
+static unsigned long gpage_npages;
+static int __init do_gpage_early_setup(char *param, char *val,
+				       const char *unused, void *arg)
+{
+	unsigned long npages;
+	static unsigned long size;
+	unsigned long gpage_size = 1UL << 34;
+
+	if (radix_enabled())
+		gpage_size = 1UL << 30;
+
+	/*
+	 * The hugepagesz and hugepages cmdline options are interleaved. We
+	 * use the size variable to keep track of whether or not this was done
+	 * properly and skip over instances where it is incorrect. Other
+	 * command-line parsing code will issue warnings, so we don't need to.
+	 */
+	if ((strcmp(param, "default_hugepagesz") == 0) ||
+	    (strcmp(param, "hugepagesz") == 0)) {
+		size = memparse(val, NULL);
+		/*
+		 * We only want to handle the gigantic huge page size here.
+		 */
+		if (size != gpage_size)
+			size = 0;
+	} else if (strcmp(param, "hugepages") == 0) {
+		if (size != 0) {
+			if (sscanf(val, "%lu", &npages) <= 0)
+				npages = 0;
+			if (npages > MAX_NUMBER_GPAGES) {
+				pr_warn("MMU: %lu gigantic pages requested, limiting to %d pages\n",
+					npages, MAX_NUMBER_GPAGES);
+				npages = MAX_NUMBER_GPAGES;
+			}
+			gpage_npages = npages;
+			size = 0;
+		}
+	}
+	return 0;
+}
+
+/*
+ * This will just do the necessary memblock reservations. Everything else
+ * is done by core code, based on kernel command line parsing.
+ */
+void __init reserve_hugetlb_gpages(void)
+{
+	char buf[10];
+	phys_addr_t base;
+	unsigned long gpage_size = 1UL << 34;
+	static __initdata char cmdline[COMMAND_LINE_SIZE];
+
+	if (radix_enabled())
+		gpage_size = 1UL << 30;
+
+	strlcpy(cmdline, boot_command_line, COMMAND_LINE_SIZE);
+	parse_args("hugetlb gpages", cmdline, NULL, 0, 0, 0,
+		   NULL, &do_gpage_early_setup);
+
+	if (!gpage_npages)
+		return;
+
+	string_get_size(gpage_size, 1, STRING_UNITS_2, buf, sizeof(buf));
+	pr_info("Trying to reserve %lu %s pages\n", gpage_npages, buf);
+
+	/* Allocate one page at a time */
+	while (gpage_npages) {
+		base = memblock_alloc_base(gpage_size, gpage_size,
+					   MEMBLOCK_ALLOC_ANYWHERE);
+		add_gpage(base, gpage_size, 1);
+		gpage_npages--;
+	}
+}
+
 #endif
 
 #if defined(CONFIG_PPC_FSL_BOOK3E) || defined(CONFIG_PPC_8xx)
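For readers following the interleaved hugepagesz=/hugepages= handling in
do_gpage_early_setup() above, here is a rough standalone sketch of that
parsing in Python. This is an illustrative model only, not kernel code:
the function names, the simplified memparse(), and the MAX_NUMBER_GPAGES
value are assumptions made for the example.

```python
# Sketch of the interleaved hugepagesz=/hugepages= command-line parsing.
# Illustrative model of do_gpage_early_setup(); not kernel code.

GPAGE_SIZE = 1 << 34       # 16GB gigantic page (hash); the patch uses 1GB under radix
MAX_NUMBER_GPAGES = 128    # illustrative cap, stands in for the kernel constant

def parse_size(val):
    """Minimal memparse(): a trailing K/M/G suffix scales the number."""
    units = {'K': 1 << 10, 'M': 1 << 20, 'G': 1 << 30}
    if val and val[-1].upper() in units:
        return int(val[:-1]) * units[val[-1].upper()]
    return int(val)

def count_gpages(cmdline, gpage_size=GPAGE_SIZE):
    """Return the number of gigantic pages requested on the command line.

    `size` tracks whether the most recent hugepagesz=/default_hugepagesz=
    matched the gigantic page size; a following hugepages= is only
    honoured when it did, mirroring the state machine in the patch.
    """
    npages = 0
    size = 0
    for token in cmdline.split():
        if '=' not in token:
            continue
        param, val = token.split('=', 1)
        if param in ('default_hugepagesz', 'hugepagesz'):
            size = parse_size(val)
            if size != gpage_size:
                size = 0
        elif param == 'hugepages' and size != 0:
            npages = min(int(val), MAX_NUMBER_GPAGES)
            size = 0
    return npages

# count_gpages("hugepagesz=16G hugepages=2")                          -> 2
# count_gpages("hugepagesz=2M hugepages=64 hugepagesz=16G hugepages=4") -> 4
# count_gpages("hugepages=8")  # no matching hugepagesz= precedes it  -> 0
```

So a boot line like `hugepagesz=16G hugepages=2` reserves two 16GB pages on
hash, while the 2M request in the second example is left to the generic
hugetlb code and only the 16G count is picked up here.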