From patchwork Tue May 21 09:51:40 2019
X-Patchwork-Submitter: Yang Xu
X-Patchwork-Id: 1102646
From: Yang Xu <xuyang2018.jy@cn.fujitsu.com>
Date: Tue, 21 May 2019 17:51:40 +0800
Message-ID: <1558432300-2269-1-git-send-email-xuyang2018.jy@cn.fujitsu.com>
In-Reply-To: <20190520144145.GC28976@rei.lan>
References: <20190520144145.GC28976@rei.lan>
Cc: ltp@lists.linux.it
Subject: [LTP] [PATCH v3] syscalls/move_pages12: Add new regression test

The bug has been fixed by kernel commit c9d398fa2378 ("mm, hugetlb: use
pte_present() instead of pmd_present() in follow_huge_pmd()").
Signed-off-by: Yang Xu
Signed-off-by: Xiao Yang
---
 .../kernel/syscalls/move_pages/move_pages12.c | 104 ++++++++++++------
 1 file changed, 70 insertions(+), 34 deletions(-)

diff --git a/testcases/kernel/syscalls/move_pages/move_pages12.c b/testcases/kernel/syscalls/move_pages/move_pages12.c
index 04fda8bef..104467565 100644
--- a/testcases/kernel/syscalls/move_pages/move_pages12.c
+++ b/testcases/kernel/syscalls/move_pages/move_pages12.c
@@ -1,35 +1,41 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
 /*
- * Copyright (c) 2016 Fujitsu Ltd.
+ * Copyright (c) 2019 FUJITSU LIMITED. All rights reserved.
  * Author: Naoya Horiguchi
  * Ported: Guangwen Feng
- *
- * This program is free software: you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation, either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program, if not, see .
+ * Ported: Xiao Yang
+ * Ported: Yang Xu
  */
 
 /*
- * This is a regression test for the race condition between move_pages()
- * and freeing hugepages, where move_pages() calls follow_page(FOLL_GET)
- * for hugepages internally and tries to get its refcount without
- * preventing concurrent freeing.
+ * Description:
  *
- * This test can crash the buggy kernel, and the bug was fixed in:
+ * Test #1:
+ * This is a regression test for the race condition between move_pages()
+ * and freeing hugepages, where move_pages() calls follow_page(FOLL_GET)
+ * for hugepages internally and tries to get its refcount without
+ * preventing concurrent freeing.
  *
- * commit e66f17ff71772b209eed39de35aaa99ba819c93d
- * Author: Naoya Horiguchi
- * Date: Wed Feb 11 15:25:22 2015 -0800
+ * This test can crash the buggy kernel, and the bug was fixed in:
+ *
+ * commit e66f17ff71772b209eed39de35aaa99ba819c93d
+ * Author: Naoya Horiguchi
+ * Date: Wed Feb 11 15:25:22 2015 -0800
+ *
+ * mm/hugetlb: take page table lock in follow_huge_pmd()
+ *
+ * Test #2:
+ * This is a regression test for the race condition, where move_pages()
+ * and soft offline are called on a single hugetlb page concurrently.
+ *
+ * This bug can crash the buggy kernel, and was fixed by:
  *
- * mm/hugetlb: take page table lock in follow_huge_pmd()
+ * commit c9d398fa237882ea07167e23bcfc5e6847066518
+ * Author: Naoya Horiguchi
+ * Date: Fri Mar 31 15:11:55 2017 -0700
+ *
+ * mm, hugetlb: use pte_present() instead of pmd_present() in
+ * follow_huge_pmd()
  */
 
 #include
@@ -49,9 +55,16 @@
 #define PATH_MEMINFO		"/proc/meminfo"
 #define PATH_NR_HUGEPAGES	"/proc/sys/vm/nr_hugepages"
 #define PATH_HUGEPAGES		"/sys/kernel/mm/hugepages/"
-#define TEST_PAGES		2
 #define TEST_NODES		2
 
+static struct tcase {
+	int tpages;
+	int offline;
+} tcases[] = {
+	{2, 0},
+	{2, 1},
+};
+
 static int pgsz, hpsz;
 static long orig_hugepages = -1;
 static char path_hugepages_node1[PATH_MAX];
@@ -61,9 +74,19 @@ static long orig_hugepages_node2 = -1;
 static unsigned int node1, node2;
 static void *addr;
 
-static void do_child(void)
+static int do_soft_offline(int tpgs)
+{
+	if (madvise(addr, tpgs * hpsz, MADV_SOFT_OFFLINE) == -1) {
+		if (errno != EINVAL)
+			tst_res(TFAIL | TTERRNO, "madvise failed");
+		return errno;
+	}
+	return 0;
+}
+
+static void do_child(int tpgs)
 {
-	int test_pages = TEST_PAGES * hpsz / pgsz;
+	int test_pages = tpgs * hpsz / pgsz;
 	int i, j;
 	int *nodes, *status;
 	void **pages;
@@ -96,34 +119,46 @@ static void do_child(void)
 	exit(0);
 }
 
-static void do_test(void)
+static void do_test(unsigned int n)
 {
 	int i;
 	pid_t cpid = -1;
 	int status;
 	unsigned int twenty_percent = (tst_timeout_remaining() / 5);
 
-	addr = SAFE_MMAP(NULL, TEST_PAGES * hpsz, PROT_READ | PROT_WRITE,
+	addr = SAFE_MMAP(NULL, tcases[n].tpages * hpsz, PROT_READ | PROT_WRITE,
 		MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
 
-	SAFE_MUNMAP(addr, TEST_PAGES * hpsz);
+	SAFE_MUNMAP(addr, tcases[n].tpages * hpsz);
 
 	cpid = SAFE_FORK();
 	if (cpid == 0)
-		do_child();
+		do_child(tcases[n].tpages);
 
 	for (i = 0; i < LOOPS; i++) {
 		void *ptr;
 
-		ptr = SAFE_MMAP(NULL, TEST_PAGES * hpsz,
+		ptr = SAFE_MMAP(NULL, tcases[n].tpages * hpsz,
 			PROT_READ | PROT_WRITE,
 			MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
 		if (ptr != addr)
			tst_brk(TBROK, "Failed to mmap at desired addr");
 
-		memset(addr, 0, TEST_PAGES * hpsz);
+		memset(addr, 0, tcases[n].tpages * hpsz);
+
+		if (tcases[n].offline) {
+			if (do_soft_offline(tcases[n].tpages) == EINVAL) {
+				SAFE_KILL(cpid, SIGKILL);
+				SAFE_WAITPID(cpid, &status, 0);
+				SAFE_MUNMAP(addr, tcases[n].tpages * hpsz);
+				tst_res(TFAIL,
+					"madvise() didn't support "
+					"MADV_SOFT_OFFLINE");
+				return;
+			}
+		}
 
-		SAFE_MUNMAP(addr, TEST_PAGES * hpsz);
+		SAFE_MUNMAP(addr, tcases[n].tpages * hpsz);
 
 		if (tst_timeout_remaining() < twenty_percent)
 			break;
@@ -266,7 +301,8 @@ static struct tst_test test = {
 	.forks_child = 1,
 	.setup = setup,
 	.cleanup = cleanup,
-	.test_all = do_test,
+	.test = do_test,
+	.tcnt = ARRAY_SIZE(tcases),
 };
 
 #else
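For reviewers who want to reproduce the test #2 scenario outside of the LTP
harness, below is a rough standalone sketch of the same race: the parent
repeatedly recreates a hugetlb mapping and soft-offlines it while the child
keeps calling move_pages() on the same range. It is not part of this patch,
and it hard-codes things the LTP test detects at runtime: a 2MB huge page
size, a 4KB base page size, two NUMA nodes (0 and 1), the loop count, and
MAP_FIXED instead of checking the returned address. It needs root
(MADV_SOFT_OFFLINE and MPOL_MF_MOVE_ALL are privileged) and a few huge pages
reserved via /proc/sys/vm/nr_hugepages.

/*
 * Standalone sketch of the test #2 race, NOT part of the patch.
 * Assumptions: 2MB huge pages, 4KB base pages, NUMA nodes 0 and 1
 * with free huge pages, run as root.
 */
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/syscall.h>
#include <sys/wait.h>
#include <unistd.h>
#include <linux/mempolicy.h>

#ifndef MADV_SOFT_OFFLINE
# define MADV_SOFT_OFFLINE 101
#endif

#define HPSZ	(2UL * 1024 * 1024)	/* assumed huge page size */
#define PGSZ	4096UL			/* assumed base page size */
#define LOOPS	1000

static void child(void *addr, size_t len)
{
	unsigned long npages = len / PGSZ;
	void **pages = calloc(npages, sizeof(*pages));
	int *nodes = calloc(npages, sizeof(*nodes));
	int *status = calloc(npages, sizeof(*status));
	unsigned long i;

	for (i = 0; i < npages; i++) {
		pages[i] = (char *)addr + i * PGSZ;
		nodes[i] = i % 2;	/* bounce pages between node 0 and 1 */
	}

	/* Hammer move_pages() on the range until the parent kills us. */
	for (;;)
		syscall(SYS_move_pages, 0, npages, pages, nodes, status,
			MPOL_MF_MOVE_ALL);
}

int main(void)
{
	size_t len = 2 * HPSZ;
	int i;

	/* Reserve a hugetlb range so parent and child use the same address. */
	void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (addr == MAP_FAILED) {
		perror("mmap(MAP_HUGETLB)");
		return 1;
	}
	munmap(addr, len);

	pid_t pid = fork();
	if (pid == 0)
		child(addr, len);

	for (i = 0; i < LOOPS; i++) {
		void *p = mmap(addr, len, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB |
			       MAP_FIXED, -1, 0);
		if (p == MAP_FAILED)
			break;
		memset(p, 0, len);

		/* Soft-offline the huge pages while the child migrates them;
		 * EINVAL means the kernel lacks MADV_SOFT_OFFLINE support. */
		if (madvise(p, len, MADV_SOFT_OFFLINE) == -1 && errno == EINVAL)
			fprintf(stderr, "MADV_SOFT_OFFLINE not supported\n");

		munmap(p, len);
	}

	kill(pid, SIGKILL);
	waitpid(pid, NULL, 0);
	return 0;
}

Since it calls the raw move_pages syscall it builds with plain gcc and no
libnuma; on an unfixed kernel the concurrent soft offline and migration of
the same hugetlb page can oops, which is what the new tcase[1] exercises.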