From patchwork Thu May 7 14:59:53 2020
X-Patchwork-Submitter: Ira Weiny
X-Patchwork-Id: 1285433
From: ira.weiny@intel.com
To: linux-kernel@vger.kernel.org, Andrew Morton
Subject: [PATCH V3 05/15] {x86,powerpc,microblaze}/kmap: Move preempt disable
Date: Thu, 7 May 2020 07:59:53 -0700
Message-Id: <20200507150004.1423069-6-ira.weiny@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200507150004.1423069-1-ira.weiny@intel.com>
References: <20200507150004.1423069-1-ira.weiny@intel.com>
Cc: Peter Zijlstra, Benjamin Herrenschmidt, Dave Hansen, dri-devel@lists.freedesktop.org,
    "James E.J. Bottomley", Max Filippov, Paul Mackerras, "H. Peter Anvin",
    sparclinux@vger.kernel.org, Ira Weiny, Thomas Gleixner, Helge Deller, x86@kernel.org,
    linux-csky@vger.kernel.org, Christoph Hellwig, Ingo Molnar,
    linux-snps-arc@lists.infradead.org, linux-xtensa@linux-xtensa.org, Borislav Petkov,
    Al Viro, Andy Lutomirski, Dan Williams, linux-arm-kernel@lists.infradead.org,
    Chris Zankel, Thomas Bogendoerfer, linux-parisc@vger.kernel.org,
    linux-mips@vger.kernel.org, Christian Koenig, linuxppc-dev@lists.ozlabs.org,
    "David S. Miller"

From: Ira Weiny

During this kmap() conversion series we must maintain bisect-ability. To do
this, kmap_atomic_prot() in x86, powerpc, and microblaze needs to remain
functional.

Create a temporary inline version of kmap_atomic_prot() within these
architectures so we can rework their kmap_atomic() calls and then lift
kmap_atomic_prot() to the core.
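For readers following the series, the temporary wrapper each architecture gains
is restated below. This is an editorial sketch taken from the header hunks in
this patch; the comments are added here and are not part of the diff:

	extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);

	static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
	{
		/* Entry work that is identical on every architecture. */
		preempt_disable();
		pagefault_disable();

		/* Lowmem pages are permanently mapped; no fixmap slot is needed. */
		if (!PageHighMem(page))
			return page_address(page);

		/* Only the highmem mapping itself stays arch-specific. */
		return kmap_atomic_high_prot(page, prot);
	}

Existing kmap_atomic_prot() callers keep working unchanged while the highmem
path is renamed, so each step of the series remains bisectable.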
Reviewed-by: Christoph Hellwig
Suggested-by: Al Viro
Signed-off-by: Ira Weiny
---
Changes from V2:
	Fix microblaze not being static inline

Changes from V1:
	New patch
---
 arch/microblaze/include/asm/highmem.h | 11 ++++++++++-
 arch/microblaze/mm/highmem.c          | 10 ++--------
 arch/powerpc/include/asm/highmem.h    | 11 ++++++++++-
 arch/powerpc/mm/highmem.c             |  9 ++-------
 arch/x86/include/asm/highmem.h        | 11 ++++++++++-
 arch/x86/mm/highmem_32.c              | 10 ++--------
 6 files changed, 36 insertions(+), 26 deletions(-)

diff --git a/arch/microblaze/include/asm/highmem.h b/arch/microblaze/include/asm/highmem.h
index 0c94046f2d58..c38d920a1171 100644
--- a/arch/microblaze/include/asm/highmem.h
+++ b/arch/microblaze/include/asm/highmem.h
@@ -51,7 +51,16 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt - PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+	preempt_disable();
+	pagefault_disable();
+	if (!PageHighMem(page))
+		return page_address(page);
+
+	return kmap_atomic_high_prot(page, prot);
+}
 extern void __kunmap_atomic(void *kvaddr);
 
 static inline void *kmap_atomic(struct page *page)
diff --git a/arch/microblaze/mm/highmem.c b/arch/microblaze/mm/highmem.c
index d7569f77fa15..0e3efaa8a004 100644
--- a/arch/microblaze/mm/highmem.c
+++ b/arch/microblaze/mm/highmem.c
@@ -32,18 +32,12 @@
  */
 #include
 
-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
 
 	unsigned long vaddr;
 	int idx, type;
 
-	preempt_disable();
-	pagefault_disable();
-	if (!PageHighMem(page))
-		return page_address(page);
-
-
 	type = kmap_atomic_idx_push();
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -55,7 +49,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 
 	return (void *) vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void __kunmap_atomic(void *kvaddr)
 {
diff --git a/arch/powerpc/include/asm/highmem.h b/arch/powerpc/include/asm/highmem.h
index ba3371977d49..d049806a8354 100644
--- a/arch/powerpc/include/asm/highmem.h
+++ b/arch/powerpc/include/asm/highmem.h
@@ -59,7 +59,16 @@ extern pte_t *pkmap_page_table;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-extern void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+	preempt_disable();
+	pagefault_disable();
+	if (!PageHighMem(page))
+		return page_address(page);
+
+	return kmap_atomic_high_prot(page, prot);
+}
 extern void __kunmap_atomic(void *kvaddr);
 
 static inline void *kmap_atomic(struct page *page)
diff --git a/arch/powerpc/mm/highmem.c b/arch/powerpc/mm/highmem.c
index 320c1672b2ae..f075cef6d663 100644
--- a/arch/powerpc/mm/highmem.c
+++ b/arch/powerpc/mm/highmem.c
@@ -30,16 +30,11 @@
  * be used in IRQ contexts, so in some (very limited) cases we need
  * it.
  */
-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
 	unsigned long vaddr;
 	int idx, type;
 
-	preempt_disable();
-	pagefault_disable();
-	if (!PageHighMem(page))
-		return page_address(page);
-
 	type = kmap_atomic_idx_push();
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -49,7 +44,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 
 	return (void*) vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void __kunmap_atomic(void *kvaddr)
 {
diff --git a/arch/x86/include/asm/highmem.h b/arch/x86/include/asm/highmem.h
index 90b96594d6c5..61f47fef40e5 100644
--- a/arch/x86/include/asm/highmem.h
+++ b/arch/x86/include/asm/highmem.h
@@ -58,7 +58,16 @@ extern unsigned long highstart_pfn, highend_pfn;
 #define PKMAP_NR(virt)  ((virt-PKMAP_BASE) >> PAGE_SHIFT)
 #define PKMAP_ADDR(nr)  (PKMAP_BASE + ((nr) << PAGE_SHIFT))
 
-void *kmap_atomic_prot(struct page *page, pgprot_t prot);
+extern void *kmap_atomic_high_prot(struct page *page, pgprot_t prot);
+static inline void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+{
+	preempt_disable();
+	pagefault_disable();
+	if (!PageHighMem(page))
+		return page_address(page);
+
+	return kmap_atomic_high_prot(page, prot);
+}
 void *kmap_atomic(struct page *page);
 void __kunmap_atomic(void *kvaddr);
 void *kmap_atomic_pfn(unsigned long pfn);
diff --git a/arch/x86/mm/highmem_32.c b/arch/x86/mm/highmem_32.c
index c4ebfd0ae401..48b56b1af902 100644
--- a/arch/x86/mm/highmem_32.c
+++ b/arch/x86/mm/highmem_32.c
@@ -12,17 +12,11 @@
  * However when holding an atomic kmap it is not legal to sleep, so atomic
  * kmaps are appropriate for short, tight code paths only.
  */
-void *kmap_atomic_prot(struct page *page, pgprot_t prot)
+void *kmap_atomic_high_prot(struct page *page, pgprot_t prot)
 {
 	unsigned long vaddr;
 	int idx, type;
 
-	preempt_disable();
-	pagefault_disable();
-
-	if (!PageHighMem(page))
-		return page_address(page);
-
 	type = kmap_atomic_idx_push();
 	idx = type + KM_TYPE_NR*smp_processor_id();
 	vaddr = __fix_to_virt(FIX_KMAP_BEGIN + idx);
@@ -32,7 +26,7 @@ void *kmap_atomic_prot(struct page *page, pgprot_t prot)
 
 	return (void *)vaddr;
 }
-EXPORT_SYMBOL(kmap_atomic_prot);
+EXPORT_SYMBOL(kmap_atomic_high_prot);
 
 void *kmap_atomic(struct page *page)
 {