From patchwork Thu Jun 15 08:43:52 2017
X-Patchwork-Submitter: Noam Camus
X-Patchwork-Id: 776187
From: Noam Camus
To: linux-snps-arc@lists.infradead.org
Cc: Noam Camus, linux-kernel@vger.kernel.org
Subject: [PATCH v3 02/11] ARC: send ipi to all cpus sharing task mm in case of page fault
Date: Thu, 15 Jun 2017 11:43:52 +0300
Message-Id: <1497516241-16446-3-git-send-email-noamca@mellanox.com>
In-Reply-To: <1497516241-16446-1-git-send-email-noamca@mellanox.com>
References: <1497516241-16446-1-git-send-email-noamca@mellanox.com>
List-Id: Linux on Synopsys ARC Processors

This patch addresses a performance issue.
The use case is a page fault on a page whose mm is active on more than the local CPU. Broadcasting the icache invalidate to all CPUs degrades performance, so we avoid that by sending the IPI only to the relevant CPUs, i.e. those sharing the faulting task's mm.

Signed-off-by: Noam Camus
Reviewed-by: Alexey Brodkin
---
 arch/arc/include/asm/cacheflush.h |  3 ++-
 arch/arc/mm/cache.c               | 12 ++++++++++--
 arch/arc/mm/tlb.c                 |  2 +-
 3 files changed, 13 insertions(+), 4 deletions(-)

diff --git a/arch/arc/include/asm/cacheflush.h b/arch/arc/include/asm/cacheflush.h
index fc662f4..716dba1 100644
--- a/arch/arc/include/asm/cacheflush.h
+++ b/arch/arc/include/asm/cacheflush.h
@@ -33,7 +33,8 @@
 void flush_icache_range(unsigned long kstart, unsigned long kend);
 void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len);
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr);
+void __inv_icache_page(struct vm_area_struct *vma,
+		       phys_addr_t paddr, unsigned long vaddr);
 void __flush_dcache_page(phys_addr_t paddr, unsigned long vaddr);
 
 #define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
diff --git a/arch/arc/mm/cache.c b/arch/arc/mm/cache.c
index bdb5227..bfad0fa 100644
--- a/arch/arc/mm/cache.c
+++ b/arch/arc/mm/cache.c
@@ -934,9 +934,17 @@ void __sync_icache_dcache(phys_addr_t paddr, unsigned long vaddr, int len)
 }
 
 /* wrapper to compile time eliminate alignment checks in flush loop */
-void __inv_icache_page(phys_addr_t paddr, unsigned long vaddr)
+void __inv_icache_page(struct vm_area_struct *vma,
+		       phys_addr_t paddr, unsigned long vaddr)
 {
-	__ic_line_inv_vaddr(paddr, vaddr, PAGE_SIZE);
+	struct ic_inv_args ic_inv = {
+		.paddr = paddr,
+		.vaddr = vaddr,
+		.sz = PAGE_SIZE
+	};
+
+	on_each_cpu_mask(mm_cpumask(vma->vm_mm),
+			 __ic_line_inv_vaddr_helper, &ic_inv, 1);
 }
 
 /*
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 2b6da60..e298da9 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -626,7 +626,7 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long vaddr_unaligned,
 		/* invalidate any existing icache lines (U-mapping) */
 		if (vma->vm_flags & VM_EXEC)
-			__inv_icache_page(paddr, vaddr);
+			__inv_icache_page(vma, paddr, vaddr);
 	}
 }
 }