From patchwork Sun Aug 7 17:43:00 2016
X-Patchwork-Submitter: Stefan Agner
X-Patchwork-Id: 656474
X-Patchwork-Delegate: trini@ti.com
From: Stefan Agner
To: u-boot@lists.denx.de, sjg@chromium.org
Date:
Sun, 7 Aug 2016 10:43:00 -0700
Message-Id: <20160807174301.23482-1-stefan@agner.ch>
X-Mailer: git-send-email 2.9.0
Cc: Marek Vasut, Stefan Agner, Marcel Ziswiler, Max Krummenacher
Subject: [U-Boot] [PATCH v4] arm: cache: always flush cache line size for page table

From: Stefan Agner

The page table is maintained by the CPU, hence it is safe to always
align the cache flush to whole cache lines. This allows
mmu_page_table_flush to be used for a single page table, e.g. when
configuring only small regions through
mmu_set_region_dcache_behaviour.
Signed-off-by: Stefan Agner
Tested-by: Fabio Estevam
Reviewed-by: Simon Glass
Reviewed-by: Heiko Schocher
---
Changes in v4:
- Fixed spelling mistake for real

Changes in v3:
- Fixed spelling mistake

Changes in v2:
- Move cache line alignment from mmu_page_table_flush to
  mmu_set_region_dcache_behaviour

 arch/arm/lib/cache-cp15.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm/lib/cache-cp15.c b/arch/arm/lib/cache-cp15.c
index 1121dc3..488094e 100644
--- a/arch/arm/lib/cache-cp15.c
+++ b/arch/arm/lib/cache-cp15.c
@@ -62,6 +62,7 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 				     enum dcache_option option)
 {
 	u32 *page_table = (u32 *)gd->arch.tlb_addr;
+	phys_addr_t startpt, stoppt;
 	unsigned long upto, end;
 
 	end = ALIGN(start + size, MMU_SECTION_SIZE) >> MMU_SECTION_SHIFT;
@@ -70,7 +71,17 @@ void mmu_set_region_dcache_behaviour(phys_addr_t start, size_t size,
 	      option);
 	for (upto = start; upto < end; upto++)
 		set_section_dcache(upto, option);
-	mmu_page_table_flush((u32)&page_table[start], (u32)&page_table[end]);
+
+	/*
+	 * Make sure range is cache line aligned
+	 * Only CPU maintains page tables, hence it is safe to always
+	 * flush complete cache lines...
+	 */
+	startpt = (phys_addr_t)&page_table[start];
+	startpt &= ~(CONFIG_SYS_CACHELINE_SIZE - 1);
+	stoppt = (phys_addr_t)&page_table[end];
+	stoppt = ALIGN(stoppt, CONFIG_SYS_CACHELINE_SIZE);
+	mmu_page_table_flush(startpt, stoppt);
 }
 
 __weak void dram_bank_mmu_setup(int bank)