From patchwork Thu Jun 9 13:31:21 2016
X-Patchwork-Submitter: Alexey Brodkin
X-Patchwork-Id: 632757
From: Alexey Brodkin <abrodkin@synopsys.com>
To: u-boot@lists.denx.de
Cc: Alexey Brodkin <abrodkin@synopsys.com>
Date: Thu, 9 Jun 2016 16:31:21 +0300
Message-Id: <1465479081-25992-1-git-send-email-abrodkin@synopsys.com>
Subject: [U-Boot] [PATCH] arc: Update data accessors with use of memory barriers

Memory barriers are required for both the compiler and the real hardware
to properly serialize accesses to critical data. For example, if the CPU
or the data bus it uses may reorder data accesses, the absence of memory
barriers can easily lead to very subtle and hard-to-debug data corruption.

This implementation is heavily borrowed from the up-to-date Linux kernel.
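As an illustration only (this snippet is not part of the patch; the device,
the GO register offset and the dma_desc layout are invented for the example),
this is the kind of sequence the above is about: without a barrier, a weakly
ordered ARCv2 core may make the MMIO "go" store visible before the descriptor
stores, while the barrier-full writel() introduced below prevents that.

#include <asm/io.h>

/* Hypothetical DMA descriptor the device fetches from memory */
struct dma_desc {
	u32 addr;
	u32 len;
};

static void start_xfer(struct dma_desc *desc, void __iomem *regs,
		       u32 buf, u32 len)
{
	/* Prepare the descriptor in ordinary memory */
	desc->addr = buf;
	desc->len = len;

	/*
	 * Kick the engine. writel() now issues wmb() ("dmb 2" on ARCv2)
	 * before the MMIO store, so the descriptor writes above are
	 * ordered before the device sees the GO bit. A bare
	 * __raw_writel() would give no such guarantee.
	 */
	writel(1, regs + 0x10);		/* 0x10: invented GO register offset */
}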
Signed-off-by: Alexey Brodkin <abrodkin@synopsys.com>
---
 arch/arc/include/asm/io.h | 95 +++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 79 insertions(+), 16 deletions(-)

diff --git a/arch/arc/include/asm/io.h b/arch/arc/include/asm/io.h
index b6f7724..42e7f22 100644
--- a/arch/arc/include/asm/io.h
+++ b/arch/arc/include/asm/io.h
@@ -10,6 +10,46 @@
 #include <linux/types.h>
 #include <asm/byteorder.h>
 
+#ifdef CONFIG_ISA_ARCV2
+
+/*
+ * ARCv2 based HS38 cores are in-order issue, but still weakly ordered
+ * due to micro-arch buffering/queuing of load/store, cache hit vs. miss ...
+ *
+ * Explicit barrier provided by DMB instruction
+ *  - Operand supports fine grained load/store/load+store semantics
+ *  - Ensures that selected memory operation issued before it will complete
+ *    before any subsequent memory operation of same type
+ *  - DMB guarantees SMP as well as local barrier semantics
+ *    (asm-generic/barrier.h ensures sane smp_*mb if not defined here, i.e.
+ *    UP: barrier(), SMP: smp_*mb == *mb)
+ *  - DSYNC provides DMB+completion_of_cache_bpu_maintenance_ops hence not needed
+ *    in the general case. Plus it only provides full barrier.
+ */
+
+#define mb()	asm volatile("dmb 3\n" : : : "memory")
+#define rmb()	asm volatile("dmb 1\n" : : : "memory")
+#define wmb()	asm volatile("dmb 2\n" : : : "memory")
+
+#else
+
+/*
+ * ARCompact based cores (ARC700) only have SYNC instruction which is super
+ * heavy weight as it flushes the pipeline as well.
+ * There are no real SMP implementations of such cores.
+ */
+
+#define mb()	asm volatile("sync\n" : : : "memory")
+#endif
+
+#ifdef CONFIG_ISA_ARCV2
+#define __iormb()		rmb()
+#define __iowmb()		wmb()
+#else
+#define __iormb()		do { } while (0)
+#define __iowmb()		do { } while (0)
+#endif
+
 /*
  * Given a physical address and a length, return a virtual address
  * that can be used to access the memory range with the caching
@@ -72,18 +112,6 @@ static inline u32 __raw_readl(const volatile void __iomem *addr)
 	return w;
 }
 
-#define readb __raw_readb
-
-static inline u16 readw(const volatile void __iomem *addr)
-{
-	return __le16_to_cpu(__raw_readw(addr));
-}
-
-static inline u32 readl(const volatile void __iomem *addr)
-{
-	return __le32_to_cpu(__raw_readl(addr));
-}
-
 static inline void __raw_writeb(u8 b, volatile void __iomem *addr)
 {
 	__asm__ __volatile__("stb%U1 %0, %1\n"
@@ -108,10 +136,6 @@ static inline void __raw_writel(u32 w, volatile void __iomem *addr)
 		     : "memory");
 }
 
-#define writeb __raw_writeb
-#define writew(b, addr) __raw_writew(__cpu_to_le16(b), addr)
-#define writel(b, addr) __raw_writel(__cpu_to_le32(b), addr)
-
 static inline int __raw_readsb(unsigned int addr, void *data, int bytelen)
 {
 	__asm__ __volatile__ ("1:ld.di r8, [r0]\n"
@@ -184,6 +208,45 @@ static inline int __raw_writesl(unsigned int addr, void *data, int longlen)
 	return longlen;
 }
 
+/*
+ * MMIO can also get buffered/optimized in micro-arch, so barriers needed
+ * Based on ARM model for the typical use case
+ *
+ *	<ST [DMA buffer]>
+ *	<writel MMIO "go" reg>
+ *  or:
+ *	<readl MMIO "status" reg>
+ *	<LD [DMA buffer]>
+ *
+ * http://lkml.kernel.org/r/20150622133656.GG1583@arm.com
+ */
+#define readb(c)		({ u8  __v = readb_relaxed(c); __iormb(); __v; })
+#define readw(c)		({ u16 __v = readw_relaxed(c); __iormb(); __v; })
+#define readl(c)		({ u32 __v = readl_relaxed(c); __iormb(); __v; })
+
+#define writeb(v,c)		({ __iowmb(); writeb_relaxed(v,c); })
+#define writew(v,c)		({ __iowmb(); writew_relaxed(v,c); })
+#define writel(v,c)		({ __iowmb(); writel_relaxed(v,c); })
+
+/*
+ * Relaxed API for drivers which can handle barrier ordering themselves
+ *
+ * Also these are defined to perform little endian accesses.
+ * To provide the typical device register semantics of fixed endian,
+ * swap the byte order for Big Endian
+ *
+ * http://lkml.kernel.org/r/201603100845.30602.arnd@arndb.de
+ */
+#define readb_relaxed(c)	__raw_readb(c)
+#define readw_relaxed(c)	({ u16 __r = le16_to_cpu((__force __le16) \
+					__raw_readw(c)); __r; })
+#define readl_relaxed(c)	({ u32 __r = le32_to_cpu((__force __le32) \
+					__raw_readl(c)); __r; })
+
+#define writeb_relaxed(v,c)	__raw_writeb(v,c)
+#define writew_relaxed(v,c)	__raw_writew((__force u16) cpu_to_le16(v),c)
+#define writel_relaxed(v,c)	__raw_writel((__force u32) cpu_to_le32(v),c)
+
 #define out_arch(type, endian, a, v)	__raw_write##type(cpu_to_##endian(v), a)
 #define in_arch(type, endian, a)	endian##_to_cpu(__raw_read##type(a))
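
A usage sketch for the relaxed vs. full accessors (again not from the patch
itself; the STATUS offset, the DONE bit and the buffer handling are invented):
the relaxed accessors fit register-only sequences where the driver handles
ordering itself, while the full readl()/writel() remain the safe default
whenever an MMIO result gates access to memory the device also writes.

#include <asm/io.h>

static u32 read_result(void __iomem *regs, const u32 *dma_buf)
{
	/* Register-only polling: nothing else to order, relaxed is enough */
	while (!(readl_relaxed(regs + 0x04) & 0x1))	/* 0x04: invented STATUS */
		;

	/*
	 * The device has filled dma_buf by the time DONE is set. Re-check
	 * with readl(), whose trailing __iormb() ("dmb 1" on ARCv2) keeps
	 * the load from dma_buf from being reordered ahead of the DONE
	 * observation. An explicit rmb() after the polling loop would work
	 * just as well.
	 */
	if (readl(regs + 0x04) & 0x1)
		return dma_buf[0];

	return 0;
}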