From patchwork Wed Dec 16 01:10:33 2015
X-Patchwork-Submitter: Noam Camus
X-Patchwork-Id: 557268
From: Noam Camus
Subject: [PATCH v4 14/19] ARC: [plat-eznps] Use dedicated atomic/bitops/cmpxchg
Date: Wed, 16 Dec 2015 03:10:33 +0200
Message-ID: <1450228238-4499-15-git-send-email-noamc@ezchip.com>
In-Reply-To: <1450228238-4499-1-git-send-email-noamc@ezchip.com>
References: <1450228238-4499-1-git-send-email-noamc@ezchip.com>
X-Mailer: git-send-email 1.7.1
Cc: cmetcalf@ezchip.com, daniel.lezcano@linaro.org, Noam Camus,
 linux-kernel@vger.kernel.org

From: Noam Camus

We need our own implementations since we lack LLSC support.
Our extended ISA provides an optimized solution for all the 32-bit
operations we see in these three headers.
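
To illustrate the pattern, atomic_add() on EZNPS now expands to roughly
the following (a sketch of the ATOMIC_OP() expansion below; the operands
are staged in r2/r3 and the dedicated instruction encoding is emitted
literally via .word, so no LL/SC retry loop is needed):

  static inline void atomic_add(int i, atomic_t *v)
  {
          /* stage operands in r2/r3, then the dedicated atomic-add
           * instruction updates v->counter in a single operation
           */
          __asm__ __volatile__(
          "       mov r2, %0\n"
          "       mov r3, %1\n"
          "       .word %2\n"
          :
          : "r"(i), "r"(&v->counter), "i"(CTOP_INST_AADD_DI_R2_R2_R3)
          : "r2", "r3", "memory");
  }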
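
The cmpxchg() side leans on the core's exchange-if-equal instruction:
the expected value is staged in the CTOP_AUX_GPA1 aux register and the
old memory word comes back in r2. A minimal C model of the semantics the
new __cmpxchg() assumes (the name cmpxchg_model and the plain-C body are
illustrative only; the hardware performs this atomically):

  static unsigned long cmpxchg_model(volatile unsigned long *ptr,
                                     unsigned long expected,
                                     unsigned long new)
  {
          unsigned long old = *ptr;       /* done atomically by EXC_DI */

          if (old == expected)
                  *ptr = new;

          return old;                     /* callers check old == expected */
  }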
Signed-off-by: Noam Camus
---
 arch/arc/include/asm/atomic.h  | 79 +++++++++++++++++++++++++++++++++++-
 arch/arc/include/asm/bitops.h  | 54 +++++++++++++++++++++++++
 arch/arc/include/asm/cmpxchg.h | 87 +++++++++++++++++++++++++++++++++-------
 3 files changed, 202 insertions(+), 18 deletions(-)

diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 7730d30..a626996 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -17,6 +17,7 @@
 #include
 #include
 
+#ifndef CONFIG_ARC_PLAT_EZNPS
 #define atomic_read(v)	READ_ONCE((v)->counter)
 
 #ifdef CONFIG_ARC_HAS_LLSC
@@ -180,12 +181,84 @@ ATOMIC_OP(andnot, &= ~, bic)
 ATOMIC_OP(or, |=, or)
 ATOMIC_OP(xor, ^=, xor)
 
-#undef ATOMIC_OPS
-#undef ATOMIC_OP_RETURN
-#undef ATOMIC_OP
 #undef SCOND_FAIL_RETRY_VAR_DEF
 #undef SCOND_FAIL_RETRY_ASM
 #undef SCOND_FAIL_RETRY_VARS
+#else	/* CONFIG_ARC_PLAT_EZNPS */
+static inline int atomic_read(const atomic_t *v)
+{
+	int temp;
+
+	__asm__ __volatile__(
+	"	ld.di %0, [%1]"
+	: "=r"(temp)
+	: "r"(&v->counter)
+	: "memory");
+	return temp;
+}
+
+static inline void atomic_set(atomic_t *v, int i)
+{
+	__asm__ __volatile__(
+	"	st.di %0,[%1]"
+	:
+	: "r"(i), "r"(&v->counter)
+	: "memory");
+}
+
+#define ATOMIC_OP(op, c_op, asm_op)	\
+static inline void atomic_##op(int i, atomic_t *v)	\
+{	\
+	__asm__ __volatile__(	\
+	"	mov r2, %0\n"	\
+	"	mov r3, %1\n"	\
+	"	.word %2\n"	\
+	:	\
+	: "r"(i), "r"(&v->counter), "i"(asm_op)	\
+	: "r2", "r3", "memory");	\
+}	\
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op)	\
+static inline int atomic_##op##_return(int i, atomic_t *v)	\
+{	\
+	unsigned int temp = i;	\
+	\
+	/* Explicit full memory barrier needed before/after */	\
+	smp_mb();	\
+	\
+	__asm__ __volatile__(	\
+	"	mov r2, %0\n"	\
+	"	mov r3, %1\n"	\
+	"	.word %2\n"	\
+	"	mov %0, r2"	\
+	: "+r"(temp)	\
+	: "r"(&v->counter), "i"(asm_op)	\
+	: "r2", "r3", "memory");	\
+	\
+	smp_mb();	\
+	\
+	temp c_op i;	\
+	\
+	return temp;	\
+}
+
+#define ATOMIC_OPS(op, c_op, asm_op)	\
+	ATOMIC_OP(op, c_op, asm_op)	\
+	ATOMIC_OP_RETURN(op, c_op, asm_op)
+
+ATOMIC_OPS(add, +=, CTOP_INST_AADD_DI_R2_R2_R3)
+#define atomic_sub(i, v) atomic_add(-(i), (v))
+#define atomic_sub_return(i, v) atomic_add_return(-(i), (v))
+
+ATOMIC_OP(and, &=, CTOP_INST_AAND_DI_R2_R2_R3)
+#define atomic_andnot(mask, v) atomic_and(~(mask), (v))
+ATOMIC_OP(or, |=, CTOP_INST_AOR_DI_R2_R2_R3)
+ATOMIC_OP(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
+#endif	/* CONFIG_ARC_PLAT_EZNPS */
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
 
 /**
  * __atomic_add_unless - add unless the number is a given value
diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
index 57c1f33..5a29185 100644
--- a/arch/arc/include/asm/bitops.h
+++ b/arch/arc/include/asm/bitops.h
@@ -22,6 +22,7 @@
 #include
 #endif
 
+#ifndef CONFIG_ARC_PLAT_EZNPS
 #if defined(CONFIG_ARC_HAS_LLSC)
 
 /*
@@ -155,6 +156,53 @@ static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *
 }
 
 #endif /* CONFIG_ARC_HAS_LLSC */
+#else	/* CONFIG_ARC_PLAT_EZNPS */
+#define BIT_OP(op, c_op, asm_op)	\
+static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\
+{	\
+	m += nr >> 5;	\
+	\
+	nr = (1UL << (nr & 0x1f));	\
+	if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3)	\
+		nr = ~nr;	\
+	\
+	__asm__ __volatile__(	\
+	"	mov r2, %0\n"	\
+	"	mov r3, %1\n"	\
+	"	.word %2\n"	\
+	:	\
+	: "r"(nr), "r"(m), "i"(asm_op)	\
+	: "r2", "r3", "memory");	\
+}
+
+#define TEST_N_BIT_OP(op, c_op, asm_op)	\
+static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\
+{	\
+	unsigned long old;	\
+	\
+	m += nr >> 5;	\
+	\
+	nr = old = (1UL << (nr & 0x1f));	\
+	if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3)	\
+		old = ~old;	\
+	\
+	/* Explicit full memory barrier needed before/after */	\
+	smp_mb();	\
+	\
+	__asm__ __volatile__(	\
+	"	mov r2, %0\n"	\
+	"	mov r3, %1\n"	\
+	"	.word %2\n"	\
+	"	mov %0, r2"	\
+	: "+r"(old)	\
+	: "r"(m), "i"(asm_op)	\
+	: "r2", "r3", "memory");	\
+	\
+	smp_mb();	\
+	\
+	return (old & nr) != 0;	\
+}
+#endif	/* CONFIG_ARC_PLAT_EZNPS */
 
 /***************************************
  * Non atomic variants
@@ -196,9 +244,15 @@ static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long
 	/* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\
 	__TEST_N_BIT_OP(op, c_op, asm_op)
 
+#ifndef CONFIG_ARC_PLAT_EZNPS
 BIT_OPS(set, |, bset)
 BIT_OPS(clear, & ~, bclr)
 BIT_OPS(change, ^, bxor)
+#else
+BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3)
+BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3)
+BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3)
+#endif
 
 /*
  * This routine doesn't need to be atomic.
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index af7a2db..6d320d3 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -14,6 +14,7 @@
 #include
 #include
 
+#ifndef CONFIG_ARC_PLAT_EZNPS
 #ifdef CONFIG_ARC_HAS_LLSC
 
 static inline unsigned long
@@ -66,21 +67,6 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
 
 #endif /* CONFIG_ARC_HAS_LLSC */
 
-#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
-				(unsigned long)(o), (unsigned long)(n)))
-
-/*
- * Since not supported natively, ARC cmpxchg() uses atomic_ops_lock (UP/SMP)
- * just to gaurantee semantics.
- * atomic_cmpxchg() needs to use the same locks as it's other atomic siblings
- * which also happens to be atomic_ops_lock.
- *
- * Thus despite semantically being different, implementation of atomic_cmpxchg()
- * is same as cmpxchg().
- */
-#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
-
-
 /*
  * xchg (reg with memory) based on "Native atomic" EX insn
  */
@@ -143,6 +129,63 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
 
 #endif
 
+#else	/* CONFIG_ARC_PLAT_EZNPS */
+static inline unsigned long
+__cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+{
+	/*
+	 * Explicit full memory barrier needed before/after
+	 */
+	smp_mb();
+
+	write_aux_reg(CTOP_AUX_GPA1, expected);
+
+	__asm__ __volatile__(
+	"	mov r2, %0\n"
+	"	mov r3, %1\n"
+	"	.word %2\n"
+	"	mov %0, r2"
+	: "+r"(new)
+	: "r"(ptr), "i"(CTOP_INST_EXC_DI_R2_R2_R3)
+	: "r2", "r3", "memory");
+
+	smp_mb();
+
+	return new;
+}
+
+static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
+				   int size)
+{
+	extern unsigned long __xchg_bad_pointer(void);
+
+	switch (size) {
+	case 4:
+		/*
+		 * Explicit full memory barrier needed before/after
+		 */
+		smp_mb();
+
+		__asm__ __volatile__(
+		"	mov r2, %0\n"
+		"	mov r3, %1\n"
+		"	.word %2\n"
+		"	mov %0, r2\n"
+		: "+r"(val)
+		: "r"(ptr), "i"(CTOP_INST_XEX_DI_R2_R2_R3)
+		: "r2", "r3", "memory");
+
+		smp_mb();
+
+		return val;
+	}
+	return __xchg_bad_pointer();
+}
+
+#define xchg(ptr, with) ((typeof(*(ptr)))__xchg((unsigned long)(with), (ptr), \
+				sizeof(*(ptr))))
+#endif	/* CONFIG_ARC_PLAT_EZNPS */
+
 /*
  * "atomic" variant of xchg()
  * REQ: It needs to follow the same serialization rules as other atomic_xxx()
@@ -158,4 +201,18 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
  */
 #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
 
+#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
+				(unsigned long)(o), (unsigned long)(n)))
+
+/*
+ * Since not supported natively, ARC cmpxchg() uses atomic_ops_lock (UP/SMP)
+ * just to gaurantee semantics.
+ * atomic_cmpxchg() needs to use the same locks as it's other atomic siblings
+ * which also happens to be atomic_ops_lock.
+ *
+ * Thus despite semantically being different, implementation of atomic_cmpxchg()
+ * is same as cmpxchg().
+ */
+#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
+
 #endif