get:
Show a patch.

patch:
Update a patch (partial update: only the fields supplied in the request body are changed).

put:
Update a patch (full update: all writable fields must be supplied).
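A GET against this endpoint returns plain JSON, so it can be consumed with nothing beyond the standard library. A minimal sketch (the live call needs network access; the offline part below just parses a fragment of the response shown on this page):

```python
import json
from urllib.request import Request, urlopen

API = "https://patchwork.ozlabs.org/api/patches/703139/"

def fetch_patch(url=API):
    """GET a single patch as JSON (requires network access)."""
    req = Request(url, headers={"Accept": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

# Offline demonstration: a few fields exactly as returned above.
sample = json.loads(
    '{"id": 703139, "state": "superseded", "archived": true, "check": "pending"}'
)
print(sample["state"])  # superseded
```

Note that JSON `true` maps to Python `True`, so `sample["archived"]` can be tested directly as a boolean.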

GET /api/patches/703139/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 703139,
    "url": "http://patchwork.ozlabs.org/api/patches/703139/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1481047767-60255-2-git-send-email-xinhui.pan@linux.vnet.ibm.com/",
    "project": {
        "id": 2,
        "url": "http://patchwork.ozlabs.org/api/projects/2/?format=api",
        "name": "Linux PPC development",
        "link_name": "linuxppc-dev",
        "list_id": "linuxppc-dev.lists.ozlabs.org",
        "list_email": "linuxppc-dev@lists.ozlabs.org",
        "web_url": "https://github.com/linuxppc/wiki/wiki",
        "scm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git",
        "webscm_url": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/",
        "list_archive_url": "https://lore.kernel.org/linuxppc-dev/",
        "list_archive_url_format": "https://lore.kernel.org/linuxppc-dev/{}/",
        "commit_url_format": "https://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git/commit/?id={}"
    },
    "msgid": "<1481047767-60255-2-git-send-email-xinhui.pan@linux.vnet.ibm.com>",
    "list_archive_url": "https://lore.kernel.org/linuxppc-dev/1481047767-60255-2-git-send-email-xinhui.pan@linux.vnet.ibm.com/",
    "date": "2016-12-06T18:09:22",
    "name": "[v9,1/6] powerpc/qspinlock: powerpc support qspinlock",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "4c6e2ac165f43b92a0dd30bef4c88990db426fba",
    "submitter": {
        "id": 67833,
        "url": "http://patchwork.ozlabs.org/api/people/67833/?format=api",
        "name": "xinhui",
        "email": "xinhui.pan@linux.vnet.ibm.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/linuxppc-dev/patch/1481047767-60255-2-git-send-email-xinhui.pan@linux.vnet.ibm.com/mbox/",
    "series": [],
    "comments": "http://patchwork.ozlabs.org/api/patches/703139/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/703139/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>",
        "X-Original-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@lists.ozlabs.org"
        ],
        "Delivered-To": [
            "patchwork-incoming@ozlabs.org",
            "linuxppc-dev@lists.ozlabs.org"
        ],
        "Received": [
            "from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\t(using TLSv1.2 with cipher ADH-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby ozlabs.org (Postfix) with ESMTPS id 3tY2Dh63WJz9vbs\n\tfor <patchwork-incoming@ozlabs.org>;\n\tWed,  7 Dec 2016 00:15:32 +1100 (AEDT)",
            "from lists.ozlabs.org (lists.ozlabs.org [IPv6:2401:3900:2:1::3])\n\tby lists.ozlabs.org (Postfix) with ESMTP id 3tY2Dh3JBrzDvkb\n\tfor <patchwork-incoming@ozlabs.org>;\n\tWed,  7 Dec 2016 00:15:32 +1100 (AEDT)",
            "from mx0a-001b2d01.pphosted.com (mx0a-001b2d01.pphosted.com\n\t[148.163.156.1])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256\n\tbits)) (No client certificate requested)\n\tby lists.ozlabs.org (Postfix) with ESMTPS id 3tY29r62qVzDvsK\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tWed,  7 Dec 2016 00:13:04 +1100 (AEDT)",
            "from pps.filterd (m0098396.ppops.net [127.0.0.1])\n\tby mx0a-001b2d01.pphosted.com (8.16.0.17/8.16.0.17) with SMTP id\n\tuB6D5YJT005359\n\tfor <linuxppc-dev@lists.ozlabs.org>; Tue, 6 Dec 2016 08:13:02 -0500",
            "from e23smtp02.au.ibm.com (e23smtp02.au.ibm.com [202.81.31.144])\n\tby mx0a-001b2d01.pphosted.com with ESMTP id 275qv2mns5-1\n\t(version=TLSv1.2 cipher=AES256-SHA bits=256 verify=NOT)\n\tfor <linuxppc-dev@lists.ozlabs.org>; Tue, 06 Dec 2016 08:13:02 -0500",
            "from localhost\n\tby e23smtp02.au.ibm.com with IBM ESMTP SMTP Gateway: Authorized Use\n\tOnly! Violators will be prosecuted\n\tfor <linuxppc-dev@lists.ozlabs.org> from\n\t<xinhui.pan@linux.vnet.ibm.com>; Tue, 6 Dec 2016 23:13:00 +1000",
            "from d23dlp01.au.ibm.com (202.81.31.203)\n\tby e23smtp02.au.ibm.com (202.81.31.208) with IBM ESMTP SMTP Gateway:\n\tAuthorized Use Only! Violators will be prosecuted; \n\tTue, 6 Dec 2016 23:12:57 +1000",
            "from d23relay06.au.ibm.com (d23relay06.au.ibm.com [9.185.63.219])\n\tby d23dlp01.au.ibm.com (Postfix) with ESMTP id C5EB72CE805A\n\tfor <linuxppc-dev@lists.ozlabs.org>;\n\tWed,  7 Dec 2016 00:12:56 +1100 (EST)",
            "from d23av02.au.ibm.com (d23av02.au.ibm.com [9.190.235.138])\n\tby d23relay06.au.ibm.com (8.14.9/8.14.9/NCO v10.0) with ESMTP id\n\tuB6DCuBd46792798\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 7 Dec 2016 00:12:56 +1100",
            "from d23av02.au.ibm.com (localhost [127.0.0.1])\n\tby d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVout) with ESMTP id\n\tuB6DCtTK000368\n\tfor <linuxppc-dev@lists.ozlabs.org>; Wed, 7 Dec 2016 00:12:56 +1100",
            "from ltcalpine2-lp13.aus.stglabs.ibm.com\n\t(ltcalpine2-lp13.aus.stglabs.ibm.com [9.40.195.196])\n\tby d23av02.au.ibm.com (8.14.4/8.14.4/NCO v10.0 AVin) with ESMTP id\n\tuB6DCnYu032667; Wed, 7 Dec 2016 00:12:53 +1100"
        ],
        "From": "Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>",
        "To": "linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org",
        "Subject": "[PATCH v9 1/6] powerpc/qspinlock: powerpc support qspinlock",
        "Date": "Tue,  6 Dec 2016 13:09:22 -0500",
        "X-Mailer": "git-send-email 2.4.11",
        "In-Reply-To": "<1481047767-60255-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>",
        "References": "<1481047767-60255-1-git-send-email-xinhui.pan@linux.vnet.ibm.com>",
        "X-TM-AS-MML": "disable",
        "X-Content-Scanned": "Fidelis XPS MAILER",
        "x-cbid": "16120613-0004-0000-0000-000001C4086A",
        "X-IBM-AV-DETECTION": "SAVI=unused REMOTE=unused XFE=unused",
        "x-cbparentid": "16120613-0005-0000-0000-00000941430B",
        "Message-Id": "<1481047767-60255-2-git-send-email-xinhui.pan@linux.vnet.ibm.com>",
        "X-Proofpoint-Virus-Version": "vendor=fsecure engine=2.50.10432:, ,\n\tdefinitions=2016-12-06_07:, , signatures=0",
        "X-Proofpoint-Spam-Details": "rule=outbound_notspam policy=outbound score=0\n\tspamscore=0 suspectscore=0\n\tmalwarescore=0 phishscore=0 adultscore=0 bulkscore=0 classifier=spam\n\tadjust=0 reason=mlx scancount=1 engine=8.0.1-1609300000\n\tdefinitions=main-1612060208",
        "X-BeenThere": "linuxppc-dev@lists.ozlabs.org",
        "X-Mailman-Version": "2.1.23",
        "Precedence": "list",
        "List-Id": "Linux on PowerPC Developers Mail List\n\t<linuxppc-dev.lists.ozlabs.org>",
        "List-Unsubscribe": "<https://lists.ozlabs.org/options/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=unsubscribe>",
        "List-Archive": "<http://lists.ozlabs.org/pipermail/linuxppc-dev/>",
        "List-Post": "<mailto:linuxppc-dev@lists.ozlabs.org>",
        "List-Help": "<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=help>",
        "List-Subscribe": "<https://lists.ozlabs.org/listinfo/linuxppc-dev>,\n\t<mailto:linuxppc-dev-request@lists.ozlabs.org?subject=subscribe>",
        "Cc": "xinhui.pan@linux.vnet.ibm.com, peterz@infradead.org, boqun.feng@gmail.com,\n\tvirtualization@lists.linux-foundation.org, mingo@redhat.com,\n\tpaulus@samba.org, longman@redhat.com, paulmck@linux.vnet.ibm.com",
        "Errors-To": "linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org",
        "Sender": "\"Linuxppc-dev\"\n\t<linuxppc-dev-bounces+patchwork-incoming=ozlabs.org@lists.ozlabs.org>"
    },
    "content": "This patch add basic code to enable qspinlock on powerpc. qspinlock is\none kind of fairlock implementation. And seen some performance improvement\nunder some scenarios.\n\nqueued_spin_unlock() release the lock by just one write of NULL to the\n::locked field which sits at different places in the two endianness\nsystem.\n\nWe override some arch_spin_XXX as powerpc has io_sync stuff which makes\nsure the io operations are protected by the lock correctly.\n\nThere is another special case, see commit\n2c610022711 (\"locking/qspinlock: Fix spin_unlock_wait() some more\")\n\nSigned-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com>\n---\n arch/powerpc/include/asm/qspinlock.h      | 66 +++++++++++++++++++++++++++++++\n arch/powerpc/include/asm/spinlock.h       | 31 +++++++++------\n arch/powerpc/include/asm/spinlock_types.h |  4 ++\n arch/powerpc/lib/locks.c                  | 62 +++++++++++++++++++++++++++++\n 4 files changed, 150 insertions(+), 13 deletions(-)\n create mode 100644 arch/powerpc/include/asm/qspinlock.h",
    "diff": "diff --git a/arch/powerpc/include/asm/qspinlock.h b/arch/powerpc/include/asm/qspinlock.h\nnew file mode 100644\nindex 0000000..4c89256\n--- /dev/null\n+++ b/arch/powerpc/include/asm/qspinlock.h\n@@ -0,0 +1,66 @@\n+#ifndef _ASM_POWERPC_QSPINLOCK_H\n+#define _ASM_POWERPC_QSPINLOCK_H\n+\n+#include <asm-generic/qspinlock_types.h>\n+\n+#define SPIN_THRESHOLD (1 << 15)\n+#define queued_spin_unlock queued_spin_unlock\n+#define queued_spin_is_locked queued_spin_is_locked\n+#define queued_spin_unlock_wait queued_spin_unlock_wait\n+\n+extern void queued_spin_unlock_wait(struct qspinlock *lock);\n+\n+static inline u8 *__qspinlock_lock_byte(struct qspinlock *lock)\n+{\n+\treturn (u8 *)lock + 3 * IS_BUILTIN(CONFIG_CPU_BIG_ENDIAN);\n+}\n+\n+static inline void queued_spin_unlock(struct qspinlock *lock)\n+{\n+\t/* release semantics is required */\n+\tsmp_store_release(__qspinlock_lock_byte(lock), 0);\n+}\n+\n+static inline int queued_spin_is_locked(struct qspinlock *lock)\n+{\n+\tsmp_mb();\n+\treturn atomic_read(&lock->val);\n+}\n+\n+#include <asm-generic/qspinlock.h>\n+\n+/* we need override it as ppc has io_sync stuff */\n+#undef arch_spin_trylock\n+#undef arch_spin_lock\n+#undef arch_spin_lock_flags\n+#undef arch_spin_unlock\n+#define arch_spin_trylock arch_spin_trylock\n+#define arch_spin_lock arch_spin_lock\n+#define arch_spin_lock_flags arch_spin_lock_flags\n+#define arch_spin_unlock arch_spin_unlock\n+\n+static inline int arch_spin_trylock(arch_spinlock_t *lock)\n+{\n+\tCLEAR_IO_SYNC;\n+\treturn queued_spin_trylock(lock);\n+}\n+\n+static inline void arch_spin_lock(arch_spinlock_t *lock)\n+{\n+\tCLEAR_IO_SYNC;\n+\tqueued_spin_lock(lock);\n+}\n+\n+static inline\n+void arch_spin_lock_flags(arch_spinlock_t *lock, unsigned long flags)\n+{\n+\tCLEAR_IO_SYNC;\n+\tqueued_spin_lock(lock);\n+}\n+\n+static inline void arch_spin_unlock(arch_spinlock_t *lock)\n+{\n+\tSYNC_IO;\n+\tqueued_spin_unlock(lock);\n+}\n+#endif /* _ASM_POWERPC_QSPINLOCK_H */\ndiff --git 
a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h\nindex 8c1b913..954099e 100644\n--- a/arch/powerpc/include/asm/spinlock.h\n+++ b/arch/powerpc/include/asm/spinlock.h\n@@ -60,6 +60,23 @@ static inline bool vcpu_is_preempted(int cpu)\n }\n #endif\n \n+#if defined(CONFIG_PPC_SPLPAR)\n+/* We only yield to the hypervisor if we are in shared processor mode */\n+#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))\n+extern void __spin_yield(arch_spinlock_t *lock);\n+extern void __rw_yield(arch_rwlock_t *lock);\n+#else /* SPLPAR */\n+#define __spin_yield(x)        barrier()\n+#define __rw_yield(x)  barrier()\n+#define SHARED_PROCESSOR       0\n+#endif\n+\n+#ifdef CONFIG_QUEUED_SPINLOCKS\n+#include <asm/qspinlock.h>\n+#else\n+\n+#define arch_spin_relax(lock)  __spin_yield(lock)\n+\n static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)\n {\n \treturn lock.slock == 0;\n@@ -114,18 +131,6 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)\n  * held.  
Conveniently, we have a word in the paca that holds this\n  * value.\n  */\n-\n-#if defined(CONFIG_PPC_SPLPAR)\n-/* We only yield to the hypervisor if we are in shared processor mode */\n-#define SHARED_PROCESSOR (lppaca_shared_proc(local_paca->lppaca_ptr))\n-extern void __spin_yield(arch_spinlock_t *lock);\n-extern void __rw_yield(arch_rwlock_t *lock);\n-#else /* SPLPAR */\n-#define __spin_yield(x)\tbarrier()\n-#define __rw_yield(x)\tbarrier()\n-#define SHARED_PROCESSOR\t0\n-#endif\n-\n static inline void arch_spin_lock(arch_spinlock_t *lock)\n {\n \tCLEAR_IO_SYNC;\n@@ -203,6 +208,7 @@ static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)\n \tsmp_mb();\n }\n \n+#endif /* !CONFIG_QUEUED_SPINLOCKS */\n /*\n  * Read-write spinlocks, allowing multiple readers\n  * but only one writer.\n@@ -338,7 +344,6 @@ static inline void arch_write_unlock(arch_rwlock_t *rw)\n #define arch_read_lock_flags(lock, flags) arch_read_lock(lock)\n #define arch_write_lock_flags(lock, flags) arch_write_lock(lock)\n \n-#define arch_spin_relax(lock)\t__spin_yield(lock)\n #define arch_read_relax(lock)\t__rw_yield(lock)\n #define arch_write_relax(lock)\t__rw_yield(lock)\n \ndiff --git a/arch/powerpc/include/asm/spinlock_types.h b/arch/powerpc/include/asm/spinlock_types.h\nindex 2351adc..bd7144e 100644\n--- a/arch/powerpc/include/asm/spinlock_types.h\n+++ b/arch/powerpc/include/asm/spinlock_types.h\n@@ -5,11 +5,15 @@\n # error \"please don't include this file directly\"\n #endif\n \n+#ifdef CONFIG_QUEUED_SPINLOCKS\n+#include <asm-generic/qspinlock_types.h>\n+#else\n typedef struct {\n \tvolatile unsigned int slock;\n } arch_spinlock_t;\n \n #define __ARCH_SPIN_LOCK_UNLOCKED\t{ 0 }\n+#endif\n \n typedef struct {\n \tvolatile signed int lock;\ndiff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c\nindex b7b1237..8f6dbb0 100644\n--- a/arch/powerpc/lib/locks.c\n+++ b/arch/powerpc/lib/locks.c\n@@ -23,6 +23,7 @@\n #include <asm/hvcall.h>\n #include <asm/smp.h>\n \n+#ifndef 
CONFIG_QUEUED_SPINLOCKS\n void __spin_yield(arch_spinlock_t *lock)\n {\n \tunsigned int lock_value, holder_cpu, yield_count;\n@@ -42,6 +43,7 @@ void __spin_yield(arch_spinlock_t *lock)\n \t\tget_hard_smp_processor_id(holder_cpu), yield_count);\n }\n EXPORT_SYMBOL_GPL(__spin_yield);\n+#endif\n \n /*\n  * Waiting for a read lock or a write lock on a rwlock...\n@@ -68,3 +70,63 @@ void __rw_yield(arch_rwlock_t *rw)\n \t\tget_hard_smp_processor_id(holder_cpu), yield_count);\n }\n #endif\n+\n+#ifdef CONFIG_QUEUED_SPINLOCKS\n+/*\n+ * This forbid we load an old value in another LL/SC. Because the SC here force\n+ * another LL/SC repeat. So we guarantee all loads in another LL and SC will\n+ * read correct value.\n+ */\n+static inline u32 atomic_read_sync(atomic_t *v)\n+{\n+\tu32 val;\n+\n+\t__asm__ __volatile__(\n+\"1:\t\" PPC_LWARX(%0, 0, %2, 0) \"\\n\"\n+\"\tstwcx. %0, 0, %2\\n\"\n+\"\tbne- 1b\\n\"\n+\t: \"=&r\" (val), \"+m\" (*v)\n+\t: \"r\" (v)\n+\t: \"cr0\", \"xer\");\n+\n+\treturn val;\n+}\n+\n+void queued_spin_unlock_wait(struct qspinlock *lock)\n+{\n+\n+\tu32 val;\n+\n+\tsmp_mb();\n+\n+\t/*\n+\t * copied from generic queue_spin_unlock_wait with little modification\n+\t */\n+\tfor (;;) {\n+\t\t/* need _sync, as we might race with another LL/SC in lock()*/\n+\t\tval = atomic_read_sync(&lock->val);\n+\n+\t\tif (!val) /* not locked, we're done */\n+\t\t\tgoto done;\n+\n+\t\tif (val & _Q_LOCKED_MASK) /* locked, go wait for unlock */\n+\t\t\tbreak;\n+\n+\t\t/* not locked, but pending, wait until we observe the lock */\n+\t\tcpu_relax();\n+\t}\n+\n+\t/*\n+\t * any unlock is good.\n+\t * And _sync() is not needed here, because once we got here, we must\n+\t * already read the ->val as LOCKED via a _sync(). 
Combining the\n+\t * smp_mb() before, we guarantee that all the memory accesses before\n+\t * unlock_wait() must be observed by the next lock critical section.\n+\t */\n+\twhile (atomic_read(&lock->val) & _Q_LOCKED_MASK)\n+\t\tcpu_relax();\n+done:\n+\tsmp_mb();\n+}\n+EXPORT_SYMBOL(queued_spin_unlock_wait);\n+#endif\n",
    "prefixes": [
        "v9",
        "1/6"
    ]
}
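Since `Allow` lists PUT and PATCH, the patch's writable fields (such as `state`) can be changed in place. A hedged sketch of a partial update with PATCH, assuming token authentication in the usual `Authorization: Token <token>` form (the token value and the helper name `update_patch` are illustrative, not part of this response):

```python
import json
from urllib.request import Request, urlopen

def update_patch(patch_id, token, **fields):
    """PATCH only the supplied fields; PUT would require the full object.

    `token` is an API token for an account with maintainer rights
    on the project (hypothetical here).
    """
    req = Request(
        f"https://patchwork.ozlabs.org/api/patches/{patch_id}/",
        data=json.dumps(fields).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Token {token}",
        },
        method="PATCH",
    )
    with urlopen(req) as resp:  # requires network access and a valid token
        return json.load(resp)

# Request construction can be checked without hitting the server:
req = Request(
    "https://patchwork.ozlabs.org/api/patches/703139/",
    data=json.dumps({"state": "accepted"}).encode(),
    method="PATCH",
)
print(req.get_method())  # PATCH
```

PATCH is the safer choice for scripted state changes: an out-of-date local copy of the object cannot silently overwrite fields you did not intend to touch, as a full PUT could.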