From patchwork Wed Apr 27 12:02:17 2016
X-Patchwork-Submitter: Borislav Petkov
X-Patchwork-Id: 615579
X-Patchwork-Delegate: davem@davemloft.net
Date: Wed, 27 Apr 2016 14:02:17 +0200
From: Borislav Petkov
To: "H. Peter Anvin"
Cc: Peter Zijlstra, Michal Hocko, Ingo Molnar, LKML, Thomas Gleixner,
    "David S. Miller", Tony Luck, Andrew Morton, Chris Zankel,
    Max Filippov, x86@kernel.org, linux-alpha@vger.kernel.org,
    linux-ia64@vger.kernel.org, linux-s390@vger.kernel.org,
    linux-sh@vger.kernel.org, sparclinux@vger.kernel.org,
    linux-xtensa@linux-xtensa.org, linux-arch@vger.kernel.org,
    Josh Poimboeuf
Subject: [PATCH] x86/locking/rwsem: Cleanup ____down_write()
Message-ID: <20160427120217.GE21011@pd.tnic>
References: <20160413091625.GF14351@dhcp22.suse.cz>
 <20160413091943.GA17858@gmail.com>
 <20160413102731.GA29896@gmail.com>
 <20160413124943.GH14351@dhcp22.suse.cz>
 <20160420134019.GX3448@twins.programming.kicks-ass.net>
 <91A11395-ACAA-4043-B770-2DF6CBAED54C@zytor.com>
 <20160420204501.GA6815@pd.tnic>
 <5717EF59.1030709@zytor.com>
 <20160420213637.GA4978@pd.tnic>
User-Agent: Mutt/1.5.24 (2015-08-30)
X-Mailing-List: sparclinux@vger.kernel.org

On Wed, Apr 20, 2016 at 03:29:30PM -0700, H. Peter Anvin wrote:
> Since it is a fixed register we could just mark edx clobbered, but
> with more flexible register constraints it can permit gcc to allocate
> a temp register for us.

How about the following? It boots fine in kvm, and the generated asm
differs only in trivial gcc comment changes:

---
From: Borislav Petkov
Date: Wed, 27 Apr 2016 13:47:32 +0200
Subject: [PATCH] x86/locking/rwsem: Cleanup ____down_write()

Move the RWSEM_ACTIVE_WRITE_BIAS argument out of the inline asm to
reduce the number of operands. Also, make it an input operand only
(why it was an output operand before, I still don't know...).

For better readability, use symbolic names for the operands and move
the line-continuation backslashes out to column 80.
Resulting asm differs only in the temporary gcc variable names and
locations:

--- before	2016-04-27 13:39:05.320778458 +0200
+++ after	2016-04-27 13:52:37.336778994 +0200
@@ -11,8 +11,8 @@ down_write_killable:
 .LBB84:
 .LBB85:
 .LBB86:
-	.loc 2 128 0
-	movabsq	$-4294967295, %rdx	#, tmp
+	.loc 2 130 0
+	movabsq	$-4294967295, %rdx	#, tmp94
 	movq	%rdi, %rax	# sem, sem
 .LBE86:
 .LBE85:
@@ -23,17 +23,17 @@ down_write_killable:
 .LBB89:
 .LBB88:
 .LBB87:
-	.loc 2 128 0
+	.loc 2 130 0
 #APP
-# 128 "./arch/x86/include/asm/rwsem.h" 1
+# 130 "./arch/x86/include/asm/rwsem.h" 1
 	# beginning down_write
 	.pushsection .smp_locks,"a"
 	.balign 4
 	.long 671f - .
 	.popsection
 671:
-	lock; xadd %rdx,(%rax)	# tmp, sem
-	test %edx , %edx	# tmp
+	lock; xadd %rdx,(%rax)	# tmp94, sem
+	test %edx , %edx	# tmp94
 	jz 1f
 	call call_rwsem_down_write_failed_killable
 1:

Signed-off-by: Borislav Petkov
---
 arch/x86/include/asm/rwsem.h | 36 +++++++++++++++++++-----------------
 1 file changed, 19 insertions(+), 17 deletions(-)

diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index 453744c1d347..d2f8d10a6d97 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -99,23 +99,25 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
 /*
  * lock for writing
  */
-#define ____down_write(sem, slow_path)				\
-({								\
-	long tmp;						\
-	struct rw_semaphore* ret;				\
-	asm volatile("# beginning down_write\n\t"		\
-		     LOCK_PREFIX "  xadd      %1,(%3)\n\t"	\
-		     /* adds 0xffff0001, returns the old value */ \
-		     "  test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" \
-		     /* was the active mask 0 before? */\
-		     "  jz        1f\n"				\
-		     "  call " slow_path "\n"			\
-		     "1:\n"					\
-		     "# ending down_write"			\
-		     : "+m" (sem->count), "=d" (tmp), "=a" (ret)	\
-		     : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS) \
-		     : "memory", "cc");				\
-	ret;							\
+#define ____down_write(sem, slow_path)				\
+({								\
+	long tmp = RWSEM_ACTIVE_WRITE_BIAS;			\
+	struct rw_semaphore* ret;				\
+								\
+	asm volatile("# beginning down_write\n\t"		\
+		     LOCK_PREFIX "  xadd      %[tmp],(%[sem])\n\t"	\
+		     /* adds 0xffff0001, returns the old value */ \
+		     "  test " __ASM_SEL(%w[tmp],%k[tmp]) ","	\
+				__ASM_SEL(%w[tmp],%k[tmp]) "\n\t" \
+		     /* was the active mask 0 before? */	\
+		     "  jz        1f\n"				\
+		     "  call " slow_path "\n"			\
+		     "1:\n"					\
+		     "# ending down_write"			\
+		     : "+m" (sem->count), "=a" (ret)		\
+		     : [sem] "a" (sem), [tmp] "r" (tmp)		\
+		     : "memory", "cc");				\
+	ret;							\
 })
 
 static inline void __down_write(struct rw_semaphore *sem)