From patchwork Sun Nov 14 00:59:44 2021
X-Patchwork-Submitter: Hongyu Wang
X-Patchwork-Id: 1554778
To: jakub@redhat.com
Subject: [PATCH] PR libgomp/103068: Optimize gomp_mutex_lock_slow for x86 target
Date: Sun, 14 Nov 2021 08:59:44 +0800
Message-Id: <20211114005944.66759-1-hongyu.wang@intel.com>
From: Hongyu Wang
Cc: gcc-patches@gcc.gnu.org

Hi,

From the CPU's point of view, getting a cache line for writing is more
expensive than reading.
See Appendix A.2 "Spinlock" in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf

A full compare-and-swap grabs the cache line in exclusive state and causes
excessive cache line bouncing. gomp_mutex_lock_slow spins on
__atomic_compare_exchange_n, so add a load check that keeps spinning while
the cmpxchg would fail anyway.

Bootstrapped/regtested on x86_64-pc-linux-gnu{-m32,}. Ok for master?

libgomp/ChangeLog:

	PR libgomp/103068
	* config/linux/mutex.c (gomp_mutex_lock_slow): Continue spin
	loop when mutex is not 0 under x86 target.
	* config/linux/x86/futex.h (TARGET_X86_AVOID_CMPXCHG): Define.
---
 libgomp/config/linux/mutex.c     | 5 +++++
 libgomp/config/linux/x86/futex.h | 2 ++
 2 files changed, 7 insertions(+)

diff --git a/libgomp/config/linux/mutex.c b/libgomp/config/linux/mutex.c
index 838264dc1f9..4e87566eb2b 100644
--- a/libgomp/config/linux/mutex.c
+++ b/libgomp/config/linux/mutex.c
@@ -49,6 +49,11 @@ gomp_mutex_lock_slow (gomp_mutex_t *mutex, int oldval)
 	}
       else
 	{
+#ifdef TARGET_X86_AVOID_CMPXCHG
+	  /* For x86, omit cmpxchg when atomic load shows mutex is not 0.  */
+	  if ((oldval = __atomic_load_n (mutex, MEMMODEL_RELAXED)) != 0)
+	    continue;
+#endif
 	  /* Something changed.  If now unlocked, we're good to go.  */
 	  oldval = 0;
 	  if (__atomic_compare_exchange_n (mutex, &oldval, 1, false,
diff --git a/libgomp/config/linux/x86/futex.h b/libgomp/config/linux/x86/futex.h
index e7f53399a4e..acc1d1467d7 100644
--- a/libgomp/config/linux/x86/futex.h
+++ b/libgomp/config/linux/x86/futex.h
@@ -122,3 +122,5 @@ cpu_relax (void)
 {
   __builtin_ia32_pause ();
 }
+
+#define TARGET_X86_AVOID_CMPXCHG