From patchwork Fri Jul 13 06:52:32 2018
X-Patchwork-Submitter: kemi
X-Patchwork-Id: 943311
From: Kemi Wang
To: Adhemerval Zanella, Florian Weimer, Rical Jason, Carlos Donell, Glibc alpha
Cc: Dave Hansen, Tim Chen, Andi Kleen, Ying Huang, Aaron Lu, Lu Aubrey, Kemi Wang
Subject: [PATCH v2 5/5] Manual: Add manual for pthread mutex
Date: Fri, 13 Jul 2018 14:52:32 +0800
Message-Id: <1531464752-18830-6-git-send-email-kemi.wang@intel.com>
In-Reply-To: <1531464752-18830-1-git-send-email-kemi.wang@intel.com>
References: <1531464752-18830-1-git-send-email-kemi.wang@intel.com>

Pthread mutexes are not yet described in the manual, so start by
documenting the PTHREAD_MUTEX_QUEUESPINNER mutex type here.
Signed-off-by: Kemi Wang
---
 manual/Makefile   |  2 +-
 manual/mutex.texi | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100644 manual/mutex.texi

diff --git a/manual/Makefile b/manual/Makefile
index c275664..e56880a 100644
--- a/manual/Makefile
+++ b/manual/Makefile
@@ -39,7 +39,7 @@ chapters = $(addsuffix .texi, \
 			 pipe socket terminal syslog math arith time \
 			 resource setjmp signal startup process ipc job \
 			 nss users sysinfo conf crypt debug threads \
-			 probes tunables)
+			 probes tunables mutex)
 appendices = lang.texi header.texi install.texi maint.texi platform.texi \
 	     contrib.texi
 licenses = freemanuals.texi lgpl-2.1.texi fdl-1.3.texi

diff --git a/manual/mutex.texi b/manual/mutex.texi
new file mode 100644
index 0000000..1520a8c
--- /dev/null
+++ b/manual/mutex.texi
@@ -0,0 +1,68 @@
+@node Pthread mutex
+@c %MENU% mutex
+
+This chapter describes the usage and implementation of POSIX pthread
+mutexes.
+
+@menu
+* Mutex introduction::             What is a mutex?
+* Mutex type::                     The capabilities of each mutex type
+* Mutex usage::                    How to use a mutex
+* Usage scenarios and limitation::
+@end menu
+
+@node Mutex introduction
+@section Mutex introduction
+
+A mutex is used to protect data structures shared among threads or
+processes.
+
+@node Mutex type
+@section Mutex type
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+A queue spinner mutex reduces the overhead of lock holder transitions and
+makes the mutex scalable on large systems with many CPUs (e.g.@: NUMA
+architectures) by queuing spinners.  It puts mutex spinners into a queue
+before they spin on the mutex lock, and allows only one spinner to spin
+on the mutex lock at a time.
+Thus, when the lock is released, the current spinner can acquire the lock
+immediately, because the cache line containing the mutex lock is only
+contended between the previous lock holder and the current spinner, and
+the overhead of lock acquisition via spinning stays O(1) no matter how
+severely the lock is contended.
+@end deftp
+
+@node Mutex usage
+@section Mutex usage
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+A queue spinner mutex can be initialized either statically, using the
+macro @code{PTHREAD_QUEUESPINNER_MUTEX_INITIALIZER_NP}, or dynamically,
+by calling @code{pthread_mutex_init}.
+
+Static initialization:
+@smallexample
+pthread_mutex_t mutex = PTHREAD_QUEUESPINNER_MUTEX_INITIALIZER_NP;
+@end smallexample
+
+Dynamic initialization:
+@smallexample
+pthread_mutexattr_init (&attr);
+pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_QUEUESPINNER_NP);
+pthread_mutex_init (&mutex, &attr);
+@end smallexample
+@end deftp
+
+@node Usage scenarios and limitation
+@section Usage scenarios and limitation
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+There is a potential risk in using a mutex initialized with type
+@code{PTHREAD_MUTEX_QUEUESPINNER_NP} when CPU resources are
+oversubscribed: lock ownership may be transferred to the next spinner in
+the queue while that spinner is not running (its CPU is scheduled to run
+another task at that moment).  The other spinners then have to wait for
+it, which may lead to a collapse in lock performance.  Therefore, queue
+spinner mutexes should be used carefully, for applications that pursue
+performance and fairness without oversubscribing CPU resources, e.g.@: an
+application running within a container in a private or public cloud
+infrastructure, or an application running on CPUs that are not also
+subscribed by other tasks at the same time.
+@end deftp