From patchwork Mon Dec 17 12:30:54 2018
X-Patchwork-Submitter: kemi
X-Patchwork-Id: 1014463
From: Kemi Wang
To: Carlos, Glibc alpha
Cc: Kemi Wang
Subject: [RESEND PATCH 3/3] Manual: Add manual for pthread mutex
Date: Mon, 17 Dec 2018 20:30:54 +0800
Message-Id: <1545049854-14472-3-git-send-email-kemi.wang@intel.com>
In-Reply-To: <1545049854-14472-1-git-send-email-kemi.wang@intel.com>
References: <1545049854-14472-1-git-send-email-kemi.wang@intel.com>

Pthread mutexes are not described in the documentation, so start
documenting them here, at least for the PTHREAD_MUTEX_QUEUESPINNER type.
Signed-off-by: Kemi Wang
---
 manual/Makefile   |  2 +-
 manual/mutex.texi | 68 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 69 insertions(+), 1 deletion(-)
 create mode 100644 manual/mutex.texi

diff --git a/manual/Makefile b/manual/Makefile
index 5f6006d..0a8b80d 100644
--- a/manual/Makefile
+++ b/manual/Makefile
@@ -39,7 +39,7 @@ chapters = $(addsuffix .texi, \
 	     pipe socket terminal syslog math arith time \
 	     resource setjmp signal startup process ipc job \
 	     nss users sysinfo conf crypt debug threads \
-	     probes tunables)
+	     probes tunables mutex)
 appendices = lang.texi header.texi install.texi maint.texi platform.texi \
 	     contrib.texi
 licenses = freemanuals.texi lgpl-2.1.texi fdl-1.3.texi
diff --git a/manual/mutex.texi b/manual/mutex.texi
new file mode 100644
index 0000000..1520a8c
--- /dev/null
+++ b/manual/mutex.texi
@@ -0,0 +1,68 @@
+@node Pthread mutex
+@c %MENU% mutex
+
+This chapter describes the usage and implementation of POSIX pthread mutexes.
+
+@menu
+* Mutex introduction::             What is a mutex?
+* Mutex type::                     The capabilities of each mutex type
+* Mutex usage::                    How to use a mutex
+* Usage scenarios and limitation::
+@end menu
+
+@node Mutex introduction
+@section Mutex introduction
+
+A mutex is used to protect data structures shared among threads or processes.
+
+@node Mutex type
+@section Mutex type
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+A queue spinner mutex reduces the overhead of lock holder transitions and
+keeps the mutex scalable on large systems with many CPUs (for example,
+NUMA architectures) by queuing spinners.  Spinners are put into a queue
+before they spin on the mutex, and only one spinner at a time spins on the
+lock.  Thus, when the lock is released, the spinner at the head of the
+queue can acquire it immediately, because the cache line holding the lock
+is contended only between the previous lock holder and that spinner, and
+the overhead of lock acquisition via spinning stays O(1) no matter how
+heavily the lock is contended.
+@end deftp
+
+@node Mutex usage
+@section Mutex usage
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+A queue spinner mutex can be initialized either statically, with the macro
+@code{PTHREAD_QUEUESPINNER_MUTEX_INITIALIZER_NP}, or dynamically, by
+calling @code{pthread_mutex_init}.
+
+Static initialization:
+@smallexample
+pthread_mutex_t mutex = PTHREAD_QUEUESPINNER_MUTEX_INITIALIZER_NP;
+@end smallexample
+
+Dynamic initialization:
+@smallexample
+pthread_mutexattr_init (&attr);
+pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_QUEUESPINNER_NP);
+pthread_mutex_init (&mutex, &attr);
+@end smallexample
+@end deftp
+
+@node Usage scenarios and limitation
+@section Usage scenarios and limitation
+
+@deftp Type PTHREAD_MUTEX_QUEUESPINNER_NP
+A mutex initialized with type @code{PTHREAD_MUTEX_QUEUESPINNER_NP} carries
+a potential risk when CPU resources are oversubscribed: lock ownership may
+be handed to the next spinner in the queue while that spinner is not
+running (its CPU is scheduled to run another task at that moment).  The
+other spinners then have to wait behind it, which can lead to a collapse
+in lock performance.  Therefore, queue spinner mutexes should be used with
+care, by applications that pursue performance and fairness without
+oversubscribing CPU resources, for example an application running in a
+container on private or public cloud infrastructure, or on CPUs that are
+not shared with other tasks at the same time.
+@end deftp
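
As a side note for review, the "queuing spinners" behavior described in
the Mutex type section can be illustrated with a small MCS-style lock
sketch.  This is only a conceptual illustration of the queuing idea using
C11 atomics, not the code glibc uses internally; the struct and function
names here are invented for the example.

/* Conceptual sketch of a queued spin lock: each waiter spins on its own
   node, and the lock is handed directly to the head of the queue, so the
   shared lock word is touched only by the releasing holder and one
   spinner.  Compiles as a standalone translation unit with -std=c11.  */

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode
{
  _Atomic (struct qnode *) next;
  atomic_bool locked;
};

struct qlock
{
  _Atomic (struct qnode *) tail;   /* Last waiter in the queue, or NULL.  */
};

static void
qlock_acquire (struct qlock *lock, struct qnode *self)
{
  atomic_store (&self->next, (struct qnode *) NULL);
  atomic_store (&self->locked, true);

  /* Append ourselves to the queue.  */
  struct qnode *prev = atomic_exchange (&lock->tail, self);
  if (prev != NULL)
    {
      /* Someone holds or waits for the lock: link behind them and spin
         only on our own node, not on the shared lock word.  */
      atomic_store (&prev->next, self);
      while (atomic_load (&self->locked))
        ;
    }
}

static void
qlock_release (struct qlock *lock, struct qnode *self)
{
  struct qnode *next = atomic_load (&self->next);
  if (next == NULL)
    {
      /* No visible successor: if we are still the tail, the queue is empty.  */
      struct qnode *expected = self;
      if (atomic_compare_exchange_strong (&lock->tail, &expected,
                                          (struct qnode *) NULL))
        return;
      /* A successor is in the middle of enqueueing itself; wait for it.  */
      while ((next = atomic_load (&self->next)) == NULL)
        ;
    }
  /* Hand the lock directly to the next spinner in the queue.  */
  atomic_store (&next->locked, false);
}

Because each waiter spins on its own "locked" flag, the cache line holding
the lock is contended only between the previous holder and the head of the
queue, which is the property the manual text attributes to the queue
spinner mutex.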
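
For completeness, here is a minimal runnable sketch of the dynamic
initialization shown in the Mutex usage section, expanded into a whole
program.  It assumes a glibc built with this patch series
(PTHREAD_MUTEX_QUEUESPINNER_NP does not exist in an unpatched glibc), and
_GNU_SOURCE is required for the non-portable type; the thread and
iteration counts are arbitrary.

/* Demo of a queue spinner mutex: several threads increment a shared
   counter under the mutex.  Build with "gcc -pthread demo.c" against a
   glibc that includes this patch series.  */

#define _GNU_SOURCE 1            /* Needed for the *_NP mutex types.  */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 4
#define ITERATIONS 100000

static pthread_mutex_t mutex;
static long counter;

static void *
worker (void *arg)
{
  for (int i = 0; i < ITERATIONS; i++)
    {
      pthread_mutex_lock (&mutex);
      counter++;                 /* Critical section protected by the mutex.  */
      pthread_mutex_unlock (&mutex);
    }
  return NULL;
}

int
main (void)
{
  pthread_mutexattr_t attr;
  pthread_t threads[NTHREADS];

  /* Dynamic initialization as in the manual text: request the queue
     spinner type through a mutex attribute object.  */
  if (pthread_mutexattr_init (&attr) != 0
      || pthread_mutexattr_settype (&attr, PTHREAD_MUTEX_QUEUESPINNER_NP) != 0
      || pthread_mutex_init (&mutex, &attr) != 0)
    {
      fprintf (stderr, "mutex initialization failed\n");
      return EXIT_FAILURE;
    }
  pthread_mutexattr_destroy (&attr);

  for (int i = 0; i < NTHREADS; i++)
    pthread_create (&threads[i], NULL, worker, NULL);
  for (int i = 0; i < NTHREADS; i++)
    pthread_join (threads[i], NULL);

  printf ("counter = %ld (expected %ld)\n", counter,
          (long) NTHREADS * ITERATIONS);
  pthread_mutex_destroy (&mutex);
  return EXIT_SUCCESS;
}

On a stock glibc the program does not compile, since
PTHREAD_MUTEX_QUEUESPINNER_NP is undefined; everything else in it uses
only standard pthread calls.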