[PATCHv3] Protect _dl_profile_fixup data-dependency order [BZ #23690]

Message ID 20181011025754.23862-1-tuliom@linux.ibm.com
State New

Commit Message

Tulio Magno Quites Machado Filho Oct. 11, 2018, 2:57 a.m.
Florian Weimer <fw@deneb.enyo.de> writes:

> * Tulio Magno Quites Machado Filho:
>
>> I suspect this patch doesn't address all the comments from v1.
>> However, I believe some of the open questions/comments may not be
>> necessary anymore after the latest changes.
>>
>> I've decided to not add the new test to xtests, because it executes in
>> less than 3s in most of my tests.  There is just a single case that
>> takes up to 30s.
>>
>> Changes since v1:
>>
>>  - Fixed the coding style issues.
>>  - Replaced atomic loads/store with memory fences.
>>  - Added a test.
>
> I don't think the fences are correct, they still need to be combined
> with relaxed MO loads and stores.
>
> Does the issue that Carlos mentioned really show up in cross-builds?

Yes, it does fail on hppa and ia64.
But v3 (using thread fences) passes on build-many-glibcs.

Changes since v2:

 - Fixed coding style in nptl/tst-audit-threads-mod1.c.
 - Replaced pthreads.h functions with respective support/xthread.h ones.
 - Replaced malloc() with xcalloc() in nptl/tst-audit-threads.c.
 - Removed bzero().
 - Reduced the number of functions to 7,000 in order to fit within the
   relocation limit of some architectures, e.g. m68k and mips.
 - Fixed issues in nptl/Makefile.

Changes since v1:

 - Fixed the coding style issues.
 - Replaced atomic loads/store with memory fences.
 - Added a test.

---- 8< ----

The field reloc_result->addr is used to indicate whether the rest of the
fields of reloc_result have already been written, creating a
data-dependency order.
The read of reloc_result->addr into the variable value must complete
before the rest of the fields of reloc_result are read.
Likewise, the writes to the other fields of reloc_result must
complete before reloc_result->addr is updated.

Tested with build-many-glibcs.

2018-10-10  Tulio Magno Quites Machado Filho  <tuliom@linux.ibm.com>

	[BZ #23690]
	* elf/dl-runtime.c (_dl_profile_fixup): Guarantee memory
	modification order when accessing reloc_result->addr.
	* nptl/Makefile (tests): Add tst-audit-threads.
	(modules-names): Add tst-audit-threads-mod1 and
	tst-audit-threads-mod2.
	Add rules to build tst-audit-threads.
	* nptl/tst-audit-threads-mod1.c: New file.
	* nptl/tst-audit-threads-mod2.c: Likewise.
	* nptl/tst-audit-threads.c: Likewise.
	* nptl/tst-audit-threads.h: Likewise.

Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>
---
 elf/dl-runtime.c              | 19 ++++++++-
 nptl/Makefile                 | 10 ++++-
 nptl/tst-audit-threads-mod1.c | 38 ++++++++++++++++++
 nptl/tst-audit-threads-mod2.c | 22 +++++++++++
 nptl/tst-audit-threads.c      | 91 +++++++++++++++++++++++++++++++++++++++++++
 nptl/tst-audit-threads.h      | 84 +++++++++++++++++++++++++++++++++++++++
 6 files changed, 260 insertions(+), 4 deletions(-)
 create mode 100644 nptl/tst-audit-threads-mod1.c
 create mode 100644 nptl/tst-audit-threads-mod2.c
 create mode 100644 nptl/tst-audit-threads.c
 create mode 100644 nptl/tst-audit-threads.h

Comments

Carlos O'Donell Oct. 12, 2018, 1:03 a.m. | #1
On 10/10/18 10:57 PM, Tulio Magno Quites Machado Filho wrote:
> Florian Weimer <fw@deneb.enyo.de> writes:
> 
>> * Tulio Magno Quites Machado Filho:
>>
>>> I suspect this patch doesn't address all the comments from v1.
>>> However, I believe some of the open questions/comments may not be
>>> necessary anymore after the latest changes.
>>>
>>> I've decided to not add the new test to xtests, because it executes in
>>> less than 3s in most of my tests.  There is just a single case that
>>> takes up to 30s.
>>>
>>> Changes since v1:
>>>
>>>  - Fixed the coding style issues.
>>>  - Replaced atomic loads/store with memory fences.
>>>  - Added a test.
>>
>> I don't think the fences are correct, they still need to be combined
>> with relaxed MO loads and stores.
>>
>> Does the issue that Carlos mentioned really show up in cross-builds?
> 
> Yes, it does fail on hppa and ia64.
> But v3 (using thread fences) passes on build-many-glibcs.

We will need a v4. Please review (1), (2) and (3) carefully, feel free to
ignore (4).

(1) I added a bunch of comments.

Comments added inline.

(2) -Wl,-z,now worries.

Added some things for you to check.

(3) Fence-to-fence sync.

For fence-to-fence synchronization to work we need an acquire and release
fence, and we have that.

We are missing the atomic read and write of the guard. Please review below.
Florian mentioned this in his review. He is correct.

And all the problems are back again because you can't do atomic loads of
the large guards because they are actually the function descriptor structures.
However, this is just laziness, we used the addr because it was convenient.
It is no longer convenient. Just add an 'init' field to reloc_result and use
that as the guard to synchronize the threads against for initialization of
the results. This should solve the reloc_result problem (ignoring the issues
hppa and ia64 have with the fdesc updates across multiple threads in _dl_fixup).

(4) Review of elf_machine_fixup_plt and DL_FIXUP_MAKE_VALUE.

I reviewed the uses of elf_machine_fixup_plt, and DL_FIXUP_MAKE_VALUE to
see if there was any other case of this problem, particularly where there
might be a case where a write happens on one thread that might not be
seen in another.

I also looked at _dl_relocate_object and the initialization of all 
l_reloc_result via calloc, and that is also covered because the
atomic_thread_fence_acquire ensures any secondary thread sees the
initialization.

So just _dl_fixup for hppa and ia64 (the case not related to this issue)
still have potential ordering issues if the compiler writes ip before gp.

Nothing for you to worry about.

> Changes since v2:
> 
>  - Fixed coding style in nptl/tst-audit-threads-mod1.c.
>  - Replaced pthreads.h functions with respective support/xthread.h ones.
>  - Replaced malloc() with xcalloc() in nptl/tst-audit-threads.c.
>  - Removed bzero().
>  - Reduced the amount of functions to 7k in order to fit the relocation
>    limit  of some architectures, e.g. m68k, mips.
>  - Fixed issues in nptl/Makefile.
> 
> Changes since v1:
> 
>  - Fixed the coding style issues.
>  - Replaced atomic loads/store with memory fences.
>  - Added a test.
> 
> ---- 8< ----
> 
> The field reloc_result->addr is used to indicate if the rest of the
> fields of reloc_result have already been written, creating a
> data-dependency order.
> Reading reloc_result->addr to the variable value requires to complete
> before reading the rest of the fields of reloc_result.
> Likewise, the writes to the other fields of the reloc_result must
> complete before reloc_result->addr is updated.
> 
> Tested with build-many-glibcs.
> 
> 2018-10-10  Tulio Magno Quites Machado Filho  <tuliom@linux.ibm.com>
> 
> 	[BZ #23690]
> 	* elf/dl-runtime.c (_dl_profile_fixup): Guarantee memory
> 	modification order when accessing reloc_result->addr.
> 	* nptl/Makefile (tests): Add tst-audit-threads.
> 	(modules-names): Add tst-audit-threads-mod1 and
> 	tst-audit-threads-mod2.
> 	Add rules to build tst-audit-threads.
> 	* nptl/tst-audit-threads-mod1.c: New file.
> 	* nptl/tst-audit-threads-mod2.c: Likewise.
> 	* nptl/tst-audit-threads.c: Likewise.
> 	* nptl/tst-audit-threads.h: Likewise.
> 
> Signed-off-by: Tulio Magno Quites Machado Filho <tuliom@linux.ibm.com>

Please send v4.

> ---
>  elf/dl-runtime.c              | 19 ++++++++-
>  nptl/Makefile                 | 10 ++++-
>  nptl/tst-audit-threads-mod1.c | 38 ++++++++++++++++++
>  nptl/tst-audit-threads-mod2.c | 22 +++++++++++
>  nptl/tst-audit-threads.c      | 91 +++++++++++++++++++++++++++++++++++++++++++
>  nptl/tst-audit-threads.h      | 84 +++++++++++++++++++++++++++++++++++++++
>  6 files changed, 260 insertions(+), 4 deletions(-)
>  create mode 100644 nptl/tst-audit-threads-mod1.c
>  create mode 100644 nptl/tst-audit-threads-mod2.c
>  create mode 100644 nptl/tst-audit-threads.c
>  create mode 100644 nptl/tst-audit-threads.h
> 
> diff --git a/elf/dl-runtime.c b/elf/dl-runtime.c
> index 63bbc89776..c1ba372bd7 100644
> --- a/elf/dl-runtime.c
> +++ b/elf/dl-runtime.c
> @@ -183,9 +183,18 @@ _dl_profile_fixup (
>    /* This is the address in the array where we store the result of previous
>       relocations.  */
>    struct reloc_result *reloc_result = &l->l_reloc_result[reloc_index];
> -  DL_FIXUP_VALUE_TYPE *resultp = &reloc_result->addr;
>  
> +  /* CONCURRENCY NOTES:
> +

Suggest adding:

Multiple threads may be calling the same PLT sequence and with LD_AUDIT enabled
they will be calling into _dl_profile_fixup to update the reloc_result with the
result of the lazy resolution. The reloc_result guard variable is addr, and we
use relaxed MO loads and stores of it, along with atomic_thread_fence_acquire
and atomic_thread_fence_release, to ensure that the contents of the structure
are consistent with the loaded value of the guard.

> +     The following code uses DL_FIXUP_VALUE_CODE_ADDR to access a potential
> +     member of reloc_result->addr to indicate if it is the first time this
> +     object is being relocated.
> +     Reading/Writing from/to reloc_result->addr must not happen before previous
> +     writes to reloc_result complete as they could end-up with an incomplete
> +     struct.  */

OK.

> +  DL_FIXUP_VALUE_TYPE *resultp = &reloc_result->addr;

OK.

>    DL_FIXUP_VALUE_TYPE value = *resultp;

Not OK. This is a guard. You read it here, and write to it below.
That's a data race. Both need to be atomic accesses with any MO you want.
On hppa this will require a new enough compiler to get a 64-bit atomic load.
On ia64 I don't know if there is a usable 128-bit atomic.

The key problem here is that addr is being overloaded as a guard here because
it was convenient. It's non-zero when the symbol is initialized and zero
when it's not. However, for arches with function descriptors you've found
out that using it is causing problems because it's too big for traditional atomic
operations.

What you really need is a new "init" field in reloc_result, make it a word,
and then use word-sized atomics on that with relaxed MO, and keep the fences.

> +  atomic_thread_fence_acquire ();

OK, this acquire ensures all previous writes on threads are visible.

>    if (DL_FIXUP_VALUE_CODE_ADDR (value) == 0)

OK, either this is zero, and we redo the initialization, or it's not
and we see all the results of the previous writes because of the
atomic_thread_fence_acquire.

>      {
>        /* This is the first time we have to relocate this object.  */
> @@ -346,7 +355,13 @@ _dl_profile_fixup (
>  
>        /* Store the result for later runs.  */
>        if (__glibc_likely (! GLRO(dl_bind_not)))
> -	*resultp = value;

OK.

> +	{
> +	  /* Guarantee all previous writes complete before
> +	     resultp (aka. reloc_result->addr) is updated.  See CONCURRENCY
> +	     NOTES earlier.  */
> +	  atomic_thread_fence_release ();

OK, this ensures that any writes done by the auditors, if any, are seen by
subsequent threads attempting a resolution of the same function, and this
sequences-before all the writes with the earlier acquire.

> +	  *resultp = value;

Not OK, see above, this needs to be an atomic relaxed-MO store to 'init'
or something smaller than value.

You need a guard small enough that arches will have an atomic load/store
to the size.

> +	}
>      }
>  
>    /* By default we do not call the pltexit function.  */
> diff --git a/nptl/Makefile b/nptl/Makefile
> index be8066524c..48aba579c0 100644
> --- a/nptl/Makefile
> +++ b/nptl/Makefile
> @@ -382,7 +382,8 @@ tests += tst-cancelx2 tst-cancelx3 tst-cancelx4 tst-cancelx5 \
>  	 tst-cleanupx0 tst-cleanupx1 tst-cleanupx2 tst-cleanupx3 tst-cleanupx4 \
>  	 tst-oncex3 tst-oncex4
>  ifeq ($(build-shared),yes)
> -tests += tst-atfork2 tst-tls4 tst-_res1 tst-fini1 tst-compat-forwarder
> +tests += tst-atfork2 tst-tls4 tst-_res1 tst-fini1 tst-compat-forwarder \
> +	 tst-audit-threads

OK.

>  tests-internal += tst-tls3 tst-tls3-malloc tst-tls5 tst-stackguard1
>  tests-nolibpthread += tst-fini1
>  ifeq ($(have-z-execstack),yes)
> @@ -394,7 +395,8 @@ modules-names = tst-atfork2mod tst-tls3mod tst-tls4moda tst-tls4modb \
>  		tst-tls5mod tst-tls5moda tst-tls5modb tst-tls5modc \
>  		tst-tls5modd tst-tls5mode tst-tls5modf tst-stack4mod \
>  		tst-_res1mod1 tst-_res1mod2 tst-execstack-mod tst-fini1mod \
> -		tst-join7mod tst-compat-forwarder-mod
> +		tst-join7mod tst-compat-forwarder-mod tst-audit-threads-mod1 \
> +		tst-audit-threads-mod2

OK.

>  extra-test-objs += $(addsuffix .os,$(strip $(modules-names))) \
>  		   tst-cleanup4aux.o tst-cleanupx4aux.o
>  test-extras += tst-cleanup4aux tst-cleanupx4aux
> @@ -709,6 +711,10 @@ endif
>  
>  $(objpfx)tst-compat-forwarder: $(objpfx)tst-compat-forwarder-mod.so
>  
> +$(objpfx)tst-audit-threads: $(objpfx)tst-audit-threads-mod2.so
> +$(objpfx)tst-audit-threads.out: $(objpfx)tst-audit-threads-mod1.so
> +tst-audit-threads-ENV = LD_AUDIT=$(objpfx)tst-audit-threads-mod1.so

Do we need to add -Wl,-z,lazy?

Users might have -Wl,-z,now as the default for their build?

With BIND_NOW the test doesn't test what we want.

> +
>  # The tests here better do not run in parallel
>  ifneq ($(filter %tests,$(MAKECMDGOALS)),)
>  .NOTPARALLEL:
> diff --git a/nptl/tst-audit-threads-mod1.c b/nptl/tst-audit-threads-mod1.c
> new file mode 100644
> index 0000000000..194c65a6bb
> --- /dev/null
> +++ b/nptl/tst-audit-threads-mod1.c
> @@ -0,0 +1,38 @@
> +/* Dummy audit library for test-audit-threads.
> +
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +
> +#include <elf.h>
> +#include <link.h>
> +#include <stdio.h>
> +#include <assert.h>
> +#include <string.h>
> +

Suggest:

/* We must use a dummy LD_AUDIT module to force the dynamic loader to
   *not* update the real PLT, and instead use a cached value for the
   lazy resolution result. It is the update of that cached value that
   we are testing for correctness by doing this.  */

> +volatile int count = 0;
> +
> +unsigned int
> +la_version (unsigned int ver)
> +{
> +  return 1;
> +}
> +
> +unsigned int
> +la_objopen (struct link_map *map, Lmid_t lmid, uintptr_t *cookie)
> +{
> +  return LA_FLG_BINDTO | LA_FLG_BINDFROM;
> +}

I'm worried binutils will optimize away the PLT entries and this test will
pass without failing but the lazy resolution will not be tested.

Can we just *count* the number of PLT resolutions and see if they match?

Counting the PLT resolutions and using -Wl,-z,lazy (above) will mean we have
done our best to test what we intended to test.
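One way to count the resolutions, sketched below, is the la_symbind64 hook from rtld-audit(7), which the dynamic loader calls once per lazy PLT resolution routed through the auditor. This is a suggestion for the audit module, not the actual v4 code; a 32-bit build would need the equivalent la_symbind32, and the test program would compare the final count against the expected number of functions.

```c
/* Hypothetical counting audit module; only the la_* hook names and
   signatures come from rtld-audit(7), the counting is illustrative.  */
#define _GNU_SOURCE   /* For Lmid_t and the link map definitions.  */
#include <elf.h>
#include <link.h>

volatile int count = 0;

unsigned int
la_version (unsigned int ver)
{
  return 1;
}

unsigned int
la_objopen (struct link_map *map, Lmid_t lmid, uintptr_t *cookie)
{
  return LA_FLG_BINDTO | LA_FLG_BINDFROM;
}

uintptr_t
la_symbind64 (Elf64_Sym *sym, unsigned int ndx, uintptr_t *refcook,
	      uintptr_t *defcook, unsigned int *flags, const char *symname)
{
  /* Called once per lazy resolution; counting the calls verifies that
     the PLT entries were not optimized away by the linker.  */
  __atomic_fetch_add (&count, 1, __ATOMIC_RELAXED);
  return sym->st_value;
}
```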

> diff --git a/nptl/tst-audit-threads-mod2.c b/nptl/tst-audit-threads-mod2.c
> new file mode 100644
> index 0000000000..6ceedb0196
> --- /dev/null
> +++ b/nptl/tst-audit-threads-mod2.c
> @@ -0,0 +1,22 @@
> +/* Shared object with a huge number of functions for test-audit-threads.
> +
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +

Suggest:

/* Define all the retNumN functions in a library.  */

Just to be clear that this must be distinct from the executable.

> +/* Define all the retNumN functions.  */
> +#define definenum
> +#include "tst-audit-threads.h"
> diff --git a/nptl/tst-audit-threads.c b/nptl/tst-audit-threads.c
> new file mode 100644
> index 0000000000..0c81edc762
> --- /dev/null
> +++ b/nptl/tst-audit-threads.c
> @@ -0,0 +1,91 @@
> +/* Test multi-threading using LD_AUDIT.
> +
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +

Suggest:

/* This test uses a dummy LD_AUDIT library (test-audit-threads-mod1) and a
   library with a huge number of functions in order to validate lazy symbol
   binding with an audit library.  We use one thread per CPU to test that
   concurrent lazy resolution does not have any defects which would cause
   the process to fail.  We use an LD_AUDIT library to force the testing of
   the relocation resolution caching code in the dynamic loader, i.e.
   _dl_runtime_profile and _dl_profile_fixup.  */

> +/* This test uses a dummy LD_AUDIT library (test-audit-threads-mod1) and a
> +   library with a huge number of functions in order to validate lazy symbol
> +   binding with an audit library.  */
> +
> +#include <support/xthread.h>
> +#include <strings.h>
> +#include <stdlib.h>
> +#include <sys/sysinfo.h>
> +
> +static int do_test (void);
> +
> +/* This test usually takes less than 3s to run.  However, there are cases that
> +   take up to 30s.  */
> +#define TIMEOUT 60
> +#define TEST_FUNCTION do_test ()
> +#include "../test-skeleton.c"
> +

Suggest:

/* Declare the functions we are going to call.  */

> +#define externnum
> +#include "tst-audit-threads.h"
> +#undef externnum
> +
> +int num_threads;
> +pthread_barrier_t barrier;
> +
> +void
> +sync_all (int num)
> +{
> +  pthread_barrier_wait (&barrier);
> +}
> +
> +void
> +call_all_ret_nums (void)
> +{

Suggest:

/* Call each function one at a time from all threads.  */

> +#define callnum
> +#include "tst-audit-threads.h"
> +#undef callnum
> +}
> +
> +void *
> +thread_main (void *unused)
> +{
> +  call_all_ret_nums ();
> +  return NULL;
> +}
> +
> +#define STR2(X) #X
> +#define STR(X) STR2(X)
> +
> +static int
> +do_test (void)
> +{
> +  int i;
> +  pthread_t *threads;
> +
> +  num_threads = get_nprocs ();
> +  if (num_threads <= 1)
> +    num_threads = 2;

OK.

> +
> +  /* Used to synchronize all the threads after calling each retNumN.  */
> +  xpthread_barrier_init (&barrier, NULL, num_threads);

OK.

> +
> +  threads = (pthread_t *) xcalloc (num_threads, sizeof(pthread_t));
> +  for (i = 0; i < num_threads; i++)
> +    threads[i] = xpthread_create(NULL, thread_main, NULL);
> +
> +  for (i = 0; i < num_threads; i++)
> +    xpthread_join(threads[i]);
> +
> +  free (threads);
> +
> +  return 0;

OK.

> +}
> diff --git a/nptl/tst-audit-threads.h b/nptl/tst-audit-threads.h
> new file mode 100644
> index 0000000000..cb17645f4b
> --- /dev/null
> +++ b/nptl/tst-audit-threads.h
> @@ -0,0 +1,84 @@
> +/* Helper header for test-audit-threads.
> +
> +   Copyright (C) 2018 Free Software Foundation, Inc.
> +   This file is part of the GNU C Library.
> +
> +   The GNU C Library is free software; you can redistribute it and/or
> +   modify it under the terms of the GNU Lesser General Public
> +   License as published by the Free Software Foundation; either
> +   version 2.1 of the License, or (at your option) any later version.
> +
> +   The GNU C Library is distributed in the hope that it will be useful,
> +   but WITHOUT ANY WARRANTY; without even the implied warranty of
> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> +   Lesser General Public License for more details.
> +
> +   You should have received a copy of the GNU Lesser General Public
> +   License along with the GNU C Library; if not, see
> +   <http://www.gnu.org/licenses/>.  */
> +

Suggest adding:

/* We use this helper to create a large number of functions, all of
   which will be resolved lazily and thus have their PLT updated.
   This is done to provide enough functions that we can statistically
   observe a thread vs. PLT resolution failure if one exists.  */

> +#define CONCAT(a, b) a ## b
> +#define NUM(x, y) CONCAT (x, y)
> +
> +#define FUNC10(x)	\
> +  FUNC (NUM (x, 0));	\
> +  FUNC (NUM (x, 1));	\
> +  FUNC (NUM (x, 2));	\
> +  FUNC (NUM (x, 3));	\
> +  FUNC (NUM (x, 4));	\
> +  FUNC (NUM (x, 5));	\
> +  FUNC (NUM (x, 6));	\
> +  FUNC (NUM (x, 7));	\
> +  FUNC (NUM (x, 8));	\
> +  FUNC (NUM (x, 9))
> +
> +#define FUNC100(x)	\
> +  FUNC10 (NUM (x, 0));	\
> +  FUNC10 (NUM (x, 1));	\
> +  FUNC10 (NUM (x, 2));	\
> +  FUNC10 (NUM (x, 3));	\
> +  FUNC10 (NUM (x, 4));	\
> +  FUNC10 (NUM (x, 5));	\
> +  FUNC10 (NUM (x, 6));	\
> +  FUNC10 (NUM (x, 7));	\
> +  FUNC10 (NUM (x, 8));	\
> +  FUNC10 (NUM (x, 9))
> +
> +#define FUNC1000(x)		\
> +  FUNC100 (NUM (x, 0));		\
> +  FUNC100 (NUM (x, 1));		\
> +  FUNC100 (NUM (x, 2));		\
> +  FUNC100 (NUM (x, 3));		\
> +  FUNC100 (NUM (x, 4));		\
> +  FUNC100 (NUM (x, 5));		\
> +  FUNC100 (NUM (x, 6));		\
> +  FUNC100 (NUM (x, 7));		\
> +  FUNC100 (NUM (x, 8));		\
> +  FUNC100 (NUM (x, 9))
> +
> +#define FUNC7000()	\
> +  FUNC1000 (1);		\
> +  FUNC1000 (2);		\
> +  FUNC1000 (3);		\
> +  FUNC1000 (4);		\
> +  FUNC1000 (5);		\
> +  FUNC1000 (6);		\
> +  FUNC1000 (7);
> +
> +#ifdef FUNC
> +# undef FUNC
> +#endif
> +
> +#ifdef externnum
> +# define FUNC(x) extern int CONCAT (retNum, x) (void)
> +#endif

OK.

> +
> +#ifdef definenum
> +# define FUNC(x) int CONCAT (retNum, x) (void) { return x; }
> +#endif

OK.

> +
> +#ifdef callnum
> +# define FUNC(x) CONCAT (retNum, x) (); sync_all (x)
> +#endif

OK.

> +
> +FUNC7000 ();
> 

OK, 7000 functions to test, all of which need resolution.
Florian Weimer Oct. 15, 2018, 12:57 p.m. | #2
* Carlos O'Donell:

> (3) Fence-to-fence sync.
>
> For fence-to-fence synchronization to work we need an acquire and release
> fence, and we have that.
>
> We are missing the atomic read and write of the guard. Please review below.
> Florian mentioned this in his review. He is correct.
>
> And all the problems are back again because you can't do atomic loads of
> the large guards because they are actually the function descriptor structures.
> However, this is just laziness, we used the addr because it was convenient.
> It is no longer convenient. Just add a 'init' field to reloc_result and use
> that as the guard to synchronize the threads against for initialization of
> the results. This should solve the reloc_result problem (ignorning the issues
> hppa and ia64 have with the fdesc updates across multiple threads in _dl_fixup).

I think due to various external factors, we should go with the
fence-based solution for now, and change it later to something which
uses an acquire/release on the code address later, using proper atomics.

I don't want to see this bug fix blocked by ia64 and hppa.  The proper
fix needs some reshuffling of the macros here, or maybe use an unused
bit in the flags field as an indicator for initialization.

> (4) Review of elf_machine_fixup_plt, and DL_FIXUP_MAKE_VALUE.	
> 
> I reviewed the uses of elf_machine_fixup_plt, and DL_FIXUP_MAKE_VALUE to
> see if there was any other case of this problem, particularly where there
> might be a case where a write happens on one thread that might not be
> seen in another.
> 
> I also looked at _dl_relocate_object and the initialization of all 
> l_reloc_result via calloc, and that is also covered because the
> atomic_thread_fence_acquire ensures any secondary thread sees the
> initialization.

I don't think the analysis is correct.  It's up to the application to
ensure that the dlopen (or at least the call to an ELF constructor in
the new DSO) happens before a call to any function in the DSO, and this
is why there is no need to synchronize the calloc with the profiling
code.

Thanks,
Florian
Carlos O'Donell Oct. 15, 2018, 1:53 p.m. | #3
On 10/15/18 8:57 AM, Florian Weimer wrote:
> * Carlos O'Donell:
> 
>> (3) Fence-to-fence sync.
>>
>> For fence-to-fence synchronization to work we need an acquire and release
>> fence, and we have that.
>>
>> We are missing the atomic read and write of the guard. Please review below.
>> Florian mentioned this in his review. He is correct.
>>
>> And all the problems are back again because you can't do atomic loads of
>> the large guards because they are actually the function descriptor structures.
>> However, this is just laziness, we used the addr because it was convenient.
>> It is no longer convenient. Just add a 'init' field to reloc_result and use
>> that as the guard to synchronize the threads against for initialization of
>> the results. This should solve the reloc_result problem (ignorning the issues
>> hppa and ia64 have with the fdesc updates across multiple threads in _dl_fixup).
> 
> I think due to various external factors, we should go with the
> fence-based solution for now, and change it later to something which
> uses an acquire/release on the code address later, using proper atomics.

Let me clarify.

The fence fix as proposed in v3 is wrong for all architectures.

We are emulating C/C++ 11 atomics within glibc, and a fence-to-fence sync
*requires* an atomic load / store of the guard, you can't use a non-atomic
access. The point of the atomic load/store is to ensure you don't have a
data race.

> I don't want to see this bug fix blocked by ia64 and hppa.  The proper
> fix needs some reshuffling of the macros here, or maybe use an unused
> bit in the flags field as an indicator for initialization.

The fix for this is straight forward.

Add a new initializer field to the reloc_result, it's an internal data
structure. It can be as big as we want and we can optimize it later.

You don't need to do any big cleanups, but we *do* have to get the
synchronization correct.

>> (4) Review of elf_machine_fixup_plt, and DL_FIXUP_MAKE_VALUE.	
>>
>> I reviewed the uses of elf_machine_fixup_plt, and DL_FIXUP_MAKE_VALUE to
>> see if there was any other case of this problem, particularly where there
>> might be a case where a write happens on one thread that might not be
>> seen in another.
>>
>> I also looked at _dl_relocate_object and the initialization of all 
>> l_reloc_result via calloc, and that is also covered because the
>> atomic_thread_fence_acquire ensures any secondary thread sees the
>> initialization.
> 
> I don't think the analysis is correct.  It's up to the application to
> ensure that the dlopen (or at least the call to an ELF constructor in
> the new DSO) happens before a call to any function in the DSO, and this
> is why there is no need to synchronize the calloc with the profiling
> code.

I agree, you would need some inter-thread synchronization to ensure all
other threads knew the dlopen was complete, and that would ensure that
the writes would be seen.
Florian Weimer Oct. 17, 2018, 8:12 p.m. | #4
* Carlos O'Donell:

> On 10/15/18 8:57 AM, Florian Weimer wrote:
>> * Carlos O'Donell:
>> 
>>> (3) Fence-to-fence sync.
>>>
>>> For fence-to-fence synchronization to work we need an acquire and release
>>> fence, and we have that.
>>>
>>> We are missing the atomic read and write of the guard. Please review below.
>>> Florian mentioned this in his review. He is correct.
>>>
>>> And all the problems are back again because you can't do atomic loads of
>>> the large guards because they are actually the function descriptor structures.
>>> However, this is just laziness, we used the addr because it was convenient.
>>> It is no longer convenient. Just add a 'init' field to reloc_result and use
>>> that as the guard to synchronize the threads against for initialization of
>>> the results. This should solve the reloc_result problem (ignorning the issues
>>> hppa and ia64 have with the fdesc updates across multiple threads in _dl_fixup).
>> 
>> I think due to various external factors, we should go with the
>> fence-based solution for now, and change it later to something which
>> uses an acquire/release on the code address later, using proper atomics.
>
> Let me clarify.
>
> The fence fix as proposed in v3 is wrong for all architectures.
>
> We are emulating C/C++ 11 atomics within glibc, and a fence-to-fence sync
> *requires* an atomic load / store of the guard, you can't use a non-atomic
> access. The point of the atomic load/store is to ensure you don't have a
> data race.

Carlos, I'm sorry, but I think your position is logically inconsistent.

Formally, you cannot follow the memory model here without a substantial
rewrite of the code, breaking up the struct fdesc abstraction.  The
reason is that without blocking synchronization, you still end up with
two non-atomic writes to the same object, which is a data race, and
undefined, even if both threads write the same value.

As far as I can see, POWER is !USE_ATOMIC_COMPILER_BUILTINS, so our
relaxed MO store is just a regular store, without a compiler barrier.
That means after all that rewriting, we basically end up with the same
code and the same formal data race that we would have when we just used
fences.

This is different for USE_ATOMIC_COMPILER_BUILTINS architectures, where
we do use actual atomic stores.  But for !USE_ATOMIC_COMPILER_BUILTINS,
the fence-based approach is as good as we can get, with or without
breaking the abstractions.

So as I said, given the constraints we are working under, we should go
with the solution based on fences, and have that tested on Aarch64 as
well.

>> I don't want to see this bug fix blocked by ia64 and hppa.  The proper
>> fix needs some reshuffling of the macros here, or maybe use an unused
>> bit in the flags field as an indicator for initialization.
>
> The fix for this is straight forward.
>
> Add a new initializer field to the reloc_result, it's an internal data
> structure. It can be as big as we want and we can optimize it later.
>
> You don't need to do any big cleanups, but we *do* have to get the
> synchronization correct.

See above; I don't think we can get the synchronization formally
correct, even with any level of cleanups.  In the data race case, we
would have

  atomic acquire MO load of initializer field
  non-atomic writes to various struct fields
  atomic release MO store to initializer field

in each thread.  That's still undefined behavior due to the non-atomic
stores in the middle.

Let me reiterate: Just because you say our atomics are C11, it doesn't
make them so.  They are syntactically different, and they are not
presented to the compiler as atomics for !USE_ATOMIC_COMPILER_BUILTINS.
I know that you and Torvald didn't consider this a problem in the past,
but maybe you can reconsider your position?

Thanks,
Florian
Carlos O'Donell Oct. 18, 2018, 2:02 a.m. | #5
On 10/17/18 4:12 PM, Florian Weimer wrote:
> * Carlos O'Donell:
> 
>> On 10/15/18 8:57 AM, Florian Weimer wrote:
>>> * Carlos O'Donell:
>>>
>>>> (3) Fence-to-fence sync.
>>>>
>>>> For fence-to-fence synchronization to work we need an acquire and release
>>>> fence, and we have that.
>>>>
>>>> We are missing the atomic read and write of the guard. Please review below.
>>>> Florian mentioned this in his review. He is correct.
>>>>
>>>> And all the problems are back again because you can't do atomic loads of
>>>> the large guards because they are actually the function descriptor structures.
>>>> However, this is just laziness, we used the addr because it was convenient.
>>>> It is no longer convenient. Just add a 'init' field to reloc_result and use
>>>> that as the guard to synchronize the threads against for initialization of
>>>> the results. This should solve the reloc_result problem (ignorning the issues
>>>> hppa and ia64 have with the fdesc updates across multiple threads in _dl_fixup).
>>>
>>> I think due to various external factors, we should go with the
>>> fence-based solution for now, and change it later to something which
>>> uses an acquire/release on the code address later, using proper atomics.
>>
>> Let me clarify.
>>
>> The fence fix as proposed in v3 is wrong for all architectures.
>>
>> We are emulating C/C++ 11 atomics within glibc, and a fence-to-fence sync
>> *requires* an atomic load / store of the guard, you can't use a non-atomic
>> access. The point of the atomic load/store is to ensure you don't have a
>> data race.
> 
> Carlos, I'm sorry, but I think your position is logically inconsistent.

Yes, it *is* logically inconsistent. I agree with you.

However, to *be* logically consistent I'd have to fix all data races in
this code in one go, and I can't; it's too much work.

All I want is for any *changes* to follow C11 semantics, and I think we
can do that without major surgery.

Consider it an ideological flaw that I want everyone to practice following
a consistent memory model and think about these problems in terms of that
memory model, and evaluate patches using that model.

> Formally, you cannot follow the memory model here without a substantial
> rewrite of the code, breaking up the struct fdesc abstraction.  The
> reason is that without blocking synchronization, you still end up with
> two non-atomic writes to the same object, which is a data race, and
> undefined, even if both threads write the same value.

There are two distinct problems here, and each can be handled separately.

The first is the problem at hand: there is a data-dependency issue
with the update of struct reloc_result.  We have multiple threads
writing to the reloc_result structure; in general those threads write
the same value (locks are taken in _dl_lookup_symbol_x), and while
this is a data race, I don't care about it and we aren't going to fix
it.  The only thing we should do is ensure that a thread that
determines the reloc_result is initialized sees all the correct values
in the structure and not a sheared result.  That is all that we are
fixing here; call that the "change."

We can follow the memory model far enough to avoid a sheared result
being read out of the struct reloc_result.

We have not fixed the data races that occur when two threads read a
zero addr value and both use non-atomic writes to update reloc_result,
and I don't intend the patch to fix that. I don't require that.

> As far as I can see, POWER is !USE_ATOMIC_COMPILER_BUILTINS, so our
> relaxed MO store is just a regular store, without a compiler barrier.
> That means after all that rewriting, we basically end up with the same
> code and the same formal data race that we would have when we just used
> fences.

That's fine.

The use of the guard+fence-to-fence sync is, from a C11 perspective,
correct.  However, I recommend adding a reloc_result->reloc_init field
and using it with acquire loads and release stores.

> This is different for USE_ATOMIC_COMPILER_BUILTINS architectures, where
> we do use actual atomic stores.  But for !USE_ATOMIC_COMPILER_BUILTINS,
> the fence-based approach is as good as we can get, with or without
> breaking the abstractions.

We can do better.

> So as I said, given the constraints we are working under, we should go
> with the solution based on fences, and have that tested on AArch64 as
> well.

I wholeheartedly appreciate a pragmatic approach to these problems, but
I still contend that we can do better without much more work.

>>> I don't want to see this bug fix blocked by ia64 and hppa.  The proper
>>> fix needs some reshuffling of the macros here, or maybe use an unused
>>> bit in the flags field as an indicator for initialization.
>>
>> The fix for this is straight forward.
>>
>> Add a new initializer field to the reloc_result, it's an internal data
>> structure. It can be as big as we want and we can optimize it later.
>>
>> You don't need to do any big cleanups, but we *do* have to get the
>> synchronization correct.
> 
> See above; I don't think we can get the synchronization formally
> correct, even with any level of cleanups.  In the data race case, we
> would have
> 
>   atomic acquire MO load of initializer field
>   non-atomic writes to various struct fields
>   atomic release MO store to initializer field
> 
> in each thread.  That's still undefined behavior due to the blocking
> stores in the middle.

That's fine. The changes made are correct, even if the whole algorithm
itself is not.

> Let me reiterate: Just because you say our atomics are C11, it doesn't
> make them so.  They are syntactically different, and they are not
> presented to the compiler as atomics for !USE_ATOMIC_COMPILER_BUILTINS.
> I know that you and Torvald didn't consider this a problem in the past,
> but maybe you can reconsider your position?

My position is unchanged.

If I could summarize our positions, I would write:

(1) The "Pragmatic approach"

- Since we don't have C11 atomics, we should just use the fences because
  they fix the data dependency issue, and stop there. We need not go any
  further until we are ready to fix the underlying algorithm of the result
  updates and *then* we can follow C11.

(2) The "Incremental C11 approach"

- Assume we are C11, and take an incremental approach where we fix the
  data dependency issue using correct synchronization primitives, even
  if it doesn't solve all of the data races.

Did I summarize your position accurately?

I prefer (2).

Tulio can have a preference also.

I spoke to Tulio on IRC and he says he has a working v4 in build-many-glibcs.

I figure we'll commit something probably tomorrow for this.

Just for the record I'm attaching my WIP v4 so you can see what I mean about
the solution. Yes, I did make the structure 4 bytes larger, and that will have
a real impact, but it removes the dependency on the size of the function pointer,
and I like that for future maintenance, and we'll likely rewrite the whole struct
when we fix 23790 to get a more optimal layout.
Carlos O'Donell Oct. 18, 2018, 2:17 a.m. | #6
On 10/17/18 10:02 PM, Carlos O'Donell wrote:
> My position is unchanged.
> 
> If I could summarize our positions, I would write:
> 
> (1) The "Pragmatic approach"
> 
> - Since we don't have C11 atomics, we should just use the fences because
>   they fix the data dependency issue, and stop there. We need not go any
>   further until we are ready to fix the underlying algorithm of the result
>   updates and *then* we can follow C11.
> 
> (2) The "Incremental C11 approach"
> 
> - Assume we are C11, and take an incremental approach where we fix the
>   data dependency issue using correct synchronization primitives, even
>   if it doesn't solve all of the data races.
> 
> Did I summarize your position accurately?
> 
> I prefer (2).
> 
> Tulio can have a preference also.
> 
> I spoke to Tulio on IRC and he says he has a working v4 in build-many-glibcs.
> 
> I figure we'll commit something probably tomorrow for this.
> 
> Just for the record I'm attaching my WIP v4 so you can see what I mean about
> the solution. Yes, I did make the structure 4 bytes larger, and that will have
> a real impact, but it removes the dependency on the size of the function pointer,
> and I like that for future maintenance, and we'll likely rewrite the whole struct
> when we fix 23790 to get a more optimal layout.

I'm happy to see that Tulio and I basically had the same solution for v4.

Actually, I have a bug in mine where I didn't set the local reloc_init
to 1, which results in the pltenter hook not being called for all threads
doing the initialization, but Tulio's code has this fixed :-)

I've reviewed Tulio's patch and so with v5 I think we're done.
Florian Weimer Oct. 18, 2018, 7:24 a.m. | #7
* Carlos O'Donell:

> The use of the guard+fence-to-fence sync is, from a C11 perspective,
> correct.

I really don't think this is true:

| Two expression evaluations conflict if one of them modifies a memory
| location and the other one reads or modifies the same memory location.

(C11 5.1.2.4p4)

| The execution of a program contains a data race if it contains two
| conflicting actions in different threads, at least one of which is not
| atomic, and neither happens before the other. Any such data race
| results in undefined behavior.

(C11 5.1.2.4p25)

We still have unordered conflicting non-atomic writes after Tulio's
patch.  I don't think they matter to us.  But this is *not* correct for
C11.

Thanks,
Florian
Carlos O'Donell Oct. 18, 2018, 1:39 p.m. | #8
On 10/18/18 3:24 AM, Florian Weimer wrote:
> * Carlos O'Donell:
> 
>> The use of the guard+fence-to-fence sync is, from a C11 perspective,
>> correct.
> 
> I really don't think this is true:
> 
> | Two expression evaluations conflict if one of them modifies a memory
> | location and the other one reads or modifies the same memory location.
> 
> (C11 5.1.2.4p4)
> 
> | The execution of a program contains a data race if it contains two
> | conflicting actions in different threads, at least one of which is not
> | atomic, and neither happens before the other. Any such data race
> | results in undefined behavior.
> 
> (C11 5.1.2.4p25)
> 
> We still have unordered conflicting non-atomic writes after Tulio's
> patch.  I don't think they matter to us.  But this is *not* correct for
> C11.

I agree completely.  My point is that the change, the specific lines Tulio
is touching, and the changes made, are correct: a fence-to-fence sync
requires an atomic guard access.  I agree it doesn't fix the actual problem
of multiple threads doing the same updates to the reloc_result.

glibc is *full* of data races, and that doesn't mean we will just give up
on using C11 semantics until we can fix them all. Any changes we do, we
should do them so they are correct.

It really feels like we agree, but we're talking past each other.  Did my
previous email clarify our positions and which one I choose and why?

See:
https://www.sourceware.org/ml/libc-alpha/2018-10/msg00320.html

If I didn't understand your position correctly, please correct what I 
wrote so I can understand your suggestion.
Adhemerval Zanella Oct. 18, 2018, 6:21 p.m. | #9
On 18/10/2018 10:39, Carlos O'Donell wrote:
> On 10/18/18 3:24 AM, Florian Weimer wrote:
>> * Carlos O'Donell:
>>
>>> The use of the guard+fence-to-fence sync is, from a C11 perspective,
>>> correct.
>>
>> I really don't think this is true:
>>
>> | Two expression evaluations conflict if one of them modifies a memory
>> | location and the other one reads or modifies the same memory location.
>>
>> (C11 5.1.2.4p4)
>>
>> | The execution of a program contains a data race if it contains two
>> | conflicting actions in different threads, at least one of which is not
>> | atomic, and neither happens before the other. Any such data race
>> | results in undefined behavior.
>>
>> (C11 5.1.2.4p25)
>>
>> We still have unordered conflicting non-atomic writes after Tulio's
>> patch.  I don't think they matter to us.  But this is *not* correct for
>> C11.
> 
> I agree completely. My point is that the change, the specific lines Tulio
> is touching, and the changes made, are correct, a fence-to-fence sync
> requires an atomic guard access. I agree it doesn't fix the actual problem
> of multiple threads doing the same updates to the reloc_result.
> 
> glibc is *full* of data races, and that doesn't mean we will just give up
> on using C11 semantics until we can fix them all. Any changes we do, we
> should do them so they are correct.
> 
> It really feels like we agree, but we're talking past eachother.  Did my
> previous email clarify our positions and which one I choose and why?
> 
> See:
> https://www.sourceware.org/ml/libc-alpha/2018-10/msg00320.html
> 
> If I didn't understand your position correctly, please correct what I 
> wrote so I can understand your suggestion.
> 

Wouldn't just disabling lazy resolution for LD_AUDIT be a simpler solution?
More and more distributions set bind-now as the default build option, and
auditing already implies some performance overhead (not to mention that the
lazy-resolution performance gain might not hold up in real-world cases).
Carlos O'Donell Oct. 18, 2018, 6:43 p.m. | #10
On 10/18/18 2:21 PM, Adhemerval Zanella wrote:
> 
> 
> On 18/10/2018 10:39, Carlos O'Donell wrote:
>> On 10/18/18 3:24 AM, Florian Weimer wrote:
>>> * Carlos O'Donell:
>>>
>>>> The use of the guard+fence-to-fence sync is, from a C11 perspective,
>>>> correct.
>>>
>>> I really don't think this is true:
>>>
>>> | Two expression evaluations conflict if one of them modifies a memory
>>> | location and the other one reads or modifies the same memory location.
>>>
>>> (C11 5.1.2.4p4)
>>>
>>> | The execution of a program contains a data race if it contains two
>>> | conflicting actions in different threads, at least one of which is not
>>> | atomic, and neither happens before the other. Any such data race
>>> | results in undefined behavior.
>>>
>>> (C11 5.1.2.4p25)
>>>
>>> We still have unordered conflicting non-atomic writes after Tulio's
>>> patch.  I don't think they matter to us.  But this is *not* correct for
>>> C11.
>>
>> I agree completely. My point is that the change, the specific lines Tulio
>> is touching, and the changes made, are correct, a fence-to-fence sync
>> requires an atomic guard access. I agree it doesn't fix the actual problem
>> of multiple threads doing the same updates to the reloc_result.
>>
>> glibc is *full* of data races, and that doesn't mean we will just give up
>> on using C11 semantics until we can fix them all. Any changes we do, we
>> should do them so they are correct.
>>
>> It really feels like we agree, but we're talking past eachother.  Did my
>> previous email clarify our positions and which one I choose and why?
>>
>> See:
>> https://www.sourceware.org/ml/libc-alpha/2018-10/msg00320.html
>>
>> If I didn't understand your position correctly, please correct what I 
>> wrote so I can understand your suggestion.
>>
> 
> Wouldn't just disable lazy-resolution for LD_AUDIT be a simpler solution?

This is not the question I would ask myself in this case.

Consider that auditing is independent of the manner in which the application
is deployed by the user (built with or without lazy binding).

Thus enabling auditing should have as little impact on the underlying
application deployment as possible.

Forcing immediate binding for LD_AUDIT has an impact we cannot measure,
because we aren't the user with the application.

The point of these features is to allow users to customize their choices
to meet their application needs.  It is not one-size-fits-all.

> More and more distributions are set bind-now as default build option and
> audition already implies some performance overhead (not considering the
> lazy-resolution performance gain might also not represent true in real
> world cases).
 
Distribution choices are different from user application choices.

Sometimes we make unilateral choices, but only if it's a clear win.

The most recent case was AArch64 TLSDESC, where Arm decided that TLSDESC
would always be resolved non-lazily (Szabolcs will correct me if I'm wrong).
This was a case where the synchronization required to update the TLSDESC
was so costly on a per-function-call basis that it was clearly always a
win to force TLSDESC to always be immediately bound, and drop the required
synchronization (a cost you always had to pay).

Here the situation is less clear, and we have less data with which to make
the choice. Selection of lazy vs. non-lazy is still a choice we give users
and it is independent of auditing.

In summary:

- Selection of lazy vs non-lazy binding is presently an orthogonal user
  choice from auditing.

- Distribution choices are about general solutions that work best for a
  large number of users.

- Lastly, a one-size-fits-all solution doesn't work best for all users.

Unless there is a very strong and compelling reason to force non-lazy-binding
for LD_AUDIT, I would not recommend we do it. It's just a question of user
choice.

I also think that the new reloc_result.init field can now be used to
implement a lockless algorithm to update the relocs without data races,
but it would be "part 2" of fixing P&C for LD_AUDIT.
Adhemerval Zanella Oct. 18, 2018, 7:40 p.m. | #11
On 18/10/2018 15:43, Carlos O'Donell wrote:
> On 10/18/18 2:21 PM, Adhemerval Zanella wrote:
>>
>>
>> On 18/10/2018 10:39, Carlos O'Donell wrote:
>>> On 10/18/18 3:24 AM, Florian Weimer wrote:
>>>> * Carlos O'Donell:
>>>>
>>>>> The use of the guard+fence-to-fence sync is, from a C11 perspective,
>>>>> correct.
>>>>
>>>> I really don't think this is true:
>>>>
>>>> | Two expression evaluations conflict if one of them modifies a memory
>>>> | location and the other one reads or modifies the same memory location.
>>>>
>>>> (C11 5.1.2.4p4)
>>>>
>>>> | The execution of a program contains a data race if it contains two
>>>> | conflicting actions in different threads, at least one of which is not
>>>> | atomic, and neither happens before the other. Any such data race
>>>> | results in undefined behavior.
>>>>
>>>> (C11 5.1.2.4p25)
>>>>
>>>> We still have unordered conflicting non-atomic writes after Tulio's
>>>> patch.  I don't think they matter to us.  But this is *not* correct for
>>>> C11.
>>>
>>> I agree completely. My point is that the change, the specific lines Tulio
>>> is touching, and the changes made, are correct, a fence-to-fence sync
>>> requires an atomic guard access. I agree it doesn't fix the actual problem
>>> of multiple threads doing the same updates to the reloc_result.
>>>
>>> glibc is *full* of data races, and that doesn't mean we will just give up
>>> on using C11 semantics until we can fix them all. Any changes we do, we
>>> should do them so they are correct.
>>>
>>> It really feels like we agree, but we're talking past eachother.  Did my
>>> previous email clarify our positions and which one I choose and why?
>>>
>>> See:
>>> https://www.sourceware.org/ml/libc-alpha/2018-10/msg00320.html
>>>
>>> If I didn't understand your position correctly, please correct what I 
>>> wrote so I can understand your suggestion.
>>>
>>
>> Wouldn't just disable lazy-resolution for LD_AUDIT be a simpler solution?
> 
> This is not the question I would ask myself in this case.
> 
> Consider that auditing is independent of the manner in which the application
> is deployed by the user (built with or without lazy binding).

I disagree: each user option we support incurs extra maintenance cost,
and in this case the possible combinations of trampoline types and
arch-specific code increase even more the burden of not only providing
the feature but also ensuring its correctness and testability.

> 
> Thus enabling auditing should have as little impact on the underlying
> application deployment as possible.
> 
> Forcing immediate binding for LD_AUDIT has an impact we cannot measure,
> because we aren't the user with the application.

I agree, but I constantly hear that lazy binding might show performance
advantages without much data to actually back this up.  Do we have actual
benchmarks and data that show it is still a relevant feature?

> 
> The point of these features is to allow for users to customize their choices
> to meet their application needs. It is not a one-siz-fits-all.
> 
>> More and more distributions are set bind-now as default build option and
>> audition already implies some performance overhead (not considering the
>> lazy-resolution performance gain might also not represent true in real
>> world cases).
>  
> Distribution choices are different from user application choices.
> 
> Sometimes we make unilateral choices, but only if it's a clear win.
> 
> The most recent case was AArch64 TLSDESC, where Arm decided that TLSDESC
> would always be resolved non-lazily (Szabolcs will correct me if I'm wrong).
> This was a case where the synchronization required to update the TLSDESC
> was so costly on a per-function-call basis that it was clearly always a
> win to force TLSDESC to always be immediately bound, and drop the required
> synchronization (a cost you always had to pay).
> 
> Here the situation is less clear, and we have less data with which to make
> the choice. Selection of lazy vs. non-lazy is still a choice we give users
> and it is independent of auditing.
> 
> In summary:
> 
> - Selection of lazy vs non-lazy binding is presently an orthogonal user
>   choice from auditing.
> 
> - Distribution choices are about general solutions that work best for a
>   large number of users.
> 
> - Lastly, a one-size-fits-all solution doesn't work best for all users.
> 
> Unless there is a very strong and compelling reason to force non-lazy-binding
> for LD_AUDIT, I would not recommend we do it. It's just a question of user
> choice.

My point is that since we have limited resources, especially for
synchronization issues, which require an extra level of care, we should
prioritize better and reevaluate some past decisions.  Some decisions were
made to handle a very specific issue in the past that might not be relevant
for current use cases, where the trade-off of
performance/usability/maintainability might have changed.

We already had some lazy-binding issues in the past (BZ#19129, BZ#18034,
BZ#726), still have some (BZ#23296, BZ#23240, BZ#21349, BZ#20107), and there
might still be some not accounted for in bugzilla for less widely used
options (ld audit, ifunc, tlsdesc, etc.).  These are just the ones I got
from a very basic bugzilla search; we might have more.

This leads me to ask whether lazy binding is still worth all the required
internal complexity, and what real-world gains we are trying to obtain
besides the option for its own sake.  I do agree that giving users more
choices is a good thing, but we need to balance usefulness, usability, and
maintenance.

> 
> I also think that the new reloc_result.init field can now be used to
> implement a lockless algorithm to update the relocs without data races,
> but it would be "part 2" of fixing P&C for LD_AUDIT.
>
Carlos O'Donell Oct. 23, 2018, 12:17 a.m. | #12
On 10/18/18 3:40 PM, Adhemerval Zanella wrote:
> I disagree, each possible user option we support incurs in extra
> maintainability and in this case the possible combination of current 
> trampoline types and arch-specific code increases even more the burden
> of not only provide, but to ensure correctness and testability.

I agree with you on this.

>> Thus enabling auditing should have as little impact on the underlying
>> application deployment as possible.
>>
>> Forcing immediate binding for LD_AUDIT has an impact we cannot measure,
>> because we aren't the user with the application.
> 
> I agree, but I constantly I hear that lazy-binding might show performance
> advantages without much data to actually to back this up. Do we have actual
> benchmarks and data that show it still a relevant feature?

There are two issues at hand.

(1) Lazy-binding provides a hook for developer tooling.

(2) Lazy-binding speeds up application startup.

We have concrete evidence for (1), it's LD_AUDIT, and latrace/ltrace, and
a bunch of other smaller developer tooling.

There is even production systems using it like Spindle:
https://computation.llnl.gov/projects/spindle

Spindle has immediate examples of where all aspects of the dynamic loading
process are slowed down by large scientific workloads.

However, we don't have any good microbenchmarks to show the difference
between lazy and non-lazy. I should write some so we can have a concrete
discussion.

I see rented cloud environments as places where lazy-binding would help
reduce CPU usage costs.

I see distribution usage of BIND_NOW as a security measure that while
important is not always relevant to users running services inside their
own networks. Why pay the performance cost of security relevant features
if you don't need them?

>>
>> The point of these features is to allow for users to customize their choices
>> to meet their application needs. It is not a one-siz-fits-all.
>>
>>> More and more distributions are set bind-now as default build option and
>>> audition already implies some performance overhead (not considering the
>>> lazy-resolution performance gain might also not represent true in real
>>> world cases).
>>  
>> Distribution choices are different from user application choices.
>>
>> Sometimes we make unilateral choices, but only if it's a clear win.
>>
>> The most recent case was AArch64 TLSDESC, where Arm decided that TLSDESC
>> would always be resolved non-lazily (Szabolcs will correct me if I'm wrong).
>> This was a case where the synchronization required to update the TLSDESC
>> was so costly on a per-function-call basis that it was clearly always a
>> win to force TLSDESC to always be immediately bound, and drop the required
>> synchronization (a cost you always had to pay).
>>
>> Here the situation is less clear, and we have less data with which to make
>> the choice. Selection of lazy vs. non-lazy is still a choice we give users
>> and it is independent of auditing.
>>
>> In summary:
>>
>> - Selection of lazy vs non-lazy binding is presently an orthogonal user
>>   choice from auditing.
>>
>> - Distribution choices are about general solutions that work best for a
>>   large number of users.
>>
>> - Lastly, a one-size-fits-all solution doesn't work best for all users.
>>
>> Unless there is a very strong and compelling reason to force non-lazy-binding
>> for LD_AUDIT, I would not recommend we do it. It's just a question of user
>> choice.
> 
> My point is since we have limited resources, specially for synchronization
> issues which required an extra level of carefulness; I see we should prioritize
> better and revaluate some taken decisions. Some decisions were made to handle a 
> very specific issue in the past which might not be relevant for current usercases,
> where the trade-off of performance/usability/maintainability might have changed.

Agreed. I think we need some benchmarks here to have a real discussion.

> We already had some lazy-bind issues in the past (BZ#19129, BZ#18034, BZ#726),
> still have some (BZ#23296, BZ#23240, BZ#21349, BZ#20107), and might still contain
> some not accounted for in bugzilla for not so widespread used options (ld audit,
> ifunc, tlsdesc, etc.). These are just the one I got from a very basic bugzilla 
> search, we might have more.

I agree; it is complicated by the fact that multiple threads resolve the
symbols at the same time.

> This lead to ask me if lazy-bind still worth all the required internal complexity
> and which real world gains we are trying to obtain besides just the option for
> itself. I do agree that giving more user choices are a better thing, but we
> need to balance usefulness, usability, and maintenance.

I don't disagree, *but* if we are going to get rid of lazy-binding, something
we have supported for a long time, it's going to have to be with good evidence
to show our users that it really doesn't matter anymore.

I hope that makes my position clearer.

In summary:

- If we are going to make a change to remove lazy-binding it has to be in an
  informed manner with results from benchmarking that allow us to give
  evidence to our users.
Adhemerval Zanella Oct. 23, 2018, 2:08 p.m. | #13
On 22/10/2018 21:17, Carlos O'Donell wrote:
> On 10/18/18 3:40 PM, Adhemerval Zanella wrote:
>> I disagree, each possible user option we support incurs in extra
>> maintainability and in this case the possible combination of current 
>> trampoline types and arch-specific code increases even more the burden
>> of not only provide, but to ensure correctness and testability.
> 
> I agree with you on this.
> 
>>> Thus enabling auditing should have as little impact on the underlying
>>> application deployment as possible.
>>>
>>> Forcing immediate binding for LD_AUDIT has an impact we cannot measure,
>>> because we aren't the user with the application.
>>
>> I agree, but I constantly I hear that lazy-binding might show performance
>> advantages without much data to actually to back this up. Do we have actual
>> benchmarks and data that show it still a relevant feature?
> 
> There are two issues at hand.
> 
> (1) Lazy-binding provides a hook for developer tooling.
> 
> (2) Lazy-binding speeds up application startup.
> 
> We have concrete evidence for (1), it's LD_AUDIT, and latrace/ltrace, and
> a bunch of other smaller developer tooling.
> 
> There is even production systems using it like Spindle:
> https://computation.llnl.gov/projects/spindle
> 
> Spindle has immediate examples of where all aspects of the dynamic loading
> process are slowed down by large scientific workloads.

Correct me if I am wrong, but from the paper it seems it intercepts the
file operations and uses a shared caching mechanism to avoid duplicating
the loading work.  My understanding is that the issue they are trying to
solve is not relocation runtime overhead, but rather parallel file system
operations when multiple processes load a bulk of shared libraries and
Python modules, incurring I/O concurrency overhead.

Also, it says rtld-audit PLT interposition is in fact a performance issue,
which is why they had to make the Spindle client handle the GOT setup
itself.  They do seem to use symbol binding to intercept open* calls to
handle script-language loading.

I understand that they adapted rtld-audit to their needs; however, it
does not really require lazy binding to intercept the library calls for
the file operations (readdir, for instance).  Also, I think it would be
feasible to call la_symbind* on first symbol resolution in non-lazy mode.

> 
> However, we don't have any good microbenchmarks to show the difference
> between lazy and non-lazy. I should write some so we can have a concrete
> discussion.
> 
> I see rented cloud environments as places where lazy-binding would help
> reduce CPU usage costs.
> 
> I see distribution usage of BIND_NOW as a security measure that while
> important is not always relevant to users running services inside their
> own networks. Why pay the performance cost of security relevant features
> if you don't need them?

I do agree with you, but my point is that 1. maybe the performance gains
do not really outweigh the code complexity and its maintainability costs,
and 2. the other factor (security in this case) might be more
cost-effective.

> 
>>>
>>> The point of these features is to allow for users to customize their choices
>>> to meet their application needs. It is not a one-siz-fits-all.
>>>
>>>> More and more distributions are setting bind-now as the default build
>>>> option, and auditing already implies some performance overhead (not
>>>> considering that the lazy-resolution performance gain might also not hold
>>>> true in real-world cases).
>>>  
>>> Distribution choices are different from user application choices.
>>>
>>> Sometimes we make unilateral choices, but only if it's a clear win.
>>>
>>> The most recent case was AArch64 TLSDESC, where Arm decided that TLSDESC
>>> would always be resolved non-lazily (Szabolcs will correct me if I'm wrong).
>>> This was a case where the synchronization required to update the TLSDESC
>>> was so costly on a per-function-call basis that it was clearly always a
>>> win to force TLSDESC to always be immediately bound, and drop the required
>>> synchronization (a cost you always had to pay).
>>>
>>> Here the situation is less clear, and we have less data with which to make
>>> the choice. Selection of lazy vs. non-lazy is still a choice we give users
>>> and it is independent of auditing.
>>>
>>> In summary:
>>>
>>> - Selection of lazy vs non-lazy binding is presently an orthogonal user
>>>   choice from auditing.
>>>
>>> - Distribution choices are about general solutions that work best for a
>>>   large number of users.
>>>
>>> - Lastly, a one-size-fits-all solution doesn't work best for all users.
>>>
>>> Unless there is a very strong and compelling reason to force non-lazy-binding
>>> for LD_AUDIT, I would not recommend we do it. It's just a question of user
>>> choice.
>>
>> My point is that since we have limited resources, especially for
>> synchronization issues, which require an extra level of carefulness, I
>> think we should prioritize better and re-evaluate some past decisions.
>> Some decisions were made to handle a very specific issue in the past that
>> might not be relevant for current use cases, where the trade-off of
>> performance/usability/maintainability might have changed.
> 
> Agreed. I think we need some benchmarks here to have a real discussion.

Agreed.

> 
>> We already had some lazy-binding issues in the past (BZ#19129, BZ#18034,
>> BZ#726), still have some (BZ#23296, BZ#23240, BZ#21349, BZ#20107), and
>> might still have some not accounted for in Bugzilla for less widely used
>> options (ld audit, ifunc, tlsdesc, etc.). These are just the ones I got
>> from a very basic Bugzilla search; we might have more.
> 
> I agree, it is complicated by the fact that multiple threads resolve the
> symbols at the same time.
> 
>> This leads me to ask whether lazy binding is still worth all the required
>> internal complexity and what real-world gains we are trying to obtain
>> besides the option itself. I do agree that giving users more choices is a
>> good thing, but we need to balance usefulness, usability, and maintenance.
> 
> I don't disagree, *but* if we are going to get rid of lazy-binding, something
> we have supported for a long time, it's going to have to be with good evidence
> to show our users that it really doesn't matter anymore.
> 
> I hope that makes my position clearer.
> 
> In summary:
> 
> - If we are going to make a change to remove lazy-binding it has to be in an
>   informed manner with results from benchmarking that allow us to give
>   evidence to our users.
> 

Your position is clear, and to make mine clear as well: I just want to check
whether removing lazy binding as the default might be an option. I also don't
want to block this patch, so we might move this discussion to another thread.
Carlos O'Donell Oct. 23, 2018, 3:20 p.m. | #14
On 10/23/18 10:08 AM, Adhemerval Zanella wrote:
> Your position is clear, and to make mine clear as well: I just want to check
> whether removing lazy binding as the default might be an option. I also don't
> want to block this patch, so we might move this discussion to another thread.
 
Yes, removing lazy-binding is an option. We just need to evaluate all the
consequences of this and act to ensure certain features like LD_AUDIT
keep working.

Patch

diff --git a/elf/dl-runtime.c b/elf/dl-runtime.c
index 63bbc89776..c1ba372bd7 100644
--- a/elf/dl-runtime.c
+++ b/elf/dl-runtime.c
@@ -183,9 +183,18 @@  _dl_profile_fixup (
   /* This is the address in the array where we store the result of previous
      relocations.  */
   struct reloc_result *reloc_result = &l->l_reloc_result[reloc_index];
-  DL_FIXUP_VALUE_TYPE *resultp = &reloc_result->addr;
 
+  /* CONCURRENCY NOTES:
+
+     The following code uses DL_FIXUP_VALUE_CODE_ADDR to access a potential
+     member of reloc_result->addr to indicate if it is the first time this
+     object is being relocated.
+     Reading/Writing from/to reloc_result->addr must not happen before previous
+     writes to reloc_result complete, as they could end up with an incomplete
+     struct.  */
+  DL_FIXUP_VALUE_TYPE *resultp = &reloc_result->addr;
   DL_FIXUP_VALUE_TYPE value = *resultp;
+  atomic_thread_fence_acquire ();
   if (DL_FIXUP_VALUE_CODE_ADDR (value) == 0)
     {
       /* This is the first time we have to relocate this object.  */
@@ -346,7 +355,13 @@  _dl_profile_fixup (
 
       /* Store the result for later runs.  */
       if (__glibc_likely (! GLRO(dl_bind_not)))
-	*resultp = value;
+	{
+	  /* Guarantee all previous writes complete before
+	     resultp (aka reloc_result->addr) is updated.  See CONCURRENCY
+	     NOTES earlier.  */
+	  atomic_thread_fence_release ();
+	  *resultp = value;
+	}
     }
 
   /* By default we do not call the pltexit function.  */
diff --git a/nptl/Makefile b/nptl/Makefile
index be8066524c..48aba579c0 100644
--- a/nptl/Makefile
+++ b/nptl/Makefile
@@ -382,7 +382,8 @@  tests += tst-cancelx2 tst-cancelx3 tst-cancelx4 tst-cancelx5 \
 	 tst-cleanupx0 tst-cleanupx1 tst-cleanupx2 tst-cleanupx3 tst-cleanupx4 \
 	 tst-oncex3 tst-oncex4
 ifeq ($(build-shared),yes)
-tests += tst-atfork2 tst-tls4 tst-_res1 tst-fini1 tst-compat-forwarder
+tests += tst-atfork2 tst-tls4 tst-_res1 tst-fini1 tst-compat-forwarder \
+	 tst-audit-threads
 tests-internal += tst-tls3 tst-tls3-malloc tst-tls5 tst-stackguard1
 tests-nolibpthread += tst-fini1
 ifeq ($(have-z-execstack),yes)
@@ -394,7 +395,8 @@  modules-names = tst-atfork2mod tst-tls3mod tst-tls4moda tst-tls4modb \
 		tst-tls5mod tst-tls5moda tst-tls5modb tst-tls5modc \
 		tst-tls5modd tst-tls5mode tst-tls5modf tst-stack4mod \
 		tst-_res1mod1 tst-_res1mod2 tst-execstack-mod tst-fini1mod \
-		tst-join7mod tst-compat-forwarder-mod
+		tst-join7mod tst-compat-forwarder-mod tst-audit-threads-mod1 \
+		tst-audit-threads-mod2
 extra-test-objs += $(addsuffix .os,$(strip $(modules-names))) \
 		   tst-cleanup4aux.o tst-cleanupx4aux.o
 test-extras += tst-cleanup4aux tst-cleanupx4aux
@@ -709,6 +711,10 @@  endif
 
 $(objpfx)tst-compat-forwarder: $(objpfx)tst-compat-forwarder-mod.so
 
+$(objpfx)tst-audit-threads: $(objpfx)tst-audit-threads-mod2.so
+$(objpfx)tst-audit-threads.out: $(objpfx)tst-audit-threads-mod1.so
+tst-audit-threads-ENV = LD_AUDIT=$(objpfx)tst-audit-threads-mod1.so
+
 # The tests here better do not run in parallel
 ifneq ($(filter %tests,$(MAKECMDGOALS)),)
 .NOTPARALLEL:
diff --git a/nptl/tst-audit-threads-mod1.c b/nptl/tst-audit-threads-mod1.c
new file mode 100644
index 0000000000..194c65a6bb
--- /dev/null
+++ b/nptl/tst-audit-threads-mod1.c
@@ -0,0 +1,38 @@ 
+/* Dummy audit library for test-audit-threads.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <elf.h>
+#include <link.h>
+#include <stdio.h>
+#include <assert.h>
+#include <string.h>
+
+volatile int count = 0;
+
+unsigned int
+la_version (unsigned int ver)
+{
+  return 1;
+}
+
+unsigned int
+la_objopen (struct link_map *map, Lmid_t lmid, uintptr_t *cookie)
+{
+  return LA_FLG_BINDTO | LA_FLG_BINDFROM;
+}
diff --git a/nptl/tst-audit-threads-mod2.c b/nptl/tst-audit-threads-mod2.c
new file mode 100644
index 0000000000..6ceedb0196
--- /dev/null
+++ b/nptl/tst-audit-threads-mod2.c
@@ -0,0 +1,22 @@ 
+/* Shared object with a huge number of functions for test-audit-threads.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* Define all the retNumN functions.  */
+#define definenum
+#include "tst-audit-threads.h"
diff --git a/nptl/tst-audit-threads.c b/nptl/tst-audit-threads.c
new file mode 100644
index 0000000000..0c81edc762
--- /dev/null
+++ b/nptl/tst-audit-threads.c
@@ -0,0 +1,91 @@ 
+/* Test multi-threading using LD_AUDIT.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* This test uses a dummy LD_AUDIT library (test-audit-threads-mod1) and a
+   library with a huge number of functions in order to validate lazy symbol
+   binding with an audit library.  */
+
+#include <support/xthread.h>
+#include <strings.h>
+#include <stdlib.h>
+#include <sys/sysinfo.h>
+
+static int do_test (void);
+
+/* This test usually takes less than 3s to run.  However, there are cases that
+   take up to 30s.  */
+#define TIMEOUT 60
+#define TEST_FUNCTION do_test ()
+#include "../test-skeleton.c"
+
+#define externnum
+#include "tst-audit-threads.h"
+#undef externnum
+
+int num_threads;
+pthread_barrier_t barrier;
+
+void
+sync_all (int num)
+{
+  pthread_barrier_wait (&barrier);
+}
+
+void
+call_all_ret_nums (void)
+{
+#define callnum
+#include "tst-audit-threads.h"
+#undef callnum
+}
+
+void *
+thread_main (void *unused)
+{
+  call_all_ret_nums ();
+  return NULL;
+}
+
+#define STR2(X) #X
+#define STR(X) STR2(X)
+
+static int
+do_test (void)
+{
+  int i;
+  pthread_t *threads;
+
+  num_threads = get_nprocs ();
+  if (num_threads <= 1)
+    num_threads = 2;
+
+  /* Used to synchronize all the threads after calling each retNumN.  */
+  xpthread_barrier_init (&barrier, NULL, num_threads);
+
+  threads = (pthread_t *) xcalloc (num_threads, sizeof (pthread_t));
+  for (i = 0; i < num_threads; i++)
+    threads[i] = xpthread_create (NULL, thread_main, NULL);
+
+  for (i = 0; i < num_threads; i++)
+    xpthread_join (threads[i]);
+
+  free (threads);
+
+  return 0;
+}
diff --git a/nptl/tst-audit-threads.h b/nptl/tst-audit-threads.h
new file mode 100644
index 0000000000..cb17645f4b
--- /dev/null
+++ b/nptl/tst-audit-threads.h
@@ -0,0 +1,84 @@ 
+/* Helper header for test-audit-threads.
+
+   Copyright (C) 2018 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#define CONCAT(a, b) a ## b
+#define NUM(x, y) CONCAT (x, y)
+
+#define FUNC10(x)	\
+  FUNC (NUM (x, 0));	\
+  FUNC (NUM (x, 1));	\
+  FUNC (NUM (x, 2));	\
+  FUNC (NUM (x, 3));	\
+  FUNC (NUM (x, 4));	\
+  FUNC (NUM (x, 5));	\
+  FUNC (NUM (x, 6));	\
+  FUNC (NUM (x, 7));	\
+  FUNC (NUM (x, 8));	\
+  FUNC (NUM (x, 9))
+
+#define FUNC100(x)	\
+  FUNC10 (NUM (x, 0));	\
+  FUNC10 (NUM (x, 1));	\
+  FUNC10 (NUM (x, 2));	\
+  FUNC10 (NUM (x, 3));	\
+  FUNC10 (NUM (x, 4));	\
+  FUNC10 (NUM (x, 5));	\
+  FUNC10 (NUM (x, 6));	\
+  FUNC10 (NUM (x, 7));	\
+  FUNC10 (NUM (x, 8));	\
+  FUNC10 (NUM (x, 9))
+
+#define FUNC1000(x)		\
+  FUNC100 (NUM (x, 0));		\
+  FUNC100 (NUM (x, 1));		\
+  FUNC100 (NUM (x, 2));		\
+  FUNC100 (NUM (x, 3));		\
+  FUNC100 (NUM (x, 4));		\
+  FUNC100 (NUM (x, 5));		\
+  FUNC100 (NUM (x, 6));		\
+  FUNC100 (NUM (x, 7));		\
+  FUNC100 (NUM (x, 8));		\
+  FUNC100 (NUM (x, 9))
+
+#define FUNC7000()	\
+  FUNC1000 (1);		\
+  FUNC1000 (2);		\
+  FUNC1000 (3);		\
+  FUNC1000 (4);		\
+  FUNC1000 (5);		\
+  FUNC1000 (6);		\
+  FUNC1000 (7);
+
+#ifdef FUNC
+# undef FUNC
+#endif
+
+#ifdef externnum
+# define FUNC(x) extern int CONCAT (retNum, x) (void)
+#endif
+
+#ifdef definenum
+# define FUNC(x) int CONCAT (retNum, x) (void) { return x; }
+#endif
+
+#ifdef callnum
+# define FUNC(x) CONCAT (retNum, x) (); sync_all (x)
+#endif
+
+FUNC7000 ();