glibc: Remove CPU set size checking from affinity functions [BZ #19143]

Message ID 562A3D82.5010907@redhat.com
State New

Commit Message

Florian Weimer Oct. 23, 2015, 2 p.m. UTC
On 10/19/2015 07:23 PM, Carlos O'Donell wrote:

>> The current situation, briefly stated, is this: glibc tries to guess the
>> kernel CPU set size and rejects attempts to specify an affinity mask
>> which is larger than that, but it does not work, and glibc and the
>> kernel still silently accept CPU affinity masks with invalid bits,
>> without returning an error.  The glibc check does not provide any value
>> to applications, it just adds pointless complexity to the library.
>> Therefore, I want to remove it from glibc.
> 
> The point of the code you want to remove was to detect the case where
> the user set CPU bits outside of the maximum possible number of supported
> CPUs. AFAIK today that value is CONFIG_NR_CPUS and the various variables
> that derive from that value.

CONFIG_NR_CPUS is the absolute maximum for a specific kernel (available
at run time in /sys/devices/system/cpu/kernel_max).  The kernel lowers
the observable value if it detects the system cannot support more than a
specific number of CPUs.  This is the kernel-internal nr_cpu_ids
variable.  I don't think its value is directly exported, but it can
currently be derived from /sys/devices/system/cpu/possible.

(The difficulty of obtaining this value, and the tendency of the kernel
to replace hard compile-time limits with run-time configuration options,
makes me think it is unwise to expose these values through sysconf.)

> Can you elaborate some more on any of the false negative cases you think
> might impact user applications? For example what happens if you use the
> stock cpu_set_t size? It would seem to me that such a use keeps working
> and there is no change.

Yes, applications which worked before continue to work.  But you can now
specify an all-ones mask you have allocated, and glibc will not reject
it because it has set bits beyond the value it guessed for nr_cpu_ids.

>> Remove CPU set size checking from affinity functions [BZ #19143]
>>
>> With current kernel versions, the check does not reliably detect that
>> unavailable CPUs are requested, for these reasons:
>>
>> (1) The kernel will silently ignore non-allowed CPUs.
> 
> You mean to say that if sched_setaffinity is called with a CPU mask 
> bit set to enabled, but that cpu is not allowed for the process, then
> it will ignore the setting?

Yes, that is what the kernel does.

> This requires you run sched_getaffinity to
> verify what CPUs you're actually set to run on?

Yes, if you care about this detail.

> Is this because the
> cpuset mechanism is merged with sched_setaffinity and overrides it?

As far as I can tell, yes.  I do not know where the kernel gets the
other mask from, but it seems cgroups-related (hence the Cc:), and it
does an AND on those two masks.

>> (3) The existing probing code assumes that the CPU mask size is a
>>     power of two and at least 1024.  Neither does it have to be a power
>>     of two, nor is the minimum possible value 1024, so the value
>>     determined is often too large, resulting in incorrect false
>>     negatives.
> 
> Could you explain those "false negative" cases again?

I tried to make this clearer in the revised commit message of the
attached patch.

> The goal is to keep nptl/ free from linux-isms, and AFAICT your test is
> indeed free of any linux-specific features since your code for finding
> the size of the cpu mask is generic. Is there anything I might have missed
> that would make your test linux-specific? Keep in mind that we share nptl/
> with the nacl port.

Good point.  sched_getcpu is not universally available, so I had to move
the tests to sysdeps/unix/sysv/linux.

I noticed that tst-getcpu was failing (bug 19164), so I added another
sched_setaffinity test variant that supersedes it.

>> +* sched_setaffinity, pthread_setaffinity_np no longer attempt to guess the
>> +  kernel-internal CPU set size.  This means that requests that change the
>> +  CPU affinity which failed before will now succeed.  Applications that need
> 
> Please provide at least one example of a failure which now succeeds.

I've updated the NEWS entry.

>> +/* We have two loops running for two seconds each.  */
>> +#define TIMEOUT 8
> 
> Why two seconds?
> 
> Why two threads?
> 
> If the value of 2 seconds is arbitrary please state so in 
> a comment such that future reviewers can adjust it as they
> see fit without having to review the history of the value.

Should be clearer in the new version.

>> +++ b/posix/tst-affinity.c
>> @@ -0,0 +1,254 @@
> 
> Needs a one line test description with BZ#.

I did not include the bug number because it is a generic, non-regression
test.

>> +static int
>> +find_set_size (void)
>> +{
>> +  /* We need to use multiples of 64 because otherwise, CPU_ALLOC
>> +     over-allocates, and we do not see all bits returned by the
>> +     kernel.  */
> 
> How does CPU_ALLOC over-allocating result in not seeing bits from 
> the kernel? Is this because you get over-allocation in CPU_ALLOC,
> but your own external count of num_cpus would be lower and it's
> num_cpus you return? Can't you rely on the result of CPU_ALLOC_SIZE
> and return that?

See find_last_cpu.  It is tricky to determine the actual size of a CPU
set just based on the macros.  I think I got it right.

It turns out the comment was incorrect, so I changed the code.

> Why do we do this instead of calling sysconf to get the number
> of CPUs?

sysconf currently does not give us the proper number, and I really don't
think applications should rely on it.  This is a separate conversation,
IMHO.  I added a comment.

>> +
>> +static bool
>> +test_size (const struct conf *conf, size_t size)
>> +{
> 
> Should print PASS:/FAIL: prefix to make grepping easier.

I added info/warning/error prefixes instead.

> Should do what other tests do e.g. err |= and run each
> test on a distinct line. Makes it easier to add tests and
> disable tests while debugging.

Ah, right.

I have added tests which cover additional scenarios (cross-process
sched_* calls and cross-thread pthread_sched* calls).

Florian

Comments

Florian Weimer Nov. 4, 2015, 8:17 p.m. UTC | #1
On 10/23/2015 04:00 PM, Florian Weimer wrote:
> 2015-10-23  Florian Weimer  <fweimer@redhat.com>
> 
> 	[BZ #19143]
> 	[BZ #19164]
> 	* nptl/check-cpuset.h: Remove.
> 	* nptl/pthread_attr_setaffinity.c (__pthread_attr_setaffinity_new):
> 	Remove CPU set size check.
> 	* nptl/pthread_setattr_default_np.c (pthread_setattr_default_np):
> 	Likewise.
> 	* sysdeps/unix/sysv/linux/check-cpuset.h: Remove.
> 	* sysdeps/unix/sysv/linux/pthread_setaffinity.c
> 	(__kernel_cpumask_size, __determine_cpumask_size): Remove.
> 	(__pthread_setaffinity_new): Remove CPU set size check.
> 	* sysdeps/unix/sysv/linux/sched_setaffinity.c
> 	(__kernel_cpumask_size): Remove.
> 	(__sched_setaffinity_new): Remove CPU set size check.
> 	* manual/threads.texi (Default Thread Attributes): Remove stale
> 	reference to check_cpuset_attr, determine_cpumask_size in comment.
> 	* sysdeps/unix/sysv/linux/Makefile [$(subdir) == posix] (tests):
> 	Remove tst-getcpu.  Add tst-affinity, tst-affinity-pid.
> 	[$(subdir) == nptl] (tests): Add tst-thread-affinity-pthread,
> 	tst-thread-affinity-pthread2, tst-thread-affinity-sched.
> 	* sysdeps/unix/sysv/linux/tst-affinity.c: New file.
> 	* sysdeps/unix/sysv/linux/tst-affinity-pid.c: New file.
> 	* sysdeps/unix/sysv/linux/tst-skeleton-affinity.c: New skeleton test file.
> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c: New file.
> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c: New file.
> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c: New file.
> 	* sysdeps/unix/sysv/linux/tst-thread-skeleton-affinity.c: New
> 	skeleton test file.
> 	* sysdeps/unix/sysv/linux/tst-getcpu.c: Remove.  Superseded by
> 	tst-affinity-pid.

Ping?

Unfortunately, I didn't get *any* feedback from the kernel people I
Cc:ed.  There is very little traffic on the cgroups list in general,
though.

(There is an extend_alloca removal hidden in there. :)

Florian
Florian Weimer Nov. 24, 2015, 4:39 p.m. UTC | #2
On 11/04/2015 09:17 PM, Florian Weimer wrote:
> On 10/23/2015 04:00 PM, Florian Weimer wrote:
>> 2015-10-23  Florian Weimer  <fweimer@redhat.com>
>>
>> 	[BZ #19143]
>> 	[BZ #19164]
>> 	* nptl/check-cpuset.h: Remove.
>> 	* nptl/pthread_attr_setaffinity.c (__pthread_attr_setaffinity_new):
>> 	Remove CPU set size check.
>> 	* nptl/pthread_setattr_default_np.c (pthread_setattr_default_np):
>> 	Likewise.
>> 	* sysdeps/unix/sysv/linux/check-cpuset.h: Remove.
>> 	* sysdeps/unix/sysv/linux/pthread_setaffinity.c
>> 	(__kernel_cpumask_size, __determine_cpumask_size): Remove.
>> 	(__pthread_setaffinity_new): Remove CPU set size check.
>> 	* sysdeps/unix/sysv/linux/sched_setaffinity.c
>> 	(__kernel_cpumask_size): Remove.
>> 	(__sched_setaffinity_new): Remove CPU set size check.
>> 	* manual/threads.texi (Default Thread Attributes): Remove stale
>> 	reference to check_cpuset_attr, determine_cpumask_size in comment.
>> 	* sysdeps/unix/sysv/linux/Makefile [$(subdir) == posix] (tests):
>> 	Remove tst-getcpu.  Add tst-affinity, tst-affinity-pid.
>> 	[$(subdir) == nptl] (tests): Add tst-thread-affinity-pthread,
>> 	tst-thread-affinity-pthread2, tst-thread-affinity-sched.
>> 	* sysdeps/unix/sysv/linux/tst-affinity.c: New file.
>> 	* sysdeps/unix/sysv/linux/tst-affinity-pid.c: New file.
>> 	* sysdeps/unix/sysv/linux/tst-skeleton-affinity.c: New skeleton test file.
>> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c: New file.
>> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c: New file.
>> 	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c: New file.
>> 	* sysdeps/unix/sysv/linux/tst-thread-skeleton-affinity.c: New
>> 	skeleton test file.
>> 	* sysdeps/unix/sysv/linux/tst-getcpu.c: Remove.  Superseded by
>> 	tst-affinity-pid.
> 
> Ping?
> 
> Unfortunately, I didn't get *any* feedback from the kernel people I
> Cc:ed.  There is very little traffic on the cgroups list in general,
> though.

I have committed this.  I believe all glibc-specific issues have been
addressed, and the technical content was no longer in dispute.

Thanks,
Florian
Michael Kerrisk (man-pages) March 2, 2016, 2:12 p.m. UTC | #3
Hi Florian,

On Tue, Nov 24, 2015 at 5:39 PM, Florian Weimer <fweimer@redhat.com> wrote:
> On 11/04/2015 09:17 PM, Florian Weimer wrote:
>> On 10/23/2015 04:00 PM, Florian Weimer wrote:
>>> 2015-10-23  Florian Weimer  <fweimer@redhat.com>
>>>
>>>      [BZ #19143]
>>>      [BZ #19164]
>>>      * nptl/check-cpuset.h: Remove.
>>>      * nptl/pthread_attr_setaffinity.c (__pthread_attr_setaffinity_new):
>>>      Remove CPU set size check.
>>>      * nptl/pthread_setattr_default_np.c (pthread_setattr_default_np):
>>>      Likewise.
>>>      * sysdeps/unix/sysv/linux/check-cpuset.h: Remove.
>>>      * sysdeps/unix/sysv/linux/pthread_setaffinity.c
>>>      (__kernel_cpumask_size, __determine_cpumask_size): Remove.
>>>      (__pthread_setaffinity_new): Remove CPU set size check.
>>>      * sysdeps/unix/sysv/linux/sched_setaffinity.c
>>>      (__kernel_cpumask_size): Remove.
>>>      (__sched_setaffinity_new): Remove CPU set size check.
>>>      * manual/threads.texi (Default Thread Attributes): Remove stale
>>>      reference to check_cpuset_attr, determine_cpumask_size in comment.
>>>      * sysdeps/unix/sysv/linux/Makefile [$(subdir) == posix] (tests):
>>>      Remove tst-getcpu.  Add tst-affinity, tst-affinity-pid.
>>>      [$(subdir) == nptl] (tests): Add tst-thread-affinity-pthread,
>>>      tst-thread-affinity-pthread2, tst-thread-affinity-sched.
>>>      * sysdeps/unix/sysv/linux/tst-affinity.c: New file.
>>>      * sysdeps/unix/sysv/linux/tst-affinity-pid.c: New file.
>>>      * sysdeps/unix/sysv/linux/tst-skeleton-affinity.c: New skeleton test file.
>>>      * sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c: New file.
>>>      * sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c: New file.
>>>      * sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c: New file.
>>>      * sysdeps/unix/sysv/linux/tst-thread-skeleton-affinity.c: New
>>>      skeleton test file.
>>>      * sysdeps/unix/sysv/linux/tst-getcpu.c: Remove.  Superseded by
>>>      tst-affinity-pid.
>>
>> Ping?
>>
>> Unfortunately, I didn't get *any* feedback from the kernel people I
>> Cc:ed.  There is very little traffic on the cgroups list in general,
>> though.
>
> I have committed this.  I believe all glibc-specific issues have been
> addressed, and the technical content was no longer in dispute.

With this change, I wonder if some updates to the sched_setaffinity(2)
man page (http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html)
may be in order. Below, I've quoted some relevant pieces from the man
page. Anything there that needs tweaking?

   ERRORS
       EINVAL The affinity bit mask mask contains no processors that  are
              currently  physically  on  the  system and permitted to the
              thread according to any restrictions that may be imposed by
              the "cpuset" mechanism described in cpuset(7).

       EINVAL (sched_getaffinity()   and,   in   kernels   before  2.6.9,
              sched_setaffinity()) cpusetsize is smaller than the size of
              the affinity mask used by the kernel.
   [...]
   NOTES
   [...]
     Handling systems with large CPU affinity masks
       The  underlying  system  calls  (which  represent CPU masks as bit
       masks of type unsigned long *) impose no restriction on  the  size
       of  the  CPU mask.  However, the cpu_set_t data type used by glibc
       has a fixed size of 128 bytes, meaning that the maximum CPU number
       that  can be represented is 1023.  If the kernel CPU affinity mask
       is larger than 1024, then calls of the form:

           sched_getaffinity(pid, sizeof(cpu_set_t), &mask);

       will fail with the error EINVAL, the error produced by the  under‐
       lying  system  call  for the case where the mask size specified in
       cpusetsize is smaller than the size of the affinity mask  used  by
       the  kernel.   (Depending  on  the system CPU topology, the kernel
       affinity mask can be  substantially  larger  than  the  number  of
       active CPUs in the system.)

       When  working on systems with large kernel CPU affinity masks, one
       must dynamically allocate the mask argument.  Currently, the  only
       way  to  do  this  is by probing for the size of the required mask
       using sched_getaffinity() calls with increasing mask sizes  (until
       the call does not fail with the error EINVAL).

Cheers,

Michael
Florian Weimer March 8, 2016, 11:17 a.m. UTC | #4
On 03/02/2016 03:12 PM, Michael Kerrisk wrote:

> With this change, I wonder if some updates to the sched_setaffinity(2)
> man page (http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html)
> may be in order. Below, I've quoted some relevant pieces from the man
> page. Anything there that needs tweaking?
> 
>    ERRORS
>        EINVAL The affinity bit mask mask contains no processors that  are
>               currently  physically  on  the  system and permitted to the
>               thread according to any restrictions that may be imposed by
>               the "cpuset" mechanism described in cpuset(7).
> 
>        EINVAL (sched_getaffinity()   and,   in   kernels   before  2.6.9,
>               sched_setaffinity()) cpusetsize is smaller than the size of
>               the affinity mask used by the kernel.
>    [...]
>    NOTES
>    [...]
>      Handling systems with large CPU affinity masks
>        The  underlying  system  calls  (which  represent CPU masks as bit
>        masks of type unsigned long *) impose no restriction on  the  size
>        of  the  CPU mask.  However, the cpu_set_t data type used by glibc
>        has a fixed size of 128 bytes, meaning that the maximum CPU number
>        that  can be represented is 1023.  If the kernel CPU affinity mask
>        is larger than 1024, then calls of the form:
> 
>            sched_getaffinity(pid, sizeof(cpu_set_t), &mask);
> 
>        will fail with the error EINVAL, the error produced by the  under‐
>        lying  system  call  for the case where the mask size specified in
>        cpusetsize is smaller than the size of the affinity mask  used  by
>        the  kernel.   (Depending  on  the system CPU topology, the kernel
>        affinity mask can be  substantially  larger  than  the  number  of
>        active CPUs in the system.)

That's still true.

>        When  working on systems with large kernel CPU affinity masks, one
>        must dynamically allocate the mask argument.  Currently, the  only
>        way  to  do  this  is by probing for the size of the required mask
>        using sched_getaffinity() calls with increasing mask sizes  (until
>        the call does not fail with the error EINVAL).

I think this needs to reference the CPU_ALLOC manual page.

One caveat is that sched_getaffinity can set bits beyond the requested
allocation size (in bits) because the kernel gets a padded CPU vector
and sees a few additional bits.  The fix for that is to iterate over the
bits, counting those which are set, and stop if you reach the value of
CPU_COUNT, rather than iterating over the bits you allocated.

Florian
Michael Kerrisk (man-pages) March 8, 2016, 7:42 p.m. UTC | #5
Hello Florian,

On 03/08/2016 12:17 PM, Florian Weimer wrote:
> On 03/02/2016 03:12 PM, Michael Kerrisk wrote:
> 
>> With this change, I wonder if some updates to the sched_setaffinity(2)
>> man page (http://man7.org/linux/man-pages/man2/sched_setaffinity.2.html)
>> may be in order. Below, I've quoted some relevant pieces from the man
>> page. Anything there that needs tweaking?
>>
>>    ERRORS
>>        EINVAL The affinity bit mask mask contains no processors that  are
>>               currently  physically  on  the  system and permitted to the
>>               thread according to any restrictions that may be imposed by
>>               the "cpuset" mechanism described in cpuset(7).
>>
>>        EINVAL (sched_getaffinity()   and,   in   kernels   before  2.6.9,
>>               sched_setaffinity()) cpusetsize is smaller than the size of
>>               the affinity mask used by the kernel.
>>    [...]
>>    NOTES
>>    [...]
>>      Handling systems with large CPU affinity masks
>>        The  underlying  system  calls  (which  represent CPU masks as bit
>>        masks of type unsigned long *) impose no restriction on  the  size
>>        of  the  CPU mask.  However, the cpu_set_t data type used by glibc
>>        has a fixed size of 128 bytes, meaning that the maximum CPU number
>>        that  can be represented is 1023.  If the kernel CPU affinity mask
>>        is larger than 1024, then calls of the form:
>>
>>            sched_getaffinity(pid, sizeof(cpu_set_t), &mask);
>>
>>        will fail with the error EINVAL, the error produced by the  under‐
>>        lying  system  call  for the case where the mask size specified in
>>        cpusetsize is smaller than the size of the affinity mask  used  by
>>        the  kernel.   (Depending  on  the system CPU topology, the kernel
>>        affinity mask can be  substantially  larger  than  the  number  of
>>        active CPUs in the system.)
> 
> That's still true.
> 
>>        When  working on systems with large kernel CPU affinity masks, one
>>        must dynamically allocate the mask argument.  Currently, the  only
>>        way  to  do  this  is by probing for the size of the required mask
>>        using sched_getaffinity() calls with increasing mask sizes  (until
>>        the call does not fail with the error EINVAL).
> 
> I think this needs to reference the CPU_ALLOC manual page.
> 
> One caveat is that sched_getaffinity can set bits beyond the requested
> allocation size (in bits) because the kernel gets a padded CPU vector
> and sees a few additional bits.  

I'm not quite clear on this point. Does it get a padded CPU vector
because CPU_ALLOC() might allocate a vector of size larger than the
user requested?

Cheers,

Michael

> The fix for that is to iterate over the
> bits, counting those which are set, and stop if you reach the value of
> CPU_COUNT, rather than iterating over the bits you allocated.
> 
> Florian
>
Florian Weimer March 10, 2016, 11:20 a.m. UTC | #6
On 03/08/2016 08:42 PM, Michael Kerrisk (man-pages) wrote:

>> One caveat is that sched_getaffinity can set bits beyond the requested
>> allocation size (in bits) because the kernel gets a padded CPU vector
>> and sees a few additional bits.  
> 
> I'm not quite clear on this point. Does it get a padded CPU vector
> because CPU_ALLOC() might allocate a vector of size larger than the
> user requested?

Yes, this is the problem, combined with CPU_ALLOC_SIZE returning the
larger size (which is unavoidable).

This whole interface could have been designed much better (compare
select to epoll, for instance).

Florian
Michael Kerrisk (man-pages) March 10, 2016, 5:07 p.m. UTC | #7
Hello Florian.

On 03/10/2016 12:20 PM, Florian Weimer wrote:
> On 03/08/2016 08:42 PM, Michael Kerrisk (man-pages) wrote:
> 
>>> One caveat is that sched_getaffinity can set bits beyond the requested
>>> allocation size (in bits) because the kernel gets a padded CPU vector
>>> and sees a few additional bits.  
>>
>> I'm not quite clear on this point. Does it get a padded CPU vector
>> because CPU_ALLOC() might allocate a vector of size larger than the
>> user requested?
> 
> Yes, this is the problem, combined with CPU_ALLOC_SIZE returning the
> larger size (which is unavoidable).

Thanks for the clarification. I added this paragraph:

       Be aware that CPU_ALLOC(3) may allocate a slightly  larger  CPU
       set  than  requested  (because  CPU sets are implemented as bit
       masks  allocated  in  units  of  sizeof(long)).   Consequently,
       sched_getaffinity()  can  set bits beyond the requested alloca‐
       tion size, because the  kernel  sees  a  few  additional  bits.
       Therefore,  the  caller  should  iterate  over  the bits in the
       returned set, counting those  which  are  set,  and  stop  upon
       reaching  the value returned by CPU_COUNT(3) (rather than iter‐
       ating over the number of bits requested  to  be  allocated).

> This whole interface could have been designed much better (compare
> select to epoll, for instance).

Indeed!

Cheers,

Michael
Florian Weimer March 10, 2016, 8:03 p.m. UTC | #8
On 03/10/2016 06:07 PM, Michael Kerrisk (man-pages) wrote:
> Hello Florian.
> 
> On 03/10/2016 12:20 PM, Florian Weimer wrote:
>> On 03/08/2016 08:42 PM, Michael Kerrisk (man-pages) wrote:
>>
>>>> One caveat is that sched_getaffinity can set bits beyond the requested
>>>> allocation size (in bits) because the kernel gets a padded CPU vector
>>>> and sees a few additional bits.  
>>>
>>> I'm not quite clear on this point. Does it get a padded CPU vector
>>> because CPU_ALLOC() might allocate a vector of size larger than the
>>> user requested?
>>
>> Yes, this is the problem, combined with CPU_ALLOC_SIZE returning the
>> larger size (which is unavoidable).
> 
> Thanks for the clarification. I added this paragraph:
> 
>        Be aware that CPU_ALLOC(3) may allocate a slightly  larger  CPU
>        set  than  requested  (because  CPU sets are implemented as bit
>        masks  allocated  in  units  of  sizeof(long)).   Consequently,
>        sched_getaffinity()  can  set bits beyond the requested alloca‐
>        tion size, because the  kernel  sees  a  few  additional  bits.
>        Therefore,  the  caller  should  iterate  over  the bits in the
>        returned set, counting those  which  are  set,  and  stop  upon
>        reaching  the value returned by CPU_COUNT(3) (rather than iter‐
>        ating over the number of bits requested  to  be  allocated).

This looks reasonable, thanks.

Florian
Michael Kerrisk (man-pages) March 10, 2016, 8:05 p.m. UTC | #9
Hi Florian,

On 10 March 2016 at 21:03, Florian Weimer <fweimer@redhat.com> wrote:
> On 03/10/2016 06:07 PM, Michael Kerrisk (man-pages) wrote:
>> Hello Florian.
>>
>> On 03/10/2016 12:20 PM, Florian Weimer wrote:
>>> On 03/08/2016 08:42 PM, Michael Kerrisk (man-pages) wrote:
>>>
>>>>> One caveat is that sched_getaffinity can set bits beyond the requested
>>>>> allocation size (in bits) because the kernel gets a padded CPU vector
>>>>> and sees a few additional bits.
>>>>
>>>> I'm not quite clear on this point. Does it get a padded CPU vector
>>>> because CPU_ALLOC() might allocate a vector of size larger than the
>>>> user requested?
>>>
>>> Yes, this is the problem, combined with CPU_ALLOC_SIZE returning the
>>> larger size (which is unavoidable).
>>
>> Thanks for the clarification. I added this paragraph:
>>
>>        Be aware that CPU_ALLOC(3) may allocate a slightly  larger  CPU
>>        set  than  requested  (because  CPU sets are implemented as bit
>>        masks  allocated  in  units  of  sizeof(long)).   Consequently,
>>        sched_getaffinity()  can  set bits beyond the requested alloca‐
>>        tion size, because the  kernel  sees  a  few  additional  bits.
>>        Therefore,  the  caller  should  iterate  over  the bits in the
>>        returned set, counting those  which  are  set,  and  stop  upon
>>        reaching  the value returned by CPU_COUNT(3) (rather than iter‐
>>        ating over the number of bits requested  to  be  allocated).
>
> This looks reasonable, thanks.

Thanks for checking it!

Cheers,

Michael

Patch

Remove CPU set size checking from affinity functions [BZ #19143]

With current kernel versions, the check does not reliably detect that
unavailable CPUs are requested, for these reasons:

(1) The kernel will silently ignore non-allowed CPUs, that is, CPUs
    which are physically present but disallowed for the thread
    based on system configuration.

(2) Similarly, CPU bits which lack an online CPU (possible CPUs)
    are ignored.

(3) The existing probing code assumes that the CPU mask size is a
    power of two and at least 1024.  Neither does it have to be a power
    of two, nor is the minimum possible value 1024, so the value
    determined is often too large.  This means that the CPU set
    size check in glibc accepts CPU bits beyond the actual hard
    system limit.

(4) Future kernel versions may not even have a fixed CPU set size.

After the removal of the probing code, the kernel still returns
EINVAL if no CPU in the requested set remains which can run the
thread after the affinity change.

Applications which care about the exact affinity mask will have
to query it using sched_getaffinity after setting it.  Due to the
effects described above, this commit does not change this.

The new tests supersede tst-getcpu, which is removed.  This
addresses bug 19164 because the new tests allocate CPU sets
dynamically.

2015-10-23  Florian Weimer  <fweimer@redhat.com>

	[BZ #19143]
	[BZ #19164]
	* nptl/check-cpuset.h: Remove.
	* nptl/pthread_attr_setaffinity.c (__pthread_attr_setaffinity_new):
	Remove CPU set size check.
	* nptl/pthread_setattr_default_np.c (pthread_setattr_default_np):
	Likewise.
	* sysdeps/unix/sysv/linux/check-cpuset.h: Remove.
	* sysdeps/unix/sysv/linux/pthread_setaffinity.c
	(__kernel_cpumask_size, __determine_cpumask_size): Remove.
	(__pthread_setaffinity_new): Remove CPU set size check.
	* sysdeps/unix/sysv/linux/sched_setaffinity.c
	(__kernel_cpumask_size): Remove.
	(__sched_setaffinity_new): Remove CPU set size check.
	* manual/threads.texi (Default Thread Attributes): Remove stale
	reference to check_cpuset_attr, determine_cpumask_size in comment.
	* sysdeps/unix/sysv/linux/Makefile [$(subdir) == posix] (tests):
	Remove tst-getcpu.  Add tst-affinity, tst-affinity-pid.
	[$(subdir) == nptl] (tests): Add tst-thread-affinity-pthread,
	tst-thread-affinity-pthread2, tst-thread-affinity-sched.
	* sysdeps/unix/sysv/linux/tst-affinity.c: New file.
	* sysdeps/unix/sysv/linux/tst-affinity-pid.c: New file.
	* sysdeps/unix/sysv/linux/tst-skeleton-affinity.c: New skeleton test file.
	* sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c: New file.
	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c: New file.
	* sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c: New file.
	* sysdeps/unix/sysv/linux/tst-thread-skeleton-affinity.c: New
	skeleton test file.
	* sysdeps/unix/sysv/linux/tst-getcpu.c: Remove.  Superseded by
	tst-affinity-pid.

diff --git a/NEWS b/NEWS
index 00e3b03..fb06d3f 100644
--- a/NEWS
+++ b/NEWS
@@ -21,7 +21,15 @@  Version 2.23
   18980, 18981, 18982, 18985, 19003, 19007, 19012, 19016, 19018, 19032,
   19046, 19049, 19050, 19059, 19071, 19074, 19076, 19077, 19078, 19079,
   19085, 19086, 19088, 19094, 19095, 19124, 19125, 19129, 19134, 19137,
-  19156.
+  19143, 19156, 19164.
+
+* sched_setaffinity, pthread_setaffinity_np no longer attempt to guess the
+  kernel-internal CPU set size.  This means that requests that change the
+  CPU affinity which failed before (for example, an all-ones CPU mask) will
+  now succeed.  Applications that need to determine the effective CPU
+  affinity need to call sched_getaffinity or pthread_getaffinity_np after
+  setting it because the kernel can adjust it (and the previous size check
+  would not detect this in the majority of cases).
 
 * There is now a --disable-timezone-tools configure option for disabling the
   building and installing of the timezone related utilities (zic, zdump, and
diff --git a/manual/threads.texi b/manual/threads.texi
index 4d080d4..00cc725 100644
--- a/manual/threads.texi
+++ b/manual/threads.texi
@@ -111,8 +111,6 @@  failure.
 @c  check_sched_priority_attr ok
 @c   sched_get_priority_min dup ok
 @c   sched_get_priority_max dup ok
-@c  check_cpuset_attr ok
-@c   determine_cpumask_size ok
 @c  check_stacksize_attr ok
 @c  lll_lock @asulock @aculock
 @c  free dup @ascuheap @acsmem
diff --git a/nptl/check-cpuset.h b/nptl/check-cpuset.h
deleted file mode 100644
index 315bdf2..0000000
--- a/nptl/check-cpuset.h
+++ /dev/null
@@ -1,32 +0,0 @@ 
-/* Validate cpu_set_t values for NPTL.  Stub version.
-   Copyright (C) 2015 Free Software Foundation, Inc.
-   This file is part of the GNU C Library.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
-
-   The GNU C Library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with the GNU C Library; if not, see
-   <http://www.gnu.org/licenses/>.  */
-
-#include <errno.h>
-
-/* Returns 0 if CS and SZ are valid values for the cpuset and cpuset size
-   respectively.  Otherwise it returns an error number.  */
-static inline int
-check_cpuset_attr (const cpu_set_t *cs, const size_t sz)
-{
-  if (sz == 0)
-    return 0;
-
-  /* This means pthread_attr_setaffinity will return ENOSYS, which
-     is the right thing when the cpu_set_t features are not available.  */
-  return ENOSYS;
-}
diff --git a/nptl/pthread_attr_setaffinity.c b/nptl/pthread_attr_setaffinity.c
index 7a127b8..571835d 100644
--- a/nptl/pthread_attr_setaffinity.c
+++ b/nptl/pthread_attr_setaffinity.c
@@ -23,7 +23,6 @@ 
 #include <string.h>
 #include <pthreadP.h>
 #include <shlib-compat.h>
-#include <check-cpuset.h>
 
 
 int
@@ -43,11 +42,6 @@  __pthread_attr_setaffinity_new (pthread_attr_t *attr, size_t cpusetsize,
     }
   else
     {
-      int ret = check_cpuset_attr (cpuset, cpusetsize);
-
-      if (ret)
-        return ret;
-
       if (iattr->cpusetsize != cpusetsize)
 	{
 	  void *newp = (cpu_set_t *) realloc (iattr->cpuset, cpusetsize);
diff --git a/nptl/pthread_setattr_default_np.c b/nptl/pthread_setattr_default_np.c
index 457a467..1a661f1 100644
--- a/nptl/pthread_setattr_default_np.c
+++ b/nptl/pthread_setattr_default_np.c
@@ -21,7 +21,6 @@ 
 #include <pthreadP.h>
 #include <assert.h>
 #include <string.h>
-#include <check-cpuset.h>
 
 
 int
@@ -48,10 +47,6 @@  pthread_setattr_default_np (const pthread_attr_t *in)
 	return ret;
     }
 
-  ret = check_cpuset_attr (real_in->cpuset, real_in->cpusetsize);
-  if (ret)
-    return ret;
-
   /* stacksize == 0 is fine.  It means that we don't change the current
      value.  */
   if (real_in->stacksize != 0)
diff --git a/sysdeps/unix/sysv/linux/Makefile b/sysdeps/unix/sysv/linux/Makefile
index 2c67a66..d66ca77 100644
--- a/sysdeps/unix/sysv/linux/Makefile
+++ b/sysdeps/unix/sysv/linux/Makefile
@@ -139,7 +139,7 @@  sysdep_headers += bits/initspin.h
 
 sysdep_routines += sched_getcpu
 
-tests += tst-getcpu
+tests += tst-affinity tst-affinity-pid
 
 CFLAGS-fork.c = $(libio-mtsafe)
 CFLAGS-getpid.o = -fomit-frame-pointer
@@ -192,5 +192,7 @@  CFLAGS-gai.c += -DNEED_NETLINK
 endif
 
 ifeq ($(subdir),nptl)
-tests += tst-setgetname tst-align-clone tst-getpid1 tst-getpid2
+tests += tst-setgetname tst-align-clone tst-getpid1 tst-getpid2 \
+	tst-thread-affinity-pthread tst-thread-affinity-pthread2 \
+	tst-thread-affinity-sched
 endif
diff --git a/sysdeps/unix/sysv/linux/check-cpuset.h b/sysdeps/unix/sysv/linux/check-cpuset.h
deleted file mode 100644
index 1d55e0b..0000000
--- a/sysdeps/unix/sysv/linux/check-cpuset.h
+++ /dev/null
@@ -1,48 +0,0 @@ 
-/* Validate cpu_set_t values for NPTL.  Linux version.
-   Copyright (C) 2002-2015 Free Software Foundation, Inc.
-   This file is part of the GNU C Library.
-
-   The GNU C Library is free software; you can redistribute it and/or
-   modify it under the terms of the GNU Lesser General Public
-   License as published by the Free Software Foundation; either
-   version 2.1 of the License, or (at your option) any later version.
-
-   The GNU C Library is distributed in the hope that it will be useful,
-   but WITHOUT ANY WARRANTY; without even the implied warranty of
-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-   Lesser General Public License for more details.
-
-   You should have received a copy of the GNU Lesser General Public
-   License along with the GNU C Library; if not, see
-   <http://www.gnu.org/licenses/>.  */
-
-#include <pthread.h>
-#include <errno.h>
-
-
-/* Defined in pthread_setaffinity.c.  */
-extern size_t __kernel_cpumask_size attribute_hidden;
-extern int __determine_cpumask_size (pid_t tid);
-
-/* Returns 0 if CS and SZ are valid values for the cpuset and cpuset size
-   respectively.  Otherwise it returns an error number.  */
-static inline int
-check_cpuset_attr (const cpu_set_t *cs, const size_t sz)
-{
-  if (__kernel_cpumask_size == 0)
-    {
-      int res = __determine_cpumask_size (THREAD_SELF->tid);
-      if (res)
-	return res;
-    }
-
-  /* Check whether the new bitmask has any bit set beyond the
-     last one the kernel accepts.  */
-  for (size_t cnt = __kernel_cpumask_size; cnt < sz; ++cnt)
-    if (((char *) cs)[cnt] != '\0')
-      /* Found a nonzero byte.  This means the user request cannot be
-	 fulfilled.  */
-      return EINVAL;
-
-  return 0;
-}
diff --git a/sysdeps/unix/sysv/linux/pthread_setaffinity.c b/sysdeps/unix/sysv/linux/pthread_setaffinity.c
index e891818..2ebf09d 100644
--- a/sysdeps/unix/sysv/linux/pthread_setaffinity.c
+++ b/sysdeps/unix/sysv/linux/pthread_setaffinity.c
@@ -23,62 +23,14 @@ 
 #include <shlib-compat.h>
 
 
-size_t __kernel_cpumask_size attribute_hidden;
-
-
-/* Determine the size of cpumask_t in the kernel.  */
-int
-__determine_cpumask_size (pid_t tid)
-{
-  size_t psize;
-  int res;
-
-  for (psize = 128; ; psize *= 2)
-    {
-      char buf[psize];
-      INTERNAL_SYSCALL_DECL (err);
-
-      res = INTERNAL_SYSCALL (sched_getaffinity, err, 3, tid, psize, buf);
-      if (INTERNAL_SYSCALL_ERROR_P (res, err))
-	{
-	  if (INTERNAL_SYSCALL_ERRNO (res, err) != EINVAL)
-	    return INTERNAL_SYSCALL_ERRNO (res, err);
-	}
-      else
-	break;
-    }
-
-  if (res != 0)
-    __kernel_cpumask_size = res;
-
-  return 0;
-}
-
-
 int
 __pthread_setaffinity_new (pthread_t th, size_t cpusetsize,
 			   const cpu_set_t *cpuset)
 {
   const struct pthread *pd = (const struct pthread *) th;
-
   INTERNAL_SYSCALL_DECL (err);
   int res;
 
-  if (__glibc_unlikely (__kernel_cpumask_size == 0))
-    {
-      res = __determine_cpumask_size (pd->tid);
-      if (res != 0)
-	return res;
-    }
-
-  /* We now know the size of the kernel cpumask_t.  Make sure the user
-     does not request to set a bit beyond that.  */
-  for (size_t cnt = __kernel_cpumask_size; cnt < cpusetsize; ++cnt)
-    if (((char *) cpuset)[cnt] != '\0')
-      /* Found a nonzero byte.  This means the user request cannot be
-	 fulfilled.  */
-      return EINVAL;
-
   res = INTERNAL_SYSCALL (sched_setaffinity, err, 3, pd->tid, cpusetsize,
 			  cpuset);
 
diff --git a/sysdeps/unix/sysv/linux/sched_setaffinity.c b/sysdeps/unix/sysv/linux/sched_setaffinity.c
index b528617..dfddce7 100644
--- a/sysdeps/unix/sysv/linux/sched_setaffinity.c
+++ b/sysdeps/unix/sysv/linux/sched_setaffinity.c
@@ -22,50 +22,13 @@ 
 #include <unistd.h>
 #include <sys/types.h>
 #include <shlib-compat.h>
-#include <alloca.h>
 
 
 #ifdef __NR_sched_setaffinity
-static size_t __kernel_cpumask_size;
-
 
 int
 __sched_setaffinity_new (pid_t pid, size_t cpusetsize, const cpu_set_t *cpuset)
 {
-  if (__glibc_unlikely (__kernel_cpumask_size == 0))
-    {
-      INTERNAL_SYSCALL_DECL (err);
-      int res;
-
-      size_t psize = 128;
-      void *p = alloca (psize);
-
-      while (res = INTERNAL_SYSCALL (sched_getaffinity, err, 3, getpid (),
-				     psize, p),
-	     INTERNAL_SYSCALL_ERROR_P (res, err)
-	     && INTERNAL_SYSCALL_ERRNO (res, err) == EINVAL)
-	p = extend_alloca (p, psize, 2 * psize);
-
-      if (res == 0 || INTERNAL_SYSCALL_ERROR_P (res, err))
-	{
-	  __set_errno (INTERNAL_SYSCALL_ERRNO (res, err));
-	  return -1;
-	}
-
-      __kernel_cpumask_size = res;
-    }
-
-  /* We now know the size of the kernel cpumask_t.  Make sure the user
-     does not request to set a bit beyond that.  */
-  for (size_t cnt = __kernel_cpumask_size; cnt < cpusetsize; ++cnt)
-    if (((char *) cpuset)[cnt] != '\0')
-      {
-        /* Found a nonzero byte.  This means the user request cannot be
-	   fulfilled.  */
-	__set_errno (EINVAL);
-	return -1;
-      }
-
   int result = INLINE_SYSCALL (sched_setaffinity, 3, pid, cpusetsize, cpuset);
 
 #ifdef RESET_VGETCPU_CACHE
diff --git a/sysdeps/unix/sysv/linux/tst-affinity-pid.c b/sysdeps/unix/sysv/linux/tst-affinity-pid.c
new file mode 100644
index 0000000..309f1ad
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-affinity-pid.c
@@ -0,0 +1,201 @@ 
+/* Test for sched_getaffinity and sched_setaffinity, PID version.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* Function definitions for the benefit of tst-skeleton-affinity.c.
+   This variant forks a child process which then invokes
+   sched_getaffinity and sched_setaffinity on the parent PID.  */
+
+#include <errno.h>
+#include <stdlib.h>
+#include <sched.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <sys/wait.h>
+#include <unistd.h>
+
+static int
+write_fully (int fd, const void *buffer, size_t length)
+{
+  const void *end = buffer + length;
+  while (buffer < end)
+    {
+      ssize_t bytes_written = TEMP_FAILURE_RETRY
+        (write (fd, buffer, end - buffer));
+      if (bytes_written < 0)
+        return -1;
+      if (bytes_written == 0)
+        {
+          errno = ENOSPC;
+          return -1;
+        }
+      buffer += bytes_written;
+    }
+  return 0;
+}
+
+static ssize_t
+read_fully (int fd, void *buffer, size_t length)
+{
+  const void *start = buffer;
+  const void *end = buffer + length;
+  while (buffer < end)
+    {
+      ssize_t bytes_read = TEMP_FAILURE_RETRY
+        (read (fd, buffer, end - buffer));
+      if (bytes_read < 0)
+        return -1;
+      if (bytes_read == 0)
+        return buffer - start;
+      buffer += bytes_read;
+    }
+  return length;
+}
+
+static int
+process_child_response (int *pipes, pid_t child,
+                        cpu_set_t *set, size_t size)
+{
+  close (pipes[1]);
+
+  int value_from_child;
+  ssize_t bytes_read = read_fully
+    (pipes[0], &value_from_child, sizeof (value_from_child));
+  if (bytes_read < 0)
+    {
+      printf ("error: read from child: %m\n");
+      exit (1);
+    }
+  if (bytes_read != sizeof (value_from_child))
+    {
+      printf ("error: not enough bytes from child: %zd\n", bytes_read);
+      exit (1);
+    }
+  if (value_from_child == 0)
+    {
+      bytes_read = read_fully (pipes[0], set, size);
+      if (bytes_read < 0)
+        {
+          printf ("error: read: %m\n");
+          exit (1);
+        }
+      if (bytes_read != size)
+        {
+          printf ("error: not enough bytes from child: %zd\n", bytes_read);
+          exit (1);
+        }
+    }
+
+  int status;
+  if (waitpid (child, &status, 0) < 0)
+    {
+      printf ("error: waitpid: %m\n");
+      exit (1);
+    }
+  if (!(WIFEXITED (status) && WEXITSTATUS (status) == 0))
+    {
+      printf ("error: invalid status from child: %d\n", status);
+      exit (1);
+    }
+
+  close (pipes[0]);
+
+  if (value_from_child != 0)
+    {
+      errno = value_from_child;
+      return -1;
+    }
+  return 0;
+}
+
+static int
+getaffinity (size_t size, cpu_set_t *set)
+{
+  int pipes[2];
+  if (pipe (pipes) < 0)
+    {
+      printf ("error: pipe: %m\n");
+      exit (1);
+    }
+
+  int ret = fork ();
+  if (ret < 0)
+    {
+      printf ("error: fork: %m\n");
+      exit (1);
+    }
+  if (ret == 0)
+    {
+      /* Child.  */
+      int ret = sched_getaffinity (getppid (), size, set);
+      if (ret < 0)
+        ret = errno;
+      if (write_fully (pipes[1], &ret, sizeof (ret)) < 0
+          /* Send the mask only on success; the parent reads it then.  */
+          || (ret == 0 && write_fully (pipes[1], set, size) < 0))
+        {
+          printf ("error: write: %m\n");
+          _exit (1);
+        }
+      _exit (0);
+    }
+
+  /* Parent.  */
+  return process_child_response (pipes, ret, set, size);
+}
+
+static int
+setaffinity (size_t size, const cpu_set_t *set)
+{
+  int pipes[2];
+  if (pipe (pipes) < 0)
+    {
+      printf ("error: pipe: %m\n");
+      exit (1);
+    }
+
+  int ret = fork ();
+  if (ret < 0)
+    {
+      printf ("error: fork: %m\n");
+      exit (1);
+    }
+  if (ret == 0)
+    {
+      /* Child.  */
+      int ret = sched_setaffinity (getppid (), size, set);
+      if (write_fully (pipes[1], &ret, sizeof (ret)) < 0)
+        {
+          printf ("error: write: %m\n");
+          _exit (1);
+        }
+      _exit (0);
+    }
+
+  /* Parent.  There is no affinity mask to read from the child, so the
+     size is 0.  */
+  return process_child_response (pipes, ret, NULL, 0);
+}
+
+struct conf;
+static bool early_test (struct conf *unused)
+{
+  return true;
+}
+
+#include "tst-skeleton-affinity.c"
diff --git a/sysdeps/unix/sysv/linux/tst-affinity.c b/sysdeps/unix/sysv/linux/tst-affinity.c
new file mode 100644
index 0000000..a5c02d4
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-affinity.c
@@ -0,0 +1,43 @@ 
+/* Single-threaded test for sched_getaffinity and sched_setaffinity.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* Function definitions for the benefit of
+   tst-skeleton-affinity.c.  */
+
+#include <stdbool.h>
+#include <sched.h>
+
+static int
+getaffinity (size_t size, cpu_set_t *set)
+{
+  return sched_getaffinity (0, size, set);
+}
+
+static int
+setaffinity (size_t size, const cpu_set_t *set)
+{
+  return sched_setaffinity (0, size, set);
+}
+
+struct conf;
+static bool early_test (struct conf *unused)
+{
+  return true;
+}
+
+#include "tst-skeleton-affinity.c"
diff --git a/sysdeps/unix/sysv/linux/tst-getcpu.c b/sysdeps/unix/sysv/linux/tst-getcpu.c
deleted file mode 100644
index d9c05a7..0000000
--- a/sysdeps/unix/sysv/linux/tst-getcpu.c
+++ /dev/null
@@ -1,59 +0,0 @@ 
-#include <errno.h>
-#include <stdio.h>
-#include <sched.h>
-#include <unistd.h>
-
-
-static int
-do_test (void)
-{
-  cpu_set_t cs;
-  if (sched_getaffinity (getpid (), sizeof (cs), &cs) != 0)
-    {
-      printf ("getaffinity failed: %m\n");
-      return 1;
-    }
-
-  int result = 0;
-  int cpu = 0;
-  while (CPU_COUNT (&cs) != 0)
-    {
-      if (CPU_ISSET (cpu, &cs))
-	{
-	  cpu_set_t cs2;
-	  CPU_ZERO (&cs2);
-	  CPU_SET (cpu, &cs2);
-	  if (sched_setaffinity (getpid (), sizeof (cs2), &cs2) != 0)
-	    {
-	      printf ("setaffinity(%d) failed: %m\n", cpu);
-	      result = 1;
-	    }
-	  else
-	    {
-	      int cpu2 = sched_getcpu ();
-	      if (cpu2 == -1)
-		{
-		  if (errno == ENOSYS)
-		    {
-		      puts ("getcpu syscall not implemented");
-		      return 0;
-		    }
-		  perror ("getcpu failed");
-		  result = 1;
-		}
-	      if (cpu2 != cpu)
-		{
-		  printf ("getcpu results %d should be %d\n", cpu2, cpu);
-		  result = 1;
-		}
-	    }
-	  CPU_CLR (cpu, &cs);
-	}
-      ++cpu;
-    }
-
-  return result;
-}
-
-#define TEST_FUNCTION do_test ()
-#include <test-skeleton.c>
diff --git a/sysdeps/unix/sysv/linux/tst-skeleton-affinity.c b/sysdeps/unix/sysv/linux/tst-skeleton-affinity.c
new file mode 100644
index 0000000..8b8347d
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-skeleton-affinity.c
@@ -0,0 +1,278 @@ 
+/* Generic test case for CPU affinity functions.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* This file is included by the tst-affinity*.c files to test the two
+   variants of the functions, under different conditions.  The
+   following functions have to be defined:
+
+   static int getaffinity (size_t, cpu_set_t *);
+   static int setaffinity (size_t, const cpu_set_t *);
+   static bool early_test (struct conf *);
+
+   The first two functions get and set the affinity mask of the
+   current thread and return 0 on success, -1 on error (with the
+   error code in errno).
+
+   early_test is invoked before the tests in this file affect the
+   affinity masks.  If it returns true, testing continues, otherwise
+   no more tests run and the overall test exits with status 1.
+*/
+
+#include <errno.h>
+#include <limits.h>
+#include <sched.h>
+#include <stdbool.h>
+#include <stdio.h>
+
+/* CPU set configuration determined.  Can be used from early_test.  */
+struct conf
+{
+  int set_size;			/* in bits */
+  int last_cpu;
+};
+
+static int
+find_set_size (void)
+{
+  /* There is considerable controversy about how to determine the size
+     of the kernel CPU mask.  The probing loop below is only intended
+     for testing purposes.  */
+  for (int num_cpus = 64; num_cpus <= INT_MAX / 2; ++num_cpus)
+    {
+      cpu_set_t *set = CPU_ALLOC (num_cpus);
+      size_t size = CPU_ALLOC_SIZE (num_cpus);
+
+      if (set == NULL)
+	{
+	  printf ("error: CPU_ALLOC (%d) failed\n", num_cpus);
+	  return -1;
+	}
+      if (getaffinity (size, set) == 0)
+	{
+	  CPU_FREE (set);
+	  return num_cpus;
+	}
+      if (errno != EINVAL)
+	{
+	  printf ("error: getaffinity for %d CPUs: %m\n", num_cpus);
+	  CPU_FREE (set);
+	  return -1;
+	}
+      CPU_FREE (set);
+    }
+  puts ("error: Cannot find maximum CPU number");
+  return -1;
+}
+
+static int
+find_last_cpu (const cpu_set_t *set, size_t size)
+{
+  /* We need to determine the set size with CPU_COUNT_S and the
+     cpus_found counter because there is no direct way to obtain the
+     actual CPU set size, in bits, from the value of
+     CPU_ALLOC_SIZE.  */
+  size_t cpus_found = 0;
+  size_t total_cpus = CPU_COUNT_S (size, set);
+  int last_cpu = -1;
+
+  for (int cpu = 0; cpus_found < total_cpus; ++cpu)
+    {
+      if (CPU_ISSET_S (cpu, size, set))
+	{
+	  last_cpu = cpu;
+	  ++cpus_found;
+	}
+    }
+  return last_cpu;
+}
+
+static void
+setup_conf (struct conf *conf)
+{
+  *conf = (struct conf) {-1, -1};
+  conf->set_size = find_set_size ();
+  if (conf->set_size > 0)
+    {
+      cpu_set_t *set = CPU_ALLOC (conf->set_size);
+
+      if (set == NULL)
+	{
+	  printf ("error: CPU_ALLOC (%d) failed\n", conf->set_size);
+	  CPU_FREE (set);
+	  return;
+	}
+      if (getaffinity (CPU_ALLOC_SIZE (conf->set_size), set) < 0)
+	{
+	  printf ("error: getaffinity failed: %m\n");
+	  CPU_FREE (set);
+	  return;
+	}
+      conf->last_cpu = find_last_cpu (set, CPU_ALLOC_SIZE (conf->set_size));
+      if (conf->last_cpu < 0)
+	puts ("info: No test CPU found");
+      CPU_FREE (set);
+    }
+}
+
+static bool
+test_size (const struct conf *conf, size_t size)
+{
+  if (size < conf->set_size)
+    {
+      printf ("info: Test not run for CPU set size %zu\n", size);
+      return true;
+    }
+
+  cpu_set_t *initial_set = CPU_ALLOC (size);
+  cpu_set_t *set2 = CPU_ALLOC (size);
+  cpu_set_t *active_cpu_set = CPU_ALLOC (size);
+
+  if (initial_set == NULL || set2 == NULL || active_cpu_set == NULL)
+    {
+      printf ("error: size %zu: CPU_ALLOC failed\n", size);
+      return false;
+    }
+  size_t kernel_size = CPU_ALLOC_SIZE (size);
+
+  if (getaffinity (kernel_size, initial_set) < 0)
+    {
+      printf ("error: size %zu: getaffinity: %m\n", size);
+      return false;
+    }
+  if (setaffinity (kernel_size, initial_set) < 0)
+    {
+      printf ("error: size %zu: setaffinity: %m\n", size);
+      return true;
+    }
+
+  /* Use one-CPU set to test switching between CPUs.  */
+  int last_active_cpu = -1;
+  for (int cpu = 0; cpu <= conf->last_cpu; ++cpu)
+    {
+      int active_cpu = sched_getcpu ();
+      if (last_active_cpu >= 0 && last_active_cpu != active_cpu)
+	{
+	  printf ("error: Unexpected CPU %d, expected %d\n",
+		  active_cpu, last_active_cpu);
+	  return false;
+	}
+
+      if (!CPU_ISSET_S (cpu, kernel_size, initial_set))
+	continue;
+      last_active_cpu = cpu;
+
+      CPU_ZERO_S (kernel_size, active_cpu_set);
+      CPU_SET_S (cpu, kernel_size, active_cpu_set);
+      if (setaffinity (kernel_size, active_cpu_set) < 0)
+	{
+	  printf ("error: size %zu: setaffinity (%d): %m\n", size, cpu);
+	  return false;
+	}
+      active_cpu = sched_getcpu ();
+      if (active_cpu != cpu)
+	{
+	  printf ("error: Unexpected CPU %d, expected %d\n", active_cpu, cpu);
+	  return false;
+	}
+      if (getaffinity (kernel_size, set2) < 0)
+	{
+	  printf ("error: size %zu: getaffinity (2): %m\n", size);
+	  return false;
+	}
+      if (!CPU_EQUAL_S (kernel_size, active_cpu_set, set2))
+	{
+	  printf ("error: size %zu: CPU sets do not match\n", size);
+	  return false;
+	}
+    }
+
+  /* Test setting the all-ones set.  */
+  for (int cpu = 0; cpu < size; ++cpu)
+    CPU_SET_S (cpu, kernel_size, set2);
+  if (setaffinity (kernel_size, set2) < 0)
+    {
+      printf ("error: size %zu: setaffinity (3): %m\n", size);
+      return false;
+    }
+
+  if (setaffinity (kernel_size, initial_set) < 0)
+    {
+      printf ("error: size %zu: setaffinity (4): %m\n", size);
+      return false;
+    }
+  if (getaffinity (kernel_size, set2) < 0)
+    {
+      printf ("error: size %zu: getaffinity (3): %m\n", size);
+      return false;
+    }
+  if (!CPU_EQUAL_S (kernel_size, initial_set, set2))
+    {
+      printf ("error: size %zu: CPU sets do not match (2)\n", size);
+      return false;
+    }
+
+  CPU_FREE (initial_set);
+  CPU_FREE (set2);
+  CPU_FREE (active_cpu_set);
+
+  return true;
+}
+
+static int
+do_test (void)
+{
+  {
+    cpu_set_t set;
+    if (getaffinity (sizeof (set), &set) < 0 && errno == ENOSYS)
+      {
+	puts ("warning: getaffinity not supported, test cannot run");
+	return 0;
+      }
+    if (sched_getcpu () < 0 && errno == ENOSYS)
+      {
+	puts ("warning: sched_getcpu not supported, test cannot run");
+	return 0;
+      }
+  }
+
+  struct conf conf;
+  setup_conf (&conf);
+  printf ("info: Detected CPU set size (in bits): %d\n", conf.set_size);
+  printf ("info: Maximum test CPU: %d\n", conf.last_cpu);
+  if (conf.set_size < 0 || conf.last_cpu < 0)
+    return 1;
+
+  if (!early_test (&conf))
+    return 1;
+
+  bool error = false;
+  error |= !test_size (&conf, 1024);
+  error |= !test_size (&conf, conf.set_size);
+  error |= !test_size (&conf, 2);
+  error |= !test_size (&conf, 32);
+  error |= !test_size (&conf, 40);
+  error |= !test_size (&conf, 64);
+  error |= !test_size (&conf, 96);
+  error |= !test_size (&conf, 128);
+  error |= !test_size (&conf, 256);
+  error |= !test_size (&conf, 8192);
+  return error;
+}
+
+#define TEST_FUNCTION do_test ()
+#include "../test-skeleton.c"
diff --git a/sysdeps/unix/sysv/linux/tst-skeleton-thread-affinity.c b/sysdeps/unix/sysv/linux/tst-skeleton-thread-affinity.c
new file mode 100644
index 0000000..69e09bb
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-skeleton-thread-affinity.c
@@ -0,0 +1,280 @@ 
+/* Generic test for CPU affinity functions, multi-threaded variant.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+/* Before including this file, a test has to declare the helper
+   getaffinity and setaffinity functions described in
+   tst-skeleton-affinity.c, which is included below.  */
+
+#include <errno.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <sys/time.h>
+#include <time.h>
+
+struct conf;
+static bool early_test (struct conf *);
+
+/* Arbitrary run time for each pass.  */
+#define PASS_TIMEOUT 2
+
+/* There are two passes (one with sched_yield, one without), and we
+   double the timeout to be on the safe side.  */
+#define TIMEOUT (2 * PASS_TIMEOUT * 2)
+
+#include "tst-skeleton-affinity.c"
+
+/* 0 if still running, 1 if stopping requested.  */
+static int still_running;
+
+/* 0 if no scheduling failures, 1 if failures are encountered.  */
+static int failed;
+
+static void *
+thread_burn_one_cpu (void *closure)
+{
+  int cpu = (uintptr_t) closure;
+  while (__atomic_load_n (&still_running, __ATOMIC_RELAXED) == 0)
+    {
+      int current = sched_getcpu ();
+      if (current != cpu)
+	{
+	  printf ("error: Pinned thread %d ran on impossible cpu %d\n",
+		  cpu, current);
+	  __atomic_store_n (&failed, 1, __ATOMIC_RELAXED);
+	  /* Terminate early.  */
+	  __atomic_store_n (&still_running, 1, __ATOMIC_RELAXED);
+	}
+    }
+  return NULL;
+}
+
+struct burn_thread
+{
+  pthread_t self;
+  struct conf *conf;
+  cpu_set_t *initial_set;
+  cpu_set_t *seen_set;
+  int thread;
+};
+
+static void *
+thread_burn_any_cpu (void *closure)
+{
+  struct burn_thread *param = closure;
+
+  /* Schedule this thread around a bit to see if it lands on another
+     CPU.  Run this for 2 seconds, once with sched_yield, once
+     without.  */
+  for (int pass = 1; pass <= 2; ++pass)
+    {
+      time_t start = time (NULL);
+      while (time (NULL) - start <= PASS_TIMEOUT)
+	{
+	  int cpu = sched_getcpu ();
+	  if (cpu > param->conf->last_cpu
+	      || !CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (param->conf->set_size),
+			       param->initial_set))
+	    {
+	      printf ("error: Unpinned thread %d ran on impossible CPU %d\n",
+		      param->thread, cpu);
+	      __atomic_store_n (&failed, 1, __ATOMIC_RELAXED);
+	      return NULL;
+	    }
+	  CPU_SET_S (cpu, CPU_ALLOC_SIZE (param->conf->set_size),
+		     param->seen_set);
+	  if (pass == 1)
+	    sched_yield ();
+	}
+    }
+  return NULL;
+}
+
+static void
+stop_and_join_threads (struct conf *conf, cpu_set_t *set,
+		       pthread_t *pinned_first, pthread_t *pinned_last,
+		       struct burn_thread *other_first,
+		       struct burn_thread *other_last)
+{
+  __atomic_store_n (&still_running, 1, __ATOMIC_RELAXED);
+  for (pthread_t *p = pinned_first; p < pinned_last; ++p)
+    {
+      int cpu = p - pinned_first;
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), set))
+	continue;
+
+      int ret = pthread_join (*p, NULL);
+      if (ret != 0)
+	{
+	  printf ("error: Failed to join thread %d: %s\n", cpu, strerror (ret));
+	  fflush (stdout);
+	  /* Cannot shut down cleanly with threads still running.  */
+	  abort ();
+	}
+    }
+
+  for (struct burn_thread *p = other_first; p < other_last; ++p)
+    {
+      int cpu = p - other_first;
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), set))
+	continue;
+
+      int ret = pthread_join (p->self, NULL);
+      if (ret != 0)
+	{
+	  printf ("error: Failed to join thread %d: %s\n", cpu, strerror (ret));
+	  fflush (stdout);
+	  /* Cannot shut down cleanly with threads still running.  */
+	  abort ();
+	}
+    }
+}
+
+/* Tries to check that the initial set of CPUs is complete and that
+   the main thread will not run on any other CPUs.  */
+static bool
+early_test (struct conf *conf)
+{
+  pthread_t *pinned_threads
+    = calloc (conf->last_cpu + 1, sizeof (*pinned_threads));
+  struct burn_thread *other_threads
+    = calloc (conf->last_cpu + 1, sizeof (*other_threads));
+  cpu_set_t *initial_set = CPU_ALLOC (conf->set_size);
+  cpu_set_t *scratch_set = CPU_ALLOC (conf->set_size);
+
+  if (pinned_threads == NULL || other_threads == NULL
+      || initial_set == NULL || scratch_set == NULL)
+    {
+      puts ("error: Memory allocation failure");
+      return false;
+    }
+  if (getaffinity (CPU_ALLOC_SIZE (conf->set_size), initial_set) < 0)
+    {
+      printf ("error: getaffinity failed: %m\n");
+      return false;
+    }
+  for (int cpu = 0; cpu <= conf->last_cpu; ++cpu)
+    {
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), initial_set))
+	continue;
+      other_threads[cpu].conf = conf;
+      other_threads[cpu].initial_set = initial_set;
+      other_threads[cpu].thread = cpu;
+      other_threads[cpu].seen_set = CPU_ALLOC (conf->set_size);
+      if (other_threads[cpu].seen_set == NULL)
+	{
+	  puts ("error: Memory allocation failure");
+	  return false;
+	}
+      CPU_ZERO_S (CPU_ALLOC_SIZE (conf->set_size),
+		  other_threads[cpu].seen_set);
+    }
+
+  pthread_attr_t attr;
+  int ret = pthread_attr_init (&attr);
+  if (ret != 0)
+    {
+      printf ("error: pthread_attr_init failed: %s\n", strerror (ret));
+      return false;
+    }
+
+  /* Spawn a thread pinned to each available CPU.  */
+  for (int cpu = 0; cpu <= conf->last_cpu; ++cpu)
+    {
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), initial_set))
+	continue;
+      CPU_ZERO_S (CPU_ALLOC_SIZE (conf->set_size), scratch_set);
+      CPU_SET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), scratch_set);
+      ret = pthread_attr_setaffinity_np
+	(&attr, CPU_ALLOC_SIZE (conf->set_size), scratch_set);
+      if (ret != 0)
+	{
+	  printf ("error: pthread_attr_setaffinity_np for CPU %d failed: %s\n",
+		  cpu, strerror (ret));
+	  stop_and_join_threads (conf, initial_set,
+				 pinned_threads, pinned_threads + cpu,
+				 NULL, NULL);
+	  return false;
+	}
+      ret = pthread_create (pinned_threads + cpu, &attr,
+			    thread_burn_one_cpu, (void *) (uintptr_t) cpu);
+      if (ret != 0)
+	{
+	  printf ("error: pthread_create for CPU %d failed: %s\n",
+		  cpu, strerror (ret));
+	  stop_and_join_threads (conf, initial_set,
+				 pinned_threads, pinned_threads + cpu,
+				 NULL, NULL);
+	  return false;
+	}
+    }
+
+  /* Spawn another set of threads running on all CPUs.  */
+  for (int cpu = 0; cpu <= conf->last_cpu; ++cpu)
+    {
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), initial_set))
+	continue;
+      ret = pthread_create (&other_threads[cpu].self, NULL,
+			    thread_burn_any_cpu, other_threads + cpu);
+      if (ret != 0)
+	{
+	  printf ("error: pthread_create for thread %d failed: %s\n",
+		  cpu, strerror (ret));
+	  stop_and_join_threads (conf, initial_set,
+				 pinned_threads,
+				 pinned_threads + conf->last_cpu + 1,
+				 other_threads, other_threads + cpu);
+	  return false;
+	}
+    }
+
+  /* Main thread.  */
+  struct burn_thread main_thread;
+  main_thread.conf = conf;
+  main_thread.initial_set = initial_set;
+  main_thread.seen_set = scratch_set;
+  main_thread.thread = -1;
+  CPU_ZERO_S (CPU_ALLOC_SIZE (conf->set_size), main_thread.seen_set);
+  thread_burn_any_cpu (&main_thread);
+  stop_and_join_threads (conf, initial_set,
+			 pinned_threads,
+			 pinned_threads + conf->last_cpu + 1,
+			 other_threads, other_threads + conf->last_cpu + 1);
+
+  printf ("info: Main thread ran on %d CPU(s) of %d available CPU(s)\n",
+	  CPU_COUNT_S (CPU_ALLOC_SIZE (conf->set_size), scratch_set),
+	  CPU_COUNT_S (CPU_ALLOC_SIZE (conf->set_size), initial_set));
+  CPU_ZERO_S (CPU_ALLOC_SIZE (conf->set_size), scratch_set);
+  for (int cpu = 0; cpu <= conf->last_cpu; ++cpu)
+    {
+      if (!CPU_ISSET_S (cpu, CPU_ALLOC_SIZE (conf->set_size), initial_set))
+	continue;
+      CPU_OR_S (CPU_ALLOC_SIZE (conf->set_size),
+		scratch_set, scratch_set, other_threads[cpu].seen_set);
+      CPU_FREE (other_threads[cpu].seen_set);
+    }
+  printf ("info: Other threads ran on %d CPU(s)\n",
+	  CPU_COUNT_S (CPU_ALLOC_SIZE (conf->set_size), scratch_set));
+
+  pthread_attr_destroy (&attr);
+  CPU_FREE (scratch_set);
+  CPU_FREE (initial_set);
+  free (pinned_threads);
+  free (other_threads);
+  return failed == 0;
+}
diff --git a/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c b/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c
new file mode 100644
index 0000000..cf97c52
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread.c
@@ -0,0 +1,49 @@ 
+/* Multi-threaded test for pthread_getaffinity_np, pthread_setaffinity_np.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <errno.h>
+#include <pthread.h>
+
+/* Defined for the benefit of tst-skeleton-thread-affinity.c, included
+   below.  */
+
+static int
+setaffinity (size_t size, const cpu_set_t *set)
+{
+  int ret = pthread_setaffinity_np (pthread_self (), size, set);
+  if (ret != 0)
+    {
+      errno = ret;
+      return -1;
+    }
+  return 0;
+}
+
+static int
+getaffinity (size_t size, cpu_set_t *set)
+{
+  int ret = pthread_getaffinity_np (pthread_self (), size, set);
+  if (ret != 0)
+    {
+      errno = ret;
+      return -1;
+    }
+  return 0;
+}
+
+#include "tst-skeleton-thread-affinity.c"
diff --git a/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c b/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c
new file mode 100644
index 0000000..21cc9ae
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-thread-affinity-pthread2.c
@@ -0,0 +1,95 @@ 
+/* Separate thread test for pthread_getaffinity_np, pthread_setaffinity_np.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <errno.h>
+#include <pthread.h>
+#include <stdbool.h>
+#include <stdio.h>
+#include <stdlib.h>
+
+/* Defined for the benefit of tst-skeleton-thread-affinity.c, included
+   below.  This variant runs the functions on a separate thread.  */
+
+struct affinity_access_task
+{
+  pthread_t thread;
+  cpu_set_t *set;
+  size_t size;
+  bool get;
+  int result;
+};
+
+static void *
+affinity_access_thread (void *closure)
+{
+  struct affinity_access_task *task = closure;
+  if (task->get)
+    task->result = pthread_getaffinity_np
+      (task->thread, task->size, task->set);
+  else
+    task->result = pthread_setaffinity_np
+      (task->thread, task->size, task->set);
+  return NULL;
+}
+
+static int
+run_affinity_access_thread (cpu_set_t *set, size_t size, bool get)
+{
+  struct affinity_access_task task =
+    {
+      .thread = pthread_self (),
+      .set = set,
+      .size = size,
+      .get = get
+    };
+  pthread_t thr;
+  int ret = pthread_create (&thr, NULL, affinity_access_thread, &task);
+  if (ret != 0)
+    {
+      errno = ret;
+      printf ("error: could not create affinity access thread: %m\n");
+      abort ();
+    }
+  ret = pthread_join (thr, NULL);
+  if (ret != 0)
+    {
+      errno = ret;
+      printf ("error: could not join affinity access thread: %m\n");
+      abort ();
+    }
+  if (task.result != 0)
+    {
+      errno = task.result;
+      return -1;
+    }
+  return 0;
+}
+
+static int
+setaffinity (size_t size, const cpu_set_t *set)
+{
+  return run_affinity_access_thread ((cpu_set_t *) set, size, false);
+}
+
+static int
+getaffinity (size_t size, cpu_set_t *set)
+{
+  return run_affinity_access_thread (set, size, true);
+}
+
+#include "tst-skeleton-thread-affinity.c"
diff --git a/sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c b/sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c
new file mode 100644
index 0000000..05289c7
--- /dev/null
+++ b/sysdeps/unix/sysv/linux/tst-thread-affinity-sched.c
@@ -0,0 +1,36 @@ 
+/* Multi-threaded test for sched_getaffinity, sched_setaffinity.
+   Copyright (C) 2015 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sched.h>
+
+/* Defined for the benefit of tst-skeleton-thread-affinity.c, included
+   below.  */
+
+static int
+getaffinity (size_t size, cpu_set_t *set)
+{
+  return sched_getaffinity (0, size, set);
+}
+
+static int
+setaffinity (size_t size, const cpu_set_t *set)
+{
+  return sched_setaffinity (0, size, set);
+}
+
+#include "tst-skeleton-thread-affinity.c"