Message ID: 1457445064-7107-18-git-send-email-nix@esperi.org.uk
State: New
On 03/08/2016 02:51 PM, Nix wrote:
> From: Nick Alcock <nick.alcock@oracle.com>
>
> The x86-specific versions of both pthread_cond_wait and
> pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
> paths) calls to __pthread_mutex_cond_lock_adjust followed by
> __pthread_mutex_unlock_usercnt, which load the parameters before the
> first call but then assume that the first parameter, in %eax, will
> survive unaffected.  This happens to have been true before now, but %eax
> is a call-clobbered register, and this assumption is not safe: it could
> change at any time, at GCC's whim, and indeed the stack-protector canary
> checking code clobbers %eax while checking that the canary is
> uncorrupted.
>
> So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
> unconditionally, even when stack-protection is not in use, because it's
> the right thing to do, it's a slow path, and anything else is dicing
> with death.)
>
> v5: New.
>
> * sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
> call-clobbered %eax on retry path.
> * sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.

Nice catch.  This should go in separately because it's independent of
the stack protector work.

Thanks,
Florian
diff --git a/sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S b/sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S
index 96f8a8d..6256376 100644
--- a/sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S
+++ b/sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S
@@ -297,6 +297,7 @@ __pthread_cond_timedwait:
 	     correctly.  */
 	movl	dep_mutex(%ebx), %eax
 	call	__pthread_mutex_cond_lock_adjust
+	movl	dep_mutex(%ebx), %eax
 	xorl	%edx, %edx
 	call	__pthread_mutex_unlock_usercnt
 	jmp	8b
diff --git a/sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S b/sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S
index 94302b0..5016718 100644
--- a/sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S
+++ b/sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S
@@ -309,6 +309,7 @@ __pthread_cond_wait:
 	     correctly.  */
 	movl	dep_mutex(%ebx), %eax
 	call	__pthread_mutex_cond_lock_adjust
+	movl	dep_mutex(%ebx), %eax
 	xorl	%edx, %edx
 	call	__pthread_mutex_unlock_usercnt
 	jmp	8b
From: Nick Alcock <nick.alcock@oracle.com>

The x86-specific versions of both pthread_cond_wait and
pthread_cond_timedwait have (in their fall-back-to-futex-wait slow
paths) calls to __pthread_mutex_cond_lock_adjust followed by
__pthread_mutex_unlock_usercnt, which load the parameters before the
first call but then assume that the first parameter, in %eax, will
survive unaffected.  This happens to have been true before now, but %eax
is a call-clobbered register, and this assumption is not safe: it could
change at any time, at GCC's whim, and indeed the stack-protector canary
checking code clobbers %eax while checking that the canary is
uncorrupted.

So reload %eax before calling __pthread_mutex_unlock_usercnt.  (Do this
unconditionally, even when stack-protection is not in use, because it's
the right thing to do, it's a slow path, and anything else is dicing
with death.)

v5: New.

* sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S: Reload
call-clobbered %eax on retry path.
* sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S: Likewise.
---
 sysdeps/unix/sysv/linux/i386/pthread_cond_timedwait.S | 1 +
 sysdeps/unix/sysv/linux/i386/pthread_cond_wait.S      | 1 +
 2 files changed, 2 insertions(+)