From patchwork Thu Jun 1 18:12:42 2017
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 769890
Date: Thu, 1 Jun 2017 11:12:42 -0700
From: "H.J. Lu"
To: GNU C Library <libc-alpha@sourceware.org>
Subject: [PATCH] x86-64: Optimize strlen/strnlen/wcslen/wcsnlen with AVX2
Message-ID: <20170601181242.GA28627@lucon.org>

Optimize strlen/strnlen/wcslen/wcsnlen with AVX2 to check 32 bytes with
a single vector compare instruction.  It is as fast as the SSE2 versions
for size <= 16 bytes and up to 1X faster for size > 16 bytes on Haswell.
Select the AVX2 version on AVX2 machines where vzeroupper is preferred
and AVX unaligned load is fast.

NB: It uses TZCNT instead of BSF since TZCNT produces the same result
as BSF for non-zero input.  TZCNT is faster than BSF and is executed
as BSF if the machine doesn't support TZCNT.

(Two illustrative C sketches, one of the 32-byte vector compare and one
of the ifunc selection, are appended after the patch.)

Any comments?

H.J.
---
	* sysdeps/x86_64/multiarch/Makefile (sysdep_routines): Add
	strlen-avx2, strnlen-avx2, wcslen-avx2 and wcsnlen-avx2.
	* sysdeps/x86_64/multiarch/ifunc-impl-list.c
	(__libc_ifunc_impl_list): Add tests for __strlen_avx2,
	__strlen_sse2, __strnlen_avx2, __strnlen_sse2, __wcslen_avx2,
	__wcslen_sse2, __wcsnlen_avx2 and __wcsnlen_sse2.
	* sysdeps/x86_64/multiarch/strlen-avx2.S: New file.
	* sysdeps/x86_64/multiarch/strlen.S: Likewise.
	* sysdeps/x86_64/multiarch/strnlen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/strnlen.S: Likewise.
	* sysdeps/x86_64/multiarch/wcslen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcslen.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen-avx2.S: Likewise.
	* sysdeps/x86_64/multiarch/wcsnlen.S: Likewise.
---
 sysdeps/x86_64/multiarch/Makefile          |   4 +-
 sysdeps/x86_64/multiarch/ifunc-impl-list.c |  28 ++
 sysdeps/x86_64/multiarch/strlen-avx2.S     | 394 +++++++++++++++++++++++++++++
 sysdeps/x86_64/multiarch/strlen.S          |  64 +++++
 sysdeps/x86_64/multiarch/strnlen-avx2.S    |   4 +
 sysdeps/x86_64/multiarch/strnlen.S         |  65 +++++
 sysdeps/x86_64/multiarch/wcslen-avx2.S     |   4 +
 sysdeps/x86_64/multiarch/wcslen.S          |  55 ++++
 sysdeps/x86_64/multiarch/wcsnlen-avx2.S    |   5 +
 sysdeps/x86_64/multiarch/wcsnlen.S         |  55 ++++
 10 files changed, 677 insertions(+), 1 deletion(-)
 create mode 100644 sysdeps/x86_64/multiarch/strlen-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/strlen.S
 create mode 100644 sysdeps/x86_64/multiarch/strnlen-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/strnlen.S
 create mode 100644 sysdeps/x86_64/multiarch/wcslen-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/wcslen.S
 create mode 100644 sysdeps/x86_64/multiarch/wcsnlen-avx2.S
 create mode 100644 sysdeps/x86_64/multiarch/wcsnlen.S

diff --git a/sysdeps/x86_64/multiarch/Makefile b/sysdeps/x86_64/multiarch/Makefile
index 48aba0f..f33f21a 100644
--- a/sysdeps/x86_64/multiarch/Makefile
+++ b/sysdeps/x86_64/multiarch/Makefile
@@ -13,6 +13,7 @@ sysdep_routines += strncat-c stpncpy-c strncpy-c strcmp-ssse3 \
 		   memcpy-ssse3-back \
 		   memmove-ssse3-back \
 		   memmove-avx512-no-vzeroupper strcasecmp_l-ssse3 \
+		   strlen-avx2 strnlen-avx2 \
 		   strncase_l-ssse3 strcat-ssse3 strncat-ssse3\
 		   strcpy-ssse3 strncpy-ssse3 stpcpy-ssse3 stpncpy-ssse3 \
 		   strcpy-sse2-unaligned strncpy-sse2-unaligned \
@@ -35,5 +36,6 @@ ifeq ($(subdir),wcsmbs)
 sysdep_routines += wmemcmp-sse4 wmemcmp-ssse3 wmemcmp-c \
 		   wmemchr-avx2 \
 		   wmemcmp-avx2 \
-		   wcscpy-ssse3 wcscpy-c
+		   wcscpy-ssse3 wcscpy-c \
+		   wcslen-avx2 wcsnlen-avx2
 endif
diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
index ae09241..c2b07b3 100644
--- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c
+++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c
@@ -165,6 +165,20 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 			      __rawmemchr_avx2)
 	      IFUNC_IMPL_ADD (array, i, rawmemchr, 1, __rawmemchr_sse2))
 
+  /* Support sysdeps/x86_64/multiarch/strlen.S.  */
+  IFUNC_IMPL (i, name, strlen,
+	      IFUNC_IMPL_ADD (array, i, strlen,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __strlen_avx2)
+	      IFUNC_IMPL_ADD (array, i, strlen, 1, __strlen_sse2))
+
+  /* Support sysdeps/x86_64/multiarch/strnlen.S.  */
+  IFUNC_IMPL (i, name, strnlen,
+	      IFUNC_IMPL_ADD (array, i, strnlen,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __strnlen_avx2)
+	      IFUNC_IMPL_ADD (array, i, strnlen, 1, __strnlen_sse2))
+
   /* Support sysdeps/x86_64/multiarch/stpncpy.S.  */
   IFUNC_IMPL (i, name, stpncpy,
 	      IFUNC_IMPL_ADD (array, i, stpncpy, HAS_CPU_FEATURE (SSSE3),
@@ -309,6 +323,20 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
 			      __wcscpy_ssse3)
 	      IFUNC_IMPL_ADD (array, i, wcscpy, 1, __wcscpy_sse2))
 
+  /* Support sysdeps/x86_64/multiarch/wcslen.S.  */
+  IFUNC_IMPL (i, name, wcslen,
+	      IFUNC_IMPL_ADD (array, i, wcslen,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __wcslen_avx2)
+	      IFUNC_IMPL_ADD (array, i, wcslen, 1, __wcslen_sse2))
+
+  /* Support sysdeps/x86_64/multiarch/wcsnlen.S.  */
+  IFUNC_IMPL (i, name, wcsnlen,
+	      IFUNC_IMPL_ADD (array, i, wcsnlen,
+			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      __wcsnlen_avx2)
+	      IFUNC_IMPL_ADD (array, i, wcsnlen, 1, __wcsnlen_sse2))
+
   /* Support sysdeps/x86_64/multiarch/wmemchr.S.  */
   IFUNC_IMPL (i, name, wmemchr,
 	      IFUNC_IMPL_ADD (array, i, wmemchr,
diff --git a/sysdeps/x86_64/multiarch/strlen-avx2.S b/sysdeps/x86_64/multiarch/strlen-avx2.S
new file mode 100644
index 0000000..1dc823a
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/strlen-avx2.S
@@ -0,0 +1,394 @@
+/* strlen/strnlen/wcslen/wcsnlen optimized with AVX2.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#if IS_IN (libc)
+
+# include <sysdep.h>
+
+# ifndef STRLEN
+#  define STRLEN	__strlen_avx2
+# endif
+
+# ifdef USE_AS_WCSLEN
+#  define VPCMPEQ	vpcmpeqd
+#  define VPMINU	vpminud
+# else
+#  define VPCMPEQ	vpcmpeqb
+#  define VPMINU	vpminub
+# endif
+
+# ifndef VZEROUPPER
+#  define VZEROUPPER	vzeroupper
+# endif
+
+# define VEC_SIZE 32
+
+	.section .text.avx,"ax",@progbits
+ENTRY (STRLEN)
+# ifdef USE_AS_STRNLEN
+	/* Check for zero length.  */
+	testq	%rsi, %rsi
+	jz	L(zero)
+#  ifdef USE_AS_WCSLEN
+	shl	$2, %rsi
+#  endif
+	movq	%rsi, %r8
+# endif
+	movl	%edi, %ecx
+	movq	%rdi, %rdx
+	vpxor	%xmm0, %xmm0, %xmm0
+
+	/* Check if we may cross page boundary with one vector load.  */
+	andl	$(2 * VEC_SIZE - 1), %ecx
+	cmpl	$VEC_SIZE, %ecx
+	ja	L(cross_page_boundary)
+
+	/* Check the first VEC_SIZE bytes.  */
+	VPCMPEQ	(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+
+# ifdef USE_AS_STRNLEN
+	jnz	L(first_vec_x0_check)
+	/* Adjust length and check the end of data.  */
+	subq	$VEC_SIZE, %rsi
+	jbe	L(max)
+# else
+	jnz	L(first_vec_x0)
+# endif
+
+	/* Align data for aligned loads in the loop.  */
+	addq	$VEC_SIZE, %rdi
+	andl	$(VEC_SIZE - 1), %ecx
+	andq	$-VEC_SIZE, %rdi
+
+# ifdef USE_AS_STRNLEN
+	/* Adjust length.  */
+	addq	%rcx, %rsi
+
+	subq	$(VEC_SIZE * 4), %rsi
+	jbe	L(last_4x_vec_or_less)
+# endif
+	jmp	L(more_4x_vec)
+
+	.p2align 4
+L(cross_page_boundary):
+	andl	$(VEC_SIZE - 1), %ecx
+	andq	$-VEC_SIZE, %rdi
+	VPCMPEQ	(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	/* Remove the leading bytes.  */
+	sarl	%cl, %eax
+	testl	%eax, %eax
+	jz	L(aligned_more)
+	tzcntl	%eax, %eax
+# ifdef USE_AS_STRNLEN
+	/* Check the end of data.  */
+	cmpq	%rax, %rsi
+	jbe	L(max)
+# endif
+	addq	%rdi, %rax
+	addq	%rcx, %rax
+	subq	%rdx, %rax
+# ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(aligned_more):
+# ifdef USE_AS_STRNLEN
+	/* "rcx" is less than VEC_SIZE.  Calculate "rdx + rcx - VEC_SIZE"
+	   with "rdx - (VEC_SIZE - rcx)" instead of "(rdx + rcx) - VEC_SIZE"
+	   to avoid possible addition overflow.  */
+	negq	%rcx
+	addq	$VEC_SIZE, %rcx
+
+	/* Check the end of data.  */
+	subq	%rcx, %rsi
+	jbe	L(max)
+# endif
+
+	addq	$VEC_SIZE, %rdi
+
+# ifdef USE_AS_STRNLEN
+	subq	$(VEC_SIZE * 4), %rsi
+	jbe	L(last_4x_vec_or_less)
+# endif
+
+L(more_4x_vec):
+	/* Check the first 4 * VEC_SIZE.  Only one VEC_SIZE at a time
+	   since data is only aligned to VEC_SIZE.  */
+	VPCMPEQ	(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x0)
+
+	VPCMPEQ	VEC_SIZE(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x1)
+
+	VPCMPEQ	(VEC_SIZE * 2)(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x2)
+
+	VPCMPEQ	(VEC_SIZE * 3)(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x3)
+
+	addq	$(VEC_SIZE * 4), %rdi
+
+# ifdef USE_AS_STRNLEN
+	subq	$(VEC_SIZE * 4), %rsi
+	jbe	L(last_4x_vec_or_less)
+# endif
+
+	/* Align data to 4 * VEC_SIZE.  */
+	movq	%rdi, %rcx
+	andl	$(4 * VEC_SIZE - 1), %ecx
+	andq	$-(4 * VEC_SIZE), %rdi
+
+# ifdef USE_AS_STRNLEN
+	/* Adjust length.  */
+	addq	%rcx, %rsi
+# endif
+
+	.p2align 4
+L(loop_4x_vec):
+	/* Compare 4 * VEC at a time forward.  */
+	vmovdqa	(%rdi), %ymm1
+	vmovdqa	VEC_SIZE(%rdi), %ymm2
+	vmovdqa	(VEC_SIZE * 2)(%rdi), %ymm3
+	vmovdqa	(VEC_SIZE * 3)(%rdi), %ymm4
+	VPMINU	%ymm1, %ymm2, %ymm5
+	VPMINU	%ymm3, %ymm4, %ymm6
+	VPMINU	%ymm5, %ymm6, %ymm5
+
+	VPCMPEQ	%ymm5, %ymm0, %ymm5
+	vpmovmskb %ymm5, %eax
+	testl	%eax, %eax
+	jnz	L(4x_vec_end)
+
+	addq	$(VEC_SIZE * 4), %rdi
+
+# ifndef USE_AS_STRNLEN
+	jmp	L(loop_4x_vec)
+# else
+	subq	$(VEC_SIZE * 4), %rsi
+	ja	L(loop_4x_vec)
+
+L(last_4x_vec_or_less):
+	/* Less than 4 * VEC and aligned to VEC_SIZE.  */
+	addl	$(VEC_SIZE * 2), %esi
+	jle	L(last_2x_vec)
+
+	VPCMPEQ	(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x0)
+
+	VPCMPEQ	VEC_SIZE(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x1)
+
+	VPCMPEQ	(VEC_SIZE * 2)(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+
+	jnz	L(first_vec_x2_check)
+	subl	$VEC_SIZE, %esi
+	jle	L(max)
+
+	VPCMPEQ	(VEC_SIZE * 3)(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+
+	jnz	L(first_vec_x3_check)
+	movq	%r8, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(last_2x_vec):
+	addl	$(VEC_SIZE * 2), %esi
+	VPCMPEQ	(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+
+	jnz	L(first_vec_x0_check)
+	subl	$VEC_SIZE, %esi
+	jle	L(max)
+
+	VPCMPEQ	VEC_SIZE(%rdi), %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x1_check)
+	movq	%r8, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x0_check):
+	tzcntl	%eax, %eax
+	/* Check the end of data.  */
+	cmpq	%rax, %rsi
+	jbe	L(max)
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x1_check):
+	tzcntl	%eax, %eax
+	/* Check the end of data.  */
+	cmpq	%rax, %rsi
+	jbe	L(max)
+	addq	$VEC_SIZE, %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x2_check):
+	tzcntl	%eax, %eax
+	/* Check the end of data.  */
+	cmpq	%rax, %rsi
+	jbe	L(max)
+	addq	$(VEC_SIZE * 2), %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x3_check):
+	tzcntl	%eax, %eax
+	/* Check the end of data.  */
+	cmpq	%rax, %rsi
+	jbe	L(max)
+	addq	$(VEC_SIZE * 3), %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(max):
+	movq	%r8, %rax
+#  ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+#  endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(zero):
+	xorl	%eax, %eax
+	ret
+# endif
+
+	.p2align 4
+L(first_vec_x0):
+	tzcntl	%eax, %eax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+# ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x1):
+	tzcntl	%eax, %eax
+	addq	$VEC_SIZE, %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+# ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(first_vec_x2):
+	tzcntl	%eax, %eax
+	addq	$(VEC_SIZE * 2), %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+# ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+# endif
+	VZEROUPPER
+	ret
+
+	.p2align 4
+L(4x_vec_end):
+	VPCMPEQ	%ymm1, %ymm0, %ymm1
+	vpmovmskb %ymm1, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x0)
+	VPCMPEQ	%ymm2, %ymm0, %ymm2
+	vpmovmskb %ymm2, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x1)
+	VPCMPEQ	%ymm3, %ymm0, %ymm3
+	vpmovmskb %ymm3, %eax
+	testl	%eax, %eax
+	jnz	L(first_vec_x2)
+	VPCMPEQ	%ymm4, %ymm0, %ymm4
+	vpmovmskb %ymm4, %eax
+	testl	%eax, %eax
+L(first_vec_x3):
+	tzcntl	%eax, %eax
+	addq	$(VEC_SIZE * 3), %rax
+	addq	%rdi, %rax
+	subq	%rdx, %rax
+# ifdef USE_AS_WCSLEN
+	shrq	$2, %rax
+# endif
+	VZEROUPPER
+	ret
+
+END (STRLEN)
+#endif
diff --git a/sysdeps/x86_64/multiarch/strlen.S b/sysdeps/x86_64/multiarch/strlen.S
new file mode 100644
index 0000000..2847440
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/strlen.S
@@ -0,0 +1,64 @@
+/* Multiple versions of strlen
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in libc.  */
+#if IS_IN (libc)
+	.text
+ENTRY(strlen)
+	.type	strlen, @gnu_indirect_function
+	LOAD_RTLD_GLOBAL_RO_RDX
+	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
+	jnz	1f
+	HAS_ARCH_FEATURE (AVX2_Usable)
+	jz	1f
+	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jz	1f
+	leaq	__strlen_avx2(%rip), %rax
+	ret
+
+1:	leaq	__strlen_sse2(%rip), %rax
+	ret
+END(strlen)
+
+# undef ENTRY
+# define ENTRY(name) \
+	.type __strlen_sse2, @function; \
+	.p2align 4; \
+	.globl __strlen_sse2; \
+	.hidden __strlen_sse2; \
+	__strlen_sse2: cfi_startproc; \
+	CALL_MCOUNT
+# undef END
+# define END(name) \
+	cfi_endproc; .size __strlen_sse2, .-__strlen_sse2
+
+# ifdef SHARED
+# undef libc_hidden_builtin_def
+/* It doesn't make sense to send libc-internal strlen calls through
+   a PLT.  The speedup we get from using AVX2 instructions is likely
+   eaten away by the indirect call in the PLT.  */
+# define libc_hidden_builtin_def(name) \
+	.globl __GI_strlen; __GI_strlen = __strlen_sse2
+# endif
+#endif
+
+#include "../strlen.S"
diff --git a/sysdeps/x86_64/multiarch/strnlen-avx2.S b/sysdeps/x86_64/multiarch/strnlen-avx2.S
new file mode 100644
index 0000000..c4062b2
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/strnlen-avx2.S
@@ -0,0 +1,4 @@
+#define STRLEN __strnlen_avx2
+#define USE_AS_STRNLEN 1
+
+#include "strlen-avx2.S"
diff --git a/sysdeps/x86_64/multiarch/strnlen.S b/sysdeps/x86_64/multiarch/strnlen.S
new file mode 100644
index 0000000..0c2289a
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/strnlen.S
@@ -0,0 +1,65 @@
+/* Multiple versions of strnlen
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in libc.  */
+#if IS_IN (libc)
+	.text
+ENTRY(__strnlen)
+	.type	__strnlen, @gnu_indirect_function
+	LOAD_RTLD_GLOBAL_RO_RDX
+	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
+	jnz	1f
+	HAS_ARCH_FEATURE (AVX2_Usable)
+	jz	1f
+	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jz	1f
+	leaq	__strnlen_avx2(%rip), %rax
+	ret
+
+1:	leaq	__strnlen_sse2(%rip), %rax
+	ret
+END(__strnlen)
+
+# undef ENTRY
+# define ENTRY(name) \
+	.type __strnlen_sse2, @function; \
+	.p2align 4; \
+	.globl __strnlen_sse2; \
+	.hidden __strnlen_sse2; \
+	__strnlen_sse2: cfi_startproc; \
+	CALL_MCOUNT
+# undef END
+# define END(name) \
+	cfi_endproc; .size __strnlen_sse2, .-__strnlen_sse2
+
+# ifdef SHARED
+/* It doesn't make sense to send libc-internal strnlen calls through
+   a PLT.  The speedup we get from using AVX2 instructions is likely
+   eaten away by the indirect call in the PLT.  */
+# undef libc_hidden_def
+# define libc_hidden_def(name) \
+	.globl __GI_strnlen; __GI_strnlen = __strnlen_sse2; \
+	.globl __GI___strnlen; __GI___strnlen = __strnlen_sse2
+# endif
+#endif
+
+#include "../strnlen.S"
diff --git a/sysdeps/x86_64/multiarch/wcslen-avx2.S b/sysdeps/x86_64/multiarch/wcslen-avx2.S
new file mode 100644
index 0000000..c9224f1
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcslen-avx2.S
@@ -0,0 +1,4 @@
+#define STRLEN __wcslen_avx2
+#define USE_AS_WCSLEN 1
+
+#include "strlen-avx2.S"
diff --git a/sysdeps/x86_64/multiarch/wcslen.S b/sysdeps/x86_64/multiarch/wcslen.S
new file mode 100644
index 0000000..04369b6
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcslen.S
@@ -0,0 +1,55 @@
+/* Multiple versions of wcslen
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in libc.  */
+#if IS_IN (libc)
+	.text
+ENTRY(__wcslen)
+	.type	__wcslen, @gnu_indirect_function
+	LOAD_RTLD_GLOBAL_RO_RDX
+	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
+	jnz	1f
+	HAS_ARCH_FEATURE (AVX2_Usable)
+	jz	1f
+	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jz	1f
+	leaq	__wcslen_avx2(%rip), %rax
+	ret
+
+1:	leaq	__wcslen_sse2(%rip), %rax
+	ret
+END(__wcslen)
+
+# undef ENTRY
+# define ENTRY(name) \
+	.type __wcslen_sse2, @function; \
+	.p2align 4; \
+	.globl __wcslen_sse2; \
+	.hidden __wcslen_sse2; \
+	__wcslen_sse2: cfi_startproc; \
+	CALL_MCOUNT
+# undef END
+# define END(name) \
+	cfi_endproc; .size __wcslen_sse2, .-__wcslen_sse2
+#endif
+
+#include "../wcslen.S"
diff --git a/sysdeps/x86_64/multiarch/wcsnlen-avx2.S b/sysdeps/x86_64/multiarch/wcsnlen-avx2.S
new file mode 100644
index 0000000..fac8354
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcsnlen-avx2.S
@@ -0,0 +1,5 @@
+#define STRLEN __wcsnlen_avx2
+#define USE_AS_WCSLEN 1
+#define USE_AS_STRNLEN 1
+
+#include "strlen-avx2.S"
diff --git a/sysdeps/x86_64/multiarch/wcsnlen.S b/sysdeps/x86_64/multiarch/wcsnlen.S
new file mode 100644
index 0000000..0893bea
--- /dev/null
+++ b/sysdeps/x86_64/multiarch/wcsnlen.S
@@ -0,0 +1,55 @@
+/* Multiple versions of wcsnlen
+   All versions must be listed in ifunc-impl-list.c.
+   Copyright (C) 2017 Free Software Foundation, Inc.
+   This file is part of the GNU C Library.
+
+   The GNU C Library is free software; you can redistribute it and/or
+   modify it under the terms of the GNU Lesser General Public
+   License as published by the Free Software Foundation; either
+   version 2.1 of the License, or (at your option) any later version.
+
+   The GNU C Library is distributed in the hope that it will be useful,
+   but WITHOUT ANY WARRANTY; without even the implied warranty of
+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+   Lesser General Public License for more details.
+
+   You should have received a copy of the GNU Lesser General Public
+   License along with the GNU C Library; if not, see
+   <http://www.gnu.org/licenses/>.  */
+
+#include <sysdep.h>
+#include <init-arch.h>
+
+/* Define multiple versions only for the definition in libc.  */
+#if IS_IN (libc)
+	.text
+ENTRY(__wcsnlen)
+	.type	__wcsnlen, @gnu_indirect_function
+	LOAD_RTLD_GLOBAL_RO_RDX
+	HAS_ARCH_FEATURE (Prefer_No_VZEROUPPER)
+	jnz	1f
+	HAS_ARCH_FEATURE (AVX2_Usable)
+	jz	1f
+	HAS_ARCH_FEATURE (AVX_Fast_Unaligned_Load)
+	jz	1f
+	leaq	__wcsnlen_avx2(%rip), %rax
+	ret
+
+1:	leaq	__wcsnlen_sse2(%rip), %rax
+	ret
+END(__wcsnlen)
+
+# undef ENTRY
+# define ENTRY(name) \
+	.type __wcsnlen_sse2, @function; \
+	.p2align 4; \
+	.globl __wcsnlen_sse2; \
+	.hidden __wcsnlen_sse2; \
+	__wcsnlen_sse2: cfi_startproc; \
+	CALL_MCOUNT
+# undef END
+# define END(name) \
+	cfi_endproc; .size __wcsnlen_sse2, .-__wcsnlen_sse2
+#endif
+
+#include "../wcsnlen.S"
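
For readers who do not read AT&T assembly, here is a minimal C
intrinsics sketch of the core step the patch implements.  The name
strlen_avx2_sketch is hypothetical, and the sketch deliberately omits
the page-crossing, alignment and 4 * VEC_SIZE loop handling that the
real strlen-avx2.S performs (compile with -mavx2):

#include <immintrin.h>
#include <stddef.h>

/* Sketch of the AVX2 core step only: one 32-byte load, one vector
   compare against zero (vpcmpeqb) and one movemask.  The real code
   also aligns the pointer so loads never cross into an unmapped page
   and unrolls to 4 vectors per iteration; this loop ignores both.  */
static size_t
strlen_avx2_sketch (const char *s)
{
  const __m256i zero = _mm256_setzero_si256 ();
  size_t len = 0;
  for (;;)
    {
      __m256i v = _mm256_loadu_si256 ((const __m256i *) (s + len));
      unsigned int mask
	= (unsigned int) _mm256_movemask_epi8 (_mm256_cmpeq_epi8 (v, zero));
      if (mask != 0)
	/* __builtin_ctz compiles to TZCNT (or BSF); for the non-zero
	   input guaranteed here both give the same answer, which is
	   why the assembly can use TZCNT unconditionally.  */
	return len + __builtin_ctz (mask);
      len += 32;	/* VEC_SIZE */
    }
}

The wcslen variants differ only in comparing dwords (vpcmpeqd) and in
shifting the final byte count right by 2 to convert it into a count of
wide characters.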
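
The four ifunc stubs (strlen.S, strnlen.S, wcslen.S and wcsnlen.S) all
encode the same selection policy.  Restated as a C sketch, where the
enum and has_arch_feature are stand-ins invented for illustration (in
glibc the check is the HAS_ARCH_FEATURE macro over cpu-features data):

#include <stddef.h>

/* Hypothetical stand-ins for this sketch only; not a real glibc API.  */
enum arch_feature
{
  Prefer_No_VZEROUPPER,
  AVX2_Usable,
  AVX_Fast_Unaligned_Load
};
extern int has_arch_feature (enum arch_feature);

extern size_t __strlen_avx2 (const char *);
extern size_t __strlen_sse2 (const char *);

typedef size_t (*strlen_fn) (const char *);

/* Mirrors the branch order of the assembly stub: any "no" answer
   falls through to label 1, i.e. the SSE2 version.  */
static strlen_fn
strlen_resolver (void)
{
  if (!has_arch_feature (Prefer_No_VZEROUPPER)
      && has_arch_feature (AVX2_Usable)
      && has_arch_feature (AVX_Fast_Unaligned_Load))
    return __strlen_avx2;
  return __strlen_sse2;
}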