From patchwork Tue Jun 30 04:35:28 2020
X-Patchwork-Submitter: "H.J. Lu"
X-Patchwork-Id: 1319520
To: libc-alpha@sourceware.org
Subject: V7: [PATCH 1/2] x86: Support usable check for all CPU features
Date: Mon, 29 Jun 2020 21:35:28 -0700
Message-Id: <20200630043529.739181-2-hjl.tools@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200630043529.739181-1-hjl.tools@gmail.com>
References: <20200630043529.739181-1-hjl.tools@gmail.com>
From: "H.J. Lu"
Cc: Florian Weimer

Support usable check for all CPU features with the following changes:

1. Change struct cpu_features to

   struct cpuid_features
   {
     struct cpuid_registers cpuid;
     struct cpuid_registers usable;
   };

   struct cpu_features
   {
     struct cpu_features_basic basic;
     struct cpuid_features features[COMMON_CPUID_INDEX_MAX];
     unsigned int preferred[PREFERRED_FEATURE_INDEX_MAX];
     ...
   };

   so that there is a usable bit for each cpuid bit.
2. After the cpuid bits have been initialized, copy them to the usable
   bits.
3. Clear the usable bits which require OS support.
4. If the feature is supported by the OS, copy its cpuid bit to its
   usable bit.

The results are:

a. If the feature doesn't require OS support, its usable bit is copied
   from the cpuid bit.
b. Otherwise, its usable bit is copied from the cpuid bit only if the
   feature is supported by the OS.

HAS_CPU_FEATURE can be used to check if the CPU supports the feature.
CPU_FEATURE_USABLE can be used to check if both the CPU and the OS
support the feature.
---
 sysdeps/i386/i686/multiarch/s_fma.c           |   2 +-
 sysdeps/i386/i686/multiarch/s_fmaf.c          |   2 +-
 sysdeps/x86/cacheinfo.c                       |   2 +-
 sysdeps/x86/cpu-features.c                    | 341 +++++++++---------
 sysdeps/x86/cpu-features.h                    | 173 ++-------
 sysdeps/x86/cpu-tunables.c                    | 170 ++++-----
 sysdeps/x86/tst-get-cpu-features.c            | 119 ++++++
 sysdeps/x86_64/Makefile                       |   6 +-
 sysdeps/x86_64/dl-machine.h                   |   6 +-
 sysdeps/x86_64/fpu/math-tests-arch.h          |   6 +-
 sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h |   8 +-
 sysdeps/x86_64/fpu/multiarch/ifunc-fma.h      |   4 +-
 sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h     |   6 +-
 .../x86_64/fpu/multiarch/ifunc-mathvec-avx2.h |   4 +-
 .../fpu/multiarch/ifunc-mathvec-avx512.h      |   4 +-
 sysdeps/x86_64/fpu/multiarch/s_fma.c          |   4 +-
 sysdeps/x86_64/fpu/multiarch/s_fmaf.c         |   4 +-
 sysdeps/x86_64/multiarch/ifunc-avx2.h         |   2 +-
 sysdeps/x86_64/multiarch/ifunc-impl-list.c    | 146 ++++----
 sysdeps/x86_64/multiarch/ifunc-memcmp.h       |   2 +-
 sysdeps/x86_64/multiarch/ifunc-memmove.h      |   2 +-
 sysdeps/x86_64/multiarch/ifunc-memset.h       |   4 +-
 sysdeps/x86_64/multiarch/ifunc-strcasecmp.h   |   2 +-
 sysdeps/x86_64/multiarch/ifunc-strcpy.h       |   2 +-
 sysdeps/x86_64/multiarch/ifunc-wmemset.h      |   4 +-
 sysdeps/x86_64/multiarch/strchr.c             |   2 +-
 sysdeps/x86_64/multiarch/strcmp.c             |   2 +-
 sysdeps/x86_64/multiarch/strncmp.c            |   2 +-
 sysdeps/x86_64/multiarch/test-multiarch.c     |   8 +-
 sysdeps/x86_64/multiarch/wcsnlen.c            |   2 +-
 30 files changed, 506 insertions(+), 535 deletions(-)

diff --git a/sysdeps/i386/i686/multiarch/s_fma.c b/sysdeps/i386/i686/multiarch/s_fma.c
index 90f649f52a..0729853e21 100644
--- 
a/sysdeps/i386/i686/multiarch/s_fma.c +++ b/sysdeps/i386/i686/multiarch/s_fma.c @@ -27,7 +27,7 @@ extern double __fma_ia32 (double x, double y, double z) attribute_hidden; extern double __fma_fma (double x, double y, double z) attribute_hidden; libm_ifunc (__fma, - HAS_ARCH_FEATURE (FMA_Usable) ? __fma_fma : __fma_ia32); + CPU_FEATURE_USABLE (FMA) ? __fma_fma : __fma_ia32); libm_alias_double (__fma, fma) #define __fma __fma_ia32 diff --git a/sysdeps/i386/i686/multiarch/s_fmaf.c b/sysdeps/i386/i686/multiarch/s_fmaf.c index 27757eca9d..20f965c342 100644 --- a/sysdeps/i386/i686/multiarch/s_fmaf.c +++ b/sysdeps/i386/i686/multiarch/s_fmaf.c @@ -27,7 +27,7 @@ extern float __fmaf_ia32 (float x, float y, float z) attribute_hidden; extern float __fmaf_fma (float x, float y, float z) attribute_hidden; libm_ifunc (__fmaf, - HAS_ARCH_FEATURE (FMA_Usable) ? __fmaf_fma : __fmaf_ia32); + CPU_FEATURE_USABLE (FMA) ? __fmaf_fma : __fmaf_ia32); libm_alias_float (__fma, fma) #define __fmaf __fmaf_ia32 diff --git a/sysdeps/x86/cacheinfo.c b/sysdeps/x86/cacheinfo.c index 311502dee3..92e7cddada 100644 --- a/sysdeps/x86/cacheinfo.c +++ b/sysdeps/x86/cacheinfo.c @@ -731,7 +731,7 @@ intel_bug_no_cache_info: /* Assume that all logical threads share the highest cache level. 
*/ threads - = ((cpu_features->cpuid[COMMON_CPUID_INDEX_1].ebx + = ((cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.ebx >> 16) & 0xff); } diff --git a/sysdeps/x86/cpu-features.c b/sysdeps/x86/cpu-features.c index c351bdd54a..354e3171f4 100644 --- a/sysdeps/x86/cpu-features.c +++ b/sysdeps/x86/cpu-features.c @@ -42,73 +42,44 @@ extern void TUNABLE_CALLBACK (set_x86_shstk) (tunable_val_t *) #endif static void -get_extended_indices (struct cpu_features *cpu_features) -{ - unsigned int eax, ebx, ecx, edx; - __cpuid (0x80000000, eax, ebx, ecx, edx); - if (eax >= 0x80000001) - __cpuid (0x80000001, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000001].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000001].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000001].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000001].edx); - if (eax >= 0x80000007) - __cpuid (0x80000007, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000007].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000007].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000007].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000007].edx); - if (eax >= 0x80000008) - __cpuid (0x80000008, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000008].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000008].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000008].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_80000008].edx); -} - -static void -get_common_indices (struct cpu_features *cpu_features, - unsigned int *family, unsigned int *model, - unsigned int *extended_model, unsigned int *stepping) +update_usable (struct cpu_features *cpu_features) { - if (family) - { - unsigned int eax; - __cpuid (1, eax, cpu_features->cpuid[COMMON_CPUID_INDEX_1].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_1].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_1].edx); - cpu_features->cpuid[COMMON_CPUID_INDEX_1].eax = eax; - *family = (eax >> 8) & 0x0f; - *model = (eax >> 4) & 0x0f; - *extended_model = (eax >> 12) & 0xf0; - *stepping = eax & 0x0f; - if 
(*family == 0x0f) - { - *family += (eax >> 20) & 0xff; - *model += *extended_model; - } - } - - if (cpu_features->basic.max_cpuid >= 7) - { - __cpuid_count (7, 0, - cpu_features->cpuid[COMMON_CPUID_INDEX_7].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_7].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_7].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_7].edx); - __cpuid_count (7, 1, - cpu_features->cpuid[COMMON_CPUID_INDEX_7_ECX_1].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_7_ECX_1].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_7_ECX_1].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_7_ECX_1].edx); - } - - if (cpu_features->basic.max_cpuid >= 0xd) - __cpuid_count (0xd, 1, - cpu_features->cpuid[COMMON_CPUID_INDEX_D_ECX_1].eax, - cpu_features->cpuid[COMMON_CPUID_INDEX_D_ECX_1].ebx, - cpu_features->cpuid[COMMON_CPUID_INDEX_D_ECX_1].ecx, - cpu_features->cpuid[COMMON_CPUID_INDEX_D_ECX_1].edx); + /* Copy the cpuid array to the usable array. */ + unsigned int i; + for (i = 0; i < COMMON_CPUID_INDEX_MAX; i++) + cpu_features->features[i].usable = cpu_features->features[i].cpuid; + + /* Clear the usable bits which require OS support. 
*/ + CPU_FEATURE_UNSET (cpu_features, FMA, usable); + CPU_FEATURE_UNSET (cpu_features, AVX, usable); + CPU_FEATURE_UNSET (cpu_features, F16C, usable); + CPU_FEATURE_UNSET (cpu_features, AVX2, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512F, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512DQ, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_IFMA, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512PF, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512ER, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512CD, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512BW, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512VL, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_VBMI, usable); + CPU_FEATURE_UNSET (cpu_features, PKU, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_VBMI2, usable); + CPU_FEATURE_UNSET (cpu_features, VAES, usable); + CPU_FEATURE_UNSET (cpu_features, VPCLMULQDQ, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_VNNI, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_BITALG, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_VPOPCNTDQ, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_4VNNIW, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_4FMAPS, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_VP2INTERSECT, usable); + CPU_FEATURE_UNSET (cpu_features, AMX_BF16, usable); + CPU_FEATURE_UNSET (cpu_features, AMX_TILE, usable); + CPU_FEATURE_UNSET (cpu_features, AMX_INT8, usable); + CPU_FEATURE_UNSET (cpu_features, XOP, usable); + CPU_FEATURE_UNSET (cpu_features, FMA4, usable); + CPU_FEATURE_UNSET (cpu_features, XSAVEC, usable); + CPU_FEATURE_UNSET (cpu_features, AVX512_BF16, usable); /* Can we call xgetbv? */ if (CPU_FEATURES_CPU_P (cpu_features, OSXSAVE)) @@ -123,40 +94,28 @@ get_common_indices (struct cpu_features *cpu_features, /* Determine if AVX is usable. 
*/ if (CPU_FEATURES_CPU_P (cpu_features, AVX)) { - cpu_features->usable[index_arch_AVX_Usable] - |= bit_arch_AVX_Usable; + CPU_FEATURE_SET (cpu_features, AVX, usable); /* The following features depend on AVX being usable. */ /* Determine if AVX2 is usable. */ if (CPU_FEATURES_CPU_P (cpu_features, AVX2)) - { - cpu_features->usable[index_arch_AVX2_Usable] - |= bit_arch_AVX2_Usable; - - /* Unaligned load with 256-bit AVX registers are faster on - Intel/AMD processors with AVX2. */ - cpu_features->preferred[index_arch_AVX_Fast_Unaligned_Load] - |= bit_arch_AVX_Fast_Unaligned_Load; - } + { + CPU_FEATURE_SET (cpu_features, AVX2, usable); + + /* Unaligned load with 256-bit AVX registers are faster + on Intel/AMD processors with AVX2. */ + cpu_features->preferred[index_arch_AVX_Fast_Unaligned_Load] + |= bit_arch_AVX_Fast_Unaligned_Load; + } /* Determine if FMA is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, FMA)) - cpu_features->usable[index_arch_FMA_Usable] - |= bit_arch_FMA_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, FMA); /* Determine if VAES is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, VAES)) - cpu_features->usable[index_arch_VAES_Usable] - |= bit_arch_VAES_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, VAES); /* Determine if VPCLMULQDQ is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, VPCLMULQDQ)) - cpu_features->usable[index_arch_VPCLMULQDQ_Usable] - |= bit_arch_VPCLMULQDQ_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, VPCLMULQDQ); /* Determine if XOP is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, XOP)) - cpu_features->usable[index_arch_XOP_Usable] - |= bit_arch_XOP_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, XOP); /* Determine if F16C is usable. 
*/ - if (CPU_FEATURES_CPU_P (cpu_features, F16C)) - cpu_features->usable[index_arch_F16C_Usable] - |= bit_arch_F16C_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, F16C); } /* Check if OPMASK state, upper 256-bit of ZMM0-ZMM15 and @@ -168,73 +127,41 @@ get_common_indices (struct cpu_features *cpu_features, /* Determine if AVX512F is usable. */ if (CPU_FEATURES_CPU_P (cpu_features, AVX512F)) { - cpu_features->usable[index_arch_AVX512F_Usable] - |= bit_arch_AVX512F_Usable; + CPU_FEATURE_SET (cpu_features, AVX512F, usable); /* Determine if AVX512CD is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512CD)) - cpu_features->usable[index_arch_AVX512CD_Usable] - |= bit_arch_AVX512CD_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512CD); /* Determine if AVX512ER is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512ER)) - cpu_features->usable[index_arch_AVX512ER_Usable] - |= bit_arch_AVX512ER_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512ER); /* Determine if AVX512PF is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512PF)) - cpu_features->usable[index_arch_AVX512PF_Usable] - |= bit_arch_AVX512PF_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512PF); /* Determine if AVX512VL is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512VL)) - cpu_features->usable[index_arch_AVX512VL_Usable] - |= bit_arch_AVX512VL_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512VL); /* Determine if AVX512DQ is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512DQ)) - cpu_features->usable[index_arch_AVX512DQ_Usable] - |= bit_arch_AVX512DQ_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512DQ); /* Determine if AVX512BW is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512BW)) - cpu_features->usable[index_arch_AVX512BW_Usable] - |= bit_arch_AVX512BW_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512BW); /* Determine if AVX512_4FMAPS is usable. 
*/ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_4FMAPS)) - cpu_features->usable[index_arch_AVX512_4FMAPS_Usable] - |= bit_arch_AVX512_4FMAPS_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_4FMAPS); /* Determine if AVX512_4VNNIW is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_4VNNIW)) - cpu_features->usable[index_arch_AVX512_4VNNIW_Usable] - |= bit_arch_AVX512_4VNNIW_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_4VNNIW); /* Determine if AVX512_BITALG is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_BITALG)) - cpu_features->usable[index_arch_AVX512_BITALG_Usable] - |= bit_arch_AVX512_BITALG_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_BITALG); /* Determine if AVX512_IFMA is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_IFMA)) - cpu_features->usable[index_arch_AVX512_IFMA_Usable] - |= bit_arch_AVX512_IFMA_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_IFMA); /* Determine if AVX512_VBMI is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_VBMI)) - cpu_features->usable[index_arch_AVX512_VBMI_Usable] - |= bit_arch_AVX512_VBMI_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_VBMI); /* Determine if AVX512_VBMI2 is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_VBMI2)) - cpu_features->usable[index_arch_AVX512_VBMI2_Usable] - |= bit_arch_AVX512_VBMI2_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_VBMI2); /* Determine if is AVX512_VNNI usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_VNNI)) - cpu_features->usable[index_arch_AVX512_VNNI_Usable] - |= bit_arch_AVX512_VNNI_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_VNNI); /* Determine if AVX512_VPOPCNTDQ is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_VPOPCNTDQ)) - cpu_features->usable[index_arch_AVX512_VPOPCNTDQ_Usable] - |= bit_arch_AVX512_VPOPCNTDQ_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, + AVX512_VPOPCNTDQ); /* Determine if AVX512_VP2INTERSECT is usable. 
*/ - if (CPU_FEATURES_CPU_P (cpu_features, - AVX512_VP2INTERSECT)) - cpu_features->usable[index_arch_AVX512_VP2INTERSECT_Usable] - |= bit_arch_AVX512_VP2INTERSECT_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, + AVX512_VP2INTERSECT); /* Determine if AVX512_BF16 is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AVX512_BF16)) - cpu_features->usable[index_arch_AVX512_BF16_Usable] - |= bit_arch_AVX512_BF16_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AVX512_BF16); } } } @@ -244,17 +171,11 @@ get_common_indices (struct cpu_features *cpu_features, == (bit_XTILECFG_state | bit_XTILEDATA_state)) { /* Determine if AMX_BF16 is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AMX_BF16)) - cpu_features->usable[index_arch_AMX_BF16_Usable] - |= bit_arch_AMX_BF16_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AMX_BF16); /* Determine if AMX_TILE is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AMX_TILE)) - cpu_features->usable[index_arch_AMX_TILE_Usable] - |= bit_arch_AMX_TILE_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AMX_TILE); /* Determine if AMX_INT8 is usable. */ - if (CPU_FEATURES_CPU_P (cpu_features, AMX_INT8)) - cpu_features->usable[index_arch_AMX_INT8_Usable] - |= bit_arch_AMX_INT8_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, AMX_INT8); } /* For _dl_runtime_resolve, set xsave_state_size to xsave area @@ -318,8 +239,7 @@ get_common_indices (struct cpu_features *cpu_features, { cpu_features->xsave_state_size = ALIGN_UP (size + STATE_SAVE_OFFSET, 64); - cpu_features->usable[index_arch_XSAVEC_Usable] - |= bit_arch_XSAVEC_Usable; + CPU_FEATURE_SET (cpu_features, XSAVEC, usable); } } } @@ -328,8 +248,79 @@ get_common_indices (struct cpu_features *cpu_features, /* Determine if PKU is usable. 
*/ if (CPU_FEATURES_CPU_P (cpu_features, OSPKE)) - cpu_features->usable[index_arch_PKU_Usable] - |= bit_arch_PKU_Usable; + CPU_FEATURE_SET (cpu_features, PKU, usable); +} + +static void +get_extended_indices (struct cpu_features *cpu_features) +{ + unsigned int eax, ebx, ecx, edx; + __cpuid (0x80000000, eax, ebx, ecx, edx); + if (eax >= 0x80000001) + __cpuid (0x80000001, + cpu_features->features[COMMON_CPUID_INDEX_80000001].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_80000001].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_80000001].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_80000001].cpuid.edx); + if (eax >= 0x80000007) + __cpuid (0x80000007, + cpu_features->features[COMMON_CPUID_INDEX_80000007].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_80000007].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_80000007].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_80000007].cpuid.edx); + if (eax >= 0x80000008) + __cpuid (0x80000008, + cpu_features->features[COMMON_CPUID_INDEX_80000008].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_80000008].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_80000008].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_80000008].cpuid.edx); +} + +static void +get_common_indices (struct cpu_features *cpu_features, + unsigned int *family, unsigned int *model, + unsigned int *extended_model, unsigned int *stepping) +{ + if (family) + { + unsigned int eax; + __cpuid (1, eax, + cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.edx); + cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.eax = eax; + *family = (eax >> 8) & 0x0f; + *model = (eax >> 4) & 0x0f; + *extended_model = (eax >> 12) & 0xf0; + *stepping = eax & 0x0f; + if (*family == 0x0f) + { + *family += (eax >> 20) & 0xff; + *model += *extended_model; + } + } + + if (cpu_features->basic.max_cpuid >= 7) + { 
+ __cpuid_count (7, 0, + cpu_features->features[COMMON_CPUID_INDEX_7].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_7].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_7].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_7].cpuid.edx); + __cpuid_count (7, 1, + cpu_features->features[COMMON_CPUID_INDEX_7_ECX_1].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_7_ECX_1].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_7_ECX_1].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_7_ECX_1].cpuid.edx); + } + + if (cpu_features->basic.max_cpuid >= 0xd) + __cpuid_count (0xd, 1, + cpu_features->features[COMMON_CPUID_INDEX_D_ECX_1].cpuid.eax, + cpu_features->features[COMMON_CPUID_INDEX_D_ECX_1].cpuid.ebx, + cpu_features->features[COMMON_CPUID_INDEX_D_ECX_1].cpuid.ecx, + cpu_features->features[COMMON_CPUID_INDEX_D_ECX_1].cpuid.edx); + } _Static_assert (((index_arch_Fast_Unaligned_Load @@ -353,8 +344,6 @@ init_cpu_features (struct cpu_features *cpu_features) unsigned int stepping = 0; enum cpu_features_kind kind; - cpu_features->usable_p = cpu_features->usable; - #if !HAS_CPUID if (__get_cpuid_max (0, 0) == 0) { @@ -377,6 +366,8 @@ init_cpu_features (struct cpu_features *cpu_features) get_extended_indices (cpu_features); + update_usable (cpu_features); + if (family == 0x06) { model += extended_model; @@ -473,7 +464,8 @@ init_cpu_features (struct cpu_features *cpu_features) with stepping >= 4) to avoid TSX on kernels that weren't updated with the latest microcode package (which disables broken feature by default). 
*/ - cpu_features->cpuid[index_cpu_RTM].reg_RTM &= ~bit_cpu_RTM; + CPU_FEATURE_UNSET (cpu_features, RTM, cpuid); + CPU_FEATURE_UNSET (cpu_features, RTM, usable); break; } } @@ -502,15 +494,15 @@ init_cpu_features (struct cpu_features *cpu_features) get_extended_indices (cpu_features); - ecx = cpu_features->cpuid[COMMON_CPUID_INDEX_1].ecx; + update_usable (cpu_features); - if (HAS_ARCH_FEATURE (AVX_Usable)) + ecx = cpu_features->features[COMMON_CPUID_INDEX_1].cpuid.ecx; + + if (CPU_FEATURE_USABLE_P (cpu_features, AVX)) { /* Since the FMA4 bit is in COMMON_CPUID_INDEX_80000001 and FMA4 requires AVX, determine if FMA4 is usable here. */ - if (CPU_FEATURES_CPU_P (cpu_features, FMA4)) - cpu_features->usable[index_arch_FMA4_Usable] - |= bit_arch_FMA4_Usable; + CPU_FEATURE_SET_USABLE (cpu_features, FMA4); } if (family == 0x15) @@ -541,13 +533,15 @@ init_cpu_features (struct cpu_features *cpu_features) get_extended_indices (cpu_features); + update_usable (cpu_features); + model += extended_model; if (family == 0x6) { if (model == 0xf || model == 0x19) { - cpu_features->usable[index_arch_AVX_Usable] - &= ~(bit_arch_AVX_Usable | bit_arch_AVX2_Usable); + CPU_FEATURE_UNSET (cpu_features, AVX, usable); + CPU_FEATURE_UNSET (cpu_features, AVX2, usable); cpu_features->preferred[index_arch_Slow_SSE4_2] |= bit_arch_Slow_SSE4_2; @@ -560,8 +554,8 @@ init_cpu_features (struct cpu_features *cpu_features) { if (model == 0x1b) { - cpu_features->usable[index_arch_AVX_Usable] - &= ~(bit_arch_AVX_Usable | bit_arch_AVX2_Usable); + CPU_FEATURE_UNSET (cpu_features, AVX, usable); + CPU_FEATURE_UNSET (cpu_features, AVX2, usable); cpu_features->preferred[index_arch_Slow_SSE4_2] |= bit_arch_Slow_SSE4_2; @@ -571,8 +565,8 @@ init_cpu_features (struct cpu_features *cpu_features) } else if (model == 0x3b) { - cpu_features->usable[index_arch_AVX_Usable] - &= ~(bit_arch_AVX_Usable | bit_arch_AVX2_Usable); + CPU_FEATURE_UNSET (cpu_features, AVX, usable); + CPU_FEATURE_UNSET (cpu_features, AVX2, usable); 
cpu_features->preferred[index_arch_AVX_Fast_Unaligned_Load] &= ~bit_arch_AVX_Fast_Unaligned_Load; @@ -583,6 +577,7 @@ init_cpu_features (struct cpu_features *cpu_features) { kind = arch_kind_other; get_common_indices (cpu_features, NULL, NULL, NULL, NULL); + update_usable (cpu_features); } /* Support i586 if CX8 is available. */ @@ -625,7 +620,7 @@ no_cpuid: { const char *platform = NULL; - if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable) + if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F) && CPU_FEATURES_CPU_P (cpu_features, AVX512CD)) { if (CPU_FEATURES_CPU_P (cpu_features, AVX512ER)) @@ -643,8 +638,8 @@ no_cpuid: } if (platform == NULL - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable) - && CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable) + && CPU_FEATURE_USABLE_P (cpu_features, AVX2) + && CPU_FEATURE_USABLE_P (cpu_features, FMA) && CPU_FEATURES_CPU_P (cpu_features, BMI1) && CPU_FEATURES_CPU_P (cpu_features, BMI2) && CPU_FEATURES_CPU_P (cpu_features, LZCNT) diff --git a/sysdeps/x86/cpu-features.h b/sysdeps/x86/cpu-features.h index d66dc206f7..d3e930befc 100644 --- a/sysdeps/x86/cpu-features.h +++ b/sysdeps/x86/cpu-features.h @@ -18,15 +18,6 @@ #ifndef cpu_features_h #define cpu_features_h -enum -{ - /* The integer bit array index for the first set of usable feature - bits. */ - USABLE_FEATURE_INDEX_1 = 0, - /* The current maximum size of the feature integer bit array. 
*/ - USABLE_FEATURE_INDEX_MAX -}; - enum { /* The integer bit array index for the first set of preferred feature @@ -57,6 +48,12 @@ struct cpuid_registers unsigned int edx; }; +struct cpuid_features +{ + struct cpuid_registers cpuid; + struct cpuid_registers usable; +}; + enum cpu_features_kind { arch_kind_unknown = 0, @@ -78,9 +75,7 @@ struct cpu_features_basic struct cpu_features { struct cpu_features_basic basic; - unsigned int *usable_p; - struct cpuid_registers cpuid[COMMON_CPUID_INDEX_MAX]; - unsigned int usable[USABLE_FEATURE_INDEX_MAX]; + struct cpuid_features features[COMMON_CPUID_INDEX_MAX]; unsigned int preferred[PREFERRED_FEATURE_INDEX_MAX]; /* The state size for XSAVEC or XSAVE. The type must be unsigned long int so that we use @@ -110,117 +105,40 @@ extern const struct cpu_features *__get_cpu_features (void) __attribute__ ((const)); /* Only used directly in cpu-features.c. */ -# define CPU_FEATURES_CPU_P(ptr, name) \ - ((ptr->cpuid[index_cpu_##name].reg_##name & (bit_cpu_##name)) != 0) -# define CPU_FEATURES_ARCH_P(ptr, name) \ - ((ptr->feature_##name[index_arch_##name] & (bit_arch_##name)) != 0) +#define CPU_FEATURE_CHECK_P(ptr, name, check) \ + ((ptr->features[index_cpu_##name].check.reg_##name \ + & bit_cpu_##name) != 0) +#define CPU_FEATURE_SET(ptr, name, check) \ + ptr->features[index_cpu_##name].check.reg_##name |= bit_cpu_##name; +#define CPU_FEATURE_UNSET(ptr, name, check) \ + ptr->features[index_cpu_##name].check.reg_##name &= ~bit_cpu_##name; +#define CPU_FEATURE_SET_USABLE(ptr, name) \ + ptr->features[index_cpu_##name].usable.reg_##name \ + |= ptr->features[index_cpu_##name].cpuid.reg_##name & bit_cpu_##name; +#define CPU_FEATURE_PREFERRED_P(ptr, name) \ + ((ptr->preferred[index_arch_##name] & bit_arch_##name) != 0) +#define CPU_FEATURE_CPU_P(ptr, name) \ + CPU_FEATURE_CHECK_P (ptr, name, cpuid) +#define CPU_FEATURE_USABLE_P(ptr, name) \ + CPU_FEATURE_CHECK_P (ptr, name, usable) /* HAS_CPU_FEATURE evaluates to true if CPU supports the 
feature. */ #define HAS_CPU_FEATURE(name) \ - CPU_FEATURES_CPU_P (__get_cpu_features (), name) -/* HAS_ARCH_FEATURE evaluates to true if we may use the feature at - runtime. */ -# define HAS_ARCH_FEATURE(name) \ - CPU_FEATURES_ARCH_P (__get_cpu_features (), name) + CPU_FEATURE_CPU_P (__get_cpu_features (), name) /* CPU_FEATURE_USABLE evaluates to true if the feature is usable. */ #define CPU_FEATURE_USABLE(name) \ - HAS_ARCH_FEATURE (name##_Usable) - -/* Architecture features. */ - -/* USABLE_FEATURE_INDEX_1. */ -#define bit_arch_AVX_Usable (1u << 0) -#define bit_arch_AVX2_Usable (1u << 1) -#define bit_arch_AVX512F_Usable (1u << 2) -#define bit_arch_AVX512CD_Usable (1u << 3) -#define bit_arch_AVX512ER_Usable (1u << 4) -#define bit_arch_AVX512PF_Usable (1u << 5) -#define bit_arch_AVX512VL_Usable (1u << 6) -#define bit_arch_AVX512DQ_Usable (1u << 7) -#define bit_arch_AVX512BW_Usable (1u << 8) -#define bit_arch_AVX512_4FMAPS_Usable (1u << 9) -#define bit_arch_AVX512_4VNNIW_Usable (1u << 10) -#define bit_arch_AVX512_BITALG_Usable (1u << 11) -#define bit_arch_AVX512_IFMA_Usable (1u << 12) -#define bit_arch_AVX512_VBMI_Usable (1u << 13) -#define bit_arch_AVX512_VBMI2_Usable (1u << 14) -#define bit_arch_AVX512_VNNI_Usable (1u << 15) -#define bit_arch_AVX512_VPOPCNTDQ_Usable (1u << 16) -#define bit_arch_FMA_Usable (1u << 17) -#define bit_arch_FMA4_Usable (1u << 18) -#define bit_arch_VAES_Usable (1u << 19) -#define bit_arch_VPCLMULQDQ_Usable (1u << 20) -#define bit_arch_XOP_Usable (1u << 21) -#define bit_arch_XSAVEC_Usable (1u << 22) -#define bit_arch_F16C_Usable (1u << 23) -#define bit_arch_AVX512_VP2INTERSECT_Usable (1u << 24) -#define bit_arch_AVX512_BF16_Usable (1u << 25) -#define bit_arch_PKU_Usable (1u << 26) -#define bit_arch_AMX_BF16_Usable (1u << 27) -#define bit_arch_AMX_TILE_Usable (1u << 28) -#define bit_arch_AMX_INT8_Usable (1u << 29) - -#define index_arch_AVX_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX2_Usable USABLE_FEATURE_INDEX_1 -#define 
index_arch_AVX512F_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512CD_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512ER_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512PF_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512VL_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512BW_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512DQ_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_4FMAPS_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_4VNNIW_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_BITALG_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_IFMA_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_VBMI_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_VBMI2_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_VNNI_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_VPOPCNTDQ_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_FMA_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_FMA4_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_VAES_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_VPCLMULQDQ_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_XOP_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_XSAVEC_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_F16C_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_VP2INTERSECT_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AVX512_BF16_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_PKU_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AMX_BF16_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AMX_TILE_Usable USABLE_FEATURE_INDEX_1 -#define index_arch_AMX_INT8_Usable USABLE_FEATURE_INDEX_1 - -#define feature_AVX_Usable usable -#define feature_AVX2_Usable usable -#define feature_AVX512F_Usable usable -#define feature_AVX512CD_Usable usable -#define feature_AVX512ER_Usable usable -#define feature_AVX512PF_Usable usable -#define feature_AVX512VL_Usable usable -#define feature_AVX512BW_Usable usable -#define 
feature_AVX512DQ_Usable usable -#define feature_AVX512_4FMAPS_Usable usable -#define feature_AVX512_4VNNIW_Usable usable -#define feature_AVX512_BITALG_Usable usable -#define feature_AVX512_IFMA_Usable usable -#define feature_AVX512_VBMI_Usable usable -#define feature_AVX512_VBMI2_Usable usable -#define feature_AVX512_VNNI_Usable usable -#define feature_AVX512_VPOPCNTDQ_Usable usable -#define feature_FMA_Usable usable -#define feature_FMA4_Usable usable -#define feature_VAES_Usable usable -#define feature_VPCLMULQDQ_Usable usable -#define feature_XOP_Usable usable -#define feature_XSAVEC_Usable usable -#define feature_F16C_Usable usable -#define feature_AVX512_VP2INTERSECT_Usable usable -#define feature_AVX512_BF16_Usable usable -#define feature_PKU_Usable usable -#define feature_AMX_BF16_Usable usable -#define feature_AMX_TILE_Usable usable -#define feature_AMX_INT8_Usable usable + CPU_FEATURE_USABLE_P (__get_cpu_features (), name) +/* CPU_FEATURE_PREFERRED evaluates to true if we prefer the feature at + runtime. */ +#define CPU_FEATURE_PREFERRED(name) \ + CPU_FEATURE_PREFERRED_P (__get_cpu_features (), name) + +#define CPU_FEATURES_CPU_P(ptr, name) \ + CPU_FEATURE_CPU_P (ptr, name) +#define CPU_FEATURES_ARCH_P(ptr, name) \ + CPU_FEATURE_PREFERRED_P (ptr, name) +#define HAS_ARCH_FEATURE(name) \ + CPU_FEATURE_PREFERRED (name) /* CPU features. 
*/ @@ -814,23 +732,6 @@ extern const struct cpu_features *__get_cpu_features (void) #define index_arch_MathVec_Prefer_No_AVX512 PREFERRED_FEATURE_INDEX_1 #define index_arch_Prefer_FSRM PREFERRED_FEATURE_INDEX_1 -#define feature_Fast_Rep_String preferred -#define feature_Fast_Copy_Backward preferred -#define feature_Slow_BSF preferred -#define feature_Fast_Unaligned_Load preferred -#define feature_Prefer_PMINUB_for_stringop preferred -#define feature_Fast_Unaligned_Copy preferred -#define feature_I586 preferred -#define feature_I686 preferred -#define feature_Slow_SSE4_2 preferred -#define feature_AVX_Fast_Unaligned_Load preferred -#define feature_Prefer_MAP_32BIT_EXEC preferred -#define feature_Prefer_No_VZEROUPPER preferred -#define feature_Prefer_ERMS preferred -#define feature_Prefer_No_AVX512 preferred -#define feature_MathVec_Prefer_No_AVX512 preferred -#define feature_Prefer_FSRM preferred - /* XCR0 Feature flags. */ #define bit_XMM_state (1u << 1) #define bit_YMM_state (1u << 2) @@ -844,8 +745,6 @@ extern const struct cpu_features *__get_cpu_features (void) /* Unused for x86. */ # define INIT_ARCH() # define __get_cpu_features() (&GLRO(dl_x86_cpu_features)) -# define x86_get_cpuid_registers(i) \ - (&(GLRO(dl_x86_cpu_features).cpuid[i])) # endif #ifdef __x86_64__ diff --git a/sysdeps/x86/cpu-tunables.c b/sysdeps/x86/cpu-tunables.c index 666ec571f2..684ae72c1c 100644 --- a/sysdeps/x86/cpu-tunables.c +++ b/sysdeps/x86/cpu-tunables.c @@ -43,66 +43,46 @@ extern __typeof (memcmp) DEFAULT_MEMCMP; _Static_assert (sizeof (#name) - 1 == len, #name " != " #len); \ if (!DEFAULT_MEMCMP (f, #name, len)) \ { \ - cpu_features->cpuid[index_cpu_##name].reg_##name \ - &= ~bit_cpu_##name; \ + CPU_FEATURE_UNSET (cpu_features, name, cpuid) \ + CPU_FEATURE_UNSET (cpu_features, name, usable) \ break; \ } -/* Disable an ARCH feature NAME. We don't enable an ARCH feature which - isn't available. 
*/ -# define CHECK_GLIBC_IFUNC_ARCH_OFF(f, cpu_features, name, len) \ +/* Disable a preferred feature NAME. We don't enable a preferred feature + which isn't available. */ +# define CHECK_GLIBC_IFUNC_PREFERRED_OFF(f, cpu_features, name, len) \ _Static_assert (sizeof (#name) - 1 == len, #name " != " #len); \ if (!DEFAULT_MEMCMP (f, #name, len)) \ { \ - cpu_features->feature_##name[index_arch_##name] \ + cpu_features->preferred[index_arch_##name] \ &= ~bit_arch_##name; \ break; \ } -/* Enable/disable an ARCH feature NAME. */ -# define CHECK_GLIBC_IFUNC_ARCH_BOTH(f, cpu_features, name, disable, \ - len) \ +/* Enable/disable a preferred feature NAME. */ +# define CHECK_GLIBC_IFUNC_PREFERRED_BOTH(f, cpu_features, name, \ + disable, len) \ _Static_assert (sizeof (#name) - 1 == len, #name " != " #len); \ if (!DEFAULT_MEMCMP (f, #name, len)) \ { \ if (disable) \ - cpu_features->feature_##name[index_arch_##name] \ - &= ~bit_arch_##name; \ + cpu_features->preferred[index_arch_##name] &= ~bit_arch_##name; \ else \ - cpu_features->feature_##name[index_arch_##name] \ - |= bit_arch_##name; \ + cpu_features->preferred[index_arch_##name] |= bit_arch_##name; \ break; \ } -/* Enable/disable an ARCH feature NAME. Enable an ARCH feature only - if the ARCH feature NEED is also enabled. */ -# define CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH(f, cpu_features, name, \ +/* Enable/disable a preferred feature NAME. Enable a preferred feature + only if the feature NEED is usable. */ +# define CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH(f, cpu_features, name, \ need, disable, len) \ _Static_assert (sizeof (#name) - 1 == len, #name " != " #len); \ if (!DEFAULT_MEMCMP (f, #name, len)) \ { \ if (disable) \ - cpu_features->feature_##name[index_arch_##name] \ - &= ~bit_arch_##name; \ - else if (CPU_FEATURES_ARCH_P (cpu_features, need)) \ - cpu_features->feature_##name[index_arch_##name] \ - |= bit_arch_##name; \ - break; \ - } - -/* Enable/disable an ARCH feature NAME. 
Enable an ARCH feature only - if the CPU feature NEED is also enabled. */ -# define CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH(f, cpu_features, name, \ - need, disable, len) \ - _Static_assert (sizeof (#name) - 1 == len, #name " != " #len); \ - if (!DEFAULT_MEMCMP (f, #name, len)) \ - { \ - if (disable) \ - cpu_features->feature_##name[index_arch_##name] \ - &= ~bit_arch_##name; \ - else if (CPU_FEATURES_CPU_P (cpu_features, need)) \ - cpu_features->feature_##name[index_arch_##name] \ - |= bit_arch_##name; \ + cpu_features->preferred[index_arch_##name] &= ~bit_arch_##name; \ + else if (CPU_FEATURE_USABLE_P (cpu_features, need)) \ + cpu_features->preferred[index_arch_##name] |= bit_arch_##name; \ break; \ } @@ -178,8 +158,8 @@ TUNABLE_CALLBACK (set_hwcaps) (tunable_val_t *valp) CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, ERMS, 4); CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, FMA4, 4); CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE2, 4); - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, I586, 4); - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, I686, 4); + CHECK_GLIBC_IFUNC_PREFERRED_OFF (n, cpu_features, I586, 4); + CHECK_GLIBC_IFUNC_PREFERRED_OFF (n, cpu_features, I686, 4); } break; case 5: @@ -197,6 +177,14 @@ TUNABLE_CALLBACK (set_hwcaps) (tunable_val_t *valp) CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, POPCNT, 6); CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE4_1, 6); CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, SSE4_2, 6); + if (!DEFAULT_MEMCMP (n, "XSAVEC", 6)) + { + /* Update xsave_state_size to XSAVE state size. 
*/ + cpu_features->xsave_state_size + = cpu_features->xsave_state_full_size; + CPU_FEATURE_UNSET (cpu_features, XSAVEC, cpuid); + CPU_FEATURE_UNSET (cpu_features, XSAVEC, usable); + } } break; case 7: @@ -216,115 +204,85 @@ TUNABLE_CALLBACK (set_hwcaps) (tunable_val_t *valp) CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512PF, 8); CHECK_GLIBC_IFUNC_CPU_OFF (n, cpu_features, AVX512VL, 8); } - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Slow_BSF, - disable, 8); - break; - case 10: - if (disable) - { - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, AVX_Usable, - 10); - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, FMA_Usable, - 10); - } + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, Slow_BSF, + disable, 8); break; case 11: - if (disable) { - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, AVX2_Usable, - 11); - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, FMA4_Usable, - 11); - } - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Prefer_ERMS, - disable, 11); - CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH (n, cpu_features, - Slow_SSE4_2, SSE4_2, + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Prefer_ERMS, disable, 11); - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Prefer_FSRM, - disable, 11); - break; - case 13: - if (disable) - { - /* Update xsave_state_size to XSAVE state size. 
*/ - cpu_features->xsave_state_size - = cpu_features->xsave_state_full_size; - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, - XSAVEC_Usable, 13); - } - break; - case 14: - if (disable) - { - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, - AVX512F_Usable, 14); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Prefer_FSRM, + disable, 11); + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH (n, cpu_features, + Slow_SSE4_2, + SSE4_2, + disable, 11); } break; case 15: - if (disable) { - CHECK_GLIBC_IFUNC_ARCH_OFF (n, cpu_features, - AVX512DQ_Usable, 15); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Fast_Rep_String, + disable, 15); } - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, Fast_Rep_String, - disable, 15); break; case 16: { - CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH - (n, cpu_features, Prefer_No_AVX512, AVX512F_Usable, + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH + (n, cpu_features, Prefer_No_AVX512, AVX512F, disable, 16); } break; case 18: { - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, - Fast_Copy_Backward, disable, - 18); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Fast_Copy_Backward, + disable, 18); } break; case 19: { - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, - Fast_Unaligned_Load, disable, - 19); - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, - Fast_Unaligned_Copy, disable, - 19); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Fast_Unaligned_Load, + disable, 19); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Fast_Unaligned_Copy, + disable, 19); } break; case 20: { - CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH - (n, cpu_features, Prefer_No_VZEROUPPER, AVX_Usable, - disable, 20); + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH + (n, cpu_features, Prefer_No_VZEROUPPER, AVX, disable, + 20); } break; case 21: { - CHECK_GLIBC_IFUNC_ARCH_BOTH (n, cpu_features, - Prefer_MAP_32BIT_EXEC, disable, - 21); + CHECK_GLIBC_IFUNC_PREFERRED_BOTH (n, cpu_features, + Prefer_MAP_32BIT_EXEC, + disable, 21); } break; case 23: { - 
CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH - (n, cpu_features, AVX_Fast_Unaligned_Load, AVX_Usable, + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH + (n, cpu_features, AVX_Fast_Unaligned_Load, AVX, disable, 23); } break; case 24: { - CHECK_GLIBC_IFUNC_ARCH_NEED_ARCH_BOTH - (n, cpu_features, MathVec_Prefer_No_AVX512, - AVX512F_Usable, disable, 24); + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH + (n, cpu_features, MathVec_Prefer_No_AVX512, AVX512F, + disable, 24); } break; case 26: { - CHECK_GLIBC_IFUNC_ARCH_NEED_CPU_BOTH + CHECK_GLIBC_IFUNC_PREFERRED_NEED_BOTH (n, cpu_features, Prefer_PMINUB_for_stringop, SSE2, disable, 26); } diff --git a/sysdeps/x86/tst-get-cpu-features.c b/sysdeps/x86/tst-get-cpu-features.c index dafd66434c..4f0ec8315a 100644 --- a/sysdeps/x86/tst-get-cpu-features.c +++ b/sysdeps/x86/tst-get-cpu-features.c @@ -219,35 +219,154 @@ do_test (void) CHECK_CPU_FEATURE (AVX512_BF16); printf ("Usable CPU features:\n"); + CHECK_CPU_FEATURE_USABLE (SSE3); + CHECK_CPU_FEATURE_USABLE (PCLMULQDQ); + CHECK_CPU_FEATURE_USABLE (DTES64); + CHECK_CPU_FEATURE_USABLE (MONITOR); + CHECK_CPU_FEATURE_USABLE (DS_CPL); + CHECK_CPU_FEATURE_USABLE (VMX); + CHECK_CPU_FEATURE_USABLE (SMX); + CHECK_CPU_FEATURE_USABLE (EST); + CHECK_CPU_FEATURE_USABLE (TM2); + CHECK_CPU_FEATURE_USABLE (SSSE3); + CHECK_CPU_FEATURE_USABLE (CNXT_ID); + CHECK_CPU_FEATURE_USABLE (SDBG); CHECK_CPU_FEATURE_USABLE (FMA); + CHECK_CPU_FEATURE_USABLE (CMPXCHG16B); + CHECK_CPU_FEATURE_USABLE (XTPRUPDCTRL); + CHECK_CPU_FEATURE_USABLE (PDCM); + CHECK_CPU_FEATURE_USABLE (PCID); + CHECK_CPU_FEATURE_USABLE (DCA); + CHECK_CPU_FEATURE_USABLE (SSE4_1); + CHECK_CPU_FEATURE_USABLE (SSE4_2); + CHECK_CPU_FEATURE_USABLE (X2APIC); + CHECK_CPU_FEATURE_USABLE (MOVBE); + CHECK_CPU_FEATURE_USABLE (POPCNT); + CHECK_CPU_FEATURE_USABLE (TSC_DEADLINE); + CHECK_CPU_FEATURE_USABLE (AES); + CHECK_CPU_FEATURE_USABLE (XSAVE); + CHECK_CPU_FEATURE_USABLE (OSXSAVE); CHECK_CPU_FEATURE_USABLE (AVX); CHECK_CPU_FEATURE_USABLE (F16C); + 
CHECK_CPU_FEATURE_USABLE (RDRAND); + CHECK_CPU_FEATURE_USABLE (FPU); + CHECK_CPU_FEATURE_USABLE (VME); + CHECK_CPU_FEATURE_USABLE (DE); + CHECK_CPU_FEATURE_USABLE (PSE); + CHECK_CPU_FEATURE_USABLE (TSC); + CHECK_CPU_FEATURE_USABLE (MSR); + CHECK_CPU_FEATURE_USABLE (PAE); + CHECK_CPU_FEATURE_USABLE (MCE); + CHECK_CPU_FEATURE_USABLE (CX8); + CHECK_CPU_FEATURE_USABLE (APIC); + CHECK_CPU_FEATURE_USABLE (SEP); + CHECK_CPU_FEATURE_USABLE (MTRR); + CHECK_CPU_FEATURE_USABLE (PGE); + CHECK_CPU_FEATURE_USABLE (MCA); + CHECK_CPU_FEATURE_USABLE (CMOV); + CHECK_CPU_FEATURE_USABLE (PAT); + CHECK_CPU_FEATURE_USABLE (PSE_36); + CHECK_CPU_FEATURE_USABLE (PSN); + CHECK_CPU_FEATURE_USABLE (CLFSH); + CHECK_CPU_FEATURE_USABLE (DS); + CHECK_CPU_FEATURE_USABLE (ACPI); + CHECK_CPU_FEATURE_USABLE (MMX); + CHECK_CPU_FEATURE_USABLE (FXSR); + CHECK_CPU_FEATURE_USABLE (SSE); + CHECK_CPU_FEATURE_USABLE (SSE2); + CHECK_CPU_FEATURE_USABLE (SS); + CHECK_CPU_FEATURE_USABLE (HTT); + CHECK_CPU_FEATURE_USABLE (TM); + CHECK_CPU_FEATURE_USABLE (PBE); + CHECK_CPU_FEATURE_USABLE (FSGSBASE); + CHECK_CPU_FEATURE_USABLE (TSC_ADJUST); + CHECK_CPU_FEATURE_USABLE (SGX); + CHECK_CPU_FEATURE_USABLE (BMI1); + CHECK_CPU_FEATURE_USABLE (HLE); CHECK_CPU_FEATURE_USABLE (AVX2); + CHECK_CPU_FEATURE_USABLE (SMEP); + CHECK_CPU_FEATURE_USABLE (BMI2); + CHECK_CPU_FEATURE_USABLE (ERMS); + CHECK_CPU_FEATURE_USABLE (INVPCID); + CHECK_CPU_FEATURE_USABLE (RTM); + CHECK_CPU_FEATURE_USABLE (PQM); + CHECK_CPU_FEATURE_USABLE (MPX); + CHECK_CPU_FEATURE_USABLE (PQE); CHECK_CPU_FEATURE_USABLE (AVX512F); CHECK_CPU_FEATURE_USABLE (AVX512DQ); + CHECK_CPU_FEATURE_USABLE (RDSEED); + CHECK_CPU_FEATURE_USABLE (ADX); + CHECK_CPU_FEATURE_USABLE (SMAP); CHECK_CPU_FEATURE_USABLE (AVX512_IFMA); + CHECK_CPU_FEATURE_USABLE (CLFLUSHOPT); + CHECK_CPU_FEATURE_USABLE (CLWB); + CHECK_CPU_FEATURE_USABLE (TRACE); CHECK_CPU_FEATURE_USABLE (AVX512PF); CHECK_CPU_FEATURE_USABLE (AVX512ER); CHECK_CPU_FEATURE_USABLE (AVX512CD); + CHECK_CPU_FEATURE_USABLE (SHA); 
CHECK_CPU_FEATURE_USABLE (AVX512BW); CHECK_CPU_FEATURE_USABLE (AVX512VL); + CHECK_CPU_FEATURE_USABLE (PREFETCHWT1); CHECK_CPU_FEATURE_USABLE (AVX512_VBMI); + CHECK_CPU_FEATURE_USABLE (UMIP); CHECK_CPU_FEATURE_USABLE (PKU); + CHECK_CPU_FEATURE_USABLE (OSPKE); + CHECK_CPU_FEATURE_USABLE (WAITPKG); CHECK_CPU_FEATURE_USABLE (AVX512_VBMI2); + CHECK_CPU_FEATURE_USABLE (SHSTK); + CHECK_CPU_FEATURE_USABLE (GFNI); CHECK_CPU_FEATURE_USABLE (VAES); CHECK_CPU_FEATURE_USABLE (VPCLMULQDQ); CHECK_CPU_FEATURE_USABLE (AVX512_VNNI); CHECK_CPU_FEATURE_USABLE (AVX512_BITALG); CHECK_CPU_FEATURE_USABLE (AVX512_VPOPCNTDQ); + CHECK_CPU_FEATURE_USABLE (RDPID); + CHECK_CPU_FEATURE_USABLE (CLDEMOTE); + CHECK_CPU_FEATURE_USABLE (MOVDIRI); + CHECK_CPU_FEATURE_USABLE (MOVDIR64B); + CHECK_CPU_FEATURE_USABLE (ENQCMD); + CHECK_CPU_FEATURE_USABLE (SGX_LC); + CHECK_CPU_FEATURE_USABLE (PKS); CHECK_CPU_FEATURE_USABLE (AVX512_4VNNIW); CHECK_CPU_FEATURE_USABLE (AVX512_4FMAPS); + CHECK_CPU_FEATURE_USABLE (FSRM); CHECK_CPU_FEATURE_USABLE (AVX512_VP2INTERSECT); + CHECK_CPU_FEATURE_USABLE (MD_CLEAR); + CHECK_CPU_FEATURE_USABLE (SERIALIZE); + CHECK_CPU_FEATURE_USABLE (HYBRID); + CHECK_CPU_FEATURE_USABLE (TSXLDTRK); + CHECK_CPU_FEATURE_USABLE (PCONFIG); + CHECK_CPU_FEATURE_USABLE (IBT); CHECK_CPU_FEATURE_USABLE (AMX_BF16); CHECK_CPU_FEATURE_USABLE (AMX_TILE); CHECK_CPU_FEATURE_USABLE (AMX_INT8); + CHECK_CPU_FEATURE_USABLE (IBRS_IBPB); + CHECK_CPU_FEATURE_USABLE (STIBP); + CHECK_CPU_FEATURE_USABLE (L1D_FLUSH); + CHECK_CPU_FEATURE_USABLE (ARCH_CAPABILITIES); + CHECK_CPU_FEATURE_USABLE (CORE_CAPABILITIES); + CHECK_CPU_FEATURE_USABLE (SSBD); + CHECK_CPU_FEATURE_USABLE (LAHF64_SAHF64); + CHECK_CPU_FEATURE_USABLE (SVM); + CHECK_CPU_FEATURE_USABLE (LZCNT); + CHECK_CPU_FEATURE_USABLE (SSE4A); + CHECK_CPU_FEATURE_USABLE (PREFETCHW); CHECK_CPU_FEATURE_USABLE (XOP); + CHECK_CPU_FEATURE_USABLE (LWP); CHECK_CPU_FEATURE_USABLE (FMA4); + CHECK_CPU_FEATURE_USABLE (TBM); + CHECK_CPU_FEATURE_USABLE (SYSCALL_SYSRET); + 
CHECK_CPU_FEATURE_USABLE (NX); + CHECK_CPU_FEATURE_USABLE (PAGE1GB); + CHECK_CPU_FEATURE_USABLE (RDTSCP); + CHECK_CPU_FEATURE_USABLE (LM); + CHECK_CPU_FEATURE_USABLE (XSAVEOPT); CHECK_CPU_FEATURE_USABLE (XSAVEC); + CHECK_CPU_FEATURE_USABLE (XGETBV_ECX_1); + CHECK_CPU_FEATURE_USABLE (XSAVES); + CHECK_CPU_FEATURE_USABLE (INVARIANT_TSC); + CHECK_CPU_FEATURE_USABLE (WBNOINVD); CHECK_CPU_FEATURE_USABLE (AVX512_BF16); return 0; diff --git a/sysdeps/x86_64/Makefile b/sysdeps/x86_64/Makefile index d51cf03ac9..2a17af1df0 100644 --- a/sysdeps/x86_64/Makefile +++ b/sysdeps/x86_64/Makefile @@ -57,7 +57,7 @@ modules-names += x86_64/tst-x86_64mod-1 LDFLAGS-tst-x86_64mod-1.so = -Wl,-soname,tst-x86_64mod-1.so ifneq (no,$(have-tunables)) # Test the state size for XSAVE when XSAVEC is disabled. -tst-x86_64-1-ENV = GLIBC_TUNABLES=glibc.cpu.hwcaps=-XSAVEC_Usable +tst-x86_64-1-ENV = GLIBC_TUNABLES=glibc.cpu.hwcaps=-XSAVEC endif $(objpfx)tst-x86_64-1: $(objpfx)x86_64/tst-x86_64mod-1.so @@ -71,10 +71,10 @@ CFLAGS-tst-platformmod-2.c = -mno-avx LDFLAGS-tst-platformmod-2.so = -Wl,-soname,tst-platformmod-2.so $(objpfx)tst-platform-1: $(objpfx)tst-platformmod-1.so $(objpfx)tst-platform-1.out: $(objpfx)x86_64/tst-platformmod-2.so -# Turn off AVX512F_Usable and AVX2_Usable so that GLRO(dl_platform) is +# Turn off AVX512F and AVX2 so that GLRO(dl_platform) is # always set to x86_64. tst-platform-1-ENV = LD_PRELOAD=$(objpfx)\$$PLATFORM/tst-platformmod-2.so \ - GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX512F_Usable,-AVX2_Usable + GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX512F,-AVX2 endif tests += tst-audit3 tst-audit4 tst-audit5 tst-audit6 tst-audit7 \ diff --git a/sysdeps/x86_64/dl-machine.h b/sysdeps/x86_64/dl-machine.h index 8e9baffeb4..ca73d8fef9 100644 --- a/sysdeps/x86_64/dl-machine.h +++ b/sysdeps/x86_64/dl-machine.h @@ -99,9 +99,9 @@ elf_machine_runtime_setup (struct link_map *l, int lazy, int profile) end in this function. 
*/ if (__glibc_unlikely (profile)) { - if (HAS_ARCH_FEATURE (AVX512F_Usable)) + if (CPU_FEATURE_USABLE (AVX512F)) *(ElfW(Addr) *) (got + 2) = (ElfW(Addr)) &_dl_runtime_profile_avx512; - else if (HAS_ARCH_FEATURE (AVX_Usable)) + else if (CPU_FEATURE_USABLE (AVX)) *(ElfW(Addr) *) (got + 2) = (ElfW(Addr)) &_dl_runtime_profile_avx; else *(ElfW(Addr) *) (got + 2) = (ElfW(Addr)) &_dl_runtime_profile_sse; @@ -119,7 +119,7 @@ elf_machine_runtime_setup (struct link_map *l, int lazy, int profile) the resolved address. */ if (GLRO(dl_x86_cpu_features).xsave_state_size != 0) *(ElfW(Addr) *) (got + 2) - = (HAS_ARCH_FEATURE (XSAVEC_Usable) + = (CPU_FEATURE_USABLE (XSAVEC) ? (ElfW(Addr)) &_dl_runtime_resolve_xsavec : (ElfW(Addr)) &_dl_runtime_resolve_xsave); else diff --git a/sysdeps/x86_64/fpu/math-tests-arch.h b/sysdeps/x86_64/fpu/math-tests-arch.h index 435ddad991..33ea763de2 100644 --- a/sysdeps/x86_64/fpu/math-tests-arch.h +++ b/sysdeps/x86_64/fpu/math-tests-arch.h @@ -24,7 +24,7 @@ # define CHECK_ARCH_EXT \ do \ { \ - if (!HAS_ARCH_FEATURE (AVX_Usable)) return; \ + if (!CPU_FEATURE_USABLE (AVX)) return; \ } \ while (0) @@ -34,7 +34,7 @@ # define CHECK_ARCH_EXT \ do \ { \ - if (!HAS_ARCH_FEATURE (AVX2_Usable)) return; \ + if (!CPU_FEATURE_USABLE (AVX2)) return; \ } \ while (0) @@ -44,7 +44,7 @@ # define CHECK_ARCH_EXT \ do \ { \ - if (!HAS_ARCH_FEATURE (AVX512F_Usable)) return; \ + if (!CPU_FEATURE_USABLE (AVX512F)) return; \ } \ while (0) diff --git a/sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h b/sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h index 86835eebc1..95fe2f4d70 100644 --- a/sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h +++ b/sysdeps/x86_64/fpu/multiarch/ifunc-avx-fma4.h @@ -29,14 +29,14 @@ IFUNC_SELECTOR (void) { const struct cpu_features* cpu_features = __get_cpu_features (); - if (CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable) - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA) + && CPU_FEATURE_USABLE_P 
(cpu_features, AVX2)) return OPTIMIZE (fma); - if (CPU_FEATURES_ARCH_P (cpu_features, FMA4_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA4)) return OPTIMIZE (fma4); - if (CPU_FEATURES_ARCH_P (cpu_features, AVX_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, AVX)) return OPTIMIZE (avx); return OPTIMIZE (sse2); diff --git a/sysdeps/x86_64/fpu/multiarch/ifunc-fma.h b/sysdeps/x86_64/fpu/multiarch/ifunc-fma.h index 2242d97de0..0a25a44ab0 100644 --- a/sysdeps/x86_64/fpu/multiarch/ifunc-fma.h +++ b/sysdeps/x86_64/fpu/multiarch/ifunc-fma.h @@ -26,8 +26,8 @@ IFUNC_SELECTOR (void) { const struct cpu_features* cpu_features = __get_cpu_features (); - if (CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable) - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA) + && CPU_FEATURE_USABLE_P (cpu_features, AVX2)) return OPTIMIZE (fma); return OPTIMIZE (sse2); diff --git a/sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h b/sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h index 03adf86b9b..7659758972 100644 --- a/sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h +++ b/sysdeps/x86_64/fpu/multiarch/ifunc-fma4.h @@ -28,11 +28,11 @@ IFUNC_SELECTOR (void) { const struct cpu_features* cpu_features = __get_cpu_features (); - if (CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable) - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA) + && CPU_FEATURE_USABLE_P (cpu_features, AVX2)) return OPTIMIZE (fma); - if (CPU_FEATURES_ARCH_P (cpu_features, FMA4_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA4)) return OPTIMIZE (fma4); return OPTIMIZE (sse2); diff --git a/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h b/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h index 9c5c6f1476..2655e55444 100644 --- a/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h +++ b/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx2.h @@ -31,8 +31,8 @@ IFUNC_SELECTOR (void) { const struct cpu_features* cpu_features = __get_cpu_features (); - if 
(CPU_FEATURES_ARCH_P (cpu_features, FMA_Usable) - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, FMA) + && CPU_FEATURE_USABLE_P (cpu_features, AVX2)) return OPTIMIZE (avx2); return OPTIMIZE (sse_wrapper); diff --git a/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h b/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h index 70e22c53bf..5f8326503b 100644 --- a/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h +++ b/sysdeps/x86_64/fpu/multiarch/ifunc-mathvec-avx512.h @@ -34,10 +34,10 @@ IFUNC_SELECTOR (void) if (!CPU_FEATURES_ARCH_P (cpu_features, MathVec_Prefer_No_AVX512)) { - if (CPU_FEATURES_ARCH_P (cpu_features, AVX512DQ_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, AVX512DQ)) return OPTIMIZE (skx); - if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)) + if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)) return OPTIMIZE (knl); } diff --git a/sysdeps/x86_64/fpu/multiarch/s_fma.c b/sysdeps/x86_64/fpu/multiarch/s_fma.c index 9992a1e97a..0d8c0b7911 100644 --- a/sysdeps/x86_64/fpu/multiarch/s_fma.c +++ b/sysdeps/x86_64/fpu/multiarch/s_fma.c @@ -41,8 +41,8 @@ __fma_fma4 (double x, double y, double z) } -libm_ifunc (__fma, HAS_ARCH_FEATURE (FMA_Usable) - ? __fma_fma3 : (HAS_ARCH_FEATURE (FMA4_Usable) +libm_ifunc (__fma, CPU_FEATURE_USABLE (FMA) + ? __fma_fma3 : (CPU_FEATURE_USABLE (FMA4) ? __fma_fma4 : __fma_sse2)); libm_alias_double (__fma, fma) diff --git a/sysdeps/x86_64/fpu/multiarch/s_fmaf.c b/sysdeps/x86_64/fpu/multiarch/s_fmaf.c index 4cbcf1f61b..c01e5a21d4 100644 --- a/sysdeps/x86_64/fpu/multiarch/s_fmaf.c +++ b/sysdeps/x86_64/fpu/multiarch/s_fmaf.c @@ -40,8 +40,8 @@ __fmaf_fma4 (float x, float y, float z) } -libm_ifunc (__fmaf, HAS_ARCH_FEATURE (FMA_Usable) - ? __fmaf_fma3 : (HAS_ARCH_FEATURE (FMA4_Usable) +libm_ifunc (__fmaf, CPU_FEATURE_USABLE (FMA) + ? __fmaf_fma3 : (CPU_FEATURE_USABLE (FMA4) ? 
__fmaf_fma4 : __fmaf_sse2)); libm_alias_float (__fma, fma) diff --git a/sysdeps/x86_64/multiarch/ifunc-avx2.h b/sysdeps/x86_64/multiarch/ifunc-avx2.h index 69f30398ae..f4e311d470 100644 --- a/sysdeps/x86_64/multiarch/ifunc-avx2.h +++ b/sysdeps/x86_64/multiarch/ifunc-avx2.h @@ -28,7 +28,7 @@ IFUNC_SELECTOR (void) const struct cpu_features* cpu_features = __get_cpu_features (); if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER) - && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable) + && CPU_FEATURE_USABLE_P (cpu_features, AVX2) && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load)) return OPTIMIZE (avx2); diff --git a/sysdeps/x86_64/multiarch/ifunc-impl-list.c b/sysdeps/x86_64/multiarch/ifunc-impl-list.c index ce7eb1eecf..c094e06ed4 100644 --- a/sysdeps/x86_64/multiarch/ifunc-impl-list.c +++ b/sysdeps/x86_64/multiarch/ifunc-impl-list.c @@ -41,14 +41,14 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, /* Support sysdeps/x86_64/multiarch/memchr.c. */ IFUNC_IMPL (i, name, memchr, IFUNC_IMPL_ADD (array, i, memchr, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memchr_avx2) IFUNC_IMPL_ADD (array, i, memchr, 1, __memchr_sse2)) /* Support sysdeps/x86_64/multiarch/memcmp.c. */ IFUNC_IMPL (i, name, memcmp, IFUNC_IMPL_ADD (array, i, memcmp, - (HAS_ARCH_FEATURE (AVX2_Usable) + (CPU_FEATURE_USABLE (AVX2) && HAS_CPU_FEATURE (MOVBE)), __memcmp_avx2_movbe) IFUNC_IMPL_ADD (array, i, memcmp, HAS_CPU_FEATURE (SSE4_1), @@ -61,19 +61,19 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, /* Support sysdeps/x86_64/multiarch/memmove_chk.c. 
*/ IFUNC_IMPL (i, name, __memmove_chk, IFUNC_IMPL_ADD (array, i, __memmove_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_chk_avx512_no_vzeroupper) IFUNC_IMPL_ADD (array, i, __memmove_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_chk_avx512_unaligned) IFUNC_IMPL_ADD (array, i, __memmove_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_chk_avx512_unaligned_erms) IFUNC_IMPL_ADD (array, i, __memmove_chk, - HAS_ARCH_FEATURE (AVX_Usable), + CPU_FEATURE_USABLE (AVX), __memmove_chk_avx_unaligned) IFUNC_IMPL_ADD (array, i, __memmove_chk, - HAS_ARCH_FEATURE (AVX_Usable), + CPU_FEATURE_USABLE (AVX), __memmove_chk_avx_unaligned_erms) IFUNC_IMPL_ADD (array, i, __memmove_chk, HAS_CPU_FEATURE (SSSE3), @@ -92,19 +92,19 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, /* Support sysdeps/x86_64/multiarch/memmove.c. */ IFUNC_IMPL (i, name, memmove, IFUNC_IMPL_ADD (array, i, memmove, - HAS_ARCH_FEATURE (AVX_Usable), + CPU_FEATURE_USABLE (AVX), __memmove_avx_unaligned) IFUNC_IMPL_ADD (array, i, memmove, - HAS_ARCH_FEATURE (AVX_Usable), + CPU_FEATURE_USABLE (AVX), __memmove_avx_unaligned_erms) IFUNC_IMPL_ADD (array, i, memmove, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_avx512_no_vzeroupper) IFUNC_IMPL_ADD (array, i, memmove, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_avx512_unaligned) IFUNC_IMPL_ADD (array, i, memmove, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memmove_avx512_unaligned_erms) IFUNC_IMPL_ADD (array, i, memmove, HAS_CPU_FEATURE (SSSE3), __memmove_ssse3_back) @@ -119,7 +119,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, /* Support sysdeps/x86_64/multiarch/memrchr.c. 
*/ IFUNC_IMPL (i, name, memrchr, IFUNC_IMPL_ADD (array, i, memrchr, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memrchr_avx2) IFUNC_IMPL_ADD (array, i, memrchr, 1, __memrchr_sse2)) @@ -133,19 +133,19 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, IFUNC_IMPL_ADD (array, i, __memset_chk, 1, __memset_chk_sse2_unaligned_erms) IFUNC_IMPL_ADD (array, i, __memset_chk, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memset_chk_avx2_unaligned) IFUNC_IMPL_ADD (array, i, __memset_chk, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memset_chk_avx2_unaligned_erms) IFUNC_IMPL_ADD (array, i, __memset_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_chk_avx512_unaligned_erms) IFUNC_IMPL_ADD (array, i, __memset_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_chk_avx512_unaligned) IFUNC_IMPL_ADD (array, i, __memset_chk, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_chk_avx512_no_vzeroupper) ) #endif @@ -158,40 +158,40 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array, __memset_sse2_unaligned_erms) IFUNC_IMPL_ADD (array, i, memset, 1, __memset_erms) IFUNC_IMPL_ADD (array, i, memset, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memset_avx2_unaligned) IFUNC_IMPL_ADD (array, i, memset, - HAS_ARCH_FEATURE (AVX2_Usable), + CPU_FEATURE_USABLE (AVX2), __memset_avx2_unaligned_erms) IFUNC_IMPL_ADD (array, i, memset, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_avx512_unaligned_erms) IFUNC_IMPL_ADD (array, i, memset, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_avx512_unaligned) IFUNC_IMPL_ADD (array, i, memset, - HAS_ARCH_FEATURE (AVX512F_Usable), + CPU_FEATURE_USABLE (AVX512F), __memset_avx512_no_vzeroupper) ) /* Support sysdeps/x86_64/multiarch/rawmemchr.c. 
   */
   IFUNC_IMPL (i, name, rawmemchr,
	      IFUNC_IMPL_ADD (array, i, rawmemchr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __rawmemchr_avx2)
	      IFUNC_IMPL_ADD (array, i, rawmemchr, 1, __rawmemchr_sse2))

   /* Support sysdeps/x86_64/multiarch/strlen.c.  */
   IFUNC_IMPL (i, name, strlen,
	      IFUNC_IMPL_ADD (array, i, strlen,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strlen_avx2)
	      IFUNC_IMPL_ADD (array, i, strlen, 1, __strlen_sse2))

   /* Support sysdeps/x86_64/multiarch/strnlen.c.  */
   IFUNC_IMPL (i, name, strnlen,
	      IFUNC_IMPL_ADD (array, i, strnlen,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strnlen_avx2)
	      IFUNC_IMPL_ADD (array, i, strnlen, 1, __strnlen_sse2))
@@ -199,7 +199,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, stpncpy,
	      IFUNC_IMPL_ADD (array, i, stpncpy, HAS_CPU_FEATURE (SSSE3),
			      __stpncpy_ssse3)
-	      IFUNC_IMPL_ADD (array, i, stpncpy, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, stpncpy, CPU_FEATURE_USABLE (AVX2),
			      __stpncpy_avx2)
	      IFUNC_IMPL_ADD (array, i, stpncpy, 1,
			      __stpncpy_sse2_unaligned)
@@ -209,7 +209,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   IFUNC_IMPL (i, name, stpcpy,
	      IFUNC_IMPL_ADD (array, i, stpcpy, HAS_CPU_FEATURE (SSSE3),
			      __stpcpy_ssse3)
-	      IFUNC_IMPL_ADD (array, i, stpcpy, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, stpcpy, CPU_FEATURE_USABLE (AVX2),
			      __stpcpy_avx2)
	      IFUNC_IMPL_ADD (array, i, stpcpy, 1, __stpcpy_sse2_unaligned)
	      IFUNC_IMPL_ADD (array, i, stpcpy, 1, __stpcpy_sse2))
@@ -217,7 +217,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strcasecmp_l.c.  */
   IFUNC_IMPL (i, name, strcasecmp,
	      IFUNC_IMPL_ADD (array, i, strcasecmp,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __strcasecmp_avx)
	      IFUNC_IMPL_ADD (array, i, strcasecmp,
			      HAS_CPU_FEATURE (SSE4_2),
@@ -230,7 +230,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strcasecmp_l.c.  */
   IFUNC_IMPL (i, name, strcasecmp_l,
	      IFUNC_IMPL_ADD (array, i, strcasecmp_l,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __strcasecmp_l_avx)
	      IFUNC_IMPL_ADD (array, i, strcasecmp_l,
			      HAS_CPU_FEATURE (SSE4_2),
@@ -243,7 +243,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strcat.c.  */
   IFUNC_IMPL (i, name, strcat,
-	      IFUNC_IMPL_ADD (array, i, strcat, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, strcat, CPU_FEATURE_USABLE (AVX2),
			      __strcat_avx2)
	      IFUNC_IMPL_ADD (array, i, strcat, HAS_CPU_FEATURE (SSSE3),
			      __strcat_ssse3)
@@ -253,7 +253,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strchr.c.  */
   IFUNC_IMPL (i, name, strchr,
	      IFUNC_IMPL_ADD (array, i, strchr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strchr_avx2)
	      IFUNC_IMPL_ADD (array, i, strchr, 1, __strchr_sse2_no_bsf)
	      IFUNC_IMPL_ADD (array, i, strchr, 1, __strchr_sse2))
@@ -261,21 +261,21 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strchrnul.c.  */
   IFUNC_IMPL (i, name, strchrnul,
	      IFUNC_IMPL_ADD (array, i, strchrnul,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strchrnul_avx2)
	      IFUNC_IMPL_ADD (array, i, strchrnul, 1, __strchrnul_sse2))

   /* Support sysdeps/x86_64/multiarch/strrchr.c.  */
   IFUNC_IMPL (i, name, strrchr,
	      IFUNC_IMPL_ADD (array, i, strrchr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strrchr_avx2)
	      IFUNC_IMPL_ADD (array, i, strrchr, 1, __strrchr_sse2))

   /* Support sysdeps/x86_64/multiarch/strcmp.c.  */
   IFUNC_IMPL (i, name, strcmp,
	      IFUNC_IMPL_ADD (array, i, strcmp,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strcmp_avx2)
	      IFUNC_IMPL_ADD (array, i, strcmp, HAS_CPU_FEATURE (SSE4_2),
			      __strcmp_sse42)
@@ -286,7 +286,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strcpy.c.  */
   IFUNC_IMPL (i, name, strcpy,
-	      IFUNC_IMPL_ADD (array, i, strcpy, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, strcpy, CPU_FEATURE_USABLE (AVX2),
			      __strcpy_avx2)
	      IFUNC_IMPL_ADD (array, i, strcpy, HAS_CPU_FEATURE (SSSE3),
			      __strcpy_ssse3)
@@ -302,7 +302,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strncase_l.c.  */
   IFUNC_IMPL (i, name, strncasecmp,
	      IFUNC_IMPL_ADD (array, i, strncasecmp,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __strncasecmp_avx)
	      IFUNC_IMPL_ADD (array, i, strncasecmp,
			      HAS_CPU_FEATURE (SSE4_2),
@@ -316,7 +316,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strncase_l.c.  */
   IFUNC_IMPL (i, name, strncasecmp_l,
	      IFUNC_IMPL_ADD (array, i, strncasecmp_l,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __strncasecmp_l_avx)
	      IFUNC_IMPL_ADD (array, i, strncasecmp_l,
			      HAS_CPU_FEATURE (SSE4_2),
@@ -329,7 +329,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strncat.c.  */
   IFUNC_IMPL (i, name, strncat,
-	      IFUNC_IMPL_ADD (array, i, strncat, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, strncat, CPU_FEATURE_USABLE (AVX2),
			      __strncat_avx2)
	      IFUNC_IMPL_ADD (array, i, strncat, HAS_CPU_FEATURE (SSSE3),
			      __strncat_ssse3)
@@ -339,7 +339,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strncpy.c.  */
   IFUNC_IMPL (i, name, strncpy,
-	      IFUNC_IMPL_ADD (array, i, strncpy, HAS_ARCH_FEATURE (AVX2_Usable),
+	      IFUNC_IMPL_ADD (array, i, strncpy, CPU_FEATURE_USABLE (AVX2),
			      __strncpy_avx2)
	      IFUNC_IMPL_ADD (array, i, strncpy, HAS_CPU_FEATURE (SSSE3),
			      __strncpy_ssse3)
@@ -368,28 +368,28 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/wcschr.c.  */
   IFUNC_IMPL (i, name, wcschr,
	      IFUNC_IMPL_ADD (array, i, wcschr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcschr_avx2)
	      IFUNC_IMPL_ADD (array, i, wcschr, 1, __wcschr_sse2))

   /* Support sysdeps/x86_64/multiarch/wcsrchr.c.  */
   IFUNC_IMPL (i, name, wcsrchr,
	      IFUNC_IMPL_ADD (array, i, wcsrchr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcsrchr_avx2)
	      IFUNC_IMPL_ADD (array, i, wcsrchr, 1, __wcsrchr_sse2))

   /* Support sysdeps/x86_64/multiarch/wcscmp.c.  */
   IFUNC_IMPL (i, name, wcscmp,
	      IFUNC_IMPL_ADD (array, i, wcscmp,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcscmp_avx2)
	      IFUNC_IMPL_ADD (array, i, wcscmp, 1, __wcscmp_sse2))

   /* Support sysdeps/x86_64/multiarch/wcsncmp.c.  */
   IFUNC_IMPL (i, name, wcsncmp,
	      IFUNC_IMPL_ADD (array, i, wcsncmp,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcsncmp_avx2)
	      IFUNC_IMPL_ADD (array, i, wcsncmp, 1, __wcsncmp_sse2))
@@ -402,14 +402,14 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/wcslen.c.  */
   IFUNC_IMPL (i, name, wcslen,
	      IFUNC_IMPL_ADD (array, i, wcslen,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcslen_avx2)
	      IFUNC_IMPL_ADD (array, i, wcslen, 1, __wcslen_sse2))

   /* Support sysdeps/x86_64/multiarch/wcsnlen.c.  */
   IFUNC_IMPL (i, name, wcsnlen,
	      IFUNC_IMPL_ADD (array, i, wcsnlen,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wcsnlen_avx2)
	      IFUNC_IMPL_ADD (array, i, wcsnlen,
			      HAS_CPU_FEATURE (SSE4_1),
@@ -419,14 +419,14 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/wmemchr.c.  */
   IFUNC_IMPL (i, name, wmemchr,
	      IFUNC_IMPL_ADD (array, i, wmemchr,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wmemchr_avx2)
	      IFUNC_IMPL_ADD (array, i, wmemchr, 1, __wmemchr_sse2))

   /* Support sysdeps/x86_64/multiarch/wmemcmp.c.  */
   IFUNC_IMPL (i, name, wmemcmp,
	      IFUNC_IMPL_ADD (array, i, wmemcmp,
-			      (HAS_ARCH_FEATURE (AVX2_Usable)
+			      (CPU_FEATURE_USABLE (AVX2)
			       && HAS_CPU_FEATURE (MOVBE)),
			      __wmemcmp_avx2_movbe)
	      IFUNC_IMPL_ADD (array, i, wmemcmp, HAS_CPU_FEATURE (SSE4_1),
@@ -440,29 +440,29 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
	      IFUNC_IMPL_ADD (array, i, wmemset, 1,
			      __wmemset_sse2_unaligned)
	      IFUNC_IMPL_ADD (array, i, wmemset,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wmemset_avx2_unaligned)
	      IFUNC_IMPL_ADD (array, i, wmemset,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __wmemset_avx512_unaligned))

 #ifdef SHARED
   /* Support sysdeps/x86_64/multiarch/memcpy_chk.c.  */
   IFUNC_IMPL (i, name, __memcpy_chk,
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_chk_avx512_no_vzeroupper)
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_chk_avx512_unaligned)
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_chk_avx512_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __memcpy_chk_avx_unaligned)
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __memcpy_chk_avx_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, __memcpy_chk,
			      HAS_CPU_FEATURE (SSSE3),
@@ -481,23 +481,23 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/memcpy.c.  */
   IFUNC_IMPL (i, name, memcpy,
	      IFUNC_IMPL_ADD (array, i, memcpy,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __memcpy_avx_unaligned)
	      IFUNC_IMPL_ADD (array, i, memcpy,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __memcpy_avx_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, memcpy, HAS_CPU_FEATURE (SSSE3),
			      __memcpy_ssse3_back)
	      IFUNC_IMPL_ADD (array, i, memcpy, HAS_CPU_FEATURE (SSSE3),
			      __memcpy_ssse3)
	      IFUNC_IMPL_ADD (array, i, memcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_avx512_no_vzeroupper)
	      IFUNC_IMPL_ADD (array, i, memcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_avx512_unaligned)
	      IFUNC_IMPL_ADD (array, i, memcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __memcpy_avx512_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, memcpy, 1, __memcpy_sse2_unaligned)
	      IFUNC_IMPL_ADD (array, i, memcpy, 1,
@@ -508,19 +508,19 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/mempcpy_chk.c.  */
   IFUNC_IMPL (i, name, __mempcpy_chk,
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_chk_avx512_no_vzeroupper)
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_chk_avx512_unaligned)
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_chk_avx512_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __mempcpy_chk_avx_unaligned)
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __mempcpy_chk_avx_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, __mempcpy_chk,
			      HAS_CPU_FEATURE (SSSE3),
@@ -539,19 +539,19 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/mempcpy.c.  */
   IFUNC_IMPL (i, name, mempcpy,
	      IFUNC_IMPL_ADD (array, i, mempcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_avx512_no_vzeroupper)
	      IFUNC_IMPL_ADD (array, i, mempcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_avx512_unaligned)
	      IFUNC_IMPL_ADD (array, i, mempcpy,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __mempcpy_avx512_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, mempcpy,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __mempcpy_avx_unaligned)
	      IFUNC_IMPL_ADD (array, i, mempcpy,
-			      HAS_ARCH_FEATURE (AVX_Usable),
+			      CPU_FEATURE_USABLE (AVX),
			      __mempcpy_avx_unaligned_erms)
	      IFUNC_IMPL_ADD (array, i, mempcpy, HAS_CPU_FEATURE (SSSE3),
			      __mempcpy_ssse3_back)
@@ -566,7 +566,7 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
   /* Support sysdeps/x86_64/multiarch/strncmp.c.  */
   IFUNC_IMPL (i, name, strncmp,
	      IFUNC_IMPL_ADD (array, i, strncmp,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __strncmp_avx2)
	      IFUNC_IMPL_ADD (array, i, strncmp, HAS_CPU_FEATURE (SSE4_2),
			      __strncmp_sse42)
@@ -580,10 +580,10 @@ __libc_ifunc_impl_list (const char *name, struct libc_ifunc_impl *array,
	      IFUNC_IMPL_ADD (array, i, __wmemset_chk, 1,
			      __wmemset_chk_sse2_unaligned)
	      IFUNC_IMPL_ADD (array, i, __wmemset_chk,
-			      HAS_ARCH_FEATURE (AVX2_Usable),
+			      CPU_FEATURE_USABLE (AVX2),
			      __wmemset_chk_avx2_unaligned)
	      IFUNC_IMPL_ADD (array, i, __wmemset_chk,
-			      HAS_ARCH_FEATURE (AVX512F_Usable),
+			      CPU_FEATURE_USABLE (AVX512F),
			      __wmemset_chk_avx512_unaligned))
 #endif
diff --git a/sysdeps/x86_64/multiarch/ifunc-memcmp.h b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
index c14db39cf4..bb6ce8b547 100644
--- a/sysdeps/x86_64/multiarch/ifunc-memcmp.h
+++ b/sysdeps/x86_64/multiarch/ifunc-memcmp.h
@@ -30,7 +30,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_CPU_P (cpu_features, MOVBE)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2_movbe);
diff --git a/sysdeps/x86_64/multiarch/ifunc-memmove.h b/sysdeps/x86_64/multiarch/ifunc-memmove.h
index 81673d2019..4443cbc288 100644
--- a/sysdeps/x86_64/multiarch/ifunc-memmove.h
+++ b/sysdeps/x86_64/multiarch/ifunc-memmove.h
@@ -45,7 +45,7 @@ IFUNC_SELECTOR (void)
       || CPU_FEATURES_ARCH_P (cpu_features, Prefer_FSRM))
     return OPTIMIZE (erms);

-  if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)
       && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
     {
       if (CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
diff --git a/sysdeps/x86_64/multiarch/ifunc-memset.h b/sysdeps/x86_64/multiarch/ifunc-memset.h
index d690293385..b3e69c7e5e 100644
--- a/sysdeps/x86_64/multiarch/ifunc-memset.h
+++ b/sysdeps/x86_64/multiarch/ifunc-memset.h
@@ -42,7 +42,7 @@ IFUNC_SELECTOR (void)
   if (CPU_FEATURES_ARCH_P (cpu_features, Prefer_ERMS))
     return OPTIMIZE (erms);

-  if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)
       && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
     {
       if (CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER))
@@ -54,7 +54,7 @@ IFUNC_SELECTOR (void)
       return OPTIMIZE (avx512_unaligned);
     }

-  if (CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable))
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX2))
     {
       if (CPU_FEATURES_CPU_P (cpu_features, ERMS))
	 return OPTIMIZE (avx2_unaligned_erms);
diff --git a/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h b/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
index f349ee70fd..07d4aad8b6 100644
--- a/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
+++ b/sysdeps/x86_64/multiarch/ifunc-strcasecmp.h
@@ -29,7 +29,7 @@ IFUNC_SELECTOR (void)
 {
   const struct cpu_features* cpu_features = __get_cpu_features ();

-  if (CPU_FEATURES_ARCH_P (cpu_features, AVX_Usable))
+  if (CPU_FEATURE_USABLE_P (cpu_features, AVX))
     return OPTIMIZE (avx);

   if (CPU_FEATURES_CPU_P (cpu_features, SSE4_2)
diff --git a/sysdeps/x86_64/multiarch/ifunc-strcpy.h b/sysdeps/x86_64/multiarch/ifunc-strcpy.h
index ae4f451803..ba9a5468ae 100644
--- a/sysdeps/x86_64/multiarch/ifunc-strcpy.h
+++ b/sysdeps/x86_64/multiarch/ifunc-strcpy.h
@@ -32,7 +32,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2);
diff --git a/sysdeps/x86_64/multiarch/ifunc-wmemset.h b/sysdeps/x86_64/multiarch/ifunc-wmemset.h
index 583f6310a1..8cfce562fc 100644
--- a/sysdeps/x86_64/multiarch/ifunc-wmemset.h
+++ b/sysdeps/x86_64/multiarch/ifunc-wmemset.h
@@ -28,10 +28,10 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     {
-      if (CPU_FEATURES_ARCH_P (cpu_features, AVX512F_Usable)
+      if (CPU_FEATURE_USABLE_P (cpu_features, AVX512F)
	  && !CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_AVX512))
	return OPTIMIZE (avx512_unaligned);
       else
diff --git a/sysdeps/x86_64/multiarch/strchr.c b/sysdeps/x86_64/multiarch/strchr.c
index f27980dd36..8df4609bf8 100644
--- a/sysdeps/x86_64/multiarch/strchr.c
+++ b/sysdeps/x86_64/multiarch/strchr.c
@@ -36,7 +36,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2);
diff --git a/sysdeps/x86_64/multiarch/strcmp.c b/sysdeps/x86_64/multiarch/strcmp.c
index 4db7332ac1..5ddfdc28f8 100644
--- a/sysdeps/x86_64/multiarch/strcmp.c
+++ b/sysdeps/x86_64/multiarch/strcmp.c
@@ -37,7 +37,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2);
diff --git a/sysdeps/x86_64/multiarch/strncmp.c b/sysdeps/x86_64/multiarch/strncmp.c
index 6b63b0ac29..a4f514db1f 100644
--- a/sysdeps/x86_64/multiarch/strncmp.c
+++ b/sysdeps/x86_64/multiarch/strncmp.c
@@ -37,7 +37,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2);
diff --git a/sysdeps/x86_64/multiarch/test-multiarch.c b/sysdeps/x86_64/multiarch/test-multiarch.c
index 317373ceda..7b1fa6811c 100644
--- a/sysdeps/x86_64/multiarch/test-multiarch.c
+++ b/sysdeps/x86_64/multiarch/test-multiarch.c
@@ -75,10 +75,10 @@ do_test (int argc, char **argv)
   int fails;

   get_cpuinfo ();
-  fails = check_proc ("avx", HAS_ARCH_FEATURE (AVX_Usable),
-		      "HAS_ARCH_FEATURE (AVX_Usable)");
-  fails += check_proc ("fma4", HAS_ARCH_FEATURE (FMA4_Usable),
-		       "HAS_ARCH_FEATURE (FMA4_Usable)");
+  fails = check_proc ("avx", CPU_FEATURE_USABLE (AVX),
+		      "CPU_FEATURE_USABLE (AVX)");
+  fails += check_proc ("fma4", CPU_FEATURE_USABLE (FMA4),
+		       "CPU_FEATURE_USABLE (FMA4)");
   fails += check_proc ("sse4_2", HAS_CPU_FEATURE (SSE4_2),
		       "HAS_CPU_FEATURE (SSE4_2)");
   fails += check_proc ("sse4_1", HAS_CPU_FEATURE (SSE4_1)
diff --git a/sysdeps/x86_64/multiarch/wcsnlen.c b/sysdeps/x86_64/multiarch/wcsnlen.c
index 8c1fc1a574..e1eb49624f 100644
--- a/sysdeps/x86_64/multiarch/wcsnlen.c
+++ b/sysdeps/x86_64/multiarch/wcsnlen.c
@@ -36,7 +36,7 @@ IFUNC_SELECTOR (void)
   const struct cpu_features* cpu_features = __get_cpu_features ();

   if (!CPU_FEATURES_ARCH_P (cpu_features, Prefer_No_VZEROUPPER)
-      && CPU_FEATURES_ARCH_P (cpu_features, AVX2_Usable)
+      && CPU_FEATURE_USABLE_P (cpu_features, AVX2)
       && CPU_FEATURES_ARCH_P (cpu_features, AVX_Fast_Unaligned_Load))
     return OPTIMIZE (avx2);