[{"id":3679450,"web_url":"http://patchwork.ozlabs.org/comment/3679450/","msgid":"<f5a3de89-4c4a-43c9-8e7e-28bdf4107195@linaro.org>","list_archive_url":null,"date":"2026-04-20T15:58:12","subject":"Re: [PATCH v2 4/4] AArch64: Implement AdvSIMD and SVE powr(f)\n routines","submitter":{"id":66065,"url":"http://patchwork.ozlabs.org/api/people/66065/","name":"Adhemerval Zanella Netto","email":"adhemerval.zanella@linaro.org"},"content":"On 15/04/26 05:32, Pierre Blanchard wrote:\n> Vector variants of the new C23 powr routines.\n> \n> These provide the same maximum error as pow by virtue of\n> relying on shared approximation techniques and sources.\n> \n> Note: Benchmark inputs for powr(f) are identical to pow(f).\n> \n> Performance gain over pow on V1 with GCC@15:\n> - SVE powr: 10-12% on subnormal x, 12-13% on x < 0.\n> - SVE powrf: 15% on all x < 0.\n> - AdvSIMD powr: for x < 0, 40% if x subnormal, 60% otherwise.\n> - AdvSIMD powrf: 4% on x subnormals or x < 0.\n> ---\n> Ok for master? If so please commit for me as I don't have commit rights.\n> Thanks,\n\nLGTM, thanks.  
I will install on master.\n\nReviewed-by: Adhemerval Zanella  <adhemerval.zanella@linaro.org>\n\n> Pierre\n>  bits/libm-simd-decl-stubs.h                   |  11 ++\n>  math/bits/mathcalls.h                         |   1 +\n>  sysdeps/aarch64/fpu/Makefile                  |   1 +\n>  sysdeps/aarch64/fpu/Versions                  |   7 +\n>  sysdeps/aarch64/fpu/advsimd_f32_protos.h      |   1 +\n>  sysdeps/aarch64/fpu/bits/math-vector.h        |   8 +\n>  .../fpu/finclude/math-vector-fortran.h        |   2 +\n>  sysdeps/aarch64/fpu/powr_advsimd.c            | 148 ++++++++++++++++++\n>  sysdeps/aarch64/fpu/powr_sve.c                | 123 +++++++++++++++\n>  sysdeps/aarch64/fpu/powrf_advsimd.c           | 135 ++++++++++++++++\n>  sysdeps/aarch64/fpu/powrf_sve.c               | 135 ++++++++++++++++\n>  .../fpu/test-double-advsimd-wrappers.c        |   1 +\n>  .../aarch64/fpu/test-double-sve-wrappers.c    |   1 +\n>  .../aarch64/fpu/test-float-advsimd-wrappers.c |   1 +\n>  sysdeps/aarch64/fpu/test-float-sve-wrappers.c |   1 +\n>  sysdeps/aarch64/fpu/v_powrf_inline.h          |   4 +-\n>  .../unix/sysv/linux/aarch64/libmvec.abilist   |   5 +\n>  17 files changed, 584 insertions(+), 1 deletion(-)\n>  create mode 100644 sysdeps/aarch64/fpu/powr_advsimd.c\n>  create mode 100644 sysdeps/aarch64/fpu/powr_sve.c\n>  create mode 100644 sysdeps/aarch64/fpu/powrf_advsimd.c\n>  create mode 100644 sysdeps/aarch64/fpu/powrf_sve.c\n> \n> diff --git a/bits/libm-simd-decl-stubs.h b/bits/libm-simd-decl-stubs.h\n> index 2b19901d3e..5cb0e4a245 100644\n> --- a/bits/libm-simd-decl-stubs.h\n> +++ b/bits/libm-simd-decl-stubs.h\n> @@ -99,6 +99,17 @@\n>  #define __DECL_SIMD_powf64x\n>  #define __DECL_SIMD_powf128x\n>  \n> +#define __DECL_SIMD_powr\n> +#define __DECL_SIMD_powrf\n> +#define __DECL_SIMD_powrl\n> +#define __DECL_SIMD_powrf16\n> +#define __DECL_SIMD_powrf32\n> +#define __DECL_SIMD_powrf64\n> +#define __DECL_SIMD_powrf128\n> +#define __DECL_SIMD_powrf32x\n> +#define 
__DECL_SIMD_powrf64x\n> +#define __DECL_SIMD_powrf128x\n> +\n>  #define __DECL_SIMD_acos\n>  #define __DECL_SIMD_acosf\n>  #define __DECL_SIMD_acosl\n> diff --git a/math/bits/mathcalls.h b/math/bits/mathcalls.h\n> index a4c994d0a7..4e983f5c7b 100644\n> --- a/math/bits/mathcalls.h\n> +++ b/math/bits/mathcalls.h\n> @@ -197,6 +197,7 @@ __MATHCALL (compoundn,, (_Mdouble_ __x, long long int __y));\n>  __MATHCALL (pown,, (_Mdouble_ __x, long long int __y));\n>  \n>  /* Return X to the Y power.  */\n> +__MATHCALL_VEC (powr,, (_Mdouble_ __x, _Mdouble_ __y));\n>  __MATHCALL (powr,, (_Mdouble_ __x, _Mdouble_ __y));\n>  \n>  /* Return the Yth root of X.  */\n> diff --git a/sysdeps/aarch64/fpu/Makefile b/sysdeps/aarch64/fpu/Makefile\n> index 998fc08d43..6c8cacf21d 100644\n> --- a/sysdeps/aarch64/fpu/Makefile\n> +++ b/sysdeps/aarch64/fpu/Makefile\n> @@ -29,6 +29,7 @@ libmvec-supported-funcs = acos \\\n>                            log2 \\\n>                            log2p1 \\\n>                            pow \\\n> +                          powr \\\n>                            rsqrt \\\n>                            sin \\\n>                            sinh \\\n> diff --git a/sysdeps/aarch64/fpu/Versions b/sysdeps/aarch64/fpu/Versions\n> index d68510a20e..2337f9d331 100644\n> --- a/sysdeps/aarch64/fpu/Versions\n> +++ b/sysdeps/aarch64/fpu/Versions\n> @@ -206,4 +206,11 @@ libmvec {\n>      _ZGVsMxv_rsqrt;\n>      _ZGVsMxv_rsqrtf;\n>    }\n> +  GLIBC_2.44 {\n> +    _ZGVnN2vv_powr;\n> +    _ZGVnN2vv_powrf;\n> +    _ZGVnN4vv_powrf;\n> +    _ZGVsMxvv_powr;\n> +    _ZGVsMxvv_powrf;\n> +  }\n>  }\n> diff --git a/sysdeps/aarch64/fpu/advsimd_f32_protos.h b/sysdeps/aarch64/fpu/advsimd_f32_protos.h\n> index 81de7351f1..59210b11ad 100644\n> --- a/sysdeps/aarch64/fpu/advsimd_f32_protos.h\n> +++ b/sysdeps/aarch64/fpu/advsimd_f32_protos.h\n> @@ -47,6 +47,7 @@ libmvec_hidden_proto (V_NAME_F1(log2p1));\n>  libmvec_hidden_proto (V_NAME_F1(logp1));\n>  libmvec_hidden_proto (V_NAME_F1(log));\n>  
libmvec_hidden_proto (V_NAME_F2(pow));\n> +libmvec_hidden_proto (V_NAME_F2(powr));\n>  libmvec_hidden_proto (V_NAME_F1(rsqrt));\n>  libmvec_hidden_proto (V_NAME_F1(sin));\n>  libmvec_hidden_proto (V_NAME_F1(sinh));\n> diff --git a/sysdeps/aarch64/fpu/bits/math-vector.h b/sysdeps/aarch64/fpu/bits/math-vector.h\n> index 442cd2a02c..db218eedf9 100644\n> --- a/sysdeps/aarch64/fpu/bits/math-vector.h\n> +++ b/sysdeps/aarch64/fpu/bits/math-vector.h\n> @@ -157,6 +157,10 @@\n>  # define __DECL_SIMD_pow __DECL_SIMD_aarch64\n>  # undef __DECL_SIMD_powf\n>  # define __DECL_SIMD_powf __DECL_SIMD_aarch64\n> +# undef __DECL_SIMD_powr\n> +# define __DECL_SIMD_powr __DECL_SIMD_aarch64\n> +# undef __DECL_SIMD_powrf\n> +# define __DECL_SIMD_powrf __DECL_SIMD_aarch64\n>  # undef __DECL_SIMD_rsqrt\n>  # define __DECL_SIMD_rsqrt __DECL_SIMD_aarch64\n>  # undef __DECL_SIMD_rsqrtf\n> @@ -243,6 +247,7 @@ __vpcs __f32x4_t _ZGVnN4v_log2f (__f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4v_log2p1f (__f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4v_logp1f (__f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4vv_powf (__f32x4_t, __f32x4_t);\n> +__vpcs __f32x4_t _ZGVnN4vv_powrf (__f32x4_t, __f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4v_rsqrtf (__f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4v_sinf (__f32x4_t);\n>  __vpcs __f32x4_t _ZGVnN4v_sinhf (__f32x4_t);\n> @@ -283,6 +288,7 @@ __vpcs __f64x2_t _ZGVnN2v_log2 (__f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2v_log2p1 (__f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2v_logp1 (__f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2vv_pow (__f64x2_t, __f64x2_t);\n> +__vpcs __f64x2_t _ZGVnN2vv_powr (__f64x2_t, __f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2v_rsqrt (__f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2v_sin (__f64x2_t);\n>  __vpcs __f64x2_t _ZGVnN2v_sinh (__f64x2_t);\n> @@ -328,6 +334,7 @@ __sv_f32_t _ZGVsMxv_log2f (__sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxv_log2p1f (__sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxv_logp1f (__sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxvv_powf (__sv_f32_t, __sv_f32_t, 
__sv_bool_t);\n> +__sv_f32_t _ZGVsMxvv_powrf (__sv_f32_t, __sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxv_rsqrtf (__sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxv_sinf (__sv_f32_t, __sv_bool_t);\n>  __sv_f32_t _ZGVsMxv_sinhf (__sv_f32_t, __sv_bool_t);\n> @@ -368,6 +375,7 @@ __sv_f64_t _ZGVsMxv_log2 (__sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxv_log2p1 (__sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxv_logp1 (__sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxvv_pow (__sv_f64_t, __sv_f64_t, __sv_bool_t);\n> +__sv_f64_t _ZGVsMxvv_powr (__sv_f64_t, __sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxv_rsqrt (__sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxv_sin (__sv_f64_t, __sv_bool_t);\n>  __sv_f64_t _ZGVsMxv_sinh (__sv_f64_t, __sv_bool_t);\n> diff --git a/sysdeps/aarch64/fpu/finclude/math-vector-fortran.h b/sysdeps/aarch64/fpu/finclude/math-vector-fortran.h\n> index 46fc8a627c..71ee5d6a0e 100644\n> --- a/sysdeps/aarch64/fpu/finclude/math-vector-fortran.h\n> +++ b/sysdeps/aarch64/fpu/finclude/math-vector-fortran.h\n> @@ -80,6 +80,8 @@\n>  !GCC$ builtin (logp1f) attributes simd (notinbranch) if('fastmath')\n>  !GCC$ builtin (pow) attributes simd (notinbranch) if('fastmath')\n>  !GCC$ builtin (powf) attributes simd (notinbranch) if('fastmath')\n> +!GCC$ builtin (powr) attributes simd (notinbranch) if('fastmath')\n> +!GCC$ builtin (powrf) attributes simd (notinbranch) if('fastmath')\n>  !GCC$ builtin (rsqrt) attributes simd (notinbranch) if('fastmath')\n>  !GCC$ builtin (rsqrtf) attributes simd (notinbranch) if('fastmath')\n>  !GCC$ builtin (sin) attributes simd (notinbranch) if('fastmath')\n> diff --git a/sysdeps/aarch64/fpu/powr_advsimd.c b/sysdeps/aarch64/fpu/powr_advsimd.c\n> new file mode 100644\n> index 0000000000..8163ae87ad\n> --- /dev/null\n> +++ b/sysdeps/aarch64/fpu/powr_advsimd.c\n> @@ -0,0 +1,148 @@\n> +/* Double-precision vector (AdvSIMD) powr function\n> +\n> +   Copyright (C) 2026 Free Software Foundation, Inc.\n> +   This file is part of the GNU C 
Library.\n> +\n> +   The GNU C Library is free software; you can redistribute it and/or\n> +   modify it under the terms of the GNU Lesser General Public\n> +   License as published by the Free Software Foundation; either\n> +   version 2.1 of the License, or (at your option) any later version.\n> +\n> +   The GNU C Library is distributed in the hope that it will be useful,\n> +   but WITHOUT ANY WARRANTY; without even the implied warranty of\n> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n> +   Lesser General Public License for more details.\n> +\n> +   You should have received a copy of the GNU Lesser General Public\n> +   License along with the GNU C Library; if not, see\n> +   <https://www.gnu.org/licenses/>.  */\n> +\n> +#include \"pow_common.h\"\n> +#include \"v_math.h\"\n> +\n> +#include \"v_pow_inline.h\"\n> +\n> +static double NOINLINE\n> +powr_scalar_special_case (double x, double y)\n> +{\n> +  /* Negative x returns NaN (+0/-0 and NaN x not handled here).  */\n> +  if (x < 0)\n> +    return __builtin_nan (\"\");\n> +\n> +  uint64_t ix = asuint64 (x);\n> +  uint64_t iy = asuint64 (y);\n> +  uint32_t topx = top12 (x);\n> +  uint32_t topy = top12 (y);\n> +\n> +  /* Special cases: (x < 0x1p-1022 or inf or nan) or\n> +     (|y| < 0x1p-65 or |y| >= 0x1p63 or nan).  */\n> +  if (__glibc_unlikely (topx - SmallPowX >= ThresPowX\n> +\t\t|| (topy & 0x7ff) - SmallPowY >= ThresPowY))\n> +    {\n> +      /* |y| is 0, Inf or NaN.  */\n> +      if (__glibc_unlikely (zeroinfnan (iy)))\n> +\t{\n> +\t  if (2 * ix > 2 * asuint64 (INFINITY)\n> +\t      || 2 * iy > 2 * asuint64 (INFINITY))\n> +\t    return __builtin_nan (\"\");\n> +\t  if (2 * iy == 0)\n> +\t    {\n> +\t      /* |x| = 0 or inf.  */\n> +\t      if ((2 * ix == 0) || (2 * ix == 2 * asuint64 (INFINITY)))\n> +\t\treturn __builtin_nan (\"\");\n> +\t      /* x is finite.  */\n> +\t      return 1.0;\n> +\t    }\n> +\t  /* |y| = Inf and x = 1.0.  
*/\n> +\t  if (ix == asuint64 (1.0))\n> +\t    return __builtin_nan (\"\");\n> +\t  /* |x| < 1 and y = Inf or |x| > 1 and y = -Inf.  */\n> +\t  if ((2 * ix < 2 * asuint64 (1.0)) == !(iy >> 63))\n> +\t    return 0.0;\n> +\t  /* |y| = Inf and previous conditions not met.  */\n> +\t  return y * y;\n> +\t}\n> +      /* |x| is 0, Inf or NaN.  */\n> +      if (__glibc_unlikely (zeroinfnan (ix)))\n> +\t{\n> +\t  double x2 = x * x;\n> +\t  return iy >> 63 ? 1 / x2 : x2;\n> +\t}\n> +      /* Here x and y are non-zero finite.  */\n> +      /* Note: if |y| > 1075 * ln2 * 2^53 ~= 0x1.749p62 then powr(x,y) = inf/0\n> +\t and if |y| < 2^-54 / 1075 ~= 0x1.e7b6p-65 then powr(x,y) = +-1.  */\n> +      if ((topy & 0x7ff) - SmallPowY >= ThresPowY)\n> +\t{\n> +\t  if (ix == asuint64 (1.0))\n> +\t    return 1.0;\n> +\t  /* |y| < 2^-65, x^y ~= 1 + y*log(x).  */\n> +\t  if ((topy & 0x7ff) < SmallPowY)\n> +\t    return 1.0;\n> +\t  return (ix > asuint64 (1.0)) == (topy < 0x800) ? INFINITY : 0;\n> +\t}\n> +      if (topx == 0)\n> +\t{\n> +\t  /* Normalize subnormal x so exponent becomes negative.  */\n> +\t  ix = asuint64 (x * 0x1p52);\n> +\t  ix -= 52ULL << 52;\n> +\t}\n> +    }\n> +\n> +  /* Core computation of exp (y * log (x)).  */\n> +  double lo;\n> +  double hi = log_inline (ix, &lo);\n> +  double ehi = y * hi;\n> +  double elo = y * lo + fma (y, hi, -ehi);\n> +  return exp_inline (ehi, elo, 0);\n> +}\n> +\n> +static float64x2_t VPCS_ATTR NOINLINE\n> +scalar_fallback (float64x2_t x, float64x2_t y)\n> +{\n> +  return (float64x2_t){ powr_scalar_special_case (x[0], y[0]),\n> +\t\t\tpowr_scalar_special_case (x[1], y[1]) };\n> +}\n> +\n> +/* Implementation of AdvSIMD powr.\n> +   Maximum measured error is 1.04 ULPs:\n> +   _ZGVnN2vv_powr(0x1.024a3e56b3c3p-136, 0x1.87910248b58acp-13)\n> +     got 0x1.f71162f473251p-1\n> +    want 0x1.f71162f473252p-1.  
*/\n> +float64x2_t VPCS_ATTR V_NAME_D2 (powr) (float64x2_t x, float64x2_t y)\n> +{\n> +  const struct data *d = ptr_barrier (&data);\n> +\n> +  /* Case of x <= 0 is too complicated to be vectorised efficiently here,\n> +     fall back to scalar powr for all lanes if any x <= 0 is detected.  */\n> +  if (v_any_u64 (vclezq_s64 (vreinterpretq_s64_f64 (x))))\n> +    return scalar_fallback (x, y);\n> +\n> +  uint64x2_t vix = vreinterpretq_u64_f64 (x);\n> +  uint64x2_t viy = vreinterpretq_u64_f64 (y);\n> +\n> +  /* Special cases of x or y.\n> +     The case y==0 does not trigger a special case, since in this case it is\n> +     necessary to fix the result only if x is a signalling nan, which already\n> +     triggers a special case. We test y==0 directly in the scalar fallback.  */\n> +  uint64x2_t x_is_inf_or_nan = vcgeq_u64 (vandq_u64 (vix, d->inf), d->inf);\n> +  uint64x2_t y_is_inf_or_nan = vcgeq_u64 (vandq_u64 (viy, d->inf), d->inf);\n> +  uint64x2_t special = vorrq_u64 (x_is_inf_or_nan, y_is_inf_or_nan);\n> +\n> +  /* Fallback to scalar on all lanes if any lane is inf or nan.  */\n> +  if (__glibc_unlikely (v_any_u64 (special)))\n> +    return scalar_fallback (x, y);\n> +\n> +  /* Cases of subnormal x: |x| < 0x1p-1022.  */\n> +  uint64x2_t x_is_subnormal = vcaltq_f64 (x, d->subnormal_bound);\n> +  if (__glibc_unlikely (v_any_u64 (x_is_subnormal)))\n> +    {\n> +      /* Normalize subnormal x so exponent becomes negative.  */\n> +      uint64x2_t vix_norm\n> +\t  = vreinterpretq_u64_f64 (vmulq_f64 (x, d->subnormal_scale));\n> +      vix_norm = vsubq_u64 (vix_norm, d->subnormal_bias);\n> +      x = vbslq_f64 (x_is_subnormal, vreinterpretq_f64_u64 (vix_norm), x);\n> +    }\n> +\n> +  /* Core computation of exp (y * log (x)).  
*/\n> +  return v_pow_inline (x, y, d);\n> +}\n> diff --git a/sysdeps/aarch64/fpu/powr_sve.c b/sysdeps/aarch64/fpu/powr_sve.c\n> new file mode 100644\n> index 0000000000..ae599a037d\n> --- /dev/null\n> +++ b/sysdeps/aarch64/fpu/powr_sve.c\n> @@ -0,0 +1,123 @@\n> +/* Double-precision vector (SVE) powr function\n> +\n> +   Copyright (C) 2026 Free Software Foundation, Inc.\n> +   This file is part of the GNU C Library.\n> +\n> +   The GNU C Library is free software; you can redistribute it and/or\n> +   modify it under the terms of the GNU Lesser General Public\n> +   License as published by the Free Software Foundation; either\n> +   version 2.1 of the License, or (at your option) any later version.\n> +\n> +   The GNU C Library is distributed in the hope that it will be useful,\n> +   but WITHOUT ANY WARRANTY; without even the implied warranty of\n> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n> +   Lesser General Public License for more details.\n> +\n> +   You should have received a copy of the GNU Lesser General Public\n> +   License along with the GNU C Library; if not, see\n> +   <https://www.gnu.org/licenses/>.  */\n> +\n> +#include \"math_config.h\"\n> +#include \"pow_common.h\"\n> +#include \"sv_math.h\"\n> +\n> +#define WANT_SV_POW_SIGN_BIAS 0\n> +#include \"sv_pow_inline.h\"\n> +\n> +/* A scalar subroutine used to fix main powr special cases.  */\n> +static inline double\n> +powr_specialcase (double x, double y)\n> +{\n> +  uint64_t ix = asuint64 (x);\n> +  uint64_t iy = asuint64 (y);\n> +  /* |y| is 0, Inf or NaN.  */\n> +  if (__glibc_unlikely (zeroinfnan (iy)))\n> +    {\n> +      /* |x| or |y| is NaN.  */\n> +      if (2 * ix > 2 * asuint64 (INFINITY) || 2 * iy > 2 * asuint64 (INFINITY))\n> +\treturn __builtin_nan (\"\");\n> +      /* |y| is 0.0.  */\n> +      if (2 * iy == 0)\n> +\t{\n> +\t  /* |x| = 0 or Inf.  
*/\n> +\t  if ((2 * ix == 0) || (2 * ix == 2 * asuint64 (INFINITY)))\n> +\t    return __builtin_nan (\"\");\n> +\t  /* x is finite.  */\n> +\t  return 1.0;\n> +\t}\n> +      /* x is 1.0.  */\n> +      if (ix == asuint64 (1.0))\n> +\treturn __builtin_nan (\"\");\n> +      /* |x| < 1 and y = Inf or |x| > 1 and y = -Inf.  */\n> +      if ((2 * ix < 2 * asuint64 (1.0)) == !(iy >> 63))\n> +\treturn 0.0;\n> +      /* |y| = Inf and previous conditions not met.  */\n> +      return y * y;\n> +    }\n> +  /* x is 0, Inf or NaN. Negative x are handled in the core.  */\n> +  if (__glibc_unlikely (zeroinfnan (ix)))\n> +    {\n> +      double x2 = x * x;\n> +      return (iy >> 63) ? 1 / x2 : x2;\n> +    }\n> +  /* Return x for convenience, but make sure result is never used.  */\n> +  return x;\n> +}\n> +\n> +/* Scalar fallback for special case routines with custom signature.  */\n> +static svfloat64_t NOINLINE\n> +sv_powr_specialcase (svfloat64_t x1, svfloat64_t x2, svfloat64_t y,\n> +\t\t     svbool_t cmp)\n> +{\n> +  return sv_call2_f64 (powr_specialcase, x1, x2, y, cmp);\n> +}\n> +\n> +/* Implementation of SVE powr.\n> +\n> +   Provides the same accuracy as AdvSIMD pow and powr, since it relies on the\n> +   same algorithm.\n> +\n> +   Maximum measured error is 1.04 ULPs:\n> +   SV_NAME_D2 (powr) (0x1.3d2d45bc848acp+63, -0x1.a48a38b40cd43p-12)\n> +     got 0x1.f7116284221fcp-1\n> +    want 0x1.f7116284221fdp-1.  */\n> +svfloat64_t SV_NAME_D2 (powr) (svfloat64_t x, svfloat64_t y, const svbool_t pg)\n> +{\n> +  const struct data *d = ptr_barrier (&data);\n> +\n> +  svuint64_t vix = svreinterpret_u64 (x);\n> +  svuint64_t viy = svreinterpret_u64 (y);\n> +\n> +  svbool_t xpos = svcmpge (pg, x, sv_f64 (0.0));\n> +\n> +  /* Special cases of x or y: zero, inf and nan.  
*/\n> +  svbool_t xspecial = sv_zeroinfnan (xpos, vix);\n> +  svbool_t yspecial = sv_zeroinfnan (xpos, viy);\n> +  svbool_t cmp = svorr_z (xpos, xspecial, yspecial);\n> +\n> +  /* Cases of positive subnormal x: 0 < x < 0x1p-1022.  */\n> +  svbool_t x_is_subnormal = svaclt (xpos, x, 0x1p-1022);\n> +  if (__glibc_unlikely (svptest_any (xpos, x_is_subnormal)))\n> +    {\n> +      /* Normalize subnormal x so exponent becomes negative.  */\n> +      svuint64_t vix_norm\n> +\t  = svreinterpret_u64 (svmul_m (x_is_subnormal, x, 0x1p52));\n> +      vix = svsub_m (x_is_subnormal, vix_norm, 52ULL << 52);\n> +    }\n> +\n> +  svfloat64_t vlo;\n> +  svfloat64_t vhi = sv_log_inline (xpos, vix, &vlo, d);\n> +\n> +  svfloat64_t vehi = svmul_x (svptrue_b64 (), y, vhi);\n> +  svfloat64_t vemi = svmls_x (xpos, vehi, y, vhi);\n> +  svfloat64_t velo = svnmls_x (xpos, vemi, y, vlo);\n> +  svfloat64_t vz = sv_exp_inline (xpos, vehi, velo, sv_u64 (0), d);\n> +\n> +  /* Cases of negative x.  */\n> +  vz = svsel (xpos, vz, sv_f64 (__builtin_nan (\"\")));\n> +\n> +  if (__glibc_unlikely (svptest_any (cmp, cmp)))\n> +    return sv_powr_specialcase (x, y, vz, cmp);\n> +\n> +  return vz;\n> +}\n> diff --git a/sysdeps/aarch64/fpu/powrf_advsimd.c b/sysdeps/aarch64/fpu/powrf_advsimd.c\n> new file mode 100644\n> index 0000000000..c0d5de3d07\n> --- /dev/null\n> +++ b/sysdeps/aarch64/fpu/powrf_advsimd.c\n> @@ -0,0 +1,135 @@\n> +/* Single-precision vector (AdvSIMD) powr function\n> +\n> +   Copyright (C) 2026 Free Software Foundation, Inc.\n> +   This file is part of the GNU C Library.\n> +\n> +   The GNU C Library is free software; you can redistribute it and/or\n> +   modify it under the terms of the GNU Lesser General Public\n> +   License as published by the Free Software Foundation; either\n> +   version 2.1 of the License, or (at your option) any later version.\n> +\n> +   The GNU C Library is distributed in the hope that it will be useful,\n> +   but WITHOUT ANY WARRANTY; without even the 
implied warranty of\n> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n> +   Lesser General Public License for more details.\n> +\n> +   You should have received a copy of the GNU Lesser General Public\n> +   License along with the GNU C Library; if not, see\n> +   <https://www.gnu.org/licenses/>.  */\n> +\n> +#include \"flt-32/math_config.h\"\n> +#include \"v_math.h\"\n> +#include \"v_powrf_inline.h\"\n> +\n> +/* A scalar subroutine used to fix main powrf special cases.  */\n> +static inline float\n> +powrf_specialcase (float x, float y)\n> +{\n> +  /* Negative x returns NaN (+0/-0 and NaN x not handled here).  */\n> +  if (x < 0)\n> +    return __builtin_nanf (\"\");\n> +\n> +  uint32_t ix = asuint (x);\n> +  uint32_t iy = asuint (y);\n> +  /* y is 0, Inf or NaN.  */\n> +  if (__glibc_unlikely (zeroinfnan (iy)))\n> +    {\n> +      /* |x| or |y| is NaN.  */\n> +      if (2 * ix > 2u * 0x7f800000 || 2 * iy > 2u * 0x7f800000)\n> +\treturn __builtin_nanf (\"\");\n> +      /* |y| = 0.  */\n> +      if (2 * iy == 0)\n> +\t{\n> +\t  /* |x| = 0 or inf.  */\n> +\t  if ((2 * ix == 0) || (2 * ix == 2u * 0x7f800000))\n> +\t    return __builtin_nanf (\"\");\n> +\t  /* x is finite.  */\n> +\t  return 1.0f;\n> +\t}\n> +      /* |y| = Inf and x = 1.0.  */\n> +      if (ix == 0x3f800000)\n> +\treturn __builtin_nanf (\"\");\n> +      /* |x| < 1 and y = Inf or |x| > 1 and y = -Inf.  */\n> +      if ((2 * ix < 2 * 0x3f800000) == !(iy & 0x80000000))\n> +\treturn 0.0f;\n> +      /* |y| = Inf and previous conditions not met.  */\n> +      return y * y;\n> +    }\n> +  /* x is 0, Inf or NaN. Negative x are handled in the core.  */\n> +  if (__glibc_unlikely (zeroinfnan (ix)))\n> +    {\n> +      float x2 = x * x;\n> +      return iy & 0x80000000 ? 1 / x2 : x2;\n> +    }\n> +\n> +  /* Return x for convenience, but make sure result is never used.  */\n> +  return x;\n> +}\n> +\n> +/* Special case function wrapper.  
*/\n> +static float32x4_t VPCS_ATTR NOINLINE\n> +special_case (float32x4_t x, float32x4_t y, float32x4_t ret, uint32x4_t cmp)\n> +{\n> +  return v_call2_f32 (powrf_specialcase, x, y, ret, cmp);\n> +}\n> +\n> +/* Power implementation for x containing negative or subnormal lanes.  */\n> +static inline float32x4_t\n> +v_powrf_x_is_neg_or_sub (float32x4_t x, float32x4_t y, const struct data *d)\n> +{\n> +  uint32x4_t xsmall = vcaltq_f32 (x, v_f32 (0x1p-126f));\n> +\n> +  /* Normalize subnormals.  */\n> +  float32x4_t a = vabsq_f32 (x);\n> +  uint32x4_t ia_norm = vreinterpretq_u32_f32 (vmulq_f32 (a, d->norm));\n> +  ia_norm = vsubq_u32 (ia_norm, d->subnormal_bias);\n> +  a = vbslq_f32 (xsmall, vreinterpretq_f32_u32 (ia_norm), a);\n> +\n> +  /* Evaluate exp (y * log(x)) using |x| and sign bias correction.  */\n> +  float32x4_t ret = v_powrf_core (a, y, d);\n> +\n> +  /* Cases of finite y and finite negative x.  */\n> +  uint32x4_t xisneg = vcltzq_f32 (x);\n> +  return vbslq_f32 (xisneg, d->nan, ret);\n> +}\n> +\n> +/* Implementation of AdvSIMD powrf.\n> +\n> +     powr(x,y) := exp(y * log (x))\n> +\n> +   This means powr(x,y) core computation matches that of pow(x,y)\n> +   but powr returns NaN for negative x even if y is an integer.\n> +\n> +   Maximum measured error is 2.57 ULPs:\n> +   V_NAME_F2 (powr) (0x1.031706p+0, 0x1.ce2ec2p+12)\n> +     got 0x1.fff868p+127\n> +    want 0x1.fff862p+127.  */\n> +float32x4_t VPCS_ATTR NOINLINE V_NAME_F2 (powr) (float32x4_t x, float32x4_t y)\n> +{\n> +  const struct data *d = ptr_barrier (&data);\n> +\n> +  /* Special cases of x or y: zero, inf and nan.  */\n> +  uint32x4_t ix = vreinterpretq_u32_f32 (x);\n> +  uint32x4_t iy = vreinterpretq_u32_f32 (y);\n> +  uint32x4_t xspecial = v_zeroinfnan (d, ix);\n> +  uint32x4_t yspecial = v_zeroinfnan (d, iy);\n> +  uint32x4_t cmp = vorrq_u32 (xspecial, yspecial);\n> +\n> +  /* Evaluate pow(x, y) for x containing negative or subnormal lanes.  
*/\n> +  uint32x4_t x_is_neg_or_sub = vcltq_f32 (x, v_f32 (0x1p-126f));\n> +  if (__glibc_unlikely (v_any_u32 (x_is_neg_or_sub)))\n> +    {\n> +      float32x4_t ret = v_powrf_x_is_neg_or_sub (x, y, d);\n> +      if (__glibc_unlikely (v_any_u32 (cmp)))\n> +\treturn special_case (x, y, ret, cmp);\n> +      return ret;\n> +    }\n> +\n> +  /* Else evaluate pow(x, y) for normal and positive x only.  */\n> +  if (__glibc_unlikely (v_any_u32 (cmp)))\n> +    return special_case (x, y, v_powrf_core (x, y, d), cmp);\n> +  return v_powrf_core (x, y, d);\n> +}\n> +\n> +libmvec_hidden_def (V_NAME_F2 (powr))\n> +HALF_WIDTH_ALIAS_F2 (powr)\n> diff --git a/sysdeps/aarch64/fpu/powrf_sve.c b/sysdeps/aarch64/fpu/powrf_sve.c\n> new file mode 100644\n> index 0000000000..32891be0a0\n> --- /dev/null\n> +++ b/sysdeps/aarch64/fpu/powrf_sve.c\n> @@ -0,0 +1,135 @@\n> +/* Single-precision vector (SVE) powr function\n> +\n> +   Copyright (C) 2026 Free Software Foundation, Inc.\n> +   This file is part of the GNU C Library.\n> +\n> +   The GNU C Library is free software; you can redistribute it and/or\n> +   modify it under the terms of the GNU Lesser General Public\n> +   License as published by the Free Software Foundation; either\n> +   version 2.1 of the License, or (at your option) any later version.\n> +\n> +   The GNU C Library is distributed in the hope that it will be useful,\n> +   but WITHOUT ANY WARRANTY; without even the implied warranty of\n> +   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU\n> +   Lesser General Public License for more details.\n> +\n> +   You should have received a copy of the GNU Lesser General Public\n> +   License along with the GNU C Library; if not, see\n> +   <https://www.gnu.org/licenses/>.  */\n> +\n> +#include \"flt-32/math_config.h\"\n> +#include \"sv_math.h\"\n> +\n> +#define WANT_SV_POWF_SIGN_BIAS 0\n> +#include \"sv_powf_inline.h\"\n> +\n> +/* A scalar subroutine used to fix main powrf special cases.  
*/\n> +static inline float\n> +powrf_specialcase (float x, float y)\n> +{\n> +  uint32_t ix = asuint (x);\n> +  uint32_t iy = asuint (y);\n> +  /* |y| is 0, Inf or NaN.  */\n> +  if (__glibc_unlikely (zeroinfnan (iy)))\n> +    {\n> +      /* |x| or |y| is NaN.  */\n> +      if (2 * ix > 2u * 0x7f800000 || 2 * iy > 2u * 0x7f800000)\n> +\treturn __builtin_nanf (\"\");\n> +      /* |y| = 0.  */\n> +      if (2 * iy == 0)\n> +\t{\n> +\t  /* |x| = 0 or Inf.  */\n> +\t  if ((2 * ix == 0) || (2 * ix == 2u * 0x7f800000))\n> +\t    return __builtin_nanf (\"\");\n> +\t  /* x is finite.  */\n> +\t  return 1.0f;\n> +\t}\n> +      /* |y| = Inf and x = 1.0.  */\n> +      if (ix == 0x3f800000)\n> +\treturn __builtin_nanf (\"\");\n> +      /* |x| < 1 and y = Inf or |x| > 1 and y = -Inf.  */\n> +      if ((2 * ix < 2 * 0x3f800000) == !(iy & 0x80000000))\n> +\treturn 0.0f;\n> +      /* |y| = Inf and previous conditions not met.  */\n> +      return y * y;\n> +    }\n> +  /* x is 0, Inf or NaN. Negative x are handled in the core.  */\n> +  if (__glibc_unlikely (zeroinfnan (ix)))\n> +    {\n> +      float x2 = x * x;\n> +      return iy & 0x80000000 ? 1 / x2 : x2;\n> +    }\n> +  /* Return x for convenience, but make sure result is never used.  */\n> +  return x;\n> +}\n> +\n> +/* Scalar fallback for special case routines with custom signature.  */\n> +static svfloat32_t NOINLINE\n> +sv_call_powrf_sc (svfloat32_t x1, svfloat32_t x2, svfloat32_t y, svbool_t cmp)\n> +{\n> +  return sv_call2_f32 (powrf_specialcase, x1, x2, y, cmp);\n> +}\n> +\n> +/* Implementation of SVE powrf.\n> +\n> +   Provides the same accuracy as AdvSIMD powf and powrf, since it relies on the\n> +   same algorithm.\n> +\n> +   Maximum measured error is 2.57 ULPs:\n> +   SV_NAME_F2 (powr) (0x1.031706p+0, 0x1.ce2ec2p+12)\n> +     got 0x1.fff868p+127\n> +    want 0x1.fff862p+127.  
*/\n> +svfloat32_t SV_NAME_F2 (powr) (svfloat32_t x, svfloat32_t y, const svbool_t pg)\n> +{\n> +  const struct data *d = ptr_barrier (&data);\n> +\n> +  svuint32_t vix = svreinterpret_u32 (x);\n> +  svuint32_t viy = svreinterpret_u32 (y);\n> +\n> +  svbool_t xpos = svcmpge (pg, x, sv_f32 (0.0f));\n> +\n> +  /* Special cases of x or y: zero, inf and nan.  */\n> +  svbool_t xspecial = sv_zeroinfnan (xpos, vix);\n> +  svbool_t yspecial = sv_zeroinfnan (xpos, viy);\n> +  svbool_t cmp = svorr_z (xpos, xspecial, yspecial);\n> +\n> +  /* Cases of subnormal x: |x| < 0x1p-126.  */\n> +  svbool_t x_is_subnormal = svaclt (xpos, x, d->small_bound);\n> +  if (__glibc_unlikely (svptest_any (xpos, x_is_subnormal)))\n> +    {\n> +      /* Normalize subnormal x so exponent becomes negative.  */\n> +      vix = svreinterpret_u32 (svmul_m (x_is_subnormal, x, 0x1p23f));\n> +      vix = svsub_m (x_is_subnormal, vix, d->subnormal_bias);\n> +    }\n> +\n> +  /* Part of core computation carried in working precision.  */\n> +  svuint32_t tmp = svsub_x (xpos, vix, d->off);\n> +  svuint32_t i\n> +      = svand_x (xpos, svlsr_x (xpos, tmp, (23 - V_POWF_LOG2_TABLE_BITS)),\n> +\t\t V_POWF_LOG2_N - 1);\n> +  svuint32_t top = svand_x (xpos, tmp, 0xff800000);\n> +  svuint32_t iz = svsub_x (xpos, vix, top);\n> +  svint32_t k\n> +      = svasr_x (xpos, svreinterpret_s32 (top), (23 - V_POWF_EXP2_TABLE_BITS));\n> +\n> +  /* Compute core in extended precision and return intermediate ylogx results\n> +     to handle cases of underflow and overflow in exp.  */\n> +  svfloat32_t ylogx;\n> +  /* Pass a dummy sign_bias so we can re-use powf core.\n> +     The core is simplified by setting WANT_SV_POWF_SIGN_BIAS = 0.  */\n> +  svfloat32_t ret = sv_powf_core (xpos, i, iz, k, y, sv_u32 (0), &ylogx, d);\n> +\n> +  /* Handle exp special cases of underflow and overflow.  
*/\n> +  svbool_t no_uflow = svcmpgt (xpos, ylogx, d->uflow_bound);\n> +  svbool_t oflow = svcmpgt (xpos, ylogx, d->oflow_bound);\n> +  svfloat32_t ret_flow = svdup_n_f32_z (no_uflow, INFINITY);\n> +  ret = svsel (svorn_z (xpos, oflow, no_uflow), ret_flow, ret);\n> +\n> +  /* Cases of negative x.  */\n> +  ret = svsel (xpos, ret, sv_f32 (__builtin_nanf (\"\")));\n> +\n> +  if (__glibc_unlikely (svptest_any (cmp, cmp)))\n> +    return sv_call_powrf_sc (x, y, ret, cmp);\n> +\n> +  return ret;\n> +}\n> diff --git a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c\n> index 19adf79fde..74980a7f6f 100644\n> --- a/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c\n> +++ b/sysdeps/aarch64/fpu/test-double-advsimd-wrappers.c\n> @@ -54,6 +54,7 @@ VPCS_VECTOR_WRAPPER (log1p_advsimd, _ZGVnN2v_log1p)\n>  VPCS_VECTOR_WRAPPER (log2_advsimd, _ZGVnN2v_log2)\n>  VPCS_VECTOR_WRAPPER (log2p1_advsimd, _ZGVnN2v_log2p1)\n>  VPCS_VECTOR_WRAPPER_ff (pow_advsimd, _ZGVnN2vv_pow)\n> +VPCS_VECTOR_WRAPPER_ff (powr_advsimd, _ZGVnN2vv_powr)\n>  VPCS_VECTOR_WRAPPER (rsqrt_advsimd, _ZGVnN2v_rsqrt)\n>  VPCS_VECTOR_WRAPPER (sin_advsimd, _ZGVnN2v_sin)\n>  VPCS_VECTOR_WRAPPER (sinh_advsimd, _ZGVnN2v_sinh)\n> diff --git a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c\n> index 86e73756a2..e6e3d652c9 100644\n> --- a/sysdeps/aarch64/fpu/test-double-sve-wrappers.c\n> +++ b/sysdeps/aarch64/fpu/test-double-sve-wrappers.c\n> @@ -73,6 +73,7 @@ SVE_VECTOR_WRAPPER (log1p_sve, _ZGVsMxv_log1p)\n>  SVE_VECTOR_WRAPPER (log2_sve, _ZGVsMxv_log2)\n>  SVE_VECTOR_WRAPPER (log2p1_sve, _ZGVsMxv_log2p1)\n>  SVE_VECTOR_WRAPPER_ff (pow_sve, _ZGVsMxvv_pow)\n> +SVE_VECTOR_WRAPPER_ff (powr_sve, _ZGVsMxvv_powr)\n>  SVE_VECTOR_WRAPPER (rsqrt_sve, _ZGVsMxv_rsqrt)\n>  SVE_VECTOR_WRAPPER (sin_sve, _ZGVsMxv_sin)\n>  SVE_VECTOR_WRAPPER (sinh_sve, _ZGVsMxv_sinh)\n> diff --git a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c 
b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c\n> index 3bd3f5c950..223e491007 100644\n> --- a/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c\n> +++ b/sysdeps/aarch64/fpu/test-float-advsimd-wrappers.c\n> @@ -54,6 +54,7 @@ VPCS_VECTOR_WRAPPER (log1pf_advsimd, _ZGVnN4v_log1pf)\n>  VPCS_VECTOR_WRAPPER (log2f_advsimd, _ZGVnN4v_log2f)\n>  VPCS_VECTOR_WRAPPER (log2p1f_advsimd, _ZGVnN4v_log2p1f)\n>  VPCS_VECTOR_WRAPPER_ff (powf_advsimd, _ZGVnN4vv_powf)\n> +VPCS_VECTOR_WRAPPER_ff (powrf_advsimd, _ZGVnN4vv_powrf)\n>  VPCS_VECTOR_WRAPPER (rsqrtf_advsimd, _ZGVnN4v_rsqrtf)\n>  VPCS_VECTOR_WRAPPER (sinf_advsimd, _ZGVnN4v_sinf)\n>  VPCS_VECTOR_WRAPPER (sinhf_advsimd, _ZGVnN4v_sinhf)\n> diff --git a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c\n> index 0d9a7e5b93..2a01f93b5a 100644\n> --- a/sysdeps/aarch64/fpu/test-float-sve-wrappers.c\n> +++ b/sysdeps/aarch64/fpu/test-float-sve-wrappers.c\n> @@ -73,6 +73,7 @@ SVE_VECTOR_WRAPPER (log1pf_sve, _ZGVsMxv_log1pf)\n>  SVE_VECTOR_WRAPPER (log2f_sve, _ZGVsMxv_log2f)\n>  SVE_VECTOR_WRAPPER (log2p1f_sve, _ZGVsMxv_log2p1f)\n>  SVE_VECTOR_WRAPPER_ff (powf_sve, _ZGVsMxvv_powf)\n> +SVE_VECTOR_WRAPPER_ff (powrf_sve, _ZGVsMxvv_powrf)\n>  SVE_VECTOR_WRAPPER (rsqrtf_sve, _ZGVsMxv_rsqrtf)\n>  SVE_VECTOR_WRAPPER (sinf_sve, _ZGVsMxv_sinf)\n>  SVE_VECTOR_WRAPPER (sinhf_sve, _ZGVsMxv_sinhf)\n> diff --git a/sysdeps/aarch64/fpu/v_powrf_inline.h b/sysdeps/aarch64/fpu/v_powrf_inline.h\n> index 0168c32338..ea87a2b85b 100644\n> --- a/sysdeps/aarch64/fpu/v_powrf_inline.h\n> +++ b/sysdeps/aarch64/fpu/v_powrf_inline.h\n> @@ -1,6 +1,6 @@\n>  /* Helper for AdvSIMD single-precision powr\n>  \n> -   Copyright (C) 2025 Free Software Foundation, Inc.\n> +   Copyright (C) 2025-2026 Free Software Foundation, Inc.\n>     This file is part of the GNU C Library.\n>  \n>     The GNU C Library is free software; you can redistribute it and/or\n> @@ -17,6 +17,8 @@\n>     License along with the GNU C Library; if not, 
see\n>    <https://www.gnu.org/licenses/>.  */\n>  \n> +#include \"powf_common.h\"\n> +\n>  #define Log2IdxMask (V_POWF_LOG2_N - 1)\n>  #define Exp2IdxMask (V_POWF_EXP2_N - 1)\n>  #define Scale ((double) V_POWF_EXP2_N)\n> diff --git a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist\n> index 6d13d53613..638e34e500 100644\n> --- a/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist\n> +++ b/sysdeps/unix/sysv/linux/aarch64/libmvec.abilist\n> @@ -193,3 +193,8 @@ GLIBC_2.43 _ZGVsMxv_log2p1 F\n>  GLIBC_2.43 _ZGVsMxv_log2p1f F\n>  GLIBC_2.43 _ZGVsMxv_rsqrt F\n>  GLIBC_2.43 _ZGVsMxv_rsqrtf F\n> +GLIBC_2.44 _ZGVnN2vv_powr F\n> +GLIBC_2.44 _ZGVnN2vv_powrf F\n> +GLIBC_2.44 _ZGVnN4vv_powrf F\n> +GLIBC_2.44 _ZGVsMxvv_powr F\n> +GLIBC_2.44 _ZGVsMxvv_powrf F"}]