{"id":2235054,"url":"http://patchwork.ozlabs.org/api/1.2/patches/2235054/?format=json","web_url":"http://patchwork.ozlabs.org/project/glibc/patch/20260508132211.3504357-3-yury.khrustalev@arm.com/","project":{"id":41,"url":"http://patchwork.ozlabs.org/api/1.2/projects/41/?format=json","name":"GNU C Library","link_name":"glibc","list_id":"libc-alpha.sourceware.org","list_email":"libc-alpha@sourceware.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20260508132211.3504357-3-yury.khrustalev@arm.com>","list_archive_url":null,"date":"2026-05-08T13:22:10","name":"[2/3] malloc: Remove code conditional on USE_MTAG","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"ac6a24bb3655ce647b9f9b8acf6a6ffd4342f1d1","submitter":{"id":88214,"url":"http://patchwork.ozlabs.org/api/1.2/people/88214/?format=json","name":"Yury Khrustalev","email":"yury.khrustalev@arm.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/glibc/patch/20260508132211.3504357-3-yury.khrustalev@arm.com/mbox/","series":[{"id":503390,"url":"http://patchwork.ozlabs.org/api/1.2/series/503390/?format=json","web_url":"http://patchwork.ozlabs.org/project/glibc/list/?series=503390","date":"2026-05-08T13:22:08","name":"Remove broken memory tagging in malloc","version":1,"mbox":"http://patchwork.ozlabs.org/series/503390/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2235054/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2235054/checks/","tags":{},"related":[],"headers":{"Return-Path":"<libc-alpha-bounces~incoming=patchwork.ozlabs.org@sourceware.org>","X-Original-To":["incoming@patchwork.ozlabs.org","libc-alpha@sourceware.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","libc-alpha@sourceware.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n unprotected) header.d=arm.com 
header.i=@arm.com header.a=rsa-sha256\n header.s=foss header.b=H0clGrwL;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=sourceware.org\n (client-ip=2620:52:6:3111::32; helo=vm01.sourceware.org;\n envelope-from=libc-alpha-bounces~incoming=patchwork.ozlabs.org@sourceware.org;\n receiver=patchwork.ozlabs.org)","sourceware.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key,\n unprotected) header.d=arm.com header.i=@arm.com header.a=rsa-sha256\n header.s=foss header.b=H0clGrwL","sourceware.org;\n dmarc=pass (p=none dis=none) header.from=arm.com","sourceware.org; spf=pass smtp.mailfrom=arm.com","sourceware.org;\n arc=none smtp.remote-ip=217.140.110.172"],"Received":["from vm01.sourceware.org (vm01.sourceware.org\n [IPv6:2620:52:6:3111::32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBqcL0LKYz1yKd\n\tfor <incoming@patchwork.ozlabs.org>; Fri, 08 May 2026 23:25:02 +1000 (AEST)","from vm01.sourceware.org (localhost [IPv6:::1])\n\tby sourceware.org (Postfix) with ESMTP id DA0CB4BA2E12\n\tfor <incoming@patchwork.ozlabs.org>; Fri,  8 May 2026 13:24:59 +0000 (GMT)","from foss.arm.com (foss.arm.com [217.140.110.172])\n by sourceware.org (Postfix) with ESMTP id E516B4BA2E3A\n for <libc-alpha@sourceware.org>; Fri,  8 May 2026 13:22:25 +0000 (GMT)","from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])\n by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D5F6532C9;\n Fri,  8 May 2026 06:22:19 -0700 (PDT)","from fdebian.localdomain (G7GWP2TF97.cambridge.arm.com [10.1.34.30])\n by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id\n EF9933F836; Fri,  8 May 2026 06:22:23 -0700 (PDT)"],"DKIM-Filter":["OpenDKIM Filter v2.11.0 sourceware.org DA0CB4BA2E12","OpenDKIM Filter v2.11.0 sourceware.org 
E516B4BA2E3A"],"DMARC-Filter":"OpenDMARC Filter v1.4.2 sourceware.org E516B4BA2E3A","ARC-Filter":"OpenARC Filter v1.0.0 sourceware.org E516B4BA2E3A","ARC-Seal":"i=1; a=rsa-sha256; d=sourceware.org; s=key; t=1778246546; cv=none;\n b=fpm97ZKS3lDcGXPac/1Gp5/TRnNK5X4vGLgHdyG3wt0kp2NRlXCCCP7pQ4Kms6ADm41yJ9wVpEVwiWGYQSoKCNPdQjKCjgyIWaGK/X9GowdTWNA/iePZgL4j1CnmBt1gue+/4SIlurILwuaD2GMsz0OKt+uqoFDJdn1Zp+xoIas=","ARC-Message-Signature":"i=1; a=rsa-sha256; d=sourceware.org; s=key;\n t=1778246546; c=relaxed/simple;\n bh=MRQ61cgMVijPmp9u0WniUEXOs/gEqlO7wRoGer7QORE=;\n h=DKIM-Signature:From:To:Subject:Date:Message-ID:MIME-Version;\n b=G0Tw/Up3mdj+kZGp502vMQPDIbZoqvJjtQ38MeSJE1neCqiwzIzxQm+darq6Mf/ZzllrPonwybzEH3h0EBF29nrUiJJimE0KLA4FeCw26v1szTvs70kqDsVEi3YEEXmgo9CU4INhZBaY6V5LmKxGo5viBzRRaIbCVEC4M3On2pU=","ARC-Authentication-Results":"i=1; sourceware.org;\n dkim=pass (1024-bit key, unprotected)\n header.d=arm.com header.i=@arm.com header.a=rsa-sha256 header.s=foss\n header.b=H0clGrwL","DKIM-Signature":"v=1; a=rsa-sha256; c=simple/simple; d=arm.com; s=foss;\n t=1778246545; bh=MRQ61cgMVijPmp9u0WniUEXOs/gEqlO7wRoGer7QORE=;\n h=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n b=H0clGrwLYLZSbWfQEJZBmc5tKHLuyHT77uqPIjrtnHtVEQyVWgCe55GRKe7qESB97\n Rw/yMLOeHZ4wHY85mdlnGWsJ/4esVcdw3c6y7IZG0FqIPrBD2+AMOhwREsGiiJAyJW\n kzGzcK/Vv6A7NgY2qf366NVBrpN2ADVYwVb4RqFY=","From":"Yury Khrustalev <yury.khrustalev@arm.com>","To":"libc-alpha@sourceware.org","Cc":"DJ Delorie <dj@redhat.com>,\n Adhemerval Zanella <adhemerval.zanella@linaro.org>,\n Andreas Schwab <schwab@suse.de>, Wilco Dijkstra <wilco.dijkstra@arm.com>,\n Florian Weimer <fweimer@redhat.com>","Subject":"[PATCH 2/3] malloc: Remove code conditional on USE_MTAG","Date":"Fri,  8 May 2026 14:22:10 +0100","Message-ID":"<20260508132211.3504357-3-yury.khrustalev@arm.com>","X-Mailer":"git-send-email 
2.47.3","In-Reply-To":"<20260508132211.3504357-1-yury.khrustalev@arm.com>","References":"<20260508132211.3504357-1-yury.khrustalev@arm.com>","MIME-Version":"1.0","Content-Transfer-Encoding":"8bit","X-BeenThere":"libc-alpha@sourceware.org","X-Mailman-Version":"2.1.30","Precedence":"list","List-Id":"Libc-alpha mailing list <libc-alpha.sourceware.org>","List-Unsubscribe":"<https://sourceware.org/mailman/options/libc-alpha>,\n <mailto:libc-alpha-request@sourceware.org?subject=unsubscribe>","List-Archive":"<https://sourceware.org/pipermail/libc-alpha/>","List-Post":"<mailto:libc-alpha@sourceware.org>","List-Help":"<mailto:libc-alpha-request@sourceware.org?subject=help>","List-Subscribe":"<https://sourceware.org/mailman/listinfo/libc-alpha>,\n <mailto:libc-alpha-request@sourceware.org?subject=subscribe>","Errors-To":"libc-alpha-bounces~incoming=patchwork.ozlabs.org@sourceware.org"},"content":"Further malloc refactoring related to memory tagging.\n\nRemove code that was only compiled when the macro USE_MTAG was\ndefined, except for the AArch64-specific assembly MTE code, which\nwill be compiled unconditionally from now on.\n\nAs a result, rename 'mtag_mmap_flags' to 'extra_mmap_prot', which\nis now always defined. The new name reflects that this value is\nused as part of the PROT argument to mmap syscalls rather than as\npart of the flags.\n\nRemove 'mtag_enabled', which would now be compile-time false. 
Also\nremove any code that would never be compiled when 'mtag_enabled'\nis false.\n---\n malloc/arena.c                                | 18 +----\n malloc/malloc-check.c                         | 10 ---\n malloc/malloc.c                               | 73 +------------------\n sysdeps/aarch64/__mtag_tag_region.S           |  3 -\n sysdeps/aarch64/__mtag_tag_zero_region.S      |  3 -\n sysdeps/aarch64/cpu-features.h                |  3 +-\n sysdeps/aarch64/libc-mtag.h                   |  5 +-\n .../unix/sysv/linux/aarch64/cpu-features.c    | 28 -------\n 8 files changed, 8 insertions(+), 135 deletions(-)","diff":"diff --git a/malloc/arena.c b/malloc/arena.c\nindex ddde32c712..023cb3ba06 100644\n--- a/malloc/arena.c\n+++ b/malloc/arena.c\n@@ -252,20 +252,6 @@ __ptmalloc_init (void)\n   tcache_key_initialize ();\n #endif\n \n-#ifdef USE_MTAG\n-  if ((TUNABLE_GET_FULL (glibc, mem, tagging, int32_t, NULL) & 1) != 0)\n-    {\n-      /* If the tunable says that we should be using tagged memory\n-\t and that morecore does not support tagged regions, then\n-\t disable it.  */\n-      if (__MTAG_SBRK_UNTAGGED)\n-\t__always_fail_morecore = true;\n-\n-      mtag_enabled = true;\n-      mtag_mmap_flags = __MTAG_MMAP_FLAGS;\n-    }\n-#endif\n-\n #if defined SHARED && IS_IN (libc)\n   /* In case this libc copy is in a non-default namespace, never use\n      brk.  Likewise if dlopened from statically linked program.  
The\n@@ -417,7 +403,7 @@ alloc_new_heap  (size_t size, size_t top_pad, size_t pagesize,\n             }\n         }\n     }\n-  if (__mprotect (p2, size, mtag_mmap_flags | PROT_READ | PROT_WRITE) != 0)\n+  if (__mprotect (p2, size, extra_mmap_prot | PROT_READ | PROT_WRITE) != 0)\n     {\n       __munmap (p2, max_size);\n       return NULL;\n@@ -471,7 +457,7 @@ grow_heap (heap_info *h, long diff)\n     {\n       if (__mprotect ((char *) h + h->mprotect_size,\n                       (unsigned long) new_size - h->mprotect_size,\n-                      mtag_mmap_flags | PROT_READ | PROT_WRITE) != 0)\n+                      extra_mmap_prot | PROT_READ | PROT_WRITE) != 0)\n         return -2;\n \n       h->mprotect_size = new_size;\ndiff --git a/malloc/malloc-check.c b/malloc/malloc-check.c\nindex 49b623df12..ae5025d69a 100644\n--- a/malloc/malloc-check.c\n+++ b/malloc/malloc-check.c\n@@ -217,11 +217,6 @@ free_check (void *mem)\n \n   int err = errno;\n \n-  /* Quickly check that the freed pointer matches the tag for the memory.\n-     This gives a useful double-free detection.  */\n-  if (__glibc_unlikely (mtag_enabled))\n-    *(volatile char *)mem;\n-\n   __libc_lock_lock (main_arena.mutex);\n   p = mem2chunk_check (mem, NULL);\n   if (!p)\n@@ -263,11 +258,6 @@ realloc_check (void *oldmem, size_t bytes)\n       return NULL;\n     }\n \n-  /* Quickly check that the freed pointer matches the tag for the memory.\n-     This gives a useful double-free detection.  
*/\n-  if (__glibc_unlikely (mtag_enabled))\n-    *(volatile char *)oldmem;\n-\n   __libc_lock_lock (main_arena.mutex);\n   const mchunkptr oldp = mem2chunk_check (oldmem, &magic_p);\n   __libc_lock_unlock (main_arena.mutex);\ndiff --git a/malloc/malloc.c b/malloc/malloc.c\nindex 57b58382b1..d273c28501 100644\n--- a/malloc/malloc.c\n+++ b/malloc/malloc.c\n@@ -405,27 +405,17 @@ verify (PTRDIFF_MAX <= SIZE_MAX / 2);\n    tagging is not enabled, it simply returns the original pointer.\n */\n \n-#ifdef USE_MTAG\n-static bool mtag_enabled = false;\n-static int mtag_mmap_flags = 0;\n-#else\n-# define mtag_enabled false\n-# define mtag_mmap_flags 0\n-#endif\n+static int extra_mmap_prot = 0;\n \n static __always_inline void *\n tag_region (void *ptr, size_t size)\n {\n-  if (__glibc_unlikely (mtag_enabled))\n-    return __libc_mtag_tag_region (ptr, size);\n   return ptr;\n }\n \n static __always_inline void *\n tag_new_zero_region (void *ptr, size_t size)\n {\n-  if (__glibc_unlikely (mtag_enabled))\n-    return __libc_mtag_tag_zero_region (__libc_mtag_new_tag (ptr), size);\n   return memset (ptr, 0, size);\n }\n \n@@ -436,8 +426,6 @@ tag_new_usable (void *ptr);\n static __always_inline void *\n tag_at (void *ptr)\n {\n-  if (__glibc_unlikely (mtag_enabled))\n-    return __libc_mtag_address_get_tag (ptr);\n   return ptr;\n }\n \n@@ -1259,23 +1247,6 @@ checked_request2size (size_t req) __nonnull (1)\n {\n   if (__glibc_unlikely (req > PTRDIFF_MAX))\n     return SIZE_MAX;\n-\n-  /* When using tagged memory, we cannot share the end of the user\n-     block with the header for the next chunk, so ensure that we\n-     allocate blocks that are rounded up to the granule size.  Take\n-     care not to overflow from close to MAX_SIZE_T to a small\n-     number.  Ideally, this would be part of request2size(), but that\n-     must be a macro that produces a compile time constant if passed\n-     a constant literal.  
*/\n-  if (__glibc_unlikely (mtag_enabled))\n-    {\n-      /* Ensure this is not evaluated if !mtag_enabled, see gcc PR 99551.  */\n-      asm (\"\");\n-\n-      req = (req + (__MTAG_GRANULE_SIZE - 1)) &\n-\t    ~(size_t)(__MTAG_GRANULE_SIZE - 1);\n-    }\n-\n   return request2size (req);\n }\n \n@@ -1378,25 +1349,11 @@ checked_request2size (size_t req) __nonnull (1)\n \n /* This is the size of the real usable data in the chunk.  Not valid for\n    dumped heap chunks.  */\n-#define memsize(p)                                                    \\\n-  (__MTAG_GRANULE_SIZE > SIZE_SZ && __glibc_unlikely (mtag_enabled) ? \\\n-    chunksize (p) - CHUNK_HDR_SZ :                                    \\\n-    chunksize (p) - CHUNK_HDR_SZ + SIZE_SZ)\n-\n-/* If memory tagging is enabled the layout changes to accommodate the granule\n-   size, this is wasteful for small allocations so not done by default.\n-   Both the chunk header and user data has to be granule aligned.  */\n-_Static_assert (__MTAG_GRANULE_SIZE <= CHUNK_HDR_SZ,\n-\t\t\"memory tagging is not supported with large granule.\");\n+#define memsize(p) (chunksize (p) - CHUNK_HDR_SZ + SIZE_SZ)\n \n static __always_inline void *\n tag_new_usable (void *ptr)\n {\n-  if (__glibc_unlikely (mtag_enabled) && ptr)\n-    {\n-      mchunkptr cp = mem2chunk(ptr);\n-      ptr = __libc_mtag_tag_region (__libc_mtag_new_tag (ptr), memsize (cp));\n-    }\n   return ptr;\n }\n \n@@ -2233,7 +2190,7 @@ sysmalloc_mmap (INTERNAL_SIZE_T nb, size_t pagesize, int extra_flags)\n   size_t size = ALIGN_UP (nb + padding + CHUNK_HDR_SZ, pagesize);\n \n   char *mm = (char *) MMAP (NULL, size,\n-\t\t\t    mtag_mmap_flags | PROT_READ | PROT_WRITE,\n+\t\t\t    extra_mmap_prot | PROT_READ | PROT_WRITE,\n \t\t\t    extra_flags);\n   if (mm == MAP_FAILED)\n     return mm;\n@@ -2274,7 +2231,7 @@ sysmalloc_mmap_fallback (size_t *s, size_t size, size_t minsize,\n     size = minsize;\n \n   char *mbrk = (char *) (MMAP (NULL, size,\n-\t\t\t       
mtag_mmap_flags | PROT_READ | PROT_WRITE,\n+\t\t\t       extra_mmap_prot | PROT_READ | PROT_WRITE,\n \t\t\t       extra_flags));\n   if (mbrk == MAP_FAILED)\n     return MAP_FAILED;\n@@ -3301,11 +3258,6 @@ __libc_free (void *mem)\n   if (mem == NULL)                              /* free(0) has no effect */\n     return;\n \n-  /* Quickly check that the freed pointer matches the tag for the memory.\n-     This gives a useful double-free detection.  */\n-  if (__glibc_unlikely (mtag_enabled))\n-    *(volatile char *)mem;\n-\n   p = mem2chunk (mem);\n \n   /* Mark the chunk as belonging to the library again.  */\n@@ -3373,11 +3325,6 @@ __libc_realloc (void *oldmem, size_t bytes)\n     }\n #endif\n \n-  /* Perform a quick check to ensure that the pointer's tag matches the\n-     memory's tag.  */\n-  if (__glibc_unlikely (mtag_enabled))\n-    *(volatile char*) oldmem;\n-\n   /* chunk corresponding to oldmem */\n   const mchunkptr oldp = mem2chunk (oldmem);\n \n@@ -3673,12 +3620,6 @@ __libc_calloc2 (size_t sz)\n \n   p = mem2chunk (mem);\n \n-  /* If we are using memory tagging, then we need to set the tags\n-     regardless of MORECORE_CLEARS, so we zero the whole block while\n-     doing so.  
*/\n-  if (__glibc_unlikely (mtag_enabled))\n-    return tag_new_zero_region (mem, memsize (p));\n-\n   csz = chunksize (p);\n \n   /* Two optional cases in which clearing not necessary */\n@@ -3725,9 +3666,6 @@ __libc_calloc (size_t n, size_t elem_size)\n \t  if (tcache->entries[tc_idx] != NULL)\n \t    {\n \t      void *mem = tcache_get (tc_idx);\n-\t      if (__glibc_unlikely (mtag_enabled))\n-\t\treturn tag_new_zero_region (mem, memsize (mem2chunk (mem)));\n-\n \t      return clear_memory ((INTERNAL_SIZE_T *) mem, tidx2usize (tc_idx));\n \t    }\n \t}\n@@ -3737,9 +3675,6 @@ __libc_calloc (size_t n, size_t elem_size)\n \t  void *mem = tcache_get_large (tc_idx, nb);\n \t  if (mem != NULL)\n \t    {\n-\t      if (__glibc_unlikely (mtag_enabled))\n-\t        return tag_new_zero_region (mem, memsize (mem2chunk (mem)));\n-\n \t      return memset (mem, 0, memsize (mem2chunk (mem)));\n \t    }\n \t}\ndiff --git a/sysdeps/aarch64/__mtag_tag_region.S b/sysdeps/aarch64/__mtag_tag_region.S\nindex bad3193bfe..85e330812e 100644\n--- a/sysdeps/aarch64/__mtag_tag_region.S\n+++ b/sysdeps/aarch64/__mtag_tag_region.S\n@@ -18,8 +18,6 @@\n \n #include <sysdep.h>\n \n-#ifdef USE_MTAG\n-\n /* Assumptions:\n  *\n  * ARMv8-a, AArch64, MTE, LP64 ABI.\n@@ -107,4 +105,3 @@ L(no_zva_loop):\n \tret\n \n END (__libc_mtag_tag_region)\n-#endif /* USE_MTAG */\ndiff --git a/sysdeps/aarch64/__mtag_tag_zero_region.S b/sysdeps/aarch64/__mtag_tag_zero_region.S\nindex 3bc6e7301f..1a84b3e4d4 100644\n--- a/sysdeps/aarch64/__mtag_tag_zero_region.S\n+++ b/sysdeps/aarch64/__mtag_tag_zero_region.S\n@@ -18,8 +18,6 @@\n \n #include <sysdep.h>\n \n-#ifdef USE_MTAG\n-\n /* Assumptions:\n  *\n  * ARMv8-a, AArch64, MTE, LP64 ABI.\n@@ -107,4 +105,3 @@ L(no_zva_loop):\n \tret\n \n END (__libc_mtag_tag_zero_region)\n-#endif /* USE_MTAG */\ndiff --git a/sysdeps/aarch64/cpu-features.h b/sysdeps/aarch64/cpu-features.h\nindex d6367a4596..f414060066 100644\n--- a/sysdeps/aarch64/cpu-features.h\n+++ 
b/sysdeps/aarch64/cpu-features.h\n@@ -64,8 +64,7 @@ struct cpu_features\n   uint64_t midr_el1;\n   unsigned zva_size;\n   bool bti;\n-  /* Currently, the GLIBC memory tagging tunable only defines 8 bits.  */\n-  uint8_t mte_state;\n+  uint8_t reserved;\n   bool sve;\n   bool unused;\n   bool mops;\ndiff --git a/sysdeps/aarch64/libc-mtag.h b/sysdeps/aarch64/libc-mtag.h\nindex 1d7368b806..663b866bf8 100644\n--- a/sysdeps/aarch64/libc-mtag.h\n+++ b/sysdeps/aarch64/libc-mtag.h\n@@ -19,10 +19,7 @@\n #ifndef _AARCH64_LIBC_MTAG_H\n #define _AARCH64_LIBC_MTAG_H 1\n \n-#ifndef USE_MTAG\n-/* Generic bindings for systems that do not support memory tagging.  */\n-#include_next \"libc-mtag.h\"\n-#else\n+#if 0\n \n /* Used to ensure additional alignment when objects need to have distinct\n    tags.  */\ndiff --git a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c\nindex 36bd72bb12..cda1f82948 100644\n--- a/sysdeps/unix/sysv/linux/aarch64/cpu-features.c\n+++ b/sysdeps/unix/sysv/linux/aarch64/cpu-features.c\n@@ -20,7 +20,6 @@\n #include <cpu-features.h>\n #include <sys/auxv.h>\n #include <elf/dl-hwcaps.h>\n-#include <sys/prctl.h>\n #include <sys/utsname.h>\n #include <dl-tunables-parse.h>\n #include <dl-symbol-redir-ifunc.h>\n@@ -96,33 +95,6 @@ init_cpu_features (struct cpu_features *cpu_features)\n   if (cpu_features->bti)\n     GLRO (dl_aarch64_bti) = TUNABLE_GET (glibc, cpu, aarch64_bti, uint64_t, 0);\n \n-  /* Setup memory tagging support if the HW and kernel support it, and if\n-     the user has requested it.  */\n-  cpu_features->mte_state = 0;\n-\n-#ifdef USE_MTAG\n-  int mte_state = TUNABLE_GET (glibc, mem, tagging, unsigned, 0);\n-  cpu_features->mte_state = (GLRO (dl_hwcap2) & HWCAP2_MTE) ? mte_state : 0;\n-  /* If we lack the MTE feature, disable the tunable, since it will\n-     otherwise cause instructions that won't run on this CPU to be used.  
*/\n-  TUNABLE_SET (glibc, mem, tagging, cpu_features->mte_state);\n-\n-  if (cpu_features->mte_state & 4)\n-    /* Enable choosing system-preferred faulting mode.  */\n-    __prctl (PR_SET_TAGGED_ADDR_CTRL,\n-\t     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | PR_MTE_TCF_ASYNC\n-\t      | MTE_ALLOWED_TAGS),\n-\t     0, 0, 0);\n-  else if (cpu_features->mte_state & 2)\n-    __prctl (PR_SET_TAGGED_ADDR_CTRL,\n-\t     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC | MTE_ALLOWED_TAGS),\n-\t     0, 0, 0);\n-  else if (cpu_features->mte_state)\n-    __prctl (PR_SET_TAGGED_ADDR_CTRL,\n-\t     (PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_ASYNC | MTE_ALLOWED_TAGS),\n-\t     0, 0, 0);\n-#endif\n-\n   /* Check if SVE is supported.  */\n   cpu_features->sve = GLRO (dl_hwcap) & HWCAP_SVE;\n \n","prefixes":["2/3"]}