{"id":2235055,"url":"http://patchwork.ozlabs.org/api/1.2/patches/2235055/?format=json","web_url":"http://patchwork.ozlabs.org/project/glibc/patch/20260508132211.3504357-4-yury.khrustalev@arm.com/","project":{"id":41,"url":"http://patchwork.ozlabs.org/api/1.2/projects/41/?format=json","name":"GNU C Library","link_name":"glibc","list_id":"libc-alpha.sourceware.org","list_email":"libc-alpha@sourceware.org","web_url":"","scm_url":"","webscm_url":"","list_archive_url":"","list_archive_url_format":"","commit_url_format":""},"msgid":"<20260508132211.3504357-4-yury.khrustalev@arm.com>","list_archive_url":null,"date":"2026-05-08T13:22:11","name":"[3/3] malloc: Remove currently broken memory tagging","commit_ref":null,"pull_url":null,"state":"new","archived":false,"hash":"256a82ef3971d06d693d374e8d61f8c072eccda9","submitter":{"id":88214,"url":"http://patchwork.ozlabs.org/api/1.2/people/88214/?format=json","name":"Yury Khrustalev","email":"yury.khrustalev@arm.com"},"delegate":null,"mbox":"http://patchwork.ozlabs.org/project/glibc/patch/20260508132211.3504357-4-yury.khrustalev@arm.com/mbox/","series":[{"id":503390,"url":"http://patchwork.ozlabs.org/api/1.2/series/503390/?format=json","web_url":"http://patchwork.ozlabs.org/project/glibc/list/?series=503390","date":"2026-05-08T13:22:08","name":"Remove broken memory tagging in malloc","version":1,"mbox":"http://patchwork.ozlabs.org/series/503390/mbox/"}],"comments":"http://patchwork.ozlabs.org/api/patches/2235055/comments/","check":"pending","checks":"http://patchwork.ozlabs.org/api/patches/2235055/checks/","tags":{},"related":[],"headers":{"Return-Path":"<libc-alpha-bounces~incoming=patchwork.ozlabs.org@sourceware.org>","X-Original-To":["incoming@patchwork.ozlabs.org","libc-alpha@sourceware.org"],"Delivered-To":["patchwork-incoming@legolas.ozlabs.org","libc-alpha@sourceware.org"],"Authentication-Results":["legolas.ozlabs.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key;\n unprotected) header.d=arm.com header.i=@arm.com header.a=rsa-sha256\n header.s=foss header.b=iKvA4HLL;\n\tdkim-atps=neutral","legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=sourceware.org\n (client-ip=38.145.34.32; helo=vm01.sourceware.org;\n envelope-from=libc-alpha-bounces~incoming=patchwork.ozlabs.org@sourceware.org;\n receiver=patchwork.ozlabs.org)","sourceware.org;\n\tdkim=fail reason=\"signature verification failed\" (1024-bit key,\n unprotected) header.d=arm.com header.i=@arm.com header.a=rsa-sha256\n header.s=foss header.b=iKvA4HLL","sourceware.org;\n dmarc=pass (p=none dis=none) header.from=arm.com","sourceware.org; spf=pass smtp.mailfrom=arm.com","sourceware.org;\n arc=none smtp.remote-ip=217.140.110.172"],"Received":["from vm01.sourceware.org (vm01.sourceware.org [38.145.34.32])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519 server-signature ECDSA (secp384r1) server-digest SHA384)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4gBqd91Qfhz1yKd\n\tfor <incoming@patchwork.ozlabs.org>; Fri, 08 May 2026 23:26:05 +1000 (AEST)","from vm01.sourceware.org (localhost [IPv6:::1])\n\tby sourceware.org (Postfix) with ESMTP id 42FD64BA5436\n\tfor <incoming@patchwork.ozlabs.org>; Fri,  8 May 2026 13:26:03 +0000 (GMT)","from foss.arm.com (foss.arm.com [217.140.110.172])\n by sourceware.org (Postfix) with ESMTP id 8BFD74BA23C1\n for <libc-alpha@sourceware.org>; Fri,  8 May 2026 13:22:27 +0000 (GMT)","from usa-sjc-imap-foss1.foss.arm.com (unknown 
From: Yury Khrustalev <yury.khrustalev@arm.com>
To: libc-alpha@sourceware.org
Cc: DJ Delorie <dj@redhat.com>, Adhemerval Zanella <adhemerval.zanella@linaro.org>,
 Andreas Schwab <schwab@suse.de>, Wilco Dijkstra <wilco.dijkstra@arm.com>,
 Florian Weimer <fweimer@redhat.com>
Subject: [PATCH 3/3] malloc: Remove currently broken memory tagging
Date: Fri, 8 May 2026 14:22:11 +0100
Message-ID: <20260508132211.3504357-4-yury.khrustalev@arm.com>
In-Reply-To: <20260508132211.3504357-1-yury.khrustalev@arm.com>
References: <20260508132211.3504357-1-yury.khrustalev@arm.com>

Remove the AArch64-specific memory tagging code, which is currently
broken, from the core malloc implementation.

This is a code clean-up: apart from removing the memory tagging
support, there is no functional change.
---
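Note (illustration only, not part of the change): the removed scheme gave
each allocation its own MTE color and recolored the block on free.
Expressed with the helpers this series keeps in <aarch64-mte.h>, and
assuming a PROT_MTE mapping, a granule-aligned usable size, and a
hypothetical library_mem pointer carrying the allocator's own tag, the
lifecycle was roughly:

  /* malloc: pick a fresh color for the block and apply it to the
     user region; USER is the correctly tagged pointer handed out.  */
  void *user = __mte_tag_region (__mte_new_tag (mem), usable_size);

  /* free: recolor with the library's tag so that a stale user pointer,
     still carrying the old color, faults on its next access.  */
  __mte_tag_region (library_mem, usable_size);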
...__mtag_tag_region.S => __mte_tag_region.S} |   4 +-\n ..._zero_region.S => __mte_tag_region_zero.S} |   4 +-\n .../aarch64/{libc-mtag.h => aarch64-mte.h}    |  70 ++++----\n sysdeps/generic/libc-mtag.h                   |  73 --------\n 7 files changed, 63 insertions(+), 276 deletions(-)\n rename sysdeps/aarch64/{__mtag_tag_region.S => __mte_tag_region.S} (97%)\n rename sysdeps/aarch64/{__mtag_tag_zero_region.S => __mte_tag_region_zero.S} (97%)\n rename sysdeps/aarch64/{libc-mtag.h => aarch64-mte.h} (57%)\n delete mode 100644 sysdeps/generic/libc-mtag.h","diff":"diff --git a/malloc/malloc-check.c b/malloc/malloc-check.c\nindex ae5025d69a..10259bc7f3 100644\n--- a/malloc/malloc-check.c\n+++ b/malloc/malloc-check.c\n@@ -19,12 +19,8 @@\n #define __mremap mremap\n #include \"malloc.c\"\n \n-/* When memory is tagged, the checking data is stored in the user part\n-   of the chunk.  We can't rely on the user not having modified the\n-   tags, so fetch the tag at each location before dereferencing\n-   it.  */\n #define SAFE_CHAR_OFFSET(p,offset) \\\n-  ((unsigned char *) tag_at (((unsigned char *) p) + offset))\n+  ((unsigned char *) (((unsigned char *) p) + offset))\n \n /* A simple, standard set of debugging hooks.  Overhead is `only' one\n    byte per chunk; still this will catch most cases of double frees or\n@@ -204,7 +200,7 @@ malloc_check (size_t sz)\n   top_check ();\n   victim = _int_malloc (&main_arena, nb);\n   __libc_lock_unlock (main_arena.mutex);\n-  return mem2mem_check (tag_new_usable (victim), sz);\n+  return mem2mem_check (victim, sz);\n }\n \n static void\n@@ -228,8 +224,6 @@ free_check (void *mem)\n     }\n   else\n     {\n-      /* Mark the chunk as belonging to the library again.  */\n-      (void)tag_region (chunk2mem (p), memsize (p));\n       _int_free_chunk (&main_arena, p, chunksize (p), 1);\n       __libc_lock_unlock (main_arena.mutex);\n     }\n@@ -278,7 +272,7 @@ realloc_check (void *oldmem, size_t bytes)\n #if HAVE_MREMAP\n       mchunkptr newp = mremap_chunk (oldp, chnb);\n       if (newp)\n-        newmem = chunk2mem_tag (newp);\n+        newmem = chunk2mem (newp);\n       else\n #endif\n       {\n@@ -313,7 +307,7 @@ invert:\n \n   __libc_lock_unlock (main_arena.mutex);\n \n-  return mem2mem_check (tag_new_usable (newmem), bytes);\n+  return mem2mem_check (newmem, bytes);\n }\n \n static void *\n@@ -355,7 +349,7 @@ memalign_check (size_t alignment, size_t bytes)\n   top_check ();\n   mem = _int_memalign (&main_arena, alignment, bytes + 1);\n   __libc_lock_unlock (main_arena.mutex);\n-  return mem2mem_check (tag_new_usable (mem), bytes);\n+  return mem2mem_check (mem, bytes);\n }\n \n static void\ndiff --git a/malloc/malloc.c b/malloc/malloc.c\nindex d273c28501..9f9cef0cec 100644\n--- a/malloc/malloc.c\n+++ b/malloc/malloc.c\n@@ -233,9 +233,7 @@\n /* For ALIGN_UP et. al.  */\n #include <libc-pointer-arith.h>\n \n-/* For memory tagging.  */\n-#include <libc-mtag.h>\n-\n+/* For internal malloc interfaces and declarations.  */\n #include <malloc/malloc-internal.h>\n \n /* For SINGLE_THREAD_P.  */\n@@ -349,86 +347,8 @@ verify (PTRDIFF_MAX <= SIZE_MAX / 2);\n #define MORECORE         (*__glibc_morecore)\n #define MORECORE_FAILURE  NULL\n \n-/* Memory tagging.  */\n-\n-/* Some systems support the concept of tagging (sometimes known as\n-   coloring) memory locations on a fine grained basis.  Each memory\n-   location is given a color (normally allocated randomly) and\n-   pointers are also colored.  
When the pointer is dereferenced, the\n-   pointer's color is checked against the memory's color and if they\n-   differ the access is faulted (sometimes lazily).\n-\n-   We use this in glibc by maintaining a single color for the malloc\n-   data structures that are interleaved with the user data and then\n-   assigning separate colors for each block allocation handed out.  In\n-   this way simple buffer overruns will be rapidly detected.  When\n-   memory is freed, the memory is recolored back to the glibc default\n-   so that simple use-after-free errors can also be detected.\n-\n-   If memory is reallocated the buffer is recolored even if the\n-   address remains the same.  This has a performance impact, but\n-   guarantees that the old pointer cannot mistakenly be reused (code\n-   that compares old against new will see a mismatch and will then\n-   need to behave as though realloc moved the data to a new location).\n-\n-   Internal API for memory tagging support.\n-\n-   The aim is to keep the code for memory tagging support as close to\n-   the normal APIs in glibc as possible, so that if tagging is not\n-   enabled in the library, or is disabled at runtime then standard\n-   operations can continue to be used.  Support macros are used to do\n-   this:\n-\n-   void *tag_new_zero_region (void *ptr, size_t size)\n-\n-   Allocates a new tag, colors the memory with that tag, zeros the\n-   memory and returns a pointer that is correctly colored for that\n-   location.  The non-tagging version will simply call memset with 0.\n-\n-   void *tag_region (void *ptr, size_t size)\n-\n-   Color the region of memory pointed to by PTR and size SIZE with\n-   the color of PTR.  Returns the original pointer.\n-\n-   void *tag_new_usable (void *ptr)\n-\n-   Allocate a new random color and use it to color the user region of\n-   a chunk; this may include data from the subsequent chunk's header\n-   if tagging is sufficiently fine grained.  Returns PTR suitably\n-   recolored for accessing the memory there.\n-\n-   void *tag_at (void *ptr)\n-\n-   Read the current color of the memory at the address pointed to by\n-   PTR (ignoring it's current color) and return PTR recolored to that\n-   color.  PTR must be valid address in all other respects.  When\n-   tagging is not enabled, it simply returns the original pointer.\n-*/\n-\n static int extra_mmap_prot = 0;\n \n-static __always_inline void *\n-tag_region (void *ptr, size_t size)\n-{\n-  return ptr;\n-}\n-\n-static __always_inline void *\n-tag_new_zero_region (void *ptr, size_t size)\n-{\n-  return memset (ptr, 0, size);\n-}\n-\n-/* Defined later.  */\n-static void *\n-tag_new_usable (void *ptr);\n-\n-static __always_inline void *\n-tag_at (void *ptr)\n-{\n-  return ptr;\n-}\n-\n #include <string.h>\n \n /*\n@@ -1183,38 +1103,15 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n   ---------- Size and alignment checks and conversions ----------\n */\n \n-/* Conversion from malloc headers to user pointers, and back.  When\n-   using memory tagging the user data and the malloc data structure\n-   headers have distinct tags.  Converting fully from one to the other\n-   involves extracting the tag at the other address and creating a\n-   suitable pointer using it.  That can be quite expensive.  
There are\n-   cases when the pointers are not dereferenced (for example only used\n-   for alignment check) so the tags are not relevant, and there are\n-   cases when user data is not tagged distinctly from malloc headers\n-   (user data is untagged because tagging is done late in malloc and\n-   early in free).  User memory tagging across internal interfaces:\n-\n-      sysmalloc: Returns untagged memory.\n-      _int_malloc: Returns untagged memory.\n-      _int_memalign: Returns untagged memory.\n-      _int_memalign: Returns untagged memory.\n-      _mid_memalign: Returns tagged memory.\n-      _int_realloc: Takes and returns tagged memory.\n-*/\n-\n /* The chunk header is two SIZE_SZ elements, but this is used widely, so\n    we define it here for clarity later.  */\n #define CHUNK_HDR_SZ (2 * SIZE_SZ)\n \n-/* Convert a chunk address to a user mem pointer without correcting\n-   the tag.  */\n+/* Convert a chunk address to a user mem pointer.  */\n #define chunk2mem(p) ((void*)((char*)(p) + CHUNK_HDR_SZ))\n \n-/* Convert a chunk address to a user mem pointer and extract the right tag.  */\n-#define chunk2mem_tag(p) ((void*)tag_at ((char*)(p) + CHUNK_HDR_SZ))\n-\n-/* Convert a user mem pointer to a chunk address and extract the right tag.  */\n-#define mem2chunk(mem) ((mchunkptr)tag_at (((char*)(mem) - CHUNK_HDR_SZ)))\n+/* Convert a user mem pointer to a chunk address.  */\n+#define mem2chunk(mem) ((mchunkptr) (((char*)(mem) - CHUNK_HDR_SZ)))\n \n /* The smallest possible chunk */\n #define MIN_CHUNK_SIZE        (offsetof(struct malloc_chunk, fd_nextsize))\n@@ -1351,12 +1248,6 @@ checked_request2size (size_t req) __nonnull (1)\n    dumped heap chunks.  */\n #define memsize(p) (chunksize (p) - CHUNK_HDR_SZ + SIZE_SZ)\n \n-static __always_inline void *\n-tag_new_usable (void *ptr)\n-{\n-  return ptr;\n-}\n-\n /* Huge page used for an mmap chunk.  */\n #define MMAP_HP 0x1\n \n@@ -3068,7 +2959,7 @@ tcache_get_align (size_t nb, size_t alignment)\n       if (te != NULL\n \t  && csize == nb\n \t  && PTR_IS_ALIGNED (te, alignment))\n-\treturn tag_new_usable (tcache_get_n (tc_idx, tep, mangled));\n+\treturn tcache_get_n (tc_idx, tep, mangled);\n     }\n   return NULL;\n }\n@@ -3186,7 +3077,7 @@ __libc_malloc2 (size_t bytes)\n \n   if (SINGLE_THREAD_P)\n     {\n-      victim = tag_new_usable (_int_malloc (&main_arena, bytes));\n+      victim = _int_malloc (&main_arena, bytes);\n       assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||\n \t      &main_arena == arena_for_chunk (mem2chunk (victim)));\n       return victim;\n@@ -3207,8 +3098,6 @@ __libc_malloc2 (size_t bytes)\n   if (ar_ptr != NULL)\n     __libc_lock_unlock (ar_ptr->mutex);\n \n-  victim = tag_new_usable (victim);\n-\n   assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||\n           ar_ptr == arena_for_chunk (mem2chunk (victim)));\n   return victim;\n@@ -3227,14 +3116,14 @@ __libc_malloc (size_t bytes)\n       if (__glibc_likely (tc_idx < TCACHE_SMALL_BINS))\n         {\n \t  if (tcache->entries[tc_idx] != NULL)\n-\t    return tag_new_usable (tcache_get (tc_idx));\n+\t    return tcache_get (tc_idx);\n \t}\n       else\n         {\n \t  tc_idx = large_csize2tidx (nb);\n \t  void *victim = tcache_get_large (tc_idx, nb);\n \t  if (victim != NULL)\n-\t    return tag_new_usable (victim);\n+\t    return victim;\n \t}\n     }\n #endif\n@@ -3260,9 +3149,6 @@ __libc_free (void *mem)\n \n   p = mem2chunk (mem);\n \n-  /* Mark the chunk as belonging to the library again.  
*/\n-  tag_region (chunk2mem (p), memsize (p));\n-\n   INTERNAL_SIZE_T size = chunksize (p);\n \n   if (__glibc_unlikely (misaligned_chunk (p)))\n@@ -3366,15 +3252,7 @@ __libc_realloc (void *oldmem, size_t bytes)\n #if HAVE_MREMAP\n       newp = mremap_chunk (oldp, nb);\n       if (newp)\n-\t{\n-\t  void *newmem = chunk2mem_tag (newp);\n-\t  /* Give the new block a different tag.  This helps to ensure\n-\t     that stale handles to the previous mapping are not\n-\t     reused.  There's a performance hit for both us and the\n-\t     caller for doing this, so we might want to\n-\t     reconsider.  */\n-\t  return tag_new_usable (newmem);\n-\t}\n+\treturn chunk2mem (newp);\n #endif\n       /* Return if shrinking and mremap was unsuccessful.  */\n       if (bytes <= usable)\n@@ -3416,10 +3294,8 @@ __libc_realloc (void *oldmem, size_t bytes)\n       newp = __libc_malloc (bytes);\n       if (newp != NULL)\n         {\n-\t  size_t sz = memsize (oldp);\n-\t  memcpy (newp, oldmem, sz);\n-\t  (void) tag_region (chunk2mem (oldp), sz);\n-          _int_free_chunk (ar_ptr, oldp, chunksize (oldp), 0);\n+\t  memcpy (newp, oldmem, memsize (oldp));\n+\t  _int_free_chunk (ar_ptr, oldp, chunksize (oldp), 0);\n         }\n     }\n \n@@ -3503,7 +3379,7 @@ _mid_memalign (size_t alignment, size_t bytes)\n #if USE_TCACHE\n   void *victim = tcache_get_align (checked_request2size (bytes), alignment);\n   if (victim != NULL)\n-    return tag_new_usable (victim);\n+    return victim;\n #endif\n \n   if (SINGLE_THREAD_P)\n@@ -3511,7 +3387,7 @@ _mid_memalign (size_t alignment, size_t bytes)\n       p = _int_memalign (&main_arena, alignment, bytes);\n       assert (!p || chunk_is_mmapped (mem2chunk (p)) ||\n \t      &main_arena == arena_for_chunk (mem2chunk (p)));\n-      return tag_new_usable (p);\n+      return p;\n     }\n \n   arena_get (ar_ptr, bytes + alignment + MINSIZE);\n@@ -3529,7 +3405,7 @@ _mid_memalign (size_t alignment, size_t bytes)\n \n   assert (!p || chunk_is_mmapped (mem2chunk (p)) ||\n           ar_ptr == arena_for_chunk (mem2chunk (p)));\n-  return tag_new_usable (p);\n+  return p;\n }\n \n void *\n@@ -4446,7 +4322,7 @@ _int_realloc (mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,\n           av->top = chunk_at_offset (oldp, nb);\n           set_head (av->top, (newsize - nb) | PREV_INUSE);\n           check_inuse_chunk (av, oldp);\n-          return tag_new_usable (chunk2mem (oldp));\n+          return chunk2mem (oldp);\n         }\n \n       /* Try to expand forward into next chunk;  split off remainder below */\n@@ -4480,10 +4356,7 @@ _int_realloc (mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,\n           else\n             {\n \t      void *oldmem = chunk2mem (oldp);\n-\t      size_t sz = memsize (oldp);\n-\t      (void) tag_region (oldmem, sz);\n-\t      newmem = tag_new_usable (newmem);\n-\t      memcpy (newmem, oldmem, sz);\n+\t      memcpy (newmem, oldmem, memsize (oldp));\n \t      _int_free_chunk (av, oldp, chunksize (oldp), 1);\n \t      check_inuse_chunk (av, newp);\n \t      return newmem;\n@@ -4505,8 +4378,6 @@ _int_realloc (mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,\n   else   /* split remainder */\n     {\n       remainder = chunk_at_offset (newp, nb);\n-      /* Clear any user-space tags before writing the header.  */\n-      remainder = tag_region (remainder, remainder_size);\n       set_head_size (newp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));\n       set_head (remainder, remainder_size | PREV_INUSE |\n                 (av != &main_arena ? 
NON_MAIN_ARENA : 0));\n@@ -4516,7 +4387,7 @@ _int_realloc (mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,\n     }\n \n   check_inuse_chunk (av, newp);\n-  return tag_new_usable (chunk2mem (newp));\n+  return chunk2mem (newp);\n }\n \n /*\ndiff --git a/sysdeps/aarch64/Makefile b/sysdeps/aarch64/Makefile\nindex a78622cc35..bafadf77e2 100644\n--- a/sysdeps/aarch64/Makefile\n+++ b/sysdeps/aarch64/Makefile\n@@ -82,8 +82,8 @@ sysdep_headers += \\\n sysdep_routines += \\\n   __alloc_gcs \\\n   __arm_za_disable \\\n-  __mtag_tag_region \\\n-  __mtag_tag_zero_region \\\n+  __mte_tag_region \\\n+  __mte_tag_region_zero \\\n   # sysdep_routines\n \n tests += \\\n@@ -102,10 +102,7 @@ $(objpfx)tst-sme-clone3: $(objpfx)clone3.o $(objpfx)__arm_za_disable.o\n endif\n \n ifeq ($(subdir),malloc)\n-sysdep_malloc_debug_routines = \\\n-  __mtag_tag_region \\\n-  __mtag_tag_zero_region \\\n-  # sysdep_malloc_debug_routines\n+\n endif # malloc directory\n \n ifeq ($(subdir),support)\ndiff --git a/sysdeps/aarch64/__mtag_tag_region.S b/sysdeps/aarch64/__mte_tag_region.S\nsimilarity index 97%\nrename from sysdeps/aarch64/__mtag_tag_region.S\nrename to sysdeps/aarch64/__mte_tag_region.S\nindex 85e330812e..1698489fc2 100644\n--- a/sysdeps/aarch64/__mtag_tag_region.S\n+++ b/sysdeps/aarch64/__mte_tag_region.S\n@@ -37,7 +37,7 @@\n #define tmp\tx4\n #define zva_val\tx4\n \n-ENTRY (__libc_mtag_tag_region)\n+ENTRY (__mte_tag_region)\n \tadd\tdstend, dstin, count\n \n \tcmp\tcount, 96\n@@ -104,4 +104,4 @@ L(no_zva_loop):\n \tst2g\tdstin, [dstend, -32]\n \tret\n \n-END (__libc_mtag_tag_region)\n+END (__mte_tag_region)\ndiff --git a/sysdeps/aarch64/__mtag_tag_zero_region.S b/sysdeps/aarch64/__mte_tag_region_zero.S\nsimilarity index 97%\nrename from sysdeps/aarch64/__mtag_tag_zero_region.S\nrename to sysdeps/aarch64/__mte_tag_region_zero.S\nindex 1a84b3e4d4..2f506c9ee8 100644\n--- a/sysdeps/aarch64/__mtag_tag_zero_region.S\n+++ b/sysdeps/aarch64/__mte_tag_region_zero.S\n@@ -37,7 +37,7 @@\n #define tmp\tx4\n #define zva_val\tx4\n \n-ENTRY (__libc_mtag_tag_zero_region)\n+ENTRY (__mte_tag_region_zero)\n \tadd\tdstend, dstin, count\n \n \tcmp\tcount, 96\n@@ -104,4 +104,4 @@ L(no_zva_loop):\n \tstz2g\tdstin, [dstend, -32]\n \tret\n \n-END (__libc_mtag_tag_zero_region)\n+END (__mte_tag_region_zero)\ndiff --git a/sysdeps/aarch64/libc-mtag.h b/sysdeps/aarch64/aarch64-mte.h\nsimilarity index 57%\nrename from sysdeps/aarch64/libc-mtag.h\nrename to sysdeps/aarch64/aarch64-mte.h\nindex 663b866bf8..f42564f528 100644\n--- a/sysdeps/aarch64/libc-mtag.h\n+++ b/sysdeps/aarch64/aarch64-mte.h\n@@ -1,4 +1,4 @@\n-/* libc-internal interface for tagged (colored) memory support.\n+/* AArch64 MTE (Memory Tagging Extension) declarations.\n    Copyright (C) 2020-2026 Free Software Foundation, Inc.\n    This file is part of the GNU C Library.\n \n@@ -16,51 +16,49 @@\n    License along with the GNU C Library; if not, see\n    <http://www.gnu.org/licenses/>.  */\n \n-#ifndef _AARCH64_LIBC_MTAG_H\n-#define _AARCH64_LIBC_MTAG_H 1\n+#ifndef _AARCH64_MTE_H\n+#define _AARCH64_MTE_H 1\n \n-#if 0\n+#include <stddef.h>\n+#include <stdint.h>\n+#include <sys/cdefs.h>\n \n-/* Used to ensure additional alignment when objects need to have distinct\n-   tags.  */\n-#define __MTAG_GRANULE_SIZE 16\n-\n-/* Non-zero if memory obtained via morecore (sbrk) is not tagged.  */\n-#define __MTAG_SBRK_UNTAGGED 1\n-\n-/* Extra flags to pass to mmap to get tagged pages.  
*/\n-#define __MTAG_MMAP_FLAGS PROT_MTE\n-\n-/* Set the tags for a region of memory, which must have size and alignment\n-   that are multiples of __MTAG_GRANULE_SIZE.  Size cannot be zero.  */\n-void *__libc_mtag_tag_region (void *, size_t);\n-\n-/* Optimized equivalent to __libc_mtag_tag_region followed by memset to 0.  */\n-void *__libc_mtag_tag_zero_region (void *, size_t);\n-\n-/* Convert address P to a pointer that is tagged correctly for that\n-   location.  */\n-static __always_inline void *\n-__libc_mtag_address_get_tag (void *p)\n+/* Assign a new (random) tag to a pointer P (does not adjust the\n+   allocation tag on the memory addressed).  */\n+static __always_inline __attribute_maybe_unused__ void *\n+__mte_new_tag (void *p)\n {\n   register void *x0 asm (\"x0\") = p;\n-  asm (\".inst 0xd9600000 /* ldg x0, [x0] */\" : \"+r\" (x0));\n+  register uintptr_t x1 asm (\"x1\");\n+  /* Guarantee that the new tag is not the same as now.  */\n+  asm (\".inst 0x9adf1401 /* gmi x1, x0, xzr */\\n\"\n+       \".inst 0x9ac11000 /* irg x0, x0, x1 */\" : \"+r\" (x0), \"=r\" (x1));\n   return x0;\n }\n \n-/* Assign a new (random) tag to a pointer P (does not adjust the tag on\n-   the memory addressed).  */\n-static __always_inline void *\n-__libc_mtag_new_tag (void *p)\n+/* Clears logical tag in the input pointer.  */\n+static __always_inline __attribute_maybe_unused__ void *\n+__mte_clear_tag (void *p)\n+{\n+  return (void *)((uintptr_t)p & ~(0xfull << 56ull));\n+}\n+\n+/* Convert address P to a pointer that is tagged correctly for that\n+   location (logical tag in the returned pointer will be the same\n+   as the allocation tag in the addressed memory).  */\n+static __always_inline __attribute_maybe_unused__ void *\n+__mte_get_tag (void *p)\n {\n   register void *x0 asm (\"x0\") = p;\n-  register unsigned long x1 asm (\"x1\");\n-  /* Guarantee that the new tag is not the same as now.  */\n-  asm (\".inst 0x9adf1401 /* gmi x1, x0, xzr */\\n\"\n-       \".inst 0x9ac11000 /* irg x0, x0, x1 */\" : \"+r\" (x0), \"=r\" (x1));\n+  asm (\".inst 0xd9600000 /* ldg x0, [x0] */\" : \"+r\" (x0));\n   return x0;\n }\n \n-#endif /* USE_MTAG */\n+/* Set the tags for a region of memory, which must have size and alignment\n+   that are multiples of MTE_GRANULE_SIZE.  Size cannot be zero.  */\n+void *__mte_tag_region (void *, size_t);\n+\n+/* Optimized equivalent to __mte_tag_region followed by memset to 0.  */\n+void *__mte_tag_region_zero (void *, size_t);\n \n-#endif /* _AARCH64_LIBC_MTAG_H */\n+#endif /* _AARCH64_MTE_H */\ndiff --git a/sysdeps/generic/libc-mtag.h b/sysdeps/generic/libc-mtag.h\ndeleted file mode 100644\nindex 5477bfa17f..0000000000\n--- a/sysdeps/generic/libc-mtag.h\n+++ /dev/null\n@@ -1,73 +0,0 @@\n-/* libc-internal interface for tagged (colored) memory support.\n-   Copyright (C) 2020-2026 Free Software Foundation, Inc.\n-   This file is part of the GNU C Library.\n-\n-   The GNU C Library is free software; you can redistribute it and/or\n-   modify it under the terms of the GNU Lesser General Public\n-   License as published by the Free Software Foundation; either\n-   version 2.1 of the License, or (at your option) any later version.\n-\n-   The GNU C Library is distributed in the hope that it will be useful,\n-   but WITHOUT ANY WARRANTY; without even the implied warranty of\n-   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  
See the GNU\n-   Lesser General Public License for more details.\n-\n-   You should have received a copy of the GNU Lesser General Public\n-   License along with the GNU C Library; if not, see\n-   <http://www.gnu.org/licenses/>.  */\n-\n-#ifndef _GENERIC_LIBC_MTAG_H\n-#define _GENERIC_LIBC_MTAG_H 1\n-\n-/* Generic bindings for systems that do not support memory tagging.  */\n-\n-/* Used to ensure additional alignment when objects need to have distinct\n-   tags.  */\n-#define __MTAG_GRANULE_SIZE 1\n-\n-/* Non-zero if memory obtained via morecore (sbrk) is not tagged.  */\n-#define __MTAG_SBRK_UNTAGGED 0\n-\n-/* Extra flags to pass to mmap() to request a tagged region of memory.  */\n-#define __MTAG_MMAP_FLAGS 0\n-\n-/* Memory tagging target hooks are only called when memory tagging is\n-   enabled at runtime.  The generic definitions here must not be used.  */\n-void __libc_mtag_link_error (void);\n-\n-/* Set the tags for a region of memory, which must have size and alignment\n-   that are multiples of __MTAG_GRANULE_SIZE.  Size cannot be zero.  */\n-static inline void *\n-__libc_mtag_tag_region (void *p, size_t n)\n-{\n-  __libc_mtag_link_error ();\n-  return p;\n-}\n-\n-/* Optimized equivalent to __libc_mtag_tag_region followed by memset to 0.  */\n-static inline void *\n-__libc_mtag_tag_zero_region (void *p, size_t n)\n-{\n-  __libc_mtag_link_error ();\n-  return memset (p, 0, n);\n-}\n-\n-/* Convert address P to a pointer that is tagged correctly for that\n-   location.  */\n-static inline void *\n-__libc_mtag_address_get_tag (void *p)\n-{\n-  __libc_mtag_link_error ();\n-  return p;\n-}\n-\n-/* Assign a new (random) tag to a pointer P (does not adjust the tag on\n-   the memory addressed).  */\n-static inline void *\n-__libc_mtag_new_tag (void *p)\n-{\n-  __libc_mtag_link_error ();\n-  return p;\n-}\n-\n-#endif /* _GENERIC_LIBC_MTAG_H */\n","prefixes":["3/3"]}
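
Editor's sketch (hypothetical, not part of this patch): how the renamed
helpers could be exercised on an MTE-capable AArch64 Linux system.  It
assumes the internal <aarch64-mte.h> header from this series is reachable
when building; PROT_MTE and the prctl() constants are the documented
Linux MTE interfaces, and error handling is omitted.

  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <aarch64-mte.h>	/* internal glibc header from this series */

  int
  main (void)
  {
    /* Opt in to synchronous tag checking; the 0xfffe include mask lets
       irg pick any non-zero tag.  */
    prctl (PR_SET_TAGGED_ADDR_CTRL,
	   PR_TAGGED_ADDR_ENABLE | PR_MTE_TCF_SYNC
	   | (0xfffe << PR_MTE_TAG_SHIFT),
	   0, 0, 0);

    /* Map a page with allocation tags enabled; tags start out zero.  */
    char *p = mmap (NULL, 4096, PROT_READ | PROT_WRITE | PROT_MTE,
		    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    /* Color one 16-byte granule with a fresh tag: Q carries the
       matching logical tag, while P still carries tag zero.  */
    char *q = __mte_tag_region (__mte_new_tag (p), 16);

    q[0] = 42;	/* OK: logical tag matches the allocation tag.  */
    /* p[0] = 42;	-- would now fault synchronously.  */
    return 0;
  }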