From patchwork Thu Mar 4 16:32:10 2021
X-Patchwork-Submitter: Szabolcs Nagy
X-Patchwork-Id: 1447440
From: Szabolcs Nagy
To: libc-alpha@sourceware.org, Richard.Earnshaw@arm.com, DJ Delorie
Subject: [PATCH 07/16] malloc: Refactor TAG_ macros to avoid indirection
Date: Thu, 4 Mar 2021 16:32:10 +0000

This does not change behaviour, just removes one layer of indirection
in the internal memory tagging logic.

Use tag_ and mtag_ prefixes instead of __tag_ and __mtag_ since these
are all symbols with internal linkage, private to malloc.c, so there
is no user namespace pollution issue.
---
 malloc/arena.c  | 16 +++++-----
 malloc/hooks.c  | 10 +++---
 malloc/malloc.c | 81 +++++++++++++++++++++++--------------------
 3 files changed, 51 insertions(+), 56 deletions(-)

diff --git a/malloc/arena.c b/malloc/arena.c
index 0777dc70c6..d0778fea92 100644
--- a/malloc/arena.c
+++ b/malloc/arena.c
@@ -332,12 +332,12 @@ ptmalloc_init (void)
       if (__MTAG_SBRK_UNTAGGED)
 	__morecore = __failing_morecore;
 
-      __mtag_mmap_flags = __MTAG_MMAP_FLAGS;
-      __tag_new_memset = __mtag_tag_new_memset;
-      __tag_region = __libc_mtag_tag_region;
-      __tag_new_usable = __mtag_tag_new_usable;
-      __tag_at = __libc_mtag_address_get_tag;
-      __mtag_granule_mask = ~(size_t)(__MTAG_GRANULE_SIZE - 1);
+      mtag_mmap_flags = __MTAG_MMAP_FLAGS;
+      tag_new_memset = __mtag_tag_new_memset;
+      tag_region = __libc_mtag_tag_region;
+      tag_new_usable = __mtag_tag_new_usable;
+      tag_at = __libc_mtag_address_get_tag;
+      mtag_granule_mask = ~(size_t)(__MTAG_GRANULE_SIZE - 1);
     }
 #endif
@@ -557,7 +557,7 @@ new_heap (size_t size, size_t top_pad)
             }
         }
     }
-  if (__mprotect (p2, size, MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE) != 0)
+  if (__mprotect (p2, size, mtag_mmap_flags | PROT_READ | PROT_WRITE) != 0)
     {
       __munmap (p2, HEAP_MAX_SIZE);
       return 0;
@@ -587,7 +587,7 @@ grow_heap (heap_info *h, long diff)
     {
       if (__mprotect ((char *) h + h->mprotect_size,
                       (unsigned long) new_size - h->mprotect_size,
-                      MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE) != 0)
+                      mtag_mmap_flags | PROT_READ | PROT_WRITE) != 0)
         return -2;
 
       h->mprotect_size = new_size;
diff --git a/malloc/hooks.c b/malloc/hooks.c
index efec05f0a8..d8e304c31c 100644
--- a/malloc/hooks.c
+++ b/malloc/hooks.c
@@ -68,7 +68,7 @@ __malloc_check_init (void)
    tags, so fetch the tag at each location before dereferencing
    it.  */
 #define SAFE_CHAR_OFFSET(p,offset) \
-  ((unsigned char *) TAG_AT (((unsigned char *) p) + offset))
+  ((unsigned char *) tag_at (((unsigned char *) p) + offset))
 
 /* A simple, standard set of debugging hooks.  Overhead is `only' one
    byte per chunk; still this will catch most cases of double frees or
@@ -249,7 +249,7 @@ malloc_check (size_t sz, const void *caller)
   top_check ();
   victim = _int_malloc (&main_arena, nb);
   __libc_lock_unlock (main_arena.mutex);
-  return mem2mem_check (TAG_NEW_USABLE (victim), sz);
+  return mem2mem_check (tag_new_usable (victim), sz);
 }
 
 static void
@@ -280,7 +280,7 @@ free_check (void *mem, const void *caller)
   else
     {
       /* Mark the chunk as belonging to the library again.  */
-      (void)TAG_REGION (chunk2rawmem (p), CHUNK_AVAILABLE_SIZE (p)
+      (void)tag_region (chunk2rawmem (p), CHUNK_AVAILABLE_SIZE (p)
                                          - CHUNK_HDR_SZ);
       _int_free (&main_arena, p, 1);
       __libc_lock_unlock (main_arena.mutex);
@@ -375,7 +375,7 @@ invert:
 
   __libc_lock_unlock (main_arena.mutex);
 
-  return mem2mem_check (TAG_NEW_USABLE (newmem), bytes);
+  return mem2mem_check (tag_new_usable (newmem), bytes);
 }
 
 static void *
@@ -417,7 +417,7 @@ memalign_check (size_t alignment, size_t bytes, const void *caller)
   top_check ();
   mem = _int_memalign (&main_arena, alignment, bytes + 1);
   __libc_lock_unlock (main_arena.mutex);
-  return mem2mem_check (TAG_NEW_USABLE (mem), bytes);
+  return mem2mem_check (tag_new_usable (mem), bytes);
 }
 
 #if SHLIB_COMPAT (libc, GLIBC_2_0, GLIBC_2_25)
diff --git a/malloc/malloc.c b/malloc/malloc.c
index b4c800bd7f..e5f520267b 100644
--- a/malloc/malloc.c
+++ b/malloc/malloc.c
@@ -413,26 +413,26 @@ void *(*__morecore)(ptrdiff_t) = __default_morecore;
    operations can continue to be used.  Support macros are used to do
    this:
 
-   void *TAG_NEW_MEMSET (void *ptr, int, val, size_t size)
+   void *tag_new_memset (void *ptr, int, val, size_t size)
 
    Has the same interface as memset(), but additionally allocates a
    new tag, colors the memory with that tag and returns a pointer that
    is correctly colored for that location.  The non-tagging version
    will simply call memset.
 
-   void *TAG_REGION (void *ptr, size_t size)
+   void *tag_region (void *ptr, size_t size)
 
    Color the region of memory pointed to by PTR and size SIZE with the
    color of PTR.  Returns the original pointer.
 
-   void *TAG_NEW_USABLE (void *ptr)
+   void *tag_new_usable (void *ptr)
 
    Allocate a new random color and use it to color the user region of
    a chunk; this may include data from the subsequent chunk's header
    if tagging is sufficiently fine grained.  Returns PTR suitably
    recolored for accessing the memory there.
 
-   void *TAG_AT (void *ptr)
+   void *tag_at (void *ptr)
 
    Read the current color of the memory at the address pointed to by
    PTR (ignoring it's current color) and return PTR recolored to that
@@ -455,25 +455,20 @@ __default_tag_nop (void *ptr)
   return ptr;
 }
 
-static int __mtag_mmap_flags = 0;
-static size_t __mtag_granule_mask = ~(size_t)0;
+static int mtag_mmap_flags = 0;
+static size_t mtag_granule_mask = ~(size_t)0;
 
-static void *(*__tag_new_memset)(void *, int, size_t) = memset;
-static void *(*__tag_region)(void *, size_t) = __default_tag_region;
-static void *(*__tag_new_usable)(void *) = __default_tag_nop;
-static void *(*__tag_at)(void *) = __default_tag_nop;
+static void *(*tag_new_memset)(void *, int, size_t) = memset;
+static void *(*tag_region)(void *, size_t) = __default_tag_region;
+static void *(*tag_new_usable)(void *) = __default_tag_nop;
+static void *(*tag_at)(void *) = __default_tag_nop;
 
-# define MTAG_MMAP_FLAGS __mtag_mmap_flags
-# define TAG_NEW_MEMSET(ptr, val, size) __tag_new_memset (ptr, val, size)
-# define TAG_REGION(ptr, size) __tag_region (ptr, size)
-# define TAG_NEW_USABLE(ptr) __tag_new_usable (ptr)
-# define TAG_AT(ptr) __tag_at (ptr)
 #else
-# define MTAG_MMAP_FLAGS 0
-# define TAG_NEW_MEMSET(ptr, val, size) memset (ptr, val, size)
-# define TAG_REGION(ptr, size) (ptr)
-# define TAG_NEW_USABLE(ptr) (ptr)
-# define TAG_AT(ptr) (ptr)
+# define mtag_mmap_flags 0
+# define tag_new_memset(ptr, val, size) memset (ptr, val, size)
+# define tag_region(ptr, size) (ptr)
+# define tag_new_usable(ptr) (ptr)
+# define tag_at(ptr) (ptr)
 #endif
 
 #include
@@ -1305,8 +1300,8 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 
 /* Convert between user mem pointers and chunk pointers, updating any
    memory tags on the pointer to respect the tag value at that
    location.  */
-#define chunk2mem(p) ((void*)TAG_AT (((char*)(p) + CHUNK_HDR_SZ)))
-#define mem2chunk(mem) ((mchunkptr)TAG_AT (((char*)(mem) - CHUNK_HDR_SZ)))
+#define chunk2mem(p) ((void *)tag_at (((char*)(p) + CHUNK_HDR_SZ)))
+#define mem2chunk(mem) ((mchunkptr)tag_at (((char*)(mem) - CHUNK_HDR_SZ)))
 
 /* The smallest possible chunk */
 #define MIN_CHUNK_SIZE        (offsetof(struct malloc_chunk, fd_nextsize))
@@ -1337,7 +1332,7 @@ nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
 #ifdef USE_MTAG
 #define CHUNK_AVAILABLE_SIZE(p) \
   ((chunksize (p) + (chunk_is_mmapped (p) ? 0 : SIZE_SZ)) \
-   & __mtag_granule_mask)
+   & mtag_granule_mask)
 #else
 #define CHUNK_AVAILABLE_SIZE(p) \
   (chunksize (p) + (chunk_is_mmapped (p) ? 0 : SIZE_SZ))
@@ -1361,7 +1356,7 @@ checked_request2size (size_t req, size_t *sz) __nonnull (1)
      number.  Ideally, this would be part of request2size(), but that
      must be a macro that produces a compile time constant if passed
      a constant literal.  */
-  req = (req + ~__mtag_granule_mask) & __mtag_granule_mask;
+  req = (req + ~mtag_granule_mask) & mtag_granule_mask;
 #endif
 
   *sz = request2size (req);
@@ -2467,7 +2462,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       if ((unsigned long) (size) > (unsigned long) (nb))
         {
           mm = (char *) (MMAP (0, size,
-                               MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE, 0));
+                               mtag_mmap_flags | PROT_READ | PROT_WRITE, 0));
 
           if (mm != MAP_FAILED)
             {
@@ -2665,7 +2660,7 @@ sysmalloc (INTERNAL_SIZE_T nb, mstate av)
       if ((unsigned long) (size) > (unsigned long) (nb))
         {
           char *mbrk = (char *) (MMAP (0, size,
-                                       MTAG_MMAP_FLAGS | PROT_READ | PROT_WRITE,
+                                       mtag_mmap_flags | PROT_READ | PROT_WRITE,
                                        0));
 
           if (mbrk != MAP_FAILED)
@@ -3221,14 +3216,14 @@ __libc_malloc (size_t bytes)
       && tcache->counts[tc_idx] > 0)
     {
       victim = tcache_get (tc_idx);
-      return TAG_NEW_USABLE (victim);
+      return tag_new_usable (victim);
     }
   DIAG_POP_NEEDS_COMMENT;
 #endif
 
   if (SINGLE_THREAD_P)
     {
-      victim = TAG_NEW_USABLE (_int_malloc (&main_arena, bytes));
+      victim = tag_new_usable (_int_malloc (&main_arena, bytes));
       assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
               &main_arena == arena_for_chunk (mem2chunk (victim)));
       return victim;
@@ -3249,7 +3244,7 @@ __libc_malloc (size_t bytes)
   if (ar_ptr != NULL)
     __libc_lock_unlock (ar_ptr->mutex);
 
-  victim = TAG_NEW_USABLE (victim);
+  victim = tag_new_usable (victim);
 
   assert (!victim || chunk_is_mmapped (mem2chunk (victim)) ||
           ar_ptr == arena_for_chunk (mem2chunk (victim)));
@@ -3305,7 +3300,7 @@ __libc_free (void *mem)
       MAYBE_INIT_TCACHE ();
 
       /* Mark the chunk as belonging to the library again.  */
-      (void)TAG_REGION (chunk2rawmem (p),
+      (void)tag_region (chunk2rawmem (p),
                         CHUNK_AVAILABLE_SIZE (p) - CHUNK_HDR_SZ);
 
       ar_ptr = arena_for_chunk (p);
@@ -3408,7 +3403,7 @@ __libc_realloc (void *oldmem, size_t bytes)
          reused.  There's a performance hit for both us and the caller
          for doing this, so we might want to reconsider.  */
-      return TAG_NEW_USABLE (newmem);
+      return tag_new_usable (newmem);
     }
 #endif
 
   /* Note the extra SIZE_SZ overhead. */
@@ -3451,7 +3446,7 @@ __libc_realloc (void *oldmem, size_t bytes)
             {
               size_t sz = CHUNK_AVAILABLE_SIZE (oldp) - CHUNK_HDR_SZ;
               memcpy (newp, oldmem, sz);
-              (void) TAG_REGION (chunk2rawmem (oldp), sz);
+              (void) tag_region (chunk2rawmem (oldp), sz);
               _int_free (ar_ptr, oldp, 0);
             }
         }
@@ -3509,7 +3504,7 @@ _mid_memalign (size_t alignment, size_t bytes, void *address)
       p = _int_memalign (&main_arena, alignment, bytes);
       assert (!p || chunk_is_mmapped (mem2chunk (p)) ||
               &main_arena == arena_for_chunk (mem2chunk (p)));
-      return TAG_NEW_USABLE (p);
+      return tag_new_usable (p);
     }
 
   arena_get (ar_ptr, bytes + alignment + MINSIZE);
@@ -3527,7 +3522,7 @@ _mid_memalign (size_t alignment, size_t bytes, void *address)
 
   assert (!p || chunk_is_mmapped (mem2chunk (p)) ||
           ar_ptr == arena_for_chunk (mem2chunk (p)));
-  return TAG_NEW_USABLE (p);
+  return tag_new_usable (p);
 }
 /* For ISO C11.  */
 weak_alias (__libc_memalign, aligned_alloc)
@@ -3544,7 +3539,7 @@ __libc_valloc (size_t bytes)
   void *address = RETURN_ADDRESS (0);
   size_t pagesize = GLRO (dl_pagesize);
   p = _mid_memalign (pagesize, bytes, address);
-  return TAG_NEW_USABLE (p);
+  return tag_new_usable (p);
 }
 
 void *
@@ -3569,7 +3564,7 @@ __libc_pvalloc (size_t bytes)
   rounded_bytes = rounded_bytes & -(pagesize - 1);
 
   p = _mid_memalign (pagesize, rounded_bytes, address);
-  return TAG_NEW_USABLE (p);
+  return tag_new_usable (p);
 }
 
 void *
@@ -3666,7 +3661,7 @@ __libc_calloc (size_t n, size_t elem_size)
      regardless of MORECORE_CLEARS, so we zero the whole block while
     doing so.  */
 #ifdef USE_MTAG
-  return TAG_NEW_MEMSET (mem, 0, CHUNK_AVAILABLE_SIZE (p) - CHUNK_HDR_SZ);
+  return tag_new_memset (mem, 0, CHUNK_AVAILABLE_SIZE (p) - CHUNK_HDR_SZ);
 #else
   INTERNAL_SIZE_T csz = chunksize (p);
@@ -4821,7 +4816,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
           av->top = chunk_at_offset (oldp, nb);
           set_head (av->top, (newsize - nb) | PREV_INUSE);
           check_inuse_chunk (av, oldp);
-          return TAG_NEW_USABLE (chunk2rawmem (oldp));
+          return tag_new_usable (chunk2rawmem (oldp));
         }
 
       /* Try to expand forward into next chunk;  split off remainder below */
@@ -4856,9 +4851,9 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
             {
               void *oldmem = chunk2mem (oldp);
               size_t sz = CHUNK_AVAILABLE_SIZE (oldp) - CHUNK_HDR_SZ;
-              newmem = TAG_NEW_USABLE (newmem);
+              newmem = tag_new_usable (newmem);
               memcpy (newmem, oldmem, sz);
-              (void) TAG_REGION (chunk2rawmem (oldp), sz);
+              (void) tag_region (chunk2rawmem (oldp), sz);
               _int_free (av, oldp, 1);
               check_inuse_chunk (av, newp);
               return chunk2mem (newp);
@@ -4881,7 +4876,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
         {
           remainder = chunk_at_offset (newp, nb);
           /* Clear any user-space tags before writing the header.  */
-          remainder = TAG_REGION (remainder, remainder_size);
+          remainder = tag_region (remainder, remainder_size);
           set_head_size (newp, nb | (av != &main_arena ? NON_MAIN_ARENA : 0));
           set_head (remainder, remainder_size | PREV_INUSE |
                     (av != &main_arena ? NON_MAIN_ARENA : 0));
@@ -4891,7 +4886,7 @@ _int_realloc(mstate av, mchunkptr oldp, INTERNAL_SIZE_T oldsize,
         }
 
   check_inuse_chunk (av, newp);
-  return TAG_NEW_USABLE (chunk2rawmem (newp));
+  return tag_new_usable (chunk2rawmem (newp));
 }
 
 /*
@@ -5108,7 +5103,7 @@ musable (void *mem)
           /* The usable space may be reduced if memory tagging is needed,
              since we cannot share the user-space data with malloc's
              internal data structure.  */
-          result &= __mtag_granule_mask;
+          result &= mtag_granule_mask;
 #endif
       return result;
     }