From patchwork Tue Jun 9 12:00:03 2020
X-Patchwork-Submitter: Alexander Anisimov <a.anisimov@omprussia.ru>
X-Patchwork-Id: 1305843
Date: Tue, 9 Jun 2020 15:00:03 +0300
From: Alexander Anisimov <a.anisimov@omprussia.ru>
To: libc-alpha@sourceware.org
Cc: zhuyan34@huawei.com, fw@deneb.enyo.de
Subject: [PATCH] arm: fix multiarch memcpy for negative len [BZ #25620]
Message-ID: <20200609120003.GA8412@anisyan>

Hi,

Some days ago Evgeniy sent a patch [1] that fixes the behavior of memcpy
and memmove when a negative len is passed to them. That patch covers only
the common Arm implementation (sysdeps/arm). I have now prepared the same
fix for the multiarch memcpy implementation
(sysdeps/arm/armv7/multiarch/memcpy_impl.S): every branch that tests the
length with a signed condition code (ge, lt, mi, pl) is switched to the
corresponding unsigned one (hs, lo), so a negative len, which wraps around
to a huge size_t value, is no longer mistaken for a small one. All test
cases pass, including string/tst-memmove-overflow.c by Florian Weimer.

Together, this patch and [1] fully eliminate CVE-2020-6096. As far as I
know, Yan Zhu has already started fixing this [BZ #25620] for multiarch
[2], and I would appreciate it if he finalized his version. However, this
issue is important for our project, which is why I am offering my own fix.
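To make the signed-vs-unsigned distinction concrete, here is a minimal
standalone C sketch (illustrative only, not part of the patch): on Arm,
`bge'/`blt' branch on a signed comparison, while `bhs'/`blo' branch on an
unsigned one, and only the unsigned forms match the semantics of a size_t
count.

    #include <stdio.h>
    #include <stddef.h>

    int
    main (void)
    {
      /* A "negative" length such as -3, converted to size_t, wraps
         around to SIZE_MAX - 2.  */
      size_t count = (size_t) -3;

      /* Signed view: on a two's-complement machine the same bit pattern
         reads as -3, so the comparison is false.  This is what the old
         `bge' computed, steering a huge copy onto the short-copy path.  */
      printf ("signed   (bge): %d\n", (ptrdiff_t) count >= 64);

      /* Unsigned view: the value is huge, so the comparison is true.
         This is what `bhs' computes.  */
      printf ("unsigned (bhs): %d\n", count >= 64);

      return 0;
    }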
Signed-off-by: Evgeny Eremin
Signed-off-by: Konstantin Karasev
Signed-off-by: Anton Rybakov
Signed-off-by: Ildar Kamaletdinov

[1] https://sourceware.org/pipermail/libc-alpha/2020-June/114702.html
[2] https://sourceware.org/pipermail/libc-alpha/2020-April/112671.html

---

-- 
Alexander Anisimov
Software Engineer
Open Mobile Platform
https://omprussia.ru

diff --git a/sysdeps/arm/armv7/multiarch/memcpy_impl.S b/sysdeps/arm/armv7/multiarch/memcpy_impl.S
index 2de17263..802c310f 100644
--- a/sysdeps/arm/armv7/multiarch/memcpy_impl.S
+++ b/sysdeps/arm/armv7/multiarch/memcpy_impl.S
@@ -268,7 +268,7 @@ ENTRY(memcpy)
 
 	mov	dst, dstin	/* Preserve dstin, we need to return it.  */
 	cmp	count, #64
-	bge	.Lcpy_not_short
+	bhs	.Lcpy_not_short
 	/* Deal with small copies quickly by dropping straight into the
 	   exit block.  */
 
@@ -351,10 +351,10 @@ ENTRY(memcpy)
 
 1:
 	subs	tmp2, count, #64	/* Use tmp2 for count.  */
-	blt	.Ltail63aligned
+	blo	.Ltail63aligned
 
 	cmp	tmp2, #512
-	bge	.Lcpy_body_long
+	bhs	.Lcpy_body_long
 
 .Lcpy_body_medium:			/* Count in tmp2.  */
 #ifdef USE_VFP
@@ -378,7 +378,7 @@ ENTRY(memcpy)
 	add	src, src, #64
 	vstr	d1, [dst, #56]
 	add	dst, dst, #64
-	bge	1b
+	bhs	1b
 	tst	tmp2, #0x3f
 	beq	.Ldone
 
@@ -412,7 +412,7 @@ ENTRY(memcpy)
 	ldrd	A_l, A_h, [src, #64]!
 	strd	A_l, A_h, [dst, #64]!
 	subs	tmp2, tmp2, #64
-	bge	1b
+	bhs	1b
 	tst	tmp2, #0x3f
 	bne	1f
 	ldr	tmp2, [sp], #FRAME_SIZE
@@ -482,7 +482,7 @@ ENTRY(memcpy)
 	add	src, src, #32
 
 	subs	tmp2, tmp2, #prefetch_lines * 64 * 2
-	blt	2f
+	blo	2f
 1:
 	cpy_line_vfp	d3, 0
 	cpy_line_vfp	d4, 64
@@ -494,7 +494,7 @@ ENTRY(memcpy)
 	add	dst, dst, #2 * 64
 	add	src, src, #2 * 64
 	subs	tmp2, tmp2, #prefetch_lines * 64
-	bge	1b
+	bhs	1b
 
 2:
 	cpy_tail_vfp	d3, 0
@@ -615,8 +615,8 @@ ENTRY(memcpy)
 1:
 	pld	[src, #(3 * 64)]
 	subs	count, count, #64
-	ldrmi	tmp2, [sp], #FRAME_SIZE
-	bmi	.Ltail63unaligned
+	ldrlo	tmp2, [sp], #FRAME_SIZE
+	blo	.Ltail63unaligned
 	pld	[src, #(4 * 64)]
 
 #ifdef USE_NEON
@@ -633,7 +633,7 @@ ENTRY(memcpy)
 	neon_load_multi	d0-d3, src
 	neon_load_multi	d4-d7, src
 	subs	count, count, #64
-	bmi	2f
+	blo	2f
 1:
 	pld	[src, #(4 * 64)]
 	neon_store_multi	d0-d3, dst
@@ -641,7 +641,7 @@ ENTRY(memcpy)
 	neon_store_multi	d4-d7, dst
 	neon_load_multi	d4-d7, src
 	subs	count, count, #64
-	bpl	1b
+	bhs	1b
 2:
 	neon_store_multi	d0-d3, dst
 	neon_store_multi	d4-d7, dst