From patchwork Thu Jun 10 15:02:17 2010
X-Patchwork-Submitter: Richard Biener
X-Patchwork-Id: 55228
Date: Thu, 10 Jun 2010 17:02:17 +0200 (CEST)
From: Richard Guenther
To: gcc-patches@gcc.gnu.org
Subject: [PATCH][mem-ref2] Fix memmove folding

This fixes memmove folding to handle MEM_REF arguments.  Since we fold
during gimplification and also after the arguments have been gimplified,
we can drop the INDIRECT_REF path and rely on already gimplified
arguments.

Bootstrapped and tested on x86_64-unknown-linux-gnu, applied to the branch.

Richard.

2010-06-10  Richard Guenther

        * builtins.c (fold_builtin_memory_op): Simplify and handle
        MEM_REFs in folding memmove to memcpy.

Index: builtins.c
===================================================================
--- builtins.c  (revision 160434)
+++ builtins.c  (working copy)
@@ -8370,37 +8376,26 @@ fold_builtin_memory_op (location_t loc,
         }
 
       /* If *src and *dest can't overlap, optimize into memcpy as well.  */
-      srcvar = build_fold_indirect_ref_loc (loc, src);
-      destvar = build_fold_indirect_ref_loc (loc, dest);
-      if (srcvar
-          && !TREE_THIS_VOLATILE (srcvar)
-          && destvar
-          && !TREE_THIS_VOLATILE (destvar))
+      if (TREE_CODE (src) == ADDR_EXPR
+          && TREE_CODE (dest) == ADDR_EXPR)
         {
           tree src_base, dest_base, fn;
           HOST_WIDE_INT src_offset = 0, dest_offset = 0;
           HOST_WIDE_INT size = -1;
           HOST_WIDE_INT maxsize = -1;
 
-          src_base = srcvar;
-          if (handled_component_p (src_base))
-            src_base = get_ref_base_and_extent (src_base, &src_offset,
-                                                &size, &maxsize);
-          dest_base = destvar;
-          if (handled_component_p (dest_base))
-            dest_base = get_ref_base_and_extent (dest_base, &dest_offset,
-                                                 &size, &maxsize);
+          srcvar = TREE_OPERAND (src, 0);
+          src_base = get_ref_base_and_extent (srcvar, &src_offset,
+                                              &size, &maxsize);
+          destvar = TREE_OPERAND (dest, 0);
+          dest_base = get_ref_base_and_extent (destvar, &dest_offset,
+                                               &size, &maxsize);
           if (host_integerp (len, 1))
-            {
-              maxsize = tree_low_cst (len, 1);
-              if (maxsize
-                  > INTTYPE_MAXIMUM (HOST_WIDE_INT) / BITS_PER_UNIT)
-                maxsize = -1;
-              else
-                maxsize *= BITS_PER_UNIT;
-            }
+            maxsize = tree_low_cst (len, 1);
           else
             maxsize = -1;
+          src_offset /= BITS_PER_UNIT;
+          dest_offset /= BITS_PER_UNIT;
           if (SSA_VAR_P (src_base)
               && SSA_VAR_P (dest_base))
             {
@@ -8409,13 +8404,25 @@ fold_builtin_memory_op (location_t loc,
                                        dest_offset, maxsize))
                 return NULL_TREE;
             }
-          else if (TREE_CODE (src_base) == INDIRECT_REF
-                   && TREE_CODE (dest_base) == INDIRECT_REF)
+          else if (TREE_CODE (src_base) == MEM_REF
+                   && TREE_CODE (dest_base) == MEM_REF)
             {
+              double_int off;
               if (! operand_equal_p (TREE_OPERAND (src_base, 0),
-                                     TREE_OPERAND (dest_base, 0), 0)
-                  || ranges_overlap_p (src_offset, maxsize,
-                                       dest_offset, maxsize))
+                                     TREE_OPERAND (dest_base, 0), 0))
+                return NULL_TREE;
+              off = double_int_add (mem_ref_offset (src_base),
+                                    shwi_to_double_int (src_offset));
+              if (!double_int_fits_in_shwi_p (off))
+                return NULL_TREE;
+              src_offset = off.low;
+              off = double_int_add (mem_ref_offset (dest_base),
+                                    shwi_to_double_int (dest_offset));
+              if (!double_int_fits_in_shwi_p (off))
+                return NULL_TREE;
+              dest_offset = off.low;
+              if (ranges_overlap_p (src_offset, maxsize,
+                                    dest_offset, maxsize))
+                return NULL_TREE;
             }
           else
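
For illustration only (a minimal sketch of mine, not part of the patch or
the testsuite): with both arguments taking their address from the same
pointer, the folder now sees two MEM_REF bases with equal pointer operands,
and whether the memmove may become a memcpy boils down to ranges_overlap_p
on the byte offsets and the constant length.

  #include <string.h>

  struct S { char buf[32]; };

  void
  copy_low_from_high (struct S *p)
  {
    /* Both arguments are ADDR_EXPRs of references based on *p.  The byte
       offsets are 0 and 16 and the length is 16, so the ranges [0, 16)
       and [16, 32) do not overlap and the call can be turned into
       memcpy.  */
    memmove (&p->buf[0], &p->buf[16], 16);
  }

(Whether the fold actually fires also depends on the other checks in
fold_builtin_memory_op; the sketch only shows the shape of the new
overlap test.)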