From patchwork Wed Apr 13 11:30:31 2011
X-Patchwork-Submitter: Nick Clifton
X-Patchwork-Id: 90986
From: Nick Clifton
To: gcc-patches@gcc.gnu.org
Subject: RX: Do not use SMOVF insn to move blocks of volatile memory.
Date: Wed, 13 Apr 2011 12:30:31 +0100

Hi Guys,

  I am checking in the patch below to the 4.5 branch and mainline
sources to fix a problem with the RX's SMOVF instruction.
  This instruction copies blocks of memory, but it always loads and
stores aligned 32-bit values.  If necessary, it will load extra bytes
from the beginning or end of the destination block in order to be able
to write back a whole word.  This can be a problem if the destination
block is in the I/O address space and those extra bytes do not exist
or must not be read.

  The patch fixes the problem by disabling the use of the SMOVF
instruction when volatile pointers are involved.  In this case gcc
will be forced to use another method to copy the data, most likely a
loop of byte loads and stores.

Cheers
  Nick

PS. I am not applying the patch to the 4.6 branch since it is already
present there.

gcc/ChangeLog
2011-04-13  Nick Clifton

	* config/rx/rx.md (movmemsi): Do not use this pattern when
	volatile pointers are involved.

Index: gcc/config/rx/rx.md
===================================================================
--- gcc/config/rx/rx.md	(revision 170734)
+++ gcc/config/rx/rx.md	(working copy)
@@ -1897,6 +1897,14 @@
     rtx addr2 = gen_rtx_REG (SImode, 2);
     rtx len = gen_rtx_REG (SImode, 3);
 
+    /* Do not use when the source or destination are volatile - the SMOVF
+       instruction will read and write in word sized blocks, which may be
+       outside of the valid address range.  */
+    if (MEM_P (operands[0]) && MEM_VOLATILE_P (operands[0]))
+      FAIL;
+    if (MEM_P (operands[1]) && MEM_VOLATILE_P (operands[1]))
+      FAIL;
+
     if (REG_P (operands[0])
 	&& (REGNO (operands[0]) == 2
 	    || REGNO (operands[0]) == 3))
       FAIL;