From patchwork Thu May 6 13:55:13 2010
X-Patchwork-Submitter: Bernhard Kohl
X-Patchwork-Id: 51851
Message-ID: <4BE2CA41.8060701@nsn.com>
Date: Thu, 06 May 2010 15:55:13 +0200
From: Bernhard Kohl
To: qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH][RESEND] exec: optimize lduw_phys and stw_phys

Implementation of the optimized code for these two functions. This is
necessary for virtio, which reads and writes the VirtQueue index fields
using these functions. The assumption is that these accesses are atomic,
which does not hold if the memcpy() used in the unoptimized code copies
single bytes. This happens, for example, with an older WindRiver glibc.

Signed-off-by: Bernhard Kohl
---
RESEND: This message did not reach the gmane archive and maybe others.
---
 exec.c |   67 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 files changed, 60 insertions(+), 7 deletions(-)

diff --git a/exec.c b/exec.c
index 14d1fd7..fb40398 100644
--- a/exec.c
+++ b/exec.c
@@ -3739,12 +3739,36 @@ uint32_t ldub_phys(target_phys_addr_t addr)
     return val;
 }
 
-/* XXX: optimize */
+/* warning: addr must be aligned */
 uint32_t lduw_phys(target_phys_addr_t addr)
 {
-    uint16_t val;
-    cpu_physical_memory_read(addr, (uint8_t *)&val, 2);
-    return tswap16(val);
+    int io_index;
+    uint8_t *ptr;
+    uint32_t val;
+    unsigned long pd;
+    PhysPageDesc *p;
+
+    p = phys_page_find(addr >> TARGET_PAGE_BITS);
+    if (!p) {
+        pd = IO_MEM_UNASSIGNED;
+    } else {
+        pd = p->phys_offset;
+    }
+
+    if ((pd & ~TARGET_PAGE_MASK) > IO_MEM_ROM &&
+        !(pd & IO_MEM_ROMD)) {
+        /* I/O case */
+        io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
+        if (p)
+            addr = (addr & ~TARGET_PAGE_MASK) + p->region_offset;
+        val = io_mem_read[io_index][1](io_mem_opaque[io_index], addr);
+    } else {
+        /* RAM case */
+        ptr = qemu_get_ram_ptr(pd & TARGET_PAGE_MASK) +
+            (addr & ~TARGET_PAGE_MASK);
+        val = lduw_p(ptr);
+    }
+    return val;
 }
 
 /* warning: addr must be aligned. The ram page is not masked as dirty
@@ -3861,11 +3885,40 @@ void stb_phys(target_phys_addr_t addr, uint32_t val)
     cpu_physical_memory_write(addr, &v, 1);
 }
 
-/* XXX: optimize */
+/* warning: addr must be aligned */
 void stw_phys(target_phys_addr_t addr, uint32_t val)
 {
-    uint16_t v = tswap16(val);
-    cpu_physical_memory_write(addr, (const uint8_t *)&v, 2);
+    int io_index;
+    uint8_t *ptr;
+    unsigned long pd;
+    PhysPageDesc *p;
+
+    p = phys_page_find(addr >> TARGET_PAGE_BITS);
+    if (!p) {
+        pd = IO_MEM_UNASSIGNED;
+    } else {
+        pd = p->phys_offset;
+    }
+
+    if ((pd & ~TARGET_PAGE_MASK) != IO_MEM_RAM) {
+        io_index = (pd >> IO_MEM_SHIFT) & (IO_MEM_NB_ENTRIES - 1);
+        if (p)
+            addr = (addr & ~TARGET_PAGE_MASK) + p->region_offset;
+        io_mem_write[io_index][1](io_mem_opaque[io_index], addr, val);
+    } else {
+        unsigned long addr1;
+        addr1 = (pd & TARGET_PAGE_MASK) + (addr & ~TARGET_PAGE_MASK);
+        /* RAM case */
+        ptr = qemu_get_ram_ptr(addr1);
+        stw_p(ptr, val);
+        if (!cpu_physical_memory_is_dirty(addr1)) {
+            /* invalidate code */
+            tb_invalidate_phys_page_range(addr1, addr1 + 2, 0);
+            /* set dirty bit */
+            phys_ram_dirty[addr1 >> TARGET_PAGE_BITS] |=
+                (0xff & ~CODE_DIRTY_FLAG);
+        }
+    }
 }
 
 /* XXX: optimize */