From patchwork Fri Apr 9 17:49:10 2010
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 50158
From: Richard Henderson
Date: Fri, 9 Apr 2010 10:49:10 -0700
To: qemu-devel@nongnu.org
Cc: aurelien@aurel32.net
Subject: [Qemu-devel] [PATCH 4/6] tcg-hppa: Schedule the address masking after the TLB load.

Issue the TLB load as early as possible and perform the address masking
while the load is completing.

Signed-off-by: Richard Henderson
---
 tcg/hppa/tcg-target.c |    7 ++++++-
 1 files changed, 6 insertions(+), 1 deletions(-)

diff --git a/tcg/hppa/tcg-target.c b/tcg/hppa/tcg-target.c
index 2f3b770..6941e22 100644
--- a/tcg/hppa/tcg-target.c
+++ b/tcg/hppa/tcg-target.c
@@ -904,7 +904,6 @@ static int tcg_out_tlb_read(TCGContext *s, int r0, int r1, int addrlo,
        CPU_TLB_ENTRY_BITS is > 3, so we can't merge that shift with the
        add that follows.  */
     tcg_out_extr(s, r1, addrlo, TARGET_PAGE_BITS, CPU_TLB_BITS, 0);
-    tcg_out_andi(s, r0, addrlo, TARGET_PAGE_MASK | ((1 << s_bits) - 1));
     tcg_out_shli(s, r1, r1, CPU_TLB_ENTRY_BITS);
     tcg_out_arith(s, r1, r1, TCG_AREG0, INSN_ADDL);
 
@@ -927,6 +926,12 @@ static int tcg_out_tlb_read(TCGContext *s, int r0, int r1, int addrlo,
         tcg_out_ld(s, TCG_TYPE_PTR, TCG_REG_R20, r1, offset);
     }
 
+    /* Compute the value that ought to appear in the TLB for a hit, namely, the page
+       of the address.  We include the low N bits of the address to catch unaligned
+       accesses and force them onto the slow path.  Do this computation after having
+       issued the load from the TLB slot to give the load time to complete.  */
+    tcg_out_andi(s, r0, addrlo, TARGET_PAGE_MASK | ((1 << s_bits) - 1));
+
     /* If not equal, jump to lab_miss.  */
     if (TARGET_LONG_BITS == 64) {
         tcg_out_brcond2(s, TCG_COND_NE, TCG_REG_R20, TCG_REG_R23,
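For readers without the rest of the backend at hand, here is a rough C sketch of
the fast-path ordering the patch is aiming for: compute the TLB index, issue the
load of the TLB comparator, do the independent address masking while that load is
in flight, and only then compare.  This is an illustration, not the code the
backend emits (the backend emits hppa instructions); names such as tlb_comparator,
TLB_SIZE and tlb_hit_fast_path, and the constant values used, are assumptions made
up for the example rather than QEMU identifiers.

#include <stdint.h>
#include <stdbool.h>

/* Values assumed for the sketch; the real ones come from the target.  */
#define TARGET_PAGE_BITS  12
#define TARGET_PAGE_MASK  (~((1u << TARGET_PAGE_BITS) - 1))
#define CPU_TLB_BITS      8
#define TLB_SIZE          (1u << CPU_TLB_BITS)

/* Illustrative stand-in for the per-CPU array of TLB comparators,
   each holding the page address that marks a hit for that slot.  */
static uint32_t tlb_comparator[TLB_SIZE];

static bool tlb_hit_fast_path(uint32_t addr, unsigned s_bits)
{
    /* 1. Derive the TLB index from the page-number bits of the address
          (roughly what the tcg_out_extr call in the first hunk does).  */
    uint32_t index = (addr >> TARGET_PAGE_BITS) & (TLB_SIZE - 1);

    /* 2. Issue the load of the TLB comparator as early as possible; in the
          backend this corresponds to the shli/addl arithmetic plus tcg_out_ld.  */
    uint32_t tag = tlb_comparator[index];

    /* 3. While that load is still in flight, mask the address down to its page.
          Including the low s_bits of the address means an unaligned access never
          compares equal and falls through to the slow path.  This is the
          tcg_out_andi the patch moves below the load.  */
    uint32_t masked = addr & (TARGET_PAGE_MASK | ((1u << s_bits) - 1));

    /* 4. Only now consume the loaded value.  */
    return tag == masked;
}

Because the masking in step 3 does not depend on the loaded value, placing it
between the load and the compare lets it fill the load-use delay instead of
lengthening the dependent compare-and-branch.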