From patchwork Fri May 31 17:57:03 2013
X-Patchwork-Submitter: Jani Kokkonen
X-Patchwork-Id: 248011
Message-ID: <51A8E46F.5030707@huawei.com>
In-Reply-To: <51A8E339.5000500@huawei.com>
References: <51A8E339.5000500@huawei.com>
Date: Fri, 31 May 2013 19:57:03 +0200
From: Jani Kokkonen
To: Peter Maydell
Cc: Laurent Desnogues, Claudio Fontana, qemu-devel@nongnu.org, Richard Henderson
Subject: [Qemu-devel] [PATCH 1/4] tcg/aarch64: more low level ops in preparation of tlb lookup

From: Claudio Fontana

For arith operations, add SUBS and add a shift parameter so that all arith
instructions can make use of shifted registers. Also add functions to
TEST/AND registers with immediate patterns.
Signed-off-by: Claudio Fontana
---
 tcg/aarch64/tcg-target.c | 72 ++++++++++++++++++++++++++++++++++++++----------
 1 file changed, 58 insertions(+), 14 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index ff626eb..1343d49 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -188,6 +188,7 @@ enum aarch64_ldst_op_type { /* type of operation */
 enum aarch64_arith_opc {
     ARITH_ADD = 0x0b,
     ARITH_SUB = 0x4b,
+    ARITH_SUBS = 0x6b,
     ARITH_AND = 0x0a,
     ARITH_OR = 0x2a,
     ARITH_XOR = 0x4a
@@ -394,12 +395,20 @@ static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
 }
 
 static inline void tcg_out_arith(TCGContext *s, enum aarch64_arith_opc opc,
-                                 int ext, TCGReg rd, TCGReg rn, TCGReg rm)
+                                 int ext, TCGReg rd, TCGReg rn, TCGReg rm,
+                                 int shift_imm)
 {
     /* Using shifted register arithmetic operations */
     /* if extended registry operation (64bit) just OR with 0x80 << 24 */
-    unsigned int base = ext ? (0x80 | opc) << 24 : opc << 24;
-    tcg_out32(s, base | rm << 16 | rn << 5 | rd);
+    unsigned int shift, base = ext ? (0x80 | opc) << 24 : opc << 24;
+    if (shift_imm == 0) {
+        shift = 0;
+    } else if (shift_imm > 0) {
+        shift = shift_imm << 10 | 1 << 22;
+    } else /* (shift_imm < 0) */ {
+        shift = (-shift_imm) << 10;
+    }
+    tcg_out32(s, base | rm << 16 | shift | rn << 5 | rd);
 }
 
 static inline void tcg_out_mul(TCGContext *s, int ext,
@@ -482,11 +491,11 @@ static inline void tcg_out_rotl(TCGContext *s, int ext,
     tcg_out_extr(s, ext, rd, rn, rn, bits - (m & max));
 }
 
-static inline void tcg_out_cmp(TCGContext *s, int ext, TCGReg rn, TCGReg rm)
+static inline void tcg_out_cmp(TCGContext *s, int ext, TCGReg rn, TCGReg rm,
+                               int shift_imm)
 {
     /* Using CMP alias SUBS wzr, Wn, Wm */
-    unsigned int base = ext ? 0xeb00001f : 0x6b00001f;
-    tcg_out32(s, base | rm << 16 | rn << 5);
+    tcg_out_arith(s, ARITH_SUBS, ext, TCG_REG_XZR, rn, rm, shift_imm);
 }
 
 static inline void tcg_out_cset(TCGContext *s, int ext, TCGReg rd, TCGCond c)
@@ -569,6 +578,40 @@ static inline void tcg_out_call(TCGContext *s, tcg_target_long target)
     }
 }
 
+/* encode a logical immediate, mapping user parameter
+   M=set bits pattern length to S=M-1 */
+static inline unsigned int
+aarch64_limm(unsigned int m, unsigned int r)
+{
+    assert(m > 0);
+    return r << 16 | (m - 1) << 10;
+}
+
+/* test a register against an immediate bit pattern made of
+   M set bits rotated right by R.
+   Examples:
+   to test a 32/64 reg against 0x00000007, pass M = 3,  R = 0.
+   to test a 32/64 reg against 0x000000ff, pass M = 8,  R = 0.
+   to test a 32bit reg against 0xff000000, pass M = 8,  R = 8.
+   to test a 32bit reg against 0xff0000ff, pass M = 16, R = 8.
+ */
+static inline void tcg_out_tst(TCGContext *s, int ext, TCGReg rn,
+                               unsigned int m, unsigned int r)
+{
+    /* using TST alias of ANDS XZR, Xn,#bimm64 0x7200001f */
+    unsigned int base = ext ? 0xf240001f : 0x7200001f;
+    tcg_out32(s, base | aarch64_limm(m, r) | rn << 5);
+}
+
+/* and a register with a bit pattern, similarly to TST, no flags change */
+static inline void tcg_out_andi(TCGContext *s, int ext, TCGReg rd, TCGReg rn,
+                                unsigned int m, unsigned int r)
+{
+    /* using AND 0x12000000 */
+    unsigned int base = ext ? 0x92400000 : 0x12000000;
+    tcg_out32(s, base | aarch64_limm(m, r) | rn << 5 | rd);
+}
+
 static inline void tcg_out_ret(TCGContext *s)
 {
     /* emit RET { LR } */
@@ -830,31 +873,31 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
     case INDEX_op_add_i64:
         ext = 1; /* fall through */
     case INDEX_op_add_i32:
-        tcg_out_arith(s, ARITH_ADD, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_ADD, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_sub_i64:
         ext = 1; /* fall through */
     case INDEX_op_sub_i32:
-        tcg_out_arith(s, ARITH_SUB, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_SUB, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_and_i64:
         ext = 1; /* fall through */
     case INDEX_op_and_i32:
-        tcg_out_arith(s, ARITH_AND, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_AND, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_or_i64:
         ext = 1; /* fall through */
     case INDEX_op_or_i32:
-        tcg_out_arith(s, ARITH_OR, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_OR, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_xor_i64:
         ext = 1; /* fall through */
     case INDEX_op_xor_i32:
-        tcg_out_arith(s, ARITH_XOR, ext, args[0], args[1], args[2]);
+        tcg_out_arith(s, ARITH_XOR, ext, args[0], args[1], args[2], 0);
         break;
 
     case INDEX_op_mul_i64:
@@ -909,7 +952,8 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         if (const_args[2]) {    /* ROR / EXTR Wd, Wm, Wm, 32 - m */
             tcg_out_rotl(s, ext, args[0], args[1], args[2]);
         } else {
-            tcg_out_arith(s, ARITH_SUB, 0, TCG_REG_TMP, TCG_REG_XZR, args[2]);
+            tcg_out_arith(s, ARITH_SUB, 0,
+                          TCG_REG_TMP, TCG_REG_XZR, args[2], 0);
             tcg_out_shiftrot_reg(s, SRR_ROR, ext,
                                  args[0], args[1], TCG_REG_TMP);
         }
@@ -918,14 +962,14 @@
     case INDEX_op_brcond_i64:
         ext = 1; /* fall through */
     case INDEX_op_brcond_i32: /* CMP 0, 1, cond(2), label 3 */
-        tcg_out_cmp(s, ext, args[0], args[1]);
+        tcg_out_cmp(s, ext, args[0], args[1], 0);
         tcg_out_goto_label_cond(s, args[2], args[3]);
         break;
 
     case INDEX_op_setcond_i64:
         ext = 1; /* fall through */
     case INDEX_op_setcond_i32:
-        tcg_out_cmp(s, ext, args[1], args[2]);
+        tcg_out_cmp(s, ext, args[1], args[2], 0);
         tcg_out_cset(s, 0, args[0], args[3]);
         break;
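
As a sanity check, something like the standalone program below (not part of the
patch) reproduces the bit manipulation of tcg_out_arith() and aarch64_limm()
and prints a few instruction words.  The register choices, the helper names
arith_insn()/limm_field() and the expected encodings in the comments are my own
illustration under the convention the patch introduces (shift_imm > 0 means
"Rm, LSR #imm", shift_imm < 0 means "Rm, LSL #-imm", 0 means a plain register
operand), not values taken from the patch itself.

/* Standalone sketch: mirrors the encoding logic of tcg_out_arith() and
 * aarch64_limm() above.  Register numbers and the expected words in the
 * comments are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

enum { ARITH_ADD = 0x0b, ARITH_SUB = 0x4b, ARITH_SUBS = 0x6b };

/* same bit layout as tcg_out_arith(): shift_imm > 0 -> Rm, LSR #imm,
   shift_imm < 0 -> Rm, LSL #-imm, shift_imm == 0 -> plain register */
static uint32_t arith_insn(unsigned int opc, int ext, unsigned int rd,
                           unsigned int rn, unsigned int rm, int shift_imm)
{
    unsigned int shift, base = ext ? (0x80 | opc) << 24 : opc << 24;

    if (shift_imm == 0) {
        shift = 0;
    } else if (shift_imm > 0) {
        shift = shift_imm << 10 | 1 << 22;
    } else {
        shift = (-shift_imm) << 10;
    }
    return base | rm << 16 | shift | rn << 5 | rd;
}

/* same as aarch64_limm(): immr = R, imms = M - 1 */
static uint32_t limm_field(unsigned int m, unsigned int r)
{
    return r << 16 | (m - 1) << 10;
}

int main(void)
{
    /* subs xzr, x0, x1, lsr #12 -> expect 0xeb41301f (xzr is reg 31) */
    printf("0x%08x\n", arith_insn(ARITH_SUBS, 1, 31, 0, 1, 12));
    /* add  w2, w3, w4, lsl #3   -> expect 0x0b040c62 */
    printf("0x%08x\n", arith_insn(ARITH_ADD, 0, 2, 3, 4, -3));
    /* tst  w0, #0xff (ANDS wzr) -> expect 0x72001c1f, i.e. M = 8, R = 0 */
    printf("0x%08x\n", 0x7200001f | limm_field(8, 0) | 0 << 5);
    return 0;
}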