From patchwork Mon Sep  2 17:54:39 2013
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 272022
From: Richard Henderson <rth@twiddle.net>
To: qemu-devel@nongnu.org
Cc: claudio.fontana@huawei.com, Richard Henderson
Date: Mon,  2 Sep 2013 10:54:39 -0700
Message-Id: <1378144503-15808-6-git-send-email-rth@twiddle.net>
In-Reply-To: <1378144503-15808-1-git-send-email-rth@twiddle.net>
References: <1378144503-15808-1-git-send-email-rth@twiddle.net>
Subject: [Qemu-devel] [PATCH v3 05/29] tcg-aarch64: Change enum aarch64_arith_opc to AArch64Insn

And since we're no longer talking about opcodes, change the values to be
shifted into the opcode field, avoiding a shift at runtime.
Signed-off-by: Richard Henderson <rth@twiddle.net>
---
 tcg/aarch64/tcg-target.c | 43 +++++++++++++++++++++++--------------------
 1 file changed, 23 insertions(+), 20 deletions(-)

diff --git a/tcg/aarch64/tcg-target.c b/tcg/aarch64/tcg-target.c
index 974a1d0..d1ca402 100644
--- a/tcg/aarch64/tcg-target.c
+++ b/tcg/aarch64/tcg-target.c
@@ -199,16 +199,19 @@ enum aarch64_ldst_op_type { /* type of operation */
     LDST_LD_S_W = 0xc,  /* load and sign-extend into Wt */
 };
 
-enum aarch64_arith_opc {
-    ARITH_AND = 0x0a,
-    ARITH_ADD = 0x0b,
-    ARITH_OR = 0x2a,
-    ARITH_ADDS = 0x2b,
-    ARITH_XOR = 0x4a,
-    ARITH_SUB = 0x4b,
-    ARITH_ANDS = 0x6a,
-    ARITH_SUBS = 0x6b,
-};
+typedef enum {
+    /* Logical shifted register instructions */
+    INSN_AND  = 0x0a000000,
+    INSN_ORR  = 0x2a000000,
+    INSN_EOR  = 0x4a000000,
+    INSN_ANDS = 0x6a000000,
+
+    /* Add/subtract shifted register instructions */
+    INSN_ADD  = 0x0b000000,
+    INSN_ADDS = 0x2b000000,
+    INSN_SUB  = 0x4b000000,
+    INSN_SUBS = 0x6b000000,
+} AArch64Insn;
 
 enum aarch64_srr_opc {
     SRR_SHL = 0x0,
@@ -436,13 +439,13 @@ static inline void tcg_out_st(TCGContext *s, TCGType type, TCGReg arg,
                arg, arg1, arg2);
 }
 
-static inline void tcg_out_arith(TCGContext *s, enum aarch64_arith_opc opc,
+static inline void tcg_out_arith(TCGContext *s, AArch64Insn insn,
                                  bool ext, TCGReg rd, TCGReg rn, TCGReg rm,
                                  int shift_imm)
 {
     /* Using shifted register arithmetic operations */
     /* if extended register operation (64bit) just OR with 0x80 << 24 */
-    unsigned int shift, base = ext ? (0x80 | opc) << 24 : opc << 24;
+    unsigned int shift, base = insn | (ext ? 0x80000000 : 0);
     if (shift_imm == 0) {
         shift = 0;
     } else if (shift_imm > 0) {
@@ -537,7 +540,7 @@ static inline void tcg_out_cmp(TCGContext *s, bool ext, TCGReg rn, TCGReg rm,
                                int shift_imm)
 {
     /* Using CMP alias SUBS wzr, Wn, Wm */
-    tcg_out_arith(s, ARITH_SUBS, ext, TCG_REG_XZR, rn, rm, shift_imm);
+    tcg_out_arith(s, INSN_SUBS, ext, TCG_REG_XZR, rn, rm, shift_imm);
 }
 
 static inline void tcg_out_cset(TCGContext *s, bool ext, TCGReg rd, TCGCond c)
@@ -894,7 +897,7 @@ static void tcg_out_tlb_read(TCGContext *s, TCGReg addr_reg,
     tcg_out_addi(s, 1, TCG_REG_X2, base, tlb_offset & 0xfff000);
     /* Merge the tlb index contribution into X2.
        X2 = X2 + (X0 << CPU_TLB_ENTRY_BITS) */
-    tcg_out_arith(s, ARITH_ADD, 1, TCG_REG_X2, TCG_REG_X2,
+    tcg_out_arith(s, INSN_ADD, 1, TCG_REG_X2, TCG_REG_X2,
                   TCG_REG_X0, -CPU_TLB_ENTRY_BITS);
     /* Merge "low bits" from tlb offset, load the tlb comparator into X0.
        X0 = load [X2 + (tlb_offset & 0x000fff)] */
@@ -1171,27 +1174,27 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
 
     case INDEX_op_add_i64:
     case INDEX_op_add_i32:
-        tcg_out_arith(s, ARITH_ADD, ext, a0, a1, a2, 0);
+        tcg_out_arith(s, INSN_ADD, ext, a0, a1, a2, 0);
         break;
 
     case INDEX_op_sub_i64:
     case INDEX_op_sub_i32:
-        tcg_out_arith(s, ARITH_SUB, ext, a0, a1, a2, 0);
+        tcg_out_arith(s, INSN_SUB, ext, a0, a1, a2, 0);
         break;
 
     case INDEX_op_and_i64:
     case INDEX_op_and_i32:
-        tcg_out_arith(s, ARITH_AND, ext, a0, a1, a2, 0);
+        tcg_out_arith(s, INSN_AND, ext, a0, a1, a2, 0);
         break;
 
     case INDEX_op_or_i64:
    case INDEX_op_or_i32:
-        tcg_out_arith(s, ARITH_OR, ext, a0, a1, a2, 0);
+        tcg_out_arith(s, INSN_ORR, ext, a0, a1, a2, 0);
         break;
 
     case INDEX_op_xor_i64:
     case INDEX_op_xor_i32:
-        tcg_out_arith(s, ARITH_XOR, ext, a0, a1, a2, 0);
+        tcg_out_arith(s, INSN_EOR, ext, a0, a1, a2, 0);
         break;
 
     case INDEX_op_mul_i64:
@@ -1240,7 +1243,7 @@ static void tcg_out_op(TCGContext *s, TCGOpcode opc,
         if (c2) {    /* ROR / EXTR Wd, Wm, Wm, 32 - m */
             tcg_out_rotl(s, ext, a0, a1, a2);
         } else {
-            tcg_out_arith(s, ARITH_SUB, 0, TCG_REG_TMP, TCG_REG_XZR, a2, 0);
+            tcg_out_arith(s, INSN_SUB, 0, TCG_REG_TMP, TCG_REG_XZR, a2, 0);
             tcg_out_shiftrot_reg(s, SRR_ROR, ext, a0, a1, TCG_REG_TMP);
         }
         break;
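To illustrate the point of the commit message: once the enum values carry the
opcode already shifted into bits [31:24], emitting an instruction word is a
plain OR of fields, with no shift of the opcode constant at runtime. Below is
a minimal standalone sketch, not QEMU code: encode_arith and its plain
unsigned register arguments are hypothetical stand-ins for tcg_out_arith and
TCGReg, and the shift operand is omitted (taken as zero). It composes a
shifted-register arithmetic instruction the same way the patched emitter does
and checks the result against two known AArch64 encodings:

    #include <assert.h>
    #include <stdint.h>

    /* Pre-shifted values, as in the patch: the opcode already sits in its
       final bit position, so no shift is needed at emit time. */
    enum {
        INSN_ADD = 0x0b000000,
        INSN_SUB = 0x4b000000,
    };

    /* Mirror of the patched tcg_out_arith composition: OR in the sf bit
       (bit 31) for a 64-bit operation, then the register fields at their
       architectural positions (Rm at 16, Rn at 5, Rd at 0). */
    static uint32_t encode_arith(uint32_t insn, int ext,
                                 unsigned rd, unsigned rn, unsigned rm)
    {
        uint32_t base = insn | (ext ? 0x80000000u : 0);
        return base | rm << 16 | rn << 5 | rd;
    }

    int main(void)
    {
        /* "add x0, x1, x2" assembles to 0x8b020020. */
        assert(encode_arith(INSN_ADD, 1, 0, 1, 2) == 0x8b020020);
        /* "sub w3, w4, w5" assembles to 0x4b050083. */
        assert(encode_arith(INSN_SUB, 0, 3, 4, 5) == 0x4b050083);
        return 0;
    }

Under the old enum, the emitter computed (0x80 | opc) << 24 or opc << 24 on
every emission; folding the shift into the enumerator constants moves that
work to compile time, which is exactly the "avoiding a shift at runtime" in
the commit message.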