From patchwork Wed Aug 17 21:11:46 2011
X-Patchwork-Submitter: Richard Henderson
X-Patchwork-Id: 110422
From: Richard Henderson
To: qemu-devel@nongnu.org
Date: Wed, 17 Aug 2011 14:11:46 -0700
Message-Id: <1313615510-10615-3-git-send-email-rth@twiddle.net>
In-Reply-To: <1313615510-10615-1-git-send-email-rth@twiddle.net>
References: <1313615510-10615-1-git-send-email-rth@twiddle.net>
Subject: [Qemu-devel] [PATCH 2/6] tcg: Always define all of the TCGOpcode enum members.

By always defining these symbols, we can eliminate a lot of ifdefs. To allow this to be checked reliably, the semantics of the TCG_TARGET_HAS_* macros must be changed from def/undef to true/false. This allows even more ifdefs to be removed, converting them into C if statements.
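In concrete terms, the convention change looks like the sketch below (foo, TCG_TARGET_HAS_foo_i32 and INDEX_op_foo_i32 are placeholder names that only follow the real naming pattern, and the else-branch expansion is a stand-in for illustration, not one taken from the patch). Every tcg-target.h now defines each capability macro to 1 or 0, every TCGOpcode enum member exists unconditionally, and the generic expanders in tcg-op.h select with an ordinary C if, relying on the compiler to fold the constant condition and drop the dead branch:

  /* A backend that implements the op says 1; one that does not says 0.
   * Either way the macro is defined, so "#ifdef" can become "if". */
  #define TCG_TARGET_HAS_foo_i32 1

  static inline void tcg_gen_foo_i32(TCGv_i32 ret, TCGv_i32 arg)
  {
      if (TCG_TARGET_HAS_foo_i32) {
          /* Emit the backend's native opcode.  The enum member is
           * always declared, so this compiles even when the macro
           * is 0; the branch is simply eliminated as dead code. */
          tcg_gen_op2_i32(INDEX_op_foo_i32, ret, arg);
      } else {
          /* Expand in terms of ops every backend must provide
           * (placeholder expansion for illustration only). */
          tcg_gen_mov_i32(ret, arg);
      }
  }

The 64-bit expanders follow the same shape; on 32-bit hosts they keep a single "#if TCG_TARGET_REG_BITS == 64 ... #else ... #endif" around the TCGV_LOW/TCGV_HIGH split, but the per-opcode ifdefs disappear.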
Signed-off-by: Richard Henderson --- tcg/arm/tcg-target.h | 30 +- tcg/hppa/tcg-target.h | 29 +- tcg/i386/tcg-target.h | 68 ++-- tcg/ia64/tcg-target.h | 66 ++-- tcg/mips/tcg-target.h | 31 +- tcg/optimize.c | 156 ++------- tcg/ppc/tcg-target.h | 31 +- tcg/ppc64/tcg-target.h | 68 ++-- tcg/s390/tcg-target.h | 68 ++-- tcg/sparc/tcg-target.h | 68 ++-- tcg/tcg-op.h | 946 +++++++++++++++++++++++------------------------- tcg/tcg-opc.h | 242 +++++-------- tcg/tcg.c | 4 + tcg/tcg.h | 38 ++ 14 files changed, 837 insertions(+), 1008 deletions(-) diff --git a/tcg/arm/tcg-target.h b/tcg/arm/tcg-target.h index d8d7d94..0e0f69a 100644 --- a/tcg/arm/tcg-target.h +++ b/tcg/arm/tcg-target.h @@ -58,20 +58,22 @@ enum { #define TCG_TARGET_CALL_STACK_OFFSET 0 /* optional instructions */ -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#undef TCG_TARGET_HAS_ext8u_i32 /* and r0, r1, #0xff */ -#define TCG_TARGET_HAS_ext16u_i32 -#define TCG_TARGET_HAS_bswap16_i32 -#define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_neg_i32 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_andc_i32 -// #define TCG_TARGET_HAS_orc_i32 -// #define TCG_TARGET_HAS_eqv_i32 -// #define TCG_TARGET_HAS_nand_i32 -// #define TCG_TARGET_HAS_nor_i32 +#define TCG_TARGET_HAS_div_i32 0 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 0 /* and r0, r1, #0xff */ +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 #define TCG_TARGET_HAS_GUEST_BASE diff --git a/tcg/hppa/tcg-target.h b/tcg/hppa/tcg-target.h index f7919ce..ed90efc 100644 --- a/tcg/hppa/tcg-target.h +++ b/tcg/hppa/tcg-target.h @@ -85,21 +85,24 @@ enum { #define TCG_TARGET_STACK_GROWSUP /* optional instructions */ -// #define TCG_TARGET_HAS_div_i32 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#define TCG_TARGET_HAS_bswap16_i32 -#define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_andc_i32 -// #define TCG_TARGET_HAS_orc_i32 -#define TCG_TARGET_HAS_deposit_i32 +#define TCG_TARGET_HAS_div_i32 0 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 1 /* optional instructions automatically implemented */ -#undef TCG_TARGET_HAS_neg_i32 /* sub rd, 0, rs */ -#undef TCG_TARGET_HAS_ext8u_i32 /* and rd, rs, 0xff */ -#undef TCG_TARGET_HAS_ext16u_i32 /* and rd, rs, 0xffff */ +#define TCG_TARGET_HAS_neg_i32 0 /* sub rd, 0, rs */ +#define TCG_TARGET_HAS_ext8u_i32 0 /* and rd, rs, 0xff */ +#define TCG_TARGET_HAS_ext16u_i32 0 /* and rd, rs, 0xffff */ #define TCG_TARGET_HAS_GUEST_BASE diff --git a/tcg/i386/tcg-target.h b/tcg/i386/tcg-target.h index bfafbfc..5088e47 100644 --- a/tcg/i386/tcg-target.h +++ b/tcg/i386/tcg-target.h @@ -75,41 +75,43 @@ enum { #define 
TCG_TARGET_CALL_STACK_OFFSET 0 /* optional instructions */ -#define TCG_TARGET_HAS_div2_i32 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#define TCG_TARGET_HAS_ext8u_i32 -#define TCG_TARGET_HAS_ext16u_i32 -#define TCG_TARGET_HAS_bswap16_i32 -#define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_neg_i32 -#define TCG_TARGET_HAS_not_i32 -// #define TCG_TARGET_HAS_andc_i32 -// #define TCG_TARGET_HAS_orc_i32 -// #define TCG_TARGET_HAS_eqv_i32 -// #define TCG_TARGET_HAS_nand_i32 -// #define TCG_TARGET_HAS_nor_i32 +#define TCG_TARGET_HAS_div2_i32 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 1 +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_andc_i32 0 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 #if TCG_TARGET_REG_BITS == 64 -#define TCG_TARGET_HAS_div2_i64 -#define TCG_TARGET_HAS_rot_i64 -#define TCG_TARGET_HAS_ext8s_i64 -#define TCG_TARGET_HAS_ext16s_i64 -#define TCG_TARGET_HAS_ext32s_i64 -#define TCG_TARGET_HAS_ext8u_i64 -#define TCG_TARGET_HAS_ext16u_i64 -#define TCG_TARGET_HAS_ext32u_i64 -#define TCG_TARGET_HAS_bswap16_i64 -#define TCG_TARGET_HAS_bswap32_i64 -#define TCG_TARGET_HAS_bswap64_i64 -#define TCG_TARGET_HAS_neg_i64 -#define TCG_TARGET_HAS_not_i64 -// #define TCG_TARGET_HAS_andc_i64 -// #define TCG_TARGET_HAS_orc_i64 -// #define TCG_TARGET_HAS_eqv_i64 -// #define TCG_TARGET_HAS_nand_i64 -// #define TCG_TARGET_HAS_nor_i64 +#define TCG_TARGET_HAS_div2_i64 1 +#define TCG_TARGET_HAS_rot_i64 1 +#define TCG_TARGET_HAS_ext8s_i64 1 +#define TCG_TARGET_HAS_ext16s_i64 1 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i64 1 +#define TCG_TARGET_HAS_ext16u_i64 1 +#define TCG_TARGET_HAS_ext32u_i64 1 +#define TCG_TARGET_HAS_bswap16_i64 1 +#define TCG_TARGET_HAS_bswap32_i64 1 +#define TCG_TARGET_HAS_bswap64_i64 1 +#define TCG_TARGET_HAS_neg_i64 1 +#define TCG_TARGET_HAS_not_i64 1 +#define TCG_TARGET_HAS_andc_i64 0 +#define TCG_TARGET_HAS_orc_i64 0 +#define TCG_TARGET_HAS_eqv_i64 0 +#define TCG_TARGET_HAS_nand_i64 0 +#define TCG_TARGET_HAS_nor_i64 0 +#define TCG_TARGET_HAS_deposit_i64 0 #endif #define TCG_TARGET_HAS_GUEST_BASE diff --git a/tcg/ia64/tcg-target.h b/tcg/ia64/tcg-target.h index e56e88f..ddc93c1 100644 --- a/tcg/ia64/tcg-target.h +++ b/tcg/ia64/tcg-target.h @@ -104,39 +104,43 @@ enum { #define TCG_TARGET_CALL_STACK_OFFSET 16 /* optional instructions */ -#define TCG_TARGET_HAS_andc_i32 -#define TCG_TARGET_HAS_andc_i64 -#define TCG_TARGET_HAS_bswap16_i32 -#define TCG_TARGET_HAS_bswap16_i64 -#define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_bswap32_i64 -#define TCG_TARGET_HAS_bswap64_i64 -#define TCG_TARGET_HAS_eqv_i32 -#define TCG_TARGET_HAS_eqv_i64 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#define TCG_TARGET_HAS_ext8s_i64 -#define TCG_TARGET_HAS_ext16s_i64 -#define TCG_TARGET_HAS_ext32s_i64 -#define TCG_TARGET_HAS_ext8u_i32 -#define TCG_TARGET_HAS_ext16u_i32 -#define TCG_TARGET_HAS_ext8u_i64 -#define TCG_TARGET_HAS_ext16u_i64 -#define TCG_TARGET_HAS_ext32u_i64 -#define TCG_TARGET_HAS_nand_i32 -#define TCG_TARGET_HAS_nand_i64 -#define TCG_TARGET_HAS_nor_i32 -#define TCG_TARGET_HAS_nor_i64 -#define 
TCG_TARGET_HAS_orc_i32 -#define TCG_TARGET_HAS_orc_i64 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_rot_i64 +#define TCG_TARGET_HAS_div_i32 0 +#define TCG_TARGET_HAS_div_i64 0 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_andc_i64 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap16_i64 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_bswap32_i64 1 +#define TCG_TARGET_HAS_bswap64_i64 1 +#define TCG_TARGET_HAS_eqv_i32 1 +#define TCG_TARGET_HAS_eqv_i64 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8s_i64 1 +#define TCG_TARGET_HAS_ext16s_i64 1 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i32 1 +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_ext8u_i64 1 +#define TCG_TARGET_HAS_ext16u_i64 1 +#define TCG_TARGET_HAS_ext32u_i64 1 +#define TCG_TARGET_HAS_nand_i32 1 +#define TCG_TARGET_HAS_nand_i64 1 +#define TCG_TARGET_HAS_nor_i32 1 +#define TCG_TARGET_HAS_nor_i64 1 +#define TCG_TARGET_HAS_orc_i32 1 +#define TCG_TARGET_HAS_orc_i64 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_rot_i64 1 +#define TCG_TARGET_HAS_deposit_i32 0 +#define TCG_TARGET_HAS_deposit_i64 0 /* optional instructions automatically implemented */ -#undef TCG_TARGET_HAS_neg_i32 /* sub r1, r0, r3 */ -#undef TCG_TARGET_HAS_neg_i64 /* sub r1, r0, r3 */ -#undef TCG_TARGET_HAS_not_i32 /* xor r1, -1, r3 */ -#undef TCG_TARGET_HAS_not_i64 /* xor r1, -1, r3 */ +#define TCG_TARGET_HAS_neg_i32 0 /* sub r1, r0, r3 */ +#define TCG_TARGET_HAS_neg_i64 0 /* sub r1, r0, r3 */ +#define TCG_TARGET_HAS_not_i32 0 /* xor r1, -1, r3 */ +#define TCG_TARGET_HAS_not_i64 0 /* xor r1, -1, r3 */ /* Note: must be synced with dyngen-exec.h */ #define TCG_AREG0 TCG_REG_R7 diff --git a/tcg/mips/tcg-target.h b/tcg/mips/tcg-target.h index 8cb7d88..43c5501 100644 --- a/tcg/mips/tcg-target.h +++ b/tcg/mips/tcg-target.h @@ -78,23 +78,24 @@ enum { #define TCG_TARGET_CALL_ALIGN_ARGS 1 /* optional instructions */ -#define TCG_TARGET_HAS_div_i32 -#define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_nor_i32 -#undef TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#undef TCG_TARGET_HAS_bswap32_i32 -#undef TCG_TARGET_HAS_bswap16_i32 -#undef TCG_TARGET_HAS_andc_i32 -#undef TCG_TARGET_HAS_orc_i32 -#undef TCG_TARGET_HAS_eqv_i32 -#undef TCG_TARGET_HAS_nand_i32 +#define TCG_TARGET_HAS_div_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_nor_i32 1 +#define TCG_TARGET_HAS_rot_i32 0 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 0 +#define TCG_TARGET_HAS_bswap16_i32 0 +#define TCG_TARGET_HAS_andc_i32 0 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 /* optional instructions automatically implemented */ -#undef TCG_TARGET_HAS_neg_i32 /* sub rd, zero, rt */ -#undef TCG_TARGET_HAS_ext8u_i32 /* andi rt, rs, 0xff */ -#undef TCG_TARGET_HAS_ext16u_i32 /* andi rt, rs, 0xffff */ +#define TCG_TARGET_HAS_neg_i32 0 /* sub rd, zero, rt */ +#define TCG_TARGET_HAS_ext8u_i32 0 /* andi rt, rs, 0xff */ +#define TCG_TARGET_HAS_ext16u_i32 0 /* andi rt, rs, 0xffff */ /* Note: must be synced with dyngen-exec.h */ #define TCG_AREG0 TCG_REG_S0 diff --git a/tcg/optimize.c b/tcg/optimize.c index 98c7e3f..32f928f 100644 --- a/tcg/optimize.c +++ b/tcg/optimize.c @@ -31,14 +31,9 @@ #include "qemu-common.h" #include "tcg-op.h" -#if 
TCG_TARGET_REG_BITS == 64 #define CASE_OP_32_64(x) \ glue(glue(case INDEX_op_, x), _i32): \ glue(glue(case INDEX_op_, x), _i64) -#else -#define CASE_OP_32_64(x) \ - glue(glue(case INDEX_op_, x), _i32) -#endif typedef enum { TCG_TEMP_UNDEF = 0, @@ -103,10 +98,8 @@ static int op_to_movi(int op) switch (op_bits(op)) { case 32: return INDEX_op_movi_i32; -#if TCG_TARGET_REG_BITS == 64 case 64: return INDEX_op_movi_i64; -#endif default: fprintf(stderr, "op_to_movi: unexpected return value of " "function op_bits.\n"); @@ -155,10 +148,8 @@ static int op_to_mov(int op) switch (op_bits(op)) { case 32: return INDEX_op_mov_i32; -#if TCG_TARGET_REG_BITS == 64 case 64: return INDEX_op_mov_i64; -#endif default: fprintf(stderr, "op_to_mov: unexpected return value of " "function op_bits.\n"); @@ -190,124 +181,57 @@ static TCGArg do_constant_folding_2(int op, TCGArg x, TCGArg y) case INDEX_op_shl_i32: return (uint32_t)x << (uint32_t)y; -#if TCG_TARGET_REG_BITS == 64 case INDEX_op_shl_i64: return (uint64_t)x << (uint64_t)y; -#endif case INDEX_op_shr_i32: return (uint32_t)x >> (uint32_t)y; -#if TCG_TARGET_REG_BITS == 64 case INDEX_op_shr_i64: return (uint64_t)x >> (uint64_t)y; -#endif case INDEX_op_sar_i32: return (int32_t)x >> (int32_t)y; -#if TCG_TARGET_REG_BITS == 64 case INDEX_op_sar_i64: return (int64_t)x >> (int64_t)y; -#endif -#ifdef TCG_TARGET_HAS_rot_i32 case INDEX_op_rotr_i32: -#if TCG_TARGET_REG_BITS == 64 - x &= 0xffffffff; - y &= 0xffffffff; -#endif - x = (x << (32 - y)) | (x >> y); + x = ((uint32_t)x << (32 - y)) | ((uint32_t)x >> y); return x; -#endif -#ifdef TCG_TARGET_HAS_rot_i64 -#if TCG_TARGET_REG_BITS == 64 case INDEX_op_rotr_i64: - x = (x << (64 - y)) | (x >> y); + x = ((uint64_t)x << (64 - y)) | ((uint64_t)x >> y); return x; -#endif -#endif -#ifdef TCG_TARGET_HAS_rot_i32 case INDEX_op_rotl_i32: -#if TCG_TARGET_REG_BITS == 64 - x &= 0xffffffff; - y &= 0xffffffff; -#endif - x = (x << y) | (x >> (32 - y)); + x = ((uint32_t)x << y) | ((uint32_t)x >> (32 - y)); return x; -#endif -#ifdef TCG_TARGET_HAS_rot_i64 -#if TCG_TARGET_REG_BITS == 64 case INDEX_op_rotl_i64: - x = (x << y) | (x >> (64 - y)); + x = ((uint64_t)x << y) | ((uint64_t)x >> (64 - y)); return x; -#endif -#endif - -#if defined(TCG_TARGET_HAS_not_i32) || defined(TCG_TARGET_HAS_not_i64) -#ifdef TCG_TARGET_HAS_not_i32 - case INDEX_op_not_i32: -#endif -#ifdef TCG_TARGET_HAS_not_i64 - case INDEX_op_not_i64: -#endif + + CASE_OP_32_64(not): return ~x; -#endif - -#if defined(TCG_TARGET_HAS_ext8s_i32) || defined(TCG_TARGET_HAS_ext8s_i64) -#ifdef TCG_TARGET_HAS_ext8s_i32 - case INDEX_op_ext8s_i32: -#endif -#ifdef TCG_TARGET_HAS_ext8s_i64 - case INDEX_op_ext8s_i64: -#endif + + CASE_OP_32_64(ext8s): return (int8_t)x; -#endif - -#if defined(TCG_TARGET_HAS_ext16s_i32) || defined(TCG_TARGET_HAS_ext16s_i64) -#ifdef TCG_TARGET_HAS_ext16s_i32 - case INDEX_op_ext16s_i32: -#endif -#ifdef TCG_TARGET_HAS_ext16s_i64 - case INDEX_op_ext16s_i64: -#endif + + CASE_OP_32_64(ext16s): return (int16_t)x; -#endif - -#if defined(TCG_TARGET_HAS_ext8u_i32) || defined(TCG_TARGET_HAS_ext8u_i64) -#ifdef TCG_TARGET_HAS_ext8u_i32 - case INDEX_op_ext8u_i32: -#endif -#ifdef TCG_TARGET_HAS_ext8u_i64 - case INDEX_op_ext8u_i64: -#endif + + CASE_OP_32_64(ext8u): return (uint8_t)x; -#endif - -#if defined(TCG_TARGET_HAS_ext16u_i32) || defined(TCG_TARGET_HAS_ext16u_i64) -#ifdef TCG_TARGET_HAS_ext16u_i32 - case INDEX_op_ext16u_i32: -#endif -#ifdef TCG_TARGET_HAS_ext16u_i64 - case INDEX_op_ext16u_i64: -#endif + + CASE_OP_32_64(ext16u): return (uint16_t)x; -#endif -#if 
TCG_TARGET_REG_BITS == 64 -#ifdef TCG_TARGET_HAS_ext32s_i64 case INDEX_op_ext32s_i64: return (int32_t)x; -#endif -#ifdef TCG_TARGET_HAS_ext32u_i64 case INDEX_op_ext32u_i64: return (uint32_t)x; -#endif -#endif default: fprintf(stderr, @@ -319,11 +243,9 @@ static TCGArg do_constant_folding_2(int op, TCGArg x, TCGArg y) static TCGArg do_constant_folding(int op, TCGArg x, TCGArg y) { TCGArg res = do_constant_folding_2(op, x, y); -#if TCG_TARGET_REG_BITS == 64 if (op_bits(op) == 32) { res &= 0xffffffff; } -#endif return res; } @@ -385,14 +307,8 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr, CASE_OP_32_64(shl): CASE_OP_32_64(shr): CASE_OP_32_64(sar): -#ifdef TCG_TARGET_HAS_rot_i32 - case INDEX_op_rotl_i32: - case INDEX_op_rotr_i32: -#endif -#ifdef TCG_TARGET_HAS_rot_i64 - case INDEX_op_rotl_i64: - case INDEX_op_rotr_i64: -#endif + CASE_OP_32_64(rotl): + CASE_OP_32_64(rotr): if (temps[args[1]].state == TCG_TEMP_CONST) { /* Proceed with possible constant folding. */ break; @@ -473,34 +389,12 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr, args += 2; break; CASE_OP_32_64(not): -#ifdef TCG_TARGET_HAS_ext8s_i32 - case INDEX_op_ext8s_i32: -#endif -#ifdef TCG_TARGET_HAS_ext8s_i64 - case INDEX_op_ext8s_i64: -#endif -#ifdef TCG_TARGET_HAS_ext16s_i32 - case INDEX_op_ext16s_i32: -#endif -#ifdef TCG_TARGET_HAS_ext16s_i64 - case INDEX_op_ext16s_i64: -#endif -#ifdef TCG_TARGET_HAS_ext8u_i32 - case INDEX_op_ext8u_i32: -#endif -#ifdef TCG_TARGET_HAS_ext8u_i64 - case INDEX_op_ext8u_i64: -#endif -#ifdef TCG_TARGET_HAS_ext16u_i32 - case INDEX_op_ext16u_i32: -#endif -#ifdef TCG_TARGET_HAS_ext16u_i64 - case INDEX_op_ext16u_i64: -#endif -#if TCG_TARGET_REG_BITS == 64 + CASE_OP_32_64(ext8s): + CASE_OP_32_64(ext8u): + CASE_OP_32_64(ext16s): + CASE_OP_32_64(ext16u): case INDEX_op_ext32s_i64: case INDEX_op_ext32u_i64: -#endif if (temps[args[1]].state == TCG_TEMP_CONST) { gen_opc_buf[op_index] = op_to_movi(op); tmp = do_constant_folding(op, temps[args[1]].val, 0); @@ -525,14 +419,8 @@ static TCGArg *tcg_constant_folding(TCGContext *s, uint16_t *tcg_opc_ptr, CASE_OP_32_64(shl): CASE_OP_32_64(shr): CASE_OP_32_64(sar): -#ifdef TCG_TARGET_HAS_rot_i32 - case INDEX_op_rotl_i32: - case INDEX_op_rotr_i32: -#endif -#ifdef TCG_TARGET_HAS_rot_i64 - case INDEX_op_rotl_i64: - case INDEX_op_rotr_i64: -#endif + CASE_OP_32_64(rotl): + CASE_OP_32_64(rotr): if (temps[args[1]].state == TCG_TEMP_CONST && temps[args[2]].state == TCG_TEMP_CONST) { gen_opc_buf[op_index] = op_to_movi(op); diff --git a/tcg/ppc/tcg-target.h b/tcg/ppc/tcg-target.h index a1f8599..8c35c4e 100644 --- a/tcg/ppc/tcg-target.h +++ b/tcg/ppc/tcg-target.h @@ -77,21 +77,22 @@ enum { #endif /* optional instructions */ -#define TCG_TARGET_HAS_div_i32 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#define TCG_TARGET_HAS_ext8u_i32 -#define TCG_TARGET_HAS_ext16u_i32 -#define TCG_TARGET_HAS_bswap16_i32 -#define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_neg_i32 -#define TCG_TARGET_HAS_andc_i32 -#define TCG_TARGET_HAS_orc_i32 -#define TCG_TARGET_HAS_eqv_i32 -#define TCG_TARGET_HAS_nand_i32 -#define TCG_TARGET_HAS_nor_i32 +#define TCG_TARGET_HAS_div_i32 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 1 +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define 
TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_orc_i32 1 +#define TCG_TARGET_HAS_eqv_i32 1 +#define TCG_TARGET_HAS_nand_i32 1 +#define TCG_TARGET_HAS_nor_i32 1 +#define TCG_TARGET_HAS_deposit_i32 0 #define TCG_AREG0 TCG_REG_R27 diff --git a/tcg/ppc64/tcg-target.h b/tcg/ppc64/tcg-target.h index 8a6db11..041fe9d 100644 --- a/tcg/ppc64/tcg-target.h +++ b/tcg/ppc64/tcg-target.h @@ -68,40 +68,42 @@ enum { #define TCG_TARGET_CALL_STACK_OFFSET 48 /* optional instructions */ -#define TCG_TARGET_HAS_div_i32 -/* #define TCG_TARGET_HAS_rot_i32 */ -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -/* #define TCG_TARGET_HAS_ext8u_i32 */ -/* #define TCG_TARGET_HAS_ext16u_i32 */ -/* #define TCG_TARGET_HAS_bswap16_i32 */ -/* #define TCG_TARGET_HAS_bswap32_i32 */ -/* #define TCG_TARGET_HAS_not_i32 */ -#define TCG_TARGET_HAS_neg_i32 -/* #define TCG_TARGET_HAS_andc_i32 */ -/* #define TCG_TARGET_HAS_orc_i32 */ -/* #define TCG_TARGET_HAS_eqv_i32 */ -/* #define TCG_TARGET_HAS_nand_i32 */ -/* #define TCG_TARGET_HAS_nor_i32 */ +#define TCG_TARGET_HAS_div_i32 1 +#define TCG_TARGET_HAS_rot_i32 0 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 0 +#define TCG_TARGET_HAS_ext16u_i32 0 +#define TCG_TARGET_HAS_bswap16_i32 0 +#define TCG_TARGET_HAS_bswap32_i32 0 +#define TCG_TARGET_HAS_not_i32 0 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_andc_i32 0 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 -#define TCG_TARGET_HAS_div_i64 -/* #define TCG_TARGET_HAS_rot_i64 */ -#define TCG_TARGET_HAS_ext8s_i64 -#define TCG_TARGET_HAS_ext16s_i64 -#define TCG_TARGET_HAS_ext32s_i64 -/* #define TCG_TARGET_HAS_ext8u_i64 */ -/* #define TCG_TARGET_HAS_ext16u_i64 */ -/* #define TCG_TARGET_HAS_ext32u_i64 */ -/* #define TCG_TARGET_HAS_bswap16_i64 */ -/* #define TCG_TARGET_HAS_bswap32_i64 */ -/* #define TCG_TARGET_HAS_bswap64_i64 */ -/* #define TCG_TARGET_HAS_not_i64 */ -#define TCG_TARGET_HAS_neg_i64 -/* #define TCG_TARGET_HAS_andc_i64 */ -/* #define TCG_TARGET_HAS_orc_i64 */ -/* #define TCG_TARGET_HAS_eqv_i64 */ -/* #define TCG_TARGET_HAS_nand_i64 */ -/* #define TCG_TARGET_HAS_nor_i64 */ +#define TCG_TARGET_HAS_div_i64 1 +#define TCG_TARGET_HAS_rot_i64 0 +#define TCG_TARGET_HAS_ext8s_i64 1 +#define TCG_TARGET_HAS_ext16s_i64 1 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i64 0 +#define TCG_TARGET_HAS_ext16u_i64 0 +#define TCG_TARGET_HAS_ext32u_i64 0 +#define TCG_TARGET_HAS_bswap16_i64 0 +#define TCG_TARGET_HAS_bswap32_i64 0 +#define TCG_TARGET_HAS_bswap64_i64 0 +#define TCG_TARGET_HAS_not_i64 0 +#define TCG_TARGET_HAS_neg_i64 1 +#define TCG_TARGET_HAS_andc_i64 0 +#define TCG_TARGET_HAS_orc_i64 0 +#define TCG_TARGET_HAS_eqv_i64 0 +#define TCG_TARGET_HAS_nand_i64 0 +#define TCG_TARGET_HAS_nor_i64 0 +#define TCG_TARGET_HAS_deposit_i64 0 #define TCG_AREG0 TCG_REG_R27 diff --git a/tcg/s390/tcg-target.h b/tcg/s390/tcg-target.h index 4e45cf3..35ebac3 100644 --- a/tcg/s390/tcg-target.h +++ b/tcg/s390/tcg-target.h @@ -53,41 +53,43 @@ typedef enum TCGReg { #define TCG_TARGET_NB_REGS 16 /* optional instructions */ -#define TCG_TARGET_HAS_div2_i32 -#define TCG_TARGET_HAS_rot_i32 -#define TCG_TARGET_HAS_ext8s_i32 -#define TCG_TARGET_HAS_ext16s_i32 -#define TCG_TARGET_HAS_ext8u_i32 -#define TCG_TARGET_HAS_ext16u_i32 -#define TCG_TARGET_HAS_bswap16_i32 
-#define TCG_TARGET_HAS_bswap32_i32 -// #define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_neg_i32 -// #define TCG_TARGET_HAS_andc_i32 -// #define TCG_TARGET_HAS_orc_i32 -// #define TCG_TARGET_HAS_eqv_i32 -// #define TCG_TARGET_HAS_nand_i32 -// #define TCG_TARGET_HAS_nor_i32 +#define TCG_TARGET_HAS_div2_i32 1 +#define TCG_TARGET_HAS_rot_i32 1 +#define TCG_TARGET_HAS_ext8s_i32 1 +#define TCG_TARGET_HAS_ext16s_i32 1 +#define TCG_TARGET_HAS_ext8u_i32 1 +#define TCG_TARGET_HAS_ext16u_i32 1 +#define TCG_TARGET_HAS_bswap16_i32 1 +#define TCG_TARGET_HAS_bswap32_i32 1 +#define TCG_TARGET_HAS_not_i32 0 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_andc_i32 0 +#define TCG_TARGET_HAS_orc_i32 0 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 #if TCG_TARGET_REG_BITS == 64 -#define TCG_TARGET_HAS_div2_i64 -#define TCG_TARGET_HAS_rot_i64 -#define TCG_TARGET_HAS_ext8s_i64 -#define TCG_TARGET_HAS_ext16s_i64 -#define TCG_TARGET_HAS_ext32s_i64 -#define TCG_TARGET_HAS_ext8u_i64 -#define TCG_TARGET_HAS_ext16u_i64 -#define TCG_TARGET_HAS_ext32u_i64 -#define TCG_TARGET_HAS_bswap16_i64 -#define TCG_TARGET_HAS_bswap32_i64 -#define TCG_TARGET_HAS_bswap64_i64 -// #define TCG_TARGET_HAS_not_i64 -#define TCG_TARGET_HAS_neg_i64 -// #define TCG_TARGET_HAS_andc_i64 -// #define TCG_TARGET_HAS_orc_i64 -// #define TCG_TARGET_HAS_eqv_i64 -// #define TCG_TARGET_HAS_nand_i64 -// #define TCG_TARGET_HAS_nor_i64 +#define TCG_TARGET_HAS_div2_i64 1 +#define TCG_TARGET_HAS_rot_i64 1 +#define TCG_TARGET_HAS_ext8s_i64 1 +#define TCG_TARGET_HAS_ext16s_i64 1 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i64 1 +#define TCG_TARGET_HAS_ext16u_i64 1 +#define TCG_TARGET_HAS_ext32u_i64 1 +#define TCG_TARGET_HAS_bswap16_i64 1 +#define TCG_TARGET_HAS_bswap32_i64 1 +#define TCG_TARGET_HAS_bswap64_i64 1 +#define TCG_TARGET_HAS_not_i64 0 +#define TCG_TARGET_HAS_neg_i64 1 +#define TCG_TARGET_HAS_andc_i64 0 +#define TCG_TARGET_HAS_orc_i64 0 +#define TCG_TARGET_HAS_eqv_i64 0 +#define TCG_TARGET_HAS_nand_i64 0 +#define TCG_TARGET_HAS_nor_i64 0 +#define TCG_TARGET_HAS_deposit_i64 0 #endif #define TCG_TARGET_HAS_GUEST_BASE diff --git a/tcg/sparc/tcg-target.h b/tcg/sparc/tcg-target.h index df0785e..7b4e7f9 100644 --- a/tcg/sparc/tcg-target.h +++ b/tcg/sparc/tcg-target.h @@ -92,41 +92,43 @@ enum { #endif /* optional instructions */ -#define TCG_TARGET_HAS_div_i32 -// #define TCG_TARGET_HAS_rot_i32 -// #define TCG_TARGET_HAS_ext8s_i32 -// #define TCG_TARGET_HAS_ext16s_i32 -// #define TCG_TARGET_HAS_ext8u_i32 -// #define TCG_TARGET_HAS_ext16u_i32 -// #define TCG_TARGET_HAS_bswap16_i32 -// #define TCG_TARGET_HAS_bswap32_i32 -#define TCG_TARGET_HAS_neg_i32 -#define TCG_TARGET_HAS_not_i32 -#define TCG_TARGET_HAS_andc_i32 -#define TCG_TARGET_HAS_orc_i32 -// #define TCG_TARGET_HAS_eqv_i32 -// #define TCG_TARGET_HAS_nand_i32 -// #define TCG_TARGET_HAS_nor_i32 +#define TCG_TARGET_HAS_div_i32 1 +#define TCG_TARGET_HAS_rot_i32 0 +#define TCG_TARGET_HAS_ext8s_i32 0 +#define TCG_TARGET_HAS_ext16s_i32 0 +#define TCG_TARGET_HAS_ext8u_i32 0 +#define TCG_TARGET_HAS_ext16u_i32 0 +#define TCG_TARGET_HAS_bswap16_i32 0 +#define TCG_TARGET_HAS_bswap32_i32 0 +#define TCG_TARGET_HAS_neg_i32 1 +#define TCG_TARGET_HAS_not_i32 1 +#define TCG_TARGET_HAS_andc_i32 1 +#define TCG_TARGET_HAS_orc_i32 1 +#define TCG_TARGET_HAS_eqv_i32 0 +#define TCG_TARGET_HAS_nand_i32 0 +#define TCG_TARGET_HAS_nor_i32 0 +#define TCG_TARGET_HAS_deposit_i32 0 #if 
TCG_TARGET_REG_BITS == 64 -#define TCG_TARGET_HAS_div_i64 -// #define TCG_TARGET_HAS_rot_i64 -// #define TCG_TARGET_HAS_ext8s_i64 -// #define TCG_TARGET_HAS_ext16s_i64 -#define TCG_TARGET_HAS_ext32s_i64 -// #define TCG_TARGET_HAS_ext8u_i64 -// #define TCG_TARGET_HAS_ext16u_i64 -#define TCG_TARGET_HAS_ext32u_i64 -// #define TCG_TARGET_HAS_bswap16_i64 -// #define TCG_TARGET_HAS_bswap32_i64 -// #define TCG_TARGET_HAS_bswap64_i64 -#define TCG_TARGET_HAS_neg_i64 -#define TCG_TARGET_HAS_not_i64 -#define TCG_TARGET_HAS_andc_i64 -#define TCG_TARGET_HAS_orc_i64 -// #define TCG_TARGET_HAS_eqv_i64 -// #define TCG_TARGET_HAS_nand_i64 -// #define TCG_TARGET_HAS_nor_i64 +#define TCG_TARGET_HAS_div_i64 1 +#define TCG_TARGET_HAS_rot_i64 0 +#define TCG_TARGET_HAS_ext8s_i64 0 +#define TCG_TARGET_HAS_ext16s_i64 0 +#define TCG_TARGET_HAS_ext32s_i64 1 +#define TCG_TARGET_HAS_ext8u_i64 0 +#define TCG_TARGET_HAS_ext16u_i64 0 +#define TCG_TARGET_HAS_ext32u_i64 1 +#define TCG_TARGET_HAS_bswap16_i64 0 +#define TCG_TARGET_HAS_bswap32_i64 0 +#define TCG_TARGET_HAS_bswap64_i64 0 +#define TCG_TARGET_HAS_neg_i64 1 +#define TCG_TARGET_HAS_not_i64 1 +#define TCG_TARGET_HAS_andc_i64 1 +#define TCG_TARGET_HAS_orc_i64 1 +#define TCG_TARGET_HAS_eqv_i64 0 +#define TCG_TARGET_HAS_nand_i64 0 +#define TCG_TARGET_HAS_nor_i64 0 +#define TCG_TARGET_HAS_deposit_i64 0 #endif /* Note: must be synced with dyngen-exec.h */ diff --git a/tcg/tcg-op.h b/tcg/tcg-op.h index ebf5e13..404b637 100644 --- a/tcg/tcg-op.h +++ b/tcg/tcg-op.h @@ -664,107 +664,81 @@ static inline void tcg_gen_muli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2) tcg_temp_free_i32(t0); } -#ifdef TCG_TARGET_HAS_div_i32 static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { - tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2); -} - -static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2); -} - -static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2); -} - -static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2); -} -#elif defined(TCG_TARGET_HAS_div2_i32) -static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_sari_i32(t0, arg1, 31); - tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2); - tcg_temp_free_i32(t0); -} - -static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_sari_i32(t0, arg1, 31); - tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2); - tcg_temp_free_i32(t0); -} - -static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_movi_i32(t0, 0); - tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2); - tcg_temp_free_i32(t0); -} - -static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_movi_i32(t0, 0); - tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2); - tcg_temp_free_i32(t0); -} -#else -static inline void tcg_gen_div_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) -{ - int sizemask = 0; - /* Return value and both arguments are 32-bit and signed. 
*/ - sizemask |= tcg_gen_sizemask(0, 0, 1); - sizemask |= tcg_gen_sizemask(1, 0, 1); - sizemask |= tcg_gen_sizemask(2, 0, 1); - - tcg_gen_helper32(tcg_helper_div_i32, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i32) { + tcg_gen_op3_i32(INDEX_op_div_i32, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i32) { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_sari_i32(t0, arg1, 31); + tcg_gen_op5_i32(INDEX_op_div2_i32, ret, t0, arg1, t0, arg2); + tcg_temp_free_i32(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 32-bit and signed. */ + sizemask |= tcg_gen_sizemask(0, 0, 1); + sizemask |= tcg_gen_sizemask(1, 0, 1); + sizemask |= tcg_gen_sizemask(2, 0, 1); + tcg_gen_helper32(tcg_helper_div_i32, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_rem_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { - int sizemask = 0; - /* Return value and both arguments are 32-bit and signed. */ - sizemask |= tcg_gen_sizemask(0, 0, 1); - sizemask |= tcg_gen_sizemask(1, 0, 1); - sizemask |= tcg_gen_sizemask(2, 0, 1); - - tcg_gen_helper32(tcg_helper_rem_i32, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i32) { + tcg_gen_op3_i32(INDEX_op_rem_i32, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i32) { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_sari_i32(t0, arg1, 31); + tcg_gen_op5_i32(INDEX_op_div2_i32, t0, ret, arg1, t0, arg2); + tcg_temp_free_i32(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 32-bit and signed. */ + sizemask |= tcg_gen_sizemask(0, 0, 1); + sizemask |= tcg_gen_sizemask(1, 0, 1); + sizemask |= tcg_gen_sizemask(2, 0, 1); + tcg_gen_helper32(tcg_helper_rem_i32, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_divu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { - int sizemask = 0; - /* Return value and both arguments are 32-bit and unsigned. */ - sizemask |= tcg_gen_sizemask(0, 0, 0); - sizemask |= tcg_gen_sizemask(1, 0, 0); - sizemask |= tcg_gen_sizemask(2, 0, 0); - - tcg_gen_helper32(tcg_helper_divu_i32, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i32) { + tcg_gen_op3_i32(INDEX_op_divu_i32, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i32) { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_movi_i32(t0, 0); + tcg_gen_op5_i32(INDEX_op_divu2_i32, ret, t0, arg1, t0, arg2); + tcg_temp_free_i32(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 32-bit and unsigned. */ + sizemask |= tcg_gen_sizemask(0, 0, 0); + sizemask |= tcg_gen_sizemask(1, 0, 0); + sizemask |= tcg_gen_sizemask(2, 0, 0); + tcg_gen_helper32(tcg_helper_divu_i32, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_remu_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { - int sizemask = 0; - /* Return value and both arguments are 32-bit and unsigned. */ - sizemask |= tcg_gen_sizemask(0, 0, 0); - sizemask |= tcg_gen_sizemask(1, 0, 0); - sizemask |= tcg_gen_sizemask(2, 0, 0); - - tcg_gen_helper32(tcg_helper_remu_i32, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i32) { + tcg_gen_op3_i32(INDEX_op_remu_i32, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i32) { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_movi_i32(t0, 0); + tcg_gen_op5_i32(INDEX_op_divu2_i32, t0, ret, arg1, t0, arg2); + tcg_temp_free_i32(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 32-bit and unsigned. 
*/ + sizemask |= tcg_gen_sizemask(0, 0, 0); + sizemask |= tcg_gen_sizemask(1, 0, 0); + sizemask |= tcg_gen_sizemask(2, 0, 0); + tcg_gen_helper32(tcg_helper_remu_i32, sizemask, ret, arg1, arg2); + } } -#endif #if TCG_TARGET_REG_BITS == 32 @@ -1250,109 +1224,82 @@ static inline void tcg_gen_mul_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) tcg_gen_op3_i64(INDEX_op_mul_i64, ret, arg1, arg2); } -#ifdef TCG_TARGET_HAS_div_i64 -static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2); -} - -static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2); -} - -static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2); -} - -static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2); -} -#elif defined(TCG_TARGET_HAS_div2_i64) -static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_sari_i64(t0, arg1, 63); - tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2); - tcg_temp_free_i64(t0); -} - -static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_sari_i64(t0, arg1, 63); - tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2); - tcg_temp_free_i64(t0); -} - -static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_movi_i64(t0, 0); - tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2); - tcg_temp_free_i64(t0); -} - -static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) -{ - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_movi_i64(t0, 0); - tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2); - tcg_temp_free_i64(t0); -} -#else static inline void tcg_gen_div_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { - int sizemask = 0; - /* Return value and both arguments are 64-bit and signed. */ - sizemask |= tcg_gen_sizemask(0, 1, 1); - sizemask |= tcg_gen_sizemask(1, 1, 1); - sizemask |= tcg_gen_sizemask(2, 1, 1); - - tcg_gen_helper64(tcg_helper_div_i64, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i64) { + tcg_gen_op3_i64(INDEX_op_div_i64, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i64) { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_sari_i64(t0, arg1, 63); + tcg_gen_op5_i64(INDEX_op_div2_i64, ret, t0, arg1, t0, arg2); + tcg_temp_free_i64(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 64-bit and signed. */ + sizemask |= tcg_gen_sizemask(0, 1, 1); + sizemask |= tcg_gen_sizemask(1, 1, 1); + sizemask |= tcg_gen_sizemask(2, 1, 1); + tcg_gen_helper64(tcg_helper_div_i64, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_rem_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { - int sizemask = 0; - /* Return value and both arguments are 64-bit and signed. 
*/ - sizemask |= tcg_gen_sizemask(0, 1, 1); - sizemask |= tcg_gen_sizemask(1, 1, 1); - sizemask |= tcg_gen_sizemask(2, 1, 1); - - tcg_gen_helper64(tcg_helper_rem_i64, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i64) { + tcg_gen_op3_i64(INDEX_op_rem_i64, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i64) { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_sari_i64(t0, arg1, 63); + tcg_gen_op5_i64(INDEX_op_div2_i64, t0, ret, arg1, t0, arg2); + tcg_temp_free_i64(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 64-bit and signed. */ + sizemask |= tcg_gen_sizemask(0, 1, 1); + sizemask |= tcg_gen_sizemask(1, 1, 1); + sizemask |= tcg_gen_sizemask(2, 1, 1); + tcg_gen_helper64(tcg_helper_rem_i64, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_divu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { - int sizemask = 0; - /* Return value and both arguments are 64-bit and unsigned. */ - sizemask |= tcg_gen_sizemask(0, 1, 0); - sizemask |= tcg_gen_sizemask(1, 1, 0); - sizemask |= tcg_gen_sizemask(2, 1, 0); - - tcg_gen_helper64(tcg_helper_divu_i64, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i64) { + tcg_gen_op3_i64(INDEX_op_divu_i64, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i64) { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_movi_i64(t0, 0); + tcg_gen_op5_i64(INDEX_op_divu2_i64, ret, t0, arg1, t0, arg2); + tcg_temp_free_i64(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 64-bit and unsigned. */ + sizemask |= tcg_gen_sizemask(0, 1, 0); + sizemask |= tcg_gen_sizemask(1, 1, 0); + sizemask |= tcg_gen_sizemask(2, 1, 0); + tcg_gen_helper64(tcg_helper_divu_i64, sizemask, ret, arg1, arg2); + } } static inline void tcg_gen_remu_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { - int sizemask = 0; - /* Return value and both arguments are 64-bit and unsigned. */ - sizemask |= tcg_gen_sizemask(0, 1, 0); - sizemask |= tcg_gen_sizemask(1, 1, 0); - sizemask |= tcg_gen_sizemask(2, 1, 0); - - tcg_gen_helper64(tcg_helper_remu_i64, sizemask, ret, arg1, arg2); + if (TCG_TARGET_HAS_div_i64) { + tcg_gen_op3_i64(INDEX_op_remu_i64, ret, arg1, arg2); + } else if (TCG_TARGET_HAS_div2_i64) { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_movi_i64(t0, 0); + tcg_gen_op5_i64(INDEX_op_divu2_i64, t0, ret, arg1, t0, arg2); + tcg_temp_free_i64(t0); + } else { + int sizemask = 0; + /* Return value and both arguments are 64-bit and unsigned. 
*/ + sizemask |= tcg_gen_sizemask(0, 1, 0); + sizemask |= tcg_gen_sizemask(1, 1, 0); + sizemask |= tcg_gen_sizemask(2, 1, 0); + tcg_gen_helper64(tcg_helper_remu_i64, sizemask, ret, arg1, arg2); + } } -#endif - -#endif +#endif /* TCG_TARGET_REG_BITS == 32 */ static inline void tcg_gen_addi_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2) { @@ -1413,82 +1360,82 @@ static inline void tcg_gen_muli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2) static inline void tcg_gen_ext8s_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_ext8s_i32 - tcg_gen_op2_i32(INDEX_op_ext8s_i32, ret, arg); -#else - tcg_gen_shli_i32(ret, arg, 24); - tcg_gen_sari_i32(ret, ret, 24); -#endif + if (TCG_TARGET_HAS_ext8s_i32) { + tcg_gen_op2_i32(INDEX_op_ext8s_i32, ret, arg); + } else { + tcg_gen_shli_i32(ret, arg, 24); + tcg_gen_sari_i32(ret, ret, 24); + } } static inline void tcg_gen_ext16s_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_ext16s_i32 - tcg_gen_op2_i32(INDEX_op_ext16s_i32, ret, arg); -#else - tcg_gen_shli_i32(ret, arg, 16); - tcg_gen_sari_i32(ret, ret, 16); -#endif + if (TCG_TARGET_HAS_ext16s_i32) { + tcg_gen_op2_i32(INDEX_op_ext16s_i32, ret, arg); + } else { + tcg_gen_shli_i32(ret, arg, 16); + tcg_gen_sari_i32(ret, ret, 16); + } } static inline void tcg_gen_ext8u_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_ext8u_i32 - tcg_gen_op2_i32(INDEX_op_ext8u_i32, ret, arg); -#else - tcg_gen_andi_i32(ret, arg, 0xffu); -#endif + if (TCG_TARGET_HAS_ext8u_i32) { + tcg_gen_op2_i32(INDEX_op_ext8u_i32, ret, arg); + } else { + tcg_gen_andi_i32(ret, arg, 0xffu); + } } static inline void tcg_gen_ext16u_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_ext16u_i32 - tcg_gen_op2_i32(INDEX_op_ext16u_i32, ret, arg); -#else - tcg_gen_andi_i32(ret, arg, 0xffffu); -#endif + if (TCG_TARGET_HAS_ext16u_i32) { + tcg_gen_op2_i32(INDEX_op_ext16u_i32, ret, arg); + } else { + tcg_gen_andi_i32(ret, arg, 0xffffu); + } } /* Note: we assume the two high bytes are set to zero */ static inline void tcg_gen_bswap16_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_bswap16_i32 - tcg_gen_op2_i32(INDEX_op_bswap16_i32, ret, arg); -#else - TCGv_i32 t0 = tcg_temp_new_i32(); + if (TCG_TARGET_HAS_bswap16_i32) { + tcg_gen_op2_i32(INDEX_op_bswap16_i32, ret, arg); + } else { + TCGv_i32 t0 = tcg_temp_new_i32(); - tcg_gen_ext8u_i32(t0, arg); - tcg_gen_shli_i32(t0, t0, 8); - tcg_gen_shri_i32(ret, arg, 8); - tcg_gen_or_i32(ret, ret, t0); - tcg_temp_free_i32(t0); -#endif + tcg_gen_ext8u_i32(t0, arg); + tcg_gen_shli_i32(t0, t0, 8); + tcg_gen_shri_i32(ret, arg, 8); + tcg_gen_or_i32(ret, ret, t0); + tcg_temp_free_i32(t0); + } } static inline void tcg_gen_bswap32_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_bswap32_i32 - tcg_gen_op2_i32(INDEX_op_bswap32_i32, ret, arg); -#else - TCGv_i32 t0, t1; - t0 = tcg_temp_new_i32(); - t1 = tcg_temp_new_i32(); + if (TCG_TARGET_HAS_bswap32_i32) { + tcg_gen_op2_i32(INDEX_op_bswap32_i32, ret, arg); + } else { + TCGv_i32 t0, t1; + t0 = tcg_temp_new_i32(); + t1 = tcg_temp_new_i32(); - tcg_gen_shli_i32(t0, arg, 24); + tcg_gen_shli_i32(t0, arg, 24); - tcg_gen_andi_i32(t1, arg, 0x0000ff00); - tcg_gen_shli_i32(t1, t1, 8); - tcg_gen_or_i32(t0, t0, t1); + tcg_gen_andi_i32(t1, arg, 0x0000ff00); + tcg_gen_shli_i32(t1, t1, 8); + tcg_gen_or_i32(t0, t0, t1); - tcg_gen_shri_i32(t1, arg, 8); - tcg_gen_andi_i32(t1, t1, 0x0000ff00); - tcg_gen_or_i32(t0, t0, t1); + tcg_gen_shri_i32(t1, arg, 8); + tcg_gen_andi_i32(t1, t1, 0x0000ff00); + tcg_gen_or_i32(t0, t0, t1); - tcg_gen_shri_i32(t1, arg, 24); - 
tcg_gen_or_i32(ret, t0, t1); - tcg_temp_free_i32(t0); - tcg_temp_free_i32(t1); -#endif + tcg_gen_shri_i32(t1, arg, 24); + tcg_gen_or_i32(ret, t0, t1); + tcg_temp_free_i32(t0); + tcg_temp_free_i32(t1); + } } #if TCG_TARGET_REG_BITS == 32 @@ -1576,59 +1523,59 @@ static inline void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg) static inline void tcg_gen_ext8s_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext8s_i64 - tcg_gen_op2_i64(INDEX_op_ext8s_i64, ret, arg); -#else - tcg_gen_shli_i64(ret, arg, 56); - tcg_gen_sari_i64(ret, ret, 56); -#endif + if (TCG_TARGET_HAS_ext8s_i64) { + tcg_gen_op2_i64(INDEX_op_ext8s_i64, ret, arg); + } else { + tcg_gen_shli_i64(ret, arg, 56); + tcg_gen_sari_i64(ret, ret, 56); + } } static inline void tcg_gen_ext16s_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext16s_i64 - tcg_gen_op2_i64(INDEX_op_ext16s_i64, ret, arg); -#else - tcg_gen_shli_i64(ret, arg, 48); - tcg_gen_sari_i64(ret, ret, 48); -#endif + if (TCG_TARGET_HAS_ext16s_i64) { + tcg_gen_op2_i64(INDEX_op_ext16s_i64, ret, arg); + } else { + tcg_gen_shli_i64(ret, arg, 48); + tcg_gen_sari_i64(ret, ret, 48); + } } static inline void tcg_gen_ext32s_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext32s_i64 - tcg_gen_op2_i64(INDEX_op_ext32s_i64, ret, arg); -#else - tcg_gen_shli_i64(ret, arg, 32); - tcg_gen_sari_i64(ret, ret, 32); -#endif + if (TCG_TARGET_HAS_ext32s_i64) { + tcg_gen_op2_i64(INDEX_op_ext32s_i64, ret, arg); + } else { + tcg_gen_shli_i64(ret, arg, 32); + tcg_gen_sari_i64(ret, ret, 32); + } } static inline void tcg_gen_ext8u_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext8u_i64 - tcg_gen_op2_i64(INDEX_op_ext8u_i64, ret, arg); -#else - tcg_gen_andi_i64(ret, arg, 0xffu); -#endif + if (TCG_TARGET_HAS_ext8u_i64) { + tcg_gen_op2_i64(INDEX_op_ext8u_i64, ret, arg); + } else { + tcg_gen_andi_i64(ret, arg, 0xffu); + } } static inline void tcg_gen_ext16u_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext16u_i64 - tcg_gen_op2_i64(INDEX_op_ext16u_i64, ret, arg); -#else - tcg_gen_andi_i64(ret, arg, 0xffffu); -#endif + if (TCG_TARGET_HAS_ext16u_i64) { + tcg_gen_op2_i64(INDEX_op_ext16u_i64, ret, arg); + } else { + tcg_gen_andi_i64(ret, arg, 0xffffu); + } } static inline void tcg_gen_ext32u_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_ext32u_i64 - tcg_gen_op2_i64(INDEX_op_ext32u_i64, ret, arg); -#else - tcg_gen_andi_i64(ret, arg, 0xffffffffu); -#endif + if (TCG_TARGET_HAS_ext32u_i64) { + tcg_gen_op2_i64(INDEX_op_ext32u_i64, ret, arg); + } else { + tcg_gen_andi_i64(ret, arg, 0xffffffffu); + } } /* Note: we assume the target supports move between 32 and 64 bit @@ -1655,130 +1602,132 @@ static inline void tcg_gen_ext_i32_i64(TCGv_i64 ret, TCGv_i32 arg) /* Note: we assume the six high bytes are set to zero */ static inline void tcg_gen_bswap16_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_bswap16_i64 - tcg_gen_op2_i64(INDEX_op_bswap16_i64, ret, arg); -#else - TCGv_i64 t0 = tcg_temp_new_i64(); + if (TCG_TARGET_HAS_bswap16_i64) { + tcg_gen_op2_i64(INDEX_op_bswap16_i64, ret, arg); + } else { + TCGv_i64 t0 = tcg_temp_new_i64(); - tcg_gen_ext8u_i64(t0, arg); - tcg_gen_shli_i64(t0, t0, 8); - tcg_gen_shri_i64(ret, arg, 8); - tcg_gen_or_i64(ret, ret, t0); - tcg_temp_free_i64(t0); -#endif + tcg_gen_ext8u_i64(t0, arg); + tcg_gen_shli_i64(t0, t0, 8); + tcg_gen_shri_i64(ret, arg, 8); + tcg_gen_or_i64(ret, ret, t0); + tcg_temp_free_i64(t0); + } } /* Note: we assume the four high bytes are set to zero */ static inline void tcg_gen_bswap32_i64(TCGv_i64 ret, 
TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_bswap32_i64 - tcg_gen_op2_i64(INDEX_op_bswap32_i64, ret, arg); -#else - TCGv_i64 t0, t1; - t0 = tcg_temp_new_i64(); - t1 = tcg_temp_new_i64(); + if (TCG_TARGET_HAS_bswap32_i64) { + tcg_gen_op2_i64(INDEX_op_bswap32_i64, ret, arg); + } else { + TCGv_i64 t0, t1; + t0 = tcg_temp_new_i64(); + t1 = tcg_temp_new_i64(); - tcg_gen_shli_i64(t0, arg, 24); - tcg_gen_ext32u_i64(t0, t0); + tcg_gen_shli_i64(t0, arg, 24); + tcg_gen_ext32u_i64(t0, t0); - tcg_gen_andi_i64(t1, arg, 0x0000ff00); - tcg_gen_shli_i64(t1, t1, 8); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_andi_i64(t1, arg, 0x0000ff00); + tcg_gen_shli_i64(t1, t1, 8); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 8); - tcg_gen_andi_i64(t1, t1, 0x0000ff00); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_shri_i64(t1, arg, 8); + tcg_gen_andi_i64(t1, t1, 0x0000ff00); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 24); - tcg_gen_or_i64(ret, t0, t1); - tcg_temp_free_i64(t0); - tcg_temp_free_i64(t1); -#endif + tcg_gen_shri_i64(t1, arg, 24); + tcg_gen_or_i64(ret, t0, t1); + tcg_temp_free_i64(t0); + tcg_temp_free_i64(t1); + } } static inline void tcg_gen_bswap64_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_bswap64_i64 - tcg_gen_op2_i64(INDEX_op_bswap64_i64, ret, arg); -#else - TCGv_i64 t0 = tcg_temp_new_i64(); - TCGv_i64 t1 = tcg_temp_new_i64(); + if (TCG_TARGET_HAS_bswap64_i64) { + tcg_gen_op2_i64(INDEX_op_bswap64_i64, ret, arg); + } else { + TCGv_i64 t0 = tcg_temp_new_i64(); + TCGv_i64 t1 = tcg_temp_new_i64(); - tcg_gen_shli_i64(t0, arg, 56); + tcg_gen_shli_i64(t0, arg, 56); - tcg_gen_andi_i64(t1, arg, 0x0000ff00); - tcg_gen_shli_i64(t1, t1, 40); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_andi_i64(t1, arg, 0x0000ff00); + tcg_gen_shli_i64(t1, t1, 40); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_andi_i64(t1, arg, 0x00ff0000); - tcg_gen_shli_i64(t1, t1, 24); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_andi_i64(t1, arg, 0x00ff0000); + tcg_gen_shli_i64(t1, t1, 24); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_andi_i64(t1, arg, 0xff000000); - tcg_gen_shli_i64(t1, t1, 8); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_andi_i64(t1, arg, 0xff000000); + tcg_gen_shli_i64(t1, t1, 8); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 8); - tcg_gen_andi_i64(t1, t1, 0xff000000); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_shri_i64(t1, arg, 8); + tcg_gen_andi_i64(t1, t1, 0xff000000); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 24); - tcg_gen_andi_i64(t1, t1, 0x00ff0000); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_shri_i64(t1, arg, 24); + tcg_gen_andi_i64(t1, t1, 0x00ff0000); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 40); - tcg_gen_andi_i64(t1, t1, 0x0000ff00); - tcg_gen_or_i64(t0, t0, t1); + tcg_gen_shri_i64(t1, arg, 40); + tcg_gen_andi_i64(t1, t1, 0x0000ff00); + tcg_gen_or_i64(t0, t0, t1); - tcg_gen_shri_i64(t1, arg, 56); - tcg_gen_or_i64(ret, t0, t1); - tcg_temp_free_i64(t0); - tcg_temp_free_i64(t1); -#endif + tcg_gen_shri_i64(t1, arg, 56); + tcg_gen_or_i64(ret, t0, t1); + tcg_temp_free_i64(t0); + tcg_temp_free_i64(t1); + } } #endif static inline void tcg_gen_neg_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_neg_i32 - tcg_gen_op2_i32(INDEX_op_neg_i32, ret, arg); -#else - TCGv_i32 t0 = tcg_const_i32(0); - tcg_gen_sub_i32(ret, t0, arg); - tcg_temp_free_i32(t0); -#endif + if (TCG_TARGET_HAS_neg_i32) { + tcg_gen_op2_i32(INDEX_op_neg_i32, ret, arg); + } else { + TCGv_i32 t0 = tcg_const_i32(0); + tcg_gen_sub_i32(ret, t0, arg); + tcg_temp_free_i32(t0); + } } static inline void 
tcg_gen_neg_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_neg_i64 - tcg_gen_op2_i64(INDEX_op_neg_i64, ret, arg); -#else - TCGv_i64 t0 = tcg_const_i64(0); - tcg_gen_sub_i64(ret, t0, arg); - tcg_temp_free_i64(t0); -#endif + if (TCG_TARGET_HAS_neg_i64) { + tcg_gen_op2_i64(INDEX_op_neg_i64, ret, arg); + } else { + TCGv_i64 t0 = tcg_const_i64(0); + tcg_gen_sub_i64(ret, t0, arg); + tcg_temp_free_i64(t0); + } } static inline void tcg_gen_not_i32(TCGv_i32 ret, TCGv_i32 arg) { -#ifdef TCG_TARGET_HAS_not_i32 - tcg_gen_op2_i32(INDEX_op_not_i32, ret, arg); -#else - tcg_gen_xori_i32(ret, arg, -1); -#endif + if (TCG_TARGET_HAS_not_i32) { + tcg_gen_op2_i32(INDEX_op_not_i32, ret, arg); + } else { + tcg_gen_xori_i32(ret, arg, -1); + } } static inline void tcg_gen_not_i64(TCGv_i64 ret, TCGv_i64 arg) { -#ifdef TCG_TARGET_HAS_not_i64 - tcg_gen_op2_i64(INDEX_op_not_i64, ret, arg); -#elif defined(TCG_TARGET_HAS_not_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_not_i64) { + tcg_gen_op2_i64(INDEX_op_not_i64, ret, arg); + } else { + tcg_gen_xori_i64(ret, arg, -1); + } +#else tcg_gen_not_i32(TCGV_LOW(ret), TCGV_LOW(arg)); tcg_gen_not_i32(TCGV_HIGH(ret), TCGV_HIGH(arg)); -#else - tcg_gen_xori_i64(ret, arg, -1); #endif } @@ -1787,18 +1736,15 @@ static inline void tcg_gen_discard_i32(TCGv_i32 arg) tcg_gen_op1_i32(INDEX_op_discard, arg); } -#if TCG_TARGET_REG_BITS == 32 static inline void tcg_gen_discard_i64(TCGv_i64 arg) { +#if TCG_TARGET_REG_BITS == 32 tcg_gen_discard_i32(TCGV_LOW(arg)); tcg_gen_discard_i32(TCGV_HIGH(arg)); -} #else -static inline void tcg_gen_discard_i64(TCGv_i64 arg) -{ tcg_gen_op1_i64(INDEX_op_discard, arg); -} #endif +} static inline void tcg_gen_concat_i32_i64(TCGv_i64 dest, TCGv_i32 low, TCGv_i32 high) { @@ -1832,165 +1778,170 @@ static inline void tcg_gen_concat32_i64(TCGv_i64 dest, TCGv_i64 low, TCGv_i64 hi static inline void tcg_gen_andc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_andc_i32 - tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2); -#else - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_not_i32(t0, arg2); - tcg_gen_and_i32(ret, arg1, t0); - tcg_temp_free_i32(t0); -#endif + if (TCG_TARGET_HAS_andc_i32) { + tcg_gen_op3_i32(INDEX_op_andc_i32, ret, arg1, arg2); + } else { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_not_i32(t0, arg2); + tcg_gen_and_i32(ret, arg1, t0); + tcg_temp_free_i32(t0); + } } static inline void tcg_gen_andc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_andc_i64 - tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2); -#elif defined(TCG_TARGET_HAS_andc_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_andc_i64) { + tcg_gen_op3_i64(INDEX_op_andc_i64, ret, arg1, arg2); + } else { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_not_i64(t0, arg2); + tcg_gen_and_i64(ret, arg1, t0); + tcg_temp_free_i64(t0); + } +#else tcg_gen_andc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2)); tcg_gen_andc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2)); -#else - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_not_i64(t0, arg2); - tcg_gen_and_i64(ret, arg1, t0); - tcg_temp_free_i64(t0); #endif } static inline void tcg_gen_eqv_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_eqv_i32 - tcg_gen_op3_i32(INDEX_op_eqv_i32, ret, arg1, arg2); -#else - tcg_gen_xor_i32(ret, arg1, arg2); - tcg_gen_not_i32(ret, ret); -#endif + if (TCG_TARGET_HAS_eqv_i32) { + tcg_gen_op3_i32(INDEX_op_eqv_i32, ret, arg1, arg2); + } else { 
+ tcg_gen_xor_i32(ret, arg1, arg2); + tcg_gen_not_i32(ret, ret); + } } static inline void tcg_gen_eqv_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_eqv_i64 - tcg_gen_op3_i64(INDEX_op_eqv_i64, ret, arg1, arg2); -#elif defined(TCG_TARGET_HAS_eqv_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_eqv_i64) { + tcg_gen_op3_i64(INDEX_op_eqv_i64, ret, arg1, arg2); + } else { + tcg_gen_xor_i64(ret, arg1, arg2); + tcg_gen_not_i64(ret, ret); + } +#else tcg_gen_eqv_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2)); tcg_gen_eqv_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2)); -#else - tcg_gen_xor_i64(ret, arg1, arg2); - tcg_gen_not_i64(ret, ret); #endif } static inline void tcg_gen_nand_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_nand_i32 - tcg_gen_op3_i32(INDEX_op_nand_i32, ret, arg1, arg2); -#else - tcg_gen_and_i32(ret, arg1, arg2); - tcg_gen_not_i32(ret, ret); -#endif + if (TCG_TARGET_HAS_nand_i32) { + tcg_gen_op3_i32(INDEX_op_nand_i32, ret, arg1, arg2); + } else { + tcg_gen_and_i32(ret, arg1, arg2); + tcg_gen_not_i32(ret, ret); + } } static inline void tcg_gen_nand_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_nand_i64 - tcg_gen_op3_i64(INDEX_op_nand_i64, ret, arg1, arg2); -#elif defined(TCG_TARGET_HAS_nand_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_nand_i64) { + tcg_gen_op3_i64(INDEX_op_nand_i64, ret, arg1, arg2); + } else { + tcg_gen_and_i64(ret, arg1, arg2); + tcg_gen_not_i64(ret, ret); + } +#else tcg_gen_nand_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2)); tcg_gen_nand_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2)); -#else - tcg_gen_and_i64(ret, arg1, arg2); - tcg_gen_not_i64(ret, ret); #endif } static inline void tcg_gen_nor_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_nor_i32 - tcg_gen_op3_i32(INDEX_op_nor_i32, ret, arg1, arg2); -#else - tcg_gen_or_i32(ret, arg1, arg2); - tcg_gen_not_i32(ret, ret); -#endif + if (TCG_TARGET_HAS_nor_i32) { + tcg_gen_op3_i32(INDEX_op_nor_i32, ret, arg1, arg2); + } else { + tcg_gen_or_i32(ret, arg1, arg2); + tcg_gen_not_i32(ret, ret); + } } static inline void tcg_gen_nor_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_nor_i64 - tcg_gen_op3_i64(INDEX_op_nor_i64, ret, arg1, arg2); -#elif defined(TCG_TARGET_HAS_nor_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_nor_i64) { + tcg_gen_op3_i64(INDEX_op_nor_i64, ret, arg1, arg2); + } else { + tcg_gen_or_i64(ret, arg1, arg2); + tcg_gen_not_i64(ret, ret); + } +#else tcg_gen_nor_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2)); tcg_gen_nor_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2)); -#else - tcg_gen_or_i64(ret, arg1, arg2); - tcg_gen_not_i64(ret, ret); #endif } static inline void tcg_gen_orc_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_orc_i32 - tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2); -#else - TCGv_i32 t0; - t0 = tcg_temp_new_i32(); - tcg_gen_not_i32(t0, arg2); - tcg_gen_or_i32(ret, arg1, t0); - tcg_temp_free_i32(t0); -#endif + if (TCG_TARGET_HAS_orc_i32) { + tcg_gen_op3_i32(INDEX_op_orc_i32, ret, arg1, arg2); + } else { + TCGv_i32 t0 = tcg_temp_new_i32(); + tcg_gen_not_i32(t0, arg2); + tcg_gen_or_i32(ret, arg1, t0); + tcg_temp_free_i32(t0); + } } static inline void tcg_gen_orc_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_orc_i64 - tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, 
arg2); -#elif defined(TCG_TARGET_HAS_orc_i32) && TCG_TARGET_REG_BITS == 32 +#if TCG_TARGET_REG_BITS == 64 + if (TCG_TARGET_HAS_orc_i64) { + tcg_gen_op3_i64(INDEX_op_orc_i64, ret, arg1, arg2); + } else { + TCGv_i64 t0 = tcg_temp_new_i64(); + tcg_gen_not_i64(t0, arg2); + tcg_gen_or_i64(ret, arg1, t0); + tcg_temp_free_i64(t0); + } +#else tcg_gen_orc_i32(TCGV_LOW(ret), TCGV_LOW(arg1), TCGV_LOW(arg2)); tcg_gen_orc_i32(TCGV_HIGH(ret), TCGV_HIGH(arg1), TCGV_HIGH(arg2)); -#else - TCGv_i64 t0; - t0 = tcg_temp_new_i64(); - tcg_gen_not_i64(t0, arg2); - tcg_gen_or_i64(ret, arg1, t0); - tcg_temp_free_i64(t0); #endif } static inline void tcg_gen_rotl_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_rot_i32 - tcg_gen_op3_i32(INDEX_op_rotl_i32, ret, arg1, arg2); -#else - TCGv_i32 t0, t1; + if (TCG_TARGET_HAS_rot_i32) { + tcg_gen_op3_i32(INDEX_op_rotl_i32, ret, arg1, arg2); + } else { + TCGv_i32 t0, t1; - t0 = tcg_temp_new_i32(); - t1 = tcg_temp_new_i32(); - tcg_gen_shl_i32(t0, arg1, arg2); - tcg_gen_subfi_i32(t1, 32, arg2); - tcg_gen_shr_i32(t1, arg1, t1); - tcg_gen_or_i32(ret, t0, t1); - tcg_temp_free_i32(t0); - tcg_temp_free_i32(t1); -#endif + t0 = tcg_temp_new_i32(); + t1 = tcg_temp_new_i32(); + tcg_gen_shl_i32(t0, arg1, arg2); + tcg_gen_subfi_i32(t1, 32, arg2); + tcg_gen_shr_i32(t1, arg1, t1); + tcg_gen_or_i32(ret, t0, t1); + tcg_temp_free_i32(t0); + tcg_temp_free_i32(t1); + } } static inline void tcg_gen_rotl_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_rot_i64 - tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2); -#else - TCGv_i64 t0, t1; - - t0 = tcg_temp_new_i64(); - t1 = tcg_temp_new_i64(); - tcg_gen_shl_i64(t0, arg1, arg2); - tcg_gen_subfi_i64(t1, 64, arg2); - tcg_gen_shr_i64(t1, arg1, t1); - tcg_gen_or_i64(ret, t0, t1); - tcg_temp_free_i64(t0); - tcg_temp_free_i64(t1); -#endif + if (TCG_TARGET_HAS_rot_i64) { + tcg_gen_op3_i64(INDEX_op_rotl_i64, ret, arg1, arg2); + } else { + TCGv_i64 t0, t1; + t0 = tcg_temp_new_i64(); + t1 = tcg_temp_new_i64(); + tcg_gen_shl_i64(t0, arg1, arg2); + tcg_gen_subfi_i64(t1, 64, arg2); + tcg_gen_shr_i64(t1, arg1, t1); + tcg_gen_or_i64(ret, t0, t1); + tcg_temp_free_i64(t0); + tcg_temp_free_i64(t1); + } } static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2) @@ -1998,12 +1949,11 @@ static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2) /* some cases can be optimized here */ if (arg2 == 0) { tcg_gen_mov_i32(ret, arg1); - } else { -#ifdef TCG_TARGET_HAS_rot_i32 + } else if (TCG_TARGET_HAS_rot_i32) { TCGv_i32 t0 = tcg_const_i32(arg2); tcg_gen_rotl_i32(ret, arg1, t0); tcg_temp_free_i32(t0); -#else + } else { TCGv_i32 t0, t1; t0 = tcg_temp_new_i32(); t1 = tcg_temp_new_i32(); @@ -2012,7 +1962,6 @@ static inline void tcg_gen_rotli_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2) tcg_gen_or_i32(ret, t0, t1); tcg_temp_free_i32(t0); tcg_temp_free_i32(t1); -#endif } } @@ -2021,12 +1970,11 @@ static inline void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2) /* some cases can be optimized here */ if (arg2 == 0) { tcg_gen_mov_i64(ret, arg1); - } else { -#ifdef TCG_TARGET_HAS_rot_i64 + } else if (TCG_TARGET_HAS_rot_i64) { TCGv_i64 t0 = tcg_const_i64(arg2); tcg_gen_rotl_i64(ret, arg1, t0); tcg_temp_free_i64(t0); -#else + } else { TCGv_i64 t0, t1; t0 = tcg_temp_new_i64(); t1 = tcg_temp_new_i64(); @@ -2035,44 +1983,42 @@ static inline void tcg_gen_rotli_i64(TCGv_i64 ret, TCGv_i64 arg1, int64_t arg2) tcg_gen_or_i64(ret, t0, t1); tcg_temp_free_i64(t0); tcg_temp_free_i64(t1); 
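The rotate fallbacks just above build a rotation out of two shifts and an OR. The same identity on ordinary integers, written as hypothetical helpers (the "& 31" keeps both shift counts below the word size even when the count is 0, a case tcg_gen_rotli_i32 above checks for explicitly):

    #include <stdint.h>

    /* rotl(x, n) == (x << n) | (x >> (width - n)), and the mirror image
       for rotr.  Hypothetical helpers, not part of the patch.  */
    static inline uint32_t rotl32(uint32_t x, unsigned n)
    {
        return (x << (n & 31)) | (x >> ((32 - n) & 31));
    }

    static inline uint32_t rotr32(uint32_t x, unsigned n)
    {
        return (x >> (n & 31)) | (x << ((32 - n) & 31));
    }

The 64-bit variants are the same with 63 and 64 in place of 31 and 32.
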
-#endif } } static inline void tcg_gen_rotr_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2) { -#ifdef TCG_TARGET_HAS_rot_i32 - tcg_gen_op3_i32(INDEX_op_rotr_i32, ret, arg1, arg2); -#else - TCGv_i32 t0, t1; + if (TCG_TARGET_HAS_rot_i32) { + tcg_gen_op3_i32(INDEX_op_rotr_i32, ret, arg1, arg2); + } else { + TCGv_i32 t0, t1; - t0 = tcg_temp_new_i32(); - t1 = tcg_temp_new_i32(); - tcg_gen_shr_i32(t0, arg1, arg2); - tcg_gen_subfi_i32(t1, 32, arg2); - tcg_gen_shl_i32(t1, arg1, t1); - tcg_gen_or_i32(ret, t0, t1); - tcg_temp_free_i32(t0); - tcg_temp_free_i32(t1); -#endif + t0 = tcg_temp_new_i32(); + t1 = tcg_temp_new_i32(); + tcg_gen_shr_i32(t0, arg1, arg2); + tcg_gen_subfi_i32(t1, 32, arg2); + tcg_gen_shl_i32(t1, arg1, t1); + tcg_gen_or_i32(ret, t0, t1); + tcg_temp_free_i32(t0); + tcg_temp_free_i32(t1); + } } static inline void tcg_gen_rotr_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2) { -#ifdef TCG_TARGET_HAS_rot_i64 - tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2); -#else - TCGv_i64 t0, t1; - - t0 = tcg_temp_new_i64(); - t1 = tcg_temp_new_i64(); - tcg_gen_shr_i64(t0, arg1, arg2); - tcg_gen_subfi_i64(t1, 64, arg2); - tcg_gen_shl_i64(t1, arg1, t1); - tcg_gen_or_i64(ret, t0, t1); - tcg_temp_free_i64(t0); - tcg_temp_free_i64(t1); -#endif + if (TCG_TARGET_HAS_rot_i64) { + tcg_gen_op3_i64(INDEX_op_rotr_i64, ret, arg1, arg2); + } else { + TCGv_i64 t0, t1; + t0 = tcg_temp_new_i64(); + t1 = tcg_temp_new_i64(); + tcg_gen_shr_i64(t0, arg1, arg2); + tcg_gen_subfi_i64(t1, 64, arg2); + tcg_gen_shl_i64(t1, arg1, t1); + tcg_gen_or_i64(ret, t0, t1); + tcg_temp_free_i64(t0); + tcg_temp_free_i64(t1); + } } static inline void tcg_gen_rotri_i32(TCGv_i32 ret, TCGv_i32 arg1, int32_t arg2) @@ -2099,38 +2045,38 @@ static inline void tcg_gen_deposit_i32(TCGv_i32 ret, TCGv_i32 arg1, TCGv_i32 arg2, unsigned int ofs, unsigned int len) { -#ifdef TCG_TARGET_HAS_deposit_i32 - tcg_gen_op5ii_i32(INDEX_op_deposit_i32, ret, arg1, arg2, ofs, len); -#else - uint32_t mask = (1u << len) - 1; - TCGv_i32 t1 = tcg_temp_new_i32 (); + if (TCG_TARGET_HAS_deposit_i32) { + tcg_gen_op5ii_i32(INDEX_op_deposit_i32, ret, arg1, arg2, ofs, len); + } else { + uint32_t mask = (1u << len) - 1; + TCGv_i32 t1 = tcg_temp_new_i32 (); - tcg_gen_andi_i32(t1, arg2, mask); - tcg_gen_shli_i32(t1, t1, ofs); - tcg_gen_andi_i32(ret, arg1, ~(mask << ofs)); - tcg_gen_or_i32(ret, ret, t1); + tcg_gen_andi_i32(t1, arg2, mask); + tcg_gen_shli_i32(t1, t1, ofs); + tcg_gen_andi_i32(ret, arg1, ~(mask << ofs)); + tcg_gen_or_i32(ret, ret, t1); - tcg_temp_free_i32(t1); -#endif + tcg_temp_free_i32(t1); + } } static inline void tcg_gen_deposit_i64(TCGv_i64 ret, TCGv_i64 arg1, TCGv_i64 arg2, unsigned int ofs, unsigned int len) { -#ifdef TCG_TARGET_HAS_deposit_i64 - tcg_gen_op5ii_i64(INDEX_op_deposit_i64, ret, arg1, arg2, ofs, len); -#else - uint64_t mask = (1ull << len) - 1; - TCGv_i64 t1 = tcg_temp_new_i64 (); + if (TCG_TARGET_HAS_deposit_i64) { + tcg_gen_op5ii_i64(INDEX_op_deposit_i64, ret, arg1, arg2, ofs, len); + } else { + uint64_t mask = (1ull << len) - 1; + TCGv_i64 t1 = tcg_temp_new_i64 (); - tcg_gen_andi_i64(t1, arg2, mask); - tcg_gen_shli_i64(t1, t1, ofs); - tcg_gen_andi_i64(ret, arg1, ~(mask << ofs)); - tcg_gen_or_i64(ret, ret, t1); + tcg_gen_andi_i64(t1, arg2, mask); + tcg_gen_shli_i64(t1, t1, ofs); + tcg_gen_andi_i64(ret, arg1, ~(mask << ofs)); + tcg_gen_or_i64(ret, ret, t1); - tcg_temp_free_i64(t1); -#endif + tcg_temp_free_i64(t1); + } } /***************************************/ diff --git a/tcg/tcg-opc.h b/tcg/tcg-opc.h index b48669b..8e06d03 100644 --- 
a/tcg/tcg-opc.h +++ b/tcg/tcg-opc.h @@ -41,6 +41,13 @@ DEF(call, 0, 1, 2, TCG_OPF_SIDE_EFFECTS) /* variable number of parameters */ DEF(jmp, 0, 1, 0, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS) DEF(br, 0, 0, 1, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS) +#define IMPL(X) (X ? 0 : TCG_OPF_NOT_PRESENT) +#if TCG_TARGET_REG_BITS == 32 +# define IMPL64 TCG_OPF_64BIT | TCG_OPF_NOT_PRESENT +#else +# define IMPL64 TCG_OPF_64BIT +#endif + DEF(mov_i32, 1, 1, 0, 0) DEF(movi_i32, 1, 0, 1, 0) DEF(setcond_i32, 1, 2, 1, 0) @@ -57,16 +64,12 @@ DEF(st_i32, 0, 2, 1, TCG_OPF_SIDE_EFFECTS) DEF(add_i32, 1, 2, 0, 0) DEF(sub_i32, 1, 2, 0, 0) DEF(mul_i32, 1, 2, 0, 0) -#ifdef TCG_TARGET_HAS_div_i32 -DEF(div_i32, 1, 2, 0, 0) -DEF(divu_i32, 1, 2, 0, 0) -DEF(rem_i32, 1, 2, 0, 0) -DEF(remu_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_div2_i32 -DEF(div2_i32, 2, 3, 0, 0) -DEF(divu2_i32, 2, 3, 0, 0) -#endif +DEF(div_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32)) +DEF(divu_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32)) +DEF(rem_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32)) +DEF(remu_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_div_i32)) +DEF(div2_i32, 2, 3, 0, IMPL(TCG_TARGET_HAS_div2_i32)) +DEF(divu2_i32, 2, 3, 0, IMPL(TCG_TARGET_HAS_div2_i32)) DEF(and_i32, 1, 2, 0, 0) DEF(or_i32, 1, 2, 0, 0) DEF(xor_i32, 1, 2, 0, 0) @@ -74,157 +77,86 @@ DEF(xor_i32, 1, 2, 0, 0) DEF(shl_i32, 1, 2, 0, 0) DEF(shr_i32, 1, 2, 0, 0) DEF(sar_i32, 1, 2, 0, 0) -#ifdef TCG_TARGET_HAS_rot_i32 -DEF(rotl_i32, 1, 2, 0, 0) -DEF(rotr_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_deposit_i32 -DEF(deposit_i32, 1, 2, 2, 0) -#endif +DEF(rotl_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_rot_i32)) +DEF(rotr_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_rot_i32)) +DEF(deposit_i32, 1, 2, 2, IMPL(TCG_TARGET_HAS_deposit_i32)) DEF(brcond_i32, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS) -#if TCG_TARGET_REG_BITS == 32 -DEF(add2_i32, 2, 4, 0, 0) -DEF(sub2_i32, 2, 4, 0, 0) -DEF(brcond2_i32, 0, 4, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS) -DEF(mulu2_i32, 2, 2, 0, 0) -DEF(setcond2_i32, 1, 4, 1, 0) -#endif -#ifdef TCG_TARGET_HAS_ext8s_i32 -DEF(ext8s_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_ext16s_i32 -DEF(ext16s_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_ext8u_i32 -DEF(ext8u_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_ext16u_i32 -DEF(ext16u_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_bswap16_i32 -DEF(bswap16_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_bswap32_i32 -DEF(bswap32_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_not_i32 -DEF(not_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_neg_i32 -DEF(neg_i32, 1, 1, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_andc_i32 -DEF(andc_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_orc_i32 -DEF(orc_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_eqv_i32 -DEF(eqv_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_nand_i32 -DEF(nand_i32, 1, 2, 0, 0) -#endif -#ifdef TCG_TARGET_HAS_nor_i32 -DEF(nor_i32, 1, 2, 0, 0) -#endif -#if TCG_TARGET_REG_BITS == 64 -DEF(mov_i64, 1, 1, 0, TCG_OPF_64BIT) -DEF(movi_i64, 1, 0, 1, TCG_OPF_64BIT) -DEF(setcond_i64, 1, 2, 1, TCG_OPF_64BIT) +DEF(add2_i32, 2, 4, 0, IMPL(TCG_TARGET_REG_BITS == 32)) +DEF(sub2_i32, 2, 4, 0, IMPL(TCG_TARGET_REG_BITS == 32)) +DEF(brcond2_i32, 0, 4, 2, + TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | IMPL(TCG_TARGET_REG_BITS == 32)) +DEF(mulu2_i32, 2, 2, 0, IMPL(TCG_TARGET_REG_BITS == 32)) +DEF(setcond2_i32, 1, 4, 1, IMPL(TCG_TARGET_REG_BITS == 32)) + +DEF(ext8s_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext8s_i32)) +DEF(ext16s_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext16s_i32)) +DEF(ext8u_i32, 1, 1, 0, 
IMPL(TCG_TARGET_HAS_ext8u_i32)) +DEF(ext16u_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_ext16u_i32)) +DEF(bswap16_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_bswap16_i32)) +DEF(bswap32_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_bswap32_i32)) +DEF(not_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_not_i32)) +DEF(neg_i32, 1, 1, 0, IMPL(TCG_TARGET_HAS_neg_i32)) +DEF(andc_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_andc_i32)) +DEF(orc_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_orc_i32)) +DEF(eqv_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_eqv_i32)) +DEF(nand_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_nand_i32)) +DEF(nor_i32, 1, 2, 0, IMPL(TCG_TARGET_HAS_nor_i32)) + +DEF(mov_i64, 1, 1, 0, IMPL64) +DEF(movi_i64, 1, 0, 1, IMPL64) +DEF(setcond_i64, 1, 2, 1, IMPL64) /* load/store */ -DEF(ld8u_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld8s_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld16u_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld16s_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld32u_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld32s_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(ld_i64, 1, 1, 1, TCG_OPF_64BIT) -DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT) -DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT) -DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT) -DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT) +DEF(ld8u_i64, 1, 1, 1, IMPL64) +DEF(ld8s_i64, 1, 1, 1, IMPL64) +DEF(ld16u_i64, 1, 1, 1, IMPL64) +DEF(ld16s_i64, 1, 1, 1, IMPL64) +DEF(ld32u_i64, 1, 1, 1, IMPL64) +DEF(ld32s_i64, 1, 1, 1, IMPL64) +DEF(ld_i64, 1, 1, 1, IMPL64) +DEF(st8_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64) +DEF(st16_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64) +DEF(st32_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64) +DEF(st_i64, 0, 2, 1, TCG_OPF_SIDE_EFFECTS | IMPL64) /* arith */ -DEF(add_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(sub_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(mul_i64, 1, 2, 0, TCG_OPF_64BIT) -#ifdef TCG_TARGET_HAS_div_i64 -DEF(div_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(divu_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(rem_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(remu_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_div2_i64 -DEF(div2_i64, 2, 3, 0, TCG_OPF_64BIT) -DEF(divu2_i64, 2, 3, 0, TCG_OPF_64BIT) -#endif -DEF(and_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(or_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(xor_i64, 1, 2, 0, TCG_OPF_64BIT) +DEF(add_i64, 1, 2, 0, IMPL64) +DEF(sub_i64, 1, 2, 0, IMPL64) +DEF(mul_i64, 1, 2, 0, IMPL64) +DEF(div_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64)) +DEF(divu_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64)) +DEF(rem_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64)) +DEF(remu_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div_i64)) +DEF(div2_i64, 2, 3, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div2_i64)) +DEF(divu2_i64, 2, 3, 0, IMPL64 | IMPL(TCG_TARGET_HAS_div2_i64)) +DEF(and_i64, 1, 2, 0, IMPL64) +DEF(or_i64, 1, 2, 0, IMPL64) +DEF(xor_i64, 1, 2, 0, IMPL64) /* shifts/rotates */ -DEF(shl_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(shr_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(sar_i64, 1, 2, 0, TCG_OPF_64BIT) -#ifdef TCG_TARGET_HAS_rot_i64 -DEF(rotl_i64, 1, 2, 0, TCG_OPF_64BIT) -DEF(rotr_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_deposit_i64 -DEF(deposit_i64, 1, 2, 2, TCG_OPF_64BIT) -#endif +DEF(shl_i64, 1, 2, 0, IMPL64) +DEF(shr_i64, 1, 2, 0, IMPL64) +DEF(sar_i64, 1, 2, 0, IMPL64) +DEF(rotl_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_rot_i64)) +DEF(rotr_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_rot_i64)) +DEF(deposit_i64, 1, 2, 2, IMPL64 | IMPL(TCG_TARGET_HAS_deposit_i64)) -DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | TCG_OPF_64BIT) -#ifdef TCG_TARGET_HAS_ext8s_i64 -DEF(ext8s_i64, 1, 1, 0, 
TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_ext16s_i64 -DEF(ext16s_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_ext32s_i64 -DEF(ext32s_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_ext8u_i64 -DEF(ext8u_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_ext16u_i64 -DEF(ext16u_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_ext32u_i64 -DEF(ext32u_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_bswap16_i64 -DEF(bswap16_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_bswap32_i64 -DEF(bswap32_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_bswap64_i64 -DEF(bswap64_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_not_i64 -DEF(not_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_neg_i64 -DEF(neg_i64, 1, 1, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_andc_i64 -DEF(andc_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_orc_i64 -DEF(orc_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_eqv_i64 -DEF(eqv_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_nand_i64 -DEF(nand_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#ifdef TCG_TARGET_HAS_nor_i64 -DEF(nor_i64, 1, 2, 0, TCG_OPF_64BIT) -#endif -#endif +DEF(brcond_i64, 0, 2, 2, TCG_OPF_BB_END | TCG_OPF_SIDE_EFFECTS | IMPL64) +DEF(ext8s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext8s_i64)) +DEF(ext16s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext16s_i64)) +DEF(ext32s_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext32s_i64)) +DEF(ext8u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext8u_i64)) +DEF(ext16u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext16u_i64)) +DEF(ext32u_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_ext32u_i64)) +DEF(bswap16_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap16_i64)) +DEF(bswap32_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap32_i64)) +DEF(bswap64_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_bswap64_i64)) +DEF(not_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_not_i64)) +DEF(neg_i64, 1, 1, 0, IMPL64 | IMPL(TCG_TARGET_HAS_neg_i64)) +DEF(andc_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_andc_i64)) +DEF(orc_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_orc_i64)) +DEF(eqv_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_eqv_i64)) +DEF(nand_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_nand_i64)) +DEF(nor_i64, 1, 2, 0, IMPL64 | IMPL(TCG_TARGET_HAS_nor_i64)) /* QEMU specific */ #if TARGET_LONG_BITS > TCG_TARGET_REG_BITS @@ -307,4 +239,6 @@ DEF(qemu_st64, 0, 2, 1, TCG_OPF_CALL_CLOBBER | TCG_OPF_SIDE_EFFECTS) #endif /* TCG_TARGET_REG_BITS != 32 */ +#undef IMPL +#undef IMPL64 #undef DEF diff --git a/tcg/tcg.c b/tcg/tcg.c index 7179bd4..3e1e972 100644 --- a/tcg/tcg.c +++ b/tcg/tcg.c @@ -2124,6 +2124,10 @@ static inline int tcg_gen_code_common(TCGContext *s, uint8_t *gen_code_buf, case INDEX_op_end: goto the_end; default: + /* Sanity check that we've not introduced any unhandled opcodes. */ + if (def->flags & TCG_OPF_NOT_PRESENT) { + tcg_abort(); + } /* Note: in order to speed up the code, it would be much faster to have specialized register allocator functions for some common argument patterns */ diff --git a/tcg/tcg.h b/tcg/tcg.h index 6a4f6e4..dc5e9c9 100644 --- a/tcg/tcg.h +++ b/tcg/tcg.h @@ -47,6 +47,42 @@ typedef uint64_t TCGRegSet; #error unsupported #endif +/* Turn some undef macros into false macros. 
*/ +#if TCG_TARGET_REG_BITS == 32 +#define TCG_TARGET_HAS_div_i64 0 +#define TCG_TARGET_HAS_div2_i64 0 +#define TCG_TARGET_HAS_rot_i64 0 +#define TCG_TARGET_HAS_ext8s_i64 0 +#define TCG_TARGET_HAS_ext16s_i64 0 +#define TCG_TARGET_HAS_ext32s_i64 0 +#define TCG_TARGET_HAS_ext8u_i64 0 +#define TCG_TARGET_HAS_ext16u_i64 0 +#define TCG_TARGET_HAS_ext32u_i64 0 +#define TCG_TARGET_HAS_bswap16_i64 0 +#define TCG_TARGET_HAS_bswap32_i64 0 +#define TCG_TARGET_HAS_bswap64_i64 0 +#define TCG_TARGET_HAS_neg_i64 0 +#define TCG_TARGET_HAS_not_i64 0 +#define TCG_TARGET_HAS_andc_i64 0 +#define TCG_TARGET_HAS_orc_i64 0 +#define TCG_TARGET_HAS_eqv_i64 0 +#define TCG_TARGET_HAS_nand_i64 0 +#define TCG_TARGET_HAS_nor_i64 0 +#define TCG_TARGET_HAS_deposit_i64 0 +#endif + +/* Only one of DIV or DIV2 should be defined. */ +#if defined(TCG_TARGET_HAS_div_i32) +#define TCG_TARGET_HAS_div2_i32 0 +#elif defined(TCG_TARGET_HAS_div2_i32) +#define TCG_TARGET_HAS_div_i32 0 +#endif +#if defined(TCG_TARGET_HAS_div_i64) +#define TCG_TARGET_HAS_div2_i64 0 +#elif defined(TCG_TARGET_HAS_div2_i64) +#define TCG_TARGET_HAS_div_i64 0 +#endif + typedef enum TCGOpcode { #define DEF(name, oargs, iargs, cargs, flags) INDEX_op_ ## name, #include "tcg-opc.h" @@ -456,6 +492,8 @@ enum { TCG_OPF_SIDE_EFFECTS = 0x04, /* Instruction operands are 64-bits (otherwise 32-bits). */ TCG_OPF_64BIT = 0x08, + /* Instruction is optional and not implemented by the host. */ + TCG_OPF_NOT_PRESENT = 0x10, }; typedef struct TCGOpDef {
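Earlier in the tcg-op.h hunks, the deposit fallback assembles a bit-field insert from and/shift/or when the host has no deposit opcode. The same computation on plain integers, as a hypothetical helper (valid for 0 < len and ofs + len <= 32):

    #include <stdint.h>

    /* deposit32(dst, src, ofs, len): replace the len-bit field of dst
       starting at bit ofs with the low len bits of src, mirroring the
       fallback expansion above.  */
    static inline uint32_t deposit32(uint32_t dst, uint32_t src,
                                     unsigned ofs, unsigned len)
    {
        uint32_t mask = (len == 32) ? ~0u : ((1u << len) - 1);

        return (dst & ~(mask << ofs)) | ((src & mask) << ofs);
    }

For example, deposit32(0xffffffff, 0, 8, 8) returns 0xffff00ff.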
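The IMPL() and IMPL64 helpers in the tcg-opc.h hunk reduce each conditional opcode to a single DEF() line: when the corresponding TCG_TARGET_HAS_* value is 0, the entry simply gains TCG_OPF_NOT_PRESENT, and the new default-case check in tcg_gen_code_common() turns any attempt to emit such an op into an abort. A compressed, self-contained sketch of that flow; struct opdef, HAS_ROT_*, table and gen_op are made-up stand-ins, while IMPL() and the flag values are taken from the patch:

    #include <stdio.h>
    #include <stdlib.h>

    /* Flag values as defined in tcg.h by this patch.  */
    #define TCG_OPF_64BIT       0x08
    #define TCG_OPF_NOT_PRESENT 0x10

    /* Helper as introduced in tcg-opc.h above.  */
    #define IMPL(X) (X ? 0 : TCG_OPF_NOT_PRESENT)

    /* Hypothetical stand-ins for two target capability macros.  */
    #define HAS_ROT_I32 1
    #define HAS_ROT_I64 0

    /* Miniature of an opcode table: every op has an entry,
       whether or not the host implements it.  */
    struct opdef { const char *name; int flags; };

    static const struct opdef table[] = {
        { "rotl_i32", IMPL(HAS_ROT_I32) },
        { "rotl_i64", TCG_OPF_64BIT | IMPL(HAS_ROT_I64) },
    };

    /* Stand-in for the new default-case check: an op flagged as not
       present must never reach the backend.  */
    static void gen_op(const struct opdef *def)
    {
        if (def->flags & TCG_OPF_NOT_PRESENT) {
            fprintf(stderr, "opcode %s not implemented by this host\n",
                    def->name);
            abort();               /* stand-in for tcg_abort() */
        }
        printf("emit %s\n", def->name);
    }

    int main(void)
    {
        gen_op(&table[0]);         /* emitted normally */
        /* gen_op(&table[1]) would abort: flagged not present. */
        return 0;
    }

The real code builds its tables the same way, by re-including tcg-opc.h with DEF() redefined, as the TCGOpcode enum definition visible above shows.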
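The other new block in tcg.h picks a default for whichever of the div/div2 macro pair a target leaves undefined, so both can be tested unconditionally later on. A small sketch of the same selection, with hypothetical HAS_DIV/HAS_DIV2 macros standing in for TCG_TARGET_HAS_div_i32 and TCG_TARGET_HAS_div2_i32:

    #include <stdio.h>

    /* Pretend the target header defined this one of the pair.  */
    #define HAS_DIV 1

    /* Supply a zero for the one the target left undefined, so both
       names are usable in ordinary C conditions.  */
    #if defined(HAS_DIV)
    #define HAS_DIV2 0
    #elif defined(HAS_DIV2)
    #define HAS_DIV 0
    #endif

    int main(void)
    {
        if (HAS_DIV) {
            puts("host provides the div/rem form");
        } else if (HAS_DIV2) {
            puts("host provides the div2 form");
        } else {
            puts("host provides neither form");
        }
        return 0;
    }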