From patchwork Sun Dec 22 11:26:01 2013
From: Aurelien Jarno
X-Patchwork-Submitter: Aurelien Jarno
X-Patchwork-Id: 304460
To: qemu-devel@nongnu.org
Cc: Tom Musta, Alexander Graf, Aurelien Jarno
Date: Sun, 22 Dec 2013 12:26:01 +0100
Message-Id: <1387711561-26550-1-git-send-email-aurelien@aurel32.net>
X-Mailer: git-send-email 1.7.10.4
Subject: [Qemu-devel] [PATCH] target-ppc: fix VSX extension TCG code

The VSX TCG code uses _i64 types mixed with _tl types. While this is
correct for 64-bit targets, it breaks compilation for 32-bit targets
with --enable-debug-tcg. This patch fixes that by always using the
correct type.

Note that we could probably do better for the loads/stores by using the
new load/store TCG instructions, but that would require changes in other
places of the code, whereas this patch intends to be a bug fix only.
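To illustrate the failure mode: without --enable-debug-tcg the TCG value
handles are (roughly) plain integers, so mixing _tl with _i64 goes
unnoticed; with CONFIG_DEBUG_TCG they become distinct struct types, and on
a 32-bit target TCGv aliases the 32-bit handle. The following standalone
sketch (a rough imitation of tcg.h, not actual QEMU code) shows why the
mix then fails to build:

    /* Sketch only: crude model of the CONFIG_DEBUG_TCG handle types. */
    typedef struct TCGv_i32 { int n; } TCGv_i32;
    typedef struct TCGv_i64 { int n; } TCGv_i64;
    #define TCGv TCGv_i32   /* 32-bit target: TARGET_LONG_BITS == 32 */

    static void tcg_gen_mov_i64(TCGv_i64 d, TCGv_i64 s) { (void)d; (void)s; }
    static void tcg_gen_mov_tl(TCGv d, TCGv s) { (void)d; (void)s; }

    int main(void)
    {
        TCGv_i64 vsrh = { 0 }, vsrl = { 1 };
        tcg_gen_mov_i64(vsrl, vsrh);      /* OK: both operands are _i64 */
        /* tcg_gen_mov_tl(vsrl, vsrh); */ /* compile error under this model:
                                             TCGv_i64 is not TCGv here */
        return 0;
    }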
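As a sketch of the improvement hinted at above (untested, not part of this
patch, and assuming the recently added TCGMemOp-based tcg_gen_qemu_ld_i64()
interface), gen_lxvdsx could fetch the doubleword directly into the 64-bit
VSR half instead of going through gen_qemu_ld64:

    /* Hypothetical follow-up, not part of this fix: a single
     * target-endian 64-bit load straight into an _i64 value,
     * so no _tl temporaries are involved in the memory access. */
    static void gen_lxvdsx(DisasContext *ctx)
    {
        TCGv EA;
        if (unlikely(!ctx->vsx_enabled)) {
            gen_exception(ctx, POWERPC_EXCP_VSXU);
            return;
        }
        gen_set_access_type(ctx, ACCESS_INT);
        EA = tcg_temp_new();
        gen_addr_reg_index(ctx, EA);
        tcg_gen_qemu_ld_i64(cpu_vsrh(xT(ctx->opcode)), EA, ctx->mem_idx, MO_TEQ);
        tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), cpu_vsrh(xT(ctx->opcode)));
        tcg_temp_free(EA);
    }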
Cc: Tom Musta
Cc: Alexander Graf
Signed-off-by: Aurelien Jarno
---
 target-ppc/translate.c | 132 +++++++++++++++++++++++++-----------------------
 1 file changed, 70 insertions(+), 62 deletions(-)

diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index ea58dc9..2f9e9ac 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -7048,13 +7048,13 @@ static void gen_lxvdsx(DisasContext *ctx)
     EA = tcg_temp_new();
     gen_addr_reg_index(ctx, EA);
     gen_qemu_ld64(ctx, cpu_vsrh(xT(ctx->opcode)), EA);
-    tcg_gen_mov_tl(cpu_vsrl(xT(ctx->opcode)), cpu_vsrh(xT(ctx->opcode)));
+    tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), cpu_vsrh(xT(ctx->opcode)));
     tcg_temp_free(EA);
 }
 
 static void gen_lxvw4x(DisasContext *ctx)
 {
-    TCGv EA, tmp;
+    TCGv EA, lo, hi;
     TCGv_i64 xth = cpu_vsrh(xT(ctx->opcode));
     TCGv_i64 xtl = cpu_vsrl(xT(ctx->opcode));
     if (unlikely(!ctx->vsx_enabled)) {
@@ -7063,21 +7063,23 @@ static void gen_lxvw4x(DisasContext *ctx)
     }
     gen_set_access_type(ctx, ACCESS_INT);
     EA = tcg_temp_new();
-    tmp = tcg_temp_new();
+    lo = tcg_temp_new();
+    hi = tcg_temp_new();
     gen_addr_reg_index(ctx, EA);
 
-    gen_qemu_ld32u(ctx, tmp, EA);
+    gen_qemu_ld32u(ctx, lo, EA);
     tcg_gen_addi_tl(EA, EA, 4);
-    gen_qemu_ld32u(ctx, xth, EA);
-    tcg_gen_deposit_i64(xth, xth, tmp, 32, 32);
+    gen_qemu_ld32u(ctx, hi, EA);
+    tcg_gen_concat_tl_i64(xth, lo, hi);
 
     tcg_gen_addi_tl(EA, EA, 4);
-    gen_qemu_ld32u(ctx, tmp, EA);
+    gen_qemu_ld32u(ctx, lo, EA);
     tcg_gen_addi_tl(EA, EA, 4);
-    gen_qemu_ld32u(ctx, xtl, EA);
-    tcg_gen_deposit_i64(xtl, xtl, tmp, 32, 32);
+    gen_qemu_ld32u(ctx, hi, EA);
+    tcg_gen_concat_tl_i64(xtl, lo, hi);
     tcg_temp_free(EA);
-    tcg_temp_free(tmp);
+    tcg_temp_free(hi);
+    tcg_temp_free(lo);
 }
 
 static void gen_stxsdx(DisasContext *ctx)
@@ -7113,6 +7115,7 @@ static void gen_stxvd2x(DisasContext *ctx)
 static void gen_stxvw4x(DisasContext *ctx)
 {
     TCGv EA, tmp;
+    TCGv_i64 tmp64;
     if (unlikely(!ctx->vsx_enabled)) {
         gen_exception(ctx, POWERPC_EXCP_VSXU);
         return;
@@ -7121,19 +7124,24 @@ static void gen_stxvw4x(DisasContext *ctx)
     EA = tcg_temp_new();
     gen_addr_reg_index(ctx, EA);
     tmp = tcg_temp_new();
+    tmp64 = tcg_temp_new_i64();
 
-    tcg_gen_shri_i64(tmp, cpu_vsrh(xS(ctx->opcode)), 32);
+    tcg_gen_shri_i64(tmp64, cpu_vsrh(xS(ctx->opcode)), 32);
+    tcg_gen_trunc_i64_tl(tmp, tmp64);
     gen_qemu_st32(ctx, tmp, EA);
     tcg_gen_addi_tl(EA, EA, 4);
-    gen_qemu_st32(ctx, cpu_vsrh(xS(ctx->opcode)), EA);
+    tcg_gen_trunc_i64_tl(tmp, cpu_vsrh(xS(ctx->opcode)));
+    gen_qemu_st32(ctx, tmp, EA);
 
-    tcg_gen_shri_i64(tmp, cpu_vsrl(xS(ctx->opcode)), 32);
-    tcg_gen_addi_tl(EA, EA, 4);
+    tcg_gen_shri_i64(tmp64, cpu_vsrl(xS(ctx->opcode)), 32);
+    tcg_gen_trunc_i64_tl(tmp, tmp64);
     gen_qemu_st32(ctx, tmp, EA);
     tcg_gen_addi_tl(EA, EA, 4);
-    gen_qemu_st32(ctx, cpu_vsrl(xS(ctx->opcode)), EA);
+    tcg_gen_trunc_i64_tl(tmp, cpu_vsrl(xS(ctx->opcode)));
+    gen_qemu_st32(ctx, tmp, EA);
 
     tcg_temp_free(EA);
+    tcg_temp_free_i64(tmp64);
     tcg_temp_free(tmp);
 }
 
@@ -7171,8 +7179,8 @@ static void glue(gen_, name)(DisasContext * ctx) \
         gen_exception(ctx, POWERPC_EXCP_VSXU); \
         return; \
     } \
-    xb = tcg_temp_new(); \
-    sgm = tcg_temp_new(); \
+    xb = tcg_temp_new_i64(); \
+    sgm = tcg_temp_new_i64(); \
     tcg_gen_mov_i64(xb, cpu_vsrh(xB(ctx->opcode))); \
     tcg_gen_movi_i64(sgm, sgn_mask); \
     switch (op) { \
@@ -7189,18 +7197,18 @@ static void glue(gen_, name)(DisasContext * ctx) \
             break; \
         } \
         case OP_CPSGN: { \
-            TCGv_i64 xa = tcg_temp_new(); \
+            TCGv_i64 xa = tcg_temp_new_i64(); \
             tcg_gen_mov_i64(xa, cpu_vsrh(xA(ctx->opcode))); \
             tcg_gen_and_i64(xa, xa, sgm); \
             tcg_gen_andc_i64(xb, xb, sgm); \
             tcg_gen_or_i64(xb, xb, xa); \
-            tcg_temp_free(xa); \
+            tcg_temp_free_i64(xa); \
             break; \
         } \
     } \
     tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), xb); \
-    tcg_temp_free(xb); \
-    tcg_temp_free(sgm); \
+    tcg_temp_free_i64(xb); \
+    tcg_temp_free_i64(sgm); \
 }
 
 VSX_SCALAR_MOVE(xsabsdp, OP_ABS, SGN_MASK_DP)
@@ -7216,9 +7224,9 @@ static void glue(gen_, name)(DisasContext * ctx) \
         gen_exception(ctx, POWERPC_EXCP_VSXU); \
         return; \
     } \
-    xbh = tcg_temp_new(); \
-    xbl = tcg_temp_new(); \
-    sgm = tcg_temp_new(); \
+    xbh = tcg_temp_new_i64(); \
+    xbl = tcg_temp_new_i64(); \
+    sgm = tcg_temp_new_i64(); \
     tcg_gen_mov_i64(xbh, cpu_vsrh(xB(ctx->opcode))); \
     tcg_gen_mov_i64(xbl, cpu_vsrl(xB(ctx->opcode))); \
     tcg_gen_movi_i64(sgm, sgn_mask); \
@@ -7239,8 +7247,8 @@ static void glue(gen_, name)(DisasContext * ctx) \
             break; \
         } \
         case OP_CPSGN: { \
-            TCGv_i64 xah = tcg_temp_new(); \
-            TCGv_i64 xal = tcg_temp_new(); \
+            TCGv_i64 xah = tcg_temp_new_i64(); \
+            TCGv_i64 xal = tcg_temp_new_i64(); \
             tcg_gen_mov_i64(xah, cpu_vsrh(xA(ctx->opcode))); \
             tcg_gen_mov_i64(xal, cpu_vsrl(xA(ctx->opcode))); \
             tcg_gen_and_i64(xah, xah, sgm); \
@@ -7249,16 +7257,16 @@ static void glue(gen_, name)(DisasContext * ctx) \
             tcg_gen_andc_i64(xbl, xbl, sgm); \
             tcg_gen_or_i64(xbh, xbh, xah); \
             tcg_gen_or_i64(xbl, xbl, xal); \
-            tcg_temp_free(xah); \
-            tcg_temp_free(xal); \
+            tcg_temp_free_i64(xah); \
+            tcg_temp_free_i64(xal); \
             break; \
         } \
     } \
     tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), xbh); \
     tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), xbl); \
-    tcg_temp_free(xbh); \
-    tcg_temp_free(xbl); \
-    tcg_temp_free(sgm); \
+    tcg_temp_free_i64(xbh); \
+    tcg_temp_free_i64(xbl); \
+    tcg_temp_free_i64(sgm); \
 }
 
 VSX_VECTOR_MOVE(xvabsdp, OP_ABS, SGN_MASK_DP)
@@ -7284,11 +7292,11 @@ static void glue(gen_, name)(DisasContext * ctx) \
                  cpu_vsrl(xB(ctx->opcode))); \
 }
 
-VSX_LOGICAL(xxland, tcg_gen_and_tl)
-VSX_LOGICAL(xxlandc, tcg_gen_andc_tl)
-VSX_LOGICAL(xxlor, tcg_gen_or_tl)
-VSX_LOGICAL(xxlxor, tcg_gen_xor_tl)
-VSX_LOGICAL(xxlnor, tcg_gen_nor_tl)
+VSX_LOGICAL(xxland, tcg_gen_and_i64)
+VSX_LOGICAL(xxlandc, tcg_gen_andc_i64)
+VSX_LOGICAL(xxlor, tcg_gen_or_i64)
+VSX_LOGICAL(xxlxor, tcg_gen_xor_i64)
+VSX_LOGICAL(xxlnor, tcg_gen_nor_i64)
 
 #define VSX_XXMRG(name, high) \
 static void glue(gen_, name)(DisasContext * ctx) \
@@ -7298,10 +7306,10 @@ static void glue(gen_, name)(DisasContext * ctx) \
         gen_exception(ctx, POWERPC_EXCP_VSXU); \
         return; \
     } \
-    a0 = tcg_temp_new(); \
-    a1 = tcg_temp_new(); \
-    b0 = tcg_temp_new(); \
-    b1 = tcg_temp_new(); \
+    a0 = tcg_temp_new_i64(); \
+    a1 = tcg_temp_new_i64(); \
+    b0 = tcg_temp_new_i64(); \
+    b1 = tcg_temp_new_i64(); \
     if (high) { \
         tcg_gen_mov_i64(a0, cpu_vsrh(xA(ctx->opcode))); \
         tcg_gen_mov_i64(a1, cpu_vsrh(xA(ctx->opcode))); \
@@ -7319,10 +7327,10 @@ static void glue(gen_, name)(DisasContext * ctx) \
                         b0, a0, 32, 32); \
     tcg_gen_deposit_i64(cpu_vsrl(xT(ctx->opcode)), \
                         b1, a1, 32, 32); \
-    tcg_temp_free(a0); \
-    tcg_temp_free(a1); \
-    tcg_temp_free(b0); \
-    tcg_temp_free(b1); \
+    tcg_temp_free_i64(a0); \
+    tcg_temp_free_i64(a1); \
+    tcg_temp_free_i64(b0); \
+    tcg_temp_free_i64(b1); \
 }
 
 VSX_XXMRG(xxmrghw, 1)
@@ -7335,9 +7343,9 @@ static void gen_xxsel(DisasContext * ctx)
         gen_exception(ctx, POWERPC_EXCP_VSXU);
         return;
     }
-    a = tcg_temp_new();
-    b = tcg_temp_new();
-    c = tcg_temp_new();
+    a = tcg_temp_new_i64();
+    b = tcg_temp_new_i64();
+    c = tcg_temp_new_i64();
 
     tcg_gen_mov_i64(a, cpu_vsrh(xA(ctx->opcode)));
     tcg_gen_mov_i64(b, cpu_vsrh(xB(ctx->opcode)));
@@ -7355,9 +7363,9 @@ static void gen_xxsel(DisasContext * ctx)
     tcg_gen_andc_i64(a, a, c);
     tcg_gen_or_i64(cpu_vsrl(xT(ctx->opcode)), a, b);
 
-    tcg_temp_free(a);
-    tcg_temp_free(b);
-    tcg_temp_free(c);
+    tcg_temp_free_i64(a);
+    tcg_temp_free_i64(b);
+    tcg_temp_free_i64(c);
 }
 
 static void gen_xxspltw(DisasContext *ctx)
@@ -7372,8 +7380,8 @@ static void gen_xxspltw(DisasContext *ctx)
         return;
     }
 
-    b = tcg_temp_new();
-    b2 = tcg_temp_new();
+    b = tcg_temp_new_i64();
+    b2 = tcg_temp_new_i64();
 
     if (UIM(ctx->opcode) & 1) {
         tcg_gen_ext32u_i64(b, vsr);
@@ -7385,8 +7393,8 @@ static void gen_xxspltw(DisasContext *ctx)
     tcg_gen_or_i64(cpu_vsrh(xT(ctx->opcode)), b, b2);
     tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), cpu_vsrh(xT(ctx->opcode)));
 
-    tcg_temp_free(b);
-    tcg_temp_free(b2);
+    tcg_temp_free_i64(b);
+    tcg_temp_free_i64(b2);
 }
 
 static void gen_xxsldwi(DisasContext *ctx)
@@ -7396,8 +7404,8 @@ static void gen_xxsldwi(DisasContext *ctx)
        gen_exception(ctx, POWERPC_EXCP_VSXU);
        return;
    }
-    xth = tcg_temp_new();
-    xtl = tcg_temp_new();
+    xth = tcg_temp_new_i64();
+    xtl = tcg_temp_new_i64();
 
     switch (SHW(ctx->opcode)) {
         case 0: {
@@ -7406,7 +7414,7 @@ static void gen_xxsldwi(DisasContext *ctx)
             break;
         }
         case 1: {
-            TCGv_i64 t0 = tcg_temp_new();
+            TCGv_i64 t0 = tcg_temp_new_i64();
             tcg_gen_mov_i64(xth, cpu_vsrh(xA(ctx->opcode)));
             tcg_gen_shli_i64(xth, xth, 32);
             tcg_gen_mov_i64(t0, cpu_vsrl(xA(ctx->opcode)));
@@ -7417,7 +7425,7 @@ static void gen_xxsldwi(DisasContext *ctx)
             tcg_gen_mov_i64(t0, cpu_vsrh(xB(ctx->opcode)));
             tcg_gen_shri_i64(t0, t0, 32);
             tcg_gen_or_i64(xtl, xtl, t0);
-            tcg_temp_free(t0);
+            tcg_temp_free_i64(t0);
             break;
         }
         case 2: {
@@ -7426,7 +7434,7 @@ static void gen_xxsldwi(DisasContext *ctx)
             break;
         }
         case 3: {
-            TCGv_i64 t0 = tcg_temp_new();
+            TCGv_i64 t0 = tcg_temp_new_i64();
             tcg_gen_mov_i64(xth, cpu_vsrl(xA(ctx->opcode)));
             tcg_gen_shli_i64(xth, xth, 32);
             tcg_gen_mov_i64(t0, cpu_vsrh(xB(ctx->opcode)));
@@ -7437,7 +7445,7 @@ static void gen_xxsldwi(DisasContext *ctx)
             tcg_gen_mov_i64(t0, cpu_vsrl(xB(ctx->opcode)));
             tcg_gen_shri_i64(t0, t0, 32);
             tcg_gen_or_i64(xtl, xtl, t0);
-            tcg_temp_free(t0);
+            tcg_temp_free_i64(t0);
             break;
         }
     }
@@ -7445,8 +7453,8 @@ static void gen_xxsldwi(DisasContext *ctx)
     tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), xth);
     tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), xtl);
 
-    tcg_temp_free(xth);
-    tcg_temp_free(xtl);
+    tcg_temp_free_i64(xth);
+    tcg_temp_free_i64(xtl);
 }