From patchwork Fri Oct 4 13:20:00 2013
X-Patchwork-Submitter: Tom Musta
X-Patchwork-Id: 280615
Message-ID: <524EC080.90902@gmail.com>
Date: Fri, 04 Oct 2013 08:20:00 -0500
From: Tom Musta
To: qemu-ppc@nongnu.org
References: <524EBE04.8050207@gmail.com>
In-Reply-To: <524EBE04.8050207@gmail.com>
Cc: Tom Musta, qemu-devel@nongnu.org
Subject: [Qemu-devel] [PATCH 07/13] Add VSX Scalar Move Instructions

This patch adds the VSX scalar move instructions:

  - xsabsdp (Scalar Absolute Value Double-Precision)
  - xsnabsdp (Scalar Negative Absolute Value Double-Precision)
  - xsnegdp (Scalar Negate Double-Precision)
  - xscpsgndp (Scalar Copy Sign Double-Precision)

A common generator macro (VSX_SCALAR_MOVE) is added, since these
instructions vary only slightly from each other.

Macros to support the VSX XX2 and XX3 form opcodes are also added.
These macros handle the overloading of the "opcode 2" space
(instruction bits 26:30) caused by the AX and BX bits (bits 29 and 30,
respectively).

Signed-off-by: Tom Musta
---
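For reference, all four instructions operate only on the sign bit of the
64-bit double-precision image and have no other floating-point side
effects. A minimal host-side C sketch of the same sign-bit operations that
VSX_SCALAR_MOVE emits as TCG ops (the helper names below are made up for
illustration and do not exist in QEMU):

  #include <stdint.h>

  #define DP_SIGN 0x8000000000000000ull   /* same value as SGN_MASK_DP */

  /* xsabsdp: clear the sign bit */
  static uint64_t dp_abs(uint64_t xb)  { return xb & ~DP_SIGN; }
  /* xsnabsdp: force the sign bit on */
  static uint64_t dp_nabs(uint64_t xb) { return xb | DP_SIGN; }
  /* xsnegdp: flip the sign bit */
  static uint64_t dp_neg(uint64_t xb)  { return xb ^ DP_SIGN; }
  /* xscpsgndp: sign bit from xa, exponent/fraction bits from xb */
  static uint64_t dp_cpsgn(uint64_t xa, uint64_t xb)
  {
      return (xa & DP_SIGN) | (xb & ~DP_SIGN);
  }

For example, dp_neg(0x3FF0000000000000ull /* +1.0 */) returns
0xBFF0000000000000ull /* -1.0 */.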
 target-ppc/translate.c |   66 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 66 insertions(+), 0 deletions(-)

diff --git a/target-ppc/translate.c b/target-ppc/translate.c
index 7d71fb9..db54e4f 100644
--- a/target-ppc/translate.c
+++ b/target-ppc/translate.c
@@ -7158,6 +7158,55 @@ static void gen_xxpermdi(DisasContext *ctx)
         tcg_gen_mov_i64(cpu_vsrl(xT(ctx->opcode)), cpu_vsrl(xB(ctx->opcode)));
     }
 }
+#define OP_ABS 1
+#define OP_NABS 2
+#define OP_NEG 3
+#define OP_CPSGN 4
+#define SGN_MASK_DP 0x8000000000000000ul
+#define SGN_MASK_SP 0x8000000080000000ul
+
+#define VSX_SCALAR_MOVE(name, op, sgn_mask) \
+static void glue(gen_, name)(DisasContext * ctx) \
+    { \
+        TCGv_i64 xb; \
+        if (unlikely(!ctx->vsx_enabled)) { \
+            gen_exception(ctx, POWERPC_EXCP_VSXU); \
+            return; \
+        } \
+        xb = tcg_temp_new(); \
+        tcg_gen_mov_i64(xb, cpu_vsrh(xB(ctx->opcode))); \
+        switch (op) { \
+            case OP_ABS: { \
+                tcg_gen_andi_i64(xb, xb, ~(sgn_mask)); \
+                break; \
+            } \
+            case OP_NABS: { \
+                tcg_gen_ori_i64(xb, xb, (sgn_mask)); \
+                break; \
+            } \
+            case OP_NEG: { \
+                tcg_gen_xori_i64(xb, xb, (sgn_mask)); \
+                break; \
+            } \
+            case OP_CPSGN: { \
+                TCGv_i64 xa = tcg_temp_new(); \
+                tcg_gen_mov_i64(xa, cpu_vsrh(xA(ctx->opcode))); \
+                tcg_gen_andi_i64(xa, xa, (sgn_mask)); \
+                tcg_gen_andi_i64(xb, xb, ~(sgn_mask)); \
+                tcg_gen_or_i64(xb, xb, xa); \
+                tcg_temp_free(xa); \
+                break; \
+            } \
+        } \
+        tcg_gen_mov_i64(cpu_vsrh(xT(ctx->opcode)), xb); \
+        tcg_temp_free(xb); \
+    }
+
+VSX_SCALAR_MOVE(xsabsdp, OP_ABS, SGN_MASK_DP)
+VSX_SCALAR_MOVE(xsnabsdp, OP_NABS, SGN_MASK_DP)
+VSX_SCALAR_MOVE(xsnegdp, OP_NEG, SGN_MASK_DP)
+VSX_SCALAR_MOVE(xscpsgndp, OP_CPSGN, SGN_MASK_DP)
+
 /***                           SPE extension                               ***/
 
 /* Register moves */
@@ -9617,6 +9666,18 @@
 GEN_HANDLER_E(stxsdx, 0x1F, 0xC, 0x16, 0, PPC_NONE, PPC2_VSX),
 GEN_HANDLER_E(stxvd2x, 0x1F, 0xC, 0x1E, 0, PPC_NONE, PPC2_VSX),
 GEN_HANDLER_E(stxvw4x, 0x1F, 0xC, 0x1C, 0, PPC_NONE, PPC2_VSX),
+#undef GEN_XX2FORM
+#define GEN_XX2FORM(name, opc2, opc3, fl2) \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 0, opc3, 0, PPC_NONE, fl2), \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 1, opc3, 0, PPC_NONE, fl2)
+
+#undef GEN_XX3FORM
+#define GEN_XX3FORM(name, opc2, opc3, fl2) \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 0, opc3, 0, PPC_NONE, fl2), \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 1, opc3, 0, PPC_NONE, fl2), \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 2, opc3, 0, PPC_NONE, fl2), \
+GEN_HANDLER2_E(name, #name, 0x3C, opc2 | 3, opc3, 0, PPC_NONE, fl2)
+
 #undef GEN_XX3FORM_DM
 #define GEN_XX3FORM_DM(name, opc2, opc3) \
 GEN_HANDLER2_E(name, #name, 0x3C, opc2|0x00, opc3|0x00, 0, PPC_NONE, PPC2_VSX),\
@@ -9636,6 +9697,11 @@
 GEN_HANDLER2_E(name, #name, 0x3C, opc2|0x01, opc3|0x0C, 0, PPC_NONE, PPC2_VSX),\
 GEN_HANDLER2_E(name, #name, 0x3C, opc2|0x02, opc3|0x0C, 0, PPC_NONE, PPC2_VSX),\
 GEN_HANDLER2_E(name, #name, 0x3C, opc2|0x03, opc3|0x0C, 0, PPC_NONE, PPC2_VSX)
+GEN_XX2FORM(xsabsdp, 0x12, 0x15, PPC2_VSX),
+GEN_XX2FORM(xsnabsdp, 0x12, 0x16, PPC2_VSX),
+GEN_XX2FORM(xsnegdp, 0x12, 0x17, PPC2_VSX),
+GEN_XX3FORM(xscpsgndp, 0x00, 0x16, PPC2_VSX),
+
 GEN_XX3FORM_DM(xxpermdi, 0x08, 0x01),
 
 #undef GEN_SPE
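The reason GEN_XX2FORM registers two opcode-table entries and GEN_XX3FORM
four is that the dispatch key ("opcode 2", instruction bits 26:30) contains
the BX (bit 30) and AX (bit 29) register-extension bits, so the same
instruction can appear under two or four different key values. A rough
sketch of the XX3-form field split, using hypothetical helpers rather than
QEMU's actual extract macros:

  #include <stdint.h>

  /* Power ISA big-endian bit numbering: bit 0 is the MSB of the 32-bit
   * instruction word.  Illustrative helpers only. */
  static inline uint32_t bits_be(uint32_t insn, int start, int end)
  {
      return (insn >> (31 - end)) & ((1u << (end - start + 1)) - 1);
  }

  /* XX3 form: T in bits 6:10, A in 11:15, B in 16:20,
   * extension bits AX/BX/TX in bits 29/30/31. */
  static inline int xx3_xt(uint32_t insn)
  {
      return (bits_be(insn, 31, 31) << 5) | bits_be(insn, 6, 10);
  }
  static inline int xx3_xa(uint32_t insn)
  {
      return (bits_be(insn, 29, 29) << 5) | bits_be(insn, 11, 15);
  }
  static inline int xx3_xb(uint32_t insn)
  {
      return (bits_be(insn, 30, 30) << 5) | bits_be(insn, 16, 20);
  }

  /* Dispatch key covering bits 26:30 -- AX and BX fall inside it,
   * hence the opc2|0 ... opc2|3 entries in the table above. */
  static inline uint32_t opc2_key(uint32_t insn)
  {
      return bits_be(insn, 26, 30);
  }

With the extension bit set, XT/XA/XB address VSRs 32-63 (the half of the
register file that overlays the Altivec registers); with it clear they
address VSRs 0-31, which overlay the FPRs.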