From patchwork Tue Feb 25 10:35:04 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, linux-csky@vger.kernel.org, wenmeng_zhang@c-sky.com, wxy194768@alibaba-inc.com, LIU Zhiwei
Subject: [PATCH v4 1/5] target/riscv: add vector unit stride load and store instructions
Date: Tue, 25 Feb 2020 18:35:04 +0800
Message-Id: <20200225103508.7651-2-zhiwei_liu@c-sky.com>
In-Reply-To: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
References: <20200225103508.7651-1-zhiwei_liu@c-sky.com>

Vector unit-stride operations access elements stored contiguously in memory,
starting from the base effective address.
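Concretely, the helpers below compute the effective address of field k of
element i as base + (i * nf + k) * msz, where msz is the memory-element size
in bytes and nf is the number of fields (nf > 1 only for the Zvlsseg segment
forms described next). A minimal sketch of that calculation, with hypothetical
names not taken from the patch:

#include <stdint.h>

/*
 * Illustrative only: unit-stride (segment) addressing as computed by
 * the vext_ld_us/vext_st_us helpers in this patch. Elements are packed
 * back to back, each carrying nf fields of msz bytes.
 */
static inline uint64_t us_field_addr(uint64_t base, uint32_t i, uint32_t k,
                                     uint32_t nf, uint32_t msz)
{
    return base + ((uint64_t)i * nf + k) * msz;
}
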
The Zvlsseg extension adds the vector load/store segment instructions, which
move multiple contiguous fields in memory to and from consecutively numbered
vector registers.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  70 ++++
 target/riscv/insn32.decode              |  17 +
 target/riscv/insn_trans/trans_rvv.inc.c | 188 +++++++++++
 target/riscv/translate.c                |   2 +
 target/riscv/vector_helper.c            | 404 ++++++++++++++++++++++++
 5 files changed, 681 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 3c28c7e407..996639c0fa 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -78,3 +78,73 @@ DEF_HELPER_1(tlb_flush, void, env)
 #endif
 /* Vector functions */
 DEF_HELPER_3(vsetvl, tl, env, tl, tl)
+DEF_HELPER_5(vlb_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlb_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlh_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlw_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vle_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwu_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsb_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsh_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vsw_v_d_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_b_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_h_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_w_mask, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vse_v_d_mask, void, ptr, ptr, tl, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 5dc009c3cd..dad3ed91c7 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -43,6 +43,7 @@ &u imm rd
 &shift shamt rs1 rd
 &atomic aq rl rs2 rs1 rd
+&r2nfvm vm rd rs1 nf

 # Formats 32:
 @r ....... ..... ..... ... ..... ....... &r %rs2 %rs1 %rd
@@ -62,6 +63,7 @@ @r_rm ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
 @r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd
 @r2 ....... ..... ..... ... ..... ....... %rs1 %rd
+@r2_nfvm nf:3 ... vm:1 ..... ..... ... ..... ....... &r2nfvm %rs1 %rd
 @r2_zimm . zimm:11 ..... ... ..... ....... %rs1 %rd
 @sfence_vma ....... ..... ..... ... ..... ....... %rs2 %rs1
@@ -206,5 +208,20 @@ fcvt_d_w 1101001 00000 ..... ... ..... 1010011 @r2_rm
 fcvt_d_wu 1101001 00001 ..... ... ..... 1010011 @r2_rm

 # *** RV32V Extension ***
+
+# *** Vector loads and stores are encoded within LOADFP/STORE-FP ***
+vlb_v ... 100 . 00000 ..... 000 ..... 0000111 @r2_nfvm
+vlh_v ... 100 . 00000 ..... 101 ..... 0000111 @r2_nfvm
+vlw_v ... 100 . 00000 ..... 110 ..... 0000111 @r2_nfvm
+vle_v ... 000 . 00000 ..... 111 ..... 0000111 @r2_nfvm
+vlbu_v ... 000 . 00000 ..... 000 ..... 0000111 @r2_nfvm
+vlhu_v ... 000 . 00000 ..... 101 ..... 0000111 @r2_nfvm
+vlwu_v ... 000 . 00000 ..... 110 ..... 0000111 @r2_nfvm
+vsb_v ... 000 . 00000 ..... 000 ..... 0100111 @r2_nfvm
+vsh_v ... 000 . 00000 ..... 101 ..... 0100111 @r2_nfvm
+vsw_v ... 000 . 00000 ..... 110 ..... 0100111 @r2_nfvm
+vse_v ... 000 . 00000 ..... 111 ..... 0100111 @r2_nfvm
+
+# *** new major opcode OP-V ***
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index da82c72bbf..b0e97e7e06 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -15,6 +15,8 @@
  * You should have received a copy of the GNU General Public License along with
  * this program. If not, see <http://www.gnu.org/licenses/>.
*/ +#include "tcg/tcg-op-gvec.h" +#include "tcg/tcg-gvec-desc.h" static bool trans_vsetvl(DisasContext *ctx, arg_vsetvl * a) { @@ -67,3 +69,189 @@ static bool trans_vsetvli(DisasContext *ctx, arg_vsetvli * a) tcg_temp_free(dst); return true; } + +/* vector register offset from env */ +static uint32_t vreg_ofs(DisasContext *s, int reg) +{ + return offsetof(CPURISCVState, vreg) + reg * s->vlen / 8; +} + +/* check functions */ +static bool vext_check_isa_ill(DisasContext *s, target_ulong isa) +{ + return !s->vill && ((s->misa & isa) == isa); +} + +static bool vext_check_reg(DisasContext *s, uint32_t reg, bool widen) +{ + int legal = widen ? 2 << s->lmul : 1 << s->lmul; + + return !((s->lmul == 0x3 && widen) || (reg % legal)); +} + +static bool vext_check_overlap_mask(DisasContext *s, uint32_t vd, bool vm) +{ + return !(s->lmul > 1 && vm == 0 && vd == 0); +} + +static bool vext_check_nf(DisasContext *s, uint32_t nf) +{ + return s->lmul * (nf + 1) <= 8; +} + +/* common translation macro */ +#define GEN_VEXT_TRANS(NAME, SEQ, ARGTYPE, OP, CHECK) \ +static bool trans_##NAME(DisasContext *s, arg_##ARGTYPE *a)\ +{ \ + if (CHECK(s, a)) { \ + return OP(s, a, SEQ); \ + } \ + return false; \ +} + +/* + *** unit stride load and store + */ +typedef void gen_helper_ldst_us(TCGv_ptr, TCGv_ptr, TCGv, + TCGv_env, TCGv_i32); + +static bool ldst_us_trans(uint32_t vd, uint32_t rs1, uint32_t data, + gen_helper_ldst_us *fn, DisasContext *s) +{ + TCGv_ptr dest, mask; + TCGv base; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + base = tcg_temp_new(); + + /* + * As simd_desc supports at most 256 bytes, and in this implementation, + * the max vector group length is 2048 bytes. So split it into two parts. + * + * The first part is vlen in bytes, encoded in maxsz of simd_desc. + * The second part is lmul, encoded in data of simd_desc. 
+ */
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool ld_us_op(DisasContext *s, arg_r2nfvm *a, uint8_t seq)
+{
+    uint8_t nf = a->nf + 1;
+    uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11);
+    gen_helper_ldst_us *fn;
+    static gen_helper_ldst_us * const fns[2][7][4] = {
+        /* masked unit stride load */
+        { { gen_helper_vlb_v_b_mask, gen_helper_vlb_v_h_mask,
+            gen_helper_vlb_v_w_mask, gen_helper_vlb_v_d_mask },
+          { NULL, gen_helper_vlh_v_h_mask,
+            gen_helper_vlh_v_w_mask, gen_helper_vlh_v_d_mask },
+          { NULL, NULL,
+            gen_helper_vlw_v_w_mask, gen_helper_vlw_v_d_mask },
+          { gen_helper_vle_v_b_mask, gen_helper_vle_v_h_mask,
+            gen_helper_vle_v_w_mask, gen_helper_vle_v_d_mask },
+          { gen_helper_vlbu_v_b_mask, gen_helper_vlbu_v_h_mask,
+            gen_helper_vlbu_v_w_mask, gen_helper_vlbu_v_d_mask },
+          { NULL, gen_helper_vlhu_v_h_mask,
+            gen_helper_vlhu_v_w_mask, gen_helper_vlhu_v_d_mask },
+          { NULL, NULL,
+            gen_helper_vlwu_v_w_mask, gen_helper_vlwu_v_d_mask } },
+        /* unmasked unit stride load */
+        { { gen_helper_vlb_v_b, gen_helper_vlb_v_h,
+            gen_helper_vlb_v_w, gen_helper_vlb_v_d },
+          { NULL, gen_helper_vlh_v_h,
+            gen_helper_vlh_v_w, gen_helper_vlh_v_d },
+          { NULL, NULL,
+            gen_helper_vlw_v_w, gen_helper_vlw_v_d },
+          { gen_helper_vle_v_b, gen_helper_vle_v_h,
+            gen_helper_vle_v_w, gen_helper_vle_v_d },
+          { gen_helper_vlbu_v_b, gen_helper_vlbu_v_h,
+            gen_helper_vlbu_v_w, gen_helper_vlbu_v_d },
+          { NULL, gen_helper_vlhu_v_h,
+            gen_helper_vlhu_v_w, gen_helper_vlhu_v_d },
+          { NULL, NULL,
+            gen_helper_vlwu_v_w, gen_helper_vlwu_v_d } }
+    };
+
+    fn = fns[a->vm][seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    return ldst_us_trans(a->rd, a->rs1, data, fn, s);
+}
+
+static bool ld_us_check(DisasContext *s, arg_r2nfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_overlap_mask(s, a->rd, a->vm) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vlb_v, 0, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlh_v, 1, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlw_v, 2, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vle_v, 3, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlbu_v, 4, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlhu_v, 5, r2nfvm, ld_us_op, ld_us_check)
+GEN_VEXT_TRANS(vlwu_v, 6, r2nfvm, ld_us_op, ld_us_check)
+
+static bool st_us_op(DisasContext *s, arg_r2nfvm *a, uint8_t seq)
+{
+    uint8_t nf = a->nf + 1;
+    uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11);
+    gen_helper_ldst_us *fn;
+    static gen_helper_ldst_us * const fns[2][4][4] = {
+        /* masked unit stride store */
+        { { gen_helper_vsb_v_b_mask, gen_helper_vsb_v_h_mask,
+            gen_helper_vsb_v_w_mask, gen_helper_vsb_v_d_mask },
+          { NULL, gen_helper_vsh_v_h_mask,
+            gen_helper_vsh_v_w_mask, gen_helper_vsh_v_d_mask },
+          { NULL, NULL,
+            gen_helper_vsw_v_w_mask, gen_helper_vsw_v_d_mask },
+          { gen_helper_vse_v_b_mask, gen_helper_vse_v_h_mask,
+            gen_helper_vse_v_w_mask, gen_helper_vse_v_d_mask } },
+        /* unmasked unit stride store */
+        { { gen_helper_vsb_v_b, gen_helper_vsb_v_h,
+            gen_helper_vsb_v_w, gen_helper_vsb_v_d },
+          { NULL, gen_helper_vsh_v_h,
+            gen_helper_vsh_v_w, gen_helper_vsh_v_d },
+          { NULL, NULL,
+            gen_helper_vsw_v_w, gen_helper_vsw_v_d },
+          { gen_helper_vse_v_b, gen_helper_vse_v_h,
+            gen_helper_vse_v_w, gen_helper_vse_v_d } }
+    };
+
+    fn = fns[a->vm][seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    return ldst_us_trans(a->rd, a->rs1, data, fn, s);
+}
+
+static bool st_us_check(DisasContext *s, arg_r2nfvm* a)
+{
+    return (vext_check_isa_ill(s, RVV) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_nf(s, a->nf));
+}
+
+GEN_VEXT_TRANS(vsb_v, 0, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vsh_v, 1, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vsw_v, 2, r2nfvm, st_us_op, st_us_check)
+GEN_VEXT_TRANS(vse_v, 3, r2nfvm, st_us_op, st_us_check)
diff --git a/target/riscv/translate.c b/target/riscv/translate.c
index cc356aabd8..faec71e239 100644
--- a/target/riscv/translate.c
+++ b/target/riscv/translate.c
@@ -60,6 +60,7 @@ typedef struct DisasContext {
     uint8_t lmul;
     uint8_t sew;
     uint16_t vlen;
+    uint16_t mlen;
     bool vl_eq_vlmax;
 } DisasContext;
@@ -755,6 +756,7 @@ static void riscv_tr_init_disas_context(DisasContextBase *dcbase, CPUState *cs)
     ctx->vill = FIELD_EX32(tb_flags, TB_FLAGS, VILL);
     ctx->sew = FIELD_EX32(tb_flags, TB_FLAGS, SEW);
     ctx->lmul = FIELD_EX32(tb_flags, TB_FLAGS, LMUL);
+    ctx->mlen = 1 << (ctx->sew + 3 - ctx->lmul);
     ctx->vl_eq_vlmax = FIELD_EX32(tb_flags, TB_FLAGS, VL_EQ_VLMAX);
 }
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 07db704656..39984cebd2 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -18,8 +18,10 @@
 #include "qemu/osdep.h"
 #include "cpu.h"
+#include "exec/memop.h"
 #include "exec/exec-all.h"
 #include "exec/helper-proto.h"
+#include "tcg/tcg-gvec-desc.h"
 #include

 target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
@@ -51,3 +53,405 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
     env->vstart = 0;
     return vl;
 }
+
+/*
+ * Note that vector data is stored in host-endian 64-bit chunks,
+ * so addressing units smaller than that needs a host-endian fixup.
+ */
+#ifdef HOST_WORDS_BIGENDIAN
+#define H1(x)   ((x) ^ 7)
+#define H1_2(x) ((x) ^ 6)
+#define H1_4(x) ((x) ^ 4)
+#define H2(x)   ((x) ^ 3)
+#define H4(x)   ((x) ^ 1)
+#define H8(x)   ((x))
+#else
+#define H1(x)   (x)
+#define H1_2(x) (x)
+#define H1_4(x) (x)
+#define H2(x)   (x)
+#define H4(x)   (x)
+#define H8(x)   (x)
+#endif
+
+static inline uint32_t vext_nf(uint32_t desc)
+{
+    return (simd_data(desc) >> 11) & 0xf;
+}
+
+static inline uint32_t vext_mlen(uint32_t desc)
+{
+    return simd_data(desc) & 0xff;
+}
+
+static inline uint32_t vext_vm(uint32_t desc)
+{
+    return (simd_data(desc) >> 8) & 0x1;
+}
+
+static inline uint32_t vext_lmul(uint32_t desc)
+{
+    return (simd_data(desc) >> 9) & 0x3;
+}
+
+/*
+ * Get vector group length in bytes. Its range is [64, 2048].
+ *
+ * As simd_desc supports at most 256 bytes, vlen in bytes (at most 256,
+ * i.e. 2048 bits) is encoded as maxsz, and the group length is that
+ * value shifted left by lmul.
+ */
+static inline uint32_t vext_maxsz(uint32_t desc)
+{
+    return simd_maxsz(desc) << vext_lmul(desc);
+}
+
+/*
+ * This function checks watchpoints before a real load operation.
+ *
+ * In softmmu mode, the TLB API probe_access is enough for the watchpoint
+ * check. In user mode, there is no watchpoint support now.
+ *
+ * It will trigger an exception if there is no mapping in the TLB
+ * and the page table walk can't fill the TLB entry. Then the guest
+ * software can return here after processing the exception, or never
+ * return.
+ */
+static void probe_read_access(CPURISCVState *env, target_ulong addr,
+                              target_ulong len, uintptr_t ra)
+{
+    while (len) {
+        const target_ulong pagelen = -(addr | TARGET_PAGE_MASK);
+        const target_ulong curlen = MIN(pagelen, len);
+
+        probe_read(env, addr, curlen, cpu_mmu_index(env, false), ra);
+        addr += curlen;
+        len -= curlen;
+    }
+}
+
+static void probe_write_access(CPURISCVState *env, target_ulong addr,
+                               target_ulong len, uintptr_t ra)
+{
+    while (len) {
+        const target_ulong pagelen = -(addr | TARGET_PAGE_MASK);
+        const target_ulong curlen = MIN(pagelen, len);
+
+        probe_write(env, addr, curlen, cpu_mmu_index(env, false), ra);
+        addr += curlen;
+        len -= curlen;
+    }
+}
+
+#ifdef HOST_WORDS_BIGENDIAN
+static void vext_clear(void *tail, uint32_t cnt, uint32_t tot)
+{
+    /*
+     * Split the remaining range into two parts: cnt and tot are byte
+     * counts, so the first part is the rest of the last uint64_t unit
+     * (8 bytes), and the second part starts from the next uint64_t
+     * unit.
+     */
+    int part1 = 0, part2 = tot - cnt;
+    if (cnt % 8) {
+        part1 = 8 - (cnt % 8);
+        part2 = tot - cnt - part1;
+        memset(QEMU_ALIGN_PTR_DOWN(tail, 8), 0, part1);
+        memset(QEMU_ALIGN_PTR_UP(tail, 8), 0, part2);
+    } else {
+        memset(tail, 0, part2);
+    }
+}
+#else
+static void vext_clear(void *tail, uint32_t cnt, uint32_t tot)
+{
+    memset(tail, 0, tot - cnt);
+}
+#endif
+
+static inline int vext_elem_mask(void *v0, int mlen, int index)
+{
+    int idx = (index * mlen) / 8;
+    int pos = (index * mlen) % 8;
+
+    switch (mlen) {
+    case 8:
+        return *((uint8_t *)v0 + H1(index)) & 0x1;
+    case 16:
+        return *((uint16_t *)v0 + H2(index)) & 0x1;
+    case 32:
+        return *((uint32_t *)v0 + H4(index)) & 0x1;
+    case 64:
+        return *((uint64_t *)v0 + index) & 0x1;
+    default:
+        return (*((uint8_t *)v0 + H1(idx)) >> pos) & 0x1;
+    }
+}
+
+/* element operations for load and store */
+typedef void (*vext_ld_elem_fn)(CPURISCVState *env, target_ulong addr,
+                                uint32_t idx, void *vd, uintptr_t retaddr);
+typedef void (*vext_st_elem_fn)(CPURISCVState *env, target_ulong addr,
+                                uint32_t idx, void *vd, uintptr_t retaddr);
+typedef void (*vext_ld_clear_elem)(void *vd, uint32_t idx,
+                                   uint32_t cnt, uint32_t tot);
+
+#define GEN_VEXT_LD_ELEM(NAME, MTYPE, ETYPE, H, LDSUF)                  \
+static void vext_##NAME##_ld_elem(CPURISCVState *env, abi_ptr addr,     \
+    uint32_t idx, void *vd, uintptr_t retaddr)                          \
+{                                                                       \
+    int mmu_idx = cpu_mmu_index(env, false);                            \
+    MTYPE data;                                                         \
+    ETYPE *cur = ((ETYPE *)vd + H(idx));                                \
+    data = cpu_##LDSUF##_mmuidx_ra(env, addr, mmu_idx, retaddr);        \
+    *cur = data;                                                        \
+}                                                                       \
+static void vext_##NAME##_clear_elem(void *vd, uint32_t idx,            \
+    uint32_t cnt, uint32_t tot)                                         \
+{                                                                       \
+    ETYPE *cur = ((ETYPE *)vd + H(idx));                                \
+    vext_clear(cur, cnt, tot);                                          \
+}
+
+GEN_VEXT_LD_ELEM(vlb_v_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(vlb_v_h, int8_t, int16_t, H2, ldsb)
+GEN_VEXT_LD_ELEM(vlb_v_w, int8_t, int32_t, H4, ldsb)
+GEN_VEXT_LD_ELEM(vlb_v_d, int8_t, int64_t, H8, ldsb)
+GEN_VEXT_LD_ELEM(vlh_v_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(vlh_v_w, int16_t, int32_t, H4, ldsw)
+GEN_VEXT_LD_ELEM(vlh_v_d, int16_t, int64_t, H8, ldsw)
+GEN_VEXT_LD_ELEM(vlw_v_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(vlw_v_d, int32_t, int64_t, H8, ldl)
+GEN_VEXT_LD_ELEM(vle_v_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(vle_v_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(vle_v_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(vle_v_d, int64_t, int64_t, H8, ldq)
+GEN_VEXT_LD_ELEM(vlbu_v_b, uint8_t, uint8_t, H1, ldub)
+GEN_VEXT_LD_ELEM(vlbu_v_h, uint8_t, uint16_t, H2, ldub)
+GEN_VEXT_LD_ELEM(vlbu_v_w, uint8_t, uint32_t, H4, ldub)
+GEN_VEXT_LD_ELEM(vlbu_v_d, uint8_t, uint64_t, H8, ldub) +GEN_VEXT_LD_ELEM(vlhu_v_h, uint16_t, uint16_t, H2, lduw) +GEN_VEXT_LD_ELEM(vlhu_v_w, uint16_t, uint32_t, H4, lduw) +GEN_VEXT_LD_ELEM(vlhu_v_d, uint16_t, uint64_t, H8, lduw) +GEN_VEXT_LD_ELEM(vlwu_v_w, uint32_t, uint32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlwu_v_d, uint32_t, uint64_t, H8, ldl) + +#define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \ +static void vext_##NAME##_st_elem(CPURISCVState *env, abi_ptr addr, \ + uint32_t idx, void *vd, uintptr_t retaddr) \ +{ \ + int mmu_idx = cpu_mmu_index(env, false); \ + ETYPE data = *((ETYPE *)vd + H(idx)); \ + cpu_##STSUF##_mmuidx_ra(env, addr, data, mmu_idx, retaddr); \ +} + +GEN_VEXT_ST_ELEM(vsb_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vsb_v_h, int16_t, H2, stb) +GEN_VEXT_ST_ELEM(vsb_v_w, int32_t, H4, stb) +GEN_VEXT_ST_ELEM(vsb_v_d, int64_t, H8, stb) +GEN_VEXT_ST_ELEM(vsh_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vsh_v_w, int32_t, H4, stw) +GEN_VEXT_ST_ELEM(vsh_v_d, int64_t, H8, stw) +GEN_VEXT_ST_ELEM(vsw_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vsw_v_d, int64_t, H8, stl) +GEN_VEXT_ST_ELEM(vse_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vse_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vse_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vse_v_d, int64_t, H8, stq) + +/* + *** unit-stride: load vector element from continuous guest memory + */ +static inline void vext_ld_us_mask(void *vd, void *v0, target_ulong base, + CPURISCVState *env, uint32_t desc, + vext_ld_elem_fn ld_elem, + vext_ld_clear_elem clear_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t mlen = vext_mlen(desc); + uint32_t nf = vext_nf(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + if (env->vl == 0) { + return; + } + /* probe every access*/ + for (i = 0; i < env->vl; i++) { + if (!vext_elem_mask(v0, mlen, i)) { + continue; + } + probe_read_access(env, base + nf * i * msz, nf * msz, ra); + } + /* load bytes from guest memory */ + for (i = 0; i < env->vl; i++) { + k = 0; + if (!vext_elem_mask(v0, mlen, i)) { + continue; + } + while (k < nf) { + target_ulong addr = base + (i * nf + k) * msz; + ld_elem(env, addr, i + k * vlmax, vd, ra); + k++; + } + } + /* clear tail elements */ + for (k = 0; k < nf; k++) { + clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz); + } +} + +static inline void vext_ld_us(void *vd, target_ulong base, + CPURISCVState *env, uint32_t desc, + vext_ld_elem_fn ld_elem, + vext_ld_clear_elem clear_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t nf = vext_nf(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + if (env->vl == 0) { + return; + } + /* probe every access*/ + probe_read_access(env, base, env->vl * nf * msz, ra); + /* load bytes from guest memory */ + for (i = 0; i < env->vl; i++) { + k = 0; + while (k < nf) { + target_ulong addr = base + (i * nf + k) * msz; + ld_elem(env, addr, i + k * vlmax, vd, ra); + k++; + } + } + /* clear tail elements */ + for (k = 0; k < nf; k++) { + clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz); + } +} + +#define GEN_VEXT_LD_US(NAME, MTYPE, ETYPE) \ +void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + vext_ld_us_mask(vd, v0, base, env, desc, \ + vext_##NAME##_ld_elem, \ + vext_##NAME##_clear_elem, \ + sizeof(ETYPE), sizeof(MTYPE), GETPC()); \ +} \ + \ +void HELPER(NAME)(void *vd, void *v0, target_ulong base, \ + CPURISCVState *env, uint32_t desc) \ +{ \ + vext_ld_us(vd, base, env, desc, \ + vext_##NAME##_ld_elem, \ + 
vext_##NAME##_clear_elem,                                    \
+               sizeof(ETYPE), sizeof(MTYPE), GETPC());                 \
+}
+
+GEN_VEXT_LD_US(vlb_v_b, int8_t, int8_t)
+GEN_VEXT_LD_US(vlb_v_h, int8_t, int16_t)
+GEN_VEXT_LD_US(vlb_v_w, int8_t, int32_t)
+GEN_VEXT_LD_US(vlb_v_d, int8_t, int64_t)
+GEN_VEXT_LD_US(vlh_v_h, int16_t, int16_t)
+GEN_VEXT_LD_US(vlh_v_w, int16_t, int32_t)
+GEN_VEXT_LD_US(vlh_v_d, int16_t, int64_t)
+GEN_VEXT_LD_US(vlw_v_w, int32_t, int32_t)
+GEN_VEXT_LD_US(vlw_v_d, int32_t, int64_t)
+GEN_VEXT_LD_US(vle_v_b, int8_t, int8_t)
+GEN_VEXT_LD_US(vle_v_h, int16_t, int16_t)
+GEN_VEXT_LD_US(vle_v_w, int32_t, int32_t)
+GEN_VEXT_LD_US(vle_v_d, int64_t, int64_t)
+GEN_VEXT_LD_US(vlbu_v_b, uint8_t, uint8_t)
+GEN_VEXT_LD_US(vlbu_v_h, uint8_t, uint16_t)
+GEN_VEXT_LD_US(vlbu_v_w, uint8_t, uint32_t)
+GEN_VEXT_LD_US(vlbu_v_d, uint8_t, uint64_t)
+GEN_VEXT_LD_US(vlhu_v_h, uint16_t, uint16_t)
+GEN_VEXT_LD_US(vlhu_v_w, uint16_t, uint32_t)
+GEN_VEXT_LD_US(vlhu_v_d, uint16_t, uint64_t)
+GEN_VEXT_LD_US(vlwu_v_w, uint32_t, uint32_t)
+GEN_VEXT_LD_US(vlwu_v_d, uint32_t, uint64_t)
+
+/* unit-stride: store vector element to guest memory */
+static void vext_st_us_mask(void *vd, void *v0, target_ulong base,
+                            CPURISCVState *env, uint32_t desc,
+                            vext_st_elem_fn st_elem,
+                            uint32_t esz, uint32_t msz, uintptr_t ra)
+{
+    uint32_t i, k;
+    uint32_t nf = vext_nf(desc);
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    /* probe every access */
+    for (i = 0; i < env->vl; i++) {
+        if (!vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_write_access(env, base + nf * i * msz, nf * msz, ra);
+    }
+    /* store bytes to guest memory */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        if (!vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        while (k < nf) {
+            target_ulong addr = base + (i * nf + k) * msz;
+            st_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+}
+
+static void vext_st_us(void *vd, target_ulong base,
+                       CPURISCVState *env, uint32_t desc,
+                       vext_st_elem_fn st_elem,
+                       uint32_t esz, uint32_t msz, uintptr_t ra)
+{
+    uint32_t i, k;
+    uint32_t nf = vext_nf(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    /* probe every access */
+    probe_write_access(env, base, env->vl * nf * msz, ra);
+    /* store bytes to guest memory */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        while (k < nf) {
+            target_ulong addr = base + (i * nf + k) * msz;
+            st_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+}
+
+#define GEN_VEXT_ST_US(NAME, MTYPE, ETYPE)                              \
+void HELPER(NAME##_mask)(void *vd, void *v0, target_ulong base,         \
+                         CPURISCVState *env, uint32_t desc)             \
+{                                                                       \
+    vext_st_us_mask(vd, v0, base, env, desc,                            \
+                    vext_##NAME##_st_elem,                              \
+                    sizeof(ETYPE), sizeof(MTYPE), GETPC());             \
+}                                                                       \
+                                                                        \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  CPURISCVState *env, uint32_t desc)                    \
+{                                                                       \
+    vext_st_us(vd, base, env, desc,                                     \
+               vext_##NAME##_st_elem,                                   \
+               sizeof(ETYPE), sizeof(MTYPE), GETPC());                  \
+}
+
+GEN_VEXT_ST_US(vsb_v_b, int8_t, int8_t)
+GEN_VEXT_ST_US(vsb_v_h, int8_t, int16_t)
+GEN_VEXT_ST_US(vsb_v_w, int8_t, int32_t)
+GEN_VEXT_ST_US(vsb_v_d, int8_t, int64_t)
+GEN_VEXT_ST_US(vsh_v_h, int16_t, int16_t)
+GEN_VEXT_ST_US(vsh_v_w, int16_t, int32_t)
+GEN_VEXT_ST_US(vsh_v_d, int16_t, int64_t)
+GEN_VEXT_ST_US(vsw_v_w, int32_t, int32_t)
+GEN_VEXT_ST_US(vsw_v_d, int32_t, int64_t)
+GEN_VEXT_ST_US(vse_v_b, int8_t, int8_t)
+GEN_VEXT_ST_US(vse_v_h, int16_t, int16_t)
+GEN_VEXT_ST_US(vse_v_w, int32_t, int32_t)
+GEN_VEXT_ST_US(vse_v_d, int64_t, int64_t)

From patchwork Tue Feb 25 10:35:05 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, linux-csky@vger.kernel.org, wenmeng_zhang@c-sky.com, wxy194768@alibaba-inc.com, LIU Zhiwei
Subject: [PATCH v4 2/5] target/riscv: add vector stride load and store instructions
Date: Tue, 25 Feb 2020 18:35:05 +0800
Message-Id: <20200225103508.7651-3-zhiwei_liu@c-sky.com>
In-Reply-To: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
References: <20200225103508.7651-1-zhiwei_liu@c-sky.com>

Vector strided operations access the first memory element at the base address,
and then access subsequent elements at address increments given by the byte
offset contained in the x register specified by rs2.
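That is, element i starts at base + stride * i, and field k of that element
follows at k * msz bytes, which is what the vext_ld_stride/vext_st_stride
helpers below compute. A minimal sketch of the address calculation, with
hypothetical names not taken from the patch:

#include <stdint.h>

/*
 * Illustrative only: strided (segment) addressing as computed by the
 * helpers in this patch. stride is the byte offset taken from the x
 * register specified by rs2.
 */
static inline uint64_t stride_field_addr(uint64_t base, uint64_t stride,
                                         uint32_t i, uint32_t k, uint32_t msz)
{
    return base + stride * i + (uint64_t)k * msz;
}
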
Signed-off-by: LIU Zhiwei --- target/riscv/helper.h | 35 +++++ target/riscv/insn32.decode | 14 ++ target/riscv/insn_trans/trans_rvv.inc.c | 117 +++++++++++++++++ target/riscv/vector_helper.c | 166 ++++++++++++++++++++++++ 4 files changed, 332 insertions(+) diff --git a/target/riscv/helper.h b/target/riscv/helper.h index 996639c0fa..87dfa90609 100644 --- a/target/riscv/helper.h +++ b/target/riscv/helper.h @@ -148,3 +148,38 @@ DEF_HELPER_5(vse_v_w, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vse_v_w_mask, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vse_v_d, void, ptr, ptr, tl, env, i32) DEF_HELPER_5(vse_v_d_mask, void, ptr, ptr, tl, env, i32) +DEF_HELPER_6(vlsb_v_b, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsb_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsb_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsb_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsh_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsh_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsh_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsw_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsw_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlse_v_b, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlse_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlse_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlse_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsbu_v_b, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsbu_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsbu_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlsbu_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlshu_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlshu_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlshu_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlswu_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vlswu_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssb_v_b, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssb_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssb_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssb_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssh_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssh_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssh_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssw_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vssw_v_d, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vsse_v_b, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vsse_v_h, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vsse_v_w, void, ptr, ptr, tl, tl, env, i32) +DEF_HELPER_6(vsse_v_d, void, ptr, ptr, tl, tl, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index dad3ed91c7..2f2d3d13b3 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -44,6 +44,7 @@ &shift shamt rs1 rd &atomic aq rl rs2 rs1 rd &r2nfvm vm rd rs1 nf +&rnfvm vm rd rs1 rs2 nf # Formats 32: @r ....... ..... ..... ... ..... ....... &r %rs2 %rs1 %rd @@ -64,6 +65,7 @@ @r2_rm ....... ..... ..... ... ..... ....... %rs1 %rm %rd @r2 ....... ..... ..... ... ..... ....... %rs1 %rd @r2_nfvm nf:3 ... vm:1 ..... ..... ... ..... ....... &r2nfvm %rs1 %rd +@r_nfvm nf:3 ... vm:1 ..... ..... ... ..... ....... &rnfvm %rs2 %rs1 %rd @r2_zimm . zimm:11 ..... ... ..... ....... %rs1 %rd @sfence_vma ....... ..... ..... ... ..... ....... %rs2 %rs1 @@ -222,6 +224,18 @@ vsh_v ... 000 . 00000 ..... 101 ..... 0100111 @r2_nfvm vsw_v ... 000 . 00000 ..... 110 ..... 
0100111 @r2_nfvm vse_v ... 000 . 00000 ..... 111 ..... 0100111 @r2_nfvm +vlsb_v ... 110 . ..... ..... 000 ..... 0000111 @r_nfvm +vlsh_v ... 110 . ..... ..... 101 ..... 0000111 @r_nfvm +vlsw_v ... 110 . ..... ..... 110 ..... 0000111 @r_nfvm +vlse_v ... 010 . ..... ..... 111 ..... 0000111 @r_nfvm +vlsbu_v ... 010 . ..... ..... 000 ..... 0000111 @r_nfvm +vlshu_v ... 010 . ..... ..... 101 ..... 0000111 @r_nfvm +vlswu_v ... 010 . ..... ..... 110 ..... 0000111 @r_nfvm +vssb_v ... 010 . ..... ..... 000 ..... 0100111 @r_nfvm +vssh_v ... 010 . ..... ..... 101 ..... 0100111 @r_nfvm +vssw_v ... 010 . ..... ..... 110 ..... 0100111 @r_nfvm +vsse_v ... 010 . ..... ..... 111 ..... 0100111 @r_nfvm + # *** new major opcode OP-V *** vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index b0e97e7e06..1b627dc880 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -255,3 +255,120 @@ GEN_VEXT_TRANS(vsb_v, 0, r2nfvm, st_us_op, st_us_check) GEN_VEXT_TRANS(vsh_v, 1, r2nfvm, st_us_op, st_us_check) GEN_VEXT_TRANS(vsw_v, 2, r2nfvm, st_us_op, st_us_check) GEN_VEXT_TRANS(vse_v, 3, r2nfvm, st_us_op, st_us_check) + +/* + *** stride load and store + */ +typedef void gen_helper_ldst_stride(TCGv_ptr, TCGv_ptr, TCGv, + TCGv, TCGv_env, TCGv_i32); + +static bool ldst_stride_trans(uint32_t vd, uint32_t rs1, uint32_t rs2, + uint32_t data, gen_helper_ldst_stride *fn, DisasContext *s) +{ + TCGv_ptr dest, mask; + TCGv base, stride; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + base = tcg_temp_new(); + stride = tcg_temp_new(); + desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data)); + + gen_get_gpr(base, rs1); + gen_get_gpr(stride, rs2); + tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd)); + tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0)); + + fn(dest, mask, base, stride, cpu_env, desc); + + tcg_temp_free_ptr(dest); + tcg_temp_free_ptr(mask); + tcg_temp_free(base); + tcg_temp_free(stride); + tcg_temp_free_i32(desc); + return true; +} + +static bool ld_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t seq) +{ + uint8_t nf = a->nf + 1; + uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11); + gen_helper_ldst_stride *fn; + static gen_helper_ldst_stride * const fns[7][4] = { + { gen_helper_vlsb_v_b, gen_helper_vlsb_v_h, + gen_helper_vlsb_v_w, gen_helper_vlsb_v_d }, + { NULL, gen_helper_vlsh_v_h, + gen_helper_vlsh_v_w, gen_helper_vlsh_v_d }, + { NULL, NULL, + gen_helper_vlsw_v_w, gen_helper_vlsw_v_d }, + { gen_helper_vlse_v_b, gen_helper_vlse_v_h, + gen_helper_vlse_v_w, gen_helper_vlse_v_d }, + { gen_helper_vlsbu_v_b, gen_helper_vlsbu_v_h, + gen_helper_vlsbu_v_w, gen_helper_vlsbu_v_d }, + { NULL, gen_helper_vlshu_v_h, + gen_helper_vlshu_v_w, gen_helper_vlshu_v_d }, + { NULL, NULL, + gen_helper_vlswu_v_w, gen_helper_vlswu_v_d }, + }; + + fn = fns[seq][s->sew]; + if (fn == NULL) { + return false; + } + + return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s); +} + +static bool ld_stride_check(DisasContext *s, arg_rnfvm* a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm) && + vext_check_reg(s, a->rd, false) && + vext_check_nf(s, a->nf)); +} + +GEN_VEXT_TRANS(vlsb_v, 0, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlsh_v, 1, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlsw_v, 2, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlse_v, 
3, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlsbu_v, 4, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlshu_v, 5, rnfvm, ld_stride_op, ld_stride_check) +GEN_VEXT_TRANS(vlswu_v, 6, rnfvm, ld_stride_op, ld_stride_check) + +static bool st_stride_op(DisasContext *s, arg_rnfvm *a, uint8_t seq) +{ + uint8_t nf = a->nf + 1; + uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11); + gen_helper_ldst_stride *fn; + static gen_helper_ldst_stride * const fns[4][4] = { + /* masked stride store */ + { gen_helper_vssb_v_b, gen_helper_vssb_v_h, + gen_helper_vssb_v_w, gen_helper_vssb_v_d }, + { NULL, gen_helper_vssh_v_h, + gen_helper_vssh_v_w, gen_helper_vssh_v_d }, + { NULL, NULL, + gen_helper_vssw_v_w, gen_helper_vssw_v_d }, + { gen_helper_vsse_v_b, gen_helper_vsse_v_h, + gen_helper_vsse_v_w, gen_helper_vsse_v_d } + }; + + fn = fns[seq][s->sew]; + if (fn == NULL) { + return false; + } + + return ldst_stride_trans(a->rd, a->rs1, a->rs2, data, fn, s); +} + +static bool st_stride_check(DisasContext *s, arg_rnfvm* a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rd, false) && + vext_check_nf(s, a->nf)); +} + +GEN_VEXT_TRANS(vssb_v, 0, rnfvm, st_stride_op, st_stride_check) +GEN_VEXT_TRANS(vssh_v, 1, rnfvm, st_stride_op, st_stride_check) +GEN_VEXT_TRANS(vssw_v, 2, rnfvm, st_stride_op, st_stride_check) +GEN_VEXT_TRANS(vsse_v, 3, rnfvm, st_stride_op, st_stride_check) diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 39984cebd2..8a8c062ad5 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -233,6 +233,28 @@ GEN_VEXT_LD_ELEM(vlhu_v_w, uint16_t, uint32_t, H4, lduw) GEN_VEXT_LD_ELEM(vlhu_v_d, uint16_t, uint64_t, H8, lduw) GEN_VEXT_LD_ELEM(vlwu_v_w, uint32_t, uint32_t, H4, ldl) GEN_VEXT_LD_ELEM(vlwu_v_d, uint32_t, uint64_t, H8, ldl) +GEN_VEXT_LD_ELEM(vlsb_v_b, int8_t, int8_t, H1, ldsb) +GEN_VEXT_LD_ELEM(vlsb_v_h, int8_t, int16_t, H2, ldsb) +GEN_VEXT_LD_ELEM(vlsb_v_w, int8_t, int32_t, H4, ldsb) +GEN_VEXT_LD_ELEM(vlsb_v_d, int8_t, int64_t, H8, ldsb) +GEN_VEXT_LD_ELEM(vlsh_v_h, int16_t, int16_t, H2, ldsw) +GEN_VEXT_LD_ELEM(vlsh_v_w, int16_t, int32_t, H4, ldsw) +GEN_VEXT_LD_ELEM(vlsh_v_d, int16_t, int64_t, H8, ldsw) +GEN_VEXT_LD_ELEM(vlsw_v_w, int32_t, int32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlsw_v_d, int32_t, int64_t, H8, ldl) +GEN_VEXT_LD_ELEM(vlse_v_b, int8_t, int8_t, H1, ldsb) +GEN_VEXT_LD_ELEM(vlse_v_h, int16_t, int16_t, H2, ldsw) +GEN_VEXT_LD_ELEM(vlse_v_w, int32_t, int32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlse_v_d, int64_t, int64_t, H8, ldq) +GEN_VEXT_LD_ELEM(vlsbu_v_b, uint8_t, uint8_t, H1, ldub) +GEN_VEXT_LD_ELEM(vlsbu_v_h, uint8_t, uint16_t, H2, ldub) +GEN_VEXT_LD_ELEM(vlsbu_v_w, uint8_t, uint32_t, H4, ldub) +GEN_VEXT_LD_ELEM(vlsbu_v_d, uint8_t, uint64_t, H8, ldub) +GEN_VEXT_LD_ELEM(vlshu_v_h, uint16_t, uint16_t, H2, lduw) +GEN_VEXT_LD_ELEM(vlshu_v_w, uint16_t, uint32_t, H4, lduw) +GEN_VEXT_LD_ELEM(vlshu_v_d, uint16_t, uint64_t, H8, lduw) +GEN_VEXT_LD_ELEM(vlswu_v_w, uint32_t, uint32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlswu_v_d, uint32_t, uint64_t, H8, ldl) #define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \ static void vext_##NAME##_st_elem(CPURISCVState *env, abi_ptr addr, \ @@ -256,6 +278,19 @@ GEN_VEXT_ST_ELEM(vse_v_b, int8_t, H1, stb) GEN_VEXT_ST_ELEM(vse_v_h, int16_t, H2, stw) GEN_VEXT_ST_ELEM(vse_v_w, int32_t, H4, stl) GEN_VEXT_ST_ELEM(vse_v_d, int64_t, H8, stq) +GEN_VEXT_ST_ELEM(vssb_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vssb_v_h, int16_t, H2, stb) +GEN_VEXT_ST_ELEM(vssb_v_w, int32_t, H4, stb) 
+GEN_VEXT_ST_ELEM(vssb_v_d, int64_t, H8, stb) +GEN_VEXT_ST_ELEM(vssh_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vssh_v_w, int32_t, H4, stw) +GEN_VEXT_ST_ELEM(vssh_v_d, int64_t, H8, stw) +GEN_VEXT_ST_ELEM(vssw_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vssw_v_d, int64_t, H8, stl) +GEN_VEXT_ST_ELEM(vsse_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vsse_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vsse_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vsse_v_d, int64_t, H8, stq) /* *** unit-stride: load vector element from continuous guest memory @@ -455,3 +490,134 @@ GEN_VEXT_ST_US(vse_v_b, int8_t, int8_t) GEN_VEXT_ST_US(vse_v_h, int16_t, int16_t) GEN_VEXT_ST_US(vse_v_w, int32_t, int32_t) GEN_VEXT_ST_US(vse_v_d, int64_t, int64_t) + +/* + *** stride: load strided vector element from guest memory + */ +static void vext_ld_stride(void *vd, void *v0, target_ulong base, + target_ulong stride, CPURISCVState *env, uint32_t desc, + vext_ld_elem_fn ld_elem, vext_ld_clear_elem clear_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t nf = vext_nf(desc); + uint32_t vm = vext_vm(desc); + uint32_t mlen = vext_mlen(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + if (env->vl == 0) { + return; + } + /* probe every access*/ + for (i = 0; i < env->vl; i++) { + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + probe_read_access(env, base + stride * i, nf * msz, ra); + } + /* load bytes from guest memory */ + for (i = 0; i < env->vl; i++) { + k = 0; + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + while (k < nf) { + target_ulong addr = base + stride * i + k * msz; + ld_elem(env, addr, i + k * vlmax, vd, ra); + k++; + } + } + /* clear tail elements */ + for (k = 0; k < nf; k++) { + clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz); + } +} + +#define GEN_VEXT_LD_STRIDE(NAME, MTYPE, ETYPE) \ +void HELPER(NAME)(void *vd, void * v0, target_ulong base, \ + target_ulong stride, CPURISCVState *env, uint32_t desc) \ +{ \ + vext_ld_stride(vd, v0, base, stride, env, desc, \ + vext_##NAME##_ld_elem, vext_##NAME##_clear_elem, \ + sizeof(ETYPE), sizeof(MTYPE), GETPC()); \ +} + +GEN_VEXT_LD_STRIDE(vlsb_v_b, int8_t, int8_t) +GEN_VEXT_LD_STRIDE(vlsb_v_h, int8_t, int16_t) +GEN_VEXT_LD_STRIDE(vlsb_v_w, int8_t, int32_t) +GEN_VEXT_LD_STRIDE(vlsb_v_d, int8_t, int64_t) +GEN_VEXT_LD_STRIDE(vlsh_v_h, int16_t, int16_t) +GEN_VEXT_LD_STRIDE(vlsh_v_w, int16_t, int32_t) +GEN_VEXT_LD_STRIDE(vlsh_v_d, int16_t, int64_t) +GEN_VEXT_LD_STRIDE(vlsw_v_w, int32_t, int32_t) +GEN_VEXT_LD_STRIDE(vlsw_v_d, int32_t, int64_t) +GEN_VEXT_LD_STRIDE(vlse_v_b, int8_t, int8_t) +GEN_VEXT_LD_STRIDE(vlse_v_h, int16_t, int16_t) +GEN_VEXT_LD_STRIDE(vlse_v_w, int32_t, int32_t) +GEN_VEXT_LD_STRIDE(vlse_v_d, int64_t, int64_t) +GEN_VEXT_LD_STRIDE(vlsbu_v_b, uint8_t, uint8_t) +GEN_VEXT_LD_STRIDE(vlsbu_v_h, uint8_t, uint16_t) +GEN_VEXT_LD_STRIDE(vlsbu_v_w, uint8_t, uint32_t) +GEN_VEXT_LD_STRIDE(vlsbu_v_d, uint8_t, uint64_t) +GEN_VEXT_LD_STRIDE(vlshu_v_h, uint16_t, uint16_t) +GEN_VEXT_LD_STRIDE(vlshu_v_w, uint16_t, uint32_t) +GEN_VEXT_LD_STRIDE(vlshu_v_d, uint16_t, uint64_t) +GEN_VEXT_LD_STRIDE(vlswu_v_w, uint32_t, uint32_t) +GEN_VEXT_LD_STRIDE(vlswu_v_d, uint32_t, uint64_t) + +/* + *** stride: store strided vector element to guest memory + */ +static void vext_st_stride(void *vd, void *v0, target_ulong base, + target_ulong stride, CPURISCVState *env, uint32_t desc, + vext_st_elem_fn st_elem, uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t nf = vext_nf(desc); + uint32_t vm = vext_vm(desc); + 
uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    /* probe every access */
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_write_access(env, base + stride * i, nf * msz, ra);
+    }
+    /* store bytes to guest memory */
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        while (k < nf) {
+            target_ulong addr = base + stride * i + k * msz;
+            st_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+}
+
+#define GEN_VEXT_ST_STRIDE(NAME, MTYPE, ETYPE)                          \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,                \
+                  target_ulong stride, CPURISCVState *env,              \
+                  uint32_t desc)                                        \
+{                                                                       \
+    vext_st_stride(vd, v0, base, stride, env, desc,                     \
+                   vext_##NAME##_st_elem,                               \
+                   sizeof(ETYPE), sizeof(MTYPE), GETPC());              \
+}
+
+GEN_VEXT_ST_STRIDE(vssb_v_b, int8_t, int8_t)
+GEN_VEXT_ST_STRIDE(vssb_v_h, int8_t, int16_t)
+GEN_VEXT_ST_STRIDE(vssb_v_w, int8_t, int32_t)
+GEN_VEXT_ST_STRIDE(vssb_v_d, int8_t, int64_t)
+GEN_VEXT_ST_STRIDE(vssh_v_h, int16_t, int16_t)
+GEN_VEXT_ST_STRIDE(vssh_v_w, int16_t, int32_t)
+GEN_VEXT_ST_STRIDE(vssh_v_d, int16_t, int64_t)
+GEN_VEXT_ST_STRIDE(vssw_v_w, int32_t, int32_t)
+GEN_VEXT_ST_STRIDE(vssw_v_d, int32_t, int64_t)
+GEN_VEXT_ST_STRIDE(vsse_v_b, int8_t, int8_t)
+GEN_VEXT_ST_STRIDE(vsse_v_h, int16_t, int16_t)
+GEN_VEXT_ST_STRIDE(vsse_v_w, int32_t, int32_t)
+GEN_VEXT_ST_STRIDE(vsse_v_d, int64_t, int64_t)

From patchwork Tue Feb 25 10:35:06 2020
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: richard.henderson@linaro.org, alistair23@gmail.com, chihmin.chao@sifive.com, palmer@dabbelt.com
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, linux-csky@vger.kernel.org, wenmeng_zhang@c-sky.com, wxy194768@alibaba-inc.com, LIU Zhiwei
Subject: [PATCH v4 3/5] target/riscv: add vector index load and store instructions
Date: Tue, 25 Feb 2020 18:35:06 +0800
Message-Id: <20200225103508.7651-4-zhiwei_liu@c-sky.com>
In-Reply-To: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
References: <20200225103508.7651-1-zhiwei_liu@c-sky.com>

Vector indexed operations add the contents of each element of the vector
offset operand specified by vs2 to the base effective address to give the
effective address of each element.

Signed-off-by: LIU Zhiwei <zhiwei_liu@c-sky.com>
---
 target/riscv/helper.h                   |  35 ++++
 target/riscv/insn32.decode              |  16 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 138 +++++++++++++++
 target/riscv/vector_helper.c            | 219 ++++++++++++++++++++++++
 4 files changed, 408 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 87dfa90609..f9b3da60ca 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -183,3 +183,38 @@ DEF_HELPER_6(vsse_v_b, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_h, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_w, void, ptr, ptr, tl, tl, env, i32)
 DEF_HELPER_6(vsse_v_d, void, ptr, ptr, tl, tl, env, i32)
+DEF_HELPER_6(vlxb_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxb_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxh_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxe_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxbu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxhu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxwu_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vlxwu_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxb_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxh_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_b, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_h, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vsxe_v_d, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 2f2d3d13b3..6a363a6b7e 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -236,6 +236,22 @@ vssh_v ... 010 . ..... ..... 101 ..... 0100111 @r_nfvm
 vssw_v ... 010 . ..... ..... 110 ..... 0100111 @r_nfvm
 vsse_v ... 010 . ..... ..... 111 ..... 0100111 @r_nfvm

+vlxb_v ... 111 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlxh_v ... 111 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlxw_v ... 111 . ..... ..... 110 ..... 0000111 @r_nfvm
+vlxe_v ... 011 . ..... ..... 111 ..... 0000111 @r_nfvm
+vlxbu_v ... 011 . ..... ..... 000 ..... 0000111 @r_nfvm
+vlxhu_v ... 011 . ..... ..... 101 ..... 0000111 @r_nfvm
+vlxwu_v ... 011 . ..... ..... 110 ..... 0000111 @r_nfvm
+vsxb_v ... 011 . ..... ..... 000 ..... 0100111 @r_nfvm
+vsxh_v ... 011 . ..... ..... 101 ..... 0100111 @r_nfvm
+vsxw_v ... 011 . ..... ..... 110 ..... 0100111 @r_nfvm
+vsxe_v ... 011 . ..... ..... 111 ..... 0100111 @r_nfvm
+vsuxb_v ... 111 . ..... ..... 000 ..... 0100111 @r_nfvm
+vsuxh_v ... 111 . ..... ..... 101 ..... 0100111 @r_nfvm
+vsuxw_v ... 111 . ..... ..... 110 ..... 0100111 @r_nfvm
+vsuxe_v ... 111 . ..... ..... 111 ..... 0100111 @r_nfvm
+
 # *** new major opcode OP-V ***
 vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm
 vsetvl 1000000 ..... ..... 111 ..... 1010111 @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index 1b627dc880..c0d560d789 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -372,3 +372,141 @@ GEN_VEXT_TRANS(vssb_v, 0, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vssh_v, 1, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vssw_v, 2, rnfvm, st_stride_op, st_stride_check)
 GEN_VEXT_TRANS(vsse_v, 3, rnfvm, st_stride_op, st_stride_check)
+
+/*
+ *** index load and store
+ */
+typedef void gen_helper_ldst_index(TCGv_ptr, TCGv_ptr, TCGv,
+                                   TCGv_ptr, TCGv_env, TCGv_i32);
+
+static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
+                             uint32_t data, gen_helper_ldst_index *fn,
+                             DisasContext *s)
+{
+    TCGv_ptr dest, mask, index;
+    TCGv base;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    index = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, index, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free_ptr(index);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool ld_index_op(DisasContext *s, arg_rnfvm *a, uint8_t seq)
+{
+    uint8_t nf = a->nf + 1;
+    uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11);
+    gen_helper_ldst_index *fn;
+    static gen_helper_ldst_index * const fns[7][4] = {
+        { gen_helper_vlxb_v_b, gen_helper_vlxb_v_h,
+          gen_helper_vlxb_v_w, gen_helper_vlxb_v_d },
+        { NULL, gen_helper_vlxh_v_h,
+          gen_helper_vlxh_v_w, gen_helper_vlxh_v_d },
+        { NULL,
void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxw_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxw_v_d, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxe_v_b, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxe_v_h, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxe_v_w, void, ptr, ptr, tl, ptr, env, i32) +DEF_HELPER_6(vsxe_v_d, void, ptr, ptr, tl, ptr, env, i32) diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode index 2f2d3d13b3..6a363a6b7e 100644 --- a/target/riscv/insn32.decode +++ b/target/riscv/insn32.decode @@ -236,6 +236,22 @@ vssh_v ... 010 . ..... ..... 101 ..... 0100111 @r_nfvm vssw_v ... 010 . ..... ..... 110 ..... 0100111 @r_nfvm vsse_v ... 010 . ..... ..... 111 ..... 0100111 @r_nfvm +vlxb_v ... 111 . ..... ..... 000 ..... 0000111 @r_nfvm +vlxh_v ... 111 . ..... ..... 101 ..... 0000111 @r_nfvm +vlxw_v ... 111 . ..... ..... 110 ..... 0000111 @r_nfvm +vlxe_v ... 011 . ..... ..... 111 ..... 0000111 @r_nfvm +vlxbu_v ... 011 . ..... ..... 000 ..... 0000111 @r_nfvm +vlxhu_v ... 011 . ..... ..... 101 ..... 0000111 @r_nfvm +vlxwu_v ... 011 . ..... ..... 110 ..... 0000111 @r_nfvm +vsxb_v ... 011 . ..... ..... 000 ..... 0100111 @r_nfvm +vsxh_v ... 011 . ..... ..... 101 ..... 0100111 @r_nfvm +vsxw_v ... 011 . ..... ..... 110 ..... 0100111 @r_nfvm +vsxe_v ... 011 . ..... ..... 111 ..... 0100111 @r_nfvm +vsuxb_v ... 111 . ..... ..... 000 ..... 0100111 @r_nfvm +vsuxh_v ... 111 . ..... ..... 101 ..... 0100111 @r_nfvm +vsuxw_v ... 111 . ..... ..... 110 ..... 0100111 @r_nfvm +vsuxe_v ... 111 . ..... ..... 111 ..... 0100111 @r_nfvm + # *** new major opcode OP-V *** vsetvli 0 ........... ..... 111 ..... 1010111 @r2_zimm vsetvl 1000000 ..... ..... 111 ..... 1010111 @r diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c index 1b627dc880..c0d560d789 100644 --- a/target/riscv/insn_trans/trans_rvv.inc.c +++ b/target/riscv/insn_trans/trans_rvv.inc.c @@ -372,3 +372,141 @@ GEN_VEXT_TRANS(vssb_v, 0, rnfvm, st_stride_op, st_stride_check) GEN_VEXT_TRANS(vssh_v, 1, rnfvm, st_stride_op, st_stride_check) GEN_VEXT_TRANS(vssw_v, 2, rnfvm, st_stride_op, st_stride_check) GEN_VEXT_TRANS(vsse_v, 3, rnfvm, st_stride_op, st_stride_check) + +/* + *** index load and store + */ +typedef void gen_helper_ldst_index(TCGv_ptr, TCGv_ptr, TCGv, + TCGv_ptr, TCGv_env, TCGv_i32); + +static bool ldst_index_trans(uint32_t vd, uint32_t rs1, uint32_t vs2, + uint32_t data, gen_helper_ldst_index *fn, DisasContext *s) +{ + TCGv_ptr dest, mask, index; + TCGv base; + TCGv_i32 desc; + + dest = tcg_temp_new_ptr(); + mask = tcg_temp_new_ptr(); + index = tcg_temp_new_ptr(); + base = tcg_temp_new(); + desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data)); + + gen_get_gpr(base, rs1); + tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd)); + tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2)); + tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0)); + + fn(dest, mask, base, index, cpu_env, desc); + + tcg_temp_free_ptr(dest); + tcg_temp_free_ptr(mask); + tcg_temp_free_ptr(index); + tcg_temp_free(base); + tcg_temp_free_i32(desc); + return true; +} + +static bool ld_index_op(DisasContext *s, arg_rnfvm *a, uint8_t seq) +{ + uint8_t nf = a->nf + 1; + uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11); + gen_helper_ldst_index *fn; + static gen_helper_ldst_index * const fns[7][4] = { + { gen_helper_vlxb_v_b, gen_helper_vlxb_v_h, + gen_helper_vlxb_v_w, gen_helper_vlxb_v_d }, + { NULL, gen_helper_vlxh_v_h, + gen_helper_vlxh_v_w, gen_helper_vlxh_v_d }, + { NULL, 
NULL, + gen_helper_vlxw_v_w, gen_helper_vlxw_v_d }, + { gen_helper_vlxe_v_b, gen_helper_vlxe_v_h, + gen_helper_vlxe_v_w, gen_helper_vlxe_v_d }, + { gen_helper_vlxbu_v_b, gen_helper_vlxbu_v_h, + gen_helper_vlxbu_v_w, gen_helper_vlxbu_v_d }, + { NULL, gen_helper_vlxhu_v_h, + gen_helper_vlxhu_v_w, gen_helper_vlxhu_v_d }, + { NULL, NULL, + gen_helper_vlxwu_v_w, gen_helper_vlxwu_v_d }, + }; + + fn = fns[seq][s->sew]; + if (fn == NULL) { + return false; + } + + return ldst_index_trans(a->rd, a->rs1, a->rs2, data, fn, s); +} + +static bool ld_index_check(DisasContext *s, arg_rnfvm* a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_overlap_mask(s, a->rd, a->vm) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + vext_check_nf(s, a->nf)); +} + +GEN_VEXT_TRANS(vlxb_v, 0, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxh_v, 1, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxw_v, 2, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxe_v, 3, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxbu_v, 4, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxhu_v, 5, rnfvm, ld_index_op, ld_index_check) +GEN_VEXT_TRANS(vlxwu_v, 6, rnfvm, ld_index_op, ld_index_check) + +static bool st_index_op(DisasContext *s, arg_rnfvm *a, uint8_t seq) +{ + uint8_t nf = a->nf + 1; + uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11); + gen_helper_ldst_index *fn; + static gen_helper_ldst_index * const fns[4][4] = { + { gen_helper_vsxb_v_b, gen_helper_vsxb_v_h, + gen_helper_vsxb_v_w, gen_helper_vsxb_v_d }, + { NULL, gen_helper_vsxh_v_h, + gen_helper_vsxh_v_w, gen_helper_vsxh_v_d }, + { NULL, NULL, + gen_helper_vsxw_v_w, gen_helper_vsxw_v_d }, + { gen_helper_vsxe_v_b, gen_helper_vsxe_v_h, + gen_helper_vsxe_v_w, gen_helper_vsxe_v_d } + }; + + fn = fns[seq][s->sew]; + if (fn == NULL) { + return false; + } + + return ldst_index_trans(a->rd, a->rs1, a->rs2, data, fn, s); +} + +static bool st_index_check(DisasContext *s, arg_rnfvm* a) +{ + return (vext_check_isa_ill(s, RVV) && + vext_check_reg(s, a->rd, false) && + vext_check_reg(s, a->rs2, false) && + vext_check_nf(s, a->nf)); +} + +GEN_VEXT_TRANS(vsxb_v, 0, rnfvm, st_index_op, st_index_check) +GEN_VEXT_TRANS(vsxh_v, 1, rnfvm, st_index_op, st_index_check) +GEN_VEXT_TRANS(vsxw_v, 2, rnfvm, st_index_op, st_index_check) +GEN_VEXT_TRANS(vsxe_v, 3, rnfvm, st_index_op, st_index_check) + +static bool trans_vsuxb_v(DisasContext *s, arg_rnfvm* a) +{ + return trans_vsxb_v(s, a); +} + +static bool trans_vsuxh_v(DisasContext *s, arg_rnfvm* a) +{ + return trans_vsxh_v(s, a); +} + +static bool trans_vsuxw_v(DisasContext *s, arg_rnfvm* a) +{ + return trans_vsxw_v(s, a); +} + +static bool trans_vsuxe_v(DisasContext *s, arg_rnfvm* a) +{ + return trans_vsxe_v(s, a); +} diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c index 8a8c062ad5..908f48e216 100644 --- a/target/riscv/vector_helper.c +++ b/target/riscv/vector_helper.c @@ -255,6 +255,28 @@ GEN_VEXT_LD_ELEM(vlshu_v_w, uint16_t, uint32_t, H4, lduw) GEN_VEXT_LD_ELEM(vlshu_v_d, uint16_t, uint64_t, H8, lduw) GEN_VEXT_LD_ELEM(vlswu_v_w, uint32_t, uint32_t, H4, ldl) GEN_VEXT_LD_ELEM(vlswu_v_d, uint32_t, uint64_t, H8, ldl) +GEN_VEXT_LD_ELEM(vlxb_v_b, int8_t, int8_t, H1, ldsb) +GEN_VEXT_LD_ELEM(vlxb_v_h, int8_t, int16_t, H2, ldsb) +GEN_VEXT_LD_ELEM(vlxb_v_w, int8_t, int32_t, H4, ldsb) +GEN_VEXT_LD_ELEM(vlxb_v_d, int8_t, int64_t, H8, ldsb) +GEN_VEXT_LD_ELEM(vlxh_v_h, int16_t, int16_t, H2, ldsw) +GEN_VEXT_LD_ELEM(vlxh_v_w, int16_t, int32_t, H4, ldsw) 
+GEN_VEXT_LD_ELEM(vlxh_v_d, int16_t, int64_t, H8, ldsw) +GEN_VEXT_LD_ELEM(vlxw_v_w, int32_t, int32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlxw_v_d, int32_t, int64_t, H8, ldl) +GEN_VEXT_LD_ELEM(vlxe_v_b, int8_t, int8_t, H1, ldsb) +GEN_VEXT_LD_ELEM(vlxe_v_h, int16_t, int16_t, H2, ldsw) +GEN_VEXT_LD_ELEM(vlxe_v_w, int32_t, int32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlxe_v_d, int64_t, int64_t, H8, ldq) +GEN_VEXT_LD_ELEM(vlxbu_v_b, uint8_t, uint8_t, H1, ldub) +GEN_VEXT_LD_ELEM(vlxbu_v_h, uint8_t, uint16_t, H2, ldub) +GEN_VEXT_LD_ELEM(vlxbu_v_w, uint8_t, uint32_t, H4, ldub) +GEN_VEXT_LD_ELEM(vlxbu_v_d, uint8_t, uint64_t, H8, ldub) +GEN_VEXT_LD_ELEM(vlxhu_v_h, uint16_t, uint16_t, H2, lduw) +GEN_VEXT_LD_ELEM(vlxhu_v_w, uint16_t, uint32_t, H4, lduw) +GEN_VEXT_LD_ELEM(vlxhu_v_d, uint16_t, uint64_t, H8, lduw) +GEN_VEXT_LD_ELEM(vlxwu_v_w, uint32_t, uint32_t, H4, ldl) +GEN_VEXT_LD_ELEM(vlxwu_v_d, uint32_t, uint64_t, H8, ldl) #define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF) \ static void vext_##NAME##_st_elem(CPURISCVState *env, abi_ptr addr, \ @@ -291,6 +313,19 @@ GEN_VEXT_ST_ELEM(vsse_v_b, int8_t, H1, stb) GEN_VEXT_ST_ELEM(vsse_v_h, int16_t, H2, stw) GEN_VEXT_ST_ELEM(vsse_v_w, int32_t, H4, stl) GEN_VEXT_ST_ELEM(vsse_v_d, int64_t, H8, stq) +GEN_VEXT_ST_ELEM(vsxb_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vsxb_v_h, int16_t, H2, stb) +GEN_VEXT_ST_ELEM(vsxb_v_w, int32_t, H4, stb) +GEN_VEXT_ST_ELEM(vsxb_v_d, int64_t, H8, stb) +GEN_VEXT_ST_ELEM(vsxh_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vsxh_v_w, int32_t, H4, stw) +GEN_VEXT_ST_ELEM(vsxh_v_d, int64_t, H8, stw) +GEN_VEXT_ST_ELEM(vsxw_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vsxw_v_d, int64_t, H8, stl) +GEN_VEXT_ST_ELEM(vsxe_v_b, int8_t, H1, stb) +GEN_VEXT_ST_ELEM(vsxe_v_h, int16_t, H2, stw) +GEN_VEXT_ST_ELEM(vsxe_v_w, int32_t, H4, stl) +GEN_VEXT_ST_ELEM(vsxe_v_d, int64_t, H8, stq) /* *** unit-stride: load vector element from continuous guest memory @@ -621,3 +656,187 @@ GEN_VEXT_ST_STRIDE(vsse_v_b, int8_t, int8_t) GEN_VEXT_ST_STRIDE(vsse_v_h, int16_t, int16_t) GEN_VEXT_ST_STRIDE(vsse_v_w, int32_t, int32_t) GEN_VEXT_ST_STRIDE(vsse_v_d, int64_t, int64_t) + +/* + *** index: load indexed vector element from guest memory + */ +typedef target_ulong (*vext_get_index_addr)(target_ulong base, + uint32_t idx, void *vs2); + +#define GEN_VEXT_GET_INDEX_ADDR(NAME, ETYPE, H) \ +static target_ulong vext_##NAME##_get_addr(target_ulong base, \ + uint32_t idx, void *vs2) \ +{ \ + return (base + *((ETYPE *)vs2 + H(idx))); \ +} + +GEN_VEXT_GET_INDEX_ADDR(vlxb_v_b, int8_t, H1) +GEN_VEXT_GET_INDEX_ADDR(vlxb_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vlxb_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxb_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxh_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vlxh_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxh_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxw_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxw_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxe_v_b, int8_t, H1) +GEN_VEXT_GET_INDEX_ADDR(vlxe_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vlxe_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxe_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxbu_v_b, uint8_t, H1) +GEN_VEXT_GET_INDEX_ADDR(vlxbu_v_h, uint16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vlxbu_v_w, uint32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxbu_v_d, uint64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxhu_v_h, uint16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vlxhu_v_w, uint32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxhu_v_d, uint64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vlxwu_v_w, uint32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vlxwu_v_d, uint64_t, H8) 
+GEN_VEXT_GET_INDEX_ADDR(vsxb_v_b, int8_t, H1) +GEN_VEXT_GET_INDEX_ADDR(vsxb_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vsxb_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vsxb_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vsxh_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vsxh_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vsxh_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vsxw_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vsxw_v_d, int64_t, H8) +GEN_VEXT_GET_INDEX_ADDR(vsxe_v_b, int8_t, H1) +GEN_VEXT_GET_INDEX_ADDR(vsxe_v_h, int16_t, H2) +GEN_VEXT_GET_INDEX_ADDR(vsxe_v_w, int32_t, H4) +GEN_VEXT_GET_INDEX_ADDR(vsxe_v_d, int64_t, H8) + +static inline void vext_ld_index(void *vd, void *v0, target_ulong base, + void *vs2, CPURISCVState *env, uint32_t desc, + vext_get_index_addr get_index_addr, + vext_ld_elem_fn ld_elem, + vext_ld_clear_elem clear_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t nf = vext_nf(desc); + uint32_t vm = vext_vm(desc); + uint32_t mlen = vext_mlen(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + if (env->vl == 0) { + return; + } + /* probe every access*/ + for (i = 0; i < env->vl; i++) { + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + probe_read_access(env, get_index_addr(base, i, vs2), nf * msz, ra); + } + /* load bytes from guest memory */ + for (i = 0; i < env->vl; i++) { + k = 0; + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + while (k < nf) { + abi_ptr addr = get_index_addr(base, i, vs2) + k * msz; + ld_elem(env, addr, i + k * vlmax, vd, ra); + k++; + } + } + /* clear tail elements */ + for (k = 0; k < nf; k++) { + clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz); + } +} + +#define GEN_VEXT_LD_INDEX(NAME, MTYPE, ETYPE) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong base, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + vext_ld_index(vd, v0, base, vs2, env, desc, \ + vext_##NAME##_get_addr, \ + vext_##NAME##_ld_elem, \ + vext_##NAME##_clear_elem, \ + sizeof(ETYPE), sizeof(MTYPE), GETPC()); \ +} + +GEN_VEXT_LD_INDEX(vlxb_v_b, int8_t, int8_t) +GEN_VEXT_LD_INDEX(vlxb_v_h, int8_t, int16_t) +GEN_VEXT_LD_INDEX(vlxb_v_w, int8_t, int32_t) +GEN_VEXT_LD_INDEX(vlxb_v_d, int8_t, int64_t) +GEN_VEXT_LD_INDEX(vlxh_v_h, int16_t, int16_t) +GEN_VEXT_LD_INDEX(vlxh_v_w, int16_t, int32_t) +GEN_VEXT_LD_INDEX(vlxh_v_d, int16_t, int64_t) +GEN_VEXT_LD_INDEX(vlxw_v_w, int32_t, int32_t) +GEN_VEXT_LD_INDEX(vlxw_v_d, int32_t, int64_t) +GEN_VEXT_LD_INDEX(vlxe_v_b, int8_t, int8_t) +GEN_VEXT_LD_INDEX(vlxe_v_h, int16_t, int16_t) +GEN_VEXT_LD_INDEX(vlxe_v_w, int32_t, int32_t) +GEN_VEXT_LD_INDEX(vlxe_v_d, int64_t, int64_t) +GEN_VEXT_LD_INDEX(vlxbu_v_b, uint8_t, uint8_t) +GEN_VEXT_LD_INDEX(vlxbu_v_h, uint8_t, uint16_t) +GEN_VEXT_LD_INDEX(vlxbu_v_w, uint8_t, uint32_t) +GEN_VEXT_LD_INDEX(vlxbu_v_d, uint8_t, uint64_t) +GEN_VEXT_LD_INDEX(vlxhu_v_h, uint16_t, uint16_t) +GEN_VEXT_LD_INDEX(vlxhu_v_w, uint16_t, uint32_t) +GEN_VEXT_LD_INDEX(vlxhu_v_d, uint16_t, uint64_t) +GEN_VEXT_LD_INDEX(vlxwu_v_w, uint32_t, uint32_t) +GEN_VEXT_LD_INDEX(vlxwu_v_d, uint32_t, uint64_t) + +/* + *** index: store indexed vector element to guest memory + */ +static inline void vext_st_index(void *vd, void *v0, target_ulong base, + void *vs2, CPURISCVState *env, uint32_t desc, + vext_get_index_addr get_index_addr, + vext_st_elem_fn st_elem, + uint32_t esz, uint32_t msz, uintptr_t ra) +{ + uint32_t i, k; + uint32_t nf = vext_nf(desc); + uint32_t vm = vext_vm(desc); + uint32_t mlen = vext_mlen(desc); + uint32_t vlmax = vext_maxsz(desc) / esz; + + /* 
probe every access*/ + for (i = 0; i < env->vl; i++) { + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + probe_write_access(env, get_index_addr(base, i, vs2), nf * msz, ra); + } + /* store bytes to guest memory */ + for (i = 0; i < env->vl; i++) { + k = 0; + if (!vm && !vext_elem_mask(v0, mlen, i)) { + continue; + } + while (k < nf) { + target_ulong addr = get_index_addr(base, i, vs2) + k * msz; + st_elem(env, addr, i + k * vlmax, vd, ra); + k++; + } + } +} + +#define GEN_VEXT_ST_INDEX(NAME, MTYPE, ETYPE) \ +void HELPER(NAME)(void *vd, void *v0, target_ulong base, \ + void *vs2, CPURISCVState *env, uint32_t desc) \ +{ \ + vext_st_index(vd, v0, base, vs2, env, desc, \ + vext_##NAME##_get_addr, \ + vext_##NAME##_st_elem, \ + sizeof(ETYPE), sizeof(MTYPE), GETPC()); \ +} + +GEN_VEXT_ST_INDEX(vsxb_v_b, int8_t, int8_t) +GEN_VEXT_ST_INDEX(vsxb_v_h, int8_t, int16_t) +GEN_VEXT_ST_INDEX(vsxb_v_w, int8_t, int32_t) +GEN_VEXT_ST_INDEX(vsxb_v_d, int8_t, int64_t) +GEN_VEXT_ST_INDEX(vsxh_v_h, int16_t, int16_t) +GEN_VEXT_ST_INDEX(vsxh_v_w, int16_t, int32_t) +GEN_VEXT_ST_INDEX(vsxh_v_d, int16_t, int64_t) +GEN_VEXT_ST_INDEX(vsxw_v_w, int32_t, int32_t) +GEN_VEXT_ST_INDEX(vsxw_v_d, int32_t, int64_t) +GEN_VEXT_ST_INDEX(vsxe_v_b, int8_t, int8_t) +GEN_VEXT_ST_INDEX(vsxe_v_h, int16_t, int16_t) +GEN_VEXT_ST_INDEX(vsxe_v_w, int32_t, int32_t) +GEN_VEXT_ST_INDEX(vsxe_v_d, int64_t, int64_t) From patchwork Tue Feb 25 10:35:07 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: LIU Zhiwei X-Patchwork-Id: 1243980 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@bilbo.ozlabs.org Authentication-Results: ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=nongnu.org (client-ip=209.51.188.17; helo=lists.gnu.org; envelope-from=qemu-devel-bounces+incoming=patchwork.ozlabs.org@nongnu.org; receiver=) Authentication-Results: ozlabs.org; dmarc=none (p=none dis=none) header.from=c-sky.com Received: from lists.gnu.org (lists.gnu.org [209.51.188.17]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by ozlabs.org (Postfix) with ESMTPS id 48Rb1p49x3z9sP7 for ; Tue, 25 Feb 2020 21:35:58 +1100 (AEDT) Received: from localhost ([::1]:52190 helo=lists1p.gnu.org) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1j6XZE-0001FY-Ft for incoming@patchwork.ozlabs.org; Tue, 25 Feb 2020 05:35:56 -0500 Received: from eggs.gnu.org ([2001:470:142:3::10]:35269) by lists.gnu.org with esmtp (Exim 4.90_1) (envelope-from ) id 1j6XYh-0001Ak-Ew for qemu-devel@nongnu.org; Tue, 25 Feb 2020 05:35:26 -0500 Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71) (envelope-from ) id 1j6XYd-0001PX-SY for qemu-devel@nongnu.org; Tue, 25 Feb 2020 05:35:23 -0500 Received: from smtp2200-217.mail.aliyun.com ([121.197.200.217]:46025) by eggs.gnu.org with esmtps (TLS1.0:DHE_RSA_AES_256_CBC_SHA1:32) (Exim 4.71) (envelope-from ) id 1j6XYd-0001O1-8t; Tue, 25 Feb 2020 05:35:19 -0500 X-Alimail-AntiSpam: AC=CONTINUE; BC=0.07436282|-1; CH=green; DM=CONTINUE|CONTINUE|true|0.65445-0.0401631-0.305387; DS=CONTINUE|ham_system_inform|0.686825-0.000258514-0.312917; FP=0|0|0|0|0|-1|-1|-1; HT=e02c03308; MF=zhiwei_liu@c-sky.com; NM=1; PH=DS; RN=10; RT=10; SR=0; TI=SMTPD_---.GsFPuQz_1582626910; Received: from L-PF1D6DP4-1208.hz.ali.com(mailfrom:zhiwei_liu@c-sky.com fp:SMTPD_---.GsFPuQz_1582626910) by smtp.aliyun-inc.com(10.147.40.44); Tue, 25 Feb 2020 18:35:13 +0800 From: LIU 
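The address computation above is the whole story of indexed access: each
active element i loads from base + vs2[i]. A minimal standalone sketch of
that semantics (plain C with flat arrays; gather_sb_to_w is an illustrative
name, not a QEMU helper; masking, nf segments and tail clearing handled by
vext_ld_index() are omitted), for vlxb.v at SEW=32:

    #include <stddef.h>
    #include <stdint.h>

    /* Gather semantics of vlxb.v with 32-bit elements: for each element,
     * form addr = base + offset[i], load one signed byte, and sign-extend
     * it into the 32-bit destination element. */
    static void gather_sb_to_w(int32_t *vd, const int8_t *base,
                               const int32_t *offsets, size_t vl)
    {
        for (size_t i = 0; i < vl; i++) {
            vd[i] = (int32_t)base[offsets[i]];  /* base + vs2[i], then sext */
        }
    }

The stores (vsxb.v and friends) are the mirror image, writing the low byte
of each element to base + vs2[i], which is why both directions share the
same GEN_VEXT_GET_INDEX_ADDR address generators.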
From patchwork Tue Feb 25 10:35:07 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 1243980
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com,
 chihmin.chao@sifive.com, palmer@dabbelt.com
Subject: [PATCH v4 4/5] target/riscv: add fault-only-first unit stride load
Date: Tue, 25 Feb 2020 18:35:07 +0800
Message-Id: <20200225103508.7651-5-zhiwei_liu@c-sky.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
References: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, linux-csky@vger.kernel.org,
 wenmeng_zhang@c-sky.com, wxy194768@alibaba-inc.com, LIU Zhiwei

The unit-stride fault-only-first load instructions are used to vectorize
loops with data-dependent exit conditions (while loops). These instructions
execute as a regular load except that they will only take a trap on
element 0.

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h                   |  22 ++++
 target/riscv/insn32.decode              |   7 ++
 target/riscv/insn_trans/trans_rvv.inc.c |  66 ++++++++++++
 target/riscv/vector_helper.c            | 136 ++++++++++++++++++++++
 4 files changed, 231 insertions(+)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index f9b3da60ca..72ba4d9bdb 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -218,3 +218,25 @@ DEF_HELPER_6(vsxe_v_b, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsxe_v_h, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsxe_v_w, void, ptr, ptr, tl, ptr, env, i32)
 DEF_HELPER_6(vsxe_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_5(vlbff_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbff_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhff_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vleff_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vleff_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vleff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vleff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbuff_v_b, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbuff_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbuff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlbuff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhuff_v_h, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhuff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlhuff_v_d, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwuff_v_w, void, ptr, ptr, tl, env, i32)
+DEF_HELPER_5(vlwuff_v_d, void, ptr, ptr, tl, env, i32)
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 6a363a6b7e..973ac63fda 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -219,6 +219,13 @@ vle_v      ... 000 . 00000 ..... 111 ..... 0000111 @r2_nfvm
 vlbu_v     ... 000 . 00000 ..... 000 ..... 0000111 @r2_nfvm
 vlhu_v     ... 000 . 00000 ..... 101 ..... 0000111 @r2_nfvm
 vlwu_v     ... 000 . 00000 ..... 110 ..... 0000111 @r2_nfvm
+vlbff_v    ... 100 . 10000 ..... 000 ..... 0000111 @r2_nfvm
+vlhff_v    ... 100 . 10000 ..... 101 ..... 0000111 @r2_nfvm
+vlwff_v    ... 100 . 10000 ..... 110 ..... 0000111 @r2_nfvm
+vleff_v    ... 000 . 10000 ..... 111 ..... 0000111 @r2_nfvm
+vlbuff_v   ... 000 . 10000 ..... 000 ..... 0000111 @r2_nfvm
+vlhuff_v   ... 000 . 10000 ..... 101 ..... 0000111 @r2_nfvm
+vlwuff_v   ... 000 . 10000 ..... 110 ..... 0000111 @r2_nfvm
 vsb_v      ... 000 . 00000 ..... 000 ..... 0100111 @r2_nfvm
 vsh_v      ... 000 . 00000 ..... 101 ..... 0100111 @r2_nfvm
 vsw_v      ... 000 . 00000 ..... 110 ..... 0100111 @r2_nfvm
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index c0d560d789..dda3ba555c 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -510,3 +510,69 @@ static bool trans_vsuxe_v(DisasContext *s, arg_rnfvm* a)
 {
     return trans_vsxe_v(s, a);
 }
+
+/*
+ *** unit stride fault-only-first load
+ */
+static bool ldff_trans(uint32_t vd, uint32_t rs1, uint32_t data,
+        gen_helper_ldst_us *fn, DisasContext *s)
+{
+    TCGv_ptr dest, mask;
+    TCGv base;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool ldff_op(DisasContext *s, arg_r2nfvm *a, uint8_t seq)
+{
+    uint8_t nf = a->nf + 1;
+    uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (nf << 11);
+    gen_helper_ldst_us *fn;
+    static gen_helper_ldst_us * const fns[7][4] = {
+        { gen_helper_vlbff_v_b, gen_helper_vlbff_v_h,
+          gen_helper_vlbff_v_w, gen_helper_vlbff_v_d },
+        { NULL, gen_helper_vlhff_v_h,
+          gen_helper_vlhff_v_w, gen_helper_vlhff_v_d },
+        { NULL, NULL,
+          gen_helper_vlwff_v_w, gen_helper_vlwff_v_d },
+        { gen_helper_vleff_v_b, gen_helper_vleff_v_h,
+          gen_helper_vleff_v_w, gen_helper_vleff_v_d },
+        { gen_helper_vlbuff_v_b, gen_helper_vlbuff_v_h,
+          gen_helper_vlbuff_v_w, gen_helper_vlbuff_v_d },
+        { NULL, gen_helper_vlhuff_v_h,
+          gen_helper_vlhuff_v_w, gen_helper_vlhuff_v_d },
+        { NULL, NULL,
+          gen_helper_vlwuff_v_w, gen_helper_vlwuff_v_d }
+    };
+
+    fn = fns[seq][s->sew];
+    if (fn == NULL) {
+        return false;
+    }
+
+    return ldff_trans(a->rd, a->rs1, data, fn, s);
+}
+
+GEN_VEXT_TRANS(vlbff_v, 0, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vlhff_v, 1, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vlwff_v, 2, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vleff_v, 3, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vlbuff_v, 4, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vlhuff_v, 5, r2nfvm, ldff_op, ld_us_check)
+GEN_VEXT_TRANS(vlwuff_v, 6, r2nfvm, ldff_op, ld_us_check)
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 908f48e216..6772e31ecb 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -277,6 +277,28 @@ GEN_VEXT_LD_ELEM(vlxhu_v_w, uint16_t, uint32_t, H4, lduw)
 GEN_VEXT_LD_ELEM(vlxhu_v_d, uint16_t, uint64_t, H8, lduw)
 GEN_VEXT_LD_ELEM(vlxwu_v_w, uint32_t, uint32_t, H4, ldl)
 GEN_VEXT_LD_ELEM(vlxwu_v_d, uint32_t, uint64_t, H8, ldl)
+GEN_VEXT_LD_ELEM(vlbff_v_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(vlbff_v_h, int8_t, int16_t, H2, ldsb)
+GEN_VEXT_LD_ELEM(vlbff_v_w, int8_t, int32_t, H4, ldsb)
+GEN_VEXT_LD_ELEM(vlbff_v_d, int8_t, int64_t, H8, ldsb)
+GEN_VEXT_LD_ELEM(vlhff_v_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(vlhff_v_w, int16_t, int32_t, H4, ldsw)
+GEN_VEXT_LD_ELEM(vlhff_v_d, int16_t, int64_t, H8, ldsw)
+GEN_VEXT_LD_ELEM(vlwff_v_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(vlwff_v_d, int32_t, int64_t, H8, ldl)
+GEN_VEXT_LD_ELEM(vleff_v_b, int8_t, int8_t, H1, ldsb)
+GEN_VEXT_LD_ELEM(vleff_v_h, int16_t, int16_t, H2, ldsw)
+GEN_VEXT_LD_ELEM(vleff_v_w, int32_t, int32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(vleff_v_d, int64_t, int64_t, H8, ldq)
+GEN_VEXT_LD_ELEM(vlbuff_v_b, uint8_t, uint8_t, H1, ldub)
+GEN_VEXT_LD_ELEM(vlbuff_v_h, uint8_t, uint16_t, H2, ldub)
+GEN_VEXT_LD_ELEM(vlbuff_v_w, uint8_t, uint32_t, H4, ldub)
+GEN_VEXT_LD_ELEM(vlbuff_v_d, uint8_t, uint64_t, H8, ldub)
+GEN_VEXT_LD_ELEM(vlhuff_v_h, uint16_t, uint16_t, H2, lduw)
+GEN_VEXT_LD_ELEM(vlhuff_v_w, uint16_t, uint32_t, H4, lduw)
+GEN_VEXT_LD_ELEM(vlhuff_v_d, uint16_t, uint64_t, H8, lduw)
+GEN_VEXT_LD_ELEM(vlwuff_v_w, uint32_t, uint32_t, H4, ldl)
+GEN_VEXT_LD_ELEM(vlwuff_v_d, uint32_t, uint64_t, H8, ldl)
 
 #define GEN_VEXT_ST_ELEM(NAME, ETYPE, H, STSUF)            \
 static void vext_##NAME##_st_elem(CPURISCVState *env, abi_ptr addr, \
@@ -840,3 +862,117 @@ GEN_VEXT_ST_INDEX(vsxe_v_b, int8_t, int8_t)
 GEN_VEXT_ST_INDEX(vsxe_v_h, int16_t, int16_t)
 GEN_VEXT_ST_INDEX(vsxe_v_w, int32_t, int32_t)
 GEN_VEXT_ST_INDEX(vsxe_v_d, int64_t, int64_t)
+
+/*
+ *** unit-stride fault-only-first load instructions
+ */
+static inline void vext_ldff(void *vd, void *v0, target_ulong base,
+        CPURISCVState *env, uint32_t desc,
+        vext_ld_elem_fn ld_elem,
+        vext_ld_clear_elem clear_elem,
+        int mmuidx, uint32_t esz, uint32_t msz, uintptr_t ra)
+{
+    void *host;
+    uint32_t i, k, vl = 0;
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t nf = vext_nf(desc);
+    uint32_t vm = vext_vm(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+    target_ulong addr, offset, remain;
+
+    if (env->vl == 0) {
+        return;
+    }
+    /* probe every access */
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        addr = base + nf * i * msz;
+        if (i == 0) {
+            probe_read_access(env, addr, nf * msz, ra);
+        } else {
+            /* if it triggers an exception, no need to check watchpoint */
+            offset = -(addr | TARGET_PAGE_MASK);
+            remain = nf * msz;
+            while (remain > 0) {
+                host = tlb_vaddr_to_host(env, addr, MMU_DATA_LOAD, mmuidx);
+                if (host) {
+#ifdef CONFIG_USER_ONLY
+                    if (page_check_range(addr, nf * msz, PAGE_READ) < 0) {
+                        vl = i;
+                        goto ProbeSuccess;
+                    }
+#else
+                    probe_read_access(env, addr, nf * msz, ra);
+#endif
+                } else {
+                    vl = i;
+                    goto ProbeSuccess;
+                }
+                if (remain <= offset) {
+                    break;
+                }
+                remain -= offset;
+                addr += offset;
+                offset = -(addr | TARGET_PAGE_MASK);
+            }
+        }
+    }
+ProbeSuccess:
+    /* load bytes from guest memory */
+    if (vl != 0) {
+        env->vl = vl;
+    }
+    for (i = 0; i < env->vl; i++) {
+        k = 0;
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        while (k < nf) {
+            target_ulong addr = base + (i * nf + k) * msz;
+            ld_elem(env, addr, i + k * vlmax, vd, ra);
+            k++;
+        }
+    }
+    /* clear tail elements */
+    if (vl != 0) {
+        return;
+    }
+    for (k = 0; k < nf; k++) {
+        clear_elem(vd, env->vl + k * vlmax, env->vl * esz, vlmax * esz);
+    }
+}
+
+#define GEN_VEXT_LDFF(NAME, MTYPE, ETYPE, MMUIDX)          \
+void HELPER(NAME)(void *vd, void *v0, target_ulong base,   \
+        CPURISCVState *env, uint32_t desc)                 \
+{                                                          \
+    vext_ldff(vd, v0, base, env, desc,                     \
+        vext_##NAME##_ld_elem,                             \
+        vext_##NAME##_clear_elem,                          \
+        MMUIDX, sizeof(ETYPE), sizeof(MTYPE), GETPC());    \
+}
+
+GEN_VEXT_LDFF(vlbff_v_b, int8_t, int8_t, MO_SB)
+GEN_VEXT_LDFF(vlbff_v_h, int8_t, int16_t, MO_SB)
+GEN_VEXT_LDFF(vlbff_v_w, int8_t, int32_t, MO_SB)
+GEN_VEXT_LDFF(vlbff_v_d, int8_t, int64_t, MO_SB)
+GEN_VEXT_LDFF(vlhff_v_h, int16_t, int16_t, MO_LESW)
+GEN_VEXT_LDFF(vlhff_v_w, int16_t, int32_t, MO_LESW)
+GEN_VEXT_LDFF(vlhff_v_d, int16_t, int64_t, MO_LESW)
+GEN_VEXT_LDFF(vlwff_v_w, int32_t, int32_t, MO_LESL)
+GEN_VEXT_LDFF(vlwff_v_d, int32_t, int64_t, MO_LESL)
+GEN_VEXT_LDFF(vleff_v_b, int8_t, int8_t, MO_SB)
+GEN_VEXT_LDFF(vleff_v_h, int16_t, int16_t, MO_LESW)
+GEN_VEXT_LDFF(vleff_v_w, int32_t, int32_t, MO_LESL)
+GEN_VEXT_LDFF(vleff_v_d, int64_t, int64_t, MO_LEQ)
+GEN_VEXT_LDFF(vlbuff_v_b, uint8_t, uint8_t, MO_UB)
+GEN_VEXT_LDFF(vlbuff_v_h, uint8_t, uint16_t, MO_UB)
+GEN_VEXT_LDFF(vlbuff_v_w, uint8_t, uint32_t, MO_UB)
+GEN_VEXT_LDFF(vlbuff_v_d, uint8_t, uint64_t, MO_UB)
+GEN_VEXT_LDFF(vlhuff_v_h, uint16_t, uint16_t, MO_LEUW)
+GEN_VEXT_LDFF(vlhuff_v_w, uint16_t, uint32_t, MO_LEUW)
+GEN_VEXT_LDFF(vlhuff_v_d, uint16_t, uint64_t, MO_LEUW)
+GEN_VEXT_LDFF(vlwuff_v_w, uint32_t, uint32_t, MO_LEUL)
+GEN_VEXT_LDFF(vlwuff_v_d, uint32_t, uint64_t, MO_LEUL)
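The probe loop above is the point of the instruction: element 0 is probed
with probe_read_access() and may trap normally, while a fault at any later
element only truncates vl. A behavioural sketch of that contract
(page_readable is a hypothetical stand-in for the TLB/page probing;
segments, masking and tail clearing are omitted):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Fault-only-first byte load: returns the new vl. A fault on element 0
     * sets *trap (the caller raises the exception); a fault on element
     * i > 0 silently shrinks vl to i, as vext_ldff() does via ProbeSuccess. */
    static size_t ldff_bytes(int8_t *vd, const int8_t *base, size_t vl,
                             bool (*page_readable)(const int8_t *addr),
                             bool *trap)
    {
        *trap = false;
        for (size_t i = 0; i < vl; i++) {
            if (!page_readable(base + i)) {
                if (i == 0) {
                    *trap = true;   /* element 0: trap like a normal load */
                    return vl;
                }
                return i;           /* later element: truncate vl, no trap */
            }
            vd[i] = base[i];
        }
        return vl;
    }

This is what lets software vectorize while-style loops: the first element of
each strip-mined iteration either makes progress or faults cleanly.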
From patchwork Tue Feb 25 10:35:08 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: LIU Zhiwei
X-Patchwork-Id: 1243985
From: LIU Zhiwei
To: richard.henderson@linaro.org, alistair23@gmail.com,
 chihmin.chao@sifive.com, palmer@dabbelt.com
Subject: [PATCH v4 5/5] target/riscv: add vector amo operations
Date: Tue, 25 Feb 2020 18:35:08 +0800
Message-Id: <20200225103508.7651-6-zhiwei_liu@c-sky.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
References: <20200225103508.7651-1-zhiwei_liu@c-sky.com>
Cc: qemu-riscv@nongnu.org, qemu-devel@nongnu.org, linux-csky@vger.kernel.org,
 wenmeng_zhang@c-sky.com, wxy194768@alibaba-inc.com, LIU Zhiwei

Vector AMOs operate as if aq and rl bits were zero on each element with
regard to ordering relative to other instructions in the same hart. Vector
AMOs provide no ordering guarantee between element operations in the same
vector AMO instruction.

Signed-off-by: LIU Zhiwei
---
 target/riscv/helper.h                   |  56 +++++
 target/riscv/insn32-64.decode           |  11 +
 target/riscv/insn32.decode              |  13 ++
 target/riscv/insn_trans/trans_rvv.inc.c | 154 +++++++++++++
 target/riscv/vector_helper.c            | 280 +++++++++++++++++++++++-
 5 files changed, 513 insertions(+), 1 deletion(-)

diff --git a/target/riscv/helper.h b/target/riscv/helper.h
index 72ba4d9bdb..cbe0d107c0 100644
--- a/target/riscv/helper.h
+++ b/target/riscv/helper.h
@@ -240,3 +240,59 @@ DEF_HELPER_5(vlhuff_v_w, void, ptr, ptr, tl, env, i32)
 DEF_HELPER_5(vlhuff_v_d, void, ptr, ptr, tl, env, i32)
 DEF_HELPER_5(vlwuff_v_w, void, ptr, ptr, tl, env, i32)
 DEF_HELPER_5(vlwuff_v_d, void, ptr, ptr, tl, env, i32)
+#ifdef TARGET_RISCV64
+DEF_HELPER_6(vamoswapw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoswapd_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddd_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxorw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxord_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandd_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoorw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoord_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomind_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxd_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominuw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominud_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxuw_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxud_v_d_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoswapw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoswapd_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddd_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxorw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxord_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandd_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoorw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoord_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomind_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxd_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominuw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominud_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxuw_v_d, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxud_v_d, void, ptr, ptr, tl, ptr, env, i32)
+#endif
+DEF_HELPER_6(vamoswapw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxorw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoorw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominuw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxuw_v_w_a, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoswapw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoaddw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoxorw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoandw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamoorw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamominuw_v_w, void, ptr, ptr, tl, ptr, env, i32)
+DEF_HELPER_6(vamomaxuw_v_w, void, ptr, ptr, tl, ptr, env, i32)
diff --git a/target/riscv/insn32-64.decode b/target/riscv/insn32-64.decode
index 380bf791bc..86153d93fa 100644
--- a/target/riscv/insn32-64.decode
+++ b/target/riscv/insn32-64.decode
@@ -57,6 +57,17 @@ amomax_d   10100 . . ..... ..... 011 ..... 0101111 @atom_st
 amominu_d  11000 . . ..... ..... 011 ..... 0101111 @atom_st
 amomaxu_d  11100 . . ..... ..... 011 ..... 0101111 @atom_st
+#*** Vector AMO operations (in addition to Zvamo) ***
+vamoswapd_v     00001 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamoaddd_v      00000 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamoxord_v      00100 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamoandd_v      01100 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamoord_v       01000 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamomind_v      10000 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamomaxd_v      10100 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamominud_v     11000 . . ..... ..... 111 ..... 0101111 @r_wdvm
+vamomaxud_v     11100 . . ..... ..... 111 ..... 0101111 @r_wdvm
+
 # *** RV64F Standard Extension (in addition to RV32F) ***
 fcvt_l_s   1100000 00010 ..... ... ..... 1010011 @r2_rm
 fcvt_lu_s  1100000 00011 ..... ... ..... 1010011 @r2_rm
diff --git a/target/riscv/insn32.decode b/target/riscv/insn32.decode
index 973ac63fda..077551dd13 100644
--- a/target/riscv/insn32.decode
+++ b/target/riscv/insn32.decode
@@ -43,6 +43,7 @@
 &u    imm rd
 &shift     shamt rs1 rd
 &atomic    aq rl rs2 rs1 rd
+&rwdvm     vm wd rd rs1 rs2
 &r2nfvm    vm rd rs1 nf
 &rnfvm     vm rd rs1 rs2 nf
 
@@ -64,6 +65,7 @@
 @r_rm    ....... ..... ..... ... ..... ....... %rs2 %rs1 %rm %rd
 @r2_rm   ....... ..... ..... ... ..... ....... %rs1 %rm %rd
 @r2      ....... ..... ..... ... ..... ....... %rs1 %rd
+@r_wdvm  ..... wd:1 vm:1 ..... ..... ... ..... ....... &rwdvm %rs2 %rs1 %rd
 @r2_nfvm nf:3 ... vm:1 ..... ..... ... ..... ....... &r2nfvm %rs1 %rd
 @r_nfvm  nf:3 ... vm:1 ..... ..... ... ..... ....... &rnfvm %rs2 %rs1 %rd
 @r2_zimm . zimm:11  ..... ... ..... ....... %rs1 %rd
@@ -259,6 +261,17 @@ vsuxh_v    ... 111 . ..... ..... 101 ..... 0100111 @r_nfvm
 vsuxw_v    ... 111 . ..... ..... 110 ..... 0100111 @r_nfvm
 vsuxe_v    ... 111 . ..... ..... 111 ..... 0100111 @r_nfvm
+#*** Vector AMO operations are encoded under the standard AMO major opcode ***
+vamoswapw_v     00001 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamoaddw_v      00000 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamoxorw_v      00100 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamoandw_v      01100 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamoorw_v       01000 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamominw_v      10000 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamomaxw_v      10100 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamominuw_v     11000 . . ..... ..... 110 ..... 0101111 @r_wdvm
+vamomaxuw_v     11100 . . ..... ..... 110 ..... 0101111 @r_wdvm
+
 # *** new major opcode OP-V ***
 vsetvli         0 ........... ..... 111 ..... 1010111  @r2_zimm
 vsetvl          1000000 ..... ..... 111 ..... 1010111  @r
diff --git a/target/riscv/insn_trans/trans_rvv.inc.c b/target/riscv/insn_trans/trans_rvv.inc.c
index dda3ba555c..a0e1e496f2 100644
--- a/target/riscv/insn_trans/trans_rvv.inc.c
+++ b/target/riscv/insn_trans/trans_rvv.inc.c
@@ -576,3 +576,157 @@ GEN_VEXT_TRANS(vleff_v, 3, r2nfvm, ldff_op, ld_us_check)
 GEN_VEXT_TRANS(vlbuff_v, 4, r2nfvm, ldff_op, ld_us_check)
 GEN_VEXT_TRANS(vlhuff_v, 5, r2nfvm, ldff_op, ld_us_check)
 GEN_VEXT_TRANS(vlwuff_v, 6, r2nfvm, ldff_op, ld_us_check)
+
+/*
+ *** vector atomic operation
+ */
+typedef void gen_helper_amo(TCGv_ptr, TCGv_ptr, TCGv, TCGv_ptr,
+                            TCGv_env, TCGv_i32);
+
+static bool amo_trans(uint32_t vd, uint32_t rs1, uint32_t vs2,
+        uint32_t data, gen_helper_amo *fn, DisasContext *s)
+{
+    TCGv_ptr dest, mask, index;
+    TCGv base;
+    TCGv_i32 desc;
+
+    dest = tcg_temp_new_ptr();
+    mask = tcg_temp_new_ptr();
+    index = tcg_temp_new_ptr();
+    base = tcg_temp_new();
+    desc = tcg_const_i32(simd_desc(0, s->vlen / 8, data));
+
+    gen_get_gpr(base, rs1);
+    tcg_gen_addi_ptr(dest, cpu_env, vreg_ofs(s, vd));
+    tcg_gen_addi_ptr(index, cpu_env, vreg_ofs(s, vs2));
+    tcg_gen_addi_ptr(mask, cpu_env, vreg_ofs(s, 0));
+
+    fn(dest, mask, base, index, cpu_env, desc);
+
+    tcg_temp_free_ptr(dest);
+    tcg_temp_free_ptr(mask);
+    tcg_temp_free_ptr(index);
+    tcg_temp_free(base);
+    tcg_temp_free_i32(desc);
+    return true;
+}
+
+static bool amo_op(DisasContext *s, arg_rwdvm *a, uint8_t seq)
+{
+    uint32_t data = s->mlen | (a->vm << 8) | (s->lmul << 9) | (a->wd << 11);
+    gen_helper_amo *fn;
+#ifdef TARGET_RISCV64
+    static gen_helper_amo *const fns[2][18][2] = {
+        /* atomic operation */
+        { { gen_helper_vamoswapw_v_w_a, gen_helper_vamoswapw_v_d_a },
+          { gen_helper_vamoaddw_v_w_a, gen_helper_vamoaddw_v_d_a },
+          { gen_helper_vamoxorw_v_w_a, gen_helper_vamoxorw_v_d_a },
+          { gen_helper_vamoandw_v_w_a, gen_helper_vamoandw_v_d_a },
+          { gen_helper_vamoorw_v_w_a, gen_helper_vamoorw_v_d_a },
+          { gen_helper_vamominw_v_w_a, gen_helper_vamominw_v_d_a },
+          { gen_helper_vamomaxw_v_w_a, gen_helper_vamomaxw_v_d_a },
+          { gen_helper_vamominuw_v_w_a, gen_helper_vamominuw_v_d_a },
+          { gen_helper_vamomaxuw_v_w_a, gen_helper_vamomaxuw_v_d_a },
+          { NULL, gen_helper_vamoswapd_v_d_a },
+          { NULL, gen_helper_vamoaddd_v_d_a },
+          { NULL, gen_helper_vamoxord_v_d_a },
+          { NULL, gen_helper_vamoandd_v_d_a },
+          { NULL, gen_helper_vamoord_v_d_a },
+          { NULL, gen_helper_vamomind_v_d_a },
+          { NULL, gen_helper_vamomaxd_v_d_a },
+          { NULL, gen_helper_vamominud_v_d_a },
+          { NULL, gen_helper_vamomaxud_v_d_a } },
+        /* no atomic operation */
+        { { gen_helper_vamoswapw_v_w, gen_helper_vamoswapw_v_d },
+          { gen_helper_vamoaddw_v_w, gen_helper_vamoaddw_v_d },
+          { gen_helper_vamoxorw_v_w, gen_helper_vamoxorw_v_d },
+          { gen_helper_vamoandw_v_w, gen_helper_vamoandw_v_d },
+          { gen_helper_vamoorw_v_w, gen_helper_vamoorw_v_d },
+          { gen_helper_vamominw_v_w, gen_helper_vamominw_v_d },
+          { gen_helper_vamomaxw_v_w, gen_helper_vamomaxw_v_d },
+          { gen_helper_vamominuw_v_w, gen_helper_vamominuw_v_d },
+          { gen_helper_vamomaxuw_v_w, gen_helper_vamomaxuw_v_d },
+          { NULL, gen_helper_vamoswapd_v_d },
+          { NULL, gen_helper_vamoaddd_v_d },
+          { NULL, gen_helper_vamoxord_v_d },
+          { NULL, gen_helper_vamoandd_v_d },
+          { NULL, gen_helper_vamoord_v_d },
+          { NULL, gen_helper_vamomind_v_d },
+          { NULL, gen_helper_vamomaxd_v_d },
+          { NULL, gen_helper_vamominud_v_d },
+          { NULL, gen_helper_vamomaxud_v_d } }
+    };
+#else
+    static gen_helper_amo *const fns[2][9][2] = {
+        /* atomic operation */
+        { { gen_helper_vamoswapw_v_w_a, NULL },
+          { gen_helper_vamoaddw_v_w_a, NULL },
+          { gen_helper_vamoxorw_v_w_a, NULL },
+          { gen_helper_vamoandw_v_w_a, NULL },
+          { gen_helper_vamoorw_v_w_a, NULL },
+          { gen_helper_vamominw_v_w_a, NULL },
+          { gen_helper_vamomaxw_v_w_a, NULL },
+          { gen_helper_vamominuw_v_w_a, NULL },
+          { gen_helper_vamomaxuw_v_w_a, NULL } },
+        /* no atomic operation */
+        { { gen_helper_vamoswapw_v_w, NULL },
+          { gen_helper_vamoaddw_v_w, NULL },
+          { gen_helper_vamoxorw_v_w, NULL },
+          { gen_helper_vamoandw_v_w, NULL },
+          { gen_helper_vamoorw_v_w, NULL },
+          { gen_helper_vamominw_v_w, NULL },
+          { gen_helper_vamomaxw_v_w, NULL },
+          { gen_helper_vamominuw_v_w, NULL },
+          { gen_helper_vamomaxuw_v_w, NULL } }
+    };
+#endif
+    if (s->sew < 2) {
+        return false;
+    }
+
+    if (tb_cflags(s->base.tb) & CF_PARALLEL) {
+#ifdef CONFIG_ATOMIC64
+        fn = fns[0][seq][s->sew - 2];
+#else
+        gen_helper_exit_atomic(cpu_env);
+        s->base.is_jmp = DISAS_NORETURN;
+        return true;
+#endif
+    } else {
+        fn = fns[1][seq][s->sew - 2];
+    }
+    if (fn == NULL) {
+        return false;
+    }
+
+    return amo_trans(a->rd, a->rs1, a->rs2, data, fn, s);
+}
+
+static bool amo_check(DisasContext *s, arg_rwdvm* a)
+{
+    return (vext_check_isa_ill(s, RVV | RVA) &&
+            (a->wd ? vext_check_overlap_mask(s, a->rd, a->vm) : 1) &&
+            vext_check_reg(s, a->rd, false) &&
+            vext_check_reg(s, a->rs2, false));
+}
+
+GEN_VEXT_TRANS(vamoswapw_v, 0, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoaddw_v, 1, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoxorw_v, 2, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoandw_v, 3, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoorw_v, 4, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamominw_v, 5, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamomaxw_v, 6, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamominuw_v, 7, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamomaxuw_v, 8, rwdvm, amo_op, amo_check)
+#ifdef TARGET_RISCV64
+GEN_VEXT_TRANS(vamoswapd_v, 9, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoaddd_v, 10, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoxord_v, 11, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoandd_v, 12, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamoord_v, 13, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamomind_v, 14, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamomaxd_v, 15, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamominud_v, 16, rwdvm, amo_op, amo_check)
+GEN_VEXT_TRANS(vamomaxud_v, 17, rwdvm, amo_op, amo_check)
+#endif
diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index 6772e31ecb..fe07a29cd2 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -94,6 +94,11 @@ static inline uint32_t vext_lmul(uint32_t desc)
     return (simd_data(desc) >> 9) & 0x3;
 }
 
+static uint32_t vext_wd(uint32_t desc)
+{
+    return (simd_data(desc) >> 11) & 0x1;
+}
+
 /*
  * Get vector group length in bytes. Its range is [64, 2048].
  *
@@ -166,9 +171,22 @@ static void vext_clear(void *tail, uint32_t cnt, uint32_t tot)
 }
 #endif
 
-static inline int vext_elem_mask(void *v0, int mlen, int index)
+static void vext_clearl(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
+{
+    int32_t *cur = ((int32_t *)vd + H4(idx));
+    vext_clear(cur, cnt, tot);
+}
+
+#ifdef TARGET_RISCV64
+static void vext_clearq(void *vd, uint32_t idx, uint32_t cnt, uint32_t tot)
 {
+    int64_t *cur = (int64_t *)vd + idx;
+    vext_clear(cur, cnt, tot);
+}
+#endif
+static inline int vext_elem_mask(void *v0, int mlen, int index)
+{
     int idx = (index * mlen) / 8;
     int pos = (index * mlen) % 8;
@@ -976,3 +994,263 @@ GEN_VEXT_LDFF(vlhuff_v_w, uint16_t, uint32_t, MO_LEUW)
 GEN_VEXT_LDFF(vlhuff_v_d, uint16_t, uint64_t, MO_LEUW)
 GEN_VEXT_LDFF(vlwuff_v_w, uint32_t, uint32_t, MO_LEUL)
 GEN_VEXT_LDFF(vlwuff_v_d, uint32_t, uint64_t, MO_LEUL)
+
+/*
+ *** Vector AMO Operations (Zvamo)
+ */
+typedef void (*vext_amo_noatomic_fn)(void *vs3, target_ulong addr,
+        uint32_t wd, uint32_t idx, CPURISCVState *env, uintptr_t retaddr);
+typedef void (*vext_amo_atomic_fn)(void *vs3, target_ulong addr,
+        uint32_t wd, uint32_t idx, CPURISCVState *env);
+
+#ifdef TARGET_RISCV64
+GEN_VEXT_GET_INDEX_ADDR(vamoswapw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoswapd_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoaddw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoaddd_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoxorw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoxord_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoandw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoandd_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoorw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamoord_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamominw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamomind_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxd_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamominuw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamominud_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxuw_v_d, int64_t, H8)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxud_v_d, int64_t, H8)
+#endif
+GEN_VEXT_GET_INDEX_ADDR(vamoswapw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamoaddw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamoxorw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamoandw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamoorw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamominw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamominuw_v_w, int32_t, H4)
+GEN_VEXT_GET_INDEX_ADDR(vamomaxuw_v_w, int32_t, H4)
+
+/* no atomic operation for vector atomic instructions */
+#define DO_SWAP(N, M) (M)
+#define DO_AND(N, M)  (N & M)
+#define DO_XOR(N, M)  (N ^ M)
+#define DO_OR(N, M)   (N | M)
+#define DO_ADD(N, M)  (N + M)
+#define DO_MAX(N, M)  ((N) >= (M) ? (N) : (M))
+#define DO_MIN(N, M)  ((N) >= (M) ? (M) : (N))
+
+#define GEN_VEXT_AMO_NOATOMIC_OP(NAME, ETYPE, MTYPE, H, DO_OP, SUF) \
+static void vext_##NAME##_noatomic_op(void *vs3, target_ulong addr, \
+        uint32_t wd, uint32_t idx, CPURISCVState *env, uintptr_t retaddr)\
+{                                                          \
+    ETYPE ret;                                             \
+    target_ulong tmp;                                      \
+    int mmu_idx = cpu_mmu_index(env, false);               \
+    tmp = cpu_ld##SUF##_mmuidx_ra(env, addr, mmu_idx, retaddr); \
+    ret = DO_OP((ETYPE)(MTYPE)tmp, *((ETYPE *)vs3 + H(idx))); \
+    cpu_st##SUF##_mmuidx_ra(env, addr, ret, mmu_idx, retaddr); \
+    if (wd) {                                              \
+        *((ETYPE *)vs3 + H(idx)) = (target_long)(MTYPE)tmp; \
+    }                                                      \
+}
+
+GEN_VEXT_AMO_NOATOMIC_OP(vamoswapw_v_w, int32_t, int32_t, H4, DO_SWAP, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoaddw_v_w, int32_t, int32_t, H4, DO_ADD, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoxorw_v_w, int32_t, int32_t, H4, DO_XOR, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoandw_v_w, int32_t, int32_t, H4, DO_AND, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoorw_v_w, int32_t, int32_t, H4, DO_OR, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamominw_v_w, int32_t, int32_t, H4, DO_MIN, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxw_v_w, int32_t, int32_t, H4, DO_MAX, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamominuw_v_w, uint32_t, int32_t, H4, DO_MIN, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxuw_v_w, uint32_t, int32_t, H4, DO_MAX, l)
+#ifdef TARGET_RISCV64
+GEN_VEXT_AMO_NOATOMIC_OP(vamoswapw_v_d, int64_t, int32_t, H8, DO_SWAP, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoswapd_v_d, int64_t, int64_t, H8, DO_SWAP, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoaddw_v_d, int64_t, int32_t, H8, DO_ADD, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoaddd_v_d, int64_t, int64_t, H8, DO_ADD, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoxorw_v_d, int64_t, int32_t, H8, DO_XOR, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoxord_v_d, int64_t, int64_t, H8, DO_XOR, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoandw_v_d, int64_t, int32_t, H8, DO_AND, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoandd_v_d, int64_t, int64_t, H8, DO_AND, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoorw_v_d, int64_t, int32_t, H8, DO_OR, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamoord_v_d, int64_t, int64_t, H8, DO_OR, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamominw_v_d, int64_t, int32_t, H8, DO_MIN, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomind_v_d, int64_t, int64_t, H8, DO_MIN, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxw_v_d, int64_t, int32_t, H8, DO_MAX, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxd_v_d, int64_t, int64_t, H8, DO_MAX, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamominuw_v_d, uint64_t, int32_t, H8, DO_MIN, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamominud_v_d, uint64_t, int64_t, H8, DO_MIN, q)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxuw_v_d, uint64_t, int32_t, H8, DO_MAX, l)
+GEN_VEXT_AMO_NOATOMIC_OP(vamomaxud_v_d, uint64_t, int64_t, H8, DO_MAX, q)
+#endif
+
+/* atomic operation for vector atomic instructions */
+#ifndef CONFIG_USER_ONLY
+#define GEN_VEXT_ATOMIC_OP(NAME, ETYPE, MTYPE, MOFLAG, H, AMO)      \
+static void vext_##NAME##_atomic_op(void *vs3, target_ulong addr,   \
+        uint32_t wd, uint32_t idx, CPURISCVState *env)              \
+{                                                                   \
+    target_ulong tmp;                                               \
+    int mem_idx = cpu_mmu_index(env, false);                        \
+    tmp = helper_atomic_##AMO##_le(env, addr, *((ETYPE *)vs3 + H(idx)), \
+            make_memop_idx(MO_ALIGN | MOFLAG, mem_idx));            \
+    if (wd) {                                                       \
+        *((ETYPE *)vs3 + H(idx)) = (target_long)(MTYPE)tmp;         \
+    }                                                               \
+}
+#else
+#define GEN_VEXT_ATOMIC_OP(NAME, ETYPE, MTYPE, MOFLAG, H, AMO)      \
+static void vext_##NAME##_atomic_op(void *vs3, target_ulong addr,   \
+        uint32_t wd, uint32_t idx, CPURISCVState *env)              \
+{                                                                   \
+    target_ulong tmp;                                               \
+    tmp = helper_atomic_##AMO##_le(env, addr, *((ETYPE *)vs3 + H(idx))); \
+    if (wd) {                                                       \
+        *((ETYPE *)vs3 + H(idx)) = (target_long)(MTYPE)tmp;         \
+    }                                                               \
+}
+#endif
+
+GEN_VEXT_ATOMIC_OP(vamoswapw_v_w, int32_t, int32_t, MO_TESL, H4, xchgl)
+GEN_VEXT_ATOMIC_OP(vamoaddw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_addl)
+GEN_VEXT_ATOMIC_OP(vamoxorw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_xorl)
+GEN_VEXT_ATOMIC_OP(vamoandw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_andl)
+GEN_VEXT_ATOMIC_OP(vamoorw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_orl)
+GEN_VEXT_ATOMIC_OP(vamominw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_sminl)
+GEN_VEXT_ATOMIC_OP(vamomaxw_v_w, int32_t, int32_t, MO_TESL, H4, fetch_smaxl)
+GEN_VEXT_ATOMIC_OP(vamominuw_v_w, uint32_t, int32_t, MO_TEUL, H4, fetch_uminl)
+GEN_VEXT_ATOMIC_OP(vamomaxuw_v_w, uint32_t, int32_t, MO_TEUL, H4, fetch_umaxl)
+#ifdef TARGET_RISCV64
+GEN_VEXT_ATOMIC_OP(vamoswapw_v_d, int64_t, int32_t, MO_TESL, H8, xchgl)
+GEN_VEXT_ATOMIC_OP(vamoswapd_v_d, int64_t, int64_t, MO_TEQ, H8, xchgq)
+GEN_VEXT_ATOMIC_OP(vamoaddw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_addl)
+GEN_VEXT_ATOMIC_OP(vamoaddd_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_addq)
+GEN_VEXT_ATOMIC_OP(vamoxorw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_xorl)
+GEN_VEXT_ATOMIC_OP(vamoxord_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_xorq)
+GEN_VEXT_ATOMIC_OP(vamoandw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_andl)
+GEN_VEXT_ATOMIC_OP(vamoandd_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_andq)
+GEN_VEXT_ATOMIC_OP(vamoorw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_orl)
+GEN_VEXT_ATOMIC_OP(vamoord_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_orq)
+GEN_VEXT_ATOMIC_OP(vamominw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_sminl)
+GEN_VEXT_ATOMIC_OP(vamomind_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_sminq)
+GEN_VEXT_ATOMIC_OP(vamomaxw_v_d, int64_t, int32_t, MO_TESL, H8, fetch_smaxl)
+GEN_VEXT_ATOMIC_OP(vamomaxd_v_d, int64_t, int64_t, MO_TEQ, H8, fetch_smaxq)
+GEN_VEXT_ATOMIC_OP(vamominuw_v_d, uint64_t, int32_t, MO_TEUL, H8, fetch_uminl)
+GEN_VEXT_ATOMIC_OP(vamominud_v_d, uint64_t, int64_t, MO_TEQ, H8, fetch_uminq)
+GEN_VEXT_ATOMIC_OP(vamomaxuw_v_d, uint64_t, int32_t, MO_TEUL, H8, fetch_umaxl)
+GEN_VEXT_ATOMIC_OP(vamomaxud_v_d, uint64_t, int64_t, MO_TEQ, H8, fetch_umaxq)
+#endif
+
+static inline void vext_amo_atomic(void *vs3, void *v0, target_ulong base,
+        void *vs2, CPURISCVState *env, uint32_t desc,
+        vext_get_index_addr get_index_addr,
+        vext_amo_atomic_fn atomic_op,
+        vext_ld_clear_elem clear_elem,
+        uint32_t esz, uint32_t msz, uintptr_t ra)
+{
+    uint32_t i;
+    target_long addr;
+    uint32_t wd = vext_wd(desc);
+    uint32_t vm = vext_vm(desc);
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_read_access(env, get_index_addr(base, i, vs2), msz, ra);
+        probe_write_access(env, get_index_addr(base, i, vs2), msz, ra);
+    }
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        addr = get_index_addr(base, i, vs2);
+        atomic_op(vs3, addr, wd, i, env);
+    }
+    clear_elem(vs3, env->vl, env->vl * esz, vlmax * esz);
+}
+
+static inline void vext_amo_noatomic(void *vs3, void *v0, target_ulong base,
+        void *vs2, CPURISCVState *env, uint32_t desc,
+        vext_get_index_addr get_index_addr,
+        vext_amo_noatomic_fn noatomic_op,
+        vext_ld_clear_elem clear_elem,
+        uint32_t esz, uint32_t msz, uintptr_t ra)
+{
+    uint32_t i;
+    target_long addr;
+    uint32_t wd = vext_wd(desc);
+    uint32_t vm = vext_vm(desc);
+    uint32_t mlen = vext_mlen(desc);
+    uint32_t vlmax = vext_maxsz(desc) / esz;
+
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        probe_read_access(env, get_index_addr(base, i, vs2), msz, ra);
+        probe_write_access(env, get_index_addr(base, i, vs2), msz, ra);
+    }
+    for (i = 0; i < env->vl; i++) {
+        if (!vm && !vext_elem_mask(v0, mlen, i)) {
+            continue;
+        }
+        addr = get_index_addr(base, i, vs2);
+        noatomic_op(vs3, addr, wd, i, env, ra);
+    }
+    clear_elem(vs3, env->vl, env->vl * esz, vlmax * esz);
+}
+
+#define GEN_VEXT_AMO(NAME, MTYPE, ETYPE, CLEAR_FN)         \
+void HELPER(NAME##_a)(void *vs3, void *v0, target_ulong base, \
+        void *vs2, CPURISCVState *env, uint32_t desc)      \
+{                                                          \
+    vext_amo_atomic(vs3, v0, base, vs2, env, desc,         \
+        vext_##NAME##_get_addr,                            \
+        vext_##NAME##_atomic_op,                           \
+        CLEAR_FN, sizeof(ETYPE), sizeof(MTYPE), GETPC());  \
+}                                                          \
+                                                           \
+void HELPER(NAME)(void *vs3, void *v0, target_ulong base,  \
+        void *vs2, CPURISCVState *env, uint32_t desc)      \
+{                                                          \
+    vext_amo_noatomic(vs3, v0, base, vs2, env, desc,       \
+        vext_##NAME##_get_addr,                            \
+        vext_##NAME##_noatomic_op,                         \
+        CLEAR_FN, sizeof(ETYPE), sizeof(MTYPE), GETPC());  \
+}
+
+#ifdef TARGET_RISCV64
+GEN_VEXT_AMO(vamoswapw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoswapd_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoaddw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoaddd_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoxorw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoxord_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoandw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoandd_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoorw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamoord_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamominw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamomind_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamomaxw_v_d, int32_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamomaxd_v_d, int64_t, int64_t, vext_clearq)
+GEN_VEXT_AMO(vamominuw_v_d, uint32_t, uint64_t, vext_clearq)
+GEN_VEXT_AMO(vamominud_v_d, uint64_t, uint64_t, vext_clearq)
+GEN_VEXT_AMO(vamomaxuw_v_d, uint32_t, uint64_t, vext_clearq)
+GEN_VEXT_AMO(vamomaxud_v_d, uint64_t, uint64_t, vext_clearq)
+#endif
+GEN_VEXT_AMO(vamoswapw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamoaddw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamoxorw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamoandw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamoorw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamominw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamomaxw_v_w, int32_t, int32_t, vext_clearl)
+GEN_VEXT_AMO(vamominuw_v_w, uint32_t, uint32_t, vext_clearl)
+GEN_VEXT_AMO(vamomaxuw_v_w, uint32_t, uint32_t, vext_clearl)
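Stripped of probing, masking and tail clearing, each element of a vector
AMO is an indexed read-modify-write, with wd choosing whether the old
memory value is written back into vs3. A sketch of vamoaddw.v at SEW=32
(plain C; vamo_addw is an illustrative name, not a QEMU helper; the real
atomic variant performs the load-add-store as one atomic access, and
MO_ALIGN enforces the natural alignment this sketch assumes):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Per-element semantics of vamoaddw.v: mem[base + vs2[i]] += vs3[i],
     * and if wd is set, vs3[i] receives the original memory value. */
    static void vamo_addw(int32_t *vs3, uint8_t *mem_base,
                          const int32_t *offsets, size_t vl, bool wd)
    {
        for (size_t i = 0; i < vl; i++) {
            int32_t *addr = (int32_t *)(mem_base + offsets[i]); /* aligned */
            int32_t old = *addr;
            *addr = old + vs3[i];
            if (wd) {
                vs3[i] = old;   /* wd=1: return the old memory value */
            }
        }
    }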