[v2,08/12] target/riscv/vector_helper.c: use vlenb in HELPER(vsetvl)

Message ID 20240115222528.257342-9-dbarboza@ventanamicro.com
State New
Series target/riscv: add 'cpu->cfg.vlenb', remove 'cpu->cfg.vlen'

Commit Message

Daniel Henrique Barboza Jan. 15, 2024, 10:25 p.m. UTC
Use the new 'vlenb' CPU config to validate fractional LMUL. The original
comparison is done with 'vlen' and 'sew', both in bits. Adjust the shift
to use vlenb.

Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
---
 target/riscv/vector_helper.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

Comments

Richard Henderson Jan. 15, 2024, 10:57 p.m. UTC | #1
On 1/16/24 09:25, Daniel Henrique Barboza wrote:
> Use the new 'vlenb' CPU config to validate fractional LMUL. The original
> comparison is done with 'vlen' and 'sew', both in bits. Adjust the shift
> to use vlenb.
> 
> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
> ---
>   target/riscv/vector_helper.c | 8 ++++++--
>   1 file changed, 6 insertions(+), 2 deletions(-)
> 
> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
> index cb944229b0..9e3ae4b5d3 100644
> --- a/target/riscv/vector_helper.c
> +++ b/target/riscv/vector_helper.c
> @@ -45,9 +45,13 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
>                                               xlen - 1 - R_VTYPE_RESERVED_SHIFT);
>   
>       if (lmul & 4) {
> -        /* Fractional LMUL - check LMUL * VLEN >= SEW */
> +        /*
> +         * Fractional LMUL: check VLEN * LMUL >= SEW,
> +         * or VLEN * (8 - lmul) >= SEW. Using VLENB we
> +         * need 3 less shifts rights.

The last sentence is structured oddly.  Perhaps

   Using VLENB, we decrease the right shift by 3

or perhaps just show the expansion:

/*
  * Fractional LMUL, check
  *
  *    VLEN * LMUL >= SEW
  *    VLEN >> (8 - lmul) >= sew
  *    (vlenb << 3) >> (8 - lmul) >= sew
  *    vlenb >> (8 - 3 - lmul) >= sew
  */

Anyway,
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

r~


> +         */
>           if (lmul == 4 ||
> -            cpu->cfg.vlen >> (8 - lmul) < sew) {
> +            cpu->cfg.vlenb >> (8 - 3 - lmul) < sew) {
>               vill = true;
>           }
>       }
Daniel Henrique Barboza Jan. 16, 2024, 8:45 p.m. UTC | #2
On 1/15/24 19:57, Richard Henderson wrote:
> On 1/16/24 09:25, Daniel Henrique Barboza wrote:
>> Use the new 'vlenb' CPU config to validate fractional LMUL. The original
>> comparison is done with 'vlen' and 'sew', both in bits. Adjust the shift
>> to use vlenb.
>>
>> Signed-off-by: Daniel Henrique Barboza <dbarboza@ventanamicro.com>
>> ---
>>   target/riscv/vector_helper.c | 8 ++++++--
>>   1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
>> index cb944229b0..9e3ae4b5d3 100644
>> --- a/target/riscv/vector_helper.c
>> +++ b/target/riscv/vector_helper.c
>> @@ -45,9 +45,13 @@ target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
>>                                               xlen - 1 - R_VTYPE_RESERVED_SHIFT);
>>       if (lmul & 4) {
>> -        /* Fractional LMUL - check LMUL * VLEN >= SEW */
>> +        /*
>> +         * Fractional LMUL: check VLEN * LMUL >= SEW,
>> +         * or VLEN * (8 - lmul) >= SEW. Using VLENB we
>> +         * need 3 less shifts rights.
> 
> The last sentence is structured oddly.  Perhaps
> 
>    Using VLENB, we decrease the right shift by 3
> 
> or perhaps just show the expansion:
> 
> /*
>   * Fractional LMUL, check
>   *
>   *    VLEN * LMUL >= SEW
>   *    VLEN >> (8 - lmul) >= sew
>   *    (vlenb << 3) >> (8 - lmul) >= sew
>   *    vlenb >> (8 - 3 - lmul) >= sew
>   */

Just changed the comment to show the expansion. Thanks,


Daniel

> 
> Anyway,
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> 
> r~
> 
> 
>> +         */
>>           if (lmul == 4 ||
>> -            cpu->cfg.vlen >> (8 - lmul) < sew) {
>> +            cpu->cfg.vlenb >> (8 - 3 - lmul) < sew) {
>>               vill = true;
>>           }
>>       }
>
Patch

diff --git a/target/riscv/vector_helper.c b/target/riscv/vector_helper.c
index cb944229b0..9e3ae4b5d3 100644
--- a/target/riscv/vector_helper.c
+++ b/target/riscv/vector_helper.c
@@ -45,9 +45,13 @@  target_ulong HELPER(vsetvl)(CPURISCVState *env, target_ulong s1,
                                             xlen - 1 - R_VTYPE_RESERVED_SHIFT);
 
     if (lmul & 4) {
-        /* Fractional LMUL - check LMUL * VLEN >= SEW */
+        /*
+         * Fractional LMUL: check VLEN * LMUL >= SEW,
+         * or VLEN * (8 - lmul) >= SEW. Using VLENB we
+         * need 3 less shifts rights.
+         */
         if (lmul == 4 ||
-            cpu->cfg.vlen >> (8 - lmul) < sew) {
+            cpu->cfg.vlenb >> (8 - 3 - lmul) < sew) {
             vill = true;
         }
     }
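As a sanity check (not part of the patch), the equivalence spelled out in the new comment can be exercised with a short sketch. The values below are hypothetical: VLEN = 128 bits, so vlenb = 16, and only the LMUL = 1/8 encoding (lmul == 5) is shown, where the adjusted shift count 8 - 3 - lmul is zero:

```python
from fractions import Fraction

vlen = 128                            # hypothetical VLEN in bits
vlenb = vlen // 8                     # VLEN in bytes, as cpu->cfg.vlenb holds it
lmul = 5                              # vtype.vlmul encoding for LMUL = 1/8
LMUL = Fraction(1, 1 << (8 - lmul))   # 5 -> 1/8, 6 -> 1/4, 7 -> 1/2

for sew in (8, 16, 32, 64):           # SEW in bits
    direct    = vlen * LMUL >= sew              # spec condition: VLEN * LMUL >= SEW
    via_vlen  = vlen >> (8 - lmul) >= sew       # old check, vlen in bits
    via_vlenb = vlenb >> (8 - 3 - lmul) >= sew  # new check, shift reduced by 3
    assert direct == via_vlen == via_vlenb      # all three agree
```

With these values, VLEN * LMUL is 16 bits, so the check passes for SEW of 8 and 16 and sets vill for 32 and 64, identically under both formulations.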