Message ID: 20171013162438.32458-8-alex.bennee@linaro.org
State: New
Series: v8.2 half-precision support (work-in-progress)
On 10/13/2017 09:24 AM, Alex Bennée wrote:
> This will be required when expanding the MINMAX() macro for 16
> bit/half-precision operations.
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
> ---
>  include/fpu/softfloat.h | 7 +++++++
>  1 file changed, 7 insertions(+)

Reviewed-by: Richard Henderson <richard.henderson@linaro.org>

> +    return make_float16(float16_val(a) & 0x7fff);
> +}
> /*----------------------------------------------------------------------------

Watch the spacing.

r~
diff --git a/include/fpu/softfloat.h b/include/fpu/softfloat.h
index d5e99667b6..edf402d422 100644
--- a/include/fpu/softfloat.h
+++ b/include/fpu/softfloat.h
@@ -374,6 +374,13 @@ static inline int float16_is_zero_or_denormal(float16 a)
     return (float16_val(a) & 0x7c00) == 0;
 }
 
+static inline float16 float16_abs(float16 a)
+{
+    /* Note that abs does *not* handle NaN specially, nor does
+     * it flush denormal inputs to zero.
+     */
+    return make_float16(float16_val(a) & 0x7fff);
+}
 /*----------------------------------------------------------------------------
 | The pattern for a default generated half-precision NaN.
 *----------------------------------------------------------------------------*/
This will be required when expanding the MINMAX() macro for 16
bit/half-precision operations.

Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
---
 include/fpu/softfloat.h | 7 +++++++
 1 file changed, 7 insertions(+)