
[PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.

Message ID VI1PR0801MB203191C5CA00160F3DAC7E40FF890@VI1PR0801MB2031.eurprd08.prod.outlook.com
State New

Commit Message

Tamar Christina Nov. 25, 2016, 12:18 p.m. UTC
Hi Joseph,

I have updated the patch with the changes.
With respect to the formatting, there are tabs there that seem to be rendered
at 4 spaces wide; in my editor, set up at 8 spaces, they look correct.

Kind Regards,
Tamar

Comments

Tamar Christina Dec. 2, 2016, 4:20 p.m. UTC | #1
Ping?

Is there anything else I need to do for the patch or is it OK for trunk?

Thanks,
Tamar
Tamar Christina Dec. 12, 2016, 9:19 a.m. UTC | #2
Ping
Jeff Law Dec. 12, 2016, 3:53 p.m. UTC | #3
On 12/12/2016 02:19 AM, Tamar Christina wrote:
> Ping
>
> ________________________________________
> From: Tamar Christina
> Sent: Friday, December 2, 2016 4:20:42 PM
> To: Joseph Myers
> Cc: GCC Patches; Wilco Dijkstra; rguenther@suse.de; law@redhat.com; Michael Meissner; nd
> Subject: Re: [PATCH][GCC][PATCHv3] Improve fpclassify w.r.t IEEE like numbers in GIMPLE.
>
> Ping?
>
> Is there anything else I need to do for the patch or is it OK for trunk?
Just a note: it is in my queue of things to look at.  Given it was 
posted well before stage1 close, I think it deserves the opportunity to 
move forward (even if we need another iteration) even though we're well 
into stage3.

Jeff
Jeff Law Dec. 14, 2016, 8:07 a.m. UTC | #4
On 11/25/2016 05:18 AM, Tamar Christina wrote:
> Hi Joseph,
>
> I have updated the patch with the changes.
> With respect to the formatting, there are tabs there that seem to be rendered
> at 4 spaces wide; in my editor, set up at 8 spaces, they look correct.
>
Various random comments, mostly formatting.  I do think we're going to 
need another iteration.  The good news is I don't expect we'll be asking 
for major changes in direction, just trying to tie up various loose 
ends, so it shouldn't take nearly as long.

On a high level, presumably there's no real value in keeping the old 
code to "fold" fpclassify.  By exposing those operations as integer 
logicals for the fast path, if the FP value becomes a constant during 
the optimization pipeline we'll see the reinterpreted values flowing 
into the new integer logical tests and they'll simplify just like 
anything else.  Right?

The old IBM format is still supported, though they are expected to be 
moving towards a standard IEEE 128-bit format.  So my only concern is 
that we preserve correct behavior for those cases -- I don't really care 
about optimizing them.  So I think you need to keep them.

For documenting builtins, use existing builtins as a template.



> @@ -822,6 +882,736 @@ lower_builtin_setjmp (gimple_stmt_iterator *gsi)
>    gsi_remove (gsi, false);
>  }
>
> +static tree
> +emit_tree_and_return_var (gimple_seq *seq, tree arg)
Needs a function comment.

> +{
> +  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
> +    return arg;
> +
> +  tree tmp = create_tmp_reg (TREE_TYPE(arg));
Nit.  Need a space between TREE_TYPE and the open paren for its arglist.

> +  gassign *stm = gimple_build_assign(tmp, arg);
Similarly between gimple_build_assign and its arglist.  This formatting 
nit is repeated often.  Please check the patch as a whole and correct. 
Probably the best way to go is a search for [a-z] immediately preceding 
an open paren.
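
As an editorial illustration (not text from the thread), the kind of 
comment being asked for might read:

/* Emit ARG into SEQ through a new temporary register if it is not
   already an SSA name or variable, and return the variable that holds
   the value.  */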


> +/* This function builds an if statement that ends up using explicit branches
> +   instead of becoming a csel.  This function assumes you will fall through to
> +   the next statements after this condition for the false branch.  */
A function comment is supposed to document each argument (and the return 
value if any).  It's probably better to avoid referring to an ARM 
instruction (csel) and instead describe the intent more generically.
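
A hedged editorial sketch of a more generic wording (illustration only, 
not the author's text):

/* Emit into SEQ a branch on COND: when COND is true, assign TRUE_BRANCH
   to RESULT_VARIABLE and jump to EXIT_LABEL; when COND is false, fall
   through to the statements emitted after this condition.  Explicit
   branches are used so that the true arm is evaluated only when needed,
   rather than unconditionally as with a conditional-select style
   expression.  */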


> +static void
> +emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
> +		tree cond, tree true_branch)
> +{
> +    /* Create labels for fall through.  */
> +  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
Comment is indented too deep.  It should line up with the code.


> +
> +static tree
> +get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
Needs function comment.  I'm not going to call out each one.  Please 
verify that every new function has a comment and that they document 
their arguments and return values.

Here's an example from builtins.c you might want to refer back to:

> /* For a memory reference expression EXP compute values M and N such that M
>    divides (&EXP - N) and such that N < M.  If these numbers can be determined,
>    store M in alignp and N in *BITPOSP and return true.  Otherwise return false
>    and store BITS_PER_UNIT to *alignp and any bit-offset to *bitposp.  */
>
> bool
> get_object_alignment_1 (tree exp, unsigned int *alignp,
>                         unsigned HOST_WIDE_INT *bitposp)
> {
>   return get_object_alignment_2 (exp, alignp, bitposp, false);
> }



> +  /* Check if the number that is being classified is close enough to IEEE 754
> +     format to be able to go in the early exit code.  */
> +static bool
> +use_ieee_int_mode (tree arg)
Comment should describe the argument.  Comment is indented 2 spaces too 
deep on both lines.  This occurs in several places.  I won't call each 
one out, but please walk through the patch as a whole and fix them.


> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  machine_mode mode = TYPE_MODE (type);
> +
> +  const real_format *format = REAL_MODE_FORMAT (mode);
> +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
> +  return (format->is_binary_ieee_compatible
> +	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
> +	  /* We explicitly disable quad float support on 32 bit systems.  */
> +	  && !(UNITS_PER_WORD == 4 && type_width == 128)
> +	  && targetm.scalar_mode_supported_p (mode));
> +}
Presumably this is why you needed the target.h inclusion.

Note that on some systems we even disable 64-bit floating point support. 
I suspect this check needs a little re-thinking as I don't think that 
checking for a specific UNITS_PER_WORD is correct, nor is checking the 
width of the type.  I'm not offhand sure what the test should be, just 
that I think we need something better here.



> +
> +static tree
> +is_normal (gimple_seq *seq, tree arg, location_t loc)
> +{
> +  tree type = TREE_TYPE (arg);
> +
> +  machine_mode mode = TYPE_MODE (type);
> +  const real_format *format = REAL_MODE_FORMAT (mode);
> +  const tree bool_type = boolean_type_node;
> +
> +  /* Perform IBM extended format fixups if required.  */
> +  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
> +
> +  /* If not using optimized route then exit early.  */
> +  if (!use_ieee_int_mode (arg))
> +  {
Please check on your formatting.  The open-curly is indented two spaces 
relative to the IF statement.   I see multiple occurrences, so please go 
through the patch and check for them.  I'm not going to try and call out 
each one.






> +
> +static tree
> +is_infinity (gimple_seq *seq, tree arg, location_t loc)
There's a huge amount of code duplication between is_infinity and 
is_finite.  Would it make sense to try and refactor a bit to avoid the 
duplication?  Some of the early setup is even common among most (all?) 
of the is_* functions; consider factoring the common bits into routines 
you can reuse.
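
For instance, an editorial sketch of the suggested refactoring (the name 
exp_all_ones_compare is hypothetical; it reuses only helpers the patch 
itself defines):

static tree
exp_all_ones_compare (gimple_seq *seq, tree arg, location_t loc,
		      enum tree_code cmp)
{
  tree type = TREE_TYPE (arg);
  machine_mode mode = TYPE_MODE (type);
  const real_format *format = REAL_MODE_FORMAT (mode);
  tree int_arg_type
    = build_nonstandard_integer_type (TYPE_PRECISION (type), true);
  const int exp_bits = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;

  /* Mask with all exponent bits set and the mantissa clear, in the
     sign-stripped (shifted left by one) integer image.  */
  tree exp_mask
    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
					build_int_cstu (int_arg_type, 2),
					build_int_cstu (int_arg_type,
							exp_bits - 1)),
		       build_int_cstu (int_arg_type, 1));
  tree inf_mask
    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type, exp_mask,
		       build_int_cstu (int_arg_type, format->p));

  /* The float reinterpreted as an integer, sign bit shifted out.  */
  tree int_arg
    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
		       get_num_as_int (seq, arg, loc),
		       build_int_cstu (int_arg_type, 1));

  /* EQ_EXPR yields is_infinity, LT_EXPR is_finite, GT_EXPR is_nan.  */
  tree res = fold_build2_loc (loc, cmp, boolean_type_node,
			      emit_tree_and_return_var (seq, int_arg),
			      inf_mask);
  return emit_tree_and_return_var (seq, res);
}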


> +
> +/* Determines if the given number is a NaN value.
> +   This function is the last in the chain and only has to
> +   check if its preconditions are true.  */
> +static tree
> +is_nan (gimple_seq *seq, tree arg, location_t loc)
So in the old code we checked UNGT_EXPR, in the new code's slow path you 
check UNORDERED.  Was that change intentional?

In general, I'm going to assume you've got the right tests.   I've done 
light spot checking and will probably do even more on the next iteration.

I see a lot of formatting and lack-of-function comment issues.  I'm 
comfortable with the overall direction though.  I'll want to look more 
closely at the helpers you're using to build up gimple code and how 
they're used.  I get the sense that we ought to already have something 
to do those things, and if not we may want to move yours into a more 
generic location.  I'll probably focus on that and trying to pick up any 
missed nits on the next iteration.  Again, the good news is that I don't 
expect the next iteration to take as long to get to.

Thanks for your patience.

jeff
Tamar Christina Dec. 15, 2016, 10:14 a.m. UTC | #5
> On a high level, presumably there's no real value in keeping the old
> code to "fold" fpclassify.  By exposing those operations as integer
> logicals for the fast path, if the FP value becomes a constant during
> the optimization pipeline we'll see the reinterpreted values flowing
> into the new integer logical tests and they'll simplify just like
> anything else.  Right?

Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.
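
A hedged illustration (not from the patch) of the constant case:

#include <math.h>

int
classify_one (void)
{
  /* With a compile-time constant argument, the reinterpreted integer
     tests are expected to fold down to the FP_NORMAL constant once
     optimization runs.  */
  return __builtin_fpclassify (FP_NAN, FP_INFINITE, FP_NORMAL,
			       FP_SUBNORMAL, FP_ZERO, 1.0);
}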

> The old IBM format is still supported, though they are expected to be
> moving towards a standard IEEE 128-bit format.  So my only concern is
> that we preserve correct behavior for those cases -- I don't really care
> about optimizing them.  So I think you need to keep them.

Yes, I re-added them. It's mostly a copy paste from what they were in the
other functions. But I have no way of testing it.

> For documenting builtins, using existing builtins as a template.

Yeah, I based them off the fpclassify documentation.

> > +{
> > +  tree type = TREE_TYPE (arg);
> > +
> > +  machine_mode mode = TYPE_MODE (type);
> > +
> > +  const real_format *format = REAL_MODE_FORMAT (mode);
> > +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
> > +  return (format->is_binary_ieee_compatible
> > +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
> > +       /* We explicitly disable quad float support on 32 bit systems.  */
> > +       && !(UNITS_PER_WORD == 4 && type_width == 128)
> > +       && targetm.scalar_mode_supported_p (mode));
> > +}
> Presumably this is why you needed the target.h inclusion.
>
> Note that on some systems we even disable 64bit floating point support.
> I suspect this check needs a little re-thinking as I don't think that
> checking for a specific UNITS_PER_WORD is correct, nor is checking the
> width of the type.  I'm not offhand sure what the test should be, just
> that I think we need something better here.

I think what I really wanted to test here is whether there is an integer mode
available that has exactly the same width as the floating-point one. So I have
replaced this with just a call to int_mode_for_mode, which is probably more correct.
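
Something along these lines, perhaps (an editorial sketch of one reading 
of the described change, assuming the int_mode_for_mode of that era, 
which returns BLKmode when no integer mode of the requested width 
exists; the remaining conditions are kept as in the posted patch):

static bool
use_ieee_int_mode (tree arg)
{
  machine_mode mode = TYPE_MODE (TREE_TYPE (arg));
  const real_format *format = REAL_MODE_FORMAT (mode);

  /* Require an integer mode of exactly the float's width instead of
     second-guessing via UNITS_PER_WORD and TYPE_PRECISION.  */
  return (format->is_binary_ieee_compatible
	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
	  && int_mode_for_mode (mode) != BLKmode);
}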

> > +
> > +/* Determines if the given number is a NaN value.
> > +   This function is the last in the chain and only has to
> > +   check if its preconditions are true.  */
> > +static tree
> > +is_nan (gimple_seq *seq, tree arg, location_t loc)
> So in the old code we checked UNGT_EXPR, in the new code's slow path you
> check UNORDERED.  Was that change intentional?

The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
I've replaced it with UNORDERED, but both are correct.
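
Illustration only, in the patch's own builder style (arg and loc as in 
is_nan); the two forms compute the same predicate:

  /* What the updated patch now uses: UNORDERED_EXPR directly.  */
  tree nan_a = fold_build2_loc (loc, UNORDERED_EXPR, boolean_type_node,
				arg, arg);

  /* What the posted revision used: ORDERED_EXPR, then negate.  */
  tree nan_b = fold_build1_loc (loc, BIT_NOT_EXPR, boolean_type_node,
				fold_build2_loc (loc, ORDERED_EXPR,
						 boolean_type_node,
						 arg, arg));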

Thanks for the review,
I'll get the new patch out ASAP.

Tamar
Joseph Myers Dec. 15, 2016, 7:03 p.m. UTC | #6
On Thu, 15 Dec 2016, Tamar Christina wrote:

> > Note that on some systems we even disable 64bit floating point support.
> > I suspect this check needs a little re-thinking as I don't think that
> > checking for a specific UNITS_PER_WORD is correct, nor is checking the
> > width of the type.  I'm not offhand sure what the test should be, just
> > that I think we need something better here.
> 
> I think what I really wanted to test here is whether there is an integer 
> mode available that has exactly the same width as the floating-point one. 
> So I have replaced this with just a call to int_mode_for_mode, which is 
> probably more correct.

I think an integer mode should always exist - even in the case of TFmode 
on 32-bit systems (32-bit sparc / s390, for example, use TFmode long 
double for GNU/Linux, and it's supported as _Float128 and __float128 on 
32-bit x86).  It just may not be usable for arithmetic or for declaring 
variables of that type.

I don't know whether TImode bitwise operations, such as those generated by 
this fpclassify work, will get properly lowered to operations on supported 
narrower modes, but I hope so (clearly it's simpler if you can write 
things straightforwardly and have them cover this case of TFmode on 32-bit 
systems automatically through lowering elsewhere in the compiler, than if 
covering that case would require additional code - the more cases you 
cover, the more opportunity there is for glibc to use the built-in 
functions even with -fsignaling-nans).
Jeff Law Dec. 19, 2016, 8:27 p.m. UTC | #7
On 12/15/2016 03:14 AM, Tamar Christina wrote:
>
>> On a high level, presumably there's no real value in keeping the old
>> code to "fold" fpclassify.  By exposing those operations as integer
>> logicals for the fast path, if the FP value becomes a constant during
>> the optimization pipeline we'll see the reinterpreted values flowing
>> into the new integer logical tests and they'll simplify just like
>> anything else.  Right?
>
> Yes, if it becomes a constant it will be folded away, both in the integer and the fp case.
Thanks for clarifying.


>
>> The old IBM format is still supported, though they are expected to be
>> moving towards a standard IEEE 128-bit format.  So my only concern is
>> that we preserve correct behavior for those cases -- I don't really care
>> about optimizing them.  So I think you need to keep them.
>
> Yes, I re-added them. It's mostly a copy paste from what they were in the
> other functions. But I have no way of testing it.
Understood.

>>> +  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
>>> +  return (format->is_binary_ieee_compatible
>>> +       && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
>>> +       /* We explicitly disable quad float support on 32 bit systems.  */
>>> +       && !(UNITS_PER_WORD == 4 && type_width == 128)
>>> +       && targetm.scalar_mode_supported_p (mode));
>>> +}
>> Presumably this is why you needed the target.h inclusion.
>>
>> Note that on some systems we even disable 64bit floating point support.
>> I suspect this check needs a little re-thinking as I don't think that
>> checking for a specific UNITS_PER_WORD is correct, nor is checking the
>> width of the type.  I'm not offhand sure what the test should be, just
>> that I think we need something better here.
>
> I think what I really wanted to test here is whether there is an integer mode
> available that has exactly the same width as the floating-point one. So I have
> replaced this with just a call to int_mode_for_mode, which is probably more correct.
I'll need to think about it, but I would inherently think that 
int_mode_for_mode is better than an explicit check of UNITS_PER_WORD and 
the type width.


>
>>> +
>>> +/* Determines if the given number is a NaN value.
>>> +   This function is the last in the chain and only has to
>>> +   check if its preconditions are true.  */
>>> +static tree
>>> +is_nan (gimple_seq *seq, tree arg, location_t loc)
>> So in the old code we checked UNGT_EXPR, in the new code's slow path you
>> check UNORDERED.  Was that change intentional?
>
> The old FP code used UNORDERED and the new one was using ORDERED and negating the result.
> I've replaced it with UNORDERED, but both are correct.
OK.  Just wanted to make sure.

jeff
Tamar Christina Jan. 3, 2017, 9:55 a.m. UTC | #8
Hi Jeff,

I wasn't sure if you saw the updated patch attached to the previous email or if you just hadn't had the time to look at it yet.

Cheers,
Tamar
Tamar Christina Jan. 16, 2017, 5:06 p.m. UTC | #9
Ping. Does this still have a chance or should I table it till Stage 1 opens again?

Tamar.

Patch

diff --git a/gcc/builtins.c b/gcc/builtins.c
index 3ac2d44148440b124559ba7cd3de483b7a74b72d..2340f60edb31ebf964367699aaf33ac8401dff41 100644
--- a/gcc/builtins.c
+++ b/gcc/builtins.c
@@ -160,7 +160,6 @@  static tree fold_builtin_0 (location_t, tree);
 static tree fold_builtin_1 (location_t, tree, tree);
 static tree fold_builtin_2 (location_t, tree, tree, tree);
 static tree fold_builtin_3 (location_t, tree, tree, tree, tree);
-static tree fold_builtin_varargs (location_t, tree, tree*, int);
 
 static tree fold_builtin_strpbrk (location_t, tree, tree, tree);
 static tree fold_builtin_strstr (location_t, tree, tree, tree);
@@ -2202,19 +2201,8 @@  interclass_mathfn_icode (tree arg, tree fndecl)
   switch (DECL_FUNCTION_CODE (fndecl))
     {
     CASE_FLT_FN (BUILT_IN_ILOGB):
-      errno_set = true; builtin_optab = ilogb_optab; break;
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      builtin_optab = isinf_optab; break;
-    case BUILT_IN_ISNORMAL:
-    case BUILT_IN_ISFINITE:
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      /* These builtins have no optabs (yet).  */
+      errno_set = true;
+      builtin_optab = ilogb_optab;
       break;
     default:
       gcc_unreachable ();
@@ -2233,8 +2221,7 @@  interclass_mathfn_icode (tree arg, tree fndecl)
 }
 
 /* Expand a call to one of the builtin math functions that operate on
-   floating point argument and output an integer result (ilogb, isinf,
-   isnan, etc).
+   floating point argument and output an integer result (ilogb, etc).
    Return 0 if a normal call should be emitted rather than expanding the
    function in-line.  EXP is the expression that is a call to the builtin
    function; if convenient, the result should be placed in TARGET.  */
@@ -5997,11 +5984,7 @@  expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
     CASE_FLT_FN (BUILT_IN_ILOGB):
       if (! flag_unsafe_math_optimizations)
 	break;
-      gcc_fallthrough ();
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-    case BUILT_IN_ISNORMAL:
+	
       target = expand_builtin_interclass_mathfn (exp, target);
       if (target)
 	return target;
@@ -6281,8 +6264,25 @@  expand_builtin (tree exp, rtx target, rtx subtarget, machine_mode mode,
 	}
       break;
 
+    CASE_FLT_FN (BUILT_IN_ISINF):
+    case BUILT_IN_ISNAND32:
+    case BUILT_IN_ISNAND64:
+    case BUILT_IN_ISNAND128:
+    case BUILT_IN_ISNAN:
+    case BUILT_IN_ISINFD32:
+    case BUILT_IN_ISINFD64:
+    case BUILT_IN_ISINFD128:
+    case BUILT_IN_ISNORMAL:
+    case BUILT_IN_ISZERO:
+    case BUILT_IN_ISSUBNORMAL:
+    case BUILT_IN_FPCLASSIFY:
     case BUILT_IN_SETJMP:
-      /* This should have been lowered to the builtins below.  */
+    CASE_FLT_FN (BUILT_IN_FINITE):
+    case BUILT_IN_FINITED32:
+    case BUILT_IN_FINITED64:
+    case BUILT_IN_FINITED128:
+    case BUILT_IN_ISFINITE:
+      /* These should have been lowered to the builtins in gimple-low.c.  */
       gcc_unreachable ();
 
     case BUILT_IN_SETJMP_SETUP:
@@ -7622,184 +7622,19 @@  fold_builtin_modf (location_t loc, tree arg0, tree arg1, tree rettype)
   return NULL_TREE;
 }
 
-/* Given a location LOC, an interclass builtin function decl FNDECL
-   and its single argument ARG, return an folded expression computing
-   the same, or NULL_TREE if we either couldn't or didn't want to fold
-   (the latter happen if there's an RTL instruction available).  */
-
-static tree
-fold_builtin_interclass_mathfn (location_t loc, tree fndecl, tree arg)
-{
-  machine_mode mode;
-
-  if (!validate_arg (arg, REAL_TYPE))
-    return NULL_TREE;
-
-  if (interclass_mathfn_icode (arg, fndecl) != CODE_FOR_nothing)
-    return NULL_TREE;
-
-  mode = TYPE_MODE (TREE_TYPE (arg));
 
-  bool is_ibm_extended = MODE_COMPOSITE_P (mode);
 
-  /* If there is no optab, try generic code.  */
-  switch (DECL_FUNCTION_CODE (fndecl))
-    {
-      tree result;
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-      {
-	/* isinf(x) -> isgreater(fabs(x),DBL_MAX).  */
-	tree const isgr_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isgr_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	return result;
-      }
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_ISFINITE:
-      {
-	/* isfinite(x) -> islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	REAL_VALUE_TYPE r;
-	char buf[128];
-
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&r, buf);
-	result = build_call_expr (isle_fn, 2,
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	/*result = fold_build2_loc (loc, UNGT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  fold_build1_loc (loc, ABS_EXPR, type, arg),
-				  build_real (type, r));
-	result = fold_build1_loc (loc, TRUTH_NOT_EXPR,
-				  TREE_TYPE (TREE_TYPE (fndecl)),
-				  result);*/
-	return result;
-      }
-    case BUILT_IN_ISNORMAL:
-      {
-	/* isnormal(x) -> isgreaterequal(fabs(x),DBL_MIN) &
-	   islessequal(fabs(x),DBL_MAX).  */
-	tree const isle_fn = builtin_decl_explicit (BUILT_IN_ISLESSEQUAL);
-	tree type = TREE_TYPE (arg);
-	tree orig_arg, max_exp, min_exp;
-	machine_mode orig_mode = mode;
-	REAL_VALUE_TYPE rmax, rmin;
-	char buf[128];
-
-	orig_arg = arg = builtin_save_expr (arg);
-	if (is_ibm_extended)
-	  {
-	    /* Use double to test the normal range of IBM extended
-	       precision.  Emin for IBM extended precision is
-	       different to emin for IEEE double, being 53 higher
-	       since the low double exponent is at least 53 lower
-	       than the high double exponent.  */
-	    type = double_type_node;
-	    mode = DFmode;
-	    arg = fold_build1_loc (loc, NOP_EXPR, type, arg);
-	  }
-	arg = fold_build1_loc (loc, ABS_EXPR, type, arg);
-
-	get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
-	real_from_string (&rmax, buf);
-	sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (orig_mode)->emin - 1);
-	real_from_string (&rmin, buf);
-	max_exp = build_real (type, rmax);
-	min_exp = build_real (type, rmin);
-
-	max_exp = build_call_expr (isle_fn, 2, arg, max_exp);
-	if (is_ibm_extended)
-	  {
-	    /* Testing the high end of the range is done just using
-	       the high double, using the same test as isfinite().
-	       For the subnormal end of the range we first test the
-	       high double, then if its magnitude is equal to the
-	       limit of 0x1p-969, we test whether the low double is
-	       non-zero and opposite sign to the high double.  */
-	    tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
-	    tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
-	    tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
-	    tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
-				       arg, min_exp);
-	    tree as_complex = build1 (VIEW_CONVERT_EXPR,
-				      complex_double_type_node, orig_arg);
-	    tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
-	    tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
-	    tree zero = build_real (type, dconst0);
-	    tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
-	    tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
-	    tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
-	    tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
-				      fold_build3 (COND_EXPR,
-						   integer_type_node,
-						   hilt, logt, lolt));
-	    eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
-				  eq_min, ok_lo);
-	    min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
-				   gt_min, eq_min);
-	  }
-	else
-	  {
-	    tree const isge_fn
-	      = builtin_decl_explicit (BUILT_IN_ISGREATEREQUAL);
-	    min_exp = build_call_expr (isge_fn, 2, arg, min_exp);
-	  }
-	result = fold_build2 (BIT_AND_EXPR, integer_type_node,
-			      max_exp, min_exp);
-	return result;
-      }
-    default:
-      break;
-    }
-
-  return NULL_TREE;
-}
-
-/* Fold a call to __builtin_isnan(), __builtin_isinf, __builtin_finite.
+/* Fold a call to __builtin_isinf_sign.
    ARG is the argument for the call.  */
 
 static tree
-fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
+fold_builtin_classify (location_t loc, tree arg, int builtin_index)
 {
-  tree type = TREE_TYPE (TREE_TYPE (fndecl));
-
   if (!validate_arg (arg, REAL_TYPE))
     return NULL_TREE;
 
   switch (builtin_index)
     {
-    case BUILT_IN_ISINF:
-      if (!HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      return NULL_TREE;
-
     case BUILT_IN_ISINF_SIGN:
       {
 	/* isinf_sign(x) -> isinf(x) ? (signbit(x) ? -1 : 1) : 0 */
@@ -7832,106 +7667,11 @@  fold_builtin_classify (location_t loc, tree fndecl, tree arg, int builtin_index)
 	return tmp;
       }
 
-    case BUILT_IN_ISFINITE:
-      if (!HONOR_NANS (arg)
-	  && !HONOR_INFINITIES (arg))
-	return omit_one_operand_loc (loc, type, integer_one_node, arg);
-
-      return NULL_TREE;
-
-    case BUILT_IN_ISNAN:
-      if (!HONOR_NANS (arg))
-	return omit_one_operand_loc (loc, type, integer_zero_node, arg);
-
-      {
-	bool is_ibm_extended = MODE_COMPOSITE_P (TYPE_MODE (TREE_TYPE (arg)));
-	if (is_ibm_extended)
-	  {
-	    /* NaN and Inf are encoded in the high-order double value
-	       only.  The low-order value is not significant.  */
-	    arg = fold_build1_loc (loc, NOP_EXPR, double_type_node, arg);
-	  }
-      }
-      arg = builtin_save_expr (arg);
-      return fold_build2_loc (loc, UNORDERED_EXPR, type, arg, arg);
-
     default:
       gcc_unreachable ();
     }
 }
 
-/* Fold a call to __builtin_fpclassify(int, int, int, int, int, ...).
-   This builtin will generate code to return the appropriate floating
-   point classification depending on the value of the floating point
-   number passed in.  The possible return values must be supplied as
-   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
-   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipses is for exactly
-   one floating point argument which is "type generic".  */
-
-static tree
-fold_builtin_fpclassify (location_t loc, tree *args, int nargs)
-{
-  tree fp_nan, fp_infinite, fp_normal, fp_subnormal, fp_zero,
-    arg, type, res, tmp;
-  machine_mode mode;
-  REAL_VALUE_TYPE r;
-  char buf[128];
-
-  /* Verify the required arguments in the original call.  */
-  if (nargs != 6
-      || !validate_arg (args[0], INTEGER_TYPE)
-      || !validate_arg (args[1], INTEGER_TYPE)
-      || !validate_arg (args[2], INTEGER_TYPE)
-      || !validate_arg (args[3], INTEGER_TYPE)
-      || !validate_arg (args[4], INTEGER_TYPE)
-      || !validate_arg (args[5], REAL_TYPE))
-    return NULL_TREE;
-
-  fp_nan = args[0];
-  fp_infinite = args[1];
-  fp_normal = args[2];
-  fp_subnormal = args[3];
-  fp_zero = args[4];
-  arg = args[5];
-  type = TREE_TYPE (arg);
-  mode = TYPE_MODE (type);
-  arg = builtin_save_expr (fold_build1_loc (loc, ABS_EXPR, type, arg));
-
-  /* fpclassify(x) ->
-       isnan(x) ? FP_NAN :
-         (fabs(x) == Inf ? FP_INFINITE :
-	   (fabs(x) >= DBL_MIN ? FP_NORMAL :
-	     (x == 0 ? FP_ZERO : FP_SUBNORMAL))).  */
-
-  tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-		     build_real (type, dconst0));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node,
-		     tmp, fp_zero, fp_subnormal);
-
-  sprintf (buf, "0x1p%d", REAL_MODE_FORMAT (mode)->emin - 1);
-  real_from_string (&r, buf);
-  tmp = fold_build2_loc (loc, GE_EXPR, integer_type_node,
-		     arg, build_real (type, r));
-  res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, fp_normal, res);
-
-  if (HONOR_INFINITIES (mode))
-    {
-      real_inf (&r);
-      tmp = fold_build2_loc (loc, EQ_EXPR, integer_type_node, arg,
-			 build_real (type, r));
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp,
-			 fp_infinite, res);
-    }
-
-  if (HONOR_NANS (mode))
-    {
-      tmp = fold_build2_loc (loc, ORDERED_EXPR, integer_type_node, arg, arg);
-      res = fold_build3_loc (loc, COND_EXPR, integer_type_node, tmp, res, fp_nan);
-    }
-
-  return res;
-}
-
 /* Fold a call to an unordered comparison function such as
    __builtin_isgreater().  FNDECL is the FUNCTION_DECL for the function
    being called and ARG0 and ARG1 are the arguments for the call.
@@ -8232,40 +7972,8 @@  fold_builtin_1 (location_t loc, tree fndecl, tree arg0)
     case BUILT_IN_ISDIGIT:
       return fold_builtin_isdigit (loc, arg0);
 
-    CASE_FLT_FN (BUILT_IN_FINITE):
-    case BUILT_IN_FINITED32:
-    case BUILT_IN_FINITED64:
-    case BUILT_IN_FINITED128:
-    case BUILT_IN_ISFINITE:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISFINITE);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    CASE_FLT_FN (BUILT_IN_ISINF):
-    case BUILT_IN_ISINFD32:
-    case BUILT_IN_ISINFD64:
-    case BUILT_IN_ISINFD128:
-      {
-	tree ret = fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF);
-	if (ret)
-	  return ret;
-	return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-      }
-
-    case BUILT_IN_ISNORMAL:
-      return fold_builtin_interclass_mathfn (loc, fndecl, arg0);
-
     case BUILT_IN_ISINF_SIGN:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISINF_SIGN);
-
-    CASE_FLT_FN (BUILT_IN_ISNAN):
-    case BUILT_IN_ISNAND32:
-    case BUILT_IN_ISNAND64:
-    case BUILT_IN_ISNAND128:
-      return fold_builtin_classify (loc, fndecl, arg0, BUILT_IN_ISNAN);
+      return fold_builtin_classify (loc, arg0, BUILT_IN_ISINF_SIGN);
 
     case BUILT_IN_FREE:
       if (integer_zerop (arg0))
@@ -8465,7 +8173,6 @@  fold_builtin_n (location_t loc, tree fndecl, tree *args, int nargs, bool)
       ret = fold_builtin_3 (loc, fndecl, args[0], args[1], args[2]);
       break;
     default:
-      ret = fold_builtin_varargs (loc, fndecl, args, nargs);
       break;
     }
   if (ret)
@@ -9422,37 +9129,6 @@  fold_builtin_object_size (tree ptr, tree ost)
   return NULL_TREE;
 }
 
-/* Builtins with folding operations that operate on "..." arguments
-   need special handling; we need to store the arguments in a convenient
-   data structure before attempting any folding.  Fortunately there are
-   only a few builtins that fall into this category.  FNDECL is the
-   function, EXP is the CALL_EXPR for the call.  */
-
-static tree
-fold_builtin_varargs (location_t loc, tree fndecl, tree *args, int nargs)
-{
-  enum built_in_function fcode = DECL_FUNCTION_CODE (fndecl);
-  tree ret = NULL_TREE;
-
-  switch (fcode)
-    {
-    case BUILT_IN_FPCLASSIFY:
-      ret = fold_builtin_fpclassify (loc, args, nargs);
-      break;
-
-    default:
-      break;
-    }
-  if (ret)
-    {
-      ret = build1 (NOP_EXPR, TREE_TYPE (ret), ret);
-      SET_EXPR_LOCATION (ret, loc);
-      TREE_NO_WARNING (ret) = 1;
-      return ret;
-    }
-  return NULL_TREE;
-}
-
 /* Initialize format string characters in the target charset.  */
 
 bool
diff --git a/gcc/builtins.def b/gcc/builtins.def
index 219feebd3aebefbd079bf37cc801453cd1965e00..91aa6f37fa098777bc794bad56d8c561ab9fdc44 100644
--- a/gcc/builtins.def
+++ b/gcc/builtins.def
@@ -831,6 +831,8 @@  DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFL, "isinfl", BT_FN_INT_LONGDOUBLE, ATTR_CO
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD32, "isinfd32", BT_FN_INT_DFLOAT32, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD64, "isinfd64", BT_FN_INT_DFLOAT64, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISINFD128, "isinfd128", BT_FN_INT_DFLOAT128, ATTR_CONST_NOTHROW_LEAF_LIST)
+DEF_GCC_BUILTIN        (BUILT_IN_ISZERO, "iszero", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
+DEF_GCC_BUILTIN        (BUILT_IN_ISSUBNORMAL, "issubnormal", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_C99_C90RES_BUILTIN (BUILT_IN_ISNAN, "isnan", BT_FN_INT_VAR, ATTR_CONST_NOTHROW_TYPEGENERIC_LEAF)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANF, "isnanf", BT_FN_INT_FLOAT, ATTR_CONST_NOTHROW_LEAF_LIST)
 DEF_EXT_LIB_BUILTIN    (BUILT_IN_ISNANL, "isnanl", BT_FN_INT_LONGDOUBLE, ATTR_CONST_NOTHROW_LEAF_LIST)
diff --git a/gcc/doc/extend.texi b/gcc/doc/extend.texi
index 0669f7999beb078822e471352036d8f13517812d..c240bbe9a8fd595a0e7e2b41fb708efae1e5279a 100644
--- a/gcc/doc/extend.texi
+++ b/gcc/doc/extend.texi
@@ -10433,6 +10433,10 @@  in the Cilk Plus language manual which can be found at
 @findex __builtin_isgreater
 @findex __builtin_isgreaterequal
 @findex __builtin_isinf_sign
+@findex __builtin_isinf
+@findex __builtin_isnan
+@findex __builtin_iszero
+@findex __builtin_issubnormal
 @findex __builtin_isless
 @findex __builtin_islessequal
 @findex __builtin_islessgreater
@@ -11496,7 +11500,54 @@  constant values and they must appear in this order: @code{FP_NAN},
 @code{FP_INFINITE}, @code{FP_NORMAL}, @code{FP_SUBNORMAL} and
 @code{FP_ZERO}.  The ellipsis is for exactly one floating-point value
 to classify.  GCC treats the last argument as type-generic, which
-means it does not do default promotion from float to double.
+means it does not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnan (...)
+This built-in implements the C99 isnan functionality which checks if
+the given argument represents a NaN.  The return value of the
+function will either be a 0 (false) or a 1 (true).
+On most systems, when an IEEE 754 floating-point type is used this
+built-in does not produce a signal when a signaling NaN is used.
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isinf (...)
+This built-in implements the C99 isinf functionality which checks if
+the given argument represents an infinite number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_isnormal (...)
+This built-in implements the C99 isnormal functionality which checks if
+the given argument represents a normal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_iszero (...)
+This built-in implements the TS 18661-1:2014 iszero functionality which checks if
+the given argument represents the number 0 or -0.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
+@end deftypefn
+
+@deftypefn {Built-in Function} int __builtin_issubnormal (...)
+This built-in implements the TS 18661-1:2014 issubnormal functionality which checks if
+the given argument represents a subnormal number.  The return
+value of the function will either be a 0 (false) or a 1 (true).
+
+GCC treats the argument as type-generic, which means it does
+not do default promotion from @code{float} to @code{double}.
 @end deftypefn
 
 @deftypefn {Built-in Function} double __builtin_inf (void)
diff --git a/gcc/gimple-low.c b/gcc/gimple-low.c
index 64752b67b86b3d01df5f5661e4666df98b7b91d1..6ec9179193b07ac5426afa1fea5308b0bab1c069 100644
--- a/gcc/gimple-low.c
+++ b/gcc/gimple-low.c
@@ -30,6 +30,8 @@  along with GCC; see the file COPYING3.  If not see
 #include "calls.h"
 #include "gimple-iterator.h"
 #include "gimple-low.h"
+#include "stor-layout.h"
+#include "target.h"
 
 /* The differences between High GIMPLE and Low GIMPLE are the
    following:
@@ -72,6 +74,13 @@  static void lower_gimple_bind (gimple_stmt_iterator *, struct lower_data *);
 static void lower_try_catch (gimple_stmt_iterator *, struct lower_data *);
 static void lower_gimple_return (gimple_stmt_iterator *, struct lower_data *);
 static void lower_builtin_setjmp (gimple_stmt_iterator *);
+static void lower_builtin_fpclassify (gimple_stmt_iterator *);
+static void lower_builtin_isnan (gimple_stmt_iterator *);
+static void lower_builtin_isinfinite (gimple_stmt_iterator *);
+static void lower_builtin_isnormal (gimple_stmt_iterator *);
+static void lower_builtin_iszero (gimple_stmt_iterator *);
+static void lower_builtin_issubnormal (gimple_stmt_iterator *);
+static void lower_builtin_isfinite (gimple_stmt_iterator *);
 static void lower_builtin_posix_memalign (gimple_stmt_iterator *);
 
 
@@ -330,18 +339,69 @@  lower_stmt (gimple_stmt_iterator *gsi, struct lower_data *data)
 	if (decl
 	    && DECL_BUILT_IN_CLASS (decl) == BUILT_IN_NORMAL)
 	  {
-	    if (DECL_FUNCTION_CODE (decl) == BUILT_IN_SETJMP)
+	    switch (DECL_FUNCTION_CODE (decl))
 	      {
+	      case BUILT_IN_SETJMP:
 		lower_builtin_setjmp (gsi);
 		data->cannot_fallthru = false;
 		return;
-	      }
-	    else if (DECL_FUNCTION_CODE (decl) == BUILT_IN_POSIX_MEMALIGN
-		     && flag_tree_bit_ccp
-		     && gimple_builtin_call_types_compatible_p (stmt, decl))
-	      {
-		lower_builtin_posix_memalign (gsi);
+
+	      case BUILT_IN_POSIX_MEMALIGN:
+		if (flag_tree_bit_ccp
+		    && gimple_builtin_call_types_compatible_p (stmt, decl))
+		  {
+			lower_builtin_posix_memalign (gsi);
+			return;
+		  }
+		break;
+
+	      case BUILT_IN_FPCLASSIFY:
+		lower_builtin_fpclassify (gsi);
+		data->cannot_fallthru = false;
 		return;
+
+	      CASE_FLT_FN (BUILT_IN_ISINF):
+	      case BUILT_IN_ISINFD32:
+	      case BUILT_IN_ISINFD64:
+	      case BUILT_IN_ISINFD128:
+		lower_builtin_isinfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNAND32:
+	      case BUILT_IN_ISNAND64:
+	      case BUILT_IN_ISNAND128:
+	      CASE_FLT_FN (BUILT_IN_ISNAN):
+		lower_builtin_isnan (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISNORMAL:
+		lower_builtin_isnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISZERO:
+		lower_builtin_iszero (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      case BUILT_IN_ISSUBNORMAL:
+		lower_builtin_issubnormal (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      CASE_FLT_FN (BUILT_IN_FINITE):
+	      case BUILT_IN_FINITED32:
+	      case BUILT_IN_FINITED64:
+	      case BUILT_IN_FINITED128:
+	      case BUILT_IN_ISFINITE:
+		lower_builtin_isfinite (gsi);
+		data->cannot_fallthru = false;
+		return;
+
+	      default:
+		break;
 	      }
 	  }
 
@@ -822,6 +882,736 @@  lower_builtin_setjmp (gimple_stmt_iterator *gsi)
   gsi_remove (gsi, false);
 }
 
+static tree
+emit_tree_and_return_var (gimple_seq *seq, tree arg)
+{
+  if (TREE_CODE (arg) == SSA_NAME || VAR_P (arg))
+    return arg;
+
+  tree tmp = create_tmp_reg (TREE_TYPE(arg));
+  gassign *stm = gimple_build_assign(tmp, arg);
+  gimple_seq_add_stmt (seq, stm);
+  return tmp;
+}
+
+/* This function builds an if statement that ends up using explicit branches
+   instead of becoming a csel.  This function assumes you will fall through to
+   the next statements after this condition for the false branch.  */
+static void
+emit_tree_cond (gimple_seq *seq, tree result_variable, tree exit_label,
+		tree cond, tree true_branch)
+{
+    /* Create labels for fall through.  */
+  tree true_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree false_label = create_artificial_label (UNKNOWN_LOCATION);
+  gcond *stmt = gimple_build_cond_from_tree (cond, true_label, false_label);
+  gimple_seq_add_stmt (seq, stmt);
+
+  /* Build the true case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (true_label));
+  tree value = TREE_CONSTANT (true_branch)
+	     ? true_branch
+	     : emit_tree_and_return_var (seq, true_branch);
+  gimple_seq_add_stmt (seq, gimple_build_assign (result_variable, value));
+  gimple_seq_add_stmt (seq, gimple_build_goto (exit_label));
+
+  /* Build the false case.  */
+  gimple_seq_add_stmt (seq, gimple_build_label (false_label));
+}
+
+static tree
+get_num_as_int (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  /* Re-interpret the float as an unsigned integer type
+     with equal precision.  */
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+  tree conv_arg = fold_build1_loc (loc, VIEW_CONVERT_EXPR, int_arg_type, arg);
+  return emit_tree_and_return_var(seq, conv_arg);
+}
+
+  /* Check if the number that is being classified is close enough to IEEE 754
+     format to be able to go in the early exit code.  */
+static bool
+use_ieee_int_mode (tree arg)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  return (format->is_binary_ieee_compatible
+	  && FLOAT_WORDS_BIG_ENDIAN == WORDS_BIG_ENDIAN
+	  /* We explicitly disable quad float support on 32 bit systems.  */
+	  && !(UNITS_PER_WORD == 4 && type_width == 128)
+	  && targetm.scalar_mode_supported_p (mode));
+}
+
+  /* Perform standard IBM extended format fixups for FP functions.  */
+static bool
+perform_ibm_extended_fixups (tree *arg, machine_mode *mode,
+			     tree *type, location_t loc)
+{
+  bool is_ibm_extended = MODE_COMPOSITE_P (*mode);
+  if (is_ibm_extended)
+    {
+      /* NaN and Inf are encoded in the high-order double value
+	 only.  The low-order value is not significant.  */
+      *type = double_type_node;
+      *mode = DFmode;
+      *arg = fold_build1_loc (loc, NOP_EXPR, *type, *arg);
+    }
+
+  return is_ibm_extended;
+}
+
+static tree
+is_normal (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const tree bool_type = boolean_type_node;
+
+  /* Perform IBM extended format fixups if required.  */
+  bool is_ibm_extended = perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree orig_arg = arg;
+    if (TREE_CODE (arg) != SSA_NAME
+	&& (TREE_ADDRESSABLE (arg) != 0
+	  || (TREE_CODE (arg) != PARM_DECL
+	      && (!VAR_P (arg) || TREE_STATIC (arg)))))
+      orig_arg = save_expr (arg);
+
+    REAL_VALUE_TYPE rinf, rmin;
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    char buf[128];
+    real_inf (&rinf);
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmin, buf);
+
+    tree inf_exp = fold_build2_loc (loc, LT_EXPR, bool_type, arg_p,
+				    build_real (type, rinf));
+    tree min_exp = build_real (type, rmin);
+    if (is_ibm_extended)
+      {
+	/* Testing the high end of the range is done just using
+	   the high double, using the same test as isfinite().
+	   For the subnormal end of the range we first test the
+	   high double, then if its magnitude is equal to the
+	   limit of 0x1p-969, we test whether the low double is
+	   non-zero and opposite sign to the high double.  */
+	tree const islt_fn = builtin_decl_explicit (BUILT_IN_ISLESS);
+	tree const isgt_fn = builtin_decl_explicit (BUILT_IN_ISGREATER);
+	tree gt_min = build_call_expr (isgt_fn, 2, arg, min_exp);
+	tree eq_min = fold_build2 (EQ_EXPR, integer_type_node,
+				   arg, min_exp);
+	tree as_complex = build1 (VIEW_CONVERT_EXPR,
+				  complex_double_type_node, orig_arg);
+	tree hi_dbl = build1 (REALPART_EXPR, type, as_complex);
+	tree lo_dbl = build1 (IMAGPART_EXPR, type, as_complex);
+	tree zero = build_real (type, dconst0);
+	tree hilt = build_call_expr (islt_fn, 2, hi_dbl, zero);
+	tree lolt = build_call_expr (islt_fn, 2, lo_dbl, zero);
+	tree logt = build_call_expr (isgt_fn, 2, lo_dbl, zero);
+	tree ok_lo = fold_build1 (TRUTH_NOT_EXPR, integer_type_node,
+				  fold_build3 (COND_EXPR,
+					       integer_type_node,
+					       hilt, logt, lolt));
+	eq_min = fold_build2 (TRUTH_ANDIF_EXPR, integer_type_node,
+			      eq_min, ok_lo);
+	min_exp = fold_build2 (TRUTH_ORIF_EXPR, integer_type_node,
+			       gt_min, eq_min);
+      }
+      else
+      {
+	min_exp = fold_build2_loc (loc, GE_EXPR, bool_type, arg_p,
+				   min_exp);
+      }
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, min_exp),
+			 emit_tree_and_return_var (seq, inf_exp));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const tree int_type = unsigned_type_node;
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  const int exp_mask  = (1 << exp_bits) - 1;
+
+  /* Get the number reinterpreted as an integer.  */
+  tree int_arg = get_num_as_int (seq, arg, loc);
+
+  /* Extract exp bits from the float, where we expect the exponent to be.
+     We create a new type because BIT_FIELD_REF does not allow you to
+     extract less bits than the precision of the storage variable.  */
+  tree exp_tmp
+    = fold_build3_loc (loc, BIT_FIELD_REF,
+		       build_nonstandard_integer_type (exp_bits, true),
+		       int_arg,
+		       build_int_cstu (int_type, exp_bits),
+		       build_int_cstu (int_type, format->p - 1));
+  tree exp_bitfield = emit_tree_and_return_var (seq, exp_tmp);
+
+  /* Re-interpret the extracted exponent bits as a 32 bit int.
+     This allows us to continue doing operations as int_type.  */
+  tree exp
+    = emit_tree_and_return_var(seq,fold_build1_loc (loc, NOP_EXPR, int_type,
+						    exp_bitfield));
+
+  /* exp_mask & ~1.  */
+  tree mask_check
+     = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+			build_int_cstu (int_type, exp_mask),
+			fold_build1_loc (loc, BIT_NOT_EXPR, int_type,
+					 build_int_cstu (int_type, 1)));
+
+  /* (exp + 1) & mask_check.
+     Check to see if exp is not all 0 or all 1.  */
+  tree exp_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, int_type,
+		       emit_tree_and_return_var (seq,
+				fold_build2_loc (loc, PLUS_EXPR, int_type, exp,
+						 build_int_cstu (int_type, 1))),
+		       mask_check);
+
+  tree res = fold_build2_loc (loc, NE_EXPR, boolean_type_node,
+			      build_int_cstu (int_type, 0),
+			      emit_tree_and_return_var(seq, exp_check));
+
+  return emit_tree_and_return_var (seq, res);
+}
+
+static tree
+is_zero (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+  machine_mode mode = TYPE_MODE (type);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree res = fold_build2_loc (loc, EQ_EXPR, boolean_type_node, arg,
+				build_real (type, dconst0));
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* num << 1 == 0.
+     This checks to see if the number is zero.  */
+  tree zero_check
+    = fold_build2_loc (loc, EQ_EXPR, boolean_type_node,
+		       build_int_cstu (int_arg_type, 0),
+		       emit_tree_and_return_var (seq, int_arg));
+
+  return emit_tree_and_return_var (seq, zero_check);
+}
+
+static tree
+is_subnormal (gimple_seq *seq, tree arg, location_t loc)
+{
+  const tree bool_type = boolean_type_node;
+
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    char buf[128];
+    get_min_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&r, buf);
+    tree subnorm = fold_build2_loc (loc, LT_EXPR, bool_type,
+				    arg_p, build_real (type, r));
+
+    tree zero = fold_build2_loc (loc, GT_EXPR, bool_type, arg_p,
+				 build_real (type, dconst0));
+
+    tree res
+      = fold_build2_loc (loc, BIT_AND_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, subnorm),
+			 emit_tree_and_return_var (seq, zero));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check for a zero exponent and non-zero mantissa.
+     This can be done with two comparisons by first apply a
+     removing the sign bit and checking if the value is larger
+     than the mantissa mask.  */
+
+  /* This creates a mask to be used to check the mantissa value in the shifted
+     integer representation of the fpnum.  */
+  tree significant_bit = build_int_cstu (int_arg_type, format->p - 1);
+  tree mantissa_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					significant_bit),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Check if exponent is zero and mantissa is not.  */
+  tree subnorm_cond
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LE_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 mantissa_mask));
+
+  tree zero_cond
+    = fold_build2_loc (loc, GT_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, int_arg),
+		       build_int_cstu (int_arg_type, 0));
+
+  tree subnorm_check
+    = fold_build2_loc (loc, BIT_AND_EXPR, boolean_type_node,
+		       emit_tree_and_return_var (seq, subnorm_cond),
+		       emit_tree_and_return_var (seq, zero_cond));
+
+  return emit_tree_and_return_var (seq, subnorm_check);
+}
+
+static tree
+is_infinity (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_INFINITIES (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE r;
+    real_inf (&r);
+    tree res = fold_build2_loc (loc, EQ_EXPR, bool_type, arg_p,
+				build_real (type, r));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0.  */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, EQ_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+static tree
+is_finite (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (arg) && !HONOR_INFINITIES (arg))
+  {
+    return build_int_cst (bool_type, true);
+  }
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+  {
+    tree arg_p
+      = emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							arg));
+    REAL_VALUE_TYPE rmax;
+    char buf[128];
+    get_max_float (REAL_MODE_FORMAT (mode), buf, sizeof (buf));
+    real_from_string (&rmax, buf);
+
+    tree res = fold_build2_loc (loc, LE_EXPR, bool_type, arg_p,
+				build_real (type, rmax));
+
+    return emit_tree_and_return_var (seq, res);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* This creates a mask to be used to check the exp value in the shifted
+     integer representation of the fpnum.  */
+  const int exp_bits  = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  gcc_assert (format->p > 0);
+
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign. */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask checks to see if the exp has all bits set and mantissa no
+     bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is 0. */
+  tree inf_check
+    = emit_tree_and_return_var(seq,
+	fold_build2_loc (loc, LT_EXPR, bool_type,
+			 emit_tree_and_return_var(seq, int_arg),
+			 inf_mask));
+
+  return emit_tree_and_return_var (seq, inf_check);
+}
+
+/* Determines if the given number is a NaN value.
+   This function is the last in the chain and only has to
+   check if its preconditions are true.  */
+static tree
+is_nan (gimple_seq *seq, tree arg, location_t loc)
+{
+  tree type = TREE_TYPE (arg);
+
+  machine_mode mode = TYPE_MODE (type);
+  const tree bool_type = boolean_type_node;
+
+  if (!HONOR_NANS (mode))
+  {
+    return build_int_cst (bool_type, false);
+  }
+
+  const real_format *format = REAL_MODE_FORMAT (mode);
+
+  /* Perform IBM extended format fixups if required.  */
+  perform_ibm_extended_fixups (&arg, &mode, &type, loc);
+
+  /* If not using the optimized route then exit early.  */
+  if (!use_ieee_int_mode (arg))
+    {
+      tree arg_p
+	= emit_tree_and_return_var (seq, fold_build1_loc (loc, ABS_EXPR, type,
+							  arg));
+      tree eq_check
+	= fold_build2_loc (loc, ORDERED_EXPR, bool_type, arg_p, arg_p);
+
+      tree res
+	= fold_build1_loc (loc, BIT_NOT_EXPR, bool_type,
+			   emit_tree_and_return_var (seq, eq_check));
+
+      return emit_tree_and_return_var (seq, res);
+    }
+
+  const HOST_WIDE_INT type_width = TYPE_PRECISION (type);
+  tree int_arg_type = build_nonstandard_integer_type (type_width, true);
+
+  /* Create a mask used to check the exponent field of the shifted integer
+     representation of the floating-point number.  */
+  const int exp_bits = (GET_MODE_SIZE (mode) * BITS_PER_UNIT) - format->p;
+  tree significant_bit = build_int_cstu (int_arg_type, format->p);
+  tree exp_mask
+    = fold_build2_loc (loc, MINUS_EXPR, int_arg_type,
+		       fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+					build_int_cstu (int_arg_type, 2),
+					build_int_cstu (int_arg_type,
+							exp_bits - 1)),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* Get the number reinterpreted as an integer.
+     Shift left to remove the sign.  */
+  tree int_arg
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       get_num_as_int (seq, arg, loc),
+		       build_int_cstu (int_arg_type, 1));
+
+  /* This mask is the bit pattern of an infinity with the sign shifted out:
+     an exponent with all bits set and a mantissa with no bits set.  */
+  tree inf_mask
+    = fold_build2_loc (loc, LSHIFT_EXPR, int_arg_type,
+		       exp_mask, significant_bit);
+
+  /* Check if exponent has all bits set and mantissa is not 0.  */
+  tree nan_check
+    = emit_tree_and_return_var (seq,
+	fold_build2_loc (loc, GT_EXPR, bool_type,
+			 emit_tree_and_return_var (seq, int_arg),
+			 inf_mask));
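+  /* E.g. (assuming IEEE double) the quiet NaN 0x7ff8000000000000 becomes
+     0xfff0000000000000 after the sign shift, which is above the infinity
+     pattern 0xffe0000000000000, so the GT_EXPR holds.  */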
+
+  return emit_tree_and_return_var (seq, nan_check);
+}
+
+/* Validate that argument INDEX of CALL is compatible with the tree code
+   CODE representing a type.  */
+static bool
+gimple_validate_arg (gimple *call, int index, enum tree_code code)
+{
+  const tree arg = gimple_call_arg (call, index);
+  if (!arg)
+    return false;
+  else if (code == POINTER_TYPE)
+    return POINTER_TYPE_P (TREE_TYPE (arg));
+  else if (code == INTEGER_TYPE)
+    return INTEGRAL_TYPE_P (TREE_TYPE (arg));
+  return code == TREE_CODE (TREE_TYPE (arg));
+}
+
+/* Lowers calls to __builtin_fpclassify to
+   fpclassify (x) ->
+     isnormal(x) ? FP_NORMAL :
+       iszero (x) ? FP_ZERO :
+	 isnan (x) ? FP_NAN :
+	   isinfinite (x) ? FP_INFINITE :
+	     FP_SUBNORMAL.
+
+   The code may use integer arithmetic if it decides that the produced
+   assembly would be faster.  This can only be done for numbers whose
+   format is similar to IEEE-754.
+
+   This builtin will generate code to return the appropriate floating
+   point classification depending on the value of the floating point
+   number passed in.  The possible return values must be supplied as
+   int arguments to the call in the following order: FP_NAN, FP_INFINITE,
+   FP_NORMAL, FP_SUBNORMAL and FP_ZERO.  The ellipsis stands for exactly
+   one floating point argument which is "type generic".  */
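+/* As an illustrative sketch (assuming IEEE double), the integer fast path
+   effectively orders the classes by the representation with the sign
+   shifted out, i.e. for i = bits (x) << 1:
+     i == 0                   -> FP_ZERO
+     i <  0x0020000000000000  -> FP_SUBNORMAL
+     i <  0xffe0000000000000  -> FP_NORMAL
+     i == 0xffe0000000000000  -> FP_INFINITE
+     otherwise                -> FP_NAN  */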
+static void
+lower_builtin_fpclassify (gimple_stmt_iterator *gsi)
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 6
+      || !gimple_validate_arg (call, 0, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 1, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 2, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 3, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 4, INTEGER_TYPE)
+      || !gimple_validate_arg (call, 5, REAL_TYPE))
+    return;
+
+  /* Collect the arguments from the call.  */
+  tree fp_nan = gimple_call_arg (call, 0);
+  tree fp_infinite = gimple_call_arg (call, 1);
+  tree fp_normal = gimple_call_arg (call, 2);
+  tree fp_subnormal = gimple_call_arg (call, 3);
+  tree fp_zero = gimple_call_arg (call, 4);
+  tree arg = gimple_call_arg (call, 5);
+
+  gimple_seq body = NULL;
+
+  /* Create a label to jump to on exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  /* The builtin is pure; if its result is unused just drop the call.  */
+  if (!orig_dest)
+    {
+      gsi_remove (gsi, false);
+      return;
+    }
+  if (TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (TREE_TYPE (orig_dest));
+
+  emit_tree_cond (&body, dest, done_label,
+		  is_normal (&body, arg, loc), fp_normal);
+  emit_tree_cond (&body, dest, done_label,
+		  is_zero (&body, arg, loc), fp_zero);
+  emit_tree_cond (&body, dest, done_label,
+		  is_nan (&body, arg, loc), fp_nan);
+  emit_tree_cond (&body, dest, done_label,
+		  is_infinity (&body, arg, loc), fp_infinite);
+
+  /* And finally, emit the default case if nothing else matches.
+     This replaces the call to is_subnormal.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, fp_subnormal));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to __builtin_fpclassify.  */
+  gsi_remove (gsi, false);
+}
+
+/* Lower a call to a unary floating-point classification builtin by
+   emitting the condition produced by FNDECL and branching on it.  Used
+   for isnan, isinfinite, isnormal, iszero, issubnormal and isfinite.  */
+static void
+gen_call_fp_builtin (gimple_stmt_iterator *gsi,
+		     tree (*fndecl)(gimple_seq *, tree, location_t))
+{
+  gimple *call = gsi_stmt (*gsi);
+  location_t loc = gimple_location (call);
+
+  /* Verify the required arguments in the original call.  */
+  if (gimple_call_num_args (call) != 1
+      || !gimple_validate_arg (call, 0, REAL_TYPE))
+    return;
+
+  tree arg = gimple_call_arg (call, 0);
+  gimple_seq body = NULL;
+
+  /* Create a label to jump to on exit.  */
+  tree done_label = create_artificial_label (UNKNOWN_LOCATION);
+  tree dest;
+  tree orig_dest = dest = gimple_call_lhs (call);
+  /* The builtin is pure; if its result is unused just drop the call.  */
+  if (!orig_dest)
+    {
+      gsi_remove (gsi, false);
+      return;
+    }
+  tree type = TREE_TYPE (orig_dest);
+  if (TREE_CODE (orig_dest) == SSA_NAME)
+    dest = create_tmp_reg (type);
+
+  tree t_true = build_int_cst (type, true);
+  tree t_false = build_int_cst (type, false);
+
+  emit_tree_cond (&body, dest, done_label,
+		  fndecl (&body, arg, loc), t_true);
+
+  /* And finally, emit the default case if nothing else matched:
+     the result is false.  */
+  gimple_seq_add_stmt (&body, gimple_build_assign (dest, t_false));
+  gimple_seq_add_stmt (&body, gimple_build_label (done_label));
+
+  /* Build orig_dest = dest if necessary.  */
+  if (dest != orig_dest)
+    gimple_seq_add_stmt (&body, gimple_build_assign (orig_dest, dest));
+
+  gsi_insert_seq_before (gsi, body, GSI_SAME_STMT);
+
+  /* Remove the call to the builtin.  */
+  gsi_remove (gsi, false);
+}
+
+static void
+lower_builtin_isnan (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_nan);
+}
+
+static void
+lower_builtin_isinfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_infinity);
+}
+
+static void
+lower_builtin_isnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_normal);
+}
+
+static void
+lower_builtin_iszero (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_zero);
+}
+
+static void
+lower_builtin_issubnormal (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_subnormal);
+}
+
+static void
+lower_builtin_isfinite (gimple_stmt_iterator *gsi)
+{
+  gen_call_fp_builtin (gsi, &is_finite);
+}
+
 /* Lower calls to posix_memalign to
      res = posix_memalign (ptr, align, size);
      if (res == 0)
diff --git a/gcc/real.h b/gcc/real.h
index 59af580e78f2637be84f71b98b45ec6611053222..4b1b92138e07f43a175a2cbee4d952afad5898f7 100644
--- a/gcc/real.h
+++ b/gcc/real.h
@@ -161,6 +161,19 @@  struct real_format
   bool has_signed_zero;
   bool qnan_msb_set;
   bool canonical_nan_lsbs_set;
+
+  /* This flag indicates whether the format is suitable for the optimized
+     code paths for the __builtin_fpclassify function and friends.  For
+     this, the format must be a base 2 representation with the sign bit as
+     the most-significant bit followed by at most 32 exponent bits
+     followed by the mantissa bits.  It must be possible to interpret the
+     bits of the floating-point representation as an integer.  NaNs and
+     INFs (if available) must be represented by the same schema used by
+     IEEE 754.  (NaNs must be represented by an exponent with all bits 1,
+     any mantissa except all bits 0 and any sign bit.  +INF and -INF must be
+     represented by an exponent with all bits 1, a mantissa with all bits 0 and
+     a sign bit of 0 and 1 respectively.)  */
+  bool is_binary_ieee_compatible;
   const char *name;
 };
 
@@ -511,6 +524,11 @@  extern bool real_isinteger (const REAL_VALUE_TYPE *, HOST_WIDE_INT *);
    float string.  BUF must be large enough to contain the result.  */
 extern void get_max_float (const struct real_format *, char *, size_t);
 
+/* Write into BUF the smallest positive normalized number, b**(emin - 1),
+   for the given format.  BUF must be large enough to contain the
+   result.  */
+extern void get_min_float (const struct real_format *, char *, size_t);
+
 #ifndef GENERATOR_FILE
 /* real related routines.  */
 extern wide_int real_to_integer (const REAL_VALUE_TYPE *, bool *, int);
diff --git a/gcc/real.c b/gcc/real.c
index 66e88e2ad366f7848609d157074c80420d778bcf..20c907a6d543c73ba62aa9a8ddf6973d82de7832 100644
--- a/gcc/real.c
+++ b/gcc/real.c
@@ -3052,6 +3052,7 @@  const struct real_format ieee_single_format =
     true,
     true,
     false,
+    true,
     "ieee_single"
   };
 
@@ -3075,6 +3076,7 @@  const struct real_format mips_single_format =
     true,
     false,
     true,
+    true,
     "mips_single"
   };
 
@@ -3098,6 +3100,7 @@  const struct real_format motorola_single_format =
     true,
     true,
     true,
+    true,
     "motorola_single"
   };
 
@@ -3132,6 +3135,7 @@  const struct real_format spu_single_format =
     true,
     false,
     false,
+    false,
     "spu_single"
   };
 
@@ -3343,6 +3347,7 @@  const struct real_format ieee_double_format =
     true,
     true,
     false,
+    true,
     "ieee_double"
   };
 
@@ -3366,6 +3371,7 @@  const struct real_format mips_double_format =
     true,
     false,
     true,
+    true,
     "mips_double"
   };
 
@@ -3389,6 +3395,7 @@  const struct real_format motorola_double_format =
     true,
     true,
     true,
+    true,
     "motorola_double"
   };
 
@@ -3735,6 +3742,7 @@  const struct real_format ieee_extended_motorola_format =
     true,
     true,
     true,
+    false,
     "ieee_extended_motorola"
   };
 
@@ -3758,6 +3766,7 @@  const struct real_format ieee_extended_intel_96_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96"
   };
 
@@ -3781,6 +3790,7 @@  const struct real_format ieee_extended_intel_128_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_128"
   };
 
@@ -3806,6 +3816,7 @@  const struct real_format ieee_extended_intel_96_round_53_format =
     true,
     true,
     false,
+    false,
     "ieee_extended_intel_96_round_53"
   };
 
@@ -3896,6 +3907,7 @@  const struct real_format ibm_extended_format =
     true,
     true,
     false,
+    false,
     "ibm_extended"
   };
 
@@ -3919,6 +3931,7 @@  const struct real_format mips_extended_format =
     true,
     false,
     true,
+    false,
     "mips_extended"
   };
 
@@ -4184,6 +4197,7 @@  const struct real_format ieee_quad_format =
     true,
     true,
     false,
+    true,
     "ieee_quad"
   };
 
@@ -4207,6 +4221,7 @@  const struct real_format mips_quad_format =
     true,
     false,
     true,
+    true,
     "mips_quad"
   };
 
@@ -4509,6 +4524,7 @@  const struct real_format vax_f_format =
     false,
     false,
     false,
+    false,
     "vax_f"
   };
 
@@ -4532,6 +4548,7 @@  const struct real_format vax_d_format =
     false,
     false,
     false,
+    false,
     "vax_d"
   };
 
@@ -4555,6 +4572,7 @@  const struct real_format vax_g_format =
     false,
     false,
     false,
+    false,
     "vax_g"
   };
 
@@ -4633,6 +4651,7 @@  const struct real_format decimal_single_format =
     true,
     true,
     false,
+    false,
     "decimal_single"
   };
 
@@ -4657,6 +4676,7 @@  const struct real_format decimal_double_format =
     true,
     true,
     false,
+    false,
     "decimal_double"
   };
 
@@ -4681,6 +4701,7 @@  const struct real_format decimal_quad_format =
     true,
     true,
     false,
+    false,
     "decimal_quad"
   };
 
@@ -4820,6 +4841,7 @@  const struct real_format ieee_half_format =
     true,
     true,
     false,
+    true,
     "ieee_half"
   };
 
@@ -4846,6 +4868,7 @@  const struct real_format arm_half_format =
     true,
     false,
     false,
+    false,
     "arm_half"
   };
 
@@ -4893,6 +4916,7 @@  const struct real_format real_internal_format =
     true,
     true,
     false,
+    false,
     "real_internal"
   };
 
@@ -5080,6 +5104,16 @@  get_max_float (const struct real_format *fmt, char *buf, size_t len)
   gcc_assert (strlen (buf) < len);
 }
 
+/* Write into BUF the smallest positive normalized floating-point number,
+   b**(emin - 1).  BUF must be large enough to contain the result.  */
+void
+get_min_float (const struct real_format *fmt, char *buf, size_t len)
+{
+  sprintf (buf, "0x1p%d", fmt->emin - 1);
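+  /* For ieee_double_format, for instance, emin is -1021, so this yields
+     "0x1p-1022", the smallest positive normal double.  */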
+  gcc_assert (strlen (buf) < len);
+}
+
 /* True if mode M has a NaN representation and
    the treatment of NaN operands is important.  */
 
diff --git a/gcc/testsuite/gcc.dg/builtins-43.c b/gcc/testsuite/gcc.dg/builtins-43.c
index f7c318edf084104b9b820e18e631ed61e760569e..5d41c28aef8619f06658f45846ae15dd8b4987ed 100644
--- a/gcc/testsuite/gcc.dg/builtins-43.c
+++ b/gcc/testsuite/gcc.dg/builtins-43.c
@@ -1,5 +1,5 @@ 
 /* { dg-do compile } */
-/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-gimple -fdump-tree-optimized" } */
+/* { dg-options "-O1 -fno-trapping-math -fno-finite-math-only -fdump-tree-lower -fdump-tree-optimized" } */
   
 extern void f(int);
 extern void link_error ();
@@ -51,7 +51,7 @@  main ()
 
 
 /* Check that all instances of __builtin_isnan were folded.  */
-/* { dg-final { scan-tree-dump-times "isnan" 0 "gimple" } } */
+/* { dg-final { scan-tree-dump-times "isnan" 0 "lower" } } */
 
 /* Check that all instances of link_error were subject to DCE.  */
 /* { dg-final { scan-tree-dump-times "link_error" 0 "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/fold-notunord.c b/gcc/testsuite/gcc.dg/fold-notunord.c
deleted file mode 100644
index ca345154ac204cb5f380855828421b7f88d49052..0000000000000000000000000000000000000000
--- a/gcc/testsuite/gcc.dg/fold-notunord.c
+++ /dev/null
@@ -1,9 +0,0 @@ 
-/* { dg-do compile } */
-/* { dg-options "-O -ftrapping-math -fdump-tree-optimized" } */
-
-int f (double d)
-{
-  return !__builtin_isnan (d);
-}
-
-/* { dg-final { scan-tree-dump " ord " "optimized" } } */
diff --git a/gcc/testsuite/gcc.dg/pr28796-1.c b/gcc/testsuite/gcc.dg/pr28796-1.c
index 077118a298878441e812410f3a6bf3707fb1d839..a57b4e350af1bc45344106fdeab4b32ef87f233f 100644
--- a/gcc/testsuite/gcc.dg/pr28796-1.c
+++ b/gcc/testsuite/gcc.dg/pr28796-1.c
@@ -1,5 +1,5 @@ 
 /* { dg-do link } */
-/* { dg-options "-ffinite-math-only" } */
+/* { dg-options "-ffinite-math-only -O2" } */
 
 extern void link_error(void);
 
diff --git a/gcc/testsuite/gcc.dg/torture/float128-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..ec9d3ad41e24280978707888590eec1b562207f0
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float128 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128_runtime } */
+
+#define WIDTH 128
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0ede861716750453a86c9abc703ad0b2826674c6
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float128x-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float128x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float128x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float128x_runtime } */
+
+#define WIDTH 128
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float16-tg-4.c b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..007c4c224ea95537c31185d0aff964d1975f2190
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float16-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float16 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float16 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float16_runtime } */
+
+#define WIDTH 16
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..c7f8353da2cffdfc2c2f58f5da3d5363b95e6f91
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float32 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32_runtime } */
+
+#define WIDTH 32
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..0d7a592920aca112d5f6409e565d4582c253c977
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float32x-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float32x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float32x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float32x_runtime } */
+
+#define WIDTH 32
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..bb25a22a68e60ce2717ab3583bbec595dd563c35
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float64 type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64 } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64_runtime } */
+
+#define WIDTH 64
+#define EXT 0
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
new file mode 100644
index 0000000000000000000000000000000000000000..82305d916b8bd75131e2c647fd37f74cadbc8f1d
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/float64x-tg-4.c
@@ -0,0 +1,11 @@ 
+/* Test _Float64x type-generic built-in functions: __builtin_iszero,
+   __builtin_issubnormal.  */
+/* { dg-do run } */
+/* { dg-options "" } */
+/* { dg-add-options float64x } */
+/* { dg-add-options ieee } */
+/* { dg-require-effective-target float64x_runtime } */
+
+#define WIDTH 64
+#define EXT 1
+#include "floatn-tg-4.h"
diff --git a/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
new file mode 100644
index 0000000000000000000000000000000000000000..aa3448c090cf797a1525b1045ffebeed79cace40
--- /dev/null
+++ b/gcc/testsuite/gcc.dg/torture/floatn-tg-4.h
@@ -0,0 +1,99 @@ 
+/* Tests for _FloatN / _FloatNx types: compile and execution tests for
+   type-generic built-in functions: __builtin_iszero, __builtin_issubnormal.
+   Before including this file, define WIDTH as the value N; define EXT to 1
+   for _FloatNx and 0 for _FloatN.  */
+
+#define __STDC_WANT_IEC_60559_TYPES_EXT__
+#include <float.h>
+
+#define CONCATX(X, Y) X ## Y
+#define CONCAT(X, Y) CONCATX (X, Y)
+#define CONCAT3(X, Y, Z) CONCAT (CONCAT (X, Y), Z)
+#define CONCAT4(W, X, Y, Z) CONCAT (CONCAT (CONCAT (W, X), Y), Z)
+
+#if EXT
+# define TYPE CONCAT3 (_Float, WIDTH, x)
+# define CST(C) CONCAT4 (C, f, WIDTH, x)
+# define MAX CONCAT3 (FLT, WIDTH, X_MAX)
+# define MIN CONCAT3 (FLT, WIDTH, X_MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, X_TRUE_MIN)
+#else
+# define TYPE CONCAT (_Float, WIDTH)
+# define CST(C) CONCAT3 (C, f, WIDTH)
+# define MAX CONCAT3 (FLT, WIDTH, _MAX)
+# define MIN CONCAT3 (FLT, WIDTH, _MIN)
+# define TRUE_MIN CONCAT3 (FLT, WIDTH, _TRUE_MIN)
+#endif
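+
+/* For instance, with WIDTH 32 and EXT 1 the macros above expand to
+   TYPE _Float32x, CST (1.0) 1.0f32x and MAX FLT32X_MAX.  */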
+
+extern void exit (int);
+extern void abort (void);
+
+volatile TYPE inf = __builtin_inf (), nanval = __builtin_nan ("");
+volatile TYPE neginf = -__builtin_inf (), negnanval = -__builtin_nan ("");
+volatile TYPE zero = CST (0.0), negzero = -CST (0.0), one = CST (1.0);
+volatile TYPE max = MAX, negmax = -MAX, min = MIN, negmin = -MIN;
+volatile TYPE true_min = TRUE_MIN, negtrue_min = -TRUE_MIN;
+volatile TYPE sub_norm = MIN / 2.0;
+
+int
+main (void)
+{
+  if (__builtin_iszero (inf) == 1)
+    abort ();
+  if (__builtin_iszero (nanval) == 1)
+    abort ();
+  if (__builtin_iszero (neginf) == 1)
+    abort ();
+  if (__builtin_iszero (negnanval) == 1)
+    abort ();
+  if (__builtin_iszero (zero) != 1)
+    abort ();
+  if (__builtin_iszero (negzero) != 1)
+    abort ();
+  if (__builtin_iszero (one) == 1)
+    abort ();
+  if (__builtin_iszero (max) == 1)
+    abort ();
+  if (__builtin_iszero (negmax) == 1)
+    abort ();
+  if (__builtin_iszero (min) == 1)
+    abort ();
+  if (__builtin_iszero (negmin) == 1)
+    abort ();
+  if (__builtin_iszero (true_min) == 1)
+    abort ();
+  if (__builtin_iszero (negtrue_min) == 1)
+    abort ();
+  if (__builtin_iszero (sub_norm) == 1)
+    abort ();
+
+  if (__builtin_issubnormal (inf) == 1)
+    abort ();
+  if (__builtin_issubnormal (nanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (neginf) == 1)
+    abort ();
+  if (__builtin_issubnormal (negnanval) == 1)
+    abort ();
+  if (__builtin_issubnormal (zero) == 1)
+    abort ();
+  if (__builtin_issubnormal (negzero) == 1)
+    abort ();
+  if (__builtin_issubnormal (one) == 1)
+    abort ();
+  if (__builtin_issubnormal (max) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmax) == 1)
+    abort ();
+  if (__builtin_issubnormal (min) == 1)
+    abort ();
+  if (__builtin_issubnormal (negmin) == 1)
+    abort ();
+  if (__builtin_issubnormal (true_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (negtrue_min) != 1)
+    abort ();
+  if (__builtin_issubnormal (sub_norm) != 1)
+    abort ();
+  exit (0);
+}
diff --git a/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
new file mode 100644
index 0000000000000000000000000000000000000000..84a73a6483780dac2347e72fa7d139545d2087eb
--- /dev/null
+++ b/gcc/testsuite/gcc.target/aarch64/builtin-fpclassify.c
@@ -0,0 +1,22 @@ 
+/* This file checks the code generation for the new __builtin_fpclassify.
+   Because checking the exact assembly isn't very useful, we just check for
+   the presence of certain instructions and the omission of others.  */
+/* { dg-options "-O2" } */
+/* { dg-do compile } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fabs\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmp\[ \t\]?" } } */
+/* { dg-final { scan-assembler-not "\[ \t\]?fcmpe\[ \t\]?" } } */
+/* { dg-final { scan-assembler "\[ \t\]?sbfx\[ \t\]?" } } */
+
+#include <stdio.h>
+#include <math.h>
+
+/*
+ fp_nan = args[0];
+ fp_infinite = args[1];
+ fp_normal = args[2];
+ fp_subnormal = args[3];
+ fp_zero = args[4];
+*/
+
+int f(double x) { return __builtin_fpclassify(0, 1, 4, 3, 2, x); }