Patchwork [2/6] Andes nds32: machine description of nds32 porting (1).

Submitter Chung-Ju Wu
Date Sept. 8, 2013, 4:13 p.m.
Message ID <522CA211.60200@gmail.com>
Permalink /patch/273446/
State New

Comments

Chung-Ju Wu - Sept. 8, 2013, 4:13 p.m.
On 7/25/13 5:35 PM, Chung-Ju Wu wrote:
> On 7/25/13 12:08 AM, Joseph S. Myers wrote:
>> On Wed, 24 Jul 2013, Chung-Ju Wu wrote:
> Attached is another revised patch and its summary.
> The modifications described above are listed as items 11 and 12:
> 
>    1. Remove fancy comments formatting.
>    2. Sort static functions and variables and put the target hook
>       structure initialization at the end of the file.
>    3. Avoid all comments referring to section numbers.
>    4. Use snprintf () instead of sprintf ().
>    5. Use sorry () instead of error () for unimplemented functionalities.
>    6. Do not check command option errors in nds32.h.
>       Use Negative() in nds32.opt instead.
>    7. Remove %{v} from LINK_SPEC.
>    8. Add OPTION_DEFAULT_SPECS for -march option.
>    9. Enable/disable corresponding flags internally for -march= option.
>   10. Add MULTILIB_DEFAULTS for multilib endianness.
>   11. Use form-feeds (Control-L character) to separate logical sections.
>   12. The nds32_intrinsic_register_names[] array itself can be made const.
> 

It has been a while since the last v2 patch.
I have created a new v3 patch to fix some typos and indentation issues.

Is it OK to apply to trunk?


Best regards,
jasonwucj
Richard Sandiford - Sept. 13, 2013, 3:33 p.m.
Chung-Ju Wu <jasonwucj@gmail.com> writes:
> It has been a while since the last v2 patch.
> I have created a new v3 patch to fix some typos and indentation issues.

I had a read through out of curiosity, and FWIW, it looks very clean and
well-commented to me.  See below for a few questions and comments.
This isn't an official review though.

> +  /* Calculate the bytes of saving callee-saved registers on stack.  */
> +  cfun->machine->callee_saved_regs_size = 0;
> +  cfun->machine->callee_saved_regs_first_regno = SP_REGNUM;
> +  cfun->machine->callee_saved_regs_last_regno  = SP_REGNUM;
> +  /* Currently, there is no need to check $r28~$r31
> +     because we will save them in another way.  */
> +  for (r = 0; r < 28; r++)
> +    {
> +      if (NDS32_REQUIRED_CALLEE_SAVED_P (r))
> +	{
> +	  /* Each register is 4 bytes.  */
> +	  cfun->machine->callee_saved_regs_size += 4;
> +
> +	  /* Mark the first required callee-saved register
> +	     (only need to set it once).
> +	     If first regno == SP_REGNUM, we can tell that
> +	     it is the first time to be here.  */
> +	  if (cfun->machine->callee_saved_regs_first_regno == SP_REGNUM)
> +	    cfun->machine->callee_saved_regs_first_regno = r;
> +	  /* Mark the last required callee-saved register.  */
> +	  cfun->machine->callee_saved_regs_last_regno = r;
> +	}
> +    }
> +
> +  /* Note:
> +     Since our smw/lmw instructions use Rb and Re
> +     to store/load registers consecutively,
> +     we need to check again to see if there is any register
> +     which is not live in the function but in the range
> +     between 'callee_saved_regs_first_regno' and 'callee_saved_regs_last_regno'.
> +     If we find such registers, add their size
> +     to 'callee_saved_regs_size' for the prologue and epilogue.
> +
> +     For example:
> +     Assume that the registers $r6, $r7, $r8, $r10, and $r11 are live
> +     in the function, so initially callee_saved_regs_size = 4 * 5 = 20.
> +     However, although $r9 is not live in the function,
> +     it is in the range between $r6 and $r11.
> +     We have to increase callee_saved_regs_size so that
> +     prologue and epilogue can use it to issue
> +     smw/lmw instructions and adjust the offset correctly.  */
> +  for (r =  cfun->machine->callee_saved_regs_first_regno;
> +       r <= cfun->machine->callee_saved_regs_last_regno;
> +       r++)
> +    {
> +      if (!df_regs_ever_live_p (r)
> +	  && r > cfun->machine->callee_saved_regs_first_regno
> +	  && r < cfun->machine->callee_saved_regs_last_regno)
> +	{
> +	  /* Found one register which is not live in the function
> +	     but in the range between first_regno and last_regno.  */
> +	  cfun->machine->callee_saved_regs_size += 4;
> +	}
> +    }

I wasn't sure here why you were setting callee_saved_regs_size in the
first loop and then had the second loop to handle the others.  Couldn't
this just be done by moving:

> +  /* Adjustment for v3push instructions:
> +     If we are using v3push (push25/pop25) instructions,
> +     we need to make sure Rb is $r6 and Re is
> +     located on $r6, $r8, $r10, or $r14.
> +     Some results above will be discarded and recomputed.
> +     Note that it is only available under V3/V3M ISA.  */
> +  if (TARGET_V3PUSH)
> +    {
> +      /* Recompute:
> +           cfun->machine->fp_size
> +           cfun->machine->gp_size
> +           cfun->machine->lp_size
> +           cfun->machine->callee_saved_regs_first_regno
> +           cfun->machine->callee_saved_regs_last_regno
> +           cfun->machine->callee_saved_regs_size */
> +
> +      /* For v3push instructions, $fp, $gp, and $lp are always saved.  */
> +      cfun->machine->fp_size = 4;
> +      cfun->machine->gp_size = 4;
> +      cfun->machine->lp_size = 4;
> +
> +      /* Remember to set Rb = $r6.  */
> +      cfun->machine->callee_saved_regs_first_regno = 6;
> +
> +      if (cfun->machine->callee_saved_regs_last_regno <= 6)
> +	{
> +	  /* Re = $r6 */
> +	  cfun->machine->callee_saved_regs_last_regno = 6;
> +	}
> +      else if (cfun->machine->callee_saved_regs_last_regno <= 8)
> +	{
> +	  /* Re = $r8 */
> +	  cfun->machine->callee_saved_regs_last_regno = 8;
> +	}
> +      else if (cfun->machine->callee_saved_regs_last_regno <= 10)
> +	{
> +	  /* Re = $r10 */
> +	  cfun->machine->callee_saved_regs_last_regno = 10;
> +	}
> +      else if (cfun->machine->callee_saved_regs_last_regno <= 14)
> +	{
> +	  /* Re = $r14 */
> +	  cfun->machine->callee_saved_regs_last_regno = 14;
> +	}
> +      else if (cfun->machine->callee_saved_regs_last_regno == SP_REGNUM)
> +	{
> +	  /* If last_regno is SP_REGNUM, which means
> +	     it is never changed, so set it to Re = $r6.  */
> +	  cfun->machine->callee_saved_regs_last_regno = 6;
> +	}
> +      else
> +	{
> +	  /* The program flow should not go here.  */
> +	  gcc_unreachable ();
> +	}
> +
> +      /* Compute pushed size of callee-saved registers.  */
> +      cfun->machine->callee_saved_regs_size
> +	= 4 * (cfun->machine->callee_saved_regs_last_regno
> +	       - cfun->machine->callee_saved_regs_first_regno
> +	       + 1);

...this last statement out of the "if" block and making it
conditional on "cfun->machine->callee_saved_regs_last_regno != SP_REGNUM"?

> +/* Function to create a parallel rtx pattern
> +   which represents stack push multiple behavior.
> +   The overall concept is:
> +     "unspec_stack_push_multiple",
> +     "use/clobber stack pointer",
> +     "use Rb to Re", and "use $fp, $gp, $lp".  */
> +static rtx
> +nds32_gen_stack_push_multiple (rtx Rb, rtx Re, rtx En4)
> +{
> +  int regno;
> +  int extra_count;
> +  int num_use_regs;
> +  int par_index;
> +
> +  rtx reg;
> +  rtx parallel_insn;
> +
> +  /* We need to provide a customized rtx which contains
> +     necessary information for data analysis,
> +     so we create a parallel rtx like this:
> +     (parallel [(unspec [(reg: Rb)
> +                         (reg: Re)
> +                         (const_int En4)] UNSPEC_STACK_PUSH_MULTIPLE)
> +                (use (reg:SI SP_REGNUM))
> +                (clobber (reg:SI SP_REGNUM))
> +                (use (reg:SI Rb))
> +                (use (reg:SI Rb+1))
> +                ...
> +                (use (reg:SI Re))
> +                (use (reg:SI FP_REGNUM))
> +                (use (reg:SI GP_REGNUM))
> +                (use (reg:SI LP_REGNUM))]) */

I know it's a pain to implement, but the pattern should really model the
memory too, like you do for the call-frame information.  I.e.:

  (parallel [(set (mem:SI ...)
                  (reg:SI Rb))
             (set (mem:SI ...)
                  (reg:SI Rb+1))
             ...])

Same idea for pop.  (It might be that you don't need separate call-frame
info after that change.)
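
To make the idea concrete, a push of Rb..Re with the memory modelled explicitly might look something like the following sketch (the offsets and the final $sp adjustment are illustrative only, not taken from the patch):

```
(parallel [(set (mem:SI (plus:SI (reg:SI SP_REGNUM) (const_int -4)))
                (reg:SI Rb))
           (set (mem:SI (plus:SI (reg:SI SP_REGNUM) (const_int -8)))
                (reg:SI Rb+1))
           ...
           (set (reg:SI SP_REGNUM)
                (plus:SI (reg:SI SP_REGNUM) (const_int -total)))])
```

With the stores visible as sets, the RTL passes can see exactly which stack slots are written, which is why the separate unspec/use/clobber bookkeeping (and possibly the hand-written call-frame info) may become unnecessary.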

> +/* Function to construct isr vectors information array.
> +   We need to check:
> +     1. Traverse interrupt/exception/reset for setting vector id.
> +     2. Only 'nested', 'not_nested', or 'nested_ready' in the attributes.
> +     3. Only 'save_all' or 'partial_save' in the attributes.  */
> +static void
> +nds32_construct_isr_vectors_information (tree func_attrs,
> +					 const char *func_name)
> +{
> +  int nested_p, not_nested_p, nested_ready_p;
> +  int save_all_p, partial_save_p;
> +  tree intr, excp, reset;
> +  int temp_count;
> +
> +  nested_p = not_nested_p = nested_ready_p = 0;
> +  save_all_p = partial_save_p = 0;
> +  temp_count = 0;
> +
> +  /* We must check at MOST one attribute to set save-reg.  */
> +  if (lookup_attribute ("save_all", func_attrs))
> +    save_all_p = 1;
> +  if (lookup_attribute ("partial_save", func_attrs))
> +    partial_save_p = 1;
> +
> +  if ((save_all_p + partial_save_p) > 1)
> +    error ("multiple save reg attributes to function %qs", func_name);

I think this kind of error is really supposed to be reported in
TARGET_MERGE_DECL_ATTRIBUTES, where it can include full location
information.  Same for the other checks.

> +	  else
> +	    {
> +	      /* Issue error if it is not a valid integer value.  */
> +	      error ("invalid id value for interrupt/exception attribute");
> +	    }

...and I think this kind of argument checking is supposed to go in
TARGET_INSERT_ATTRIBUTES.

> +      /* Create one more instruction to move value
> +         into the temporary register.  */
> +      value_move_insn = gen_movsi (tmp_reg, GEN_INT (full_value));
> +      /* Emit rtx into insn list and receive its transformed insn rtx.  */
> +      value_move_insn = emit_insn (value_move_insn);

This can be simplified to:

  value_move_insn = emit_move_insn (tmp_reg, GEN_INT (full_value));

which is more usual than calling gen_mov* directly.

> +/* Return true if MODE/TYPE need double word alignment.  */
> +static bool
> +nds32_needs_double_word_align (enum machine_mode mode, const_tree type)
> +{
> +  return (GET_MODE_ALIGNMENT (mode) > PARM_BOUNDARY
> +	  || (type && TYPE_ALIGN (type) > PARM_BOUNDARY));
> +}

For something ABI-related like this, it's usually better not to look at
"mode" when "type" is nonnull, since the type directly reflects the
source language whereas "mode" is more of an internal detail.  E.g.
something like:

  (type ? TYPE_ALIGN (type) : GET_MODE_ALIGNMENT (mode)) > PARM_BOUNDARY

> +/* Function that checks whether 'INDEX' is valid as an index rtx for an address.
> +
> +   OUTER_MODE : Machine mode of outer address rtx.
> +   OUTER_CODE : rtx code of outer address rtx.
> +        INDEX : Check if this rtx is valid to be an index for an address.
> +         BASE : The rtx of base register.
> +       STRICT : If it is true, we are in reload pass or after reload pass.  */
> +static bool
> +nds32_legitimate_index_p (enum machine_mode outer_mode,
> +			  enum rtx_code outer_code ATTRIBUTE_UNUSED,
> +			  rtx index,
> +			  rtx base ATTRIBUTE_UNUSED,
> +			  bool strict)

Minor, but since this is an internal function, it'd probably be better
to remove the unused parameters.  Also:

> +	  /* Further check if the value is legal for the 'outer_mode'.  */
> +	  if (!satisfies_constraint_Is15 (index))
> +	    goto non_legitimate_index;
...
> +non_legitimate_index:
> +  return false;

...better to just use "return false" here.  (I'm guessing this code
dates back to the old GO_IF_LEGITIMATE_ADDRESS days.)

> +/* Function to expand builtin function for
> +   '[(unspec [(reg)])]'.  */

Plain unspecs are pretty dangerous, since there's nothing really
to anchor them to a particular position.  I've not yet looked through
the .md file to see what the patterns actually look like though.

> +  /* Determine the modes of the instruction operands.
> +     Note that we don't have left-hand-side result,
> +     so operands[0] IS FOR arg0.  */
> +  mode0 = insn_data[icode].operand[0].mode;
> +
> +  /* Refer to nds32.intrinsic.md,
> +     we want to ensure operands[0] is register_operand.  */
> +  if (!((*insn_data[icode].operand[0].predicate) (op0, mode0)))
> +    op0 = copy_to_mode_reg (mode0, op0);
> +
> +  /* Emit new instruction and return original target.  */
> +  pat = GEN_FCN (icode) (op0);
> +
> +  if (!pat)
> +    return 0;
> +
> +  emit_insn (pat);
> +
> +  return target;

Please use the create_*_operand/maybe_expand_insn routines from optabs.h
for this kind of thing.

> +/* Permitting tail calls.  */
> +
> +static bool
> +nds32_warn_func_return (tree decl)
> +{
> +  /* Naked functions are implemented entirely in assembly, including the
> +     return sequence, so suppress warnings about this.  */
> +  return !nds32_naked_function_p (decl);
> +}

The comment above the function doesn't seem to match the function.

> +  /* Instruction-cache sync instruction,
> +     requesting an arugment as staring address.  */
> +  rtx isync_insn;

Typo: "argument as starting address".

> +      /* Now check [reg], [symbol_ref], and [const].  */
> +      if (GET_CODE (x) != REG
> +	  && GET_CODE (x) != SYMBOL_REF
> +	  && GET_CODE (x) != CONST)
> +	goto non_legitimate_address;
> [...]
> +non_legitimate_address:
> +  return false;

As above, no need for the gotos.  Just return false directly.

> +    case PLUS:
> +      /* virtual_stack_vars_rtx will eventually transfer to SP or FP
> +         force [virtual_stack_vars + reg or const] to register first,
> +         make the offset_address combination to
> +         other addressing mode possible.  */
> +      if (mode == BLKmode
> +	  && REG_P (XEXP (x, 0))
> +	  && (REGNO (XEXP (x, 0)) == VIRTUAL_STACK_VARS_REGNUM))
> +	goto non_legitimate_address;

I don't understand this, sorry.  Is it an optimisation, or needed for
correctness?

> +static rtx
> +nds32_legitimize_address (rtx x,
> +			  rtx oldx ATTRIBUTE_UNUSED,
> +			  enum machine_mode mode ATTRIBUTE_UNUSED)
> +{
> +  /* 'mode' is the machine mode of memory operand
> +     that uses 'x' as address.  */
> +
> +  return x;
> +}

There's no need to define a dummy function here.  Just leave the default
TARGET_LEGITIMIZE_ADDRESS in place.

> +  /* If fp$, $gp, $lp, and all callee-save registers are NOT required

Typo: $fp.  Another instance later.

> +  /* If the function is 'naked', we do not have to generate
> +     epilogue code fragment BUT 'ret' instruction.  */
> +  if (cfun->machine->naked_p)
> +    {
> +      /* Generate return instruction by using unspec_func_return pattern.
> +         Make sure this instruction is after gen_blockage().
> +         NOTE that $lp will become 'live'
> +         after this instruction has been emitted.  */
> +      emit_insn (gen_unspec_func_return ());
> +      return;
> +    }

FWIW, this is different from AVR (for one), which explicitly doesn't emit a
return instruction.  One of the uses for AVR is to chain code together
into an .init-like section.

> +  if (frame_pointer_needed)
> +    {
> +      /* adjust $sp = $fp - ($fp size) - ($gp size) - ($lp size)
> +       *                  - (4 * callee-saved-registers)
> +       *
> +       * Note: No need to adjust
> +       *       cfun->machine->callee_saved_area_padding_bytes,
> +       *       because we want to adjust stack pointer
> +       *       to the position for pop instruction.
> +       */

There are a few comments that have this style, but GNU layout is the:

      /* ~~~~~~~~~~~~~~~
         ~~~~~~~~~~~~~~~.  */

style you use in most places.
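
For instance, the quoted adjustment comment rewritten in that layout would read:

```c
  /* Adjust $sp = $fp - ($fp size) - ($gp size) - ($lp size)
                    - (4 * callee-saved-registers).

     Note: No need to adjust
     cfun->machine->callee_saved_area_padding_bytes,
     because we want to adjust stack pointer
     to the position for pop instruction.  */
```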

> +  /* Caculate the number of 1-bit of (~ival), if there is only one 1-bit,

Typo: Calculate

> +     it means the original ival has only one 0-bit,
> +     So it is ok to perform 'bclr' operation.  */
> +
> +  one_bit_count = __builtin_popcount ((unsigned int) (~ival));
> +
> +  /* 'bclr' is a performance extension instruction.  */
> +  return (TARGET_PERF_EXT && (one_bit_count == 1));

We shouldn't really call __builtin_popcount directly.  You could just
use popcount_hwi, or the (x & -x) == x thing.

Redundant brackets around the ==.  There are a few other cases in the
patch too, although most of it seems OK.

> +      switch (mode_test)
> +	{
> +	case QImode:
> +	  /* 333 format.  */
> +	  if (val >=0 && val < 8 && regno < 8)
> +	    return ADDRESS_LO_REG_IMM3U;
> +	  break;
> +
> +	case HImode:
> +	  /* 333 format.  */
> +	  if (val >=0 && val < 16 && (val % 2 == 0) && regno < 8)
> +	    return ADDRESS_LO_REG_IMM3U;

Missing space before "0".  A few other cases in the same function.

> +   The trampoline code for nds32 target must contains folloing parts:

Typo: following.

Thanks,
Richard

Patch

diff --git gcc/config/nds32/nds32.c gcc/config/nds32/nds32.c
new file mode 100644
index 0000000..9e10c59e
--- /dev/null
+++ gcc/config/nds32/nds32.c
@@ -0,0 +1,5232 @@ 
+/* Subroutines used for code generation of Andes NDS32 cpu for GNU compiler
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+   Contributed by Andes Technology Corporation.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+
+
+#include "config.h"
+#include "system.h"
+#include "coretypes.h"
+#include "tm.h"
+#include "tree.h"
+#include "rtl.h"
+#include "regs.h"
+#include "hard-reg-set.h"
+#include "insn-config.h"	/* Required by recog.h.  */
+#include "conditions.h"
+#include "output.h"
+#include "insn-attr.h"		/* For DFA state_t.  */
+#include "insn-codes.h"		/* For CODE_FOR_xxx.  */
+#include "reload.h"		/* For push_reload().  */
+#include "flags.h"
+#include "function.h"
+#include "expr.h"
+#include "recog.h"
+#include "diagnostic-core.h"
+#include "df.h"
+#include "tm_p.h"
+#include "tm-constrs.h"
+#include "optabs.h"		/* For GEN_FCN.  */
+#include "target.h"
+#include "target-def.h"
+#include "langhooks.h"		/* For add_builtin_function().  */
+#include "ggc.h"
+
+/* ------------------------------------------------------------------------ */
+
+/* This file is divided into five parts:
+
+     PART 1: Auxiliary static variable definitions and
+             target hook static variable definitions.
+
+     PART 2: Auxiliary static function definitions.
+
+     PART 3: Implement target hook stuff definitions.
+
+     PART 4: Implement extern function definitions,
+             the prototype is in nds32-protos.h.
+
+     PART 5: Initialize target hook structure and definitions.  */
+
+/* ------------------------------------------------------------------------ */
+
+/* PART 1: Auxiliary static variable definitions and
+           target hook static variable definitions.  */
+
+/* Refer to nds32.h; there are at most 73 isr vectors in the nds32 architecture.
+   0 for reset handler with __attribute__((reset())),
+   1-8 for exception handler with __attribute__((exception(1,...,8))),
+   and 9-72 for interrupt handler with __attribute__((interrupt(0,...,63))).
+   We use an array to record essential information for each vector.  */
+static struct nds32_isr_info nds32_isr_vectors[NDS32_N_ISR_VECTORS];
+
+/* Define intrinsic register names.
+   Please refer to the nds32_intrinsic.h file; the index corresponds to
+   'enum nds32_intrinsic_registers' data type values.
+   NOTE that the base value starts from 1024.  */
+static const char* const nds32_intrinsic_register_names[] =
+{
+  "$PSW", "$IPSW", "$ITYPE", "$IPC"
+};
+
+/* Defining target-specific uses of __attribute__.  */
+static const struct attribute_spec nds32_attribute_table[] =
+{
+  /* Syntax: { name, min_len, max_len, decl_required, type_required,
+               function_type_required, handler, affects_type_identity } */
+
+  /* The interrupt vid: [0-63]+ (actual vector number starts from 9 to 72).  */
+  { "interrupt",    1, 64, false, false, false, NULL, false },
+  /* The exception vid: [1-8]+  (actual vector number starts from 1 to 8).  */
+  { "exception",    1,  8, false, false, false, NULL, false },
+  /* Argument is user's interrupt numbers.  The vector number is always 0.  */
+  { "reset",        1,  1, false, false, false, NULL, false },
+
+  /* The attributes describing isr nested type.  */
+  { "nested",       0,  0, false, false, false, NULL, false },
+  { "not_nested",   0,  0, false, false, false, NULL, false },
+  { "nested_ready", 0,  0, false, false, false, NULL, false },
+
+  /* The attributes describing isr register save scheme.  */
+  { "save_all",     0,  0, false, false, false, NULL, false },
+  { "partial_save", 0,  0, false, false, false, NULL, false },
+
+  /* The attributes used by reset attribute.  */
+  { "nmi",          1,  1, false, false, false, NULL, false },
+  { "warm",         1,  1, false, false, false, NULL, false },
+
+  /* The attribute telling no prologue/epilogue.  */
+  { "naked",        0,  0, false, false, false, NULL, false },
+
+  /* The last attribute spec is set to be NULL.  */
+  { NULL,           0,  0, false, false, false, NULL, false }
+};
+
+
+/* ------------------------------------------------------------------------ */
+
+/* PART 2: Auxiliary static function definitions.  */
+
+/* Function to save and restore machine-specific function data.  */
+static struct machine_function *
+nds32_init_machine_status (void)
+{
+  struct machine_function *machine;
+  machine = ggc_alloc_cleared_machine_function ();
+
+  /* Initially assume this function needs prologue/epilogue.  */
+  machine->naked_p = 0;
+
+  /* Initially assume this function does NOT use fp_as_gp optimization.  */
+  machine->fp_as_gp_p = 0;
+
+  return machine;
+}
+
+/* Function to compute stack frame size and
+   store into cfun->machine structure.  */
+static void
+nds32_compute_stack_frame (void)
+{
+  int r;
+  int block_size;
+
+  /* Before computing everything for stack frame size,
+     we check if it is still worth to use fp_as_gp optimization.
+     If it is, the 'df_regs_ever_live_p (FP_REGNUM)' will be set
+     so that $fp will be saved on stack.  */
+  cfun->machine->fp_as_gp_p = nds32_fp_as_gp_check_available ();
+
+  /* Because nds32_compute_stack_frame() will be called from different places,
+     every time we enter this function, we have to assume this function
+     needs prologue/epilogue.  */
+  cfun->machine->naked_p = 0;
+
+  /* Get variadic arguments size to prepare pretend arguments and
+     push them into stack at prologue.
+     Currently, we do not push variadic arguments by ourselves.
+     We let GCC handle all the work.
+     The caller will push all corresponding nameless arguments into stack,
+     and the callee is able to retrieve them without problems.
+     These variables are still preserved in case one day
+     we would like caller passing arguments with registers.  */
+  cfun->machine->va_args_size = 0;
+  cfun->machine->va_args_first_regno = SP_REGNUM;
+  cfun->machine->va_args_last_regno  = SP_REGNUM;
+
+  /* Get local variables, incoming variables, and temporary variables size.
+     Note that we need to make sure it is 8-byte alignment because
+     there may be no padding bytes if we are using LRA.  */
+  cfun->machine->local_size = NDS32_ROUND_UP_DOUBLE_WORD (get_frame_size ());
+
+  /* Get outgoing arguments size.  */
+  cfun->machine->out_args_size = crtl->outgoing_args_size;
+
+  /* If $fp value is required to be saved on stack, it needs 4 bytes space.
+     Check whether $fp is ever live.  */
+  cfun->machine->fp_size = (df_regs_ever_live_p (FP_REGNUM)) ? 4 : 0;
+
+  /* If $gp value is required to be saved on stack, it needs 4 bytes space.
+     Check whether we are using PIC code generation.  */
+  cfun->machine->gp_size = (flag_pic) ? 4 : 0;
+
+  /* If $lp value is required to be saved on stack, it needs 4 bytes space.
+     Check whether $lp is ever live.  */
+  cfun->machine->lp_size = (df_regs_ever_live_p (LP_REGNUM)) ? 4 : 0;
+
+  /* Initially there is no padding bytes.  */
+  cfun->machine->callee_saved_area_padding_bytes = 0;
+
+  /* Calculate the bytes of saving callee-saved registers on stack.  */
+  cfun->machine->callee_saved_regs_size = 0;
+  cfun->machine->callee_saved_regs_first_regno = SP_REGNUM;
+  cfun->machine->callee_saved_regs_last_regno  = SP_REGNUM;
+  /* Currently, there is no need to check $r28~$r31
+     because we will save them in another way.  */
+  for (r = 0; r < 28; r++)
+    {
+      if (NDS32_REQUIRED_CALLEE_SAVED_P (r))
+	{
+	  /* Each register is 4 bytes.  */
+	  cfun->machine->callee_saved_regs_size += 4;
+
+	  /* Mark the first required callee-saved register
+	     (only need to set it once).
+	     If first regno == SP_REGNUM, we can tell that
+	     it is the first time to be here.  */
+	  if (cfun->machine->callee_saved_regs_first_regno == SP_REGNUM)
+	    cfun->machine->callee_saved_regs_first_regno = r;
+	  /* Mark the last required callee-saved register.  */
+	  cfun->machine->callee_saved_regs_last_regno = r;
+	}
+    }
+
+  /* Note:
+     Since our smw/lmw instructions use Rb and Re
+     to store/load registers consecutively,
+     we need to check again to see if there is any register
+     which is not live in the function but in the range
+     between 'callee_saved_regs_first_regno' and 'callee_saved_regs_last_regno'.
+     If we find such registers, add their size
+     to 'callee_saved_regs_size' for the prologue and epilogue.
+
+     For example:
+     Assume that the registers $r6, $r7, $r8, $r10, and $r11 are live
+     in the function, so initially callee_saved_regs_size = 4 * 5 = 20.
+     However, although $r9 is not live in the function,
+     it is in the range between $r6 and $r11.
+     We have to increase callee_saved_regs_size so that
+     prologue and epilogue can use it to issue
+     smw/lmw instructions and adjust the offset correctly.  */
+  for (r =  cfun->machine->callee_saved_regs_first_regno;
+       r <= cfun->machine->callee_saved_regs_last_regno;
+       r++)
+    {
+      if (!df_regs_ever_live_p (r)
+	  && r > cfun->machine->callee_saved_regs_first_regno
+	  && r < cfun->machine->callee_saved_regs_last_regno)
+	{
+	  /* Found one register which is not live in the function
+	     but in the range between first_regno and last_regno.  */
+	  cfun->machine->callee_saved_regs_size += 4;
+	}
+    }
+
+  /* Check if this function can omit prologue/epilogue code fragment.
+     If there is 'naked' attribute in this function,
+     we can set 'naked_p' flag to indicate that
+     we do not have to generate prologue/epilogue.
+     Or, if all the following conditions succeed,
+     we can set this function 'naked_p' as well:
+       condition 1: first_regno == last_regno == SP_REGNUM,
+                    which means we do not have to save
+                    any callee-saved registers.
+       condition 2: Both $lp and $fp are NOT live in this function,
+                    which means we do not need to save them.
+       condition 3: There is no local_size, which means
+                    we do not need to adjust $sp.  */
+  if (lookup_attribute ("naked", DECL_ATTRIBUTES (current_function_decl))
+      || (cfun->machine->callee_saved_regs_first_regno == SP_REGNUM
+	  && cfun->machine->callee_saved_regs_last_regno == SP_REGNUM
+	  && !df_regs_ever_live_p (FP_REGNUM)
+	  && !df_regs_ever_live_p (LP_REGNUM)
+	  && cfun->machine->local_size == 0))
+    {
+      /* Set this function 'naked_p' and
+         other functions can check this flag.  */
+      cfun->machine->naked_p = 1;
+
+      /* No need to save $fp, $gp, and $lp.
+         We should set these value to be zero
+         so that nds32_initial_elimination_offset() can work properly.  */
+      cfun->machine->fp_size = 0;
+      cfun->machine->gp_size = 0;
+      cfun->machine->lp_size = 0;
+
+      /* If stack usage computation is required,
+         we need to provide the static stack size.  */
+      if (flag_stack_usage_info)
+	current_function_static_stack_size = 0;
+
+      /* No need to do following adjustment, return immediately.  */
+      return;
+    }
+
+  /* Adjustment for v3push instructions:
+     If we are using v3push (push25/pop25) instructions,
+     we need to make sure Rb is $r6 and Re is
+     located on $r6, $r8, $r10, or $r14.
+     Some results above will be discarded and recomputed.
+     Note that it is only available under V3/V3M ISA.  */
+  if (TARGET_V3PUSH)
+    {
+      /* Recompute:
+           cfun->machine->fp_size
+           cfun->machine->gp_size
+           cfun->machine->lp_size
+           cfun->machine->callee_saved_regs_first_regno
+           cfun->machine->callee_saved_regs_last_regno
+           cfun->machine->callee_saved_regs_size */
+
+      /* For v3push instructions, $fp, $gp, and $lp are always saved.  */
+      cfun->machine->fp_size = 4;
+      cfun->machine->gp_size = 4;
+      cfun->machine->lp_size = 4;
+
+      /* Remember to set Rb = $r6.  */
+      cfun->machine->callee_saved_regs_first_regno = 6;
+
+      if (cfun->machine->callee_saved_regs_last_regno <= 6)
+	{
+	  /* Re = $r6 */
+	  cfun->machine->callee_saved_regs_last_regno = 6;
+	}
+      else if (cfun->machine->callee_saved_regs_last_regno <= 8)
+	{
+	  /* Re = $r8 */
+	  cfun->machine->callee_saved_regs_last_regno = 8;
+	}
+      else if (cfun->machine->callee_saved_regs_last_regno <= 10)
+	{
+	  /* Re = $r10 */
+	  cfun->machine->callee_saved_regs_last_regno = 10;
+	}
+      else if (cfun->machine->callee_saved_regs_last_regno <= 14)
+	{
+	  /* Re = $r14 */
+	  cfun->machine->callee_saved_regs_last_regno = 14;
+	}
+      else if (cfun->machine->callee_saved_regs_last_regno == SP_REGNUM)
+	{
+	  /* If last_regno is SP_REGNUM, which means
+	     it is never changed, so set it to Re = $r6.  */
+	  cfun->machine->callee_saved_regs_last_regno = 6;
+	}
+      else
+	{
+	  /* The program flow should not go here.  */
+	  gcc_unreachable ();
+	}
+
+      /* Compute pushed size of callee-saved registers.  */
+      cfun->machine->callee_saved_regs_size
+	= 4 * (cfun->machine->callee_saved_regs_last_regno
+	       - cfun->machine->callee_saved_regs_first_regno
+	       + 1);
+    }
+
+  /* Important: We need to make sure that
+                (va_args_size + fp_size + gp_size
+                 + lp_size + callee_saved_regs_size)
+                is 8-byte alignment.
+                If it is not, calculate the padding bytes.  */
+  block_size = cfun->machine->va_args_size
+	       + cfun->machine->fp_size
+	       + cfun->machine->gp_size
+	       + cfun->machine->lp_size
+	       + cfun->machine->callee_saved_regs_size;
+  if (!NDS32_DOUBLE_WORD_ALIGN_P (block_size))
+    {
+      cfun->machine->callee_saved_area_padding_bytes
+	= NDS32_ROUND_UP_DOUBLE_WORD (block_size) - block_size;
+    }
+
+  /* If stack usage computation is required,
+     we need to provide the static stack size.  */
+  if (flag_stack_usage_info)
+    {
+      current_function_static_stack_size
+	= NDS32_ROUND_UP_DOUBLE_WORD (block_size)
+	  + cfun->machine->local_size
+	  + cfun->machine->out_args_size;
+    }
+}
+
+/* Function to create a parallel rtx pattern
+   which represents stack push multiple behavior.
+   The overall concept is:
+     "unspec_stack_push_multiple",
+     "use/clobber stack pointer",
+     "use Rb to Re", and "use $fp, $gp, $lp".  */
+static rtx
+nds32_gen_stack_push_multiple (rtx Rb, rtx Re, rtx En4)
+{
+  int regno;
+  int extra_count;
+  int num_use_regs;
+  int par_index;
+
+  rtx reg;
+  rtx parallel_insn;
+
+  /* We need to provide a customized rtx which contains
+     necessary information for data analysis,
+     so we create a parallel rtx like this:
+     (parallel [(unspec [(reg: Rb)
+                         (reg: Re)
+                         (const_int En4)] UNSPEC_STACK_PUSH_MULTIPLE)
+                (use (reg:SI SP_REGNUM))
+                (clobber (reg:SI SP_REGNUM))
+                (use (reg:SI Rb))
+                (use (reg:SI Rb+1))
+                ...
+                (use (reg:SI Re))
+                (use (reg:SI FP_REGNUM))
+                (use (reg:SI GP_REGNUM))
+                (use (reg:SI LP_REGNUM))]) */
+
+  /* Calculate the number of registers that will be pushed.  */
+  extra_count = 0;
+  if (cfun->machine->fp_size)
+    extra_count++;
+  if (cfun->machine->gp_size)
+    extra_count++;
+  if (cfun->machine->lp_size)
+    extra_count++;
+  num_use_regs = REGNO (Re) - REGNO (Rb) + 1 + extra_count;
+
+  /* In addition to used registers,
+     we need more spaces for 'unspec', 'use sp', and 'clobber sp' rtx.  */
+  parallel_insn = gen_rtx_PARALLEL (VOIDmode,
+				    rtvec_alloc (num_use_regs + 1 + 2));
+  par_index = 0;
+
+  /* Create (unspec [Rb Re En4]).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_UNSPEC (BLKmode, gen_rtvec (3, Rb, Re, En4),
+		      UNSPEC_STACK_PUSH_MULTIPLE);
+  par_index++;
+  /* Create (use (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_USE (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+  /* Create (clobber (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+
+  /* Create (use (reg X)) from Rb to Re.  */
+  for (regno = REGNO (Rb); regno <= (int) REGNO (Re); regno++)
+    {
+      reg = gen_rtx_REG (SImode, regno);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+    }
+
+  /* Create (use (reg $fp)), (use (reg $gp)), (use (reg $lp)) if necessary.  */
+  if (cfun->machine->fp_size)
+    {
+      reg = gen_rtx_REG (SImode, FP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+    }
+  if (cfun->machine->gp_size)
+    {
+      reg = gen_rtx_REG (SImode, GP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+    }
+  if (cfun->machine->lp_size)
+    {
+      reg = gen_rtx_REG (SImode, LP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+    }
+
+  return parallel_insn;
+}
+
+/* Function to create a parallel rtx pattern
+   which represents stack pop multiple behavior.
+   The overall concept is:
+     "unspec_stack_pop_multiple",
+     "clobber stack pointer",
+     "clobber Rb to Re", and "clobber $fp, $gp, $lp".  */
+static rtx
+nds32_gen_stack_pop_multiple (rtx Rb, rtx Re, rtx En4)
+{
+  int regno;
+  int extra_count;
+  int num_use_regs;
+  int par_index;
+
+  rtx reg;
+  rtx parallel_insn;
+
+  /* We need to provide a customized rtx which contains
+     necessary information for data analysis,
+     so we create a parallel rtx like this:
+     (NOTE: We have to clearly claim that we are using $sp, Rb ~ Re,
+            $fp, $gp, and $lp, otherwise it may be renamed by optimization.)
+     (parallel [(unspec [(reg: Rb)
+                         (reg: Re)
+                         (const_int En4)] UNSPEC_STACK_POP_MULTIPLE)
+                (use (reg:SI SP_REGNUM))
+                (clobber (reg:SI SP_REGNUM))
+                (use (reg:SI Rb))
+                (clobber (reg:SI Rb))
+                (use (reg:SI Rb+1))
+                (clobber (reg:SI Rb+1))
+                ...
+                (use (reg:SI Re))
+                (clobber (reg:SI Re))
+                (use (reg:SI FP_REGNUM))
+                (clobber (reg:SI FP_REGNUM))
+                (use (reg:SI GP_REGNUM))
+                (clobber (reg:SI GP_REGNUM))
+                (use (reg:SI LP_REGNUM))
+                (clobber (reg:SI LP_REGNUM))]) */
+
+  /* Calculate the number of registers that will be popped.  */
+  extra_count = 0;
+  if (cfun->machine->fp_size)
+    extra_count++;
+  if (cfun->machine->gp_size)
+    extra_count++;
+  if (cfun->machine->lp_size)
+    extra_count++;
+  num_use_regs = REGNO (Re) - REGNO (Rb) + 1 + extra_count;
+
+  /* In addition to used and clobbered registers,
+     we need more spaces for 'unspec', 'use sp', and 'clobber sp' rtx.  */
+  parallel_insn = gen_rtx_PARALLEL (VOIDmode,
+				    rtvec_alloc (num_use_regs * 2 + 1 + 2));
+  par_index = 0;
+
+  /* Create (unspec [Rb Re En4]).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_UNSPEC (BLKmode, gen_rtvec (3, Rb, Re, En4),
+		      UNSPEC_STACK_POP_MULTIPLE);
+  par_index++;
+  /* Create (use (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_USE (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+  /* Create (clobber (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+
+  /* Create (use (reg X)) and (clobber (reg X)) from Rb to Re.  */
+  for (regno = REGNO (Rb); regno <= (int) REGNO (Re); regno++)
+    {
+      reg = gen_rtx_REG (SImode, regno);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+      par_index++;
+    }
+
+  /* Create (use (reg $fp)), (clobber (reg $fp)),
+            (use (reg $gp)), (clobber (reg $gp)),
+            (use (reg $lp)), (clobber (reg $lp)) if necessary.  */
+  if (cfun->machine->fp_size)
+    {
+      reg = gen_rtx_REG (SImode, FP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+      par_index++;
+    }
+  if (cfun->machine->gp_size)
+    {
+      reg = gen_rtx_REG (SImode, GP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+      par_index++;
+    }
+  if (cfun->machine->lp_size)
+    {
+      reg = gen_rtx_REG (SImode, LP_REGNUM);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+      par_index++;
+    }
+
+  return parallel_insn;
+}
+
+/* Function to create a parallel rtx pattern
+   which represents stack v3push behavior.
+   The overall concept is:
+     "unspec_stack_v3push",
+     "use/clobber stack pointer",
+     "use Rb to Re", and "use $fp, $gp, $lp".  */
+static rtx
+nds32_gen_stack_v3push (rtx Rb,
+			rtx Re,
+			rtx En4 ATTRIBUTE_UNUSED,
+			rtx imm8u)
+{
+  int regno;
+  int num_use_regs;
+  int par_index;
+
+  rtx reg;
+  rtx parallel_insn;
+
+  /* We need to provide a customized rtx which contains
+     necessary information for data analysis,
+     so we create a parallel rtx like this:
+     (parallel [(unspec [(reg: Re)
+                         (const_int imm8u)] UNSPEC_STACK_V3PUSH)
+                (use (reg:SI SP_REGNUM))
+                (clobber (reg:SI SP_REGNUM))
+                (use (reg:SI Rb))
+                (use (reg:SI Rb+1))
+                ...
+                (use (reg:SI Re))
+                (use (reg:SI FP_REGNUM))
+                (use (reg:SI GP_REGNUM))
+                (use (reg:SI LP_REGNUM))]) */
+
+  /* Calculate the number of registers that will be pushed.
+     Since $fp, $gp, and $lp are always pushed by the v3push instruction,
+     we need to count these three regs.  */
+  num_use_regs = REGNO (Re) - REGNO (Rb) + 1 + 3;
+
+  /* In addition to used registers,
+     we need more spaces for 'unspec', 'use sp', and 'clobber sp' rtx.  */
+  parallel_insn = gen_rtx_PARALLEL (VOIDmode,
+				    rtvec_alloc (num_use_regs + 1 + 2));
+  par_index = 0;
+
+  /* Create (unspec [Re imm8u]).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_UNSPEC (BLKmode, gen_rtvec (2, Re, imm8u),
+		      UNSPEC_STACK_V3PUSH);
+  par_index++;
+  /* Create (use (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_USE (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+  /* Create (clobber (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+
+  /* Create (use (reg X)) from Rb to Re.  */
+  for (regno = REGNO (Rb); regno <= (int) REGNO (Re); regno++)
+    {
+      reg = gen_rtx_REG (SImode, regno);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+    }
+
+  /* Create (use (reg $fp)), (use (reg $gp)), (use (reg $lp)).  */
+  reg = gen_rtx_REG (SImode, FP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+  reg = gen_rtx_REG (SImode, GP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+  reg = gen_rtx_REG (SImode, LP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+
+  return parallel_insn;
+}
+
+/* Function to create a parallel rtx pattern
+   which represents stack v3pop behavior.
+   The overall concept is:
+     "unspec_stack_v3pop",
+     "clobber stack pointer",
+     "clobber Rb to Re", and "clobber $fp, $gp, $lp".  */
+static rtx
+nds32_gen_stack_v3pop (rtx Rb,
+		       rtx Re,
+		       rtx En4 ATTRIBUTE_UNUSED,
+		       rtx imm8u)
+{
+  int regno;
+  int num_use_regs;
+  int par_index;
+
+  rtx reg;
+  rtx parallel_insn;
+
+  /* We need to provide a customized rtx which contains
+     necessary information for data analysis,
+     so we create a parallel rtx like this:
+     (NOTE: We have to clearly claim that we are using $sp, Rb ~ Re,
+            $fp, $gp, and $lp, otherwise it may be renamed by optimization.)
+     (parallel [(unspec [(reg: Re)
+                         (const_int imm8u)] UNSPEC_STACK_V3POP)
+                (use (reg:SI SP_REGNUM))
+                (clobber (reg:SI SP_REGNUM))
+                (use (reg:SI Rb))
+                (clobber (reg:SI Rb))
+                (use (reg:SI Rb+1))
+                (clobber (reg:SI Rb+1))
+                ...
+                (use (reg:SI Re))
+                (clobber (reg:SI Re))
+                (use (reg:SI FP_REGNUM))
+                (clobber (reg:SI FP_REGNUM))
+                (use (reg:SI GP_REGNUM))
+                (clobber (reg:SI GP_REGNUM))
+                (use (reg:SI LP_REGNUM))
+                (clobber (reg:SI LP_REGNUM))]) */
+
+  /* Calculate the number of registers that will be popped.
+     Since $fp, $gp, and $lp are always popped by the v3pop instruction,
+     we need to count these three regs.  */
+  num_use_regs = REGNO (Re) - REGNO (Rb) + 1 + 3;
+
+  /* In addition to used and clobbered registers,
+     we need more spaces for 'unspec', 'use sp', and 'clobber sp' rtx.  */
+  parallel_insn = gen_rtx_PARALLEL (VOIDmode,
+				    rtvec_alloc (num_use_regs * 2 + 1 + 2));
+  par_index = 0;
+
+  /* Create (unspec [Re imm8u]).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_UNSPEC (BLKmode, gen_rtvec (2, Re, imm8u),
+		      UNSPEC_STACK_V3POP);
+  par_index++;
+  /* Create (use (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_USE (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+  /* Create (clobber (reg SP)).  */
+  XVECEXP (parallel_insn, 0, par_index)
+    = gen_rtx_CLOBBER (VOIDmode, gen_rtx_REG (SImode, SP_REGNUM));
+  par_index++;
+
+  /* Create (use (reg X)) and (clobber (reg X)) from Rb to Re.  */
+  for (regno = REGNO (Rb); regno <= (int) REGNO (Re); regno++)
+    {
+      reg = gen_rtx_REG (SImode, regno);
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+      par_index++;
+      XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+      par_index++;
+    }
+
+  /* Create (use (reg $fp)), (clobber (reg $fp)),
+            (use (reg $gp)), (clobber (reg $gp)),
+            (use (reg $lp)), (clobber (reg $lp)).  */
+  reg = gen_rtx_REG (SImode, FP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+  par_index++;
+
+  reg = gen_rtx_REG (SImode, GP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+  par_index++;
+
+  reg = gen_rtx_REG (SImode, LP_REGNUM);
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_USE (VOIDmode, reg);
+  par_index++;
+  XVECEXP (parallel_insn, 0, par_index) = gen_rtx_CLOBBER (VOIDmode, reg);
+  par_index++;
+
+  return parallel_insn;
+}
+
+/* A helper function to emit section head template.  */
+static void
+nds32_emit_section_head_template (char section_name[],
+				  char symbol_name[],
+				  int align_value,
+				  bool object_p)
+{
+  const char *flags_str;
+  const char *type_str;
+
+  flags_str = (object_p) ? "\"a\"" : "\"ax\"";
+  type_str = (object_p) ? "@object" : "@function";
+
+  fprintf (asm_out_file, "\t.section\t%s, %s\n", section_name, flags_str);
+  fprintf (asm_out_file, "\t.align\t%d\n", align_value);
+  fprintf (asm_out_file, "\t.global\t%s\n", symbol_name);
+  fprintf (asm_out_file, "\t.type\t%s, %s\n", symbol_name, type_str);
+  fprintf (asm_out_file, "%s:\n", symbol_name);
+}
+
+/* A helper function to emit section tail template.  */
+static void
+nds32_emit_section_tail_template (char symbol_name[])
+{
+  fprintf (asm_out_file, "\t.size\t%s, .-%s\n", symbol_name, symbol_name);
+}
+
+/* Function to emit isr jump table section.  */
+static void
+nds32_emit_isr_jmptbl_section (int vector_id)
+{
+  char section_name[100];
+  char symbol_name[100];
+
+  /* Prepare jmptbl section and symbol name.  */
+  snprintf (section_name, sizeof (section_name),
+	    ".nds32_jmptbl.%02d", vector_id);
+  snprintf (symbol_name, sizeof (symbol_name),
+	    "_nds32_jmptbl_%02d", vector_id);
+
+  nds32_emit_section_head_template (section_name, symbol_name, 2, true);
+  fprintf (asm_out_file, "\t.word\t%s\n",
+			 nds32_isr_vectors[vector_id].func_name);
+  nds32_emit_section_tail_template (symbol_name);
+}
+
+/* Function to emit isr vector section.  */
+static void
+nds32_emit_isr_vector_section (int vector_id)
+{
+  unsigned int vector_number_offset = 0;
+  const char *c_str = "CATEGORY";
+  const char *sr_str = "SR";
+  const char *nt_str = "NT";
+  const char *vs_str = "VS";
+  char first_level_handler_name[100];
+  char section_name[100];
+  char symbol_name[100];
+
+  /* Set the vector number offset so that we can calculate
+     the value that user specifies in the attribute.
+     We also prepare the category string for first level handler name.  */
+  switch (nds32_isr_vectors[vector_id].category)
+    {
+    case NDS32_ISR_INTERRUPT:
+      vector_number_offset = 9;
+      c_str = "i";
+      break;
+    case NDS32_ISR_EXCEPTION:
+      vector_number_offset = 0;
+      c_str = "e";
+      break;
+    case NDS32_ISR_NONE:
+    case NDS32_ISR_RESET:
+      /* Normally it should not be here.  */
+      gcc_unreachable ();
+      break;
+    }
+
+  /* Prepare save reg string for first level handler name.  */
+  switch (nds32_isr_vectors[vector_id].save_reg)
+    {
+    case NDS32_SAVE_ALL:
+      sr_str = "sa";
+      break;
+    case NDS32_PARTIAL_SAVE:
+      sr_str = "ps";
+      break;
+    }
+
+  /* Prepare nested type string for first level handler name.  */
+  switch (nds32_isr_vectors[vector_id].nested_type)
+    {
+    case NDS32_NESTED:
+      nt_str = "ns";
+      break;
+    case NDS32_NOT_NESTED:
+      nt_str = "nn";
+      break;
+    case NDS32_NESTED_READY:
+      nt_str = "nr";
+      break;
+    }
+
+  /* Currently we have 4-byte or 16-byte size for each vector.
+     If it is 4-byte, the first level handler name has suffix string "_4b".  */
+  vs_str = (nds32_isr_vector_size == 4) ? "_4b" : "";
+
+  /* Now we can create first level handler name.  */
+  snprintf (first_level_handler_name, sizeof (first_level_handler_name),
+	    "_nds32_%s_%s_%s%s", c_str, sr_str, nt_str, vs_str);
+
+  /* Prepare vector section and symbol name.  */
+  snprintf (section_name, sizeof (section_name),
+	    ".nds32_vector.%02d", vector_id);
+  snprintf (symbol_name, sizeof (symbol_name),
+	    "_nds32_vector_%02d%s", vector_id, vs_str);
+
+  /* Everything is ready.  We can start emitting the vector section content.  */
+  nds32_emit_section_head_template (section_name, symbol_name,
+				    floor_log2 (nds32_isr_vector_size), false);
+
+  /* According to the vector size, the instructions in the
+     vector section may be different.  */
+  if (nds32_isr_vector_size == 4)
+    {
+      /* This block is for 4-byte vector size.
+         Hardware $VID support is necessary and only one instruction
+         is needed in vector section.  */
+      fprintf (asm_out_file, "\tj\t%s ! jump to first level handler\n",
+			     first_level_handler_name);
+    }
+  else
+    {
+      /* This block is for 16-byte vector size.
+         There is NO hardware $VID, so we need several instructions
+         such as pushing GPRs and preparing the software vid at vector section.
+         For pushing GPRs, there are four variations for
+         16-byte vector content and we have to handle each combination.
+         For preparing the software vid, note that vector_number_offset
+         needs to be subtracted from the vid.  */
+      if (TARGET_REDUCED_REGS)
+	{
+	  if (nds32_isr_vectors[vector_id].save_reg == NDS32_SAVE_ALL)
+	    {
+	      /* Case of reduced set registers and save_all attribute.  */
+	      fprintf (asm_out_file, "\t! reduced set regs + save_all\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r15, [$sp], $r15, 0xf\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r0, [$sp], $r10, 0x0\n");
+	    }
+	  else
+	    {
+	      /* Case of reduced set registers and partial_save attribute.  */
+	      fprintf (asm_out_file, "\t! reduced set regs + partial_save\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r15, [$sp], $r15, 0x2\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r0, [$sp], $r5, 0x0\n");
+	    }
+	}
+      else
+	{
+	  if (nds32_isr_vectors[vector_id].save_reg == NDS32_SAVE_ALL)
+	    {
+	      /* Case of full set registers and save_all attribute.  */
+	      fprintf (asm_out_file, "\t! full set regs + save_all\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r0, [$sp], $r27, 0xf\n");
+	    }
+	  else
+	    {
+	      /* Case of full set registers and partial_save attribute.  */
+	      fprintf (asm_out_file, "\t! full set regs + partial_save\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r15, [$sp], $r27, 0x2\n");
+	      fprintf (asm_out_file, "\tsmw.adm\t$r0, [$sp], $r5, 0x0\n");
+	    }
+	}
+
+      fprintf (asm_out_file, "\tmovi\t$r0, %d ! preparing software vid\n",
+			     vector_id - vector_number_offset);
+      fprintf (asm_out_file, "\tj\t%s ! jump to first level handler\n",
+			     first_level_handler_name);
+    }
+
+  nds32_emit_section_tail_template (symbol_name);
+}
+
+/* Function to emit isr reset handler content.
+   Including all jmptbl/vector references, jmptbl section,
+   vector section, nmi handler section, and warm handler section.  */
+static void
+nds32_emit_isr_reset_content (void)
+{
+  unsigned int i;
+  unsigned int total_n_vectors;
+  const char *vs_str;
+  char reset_handler_name[100];
+  char section_name[100];
+  char symbol_name[100];
+
+  total_n_vectors = nds32_isr_vectors[0].total_n_vectors;
+  vs_str = (nds32_isr_vector_size == 4) ? "_4b" : "";
+
+  fprintf (asm_out_file, "\t! RESET HANDLER CONTENT - BEGIN !\n");
+
+  /* Create references in .rodata according to total number of vectors.  */
+  fprintf (asm_out_file, "\t.section\t.rodata\n");
+  fprintf (asm_out_file, "\t.align\t2\n");
+
+  /* Emit jmptbl references.  */
+  fprintf (asm_out_file, "\t ! references to jmptbl section entries\n");
+  for (i = 0; i < total_n_vectors; i++)
+    fprintf (asm_out_file, "\t.word\t_nds32_jmptbl_%02d\n", i);
+
+  /* Emit vector references.  */
+  fprintf (asm_out_file, "\t ! references to vector section entries\n");
+  for (i = 0; i < total_n_vectors; i++)
+    fprintf (asm_out_file, "\t.word\t_nds32_vector_%02d%s\n", i, vs_str);
+
+  /* Emit jmptbl_00 section.  */
+  snprintf (section_name, sizeof (section_name), ".nds32_jmptbl.00");
+  snprintf (symbol_name, sizeof (symbol_name), "_nds32_jmptbl_00");
+
+  fprintf (asm_out_file, "\t! ....................................\n");
+  nds32_emit_section_head_template (section_name, symbol_name, 2, true);
+  fprintf (asm_out_file, "\t.word\t%s\n",
+			 nds32_isr_vectors[0].func_name);
+  nds32_emit_section_tail_template (symbol_name);
+
+  /* Emit vector_00 section.  */
+  snprintf (section_name, sizeof (section_name), ".nds32_vector.00");
+  snprintf (symbol_name, sizeof (symbol_name), "_nds32_vector_00%s", vs_str);
+  snprintf (reset_handler_name, sizeof (reset_handler_name),
+	    "_nds32_reset%s", vs_str);
+
+  fprintf (asm_out_file, "\t! ....................................\n");
+  nds32_emit_section_head_template (section_name, symbol_name,
+				    floor_log2 (nds32_isr_vector_size), false);
+  fprintf (asm_out_file, "\tj\t%s ! jump to reset handler\n",
+			 reset_handler_name);
+  nds32_emit_section_tail_template (symbol_name);
+
+  /* Emit nmi handler section.  */
+  snprintf (section_name, sizeof (section_name), ".nds32_nmih");
+  snprintf (symbol_name, sizeof (symbol_name), "_nds32_nmih");
+
+  fprintf (asm_out_file, "\t! ....................................\n");
+  nds32_emit_section_head_template (section_name, symbol_name, 2, true);
+  fprintf (asm_out_file, "\t.word\t%s\n",
+			 (strlen (nds32_isr_vectors[0].nmi_name) == 0)
+			 ? "0"
+			 : nds32_isr_vectors[0].nmi_name);
+  nds32_emit_section_tail_template (symbol_name);
+
+  /* Emit warm handler section.  */
+  snprintf (section_name, sizeof (section_name), ".nds32_wrh");
+  snprintf (symbol_name, sizeof (symbol_name), "_nds32_wrh");
+
+  fprintf (asm_out_file, "\t! ....................................\n");
+  nds32_emit_section_head_template (section_name, symbol_name, 2, true);
+  fprintf (asm_out_file, "\t.word\t%s\n",
+			 (strlen (nds32_isr_vectors[0].warm_name) == 0)
+			 ? "0"
+			 : nds32_isr_vectors[0].warm_name);
+  nds32_emit_section_tail_template (symbol_name);
+
+  fprintf (asm_out_file, "\t! RESET HANDLER CONTENT - END !\n");
+}
+
+/* Function to construct isr vectors information array.
+   We need to check:
+     1. Traverse interrupt/exception/reset for setting vector id.
+     2. Only 'nested', 'not_nested', or 'nested_ready' in the attributes.
+     3. Only 'save_all' or 'partial_save' in the attributes.  */
+static void
+nds32_construct_isr_vectors_information (tree func_attrs,
+					 const char *func_name)
+{
+  int nested_p, not_nested_p, nested_ready_p;
+  int save_all_p, partial_save_p;
+  tree intr, excp, reset;
+  int temp_count;
+
+  nested_p = not_nested_p = nested_ready_p = 0;
+  save_all_p = partial_save_p = 0;
+  temp_count = 0;
+
+  /* We must check at MOST one attribute to set save-reg.  */
+  if (lookup_attribute ("save_all", func_attrs))
+    save_all_p = 1;
+  if (lookup_attribute ("partial_save", func_attrs))
+    partial_save_p = 1;
+
+  if ((save_all_p + partial_save_p) > 1)
+    error ("multiple save reg attributes to function %qs", func_name);
+
+  /* We must check at MOST one attribute to set nested-type.  */
+  if (lookup_attribute ("nested", func_attrs))
+    nested_p = 1;
+  if (lookup_attribute ("not_nested", func_attrs))
+    not_nested_p = 1;
+  if (lookup_attribute ("nested_ready", func_attrs))
+    nested_ready_p = 1;
+
+  if ((nested_p + not_nested_p + nested_ready_p) > 1)
+    error ("multiple nested type attributes to function %qs", func_name);
+
+  /* We must check at MOST one attribute to set interrupt/exception/reset.
+     We also get its tree instance in the process.  */
+  intr = lookup_attribute ("interrupt", func_attrs);
+  excp = lookup_attribute ("exception", func_attrs);
+  reset = lookup_attribute ("reset", func_attrs);
+  if (intr)
+    temp_count++;
+  if (excp)
+    temp_count++;
+  if (reset)
+    temp_count++;
+
+  if (temp_count > 1)
+    error ("multiple interrupt attributes to function %qs", func_name);
+
+  /* If there is no interrupt/exception/reset, we can return immediately.  */
+  if (temp_count == 0)
+    return;
+
+  /* If we are here, either we have interrupt/exception,
+     or reset attribute.  */
+  if (intr || excp)
+    {
+      tree id_list;
+      unsigned int lower_bound, upper_bound;
+      unsigned int vector_number_offset;
+
+      /* The way to handle interrupt or exception is the same,
+         we just need to take care of actual vector number.
+         For interrupt(0..63), the actual vector number is (9..72).
+         For exception(1..8), the actual vector number is (1..8).  */
+      lower_bound = (intr) ? (0) : (1);
+      upper_bound = (intr) ? (63) : (8);
+      vector_number_offset = (intr) ? (9) : (0);
+
+      /* Prepare id list so that we can traverse and set vector id.  */
+      id_list = (intr) ? (TREE_VALUE (intr)) : (TREE_VALUE (excp));
+
+      while (id_list)
+	{
+	  tree id;
+
+	  /* Pick up each vector id value.  */
+	  id = TREE_VALUE (id_list);
+	  if (TREE_CODE (id) == INTEGER_CST
+	      && TREE_INT_CST_LOW (id) >= lower_bound
+	      && TREE_INT_CST_LOW (id) <= upper_bound)
+	    {
+	      int vector_id;
+
+	      /* Add vector_number_offset to get actual vector number.  */
+	      vector_id = TREE_INT_CST_LOW (id) + vector_number_offset;
+
+	      /* Enable corresponding vector and set function name.  */
+	      nds32_isr_vectors[vector_id].category = (intr)
+						      ? (NDS32_ISR_INTERRUPT)
+						      : (NDS32_ISR_EXCEPTION);
+	      strcpy (nds32_isr_vectors[vector_id].func_name, func_name);
+
+	      /* Set register saving scheme.  */
+	      if (save_all_p)
+	        nds32_isr_vectors[vector_id].save_reg = NDS32_SAVE_ALL;
+	      else if (partial_save_p)
+	        nds32_isr_vectors[vector_id].save_reg = NDS32_PARTIAL_SAVE;
+
+	      /* Set nested type.  */
+	      if (nested_p)
+	        nds32_isr_vectors[vector_id].nested_type = NDS32_NESTED;
+	      else if (not_nested_p)
+	        nds32_isr_vectors[vector_id].nested_type = NDS32_NOT_NESTED;
+	      else if (nested_ready_p)
+	        nds32_isr_vectors[vector_id].nested_type = NDS32_NESTED_READY;
+	    }
+	  else
+	    {
+	      /* Issue error if it is not a valid integer value.  */
+	      error ("invalid id value for interrupt/exception attribute");
+	    }
+
+	  /* Advance to next id.  */
+	  id_list = TREE_CHAIN (id_list);
+	}
+    }
+  else
+    {
+      tree id_list;
+      tree id;
+      tree nmi, warm;
+      unsigned int lower_bound;
+      unsigned int upper_bound;
+
+      /* Deal with reset attribute.  Its vector number is always 0.  */
+      nds32_isr_vectors[0].category = NDS32_ISR_RESET;
+
+      /* Prepare id_list and identify id value so that
+         we can set total number of vectors.  */
+      id_list = TREE_VALUE (reset);
+      id = TREE_VALUE (id_list);
+
+      /* The maximum number of user interrupts is 64.  */
+      lower_bound = 0;
+      upper_bound = 64;
+
+      if (TREE_CODE (id) == INTEGER_CST
+	  && TREE_INT_CST_LOW (id) >= lower_bound
+	  && TREE_INT_CST_LOW (id) <= upper_bound)
+	{
+	  /* The total vectors = interrupt + exception numbers + reset.
+	     There are 8 exception and 1 reset in nds32 architecture.  */
+	  nds32_isr_vectors[0].total_n_vectors = TREE_INT_CST_LOW (id) + 8 + 1;
+	  strcpy (nds32_isr_vectors[0].func_name, func_name);
+	}
+      else
+	{
+	  /* Issue error if it is not a valid integer value.  */
+	  error ("invalid id value for reset attribute");
+	}
+
+      /* Retrieve nmi and warm function.  */
+      nmi = lookup_attribute ("nmi", func_attrs);
+      warm = lookup_attribute ("warm", func_attrs);
+
+      if (nmi != NULL_TREE)
+	{
+	  tree nmi_func_list;
+	  tree nmi_func;
+
+	  nmi_func_list = TREE_VALUE (nmi);
+	  nmi_func = TREE_VALUE (nmi_func_list);
+
+	  if (TREE_CODE (nmi_func) == IDENTIFIER_NODE)
+	    {
+	      /* Record nmi function name.  */
+	      strcpy (nds32_isr_vectors[0].nmi_name,
+		      IDENTIFIER_POINTER (nmi_func));
+	    }
+	  else
+	    {
+	      /* Issue error if it is not a valid nmi function.  */
+	      error ("invalid nmi function for reset attribute");
+	    }
+	}
+
+      if (warm != NULL_TREE)
+	{
+	  tree warm_func_list;
+	  tree warm_func;
+
+	  warm_func_list = TREE_VALUE (warm);
+	  warm_func = TREE_VALUE (warm_func_list);
+
+	  if (TREE_CODE (warm_func) == IDENTIFIER_NODE)
+	    {
+	      /* Record warm function name.  */
+	      strcpy (nds32_isr_vectors[0].warm_name,
+		      IDENTIFIER_POINTER (warm_func));
+	    }
+	  else
+	    {
+	      /* Issue error if it is not a valid warm function.  */
+	      error ("invalid warm function for reset attribute");
+	    }
+	}
+    }
+}
+
+/* Function to construct call frame information
+   so that GCC can use it to output debug information.  */
+static rtx
+nds32_construct_call_frame_information (void)
+{
+  int regno, offset;
+  int num_of_pushed_regs;
+  int dwarf_info_index;
+  /* This value is used if the $sp is not
+     at the position after pushing registers.  */
+  int sp_post_adjust;
+
+  rtx dwarf_info;
+
+  rtx dwarf_reg;
+  rtx dwarf_modify_sp_insn;
+  rtx dwarf_save_reg_insn;
+
+  /* Prepare dwarf information about stack adjustment of push behavior,
+     'dwarf_info = gen_rtx_SEQUENCE (VOIDmode, rtvec_alloc (num_save_regs))'
+     so that we can create like this:
+     (sequence [(set (reg:SI SP_REGNUM)
+                     (plus (reg:SI SP_REGNUM) (const_int -24)))
+                (set (mem (plus (reg:SI SP_REGNUM) (const_int 20)))
+                     (reg:SI LP_REGNUM))
+                (set (mem (plus (reg:SI SP_REGNUM) (const_int 16)))
+                     (reg:SI GP_REGNUM))
+                (set (mem (plus (reg:SI SP_REGNUM) (const_int 12)))
+                     (reg:SI FP_REGNUM))
+                (set (mem (plus (reg:SI SP_REGNUM) (const_int  8)))
+                     (reg:SI last_regno))
+                ...
+                (set (mem (plus (reg:SI SP_REGNUM) (const_int  0)))
+                     (reg:SI first_regno))] */
+
+  /* Check how many callee-saved registers should be pushed.  */
+  if (cfun->machine->callee_saved_regs_first_regno == SP_REGNUM
+      && cfun->machine->callee_saved_regs_last_regno == SP_REGNUM)
+    {
+      /* No other callee-saved registers are pushed.  */
+      num_of_pushed_regs = 0;
+    }
+  else
+    {
+      num_of_pushed_regs = cfun->machine->callee_saved_regs_last_regno
+			   - cfun->machine->callee_saved_regs_first_regno + 1;
+    }
+
+  /* Note that using push25/pop25 is only available under V3/V3M ISA.  */
+  if (TARGET_V3PUSH)
+    {
+      /* We are using v3push, $fp and $lp will always be pushed.  */
+      num_of_pushed_regs += 2;
+      /* We are using v3push, $gp will be pushed as well.  */
+      num_of_pushed_regs += 1;
+
+      /* We need to check whether we can use 'v3push Re,imm8u' form.  */
+      sp_post_adjust = cfun->machine->local_size
+		       + cfun->machine->out_args_size
+		       + cfun->machine->callee_saved_area_padding_bytes;
+      if (satisfies_constraint_Iu08 (GEN_INT (sp_post_adjust))
+	  && NDS32_DOUBLE_WORD_ALIGN_P (sp_post_adjust))
+	{
+	  /* 'v3push Re,imm8u' form is available,
+	     stack pointer will be further changed after pushing registers.
+	     So here we DO NOTHING to leave the sp_post_adjust unchanged
+	     and its value will be used later.  */
+	}
+      else
+	{
+	  /* 'v3push Re,0' form will be used,
+	     stack pointer is just at the position after pushing registers.
+	     We set sp_post_adjust value to zero so that it has
+	     no effect when calculating offsets later.  */
+	  sp_post_adjust = 0;
+	}
+    }
+  else
+    {
+      /* We are using normal multiple-push; check fp_size and lp_size
+         to see whether they require space to construct
+         call frame information.  */
+      if (cfun->machine->fp_size)
+        num_of_pushed_regs++;
+      if (cfun->machine->lp_size)
+        num_of_pushed_regs++;
+
+      /* We are using normal multiple-push; check gp_size
+         to see whether it requires space to construct
+         call frame information.  */
+      if (cfun->machine->gp_size)
+        num_of_pushed_regs++;
+
+      /* Stack pointer is just at the position after pushing registers.
+         We set sp_post_adjust value to zero so that it has
+         no effect when calculating offsets later.  */
+      sp_post_adjust = 0;
+    }
+
+  /* Because we need to generate one store for each pushed register
+     plus one stack adjustment, we need to allocate space for
+     num_of_pushed_regs + 1 elements in the (sequence [...]) rtx pattern.  */
+  dwarf_info
+    = gen_rtx_SEQUENCE (VOIDmode, rtvec_alloc (num_of_pushed_regs + 1));
+  /* Index 0 in the vector is reserved for stack adjustment information.  */
+  dwarf_info_index = 1;
+
+  /* Create first stack adjustment information,
+     remember to consider sp_post_adjust.  */
+  dwarf_modify_sp_insn
+    = gen_rtx_SET (VOIDmode, stack_pointer_rtx,
+		   plus_constant (Pmode, stack_pointer_rtx,
+				  -4 * num_of_pushed_regs - sp_post_adjust));
+  RTX_FRAME_RELATED_P (dwarf_modify_sp_insn) = 1;
+  XVECEXP (dwarf_info, 0, 0) = dwarf_modify_sp_insn;
+
+  /* Create $lp saving information.  */
+  if (cfun->machine->lp_size)
+    {
+      offset    = 4 * (num_of_pushed_regs - dwarf_info_index)
+		  + sp_post_adjust; /* Remember to consider sp_post_adjust.  */
+      dwarf_reg = gen_rtx_REG (SImode, LP_REGNUM);
+      dwarf_save_reg_insn
+	= gen_rtx_SET (VOIDmode,
+		       gen_frame_mem (SImode,
+				      plus_constant (Pmode, stack_pointer_rtx,
+						     offset)),
+		       dwarf_reg);
+      RTX_FRAME_RELATED_P (dwarf_save_reg_insn) = 1;
+
+      XVECEXP (dwarf_info, 0, dwarf_info_index) = dwarf_save_reg_insn;
+      dwarf_info_index++;
+    }
+
+  /* Create $gp saving information.  */
+  if (cfun->machine->gp_size)
+    {
+      offset    = 4 * (num_of_pushed_regs - dwarf_info_index)
+		  + sp_post_adjust; /* Remember to consider sp_post_adjust.  */
+      dwarf_reg = gen_rtx_REG (SImode, GP_REGNUM);
+      dwarf_save_reg_insn
+	= gen_rtx_SET (VOIDmode,
+		       gen_frame_mem (SImode,
+				      plus_constant (Pmode, stack_pointer_rtx,
+						     offset)),
+		       dwarf_reg);
+      RTX_FRAME_RELATED_P (dwarf_save_reg_insn) = 1;
+
+      XVECEXP (dwarf_info, 0, dwarf_info_index) = dwarf_save_reg_insn;
+      dwarf_info_index++;
+    }
+
+  /* Create $fp saving information.  */
+  if (cfun->machine->fp_size)
+    {
+      offset    = 4 * (num_of_pushed_regs - dwarf_info_index)
+		  + sp_post_adjust; /* Remember to consider sp_post_adjust.  */
+      dwarf_reg = gen_rtx_REG (SImode, FP_REGNUM);
+      dwarf_save_reg_insn
+	= gen_rtx_SET (VOIDmode,
+		       gen_frame_mem (SImode,
+				      plus_constant (Pmode, stack_pointer_rtx,
+						     offset)),
+		       dwarf_reg);
+      RTX_FRAME_RELATED_P (dwarf_save_reg_insn) = 1;
+
+      XVECEXP (dwarf_info, 0, dwarf_info_index) = dwarf_save_reg_insn;
+      dwarf_info_index++;
+    }
+
+  /* Create all other pushed registers information,
+     starting from last regno.  */
+  for (regno = cfun->machine->callee_saved_regs_last_regno;
+       regno >= cfun->machine->callee_saved_regs_first_regno;
+       regno--)
+    {
+      /* If regno == SP_REGNUM, it means there are no other pushed registers,
+         so we have to leave this loop immediately.  */
+      if (regno == SP_REGNUM)
+	break;
+
+      offset    = 4 * (num_of_pushed_regs - dwarf_info_index)
+		  + sp_post_adjust; /* Remember to consider sp_post_adjust.  */
+      dwarf_reg = gen_rtx_REG (SImode, regno);
+      dwarf_save_reg_insn
+	= gen_rtx_SET (VOIDmode,
+		       gen_frame_mem (SImode,
+				      plus_constant (Pmode, stack_pointer_rtx,
+						     offset)),
+		       dwarf_reg);
+      RTX_FRAME_RELATED_P (dwarf_save_reg_insn) = 1;
+
+      XVECEXP (dwarf_info, 0, dwarf_info_index) = dwarf_save_reg_insn;
+      dwarf_info_index++;
+    }
+
+  return dwarf_info;
+}
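The save-slot offset arithmetic repeated above, `4 * (num_of_pushed_regs - dwarf_info_index) + sp_post_adjust`, can be checked in isolation. This is a minimal standalone sketch (the helper name is illustrative, not part of the patch), verified against the `(sequence [...])` example in the introductory comment:

```c
#include <assert.h>

/* Hypothetical standalone model of the save-slot offset formula used
   when building the (sequence [...]) dwarf information above.  Slot
   'dwarf_info_index' (1-based, starting from the register stored at
   the highest address) lives 'offset' bytes above the new $sp.  */
static int
save_slot_offset (int num_of_pushed_regs, int dwarf_info_index,
                  int sp_post_adjust)
{
  return 4 * (num_of_pushed_regs - dwarf_info_index) + sp_post_adjust;
}
```

With the six pushed registers of the introductory example and no extra sp adjustment, $lp (slot 1) lands at offset 20, $gp (slot 2) at 16, and $fp (slot 3) at 12, matching the `(const_int 20)`, `(const_int 16)`, `(const_int 12)` addresses in the comment.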
+
+/* Function that may create more instructions
+   when adjusting the stack pointer by a large value.
+
+   On the nds32 target, 'addi' can be used for stack pointer
+   adjustment in the prologue/epilogue stage.
+   However, sometimes there are so many local variables that
+   the adjustment value cannot fit in the 'addi' instruction.
+   One solution is to move the value into a register
+   and then use the 'add' instruction.
+   In practice, we use TA_REGNUM ($r15) to accomplish this purpose.
+   Also, we need to return zero for the sp adjustment so that
+   the prologue/epilogue knows there is no need to create
+   an 'addi' instruction.  */
+static int
+nds32_force_addi_stack_int (int full_value)
+{
+  int adjust_value;
+
+  rtx tmp_reg;
+  rtx value_move_insn;
+  rtx sp_adjust_insn;
+
+  if (!satisfies_constraint_Is15 (GEN_INT (full_value)))
+    {
+      /* The value cannot fit in a single addi instruction.
+         Create more instructions: move the value into a register
+         and then add it to the stack pointer.  */
+
+      /* $r15 is going to be temporary register to hold the value.  */
+      tmp_reg = gen_rtx_REG (SImode, TA_REGNUM);
+
+      /* Create one more instruction to move value
+         into the temporary register.  */
+      value_move_insn = gen_movsi (tmp_reg, GEN_INT (full_value));
+      /* Emit rtx into insn list and receive its transformed insn rtx.  */
+      value_move_insn = emit_insn (value_move_insn);
+
+      /* At prologue, we need to tell GCC that this is frame related insn,
+         so that we can consider this instruction to output debug information.
+         If full_value is NEGATIVE, it means this function
+         is invoked by expand_prologue.  */
+      if (full_value < 0)
+	RTX_FRAME_RELATED_P (value_move_insn) = 1;
+
+      /* Create new 'add' rtx.  */
+      sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				   stack_pointer_rtx,
+				   tmp_reg);
+      /* Emit rtx into insn list and receive its transformed insn rtx.  */
+      sp_adjust_insn = emit_insn (sp_adjust_insn);
+
+      /* At prologue, we need to tell GCC that this is frame related insn,
+         so that we can consider this instruction to output debug information.
+         If full_value is NEGATIVE, it means this function
+         is invoked by expand_prologue.  */
+      if (full_value < 0)
+	RTX_FRAME_RELATED_P (sp_adjust_insn) = 1;
+
+      /* We have used alternative way to adjust stack pointer value.
+         Return zero so that prologue/epilogue
+         will not generate other instructions.  */
+      return 0;
+    }
+  else
+    {
+      /* The value fits in an addi instruction.
+         However, remember to make it a positive value
+         because we want to return the 'adjustment' amount.  */
+      adjust_value = (full_value < 0) ? (-full_value) : (full_value);
+
+      return adjust_value;
+    }
+}
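The decision made by this function can be modelled in a few lines. The range below is an assumption, not taken from the patch: the `Is15` constraint is treated as a signed 15-bit immediate, i.e. [-16384, 16383]:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical model of nds32_force_addi_stack_int: values outside the
   assumed signed 15-bit 'addi' range go through the movi/add path and
   report 0; in-range values report the positive adjustment amount.  */
static int
model_force_addi_stack_int (int full_value)
{
  if (full_value < -16384 || full_value > 16383)
    return 0;                  /* movi + add path; no 'addi' emitted.  */
  return abs (full_value);     /* single 'addi' path.  */
}
```

A negative in-range value (prologue) reports its absolute value, while a large adjustment reports 0 so the caller emits nothing further.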
+
+/* Return true if MODE/TYPE need double word alignment.  */
+static bool
+nds32_needs_double_word_align (enum machine_mode mode, const_tree type)
+{
+  return (GET_MODE_ALIGNMENT (mode) > PARM_BOUNDARY
+	  || (type && TYPE_ALIGN (type) > PARM_BOUNDARY));
+}
+
+/* Return true if FUNC is a naked function.  */
+static bool
+nds32_naked_function_p (tree func)
+{
+  tree t;
+
+  if (TREE_CODE (func) != FUNCTION_DECL)
+    abort ();
+
+  t = lookup_attribute ("naked", DECL_ATTRIBUTES (func));
+
+  return (t != NULL_TREE);
+}
+
+/* Function that checks whether 'X' is a valid address register.
+   The variable 'STRICT' is very important to
+   the decision on valid register numbers.
+
+   STRICT : true
+     => We are in reload pass or after reload pass.
+        The register number should be strictly limited in general registers.
+
+   STRICT : false
+     => Before reload pass, we are free to use any register number.  */
+static bool
+nds32_address_register_rtx_p (rtx x, bool strict)
+{
+  int regno;
+
+  if (GET_CODE (x) != REG)
+    return false;
+
+  regno = REGNO (x);
+
+  if (strict)
+    return REGNO_OK_FOR_BASE_P (regno);
+  else
+    return true;
+}
+
+/* Function that checks whether 'INDEX' is a valid index rtx for an address.
+
+   OUTER_MODE : Machine mode of the outer address rtx.
+   OUTER_CODE : rtx code of the outer address rtx.
+        INDEX : Check whether this rtx is valid as an index for an address.
+         BASE : The rtx of the base register.
+       STRICT : If true, we are in the reload pass or after reload pass.  */
+static bool
+nds32_legitimate_index_p (enum machine_mode outer_mode,
+			  enum rtx_code outer_code ATTRIBUTE_UNUSED,
+			  rtx index,
+			  rtx base ATTRIBUTE_UNUSED,
+			  bool strict)
+{
+  int regno;
+  rtx op0;
+  rtx op1;
+
+  switch (GET_CODE (index))
+    {
+    case REG:
+      regno = REGNO (index);
+      /* If we are in reload pass or after reload pass,
+         we need to limit it to general register.  */
+      if (strict)
+	return REGNO_OK_FOR_INDEX_P (regno);
+      else
+	return true;
+
+    case CONST_INT:
+      /* The alignment of the integer value is determined by 'outer_mode'.  */
+      if (GET_MODE_SIZE (outer_mode) == 1)
+	{
+	  /* Further check if the value is legal for the 'outer_mode'.  */
+	  if (!satisfies_constraint_Is15 (index))
+	    goto non_legitimate_index;
+
+	  /* Passed all tests; the value is valid, return true.  */
+	  return true;
+	}
+      if (GET_MODE_SIZE (outer_mode) == 2
+	  && NDS32_HALF_WORD_ALIGN_P (INTVAL (index)))
+	{
+	  /* Further check if the value is legal for the 'outer_mode'.  */
+	  if (!satisfies_constraint_Is16 (index))
+	    goto non_legitimate_index;
+
+	  /* Passed all tests; the value is valid, return true.  */
+	  return true;
+	}
+      if (GET_MODE_SIZE (outer_mode) == 4
+	  && NDS32_SINGLE_WORD_ALIGN_P (INTVAL (index)))
+	{
+	  /* Further check if the value is legal for the 'outer_mode'.  */
+	  if (!satisfies_constraint_Is17 (index))
+	    goto non_legitimate_index;
+
+	  /* Passed all tests; the value is valid, return true.  */
+	  return true;
+	}
+      if (GET_MODE_SIZE (outer_mode) == 8
+	  && NDS32_SINGLE_WORD_ALIGN_P (INTVAL (index)))
+	{
+	  /* Further check if the value is legal for the 'outer_mode'.  */
+	  if (!satisfies_constraint_Is17 (gen_int_mode (INTVAL (index) + 4,
+							SImode)))
+	    goto non_legitimate_index;
+
+	  /* Passed all tests; the value is valid, return true.  */
+	  return true;
+	}
+
+      goto non_legitimate_index;
+
+    case MULT:
+      op0 = XEXP (index, 0);
+      op1 = XEXP (index, 1);
+
+      if (REG_P (op0) && CONST_INT_P (op1))
+	{
+	  int multiplier;
+	  multiplier = INTVAL (op1);
+
+	  /* We only allow (mult reg const_int_1)
+	     or (mult reg const_int_2) or (mult reg const_int_4).  */
+	  if (multiplier != 1 && multiplier != 2 && multiplier != 4)
+	    goto non_legitimate_index;
+
+	  regno = REGNO (op0);
+	  /* Limit it in general registers if we are
+	     in reload pass or after reload pass.  */
+	  if (strict)
+	    return REGNO_OK_FOR_INDEX_P (regno);
+	  else
+	    return true;
+	}
+
+      goto non_legitimate_index;
+
+    case ASHIFT:
+      op0 = XEXP (index, 0);
+      op1 = XEXP (index, 1);
+
+      if (REG_P (op0) && CONST_INT_P (op1))
+	{
+	  int sv;
+	  /* op1 is already the sv value used for the left shift.  */
+	  sv = INTVAL (op1);
+
+	  /* We only allow (ashift reg const_int_0)
+	     or (ashift reg const_int_1) or (ashift reg const_int_2).  */
+	  if (sv != 0 && sv != 1 && sv != 2)
+	    goto non_legitimate_index;
+
+	  regno = REGNO (op0);
+	  /* Limit it in general registers if we are
+	     in reload pass or after reload pass.  */
+	  if (strict)
+	    return REGNO_OK_FOR_INDEX_P (regno);
+	  else
+	    return true;
+	}
+
+      goto non_legitimate_index;
+
+    default:
+      goto non_legitimate_index;
+    }
+
+non_legitimate_index:
+  return false;
+}
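The four CONST_INT cases above share one shape: an alignment test keyed to the access size plus a signed-range test. A simplified standalone model follows; the constraint widths are assumptions (`Is15`/`Is16`/`Is17` taken as signed 15/16/17-bit immediates), and the helper names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

/* Does 'value' fit in a signed immediate of the given bit width?  */
static bool
in_signed_bits (long value, int bits)
{
  long limit = 1L << (bits - 1);
  return value >= -limit && value < limit;
}

/* Hypothetical model of the CONST_INT index checks above, keyed on the
   byte size of 'outer_mode'.  */
static bool
model_const_index_ok (int outer_mode_size, long index)
{
  switch (outer_mode_size)
    {
    case 1:
      return in_signed_bits (index, 15);
    case 2:
      return index % 2 == 0 && in_signed_bits (index, 16);
    case 4:
      return index % 4 == 0 && in_signed_bits (index, 17);
    case 8:
      /* The second word of the pair must also be reachable.  */
      return index % 4 == 0 && in_signed_bits (index + 4, 17);
    default:
      return false;
    }
}
```

Note how the double-word case checks `index + 4` rather than `index`, mirroring the `gen_int_mode (INTVAL (index) + 4, SImode)` test in the real code.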
+
+/* Function to expand builtin function for
+   '[(unspec [(reg)])]'.  */
+static rtx
+nds32_expand_builtin_null_ftype_reg (enum insn_code icode,
+				     tree exp, rtx target)
+{
+  tree arg0;
+  rtx op0;
+  rtx pat;
+  enum machine_mode mode0;
+
+  /* Grab the incoming argument and emit its RTL.  */
+  arg0 = CALL_EXPR_ARG (exp, 0);
+  op0 = expand_normal (arg0);
+
+  /* Determine the modes of the instruction operands.
+     Note that we don't have left-hand-side result,
+     so operands[0] IS FOR arg0.  */
+  mode0 = insn_data[icode].operand[0].mode;
+
+  /* Refer to nds32-intrinsic.md,
+     we want to ensure operands[0] is register_operand.  */
+  if (!((*insn_data[icode].operand[0].predicate) (op0, mode0)))
+    op0 = copy_to_mode_reg (mode0, op0);
+
+  /* Emit new instruction and return original target.  */
+  pat = GEN_FCN (icode) (op0);
+
+  if (!pat)
+    return 0;
+
+  emit_insn (pat);
+
+  return target;
+}
+
+/* Function to expand builtin function for
+   '[(set (reg) (unspec [(imm)]))]'.  */
+static rtx
+nds32_expand_builtin_reg_ftype_imm (enum insn_code icode,
+				    tree exp, rtx target)
+{
+  tree arg0;
+  rtx op0;
+  rtx pat;
+  enum machine_mode tmode, mode0;
+
+  /* Grab the incoming argument and emit its RTL.  */
+  arg0 = CALL_EXPR_ARG (exp, 0);
+  op0 = expand_normal (arg0);
+
+  /* Determine the modes of the instruction operands.
+     Note that we have left-hand-side result,
+     so operands[0] IS FOR target,
+        operands[1] IS FOR arg0.  */
+  tmode = insn_data[icode].operand[0].mode;
+  mode0 = insn_data[icode].operand[1].mode;
+
+  /* Refer to nds32-intrinsic.md,
+     we want to ensure operands[0] is register_operand.  */
+  if (!((*insn_data[icode].operand[0].predicate) (target, tmode)))
+    target = copy_to_mode_reg (tmode, target);
+  /* Refer to nds32-intrinsic.md,
+     we want to ensure operands[1] is immediate_operand.  */
+  if (!((*insn_data[icode].operand[1].predicate) (op0, mode0)))
+    error ("the first argument of this builtin function must be a constant");
+
+  /* Emit new instruction and return original target.  */
+  pat = GEN_FCN (icode) (target, op0);
+
+  if (!pat)
+    return 0;
+
+  emit_insn (pat);
+
+  return target;
+}
+
+/* Function to expand builtin function for
+   '[(unspec [(reg) (imm)])]' pattern.  */
+static rtx
+nds32_expand_builtin_null_ftype_reg_imm (enum insn_code icode,
+					 tree exp, rtx target)
+{
+  tree arg0, arg1;
+  rtx op0, op1;
+  rtx pat;
+  enum machine_mode mode0, mode1;
+
+  /* Grab the incoming argument and emit its RTL.  */
+  arg0 = CALL_EXPR_ARG (exp, 0);
+  arg1 = CALL_EXPR_ARG (exp, 1);
+  op0 = expand_normal (arg0);
+  op1 = expand_normal (arg1);
+
+  /* Determine the modes of the instruction operands.  */
+  mode0 = insn_data[icode].operand[0].mode;
+  mode1 = insn_data[icode].operand[1].mode;
+
+  /* Refer to nds32-intrinsic.md,
+     we want to ensure operands[0] is register_operand.  */
+  if (!((*insn_data[icode].operand[0].predicate) (op0, mode0)))
+    op0 = copy_to_mode_reg (mode0, op0);
+  /* Refer to nds32-intrinsic.md,
+     we want to ensure operands[1] is immediate_operand.  */
+  if (!((*insn_data[icode].operand[1].predicate) (op1, mode1)))
+    error ("the second argument of this builtin function must be a constant");
+
+  /* Emit new instruction and return original target.  */
+  pat = GEN_FCN (icode) (op0, op1);
+
+  if (!pat)
+    return 0;
+
+  emit_insn (pat);
+
+  return target;
+}
+
+/* A helper function to return a character based on byte size.  */
+static char
+nds32_byte_to_size (int byte)
+{
+  switch (byte)
+    {
+    case 4:
+      return 'w';
+    case 2:
+      return 'h';
+    case 1:
+      return 'b';
+    default:
+      /* Normally it should not be here.  */
+      gcc_unreachable ();
+    }
+}
+
+/* A helper function to check if this function should contain prologue.  */
+static int
+nds32_have_prologue_p (void)
+{
+  int i;
+
+  for (i = 0; i < 28; i++)
+    if (NDS32_REQUIRED_CALLEE_SAVED_P (i))
+      return 1;
+
+  return (flag_pic
+	  || NDS32_REQUIRED_CALLEE_SAVED_P (FP_REGNUM)
+	  || NDS32_REQUIRED_CALLEE_SAVED_P (LP_REGNUM));
+}
+
+/* Return alignment 2 (log base 2) if the next instruction
+   after LABEL is 4 bytes long.  */
+int
+nds32_target_alignment (rtx label)
+{
+  rtx insn;
+
+  if (optimize_size)
+    return 0;
+
+  insn = next_active_insn (label);
+
+  if (insn == 0)
+    return 0;
+  else if ((get_attr_length (insn) % 4) == 0)
+    return 2;
+  else
+    return 0;
+}
+
+/* ------------------------------------------------------------------------ */
+
+/* PART 3: Implement target hook stuff definitions.  */
+
+/* Register Classes.  */
+
+static unsigned char
+nds32_class_max_nregs (reg_class_t rclass ATTRIBUTE_UNUSED,
+		       enum machine_mode mode)
+{
+  /* Return the maximum number of consecutive registers
+     needed to represent "mode" in a register of "rclass".  */
+  return ((GET_MODE_SIZE (mode) + UNITS_PER_WORD - 1) / UNITS_PER_WORD);
+}
+
+static int
+nds32_register_priority (int hard_regno)
+{
+  /* Encourage LRA to use r0-r7 when optimizing for size.  */
+  if (optimize_size && hard_regno < 8)
+    return 4;
+  return 3;
+}
+
+
+/* Stack Layout and Calling Conventions.  */
+
+/* There are three kinds of pointer concepts used in the GCC compiler:
+
+     frame pointer: A pointer to the first location of local variables.
+     stack pointer: A pointer to the top of a stack frame.
+     argument pointer: A pointer to the incoming arguments.
+
+   In nds32 target calling convention, we are using 8-byte alignment.
+   Besides, we would like to have each stack frame of a function includes:
+
+     [Block A]
+       1. previous hard frame pointer
+       2. return address
+       3. callee-saved registers
+       4. <padding bytes> (we will calculate it in nds32_compute_stack_frame()
+                           and save it at
+                           cfun->machine->callee_saved_area_padding_bytes)
+
+     [Block B]
+       1. local variables
+       2. spilling location
+       3. <padding bytes> (it will be calculated by GCC itself)
+       4. incoming arguments
+       5. <padding bytes> (it will be calculated by GCC itself)
+
+     [Block C]
+       1. <padding bytes> (it will be calculated by GCC itself)
+       2. outgoing arguments
+
+   We 'wrap' these blocks together with
+   hard frame pointer ($r28) and stack pointer ($r31).
+   By applying the basic frame/stack/argument pointers concept,
+   the layout of a stack frame should be like this:
+
+                            |    |
+       old stack pointer ->  ----
+                            |    | \
+                            |    |   saved arguments for
+                            |    |   vararg functions
+                            |    | /
+      hard frame pointer ->   --
+      & argument pointer    |    | \
+                            |    |   previous hardware frame pointer
+                            |    |   return address
+                            |    |   callee-saved registers
+                            |    | /
+           frame pointer ->   --
+                            |    | \
+                            |    |   local variables
+                            |    |   and incoming arguments
+                            |    | /
+                              --
+                            |    | \
+                            |    |   outgoing
+                            |    |   arguments
+                            |    | /
+           stack pointer ->  ----
+
+  $SFP and $AP are used to represent the frame pointer and argument pointer,
+  both of which will be eliminated to the hard frame pointer.  */
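Since the calling convention described above uses 8-byte alignment, each block is padded up to a double word before being stacked. A tiny sketch of that rounding (the helper name is illustrative, not an actual nds32.h macro):

```c
#include <assert.h>

/* Illustrative double-word (8-byte) round-up applied to the stack
   frame blocks described above; not an actual nds32.h macro.  */
static int
round_up_to_double_word (int bytes)
{
  return (bytes + 7) & ~7;
}
```

For example, a 12-byte callee-saved area gets 4 padding bytes (recorded in `cfun->machine->callee_saved_area_padding_bytes`), while a 16-byte area needs none.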
+
+/* Eliminating Frame Pointer and Arg Pointer.  */
+
+static bool
+nds32_can_eliminate (const int from_reg, const int to_reg)
+{
+  if (from_reg == ARG_POINTER_REGNUM && to_reg == STACK_POINTER_REGNUM)
+    return true;
+
+  if (from_reg == ARG_POINTER_REGNUM && to_reg == HARD_FRAME_POINTER_REGNUM)
+    return true;
+
+  if (from_reg == FRAME_POINTER_REGNUM && to_reg == STACK_POINTER_REGNUM)
+    return true;
+
+  if (from_reg == FRAME_POINTER_REGNUM && to_reg == HARD_FRAME_POINTER_REGNUM)
+    return true;
+
+  return false;
+}
+
+/* Passing Arguments in Registers.  */
+
+static rtx
+nds32_function_arg (cumulative_args_t ca, enum machine_mode mode,
+		    const_tree type, bool named)
+{
+  CUMULATIVE_ARGS *cum = get_cumulative_args (ca);
+
+  /* The last time this hook is called,
+     it is called with MODE == VOIDmode.  */
+  if (mode == VOIDmode)
+    return NULL_RTX;
+
+  /* Nameless arguments are passed on the stack.  */
+  if (!named)
+    return NULL_RTX;
+
+  /* If there are still registers available, return it.  */
+  if (NDS32_ARG_PASS_IN_REG_P (cum->reg_offset, mode, type))
+    {
+      /* Pick up the next available register number.  */
+      return gen_rtx_REG (mode,
+			  NDS32_AVAILABLE_REGNUM_FOR_ARG (cum->reg_offset,
+							  mode,
+							  type));
+    }
+  else
+    {
+      /* No register available, return NULL_RTX.
+         The compiler will use the stack to pass the argument instead.  */
+      return NULL_RTX;
+    }
+}
+
+static void
+nds32_function_arg_advance (cumulative_args_t ca, enum machine_mode mode,
+			    const_tree type, bool named)
+{
+  CUMULATIVE_ARGS *cum = get_cumulative_args (ca);
+
+  /* Advance to the next register.
+     Only named arguments can be advanced.  */
+  if (named)
+    {
+      cum->reg_offset
+	= NDS32_AVAILABLE_REGNUM_FOR_ARG (cum->reg_offset, mode, type)
+	  - NDS32_GPR_ARG_FIRST_REGNUM
+	  + NDS32_NEED_N_REGS_FOR_ARG (mode, type);
+    }
+}
+
+static unsigned int
+nds32_function_arg_boundary (enum machine_mode mode, const_tree type)
+{
+  return (nds32_needs_double_word_align (mode, type)
+	  ? NDS32_DOUBLE_WORD_ALIGNMENT
+	  : PARM_BOUNDARY);
+}
+
+/* How Scalar Function Values Are Returned.  */
+
+static rtx
+nds32_function_value (const_tree ret_type,
+		      const_tree fn_decl_or_type ATTRIBUTE_UNUSED,
+		      bool outgoing ATTRIBUTE_UNUSED)
+{
+  enum machine_mode mode;
+  int unsignedp;
+
+  mode = TYPE_MODE (ret_type);
+  unsignedp = TYPE_UNSIGNED (ret_type);
+
+  mode = promote_mode (ret_type, mode, &unsignedp);
+
+  return gen_rtx_REG (mode, NDS32_GPR_RET_FIRST_REGNUM);
+}
+
+static rtx
+nds32_libcall_value (enum machine_mode mode,
+		     const_rtx fun ATTRIBUTE_UNUSED)
+{
+  return gen_rtx_REG (mode, NDS32_GPR_RET_FIRST_REGNUM);
+}
+
+static bool
+nds32_function_value_regno_p (const unsigned int regno)
+{
+  return (regno == NDS32_GPR_RET_FIRST_REGNUM);
+}
+
+/* Function Entry and Exit.  */
+
+/* The content produced from this function
+   will be placed before prologue body.  */
+static void
+nds32_asm_function_prologue (FILE *file,
+			     HOST_WIDE_INT size ATTRIBUTE_UNUSED)
+{
+  int r;
+  const char *func_name;
+  tree attrs;
+  tree name;
+
+  /* All stack frame information is supposed to be
+     already computed when expanding prologue.
+     The result is in cfun->machine.
+     DO NOT call nds32_compute_stack_frame() here
+     because it may corrupt the essential information.  */
+
+  fprintf (file, "\t! BEGIN PROLOGUE\n");
+  fprintf (file, "\t!     fp needed: %d\n", frame_pointer_needed);
+  fprintf (file, "\t!  pretend_args: %d\n", cfun->machine->va_args_size);
+  fprintf (file, "\t!    local_size: %d\n", cfun->machine->local_size);
+  fprintf (file, "\t! out_args_size: %d\n", cfun->machine->out_args_size);
+
+  /* Use df_regs_ever_live_p() to detect if the register
+     is ever used in the current function.  */
+  fprintf (file, "\t! registers ever_live: ");
+  for (r = 0; r < 32; r++)
+    {
+      if (df_regs_ever_live_p (r))
+	fprintf (file, "%s, ", reg_names[r]);
+    }
+  fputc ('\n', file);
+
+  /* Display the attributes of this function.  */
+  fprintf (file, "\t! function attributes: ");
+  /* GCC builds the attributes list in reverse order,
+     so we use nreverse() to make it look like
+     the order that the user specifies.  */
+  attrs = nreverse (DECL_ATTRIBUTES (current_function_decl));
+
+  /* If there are no attributes, print out "None".  */
+  if (!attrs)
+    fprintf (file, "None");
+
+  /* If there are some attributes, check whether we need to
+     construct isr vector information.  */
+  func_name = IDENTIFIER_POINTER (DECL_NAME (current_function_decl));
+  nds32_construct_isr_vectors_information (attrs, func_name);
+
+  /* Display all attributes of this function.  */
+  while (attrs)
+    {
+      name = TREE_PURPOSE (attrs);
+      fprintf (file, "%s ", IDENTIFIER_POINTER (name));
+
+      /* Pick up the next attribute.  */
+      attrs = TREE_CHAIN (attrs);
+    }
+  fputc ('\n', file);
+}
+
+/* This function is used after the rtl prologue has been expanded.  */
+static void
+nds32_asm_function_end_prologue (FILE *file)
+{
+  fprintf (file, "\t! END PROLOGUE\n");
+
+  /* If the frame pointer is NOT needed and -mfp-as-gp is issued,
+     we can generate the special directive ".omit_fp_begin"
+     to guide the linker in doing fp-as-gp optimization.
+     However, for a naked function, which means
+     it should not have a prologue/epilogue,
+     using fp-as-gp still requires saving $fp via push/pop and
+     there is no benefit to using fp-as-gp on such a small function.
+     So we need to make sure this function is NOT naked as well.  */
+  if (!frame_pointer_needed
+      && !cfun->machine->naked_p
+      && cfun->machine->fp_as_gp_p)
+    {
+      fprintf (file, "\t! ----------------------------------------\n");
+      fprintf (file, "\t! Guide linker to do "
+		     "link time optimization: fp-as-gp\n");
+      fprintf (file, "\t! We add one more instruction to "
+		     "initialize $fp near to $gp location.\n");
+      fprintf (file, "\t! If linker fails to use fp-as-gp transformation,\n");
+      fprintf (file, "\t! this extra instruction should be "
+		     "eliminated at link stage.\n");
+      fprintf (file, "\t.omit_fp_begin\n");
+      fprintf (file, "\tla\t$fp,_FP_BASE_\n");
+      fprintf (file, "\t! ----------------------------------------\n");
+    }
+}
+
+/* This function is used before the rtl epilogue has been expanded.  */
+static void
+nds32_asm_function_begin_epilogue (FILE *file)
+{
+  /* If the frame pointer is NOT needed and -mfp-as-gp is issued,
+     we can generate the special directive ".omit_fp_end"
+     to claim the fp-as-gp optimization range.
+     However, for a naked function, which means
+     it should not have a prologue/epilogue,
+     using fp-as-gp still requires saving $fp via push/pop and
+     there is no benefit to using fp-as-gp on such a small function.
+     So we need to make sure this function is NOT naked as well.  */
+  if (!frame_pointer_needed
+      && !cfun->machine->naked_p
+      && cfun->machine->fp_as_gp_p)
+    {
+      fprintf (file, "\t! ----------------------------------------\n");
+      fprintf (file, "\t! Claim the range of fp-as-gp "
+		     "link time optimization\n");
+      fprintf (file, "\t.omit_fp_end\n");
+      fprintf (file, "\t! ----------------------------------------\n");
+    }
+
+  fprintf (file, "\t! BEGIN EPILOGUE\n");
+}
+
+/* The content produced from this function
+   will be placed after epilogue body.  */
+static void
+nds32_asm_function_epilogue (FILE *file,
+			     HOST_WIDE_INT size ATTRIBUTE_UNUSED)
+{
+  fprintf (file, "\t! END EPILOGUE\n");
+}
+
+static void
+nds32_asm_output_mi_thunk (FILE *file, tree thunk ATTRIBUTE_UNUSED,
+			   HOST_WIDE_INT delta,
+			   HOST_WIDE_INT vcall_offset ATTRIBUTE_UNUSED,
+			   tree function)
+{
+  int this_regno;
+
+  /* Make sure unwind info is emitted for the thunk if needed.  */
+  final_start_function (emit_barrier (), file, 1);
+
+  this_regno = (aggregate_value_p (TREE_TYPE (TREE_TYPE (function)), function)
+		? 1
+		: 0);
+
+  if (delta != 0)
+    {
+      if (satisfies_constraint_Is15 (GEN_INT (delta)))
+        {
+          fprintf (file, "\taddi\t$r%d, $r%d, %ld\n",
+		   this_regno, this_regno, delta);
+        }
+      else if (satisfies_constraint_Is20 (GEN_INT (delta)))
+        {
+          fprintf (file, "\tmovi\t$ta, %ld\n", delta);
+          fprintf (file, "\tadd\t$r%d, $r%d, $ta\n", this_regno, this_regno);
+        }
+      else
+        {
+          fprintf (file, "\tsethi\t$ta, hi20(%ld)\n", delta);
+          fprintf (file, "\tori\t$ta, $ta, lo12(%ld)\n", delta);
+          fprintf (file, "\tadd\t$r%d, $r%d, $ta\n", this_regno, this_regno);
+        }
+    }
+
+  fprintf (file, "\tb\t");
+  assemble_name (file, XSTR (XEXP (DECL_RTL (function), 0), 0));
+  fprintf (file, "\n");
+
+  final_end_function ();
+}
+
+/* Permitting tail calls.  */
+
+static bool
+nds32_warn_func_return (tree decl)
+{
+  /* Naked functions are implemented entirely in assembly, including the
+     return sequence, so suppress warnings about this.  */
+  return !nds32_naked_function_p (decl);
+}
+
+
+/* Implementing the Varargs Macros.  */
+
+static bool
+nds32_strict_argument_naming (cumulative_args_t ca ATTRIBUTE_UNUSED)
+{
+  /* Return true so that all the named arguments for FUNCTION_ARG have named=1.
+     If we returned false, then for a variadic function all named arguments
+     EXCEPT the last would be treated as named.  */
+  return true;
+}
+
+
+/* Trampolines for Nested Functions.  */
+
+static void
+nds32_asm_trampoline_template (FILE *f)
+{
+  if (TARGET_REDUCED_REGS)
+    {
+      /* Trampoline is not supported on reduced-set registers yet.  */
+      sorry ("a nested function is not supported for reduced registers");
+    }
+  else
+    {
+      asm_fprintf (f, "\t! Trampoline code template\n");
+      asm_fprintf (f, "\t! This code fragment will be copied "
+		      "into stack on demand\n");
+
+      asm_fprintf (f, "\tmfusr\t$r16,$pc\n");
+      asm_fprintf (f, "\tlwi\t$r15,[$r16 + 20] "
+		      "! load nested function address\n");
+      asm_fprintf (f, "\tlwi\t$r16,[$r16 + 16] "
+		      "! load chain_value\n");
+      asm_fprintf (f, "\tjr\t$r15\n");
+    }
+
+  /* Preserve space ($pc + 16) for saving chain_value,
+     nds32_trampoline_init will fill the value in this slot.  */
+  asm_fprintf (f, "\t! space for saving chain_value\n");
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);
+
+  /* Preserve space ($pc + 20) for saving nested function address,
+     nds32_trampoline_init will fill the value in this slot.  */
+  asm_fprintf (f, "\t! space for saving nested function address\n");
+  assemble_aligned_integer (UNITS_PER_WORD, const0_rtx);
+}
+
+/* Emit RTL insns to initialize the variable parts of a trampoline.  */
+static void
+nds32_trampoline_init (rtx m_tramp, tree fndecl, rtx chain_value)
+{
+  int i;
+
+  /* Nested function address.  */
+  rtx fnaddr;
+  /* The memory rtx that is going to
+     be filled with chain_value.  */
+  rtx chain_value_mem;
+  /* The memory rtx that is going to
+     be filled with nested function address.  */
+  rtx nested_func_mem;
+
+  /* Start address of trampoline code in stack, for doing cache sync.  */
+  rtx sync_cache_addr;
+  /* Temporary register for sync instruction.  */
+  rtx tmp_reg;
+  /* Instruction-cache sync instruction,
+     requesting an argument as the starting address.  */
+  rtx isync_insn;
+  /* For convenience of comparison.  */
+  int tramp_align_in_bytes;
+
+  /* Trampoline is not supported on reduced-set registers yet.  */
+  if (TARGET_REDUCED_REGS)
+    sorry ("a nested function is not supported for reduced registers");
+
+  /* STEP 1: Copy trampoline code template into stack,
+             fill up essential data into stack.  */
+
+  /* Extract nested function address rtx.  */
+  fnaddr = XEXP (DECL_RTL (fndecl), 0);
+
+  /* m_tramp is memory rtx that is going to be filled with trampoline code.
+     We have nds32_asm_trampoline_template() to emit template pattern.  */
+  emit_block_move (m_tramp, assemble_trampoline_template (),
+		   GEN_INT (TRAMPOLINE_SIZE), BLOCK_OP_NORMAL);
+
+  /* After copying trampoline code into stack,
+     fill chain_value into stack.  */
+  chain_value_mem = adjust_address (m_tramp, SImode, 16);
+  emit_move_insn (chain_value_mem, chain_value);
+  /* After copying trampoline code into stack,
+     fill nested function address into stack.  */
+  nested_func_mem = adjust_address (m_tramp, SImode, 20);
+  emit_move_insn (nested_func_mem, fnaddr);
+
+  /* STEP 2: Sync instruction-cache.  */
+
+  /* We have successfully filled trampoline code into stack.
+     However, in order to execute code in stack correctly,
+     we must sync instruction cache.  */
+  sync_cache_addr = XEXP (m_tramp, 0);
+  tmp_reg         = gen_reg_rtx (SImode);
+  isync_insn      = gen_unspec_volatile_isync (tmp_reg);
+
+  /* Because nds32_cache_block_size is in bytes,
+     we get trampoline alignment in bytes for convenient comparison.  */
+  tramp_align_in_bytes = TRAMPOLINE_ALIGNMENT / BITS_PER_UNIT;
+
+  if (tramp_align_in_bytes >= nds32_cache_block_size
+      && (tramp_align_in_bytes % nds32_cache_block_size) == 0)
+    {
+      /* Under this condition, the starting address of trampoline
+         must be aligned to the starting address of each cache block
+         and we do not have to worry about cross-boundary issue.  */
+      for (i = 0;
+	   i < (TRAMPOLINE_SIZE + nds32_cache_block_size - 1)
+	       / nds32_cache_block_size;
+	   i++)
+	{
+	  emit_move_insn (tmp_reg,
+			  plus_constant (Pmode, sync_cache_addr,
+					 nds32_cache_block_size * i));
+	  emit_insn (isync_insn);
+	}
+    }
+  else if (TRAMPOLINE_SIZE > nds32_cache_block_size)
+    {
+      /* The starting address of the trampoline code
+         may not be aligned to a cache block boundary,
+         so the trampoline code may cross two cache blocks.
+         We need to sync the last element, which is 4 bytes in size,
+         of the trampoline template.  */
+      for (i = 0;
+	   i < (TRAMPOLINE_SIZE + nds32_cache_block_size - 1)
+	       / nds32_cache_block_size;
+	   i++)
+	{
+	  emit_move_insn (tmp_reg,
+			  plus_constant (Pmode, sync_cache_addr,
+					 nds32_cache_block_size * i));
+	  emit_insn (isync_insn);
+	}
+
+      /* The last element of the trampoline template is 4 bytes in size.  */
+      emit_move_insn (tmp_reg,
+		      plus_constant (Pmode, sync_cache_addr,
+				     TRAMPOLINE_SIZE - 4));
+      emit_insn (isync_insn);
+    }
+  else
+    {
+      /* This is the simplest case.
+         Because TRAMPOLINE_SIZE is less than or
+         equal to nds32_cache_block_size,
+         we can just sync start address and
+         the last element of trampoline code.  */
+
+      /* Sync the starting address of the trampoline code.  */
+      emit_move_insn (tmp_reg, sync_cache_addr);
+      emit_insn (isync_insn);
+      /* Sync the last element, which is 4 bytes in size,
+         of the trampoline template.  */
+      emit_move_insn (tmp_reg,
+		      plus_constant (Pmode, sync_cache_addr,
+				     TRAMPOLINE_SIZE - 4));
+      emit_insn (isync_insn);
+    }
+
+  /* Set instruction serialization barrier
+     to guarantee the correct operations.  */
+  emit_insn (gen_unspec_volatile_isb ());
+}
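As a side note, the loop bound used in the cache-sync code above is a ceiling division of `TRAMPOLINE_SIZE` by the cache block size. A minimal standalone sketch of that arithmetic (the function name and sizes are hypothetical, not part of the port):

```c
#include <assert.h>

/* Hypothetical helper mirroring the loop bound in
   nds32_trampoline_init: the number of cache blocks that must
   be synced to cover a trampoline of tramp_size bytes is the
   ceiling of tramp_size / cache_block_size.  */
static int
sync_block_count (int tramp_size, int cache_block_size)
{
  return (tramp_size + cache_block_size - 1) / cache_block_size;
}
```

For a 24-byte trampoline and 16-byte cache blocks this yields 2, matching the two-block case the code handles by additionally syncing the last 4-byte element.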
+
+
+/* Addressing Modes.  */
+
+static bool
+nds32_legitimate_address_p (enum machine_mode mode, rtx x, bool strict)
+{
+  /* For the (mem:DI addr) or (mem:DF addr) case, we only allow 'addr'
+     to be a [reg], [symbol_ref], [const], or [reg + const_int] pattern.  */
+  if (mode == DImode || mode == DFmode)
+    {
+      /* Allow [Reg + const_int] addressing mode.  */
+      if (GET_CODE (x) == PLUS)
+	{
+	  if (nds32_address_register_rtx_p (XEXP (x, 0), strict)
+	      && nds32_legitimate_index_p (mode, GET_CODE (x),
+					   XEXP (x, 1), XEXP (x, 0),
+					   strict)
+	      && CONST_INT_P (XEXP (x, 1)))
+	    return true;
+
+	  else if (nds32_address_register_rtx_p (XEXP (x, 1), strict)
+		   && nds32_legitimate_index_p (mode, GET_CODE (x),
+						XEXP (x, 0), XEXP (x, 1),
+						strict)
+		   && CONST_INT_P (XEXP (x, 0)))
+	    return true;
+	}
+
+      /* Now check [reg], [symbol_ref], and [const].  */
+      if (GET_CODE (x) != REG
+	  && GET_CODE (x) != SYMBOL_REF
+	  && GET_CODE (x) != CONST)
+	goto non_legitimate_address;
+    }
+
+  /* Check if 'x' is a valid address.  */
+  switch (GET_CODE (x))
+    {
+    case REG:
+      /* (mem (reg A)) => [Ra] */
+      return nds32_address_register_rtx_p (x, strict);
+
+    case SYMBOL_REF:
+
+      if (!TARGET_GP_DIRECT
+	  && (reload_completed
+	      || reload_in_progress
+	      || lra_in_progress))
+	goto non_legitimate_address;
+
+      /* (mem (symbol_ref A)) => [symbol_ref] */
+      return !currently_expanding_to_rtl;
+
+    case CONST:
+
+      if (!TARGET_GP_DIRECT
+	  && (reload_completed
+	      || reload_in_progress
+	      || lra_in_progress))
+	goto non_legitimate_address;
+
+      /* (mem (const (...)))
+         => [ + const_addr ], where const_addr = symbol_ref + const_int */
+      if (GET_CODE (XEXP (x, 0)) == PLUS)
+	{
+	  rtx plus_op = XEXP (x, 0);
+
+	  rtx op0 = XEXP (plus_op, 0);
+	  rtx op1 = XEXP (plus_op, 1);
+
+	  if (GET_CODE (op0) == SYMBOL_REF && CONST_INT_P (op1))
+	    return true;
+	  else
+	    goto non_legitimate_address;
+	}
+      else
+	{
+	  goto non_legitimate_address;
+	}
+
+    case POST_MODIFY:
+      /* (mem (post_modify (reg) (plus (reg) (reg))))
+         => [Ra], Rb */
+      /* (mem (post_modify (reg) (plus (reg) (const_int))))
+         => [Ra], const_int */
+      if (GET_CODE (XEXP (x, 0)) == REG
+	  && GET_CODE (XEXP (x, 1)) == PLUS)
+	{
+	  rtx plus_op = XEXP (x, 1);
+
+	  rtx op0 = XEXP (plus_op, 0);
+	  rtx op1 = XEXP (plus_op, 1);
+
+	  if (nds32_address_register_rtx_p (op0, strict)
+	      && nds32_legitimate_index_p (mode, GET_CODE (x), op1, x, strict))
+	    return true;
+	  else
+	    goto non_legitimate_address;
+	}
+      else
+	{
+	  goto non_legitimate_address;
+	}
+
+    case POST_INC:
+    case POST_DEC:
+      /* (mem (post_inc reg)) => [Ra], 1/2/4 */
+      /* (mem (post_dec reg)) => [Ra], -1/-2/-4 */
+      /* The 1/2/4 or -1/-2/-4 have been displayed in nds32.md.
+         We only need to deal with register Ra.  */
+      if (nds32_address_register_rtx_p (XEXP (x, 0), strict))
+	return true;
+      else
+	goto non_legitimate_address;
+
+    case PLUS:
+      /* virtual_stack_vars_rtx will eventually be transformed into SP or FP.
+         Force [virtual_stack_vars + reg or const] into a register first,
+         making the offset_address combination with
+         other addressing modes possible.  */
+      if (mode == BLKmode
+	  && REG_P (XEXP (x, 0))
+	  && (REGNO (XEXP (x, 0)) == VIRTUAL_STACK_VARS_REGNUM))
+	goto non_legitimate_address;
+
+      /* (mem (plus reg const_int))
+         => [Ra + imm] */
+      /* (mem (plus reg reg))
+         => [Ra + Rb] */
+      /* (mem (plus (mult reg const_int) reg))
+         => [Ra + Rb << sv] */
+      if (nds32_address_register_rtx_p (XEXP (x, 0), strict)
+	  && nds32_legitimate_index_p (mode, GET_CODE (x),
+				       XEXP (x, 1), XEXP (x, 0),
+				       strict))
+	return true;
+      else if (nds32_address_register_rtx_p (XEXP (x, 1), strict)
+	       && nds32_legitimate_index_p (mode, GET_CODE (x),
+					    XEXP (x, 0), XEXP (x, 1),
+					    strict))
+	return true;
+      else
+	goto non_legitimate_address;
+
+    case LO_SUM:
+      if (!TARGET_GP_DIRECT)
+	return true;
+      /* FALLTHRU */
+
+    default:
+      goto non_legitimate_address;
+    }
+
+non_legitimate_address:
+  return false;
+}
+
+static rtx
+nds32_legitimize_address (rtx x,
+			  rtx oldx ATTRIBUTE_UNUSED,
+			  enum machine_mode mode ATTRIBUTE_UNUSED)
+{
+  /* 'mode' is the machine mode of memory operand
+     that uses 'x' as address.  */
+
+  return x;
+}
+
+
+/* Describing Relative Costs of Operations.  */
+
+static int
+nds32_register_move_cost (enum machine_mode mode ATTRIBUTE_UNUSED,
+			  reg_class_t from,
+			  reg_class_t to)
+{
+  if (from == HIGH_REGS || to == HIGH_REGS)
+    return 6;
+
+  return 2;
+}
+
+static int
+nds32_memory_move_cost (enum machine_mode mode ATTRIBUTE_UNUSED,
+			reg_class_t rclass ATTRIBUTE_UNUSED,
+			bool in ATTRIBUTE_UNUSED)
+{
+  return 8;
+}
+
+/* This target hook describes the relative costs of RTL expressions.
+   Return 'true' when all subexpressions of x have been processed.
+   Return 'false' to sum the costs of sub-rtx, plus cost of this operation.
+   Refer to gcc/rtlanal.c for more information.  */
+static bool
+nds32_rtx_costs (rtx x,
+		 int code,
+		 int outer_code,
+		 int opno ATTRIBUTE_UNUSED,
+		 int *total,
+		 bool speed)
+{
+  /* According to 'speed', goto suitable cost model section.  */
+  if (speed)
+    goto performance_cost;
+  else
+    goto size_cost;
+
+
+performance_cost:
+  /* This is section for performance cost model.  */
+
+  /* In gcc/rtl.h, the default value of COSTS_N_INSNS(N) is N*4.
+     We treat it as 4-cycle cost for each instruction
+     under performance consideration.  */
+  switch (code)
+    {
+    case SET:
+      /* For 'SET' rtx, we need to return false
+         so that it can recursively calculate costs.  */
+      return false;
+
+    case USE:
+      /* Used in combine.c as a marker.  */
+      *total = 0;
+      break;
+
+    case MULT:
+      *total = COSTS_N_INSNS (5);
+      break;
+
+    case DIV:
+    case UDIV:
+    case MOD:
+    case UMOD:
+      *total = COSTS_N_INSNS (7);
+      break;
+
+    default:
+      *total = COSTS_N_INSNS (1);
+      break;
+    }
+
+  return true;
+
+
+size_cost:
+  /* This is section for size cost model.  */
+
+  /* In gcc/rtl.h, the default value of COSTS_N_INSNS(N) is N*4.
+     We treat it as 4-byte cost for each instruction
+     under code size consideration.  */
+  switch (code)
+    {
+    case SET:
+      /* For 'SET' rtx, we need to return false
+         so that it can recursively calculate costs.  */
+      return false;
+
+    case USE:
+      /* Used in combine.c as a marker.  */
+      *total = 0;
+      break;
+
+    case CONST_INT:
+      /* All instructions involving constant operation
+         need to be considered for cost evaluation.  */
+      if (outer_code == SET)
+	{
+	  /* (set X imm5s), use movi55, 2-byte cost.
+	     (set X imm20s), use movi, 4-byte cost.
+	     (set X BIG_INT), use sethi/ori, 8-byte cost.  */
+	  if (satisfies_constraint_Is05 (x))
+	    *total = COSTS_N_INSNS (1) - 2;
+	  else if (satisfies_constraint_Is20 (x))
+	    *total = COSTS_N_INSNS (1);
+	  else
+	    *total = COSTS_N_INSNS (2);
+	}
+      else if (outer_code == PLUS || outer_code == MINUS)
+	{
+	  /* Possible addi333/subi333 or subi45/addi45, 2-byte cost.
+	     General case, cost 1 instruction with 4-byte.  */
+	  if (satisfies_constraint_Iu05 (x))
+	    *total = COSTS_N_INSNS (1) - 2;
+	  else
+	    *total = COSTS_N_INSNS (1);
+	}
+      else if (outer_code == ASHIFT)
+	{
+	  /* Possible slli333, 2-byte cost.
+	     General case, cost 1 instruction with 4-byte.  */
+	  if (satisfies_constraint_Iu03 (x))
+	    *total = COSTS_N_INSNS (1) - 2;
+	  else
+	    *total = COSTS_N_INSNS (1);
+	}
+      else if (outer_code == ASHIFTRT || outer_code == LSHIFTRT)
+	{
+	  /* Possible srai45 or srli45, 2-byte cost.
+	     General case, cost 1 instruction with 4-byte.  */
+	  if (satisfies_constraint_Iu05 (x))
+	    *total = COSTS_N_INSNS (1) - 2;
+	  else
+	    *total = COSTS_N_INSNS (1);
+	}
+      else
+	{
+	  /* For other cases, simply set it 4-byte cost.  */
+	  *total = COSTS_N_INSNS (1);
+	}
+      break;
+
+    case CONST_DOUBLE:
+      /* It requires high part and low part processing, set it 8-byte cost.  */
+      *total = COSTS_N_INSNS (2);
+      break;
+
+    default:
+      /* For other cases, generally we set it 4-byte cost
+         and stop the recursive traversal.  */
+      *total = COSTS_N_INSNS (1);
+      break;
+    }
+
+  return true;
+}
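The size-cost cases above all build on the fact that `COSTS_N_INSNS (N)` expands to `N * 4` in GCC's rtl.h, so a 16-bit encoding is modeled as `COSTS_N_INSNS (1) - 2`. A small sketch of that arithmetic for the `movi` family (the macro and function here are illustrative re-definitions, not the real GCC interfaces):

```c
#include <assert.h>

/* Illustrative stand-in for GCC's COSTS_N_INSNS: N instructions
   of 4 bytes each under the size model.  */
#define MY_COSTS_N_INSNS(n) ((n) * 4)

/* Size cost of loading an immediate, mirroring the CONST_INT
   case under outer_code == SET: movi55 (2 bytes) for imm5s,
   movi (4 bytes) for imm20s, sethi/ori (8 bytes) otherwise.  */
static int
size_cost_of_move_imm (int is_imm5s, int is_imm20s)
{
  if (is_imm5s)
    return MY_COSTS_N_INSNS (1) - 2;   /* movi55: 2-byte encoding  */
  if (is_imm20s)
    return MY_COSTS_N_INSNS (1);       /* movi: 4-byte encoding    */
  return MY_COSTS_N_INSNS (2);         /* sethi/ori pair: 8 bytes  */
}
```

This makes the returned costs equal the instruction byte counts (2, 4, and 8), which is exactly what the size model intends.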
+
+static int
+nds32_address_cost (rtx address,
+		    enum machine_mode mode ATTRIBUTE_UNUSED,
+		    addr_space_t as ATTRIBUTE_UNUSED,
+		    bool speed)
+{
+  rtx plus0, plus1;
+  enum rtx_code code;
+
+  code = GET_CODE (address);
+
+  /* According to 'speed', goto suitable cost model section.  */
+  if (speed)
+    goto performance_cost;
+  else
+    goto size_cost;
+
+performance_cost:
+  /* This is section for performance cost model.  */
+
+  /* FALLTHRU, currently we use same cost model as size_cost.  */
+
+size_cost:
+  /* This is section for size cost model.  */
+
+  switch (code)
+    {
+    case POST_MODIFY:
+    case POST_INC:
+    case POST_DEC:
+      /* We encourage rtx that contains
+         POST_MODIFY/POST_INC/POST_DEC behavior.  */
+      return 0;
+
+    case SYMBOL_REF:
+      /* We can have gp-relative load/store for symbol_ref.
+         Have it 4-byte cost.  */
+      return COSTS_N_INSNS (1);
+
+    case CONST:
+      /* It is supposed to be the pattern (const (plus symbol_ref const_int)).
+         Have it 4-byte cost.  */
+      return COSTS_N_INSNS (1);
+
+    case REG:
+      /* Simply return 4-byte costs.  */
+      return COSTS_N_INSNS (1);
+
+    case PLUS:
+      /* We do not need to check if the address is a legitimate address,
+         because this hook is never called with an invalid address.
+         But we better check the range of
+         const_int value for cost, if it exists.  */
+      plus0 = XEXP (address, 0);
+      plus1 = XEXP (address, 1);
+
+      if (REG_P (plus0) && CONST_INT_P (plus1))
+        {
+	  /* If it is possible to be lwi333/swi333 form,
+	     make it 2-byte cost.  */
+	  if (satisfies_constraint_Iu05 (plus1))
+	    return (COSTS_N_INSNS (1) - 2);
+	  else
+	    return COSTS_N_INSNS (1);
+	}
+
+      /* For other 'plus' situation, make it cost 4-byte.  */
+      return COSTS_N_INSNS (1);
+
+    default:
+      break;
+    }
+
+  return COSTS_N_INSNS (4);
+}
+
+
+/* Defining the Output Assembler Language.  */
+
+/* The Overall Framework of an Assembler File.  */
+
+static void
+nds32_asm_file_start (void)
+{
+  int i;
+
+  default_file_start ();
+
+  /* Tell assembler that this asm code is generated by compiler.  */
+  fprintf (asm_out_file, "\t! This asm file is generated by compiler\n");
+  fprintf (asm_out_file, "\t.flag\tverbatim\n");
+  /* Give assembler the size of each vector for interrupt handler.  */
+  fprintf (asm_out_file, "\t! This vector size directive is required "
+			 "for checking inconsistency on interrupt handler\n");
+  fprintf (asm_out_file, "\t.vec_size\t%d\n", nds32_isr_vector_size);
+
+  /* If user enables '-mforce-fp-as-gp' or compiles programs with -Os,
+     the compiler may produce 'la $fp,_FP_BASE_' instruction
+     at prologue for fp-as-gp optimization.
+     We should emit weak reference of _FP_BASE_ to avoid undefined reference
+     in case user does not pass '--relax' option to linker.  */
+  if (TARGET_FORCE_FP_AS_GP || optimize_size)
+    {
+      fprintf (asm_out_file, "\t! This weak reference is required to do "
+			     "fp-as-gp link time optimization\n");
+      fprintf (asm_out_file, "\t.weak\t_FP_BASE_\n");
+    }
+  /* If user enables '-mex9', we should emit relaxation directive
+     to tell linker that this file is allowed to do ex9 optimization.  */
+  if (TARGET_EX9)
+    {
+      fprintf (asm_out_file, "\t! This relaxation directive is required "
+			     "to do ex9 link time optimization\n");
+      fprintf (asm_out_file, "\t.relax\tex9\n");
+    }
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  if (TARGET_ISA_V2)
+    fprintf (asm_out_file, "\t! ISA family\t\t: %s\n", "V2");
+  if (TARGET_ISA_V3)
+    fprintf (asm_out_file, "\t! ISA family\t\t: %s\n", "V3");
+  if (TARGET_ISA_V3M)
+    fprintf (asm_out_file, "\t! ISA family\t\t: %s\n", "V3M");
+
+  fprintf (asm_out_file, "\t! Endian setting\t: %s\n",
+			 ((TARGET_BIG_ENDIAN) ? "big-endian"
+					      : "little-endian"));
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  fprintf (asm_out_file, "\t! Use conditional move\t\t: %s\n",
+			 ((TARGET_CMOV) ? "Yes"
+					: "No"));
+  fprintf (asm_out_file, "\t! Use performance extension\t: %s\n",
+			 ((TARGET_PERF_EXT) ? "Yes"
+					    : "No"));
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  fprintf (asm_out_file, "\t! V3PUSH instructions\t: %s\n",
+			 ((TARGET_V3PUSH) ? "Yes"
+					  : "No"));
+  fprintf (asm_out_file, "\t! 16-bit instructions\t: %s\n",
+			 ((TARGET_16_BIT) ? "Yes"
+					  : "No"));
+  fprintf (asm_out_file, "\t! GP base access\t: %s\n",
+			 ((TARGET_GP_DIRECT) ? "Yes"
+					     : "No"));
+  fprintf (asm_out_file, "\t! Reduced registers set\t: %s\n",
+			 ((TARGET_REDUCED_REGS) ? "Yes"
+						: "No"));
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  if (optimize_size)
+    fprintf (asm_out_file, "\t! Optimization level\t: -Os\n");
+  else
+    fprintf (asm_out_file, "\t! Optimization level\t: -O%d\n", optimize);
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  fprintf (asm_out_file, "\t! Cache block size\t: %d\n",
+			 nds32_cache_block_size);
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  /* Initialize isr vector information array before compiling functions.  */
+  for (i = 0; i < NDS32_N_ISR_VECTORS; i++)
+    {
+      nds32_isr_vectors[i].category = NDS32_ISR_NONE;
+      strcpy (nds32_isr_vectors[i].func_name, "");
+      nds32_isr_vectors[i].save_reg = NDS32_PARTIAL_SAVE;
+      nds32_isr_vectors[i].nested_type = NDS32_NOT_NESTED;
+      nds32_isr_vectors[i].total_n_vectors = 0;
+      strcpy (nds32_isr_vectors[i].nmi_name, "");
+      strcpy (nds32_isr_vectors[i].warm_name, "");
+    }
+}
+
+static void
+nds32_asm_file_end (void)
+{
+  int i;
+
+  /* If all the vectors are NDS32_ISR_NONE, we can return immediately.  */
+  for (i = 0; i < NDS32_N_ISR_VECTORS; i++)
+    if (nds32_isr_vectors[i].category != NDS32_ISR_NONE)
+      break;
+
+  if (i == NDS32_N_ISR_VECTORS)
+    return;
+
+  /* At least one vector is NOT NDS32_ISR_NONE,
+     we should output isr vector information.  */
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+  fprintf (asm_out_file, "\t! The isr vector information:\n");
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+
+  /* Check reset handler first.  Its vector number is always 0.  */
+  if (nds32_isr_vectors[0].category == NDS32_ISR_RESET)
+    {
+      nds32_emit_isr_reset_content ();
+      fprintf (asm_out_file, "\t! ------------------------------------\n");
+    }
+
+  /* Check other vectors, starting from vector number 1.  */
+  for (i = 1; i < NDS32_N_ISR_VECTORS; i++)
+    {
+      if (nds32_isr_vectors[i].category == NDS32_ISR_INTERRUPT
+	  || nds32_isr_vectors[i].category == NDS32_ISR_EXCEPTION)
+	{
+	  /* Found one vector which is interrupt or exception.
+	     Output its jmptbl and vector section content.  */
+	  fprintf (asm_out_file, "\t! interrupt/exception vector %02d\n", i);
+	  fprintf (asm_out_file, "\t! ------------------------------------\n");
+	  nds32_emit_isr_jmptbl_section (i);
+	  fprintf (asm_out_file, "\t! ....................................\n");
+	  nds32_emit_isr_vector_section (i);
+	  fprintf (asm_out_file, "\t! ------------------------------------\n");
+	}
+    }
+
+  fprintf (asm_out_file, "\t! ------------------------------------\n");
+}
+
+/* Output and Generation of Labels.  */
+
+static void
+nds32_asm_globalize_label (FILE *stream, const char *name)
+{
+  fputs ("\t.global\t", stream);
+  assemble_name (stream, name);
+  fputs ("\n", stream);
+}
+
+/* Output of Assembler Instructions.  */
+
+static void
+nds32_print_operand (FILE *stream, rtx x, int code)
+{
+  int op_value;
+
+  switch (code)
+    {
+    case 0 :
+      /* Do nothing special.  */
+      break;
+
+    case 'V':
+      /* 'x' is supposed to be CONST_INT, get the value.  */
+      gcc_assert (CONST_INT_P (x));
+      op_value = INTVAL (x);
+
+      /* According to the Andes architecture,
+         the system/user register index range is 0 ~ 1023.
+         In order to avoid conflict between user-specified-integer value
+         and enum-specified-register value,
+         the 'enum nds32_intrinsic_registers' value
+         in nds32_intrinsic.h starts from 1024.  */
+      if (op_value < 1024 && op_value >= 0)
+	{
+	  /* If user gives integer value directly (0~1023),
+	     we just print out the value.  */
+	  fprintf (stream, "%d", op_value);
+	}
+      else if (op_value < 0
+	       || op_value >= ((int) ARRAY_SIZE (nds32_intrinsic_register_names)
+			       + 1024))
+	{
+	  /* The enum index value for array size is out of range.  */
+	  error ("intrinsic register index is out of range");
+	}
+      else
+	{
+	  /* If user uses the normal way with __NDS32_REG_XXX__ enum data,
+	     we can print out the register name.  Remember to subtract 1024.  */
+	  fprintf (stream, "%s",
+			   nds32_intrinsic_register_names[op_value - 1024]);
+	}
+
+      /* No need to handle following process, so return immediately.  */
+      return;
+
+    default :
+      /* Unknown flag.  */
+      output_operand_lossage ("invalid operand output code");
+      break;
+    }
+
+  switch (GET_CODE (x))
+    {
+    case LABEL_REF:
+    case SYMBOL_REF:
+      output_addr_const (stream, x);
+      break;
+
+    case REG:
+      /* Forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REGNO (x) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+
+      /* Normal cases, print out register name.  */
+      fputs (reg_names[REGNO (x)], stream);
+      break;
+
+    case MEM:
+      output_address (XEXP (x, 0));
+      break;
+
+    case CODE_LABEL:
+    case CONST_INT:
+    case CONST:
+      output_addr_const (stream, x);
+      break;
+
+    default:
+      /* Generally, output_addr_const () is able to handle most cases.
+         We want to see what CODE could appear,
+         so we use gcc_unreachable() to stop it.  */
+      debug_rtx (x);
+      gcc_unreachable ();
+      break;
+    }
+}
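The `'V'` operand handler above relies on a biasing convention: plain integers 0..1023 are printed as-is, while enum values from nds32_intrinsic.h start at 1024 and index the name table after subtracting the bias. A hypothetical two-entry table is enough to sketch the decode (the names and helper are illustrative, not the port's actual table):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical two-entry stand-in for the port's
   nds32_intrinsic_register_names[] table.  */
static const char *const demo_reg_names[] = { "$psw", "$ipc" };

/* Mirror of the 'V' decode: values below 1024 are raw register
   indices printed numerically; larger values are enum-biased
   and looked up after subtracting 1024.  */
static const char *
decode_intrinsic_reg (int op_value)
{
  if (op_value >= 0 && op_value < 1024)
    return "raw-index";                    /* printed numerically   */
  return demo_reg_names[op_value - 1024];  /* enum-biased lookup    */
}
```

The bias avoids any ambiguity between a user-supplied register number and an `__NDS32_REG_XXX__` enum constant.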
+
+static void
+nds32_print_operand_address (FILE *stream, rtx x)
+{
+  rtx op0, op1;
+
+  switch (GET_CODE (x))
+    {
+    case SYMBOL_REF:
+    case CONST:
+      /* [ + symbol_ref] */
+      /* [ + const_addr], where const_addr = symbol_ref + const_int */
+      fputs ("[ + ", stream);
+      output_addr_const (stream, x);
+      fputs ("]", stream);
+      break;
+
+    case REG:
+      /* Forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REGNO (x) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+
+      /* [Ra] */
+      fprintf (stream, "[%s]", reg_names[REGNO (x)]);
+      break;
+
+    case PLUS:
+      op0 = XEXP (x, 0);
+      op1 = XEXP (x, 1);
+
+      /* Checking op0, forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REG_P (op0)
+	  && REGNO (op0) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+      /* Checking op1, forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REG_P (op1)
+	  && REGNO (op1) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+
+      if (REG_P (op0) && CONST_INT_P (op1))
+	{
+	  /* [Ra + imm] */
+	  fprintf (stream, "[%s + (%d)]",
+			   reg_names[REGNO (op0)], (int)INTVAL (op1));
+	}
+      else if (REG_P (op0) && REG_P (op1))
+	{
+	  /* [Ra + Rb] */
+	  fprintf (stream, "[%s + %s]",
+			   reg_names[REGNO (op0)], reg_names[REGNO (op1)]);
+	}
+      else if (GET_CODE (op0) == MULT && REG_P (op1))
+	{
+	  /* [Ra + Rb << sv] */
+	  /* From observation, the pattern looks like:
+	       (plus:SI (mult:SI (reg:SI 58)
+	                         (const_int 4 [0x4]))
+	                (reg/f:SI 57))  */
+	  int sv;
+
+	  /* We need to set sv to output shift value.  */
+	  if (INTVAL (XEXP (op0, 1)) == 1)
+	    sv = 0;
+	  else if (INTVAL (XEXP (op0, 1)) == 2)
+	    sv = 1;
+	  else if (INTVAL (XEXP (op0, 1)) == 4)
+	    sv = 2;
+	  else
+	    gcc_unreachable ();
+
+	  fprintf (stream, "[%s + %s << %d]",
+			   reg_names[REGNO (op1)],
+			   reg_names[REGNO (XEXP (op0, 0))],
+			   sv);
+	}
+      else
+	{
+	  /* The control flow is not supposed to be here.  */
+	  debug_rtx (x);
+	  gcc_unreachable ();
+	}
+
+      break;
+
+    case POST_MODIFY:
+      /* (post_modify (regA) (plus (regA) (regB)))
+         (post_modify (regA) (plus (regA) (const_int)))
+         We would like to extract
+         regA and regB (or const_int) from plus rtx.  */
+      op0 = XEXP (XEXP (x, 1), 0);
+      op1 = XEXP (XEXP (x, 1), 1);
+
+      /* Checking op0, forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REG_P (op0)
+	  && REGNO (op0) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+      /* Checking op1, forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REG_P (op1)
+	  && REGNO (op1) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+
+      if (REG_P (op0) && REG_P (op1))
+	{
+	  /* [Ra], Rb */
+	  fprintf (stream, "[%s], %s",
+			   reg_names[REGNO (op0)], reg_names[REGNO (op1)]);
+	}
+      else if (REG_P (op0) && CONST_INT_P (op1))
+	{
+	  /* [Ra], imm */
+	  fprintf (stream, "[%s], %d",
+			   reg_names[REGNO (op0)], (int)INTVAL (op1));
+	}
+      else
+	{
+	  /* The control flow is not supposed to be here.  */
+	  debug_rtx (x);
+	  gcc_unreachable ();
+	}
+
+      break;
+
+    case POST_INC:
+    case POST_DEC:
+      op0 = XEXP (x, 0);
+
+      /* Checking op0, forbid using static chain register ($r16)
+         on reduced-set registers configuration.  */
+      if (TARGET_REDUCED_REGS
+	  && REG_P (op0)
+	  && REGNO (op0) == STATIC_CHAIN_REGNUM)
+	sorry ("a nested function is not supported for reduced registers");
+
+      if (REG_P (op0))
+	{
+	  /* "[Ra], 1/2/4" or "[Ra], -1/-2/-4"
+	     The 1/2/4 or -1/-2/-4 have been displayed in nds32.md.
+	     We only need to deal with register Ra.  */
+	  fprintf (stream, "[%s]", reg_names[REGNO (op0)]);
+	}
+      else
+	{
+	  /* The control flow is not supposed to be here.  */
+	  debug_rtx (x);
+	  gcc_unreachable ();
+	}
+
+      break;
+
+    default :
+      /* Generally, output_addr_const () is able to handle most cases.
+         We want to see what CODE could appear,
+         so we use gcc_unreachable() to stop it.  */
+      debug_rtx (x);
+      gcc_unreachable ();
+      break;
+    }
+}
+
+
+/* Defining target-specific uses of __attribute__.  */
+
+static bool
+nds32_option_pragma_parse (tree args ATTRIBUTE_UNUSED,
+			   tree pop_target ATTRIBUTE_UNUSED)
+{
+  /* Currently, we do not parse any target pragma ourselves,
+     so simply return false.  */
+  return false;
+}
+
+static void
+nds32_option_override (void)
+{
+  /* After all the command options have been parsed,
+     we shall deal with some flags for changing compiler settings.  */
+
+  /* At first, we check if we have to strictly
+     set some flags based on ISA family.  */
+  if (TARGET_ISA_V2)
+    {
+      /* Under V2 ISA, we need to strictly disable TARGET_V3PUSH.  */
+      target_flags &= ~MASK_V3PUSH;
+    }
+  if (TARGET_ISA_V3)
+    {
+      /* Under V3 ISA, currently nothing should be strictly set.  */
+    }
+  if (TARGET_ISA_V3M)
+    {
+      /* Under V3M ISA, we need to strictly enable TARGET_REDUCED_REGS.  */
+      target_flags |= MASK_REDUCED_REGS;
+      /* Under V3M ISA, we need to strictly disable TARGET_PERF_EXT.  */
+      target_flags &= ~MASK_PERF_EXT;
+    }
+
+  /* See if we are using reduced-set registers:
+       $r0~$r5, $r6~$r10, $r15, $r28, $r29, $r30, $r31
+     If so, we must forbid using $r11~$r14, $r16~$r27.  */
+  if (TARGET_REDUCED_REGS)
+    {
+      int r;
+
+      /* Prevent register allocator from
+         choosing it as doing register allocation.  */
+      for (r = 11; r <= 14; r++)
+	fixed_regs[r] = call_used_regs[r] = 1;
+      for (r = 16; r <= 27; r++)
+	fixed_regs[r] = call_used_regs[r] = 1;
+    }
+
+  /* See if user explicitly would like to use fp-as-gp optimization.
+     If so, we must prevent $fp from being allocated
+     during register allocation.  */
+  if (TARGET_FORCE_FP_AS_GP)
+    fixed_regs[FP_REGNUM] = call_used_regs[FP_REGNUM] = 1;
+
+  if (!TARGET_16_BIT)
+    {
+      /* Under no 16 bit ISA, we need to strictly disable TARGET_V3PUSH.  */
+      target_flags &= ~MASK_V3PUSH;
+    }
+
+  /* Currently, we don't support PIC code generation yet.  */
+  if (flag_pic)
+    sorry ("-fpic is not supported");
+}
+
+
+/* Miscellaneous Parameters.  */
+
+static void
+nds32_init_builtins (void)
+{
+  tree pointer_type_node  = build_pointer_type (integer_type_node);
+
+  tree void_ftype_void    = build_function_type (void_type_node,
+						 void_list_node);
+
+  tree void_ftype_pint    = build_function_type_list (void_type_node,
+						      pointer_type_node,
+						      NULL_TREE);
+
+  tree int_ftype_int      = build_function_type_list (integer_type_node,
+						      integer_type_node,
+						      NULL_TREE);
+
+  tree void_ftype_int_int = build_function_type_list (void_type_node,
+						      integer_type_node,
+						      integer_type_node,
+						      NULL_TREE);
+
+  /* Cache.  */
+  add_builtin_function ("__builtin_nds32_isync",  void_ftype_pint,
+			NDS32_BUILTIN_ISYNC,
+			BUILT_IN_MD, NULL, NULL_TREE);
+  add_builtin_function ("__builtin_nds32_isb",  void_ftype_void,
+			NDS32_BUILTIN_ISB,
+			BUILT_IN_MD, NULL, NULL_TREE);
+
+  /* Register Transfer.  */
+  add_builtin_function ("__builtin_nds32_mfsr",  int_ftype_int,
+			NDS32_BUILTIN_MFSR,
+			BUILT_IN_MD, NULL, NULL_TREE);
+  add_builtin_function ("__builtin_nds32_mfusr", int_ftype_int,
+			NDS32_BUILTIN_MFUSR,
+			BUILT_IN_MD, NULL, NULL_TREE);
+  add_builtin_function ("__builtin_nds32_mtsr",  void_ftype_int_int,
+			NDS32_BUILTIN_MTSR,
+			BUILT_IN_MD, NULL, NULL_TREE);
+  add_builtin_function ("__builtin_nds32_mtusr", void_ftype_int_int,
+			NDS32_BUILTIN_MTUSR,
+			BUILT_IN_MD, NULL, NULL_TREE);
+
+  /* Interrupt.  */
+  add_builtin_function ("__builtin_nds32_setgie_en",  void_ftype_void,
+			NDS32_BUILTIN_SETGIE_EN,
+			BUILT_IN_MD, NULL, NULL_TREE);
+  add_builtin_function ("__builtin_nds32_setgie_dis", void_ftype_void,
+			NDS32_BUILTIN_SETGIE_DIS,
+			BUILT_IN_MD, NULL, NULL_TREE);
+}
+
+static rtx
+nds32_expand_builtin (tree exp,
+		      rtx target,
+		      rtx subtarget ATTRIBUTE_UNUSED,
+		      enum machine_mode mode ATTRIBUTE_UNUSED,
+		      int ignore ATTRIBUTE_UNUSED)
+{
+  tree fndecl = TREE_OPERAND (CALL_EXPR_FN (exp), 0);
+
+  int fcode = DECL_FUNCTION_CODE (fndecl);
+
+  switch (fcode)
+    {
+    /* Cache.  */
+    case NDS32_BUILTIN_ISYNC:
+      return nds32_expand_builtin_null_ftype_reg
+	     (CODE_FOR_unspec_volatile_isync, exp, target);
+    case NDS32_BUILTIN_ISB:
+      /* Since there is no result or operand for the isb instruction,
+         we can simply emit this rtx.  */
+      emit_insn (gen_unspec_volatile_isb ());
+      return target;
+
+    /* Register Transfer.  */
+    case NDS32_BUILTIN_MFSR:
+      return nds32_expand_builtin_reg_ftype_imm
+	     (CODE_FOR_unspec_volatile_mfsr, exp, target);
+    case NDS32_BUILTIN_MFUSR:
+      return nds32_expand_builtin_reg_ftype_imm
+	     (CODE_FOR_unspec_volatile_mfusr, exp, target);
+    case NDS32_BUILTIN_MTSR:
+      return nds32_expand_builtin_null_ftype_reg_imm
+	     (CODE_FOR_unspec_volatile_mtsr, exp, target);
+    case NDS32_BUILTIN_MTUSR:
+      return nds32_expand_builtin_null_ftype_reg_imm
+	     (CODE_FOR_unspec_volatile_mtusr, exp, target);
+
+    /* Interrupt.  */
+    case NDS32_BUILTIN_SETGIE_EN:
+      /* Since the setgie.e instruction has no result and no operands,
+         we can simply emit this rtx.  */
+      emit_insn (gen_unspec_volatile_setgie_en ());
+      return target;
+    case NDS32_BUILTIN_SETGIE_DIS:
+      /* Since the setgie.d instruction has no result and no operands,
+         we can simply emit this rtx.  */
+      emit_insn (gen_unspec_volatile_setgie_dis ());
+      return target;
+
+    default:
+      gcc_unreachable ();
+    }
+
+  return NULL_RTX;
+}
+
+
+/* ------------------------------------------------------------------------ */
+
+/* PART 4: Implement extern function definitions,
+           the prototype is in nds32-protos.h.  */
+
+/* Defining Data Structures for Per-function Information.  */
+
+void
+nds32_init_expanders (void)
+{
+  /* Arrange to initialize and mark the machine per-function status.  */
+  init_machine_status = nds32_init_machine_status;
+}
+
+
+/* Register Usage.  */
+
+/* How Values Fit in Registers.  */
+
+int
+nds32_hard_regno_nregs (int regno ATTRIBUTE_UNUSED,
+			enum machine_mode mode)
+{
+  return ((GET_MODE_SIZE (mode) + UNITS_PER_WORD - 1) / UNITS_PER_WORD);
+}
+
+int
+nds32_hard_regno_mode_ok (int regno, enum machine_mode mode)
+{
+  /* Restrict double-word quantities to even register pairs.  */
+  if (HARD_REGNO_NREGS (regno, mode) == 1
+      || !((regno) & 1))
+    return 1;
+
+  return 0;
+}
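For reference, the arithmetic in the two hooks above is easy to check in isolation. Below is a minimal standalone sketch (not GCC code): `mode_size` stands in for `GET_MODE_SIZE (mode)`, and `UNITS_PER_WORD` is assumed to be 4 for this 32-bit target.

```c
/* Standalone sketch of the two hooks above; mode_size stands in for
   GET_MODE_SIZE (mode), UNITS_PER_WORD is assumed to be 4.  */
#define UNITS_PER_WORD 4

int
nregs (int mode_size)
{
  /* Round the mode size up to a whole number of registers.  */
  return (mode_size + UNITS_PER_WORD - 1) / UNITS_PER_WORD;
}

int
regno_mode_ok (int regno, int mode_size)
{
  /* Single-register values fit anywhere; multi-register values must
     start at an even register so they occupy an aligned pair.  */
  return nregs (mode_size) == 1 || (regno & 1) == 0;
}
```

So a 4-byte SImode value fits in any register, while an 8-byte DImode value is restricted to even/odd register pairs.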
+
+
+/* Register Classes.  */
+
+enum reg_class
+nds32_regno_reg_class (int regno)
+{
+  /* Refer to nds32.h for more register class details.  */
+
+  if (regno >= 0 && regno <= 7)
+    return LOW_REGS;
+  else if (regno >= 8 && regno <= 11)
+    return MIDDLE_REGS;
+  else if (regno >= 12 && regno <= 14)
+    return HIGH_REGS;
+  else if (regno == 15)
+    return R15_TA_REG;
+  else if (regno >= 16 && regno <= 19)
+    return MIDDLE_REGS;
+  else if (regno >= 20 && regno <= 31)
+    return HIGH_REGS;
+  else if (regno == 32 || regno == 33)
+    return FRAME_REGS;
+  else
+    return NO_REGS;
+}
+
+
+/* Stack Layout and Calling Conventions.  */
+
+/* Basic Stack Layout.  */
+
+rtx
+nds32_return_addr_rtx (int count,
+		       rtx frameaddr ATTRIBUTE_UNUSED)
+{
+  /* There is no way to determine the return address
+     if frameaddr is the frame that is 'count' steps
+     up from the current frame.  */
+  if (count != 0)
+    return NULL_RTX;
+
+  /* If count == 0, it means we are at the current frame;
+     the return address is $r30 ($lp).  */
+  return get_hard_reg_initial_val (Pmode, LP_REGNUM);
+}
+
+/* Eliminating Frame Pointer and Arg Pointer.  */
+
+HOST_WIDE_INT
+nds32_initial_elimination_offset (unsigned int from_reg, unsigned int to_reg)
+{
+  HOST_WIDE_INT offset;
+
+  /* Compute and setup stack frame size.
+     The result will be in cfun->machine.  */
+  nds32_compute_stack_frame ();
+
+  /* Remember to consider
+     cfun->machine->callee_saved_area_padding_bytes
+     when calculating offset.  */
+  if (from_reg == ARG_POINTER_REGNUM && to_reg == STACK_POINTER_REGNUM)
+    {
+      offset = (cfun->machine->fp_size
+	        + cfun->machine->gp_size
+		+ cfun->machine->lp_size
+		+ cfun->machine->callee_saved_regs_size
+		+ cfun->machine->callee_saved_area_padding_bytes
+		+ cfun->machine->local_size
+		+ cfun->machine->out_args_size);
+    }
+  else if (from_reg == ARG_POINTER_REGNUM
+	   && to_reg == HARD_FRAME_POINTER_REGNUM)
+    {
+      offset = 0;
+    }
+  else if (from_reg == FRAME_POINTER_REGNUM
+	   && to_reg == STACK_POINTER_REGNUM)
+    {
+      offset = (cfun->machine->local_size + cfun->machine->out_args_size);
+    }
+  else if (from_reg == FRAME_POINTER_REGNUM
+	   && to_reg == HARD_FRAME_POINTER_REGNUM)
+    {
+      offset = (-1) * (cfun->machine->fp_size
+		       + cfun->machine->gp_size
+		       + cfun->machine->lp_size
+		       + cfun->machine->callee_saved_regs_size
+		       + cfun->machine->callee_saved_area_padding_bytes);
+    }
+  else
+    {
+      gcc_unreachable ();
+    }
+
+  return offset;
+}
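All four elimination offsets above are derived from one frame layout. As a sanity check, here is the same arithmetic restated over a plain struct (field names are stand-ins for the cfun->machine fields; this is an illustration, not GCC code):

```c
/* Frame layout fields, mirroring cfun->machine (byte counts).  */
struct frame
{
  int fp, gp, lp, callee_saved, padding, local, out_args;
};

/* ARG_POINTER -> STACK_POINTER: the whole frame.  */
int
ap_to_sp (const struct frame *f)
{
  return f->fp + f->gp + f->lp + f->callee_saved + f->padding
	 + f->local + f->out_args;
}

/* FRAME_POINTER -> STACK_POINTER: locals and outgoing args only.  */
int
fp_to_sp (const struct frame *f)
{
  return f->local + f->out_args;
}

/* FRAME_POINTER -> HARD_FRAME_POINTER: the saved-register area,
   negated because the hard frame pointer sits above it.  */
int
fp_to_hfp (const struct frame *f)
{
  return -(f->fp + f->gp + f->lp + f->callee_saved + f->padding);
}
```

The three offsets are mutually consistent: for any frame, ap_to_sp equals fp_to_sp minus fp_to_hfp.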
+
+/* Passing Arguments in Registers.  */
+
+void
+nds32_init_cumulative_args (CUMULATIVE_ARGS *cum,
+			    tree fntype ATTRIBUTE_UNUSED,
+			    rtx libname ATTRIBUTE_UNUSED,
+			    tree fndecl ATTRIBUTE_UNUSED,
+			    int n_named_args ATTRIBUTE_UNUSED)
+{
+  /* Initial available registers
+     (in offset, corresponding to NDS32_GPR_ARG_FIRST_REGNUM)
+     for passing arguments.  */
+  cum->reg_offset = 0;
+}
+
+/* Function Entry and Exit.  */
+
+/* Function for normal multiple push prologue.  */
+void
+nds32_expand_prologue (void)
+{
+  int fp_adjust;
+  int sp_adjust;
+  int en4_const;
+
+  rtx Rb, Re;
+  rtx push_insn;
+  rtx fp_adjust_insn, sp_adjust_insn;
+
+  /* Compute and setup stack frame size.
+     The result will be in cfun->machine.  */
+  nds32_compute_stack_frame ();
+
+  /* If the function is 'naked',
+     we do not have to generate prologue code fragment.  */
+  if (cfun->machine->naked_p)
+    return;
+
+  /* Get callee_first_regno and callee_last_regno.  */
+  Rb = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_first_regno);
+  Re = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_last_regno);
+
+  /* push_insn = gen_stack_push_multiple(first_regno, last_regno),
+     the pattern 'stack_push_multiple' is implemented in nds32.md.
+     For En4 field, we have to calculate its constant value.
+     Refer to Andes ISA for more information.  */
+  en4_const = 0;
+  if (cfun->machine->fp_size)
+    en4_const += 8;
+  if (cfun->machine->gp_size)
+    en4_const += 4;
+  if (cfun->machine->lp_size)
+    en4_const += 2;
+
+  /* If $fp, $gp, $lp, and all callee-saved registers are NOT required
+     to be saved, we don't have to create a multiple push instruction.
+     Otherwise, a multiple push instruction is needed.  */
+  if (!(REGNO (Rb) == SP_REGNUM && REGNO (Re) == SP_REGNUM && en4_const == 0))
+    {
+      /* Create multiple push instruction rtx.  */
+      push_insn = nds32_gen_stack_push_multiple (Rb, Re, GEN_INT (en4_const));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      push_insn = emit_insn (push_insn);
+
+      /* The push_insn rtx form is special,
+         we need to create REG_NOTES for debug information manually.  */
+      add_reg_note (push_insn, REG_FRAME_RELATED_EXPR,
+		    nds32_construct_call_frame_information ());
+
+      /* The insn rtx 'push_insn' will change frame layout.
+         We need to use RTX_FRAME_RELATED_P so that GCC is able to
+         generate CFI (Call Frame Information) stuff.  */
+      RTX_FRAME_RELATED_P (push_insn) = 1;
+    }
+
+  /* Check frame_pointer_needed to see
+     if we shall emit fp adjustment instruction.  */
+  if (frame_pointer_needed)
+    {
+      /* adjust $fp = $sp + ($fp size) + ($gp size) + ($lp size)
+       *                  + (4 * callee-saved-registers)
+       *
+       * Note: No need to adjust
+       *       cfun->machine->callee_saved_area_padding_bytes,
+       *       because, at this point, stack pointer is just
+       *       at the position after push instruction.
+       */
+      fp_adjust = cfun->machine->fp_size
+		  + cfun->machine->gp_size
+		  + cfun->machine->lp_size
+		  + cfun->machine->callee_saved_regs_size;
+      fp_adjust_insn = gen_addsi3 (hard_frame_pointer_rtx,
+				   stack_pointer_rtx,
+				   GEN_INT (fp_adjust));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      fp_adjust_insn = emit_insn (fp_adjust_insn);
+    }
+
+  /* Adjust $sp = $sp - local_size - out_args_size
+                      - callee_saved_area_padding_bytes.  */
+  sp_adjust = cfun->machine->local_size
+	      + cfun->machine->out_args_size
+	      + cfun->machine->callee_saved_area_padding_bytes;
+  /* The sp_adjust value may be out of range of the addi instruction;
+     create alternative add behavior with TA_REGNUM if necessary,
+     using a NEGATIVE value to indicate that we are decreasing the address.  */
+  sp_adjust = nds32_force_addi_stack_int (-1 * sp_adjust);
+  if (sp_adjust)
+    {
+      /* Generate sp adjustment instruction if and only if sp_adjust != 0.  */
+      sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				   stack_pointer_rtx,
+				   GEN_INT (-1 * sp_adjust));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      sp_adjust_insn = emit_insn (sp_adjust_insn);
+
+      /* The insn rtx 'sp_adjust_insn' will change frame layout.
+         We need to use RTX_FRAME_RELATED_P so that GCC is able to
+         generate CFI (Call Frame Information) stuff.  */
+      RTX_FRAME_RELATED_P (sp_adjust_insn) = 1;
+    }
+
+  /* Prevent the instruction scheduler from
+     moving instructions across the boundary.  */
+  emit_insn (gen_blockage ());
+}
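The En4 field computed in the prologue (and again in the epilogue) is just a 3-bit selector. A minimal standalone sketch of the encoding, assuming bit 3 selects $fp, bit 2 selects $gp, and bit 1 selects $lp, as in the code above:

```c
/* En4 selector: a nonzero *_size means that register must be saved.  */
int
en4_field (int fp_size, int gp_size, int lp_size)
{
  int en4 = 0;
  if (fp_size)
    en4 += 8;	/* $fp */
  if (gp_size)
    en4 += 4;	/* $gp */
  if (lp_size)
    en4 += 2;	/* $lp */
  return en4;
}
```

This is also why the v3push/v3pop expanders pass (const_int 14): saving all of { $fp $gp $lp } gives 8 + 4 + 2 = 14.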
+
+/* Function for normal multiple pop epilogue.  */
+void
+nds32_expand_epilogue (void)
+{
+  int sp_adjust;
+  int en4_const;
+
+  rtx Rb, Re;
+  rtx pop_insn;
+  rtx sp_adjust_insn;
+
+  /* Compute and setup stack frame size.
+     The result will be in cfun->machine.  */
+  nds32_compute_stack_frame ();
+
+  /* Prevent the instruction scheduler from
+     moving instructions across the boundary.  */
+  emit_insn (gen_blockage ());
+
+  /* If the function is 'naked', we do not have to generate
+     epilogue code fragment BUT 'ret' instruction.  */
+  if (cfun->machine->naked_p)
+    {
+      /* Generate return instruction by using unspec_func_return pattern.
+         Make sure this instruction is after gen_blockage().
+         NOTE that $lp will become 'live'
+         after this instruction has been emitted.  */
+      emit_insn (gen_unspec_func_return ());
+      return;
+    }
+
+  if (frame_pointer_needed)
+    {
+      /* adjust $sp = $fp - ($fp size) - ($gp size) - ($lp size)
+       *                  - (4 * callee-saved-registers)
+       *
+       * Note: No need to adjust
+       *       cfun->machine->callee_saved_area_padding_bytes,
+       *       because we want to adjust stack pointer
+       *       to the position for pop instruction.
+       */
+      sp_adjust = cfun->machine->fp_size
+		  + cfun->machine->gp_size
+		  + cfun->machine->lp_size
+		  + cfun->machine->callee_saved_regs_size;
+      sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				   hard_frame_pointer_rtx,
+				   GEN_INT (-1 * sp_adjust));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      sp_adjust_insn = emit_insn (sp_adjust_insn);
+    }
+  else
+    {
+      /* If frame pointer is NOT needed,
+       * we cannot calculate the sp adjustment from frame pointer.
+       * Instead, we calculate the adjustment by local_size,
+       * out_args_size, and callee_saved_area_padding_bytes.
+       * Notice that such sp adjustment value may be out of range,
+       * so we have to deal with it as well.
+       */
+
+      /* Adjust $sp = $sp + local_size + out_args_size
+                          + callee_saved_area_padding_bytes.  */
+      sp_adjust = cfun->machine->local_size
+		  + cfun->machine->out_args_size
+		  + cfun->machine->callee_saved_area_padding_bytes;
+      /* The sp_adjust value may be out of range of the addi instruction;
+         create alternative add behavior with TA_REGNUM if necessary,
+         using a POSITIVE value to indicate that we are increasing
+         the address.  */
+      sp_adjust = nds32_force_addi_stack_int (sp_adjust);
+      if (sp_adjust)
+	{
+	  /* Generate sp adjustment instruction
+	     if and only if sp_adjust != 0.  */
+	  sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				       stack_pointer_rtx,
+				       GEN_INT (sp_adjust));
+	  /* Emit rtx into instructions list and receive INSN rtx form.  */
+	  sp_adjust_insn = emit_insn (sp_adjust_insn);
+	}
+    }
+
+  /* Get callee_first_regno and callee_last_regno.  */
+  Rb = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_first_regno);
+  Re = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_last_regno);
+
+  /* pop_insn = gen_stack_pop_multiple(first_regno, last_regno),
+     the pattern 'stack_pop_multiple' is implemented in nds32.md.
+     For En4 field, we have to calculate its constant value.
+     Refer to Andes ISA for more information.  */
+  en4_const = 0;
+  if (cfun->machine->fp_size)
+    en4_const += 8;
+  if (cfun->machine->gp_size)
+    en4_const += 4;
+  if (cfun->machine->lp_size)
+    en4_const += 2;
+
+  /* If $fp, $gp, $lp, and all callee-saved registers are NOT required
+     to be saved, we don't have to create a multiple pop instruction.
+     Otherwise, a multiple pop instruction is needed.  */
+  if (!(REGNO (Rb) == SP_REGNUM && REGNO (Re) == SP_REGNUM && en4_const == 0))
+    {
+      /* Create multiple pop instruction rtx.  */
+      pop_insn = nds32_gen_stack_pop_multiple (Rb, Re, GEN_INT (en4_const));
+      /* Emit pop instruction.  */
+      emit_insn (pop_insn);
+    }
+
+  /* Generate return instruction by using unspec_func_return pattern.  */
+  emit_insn (gen_unspec_func_return ());
+}
+
+/* Function for v3push prologue.  */
+void
+nds32_expand_prologue_v3push (void)
+{
+  int fp_adjust;
+  int sp_adjust;
+
+  rtx Rb, Re;
+  rtx push_insn;
+  rtx fp_adjust_insn, sp_adjust_insn;
+
+  /* Compute and setup stack frame size.
+     The result will be in cfun->machine.  */
+  nds32_compute_stack_frame ();
+
+  /* If the function is 'naked',
+     we do not have to generate prologue code fragment.  */
+  if (cfun->machine->naked_p)
+    return;
+
+  /* Get callee_first_regno and callee_last_regno.  */
+  Rb = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_first_regno);
+  Re = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_last_regno);
+
+  /* Calculate sp_adjust first to test if 'v3push Re,imm8u' is available,
+     where imm8u has to be 8-byte aligned.  */
+  sp_adjust = cfun->machine->local_size
+	      + cfun->machine->out_args_size
+	      + cfun->machine->callee_saved_area_padding_bytes;
+
+  if (satisfies_constraint_Iu08 (GEN_INT (sp_adjust))
+      && NDS32_DOUBLE_WORD_ALIGN_P (sp_adjust))
+    {
+      /* We can use 'v3push Re,imm8u'.  */
+
+      /* push_insn = gen_stack_v3push(last_regno, sp_adjust),
+         the pattern 'stack_v3push' is implemented in nds32.md.
+         The (const_int 14) means v3push always pushes { $fp $gp $lp }.  */
+      push_insn = nds32_gen_stack_v3push (Rb, Re,
+					  GEN_INT (14), GEN_INT (sp_adjust));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      push_insn = emit_insn (push_insn);
+
+      /* The push_insn rtx form is special,
+         we need to create REG_NOTES for debug information manually.  */
+      add_reg_note (push_insn, REG_FRAME_RELATED_EXPR,
+		    nds32_construct_call_frame_information ());
+
+      /* The insn rtx 'push_insn' will change frame layout.
+         We need to use RTX_FRAME_RELATED_P so that GCC is able to
+         generate CFI (Call Frame Information) stuff.  */
+      RTX_FRAME_RELATED_P (push_insn) = 1;
+
+      /* Check frame_pointer_needed to see
+         if we shall emit fp adjustment instruction.  */
+      if (frame_pointer_needed)
+	{
+	  /* adjust $fp = $sp   + 4         ($fp size)
+	   *                    + 4         ($gp size)
+	   *                    + 4         ($lp size)
+	   *                    + (4 * n)   (callee-saved registers)
+	   *                    + sp_adjust ('v3push Re,imm8u')
+	   * Note: Since we use 'v3push Re,imm8u',
+	   *       the position of stack pointer is further
+	   *       changed after push instruction.
+	   *       Hence, we need to take sp_adjust value into consideration.
+	   */
+	  fp_adjust = cfun->machine->fp_size
+		      + cfun->machine->gp_size
+		      + cfun->machine->lp_size
+		      + cfun->machine->callee_saved_regs_size
+		      + sp_adjust;
+	  fp_adjust_insn = gen_addsi3 (hard_frame_pointer_rtx,
+				       stack_pointer_rtx,
+				       GEN_INT (fp_adjust));
+	  /* Emit rtx into instructions list and receive INSN rtx form.  */
+	  fp_adjust_insn = emit_insn (fp_adjust_insn);
+	}
+    }
+  else
+    {
+      /* We have to use 'v3push Re,0' and
+         expand one more instruction to adjust $sp later.  */
+
+      /* push_insn = gen_stack_v3push(last_regno, sp_adjust),
+         the pattern 'stack_v3push' is implemented in nds32.md.
+         The (const_int 14) means v3push always pushes { $fp $gp $lp }.  */
+      push_insn = nds32_gen_stack_v3push (Rb, Re,
+					  GEN_INT (14), GEN_INT (0));
+      /* Emit rtx into instructions list and receive INSN rtx form.  */
+      push_insn = emit_insn (push_insn);
+
+      /* The push_insn rtx form is special,
+         we need to create REG_NOTES for debug information manually.  */
+      add_reg_note (push_insn, REG_FRAME_RELATED_EXPR,
+		    nds32_construct_call_frame_information ());
+
+      /* The insn rtx 'push_insn' will change frame layout.
+         We need to use RTX_FRAME_RELATED_P so that GCC is able to
+         generate CFI (Call Frame Information) stuff.  */
+      RTX_FRAME_RELATED_P (push_insn) = 1;
+
+      /* Check frame_pointer_needed to see
+         if we shall emit fp adjustment instruction.  */
+      if (frame_pointer_needed)
+	{
+	  /* adjust $fp = $sp + 4        ($fp size)
+	   *                  + 4        ($gp size)
+	   *                  + 4        ($lp size)
+	   *                  + (4 * n)  (callee-saved registers)
+	   *
+	   * Note: Since we use 'v3push Re,0',
+	   *       the stack pointer is just at the position
+	   *       after push instruction.
+	   *       No need to take sp_adjust into consideration.
+	   */
+	  fp_adjust = cfun->machine->fp_size
+		      + cfun->machine->gp_size
+		      + cfun->machine->lp_size
+		      + cfun->machine->callee_saved_regs_size;
+	  fp_adjust_insn = gen_addsi3 (hard_frame_pointer_rtx,
+				       stack_pointer_rtx,
+				       GEN_INT (fp_adjust));
+	  /* Emit rtx into instructions list and receive INSN rtx form.  */
+	  fp_adjust_insn = emit_insn (fp_adjust_insn);
+	}
+
+      /* Because we use 'v3push Re,0',
+         we need to expand one more instruction to adjust $sp.
+         However, the sp_adjust value may be out of range of the addi
+         instruction; create alternative add behavior with TA_REGNUM
+         if necessary, using a NEGATIVE value to indicate that we are
+         decreasing the address.  */
+      sp_adjust = nds32_force_addi_stack_int (-1 * sp_adjust);
+      if (sp_adjust)
+	{
+	  /* Generate sp adjustment instruction
+	     if and only if sp_adjust != 0.  */
+	  sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				       stack_pointer_rtx,
+				       GEN_INT (-1 * sp_adjust));
+	  /* Emit rtx into instructions list and receive INSN rtx form.  */
+	  sp_adjust_insn = emit_insn (sp_adjust_insn);
+
+	  /* The insn rtx 'sp_adjust_insn' will change frame layout.
+	     We need to use RTX_FRAME_RELATED_P so that GCC is able to
+	     generate CFI (Call Frame Information) stuff.  */
+	  RTX_FRAME_RELATED_P (sp_adjust_insn) = 1;
+	}
+    }
+
+  /* Prevent the instruction scheduler from
+     moving instructions across the boundary.  */
+  emit_insn (gen_blockage ());
+}
+
+/* Function for v3pop epilogue.  */
+void
+nds32_expand_epilogue_v3pop (void)
+{
+  int sp_adjust;
+
+  rtx Rb, Re;
+  rtx pop_insn;
+  rtx sp_adjust_insn;
+
+  /* Compute and setup stack frame size.
+     The result will be in cfun->machine.  */
+  nds32_compute_stack_frame ();
+
+  /* Prevent the instruction scheduler from
+     moving instructions across the boundary.  */
+  emit_insn (gen_blockage ());
+
+  /* If the function is 'naked', we do not have to generate
+     epilogue code fragment BUT 'ret' instruction.  */
+  if (cfun->machine->naked_p)
+    {
+      /* Generate return instruction by using unspec_func_return pattern.
+         Make sure this instruction is after gen_blockage().
+         NOTE that $lp will become 'live'
+         after this instruction has been emitted.  */
+      emit_insn (gen_unspec_func_return ());
+      return;
+    }
+
+  /* Get callee_first_regno and callee_last_regno.  */
+  Rb = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_first_regno);
+  Re = gen_rtx_REG (SImode, cfun->machine->callee_saved_regs_last_regno);
+
+  /* Calculate sp_adjust first to test if 'v3pop Re,imm8u' is available,
+     where imm8u has to be 8-byte aligned.  */
+  sp_adjust = cfun->machine->local_size
+	      + cfun->machine->out_args_size
+	      + cfun->machine->callee_saved_area_padding_bytes;
+
+  /* We have to consider alloca issue as well.
+     If the function does call alloca(), the stack pointer is not fixed.
+     In that case, we cannot use 'v3pop Re,imm8u' directly.
+     We have to calculate the stack pointer from the frame pointer
+     and then use 'v3pop Re,0'.
+     Of course, the frame_pointer_needed should be nonzero
+     if the function calls alloca().  */
+  if (satisfies_constraint_Iu08 (GEN_INT (sp_adjust))
+      && NDS32_DOUBLE_WORD_ALIGN_P (sp_adjust)
+      && !cfun->calls_alloca)
+    {
+      /* We can use 'v3pop Re,imm8u'.  */
+
+      /* pop_insn = gen_stack_v3pop(last_regno, sp_adjust),
+         the pattern 'stack_v3pop' is implemented in nds32.md.
+         The (const_int 14) means v3pop always pops { $fp $gp $lp }.  */
+      pop_insn = nds32_gen_stack_v3pop (Rb, Re,
+					GEN_INT (14), GEN_INT (sp_adjust));
+
+      /* Emit pop instruction.  */
+      emit_insn (pop_insn);
+    }
+  else
+    {
+      /* We have to use 'v3pop Re,0', and prior to it,
+         we must expand one more instruction to adjust $sp.  */
+
+      if (frame_pointer_needed)
+	{
+	  /* adjust $sp = $fp - 4        ($fp size)
+	   *                  - 4        ($gp size)
+	   *                  - 4        ($lp size)
+	   *                  - (4 * n)  (callee-saved registers)
+	   *
+	   * Note: No need to adjust
+	   *       cfun->machine->callee_saved_area_padding_bytes,
+	   *       because we want to adjust stack pointer
+	   *       to the position for pop instruction.
+	   */
+	  sp_adjust = cfun->machine->fp_size
+		      + cfun->machine->gp_size
+		      + cfun->machine->lp_size
+		      + cfun->machine->callee_saved_regs_size;
+	  sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+				       hard_frame_pointer_rtx,
+				       GEN_INT (-1 * sp_adjust));
+	  /* Emit rtx into instructions list and receive INSN rtx form.  */
+	  sp_adjust_insn = emit_insn (sp_adjust_insn);
+	}
+      else
+	{
+	  /* If frame pointer is NOT needed,
+	   * we cannot calculate the sp adjustment from frame pointer.
+	   * Instead, we calculate the adjustment by local_size,
+	   * out_args_size, and callee_saved_area_padding_bytes.
+	   * Notice that such sp adjustment value may be out of range,
+	   * so we have to deal with it as well.
+	   */
+
+	  /* Adjust $sp = $sp + local_size + out_args_size
+			      + callee_saved_area_padding_bytes.  */
+	  sp_adjust = cfun->machine->local_size
+		      + cfun->machine->out_args_size
+		      + cfun->machine->callee_saved_area_padding_bytes;
+	  /* The sp_adjust value may be out of range of the addi instruction;
+	     create alternative add behavior with TA_REGNUM if necessary,
+	     using a POSITIVE value to indicate that we are increasing
+	     the address.  */
+	  sp_adjust = nds32_force_addi_stack_int (sp_adjust);
+	  if (sp_adjust)
+	    {
+	      /* Generate sp adjustment instruction
+	         if and only if sp_adjust != 0.  */
+	      sp_adjust_insn = gen_addsi3 (stack_pointer_rtx,
+					   stack_pointer_rtx,
+					   GEN_INT (sp_adjust));
+	      /* Emit rtx into instructions list and receive INSN rtx form.  */
+	      sp_adjust_insn = emit_insn (sp_adjust_insn);
+	    }
+	}
+
+      /* pop_insn = gen_stack_v3pop(last_regno, sp_adjust),
+         the pattern 'stack_v3pop' is implemented in nds32.md.  */
+      /* The (const_int 14) means v3pop always pops { $fp $gp $lp }.  */
+      pop_insn = nds32_gen_stack_v3pop (Rb, Re,
+					GEN_INT (14), GEN_INT (0));
+
+      /* Emit pop instruction.  */
+      emit_insn (pop_insn);
+    }
+}
+
+/* ------------------------------------------------------------------------ */
+
+/* Function to test 333-form for load/store instructions.
+   This is an auxiliary extern function for an auxiliary macro in nds32.h.
+   Because it is a little complicated, we use a function instead of a macro.  */
+bool
+nds32_ls_333_p (rtx rt, rtx ra, rtx imm, enum machine_mode mode)
+{
+  if (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS
+      && REGNO_REG_CLASS (REGNO (ra)) == LOW_REGS)
+    {
+      if (GET_MODE_SIZE (mode) == 4)
+	return satisfies_constraint_Iu05 (imm);
+
+      if (GET_MODE_SIZE (mode) == 2)
+	return satisfies_constraint_Iu04 (imm);
+
+      if (GET_MODE_SIZE (mode) == 1)
+	return satisfies_constraint_Iu03 (imm);
+    }
+
+  return false;
+}
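The constraint checks in nds32_ls_333_p reduce to a width test on the immediate. Here is a hedged standalone sketch covering only the immediate part, assuming the IuNN constraints denote unsigned NN-bit immediates (the real function additionally requires both registers to be in LOW_REGS):

```c
/* Immediate width allowed by the 333 form, by access size in bytes.
   Assumed: Iu05/Iu04/Iu03 accept unsigned 5/4/3-bit immediates.  */
int
ls_333_imm_ok (int mode_size, unsigned int imm)
{
  switch (mode_size)
    {
    case 4:
      return imm < (1u << 5);	/* Iu05: word access.  */
    case 2:
      return imm < (1u << 4);	/* Iu04: halfword access.  */
    case 1:
      return imm < (1u << 3);	/* Iu03: byte access.  */
    default:
      return 0;
    }
}
```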
+
+
+/* Functions to expand load_multiple and store_multiple.
+   They are auxiliary extern functions to help create rtx templates.
+   Check nds32.multiple.md file for the patterns.  */
+rtx
+nds32_expand_load_multiple (int base_regno, int count,
+			    rtx base_addr, rtx basemem)
+{
+  int par_index;
+  int offset;
+  rtx result;
+  rtx new_addr, mem, reg;
+
+  /* Create the pattern that is presented in nds32.multiple.md.  */
+
+  result = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (count));
+
+  for (par_index = 0; par_index < count; par_index++)
+    {
+      offset   = par_index * 4;
+      /* 4-byte for loading data to each register.  */
+      new_addr = plus_constant (Pmode, base_addr, offset);
+      mem      = adjust_automodify_address_nv (basemem, SImode,
+					       new_addr, offset);
+      reg      = gen_rtx_REG (SImode, base_regno + par_index);
+
+      XVECEXP (result, 0, par_index) = gen_rtx_SET (VOIDmode, reg, mem);
+    }
+
+  return result;
+}
+
+rtx
+nds32_expand_store_multiple (int base_regno, int count,
+			     rtx base_addr, rtx basemem)
+{
+  int par_index;
+  int offset;
+  rtx result;
+  rtx new_addr, mem, reg;
+
+  /* Create the pattern that is presented in nds32.multiple.md.  */
+
+  result = gen_rtx_PARALLEL (VOIDmode, rtvec_alloc (count));
+
+  for (par_index = 0; par_index < count; par_index++)
+    {
+      offset   = par_index * 4;
+      /* 4-byte for storing data to memory.  */
+      new_addr = plus_constant (Pmode, base_addr, offset);
+      mem      = adjust_automodify_address_nv (basemem, SImode,
+					       new_addr, offset);
+      reg      = gen_rtx_REG (SImode, base_regno + par_index);
+
+      XVECEXP (result, 0, par_index) = gen_rtx_SET (VOIDmode, mem, reg);
+    }
+
+  return result;
+}
+
+/* Function to move block memory content by
+   using load_multiple and store_multiple.
+   This is an auxiliary extern function to help create rtx templates.
+   Check nds32.multiple.md file for the patterns.  */
+int
+nds32_expand_movmemqi (rtx dstmem, rtx srcmem, rtx total_bytes, rtx alignment)
+{
+  HOST_WIDE_INT in_words, out_words;
+  rtx dst_base_reg, src_base_reg;
+  int maximum_bytes;
+
+  /* Because the reduced register set has only a few registers
+     (r0~r5, r6~r10, r15, r28~r31, where 'r15' and 'r28~r31'
+      cannot be used for register allocation),
+     using 8 registers (32 bytes) for moving a memory block
+     may easily consume all of them.
+     That makes register allocation/spilling hard to work.
+     So we only allow a maximum of 4 registers (16 bytes) for
+     moving a memory block under reduced-set registers.  */
+  if (TARGET_REDUCED_REGS)
+    maximum_bytes = 16;
+  else
+    maximum_bytes = 32;
+
+  /* 1. Total_bytes must be a constant integer.
+     2. Alignment must be a constant integer.
+     3. Maximum 4 or 8 registers: 4 * 4 = 16 bytes, 8 * 4 = 32 bytes.
+     4. Requires an (n * 4) block size.
+     5. Requires 4-byte alignment.  */
+  if (GET_CODE (total_bytes) != CONST_INT
+      || GET_CODE (alignment) != CONST_INT
+      || INTVAL (total_bytes) > maximum_bytes
+      || INTVAL (total_bytes) & 3
+      || INTVAL (alignment) & 3)
+    return 0;
+
+  dst_base_reg = copy_to_mode_reg (SImode, XEXP (dstmem, 0));
+  src_base_reg = copy_to_mode_reg (SImode, XEXP (srcmem, 0));
+
+  out_words = in_words = INTVAL (total_bytes) / UNITS_PER_WORD;
+
+  emit_insn (nds32_expand_load_multiple (0, in_words, src_base_reg, srcmem));
+  emit_insn (nds32_expand_store_multiple (0, out_words, dst_base_reg, dstmem));
+
+  /* Successfully create patterns, return 1.  */
+  return 1;
+}
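The early-exit conditions in nds32_expand_movmemqi reduce to a small predicate. A standalone sketch (reduced_regs mirrors TARGET_REDUCED_REGS; illustration only):

```c
/* Can the block move be done with load/store-multiple?  */
int
movmem_ok (int total_bytes, int alignment, int reduced_regs)
{
  int maximum_bytes = reduced_regs ? 16 : 32;	/* 4 or 8 registers.  */
  return total_bytes <= maximum_bytes
	 && (total_bytes & 3) == 0	/* Whole words only.  */
	 && (alignment & 3) == 0;	/* 4-byte alignment.  */
}
```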
+
+
+/* Computing the Length of an Insn.
+   Modifies the length assigned to instruction INSN.
+   LENGTH is the initially computed length of the insn.  */
+int
+nds32_adjust_insn_length (rtx insn, int length)
+{
+  rtx src, dst;
+
+  switch (recog_memoized (insn))
+    {
+    case CODE_FOR_move_df:
+    case CODE_FOR_move_di:
+      /* Adjust length of movd44 to 2.  */
+      src = XEXP (PATTERN (insn), 1);
+      dst = XEXP (PATTERN (insn), 0);
+
+      if (REG_P (src)
+	  && REG_P (dst)
+	  && (REGNO (src) % 2) == 0
+	  && (REGNO (dst) % 2) == 0)
+	length = 2;
+      break;
+
+    default:
+      break;
+    }
+
+  return length;
+}
+
+
+/* Function to check if 'bclr' instruction can be used with IVAL.  */
+int
+nds32_can_use_bclr_p (int ival)
+{
+  int one_bit_count;
+
+  /* Calculate the number of 1-bits in (~ival).  If there is exactly
+     one 1-bit, the original ival has only one 0-bit,
+     so it is ok to perform the 'bclr' operation.  */
+
+  one_bit_count = __builtin_popcount ((unsigned int) (~ival));
+
+  /* 'bclr' is a performance extension instruction.  */
+  return (TARGET_PERF_EXT && (one_bit_count == 1));
+}
+
+/* Function to check if 'bset' instruction can be used with IVAL.  */
+int
+nds32_can_use_bset_p (int ival)
+{
+  int one_bit_count;
+
+  /* Calculate the number of 1-bits in ival.  If there is exactly
+     one 1-bit, it is ok to perform the 'bset' operation.  */
+
+  one_bit_count = __builtin_popcount ((unsigned int) (ival));
+
+  /* 'bset' is a performance extension instruction.  */
+  return (TARGET_PERF_EXT && (one_bit_count == 1));
+}
+
+/* Function to check if 'btgl' instruction can be used with IVAL.  */
+int
+nds32_can_use_btgl_p (int ival)
+{
+  int one_bit_count;
+
+  /* Calculate the number of 1-bits in ival.  If there is exactly
+     one 1-bit, it is ok to perform the 'btgl' operation.  */
+
+  one_bit_count = __builtin_popcount ((unsigned int) (ival));
+
+  /* 'btgl' is a performance extension instruction.  */
+  return (TARGET_PERF_EXT && (one_bit_count == 1));
+}
+
+/* Function to check if 'bitci' instruction can be used with IVAL.  */
+int
+nds32_can_use_bitci_p (int ival)
+{
+  /* If we are using V3 ISA, we have 'bitci' instruction.
+     Try to see if we can present 'andi' semantic with
+     such 'bit-clear-immediate' operation.
+     For example, 'andi $r0,$r0,0xfffffffc' can be
+     presented with 'bitci $r0,$r0,3'.  */
+  return (TARGET_ISA_V3
+	  && (ival < 0)
+	  && satisfies_constraint_Iu15 (gen_int_mode (~ival, SImode)));
+}
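The four predicates above all reduce to popcount or range tests. Here they are restated standalone, assuming GCC's __builtin_popcount and that the Iu15 constraint means an unsigned 15-bit immediate (at most 0x7fff); the real predicates additionally gate on TARGET_PERF_EXT and TARGET_ISA_V3:

```c
/* bclr: ival must have exactly one 0-bit.  */
int
bclr_candidate_p (int ival)
{
  return __builtin_popcount ((unsigned int) ~ival) == 1;
}

/* bset/btgl: ival must have exactly one 1-bit.  */
int
bset_btgl_candidate_p (int ival)
{
  return __builtin_popcount ((unsigned int) ival) == 1;
}

/* bitci: present 'andi' with a negative mask as bit-clear-immediate,
   e.g. andi $r0,$r0,0xfffffffc  ==  bitci $r0,$r0,3.  */
int
bitci_candidate_p (int ival)
{
  return ival < 0 && (unsigned int) ~ival <= 0x7fff;
}
```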
+
+
+/* Return true if INSN is a load/store with SYMBOL_REF addressing mode
+   and the memory mode is SImode.  */
+bool
+nds32_symbol_load_store_p (rtx insn)
+{
+  rtx mem_src = NULL_RTX;
+
+  switch (get_attr_type (insn))
+    {
+    case TYPE_LOAD:
+      mem_src = SET_SRC (PATTERN (insn));
+      break;
+    case TYPE_STORE:
+      mem_src = SET_DEST (PATTERN (insn));
+      break;
+    default:
+      break;
+    }
+
+  /* Find a load/store insn whose addressing mode is SYMBOL_REF.  */
+  if (mem_src != NULL_RTX)
+    {
+      if ((GET_CODE (mem_src) == ZERO_EXTEND)
+	  || (GET_CODE (mem_src) == SIGN_EXTEND))
+	mem_src = XEXP (mem_src, 0);
+
+      if ((GET_CODE (XEXP (mem_src, 0)) == SYMBOL_REF)
+	   || (GET_CODE (XEXP (mem_src, 0)) == LO_SUM))
+	return true;
+    }
+
+  return false;
+}
+
+/* Function to determine whether it is worthwhile to do the
+   fp_as_gp optimization.
+   Return 0: It is NOT worthwhile to do fp_as_gp optimization.
+   Return 1: It is APPROXIMATELY worthwhile to do fp_as_gp optimization.
+   Note that if it is worthwhile to do fp_as_gp optimization,
+   we MUST mark FP_REGNUM as ever-live in this function.  */
+int
+nds32_fp_as_gp_check_available (void)
+{
+  /* If the register allocation and reload phase
+     have not completed yet, return 0 immediately.
+     We must check this at beginning, otherwise
+     the subsequent checking may cause segmentation fault.  */
+  if (!reload_completed)
+    return 0;
+
+  /* If cfun->machine->fp_as_gp_p is 1,
+     it means we have evaluated it previously
+     and decided to use fp_as_gp optimization.
+     So there is no need to evaluate it again.
+     Return 1 immediately.  */
+  if (cfun->machine->fp_as_gp_p)
+    return 1;
+
+  /* If ANY of the following conditions holds,
+     we DO NOT perform fp_as_gp optimization:
+       1. TARGET_FORBID_FP_AS_GP is set
+          regardless of the TARGET_FORCE_FP_AS_GP.
+       2. User explicitly uses 'naked' attribute.
+       3. Not optimizing for size.
+       4. Need frame pointer.
+       5. If $fp is already required to be saved,
+          it means $fp is already chosen by the register allocator.
+          Thus we had better not use it for fp_as_gp optimization.
+       6. This function is a vararg function.
+          DO NOT apply fp_as_gp optimization on this function
+          because it may change and break the stack frame.
+       7. The epilogue is empty.
+          This happens when the function uses exit()
+          or its attribute is no_return.
+          In that case, compiler will not expand epilogue
+          so that we have no chance to output .omit_fp_end directive.  */
+  if (TARGET_FORBID_FP_AS_GP
+      || lookup_attribute ("naked", DECL_ATTRIBUTES (current_function_decl))
+      || !optimize_size
+      || frame_pointer_needed
+      || NDS32_REQUIRED_CALLEE_SAVED_P (FP_REGNUM)
+      || (cfun->stdarg == 1)
+      || (find_fallthru_edge (EXIT_BLOCK_PTR->preds) == NULL))
+    return 0;
+
+  /* Now we can check the possibility of using fp_as_gp optimization.  */
+  if (TARGET_FORCE_FP_AS_GP)
+    {
+      /* User explicitly issues -mforce-fp-as-gp option.  */
+      df_set_regs_ever_live (FP_REGNUM, 1);
+      return 1;
+    }
+  else
+    {
+      /* In the following we are going to evaluate whether
+         it is worth doing fp_as_gp optimization.  */
+      int good_gain     = 0;
+      int symbol_count  = 0;
+
+      int threshold;
+      rtx insn;
+
+      /* We check whether a prologue is already required.
+         Note that $gp will be saved in the prologue for PIC code generation.
+         After that, we can set the threshold by the existence of the prologue.
+         Each fp-implied instruction saves 2 bytes of code size
+         over its gp-aware counterpart, so we have the following heuristics.  */
+      if (flag_pic
+	  || nds32_have_prologue_p ())
+	{
+	  /* Have-prologue:
+	       Compiler already intends to generate prologue content,
+	       so the fp_as_gp optimization will only insert
+	       'la $fp,_FP_BASE_' instruction, which will be
+	       converted into 4-byte instruction at link time.
+	       The threshold is "3" symbol accesses, 2 + 2 + 2 > 4.  */
+	  threshold = 3;
+	}
+      else
+	{
+	  /* No-prologue:
+	       Compiler originally does not generate prologue content,
+	       so the fp_as_gp optimization will NOT ONLY insert
+	       the 'la $fp,_FP_BASE_' instruction, but ALSO cause
+	       push/pop instructions.
+	       If we are using v3push (push25/pop25),
+	       the threshold is "5" symbol accesses, 5*2 > 4 + 2 + 2;
+	       if we are using normal push (smw/lmw),
+	       the threshold is "5+2" symbol accesses, 7*2 > 4 + 4 + 4.  */
+	  threshold = 5 + (TARGET_V3PUSH ? 0 : 2);
+	}
+
+      /* We would like to traverse every instruction in this function.
+         So we need to have push_topmost_sequence()/pop_topmost_sequence()
+         surrounding our for-loop evaluation.  */
+      push_topmost_sequence ();
+      /* Count the insns whose addressing mode is a symbol.  */
+      for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
+	{
+	  if (single_set (insn) && nds32_symbol_load_store_p (insn))
+	    symbol_count++;
+
+	  if (symbol_count == threshold)
+	    {
+	      good_gain = 1;
+	      break;
+	    }
+	}
+      pop_topmost_sequence ();
+
+      /* Enable fp_as_gp optimization when potential gain is good enough.  */
+      if (good_gain)
+	{
+	  df_set_regs_ever_live (FP_REGNUM, 1);
+	  return 1;
+	}
+    }
+
+  /* By default we return 0.  */
+  return 0;
+}
+
+
+/* Function to generate PC relative jump table.
+   Refer to nds32.md for more details.
+
+   The following is a sample for the case where the diff value
+   can be presented in '.short' size.
+
+     addi    $r1, $r1, -(case_lower_bound)
+     slti    $ta, $r1, (case_number)
+     beqz    $ta, .L_skip_label
+
+     la      $ta, .L35             ! get jump table address
+     lh      $r1, [$ta + $r1 << 1] ! load symbol diff from jump table entry
+     addi    $ta, $r1, $ta
+     jr5     $ta
+
+     ! jump table entry
+   L35:
+     .short  .L25-.L35
+     .short  .L26-.L35
+     .short  .L27-.L35
+     .short  .L28-.L35
+     .short  .L29-.L35
+     .short  .L30-.L35
+     .short  .L31-.L35
+     .short  .L32-.L35
+     .short  .L33-.L35
+     .short  .L34-.L35 */
+const char *
+nds32_output_casesi_pc_relative (rtx *operands)
+{
+  enum machine_mode mode;
+  rtx diff_vec;
+
+  diff_vec = PATTERN (next_active_insn (operands[1]));
+
+  gcc_assert (GET_CODE (diff_vec) == ADDR_DIFF_VEC);
+
+  /* Step C: "t <-- operands[1]".  */
+  output_asm_insn ("la\t$ta, %l1", operands);
+
+  /* Get the mode of each element in the difference vector.  */
+  mode = GET_MODE (diff_vec);
+
+  /* Step D: "z <-- (mem (plus (operands[0] << m) t))",
+     where m is 0, 1, or 2 to load address-diff value from table.  */
+  switch (mode)
+    {
+    case QImode:
+      output_asm_insn ("lb\t%2, [$ta + %0 << 0]", operands);
+      break;
+    case HImode:
+      output_asm_insn ("lh\t%2, [$ta + %0 << 1]", operands);
+      break;
+    case SImode:
+      output_asm_insn ("lw\t%2, [$ta + %0 << 2]", operands);
+      break;
+    default:
+      gcc_unreachable ();
+    }
+
+  /* Step E: "t <-- z + t".
+     Add table label_ref with address-diff value to
+     obtain target case address.  */
+  output_asm_insn ("add\t$ta, %2, $ta", operands);
+
+  /* Step F: jump to target with register t.  */
+  if (TARGET_16_BIT)
+    return "jr5\t$ta";
+  else
+    return "jr\t$ta";
+}
+
+/* Function to generate normal jump table.  */
+const char *
+nds32_output_casesi (rtx *operands)
+{
+  /* Step C: "t <-- operands[1]".  */
+  output_asm_insn ("la\t$ta, %l1", operands);
+
+  /* Step D: "z <-- (mem (plus (operands[0] << 2) t))".  */
+  output_asm_insn ("lw\t%2, [$ta + %0 << 2]", operands);
+
+  /* No need to perform Step E, which is only used for
+     pc relative jump table.  */
+
+  /* Step F: jump to target with register z.  */
+  if (TARGET_16_BIT)
+    return "jr5\t%2";
+  else
+    return "jr\t%2";
+}
+
+
+/* Function to return memory format.  */
+enum nds32_16bit_address_type
+nds32_mem_format (rtx op)
+{
+  enum machine_mode mode_test;
+  int val;
+  int regno;
+
+  if (!TARGET_16_BIT)
+    return ADDRESS_NOT_16BIT_FORMAT;
+
+  mode_test = GET_MODE (op);
+
+  op = XEXP (op, 0);
+
+  /* 45 format.  */
+  if (GET_CODE (op) == REG && (mode_test == SImode))
+    return ADDRESS_REG;
+
+  /* 333 format for QI/HImode.  */
+  if (GET_CODE (op) == REG && (REGNO (op) < R8_REGNUM))
+    return ADDRESS_LO_REG_IMM3U;
+
+  /* post_inc 333 format.  */
+  if ((GET_CODE (op) == POST_INC) && (mode_test == SImode))
+    {
+      regno = REGNO (XEXP (op, 0));
+
+      if (regno < 8)
+	return ADDRESS_POST_INC_LO_REG_IMM3U;
+    }
+
+  /* post_inc 333 format.  */
+  if ((GET_CODE (op) == POST_MODIFY)
+      && (mode_test == SImode)
+      && (REG_P (XEXP (XEXP (op, 1), 0)))
+      && (CONST_INT_P (XEXP (XEXP (op, 1), 1))))
+    {
+      regno = REGNO (XEXP (XEXP (op, 1), 0));
+      val = INTVAL (XEXP (XEXP (op, 1), 1));
+      if (regno < 8 && val < 32)
+	return ADDRESS_POST_INC_LO_REG_IMM3U;
+    }
+
+  if ((GET_CODE (op) == PLUS)
+      && (GET_CODE (XEXP (op, 0)) == REG)
+      && (GET_CODE (XEXP (op, 1)) == CONST_INT))
+    {
+      val = INTVAL (XEXP (op, 1));
+
+      regno = REGNO (XEXP (op, 0));
+
+      if (regno > 7
+	  && regno != SP_REGNUM
+	  && regno != FP_REGNUM)
+	return ADDRESS_NOT_16BIT_FORMAT;
+
+      switch (mode_test)
+	{
+	case QImode:
+	  /* 333 format.  */
+	  if (val >= 0 && val < 8 && regno < 8)
+	    return ADDRESS_LO_REG_IMM3U;
+	  break;
+
+	case HImode:
+	  /* 333 format.  */
+	  if (val >= 0 && val < 16 && (val % 2 == 0) && regno < 8)
+	    return ADDRESS_LO_REG_IMM3U;
+	  break;
+
+	case SImode:
+	case SFmode:
+	case DFmode:
+	  /* fp imply 37 format.  */
+	  if ((regno == FP_REGNUM)
+	      && (val >= 0 && val < 512 && (val % 4 == 0)))
+	    return ADDRESS_FP_IMM7U;
+	  /* sp imply 37 format.  */
+	  else if ((regno == SP_REGNUM)
+		   && (val >= 0 && val < 512 && (val % 4 == 0)))
+	    return ADDRESS_SP_IMM7U;
+	  /* 333 format.  */
+	  else if (val >= 0 && val < 32 && (val % 4 == 0) && regno < 8)
+	    return ADDRESS_LO_REG_IMM3U;
+	  break;
+
+	default:
+	  break;
+	}
+    }
+
+  return ADDRESS_NOT_16BIT_FORMAT;
+}
+
+/* Output 16-bit store.  */
+const char *
+nds32_output_16bit_store (rtx *operands, int byte)
+{
+  char pattern[100];
+  char size;
+  rtx code = XEXP (operands[0], 0);
+
+  size = nds32_byte_to_size (byte);
+
+  switch (nds32_mem_format (operands[0]))
+    {
+    case ADDRESS_REG:
+      operands[0] = code;
+      output_asm_insn ("swi450\t%1, [%0]", operands);
+      break;
+    case ADDRESS_LO_REG_IMM3U:
+      snprintf (pattern, sizeof (pattern), "s%ci333\t%%1, %%0", size);
+      output_asm_insn (pattern, operands);
+      break;
+    case ADDRESS_POST_INC_LO_REG_IMM3U:
+      snprintf (pattern, sizeof (pattern), "s%ci333.bi\t%%1, %%0", size);
+      output_asm_insn (pattern, operands);
+      break;
+    case ADDRESS_FP_IMM7U:
+      output_asm_insn ("swi37\t%1, %0", operands);
+      break;
+    case ADDRESS_SP_IMM7U:
+      /* Get the immediate value and set it back to operands[0].  */
+      operands[0] = XEXP (code, 1);
+      output_asm_insn ("swi37.sp\t%1, [ + (%0)]", operands);
+      break;
+    default:
+      break;
+    }
+
+  return "";
+}
+
+/* Output 16-bit load.  */
+const char *
+nds32_output_16bit_load (rtx *operands, int byte)
+{
+  char pattern[100];
+  unsigned char size;
+  rtx code = XEXP (operands[1], 0);
+
+  size = nds32_byte_to_size (byte);
+
+  switch (nds32_mem_format (operands[1]))
+    {
+    case ADDRESS_REG:
+      operands[1] = code;
+      output_asm_insn ("lwi450\t%0, [%1]", operands);
+      break;
+    case ADDRESS_LO_REG_IMM3U:
+      snprintf (pattern, sizeof (pattern), "l%ci333\t%%0, %%1", size);
+      output_asm_insn (pattern, operands);
+      break;
+    case ADDRESS_POST_INC_LO_REG_IMM3U:
+      snprintf (pattern, sizeof (pattern), "l%ci333.bi\t%%0, %%1", size);
+      output_asm_insn (pattern, operands);
+      break;
+    case ADDRESS_FP_IMM7U:
+      output_asm_insn ("lwi37\t%0, %1", operands);
+      break;
+    case ADDRESS_SP_IMM7U:
+      /* Get the immediate value and set it back to operands[1].  */
+      operands[1] = XEXP (code, 1);
+      output_asm_insn ("lwi37.sp\t%0, [ + (%1)]", operands);
+      break;
+    default:
+      break;
+    }
+
+  return "";
+}
+
+/* Output 32-bit store.  */
+const char *
+nds32_output_32bit_store (rtx *operands, int byte)
+{
+  char pattern[100];
+  unsigned char size;
+  rtx code = XEXP (operands[0], 0);
+
+  size = nds32_byte_to_size (byte);
+
+  switch (GET_CODE (code))
+    {
+    case REG:
+      /* (mem (reg X))
+	 => access location by using register,
+	 use "sbi / shi / swi" */
+      snprintf (pattern, sizeof (pattern), "s%ci\t%%1, %%0", size);
+      break;
+
+    case SYMBOL_REF:
+    case CONST:
+      /* (mem (symbol_ref X))
+	 (mem (const (...)))
+	 => access global variables,
+	 use "sbi.gp / shi.gp / swi.gp" */
+      operands[0] = XEXP (operands[0], 0);
+      snprintf (pattern, sizeof (pattern), "s%ci.gp\t%%1, [ + %%0]", size);
+      break;
+
+    case POST_INC:
+      /* (mem (post_inc reg))
+	 => access location by using register which will be post increment,
+	 use "sbi.bi / shi.bi / swi.bi" */
+      snprintf (pattern, sizeof (pattern),
+		"s%ci.bi\t%%1, %%0, %d", size, byte);
+      break;
+
+    case POST_DEC:
+      /* (mem (post_dec reg))
+	 => access location by using register which will be post decrement,
+	 use "sbi.bi / shi.bi / swi.bi" */
+      snprintf (pattern, sizeof (pattern),
+		"s%ci.bi\t%%1, %%0, -%d", size, byte);
+      break;
+
+    case POST_MODIFY:
+      switch (GET_CODE (XEXP (XEXP (code, 1), 1)))
+	{
+	case REG:
+	case SUBREG:
+	  /* (mem (post_modify (reg) (plus (reg) (reg))))
+	     => access location by using register which will be
+	     post modified with reg,
+	     use "sb.bi/ sh.bi / sw.bi" */
+	  snprintf (pattern, sizeof (pattern), "s%c.bi\t%%1, %%0", size);
+	  break;
+	case CONST_INT:
+	  /* (mem (post_modify (reg) (plus (reg) (const_int))))
+	     => access location by using register which will be
+	     post modified with const_int,
+	     use "sbi.bi/ shi.bi / swi.bi" */
+	  snprintf (pattern, sizeof (pattern), "s%ci.bi\t%%1, %%0", size);
+	  break;
+	default:
+	  abort ();
+	}
+      break;
+
+    case PLUS:
+      switch (GET_CODE (XEXP (code, 1)))
+	{
+	case REG:
+	case SUBREG:
+	  /* (mem (plus reg reg)) or (mem (plus (mult reg const_int) reg))
+	     => access location by adding two registers,
+	     use "sb / sh / sw" */
+	  snprintf (pattern, sizeof (pattern), "s%c\t%%1, %%0", size);
+	  break;
+	case CONST_INT:
+	  /* (mem (plus reg const_int))
+	     => access location by adding one register with const_int,
+	     use "sbi / shi / swi" */
+	  snprintf (pattern, sizeof (pattern), "s%ci\t%%1, %%0", size);
+	  break;
+	default:
+	  abort ();
+	}
+      break;
+
+    case LO_SUM:
+      operands[2] = XEXP (code, 1);
+      operands[0] = XEXP (code, 0);
+      snprintf (pattern, sizeof (pattern),
+		"s%ci\t%%1, [%%0 + lo12(%%2)]", size);
+      break;
+
+    default:
+      abort ();
+    }
+
+  output_asm_insn (pattern, operands);
+  return "";
+}
+
+/* Output 32-bit load.  */
+const char *
+nds32_output_32bit_load (rtx *operands, int byte)
+{
+  char pattern[100];
+  unsigned char size;
+  rtx code;
+
+  code = XEXP (operands[1], 0);
+
+  size = nds32_byte_to_size (byte);
+
+  switch (GET_CODE (code))
+    {
+    case REG:
+      /* (mem (reg X))
+	 => access location by using register,
+	 use "lbi / lhi / lwi" */
+      snprintf (pattern, sizeof (pattern), "l%ci\t%%0, %%1", size);
+      break;
+
+    case SYMBOL_REF:
+    case CONST:
+      /* (mem (symbol_ref X))
+	 (mem (const (...)))
+	 => access global variables,
+	 use "lbi.gp / lhi.gp / lwi.gp" */
+      operands[1] = XEXP (operands[1], 0);
+      snprintf (pattern, sizeof (pattern), "l%ci.gp\t%%0, [ + %%1]", size);
+      break;
+
+    case POST_INC:
+      /* (mem (post_inc reg))
+	 => access location by using register which will be post increment,
+	 use "lbi.bi / lhi.bi / lwi.bi" */
+      snprintf (pattern, sizeof (pattern),
+		"l%ci.bi\t%%0, %%1, %d", size, byte);
+      break;
+
+    case POST_DEC:
+      /* (mem (post_dec reg))
+	 => access location by using register which will be post decrement,
+	 use "lbi.bi / lhi.bi / lwi.bi" */
+      snprintf (pattern, sizeof (pattern),
+		"l%ci.bi\t%%0, %%1, -%d", size, byte);
+      break;
+
+    case POST_MODIFY:
+      switch (GET_CODE (XEXP (XEXP (code, 1), 1)))
+	{
+	case REG:
+	case SUBREG:
+	  /* (mem (post_modify (reg) (plus (reg) (reg))))
+	     => access location by using register which will be
+	     post modified with reg,
+	     use "lb.bi/ lh.bi / lw.bi" */
+	  snprintf (pattern, sizeof (pattern), "l%c.bi\t%%0, %%1", size);
+	  break;
+	case CONST_INT:
+	  /* (mem (post_modify (reg) (plus (reg) (const_int))))
+	     => access location by using register which will be
+	     post modified with const_int,
+	     use "lbi.bi/ lhi.bi / lwi.bi" */
+	  snprintf (pattern, sizeof (pattern), "l%ci.bi\t%%0, %%1", size);
+	  break;
+	default:
+	  abort ();
+	}
+      break;
+
+    case PLUS:
+      switch (GET_CODE (XEXP (code, 1)))
+	{
+	case REG:
+	case SUBREG:
+	  /* (mem (plus reg reg)) or (mem (plus (mult reg const_int) reg))
+	     use "lb / lh / lw" */
+	  snprintf (pattern, sizeof (pattern), "l%c\t%%0, %%1", size);
+	  break;
+	case CONST_INT:
+	  /* (mem (plus reg const_int))
+	     => access location by adding one register with const_int,
+	     use "lbi / lhi / lwi" */
+	  snprintf (pattern, sizeof (pattern), "l%ci\t%%0, %%1", size);
+	  break;
+	default:
+	  abort ();
+	}
+      break;
+
+    case LO_SUM:
+      operands[2] = XEXP (code, 1);
+      operands[1] = XEXP (code, 0);
+      snprintf (pattern, sizeof (pattern),
+		"l%ci\t%%0, [%%1 + lo12(%%2)]", size);
+      break;
+
+    default:
+      abort ();
+    }
+
+  output_asm_insn (pattern, operands);
+  return "";
+}
+
+/* ------------------------------------------------------------------------ */
+
+/* PART 5: Initialize target hook structure and definitions.  */
+
+/* Controlling the Compilation Driver.  */
+
+
+/* Run-time Target Specification.  */
+
+
+/* Defining Data Structures for Per-function Information.  */
+
+
+/* Storage Layout.  */
+
+#undef TARGET_PROMOTE_FUNCTION_MODE
+#define TARGET_PROMOTE_FUNCTION_MODE \
+  default_promote_function_mode_always_promote
+
+
+/* Layout of Source Language Data Types.  */
+
+
+/* Register Usage.  */
+
+/* -- Basic Characteristics of Registers.  */
+
+/* -- Order of Allocation of Registers.  */
+
+/* -- How Values Fit in Registers.  */
+
+/* -- Handling Leaf Functions.  */
+
+/* -- Registers That Form a Stack.  */
+
+
+/* Register Classes.  */
+
+#undef TARGET_CLASS_MAX_NREGS
+#define TARGET_CLASS_MAX_NREGS nds32_class_max_nregs
+
+#undef TARGET_LRA_P
+#define TARGET_LRA_P hook_bool_void_true
+
+#undef TARGET_REGISTER_PRIORITY
+#define TARGET_REGISTER_PRIORITY nds32_register_priority
+
+
+/* Obsolete Macros for Defining Constraints.  */
+
+
+/* Stack Layout and Calling Conventions.  */
+
+/* -- Basic Stack Layout.  */
+
+/* -- Exception Handling Support.  */
+
+/* -- Specifying How Stack Checking is Done.  */
+
+/* -- Registers That Address the Stack Frame.  */
+
+/* -- Eliminating Frame Pointer and Arg Pointer.  */
+
+#undef TARGET_CAN_ELIMINATE
+#define TARGET_CAN_ELIMINATE nds32_can_eliminate
+
+/* -- Passing Function Arguments on the Stack.  */
+
+/* -- Passing Arguments in Registers.  */
+
+#undef TARGET_FUNCTION_ARG
+#define TARGET_FUNCTION_ARG nds32_function_arg
+
+#undef TARGET_FUNCTION_ARG_ADVANCE
+#define TARGET_FUNCTION_ARG_ADVANCE nds32_function_arg_advance
+
+#undef TARGET_FUNCTION_ARG_BOUNDARY
+#define TARGET_FUNCTION_ARG_BOUNDARY nds32_function_arg_boundary
+
+/* -- How Scalar Function Values Are Returned.  */
+
+#undef TARGET_FUNCTION_VALUE
+#define TARGET_FUNCTION_VALUE nds32_function_value
+
+#undef TARGET_LIBCALL_VALUE
+#define TARGET_LIBCALL_VALUE nds32_libcall_value
+
+#undef TARGET_FUNCTION_VALUE_REGNO_P
+#define TARGET_FUNCTION_VALUE_REGNO_P nds32_function_value_regno_p
+
+/* -- How Large Values Are Returned.  */
+
+/* -- Caller-Saves Register Allocation.  */
+
+/* -- Function Entry and Exit.  */
+
+#undef TARGET_ASM_FUNCTION_PROLOGUE
+#define TARGET_ASM_FUNCTION_PROLOGUE nds32_asm_function_prologue
+
+#undef TARGET_ASM_FUNCTION_END_PROLOGUE
+#define TARGET_ASM_FUNCTION_END_PROLOGUE nds32_asm_function_end_prologue
+
+#undef  TARGET_ASM_FUNCTION_BEGIN_EPILOGUE
+#define TARGET_ASM_FUNCTION_BEGIN_EPILOGUE nds32_asm_function_begin_epilogue
+
+#undef TARGET_ASM_FUNCTION_EPILOGUE
+#define TARGET_ASM_FUNCTION_EPILOGUE nds32_asm_function_epilogue
+
+#undef TARGET_ASM_OUTPUT_MI_THUNK
+#define TARGET_ASM_OUTPUT_MI_THUNK nds32_asm_output_mi_thunk
+
+#undef TARGET_ASM_CAN_OUTPUT_MI_THUNK
+#define TARGET_ASM_CAN_OUTPUT_MI_THUNK default_can_output_mi_thunk_no_vcall
+
+/* -- Generating Code for Profiling.  */
+
+/* -- Permitting tail calls.  */
+
+#undef TARGET_WARN_FUNC_RETURN
+#define TARGET_WARN_FUNC_RETURN nds32_warn_func_return
+
+/* Stack smashing protection.  */
+
+
+/* Implementing the Varargs Macros.  */
+
+#undef TARGET_STRICT_ARGUMENT_NAMING
+#define TARGET_STRICT_ARGUMENT_NAMING nds32_strict_argument_naming
+
+
+/* Trampolines for Nested Functions.  */
+
+#undef TARGET_ASM_TRAMPOLINE_TEMPLATE
+#define TARGET_ASM_TRAMPOLINE_TEMPLATE nds32_asm_trampoline_template
+
+#undef TARGET_TRAMPOLINE_INIT
+#define TARGET_TRAMPOLINE_INIT nds32_trampoline_init
+
+
+/* Implicit Calls to Library Routines.  */
+
+
+/* Addressing Modes.  */
+
+#undef TARGET_LEGITIMATE_ADDRESS_P
+#define TARGET_LEGITIMATE_ADDRESS_P nds32_legitimate_address_p
+
+#undef TARGET_LEGITIMIZE_ADDRESS
+#define TARGET_LEGITIMIZE_ADDRESS nds32_legitimize_address
+
+
+/* Anchored Addresses.  */
+
+
+/* Condition Code Status.  */
+
+/* -- Representation of condition codes using (cc0).  */
+
+/* -- Representation of condition codes using registers.  */
+
+/* -- Macros to control conditional execution.  */
+
+
+/* Describing Relative Costs of Operations.  */
+
+#undef TARGET_REGISTER_MOVE_COST
+#define TARGET_REGISTER_MOVE_COST nds32_register_move_cost
+
+#undef TARGET_MEMORY_MOVE_COST
+#define TARGET_MEMORY_MOVE_COST nds32_memory_move_cost
+
+#undef TARGET_RTX_COSTS
+#define TARGET_RTX_COSTS nds32_rtx_costs
+
+#undef TARGET_ADDRESS_COST
+#define TARGET_ADDRESS_COST nds32_address_cost
+
+
+/* Adjusting the Instruction Scheduler.  */
+
+
+/* Dividing the Output into Sections (Texts, Data, . . . ).  */
+
+
+/* Position Independent Code.  */
+
+
+/* Defining the Output Assembler Language.  */
+
+/* -- The Overall Framework of an Assembler File.  */
+
+#undef TARGET_ASM_FILE_START
+#define TARGET_ASM_FILE_START nds32_asm_file_start
+#undef TARGET_ASM_FILE_END
+#define TARGET_ASM_FILE_END nds32_asm_file_end
+
+/* -- Output of Data.  */
+
+#undef TARGET_ASM_ALIGNED_HI_OP
+#define TARGET_ASM_ALIGNED_HI_OP "\t.hword\t"
+
+#undef TARGET_ASM_ALIGNED_SI_OP
+#define TARGET_ASM_ALIGNED_SI_OP "\t.word\t"
+
+/* -- Output of Uninitialized Variables.  */
+
+/* -- Output and Generation of Labels.  */
+
+#undef TARGET_ASM_GLOBALIZE_LABEL
+#define TARGET_ASM_GLOBALIZE_LABEL nds32_asm_globalize_label
+
+/* -- How Initialization Functions Are Handled.  */
+
+/* -- Macros Controlling Initialization Routines.  */
+
+/* -- Output of Assembler Instructions.  */
+
+#undef TARGET_PRINT_OPERAND
+#define TARGET_PRINT_OPERAND nds32_print_operand
+#undef TARGET_PRINT_OPERAND_ADDRESS
+#define TARGET_PRINT_OPERAND_ADDRESS nds32_print_operand_address
+
+/* -- Output of Dispatch Tables.  */
+
+/* -- Assembler Commands for Exception Regions.  */
+
+/* -- Assembler Commands for Alignment.  */
+
+
+/* Controlling Debugging Information Format.  */
+
+/* -- Macros Affecting All Debugging Formats.  */
+
+/* -- Specific Options for DBX Output.  */
+
+/* -- Open-Ended Hooks for DBX Format.  */
+
+/* -- File Names in DBX Format.  */
+
+/* -- Macros for SDB and DWARF Output.  */
+
+/* -- Macros for VMS Debug Format.  */
+
+
+/* Cross Compilation and Floating Point.  */
+
+
+/* Mode Switching Instructions.  */
+
+
+/* Defining target-specific uses of __attribute__.  */
+
+#undef TARGET_ATTRIBUTE_TABLE
+#define TARGET_ATTRIBUTE_TABLE nds32_attribute_table
+
+#undef TARGET_OPTION_PRAGMA_PARSE
+#define TARGET_OPTION_PRAGMA_PARSE nds32_option_pragma_parse
+
+#undef TARGET_OPTION_OVERRIDE
+#define TARGET_OPTION_OVERRIDE nds32_option_override
+
+
+/* Emulating TLS.  */
+
+
+/* Defining coprocessor specifics for MIPS targets.  */
+
+
+/* Parameters for Precompiled Header Validity Checking.  */
+
+
+/* C++ ABI parameters.  */
+
+
+/* Adding support for named address spaces.  */
+
+
+/* Miscellaneous Parameters.  */
+
+#undef TARGET_INIT_BUILTINS
+#define TARGET_INIT_BUILTINS nds32_init_builtins
+
+#undef TARGET_EXPAND_BUILTIN
+#define TARGET_EXPAND_BUILTIN nds32_expand_builtin
+
+
+/* ------------------------------------------------------------------------ */
+
+/* Initialize the GCC target structure.  */
+
+struct gcc_target targetm = TARGET_INITIALIZER;
+
+/* ------------------------------------------------------------------------ */
diff --git gcc/config/nds32/nds32.h gcc/config/nds32/nds32.h
new file mode 100644
index 0000000..f464f66
--- /dev/null
+++ gcc/config/nds32/nds32.h
@@ -0,0 +1,975 @@ 
+/* Definitions of target machine of Andes NDS32 cpu for GNU compiler
+   Copyright (C) 2012-2013 Free Software Foundation, Inc.
+   Contributed by Andes Technology Corporation.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+
+/* ------------------------------------------------------------------------ */
+
+/* The following are auxiliary macros or structure declarations
+   that are used throughout nds32.c and nds32.h.  */
+
+
+/* Computing the Length of an Insn.  */
+#define ADJUST_INSN_LENGTH(INSN, LENGTH) \
+  (LENGTH = nds32_adjust_insn_length (INSN, LENGTH))
+
+/* Check instruction LS-37-FP-implied form.
+   Note: actually its immediate range is imm9u
+         since it is used for lwi37/swi37 instructions.  */
+#define NDS32_LS_37_FP_P(rt, ra, imm)       \
+  (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS \
+   && REGNO (ra) == FP_REGNUM               \
+   && satisfies_constraint_Iu09 (imm))
+
+/* Check instruction LS-37-SP-implied form.
+   Note: actually its immediate range is imm9u
+         since it is used for lwi37/swi37 instructions.  */
+#define NDS32_LS_37_SP_P(rt, ra, imm)       \
+  (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS \
+   && REGNO (ra) == SP_REGNUM               \
+   && satisfies_constraint_Iu09 (imm))
+
+
+/* Check load/store instruction form : Rt3, Ra3, imm3u.  */
+#define NDS32_LS_333_P(rt, ra, imm, mode) nds32_ls_333_p (rt, ra, imm, mode)
+
+/* Check load/store instruction form : Rt4, Ra5, const_int_0.
+   Note: no need to check ra because Ra5 means it covers all registers.  */
+#define NDS32_LS_450_P(rt, ra, imm)                     \
+  ((imm == const0_rtx)                                  \
+   && (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS         \
+       || REGNO_REG_CLASS (REGNO (rt)) == MIDDLE_REGS))
+
+/* Check instruction RRI-333-form.  */
+#define NDS32_RRI_333_P(rt, ra, imm)           \
+  (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS    \
+   && REGNO_REG_CLASS (REGNO (ra)) == LOW_REGS \
+   && satisfies_constraint_Iu03 (imm))
+
+/* Check instruction RI-45-form.  */
+#define NDS32_RI_45_P(rt, ra, imm)                     \
+  (REGNO (rt) == REGNO (ra)                            \
+   && (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS        \
+       || REGNO_REG_CLASS (REGNO (rt)) == MIDDLE_REGS) \
+   && satisfies_constraint_Iu05 (imm))
+
+
+/* Check instruction RR-33-form.  */
+#define NDS32_RR_33_P(rt, ra)                   \
+  (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS     \
+   && REGNO_REG_CLASS (REGNO (ra)) == LOW_REGS)
+
+/* Check instruction RRR-333-form.  */
+#define NDS32_RRR_333_P(rt, ra, rb)             \
+  (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS     \
+   && REGNO_REG_CLASS (REGNO (ra)) == LOW_REGS  \
+   && REGNO_REG_CLASS (REGNO (rb)) == LOW_REGS)
+
+/* Check instruction RR-45-form.
+   Note: no need to check rb because Rb5 means it covers all registers.  */
+#define NDS32_RR_45_P(rt, ra, rb)               \
+  (REGNO (rt) == REGNO (ra)                     \
+   && (REGNO_REG_CLASS (REGNO (rt)) == LOW_REGS \
+       || REGNO_REG_CLASS (REGNO (rt)) == MIDDLE_REGS))
+
+/* Classifies address type to distinguish 16-bit/32-bit format.  */
+enum nds32_16bit_address_type
+{
+  /* [reg]: 45 format address.  */
+  ADDRESS_REG,
+  /* [lo_reg + imm3u]: 333 format address.  */
+  ADDRESS_LO_REG_IMM3U,
+  /* post_inc [lo_reg + imm3u]: 333 format address.  */
+  ADDRESS_POST_INC_LO_REG_IMM3U,
+  /* [$fp + imm7u]: fp imply address.  */
+  ADDRESS_FP_IMM7U,
+  /* [$sp + imm7u]: sp imply address.  */
+  ADDRESS_SP_IMM7U,
+  /* Other address format.  */
+  ADDRESS_NOT_16BIT_FORMAT
+};
+
+
+/* ------------------------------------------------------------------------ */
+
+/* Define maximum numbers of registers for passing arguments.  */
+#define NDS32_MAX_REGS_FOR_ARGS 6
+
+/* Define the register number for first argument.  */
+#define NDS32_GPR_ARG_FIRST_REGNUM 0
+
+/* Define the register number for return value.  */
+#define NDS32_GPR_RET_FIRST_REGNUM 0
+
+
+/* Define double word alignment bits.  */
+#define NDS32_DOUBLE_WORD_ALIGNMENT 64
+
+/* Define alignment checking macros for convenience.  */
+#define NDS32_HALF_WORD_ALIGN_P(value)   (((value) & 0x01) == 0)
+#define NDS32_SINGLE_WORD_ALIGN_P(value) (((value) & 0x03) == 0)
+#define NDS32_DOUBLE_WORD_ALIGN_P(value) (((value) & 0x07) == 0)
+
+/* Round X up to the nearest double word.  */
+#define NDS32_ROUND_UP_DOUBLE_WORD(value)  (((value) + 7) & ~7)
+
+
+/* This macro is used to calculate the number of registers needed
+   to contain 'size' bytes of the argument.
+   The size of a register is a word on the nds32 target.
+   So we use UNITS_PER_WORD to do the calculation.  */
+#define NDS32_NEED_N_REGS_FOR_ARG(mode, type)                            \
+  ((mode == BLKmode)                                                     \
+   ? ((int_size_in_bytes (type) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)  \
+   : ((GET_MODE_SIZE (mode)     + UNITS_PER_WORD - 1) / UNITS_PER_WORD))
+
+/* This macro is used to return the register number for passing an argument.
+   We need to obey the following rules:
+     1. If it requires MORE THAN one register,
+        make sure the register number is an even value.
+     2. If it requires ONLY one register,
+        the register number can be an odd or even value.  */
+#define NDS32_AVAILABLE_REGNUM_FOR_ARG(reg_offset, mode, type) \
+  ((NDS32_NEED_N_REGS_FOR_ARG (mode, type) > 1)                \
+   ? (((reg_offset) + NDS32_GPR_ARG_FIRST_REGNUM + 1) & ~1)    \
+   : ((reg_offset) + NDS32_GPR_ARG_FIRST_REGNUM))
+
+/* This macro is to check if there are still available registers
+   for passing the argument.  */
+#define NDS32_ARG_PASS_IN_REG_P(reg_offset, mode, type)      \
+  (((reg_offset) < NDS32_MAX_REGS_FOR_ARGS)                  \
+   && ((reg_offset) + NDS32_NEED_N_REGS_FOR_ARG (mode, type) \
+       <= NDS32_MAX_REGS_FOR_ARGS))
+
+/* This macro is to check if a register is required to be saved on the stack.
+   If call_used_regs[regno] == 0, regno is a callee-saved register.
+   If df_regs_ever_live_p (regno) == true, it is used in the current function.
+   As long as the register satisfies both criteria above,
+   it is required to be saved.  */
+#define NDS32_REQUIRED_CALLEE_SAVED_P(regno)                  \
+  ((!call_used_regs[regno]) && (df_regs_ever_live_p (regno)))
+
+/* ------------------------------------------------------------------------ */
+
+/* A C structure for machine-specific, per-function data.
+   This is added to the cfun structure.  */
+struct GTY(()) machine_function
+{
+  /* Number of bytes allocated on the stack for variadic args
+     if we want to push them onto the stack as pretend arguments ourselves.  */
+  int va_args_size;
+  /* Number of bytes reserved on the stack for
+     local and temporary variables.  */
+  int local_size;
+  /* Number of bytes allocated on the stack for outgoing arguments.  */
+  int out_args_size;
+
+  /* Number of bytes on the stack for saving $fp.  */
+  int fp_size;
+  /* Number of bytes on the stack for saving $gp.  */
+  int gp_size;
+  /* Number of bytes on the stack for saving $lp.  */
+  int lp_size;
+
+  /* Number of bytes on the stack for saving callee-saved registers.  */
+  int callee_saved_regs_size;
+  /* Padding bytes that may be required in the callee-saved area.  */
+  int callee_saved_area_padding_bytes;
+
+  /* The first required register that should be saved on stack
+     for va_args (one named argument + nameless arguments).  */
+  int va_args_first_regno;
+  /* The last required register that should be saved on stack
+     for va_args (one named argument + nameless arguments).  */
+  int va_args_last_regno;
+
+  /* The first required callee-saved register.  */
+  int callee_saved_regs_first_regno;
+  /* The last required callee-saved register.  */
+  int callee_saved_regs_last_regno;
+
+  /* Indicate whether this function needs
+     prologue/epilogue code generation.  */
+  int naked_p;
+  /* Indicate whether this function
+     uses the fp_as_gp optimization.  */
+  int fp_as_gp_p;
+};
+
+/* A C structure that contains the argument information.  */
+typedef struct
+{
+  unsigned int reg_offset;
+} nds32_cumulative_args;
+
+/* ------------------------------------------------------------------------ */
+
+/* In the following we define C-ISR related stuff.
+   The nds32 architecture has 73 vectors for interrupts/exceptions.
+   For each vector (except for vector 0, which is used for reset behavior),
+   we allow users to set its register saving scheme and interrupt level.  */
+
+/* There are 73 vectors in the nds32 architecture:
+   0 for the reset handler,
+   1-8 for exception handlers,
+   and 9-72 for interrupt handlers.
+   We use an array, which is defined in nds32.c, to record
+   essential information for each vector.  */
+#define NDS32_N_ISR_VECTORS 73
+
+/* Define the possible isr categories.  */
+enum nds32_isr_category
+{
+  NDS32_ISR_NONE,
+  NDS32_ISR_INTERRUPT,
+  NDS32_ISR_EXCEPTION,
+  NDS32_ISR_RESET
+};
+
+/* Define isr register saving scheme.  */
+enum nds32_isr_save_reg
+{
+  NDS32_SAVE_ALL,
+  NDS32_PARTIAL_SAVE
+};
+
+/* Define isr nested type.  */
+enum nds32_isr_nested_type
+{
+  NDS32_NESTED,
+  NDS32_NOT_NESTED,
+  NDS32_NESTED_READY
+};
+
+/* Define a structure to record isr information.
+   The isr vector array 'isr_vectors[]' with this structure
+   is defined in nds32.c.  */
+struct nds32_isr_info
+{
+  /* The field to identify the isr category.
+     It should be set to NDS32_ISR_NONE by default.
+     If the user specifies a function as an isr by using an attribute,
+     this field will be set accordingly.  */
+  enum nds32_isr_category category;
+
+  /* A string for the applied function name.
+     It should be set to empty string by default.  */
+  char func_name[100];
+
+  /* The register saving scheme.
+     It should be set to NDS32_PARTIAL_SAVE by default
+     unless the user specifies an attribute to change it.  */
+  enum nds32_isr_save_reg save_reg;
+
+  /* The nested type.
+     It should be set to NDS32_NOT_NESTED by default
+     unless the user specifies an attribute to change it.  */
+  enum nds32_isr_nested_type nested_type;
+
+  /* Total number of vectors.
+     The total = number of interrupts + number of exceptions + reset.
+     It should be set to 0 by default.
+     This field is ONLY used in NDS32_ISR_RESET category.  */
+  unsigned int total_n_vectors;
+
+  /* A string for nmi handler name.
+     It should be set to empty string by default.
+     This field is ONLY used in NDS32_ISR_RESET category.  */
+  char nmi_name[100];
+
+  /* A string for warm handler name.
+     It should be set to empty string by default.
+     This field is ONLY used in NDS32_ISR_RESET category.  */
+  char warm_name[100];
+};
+
+/* ------------------------------------------------------------------------ */
+
+/* Define code for all nds32 builtins.  */
+enum nds32_builtins
+{
+  NDS32_BUILTIN_ISYNC,
+  NDS32_BUILTIN_ISB,
+  NDS32_BUILTIN_MFSR,
+  NDS32_BUILTIN_MFUSR,
+  NDS32_BUILTIN_MTSR,
+  NDS32_BUILTIN_MTUSR,
+  NDS32_BUILTIN_SETGIE_EN,
+  NDS32_BUILTIN_SETGIE_DIS
+};
+
+/* ------------------------------------------------------------------------ */
+
+#define TARGET_ISA_V2   (nds32_arch_option == ARCH_V2)
+#define TARGET_ISA_V3   (nds32_arch_option == ARCH_V3)
+#define TARGET_ISA_V3M  (nds32_arch_option == ARCH_V3M)
+
+/* ------------------------------------------------------------------------ */
+
+/* Controlling the Compilation Driver.  */
+
+#define OPTION_DEFAULT_SPECS \
+  {"arch", "%{!march=*:-march=%(VALUE)}" },
+
+#define CC1_SPEC \
+  ""
+
+#define ASM_SPEC \
+  " %{mbig-endian:-EB} %{mlittle-endian:-EL}"
+
+#define LINK_SPEC \
+  " %{mbig-endian:-EB} %{mlittle-endian:-EL}"
+
+#define LIB_SPEC \
+  " -lc -lgloss"
+
+/* The option -mno-ctor-dtor can disable the constructor/destructor feature
+   by applying different crt stuff.  By convention, crt0.o is the
+   startup file without constructor/destructor support;
+   crt1.o, crti.o, crtbegin.o, crtend.o, and crtn.o are the
+   startup files with constructor/destructor support.
+   Note that crt0.o, crt1.o, crti.o, and crtn.o are provided
+   by newlib/mculib/glibc/uclibc, while crtbegin.o and crtend.o are
+   currently provided by GCC for the nds32 target.
+
+   For the nds32 target so far:
+   If -mno-ctor-dtor is given, we link
+   "crt0.o [user objects]".
+   In general cases, we link
+   "crt1.o crtbegin1.o [user objects] crtend1.o".  */
+#define STARTFILE_SPEC \
+  " %{!mno-ctor-dtor:crt1.o%s;:crt0.o%s}" \
+  " %{!mno-ctor-dtor:crtbegin1.o%s}"
+#define ENDFILE_SPEC \
+  " %{!mno-ctor-dtor:crtend1.o%s}"
+
+/* TARGET_BIG_ENDIAN_DEFAULT is defined if we configure gcc
+   with a --target=nds32be-* setting.
+   Check gcc/config.gcc for more information.  */
+#ifdef TARGET_BIG_ENDIAN_DEFAULT
+#define MULTILIB_DEFAULTS { "mbig-endian" }
+#else
+#define MULTILIB_DEFAULTS { "mlittle-endian" }
+#endif
+
+
+/* Run-time Target Specification.  */
+
+#define TARGET_CPU_CPP_BUILTINS()                     \
+  do                                                  \
+    {                                                 \
+      builtin_define ("__nds32__");                   \
+                                                      \
+      if (TARGET_ISA_V2)                              \
+        builtin_define ("__NDS32_ISA_V2__");          \
+      if (TARGET_ISA_V3)                              \
+        builtin_define ("__NDS32_ISA_V3__");          \
+      if (TARGET_ISA_V3M)                             \
+        builtin_define ("__NDS32_ISA_V3M__");         \
+                                                      \
+      if (TARGET_BIG_ENDIAN)                          \
+        builtin_define ("__big_endian__");            \
+      if (TARGET_REDUCED_REGS)                        \
+        builtin_define ("__NDS32_REDUCED_REGS__");    \
+      if (TARGET_CMOV)                                \
+        builtin_define ("__NDS32_CMOV__");            \
+      if (TARGET_PERF_EXT)                            \
+        builtin_define ("__NDS32_PERF_EXT__");        \
+      if (TARGET_16_BIT)                              \
+        builtin_define ("__NDS32_16_BIT__");          \
+      if (TARGET_GP_DIRECT)                           \
+        builtin_define ("__NDS32_GP_DIRECT__");       \
+                                                      \
+      builtin_assert ("cpu=nds32");                   \
+      builtin_assert ("machine=nds32");               \
+    } while (0)
+
+
+/* Defining Data Structures for Per-function Information.  */
+
+/* This macro is called once per function,
+   before generation of any RTL has begun.  */
+#define INIT_EXPANDERS  nds32_init_expanders ()
+
+
+/* Storage Layout.  */
+
+#define BITS_BIG_ENDIAN 0
+
+#define BYTES_BIG_ENDIAN (TARGET_BIG_ENDIAN)
+
+#define WORDS_BIG_ENDIAN (TARGET_BIG_ENDIAN)
+
+#define UNITS_PER_WORD 4
+
+#define PROMOTE_MODE(m, unsignedp, type)                                    \
+  if (GET_MODE_CLASS (m) == MODE_INT && GET_MODE_SIZE (m) < UNITS_PER_WORD) \
+    {                                                                       \
+      (m) = SImode;                                                         \
+    }
+
+#define PARM_BOUNDARY 32
+
+#define STACK_BOUNDARY 64
+
+#define FUNCTION_BOUNDARY 32
+
+#define BIGGEST_ALIGNMENT 64
+
+#define EMPTY_FIELD_BOUNDARY 32
+
+#define STRUCTURE_SIZE_BOUNDARY 8
+
+#define STRICT_ALIGNMENT 1
+
+#define PCC_BITFIELD_TYPE_MATTERS 1
+
+
+/* Layout of Source Language Data Types.  */
+
+#define INT_TYPE_SIZE           32
+#define SHORT_TYPE_SIZE         16
+#define LONG_TYPE_SIZE          32
+#define LONG_LONG_TYPE_SIZE     64
+
+#define FLOAT_TYPE_SIZE         32
+#define DOUBLE_TYPE_SIZE        64
+#define LONG_DOUBLE_TYPE_SIZE   64
+
+#define DEFAULT_SIGNED_CHAR 1
+
+#define SIZE_TYPE "long unsigned int"
+#define PTRDIFF_TYPE "long int"
+#define WCHAR_TYPE "short unsigned int"
+#define WCHAR_TYPE_SIZE 16
+
+
+/* Register Usage.  */
+
+/* Number of actual hardware registers.
+   The hardware registers are assigned numbers for the compiler
+   from 0 to just below FIRST_PSEUDO_REGISTER.
+   All registers that the compiler knows about must be given numbers,
+   even those that are not normally considered general registers.  */
+#define FIRST_PSEUDO_REGISTER 34
+
+/* An initializer that says which registers are used for fixed
+   purposes all throughout the compiled code and are therefore
+   not available for general allocation.
+
+   $r28 : $fp
+   $r29 : $gp
+   $r30 : $lp
+   $r31 : $sp
+
+   caller-save registers: $r0 ~ $r5, $r16 ~ $r23
+   callee-save registers: $r6 ~ $r10, $r11 ~ $r14
+
+   reserved for assembler : $r15
+   reserved for other use : $r24, $r25, $r26, $r27 */
+#define FIXED_REGISTERS                 \
+{ /* r0  r1  r2  r3  r4  r5  r6  r7  */ \
+      0,  0,  0,  0,  0,  0,  0,  0,    \
+  /* r8  r9  r10 r11 r12 r13 r14 r15 */ \
+      0,  0,  0,  0,  0,  0,  0,  1,    \
+  /* r16 r17 r18 r19 r20 r21 r22 r23 */ \
+      0,  0,  0,  0,  0,  0,  0,  0,    \
+  /* r24 r25 r26 r27 r28 r29 r30 r31 */ \
+      1,  1,  1,  1,  0,  1,  0,  1,    \
+  /* ARG_POINTER:32 */                  \
+      1,                                \
+  /* FRAME_POINTER:33 */                \
+      1                                 \
+}
+
+/* Identifies the registers that are not available for
+   general allocation of values that must live across
+   function calls -- so they are caller-save registers.
+
+   0 : callee-save registers
+   1 : caller-save registers */
+#define CALL_USED_REGISTERS             \
+{ /* r0  r1  r2  r3  r4  r5  r6  r7  */ \
+      1,  1,  1,  1,  1,  1,  0,  0,    \
+  /* r8  r9  r10 r11 r12 r13 r14 r15 */ \
+      0,  0,  0,  0,  0,  0,  0,  1,    \
+  /* r16 r17 r18 r19 r20 r21 r22 r23 */ \
+      1,  1,  1,  1,  1,  1,  1,  1,    \
+  /* r24 r25 r26 r27 r28 r29 r30 r31 */ \
+      1,  1,  1,  1,  0,  1,  0,  1,    \
+  /* ARG_POINTER:32 */                  \
+      1,                                \
+  /* FRAME_POINTER:33 */                \
+      1                                 \
+}
+
+/* In nds32 target, we have three levels of registers:
+     LOW_COST_REGS    : $r0 ~ $r7
+     MIDDLE_COST_REGS : $r8 ~ $r11, $r16 ~ $r19
+     HIGH_COST_REGS   : $r12 ~ $r14, $r20 ~ $r31 */
+#define REG_ALLOC_ORDER           \
+{                                 \
+   0,  1,  2,  3,  4,  5,  6,  7, \
+   8,  9, 10, 11, 16, 17, 18, 19, \
+  12, 13, 14, 15, 20, 21, 22, 23, \
+  24, 25, 26, 27, 28, 29, 30, 31, \
+  32,                             \
+  33                              \
+}
+
+/* Tell IRA to use the order we define rather than messing it up with its
+   own cost calculations.  */
+#define HONOR_REG_ALLOC_ORDER
+
+/* The number of consecutive hard regs needed starting at
+   reg "regno" for holding a value of mode "mode".  */
+#define HARD_REGNO_NREGS(regno, mode) nds32_hard_regno_nregs (regno, mode)
+
+/* Value is 1 if hard register "regno" can hold a value
+   of machine-mode "mode".  */
+#define HARD_REGNO_MODE_OK(regno, mode) nds32_hard_regno_mode_ok (regno, mode)
+
+/* A C expression that is nonzero if a value of mode1
+   is accessible in mode2 without copying.
+   Define this macro to return nonzero in as many cases as possible
+   since doing so will allow GCC to perform better register allocation.
+   We can use general registers to tie QI/HI/SI modes together.  */
+#define MODES_TIEABLE_P(mode1, mode2)          \
+  (GET_MODE_CLASS (mode1) == MODE_INT          \
+   && GET_MODE_CLASS (mode2) == MODE_INT       \
+   && GET_MODE_SIZE (mode1) <= UNITS_PER_WORD  \
+   && GET_MODE_SIZE (mode2) <= UNITS_PER_WORD)
+
+
+/* Register Classes.  */
+
+/* In nds32 target, we have three levels of registers:
+     Low cost registers    : $r0 ~ $r7
+     Middle cost registers : $r8 ~ $r11, $r16 ~ $r19
+     High cost registers   : $r12 ~ $r14, $r20 ~ $r31
+
+   In practice, we have MIDDLE_REGS cover LOW_REGS register class contents
+   so that it provides more chance to use low cost registers.  */
+enum reg_class
+{
+  NO_REGS,
+  R15_TA_REG,
+  STACK_REG,
+  LOW_REGS,
+  MIDDLE_REGS,
+  HIGH_REGS,
+  GENERAL_REGS,
+  FRAME_REGS,
+  ALL_REGS,
+  LIM_REG_CLASSES
+};
+
+#define N_REG_CLASSES (int) LIM_REG_CLASSES
+
+#define REG_CLASS_NAMES \
+{                       \
+  "NO_REGS",            \
+  "R15_TA_REG",         \
+  "STACK_REG",          \
+  "LOW_REGS",           \
+  "MIDDLE_REGS",        \
+  "HIGH_REGS",          \
+  "GENERAL_REGS",       \
+  "FRAME_REGS",         \
+  "ALL_REGS"            \
+}
+
+#define REG_CLASS_CONTENTS \
+{                                                            \
+  {0x00000000, 0x00000000}, /* NO_REGS     :              */ \
+  {0x00008000, 0x00000000}, /* R15_TA_REG  : 15           */ \
+  {0x80000000, 0x00000000}, /* STACK_REG   : 31           */ \
+  {0x000000ff, 0x00000000}, /* LOW_REGS    : 0-7          */ \
+  {0x000f0fff, 0x00000000}, /* MIDDLE_REGS : 0-11, 16-19  */ \
+  {0xfff07000, 0x00000000}, /* HIGH_REGS   : 12-14, 20-31 */ \
+  {0xffffffff, 0x00000000}, /* GENERAL_REGS: 0-31         */ \
+  {0x00000000, 0x00000003}, /* FRAME_REGS  : 32, 33       */ \
+  {0xffffffff, 0x00000003}  /* ALL_REGS    : 0-31, 32, 33 */ \
+}
+
+#define REGNO_REG_CLASS(regno) nds32_regno_reg_class (regno)
+
+#define BASE_REG_CLASS GENERAL_REGS
+#define INDEX_REG_CLASS GENERAL_REGS
+
+/* Return nonzero if it is suitable for use as a
+   base register in operand addresses.
+   So far, we return nonzero only if "num" is a hard reg
+   of the suitable class or a pseudo register which is
+   allocated to a suitable hard reg.  */
+#define REGNO_OK_FOR_BASE_P(num) \
+  ((num) < 32 || (unsigned) reg_renumber[num] < 32)
+
+/* Return nonzero if it is suitable for use as a
+   index register in operand addresses.
+   So far, we return nonzero only if "num" is a hard reg
+   of the suitable class or a pseudo register which is
+   allocated to a suitable hard reg.
+   The difference between an index register and a base register is that
+   the index register may be scaled.  */
+#define REGNO_OK_FOR_INDEX_P(num) \
+  ((num) < 32 || (unsigned) reg_renumber[num] < 32)
+
+
+/* Obsolete Macros for Defining Constraints.  */
+
+
+/* Stack Layout and Calling Conventions.  */
+
+#define STACK_GROWS_DOWNWARD
+
+#define FRAME_GROWS_DOWNWARD 1
+
+#define STARTING_FRAME_OFFSET 0
+
+#define STACK_POINTER_OFFSET 0
+
+#define FIRST_PARM_OFFSET(fundecl) 0
+
+#define RETURN_ADDR_RTX(count, frameaddr) \
+  nds32_return_addr_rtx (count, frameaddr)
+
+/* A C expression whose value is RTL representing the location
+   of the incoming return address at the beginning of any function
+   before the prologue.
+   If this RTL is REG, you should also define
+   DWARF_FRAME_RETURN_COLUMN to DWARF_FRAME_REGNUM (REGNO).  */
+#define INCOMING_RETURN_ADDR_RTX    gen_rtx_REG (Pmode, LP_REGNUM)
+#define DWARF_FRAME_RETURN_COLUMN   DWARF_FRAME_REGNUM (LP_REGNUM)
+
+#define STACK_POINTER_REGNUM SP_REGNUM
+
+#define FRAME_POINTER_REGNUM 33
+
+#define HARD_FRAME_POINTER_REGNUM FP_REGNUM
+
+#define ARG_POINTER_REGNUM 32
+
+#define STATIC_CHAIN_REGNUM 16
+
+#define ELIMINABLE_REGS                                \
+{ { ARG_POINTER_REGNUM,   STACK_POINTER_REGNUM },      \
+  { ARG_POINTER_REGNUM,   HARD_FRAME_POINTER_REGNUM }, \
+  { FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM },      \
+  { FRAME_POINTER_REGNUM, HARD_FRAME_POINTER_REGNUM } }
+
+#define INITIAL_ELIMINATION_OFFSET(from_reg, to_reg, offset_var) \
+  (offset_var) = nds32_initial_elimination_offset (from_reg, to_reg)
+
+#define ACCUMULATE_OUTGOING_ARGS 1
+
+#define OUTGOING_REG_PARM_STACK_SPACE(fntype) 1
+
+#define CUMULATIVE_ARGS nds32_cumulative_args
+
+#define INIT_CUMULATIVE_ARGS(cum, fntype, libname, fndecl, n_named_args) \
+  nds32_init_cumulative_args (&cum, fntype, libname, fndecl, n_named_args)
+
+/* REGNO is an unsigned integer but NDS32_GPR_ARG_FIRST_REGNUM may be 0.
+   We cast REGNO to a signed integer so that we can avoid the
+   'comparison of unsigned expression >= 0 is always true' warning.  */
+#define FUNCTION_ARG_REGNO_P(regno)                                        \
+  (((int) regno - NDS32_GPR_ARG_FIRST_REGNUM >= 0)                         \
+   && ((int) regno - NDS32_GPR_ARG_FIRST_REGNUM < NDS32_MAX_REGS_FOR_ARGS))
+
+#define DEFAULT_PCC_STRUCT_RETURN 0
+
+/* EXIT_IGNORE_STACK should be nonzero if, when returning
+   from a function, the stack pointer does not matter.
+   The value is tested only in functions that have frame pointers.
+   In nds32 target, the function epilogue recovers the
+   stack pointer from the frame.  */
+#define EXIT_IGNORE_STACK 1
+
+#define FUNCTION_PROFILER(file, labelno) \
+  fprintf (file, "/* profiler %d */", (labelno))
+
+
+/* Implementing the Varargs Macros.  */
+
+
+/* Trampolines for Nested Functions.  */
+
+/* Given an A-function and a B-function,
+   if B-function wants to call A-function's nested function,
+   we need to fill trampoline code into A-function's stack
+   so that B-function can execute the code on the stack to indirectly
+   jump (like a 'trampoline' action) to the desired nested function.
+
+   The trampoline code for the nds32 target must contain the following parts:
+
+     1. instructions (4 * 4 = 16 bytes):
+          get $pc first
+          load chain_value to static chain register via $pc
+          load nested function address to $r15 via $pc
+          jump to desired nested function via $r15
+     2. data (4 * 2 = 8 bytes):
+          chain_value
+          nested function address
+
+   Please check nds32.c implementation for more information.  */
+#define TRAMPOLINE_SIZE 24
+
+/* Because all instructions/data in the trampoline template are 4 bytes
+   in size, we set the trampoline alignment to 4*8=32 bits.  */
+#define TRAMPOLINE_ALIGNMENT 32
+
+
+/* Implicit Calls to Library Routines.  */
+
+
+/* Addressing Modes.  */
+
+/* We can use "LWI.bi  Rt, [Ra], 4" to support post increment.  */
+#define HAVE_POST_INCREMENT 1
+/* We can use "LWI.bi  Rt, [Ra], -4" to support post decrement.  */
+#define HAVE_POST_DECREMENT 1
+
+/* We have "LWI.bi  Rt, [Ra], imm" instruction form.  */
+#define HAVE_POST_MODIFY_DISP 1
+/* We have "LW.bi   Rt, [Ra], Rb" instruction form.  */
+#define HAVE_POST_MODIFY_REG  1
+
+#define CONSTANT_ADDRESS_P(x) (CONSTANT_P (x) && GET_CODE (x) != CONST_DOUBLE)
+
+#define MAX_REGS_PER_ADDRESS 2
+
+
+/* Anchored Addresses.  */
+
+
+/* Condition Code Status.  */
+
+
+/* Describing Relative Costs of Operations.  */
+
+/* A C expression for the cost of a branch instruction.
+   A value of 1 is the default;
+   other values are interpreted relative to that.  */
+#define BRANCH_COST(speed_p, predictable_p) ((speed_p) ? 2 : 0)
+
+#define SLOW_BYTE_ACCESS 1
+
+#define NO_FUNCTION_CSE
+
+
+/* Adjusting the Instruction Scheduler.  */
+
+
+/* Dividing the Output into Sections (Texts, Data, . . . ).  */
+
+#define TEXT_SECTION_ASM_OP     "\t.text"
+#define DATA_SECTION_ASM_OP     "\t.data"
+
+/* Currently, the nds32 assembler does NOT handle the '.bss' pseudo-op,
+   so we use '.section .bss' instead.  */
+#define BSS_SECTION_ASM_OP      "\t.section\t.bss"
+
+/* Define this macro to be an expression with a nonzero value if jump tables
+   (for tablejump insns) should be output in the text section,
+   along with the assembler instructions.
+   Otherwise, the readonly data section is used.  */
+#define JUMP_TABLES_IN_TEXT_SECTION 1
+
+
+/* Position Independent Code.  */
+
+
+/* Defining the Output Assembler Language.  */
+
+#define ASM_COMMENT_START "!"
+
+#define ASM_APP_ON "! #APP"
+
+#define ASM_APP_OFF "! #NO_APP\n"
+
+#define ASM_OUTPUT_LABELREF(stream, name) \
+  asm_fprintf (stream, "%U%s", (*targetm.strip_name_encoding) (name))
+
+#define ASM_OUTPUT_SYMBOL_REF(stream, sym) \
+  assemble_name (stream, XSTR (sym, 0))
+
+#define ASM_OUTPUT_LABEL_REF(stream, buf) \
+  assemble_name (stream, buf)
+
+#define LOCAL_LABEL_PREFIX "."
+
+#define REGISTER_NAMES                                            \
+{                                                                 \
+  "$r0",  "$r1",  "$r2",  "$r3",  "$r4",  "$r5",  "$r6",  "$r7",  \
+  "$r8",  "$r9",  "$r10", "$r11", "$r12", "$r13", "$r14", "$ta",  \
+  "$r16", "$r17", "$r18", "$r19", "$r20", "$r21", "$r22", "$r23", \
+  "$r24", "$r25", "$r26", "$r27", "$fp",  "$gp",  "$lp",  "$sp",  \
+  "$AP",                                                          \
+  "$SFP"                                                          \
+}
+
+/* Output normal jump table entry.  */
+#define ASM_OUTPUT_ADDR_VEC_ELT(stream, value) \
+  asm_fprintf (stream, "\t.word\t%LL%d\n", value)
+
+/* Output pc relative jump table entry.  */
+#define ASM_OUTPUT_ADDR_DIFF_ELT(stream, body, value, rel)              \
+  do                                                                    \
+    {                                                                   \
+      switch (GET_MODE (body))                                          \
+        {                                                               \
+        case QImode:                                                    \
+          asm_fprintf (stream, "\t.byte\t.L%d-.L%d\n", value, rel);     \
+          break;                                                        \
+        case HImode:                                                    \
+          asm_fprintf (stream, "\t.short\t.L%d-.L%d\n", value, rel);    \
+          break;                                                        \
+        case SImode:                                                    \
+          asm_fprintf (stream, "\t.word\t.L%d-.L%d\n", value, rel);     \
+          break;                                                        \
+        default:                                                        \
+          gcc_unreachable();                                            \
+        }                                                               \
+    } while (0)
+
+/* We have to undef it first because elfos.h defined it earlier;
+   check gcc/config.gcc and gcc/config/elfos.h for more information.  */
+#undef  ASM_OUTPUT_CASE_LABEL
+#define ASM_OUTPUT_CASE_LABEL(stream, prefix, num, table)          \
+  do                                                               \
+    {                                                              \
+      asm_fprintf (stream, "\t! Jump Table Begin\n");              \
+      (*targetm.asm_out.internal_label) (stream, prefix, num);     \
+    } while (0)
+
+#define ASM_OUTPUT_CASE_END(stream, num, table)        \
+  do                                                   \
+    {                                                  \
+      /* Because our jump table is in the text section, \
+         we need to ensure 2-byte alignment after       \
+         the jump table for instruction fetch.  */      \
+      if (GET_MODE (PATTERN (table)) == QImode)        \
+        ASM_OUTPUT_ALIGN (stream, 1);                  \
+      asm_fprintf (stream, "\t! Jump Table End\n");    \
+    }  while (0)
+
+/* This macro is not documented yet.
+   But we do need it to make the jump table vector aligned.  */
+#define ADDR_VEC_ALIGN(JUMPTABLE) 2
+
+#define DWARF2_UNWIND_INFO 1
+
+#define JUMP_ALIGN(x) \
+  (align_jumps_log ? align_jumps_log : nds32_target_alignment (x))
+
+#define LOOP_ALIGN(x) \
+  (align_loops_log ? align_loops_log : nds32_target_alignment (x))
+
+#define LABEL_ALIGN(x) \
+  (align_labels_log ? align_labels_log : nds32_target_alignment (x))
+
+#define ASM_OUTPUT_ALIGN(stream, power) \
+  fprintf (stream, "\t.align\t%d\n", power)
+
+
+/* Controlling Debugging Information Format.  */
+
+#define PREFERRED_DEBUGGING_TYPE DWARF2_DEBUG
+
+#define DWARF2_DEBUGGING_INFO 1
+
+#define DWARF2_ASM_LINE_DEBUG_INFO 1
+
+
+/* Cross Compilation and Floating Point.  */
+
+
+/* Mode Switching Instructions.  */
+
+
+/* Defining target-specific uses of __attribute__.  */
+
+
+/* Emulating TLS.  */
+
+
+/* Defining coprocessor specifics for MIPS targets.  */
+
+
+/* Parameters for Precompiled Header Validity Checking.  */
+
+
+/* C++ ABI parameters.  */
+
+
+/* Adding support for named address spaces.  */
+
+
+/* Miscellaneous Parameters.  */
+
+/* This is the machine mode that elements of a jump-table should have.  */
+#define CASE_VECTOR_MODE Pmode
+
+/* Return the preferred mode for an addr_diff_vec when the minimum
+   and maximum offsets are known.  */
+#define CASE_VECTOR_SHORTEN_MODE(min_offset, max_offset, body)  \
+   ((min_offset < 0 || max_offset >= 0x2000) ? SImode           \
+   : (max_offset >= 100) ? HImode                               \
+   : QImode)
+
+/* Generate a pc-relative jump table when -fpic or -Os is given.  */
+#define CASE_VECTOR_PC_RELATIVE (flag_pic || optimize_size)
+
+/* Define this macro if operations between registers with integral mode
+   smaller than a word are always performed on the entire register.  */
+#define WORD_REGISTER_OPERATIONS
+
+/* A C expression indicating when insns that read memory in mem_mode,
+   an integral mode narrower than a word, set the bits outside of mem_mode
+   to be either the sign-extension or the zero-extension of the data read.  */
+#define LOAD_EXTEND_OP(MODE) ZERO_EXTEND
+
+/* The maximum number of bytes that a single instruction can move quickly
+   between memory and registers or between two memory locations.  */
+#define MOVE_MAX 4
+
+/* A C expression that is nonzero if on this machine the number of bits
+   actually used for the count of a shift operation is equal to the number
+   of bits needed to represent the size of the object being shifted.  */
+#define SHIFT_COUNT_TRUNCATED 1
+
+/* A C expression which is nonzero if on this machine it is safe to "convert"
+   an integer of 'inprec' bits to one of 'outprec' bits by merely operating
+   on it as if it had only 'outprec' bits.  */
+#define TRULY_NOOP_TRUNCATION(outprec, inprec) 1
+
+/* A C expression describing the value returned by a comparison operator with
+   an integral mode and stored by a store-flag instruction ('cstoremode4')
+   when the condition is true.  */
+#define STORE_FLAG_VALUE 1
+
+/* An alias for the machine mode for pointers.  */
+#define Pmode SImode
+
+/* An alias for the machine mode used for memory references to functions
+   being called, in call RTL expressions.  */
+#define FUNCTION_MODE SImode
+
+/* ------------------------------------------------------------------------ */