Patchwork Ping^3/repost: contribute Synopsys Designware ARC port: 3/3: remaining contents of config/arc

Submitter Joern Rennecke
Date Feb. 12, 2013, 12:49 p.m.
Message ID <20130212074950.dxv34574gcggc004-nzlynne@webmail.spamcop.net>
Download mbox | patch
Permalink /patch/219851/
State New
Headers show

Comments

Joern Rennecke - Feb. 12, 2013, 12:49 p.m.
In January, we got Steering Committee approval for the contribution of the
Synopsys DesignWare ARC port:
http://gcc.gnu.org/ml/gcc/2013-01/msg00094.html

But for the actual commit, I was told I still need approval from a global
reviewer first.

The patches as posted previously still apply cleanly to the current mainline
sources.  In the meantime, however, the people at Synopsys got to try the
port that I had modified according to previous feedback on the mailing list.
It turned out that previously harmless differences between the conditionalized
and unconditionalized patterns for PLUS / AND / IOR / XOR have become
a problem with the addition of the arc_ifcvt pass, which was done to
reduce the complexity of the branch shortening/if-conversion/scheduling
code.  I have reworked the patterns in question to use two modified/new
functions in arc.c, to make it simpler to keep them consistent.
This means I made some localized changes to arc.md, arc.c, and arc-protos.h
(arc_output_addsi, arc_output_commutative_cond_exec, addsi3_i, add_cond_exec,
commutative_cond_exec).
Therefore I re-post the modified parts of the port submission with this
ping.

libgcc:

2012-10-09  Joern Rennecke  <joern.rennecke@embecosm.com>

          * config.host (arc-*-elf*, arc*-*-linux-uclibc*): New configurations.
gcc:

2012-11-22  Joern Rennecke  <joern.rennecke@embecosm.com>
              Brendan Kehoe  <brendan@zen.org>

          * config.gcc (arc-*-elf*, arc*-*-linux-uclibc*): New configurations.
          * doc/install.texi (--with-cpu): Mention ARC.
          (arc-*-elf32): New paragraph.
          (arc-linux-uclibc): Likewise.
          * doc/md.texi (Machine Constraints): Add ARC part.
          * doc/invoke.texi (menu): Add ARC Options.
          (Machine Dependent Options) <ARC Options>: Add synopsis.
          (node ARC Options): Add.
          * doc/extend.texi (long_call / short_call attribute): Add ARC.

gcc/testsuite:

2012-11-22  Joern Rennecke  <joern.rennecke@embecosm.com>

          * gcc.c-torture/execute/20101011-1.c [__arc__] (DO_TEST): Define as 0.
          * gcc.dg/torture/pr37868.c: Also skip for arc*-*-*.
          * gcc.dg/stack-usage-1.c [__arc__] (SIZE): Define.

libstdc++-v3:

2012-08-16  Joern Rennecke  <joern.rennecke@embecosm.com>

          * acinclude.m4 (GLIBCXX_ENABLE_SJLJ_EXCEPTIONS): Also check for
          _Unwind_SjLj_Register when deciding if to set enable_sjlj_exceptions.
          * configure: Regenerate.

gcc:

2013-02-11  Saurabh Verma  <saurabh.verma@codito.com>
              Ramana Radhakrishnan  <ramana.radhakrishnan@codito.com>
              Joern Rennecke  <joern.rennecke@embecosm.com>
              Muhammad Khurram Riaz <khurram.riaz@arc.com>
              Brendan Kehoe  <brendan@zen.org>
              Michael Eager  <eager@eagercon.com>

          * config/arc, common/config/arc: New directories.

gcc/testsuite:

2012-08-28  Joern Rennecke  <joern.rennecke@embecosm.com>

          * gcc.target/arc: New directory.

libgcc:

2012-10-18  Joern Rennecke  <joern.rennecke@embecosm.com>
              Brendan Kehoe  <brendan@zen.org>

          * libgcc/config/arc: New directory.

config/arc/arc.c:
http://gcc.gnu.org/ml/gcc-patches/2013-02/msg00516.html
config/arc/arc.md:
http://gcc.gnu.org/ml/gcc-patches/2013-02/msg00521.html
remaining contents of config/arc: See attachment.

Plus the other stuff from here:

http://gcc.gnu.org/ml/gcc-patches/2012-11/msg01891.html

Patch

diff -Nu --exclude arc.c --exclude arc.md emptydir/arc600.md config/arc/arc600.md
--- emptydir/arc600.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc600.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,63 @@ 
+;; DFA scheduling description of the Synopsys DesignWare ARC600 cpu
+;; for GNU C compiler
+;; Copyright (C) 2007-2012 Free Software Foundation, Inc.
+;; Contributor: Joern Rennecke <joern.rennecke@embecosm.com>
+;;              on behalf of Synopsys Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_automaton "ARC600")
+
+(define_cpu_unit "issue_600" "ARC600")
+(define_cpu_unit "mul64_600" "ARC600")
+
+; Latency from flag-setting insns to branches is 3.
+(define_insn_reservation "compare_600" 3
+  (and (eq_attr "tune" "arc600")
+       (eq_attr "type" "compare"))
+  "issue_600")
+
+(define_insn_reservation "load_DI_600" 4
+  (and (eq_attr "tune" "arc600")
+       (eq_attr "type" "load")
+       (match_operand:DI 0 "" ""))
+  "issue_600")
+
+(define_insn_reservation "load_600" 3
+  (and (eq_attr "tune" "arc600")
+       (eq_attr "type" "load")
+       (not (match_operand:DI 0 "" "")))
+  "issue_600")
+
+(define_insn_reservation "mul_600_fast" 3
+  (and (eq_attr "tune" "arc600")
+       (match_test "arc_multcost < COSTS_N_INSNS (7)")
+       (eq_attr "type" "multi,umulti"))
+  "mul64_600*3")
+
+(define_insn_reservation "mul_600_slow" 8
+  (and (eq_attr "tune" "arc600")
+       (match_test "arc_multcost >= COSTS_N_INSNS (7)")
+       (eq_attr "type" "multi,umulti"))
+  "mul64_600*8")
+
+(define_insn_reservation "mul_mac_600" 3
+  (and (eq_attr "tune" "arc600")
+       (eq_attr "type" "mulmac_600"))
+  "nothing*3")
+
+(define_bypass 1 "mul_mac_600" "mul_mac_600")
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc700.md config/arc/arc700.md
--- emptydir/arc700.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc700.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,170 @@ 
+;; DFA scheduling description of the Synopsys DesignWare ARC700 cpu
+;; for GNU C compiler
+;;    Comments and Support For ARC700 instructions added by
+;;    Saurabh Verma (saurabh.verma@codito.com)
+;;    Ramana Radhakrishnan(ramana.radhakrishnan@codito.com)
+;;    Factoring out and improvement of ARC700 Scheduling by
+;;    Joern Rennecke (joern.rennecke@embecosm.com)
+;; Copyright (C) 2006-2012 Free Software Foundation, Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_automaton "ARC700")
+
+;; aux to be added here
+(define_cpu_unit "core, dmp,  write_port, dmp_write_port, multiplier, issue, blockage, simd_unit" "ARC700")
+
+(define_insn_reservation "core_insn_DI" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "unary, move, cmove, binary")
+       (match_operand:DI 0 "" ""))
+  "issue+core, issue+core+write_port, write_port")
+
+(define_insn_reservation "lr" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "lr"))
+  "issue+blockage, blockage*2, write_port")
+
+(define_insn_reservation "sr" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "sr"))
+  "issue+dmp_write_port+blockage, blockage*9")
+
+(define_insn_reservation "core_insn" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "unary, move, binary"))
+  "issue+core, nothing, write_port")
+
+(define_insn_reservation "cmove" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "cmove"))
+  "issue+core, nothing, write_port")
+
+(define_insn_reservation "cc_arith" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "cc_arith"))
+  "issue+core, nothing, write_port")
+
+(define_insn_reservation "two_cycle_core_insn" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "two_cycle_core"))
+  "issue+core, nothing, write_port")
+
+(define_insn_reservation "divaw_insn" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "divaw"))
+  "issue+core, nothing, write_port")
+
+(define_insn_reservation "shift_insn" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "shift"))
+  "issue+core, nothing, write_port")
+
+; Latency from flag setters to arithmetic with carry is 3.
+(define_insn_reservation "compare_700" 3
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "compare"))
+  "issue+core, nothing, write_port")
+
+; Assume here the branch is predicted correctly and has a delay slot insn
+; or is properly unaligned.
+(define_insn_reservation "branch_700" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "compare"))
+  "issue+core, nothing, write_port")
+
+; TODO: is this correct?
+(define_insn_reservation "multi_DI" 10
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "multi")
+       (match_operand:DI 0 "" ""))
+  "issue+multiplier, multiplier*2,issue+multiplier, multiplier*2,
+   nothing,write_port,nothing*2, write_port")
+
+(define_insn_reservation "umulti_DI" 9
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "umulti")
+       (match_operand:DI 0 "" ""))
+  "issue+multiplier, multiplier,issue+multiplier, multiplier*2,
+   write_port,nothing*3, write_port")
+
+(define_insn_reservation "umulti_xmac" 5
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "umulti"))
+  "issue+multiplier, multiplier, nothing*3, write_port")
+
+; latency of mpyu is lower than mpy / mpyh / mpyhu
+(define_insn_reservation "umulti_std" 6
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "umulti"))
+  "issue+multiplier, multiplier*3, nothing*2, write_port")
+
+;; arc700 xmac multiplier
+(define_insn_reservation "multi_xmac" 5
+  (and (eq_attr "tune" "arc700_4_2_xmac")
+       (eq_attr "type" "multi"))
+  "issue+multiplier,multiplier,nothing*3,write_port")
+
+; arc700 standard multiplier
+(define_insn_reservation "multi_std" 7
+  (and (eq_attr "tune" "arc700_4_2_std")
+       (eq_attr "type" "multi"))
+  "issue+multiplier,multiplier*4,nothing*2,write_port")
+
+;(define_insn_reservation "multi_SI" 7
+;       (eq_attr "type" "multi")
+;  "issue+multiplier, multiplier*2, nothing*4, write_port")
+
+; There is no multiplier -> multiplier bypass except for the
+; mac -> mac dependency on the accumulator.
+
+; divaw -> divaw latency is 1 cycle
+(define_bypass 1 "divaw_insn" "divaw_insn")
+
+(define_bypass 1 "compare_700" "branch_700,core_insn,data_store,data_load")
+
+; We could schedule the cmove immediately after the compare, but then
+; the cmove would have higher latency... so just keep the cmove apart
+; from the compare.
+(define_bypass 2 "compare_700" "cmove")
+
+; no functional unit runs when blockage is reserved
+(exclusion_set "blockage" "core, multiplier")
+
+(define_insn_reservation "data_load_DI" 4
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "load")
+       (match_operand:DI 0 "" ""))
+  "issue+dmp, issue+dmp, dmp_write_port, dmp_write_port")
+
+(define_insn_reservation "data_load" 3
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "load")
+       (not (match_operand:DI 0 "" "")))
+  "issue+dmp, nothing, dmp_write_port")
+
+(define_insn_reservation "data_store_DI" 2
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "store")
+       (match_operand:DI 0 "" ""))
+  "issue+dmp_write_port, issue+dmp_write_port")
+
+(define_insn_reservation "data_store" 1
+  (and (eq_attr "tune_arc700" "true")
+       (eq_attr "type" "store")
+       (not (match_operand:DI 0 "" "")))
+  "issue+dmp_write_port")
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc.h config/arc/arc.h
--- emptydir/arc.h	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc.h	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,1683 @@ 
+/* Definitions of target machine for GNU compiler, Synopsys DesignWare ARC cpu.
+   Copyright (C) 1994, 1995, 1997, 1998, 2007-2012
+   Free Software Foundation, Inc.
+
+   Sources derived from work done by Sankhya Technologies (www.sankhya.com) on
+   behalf of Synopsys Inc.
+
+   Position Independent Code support added, code cleaned up,
+   Comments and Support For ARC700 instructions added by
+   Saurabh Verma (saurabh.verma@codito.com)
+   Ramana Radhakrishnan(ramana.radhakrishnan@codito.com)
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifndef GCC_ARC_H
+#define GCC_ARC_H
+
+/* Things to do:
+
+   - incscc, decscc?
+
+*/
+
+#define SYMBOL_FLAG_SHORT_CALL	(SYMBOL_FLAG_MACH_DEP << 0)
+#define SYMBOL_FLAG_LONG_CALL	(SYMBOL_FLAG_MACH_DEP << 1)
+
+/* Check if this symbol has a long_call attribute in its declaration */
+#define SYMBOL_REF_LONG_CALL_P(X)	\
+	((SYMBOL_REF_FLAGS (X) & SYMBOL_FLAG_LONG_CALL) != 0)
+
+/* Check if this symbol has a short_call attribute in its declaration */
+#define SYMBOL_REF_SHORT_CALL_P(X)	\
+	((SYMBOL_REF_FLAGS (X) & SYMBOL_FLAG_SHORT_CALL) != 0)
+
+#undef ASM_SPEC
+#undef LINK_SPEC
+#undef STARTFILE_SPEC
+#undef ENDFILE_SPEC
+#undef SIZE_TYPE
+#undef PTRDIFF_TYPE
+#undef WCHAR_TYPE
+#undef WCHAR_TYPE_SIZE
+#undef ASM_APP_ON
+#undef ASM_APP_OFF
+#undef CC1_SPEC
+
+/* Names to predefine in the preprocessor for this target machine.  */
+#define TARGET_CPU_CPP_BUILTINS()	\
+ do {					\
+    builtin_define ("__arc__");		\
+    if (TARGET_A5)			\
+      builtin_define ("__A5__");	\
+    else if (TARGET_ARC600)			\
+      {					\
+	builtin_define ("__A6__");	\
+	builtin_define ("__ARC600__");	\
+      }					\
+    else if (TARGET_ARC601)			\
+      {					\
+	builtin_define ("__ARC601__");	\
+      }					\
+    else if (TARGET_ARC700)			\
+      {					\
+	builtin_define ("__A7__");	\
+	builtin_define ("__ARC700__");	\
+      }					\
+    if (TARGET_NORM)			\
+      {					\
+	builtin_define ("__ARC_NORM__");\
+	builtin_define ("__Xnorm");	\
+      }					\
+    if (TARGET_MUL64_SET)		\
+      builtin_define ("__ARC_MUL64__");\
+    if (TARGET_MULMAC_32BY16_SET)	\
+      builtin_define ("__ARC_MUL32BY16__");\
+    if (TARGET_SIMD_SET)        	\
+      builtin_define ("__ARC_SIMD__");	\
+    if (TARGET_BARREL_SHIFTER)		\
+      builtin_define ("__Xbarrel_shifter");\
+    builtin_assert ("cpu=arc");		\
+    builtin_assert ("machine=arc");	\
+    builtin_define (TARGET_BIG_ENDIAN	\
+		    ? "__BIG_ENDIAN__" : "__LITTLE_ENDIAN__"); \
+    if (TARGET_BIG_ENDIAN)		\
+      builtin_define ("__big_endian__"); \
+} while(0)
+
+/* Match the macros used in the assembler.  */
+#define CPP_SPEC "\
+%{msimd:-D__Xsimd} %{mno-mpy:-D__Xno_mpy} %{mswap:-D__Xswap} \
+%{mmin_max:-D__Xmin_max} %{mEA:-D__Xea} \
+%{mspfp*:-D__Xspfp} %{mdpfp*:-D__Xdpfp} \
+%{mmac_d16:-D__Xxmac_d16} %{mmac_24:-D__Xxmac_24} \
+%{mdsp_packa:-D__Xdsp_packa} %{mcrc:-D__Xcrc} %{mdvbf:-D__Xdvbf} \
+%{mtelephony:-D__Xtelephony} %{mxy:-D__Xxy} %{mmul64: -D__Xmult32} \
+%{mlock:-D__Xlock} %{mswape:-D__Xswape} %{mrtsc:-D__Xrtsc} \
+"
+
+#define CC1_SPEC "\
+%{EB:%{EL:%emay not use both -EB and -EL}} \
+%{EB:-mbig-endian} %{EL:-mlittle-endian} \
+"
+#define ASM_SPEC  "\
+%{mbig-endian|EB:-EB} %{EL} \
+%{mcpu=A5|mcpu=a5|mA5:-mA5} \
+%{mcpu=ARC600|mcpu=arc600|mARC600|mA6:-mARC600} \
+%{mcpu=ARC601|mcpu=arc601:-mARC601} \
+%{mcpu=ARC700|mcpu=arc700|mARC700|mA7:-mARC700} \
+%{mcpu=ARC700|mcpu=arc700|mARC700|mA7:-mEA} \
+%{!mcpu=*:%{!A5:%{!A6:%{!mARC600:%{!mARC700:-mARC700 -mEA}}}}} \
+%{mbarrel_shifter} %{mno-mpy} %{mmul64} %{mmul32x16:-mdsp} %{mnorm} %{mswap} \
+%{mEA} %{mmin_max} %{mspfp*} %{mdpfp*} \
+%{msimd} \
+%{mmac_d16} %{mmac_24} %{mdsp_packa} %{mcrc} %{mdvbf} %{mtelephony} %{mxy} \
+%{mcpu=ARC700|mARC700|mA7:%{mlock}} \
+%{mcpu=ARC700|mARC700|mA7:%{mswape}} \
+%{mcpu=ARC700|mARC700|mA7:%{mrtsc}} \
+"
+
+#if DEFAULT_LIBC == LIBC_UCLIBC
+/* Note that the default is to link against dynamic libraries, if they are
+   available.  Override with -static.  */
+#define LINK_SPEC "%{h*} \
+		   %{static:-Bstatic} \
+		   %{symbolic:-Bsymbolic} \
+		   %{rdynamic:-export-dynamic}\
+		   -dynamic-linker /lib/ld-uClibc.so.0 \
+		   -X %{mbig-endian:-EB} \
+		   %{EB} %{EL} \
+		   %{marclinux*} \
+		   %{!marclinux*: %{pg|p|profile:-marclinux_prof;: -marclinux}} \
+		   %{!z:-z max-page-size=0x1000 -z common-page-size=0x1000} \
+		   %{shared:-shared}"
+/* Like the standard LINK_COMMAND_SPEC, but add %G when building
+   a shared library with -nostdlib, so that the hidden functions of libgcc
+   will be incorporated.
+   N.B., we don't want a plain -lgcc, as this would lead to re-exporting
+   non-hidden functions, so we have to consider libgcc_s.so.* first, which in
+   turn should be wrapped with --as-needed.  */
+#define LINK_COMMAND_SPEC "\
+%{!fsyntax-only:%{!c:%{!M:%{!MM:%{!E:%{!S:\
+    %(linker) %l " LINK_PIE_SPEC "%X %{o*} %{A} %{d} %{e*} %{m} %{N} %{n} %{r}\
+    %{s} %{t} %{u*} %{x} %{z} %{Z} %{!A:%{!nostdlib:%{!nostartfiles:%S}}}\
+    %{static:} %{L*} %(mfwrap) %(link_libgcc) %o\
+    %{fopenmp:%:include(libgomp.spec)%(link_gomp)} %(mflib)\
+    %{fprofile-arcs|fprofile-generate|coverage:-lgcov}\
+    %{!nostdlib:%{!nodefaultlibs:%(link_ssp) %(link_gcc_c_sequence)}}\
+    %{shared:%{nostdlib:%{!really-nostdlib: %G }}} \
+    %{!A:%{!nostdlib:%{!nostartfiles:%E}}} %{T*} }}}}}}"
+
+#else
+#define LINK_SPEC "%{mbig-endian:-EB} %{EB} %{EL}\
+  %{pg|p:-marcelf_prof;mA7|mARC700|mcpu=arc700|mcpu=ARC700: -marcelf}"
+#endif
+
+#if DEFAULT_LIBC != LIBC_UCLIBC
+#define STARTFILE_SPEC "%{!shared:crt0.o%s} crti%O%s %{pg|p:crtg.o%s} crtbegin.o%s"
+#else
+#define STARTFILE_SPEC   "%{!shared:%{!mkernel:crt1.o%s}} crti.o%s \
+  %{!shared:%{pg|p|profile:crtg.o%s} crtbegin.o%s} %{shared:crtbeginS.o%s}"
+
+#endif
+
+#if DEFAULT_LIBC != LIBC_UCLIBC
+#define ENDFILE_SPEC "%{pg|p:crtgend.o%s} crtend.o%s crtn%O%s"
+#else
+#define ENDFILE_SPEC "%{!shared:%{pg|p|profile:crtgend.o%s} crtend.o%s} \
+  %{shared:crtendS.o%s} crtn.o%s"
+
+#endif
+
+#if DEFAULT_LIBC == LIBC_UCLIBC
+#undef LIB_SPEC
+#define LIB_SPEC  \
+  "%{pthread:-lpthread} \
+   %{shared:-lc} \
+   %{!shared:%{pg|p|profile:-lgmon -u profil --defsym __profil=profil} -lc}"
+#else
+#undef LIB_SPEC
+/* -lc_p not present for arc-elf32-* : ashwin */
+#define LIB_SPEC "%{!shared:%{g*:-lg} %{pg|p:-lgmon} -lc}"
+#endif
+
+#ifndef DRIVER_ENDIAN_SELF_SPECS
+#define DRIVER_ENDIAN_SELF_SPECS ""
+#endif
+
+#define DRIVER_SELF_SPECS DRIVER_ENDIAN_SELF_SPECS \
+  "%{mARC5:-mcpu=A5 %<mA5}" \
+  "%{mARC600:-mcpu=ARC600 %<mARC600}" \
+  "%{mARC601:-mcpu=ARC601 %<mARC601}" \
+  "%{mARC700:-mcpu=ARC700 %<mARC700}"
+
+/* Run-time compilation parameters selecting different hardware subsets.  */
+
+#define TARGET_MIXED_CODE (TARGET_MIXED_CODE_SET)
+
+#define TARGET_SPFP (TARGET_SPFP_FAST_SET || TARGET_SPFP_COMPACT_SET)
+#define TARGET_DPFP (TARGET_DPFP_FAST_SET || TARGET_DPFP_COMPACT_SET)
+
+#define SUBTARGET_SWITCHES
+
+/* Instruction set characteristics.
+   These are internal macros, set by the appropriate -m option.  */
+
+/* Non-zero means the cpu supports norm instruction.  This flag is set by
+   default for A7, and only for pre A7 cores when -mnorm is given.  */
+#define TARGET_NORM (TARGET_ARC700 || TARGET_NORM_SET)
+/* Indicate if an optimized floating point emulation library is available.  */
+#define TARGET_OPTFPE \
+ (TARGET_ARC700 \
+  || ((TARGET_ARC600 || TARGET_ARC601) && TARGET_NORM_SET \
+      && (TARGET_MUL64_SET || TARGET_MULMAC_32BY16_SET)))
+
+/* Non-zero means the cpu supports swap instruction.  This flag is set by
+   default for A7, and only for pre A7 cores when -mswap is given.  */
+#define TARGET_SWAP (TARGET_ARC700 || TARGET_SWAP_SET)
+
+/* Provide some macros for size / scheduling features of the ARC700, so
+   that we can pick & choose features if we get a new cpu family member.  */
+
+/* Should we try to unalign likely taken branches without a delay slot.  */
+#define TARGET_UNALIGN_BRANCH (TARGET_ARC700 && !optimize_size)
+
+/* Should we upsize short delayed branches with a short delay insn?  */
+#define TARGET_UPSIZE_DBR (TARGET_ARC700 && !optimize_size)
+
+/* Should we add padding before a return insn to avoid mispredict?  */
+#define TARGET_PAD_RETURN (TARGET_ARC700 && !optimize_size)
+
+/* For an annulled-true delay slot insn for a delayed branch, should we only
+   use conditional execution?  */
+#define TARGET_AT_DBR_CONDEXEC  (!TARGET_ARC700)
+
+#define TARGET_A5 (arc_cpu == PROCESSOR_A5)
+#define TARGET_ARC600 (arc_cpu == PROCESSOR_ARC600)
+#define TARGET_ARC601 (arc_cpu == PROCESSOR_ARC601)
+#define TARGET_ARC700 (arc_cpu == PROCESSOR_ARC700)
+
+/* Recast the cpu class to be the cpu attribute.  */
+#define arc_cpu_attr ((enum attr_cpu)arc_cpu)
+
+#ifndef MULTILIB_DEFAULTS
+#define MULTILIB_DEFAULTS { "mARC700" }
+#endif
+
+/* Target machine storage layout.  */
+
+/* We want zero_extract to mean the same
+   no matter what the byte endianness is.  */
+#define BITS_BIG_ENDIAN 0
+
+/* Define this if most significant byte of a word is the lowest numbered.  */
+#define BYTES_BIG_ENDIAN (TARGET_BIG_ENDIAN)
+
+/* Define this if most significant word of a multiword number is the lowest
+   numbered.  */
+#define WORDS_BIG_ENDIAN (TARGET_BIG_ENDIAN)
+
+/* Number of bits in an addressable storage unit.  */
+#define BITS_PER_UNIT 8
+
+/* Width in bits of a "word", which is the contents of a machine register.
+   Note that this is not necessarily the width of data type `int';
+   if using 16-bit ints on a 68000, this would still be 32.
+   But on a machine with 16-bit registers, this would be 16.  */
+#define BITS_PER_WORD 32
+
+/* Width of a word, in units (bytes).  */
+#define UNITS_PER_WORD 4
+
+/* Define this macro if it is advisable to hold scalars in registers
+   in a wider mode than that declared by the program.  In such cases,
+   the value is constrained to be within the bounds of the declared
+   type, but kept valid in the wider mode.  The signedness of the
+   extension may differ from that of the type.  */
+#define PROMOTE_MODE(MODE,UNSIGNEDP,TYPE) \
+if (GET_MODE_CLASS (MODE) == MODE_INT		\
+    && GET_MODE_SIZE (MODE) < UNITS_PER_WORD)	\
+{						\
+  (MODE) = SImode;				\
+}
+
+/* Width in bits of a pointer.
+   See also the macro `Pmode' defined below.  */
+#define POINTER_SIZE 32
+
+/* Allocation boundary (in *bits*) for storing arguments in argument list.  */
+#define PARM_BOUNDARY 32
+
+/* Boundary (in *bits*) on which stack pointer should be aligned.  */
+/* TOCHECK: Changed from 64 to 32 */
+#define STACK_BOUNDARY 32
+
+/* ALIGN FRAMES on word boundaries.  */
+#define ARC_STACK_ALIGN(LOC) \
+  (((LOC) + STACK_BOUNDARY / BITS_PER_UNIT - 1) & -STACK_BOUNDARY/BITS_PER_UNIT)
+
+/* Allocation boundary (in *bits*) for the code of a function.  */
+#define FUNCTION_BOUNDARY 32
+
+/* Alignment of field after `int : 0' in a structure.  */
+#define EMPTY_FIELD_BOUNDARY 32
+
+/* Every structure's size must be a multiple of this.  */
+#define STRUCTURE_SIZE_BOUNDARY 8
+
+/* A bitfield declared as `int' forces `int' alignment for the struct.  */
+#define PCC_BITFIELD_TYPE_MATTERS 1
+
+/* An expression for the alignment of a structure field FIELD if the
+   alignment computed in the usual way (including applying of
+   `BIGGEST_ALIGNMENT' and `BIGGEST_FIELD_ALIGNMENT' to the
+   alignment) is COMPUTED.  It overrides alignment only if the field
+   alignment has not been set by the `__attribute__ ((aligned (N)))'
+   construct.
+*/
+
+#define ADJUST_FIELD_ALIGN(FIELD, COMPUTED) \
+(TYPE_MODE (strip_array_types (TREE_TYPE (FIELD))) == DFmode \
+ ? MIN ((COMPUTED), 32) : (COMPUTED))
+
+
+
+/* No data type wants to be aligned rounder than this.  */
+/* This is bigger than currently necessary for the ARC.  If 8 byte floats are
+   ever added it's not clear whether they'll need such alignment or not.  For
+   now we assume they will.  We can always relax it if necessary but the
+   reverse isn't true.  */
+/* TOCHECK: Changed from 64 to 32 */
+#define BIGGEST_ALIGNMENT 32
+
+/* The best alignment to use in cases where we have a choice.  */
+#define FASTEST_ALIGNMENT 32
+
+/* Make strings word-aligned so strcpy from constants will be faster.  */
+#define CONSTANT_ALIGNMENT(EXP, ALIGN)  \
+  ((TREE_CODE (EXP) == STRING_CST	\
+    && (ALIGN) < FASTEST_ALIGNMENT)	\
+   ? FASTEST_ALIGNMENT : (ALIGN))
+
+
+/* Make arrays of chars word-aligned for the same reasons.  */
+#define LOCAL_ALIGNMENT(TYPE, ALIGN)             \
+  (TREE_CODE (TYPE) == ARRAY_TYPE               \
+   && TYPE_MODE (TREE_TYPE (TYPE)) == QImode    \
+   && (ALIGN) < FASTEST_ALIGNMENT ? FASTEST_ALIGNMENT : (ALIGN))
+
+#define DATA_ALIGNMENT(TYPE, ALIGN)		\
+  (TREE_CODE (TYPE) == ARRAY_TYPE		\
+   && TYPE_MODE (TREE_TYPE (TYPE)) == QImode	\
+   && arc_size_opt_level < 3			\
+   && (ALIGN) < FASTEST_ALIGNMENT ? FASTEST_ALIGNMENT : (ALIGN))
+
+/* Set this nonzero if move instructions will actually fail to work
+   when given unaligned data.  */
+/* On the ARC the lower address bits are masked to 0 as necessary.  The chip
+   won't croak when given an unaligned address, but the insn will still fail
+   to produce the correct result.  */
+#define STRICT_ALIGNMENT 1
+
+/* Layout of source language data types.  */
+
+#define SHORT_TYPE_SIZE		16
+#define INT_TYPE_SIZE		32
+#define LONG_TYPE_SIZE		32
+#define LONG_LONG_TYPE_SIZE	64
+#define FLOAT_TYPE_SIZE		32
+#define DOUBLE_TYPE_SIZE	64
+#define LONG_DOUBLE_TYPE_SIZE	64
+
+/* Define this as 1 if `char' should by default be signed; else as 0.  */
+#define DEFAULT_SIGNED_CHAR 0
+
+#define SIZE_TYPE "long unsigned int"
+#define PTRDIFF_TYPE "long int"
+#define WCHAR_TYPE "int"
+#define WCHAR_TYPE_SIZE 32
+
+
+/* ashwin : shifted from arc.c:102 */
+#define PROGRAM_COUNTER_REGNO 63
+
+/* Standard register usage.  */
+
+/* Number of actual hardware registers.
+   The hardware registers are assigned numbers for the compiler
+   from 0 to just below FIRST_PSEUDO_REGISTER.
+   All registers that the compiler knows about must be given numbers,
+   even those that are not normally considered general registers.
+
+   Registers 61, 62, and 63 are not really registers and we needn't treat
+   them as such.  We still need a register for the condition code and
+   argument pointer.  */
+
+/* r63 is pc, r64-r127 = simd vregs, r128-r143 = simd dma config regs
+   r144, r145 = lp_start, lp_end
+   and therefore the pseudo registers start from r146. */
+#define FIRST_PSEUDO_REGISTER 146
+
+/* 1 for registers that have pervasive standard uses
+   and are not available for the register allocator.
+
+   0-28  - general purpose registers
+   29    - ilink1 (interrupt link register)
+   30    - ilink2 (interrupt link register)
+   31    - blink (branch link register)
+   32-59 - reserved for extensions
+   60    - LP_COUNT
+   61    - condition code
+   62    - argument pointer
+   63    - program counter
+
+   FWIW, this is how the 61-63 encodings are used by the hardware:
+   61    - reserved
+   62    - long immediate data indicator
+   63    - PCL (program counter aligned to 32 bit, read-only)
+
+   The general purpose registers are further broken down into:
+
+   0-7   - arguments/results
+   8-12  - call used (r11 - static chain pointer)
+   13-25 - call saved
+   26    - global pointer
+   27    - frame pointer
+   28    - stack pointer
+   29    - ilink1
+   30    - ilink2
+   31    - return address register
+
+   By default, the extension registers are not available.  */
+/* Present implementations have VR0-VR23 only.  */
+/* ??? FIXME: r27 and r31 should not be fixed registers.  */
+#define FIXED_REGISTERS \
+{ 0, 0, 0, 0, 0, 0, 0, 0,	\
+  0, 0, 0, 0, 0, 0, 0, 0,	\
+  0, 0, 0, 0, 0, 0, 0, 0,	\
+  0, 0, 1, 1, 1, 1, 1, 1,	\
+				\
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  0, 0, 0, 0, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 0, 1, 1, 1,       \
+				\
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+				\
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+				\
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,	\
+  1, 1}
+
+/* 1 for registers not available across function calls.
+   These must include the FIXED_REGISTERS and also any
+   registers that can be used without being saved.
+   The latter must include the registers where values are returned
+   and the register where structure-value addresses are passed.
+   Aside from that, you can include as many other registers as you like.  */
+#define CALL_USED_REGISTERS     \
+{                               \
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 1, 0, 0, 0,	\
+  0, 0, 0, 0, 0, 0, 0, 0,	\
+  0, 0, 1, 1, 1, 1, 1, 1,	\
+				\
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 1, 1, 1, 1,	\
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+				\
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+				\
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+  1, 1, 1, 1, 1, 1, 1, 1,       \
+				\
+  0, 0, 0, 0, 0, 0, 0, 0,       \
+  0, 0, 0, 0, 0, 0, 0, 0,	\
+  1, 1}
+
+/* If defined, an initializer for a vector of integers, containing the
+   numbers of hard registers in the order in which GCC should
+   prefer to use them (from most preferred to least).  */
+#define REG_ALLOC_ORDER \
+{ 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 0, 1,			\
+  16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 				\
+  32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,	\
+  48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62,		\
+  27, 28, 29, 30, 31, 63}
+
+/* Return number of consecutive hard regs needed starting at reg REGNO
+   to hold something of mode MODE.
+   This is ordinarily the length in words of a value of mode MODE
+   but can be less for certain modes in special long registers.  */
+#define HARD_REGNO_NREGS(REGNO, MODE) \
+((GET_MODE_SIZE (MODE) == 16 && REGNO >= 64 && REGNO < 88) ? 1 \
+ : (GET_MODE_SIZE (MODE) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)
+
+/* Value is 1 if hard register REGNO can hold a value of machine-mode MODE.  */
+extern unsigned int arc_hard_regno_mode_ok[];
+extern unsigned int arc_mode_class[];
+#define HARD_REGNO_MODE_OK(REGNO, MODE) \
+((arc_hard_regno_mode_ok[REGNO] & arc_mode_class[MODE]) != 0)
+
+/* A C expression that is nonzero if it is desirable to choose
+   register allocation so as to avoid move instructions between a
+   value of mode MODE1 and a value of mode MODE2.
+
+   If `HARD_REGNO_MODE_OK (R, MODE1)' and `HARD_REGNO_MODE_OK (R,
+   MODE2)' are ever different for any R, then `MODES_TIEABLE_P (MODE1,
+   MODE2)' must be zero.  */
+
+/* Tie QI/HI/SI modes together.  */
+#define MODES_TIEABLE_P(MODE1, MODE2) \
+(GET_MODE_CLASS (MODE1) == MODE_INT		\
+ && GET_MODE_CLASS (MODE2) == MODE_INT		\
+ && GET_MODE_SIZE (MODE1) <= UNITS_PER_WORD	\
+ && GET_MODE_SIZE (MODE2) <= UNITS_PER_WORD)
+
+/* Internal macros to classify a register number as to whether it's a
+   general purpose register for compact insns (r0-r3,r12-r15), or
+   stack pointer (r28).  */
+
+#define COMPACT_GP_REG_P(REGNO) \
+   (((signed)(REGNO) >= 0 && (REGNO) <= 3) || ((REGNO) >= 12 && (REGNO) <= 15))
+#define SP_REG_P(REGNO)  ((REGNO) == 28)
+
+
+
+/* Register classes and constants.  */
+
+/* Define the classes of registers for register constraints in the
+   machine description.  Also define ranges of constants.
+
+   One of the classes must always be named ALL_REGS and include all hard regs.
+   If there is more than one class, another class must be named NO_REGS
+   and contain no registers.
+
+   The name GENERAL_REGS must be the name of a class (or an alias for
+   another name such as ALL_REGS).  This is the class of registers
+   that is allowed by "g" or "r" in a register constraint.
+   Also, registers outside this class are allocated only when
+   instructions express preferences for them.
+
+   The classes must be numbered in nondecreasing order; that is,
+   a larger-numbered class must never be contained completely
+   in a smaller-numbered class.
+
+   For any two classes, it is very desirable that there be another
+   class that represents their union.
+
+   It is important that any condition codes have class NO_REGS.
+   See `register_operand'.  */
+
+enum reg_class
+{
+   NO_REGS,
+   R0_REG,			/* 'x' */
+   GP_REG,			/* 'Rgp' */
+   FP_REG,			/* 'f' */
+   SP_REGS,			/* 'b' */
+   LPCOUNT_REG, 		/* 'l' */
+   LINK_REGS,	 		/* 'k' */
+   DOUBLE_REGS,			/* D0, D1 */
+   SIMD_VR_REGS,		/* VR00-VR63 */
+   SIMD_DMA_CONFIG_REGS,	/* DI0-DI7,DO0-DO7 */
+   ARCOMPACT16_REGS,		/* 'q' */
+   AC16_BASE_REGS,  		/* 'e' */
+   SIBCALL_REGS,		/* "Rsc" */
+   GENERAL_REGS,		/* 'r' */
+   MPY_WRITABLE_CORE_REGS,	/* 'W' */
+   WRITABLE_CORE_REGS,		/* 'w' */
+   CHEAP_CORE_REGS,		/* 'c' */
+   ALL_CORE_REGS,		/* 'Rac' */
+   ALL_REGS,
+   LIM_REG_CLASSES
+};
+
+#define N_REG_CLASSES (int) LIM_REG_CLASSES
+
+/* Give names of register classes as strings for dump file.   */
+#define REG_CLASS_NAMES	  \
+{                         \
+  "NO_REGS",           	  \
+  "R0_REG",            	  \
+  "GP_REG",            	  \
+  "FP_REG",            	  \
+  "SP_REGS",		  \
+  "LPCOUNT_REG",	  \
+  "LINK_REGS",         	  \
+  "DOUBLE_REGS",          \
+  "SIMD_VR_REGS",         \
+  "SIMD_DMA_CONFIG_REGS", \
+  "ARCOMPACT16_REGS",  	  \
+  "AC16_BASE_REGS",       \
+  "SIBCALL_REGS",	  \
+  "GENERAL_REGS",      	  \
+  "MPY_WRITABLE_CORE_REGS",   \
+  "WRITABLE_CORE_REGS",   \
+  "CHEAP_CORE_REGS",	  \
+  "ALL_CORE_REGS",	  \
+  "ALL_REGS"          	  \
+}
+
+/* Define which registers fit in which classes.
+   This is an initializer for a vector of HARD_REG_SET
+   of length N_REG_CLASSES.  */
+
+#define REG_CLASS_CONTENTS \
+{													\
+  {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000},	     /* No Registers */			\
+  {0x00000001, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'x', r0 register */	\
+  {0x04000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'Rgp', Global Pointer, r26 */	\
+  {0x08000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'f', Frame Pointer, r27 */	\
+  {0x10000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'b', Stack Pointer, r28 */	\
+  {0x00000000, 0x10000000, 0x00000000, 0x00000000, 0x00000000},      /* 'l', LPCOUNT Register, r60 */	\
+  {0xe0000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'k', LINK Registers, r29-r31 */	\
+  {0x00000000, 0x00000f00, 0x00000000, 0x00000000, 0x00000000},      /* 'D', D1, D2 Registers */	\
+  {0x00000000, 0x00000000, 0xffffffff, 0xffffffff, 0x00000000},      /* 'V', VR00-VR63 Registers */	\
+  {0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x0000ffff},      /* 'V', DI0-7,DO0-7 Registers */	\
+  {0x0000f00f, 0x00000000, 0x00000000, 0x00000000, 0x00000000},	     /* 'q', r0-r3, r12-r15 */		\
+  {0x1000f00f, 0x00000000, 0x00000000, 0x00000000, 0x00000000},	     /* 'e', r0-r3, r12-r15, sp */	\
+  {0x1c001fff, 0x00000000, 0x00000000, 0x00000000, 0x00000000},    /* "Rsc", r0-r12 */ \
+  {0x9fffffff, 0xc0000000, 0x00000000, 0x00000000, 0x00000000},      /* 'r', r0-r28, blink, ap and pcl */	\
+  {0xffffffff, 0x00000000, 0x00000000, 0x00000000, 0x00000000},      /* 'W',  r0-r31 */ \
+  /* Include ap / pcl in WRITABLE_CORE_REGS for sake of symmetry.  As these \
+     registers are fixed, it does not affect the literal meaning of the \
+     constraints, but it makes it a superset of GENERAL_REGS, thus \
+     enabling some operations that would otherwise not be possible.  */ \
+  {0xffffffff, 0xd0000000, 0x00000000, 0x00000000, 0x00000000},      /* 'w', r0-r31, r60 */ \
+  {0xffffffff, 0xdfffffff, 0x00000000, 0x00000000, 0x00000000},      /* 'c', r0-r60, ap, pcl */ \
+  {0xffffffff, 0xdfffffff, 0x00000000, 0x00000000, 0x00000000},      /* 'Rac', r0-r60, ap, pcl */ \
+  {0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff, 0x0003ffff}       /* All Registers */		\
+}
+
+/* Local macros to mark the first and last regs of different classes.  */
+#define ARC_FIRST_SIMD_VR_REG              64
+#define ARC_LAST_SIMD_VR_REG               127
+
+#define ARC_FIRST_SIMD_DMA_CONFIG_REG      128
+#define ARC_FIRST_SIMD_DMA_CONFIG_IN_REG   128
+#define ARC_FIRST_SIMD_DMA_CONFIG_OUT_REG  136
+#define ARC_LAST_SIMD_DMA_CONFIG_REG       143
+
+/* The same information, inverted:
+   Return the class number of the smallest class containing
+   reg number REGNO.  This could be a conditional expression
+   or could index an array.  */
+
+extern enum reg_class arc_regno_reg_class[];
+
+#define REGNO_REG_CLASS(REGNO) (arc_regno_reg_class[REGNO])
+
+/* The class value for valid index registers. An index register is
+   one used in an address where its value is either multiplied by
+   a scale factor or added to another register (as well as added to a
+   displacement).  */
+
+#define INDEX_REG_CLASS (TARGET_MIXED_CODE ? ARCOMPACT16_REGS : GENERAL_REGS)
+
+/* The class value for valid base registers. A base register is one used in
+   an address which is the register value plus a displacement.  */
+
+#define BASE_REG_CLASS (TARGET_MIXED_CODE ? AC16_BASE_REGS : GENERAL_REGS)
+
+/* These assume that REGNO is a hard or pseudo reg number.
+   They give nonzero only if REGNO is a hard reg of the suitable class
+   or a pseudo reg currently allocated to a suitable hard reg.
+   Since they use reg_renumber, they are safe only once reg_renumber
+   has been allocated, which happens in local-alloc.c.  */
+#define REGNO_OK_FOR_BASE_P(REGNO) \
+((REGNO) < 29 || ((REGNO) == ARG_POINTER_REGNUM) || ((REGNO) == 63) ||\
+ (unsigned) reg_renumber[REGNO] < 29)
+
+#define REGNO_OK_FOR_INDEX_P(REGNO) REGNO_OK_FOR_BASE_P(REGNO)
+
+/* Given an rtx X being reloaded into a reg required to be
+   in class CLASS, return the class of reg to actually use.
+   In general this is just CLASS; but on some machines
+   in some cases it is preferable to use a more restrictive class.  */
+
+#define PREFERRED_RELOAD_CLASS(X, CLASS) \
+  arc_preferred_reload_class((X), (CLASS))
+
+extern enum reg_class arc_preferred_reload_class (rtx, enum reg_class);
+
+/* Return the maximum number of consecutive registers
+   needed to represent mode MODE in a register of class CLASS.  */
+
+#define CLASS_MAX_NREGS(CLASS, MODE) \
+((GET_MODE_SIZE (MODE) == 16 && (CLASS) == SIMD_VR_REGS) ? 1 \
+ : (GET_MODE_SIZE (MODE) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)
+
+#define SMALL_INT(X) ((unsigned) ((X) + 0x100) < 0x200)
+#define SMALL_INT_RANGE(X, OFFSET, SHIFT) \
+  ((unsigned) (((X) >> (SHIFT)) + 0x100) \
+   < 0x200 - ((unsigned) (OFFSET) >> (SHIFT)))
+#define SIGNED_INT12(X) ((unsigned) ((X) + 0x800) < 0x1000)
+#define LARGE_INT(X) \
+(((X) < 0) \
+ ? (X) >= (-(HOST_WIDE_INT) 0x7fffffff - 1) \
+ : (unsigned HOST_WIDE_INT) (X) <= (unsigned HOST_WIDE_INT) 0xffffffff)
+#define UNSIGNED_INT3(X) ((unsigned) (X) < 0x8)
+#define UNSIGNED_INT5(X) ((unsigned) (X) < 0x20)
+#define UNSIGNED_INT6(X) ((unsigned) (X) < 0x40)
+#define UNSIGNED_INT7(X) ((unsigned) (X) < 0x80)
+#define UNSIGNED_INT8(X) ((unsigned) (X) < 0x100)
+#define IS_ONE(X) ((X) == 1)
+#define IS_ZERO(X) ((X) == 0)
+
+/* Stack layout and stack pointer usage.  */
+
+/* Define this macro if pushing a word onto the stack moves the stack
+   pointer to a smaller address.  */
+#define STACK_GROWS_DOWNWARD
+
+/* Define this if the nominal address of the stack frame
+   is at the high-address end of the local variables;
+   that is, each additional local variable allocated
+   goes at a more negative offset in the frame.  */
+#define FRAME_GROWS_DOWNWARD 1
+
+/* Offset within stack frame to start allocating local variables at.
+   If FRAME_GROWS_DOWNWARD, this is the offset to the END of the
+   first local allocated.  Otherwise, it is the offset to the BEGINNING
+   of the first local allocated.  */
+#define STARTING_FRAME_OFFSET 0
+
+/* Offset from the stack pointer register to the first location at which
+   outgoing arguments are placed.  */
+#define STACK_POINTER_OFFSET (0)
+
+/* Offset of first parameter from the argument pointer register value.  */
+#define FIRST_PARM_OFFSET(FNDECL) (0)
+
+/* A C expression whose value is RTL representing the address in a
+   stack frame where the pointer to the caller's frame is stored.
+   Assume that FRAMEADDR is an RTL expression for the address of the
+   stack frame itself.
+
+   If you don't define this macro, the default is to return the value
+   of FRAMEADDR--that is, the stack frame address is also the address
+   of the stack word that points to the previous frame.  */
+/* ??? unfinished */
+/*define DYNAMIC_CHAIN_ADDRESS (FRAMEADDR)*/
+
+/* A C expression whose value is RTL representing the value of the
+   return address for the frame COUNT steps up from the current frame.
+   FRAMEADDR is the frame pointer of the COUNT frame, or the frame
+   pointer of the COUNT - 1 frame if `RETURN_ADDR_IN_PREVIOUS_FRAME'
+   is defined.  */
+/* The current return address is in r31.  The return address of anything
+   farther back is at [%fp,4].  */
+
+#define RETURN_ADDR_RTX(COUNT, FRAME) \
+  arc_return_addr_rtx ((COUNT), (FRAME))
+
+/* Register to use for pushing function arguments.  */
+#define STACK_POINTER_REGNUM 28
+
+/* Base register for access to local variables of the function.  */
+#define FRAME_POINTER_REGNUM 27
+
+/* Base register for access to arguments of the function. This register
+   will be eliminated into either fp or sp.  */
+#define ARG_POINTER_REGNUM 62
+
+#define RETURN_ADDR_REGNUM 31
+
+/* TODO - check usage of STATIC_CHAIN_REGNUM with a testcase */
+/* Register in which static-chain is passed to a function.  This must
+   not be a register used by the prologue.  */
+#define STATIC_CHAIN_REGNUM  11
+
+/* Function argument passing.  */
+
+/* If defined, the maximum amount of space required for outgoing
+   arguments will be computed and placed into the variable
+   `crtl->outgoing_args_size'.  No space will be pushed
+   onto the stack for each call; instead, the function prologue should
+   increase the stack frame size by this amount.  */
+#define ACCUMULATE_OUTGOING_ARGS 1
+
+/* Define a data type for recording info about an argument list
+   during the scan of that argument list.  This data type should
+   hold all necessary information about the function itself
+   and about the args processed so far, enough to enable macros
+   such as FUNCTION_ARG to determine where the next arg should go.  */
+#define CUMULATIVE_ARGS int
+
+/* Initialize a variable CUM of type CUMULATIVE_ARGS
+   for a call to a function whose data type is FNTYPE.
+   For a library call, FNTYPE is 0.  */
+#define INIT_CUMULATIVE_ARGS(CUM,FNTYPE,LIBNAME,INDIRECT,N_NAMED_ARGS) \
+((CUM) = 0)
+
+/* The number of registers used for parameter passing.  Local to this file.  */
+#define MAX_ARC_PARM_REGS 8
+
+/* 1 if N is a possible register number for function argument passing.  */
+#define FUNCTION_ARG_REGNO_P(N) \
+((unsigned) (N) < MAX_ARC_PARM_REGS)
+
+/* The ROUND_ADVANCE* macros are local to this file.  */
+/* Round SIZE up to a word boundary.  */
+#define ROUND_ADVANCE(SIZE) \
+(((SIZE) + UNITS_PER_WORD - 1) / UNITS_PER_WORD)
+
+/* Round arg MODE/TYPE up to the next word boundary.  */
+#define ROUND_ADVANCE_ARG(MODE, TYPE) \
+((MODE) == BLKmode				\
+ ? ROUND_ADVANCE (int_size_in_bytes (TYPE))	\
+ : ROUND_ADVANCE (GET_MODE_SIZE (MODE)))
+
+#define ARC_FUNCTION_ARG_BOUNDARY(MODE,TYPE) PARM_BOUNDARY
+/* Round CUM up to the necessary point for argument MODE/TYPE.  */
+/* N.B. Vectors have alignment exceeding BIGGEST_ALIGNMENT.
+   ARC_FUNCTION_ARG_BOUNDARY reduces this to no more than 32 bit.  */
+#define ROUND_ADVANCE_CUM(CUM, MODE, TYPE) \
+  ((((CUM) - 1) | (ARC_FUNCTION_ARG_BOUNDARY ((MODE), (TYPE)) - 1)/BITS_PER_WORD)\
+   + 1)
+
+/* Return boolean indicating arg of type TYPE and mode MODE will be passed in
+   a reg.  This includes arguments that have to be passed by reference as the
+   pointer to them is passed in a reg if one is available (and that is what
+   we're given).
+   When passing arguments NAMED is always 1.  When receiving arguments NAMED
+   is 1 for each argument except the last in a stdarg/varargs function.  In
+   a stdarg function we want to treat the last named arg as named.  In a
+   varargs function we want to treat the last named arg (which is
+   `__builtin_va_alist') as unnamed.
+   This macro is only used in this file.  */
+#define PASS_IN_REG_P(CUM, MODE, TYPE) \
+((CUM) < MAX_ARC_PARM_REGS)
+
+
+/* Function results.  */
+
+/* Define how to find the value returned by a library function
+   assuming the value has mode MODE.  */
+#define LIBCALL_VALUE(MODE) gen_rtx_REG (MODE, 0)
+
+/* 1 if N is a possible register number for a function value
+   as seen by the caller.  */
+/* ??? What about r1 in DI/DF values.  */
+#define FUNCTION_VALUE_REGNO_P(N) ((N) == 0)
+
+/* Tell GCC to use RETURN_IN_MEMORY.  */
+#define DEFAULT_PCC_STRUCT_RETURN 0
+
+/* Register in which address to store a structure value
+   is passed to a function, or 0 to use `invisible' first argument.  */
+#define STRUCT_VALUE 0
+
+/* EXIT_IGNORE_STACK should be nonzero if, when returning from a function,
+   the stack pointer does not matter.  The value is tested only in
+   functions that have frame pointers.
+   No definition is equivalent to always zero.  */
+#define EXIT_IGNORE_STACK 0
+
+#define EPILOGUE_USES(REGNO) arc_epilogue_uses ((REGNO))
+
+/* Definitions for register eliminations.
+
+   This is an array of structures.  Each structure initializes one pair
+   of eliminable registers.  The "from" register number is given first,
+   followed by "to".  Eliminations of the same "from" register are listed
+   in order of preference.
+
+   We have two registers that can be eliminated on the ARC.  First, the
+   argument pointer register can always be eliminated in favor of the stack
+   pointer register or frame pointer register.  Secondly, the frame pointer
+   register can often be eliminated in favor of the stack pointer register.
+*/
+
+#define ELIMINABLE_REGS					\
+{{ARG_POINTER_REGNUM, STACK_POINTER_REGNUM},		\
+ {ARG_POINTER_REGNUM, FRAME_POINTER_REGNUM},		\
+ {FRAME_POINTER_REGNUM, STACK_POINTER_REGNUM}}
+
+/* Define the offset between two registers, one to be eliminated, and the other
+   its replacement, at the start of a routine.  */
+extern int arc_initial_elimination_offset(int from, int to);
+#define INITIAL_ELIMINATION_OFFSET(FROM, TO, OFFSET)                    \
+  (OFFSET) = arc_initial_elimination_offset ((FROM), (TO))
+
+/* Output assembler code to FILE to increment profiler label # LABELNO
+   for profiling a function entry.
+   We actually emit the profiler code at the call site, so leave this one
+   empty.  */
+#define FUNCTION_PROFILER(FILE, LABELNO)
+#define NO_PROFILE_COUNTERS  1
+
+/* Trampolines.  */
+
+/* Length in units of the trampoline for entering a nested function.  */
+#define TRAMPOLINE_SIZE 20
+
+/* Alignment required for a trampoline, in bits.  */
+/* For actual data alignment we just need 32, no more than the stack;
+   however, to reduce cache coherency issues, we want to make sure that
+   trampoline instructions always appear the same in any given cache line.  */
+#define TRAMPOLINE_ALIGNMENT 256
+
+/* Library calls.  */
+
+/* Addressing modes, and classification of registers for them.  */
+
+/* Maximum number of registers that can appear in a valid memory address.  */
+/* The `ld' insn allows 2, but the `st' insn only allows 1.  */
+#define MAX_REGS_PER_ADDRESS 1
+
+/* We have pre inc/dec (load/store with update).  */
+#define HAVE_PRE_INCREMENT 1
+#define HAVE_PRE_DECREMENT 1
+#define HAVE_POST_INCREMENT 1
+#define HAVE_POST_DECREMENT 1
+#define HAVE_PRE_MODIFY_DISP 1
+#define HAVE_POST_MODIFY_DISP 1
+#define HAVE_PRE_MODIFY_REG 1
+#define HAVE_POST_MODIFY_REG 1
+/* ??? PRE_MODIFY_REG / POST_MODIFY_REG are enabled above, but exploiting
+   them fully requires a special predicate for the memory operand of stores,
+   like for the SH.  */
+
+/* Recognize any constant value that is a valid address.  */
+#define CONSTANT_ADDRESS_P(X) \
+(flag_pic ? arc_legitimate_pic_addr_p (X) : \
+(GET_CODE (X) == LABEL_REF || GET_CODE (X) == SYMBOL_REF	\
+ || GET_CODE (X) == CONST_INT || GET_CODE (X) == CONST))
+
+/* Is the argument a const_int containing an exact power of 2?  */
+#define IS_POWEROF2_P(X) (!((X) & ((X) - 1)) && (X))
+
+/* The macros REG_OK_FOR..._P assume that the arg is a REG rtx
+   and check its validity for a certain class.
+   We have two alternate definitions for each of them.
+   The *_NONSTRICT definition accepts all pseudo regs; the other rejects
+   them unless they have been allocated suitable hard regs.
+
+   Most source files want to accept pseudo regs in the hope that
+   they will get allocated to the class that the insn wants them to be in.
+   Source files for reload pass need to be strict.
+   After reload, it makes no difference, since pseudo regs have
+   been eliminated by then.  */
+
+/* Nonzero if X is a hard reg that can be used as an index
+   or if it is a pseudo reg.  */
+#define REG_OK_FOR_INDEX_P_NONSTRICT(X) \
+((unsigned) REGNO (X) >= FIRST_PSEUDO_REGISTER || \
+ (unsigned) REGNO (X) < 29 || \
+ (unsigned) REGNO (X) == 63 || \
+ (unsigned) REGNO (X) == ARG_POINTER_REGNUM)
+/* Nonzero if X is a hard reg that can be used as a base reg
+   or if it is a pseudo reg.  */
+#define REG_OK_FOR_BASE_P_NONSTRICT(X) \
+((unsigned) REGNO (X) >= FIRST_PSEUDO_REGISTER || \
+ (unsigned) REGNO (X) < 29 || \
+ (unsigned) REGNO (X) == 63 || \
+ (unsigned) REGNO (X) == ARG_POINTER_REGNUM)
+
+/* Nonzero if X is a hard reg that can be used as an index.  */
+#define REG_OK_FOR_INDEX_P_STRICT(X) REGNO_OK_FOR_INDEX_P (REGNO (X))
+/* Nonzero if X is a hard reg that can be used as a base reg.  */
+#define REG_OK_FOR_BASE_P_STRICT(X) REGNO_OK_FOR_BASE_P (REGNO (X))
+
+/* GO_IF_LEGITIMATE_ADDRESS recognizes an RTL expression
+   that is a valid memory address for an instruction.
+   The MODE argument is the machine mode for the MEM expression
+   that wants to use this address.  */
+/* The `ld' insn allows [reg],[reg+shimm],[reg+limm],[reg+reg],[limm]
+   but the `st' insn only allows [reg],[reg+shimm],[limm].
+   The only thing we can do is only allow the most strict case `st' and hope
+   other parts optimize out the restrictions for `ld'.  */
+
+#define RTX_OK_FOR_BASE_P(X, STRICT) \
+(REG_P (X) \
+ && ((STRICT) ? REG_OK_FOR_BASE_P_STRICT (X) : REG_OK_FOR_BASE_P_NONSTRICT (X)))
+
+#define RTX_OK_FOR_INDEX_P(X, STRICT) \
+(REG_P (X) \
+ && ((STRICT) ? REG_OK_FOR_INDEX_P_STRICT (X) : REG_OK_FOR_INDEX_P_NONSTRICT (X)))
+
+/* A C compound statement that attempts to replace X, which is an address
+   that needs reloading, with a valid memory address for an operand of
+   mode MODE.  WIN is a C statement label elsewhere in the code.
+
+   We try to get a normal form
+   of the address.  That will allow inheritance of the address reloads.  */
+
+#define LEGITIMIZE_RELOAD_ADDRESS(X,MODE,OPNUM,TYPE,IND_LEVELS,WIN)	\
+{									\
+  if (GET_CODE (X) == PLUS						\
+      && CONST_INT_P (XEXP (X, 1))					\
+      && (RTX_OK_FOR_BASE_P (XEXP (X, 0), true)				\
+	  || (REG_P (XEXP (X, 0))					\
+	      && reg_equiv_constant (REGNO (XEXP (X, 0))))))		\
+    {									\
+      int scale = GET_MODE_SIZE (MODE);					\
+      int shift;							\
+      rtx index_rtx = XEXP (X, 1);					\
+      HOST_WIDE_INT offset = INTVAL (index_rtx), offset_base;		\
+      rtx reg, sum, sum2;						\
+									\
+      if (scale > 4)							\
+	scale = 4;							\
+      if ((scale-1) & offset)						\
+	scale = 1;							\
+      shift = scale >> 1;						\
+      offset_base = (offset + (256 << shift)) & (-512 << shift);	\
+      /* Sometimes the normal form does not suit DImode.  We		\
+	 could avoid that by using smaller ranges, but that		\
+	 would give less optimized code when SImode is			\
+	 prevalent.  */							\
+      if (GET_MODE_SIZE (MODE) + offset - offset_base <= (256 << shift))\
+	{								\
+	  int regno;							\
+									\
+	  reg = XEXP (X, 0);						\
+	  regno = REGNO (reg);						\
+	  sum2 = sum = plus_constant (Pmode, reg, offset_base);		\
+									\
+	  if (reg_equiv_constant (regno))				\
+	    {								\
+	      sum2 = plus_constant (Pmode, reg_equiv_constant (regno),	\
+				    offset_base);			\
+	      if (GET_CODE (sum2) == PLUS)				\
+		sum2 = gen_rtx_CONST (Pmode, sum2);			\
+	    }								\
+	  X = gen_rtx_PLUS (Pmode, sum, GEN_INT (offset - offset_base));\
+	  push_reload (sum2, NULL_RTX, &XEXP (X, 0), NULL,		\
+		       BASE_REG_CLASS, Pmode, VOIDmode, 0, 0, (OPNUM),	\
+		       (TYPE));						\
+	  goto WIN;							\
+	}								\
+    }									\
+  /* We must re-recognize what we created before.  */			\
+  else if (GET_CODE (X) == PLUS						\
+	   && GET_CODE (XEXP (X, 0)) == PLUS				\
+	   && CONST_INT_P (XEXP (XEXP (X, 0), 1))			\
+	   && REG_P  (XEXP (XEXP (X, 0), 0))				\
+	   && CONST_INT_P (XEXP (X, 1)))				\
+    {									\
+      /* Because this address is so complex, we know it must have	\
+	 been created by LEGITIMIZE_RELOAD_ADDRESS before; thus,	\
+	 it is already unshared, and needs no further unsharing.  */	\
+      push_reload (XEXP ((X), 0), NULL_RTX, &XEXP ((X), 0), NULL,	\
+		   BASE_REG_CLASS, Pmode, VOIDmode, 0, 0, (OPNUM), (TYPE));\
+      goto WIN;								\
+    }									\
+}
+
+/* Reading lp_count for anything but the lp instruction is very slow on the
+   ARC700.  */
+#define DONT_REALLOC(REGNO,MODE) \
+  (TARGET_ARC700 && (REGNO) == 60)
+
+
+/* Given a comparison code (EQ, NE, etc.) and the first operand of a COMPARE,
+   return the mode to be used for the comparison.  */
+/*extern enum machine_mode arc_select_cc_mode ();*/
+#define SELECT_CC_MODE(OP, X, Y) \
+arc_select_cc_mode (OP, X, Y)
+
+/* Return non-zero if SELECT_CC_MODE will never return MODE for a
+   floating point inequality comparison.  */
+#define REVERSIBLE_CC_MODE(MODE) 1 /*???*/
+
+/* Costs.  */
+
+/* Compute extra cost of moving data between one register class
+   and another.  */
+#define REGISTER_MOVE_COST(MODE, CLASS, TO_CLASS) \
+   arc_register_move_cost ((MODE), (CLASS), (TO_CLASS))
+
+/* Compute the cost of moving data between registers and memory.  */
+/* Memory is 3 times as expensive as registers.
+   ??? Is that the right way to look at it?  */
+#define MEMORY_MOVE_COST(MODE,CLASS,IN) \
+(GET_MODE_SIZE (MODE) <= UNITS_PER_WORD ? 6 : 12)
+
+/* The cost of a branch insn.  */
+/* ??? What's the right value here?  Branches are certainly more
+   expensive than reg->reg moves.  */
+#define BRANCH_COST(speed_p, predictable_p) 2
+
+/* Nonzero if access to memory by bytes is slow and undesirable.
+   For RISC chips, it means that access to memory by bytes is no
+   better than access by words when possible, so grab a whole word
+   and maybe make use of that.  */
+#define SLOW_BYTE_ACCESS  0
+
+/* Define this macro if it is as good or better to call a constant
+   function address than to call an address kept in a register.  */
+/* On the ARC, calling through registers is slow.  */
+#define NO_FUNCTION_CSE
+
+/* Section selection.  */
+/* WARNING: These section names also appear in dwarfout.c.  */
+
+#define TEXT_SECTION_ASM_OP	"\t.section\t.text"
+#define DATA_SECTION_ASM_OP	"\t.section\t.data"
+
+#define BSS_SECTION_ASM_OP	"\t.section\t.bss"
+#define SDATA_SECTION_ASM_OP	"\t.section\t.sdata"
+#define SBSS_SECTION_ASM_OP	"\t.section\t.sbss"
+
+/* Expression whose value is a string, including spacing, containing the
+   assembler operation to identify the following data as initialization or
+   termination code.  If not defined, GCC will assume such a section does
+   not exist.  */
+#define INIT_SECTION_ASM_OP "\t.section\t.init"
+#define FINI_SECTION_ASM_OP "\t.section\t.fini"
+
+/* Define this macro if jump tables (for tablejump insns) should be
+   output in the text section, along with the assembler instructions.
+   Otherwise, the readonly data section is used.
+   This macro is irrelevant if there is no separate readonly data section.  */
+#define JUMP_TABLES_IN_TEXT_SECTION  (flag_pic || CASE_VECTOR_PC_RELATIVE)
+
+/* For DWARF.  Marginally different than default so output is "prettier"
+   (and consistent with above).  */
+#define PUSHSECTION_FORMAT "\t%s %s\n"
+
+/* Tell crtstuff.c we're using ELF.  */
+#define OBJECT_FORMAT_ELF
+
+/* PIC */
+
+/* The register number of the register used to address a table of static
+   data addresses in memory.  In some cases this register is defined by a
+   processor's ``application binary interface'' (ABI).  When this macro
+   is defined, RTL is generated for this register once, as with the stack
+   pointer and frame pointer registers.  If this macro is not defined, it
+   is up to the machine-dependent files to allocate such a register (if
+   necessary).  */
+#define PIC_OFFSET_TABLE_REGNUM 26
+
+/* Define this macro if the register defined by PIC_OFFSET_TABLE_REGNUM is
+   clobbered by calls.  Do not define this macro if PIC_OFFSET_TABLE_REGNUM
+   is not defined.  */
+/* This register is call-saved on the ARC.  */
+/*#define PIC_OFFSET_TABLE_REG_CALL_CLOBBERED*/
+
+/* A C expression that is nonzero if X is a legitimate immediate
+   operand on the target machine when generating position independent code.
+   You can assume that X satisfies CONSTANT_P, so you need not
+   check this.  You can also assume `flag_pic' is true, so you need not
+   check it either.  You need not define this macro if all constants
+   (including SYMBOL_REF) can be immediate operands when generating
+   position independent code.  */
+#define LEGITIMATE_PIC_OPERAND_P(X)  (arc_legitimate_pic_operand_p(X))
+
+/* Control the assembler format that we output.  */
+
+/* A C string constant describing how to begin a comment in the target
+   assembler language.  The compiler assumes that the comment will
+   end at the end of the line.  */
+/* Gas needs this to be "#" in order to recognize line directives.  */
+#define ASM_COMMENT_START "#"
+
+/* Output to assembler file text saying following lines
+   may contain character constants, extra white space, comments, etc.  */
+#define ASM_APP_ON ""
+
+/* Output to assembler file text saying following lines
+   no longer contain unusual constructs.  */
+#define ASM_APP_OFF ""
+
+/* Globalizing directive for a label.  */
+#define GLOBAL_ASM_OP "\t.global\t"
+
+/* This is how to output an assembler line defining a `char' constant.  */
+#define ASM_OUTPUT_CHAR(FILE, VALUE) \
+( fprintf (FILE, "\t.byte\t"),			\
+  output_addr_const (FILE, (VALUE)),		\
+  fprintf (FILE, "\n"))
+
+/* This is how to output an assembler line defining a `short' constant.  */
+#define ASM_OUTPUT_SHORT(FILE, VALUE) \
+( fprintf (FILE, "\t.hword\t"),			\
+  output_addr_const (FILE, (VALUE)),		\
+  fprintf (FILE, "\n"))
+
+/* This is how to output an assembler line defining an `int' constant.
+   We also handle symbol output here.  Code addresses must be right shifted
+   by 2 because that's how the jump instruction wants them.  */
+#define ASM_OUTPUT_INT(FILE, VALUE) \
+do {									\
+  fprintf (FILE, "\t.word\t");						\
+  if (GET_CODE (VALUE) == LABEL_REF)					\
+    {									\
+      fprintf (FILE, "%%st(@");						\
+      output_addr_const (FILE, (VALUE));				\
+      fprintf (FILE, ")");						\
+    }									\
+  else									\
+    output_addr_const (FILE, (VALUE));					\
+  fprintf (FILE, "\n");					                \
+} while (0)
+
+/* This is how to output an assembler line defining a `float' constant.  */
+#define ASM_OUTPUT_FLOAT(FILE, VALUE) \
+{							\
+  long t;						\
+  char str[30];						\
+  REAL_VALUE_TO_TARGET_SINGLE ((VALUE), t);		\
+  REAL_VALUE_TO_DECIMAL ((VALUE), "%.20e", str);	\
+  fprintf (FILE, "\t.word\t0x%lx %s %s\n",		\
+	   t, ASM_COMMENT_START, str);			\
+}
+
+/* This is how to output an assembler line defining a `double' constant.  */
+#define ASM_OUTPUT_DOUBLE(FILE, VALUE) \
+{							\
+  long t[2];						\
+  char str[30];						\
+  REAL_VALUE_TO_TARGET_DOUBLE ((VALUE), t);		\
+  REAL_VALUE_TO_DECIMAL ((VALUE), "%.20e", str);	\
+  fprintf (FILE, "\t.word\t0x%lx %s %s\n\t.word\t0x%lx\n", \
+	   t[0], ASM_COMMENT_START, str, t[1]);		\
+}
+
+/* This is how to output the definition of a user-level label named NAME,
+   such as the label on a static function or variable NAME.  */
+#define ASM_OUTPUT_LABEL(FILE, NAME) \
+do { assemble_name (FILE, NAME); fputs (":\n", FILE); } while (0)
+
+#define ASM_NAME_P(NAME) ((NAME)[0] == '*')
+
+/* This is how to output a reference to a user-level label named NAME.
+   `assemble_name' uses this.  */
+/* We work around a dwarfout.c deficiency by watching for labels from it and
+   not adding the '_' prefix.  There is a comment in
+   dwarfout.c that says it should be using ASM_OUTPUT_INTERNAL_LABEL.  */
+#define ASM_OUTPUT_LABELREF(FILE, NAME1) \
+do {							\
+  const char *NAME;					\
+  NAME = (*targetm.strip_name_encoding)(NAME1);		\
+  if ((NAME)[0] == '.' && (NAME)[1] == 'L')		\
+    fprintf (FILE, "%s", NAME);				\
+  else							\
+    {							\
+      if (!ASM_NAME_P (NAME1))				\
+	fprintf (FILE, "%s", user_label_prefix);	\
+      fprintf (FILE, "%s", NAME);			\
+    }							\
+} while (0)
+
+/* This is how to output a reference to a symbol_ref / label_ref as
+   (part of) an operand.  To disambiguate from register names like
+   a1 / a2 / status etc, symbols are preceded by '@'.  */
+#define ASM_OUTPUT_SYMBOL_REF(FILE,SYM) \
+  ASM_OUTPUT_LABEL_REF ((FILE), XSTR ((SYM), 0))
+#define ASM_OUTPUT_LABEL_REF(FILE,STR)			\
+  do							\
+    {							\
+      fputc ('@', (FILE));				\
+      assemble_name ((FILE), (STR));			\
+    }							\
+  while (0)
+
+/* Store in OUTPUT a string (made with alloca) containing
+   an assembler-name for a local static variable named NAME.
+   LABELNO is an integer which is different for each call.  */
+#define ASM_FORMAT_PRIVATE_NAME(OUTPUT, NAME, LABELNO) \
+( (OUTPUT) = (char *) alloca (strlen ((NAME)) + 10),	\
+  sprintf ((OUTPUT), "%s.%d", (NAME), (LABELNO)))
+
+/* The following macro defines the format used to output the second
+   operand of the .type assembler directive.  Different svr4 assemblers
+   expect various different forms for this operand.  The one given here
+   is just a default.  You may need to override it in your machine-
+   specific tm.h file (depending upon the particulars of your assembler).  */
+
+#undef  TYPE_OPERAND_FMT
+#define TYPE_OPERAND_FMT	"@%s"
+
+/*  A C string containing the appropriate assembler directive to
+    specify the size of a symbol, without any arguments.  On systems
+    that use ELF, the default (in `config/elfos.h') is `"\t.size\t"';
+    on other systems, the default is not to define this macro.  */
+#undef SIZE_ASM_OP
+#define SIZE_ASM_OP "\t.size\t"
+
+/* Assembler pseudo-op to equate one value with another.  */
+/* ??? This is needed because dwarfout.c provides a default definition too
+   late for defaults.h (which contains the default definition of ASM_OUTPUT_DEF
+   that we use).  */
+#ifdef SET_ASM_OP
+#undef SET_ASM_OP
+#endif
+#define SET_ASM_OP "\t.set\t"
+
+extern char rname56[], rname57[], rname58[], rname59[];
+/* How to refer to registers in assembler output.
+   This sequence is indexed by compiler's hard-register-number (see above).  */
+#define REGISTER_NAMES								\
+{  "r0",   "r1",   "r2",   "r3",       "r4",     "r5",     "r6",    "r7",	\
+   "r8",   "r9",  "r10",  "r11",      "r12",    "r13",    "r14",   "r15",	\
+  "r16",  "r17",  "r18",  "r19",      "r20",    "r21",    "r22",   "r23",	\
+  "r24",  "r25",   "gp",   "fp",       "sp", "ilink1", "ilink2", "blink",	\
+  "r32",  "r33",  "r34",  "r35",      "r36",    "r37",    "r38",   "r39",	\
+   "d1",   "d1",   "d2",   "d2",      "r44",    "r45",    "r46",   "r47",	\
+  "r48",  "r49",  "r50",  "r51",      "r52",    "r53",    "r54",   "r55",	\
+  rname56,rname57,rname58,rname59,"lp_count",    "cc",     "ap",   "pcl",	\
+  "vr0",  "vr1",  "vr2",  "vr3",      "vr4",    "vr5",    "vr6",   "vr7",       \
+  "vr8",  "vr9", "vr10", "vr11",     "vr12",   "vr13",   "vr14",  "vr15",	\
+ "vr16", "vr17", "vr18", "vr19",     "vr20",   "vr21",   "vr22",  "vr23",	\
+ "vr24", "vr25", "vr26", "vr27",     "vr28",   "vr29",   "vr30",  "vr31",	\
+ "vr32", "vr33", "vr34", "vr35",     "vr36",   "vr37",   "vr38",  "vr39",	\
+ "vr40", "vr41", "vr42", "vr43",     "vr44",   "vr45",   "vr46",  "vr47",	\
+ "vr48", "vr49", "vr50", "vr51",     "vr52",   "vr53",   "vr54",  "vr55",	\
+ "vr56", "vr57", "vr58", "vr59",     "vr60",   "vr61",   "vr62",  "vr63",	\
+  "dr0",  "dr1",  "dr2",  "dr3",      "dr4",    "dr5",    "dr6",   "dr7",	\
+  "dr0",  "dr1",  "dr2",  "dr3",      "dr4",    "dr5",    "dr6",   "dr7",	\
+  "lp_start", "lp_end" \
+}
+
+/* Entry to the insn conditionalizer.  */
+#define FINAL_PRESCAN_INSN(INSN, OPVEC, NOPERANDS) \
+  arc_final_prescan_insn (INSN, OPVEC, NOPERANDS)
+
+/* A C expression which evaluates to true if CODE is a valid
+   punctuation character for use in the `PRINT_OPERAND' macro.  */
+extern char arc_punct_chars[];
+#define PRINT_OPERAND_PUNCT_VALID_P(CHAR) \
+arc_punct_chars[(unsigned char) (CHAR)]
+
+/* Print operand X (an rtx) in assembler syntax to file FILE.
+   CODE is a letter or dot (`z' in `%z0') or 0 if no letter was specified.
+   For `%' followed by punctuation, CODE is the punctuation and X is null.  */
+#define PRINT_OPERAND(FILE, X, CODE) \
+arc_print_operand (FILE, X, CODE)
+
+/* A C compound statement to output to stdio stream STREAM the
+   assembler syntax for an instruction operand that is a memory
+   reference whose address is ADDR.  ADDR is an RTL expression.
+
+   On some machines, the syntax for a symbolic address depends on
+   the section that the address refers to.  On these machines,
+   define the macro `ENCODE_SECTION_INFO' to store the information
+   into the `symbol_ref', and then check for it here.  */
+#define PRINT_OPERAND_ADDRESS(FILE, ADDR) \
+arc_print_operand_address (FILE, ADDR)
+
+/* This is how to output an element of a case-vector that is absolute.  */
+#define ASM_OUTPUT_ADDR_VEC_ELT(FILE, VALUE)  \
+do {							\
+  char label[30];					\
+  ASM_GENERATE_INTERNAL_LABEL (label, "L", VALUE);	\
+  fprintf (FILE, "\t.word ");				\
+  assemble_name (FILE, label);				\
+  fprintf(FILE, "\n");					\
+} while (0)
+
+/* This is how to output an element of a case-vector that is relative.  */
+#define ASM_OUTPUT_ADDR_DIFF_ELT(FILE, BODY, VALUE, REL) \
+do {							\
+  char label[30];					\
+  ASM_GENERATE_INTERNAL_LABEL (label, "L", VALUE);	\
+  switch (GET_MODE (BODY))				\
+    {							\
+    case QImode: fprintf (FILE, "\t.byte "); break;	\
+    case HImode: fprintf (FILE, "\t.hword "); break;	\
+    case SImode: fprintf (FILE, "\t.word "); break;	\
+    default: gcc_unreachable ();			\
+    }							\
+  assemble_name (FILE, label);				\
+  fprintf (FILE, "-");					\
+  ASM_GENERATE_INTERNAL_LABEL (label, "L", REL);	\
+  assemble_name (FILE, label);				\
+  if (TARGET_COMPACT_CASESI)				\
+    fprintf (FILE, " + %d", 4 + arc_get_unalign ());	\
+  fprintf(FILE, "\n");                                  \
+} while (0)
+
+/* ADDR_DIFF_VECs are in the text section and thus can affect the
+   current alignment.  */
+#define ASM_OUTPUT_CASE_END(FILE, NUM, JUMPTABLE)       \
+  do                                                    \
+    {                                                   \
+      if (GET_CODE (PATTERN (JUMPTABLE)) == ADDR_DIFF_VEC \
+	  && ((GET_MODE_SIZE (GET_MODE (PATTERN (JUMPTABLE))) \
+	       * XVECLEN (PATTERN (JUMPTABLE), 1) + 1)	\
+	      & 2))					\
+      arc_toggle_unalign ();				\
+    }                                                   \
+  while (0)
+
+#define JUMP_ALIGN(LABEL) (arc_size_opt_level < 2 ? 2 : 0)
+#define LABEL_ALIGN_AFTER_BARRIER(LABEL) \
+  (JUMP_ALIGN(LABEL) \
+   ? JUMP_ALIGN(LABEL) \
+   : GET_CODE (PATTERN (prev_active_insn (LABEL))) == ADDR_DIFF_VEC \
+   ? 1 : 0)
+/* The desired alignment for the location counter at the beginning
+   of a loop.  */
+/* On the ARC, align loops to 4 byte boundaries unless doing all-out size
+   optimization.  */
+#define LOOP_ALIGN JUMP_ALIGN
+
+#define LABEL_ALIGN(LABEL) (arc_label_align (LABEL))
+
+/* This is how to output an assembler line
+   that says to advance the location counter
+   to a multiple of 2**LOG bytes.  */
+#define ASM_OUTPUT_ALIGN(FILE,LOG) \
+do { \
+  if ((LOG) != 0) fprintf (FILE, "\t.align %d\n", 1 << (LOG)); \
+  if ((LOG)  > 1) \
+    arc_clear_unalign (); \
+} while (0)
+
+/*  ASM_OUTPUT_ALIGNED_DECL_LOCAL (STREAM, DECL, NAME, SIZE, ALIGNMENT)
+    Define this macro when you need to see the variable's decl in order to
+    choose what to output.  */
+#define ASM_OUTPUT_ALIGNED_DECL_LOCAL(STREAM, DECL, NAME, SIZE, ALIGNMENT) \
+  arc_asm_output_aligned_decl_local (STREAM, DECL, NAME, SIZE, ALIGNMENT, 0)
+
+/* To translate the return value of arc_function_type into a register number
+   to jump through for function return.  */
+extern int arc_return_address_regs[4];
+
+/* Debugging information.  */
+
+/* Generate DBX and DWARF debugging information.  */
+#ifdef DBX_DEBUGGING_INFO
+#undef DBX_DEBUGGING_INFO
+#endif
+#define DBX_DEBUGGING_INFO
+
+#ifdef DWARF2_DEBUGGING_INFO
+#undef DWARF2_DEBUGGING_INFO
+#endif
+#define DWARF2_DEBUGGING_INFO
+
+/* Prefer DWARF2 debugging info.  */
+#undef PREFERRED_DEBUGGING_TYPE
+#define PREFERRED_DEBUGGING_TYPE DWARF2_DEBUG
+
+/* How to renumber registers for dbx and gdb.  */
+#define DBX_REGISTER_NUMBER(REGNO) \
+  ((TARGET_MULMAC_32BY16_SET && (REGNO) >= 56 && (REGNO) <= 57) \
+   ? ((REGNO) ^ !TARGET_BIG_ENDIAN) \
+   : (TARGET_MUL64_SET && (REGNO) >= 57 && (REGNO) <= 59) \
+   ? ((REGNO) == 57 \
+      ? 58 /* MMED */ \
+      : ((REGNO) & 1) ^ TARGET_BIG_ENDIAN \
+      ? 59 /* MHI */ \
+      : 57 + !!TARGET_MULMAC_32BY16_SET) /* MLO */ \
+   : (REGNO))
+
+#define DWARF_FRAME_REGNUM(REG) (REG)
+
+#define DWARF_FRAME_RETURN_COLUMN 	DWARF_FRAME_REGNUM (31)
+
+#define INCOMING_RETURN_ADDR_RTX  gen_rtx_REG (Pmode, 31)
+
+/* Frame info.  */
+/* Force the generation of dwarf .debug_frame sections even if not
+   compiling -g.  This guarantees that we can unwind the stack.  */
+
+#define DWARF2_FRAME_INFO 1
+
+/* Define this macro to 0 if your target supports DWARF 2 frame unwind
+   information, but it does not yet work with exception handling.  */
+#define DWARF2_UNWIND_INFO 0
+
+
+/* Turn off splitting of long stabs.  */
+#define DBX_CONTIN_LENGTH 0
+
+/* Miscellaneous.  */
+
+/* Specify the machine mode that this machine uses
+   for the index in the tablejump instruction.
+   If we have pc relative case vectors, we start the case vector shortening
+   with QImode.  */
+#define CASE_VECTOR_MODE \
+  ((optimize && (CASE_VECTOR_PC_RELATIVE || flag_pic)) ? QImode : Pmode)
+
+/* Define as C expression which evaluates to nonzero if the tablejump
+   instruction expects the table to contain offsets from the address of the
+   table.
+   Do not define this if the table should contain absolute addresses.  */
+#define CASE_VECTOR_PC_RELATIVE TARGET_CASE_VECTOR_PC_RELATIVE
+
+#define CASE_VECTOR_SHORTEN_MODE(MIN_OFFSET, MAX_OFFSET, BODY) \
+  CASE_VECTOR_SHORTEN_MODE_1 \
+    (MIN_OFFSET, TARGET_COMPACT_CASESI ? MAX_OFFSET + 6 : MAX_OFFSET, BODY)
+
+#define CASE_VECTOR_SHORTEN_MODE_1(MIN_OFFSET, MAX_OFFSET, BODY) \
+((MIN_OFFSET) >= 0 && (MAX_OFFSET) <= 255 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 1, QImode) \
+ : (MIN_OFFSET) >= -128 && (MAX_OFFSET) <= 127 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 0, QImode) \
+ : (MIN_OFFSET) >= 0 && (MAX_OFFSET) <= 65535 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 1, HImode) \
+ : (MIN_OFFSET) >= -32768 && (MAX_OFFSET) <= 32767 \
+ ? (ADDR_DIFF_VEC_FLAGS (BODY).offset_unsigned = 0, HImode) \
+ : SImode)
+
+#define ADDR_VEC_ALIGN(VEC_INSN) \
+  (exact_log2 (GET_MODE_SIZE (GET_MODE (PATTERN (VEC_INSN)))))
+#undef ASM_OUTPUT_BEFORE_CASE_LABEL
+#define ASM_OUTPUT_BEFORE_CASE_LABEL(FILE, PREFIX, NUM, TABLE) \
+  ASM_OUTPUT_ALIGN ((FILE), ADDR_VEC_ALIGN (TABLE));
+
+#define INSN_LENGTH_ALIGNMENT(INSN) \
+  ((JUMP_P (INSN) \
+    && GET_CODE (PATTERN (INSN)) == ADDR_DIFF_VEC \
+    && GET_MODE (PATTERN (INSN)) == QImode) \
+   ? 0 : length_unit_log)
+
+/* Define if operations between registers always perform the operation
+   on the full register even if a narrower mode is specified.  */
+#define WORD_REGISTER_OPERATIONS
+
+/* Define if loading in MODE, an integral mode narrower than BITS_PER_WORD
+   will either zero-extend or sign-extend.  The value of this macro should
+   be the code that says which one of the two operations is implicitly
+   done, NIL if none.  */
+#define LOAD_EXTEND_OP(MODE) ZERO_EXTEND
+
+
+/* Max number of bytes we can move from memory to memory
+   in one reasonably fast instruction.  */
+#define MOVE_MAX 4
+
+/* Let the movmem expander handle small block moves.  */
+#define MOVE_BY_PIECES_P(LEN, ALIGN)  0
+#define CAN_MOVE_BY_PIECES(SIZE, ALIGN) \
+  (move_by_pieces_ninsns (SIZE, ALIGN, MOVE_MAX_PIECES + 1) \
+   < (unsigned int) MOVE_RATIO (!optimize_size))
+
+/* Undo the effects of the movmem pattern presence on STORE_BY_PIECES_P .  */
+#define MOVE_RATIO(SPEED) ((SPEED) ? 15 : 3)
+
+/* Define this to be nonzero if shift instructions ignore all but the low-order
+   few bits.  Changed from 1 to 0 for rotate pattern testcases
+   (e.g. 20020226-1.c): with the value 1, the upper 27 bits of a word were
+   truncated while rotating a word.  This came to notice through a combine
+   phase optimization, viz. treating a << (32-b) as equivalent to
+   a << (-b).  */
+#define SHIFT_COUNT_TRUNCATED 0
+
+/* Value is 1 if truncating an integer of INPREC bits to OUTPREC bits
+   is done just by pretending it is already truncated.  */
+#define TRULY_NOOP_TRUNCATION(OUTPREC, INPREC) 1
+
+/* We assume that the store-condition-codes instructions store 0 for false
+   and some other value for true.  This is the value stored for true.  */
+#define STORE_FLAG_VALUE 1
+
+/* Specify the machine mode that pointers have.
+   After generation of rtl, the compiler makes no further distinction
+   between pointers and any other objects of this machine mode.  */
+/* ARCompact has full 32-bit pointers.  */
+#define Pmode SImode
+
+/* A function address in a call instruction.  */
+#define FUNCTION_MODE SImode
+
+/* Define the information needed to generate branch and scc insns.  This is
+   stored from the compare operation.  Note that we can't use "rtx" here
+   since it hasn't been defined!  */
+extern struct rtx_def *arc_compare_op0, *arc_compare_op1;
+
+/* ARC function types.   */
+enum arc_function_type {
+  ARC_FUNCTION_UNKNOWN, ARC_FUNCTION_NORMAL,
+  /* These are interrupt handlers.  The name corresponds to the register
+     name that contains the return address.  */
+  ARC_FUNCTION_ILINK1, ARC_FUNCTION_ILINK2
+};
+#define ARC_INTERRUPT_P(TYPE) \
+((TYPE) == ARC_FUNCTION_ILINK1 || (TYPE) == ARC_FUNCTION_ILINK2)
+
+/* Compute the type of a function from its DECL.  Needed for EPILOGUE_USES.  */
+struct function;
+extern enum arc_function_type arc_compute_function_type (struct function *);
+
+/* Called by crtstuff.c to make calls to function FUNCTION that are defined in
+   SECTION_OP, and then to switch back to text section.  */
+#undef CRT_CALL_STATIC_FUNCTION
+#define CRT_CALL_STATIC_FUNCTION(SECTION_OP, FUNC) \
+    asm (SECTION_OP "\n\t"				\
+	"bl @" USER_LABEL_PREFIX #FUNC "\n"		\
+	TEXT_SECTION_ASM_OP);
+
+/* This macro expands to the name of the scratch register r12, used for
+   temporary calculations according to the ABI.  */
+#define ARC_TEMP_SCRATCH_REG "r12"
+
+/* The C++ compiler must use one bit to indicate whether the function
+   that will be called through a pointer-to-member-function is
+   virtual.  Normally, we assume that the low-order bit of a function
+   pointer must always be zero.  Then, by ensuring that the
+   vtable_index is odd, we can distinguish which variant of the union
+   is in use.  But, on some platforms function pointers can be odd,
+   and so this doesn't work.  In that case, we use the low-order bit
+   of the `delta' field, and shift the remainder of the `delta' field
+   to the left. We needed to do this for A4 because the address was always
+   shifted and thus could be odd.  */
+#define TARGET_PTRMEMFUNC_VBIT_LOCATION \
+  (ptrmemfunc_vbit_in_pfn)
+
+#define INSN_SETS_ARE_DELAYED(X)		\
+  (GET_CODE (X) == INSN				\
+   && GET_CODE (PATTERN (X)) != SEQUENCE	\
+   && GET_CODE (PATTERN (X)) != USE		\
+   && GET_CODE (PATTERN (X)) != CLOBBER		\
+   && (get_attr_type (X) == TYPE_CALL || get_attr_type (X) == TYPE_SFUNC))
+
+#define INSN_REFERENCES_ARE_DELAYED(insn) INSN_SETS_ARE_DELAYED (insn)
+
+#define CALL_ATTR(X, NAME) \
+  ((CALL_P (X) || NONJUMP_INSN_P (X)) \
+   && GET_CODE (PATTERN (X)) != USE \
+   && GET_CODE (PATTERN (X)) != CLOBBER \
+   && get_attr_is_##NAME (X) == IS_##NAME##_YES) \
+
+#define REVERSE_CONDITION(CODE,MODE) \
+	(((MODE) == CC_FP_GTmode || (MODE) == CC_FP_GEmode \
+	  || (MODE) == CC_FP_UNEQmode || (MODE) == CC_FP_ORDmode \
+	  || (MODE) == CC_FPXmode) \
+	 ? reverse_condition_maybe_unordered ((CODE)) \
+	 : reverse_condition ((CODE)))
+
+#define ADJUST_INSN_LENGTH(X, LENGTH) \
+  ((LENGTH) \
+   = (GET_CODE (PATTERN (X)) == SEQUENCE \
+      ? ((LENGTH) \
+	 + arc_adjust_insn_length (XVECEXP (PATTERN (X), 0, 0), \
+				   get_attr_length (XVECEXP (PATTERN (X), \
+						    0, 0)), \
+				   true) \
+	 - get_attr_length (XVECEXP (PATTERN (X), 0, 0)) \
+	 + arc_adjust_insn_length (XVECEXP (PATTERN (X), 0, 1), \
+				   get_attr_length (XVECEXP (PATTERN (X), \
+						    0, 1)), \
+				   true) \
+	 - get_attr_length (XVECEXP (PATTERN (X), 0, 1))) \
+      : arc_adjust_insn_length ((X), (LENGTH), false)))
+
+#define IS_ASM_LOGICAL_LINE_SEPARATOR(C,STR) ((C) == '`')
+
+#define INIT_EXPANDERS arc_init_expanders ()
+
+#define CFA_FRAME_BASE_OFFSET(FUNDECL) (-arc_decl_pretend_args ((FUNDECL)))
+
+#define ARG_POINTER_CFA_OFFSET(FNDECL) \
+  (FIRST_PARM_OFFSET (FNDECL) + arc_decl_pretend_args ((FNDECL)))
+
+enum
+{
+  ARC_LRA_PRIORITY_NONE, ARC_LRA_PRIORITY_NONCOMPACT, ARC_LRA_PRIORITY_COMPACT
+};
+
+#endif /* GCC_ARC_H */
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc-modes.def config/arc/arc-modes.def
--- emptydir/arc-modes.def	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc-modes.def	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,37 @@ 
+/* Definitions of target machine for GNU compiler, Synopsys DesignWare ARC cpu.
+   Copyright (C) 2002, 2007-2012 Free Software Foundation, Inc.
+   Contributor: Joern Rennecke <joern.rennecke@embecosm.com>
+		on behalf of Synopsys Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+/* Some insns set all condition code flags, some only set the ZNC flags, and
+   some only set the ZN flags.  */
+
+CC_MODE (CC_ZN);
+CC_MODE (CC_Z);
+CC_MODE (CC_C);
+CC_MODE (CC_FP_GT);
+CC_MODE (CC_FP_GE);
+CC_MODE (CC_FP_ORD);
+CC_MODE (CC_FP_UNEQ);
+CC_MODE (CC_FPX);
+
+/* Vector modes.  */
+VECTOR_MODES (INT, 4);        /*            V4QI V2HI */
+VECTOR_MODES (INT, 8);        /*       V8QI V4HI V2SI */
+VECTOR_MODES (INT, 16);       /* V16QI V8HI V4SI V2DI */
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc.opt config/arc/arc.opt
--- emptydir/arc.opt	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc.opt	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,353 @@ 
+; Options for the Synopsys DesignWare ARC port of the compiler
+;
+; Copyright (C) 2005, 2007-2012 Free Software Foundation, Inc.
+;
+; This file is part of GCC.
+;
+; GCC is free software; you can redistribute it and/or modify it under
+; the terms of the GNU General Public License as published by the Free
+; Software Foundation; either version 3, or (at your option) any later
+; version.
+;
+; GCC is distributed in the hope that it will be useful, but WITHOUT
+; ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+; or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+; License for more details.
+;
+; You should have received a copy of the GNU General Public License
+; along with GCC; see the file COPYING3.  If not see
+; <http://www.gnu.org/licenses/>.
+
+HeaderInclude
+config/arc/arc-opts.h
+
+mbig-endian
+Target Report RejectNegative Mask(BIG_ENDIAN)
+Compile code for big endian mode
+
+mlittle-endian
+Target Report RejectNegative InverseMask(BIG_ENDIAN)
+Compile code for little endian mode.  This is the default
+
+mno-cond-exec
+Target Report RejectNegative Mask(NO_COND_EXEC)
+Disable ARCompact specific pass to generate conditional execution instructions
+
+mA5
+Target Report
+Generate ARCompact 32-bit code for ARCtangent-A5 processor
+
+mA6
+Target Report
+Generate ARCompact 32-bit code for ARCtangent-ARC600 processor
+
+mARC600
+Target Report
+Same as -mA6
+
+mA7
+Target Report
+Generate ARCompact 32-bit code for ARCtangent-ARC700 processor
+
+mARC700
+Target Report
+Same as -mA7
+
+mmixed-code
+Target Report Mask(MIXED_CODE_SET)
+Tweak register allocation to help 16-bit instruction generation
+; originally this was:
+;Generate ARCompact 16-bit instructions intermixed with 32-bit instructions for ARCtangent-A5 and higher processors
+; but we do that without -mmixed-code too; it's just a different instruction
+; count / size tradeoff.
+
+mvolatile-cache
+Target Report Mask(VOLATILE_CACHE_SET)
+Use ordinarily cached memory accesses for volatile references
+
+mno-volatile-cache
+Target Report InverseMask(VOLATILE_CACHE_SET)
+Enable cache bypass for volatile references
+
+mbarrel_shifter
+Target Report Mask(BARREL_SHIFTER)
+Generate instructions supported by barrel shifter
+
+mnorm
+Target Report Mask(NORM_SET)
+Generate norm instruction
+
+mswap
+Target Report Mask(SWAP_SET)
+Generate swap instruction
+
+mmul64
+Target Report Mask(MUL64_SET)
+Generate mul64 and mulu64 instructions
+
+mno-mpy
+Target Report Mask(NOMPY_SET)
+Do not generate mpy instructions for ARC700
+
+mEA
+Target Report Mask(EA_SET)
+Generate Extended arithmetic instructions.  Currently only divaw, adds, subs and sat16 are supported
+
+msoft-float
+Target Report Mask(0)
+Dummy flag. This is the default unless FPX switches are provided explicitly
+
+mlong-calls
+Target Report Mask(LONG_CALLS_SET)
+Generate call insns as register indirect calls
+
+mno-brcc
+Target Report Mask(NO_BRCC_SET)
+Do not generate BRcc instructions in arc_reorg.
+
+mno-sdata
+Target Report Mask(NO_SDATA_SET)
+Do not generate sdata references
+
+mno-millicode
+Target Report Mask(NO_MILLICODE_THUNK_SET)
+Do not generate millicode thunks (needed only with -Os)
+
+mspfp
+Target Report Mask(SPFP_COMPACT_SET)
+FPX: Generate Single Precision FPX (compact) instructions.
+
+mspfp_compact
+Target Report Mask(SPFP_COMPACT_SET) MaskExists
+FPX: Generate Single Precision FPX (compact) instructions.
+
+mspfp_fast
+Target Report Mask(SPFP_FAST_SET)
+FPX: Generate Single Precision FPX (fast) instructions.
+
+margonaut
+Target Report Mask(ARGONAUT_SET)
+FPX: Enable Argonaut ARC CPU Double Precision Floating Point extensions.
+
+mdpfp
+Target Report Mask(DPFP_COMPACT_SET)
+FPX: Generate Double Precision FPX (compact) instructions.
+
+mdpfp_compact
+Target Report Mask(DPFP_COMPACT_SET) MaskExists
+FPX: Generate Double Precision FPX (compact) instructions.
+
+mdpfp_fast
+Target Report Mask(DPFP_FAST_SET)
+FPX: Generate Double Precision FPX (fast) instructions.
+
+mno-dpfp-lrsr
+Target Report Mask(DPFP_DISABLE_LRSR)
+Disable LR and SR instructions from using FPX extension aux registers.
+
+msimd
+Target Report Mask(SIMD_SET)
+Enable generation of ARC SIMD instructions via target-specific builtins.
+
+mcpu=
+Target RejectNegative Joined Var(arc_cpu) Enum(processor_type) Init(PROCESSOR_NONE)
+-mcpu=CPU	Compile code for ARC variant CPU
+
+Enum
+Name(processor_type) Type(enum processor_type)
+
+EnumValue
+Enum(processor_type) String(A5) Value(PROCESSOR_A5)
+
+EnumValue
+Enum(processor_type) String(ARC600) Value(PROCESSOR_ARC600)
+
+EnumValue
+Enum(processor_type) String(ARC601) Value(PROCESSOR_ARC601)
+
+EnumValue
+Enum(processor_type) String(ARC700) Value(PROCESSOR_ARC700)
+
+msize-level=
+Target RejectNegative Joined UInteger Var(arc_size_opt_level) Init(-1)
+Size optimization level: 0:none 1:opportunistic 2:regalloc 3:drop align, -Os
+
+misize
+Target Report Var(TARGET_DUMPISIZE)
+Annotate assembler instructions with estimated addresses
+
+multcost=
+Target RejectNegative Joined UInteger Var(arc_multcost) Init(-1)
+Cost to assume for a multiply instruction, with 4 being equal to a normal insn.
+
+mtune=arc600
+Target RejectNegative Var(arc_tune, TUNE_ARC600)
+Tune for ARC600 cpu.
+
+mtune=arc601
+Target RejectNegative Var(arc_tune, TUNE_ARC600)
+Tune for ARC601 cpu.
+
+mtune=arc700
+Target RejectNegative Var(arc_tune, TUNE_ARC700_4_2_STD)
+Tune for ARC700 R4.2 cpu with standard multiplier block.
+
+mtune=arc700-xmac
+Target RejectNegative Var(arc_tune, TUNE_ARC700_4_2_XMAC)
+Tune for ARC700 R4.2 cpu with XMAC block.
+
+mtune=ARC725D
+Target RejectNegative Var(arc_tune, TUNE_ARC700_4_2_XMAC)
+Tune for ARC700 R4.2 cpu with XMAC block.
+
+mtune=ARC750D
+Target RejectNegative Var(arc_tune, TUNE_ARC700_4_2_XMAC)
+Tune for ARC700 R4.2 cpu with XMAC block.
+
+mindexed-loads
+Target Var(TARGET_INDEXED_LOADS)
+Enable the use of indexed loads
+
+mauto-modify-reg
+Target Var(TARGET_AUTO_MODIFY_REG)
+Enable the use of pre/post modify with register displacement.
+
+mmul32x16
+Target Report Mask(MULMAC_32BY16_SET)
+Generate 32x16 multiply and mac instructions
+
+; The initializer is supposed to be Init(REG_BR_PROB_BASE/2);
+; alas, basic-block.h is not included in options.c.
+munalign-prob-threshold=
+Target RejectNegative Joined UInteger Var(arc_unalign_prob_threshold) Init(10000/2)
+Set probability threshold for unaligning branches
+
+mmedium-calls
+Target Var(TARGET_MEDIUM_CALLS)
+Don't use less than 25-bit addressing range for calls.
+
+mannotate-align
+Target Var(TARGET_ANNOTATE_ALIGN)
+Explain what alignment considerations lead to the decision to make an insn short or long.
+
+malign-call
+Target Var(TARGET_ALIGN_CALL)
+Do alignment optimizations for call instructions.
+
+mRcq
+Target Var(TARGET_Rcq)
+Enable Rcq constraint handling - most short code generation depends on this.
+
+mRcw
+Target Var(TARGET_Rcw)
+Enable Rcw constraint handling - ccfsm condexec mostly depends on this.
+
+mearly-cbranchsi
+Target Var(TARGET_EARLY_CBRANCHSI)
+Enable pre-reload use of cbranchsi pattern
+
+mbbit-peephole
+Target Var(TARGET_BBIT_PEEPHOLE)
+Enable bbit peephole2
+
+mcase-vector-pcrel
+Target Var(TARGET_CASE_VECTOR_PC_RELATIVE)
+Use pc-relative switch case tables - this enables case table shortening.
+
+mcompact-casesi
+Target Var(TARGET_COMPACT_CASESI)
+Enable compact casesi pattern
+
+mq-class
+Target Var(TARGET_Q_CLASS)
+Enable 'q' instruction alternatives.
+
+mexpand-adddi
+Target Var(TARGET_EXPAND_ADDDI)
+Expand adddi3 and subdi3 at rtl generation time into add.f / adc etc.
+
+
+; Flags used by the assembler, but for which we define preprocessor
+; macro symbols as well.
+mcrc
+Target Report RejectNegative
+Enable variable polynomial CRC extension
+
+mdsp_packa
+Target Report RejectNegative
+Enable DSP 3.1 Pack A extensions
+
+mdvbf
+Target Report RejectNegative
+Enable dual viterbi butterfly extension
+
+mmac_d16
+Target Report RejectNegative Undocumented
+
+mmac_24
+Target Report RejectNegative Undocumented
+
+mtelephony
+Target Report RejectNegative
+Enable Dual and Single Operand Instructions for Telephony
+
+mxy
+Target Report RejectNegative
+Enable XY Memory extension (DSP version 3)
+
+; ARC700 4.10 extension instructions
+mlock
+Target Report RejectNegative
+Enable Locked Load/Store Conditional extension
+
+mswape
+Target Report RejectNegative
+Enable swap byte ordering extension instruction
+
+mrtsc
+Target Report RejectNegative
+Enable 64-bit Time-Stamp Counter extension instruction
+
+mno-epilogue-cfi
+Target Report RejectNegative InverseMask(EPILOGUE_CFI)
+Disable generation of cfi for epilogues.
+
+mepilogue-cfi
+Target RejectNegative Mask(EPILOGUE_CFI)
+Enable generation of cfi for epilogues.
+
+EB
+Target
+Pass -EB option through to linker.
+
+EL
+Target
+Pass -EL option through to linker.
+
+marclinux
+Target
+Pass -marclinux option through to linker.
+
+marclinux_prof
+Target
+Pass -marclinux_prof option through to linker.
+
+;; lra is still unproven for ARC, so allow falling back to reload with -mno-lra.
+;Target InverseMask(NO_LRA)
+mlra
+; lra still won't allow configuring libgcc; see PR rtl-optimization/55464,
+; so don't enable it by default.
+Target Mask(LRA)
+Enable lra
+
+mlra-priority-none
+Target RejectNegative Var(arc_lra_priority_tag, ARC_LRA_PRIORITY_NONE)
+Don't indicate any priority with TARGET_REGISTER_PRIORITY
+
+mlra-priority-compact
+Target RejectNegative Var(arc_lra_priority_tag, ARC_LRA_PRIORITY_COMPACT)
+Indicate priority for r0..r3 / r12..r15 with TARGET_REGISTER_PRIORITY
+
+mlra-priority-noncompact
+Target RejectNegative Var(arc_lra_priority_tag, ARC_LRA_PRIORITY_NONCOMPACT)
+Reduce priority for r0..r3 / r12..r15 with TARGET_REGISTER_PRIORITY
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc-opts.h config/arc/arc-opts.h
--- emptydir/arc-opts.h	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc-opts.h	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,28 @@ 
+/* GCC option-handling definitions for the Synopsys DesignWare ARC architecture.
+
+   Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+enum processor_type
+{
+  PROCESSOR_NONE,
+  PROCESSOR_A5,
+  PROCESSOR_ARC600,
+  PROCESSOR_ARC601,
+  PROCESSOR_ARC700
+};
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc-protos.h config/arc/arc-protos.h
--- emptydir/arc-protos.h	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc-protos.h	2013-02-12 11:30:45.127021345 +0000
@@ -0,0 +1,115 @@ 
+/* Definitions of target machine for GNU compiler, Synopsys DesignWare ARC cpu.
+   Copyright (C) 2000, 2007-2013 Free Software Foundation, Inc.
+
+This file is part of GCC.
+
+GCC is free software; you can redistribute it and/or modify
+it under the terms of the GNU General Public License as published by
+the Free Software Foundation; either version 3, or (at your option)
+any later version.
+
+GCC is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+GNU General Public License for more details.
+
+You should have received a copy of the GNU General Public License
+along with GCC; see the file COPYING3.  If not see
+<http://www.gnu.org/licenses/>.  */
+
+#ifdef RTX_CODE
+
+extern enum machine_mode arc_select_cc_mode (enum rtx_code, rtx, rtx);
+
+/* Define the function that build the compare insn for scc, bcc and mov*cc.  */
+extern struct rtx_def *gen_compare_reg (rtx, enum machine_mode);
+
+/* Declarations for various fns used in the .md file.  */
+extern void arc_output_function_epilogue (FILE *, HOST_WIDE_INT, int);
+extern const char *output_shift (rtx *);
+extern bool compact_sda_memory_operand (rtx op, enum machine_mode mode);
+extern bool arc_double_limm_p (rtx);
+extern void arc_print_operand (FILE *, rtx, int);
+extern void arc_print_operand_address (FILE *, rtx);
+extern void arc_final_prescan_insn (rtx, rtx *, int);
+extern void arc_set_default_type_attributes (tree type);
+extern const char *arc_output_libcall (const char *);
+extern bool prepare_extend_operands (rtx *operands, enum rtx_code code,
+				     enum machine_mode omode);
+extern int arc_output_addsi (rtx *operands, bool, bool);
+extern int arc_output_commutative_cond_exec (rtx *operands, bool);
+extern bool arc_expand_movmem (rtx *operands);
+extern bool prepare_move_operands (rtx *operands, enum machine_mode mode);
+extern void emit_shift (enum rtx_code, rtx, rtx, rtx);
+#endif /* RTX_CODE */
+
+#ifdef TREE_CODE
+extern enum arc_function_type arc_compute_function_type (struct function *);
+#endif /* TREE_CODE */
+
+
+extern void arc_init (void);
+extern unsigned int arc_compute_frame_size (int);
+extern bool arc_ccfsm_branch_deleted_p (void);
+extern void arc_ccfsm_record_branch_deleted (void);
+
+extern rtx arc_legitimize_pic_address (rtx, rtx);
+void arc_asm_output_aligned_decl_local (FILE *, tree, const char *,
+					unsigned HOST_WIDE_INT,
+					unsigned HOST_WIDE_INT,
+					unsigned HOST_WIDE_INT);
+extern rtx arc_return_addr_rtx (int, rtx);
+extern bool check_if_valid_regno_const (rtx *, int);
+extern bool check_if_valid_sleep_operand (rtx *, int);
+extern bool arc_legitimate_constant_p (enum machine_mode, rtx);
+extern bool arc_legitimate_pc_offset_p (rtx);
+extern bool arc_legitimate_pic_addr_p (rtx);
+extern void emit_pic_move (rtx *, enum machine_mode);
+extern bool arc_raw_symbolic_reference_mentioned_p (rtx, bool);
+extern bool arc_legitimate_pic_operand_p (rtx);
+extern bool arc_is_longcall_p (rtx);
+extern bool arc_profile_call (rtx callee);
+extern bool valid_brcc_with_delay_p (rtx *);
+extern bool small_data_pattern (rtx, enum machine_mode);
+extern rtx arc_rewrite_small_data (rtx);
+extern bool arc_ccfsm_cond_exec_p (void);
+struct secondary_reload_info;
+extern int arc_register_move_cost (enum machine_mode, enum reg_class,
+				   enum reg_class);
+extern rtx disi_highpart (rtx);
+extern int arc_adjust_insn_length (rtx, int, bool);
+extern int arc_corereg_hazard (rtx, rtx);
+extern int arc_hazard (rtx, rtx);
+extern int arc_write_ext_corereg (rtx);
+extern rtx gen_acc1 (void);
+extern rtx gen_acc2 (void);
+extern rtx gen_mlo (void);
+extern rtx gen_mhi (void);
+extern bool arc_branch_size_unknown_p (void);
+struct arc_ccfsm;
+extern void arc_ccfsm_record_condition (rtx, int, rtx, struct arc_ccfsm *);
+extern void arc_expand_prologue (void);
+extern void arc_expand_epilogue (int);
+extern void arc_init_expanders (void);
+extern int arc_check_millicode (rtx op, int offset, int load_p);
+extern int arc_get_unalign (void);
+extern void arc_clear_unalign (void);
+extern void arc_toggle_unalign (void);
+extern void split_addsi (rtx *);
+extern void split_subsi (rtx *);
+extern void arc_pad_return (void);
+extern rtx arc_split_move (rtx *);
+extern int arc_verify_short (rtx insn, int unalign, int);
+extern const char *arc_short_long (rtx insn, const char *, const char *);
+extern rtx arc_regno_use_in (unsigned int, rtx);
+extern int arc_attr_type (rtx);
+extern bool arc_scheduling_not_expected (void);
+extern bool arc_sets_cc_p (rtx insn);
+extern int arc_label_align (rtx label);
+extern bool arc_need_delay (rtx insn);
+extern bool arc_text_label (rtx);
+extern int arc_decl_pretend_args (tree decl);
+extern bool arc_short_comparison_p (rtx, int);
+extern bool arc_epilogue_uses (int regno);
+/* insn-attrtab.c doesn't include reload.h, which declares regno_clobbered_p. */
+extern int regno_clobbered_p (unsigned int, rtx, enum machine_mode, int);
diff -Nu --exclude arc.c --exclude arc.md emptydir/arc-simd.h config/arc/arc-simd.h
--- emptydir/arc-simd.h	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/arc-simd.h	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,186 @@ 
+/* Synopsys DesignWare ARC SIMD include file.
+   Copyright (C) 2007-2012 Free Software Foundation, Inc.
+   Written by Saurabh Verma (saurabh.verma@celunite.com) on behalf of Synopsys
+   Inc.
+
+   This file is part of GCC.
+
+   GCC is free software; you can redistribute it and/or modify it
+   under the terms of the GNU General Public License as published
+   by the Free Software Foundation; either version 3, or (at your
+   option) any later version.
+
+   GCC is distributed in the hope that it will be useful, but WITHOUT
+   ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
+   or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public
+   License for more details.
+
+   You should have received a copy of the GNU General Public License
+   along with GCC; see the file COPYING3.  If not see
+   <http://www.gnu.org/licenses/>.  */
+
+/* As a special exception, if you include this header file into source
+   files compiled by GCC, this header file does not by itself cause
+   the resulting executable to be covered by the GNU General Public
+   License.  This exception does not however invalidate any other
+   reasons why the executable file might be covered by the GNU General
+   Public License.  */
+
+#ifndef _ARC_SIMD_H
+#define _ARC_SIMD_H 1
+
+#ifndef __ARC_SIMD__
+#error Use the "-msimd" flag to enable ARC SIMD support
+#endif
+
+/* I0-I7 registers.  */
+#define _IREG_I0  0
+#define _IREG_I1  1
+#define _IREG_I2  2
+#define _IREG_I3  3
+#define _IREG_I4  4
+#define _IREG_I5  5
+#define _IREG_I6  6
+#define _IREG_I7  7
+
+/* DMA configuration registers.  */
+#define _DMA_REG_DR0		0
+#define _DMA_SDM_SRC_ADR_REG	_DMA_REG_DR0
+#define _DMA_SDM_DEST_ADR_REG	_DMA_REG_DR0
+
+#define _DMA_REG_DR1		1
+#define _DMA_SDM_STRIDE_REG	_DMA_REG_DR1
+
+#define _DMA_REG_DR2		2
+#define _DMA_BLK_REG		_DMA_REG_DR2
+
+#define _DMA_REG_DR3		3
+#define _DMA_LOC_REG		_DMA_REG_DR3
+
+#define _DMA_REG_DR4		4
+#define _DMA_SYS_SRC_ADR_REG	_DMA_REG_DR4
+#define _DMA_SYS_DEST_ADR_REG	_DMA_REG_DR4
+
+#define _DMA_REG_DR5		5
+#define _DMA_SYS_STRIDE_REG	_DMA_REG_DR5
+
+#define _DMA_REG_DR6		6
+#define _DMA_CFG_REG		_DMA_REG_DR6
+
+#define _DMA_REG_DR7		7
+#define _DMA_FT_BASE_ADR_REG	_DMA_REG_DR7
+
+/* Predefined types used in vector instructions.  */
+typedef int   __v4si  __attribute__((vector_size(16)));
+typedef short __v8hi  __attribute__((vector_size(16)));
+
+/* Synonyms */
+#define _vaddaw    __builtin_arc_vaddaw
+#define _vaddw     __builtin_arc_vaddw
+#define _vavb      __builtin_arc_vavb
+#define _vavrb     __builtin_arc_vavrb
+#define _vdifaw    __builtin_arc_vdifaw
+#define _vdifw     __builtin_arc_vdifw
+#define _vmaxaw    __builtin_arc_vmaxaw
+#define _vmaxw     __builtin_arc_vmaxw
+#define _vminaw    __builtin_arc_vminaw
+#define _vminw     __builtin_arc_vminw
+#define _vmulaw    __builtin_arc_vmulaw
+#define _vmulfaw   __builtin_arc_vmulfaw
+#define _vmulfw    __builtin_arc_vmulfw
+#define _vmulw     __builtin_arc_vmulw
+#define _vsubaw    __builtin_arc_vsubaw
+#define _vsubw     __builtin_arc_vsubw
+#define _vsummw    __builtin_arc_vsummw
+#define _vand      __builtin_arc_vand
+#define _vandaw    __builtin_arc_vandaw
+#define _vbic      __builtin_arc_vbic
+#define _vbicaw    __builtin_arc_vbicaw
+#define _vor       __builtin_arc_vor
+#define _vxor      __builtin_arc_vxor
+#define _vxoraw    __builtin_arc_vxoraw
+#define _veqw      __builtin_arc_veqw
+#define _vlew      __builtin_arc_vlew
+#define _vltw      __builtin_arc_vltw
+#define _vnew      __builtin_arc_vnew
+#define _vmr1aw    __builtin_arc_vmr1aw
+#define _vmr1w     __builtin_arc_vmr1w
+#define _vmr2aw    __builtin_arc_vmr2aw
+#define _vmr2w     __builtin_arc_vmr2w
+#define _vmr3aw    __builtin_arc_vmr3aw
+#define _vmr3w     __builtin_arc_vmr3w
+#define _vmr4aw    __builtin_arc_vmr4aw
+#define _vmr4w     __builtin_arc_vmr4w
+#define _vmr5aw    __builtin_arc_vmr5aw
+#define _vmr5w     __builtin_arc_vmr5w
+#define _vmr6aw    __builtin_arc_vmr6aw
+#define _vmr6w     __builtin_arc_vmr6w
+#define _vmr7aw    __builtin_arc_vmr7aw
+#define _vmr7w     __builtin_arc_vmr7w
+#define _vmrb      __builtin_arc_vmrb
+#define _vh264f    __builtin_arc_vh264f
+#define _vh264ft   __builtin_arc_vh264ft
+#define _vh264fw   __builtin_arc_vh264fw
+#define _vvc1f     __builtin_arc_vvc1f
+#define _vvc1ft    __builtin_arc_vvc1ft
+#define _vbaddw    __builtin_arc_vbaddw
+#define _vbmaxw    __builtin_arc_vbmaxw
+#define _vbminw    __builtin_arc_vbminw
+#define _vbmulaw   __builtin_arc_vbmulaw
+#define _vbmulfw   __builtin_arc_vbmulfw
+#define _vbmulw    __builtin_arc_vbmulw
+#define _vbrsubw   __builtin_arc_vbrsubw
+#define _vbsubw    __builtin_arc_vbsubw
+#define _vasrw     __builtin_arc_vasrw
+#define _vsr8      __builtin_arc_vsr8
+#define _vsr8aw    __builtin_arc_vsr8aw
+#define _vasrrwi   __builtin_arc_vasrrwi
+#define _vasrsrwi  __builtin_arc_vasrsrwi
+#define _vasrwi    __builtin_arc_vasrwi
+#define _vasrpwbi  __builtin_arc_vasrpwbi
+#define _vasrrpwbi __builtin_arc_vasrrpwbi
+#define _vsr8awi   __builtin_arc_vsr8awi
+#define _vsr8i     __builtin_arc_vsr8i
+#define _vmvaw     __builtin_arc_vmvaw
+#define _vmvw      __builtin_arc_vmvw
+#define _vmvzw     __builtin_arc_vmvzw
+#define _vd6tapf   __builtin_arc_vd6tapf
+#define _vmovaw    __builtin_arc_vmovaw
+#define _vmovw     __builtin_arc_vmovw
+#define _vmovzw    __builtin_arc_vmovzw
+#define _vabsaw    __builtin_arc_vabsaw
+#define _vabsw     __builtin_arc_vabsw
+#define _vaddsuw   __builtin_arc_vaddsuw
+#define _vsignw    __builtin_arc_vsignw
+#define _vexch1    __builtin_arc_vexch1
+#define _vexch2    __builtin_arc_vexch2
+#define _vexch4    __builtin_arc_vexch4
+#define _vupbaw    __builtin_arc_vupbaw
+#define _vupbw     __builtin_arc_vupbw
+#define _vupsbaw   __builtin_arc_vupsbaw
+#define _vupsbw    __builtin_arc_vupsbw
+#define _vdirun    __builtin_arc_vdirun
+#define _vdorun    __builtin_arc_vdorun
+#define _vdiwr     __builtin_arc_vdiwr
+#define _vdowr     __builtin_arc_vdowr
+#define _vrec      __builtin_arc_vrec
+#define _vrun      __builtin_arc_vrun
+#define _vrecrun   __builtin_arc_vrecrun
+#define _vendrec   __builtin_arc_vendrec
+#define _vld32wh   __builtin_arc_vld32wh
+#define _vld32wl   __builtin_arc_vld32wl
+#define _vld64     __builtin_arc_vld64
+#define _vld32     __builtin_arc_vld32
+#define _vld64w    __builtin_arc_vld64w
+#define _vld128    __builtin_arc_vld128
+#define _vst128    __builtin_arc_vst128
+#define _vst64     __builtin_arc_vst64
+#define _vst16_n   __builtin_arc_vst16_n
+#define _vst32_n   __builtin_arc_vst32_n
+#define _vinti     __builtin_arc_vinti
+
+/* Additional synonyms to ease programming.  */
+#define _setup_dma_in_channel_reg  _vdiwr
+#define _setup_dma_out_channel_reg _vdowr
+
+#endif /* _ARC_SIMD_H */
diff -Nu --exclude arc.c --exclude arc.md emptydir/constraints.md config/arc/constraints.md
--- emptydir/constraints.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/constraints.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,393 @@ 
+;; Constraint definitions for Synopsys DesignWare ARC.
+;; Copyright (C) 2007-2012 Free Software Foundation, Inc.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+;; Register constraints
+
+; Most instructions accept arbitrary core registers for their inputs, even
+; if the core register in question cannot be written to, like the multiply
+; result registers of the ARCtangent-A5 and ARC600.
+; First, define a class for core registers that can be read cheaply.  This
+; is most or all core registers for ARC600, but only r0-r31 for ARC700.
+(define_register_constraint "c" "CHEAP_CORE_REGS"
+  "core register @code{r0}-@code{r31}, @code{ap}, @code{pcl}")
+
+; All core regs - e.g. for when we must have a way to reload a register.
+(define_register_constraint "Rac" "ALL_CORE_REGS"
+  "core register @code{r0}-@code{r60}, @code{ap}, @code{pcl}")
+
+; Some core registers (e.g. lp_count) aren't general registers because they
+; can't be used as the destination of a multi-cycle operation like
+; load and/or multiply, yet they are still writable in the sense that
+; register-register moves and single-cycle arithmetic (e.g "add", "and",
+; but not "mpy") can write to them.
+(define_register_constraint "w" "WRITABLE_CORE_REGS"
+  "writable core register: @code{r0}-@code{r31}, @code{r60}, nonfixed core register")
+
+(define_register_constraint "W" "MPY_WRITABLE_CORE_REGS"
+  "writable core register except @code{LP_COUNT} (@code{r60}): @code{r0}-@code{r31}, nonfixed core register")
+
+(define_register_constraint "l" "LPCOUNT_REG"
+  "@internal
+   Loop count register @code{r60}")
+
+(define_register_constraint "x" "R0_REG"
+  "@code{R0} register.")
+
+(define_register_constraint "Rgp" "GP_REG"
+  "@internal
+   Global Pointer register @code{r26}")
+
+(define_register_constraint "f" "FP_REG"
+  "@internal
+   Frame Pointer register @code{r27}")
+
+(define_register_constraint "b" "SP_REGS"
+  "@internal
+   Stack Pointer register @code{r28}")
+
+(define_register_constraint "k" "LINK_REGS"
+  "@internal
+   Link Registers @code{ilink1}:@code{r29}, @code{ilink2}:@code{r30},
+   @code{blink}:@code{r31}")
+
+(define_register_constraint "q" "ARCOMPACT16_REGS"
+  "Registers usable in ARCompact 16-bit instructions: @code{r0}-@code{r3},
+   @code{r12}-@code{r15}")
+
+(define_register_constraint "e" "AC16_BASE_REGS"
+  "Registers usable as base-regs of memory addresses in ARCompact 16-bit memory
+   instructions: @code{r0}-@code{r3}, @code{r12}-@code{r15}, @code{sp}")
+
+(define_register_constraint "D" "DOUBLE_REGS"
+  "ARC FPX (dpfp) 64-bit registers. @code{D0}, @code{D1}")
+
+(define_register_constraint "d" "SIMD_DMA_CONFIG_REGS"
+  "@internal
+   ARC SIMD DMA configuration registers @code{di0}-@code{di7},
+   @code{do0}-@code{do7}")
+
+(define_register_constraint "v" "SIMD_VR_REGS"
+  "ARC SIMD 128-bit registers @code{VR0}-@code{VR23}")
+
+; We could allow call-saved registers for sibling calls if we restored them
+; in the delay slot of the call.  However, that would not allow us to adjust the
+; stack pointer afterwards, so the call-saved register would have to be
+; restored from a call-used register that was just loaded with the value
+; before.  So sticking to call-used registers for sibcalls will likely
+; generate better code overall.
+(define_register_constraint "Rsc" "SIBCALL_REGS"
+  "@internal
+   Sibling call register")
+
+;; Integer constraints
+
+(define_constraint "I"
+  "@internal
+   A signed 12-bit integer constant."
+  (and (match_code "const_int")
+       (match_test "SIGNED_INT12 (ival)")))
+
+(define_constraint "K"
+  "@internal
+   A 3-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT3 (ival)")))
+
+(define_constraint "L"
+  "@internal
+   A 6-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT6 (ival)")))
+
+(define_constraint "CnL"
+  "@internal
+   One's complement of a 6-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT6 (~ival)")))
+
+(define_constraint "CmL"
+  "@internal
+   Two's complement of a 6-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT6 (-ival)")))
+
+(define_constraint "M"
+  "@internal
+   A 5-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT5 (ival)")))
+
+(define_constraint "N"
+  "@internal
+   Integer constant 1"
+  (and (match_code "const_int")
+       (match_test "IS_ONE (ival)")))
+
+(define_constraint "O"
+  "@internal
+   A 7-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT7 (ival)")))
+
+(define_constraint "P"
+  "@internal
+   An 8-bit unsigned integer constant"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT8 (ival)")))
+
+(define_constraint "C_0"
+  "@internal
+   Zero"
+  (and (match_code "const_int")
+       (match_test "ival == 0")))
+
+(define_constraint "Cca"
+  "@internal
+   Conditional or three-address add / sub constant"
+  (and (match_code "const_int")
+       (match_test "ival == -1 << 31
+		    || (ival >= -0x1f8 && ival <= 0x1f8
+			&& ((ival >= 0 ? ival : -ival)
+			    <= 0x3f * (ival & -ival)))")))
+
+; intersection of "O" and "Cca".
+(define_constraint "CL2"
+  "@internal
+   A 6-bit unsigned integer constant times 2"
+  (and (match_code "const_int")
+       (match_test "!(ival & ~126)")))
+
+(define_constraint "CM4"
+  "@internal
+   A 5-bit unsigned integer constant times 4"
+  (and (match_code "const_int")
+       (match_test "!(ival & ~124)")))
+
+(define_constraint "Csp"
+  "@internal
+   A valid stack pointer offset for a short add"
+  (and (match_code "const_int")
+       (match_test "!(ival & ~124) || !(-ival & ~124)")))
+
+(define_constraint "C2a"
+  "@internal
+   Unconditional two-address add / sub constant"
+  (and (match_code "const_int")
+       (match_test "ival == -1 << 31
+		    || (ival >= -0x4000 && ival <= 0x4000
+			&& ((ival >= 0 ? ival : -ival)
+			    <= 0x7ff * (ival & -ival)))")))
+
+(define_constraint "C0p"
+ "@internal
+  power of two"
+  (and (match_code "const_int")
+       (match_test "IS_POWEROF2_P (ival)")))
+
+(define_constraint "C1p"
+ "@internal
+  constant such that x+1 is a power of two, and x != 0"
+  (and (match_code "const_int")
+       (match_test "ival && IS_POWEROF2_P (ival + 1)")))
+
+(define_constraint "Ccp"
+ "@internal
+  constant such that ~x (one's complement) is a power of two"
+  (and (match_code "const_int")
+       (match_test "IS_POWEROF2_P (~ival)")))
+
+(define_constraint "Cux"
+ "@internal
+  constant such that AND gives an unsigned extension"
+  (and (match_code "const_int")
+       (match_test "ival == 0xff || ival == 0xffff")))
+
+(define_constraint "Crr"
+ "@internal
+  constant that can be loaded with ror b,u6"
+  (and (match_code "const_int")
+       (match_test "(ival & ~0x8000001f) == 0 && !arc_ccfsm_cond_exec_p ()")))
+
+;; Floating-point constraints
+
+(define_constraint "G"
+  "@internal
+   A 32-bit constant double value"
+  (and (match_code "const_double")
+       (match_test "arc_double_limm_p (op)")))
+
+(define_constraint "H"
+  "@internal
+   All const_double values (including 64-bit values)"
+  (and (match_code "const_double")
+       (match_test "1")))
+
+;; Memory constraints
+(define_memory_constraint "T"
+  "@internal
+   A valid memory operand for ARCompact load instructions"
+  (and (match_code "mem")
+       (match_test "compact_load_memory_operand (op, VOIDmode)")))
+
+(define_memory_constraint "S"
+  "@internal
+   A valid memory operand for ARCompact store instructions"
+  (and (match_code "mem")
+       (match_test "compact_store_memory_operand (op, VOIDmode)")))
+
+(define_memory_constraint "Usd"
+  "@internal
+   A valid _small-data_ memory operand for ARCompact instructions"
+  (and (match_code "mem")
+       (match_test "compact_sda_memory_operand (op, VOIDmode)")))
+
+(define_memory_constraint "Usc"
+  "@internal
+   A valid memory operand for storing constants"
+  (and (match_code "mem")
+       (match_test "!CONSTANT_P (XEXP (op,0))")
+;; ??? the assembler rejects stores of immediates to small data.
+       (match_test "!compact_sda_memory_operand (op, VOIDmode)")))
+
+(define_memory_constraint "Us<"
+  "@internal
+   Stack pre-decrement"
+  (and (match_code "mem")
+       (match_test "GET_CODE (XEXP (op, 0)) == PRE_DEC")
+       (match_test "REG_P (XEXP (XEXP (op, 0), 0))")
+       (match_test "REGNO (XEXP (XEXP (op, 0), 0)) == SP_REG")))
+
+(define_memory_constraint "Us>"
+  "@internal
+   Stack post-increment"
+  (and (match_code "mem")
+       (match_test "GET_CODE (XEXP (op, 0)) == POST_INC")
+       (match_test "REG_P (XEXP (XEXP (op, 0), 0))")
+       (match_test "REGNO (XEXP (XEXP (op, 0), 0)) == SP_REG")))
+
+;; General constraints
+
+(define_constraint "Cbr"
+  "Branch destination"
+  (ior (and (match_code "symbol_ref")
+	    (match_test "!arc_is_longcall_p (op)"))
+       (match_code "label_ref")))
+
+(define_constraint "Cbp"
+  "predicable branch/call destination"
+  (ior (and (match_code "symbol_ref")
+	    (match_test "!arc_is_longcall_p (op) && !TARGET_MEDIUM_CALLS"))
+       (match_code "label_ref")))
+
+(define_constraint "Cpc"
+  "pc-relative constant"
+  (match_test "arc_legitimate_pc_offset_p (op)"))
+
+(define_constraint "Clb"
+  "label"
+  (and (match_code "label_ref")
+       (match_test "arc_text_label (XEXP (op, 0))")))
+
+(define_constraint "Cal"
+  "constant for arithmetic/logical operations"
+  (match_test "immediate_operand (op, VOIDmode) && !arc_legitimate_pc_offset_p (op)"))
+
+(define_constraint "C32"
+  "32 bit constant for arithmetic/logical operations"
+  (match_test "immediate_operand (op, VOIDmode)
+	       && !arc_legitimate_pc_offset_p (op)
+	       && !satisfies_constraint_I (op)"))
+
+; Note that the 'cryptic' register constraints will not make reload use the
+; associated class to reload into, but this will not penalize reloading of any
+; other operands, or using an alternate part of the same alternative.
+
+; Rcq is different in three important ways from a register class constraint:
+; - It does not imply a register class, hence reload will not use it to drive
+;   reloads.
+; - It matches even when there is no register class to describe its accepted
+;   set; not having such a set again lessens the impact on register allocation.
+; - It won't match when the instruction is conditionalized by the ccfsm.
+(define_constraint "Rcq"
+  "@internal
+   Cryptic q - for short insn generation while not affecting register allocation
+   Registers usable in ARCompact 16-bit instructions: @code{r0}-@code{r3},
+   @code{r12}-@code{r15}"
+  (and (match_code "REG")
+       (match_test "TARGET_Rcq
+		    && !arc_ccfsm_cond_exec_p ()
+		    && ((((REGNO (op) & 7) ^ 4) - 4) & 15) == REGNO (op)")))
+
+; If we need a reload, we generally want to steer reload to use three-address
+; alternatives in preference of two-address alternatives, unless the
+; three-address alternative introduces a LIMM that is unnecessary for the
+; two-address alternative.
+(define_constraint "Rcw"
+  "@internal
+   Cryptic w - for use in early alternatives with matching constraint"
+  (and (match_code "REG")
+       (match_test
+	"TARGET_Rcw
+	 && REGNO (op) < FIRST_PSEUDO_REGISTER
+	 && TEST_HARD_REG_BIT (reg_class_contents[WRITABLE_CORE_REGS],
+			       REGNO (op))")))
+
+(define_constraint "Rcr"
+  "@internal
+   Cryptic r - for use in early alternatives with matching constraint"
+  (and (match_code "REG")
+       (match_test
+	"TARGET_Rcw
+	 && REGNO (op) < FIRST_PSEUDO_REGISTER
+	 && TEST_HARD_REG_BIT (reg_class_contents[GENERAL_REGS],
+			       REGNO (op))")))
+
+(define_constraint "Rcb"
+  "@internal
+   Stack Pointer register @code{r28} - do not reload into its class"
+  (and (match_code "REG")
+       (match_test "REGNO (op) == 28")))
+
+(define_constraint "Rck"
+  "@internal
+   blink (useful for push_s / pop_s)"
+  (and (match_code "REG")
+       (match_test "REGNO (op) == 31")))
+
+(define_constraint "Rs5"
+  "@internal
+   sibcall register - only allow one of the five available 16-bit insns.
+   Registers usable in ARCompact 16-bit instructions: @code{r0}-@code{r3},
+   @code{r12}"
+  (and (match_code "REG")
+       (match_test "!arc_ccfsm_cond_exec_p ()")
+       (ior (match_test "(unsigned) REGNO (op) <= 3")
+	    (match_test "REGNO (op) == 12"))))
+
+(define_constraint "Rcc"
+  "@internal
+  Condition Codes"
+  (and (match_code "REG") (match_test "cc_register (op, VOIDmode)")))
+
+
+(define_constraint "Q"
+  "@internal
+   Integer constant zero"
+  (and (match_code "const_int")
+       (match_test "IS_ZERO (ival)")))
diff -Nu --exclude arc.c --exclude arc.md emptydir/fpx.md config/arc/fpx.md
--- emptydir/fpx.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/fpx.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,674 @@ 
+;; Machine description of the Synopsys DesignWare ARC cpu Floating Point
+;; extensions for GNU C compiler
+;; Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; TODOs:
+;;        dpfp blocks?
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; Scheduler descriptions for the fpx instructions
+(define_insn_reservation "spfp_compact" 3
+  (and (match_test "TARGET_SPFP_COMPACT_SET")
+       (eq_attr "type" "spfp"))
+  "issue+core, nothing*2, write_port")
+
+(define_insn_reservation "spfp_fast" 6
+  (and (match_test "TARGET_SPFP_FAST_SET")
+       (eq_attr "type" "spfp"))
+  "issue+core, nothing*5, write_port")
+
+(define_insn_reservation "dpfp_compact_mult" 7
+  (and (match_test "TARGET_DPFP_COMPACT_SET")
+       (eq_attr "type" "dpfp_mult"))
+  "issue+core, nothing*6, write_port")
+
+(define_insn_reservation "dpfp_compact_addsub" 5
+  (and (match_test "TARGET_DPFP_COMPACT_SET")
+       (eq_attr "type" "dpfp_addsub"))
+  "issue+core, nothing*4, write_port")
+
+(define_insn_reservation "dpfp_fast" 5
+  (and (match_test "TARGET_DPFP_FAST_SET")
+       (eq_attr "type" "dpfp_mult,dpfp_addsub"))
+  "issue+core, nothing*4, write_port")
+
+;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+(define_insn "addsf3"
+  [(set (match_operand:SF 0 "register_operand"          "=r,r,r,r,r ")
+	(plus:SF (match_operand:SF 1 "nonmemory_operand" "0,r,GCal,r,0")
+		 (match_operand:SF 2 "nonmemory_operand" "I,rL,r,GCal,LrCal")))]
+;  "(TARGET_ARC700 || TARGET_ARC600) && TARGET_SPFP_SET";Add flag for float
+  "TARGET_SPFP"
+  "@
+   fadd %0,%1,%2
+   fadd %0,%1,%2
+   fadd   %0,%S1,%2
+   fadd   %0,%1,%S2
+   fadd%? %0,%1,%S2"
+  [(set_attr "type" "spfp")
+  (set_attr "length" "4,4,8,8,8")])
+
+(define_insn "subsf3"
+  [(set (match_operand:SF 0 "register_operand"          "=r,r,r,r,r ")
+	(minus:SF (match_operand:SF 1 "nonmemory_operand" "r,0,GCal,r,0")
+		 (match_operand:SF 2 "nonmemory_operand" "rL,I,r,GCal,LrCal")))]
+  ;"(TARGET_ARC700 || TARGET_ARC600) && TARGET_SPFP_SET";Add flag for float
+  "TARGET_SPFP"
+  "@
+   fsub %0,%1,%2
+   fsub %0,%1,%2
+   fsub   %0,%S1,%2
+   fsub   %0,%1,%S2
+   fsub%? %0,%1,%S2"
+  [(set_attr "type" "spfp")
+  (set_attr "length" "4,4,8,8,8")])
+
+(define_insn "mulsf3"
+  [(set (match_operand:SF 0 "register_operand"          "=r,r,r,r,r ")
+	(mult:SF (match_operand:SF 1 "nonmemory_operand" "r,0,GCal,r,0")
+		 (match_operand:SF 2 "nonmemory_operand" "rL,I,r,GCal,LrCal")))]
+;  "(TARGET_ARC700 || TARGET_ARC600) && TARGET_SPFP_SET"	;Add flag for float
+  "TARGET_SPFP"
+  "@
+   fmul %0,%1,%2
+   fmul %0,%1,%2
+   fmul   %0,%S1,%2
+   fmul   %0,%1,%S2
+   fmul%? %0,%1,%S2"
+  [(set_attr "type" "spfp")
+  (set_attr "length" "4,4,8,8,8")])
+
+
+;; For comparisons, we can avoid storing the top half of the result into
+;; a register since '.f' lets us set the Z bit for the conditional
+;; branch insns.
+
+;; ??? FIXME (x-y)==0 is not a correct comparison for floats:
+;;     http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
+(define_insn "cmpsfpx_raw"
+  [(set (reg:CC_FPX 61)
+	(compare:CC_FPX (match_operand:SF 0 "register_operand" "r")
+			 (match_operand:SF 1 "register_operand" "r")))]
+  "TARGET_ARGONAUT_SET && TARGET_SPFP"
+  "fsub.f 0,%0,%1"
+  [(set_attr "type" "spfp")
+   (set_attr "length" "4")])
+
+;; ??? FIXME (x-y)==0 is not a correct comparison for floats:
+;;     http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
+;; ??? FIXME we claim to clobber operand 2, yet the two numbers appended
+;; to the actual instructions are incorrect.  The result of the d*subh
+;; insn is stored in the Dx register specified by that first number.
+(define_insn "cmpdfpx_raw"
+  [(set (reg:CC_FPX 61)
+	(compare:CC_FPX (match_operand:DF 0 "nonmemory_operand" "D,r")
+			 (match_operand:DF 1 "nonmemory_operand" "r,D")))
+   (clobber (match_scratch:DF 2 "=D,D"))]
+  "TARGET_ARGONAUT_SET && TARGET_DPFP"
+  "@
+   dsubh%F0%F1.f 0,%H2,%L2
+   drsubh%F0%F2.f 0,%H1,%L1"
+  [(set_attr "type" "dpfp_addsub")
+   (set_attr "length" "4")])
+
+;; ??? FIXME subtraction is not a correct comparison for floats:
+;;     http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
+(define_insn "*cmpfpx_gt"
+  [(set (reg:CC_FP_GT 61) (compare:CC_FP_GT (reg:CC_FPX 61) (const_int 0)))]
+  "TARGET_ARGONAUT_SET"
+  "cmp.ls pcl,pcl"
+  [(set_attr "type" "compare")
+   (set_attr "length" "4")])
+
+;; ??? FIXME subtraction is not a correct comparison for floats:
+;;     http://www.cygnus-software.com/papers/comparingfloats/comparingfloats.htm
+(define_insn "*cmpfpx_ge"
+  [(set (reg:CC_FP_GE 61) (compare:CC_FP_GE (reg:CC_FPX 61) (const_int 0)))]
+  "TARGET_ARGONAUT_SET"
+  "rcmp.pnz pcl,0"
+  [(set_attr "type" "compare")
+   (set_attr "length" "4")])
+
+;; DPFP instructions begin...
+
+;; op0_reg = D1_reg.low
+(define_insn "*lr_double_lower"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+	(unspec_volatile:SI [(match_operand:DF 1 "arc_double_register_operand" "D")] VUNSPEC_LR ))]
+ "TARGET_DPFP && !TARGET_DPFP_DISABLE_LRSR"
+"lr %0, [%1l] ; *lr_double_lower"
+[(set_attr "length" "8")
+(set_attr "type" "lr")]
+)
+
+(define_insn "*lr_double_higher"
+  [(set (match_operand:SI 0 "register_operand" "=r")
+	(unspec_volatile:SI [(match_operand:DF 1 "arc_double_register_operand" "D")] VUNSPEC_LR_HIGH ))]
+ "TARGET_DPFP && !TARGET_DPFP_DISABLE_LRSR"
+"lr %0, [%1h] ; *lr_double_higher"
+[(set_attr "length" "8")
+(set_attr "type" "lr")]
+)
+
+
+(define_insn "*dexcl_3op_peep2_insn"
+  [(set (match_operand:SI 0 "dest_reg_operand" "=r") ; not register_operand, to accept SUBREG
+		   (unspec_volatile:SI [
+		   			(match_operand:DF 1 "arc_double_register_operand" "D")
+					(match_operand:SI 2 "shouldbe_register_operand" "r")  ; r1
+					(match_operand:SI 3 "shouldbe_register_operand" "r") ; r0
+					] VUNSPEC_DEXCL ))
+  ]
+  "TARGET_DPFP"
+  "dexcl%F1 %0, %2, %3"
+  [(set_attr "type" "move")
+   (set_attr "length" "4")]
+)
+
+;; version which will not overwrite operand0
+(define_insn "*dexcl_3op_peep2_insn_nores"
+  [   (unspec_volatile:SI [
+		   			(match_operand:DF 0 "arc_double_register_operand" "D")
+					(match_operand:SI 1 "shouldbe_register_operand" "r")  ; r1
+					(match_operand:SI 2 "shouldbe_register_operand" "r") ; r0
+					] VUNSPEC_DEXCL_NORES )
+  ]
+  "TARGET_DPFP"
+  "dexcl%F0 0, %1, %2"
+  [(set_attr "type" "move")
+   (set_attr "length" "4")]
+)
+
+;; dexcl a,b,c pattern generated by the peephole2 above
+(define_insn "*dexcl_3op_peep2_insn_lr"
+  [(parallel [(set (match_operand:SI 0 "register_operand" "=r")
+		   (unspec_volatile:SI [(match_operand:DF 1 "arc_double_register_operand" "=D")] VUNSPEC_LR ))
+	     (set (match_dup 1) (match_operand:DF 2 "register_operand" "r"))]
+	    )
+  ]
+  "TARGET_DPFP && !TARGET_DPFP_DISABLE_LRSR"
+  "dexcl%F1 %0, %H2, %L2"
+  [(set_attr "type" "move")
+   (set_attr "length" "4")]
+)
+
+
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;;                             doubles support for ARC
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+;; D0 = D1+{reg_pair}2
+;; (define_expand "adddf3"
+;;   [(set (match_operand:DF 0 "arc_double_register_operand"          "")
+;; 	(plus:DF (match_operand:DF 1 "arc_double_register_operand" "")
+;; 		 (match_operand:DF 2 "nonmemory_operand" "")))]
+;;  "TARGET_DPFP"
+;;  " "
+;; )
+;; daddh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo
+;; OR
+;; daddh{0}{1} 0, reg3, limm2.lo
+(define_expand "adddf3"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "")
+	(plus:DF (match_operand:DF 1 "arc_double_register_operand" "")
+		 (match_operand:DF 2 "nonmemory_operand" "")))
+     ]
+ "TARGET_DPFP"
+ " if (GET_CODE (operands[2]) == CONST_DOUBLE)
+     {
+        rtx high, low, tmp;
+        split_double (operands[2], &low, &high);
+        tmp = force_reg (SImode, high);
+        emit_insn (gen_adddf3_insn (operands[0], operands[1], operands[2], tmp, const0_rtx));
+     }
+   else
+     emit_insn (gen_adddf3_insn (operands[0], operands[1], operands[2], const1_rtx, const1_rtx));
+     DONE;
+ "
+)
+
+;; daddh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo  /* operand 4 = 1*/
+;; OR
+;; daddh{0}{1} 0, reg3, limm2.lo /* operand 4 = 0 */
+;;
+(define_insn "adddf3_insn"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "=D,D")
+	(plus:DF (match_operand:DF 1 "arc_double_register_operand" "D,D")
+		 (match_operand:DF 2 "nonmemory_operand" "!r,G")))
+  (use (match_operand:SI 3 "" "N,r"))
+  (use (match_operand:SI 4 "" "N,Q"))
+  ; Prevent can_combine_p from combining adddf3_insn patterns with
+  ; different USE pairs.
+  (use (match_dup 2))
+  ]
+  "TARGET_DPFP &&
+   !(GET_CODE(operands[2]) == CONST_DOUBLE && GET_CODE(operands[3]) == CONST_INT)"
+  "@
+     daddh%F0%F1 0,%H2,%L2
+     daddh%F0%F1 0,%3,%L2"
+  [(set_attr "type" "dpfp_addsub")
+  (set_attr "length" "4,8")])
+
+;; dmulh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo
+;; OR
+;; dmulh{0}{1} 0, reg3, limm2.lo
+(define_expand "muldf3"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "")
+	(mult:DF (match_operand:DF 1 "arc_double_register_operand" "")
+		 (match_operand:DF 2 "nonmemory_operand" "")))]
+"TARGET_DPFP"
+"  if (GET_CODE (operands[2]) == CONST_DOUBLE)
+     {
+       rtx high, low, tmp;
+       split_double (operands[2], &low, &high);
+       tmp = force_reg (SImode, high);
+       emit_insn (gen_muldf3_insn (operands[0], operands[1], operands[2], tmp, const0_rtx));
+     }
+   else
+     emit_insn (gen_muldf3_insn (operands[0], operands[1], operands[2], const1_rtx, const1_rtx));
+
+   DONE;
+ ")
+
+
+;; dmulh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo /* operand 4 = 1*/
+;; OR
+;; dmulh{0}{1} 0, reg3, limm2.lo /* operand 4 = 0*/
+(define_insn "muldf3_insn"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "=D,D")
+	(mult:DF (match_operand:DF 1 "arc_double_register_operand" "D,D")
+		 (match_operand:DF 2 "nonmemory_operand" "!r,G")))
+  (use (match_operand:SI 3 "" "N,!r"))
+  (use (match_operand:SI 4 "" "N,Q"))
+  ; Prevent can_combine_p from combining muldf3_insn patterns with
+  ; different USE pairs.
+  (use (match_dup 2))
+  ]
+  "TARGET_DPFP
+   && !(GET_CODE (operands[2]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)"
+  "@
+    dmulh%F0%F1 0,%H2,%L2
+    dmulh%F0%F1 0,%3,%L2"
+  [(set_attr "type" "dpfp_mult")
+  (set_attr "length" "4,8")])
+
+;; dsubh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo
+;; OR
+;; dsubh{0}{1} 0, reg3, limm2.lo
+;; OR
+;; drsubh{0}{2} 0, {reg_pair}1.hi, {reg_pair}1.lo
+;; OR
+;; drsubh{0}{2} 0, reg3, limm1.lo
+(define_expand "subdf3"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "")
+		    (minus:DF (match_operand:DF 1 "nonmemory_operand" "")
+				  (match_operand:DF 2 "nonmemory_operand" "")))]
+"TARGET_DPFP"
+"   if (GET_CODE (operands[1]) == CONST_DOUBLE || GET_CODE (operands[2]) == CONST_DOUBLE)
+     {
+       rtx high, low, tmp;
+       int const_index = (GET_CODE (operands[1]) == CONST_DOUBLE) ? 1 : 2;
+       split_double (operands[const_index], &low, &high);
+       tmp = force_reg (SImode, high);
+       emit_insn (gen_subdf3_insn (operands[0], operands[1], operands[2], tmp, const0_rtx));
+     }
+   else
+     emit_insn (gen_subdf3_insn (operands[0], operands[1], operands[2], const1_rtx, const1_rtx));
+
+   DONE;
+  "
+)
+
+;; dsubh{0}{1} 0, {reg_pair}2.hi, {reg_pair}2.lo /* operand 4 = 1 */
+;; OR
+;; dsubh{0}{1} 0, reg3, limm2.lo /* operand 4 = 0*/
+;; OR
+;; drsubh{0}{2} 0, {reg_pair}1.hi, {reg_pair}1.lo /* operand 4 = 1 */
+;; OR
+;; drsubh{0}{2} 0, reg3, limm1.lo /* operand 4 = 0*/
+(define_insn "subdf3_insn"
+  [(set (match_operand:DF 0 "arc_double_register_operand"          "=D,D,D,D")
+		   (minus:DF (match_operand:DF 1 "nonmemory_operand" "D,D,!r,G")
+			    (match_operand:DF 2 "nonmemory_operand" "!r,G,D,D")))
+  (use (match_operand:SI 3 "" "N,r,N,r"))
+  (use (match_operand:SI 4 "" "N,Q,N,Q"))
+  ; Prevent can_combine_p from combining subdf3_insn patterns with
+  ; different USE pairs.
+  (use (match_dup 2))]
+  "TARGET_DPFP
+   && !(GET_CODE (operands[2]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)
+   && !(GET_CODE (operands[1]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)"
+  "@
+     dsubh%F0%F1 0,%H2,%L2
+     dsubh%F0%F1 0,%3,%L2
+     drsubh%F0%F2 0,%H1,%L1
+     drsubh%F0%F2 0,%3,%L1"
+  [(set_attr "type" "dpfp_addsub")
+  (set_attr "length" "4,8,4,8")])
+
+;; ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; ;; Peephole for following conversion
+;; ;;                    D0 = D2<op>{reg_pair}3
+;; ;;                    {reg_pair}5 = D0
+;; ;;                    D0 = {reg_pair}6
+;; ;;                            |
+;; ;;                            V
+;; ;;            _________________________________________________________
+;; ;;           / D0             = D2 <op> {regpair3_or_limmreg34}
+;; ;;    ---- +   {reg_pair}5.hi = ( D2<op>{regpair3_or_limmreg34} ).hi
+;; ;;   |       \_________________________________________________________
+;; ;;   |
+;; ;;   |         ________________________________________________________
+;; ;;   |      / {reg_pair}5.lo  = ( D2<op>{regpair3_or_limmreg34} ).lo
+;; ;;   +-----+  D0              = {reg_pair}6
+;; ;;          \ _________________________________________________________
+;; ;;                            ||
+;; ;;                            ||
+;; ;;                            \/
+;; ;;  d<op>{0}{2}h {reg_pair}5.hi, {regpair3_or_limmreg34}.lo, {regpair3_or_limmreg34}.hi
+;; ;;  dexcl{0}    {reg_pair}5.lo, {reg_pair}6.lo, {reg_pair}6.hi
+;; ;; -----------------------------------------------------------------------------------------
+;; ;;  where <op> is one of {+,*,-}
+;; ;;        <opname> is {add,mult,sub}
+;; ;;
+;; ;; NOTE: For rsub insns D2 and {regpair3_or_limmreg34} get interchanged as
+;; ;;       {regpair2_or_limmreg24} and D3
+;; ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; (define_peephole2
+;;   [(parallel [(set (match_operand:DF 0 "register_operand"          "")
+;; 	(match_operator:DF 1 "arc_dpfp_operator" [(match_operand:DF 2 "nonmemory_operand" "")
+;; 			   (match_operand:DF 3 "nonmemory_operand" "")]))
+;; 	     (use (match_operand:SI 4 "" ""))])
+;;   (set (match_operand:DF 5 "register_operand" "")
+;;        (match_dup 0))
+;;   (set (match_dup 0)
+;;        (match_operand:DF 6 "register_operand" ""))
+;;   ]
+;;   "TARGET_DPFP"
+;;   [
+;;   (parallel [(set (match_dup 0)
+;; 		  (match_op_dup:DF 1 [(match_dup 2)
+;; 				   (match_dup 3)]))
+;; 	    (use (match_dup 4))
+;;             (set (match_dup 5)
+;; 		 (match_op_dup:DF  1 [(match_dup 2)
+;; 				   (match_dup 3)]))])
+;;   (parallel [
+;; ;;	    (set (subreg:SI (match_dup 5) 0)
+;; 	    (set (match_dup 7)
+;; 		 (unspec_volatile [(match_dup 0)] VUNSPEC_LR ))
+;; 	    (set (match_dup 0) (match_dup 6))]
+;; 	    )
+;;   ]
+;;   "operands[7] = simplify_gen_subreg(SImode,operands[5],DFmode,0);"
+;;   )
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; Peephole for following conversion
+;;                    D0 = D2<op>{reg_pair}3
+;;                    {reg_pair}6 = D0
+;;                    D0 = {reg_pair}7
+;;                            |
+;;                            V
+;;            _________________________________________________________
+;;           / D0             = D2 <op> {regpair3_or_limmreg34}
+;;    ---- +   {reg_pair}6.hi = ( D2<op>{regpair3_or_limmreg34} ).hi
+;;   |       \_________________________________________________________
+;;   |
+;;   |         ________________________________________________________
+;;   |      / {reg_pair}6.lo  = ( D2<op>{regpair3_or_limmreg34} ).lo
+;;   +-----+  D0              = {reg_pair}7
+;;          \ _________________________________________________________
+;;                            ||
+;;                            ||
+;;                            \/
+;;  d<op>{0}{2}h {reg_pair}6.hi, {regpair3_or_limmreg34}.lo, {regpair3_or_limmreg34}.hi
+;;  dexcl{0}    {reg_pair}6.lo, {reg_pair}7.lo, {reg_pair}7.hi
+;; -----------------------------------------------------------------------------------------
+;;  where <op> is one of {+,*,-}
+;;        <opname> is {add,mult,sub}
+;;
+;; NOTE: For rsub insns D2 and {regpair3_or_limmreg34} get interchanged as
+;;       {regpair2_or_limmreg24} and D3
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+(define_peephole2
+  [(parallel [(set (match_operand:DF 0 "register_operand"          "")
+	(match_operator:DF 1 "arc_dpfp_operator" [(match_operand:DF 2 "nonmemory_operand" "")
+			   (match_operand:DF 3 "nonmemory_operand" "")]))
+	     (use (match_operand:SI 4 "" ""))
+	     (use (match_operand:SI 5 "" ""))
+	     (use (match_operand:SI 6 "" ""))])
+  (set (match_operand:DF 7 "register_operand" "")
+       (match_dup 0))
+  (set (match_dup 0)
+       (match_operand:DF 8 "register_operand" ""))
+  ]
+  "TARGET_DPFP && !TARGET_DPFP_DISABLE_LRSR"
+  [
+  (parallel [(set (match_dup 0)
+		  (match_op_dup:DF 1 [(match_dup 2)
+				   (match_dup 3)]))
+	    (use (match_dup 4))
+	    (use (match_dup 5))
+            (set (match_dup 7)
+		 (match_op_dup:DF  1 [(match_dup 2)
+				   (match_dup 3)]))])
+  (parallel [
+;;	    (set (subreg:SI (match_dup 7) 0)
+	    (set (match_dup 9)
+		 (unspec_volatile:SI [(match_dup 0)] VUNSPEC_LR))
+	    (set (match_dup 0) (match_dup 8))]
+	    )
+  ]
+  "operands[9] = simplify_gen_subreg (SImode, operands[7], DFmode, 0);"
+  )
+
+;; ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; ;; Peephole to generate d<opname>{ij}h a,b,c instructions
+;; ;;                    D0 = D2<op>{reg_pair}3
+;; ;;                    {reg_pair}5 = D0
+;; ;;                            |
+;; ;;                            V
+;; ;;            __________________________________________
+;; ;;           / D0             = D2 <op> {regpair3_or_limmreg34}
+;; ;;    ---- +   {reg_pair}5.hi = ( D2<op>{regpair3_or_limmreg34} ).hi
+;; ;;   |       \__________________________________________
+;; ;;   |
+;; ;;   + ---    {reg_pair}5.lo     = ( D2<op>{regpair3_or_limmreg34} ).lo
+;; ;;                            ||
+;; ;;                            ||
+;; ;;                            \/
+;; ;;  d<op>{0}{2}h {reg_pair}4.hi, {regpair3_or_limmreg34}.lo, {regpair3_or_limmreg34}.hi
+;; ;;  lr    {reg_pair}4.lo, {D2l}
+;; ;; ----------------------------------------------------------------------------------------
+;; ;;  where <op> is one of {+,*,-}
+;; ;;        <opname> is {add,mult,sub}
+;; ;;
+;; ;; NOTE: For rsub insns D2 and {regpair3_or_limmreg34} get interchanged as
+;; ;;       {regpair2_or_limmreg24} and D3
+;; ;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; (define_peephole2
+;;   [(parallel [(set (match_operand:DF 0 "register_operand"          "")
+;; 		   (match_operator:DF 1 "arc_dpfp_operator" [(match_operand:DF 2 "nonmemory_operand" "")
+;; 				      (match_operand:DF 3 "nonmemory_operand" "")]))
+;; 	     (use (match_operand:SI 4 "" ""))])
+;;   (set (match_operand:DF 5 "register_operand" "")
+;;        (match_dup 0))
+;;   ]
+;;   "TARGET_DPFP"
+;;   [
+;;   (parallel [(set (match_dup 0)
+;; 		  (match_op_dup:DF 1 [(match_dup 2)
+;; 				   (match_dup 3)]))
+;; 	    (use (match_dup 4))
+;;             (set (match_dup 5)
+;; 		 (match_op_dup:DF  1 [(match_dup 2)
+;; 				   (match_dup 3)]))])
+;; ;  (set (subreg:SI (match_dup 5) 0)
+;;   (set (match_dup 6)
+;;        (unspec_volatile [(match_dup 0)] VUNSPEC_LR ))
+;;   ]
+;;   "operands[6] = simplify_gen_subreg(SImode,operands[5],DFmode,0);"
+;;   )
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+;; Peephole to generate d<opname>{ij}h a,b,c instructions
+;;                    D0 = D2<op>{reg_pair}3
+;;                    {reg_pair}6 = D0
+;;                            |
+;;                            V
+;;            __________________________________________
+;;           / D0             = D2 <op> {regpair3_or_limmreg34}
+;;    ---- +   {reg_pair}6.hi = ( D2<op>{regpair3_or_limmreg34} ).hi
+;;   |       \__________________________________________
+;;   |
+;;   + ---    {reg_pair}6.lo     = ( D2<op>{regpair3_or_limmreg34} ).lo
+;;                            ||
+;;                            ||
+;;                            \/
+;;  d<op>{0}{2}h {reg_pair}4.hi, {regpair3_or_limmreg34}.lo, {regpair3_or_limmreg34}.hi
+;;  lr    {reg_pair}4.lo, {D2l}
+;; ----------------------------------------------------------------------------------------
+;;  where <op> is one of {+,*,-}
+;;        <opname> is {add,mult,sub}
+;;
+;; NOTE: For rsub insns D2 and {regpair3_or_limmreg34} get interchanged as
+;;       {regpair2_or_limmreg24} and D3
+;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+(define_peephole2
+  [(parallel [(set (match_operand:DF 0 "register_operand"          "")
+		   (match_operator:DF 1 "arc_dpfp_operator" [(match_operand:DF 2 "nonmemory_operand" "")
+				      (match_operand:DF 3 "nonmemory_operand" "")]))
+	     (use (match_operand:SI 4 "" ""))
+	     (use (match_operand:SI 5 "" ""))
+	     (use (match_operand:SI 6 "" ""))])
+  (set (match_operand:DF 7 "register_operand" "")
+       (match_dup 0))
+  ]
+  "TARGET_DPFP  && !TARGET_DPFP_DISABLE_LRSR"
+  [
+  (parallel [(set (match_dup 0)
+		  (match_op_dup:DF 1 [(match_dup 2)
+				   (match_dup 3)]))
+	    (use (match_dup 4))
+	    (use (match_dup 5))
+            (set (match_dup 7)
+		 (match_op_dup:DF  1 [(match_dup 2)
+				   (match_dup 3)]))])
+;  (set (subreg:SI (match_dup 7) 0)
+  (set (match_dup 8)
+       (unspec_volatile:SI [(match_dup 0)] VUNSPEC_LR))
+  ]
+  "operands[8] = simplify_gen_subreg (SImode, operands[7], DFmode, 0);"
+  )
+
+;; ;;            _______________________________________________________
+;; ;;           / D0             = D1 + {regpair2_or_limmreg23}
+;; ;;         +   {reg_pair}4.hi = ( D1 + {regpair2_or_limmreg23} ).hi
+;; ;;           \_______________________________________________________
+;; (define_insn "*daddh_peep2_insn"
+;;   [(parallel [(set (match_operand:DF 0 "arc_double_register_operand" "=D,D")
+;; 		   (plus:DF (match_operand:DF 1 "arc_double_register_operand" "D,D")
+;; 			    (match_operand:DF 2 "nonmemory_operand" "r,G")))
+;; 	     (use (match_operand:SI 3 "" "N,r"))
+;; 	     (set (match_operand:DF 4 "register_operand" "=r,r")
+;; 		  (plus:DF (match_dup 1)
+;; 			   (match_dup 2)))])]
+;;  "TARGET_DPFP"
+;;  "@
+;;     daddh%F0%F1 %H4, %H2, %L2
+;;     daddh%F0%F1 %H4, %3, %L2"
+;;  [(set_attr "type" "dpfp_addsub")
+;;  (set_attr "length" "4,8")]
+;; )
+;;            _______________________________________________________
+;;           / D0             = D1 + {regpair2_or_limmreg23}
+;;         +   {reg_pair}5.hi = ( D1 + {regpair2_or_limmreg23} ).hi
+;;           \_______________________________________________________
+(define_insn "*daddh_peep2_insn"
+  [(parallel [(set (match_operand:DF 0 "arc_double_register_operand" "=D,D")
+		   (plus:DF (match_operand:DF 1 "arc_double_register_operand" "D,D")
+			    (match_operand:DF 2 "nonmemory_operand" "r,G")))
+	     (use (match_operand:SI 3 "" "N,r"))
+	     (use (match_operand:SI 4 "" "N,Q"))
+	     (use (match_operand:SI 5 "" ""))
+	     (set (match_operand:DF 6 "register_operand" "=r,r")
+		  (plus:DF (match_dup 1)
+			   (match_dup 2)))])]
+ "TARGET_DPFP
+  && !(GET_CODE (operands[2]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)"
+ "@
+    daddh%F0%F1 %H6, %H2, %L2
+    daddh%F0%F1 %H6, %3, %L2"
+ [(set_attr "type" "dpfp_addsub")
+ (set_attr "length" "4,8")]
+)
+
+;;            _______________________________________________________
+;;           / D0             = D1 * {regpair2_or_limmreg23}
+;;         +   {reg_pair}5.hi = ( D1 * {regpair2_or_limmreg23} ).hi
+;;           \_______________________________________________________
+(define_insn "*dmulh_peep2_insn"
+  [(parallel [(set (match_operand:DF 0 "arc_double_register_operand" "=D,D")
+		   (mult:DF (match_operand:DF 1 "arc_double_register_operand" "D,D")
+			    (match_operand:DF 2 "nonmemory_operand" "r,G")))
+	     (use (match_operand:SI 3 "" "N,r"))
+	     (use (match_operand:SI 4 "" "N,Q"))
+	     (use (match_operand:SI 5 "" ""))
+	     (set (match_operand:DF 6 "register_operand" "=r,r")
+		  (mult:DF (match_dup 1)
+				      (match_dup 2)))])]
+ "TARGET_DPFP
+  && !(GET_CODE (operands[2]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)"
+ "@
+    dmulh%F0%F1 %H6, %H2, %L2
+    dmulh%F0%F1 %H6, %3, %L2"
+ [(set_attr "type" "dpfp_mult")
+ (set_attr "length" "4,8")]
+)
+
+;;            _______________________________________________________
+;;           / D0             = D1 - {regpair2_or_limmreg23}
+;;         +   {reg_pair}5.hi = ( D1 - {regpair2_or_limmreg23} ).hi
+;;           \_______________________________________________________
+;;  OR
+;;            _______________________________________________________
+;;           / D0             = {regpair1_or_limmreg13} - D2
+;;         +   {reg_pair}5.hi = ( {regpair1_or_limmreg13} ).hi - D2
+;;           \_______________________________________________________
+(define_insn "*dsubh_peep2_insn"
+  [(parallel [(set (match_operand:DF 0 "arc_double_register_operand" "=D,D,D,D")
+		   (minus:DF (match_operand:DF 1 "nonmemory_operand" "D,D,r,G")
+			     (match_operand:DF 2 "nonmemory_operand" "r,G,D,D")))
+	     (use (match_operand:SI 3 "" "N,r,N,r"))
+	     (use (match_operand:SI 4 "" "N,Q,N,Q"))
+	     (use (match_operand:SI 5 "" ""))
+	     (set (match_operand:DF 6 "register_operand" "=r,r,r,r")
+		  (minus:DF (match_dup 1)
+				      (match_dup 2)))])]
+ "TARGET_DPFP
+  && !(GET_CODE (operands[2]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)
+  && !(GET_CODE (operands[1]) == CONST_DOUBLE && GET_CODE (operands[3]) == CONST_INT)"
+ "@
+  dsubh%F0%F1 %H6, %H2, %L2
+  dsubh%F0%F1 %H6, %3, %L2
+  drsubh%F0%F2 %H6, %H1, %L1
+  drsubh%F0%F2 %H6, %3, %L1"
+ [(set_attr "type" "dpfp_addsub")
+  (set_attr "length" "4,8,4,8")]
+)
diff -Nu --exclude arc.c --exclude arc.md emptydir/predicates.md config/arc/predicates.md
--- emptydir/predicates.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/predicates.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,802 @@ 
+;; Predicate definitions for Synopsys DesignWare ARC.
+;; Copyright (C) 2007-2012 Free Software Foundation, Inc.
+;;
+;; This file is part of GCC.
+;;
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+;;
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+;;
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_predicate "dest_reg_operand"
+  (match_code "reg,subreg")
+{
+  rtx op0 = op;
+
+  if (GET_CODE (op0) == SUBREG)
+    op0 = SUBREG_REG (op0);
+  if (REG_P (op0) && REGNO (op0) < FIRST_PSEUDO_REGISTER
+      && TEST_HARD_REG_BIT (reg_class_contents[ALL_CORE_REGS],
+			    REGNO (op0))
+      && !TEST_HARD_REG_BIT (reg_class_contents[WRITABLE_CORE_REGS],
+			    REGNO (op0)))
+    return 0;
+  return register_operand (op, mode);
+})
+
+(define_predicate "mpy_dest_reg_operand"
+  (match_code "reg,subreg")
+{
+  rtx op0 = op;
+
+  if (GET_CODE (op0) == SUBREG)
+    op0 = SUBREG_REG (op0);
+  if (REG_P (op0) && REGNO (op0) < FIRST_PSEUDO_REGISTER
+      && TEST_HARD_REG_BIT (reg_class_contents[ALL_CORE_REGS],
+			    REGNO (op0))
+      /* Make sure the destination register is not LP_COUNT.  */
+      && !TEST_HARD_REG_BIT (reg_class_contents[MPY_WRITABLE_CORE_REGS],
+			    REGNO (op0)))
+    return 0;
+  return register_operand (op, mode);
+})
+
+
+;; Returns 1 if OP is a symbol reference.
+(define_predicate "symbolic_operand"
+  (match_code "symbol_ref, label_ref, const")
+)
+
+;; Acceptable arguments to the call insn.
+(define_predicate "call_address_operand"
+  (ior (match_code "const_int, reg")
+       (match_operand 0 "symbolic_operand")
+       (match_test "CONSTANT_P (op)
+		    && arc_legitimate_constant_p (VOIDmode, op)"))
+)
+
+(define_predicate "call_operand"
+  (and (match_code "mem")
+       (match_test "call_address_operand (XEXP (op, 0), mode)"))
+)
+
+;; Return true if OP is an unsigned 6-bit immediate (u6) value.
+(define_predicate "u6_immediate_operand"
+  (and (match_code "const_int")
+       (match_test "UNSIGNED_INT6 (INTVAL (op))"))
+)
+
+;; Return true if OP is a short immediate (shimm) value.
+(define_predicate "short_immediate_operand"
+  (and (match_code "const_int")
+       (match_test "SMALL_INT (INTVAL (op))"))
+)
+
+(define_predicate "p2_immediate_operand"
+  (and (match_code "const_int")
+       (match_test "((INTVAL (op) - 1) & INTVAL (op)) == 0")
+       (match_test "INTVAL (op)"))
+)
+
+;; Return true if OP will require a long immediate (limm) value.
+;; This is currently only used when calculating length attributes.
+(define_predicate "long_immediate_operand"
+  (match_code "symbol_ref, label_ref, const, const_double, const_int")
+{
+  switch (GET_CODE (op))
+    {
+    case SYMBOL_REF :
+    case LABEL_REF :
+    case CONST :
+      return 1;
+    case CONST_INT :
+      return !SIGNED_INT12 (INTVAL (op));
+    case CONST_DOUBLE :
+      /* These can happen because large unsigned 32 bit constants are
+	 represented this way (the multiplication patterns can cause these
+	 to be generated).  They also occur for SFmode values.  */
+      return 1;
+    default:
+      break;
+    }
+  return 0;
+}
+)
+
+;; Return true if OP is a MEM that when used as a load or store address will
+;; require an 8 byte insn.
+;; Load and store instructions don't allow the same possibilities but they're
+;; similar enough that this one function will do.
+;; This is currently only used when calculating length attributes.
+(define_predicate "long_immediate_loadstore_operand"
+  (match_code "mem")
+{
+  op = XEXP (op, 0);
+  switch (GET_CODE (op))
+    {
+    case SYMBOL_REF :
+    case LABEL_REF :
+    case CONST :
+      return 1;
+    case CONST_INT :
+      /* This must be handled as "st c,[limm]".  Ditto for load.
+	 Technically, the assembler could translate some possibilities to
+	 "st c,[limm/2 + limm/2]" if limm/2 will fit in a shimm, but we don't
+	 assume that it does.  */
+      return 1;
+    case CONST_DOUBLE :
+      /* These can happen because large unsigned 32 bit constants are
+	 represented this way (the multiplication patterns can cause these
+	 to be generated).  They also occur for SFmode values.  */
+      return 1;
+    case REG :
+      return 0;
+    case PLUS :
+      {
+	rtx x = XEXP (op, 1);
+
+	if (GET_CODE (x) == CONST)
+	  {
+	    x = XEXP (x, 0);
+	    if (GET_CODE (x) == PLUS)
+	      x = XEXP (x, 0);
+	  }
+	if (CONST_INT_P (x))
+	  return !SMALL_INT (INTVAL (x));
+	else if (GET_CODE (x) == SYMBOL_REF)
+	  return TARGET_NO_SDATA_SET || !SYMBOL_REF_SMALL_P (x);
+	return 0;
+      }
+    default:
+      break;
+    }
+  return 0;
+}
+)
+
+;; Return true if OP is any of R0-R3,R12-R15 for ARCompact 16-bit
+;; instructions
+(define_predicate "compact_register_operand"
+  (match_code "reg, subreg")
+  {
+     if ((GET_MODE (op) != mode) && (mode != VOIDmode))
+         return 0;
+
+      return ((GET_CODE (op) == REG)
+	      && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+		  || COMPACT_GP_REG_P (REGNO (op))));
+  }
+)
+
+;; Return true if OP is an acceptable memory operand for ARCompact
+;; 16-bit load instructions.
+(define_predicate "compact_load_memory_operand"
+  (match_code "mem")
+{
+  rtx addr, plus0, plus1;
+  int size, off;
+
+  /* Eliminate non-memory operations.  */
+  if (GET_CODE (op) != MEM)
+    return 0;
+
+  /* .di instructions have no 16-bit form.  */
+  if (MEM_VOLATILE_P (op) && !TARGET_VOLATILE_CACHE_SET)
+     return 0;
+
+  if (mode == VOIDmode)
+    mode = GET_MODE (op);
+
+  size = GET_MODE_SIZE (mode);
+
+  /* dword operations really put out 2 instructions, so eliminate them.  */
+  if (size > UNITS_PER_WORD)
+    return 0;
+
+  /* Decode the address now.  */
+  addr = XEXP (op, 0);
+  switch (GET_CODE (addr))
+    {
+    case REG:
+      return (REGNO (addr) >= FIRST_PSEUDO_REGISTER
+	      || COMPACT_GP_REG_P (REGNO (addr))
+	      || (SP_REG_P (REGNO (addr)) && (size != 2)));
+	/* Reverting for the moment since ldw_s does not have sp as a valid
+	   parameter.  */
+    case PLUS:
+      plus0 = XEXP (addr, 0);
+      plus1 = XEXP (addr, 1);
+
+      if ((GET_CODE (plus0) == REG)
+          && ((REGNO (plus0) >= FIRST_PSEUDO_REGISTER)
+              || COMPACT_GP_REG_P (REGNO (plus0)))
+          && ((GET_CODE (plus1) == REG)
+              && ((REGNO (plus1) >= FIRST_PSEUDO_REGISTER)
+                  || COMPACT_GP_REG_P (REGNO (plus1)))))
+        {
+          return 1;
+        }
+
+      if ((GET_CODE (plus0) == REG)
+          && ((REGNO (plus0) >= FIRST_PSEUDO_REGISTER)
+              || COMPACT_GP_REG_P (REGNO (plus0)))
+          && (GET_CODE (plus1) == CONST_INT))
+        {
+          off = INTVAL (plus1);
+
+          /* Negative offset is not supported in 16-bit load/store insns.  */
+          if (off < 0)
+            return 0;
+
+          switch (size)
+            {
+            case 1:
+              return (off < 32);
+            case 2:
+              return ((off < 64) && (off % 2 == 0));
+            case 4:
+              return ((off < 128) && (off % 4 == 0));
+            }
+        }
+
+      if ((GET_CODE (plus0) == REG)
+          && ((REGNO (plus0) >= FIRST_PSEUDO_REGISTER)
+              || SP_REG_P (REGNO (plus0)))
+          && (GET_CODE (plus1) == CONST_INT))
+        {
+          off = INTVAL (plus1);
+          return ((size != 2) && (off >= 0 && off < 128) && (off % 4 == 0));
+        }
+    default:
+      break;
+      /* TODO: 'gp' and 'pcl' are to be supported as base address operands
+	 for 16-bit load instructions.  */
+    }
+  return 0;
+
+}
+)
+
+;; Return true if OP is an acceptable memory operand for ARCompact
+;; 16-bit store instructions
+(define_predicate "compact_store_memory_operand"
+  (match_code "mem")
+{
+  rtx addr, plus0, plus1;
+  int size, off;
+
+  if (mode == VOIDmode)
+    mode = GET_MODE (op);
+
+  /* .di instructions have no 16-bit form.  */
+  if (MEM_VOLATILE_P (op) && !TARGET_VOLATILE_CACHE_SET)
+     return 0;
+
+  size = GET_MODE_SIZE (mode);
+
+  /* dword operations really put out 2 instructions, so eliminate them.  */
+  if (size > UNITS_PER_WORD)
+    return 0;
+
+  /* Decode the address now.  */
+  addr = XEXP (op, 0);
+  switch (GET_CODE (addr))
+    {
+    case REG:
+      return (REGNO (addr) >= FIRST_PSEUDO_REGISTER
+                || COMPACT_GP_REG_P (REGNO (addr))
+	      || (SP_REG_P (REGNO (addr)) && (size != 2)));
+	/* stw_s does not support SP as a parameter.  */
+    case PLUS:
+      plus0 = XEXP (addr, 0);
+      plus1 = XEXP (addr, 1);
+
+      if ((GET_CODE (plus0) == REG)
+          && ((REGNO (plus0) >= FIRST_PSEUDO_REGISTER)
+              || COMPACT_GP_REG_P (REGNO (plus0)))
+          && (GET_CODE (plus1) == CONST_INT))
+        {
+          off = INTVAL (plus1);
+
+          /* Negative offset is not supported in 16-bit load/store insns.  */
+          if (off < 0)
+            return 0;
+
+          switch (size)
+            {
+            case 1:
+              return (off < 32);
+            case 2:
+              return ((off < 64) && (off % 2 == 0));
+            case 4:
+              return ((off < 128) && (off % 4 == 0));
+            }
+        }
+
+      if ((GET_CODE (plus0) == REG)
+          && ((REGNO (plus0) >= FIRST_PSEUDO_REGISTER)
+              || SP_REG_P (REGNO (plus0)))
+          && (GET_CODE (plus1) == CONST_INT))
+        {
+          off = INTVAL (plus1);
+
+          return ((size != 2) && (off >= 0 && off < 128) && (off % 4 == 0));
+        }
+    default:
+      break;
+    }
+  return 0;
+  }
+)
+
+;; Return true if OP is an acceptable argument for a single word
+;;   move source.
+(define_predicate "move_src_operand"
+  (match_code "symbol_ref, label_ref, const, const_int, const_double, reg, subreg, mem")
+{
+  switch (GET_CODE (op))
+    {
+    case SYMBOL_REF :
+    case LABEL_REF :
+    case CONST :
+      return (!flag_pic || arc_legitimate_pic_operand_p (op));
+    case CONST_INT :
+      return (LARGE_INT (INTVAL (op)));
+    case CONST_DOUBLE :
+      /* We can handle DImode integer constants in SImode if the value
+	 (signed or unsigned) will fit in 32 bits.  This is needed because
+	 large unsigned 32 bit constants are represented as CONST_DOUBLEs.  */
+      if (mode == SImode)
+	return arc_double_limm_p (op);
+      /* We can handle 32 bit floating point constants.  */
+      if (mode == SFmode)
+	return GET_MODE (op) == SFmode;
+      return 0;
+    case REG :
+      return register_operand (op, mode);
+    case SUBREG :
+      /* (subreg (mem ...) ...) can occur here if the inner part was once a
+	 pseudo-reg and is now a stack slot.  */
+      if (GET_CODE (SUBREG_REG (op)) == MEM)
+	return address_operand (XEXP (SUBREG_REG (op), 0), mode);
+      else
+	return register_operand (op, mode);
+    case MEM :
+      return address_operand (XEXP (op, 0), mode);
+    default :
+      return 0;
+    }
+}
+)
+
+;; Return true if OP is an acceptable argument for a double word
+;; move source.
+(define_predicate "move_double_src_operand"
+  (match_code "reg, subreg, mem, const_int, const_double")
+{
+  switch (GET_CODE (op))
+    {
+    case REG :
+      return register_operand (op, mode);
+    case SUBREG :
+      /* (subreg (mem ...) ...) can occur here if the inner part was once a
+	 pseudo-reg and is now a stack slot.  */
+      if (GET_CODE (SUBREG_REG (op)) == MEM)
+	return move_double_src_operand (SUBREG_REG (op), mode);
+      else
+	return register_operand (op, mode);
+    case MEM :
+      return address_operand (XEXP (op, 0), mode);
+    case CONST_INT :
+    case CONST_DOUBLE :
+      return 1;
+    default :
+      return 0;
+    }
+}
+)
+
+;; Return true if OP is an acceptable argument for a move destination.
+(define_predicate "move_dest_operand"
+  (match_code "reg, subreg, mem")
+{
+  switch (GET_CODE (op))
+    {
+    case REG :
+      /* The program counter register cannot be the target of a move.
+	 It is a read-only register.  */
+      if (REGNO (op) == PROGRAM_COUNTER_REGNO)
+	return 0;
+      else if (TARGET_MULMAC_32BY16_SET
+	       && (REGNO (op) == 56 || REGNO (op) == 57))
+	return 0;
+      else if (TARGET_MUL64_SET
+	       && (REGNO (op) == 57 || REGNO (op) == 58 || REGNO (op) == 59))
+	return 0;
+      else
+	return dest_reg_operand (op, mode);
+    case SUBREG :
+      /* (subreg (mem ...) ...) can occur here if the inner part was once a
+	 pseudo-reg and is now a stack slot.  */
+      if (GET_CODE (SUBREG_REG (op)) == MEM)
+	return address_operand (XEXP (SUBREG_REG (op), 0), mode);
+      else
+	return dest_reg_operand (op, mode);
+    case MEM :
+      {
+	rtx addr = XEXP (op, 0);
+
+	if (GET_CODE (addr) == PLUS
+	    && (GET_CODE (XEXP (addr, 0)) == MULT
+		|| (!CONST_INT_P (XEXP (addr, 1))
+		    && (TARGET_NO_SDATA_SET
+			|| GET_CODE (XEXP (addr, 1)) != SYMBOL_REF
+			|| !SYMBOL_REF_SMALL_P (XEXP (addr, 1))))))
+	  return 0;
+	if ((GET_CODE (addr) == PRE_MODIFY || GET_CODE (addr) == POST_MODIFY)
+	    && (GET_CODE (XEXP (addr, 1)) != PLUS
+		|| !CONST_INT_P (XEXP (XEXP (addr, 1), 1))))
+	  return 0;
+	return address_operand (addr, mode);
+      }
+    default :
+      return 0;
+    }
+
+}
+)
+
+;; Return true if OP is valid load with update operand.
+(define_predicate "load_update_operand"
+  (match_code "mem")
+{
+  if (GET_CODE (op) != MEM
+      || GET_MODE (op) != mode)
+    return 0;
+  op = XEXP (op, 0);
+  if (GET_CODE (op) != PLUS
+      || GET_MODE (op) != Pmode
+      || !register_operand (XEXP (op, 0), Pmode)
+      || !nonmemory_operand (XEXP (op, 1), Pmode))
+    return 0;
+  return 1;
+
+}
+)
+
+;; Return true if OP is valid store with update operand.
+(define_predicate "store_update_operand"
+  (match_code "mem")
+{
+  if (GET_CODE (op) != MEM
+      || GET_MODE (op) != mode)
+    return 0;
+  op = XEXP (op, 0);
+  if (GET_CODE (op) != PLUS
+      || GET_MODE (op) != Pmode
+      || !register_operand (XEXP (op, 0), Pmode)
+      || !(GET_CODE (XEXP (op, 1)) == CONST_INT
+	   && SMALL_INT (INTVAL (XEXP (op, 1)))))
+    return 0;
+  return 1;
+}
+)
+
+;; Return true if OP is a non-volatile non-immediate operand.
+;; Volatile memory refs require a special "cache-bypass" instruction,
+;; and only the standard movXX patterns are set up to handle them.
+(define_predicate "nonvol_nonimm_operand"
+  (and (match_code "subreg, reg, mem")
+       (match_test "(GET_CODE (op) != MEM || !MEM_VOLATILE_P (op)) && nonimmediate_operand (op, mode)"))
+)
+
+;; Return 1 if OP is a comparison operator valid for the mode of CC.
+;; This allows the use of MATCH_OPERATOR to recognize all the branch insns.
+
+(define_predicate "proper_comparison_operator"
+  (match_code "eq, ne, le, lt, ge, gt, leu, ltu, geu, gtu, unordered, ordered, uneq, unge, ungt, unle, unlt, ltgt")
+{
+  enum rtx_code code = GET_CODE (op);
+
+  if (!COMPARISON_P (op))
+    return 0;
+
+  /* After generic flag-setting insns, we can use eq / ne / pl / mi / pnz.
+     There are some creative uses for hi / ls after shifts, but these are
+     hard to understand for the compiler and could be at best the target of
+     a peephole.  */
+  switch (GET_MODE (XEXP (op, 0)))
+    {
+    case CC_ZNmode:
+      return (code == EQ || code == NE || code == GE || code == LT
+	      || code == GT);
+    case CC_Zmode:
+      return code == EQ || code == NE;
+    case CC_Cmode:
+      return code == LTU || code == GEU;
+    case CC_FP_GTmode:
+      return code == GT || code == UNLE;
+    case CC_FP_GEmode:
+      return code == GE || code == UNLT;
+    case CC_FP_ORDmode:
+      return code == ORDERED || code == UNORDERED;
+    case CC_FP_UNEQmode:
+      return code == UNEQ || code == LTGT;
+    case CC_FPXmode:
+      return (code == EQ || code == NE || code == UNEQ || code == LTGT
+	      || code == ORDERED || code == UNORDERED);
+
+    case CCmode:
+    case SImode: /* Used for BRcc.  */
+      return 1;
+    /* From combiner.  */
+    case QImode: case HImode: case DImode: case SFmode: case DFmode:
+      return 0;
+    default:
+      gcc_unreachable ();
+  }
+})
+
+(define_predicate "equality_comparison_operator"
+  (match_code "eq, ne"))
+
+(define_predicate "brcc_nolimm_operator"
+  (ior (match_test "REG_P (XEXP (op, 1))")
+       (and (match_code "eq, ne, lt, ge, ltu, geu")
+	    (match_test "u6_immediate_operand (XEXP (op, 1), SImode)"))
+       (and (match_code "le, gt, leu, gtu")
+	    (match_test "UNSIGNED_INT6 (INTVAL (XEXP (op, 1)) + 1)"))))
+
+;; Return TRUE if this is the condition code register; if we aren't given
+;; a mode, accept any CCmode register.
+(define_special_predicate "cc_register"
+  (match_code "reg")
+{
+  if (mode == VOIDmode)
+    {
+      mode = GET_MODE (op);
+      if (GET_MODE_CLASS (mode) != MODE_CC)
+        return FALSE;
+    }
+
+  if (mode == GET_MODE (op) && GET_CODE (op) == REG && REGNO (op) == CC_REG)
+    return TRUE;
+
+  return FALSE;
+})
+
+;; Return TRUE if this is the condition code register; if we aren't given
+;; a mode, accept any CCmode register.  If we are given a mode, accept
+;; modes that set a subset of flags.
+(define_special_predicate "cc_set_register"
+  (match_code "reg")
+{
+  enum machine_mode rmode = GET_MODE (op);
+
+  if (mode == VOIDmode)
+    {
+      mode = rmode;
+      if (GET_MODE_CLASS (mode) != MODE_CC)
+        return FALSE;
+    }
+
+  if (REGNO (op) != CC_REG)
+    return FALSE;
+  if (mode == rmode
+      || (mode == CC_ZNmode && rmode == CC_Zmode)
+      || (mode == CCmode && rmode == CC_Zmode)
+      || (mode == CCmode && rmode == CC_ZNmode)
+      || (mode == CCmode && rmode == CC_Cmode))
+    return TRUE;
+
+  return FALSE;
+})
+
+;; Accept CC_REG in modes which provide the flags needed for MODE.
+(define_special_predicate "cc_use_register"
+  (match_code "reg")
+{
+  if (REGNO (op) != CC_REG)
+    return 0;
+  if (GET_MODE (op) == mode)
+    return 1;
+  switch (mode)
+    {
+    case CC_Zmode:
+      if (GET_MODE (op) == CC_ZNmode)
+	return 1;
+      /* Fall through.  */
+    case CC_ZNmode: case CC_Cmode:
+      return GET_MODE (op) == CCmode;
+    default:
+      gcc_unreachable ();
+    }
+})
+
+(define_special_predicate "zn_compare_operator"
+  (match_code "compare")
+{
+  return GET_MODE (op) == CC_ZNmode || GET_MODE (op) == CC_Zmode;
+})
+
+;; Return true if OP is a shift operator.
+(define_predicate "shift_operator"
+  (match_code "ashiftrt, lshiftrt, ashift")
+)
+
+;; Return true if OP is a left shift operator that can be implemented in
+;; four insn words or less without a barrel shifter or multiplier.
+(define_predicate "shiftl4_operator"
+  (and (match_code "ashift")
+       (match_test "const_int_operand (XEXP (op, 1), VOIDmode)")
+       (match_test "UINTVAL (XEXP (op, 1)) <= 9U
+		    || INTVAL (XEXP (op, 1)) == 29
+		    || INTVAL (XEXP (op, 1)) == 30
+		    || INTVAL (XEXP (op, 1)) == 31")))
+
+;; Return true if OP is a right shift operator that can be implemented in
+;; four insn words or less without a barrel shifter or multiplier.
+(define_predicate "shiftr4_operator"
+  (and (match_code "ashiftrt, lshiftrt")
+       (match_test "const_int_operand (XEXP (op, 1), VOIDmode)")
+       (match_test "UINTVAL (XEXP (op, 1)) <= 4U
+		    || INTVAL (XEXP (op, 1)) == 30
+		    || INTVAL (XEXP (op, 1)) == 31")))
+
+;; Return true if OP is a shift operator that can be implemented in
+;; four insn words or less without a barrel shifter or multiplier.
+(define_predicate "shift4_operator"
+  (ior (match_operand 0 "shiftl4_operator")
+       (match_operand 0 "shiftr4_operator")))
+
+(define_predicate "commutative_operator"
+  (ior (match_code "plus,ior,xor,and")
+       (and (match_code "mult") (match_test "TARGET_ARC700"))
+       (and (match_code "ss_plus")
+	    (match_test "TARGET_ARC700 || TARGET_EA_SET")))
+)
+
+(define_predicate "commutative_operator_sans_mult"
+  (ior (match_code "plus,ior,xor,and")
+       (and (match_code "ss_plus")
+	    (match_test "TARGET_ARC700 || TARGET_EA_SET")))
+)
+
+(define_predicate "mult_operator"
+    (and (match_code "mult") (match_test "TARGET_ARC700"))
+)
+
+(define_predicate "noncommutative_operator"
+  (ior (match_code "minus,ashift,ashiftrt,lshiftrt,rotatert")
+       (and (match_code "ss_minus")
+	    (match_test "TARGET_ARC700 || TARGET_EA_SET")))
+)
+
+(define_predicate "unary_operator"
+  (ior (match_code "abs,neg,not,sign_extend,zero_extend")
+       (and (ior (match_code "ss_neg")
+		 (and (match_code "ss_truncate")
+		      (match_test "GET_MODE (XEXP (op, 0)) == HImode")))
+	    (match_test "TARGET_ARC700 || TARGET_EA_SET")))
+)
+
+(define_predicate "_2_4_8_operand"
+  (and (match_code "const_int")
+       (match_test "INTVAL (op) == 2 || INTVAL (op) == 4 || INTVAL (op) == 8"))
+)
+
+(define_predicate "arc_double_register_operand"
+  (match_code "reg")
+{
+  if ((GET_MODE (op) != mode) && (mode != VOIDmode))
+    return 0;
+
+  return (GET_CODE (op) == REG
+		   && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+			     || REGNO_REG_CLASS (REGNO (op)) == DOUBLE_REGS));
+})
+
+(define_predicate "shouldbe_register_operand"
+  (match_code "reg,subreg,mem")
+{
+  return ((reload_in_progress || reload_completed)
+	  ? general_operand : register_operand) (op, mode);
+})
+
+(define_predicate "vector_register_operand"
+  (match_code "reg")
+{
+  if ((GET_MODE (op) != mode) && (mode != VOIDmode))
+    return 0;
+
+  return (GET_CODE (op) == REG
+	  && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+	      || REGNO_REG_CLASS (REGNO (op)) == SIMD_VR_REGS));
+})
+
+(define_predicate "vector_register_or_memory_operand"
+  (ior (match_code "reg")
+       (match_code "mem"))
+{
+  if ((GET_MODE (op) != mode) && (mode != VOIDmode))
+    return 0;
+
+  if ((GET_CODE (op) == MEM)
+      && (mode == V8HImode)
+      && GET_CODE (XEXP (op, 0)) == REG)
+    return 1;
+
+  return (GET_CODE (op) == REG
+	  && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+	      || REGNO_REG_CLASS (REGNO (op)) == SIMD_VR_REGS));
+})
+
+(define_predicate "arc_dpfp_operator"
+  (match_code "plus,mult,minus")
+)
+
+(define_predicate "arc_simd_dma_register_operand"
+  (match_code "reg")
+{
+  if ((GET_MODE (op) != mode) && (mode != VOIDmode))
+    return 0;
+
+  return (GET_CODE (op) == REG
+	  && (REGNO (op) >= FIRST_PSEUDO_REGISTER
+	      || REGNO_REG_CLASS (REGNO (op)) == SIMD_DMA_CONFIG_REGS));
+})
+
+(define_predicate "acc1_operand"
+  (and (match_code "reg")
+       (match_test "REGNO (op) == (TARGET_BIG_ENDIAN ? 56 : 57)")))
+
+(define_predicate "acc2_operand"
+  (and (match_code "reg")
+       (match_test "REGNO (op) == (TARGET_BIG_ENDIAN ? 57 : 56)")))
+
+(define_predicate "mlo_operand"
+  (and (match_code "reg")
+       (match_test "REGNO (op) == (TARGET_BIG_ENDIAN ? 59 : 58)")))
+
+(define_predicate "mhi_operand"
+  (and (match_code "reg")
+       (match_test "REGNO (op) == (TARGET_BIG_ENDIAN ? 58 : 59)")))
+
+(define_predicate "extend_operand"
+  (ior (match_test "register_operand (op, mode)")
+       (and (match_test "immediate_operand (op, mode)")
+	    (not (match_test "const_int_operand (op, mode)")))))
+
+(define_predicate "millicode_store_operation"
+  (match_code "parallel")
+{
+  return arc_check_millicode (op, 0, 0);
+})
+
+(define_predicate "millicode_load_operation"
+  (match_code "parallel")
+{
+  return arc_check_millicode (op, 2, 2);
+})
+
+(define_predicate "millicode_load_clob_operation"
+  (match_code "parallel")
+{
+  return arc_check_millicode (op, 0, 1);
+})
+
+(define_special_predicate "immediate_usidi_operand"
+  (if_then_else
+    (match_code "const_int")
+    (match_test "INTVAL (op) >= 0")
+    (and (match_test "const_double_operand (op, mode)")
+	 (match_test "CONST_DOUBLE_HIGH (op) == 0"))))
diff -Nu --exclude arc.c --exclude arc.md emptydir/simdext.md config/arc/simdext.md
--- emptydir/simdext.md	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/simdext.md	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,1313 @@ 
+;; Machine description of the Synopsys DesignWare ARC cpu for GNU C compiler
+;; Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+;; This file is part of GCC.
+
+;; GCC is free software; you can redistribute it and/or modify
+;; it under the terms of the GNU General Public License as published by
+;; the Free Software Foundation; either version 3, or (at your option)
+;; any later version.
+
+;; GCC is distributed in the hope that it will be useful,
+;; but WITHOUT ANY WARRANTY; without even the implied warranty of
+;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+;; GNU General Public License for more details.
+
+;; You should have received a copy of the GNU General Public License
+;; along with GCC; see the file COPYING3.  If not see
+;; <http://www.gnu.org/licenses/>.
+
+(define_constants
+  [
+  ;; Va, Vb, Vc builtins
+  (UNSPEC_ARC_SIMD_VADDAW     1000)
+  (UNSPEC_ARC_SIMD_VADDW      1001)
+  (UNSPEC_ARC_SIMD_VAVB       1002)
+  (UNSPEC_ARC_SIMD_VAVRB      1003)
+  (UNSPEC_ARC_SIMD_VDIFAW     1004)
+  (UNSPEC_ARC_SIMD_VDIFW      1005)
+  (UNSPEC_ARC_SIMD_VMAXAW     1006)
+  (UNSPEC_ARC_SIMD_VMAXW      1007)
+  (UNSPEC_ARC_SIMD_VMINAW     1008)
+  (UNSPEC_ARC_SIMD_VMINW      1009)
+  (UNSPEC_ARC_SIMD_VMULAW     1010)
+  (UNSPEC_ARC_SIMD_VMULFAW    1011)
+  (UNSPEC_ARC_SIMD_VMULFW     1012)
+  (UNSPEC_ARC_SIMD_VMULW      1013)
+  (UNSPEC_ARC_SIMD_VSUBAW     1014)
+  (UNSPEC_ARC_SIMD_VSUBW      1015)
+  (UNSPEC_ARC_SIMD_VSUMMW     1016)
+  (UNSPEC_ARC_SIMD_VAND       1017)
+  (UNSPEC_ARC_SIMD_VANDAW     1018)
+  (UNSPEC_ARC_SIMD_VBIC       1019)
+  (UNSPEC_ARC_SIMD_VBICAW     1020)
+  (UNSPEC_ARC_SIMD_VOR        1021)
+  (UNSPEC_ARC_SIMD_VXOR       1022)
+  (UNSPEC_ARC_SIMD_VXORAW     1023)
+  (UNSPEC_ARC_SIMD_VEQW       1024)
+  (UNSPEC_ARC_SIMD_VLEW       1025)
+  (UNSPEC_ARC_SIMD_VLTW       1026)
+  (UNSPEC_ARC_SIMD_VNEW       1027)
+  (UNSPEC_ARC_SIMD_VMR1AW     1028)
+  (UNSPEC_ARC_SIMD_VMR1W      1029)
+  (UNSPEC_ARC_SIMD_VMR2AW     1030)
+  (UNSPEC_ARC_SIMD_VMR2W      1031)
+  (UNSPEC_ARC_SIMD_VMR3AW     1032)
+  (UNSPEC_ARC_SIMD_VMR3W      1033)
+  (UNSPEC_ARC_SIMD_VMR4AW     1034)
+  (UNSPEC_ARC_SIMD_VMR4W      1035)
+  (UNSPEC_ARC_SIMD_VMR5AW     1036)
+  (UNSPEC_ARC_SIMD_VMR5W      1037)
+  (UNSPEC_ARC_SIMD_VMR6AW     1038)
+  (UNSPEC_ARC_SIMD_VMR6W      1039)
+  (UNSPEC_ARC_SIMD_VMR7AW     1040)
+  (UNSPEC_ARC_SIMD_VMR7W      1041)
+  (UNSPEC_ARC_SIMD_VMRB       1042)
+  (UNSPEC_ARC_SIMD_VH264F     1043)
+  (UNSPEC_ARC_SIMD_VH264FT    1044)
+  (UNSPEC_ARC_SIMD_VH264FW    1045)
+  (UNSPEC_ARC_SIMD_VVC1F      1046)
+  (UNSPEC_ARC_SIMD_VVC1FT     1047)
+  ;; Va, Vb, rc/limm builtins
+  (UNSPEC_ARC_SIMD_VBADDW     1050)
+  (UNSPEC_ARC_SIMD_VBMAXW     1051)
+  (UNSPEC_ARC_SIMD_VBMINW     1052)
+  (UNSPEC_ARC_SIMD_VBMULAW    1053)
+  (UNSPEC_ARC_SIMD_VBMULFW    1054)
+  (UNSPEC_ARC_SIMD_VBMULW     1055)
+  (UNSPEC_ARC_SIMD_VBRSUBW    1056)
+  (UNSPEC_ARC_SIMD_VBSUBW     1057)
+
+  ;; Va, Vb, Ic builtins
+  (UNSPEC_ARC_SIMD_VASRW      1060)
+  (UNSPEC_ARC_SIMD_VSR8       1061)
+  (UNSPEC_ARC_SIMD_VSR8AW     1062)
+
+  ;; Va, Vb, Ic builtins
+  (UNSPEC_ARC_SIMD_VASRRWi    1065)
+  (UNSPEC_ARC_SIMD_VASRSRWi   1066)
+  (UNSPEC_ARC_SIMD_VASRWi     1067)
+  (UNSPEC_ARC_SIMD_VASRPWBi   1068)
+  (UNSPEC_ARC_SIMD_VASRRPWBi  1069)
+  (UNSPEC_ARC_SIMD_VSR8AWi    1070)
+  (UNSPEC_ARC_SIMD_VSR8i      1071)
+
+  ;; Va, Vb, u8 (simm) builtins
+  (UNSPEC_ARC_SIMD_VMVAW      1075)
+  (UNSPEC_ARC_SIMD_VMVW       1076)
+  (UNSPEC_ARC_SIMD_VMVZW      1077)
+  (UNSPEC_ARC_SIMD_VD6TAPF    1078)
+
+  ;; Va, rlimm, u8 (simm) builtins
+  (UNSPEC_ARC_SIMD_VMOVAW     1080)
+  (UNSPEC_ARC_SIMD_VMOVW      1081)
+  (UNSPEC_ARC_SIMD_VMOVZW     1082)
+
+  ;; Va, Vb builtins
+  (UNSPEC_ARC_SIMD_VABSAW     1085)
+  (UNSPEC_ARC_SIMD_VABSW      1086)
+  (UNSPEC_ARC_SIMD_VADDSUW    1087)
+  (UNSPEC_ARC_SIMD_VSIGNW     1088)
+  (UNSPEC_ARC_SIMD_VEXCH1     1089)
+  (UNSPEC_ARC_SIMD_VEXCH2     1090)
+  (UNSPEC_ARC_SIMD_VEXCH4     1091)
+  (UNSPEC_ARC_SIMD_VUPBAW     1092)
+  (UNSPEC_ARC_SIMD_VUPBW      1093)
+  (UNSPEC_ARC_SIMD_VUPSBAW    1094)
+  (UNSPEC_ARC_SIMD_VUPSBW     1095)
+
+  (UNSPEC_ARC_SIMD_VDIRUN     1100)
+  (UNSPEC_ARC_SIMD_VDORUN     1101)
+  (UNSPEC_ARC_SIMD_VDIWR      1102)
+  (UNSPEC_ARC_SIMD_VDOWR      1103)
+
+  (UNSPEC_ARC_SIMD_VREC      1105)
+  (UNSPEC_ARC_SIMD_VRUN      1106)
+  (UNSPEC_ARC_SIMD_VRECRUN   1107)
+  (UNSPEC_ARC_SIMD_VENDREC   1108)
+
+  (UNSPEC_ARC_SIMD_VLD32WH   1110)
+  (UNSPEC_ARC_SIMD_VLD32WL   1111)
+
+  (UNSPEC_ARC_SIMD_VCAST     1200)
+  (UNSPEC_ARC_SIMD_VINTI     1201)
+   ]
+)
+
+;; Scheduler descriptions for the simd instructions
+(define_insn_reservation "simd_lat_0_insn" 1
+  (eq_attr "type" "simd_dma, simd_vstore, simd_vcontrol")
+  "issue+simd_unit")
+
+(define_insn_reservation "simd_lat_1_insn" 2
+       (eq_attr "type" "simd_vcompare, simd_vlogic,
+                        simd_vmove_else_zero, simd_varith_1cycle")
+  "issue+simd_unit, nothing")
+
+(define_insn_reservation "simd_lat_2_insn" 3
+       (eq_attr "type" "simd_valign, simd_vpermute,
+                        simd_vpack, simd_varith_2cycle")
+  "issue+simd_unit, nothing*2")
+
+(define_insn_reservation "simd_lat_3_insn" 4
+       (eq_attr "type" "simd_valign_with_acc, simd_vpack_with_acc,
+                        simd_vlogic_with_acc, simd_vload128,
+                        simd_vmove_with_acc, simd_vspecial_3cycle,
+                        simd_varith_with_acc")
+  "issue+simd_unit, nothing*3")
+
+(define_insn_reservation "simd_lat_4_insn" 5
+       (eq_attr "type" "simd_vload, simd_vmove, simd_vspecial_4cycle")
+  "issue+simd_unit, nothing*4")
+
+(define_expand "movv8hi"
+  [(set (match_operand:V8HI 0 "general_operand" "")
+	(match_operand:V8HI 1 "general_operand" ""))]
+  ""
+  "
+{
+  /* Everything except mem = const or mem = mem can be done easily.  */
+
+  if (GET_CODE (operands[0]) == MEM && GET_CODE (operands[1]) == MEM)
+    operands[1] = force_reg (V8HImode, operands[1]);
+}")
+
+;; This pattern should appear before the movv8hi_insn pattern.
+(define_insn "vld128_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand" "=v")
+	(mem:V8HI (plus:SI (zero_extend:SI (vec_select:HI (match_operand:V8HI 1 "vector_register_operand"  "v")
+							  (parallel [(match_operand:SI 2 "immediate_operand" "L")])))
+			   (match_operand:SI 3 "immediate_operand" "P"))))]
+ "TARGET_SIMD_SET"
+ "vld128 %0, [i%2, %3]"
+ [(set_attr "type" "simd_vload128")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")]
+)
+
+(define_insn "vst128_insn"
+  [(set	(mem:V8HI (plus:SI (zero_extend:SI (vec_select:HI (match_operand:V8HI 0 "vector_register_operand"  "v")
+							  (parallel [(match_operand:SI 1 "immediate_operand" "L")])))
+			   (match_operand:SI 2 "immediate_operand" "P")))
+	(match_operand:V8HI 3 "vector_register_operand" "v"))]
+ "TARGET_SIMD_SET"
+ "vst128 %3, [i%1, %2]"
+ [(set_attr "type" "simd_vstore")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")]
+)
+
+(define_insn "vst64_insn"
+  [(set	(mem:V4HI (plus:SI (zero_extend:SI (vec_select:HI (match_operand:V8HI 0 "vector_register_operand"  "v")
+							  (parallel [(match_operand:SI 1 "immediate_operand" "L")])))
+			   (match_operand:SI 2 "immediate_operand" "P")))
+	(vec_select:V4HI (match_operand:V8HI 3 "vector_register_operand" "v")
+			 (parallel [(const_int 0)])))]
+ "TARGET_SIMD_SET"
+ "vst64 %3, [i%1, %2]"
+ [(set_attr "type" "simd_vstore")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")]
+)
+
+(define_insn "movv8hi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_or_memory_operand" "=v,m,v")
+	(match_operand:V8HI 1 "vector_register_or_memory_operand" "m,v,v"))]
+  "TARGET_SIMD_SET && !(GET_CODE (operands[0]) == MEM && GET_CODE (operands[1]) == MEM)"
+  "@
+    vld128r %0, %1
+    vst128r %1, %0
+    vmvzw %0,%1,0xffff"
+  [(set_attr "type" "simd_vload128,simd_vstore,simd_vmove_else_zero")
+   (set_attr "length" "8,8,4")
+   (set_attr "cond" "nocond, nocond, nocond")])
+
+(define_insn "movti_insn"
+  [(set (match_operand:TI 0 "vector_register_or_memory_operand" "=v,m,v")
+	(match_operand:TI 1 "vector_register_or_memory_operand" "m,v,v"))]
+  ""
+  "@
+    vld128r %0, %1
+    vst128r %1, %0
+    vmvzw %0,%1,0xffff"
+  [(set_attr "type" "simd_vload128,simd_vstore,simd_vmove_else_zero")
+   (set_attr "length" "8,8,4")
+   (set_attr "cond" "nocond, nocond, nocond")])
+
+;; (define_insn "*movv8hi_insn_rr"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand" "=v")
+;; 	(match_operand:V8HI 1 "vector_register_operand" "v"))]
+;;   ""
+;;   "mov reg,reg"
+;;   [(set_attr "length" "8")
+;;   (set_attr "type" "move")])
+
+;; (define_insn "*movv8_out"
+;;   [(set (match_operand:V8HI 0 "memory_operand" "=m")
+;; 	(match_operand:V8HI 1 "vector_register_operand" "v"))]
+;;   ""
+;;   "mov out"
+;;   [(set_attr "length" "8")
+;;   (set_attr "type" "move")])
+
+
+;; (define_insn "addv8hi3"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+;; 	(plus:V8HI (match_operand:V8HI 1 "vector_register_operand"  "v")
+;; 		   (match_operand:V8HI 2 "vector_register_operand" "v")))]
+;;   "TARGET_SIMD_SET"
+;;   "vaddw %0, %1, %2"
+;;   [(set_attr "length" "8")
+;;    (set_attr "cond" "nocond")])
+
+;; (define_insn "vaddw_insn"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+;; 	(unspec [(match_operand:V8HI 1 "vector_register_operand"  "v")
+;; 			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VADDW))]
+;;   "TARGET_SIMD_SET"
+;;   "vaddw %0, %1, %2"
+;;   [(set_attr "length" "8")
+;;    (set_attr "cond" "nocond")])
+
+;; V V V Insns
+(define_insn "vaddaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VADDAW))]
+  "TARGET_SIMD_SET"
+  "vaddaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vaddw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VADDW))]
+  "TARGET_SIMD_SET"
+  "vaddw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vavb_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VAVB))]
+  "TARGET_SIMD_SET"
+  "vavb %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vavrb_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VAVRB))]
+  "TARGET_SIMD_SET"
+  "vavrb %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vdifaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VDIFAW))]
+  "TARGET_SIMD_SET"
+  "vdifaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vdifw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VDIFW))]
+  "TARGET_SIMD_SET"
+  "vdifw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmaxaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMAXAW))]
+  "TARGET_SIMD_SET"
+  "vmaxaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmaxw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMAXW))]
+  "TARGET_SIMD_SET"
+  "vmaxw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vminaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMINAW))]
+  "TARGET_SIMD_SET"
+  "vminaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vminw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMINW))]
+  "TARGET_SIMD_SET"
+  "vminw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmulaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMULAW))]
+  "TARGET_SIMD_SET"
+  "vmulaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmulfaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMULFAW))]
+  "TARGET_SIMD_SET"
+  "vmulfaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmulfw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMULFW))]
+  "TARGET_SIMD_SET"
+  "vmulfw %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmulw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMULW))]
+  "TARGET_SIMD_SET"
+  "vmulw %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsubaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VSUBAW))]
+  "TARGET_SIMD_SET"
+  "vsubaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsubw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VSUBW))]
+  "TARGET_SIMD_SET"
+  "vsubw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsummw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VSUMMW))]
+  "TARGET_SIMD_SET"
+  "vsummw %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vand_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VAND))]
+  "TARGET_SIMD_SET"
+  "vand %0, %1, %2"
+  [(set_attr "type" "simd_vlogic")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vandaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VANDAW))]
+  "TARGET_SIMD_SET"
+  "vandaw %0, %1, %2"
+  [(set_attr "type" "simd_vlogic_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbic_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VBIC))]
+  "TARGET_SIMD_SET"
+  "vbic %0, %1, %2"
+  [(set_attr "type" "simd_vlogic")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbicaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VBICAW))]
+  "TARGET_SIMD_SET"
+  "vbicaw %0, %1, %2"
+  [(set_attr "type" "simd_vlogic_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vor_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VOR))]
+  "TARGET_SIMD_SET"
+  "vor %0, %1, %2"
+  [(set_attr "type" "simd_vlogic")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vxor_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VXOR))]
+  "TARGET_SIMD_SET"
+  "vxor %0, %1, %2"
+  [(set_attr "type" "simd_vlogic")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vxoraw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VXORAW))]
+  "TARGET_SIMD_SET"
+  "vxoraw %0, %1, %2"
+  [(set_attr "type" "simd_vlogic_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "veqw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VEQW))]
+  "TARGET_SIMD_SET"
+  "veqw %0, %1, %2"
+  [(set_attr "type" "simd_vcompare")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vlew_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VLEW))]
+  "TARGET_SIMD_SET"
+  "vlew %0, %1, %2"
+  [(set_attr "type" "simd_vcompare")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vltw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VLTW))]
+  "TARGET_SIMD_SET"
+  "vltw %0, %1, %2"
+  [(set_attr "type" "simd_vcompare")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vnew_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VNEW))]
+  "TARGET_SIMD_SET"
+  "vnew %0, %1, %2"
+  [(set_attr "type" "simd_vcompare")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr1aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR1AW))]
+  "TARGET_SIMD_SET"
+  "vmr1aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr1w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR1W))]
+  "TARGET_SIMD_SET"
+  "vmr1w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr2aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR2AW))]
+  "TARGET_SIMD_SET"
+  "vmr2aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr2w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR2W))]
+  "TARGET_SIMD_SET"
+  "vmr2w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr3aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR3AW))]
+  "TARGET_SIMD_SET"
+  "vmr3aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr3w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR3W))]
+  "TARGET_SIMD_SET"
+  "vmr3w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr4aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR4AW))]
+  "TARGET_SIMD_SET"
+  "vmr4aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr4w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR4W))]
+  "TARGET_SIMD_SET"
+  "vmr4w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr5aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR5AW))]
+  "TARGET_SIMD_SET"
+  "vmr5aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr5w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR5W))]
+  "TARGET_SIMD_SET"
+  "vmr5w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr6aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR6AW))]
+  "TARGET_SIMD_SET"
+  "vmr6aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr6w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR6W))]
+  "TARGET_SIMD_SET"
+  "vmr6w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr7aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR7AW))]
+  "TARGET_SIMD_SET"
+  "vmr7aw %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmr7w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMR7W))]
+  "TARGET_SIMD_SET"
+  "vmr7w %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmrb_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VMRB))]
+  "TARGET_SIMD_SET"
+  "vmrb %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vh264f_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VH264F))]
+  "TARGET_SIMD_SET"
+  "vh264f %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_3cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vh264ft_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VH264FT))]
+  "TARGET_SIMD_SET"
+  "vh264ft %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_3cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vh264fw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VH264FW))]
+  "TARGET_SIMD_SET"
+  "vh264fw %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_3cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vvc1f_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VVC1F))]
+  "TARGET_SIMD_SET"
+  "vvc1f %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_3cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vvc1ft_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+			 (match_operand:V8HI 2 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VVC1FT))]
+  "TARGET_SIMD_SET"
+  "vvc1ft %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_3cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;;---
+;; V V r/limm Insns
+
+;; (define_insn "vbaddw_insn"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+;; 	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+;; 			      (match_operand:SI 2 "nonmemory_operand" "rCal")] UNSPEC_ARC_SIMD_VBADDW))]
+;;   "TARGET_SIMD_SET"
+;;   "vbaddw %0, %1, %2"
+;;   [(set_attr "length" "4")
+;;    (set_attr "cond" "nocond")])
+
+(define_insn "vbaddw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBADDW))]
+  "TARGET_SIMD_SET"
+  "vbaddw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbmaxw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBMAXW))]
+  "TARGET_SIMD_SET"
+  "vbmaxw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbminw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBMINW))]
+  "TARGET_SIMD_SET"
+  "vbminw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbmulaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBMULAW))]
+  "TARGET_SIMD_SET"
+  "vbmulaw %0, %1, %2"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbmulfw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBMULFW))]
+  "TARGET_SIMD_SET"
+  "vbmulfw %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbmulw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBMULW))]
+  "TARGET_SIMD_SET"
+  "vbmulw %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbrsubw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		     (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBRSUBW))]
+  "TARGET_SIMD_SET"
+  "vbrsubw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vbsubw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VBSUBW))]
+  "TARGET_SIMD_SET"
+  "vbsubw %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+; Va, Vb, Ic instructions
+
+; Va, Vb, u6 instructions
+(define_insn "vasrrwi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VASRRWi))]
+  "TARGET_SIMD_SET"
+  "vasrrwi %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vasrsrwi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		     (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VASRSRWi))]
+  "TARGET_SIMD_SET"
+  "vasrsrwi %0, %1, %2"
+  [(set_attr "type" "simd_varith_2cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vasrwi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VASRWi))]
+  "TARGET_SIMD_SET"
+  "vasrwi %0, %1, %2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vasrpwbi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VASRPWBi))]
+  "TARGET_SIMD_SET"
+  "vasrpwbi %0, %1, %2"
+  [(set_attr "type" "simd_vpack")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vasrrpwbi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VASRRPWBi))]
+  "TARGET_SIMD_SET"
+  "vasrrpwbi %0, %1, %2"
+  [(set_attr "type" "simd_vpack")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsr8awi_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VSR8AWi))]
+  "TARGET_SIMD_SET"
+  "vsr8awi %0, %1, %2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsr8i_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "L")] UNSPEC_ARC_SIMD_VSR8i))]
+  "TARGET_SIMD_SET"
+  "vsr8i %0, %1, %2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;; Va, Vb, u8 (simm) insns
+
+(define_insn "vmvaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMVAW))]
+  "TARGET_SIMD_SET"
+  "vmvaw %0, %1, %2"
+  [(set_attr "type" "simd_vmove_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmvw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMVW))]
+  "TARGET_SIMD_SET"
+  "vmvw %0, %1, %2"
+  [(set_attr "type" "simd_vmove")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmvzw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMVZW))]
+  "TARGET_SIMD_SET"
+  "vmvzw %0, %1, %2"
+  [(set_attr "type" "simd_vmove_else_zero")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vd6tapf_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VD6TAPF))]
+  "TARGET_SIMD_SET"
+  "vd6tapf %0, %1, %2"
+  [(set_attr "type" "simd_vspecial_4cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;; Va, rlimm, u8 (simm) insns
+(define_insn "vmovaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:SI 1 "nonmemory_operand"  "r")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMOVAW))]
+  "TARGET_SIMD_SET"
+  "vmovaw %0, %1, %2"
+  [(set_attr "type" "simd_vmove_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmovw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:SI 1 "nonmemory_operand"  "r")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMOVW))]
+  "TARGET_SIMD_SET"
+  "vmovw %0, %1, %2"
+  [(set_attr "type" "simd_vmove")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vmovzw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+        (unspec:V8HI [(match_operand:SI 1 "nonmemory_operand"  "r")
+		      (match_operand:SI 2 "immediate_operand" "P")] UNSPEC_ARC_SIMD_VMOVZW))]
+  "TARGET_SIMD_SET"
+  "vmovzw %0, %1, %2"
+  [(set_attr "type" "simd_vmove_else_zero")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;; Va, rlimm, Ic insns
+(define_insn "vsr8_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "K")
+		      (match_operand:V8HI 3 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VSR8))]
+  "TARGET_SIMD_SET"
+  "vsr8 %0, %1, i%2"
+  [(set_attr "type" "simd_valign")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vasrw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "K")
+		      (match_operand:V8HI 3 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VASRW))]
+  "TARGET_SIMD_SET"
+  "vasrw %0, %1, i%2"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsr8aw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")
+		      (match_operand:SI 2 "immediate_operand" "K")
+		      (match_operand:V8HI 3 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VSR8AW))]
+  "TARGET_SIMD_SET"
+  "vsr8aw %0, %1, i%2"
+  [(set_attr "type" "simd_valign_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;; Va, Vb insns
+(define_insn "vabsaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VABSAW))]
+  "TARGET_SIMD_SET"
+  "vabsaw %0, %1"
+  [(set_attr "type" "simd_varith_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vabsw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VABSW))]
+  "TARGET_SIMD_SET"
+  "vabsw %0, %1"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vaddsuw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VADDSUW))]
+  "TARGET_SIMD_SET"
+  "vaddsuw %0, %1"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vsignw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VSIGNW))]
+  "TARGET_SIMD_SET"
+  "vsignw %0, %1"
+  [(set_attr "type" "simd_varith_1cycle")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vexch1_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VEXCH1))]
+  "TARGET_SIMD_SET"
+  "vexch1 %0, %1"
+  [(set_attr "type" "simd_vpermute")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vexch2_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VEXCH2))]
+  "TARGET_SIMD_SET"
+  "vexch2 %0, %1"
+  [(set_attr "type" "simd_vpermute")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vexch4_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VEXCH4))]
+  "TARGET_SIMD_SET"
+  "vexch4 %0, %1"
+  [(set_attr "type" "simd_vpermute")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vupbaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VUPBAW))]
+  "TARGET_SIMD_SET"
+  "vupbaw %0, %1"
+  [(set_attr "type" "simd_vpack_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vupbw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VUPBW))]
+  "TARGET_SIMD_SET"
+  "vupbw %0, %1"
+  [(set_attr "type" "simd_vpack")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vupsbaw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VUPSBAW))]
+  "TARGET_SIMD_SET"
+  "vupsbaw %0, %1"
+  [(set_attr "type" "simd_vpack_with_acc")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vupsbw_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"  "=v")
+	(unspec:V8HI [(match_operand:V8HI 1 "vector_register_operand"  "v")] UNSPEC_ARC_SIMD_VUPSBW))]
+  "TARGET_SIMD_SET"
+  "vupsbw %0, %1"
+  [(set_attr "type" "simd_vpack")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+; DMA setup instructions
+(define_insn "vdirun_insn"
+  [(set (match_operand:SI 0 "arc_simd_dma_register_operand"           "=d")
+        (unspec_volatile:SI [(match_operand:SI 1 "nonmemory_operand"  "r")
+			     (match_operand:SI 2 "nonmemory_operand" "r")] UNSPEC_ARC_SIMD_VDIRUN))]
+  "TARGET_SIMD_SET"
+  "vdirun %1, %2"
+  [(set_attr "type" "simd_dma")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vdorun_insn"
+  [(set (match_operand:SI 0 "arc_simd_dma_register_operand"              "=d")
+        (unspec_volatile:SI [(match_operand:SI 1 "nonmemory_operand"     "r")
+			     (match_operand:SI 2 "nonmemory_operand"     "r")] UNSPEC_ARC_SIMD_VDORUN))]
+  "TARGET_SIMD_SET"
+  "vdorun %1, %2"
+  [(set_attr "type" "simd_dma")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vdiwr_insn"
+  [(set (match_operand:SI 0 "arc_simd_dma_register_operand"           "=d,d")
+        (unspec_volatile:SI [(match_operand:SI 1 "nonmemory_operand"  "r,Cal")] UNSPEC_ARC_SIMD_VDIWR))]
+  "TARGET_SIMD_SET"
+  "vdiwr %0, %1"
+  [(set_attr "type" "simd_dma")
+   (set_attr "length" "4,8")
+   (set_attr "cond" "nocond,nocond")])
+
+(define_insn "vdowr_insn"
+  [(set (match_operand:SI 0 "arc_simd_dma_register_operand"           "=d,d")
+        (unspec_volatile:SI [(match_operand:SI 1 "nonmemory_operand"  "r,Cal")] UNSPEC_ARC_SIMD_VDOWR))]
+  "TARGET_SIMD_SET"
+  "vdowr %0, %1"
+  [(set_attr "type" "simd_dma")
+   (set_attr "length" "4,8")
+   (set_attr "cond" "nocond,nocond")])
+
+;; vector record and run instructions
+(define_insn "vrec_insn"
+  [(unspec_volatile [(match_operand:SI 0 "nonmemory_operand"  "r")] UNSPEC_ARC_SIMD_VREC)]
+  "TARGET_SIMD_SET"
+  "vrec %0"
+  [(set_attr "type" "simd_vcontrol")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vrun_insn"
+  [(unspec_volatile [(match_operand:SI 0 "nonmemory_operand"  "r")] UNSPEC_ARC_SIMD_VRUN)]
+  "TARGET_SIMD_SET"
+  "vrun %0"
+  [(set_attr "type" "simd_vcontrol")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vrecrun_insn"
+  [(unspec_volatile [(match_operand:SI 0 "nonmemory_operand"  "r")] UNSPEC_ARC_SIMD_VRECRUN)]
+  "TARGET_SIMD_SET"
+  "vrecrun %0"
+  [(set_attr "type" "simd_vcontrol")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vendrec_insn"
+  [(unspec_volatile [(match_operand:SI 0 "nonmemory_operand"  "r")] UNSPEC_ARC_SIMD_VENDREC)]
+  "TARGET_SIMD_SET"
+  "vendrec %S0"
+  [(set_attr "type" "simd_vcontrol")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+;; Va, [Ib,u8] instructions
+;; (define_insn "vld32wh_insn"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+;; 	(vec_concat:V8HI (unspec:V4HI [(match_operand:SI 1 "immediate_operand" "P")
+;; 				      (vec_select:HI (match_operand:V8HI 2 "vector_register_operand"  "v")
+;; 						      (parallel [(match_operand:SI 3 "immediate_operand" "L")]))] UNSPEC_ARC_SIMD_VLD32WH)
+;; 			 (vec_select:V4HI (match_dup 0)
+;; 					  (parallel[(const_int 0)]))))]
+;; (define_insn "vld32wl_insn"
+;;   [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+;; 	(unspec:V8HI [(match_operand:SI 1 "immediate_operand" "L")
+;; 		     (match_operand:SI 2 "immediate_operand" "P")
+;; 		     (match_operand:V8HI 3 "vector_register_operand"  "v")
+;; 		     (match_dup 0)] UNSPEC_ARC_SIMD_VLD32WL))]
+;;   "TARGET_SIMD_SET"
+;;   "vld32wl %0, [I%1,%2]"
+;;   [(set_attr "length" "4")
+;;   (set_attr "cond" "nocond")])
+(define_insn "vld32wh_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(vec_concat:V8HI (zero_extend:V4HI (mem:V4QI (plus:SI (match_operand:SI 1 "immediate_operand" "P")
+							      (zero_extend: SI (vec_select:HI (match_operand:V8HI 2 "vector_register_operand"  "v")
+											      (parallel [(match_operand:SI 3 "immediate_operand" "L")]))))))
+			 (vec_select:V4HI (match_dup 0)
+					  (parallel [(const_int 0)]))))]
+  "TARGET_SIMD_SET"
+  "vld32wh %0, [i%3,%1]"
+  [(set_attr "type" "simd_vload")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vld32wl_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(vec_concat:V8HI (vec_select:V4HI (match_dup 0)
+					  (parallel [(const_int 1)]))
+			 (zero_extend:V4HI (mem:V4QI (plus:SI (match_operand:SI 1 "immediate_operand" "P")
+							      (zero_extend: SI (vec_select:HI (match_operand:V8HI 2 "vector_register_operand"  "v")
+											      (parallel [(match_operand:SI 3 "immediate_operand" "L")])))))) ))]
+  "TARGET_SIMD_SET"
+  "vld32wl %0, [i%3,%1]"
+  [(set_attr "type" "simd_vload")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vld64w_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand" "=v")
+	(zero_extend:V8HI (mem:V4HI (plus:SI (zero_extend:SI (vec_select:HI (match_operand:V8HI 1 "vector_register_operand"  "v")
+									    (parallel [(match_operand:SI 2 "immediate_operand" "L")])))
+					     (match_operand:SI 3 "immediate_operand" "P")))))]
+ "TARGET_SIMD_SET"
+ "vld64w %0, [i%2, %3]"
+ [(set_attr "type" "simd_vload")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")]
+)
+
+(define_insn "vld64_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(vec_concat:V8HI (vec_select:V4HI (match_dup 0)
+					  (parallel [(const_int 1)]))
+			 (mem:V4HI (plus:SI (match_operand:SI 1 "immediate_operand" "P")
+					    (zero_extend: SI (vec_select:HI (match_operand:V8HI 2 "vector_register_operand"  "v")
+									    (parallel [(match_operand:SI 3 "immediate_operand" "L")]))))) ))]
+  "TARGET_SIMD_SET"
+  "vld64 %0, [i%3,%1]"
+  [(set_attr "type" "simd_vload")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vld32_insn"
+  [(set (match_operand:V8HI 0 "vector_register_operand"           "=v")
+	(vec_concat:V8HI (vec_select:V4HI (match_dup 0)
+					  (parallel [(const_int 1)]))
+			 (vec_concat:V4HI  (vec_select:V2HI (match_dup 0)
+							    (parallel [(const_int 1)]))
+					   (mem:V2HI (plus:SI (match_operand:SI 1 "immediate_operand" "P")
+							      (zero_extend: SI (vec_select:HI (match_operand:V8HI 2 "vector_register_operand"  "v")
+											      (parallel [(match_operand:SI 3 "immediate_operand" "L")])))))) ))]
+  "TARGET_SIMD_SET"
+  "vld32 %0, [i%3,%1]"
+  [(set_attr "type" "simd_vload")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
+
+(define_insn "vst16_n_insn"
+  [(set  (mem:HI (plus:SI (match_operand:SI 0 "immediate_operand" "P")
+			  (zero_extend: SI (vec_select:HI (match_operand:V8HI 1 "vector_register_operand"  "v")
+							  (parallel [(match_operand:SI 2 "immediate_operand" "L")])))))
+	 (vec_select:HI (match_operand:V8HI 3 "vector_register_operand" "v")
+			(parallel [(match_operand:SI 4 "immediate_operand" "L")])))]
+ "TARGET_SIMD_SET"
+ "vst16_%4 %3,[i%2, %0]"
+ [(set_attr "type" "simd_vstore")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")])
+
+(define_insn "vst32_n_insn"
+  [(set  (mem:SI (plus:SI (match_operand:SI 0 "immediate_operand" "P")
+			  (zero_extend: SI (vec_select:HI (match_operand:V8HI 1 "vector_register_operand"  "v")
+							  (parallel [(match_operand:SI 2 "immediate_operand" "L")])))))
+	 (vec_select:SI (unspec:V4SI [(match_operand:V8HI 3 "vector_register_operand" "v")] UNSPEC_ARC_SIMD_VCAST)
+			(parallel [(match_operand:SI 4 "immediate_operand" "L")])))]
+ "TARGET_SIMD_SET"
+ "vst32_%4 %3,[i%2, %0]"
+ [(set_attr "type" "simd_vstore")
+  (set_attr "length" "4")
+  (set_attr "cond" "nocond")])
+
+;; SIMD unit interrupt
+(define_insn "vinti_insn"
+  [(unspec_volatile [(match_operand:SI 0 "nonmemory_operand"  "L")] UNSPEC_ARC_SIMD_VINTI)]
+  "TARGET_SIMD_SET"
+  "vinti %0"
+  [(set_attr "type" "simd_vcontrol")
+   (set_attr "length" "4")
+   (set_attr "cond" "nocond")])
diff -Nu --exclude arc.c --exclude arc.md emptydir/t-arc config/arc/t-arc
--- emptydir/t-arc	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/t-arc	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,20 @@ 
+# GCC Makefile fragment for Synopsys DesignWare ARC.
+
+# Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+# This file is part of GCC.
+
+# GCC is free software; you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation; either version 3, or (at your option) any later version.
+
+# GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+# details.
+
+# You should have received a copy of the GNU General Public License along
+# with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+$(out_object_file): gt-arc.h
diff -Nu --exclude arc.c --exclude arc.md emptydir/t-arc-newlib config/arc/t-arc-newlib
--- emptydir/t-arc-newlib	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/t-arc-newlib	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,38 @@ 
+# GCC Makefile fragment for Synopsys DesignWare ARC with newlib.
+
+# Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+# This file is part of GCC.
+
+# GCC is free software; you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation; either version 3, or (at your option) any later version.
+
+# GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+# details.
+
+# You should have received a copy of the GNU General Public License along
+# with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+# Selecting -mA5 uses the same functional multilib files/libraries
+# as are used for -mARC600, aka -mA6.
+MULTILIB_OPTIONS=mcpu=ARC600/mcpu=ARC601 mmul64/mmul32x16 mnorm
+MULTILIB_DIRNAMES=arc600 arc601 mul64 mul32x16 norm
+#
+# Aliases:
+MULTILIB_MATCHES  = mcpu?ARC600=mcpu?arc600
+MULTILIB_MATCHES += mcpu?ARC600=mARC600
+MULTILIB_MATCHES += mcpu?ARC600=mA6
+MULTILIB_MATCHES += mcpu?ARC600=mA5
+MULTILIB_MATCHES += mcpu?ARC600=mno-mpy
+MULTILIB_MATCHES += mcpu?ARC601=mcpu?arc601
+MULTILIB_MATCHES += EL=mlittle-endian
+MULTILIB_MATCHES += EB=mbig-endian
+#
+# These don't make sense for the ARC700 default target:
+MULTILIB_EXCEPTIONS=mmul64* mmul32x16* mnorm*
+# And neither of the -mmul* options makes sense without -mnorm:
+MULTILIB_EXCLUSIONS=mcpu=ARC600/mmul64/!mnorm mcpu=ARC601/mmul64/!mnorm mcpu=ARC600/mmul32x16/!mnorm
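The multilib fragment above can be read as a small table: MULTILIB_MATCHES maps alias flags onto canonical options (the `?` in entries like `mcpu?ARC600` stands for `=`), and MULTILIB_DIRNAMES pairs each option in MULTILIB_OPTIONS with a library subdirectory. The sketch below is only an illustrative model of that mapping, not how GCC's genmultilib actually works; the function names `canonicalize` and `multilib_dir` are invented for this example, while the option and directory strings are taken from the fragment.

```python
# Toy model of the multilib selection rules in t-arc-newlib above.
# Illustrative only -- NOT the algorithm GCC's genmultilib uses.

# MULTILIB_MATCHES: alias flags resolved to canonical options.
MATCHES = {
    "mcpu=arc600": "mcpu=ARC600",
    "mARC600": "mcpu=ARC600",
    "mA6": "mcpu=ARC600",
    "mA5": "mcpu=ARC600",
    "mno-mpy": "mcpu=ARC600",
    "mcpu=arc601": "mcpu=ARC601",
}

# MULTILIB_OPTIONS / MULTILIB_DIRNAMES: option -> subdirectory.
DIRNAMES = {
    "mcpu=ARC600": "arc600",
    "mcpu=ARC601": "arc601",
    "mmul64": "mul64",
    "mmul32x16": "mul32x16",
    "mnorm": "norm",
}

def canonicalize(opts):
    """Apply the alias table to a list of -m options (dashes stripped)."""
    return [MATCHES.get(o, o) for o in opts]

def multilib_dir(opts):
    """Map canonical options to a multilib directory path."""
    parts = [DIRNAMES[o] for o in canonicalize(opts) if o in DIRNAMES]
    return "/".join(parts) if parts else "."

print(multilib_dir(["mA6", "mmul64", "mnorm"]))  # arc600/mul64/norm
print(multilib_dir([]))                          # "." (ARC700 default)
```

For example, `-mA5` selects the same `arc600` directory as `-mcpu=ARC600`, which is exactly what the alias entries express.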
diff -Nu --exclude arc.c --exclude arc.md emptydir/t-arc-uClibc config/arc/t-arc-uClibc
--- emptydir/t-arc-uClibc	1970-01-01 01:00:00.000000000 +0100
+++ config/arc/t-arc-uClibc	2013-01-30 07:47:21.000000000 +0000
@@ -0,0 +1,20 @@ 
+# GCC Makefile fragment for Synopsys DesignWare ARC with uClibc.
+
+# Copyright (C) 2007-2012 Free Software Foundation, Inc.
+
+# This file is part of GCC.
+
+# GCC is free software; you can redistribute it and/or modify it under the
+# terms of the GNU General Public License as published by the Free Software
+# Foundation; either version 3, or (at your option) any later version.
+
+# GCC is distributed in the hope that it will be useful, but WITHOUT ANY
+# WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+# FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more
+# details.
+
+# You should have received a copy of the GNU General Public License along
+# with GCC; see the file COPYING3.  If not see
+# <http://www.gnu.org/licenses/>.
+
+MULTILIB_EXTRA_OPTS = mno-sdata