From patchwork Wed Jul 6 11:52:01 2011
X-Patchwork-Submitter: Georg-Johann Lay
X-Patchwork-Id: 103457
Message-ID: <4E144C61.60600@gjlay.de>
Date: Wed, 06 Jul 2011 13:52:01 +0200
From: Georg-Johann Lay
To: gcc-patches@gcc.gnu.org
CC: Eric Weddington, Anatoly Sokolov, Denis Chertykov
Subject: [Patch,AVR]: Improve loading of 32-bit constants

For loading a 32-bit constant into a register, there is room for improvement:

* SF can be handled the same way as SI, so the patch adds a peep2 that
  produces a *reload_insf insn analogous to *reload_insi.

* If the destination register overlaps NO_LD_REGS, values already loaded
  into some other byte can be reused by a simple MOV.  This helps when
  moving values like -2, -100, etc. because all high bytes are 0xff
  (see the example below).

* 0.0f can be moved directly to memory.

* The mov insns contain a "!d" constraint.  I see no reason to make "d"
  expensive and discourage use of the d-regs.  Hiding the alternative with
  "*d" is better because it neither puts additional pressure on "d" nor
  discourages its use.

The patch is basically a rewrite of output_reload_insisf.

Tested without regressions.  Ok to commit?

Johann

	* config/avr/avr.md (*reload_insi): Change predicate #1 to
	const_int_operand.  Ditto for peep2 producing this insn.
	Add argument to output_reload_insisf call.
	(*movsi, *movsf): Add argument to output_movsisf call.
	Change "!d" constraint to "*d".
	(*reload_insf): New insn and new peep2 to produce it.
	* config/avr/avr-protos.h (output_movsisf): Change prototype.
	(output_reload_insisf): Change prototype.
	* config/avr/avr.c (avr_asm_len): New function.
	(output_reload_insisf): Rewrite.
	(output_movsisf): Change prototype.  Use output_reload_insisf
	for all CONST_INT and CONST_DOUBLE.  Allow moving 0.0f to memory.
	(adjust_insn_length): Add argument to output_movsisf and
	output_reload_insisf calls.
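
To illustrate the byte-reuse point with a rough sketch (register numbers are
only an example; the actual registers depend on what reload and the peep2
pick): loading -2 into a lower quadruple like r10..r13, with r25 available as
d-register scratch, would now come out as

	ldi r25,lo8(-2)   ; 0xfe needs the scratch reg
	mov r10,r25
	clr r11
	dec r11           ; 0xff
	mov r12,r11       ; reuse the 0xff already in r11
	mov r13,r11

i.e. 6 instructions instead of the 4 ldi/mov pairs the old
output_reload_insisf printed.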
Index: config/avr/avr.md
===================================================================
--- config/avr/avr.md	(revision 175811)
+++ config/avr/avr.md	(working copy)
@@ -402,10 +402,10 @@ (define_expand "movsi"
 
-(define_peephole2 ; movsi_lreg_const
+(define_peephole2 ; *reload_insi
   [(match_scratch:QI 2 "d")
    (set (match_operand:SI 0 "l_register_operand" "")
-        (match_operand:SI 1 "immediate_operand" ""))
+        (match_operand:SI 1 "const_int_operand" ""))
    (match_dup 2)]
   "(operands[1] != const0_rtx
     && operands[1] != constm1_rtx)"
@@ -416,22 +416,26 @@ (define_peephole2 ; movsi_lreg_const
 
 ;; '*' because it is not used in rtl generation.
 (define_insn "*reload_insi"
   [(set (match_operand:SI 0 "register_operand" "=r")
-        (match_operand:SI 1 "immediate_operand" "i"))
+        (match_operand:SI 1 "const_int_operand" "n"))
    (clobber (match_operand:QI 2 "register_operand" "=&d"))]
   "reload_completed"
-  "* return output_reload_insisf (insn, operands, NULL);"
+  {
+    return output_reload_insisf (insn, operands, operands[2], NULL);
+  }
   [(set_attr "length" "8")
-   (set_attr "cc" "none")])
+   (set_attr "cc" "clobber")])
 
 (define_insn "*movsi"
-  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,Qm,!d,r")
+  [(set (match_operand:SI 0 "nonimmediate_operand" "=r,r,r,Qm,*d,r")
         (match_operand:SI 1 "general_operand"       "r,L,Qm,rL,i,i"))]
   "(register_operand (operands[0],SImode)
     || register_operand (operands[1],SImode)
    || const0_rtx == operands[1])"
-  "* return output_movsisf (insn, operands, NULL);"
+  {
+    return output_movsisf (insn, operands, NULL_RTX, NULL);
+  }
   [(set_attr "length" "4,4,8,9,4,10")
-   (set_attr "cc" "none,set_zn,clobber,clobber,none,clobber")])
+   (set_attr "cc" "none,set_zn,clobber,clobber,clobber,clobber")])
 
 ;; fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
 ;; move floating point numbers (32 bit)
@@ -451,13 +455,39 @@ (define_expand "movsf"
 }")
 
 (define_insn "*movsf"
-  [(set (match_operand:SF 0 "nonimmediate_operand" "=r,r,r,Qm,!d,r")
-        (match_operand:SF 1 "general_operand"       "r,G,Qm,r,F,F"))]
+  [(set (match_operand:SF 0 "nonimmediate_operand" "=r,r,r,Qm,*d,r")
+        (match_operand:SF 1 "general_operand"       "r,G,Qm,rG,F,F"))]
   "register_operand (operands[0], SFmode)
-   || register_operand (operands[1], SFmode)"
-  "* return output_movsisf (insn, operands, NULL);"
+   || register_operand (operands[1], SFmode)
+   || operands[1] == CONST0_RTX (SFmode)"
+  {
+    return output_movsisf (insn, operands, NULL_RTX, NULL);
+  }
   [(set_attr "length" "4,4,8,9,4,10")
-   (set_attr "cc" "none,set_zn,clobber,clobber,none,clobber")])
+   (set_attr "cc" "none,set_zn,clobber,clobber,clobber,clobber")])
+
+(define_peephole2 ; *reload_insf
+  [(match_scratch:QI 2 "d")
+   (set (match_operand:SF 0 "l_register_operand" "")
+        (match_operand:SF 1 "const_double_operand" ""))
+   (match_dup 2)]
+  "operands[1] != CONST0_RTX (SFmode)"
+  [(parallel [(set (match_dup 0)
+                   (match_dup 1))
+              (clobber (match_dup 2))])]
+  "")
+
+;; '*' because it is not used in rtl generation.
+(define_insn "*reload_insf"
+  [(set (match_operand:SF 0 "register_operand" "=r")
+        (match_operand:SF 1 "const_double_operand" "F"))
+   (clobber (match_operand:QI 2 "register_operand" "=&d"))]
+  "reload_completed"
+  {
+    return output_reload_insisf (insn, operands, operands[2], NULL);
+  }
+  [(set_attr "length" "8")
+   (set_attr "cc" "clobber")])
 
 ;;=========================================================================
 ;; move string (like memcpy)
Index: config/avr/avr-protos.h
===================================================================
--- config/avr/avr-protos.h	(revision 175811)
+++ config/avr/avr-protos.h	(working copy)
@@ -56,7 +56,7 @@ extern const char *out_movhi_r_mr (rtx i
 extern const char *out_movhi_mr_r (rtx insn, rtx op[], int *l);
 extern const char *out_movsi_r_mr (rtx insn, rtx op[], int *l);
 extern const char *out_movsi_mr_r (rtx insn, rtx op[], int *l);
-extern const char *output_movsisf (rtx insn, rtx operands[], int *l);
+extern const char *output_movsisf (rtx insn, rtx operands[], rtx clobber, int *l);
 extern const char *out_tstsi (rtx insn, rtx src, int *l);
 extern const char *out_tsthi (rtx insn, rtx src, int *l);
 extern const char *ret_cond_branch (rtx x, int len, int reverse);
@@ -85,7 +85,7 @@ extern const char *avr_out_sbxx_branch (
 extern int extra_constraint_Q (rtx x);
 extern int adjust_insn_length (rtx insn, int len);
 extern const char *output_reload_inhi (rtx insn, rtx *operands, int *len);
-extern const char *output_reload_insisf (rtx insn, rtx *operands, int *len);
+extern const char *output_reload_insisf (rtx insn, rtx *operands, rtx clobber, int *len);
 extern enum reg_class secondary_input_reload_class (enum reg_class,
						     enum machine_mode,
						     rtx);
Index: config/avr/avr.c
===================================================================
--- config/avr/avr.c	(revision 175811)
+++ config/avr/avr.c	(working copy)
@@ -1184,6 +1184,32 @@ avr_legitimize_address (rtx x, rtx oldx,
 }
 
 
+/* Helper function to print assembler resp. track instruction
+   sequence lengths.
+
+   If PLEN == NULL:
+       Output assembler code from template TPL with operands supplied
+       by OPERANDS.  This is just forwarding to output_asm_insn.
+
+   If PLEN != NULL:
+       Add N_WORDS to *PLEN.
+       Don't output anything.
+*/
+
+static void
+avr_asm_len (const char* tpl, rtx* operands, int* plen, int n_words)
+{
+  if (NULL == plen)
+    {
+      output_asm_insn (tpl, operands);
+    }
+  else
+    {
+      *plen += n_words;
+    }
+}
+
+
 /* Return a pointer register name as a string.  */
 
 static const char *
@@ -2600,7 +2626,7 @@ out_movsi_mr_r (rtx insn, rtx op[], int
 }
 
 const char *
-output_movsisf(rtx insn, rtx operands[], int *l)
+output_movsisf (rtx insn, rtx operands[], rtx clobber_reg, int *l)
 {
   int dummy;
   rtx dest = operands[0];
@@ -2643,6 +2669,11 @@ output_movsisf(rtx insn, rtx operands[],
		  AS2 (mov,%D0,%D1));
	}
     }
+  else if (CONST_INT_P (src)
+           || CONST_DOUBLE_P (src))
+    {
+      return output_reload_insisf (insn, operands, clobber_reg, real_l);
+    }
   else if (CONSTANT_P (src))
     {
       if (test_hard_reg_class (LD_REGS, dest)) /* ldi d,i */
@@ -2653,68 +2684,6 @@ output_movsisf(rtx insn, rtx operands[],
		  AS2 (ldi,%C0,hlo8(%1)) CR_TAB
		  AS2 (ldi,%D0,hhi8(%1)));
	}
-
-      if (GET_CODE (src) == CONST_INT)
-	{
-	  const char *const clr_op0 =
-	    AVR_HAVE_MOVW ? (AS1 (clr,%A0) CR_TAB
-			     AS1 (clr,%B0) CR_TAB
-			     AS2 (movw,%C0,%A0))
-			  : (AS1 (clr,%A0) CR_TAB
-			     AS1 (clr,%B0) CR_TAB
-			     AS1 (clr,%C0) CR_TAB
-			     AS1 (clr,%D0));
-
-	  if (src == const0_rtx) /* mov r,L */
-	    {
-	      *l = AVR_HAVE_MOVW ? 3 : 4;
-	      return clr_op0;
-	    }
-	  else if (src == const1_rtx)
-	    {
-	      if (!real_l)
-		output_asm_insn (clr_op0, operands);
-	      *l = AVR_HAVE_MOVW ? 4 : 5;
-	      return AS1 (inc,%A0);
-	    }
-	  else if (src == constm1_rtx)
-	    {
-	      /* Immediate constants -1 to any register */
-	      if (AVR_HAVE_MOVW)
-		{
-		  *l = 4;
-		  return (AS1 (clr,%A0)     CR_TAB
-			  AS1 (dec,%A0)     CR_TAB
-			  AS2 (mov,%B0,%A0) CR_TAB
-			  AS2 (movw,%C0,%A0));
-		}
-	      *l = 5;
-	      return (AS1 (clr,%A0)     CR_TAB
-		      AS1 (dec,%A0)     CR_TAB
-		      AS2 (mov,%B0,%A0) CR_TAB
-		      AS2 (mov,%C0,%A0) CR_TAB
-		      AS2 (mov,%D0,%A0));
-	    }
-	  else
-	    {
-	      int bit_nr = exact_log2 (INTVAL (src));
-
-	      if (bit_nr >= 0)
-		{
-		  *l = AVR_HAVE_MOVW ? 5 : 6;
-		  if (!real_l)
-		    {
-		      output_asm_insn (clr_op0, operands);
-		      output_asm_insn ("set", operands);
-		    }
-		  if (!real_l)
-		    avr_output_bld (operands, bit_nr);
-
-		  return "";
-		}
-	    }
-	}
-
       /* Last resort, better than loading from memory.  */
       *l = 10;
       return (AS2 (mov,__tmp_reg__,r31) CR_TAB
@@ -2735,7 +2704,7 @@ output_movsisf(rtx insn, rtx operands[],
     {
       const char *templ;
 
-      if (src == const0_rtx)
+      if (src == CONST0_RTX (GET_MODE (dest)))
	  operands[1] = zero_reg_rtx;
 
       templ = out_movsi_mr_r (insn, operands, real_l);
@@ -4612,7 +4581,7 @@ adjust_insn_length (rtx insn, int len)
	    break;
	  case SImode:
	  case SFmode:
-	    output_movsisf (insn, op, &len);
+	    output_movsisf (insn, op, NULL_RTX, &len);
	    break;
	  default:
	    break;
@@ -4683,7 +4652,7 @@ adjust_insn_length (rtx insn, int len)
	    break;
	  case SImode:
	  case SFmode:
-	    output_reload_insisf (insn, op, &len);
+	    output_reload_insisf (insn, op, XEXP (op[2], 0), &len);
	    break;
	  default:
	    break;
@@ -6212,53 +6181,199 @@ output_reload_inhi (rtx insn ATTRIBUTE_U
 }
 
+
+/* Reload a SI or SF compile time constant (OP[1]) into a GPR (OP[0]).
+   CLOBBER_REG is a QI clobber reg needed to move vast majority of consts
+   into a NO_LD_REGS.  If CLOBBER_REG is NULL_RTX we either don't need a
+   clobber reg or have to cook one up.
+
+   LEN == NULL: Output instructions.
+
+   LEN != NULL: Output nothing.  Increment *LEN by number of words occupied
+   by the insns printed.
+
+   Return "".  */
+
 const char *
-output_reload_insisf (rtx insn ATTRIBUTE_UNUSED, rtx *operands, int *len)
+output_reload_insisf (rtx insn ATTRIBUTE_UNUSED,
+                      rtx *op, rtx clobber_reg, int *len)
 {
-  rtx src = operands[1];
-  int cnst = (GET_CODE (src) == CONST_INT);
+  rtx src = op[1];
+  rtx dest = op[0];
+  rtx xval, xdest[4];
+  int ival[4];
+  int clobber_val = 1234;
+  bool cooked_clobber_p = false;
+  bool set_p = false;
+  unsigned int n;
+  enum machine_mode mode = GET_MODE (dest);
+
+  gcc_assert (REG_P (dest));
 
   if (len)
+    *len = 0;
+
+  /* (REG:SI 14) is special: It's neither in LD_REGS nor in NO_LD_REGS
+     but has some subregs that are in LD_REGS.  Use the MSB (REG:QI 17).  */
+
+  if (14 == REGNO (dest))
     {
-      if (cnst)
-	*len = 4 + ((INTVAL (src) & 0xff) != 0)
-		 + ((INTVAL (src) & 0xff00) != 0)
-		 + ((INTVAL (src) & 0xff0000) != 0)
-		 + ((INTVAL (src) & 0xff000000) != 0);
-      else
-	*len = 8;
-
-      return "";
+      clobber_reg = gen_rtx_REG (QImode, 17);
     }
 
-  if (cnst && ((INTVAL (src) & 0xff) == 0))
-    output_asm_insn (AS2 (mov, %A0, __zero_reg__), operands);
-  else
-    {
-      output_asm_insn (AS2 (ldi, %2, lo8(%1)), operands);
-      output_asm_insn (AS2 (mov, %A0, %2), operands);
-    }
-  if (cnst && ((INTVAL (src) & 0xff00) == 0))
-    output_asm_insn (AS2 (mov, %B0, __zero_reg__), operands);
-  else
+  /* We might need a clobber reg but don't have one.  Look at the value
+     to be loaded more closely.  A clobber is only needed if it contains
+     a byte that is neither 0, -1 or a power of 2.  */
+
+  if (NULL_RTX == clobber_reg && !test_hard_reg_class (LD_REGS, dest))
     {
-      output_asm_insn (AS2 (ldi, %2, hi8(%1)), operands);
-      output_asm_insn (AS2 (mov, %B0, %2), operands);
+      for (n = 0; n < GET_MODE_SIZE (mode); n++)
+        {
+          xval = simplify_gen_subreg (QImode, src, mode, n);
+
+          if (!(const0_rtx == xval
+                || constm1_rtx == xval
+                || single_one_operand (xval, QImode)))
+            {
+              /* We have no clobber reg but need one.  Cook one up.
+                 That's cheaper than loading from constant pool.  */
+
+              cooked_clobber_p = true;
+              clobber_reg = gen_rtx_REG (QImode, 31);
+              avr_asm_len ("mov __tmp_reg__,%0", &clobber_reg, len, 1);
+              break;
+            }
+        }
     }
-  if (cnst && ((INTVAL (src) & 0xff0000) == 0))
-    output_asm_insn (AS2 (mov, %C0, __zero_reg__), operands);
-  else
+
+  /* Now start filling DEST from LSB to MSB.  */
+
+  for (n = 0; n < GET_MODE_SIZE (mode); n++)
     {
-      output_asm_insn (AS2 (ldi, %2, hlo8(%1)), operands);
-      output_asm_insn (AS2 (mov, %C0, %2), operands);
+      bool done_byte = false;
+      unsigned int j;
+      rtx xop[3];
+
+      /* Crop the n-th sub-byte.  */
+
+      xval = simplify_gen_subreg (QImode, src, mode, n);
+      xdest[n] = simplify_gen_subreg (QImode, dest, mode, n);
+      ival[n] = INTVAL (xval);
+
+      /* Look if we can reuse the low word by means of MOVW.  */
+
+      if (n == 2
+          && AVR_HAVE_MOVW)
+        {
+          rtx lo16 = simplify_gen_subreg (HImode, src, mode, 0);
+          rtx hi16 = simplify_gen_subreg (HImode, src, mode, 2);
+
+          if (INTVAL (lo16) == INTVAL (hi16))
+            {
+              avr_asm_len ("movw %C0,%A0", &op[0], len, 1);
+              break;
+            }
+        }
+
+      /* Use CLR to zero a value so that cc0 is set as expected
+         for zero.  */
+
+      if (ival[n] == 0)
+        {
+          avr_asm_len ("clr %0", &xdest[n], len, 1);
+          continue;
+        }
+
+      if (clobber_val == ival[n]
+          && REGNO (clobber_reg) == REGNO (xdest[n]))
+        {
+          continue;
+        }
+
+      /* LD_REGS can use LDI to move a constant value */
+
+      if (test_hard_reg_class (LD_REGS, xdest[n]))
+        {
+          xop[0] = xdest[n];
+          xop[1] = xval;
+          avr_asm_len ("ldi %0,lo8(%1)", xop, len, 1);
+          continue;
+        }
+
+      /* Try to reuse value already loaded in some lower byte.  */
+
+      for (j = 0; j < n; j++)
+        if (ival[j] == ival[n])
+          {
+            xop[0] = xdest[n];
+            xop[1] = xdest[j];
+
+            avr_asm_len ("mov %0,%1", xop, len, 1);
+            done_byte = true;
+            break;
+          }
+
+      if (done_byte)
+        continue;
+
+      /* Need no clobber reg for -1: Use CLR/DEC */
+
+      if (-1 == ival[n])
+        {
+          avr_asm_len ("clr %0" CR_TAB
+                       "dec %0", &xdest[n], len, 2);
+          continue;
+        }
+
+      /* Use T flag or INC to manage powers of 2 if we have
+         no clobber reg.  */
+
+      if (NULL_RTX == clobber_reg
+          && single_one_operand (xval, QImode))
+        {
+          if (1 == ival[n])
+            {
+              avr_asm_len ("clr %0" CR_TAB
+                           "inc %0", &xdest[n], len, 2);
+              continue;
+            }
+
+          xop[0] = xdest[n];
+          xop[1] = GEN_INT (exact_log2 (ival[n] & GET_MODE_MASK (QImode)));
+
+          gcc_assert (constm1_rtx != xop[1]);
+
+          if (!set_p)
+            {
+              set_p = true;
+              avr_asm_len ("set", xop, len, 1);
+            }
+
+          avr_asm_len ("clr %0" CR_TAB
+                       "bld %0,%1", xop, len, 2);
+          continue;
+        }
+
+      /* We actually need the LD_REGS clobber reg.  */
+
+      gcc_assert (NULL_RTX != clobber_reg);
+
+      xop[0] = xdest[n];
+      xop[1] = xval;
+      xop[2] = clobber_reg;
+      clobber_val = ival[n];
+
+      avr_asm_len ("ldi %2,lo8(%1)" CR_TAB
+                   "mov %0,%2", xop, len, 2);
     }
-  if (cnst && ((INTVAL (src) & 0xff000000) == 0))
-    output_asm_insn (AS2 (mov, %D0, __zero_reg__), operands);
-  else
+
+  /* If we cooked up a clobber reg above, restore it.  */
+
+  if (cooked_clobber_p)
     {
-      output_asm_insn (AS2 (ldi, %2, hhi8(%1)), operands);
-      output_asm_insn (AS2 (mov, %D0, %2), operands);
+      avr_asm_len ("mov %0,__tmp_reg__", &clobber_reg, len, 1);
     }
+
   return "";
 }
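
P.S.: For completeness, a rough sketch of the no-scratch paths: when no
d-register is free (the "r" <- "i" alternative of *movsi), a constant whose
bytes are all 0x00 or single powers of 2 needs no clobber at all.  For
example, 0x00010001 into r10..r13 on a device with MOVW would come out along
the lines of

	clr r10
	inc r10           ; 0x01 handled without a scratch reg
	clr r11
	movw r12,r10      ; high word equals low word

instead of the 10-word last-resort sequence through r31 that the old
output_movsisf fell back to.  Register numbers are again only illustrative.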