From patchwork Mon Feb 22 18:10:02 2016
X-Patchwork-Submitter: Roger Sayle
X-Patchwork-Id: 586445
From: "roger@nextmovesoftware.com" <roger@nextmovesoftware.com>
Subject: [JAVA PATCH] Enable more array bounds check elimination
Message-Id: <4F362C11-3DAA-4A9A-AEAB-089C20B3590C@nextmovesoftware.com>
Date: Mon, 22 Feb 2016 18:10:02 +0000
To: java-patches@gcc.gnu.org, gcc-patches@gcc.gnu.org

It has been a while since my last contribution.  The following patch allows
GCC's optimizers to eliminate and optimize Java array bounds checks more
aggressively.
The results are quite impressive: for example, a 26% performance improvement
on the sieve.java benchmark from http://keithlea.com/javabench/ on
x86_64-pc-linux-gnu, reducing the runtime for 1 million iterations from
31.5 seconds on trunk to 25.0 seconds with this patch, which eliminates all
of that benchmark's array bounds checks.  This is close to the 22.3 seconds
of an equivalent C/C++ implementation, and significantly closes the gap to
the Java HotSpot(TM) JIT at 23.0 seconds.

The approach is to provide sufficient information in the gimple generated
by the gcj front end to allow the optimizers to do their thing.  For array
allocations of constant length, I propose generating an additional (cheap)
write to the length field of the array returned by _Jv_NewPrimArray, which
is then sufficient to allow this constant to propagate through the
optimizers.

This is probably best explained by a simple example.  Consider the array
initializer below:

  private static int mk1[] = { 71, 85, 95 };

which is compiled to the Java bytecode sequence below:

   0: iconst_3
   1: newarray       int
   3: dup
   4: iconst_0
   5: bipush        71
   7: iastore
   8: dup
   9: iconst_1
  10: bipush        85
  12: iastore
  13: dup
  14: iconst_2
  15: bipush        95
  17: iastore
  18: putstatic     #3      // Field mk1:[I
  21: return

Currently, the .004t.gimple generated by gcj for the array allocation is
the cryptic:

  #slot#0#0 = 3;
  #ref#0#2 = _Jv_NewPrimArray (&_Jv_intClass, #slot#0#0);
  #ref#1#4 = #ref#0#2;
  _ref_1_4.6 = #ref#1#4;

which unfortunately doesn't provide many clues for the middle-end, so we end
up generating the following .210t.optimized:

  void * _3 = _Jv_NewPrimArray (&_Jv_intClass, 3);
  int _4 = MEM[(struct int[] *)_3].length;
  unsigned int _5 = (unsigned int) _4;
  if (_4 == 0) goto <...>; else goto <...>;

  <...>:
  _Jv_ThrowBadArrayIndex (0);

  <...>:
  MEM[(int *)_3 + 12B] = 71;
  if (_5 == 1) goto <...>; else goto <...>;

  <...>:
  _Jv_ThrowBadArrayIndex (1);

  <...>:
  MEM[(int *)_3 + 16B] = 85;
  if (_5 == 2) goto <...>; else goto <...>;

  <...>:
  _Jv_ThrowBadArrayIndex (2);

  <...>:
  MEM[(int *)_3 + 20B] = 95;
  mk1 = _3;
  return;

which obviously contains three run-time array bounds checks.  These same
checks appear in the x86_64 assembly language:

        subq    $8, %rsp
        xorl    %eax, %eax
        movl    $3, %esi
        movl    $_Jv_intClass, %edi
        call    _Jv_NewPrimArray
        movl    8(%rax), %edx
        testl   %edx, %edx
        je      .L13
        cmpl    $1, %edx
        movl    $71, 12(%rax)
        je      .L14
        cmpl    $2, %edx
        movl    $85, 16(%rax)
        je      .L15
        movl    $95, 20(%rax)
        movq    %rax, _ZN9TestArray3mk1E(%rip)
        addq    $8, %rsp
        ret
.L13:
        xorl    %edi, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex
.L15:
        movl    $2, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex
.L14:
        movl    $1, %edi
        xorl    %eax, %eax
        call    _Jv_ThrowBadArrayIndex

With the patch below, we now generate the much more informative .004t.gimple
for this:

  D.926 = _Jv_NewPrimArray (&_Jv_intClass, 3);
  D.926->length = 3;

This additional write to the length field of the newly allocated array
enables much more simplification.  The resulting .210t.optimized for our
array initialization now becomes:

  struct int[3] * _3;

  _3 = _Jv_NewPrimArray (&_Jv_intClass, 3);
  MEM[(int *)_3 + 8B] = { 3, 71, 85, 95 };
  mk1 = _3;
  return;

And the x86_64 assembly code is also much prettier:

        subq    $8, %rsp
        movl    $3, %esi
        movl    $_Jv_intClass, %edi
        xorl    %eax, %eax
        call    _Jv_NewPrimArray
        movdqa  .LC0(%rip), %xmm0
        movq    %rax, _ZN9TestArray3mk1E(%rip)
        movups  %xmm0, 8(%rax)
        addq    $8, %rsp
        ret

.LC0:
        .long   3
        .long   71
        .long   85
        .long   95

Achieving this result required two minor tweaks.  The first is to allow the
array length constant to reach the newarray call, by allowing constants to
remain on the quick stack.  This allows the call to _Jv_NewPrimArray to have
a constant integer argument instead of the opaque #slot#0#0.  The second is
that, in the code that constructs the call to _Jv_NewPrimArray, we wrap the
call in a COMPOUND_EXPR, allowing us to insert the superfluous, but helpful,
write to the length field.
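For illustration (this example is hypothetical, and is not taken from the
patch, the testsuite, or the benchmark), the same length propagation should
help any method that allocates a constant-length array and indexes it
locally; once the constant length is visible to the optimizers, value-range
propagation can prove the indices in range, and the checks should fold away
much as in the mk1 initializer above:

  // Hypothetical example, not part of the patch: a local constant-length
  // allocation.  With the extra write to the length field, the optimizers
  // can see that every index below is less than the length (4), so the
  // bounds checks should be eliminated, much as in the mk1 case.
  public class Squares
  {
    static int sumOfSquares ()
    {
      int[] v = new int[4];
      for (int i = 0; i < 4; i++)
        v[i] = i * i;
      return v[0] + v[1] + v[2] + v[3];
    }
  }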
Whilst working on this improvement, I also noticed that the array bounds
checks we were initially generating could themselves be improved.
Currently, an array bounds check in .004t.gimple looks like:

  D.925 = MEM[(struct int[] *)_ref_1_4.6].length;
  D.926 = (unsigned int) D.925;
  if (_slot_2_5.9 >= D.926) goto <...>; else goto <...>;
  <...>:
  _Jv_ThrowBadArrayIndex (_slot_2_5.8);
  if (0 != 0) goto <...>; else goto <...>;
  <...>:
  iftmp.7 = 1;
  goto <...>;
  <...>:
  iftmp.7 = 0;
  <...>:

Notice the unnecessary "0 != 0" test and the dead assignments to iftmp.7
(which is unused).  With the patch below, we not only avoid this conditional
but also use __builtin_expect to inform the compiler that throwing a
BadArrayIndex exception is typically unlikely, i.e.:

  D.930 = MEM[(struct int[] *)_ref_1_4.4].length;
  D.931 = D.930 <= 1;
  D.932 = __builtin_expect (D.931, 0);
  if (D.932 != 0) goto <...>; else goto <...>;
  <...>:
  _Jv_ThrowBadArrayIndex (0);
  <...>:
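As a hypothetical Java-level illustration (not taken from the patch or its
testsuite): when the index is not known at compile time the check has to
stay, but the new annotation tells the compiler that the throwing arm is
cold, so the in-bounds fall-through is treated as the hot path:

  // Hypothetical example, not part of the patch: the bounds check on a[i]
  // must remain because i is unknown at compile time, but the path that
  // throws ArrayIndexOutOfBoundsException is now marked as unlikely.
  class Access
  {
    static int element (int[] a, int i)
    {
      return a[i];
    }
  }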
The following patch has been tested on x86_64-pc-linux-gnu with a full make
bootstrap and make check, with no new failures/regressions.  Please let me
know what you think (for stage 1 once it reopens)?

Roger
--
Roger Sayle, Ph.D.
CEO and founder
NextMove Software Limited
Registered in England No. 07588305
Registered Office: Innovation Centre (Unit 23), Cambridge Science Park, Cambridge CB4 0EY

2016-02-21  Roger Sayle  <roger@nextmovesoftware.com>

	* expr.c (push_value): Only call flush_quick_stack for non-constant
	arguments.
	(build_java_throw_out_of_bounds_exception): No longer wrap calls
	to _Jv_ThrowBadArrayIndex in a COMPOUND_EXPR as no longer needed.
	(java_check_reference): Annotate COND_EXPR with __builtin_expect
	to indicate that calling _Jv_ThrowNullPointerException is unlikely.
	(build_java_arrayaccess): Construct an unlikely COND_EXPR instead
	of a TRUTH_ANDIF_EXPR in a COMPOUND_EXPR.  Only generate array
	index MULT_EXPR when size_exp is not unity.
	(build_array_length_annotation): When optimizing, generate a write
	to the allocated array's length field to expose constant lengths
	to GCC's optimizers.
	(build_newarray): Call new build_array_length_annotation.
	(build_anewarray): Likewise.

Index: expr.c
===================================================================
--- expr.c	(revision 233190)
+++ expr.c	(working copy)
@@ -37,6 +37,7 @@
 #include "jcf.h"
 #include "parse.h"
 #include "tree-iterator.h"
+#include "tree-eh.h"
 
 static void flush_quick_stack (void);
 static void push_value (tree);
@@ -54,6 +55,7 @@
 static void expand_java_pushc (int, tree);
 static void expand_java_return (tree);
 static void expand_load_internal (int, tree, int);
+static void expand_store_internal (tree, int, int);
 static void expand_java_NEW (tree);
 static void expand_java_INSTANCEOF (tree);
 static void expand_java_CHECKCAST (tree);
@@ -273,10 +275,12 @@
   /* If the value has a side effect, then we need to evaluate it
      whether or not the result is used.  If the value ends up on the
      quick stack and is then popped, this won't happen -- so we flush
-     the quick stack.  It is safest to simply always flush, though,
-     since TREE_SIDE_EFFECTS doesn't capture COMPONENT_REF, and for
-     the latter we may need to strip conversions.  */
-  flush_quick_stack ();
+     the quick stack.  It is safest to always flush non-constant
+     operands.  */
+  if (! TREE_CONSTANT (value)
+      || TREE_SIDE_EFFECTS (value)
+      || tree_could_trap_p (value))
+    flush_quick_stack ();
 }
 
 /* Pop a type from the type stack.
@@ -778,19 +782,13 @@
 {
   tree node;
 
-  /* We need to build a COMPOUND_EXPR because _Jv_ThrowBadArrayIndex()
-     has void return type.  We cannot just set the type of the CALL_EXPR below
-     to int_type_node because we would lose it during gimplification.  */
+  /* _Jv_ThrowBadArrayIndex() has void return type.  */
   gcc_assert (VOID_TYPE_P (TREE_TYPE (TREE_TYPE (soft_badarrayindex_node))));
   node = build_call_nary (void_type_node,
-			  build_address_of (soft_badarrayindex_node),
-			  1, index);
+                          build_address_of (soft_badarrayindex_node),
+                          1, index);
   TREE_SIDE_EFFECTS (node) = 1;
-
-  node = build2 (COMPOUND_EXPR, int_type_node, node, integer_zero_node);
-  TREE_SIDE_EFFECTS (node) = 1;	/* Allows expansion within ANDIF */
-
-  return (node);
+  return node;
 }
 
 /* Return the length of an array.  Doesn't perform any checking on the nature
@@ -833,10 +831,12 @@
 {
   if (!flag_syntax_only && check)
     {
+      tree test;
       expr = save_expr (expr);
-      expr = build3 (COND_EXPR, TREE_TYPE (expr),
-		     build2 (EQ_EXPR, boolean_type_node,
-			     expr, null_pointer_node),
+      test = build2 (EQ_EXPR, boolean_type_node, expr, null_pointer_node);
+      test = build_call_expr (builtin_decl_implicit (BUILT_IN_EXPECT), 2,
+                              test, boolean_false_node);
+      expr = build3 (COND_EXPR, TREE_TYPE (expr), test,
 		     build_call_nary (void_type_node,
 				      build_address_of (soft_nullpointer_node),
 				      0),
@@ -865,7 +865,7 @@
 tree
 build_java_arrayaccess (tree array, tree type, tree index)
 {
-  tree node, throw_expr = NULL_TREE;
+  tree node;
   tree data_field;
   tree ref;
   tree array_type = TREE_TYPE (TREE_TYPE (array));
@@ -882,9 +882,9 @@
     {
       /* Generate:
        * (unsigned jint) INDEX >= (unsigned jint) LEN
-       *    && throw ArrayIndexOutOfBoundsException.
+       *    ? throw ArrayIndexOutOfBoundsException : INDEX.
        * Note this is equivalent to and more efficient than:
-       * INDEX < 0 || INDEX >= LEN && throw ... */
+       * INDEX < 0 || INDEX >= LEN ? throw ... : INDEX.  */
       tree test;
       tree len = convert (unsigned_int_type_node,
 			  build_java_array_length_access (array));
@@ -893,19 +893,14 @@
 			  len);
       if (! integer_zerop (test))
 	{
-	  throw_expr
-	    = build2 (TRUTH_ANDIF_EXPR, int_type_node, test,
-		      build_java_throw_out_of_bounds_exception (index));
-	  /* allows expansion within COMPOUND */
-	  TREE_SIDE_EFFECTS( throw_expr ) = 1;
+	  test = build_call_expr (builtin_decl_implicit (BUILT_IN_EXPECT), 2,
+				  test, boolean_false_node);
+	  index = build3(COND_EXPR, int_type_node, test,
+			 build_java_throw_out_of_bounds_exception (index),
+			 index);
 	}
     }
 
-  /* If checking bounds, wrap the index expr with a COMPOUND_EXPR in order
-     to have the bounds check evaluated first.  */
-  if (throw_expr != NULL_TREE)
-    index = build2 (COMPOUND_EXPR, int_type_node, throw_expr, index);
-
   data_field = lookup_field (&array_type, get_identifier ("data"));
 
   ref = build3 (COMPONENT_REF, TREE_TYPE (data_field),
@@ -919,9 +914,11 @@
 
   /* Multiply the index by the size of an element to obtain a byte
      offset.  Convert the result to a pointer to the element type.  */
-  index = build2 (MULT_EXPR, sizetype,
-		  fold_convert (sizetype, index),
-		  size_exp);
+  index = fold_convert (sizetype, index);
+  if (! integer_onep (size_exp))
+    {
+      index = build2 (MULT_EXPR, sizetype, index, size_exp);
+    }
 
   /* Sum the byte offset and the address of the data field.  */
   node = fold_build_pointer_plus (node, index);
@@ -1026,6 +1023,34 @@
   return indexed_type;
 }
 
+/* When optimizing, wrap calls to array allocation functions taking
+   constant length arguments, in a COMPOUND_EXPR, containing an
+   explict assignment of the .length field, for GCC's optimizers.  */
+
+static tree
+build_array_length_annotation (tree call, tree length)
+{
+  if (optimize
+      && TREE_CONSTANT (length)
+      && is_array_type_p (TREE_TYPE (call)))
+    {
+      tree type, note;
+      type = TREE_TYPE (call);
+      call = save_expr(call);
+      note = build3 (COMPONENT_REF, int_type_node,
+                     build1 (INDIRECT_REF, TREE_TYPE (type), call),
+                     lookup_field (&TREE_TYPE (type),
+                                   get_identifier ("length")),
+                     NULL_TREE);
+      note = build2 (MODIFY_EXPR, int_type_node, note, length);
+      TREE_SIDE_EFFECTS (note) = 1;
+      call = build2 (COMPOUND_EXPR, TREE_TYPE (call), note, call);
+      TREE_SIDE_EFFECTS (call) = 1;
+    }
+  return call;
+}
+
+
 /* newarray triggers a call to _Jv_NewPrimArray.  This function should be
    called with an integer code (the type of array to create), and the length
    of the array to create.  */
@@ -1033,7 +1058,7 @@
 tree
 build_newarray (int atype_value, tree length)
 {
-  tree type_arg;
+  tree type_arg, call;
 
   tree prim_type = decode_newarray_type (atype_value);
   tree type
@@ -1045,9 +1070,10 @@
      some work.  */
   type_arg = build_class_ref (prim_type);
 
-  return build_call_nary (promote_type (type),
+  call = build_call_nary (promote_type (type),
 			  build_address_of (soft_newarray_node),
 			  2, type_arg, length);
+  return build_array_length_annotation (call, length);
 }
 
 /* Generates anewarray from a given CLASS_TYPE.  Gets from the stack the size
@@ -1061,12 +1087,14 @@
 			     tree_fits_shwi_p (length)
 			     ? tree_to_shwi (length) : -1);
 
-  return build_call_nary (promote_type (type),
-			  build_address_of (soft_anewarray_node),
-			  3,
-			  length,
-			  build_class_ref (class_type),
-			  null_pointer_node);
+  tree call = build_call_nary (promote_type (type),
+			       build_address_of (soft_anewarray_node),
+			       3,
+			       length,
+			       build_class_ref (class_type),
+			       null_pointer_node);
+
+  return build_array_length_annotation (call, length);
 }
 
 /* Return a node the evaluates 'new TYPE[LENGTH]'.  */