From patchwork Mon Nov 10 09:16:23 2014
X-Patchwork-Submitter: Bin Cheng
X-Patchwork-Id: 408697
Delivered-To: mailing list gcc-patches@gcc.gnu.org
From: "Bin Cheng"
To: gcc-patches@gcc.gnu.org
Subject: [PATCH GCC]Fix checking on MAX_PENDING_LIST_LENGTH
Date: Mon, 10 Nov 2014 17:16:23 +0800
Message-ID: <000001cffcc7$00a72840$01f578c0$@arm.com>

Hi,

The GCC scheduler has a parameter max-pending-list-length, but the parameter is checked with a strict greater-than comparison.  As a result, the real maximum pending list length is actually "max-pending-list-length + 1".  This patch fixes that by using the ">=" rather than the ">" comparison operator.

Though this is kind of nit-picking, I want to change it because: a) it breaks sched-fusion, since the 33rd store couldn't be paired; b) when sched-fusion tries to sort many consecutive stores, it breaks dcache line alignment with large probability.  Without a cache-sensitive optimizer GCC breaks dcache line alignment randomly anyway, but 33 is definitely worse than 32.  Of course, this only happens in a very restricted case.

Bootstrapped and tested on x86_64.  Is it OK?

2014-11-10  Bin Cheng

	* sched-deps.c (sched_analyze_1): Check pending list if it is not
	less than MAX_PENDING_LIST_LENGTH.
	(sched_analyze_2, sched_analyze_insn, deps_analyze_insn): Ditto.

Index: gcc/sched-deps.c
===================================================================
--- gcc/sched-deps.c	(revision 217273)
+++ gcc/sched-deps.c	(working copy)
@@ -2504,7 +2504,7 @@ sched_analyze_1 (struct deps_desc *deps, rtx x, rt

   /* Pending lists can't get larger with a readonly context.
      */
   if (!deps->readonly
       && ((deps->pending_read_list_length + deps->pending_write_list_length)
-	   > MAX_PENDING_LIST_LENGTH))
+	   >= MAX_PENDING_LIST_LENGTH))
     {
       /* Flush all pending reads and writes to prevent the pending lists
	  from getting any larger.  Insn scheduling runs too slowly when
@@ -2722,7 +2722,7 @@ sched_analyze_2 (struct deps_desc *deps, rtx x, rt
	{
	  if ((deps->pending_read_list_length
	       + deps->pending_write_list_length)
-	      > MAX_PENDING_LIST_LENGTH
+	      >= MAX_PENDING_LIST_LENGTH
	      && !DEBUG_INSN_P (insn))
	    flush_pending_lists (deps, insn, true, true);
	  add_insn_mem_dependence (deps, true, insn, x);
@@ -3227,8 +3227,8 @@ sched_analyze_insn (struct deps_desc *deps, rtx x,
       EXECUTE_IF_SET_IN_REG_SET (reg_pending_clobbers, 0, i, rsi)
	{
	  struct deps_reg *reg_last = &deps->reg_last[i];
-	  if (reg_last->uses_length > MAX_PENDING_LIST_LENGTH
-	      || reg_last->clobbers_length > MAX_PENDING_LIST_LENGTH)
+	  if (reg_last->uses_length >= MAX_PENDING_LIST_LENGTH
+	      || reg_last->clobbers_length >= MAX_PENDING_LIST_LENGTH)
	    {
	      add_dependence_list_and_free (deps, insn, &reg_last->sets, 0,
					    REG_DEP_OUTPUT, false);
@@ -3661,7 +3661,7 @@ deps_analyze_insn (struct deps_desc *deps, rtx_ins
	  && sel_insn_is_speculation_check (insn)))
     {
       /* Keep the list a reasonable size.  */
-      if (deps->pending_flush_length++ > MAX_PENDING_LIST_LENGTH)
+      if (deps->pending_flush_length++ >= MAX_PENDING_LIST_LENGTH)
	flush_pending_lists (deps, insn, true, true);
       else
	deps->pending_jump_insns