From patchwork Tue Dec 6 10:00:53 2022
X-Patchwork-Submitter: Jan Hubicka
X-Patchwork-Id: 1712615
Date: Tue, 6 Dec 2022 11:00:53 +0100 (CET)
From: Jan Hubicka
To: gcc-patches@gcc.gnu.org, mjambor@suse.cz, Alexander Monakov,
	"Kumar, Venkataramanan", Tejas Sanjay
Subject: Zen4 tuning part 1 - cost tables
List-Id: Gcc-patches mailing list

Hi,
this patch updates the costs of znver4, mostly based on data measured by
Agner Fog.  Compared to previous generations, x87 became a bit slower,
which is probably not a big deal (and we have minimal benchmarking
coverage for it).  One interesting improvement is the reduction of FMA
cost.  I also updated the costs of AVX256 loads/stores based on latencies
(not throughput, which is twice that of AVX256).

Overall, AVX512 vectorization seems to improve some of the TSVC benchmarks
noticeably, but since 512-bit vectors are internally split into 256-bit
halves it is somewhat risky, and it does not win in SPEC scores (mostly by
regressing benchmarks with loops that have a small trip count, like x264
and exchange), so for now I am going to set the AVX256_OPTIMAL tune,
though I am still playing with it.
We have improved since ZNVER1 on choosing the vectorization size and also
have vectorized prologues/epilogues, so it may be possible to make AVX512
a small win overall.  In general I would like to keep the cost tables
latency-based unless we have a good reason not to.

There are some interesting differences in the znver3 tables that I also
patched; they seem performance neutral.  I will send that separately.

Bootstrapped/regtested x86_64-linux, also benchmarked on SPEC2017 along
with AVX512 tuning.  I plan to commit it tomorrow unless there are some
comments.

Honza

	* config/i386/x86-tune-costs.h (znver4_cost): Update costs of FP
	and SSE moves, division, multiplication, gathers, L2 cache size,
	and more complex FP instructions.

diff --git a/gcc/config/i386/x86-tune-costs.h b/gcc/config/i386/x86-tune-costs.h
index f01b8ee9eef..3a6ce02f093 100644
--- a/gcc/config/i386/x86-tune-costs.h
+++ b/gcc/config/i386/x86-tune-costs.h
@@ -1867,9 +1868,9 @@ struct processor_costs znver4_cost = {
   {8, 8, 8},				/* cost of storing integer
					   registers.  */
   2,					/* cost of reg,reg fld/fst.  */
-  {6, 6, 16},				/* cost of loading fp registers
+  {14, 14, 17},			/* cost of loading fp registers
					   in SFmode, DFmode and XFmode.  */
-  {8, 8, 16},				/* cost of storing fp registers
+  {12, 12, 16},			/* cost of storing fp registers
					   in SFmode, DFmode and XFmode.  */
   2,					/* cost of moving MMX register.  */
   {6, 6},				/* cost of loading MMX registers
@@ -1878,13 +1879,13 @@ struct processor_costs znver4_cost = {
					   in SImode and DImode.  */
   2, 2, 3,				/* cost of moving XMM,YMM,ZMM register.  */
-  {6, 6, 6, 6, 12},			/* cost of loading SSE registers
+  {6, 6, 10, 10, 12},			/* cost of loading SSE registers
					   in 32,64,128,256 and 512-bit.  */
-  {8, 8, 8, 8, 16},			/* cost of storing SSE registers
+  {8, 8, 8, 12, 12},			/* cost of storing SSE registers
					   in 32,64,128,256 and 512-bit.  */
-  6, 6,					/* SSE->integer and integer->SSE
+  6, 8,					/* SSE->integer and integer->SSE
					   moves.  */
-  8, 8,				/* mask->integer and integer->mask moves */
+  8, 8,					/* mask->integer and integer->mask moves */
   {6, 6, 6},				/* cost of loading mask register
					   in QImode, HImode, SImode.  */
   {8, 8, 8},				/* cost if storing mask register
@@ -1894,6 +1895,7 @@ struct processor_costs znver4_cost = {
  },

  COSTS_N_INSNS (1),			/* cost of an add instruction.  */
+ /* TODO: Lea with 3 components has cost 2.  */
  COSTS_N_INSNS (1),			/* cost of a lea instruction.  */
  COSTS_N_INSNS (1),			/* variable shift costs.  */
  COSTS_N_INSNS (1),			/* constant shift costs.  */
@@ -1904,11 +1906,11 @@ struct processor_costs znver4_cost = {
   COSTS_N_INSNS (3)},			/* other.  */
  0,					/* cost of multiply per each bit set.  */
- {COSTS_N_INSNS (9),			/* cost of a divide/mod for QI.  */
-  COSTS_N_INSNS (10),			/*			      HI.  */
-  COSTS_N_INSNS (12),			/*			      SI.  */
-  COSTS_N_INSNS (17),			/*			      DI.  */
-  COSTS_N_INSNS (17)},			/*			      other.  */
+ {COSTS_N_INSNS (12),			/* cost of a divide/mod for QI.  */
+  COSTS_N_INSNS (13),			/*			      HI.  */
+  COSTS_N_INSNS (13),			/*			      SI.  */
+  COSTS_N_INSNS (18),			/*			      DI.  */
+  COSTS_N_INSNS (18)},			/*			      other.  */
  COSTS_N_INSNS (1),			/* cost of movsx.  */
  COSTS_N_INSNS (1),			/* cost of movzx.  */
  8,					/* "large" insn.  */
@@ -1919,22 +1921,22 @@ struct processor_costs znver4_cost = {
					   Relative to reg-reg move (2).  */
  {8, 8, 8},				/* cost of storing integer
					   registers.  */
- {6, 6, 6, 6, 12},			/* cost of loading SSE registers
+ {6, 6, 10, 10, 12},			/* cost of loading SSE registers
					   in 32bit, 64bit, 128bit, 256bit and 512bit */
- {8, 8, 8, 8, 16},			/* cost of storing SSE register
+ {8, 8, 8, 12, 12},			/* cost of storing SSE register
					   in 32bit, 64bit, 128bit, 256bit and 512bit */
- {6, 6, 6, 6, 12},			/* cost of unaligned loads.  */
- {8, 8, 8, 8, 16},			/* cost of unaligned stores.  */
- 2, 2, 3,				/* cost of moving XMM,YMM,ZMM
+ {6, 6, 6, 6, 6},			/* cost of unaligned loads.  */
+ {8, 8, 8, 8, 8},			/* cost of unaligned stores.  */
+ 2, 2, 2,				/* cost of moving XMM,YMM,ZMM
					   register.  */
  6,					/* cost of moving SSE register to
					   integer.  */
- /* VGATHERDPD is 15 uops and throughput is 4, VGATHERDPS is 23 uops,
-    throughput 9.  Approx 7 uops do not depend on vector size and every load
-    is 4 uops.  */
- 14, 8,				/* Gather load static, per_elt.  */
- 14, 10,				/* Gather store static, per_elt.  */
+ /* VGATHERDPD is 17 uops and throughput is 4, VGATHERDPS is 24 uops,
+    throughput 5.  Approx 7 uops do not depend on vector size and every load
+    is 5 uops.  */
+ 14, 10,				/* Gather load static, per_elt.  */
+ 14, 20,				/* Gather store static, per_elt.  */
  32,					/* size of l1 cache.  */
- 512,					/* size of l2 cache.  */
+ 1024,					/* size of l2 cache.  */
  64,					/* size of prefetch block.  */
  /* New AMD processors never drop prefetches; if they cannot be performed
     immediately, they are queued.  We set number of simultaneous prefetches
@@ -1943,26 +1945,26 @@ struct processor_costs znver4_cost = {
     time).  */
  100,					/* number of parallel prefetches.  */
  3,					/* Branch cost.  */
- COSTS_N_INSNS (5),			/* cost of FADD and FSUB insns.  */
- COSTS_N_INSNS (5),			/* cost of FMUL instruction.  */
+ COSTS_N_INSNS (7),			/* cost of FADD and FSUB insns.  */
+ COSTS_N_INSNS (7),			/* cost of FMUL instruction.  */
  /* Latency of fdiv is 8-15.  */
  COSTS_N_INSNS (15),			/* cost of FDIV instruction.  */
  COSTS_N_INSNS (1),			/* cost of FABS instruction.  */
  COSTS_N_INSNS (1),			/* cost of FCHS instruction.  */
  /* Latency of fsqrt is 4-10.  */
- COSTS_N_INSNS (10),			/* cost of FSQRT instruction.  */
+ COSTS_N_INSNS (25),			/* cost of FSQRT instruction.  */
  COSTS_N_INSNS (1),			/* cost of cheap SSE instruction.  */
  COSTS_N_INSNS (3),			/* cost of ADDSS/SD SUBSS/SD insns.  */
  COSTS_N_INSNS (3),			/* cost of MULSS instruction.  */
  COSTS_N_INSNS (3),			/* cost of MULSD instruction.  */
- COSTS_N_INSNS (5),			/* cost of FMA SS instruction.  */
- COSTS_N_INSNS (5),			/* cost of FMA SD instruction.  */
- COSTS_N_INSNS (10),			/* cost of DIVSS instruction.  */
+ COSTS_N_INSNS (4),			/* cost of FMA SS instruction.  */
+ COSTS_N_INSNS (4),			/* cost of FMA SD instruction.  */
+ COSTS_N_INSNS (13),			/* cost of DIVSS instruction.  */
  /* 9-13.  */
  COSTS_N_INSNS (13),			/* cost of DIVSD instruction.  */
- COSTS_N_INSNS (10),			/* cost of SQRTSS instruction.  */
- COSTS_N_INSNS (15),			/* cost of SQRTSD instruction.  */
+ COSTS_N_INSNS (15),			/* cost of SQRTSS instruction.  */
+ COSTS_N_INSNS (21),			/* cost of SQRTSD instruction.  */
  /* Zen can execute 4 integer operations per cycle.  FP operations
     take 3 cycles and it can execute 2 integer additions and 2
     multiplications thus reassociation may make sense up to width of 6.