From patchwork Wed Jul 19 15:01:12 2023
X-Patchwork-Submitter: Koba Ko
X-Patchwork-Id: 1809918
From: Koba Ko
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 5/5 V2][SRU][L] drm/amd: Align SMU11 SMU_MSG_OverridePcieParameters implementation with SMU13
Date: Wed, 19 Jul 2023 23:01:12 +0800
Message-Id: <20230719150112.1883903-6-koba.ko@canonical.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20230719150112.1883903-1-koba.ko@canonical.com>
References: <20230719150112.1883903-1-koba.ko@canonical.com>

From: Mario Limonciello

SMU13 overrides dynamic PCIe lane width and dynamic speed when running on certain hosts. Commit 38e4ced80479 ("drm/amd/pm: conditionally disable pcie lane switching for some sienna_cichlid SKUs") worked around this issue by imposing those limits only on specific SKUs, but the same fundamental problem with those hosts affects all SMU11 implementations as well, so align the SMU11 and SMU13 driver handling.
Signed-off-by: Mario Limonciello
Reviewed-by: Evan Quan
Signed-off-by: Alex Deucher
Cc: stable@vger.kernel.org # 6.1.x
(backported from commit e701156ccc6c7a5f104a968dda74cd6434178712 linux-next)
Signed-off-by: Koba Ko
---
 .../amd/pm/swsmu/smu11/sienna_cichlid_ppt.c   | 45 ++++++++++---------
 1 file changed, 24 insertions(+), 21 deletions(-)

diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
index 75f18681e984c..7ecd773c7f518 100644
--- a/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
+++ b/drivers/gpu/drm/amd/pm/swsmu/smu11/sienna_cichlid_ppt.c
@@ -2072,28 +2072,36 @@ static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
                                          uint32_t pcie_width_cap)
 {
         struct smu_11_0_dpm_context *dpm_context = smu->smu_dpm.dpm_context;
-
-        uint32_t smu_pcie_arg;
-        uint8_t *table_member1, *table_member2;
+        struct smu_11_0_pcie_table *pcie_table = &dpm_context->dpm_tables.pcie_table;
+        u32 smu_pcie_arg;
         int ret, i;
 
-        GET_PPTABLE_MEMBER(PcieGenSpeed, &table_member1);
-        GET_PPTABLE_MEMBER(PcieLaneCount, &table_member2);
+        /* PCIE gen speed and lane width override */
+        if (!amdgpu_device_pcie_dynamic_switching_supported()) {
+                if (pcie_table->pcie_gen[NUM_LINK_LEVELS - 1] < pcie_gen_cap)
+                        pcie_gen_cap = pcie_table->pcie_gen[NUM_LINK_LEVELS - 1];
+
+                if (pcie_table->pcie_lane[NUM_LINK_LEVELS - 1] < pcie_width_cap)
+                        pcie_width_cap = pcie_table->pcie_lane[NUM_LINK_LEVELS - 1];
 
-        /* lclk dpm table setup */
-        for (i = 0; i < MAX_PCIE_CONF; i++) {
-                dpm_context->dpm_tables.pcie_table.pcie_gen[i] = table_member1[i];
-                dpm_context->dpm_tables.pcie_table.pcie_lane[i] = table_member2[i];
+                /* Force all levels to use the same settings */
+                for (i = 0; i < NUM_LINK_LEVELS; i++) {
+                        pcie_table->pcie_gen[i] = pcie_gen_cap;
+                        pcie_table->pcie_lane[i] = pcie_width_cap;
+                }
+        } else {
+                for (i = 0; i < NUM_LINK_LEVELS; i++) {
+                        if (pcie_table->pcie_gen[i] > pcie_gen_cap)
+                                pcie_table->pcie_gen[i] = pcie_gen_cap;
+                        if (pcie_table->pcie_lane[i] > pcie_width_cap)
+                                pcie_table->pcie_lane[i] = pcie_width_cap;
+                }
         }
 
         for (i = 0; i < NUM_LINK_LEVELS; i++) {
-                smu_pcie_arg = (i << 16) |
-                        ((table_member1[i] <= pcie_gen_cap) ?
-                         (table_member1[i] << 8) :
-                         (pcie_gen_cap << 8)) |
-                        ((table_member2[i] <= pcie_width_cap) ?
-                         table_member2[i] :
-                         pcie_width_cap);
+                smu_pcie_arg = (i << 16 |
+                                pcie_table->pcie_gen[i] << 8 |
+                                pcie_table->pcie_lane[i]);
 
                 ret = smu_cmn_send_smc_msg_with_param(smu,
                                 SMU_MSG_OverridePcieParameters,
@@ -2101,11 +2109,6 @@ static int sienna_cichlid_update_pcie_parameters(struct smu_context *smu,
                                 NULL);
                 if (ret)
                         return ret;
-
-                if (table_member1[i] > pcie_gen_cap)
-                        dpm_context->dpm_tables.pcie_table.pcie_gen[i] = pcie_gen_cap;
-                if (table_member2[i] > pcie_width_cap)
-                        dpm_context->dpm_tables.pcie_table.pcie_lane[i] = pcie_width_cap;
         }
 
         return 0;
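
For review context, below is a minimal standalone sketch of the override policy the patch adopts from SMU13: when dynamic switching is unavailable, the caps are first clamped to what the top link level supports and every level is then forced to the same gen/lane settings; otherwise each level is only capped where it exceeds the platform limits. The simplified types, the dynamic_switching_supported() stub, and the sample table values are illustrative placeholders rather than driver API; the real code reads the table from the pptable, calls amdgpu_device_pcie_dynamic_switching_supported(), and sends each packed argument via smu_cmn_send_smc_msg_with_param().

/* Illustrative only: simplified stand-ins for the driver's types and helpers. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LINK_LEVELS 2

struct pcie_table {
        uint8_t pcie_gen[NUM_LINK_LEVELS];      /* table-encoded gen per level */
        uint8_t pcie_lane[NUM_LINK_LEVELS];     /* table-encoded lane count per level */
};

/* Stand-in for amdgpu_device_pcie_dynamic_switching_supported(). */
static bool dynamic_switching_supported(void)
{
        return false;   /* pretend the host cannot handle dynamic switching */
}

static void update_pcie_parameters(struct pcie_table *t,
                                   uint32_t pcie_gen_cap,
                                   uint32_t pcie_width_cap)
{
        int i;

        if (!dynamic_switching_supported()) {
                /* Clamp the caps to what the highest link level supports... */
                if (t->pcie_gen[NUM_LINK_LEVELS - 1] < pcie_gen_cap)
                        pcie_gen_cap = t->pcie_gen[NUM_LINK_LEVELS - 1];
                if (t->pcie_lane[NUM_LINK_LEVELS - 1] < pcie_width_cap)
                        pcie_width_cap = t->pcie_lane[NUM_LINK_LEVELS - 1];

                /* ...then force every level to the same settings. */
                for (i = 0; i < NUM_LINK_LEVELS; i++) {
                        t->pcie_gen[i] = pcie_gen_cap;
                        t->pcie_lane[i] = pcie_width_cap;
                }
        } else {
                /* Dynamic switching works: only cap levels above the limits. */
                for (i = 0; i < NUM_LINK_LEVELS; i++) {
                        if (t->pcie_gen[i] > pcie_gen_cap)
                                t->pcie_gen[i] = pcie_gen_cap;
                        if (t->pcie_lane[i] > pcie_width_cap)
                                t->pcie_lane[i] = pcie_width_cap;
                }
        }

        for (i = 0; i < NUM_LINK_LEVELS; i++) {
                /* Same per-level packing the patch sends to the SMU:
                 * bits 16+: level index, bits 8-15: gen, bits 0-7: lane count.
                 */
                uint32_t smu_pcie_arg = (uint32_t)i << 16 |
                                        (uint32_t)t->pcie_gen[i] << 8 |
                                        t->pcie_lane[i];
                printf("level %d -> 0x%06" PRIx32 "\n", i, smu_pcie_arg);
        }
}

int main(void)
{
        /* Made-up table values standing in for what the pptable reports. */
        struct pcie_table t = { .pcie_gen = { 2, 3 }, .pcie_lane = { 5, 6 } };

        update_pcie_parameters(&t, 3, 6);
        return 0;
}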