From patchwork Mon Apr 29 03:23:56 2024
X-Patchwork-Submitter: Mostafa Saleh
X-Patchwork-Id: 1928777
Date: Mon, 29 Apr 2024 03:23:56 +0000
In-Reply-To: <20240429032403.74910-1-smostafa@google.com>
References: <20240429032403.74910-1-smostafa@google.com>
Message-ID: <20240429032403.74910-13-smostafa@google.com>
Subject: [RFC PATCH v3 12/18] hw/arm/smmu: Support nesting in the rest of commands
From: Mostafa Saleh <smostafa@google.com>
To: qemu-arm@nongnu.org, eric.auger@redhat.com, peter.maydell@linaro.org,
 qemu-devel@nongnu.org
Cc: jean-philippe@linaro.org, alex.bennee@linaro.org, maz@kernel.org,
 nicolinc@nvidia.com, julien@xen.org, richard.henderson@linaro.org,
 marcin.juszkiewicz@linaro.org, Mostafa Saleh <smostafa@google.com>

Some commands need rework for nesting, as they used to assume S1 and S2
are mutually exclusive:

- CMD_TLBI_NH_ASID: Consider VMID if stage-2 is supported
- CMD_TLBI_NH_ALL: Consider VMID if stage-2 is supported, otherwise
  invalidate everything. This required a new VMID invalidation function
  for stage-1 only (ASID >= 0).

Also, rework trace events to reflect the new implementation.
Signed-off-by: Mostafa Saleh <smostafa@google.com>
---
 hw/arm/smmu-common.c         | 36 +++++++++++++++++++++++++++++-------
 hw/arm/smmuv3.c              | 31 +++++++++++++++++++++++++++++--
 hw/arm/trace-events          |  6 ++++--
 include/hw/arm/smmu-common.h |  3 ++-
 4 files changed, 64 insertions(+), 12 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index fa2460cf64..3ed0be05ef 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -147,13 +147,14 @@ void smmu_iotlb_inv_all(SMMUState *s)
     g_hash_table_remove_all(s->iotlb);
 }
 
-static gboolean smmu_hash_remove_by_asid(gpointer key, gpointer value,
-                                         gpointer user_data)
+static gboolean smmu_hash_remove_by_asid_vmid(gpointer key, gpointer value,
+                                              gpointer user_data)
 {
-    int asid = *(int *)user_data;
+    SMMUIOTLBPageInvInfo *info = (SMMUIOTLBPageInvInfo *)user_data;
     SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
 
-    return SMMU_IOTLB_ASID(*iotlb_key) == asid;
+    return (SMMU_IOTLB_ASID(*iotlb_key) == info->asid) &&
+           (SMMU_IOTLB_VMID(*iotlb_key) == info->vmid);
 }
 
 static gboolean smmu_hash_remove_by_vmid(gpointer key, gpointer value,
@@ -165,6 +166,16 @@ static gboolean smmu_hash_remove_by_vmid(gpointer key, gpointer value,
     return SMMU_IOTLB_VMID(*iotlb_key) == vmid;
 }
 
+static gboolean smmu_hash_remove_by_vmid_s1(gpointer key, gpointer value,
+                                            gpointer user_data)
+{
+    int vmid = *(int *)user_data;
+    SMMUIOTLBKey *iotlb_key = (SMMUIOTLBKey *)key;
+
+    return (SMMU_IOTLB_VMID(*iotlb_key) == vmid) &&
+           (SMMU_IOTLB_ASID(*iotlb_key) >= 0);
+}
+
 static gboolean smmu_hash_remove_by_asid_vmid_iova(gpointer key, gpointer value,
                                               gpointer user_data)
 {
@@ -258,10 +269,15 @@ void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
                                 &info);
 }
 
-void smmu_iotlb_inv_asid(SMMUState *s, int asid)
+void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid)
 {
-    trace_smmu_iotlb_inv_asid(asid);
-    g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid, &asid);
+    SMMUIOTLBPageInvInfo info = {
+        .asid = asid,
+        .vmid = vmid,
+    };
+
+    trace_smmu_iotlb_inv_asid_vmid(asid, vmid);
+    g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_asid_vmid, &info);
 }
 
 void smmu_iotlb_inv_vmid(SMMUState *s, int vmid)
@@ -270,6 +286,12 @@ void smmu_iotlb_inv_vmid(SMMUState *s, int vmid)
     g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid, &vmid);
 }
 
+inline void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
+{
+    trace_smmu_iotlb_inv_vmid_s1(vmid);
+    g_hash_table_foreach_remove(s->iotlb, smmu_hash_remove_by_vmid_s1, &vmid);
+}
+
 /* VMSAv8-64 Translation */
 
 /**
diff --git a/hw/arm/smmuv3.c b/hw/arm/smmuv3.c
index 82d918d9b5..e0fd494646 100644
--- a/hw/arm/smmuv3.c
+++ b/hw/arm/smmuv3.c
@@ -1303,25 +1303,52 @@ static int smmuv3_cmdq_consume(SMMUv3State *s)
         case SMMU_CMD_TLBI_NH_ASID:
         {
             int asid = CMD_ASID(&cmd);
+            int vmid = -1;
 
             if (!STAGE1_SUPPORTED(s)) {
                 cmd_error = SMMU_CERROR_ILL;
                 break;
             }
+
+            /*
+             * VMID is only matched when stage 2 is supported for the Security
+             * state corresponding to the command queue that the command was
+             * issued in.
+             * QEMU ignores the field by setting to -1, similarly to what STE
+             * decoding does. And invalidation commands ignore VMID < 0.
+             */
+            if (STAGE2_SUPPORTED(s)) {
+                vmid = CMD_VMID(&cmd);
+            }
+
             trace_smmuv3_cmdq_tlbi_nh_asid(asid);
             smmu_inv_notifiers_all(&s->smmu_state);
-            smmu_iotlb_inv_asid(bs, asid);
+            smmu_iotlb_inv_asid_vmid(bs, asid, vmid);
             break;
         }
         case SMMU_CMD_TLBI_NH_ALL:
+        {
+            int vmid = -1;
+
             if (!STAGE1_SUPPORTED(s)) {
                 cmd_error = SMMU_CERROR_ILL;
                 break;
             }
+
+            /*
+             * If stage-2 is supported, invalidate for this VMID only, otherwise
+             * invalidate the whole thing, see SMMU_CMD_TLBI_NH_ASID()
+             */
+            if (STAGE2_SUPPORTED(s)) {
+                vmid = CMD_VMID(&cmd);
+                trace_smmuv3_cmdq_tlbi_nh(vmid);
+                smmu_iotlb_inv_vmid_s1(bs, vmid);
+                break;
+            }
             QEMU_FALLTHROUGH;
+        }
         case SMMU_CMD_TLBI_NSNH_ALL:
-            trace_smmuv3_cmdq_tlbi_nh();
+            trace_smmuv3_cmdq_tlbi_nsnh();
             smmu_inv_notifiers_all(&s->smmu_state);
             smmu_iotlb_inv_all(bs);
             break;
diff --git a/hw/arm/trace-events b/hw/arm/trace-events
index 7d9c1703da..593cc571da 100644
--- a/hw/arm/trace-events
+++ b/hw/arm/trace-events
@@ -11,8 +11,9 @@ smmu_ptw_page_pte(int stage, int level, uint64_t iova, uint64_t baseaddr, uint6
 smmu_ptw_block_pte(int stage, int level, uint64_t baseaddr, uint64_t pteaddr, uint64_t pte, uint64_t iova, uint64_t gpa, int bsize_mb) "stage=%d level=%d base@=0x%"PRIx64" pte@=0x%"PRIx64" pte=0x%"PRIx64" iova=0x%"PRIx64" block address = 0x%"PRIx64" block size = %d MiB"
 smmu_get_pte(uint64_t baseaddr, int index, uint64_t pteaddr, uint64_t pte) "baseaddr=0x%"PRIx64" index=0x%x, pteaddr=0x%"PRIx64", pte=0x%"PRIx64
 smmu_iotlb_inv_all(void) "IOTLB invalidate all"
-smmu_iotlb_inv_asid(int asid) "IOTLB invalidate asid=%d"
+smmu_iotlb_inv_asid_vmid(int asid, int vmid) "IOTLB invalidate asid=%d vmid=%d"
 smmu_iotlb_inv_vmid(int vmid) "IOTLB invalidate vmid=%d"
+smmu_iotlb_inv_vmid_s1(int vmid) "IOTLB invalidate vmid=%d"
 smmu_iotlb_inv_iova(int asid, uint64_t addr) "IOTLB invalidate asid=%d addr=0x%"PRIx64
 smmu_inv_notifiers_mr(const char *name) "iommu mr=%s"
 smmu_iotlb_lookup_hit(int asid, int vmid, uint64_t addr, uint32_t hit, uint32_t miss, uint32_t p) "IOTLB cache HIT asid=%d vmid=%d addr=0x%"PRIx64" hit=%d miss=%d hit rate=%d"
@@ -47,7 +48,8 @@ smmuv3_cmdq_cfgi_cd(uint32_t sid) "sid=0x%x"
 smmuv3_config_cache_hit(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache HIT for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
 smmuv3_config_cache_miss(uint32_t sid, uint32_t hits, uint32_t misses, uint32_t perc) "Config cache MISS for sid=0x%x (hits=%d, misses=%d, hit rate=%d)"
 smmuv3_range_inval(int vmid, int asid, uint64_t addr, uint8_t tg, uint64_t num_pages, uint8_t ttl, bool leaf, int stage) "vmid=%d asid=%d addr=0x%"PRIx64" tg=%d num_pages=0x%"PRIx64" ttl=%d leaf=%d stage=%d"
-smmuv3_cmdq_tlbi_nh(void) ""
+smmuv3_cmdq_tlbi_nh(int vmid) "vmid=%d"
+smmuv3_cmdq_tlbi_nsnh(void) ""
 smmuv3_cmdq_tlbi_nh_asid(int asid) "asid=%d"
 smmuv3_cmdq_tlbi_s12_vmid(int vmid) "vmid=%d"
 smmuv3_config_cache_inv(uint32_t sid) "Config cache INV for sid=0x%x"
diff --git a/include/hw/arm/smmu-common.h b/include/hw/arm/smmu-common.h
index de032fdfd1..361e639630 100644
--- a/include/hw/arm/smmu-common.h
+++ b/include/hw/arm/smmu-common.h
@@ -212,8 +212,9 @@ void smmu_iotlb_insert(SMMUState *bs, SMMUTransCfg *cfg, SMMUTLBEntry *entry);
 SMMUIOTLBKey smmu_get_iotlb_key(int asid, int vmid, uint64_t iova,
                                 uint8_t tg, uint8_t level);
 void smmu_iotlb_inv_all(SMMUState *s);
-void smmu_iotlb_inv_asid(SMMUState *s, int asid);
+void smmu_iotlb_inv_asid_vmid(SMMUState *s, int asid, int vmid);
 void smmu_iotlb_inv_vmid(SMMUState *s, int vmid);
+void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid);
 void smmu_iotlb_inv_iova(SMMUState *s, int asid, int vmid, dma_addr_t iova,
                          uint8_t tg, uint64_t num_pages, uint8_t ttl);
 void smmu_iotlb_inv_ipa(SMMUState *s, int vmid, dma_addr_t ipa, uint8_t tg,
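
[Editor's note, not part of the patch] The standalone sketch below models the
two matching rules this patch introduces, for readers who want to see them
outside the hash-table callbacks: CMD_TLBI_NH_ASID now matches an IOTLB entry
on both ASID and VMID, while CMD_TLBI_NH_ALL with stage-2 support drops only
stage-1 entries (ASID >= 0) of the given VMID. IOTLBKeyModel,
remove_by_asid_vmid() and remove_by_vmid_s1() are hypothetical names standing
in for SMMUIOTLBKey and the patch's smmu_hash_remove_by_asid_vmid() /
smmu_hash_remove_by_vmid_s1(); this is plain C, not QEMU code.

/*
 * Sketch of the invalidation matching rules added by this patch.
 * Assumption (from the patch's own comment): when stage-2 is not in use,
 * both the cached keys and the decoded command use VMID -1, so plain
 * equality on VMID still matches every entry in that configuration.
 */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int asid;   /* >= 0 for stage-1 entries, -1 for stage-2-only entries */
    int vmid;   /* -1 when stage-2 is not in use */
} IOTLBKeyModel;

/* Mirrors smmu_hash_remove_by_asid_vmid(): used by CMD_TLBI_NH_ASID. */
static bool remove_by_asid_vmid(IOTLBKeyModel k, int asid, int vmid)
{
    return k.asid == asid && k.vmid == vmid;
}

/* Mirrors smmu_hash_remove_by_vmid_s1(): CMD_TLBI_NH_ALL with stage-2. */
static bool remove_by_vmid_s1(IOTLBKeyModel k, int vmid)
{
    return k.vmid == vmid && k.asid >= 0;
}

int main(void)
{
    IOTLBKeyModel keys[] = {
        { .asid = 1,  .vmid = 5 },  /* stage-1 entry under VMID 5 */
        { .asid = 1,  .vmid = 7 },  /* stage-1 entry under VMID 7 */
        { .asid = -1, .vmid = 5 },  /* stage-2-only entry, VMID 5 */
    };

    for (unsigned i = 0; i < sizeof(keys) / sizeof(keys[0]); i++) {
        /* Only keys[0] is removed in either case; keys[2] survives both. */
        printf("key %u: NH_ASID(asid=1,vmid=5) -> %d, NH_ALL(vmid=5) -> %d\n",
               i, remove_by_asid_vmid(keys[i], 1, 5),
               remove_by_vmid_s1(keys[i], 5));
    }
    return 0;
}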