From patchwork Wed Jun 14 16:25:35 2023
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1795040
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Subject: [PATCH 3/3] iommu/amd: Add map/unmap_pages() iommu_domain_ops callback support
Date: Wed, 14 Jun 2023 12:25:35 -0400
Message-Id: <20230614162535.8637-4-khalid.elmously@canonical.com>
In-Reply-To: <20230614162535.8637-1-khalid.elmously@canonical.com>
References: <20230614162535.8637-1-khalid.elmously@canonical.com>

From: Vasant Hegde

BugLink: https://bugs.launchpad.net/bugs/2023313

Implement the map_pages() and unmap_pages() callbacks for the AMD IOMMU
driver so that the iommu core can map and unmap multiple pages in a single
call, and retire the single-page map()/unmap() callbacks. Finally, since the
gather structure is not updated by iommu_v1_unmap_pages(), pass NULL instead
of gather to iommu_v1_unmap_pages().

Suggested-by: Robin Murphy
Signed-off-by: Vasant Hegde
Link: https://lore.kernel.org/r/20220825063939.8360-4-vasant.hegde@amd.com
Signed-off-by: Joerg Roedel
(cherry picked from commit 6b080c4e815ceba3c08ffa980c858595c07e786a)
Signed-off-by: Khalid Elmously
---
 drivers/iommu/amd/iommu.c | 29 ++++++++++++++++-------------
 1 file changed, 16 insertions(+), 13 deletions(-)

diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
index a0924144bac8..99177129ef92 100644
--- a/drivers/iommu/amd/iommu.c
+++ b/drivers/iommu/amd/iommu.c
@@ -2070,13 +2070,13 @@ static void amd_iommu_iotlb_sync_map(struct iommu_domain *dom,
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
 
-	if (ops->map)
+	if (ops->map_pages)
 		domain_flush_np_cache(domain, iova, size);
 }
 
-static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
-			 phys_addr_t paddr, size_t page_size, int iommu_prot,
-			 gfp_t gfp)
+static int amd_iommu_map_pages(struct iommu_domain *dom, unsigned long iova,
+			       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			       int iommu_prot, gfp_t gfp, size_t *mapped)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
@@ -2092,8 +2092,10 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
 	if (iommu_prot & IOMMU_WRITE)
 		prot |= IOMMU_PROT_IW;
 
-	if (ops->map)
-		ret = ops->map(ops, iova, paddr, page_size, prot, gfp);
+	if (ops->map_pages) {
+		ret = ops->map_pages(ops, iova, paddr, pgsize,
+				     pgcount, prot, gfp, mapped);
+	}
 
 	return ret;
 }
@@ -2119,9 +2121,9 @@ static void amd_iommu_iotlb_gather_add_page(struct iommu_domain *domain,
 	iommu_iotlb_gather_add_range(gather, iova, size);
 }
 
-static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
-			      size_t page_size,
-			      struct iommu_iotlb_gather *gather)
+static size_t amd_iommu_unmap_pages(struct iommu_domain *dom, unsigned long iova,
+				    size_t pgsize, size_t pgcount,
+				    struct iommu_iotlb_gather *gather)
 {
 	struct protection_domain *domain = to_pdomain(dom);
 	struct io_pgtable_ops *ops = &domain->iop.iop.ops;
@@ -2131,9 +2133,10 @@ static size_t amd_iommu_unmap(struct iommu_domain *dom, unsigned long iova,
 	    (domain->iop.mode == PAGE_MODE_NONE))
 		return 0;
 
-	r = (ops->unmap) ? ops->unmap(ops, iova, page_size, gather) : 0;
+	r = (ops->unmap_pages) ? ops->unmap_pages(ops, iova, pgsize, pgcount, NULL) : 0;
 
-	amd_iommu_iotlb_gather_add_page(dom, gather, iova, page_size);
+	if (r)
+		amd_iommu_iotlb_gather_add_page(dom, gather, iova, r);
 
 	return r;
 }
@@ -2288,8 +2291,8 @@ const struct iommu_ops amd_iommu_ops = {
 	.default_domain_ops = &(const struct iommu_domain_ops) {
 		.attach_dev	= amd_iommu_attach_device,
 		.detach_dev	= amd_iommu_detach_device,
-		.map		= amd_iommu_map,
-		.unmap		= amd_iommu_unmap,
+		.map_pages	= amd_iommu_map_pages,
+		.unmap_pages	= amd_iommu_unmap_pages,
 		.iotlb_sync_map	= amd_iommu_iotlb_sync_map,
 		.iova_to_phys	= amd_iommu_iova_to_phys,
 		.flush_iotlb_all = amd_iommu_flush_iotlb_all,
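
For readers unfamiliar with the multi-page callbacks, the sketch below illustrates
the contract a map_pages()-style operation is expected to honour: pgsize is one of
the page sizes the driver advertises, pgcount is how many consecutive pages of that
size to map, and *mapped must report the number of bytes actually mapped so the
caller can unwind a partial failure. This is a simplified illustration only, not
code from this patch or from the AMD driver; the names example_dom, example_map_one
and example_map_pages are hypothetical stand-ins.

#include <linux/types.h>

struct example_dom;	/* hypothetical domain type, pointers only */

/*
 * Hypothetical single-page primitive; a real driver would update its
 * page table here.  Always succeeds in this sketch.
 */
static int example_map_one(struct example_dom *dom, unsigned long iova,
			   phys_addr_t paddr, size_t pgsize, int prot,
			   gfp_t gfp)
{
	return 0;
}

/*
 * Map pgcount pages of pgsize bytes each, reporting progress through
 * *mapped so the caller can clean up whatever was mapped before an error.
 */
static int example_map_pages(struct example_dom *dom, unsigned long iova,
			     phys_addr_t paddr, size_t pgsize, size_t pgcount,
			     int prot, gfp_t gfp, size_t *mapped)
{
	int ret = 0;

	while (pgcount--) {
		ret = example_map_one(dom, iova, paddr, pgsize, prot, gfp);
		if (ret)
			break;

		iova += pgsize;
		paddr += pgsize;
		if (mapped)
			*mapped += pgsize;
	}

	return ret;
}

The patch itself avoids any such per-page loop: it forwards the whole
(pgsize, pgcount) pair to the io-pgtable ops->map_pages() in one call, and on
the unmap side it adds only the bytes actually unmapped (r) to the IOTLB
gather, since iommu_v1_unmap_pages() does not update the gather itself.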