Patch Detail
GET /api/patches/2217053/?format=api
{ "id": 2217053, "url": "http://patchwork.ozlabs.org/api/patches/2217053/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linux-pci/patch/20260327160132.2946114-4-yilun.xu@linux.intel.com/", "project": { "id": 28, "url": "http://patchwork.ozlabs.org/api/projects/28/?format=api", "name": "Linux PCI development", "link_name": "linux-pci", "list_id": "linux-pci.vger.kernel.org", "list_email": "linux-pci@vger.kernel.org", "web_url": null, "scm_url": null, "webscm_url": null, "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20260327160132.2946114-4-yilun.xu@linux.intel.com>", "list_archive_url": null, "date": "2026-03-27T16:01:04", "name": "[v2,03/31] x86/virt/tdx: Add tdx_page_array helpers for new TDX Module objects", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "aa61c7a64da13ab605a469b309e71c20976f0a56", "submitter": { "id": 87470, "url": "http://patchwork.ozlabs.org/api/people/87470/?format=api", "name": "Xu Yilun", "email": "yilun.xu@linux.intel.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linux-pci/patch/20260327160132.2946114-4-yilun.xu@linux.intel.com/mbox/", "series": [ { "id": 497793, "url": "http://patchwork.ozlabs.org/api/series/497793/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linux-pci/list/?series=497793", "date": "2026-03-27T16:01:02", "name": "PCI/TSM: PCIe Link Encryption Establishment via TDX platform services", "version": 2, "mbox": "http://patchwork.ozlabs.org/series/497793/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/2217053/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/2217053/checks/", "tags": {}, "related": [], "headers": { "From": "Xu Yilun <yilun.xu@linux.intel.com>", "To": "linux-coco@lists.linux.dev,\n\tlinux-pci@vger.kernel.org,\n\tdan.j.williams@intel.com,\n\tx86@kernel.org", "Cc": "chao.gao@intel.com,\n\tdave.jiang@intel.com,\n\tbaolu.lu@linux.intel.com,\n\tyilun.xu@linux.intel.com,\n\tyilun.xu@intel.com,\n\tzhenzhong.duan@intel.com,\n\tkvm@vger.kernel.org,\n\trick.p.edgecombe@intel.com,\n\tdave.hansen@linux.intel.com,\n\tkas@kernel.org,\n\txiaoyao.li@intel.com,\n\tvishal.l.verma@intel.com,\n\tlinux-kernel@vger.kernel.org", "Subject": "[PATCH v2 03/31] x86/virt/tdx: Add tdx_page_array helpers for new TDX\n Module objects", "Date": "Sat, 28 Mar 2026 00:01:04 +0800", "Message-Id": "<20260327160132.2946114-4-yilun.xu@linux.intel.com>", "X-Mailer": "git-send-email 2.25.1", "In-Reply-To": "<20260327160132.2946114-1-yilun.xu@linux.intel.com>", "References": "<20260327160132.2946114-1-yilun.xu@linux.intel.com>", "Precedence": "bulk", "X-Mailing-List": "linux-pci@vger.kernel.org", "List-Id": "<linux-pci.vger.kernel.org>", "List-Subscribe": "<mailto:linux-pci+subscribe@vger.kernel.org>", "List-Unsubscribe": "<mailto:linux-pci+unsubscribe@vger.kernel.org>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit" }, "content": "Add struct tdx_page_array definition for new TDX Module object\ntypes - HPA_ARRAY_T and HPA_LIST_INFO. They are used as input/output\nparameters in newly defined SEAMCALLs. Also define some helpers to\nallocate, setup and free tdx_page_array.\n\nHPA_ARRAY_T and HPA_LIST_INFO are similar in most aspects. They both\nrepresent a list of pages for TDX Module access. There are several\nuse cases for these 2 structures:\n\n - As SEAMCALL inputs. 
They are claimed by TDX Module as control pages.\n Control pages are private pages for TDX Module to hold its internal\n control structures or private data. TDR, TDCS, TDVPR... are existing\n control pages, just not added via tdx_page_array.\n - As SEAMCALL outputs. They were TDX Module control pages and now are\n released.\n - As SEAMCALL inputs. They are just temporary buffers for exchanging\n data blobs in one SEAMCALL. TDX Module will not hold them for a long\n time.\n\nThe 2 structures both need a 'root page' which contains a list of HPAs.\nThey collapse the HPA of the root page and the number of valid HPAs\ninto a 64-bit raw value for SEAMCALL parameters. The root page is\nalways a medium for passing data pages; TDX Module never keeps the\nroot page.\n\nA main difference is HPA_ARRAY_T requires singleton mode when\ncontaining just 1 functional page (page0). In this mode the root page is\nnot needed and the HPA field of the raw value directly points to\npage0. But in this patch, the root page is always allocated for\nuser-friendly kAPIs.\n\nAnother small difference is HPA_LIST_INFO contains a 'first entry' field\nwhich could be filled by TDX Module. This simplifies the host by providing\nthe same structure when re-invoking the interrupted SEAMCALL. No need for\nthe host to touch this field.\n\nTypical usages of the tdx_page_array:\n\n1. Add control pages:\n - struct tdx_page_array *array = tdx_page_array_create(nr_pages);\n - seamcall(TDH_XXX_CREATE, array, ...);\n\n2. Release control pages:\n - seamcall(TDX_XXX_DELETE, array, &nr_released, &released_hpa);\n - tdx_page_array_ctrl_release(array, nr_released, released_hpa);\n\n3. Exchange data blobs:\n - struct tdx_page_array *array = tdx_page_array_create(nr_pages);\n - seamcall(TDX_XXX, array, ...);\n - Read data from array.\n - tdx_page_array_free(array);\n\n4. 
Note the root page contains 512 HPAs at most; if more pages are\n required, the tdx_page_array needs to be re-populated.\n\n - struct tdx_page_array *array = tdx_page_array_alloc(nr_pages);\n - for each 512-page bulk\n - tdx_page_array_populate(array, offset);\n - seamcall(TDH_XXX_ADD, array, ...);\n\nIn case 2, SEAMCALLs output the released page array in the form of\nHPA_ARRAY_T or HPA_LIST_INFO. Use tdx_page_array_ctrl_release() to\ncheck if the output pages match the original input pages. If the check\nfails, TDX Module is buggy. In this case the safer way is to leak the\ncontrol pages by calling tdx_page_array_ctrl_leak().\n\nThe usage of tdx_page_array will be shown in following patches.\n\nCo-developed-by: Zhenzhong Duan <zhenzhong.duan@intel.com>\nSigned-off-by: Zhenzhong Duan <zhenzhong.duan@intel.com>\nSigned-off-by: Xu Yilun <yilun.xu@linux.intel.com>\n---\n arch/x86/include/asm/tdx.h | 37 +++++\n arch/x86/virt/vmx/tdx/tdx.c | 299 ++++++++++++++++++++++++++++++++++++\n 2 files changed, 336 insertions(+)", "diff": "diff --git a/arch/x86/include/asm/tdx.h b/arch/x86/include/asm/tdx.h\nindex 65c4da396450..9173a432b312 100644\n--- a/arch/x86/include/asm/tdx.h\n+++ b/arch/x86/include/asm/tdx.h\n@@ -139,6 +139,43 @@ void tdx_guest_keyid_free(unsigned int keyid);\n \n void tdx_quirk_reset_page(struct page *page);\n \n+/**\n+ * struct tdx_page_array - Represents a list of pages for TDX Module access\n+ * @nr_pages: Total number of data pages in the collection\n+ * @pages: Array of data page pointers containing all the data\n+ *\n+ * @offset: Internal: The starting index in @pages, positions the currently\n+ *\t populated page window in @root.\n+ * @nents: Internal: Number of valid HPAs for the page window in @root\n+ * @root: Internal: A single 4KB page holding the 8-byte HPAs of the page\n+ *\t window. 
The page window max size is constrained by the root page,\n+ *\t which is 512 HPAs.\n+ *\n+ * This structure abstracts several TDX Module defined object types, e.g.,\n+ * HPA_ARRAY_T and HPA_LIST_INFO. Typically they all use a \"root page\" as the\n+ * medium to exchange a list of data pages between host and TDX Module. This\n+ * structure serves as a unified parameter type for SEAMCALL wrappers, where\n+ * these hardware object types are needed.\n+ */\n+struct tdx_page_array {\n+\t/* public: */\n+\tunsigned int nr_pages;\n+\tstruct page **pages;\n+\n+\t/* private: */\n+\tunsigned int offset;\n+\tunsigned int nents;\n+\tu64 *root;\n+};\n+\n+void tdx_page_array_free(struct tdx_page_array *array);\n+DEFINE_FREE(tdx_page_array_free, struct tdx_page_array *, if (_T) tdx_page_array_free(_T))\n+struct tdx_page_array *tdx_page_array_create(unsigned int nr_pages);\n+void tdx_page_array_ctrl_leak(struct tdx_page_array *array);\n+int tdx_page_array_ctrl_release(struct tdx_page_array *array,\n+\t\t\t\tunsigned int nr_released,\n+\t\t\t\tu64 released_hpa);\n+\n struct tdx_td {\n \t/* TD root structure: */\n \tstruct page *tdr_page;\ndiff --git a/arch/x86/virt/vmx/tdx/tdx.c b/arch/x86/virt/vmx/tdx/tdx.c\nindex 8b8e165a2001..a3021e7e2490 100644\n--- a/arch/x86/virt/vmx/tdx/tdx.c\n+++ b/arch/x86/virt/vmx/tdx/tdx.c\n@@ -30,6 +30,7 @@\n #include <linux/suspend.h>\n #include <linux/idr.h>\n #include <linux/kvm_types.h>\n+#include <linux/bitfield.h>\n #include <asm/page.h>\n #include <asm/special_insns.h>\n #include <asm/msr-index.h>\n@@ -258,6 +259,304 @@ static int build_tdx_memlist(struct list_head *tmb_list)\n \treturn ret;\n }\n \n+#define TDX_PAGE_ARRAY_MAX_NENTS\t(PAGE_SIZE / sizeof(u64))\n+\n+static int tdx_page_array_populate(struct tdx_page_array *array,\n+\t\t\t\t unsigned int offset)\n+{\n+\tu64 *entries;\n+\tint i;\n+\n+\tif (offset >= array->nr_pages)\n+\t\treturn 0;\n+\n+\tarray->offset = offset;\n+\tarray->nents = umin(array->nr_pages - offset,\n+\t\t\t 
TDX_PAGE_ARRAY_MAX_NENTS);\n+\n+\tentries = array->root;\n+\tfor (i = 0; i < array->nents; i++)\n+\t\tentries[i] = page_to_phys(array->pages[offset + i]);\n+\n+\treturn array->nents;\n+}\n+\n+static void tdx_free_pages_bulk(unsigned int nr_pages, struct page **pages)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < nr_pages; i++)\n+\t\t__free_page(pages[i]);\n+}\n+\n+static int tdx_alloc_pages_bulk(unsigned int nr_pages, struct page **pages)\n+{\n+\tunsigned int filled, done = 0;\n+\n+\tdo {\n+\t\tfilled = alloc_pages_bulk(GFP_KERNEL, nr_pages - done,\n+\t\t\t\t\t pages + done);\n+\t\tif (!filled) {\n+\t\t\ttdx_free_pages_bulk(done, pages);\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\n+\t\tdone += filled;\n+\t} while (done != nr_pages);\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * tdx_page_array_free() - Free all memory for a tdx_page_array\n+ * @array: The tdx_page_array to be freed.\n+ *\n+ * Free all associated pages and the container itself.\n+ */\n+void tdx_page_array_free(struct tdx_page_array *array)\n+{\n+\tif (!array)\n+\t\treturn;\n+\n+\ttdx_free_pages_bulk(array->nr_pages, array->pages);\n+\tkfree(array->pages);\n+\tkfree(array->root);\n+\tkfree(array);\n+}\n+EXPORT_SYMBOL_GPL(tdx_page_array_free);\n+\n+static struct tdx_page_array *\n+tdx_page_array_alloc(unsigned int nr_pages)\n+{\n+\tstruct tdx_page_array *array = NULL;\n+\tstruct page **pages = NULL;\n+\tu64 *root = NULL;\n+\tint ret;\n+\n+\tif (!nr_pages)\n+\t\treturn NULL;\n+\n+\tarray = kzalloc_obj(*array);\n+\tif (!array)\n+\t\tgoto out_free;\n+\n+\troot = kzalloc(PAGE_SIZE, GFP_KERNEL);\n+\tif (!root)\n+\t\tgoto out_free;\n+\n+\tpages = kcalloc(nr_pages, sizeof(*pages), GFP_KERNEL);\n+\tif (!pages)\n+\t\tgoto out_free;\n+\n+\tret = tdx_alloc_pages_bulk(nr_pages, pages);\n+\tif (ret)\n+\t\tgoto out_free;\n+\n+\tarray->nr_pages = nr_pages;\n+\tarray->pages = pages;\n+\tarray->root = root;\n+\n+\treturn array;\n+\n+out_free:\n+\tkfree(pages);\n+\tkfree(root);\n+\tkfree(array);\n+\n+\treturn NULL;\n+}\n+\n+/**\n+ * 
tdx_page_array_create() - Create a small tdx_page_array (up to 512 pages)\n+ * @nr_pages: Number of pages to allocate (must be <= 512).\n+ *\n+ * Allocate and populate a tdx_page_array in a single step. This is intended\n+ * for small collections that fit within a single root page. The allocated\n+ * pages are all order-0 pages. This is the most common use case for a list of\n+ * TDX control pages.\n+ *\n+ * If more pages are required, use tdx_page_array_alloc() and\n+ * tdx_page_array_populate() to build tdx_page_array chunk by chunk.\n+ *\n+ * Return: Fully populated tdx_page_array or NULL on failure.\n+ */\n+struct tdx_page_array *tdx_page_array_create(unsigned int nr_pages)\n+{\n+\tstruct tdx_page_array *array;\n+\tint populated;\n+\n+\tif (nr_pages > TDX_PAGE_ARRAY_MAX_NENTS)\n+\t\treturn NULL;\n+\n+\tarray = tdx_page_array_alloc(nr_pages);\n+\tif (!array)\n+\t\treturn NULL;\n+\n+\tpopulated = tdx_page_array_populate(array, 0);\n+\tif (populated != nr_pages)\n+\t\tgoto out_free;\n+\n+\treturn array;\n+\n+out_free:\n+\ttdx_page_array_free(array);\n+\treturn NULL;\n+}\n+EXPORT_SYMBOL_GPL(tdx_page_array_create);\n+\n+/**\n+ * tdx_page_array_ctrl_leak() - Leak data pages and free the container\n+ * @array: The tdx_page_array to be leaked.\n+ *\n+ * Call this function when failed to reclaim the control pages. 
Free the root\n+ * page and the holding structures, but orphan the data pages, to prevent the\n+ * host from re-allocating and accessing memory that the hardware may still\n+ * consider private.\n+ */\n+void tdx_page_array_ctrl_leak(struct tdx_page_array *array)\n+{\n+\tif (!array)\n+\t\treturn;\n+\n+\tkfree(array->pages);\n+\tkfree(array->root);\n+\tkfree(array);\n+}\n+EXPORT_SYMBOL_GPL(tdx_page_array_ctrl_leak);\n+\n+static bool tdx_page_array_validate_release(struct tdx_page_array *array,\n+\t\t\t\t\t unsigned int offset,\n+\t\t\t\t\t unsigned int nr_released,\n+\t\t\t\t\t u64 released_hpa)\n+{\n+\tunsigned int nents;\n+\n+\tif (offset >= array->nr_pages)\n+\t\treturn false;\n+\n+\tnents = umin(array->nr_pages - offset, TDX_PAGE_ARRAY_MAX_NENTS);\n+\n+\tif (nents != nr_released) {\n+\t\tpr_err(\"%s nr_released [%d] doesn't match page array nents [%d]\\n\",\n+\t\t __func__, nr_released, nents);\n+\t\treturn false;\n+\t}\n+\n+\t/*\n+\t * Unfortunately TDX has multiple page allocation protocols, check the\n+\t * \"singleton\" case required for HPA_ARRAY_T.\n+\t */\n+\tif (page_to_phys(array->pages[0]) == released_hpa &&\n+\t array->nr_pages == 1)\n+\t\treturn true;\n+\n+\t/* Then check the \"non-singleton\" case */\n+\tif (virt_to_phys(array->root) == released_hpa) {\n+\t\tu64 *entries = array->root;\n+\t\tint i;\n+\n+\t\tfor (i = 0; i < nents; i++) {\n+\t\t\tstruct page *page = array->pages[offset + i];\n+\t\t\tu64 val = page_to_phys(page);\n+\n+\t\t\tif (val != entries[i]) {\n+\t\t\t\tpr_err(\"%s entry[%d] [0x%llx] doesn't match page hpa [0x%llx]\\n\",\n+\t\t\t\t __func__, i, entries[i], val);\n+\t\t\t\treturn false;\n+\t\t\t}\n+\t\t}\n+\n+\t\treturn true;\n+\t}\n+\n+\tpr_err(\"%s failed to validate, released_hpa [0x%llx], root page hpa [0x%llx], page0 hpa [%#llx], number pages %u\\n\",\n+\t __func__, released_hpa, virt_to_phys(array->root),\n+\t page_to_phys(array->pages[0]), array->nr_pages);\n+\n+\treturn false;\n+}\n+\n+/**\n+ * tdx_page_array_ctrl_release() 
- Verify and release TDX control pages\n+ * @array: The tdx_page_array used to originally create control pages.\n+ * @nr_released: Number of HPAs the TDX Module reported as released.\n+ * @released_hpa: The HPA list the TDX Module reported as released.\n+ *\n+ * TDX Module can at most release 512 control pages, so this function only\n+ * accepts small tdx_page_array (up to 512 pages), usually created by\n+ * tdx_page_array_create().\n+ *\n+ * Return: 0 on success, -errno on page release protocol error.\n+ */\n+int tdx_page_array_ctrl_release(struct tdx_page_array *array,\n+\t\t\t\tunsigned int nr_released,\n+\t\t\t\tu64 released_hpa)\n+{\n+\tint i;\n+\n+\t/*\n+\t * The only case where ->nr_pages is allowed to be >\n+\t * TDX_PAGE_ARRAY_MAX_NENTS is a case where those pages are never\n+\t * expected to be released by this function.\n+\t */\n+\tif (WARN_ON(array->nr_pages > TDX_PAGE_ARRAY_MAX_NENTS))\n+\t\treturn -EINVAL;\n+\n+\tif (WARN_ONCE(!tdx_page_array_validate_release(array, 0, nr_released,\n+\t\t\t\t\t\t released_hpa),\n+\t\t \"page release protocol error, consider reboot and replace TDX Module.\\n\"))\n+\t\treturn -EFAULT;\n+\n+\tfor (i = 0; i < array->nr_pages; i++) {\n+\t\tu64 r;\n+\n+\t\tr = tdh_phymem_page_wbinvd_hkid(tdx_global_keyid,\n+\t\t\t\t\t\tarray->pages[i]);\n+\t\tif (WARN_ON(r))\n+\t\t\treturn -EFAULT;\n+\t}\n+\n+\ttdx_page_array_free(array);\n+\treturn 0;\n+}\n+EXPORT_SYMBOL_GPL(tdx_page_array_ctrl_release);\n+\n+#define HPA_LIST_INFO_FIRST_ENTRY\tGENMASK_U64(11, 3)\n+#define HPA_LIST_INFO_PFN\t\tGENMASK_U64(51, 12)\n+#define HPA_LIST_INFO_LAST_ENTRY\tGENMASK_U64(63, 55)\n+\n+static u64 __maybe_unused hpa_list_info_assign_raw(struct tdx_page_array *array)\n+{\n+\treturn FIELD_PREP(HPA_LIST_INFO_FIRST_ENTRY, 0) |\n+\t FIELD_PREP(HPA_LIST_INFO_PFN,\n+\t\t\t PFN_DOWN(virt_to_phys(array->root))) |\n+\t FIELD_PREP(HPA_LIST_INFO_LAST_ENTRY, array->nents - 1);\n+}\n+\n+#define HPA_ARRAY_T_PFN\t\tGENMASK_U64(51, 12)\n+#define 
HPA_ARRAY_T_SIZE\tGENMASK_U64(63, 55)\n+\n+static u64 __maybe_unused hpa_array_t_assign_raw(struct tdx_page_array *array)\n+{\n+\tunsigned long pfn;\n+\n+\tif (array->nents == 1)\n+\t\tpfn = page_to_pfn(array->pages[array->offset]);\n+\telse\n+\t\tpfn = PFN_DOWN(virt_to_phys(array->root));\n+\n+\treturn FIELD_PREP(HPA_ARRAY_T_PFN, pfn) |\n+\t FIELD_PREP(HPA_ARRAY_T_SIZE, array->nents - 1);\n+}\n+\n+static u64 __maybe_unused hpa_array_t_release_raw(struct tdx_page_array *array)\n+{\n+\tif (array->nents == 1)\n+\t\treturn 0;\n+\n+\treturn virt_to_phys(array->root);\n+}\n+\n static int read_sys_metadata_field(u64 field_id, u64 *data)\n {\n \tstruct tdx_module_args args = {};\n", "prefixes": [ "v2", "03/31" ] }
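The commit message's usage pattern 4 (re-populating the tdx_page_array in 512-page bulks) can be sanity-checked with a hypothetical userspace mock. All names below (mock_page_array, mock_populate, mock_add_all) are invented for this sketch; the real code operates on struct page and lives in arch/x86/virt/vmx/tdx/tdx.c. "HPAs" are faked as (index + 1) * 4096:

```c
#include <stdint.h>
#include <stdlib.h>

#define MOCK_MAX_NENTS	512	/* PAGE_SIZE / sizeof(u64) */

struct mock_page_array {
	unsigned int nr_pages;	/* total data pages */
	uint64_t *hpas;		/* stand-in for struct page **pages */
	unsigned int offset;	/* start of the current window */
	unsigned int nents;	/* valid HPAs in the root page */
	uint64_t root[MOCK_MAX_NENTS];	/* the single 4KB root page */
};

static struct mock_page_array *mock_array_new(unsigned int nr)
{
	struct mock_page_array *a = calloc(1, sizeof(*a));

	if (!a)
		return NULL;
	a->hpas = calloc(nr, sizeof(*a->hpas));
	if (!a->hpas) {
		free(a);
		return NULL;
	}
	a->nr_pages = nr;
	for (unsigned int i = 0; i < nr; i++)
		a->hpas[i] = (uint64_t)(i + 1) * 4096;	/* fake, distinct HPAs */
	return a;
}

/* Mirrors tdx_page_array_populate(): returns the window size, 0 when done */
static unsigned int mock_populate(struct mock_page_array *a, unsigned int offset)
{
	unsigned int n;

	if (offset >= a->nr_pages)
		return 0;

	n = a->nr_pages - offset;
	if (n > MOCK_MAX_NENTS)
		n = MOCK_MAX_NENTS;

	a->offset = offset;
	a->nents = n;
	for (unsigned int i = 0; i < n; i++)
		a->root[i] = a->hpas[offset + i];
	return n;
}

/* Walks the whole array window by window, as a TDH_XXX_ADD loop would */
static unsigned int mock_add_all(struct mock_page_array *a)
{
	unsigned int off = 0, n, windows = 0;

	while ((n = mock_populate(a, off)) != 0) {
		/* seamcall(TDH_XXX_ADD, a, ...) would be issued here */
		off += n;
		windows++;
	}
	return windows;
}
```

For 1000 pages this walks two windows (512 + 488), matching the commit message's note that the root page holds 512 HPAs at most and must be re-populated for larger collections.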
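The raw-value packing done by the diff's hpa_list_info_assign_raw() and hpa_array_t_assign_raw(), including HPA_ARRAY_T's singleton mode, can be sketched in plain userspace C. The bit layout (FIRST_ENTRY bits 11:3, PFN bits 51:12, LAST_ENTRY/SIZE bits 63:55) comes from the patch; GENMASK_U64()/FIELD_PREP() are reimplemented locally (the kernel versions live in <linux/bits.h> and <linux/bitfield.h>), the helper names are invented for this sketch, a 4KB page size is assumed, and __builtin_ctzll() requires GCC/Clang:

```c
#include <stdint.h>

#define GENMASK_U64(h, l) \
	((~0ULL >> (63 - (h))) & ~((1ULL << (l)) - 1ULL))
#define FIELD_PREP_U64(mask, val) \
	(((uint64_t)(val) << __builtin_ctzll(mask)) & (mask))

#define HPA_LIST_INFO_FIRST_ENTRY	GENMASK_U64(11, 3)
#define HPA_LIST_INFO_PFN		GENMASK_U64(51, 12)
#define HPA_LIST_INFO_LAST_ENTRY	GENMASK_U64(63, 55)

/* root_hpa must be 4KB aligned; nents in [1, 512] */
static inline uint64_t hpa_list_info_raw(uint64_t root_hpa, unsigned int nents)
{
	/* 'first entry' starts at 0; TDX Module may advance it on interruption */
	return FIELD_PREP_U64(HPA_LIST_INFO_FIRST_ENTRY, 0) |
	       FIELD_PREP_U64(HPA_LIST_INFO_PFN, root_hpa >> 12) |
	       FIELD_PREP_U64(HPA_LIST_INFO_LAST_ENTRY, nents - 1);
}

#define HPA_ARRAY_T_PFN		GENMASK_U64(51, 12)
#define HPA_ARRAY_T_SIZE	GENMASK_U64(63, 55)

/* Singleton mode: with one entry, the HPA field points at page0 directly */
static inline uint64_t hpa_array_t_raw(uint64_t root_hpa, uint64_t page0_hpa,
				       unsigned int nents)
{
	uint64_t hpa = (nents == 1) ? page0_hpa : root_hpa;

	return FIELD_PREP_U64(HPA_ARRAY_T_PFN, hpa >> 12) |
	       FIELD_PREP_U64(HPA_ARRAY_T_SIZE, nents - 1);
}
```

Note how a single-entry HPA_ARRAY_T drops the root page entirely, while HPA_LIST_INFO always encodes the root page's PFN; this is the "main difference" the commit message describes.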