From patchwork Thu May 17 08:59:20 2018
X-Patchwork-Submitter: Peter Xu
X-Patchwork-Id: 915220
From: Peter Xu <peterx@redhat.com>
To: qemu-devel@nongnu.org
Cc: Tian Kevin, "Michael S. Tsirkin", Jason Wang, peterx@redhat.com,
 Alex Williamson, Jintack Lim
Date: Thu, 17 May 2018 16:59:20 +0800
Message-Id: <20180517085927.24925-6-peterx@redhat.com>
In-Reply-To: <20180517085927.24925-1-peterx@redhat.com>
References: <20180517085927.24925-1-peterx@redhat.com>
Subject: [Qemu-devel] [PATCH v3 05/12] intel-iommu: introduce vtd_page_walk_info

During the recursive page walk of the IOVA page tables, some of the
variables passed on the stack are constants that never change during
the whole walk.
Isolate them into a struct so that we don't need to pass the same
constants down the stack over and over again.

Signed-off-by: Peter Xu <peterx@redhat.com>
Reviewed-by: Eric Auger
---
 hw/i386/intel_iommu.c | 84 ++++++++++++++++++++++++++-----------------
 1 file changed, 52 insertions(+), 32 deletions(-)

diff --git a/hw/i386/intel_iommu.c b/hw/i386/intel_iommu.c
index 9a418abfb6..4953d02ed0 100644
--- a/hw/i386/intel_iommu.c
+++ b/hw/i386/intel_iommu.c
@@ -747,9 +747,27 @@ static int vtd_iova_to_slpte(VTDContextEntry *ce, uint64_t iova, bool is_write,
 
 typedef int (*vtd_page_walk_hook)(IOMMUTLBEntry *entry, void *private);
 
+/**
+ * Constant information used during page walking
+ *
+ * @hook_fn: hook func to be called when detected page
+ * @private: private data to be passed into hook func
+ * @notify_unmap: whether we should notify invalid entries
+ * @aw: maximum address width
+ */
+typedef struct {
+    vtd_page_walk_hook hook_fn;
+    void *private;
+    bool notify_unmap;
+    uint8_t aw;
+} vtd_page_walk_info;
+
 static int vtd_page_walk_one(IOMMUTLBEntry *entry, int level,
-                             vtd_page_walk_hook hook_fn, void *private)
+                             vtd_page_walk_info *info)
 {
+    vtd_page_walk_hook hook_fn = info->hook_fn;
+    void *private = info->private;
+
     assert(hook_fn);
     trace_vtd_page_walk_one(level, entry->iova, entry->translated_addr,
                             entry->addr_mask, entry->perm);
@@ -762,17 +780,13 @@ static int vtd_page_walk_one(IOMMUTLBEntry *entry, int level,
  * @addr: base GPA addr to start the walk
  * @start: IOVA range start address
  * @end: IOVA range end address (start <= addr < end)
- * @hook_fn: hook func to be called when detected page
- * @private: private data to be passed into hook func
  * @read: whether parent level has read permission
  * @write: whether parent level has write permission
- * @notify_unmap: whether we should notify invalid entries
- * @aw: maximum address width
+ * @info: constant information for the page walk
  */
 static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
-                               uint64_t end, vtd_page_walk_hook hook_fn,
-                               void *private, uint32_t level, bool read,
-                               bool write, bool notify_unmap, uint8_t aw)
+                               uint64_t end, uint32_t level, bool read,
+                               bool write, vtd_page_walk_info *info)
 {
     bool read_cur, write_cur, entry_valid;
     uint32_t offset;
@@ -822,24 +836,24 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
 
         if (vtd_is_last_slpte(slpte, level)) {
             /* NOTE: this is only meaningful if entry_valid == true */
-            entry.translated_addr = vtd_get_slpte_addr(slpte, aw);
-            if (!entry_valid && !notify_unmap) {
+            entry.translated_addr = vtd_get_slpte_addr(slpte, info->aw);
+            if (!entry_valid && !info->notify_unmap) {
                 trace_vtd_page_walk_skip_perm(iova, iova_next);
                 goto next;
             }
-            ret = vtd_page_walk_one(&entry, level, hook_fn, private);
+            ret = vtd_page_walk_one(&entry, level, info);
             if (ret < 0) {
                 return ret;
             }
         } else {
            if (!entry_valid) {
-                if (notify_unmap) {
+                if (info->notify_unmap) {
                     /*
                      * The whole entry is invalid; unmap it all.
                      * Translated address is meaningless, zero it.
                     */
                    entry.translated_addr = 0x0;
-                    ret = vtd_page_walk_one(&entry, level, hook_fn, private);
+                    ret = vtd_page_walk_one(&entry, level, info);
                     if (ret < 0) {
                         return ret;
                     }
@@ -848,10 +862,9 @@ static int vtd_page_walk_level(dma_addr_t addr, uint64_t start,
                 }
                 goto next;
             }
-            ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, aw), iova,
-                                      MIN(iova_next, end), hook_fn, private,
-                                      level - 1, read_cur, write_cur,
-                                      notify_unmap, aw);
+            ret = vtd_page_walk_level(vtd_get_slpte_addr(slpte, info->aw),
+                                      iova, MIN(iova_next, end), level - 1,
+                                      read_cur, write_cur, info);
             if (ret < 0) {
                 return ret;
             }
@@ -870,28 +883,24 @@ next:
  * @ce: context entry to walk upon
  * @start: IOVA address to start the walk
  * @end: IOVA range end address (start <= addr < end)
- * @hook_fn: the hook that to be called for each detected area
- * @private: private data for the hook function
- * @aw: maximum address width
+ * @info: page walking information struct
  */
 static int vtd_page_walk(VTDContextEntry *ce, uint64_t start, uint64_t end,
-                         vtd_page_walk_hook hook_fn, void *private,
-                         bool notify_unmap, uint8_t aw)
+                         vtd_page_walk_info *info)
 {
     dma_addr_t addr = vtd_ce_get_slpt_base(ce);
     uint32_t level = vtd_ce_get_level(ce);
 
-    if (!vtd_iova_range_check(start, ce, aw)) {
+    if (!vtd_iova_range_check(start, ce, info->aw)) {
         return -VTD_FR_ADDR_BEYOND_MGAW;
     }
 
-    if (!vtd_iova_range_check(end, ce, aw)) {
+    if (!vtd_iova_range_check(end, ce, info->aw)) {
         /* Fix end so that it reaches the maximum */
-        end = vtd_iova_limit(ce, aw);
+        end = vtd_iova_limit(ce, info->aw);
     }
 
-    return vtd_page_walk_level(addr, start, end, hook_fn, private,
-                               level, true, true, notify_unmap, aw);
+    return vtd_page_walk_level(addr, start, end, level, true, true, info);
 }
 
 /* Map a device to its corresponding domain (context-entry) */
@@ -1446,13 +1455,18 @@ static void vtd_iotlb_page_invalidate_notify(IntelIOMMUState *s,
                                        vtd_as->devfn, &ce);
         if (!ret && domain_id == VTD_CONTEXT_ENTRY_DID(ce.hi)) {
             if (vtd_as_notify_mappings(vtd_as)) {
+                vtd_page_walk_info info = {
+                    .hook_fn = vtd_page_invalidate_notify_hook,
+                    .private = (void *)&vtd_as->iommu,
+                    .notify_unmap = true,
+                    .aw = s->aw_bits,
+                };
+
                 /*
                  * For MAP-inclusive notifiers, we need to walk the
                  * page table to sync the shadow page table.
                  */
-                vtd_page_walk(&ce, addr, addr + size,
-                              vtd_page_invalidate_notify_hook,
-                              (void *)&vtd_as->iommu, true, s->aw_bits);
+                vtd_page_walk(&ce, addr, addr + size, &info);
             } else {
                 /*
                  * For UNMAP-only notifiers, we don't need to walk the
@@ -2922,8 +2936,14 @@ static void vtd_iommu_replay(IOMMUMemoryRegion *iommu_mr, IOMMUNotifier *n)
                                   ce.hi, ce.lo);
         if (vtd_as_notify_mappings(vtd_as)) {
             /* This is required only for MAP typed notifiers */
-            vtd_page_walk(&ce, 0, ~0ULL, vtd_replay_hook, (void *)n, false,
-                          s->aw_bits);
+            vtd_page_walk_info info = {
+                .hook_fn = vtd_replay_hook,
+                .private = (void *)n,
+                .notify_unmap = false,
+                .aw = s->aw_bits,
+            };
+
+            vtd_page_walk(&ce, 0, ~0ULL, &info);
         }
     } else {
         trace_vtd_replay_ce_invalid(bus_n, PCI_SLOT(vtd_as->devfn),
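
For readers skimming the patch, here is a minimal, self-contained sketch of
the pattern being applied: the per-walk constants (hook, private data,
notify_unmap flag, address width) are collected into one struct, and only the
per-level state travels through the recursion. All names below (walk_info,
walk_level, print_hook) are hypothetical stand-ins, not the QEMU code itself.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef int (*walk_hook)(uint64_t iova, uint64_t size, void *private);

typedef struct {
    walk_hook hook_fn;   /* called for each region found            */
    void *private;       /* opaque pointer handed back to the hook  */
    int notify_unmap;    /* also report invalid (unmapped) ranges   */
    uint8_t aw;          /* maximum address width, in bits          */
} walk_info;

/* Per-level state (addr, level) stays in the parameter list; everything
 * that is constant for the whole walk arrives as a single pointer. */
static int walk_level(uint64_t addr, uint32_t level, const walk_info *info)
{
    if (level == 0) {
        /* Leaf reached: report a 4KiB page to the hook. */
        return info->hook_fn(addr, UINT64_C(1) << 12, info->private);
    }
    /* Recurse one level down without re-passing each constant. */
    return walk_level(addr, level - 1, info);
}

static int print_hook(uint64_t iova, uint64_t size, void *private)
{
    (void)private;
    printf("region: iova=0x%" PRIx64 " size=0x%" PRIx64 "\n", iova, size);
    return 0;
}

int main(void)
{
    walk_info info = {
        .hook_fn = print_hook,
        .private = NULL,
        .notify_unmap = 1,
        .aw = 48,
    };

    return walk_level(0x1000, 3, &info);
}

Grouping the constants this way also keeps the recursive call sites stable if
additional per-walk fields are introduced later, since only the struct (and
its initializers) needs to change.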