From patchwork Fri Dec 20 02:49:43 2013
X-Patchwork-Submitter: "Sumner, William"
X-Patchwork-Id: 303851
From: Bill Sumner <bill.sumner@hp.com>
To: dwmw2@infradead.org, indou.takao@jp.fujitsu.com, bhe@redhat.com
Cc: iommu@lists.linux-foundation.org, kexec@lists.infradead.org,
	alex.williamson@redhat.com, linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org, ddutile@redhat.com,
	ishii.hironobu@jp.fujitsu.com, bhelgaas@google.com,
	bill.sumner@hp.com, doug.hatch@hp.com
Subject: [PATCHv2 2/6] Crashdump-Accepting-Active-IOMMU-Utility-functions
Date: Thu, 19 Dec 2013 19:49:43 -0700
Message-Id: <1387507787-14163-3-git-send-email-bill.sumner@hp.com>
X-Mailer: git-send-email 1.7.11.3
In-Reply-To: <1387507787-14163-1-git-send-email-bill.sumner@hp.com>
References: <1387507787-14163-1-git-send-email-bill.sumner@hp.com>
X-Mailing-List: linux-pci@vger.kernel.org

Most of the code for the Crashdump-Accepting Active IOMMU feature is
contained in a large section at the end of intel-iommu.c, which begins
here.

This patch contains small utility functions used to access the bit
fields of the context entry, plus one that copies from a physically-
addressed area of memory (primarily pages from the panicked kernel)
into a virtually-addressed area within the crashdump kernel.

v1->v2: Updated patch description

Signed-off-by: Bill Sumner <bill.sumner@hp.com>
---
 drivers/iommu/intel-iommu.c | 74 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 74 insertions(+)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 17c4537..4172a2b 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4455,3 +4455,77 @@ static void __init check_tylersburg_isoch(void)
 	printk(KERN_WARNING "DMAR: Recommended TLB entries for ISOCH unit is 16; your BIOS set %d\n",
 	       vtisochctrl);
 }
+#ifdef CONFIG_CRASH_DUMP
+
+/* ========================================================================
+ * Utility functions for accessing the iommu Translation Tables
+ * ------------------------------------------------------------------------
+ */
+static inline struct context_entry *
+get_context_phys_from_root(struct root_entry *root)
+{
+	return (struct context_entry *)
+		(root_present(root) ? (void *) (root->val & VTD_PAGE_MASK)
+				    : NULL);
+}
+
+static int
+context_get_p(struct context_entry *c) {return((c->lo >> 0) & 0x1); }
+
+static int
+context_get_fpdi(struct context_entry *c) {return((c->lo >> 1) & 0x1); }
+
+static int
+context_get_t(struct context_entry *c) {return((c->lo >> 2) & 0x3); }
+
+static u64
+context_get_asr(struct context_entry *c) {return((c->lo >> 12)); }
+
+static int
+context_get_aw(struct context_entry *c) {return((c->hi >> 0) & 0x7); }
+
+static int
+context_get_aval(struct context_entry *c) {return((c->hi >> 3) & 0xf); }
+
+static int
+context_get_did(struct context_entry *c) {return((c->hi >> 8) & 0xffff); }
+
+static void context_put_asr(struct context_entry *c, unsigned long asr)
+{
+	c->lo &= (~VTD_PAGE_MASK);
+	c->lo |= (asr << VTD_PAGE_SHIFT);
+}
+
+
+/*
+ * Copy memory from a physically-addressed area into a virtually-addressed area
+ */
+static int oldcopy(void *to, void *from, int size)
+{
+	size_t ret = 0;			/* Length copied */
+	unsigned long pfn;		/* Page Frame Number */
+	char *buf = to;			/* Adr(Output buffer) */
+	size_t csize = (size_t)size;	/* Num(bytes to copy) */
+	unsigned long offset;		/* Lower 12 bits of from */
+	int userbuf = 0;		/* to is in kernel space */
+
+	if (pr_dbg.enter_oldcopy)
+		pr_debug("ENTER %s to=%16.16llx, from = %16.16llx, size = %d\n",
+			__func__,
+			(unsigned long long) to,
+			(unsigned long long) from, size);
+
+	if (intel_iommu_translation_tables_are_mapped)
+		memcpy(to, phys_to_virt((phys_addr_t)from), csize);
+	else {
+		pfn = ((unsigned long) from) >> VTD_PAGE_SHIFT;
+		offset = ((unsigned long) from) & (~VTD_PAGE_MASK);
+		ret = copy_oldmem_page(pfn, buf, csize, offset, userbuf);
+	}
+
+	if (pr_dbg.leave_oldcopy)
+		pr_debug("LEAVE %s ret=%d\n", __func__, (int) ret);
+
+	return (int) ret;
+}
+#endif /* CONFIG_CRASH_DUMP */
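
For reviewers who want to sanity-check the bit positions used by the
context-entry accessors above, here is a minimal standalone sketch in plain
userspace C. It is not part of the patch: struct ctx_entry, the ctx_*()
helpers and the sample values are illustrative stand-ins for
struct context_entry and the context_get_*() functions, chosen only to show
which bits each accessor extracts.

#include <stdint.h>
#include <stdio.h>

struct ctx_entry {              /* stand-in for struct context_entry */
        uint64_t lo;            /* P, FPD, T and ASR live in the low 64 bits */
        uint64_t hi;            /* AW, AVAIL and DID live in the high 64 bits */
};

/* Same shift-and-mask pattern as the context_get_*() accessors in the patch */
static int      ctx_p(const struct ctx_entry *c)    { return (c->lo >> 0) & 0x1; }
static int      ctx_fpd(const struct ctx_entry *c)  { return (c->lo >> 1) & 0x1; }
static int      ctx_t(const struct ctx_entry *c)    { return (c->lo >> 2) & 0x3; }
static uint64_t ctx_asr(const struct ctx_entry *c)  { return c->lo >> 12; } /* pfn of the page-table root */
static int      ctx_aw(const struct ctx_entry *c)   { return (c->hi >> 0) & 0x7; }
static int      ctx_aval(const struct ctx_entry *c) { return (c->hi >> 3) & 0xf; }
static int      ctx_did(const struct ctx_entry *c)  { return (c->hi >> 8) & 0xffff; }

int main(void)
{
        /* Made-up entry: present, translation type 0, page-table root at
         * physical page 0x12345, address width 1, domain-id 42.
         */
        struct ctx_entry c = {
                .lo = (0x12345ULL << 12) | 0x1,
                .hi = (42ULL << 8) | 0x1,
        };

        printf("P=%d FPD=%d T=%d ASR(pfn)=%#llx AW=%d AVAIL=%#x DID=%d\n",
               ctx_p(&c), ctx_fpd(&c), ctx_t(&c),
               (unsigned long long)ctx_asr(&c),
               ctx_aw(&c), ctx_aval(&c), ctx_did(&c));
        return 0;
}

Compiled with gcc, this prints "P=1 FPD=0 T=0 ASR(pfn)=0x12345 AW=1 AVAIL=0
DID=42", matching the field layout the accessors in the patch assume.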
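
Similarly, the address arithmetic in oldcopy()'s copy_oldmem_page() path can
be checked in isolation. The sketch below (again plain userspace C, not from
the patch; DEMO_PAGE_SHIFT and the sample address are made up, standing in
for VTD_PAGE_SHIFT) performs the same split of a physical address into a page
frame number and an in-page offset.

#include <stdio.h>

#define DEMO_PAGE_SHIFT 12UL                            /* stands in for VTD_PAGE_SHIFT */
#define DEMO_PAGE_MASK  (~((1UL << DEMO_PAGE_SHIFT) - 1))

int main(void)
{
        unsigned long from = 0x12345678UL;              /* made-up physical address */

        unsigned long pfn    = from >> DEMO_PAGE_SHIFT; /* page frame number */
        unsigned long offset = from & ~DEMO_PAGE_MASK;  /* lower 12 bits: offset within that page */

        /* oldcopy() would now hand (pfn, offset) to copy_oldmem_page() */
        printf("from=%#lx -> pfn=%#lx offset=%#lx\n", from, pfn, offset);
        return 0;
}

When the panicked kernel's translation tables have already been mapped into
the crashdump kernel (intel_iommu_translation_tables_are_mapped), oldcopy()
skips this split entirely and simply memcpy()s through phys_to_virt().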