From patchwork Wed Jun 15 12:21:51 2016
X-Patchwork-Submitter: David Kiarie
X-Patchwork-Id: 635851
From: David Kiarie
To: qemu-devel@nongnu.org
Date: Wed, 15 Jun 2016 15:21:51 +0300
Message-Id: <1465993312-18119-4-git-send-email-davidkiarie4@gmail.com>
In-Reply-To: <1465993312-18119-1-git-send-email-davidkiarie4@gmail.com>
References: <1465993312-18119-1-git-send-email-davidkiarie4@gmail.com>
Subject: [Qemu-devel] [V12 3/4] hw/i386: Introduce AMD IOMMU
Cc: ehabkost@redhat.com, mst@redhat.com, valentine.sinitsyn@gmail.com, peterx@redhat.com, alex.williamson@redhat.com, jan.kiszka@web.de, marcel@redhat.com, David Kiarie

Add AMD IOMMU emulation to QEMU, in addition to the existing Intel IOMMU.

The IOMMU does basic translation and error checking and has a minimal
IOTLB implementation. The IOMMU avoids the need for target aborts by
responding with IOMMU_NONE access rights, and it exempts the region
0xfee00000-0xfeefffff from translation, since that is the q35 interrupt
region. We also advertise some features that are not yet implemented to
satisfy the Linux IOMMU driver. The IOTLB exists mainly so that the
invalidation commands of real IOMMUs can be implemented, which is
essential for debugging; it may not offer any performance benefit.

Signed-off-by: David Kiarie
---
 hw/i386/Makefile.objs |    1 +
 hw/i386/amd_iommu.c   | 1559 +++++++++++++++++++++++++++++++++++++++++++++++++
 hw/i386/amd_iommu.h   |  287 +++++++++
 3 files changed, 1847 insertions(+)
 create mode 100644 hw/i386/amd_iommu.c
 create mode 100644 hw/i386/amd_iommu.h

diff --git a/hw/i386/Makefile.objs b/hw/i386/Makefile.objs
index b52d5b8..2f1a265 100644
--- a/hw/i386/Makefile.objs
+++ b/hw/i386/Makefile.objs
@@ -3,6 +3,7 @@ obj-y += multiboot.o
 obj-y += pc.o pc_piix.o pc_q35.o
 obj-y += pc_sysfw.o
 obj-y += intel_iommu.o
+obj-y += amd_iommu.o
 obj-$(CONFIG_XEN) += ../xenpv/ xen/
 obj-y += kvmvapic.o
diff --git a/hw/i386/amd_iommu.c b/hw/i386/amd_iommu.c
new file mode 100644
index 0000000..460a8f3
--- /dev/null
+++ b/hw/i386/amd_iommu.c
@@ -0,0 +1,1559 @@
+/*
+ * QEMU emulation of AMD IOMMU (AMD-Vi)
+ *
+ * Copyright (C) 2011 Eduard - Gabriel Munteanu
+ * Copyright (C) 2015 David Kiarie
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ *
+ * Cache implementation inspired by hw/i386/intel_iommu.c
+ */
+#include "qemu/osdep.h"
+#include "trace.h"
+#include "hw/pci/msi.h"
+#include "hw/i386/pc.h"
+#include "hw/i386/amd_iommu.h"
+
+#define ENCODE_EVENT(devid, info, addr, rshift) do {                   \
+        *(uint16_t *)&evt[0] = devid;                                  \
+        *(uint16_t *)&evt[3] = info;                                   \
+        *(uint64_t *)&evt[4] = rshift ? cpu_to_le64(addr >> rshift) :  \
+                                        cpu_to_le64(addr);             \
+    } while (0)
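As an aside on the macro above: the 16-byte event record it fills carries the device id in bytes 0-1, the event code and flags in bytes 6-7, and the (possibly right-shifted) address in bytes 8-15. A minimal standalone sketch, assuming a little-endian host (so the cpu_to_le64() conversions are identity) and made-up devid/info/addr values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t evt[8] = { 0 };
        uint16_t devid = 0x0010;       /* hypothetical requester 00:02.0 */
        uint16_t info  = 0x2 << 12;    /* event code in the top nibble */
        uint64_t addr  = 0xfebf1000ULL;

        *(uint16_t *)&evt[0] = devid;  /* bytes 0-1: device id */
        *(uint16_t *)&evt[3] = info;   /* bytes 6-7: event code and flags */
        *(uint64_t *)&evt[4] = addr;   /* bytes 8-15: event address */

        for (int i = 0; i < 8; i++) {
            printf("%04x ", evt[i]);   /* 0010 0000 0000 2000 1000 febf ... */
        }
        printf("\n");
        return 0;
    }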
+
+typedef struct AMDVIAddressSpace {
+    uint8_t bus_num;             /* bus number                           */
+    uint8_t devfn;               /* device function                      */
+    AMDVIState *iommu_state;     /* AMDVI - one per machine              */
+    MemoryRegion iommu;          /* device's IOMMU region                */
+    AddressSpace as;             /* device's corresponding address space */
+} AMDVIAddressSpace;
+
+/* AMD-Vi cache entry */
+typedef struct AMDVIIOTLBEntry {
+    uint64_t gfn;                /* guest frame number  */
+    uint16_t domid;              /* assigned domain id  */
+    uint16_t devid;              /* device owning entry */
+    uint64_t perms;              /* access permissions  */
+    uint64_t translated_addr;    /* translated address  */
+    uint64_t page_mask;          /* physical page size  */
+} AMDVIIOTLBEntry;
+
+#if defined(HOST_WORDS_BIGENDIAN)
+#define __BIG_ENDIAN_BITFIELD
+#endif
+
+/* serialize IOMMU command processing */
+struct CMDCompletionWait {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type          */
+    uint64_t reserved:8;
+    uint64_t store_addr:49;      /* addr to write         */
+    uint64_t completion_flush:1; /* allow more executions */
+    uint64_t completion_int:1;   /* set MMIOWAITINT       */
+    uint64_t completion_store:1; /* write data to address */
+#else
+    uint64_t completion_store:1;
+    uint64_t completion_int:1;
+    uint64_t completion_flush:1;
+    uint64_t store_addr:49;
+    uint64_t reserved:8;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t store_data;         /* data to write */
+} QEMU_PACKED;
+
+/* invalidate internal caches for devid */
+struct CMDInvalDevEntry {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type         */
+    uint64_t reserved_1:44;
+    uint64_t devid:16;           /* device to invalidate */
+#else
+    uint64_t devid:16;
+    uint64_t reserved_1:44;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} QEMU_PACKED;
+
+/* invalidate a range of entries in IOMMU translation cache for devid */
+struct CMDInvalIommuPages {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type           */
+    uint64_t reserved_2:12;
+    uint64_t domid:16;           /* domain to inval for    */
+    uint64_t reserved_1:12;
+    uint64_t pasid:20;
+#else
+    uint64_t pasid:20;
+    uint64_t reserved_1:12;
+    uint64_t domid:16;
+    uint64_t reserved_2:12;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t address:51;         /* address to invalidate  */
+    uint64_t reserved_3:10;
+    uint64_t guest:1;            /* G/N invalidation       */
+    uint64_t pde:1;              /* invalidate cached ptes */
+    uint64_t size:1;             /* size of invalidation   */
+#else
+    uint64_t size:1;
+    uint64_t pde:1;
+    uint64_t guest:1;
+    uint64_t reserved_3:10;
+    uint64_t address:51;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} QEMU_PACKED;
+
+/* inval specified address for devid from remote IOTLB */
+struct CMDInvalIOTLBPages {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type           */
+    uint64_t pasid_19_16:4;
+    uint64_t pasid_7_0:8;
+    uint64_t queuid:16;
+    uint64_t maxpend:8;
+    uint64_t pasid_15_8:8;
+    uint64_t devid:16;           /* related devid          */
+#else
+    uint64_t devid:16;
+    uint64_t pasid_15_8:8;
+    uint64_t maxpend:8;
+    uint64_t queuid:16;
+    uint64_t pasid_7_0:8;
+    uint64_t pasid_19_16:4;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t address:52;         /* invalidate addr        */
+    uint64_t reserved_2:9;
+    uint64_t guest:1;            /* G/N invalidate         */
+    uint64_t reserved_1:1;
+    uint64_t size:1;             /* size of invalidation   */
+#else
+    uint64_t size:1;
+    uint64_t reserved_1:1;
+    uint64_t guest:1;
+    uint64_t reserved_2:9;
+    uint64_t address:52;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} QEMU_PACKED;
+
+/* invalidate all cached interrupt info for devid */
+struct CMDInvalIntrTable {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type  */
+    uint64_t reserved_1:44;
+    uint64_t devid:16;           /* related devid */
+#else
+    uint64_t devid:16;
+    uint64_t reserved_1:44;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} QEMU_PACKED;
+
+/* load address translation info for devid into translation cache */
+struct CMDPrefetchPages {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type  */
+    uint64_t reserved_2:8;
+    uint64_t pasid_19_0:20;
+    uint64_t pfcount_7_0:8;
+    uint64_t reserved_1:8;
+    uint64_t devid:16;           /* related devid */
+#else
+    uint64_t devid:16;
+    uint64_t reserved_1:8;
+    uint64_t pfcount_7_0:8;
+    uint64_t pasid_19_0:20;
+    uint64_t reserved_2:8;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t address:52;         /* invalidate address     */
+    uint64_t reserved_5:7;
+    uint64_t inval:1;            /* inval matching entries */
+    uint64_t reserved_4:1;
+    uint64_t guest:1;            /* G/N invalidate         */
+    uint64_t reserved_3:1;
+    uint64_t size:1;             /* prefetched page size   */
+#else
+    uint64_t size:1;
+    uint64_t reserved_3:1;
+    uint64_t guest:1;
+    uint64_t reserved_4:1;
+    uint64_t inval:1;
+    uint64_t reserved_5:7;
+    uint64_t address:52;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} QEMU_PACKED;
+
+/* clear all address translation/interrupt remapping caches */
+struct CMDInvalIommuAll {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint64_t type:4;             /* command type */
+    uint64_t reserved_1:60;
+#else
+    uint64_t reserved_1:60;
+    uint64_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+    uint64_t reserved_2;
+} QEMU_PACKED;
+
+/* issue a PCIe completion packet for devid */
+struct CMDCompletePPR {
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint32_t devid;              /* related devid */
+    uint32_t reserved_1;
+#else
+    uint32_t reserved_1;
+    uint32_t devid;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint32_t type:4;             /* command type */
+    uint32_t reserved_2:8;
+    uint32_t pasid_19_0:20;
+#else
+    uint32_t pasid_19_0:20;
+    uint32_t reserved_2:8;
+    uint32_t type:4;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint32_t reserved_3:29;
+    uint32_t guest:1;
+    uint32_t reserved_4:2;
+#else
+    uint32_t reserved_3:2;
+    uint32_t guest:1;
+    uint32_t reserved_4:29;
+#endif /* __BIG_ENDIAN_BITFIELD */
+
+#ifdef __BIG_ENDIAN_BITFIELD
+    uint32_t reserved_5:16;
+    uint32_t completion_tag:16;  /* PCIe PRI information */
+#else
+    uint32_t completion_tag:16;
+    uint32_t reserved_5:16;
+#endif /* __BIG_ENDIAN_BITFIELD */
+} QEMU_PACKED;
+
+#if defined(HOST_WORDS_BIGENDIAN)
+#undef __BIG_ENDIAN_BITFIELD
+#endif
+
+typedef struct CMDInvalIommuAll CMDInvalIommuAll;
+typedef struct CMDCompletionWait CMDCompletionWait;
+typedef struct CMDInvalDevEntry CMDInvalDevEntry;
+typedef struct CMDInvalIommuPages CMDInvalIommuPages;
+typedef struct CMDInvalIOTLBPages CMDInvalIOTLBPages;
+typedef struct CMDInvalIntrTable CMDInvalIntrTable;
+typedef struct CMDPrefetchPages CMDPrefetchPages;
+typedef struct CMDCompletePPR CMDCompletePPR;
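The structs above overlay the raw 16-byte command slots that amdvi_cmdbuf_exec() further down reads out of guest memory; in every layout the command type ends up in bits 60:63 of the first quadword, which is what AMDVI_CMDBUF_ID_BYTE and AMDVI_CMDBUF_ID_RSHIFT in amd_iommu.h encode. A bitfield-free sketch of pulling the type out of a raw little-endian command slot (the opcode value is just an example):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t cmd[16] = { 0 };
        cmd[7] = 0x01 << 4;           /* COMPLETION_WAIT in the type nibble */
        uint8_t type = cmd[7] >> 4;   /* byte 7 of quadword 0, bits 60:63 */
        printf("command type %#x\n", type);
        return 0;
    }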
+
+/* configure MMIO registers at startup/reset */
+static void amdvi_set_quad(AMDVIState *s, hwaddr addr, uint64_t val,
+                           uint64_t romask, uint64_t w1cmask)
+{
+    stq_le_p(&s->mmior[addr], val);
+    stq_le_p(&s->romask[addr], romask);
+    stq_le_p(&s->w1cmask[addr], w1cmask);
+}
+
+static uint16_t amdvi_readw(AMDVIState *s, hwaddr addr)
+{
+    return lduw_le_p(&s->mmior[addr]);
+}
+
+static uint32_t amdvi_readl(AMDVIState *s, hwaddr addr)
+{
+    return ldl_le_p(&s->mmior[addr]);
+}
+
+static uint64_t amdvi_readq(AMDVIState *s, hwaddr addr)
+{
+    return ldq_le_p(&s->mmior[addr]);
+}
+
+/* internal write */
+static void amdvi_writeq_raw(AMDVIState *s, uint64_t val, hwaddr addr)
+{
+    stq_le_p(&s->mmior[addr], val);
+}
+
+/* external write */
+static void amdvi_writew(AMDVIState *s, hwaddr addr, uint16_t val)
+{
+    uint16_t romask = lduw_le_p(&s->romask[addr]);
+    uint16_t w1cmask = lduw_le_p(&s->w1cmask[addr]);
+    uint16_t oldval = lduw_le_p(&s->mmior[addr]);
+    stw_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
+
+static void amdvi_writel(AMDVIState *s, hwaddr addr, uint32_t val)
+{
+    uint32_t romask = ldl_le_p(&s->romask[addr]);
+    uint32_t w1cmask = ldl_le_p(&s->w1cmask[addr]);
+    uint32_t oldval = ldl_le_p(&s->mmior[addr]);
+    stl_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
+
+static void amdvi_writeq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    uint64_t romask = ldq_le_p(&s->romask[addr]);
+    uint64_t w1cmask = ldq_le_p(&s->w1cmask[addr]);
+    uint64_t oldval = ldq_le_p(&s->mmior[addr]);
+    stq_le_p(&s->mmior[addr], (val & ~(val & w1cmask)) | (romask & oldval));
+}
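A worked example of the masked-write rule used by all three writers above, (val & ~(val & w1cmask)) | (romask & oldval): read-only bits keep their stored value, and a write-1-to-clear bit written as 1 comes out as 0. The masks and values below are arbitrary:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t oldval = 0x00f4, romask = 0x00f0, w1cmask = 0x0004;
        uint64_t val = 0x0107;   /* incoming guest write */
        uint64_t newval = (val & ~(val & w1cmask)) | (romask & oldval);
        /* bit 2 is w1c and written as 1 -> cleared; bits 4-7 keep 0xf0 */
        printf("%#" PRIx64 "\n", newval);   /* 0x1f3 */
        return 0;
    }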
+
+/* test bits in a 64-bit register */
+static bool amdvi_test_mask(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    return (amdvi_readq(s, addr) & val) == val;
+}
+
+/* OR a 64-bit register with a 64-bit value, storing the result */
+static void amdvi_orassignq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) | val);
+}
+
+/* AND a 64-bit register with a 64-bit value, storing the result */
+static void amdvi_and_assignq(AMDVIState *s, hwaddr addr, uint64_t val)
+{
+    amdvi_writeq_raw(s, addr, amdvi_readq(s, addr) & val);
+}
+
+static void amdvi_generate_msi_interrupt(AMDVIState *s)
+{
+    MSIMessage msg;
+    if (msi_enabled(&s->dev)) {
+        msg = msi_get_message(&s->dev, 0);
+        address_space_stl_le(&address_space_memory, msg.address, msg.data,
+                             MEMTXATTRS_UNSPECIFIED, NULL);
+    }
+}
+
+static void amdvi_log_event(AMDVIState *s, uint16_t *evt)
+{
+    /* event logging not enabled */
+    if (!s->evtlog_enabled || amdvi_test_mask(s, AMDVI_MMIO_STATUS,
+        AMDVI_MMIO_STATUS_EVT_OVF)) {
+        return;
+    }
+
+    /* event log buffer full */
+    if (s->evtlog_tail >= s->evtlog_len) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_OVF);
+        /* generate interrupt */
+        amdvi_generate_msi_interrupt(s);
+        return;
+    }
+
+    if (dma_memory_write(&address_space_memory, s->evtlog + s->evtlog_tail,
+        evt, AMDVI_EVENT_LEN)) {
+        trace_amdvi_evntlog_fail(s->evtlog, s->evtlog_tail);
+    }
+
+    s->evtlog_tail += AMDVI_EVENT_LEN;
+    amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
+    amdvi_generate_msi_interrupt(s);
+}
+
+/* log an error encountered during a page walk
+ *
+ * @addr : virtual address in translation request
+ */
+static void amdvi_page_fault(AMDVIState *s, uint16_t devid,
+                             hwaddr addr, uint16_t info)
+{
+    uint16_t evt[8];
+
+    info |= AMDVI_EVENT_IOPF_I | AMDVI_EVENT_IOPF;
+
+    /* encode information */
+    ENCODE_EVENT(devid, info, addr, 0);
+
+    /* log a page fault */
+    amdvi_log_event(s, evt);
+
+    /* Abort the translation */
+    pci_word_test_and_set_mask(s->dev.config + PCI_STATUS,
+                               PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+/* log a master abort accessing device table
+ *
+ * @devtab : address of device table entry
+ * @info : error flags
+ */
+static void amdvi_log_devtab_error(AMDVIState *s, uint16_t devid,
+                                   hwaddr devtab, uint16_t info)
+{
+    uint16_t evt[8];
+
+    info |= AMDVI_EVENT_DEV_TAB_HW_ERROR;
+
+    /* encode information */
+    ENCODE_EVENT(devid, info, devtab, 0);
+
+    amdvi_log_event(s, evt);
+
+    /* Abort the translation */
+    pci_word_test_and_set_mask(s->dev.config + PCI_STATUS,
+                               PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+/* log an event trying to access command buffer
+ *
+ * @addr : address that couldn't be accessed
+ */
+static void amdvi_log_command_error(AMDVIState *s, hwaddr addr)
+{
+    uint16_t evt[8], info = AMDVI_EVENT_COMMAND_HW_ERROR;
+
+    /* encode information */
+    ENCODE_EVENT(0, info, addr, 3);
+
+    amdvi_log_event(s, evt);
+
+    /* Abort the translation */
+    pci_word_test_and_set_mask(s->dev.config + PCI_STATUS,
+                               PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+/* log an illegal command event
+ *
+ * @addr : address of the illegal command
+ */
+static void amdvi_log_illegalcom_error(AMDVIState *s, uint16_t info,
+                                       hwaddr addr)
+{
+    uint16_t evt[8];
+
+    info |= AMDVI_EVENT_ILLEGAL_COMMAND_ERROR;
+
+    /* encode information */
+    ENCODE_EVENT(0, info, addr, 3);
+
+    amdvi_log_event(s, evt);
+}
+
+/* log an error accessing device table
+ *
+ * @devid : device owning the table entry
+ * @addr : address of device table entry
+ * @info : error flags
+ */
+static void amdvi_log_illegaldevtab_error(AMDVIState *s, uint16_t devid,
+                                          hwaddr addr, uint16_t info)
+{
+    uint16_t evt[8];
+
+    info |= AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY;
+
+    ENCODE_EVENT(devid, info, addr, 3);
+
+    amdvi_log_event(s, evt);
+}
+
+/* log an error accessing a PTE entry
+ *
+ * @addr : address that couldn't be accessed
+ */
+static void amdvi_log_pagetab_error(AMDVIState *s, uint16_t devid,
+                                    hwaddr addr, uint16_t info)
+{
+    uint16_t evt[8];
+
+    info |= AMDVI_EVENT_PAGE_TAB_HW_ERROR;
+
+    ENCODE_EVENT(devid, info, addr, 3);
+    amdvi_log_event(s, evt);
+
+    pci_word_test_and_set_mask(s->dev.config + PCI_STATUS,
+                               PCI_STATUS_SIG_TARGET_ABORT);
+}
+
+static gboolean amdvi_uint64_equal(gconstpointer v1, gconstpointer v2)
+{
+    return *((const uint64_t *)v1) == *((const uint64_t *)v2);
+}
+
+static guint amdvi_uint64_hash(gconstpointer v)
+{
+    return (guint)*(const uint64_t *)v;
+}
+
+static AMDVIIOTLBEntry *amdvi_iotlb_lookup(AMDVIState *s, hwaddr addr,
+                                           uint64_t devid)
+{
+    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
+                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+    return g_hash_table_lookup(s->iotlb, &key);
+}
+
+static void amdvi_iotlb_reset(AMDVIState *s)
+{
+    assert(s->iotlb);
+    g_hash_table_remove_all(s->iotlb);
+}
+
+static gboolean amdvi_iotlb_remove_by_devid(gpointer key, gpointer value,
+                                            gpointer user_data)
+{
+    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
+    uint16_t devid = *(uint16_t *)user_data;
+    return entry->devid == devid;
+}
+
+static void amdvi_iotlb_remove_page(AMDVIState *s, hwaddr addr,
+                                    uint64_t devid)
+{
+    uint64_t key = (addr >> AMDVI_PAGE_SHIFT_4K) |
+                   ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+    g_hash_table_remove(s->iotlb, &key);
+}
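The IOTLB is keyed by a single 64-bit value composed from the guest frame number and the device id, as the helpers above show; with AMDVI_DEVID_SHIFT at 36, a 16-bit devid sits clear of any 48-bit IOVA's frame number. A standalone sketch of the key construction (the constants mirror amd_iommu.h, the gpa/devid values are made up):

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    #define AMDVI_PAGE_SHIFT_4K 12
    #define AMDVI_DEVID_SHIFT   36

    int main(void)
    {
        uint64_t gpa = 0x12345000ULL;
        uint16_t devid = 0x0010;
        uint64_t key = (gpa >> AMDVI_PAGE_SHIFT_4K) |
                       ((uint64_t)devid << AMDVI_DEVID_SHIFT);
        printf("key = %#" PRIx64 "\n", key);   /* 0x10000012345 */
        return 0;
    }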
+
+static void amdvi_update_iotlb(AMDVIState *s, uint16_t devid,
+                               uint64_t gpa, IOMMUTLBEntry to_cache,
+                               uint16_t domid)
+{
+    /* don't cache erroneous translations */
+    if (to_cache.perm != IOMMU_NONE) {
+        AMDVIIOTLBEntry *entry = g_malloc(sizeof(*entry));
+        uint64_t *key = g_malloc(sizeof(*key));
+        uint64_t gfn = gpa >> AMDVI_PAGE_SHIFT_4K;
+
+        trace_amdvi_cache_update(domid, PCI_BUS_NUM(devid), PCI_SLOT(devid),
+                                 PCI_FUNC(devid), gpa,
+                                 to_cache.translated_addr);
+
+        if (g_hash_table_size(s->iotlb) >= AMDVI_IOTLB_MAX_SIZE) {
+            trace_amdvi_iotlb_reset();
+            amdvi_iotlb_reset(s);
+        }
+
+        entry->gfn = gfn;
+        entry->domid = domid;
+        entry->perms = to_cache.perm;
+        entry->translated_addr = to_cache.translated_addr;
+        entry->page_mask = to_cache.addr_mask;
+        *key = gfn | ((uint64_t)(devid) << AMDVI_DEVID_SHIFT);
+        g_hash_table_replace(s->iotlb, key, entry);
+    }
+}
+
+static void amdvi_completion_wait(AMDVIState *s, CMDCompletionWait *wait)
+{
+    /* the address is stored right-shifted by 3, restore the low bits */
+    hwaddr addr = cpu_to_le64(wait->store_addr << 3);
+    uint64_t data = cpu_to_le64(wait->store_data);
+
+    if (wait->reserved) {
+        amdvi_log_illegalcom_error(s, wait->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    if (wait->completion_store) {
+        if (dma_memory_write(&address_space_memory, addr, &data,
+            AMDVI_COMPLETION_DATA_SIZE)) {
+            trace_amdvi_completion_wait_fail(addr);
+        }
+    }
+
+    /* set completion interrupt */
+    if (wait->completion_int) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_COMP_INT);
+        /* generate interrupt */
+        amdvi_generate_msi_interrupt(s);
+    }
+
+    trace_amdvi_completion_wait(addr, data);
+}
+
+/* log error without aborting, since Linux seems to be using reserved bits */
+static void amdvi_inval_devtab_entry(AMDVIState *s, void *cmd)
+{
+    CMDInvalDevEntry *inval = (CMDInvalDevEntry *)cmd;
+
+    /* this command should invalidate internal caches, of which we have none */
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+    trace_amdvi_devtab_inval(PCI_BUS_NUM(inval->devid), PCI_SLOT(inval->devid),
+                             PCI_FUNC(inval->devid));
+}
+
+static void amdvi_complete_ppr(AMDVIState *s, void *cmd)
+{
+    CMDCompletePPR *pprcomp = (CMDCompletePPR *)cmd;
+
+    if (pprcomp->reserved_1 || pprcomp->reserved_2 || pprcomp->reserved_3 ||
+        pprcomp->reserved_4 || pprcomp->reserved_5) {
+        amdvi_log_illegalcom_error(s, pprcomp->type,
+                                   s->cmdbuf + s->cmdbuf_head);
+    }
+    trace_amdvi_ppr_exec();
+}
+
+static void amdvi_inval_all(AMDVIState *s, CMDInvalIommuAll *inval)
+{
+    if (inval->reserved_2 || inval->reserved_1) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    amdvi_iotlb_reset(s);
+    trace_amdvi_all_inval();
+}
+
+static gboolean amdvi_iotlb_remove_by_domid(gpointer key, gpointer value,
+                                            gpointer user_data)
+{
+    AMDVIIOTLBEntry *entry = (AMDVIIOTLBEntry *)value;
+    uint16_t domid = *(uint16_t *)user_data;
+    return entry->domid == domid;
+}
+
+/* we don't have the devid here, so we can't remove pages by address */
+static void amdvi_inval_pages(AMDVIState *s, CMDInvalIommuPages *inval)
+{
+    uint16_t domid = inval->domid;
+
+    if (inval->reserved_1 || inval->reserved_2 || inval->reserved_3) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+    }
+
+    g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_domid,
+                                &domid);
+    trace_amdvi_pages_inval(inval->domid);
+}
+
+static void amdvi_prefetch_pages(AMDVIState *s, CMDPrefetchPages *prefetch)
+{
+    if (prefetch->reserved_1 || prefetch->reserved_2 || prefetch->reserved_3 ||
+        prefetch->reserved_4 || prefetch->reserved_5) {
+        amdvi_log_illegalcom_error(s, prefetch->type,
+                                   s->cmdbuf + s->cmdbuf_head);
+    }
+    trace_amdvi_prefetch_pages();
+}
+
+static void amdvi_inval_inttable(AMDVIState *s, CMDInvalIntrTable *inval)
+{
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+    trace_amdvi_intr_inval();
+}
+
+/* FIXME: try to work with the specified size instead of all the pages
+ * when the S bit is on
+ */
+static void iommu_inval_iotlb(AMDVIState *s, CMDInvalIOTLBPages *inval)
+{
+    uint16_t devid = inval->devid;
+
+    if (inval->reserved_1 || inval->reserved_2) {
+        amdvi_log_illegalcom_error(s, inval->type, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+
+    if (inval->size) {
+        g_hash_table_foreach_remove(s->iotlb, amdvi_iotlb_remove_by_devid,
+                                    &devid);
+    } else {
+        amdvi_iotlb_remove_page(s, inval->address << 12, inval->devid);
+    }
+    trace_amdvi_iotlb_inval();
+}
+
+/* a command with reserved bits set is regarded as an illegal command */
+static void amdvi_cmdbuf_exec(AMDVIState *s)
+{
+    CMDCompletionWait cmd;
+
+    if (dma_memory_read(&address_space_memory, s->cmdbuf + s->cmdbuf_head,
+        &cmd, AMDVI_COMMAND_SIZE)) {
+        trace_amdvi_command_read_fail(s->cmdbuf, s->cmdbuf_head);
+        amdvi_log_command_error(s, s->cmdbuf + s->cmdbuf_head);
+        return;
+    }
+
+    switch (cmd.type) {
+    case AMDVI_CMD_COMPLETION_WAIT:
+        amdvi_completion_wait(s, (CMDCompletionWait *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_DEVTAB_ENTRY:
+        amdvi_inval_devtab_entry(s, (CMDInvalDevEntry *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_AMDVI_PAGES:
+        amdvi_inval_pages(s, (CMDInvalIommuPages *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_IOTLB_PAGES:
+        iommu_inval_iotlb(s, (CMDInvalIOTLBPages *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_INTR_TABLE:
+        amdvi_inval_inttable(s, (CMDInvalIntrTable *)&cmd);
+        break;
+    case AMDVI_CMD_PREFETCH_AMDVI_PAGES:
+        amdvi_prefetch_pages(s, (CMDPrefetchPages *)&cmd);
+        break;
+    case AMDVI_CMD_COMPLETE_PPR_REQUEST:
+        amdvi_complete_ppr(s, (CMDCompletePPR *)&cmd);
+        break;
+    case AMDVI_CMD_INVAL_AMDVI_ALL:
+        amdvi_inval_all(s, (CMDInvalIommuAll *)&cmd);
+        break;
+    default:
+        trace_amdvi_unhandled_command(cmd.type);
+        /* log illegal command */
+        amdvi_log_illegalcom_error(s, cmd.type,
+                                   s->cmdbuf + s->cmdbuf_head);
+        break;
+    }
+}
+
+static void amdvi_cmdbuf_run(AMDVIState *s)
+{
+    if (!s->cmdbuf_enabled) {
+        trace_amdvi_command_error(amdvi_readq(s, AMDVI_MMIO_CONTROL));
+        return;
+    }
+
+    /* check if there is work to do */
+    while (s->cmdbuf_head != s->cmdbuf_tail) {
+        trace_amdvi_command_exec(s->cmdbuf_head, s->cmdbuf_tail, s->cmdbuf);
+        amdvi_cmdbuf_exec(s);
+        s->cmdbuf_head += AMDVI_COMMAND_SIZE;
+        amdvi_writeq_raw(s, s->cmdbuf_head, AMDVI_MMIO_COMMAND_HEAD);
+
+        /* wrap head pointer */
+        if (s->cmdbuf_head >= s->cmdbuf_len * AMDVI_COMMAND_SIZE) {
+            s->cmdbuf_head = 0;
+        }
+    }
+}
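To make the ring arithmetic in amdvi_cmdbuf_run() concrete: head and tail are byte offsets into a buffer of cmdbuf_len 16-byte slots, head chases tail one command at a time and wraps at cmdbuf_len * AMDVI_COMMAND_SIZE. A standalone sketch with a made-up 4-slot ring:

    #include <stdint.h>
    #include <stdio.h>

    #define AMDVI_COMMAND_SIZE 16

    int main(void)
    {
        uint32_t head = 32, tail = 16, cmdbuf_len = 4;  /* 4 slots, 64 bytes */
        while (head != tail) {
            printf("execute command at offset %u\n", head);
            head += AMDVI_COMMAND_SIZE;
            if (head >= cmdbuf_len * AMDVI_COMMAND_SIZE) {
                head = 0;   /* wrap, exactly as the emulation does */
            }
        }
        return 0;
    }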
+
+/* System software might never read some of these fields, but serve
+ * them anyway
+ */
+static uint64_t amdvi_mmio_read(void *opaque, hwaddr addr, unsigned size)
+{
+    AMDVIState *s = opaque;
+    uint64_t val = -1;
+
+    if (addr + size > AMDVI_MMIO_SIZE) {
+        trace_amdvi_mmio_read("error: addr outside region: max ",
+                              (uint64_t)AMDVI_MMIO_SIZE, addr, size);
+        return (uint64_t)-1;
+    }
+
+    if (size == 2) {
+        val = amdvi_readw(s, addr);
+    } else if (size == 4) {
+        val = amdvi_readl(s, addr);
+    } else if (size == 8) {
+        val = amdvi_readq(s, addr);
+    }
+
+    switch (addr & ~0x07) {
+    case AMDVI_MMIO_DEVICE_TABLE:
+        trace_amdvi_mmio_read("MMIO_DEVICE_TABLE", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_COMMAND_BASE:
+        trace_amdvi_mmio_read("MMIO_COMMAND_BASE", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EVENT_BASE:
+        trace_amdvi_mmio_read("MMIO_EVENT_BASE", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_CONTROL:
+        trace_amdvi_mmio_read("MMIO_CONTROL", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EXCL_BASE:
+        trace_amdvi_mmio_read("MMIO_EXCL_BASE", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EXCL_LIMIT:
+        trace_amdvi_mmio_read("MMIO_EXCL_LIMIT", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_COMMAND_HEAD:
+        trace_amdvi_mmio_read("MMIO_COMMAND_HEAD", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_COMMAND_TAIL:
+        trace_amdvi_mmio_read("MMIO_COMMAND_TAIL", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EVENT_HEAD:
+        trace_amdvi_mmio_read("MMIO_EVENT_HEAD", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EVENT_TAIL:
+        trace_amdvi_mmio_read("MMIO_EVENT_TAIL", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_STATUS:
+        trace_amdvi_mmio_read("MMIO_STATUS", addr, size, addr & ~0x07);
+        break;
+    case AMDVI_MMIO_EXT_FEATURES:
+        trace_amdvi_mmio_read("MMIO_EXT_FEATURES", addr, size, addr & ~0x07);
+        break;
+    default:
+        trace_amdvi_mmio_read("UNHANDLED READ", addr, size, addr & ~0x07);
+    }
+    return val;
+}
+
+static void amdvi_handle_control_write(AMDVIState *s)
+{
+    /*
+     * read whatever is already written in case
+     * software is writing in chunks less than 8 bytes
+     */
+    unsigned long control = amdvi_readq(s, AMDVI_MMIO_CONTROL);
+    s->enabled = !!(control & AMDVI_MMIO_CONTROL_AMDVIEN);
+
+    s->ats_enabled = !!(control & AMDVI_MMIO_CONTROL_HTTUNEN);
+    s->evtlog_enabled = s->enabled && !!(control &
+                        AMDVI_MMIO_CONTROL_EVENTLOGEN);
+
+    s->evtlog_intr = !!(control & AMDVI_MMIO_CONTROL_EVENTINTEN);
+    s->completion_wait_intr = !!(control & AMDVI_MMIO_CONTROL_COMWAITINTEN);
+    s->cmdbuf_enabled = s->enabled && !!(control &
+                        AMDVI_MMIO_CONTROL_CMDBUFLEN);
+
+    /* update the flags depending on the control register */
+    if (s->cmdbuf_enabled) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_CMDBUF_RUN);
+    } else {
+        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_CMDBUF_RUN);
+    }
+    if (s->evtlog_enabled) {
+        amdvi_orassignq(s, AMDVI_MMIO_STATUS, AMDVI_MMIO_STATUS_EVT_RUN);
+    } else {
+        amdvi_and_assignq(s, AMDVI_MMIO_STATUS, ~AMDVI_MMIO_STATUS_EVT_RUN);
+    }
+
+    trace_amdvi_control_status(control);
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_devtab_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_DEVICE_TABLE);
+    s->devtab = (val & AMDVI_MMIO_DEVTAB_BASE_MASK);
+
+    /* set device table length: the size field counts 4K units minus one */
+    s->devtab_len = ((val & AMDVI_MMIO_DEVTAB_SIZE_MASK) + 1) *
+                    (AMDVI_MMIO_DEVTAB_SIZE_UNIT /
+                     AMDVI_MMIO_DEVTAB_ENTRY_SIZE);
+}
+
+static inline void amdvi_handle_cmdhead_write(AMDVIState *s)
+{
+    s->cmdbuf_head = amdvi_readq(s, AMDVI_MMIO_COMMAND_HEAD)
+                     & AMDVI_MMIO_CMDBUF_HEAD_MASK;
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_cmdbase_write(AMDVIState *s)
+{
+    s->cmdbuf = amdvi_readq(s, AMDVI_MMIO_COMMAND_BASE)
+                & AMDVI_MMIO_CMDBUF_BASE_MASK;
+    s->cmdbuf_len = 1UL << (s->mmior[AMDVI_MMIO_CMDBUF_SIZE_BYTE]
+                            & AMDVI_MMIO_CMDBUF_SIZE_MASK);
+    s->cmdbuf_head = s->cmdbuf_tail = 0;
+}
+
+static inline void amdvi_handle_cmdtail_write(AMDVIState *s)
+{
+    s->cmdbuf_tail = amdvi_readq(s, AMDVI_MMIO_COMMAND_TAIL)
+                     & AMDVI_MMIO_CMDBUF_TAIL_MASK;
+    amdvi_cmdbuf_run(s);
+}
+
+static inline void amdvi_handle_excllim_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EXCL_LIMIT);
+    s->excl_limit = (val & AMDVI_MMIO_EXCL_LIMIT_MASK) |
+                    AMDVI_MMIO_EXCL_LIMIT_LOW;
+}
+
+static inline void amdvi_handle_evtbase_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_BASE);
+    s->evtlog = val & AMDVI_MMIO_EVTLOG_BASE_MASK;
+    s->evtlog_len = 1UL << (*(uint64_t *)&s->mmior[AMDVI_MMIO_EVTLOG_SIZE_BYTE]
+                            & AMDVI_MMIO_EVTLOG_SIZE_MASK);
+}
+
+static inline void amdvi_handle_evttail_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_TAIL);
+    s->evtlog_tail = val & AMDVI_MMIO_EVTLOG_TAIL_MASK;
+}
+
+static inline void amdvi_handle_evthead_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_EVENT_HEAD);
+    s->evtlog_head = val & AMDVI_MMIO_EVTLOG_HEAD_MASK;
+}
+
+static inline void amdvi_handle_pprbase_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_BASE);
+    s->ppr_log = val & AMDVI_MMIO_PPRLOG_BASE_MASK;
+    s->pprlog_len = 1UL << (*(uint64_t *)&s->mmior[AMDVI_MMIO_PPRLOG_SIZE_BYTE]
+                            & AMDVI_MMIO_PPRLOG_SIZE_MASK);
+}
+
+static inline void amdvi_handle_pprhead_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_HEAD);
+    s->pprlog_head = val & AMDVI_MMIO_PPRLOG_HEAD_MASK;
+}
+
+static inline void amdvi_handle_pprtail_write(AMDVIState *s)
+{
+    uint64_t val = amdvi_readq(s, AMDVI_MMIO_PPR_TAIL);
+    s->pprlog_tail = val & AMDVI_MMIO_PPRLOG_TAIL_MASK;
+}
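A worked example of the length computation in amdvi_handle_devtab_write() above: the size field in the device table base register counts 4 KiB units minus one, and each unit holds AMDVI_MMIO_DEVTAB_SIZE_UNIT / AMDVI_MMIO_DEVTAB_ENTRY_SIZE = 128 entries of 32 bytes:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    #define AMDVI_MMIO_DEVTAB_SIZE_UNIT  4096
    #define AMDVI_MMIO_DEVTAB_ENTRY_SIZE 32

    int main(void)
    {
        uint64_t size_field = 1;   /* means (1 + 1) * 4 KiB of table */
        uint64_t entries = (size_field + 1) *
            (AMDVI_MMIO_DEVTAB_SIZE_UNIT / AMDVI_MMIO_DEVTAB_ENTRY_SIZE);
        printf("%" PRIu64 " device table entries\n", entries);   /* 256 */
        return 0;
    }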
+
+/* FIXME: something might go wrong if system software writes these
+ * registers in chunks of one byte. Linux writes in chunks of at least
+ * 4 bytes, so this currently works for the Linux driver, but byte-wide
+ * writers would be broken.
+ */
+static void amdvi_mmio_write(void *opaque, hwaddr addr, uint64_t val,
+                             unsigned size)
+{
+    AMDVIState *s = opaque;
+    unsigned long offset = addr & 0x07;
+
+    if (addr + size > AMDVI_MMIO_SIZE) {
+        trace_amdvi_mmio_write("error: addr outside region: max ",
+                               (uint64_t)AMDVI_MMIO_SIZE, size, val, offset);
+        return;
+    }
+
+    switch (addr & ~0x07) {
+    case AMDVI_MMIO_CONTROL:
+        trace_amdvi_mmio_write("MMIO_CONTROL", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_control_write(s);
+        break;
+    case AMDVI_MMIO_DEVICE_TABLE:
+        trace_amdvi_mmio_write("MMIO_DEVICE_TABLE", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        /* set device table address
+         * This also suffers from our inability to tell whether software
+         * is done writing
+         */
+        if (offset || (size == 8)) {
+            amdvi_handle_devtab_write(s);
+        }
+        break;
+    case AMDVI_MMIO_COMMAND_HEAD:
+        trace_amdvi_mmio_write("MMIO_COMMAND_HEAD", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_cmdhead_write(s);
+        break;
+    case AMDVI_MMIO_COMMAND_BASE:
+        trace_amdvi_mmio_write("MMIO_COMMAND_BASE", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        /* FIXME: make sure system software has finished writing, in a
+         * robust way, in case it writes in chunks of less than 8 bytes.
+         * For now this hack works for the Linux driver.
+         */
+        if (offset || (size == 8)) {
+            amdvi_handle_cmdbase_write(s);
+        }
+        break;
+    case AMDVI_MMIO_COMMAND_TAIL:
+        trace_amdvi_mmio_write("MMIO_COMMAND_TAIL", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_cmdtail_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_BASE:
+        trace_amdvi_mmio_write("MMIO_EVENT_BASE", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_evtbase_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_HEAD:
+        trace_amdvi_mmio_write("MMIO_EVENT_HEAD", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_evthead_write(s);
+        break;
+    case AMDVI_MMIO_EVENT_TAIL:
+        trace_amdvi_mmio_write("MMIO_EVENT_TAIL", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_evttail_write(s);
+        break;
+    case AMDVI_MMIO_EXCL_LIMIT:
+        trace_amdvi_mmio_write("MMIO_EXCL_LIMIT", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_excllim_write(s);
+        break;
+    /* PPR log base - unused for now */
+    case AMDVI_MMIO_PPR_BASE:
+        trace_amdvi_mmio_write("MMIO_PPR_BASE", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_pprbase_write(s);
+        break;
+    /* PPR log head - also unused for now */
+    case AMDVI_MMIO_PPR_HEAD:
+        trace_amdvi_mmio_write("MMIO_PPR_HEAD", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_pprhead_write(s);
+        break;
+    /* PPR log tail - unused for now */
+    case AMDVI_MMIO_PPR_TAIL:
+        trace_amdvi_mmio_write("MMIO_PPR_TAIL", addr, size, val, offset);
+        if (size == 2) {
+            amdvi_writew(s, addr, val);
+        } else if (size == 4) {
+            amdvi_writel(s, addr, val);
+        } else if (size == 8) {
+            amdvi_writeq(s, addr, val);
+        }
+        amdvi_handle_pprtail_write(s);
+        break;
+    /* ignore write to ext_features */
+    default:
+        trace_amdvi_mmio_write("UNHANDLED MMIO", addr, size, val, offset);
+    }
+}
+
+static inline uint64_t amdvi_get_perms(uint64_t entry)
+{
+    return (entry & (AMDVI_DEV_PERM_READ | AMDVI_DEV_PERM_WRITE)) >>
+           AMDVI_DEV_PERM_SHIFT;
+}
+
+/*
+ * bridge_host_amdvi: setup an IOMMU function on a bus
+ *
+ * called for all PCI devices
+ *
+ * @bus: PCI bus to host the IOMMU
+ * @opaque: opaque pointer to the AMDVIState struct
+ * @devfn: PCI function of the device to set up an IOMMU region for
+ */
+static AddressSpace *bridge_host_amdvi(PCIBus *bus, void *opaque, int devfn)
+{
+    AMDVIState *s = opaque;
+    AMDVIAddressSpace **iommu_as;
+    int bus_num = pci_bus_num(bus);
+
+    iommu_as = s->address_spaces[bus_num];
+
+    /* allocate memory during the first run */
+    if (!iommu_as) {
+        iommu_as = g_malloc0(sizeof(AMDVIAddressSpace *) * PCI_DEVFN_MAX);
+        s->address_spaces[bus_num] = iommu_as;
+    }
+
+    /* set up AMDVI region */
+    if (!iommu_as[devfn]) {
+        iommu_as[devfn] = g_malloc0(sizeof(AMDVIAddressSpace));
+        iommu_as[devfn]->bus_num = (uint8_t)bus_num;
+        iommu_as[devfn]->devfn = (uint8_t)devfn;
+        iommu_as[devfn]->iommu_state = s;
+
+        memory_region_init_iommu(&iommu_as[devfn]->iommu, OBJECT(s),
+                                 &s->iommu_ops, "amd-iommu", UINT64_MAX);
+        address_space_init(&iommu_as[devfn]->as, &iommu_as[devfn]->iommu,
+                           "amd-iommu");
+    }
+    return &iommu_as[devfn]->as;
+}
+
+/* a valid entry should have V = 1 and reserved bits honoured */
+static bool amdvi_validate_dte(AMDVIState *s, uint16_t devid,
+                               uint64_t *dte)
+{
+    if ((dte[0] & AMDVI_DTE_LOWER_QUAD_RESERVED) ||
+        (dte[1] & AMDVI_DTE_MIDDLE_QUAD_RESERVED) ||
+        (dte[2] & AMDVI_DTE_UPPER_QUAD_RESERVED) || dte[3]) {
+        amdvi_log_illegaldevtab_error(s, devid,
+                                      s->devtab +
+                                      devid * AMDVI_DEVTAB_ENTRY_SIZE, 0);
+        return false;
+    }
+
+    return dte[0] & AMDVI_DEV_VALID;
+}
+
+/* get a device table entry given the devid */
+static bool amdvi_get_dte(AMDVIState *s, int devid, uint64_t *entry)
+{
+    uint32_t offset = devid * AMDVI_DEVTAB_ENTRY_SIZE;
+    int i;
+
+    if (dma_memory_read(&address_space_memory, s->devtab + offset, entry,
+        AMDVI_DEVTAB_ENTRY_SIZE)) {
+        trace_amdvi_dte_get_fail(s->devtab, offset);
+        /* log error accessing dte */
+        amdvi_log_devtab_error(s, devid, s->devtab + offset, 0);
+        return false;
+    }
+
+    /* convert each quadword from little endian */
+    for (i = 0; i < 4; ++i) {
+        entry[i] = le64_to_cpu(entry[i]);
+    }
+    if (!amdvi_validate_dte(s, devid, entry)) {
+        trace_amdvi_invalid_dte(entry[0]);
+        return false;
+    }
+
+    return true;
+}
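For illustration, decoding the translation-related fields of a device table entry's first quadword with the masks from amd_iommu.h; the entry value here is fabricated:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    #define AMDVI_DEV_VALID             (1ULL << 0)
    #define AMDVI_DEV_TRANSLATION_VALID (1ULL << 1)
    #define AMDVI_DEV_MODE_RSHIFT       9
    #define AMDVI_DEV_MODE_MASK         0x7
    #define AMDVI_DEV_PT_ROOT_MASK      0xFFFFFFFFFF000ULL

    int main(void)
    {
        /* V=1, TV=1, mode=3 (3-level table), root at 0x123456000 */
        uint64_t dte0 = 0x123456000ULL | (3ULL << AMDVI_DEV_MODE_RSHIFT) |
                        AMDVI_DEV_TRANSLATION_VALID | AMDVI_DEV_VALID;

        if ((dte0 & AMDVI_DEV_VALID) && (dte0 & AMDVI_DEV_TRANSLATION_VALID)) {
            unsigned mode = (dte0 >> AMDVI_DEV_MODE_RSHIFT) &
                            AMDVI_DEV_MODE_MASK;
            uint64_t root = dte0 & AMDVI_DEV_PT_ROOT_MASK;
            printf("mode %u, page table root %#" PRIx64 "\n", mode, root);
        }
        return 0;
    }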
+
+/* get pte translation mode */
+static inline uint8_t get_pte_translation_mode(uint64_t pte)
+{
+    return (pte >> AMDVI_DEV_MODE_RSHIFT) & AMDVI_DEV_MODE_MASK;
+}
+
+static inline uint64_t pte_override_page_mask(uint64_t pte)
+{
+    uint8_t page_mask = 13;
+    uint64_t addr = (pte & AMDVI_DEV_PT_ROOT_MASK) >> 12;
+    /* find the first zero bit */
+    while (addr & 1) {
+        page_mask++;
+        addr = addr >> 1;
+    }
+
+    return ~((1ULL << page_mask) - 1);
+}
+
+static inline uint64_t pte_get_page_mask(uint64_t oldlevel)
+{
+    return ~((1ULL << ((oldlevel * 9) + 3)) - 1);
+}
+
+static inline uint64_t amdvi_get_pte_entry(AMDVIState *s, uint64_t pte_addr,
+                                           uint16_t devid)
+{
+    uint64_t pte;
+
+    if (dma_memory_read(&address_space_memory, pte_addr, &pte, sizeof(pte))) {
+        trace_amdvi_get_pte_hwerror(pte_addr);
+        amdvi_log_pagetab_error(s, devid, pte_addr, 0);
+        return 0;
+    }
+
+    return le64_to_cpu(pte);
+}
+
+static void amdvi_page_walk(AMDVIAddressSpace *as, uint64_t *dte,
+                            IOMMUTLBEntry *ret, unsigned perms,
+                            hwaddr addr)
+{
+    unsigned level, present, pte_perms, oldlevel;
+    uint16_t devid = PCI_DEVID(as->bus_num, as->devfn);
+    uint64_t pte = dte[0], pte_addr, page_mask;
+
+    /* make sure the DTE has TV = 1 */
+    if (pte & AMDVI_DEV_TRANSLATION_VALID) {
+        level = get_pte_translation_mode(pte);
+        if (level >= 7) {
+            trace_amdvi_mode_invalid(level, addr);
+            return;
+        }
+        if (level == 0) {
+            goto no_remap;
+        }
+
+        /* walk until we reach a leaf PTE or one that encodes a huge page */
+        while (level > 0) {
+            pte_perms = amdvi_get_perms(pte);
+            present = pte & 1;
+            if (!present || perms != (perms & pte_perms)) {
+                amdvi_page_fault(as->iommu_state, devid, addr, perms);
+                trace_amdvi_page_fault(addr);
+                return;
+            }
+
+            /* go to the next lower level */
+            pte_addr = pte & AMDVI_DEV_PT_ROOT_MASK;
+            /* add offset and load pte */
+            pte_addr += ((addr >> (3 + 9 * level)) & 0x1FF) << 3;
+            pte = amdvi_get_pte_entry(as->iommu_state, pte_addr, devid);
+            if (!pte) {
+                return;
+            }
+            oldlevel = level;
+            level = get_pte_translation_mode(pte);
+            if (level == 0x7) {
+                break;
+            }
+        }
+
+        if (level == 0x7) {
+            page_mask = pte_override_page_mask(pte);
+        } else {
+            page_mask = pte_get_page_mask(oldlevel);
+        }
+
+        /* get access permissions from pte */
+        ret->iova = addr & page_mask;
+        ret->translated_addr = (pte & AMDVI_DEV_PT_ROOT_MASK) & page_mask;
+        ret->addr_mask = ~page_mask;
+        ret->perm = amdvi_get_perms(pte);
+        return;
+    }
+no_remap:
+    ret->iova = addr & AMDVI_PAGE_MASK_4K;
+    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
+    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
+    ret->perm = amdvi_get_perms(pte);
+}
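To make the shift arithmetic in the walk concrete: each level consumes 9 bits of the IOVA and the shift for level n is 3 + 9 * n (level 1 covers address bits 20:12), so the PTE index at a given level is (addr >> (3 + 9 * level)) & 0x1FF. A standalone sketch with an arbitrary address:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t addr = 0x12345678000ULL;
        for (unsigned level = 3; level >= 1; level--) {
            unsigned idx = (addr >> (3 + 9 * level)) & 0x1FF;
            printf("level %u index %#x\n", level, idx);
        }
        return 0;
    }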
+
+static void amdvi_do_translate(AMDVIAddressSpace *as, hwaddr addr,
+                               bool is_write, IOMMUTLBEntry *ret)
+{
+    AMDVIState *s = as->iommu_state;
+    uint16_t devid = PCI_DEVID(as->bus_num, as->devfn);
+    AMDVIIOTLBEntry *iotlb_entry = amdvi_iotlb_lookup(s, addr, devid);
+    uint64_t entry[4];
+
+    if (iotlb_entry) {
+        trace_amdvi_iotlb_hit(PCI_BUS_NUM(devid), PCI_SLOT(devid),
+                              PCI_FUNC(devid), addr,
+                              iotlb_entry->translated_addr);
+        ret->iova = addr & ~iotlb_entry->page_mask;
+        ret->translated_addr = iotlb_entry->translated_addr;
+        ret->addr_mask = iotlb_entry->page_mask;
+        ret->perm = iotlb_entry->perms;
+        return;
+    }
+
+    /* devices with V = 0 are not translated */
+    if (!amdvi_get_dte(s, devid, entry)) {
+        goto out;
+    }
+
+    amdvi_page_walk(as, entry, ret,
+                    is_write ? AMDVI_PERM_WRITE : AMDVI_PERM_READ, addr);
+
+    amdvi_update_iotlb(s, devid, addr, *ret,
+                       entry[1] & AMDVI_DEV_DOMID_ID_MASK);
+    return;
+
+out:
+    ret->iova = addr & AMDVI_PAGE_MASK_4K;
+    ret->translated_addr = addr & AMDVI_PAGE_MASK_4K;
+    ret->addr_mask = ~AMDVI_PAGE_MASK_4K;
+    ret->perm = IOMMU_RW;
+}
+
+static inline bool amdvi_is_interrupt_addr(hwaddr addr)
+{
+    return addr >= AMDVI_INT_ADDR_FIRST && addr <= AMDVI_INT_ADDR_LAST;
+}
+
+static IOMMUTLBEntry amdvi_translate(MemoryRegion *iommu, hwaddr addr,
+                                     bool is_write)
+{
+    AMDVIAddressSpace *as = container_of(iommu, AMDVIAddressSpace, iommu);
+    AMDVIState *s = as->iommu_state;
+    IOMMUTLBEntry ret = {
+        .target_as = &address_space_memory,
+        .iova = addr,
+        .translated_addr = 0,
+        .addr_mask = ~(hwaddr)0,
+        .perm = IOMMU_NONE
+    };
+
+    if (!s->enabled) {
+        /* AMDVI disabled - corresponds to iommu=off, not a failure to
+         * provide any parameter
+         */
+        ret.iova = addr & AMDVI_PAGE_MASK_4K;
+        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
+        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
+        ret.perm = IOMMU_RW;
+        return ret;
+    } else if (amdvi_is_interrupt_addr(addr)) {
+        ret.iova = addr & AMDVI_PAGE_MASK_4K;
+        ret.translated_addr = addr & AMDVI_PAGE_MASK_4K;
+        ret.addr_mask = ~AMDVI_PAGE_MASK_4K;
+        ret.perm = IOMMU_WO;
+        return ret;
+    }
+
+    amdvi_do_translate(as, addr, is_write, &ret);
+    trace_amdvi_translation_result(as->bus_num, PCI_SLOT(as->devfn),
+                                   PCI_FUNC(as->devfn), addr,
+                                   ret.translated_addr);
+    return ret;
+}
+
+static const MemoryRegionOps mmio_mem_ops = {
+    .read = amdvi_mmio_read,
+    .write = amdvi_mmio_write,
+    .endianness = DEVICE_LITTLE_ENDIAN,
+    .impl = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+        .unaligned = false,
+    },
+    .valid = {
+        .min_access_size = 1,
+        .max_access_size = 8,
+    }
+};
+
+static void amdvi_init(AMDVIState *s)
+{
+    amdvi_iotlb_reset(s);
+
+    s->iommu_ops.translate = amdvi_translate;
+    s->devtab_len = 0;
+    s->cmdbuf_len = 0;
+    s->cmdbuf_head = 0;
+    s->cmdbuf_tail = 0;
+    s->evtlog_head = 0;
+    s->evtlog_tail = 0;
+    s->excl_enabled = false;
+    s->excl_allow = false;
+    s->mmio_enabled = false;
+    s->enabled = false;
+    s->ats_enabled = false;
+    s->cmdbuf_enabled = false;
+
+    /* reset MMIO */
+    memset(s->mmior, 0, AMDVI_MMIO_SIZE);
+    amdvi_set_quad(s, AMDVI_MMIO_EXT_FEATURES, AMDVI_EXT_FEATURES,
+                   0xffffffffffffffef, 0);
+    amdvi_set_quad(s, AMDVI_MMIO_STATUS, 0, 0x98, 0x67);
+
+    /* reset device ident */
+    pci_config_set_vendor_id(s->dev.config, PCI_VENDOR_ID_AMD);
+    pci_config_set_prog_interface(s->dev.config, 0);
+    pci_config_set_device_id(s->dev.config, s->devid);
+    pci_config_set_class(s->dev.config, 0x0806);
+
+    /* reset AMDVI specific capabilities, all r/o */
+    pci_set_long(s->dev.config + s->capab_offset, AMDVI_CAPAB_FEATURES);
+    pci_set_long(s->dev.config + s->capab_offset + AMDVI_CAPAB_BAR_LOW,
+                 s->mmio.addr & ~(0xffff0000));
+    pci_set_long(s->dev.config + s->capab_offset + AMDVI_CAPAB_BAR_HIGH,
+                 (s->mmio.addr & ~(0xffff)) >> 16);
+    pci_set_long(s->dev.config + s->capab_offset + AMDVI_CAPAB_RANGE,
+                 0xff000000);
+    pci_set_long(s->dev.config + s->capab_offset + AMDVI_CAPAB_MISC,
+                 AMDVI_MAX_PH_ADDR | AMDVI_MAX_GVA_ADDR | AMDVI_MAX_VA_ADDR);
+}
+
+static void amdvi_reset(DeviceState *dev)
+{
+    AMDVIState *s = AMD_IOMMU_DEVICE(dev);
+
+    msi_reset(&s->dev);
+    amdvi_init(s);
+}
+
+static void amdvi_realize(PCIDevice *dev, Error **error)
+{
+    AMDVIState *s = container_of(dev, AMDVIState, dev);
+    PCIBus *bus = PC_MACHINE(qdev_get_machine())->bus;
+
+    s->iotlb = g_hash_table_new_full(amdvi_uint64_hash,
+                                     amdvi_uint64_equal, g_free, g_free);
+    s->capab_offset = pci_add_capability(dev, AMDVI_CAPAB_ID_SEC, 0,
+                                         AMDVI_CAPAB_SIZE);
+
+    /* add msi and hypertransport capabilities */
+    msi_init(dev, 0, 1, true, false);
+    pci_add_capability(&s->dev, PCI_CAP_ID_HT, 0, AMDVI_CAPAB_REG_SIZE);
+
+    /* set up MMIO */
+    memory_region_init_io(&s->mmio, OBJECT(s), &mmio_mem_ops, s, "mmio",
+                          AMDVI_MMIO_SIZE);
+
+    if (s->mmio.addr == AMDVI_BASE_ADDR) {
+        return;
+    }
+
+    s->mmio.addr = AMDVI_BASE_ADDR;
+    memory_region_add_subregion(get_system_memory(), AMDVI_BASE_ADDR,
+                                &s->mmio);
+
+    s->devid = object_property_get_int(OBJECT(s), "addr", error);
+    pci_setup_iommu(bus, bridge_host_amdvi, s);
+    amdvi_init(s);
+}
+
+static const VMStateDescription vmstate_amdvi = {
+    .name = "amd-iommu",
+    .fields = (VMStateField[]) {
+        VMSTATE_PCI_DEVICE(dev, AMDVIState),
+        VMSTATE_END_OF_LIST()
+    }
+};
+
+static Property amdvi_properties[] = {
+    DEFINE_PROP_UINT32("version", AMDVIState, version, 2),
+    DEFINE_PROP_END_OF_LIST(),
+};
+
+static void amdvi_uninit(PCIDevice *dev)
+{
+    AMDVIState *s = container_of(dev, AMDVIState, dev);
+    amdvi_iotlb_reset(s);
+}
+
+static void amdvi_class_init(ObjectClass *klass, void *data)
+{
+    DeviceClass *dc = DEVICE_CLASS(klass);
+    PCIDeviceClass *k = PCI_DEVICE_CLASS(klass);
+
+    k->realize = amdvi_realize;
+    k->exit = amdvi_uninit;
+
+    dc->reset = amdvi_reset;
+    dc->vmsd = &vmstate_amdvi;
+    dc->props = amdvi_properties;
+}
+
+static const TypeInfo amdvi = {
+    .name = TYPE_AMD_IOMMU_DEVICE,
+    .parent = TYPE_PCI_DEVICE,
+    .instance_size = sizeof(AMDVIState),
+    .class_init = amdvi_class_init
+};
+
+static void amdvi_register_types(void)
+{
+    type_register_static(&amdvi);
+}
+
+type_init(amdvi_register_types);
diff --git a/hw/i386/amd_iommu.h b/hw/i386/amd_iommu.h
new file mode 100644
index 0000000..9c6f6aa
--- /dev/null
+++ b/hw/i386/amd_iommu.h
@@ -0,0 +1,287 @@
+/*
+ * QEMU emulation of an AMD IOMMU (AMD-Vi)
+ *
+ * Copyright (C) 2011 Eduard - Gabriel Munteanu
+ * Copyright (C) 2015 David Kiarie
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef AMD_IOMMU_H_
+#define AMD_IOMMU_H_
+
+#include "hw/hw.h"
+#include "hw/pci/pci.h"
+#include "hw/pci/msi.h"
+#include "hw/sysbus.h"
+#include "sysemu/dma.h"
+
+/* Capability registers */
+#define AMDVI_CAPAB_BAR_LOW           0x04
+#define AMDVI_CAPAB_BAR_HIGH          0x08
+#define AMDVI_CAPAB_RANGE             0x0C
+#define AMDVI_CAPAB_MISC              0x10
+
+#define AMDVI_CAPAB_SIZE              0x18
+#define AMDVI_CAPAB_REG_SIZE          0x04
+
+/* Capability header data */
+#define AMDVI_CAPAB_ID_SEC            0xf
+#define AMDVI_CAPAB_FLAT_EXT          (1 << 28)
+#define AMDVI_CAPAB_EFR_SUP           (1 << 27)
+#define AMDVI_CAPAB_FLAG_NPCACHE      (1 << 26)
+#define AMDVI_CAPAB_FLAG_HTTUNNEL     (1 << 25)
+#define AMDVI_CAPAB_FLAG_IOTLBSUP     (1 << 24)
+#define AMDVI_CAPAB_INIT_TYPE         (3 << 16)
+
+/* MMIO registers */
+#define AMDVI_MMIO_DEVICE_TABLE       0x0000
+#define AMDVI_MMIO_COMMAND_BASE       0x0008
+#define AMDVI_MMIO_EVENT_BASE         0x0010
+#define AMDVI_MMIO_CONTROL            0x0018
+#define AMDVI_MMIO_EXCL_BASE          0x0020
+#define AMDVI_MMIO_EXCL_LIMIT         0x0028
+#define AMDVI_MMIO_EXT_FEATURES       0x0030
+#define AMDVI_MMIO_COMMAND_HEAD       0x2000
+#define AMDVI_MMIO_COMMAND_TAIL       0x2008
+#define AMDVI_MMIO_EVENT_HEAD         0x2010
+#define AMDVI_MMIO_EVENT_TAIL         0x2018
+#define AMDVI_MMIO_STATUS             0x2020
+#define AMDVI_MMIO_PPR_BASE           0x0038
+#define AMDVI_MMIO_PPR_HEAD           0x2030
+#define AMDVI_MMIO_PPR_TAIL           0x2038
+
+#define AMDVI_MMIO_SIZE               0x4000
+
+#define AMDVI_MMIO_DEVTAB_SIZE_MASK   ((1ULL << 12) - 1)
+#define AMDVI_MMIO_DEVTAB_BASE_MASK   (((1ULL << 52) - 1) & ~ \
+                                       AMDVI_MMIO_DEVTAB_SIZE_MASK)
+#define AMDVI_MMIO_DEVTAB_ENTRY_SIZE  32
+#define AMDVI_MMIO_DEVTAB_SIZE_UNIT   4096
+
+/* some of these are identical, but named separately for readability */
+#define AMDVI_MMIO_CMDBUF_SIZE_BYTE   (AMDVI_MMIO_COMMAND_BASE + 7)
+#define AMDVI_MMIO_CMDBUF_SIZE_MASK   0x0F
+#define AMDVI_MMIO_CMDBUF_BASE_MASK   AMDVI_MMIO_DEVTAB_BASE_MASK
+#define AMDVI_MMIO_CMDBUF_HEAD_MASK   (((1ULL << 19) - 1) & ~0x0F)
+#define AMDVI_MMIO_CMDBUF_TAIL_MASK   AMDVI_MMIO_EVTLOG_HEAD_MASK
+
+#define AMDVI_MMIO_EVTLOG_SIZE_BYTE   (AMDVI_MMIO_EVENT_BASE + 7)
+#define AMDVI_MMIO_EVTLOG_SIZE_MASK   AMDVI_MMIO_CMDBUF_SIZE_MASK
+#define AMDVI_MMIO_EVTLOG_BASE_MASK   AMDVI_MMIO_CMDBUF_BASE_MASK
+#define AMDVI_MMIO_EVTLOG_HEAD_MASK   (((1ULL << 19) - 1) & ~0x0F)
+#define AMDVI_MMIO_EVTLOG_TAIL_MASK   AMDVI_MMIO_EVTLOG_HEAD_MASK
+
+#define AMDVI_MMIO_PPRLOG_SIZE_BYTE   (AMDVI_MMIO_PPR_BASE + 7)
+#define AMDVI_MMIO_PPRLOG_HEAD_MASK   AMDVI_MMIO_EVTLOG_HEAD_MASK
+#define AMDVI_MMIO_PPRLOG_TAIL_MASK   AMDVI_MMIO_EVTLOG_HEAD_MASK
+#define AMDVI_MMIO_PPRLOG_BASE_MASK   AMDVI_MMIO_EVTLOG_BASE_MASK
+#define AMDVI_MMIO_PPRLOG_SIZE_MASK   AMDVI_MMIO_EVTLOG_SIZE_MASK
+
+#define AMDVI_MMIO_EXCL_ENABLED_MASK  (1ULL << 0)
+#define AMDVI_MMIO_EXCL_ALLOW_MASK    (1ULL << 1)
+#define AMDVI_MMIO_EXCL_LIMIT_MASK    AMDVI_MMIO_DEVTAB_BASE_MASK
+#define AMDVI_MMIO_EXCL_LIMIT_LOW     0xFFF
+
+/* MMIO control register flags */
+#define AMDVI_MMIO_CONTROL_AMDVIEN    (1ULL << 0)
+#define AMDVI_MMIO_CONTROL_HTTUNEN    (1ULL << 1)
+#define AMDVI_MMIO_CONTROL_EVENTLOGEN (1ULL << 2)
+#define AMDVI_MMIO_CONTROL_EVENTINTEN (1ULL << 3)
+#define AMDVI_MMIO_CONTROL_COMWAITINTEN (1ULL << 4)
+#define AMDVI_MMIO_CONTROL_CMDBUFLEN  (1ULL << 12)
+
+/* MMIO status register bits */
+#define AMDVI_MMIO_STATUS_CMDBUF_RUN  (1 << 4)
+#define AMDVI_MMIO_STATUS_EVT_RUN     (1 << 3)
+#define AMDVI_MMIO_STATUS_COMP_INT    (1 << 2)
+#define AMDVI_MMIO_STATUS_EVT_OVF     (1 << 0)
+
+#define AMDVI_CMDBUF_ID_BYTE          0x07
+#define AMDVI_CMDBUF_ID_RSHIFT        4
+
+#define AMDVI_CMD_COMPLETION_WAIT     0x01
+#define AMDVI_CMD_INVAL_DEVTAB_ENTRY  0x02
+#define AMDVI_CMD_INVAL_AMDVI_PAGES   0x03
+#define AMDVI_CMD_INVAL_IOTLB_PAGES   0x04
+#define AMDVI_CMD_INVAL_INTR_TABLE    0x05
+#define AMDVI_CMD_PREFETCH_AMDVI_PAGES 0x06
+#define AMDVI_CMD_COMPLETE_PPR_REQUEST 0x07
+#define AMDVI_CMD_INVAL_AMDVI_ALL     0x08
+
+#define AMDVI_DEVTAB_ENTRY_SIZE       32
+
+/* Device table entry bits 0:63 */
+#define AMDVI_DEV_VALID               (1ULL << 0)
+#define AMDVI_DEV_TRANSLATION_VALID   (1ULL << 1)
+#define AMDVI_DEV_MODE_MASK           0x7
+#define AMDVI_DEV_MODE_RSHIFT         9
+#define AMDVI_DEV_PT_ROOT_MASK        0xFFFFFFFFFF000
+#define AMDVI_DEV_PT_ROOT_RSHIFT      12
+#define AMDVI_DEV_PERM_SHIFT          61
+#define AMDVI_DEV_PERM_READ           (1ULL << 61)
+#define AMDVI_DEV_PERM_WRITE          (1ULL << 62)
+
+/* Device table entry bits 64:127 */
+#define AMDVI_DEV_DOMID_ID_MASK       ((1ULL << 16) - 1)
+
+/* Event codes and flags, as stored in the info field */
+#define AMDVI_EVENT_ILLEGAL_DEVTAB_ENTRY  (0x1U << 12)
+#define AMDVI_EVENT_IOPF                  (0x2U << 12)
+#define AMDVI_EVENT_IOPF_I                (1U << 3)
+#define AMDVI_EVENT_DEV_TAB_HW_ERROR      (0x3U << 12)
+#define AMDVI_EVENT_PAGE_TAB_HW_ERROR     (0x4U << 12)
+#define AMDVI_EVENT_ILLEGAL_COMMAND_ERROR (0x5U << 12)
+#define AMDVI_EVENT_COMMAND_HW_ERROR      (0x6U << 12)
+
+#define AMDVI_EVENT_LEN               16
+#define AMDVI_PERM_READ               (1 << 0)
+#define AMDVI_PERM_WRITE              (1 << 1)
+
+#define AMDVI_FEATURE_PREFETCH        (1ULL << 0) /* page prefetch       */
+#define AMDVI_FEATURE_PPR             (1ULL << 1) /* PPR Support         */
+#define AMDVI_FEATURE_GT              (1ULL << 4) /* Guest Translation   */
+#define AMDVI_FEATURE_IA              (1ULL << 6) /* inval all support   */
+#define AMDVI_FEATURE_GA              (1ULL << 7) /* guest VAPIC support */
+#define AMDVI_FEATURE_HE              (1ULL << 8) /* hardware error regs */
+#define AMDVI_FEATURE_PC              (1ULL << 9) /* Perf counters       */
+
+/* reserved DTE bits */
+#define AMDVI_DTE_LOWER_QUAD_RESERVED  0x80300000000000fc
+#define AMDVI_DTE_MIDDLE_QUAD_RESERVED 0x0000000000000100
+#define AMDVI_DTE_UPPER_QUAD_RESERVED  0x08f0000000000000
+
+/* AMDVI paging mode */
+#define AMDVI_GATS_MODE               (6ULL << 12)
+#define AMDVI_HATS_MODE               (6ULL << 10)
+
+/* PCI SIG constants */
+#define PCI_BUS_MAX                   256
+#define PCI_SLOT_MAX                  32
+#define PCI_FUNC_MAX                  8
+#define PCI_DEVFN_MAX                 256
+
+/* IOTLB */
+#define AMDVI_IOTLB_MAX_SIZE          1024
+#define AMDVI_DEVID_SHIFT             36
+
+/* extended feature support */
+#define AMDVI_EXT_FEATURES (AMDVI_FEATURE_PREFETCH | AMDVI_FEATURE_PPR | \
+        AMDVI_FEATURE_IA | AMDVI_FEATURE_GT | AMDVI_FEATURE_GA | \
+        AMDVI_FEATURE_HE | AMDVI_GATS_MODE | AMDVI_HATS_MODE)
+
+/* capabilities header */
+#define AMDVI_CAPAB_FEATURES (AMDVI_CAPAB_FLAT_EXT | \
+        AMDVI_CAPAB_FLAG_NPCACHE | AMDVI_CAPAB_FLAG_IOTLBSUP \
+        | AMDVI_CAPAB_ID_SEC | AMDVI_CAPAB_INIT_TYPE | \
+        AMDVI_CAPAB_FLAG_HTTUNNEL | AMDVI_CAPAB_EFR_SUP)
+
+/* AMDVI default address */
+#define AMDVI_BASE_ADDR               0xfed80000
+
+/* page management constants */
+#define AMDVI_PAGE_SHIFT              12
+#define AMDVI_PAGE_SIZE               (1ULL << AMDVI_PAGE_SHIFT)
+
+#define AMDVI_PAGE_SHIFT_4K           12
+#define AMDVI_PAGE_MASK_4K            (~((1ULL << AMDVI_PAGE_SHIFT_4K) - 1))
+
+#define AMDVI_MAX_VA_ADDR             (48UL << 5)
+#define AMDVI_MAX_PH_ADDR             (40UL << 8)
+#define AMDVI_MAX_GVA_ADDR            (48UL << 15)
+
+/* invalidation command device id */
+#define AMDVI_INVAL_DEV_ID_SHIFT      32
+#define AMDVI_INVAL_DEV_ID_MASK       (~((1UL << AMDVI_INVAL_DEV_ID_SHIFT) - 1))
+
+/* invalidation address */
+#define AMDVI_INVAL_ADDR_MASK_SHIFT   12
+#define AMDVI_INVAL_ADDR_MASK         (~((1UL << AMDVI_INVAL_ADDR_MASK_SHIFT) - 1))
+
+/* invalidation S bit mask */
+#define AMDVI_INVAL_ALL(val)          ((val) & (0x1))
+
+/* Completion Wait data size */
+#define AMDVI_COMPLETION_DATA_SIZE    8
+
+#define AMDVI_COMMAND_SIZE            16
+#define AMDVI_INT_ADDR_FIRST          0xfee00000
+#define AMDVI_INT_ADDR_LAST           0xfeefffff
+
+#define TYPE_AMD_IOMMU_DEVICE "amd-iommu"
+#define AMD_IOMMU_DEVICE(obj)\
+    OBJECT_CHECK(AMDVIState, (obj), TYPE_AMD_IOMMU_DEVICE)
+
+typedef struct AMDVIAddressSpace AMDVIAddressSpace;
+
+typedef struct AMDVIState {
+    PCIDevice dev;             /* The PCI device itself         */
+
+    uint32_t version;
+    uint32_t devid;            /* auto-assigned devid           */
+
+    uint32_t capab_offset;     /* capability offset pointer     */
+    uint64_t mmio_addr;
+    uint8_t *capab;            /* capabilities registers        */
+
+    bool enabled;              /* IOMMU enabled                 */
+    bool ats_enabled;          /* address translation enabled   */
+    bool cmdbuf_enabled;       /* command buffer enabled        */
+    bool evtlog_enabled;       /* event log enabled             */
+    bool excl_enabled;         /* exclusion range enabled       */
+
+    hwaddr devtab;             /* device table base address     */
+    size_t devtab_len;         /* device table length           */
+
+    hwaddr cmdbuf;             /* command buffer base address   */
+    uint64_t cmdbuf_len;       /* command buffer length         */
+    uint32_t cmdbuf_head;      /* current IOMMU read position   */
+    uint32_t cmdbuf_tail;      /* next software write position  */
+    bool completion_wait_intr;
+
+    hwaddr evtlog;             /* event log base address        */
+    bool evtlog_intr;
+    uint32_t evtlog_len;       /* event log length              */
+    uint32_t evtlog_head;      /* current IOMMU write position  */
+    uint32_t evtlog_tail;      /* current software read position */
+
+    /* unused for now */
+    hwaddr excl_base;          /* base DVA - IOMMU exclusion range */
+    hwaddr excl_limit;         /* limit of IOMMU exclusion range   */
+    bool excl_allow;           /* translate accesses to the exclusion range */
+
+    hwaddr ppr_log;            /* PPR log base address          */
+    uint32_t pprlog_len;       /* PPR log length                */
+    uint32_t pprlog_head;      /* PPR log head                  */
+    uint32_t pprlog_tail;      /* PPR log tail                  */
+
+    MemoryRegion mmio;                 /* MMIO region                */
+    uint8_t mmior[AMDVI_MMIO_SIZE];    /* read/write MMIO            */
+    uint8_t w1cmask[AMDVI_MMIO_SIZE];  /* MMIO write-1-to-clear mask */
+    uint8_t romask[AMDVI_MMIO_SIZE];   /* MMIO read-only mask        */
+    bool mmio_enabled;
+
+    /* IOMMU function */
+    MemoryRegionIOMMUOps iommu_ops;
+
+    /* for each served device */
+    AMDVIAddressSpace **address_spaces[PCI_BUS_MAX];
+
+    /* IOTLB */
+    GHashTable *iotlb;
+} AMDVIState;
+
+#endif