From patchwork Thu Feb 14 08:22:43 2013
X-Patchwork-Submitter: Bharat Bhushan
X-Patchwork-Id: 220392
From: Bhushan Bharat-R65777
To: Alex Williamson
Cc: Graf Alexander-B36701, Yoder Stuart-B08248, qemu-ppc@nongnu.org, qemu-devel@nongnu.org, Wood Scott-B07421
Date: Thu, 14 Feb 2013 08:22:43 +0000
Message-ID: <6A3DF150A5B70D4F9B66A25E3F7C888D065ABF7B@039-SN2MPN1-022.039d.mgd.msft.net>
Subject: [Qemu-devel] VFIO: Not require to make VFIO_IOMMU_MAP_DMA for MMAPED PCI bars
Hi Alex Williamson,

I am able (with some hacks :)) to directly assign an e1000 PCI device to a KVM guest using VFIO on a Freescale device. One problem I am facing concerns the DMA mapping in the IOMMU for the PCI device BARs. On Freescale devices, the mmapped PCI device BARs are not required to be mapped in the IOMMU.

Typically the flow of inbound/outbound transactions is:

Incoming flow:

 -----|                   |----------|          |---------------|                    |-------------|
 CORE |<----<------<-----<|  IOMMU   |<---<---<-| PCI-Controller|<------<-----<----<-| PCI device  |
 -----|                   |----------|          |---------------|                    |-------------|

Outgoing flow: the IOMMU is bypassed for outbound transactions

 -----|                   |----------|          |---------------|                    |-------------|
 CORE |>---->------>----| |  IOMMU   |   ->-->  | PCI-Controller|>------>----->---->-| PCI device  |
 -----|                 | |----------|   ^      |---------------|                    |-------------|
                        |                |
                        |----------------|

Also, because of some hardware limitations of our IOMMU, it is difficult to map these BAR regions together with the RAM (DDR) regions. So on Freescale devices we want the VFIO_IOMMU_MAP_DMA ioctl to be called for RAM (DDR) regions only and _not_ for these mmapped RAM regions of the PCI device BARs. I understand that we still need to register these mmapped PCI BARs as RAM-type regions via the QEMU memory_region_*() API, so instead I tried to skip these regions in the VFIO memory listener.

Below is the change which works for me. I am not sure whether this is the correct approach; please suggest.

-----------------

Thanks
-Bharat

diff --git a/hw/vfio_pci.c b/hw/vfio_pci.c
index c51ae67..63728d8 100644
--- a/hw/vfio_pci.c
+++ b/hw/vfio_pci.c
@@ -1115,9 +1115,35 @@ static int vfio_dma_map(VFIOContainer *container, hwaddr iova,
     return -errno;
 }
 
-static bool vfio_listener_skipped_section(MemoryRegionSection *section)
+static int memory_region_is_mmap_bars(VFIOContainer *container,
+                                      MemoryRegionSection *section)
 {
-    return !memory_region_is_ram(section->mr);
+    VFIOGroup *group;
+    VFIODevice *vdev;
+    int i;
+
+    QLIST_FOREACH(group, &container->group_list, next) {
+        QLIST_FOREACH(vdev, &group->device_list, next) {
+            if (vdev->msix->mmap_mem.ram_addr == section->mr->ram_addr)
+                return 1;
+            for (i = 0; i < PCI_ROM_SLOT; i++) {
+                VFIOBAR *bar = &vdev->bars[i];
+                if (bar->mmap_mem.ram_addr == section->mr->ram_addr)
+                    return 1;
+            }
+        }
+    }
+
+    return 0;
+}
+
+static bool vfio_listener_skipped_section(VFIOContainer *container,
+                                          MemoryRegionSection *section)
+{
+    if (!memory_region_is_ram(section->mr))
+        return 1;
+
+    return memory_region_is_mmap_bars(container, section);
 }
 
 static void vfio_listener_region_add(MemoryListener *listener,
@@ -1129,7 +1155,7 @@ static void vfio_listener_region_add(MemoryListener *listener,
     void *vaddr;
     int ret;
 
-    if (vfio_listener_skipped_section(section)) {
+    if (vfio_listener_skipped_section(container, section)) {
         DPRINTF("vfio: SKIPPING region_add %"HWADDR_PRIx" - %"PRIx64"\n",
                 section->offset_within_address_space,
                 section->offset_within_address_space + section->size - 1);
@@ -1173,7 +1199,7 @@ static void vfio_listener_region_del(MemoryListener *listener,
     hwaddr iova, end;
     int ret;
 
-    if (vfio_listener_skipped_section(section)) {
+    if (vfio_listener_skipped_section(container, section)) {
         DPRINTF("vfio: SKIPPING region_del %"HWADDR_PRIx" - %"PRIx64"\n",
                 section->offset_within_address_space,
                 section->offset_within_address_space + section->size - 1);
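For context, here is a minimal sketch of the userspace side of the ioctl being discussed: mapping one RAM (DDR) buffer for device DMA through a VFIO container with VFIO_IOMMU_MAP_DMA. On the Freescale setup described above this call would be made only for RAM regions, never for the mmapped BAR regions. It assumes a "container_fd" that has already been opened and configured; the function name and its parameters are illustrative, not part of the patch above.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

/*
 * Illustrative sketch only: map one RAM buffer for device DMA through a
 * VFIO container.  "container_fd" is assumed to be an already opened and
 * configured /dev/vfio/vfio container fd; this helper is hypothetical and
 * not part of the patch above.
 */
static int map_ram_for_dma(int container_fd, uint64_t iova, size_t size)
{
    struct vfio_iommu_type1_dma_map dma_map;
    void *vaddr;

    /* Back the region with anonymous RAM (stands in for guest DDR here). */
    vaddr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (vaddr == MAP_FAILED) {
        return -1;
    }

    memset(&dma_map, 0, sizeof(dma_map));
    dma_map.argsz = sizeof(dma_map);
    dma_map.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE;
    dma_map.vaddr = (uint64_t)(uintptr_t)vaddr;
    dma_map.iova  = iova;
    dma_map.size  = size;

    /*
     * Per the mail above, on the Freescale setup this ioctl would be issued
     * for RAM (DDR) regions only, never for the mmapped BAR regions.
     */
    return ioctl(container_fd, VFIO_IOMMU_MAP_DMA, &dma_map);
}

Unmapping works the same way with VFIO_IOMMU_UNMAP_DMA and struct vfio_iommu_type1_dma_unmap, passing the same iova and size.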