From: Eduard - Gabriel Munteanu
To: mst@redhat.com
Cc: aliguori@us.ibm.com, david@gibson.dropbear.id.au, kvm@vger.kernel.org,
 rth@twiddle.net, aik@ozlabs.ru, joro@8bytes.org, seabios@seabios.org,
 qemu-devel@nongnu.org, agraf@suse.de, blauwirbel@gmail.com,
 yamahata@valinux.co.jp, kevin@koconnor.net, avi@redhat.com,
 dwg@au1.ibm.com, paul@codesourcery.com
Subject: [Qemu-devel] [RFC PATCH 01/13] Generic DMA memory access interface
Date: Wed, 1 Jun 2011 04:38:23 +0300
Message-Id: <1306892315-7306-2-git-send-email-eduard.munteanu@linux360.ro>
In-Reply-To: <1306892315-7306-1-git-send-email-eduard.munteanu@linux360.ro>
References: <1306892315-7306-1-git-send-email-eduard.munteanu@linux360.ro>

This introduces replacements for memory access functions like
cpu_physical_memory_read(). The new interface can handle address
translation and access checking through an IOMMU.

Signed-off-by: Eduard - Gabriel Munteanu
---
 Makefile.target |    2 +-
 hw/dma_rw.c     |  155 +++++++++++++++++++++++++++++++++++++++
 hw/dma_rw.h     |  217 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 373 insertions(+), 1 deletions(-)
 create mode 100644 hw/dma_rw.c
 create mode 100644 hw/dma_rw.h

diff --git a/Makefile.target b/Makefile.target
index 21f864a..ee0c80d 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -224,7 +224,7 @@ obj-i386-y += cirrus_vga.o apic.o ioapic.o piix_pci.o
 obj-i386-y += vmport.o
 obj-i386-y += device-hotplug.o pci-hotplug.o smbios.o wdt_ib700.o
 obj-i386-y += debugcon.o multiboot.o
-obj-i386-y += pc_piix.o kvmclock.o
+obj-i386-y += pc_piix.o kvmclock.o dma_rw.o
 obj-i386-$(CONFIG_SPICE) += qxl.o qxl-logger.o qxl-render.o
 
 # shared objects
diff --git a/hw/dma_rw.c b/hw/dma_rw.c
new file mode 100644
index 0000000..824db83
--- /dev/null
+++ b/hw/dma_rw.c
@@ -0,0 +1,155 @@
+/*
+ * Generic DMA memory access interface.
+ *
+ * Copyright (c) 2011 Eduard - Gabriel Munteanu
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "dma_rw.h"
+#include "range.h"
+
+static void dma_register_memory_map(DMADevice *dev,
+                                    void *buffer,
+                                    dma_addr_t addr,
+                                    dma_addr_t len,
+                                    DMAInvalidateMapFunc *invalidate,
+                                    void *invalidate_opaque)
+{
+    DMAMemoryMap *map;
+
+    map = qemu_malloc(sizeof(DMAMemoryMap));
+    map->buffer = buffer;
+    map->addr = addr;
+    map->len = len;
+    map->invalidate = invalidate;
+    map->invalidate_opaque = invalidate_opaque;
+
+    QLIST_INSERT_HEAD(&dev->mmu->memory_maps, map, list);
+}
+
+static void dma_unregister_memory_map(DMADevice *dev,
+                                      void *buffer,
+                                      dma_addr_t len)
+{
+    DMAMemoryMap *map, *next;
+
+    QLIST_FOREACH_SAFE(map, &dev->mmu->memory_maps, list, next) {
+        if (map->buffer == buffer && map->len == len) {
+            QLIST_REMOVE(map, list);
+            qemu_free(map);
+        }
+    }
+}
+
+void dma_invalidate_memory_range(DMADevice *dev,
+                                 dma_addr_t addr,
+                                 dma_addr_t len)
+{
+    DMAMemoryMap *map, *next;
+
+    QLIST_FOREACH_SAFE(map, &dev->mmu->memory_maps, list, next) {
+        if (ranges_overlap(addr, len, map->addr, map->len)) {
+            map->invalidate(map->invalidate_opaque);
+            QLIST_REMOVE(map, list);
+            qemu_free(map);
+        }
+    }
+}
+
+void *dma_memory_map(DMADevice *dev,
+                     DMAInvalidateMapFunc *cb,
+                     void *opaque,
+                     dma_addr_t addr,
+                     dma_addr_t *len,
+                     int is_write)
+{
+    int err;
+    target_phys_addr_t paddr, plen;
+    void *buf;
+
+    if (!dev || !dev->mmu) {
+        return cpu_physical_memory_map(addr, len, is_write);
+    }
+
+    plen = *len;
+    err = dev->mmu->translate(dev, addr, &paddr, &plen, is_write);
+    if (err) {
+        return NULL;
+    }
+
+    /*
+     * If this is true, the virtual region is contiguous,
+     * but the translated physical region isn't. We just
+     * clamp *len, much like cpu_physical_memory_map() does.
+     */
+    if (plen < *len) {
+        *len = plen;
+    }
+
+    buf = cpu_physical_memory_map(paddr, len, is_write);
+
+    /* We treat maps as remote TLBs to cope with stuff like AIO. */
+    if (cb) {
+        dma_register_memory_map(dev, buf, addr, *len, cb, opaque);
+    }
+
+    return buf;
+}
+
+void dma_memory_unmap(DMADevice *dev,
+                      void *buffer,
+                      dma_addr_t len,
+                      int is_write,
+                      dma_addr_t access_len)
+{
+    cpu_physical_memory_unmap(buffer, len, is_write, access_len);
+    if (dev && dev->mmu) {
+        dma_unregister_memory_map(dev, buffer, len);
+    }
+}
+
+void dma_memory_rw_iommu(DMADevice *dev,
+                         dma_addr_t addr,
+                         void *buf,
+                         dma_addr_t len,
+                         int is_write)
+{
+    dma_addr_t paddr, plen;
+    int err;
+
+    while (len) {
+        err = dev->mmu->translate(dev, addr, &paddr, &plen, is_write);
+        if (err) {
+            return;
+        }
+
+        /* The translation might be valid for larger regions.
+         */
+        if (plen > len) {
+            plen = len;
+        }
+
+        cpu_physical_memory_rw(paddr, buf, plen, is_write);
+
+        len -= plen;
+        addr += plen;
+        buf += plen;
+    }
+}
+
diff --git a/hw/dma_rw.h b/hw/dma_rw.h
new file mode 100644
index 0000000..39482cb
--- /dev/null
+++ b/hw/dma_rw.h
@@ -0,0 +1,217 @@
+#ifndef DMA_RW_H
+#define DMA_RW_H
+
+#include "qemu-common.h"
+
+typedef uint64_t dma_addr_t;
+
+typedef struct DMAMmu DMAMmu;
+typedef struct DMADevice DMADevice;
+typedef struct DMAMemoryMap DMAMemoryMap;
+
+typedef int DMATranslateFunc(DMADevice *dev,
+                             dma_addr_t addr,
+                             dma_addr_t *paddr,
+                             dma_addr_t *len,
+                             int is_write);
+
+typedef void DMAInvalidateMapFunc(void *);
+
+struct DMAMmu {
+    DeviceState *iommu;
+    DMATranslateFunc *translate;
+    QLIST_HEAD(memory_maps, DMAMemoryMap) memory_maps;
+};
+
+struct DMADevice {
+    DMAMmu *mmu;
+};
+
+struct DMAMemoryMap {
+    void *buffer;
+    dma_addr_t addr;
+    dma_addr_t len;
+    DMAInvalidateMapFunc *invalidate;
+    void *invalidate_opaque;
+
+    QLIST_ENTRY(DMAMemoryMap) list;
+};
+
+void dma_memory_rw_iommu(DMADevice *dev,
+                         dma_addr_t addr,
+                         void *buf,
+                         dma_addr_t len,
+                         int is_write);
+
+static inline void dma_memory_rw(DMADevice *dev,
+                                 dma_addr_t addr,
+                                 void *buf,
+                                 dma_addr_t len,
+                                 int is_write)
+{
+    /*
+     * Fast path for the non-IOMMU case.
+     * More importantly, it makes it obvious what this function does.
+     */
+    if (!dev || !dev->mmu) {
+        cpu_physical_memory_rw(addr, buf, len, is_write);
+        return;
+    }
+
+    dma_memory_rw_iommu(dev, addr, buf, len, is_write);
+}
+
+static inline void dma_memory_read(DMADevice *dev,
+                                   dma_addr_t addr,
+                                   void *buf,
+                                   dma_addr_t len)
+{
+    dma_memory_rw(dev, addr, buf, len, 0);
+}
+
+static inline void dma_memory_write(DMADevice *dev,
+                                    dma_addr_t addr,
+                                    const void *buf,
+                                    dma_addr_t len)
+{
+    dma_memory_rw(dev, addr, (void *) buf, len, 1);
+}
+
+void *dma_memory_map(DMADevice *dev,
+                     DMAInvalidateMapFunc *cb,
+                     void *opaque,
+                     dma_addr_t addr,
+                     dma_addr_t *len,
+                     int is_write);
+void dma_memory_unmap(DMADevice *dev,
+                      void *buffer,
+                      dma_addr_t len,
+                      int is_write,
+                      dma_addr_t access_len);
+
+void dma_invalidate_memory_range(DMADevice *dev,
+                                 dma_addr_t addr,
+                                 dma_addr_t len);
+
+
+/*
+ * All the following macro magic tries to
+ * achieve some type safety and avoid duplication.
+ */
+
+#define DEFINE_DMA_LD(prefix, suffix, devtype, dmafield, size)           \
+static inline uint##size##_t                                             \
+dma_ld##suffix(DMADevice *dev, dma_addr_t addr)                          \
+{                                                                        \
+    int err;                                                             \
+    dma_addr_t paddr, plen;                                              \
+                                                                         \
+    if (!dev || !dev->mmu) {                                             \
+        return ld##suffix##_phys(addr);                                  \
+    }                                                                    \
+                                                                         \
+    err = dev->mmu->translate(dev, addr, &paddr, &plen, 0);              \
+    if (err || (plen < size / 8)) {                                      \
+        return 0;                                                        \
+    }                                                                    \
+                                                                         \
+    return ld##suffix##_phys(paddr);                                     \
+}
+
+#define DEFINE_DMA_ST(prefix, suffix, devtype, dmafield, size)           \
+static inline void                                                       \
+dma_st##suffix(DMADevice *dev, dma_addr_t addr, uint##size##_t val)      \
+{                                                                        \
+    int err;                                                             \
+    target_phys_addr_t paddr, plen;                                      \
+                                                                         \
+    if (!dev || !dev->mmu) {                                             \
+        st##suffix##_phys(addr, val);                                    \
+        return;                                                          \
+    }                                                                    \
+    err = dev->mmu->translate(dev, addr, &paddr, &plen, 1);              \
+    if (err || (plen < size / 8)) {                                      \
+        return;                                                          \
+    }                                                                    \
+                                                                         \
+    st##suffix##_phys(paddr, val);                                       \
+}
+
+#define DEFINE_DMA_MEMORY_RW(prefix, devtype, dmafield)
+#define DEFINE_DMA_MEMORY_READ(prefix, devtype, dmafield)
+#define DEFINE_DMA_MEMORY_WRITE(prefix, devtype, dmafield)
+
+#define DEFINE_DMA_OPS(prefix, devtype, dmafield)                        \
+    /*                                                                   \
+     * FIXME: find a way to handle these:                                \
+     *  DEFINE_DMA_LD(prefix, ub, devtype, dmafield, 8)                  \
+     *  DEFINE_DMA_LD(prefix, uw, devtype, dmafield, 16)                 \
+     */                                                                  \
+    DEFINE_DMA_LD(prefix, l, devtype, dmafield, 32)                      \
+    DEFINE_DMA_LD(prefix, q, devtype, dmafield, 64)                      \
+                                                                         \
+    DEFINE_DMA_ST(prefix, b, devtype, dmafield, 8)                       \
+    DEFINE_DMA_ST(prefix, w, devtype, dmafield, 16)                      \
+    DEFINE_DMA_ST(prefix, l, devtype, dmafield, 32)                      \
+    DEFINE_DMA_ST(prefix, q, devtype, dmafield, 64)                      \
+                                                                         \
+    DEFINE_DMA_MEMORY_RW(prefix, devtype, dmafield)                      \
+    DEFINE_DMA_MEMORY_READ(prefix, devtype, dmafield)                    \
+    DEFINE_DMA_MEMORY_WRITE(prefix, devtype, dmafield)
+
+DEFINE_DMA_OPS(UNUSED, UNUSED, UNUSED)
+
+/*
+ * From here on, various bus interfaces can use DEFINE_DMA_OPS
+ * to summon their own personalized clone of the DMA interface.
+ */
+
+#undef DEFINE_DMA_LD
+#undef DEFINE_DMA_ST
+#undef DEFINE_DMA_MEMORY_RW
+#undef DEFINE_DMA_MEMORY_READ
+#undef DEFINE_DMA_MEMORY_WRITE
+
+#define DEFINE_DMA_LD(prefix, suffix, devtype, dma_field, size)          \
+static inline uint##size##_t                                             \
+prefix##_ld##suffix(devtype *dev, dma_addr_t addr)                       \
+{                                                                        \
+    return dma_ld##suffix(&dev->dma_field, addr);                        \
+}
+
+#define DEFINE_DMA_ST(prefix, suffix, devtype, dma_field, size)          \
+static inline void                                                       \
+prefix##_st##suffix(devtype *dev, dma_addr_t addr, uint##size##_t val)   \
+{                                                                        \
+    dma_st##suffix(&dev->dma_field, addr, val);                          \
+}
+
+#define DEFINE_DMA_MEMORY_RW(prefix, devtype, dmafield)                  \
+static inline void prefix##_memory_rw(devtype *dev,                      \
+                                      dma_addr_t addr,                   \
+                                      void *buf,                         \
+                                      dma_addr_t len,                    \
+                                      int is_write)                      \
+{                                                                        \
+    dma_memory_rw(&dev->dmafield, addr, buf, len, is_write);             \
+}
+
+#define DEFINE_DMA_MEMORY_READ(prefix, devtype, dmafield)                \
+static inline void prefix##_memory_read(devtype *dev,                    \
+                                        dma_addr_t addr,                 \
+                                        void *buf,                       \
+                                        dma_addr_t len)                  \
+{                                                                        \
+    dma_memory_read(&dev->dmafield, addr, buf, len);                     \
+}
+
+#define DEFINE_DMA_MEMORY_WRITE(prefix, devtype, dmafield)               \
+static inline void prefix##_memory_write(devtype *dev,                   \
+                                         dma_addr_t addr,                \
+                                         const void *buf,                \
+                                         dma_addr_t len)                 \
+{                                                                        \
+    dma_memory_write(&dev->dmafield, addr, buf, len);                    \
+}
+
+#endif
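
For illustration only (not part of the patch), here is a minimal sketch of how a
device model might consume this interface. The MyDevice type, the "my" prefix,
the my_fetch_descriptor() function and the descriptor layout are all
hypothetical; only DMADevice, DEFINE_DMA_OPS and the dma_* accessors come from
hw/dma_rw.h above:

    /* Hypothetical consumer of the generic DMA interface. */
    #include "dma_rw.h"

    typedef struct MyDevice {
        DMADevice dma;      /* dma.mmu stays NULL unless an IOMMU is wired up */
        /* ... other device state ... */
    } MyDevice;

    /* Generates my_ldl()/my_stl()/my_memory_rw()/... bound to MyDevice. */
    DEFINE_DMA_OPS(my, MyDevice, dma)

    static void my_fetch_descriptor(MyDevice *s, dma_addr_t desc)
    {
        uint8_t payload[64];

        /* 32-bit load; goes through mmu->translate() when s->dma.mmu is set */
        uint32_t flags = my_ldl(s, desc);

        if (flags & 1) {
            /* bulk copy; plain cpu_physical_memory_rw() when there is no IOMMU */
            my_memory_read(s, desc + 4, payload, sizeof(payload));
        }
    }

As the header comment notes, bus interfaces are expected to embed DMADevice in
their device structure and invoke DEFINE_DMA_OPS once with their own prefix and
device type, so that devices get the same accessors with translation applied
transparently.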