From patchwork Mon Aug 6 20:14:31 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Eric Auger <eric.auger@redhat.com>
X-Patchwork-Id: 954217
From: Eric Auger <eric.auger@redhat.com>
To: eric.auger.pro@gmail.com, eric.auger@redhat.com, qemu-devel@nongnu.org,
 qemu-arm@nongnu.org, peter.maydell@linaro.org, alex.williamson@redhat.com,
 mst@redhat.com, jean-philippe.brucker@arm.com
Cc: wei@redhat.com, kevin.tian@intel.com, tn@semihalf.com, will.deacon@arm.com,
 drjones@redhat.com, peterx@redhat.com, linuc.decode@gmail.com,
 bharat.bhushan@nxp.com
Date: Mon, 6 Aug 2018 22:14:31 +0200
Message-Id: <1533586484-5737-4-git-send-email-eric.auger@redhat.com>
In-Reply-To: <1533586484-5737-1-git-send-email-eric.auger@redhat.com>
References: <1533586484-5737-1-git-send-email-eric.auger@redhat.com>
Subject: [Qemu-devel] [RFC v7 03/16] virtio-iommu: Decode the command payload

This patch adds the command payload decoding and introduces the
functions that will do the actual command handling. Those functions
are not yet implemented.

Signed-off-by: Eric Auger <eric.auger@redhat.com>

---
v5 -> v6:
- change map/unmap semantics (remove size)

v4 -> v5:
- adopt new v0.5 terminology

v3 -> v4:
- no flags field anymore in struct virtio_iommu_req_unmap
- test reserved on attach/detach, change trace proto
- rebase on v2.10.0
---
 hw/virtio/trace-events   |   6 ++-
 hw/virtio/virtio-iommu.c | 104 +++++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 105 insertions(+), 5 deletions(-)

diff --git a/hw/virtio/trace-events b/hw/virtio/trace-events
index 01dc20f..f6675cf 100644
--- a/hw/virtio/trace-events
+++ b/hw/virtio/trace-events
@@ -52,4 +52,8 @@ virtio_balloon_to_target(uint64_t target, uint32_t num_pages) "balloon target: 0
 virtio_iommu_set_features(uint64_t features) "features accepted by the driver =0x%"PRIx64
 virtio_iommu_device_reset(void) "reset!"
 virtio_iommu_device_status(uint8_t status) "driver status = %d"
-virtio_iommu_get_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint8_t ioasid_bits, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" ioasid_bits=%d probe_size=0x%x"
+virtio_iommu_get_config(uint64_t page_size_mask, uint64_t start, uint64_t end, uint8_t domain_bits, uint32_t probe_size) "page_size_mask=0x%"PRIx64" start=0x%"PRIx64" end=0x%"PRIx64" domain_bits=%d probe_size=0x%x"
+virtio_iommu_attach(uint32_t domain_id, uint32_t ep_id) "domain=%d endpoint=%d"
+virtio_iommu_detach(uint32_t ep_id) "endpoint=%d"
+virtio_iommu_map(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end, uint64_t phys_start, uint32_t flags) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64 " phys_start=0x%"PRIx64" flags=%d"
+virtio_iommu_unmap(uint32_t domain_id, uint64_t virt_start, uint64_t virt_end) "domain=%d virt_start=0x%"PRIx64" virt_end=0x%"PRIx64
diff --git a/hw/virtio/virtio-iommu.c b/hw/virtio/virtio-iommu.c
index 34c7584..492cf22 100644
--- a/hw/virtio/virtio-iommu.c
+++ b/hw/virtio/virtio-iommu.c
@@ -34,29 +34,125 @@
 /* Max size */
 #define VIOMMU_DEFAULT_QUEUE_SIZE 256
 
+static int virtio_iommu_attach(VirtIOIOMMU *s,
+                               struct virtio_iommu_req_attach *req)
+{
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint32_t ep_id = le32_to_cpu(req->endpoint);
+    uint32_t reserved = le32_to_cpu(req->reserved);
+
+    trace_virtio_iommu_attach(domain_id, ep_id);
+
+    if (reserved) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+
+static int virtio_iommu_detach(VirtIOIOMMU *s,
+                               struct virtio_iommu_req_detach *req)
+{
+    uint32_t ep_id = le32_to_cpu(req->endpoint);
+    uint32_t reserved = le32_to_cpu(req->reserved);
+
+    trace_virtio_iommu_detach(ep_id);
+
+    if (reserved) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+
+static int virtio_iommu_map(VirtIOIOMMU *s,
+                            struct virtio_iommu_req_map *req)
+{
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint64_t phys_start = le64_to_cpu(req->phys_start);
+    uint64_t virt_start = le64_to_cpu(req->virt_start);
+    uint64_t virt_end = le64_to_cpu(req->virt_end);
+    uint32_t flags = le32_to_cpu(req->flags);
+
+    trace_virtio_iommu_map(domain_id, virt_start, virt_end, phys_start, flags);
+
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+
+static int virtio_iommu_unmap(VirtIOIOMMU *s,
+                              struct virtio_iommu_req_unmap *req)
+{
+    uint32_t domain_id = le32_to_cpu(req->domain);
+    uint64_t virt_start = le64_to_cpu(req->virt_start);
+    uint64_t virt_end = le64_to_cpu(req->virt_end);
+
+    trace_virtio_iommu_unmap(domain_id, virt_start, virt_end);
+
+    return VIRTIO_IOMMU_S_UNSUPP;
+}
+
+#define get_payload_size(req) (\
+sizeof((req)) - sizeof(struct virtio_iommu_req_tail))
+
 static int virtio_iommu_handle_attach(VirtIOIOMMU *s,
                                       struct iovec *iov,
                                       unsigned int iov_cnt)
 {
-    return -ENOENT;
+    struct virtio_iommu_req_attach req;
+    size_t sz, payload_sz;
+
+    payload_sz = get_payload_size(req);
+
+    sz = iov_to_buf(iov, iov_cnt, 0, &req, payload_sz);
+    if (sz != payload_sz) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+    return virtio_iommu_attach(s, &req);
 }
 
 static int virtio_iommu_handle_detach(VirtIOIOMMU *s,
                                       struct iovec *iov,
                                       unsigned int iov_cnt)
 {
-    return -ENOENT;
+    struct virtio_iommu_req_detach req;
+    size_t sz, payload_sz;
+
+    payload_sz = get_payload_size(req);
+
+    sz = iov_to_buf(iov, iov_cnt, 0, &req, payload_sz);
+    if (sz != payload_sz) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+    return virtio_iommu_detach(s, &req);
 }
 
 static int virtio_iommu_handle_map(VirtIOIOMMU *s,
                                    struct iovec *iov,
                                    unsigned int iov_cnt)
 {
-    return -ENOENT;
+    struct virtio_iommu_req_map req;
+    size_t sz, payload_sz;
+
+    payload_sz = get_payload_size(req);
+
+    sz = iov_to_buf(iov, iov_cnt, 0, &req, payload_sz);
+    if (sz != payload_sz) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+    return virtio_iommu_map(s, &req);
 }
 
 static int virtio_iommu_handle_unmap(VirtIOIOMMU *s,
                                      struct iovec *iov,
                                      unsigned int iov_cnt)
 {
-    return -ENOENT;
+    struct virtio_iommu_req_unmap req;
+    size_t sz, payload_sz;
+
+    payload_sz = get_payload_size(req);
+
+    sz = iov_to_buf(iov, iov_cnt, 0, &req, payload_sz);
+    if (sz != payload_sz) {
+        return VIRTIO_IOMMU_S_INVAL;
+    }
+    return virtio_iommu_unmap(s, &req);
 }
 
 static void virtio_iommu_handle_command(VirtIODevice *vdev, VirtQueue *vq)
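
For reference, here is a standalone sketch (not part of the patch) of the
decoding pattern used above: each fixed-size request ends with a
struct virtio_iommu_req_tail carrying the status, so the payload to copy out
of the virtqueue scatter-gather list is sizeof(request) minus sizeof(tail),
and a short copy is rejected as invalid. The struct layouts and names
(viommu_req_attach, viommu_req_tail), the PAYLOAD_SIZE macro and the
iov_copy_to_buf() helper are simplified, host-endian stand-ins for the real
virtio_iommu_* definitions and QEMU's iov_to_buf(); they only illustrate the
idea, they are not the spec layout.

/* Hedged sketch of the payload decoding pattern, outside of QEMU. */
#include <inttypes.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

struct viommu_req_tail {            /* stand-in for virtio_iommu_req_tail */
    uint8_t status;                 /* status the device reports back */
    uint8_t reserved[3];
};

struct viommu_req_attach {          /* stand-in for virtio_iommu_req_attach */
    uint32_t domain;
    uint32_t endpoint;
    uint32_t reserved;
    struct viommu_req_tail tail;    /* last member, excluded from the payload */
};

/* Same idea as the patch's get_payload_size(): everything but the tail. */
#define PAYLOAD_SIZE(req) (sizeof(req) - sizeof(struct viommu_req_tail))

/* Minimal local substitute for QEMU's iov_to_buf(): copy up to 'bytes'
 * from the scatter-gather list into a flat buffer, return bytes copied. */
static size_t iov_copy_to_buf(const struct iovec *iov, unsigned int iov_cnt,
                              void *buf, size_t bytes)
{
    size_t done = 0;
    for (unsigned int i = 0; i < iov_cnt && done < bytes; i++) {
        size_t n = iov[i].iov_len;
        if (n > bytes - done) {
            n = bytes - done;
        }
        memcpy((uint8_t *)buf + done, iov[i].iov_base, n);
        done += n;
    }
    return done;
}

int main(void)
{
    /* Pretend the driver queued an ATTACH request, split across two
     * descriptors. The real handlers also convert fields with le32_to_cpu();
     * host endianness is assumed here for brevity. */
    struct viommu_req_attach src = { .domain = 1, .endpoint = 42 };
    struct iovec iov[2] = {
        { .iov_base = &src, .iov_len = 8 },
        { .iov_base = (uint8_t *)&src + 8, .iov_len = sizeof(src) - 8 },
    };

    struct viommu_req_attach req;
    size_t payload_sz = PAYLOAD_SIZE(req);
    size_t sz = iov_copy_to_buf(iov, 2, &req, payload_sz);

    if (sz != payload_sz) {
        /* mirrors returning VIRTIO_IOMMU_S_INVAL in the patch */
        fprintf(stderr, "short request\n");
        return 1;
    }
    printf("attach: domain=%" PRIu32 " endpoint=%" PRIu32 " reserved=%" PRIu32 "\n",
           req.domain, req.endpoint, req.reserved);
    return 0;
}

The size check is the point of the pattern: if the driver supplied fewer
bytes than the fixed payload, the request is rejected as invalid rather than
partially decoded, exactly as the handle_* helpers above do before calling
the per-command functions.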