From patchwork Mon Dec 10 17:31:49 2018
X-Patchwork-Submitter: "Dr. David Alan Gilbert"
X-Patchwork-Id: 1010575
From: "Dr. David Alan Gilbert (git)"
To: qemu-devel@nongnu.org
Cc: sweil@redhat.com, swhiteho@redhat.com, stefanha@redhat.com, vgoyal@redhat.com, miklos@szeredi.hu
Date: Mon, 10 Dec 2018 17:31:49 +0000
Message-Id: <20181210173151.16629-6-dgilbert@redhat.com>
In-Reply-To: <20181210173151.16629-1-dgilbert@redhat.com>
References: <20181210173151.16629-1-dgilbert@redhat.com>
Subject: [Qemu-devel] [RFC PATCH 5/7] virtio-fs: Fill in slave commands for mapping

From: "Dr. David Alan Gilbert"

Fill in definitions for map, unmap and sync commands.

Signed-off-by: Dr.
David Alan Gilbert
---
 hw/virtio/vhost-user-fs.c | 129 ++++++++++++++++++++++++++++++++++++--
 1 file changed, 123 insertions(+), 6 deletions(-)

diff --git a/hw/virtio/vhost-user-fs.c b/hw/virtio/vhost-user-fs.c
index da70d9cd2c..bbb15477e5 100644
--- a/hw/virtio/vhost-user-fs.c
+++ b/hw/virtio/vhost-user-fs.c
@@ -24,20 +24,137 @@
 int vhost_user_fs_slave_map(struct vhost_dev *dev, VhostUserFSSlaveMsg *sm,
                             int fd)
 {
-    /* TODO */
-    return -1;
+    VHostUserFS *fs = VHOST_USER_FS(dev->vdev);
+    size_t cache_size = fs->conf.cache_size;
+    void *cache_host = memory_region_get_ram_ptr(&fs->cache);
+
+    unsigned int i;
+    int res = 0;
+
+    if (fd < 0) {
+        fprintf(stderr, "%s: Bad fd for map\n", __func__);
+        return -1;
+    }
+
+    for (i = 0; i < VHOST_USER_FS_SLAVE_ENTRIES; i++) {
+        if (sm->len[i] == 0) {
+            continue;
+        }
+
+        if ((sm->c_offset[i] + sm->len[i]) < sm->len[i] ||
+            (sm->c_offset[i] + sm->len[i]) > cache_size) {
+            fprintf(stderr, "%s: Bad offset/len for map [%d] %"
+                            PRIx64 "+%" PRIx64 "\n", __func__,
+                            i, sm->c_offset[i], sm->len[i]);
+            res = -1;
+            break;
+        }
+
+        if (mmap(cache_host + sm->c_offset[i], sm->len[i],
+                 ((sm->flags[i] & VHOST_USER_FS_FLAG_MAP_R) ? PROT_READ : 0) |
+                 ((sm->flags[i] & VHOST_USER_FS_FLAG_MAP_W) ? PROT_WRITE : 0),
+                 MAP_SHARED | MAP_FIXED,
+                 fd, sm->fd_offset[i]) != (cache_host + sm->c_offset[i])) {
+            fprintf(stderr, "%s: map failed err %d [%d] %"
+                            PRIx64 "+%" PRIx64 " from %" PRIx64 "\n", __func__,
+                            errno, i, sm->c_offset[i], sm->len[i],
+                            sm->fd_offset[i]);
+            res = -1;
+            break;
+        }
+    }
+
+    if (res) {
+        /* Something went wrong, unmap them all */
+        vhost_user_fs_slave_unmap(dev, sm);
+    }
+    return res;
 }
 
 int vhost_user_fs_slave_unmap(struct vhost_dev *dev, VhostUserFSSlaveMsg *sm)
 {
-    /* TODO */
-    return -1;
+    VHostUserFS *fs = VHOST_USER_FS(dev->vdev);
+    size_t cache_size = fs->conf.cache_size;
+    void *cache_host = memory_region_get_ram_ptr(&fs->cache);
+
+    unsigned int i;
+    int res = 0;
+
+    /* Note even if one unmap fails we try the rest, since the effect
+     * is to clean up as much as possible.
+     */
+    for (i = 0; i < VHOST_USER_FS_SLAVE_ENTRIES; i++) {
+        void *ptr;
+        if (sm->len[i] == 0) {
+            continue;
+        }
+
+        if (sm->len[i] == ~(uint64_t)0) {
+            /* Special case meaning the whole arena */
+            sm->len[i] = cache_size;
+        }
+
+        if ((sm->c_offset[i] + sm->len[i]) < sm->len[i] ||
+            (sm->c_offset[i] + sm->len[i]) > cache_size) {
+            fprintf(stderr, "%s: Bad offset/len for unmap [%d] %"
+                            PRIx64 "+%" PRIx64 "\n", __func__,
+                            i, sm->c_offset[i], sm->len[i]);
+            res = -1;
+            continue;
+        }
+
+        ptr = mmap(cache_host + sm->c_offset[i], sm->len[i],
+                   PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
+        if (ptr != (cache_host + sm->c_offset[i])) {
+            fprintf(stderr, "%s: mmap failed (%s) [%d] %"
+                            PRIx64 "+%" PRIx64 " from %" PRIx64 " res: %p\n",
+                            __func__,
+                            strerror(errno),
+                            i, sm->c_offset[i], sm->len[i],
+                            sm->fd_offset[i], ptr);
+            res = -1;
+        }
+    }
+
+    return res;
 }
 
 int vhost_user_fs_slave_sync(struct vhost_dev *dev, VhostUserFSSlaveMsg *sm)
 {
-    /* TODO */
-    return -1;
+    VHostUserFS *fs = VHOST_USER_FS(dev->vdev);
+    size_t cache_size = fs->conf.cache_size;
+    void *cache_host = memory_region_get_ram_ptr(&fs->cache);
+
+    unsigned int i;
+    int res = 0;
+
+    /* Note even if one sync fails we try the rest */
+    for (i = 0; i < VHOST_USER_FS_SLAVE_ENTRIES; i++) {
+        if (sm->len[i] == 0) {
+            continue;
+        }
+
+        if ((sm->c_offset[i] + sm->len[i]) < sm->len[i] ||
+            (sm->c_offset[i] + sm->len[i]) > cache_size) {
+            fprintf(stderr, "%s: Bad offset/len for sync [%d] %"
+                            PRIx64 "+%" PRIx64 "\n", __func__,
+                            i, sm->c_offset[i], sm->len[i]);
+            res = -1;
+            continue;
+        }
+
+        if (msync(cache_host + sm->c_offset[i], sm->len[i],
+                  MS_SYNC /* ?? */)) {
+            fprintf(stderr, "%s: msync failed (%s) [%d] %"
+                            PRIx64 "+%" PRIx64 " from %" PRIx64 "\n", __func__,
+                            strerror(errno),
+                            i, sm->c_offset[i], sm->len[i],
+                            sm->fd_offset[i]);
+            res = -1;
+        }
+    }
+
+    return res;
 }