From patchwork Fri Dec 7 08:48:40 2018
X-Patchwork-Submitter: Tiwei Bie <tiwei.bie@intel.com>
X-Patchwork-Id: 1009227
From: Tiwei Bie <tiwei.bie@intel.com>
To: mst@redhat.com, jasowang@redhat.com,
    virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, virtio-dev@lists.oasis-open.org
Cc: wexu@redhat.com, jfreimann@redhat.com, maxime.coquelin@redhat.com,
    tiwei.bie@intel.com
Subject: [RFC 1/3] virtio_ring: define flags as shifts consistently
Date: Fri, 7 Dec 2018 16:48:40 +0800
Message-Id: <20181207084842.13133-2-tiwei.bie@intel.com>
In-Reply-To: <20181207084842.13133-1-tiwei.bie@intel.com>
References: <20181207084842.13133-1-tiwei.bie@intel.com>

Introduce _SPLIT_ and/or _PACKED_ variants for VRING_DESC_F_*,
VRING_AVAIL_F_NO_INTERRUPT and VRING_USED_F_NO_NOTIFY. These variants
are defined as shifts instead of shifted values for consistency with
other _F_ flags.

Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 include/uapi/linux/virtio_ring.h | 57 ++++++++++++++++++++++----------
 1 file changed, 40 insertions(+), 17 deletions(-)

diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
index 2414f8af26b3..9b0c0d92ab62 100644
--- a/include/uapi/linux/virtio_ring.h
+++ b/include/uapi/linux/virtio_ring.h
@@ -37,29 +37,52 @@
 #include <linux/types.h>
 #include <linux/virtio_types.h>
 
-/* This marks a buffer as continuing via the next field. */
+/*
+ * Notice: unlike other _F_ flags, below flags are defined as shifted
+ * values instead of shifts for compatibility.
+ */
+/* Same as VRING_SPLIT_DESC_F_NEXT. */
 #define VRING_DESC_F_NEXT	1
-/* This marks a buffer as write-only (otherwise read-only). */
+/* Same as VRING_SPLIT_DESC_F_WRITE. */
 #define VRING_DESC_F_WRITE	2
-/* This means the buffer contains a list of buffer descriptors. */
+/* Same as VRING_SPLIT_DESC_F_INDIRECT. */
 #define VRING_DESC_F_INDIRECT	4
-
-/*
- * Mark a descriptor as available or used in packed ring.
- * Notice: they are defined as shifts instead of shifted values.
- */
-#define VRING_PACKED_DESC_F_AVAIL	7
-#define VRING_PACKED_DESC_F_USED	15
-
-/* The Host uses this in used->flags to advise the Guest: don't kick me when
- * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
- * will still kick if it's out of buffers. */
+/* Same as VRING_SPLIT_USED_F_NO_NOTIFY. */
 #define VRING_USED_F_NO_NOTIFY	1
-/* The Guest uses this in avail->flags to advise the Host: don't interrupt me
- * when you consume a buffer.  It's unreliable, so it's simply an
- * optimization.  */
+/* Same as VRING_SPLIT_AVAIL_F_NO_INTERRUPT. */
 #define VRING_AVAIL_F_NO_INTERRUPT	1
 
+/* Mark a buffer as continuing via the next field in split ring. */
+#define VRING_SPLIT_DESC_F_NEXT		0
+/* Mark a buffer as write-only (otherwise read-only) in split ring. */
+#define VRING_SPLIT_DESC_F_WRITE	1
+/* Mean the buffer contains a list of buffer descriptors in split ring. */
+#define VRING_SPLIT_DESC_F_INDIRECT	2
+
+/*
+ * The Host uses this in used->flags in split ring to advise the Guest:
+ * don't kick me when you add a buffer. It's unreliable, so it's simply
+ * an optimization. Guest will still kick if it's out of buffers.
+ */
+#define VRING_SPLIT_USED_F_NO_NOTIFY	0
+/*
+ * The Guest uses this in avail->flags in split ring to advise the Host:
+ * don't interrupt me when you consume a buffer. It's unreliable, so it's
+ * simply an optimization.
+ */
+#define VRING_SPLIT_AVAIL_F_NO_INTERRUPT	0
+
+/* Mark a buffer as continuing via the next field in packed ring. */
+#define VRING_PACKED_DESC_F_NEXT	0
+/* Mark a buffer as write-only (otherwise read-only) in packed ring. */
+#define VRING_PACKED_DESC_F_WRITE	1
+/* Mean the buffer contains a list of buffer descriptors in packed ring. */
+#define VRING_PACKED_DESC_F_INDIRECT	2
+
+/* Mark a descriptor as available or used in packed ring. */
+#define VRING_PACKED_DESC_F_AVAIL	7
+#define VRING_PACKED_DESC_F_USED	15
+
 /* Enable events in packed ring. */
 #define VRING_PACKED_EVENT_FLAG_ENABLE	0x0
 /* Disable events in packed ring. */
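The rename is easiest to see side by side: the legacy VRING_DESC_F_* macros
are shifted values used directly as masks, while the new _SPLIT_/_PACKED_
names are bit positions meant to be combined with BIT(), matching how the
other virtio _F_ feature bits work. Below is a standalone userspace sketch,
not part of the series; BIT() is re-created locally and the flag values are
the ones quoted in the patch above.

#include <stdio.h>
#include <stdint.h>

#define BIT(nr)	(1UL << (nr))	/* userspace stand-in for the kernel macro */

#define VRING_DESC_F_NEXT		1	/* legacy: already a mask */
#define VRING_SPLIT_DESC_F_NEXT		0	/* new: a bit position */
#define VRING_PACKED_DESC_F_AVAIL	7
#define VRING_PACKED_DESC_F_USED	15

int main(void)
{
	/* The legacy mask and the BIT()-built mask are the same value. */
	uint16_t legacy = VRING_DESC_F_NEXT;
	uint16_t split = BIT(VRING_SPLIT_DESC_F_NEXT);

	printf("legacy=0x%04x split=0x%04x avail=0x%04lx used=0x%04lx\n",
	       (unsigned)legacy, (unsigned)split,
	       BIT(VRING_PACKED_DESC_F_AVAIL), BIT(VRING_PACKED_DESC_F_USED));
	/* prints: legacy=0x0001 split=0x0001 avail=0x0080 used=0x8000 */
	return 0;
}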
From patchwork Fri Dec 7 08:48:41 2018
X-Patchwork-Submitter: Tiwei Bie <tiwei.bie@intel.com>
X-Patchwork-Id: 1009226
From: Tiwei Bie <tiwei.bie@intel.com>
To: mst@redhat.com, jasowang@redhat.com,
    virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, virtio-dev@lists.oasis-open.org
Cc: wexu@redhat.com, jfreimann@redhat.com, maxime.coquelin@redhat.com,
    tiwei.bie@intel.com
Subject: [RFC 2/3] virtio_ring: add VIRTIO_RING_NO_LEGACY
Date: Fri, 7 Dec 2018 16:48:41 +0800
Message-Id: <20181207084842.13133-3-tiwei.bie@intel.com>
In-Reply-To: <20181207084842.13133-1-tiwei.bie@intel.com>
References: <20181207084842.13133-1-tiwei.bie@intel.com>

Introduce VIRTIO_RING_NO_LEGACY to support disabling legacy macros
and layout definitions.

Suggested-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
VRING_AVAIL_ALIGN_SIZE, VRING_USED_ALIGN_SIZE and VRING_DESC_ALIGN_SIZE
are not pre-virtio 1.0, but can also be disabled by VIRTIO_RING_NO_LEGACY
in this patch, because their names are not consistent with the other
names. Not sure whether this is a good idea. If we want this, we may
also want to define _SPLIT_ versions for them.

 include/uapi/linux/virtio_ring.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
index 9b0c0d92ab62..192573827850 100644
--- a/include/uapi/linux/virtio_ring.h
+++ b/include/uapi/linux/virtio_ring.h
@@ -37,6 +37,7 @@
 #include <linux/types.h>
 #include <linux/virtio_types.h>
 
+#ifndef VIRTIO_RING_NO_LEGACY
 /*
  * Notice: unlike other _F_ flags, below flags are defined as shifted
  * values instead of shifts for compatibility.
@@ -51,6 +52,7 @@
 #define VRING_USED_F_NO_NOTIFY	1
 /* Same as VRING_SPLIT_AVAIL_F_NO_INTERRUPT. */
 #define VRING_AVAIL_F_NO_INTERRUPT	1
+#endif /* VIRTIO_RING_NO_LEGACY */
 
 /* Mark a buffer as continuing via the next field in split ring. */
 #define VRING_SPLIT_DESC_F_NEXT		0
@@ -151,6 +153,7 @@ struct vring {
 	struct vring_used *used;
 };
 
+#ifndef VIRTIO_RING_NO_LEGACY
 /* Alignment requirements for vring elements.
  * When using pre-virtio 1.0 layout, these fall out naturally.
  */
@@ -203,6 +206,7 @@ static inline unsigned vring_size(unsigned int num, unsigned long align)
 		 + align - 1) & ~(align - 1))
 		+ sizeof(__virtio16) * 3 + sizeof(struct vring_used_elem) * num;
 }
+#endif /* VIRTIO_RING_NO_LEGACY */
 
 /* The following is used with USED_EVENT_IDX and AVAIL_EVENT_IDX */
 /* Assuming a given event_idx value from the other side, if
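The guard is opt-in, so existing users see no change; a consumer that wants
to verify it has moved off the legacy names can define the macro before the
include and let the compiler flag any stragglers. A hypothetical consumer
might look like this (illustrative names, assuming the patched header is
installed):

/* Hide the legacy flag names and pre-virtio 1.0 layout helpers. */
#define VIRTIO_RING_NO_LEGACY
#include <linux/virtio_ring.h>

/* Fine: the new shift-style name is still visible. */
static inline unsigned int desc_next_mask(void)
{
	return 1U << VRING_SPLIT_DESC_F_NEXT;
}

/* Would now fail to build, since the guard hides both the legacy
 * macro and the vring_init()/vring_size() layout helpers:
 *
 *	unsigned int bad(void) { return VRING_DESC_F_NEXT; }
 */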
From patchwork Fri Dec 7 08:48:42 2018
X-Patchwork-Submitter: Tiwei Bie <tiwei.bie@intel.com>
X-Patchwork-Id: 1009225
From: Tiwei Bie <tiwei.bie@intel.com>
To: mst@redhat.com, jasowang@redhat.com,
    virtualization@lists.linux-foundation.org, linux-kernel@vger.kernel.org,
    netdev@vger.kernel.org, virtio-dev@lists.oasis-open.org
Cc: wexu@redhat.com, jfreimann@redhat.com, maxime.coquelin@redhat.com,
    tiwei.bie@intel.com
Subject: [RFC 3/3] virtio_ring: use new vring flags
Date: Fri, 7 Dec 2018 16:48:42 +0800
Message-Id: <20181207084842.13133-4-tiwei.bie@intel.com>
In-Reply-To: <20181207084842.13133-1-tiwei.bie@intel.com>
References: <20181207084842.13133-1-tiwei.bie@intel.com>

Switch to using the _SPLIT_ and _PACKED_ variants of vring flags
in split ring and packed ring, respectively.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/virtio/virtio_ring.c | 100 +++++++++++++++++++++--------------
 1 file changed, 59 insertions(+), 41 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cd7e755484e3..2806f69c6c9f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -371,17 +371,17 @@ static void vring_unmap_one_split(const struct vring_virtqueue *vq,
 
 	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
 
-	if (flags & VRING_DESC_F_INDIRECT) {
+	if (flags & BIT(VRING_SPLIT_DESC_F_INDIRECT)) {
 		dma_unmap_single(vring_dma_dev(vq),
 				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
 				 virtio32_to_cpu(vq->vq.vdev, desc->len),
-				 (flags & VRING_DESC_F_WRITE) ?
+				 (flags & BIT(VRING_SPLIT_DESC_F_WRITE)) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
 		dma_unmap_page(vring_dma_dev(vq),
 			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
 			       virtio32_to_cpu(vq->vq.vdev, desc->len),
-			       (flags & VRING_DESC_F_WRITE) ?
+			       (flags & BIT(VRING_SPLIT_DESC_F_WRITE)) ?
 			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	}
 }
@@ -481,7 +481,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			if (vring_mapping_error(vq, addr))
 				goto unmap_release;
 
-			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT);
+			desc[i].flags = cpu_to_virtio16(_vq->vdev,
+						BIT(VRING_SPLIT_DESC_F_NEXT));
 			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
 			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
 			prev = i;
@@ -494,7 +495,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			if (vring_mapping_error(vq, addr))
 				goto unmap_release;
 
-			desc[i].flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT | VRING_DESC_F_WRITE);
+			desc[i].flags = cpu_to_virtio16(_vq->vdev,
+						BIT(VRING_SPLIT_DESC_F_NEXT) |
+						BIT(VRING_SPLIT_DESC_F_WRITE));
 			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
 			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
 			prev = i;
@@ -502,7 +505,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 		}
 	}
 	/* Last one doesn't continue. */
-	desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
+	desc[prev].flags &= cpu_to_virtio16(_vq->vdev,
+					(u16)~BIT(VRING_SPLIT_DESC_F_NEXT));
 
 	if (indirect) {
 		/* Now that the indirect table is filled in, map it. */
@@ -513,7 +517,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			goto unmap_release;
 
 		vq->split.vring.desc[head].flags = cpu_to_virtio16(_vq->vdev,
-				VRING_DESC_F_INDIRECT);
+				BIT(VRING_SPLIT_DESC_F_INDIRECT));
 		vq->split.vring.desc[head].addr = cpu_to_virtio64(_vq->vdev,
 				addr);
 
@@ -603,8 +607,8 @@ static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
 			new, old);
 	} else {
 		needs_kick = !(vq->split.vring.used->flags &
-					cpu_to_virtio16(_vq->vdev,
-						VRING_USED_F_NO_NOTIFY));
+				cpu_to_virtio16(_vq->vdev,
+					BIT(VRING_SPLIT_USED_F_NO_NOTIFY)));
 	}
 	END_USE(vq);
 	return needs_kick;
@@ -614,7 +618,8 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 			     void **ctx)
 {
 	unsigned int i, j;
-	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
+	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev,
+					BIT(VRING_SPLIT_DESC_F_NEXT));
 
 	/* Clear data ptr. */
 	vq->split.desc_state[head].data = NULL;
 
@@ -649,7 +654,8 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 				vq->split.vring.desc[head].len);
 
 		BUG_ON(!(vq->split.vring.desc[head].flags &
-			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
+			 cpu_to_virtio16(vq->vq.vdev,
+				 BIT(VRING_SPLIT_DESC_F_INDIRECT))));
 		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
 
 		for (j = 0; j < len / sizeof(struct vring_desc); j++)
@@ -715,7 +721,8 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
 	/* If we expect an interrupt for the next entry, tell host
 	 * by writing event index and flush out the write before
 	 * the read in the next get_buf call. */
-	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
+	if (!(vq->split.avail_flags_shadow &
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT)))
 		virtio_store_mb(vq->weak_barriers,
 				&vring_used_event(&vq->split.vring),
 				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
@@ -730,8 +737,10 @@ static void virtqueue_disable_cb_split(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!(vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
-		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+	if (!(vq->split.avail_flags_shadow &
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT))) {
+		vq->split.avail_flags_shadow |=
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT);
 		if (!vq->event)
 			vq->split.vring.avail->flags =
 				cpu_to_virtio16(_vq->vdev,
@@ -751,8 +760,10 @@ static unsigned virtqueue_enable_cb_prepare_split(struct virtqueue *_vq)
 	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
 	 * either clear the flags bit or point the event index at the next
 	 * entry. Always do both to keep code simple. */
-	if (vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
-		vq->split.avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+	if (vq->split.avail_flags_shadow &
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT)) {
+		vq->split.avail_flags_shadow &=
+				~BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT);
 		if (!vq->event)
 			vq->split.vring.avail->flags =
 				cpu_to_virtio16(_vq->vdev,
@@ -784,8 +795,10 @@ static bool virtqueue_enable_cb_delayed_split(struct virtqueue *_vq)
 	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
 	 * either clear the flags bit or point the event index at the next
 	 * entry. Always update the event index to keep code simple. */
-	if (vq->split.avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
-		vq->split.avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+	if (vq->split.avail_flags_shadow &
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT)) {
+		vq->split.avail_flags_shadow &=
+				~BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT);
 		if (!vq->event)
 			vq->split.vring.avail->flags =
 				cpu_to_virtio16(_vq->vdev,
@@ -912,15 +925,15 @@ static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
 
 	flags = state->flags;
 
-	if (flags & VRING_DESC_F_INDIRECT) {
+	if (flags & BIT(VRING_PACKED_DESC_F_INDIRECT)) {
 		dma_unmap_single(vring_dma_dev(vq),
 				 state->addr, state->len,
-				 (flags & VRING_DESC_F_WRITE) ?
+				 (flags & BIT(VRING_PACKED_DESC_F_WRITE)) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
 		dma_unmap_page(vring_dma_dev(vq),
 			       state->addr, state->len,
-			       (flags & VRING_DESC_F_WRITE) ?
+			       (flags & BIT(VRING_PACKED_DESC_F_WRITE)) ?
 			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	}
 }
@@ -935,17 +948,17 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 
 	flags = le16_to_cpu(desc->flags);
 
-	if (flags & VRING_DESC_F_INDIRECT) {
+	if (flags & BIT(VRING_PACKED_DESC_F_INDIRECT)) {
 		dma_unmap_single(vring_dma_dev(vq),
 				 le64_to_cpu(desc->addr),
 				 le32_to_cpu(desc->len),
-				 (flags & VRING_DESC_F_WRITE) ?
+				 (flags & BIT(VRING_PACKED_DESC_F_WRITE)) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
 		dma_unmap_page(vring_dma_dev(vq),
 			       le64_to_cpu(desc->addr),
 			       le32_to_cpu(desc->len),
-			       (flags & VRING_DESC_F_WRITE) ?
+			       (flags & BIT(VRING_PACKED_DESC_F_WRITE)) ?
 			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	}
 }
@@ -1002,7 +1015,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 				goto unmap_release;
 
 			desc[i].flags = cpu_to_le16(n < out_sgs ?
-						0 : VRING_DESC_F_WRITE);
+					0 : BIT(VRING_PACKED_DESC_F_WRITE));
 			desc[i].addr = cpu_to_le64(addr);
 			desc[i].len = cpu_to_le32(sg->length);
 			i++;
@@ -1025,8 +1038,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		vq->packed.desc_extra[id].addr = addr;
 		vq->packed.desc_extra[id].len = total_sg *
 				sizeof(struct vring_packed_desc);
-		vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
-						  vq->packed.avail_used_flags;
+		vq->packed.desc_extra[id].flags =
+				BIT(VRING_PACKED_DESC_F_INDIRECT) |
+				vq->packed.avail_used_flags;
 	}
 
 	/*
@@ -1035,8 +1049,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	 * the list are made available.
 	 */
 	virtio_wmb(vq->weak_barriers);
-	vq->packed.vring.desc[head].flags = cpu_to_le16(VRING_DESC_F_INDIRECT |
-						vq->packed.avail_used_flags);
+	vq->packed.vring.desc[head].flags =
+			cpu_to_le16(BIT(VRING_PACKED_DESC_F_INDIRECT) |
+				    vq->packed.avail_used_flags);
 
 	/* We're using some buffers from the free list. */
 	vq->vq.num_free -= 1;
@@ -1047,8 +1062,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 		n = 0;
 		vq->packed.avail_wrap_counter ^= 1;
 		vq->packed.avail_used_flags ^=
-				1 << VRING_PACKED_DESC_F_AVAIL |
-				1 << VRING_PACKED_DESC_F_USED;
+				BIT(VRING_PACKED_DESC_F_AVAIL) |
+				BIT(VRING_PACKED_DESC_F_USED);
 	}
 	vq->packed.next_avail_idx = n;
 	vq->free_head = vq->packed.desc_state[id].next;
@@ -1141,8 +1156,10 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 				goto unmap_release;
 
 			flags = cpu_to_le16(vq->packed.avail_used_flags |
-				    (++c == total_sg ? 0 : VRING_DESC_F_NEXT) |
-				    (n < out_sgs ? 0 : VRING_DESC_F_WRITE));
+				    (++c == total_sg ? 0 :
+				     BIT(VRING_PACKED_DESC_F_NEXT)) |
+				    (n < out_sgs ? 0 :
+				     BIT(VRING_PACKED_DESC_F_WRITE)));
 			if (i == head)
 				head_flags = flags;
 			else
 				desc[i].flags = flags;
@@ -1164,8 +1181,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 			if ((unlikely(++i >= vq->packed.vring.num))) {
 				i = 0;
 				vq->packed.avail_used_flags ^=
-					1 << VRING_PACKED_DESC_F_AVAIL |
-					1 << VRING_PACKED_DESC_F_USED;
+					BIT(VRING_PACKED_DESC_F_AVAIL) |
+					BIT(VRING_PACKED_DESC_F_USED);
 			}
 		}
 	}
@@ -1258,7 +1275,7 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 		off_wrap = le16_to_cpu(snapshot.off_wrap);
 
 		wrap_counter = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR;
-		event_idx = off_wrap & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR);
+		event_idx = off_wrap & ~BIT(VRING_PACKED_EVENT_F_WRAP_CTR);
 		if (wrap_counter != vq->packed.avail_wrap_counter)
 			event_idx -= vq->packed.vring.num;
 
@@ -1321,8 +1338,8 @@ static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
 	u16 flags;
 
 	flags = le16_to_cpu(vq->packed.vring.desc[idx].flags);
-	avail = !!(flags & (1 << VRING_PACKED_DESC_F_AVAIL));
-	used = !!(flags & (1 << VRING_PACKED_DESC_F_USED));
+	avail = !!(flags & BIT(VRING_PACKED_DESC_F_AVAIL));
+	used = !!(flags & BIT(VRING_PACKED_DESC_F_USED));
 
 	return avail == used && used == used_wrap_counter;
 }
@@ -1452,7 +1469,7 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, u16 off_wrap)
 	u16 used_idx;
 
 	wrap_counter = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR;
-	used_idx = off_wrap & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR);
+	used_idx = off_wrap & ~BIT(VRING_PACKED_EVENT_F_WRAP_CTR);
 
 	return is_used_desc_packed(vq, used_idx, wrap_counter);
 }
@@ -1625,7 +1642,7 @@ static struct virtqueue *vring_create_virtqueue_packed(
 	vq->packed.avail_wrap_counter = 1;
 	vq->packed.used_wrap_counter = 1;
 	vq->packed.event_flags_shadow = 0;
-	vq->packed.avail_used_flags = 1 << VRING_PACKED_DESC_F_AVAIL;
+	vq->packed.avail_used_flags = BIT(VRING_PACKED_DESC_F_AVAIL);
 
 	vq->packed.desc_state = kmalloc_array(num,
 			sizeof(struct vring_desc_state_packed),
@@ -2088,7 +2105,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 
 	/* No callback?  Tell other side not to bother us. */
 	if (!callback) {
-		vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+		vq->split.avail_flags_shadow |=
+				BIT(VRING_SPLIT_AVAIL_F_NO_INTERRUPT);
 		if (!vq->event)
 			vq->split.vring.avail->flags = cpu_to_virtio16(vdev,
 					vq->split.avail_flags_shadow);
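For reference, the invariant behind the BIT(VRING_PACKED_DESC_F_AVAIL) and
BIT(VRING_PACKED_DESC_F_USED) checks above: a packed-ring descriptor has
been consumed when its AVAIL and USED bits agree with each other and with
the wrap counter the driver expects, which is what is_used_desc_packed()
computes. A standalone userspace sketch of that check (BIT() re-created
locally; an illustration of the logic, not the kernel code itself):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define VRING_PACKED_DESC_F_AVAIL	7
#define VRING_PACKED_DESC_F_USED	15
#define BIT(nr)				(1UL << (nr))

/* Mirror of the driver's check: "used" when the device has flipped
 * USED to match AVAIL, and both equal the expected wrap counter. */
static bool is_used(uint16_t flags, bool used_wrap_counter)
{
	bool avail = !!(flags & BIT(VRING_PACKED_DESC_F_AVAIL));
	bool used = !!(flags & BIT(VRING_PACKED_DESC_F_USED));

	return avail == used && used == used_wrap_counter;
}

int main(void)
{
	/* Driver makes the descriptor available on the first pass... */
	uint16_t flags = BIT(VRING_PACKED_DESC_F_AVAIL);

	printf("before device writeback: %d\n", is_used(flags, true));

	/* ...device marks it used by setting USED to match AVAIL. */
	flags |= BIT(VRING_PACKED_DESC_F_USED);
	printf("after device writeback:  %d\n", is_used(flags, true));
	return 0;
}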