Message ID | 09f6e8db17903d91c840716db4bc7c551e302e92.1620304098.git.limings@nvidia.com
State      | New
Series     | UBUNTU: SAUCE: tmfifo: Fix a memory barrier issue
On 06/05/2021 08:30, Liming Sun wrote:
> From: Liming Sun <lsun@mellanox.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1927262
>
> The virtio framework uses wmb() when updating avail->idx. It
> guarantees the write order, but not necessarily the load order
> for the code accessing the memory. This commit adds a load barrier
> after reading avail->idx to make sure all the data in the
> descriptor is visible. It also adds a barrier when returning the
> packet to the virtio framework to make sure reads/writes are
> visible to the virtio code.
>
> Signed-off-by: Liming Sun <limings@nvidia.com>
> ---
>  drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)

Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>

Best regards,
Krzysztof
Acked-by: Tim Gardner <tim.gardner@canonical.com>

It's unlikely to make things any worse. Should this patch be submitted
upstream?

On 5/6/21 6:30 AM, Liming Sun wrote:
> From: Liming Sun <lsun@mellanox.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1927262
>
> The virtio framework uses wmb() when updating avail->idx. It
> guarantees the write order, but not necessarily the load order
> for the code accessing the memory. This commit adds a load barrier
> after reading avail->idx to make sure all the data in the
> descriptor is visible. It also adds a barrier when returning the
> packet to the virtio framework to make sure reads/writes are
> visible to the virtio code.
>
> Signed-off-by: Liming Sun <limings@nvidia.com>
> ---
>  drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
> index 5739a966..92bda873 100644
> --- a/drivers/platform/mellanox/mlxbf-tmfifo.c
> +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
> @@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
>  	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
>  		return NULL;
>  
> +	/* Make sure 'avail->idx' is visible already. */
> +	virtio_rmb(false);
> +
>  	idx = vring->next_avail % vr->num;
>  	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
>  	if (WARN_ON(head >= vr->num))
> @@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
>  	 * done or not. Add a memory barrier here to make sure the update above
>  	 * completes before updating the idx.
>  	 */
> -	mb();
> +	virtio_mb(false);
>  	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
>  }
>  
> @@ -730,6 +733,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
>  	desc = NULL;
>  	fifo->vring[is_rx] = NULL;
>  
> +	/*
> +	 * Make sure the load/store are in order before
> +	 * returning back to virtio.
> +	 */
> +	virtio_mb(false);
> +
>  	/* Notify upper layer that packet is done. */
>  	spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
>  	vring_interrupt(0, vring->vq);
Thanks for the suggestion. The fix solved some reported issues. Yes, I'll
work on submitting it to upstream.

- Liming

> -----Original Message-----
> From: Tim Gardner <tim.gardner@canonical.com>
> Sent: Friday, May 7, 2021 10:36 AM
> To: Liming Sun <limings@nvidia.com>; kernel-team@lists.ubuntu.com
> Cc: Liming Sun <limings@nvidia.com>
> Subject: ACK/Cmnt: [SRU][F:linux-bluefield][PATCH v3 1/1] UBUNTU: SAUCE:
> tmfifo: Fix a memory barrier issue
>
> Acked-by: Tim Gardner <tim.gardner@canonical.com>
>
> It's unlikely to make things any worse. Should this patch be submitted
> upstream?
>
> On 5/6/21 6:30 AM, Liming Sun wrote:
> > From: Liming Sun <lsun@mellanox.com>
> >
> > BugLink: https://bugs.launchpad.net/bugs/1927262
> >
> > The virtio framework uses wmb() when updating avail->idx. It
> > guarantees the write order, but not necessarily the load order
> > for the code accessing the memory. This commit adds a load barrier
> > after reading avail->idx to make sure all the data in the
> > descriptor is visible. It also adds a barrier when returning the
> > packet to the virtio framework to make sure reads/writes are
> > visible to the virtio code.
> >
> > Signed-off-by: Liming Sun <limings@nvidia.com>
> > ---
> >  drivers/platform/mellanox/mlxbf-tmfifo.c | 11 ++++++++++-
> >  1 file changed, 10 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
> > index 5739a966..92bda873 100644
> > --- a/drivers/platform/mellanox/mlxbf-tmfifo.c
> > +++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
> > @@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
> >  	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
> >  		return NULL;
> >  
> > +	/* Make sure 'avail->idx' is visible already. */
> > +	virtio_rmb(false);
> > +
> >  	idx = vring->next_avail % vr->num;
> >  	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
> >  	if (WARN_ON(head >= vr->num))
> > @@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
> >  	 * done or not. Add a memory barrier here to make sure the update above
> >  	 * completes before updating the idx.
> >  	 */
> > -	mb();
> > +	virtio_mb(false);
> >  	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
> >  }
> >  
> > @@ -730,6 +733,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
> >  	desc = NULL;
> >  	fifo->vring[is_rx] = NULL;
> >  
> > +	/*
> > +	 * Make sure the load/store are in order before
> > +	 * returning back to virtio.
> > +	 */
> > +	virtio_mb(false);
> > +
> >  	/* Notify upper layer that packet is done. */
> >  	spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
> >  	vring_interrupt(0, vring->vq);
>
> --
> -----------
> Tim Gardner
> Canonical, Inc
diff --git a/drivers/platform/mellanox/mlxbf-tmfifo.c b/drivers/platform/mellanox/mlxbf-tmfifo.c
index 5739a966..92bda873 100644
--- a/drivers/platform/mellanox/mlxbf-tmfifo.c
+++ b/drivers/platform/mellanox/mlxbf-tmfifo.c
@@ -294,6 +294,9 @@ static irqreturn_t mlxbf_tmfifo_irq_handler(int irq, void *arg)
 	if (vring->next_avail == virtio16_to_cpu(vdev, vr->avail->idx))
 		return NULL;
 
+	/* Make sure 'avail->idx' is visible already. */
+	virtio_rmb(false);
+
 	idx = vring->next_avail % vr->num;
 	head = virtio16_to_cpu(vdev, vr->avail->ring[idx]);
 	if (WARN_ON(head >= vr->num))
@@ -322,7 +325,7 @@ static void mlxbf_tmfifo_release_desc(struct mlxbf_tmfifo_vring *vring,
 	 * done or not. Add a memory barrier here to make sure the update above
 	 * completes before updating the idx.
 	 */
-	mb();
+	virtio_mb(false);
 	vr->used->idx = cpu_to_virtio16(vdev, vr_idx + 1);
 }
 
@@ -730,6 +733,12 @@ static bool mlxbf_tmfifo_rxtx_one_desc(struct mlxbf_tmfifo_vring *vring,
 	desc = NULL;
 	fifo->vring[is_rx] = NULL;
 
+	/*
+	 * Make sure the load/store are in order before
+	 * returning back to virtio.
+	 */
+	virtio_mb(false);
+
 	/* Notify upper layer that packet is done. */
 	spin_lock_irqsave(&fifo->spin_lock[is_rx], flags);
 	vring_interrupt(0, vring->vq);