| Message ID | 20220331132510.17020-1-tim.gardner@canonical.com |
| --- | --- |
| State | New |
| Series | [focal/linux] net/mlx5e: Fix page DMA map/unmap attributes |
Acked-by: Dimitri John Ledkov <dimitri.ledkov@canonical.com>

--
Regards,

Dimitri.

On Thu, 31 Mar 2022 at 14:25, Tim Gardner <tim.gardner@canonical.com> wrote:
>
> From: Aya Levin <ayal@nvidia.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1967292
>
> Driver initiates DMA sync, hence it may skip CPU sync. Add
> DMA_ATTR_SKIP_CPU_SYNC as input attribute both to dma_map_page and
> dma_unmap_page to avoid redundant sync with the CPU.
> When forcing the device to work with SWIOTLB, the extra sync might cause
> data corruption. The driver unmaps the whole page while the hardware
> used just a part of the bounce buffer. So syncing overrides the entire
> page with bounce buffer that only partially contains real data.
>
> Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
> Fixes: db05815b36cb ("net/mlx5e: Add XSK zero-copy support")
> Signed-off-by: Aya Levin <ayal@nvidia.com>
> Reviewed-by: Gal Pressman <gal@nvidia.com>
> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
> (backported from commit 0b7cfa4082fbf550595bc0e40f05614bd83bf0cd)
> [rtg - drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c does not exist]
> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
> ---
>
> commit 0b7cfa4082fbf550595bc0e40f05614bd83bf0cd has already been included in 5.13.0-40.45 and 5.15.0-19.19
>
> ---
>  drivers/net/ethernet/mellanox/mlx5/core/en_rx.c | 7 ++++---
>  1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> index 386f49949a23d..3ae7a3973c745 100644
> --- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> +++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
> @@ -248,8 +248,8 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
>  	if (unlikely(!dma_info->page))
>  		return -ENOMEM;
>
> -	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
> -				      PAGE_SIZE, rq->buff.map_dir);
> +	dma_info->addr = dma_map_page_attrs(rq->pdev, dma_info->page, 0, PAGE_SIZE,
> +					    rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
>  	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
>  		page_pool_recycle_direct(rq->page_pool, dma_info->page);
>  		dma_info->page = NULL;
> @@ -270,7 +270,8 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
>
>  void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
>  {
> -	dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
> +	dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
> +			     DMA_ATTR_SKIP_CPU_SYNC);
>  }
>
>  void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,
> --
> 2.35.1
>
>
> --
> kernel-team mailing list
> kernel-team@lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/kernel-team
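For readers who are not familiar with the DMA-attributes variants this patch switches to, the pattern the commit message relies on is sketched below: the page is mapped and unmapped with DMA_ATTR_SKIP_CPU_SYNC, and the driver explicitly syncs only the region the device actually wrote before the CPU reads it. This is a minimal sketch, not mlx5e code; the helper names (my_rx_map_page, my_rx_sync_frag, my_rx_unmap_page) are made up for illustration, while dma_map_page_attrs(), dma_unmap_page_attrs(), dma_sync_single_range_for_cpu() and DMA_ATTR_SKIP_CPU_SYNC are the real kernel DMA API.

#include <linux/dma-mapping.h>
#include <linux/mm.h>

/*
 * Sketch only: map an RX page without the implicit CPU sync. The driver
 * takes over responsibility for syncing, and will only sync the bytes
 * the device actually wrote.
 */
static dma_addr_t my_rx_map_page(struct device *dev, struct page *page)
{
	dma_addr_t addr;

	addr = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE,
				  DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(dev, addr))
		return DMA_MAPPING_ERROR;
	return addr;
}

/*
 * After a completion arrives, sync only the received fragment back to
 * the CPU before the stack touches it.
 */
static void my_rx_sync_frag(struct device *dev, dma_addr_t addr,
			    unsigned int offset, unsigned int len)
{
	dma_sync_single_range_for_cpu(dev, addr, offset, len, DMA_FROM_DEVICE);
}

/*
 * Skip the CPU sync on unmap as well. With SWIOTLB, a full PAGE_SIZE sync
 * here would copy the whole bounce buffer back over the page even though
 * the device only filled part of it, which is the corruption the patch fixes.
 */
static void my_rx_unmap_page(struct device *dev, dma_addr_t addr)
{
	dma_unmap_page_attrs(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE,
			     DMA_ATTR_SKIP_CPU_SYNC);
}

In the driver itself the mapping direction comes from rq->buff.map_dir rather than a hard-coded DMA_FROM_DEVICE, as the diff shows, but the map/sync/unmap split is the same.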
On 31.03.22 15:25, Tim Gardner wrote:
> From: Aya Levin <ayal@nvidia.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1967292
>
> Driver initiates DMA sync, hence it may skip CPU sync. Add
> DMA_ATTR_SKIP_CPU_SYNC as input attribute both to dma_map_page and
> dma_unmap_page to avoid redundant sync with the CPU.
> When forcing the device to work with SWIOTLB, the extra sync might cause
> data corruption. The driver unmaps the whole page while the hardware
> used just a part of the bounce buffer. So syncing overrides the entire
> page with bounce buffer that only partially contains real data.
>
> Fixes: bc77b240b3c5 ("net/mlx5e: Add fragmented memory support for RX multi packet WQE")
> Fixes: db05815b36cb ("net/mlx5e: Add XSK zero-copy support")
> Signed-off-by: Aya Levin <ayal@nvidia.com>
> Reviewed-by: Gal Pressman <gal@nvidia.com>
> Signed-off-by: Saeed Mahameed <saeedm@nvidia.com>
> (backported from commit 0b7cfa4082fbf550595bc0e40f05614bd83bf0cd)
> [rtg - drivers/net/ethernet/mellanox/mlx5/core/en/xsk/pool.c does not exist]
> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>

Acked-by: Stefan Bader <stefan.bader@canonical.com>
Applied to focal:linux/master-next. Thanks.

-Zack
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 386f49949a23d..3ae7a3973c745 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -248,8 +248,8 @@ static inline int mlx5e_page_alloc_pool(struct mlx5e_rq *rq,
 	if (unlikely(!dma_info->page))
 		return -ENOMEM;
 
-	dma_info->addr = dma_map_page(rq->pdev, dma_info->page, 0,
-				      PAGE_SIZE, rq->buff.map_dir);
+	dma_info->addr = dma_map_page_attrs(rq->pdev, dma_info->page, 0, PAGE_SIZE,
+					    rq->buff.map_dir, DMA_ATTR_SKIP_CPU_SYNC);
 	if (unlikely(dma_mapping_error(rq->pdev, dma_info->addr))) {
 		page_pool_recycle_direct(rq->page_pool, dma_info->page);
 		dma_info->page = NULL;
@@ -270,7 +270,8 @@ static inline int mlx5e_page_alloc(struct mlx5e_rq *rq,
 
 void mlx5e_page_dma_unmap(struct mlx5e_rq *rq, struct mlx5e_dma_info *dma_info)
 {
-	dma_unmap_page(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir);
+	dma_unmap_page_attrs(rq->pdev, dma_info->addr, PAGE_SIZE, rq->buff.map_dir,
+			     DMA_ATTR_SKIP_CPU_SYNC);
 }
 
 void mlx5e_page_release_dynamic(struct mlx5e_rq *rq,