| Message ID | 20220512120706.3871-2-tim.gardner@canonical.com |
|---|---|
| State | New |
| Series | UBUNTU: SAUCE: swiotlb: Max mapping size takes min align mask into account |
On Thu, May 12, 2022 at 06:07:06AM -0600, Tim Gardner wrote:
> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>
> BugLink: https://bugs.launchpad.net/bugs/1973169
>
> swiotlb_find_slots() skips slots according to the io tlb alignment
> mask calculated from the min align mask and the original physical
> address offset. This affects the max mapping size: when the original
> offset is non-zero, a mapping can no longer reach IO_TLB_SEGSIZE *
> IO_TLB_SIZE bytes. This causes a boot failure in Hyper-V Isolation
> VMs, where swiotlb force is enabled. The SCSI layer uses the return
> value of dma_max_mapping_size() to set the max segment size, which
> ultimately calls swiotlb_max_mapping_size(). The Hyper-V storage
> driver sets the min align mask to 4k - 1, so the SCSI layer may pass
> a 256k request buffer with a 0~4k offset, and the Hyper-V storage
> driver then cannot get a swiotlb bounce buffer via the DMA API:
> swiotlb_find_slots() cannot find a 256k bounce buffer at a non-zero
> offset. Make swiotlb_max_mapping_size() take the min align mask into
> account.
>
> (patch merged from https://lore.kernel.org/lkml/20220511060212.GA32192@lst.de/T/)
>
> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
> ---
>  kernel/dma/swiotlb.c | 13 ++++++++++++-
>  1 file changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index ce241d72ad03..570af75332f5 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -766,7 +766,18 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
>
>  size_t swiotlb_max_mapping_size(struct device *dev)
>  {
> -	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
> +	int min_align_mask = dma_get_min_align_mask(dev);
> +	int min_align = 0;
> +
> +	/*
> +	 * swiotlb_find_slots() skips slots according to
> +	 * min align mask. This affects max mapping size.
> +	 * Take it into account here.
> +	 */
> +	if (min_align_mask)
> +		min_align = roundup(min_align_mask, IO_TLB_SIZE);
> +
> +	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
>  }

I don't like this much; I am not sure how dma_get_min_align_mask() is bounded. But given that this is restricted to linux-azure and the mailing list discussion is leaning towards accepting such a fix:

Acked-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>

>  bool is_swiotlb_active(struct device *dev)
> --
> 2.36.0
>
>
> --
> kernel-team mailing list
> kernel-team@lists.ubuntu.com
> https://lists.ubuntu.com/mailman/listinfo/kernel-team
On 5/12/22 07:07, Thadeu Lima de Souza Cascardo wrote:
> On Thu, May 12, 2022 at 06:07:06AM -0600, Tim Gardner wrote:
>> From: Tianyu Lan <Tianyu.Lan@microsoft.com>
>>
>> BugLink: https://bugs.launchpad.net/bugs/1973169
>>
>> swiotlb_find_slots() skips slots according to the io tlb alignment
>> mask calculated from the min align mask and the original physical
>> address offset. This affects the max mapping size: when the original
>> offset is non-zero, a mapping can no longer reach IO_TLB_SEGSIZE *
>> IO_TLB_SIZE bytes. This causes a boot failure in Hyper-V Isolation
>> VMs, where swiotlb force is enabled. The SCSI layer uses the return
>> value of dma_max_mapping_size() to set the max segment size, which
>> ultimately calls swiotlb_max_mapping_size(). The Hyper-V storage
>> driver sets the min align mask to 4k - 1, so the SCSI layer may pass
>> a 256k request buffer with a 0~4k offset, and the Hyper-V storage
>> driver then cannot get a swiotlb bounce buffer via the DMA API:
>> swiotlb_find_slots() cannot find a 256k bounce buffer at a non-zero
>> offset. Make swiotlb_max_mapping_size() take the min align mask into
>> account.
>>
>> (patch merged from https://lore.kernel.org/lkml/20220511060212.GA32192@lst.de/T/)
>>
>> Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
>> Signed-off-by: Tim Gardner <tim.gardner@canonical.com>
>> ---
>>  kernel/dma/swiotlb.c | 13 ++++++++++++-
>>  1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
>> index ce241d72ad03..570af75332f5 100644
>> --- a/kernel/dma/swiotlb.c
>> +++ b/kernel/dma/swiotlb.c
>> @@ -766,7 +766,18 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
>>
>>  size_t swiotlb_max_mapping_size(struct device *dev)
>>  {
>> -	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
>> +	int min_align_mask = dma_get_min_align_mask(dev);
>> +	int min_align = 0;
>> +
>> +	/*
>> +	 * swiotlb_find_slots() skips slots according to
>> +	 * min align mask. This affects max mapping size.
>> +	 * Take it into account here.
>> +	 */
>> +	if (min_align_mask)
>> +		min_align = roundup(min_align_mask, IO_TLB_SIZE);
>> +
>> +	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
>>  }
>
> I don't like this much; I am not sure how dma_get_min_align_mask() is
> bounded. But given that this is restricted to linux-azure and the
> mailing list discussion is leaning towards accepting such a fix:
>
> Acked-by: Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
>

I should have mentioned that MSFT has tested this patch.

There are getting to be a lot of swiotlb CVM SAUCE patches. A more thorough upstream review would have cut down on the number of tweak patches.

>>  bool is_swiotlb_active(struct device *dev)
>> --
>> 2.36.0
>>
>>
>> --
>> kernel-team mailing list
>> kernel-team@lists.ubuntu.com
>> https://lists.ubuntu.com/mailman/listinfo/kernel-team
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ce241d72ad03..570af75332f5 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -766,7 +766,18 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 
 size_t swiotlb_max_mapping_size(struct device *dev)
 {
-	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
+	int min_align_mask = dma_get_min_align_mask(dev);
+	int min_align = 0;
+
+	/*
+	 * swiotlb_find_slots() skips slots according to
+	 * min align mask. This affects max mapping size.
+	 * Take it into account here.
+	 */
+	if (min_align_mask)
+		min_align = roundup(min_align_mask, IO_TLB_SIZE);
+
+	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
 }
 
 bool is_swiotlb_active(struct device *dev)