| Message ID | 20251025032221.2905818-19-libaokun@huaweicloud.com |
|---|---|
| State | Superseded |
| Series | ext4: enable block size larger than page size |
On Sat 25-10-25 11:22:14, libaokun@huaweicloud.com wrote:
> From: Baokun Li <libaokun1@huawei.com>
>
> Use the EXT4_P_TO_LBLK/EXT4_LBLK_TO_P macros to complete the conversion
> between folio indexes and blocks to avoid negative left/right shifts after
> supporting blocksize greater than PAGE_SIZE.
>
> Signed-off-by: Baokun Li <libaokun1@huawei.com>
> Reviewed-by: Zhang Yi <yi.zhang@huawei.com>

Looks good. Feel free to add:

Reviewed-by: Jan Kara <jack@suse.cz>

								Honza

> ---
>  fs/ext4/inode.c | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
>
> diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
> index d97ce88d6e0a..cbf04b473ae7 100644
> --- a/fs/ext4/inode.c
> +++ b/fs/ext4/inode.c
> @@ -2289,15 +2289,14 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
>  	struct folio_batch fbatch;
>  	unsigned nr, i;
>  	struct inode *inode = mpd->inode;
> -	int bpp_bits = PAGE_SHIFT - inode->i_blkbits;
>  	pgoff_t start, end;
>  	ext4_lblk_t lblk;
>  	ext4_fsblk_t pblock;
>  	int err;
>  	bool map_bh = false;
>
> -	start = mpd->map.m_lblk >> bpp_bits;
> -	end = (mpd->map.m_lblk + mpd->map.m_len - 1) >> bpp_bits;
> +	start = EXT4_LBLK_TO_P(inode, mpd->map.m_lblk);
> +	end = EXT4_LBLK_TO_P(inode, mpd->map.m_lblk + mpd->map.m_len - 1);
>  	pblock = mpd->map.m_pblk;
>
>  	folio_batch_init(&fbatch);
> @@ -2308,7 +2307,7 @@ static int mpage_map_and_submit_buffers(struct mpage_da_data *mpd)
>  	for (i = 0; i < nr; i++) {
>  		struct folio *folio = fbatch.folios[i];
>
> -		lblk = folio->index << bpp_bits;
> +		lblk = EXT4_P_TO_LBLK(inode, folio->index);
>  		err = mpage_process_folio(mpd, folio, &lblk, &pblock,
>  					  &map_bh);
>  		/*
> --
> 2.46.1
>