From patchwork Tue Feb 17 15:32:21 2009
X-Patchwork-Submitter: Theodore Ts'o
X-Patchwork-Id: 23264
From: Theodore Ts'o
To: stable@kernel.org
Cc: linux-ext4@vger.kernel.org, "Aneesh Kumar K.V", "Theodore Ts'o"
Date: Tue, 17 Feb 2009 10:32:21 -0500
Message-Id: <1234884762-13580-4-git-send-email-tytso@mit.edu>
In-Reply-To: <1234884762-13580-3-git-send-email-tytso@mit.edu>
References: <1234884762-13580-1-git-send-email-tytso@mit.edu>
 <1234884762-13580-2-git-send-email-tytso@mit.edu>
 <1234884762-13580-3-git-send-email-tytso@mit.edu>
Subject: [PATCH FOR-STABLE-2.6.28 03/24] ext4: Fix the delalloc writepages
 to allocate blocks at the right offset.

From: Aneesh Kumar K.V

When iterating through the pages which have mapped buffer_heads, we fail
to update the b_state value. This results in allocating blocks at
logical offset 0.

Signed-off-by: Aneesh Kumar K.V
Signed-off-by: "Theodore Ts'o"
Cc: stable@kernel.org
(cherry picked from commit 791b7f08954869d7b8ff438f3dac3cfb39778297)
---
 fs/ext4/inode.c |   56 ++++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 39 insertions(+), 17 deletions(-)

diff --git a/fs/ext4/inode.c b/fs/ext4/inode.c
index 25d6adc..008c4b0 100644
--- a/fs/ext4/inode.c
+++ b/fs/ext4/inode.c
@@ -1644,35 +1644,39 @@ struct mpage_da_data {
  */
 static int mpage_da_submit_io(struct mpage_da_data *mpd)
 {
-        struct address_space *mapping = mpd->inode->i_mapping;
-        int ret = 0, err, nr_pages, i;
-        unsigned long index, end;
-        struct pagevec pvec;
         long pages_skipped;
+        struct pagevec pvec;
+        unsigned long index, end;
+        int ret = 0, err, nr_pages, i;
+        struct inode *inode = mpd->inode;
+        struct address_space *mapping = inode->i_mapping;

         BUG_ON(mpd->next_page <= mpd->first_page);
-        pagevec_init(&pvec, 0);
+        /*
+         * We need to start from the first_page to the next_page - 1
+         * to make sure we also write the mapped dirty buffer_heads.
+         * If we look at mpd->lbh.b_blocknr we would only be looking
+         * at the currently mapped buffer_heads.
+         */
         index = mpd->first_page;
         end = mpd->next_page - 1;

+        pagevec_init(&pvec, 0);
         while (index <= end) {
-                /*
-                 * We can use PAGECACHE_TAG_DIRTY lookup here because
-                 * even though we have cleared the dirty flag on the page
-                 * We still keep the page in the radix tree with tag
-                 * PAGECACHE_TAG_DIRTY. See clear_page_dirty_for_io.
-                 * The PAGECACHE_TAG_DIRTY is cleared in set_page_writeback
-                 * which is called via the below writepage callback.
-                 */
-                nr_pages = pagevec_lookup_tag(&pvec, mapping, &index,
-                                PAGECACHE_TAG_DIRTY,
-                                min(end - index,
-                                (pgoff_t)PAGEVEC_SIZE-1) + 1);
+                nr_pages = pagevec_lookup(&pvec, mapping, index, PAGEVEC_SIZE);
                 if (nr_pages == 0)
                         break;
                 for (i = 0; i < nr_pages; i++) {
                         struct page *page = pvec.pages[i];

+                        index = page->index;
+                        if (index > end)
+                                break;
+                        index++;
+
+                        BUG_ON(!PageLocked(page));
+                        BUG_ON(PageWriteback(page));
+
                         pages_skipped = mpd->wbc->pages_skipped;
                         err = mapping->a_ops->writepage(page, mpd->wbc);
                         if (!err && (pages_skipped == mpd->wbc->pages_skipped))
@@ -2086,11 +2090,29 @@ static int __mpage_da_writepage(struct page *page,
                 bh = head;
                 do {
                         BUG_ON(buffer_locked(bh));
+                        /*
+                         * We need to try to allocate
+                         * unmapped blocks in the same page.
+                         * Otherwise we won't make progress
+                         * with the page in ext4_da_writepage
+                         */
                         if (buffer_dirty(bh) &&
                                 (!buffer_mapped(bh) || buffer_delay(bh))) {
                                 mpage_add_bh_to_extent(mpd, logical, bh);
                                 if (mpd->io_done)
                                         return MPAGE_DA_EXTENT_TAIL;
+                        } else if (buffer_dirty(bh) && (buffer_mapped(bh))) {
+                                /*
+                                 * mapped dirty buffer. We need to update
+                                 * the b_state because we look at
+                                 * b_state in mpage_da_map_blocks. We don't
+                                 * update b_size because if we find an
+                                 * unmapped buffer_head later we need to
+                                 * use the b_state flag of that buffer_head.
+                                 */
+                                if (mpd->lbh.b_size == 0)
+                                        mpd->lbh.b_state =
+                                                bh->b_state & BH_FLAGS;
                         }
                         logical++;
                 } while ((bh = bh->b_this_page) != head);
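
[Editor's note] For readers following the first hunk: switching from
pagevec_lookup_tag() to pagevec_lookup() means the batch is no longer
capped at 'end', so the loop must re-derive 'index' from each page it
gets back and bail out once it walks past the range. Below is a minimal
user-space sketch of that walk, an illustration only, not kernel code:
the 'cache' array, the PAGEVEC_SIZE value, and the pagevec_lookup_sim()
helper are invented stand-ins for the page-cache radix tree and
pagevec_lookup().

#include <stdio.h>
#include <stddef.h>

#define PAGEVEC_SIZE 4

/* Page indices present in the mapping (with holes, as in a sparse file). */
static const unsigned long cache[] = { 3, 4, 7, 8, 9, 12, 15 };
static const size_t ncache = sizeof(cache) / sizeof(cache[0]);

/* Invented helper: return up to PAGEVEC_SIZE pages at or after 'start'. */
static int pagevec_lookup_sim(unsigned long start, unsigned long *out)
{
        int n = 0;

        for (size_t i = 0; i < ncache && n < PAGEVEC_SIZE; i++)
                if (cache[i] >= start)
                        out[n++] = cache[i];
        return n;
}

int main(void)
{
        unsigned long index = 4;        /* mpd->first_page    */
        unsigned long end = 12;         /* mpd->next_page - 1 */
        unsigned long pvec[PAGEVEC_SIZE];
        int nr_pages, i;

        while (index <= end) {
                nr_pages = pagevec_lookup_sim(index, pvec);
                if (nr_pages == 0)
                        break;
                for (i = 0; i < nr_pages; i++) {
                        /*
                         * The lookup was not bounded by 'end', so a batch
                         * may contain pages past the range: check first.
                         */
                        index = pvec[i];
                        if (index > end)
                                break;
                        index++;        /* next batch resumes after this page */
                        printf("writepage(%lu)\n", pvec[i]);
                }
        }
        return 0;
}

This prints writepage() for pages 4, 7, 8, 9 and 12. Note how 'index' is
advanced to page->index + 1 rather than by the batch size; that keeps the
walk correct when the mapping has holes, exactly as in the patched loop.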
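
[Editor's note] And for the second hunk, a similar user-space sketch of
the buffer_head walk in __mpage_da_writepage(). The struct, the BH_* bit
values, the BH_FLAGS mask, and the 4096-byte block size are simplified
stand-ins invented for illustration; only the branch structure and the
b_size == 0 guard mirror the actual fix.

#include <stdio.h>

#define BH_DIRTY        (1u << 0)
#define BH_MAPPED       (1u << 1)
#define BH_DELAY        (1u << 2)
/* Stand-in for the kernel's BH_FLAGS mask of copyable state bits. */
#define BH_FLAGS        (BH_MAPPED | BH_DELAY)

/* Simplified stand-in for the extent tracked in mpd->lbh. */
struct extent {
        unsigned int b_state;
        unsigned long b_size;
        unsigned long b_blocknr;
};

int main(void)
{
        /* One page: two mapped dirty buffers, then a delayed (unmapped) one. */
        unsigned int page_bufs[] = {
                BH_DIRTY | BH_MAPPED,
                BH_DIRTY | BH_MAPPED,
                BH_DIRTY | BH_DELAY,
        };
        struct extent lbh = { 0, 0, 0 };
        unsigned long logical = 100;    /* first logical block of the page */

        for (int i = 0; i < 3; i++, logical++) {
                unsigned int st = page_bufs[i];

                if ((st & BH_DIRTY) &&
                    (!(st & BH_MAPPED) || (st & BH_DELAY))) {
                        /* Inlined stand-in for mpage_add_bh_to_extent(). */
                        if (lbh.b_size == 0)
                                lbh.b_blocknr = logical;
                        lbh.b_size += 4096;
                        lbh.b_state = st & BH_FLAGS;
                } else if ((st & BH_DIRTY) && (st & BH_MAPPED)) {
                        /*
                         * The fix: remember a mapped dirty buffer's state,
                         * but only while the extent is still empty, so a
                         * later unmapped buffer's state takes precedence.
                         */
                        if (lbh.b_size == 0)
                                lbh.b_state = st & BH_FLAGS;
                }
        }

        /* Prints: extent at logical block 102, size 4096, state 0x4 */
        printf("extent at logical block %lu, size %lu, state 0x%x\n",
               lbh.b_blocknr, lbh.b_size, lbh.b_state);
        return 0;
}

Run on this sample page, the extent's state is seeded from the mapped
dirty buffers and then superseded by the delayed buffer's state; before
the patch, a page containing only mapped dirty buffers left the stale
initial b_state in place, which is the failure mode the commit message
describes.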