From patchwork Mon Apr 25 20:23:19 2011
X-Patchwork-Submitter: Curt Wohlgemuth
X-Patchwork-Id: 92784
From: Curt Wohlgemuth <curtw@google.com>
To: linux-ext4@vger.kernel.org
Cc: jim@meyering.net, cmm@us.ibm.com, hughd@google.com, tytso@mit.edu,
	Curt Wohlgemuth <curtw@google.com>
Subject: [PATCH v3] ext4: Don't set PageUptodate in ext4_end_bio()
Date: Mon, 25 Apr 2011 13:23:19 -0700
Message-Id: <1303762999-20541-1-git-send-email-curtw@google.com>
X-Mailer: git-send-email 1.7.3.1

In the bio completion routine, we should not be setting PageUptodate
at all -- it's set at sys_write() time, and is unaffected by the
success or failure of the write to disk.

This can cause a page corruption bug when block size < page size and
we have written only a single block of the page: we might end up
marking the entire page uptodate, which will cause subsequent reads
of the unwritten blocks to return bad data.

This commit also takes the opportunity to clean up error handling in
ext4_end_bio() and to remove some extraneous code:

   - fixes ext4_end_bio() to set AS_EIO in the page->mapping->flags on
     error, which was left out by mistake
   - removes the clear_buffer_dirty() call on unmapped buffers for
     each page
   - consolidates page/buffer error handling in a single section

Signed-off-by: Curt Wohlgemuth <curtw@google.com>
Reported-by: Jim Meyering <jim@meyering.net>
Reported-by: Hugh Dickins <hughd@google.com>
Cc: Mingming Cao <cmm@us.ibm.com>
---
Changelog since v2:
   - Removed the clear_buffer_dirty() call
   - Consolidated error handling for pages and buffer heads
   - Loop over the BHs in a page even when page size == block size, so
     we emit the correct error for such a case

Changelog since v1:
   - Added commit message text about setting AS_EIO for the page on
     error
   - Continue to loop over all BHs in a page and emit unique errors
     for each of them
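
For reviewers who want to see the failure mode concretely, below is a
rough userspace sketch of the scenario from the commit message (it is
not part of the patch).  The mount point, the 1K-block / 4K-page
geometry, and the reliance on posix_fadvise() actually evicting the
page are all assumptions, so treat it as an illustration rather than a
guaranteed reproducer:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define BLKSIZE	1024			/* assumed ext4 block size */
#define PGSIZE	4096			/* assumed system page size */
#define TESTFILE "/mnt/ext4-1k/partial"	/* hypothetical test path */

int main(void)
{
	char buf[PGSIZE];
	int i, fd;

	fd = open(TESTFILE, O_RDWR | O_CREAT | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* (1) Establish known on-disk contents for one full page. */
	memset(buf, 'A', PGSIZE);
	if (pwrite(fd, buf, PGSIZE, 0) != PGSIZE || fsync(fd) < 0) {
		perror("initial write");
		return 1;
	}

	/* (2) Evict the clean page, so the next write builds a fresh
	 * page whose other buffers are not uptodate.  (Advisory only;
	 * eviction is not guaranteed.) */
	if (posix_fadvise(fd, 0, PGSIZE, POSIX_FADV_DONTNEED) != 0)
		perror("posix_fadvise");

	/* (3) Block-aligned rewrite of the first 1K block only; ext4
	 * has no need to read blocks 1-3 of the page to do this. */
	memset(buf, 'B', BLKSIZE);
	if (pwrite(fd, buf, BLKSIZE, 0) != BLKSIZE) {
		perror("pwrite");
		return 1;
	}

	/* (4) Wait for writeback, so ext4_end_bio() has run. */
	if (fsync(fd) < 0) {
		perror("fsync");
		return 1;
	}

	/* (5) Blocks 1-3 were never touched and must still read back
	 * as 'A'.  If the completion path wrongly marked the whole
	 * page uptodate, this read can be served from parts of the
	 * cached page that were never initialized from disk. */
	if (pread(fd, buf, PGSIZE, 0) != PGSIZE) {
		perror("pread");
		return 1;
	}
	for (i = BLKSIZE; i < PGSIZE; i++) {
		if (buf[i] != 'A') {
			fprintf(stderr, "stale data at byte %d\n", i);
			return 1;
		}
	}
	puts("page contents consistent");
	return 0;
}
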
---
 fs/ext4/page-io.c |   39 +++++++++++----------------------------
 1 files changed, 11 insertions(+), 28 deletions(-)

diff --git a/fs/ext4/page-io.c b/fs/ext4/page-io.c
index b6dbd05..7bb8f76 100644
--- a/fs/ext4/page-io.c
+++ b/fs/ext4/page-io.c
@@ -203,46 +203,29 @@ static void ext4_end_bio(struct bio *bio, int error)
 	for (i = 0; i < io_end->num_io_pages; i++) {
 		struct page *page = io_end->pages[i]->p_page;
 		struct buffer_head *bh, *head;
-		int partial_write = 0;
+		loff_t offset;
+		loff_t io_end_offset;
 
-		head = page_buffers(page);
-		if (error)
+		if (error) {
 			SetPageError(page);
-		BUG_ON(!head);
-		if (head->b_size != PAGE_CACHE_SIZE) {
-			loff_t offset;
-			loff_t io_end_offset = io_end->offset + io_end->size;
+			set_bit(AS_EIO, &page->mapping->flags);
+			head = page_buffers(page);
+			BUG_ON(!head);
+
+			io_end_offset = io_end->offset + io_end->size;
 
 			offset = (sector_t) page->index << PAGE_CACHE_SHIFT;
 			bh = head;
 			do {
 				if ((offset >= io_end->offset) &&
-				    (offset+bh->b_size <= io_end_offset)) {
-					if (error)
-						buffer_io_error(bh);
-
-				}
-				if (buffer_delay(bh))
-					partial_write = 1;
-				else if (!buffer_mapped(bh))
-					clear_buffer_dirty(bh);
-				else if (buffer_dirty(bh))
-					partial_write = 1;
+				    (offset+bh->b_size <= io_end_offset))
+					buffer_io_error(bh);
+
 				offset += bh->b_size;
 				bh = bh->b_this_page;
 			} while (bh != head);
 		}
 
-		/*
-		 * If this is a partial write which happened to make
-		 * all buffers uptodate then we can optimize away a
-		 * bogus readpage() for the next read().  Here we
-		 * 'discover' whether the page went uptodate as a
-		 * result of this (potentially partial) write.
-		 */
-		if (!partial_write)
-			SetPageUptodate(page);
-
 		put_io_page(io_end->pages[i]);
 	}
 	io_end->num_io_pages = 0;
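
A note on the set_bit(AS_EIO) addition above: SetPageError() alone is
not reported back to userspace; what a later fsync() actually checks
when it waits on writeback is the address_space flags.  Schematically
-- this is a from-memory paraphrase of the generic filemap_fdatawait()
error check of this era, with a hypothetical function name, not a
verbatim kernel excerpt -- the consumer side looks like:

/*
 * Paraphrase (not a verbatim excerpt) of the error check at the end
 * of the generic filemap_fdatawait() path circa 2.6.39: unless the
 * completion handler latched AS_EIO in mapping->flags, a write error
 * is invisible to a subsequent fsync().
 */
static int filemap_check_writeback_errors(struct address_space *mapping)
{
	int ret = 0;

	if (test_and_clear_bit(AS_ENOSPC, &mapping->flags))
		ret = -ENOSPC;
	if (test_and_clear_bit(AS_EIO, &mapping->flags))
		ret = -EIO;

	return ret;	/* reported once, to the first waiter */
}

So without latching AS_EIO in ext4_end_bio(), a failed write could
leave PageError set on the pages yet still let fsync() return success.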