From patchwork Tue Jun 7 03:56:50 2011
X-Patchwork-Submitter: Tao Ma
X-Patchwork-Id: 99072
From: Tao Ma
To: linux-ext4@vger.kernel.org
Cc: Jan Kara
Subject: [PATCH] jbd: Use WRITE_SYNC in journal checkpoint.
Date: Tue, 7 Jun 2011 11:56:50 +0800
Message-Id: <1307419010-3390-1-git-send-email-tm@tao.ma>
X-Mailer: git-send-email 1.7.4.1

From: Tao Ma

During journal checkpoint, we write out the buffers and wait for them to complete. But in CFQ the async queue has a very low priority, and in our test, when there were too many sync queues and every queue was filled up with requests, the process would hang waiting for log space.

So this patch uses WRITE_SYNC in __flush_batch so that the requests are moved into the sync queue and handled by CFQ in a timely manner. We also use the new plugging API, so that all the WRITE_SYNC requests can be submitted as a whole when we unplug.

Cc: Jan Kara
Reported-by: Robin Dong
Signed-off-by: Tao Ma
---
 fs/jbd/checkpoint.c |    6 +++++-
 1 files changed, 5 insertions(+), 1 deletions(-)

diff --git a/fs/jbd/checkpoint.c b/fs/jbd/checkpoint.c
index e4b87bc..a7ce053 100644
--- a/fs/jbd/checkpoint.c
+++ b/fs/jbd/checkpoint.c
@@ -22,6 +22,7 @@
 #include
 #include
 #include
+#include <linux/blkdev.h>

 /*
  * Unlink a buffer from a transaction checkpoint list.
@@ -253,9 +254,12 @@ static void __flush_batch(journal_t *journal, struct buffer_head **bhs,
 			  int *batch_count)
 {
 	int i;
+	struct blk_plug plug;

+	blk_start_plug(&plug);
 	for (i = 0; i < *batch_count; i++)
-		write_dirty_buffer(bhs[i], WRITE);
+		write_dirty_buffer(bhs[i], WRITE_SYNC);
+	blk_finish_plug(&plug);

 	for (i = 0; i < *batch_count; i++) {
 		struct buffer_head *bh = bhs[i];