From patchwork Tue Oct 12 14:14:55 2010
X-Patchwork-Submitter: Christoph Hellwig
X-Patchwork-Id: 67592
From: Christoph Hellwig <hch@lst.de>
Date: Tue, 12 Oct 2010 16:14:55 +0200
To: "Darrick J. Wong"
Cc: Ric Wheeler, Andreas Dilger, "Ted Ts'o", Mingming Cao, linux-ext4,
	linux-kernel, Keith Mannthey, Tejun Heo, Josef Bacik, Mike Snitzer
Subject: Re: Performance testing of various barrier reduction patches [was:
	Re: [RFC v4] ext4: Coordinate fsync requests]
Message-ID: <20101012141455.GA27572@lst.de>
In-Reply-To: <20101011202020.GF25624@tux1.beaverton.ibm.com>

I still think adding code to every filesystem to optimize for a rather
stupid use case is not a good idea.  I dropped out of the thread in the
middle, but what was the real use case for lots of concurrent fsyncs on
the same inode again?  And how much performance do you need?

If we go back to the direct submission of REQ_FLUSH requests from the
earlier flush+fua setups, which were faster for high end storage, would
that be enough for you?
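For concreteness, the workload under discussion looks roughly like the
sketch below from userspace: several threads hammering fsync() on the
same file descriptor, each fsync potentially forcing a cache flush.
This is an editorial illustration; the file name, thread count, and
iteration count are arbitrary.

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS	8	/* arbitrary degree of concurrency */
#define ITERATIONS	1000

static int fd;

static void *syncer(void *arg)
{
	char buf[512] = "x";
	int i;

	(void)arg;
	for (i = 0; i < ITERATIONS; i++) {
		/* each small write is followed by its own fsync, so
		 * flush requests pile up on one and the same inode */
		if (pwrite(fd, buf, sizeof(buf), 0) < 0)
			perror("pwrite");
		if (fsync(fd) < 0)
			perror("fsync");
	}
	return NULL;
}

int main(void)
{
	pthread_t threads[NTHREADS];
	int i;

	fd = open("testfile", O_RDWR | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&threads[i], NULL, syncer, NULL);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(threads[i], NULL);
	close(fd);
	return 0;
}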
Below is a patch bringing the optimization back.  WARNING: completely
untested!

---
Index: linux-2.6/block/blk-flush.c
===================================================================
--- linux-2.6.orig/block/blk-flush.c	2010-10-12 10:08:43.777004514 -0400
+++ linux-2.6/block/blk-flush.c	2010-10-12 10:10:37.547016093 -0400
@@ -143,6 +143,17 @@ struct request *blk_do_flush(struct requ
 	unsigned skip = 0;
 
 	/*
+	 * Just issue pure flushes directly.
+	 */
+	if (!blk_rq_sectors(rq)) {
+		if (!do_preflush) {
+			__blk_end_request_all(rq, 0);
+			return NULL;
+		}
+		return rq;
+	}
+
+	/*
 	 * Special case.  If there's data but flush is not necessary,
 	 * the request can be issued directly.
 	 *
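For readers following the hunk, the check it adds in front of the flush
state machine can be modeled in isolation.  The sketch below is an
editorial illustration with stand-in types and names (this struct
request, flush_action, and classify() are not the kernel's), assuming
the same semantics as the patch: a zero-sector request is a pure flush;
if no preflush is actually required it can be completed immediately,
otherwise it is issued directly instead of entering the flush sequence.

#include <stdbool.h>
#include <stdio.h>

/* stand-in for the kernel's struct request, illustration only */
struct request {
	unsigned int nr_sectors;	/* 0 means a pure flush */
	bool do_preflush;		/* flush supported and requested */
};

enum flush_action {
	COMPLETE_NOW,	/* no flush work needed: end the request with
			 * success, as __blk_end_request_all(rq, 0) does */
	ISSUE_DIRECTLY,	/* hand the pure flush straight to the driver */
	RUN_SEQUENCE,	/* data-carrying request: normal flush sequencing */
};

/* model of the fast path the patch adds at the top of blk_do_flush() */
static enum flush_action classify(const struct request *rq)
{
	if (rq->nr_sectors == 0) {
		if (!rq->do_preflush)
			return COMPLETE_NOW;
		return ISSUE_DIRECTLY;
	}
	return RUN_SEQUENCE;
}

int main(void)
{
	struct request pure_noop = { 0, false };
	struct request pure_flush = { 0, true };
	struct request with_data = { 8, true };

	printf("pure flush, no preflush needed -> %d\n", classify(&pure_noop));
	printf("pure flush, preflush needed    -> %d\n", classify(&pure_flush));
	printf("request with data              -> %d\n", classify(&with_data));
	return 0;
}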