
[3/3] block: reimplement FLUSH/FUA to support merge

Message ID 1295625598-15203-4-git-send-email-tj@kernel.org
State Not Applicable, archived

Commit Message

Tejun Heo Jan. 21, 2011, 3:59 p.m. UTC
The current FLUSH/FUA support has evolved from the implementation
which had to perform queue draining.  As such, sequencing is done
queue-wide one flush request after another.  However, with the
draining requirement gone, there's no reason to keep the queue-wide
sequential approach.

This patch reimplements FLUSH/FUA support such that each FLUSH/FUA
request is sequenced individually.  The actual FLUSH execution is
double buffered and whenever a request wants to execute one for either
PRE or POSTFLUSH, it queues on the pending queue.  Once certain
conditions are met, a flush request is issued and on its completion
all pending requests proceed to the next step of their sequences.

This allows arbitrary merging of different types of flushes.  How they
are merged can be primarily controlled and tuned by adjusting the
above said 'conditions' used to determine when to issue the next
flush.
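
In rough pseudocode (an illustrative sketch only; the real logic lives
in blk_kick_flush() in the patch below, and the field names mirror the
ones added there), the issue decision looks like this:

        /*
         * Requests waiting for a PRE/POSTFLUSH sit on
         * flush_queue[flush_pending_idx]; a flush is in flight whenever
         * pending_idx != running_idx.
         */
        static bool kick_flush(struct request_queue *q)
        {
                /* C1: only one flush may be in progress at any time */
                if (q->flush_pending_idx != q->flush_running_idx ||
                    list_empty(&q->flush_queue[q->flush_pending_idx]))
                        return false;

                /* C2/C3: defer while DATA is in flight, but not forever */
                if (!list_empty(&q->flush_data_in_flight) &&
                    time_before(jiffies, q->flush_pending_since +
                                         FLUSH_PENDING_TIMEOUT))
                        return false;

                /* issue one flush on behalf of everything queued so far */
                q->flush_pending_idx ^= 1;
                /* ... init q->flush_rq and insert it at the queue front ... */
                return true;
        }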

This is inspired by Darrick's patches to merge multiple zero-data
flushes which helps workloads with highly concurrent fsync requests.

* As flush requests are never put on the IO scheduler, request fields
  used for flush share space with rq->rb_node.  rq->completion_data is
  moved out of the union.  This increases the request size by one
  pointer.

  As rq->elevator_private* are used only by the iosched too, it is
  possible to reduce the request size further.  However, to do that,
  we need to modify request allocation path such that iosched data is
  not allocated for flush requests.

* FLUSH/FUA processing happens on insertion now instead of dispatch.

Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: "Darrick J. Wong" <djwong@us.ibm.com>
Cc: Shaohua Li <shli@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c         |   10 +-
 block/blk-flush.c        |  440 ++++++++++++++++++++++++++++++++--------------
 block/blk.h              |   12 +-
 block/elevator.c         |    7 +
 include/linux/blkdev.h   |   18 ++-
 include/linux/elevator.h |    1 +
 6 files changed, 332 insertions(+), 156 deletions(-)

Comments

Vivek Goyal Jan. 21, 2011, 6:56 p.m. UTC | #1
On Fri, Jan 21, 2011 at 04:59:58PM +0100, Tejun Heo wrote:

[..]
> + * The actual execution of flush is double buffered.  Whenever a request
> + * needs to execute PRE or POSTFLUSH, it queues at
> + * q->flush_queue[q->flush_pending_idx].  Once certain criteria are met, a
> + * flush is issued and the pending_idx is toggled.  When the flush
> + * completes, all the requests which were pending are proceeded to the next
> + * step.  This allows arbitrary merging of different types of FLUSH/FUA
> + * requests.
> + *
> + * Currently, the following conditions are used to determine when to issue
> + * flush.
> + *
> + * C1. At any given time, only one flush shall be in progress.  This makes
> + *     double buffering sufficient.
> + *
> + * C2. Flush is not deferred if any request is executing DATA of its
> + *     sequence.  This avoids issuing separate POSTFLUSHes for requests
> + *     which shared PREFLUSH.

Tejun, did you mean "Flush is deferred" instead of "Flush is not deferred"
above?

IIUC, C2 might help only if requests which contain data are also going to 
issue postflush. Couple of cases come to mind.

- If queue supports FUA, I think we will not issue POSTFLUSH. In that
  case issuing the next PREFLUSH while data is in flight might make sense.

- Even if queue does not support FUA and we are only getting requests
  with REQ_FLUSH then also waiting for data requests to finish before
  issuing next FLUSH might not help.

- Even if queue does not support FUA and say we have a mix of REQ_FUA
  and REQ_FLUSH, then this will help only if in a batch we have more
  than 1 request which is going to issue POSTFLUSH and those postflush
  will be merged.

- Ric Wheeler was once mentioning that there are boxes which advertise
  writeback cache but are battery backed so they ignore flush internally and
  signal completion immediately. I am not sure how prevalent those
  cases are but I think waiting for data to finish will delay processing
  of new REQ_FLUSH requests in pending queue for such array. There
  we will not anyway benefit from merging of FLUSH.

Given that C2 is going to benefit primarily only if the queue does not support
FUA and we have many requests with REQ_FUA set, would it make sense to
put additional checks for C2?  At least a simple check for queue FUA
support might help.

In practice, does C2 really help, or can we get rid of it entirely?
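
If C2 stays, something along these lines in blk_kick_flush() perhaps
(untested, just to illustrate the idea):

        /* apply C2/C3 only when POSTFLUSHes can actually be merged,
         * i.e. when the queue does not advertise FUA */
        if (!(q->flush_flags & REQ_FUA) &&
            !list_empty(&q->flush_data_in_flight) &&
            time_before(jiffies,
                        q->flush_pending_since + FLUSH_PENDING_TIMEOUT))
                return false;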

Thanks
Vivek
Vivek Goyal Jan. 21, 2011, 7:19 p.m. UTC | #2
On Fri, Jan 21, 2011 at 01:56:17PM -0500, Vivek Goyal wrote:
> On Fri, Jan 21, 2011 at 04:59:58PM +0100, Tejun Heo wrote:
> 
> [..]
> > + * The actual execution of flush is double buffered.  Whenever a request
> > + * needs to execute PRE or POSTFLUSH, it queues at
> > + * q->flush_queue[q->flush_pending_idx].  Once certain criteria are met, a
> > + * flush is issued and the pending_idx is toggled.  When the flush
> > + * completes, all the requests which were pending are proceeded to the next
> > + * step.  This allows arbitrary merging of different types of FLUSH/FUA
> > + * requests.
> > + *
> > + * Currently, the following conditions are used to determine when to issue
> > + * flush.
> > + *
> > + * C1. At any given time, only one flush shall be in progress.  This makes
> > + *     double buffering sufficient.
> > + *
> > + * C2. Flush is not deferred if any request is executing DATA of its
> > + *     sequence.  This avoids issuing separate POSTFLUSHes for requests
> > + *     which shared PREFLUSH.
> 
> Tejun, did you mean "Flush is deferred" instead of "Flush is not deferred"
> above?
> 
> IIUC, C2 might help only if requests which contain data are also going to 
> issue postflush. Couple of cases come to mind.
> 
> - If queue supports FUA, I think we will not issue POSTFLUSH. In that
>   case issuing the next PREFLUSH while data is in flight might make sense.
> 
> - Even if queue does not support FUA and we are only getting requests
>   with REQ_FLUSH then also waiting for data requests to finish before
>   issuing next FLUSH might not help.
> 
> - Even if queue does not support FUA and say we have a mix of REQ_FUA
>   and REQ_FLUSH, then this will help only if in a batch we have more
>   than 1 request which is going to issue POSTFLUSH and those postflush
>   will be merged.
> 
> - Ric Wheeler was once mentioning that there are boxes which advertise
>   writeback cache but are battery backed so they ignore flush internally and
>   signal completion immediately. I am not sure how prevalent those
>   cases are but I think waiting for data to finish will delay processing
>   of new REQ_FLUSH requests in pending queue for such array. There
>   we will not anyway benefit from merging of FLUSH.
> 
> Given that C2 is going to benefit primarily only if the queue does not support
> FUA and we have many requests with REQ_FUA set, would it make sense to
> put additional checks for C2?  At least a simple check for queue FUA
> support might help.

Reading through blk_insert_flush() a bit more, it looks like pure REQ_FUA
requests will not even show up in the data list if the queue supports FUA. But
IIUC requests with both REQ_FLUSH and REQ_FUA set will still show up even if
the queue supports FUA.
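
That is, tracing blk_flush_policy() with the queue advertising both FLUSH
and FUA (as I read the patch):

        /*
         * rq = REQ_FUA + data:
         *     policy = REQ_FSEQ_DATA only, so blk_insert_flush() puts the
         *     request straight on the dispatch queue and it never touches
         *     the flush machinery.
         *
         * rq = REQ_FLUSH|REQ_FUA + data:
         *     policy = REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA, so it goes
         *     through the flush machinery and shows up on
         *     flush_data_in_flight during its DATA step.
         */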

Thanks
Vivek
Mike Snitzer Jan. 22, 2011, 12:49 a.m. UTC | #3
On Fri, Jan 21 2011 at 10:59am -0500,
Tejun Heo <tj@kernel.org> wrote:

> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index 8592869..cd73e93 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -1,6 +1,69 @@
>  /*
>   * Functions to sequence FLUSH and FUA writes.
> + *
> + * Copyright (C) 2011		Max Planck Institute for Gravitational Physics
> + * Copyright (C) 2011		Tejun Heo <tj@kernel.org>
> + *
> + * This file is released under the GPLv2.
> + *
> + * REQ_{FLUSH|FUA} requests are decomposed to sequences consisted of three
> + * optional steps - PREFLUSH, DATA and POSTFLUSH - according to the request
> + * properties and hardware capability.
> + *
> + * If a request doesn't have data, only REQ_FLUSH makes sense, which
> + * indicates a simple flush request.  If there is data, REQ_FLUSH indicates
> + * that the device cache should be flushed before the data is executed, and
> + * REQ_FUA means that the data must be on non-volatile media on request
> + * completion.
> + *
> + * If the device doesn't have writeback cache, FLUSH and FUA don't make any
> + * difference.  The requests are either completed immediately if there's no
> + * data or executed as normal requests otherwise.

For devices without a writeback cache, I'm not seeing where pure flushes
are completed immediately.  But I do see where data is processed
directly in blk_insert_flush().


> -struct request *blk_do_flush(struct request_queue *q, struct request *rq)
> +/**
> + * blk_abort_flush - @q is being aborted, abort flush requests
      ^^^^^^^^^^^^^^^ 
Small comment nit, s/blk_abort_flush/blk_abort_flushes/

> + * @q: request_queue being aborted
> + *
> + * To be called from elv_abort_queue().  @q is being aborted.  Prepare all
> + * FLUSH/FUA requests for abortion.
> + *
> + * CONTEXT:
> + * spin_lock_irq(q->queue_lock)
> + */
> +void blk_abort_flushes(struct request_queue *q)
Tejun Heo Jan. 23, 2011, 10:25 a.m. UTC | #4
Hello,

On Fri, Jan 21, 2011 at 01:56:17PM -0500, Vivek Goyal wrote:
> > + * Currently, the following conditions are used to determine when to issue
> > + * flush.
> > + *
> > + * C1. At any given time, only one flush shall be in progress.  This makes
> > + *     double buffering sufficient.
> > + *
> > + * C2. Flush is not deferred if any request is executing DATA of its
> > + *     sequence.  This avoids issuing separate POSTFLUSHes for requests
> > + *     which shared PREFLUSH.
> 
> Tejun, did you mean "Flush is deferred" instead of "Flush is not deferred"
> above?

Oh yeah, I did.  :-)

> IIUC, C2 might help only if requests which contain data are also going to 
> issue postflush. Couple of cases come to mind.

That's true.  I didn't want to go too advanced on it.  I wanted
something which is fairly mechanical (without intricate parameters)
and effective enough for common cases.

> - If queue supports FUA, I think we will not issue POSTFLUSH. In that
>   case issuing the next PREFLUSH while data is in flight might make sense.
>
> - Even if queue does not support FUA and we are only getting requests
>   with REQ_FLUSH then also waiting for data requests to finish before
>   issuing next FLUSH might not help.
> 
> - Even if queue does not support FUA and say we have a mix of REQ_FUA
>   and REQ_FLUSH, then this will help only if in a batch we have more
>   than 1 request which is going to issue POSTFLUSH and those postflush
>   will be merged.

Sure, not applying C2 and 3 if the underlying device supports REQ_FUA
would probably be the most compelling change of the bunch; however,
please keep in mind that issuing flush as soon as possible doesn't
necessarily result in better performance.  It's inherently a balancing
act between latency and throughput.  Even inducing artificial issue
latencies is likely to help if done right (as the ioscheds do).

So, I think it's better to start with something simple and improve it
with actual testing.  If the current simple implementation can match
Darrick's previous numbers, let's first settle the mechanisms.  We can
tune the latency/throughput balance all we want later.  Other than the
double buffering constraint (which can be relaxed too but I don't think
that would be necessary or a good idea) things can be easily adjusted
in blk_kick_flush().  It's intentionally designed that way.

> - Ric Wheeler was once mentioning that there are boxes which advertise
>   writeback cache but are battery backed so they ignore flush internally and
>   signal completion immediately. I am not sure how prevalent those
>   cases are but I think waiting for data to finish will delay processing
>   of new REQ_FLUSH requests in pending queue for such array. There
>   we will not anyway benefit from merging of FLUSH.

I don't really think we should design the whole thing around broken
devices which incorrectly report writeback cache when it need not.
The correct place to work around that is during device identification
not in the flush logic.

> Given that C2 is going to benefit primarily only if the queue does not support
> FUA and we have many requests with REQ_FUA set, would it make sense to
> put additional checks for C2?  At least a simple check for queue FUA
> support might help.
> 
> In practice, does C2 really help, or can we get rid of it entirely?

Again, issuing flushes as fast as possible isn't necessarily better.
It might feel counter-intuitive but it generally makes sense to delay
flush if there are a lot of concurrent flush activities going on.
Another related interesting point is that with flush merging,
depending on workload, there's a likelihood that FUA, even if the
device supports it, might result in worse performance than merged DATA
+ single POSTFLUSH sequence.

Thanks.
Tejun Heo Jan. 23, 2011, 10:29 a.m. UTC | #5
On Sun, Jan 23, 2011 at 11:25:26AM +0100, Tejun Heo wrote:
> Again, issuing flushes as fast as possible isn't necessarily better.
> It might feel counter-intuitive but it generally makes sense to delay
> flush if there are a lot of concurrent flush activities going on.
> Another related interesting point is that with flush merging,
> depending on workload, there's a likelihood that FUA, even if the
> device supports it, might result in worse performance than merged DATA
> + single POSTFLUSH sequence.

Let me add a bit.

In general, I'm a bit skeptical about the usefulness of hardware FUA
on a rotating disk.  All it saves is a single command issue overhead.
On storage arrays or SSDs, the balance might be different, though.  Even
then, with flush merging, which way it turns out would heavily depend on
the workload.

Thanks.
Tejun Heo Jan. 23, 2011, 10:31 a.m. UTC | #6
On Fri, Jan 21, 2011 at 07:49:55PM -0500, Mike Snitzer wrote:
> > + * If the device doesn't have writeback cache, FLUSH and FUA don't make any
> > + * difference.  The requests are either completed immediately if there's no
> > + * data or executed as normal requests otherwise.
> 
> For devices without a writeback cache, I'm not seeing where pure flushes
> are completed immediately.  But I do see where data is processed
> directly in blk_insert_flush().

Yeah, it does.  For pure flushes on a device w/o writeback cache, @policy
is zero and blk_flush_complete_seq() will proceed directly to
REQ_FSEQ_DONE.
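
To spell it out, roughly:

        /*
         * No writeback cache means q->flush_flags contains neither
         * REQ_FLUSH nor REQ_FUA, so blk_flush_policy() returns 0 for a
         * pure flush.  blk_insert_flush() then calls
         * blk_flush_complete_seq(rq, REQ_FSEQ_ACTIONS & ~0, 0), which
         * marks all three action bits done at once; blk_flush_cur_seq()
         * evaluates to REQ_FSEQ_DONE and the request is ended right there.
         */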

> > -struct request *blk_do_flush(struct request_queue *q, struct request *rq)
> > +/**
> > + * blk_abort_flush - @q is being aborted, abort flush requests
>       ^^^^^^^^^^^^^^^ 
> Small comment nit, s/blk_abort_flush/blk_abort_flushes/

Thanks.
Darrick J. Wong Jan. 24, 2011, 8:31 p.m. UTC | #7
On Sun, Jan 23, 2011 at 11:25:26AM +0100, Tejun Heo wrote:
> Hello,
> 
> On Fri, Jan 21, 2011 at 01:56:17PM -0500, Vivek Goyal wrote:
> > > + * Currently, the following conditions are used to determine when to issue
> > > + * flush.
> > > + *
> > > + * C1. At any given time, only one flush shall be in progress.  This makes
> > > + *     double buffering sufficient.
> > > + *
> > > + * C2. Flush is not deferred if any request is executing DATA of its
> > > + *     sequence.  This avoids issuing separate POSTFLUSHes for requests
> > > + *     which shared PREFLUSH.
> > 
> > Tejun, did you mean "Flush is deferred" instead of "Flush is not deferred"
> > above?
> 
> Oh yeah, I did.  :-)
> 
> > IIUC, C2 might help only if requests which contain data are also going to 
> > issue postflush. Couple of cases come to mind.
> 
> That's true.  I didn't want to go too advanced on it.  I wanted
> something which is fairly mechanical (without intricate parameters)
> and effective enough for common cases.
> 
> > - If queue supports FUA, I think we will not issue POSTFLUSH. In that
> >   case issuing the next PREFLUSH while data is in flight might make sense.
> >
> > - Even if queue does not support FUA and we are only getting requests
> >   with REQ_FLUSH then also waiting for data requests to finish before
> >   issuing next FLUSH might not help.
> > 
> > - Even if queue does not support FUA and say we have a mix of REQ_FUA
> >   and REQ_FLUSH, then this will help only if in a batch we have more
> >   than 1 request which is going to issue POSTFLUSH and those postflush
> >   will be merged.
> 
> Sure, not applying C2 and 3 if the underlying device supports REQ_FUA
> would probably be the most compelling change of the bunch; however,
> please keep in mind that issuing flush as soon as possible doesn't
> necessarily result in better performance.  It's inherently a balancing
> act between latency and throughput.  Even inducing artificial issue
> latencies is likely to help if done right (as the ioscheds do).
> 
> So, I think it's better to start with something simple and improve it
> with actual testing.  If the current simple implementation can match
> Darrick's previous numbers, let's first settle the mechanisms.  We can

Yep, the fsync-happy numbers more or less match... at least for 2.6.37:
http://tinyurl.com/4q2xeao

I'll give 2.6.38-rc2 a try later, though -rc1 didn't boot for me, so these
numbers are based on a backport to .37. :(

In general, the effect of this patchset is to change a 100% drop in fsync-happy
performance into a 20% drop.  As always, the higher the average flush time, the
more the storage system benefits from having flush coordination.  The only
exception to that is elm3b231_ipr, which is an md array of disks that are
attached to a controller that is now throwing errors, so I'm not sure I
entirely trust that machine's numbers.

As for elm3c44_sas, I'm not sure why enabling flushes always increases
performance, other than to say that I suspect it has something to do with
md-raid'ing disk trays together, because elm3a4_sas and elm3c71_extsas consist
of the same configuration of disk trays, only without the md.  I've also been
told by our storage folks that md atop raid trays is not really a recommended
setup anyway.

The long and short of it is that this latest patchset looks good and delivers
the behavior that I was aiming for. :)

> tune the latency/throughput balance all we want later.  Other than the
> double buffering constraint (which can be relaxed too but I don't think
> that would be necessary or a good idea) things can be easily adjusted
> in blk_kick_flush().  It's intentionally designed that way.
> 
> > - Ric Wheeler was once mentioning that there are boxes which advertise
> >   writeback cache but are battery backed so they ignore flush internally and
> >   signal completion immediately. I am not sure how prevalent those
> >   cases are but I think waiting for data to finish will delay processing
> >   of new REQ_FLUSH requests in pending queue for such array. There
> >   we will not anyway benefit from merging of FLUSH.
> 
> I don't really think we should design the whole thing around broken
> devices which incorrectly report writeback cache when it need not.
> The correct place to work around that is during device identification
> not in the flush logic.

elm3a4_sas and elm3c71_extsas advertise writeback cache yet the flush completion
times are suspiciously low.  I suppose it could be useful to disable flushes to
squeeze out that last bit of performance, though I don't know how one goes
about querying the disk array to learn if there's a battery behind the cache.
I guess the current mechanism (admin knob that picks a safe default) is good
enough.

> > Given that C2 is going to benefit primarily only if the queue does not support
> > FUA and we have many requests with REQ_FUA set, would it make sense to
> > put additional checks for C2?  At least a simple check for queue FUA
> > support might help.
> > 
> > In practice, does C2 really help, or can we get rid of it entirely?
> 
> Again, issuing flushes as fast as possible isn't necessarily better.
> It might feel counter-intuitive but it generally makes sense to delay
> flush if there are a lot of concurrent flush activities going on.
> Another related interesting point is that with flush merging,
> depending on workload, there's a likelihood that FUA, even if the
> device supports it, might result in worse performance than merged DATA
> + single POSTFLUSH sequence.
> 
> Thanks.
> 
> -- 
> tejun
Tejun Heo Jan. 25, 2011, 10:21 a.m. UTC | #8
Hello, Darrick.

On Mon, Jan 24, 2011 at 12:31:55PM -0800, Darrick J. Wong wrote:
> > So, I think it's better to start with something simple and improve it
> > with actual testing.  If the current simple implementation can match
> > Darrick's previous numbers, let's first settle the mechanisms.  We can
> 
> Yep, the fsync-happy numbers more or less match... at least for 2.6.37:
> http://tinyurl.com/4q2xeao

Good to hear.  Thanks for the detailed testing.

> I'll give 2.6.38-rc2 a try later, though -rc1 didn't boot for me, so these
> numbers are based on a backport to .37. :(

Well, there hasn't been any change in the area during the merge window
anyway, so I think testing on 2.6.37 should be fine.

> > I don't really think we should design the whole thing around broken
> > devices which incorrectly report writeback cache when it need not.
> > The correct place to work around that is during device identification
> > not in the flush logic.
> 
> elm3a4_sas and elm3c71_extsas advertise writeback cache yet the
> flush completion times are suspiciously low.  I suppose it could be
> useful to disable flushes to squeeze out that last bit of
> performance, though I don't know how one goes about querying the
> disk array to learn if there's a battery behind the cache.  I guess
> the current mechanism (admin knob that picks a safe default) is good
> enough.

Yeap, that or a blacklist of devices which lie.

Jens, what do you think?  If you don't object, let's put this through
linux-next.

Thank you.
Jens Axboe Jan. 25, 2011, 11:39 a.m. UTC | #9
On 2011-01-25 11:21, Tejun Heo wrote:
> Hello, Darrick.
> 
> On Mon, Jan 24, 2011 at 12:31:55PM -0800, Darrick J. Wong wrote:
>>> So, I think it's better to start with something simple and improve it
>>> with actual testing.  If the current simple implementation can match
>>> Darrick's previous numbers, let's first settle the mechanisms.  We can
>>
>> Yep, the fsync-happy numbers more or less match... at least for 2.6.37:
>> http://tinyurl.com/4q2xeao
> 
> Good to hear.  Thanks for the detailed testing.
> 
>> I'll give 2.6.38-rc2 a try later, though -rc1 didn't boot for me, so these
>> numbers are based on a backport to .37. :(
> 
> Well, there hasn't been any change in the area during the merge window
> anyway, so I think testing on 2.6.37 should be fine.
> 
>>> I don't really think we should design the whole thing around broken
>>> devices which incorrectly report writeback cache when it need not.
>>> The correct place to work around that is during device identification
>>> not in the flush logic.
>>
>> elm3a4_sas and elm3c71_extsas advertise writeback cache yet the
>> flush completion times are suspiciously low.  I suppose it could be
>> useful to disable flushes to squeeze out that last bit of
>> performance, though I don't know how one goes about querying the
>> disk array to learn if there's a battery behind the cache.  I guess
>> the current mechanism (admin knob that picks a safe default) is good
>> enough.
> 
> Yeap, that or a blacklist of devices which lie.
> 
> Jens, what do you think?  If you don't object, let's put this through
> linux-next.

I like the approach, I'll queue it up for 2.6.39.
Vivek Goyal Jan. 25, 2011, 8:46 p.m. UTC | #10
On Sun, Jan 23, 2011 at 11:31:33AM +0100, Tejun Heo wrote:
> On Fri, Jan 21, 2011 at 07:49:55PM -0500, Mike Snitzer wrote:
> > > + * If the device doesn't have writeback cache, FLUSH and FUA don't make any
> > > + * difference.  The requests are either completed immediately if there's no
> > > + * data or executed as normal requests otherwise.
> > 
> > For devices without a writeback cache, I'm not seeing where pure flushes
> > are completed immediately.  But I do see where data is processed
> > directly in blk_insert_flush().
> 
> Yeah, it does.  For pure flushes on a device w/o writeback cache, @policy
> is zero and blk_flush_complete_seq() will proceed directly to
> REQ_FSEQ_DONE.

I see the following code in __generic_make_request(). I am wondering if empty
flushes will be completed right here if the device does not have a writeback
cache.

                /*
                 * Filter flush bio's early so that make_request based
                 * drivers without flush support don't have to worry
                 * about them.
                 */
                if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) && !q->flush_flags) {
                        bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
                        if (!nr_sectors) {
                                err = 0;
                                goto end_io;
                        }
                }

Thanks
Vivek
Mike Snitzer Jan. 25, 2011, 9:04 p.m. UTC | #11
On Tue, Jan 25 2011 at  3:46pm -0500,
Vivek Goyal <vgoyal@redhat.com> wrote:

> On Sun, Jan 23, 2011 at 11:31:33AM +0100, Tejun Heo wrote:
> > On Fri, Jan 21, 2011 at 07:49:55PM -0500, Mike Snitzer wrote:
> > > > + * If the device doesn't have writeback cache, FLUSH and FUA don't make any
> > > > + * difference.  The requests are either completed immediately if there's no
> > > > + * data or executed as normal requests otherwise.
> > > 
> > > For devices without a writeback cache, I'm not seeing where pure flushes
> > > are completed immediately.  But I do see where data is processed
> > > directly in blk_insert_flush().
> > 
> > Yeah, it does.  For pure flushes on a device w/o writeback cache, @policy
> > is zero and blk_flush_complete_seq() will proceed directly to
> > REQ_FSEQ_DONE.
> 
> I see the following code in __generic_make_request(). I am wondering if empty
> flushes will be completed right here if the device does not have a writeback
> cache.
> 
>                 /*
>                  * Filter flush bio's early so that make_request based
>                  * drivers without flush support don't have to worry
>                  * about them.
>                  */
>                 if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) && !q->flush_flags) {
>                         bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
>                         if (!nr_sectors) {
>                                 err = 0;
>                                 goto end_io;
>                         }
>                 }

Yes, due to the !q->flush_flags check, empty flushes will complete here
if the device's request_queue doesn't advertise support for FLUSH or FUA
via blk_queue_flush().

Mike


Darrick J. Wong Jan. 25, 2011, 10:56 p.m. UTC | #12
On Tue, Jan 25, 2011 at 11:21:28AM +0100, Tejun Heo wrote:
> Hello, Darrick.
> 
> On Mon, Jan 24, 2011 at 12:31:55PM -0800, Darrick J. Wong wrote:
> > > So, I think it's better to start with something simple and improve it
> > > with actual testing.  If the current simple implementation can match
> > > Darrick's previous numbers, let's first settle the mechanisms.  We can
> > 
> > Yep, the fsync-happy numbers more or less match... at least for 2.6.37:
> > http://tinyurl.com/4q2xeao
> 
> Good to hear.  Thanks for the detailed testing.
> 
> > I'll give 2.6.38-rc2 a try later, though -rc1 didn't boot for me, so these
> > numbers are based on a backport to .37. :(
> 
> Well, there hasn't been any change in the area during the merge window
> anyway, so I think testing on 2.6.37 should be fine.

Well, I gave it a spin on -rc2 with no problems and no significant change in
performance, so:

Acked-by: Darrick J. Wong <djwong@us.ibm.com>

> > > I don't really think we should design the whole thing around broken
> > > devices which incorrectly report writeback cache when it need not.
> > > The correct place to work around that is during device identification
> > > not in the flush logic.
> > 
> > elm3a4_sas and elm3c71_extsas advertise writeback cache yet the
> > flush completion times are suspiciously low.  I suppose it could be
> > useful to disable flushes to squeeze out that last bit of
> > performance, though I don't know how one goes about querying the
> > disk array to learn if there's a battery behind the cache.  I guess
> > the current mechanism (admin knob that picks a safe default) is good
> > enough.
> 
> Yeap, that or a blacklist of devices which lie.

Hmm... I don't think a blacklist would work for our arrays, since one can force
them to run with write cache and no battery.  I _do_ have a patch that adds a
sysfs knob to the block layer to drop flush/fua if the admin really really
really wants it, so I'll send that out shortly along with another one to remove
the barrier= mount option from ext4.

(Unless the screams of objection rain from the skies. :))

--D
Darrick J. Wong March 23, 2011, 11:37 p.m. UTC | #13
On Tue, Jan 25, 2011 at 12:39:24PM +0100, Jens Axboe wrote:
> On 2011-01-25 11:21, Tejun Heo wrote:
> > Hello, Darrick.
> > 
> > On Mon, Jan 24, 2011 at 12:31:55PM -0800, Darrick J. Wong wrote:
> >>> So, I think it's better to start with something simple and improve it
> >>> with actual testing.  If the current simple implementation can match
> >>> Darrick's previous numbers, let's first settle the mechanisms.  We can
> >>
> >> Yep, the fsync-happy numbers more or less match... at least for 2.6.37:
> >> http://tinyurl.com/4q2xeao
> > 
> > Good to hear.  Thanks for the detailed testing.
> > 
> >> I'll give 2.6.38-rc2 a try later, though -rc1 didn't boot for me, so these
> >> numbers are based on a backport to .37. :(
> > 
> > Well, there hasn't been any change in the area during the merge window
> > anyway, so I think testing on 2.6.37 should be fine.
> > 
> >>> I don't really think we should design the whole thing around broken
> >>> devices which incorrectly report writeback cache when it need not.
> >>> The correct place to work around that is during device identification
> >>> not in the flush logic.
> >>
> >> elm3a4_sas and elm3c71_extsas advertise writeback cache yet the
> >> flush completion times are suspiciously low.  I suppose it could be
> >> useful to disable flushes to squeeze out that last bit of
> >> performance, though I don't know how one goes about querying the
> >> disk array to learn if there's a battery behind the cache.  I guess
> >> the current mechanism (admin knob that picks a safe default) is good
> >> enough.
> > 
> > Yeap, that or a blacklist of devices which lie.
> > 
> > Jens, what do you think?  If you don't object, let's put this through
> > linux-next.
> 
> I like the approach, I'll queue it up for 2.6.39.

Is this patch set still on the merge list for 2.6.39?

--D
> 
> -- 
> Jens Axboe
> 

Patch

diff --git a/block/blk-core.c b/block/blk-core.c
index 038519b..72dd23b 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -149,8 +149,6 @@  EXPORT_SYMBOL(blk_rq_init);
 static void req_bio_endio(struct request *rq, struct bio *bio,
 			  unsigned int nbytes, int error)
 {
-	struct request_queue *q = rq->q;
-
 	if (error)
 		clear_bit(BIO_UPTODATE, &bio->bi_flags);
 	else if (!test_bit(BIO_UPTODATE, &bio->bi_flags))
@@ -174,8 +172,6 @@  static void req_bio_endio(struct request *rq, struct bio *bio,
 	/* don't actually finish bio if it's part of flush sequence */
 	if (bio->bi_size == 0 && !(rq->cmd_flags & REQ_FLUSH_SEQ))
 		bio_endio(bio, error);
-	else if (error && !q->flush_err)
-		q->flush_err = error;
 }
 
 void blk_dump_rq_flags(struct request *rq, char *msg)
@@ -534,7 +530,9 @@  struct request_queue *blk_alloc_queue_node(gfp_t gfp_mask, int node_id)
 	init_timer(&q->unplug_timer);
 	setup_timer(&q->timeout, blk_rq_timed_out_timer, (unsigned long) q);
 	INIT_LIST_HEAD(&q->timeout_list);
-	INIT_LIST_HEAD(&q->pending_flushes);
+	INIT_LIST_HEAD(&q->flush_queue[0]);
+	INIT_LIST_HEAD(&q->flush_queue[1]);
+	INIT_LIST_HEAD(&q->flush_data_in_flight);
 	INIT_WORK(&q->unplug_work, blk_unplug_work);
 
 	kobject_init(&q->kobj, &blk_queue_ktype);
@@ -1213,7 +1211,7 @@  static int __make_request(struct request_queue *q, struct bio *bio)
 	spin_lock_irq(q->queue_lock);
 
 	if (bio->bi_rw & (REQ_FLUSH | REQ_FUA)) {
-		where = ELEVATOR_INSERT_FRONT;
+		where = ELEVATOR_INSERT_FLUSH;
 		goto get_rq;
 	}
 
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 8592869..cd73e93 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -1,6 +1,69 @@ 
 /*
  * Functions to sequence FLUSH and FUA writes.
+ *
+ * Copyright (C) 2011		Max Planck Institute for Gravitational Physics
+ * Copyright (C) 2011		Tejun Heo <tj@kernel.org>
+ *
+ * This file is released under the GPLv2.
+ *
+ * REQ_{FLUSH|FUA} requests are decomposed to sequences consisted of three
+ * optional steps - PREFLUSH, DATA and POSTFLUSH - according to the request
+ * properties and hardware capability.
+ *
+ * If a request doesn't have data, only REQ_FLUSH makes sense, which
+ * indicates a simple flush request.  If there is data, REQ_FLUSH indicates
+ * that the device cache should be flushed before the data is executed, and
+ * REQ_FUA means that the data must be on non-volatile media on request
+ * completion.
+ *
+ * If the device doesn't have writeback cache, FLUSH and FUA don't make any
+ * difference.  The requests are either completed immediately if there's no
+ * data or executed as normal requests otherwise.
+ *
+ * If the device has writeback cache and supports FUA, REQ_FLUSH is
+ * translated to PREFLUSH but REQ_FUA is passed down directly with DATA.
+ *
+ * If the device has writeback cache and doesn't support FUA, REQ_FLUSH is
+ * translated to PREFLUSH and REQ_FUA to POSTFLUSH.
+ *
+ * The actual execution of flush is double buffered.  Whenever a request
+ * needs to execute PRE or POSTFLUSH, it queues at
+ * q->flush_queue[q->flush_pending_idx].  Once certain criteria are met, a
+ * flush is issued and the pending_idx is toggled.  When the flush
+ * completes, all the requests which were pending are proceeded to the next
+ * step.  This allows arbitrary merging of different types of FLUSH/FUA
+ * requests.
+ *
+ * Currently, the following conditions are used to determine when to issue
+ * flush.
+ *
+ * C1. At any given time, only one flush shall be in progress.  This makes
+ *     double buffering sufficient.
+ *
+ * C2. Flush is not deferred if any request is executing DATA of its
+ *     sequence.  This avoids issuing separate POSTFLUSHes for requests
+ *     which shared PREFLUSH.
+ *
+ * C3. The second condition is ignored if there is a request which has
+ *     waited longer than FLUSH_PENDING_TIMEOUT.  This is to avoid
+ *     starvation in the unlikely case where there are continuous stream of
+ *     FUA (without FLUSH) requests.
+ *
+ * For devices which support FUA, it isn't clear whether C2 (and thus C3)
+ * is beneficial.
+ *
+ * Note that a sequenced FLUSH/FUA request with DATA is completed twice.
+ * Once while executing DATA and again after the whole sequence is
+ * complete.  The first completion updates the contained bio but doesn't
+ * finish it so that the bio submitter is notified only after the whole
+ * sequence is complete.  This is implemented by testing REQ_FLUSH_SEQ in
+ * req_bio_endio().
+ *
+ * The above peculiarity requires that each FLUSH/FUA request has only one
+ * bio attached to it, which is guaranteed as they aren't allowed to be
+ * merged in the usual way.
  */
+
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/bio.h>
@@ -11,185 +74,290 @@ 
 
 /* FLUSH/FUA sequences */
 enum {
-	QUEUE_FSEQ_STARTED	= (1 << 0), /* flushing in progress */
-	QUEUE_FSEQ_PREFLUSH	= (1 << 1), /* pre-flushing in progress */
-	QUEUE_FSEQ_DATA		= (1 << 2), /* data write in progress */
-	QUEUE_FSEQ_POSTFLUSH	= (1 << 3), /* post-flushing in progress */
-	QUEUE_FSEQ_DONE		= (1 << 4),
+	REQ_FSEQ_PREFLUSH	= (1 << 0), /* pre-flushing in progress */
+	REQ_FSEQ_DATA		= (1 << 1), /* data write in progress */
+	REQ_FSEQ_POSTFLUSH	= (1 << 2), /* post-flushing in progress */
+	REQ_FSEQ_DONE		= (1 << 3),
+
+	REQ_FSEQ_ACTIONS	= REQ_FSEQ_PREFLUSH | REQ_FSEQ_DATA |
+				  REQ_FSEQ_POSTFLUSH,
+
+	/*
+	 * If flush has been pending longer than the following timeout,
+	 * it's issued even if flush_data requests are still in flight.
+	 */
+	FLUSH_PENDING_TIMEOUT	= 5 * HZ,
 };
 
-static struct request *queue_next_fseq(struct request_queue *q);
+static bool blk_kick_flush(struct request_queue *q);
 
-unsigned blk_flush_cur_seq(struct request_queue *q)
+static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
 {
-	if (!q->flush_seq)
-		return 0;
-	return 1 << ffz(q->flush_seq);
+	unsigned int policy = 0;
+
+	if (fflags & REQ_FLUSH) {
+		if (rq->cmd_flags & REQ_FLUSH)
+			policy |= REQ_FSEQ_PREFLUSH;
+		if (blk_rq_sectors(rq))
+			policy |= REQ_FSEQ_DATA;
+		if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
+			policy |= REQ_FSEQ_POSTFLUSH;
+	}
+	return policy;
 }
 
-static struct request *blk_flush_complete_seq(struct request_queue *q,
-					      unsigned seq, int error)
+static unsigned int blk_flush_cur_seq(struct request *rq)
 {
-	struct request *next_rq = NULL;
-
-	if (error && !q->flush_err)
-		q->flush_err = error;
-
-	BUG_ON(q->flush_seq & seq);
-	q->flush_seq |= seq;
-
-	if (blk_flush_cur_seq(q) != QUEUE_FSEQ_DONE) {
-		/* not complete yet, queue the next flush sequence */
-		next_rq = queue_next_fseq(q);
-	} else {
-		/* complete this flush request */
-		__blk_end_request_all(q->orig_flush_rq, q->flush_err);
-		q->orig_flush_rq = NULL;
-		q->flush_seq = 0;
-
-		/* dispatch the next flush if there's one */
-		if (!list_empty(&q->pending_flushes)) {
-			next_rq = list_entry_rq(q->pending_flushes.next);
-			list_move(&next_rq->queuelist, &q->queue_head);
-		}
-	}
-	return next_rq;
+	return 1 << ffz(rq->flush.seq);
 }
 
-static void blk_flush_complete_seq_end_io(struct request_queue *q,
-					  unsigned seq, int error)
+static void blk_flush_restore_request(struct request *rq)
 {
-	bool was_empty = elv_queue_empty(q);
-	struct request *next_rq;
-
-	next_rq = blk_flush_complete_seq(q, seq, error);
-
 	/*
-	 * Moving a request silently to empty queue_head may stall the
-	 * queue.  Kick the queue in those cases.
+	 * After flush data completion, @rq->bio is %NULL but we need to
+	 * complete the bio again.  @rq->biotail is guaranteed to equal the
+	 * original @rq->bio.  Restore it.
 	 */
-	if (was_empty && next_rq)
-		__blk_run_queue(q);
+	rq->bio = rq->biotail;
+
+	/* make @rq a normal request */
+	rq->cmd_flags &= ~REQ_FLUSH_SEQ;
+	rq->end_io = NULL;
 }
 
-static void pre_flush_end_io(struct request *rq, int error)
+/**
+ * blk_flush_complete_seq - complete flush sequence
+ * @rq: FLUSH/FUA request being sequenced
+ * @seq: sequences to complete (mask of %REQ_FSEQ_*, can be zero)
+ * @error: whether an error occurred
+ *
+ * @rq just completed @seq part of its flush sequence, record the
+ * completion and trigger the next step.
+ *
+ * CONTEXT:
+ * spin_lock_irq(q->queue_lock)
+ *
+ * RETURNS:
+ * %true if requests were added to the dispatch queue, %false otherwise.
+ */
+static bool blk_flush_complete_seq(struct request *rq, unsigned int seq,
+				   int error)
 {
-	elv_completed_request(rq->q, rq);
-	blk_flush_complete_seq_end_io(rq->q, QUEUE_FSEQ_PREFLUSH, error);
+	struct request_queue *q = rq->q;
+	struct list_head *pending = &q->flush_queue[q->flush_pending_idx];
+	bool queued = false;
+
+	BUG_ON(rq->flush.seq & seq);
+	rq->flush.seq |= seq;
+
+	if (likely(!error))
+		seq = blk_flush_cur_seq(rq);
+	else
+		seq = REQ_FSEQ_DONE;
+
+	switch (seq) {
+	case REQ_FSEQ_PREFLUSH:
+	case REQ_FSEQ_POSTFLUSH:
+		/* queue for flush */
+		if (list_empty(pending))
+			q->flush_pending_since = jiffies;
+		list_move_tail(&rq->flush.list, pending);
+		break;
+
+	case REQ_FSEQ_DATA:
+		list_move_tail(&rq->flush.list, &q->flush_data_in_flight);
+		list_add(&rq->queuelist, &q->queue_head);
+		queued = true;
+		break;
+
+	case REQ_FSEQ_DONE:
+		/*
+		 * @rq was previously adjusted by blk_flush_issue() for
+		 * flush sequencing and may already have gone through the
+		 * flush data request completion path.  Restore @rq for
+		 * normal completion and end it.
+		 */
+		BUG_ON(!list_empty(&rq->queuelist));
+		list_del_init(&rq->flush.list);
+		blk_flush_restore_request(rq);
+		__blk_end_request_all(rq, error);
+		break;
+
+	default:
+		BUG();
+	}
+
+	return blk_kick_flush(q) | queued;
 }
 
-static void flush_data_end_io(struct request *rq, int error)
+static void flush_end_io(struct request *flush_rq, int error)
 {
-	elv_completed_request(rq->q, rq);
-	blk_flush_complete_seq_end_io(rq->q, QUEUE_FSEQ_DATA, error);
+	struct request_queue *q = flush_rq->q;
+	struct list_head *running = &q->flush_queue[q->flush_running_idx];
+	bool was_empty = elv_queue_empty(q);
+	bool queued = false;
+	struct request *rq, *n;
+
+	BUG_ON(q->flush_pending_idx == q->flush_running_idx);
+
+	/* account completion of the flush request */
+	q->flush_running_idx ^= 1;
+	elv_completed_request(q, flush_rq);
+
+	/* and push the waiting requests to the next stage */
+	list_for_each_entry_safe(rq, n, running, flush.list) {
+		unsigned int seq = blk_flush_cur_seq(rq);
+
+		BUG_ON(seq != REQ_FSEQ_PREFLUSH && seq != REQ_FSEQ_POSTFLUSH);
+		queued |= blk_flush_complete_seq(rq, seq, error);
+	}
+
+	/* after populating an empty queue, kick it to avoid stall */
+	if (queued && was_empty)
+		__blk_run_queue(q);
 }
 
-static void post_flush_end_io(struct request *rq, int error)
+/**
+ * blk_kick_flush - consider issuing flush request
+ * @q: request_queue being kicked
+ *
+ * Flush related states of @q have changed, consider issuing flush request.
+ * Please read the comment at the top of this file for more info.
+ *
+ * CONTEXT:
+ * spin_lock_irq(q->queue_lock)
+ *
+ * RETURNS:
+ * %true if flush was issued, %false otherwise.
+ */
+static bool blk_kick_flush(struct request_queue *q)
 {
-	elv_completed_request(rq->q, rq);
-	blk_flush_complete_seq_end_io(rq->q, QUEUE_FSEQ_POSTFLUSH, error);
+	struct list_head *pending = &q->flush_queue[q->flush_pending_idx];
+	struct request *first_rq =
+		list_first_entry(pending, struct request, flush.list);
+
+	/* C1 described at the top of this file */
+	if (q->flush_pending_idx != q->flush_running_idx || list_empty(pending))
+		return false;
+
+	/* C2 and C3 */
+	if (!list_empty(&q->flush_data_in_flight) &&
+	    time_before(jiffies,
+			q->flush_pending_since + FLUSH_PENDING_TIMEOUT))
+		return false;
+
+	/*
+	 * Issue flush and toggle pending_idx.  This makes pending_idx
+	 * different from running_idx, which means flush is in flight.
+	 */
+	blk_rq_init(q, &q->flush_rq);
+	q->flush_rq.cmd_type = REQ_TYPE_FS;
+	q->flush_rq.cmd_flags = WRITE_FLUSH | REQ_FLUSH_SEQ;
+	q->flush_rq.rq_disk = first_rq->rq_disk;
+	q->flush_rq.end_io = flush_end_io;
+
+	q->flush_pending_idx ^= 1;
+	elv_insert(q, &q->flush_rq, ELEVATOR_INSERT_FRONT);
+	return true;
 }
 
-static void init_flush_request(struct request *rq, struct gendisk *disk)
+static void flush_data_end_io(struct request *rq, int error)
 {
-	rq->cmd_type = REQ_TYPE_FS;
-	rq->cmd_flags = WRITE_FLUSH;
-	rq->rq_disk = disk;
+	struct request_queue *q = rq->q;
+	bool was_empty = elv_queue_empty(q);
+
+	/* after populating an empty queue, kick it to avoid stall */
+	if (blk_flush_complete_seq(rq, REQ_FSEQ_DATA, error) && was_empty)
+		__blk_run_queue(q);
 }
 
-static struct request *queue_next_fseq(struct request_queue *q)
+/**
+ * blk_insert_flush - insert a new FLUSH/FUA request
+ * @rq: request to insert
+ *
+ * To be called from elv_insert() for %ELEVATOR_INSERT_FLUSH insertions.
+ * @rq is being submitted.  Analyze what needs to be done and put it on the
+ * right queue.
+ *
+ * CONTEXT:
+ * spin_lock_irq(q->queue_lock)
+ */
+void blk_insert_flush(struct request *rq)
 {
-	struct request *orig_rq = q->orig_flush_rq;
-	struct request *rq = &q->flush_rq;
+	struct request_queue *q = rq->q;
+	unsigned int fflags = q->flush_flags;	/* may change, cache */
+	unsigned int policy = blk_flush_policy(fflags, rq);
 
-	blk_rq_init(q, rq);
+	BUG_ON(rq->end_io);
+	BUG_ON(!rq->bio || rq->bio != rq->biotail);
 
-	switch (blk_flush_cur_seq(q)) {
-	case QUEUE_FSEQ_PREFLUSH:
-		init_flush_request(rq, orig_rq->rq_disk);
-		rq->end_io = pre_flush_end_io;
-		break;
-	case QUEUE_FSEQ_DATA:
-		init_request_from_bio(rq, orig_rq->bio);
-		/*
-		 * orig_rq->rq_disk may be different from
-		 * bio->bi_bdev->bd_disk if orig_rq got here through
-		 * remapping drivers.  Make sure rq->rq_disk points
-		 * to the same one as orig_rq.
-		 */
-		rq->rq_disk = orig_rq->rq_disk;
-		rq->cmd_flags &= ~(REQ_FLUSH | REQ_FUA);
-		rq->cmd_flags |= orig_rq->cmd_flags & (REQ_FLUSH | REQ_FUA);
-		rq->end_io = flush_data_end_io;
-		break;
-	case QUEUE_FSEQ_POSTFLUSH:
-		init_flush_request(rq, orig_rq->rq_disk);
-		rq->end_io = post_flush_end_io;
-		break;
-	default:
-		BUG();
+	/*
+	 * @policy now records what operations need to be done.  Adjust
+	 * REQ_FLUSH and FUA for the driver.
+	 */
+	rq->cmd_flags &= ~REQ_FLUSH;
+	if (!(fflags & REQ_FUA))
+		rq->cmd_flags &= ~REQ_FUA;
+
+	/*
+	 * If there's data but flush is not necessary, the request can be
+	 * processed directly without going through flush machinery.  Queue
+	 * for normal execution.
+	 */
+	if ((policy & REQ_FSEQ_DATA) &&
+	    !(policy & (REQ_FSEQ_PREFLUSH | REQ_FSEQ_POSTFLUSH))) {
+		list_add(&rq->queuelist, &q->queue_head);
+		return;
 	}
 
+	/*
+	 * @rq should go through flush machinery.  Mark it part of flush
+	 * sequence and submit for further processing.
+	 */
+	memset(&rq->flush, 0, sizeof(rq->flush));
+	INIT_LIST_HEAD(&rq->flush.list);
 	rq->cmd_flags |= REQ_FLUSH_SEQ;
-	elv_insert(q, rq, ELEVATOR_INSERT_FRONT);
-	return rq;
+	rq->end_io = flush_data_end_io;
+
+	blk_flush_complete_seq(rq, REQ_FSEQ_ACTIONS & ~policy, 0);
 }
 
-struct request *blk_do_flush(struct request_queue *q, struct request *rq)
+/**
+ * blk_abort_flush - @q is being aborted, abort flush requests
+ * @q: request_queue being aborted
+ *
+ * To be called from elv_abort_queue().  @q is being aborted.  Prepare all
+ * FLUSH/FUA requests for abortion.
+ *
+ * CONTEXT:
+ * spin_lock_irq(q->queue_lock)
+ */
+void blk_abort_flushes(struct request_queue *q)
 {
-	unsigned int fflags = q->flush_flags; /* may change, cache it */
-	bool has_flush = fflags & REQ_FLUSH, has_fua = fflags & REQ_FUA;
-	bool do_preflush = has_flush && (rq->cmd_flags & REQ_FLUSH);
-	bool do_postflush = has_flush && !has_fua && (rq->cmd_flags & REQ_FUA);
-	unsigned skip = 0;
+	struct request *rq, *n;
+	int i;
 
 	/*
-	 * Special case.  If there's data but flush is not necessary,
-	 * the request can be issued directly.
-	 *
-	 * Flush w/o data should be able to be issued directly too but
-	 * currently some drivers assume that rq->bio contains
-	 * non-zero data if it isn't NULL and empty FLUSH requests
-	 * getting here usually have bio's without data.
+	 * Requests in flight for data are already owned by the dispatch
+	 * queue or the device driver.  Just restore for normal completion.
 	 */
-	if (blk_rq_sectors(rq) && !do_preflush && !do_postflush) {
-		rq->cmd_flags &= ~REQ_FLUSH;
-		if (!has_fua)
-			rq->cmd_flags &= ~REQ_FUA;
-		return rq;
+	list_for_each_entry_safe(rq, n, &q->flush_data_in_flight, flush.list) {
+		list_del_init(&rq->flush.list);
+		blk_flush_restore_request(rq);
 	}
 
 	/*
-	 * Sequenced flushes can't be processed in parallel.  If
-	 * another one is already in progress, queue for later
-	 * processing.
+	 * We need to give away requests on flush queues.  Restore for
+	 * normal completion and put them on the dispatch queue.
 	 */
-	if (q->flush_seq) {
-		list_move_tail(&rq->queuelist, &q->pending_flushes);
-		return NULL;
+	for (i = 0; i < ARRAY_SIZE(q->flush_queue); i++) {
+		list_for_each_entry_safe(rq, n, &q->flush_queue[i],
+					 flush.list) {
+			list_del_init(&rq->flush.list);
+			blk_flush_restore_request(rq);
+			list_add_tail(&rq->queuelist, &q->queue_head);
+		}
 	}
-
-	/*
-	 * Start a new flush sequence
-	 */
-	q->flush_err = 0;
-	q->flush_seq |= QUEUE_FSEQ_STARTED;
-
-	/* adjust FLUSH/FUA of the original request and stash it away */
-	rq->cmd_flags &= ~REQ_FLUSH;
-	if (!has_fua)
-		rq->cmd_flags &= ~REQ_FUA;
-	blk_dequeue_request(rq);
-	q->orig_flush_rq = rq;
-
-	/* skip unneded sequences and return the first one */
-	if (!do_preflush)
-		skip |= QUEUE_FSEQ_PREFLUSH;
-	if (!blk_rq_sectors(rq))
-		skip |= QUEUE_FSEQ_DATA;
-	if (!do_postflush)
-		skip |= QUEUE_FSEQ_POSTFLUSH;
-	return blk_flush_complete_seq(q, skip, 0);
 }
 
 static void bio_end_flush(struct bio *bio, int err)
diff --git a/block/blk.h b/block/blk.h
index 9d2ee8f..284b500 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -51,21 +51,17 @@  static inline void blk_clear_rq_complete(struct request *rq)
  */
 #define ELV_ON_HASH(rq)		(!hlist_unhashed(&(rq)->hash))
 
-struct request *blk_do_flush(struct request_queue *q, struct request *rq);
+void blk_insert_flush(struct request *rq);
+void blk_abort_flushes(struct request_queue *q);
 
 static inline struct request *__elv_next_request(struct request_queue *q)
 {
 	struct request *rq;
 
 	while (1) {
-		while (!list_empty(&q->queue_head)) {
+		if (!list_empty(&q->queue_head)) {
 			rq = list_entry_rq(q->queue_head.next);
-			if (!(rq->cmd_flags & (REQ_FLUSH | REQ_FUA)) ||
-			    (rq->cmd_flags & REQ_FLUSH_SEQ))
-				return rq;
-			rq = blk_do_flush(q, rq);
-			if (rq)
-				return rq;
+			return rq;
 		}
 
 		if (!q->elevator->ops->elevator_dispatch_fn(q, 0))
diff --git a/block/elevator.c b/block/elevator.c
index 2569512..270e097 100644
--- a/block/elevator.c
+++ b/block/elevator.c
@@ -673,6 +673,11 @@  void elv_insert(struct request_queue *q, struct request *rq, int where)
 		q->elevator->ops->elevator_add_req_fn(q, rq);
 		break;
 
+	case ELEVATOR_INSERT_FLUSH:
+		rq->cmd_flags |= REQ_SOFTBARRIER;
+		blk_insert_flush(rq);
+		break;
+
 	default:
 		printk(KERN_ERR "%s: bad insertion point %d\n",
 		       __func__, where);
@@ -785,6 +790,8 @@  void elv_abort_queue(struct request_queue *q)
 {
 	struct request *rq;
 
+	blk_abort_flushes(q);
+
 	while (!list_empty(&q->queue_head)) {
 		rq = list_entry_rq(q->queue_head.next);
 		rq->cmd_flags |= REQ_QUIET;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4d18ff3..8a082a5 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -99,13 +99,18 @@  struct request {
 	/*
 	 * The rb_node is only used inside the io scheduler, requests
 	 * are pruned when moved to the dispatch queue. So let the
-	 * completion_data share space with the rb_node.
+	 * flush fields share space with the rb_node.
 	 */
 	union {
 		struct rb_node rb_node;	/* sort/lookup */
-		void *completion_data;
+		struct {
+			unsigned int			seq;
+			struct list_head		list;
+		} flush;
 	};
 
+	void *completion_data;
+
 	/*
 	 * Three pointers are available for the IO schedulers, if they need
 	 * more they have to dynamically allocate it.
@@ -363,11 +368,12 @@  struct request_queue
 	 * for flush operations
 	 */
 	unsigned int		flush_flags;
-	unsigned int		flush_seq;
-	int			flush_err;
+	unsigned int		flush_pending_idx:1;
+	unsigned int		flush_running_idx:1;
+	unsigned long		flush_pending_since;
+	struct list_head	flush_queue[2];
+	struct list_head	flush_data_in_flight;
 	struct request		flush_rq;
-	struct request		*orig_flush_rq;
-	struct list_head	pending_flushes;
 
 	struct mutex		sysfs_lock;
 
diff --git a/include/linux/elevator.h b/include/linux/elevator.h
index 4d85797..39b68ed 100644
--- a/include/linux/elevator.h
+++ b/include/linux/elevator.h
@@ -167,6 +167,7 @@  extern struct request *elv_rb_find(struct rb_root *, sector_t);
 #define ELEVATOR_INSERT_BACK	2
 #define ELEVATOR_INSERT_SORT	3
 #define ELEVATOR_INSERT_REQUEUE	4
+#define ELEVATOR_INSERT_FLUSH	5
 
 /*
  * return values from elevator_may_queue_fn