Patchwork [Oneiric] Convert dm-raid45 to new block device plugging

Submitter Stefan Bader
Date May 26, 2011, 2:42 p.m.
Message ID <4DDE66BF.3010104@canonical.com>
Download mbox | patch
Permalink /patch/97584/
State New
Headers show

Comments

Stefan Bader - May 26, 2011, 2:42 p.m.
I was discussing this with the maintainer. According to him, the dm-raid45 code 
is no longer being developed. The goal is to convert the dmraid user-space 
side to use dm-raid (a device-mapper target for RAID4/5/6 that uses md).

However, looking at the user-space code currently available to the public, 
I do not see any support for this. The table format used to create the 
device-mapper targets differs substantially between the two modules, so 
user-space would have to change accordingly (and I am not sure how well all 
features would be supported at the moment).

So, at least for Oneiric, I think we need to stay with dm-raid45. I tested 
the change below on an Intel soft-RAID5 and it survived an iozone and a 
bonnie++ run.
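
For context, the 2.6.39 block layer removed the old per-queue unplugging
(blk_unplug() called on each device queue) in favour of an explicit, on-stack,
per-task plug. The pattern the patch below adopts looks roughly like this; a
sketch against the in-kernel API, not standalone compilable code:

```c
#include <linux/blkdev.h>

/*
 * Old scheme (pre-2.6.39): the driver had to kick each device queue
 * itself once it had queued I/O there.
 *
 *	blk_unplug(bdev_get_queue(dev->dev->bdev));
 *
 * New scheme: wrap a batch of bio submissions in a stack-allocated
 * plug; the block layer batches them on the current task and flushes
 * them to the devices when the plug is finished (or the task sleeps).
 */
static void submit_batch(void)
{
	struct blk_plug plug;

	blk_start_plug(&plug);	/* start batching on this task */
	/* ... submit_bio() calls for the whole batch ... */
	blk_finish_plug(&plug);	/* flush the batched I/O */
}
```

The practical upshot for dm-raid45 is that do_unplug() and its per-device
bookkeeping can simply be deleted, replaced by a start/finish pair around each
submission phase in do_raid().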

-Stefan
Leann Ogasawara - May 26, 2011, 3:43 p.m.
Applied to Oneiric master-next.  Slightly modified to separate the
config change from the code change.  I also chose to revert the previous
disablement of CONFIG_DM_RAID45 instead, so that it's easy to drop it
from existence upon the next rebase, e.g.

Revert "UBUNTU: [Config] Disable CONFIG_DM_RAID45"

Thanks,
Leann

On Thu, 2011-05-26 at 16:42 +0200, Stefan Bader wrote:
> I was discussing this with the maintainer. According to him, the dm-raid45 code 
> is no longer being developed. The goal is to convert the dmraid user-space 
> side to use dm-raid (a device-mapper target for RAID4/5/6 that uses md).
> 
> However, looking at the user-space code currently available to the public, 
> I do not see any support for this. The table format used to create the 
> device-mapper targets differs substantially between the two modules, so 
> user-space would have to change accordingly (and I am not sure how well all 
> features would be supported at the moment).
> 
> So, at least for Oneiric, I think we need to stay with dm-raid45. I tested 
> the change below on an Intel soft-RAID5 and it survived an iozone and a 
> bonnie++ run.
> 
> -Stefan

Patch

From 3f1298b21f4aa5d21a8a22a69f6d602fd091789e Mon Sep 17 00:00:00 2001
From: Stefan Bader <stefan.bader@canonical.com>
Date: Thu, 26 May 2011 13:19:57 +0200
Subject: [PATCH] UBUNTU: SAUCE: Convert dm-raid45 to new block plugging

Plugging of I/O to block devices was changed to an explicit, per-task
basis. This converts the module to the new framework, fixing the compile
failure.

Signed-off-by: Stefan Bader <stefan.bader@canonical.com>
---
 debian.master/config/config.common.ubuntu |    2 +-
 ubuntu/dm-raid4-5/dm-raid4-5.c            |   19 +++++--------------
 2 files changed, 6 insertions(+), 15 deletions(-)

diff --git a/debian.master/config/config.common.ubuntu b/debian.master/config/config.common.ubuntu
index 44cb9ee..fc9081f 100644
--- a/debian.master/config/config.common.ubuntu
+++ b/debian.master/config/config.common.ubuntu
@@ -1185,7 +1185,7 @@  CONFIG_DM_CRYPT=m
 CONFIG_DM_MULTIPATH_QL=m
 CONFIG_DM_MULTIPATH_ST=m
 CONFIG_DM_RAID=m
-# CONFIG_DM_RAID45 is not set
+CONFIG_DM_RAID45=m
 CONFIG_DM_UEVENT=y
 CONFIG_DM_ZERO=m
 CONFIG_DNET=m
diff --git a/ubuntu/dm-raid4-5/dm-raid4-5.c b/ubuntu/dm-raid4-5/dm-raid4-5.c
index 504aee3..fcc782c 100644
--- a/ubuntu/dm-raid4-5/dm-raid4-5.c
+++ b/ubuntu/dm-raid4-5/dm-raid4-5.c
@@ -3275,18 +3275,6 @@  static void do_ios(struct raid_set *rs, struct bio_list *ios)
 	bio_list_merge_head(ios, &reject);
 }
 
-/* Unplug: let any queued io role on the sets devices. */
-static void do_unplug(struct raid_set *rs)
-{
-	struct raid_dev *dev = rs->dev + rs->set.raid_devs;
-
-	while (dev-- > rs->dev) {
-		/* Only call any device unplug function, if io got queued. */
-		if (TestClearDevIoQueued(dev))
-			blk_unplug(bdev_get_queue(dev->dev->bdev));
-	}
-}
-
 /* Send an event in case we're getting too busy. */
 static void do_busy_event(struct raid_set *rs)
 {
@@ -3326,6 +3314,7 @@  static void do_raid(struct work_struct *ws)
 	struct raid_set *rs = container_of(ws, struct raid_set,
 					   io.dws_do_raid.work);
 	struct bio_list *ios = &rs->io.work, *ios_in = &rs->io.in;
+	struct blk_plug plug;
 
 	/*
 	 * We always need to end io, so that ios can get errored in
@@ -3342,8 +3331,9 @@  static void do_raid(struct work_struct *ws)
 	do_sc_resize(rs);
 
 	/* Try to recover regions. */
+	blk_start_plug(&plug);
 	do_recovery(rs);
-	do_unplug(rs);		/* Unplug the sets device queues. */
+	blk_finish_plug(&plug);	/* Unplug the queue */
 
 	/* Quickly grab all new ios queued and add them to the work list. */
 	mutex_lock(&rs->io.in_lock);
@@ -3351,11 +3341,12 @@  static void do_raid(struct work_struct *ws)
 	bio_list_init(ios_in);
 	mutex_unlock(&rs->io.in_lock);
 
+	blk_start_plug(&plug);
 	if (!bio_list_empty(ios))
 		do_ios(rs, ios); /* Got ios to work into the cache. */
 
 	do_flush(rs);		/* Flush any stripes on io list. */
-	do_unplug(rs);		/* Unplug the sets device queues. */
+	blk_finish_plug(&plug);	/* Unplug the queue */
 	do_busy_event(rs);	/* Check if we got too busy. */
 }
 
-- 
1.7.4.1