
ubi_update_fastmap: could not find an early PEB

Message ID 4FE49E19.4070005@nod.at
State New, archived

Commit Message

Richard Weinberger June 22, 2012, 4:32 p.m. UTC
On 22.06.2012 18:05, Nikita V. Youshchenko wrote:
> As far as I understand, this happens because all PEBs at the beginning of 
> the device are occupied. But this will always be the case after creating an 
> image with ubinize...

ubinize will also get fastmap support.
But first we have to finish the kernel level support.

> How to overcome this?

Can you please try the attached patch?
This patch allows the WL-worker to produce free anchor PEBs.

Thanks,
//richard

P.S.: Please use the most current fastmap code!

Comments

Nikita V. Youshchenko June 22, 2012, 5:26 p.m. UTC | #1
> > How to overcome this?
>
> Can you please try the attached patch?
> This patch allows the WL-worker to produce free anchor PEBs.

This patch does not apply against 'fastmap' branch of 
git://git.infradead.org/linux-ubi.git

And the incompatibility is big enough that it is not obviously resolvable.

> P.S.: Please use the most current fastmap code!

Where can I get that?

Nikita
Richard Weinberger June 22, 2012, 5:37 p.m. UTC | #2
Hi!

On 22.06.2012 19:26, Nikita V. Youshchenko wrote:
> Where can I get that?

git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubi2.git ubi2/v12

In the meantime I have had the chance to look more closely at the issue.
The previously posted patch may help, but cannot help in all cases.
It can happen that a freshly created anchor (aka early) PEB will immediately
go into a fastmap pool, such that it cannot be used as the fastmap super block.
I'll present a solution next week.

v12 has another issue.
If your flash contains bad PEBs, the following WARN_ON() in fastmap.c will
spuriously trigger:

        /*
         * If fastmap is leaking PEBs (must not happen), raise a
         * fat warning and fall back to scanning mode.
         * We do this here because in ubi_wl_init() it's too late
         * and we cannot fall back to scanning.
         */
        if (WARN_ON(self_check_fastmap(ai) != ubi->peb_count -
                    ubi->bad_peb_count - used_blocks)) {
                ret = UBI_BAD_FASTMAP;
                kfree(fm);
                goto free_hdr;
        }


Instead of ubi->bad_peb_count it has to be ai->bad_peb_count.
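
A minimal sketch of that fix, written as a diff against the v12 code quoted above:

diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
--- a/drivers/mtd/ubi/fastmap.c
+++ b/drivers/mtd/ubi/fastmap.c
@@
 	if (WARN_ON(self_check_fastmap(ai) != ubi->peb_count -
-		    ubi->bad_peb_count - used_blocks)) {
+		    ai->bad_peb_count - used_blocks)) {
 		ret = UBI_BAD_FASTMAP;

The attach-info count (ai->bad_peb_count) reflects what was actually found on this attach, which is what the self-check must be compared against.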

Thanks,
//richard
Nikita V. Youshchenko June 22, 2012, 6:46 p.m. UTC | #3
> Hi!
>
> On 22.06.2012 19:26, Nikita V. Youshchenko wrote:
> > Where can I get that?
>
> git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubi2.git ubi2/v12

I tried the code from there, plus the patch you've sent, and still get:

# ubidetach /dev/ubi_ctrl -m 2
UBI error: ubi_update_fastmap: could not find an anchor PEB
UBI warning: ubi_update_fastmap: Unable to write new fastmap, err=-28
UBI: mtd2 is detached from ubi0

Nikita
Richard Weinberger June 22, 2012, 6:49 p.m. UTC | #4
On 22.06.2012 20:46, Nikita V. Youshchenko wrote:
> I tried the code from there, plus the patch you've sent, and still get:
> 
> # ubidetach /dev/ubi_ctrl -m 2
> UBI error: ubi_update_fastmap: could not find an anchor PEB
> UBI warning: ubi_update_fastmap: Unable to write new fastmap, err=-28
> UBI: mtd2 is detached from ubi0

As I said, the posted patch may or may not help.
Most likely the freshly created anchor PEBs got consumed by one
of the pools.
Anyway, I'll present a proper solution within the next few days.

Thanks,
//richard
Artem Bityutskiy June 27, 2012, 10:56 a.m. UTC | #5
On Fri, 2012-06-22 at 21:26 +0400, Nikita V. Youshchenko wrote:
> > > How to overcome this?
> >
> > Can you please try the attached patch?
> > This patch allows the WL-worker to produce free anchor PEBs.
> 
> This patch does not apply against 'fastmap' branch of 
> git://git.infradead.org/linux-ubi.git
> 
> And the incompatibility is big enough that it is not obviously resolvable.

I've just pushed out the latest work of Richard to the "fastmap" branch,
so it should be up-to-date now. Thanks for the feedback; testing is much
appreciated and will certainly help get this merged faster.
Richard Weinberger June 27, 2012, 11 a.m. UTC | #6
On 27.06.2012 12:56, Artem Bityutskiy wrote:
> I've just pushed out the latest work of Richard to the "fastmap" branch,
> so it should be up-to-date now. Thanks for the feedback; testing is much
> appreciated and will certainly help get this merged faster.
> 

I'll send another patch set today.
It contains only minor coding-style fixes and the like.

Before fastmap gets merged I'd like to have a discussion about fastmap's
parameters.

/* A fastmap anchor block can be located between PEB 0 and
 * UBI_FM_MAX_START */
#define UBI_FM_MAX_START        64

/* A fastmap can use up to UBI_FM_MAX_BLOCKS PEBs */
#define UBI_FM_MAX_BLOCKS       32

/* 5% of the total number of PEBs have to be scanned while attaching
 * from a fastmap.
 * But the size of this pool is limited to be between UBI_FM_MIN_POOL_SIZE and
 * UBI_FM_MAX_POOL_SIZE */
#define UBI_FM_MIN_POOL_SIZE    8
#define UBI_FM_MAX_POOL_SIZE    256
#define UBI_FM_WL_POOL_SIZE     25

Are we all fine with them?
Especially the sizes of the two pools used by fastmap need more discussion.
If the pool size is too small, UBI has to write the fastmap very often.
Too large a pool means that attaching takes longer because fastmap has to rescan
more PEBs.

Thanks,
//richard
Nikita V. Youshchenko June 27, 2012, 11:07 a.m. UTC | #7
> Especially the sizes of the two pools used by fastmap need more
> discussion. If the pool size is too small, UBI has to write the fastmap
> very often. Too large a pool means that attaching takes longer because
> fastmap has to rescan more PEBs.

I'd prefer to see those configurable.

I see two obvious cases when different settings are needed.

The first is a system with very few changes at runtime, perhaps as little as 
saving user settings if the user ever changes them. Everything else is read-only. 
Then it is better to have a smaller pool size to make attaching faster.

The second is a system where extensive writes are expected. Then the pool indeed 
needs to be larger to avoid overly frequent fastmap writes.

Nikita
Artem Bityutskiy June 27, 2012, 12:15 p.m. UTC | #8
On Wed, 2012-06-27 at 15:07 +0400, Nikita V. Youshchenko wrote:
> > Especially the sizes of the two pools used by fastmap need more
> > discussion. If the pool size is too small, UBI has to write the fastmap
> > very often. Too large a pool means that attaching takes longer because
> > fastmap has to rescan more PEBs.
> 
> I'd prefer to see those configurable.

I agree, all parameters should be in the fastmap anchor instead of being
hard-coded. It is OK to hard-code the default values for the
auto-format, but if the UBI image is created with ubinize, the user
should be able to tune them himself.

Patch

diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
index 582f5ee..4793ba8 100644
--- a/drivers/mtd/ubi/fastmap.c
+++ b/drivers/mtd/ubi/fastmap.c
@@ -1374,6 +1374,12 @@  int ubi_update_fastmap(struct ubi_device *ubi)
 		return 0;
 	}
 
+	ret = ubi_ensure_anchor_pebs(ubi);
+	if (ret) {
+		mutex_unlock(&ubi->fm_mutex);
+		return ret;
+	}
+
 	new_fm = kzalloc(sizeof(*new_fm), GFP_KERNEL);
 	if (!new_fm) {
 		mutex_unlock(&ubi->fm_mutex);
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index 534e851..56b1c5c 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -663,6 +663,7 @@  struct ubi_attach_info {
  * @func: worker function
  * @e: physical eraseblock to erase
  * @torture: if the physical eraseblock has to be tortured
+ * @anchor: produce an anchor PEB to be used by fastmap
  *
  * The @func pointer points to the worker function. If the @cancel argument is
  * not zero, the worker has to free the resources and exit immediately. The
@@ -675,6 +676,7 @@  struct ubi_work {
 	/* The below fields are only relevant to erasure works */
 	struct ubi_wl_entry *e;
 	int torture;
+	int anchor;
 };
 
 #include "debug.h"
@@ -759,6 +761,7 @@  struct ubi_wl_entry *ubi_wl_get_fm_peb(struct ubi_device *ubi, int max_pnum);
 int ubi_wl_put_fm_peb(struct ubi_device *ubi, struct ubi_wl_entry *used_e, int torture);
 int ubi_is_erase_work(struct ubi_work *wrk);
 void ubi_refill_pools(struct ubi_device *ubi);
+int ubi_ensure_anchor_pebs(struct ubi_device *ubi);
 
 /* io.c */
 int ubi_io_read(const struct ubi_device *ubi, void *buf, int pnum, int offset,
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 6771f30..b4d4358 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -419,6 +419,18 @@  static struct ubi_wl_entry *find_anchor_wl_entry(struct rb_root *root,
 	return victim;
 }
 
+static int anchor_pebs_avalible(struct rb_root *root, int max_pnum)
+{
+	struct rb_node *p;
+	struct ubi_wl_entry *e;
+
+	ubi_rb_for_each_entry(p, e, root, u.rb)
+		if (e->pnum < max_pnum)
+			return 1;
+
+	return 0;
+}
+
 /**
  * ubi_wl_get_fm_peb - find a physical erase block with a given maximal number.
  * @ubi: UBI device description object
@@ -901,10 +913,11 @@  static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
 				int cancel)
 {
 	int err, scrubbing = 0, torture = 0, protect = 0, erroneous = 0;
-	int vol_id = -1, uninitialized_var(lnum);
+	int anchor, vol_id = -1, uninitialized_var(lnum);
 	struct ubi_wl_entry *e1, *e2;
 	struct ubi_vid_hdr *vid_hdr;
 
+	anchor = wrk->anchor;
 	kfree(wrk);
 	if (cancel)
 		return 0;
@@ -935,7 +948,23 @@  static int wear_leveling_worker(struct ubi_device *ubi, struct ubi_work *wrk,
 		goto out_cancel;
 	}
 
-	if (!ubi->scrub.rb_node) {
+	/* Check whether we need to produce an anchor PEB */
+	if (!anchor)
+		anchor = !anchor_pebs_avalible(&ubi->free, UBI_FM_MAX_START);
+
+	if (anchor) {
+		e1 = find_anchor_wl_entry(&ubi->used, UBI_FM_MAX_START);
+		if (!e1)
+			goto out_cancel;
+		e2 = get_peb_for_wl(ubi);
+		if (!e2)
+			goto out_cancel;
+
+		self_check_in_wl_tree(ubi, e1, &ubi->used);
+		rb_erase(&e1->u.rb, &ubi->used);
+		dbg_wl("anchor-move PEB %d to PEB %d", e1->pnum, e2->pnum);
+	}
+	else if (!ubi->scrub.rb_node) {
 		/*
 		 * Now pick the least worn-out used physical eraseblock and a
 		 * highly worn-out free physical eraseblock. If the erase
@@ -1229,6 +1258,7 @@  static int ensure_wear_leveling(struct ubi_device *ubi, int nested)
 		goto out_cancel;
 	}
 
+	wrk->anchor = 0;
 	wrk->func = &wear_leveling_worker;
 	if (nested)
 		__schedule_ubi_work(ubi, wrk);
@@ -1244,6 +1274,32 @@  out_unlock:
 	return err;
 }
 
+int ubi_ensure_anchor_pebs(struct ubi_device *ubi)
+{
+	struct ubi_work *wrk;
+
+	spin_lock(&ubi->wl_lock);
+	if (ubi->wl_scheduled) {
+		spin_unlock(&ubi->wl_lock);
+		return 0;
+	}
+	ubi->wl_scheduled = 1;
+	spin_unlock(&ubi->wl_lock);
+
+	wrk = kmalloc(sizeof(struct ubi_work), GFP_NOFS);
+	if (!wrk) {
+		spin_lock(&ubi->wl_lock);
+		ubi->wl_scheduled = 0;
+		spin_unlock(&ubi->wl_lock);
+		return -ENOMEM;
+	}
+
+	wrk->anchor = 1;
+	wrk->func = &wear_leveling_worker;
+	schedule_ubi_work(ubi, wrk);
+	return 0;
+}
+
 /**
  * erase_worker - physical eraseblock erase worker function.
  * @ubi: UBI device description object