
UBI fastmap updates

Message ID 20120708144745.36109029@pixies.home.jungo.com
State New, archived

Commit Message

Shmulik Ladkani July 8, 2012, 11:47 a.m. UTC
Hi Richard,

On Fri, 29 Jun 2012 17:14:18 +0200 Richard Weinberger <richard@nod.at> wrote:
> This is the next round of UBI fastmap updates.

Please examine some TODOs (and questions) I've spotted while diffing
against "vanilla" ubi.

This patch should apply to linux-ubi at d41a140

Sorry I couldn't review it entirely, the diff has gotten really enormous...
I'll try to pick it up again when you get to the final merge and
submit incremental patches.

Regards,
Shmulik

Comments

Richard Weinberger July 8, 2012, 12:07 p.m. UTC | #1
Hi Shmulik!

On 08.07.2012 13:47, Shmulik Ladkani wrote:
> +
> +		/* TODO: if find_fastmap==1, we do not enter this block at all.
> +		 * shouldn't we? shouldn't we care about compatibility of unknown
> +		 * internal volumes OTHER than the fastmap ones, even if
> +		 * find_fastmap==1?
> +		 */
> +

If find_fastmap=1 we scan only the first 64 PEBs (now by using scan_peb()).
When using fastmap we do not need to care about compatibility of unknown internal
volumes at all.
Fastmap only keeps track of known (and compatible) volumes.
Taking care of unknown internal volumes would imply a full scan.
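
To illustrate the idea (a rough sketch only, not the actual attach code;
the scan_peb() argument list and loop structure here are assumed):

	/* With find_fastmap set, only the PEBs that may hold a fastmap
	 * super block (the first UBI_FM_MAX_START, i.e. 64) are scanned. */
	int pnum, limit = find_fastmap ? UBI_FM_MAX_START : ubi->peb_count;

	for (pnum = 0; pnum < limit; pnum++) {
		cond_resched();
		err = scan_peb(ubi, ai, pnum, find_fastmap); /* signature assumed */
		if (err < 0)
			return err;
	}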

>  		int lnum = be32_to_cpu(vidh->lnum);
>  
>  		/* Unsupported internal volume */
> @@ -1221,6 +1228,7 @@ static void destroy_ai(struct ubi_attach_info *ai)
>   * scan_all - scan entire MTD device.
>   * @ubi: UBI device description object
>   * @ai: attach info object
> + * TODO: document @start

Tomorrow I'll send another kernel-doc update.
There are more tags missing.

>   * This function does full scanning of an MTD device and returns complete
>   * information about it in form of a "struct ubi_attach_info" object. In case
> @@ -1350,6 +1358,11 @@ out_vidh:
>  out_ech:
>  	kfree(ech);
>  out_ai:
> +	/* TODO: doesn't seem clean to destroy 'ai' as it was allocated by the
> +	 * caller, it's the caller's responsibility.
> +	 * also looks like it leads to double free in case the 'err' returned is
> +	 * negative
> +	 */

I have to take a closer look at this.
It looks like the call to destroy_ai() in scan_fast() has to go.

>  	destroy_ai(ai);
>  	return err;
>  }
> @@ -1441,6 +1454,7 @@ int ubi_attach(struct ubi_device *ubi, int force_scan)
>  
>  		err = scan_all(ubi, scan_ai, 0);
>  		if (err) {
> +			/* TODO: hmm... kfree or destroy_ai ? */

True. Must be destroy_ai().

>  			kfree(scan_ai);
>  			goto out_wl;
>  		}
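
In other words, something along these lines (a hedged sketch of the
intended fix, not a posted patch; it assumes scan_all() no longer
destroys 'ai' on its own error path, as discussed above):

	if (err) {
		/* destroy_ai() also releases the attach-info internals
		 * (the eraseblock lists and volume tree), which a bare
		 * kfree() of 'scan_ai' would leak. */
		destroy_ai(scan_ai);
		goto out_wl;
	}
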
> diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
> index 50b7590..f769c22 100644
> --- a/drivers/mtd/ubi/build.c
> +++ b/drivers/mtd/ubi/build.c
> @@ -1053,6 +1053,7 @@ int ubi_detach_mtd_dev(int ubi_num, int anyway)
>  	ubi_notify_all(ubi, UBI_VOLUME_REMOVED, NULL);
>  	dbg_msg("detaching mtd%d from ubi%d", ubi->mtd->index, ubi_num);
>  
> +	/* TODO: any action on failure? */

What do you expect on failure?
At this point we have either no fastmap or a valid fastmap on flash.
If ubi_update_fastmap() fails it will ensure that no invalid fastmap remains on flash.
See invalidate_fastmap().
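
At most one could log it, e.g. (just an illustration from my side, not
part of the patch; it assumes ubi_update_fastmap() returns a negative
error code on failure):

	/* A failed update is not fatal for detach: invalidate_fastmap()
	 * already made sure no stale fastmap survives on flash. */
	if (ubi_update_fastmap(ubi))
		ubi_warn("failed to write fastmap, detaching anyway");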

> --- a/drivers/mtd/ubi/wl.c
> +++ b/drivers/mtd/ubi/wl.c
> @@ -272,6 +272,10 @@ static int produce_free_peb(struct ubi_device *ubi)
>  		dbg_wl("do one work synchronously");
>  		err = do_work(ubi);
>  		if (err)
> +			/* TODO: in the new locking scheme, produce_free_peb is
> +			 * called under wl_lock taken.
> +			 * so when returning, should reacquire the lock
> +			 */

Which new locking scheme?

>  			return err;
>  
>  		spin_lock(&ubi->wl_lock);
> @@ -377,6 +381,7 @@ static struct ubi_wl_entry *find_wl_entry(struct ubi_device *ubi,
>  	 * as anchor PEB, hold it back and return the second best WL entry
>  	 * such that fastmap can use the anchor PEB later. */
>  	if (!ubi->fm_disabled && !ubi->fm && e->pnum < UBI_FM_MAX_START)
> +		/* TODO: do we have a risk returning NULL here? */

How?

>  		return prev_e;
>  
>  	return e;
> @@ -405,6 +410,7 @@ static struct ubi_wl_entry *find_mean_wl_entry(struct ubi_device *ubi,
>  		/* If no fastmap has been written and this WL entry can be used
>  		 * as anchor PEB, hold it back and return the second best
>  		 * WL entry such that fastmap can use the anchor PEB later. */
> +		/* TODO: why this is specific just to < WL_FREE_MAX_DIFF case? */

Because find_wl_entry() already takes care of this.

Thanks,
//richard
Richard Weinberger July 8, 2012, 3:11 p.m. UTC | #2
On 08.07.2012 14:07, Richard Weinberger wrote:
> Hi Shmulik!
> 
> On 08.07.2012 13:47, Shmulik Ladkani wrote:
>> +
>> +		/* TODO: if find_fastmap==1, we do not enter this block at all.
>> +		 * shouldn't we? shouldn't we care about compatibility of unknown
>> +		 * internal volumes OTHER than the fastmap ones, even if
>> +		 * find_fastmap==1?
>> +		 */
>> +
> 
> If find_fastmap=1 we scan only the first 64 PEBs (now by using scan_peb()).
> When using fastmap we do not need to care about compatibility of unknown internal
> volumes at all.
> Fastmap only keeps track of known (and compatible) volumes.
> Taking care of unknown internal volumes would imply a full scan.

Please forget the above statement.
We have to think of the case where no fastmap was found and the
64 scanned PEBs get reused by scan_all().
Thanks for pointing this out!

Thanks,
//richard
Shmulik Ladkani July 9, 2012, 7:37 a.m. UTC | #3
Hi Richard,

On Sun, 08 Jul 2012 14:07:41 +0200 Richard Weinberger <richard@nod.at> wrote:
> > +			/* TODO: in the new locking scheme, produce_free_peb is
> > +			 * called under wl_lock taken.
> > +			 * so when returning, should reacquire the lock
> > +			 */
> 
> Which new locking scheme?

I am diffing linux-ubi fastmap HEAD against its fork point (vanilla
ubi), that's 6b16351..d41a140 on linux-ubi.

Which gives the following diff in produce_free_peb:

@@ -261,7 +266,6 @@ static int produce_free_peb(struct ubi_device *ubi)
 {
 	int err;
 
-	spin_lock(&ubi->wl_lock);
 	while (!ubi->free.rb_node) {
 		spin_unlock(&ubi->wl_lock);
 
@@ -272,7 +276,6 @@ static int produce_free_peb(struct ubi_device *ubi)
 
 		spin_lock(&ubi->wl_lock);
 	}
-	spin_unlock(&ubi->wl_lock);
 
 	return 0;
 }

Which is probably okay, since you obtain the lock in the new
'ubi_refill_pools', which calls produce_free_peb:

+void ubi_refill_pools(struct ubi_device *ubi)
+{
+	spin_lock(&ubi->wl_lock);
+	refill_wl_pool(ubi);
+	refill_wl_user_pool(ubi);
+	spin_unlock(&ubi->wl_lock);
+}

However, if 'do_work' fails within 'produce_free_peb', you return the
error with wl_lock left unlocked, where the caller expects it to be
held (ubi_refill_pools will then unlock a lock that was already dropped):

static int produce_free_peb(struct ubi_device *ubi)
{
	int err;

	while (!ubi->free.rb_node) {
		spin_unlock(&ubi->wl_lock);

		dbg_wl("do one work synchronously");
		err = do_work(ubi);
		if (err)
			return err;

		spin_lock(&ubi->wl_lock);
	}

	return 0;
}
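
A possible fix along these lines (just a sketch, not a patch I'm
posting) would re-take the lock on the error path, so the lock is held
on return as the caller expects:

	err = do_work(ubi);
	if (err) {
		/* re-acquire wl_lock so the unlock in ubi_refill_pools()
		 * stays balanced */
		spin_lock(&ubi->wl_lock);
		return err;
	}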

Regards,
Shmulik
Richard Weinberger July 9, 2012, 8:19 a.m. UTC | #4
On 09.07.2012 09:37, Shmulik Ladkani wrote:
> Hi Richard,
> 
> On Sun, 08 Jul 2012 14:07:41 +0200 Richard Weinberger <richard@nod.at> wrote:
>>> +			/* TODO: in the new locking scheme, produce_free_peb is
>>> +			 * called under wl_lock taken.
>>> +			 * so when returning, should reacquire the lock
>>> +			 */
>>
>> Which new locking scheme?
> 
> I am diffing linux-ubi fastmap HEAD against its fork point (vanilla
> ubi), that's 6b16351..d41a140 on linux-ubi.
> 
> Which gives the following diff in produce_free_peb:

Ahh, _my_ new locking scheme. I feared someone else had changed it in mainline in the meantime. ;)
Yes, taking &ubi->wl_lock in produce_free_peb() is no longer needed.
Again, thanks for pointing this out!

Thanks,
//richard

Patch

diff --git a/drivers/mtd/ubi/attach.c b/drivers/mtd/ubi/attach.c
index a343a41..dae9674 100644
--- a/drivers/mtd/ubi/attach.c
+++ b/drivers/mtd/ubi/attach.c
@@ -999,6 +999,13 @@  static int scan_peb(struct ubi_device *ubi, struct ubi_attach_info *ai,
 
 	if (!find_fastmap &&
 	   (vol_id > UBI_MAX_VOLUMES && vol_id != UBI_LAYOUT_VOLUME_ID)) {
+
+		/* TODO: if find_fastmap==1, we do not enter this block at all.
+		 * shouldn't we? shouldn't we care about compatibility of unknown
+		 * internal volumes OTHER than the fastmap ones, even if
+		 * find_fastmap==1?
+		 */
+
 		int lnum = be32_to_cpu(vidh->lnum);
 
 		/* Unsupported internal volume */
@@ -1221,6 +1228,7 @@  static void destroy_ai(struct ubi_attach_info *ai)
  * scan_all - scan entire MTD device.
  * @ubi: UBI device description object
  * @ai: attach info object
+ * TODO: document @start
  *
  * This function does full scanning of an MTD device and returns complete
  * information about it in form of a "struct ubi_attach_info" object. In case
@@ -1350,6 +1358,11 @@  out_vidh:
 out_ech:
 	kfree(ech);
 out_ai:
+	/* TODO: doesn't seem clean to destroy 'ai' as it was allocated by the
+	 * caller, it's the caller's responsibility.
+	 * also looks like it leads to double free in case the 'err' returned is
+	 * negative
+	 */
 	destroy_ai(ai);
 	return err;
 }
@@ -1441,6 +1454,7 @@  int ubi_attach(struct ubi_device *ubi, int force_scan)
 
 		err = scan_all(ubi, scan_ai, 0);
 		if (err) {
+			/* TODO: hmm... kfree or destroy_ai ? */
 			kfree(scan_ai);
 			goto out_wl;
 		}
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index 50b7590..f769c22 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -1053,6 +1053,7 @@  int ubi_detach_mtd_dev(int ubi_num, int anyway)
 	ubi_notify_all(ubi, UBI_VOLUME_REMOVED, NULL);
 	dbg_msg("detaching mtd%d from ubi%d", ubi->mtd->index, ubi_num);
 
+	/* TODO: any action on failure? */
 	ubi_update_fastmap(ubi);
 
 	/*
@@ -1070,7 +1071,6 @@  int ubi_detach_mtd_dev(int ubi_num, int anyway)
 
 	ubi_debugfs_exit_dev(ubi);
 	uif_close(ubi);
-
 	ubi_wl_close(ubi);
 	ubi_free_internal_volumes(ubi);
 	vfree(ubi->vtbl);
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index 9f766ff..0b2d0cf 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -219,7 +219,7 @@  struct ubi_volume_desc;
  * @size: size of the fastmap in bytes
  * @used_blocks: number of used PEBs
  * @max_pool_size: maximal size of the user pool
- * @max_wl_pool_size: maximal size of the pooly used by the WL sub-system
+ * @max_wl_pool_size: maximal size of the pool used by the WL sub-system
  * @raw: the fastmap itself as byte array (only valid while attaching)
  */
 struct ubi_fastmap_layout {
@@ -242,7 +242,6 @@  struct ubi_fastmap_layout {
  * A pool gets filled with up to max_size.
  * If all PEBs within the pool are used a new fastmap will be written
  * to the flash and the pool gets refilled with empty PEBs.
- *
  */
 struct ubi_fm_pool {
 	int pebs[UBI_FM_MAX_POOL_SIZE];
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 6c69017..688881b 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -272,6 +272,10 @@  static int produce_free_peb(struct ubi_device *ubi)
 		dbg_wl("do one work synchronously");
 		err = do_work(ubi);
 		if (err)
+			/* TODO: in the new locking scheme, produce_free_peb is
+			 * called under wl_lock taken.
+			 * so when returning, should reacquire the lock
+			 */
 			return err;
 
 		spin_lock(&ubi->wl_lock);
@@ -377,6 +381,7 @@  static struct ubi_wl_entry *find_wl_entry(struct ubi_device *ubi,
 	 * as anchor PEB, hold it back and return the second best WL entry
 	 * such that fastmap can use the anchor PEB later. */
 	if (!ubi->fm_disabled && !ubi->fm && e->pnum < UBI_FM_MAX_START)
+		/* TODO: do we have a risk returning NULL here? */
 		return prev_e;
 
 	return e;
@@ -405,6 +410,7 @@  static struct ubi_wl_entry *find_mean_wl_entry(struct ubi_device *ubi,
 		/* If no fastmap has been written and this WL entry can be used
 		 * as anchor PEB, hold it back and return the second best
 		 * WL entry such that fastmap can use the anchor PEB later. */
+		/* TODO: why this is specific just to < WL_FREE_MAX_DIFF case? */
 		if (e && !ubi->fm_disabled && !ubi->fm &&
 		    e->pnum < UBI_FM_MAX_START)
 			e = rb_entry(rb_next(root->rb_node),