
[RFC/PATCH] mtd: ubi: Free peb's synchronously for fastmap

Message ID 1396339305-16005-1-git-send-email-tlinder@codeaurora.org
State RFC

Commit Message

Tatyana Brokhman April 1, 2014, 8:01 a.m. UTC
At first mount it is possible that there are not enough free PEBs, since
there are PEBs still pending erasure. In such a scenario, fm_pool (the
pool from which user-requested PEBs are allocated) will be empty.
Address this by synchronously performing the pending erase work, thus
producing additional free PEBs.

Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

Comments

Richard Weinberger April 7, 2014, 1:02 p.m. UTC | #1
On Tue, Apr 1, 2014 at 10:01 AM, Tanya Brokhman <tlinder@codeaurora.org> wrote:
> At first mount it's possible that there are not enough free PEBs since
> there are PEB's pending to be erased. In such scenario, fm_pool (which is
> the pool from which user required PEBs are allocated) will be empty.
> Try fixing the above described situation by synchronously performing
> pending erase work, thus produce another free PEB.
>
> Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
>
> diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
> index 457ead3..9a36f78 100644
> --- a/drivers/mtd/ubi/wl.c
> +++ b/drivers/mtd/ubi/wl.c
> @@ -595,10 +595,29 @@ static void refill_wl_pool(struct ubi_device *ubi)
>  static void refill_wl_user_pool(struct ubi_device *ubi)
>  {
>         struct ubi_fm_pool *pool = &ubi->fm_pool;
> +       int err;
>
>         return_unused_pool_pebs(ubi, pool);
>
>         for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
> +retry:
> +               if (!ubi->free.rb_node ||
> +                  (ubi->free_count - ubi->beb_rsvd_pebs < 1)) {
> +                       /*
> +                        * There are no available PEBs. Try to free
> +                        * PEB by means of synchronous execution of
> +                        * pending works.
> +                        */
> +                       if (ubi->works_count == 0)
> +                               break;
> +                       spin_unlock(&ubi->wl_lock);
> +                       err = do_work(ubi);
> +                       spin_lock(&ubi->wl_lock);

This is basically what produce_free_peb() does.

> +                       if (err < 0)
> +                               break;
> +                       goto retry;
> +               }
> +
>                 pool->pebs[pool->size] = __wl_get_peb(ubi);

__wl_get_peb() already calls produce_free_peb() when we run out of free PEBs.

Does your patch really fix a problem you encountered, or did you find
the issue by reviewing the code?
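
For reference, the produce_free_peb() helper Richard points at (and which
__wl_get_peb() falls back to when the free tree is empty) looked roughly
like this in the wl.c of that era; this is a paraphrase, not a verbatim
copy of any particular tree:

static int produce_free_peb(struct ubi_device *ubi)
{
	int err;

	/*
	 * Called with ubi->wl_lock held; callers check ubi->works_count
	 * first, since the loop only ends once the free tree is non-empty.
	 */
	while (!ubi->free.rb_node) {
		spin_unlock(&ubi->wl_lock);
		dbg_wl("do one work synchronously");
		err = do_work(ubi);
		spin_lock(&ubi->wl_lock);
		if (err)
			return err;
	}

	return 0;
}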
Tatyana Brokhman April 7, 2014, 4:05 p.m. UTC | #2
On 4/7/2014 4:02 PM, Richard Weinberger wrote:
> On Tue, Apr 1, 2014 at 10:01 AM, Tanya Brokhman <tlinder@codeaurora.org> wrote:
>> At first mount it's possible that there are not enough free PEBs since
>> there are PEB's pending to be erased. In such scenario, fm_pool (which is
>> the pool from which user required PEBs are allocated) will be empty.
>> Try fixing the above described situation by synchronously performing
>> pending erase work, thus produce another free PEB.
>>
>> Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
>>
>> diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
>> index 457ead3..9a36f78 100644
>> --- a/drivers/mtd/ubi/wl.c
>> +++ b/drivers/mtd/ubi/wl.c
>> @@ -595,10 +595,29 @@ static void refill_wl_pool(struct ubi_device *ubi)
>>   static void refill_wl_user_pool(struct ubi_device *ubi)
>>   {
>>          struct ubi_fm_pool *pool = &ubi->fm_pool;
>> +       int err;
>>
>>          return_unused_pool_pebs(ubi, pool);
>>
>>          for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
>> +retry:
>> +               if (!ubi->free.rb_node ||
>> +                  (ubi->free_count - ubi->beb_rsvd_pebs < 1)) {
>> +                       /*
>> +                        * There are no available PEBs. Try to free
>> +                        * PEB by means of synchronous execution of
>> +                        * pending works.
>> +                        */
>> +                       if (ubi->works_count == 0)
>> +                               break;
>> +                       spin_unlock(&ubi->wl_lock);
>> +                       err = do_work(ubi);
>> +                       spin_lock(&ubi->wl_lock);
>
> This is basically what produce_free_peb() does.

Right. I didn't use it just because of the termination condition:
produce_free_peb() stops as soon as there is 1 free PEB, and I need more
than 1.

>
>> +                       if (err < 0)
>> +                               break;
>> +                       goto retry;
>> +               }
>> +
>>                  pool->pebs[pool->size] = __wl_get_peb(ubi);
>
> __wl_get_peb() already calls produce_free_peb() when we run out of free PEBs.
>
> Does your patch really fix a problem you encounter or did you find the
> issue by reviewing
> the code?
>

Yes. We encountered this issue, as described in the commit message. This 
is the fix. Verified and working for us.
Richard Weinberger April 7, 2014, 4:42 p.m. UTC | #3
On 07.04.2014 18:05, Tanya Brokhman wrote:
> On 4/7/2014 4:02 PM, Richard Weinberger wrote:
>> On Tue, Apr 1, 2014 at 10:01 AM, Tanya Brokhman <tlinder@codeaurora.org> wrote:
>>> At first mount it's possible that there are not enough free PEBs since
>>> there are PEB's pending to be erased. In such scenario, fm_pool (which is
>>> the pool from which user required PEBs are allocated) will be empty.
>>> Try fixing the above described situation by synchronously performing
>>> pending erase work, thus produce another free PEB.
>>>
>>> Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
>>>
>>> diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
>>> index 457ead3..9a36f78 100644
>>> --- a/drivers/mtd/ubi/wl.c
>>> +++ b/drivers/mtd/ubi/wl.c
>>> @@ -595,10 +595,29 @@ static void refill_wl_pool(struct ubi_device *ubi)
>>>   static void refill_wl_user_pool(struct ubi_device *ubi)
>>>   {
>>>          struct ubi_fm_pool *pool = &ubi->fm_pool;
>>> +       int err;
>>>
>>>          return_unused_pool_pebs(ubi, pool);
>>>
>>>          for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
>>> +retry:
>>> +               if (!ubi->free.rb_node ||
>>> +                  (ubi->free_count - ubi->beb_rsvd_pebs < 1)) {
>>> +                       /*
>>> +                        * There are no available PEBs. Try to free
>>> +                        * PEB by means of synchronous execution of
>>> +                        * pending works.
>>> +                        */
>>> +                       if (ubi->works_count == 0)
>>> +                               break;
>>> +                       spin_unlock(&ubi->wl_lock);
>>> +                       err = do_work(ubi);
>>> +                       spin_lock(&ubi->wl_lock);
>>
>> This is basically what produce_free_peb() does.
> 
> Right. I didn't use t just because of the termination condition. produce_free_peb stops if there is 1 free peb. I need more then 1
> 
>>
>>> +                       if (err < 0)
>>> +                               break;
>>> +                       goto retry;
>>> +               }
>>> +
>>>                  pool->pebs[pool->size] = __wl_get_peb(ubi);
>>
>> __wl_get_peb() already calls produce_free_peb() when we run out of free PEBs.
>>
>> Does your patch really fix a problem you encounter or did you find the
>> issue by reviewing
>> the code?
>>
> 
> Yes. We encountered this issue, as described in the commit message. This is the fix. Verified and working for us.

Wouldn't it be better to fix produce_free_peb() instead of duplicating it?
I.e., such that you can tell it how many PEBs you need.

Thanks,
//richard
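
A minimal sketch of that suggestion, assuming a new count argument (the
name produce_free_pebs() and the exact termination condition below are
purely illustrative, not existing code):

/*
 * Hypothetical variant of produce_free_peb(): run pending work
 * synchronously until at least @count PEBs are free or no pending
 * work is left. Illustrative only.
 */
static int produce_free_pebs(struct ubi_device *ubi, int count)
{
	int err;

	/* Called with ubi->wl_lock held, like produce_free_peb(). */
	while (ubi->free_count < count && ubi->works_count) {
		spin_unlock(&ubi->wl_lock);
		err = do_work(ubi);
		spin_lock(&ubi->wl_lock);
		if (err)
			return err;
	}

	return 0;
}

refill_wl_user_pool() could then ask once for pool->max_size free PEBs
(plus whatever headroom beb_rsvd_pebs requires) instead of duplicating
the retry loop around every __wl_get_peb() call.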
Artem Bityutskiy April 8, 2014, 1:42 p.m. UTC | #4
On Tue, 2014-04-01 at 11:01 +0300, Tanya Brokhman wrote:
> At first mount it's possible that there are not enough free PEBs since
> there are PEB's pending to be erased. In such scenario, fm_pool (which is
> the pool from which user required PEBs are allocated) will be empty.
> Try fixing the above described situation by synchronously performing
> pending erase work, thus produce another free PEB.
> 
> Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>

Pushed to linux-ubifs.git / master, thanks!
Bityutskiy, Artem April 8, 2014, 1:44 p.m. UTC | #5
On Tue, 2014-04-08 at 16:42 +0300, Artem Bityutskiy wrote:
> On Tue, 2014-04-01 at 11:01 +0300, Tanya Brokhman wrote:
> > At first mount it's possible that there are not enough free PEBs since
> > there are PEB's pending to be erased. In such scenario, fm_pool (which is
> > the pool from which user required PEBs are allocated) will be empty.
> > Try fixing the above described situation by synchronously performing
> > pending erase work, thus produce another free PEB.
> > 
> > Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
> 
> Pushed to linux-ubifs.git / master, thanks!

Oh, sorry, this one I actually _dropped_. Would you please restructure
the code to avoid the duplication instead? E.g., do what Richard
suggested.
Richard Weinberger April 8, 2014, 10:17 p.m. UTC | #6
On Tue, Apr 8, 2014 at 3:44 PM, Bityutskiy, Artem
<artem.bityutskiy@intel.com> wrote:
> On Tue, 2014-04-08 at 16:42 +0300, Artem Bityutskiy wrote:
>> On Tue, 2014-04-01 at 11:01 +0300, Tanya Brokhman wrote:
>> > At first mount it's possible that there are not enough free PEBs since
>> > there are PEB's pending to be erased. In such scenario, fm_pool (which is
>> > the pool from which user required PEBs are allocated) will be empty.
>> > Try fixing the above described situation by synchronously performing
>> > pending erase work, thus produce another free PEB.
>> >
>> > Signed-off-by: Tatyana Brokhman <tlinder@codeaurora.org>
>>
>> Pushed to linux-ubifs.git / master, thanks!
>
> Oh, sorry, this one I actually _dropped_. Would you please rather
> re-structure the code to avoid duplication. E.g., do what Richard
> suggested.

Tatyana, can you also please find out how many PEBs you need?
Strictly speaking we need only one (which should be produced by __wl_get_peb()).
I want to make sure that we're not just papering over an issue. :-)
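
For context, the fallback Richard refers to is the retry path at the top
of __wl_get_peb(), which at the time read roughly as follows (again a
paraphrase, shown only up to the point where a free entry is picked):

retry:
	if (!ubi->free.rb_node) {
		if (ubi->works_count == 0) {
			/* No pending work that could free a PEB. */
			ubi_err("no free eraseblocks");
			return -ENOSPC;
		}

		/* Do pending work synchronously, then try again. */
		err = produce_free_peb(ubi);
		if (err < 0)
			return err;
		goto retry;
	}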

Patch

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 457ead3..9a36f78 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -595,10 +595,29 @@  static void refill_wl_pool(struct ubi_device *ubi)
 static void refill_wl_user_pool(struct ubi_device *ubi)
 {
 	struct ubi_fm_pool *pool = &ubi->fm_pool;
+	int err;
 
 	return_unused_pool_pebs(ubi, pool);
 
 	for (pool->size = 0; pool->size < pool->max_size; pool->size++) {
+retry:
+		if (!ubi->free.rb_node ||
+		   (ubi->free_count - ubi->beb_rsvd_pebs < 1)) {
+			/*
+			 * There are no available PEBs. Try to free
+			 * PEB by means of synchronous execution of
+			 * pending works.
+			 */
+			if (ubi->works_count == 0)
+				break;
+			spin_unlock(&ubi->wl_lock);
+			err = do_work(ubi);
+			spin_lock(&ubi->wl_lock);
+			if (err < 0)
+				break;
+			goto retry;
+		}
+
 		pool->pebs[pool->size] = __wl_get_peb(ubi);
 		if (pool->pebs[pool->size] < 0)
 			break;