From patchwork Wed Oct 29 12:45:31 2014
X-Patchwork-Submitter: Richard Weinberger
X-Patchwork-Id: 404577
From: Richard Weinberger
To: dedekind1@gmail.com
Cc: Richard Weinberger, linux-mtd@lists.infradead.org,
    linux-kernel@vger.kernel.org, tlinder@codeaurora.org
Subject: [PATCH 08/35] UBI: Split __wl_get_peb()
Date: Wed, 29 Oct 2014 13:45:31 +0100
Message-Id: <1414586758-9972-9-git-send-email-richard@nod.at>
In-Reply-To: <1414586758-9972-1-git-send-email-richard@nod.at>
References: <1414586758-9972-1-git-send-email-richard@nod.at>
List-Id: Linux MTD discussion mailing list

Make it two functions, wl_get_wle() and wl_get_peb().
wl_get_peb() works exactly like __wl_get_peb(), but wl_get_wle() does not
call produce_free_peb(). While refilling the fastmap user pool we cannot
release ubi->wl_lock as produce_free_peb() does. Hence the fastmap logic
now uses wl_get_wle().

Signed-off-by: Richard Weinberger
---
 drivers/mtd/ubi/wl.c | 48 +++++++++++++++++++++++++++---------------------
 1 file changed, 27 insertions(+), 21 deletions(-)

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 5f8d2ff..95bb12b 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -502,7 +502,30 @@ out:
  * This function returns a physical eraseblock in case of success and a
  * negative error code in case of failure.
  */
-static int __wl_get_peb(struct ubi_device *ubi)
+static struct ubi_wl_entry *wl_get_wle(struct ubi_device *ubi)
+{
+	struct ubi_wl_entry *e;
+
+	e = find_mean_wl_entry(ubi, &ubi->free);
+	if (!e) {
+		ubi_err("no free eraseblocks");
+		return NULL;
+	}
+
+	self_check_in_wl_tree(ubi, e, &ubi->free);
+
+	/*
+	 * Move the physical eraseblock to the protection queue where it will
+	 * be protected from being moved for some time.
+	 */
+	rb_erase(&e->u.rb, &ubi->free);
+	ubi->free_count--;
+	dbg_wl("PEB %d EC %d", e->pnum, e->ec);
+
+	return e;
+}
+
+static int wl_get_peb(struct ubi_device *ubi)
 {
 	int err;
 	struct ubi_wl_entry *e;
@@ -519,29 +542,12 @@ retry:
 		if (err < 0)
 			return err;
 		goto retry;
-	}
-	e = find_mean_wl_entry(ubi, &ubi->free);
-	if (!e) {
-		ubi_err("no free eraseblocks");
-		return -ENOSPC;
 	}
 
-	self_check_in_wl_tree(ubi, e, &ubi->free);
-
-	/*
-	 * Move the physical eraseblock to the protection queue where it will
-	 * be protected from being moved for some time.
-	 */
-	rb_erase(&e->u.rb, &ubi->free);
-	ubi->free_count--;
-	dbg_wl("PEB %d EC %d", e->pnum, e->ec);
-#ifndef CONFIG_MTD_UBI_FASTMAP
-	/* We have to enqueue e only if fastmap is disabled,
-	 * is fastmap enabled prot_queue_add() will be called by
-	 * ubi_wl_get_peb() after removing e from the pool. */
+	e = wl_get_wle(ubi);
 	prot_queue_add(ubi, e);
-#endif
+
 	return e->pnum;
 }
 
@@ -699,7 +705,7 @@ int ubi_wl_get_peb(struct ubi_device *ubi)
 	int peb, err;
 
 	spin_lock(&ubi->wl_lock);
-	peb = __wl_get_peb(ubi);
+	peb = wl_get_peb(ubi);
 	spin_unlock(&ubi->wl_lock);
 
 	if (peb < 0)
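
For illustration, the intended difference between the two helpers looks
roughly like the sketch below. This is not part of the patch: the refill
function name and the pool field layout (pool->pebs[], pool->size,
pool->max_size, as in struct ubi_fm_pool) are assumptions made for the
example only.

/*
 * Sketch: refill a fastmap pool without ever dropping ubi->wl_lock.
 * wl_get_wle() does not call produce_free_peb(), so it cannot release
 * the lock behind our back; it simply returns NULL once ubi->free is
 * exhausted.
 */
static void sketch_refill_wl_pool(struct ubi_device *ubi,
				  struct ubi_fm_pool *pool)
{
	struct ubi_wl_entry *e;

	spin_lock(&ubi->wl_lock);
	while (pool->size < pool->max_size) {
		e = wl_get_wle(ubi);
		if (!e)
			break;

		pool->pebs[pool->size++] = e->pnum;
	}
	spin_unlock(&ubi->wl_lock);
}

The regular allocation path keeps going through wl_get_peb(), which may
still drop and re-take ubi->wl_lock inside produce_free_peb() when the
free tree is empty.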