From patchwork Tue May 29 13:48:11 2012
X-Patchwork-Submitter: Shmulik Ladkani
X-Patchwork-Id: 161751
Date: Tue, 29 May 2012 16:48:11 +0300
From: Shmulik Ladkani
To: Artem Bityutskiy, Richard Weinberger
Cc: Heinz.Egger@linutronix.de, tglx@linutronix.de, linux-mtd@lists.infradead.org, tim.bird@am.sony.com
Subject: [PATCH] ubi: fastmap: harmonize medium erase-counter seek algorithm
Message-ID: <20120529164811.5e86b190@pixies.home.jungo.com>

Currently, a wear-leveling entry with a medium erase counter is looked for
in two different places, with the same search algorithm duplicated in each.
Harmonize this by introducing a common helper, 'find_mean_wl_entry()'.

Signed-off-by: Shmulik Ladkani
---
Compile tested. Applies to the 'fastmap' branch of linux-ubi.
(A standalone sketch of the selection rule follows after the diff.)

diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 7d34495..6727b6c 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -386,6 +386,29 @@ static struct ubi_wl_entry *find_wl_entry(struct rb_root *root, int diff)
 }
 
 /**
+ * find_mean_wl_entry - find wear-leveling entry with medium erase counter.
+ * @root: the RB-tree where to look for
+ *
+ * This function looks for a wear-leveling entry with a medium erase counter,
+ * but not greater than or equal to the lowest erase counter plus
+ * %WL_FREE_MAX_DIFF/2.
+ */
+static struct ubi_wl_entry *find_mean_wl_entry(struct rb_root *root)
+{
+	struct ubi_wl_entry *e, *first, *last;
+
+	first = rb_entry(rb_first(root), struct ubi_wl_entry, u.rb);
+	last = rb_entry(rb_last(root), struct ubi_wl_entry, u.rb);
+
+	if (last->ec - first->ec < WL_FREE_MAX_DIFF)
+		e = rb_entry(root->rb_node, struct ubi_wl_entry, u.rb);
+	else
+		e = find_wl_entry(root, WL_FREE_MAX_DIFF/2);
+
+	return e;
+}
+
+/**
  * find_early_wl_entry - find wear-leveling entry with the lowest pnum.
 * @root: the RB-tree where to look for
 * @max_pnum: highest possible pnum
@@ -419,7 +442,7 @@ static struct ubi_wl_entry *find_early_wl_entry(struct rb_root *root,
 int ubi_wl_get_fm_peb(struct ubi_device *ubi, int max_pnum)
 {
 	int ret = -ENOSPC;
-	struct ubi_wl_entry *e, *first, *last;
+	struct ubi_wl_entry *e;
 
 	if (!ubi->free.rb_node) {
 		ubi_err("no free eraseblocks");
@@ -427,18 +450,9 @@ int ubi_wl_get_fm_peb(struct ubi_device *ubi, int max_pnum)
 		goto out;
 	}
 
-	if (max_pnum < 0) {
-		first = rb_entry(rb_first(&ubi->free),
-				 struct ubi_wl_entry, u.rb);
-		last = rb_entry(rb_last(&ubi->free),
-				 struct ubi_wl_entry, u.rb);
-
-		if (last->ec - first->ec < WL_FREE_MAX_DIFF)
-			e = rb_entry(ubi->free.rb_node,
-				     struct ubi_wl_entry, u.rb);
-		else
-			e = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF/2);
-	} else
+	if (max_pnum < 0)
+		e = find_mean_wl_entry(&ubi->free);
+	else
 		e = find_early_wl_entry(&ubi->free, max_pnum);
 
 	if (!e)
@@ -464,7 +478,7 @@ out:
 static int __ubi_wl_get_peb(struct ubi_device *ubi)
 {
 	int err;
-	struct ubi_wl_entry *e, *first, *last;
+	struct ubi_wl_entry *e;
 
 retry:
 	spin_lock(&ubi->wl_lock);
@@ -483,13 +497,7 @@ retry:
 		goto retry;
 	}
 
-	first = rb_entry(rb_first(&ubi->free), struct ubi_wl_entry, u.rb);
-	last = rb_entry(rb_last(&ubi->free), struct ubi_wl_entry, u.rb);
-
-	if (last->ec - first->ec < WL_FREE_MAX_DIFF)
-		e = rb_entry(ubi->free.rb_node, struct ubi_wl_entry, u.rb);
-	else
-		e = find_wl_entry(&ubi->free, WL_FREE_MAX_DIFF/2);
+	e = find_mean_wl_entry(&ubi->free);
 
 	self_check_in_wl_tree(ubi, e, &ubi->free);
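
For readers who want the selection rule in isolation, here is a minimal,
self-contained sketch of the logic that find_mean_wl_entry() centralizes.
It is an illustration only, not kernel code: a plain sorted array stands in
for UBI's RB-tree of free PEBs, and the struct, the sample numbers and the
WL_FREE_MAX_DIFF value below are made up for the example.

/*
 * Standalone sketch of the "medium erase counter" pick which the patch
 * centralizes in find_mean_wl_entry().  A sorted array stands in for
 * UBI's RB-tree of free PEBs; the constant below is illustrative only.
 */
#include <stdio.h>

#define WL_FREE_MAX_DIFF	64	/* made-up value for the example */

struct peb {
	int pnum;	/* physical eraseblock number */
	int ec;		/* erase counter */
};

/*
 * Rough counterpart of find_wl_entry(): among entries whose erase counter
 * is below lowest-ec + diff, return the one with the highest erase counter.
 */
static const struct peb *pick_below(const struct peb *v, int n, int diff)
{
	int max = v[0].ec + diff;
	const struct peb *e = &v[0];
	int i;

	for (i = 1; i < n && v[i].ec < max; i++)
		e = &v[i];
	return e;
}

/*
 * Rough counterpart of find_mean_wl_entry(): if the erase counters are
 * close together, any entry will do, so take a middle one (the kernel
 * takes the RB-tree root); otherwise stay within half the allowed
 * difference from the least-worn entry.
 */
static const struct peb *pick_mean(const struct peb *v, int n)
{
	if (v[n - 1].ec - v[0].ec < WL_FREE_MAX_DIFF)
		return &v[n / 2];
	return pick_below(v, n, WL_FREE_MAX_DIFF / 2);
}

int main(void)
{
	/* free PEBs, sorted by erase counter */
	const struct peb free_pebs[] = {
		{ 7, 10 }, { 3, 12 }, { 9, 40 }, { 1, 200 },
	};
	const struct peb *e = pick_mean(free_pebs, 4);

	printf("picked PEB %d (ec=%d)\n", e->pnum, e->ec);
	return 0;
}

With the sample numbers the spread (200 - 10 = 190) exceeds WL_FREE_MAX_DIFF,
so the pick stays below 10 + 64/2 = 42 and PEB 9 (ec=40) is chosen; with a
spread under 64, the middle entry would be returned directly.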