From patchwork Thu Aug  7 21:55:44 2014
From: Kamal Mostafa
X-Patchwork-Submitter: Kamal Mostafa
X-Patchwork-Id: 378006
To: Anssi Hannula
Subject: [3.13.y.z extended stable] Patch "dm cache: fix race affecting dirty block count" has been added to staging queue
Date: Thu, 7 Aug 2014 14:55:44 -0700
Message-Id: <1407448544-26567-1-git-send-email-kamal@canonical.com>
X-Mailer: git-send-email 1.9.1
X-Extended-Stable: 3.13
Cc: roma1390@gmail.com, Mike Snitzer, Kamal Mostafa,
	kernel-team@lists.ubuntu.com, Joe Thornber
List-Id: Kernel team discussions

This is a note to let you know that I have just added a patch titled

    dm cache: fix race affecting dirty block count

to the linux-3.13.y-queue branch of the 3.13.y.z extended stable tree
which can be found at:

 http://kernel.ubuntu.com/git?p=ubuntu/linux.git;a=shortlog;h=refs/heads/linux-3.13.y-queue

This patch is scheduled to be released in version 3.13.11.6.

If you, or anyone else, feels it should not be added to this tree, please
reply to this email.

For more information about the 3.13.y.z tree, see
https://wiki.ubuntu.com/Kernel/Dev/ExtendedStable

Thanks.
-Kamal

------

From c3422f9197825e271389a596bf9f2b40b0e712b3 Mon Sep 17 00:00:00 2001
From: Anssi Hannula
Date: Fri, 1 Aug 2014 11:55:47 -0400
Subject: dm cache: fix race affecting dirty block count

commit 44fa816bb778edbab6b6ddaaf24908dd6295937e upstream.

nr_dirty is updated without locking, causing it to drift so that it is
non-zero (either a small positive integer, or a very large one when an
underflow occurs) even when there are no actual dirty blocks.  This was
due to a race between the workqueue and map function accessing nr_dirty
in parallel without proper protection.

People were seeing under runs due to a race on increment/decrement of
nr_dirty, see: https://lkml.org/lkml/2014/6/3/648

Fix this by using an atomic_t for nr_dirty.
Reported-by: roma1390@gmail.com
Signed-off-by: Anssi Hannula
Signed-off-by: Joe Thornber
Signed-off-by: Mike Snitzer
[ luis: backported to 3.11: adjusted context ]
Signed-off-by: Luis Henriques
Signed-off-by: Kamal Mostafa
---
 drivers/md/dm-cache-target.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

--
1.9.1

diff --git a/drivers/md/dm-cache-target.c b/drivers/md/dm-cache-target.c
index fd92101..ae73ee5 100644
--- a/drivers/md/dm-cache-target.c
+++ b/drivers/md/dm-cache-target.c
@@ -225,7 +225,7 @@ struct cache {
 	/*
 	 * cache_size entries, dirty if set
 	 */
-	dm_cblock_t nr_dirty;
+	atomic_t nr_dirty;
 	unsigned long *dirty_bitset;
 
 	/*
@@ -487,7 +487,7 @@ static bool is_dirty(struct cache *cache, dm_cblock_t b)
 static void set_dirty(struct cache *cache, dm_oblock_t oblock, dm_cblock_t cblock)
 {
 	if (!test_and_set_bit(from_cblock(cblock), cache->dirty_bitset)) {
-		cache->nr_dirty = to_cblock(from_cblock(cache->nr_dirty) + 1);
+		atomic_inc(&cache->nr_dirty);
 		policy_set_dirty(cache->policy, oblock);
 	}
 }
@@ -496,8 +496,7 @@ static void clear_dirty(struct cache *cache, dm_oblock_t oblock, dm_cblock_t cbl
 {
 	if (test_and_clear_bit(from_cblock(cblock), cache->dirty_bitset)) {
 		policy_clear_dirty(cache->policy, oblock);
-		cache->nr_dirty = to_cblock(from_cblock(cache->nr_dirty) - 1);
-		if (!from_cblock(cache->nr_dirty))
+		if (atomic_dec_return(&cache->nr_dirty) == 0)
 			dm_table_event(cache->ti->table);
 	}
 }
@@ -2275,7 +2274,7 @@ static int cache_create(struct cache_args *ca, struct cache **result)
 	atomic_set(&cache->quiescing_ack, 0);
 
 	r = -ENOMEM;
-	cache->nr_dirty = 0;
+	atomic_set(&cache->nr_dirty, 0);
 	cache->dirty_bitset = alloc_bitset(from_cblock(cache->cache_size));
 	if (!cache->dirty_bitset) {
 		*error = "could not allocate dirty bitset";
@@ -2816,7 +2815,7 @@ static void cache_status(struct dm_target *ti, status_type_t type,
 
 		residency = policy_residency(cache->policy);
 
-		DMEMIT("%llu/%llu %u %u %u %u %u %u %llu %u ",
+		DMEMIT("%llu/%llu %u %u %u %u %u %u %llu %lu ",
 		       (unsigned long long)(nr_blocks_metadata - nr_free_blocks_metadata),
 		       (unsigned long long)nr_blocks_metadata,
 		       (unsigned) atomic_read(&cache->stats.read_hit),
@@ -2826,7 +2825,7 @@ static void cache_status(struct dm_target *ti, status_type_t type,
 		       (unsigned) atomic_read(&cache->stats.demotion),
 		       (unsigned) atomic_read(&cache->stats.promotion),
 		       (unsigned long long) from_cblock(residency),
-		       cache->nr_dirty);
+		       (unsigned long) atomic_read(&cache->nr_dirty));
 
 		if (writethrough_mode(&cache->features))
 			DMEMIT("1 writethrough ");
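
[ Editor's illustration, not part of the patch: the bug described above is the
classic lost-update race on an unlocked read-modify-write counter. The sketch
below is a hypothetical userspace analogue (C11 atomics plus pthreads, not
kernel code) showing a plain counter drifting under concurrent
increment/decrement while an atomic counter, the same idea as the
atomic_inc()/atomic_dec_return() used in the fix, reliably returns to zero. ]

/*
 * Hypothetical userspace illustration of the nr_dirty race; not kernel code.
 * Build: cc -O2 -pthread counter_race.c -o counter_race
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

/*
 * Unprotected counter, like the old "dm_cblock_t nr_dirty".  volatile only
 * keeps the compiler from optimising the loop away; it does NOT make the
 * read-modify-write atomic.
 */
static volatile long plain_nr_dirty;

/* Atomic counter, like the new "atomic_t nr_dirty". */
static atomic_long atomic_nr_dirty;

static void *worker(void *arg)
{
	(void)arg;

	for (int i = 0; i < ITERS; i++) {
		/*
		 * Racy: the load, add and store can interleave between the
		 * two threads, so increments/decrements are silently lost.
		 */
		plain_nr_dirty = plain_nr_dirty + 1;
		plain_nr_dirty = plain_nr_dirty - 1;

		/* Atomic: analogous to atomic_inc()/atomic_dec_return(). */
		atomic_fetch_add(&atomic_nr_dirty, 1);
		atomic_fetch_sub(&atomic_nr_dirty, 1);
	}
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, worker, NULL);
	pthread_create(&b, NULL, worker, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/*
	 * The plain counter typically ends up non-zero (it has "drifted",
	 * the symptom described in the commit message); the atomic one is 0.
	 */
	printf("plain: %ld  atomic: %ld\n",
	       plain_nr_dirty, atomic_load(&atomic_nr_dirty));
	return 0;
}

[ The same property is what makes the dm_table_event() call in clear_dirty()
reliable after the fix: atomic_dec_return() performs the decrement and returns
the new value as one atomic step, so the "count reached zero" transition is
seen by exactly one caller. ]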