From patchwork Tue Sep 20 13:45:12 2011
X-Patchwork-Submitter: Johannes Weiner
X-Patchwork-Id: 115538
From: Johannes Weiner
To: Andrew Morton
Cc: Mel Gorman, Christoph Hellwig, Dave Chinner, Wu Fengguang, Jan Kara,
    Rik van Riel, Minchan Kim, Chris Mason, "Theodore Ts'o",
    Andreas Dilger, xfs@oss.sgi.com, linux-btrfs@vger.kernel.org,
    linux-ext4@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [patch 1/4] mm: exclude reserved pages from dirtyable memory
Date: Tue, 20 Sep 2011 15:45:12 +0200
Message-Id: <1316526315-16801-2-git-send-email-jweiner@redhat.com>
In-Reply-To: <1316526315-16801-1-git-send-email-jweiner@redhat.com>
References: <1316526315-16801-1-git-send-email-jweiner@redhat.com>

The amount of dirtyable pages should not include the full number of
free pages: there is a number of reserved pages that the page
allocator and kswapd always try to keep free.

The closer (reclaimable pages - dirty pages) is to the number of
reserved pages, the more likely it becomes for reclaim to run into
dirty pages:

	       +----------+ ---
	       |   anon   |  |
	       +----------+  |
	       |          |  |
	       |          |  -- dirty limit new    -- flusher new
	       |   file   |  |                     |
	       |          |  |                     |
	       |          |  -- dirty limit old    -- flusher old
	       |          |                        |
	       +----------+                       --- reclaim
	       | reserved |
	       +----------+
	       |  kernel  |
	       +----------+

Not treating reserved pages as dirtyable on a global level is only a
conceptual fix.  In reality, dirty pages are not distributed equally
across zones and reclaim runs into dirty pages on a regular basis.
But it is important to get this right before tackling the problem on
a per-zone level, where the distance between reclaim and the dirty
pages is mostly much smaller in absolute numbers.
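As a back-of-the-envelope illustration of what this does to
determine_dirtyable_memory(), here is a standalone userspace sketch.
The page counts are invented sample values; only the arithmetic
mirrors the kernel code:

	#include <stdio.h>

	/* Invented sample state, in pages. */
	static unsigned long nr_free        = 50000;	/* NR_FREE_PAGES      */
	static unsigned long nr_reclaimable = 200000;	/* file + anon LRUs   */
	static unsigned long total_reserve  = 12000;	/* watermark reserves */

	/* Old behaviour: all free pages count as dirtyable. */
	static unsigned long dirtyable_old(void)
	{
		return nr_free + nr_reclaimable;
	}

	/* New behaviour: exclude the pages the allocator keeps in reserve. */
	static unsigned long dirtyable_new(void)
	{
		return nr_free - total_reserve + nr_reclaimable;
	}

	int main(void)
	{
		/* With dirty_ratio=20: old 50000 vs. new 47600 pages. */
		printf("old dirty limit: %lu pages\n", dirtyable_old() / 5);
		printf("new dirty limit: %lu pages\n", dirtyable_new() / 5);
		return 0;
	}

The dirty limit moves down by dirty_ratio percent of the reserves,
giving reclaim that much more clean headroom before it runs into
dirty pages.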
Signed-off-by: Johannes Weiner
Reviewed-by: Rik van Riel
---
 include/linux/mmzone.h |    1 +
 mm/page-writeback.c    |    8 +++++---
 mm/page_alloc.c        |    1 +
 3 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 1ed4116..e28f8e0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -316,6 +316,7 @@ struct zone {
	 * sysctl_lowmem_reserve_ratio sysctl changes.
	 */
	unsigned long		lowmem_reserve[MAX_NR_ZONES];
+	unsigned long		totalreserve_pages;

 #ifdef CONFIG_NUMA
	int node;
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index da6d263..9f896db 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -169,8 +169,9 @@ static unsigned long highmem_dirtyable_memory(unsigned long total)
		struct zone *z =
			&NODE_DATA(node)->node_zones[ZONE_HIGHMEM];

-		x += zone_page_state(z, NR_FREE_PAGES) +
-		     zone_reclaimable_pages(z);
+		x += zone_page_state(z, NR_FREE_PAGES) -
+		     z->totalreserve_pages;
+		x += zone_reclaimable_pages(z);
	}
	/*
	 * Make sure that the number of highmem pages is never larger
@@ -194,7 +195,8 @@ static unsigned long determine_dirtyable_memory(void)
 {
	unsigned long x;

-	x = global_page_state(NR_FREE_PAGES) + global_reclaimable_pages();
+	x = global_page_state(NR_FREE_PAGES) - totalreserve_pages;
+	x += global_reclaimable_pages();

	if (!vm_highmem_is_dirtyable)
		x -= highmem_dirtyable_memory(x);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 1dba05e..7e8e2ee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -5075,6 +5075,7 @@ static void calculate_totalreserve_pages(void)

			if (max > zone->present_pages)
				max = zone->present_pages;
+			zone->totalreserve_pages = max;
			reserve_pages += max;
		}
	}
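For reference, a simplified sketch of the per-zone value that
calculate_totalreserve_pages() stores into the new field.  The zone
layout and numbers below are made up, and the loop is condensed from
the surrounding kernel code rather than copied verbatim:

	#include <stdio.h>

	#define MAX_NR_ZONES 3

	/* Invented sample zone; field names follow the kernel. */
	struct zone_sample {
		unsigned long present_pages;
		unsigned long high_wmark;	/* high_wmark_pages()   */
		unsigned long lowmem_reserve[MAX_NR_ZONES];
		unsigned long totalreserve_pages;	/* new field */
	};

	static unsigned long zone_reserve(struct zone_sample *zone)
	{
		unsigned long max = 0;
		int j;

		/* Largest protection against allocations from higher zones. */
		for (j = 0; j < MAX_NR_ZONES; j++)
			if (zone->lowmem_reserve[j] > max)
				max = zone->lowmem_reserve[j];
		max += zone->high_wmark;

		/* A zone cannot reserve more pages than it has. */
		if (max > zone->present_pages)
			max = zone->present_pages;
		zone->totalreserve_pages = max;	/* the line this patch adds */
		return max;
	}

	int main(void)
	{
		struct zone_sample dma = {
			.present_pages	= 4000,
			.high_wmark	= 128,
			.lowmem_reserve	= { 0, 1900, 3800 },
		};

		/* max lowmem_reserve (3800) + high wmark (128) = 3928 */
		printf("zone reserve: %lu pages\n", zone_reserve(&dma));
		return 0;
	}

Summed over all zones, these per-zone values make up the global
totalreserve_pages that determine_dirtyable_memory() now subtracts.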