From patchwork Thu Jan 26 11:57:51 2017
X-Patchwork-Submitter: "Kirill A. Shutemov"
X-Patchwork-Id: 720093
From: "Kirill A. Shutemov"
To: "Theodore Ts'o", Andreas Dilger, Jan Kara, Andrew Morton
Cc: Alexander Viro, Hugh Dickins, Andrea Arcangeli, Dave Hansen,
 Vlastimil Babka, Matthew Wilcox, Ross Zwisler,
 linux-ext4@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-block@vger.kernel.org, "Kirill A. Shutemov"
Subject: [PATCHv6 09/37] filemap: allocate huge page in pagecache_get_page(), if allowed
Date: Thu, 26 Jan 2017 14:57:51 +0300
Message-Id: <20170126115819.58875-10-kirill.shutemov@linux.intel.com>
In-Reply-To: <20170126115819.58875-1-kirill.shutemov@linux.intel.com>
References: <20170126115819.58875-1-kirill.shutemov@linux.intel.com>
X-Mailing-List: linux-ext4@vger.kernel.org

The write path allocates pages using pagecache_get_page(). We should be
able to allocate huge pages there, if it's allowed. As usual, fall back
to small pages if the huge page allocation fails.

Signed-off-by: Kirill A. Shutemov
Reviewed-by: Matthew Wilcox
---
 mm/filemap.c | 17 +++++++++++++++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 6cba69176ea9..4e398d5e4134 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1374,13 +1374,16 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 
 no_page:
 	if (!page && (fgp_flags & FGP_CREAT)) {
+		pgoff_t hoffset;
 		int err;
 		if ((fgp_flags & FGP_WRITE) && mapping_cap_account_dirty(mapping))
 			gfp_mask |= __GFP_WRITE;
 		if (fgp_flags & FGP_NOFS)
 			gfp_mask &= ~__GFP_FS;
 
-		page = __page_cache_alloc(gfp_mask);
+		page = page_cache_alloc_huge(mapping, offset, gfp_mask);
+no_huge:	if (!page)
+			page = __page_cache_alloc(gfp_mask);
 		if (!page)
 			return NULL;
 
@@ -1391,9 +1394,19 @@ struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
 		if (fgp_flags & FGP_ACCESSED)
 			__SetPageReferenced(page);
 
-		err = add_to_page_cache_lru(page, mapping, offset,
+		if (PageTransHuge(page))
+			hoffset = round_down(offset, HPAGE_PMD_NR);
+		else
+			hoffset = offset;
+
+		err = add_to_page_cache_lru(page, mapping, hoffset,
 				gfp_mask & GFP_RECLAIM_MASK);
 		if (unlikely(err)) {
+			if (PageTransHuge(page)) {
+				put_page(page);
+				page = NULL;
+				goto no_huge;
+			}
 			put_page(page);
 			page = NULL;
 			if (err == -EEXIST)