From patchwork Mon Apr 15 12:48:53 2013
X-Patchwork-Submitter: Libin
X-Patchwork-Id: 236588
From: Libin
To: Arnd Bergmann, Greg Kroah-Hartman, David Airlie, Bjorn Helgaas,
	"Hans J. Koch", Petr Vandrovec, Andrew Morton
CC: Konstantin Khlebnikov, Thomas Hellstrom, Dave Airlie,
	Nadia Yvette Chambers, Jiri Kosina, Al Viro, Mel Gorman,
	Hugh Dickins, Rik van Riel, David Rientjes, Michel Lespinasse
Subject: [PATCH 1/6] mm: use vma_pages() to replace (vm_end - vm_start) >> PAGE_SHIFT
Date: Mon, 15 Apr 2013 20:48:53 +0800
Message-ID: <1366030138-71292-1-git-send-email-huawei.libin@huawei.com>
X-Mailer: git-send-email 1.8.1.msysgit.1

The (*->vm_end - *->vm_start) >> PAGE_SHIFT operation is implemented as
an inline function, vma_pages(), in linux/mm.h, so use it.

Signed-off-by: Libin
Reviewed-by: Michel Lespinasse
---
 mm/memory.c | 2 +-
 mm/mmap.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 13cbc42..8b8ae1c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2866,7 +2866,7 @@ static inline void unmap_mapping_range_tree(struct rb_root *root,
 			details->first_index, details->last_index) {
 
 		vba = vma->vm_pgoff;
-		vea = vba + ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT) - 1;
+		vea = vba + vma_pages(vma) - 1;
 		/* Assume for now that PAGE_CACHE_SHIFT == PAGE_SHIFT */
 		zba = details->first_index;
 		if (zba < vba)
diff --git a/mm/mmap.c b/mm/mmap.c
index 0db0de1..118bfcb 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -919,7 +919,7 @@ can_vma_merge_after(struct vm_area_struct *vma, unsigned long vm_flags,
 	if (is_mergeable_vma(vma, file, vm_flags) &&
 	    is_mergeable_anon_vma(anon_vma, vma->anon_vma, vma)) {
 		pgoff_t vm_pglen;
-		vm_pglen = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
+		vm_pglen = vma_pages(vma);
 		if (vma->vm_pgoff + vm_pglen == vm_pgoff)
 			return 1;
 	}
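
[Editor's note: for reference, vma_pages() is a static inline helper in
include/linux/mm.h that wraps exactly this computation, so the
substitution is a pure refactoring with no behavioral change. Its
upstream definition at the time of this patch was essentially:

static inline unsigned long vma_pages(struct vm_area_struct *vma)
{
	/* Size of the VMA in pages: same shift the open-coded callers used. */
	return (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
}

Using the helper keeps the page-count calculation in one place and
makes call sites like can_vma_merge_after() easier to read.]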