From patchwork Fri Jul 13 13:19:06 2012
From: Lukas Czerner <lczerner@redhat.com>
To: linux-ext4@vger.kernel.org
Cc: linux-fsdevel@vger.kernel.org, tytso@mit.edu, achender@linux.vnet.ibm.com,
	Lukas Czerner, Hugh Dickins
Subject: [PATCH 03/12 v2] shmem: pass LLONG_MAX to shmem_truncate_range
Date: Fri, 13 Jul 2012 15:19:06 +0200
Message-Id: <1342185555-21146-3-git-send-email-lczerner@redhat.com>
In-Reply-To: <1342185555-21146-1-git-send-email-lczerner@redhat.com>
References: <1342185555-21146-1-git-send-email-lczerner@redhat.com>

Currently we pass -1 as the lend argument to shmem_truncate_range(), which
then hands it on to truncate_inode_pages_range(). This is confusing because
the argument is signed, so we do not get the "huge" offset one would expect,
but literally -1.

To make things clearer and easier for truncate_inode_pages_range(), just
pass LLONG_MAX, since that is what was intended anyway. It also makes
things easier for allowing truncate_inode_pages_range() to handle
non-page-aligned regions. Moreover, letting the lend argument be negative
might hide bugs.

Signed-off-by: Lukas Czerner
Cc: Hugh Dickins
---
 mm/shmem.c | 3 ++-
 1 files changed, 2 insertions(+), 1 deletions(-)

diff --git a/mm/shmem.c b/mm/shmem.c
index 4ce02e0..3199733 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2961,7 +2961,8 @@ void shmem_unlock_mapping(struct address_space *mapping)
 
 void shmem_truncate_range(struct inode *inode, loff_t lstart, loff_t lend)
 {
-	truncate_inode_pages_range(inode->i_mapping, lstart, lend);
+	truncate_inode_pages_range(inode->i_mapping, lstart,
+				   lend == -1 ? LLONG_MAX : lend);
 }
 EXPORT_SYMBOL_GPL(shmem_truncate_range);
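
[Not part of the patch -- just a minimal user-space sketch of why a signed -1
end offset is confusing compared to LLONG_MAX. loff_t is modelled here as a
plain long long, and all names below are illustrative, not kernel code.]

#include <limits.h>
#include <stdio.h>

/* Stand-in for the signed 64-bit loff_t used for the (lstart, lend) range. */
typedef long long loff_demo_t;

int main(void)
{
	loff_demo_t lend_minus_one = -1;        /* what callers used to pass   */
	loff_demo_t lend_llong_max = LLONG_MAX; /* what the patch passes       */
	loff_demo_t some_offset    = 4096;      /* any real file offset        */

	/*
	 * A signed -1 does not behave as a "huge" end offset: every real
	 * offset compares greater than it, so range checks against lend
	 * would need a special case. LLONG_MAX acts as the intended
	 * "truncate to the end of the file" sentinel.
	 */
	printf("some_offset <= -1        : %d\n",
	       some_offset <= lend_minus_one);   /* prints 0 */
	printf("some_offset <= LLONG_MAX : %d\n",
	       some_offset <= lend_llong_max);   /* prints 1 */
	return 0;
}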