From patchwork Fri Dec 19 05:44:57 2008
X-Patchwork-Submitter: Yuri Tikhonov
X-Patchwork-Id: 14806
From: Yuri Tikhonov <yur@emcraft.com>
To: linux-kernel@vger.kernel.org
Cc: wd@denx.de, dzu@denx.de, miltonm@bga.com, linuxppc-dev@ozlabs.org,
    paulus@samba.org, viro@zeniv.linux.org.uk, Geert.Uytterhoeven@sonycom.com,
    hugh@veritas.com, akpm@linux-foundation.org, yanok@emcraft.com
Subject: [PATCH] mm/shmem.c: fix division by zero
Date: Fri, 19 Dec 2008 08:44:57 +0300
Message-Id: <200812190844.57996.yur@emcraft.com>
List-Id: Linux on PowerPC Developers Mail List

The following patch fixes a division by zero that we hit in
shmem_truncate_range() and shmem_unuse_inode() when big PAGE_SIZE values
are used (e.g. 256KB on ppc44x). With a 256KB PAGE_SIZE the
ENTRIES_PER_PAGEPAGE constant becomes too large for a 32-bit unsigned
long (0x1.0000.0000, i.e. 2^32) and wraps to zero, so this patch just
changes the types from 'unsigned long' to 'unsigned long long' where
necessary.

Signed-off-by: Yuri Tikhonov <yur@emcraft.com>
---
 mm/shmem.c |    8 ++++----
 1 files changed, 4 insertions(+), 4 deletions(-)
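Illustration only, not part of the patch: the overflow can be reproduced
in a minimal stand-alone program. The macro values below are assumptions
standing in for the kernel's PAGE_CACHE_SIZE and sizeof(unsigned long) on
a 32-bit ppc44x-style configuration, with uint32_t modelling the 32-bit
unsigned long arithmetic of the unpatched macro:

	#include <stdio.h>
	#include <inttypes.h>

	/* assumed stand-ins: 256KB pages, 4-byte unsigned long */
	#define PAGE_CACHE_SIZE   (256 * 1024u)
	#define ENTRIES_PER_PAGE  (PAGE_CACHE_SIZE / 4u)	/* 65536 == 2^16 */

	int main(void)
	{
		/* unpatched macro: 32-bit multiply, 2^16 * 2^16 wraps to 0 */
		uint32_t before = ENTRIES_PER_PAGE * ENTRIES_PER_PAGE;

		/* patched macro: the cast widens the multiply to 64 bits */
		uint64_t after = (uint64_t)ENTRIES_PER_PAGE * ENTRIES_PER_PAGE;

		printf("without cast: %" PRIu32 "\n", before);	/* prints 0 */
		printf("with cast:    0x%" PRIx64 "\n", after);	/* 0x100000000 */
		return 0;
	}

Any subsequent division or modulo by the wrapped-to-zero
ENTRIES_PER_PAGEPAGE then faults, which is the crash described above.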
diff --git a/mm/shmem.c b/mm/shmem.c
index 0ed0752..99d7c91 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -57,7 +57,7 @@
 #include <asm/pgtable.h>
 
 #define ENTRIES_PER_PAGE (PAGE_CACHE_SIZE/sizeof(unsigned long))
-#define ENTRIES_PER_PAGEPAGE (ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
+#define ENTRIES_PER_PAGEPAGE ((unsigned long long)ENTRIES_PER_PAGE*ENTRIES_PER_PAGE)
 #define BLOCKS_PER_PAGE  (PAGE_CACHE_SIZE/512)
 
 #define SHMEM_MAX_INDEX  (SHMEM_NR_DIRECT + (ENTRIES_PER_PAGEPAGE/2) * (ENTRIES_PER_PAGE+1))
@@ -95,7 +95,7 @@ static unsigned long shmem_default_max_inodes(void)
 }
 #endif
 
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 			 struct page **pagep, enum sgp_type sgp, int *type);
 
 static inline struct page *shmem_dir_alloc(gfp_t gfp_mask)
@@ -533,7 +533,7 @@ static void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end)
 	int punch_hole;
 	spinlock_t *needs_lock;
 	spinlock_t *punch_lock;
-	unsigned long upper_limit;
+	unsigned long long upper_limit;
 
 	inode->i_ctime = inode->i_mtime = CURRENT_TIME;
 	idx = (start + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
@@ -1175,7 +1175,7 @@ static inline struct mempolicy *shmem_get_sbmpol(struct shmem_sb_info *sbinfo)
  * vm. If we swap it in we mark it dirty since we also free the swap
  * entry since a page cannot live in both the swap and page cache
  */
-static int shmem_getpage(struct inode *inode, unsigned long idx,
+static int shmem_getpage(struct inode *inode, unsigned long long idx,
 			struct page **pagep, enum sgp_type sgp, int *type)
 {
 	struct address_space *mapping = inode->i_mapping;
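A note on the second half of the patch (editorial illustration, not part
of the submission): the cast in ENTRIES_PER_PAGEPAGE alone removes the
division by zero, and widening 'idx' and 'upper_limit' appears necessary
because the page-index space itself no longer fits in 32 bits. A rough
worked computation, assuming SHMEM_NR_DIRECT is 16 as in kernels of this
era:

	ENTRIES_PER_PAGE     = 256KB / sizeof(unsigned long) = 65536 (2^16)
	ENTRIES_PER_PAGEPAGE = 2^16 * 2^16                   = 2^32
	SHMEM_MAX_INDEX      = 16 + (2^32 / 2) * (2^16 + 1)  ~= 2^47

2^47 does not fit in a 32-bit unsigned long, so an index that may range
up to SHMEM_MAX_INDEX has to be an unsigned long long.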