From patchwork Tue Mar 28 09:05:09 2017
Subject: [U-Boot] [PATCH] lib: div64: sync with Linux
From: Peng Fan
X-Patchwork-Id: 744166
X-Patchwork-Delegate: trini@ti.com
Message-ID: <1490691909-3371-1-git-send-email-peng.fan@nxp.com>
Date: Tue, 28 Mar 2017 17:05:09 +0800
X-Mailer: git-send-email 2.6.6
Cc: u-boot@lists.denx.de
List-Id: U-Boot discussion

Sync with Linux commit ad0376eb1483b ("Merge tag 'edac_for_4.11_2'").
Signed-off-by: Peng Fan
Cc: Tom Rini
---
 include/linux/math64.h | 172 +++++++++++++++++++++++++++++++++++++++++++++++++
 lib/div64.c            | 141 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 308 insertions(+), 5 deletions(-)

diff --git a/include/linux/math64.h b/include/linux/math64.h
index 6d760d7..08584c8 100644
--- a/include/linux/math64.h
+++ b/include/linux/math64.h
@@ -1,10 +1,15 @@
 #ifndef _LINUX_MATH64_H
 #define _LINUX_MATH64_H
 
+#include
+#include
 #include
 
 #if BITS_PER_LONG == 64
 
+#define div64_long(x, y) div64_s64((x), (y))
+#define div64_ul(x, y) div64_u64((x), (y))
+
 /**
  * div_u64_rem - unsigned 64bit divide with 32bit divisor with remainder
  *
@@ -27,6 +32,15 @@ static inline s64 div_s64_rem(s64 dividend, s32 divisor, s32 *remainder)
 }
 
 /**
+ * div64_u64_rem - unsigned 64bit divide with 64bit divisor and remainder
+ */
+static inline u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder)
+{
+	*remainder = dividend % divisor;
+	return dividend / divisor;
+}
+
+/**
  * div64_u64 - unsigned 64bit divide with 64bit divisor
  */
 static inline u64 div64_u64(u64 dividend, u64 divisor)
@@ -34,8 +48,19 @@ static inline u64 div64_u64(u64 dividend, u64 divisor)
 	return dividend / divisor;
 }
 
+/**
+ * div64_s64 - signed 64bit divide with 64bit divisor
+ */
+static inline s64 div64_s64(s64 dividend, s64 divisor)
+{
+	return dividend / divisor;
+}
+
 #elif BITS_PER_LONG == 32
 
+#define div64_long(x, y) div_s64((x), (y))
+#define div64_ul(x, y) div_u64((x), (y))
+
 #ifndef div_u64_rem
 static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
 {
@@ -48,10 +73,18 @@ static inline u64 div_u64_rem(u64 dividend, u32 divisor, u32 *remainder)
 extern s64 div_s64_rem(s64 dividend, s32 divisor, s32 *remainder);
 #endif
 
+#ifndef div64_u64_rem
+extern u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder);
+#endif
+
 #ifndef div64_u64
 extern u64 div64_u64(u64 dividend, u64 divisor);
 #endif
 
+#ifndef div64_s64
+extern s64 div64_s64(s64 dividend, s64 divisor);
+#endif
+
 #endif /* BITS_PER_LONG */
 
 /**
@@ -82,4 +115,143 @@ static inline s64 div_s64(s64 dividend, s32 divisor)
 
 u32 iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder);
 
+static __always_inline u32
+__iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
+{
+	u32 ret = 0;
+
+	while (dividend >= divisor) {
+		/* The following asm() prevents the compiler from
+		   optimising this loop into a modulo operation. */
+		asm("" : "+rm"(dividend));
+
+		dividend -= divisor;
+		ret++;
+	}
+
+	*remainder = dividend;
+
+	return ret;
+}
+
+#ifndef mul_u32_u32
+/*
+ * Many a GCC version messes this up and generates a 64x64 mult :-(
+ */
+static inline u64 mul_u32_u32(u32 a, u32 b)
+{
+	return (u64)a * b;
+}
+#endif
+
+#if defined(CONFIG_ARCH_SUPPORTS_INT128) && defined(__SIZEOF_INT128__)
+
+#ifndef mul_u64_u32_shr
+static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
+{
+	return (u64)(((unsigned __int128)a * mul) >> shift);
+}
+#endif /* mul_u64_u32_shr */
+
+#ifndef mul_u64_u64_shr
+static inline u64 mul_u64_u64_shr(u64 a, u64 mul, unsigned int shift)
+{
+	return (u64)(((unsigned __int128)a * mul) >> shift);
+}
+#endif /* mul_u64_u64_shr */
+
+#else
+
+#ifndef mul_u64_u32_shr
+static inline u64 mul_u64_u32_shr(u64 a, u32 mul, unsigned int shift)
+{
+	u32 ah, al;
+	u64 ret;
+
+	al = a;
+	ah = a >> 32;
+
+	ret = mul_u32_u32(al, mul) >> shift;
+	if (ah)
+		ret += mul_u32_u32(ah, mul) << (32 - shift);
+
+	return ret;
+}
+#endif /* mul_u64_u32_shr */
+
+#ifndef mul_u64_u64_shr
+static inline u64 mul_u64_u64_shr(u64 a, u64 b, unsigned int shift)
+{
+	union {
+		u64 ll;
+		struct {
+#ifdef __BIG_ENDIAN
+			u32 high, low;
+#else
+			u32 low, high;
+#endif
+		} l;
+	} rl, rm, rn, rh, a0, b0;
+	u64 c;
+
+	a0.ll = a;
+	b0.ll = b;
+
+	rl.ll = mul_u32_u32(a0.l.low, b0.l.low);
+	rm.ll = mul_u32_u32(a0.l.low, b0.l.high);
+	rn.ll = mul_u32_u32(a0.l.high, b0.l.low);
+	rh.ll = mul_u32_u32(a0.l.high, b0.l.high);
+
+	/*
+	 * Each of these lines computes a 64-bit intermediate result into "c",
+	 * starting at bits 32-95. The low 32-bits go into the result of the
+	 * multiplication, the high 32-bits are carried into the next step.
+	 */
+	rl.l.high = c = (u64)rl.l.high + rm.l.low + rn.l.low;
+	rh.l.low = c = (c >> 32) + rm.l.high + rn.l.high + rh.l.low;
+	rh.l.high = (c >> 32) + rh.l.high;
+
+	/*
+	 * The 128-bit result of the multiplication is in rl.ll and rh.ll,
+	 * shift it right and throw away the high part of the result.
+	 */
+	if (shift == 0)
+		return rl.ll;
+	if (shift < 64)
+		return (rl.ll >> shift) | (rh.ll << (64 - shift));
+	return rh.ll >> (shift & 63);
+}
+#endif /* mul_u64_u64_shr */
+
+#endif
+
+#ifndef mul_u64_u32_div
+static inline u64 mul_u64_u32_div(u64 a, u32 mul, u32 divisor)
+{
+	union {
+		u64 ll;
+		struct {
+#ifdef __BIG_ENDIAN
+			u32 high, low;
+#else
+			u32 low, high;
+#endif
+		} l;
+	} u, rl, rh;
+
+	u.ll = a;
+	rl.ll = mul_u32_u32(u.l.low, mul);
+	rh.ll = mul_u32_u32(u.l.high, mul) + rl.l.high;
+
+	/* Bits 32-63 of the result will be in rh.l.low. */
+	rl.l.high = do_div(rh.ll, divisor);
+
+	/* Bits 0-31 of the result will be in rl.l.low. */
+	do_div(rl.ll, divisor);
+
+	rl.l.high = rh.l.low;
+	return rl.ll;
+}
+#endif /* mul_u64_u32_div */
+
 #endif /* _LINUX_MATH64_H */
diff --git a/lib/div64.c b/lib/div64.c
index 319fca5..206f582 100644
--- a/lib/div64.c
+++ b/lib/div64.c
@@ -13,14 +13,19 @@
  *
  * Code generated for this function might be very inefficient
  * for some CPUs. __div64_32() can be overridden by linking arch-specific
- * assembly versions such as arch/powerpc/lib/div64.S and arch/sh/lib/div64.S.
+ * assembly versions such as arch/ppc/lib/div64.S and arch/sh/lib/div64.S
+ * or by defining a preprocessor macro in arch/include/asm/div64.h.
 */
 
-#include
-#include
-#include
+#include
+#include
+#include
 
-uint32_t notrace __div64_32(uint64_t *n, uint32_t base)
+/* Not needed on 64bit architectures */
+#if BITS_PER_LONG == 32
+
+#ifndef __div64_32
+uint32_t __attribute__((weak)) __div64_32(uint64_t *n, uint32_t base)
 {
 	uint64_t rem = *n;
 	uint64_t b = base;
@@ -52,3 +57,129 @@ uint32_t notrace __div64_32(uint64_t *n, uint32_t base)
 	*n = res;
 	return rem;
 }
+EXPORT_SYMBOL(__div64_32);
+#endif
+
+#ifndef div_s64_rem
+s64 div_s64_rem(s64 dividend, s32 divisor, s32 *remainder)
+{
+	u64 quotient;
+
+	if (dividend < 0) {
+		quotient = div_u64_rem(-dividend, abs(divisor), (u32 *)remainder);
+		*remainder = -*remainder;
+		if (divisor > 0)
+			quotient = -quotient;
+	} else {
+		quotient = div_u64_rem(dividend, abs(divisor), (u32 *)remainder);
+		if (divisor < 0)
+			quotient = -quotient;
+	}
+	return quotient;
+}
+EXPORT_SYMBOL(div_s64_rem);
+#endif
+
+/**
+ * div64_u64_rem - unsigned 64bit divide with 64bit divisor and remainder
+ * @dividend:	64bit dividend
+ * @divisor:	64bit divisor
+ * @remainder:	64bit remainder
+ *
+ * This implementation is a comparable to algorithm used by div64_u64.
+ * But this operation, which includes math for calculating the remainder,
+ * is kept distinct to avoid slowing down the div64_u64 operation on 32bit
+ * systems.
+ */
+#ifndef div64_u64_rem
+u64 div64_u64_rem(u64 dividend, u64 divisor, u64 *remainder)
+{
+	u32 high = divisor >> 32;
+	u64 quot;
+
+	if (high == 0) {
+		u32 rem32;
+		quot = div_u64_rem(dividend, divisor, &rem32);
+		*remainder = rem32;
+	} else {
+		int n = 1 + fls(high);
+		quot = div_u64(dividend >> n, divisor >> n);
+
+		if (quot != 0)
+			quot--;
+
+		*remainder = dividend - quot * divisor;
+		if (*remainder >= divisor) {
+			quot++;
+			*remainder -= divisor;
+		}
+	}
+
+	return quot;
+}
+EXPORT_SYMBOL(div64_u64_rem);
+#endif
+
+/**
+ * div64_u64 - unsigned 64bit divide with 64bit divisor
+ * @dividend:	64bit dividend
+ * @divisor:	64bit divisor
+ *
+ * This implementation is a modified version of the algorithm proposed
+ * by the book 'Hacker's Delight'. The original source and full proof
+ * can be found here and is available for use without restriction.
+ *
+ * 'http://www.hackersdelight.org/hdcodetxt/divDouble.c.txt'
+ */
+#ifndef div64_u64
+u64 div64_u64(u64 dividend, u64 divisor)
+{
+	u32 high = divisor >> 32;
+	u64 quot;
+
+	if (high == 0) {
+		quot = div_u64(dividend, divisor);
+	} else {
+		int n = 1 + fls(high);
+		quot = div_u64(dividend >> n, divisor >> n);
+
+		if (quot != 0)
+			quot--;
+		if ((dividend - quot * divisor) >= divisor)
+			quot++;
+	}
+
+	return quot;
+}
+EXPORT_SYMBOL(div64_u64);
+#endif
+
+/**
+ * div64_s64 - signed 64bit divide with 64bit divisor
+ * @dividend:	64bit dividend
+ * @divisor:	64bit divisor
+ */
+#ifndef div64_s64
+s64 div64_s64(s64 dividend, s64 divisor)
+{
+	s64 quot, t;
+
+	quot = div64_u64(abs(dividend), abs(divisor));
+	t = (dividend ^ divisor) >> 63;
+
+	return (quot ^ t) - t;
+}
+EXPORT_SYMBOL(div64_s64);
+#endif
+
+#endif /* BITS_PER_LONG == 32 */
+
+/*
+ * Iterative div/mod for use when dividend is not expected to be much
+ * bigger than divisor.
+ */
+u32 iter_div_u64_rem(u64 dividend, u32 divisor, u64 *remainder)
+{
+	return __iter_div_u64_rem(dividend, divisor, remainder);
+}
+EXPORT_SYMBOL(iter_div_u64_rem);