From patchwork Thu Oct 31 18:38:57 2013
X-Patchwork-Submitter: Tom Musta
X-Patchwork-Id: 287584
From: Tom
To: linuxppc-dev@lists.ozlabs.org
Cc: Tom Musta
Subject: [V2 PATCH 2/3] powerpc: Fix Unaligned Fixed Point Loads and Stores
Date: Thu, 31 Oct 2013 13:38:57 -0500
Message-Id: <1383244738-5986-3-git-send-email-tommusta@gmail.com>
In-Reply-To: <1383244738-5986-1-git-send-email-tommusta@gmail.com>
References: <1383244738-5986-1-git-send-email-tommusta@gmail.com>
X-Mailer: git-send-email 1.7.9.5
List-Id: Linux on PowerPC Developers Mail List
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

From: Tom Musta

This patch modifies the unaligned access routines of the sstep.c module so that they properly reverse the bytes of storage operands when running a little-endian kernel.
Signed-off-by: Tom Musta
---
 arch/powerpc/lib/sstep.c |   45 +++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 45 insertions(+), 0 deletions(-)

diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index 7bfaa9d..c8743e1 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -212,11 +212,19 @@ static int __kprobes read_mem_unaligned(unsigned long *dest, unsigned long ea,
 {
 	int err;
 	unsigned long x, b, c;
+#ifdef __LITTLE_ENDIAN__
+	int len = nb; /* save a copy of the length for byte reversal */
+#endif

 	/* unaligned, do this in pieces */
 	x = 0;
 	for (; nb > 0; nb -= c) {
+#ifdef __LITTLE_ENDIAN__
+		c = 1;
+#endif
+#ifdef __BIG_ENDIAN__
 		c = max_align(ea);
+#endif
 		if (c > nb)
 			c = max_align(nb);
 		err = read_mem_aligned(&b, ea, c);
@@ -225,7 +233,24 @@ static int __kprobes read_mem_unaligned(unsigned long *dest, unsigned long ea,
 		x = (x << (8 * c)) + b;
 		ea += c;
 	}
+#ifdef __LITTLE_ENDIAN__
+	switch (len) {
+	case 2:
+		*dest = byterev_2(x);
+		break;
+	case 4:
+		*dest = byterev_4(x);
+		break;
+#ifdef __powerpc64__
+	case 8:
+		*dest = byterev_8(x);
+		break;
+#endif
+	}
+#endif
+#ifdef __BIG_ENDIAN__
 	*dest = x;
+#endif
 	return 0;
 }

@@ -273,9 +298,29 @@ static int __kprobes write_mem_unaligned(unsigned long val, unsigned long ea,
 	int err;
 	unsigned long c;

+#ifdef __LITTLE_ENDIAN__
+	switch (nb) {
+	case 2:
+		val = byterev_2(val);
+		break;
+	case 4:
+		val = byterev_4(val);
+		break;
+#ifdef __powerpc64__
+	case 8:
+		val = byterev_8(val);
+		break;
+#endif
+	}
+#endif
 	/* unaligned or little-endian, do this in pieces */
 	for (; nb > 0; nb -= c) {
+#ifdef __LITTLE_ENDIAN__
+		c = 1;
+#endif
+#ifdef __BIG_ENDIAN__
 		c = max_align(ea);
+#endif
 		if (c > nb)
 			c = max_align(nb);
 		err = write_mem_aligned(val >> (nb - c) * 8, ea, c);
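For reviewers, the technique in the read path above can be sketched in plain userspace C, assuming only standard headers: fetch the operand one byte at a time (c == 1 on little-endian), accumulate big-endian-wise into x, then byte-reverse the result. The byterev_2/byterev_4 helpers below are stand-ins for the kernel's, and read_mem_unaligned_le reads from a plain buffer rather than through read_mem_aligned; this is an illustrative sketch, not the kernel code itself.

```c
#include <assert.h>

/* Userspace stand-ins for the kernel's byterev_* helpers. */
static unsigned long byterev_2(unsigned long x)
{
	return ((x >> 8) & 0xff) | ((x & 0xff) << 8);
}

static unsigned long byterev_4(unsigned long x)
{
	return ((x >> 24) & 0xff) | ((x >> 8) & 0xff00) |
	       ((x & 0xff00) << 8) | ((x & 0xff) << 24);
}

/*
 * Little-endian flavor of the unaligned read: accumulate the operand
 * byte by byte (as if c == 1 on every loop iteration), which builds x
 * in big-endian order, then byte-reverse so *dest holds the value the
 * little-endian storage operand actually represents.
 */
static int read_mem_unaligned_le(unsigned long *dest,
				 const unsigned char *ea, int nb)
{
	unsigned long x = 0;
	int len = nb;	/* save a copy of the length for byte reversal */

	for (; nb > 0; nb--)
		x = (x << 8) + *ea++;

	switch (len) {
	case 2:
		*dest = byterev_2(x);
		break;
	case 4:
		*dest = byterev_4(x);
		break;
	default:
		*dest = x;
	}
	return 0;
}
```

For example, the little-endian byte sequence 0x78 0x56 0x34 0x12 accumulates to x = 0x78563412, and the byterev_4 step recovers the operand value 0x12345678.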