From patchwork Tue May  4 13:02:09 2010
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Takuya Yoshikawa
X-Patchwork-Id: 51620
Date:
Tue, 4 May 2010 22:02:09 +0900
From: Takuya Yoshikawa
To: Takuya Yoshikawa
Subject: [RFC][PATCH 4/12] x86: introduce copy_in_user() for 32-bit
Message-Id: <20100504220209.c3836ef3.takuya.yoshikawa@gmail.com>
In-Reply-To: <20100504215645.6448af8f.takuya.yoshikawa@gmail.com>
References: <20100504215645.6448af8f.takuya.yoshikawa@gmail.com>
X-Mailer: Sylpheed 3.0.2 (GTK+ 2.20.0; i486-pc-linux-gnu)
Mime-Version: 1.0
List-Id: Linux on PowerPC Developers Mail List
Cc: linux-arch@vger.kernel.org, x86@kernel.org, arnd@arndb.de,
	kvm@vger.kernel.org, kvm-ia64@vger.kernel.org, fernando@oss.ntt.co.jp,
	mtosatti@redhat.com, agraf@suse.de, kvm-ppc@vger.kernel.org,
	linux-kernel@vger.kernel.org, yoshikawa.takuya@oss.ntt.co.jp,
	linuxppc-dev@ozlabs.org, mingo@redhat.com, paulus@samba.org,
	avi@redhat.com, hpa@zytor.com, tglx@linutronix.de

While working on KVM's dirty page logging optimization, we encountered the
need for copy_in_user() on 32-bit x86 and ppc: it will be used for
manipulating dirty bitmaps in user space.

So we implement copy_in_user() for 32-bit using the existing generic
copy-user helpers.

Signed-off-by: Takuya Yoshikawa
Signed-off-by: Fernando Luis Vazquez Cao
Cc: Avi Kivity
Cc: Thomas Gleixner
Cc: Ingo Molnar
Cc: "H. Peter Anvin"
---
 arch/x86/include/asm/uaccess_32.h |    2 ++
 arch/x86/lib/usercopy_32.c        |   26 ++++++++++++++++++++++++++
 2 files changed, 28 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 088d09f..85d396d 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -21,6 +21,8 @@ unsigned long __must_check __copy_from_user_ll_nocache
 		(void *to, const void __user *from, unsigned long n);
 unsigned long __must_check __copy_from_user_ll_nocache_nozero
 		(void *to, const void __user *from, unsigned long n);
+unsigned long __must_check copy_in_user
+		(void __user *to, const void __user *from, unsigned n);
 
 /**
  * __copy_to_user_inatomic: - Copy a block of data into user space, with less checking.
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index e218d5d..e90ffc3 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -889,3 +889,29 @@ void copy_from_user_overflow(void)
 	WARN(1, "Buffer overflow detected!\n");
 }
 EXPORT_SYMBOL(copy_from_user_overflow);
+
+/**
+ * copy_in_user: - Copy a block of data from user space to user space.
+ * @to:   Destination address, in user space.
+ * @from: Source address, in user space.
+ * @n:    Number of bytes to copy.
+ *
+ * Context: User context only.  This function may sleep.
+ *
+ * Copy data from user space to user space.
+ *
+ * Returns number of bytes that could not be copied.
+ * On success, this will be zero.
+ */
+unsigned long
+copy_in_user(void __user *to, const void __user *from, unsigned n)
+{
+	if (access_ok(VERIFY_WRITE, to, n) && access_ok(VERIFY_READ, from, n)) {
+		if (movsl_is_ok(to, from, n))
+			__copy_user(to, from, n);
+		else
+			n = __copy_user_intel(to, (const void *)from, n);
+	}
+	return n;
+}
+EXPORT_SYMBOL(copy_in_user);