From patchwork Sun May 17 02:08:29 2020
X-Patchwork-Submitter: John Hubbard
X-Patchwork-Id: 1292029
X-Patchwork-Delegate: davem@davemloft.net
From: John Hubbard
To: LKML
CC: John Hubbard, "David S. Miller", sparclinux@vger.kernel.org
Subject: [PATCH] oradax: convert get_user_pages() --> pin_user_pages()
Date: Sat, 16 May 2020 19:08:29 -0700
Message-ID: <20200517020829.856731-1-jhubbard@nvidia.com>
X-Mailer: git-send-email 2.26.2

This code was using get_user_pages_fast(), in a "Case 2" scenario
(DMA/RDMA), using the categorization from [1]. That means that it's time
to convert the get_user_pages_fast() + put_page() calls to
pin_user_pages_fast() + unpin_user_pages() calls.
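For reference, the general "Case 2" conversion pattern looks roughly like
the sketch below. This is an illustrative example only, not code from this
patch; the my_driver_*() helpers are hypothetical names invented for the
example.

	#include <linux/errno.h>
	#include <linux/mm.h>

	/* Pin one user page for upcoming DMA (formerly get_user_pages_fast()). */
	static int my_driver_pin_user_page(unsigned long uaddr, struct page **p)
	{
		int ret = pin_user_pages_fast(uaddr, 1, FOLL_WRITE, p);

		return (ret == 1) ? 0 : -EFAULT;
	}

	/* Release the pin (formerly set_page_dirty() + put_page()). */
	static void my_driver_unpin_user_page(struct page *p, bool dirtied)
	{
		unpin_user_pages_dirty_lock(&p, 1, dirtied);
	}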
There is some helpful background in [2]: basically, this is a small part
of fixing a long-standing disconnect between pinning pages, and file
systems' use of those pages.

[1] Documentation/core-api/pin_user_pages.rst
[2] "Explicit pinning of user-space pages": https://lwn.net/Articles/807108/

Cc: David S. Miller
Cc: sparclinux@vger.kernel.org
Signed-off-by: John Hubbard
---
 drivers/sbus/char/oradax.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

base-commit: 3d1c1e5931ce45b3a3f309385bbc00c78e9951c6

diff --git a/drivers/sbus/char/oradax.c b/drivers/sbus/char/oradax.c
index 8af216287a84..21b7cb6e7e70 100644
--- a/drivers/sbus/char/oradax.c
+++ b/drivers/sbus/char/oradax.c
@@ -410,9 +410,7 @@ static void dax_unlock_pages(struct dax_ctx *ctx, int ccb_index, int nelem)
 
 			if (p) {
 				dax_dbg("freeing page %p", p);
-				if (j == OUT)
-					set_page_dirty(p);
-				put_page(p);
+				unpin_user_pages_dirty_lock(&p, 1, j == OUT);
 				ctx->pages[i][j] = NULL;
 			}
 		}
@@ -425,13 +423,13 @@ static int dax_lock_page(void *va, struct page **p)
 
 	dax_dbg("uva %p", va);
 
-	ret = get_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
+	ret = pin_user_pages_fast((unsigned long)va, 1, FOLL_WRITE, p);
 	if (ret == 1) {
 		dax_dbg("locked page %p, for VA %p", *p, va);
 		return 0;
 	}
 
-	dax_dbg("get_user_pages failed, va=%p, ret=%d", va, ret);
+	dax_dbg("pin_user_pages failed, va=%p, ret=%d", va, ret);
 	return -1;
 }