From patchwork Mon Jul 22 22:34:13 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1135294
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
 Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
 David S. Miller, Ilya Dryomov, Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jérôme Glisse, Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
 Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar, Yan Zheng,
 netdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
 LKML, John Hubbard
Subject: [PATCH 1/3] mm/gup: introduce __put_user_pages()
Date: Mon, 22 Jul 2019 15:34:13 -0700
Message-Id: <20190722223415.13269-2-jhubbard@nvidia.com>
In-Reply-To: <20190722223415.13269-1-jhubbard@nvidia.com>
References: <20190722223415.13269-1-jhubbard@nvidia.com>

From: John Hubbard

Add a more capable variation of put_user_pages() to the API set, and
call it from the simpler wrappers.

The new __put_user_pages() takes an enum that handles the various
combinations of needing to call set_page_dirty() or
set_page_dirty_lock(), before calling put_user_page().

Cc: Matthew Wilcox
Cc: Jan Kara
Cc: Christoph Hellwig
Signed-off-by: John Hubbard
---
 include/linux/mm.h |  58 ++++++++++++++++++-
 mm/gup.c           | 137 ++++++++++++++++++++++-----------------------
 2 files changed, 124 insertions(+), 71 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..7218585681b2 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,62 @@ static inline void put_user_page(struct page *page)
 	put_page(page);
 }
 
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+enum pup_flags_t {
+	PUP_FLAGS_CLEAN		= 0,
+	PUP_FLAGS_DIRTY		= 1,
+	PUP_FLAGS_LOCK		= 2,
+	PUP_FLAGS_DIRTY_LOCK	= 3,
+};
+
+void __put_user_pages(struct page **pages, unsigned long npages,
+		      enum pup_flags_t flags);
+
+/**
+ * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * "gup-pinned page" refers to a page that has had one of the get_user_pages()
+ * variants called on that page.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * set_page_dirty(), which does not lock the page, is used here.
+ * Therefore, it is the caller's responsibility to ensure that this is
+ * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ *
+ */
+static inline void put_user_pages_dirty(struct page **pages,
+					unsigned long npages)
+{
+	__put_user_pages(pages, npages, PUP_FLAGS_DIRTY);
+}
+
+/**
+ * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
+ * @pages:  array of pages to be marked dirty and released.
+ * @npages: number of pages in the @pages array.
+ *
+ * For each page in the @pages array, make that page (or its head page, if a
+ * compound page) dirty, if it was previously listed as clean. Then, release
+ * the page using put_user_page().
+ *
+ * Please see the put_user_page() documentation for details.
+ *
+ * This is just like put_user_pages_dirty(), except that it invokes
+ * set_page_dirty_lock(), instead of set_page_dirty().
+ *
+ */
+static inline void put_user_pages_dirty_lock(struct page **pages,
+					     unsigned long npages)
+{
+	__put_user_pages(pages, npages, PUP_FLAGS_DIRTY_LOCK);
+}
+
 void put_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..6831ef064d76 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,87 +29,86 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
+ * __put_user_pages() - release an array of gup-pinned pages.
  * @pages:  array of pages to be marked dirty and released.
  * @npages: number of pages in the @pages array.
+ * @flags:  additional hints, to be applied to each page:
  *
- * "gup-pinned page" refers to a page that has had one of the get_user_pages()
- * variants called on that page.
+ *     PUP_FLAGS_CLEAN: no additional steps required. (Consider calling
+ *     put_user_pages() directly, instead.)
  *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ *     PUP_FLAGS_DIRTY: Call set_page_dirty() on the page (if not already
+ *     dirty).
  *
- * Please see the put_user_page() documentation for details.
+ *     PUP_FLAGS_LOCK: meaningless by itself, but included in order to show
+ *     the numeric relationship between the flags.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ *     PUP_FLAGS_DIRTY_LOCK: Call set_page_dirty_lock() on the page (if not
+ *     already dirty).
  *
+ * For each page in the @pages array, release the page using put_user_page().
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void __put_user_pages(struct page **pages, unsigned long npages,
+		      enum pup_flags_t flags)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;
 
-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages:  array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+
+		switch (flags) {
+		case PUP_FLAGS_CLEAN:
+			break;
+
+		case PUP_FLAGS_DIRTY:
+			/*
+			 * Checking PageDirty at this point may race with
+			 * clear_page_dirty_for_io(), but that's OK. Two key
+			 * cases:
+			 *
+			 * 1) This code sees the page as already dirty, so it
+			 * skips the call to set_page_dirty(). That could happen
+			 * because clear_page_dirty_for_io() called
+			 * page_mkclean(), followed by set_page_dirty().
+			 * However, now the page is going to get written back,
+			 * which meets the original intention of setting it
+			 * dirty, so all is well: clear_page_dirty_for_io() goes
+			 * on to call TestClearPageDirty(), and write the page
+			 * back.
+			 *
+			 * 2) This code sees the page as clean, so it calls
+			 * set_page_dirty(). The page stays dirty, despite being
+			 * written back, so it gets written back again in the
+			 * next writeback cycle. This is harmless.
+			 */
+			if (!PageDirty(page))
+				set_page_dirty(page);
+			break;
+
+		case PUP_FLAGS_LOCK:
+			VM_WARN_ON_ONCE(flags == PUP_FLAGS_LOCK);
+			/*
+			 * Shouldn't happen, but treat it as _DIRTY_LOCK if
+			 * it does: fall through.
+			 */
+
+		case PUP_FLAGS_DIRTY_LOCK:
+			/* Same comments as for PUP_FLAGS_DIRTY apply here. */
+			if (!PageDirty(page))
+				set_page_dirty_lock(page);
+			break;
+		}
+		put_user_page(page);
+	}
 }
-EXPORT_SYMBOL(put_user_pages_dirty_lock);
+EXPORT_SYMBOL(__put_user_pages);
 
 /**
  * put_user_pages() - release an array of gup-pinned pages.
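
A note on usage, for readers coming to this series fresh: the enum values
are laid out so that PUP_FLAGS_DIRTY_LOCK (3) is the bitwise OR of
PUP_FLAGS_DIRTY (1) and PUP_FLAGS_LOCK (2), which is why the otherwise
meaningless _LOCK value is kept in the enum. A minimal caller sketch
follows; it is not part of the patch, and the function name, buffer
handling, and error paths are invented for illustration:

	/*
	 * Hypothetical caller: pin user pages for a device write, then
	 * dirty and release them. No page lock is held at this point,
	 * so the _DIRTY_LOCK variant is the safe choice.
	 */
	static int example_dma_read(unsigned long uaddr, int nr_pages,
				    struct page **pages)
	{
		int pinned;

		pinned = get_user_pages_fast(uaddr, nr_pages, FOLL_WRITE,
					     pages);
		if (pinned <= 0)
			return pinned ? pinned : -EFAULT;

		/* ... program the device, wait for the DMA to complete ... */

		__put_user_pages(pages, pinned, PUP_FLAGS_DIRTY_LOCK);
		return 0;
	}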
From patchwork Mon Jul 22 22:34:14 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1135295
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
 Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
 David S. Miller, Ilya Dryomov, Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jérôme Glisse, Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
 Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar, Yan Zheng,
 netdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
 LKML, John Hubbard
Subject: [PATCH 2/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
Date: Mon, 22 Jul 2019 15:34:14 -0700
Message-Id: <20190722223415.13269-3-jhubbard@nvidia.com>
In-Reply-To: <20190722223415.13269-1-jhubbard@nvidia.com>
References: <20190722223415.13269-1-jhubbard@nvidia.com>

From: John Hubbard

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Also reverse the order of a comparison, in order to placate
checkpatch.pl.

Cc: David Airlie
Cc: Daniel Vetter
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Hubbard
---
 drivers/gpu/drm/via/via_dmablit.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c
index 062067438f1d..754f2bb97d61 100644
--- a/drivers/gpu/drm/via/via_dmablit.c
+++ b/drivers/gpu/drm/via/via_dmablit.c
@@ -171,7 +171,6 @@ via_map_blit_for_device(struct pci_dev *pdev,
 static void
 via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
 {
-	struct page *page;
 	int i;
 
 	switch (vsg->state) {
@@ -186,13 +185,9 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
 		kfree(vsg->desc_pages);
 		/* fall through */
 	case dr_via_pages_locked:
-		for (i = 0; i < vsg->num_pages; ++i) {
-			if (NULL != (page = vsg->pages[i])) {
-				if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
-					SetPageDirty(page);
-				put_page(page);
-			}
-		}
+		__put_user_pages(vsg->pages, vsg->num_pages,
+				 (vsg->direction == DMA_FROM_DEVICE) ?
+				 PUP_FLAGS_DIRTY : PUP_FLAGS_CLEAN);
 		/* fall through */
 	case dr_via_pages_alloc:
 		vfree(vsg->pages);
From patchwork Mon Jul 22 22:34:15 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1135297
From: john.hubbard@gmail.com
X-Google-Original-From: jhubbard@nvidia.com
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
 Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
 David S. Miller, Ilya Dryomov, Jan Kara, Jason Gunthorpe, Jens Axboe,
 Jérôme Glisse, Johannes Thumshirn, Magnus Karlsson, Matthew Wilcox,
 Miklos Szeredi, Ming Lei, Sage Weil, Santosh Shilimkar, Yan Zheng,
 netdev@vger.kernel.org, dri-devel@lists.freedesktop.org,
 linux-mm@kvack.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org,
 LKML, John Hubbard
Subject: [PATCH 3/3] net/xdp: convert put_page() to put_user_page*()
Date: Mon, 22 Jul 2019 15:34:15 -0700
Message-Id: <20190722223415.13269-4-jhubbard@nvidia.com>
In-Reply-To: <20190722223415.13269-1-jhubbard@nvidia.com>
References: <20190722223415.13269-1-jhubbard@nvidia.com>

From: John Hubbard

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Cc: Björn Töpel
Cc: Magnus Karlsson
Cc: David S. Miller
Cc: netdev@vger.kernel.org
Signed-off-by: John Hubbard
---
 net/xdp/xdp_umem.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 83de74ca729a..0325a17915de 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -166,14 +166,7 @@ void xdp_umem_clear_dev(struct xdp_umem *umem)
 
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
-	unsigned int i;
-
-	for (i = 0; i < umem->npgs; i++) {
-		struct page *page = umem->pgs[i];
-
-		set_page_dirty_lock(page);
-		put_page(page);
-	}
+	put_user_pages_dirty_lock(umem->pgs, umem->npgs);
 
 	kfree(umem->pgs);
 	umem->pgs = NULL;