From patchwork Thu Aug 1 23:47:33 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1140828
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
Cc: Alexander Viro, Björn Töpel, Boaz Harrosh, Christoph Hellwig,
    Daniel Vetter, Dan Williams, Dave Chinner, David Airlie,
    David S. Miller, Ilya Dryomov, Jan Kara, Jason Gunthorpe,
    Jens Axboe, Jérôme Glisse, Johannes Thumshirn, Magnus Karlsson,
    Matthew Wilcox, Miklos Szeredi, Ming Lei, Sage Weil,
    Santosh Shilimkar, Yan Zheng, netdev@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linux-mm@kvack.org,
    linux-rdma@vger.kernel.org, bpf@vger.kernel.org, LKML,
    John Hubbard, Ira Weiny
Miller" , Ilya Dryomov , Jan Kara , Jason Gunthorpe , Jens Axboe , =?utf-8?b?SsOpcsO0bWUgR2xpc3Nl?= , Johannes Thumshirn , Magnus Karlsson , Matthew Wilcox , Miklos Szeredi , Ming Lei , Sage Weil , Santosh Shilimkar , Yan Zheng , netdev@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, linux-rdma@vger.kernel.org, bpf@vger.kernel.org, LKML , John Hubbard , Ira Weiny Subject: [PATCH v5 1/3] mm/gup: add make_dirty arg to put_user_pages_dirty_lock() Date: Thu, 1 Aug 2019 16:47:33 -0700 Message-Id: <20190801234735.2149-2-jhubbard@nvidia.com> X-Mailer: git-send-email 2.22.0 In-Reply-To: <20190801234735.2149-1-jhubbard@nvidia.com> References: <20190801234735.2149-1-jhubbard@nvidia.com> MIME-Version: 1.0 X-NVConfidentiality: public Sender: netdev-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: netdev@vger.kernel.org From: John Hubbard Provide more capable variation of put_user_pages_dirty_lock(), and delete put_user_pages_dirty(). This is based on the following: 1. Lots of call sites become simpler if a bool is passed into put_user_page*(), instead of making the call site choose which put_user_page*() variant to call. 2. Christoph Hellwig's observation that set_page_dirty_lock() is usually correct, and set_page_dirty() is usually a bug, or at least questionable, within a put_user_page*() calling chain. This leads to the following API choices: * put_user_pages_dirty_lock(page, npages, make_dirty) * There is no put_user_pages_dirty(). You have to hand code that, in the rare case that it's required. Reviewed-by: Christoph Hellwig Cc: Matthew Wilcox Cc: Jan Kara Cc: Ira Weiny Cc: Jason Gunthorpe Signed-off-by: John Hubbard --- drivers/infiniband/core/umem.c | 5 +- drivers/infiniband/hw/hfi1/user_pages.c | 5 +- drivers/infiniband/hw/qib/qib_user_pages.c | 13 +-- drivers/infiniband/hw/usnic/usnic_uiom.c | 5 +- drivers/infiniband/sw/siw/siw_mem.c | 18 +--- include/linux/mm.h | 5 +- mm/gup.c | 115 +++++++++------------ 7 files changed, 60 insertions(+), 106 deletions(-) diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c index 08da840ed7ee..965cf9dea71a 100644 --- a/drivers/infiniband/core/umem.c +++ b/drivers/infiniband/core/umem.c @@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) { page = sg_page_iter_page(&sg_iter); - if (umem->writable && dirty) - put_user_pages_dirty_lock(&page, 1); - else - put_user_page(page); + put_user_pages_dirty_lock(&page, 1, umem->writable && dirty); } sg_free_table(&umem->sg_head); diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c index b89a9b9aef7a..469acb961fbd 100644 --- a/drivers/infiniband/hw/hfi1/user_pages.c +++ b/drivers/infiniband/hw/hfi1/user_pages.c @@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np void hfi1_release_user_pages(struct mm_struct *mm, struct page **p, size_t npages, bool dirty) { - if (dirty) - put_user_pages_dirty_lock(p, npages); - else - put_user_pages(p, npages); + put_user_pages_dirty_lock(p, npages, dirty); if (mm) { /* during close after signal, mm can be NULL */ atomic64_sub(npages, &mm->pinned_vm); diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c index bfbfbb7e0ff4..26c1fb8d45cc 100644 --- a/drivers/infiniband/hw/qib/qib_user_pages.c +++ b/drivers/infiniband/hw/qib/qib_user_pages.c @@ -37,15 +37,6 @@ #include "qib.h" 
diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index 08da840ed7ee..965cf9dea71a 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -54,10 +54,7 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 
 	for_each_sg_page(umem->sg_head.sgl, &sg_iter, umem->sg_nents, 0) {
 		page = sg_page_iter_page(&sg_iter);
-		if (umem->writable && dirty)
-			put_user_pages_dirty_lock(&page, 1);
-		else
-			put_user_page(page);
+		put_user_pages_dirty_lock(&page, 1, umem->writable && dirty);
 	}
 
 	sg_free_table(&umem->sg_head);

diff --git a/drivers/infiniband/hw/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index b89a9b9aef7a..469acb961fbd 100644
--- a/drivers/infiniband/hw/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
@@ -118,10 +118,7 @@ int hfi1_acquire_user_pages(struct mm_struct *mm, unsigned long vaddr, size_t np
 void hfi1_release_user_pages(struct mm_struct *mm, struct page **p,
 			     size_t npages, bool dirty)
 {
-	if (dirty)
-		put_user_pages_dirty_lock(p, npages);
-	else
-		put_user_pages(p, npages);
+	put_user_pages_dirty_lock(p, npages, dirty);
 
 	if (mm) { /* during close after signal, mm can be NULL */
 		atomic64_sub(npages, &mm->pinned_vm);

diff --git a/drivers/infiniband/hw/qib/qib_user_pages.c b/drivers/infiniband/hw/qib/qib_user_pages.c
index bfbfbb7e0ff4..26c1fb8d45cc 100644
--- a/drivers/infiniband/hw/qib/qib_user_pages.c
+++ b/drivers/infiniband/hw/qib/qib_user_pages.c
@@ -37,15 +37,6 @@
 #include "qib.h"
 
-static void __qib_release_user_pages(struct page **p, size_t num_pages,
-				     int dirty)
-{
-	if (dirty)
-		put_user_pages_dirty_lock(p, num_pages);
-	else
-		put_user_pages(p, num_pages);
-}
-
 /**
  * qib_map_page - a safety wrapper around pci_map_page()
  *
@@ -124,7 +115,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 	return 0;
 bail_release:
-	__qib_release_user_pages(p, got, 0);
+	put_user_pages_dirty_lock(p, got, false);
 bail:
 	atomic64_sub(num_pages, &current->mm->pinned_vm);
 	return ret;
@@ -132,7 +123,7 @@ int qib_get_user_pages(unsigned long start_page, size_t num_pages,
 
 void qib_release_user_pages(struct page **p, size_t num_pages)
 {
-	__qib_release_user_pages(p, num_pages, 1);
+	put_user_pages_dirty_lock(p, num_pages, true);
 
 	/* during close after signal, mm can be NULL */
 	if (current->mm)

diff --git a/drivers/infiniband/hw/usnic/usnic_uiom.c b/drivers/infiniband/hw/usnic/usnic_uiom.c
index 0b0237d41613..62e6ffa9ad78 100644
--- a/drivers/infiniband/hw/usnic/usnic_uiom.c
+++ b/drivers/infiniband/hw/usnic/usnic_uiom.c
@@ -75,10 +75,7 @@ static void usnic_uiom_put_pages(struct list_head *chunk_list, int dirty)
 		for_each_sg(chunk->page_list, sg, chunk->nents, i) {
 			page = sg_page(sg);
 			pa = sg_phys(sg);
-			if (dirty)
-				put_user_pages_dirty_lock(&page, 1);
-			else
-				put_user_page(page);
+			put_user_pages_dirty_lock(&page, 1, dirty);
 			usnic_dbg("pa: %pa\n", &pa);
 		}
 		kfree(chunk);

diff --git a/drivers/infiniband/sw/siw/siw_mem.c b/drivers/infiniband/sw/siw/siw_mem.c
index 67171c82b0c4..2284966e4499 100644
--- a/drivers/infiniband/sw/siw/siw_mem.c
+++ b/drivers/infiniband/sw/siw/siw_mem.c
@@ -60,20 +60,6 @@ struct siw_mem *siw_mem_id2obj(struct siw_device *sdev, int stag_index)
 	return NULL;
 }
 
-static void siw_free_plist(struct siw_page_chunk *chunk, int num_pages,
-			   bool dirty)
-{
-	struct page **p = chunk->plist;
-
-	while (num_pages--) {
-		if (!PageDirty(*p) && dirty)
-			put_user_pages_dirty_lock(p, 1);
-		else
-			put_user_page(*p);
-		p++;
-	}
-}
-
 void siw_umem_release(struct siw_umem *umem, bool dirty)
 {
 	struct mm_struct *mm_s = umem->owning_mm;
@@ -82,8 +68,8 @@ void siw_umem_release(struct siw_umem *umem, bool dirty)
 	for (i = 0; num_pages; i++) {
 		int to_free = min_t(int, PAGES_PER_CHUNK, num_pages);
 
-		siw_free_plist(&umem->page_chunk[i], to_free,
-			       umem->writable && dirty);
+		put_user_pages_dirty_lock(umem->page_chunk[i].plist, to_free,
+					  umem->writable && dirty);
 		kfree(umem->page_chunk[i].plist);
 		num_pages -= to_free;
 	}

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 0334ca97c584..9759b6a24420 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1057,8 +1057,9 @@ static inline void put_user_page(struct page *page)
 	put_page(page);
 }
 
-void put_user_pages_dirty(struct page **pages, unsigned long npages);
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages);
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty);
+
 void put_user_pages(struct page **pages, unsigned long npages);
 
 #if defined(CONFIG_SPARSEMEM) && !defined(CONFIG_SPARSEMEM_VMEMMAP)
diff --git a/mm/gup.c b/mm/gup.c
index 98f13ab37bac..7fefd7ab02c4 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -29,85 +29,70 @@ struct follow_page_context {
 	unsigned int page_mask;
 };
 
-typedef int (*set_dirty_func_t)(struct page *page);
-
-static void __put_user_pages_dirty(struct page **pages,
-				   unsigned long npages,
-				   set_dirty_func_t sdf)
-{
-	unsigned long index;
-
-	for (index = 0; index < npages; index++) {
-		struct page *page = compound_head(pages[index]);
-
-		/*
-		 * Checking PageDirty at this point may race with
-		 * clear_page_dirty_for_io(), but that's OK. Two key cases:
-		 *
-		 * 1) This code sees the page as already dirty, so it skips
-		 * the call to sdf(). That could happen because
-		 * clear_page_dirty_for_io() called page_mkclean(),
-		 * followed by set_page_dirty(). However, now the page is
-		 * going to get written back, which meets the original
-		 * intention of setting it dirty, so all is well:
-		 * clear_page_dirty_for_io() goes on to call
-		 * TestClearPageDirty(), and write the page back.
-		 *
-		 * 2) This code sees the page as clean, so it calls sdf().
-		 * The page stays dirty, despite being written back, so it
-		 * gets written back again in the next writeback cycle.
-		 * This is harmless.
-		 */
-		if (!PageDirty(page))
-			sdf(page);
-
-		put_user_page(page);
-	}
-}
-
 /**
- * put_user_pages_dirty() - release and dirty an array of gup-pinned pages
- * @pages:  array of pages to be marked dirty and released.
+ * put_user_pages_dirty_lock() - release and optionally dirty gup-pinned pages
+ * @pages:  array of pages to be maybe marked dirty, and definitely released.
  * @npages: number of pages in the @pages array.
+ * @make_dirty: whether to mark the pages dirty
  *
  * "gup-pinned page" refers to a page that has had one of the get_user_pages()
  * variants called on that page.
  *
  * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
+ * compound page) dirty, if @make_dirty is true, and if the page was previously
+ * listed as clean. In any case, releases all pages using put_user_page(),
+ * possibly via put_user_pages(), for the non-dirty case.
  *
  * Please see the put_user_page() documentation for details.
  *
- * set_page_dirty(), which does not lock the page, is used here.
- * Therefore, it is the caller's responsibility to ensure that this is
- * safe. If not, then put_user_pages_dirty_lock() should be called instead.
+ * set_page_dirty_lock() is used internally. If instead, set_page_dirty() is
+ * required, then the caller should a) verify that this is really correct,
+ * because _lock() is usually required, and b) hand code it:
+ * set_page_dirty_lock(), put_user_page().
  *
  */
-void put_user_pages_dirty(struct page **pages, unsigned long npages)
+void put_user_pages_dirty_lock(struct page **pages, unsigned long npages,
+			       bool make_dirty)
 {
-	__put_user_pages_dirty(pages, npages, set_page_dirty);
-}
-EXPORT_SYMBOL(put_user_pages_dirty);
+	unsigned long index;
 
-/**
- * put_user_pages_dirty_lock() - release and dirty an array of gup-pinned pages
- * @pages:  array of pages to be marked dirty and released.
- * @npages: number of pages in the @pages array.
- *
- * For each page in the @pages array, make that page (or its head page, if a
- * compound page) dirty, if it was previously listed as clean. Then, release
- * the page using put_user_page().
- *
- * Please see the put_user_page() documentation for details.
- *
- * This is just like put_user_pages_dirty(), except that it invokes
- * set_page_dirty_lock(), instead of set_page_dirty().
- *
- */
-void put_user_pages_dirty_lock(struct page **pages, unsigned long npages)
-{
-	__put_user_pages_dirty(pages, npages, set_page_dirty_lock);
+	/*
+	 * TODO: this can be optimized for huge pages: if a series of pages is
+	 * physically contiguous and part of the same compound page, then a
+	 * single operation to the head page should suffice.
+	 */
+
+	if (!make_dirty) {
+		put_user_pages(pages, npages);
+		return;
+	}
+
+	for (index = 0; index < npages; index++) {
+		struct page *page = compound_head(pages[index]);
+		/*
+		 * Checking PageDirty at this point may race with
+		 * clear_page_dirty_for_io(), but that's OK. Two key
+		 * cases:
+		 *
+		 * 1) This code sees the page as already dirty, so it
+		 * skips the call to set_page_dirty(). That could happen
+		 * because clear_page_dirty_for_io() called
+		 * page_mkclean(), followed by set_page_dirty().
+		 * However, now the page is going to get written back,
+		 * which meets the original intention of setting it
+		 * dirty, so all is well: clear_page_dirty_for_io() goes
+		 * on to call TestClearPageDirty(), and write the page
+		 * back.
+		 *
+		 * 2) This code sees the page as clean, so it calls
+		 * set_page_dirty(). The page stays dirty, despite being
+		 * written back, so it gets written back again in the
+		 * next writeback cycle. This is harmless.
+		 */
+		if (!PageDirty(page))
+			set_page_dirty_lock(page);
+		put_user_page(page);
+	}
 }
 EXPORT_SYMBOL(put_user_pages_dirty_lock);
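The changelog above notes that the removed put_user_pages_dirty() must
be hand coded in the rare case it is needed. A sketch of what that hand
coding might look like, assuming the caller has verified that the
unlocked set_page_dirty() really is safe in its context; the function
name is hypothetical and is not part of this patch:

#include <linux/mm.h>

/*
 * Sketch only: the unlocked-dirty release loop that a caller would
 * open code, mirroring what the old put_user_pages_dirty() did.
 */
static void example_put_pages_dirty_unlocked(struct page **pages,
					     unsigned long npages)
{
	unsigned long index;

	for (index = 0; index < npages; index++) {
		struct page *page = compound_head(pages[index]);

		if (!PageDirty(page))
			set_page_dirty(page);	/* caller ensured safety */
		put_user_page(page);
	}
}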
From patchwork Thu Aug 1 23:47:34 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1140832
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
Subject: [PATCH v5 2/3] drivers/gpu/drm/via: convert put_page() to put_user_page*()
Date: Thu, 1 Aug 2019 16:47:34 -0700
Message-Id: <20190801234735.2149-3-jhubbard@nvidia.com>
In-Reply-To: <20190801234735.2149-1-jhubbard@nvidia.com>
References: <20190801234735.2149-1-jhubbard@nvidia.com>

From: John Hubbard

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Also reverse the order of a comparison, in order to placate
checkpatch.pl.
Cc: David Airlie
Cc: Daniel Vetter
Cc: dri-devel@lists.freedesktop.org
Signed-off-by: John Hubbard
---
 drivers/gpu/drm/via/via_dmablit.c | 10 ++--------
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/via/via_dmablit.c b/drivers/gpu/drm/via/via_dmablit.c
index 062067438f1d..b5b5bf0ba65e 100644
--- a/drivers/gpu/drm/via/via_dmablit.c
+++ b/drivers/gpu/drm/via/via_dmablit.c
@@ -171,7 +171,6 @@ via_map_blit_for_device(struct pci_dev *pdev,
 static void
 via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
 {
-	struct page *page;
 	int i;
 
 	switch (vsg->state) {
@@ -186,13 +185,8 @@ via_free_sg_info(struct pci_dev *pdev, drm_via_sg_info_t *vsg)
 		kfree(vsg->desc_pages);
 		/* fall through */
 	case dr_via_pages_locked:
-		for (i = 0; i < vsg->num_pages; ++i) {
-			if (NULL != (page = vsg->pages[i])) {
-				if (!PageReserved(page) && (DMA_FROM_DEVICE == vsg->direction))
-					SetPageDirty(page);
-				put_page(page);
-			}
-		}
+		put_user_pages_dirty_lock(vsg->pages, vsg->num_pages,
+					  (vsg->direction == DMA_FROM_DEVICE));
 		/* fall through */
 	case dr_via_pages_alloc:
 		vfree(vsg->pages);
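In generic form, the conversion above swaps an open-coded loop for a
single call. A sketch with illustrative names (neither function is the
driver's); note that the NULL and PageReserved() checks are dropped,
and that dirtying now goes through set_page_dirty_lock() rather than
bare SetPageDirty():

#include <linux/mm.h>

/* The pattern this patch replaces, in generic form. */
static void example_old_release(struct page **pages, unsigned long npages,
				bool device_wrote)	/* DMA_FROM_DEVICE */
{
	unsigned long i;
	struct page *page;

	for (i = 0; i < npages; i++) {
		page = pages[i];
		if (page) {
			if (!PageReserved(page) && device_wrote)
				SetPageDirty(page);
			put_page(page);
		}
	}
}

/* The new form: one call, dirtying decided by the third argument. */
static void example_new_release(struct page **pages, unsigned long npages,
				bool device_wrote)
{
	put_user_pages_dirty_lock(pages, npages, device_wrote);
}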
From patchwork Thu Aug 1 23:47:35 2019
X-Patchwork-Submitter: john.hubbard@gmail.com
X-Patchwork-Id: 1140829
From: John Hubbard <jhubbard@nvidia.com>
To: Andrew Morton
Subject: [PATCH v5 3/3] net/xdp: convert put_page() to put_user_page*()
Date: Thu, 1 Aug 2019 16:47:35 -0700
Message-Id: <20190801234735.2149-4-jhubbard@nvidia.com>
In-Reply-To: <20190801234735.2149-1-jhubbard@nvidia.com>
References: <20190801234735.2149-1-jhubbard@nvidia.com>

From: John Hubbard

For pages that were retained via get_user_pages*(), release those pages
via the new put_user_page*() routines, instead of via put_page() or
release_pages().

This is part of a tree-wide conversion, as described in commit
fc1d8e7cca2d ("mm: introduce put_user_page*(), placeholder versions").

Acked-by: Björn Töpel
Cc: Magnus Karlsson
Cc: David S. Miller
Cc: netdev@vger.kernel.org
Signed-off-by: John Hubbard
---
 net/xdp/xdp_umem.c | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/net/xdp/xdp_umem.c b/net/xdp/xdp_umem.c
index 83de74ca729a..17c4b3d3dc34 100644
--- a/net/xdp/xdp_umem.c
+++ b/net/xdp/xdp_umem.c
@@ -166,14 +166,7 @@ void xdp_umem_clear_dev(struct xdp_umem *umem)
 
 static void xdp_umem_unpin_pages(struct xdp_umem *umem)
 {
-	unsigned int i;
-
-	for (i = 0; i < umem->npgs; i++) {
-		struct page *page = umem->pgs[i];
-
-		set_page_dirty_lock(page);
-		put_page(page);
-	}
+	put_user_pages_dirty_lock(umem->pgs, umem->npgs, true);
 
 	kfree(umem->pgs);
 	umem->pgs = NULL;
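A sketch of the equivalence this conversion relies on, with an
illustrative function name: with make_dirty == true, the new call
performs, per head page, the same dirty-then-release sequence that the
old loop open coded, skipping set_page_dirty_lock() only when the page
is already dirty.

#include <linux/mm.h>

/* Hypothetical stand-in for xdp_umem_unpin_pages(). */
static void example_unpin(struct page **pgs, u32 npgs)
{
	/*
	 * Old:	for (i = 0; i < npgs; i++) {
	 *		set_page_dirty_lock(pgs[i]);
	 *		put_page(pgs[i]);
	 *	}
	 *
	 * New:
	 */
	put_user_pages_dirty_lock(pgs, npgs, true);
}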