From patchwork Tue May 30 14:16:25 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 1787614 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-cifs-owner@vger.kernel.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=FklVq/qR; dkim-atps=neutral Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by legolas.ozlabs.org (Postfix) with ESMTP id 4QVvf73spPz20Pc for ; Wed, 31 May 2023 00:19:11 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233036AbjE3OTI (ORCPT ); Tue, 30 May 2023 10:19:08 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:59982 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233000AbjE3OSm (ORCPT ); Tue, 30 May 2023 10:18:42 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 2272B135 for ; Tue, 30 May 2023 07:17:22 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1685456226; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=SrEG3Id6KDn/gSf5yv4D68QElxexhX3bro3X0CJ6lE8=; b=FklVq/qRiZBILKfoFLWSaUDVfwxNUgCwbbH6/O2Bfs+9+xxm6b9gx0a641IsGB/QkIZPee OitTjMAy4gE7lxVfKCf3oi4mRyq9t187ROtt0ktFJUNdXaeZmmK8rPENZ1GOoluhvrZ/rz 
t5+NhTC8tL1RVUW/tNEyndbceQgMFmo= Received: from mimecast-mx02.redhat.com (mx3-rdu2.redhat.com [66.187.233.73]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-662-GYtZWPrcPFKGze4E1bwesg-1; Tue, 30 May 2023 10:17:04 -0400 X-MC-Unique: GYtZWPrcPFKGze4E1bwesg-1 Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.rdu2.redhat.com [10.11.54.5]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id BAE3B3C0ED43; Tue, 30 May 2023 14:17:02 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.42.28.182]) by smtp.corp.redhat.com (Postfix) with ESMTP id 46E7917103; Tue, 30 May 2023 14:16:57 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Herbert Xu , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-crypto@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeff Layton , Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH net-next v2 01/10] Drop the netfs_ prefix from netfs_extract_iter_to_sg() Date: Tue, 30 May 2023 15:16:25 +0100 Message-ID: <20230530141635.136968-2-dhowells@redhat.com> In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com> References: <20230530141635.136968-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.5 X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org Rename 
netfs_extract_iter_to_sg() and its auxiliary functions to drop the netfs_ prefix. Signed-off-by: David Howells cc: Jeff Layton cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jens Axboe cc: Herbert Xu cc: "Matthew Wilcox (Oracle)" cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: linux-crypto@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org --- Notes: ver #2: - Put the "netfs_" prefix removal first to shorten lines and avoid checkpatch 80-char warnings. fs/cifs/smb2ops.c | 4 +-- fs/cifs/smbdirect.c | 2 +- fs/netfs/iterator.c | 66 +++++++++++++++++++++---------------------- include/linux/netfs.h | 6 ++-- 4 files changed, 39 insertions(+), 39 deletions(-) diff --git a/fs/cifs/smb2ops.c b/fs/cifs/smb2ops.c index 5065398665f1..2a0e0a7f009c 100644 --- a/fs/cifs/smb2ops.c +++ b/fs/cifs/smb2ops.c @@ -4334,8 +4334,8 @@ static void *smb2_get_aead_req(struct crypto_aead *tfm, struct smb_rqst *rqst, } sgtable.orig_nents = sgtable.nents; - rc = netfs_extract_iter_to_sg(iter, count, &sgtable, - num_sgs - sgtable.nents, 0); + rc = extract_iter_to_sg(iter, count, &sgtable, + num_sgs - sgtable.nents, 0); iov_iter_revert(iter, rc); sgtable.orig_nents = sgtable.nents; } diff --git a/fs/cifs/smbdirect.c b/fs/cifs/smbdirect.c index 0362ebd4fa0f..223e17c16b60 100644 --- a/fs/cifs/smbdirect.c +++ b/fs/cifs/smbdirect.c @@ -2227,7 +2227,7 @@ static int smbd_iter_to_mr(struct smbd_connection *info, memset(sgt->sgl, 0, max_sg * sizeof(struct scatterlist)); - ret = netfs_extract_iter_to_sg(iter, iov_iter_count(iter), sgt, max_sg, 0); + ret = extract_iter_to_sg(iter, iov_iter_count(iter), sgt, max_sg, 0); WARN_ON(ret < 0); if (sgt->nents > 0) sg_mark_end(&sgt->sgl[sgt->nents - 1]); diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index 8a4c86687429..f8eba3de1a97 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -106,11 +106,11 @@ 
EXPORT_SYMBOL_GPL(netfs_extract_user_iter); * Extract and pin a list of up to sg_max pages from UBUF- or IOVEC-class * iterators, and add them to the scatterlist. */ -static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) +static ssize_t extract_user_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) { struct scatterlist *sg = sgtable->sgl + sgtable->nents; struct page **pages; @@ -159,11 +159,11 @@ static ssize_t netfs_extract_user_to_sg(struct iov_iter *iter, * Extract up to sg_max pages from a BVEC-type iterator and add them to the * scatterlist. The pages are not pinned. */ -static ssize_t netfs_extract_bvec_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) +static ssize_t extract_bvec_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) { const struct bio_vec *bv = iter->bvec; struct scatterlist *sg = sgtable->sgl + sgtable->nents; @@ -205,11 +205,11 @@ static ssize_t netfs_extract_bvec_to_sg(struct iov_iter *iter, * scatterlist. This can deal with vmalloc'd buffers as well as kmalloc'd or * static buffers. The pages are not pinned. 
*/ -static ssize_t netfs_extract_kvec_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) +static ssize_t extract_kvec_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) { const struct kvec *kv = iter->kvec; struct scatterlist *sg = sgtable->sgl + sgtable->nents; @@ -266,11 +266,11 @@ static ssize_t netfs_extract_kvec_to_sg(struct iov_iter *iter, * Extract up to sg_max folios from an XARRAY-type iterator and add them to * the scatterlist. The pages are not pinned. */ -static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) +static ssize_t extract_xarray_to_sg(struct iov_iter *iter, + ssize_t maxsize, + struct sg_table *sgtable, + unsigned int sg_max, + iov_iter_extraction_t extraction_flags) { struct scatterlist *sg = sgtable->sgl + sgtable->nents; struct xarray *xa = iter->xarray; @@ -312,7 +312,7 @@ static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter, } /** - * netfs_extract_iter_to_sg - Extract pages from an iterator and add ot an sglist + * extract_iter_to_sg - Extract pages from an iterator and add ot an sglist * @iter: The iterator to extract from * @maxsize: The amount of iterator to copy * @sgtable: The scatterlist table to fill in @@ -339,9 +339,9 @@ static ssize_t netfs_extract_xarray_to_sg(struct iov_iter *iter, * The iov_iter_extract_mode() function should be used to query how cleanup * should be performed. 
*/ -ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, - struct sg_table *sgtable, unsigned int sg_max, - iov_iter_extraction_t extraction_flags) +ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, + struct sg_table *sgtable, unsigned int sg_max, + iov_iter_extraction_t extraction_flags) { if (maxsize == 0) return 0; @@ -349,21 +349,21 @@ ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t maxsize, switch (iov_iter_type(iter)) { case ITER_UBUF: case ITER_IOVEC: - return netfs_extract_user_to_sg(iter, maxsize, sgtable, sg_max, - extraction_flags); + return extract_user_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_BVEC: - return netfs_extract_bvec_to_sg(iter, maxsize, sgtable, sg_max, - extraction_flags); + return extract_bvec_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_KVEC: - return netfs_extract_kvec_to_sg(iter, maxsize, sgtable, sg_max, - extraction_flags); + return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); case ITER_XARRAY: - return netfs_extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, - extraction_flags); + return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max, + extraction_flags); default: pr_err("%s(%u) unsupported\n", __func__, iov_iter_type(iter)); WARN_ON_ONCE(1); return -EIO; } } -EXPORT_SYMBOL_GPL(netfs_extract_iter_to_sg); +EXPORT_SYMBOL_GPL(extract_iter_to_sg); diff --git a/include/linux/netfs.h b/include/linux/netfs.h index a1f3522daa69..55e201c3a841 100644 --- a/include/linux/netfs.h +++ b/include/linux/netfs.h @@ -301,9 +301,9 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len, struct iov_iter *new, iov_iter_extraction_t extraction_flags); struct sg_table; -ssize_t netfs_extract_iter_to_sg(struct iov_iter *iter, size_t len, - struct sg_table *sgtable, unsigned int sg_max, - iov_iter_extraction_t extraction_flags); +ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t len, + struct sg_table 
*sgtable, unsigned int sg_max, + iov_iter_extraction_t extraction_flags); /** * netfs_inode - Get the netfs inode context from the inode From patchwork Tue May 30 14:16:26 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 1787615 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-cifs-owner@vger.kernel.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=anIxK/gH; dkim-atps=neutral Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by legolas.ozlabs.org (Postfix) with ESMTP id 4QVvf96LF4z20Pc for ; Wed, 31 May 2023 00:19:13 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S233136AbjE3OTM (ORCPT ); Tue, 30 May 2023 10:19:12 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60128 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233067AbjE3OSv (ORCPT ); Tue, 30 May 2023 10:18:51 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.129.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 100B4C5 for ; Tue, 30 May 2023 07:17:27 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1685456234; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=AZhuY8lo9alWA9NhmnnpTaWvKv5QvMPV1nDbwBiDKtM=; 
b=anIxK/gHmihgbVJPQIZJwsGJU0T1n6C4Ih7LluiKNBQWRhRafJAgyt106yl3B+OPjGN9Ri 5MJnhlj7jw6EVfcDLn/6zpkOa5J/x8XNTWX8WXq1eJGXeM53GUwCw8tgLf9Pa+VUWy4KEg WKDTQfzsYVGalU13ldnP3Q8nsQ6a82o= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-121-4s_DJpgzMaGbhp4TeKUhEQ-1; Tue, 30 May 2023 10:17:09 -0400 X-MC-Unique: 4s_DJpgzMaGbhp4TeKUhEQ-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 1C7E7858F18; Tue, 30 May 2023 14:17:08 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.42.28.182]) by smtp.corp.redhat.com (Postfix) with ESMTP id 75F0E40CFD46; Tue, 30 May 2023 14:17:04 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Herbert Xu , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-crypto@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Simon Horman , Jeff Layton , Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH net-next v2 02/10] Fix a couple of spelling mistakes Date: Tue, 30 May 2023 15:16:26 +0100 Message-ID: <20230530141635.136968-3-dhowells@redhat.com> In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com> References: <20230530141635.136968-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org Fix a couple of spelling mistakes in a comment. Suggested-by: Simon Horman Link: https://lore.kernel.org/r/ZHH2mSRqeL4Gs1ft@corigine.com/ Link: https://lore.kernel.org/r/ZHH1nqZWOGzxlidT@corigine.com/ Signed-off-by: David Howells cc: Jeff Layton cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jens Axboe cc: Herbert Xu cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Matthew Wilcox cc: linux-crypto@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org Reviewed-by: Simon Horman --- fs/netfs/iterator.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index f8eba3de1a97..f41a37bca1e8 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -312,7 +312,7 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter, } /** - * extract_iter_to_sg - Extract pages from an iterator and add ot an sglist + * extract_iter_to_sg - Extract pages from an iterator and add to an sglist * @iter: The iterator to extract from * @maxsize: The amount of iterator to copy * @sgtable: The scatterlist table to fill in @@ -332,7 +332,7 @@ static ssize_t extract_xarray_to_sg(struct iov_iter *iter, * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA * be allowed on the pages extracted. * - * If successul, @sgtable->nents is updated to include the number of elements + * If successful, @sgtable->nents is updated to include the number of elements * added and the number of bytes added is returned. @sgtable->orig_nents is * left unaltered. 
* From patchwork Tue May 30 14:16:27 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 1787616 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-cifs-owner@vger.kernel.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=e2yeO2q6; dkim-atps=neutral Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by legolas.ozlabs.org (Postfix) with ESMTP id 4QVvfd2360z20Pc for ; Wed, 31 May 2023 00:19:37 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232506AbjE3OTf (ORCPT ); Tue, 30 May 2023 10:19:35 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60006 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233079AbjE3OTA (ORCPT ); Tue, 30 May 2023 10:19:00 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E539F1 for ; Tue, 30 May 2023 07:17:36 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1685456249; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=8XAsj4uxJsU5nzHkgzshZ8+UJL6y54h/fBUZHPzhfR8=; b=e2yeO2q6XAptKku9CYwvPXXug6z0McoeUJAdUiTqKPp+fU0oc4S1JdjCczAkRYHMxp2naV 9+foIw3Zt/kCLOInt4I8R9+8tPgT7eTk5OSovs4fNYIdWKYxF4t0tMBJ7Jbl6vbUsLHQE0 
IY8fPuMXrBuwQ79Jj8P6S3psWRvN8R4= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-444-uIi8tBz6PMukM1ed44bcvg-1; Tue, 30 May 2023 10:17:26 -0400 X-MC-Unique: uIi8tBz6PMukM1ed44bcvg-1 Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com [10.11.54.1]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 239E9803D78; Tue, 30 May 2023 14:17:13 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.42.28.182]) by smtp.corp.redhat.com (Postfix) with ESMTP id 230E640CFD46; Tue, 30 May 2023 14:17:08 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Herbert Xu , "David S. Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-crypto@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeff Layton , Steve French , Shyam Prasad N , Rohith Surabattula , Simon Horman , linux-cachefs@redhat.com, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH net-next v2 03/10] Wrap lines at 80 Date: Tue, 30 May 2023 15:16:27 +0100 Message-ID: <20230530141635.136968-4-dhowells@redhat.com> In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com> References: <20230530141635.136968-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.1 X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=ham autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org Wrap a line at 80 to stop checkpatch 
complaining. Signed-off-by: David Howells cc: Jeff Layton cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jens Axboe cc: Herbert Xu cc: "David S. Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Matthew Wilcox cc: Simon Horman cc: linux-crypto@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org --- fs/netfs/iterator.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index f41a37bca1e8..9f09dc30ceb6 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -119,7 +119,8 @@ static ssize_t extract_user_to_sg(struct iov_iter *iter, size_t len, off; /* We decant the page list into the tail of the scatterlist */ - pages = (void *)sgtable->sgl + array_size(sg_max, sizeof(struct scatterlist)); + pages = (void *)sgtable->sgl + + array_size(sg_max, sizeof(struct scatterlist)); pages -= sg_max; do { From patchwork Tue May 30 14:16:28 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: David Howells X-Patchwork-Id: 1787617 Return-Path: X-Original-To: incoming@patchwork.ozlabs.org Delivered-To: patchwork-incoming@legolas.ozlabs.org Authentication-Results: legolas.ozlabs.org; spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org (client-ip=2620:137:e000::1:20; helo=out1.vger.email; envelope-from=linux-cifs-owner@vger.kernel.org; receiver=) Authentication-Results: legolas.ozlabs.org; dkim=pass (1024-bit key; unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256 header.s=mimecast20190719 header.b=FgjGJK8h; dkim-atps=neutral Received: from out1.vger.email (out1.vger.email [IPv6:2620:137:e000::1:20]) by legolas.ozlabs.org (Postfix) with ESMTP id 4QVvgR09dxz20Q4 for ; Wed, 31 May 2023 00:20:19 +1000 (AEST) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S232500AbjE3OUQ (ORCPT ); Tue, 
30 May 2023 10:20:16 -0400 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:60000 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S233102AbjE3OTC (ORCPT ); Tue, 30 May 2023 10:19:02 -0400 Received: from us-smtp-delivery-124.mimecast.com (us-smtp-delivery-124.mimecast.com [170.10.133.124]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 3617D19F for ; Tue, 30 May 2023 07:17:37 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com; s=mimecast20190719; t=1685456250; h=from:from:reply-to:subject:subject:date:date:message-id:message-id: to:to:cc:cc:mime-version:mime-version: content-transfer-encoding:content-transfer-encoding: in-reply-to:in-reply-to:references:references; bh=VjV1Ciimn6pzCandpdZtjXJYCOMjJSyNy1/qiT7b1Ng=; b=FgjGJK8hy3/xLVipNHJsx3pOMZwIYK54Iid0K82Q+okII+A10WtiWhtgvJ+G6GqRscYotx rkaDMBf1R6/lh47B796hK463hB/Ibq7i3NzwOjIYxF35M+cbX9Shis1a40isBVcek0XXDr JcnNaHyaKIexHak352W3FY/XcIWA2u0= Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id us-mta-530-Ueeskx28ORyWEErMR7otWg-1; Tue, 30 May 2023 10:17:25 -0400 X-MC-Unique: Ueeskx28ORyWEErMR7otWg-1 Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com [10.11.54.2]) (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits)) (No client certificate requested) by mimecast-mx02.redhat.com (Postfix) with ESMTPS id E901F8030CF; Tue, 30 May 2023 14:17:21 +0000 (UTC) Received: from warthog.procyon.org.uk (unknown [10.42.28.182]) by smtp.corp.redhat.com (Postfix) with ESMTP id 517B640C6EC4; Tue, 30 May 2023 14:17:15 +0000 (UTC) From: David Howells To: netdev@vger.kernel.org Cc: David Howells , Herbert Xu , "David S. 
Miller" , Eric Dumazet , Jakub Kicinski , Paolo Abeni , Willem de Bruijn , David Ahern , Matthew Wilcox , Jens Axboe , linux-crypto@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, Jeff Layton , Steve French , Shyam Prasad N , Rohith Surabattula , linux-cachefs@redhat.com, linux-cifs@vger.kernel.org, linux-fsdevel@vger.kernel.org Subject: [PATCH net-next v2 04/10] Move netfs_extract_iter_to_sg() to lib/scatterlist.c Date: Tue, 30 May 2023 15:16:28 +0100 Message-ID: <20230530141635.136968-5-dhowells@redhat.com> In-Reply-To: <20230530141635.136968-1-dhowells@redhat.com> References: <20230530141635.136968-1-dhowells@redhat.com> MIME-Version: 1.0 X-Scanned-By: MIMEDefang 3.1 on 10.11.54.2 X-Spam-Status: No, score=-2.3 required=5.0 tests=BAYES_00,DKIMWL_WL_HIGH, DKIM_SIGNED,DKIM_VALID,DKIM_VALID_AU,DKIM_VALID_EF,RCVD_IN_DNSWL_NONE, SPF_HELO_NONE,SPF_NONE,T_SCC_BODY_TEXT_LINE autolearn=unavailable autolearn_force=no version=3.4.6 X-Spam-Checker-Version: SpamAssassin 3.4.6 (2021-04-09) on lindbergh.monkeyblade.net Precedence: bulk List-ID: X-Mailing-List: linux-cifs@vger.kernel.org Move netfs_extract_iter_to_sg() to lib/scatterlist.c as it's going to be used by more than just network filesystems (AF_ALG, for example). Signed-off-by: David Howells cc: Jeff Layton cc: Steve French cc: Shyam Prasad N cc: Rohith Surabattula cc: Jens Axboe cc: Herbert Xu cc: "David S. 
Miller" cc: Eric Dumazet cc: Jakub Kicinski cc: Paolo Abeni cc: Matthew Wilcox cc: linux-crypto@vger.kernel.org cc: linux-cachefs@redhat.com cc: linux-cifs@vger.kernel.org cc: linux-fsdevel@vger.kernel.org cc: netdev@vger.kernel.org --- fs/netfs/iterator.c | 267 ----------------------------------------- include/linux/netfs.h | 4 - include/linux/uio.h | 5 + lib/scatterlist.c | 269 ++++++++++++++++++++++++++++++++++++++++++ 4 files changed, 274 insertions(+), 271 deletions(-) diff --git a/fs/netfs/iterator.c b/fs/netfs/iterator.c index 9f09dc30ceb6..2ff07ba655a0 100644 --- a/fs/netfs/iterator.c +++ b/fs/netfs/iterator.c @@ -101,270 +101,3 @@ ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len, return npages; } EXPORT_SYMBOL_GPL(netfs_extract_user_iter); - -/* - * Extract and pin a list of up to sg_max pages from UBUF- or IOVEC-class - * iterators, and add them to the scatterlist. - */ -static ssize_t extract_user_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) -{ - struct scatterlist *sg = sgtable->sgl + sgtable->nents; - struct page **pages; - unsigned int npages; - ssize_t ret = 0, res; - size_t len, off; - - /* We decant the page list into the tail of the scatterlist */ - pages = (void *)sgtable->sgl + - array_size(sg_max, sizeof(struct scatterlist)); - pages -= sg_max; - - do { - res = iov_iter_extract_pages(iter, &pages, maxsize, sg_max, - extraction_flags, &off); - if (res < 0) - goto failed; - - len = res; - maxsize -= len; - ret += len; - npages = DIV_ROUND_UP(off + len, PAGE_SIZE); - sg_max -= npages; - - for (; npages > 0; npages--) { - struct page *page = *pages; - size_t seg = min_t(size_t, PAGE_SIZE - off, len); - - *pages++ = NULL; - sg_set_page(sg, page, seg, off); - sgtable->nents++; - sg++; - len -= seg; - off = 0; - } - } while (maxsize > 0 && sg_max > 0); - - return ret; - -failed: - while (sgtable->nents > sgtable->orig_nents) - 
put_page(sg_page(&sgtable->sgl[--sgtable->nents])); - return res; -} - -/* - * Extract up to sg_max pages from a BVEC-type iterator and add them to the - * scatterlist. The pages are not pinned. - */ -static ssize_t extract_bvec_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) -{ - const struct bio_vec *bv = iter->bvec; - struct scatterlist *sg = sgtable->sgl + sgtable->nents; - unsigned long start = iter->iov_offset; - unsigned int i; - ssize_t ret = 0; - - for (i = 0; i < iter->nr_segs; i++) { - size_t off, len; - - len = bv[i].bv_len; - if (start >= len) { - start -= len; - continue; - } - - len = min_t(size_t, maxsize, len - start); - off = bv[i].bv_offset + start; - - sg_set_page(sg, bv[i].bv_page, len, off); - sgtable->nents++; - sg++; - sg_max--; - - ret += len; - maxsize -= len; - if (maxsize <= 0 || sg_max == 0) - break; - start = 0; - } - - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - -/* - * Extract up to sg_max pages from a KVEC-type iterator and add them to the - * scatterlist. This can deal with vmalloc'd buffers as well as kmalloc'd or - * static buffers. The pages are not pinned. 
- */ -static ssize_t extract_kvec_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) -{ - const struct kvec *kv = iter->kvec; - struct scatterlist *sg = sgtable->sgl + sgtable->nents; - unsigned long start = iter->iov_offset; - unsigned int i; - ssize_t ret = 0; - - for (i = 0; i < iter->nr_segs; i++) { - struct page *page; - unsigned long kaddr; - size_t off, len, seg; - - len = kv[i].iov_len; - if (start >= len) { - start -= len; - continue; - } - - kaddr = (unsigned long)kv[i].iov_base + start; - off = kaddr & ~PAGE_MASK; - len = min_t(size_t, maxsize, len - start); - kaddr &= PAGE_MASK; - - maxsize -= len; - ret += len; - do { - seg = min_t(size_t, len, PAGE_SIZE - off); - if (is_vmalloc_or_module_addr((void *)kaddr)) - page = vmalloc_to_page((void *)kaddr); - else - page = virt_to_page(kaddr); - - sg_set_page(sg, page, len, off); - sgtable->nents++; - sg++; - sg_max--; - - len -= seg; - kaddr += PAGE_SIZE; - off = 0; - } while (len > 0 && sg_max > 0); - - if (maxsize <= 0 || sg_max == 0) - break; - start = 0; - } - - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - -/* - * Extract up to sg_max folios from an XARRAY-type iterator and add them to - * the scatterlist. The pages are not pinned. 
- */ -static ssize_t extract_xarray_to_sg(struct iov_iter *iter, - ssize_t maxsize, - struct sg_table *sgtable, - unsigned int sg_max, - iov_iter_extraction_t extraction_flags) -{ - struct scatterlist *sg = sgtable->sgl + sgtable->nents; - struct xarray *xa = iter->xarray; - struct folio *folio; - loff_t start = iter->xarray_start + iter->iov_offset; - pgoff_t index = start / PAGE_SIZE; - ssize_t ret = 0; - size_t offset, len; - XA_STATE(xas, xa, index); - - rcu_read_lock(); - - xas_for_each(&xas, folio, ULONG_MAX) { - if (xas_retry(&xas, folio)) - continue; - if (WARN_ON(xa_is_value(folio))) - break; - if (WARN_ON(folio_test_hugetlb(folio))) - break; - - offset = offset_in_folio(folio, start); - len = min_t(size_t, maxsize, folio_size(folio) - offset); - - sg_set_page(sg, folio_page(folio, 0), len, offset); - sgtable->nents++; - sg++; - sg_max--; - - maxsize -= len; - ret += len; - if (maxsize <= 0 || sg_max == 0) - break; - } - - rcu_read_unlock(); - if (ret > 0) - iov_iter_advance(iter, ret); - return ret; -} - -/** - * extract_iter_to_sg - Extract pages from an iterator and add to an sglist - * @iter: The iterator to extract from - * @maxsize: The amount of iterator to copy - * @sgtable: The scatterlist table to fill in - * @sg_max: Maximum number of elements in @sgtable that may be filled - * @extraction_flags: Flags to qualify the request - * - * Extract the page fragments from the given amount of the source iterator and - * add them to a scatterlist that refers to all of those bits, to a maximum - * addition of @sg_max elements. - * - * The pages referred to by UBUF- and IOVEC-type iterators are extracted and - * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE- - * and DISCARD-type are not supported. - * - * No end mark is placed on the scatterlist; that's left to the caller. - * - * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA - * be allowed on the pages extracted. 
- *
- * If successful, @sgtable->nents is updated to include the number of elements
- * added and the number of bytes added is returned. @sgtable->orig_nents is
- * left unaltered.
- *
- * The iov_iter_extract_mode() function should be used to query how cleanup
- * should be performed.
- */
-ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
-			   struct sg_table *sgtable, unsigned int sg_max,
-			   iov_iter_extraction_t extraction_flags)
-{
-	if (maxsize == 0)
-		return 0;
-
-	switch (iov_iter_type(iter)) {
-	case ITER_UBUF:
-	case ITER_IOVEC:
-		return extract_user_to_sg(iter, maxsize, sgtable, sg_max,
-					  extraction_flags);
-	case ITER_BVEC:
-		return extract_bvec_to_sg(iter, maxsize, sgtable, sg_max,
-					  extraction_flags);
-	case ITER_KVEC:
-		return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
-					  extraction_flags);
-	case ITER_XARRAY:
-		return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
-					    extraction_flags);
-	default:
-		pr_err("%s(%u) unsupported\n", __func__, iov_iter_type(iter));
-		WARN_ON_ONCE(1);
-		return -EIO;
-	}
-}
-EXPORT_SYMBOL_GPL(extract_iter_to_sg);
diff --git a/include/linux/netfs.h b/include/linux/netfs.h
index 55e201c3a841..b11a84f6c32b 100644
--- a/include/linux/netfs.h
+++ b/include/linux/netfs.h
@@ -300,10 +300,6 @@ void netfs_stats_show(struct seq_file *);
 ssize_t netfs_extract_user_iter(struct iov_iter *orig, size_t orig_len,
				struct iov_iter *new,
				iov_iter_extraction_t extraction_flags);
-struct sg_table;
-ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t len,
-			   struct sg_table *sgtable, unsigned int sg_max,
-			   iov_iter_extraction_t extraction_flags);
 
 /**
  * netfs_inode - Get the netfs inode context from the inode
diff --git a/include/linux/uio.h b/include/linux/uio.h
index 044c1d8c230c..0ccb983cf645 100644
--- a/include/linux/uio.h
+++ b/include/linux/uio.h
@@ -433,4 +433,9 @@ static inline bool iov_iter_extract_will_pin(const struct iov_iter *iter)
	return user_backed_iter(iter);
 }
 
+struct sg_table;
+ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t len,
+			   struct sg_table *sgtable, unsigned int sg_max,
+			   iov_iter_extraction_t extraction_flags);
+
 #endif
diff --git a/lib/scatterlist.c b/lib/scatterlist.c
index 8d7519a8f308..e97d7060329e 100644
--- a/lib/scatterlist.c
+++ b/lib/scatterlist.c
@@ -9,6 +9,8 @@
 #include
 #include
 #include
+#include
+#include
 
 /**
  * sg_next - return the next scatterlist entry in a list
@@ -1095,3 +1097,270 @@ size_t sg_zero_buffer(struct scatterlist *sgl, unsigned int nents,
	return offset;
 }
 EXPORT_SYMBOL(sg_zero_buffer);
+
+/*
+ * Extract and pin a list of up to sg_max pages from UBUF- or IOVEC-class
+ * iterators, and add them to the scatterlist.
+ */
+static ssize_t extract_user_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
+{
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	struct page **pages;
+	unsigned int npages;
+	ssize_t ret = 0, res;
+	size_t len, off;
+
+	/* We decant the page list into the tail of the scatterlist */
+	pages = (void *)sgtable->sgl +
+		array_size(sg_max, sizeof(struct scatterlist));
+	pages -= sg_max;
+
+	do {
+		res = iov_iter_extract_pages(iter, &pages, maxsize, sg_max,
+					     extraction_flags, &off);
+		if (res < 0)
+			goto failed;
+
+		len = res;
+		maxsize -= len;
+		ret += len;
+		npages = DIV_ROUND_UP(off + len, PAGE_SIZE);
+		sg_max -= npages;
+
+		for (; npages > 0; npages--) {
+			struct page *page = *pages;
+			size_t seg = min_t(size_t, PAGE_SIZE - off, len);
+
+			*pages++ = NULL;
+			sg_set_page(sg, page, seg, off);
+			sgtable->nents++;
+			sg++;
+			len -= seg;
+			off = 0;
+		}
+	} while (maxsize > 0 && sg_max > 0);
+
+	return ret;
+
+failed:
+	while (sgtable->nents > sgtable->orig_nents)
+		put_page(sg_page(&sgtable->sgl[--sgtable->nents]));
+	return res;
+}
+
+/*
+ * Extract up to sg_max pages from a BVEC-type iterator and add them to the
+ * scatterlist. The pages are not pinned.
+ */
+static ssize_t extract_bvec_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
+{
+	const struct bio_vec *bv = iter->bvec;
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	unsigned long start = iter->iov_offset;
+	unsigned int i;
+	ssize_t ret = 0;
+
+	for (i = 0; i < iter->nr_segs; i++) {
+		size_t off, len;
+
+		len = bv[i].bv_len;
+		if (start >= len) {
+			start -= len;
+			continue;
+		}
+
+		len = min_t(size_t, maxsize, len - start);
+		off = bv[i].bv_offset + start;
+
+		sg_set_page(sg, bv[i].bv_page, len, off);
+		sgtable->nents++;
+		sg++;
+		sg_max--;
+
+		ret += len;
+		maxsize -= len;
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+		start = 0;
+	}
+
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	return ret;
+}
+
+/*
+ * Extract up to sg_max pages from a KVEC-type iterator and add them to the
+ * scatterlist. This can deal with vmalloc'd buffers as well as kmalloc'd or
+ * static buffers. The pages are not pinned.
+ */
+static ssize_t extract_kvec_to_sg(struct iov_iter *iter,
+				  ssize_t maxsize,
+				  struct sg_table *sgtable,
+				  unsigned int sg_max,
+				  iov_iter_extraction_t extraction_flags)
+{
+	const struct kvec *kv = iter->kvec;
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	unsigned long start = iter->iov_offset;
+	unsigned int i;
+	ssize_t ret = 0;
+
+	for (i = 0; i < iter->nr_segs; i++) {
+		struct page *page;
+		unsigned long kaddr;
+		size_t off, len, seg;
+
+		len = kv[i].iov_len;
+		if (start >= len) {
+			start -= len;
+			continue;
+		}
+
+		kaddr = (unsigned long)kv[i].iov_base + start;
+		off = kaddr & ~PAGE_MASK;
+		len = min_t(size_t, maxsize, len - start);
+		kaddr &= PAGE_MASK;
+
+		maxsize -= len;
+		ret += len;
+		do {
+			seg = min_t(size_t, len, PAGE_SIZE - off);
+			if (is_vmalloc_or_module_addr((void *)kaddr))
+				page = vmalloc_to_page((void *)kaddr);
+			else
+				page = virt_to_page(kaddr);
+
+			sg_set_page(sg, page, len, off);
+			sgtable->nents++;
+			sg++;
+			sg_max--;
+
+			len -= seg;
+			kaddr += PAGE_SIZE;
+			off = 0;
+		} while (len > 0 && sg_max > 0);
+
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+		start = 0;
+	}
+
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	return ret;
+}
+
+/*
+ * Extract up to sg_max folios from an XARRAY-type iterator and add them to
+ * the scatterlist. The pages are not pinned.
+ */
+static ssize_t extract_xarray_to_sg(struct iov_iter *iter,
+				    ssize_t maxsize,
+				    struct sg_table *sgtable,
+				    unsigned int sg_max,
+				    iov_iter_extraction_t extraction_flags)
+{
+	struct scatterlist *sg = sgtable->sgl + sgtable->nents;
+	struct xarray *xa = iter->xarray;
+	struct folio *folio;
+	loff_t start = iter->xarray_start + iter->iov_offset;
+	pgoff_t index = start / PAGE_SIZE;
+	ssize_t ret = 0;
+	size_t offset, len;
+	XA_STATE(xas, xa, index);
+
+	rcu_read_lock();
+
+	xas_for_each(&xas, folio, ULONG_MAX) {
+		if (xas_retry(&xas, folio))
+			continue;
+		if (WARN_ON(xa_is_value(folio)))
+			break;
+		if (WARN_ON(folio_test_hugetlb(folio)))
+			break;
+
+		offset = offset_in_folio(folio, start);
+		len = min_t(size_t, maxsize, folio_size(folio) - offset);
+
+		sg_set_page(sg, folio_page(folio, 0), len, offset);
+		sgtable->nents++;
+		sg++;
+		sg_max--;
+
+		maxsize -= len;
+		ret += len;
+		if (maxsize <= 0 || sg_max == 0)
+			break;
+	}
+
+	rcu_read_unlock();
+	if (ret > 0)
+		iov_iter_advance(iter, ret);
+	return ret;
+}
+
+/**
+ * extract_iter_to_sg - Extract pages from an iterator and add to an sglist
+ * @iter: The iterator to extract from
+ * @maxsize: The amount of iterator to copy
+ * @sgtable: The scatterlist table to fill in
+ * @sg_max: Maximum number of elements in @sgtable that may be filled
+ * @extraction_flags: Flags to qualify the request
+ *
+ * Extract the page fragments from the given amount of the source iterator and
+ * add them to a scatterlist that refers to all of those bits, to a maximum
+ * addition of @sg_max elements.
+ *
+ * The pages referred to by UBUF- and IOVEC-type iterators are extracted and
+ * pinned; BVEC-, KVEC- and XARRAY-type are extracted but aren't pinned; PIPE-
+ * and DISCARD-type are not supported.
+ *
+ * No end mark is placed on the scatterlist; that's left to the caller.
+ *
+ * @extraction_flags can have ITER_ALLOW_P2PDMA set to request peer-to-peer DMA
+ * be allowed on the pages extracted.
+ *
+ * If successful, @sgtable->nents is updated to include the number of elements
+ * added and the number of bytes added is returned. @sgtable->orig_nents is
+ * left unaltered.
+ *
+ * The iov_iter_extract_mode() function should be used to query how cleanup
+ * should be performed.
+ */
+ssize_t extract_iter_to_sg(struct iov_iter *iter, size_t maxsize,
+			   struct sg_table *sgtable, unsigned int sg_max,
+			   iov_iter_extraction_t extraction_flags)
+{
+	if (maxsize == 0)
+		return 0;
+
+	switch (iov_iter_type(iter)) {
+	case ITER_UBUF:
+	case ITER_IOVEC:
+		return extract_user_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
+	case ITER_BVEC:
+		return extract_bvec_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
+	case ITER_KVEC:
+		return extract_kvec_to_sg(iter, maxsize, sgtable, sg_max,
+					  extraction_flags);
+	case ITER_XARRAY:
+		return extract_xarray_to_sg(iter, maxsize, sgtable, sg_max,
+					    extraction_flags);
+	default:
+		pr_err("%s(%u) unsupported\n", __func__, iov_iter_type(iter));
+		WARN_ON_ONCE(1);
+		return -EIO;
+	}
+}
+EXPORT_SYMBOL_GPL(extract_iter_to_sg);