Patch Detail
GET: Show a patch.
PATCH: Update a patch (partial update).
PUT: Update a patch (full update).
GET /api/patches/2216355/?format=api
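As a quick usage sketch (not part of the original page), the record below can be fetched programmatically. The endpoint is the one shown above; the ?format=api suffix only selects Patchwork's browsable HTML view, and a client that requests JSON should get the same document. The field names used here (name, state, mbox) are taken from the response below; error handling is omitted for brevity.

# Minimal sketch: fetch this patch record from the Patchwork REST API.
import json
from urllib.request import urlopen

url = "http://patchwork.ozlabs.org/api/patches/2216355/"
with urlopen(url) as resp:       # urllib does not ask for HTML,
    patch = json.load(resp)      # so the API serves plain JSON

print(patch["name"])   # "[18/26] netfs: Switch to using bvecq ..."
print(patch["state"])  # "new"
print(patch["mbox"])   # mbox URL, suitable for piping into `git am`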
{ "id": 2216355, "url": "http://patchwork.ozlabs.org/api/patches/2216355/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260326104544.509518-19-dhowells@redhat.com/", "project": { "id": 12, "url": "http://patchwork.ozlabs.org/api/projects/12/?format=api", "name": "Linux CIFS Client", "link_name": "linux-cifs-client", "list_id": "linux-cifs.vger.kernel.org", "list_email": "linux-cifs@vger.kernel.org", "web_url": "", "scm_url": "", "webscm_url": "", "list_archive_url": "", "list_archive_url_format": "", "commit_url_format": "" }, "msgid": "<20260326104544.509518-19-dhowells@redhat.com>", "list_archive_url": null, "date": "2026-03-26T10:45:33", "name": "[18/26] netfs: Switch to using bvecq rather than folio_queue and rolling_buffer", "commit_ref": null, "pull_url": null, "state": "new", "archived": false, "hash": "79af90b72989df330c761bb944e5f06c59278084", "submitter": { "id": 59, "url": "http://patchwork.ozlabs.org/api/people/59/?format=api", "name": "David Howells", "email": "dhowells@redhat.com" }, "delegate": null, "mbox": "http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260326104544.509518-19-dhowells@redhat.com/mbox/", "series": [ { "id": 497565, "url": "http://patchwork.ozlabs.org/api/series/497565/?format=api", "web_url": "http://patchwork.ozlabs.org/project/linux-cifs-client/list/?series=497565", "date": "2026-03-26T10:45:15", "name": "netfs: Keep track of folios in a segmented bio_vec[] chain", "version": 1, "mbox": "http://patchwork.ozlabs.org/series/497565/mbox/" } ], "comments": "http://patchwork.ozlabs.org/api/patches/2216355/comments/", "check": "pending", "checks": "http://patchwork.ozlabs.org/api/patches/2216355/checks/", "tags": {}, "related": [],
"headers": { "From": "David Howells <dhowells@redhat.com>", "To": "Christian Brauner <christian@brauner.io>,\n\tMatthew Wilcox <willy@infradead.org>,\n\tChristoph Hellwig <hch@infradead.org>", "Cc": "David Howells <dhowells@redhat.com>,\n\tPaulo Alcantara <pc@manguebit.com>,\n\tJens Axboe <axboe@kernel.dk>,\n\tLeon Romanovsky <leon@kernel.org>,\n\tSteve French <sfrench@samba.org>,\n\tChenXiaoSong <chenxiaosong@chenxiaosong.com>,\n\tMarc Dionne <marc.dionne@auristor.com>,\n\tEric Van Hensbergen <ericvh@kernel.org>,\n\tDominique Martinet <asmadeus@codewreck.org>,\n\tIlya Dryomov <idryomov@gmail.com>,\n\tTrond Myklebust <trondmy@kernel.org>,\n\tnetfs@lists.linux.dev,\n\tlinux-afs@lists.infradead.org,\n\tlinux-cifs@vger.kernel.org,\n\tlinux-nfs@vger.kernel.org,\n\tceph-devel@vger.kernel.org,\n\tv9fs@lists.linux.dev,\n\tlinux-erofs@lists.ozlabs.org,\n\tlinux-fsdevel@vger.kernel.org,\n\tlinux-kernel@vger.kernel.org,\n\tPaulo Alcantara <pc@manguebit.org>,\n\tShyam Prasad N <sprasad@microsoft.com>,\n\tTom Talpey <tom@talpey.com>", "Subject": "[PATCH 18/26] netfs: Switch to using bvecq rather than folio_queue\n and rolling_buffer", "Date": "Thu, 26 Mar 2026 10:45:33 +0000", "Message-ID": "<20260326104544.509518-19-dhowells@redhat.com>", "In-Reply-To": "<20260326104544.509518-1-dhowells@redhat.com>", "References": "<20260326104544.509518-1-dhowells@redhat.com>", "List-Id": "<linux-cifs.vger.kernel.org>", "MIME-Version": "1.0", "Content-Transfer-Encoding": "8bit" },
"content": "Switch netfslib to using bvecq, a segmented bio_vec[] queue, instead of the\nfolio_queue and rolling_buffer constructs, to keep track of the regions of\nmemory it is performing I/O upon. Each bvecq struct in the chain is marked\nwith the starting file position of that sequence so that discontiguities\ncan be handled (the contents of each individual bvecq must be contiguous).\n\nFor unbuffered/direct I/O, the iterator is extracted into the queue up\nfront. For buffered I/O, the folios are added to the queue as the\noperation proceeds, much as it does now with folio_queues. There is one\nimportant change for buffered writes: only the relevant part of the folio\nis included; this is expanded for writes to the cache in a copy of the\nbvecq segment (it is known that each bio_vec corresponds to part of a\nfolio in this case).\n\nThe bvecq structs are marked with information as to how the regions\ncontained therein should be disposed of (unlock-only, free, unpin).\n\nWhen setting up a subrequest, netfslib will furnish it with a slice of the\nmain buffer queue as a pointer to the starting bvecq, slot and offset and, for\nthe moment, an ITER_BVECQ iterator is set to cover the slice in\nsubreq->io_iter.\n\nNotes on the implementation:\n\n (1) This patch uses the concept of a 'bvecq position', which is a tuple of\n { bvecq, slot, offset }. This is lighter weight than using a full\n iov_iter, though that would also suffice. If not NULL, the position\n also holds a reference on the bvecq it is pointing to. This is\n probably overkill as only the hindmost position (that of collection)\n needs to hold a reference.\n\n (2) There are three positions on the netfs_io_request struct. 
Not all are\n used by every request type.\n\n Firstly, there's ->load_cursor, which is used by buffered read and\n write to point to the next slot to have a folio inserted into it\n (either loaded from the readahead_control or from writeback_iter()).\n\n Secondly, there's ->dispatch_cursor, which is used to provide the\n position in the buffer from which we start dispatching a subrequest.\n\n Thirdly, there's the ->collect_cursor, which is used by the collection\n routines to point to the next memory region to be cleaned up.\n\n (3) There are two positions on the netfs_io_subrequest struct.\n\n Firstly, there's ->dispatch_pos, which indicates the position from\n which a subrequest's buffer begins. This is used as the base of the\n position from which to retry (advanced by ->transferred).\n\n Secondly, there's ->content, which is normally the same as\n ->dispatch_pos, but if the bvecq chain got duplicated or the content\n got copied, then this will point to that and that will be\n disposed of on retry.\n\n (4) Maintenance of the position structs is done with helper functions,\n such as bvecq_pos_attach() to hide the refcounting.\n\n (5) When sending a write to the cache, the bvecq will be duplicated and\n the ends rounded up/down to the backing file's DIO block alignment.\n\n (6) bvecq_slice() is used to select a slice of the source buffer and assign\n it to a subrequest. The source buffer position is advanced.\n\n (7) netfs_extract_iter() is used by unbuffered/direct I/O API functions to\n decant a chunk of the iov_iter supplied by the VFS into a bvecq chain\n - and to label the bvecqs with appropriate disposal information\n (e.g. unpin, free, nothing).\n\nThere are further options that can be explored in the future:\n\n (1) Allow the provision of a duplicated bvecq chain for just that region\n so that the filesystem can add bits on either end (such as adding\n protocol headers and trailers and gluing several things together into\n a compound operation).\n\n (2) If a filesystem supports vectored/sparse read and write ops, it can be\n given a chain with discontiguities in it to perform in a single op\n (Ceph, for example, can do this).\n\n (3) Because each bvecq notes the start file position of the regions\n contained therein, there's no need to translate the info in the\n bio_vec into folio pointers in order to unlock the page after I/O.\n Instead, the inode's pagecache can be iterated over and the xarray\n marks cleared en masse.\n\n (4) Make MSG_SPLICE_PAGES handling read the disposal info in the bvecq and\n use that to indicate how it should get rid of the stuff it pasted into\n a sk_buff.\n\n (5) If a bounce buffer is needed (encryption, for example), the bounce\n buffer can be held in a bvecq and sliced up instead of the main buffer\n queue.\n\n (6) Get rid of subreq->io_iter and move the iov_iter stuff down into the\n filesystem. 
The I/O iterators are normally only needed transitorily,\n and the one currently in netfs_io_subrequest is unnecessary most of\n the time.\n\nfolio_queue and rolling_buffer will be removed in a follow up patch.\n\nSigned-off-by: David Howells <dhowells@redhat.com>\ncc: Paulo Alcantara <pc@manguebit.org>\ncc: Matthew Wilcox <willy@infradead.org>\ncc: Christoph Hellwig <hch@infradead.org>\ncc: Steve French <sfrench@samba.org>\ncc: Shyam Prasad N <sprasad@microsoft.com>\ncc: Tom Talpey <tom@talpey.com>\ncc: linux-cifs@vger.kernel.org\ncc: netfs@lists.linux.dev\ncc: linux-fsdevel@vger.kernel.org\n---\n fs/cachefiles/io.c | 12 ---\n fs/netfs/Makefile | 1 -\n fs/netfs/buffered_read.c | 115 ++++++++++++----------\n fs/netfs/direct_read.c | 73 +++++---------\n fs/netfs/direct_write.c | 86 +++++++++--------\n fs/netfs/internal.h | 10 +-\n fs/netfs/iterator.c | 2 +\n fs/netfs/misc.c | 20 +---\n fs/netfs/objects.c | 16 +---\n fs/netfs/read_collect.c | 83 +++++++++-------\n fs/netfs/read_pgpriv2.c | 68 +++++++++----\n fs/netfs/read_retry.c | 80 +++++++++-------\n fs/netfs/read_single.c | 12 ++-\n fs/netfs/stats.c | 4 +-\n fs/netfs/write_collect.c | 40 ++++----\n fs/netfs/write_issue.c | 180 ++++++++++++++++++++++++++---------\n fs/netfs/write_retry.c | 45 +++++----\n include/linux/netfs.h | 24 ++---\n include/trace/events/netfs.h | 46 ++++-----\n 19 files changed, 520 insertions(+), 397 deletions(-)", "diff": "diff --git a/fs/cachefiles/io.c b/fs/cachefiles/io.c\nindex b5ff75697b3e..2af55a75b511 100644\n--- a/fs/cachefiles/io.c\n+++ b/fs/cachefiles/io.c\n@@ -659,7 +659,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)\n \tstruct netfs_cache_resources *cres = &wreq->cache_resources;\n \tstruct cachefiles_object *object = cachefiles_cres_object(cres);\n \tstruct cachefiles_cache *cache = object->volume->cache;\n-\tstruct netfs_io_stream *stream = &wreq->io_streams[subreq->stream_nr];\n \tconst struct cred *saved_cred;\n \tsize_t off, pre, post, len = subreq->len;\n \tloff_t start = subreq->start;\n@@ -684,17 +683,6 @@ static void cachefiles_issue_write(struct netfs_io_subrequest *subreq)\n \t}\n \n \t/* We also need to end on the cache granularity boundary */\n-\tif (start + len == wreq->i_size) {\n-\t\tsize_t part = len & (cache->bsize - 1);\n-\t\tsize_t need = cache->bsize - part;\n-\n-\t\tif (part && stream->submit_extendable_to >= need) {\n-\t\t\tlen += need;\n-\t\t\tsubreq->len += need;\n-\t\t\tsubreq->io_iter.count += need;\n-\t\t}\n-\t}\n-\n \tpost = len & (cache->bsize - 1);\n \tif (post) {\n \t\tlen -= post;\ndiff --git a/fs/netfs/Makefile b/fs/netfs/Makefile\nindex e1f12ecb5abf..0621e6870cbd 100644\n--- a/fs/netfs/Makefile\n+++ b/fs/netfs/Makefile\n@@ -15,7 +15,6 @@ netfs-y := \\\n \tread_pgpriv2.o \\\n \tread_retry.o \\\n \tread_single.o \\\n-\trolling_buffer.o \\\n \twrite_collect.o \\\n \twrite_issue.o \\\n \twrite_retry.o\ndiff --git a/fs/netfs/buffered_read.c b/fs/netfs/buffered_read.c\nindex abdc990faaa2..2cfd33abfb80 100644\n--- a/fs/netfs/buffered_read.c\n+++ b/fs/netfs/buffered_read.c\n@@ -112,26 +112,21 @@ static int netfs_begin_cache_read(struct netfs_io_request *rreq, struct netfs_in\n static ssize_t netfs_prepare_read_iterator(struct netfs_io_subrequest *subreq)\n {\n \tstruct netfs_io_request *rreq = subreq->rreq;\n+\tstruct netfs_io_stream *stream = &rreq->io_streams[0];\n+\tssize_t extracted;\n \tsize_t rsize = subreq->len;\n \n \tif (subreq->source == NETFS_DOWNLOAD_FROM_SERVER)\n-\t\trsize = umin(rsize, 
rreq->io_streams[0].sreq_max_len);\n-\n-\tsubreq->len = rsize;\n-\tif (unlikely(rreq->io_streams[0].sreq_max_segs)) {\n-\t\tsize_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,\n-\t\t\t\t\t\trreq->io_streams[0].sreq_max_segs);\n-\n-\t\tif (limit < rsize) {\n-\t\t\tsubreq->len = limit;\n-\t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_limited);\n-\t\t}\n+\t\trsize = umin(rsize, stream->sreq_max_len);\n+\n+\tbvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);\n+\textracted = bvecq_slice(&rreq->dispatch_cursor, subreq->len,\n+\t\t\t\tstream->sreq_max_segs, &subreq->nr_segs);\n+\tif (extracted < rsize) {\n+\t\tsubreq->len = extracted;\n+\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_limited);\n \t}\n \n-\tsubreq->io_iter\t= rreq->buffer.iter;\n-\n-\tiov_iter_truncate(&subreq->io_iter, subreq->len);\n-\trolling_buffer_advance(&rreq->buffer, subreq->len);\n \treturn subreq->len;\n }\n \n@@ -195,6 +190,10 @@ static void netfs_queue_read(struct netfs_io_request *rreq,\n static void netfs_issue_read(struct netfs_io_request *rreq,\n \t\t\t struct netfs_io_subrequest *subreq)\n {\n+\tbvecq_pos_set(&subreq->content, &subreq->dispatch_pos);\n+\tiov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,\n+\t\t\t subreq->content.slot, subreq->content.offset, subreq->len);\n+\n \tswitch (subreq->source) {\n \tcase NETFS_DOWNLOAD_FROM_SERVER:\n \t\trreq->netfs_ops->issue_read(subreq);\n@@ -203,7 +202,8 @@ static void netfs_issue_read(struct netfs_io_request *rreq,\n \t\tnetfs_read_cache_to_pagecache(rreq, subreq);\n \t\tbreak;\n \tdefault:\n-\t\t__set_bit(NETFS_SREQ_CLEAR_TAIL, &subreq->flags);\n+\t\tbvecq_zero(&rreq->dispatch_cursor, subreq->len);\n+\t\tsubreq->transferred = subreq->len;\n \t\tsubreq->error = 0;\n \t\tiov_iter_zero(subreq->len, &subreq->io_iter);\n \t\tsubreq->transferred = subreq->len;\n@@ -233,6 +233,11 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)\n \tssize_t size = rreq->len;\n \tint ret = 0;\n \n+\t_enter(\"R=%08x\", rreq->debug_id);\n+\n+\tbvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);\n+\tbvecq_pos_set(&rreq->collect_cursor, &rreq->dispatch_cursor);\n+\n \tdo {\n \t\tint (*prepare_read)(struct netfs_io_subrequest *subreq) = NULL;\n \t\tstruct netfs_io_subrequest *subreq;\n@@ -381,6 +386,9 @@ static void netfs_read_to_pagecache(struct netfs_io_request *rreq)\n \n \t/* Defer error return as we may need to wait for outstanding I/O. 
*/\n \tcmpxchg(&rreq->error, 0, ret);\n+\n+\tbvecq_pos_unset(&rreq->load_cursor);\n+\tbvecq_pos_unset(&rreq->dispatch_cursor);\n }\n \n /**\n@@ -428,7 +436,7 @@ void netfs_readahead(struct readahead_control *ractl)\n \t * acquires a ref on each folio that we will need to release later -\n \t * but we don't want to do that until after we've started the I/O.\n \t */\n-\tadded = rolling_buffer_bulk_load_from_ra(&rreq->buffer, ractl, rreq->debug_id);\n+\tadded = bvecq_load_from_ra(&rreq->load_cursor, ractl);\n \tif (added < 0) {\n \t\tret = added;\n \t\tgoto cleanup_free;\n@@ -437,7 +445,7 @@ void netfs_readahead(struct readahead_control *ractl)\n \n \trreq->submitted = rreq->start + added;\n \trreq->cleaned_to = rreq->start;\n-\trreq->front_folio_order = folio_order(rreq->buffer.tail->vec.folios[0]);\n+\trreq->front_folio_order = get_order(rreq->load_cursor.bvecq->bv[0].bv_len);\n \n \tnetfs_read_to_pagecache(rreq);\n \tnetfs_maybe_bulk_drop_ra_refs(rreq);\n@@ -449,20 +457,19 @@ void netfs_readahead(struct readahead_control *ractl)\n EXPORT_SYMBOL(netfs_readahead);\n \n /*\n- * Create a rolling buffer with a single occupying folio.\n+ * Create a buffer queue with a single occupying folio.\n */\n-static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio,\n-\t\t\t\t\tunsigned int rollbuf_flags)\n+static int netfs_create_singular_buffer(struct netfs_io_request *rreq, struct folio *folio)\n {\n-\tssize_t added;\n+\tstruct bvecq *bq;\n+\tsize_t fsize = folio_size(folio);\n \n-\tif (rolling_buffer_init(&rreq->buffer, rreq->debug_id, ITER_DEST) < 0)\n+\tif (bvecq_buffer_init(&rreq->load_cursor, GFP_KERNEL) < 0)\n \t\treturn -ENOMEM;\n \n-\tadded = rolling_buffer_append(&rreq->buffer, folio, rollbuf_flags);\n-\tif (added < 0)\n-\t\treturn added;\n-\trreq->submitted = rreq->start + added;\n+\tbq = rreq->load_cursor.bvecq;\n+\tbvec_set_folio(&bq->bv[bq->nr_slots++], folio, fsize, 0);\n+\trreq->submitted = rreq->start + fsize;\n \treturn 0;\n }\n \n@@ -475,11 +482,11 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)\n \tstruct address_space *mapping = folio->mapping;\n \tstruct netfs_folio *finfo = netfs_folio_info(folio);\n \tstruct netfs_inode *ctx = netfs_inode(mapping->host);\n-\tstruct folio *sink = NULL;\n-\tstruct bio_vec *bvec;\n+\tstruct bvecq *bq = NULL;\n+\tstruct page *sink = NULL;\n \tunsigned int from = finfo->dirty_offset;\n \tunsigned int to = from + finfo->dirty_len;\n-\tunsigned int off = 0, i = 0;\n+\tunsigned int off = 0;\n \tsize_t flen = folio_size(folio);\n \tsize_t nr_bvec = flen / PAGE_SIZE + 2;\n \tsize_t part;\n@@ -504,38 +511,45 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)\n \t * end get copied to, but the middle is discarded.\n \t */\n \tret = -ENOMEM;\n-\tbvec = kmalloc_objs(*bvec, nr_bvec);\n-\tif (!bvec)\n+\tbq = bvecq_alloc_one(nr_bvec, GFP_KERNEL);\n+\tif (!bq)\n \t\tgoto discard;\n+\trreq->load_cursor.bvecq = bq;\n \n-\tsink = folio_alloc(GFP_KERNEL, 0);\n-\tif (!sink) {\n-\t\tkfree(bvec);\n+\tsink = alloc_page(GFP_KERNEL);\n+\tif (!sink)\n \t\tgoto discard;\n-\t}\n \n \ttrace_netfs_folio(folio, netfs_folio_trace_read_gaps);\n \n-\trreq->direct_bv = bvec;\n-\trreq->direct_bv_count = nr_bvec;\n+\tfor (struct bvecq *p = bq; p; p = p->next)\n+\t\tp->free = true;\n+\n \tif (from > 0) {\n-\t\tbvec_set_folio(&bvec[i++], folio, from, 0);\n+\t\tfolio_get(folio);\n+\t\tbvec_set_folio(&bq->bv[bq->nr_slots++], folio, from, 0);\n \t\toff = from;\n \t}\n \twhile (off < to) {\n-\t\tpart = min_t(size_t, to 
- off, PAGE_SIZE);\n-\t\tbvec_set_folio(&bvec[i++], sink, part, 0);\n+\t\tif (bvecq_is_full(bq))\n+\t\t\tbq = bq->next;\n+\t\tpart = umin(to - off, PAGE_SIZE);\n+\t\tget_page(sink);\n+\t\tbvec_set_page(&bq->bv[bq->nr_slots++], sink, part, 0);\n \t\toff += part;\n \t}\n-\tif (to < flen)\n-\t\tbvec_set_folio(&bvec[i++], folio, flen - to, to);\n-\tiov_iter_bvec(&rreq->buffer.iter, ITER_DEST, bvec, i, rreq->len);\n+\tif (to < flen) {\n+\t\tif (bvecq_is_full(bq))\n+\t\t\tbq = bq->next;\n+\t\tfolio_get(folio);\n+\t\tbvec_set_folio(&bq->bv[bq->nr_slots++], folio, flen - to, to);\n+\t}\n+\n \trreq->submitted = rreq->start + flen;\n \n \tnetfs_read_to_pagecache(rreq);\n \n-\tif (sink)\n-\t\tfolio_put(sink);\n+\tput_page(sink);\n \n \tret = netfs_wait_for_read(rreq);\n \tif (ret >= 0) {\n@@ -547,6 +561,8 @@ static int netfs_read_gaps(struct file *file, struct folio *folio)\n \treturn ret < 0 ? ret : 0;\n \n discard:\n+\tif (sink)\n+\t\tput_page(sink);\n \tnetfs_put_failed_request(rreq);\n alloc_error:\n \tfolio_unlock(folio);\n@@ -597,7 +613,7 @@ int netfs_read_folio(struct file *file, struct folio *folio)\n \ttrace_netfs_read(rreq, rreq->start, rreq->len, netfs_read_trace_readpage);\n \n \t/* Set up the output buffer */\n-\tret = netfs_create_singular_buffer(rreq, folio, 0);\n+\tret = netfs_create_singular_buffer(rreq, folio);\n \tif (ret < 0)\n \t\tgoto discard;\n \n@@ -754,7 +770,7 @@ int netfs_write_begin(struct netfs_inode *ctx,\n \ttrace_netfs_read(rreq, pos, len, netfs_read_trace_write_begin);\n \n \t/* Set up the output buffer */\n-\tret = netfs_create_singular_buffer(rreq, folio, 0);\n+\tret = netfs_create_singular_buffer(rreq, folio);\n \tif (ret < 0)\n \t\tgoto error_put;\n \n@@ -819,9 +835,10 @@ int netfs_prefetch_for_write(struct file *file, struct folio *folio,\n \ttrace_netfs_read(rreq, start, flen, netfs_read_trace_prefetch_for_write);\n \n \t/* Set up the output buffer */\n-\tret = netfs_create_singular_buffer(rreq, folio, NETFS_ROLLBUF_PAGECACHE_MARK);\n+\tret = netfs_create_singular_buffer(rreq, folio);\n \tif (ret < 0)\n \t\tgoto error_put;\n+\trreq->load_cursor.bvecq->free = true;\n \n \tnetfs_read_to_pagecache(rreq);\n \tret = netfs_wait_for_read(rreq);\ndiff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c\nindex f72e6da88cca..05d09ba3d0d0 100644\n--- a/fs/netfs/direct_read.c\n+++ b/fs/netfs/direct_read.c\n@@ -16,31 +16,6 @@\n #include <linux/netfs.h>\n #include \"internal.h\"\n \n-static void netfs_prepare_dio_read_iterator(struct netfs_io_subrequest *subreq)\n-{\n-\tstruct netfs_io_request *rreq = subreq->rreq;\n-\tsize_t rsize;\n-\n-\trsize = umin(subreq->len, rreq->io_streams[0].sreq_max_len);\n-\tsubreq->len = rsize;\n-\n-\tif (unlikely(rreq->io_streams[0].sreq_max_segs)) {\n-\t\tsize_t limit = netfs_limit_iter(&rreq->buffer.iter, 0, rsize,\n-\t\t\t\t\t\trreq->io_streams[0].sreq_max_segs);\n-\n-\t\tif (limit < rsize) {\n-\t\t\tsubreq->len = limit;\n-\t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_limited);\n-\t\t}\n-\t}\n-\n-\ttrace_netfs_sreq(subreq, netfs_sreq_trace_prepare);\n-\n-\tsubreq->io_iter\t= rreq->buffer.iter;\n-\tiov_iter_truncate(&subreq->io_iter, subreq->len);\n-\tiov_iter_advance(&rreq->buffer.iter, subreq->len);\n-}\n-\n /*\n * Perform a read to a buffer from the server, slicing up the region to be read\n * according to the network rsize.\n@@ -52,9 +27,10 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)\n \tssize_t size = rreq->len;\n \tint ret = 0;\n \n+\tbvecq_pos_set(&rreq->dispatch_cursor, &rreq->load_cursor);\n+\n 
\tdo {\n \t\tstruct netfs_io_subrequest *subreq;\n-\t\tssize_t slice;\n \n \t\tsubreq = netfs_alloc_subrequest(rreq);\n \t\tif (!subreq) {\n@@ -89,16 +65,24 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)\n \t\t\t}\n \t\t}\n \n-\t\tnetfs_prepare_dio_read_iterator(subreq);\n-\t\tslice = subreq->len;\n-\t\tsize -= slice;\n-\t\tstart += slice;\n-\t\trreq->submitted += slice;\n+\t\tbvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);\n+\t\tbvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);\n+\t\tsubreq->len = bvecq_slice(&rreq->dispatch_cursor,\n+\t\t\t\t\t umin(size, stream->sreq_max_len),\n+\t\t\t\t\t stream->sreq_max_segs,\n+\t\t\t\t\t &subreq->nr_segs);\n+\n+\t\tsize -= subreq->len;\n+\t\tstart += subreq->len;\n+\t\trreq->submitted += subreq->len;\n \t\tif (size <= 0) {\n \t\t\tsmp_wmb(); /* Write lists before ALL_QUEUED. */\n \t\t\tset_bit(NETFS_RREQ_ALL_QUEUED, &rreq->flags);\n \t\t}\n \n+\t\tiov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,\n+\t\t\t\t subreq->content.slot, subreq->content.offset, subreq->len);\n+\n \t\trreq->netfs_ops->issue_read(subreq);\n \n \t\tif (test_bit(NETFS_RREQ_PAUSE, &rreq->flags))\n@@ -114,6 +98,7 @@ static int netfs_dispatch_unbuffered_reads(struct netfs_io_request *rreq)\n \t\tnetfs_wake_collector(rreq);\n \t}\n \n+\tbvecq_pos_unset(&rreq->dispatch_cursor);\n \treturn ret;\n }\n \n@@ -197,25 +182,15 @@ ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_iter *i\n \t * buffer for ourselves as the caller's iterator will be trashed when\n \t * we return.\n \t *\n-\t * In such a case, extract an iterator to represent as much of the the\n-\t * output buffer as we can manage. Note that the extraction might not\n-\t * be able to allocate a sufficiently large bvec array and may shorten\n-\t * the request.\n+\t * Extract a buffer queue to represent as much of the output buffer as\n+\t * we can manage. 
The fragments are extracted into a bvecq which will\n+\t * have sufficient nodes allocated to hold all the data, though this\n+\t * may end up truncated if ENOMEM is encountered.\n \t */\n-\tif (user_backed_iter(iter)) {\n-\t\tret = netfs_extract_user_iter(iter, rreq->len, &rreq->buffer.iter, 0);\n-\t\tif (ret < 0)\n-\t\t\tgoto error_put;\n-\t\trreq->direct_bv = (struct bio_vec *)rreq->buffer.iter.bvec;\n-\t\trreq->direct_bv_count = ret;\n-\t\trreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);\n-\t\trreq->len = iov_iter_count(&rreq->buffer.iter);\n-\t} else {\n-\t\trreq->buffer.iter = *iter;\n-\t\trreq->len = orig_count;\n-\t\trreq->direct_bv_unpin = false;\n-\t\tiov_iter_advance(iter, orig_count);\n-\t}\n+\tret = netfs_extract_iter(iter, rreq->len, INT_MAX, iocb->ki_pos,\n+\t\t\t\t &rreq->load_cursor.bvecq, 0);\n+\tif (ret < 0)\n+\t\tgoto error_put;\n \n \t// TODO: Set up bounce buffer if needed\n \ndiff --git a/fs/netfs/direct_write.c b/fs/netfs/direct_write.c\nindex f9ab69de3e29..a61c6d6fd17f 100644\n--- a/fs/netfs/direct_write.c\n+++ b/fs/netfs/direct_write.c\n@@ -73,7 +73,11 @@ static void netfs_unbuffered_write_collect(struct netfs_io_request *wreq,\n \tspin_unlock(&wreq->lock);\n \n \twreq->transferred += subreq->transferred;\n-\tiov_iter_advance(&wreq->buffer.iter, subreq->transferred);\n+\tif (subreq->transferred < subreq->len) {\n+\t\tbvecq_pos_unset(&wreq->dispatch_cursor);\n+\t\tbvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);\n+\t\tbvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);\n+\t}\n \n \tstream->collected_to = subreq->start + subreq->transferred;\n \twreq->collected_to = stream->collected_to;\n@@ -99,6 +103,9 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \n \t_enter(\"%llx\", wreq->len);\n \n+\tbvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);\n+\tbvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);\n+\n \tif (wreq->origin == NETFS_DIO_WRITE)\n \t\tinode_dio_begin(wreq->inode);\n \n@@ -111,6 +118,8 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \t\t\tnetfs_prepare_write(wreq, stream, wreq->start + wreq->transferred);\n \t\t\tsubreq = stream->construct;\n \t\t\tstream->construct = NULL;\n+\t\t} else {\n+\t\t\tbvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);\n \t\t}\n \n \t\t/* Check if (re-)preparation failed. 
*/\n@@ -120,16 +129,18 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \t\t\tbreak;\n \t\t}\n \n-\t\tiov_iter_truncate(&subreq->io_iter, wreq->len - wreq->transferred);\n+\t\tsubreq->len = bvecq_slice(&wreq->dispatch_cursor, stream->sreq_max_len,\n+\t\t\t\t\t stream->sreq_max_segs, &subreq->nr_segs);\n+\t\tbvecq_pos_set(&subreq->content, &subreq->dispatch_pos);\n+\n+\t\tiov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,\n+\t\t\t\t subreq->content.bvecq, subreq->content.slot,\n+\t\t\t\t subreq->content.offset,\n+\t\t\t\t subreq->len);\n+\n \t\tif (!iov_iter_count(&subreq->io_iter))\n \t\t\tbreak;\n \n-\t\tsubreq->len = netfs_limit_iter(&subreq->io_iter, 0,\n-\t\t\t\t\t stream->sreq_max_len,\n-\t\t\t\t\t stream->sreq_max_segs);\n-\t\tiov_iter_truncate(&subreq->io_iter, subreq->len);\n-\t\tstream->submit_extendable_to = subreq->len;\n-\n \t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_submit);\n \t\tstream->issue_write(subreq);\n \n@@ -166,8 +177,15 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \t\t */\n \t\tsubreq->error = -EAGAIN;\n \t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_retry);\n-\t\tif (subreq->transferred > 0)\n-\t\t\tiov_iter_advance(&wreq->buffer.iter, subreq->transferred);\n+\n+\t\tbvecq_pos_unset(&subreq->content);\n+\t\tbvecq_pos_unset(&wreq->dispatch_cursor);\n+\t\tbvecq_pos_transfer(&wreq->dispatch_cursor, &subreq->dispatch_pos);\n+\n+\t\tif (subreq->transferred > 0) {\n+\t\t\twreq->transferred += subreq->transferred;\n+\t\t\tbvecq_pos_advance(&wreq->dispatch_cursor, subreq->transferred);\n+\t\t}\n \n \t\tif (stream->source == NETFS_UPLOAD_TO_SERVER &&\n \t\t wreq->netfs_ops->retry_request)\n@@ -176,7 +194,6 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \t\t__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);\n \t\t__clear_bit(NETFS_SREQ_BOUNDARY, &subreq->flags);\n \t\t__clear_bit(NETFS_SREQ_FAILED, &subreq->flags);\n-\t\tsubreq->io_iter\t\t= wreq->buffer.iter;\n \t\tsubreq->start\t\t= wreq->start + wreq->transferred;\n \t\tsubreq->len\t\t= wreq->len - wreq->transferred;\n \t\tsubreq->transferred\t= 0;\n@@ -186,19 +203,14 @@ static int netfs_unbuffered_write(struct netfs_io_request *wreq)\n \n \t\tnetfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);\n \n-\t\tif (stream->prepare_write) {\n+\t\tif (stream->prepare_write)\n \t\t\tstream->prepare_write(subreq);\n-\t\t\t__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);\n-\t\t\tnetfs_stat(&netfs_n_wh_retry_write_subreq);\n-\t\t} else {\n-\t\t\tstruct iov_iter source;\n-\n-\t\t\tnetfs_reset_iter(subreq);\n-\t\t\tsource = subreq->io_iter;\n-\t\t\tnetfs_reissue_write(stream, subreq, &source);\n-\t\t}\n+\t\t__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);\n+\t\tnetfs_stat(&netfs_n_wh_retry_write_subreq);\n \t}\n \n+\tbvecq_pos_unset(&wreq->dispatch_cursor);\n+\tbvecq_pos_unset(&wreq->load_cursor);\n \tnetfs_unbuffered_write_done(wreq);\n \t_leave(\" = %d\", ret);\n \treturn ret;\n@@ -217,12 +229,12 @@ static void netfs_unbuffered_write_async(struct work_struct *work)\n * encrypted file. 
This can also be used for direct I/O writes.\n */\n ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *iter,\n-\t\t\t\t\t\t struct netfs_group *netfs_group)\n+\t\t\t\t\t struct netfs_group *netfs_group)\n {\n \tstruct netfs_io_request *wreq;\n \tunsigned long long start = iocb->ki_pos;\n \tunsigned long long end = start + iov_iter_count(iter);\n-\tssize_t ret, n;\n+\tssize_t ret;\n \tsize_t len = iov_iter_count(iter);\n \tbool async = !is_sync_kiocb(iocb);\n \n@@ -256,25 +268,17 @@ ssize_t netfs_unbuffered_write_iter_locked(struct kiocb *iocb, struct iov_iter *\n \t\t * allocate a sufficiently large bvec array and may shorten the\n \t\t * request.\n \t\t */\n-\t\tif (user_backed_iter(iter)) {\n-\t\t\tn = netfs_extract_user_iter(iter, len, &wreq->buffer.iter, 0);\n-\t\t\tif (n < 0) {\n-\t\t\t\tret = n;\n-\t\t\t\tgoto error_put;\n-\t\t\t}\n-\t\t\twreq->direct_bv = (struct bio_vec *)wreq->buffer.iter.bvec;\n-\t\t\twreq->direct_bv_count = n;\n-\t\t\twreq->direct_bv_unpin = iov_iter_extract_will_pin(iter);\n-\t\t} else {\n-\t\t\t/* If this is a kernel-generated async DIO request,\n-\t\t\t * assume that any resources the iterator points to\n-\t\t\t * (eg. a bio_vec array) will persist till the end of\n-\t\t\t * the op.\n-\t\t\t */\n-\t\t\twreq->buffer.iter = *iter;\n-\t\t}\n+\t\tssize_t n = netfs_extract_iter(iter, len, INT_MAX, iocb->ki_pos,\n+\t\t\t\t\t &wreq->load_cursor.bvecq, 0);\n \n-\t\twreq->len = iov_iter_count(&wreq->buffer.iter);\n+\t\tif (n < 0) {\n+\t\t\tret = n;\n+\t\t\tgoto error_put;\n+\t\t}\n+\t\twreq->len = n;\n+\t\t_debug(\"dio-write %zx/%zx %u/%u\",\n+\t\t n, len, wreq->load_cursor.bvecq->nr_slots,\n+\t\t wreq->load_cursor.bvecq->max_slots);\n \t}\n \n \t__set_bit(NETFS_RREQ_USE_IO_ITER, &wreq->flags);\ndiff --git a/fs/netfs/internal.h b/fs/netfs/internal.h\nindex ad47bcc1947b..ddae82f94ce0 100644\n--- a/fs/netfs/internal.h\n+++ b/fs/netfs/internal.h\n@@ -7,7 +7,6 @@\n \n #include <linux/slab.h>\n #include <linux/seq_file.h>\n-#include <linux/folio_queue.h>\n #include <linux/netfs.h>\n #include <linux/fscache.h>\n #include <linux/fscache-cache.h>\n@@ -67,9 +66,8 @@ static inline void netfs_proc_del_rreq(struct netfs_io_request *rreq) {}\n /*\n * misc.c\n */\n-struct folio_queue *netfs_buffer_make_space(struct netfs_io_request *rreq,\n-\t\t\t\t\t enum netfs_folioq_trace trace);\n-void netfs_reset_iter(struct netfs_io_subrequest *subreq);\n+struct bvecq *netfs_buffer_make_space(struct netfs_io_request *rreq,\n+\t\t\t\t enum netfs_bvecq_trace trace);\n void netfs_wake_collector(struct netfs_io_request *rreq);\n void netfs_subreq_clear_in_progress(struct netfs_io_subrequest *subreq);\n void netfs_wait_for_in_progress_stream(struct netfs_io_request *rreq,\n@@ -167,7 +165,6 @@ extern atomic_t netfs_n_wh_retry_write_req;\n extern atomic_t netfs_n_wh_retry_write_subreq;\n extern atomic_t netfs_n_wb_lock_skip;\n extern atomic_t netfs_n_wb_lock_wait;\n-extern atomic_t netfs_n_folioq;\n extern atomic_t netfs_n_bvecq;\n \n int netfs_stats_show(struct seq_file *m, void *v);\n@@ -205,8 +202,7 @@ void netfs_prepare_write(struct netfs_io_request *wreq,\n \t\t\t struct netfs_io_stream *stream,\n \t\t\t loff_t start);\n void netfs_reissue_write(struct netfs_io_stream *stream,\n-\t\t\t struct netfs_io_subrequest *subreq,\n-\t\t\t struct iov_iter *source);\n+\t\t\t struct netfs_io_subrequest *subreq);\n void netfs_issue_write(struct netfs_io_request *wreq,\n \t\t struct netfs_io_stream *stream);\n size_t netfs_advance_write(struct netfs_io_request *wreq,\ndiff --git 
a/fs/netfs/iterator.c b/fs/netfs/iterator.c\nindex e77fd39327c2..581dbf650a19 100644\n--- a/fs/netfs/iterator.c\n+++ b/fs/netfs/iterator.c\n@@ -136,6 +136,7 @@ ssize_t netfs_extract_iter(struct iov_iter *orig, size_t orig_len, size_t max_se\n }\n EXPORT_SYMBOL_GPL(netfs_extract_iter);\n \n+#if 0\n /**\n * netfs_extract_user_iter - Extract the pages from a user iterator into a bvec\n * @orig: The original iterator\n@@ -421,3 +422,4 @@ size_t netfs_limit_iter(const struct iov_iter *iter, size_t start_offset,\n \tBUG();\n }\n EXPORT_SYMBOL(netfs_limit_iter);\n+#endif\ndiff --git a/fs/netfs/misc.c b/fs/netfs/misc.c\nindex 6df89c92b10b..ab142cbaad35 100644\n--- a/fs/netfs/misc.c\n+++ b/fs/netfs/misc.c\n@@ -8,6 +8,7 @@\n #include <linux/swap.h>\n #include \"internal.h\"\n \n+#if 0\n /**\n * netfs_alloc_folioq_buffer - Allocate buffer space into a folio queue\n * @mapping: Address space to set on the folio (or NULL).\n@@ -103,24 +104,7 @@ void netfs_free_folioq_buffer(struct folio_queue *fq)\n \tfolio_batch_release(&fbatch);\n }\n EXPORT_SYMBOL(netfs_free_folioq_buffer);\n-\n-/*\n- * Reset the subrequest iterator to refer just to the region remaining to be\n- * read. The iterator may or may not have been advanced by socket ops or\n- * extraction ops to an extent that may or may not match the amount actually\n- * read.\n- */\n-void netfs_reset_iter(struct netfs_io_subrequest *subreq)\n-{\n-\tstruct iov_iter *io_iter = &subreq->io_iter;\n-\tsize_t remain = subreq->len - subreq->transferred;\n-\n-\tif (io_iter->count > remain)\n-\t\tiov_iter_advance(io_iter, io_iter->count - remain);\n-\telse if (io_iter->count < remain)\n-\t\tiov_iter_revert(io_iter, remain - io_iter->count);\n-\tiov_iter_truncate(&subreq->io_iter, remain);\n-}\n+#endif\n \n /**\n * netfs_dirty_folio - Mark folio dirty and pin a cache object for writeback\ndiff --git a/fs/netfs/objects.c b/fs/netfs/objects.c\nindex b8c4918d3dcd..eff431cd7d6a 100644\n--- a/fs/netfs/objects.c\n+++ b/fs/netfs/objects.c\n@@ -119,7 +119,6 @@ static void netfs_free_request_rcu(struct rcu_head *rcu)\n static void netfs_deinit_request(struct netfs_io_request *rreq)\n {\n \tstruct netfs_inode *ictx = netfs_inode(rreq->inode);\n-\tunsigned int i;\n \n \ttrace_netfs_rreq(rreq, netfs_rreq_trace_free);\n \n@@ -134,16 +133,9 @@ static void netfs_deinit_request(struct netfs_io_request *rreq)\n \t\trreq->netfs_ops->free_request(rreq);\n \tif (rreq->cache_resources.ops)\n \t\trreq->cache_resources.ops->end_operation(&rreq->cache_resources);\n-\tif (rreq->direct_bv) {\n-\t\tfor (i = 0; i < rreq->direct_bv_count; i++) {\n-\t\t\tif (rreq->direct_bv[i].bv_page) {\n-\t\t\t\tif (rreq->direct_bv_unpin)\n-\t\t\t\t\tunpin_user_page(rreq->direct_bv[i].bv_page);\n-\t\t\t}\n-\t\t}\n-\t\tkvfree(rreq->direct_bv);\n-\t}\n-\trolling_buffer_clear(&rreq->buffer);\n+\tbvecq_pos_unset(&rreq->load_cursor);\n+\tbvecq_pos_unset(&rreq->dispatch_cursor);\n+\tbvecq_pos_unset(&rreq->collect_cursor);\n \n \tif (atomic_dec_and_test(&ictx->io_count))\n \t\twake_up_var(&ictx->io_count);\n@@ -236,6 +228,8 @@ static void netfs_free_subrequest(struct netfs_io_subrequest *subreq)\n \ttrace_netfs_sreq(subreq, netfs_sreq_trace_free);\n \tif (rreq->netfs_ops->free_subrequest)\n \t\trreq->netfs_ops->free_subrequest(subreq);\n+\tbvecq_pos_unset(&subreq->dispatch_pos);\n+\tbvecq_pos_unset(&subreq->content);\n \tmempool_free(subreq, rreq->netfs_ops->subrequest_pool ?: &netfs_subrequest_pool);\n \tnetfs_stat_d(&netfs_n_rh_sreq);\n \tnetfs_put_request(rreq, netfs_rreq_trace_put_subreq);\ndiff --git 
a/fs/netfs/read_collect.c b/fs/netfs/read_collect.c\nindex e5f6665b3341..c7180680226c 100644\n--- a/fs/netfs/read_collect.c\n+++ b/fs/netfs/read_collect.c\n@@ -27,9 +27,13 @@\n */\n static void netfs_clear_unread(struct netfs_io_subrequest *subreq)\n {\n-\tnetfs_reset_iter(subreq);\n-\tWARN_ON_ONCE(subreq->len - subreq->transferred != iov_iter_count(&subreq->io_iter));\n-\tiov_iter_zero(iov_iter_count(&subreq->io_iter), &subreq->io_iter);\n+\tstruct iov_iter iter;\n+\n+\tiov_iter_bvec_queue(&iter, ITER_DEST, subreq->content.bvecq,\n+\t\t\t subreq->content.slot, subreq->content.offset, subreq->len);\n+\tiov_iter_advance(&iter, subreq->transferred);\n+\tiov_iter_zero(subreq->len, &iter);\n+\n \tif (subreq->start + subreq->transferred >= subreq->rreq->i_size)\n \t\t__set_bit(NETFS_SREQ_HIT_EOF, &subreq->flags);\n }\n@@ -40,11 +44,11 @@ static void netfs_clear_unread(struct netfs_io_subrequest *subreq)\n * dirty and let writeback handle it.\n */\n static void netfs_unlock_read_folio(struct netfs_io_request *rreq,\n-\t\t\t\t struct folio_queue *folioq,\n+\t\t\t\t struct bvecq *bvecq,\n \t\t\t\t int slot)\n {\n \tstruct netfs_folio *finfo;\n-\tstruct folio *folio = folioq_folio(folioq, slot);\n+\tstruct folio *folio = page_folio(bvecq->bv[slot].bv_page);\n \n \tif (unlikely(folio_pos(folio) < rreq->abandon_to)) {\n \t\ttrace_netfs_folio(folio, netfs_folio_trace_abandon);\n@@ -75,7 +79,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,\n \t\t\ttrace_netfs_folio(folio, netfs_folio_trace_read_done);\n \t\t}\n \n-\t\tfolioq_clear(folioq, slot);\n+\t\tbvecq->bv[slot].bv_page = NULL;\n \t} else {\n \t\t// TODO: Use of PG_private_2 is deprecated.\n \t\tif (test_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags))\n@@ -91,7 +95,7 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,\n \t\tfolio_unlock(folio);\n \t}\n \n-\tfolioq_clear(folioq, slot);\n+\tbvecq->bv[slot].bv_page = NULL;\n }\n \n /*\n@@ -100,18 +104,24 @@ static void netfs_unlock_read_folio(struct netfs_io_request *rreq,\n static void netfs_read_unlock_folios(struct netfs_io_request *rreq,\n \t\t\t\t unsigned int *notes)\n {\n-\tstruct folio_queue *folioq = rreq->buffer.tail;\n+\tstruct bvecq *bvecq = rreq->collect_cursor.bvecq;\n \tunsigned long long collected_to = rreq->collected_to;\n-\tunsigned int slot = rreq->buffer.first_tail_slot;\n+\tunsigned int slot = rreq->collect_cursor.slot;\n \n \tif (rreq->cleaned_to >= rreq->collected_to)\n \t\treturn;\n \n \t// TODO: Begin decryption\n \n-\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\tfolioq = rolling_buffer_delete_spent(&rreq->buffer);\n-\t\tif (!folioq) {\n+\tif (slot >= bvecq->nr_slots) {\n+\t\t/* We need to be very careful - the cleanup can catch the\n+\t\t * dispatcher, which could lead to us having nothing left in\n+\t\t * the queue, causing the front and back pointers to end up on\n+\t\t * different tracks. 
To avoid this, we must always keep at\n+\t\t * least one segment in the queue.\n+\t\t */\n+\t\tbvecq = bvecq_delete_spent(&rreq->collect_cursor);\n+\t\tif (!bvecq) {\n \t\t\trreq->front_folio_order = 0;\n \t\t\treturn;\n \t\t}\n@@ -127,13 +137,13 @@ static void netfs_read_unlock_folios(struct netfs_io_request *rreq,\n \t\tif (*notes & COPY_TO_CACHE)\n \t\t\tset_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);\n \n-\t\tfolio = folioq_folio(folioq, slot);\n+\t\tfolio = page_folio(bvecq->bv[slot].bv_page);\n \t\tif (WARN_ONCE(!folio_test_locked(folio),\n \t\t\t \"R=%08x: folio %lx is not locked\\n\",\n \t\t\t rreq->debug_id, folio->index))\n \t\t\ttrace_netfs_folio(folio, netfs_folio_trace_not_locked);\n \n-\t\torder = folioq_folio_order(folioq, slot);\n+\t\torder = folio_order(folio);\n \t\trreq->front_folio_order = order;\n \t\tfsize = PAGE_SIZE << order;\n \t\tfpos = folio_pos(folio);\n@@ -145,33 +155,32 @@ static void netfs_read_unlock_folios(struct netfs_io_request *rreq,\n \t\tif (collected_to < fend)\n \t\t\tbreak;\n \n-\t\tnetfs_unlock_read_folio(rreq, folioq, slot);\n+\t\tnetfs_unlock_read_folio(rreq, bvecq, slot);\n \t\tWRITE_ONCE(rreq->cleaned_to, fpos + fsize);\n \t\t*notes |= MADE_PROGRESS;\n \n \t\tclear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &rreq->flags);\n \n-\t\t/* Clean up the head folioq. If we clear an entire folioq, then\n-\t\t * we can get rid of it provided it's not also the tail folioq\n-\t\t * being filled by the issuer.\n+\t\t/* Clean up the head bvecq segment. If we clear an entire\n+\t\t * segment, then we can get rid of it provided it's not also\n+\t\t * the tail segment being filled by the issuer.\n \t\t */\n-\t\tfolioq_clear(folioq, slot);\n \t\tslot++;\n-\t\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\t\tfolioq = rolling_buffer_delete_spent(&rreq->buffer);\n-\t\t\tif (!folioq)\n+\t\tif (slot >= bvecq->nr_slots) {\n+\t\t\tbvecq = bvecq_delete_spent(&rreq->collect_cursor);\n+\t\t\tif (!bvecq)\n \t\t\t\tgoto done;\n \t\t\tslot = 0;\n-\t\t\ttrace_netfs_folioq(folioq, netfs_trace_folioq_read_progress);\n+\t\t\t//trace_netfs_bvecq(bvecq, netfs_trace_folioq_read_progress);\n \t\t}\n \n \t\tif (fpos + fsize >= collected_to)\n \t\t\tbreak;\n \t}\n \n-\trreq->buffer.tail = folioq;\n+\tbvecq_pos_move(&rreq->collect_cursor, bvecq);\n done:\n-\trreq->buffer.first_tail_slot = slot;\n+\trreq->collect_cursor.slot = slot;\n }\n \n /*\n@@ -346,12 +355,14 @@ static void netfs_rreq_assess_dio(struct netfs_io_request *rreq)\n \n \tif (rreq->origin == NETFS_UNBUFFERED_READ ||\n \t rreq->origin == NETFS_DIO_READ) {\n-\t\tfor (i = 0; i < rreq->direct_bv_count; i++) {\n-\t\t\tflush_dcache_page(rreq->direct_bv[i].bv_page);\n-\t\t\t// TODO: cifs marks pages in the destination buffer\n-\t\t\t// dirty under some circumstances after a read. Do we\n-\t\t\t// need to do that too?\n-\t\t\tset_page_dirty(rreq->direct_bv[i].bv_page);\n+\t\tfor (struct bvecq *bq = rreq->collect_cursor.bvecq; bq; bq = bq->next) {\n+\t\t\tfor (i = 0; i < bq->nr_slots; i++) {\n+\t\t\t\tflush_dcache_page(bq->bv[i].bv_page);\n+\t\t\t\t// TODO: cifs marks pages in the destination buffer\n+\t\t\t\t// dirty under some circumstances after a read. 
Do we\n+\t\t\t\t// need to do that too?\n+\t\t\t\tset_page_dirty(bq->bv[i].bv_page);\n+\t\t\t}\n \t\t}\n \t}\n \n@@ -442,7 +453,15 @@ bool netfs_read_collection(struct netfs_io_request *rreq)\n \n \ttrace_netfs_rreq(rreq, netfs_rreq_trace_done);\n \tnetfs_clear_subrequests(rreq);\n-\tnetfs_unlock_abandoned_read_pages(rreq);\n+\tswitch (rreq->origin) {\n+\tcase NETFS_READAHEAD:\n+\tcase NETFS_READPAGE:\n+\tcase NETFS_READ_FOR_WRITE:\n+\t\tnetfs_unlock_abandoned_read_pages(rreq);\n+\t\tbreak;\n+\tdefault:\n+\t\tbreak;\n+\t}\n \tif (unlikely(rreq->copy_to_cache))\n \t\tnetfs_pgpriv2_end_copy_to_cache(rreq);\n \treturn true;\ndiff --git a/fs/netfs/read_pgpriv2.c b/fs/netfs/read_pgpriv2.c\nindex a1489aa29f78..fb783318318e 100644\n--- a/fs/netfs/read_pgpriv2.c\n+++ b/fs/netfs/read_pgpriv2.c\n@@ -19,6 +19,9 @@\n static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio *folio)\n {\n \tstruct netfs_io_stream *cache = &creq->io_streams[1];\n+\tstruct bvecq *queue;\n+\tunsigned int slot;\n+\tsize_t dio_size = PAGE_SIZE;\n \tsize_t fsize = folio_size(folio), flen = fsize;\n \tloff_t fpos = folio_pos(folio), i_size;\n \tbool to_eof = false;\n@@ -48,17 +51,40 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio\n \t\tto_eof = true;\n \t}\n \n+\tflen = round_up(flen, dio_size);\n+\n \t_debug(\"folio %zx %zx\", flen, fsize);\n \n \ttrace_netfs_folio(folio, netfs_folio_trace_store_copy);\n \n-\t/* Attach the folio to the rolling buffer. */\n-\tif (rolling_buffer_append(&creq->buffer, folio, 0) < 0) {\n-\t\tclear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags);\n-\t\treturn;\n+\n+\t/* Institute a new bvec queue segment if the current one is full or if\n+\t * we encounter a discontiguity. The discontiguity break is important\n+\t * when it comes to bulk unlocking folios by file range.\n+\t */\n+\tqueue = creq->load_cursor.bvecq;\n+\tif (bvecq_is_full(queue) ||\n+\t (fpos != creq->last_end && creq->last_end > 0)) {\n+\t\tif (bvecq_buffer_make_space(&creq->load_cursor, GFP_KERNEL) < 0) {\n+\t\t\tclear_bit(NETFS_RREQ_FOLIO_COPY_TO_CACHE, &creq->flags);\n+\t\t\treturn;\n+\t\t}\n+\n+\t\tqueue = creq->load_cursor.bvecq;\n+\t\tqueue->fpos = fpos;\n+\t\tif (fpos != creq->last_end)\n+\t\t\tqueue->discontig = true;\n \t}\n \n-\tcache->submit_extendable_to = fsize;\n+\t/* Attach the folio to the rolling buffer. */\n+\tslot = queue->nr_slots;\n+\tbvec_set_folio(&queue->bv[slot], folio, fsize, 0);\n+\t/* Order incrementing the slot counter after the slot is filled. 
*/\n+\tsmp_store_release(&queue->nr_slots, slot + 1);\n+\tcreq->load_cursor.slot = slot + 1;\n+\tcreq->load_cursor.offset = 0;\n+\ttrace_netfs_bv_slot(queue, slot);\n+\n \tcache->submit_off = 0;\n \tcache->submit_len = flen;\n \n@@ -70,10 +96,9 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio\n \tdo {\n \t\tssize_t part;\n \n-\t\tcreq->buffer.iter.iov_offset = cache->submit_off;\n+\t\tcreq->dispatch_cursor.offset = cache->submit_off;\n \n \t\tatomic64_set(&creq->issued_to, fpos + cache->submit_off);\n-\t\tcache->submit_extendable_to = fsize - cache->submit_off;\n \t\tpart = netfs_advance_write(creq, cache, fpos + cache->submit_off,\n \t\t\t\t\t cache->submit_len, to_eof);\n \t\tcache->submit_off += part;\n@@ -83,8 +108,7 @@ static void netfs_pgpriv2_copy_folio(struct netfs_io_request *creq, struct folio\n \t\t\tcache->submit_len -= part;\n \t} while (cache->submit_len > 0);\n \n-\tcreq->buffer.iter.iov_offset = 0;\n-\trolling_buffer_advance(&creq->buffer, fsize);\n+\tbvecq_pos_step(&creq->dispatch_cursor);\n \tatomic64_set(&creq->issued_to, fpos + fsize);\n \n \tif (flen < fsize)\n@@ -110,6 +134,10 @@ static struct netfs_io_request *netfs_pgpriv2_begin_copy_to_cache(\n \tif (!creq->io_streams[1].avail)\n \t\tgoto cancel_put;\n \n+\tbvecq_buffer_init(&creq->load_cursor, GFP_KERNEL);\n+\tbvecq_pos_set(&creq->dispatch_cursor, &creq->load_cursor);\n+\tbvecq_pos_set(&creq->collect_cursor, &creq->dispatch_cursor);\n+\n \t__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &creq->flags);\n \ttrace_netfs_copy2cache(rreq, creq);\n \ttrace_netfs_write(creq, netfs_write_trace_copy_to_cache);\n@@ -170,22 +198,23 @@ void netfs_pgpriv2_end_copy_to_cache(struct netfs_io_request *rreq)\n */\n bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)\n {\n-\tstruct folio_queue *folioq = creq->buffer.tail;\n+\tstruct bvecq *bq = creq->collect_cursor.bvecq;\n \tunsigned long long collected_to = creq->collected_to;\n-\tunsigned int slot = creq->buffer.first_tail_slot;\n+\tunsigned int slot;\n \tbool made_progress = false;\n \n-\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\tfolioq = rolling_buffer_delete_spent(&creq->buffer);\n+\tif (bvecq_is_full(bq)) {\n+\t\tbq = bvecq_delete_spent(&creq->collect_cursor);\n \t\tslot = 0;\n \t}\n+\tslot = creq->collect_cursor.slot;\n \n \tfor (;;) {\n \t\tstruct folio *folio;\n \t\tunsigned long long fpos, fend;\n \t\tsize_t fsize, flen;\n \n-\t\tfolio = folioq_folio(folioq, slot);\n+\t\tfolio = page_folio(bq->bv[slot].bv_page);\n \t\tif (WARN_ONCE(!folio_test_private_2(folio),\n \t\t\t \"R=%08x: folio %lx is not marked private_2\\n\",\n \t\t\t creq->debug_id, folio->index))\n@@ -212,11 +241,11 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)\n \t\t * we can get rid of it provided it's not also the tail folioq\n \t\t * being filled by the issuer.\n \t\t */\n-\t\tfolioq_clear(folioq, slot);\n+\t\tbq->bv[slot].bv_page = NULL;\n \t\tslot++;\n-\t\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\t\tfolioq = rolling_buffer_delete_spent(&creq->buffer);\n-\t\t\tif (!folioq)\n+\t\tif (slot >= bq->nr_slots) {\n+\t\t\tbq = bvecq_delete_spent(&creq->collect_cursor);\n+\t\t\tif (!bq)\n \t\t\t\tgoto done;\n \t\t\tslot = 0;\n \t\t}\n@@ -225,8 +254,7 @@ bool netfs_pgpriv2_unlock_copied_folios(struct netfs_io_request *creq)\n \t\t\tbreak;\n \t}\n \n-\tcreq->buffer.tail = folioq;\n done:\n-\tcreq->buffer.first_tail_slot = slot;\n+\tcreq->collect_cursor.slot = slot;\n \treturn made_progress;\n }\ndiff --git a/fs/netfs/read_retry.c 
b/fs/netfs/read_retry.c\nindex 68fc869513ef..6f2eb14aac72 100644\n--- a/fs/netfs/read_retry.c\n+++ b/fs/netfs/read_retry.c\n@@ -12,6 +12,11 @@\n static void netfs_reissue_read(struct netfs_io_request *rreq,\n \t\t\t struct netfs_io_subrequest *subreq)\n {\n+\tbvecq_pos_set(&subreq->content, &subreq->dispatch_pos);\n+\tiov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,\n+\t\t\t subreq->content.slot, subreq->content.offset, subreq->len);\n+\tiov_iter_advance(&subreq->io_iter, subreq->transferred);\n+\n \tsubreq->error = 0;\n \t__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);\n \t__set_bit(NETFS_SREQ_IN_PROGRESS, &subreq->flags);\n@@ -27,6 +32,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n {\n \tstruct netfs_io_subrequest *subreq;\n \tstruct netfs_io_stream *stream = &rreq->io_streams[0];\n+\tstruct bvecq_pos dispatch_cursor = {};\n \tstruct list_head *next;\n \n \t_enter(\"R=%x\", rreq->debug_id);\n@@ -48,7 +54,6 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t\tif (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {\n \t\t\t\t__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);\n \t\t\t\tsubreq->retry_count++;\n-\t\t\t\tnetfs_reset_iter(subreq);\n \t\t\t\tnetfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);\n \t\t\t\tnetfs_reissue_read(rreq, subreq);\n \t\t\t}\n@@ -74,11 +79,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \n \tdo {\n \t\tstruct netfs_io_subrequest *from, *to, *tmp;\n-\t\tstruct iov_iter source;\n \t\tunsigned long long start, len;\n \t\tsize_t part;\n \t\tbool boundary = false, subreq_superfluous = false;\n \n+\t\tbvecq_pos_unset(&dispatch_cursor);\n+\n \t\t/* Go through the subreqs and find the next span of contiguous\n \t\t * buffer that we then rejig (cifs, for example, needs the\n \t\t * rsize renegotiating) and reissue.\n@@ -113,9 +119,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t/* Determine the set of buffers we're going to use. Each\n \t\t * subreq gets a subset of a single overall contiguous buffer.\n \t\t */\n-\t\tnetfs_reset_iter(from);\n-\t\tsource = from->io_iter;\n-\t\tsource.count = len;\n+\t\tbvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);\n+\t\tbvecq_pos_advance(&dispatch_cursor, from->transferred);\n \n \t\t/* Work through the sublist. 
*/\n \t\tsubreq = from;\n@@ -131,10 +136,14 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t\t__clear_bit(NETFS_SREQ_MADE_PROGRESS, &subreq->flags);\n \t\t\tsubreq->retry_count++;\n \n+\t\t\tbvecq_pos_unset(&subreq->dispatch_pos);\n+\t\t\tbvecq_pos_unset(&subreq->content);\n+\n \t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_retry);\n \n \t\t\t/* Renegotiate max_len (rsize) */\n \t\t\tstream->sreq_max_len = subreq->len;\n+\t\t\tstream->sreq_max_segs = INT_MAX;\n \t\t\tif (rreq->netfs_ops->prepare_read &&\n \t\t\t rreq->netfs_ops->prepare_read(subreq) < 0) {\n \t\t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_reprep_failed);\n@@ -142,13 +151,13 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t\t\tgoto abandon;\n \t\t\t}\n \n-\t\t\tpart = umin(len, stream->sreq_max_len);\n-\t\t\tif (unlikely(stream->sreq_max_segs))\n-\t\t\t\tpart = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);\n+\t\t\tbvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);\n+\t\t\tpart = bvecq_slice(&dispatch_cursor,\n+\t\t\t\t\t umin(len, stream->sreq_max_len),\n+\t\t\t\t\t stream->sreq_max_segs,\n+\t\t\t\t\t &subreq->nr_segs);\n \t\t\tsubreq->len = subreq->transferred + part;\n-\t\t\tsubreq->io_iter = source;\n-\t\t\tiov_iter_truncate(&subreq->io_iter, part);\n-\t\t\tiov_iter_advance(&source, part);\n+\n \t\t\tlen -= part;\n \t\t\tstart += part;\n \t\t\tif (!len) {\n@@ -208,9 +217,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_retry);\n \n \t\t\tstream->sreq_max_len\t= umin(len, rreq->rsize);\n-\t\t\tstream->sreq_max_segs\t= 0;\n-\t\t\tif (unlikely(stream->sreq_max_segs))\n-\t\t\t\tpart = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);\n+\t\t\tstream->sreq_max_segs\t= INT_MAX;\n \n \t\t\tnetfs_stat(&netfs_n_rh_download);\n \t\t\tif (rreq->netfs_ops->prepare_read(subreq) < 0) {\n@@ -219,11 +226,12 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t\t\tgoto abandon;\n \t\t\t}\n \n-\t\t\tpart = umin(len, stream->sreq_max_len);\n+\t\t\tbvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);\n+\t\t\tpart = bvecq_slice(&dispatch_cursor,\n+\t\t\t\t\t umin(len, stream->sreq_max_len),\n+\t\t\t\t\t stream->sreq_max_segs,\n+\t\t\t\t\t &subreq->nr_segs);\n \t\t\tsubreq->len = subreq->transferred + part;\n-\t\t\tsubreq->io_iter = source;\n-\t\t\tiov_iter_truncate(&subreq->io_iter, part);\n-\t\t\tiov_iter_advance(&source, part);\n \n \t\t\tlen -= part;\n \t\t\tstart += part;\n@@ -237,6 +245,8 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \n \t} while (!list_is_head(next, &stream->subrequests));\n \n+out:\n+\tbvecq_pos_unset(&dispatch_cursor);\n \treturn;\n \n \t/* If we hit an error, fail all remaining incomplete subrequests */\n@@ -253,6 +263,7 @@ static void netfs_retry_read_subrequests(struct netfs_io_request *rreq)\n \t\t__set_bit(NETFS_SREQ_FAILED, &subreq->flags);\n \t\t__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);\n \t}\n+\tgoto out;\n }\n \n /*\n@@ -281,23 +292,24 @@ void netfs_retry_reads(struct netfs_io_request *rreq)\n */\n void netfs_unlock_abandoned_read_pages(struct netfs_io_request *rreq)\n {\n-\tstruct folio_queue *p;\n-\n-\tfor (p = rreq->buffer.tail; p; p = p->next) {\n-\t\tfor (int slot = 0; slot < folioq_count(p); slot++) {\n-\t\t\tstruct folio *folio = folioq_folio(p, slot);\n-\n-\t\t\tif (folio && !folioq_is_marked2(p, slot)) {\n-\t\t\t\tif (folio->index == rreq->no_unlock_folio 
&&\n-\t\t\t\t test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO,\n-\t\t\t\t\t &rreq->flags)) {\n-\t\t\t\t\t_debug(\"no unlock\");\n-\t\t\t\t} else {\n-\t\t\t\t\ttrace_netfs_folio(folio,\n-\t\t\t\t\t\tnetfs_folio_trace_abandon);\n-\t\t\t\t\tfolio_unlock(folio);\n-\t\t\t\t}\n+\tstruct bvecq *p;\n+\n+\tfor (p = rreq->collect_cursor.bvecq; p; p = p->next) {\n+\t\tif (!p->free)\n+\t\t\tcontinue;\n+\t\tfor (int slot = 0; slot < p->nr_slots; slot++) {\n+\t\t\tif (!p->bv[slot].bv_page)\n+\t\t\t\tcontinue;\n+\n+\t\t\tstruct folio *folio = page_folio(p->bv[slot].bv_page);\n+\n+\t\t\tif (folio->index == rreq->no_unlock_folio &&\n+\t\t\t test_bit(NETFS_RREQ_NO_UNLOCK_FOLIO, &rreq->flags)) {\n+\t\t\t\t_debug(\"no unlock\");\n+\t\t\t\tcontinue;\n \t\t\t}\n+\t\t\ttrace_netfs_folio(folio, netfs_folio_trace_abandon);\n+\t\t\tfolio_unlock(folio);\n \t\t}\n \t}\n }\ndiff --git a/fs/netfs/read_single.c b/fs/netfs/read_single.c\nindex d87a03859ebd..b386cae77ece 100644\n--- a/fs/netfs/read_single.c\n+++ b/fs/netfs/read_single.c\n@@ -94,7 +94,12 @@ static int netfs_single_dispatch_read(struct netfs_io_request *rreq)\n \tsubreq->source\t= NETFS_DOWNLOAD_FROM_SERVER;\n \tsubreq->start\t= 0;\n \tsubreq->len\t= rreq->len;\n-\tsubreq->io_iter\t= rreq->buffer.iter;\n+\n+\tbvecq_pos_set(&subreq->dispatch_pos, &rreq->dispatch_cursor);\n+\tbvecq_pos_set(&subreq->content, &rreq->dispatch_cursor);\n+\n+\tiov_iter_bvec_queue(&subreq->io_iter, ITER_DEST, subreq->content.bvecq,\n+\t\t\t subreq->content.slot, subreq->content.offset, subreq->len);\n \n \t/* Try to use the cache if the cache content matches the size of the\n \t * remote file.\n@@ -180,6 +185,10 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite\n \tif (IS_ERR(rreq))\n \t\treturn PTR_ERR(rreq);\n \n+\tret = netfs_extract_iter(iter, rreq->len, INT_MAX, 0, &rreq->dispatch_cursor.bvecq, 0);\n+\tif (ret < 0)\n+\t\tgoto cleanup_free;\n+\n \tret = netfs_single_begin_cache_read(rreq, ictx);\n \tif (ret == -ENOMEM || ret == -EINTR || ret == -ERESTARTSYS)\n \t\tgoto cleanup_free;\n@@ -187,7 +196,6 @@ ssize_t netfs_read_single(struct inode *inode, struct file *file, struct iov_ite\n \tnetfs_stat(&netfs_n_rh_read_single);\n \ttrace_netfs_read(rreq, 0, rreq->len, netfs_read_trace_read_single);\n \n-\trreq->buffer.iter = *iter;\n \tnetfs_single_dispatch_read(rreq);\n \n \tret = netfs_wait_for_read(rreq);\ndiff --git a/fs/netfs/stats.c b/fs/netfs/stats.c\nindex 84c2a4bcc762..1dfb5667b931 100644\n--- a/fs/netfs/stats.c\n+++ b/fs/netfs/stats.c\n@@ -47,7 +47,6 @@ atomic_t netfs_n_wh_retry_write_req;\n atomic_t netfs_n_wh_retry_write_subreq;\n atomic_t netfs_n_wb_lock_skip;\n atomic_t netfs_n_wb_lock_wait;\n-atomic_t netfs_n_folioq;\n atomic_t netfs_n_bvecq;\n \n int netfs_stats_show(struct seq_file *m, void *v)\n@@ -91,11 +90,10 @@ int netfs_stats_show(struct seq_file *m, void *v)\n \t\t atomic_read(&netfs_n_rh_retry_read_subreq),\n \t\t atomic_read(&netfs_n_wh_retry_write_req),\n \t\t atomic_read(&netfs_n_wh_retry_write_subreq));\n-\tseq_printf(m, \"Objs : rr=%u sr=%u bq=%u foq=%u wsc=%u\\n\",\n+\tseq_printf(m, \"Objs : rr=%u sr=%u bq=%u wsc=%u\\n\",\n \t\t atomic_read(&netfs_n_rh_rreq),\n \t\t atomic_read(&netfs_n_rh_sreq),\n \t\t atomic_read(&netfs_n_bvecq),\n-\t\t atomic_read(&netfs_n_folioq),\n \t\t atomic_read(&netfs_n_wh_wstream_conflict));\n \tseq_printf(m, \"WbLock : skip=%u wait=%u\\n\",\n \t\t atomic_read(&netfs_n_wb_lock_skip),\ndiff --git a/fs/netfs/write_collect.c b/fs/netfs/write_collect.c\nindex a839735d5675..fb8daf50c86d 100644\n--- 
a/fs/netfs/write_collect.c\n+++ b/fs/netfs/write_collect.c\n@@ -111,12 +111,12 @@ int netfs_folio_written_back(struct folio *folio)\n static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,\n \t\t\t\t\t unsigned int *notes)\n {\n-\tstruct folio_queue *folioq = wreq->buffer.tail;\n+\tstruct bvecq *bvecq = wreq->collect_cursor.bvecq;\n \tunsigned long long collected_to = wreq->collected_to;\n-\tunsigned int slot = wreq->buffer.first_tail_slot;\n+\tunsigned int slot = wreq->collect_cursor.slot;\n \n-\tif (WARN_ON_ONCE(!folioq)) {\n-\t\tpr_err(\"[!] Writeback unlock found empty rolling buffer!\\n\");\n+\tif (WARN_ON_ONCE(!bvecq)) {\n+\t\tpr_err(\"[!] Writeback unlock found empty buffer!\\n\");\n \t\tnetfs_dump_request(wreq);\n \t\treturn;\n \t}\n@@ -127,9 +127,15 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,\n \t\treturn;\n \t}\n \n-\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\tfolioq = rolling_buffer_delete_spent(&wreq->buffer);\n-\t\tif (!folioq)\n+\tif (slot >= bvecq->nr_slots) {\n+\t\t/* We need to be very careful - the cleanup can catch the\n+\t\t * dispatcher, which could lead to us having nothing left in\n+\t\t * the queue, causing the front and back pointers to end up on\n+\t\t * different tracks. To avoid this, we must always keep at\n+\t\t * least one segment in the queue.\n+\t\t */\n+\t\tbvecq = bvecq_delete_spent(&wreq->collect_cursor);\n+\t\tif (!bvecq)\n \t\t\treturn;\n \t\tslot = 0;\n \t}\n@@ -140,7 +146,7 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,\n \t\tunsigned long long fpos, fend;\n \t\tsize_t fsize, flen;\n \n-\t\tfolio = folioq_folio(folioq, slot);\n+\t\tfolio = page_folio(bvecq->bv[slot].bv_page);\n \t\tif (WARN_ONCE(!folio_test_writeback(folio),\n \t\t\t \"R=%08x: folio %lx is not under writeback\\n\",\n \t\t\t wreq->debug_id, folio->index))\n@@ -163,15 +169,15 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,\n \t\twreq->cleaned_to = fpos + fsize;\n \t\t*notes |= MADE_PROGRESS;\n \n-\t\t/* Clean up the head folioq. If we clear an entire folioq, then\n-\t\t * we can get rid of it provided it's not also the tail folioq\n+\t\t/* Clean up the head bvecq. If we clear an entire bvecq, then\n+\t\t * we can get rid of it provided it's not also the tail bvecq\n \t\t * being filled by the issuer.\n \t\t */\n-\t\tfolioq_clear(folioq, slot);\n+\t\tbvecq->bv[slot].bv_page = NULL;\n \t\tslot++;\n-\t\tif (slot >= folioq_nr_slots(folioq)) {\n-\t\t\tfolioq = rolling_buffer_delete_spent(&wreq->buffer);\n-\t\t\tif (!folioq)\n+\t\tif (slot >= bvecq->nr_slots) {\n+\t\t\tbvecq = bvecq_delete_spent(&wreq->collect_cursor);\n+\t\t\tif (!bvecq)\n \t\t\t\tgoto done;\n \t\t\tslot = 0;\n \t\t}\n@@ -180,9 +186,8 @@ static void netfs_writeback_unlock_folios(struct netfs_io_request *wreq,\n \t\t\tbreak;\n \t}\n \n-\twreq->buffer.tail = folioq;\n done:\n-\twreq->buffer.first_tail_slot = slot;\n+\twreq->collect_cursor.slot = slot;\n }\n \n static void netfs_cache_collect(struct netfs_io_request *wreq,\n@@ -217,7 +222,8 @@ static void netfs_collect_write_results(struct netfs_io_request *wreq)\n \ttrace_netfs_rreq(wreq, netfs_rreq_trace_collect);\n \n reassess_streams:\n-\tissued_to = atomic64_read(&wreq->issued_to);\n+\t/* Order reading the issued_to point before reading the queue it refers to. 
*/\n+\tissued_to = atomic64_read_acquire(&wreq->issued_to);\n \tsmp_rmb();\n \tcollected_to = ULLONG_MAX;\n \tif (wreq->origin == NETFS_WRITEBACK ||\ndiff --git a/fs/netfs/write_issue.c b/fs/netfs/write_issue.c\nindex 9ca2c780f469..d4c4bee4299e 100644\n--- a/fs/netfs/write_issue.c\n+++ b/fs/netfs/write_issue.c\n@@ -108,8 +108,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,\n \tictx = netfs_inode(wreq->inode);\n \tif (is_cacheable && netfs_is_cache_enabled(ictx))\n \t\tfscache_begin_write_operation(&wreq->cache_resources, netfs_i_cookie(ictx));\n-\tif (rolling_buffer_init(&wreq->buffer, wreq->debug_id, ITER_SOURCE) < 0)\n-\t\tgoto nomem;\n \n \twreq->cleaned_to = wreq->start;\n \tif (wreq->cache_resources.dio_size > 1)\n@@ -134,9 +132,6 @@ struct netfs_io_request *netfs_create_write_req(struct address_space *mapping,\n \t}\n \n \treturn wreq;\n-nomem:\n-\tnetfs_put_failed_request(wreq);\n-\treturn ERR_PTR(-ENOMEM);\n }\n \n /**\n@@ -161,21 +156,13 @@ void netfs_prepare_write(struct netfs_io_request *wreq,\n \t\t\t loff_t start)\n {\n \tstruct netfs_io_subrequest *subreq;\n-\tstruct iov_iter *wreq_iter = &wreq->buffer.iter;\n-\n-\t/* Make sure we don't point the iterator at a used-up folio_queue\n-\t * struct being used as a placeholder to prevent the queue from\n-\t * collapsing. In such a case, extend the queue.\n-\t */\n-\tif (iov_iter_is_folioq(wreq_iter) &&\n-\t wreq_iter->folioq_slot >= folioq_nr_slots(wreq_iter->folioq))\n-\t\trolling_buffer_make_space(&wreq->buffer);\n \n \tsubreq = netfs_alloc_subrequest(wreq);\n \tsubreq->source\t\t= stream->source;\n \tsubreq->start\t\t= start;\n \tsubreq->stream_nr\t= stream->stream_nr;\n-\tsubreq->io_iter\t\t= *wreq_iter;\n+\n+\tbvecq_pos_set(&subreq->dispatch_pos, &wreq->dispatch_cursor);\n \n \t_enter(\"R=%x[%x]\", wreq->debug_id, subreq->debug_index);\n \n@@ -240,15 +227,15 @@ static void netfs_do_issue_write(struct netfs_io_stream *stream,\n }\n \n void netfs_reissue_write(struct netfs_io_stream *stream,\n-\t\t\t struct netfs_io_subrequest *subreq,\n-\t\t\t struct iov_iter *source)\n+\t\t\t struct netfs_io_subrequest *subreq)\n {\n-\tsize_t size = subreq->len - subreq->transferred;\n-\n \t// TODO: Use encrypted buffer\n-\tsubreq->io_iter = *source;\n-\tiov_iter_advance(source, size);\n-\tiov_iter_truncate(&subreq->io_iter, size);\n+\tbvecq_pos_set(&subreq->content, &subreq->dispatch_pos);\n+\tiov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,\n+\t\t\t subreq->content.bvecq, subreq->content.slot,\n+\t\t\t subreq->content.offset,\n+\t\t\t subreq->len);\n+\tiov_iter_advance(&subreq->io_iter, subreq->transferred);\n \n \tsubreq->retry_count++;\n \tsubreq->error = 0;\n@@ -266,8 +253,57 @@ void netfs_issue_write(struct netfs_io_request *wreq,\n \tif (!subreq)\n \t\treturn;\n \n+\t/* If we have a write to the cache, we need to round out the first and\n+\t * last entries (only those as the data will be on virtually contiguous\n+\t * folios) to cache DIO boundaries.\n+\t */\n+\tif (subreq->source == NETFS_WRITE_TO_CACHE) {\n+\t\tstruct bvecq_pos tmp_pos;\n+\t\tstruct bio_vec *bv;\n+\t\tstruct bvecq *bq;\n+\t\tsize_t dio_size = wreq->cache_resources.dio_size;\n+\t\tsize_t disp, len;\n+\t\tint ret;\n+\n+\t\tbvecq_pos_set(&tmp_pos, &subreq->dispatch_pos);\n+\t\tret = bvecq_extract(&tmp_pos, subreq->len, INT_MAX, &subreq->content.bvecq);\n+\t\tbvecq_pos_unset(&tmp_pos);\n+\t\tif (ret < 0) {\n+\t\t\tnetfs_write_subrequest_terminated(subreq, -ENOMEM);\n+\t\t\treturn;\n+\t\t}\n+\n+\t\t/* Round the first entry down. 
*/\n+\t\tbq = subreq->content.bvecq;\n+\t\tbv = &bq->bv[0];\n+\t\tdisp = bv->bv_offset & (dio_size - 1);\n+\t\tif (disp) {\n+\t\t\tbv->bv_offset -= disp;\n+\t\t\tbv->bv_len += disp;\n+\t\t\tbq->fpos -= disp;\n+\t\t\tsubreq->start -= disp;\n+\t\t\tsubreq->len += disp;\n+\t\t}\n+\n+\t\t/* Round the end of the last entry up. */\n+\t\twhile (bq->next)\n+\t\t\tbq = bq->next;\n+\t\tbv = &bq->bv[bq->nr_slots - 1];\n+\t\tlen = round_up(bv->bv_len, dio_size);\n+\t\tif (len > bv->bv_len) {\n+\t\t\tsubreq->len += len - bv->bv_len;\n+\t\t\tbv->bv_len = len;\n+\t\t}\n+\t} else {\n+\t\tbvecq_pos_set(&subreq->content, &subreq->dispatch_pos);\n+\t}\n+\n+\tiov_iter_bvec_queue(&subreq->io_iter, ITER_SOURCE,\n+\t\t\t subreq->content.bvecq, subreq->content.slot,\n+\t\t\t subreq->content.offset,\n+\t\t\t subreq->len);\n+\n \tstream->construct = NULL;\n-\tsubreq->io_iter.count = subreq->len;\n \tnetfs_do_issue_write(stream, subreq);\n }\n \n@@ -304,7 +340,6 @@ size_t netfs_advance_write(struct netfs_io_request *wreq,\n \t_debug(\"part %zx/%zx %zx/%zx\", subreq->len, stream->sreq_max_len, part, len);\n \tsubreq->len += part;\n \tsubreq->nr_segs++;\n-\tstream->submit_extendable_to -= part;\n \n \tif (subreq->len >= stream->sreq_max_len ||\n \t subreq->nr_segs >= stream->sreq_max_segs ||\n@@ -328,16 +363,35 @@ static int netfs_write_folio(struct netfs_io_request *wreq,\n \tstruct netfs_io_stream *stream;\n \tstruct netfs_group *fgroup; /* TODO: Use this with ceph */\n \tstruct netfs_folio *finfo;\n-\tsize_t iter_off = 0;\n+\tstruct bvecq *queue = wreq->load_cursor.bvecq;\n+\tunsigned int slot;\n \tsize_t fsize = folio_size(folio), flen = fsize, foff = 0;\n \tloff_t fpos = folio_pos(folio), i_size;\n \tbool to_eof = false, streamw = false;\n \tbool debug = false;\n+\tint ret;\n \n \t_enter(\"\");\n \n-\tif (rolling_buffer_make_space(&wreq->buffer) < 0)\n-\t\treturn -ENOMEM;\n+\t/* Institute a new bvec queue segment if the current one is full or if\n+\t * we encounter a discontiguity. The discontiguity break is important\n+\t * when it comes to bulk unlocking folios by file range.\n+\t */\n+\tif (bvecq_is_full(queue) ||\n+\t (fpos != wreq->last_end && wreq->last_end > 0)) {\n+\t\tret = bvecq_buffer_make_space(&wreq->load_cursor, GFP_NOFS);\n+\t\tif (ret < 0) {\n+\t\t\tfolio_unlock(folio);\n+\t\t\treturn ret;\n+\t\t}\n+\n+\t\tqueue = wreq->load_cursor.bvecq;\n+\t\tqueue->fpos = fpos;\n+\t\tif (fpos != wreq->last_end)\n+\t\t\tqueue->discontig = true;\n+\t\tbvecq_pos_move(&wreq->dispatch_cursor, queue);\n+\t\twreq->dispatch_cursor.slot = 0;\n+\t}\n \n \t/* netfs_perform_write() may shift i_size around the page or from out\n \t * of the page to beyond it, but cannot move i_size into or through the\n@@ -443,7 +497,13 @@ static int netfs_write_folio(struct netfs_io_request *wreq,\n \t}\n \n \t/* Attach the folio to the rolling buffer. */\n-\trolling_buffer_append(&wreq->buffer, folio, 0);\n+\tslot = queue->nr_slots;\n+\tbvec_set_folio(&queue->bv[slot], folio, flen, 0);\n+\tqueue->nr_slots = slot + 1;\n+\twreq->load_cursor.slot = slot + 1;\n+\twreq->load_cursor.offset = 0;\n+\twreq->last_end = fpos + foff + flen;\n+\ttrace_netfs_bv_slot(queue, slot);\n \n \t/* Move the submission point forward to allow for write-streaming data\n \t * not starting at the front of the page. 
We don't do write-streaming\n@@ -454,7 +514,7 @@ static int netfs_write_folio(struct netfs_io_request *wreq,\n \t */\n \tfor (int s = 0; s < NR_IO_STREAMS; s++) {\n \t\tstream = &wreq->io_streams[s];\n-\t\tstream->submit_off = foff;\n+\t\tstream->submit_off = 0;\n \t\tstream->submit_len = flen;\n \t\tif (!stream->avail ||\n \t\t (stream->source == NETFS_WRITE_TO_CACHE && streamw) ||\n@@ -489,15 +549,11 @@ static int netfs_write_folio(struct netfs_io_request *wreq,\n \t\t\tbreak;\n \t\tstream = &wreq->io_streams[choose_s];\n \n-\t\t/* Advance the iterator(s). */\n-\t\tif (stream->submit_off > iter_off) {\n-\t\t\trolling_buffer_advance(&wreq->buffer, stream->submit_off - iter_off);\n-\t\t\titer_off = stream->submit_off;\n-\t\t}\n+\t\t/* Advance the cursor. */\n+\t\twreq->dispatch_cursor.offset = stream->submit_off;\n \n-\t\tatomic64_set(&wreq->issued_to, fpos + stream->submit_off);\n-\t\tstream->submit_extendable_to = fsize - stream->submit_off;\n-\t\tpart = netfs_advance_write(wreq, stream, fpos + stream->submit_off,\n+\t\tatomic64_set(&wreq->issued_to, fpos + foff + stream->submit_off);\n+\t\tpart = netfs_advance_write(wreq, stream, fpos + foff + stream->submit_off,\n \t\t\t\t\t stream->submit_len, to_eof);\n \t\tstream->submit_off += part;\n \t\tif (part > stream->submit_len)\n@@ -508,9 +564,9 @@ static int netfs_write_folio(struct netfs_io_request *wreq,\n \t\t\tdebug = true;\n \t}\n \n-\tif (fsize > iter_off)\n-\t\trolling_buffer_advance(&wreq->buffer, fsize - iter_off);\n-\tatomic64_set(&wreq->issued_to, fpos + fsize);\n+\tbvecq_pos_step(&wreq->dispatch_cursor);\n+\t/* Order loading the queue before updating the issue_to point */\n+\tatomic64_set_release(&wreq->issued_to, fpos + fsize);\n \n \tif (!debug)\n \t\tkdebug(\"R=%x: No submit\", wreq->debug_id);\n@@ -578,6 +634,11 @@ int netfs_writepages(struct address_space *mapping,\n \t\tgoto couldnt_start;\n \t}\n \n+\tif (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0)\n+\t\tgoto nomem;\n+\tbvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);\n+\tbvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);\n+\n \t__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);\n \ttrace_netfs_write(wreq, netfs_write_trace_writeback);\n \tnetfs_stat(&netfs_n_wh_writepages);\n@@ -602,12 +663,17 @@ int netfs_writepages(struct address_space *mapping,\n \tnetfs_end_issue_write(wreq);\n \n \tmutex_unlock(&ictx->wb_lock);\n+\tbvecq_pos_unset(&wreq->load_cursor);\n+\tbvecq_pos_unset(&wreq->dispatch_cursor);\n \tnetfs_wake_collector(wreq);\n \n \tnetfs_put_request(wreq, netfs_rreq_trace_put_return);\n \t_leave(\" = %d\", error);\n \treturn error;\n \n+nomem:\n+\terror = -ENOMEM;\n+\tnetfs_put_failed_request(wreq);\n couldnt_start:\n \tnetfs_kill_dirty_pages(mapping, wbc, folio);\n out:\n@@ -634,6 +700,15 @@ struct netfs_io_request *netfs_begin_writethrough(struct kiocb *iocb, size_t len\n \t\treturn wreq;\n \t}\n \n+\tif (bvecq_buffer_init(&wreq->load_cursor, GFP_NOFS) < 0) {\n+\t\tnetfs_put_failed_request(wreq);\n+\t\tmutex_unlock(&ictx->wb_lock);\n+\t\treturn ERR_PTR(-ENOMEM);\n+\t}\n+\n+\tbvecq_pos_set(&wreq->dispatch_cursor, &wreq->load_cursor);\n+\tbvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);\n+\n \twreq->io_streams[0].avail = true;\n \ttrace_netfs_write(wreq, netfs_write_trace_writethrough);\n \treturn wreq;\n@@ -649,8 +724,8 @@ int netfs_advance_writethrough(struct netfs_io_request *wreq, struct writeback_c\n \t\t\t struct folio *folio, size_t copied, bool to_page_end,\n \t\t\t struct folio **writethrough_cache)\n 
{\n-\t_enter(\"R=%x ic=%zu ws=%u cp=%zu tp=%u\",\n-\t wreq->debug_id, wreq->buffer.iter.count, wreq->wsize, copied, to_page_end);\n+\t_enter(\"R=%x ws=%u cp=%zu tp=%u\",\n+\t wreq->debug_id, wreq->wsize, copied, to_page_end);\n \n \tif (!*writethrough_cache) {\n \t\tif (folio_test_dirty(folio))\n@@ -692,6 +767,9 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c\n \n \tmutex_unlock(&ictx->wb_lock);\n \n+\tbvecq_pos_unset(&wreq->load_cursor);\n+\tbvecq_pos_unset(&wreq->dispatch_cursor);\n+\n \tif (wreq->iocb)\n \t\tret = -EIOCBQUEUED;\n \telse\n@@ -707,7 +785,7 @@ ssize_t netfs_end_writethrough(struct netfs_io_request *wreq, struct writeback_c\n * @iter: Data to write.\n *\n * Write a monolithic, non-pagecache object back to the server and/or\n- * the cache.\n+ * the cache. There's a maximum of one subrequest per stream.\n */\n int netfs_writeback_single(struct address_space *mapping,\n \t\t\t struct writeback_control *wbc,\n@@ -731,10 +809,18 @@ int netfs_writeback_single(struct address_space *mapping,\n \t\tret = PTR_ERR(wreq);\n \t\tgoto couldnt_start;\n \t}\n-\n-\twreq->buffer.iter = *iter;\n \twreq->len = iov_iter_count(iter);\n \n+\tret = netfs_extract_iter(iter, wreq->len, INT_MAX, 0, &wreq->dispatch_cursor.bvecq, 0);\n+\tif (ret < 0)\n+\t\tgoto cleanup_free;\n+\tif (ret < wreq->len) {\n+\t\tret = -EIO;\n+\t\tgoto cleanup_free;\n+\t}\n+\n+\tbvecq_pos_set(&wreq->collect_cursor, &wreq->dispatch_cursor);\n+\n \t__set_bit(NETFS_RREQ_OFFLOAD_COLLECTION, &wreq->flags);\n \ttrace_netfs_write(wreq, netfs_write_trace_writeback_single);\n \tnetfs_stat(&netfs_n_wh_writepages);\n@@ -754,11 +840,11 @@ int netfs_writeback_single(struct address_space *mapping,\n \t\tsubreq = stream->construct;\n \t\tsubreq->len = wreq->len;\n \t\tstream->submit_len = subreq->len;\n-\t\tstream->submit_extendable_to = round_up(wreq->len, PAGE_SIZE);\n \n \t\tnetfs_issue_write(wreq, stream);\n \t}\n \n+\twreq->submitted = wreq->len;\n \tsmp_wmb(); /* Write lists before ALL_QUEUED. 
*/\n \tset_bit(NETFS_RREQ_ALL_QUEUED, &wreq->flags);\n \n@@ -774,6 +860,8 @@ int netfs_writeback_single(struct address_space *mapping,\n \t_leave(\" = %d\", ret);\n \treturn ret;\n \n+cleanup_free:\n+\tnetfs_put_failed_request(wreq);\n couldnt_start:\n \tmutex_unlock(&ictx->wb_lock);\n \t_leave(\" = %d\", ret);\ndiff --git a/fs/netfs/write_retry.c b/fs/netfs/write_retry.c\nindex 29489a23a220..5df5c34d4610 100644\n--- a/fs/netfs/write_retry.c\n+++ b/fs/netfs/write_retry.c\n@@ -17,6 +17,7 @@\n static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t\t\t struct netfs_io_stream *stream)\n {\n+\tstruct bvecq_pos dispatch_cursor = {};\n \tstruct list_head *next;\n \n \t_enter(\"R=%x[%x:]\", wreq->debug_id, stream->stream_nr);\n@@ -39,12 +40,8 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t\tif (test_bit(NETFS_SREQ_FAILED, &subreq->flags))\n \t\t\t\tbreak;\n \t\t\tif (__test_and_clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags)) {\n-\t\t\t\tstruct iov_iter source;\n-\n-\t\t\t\tnetfs_reset_iter(subreq);\n-\t\t\t\tsource = subreq->io_iter;\n \t\t\t\tnetfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);\n-\t\t\t\tnetfs_reissue_write(stream, subreq, &source);\n+\t\t\t\tnetfs_reissue_write(stream, subreq);\n \t\t\t}\n \t\t}\n \t\treturn;\n@@ -54,11 +51,12 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \n \tdo {\n \t\tstruct netfs_io_subrequest *subreq = NULL, *from, *to, *tmp;\n-\t\tstruct iov_iter source;\n \t\tunsigned long long start, len;\n \t\tsize_t part;\n \t\tbool boundary = false;\n \n+\t\tbvecq_pos_unset(&dispatch_cursor);\n+\n \t\t/* Go through the stream and find the next span of contiguous\n \t\t * data that we then rejig (cifs, for example, needs the wsize\n \t\t * renegotiating) and reissue.\n@@ -70,7 +68,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \n \t\tif (test_bit(NETFS_SREQ_FAILED, &from->flags) ||\n \t\t !test_bit(NETFS_SREQ_NEED_RETRY, &from->flags))\n-\t\t\treturn;\n+\t\t\tgoto out;\n \n \t\tlist_for_each_continue(next, &stream->subrequests) {\n \t\t\tsubreq = list_entry(next, struct netfs_io_subrequest, rreq_link);\n@@ -85,9 +83,8 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t/* Determine the set of buffers we're going to use. Each\n \t\t * subreq gets a subset of a single overall contiguous buffer.\n \t\t */\n-\t\tnetfs_reset_iter(from);\n-\t\tsource = from->io_iter;\n-\t\tsource.count = len;\n+\t\tbvecq_pos_transfer(&dispatch_cursor, &from->dispatch_pos);\n+\t\tbvecq_pos_advance(&dispatch_cursor, from->transferred);\n \n \t\t/* Work through the sublist. 
*/\n \t\tsubreq = from;\n@@ -100,14 +97,20 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t\t__clear_bit(NETFS_SREQ_NEED_RETRY, &subreq->flags);\n \t\t\ttrace_netfs_sreq(subreq, netfs_sreq_trace_retry);\n \n+\t\t\tbvecq_pos_unset(&subreq->dispatch_pos);\n+\t\t\tbvecq_pos_unset(&subreq->content);\n+\n \t\t\t/* Renegotiate max_len (wsize) */\n \t\t\tstream->sreq_max_len = len;\n+\t\t\tstream->sreq_max_segs = INT_MAX;\n \t\t\tstream->prepare_write(subreq);\n \n-\t\t\tpart = umin(len, stream->sreq_max_len);\n-\t\t\tif (unlikely(stream->sreq_max_segs))\n-\t\t\t\tpart = netfs_limit_iter(&source, 0, part, stream->sreq_max_segs);\n-\t\t\tsubreq->len = part;\n+\t\t\tbvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);\n+\t\t\tpart = bvecq_slice(&dispatch_cursor,\n+\t\t\t\t\t umin(len, stream->sreq_max_len),\n+\t\t\t\t\t stream->sreq_max_segs,\n+\t\t\t\t\t &subreq->nr_segs);\n+\t\t\tsubreq->len = subreq->transferred + part;\n \t\t\tsubreq->transferred = 0;\n \t\t\tlen -= part;\n \t\t\tstart += part;\n@@ -116,7 +119,7 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t\t\tboundary = true;\n \n \t\t\tnetfs_get_subrequest(subreq, netfs_sreq_trace_get_resubmit);\n-\t\t\tnetfs_reissue_write(stream, subreq, &source);\n+\t\t\tnetfs_reissue_write(stream, subreq);\n \t\t\tif (subreq == to)\n \t\t\t\tbreak;\n \t\t}\n@@ -173,8 +176,13 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \n \t\t\tstream->prepare_write(subreq);\n \n-\t\t\tpart = umin(len, stream->sreq_max_len);\n+\t\t\tbvecq_pos_set(&subreq->dispatch_pos, &dispatch_cursor);\n+\t\t\tpart = bvecq_slice(&dispatch_cursor,\n+\t\t\t\t\t umin(len, stream->sreq_max_len),\n+\t\t\t\t\t stream->sreq_max_segs,\n+\t\t\t\t\t &subreq->nr_segs);\n \t\t\tsubreq->len = subreq->transferred + part;\n+\n \t\t\tlen -= part;\n \t\t\tstart += part;\n \t\t\tif (!len && boundary) {\n@@ -182,13 +190,16 @@ static void netfs_retry_write_stream(struct netfs_io_request *wreq,\n \t\t\t\tboundary = false;\n \t\t\t}\n \n-\t\t\tnetfs_reissue_write(stream, subreq, &source);\n+\t\t\tnetfs_reissue_write(stream, subreq);\n \t\t\tif (!len)\n \t\t\t\tbreak;\n \n \t\t} while (len);\n \n \t} while (!list_is_head(next, &stream->subrequests));\n+\n+out:\n+\tbvecq_pos_unset(&dispatch_cursor);\n }\n \n /*\ndiff --git a/include/linux/netfs.h b/include/linux/netfs.h\nindex b4602f7b6431..3345c88bbd8e 100644\n--- a/include/linux/netfs.h\n+++ b/include/linux/netfs.h\n@@ -19,12 +19,13 @@\n #include <linux/pagemap.h>\n #include <linux/bvecq.h>\n #include <linux/uio.h>\n-#include <linux/rolling_buffer.h>\n \n enum netfs_sreq_ref_trace;\n typedef struct mempool mempool_t;\n+struct readahead_control;\n+struct netfs_io_request;\n+struct netfs_io_subrequest;\n struct fscache_occupancy;\n-struct folio_queue;\n \n /**\n * folio_start_private_2 - Start an fscache write on a folio. 
[DEPRECATED]\n@@ -137,7 +138,6 @@ struct netfs_io_stream {\n \tunsigned int\t\tsreq_max_segs;\t/* 0 or max number of segments in an iterator */\n \tunsigned int\t\tsubmit_off;\t/* Folio offset we're submitting from */\n \tunsigned int\t\tsubmit_len;\t/* Amount of data left to submit */\n-\tunsigned int\t\tsubmit_extendable_to; /* Amount I/O can be rounded up to */\n \tvoid (*prepare_write)(struct netfs_io_subrequest *subreq);\n \tvoid (*issue_write)(struct netfs_io_subrequest *subreq);\n \t/* Collection tracking */\n@@ -178,6 +178,8 @@ struct netfs_io_subrequest {\n \tstruct netfs_io_request *rreq;\t\t/* Supervising I/O request */\n \tstruct work_struct\twork;\n \tstruct list_head\trreq_link;\t/* Link in rreq->subrequests */\n+\tstruct bvecq_pos\tdispatch_pos;\t/* Bookmark in the combined queue of the start */\n+\tstruct bvecq_pos\tcontent;\t/* The (copied) content of the subrequest */\n \tstruct iov_iter\t\tio_iter;\t/* Iterator for this subrequest */\n \tunsigned long long\tstart;\t\t/* Where to start the I/O */\n \tsize_t\t\t\tlen;\t\t/* Size of the I/O */\n@@ -239,13 +241,13 @@ struct netfs_io_request {\n \tstruct netfs_io_stream\tio_streams[2];\t/* Streams of parallel I/O operations */\n #define NR_IO_STREAMS 2 //wreq->nr_io_streams\n \tstruct netfs_group\t*group;\t\t/* Writeback group being written back */\n-\tstruct rolling_buffer\tbuffer;\t\t/* Unencrypted buffer */\n-#define NETFS_ROLLBUF_PUT_MARK\t\tROLLBUF_MARK_1\n-#define NETFS_ROLLBUF_PAGECACHE_MARK\tROLLBUF_MARK_2\n+\tstruct bvecq_pos\tcollect_cursor;\t/* Clear-up point of I/O buffer */\n+\tstruct bvecq_pos\tload_cursor;\t/* Point at which new folios are loaded in */\n+\tstruct bvecq_pos\tdispatch_cursor; /* Point from which buffers are dispatched */\n \twait_queue_head_t\twaitq;\t\t/* Processor waiter */\n \tvoid\t\t\t*netfs_priv;\t/* Private data for the netfs */\n \tvoid\t\t\t*netfs_priv2;\t/* Private data for the netfs */\n-\tstruct bio_vec\t\t*direct_bv;\t/* DIO buffer list (when handling iovec-iter) */\n+\tunsigned long long\tlast_end;\t/* End pos of last folio submitted */\n \tunsigned long long\tsubmitted;\t/* Amount submitted for I/O so far */\n \tunsigned long long\tlen;\t\t/* Length of the request */\n \tsize_t\t\t\ttransferred;\t/* Amount to be indicated as transferred */\n@@ -258,7 +260,6 @@ struct netfs_io_request {\n \tunsigned long long\tcleaned_to;\t/* Position we've cleaned folios to */\n \tunsigned long long\tabandon_to;\t/* Position to abandon folios to */\n \tpgoff_t\t\t\tno_unlock_folio; /* Don't unlock this folio after read */\n-\tunsigned int\t\tdirect_bv_count; /* Number of elements in direct_bv[] */\n \tunsigned int\t\tdebug_id;\n \tunsigned int\t\trsize;\t\t/* Maximum read size (0 for none) */\n \tunsigned int\t\twsize;\t\t/* Maximum write size (0 for none) */\n@@ -267,7 +268,6 @@ struct netfs_io_request {\n \tspinlock_t\t\tlock;\t\t/* Lock for queuing subreqs */\n \tunsigned char\t\tfront_folio_order; /* Order (size) of front folio */\n \tenum netfs_io_origin\torigin;\t\t/* Origin of the request */\n-\tbool\t\t\tdirect_bv_unpin; /* T if direct_bv[] must be unpinned */\n \trefcount_t\t\tref;\n \tunsigned long\t\tflags;\n #define NETFS_RREQ_IN_PROGRESS\t\t0\t/* Unlocked when the request completes (has ref) */\n@@ -463,12 +463,6 @@ void netfs_end_io_write(struct inode *inode);\n int netfs_start_io_direct(struct inode *inode);\n void netfs_end_io_direct(struct inode *inode);\n \n-/* Miscellaneous APIs. 
*/\n-struct folio_queue *netfs_folioq_alloc(unsigned int rreq_id, gfp_t gfp,\n-\t\t\t\t unsigned int trace /*enum netfs_folioq_trace*/);\n-void netfs_folioq_free(struct folio_queue *folioq,\n-\t\t unsigned int trace /*enum netfs_trace_folioq*/);\n-\n /* Buffer wrangling helpers API. */\n int netfs_alloc_folioq_buffer(struct address_space *mapping,\n \t\t\t struct folio_queue **_buffer,\ndiff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h\nindex fbb094231659..df3d440563ec 100644\n--- a/include/trace/events/netfs.h\n+++ b/include/trace/events/netfs.h\n@@ -213,7 +213,9 @@\n \tEM(netfs_folio_trace_store_copy,\t\"store-copy\")\t\\\n \tEM(netfs_folio_trace_store_plus,\t\"store+\")\t\\\n \tEM(netfs_folio_trace_wthru,\t\t\"wthru\")\t\\\n-\tE_(netfs_folio_trace_wthru_plus,\t\"wthru+\")\n+\tEM(netfs_folio_trace_wthru_plus,\t\"wthru+\")\t\\\n+\tEM(netfs_folio_trace_zero,\t\t\"zero\")\t\t\\\n+\tE_(netfs_folio_trace_zero_ra,\t\t\"zero-ra\")\n \n #define netfs_collect_contig_traces\t\t\t\t\\\n \tEM(netfs_contig_trace_collect,\t\t\"Collect\")\t\\\n@@ -226,13 +228,13 @@\n \tEM(netfs_trace_donate_to_next,\t\t\"to-next\")\t\\\n \tE_(netfs_trace_donate_to_deferred_next,\t\"defer-next\")\n \n-#define netfs_folioq_traces\t\t\t\t\t\\\n-\tEM(netfs_trace_folioq_alloc_buffer,\t\"alloc-buf\")\t\\\n-\tEM(netfs_trace_folioq_clear,\t\t\"clear\")\t\\\n-\tEM(netfs_trace_folioq_delete,\t\t\"delete\")\t\\\n-\tEM(netfs_trace_folioq_make_space,\t\"make-space\")\t\\\n-\tEM(netfs_trace_folioq_rollbuf_init,\t\"roll-init\")\t\\\n-\tE_(netfs_trace_folioq_read_progress,\t\"r-progress\")\n+#define netfs_bvecq_traces\t\t\t\t\t\\\n+\tEM(netfs_trace_bvecq_alloc_buffer,\t\"alloc-buf\")\t\\\n+\tEM(netfs_trace_bvecq_clear,\t\t\"clear\")\t\\\n+\tEM(netfs_trace_bvecq_delete,\t\t\"delete\")\t\\\n+\tEM(netfs_trace_bvecq_make_space,\t\"make-space\")\t\\\n+\tEM(netfs_trace_bvecq_rollbuf_init,\t\"roll-init\")\t\\\n+\tE_(netfs_trace_bvecq_read_progress,\t\"r-progress\")\n \n #ifndef __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY\n #define __NETFS_DECLARE_TRACE_ENUMS_ONCE_ONLY\n@@ -252,7 +254,7 @@ enum netfs_sreq_ref_trace { netfs_sreq_ref_traces } __mode(byte);\n enum netfs_folio_trace { netfs_folio_traces } __mode(byte);\n enum netfs_collect_contig_trace { netfs_collect_contig_traces } __mode(byte);\n enum netfs_donate_trace { netfs_donate_traces } __mode(byte);\n-enum netfs_folioq_trace { netfs_folioq_traces } __mode(byte);\n+enum netfs_bvecq_trace { netfs_bvecq_traces } __mode(byte);\n \n #endif\n \n@@ -276,7 +278,7 @@ netfs_sreq_ref_traces;\n netfs_folio_traces;\n netfs_collect_contig_traces;\n netfs_donate_traces;\n-netfs_folioq_traces;\n+netfs_bvecq_traces;\n \n /*\n * Now redefine the EM() and E_() macros to map the enums to the strings that\n@@ -378,10 +380,10 @@ TRACE_EVENT(netfs_sreq,\n \t\t __entry->len\t= sreq->len;\n \t\t __entry->transferred = sreq->transferred;\n \t\t __entry->start\t= sreq->start;\n-\t\t __entry->slot\t= sreq->io_iter.folioq_slot;\n+\t\t __entry->slot\t= sreq->dispatch_pos.slot;\n \t\t\t ),\n \n-\t TP_printk(\"R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx s=%u e=%d\",\n+\t TP_printk(\"R=%08x[%x] %s %s f=%03x s=%llx %zx/%zx qs=%u e=%d\",\n \t\t __entry->rreq, __entry->index,\n \t\t __print_symbolic(__entry->source, netfs_sreq_sources),\n \t\t __print_symbolic(__entry->what, netfs_sreq_traces),\n@@ -756,27 +758,25 @@ TRACE_EVENT(netfs_collect_stream,\n \t\t __entry->collected_to, __entry->issued_to)\n \t );\n \n-TRACE_EVENT(netfs_folioq,\n-\t TP_PROTO(const struct folio_queue *fq,\n-\t\t enum 
netfs_folioq_trace trace),\n+TRACE_EVENT(netfs_bvecq,\n+\t TP_PROTO(const struct bvecq *bq,\n+\t\t enum netfs_bvecq_trace trace),\n \n-\t TP_ARGS(fq, trace),\n+\t TP_ARGS(bq, trace),\n \n \t TP_STRUCT__entry(\n-\t\t __field(unsigned int,\t\trreq)\n \t\t __field(unsigned int,\t\tid)\n-\t\t __field(enum netfs_folioq_trace,\ttrace)\n+\t\t __field(enum netfs_bvecq_trace,\ttrace)\n \t\t\t ),\n \n \t TP_fast_assign(\n-\t\t __entry->rreq\t= fq ? fq->rreq_id : 0;\n-\t\t __entry->id\t\t= fq ? fq->debug_id : 0;\n+\t\t __entry->id\t\t= bq ? bq->priv : 0;\n \t\t __entry->trace\t= trace;\n \t\t\t ),\n \n-\t TP_printk(\"R=%08x fq=%x %s\",\n-\t\t __entry->rreq, __entry->id,\n-\t\t __print_symbolic(__entry->trace, netfs_folioq_traces))\n+\t TP_printk(\"bq=%x %s\",\n+\t\t __entry->id,\n+\t\t __print_symbolic(__entry->trace, netfs_bvecq_traces))\n \t );\n \n TRACE_EVENT(netfs_bv_slot,\n", "prefixes": [ "18/26" ] }
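
Throughout the retry paths in the patch, bvecq_pos_set()/bvecq_slice() replace the old iov_iter copy/truncate/advance dance: a cursor bookmarks where each subrequest starts in the shared bvec queue, and a slice carves off up to sreq_max_len bytes over at most sreq_max_segs slots, advancing the cursor past what was taken. Below is a minimal userspace model of that carve-up; struct bvecq, struct bvecq_pos and bvecq_slice() are introduced by this series, so the stand-in types and the exact semantics modelled here are assumptions, not the real API.

/* Userspace model of the bvecq cursor/slice pattern (assumed semantics). */
#include <stddef.h>
#include <stdio.h>

#define SLOTS_PER_SEG 4

struct seg {                        /* stand-in for struct bvecq */
	size_t len[SLOTS_PER_SEG];  /* stand-in for bv[slot].bv_len */
	unsigned int nr_slots;
	struct seg *next;
};

struct pos {                        /* stand-in for struct bvecq_pos */
	struct seg *seg;
	unsigned int slot;
	size_t offset;              /* byte offset into the current slot */
};

/* Carve up to max_len bytes off the cursor, using at most max_segs
 * slots; advance the cursor past the slice and report the slot count,
 * mirroring how bvecq_slice() is used to size a subrequest.
 */
static size_t slice(struct pos *cur, size_t max_len,
		    unsigned int max_segs, unsigned int *nr_segs)
{
	size_t taken = 0;

	*nr_segs = 0;
	while (cur->seg && taken < max_len && *nr_segs < max_segs) {
		size_t avail = cur->seg->len[cur->slot] - cur->offset;
		size_t part = avail < max_len - taken ? avail : max_len - taken;

		taken += part;
		cur->offset += part;
		(*nr_segs)++;
		if (cur->offset == cur->seg->len[cur->slot]) {
			/* Slot exhausted: step to the next slot/segment. */
			cur->offset = 0;
			if (++cur->slot >= cur->seg->nr_slots) {
				cur->slot = 0;
				cur->seg = cur->seg->next;
			}
		}
	}
	return taken;
}

int main(void)
{
	struct seg b = { .len = { 4096, 4096 }, .nr_slots = 2, .next = NULL };
	struct seg a = { .len = { 4096, 4096, 4096, 4096 },
			 .nr_slots = 4, .next = &b };
	struct pos cur = { .seg = &a };
	unsigned int nr;

	/* Two subrequests carved from one contiguous span, as in the
	 * retry loops: each gets a subset of the overall buffer.
	 */
	printf("part=%zu segs=%u\n", slice(&cur, 10000, 8, &nr), nr);
	printf("part=%zu segs=%u\n", slice(&cur, 16384, 8, &nr), nr);
	return 0;
}

The point of the cursor is that slicing never copies bio_vecs: each subrequest merely records where in the queue it starts (dispatch_pos) and how many slots it spans (nr_segs), which is why the retry code can drop the per-subrequest iov_iter snapshotting.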
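
The cache-write branch of netfs_issue_write() in the patch rounds a subrequest out to cache DIO boundaries by widening only the first and last bio_vec of the extracted span (the folios in between are virtually contiguous). A small sketch of that arithmetic, assuming dio_size is a power of two, which the bv_offset & (dio_size - 1) mask in the patch already requires:

/* Model of the DIO boundary rounding in the cache-write path. */
#include <stdio.h>

int main(void)
{
	unsigned int dio_size = 512;

	/* First entry: round the start down to a DIO boundary. */
	unsigned int bv_offset = 700, bv_len = 3396;
	unsigned int disp = bv_offset & (dio_size - 1);   /* 700 % 512 = 188 */

	bv_offset -= disp;   /* 512: now DIO-aligned */
	bv_len += disp;      /* 3584: still covers the original bytes */

	/* Last entry: round the end up to a DIO boundary, as the
	 * round_up(bv->bv_len, dio_size) in the patch does.
	 */
	unsigned int last_len = 3000;
	unsigned int rounded = (last_len + dio_size - 1) & ~(dio_size - 1);

	printf("first: off=%u len=%u (pulled back %u)\n", bv_offset, bv_len, disp);
	printf("last: len %u -> %u (padded by %u)\n", last_len, rounded,
	       rounded - last_len);
	return 0;
}

Note that the displacement is also folded into bq->fpos, subreq->start and subreq->len in the patch, so the subrequest's file range stays consistent with the widened buffer.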
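
Finally, the patch upgrades the issued_to handoff between dispatcher and collector: the write side pairs atomic64_set_release() ("order loading the queue before updating the issue_to point") with the collector's atomic64_read_acquire(), so the collector never observes issued_to ahead of the queue entries it refers to. A compact C11-atomics model of that pairing follows; it is an illustration of the ordering idiom only, not the kernel code.

/* Release/acquire publication of a queue cursor, modelling
 * atomic64_set_release() / atomic64_read_acquire().
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t queue_fpos[64];         /* stand-in for bvec queue contents */
static _Atomic uint64_t issued_to;

/* Dispatcher side: fill the queue slot, then publish the new issue
 * point with a release store.
 */
static void publish(unsigned int slot, uint64_t fpos, uint64_t fsize)
{
	queue_fpos[slot] = fpos;
	atomic_store_explicit(&issued_to, fpos + fsize, memory_order_release);
}

/* Collector side: the acquire load pairs with the release store, so
 * every queue entry written before issued_to advanced is visible here.
 */
static uint64_t snapshot(void)
{
	return atomic_load_explicit(&issued_to, memory_order_acquire);
}

int main(void)
{
	publish(0, 0, 4096);
	publish(1, 4096, 4096);
	printf("issued_to=%llu\n", (unsigned long long)snapshot());
	return 0;
}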