get:
Show a patch.

patch:
Update a patch (partial update: only the fields supplied in the request are changed).

put:
Update a patch (full update of the writable fields).
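The method list above maps onto ordinary HTTP calls against this endpoint. Below is a minimal sketch, assuming the third-party `requests` library; `patch_summary` is a hypothetical helper (not part of the Patchwork API), and the URL, patch ID, and field names are taken from the example response shown below:

```python
def patch_summary(patch):
    """Summarise the JSON document returned by GET /api/patches/<id>/.

    `patch` is the decoded response body; the fields used here
    (id, name, state, series) appear in the example response.
    """
    series_ids = ",".join(str(s["id"]) for s in patch.get("series") or [])
    return "%d: %s [%s] series=%s" % (
        patch["id"], patch["name"], patch["state"], series_ids or "-")


# Fetching and updating require network access and (for writes) an API
# token, so they are shown here only as comments:
#
#   import requests
#   url = "http://patchwork.ozlabs.org/api/patches/2216353/"
#   patch = requests.get(url).json()
#   print(patch_summary(patch))
#
#   # PUT/PATCH need authentication; e.g. a partial update of the state:
#   requests.patch(url, json={"state": "accepted"},
#                  headers={"Authorization": "Token <api-token>"})
```

The helper works on any already-decoded response body, so it can also be fed a locally saved copy of the JSON shown below.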

GET /api/patches/2216353/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 2216353,
    "url": "http://patchwork.ozlabs.org/api/patches/2216353/?format=api",
    "web_url": "http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260326104544.509518-14-dhowells@redhat.com/",
    "project": {
        "id": 12,
        "url": "http://patchwork.ozlabs.org/api/projects/12/?format=api",
        "name": "Linux CIFS Client",
        "link_name": "linux-cifs-client",
        "list_id": "linux-cifs.vger.kernel.org",
        "list_email": "linux-cifs@vger.kernel.org",
        "web_url": "",
        "scm_url": "",
        "webscm_url": "",
        "list_archive_url": "",
        "list_archive_url_format": "",
        "commit_url_format": ""
    },
    "msgid": "<20260326104544.509518-14-dhowells@redhat.com>",
    "list_archive_url": null,
    "date": "2026-03-26T10:45:28",
    "name": "[13/26] netfs: Add some tools for managing bvecq chains",
    "commit_ref": null,
    "pull_url": null,
    "state": "new",
    "archived": false,
    "hash": "e744cec08b44aac41ea3f940105969bf1f11253f",
    "submitter": {
        "id": 59,
        "url": "http://patchwork.ozlabs.org/api/people/59/?format=api",
        "name": "David Howells",
        "email": "dhowells@redhat.com"
    },
    "delegate": null,
    "mbox": "http://patchwork.ozlabs.org/project/linux-cifs-client/patch/20260326104544.509518-14-dhowells@redhat.com/mbox/",
    "series": [
        {
            "id": 497565,
            "url": "http://patchwork.ozlabs.org/api/series/497565/?format=api",
            "web_url": "http://patchwork.ozlabs.org/project/linux-cifs-client/list/?series=497565",
            "date": "2026-03-26T10:45:15",
            "name": "netfs: Keep track of folios in a segmented bio_vec[] chain",
            "version": 1,
            "mbox": "http://patchwork.ozlabs.org/series/497565/mbox/"
        }
    ],
    "comments": "http://patchwork.ozlabs.org/api/patches/2216353/comments/",
    "check": "pending",
    "checks": "http://patchwork.ozlabs.org/api/patches/2216353/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "\n <linux-cifs+bounces-10536-incoming=patchwork.ozlabs.org@vger.kernel.org>",
        "X-Original-To": [
            "incoming@patchwork.ozlabs.org",
            "linux-cifs@vger.kernel.org"
        ],
        "Delivered-To": "patchwork-incoming@legolas.ozlabs.org",
        "Authentication-Results": [
            "legolas.ozlabs.org;\n\tdkim=pass (1024-bit key;\n unprotected) header.d=redhat.com header.i=@redhat.com header.a=rsa-sha256\n header.s=mimecast20190719 header.b=Uy4iVOrw;\n\tdkim-atps=neutral",
            "legolas.ozlabs.org;\n spf=pass (sender SPF authorized) smtp.mailfrom=vger.kernel.org\n (client-ip=172.234.253.10; helo=sea.lore.kernel.org;\n envelope-from=linux-cifs+bounces-10536-incoming=patchwork.ozlabs.org@vger.kernel.org;\n receiver=patchwork.ozlabs.org)",
            "smtp.subspace.kernel.org;\n\tdkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com\n header.b=\"Uy4iVOrw\"",
            "smtp.subspace.kernel.org;\n arc=none smtp.client-ip=170.10.133.124",
            "smtp.subspace.kernel.org;\n dmarc=pass (p=quarantine dis=none) header.from=redhat.com",
            "smtp.subspace.kernel.org;\n spf=pass smtp.mailfrom=redhat.com"
        ],
        "Received": [
            "from sea.lore.kernel.org (sea.lore.kernel.org [172.234.253.10])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange x25519)\n\t(No client certificate requested)\n\tby legolas.ozlabs.org (Postfix) with ESMTPS id 4fhLRx0TFSz1yGD\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 26 Mar 2026 22:01:17 +1100 (AEDT)",
            "from smtp.subspace.kernel.org (conduit.subspace.kernel.org\n [100.90.174.1])\n\tby sea.lore.kernel.org (Postfix) with ESMTP id 541633075AA0\n\tfor <incoming@patchwork.ozlabs.org>; Thu, 26 Mar 2026 10:51:46 +0000 (UTC)",
            "from localhost.localdomain (localhost.localdomain [127.0.0.1])\n\tby smtp.subspace.kernel.org (Postfix) with ESMTP id 510933ED5A8;\n\tThu, 26 Mar 2026 10:48:12 +0000 (UTC)",
            "from us-smtp-delivery-124.mimecast.com\n (us-smtp-delivery-124.mimecast.com [170.10.133.124])\n\t(using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))\n\t(No client certificate requested)\n\tby smtp.subspace.kernel.org (Postfix) with ESMTPS id AD1F63ECBE1\n\tfor <linux-cifs@vger.kernel.org>; Thu, 26 Mar 2026 10:48:09 +0000 (UTC)",
            "from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com\n (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by\n relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,\n cipher=TLS_AES_256_GCM_SHA384) id us-mta-70-2aoNTZEbMpuAFj9PGCTVWg-1; Thu,\n 26 Mar 2026 06:48:03 -0400",
            "from mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com\n (mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.93])\n\t(using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)\n\t key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest\n SHA256)\n\t(No client certificate requested)\n\tby mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS\n id 9BE531956065;\n\tThu, 26 Mar 2026 10:48:00 +0000 (UTC)",
            "from warthog.procyon.org.com (unknown [10.44.33.121])\n\tby mx-prod-int-06.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP\n id CC011180075D;\n\tThu, 26 Mar 2026 10:47:53 +0000 (UTC)"
        ],
        "ARC-Seal": "i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116;\n\tt=1774522092; cv=none;\n b=oAGEU85udvUKEEDbcBgiRu+MB/RgBY3TB92wNlGYzfs0Q2no/HVWG2q8Pp99c84lrsSYksoxbqPIU40PH8y/Yq3n26jxJovHQR88y0xWgJwjymLRlk2emb+bbd2ggqrQuDKgb8/Bi6Yh7AMZLH2qWGfVj9BGAVot2amMrjsuBKg=",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; d=subspace.kernel.org;\n\ts=arc-20240116; t=1774522092; c=relaxed/simple;\n\tbh=KDBbASYwY1Jt9g1tjqPa+0s93HJKDlMALHBY+TRuzqw=;\n\th=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References:\n\t MIME-Version;\n b=rgvATvtc8wJ4dfhPAxh7hXQccMgMKTZmwAFfijIlfN5OXJ5iiSf26oouf3GX1dKcNj7NHSeMA79dUvS2R3qKk5uXYukPwN4XbZLKYrCK1U8cnOPBBvLOLPUMmph7UAs4nNaWd+XFBMKz8ZA6vAvRNoInz/fFx9S7p7jPAkDVpYk=",
        "ARC-Authentication-Results": "i=1; smtp.subspace.kernel.org;\n dmarc=pass (p=quarantine dis=none) header.from=redhat.com;\n spf=pass smtp.mailfrom=redhat.com;\n dkim=pass (1024-bit key) header.d=redhat.com header.i=@redhat.com\n header.b=Uy4iVOrw; arc=none smtp.client-ip=170.10.133.124",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n\ts=mimecast20190719; t=1774522088;\n\th=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n\t to:to:cc:cc:mime-version:mime-version:\n\t content-transfer-encoding:content-transfer-encoding:\n\t in-reply-to:in-reply-to:references:references;\n\tbh=yDP/Sm6+MeU+8XN4gJgSUzN+frxmLx3rFrwpkE8Ii+s=;\n\tb=Uy4iVOrwVMPEcaT2Lb9Z/dl5YQYrTae5WGbVC4vM4+hK9oUGM7zGY+BxeZZPocHbMimqRr\n\tsWQKQrpy12quc3rLFEshe5cd5J8+gDknArV9yGjiQ31cpSykJuRtesn8vUpdmAcJJ0u743\n\tEYXEEEStOF8NGINhx9jufWUIB0y/lSQ=",
        "X-MC-Unique": "2aoNTZEbMpuAFj9PGCTVWg-1",
        "X-Mimecast-MFC-AGG-ID": "2aoNTZEbMpuAFj9PGCTVWg_1774522080",
        "From": "David Howells <dhowells@redhat.com>",
        "To": "Christian Brauner <christian@brauner.io>,\n\tMatthew Wilcox <willy@infradead.org>,\n\tChristoph Hellwig <hch@infradead.org>",
        "Cc": "David Howells <dhowells@redhat.com>,\n\tPaulo Alcantara <pc@manguebit.com>,\n\tJens Axboe <axboe@kernel.dk>,\n\tLeon Romanovsky <leon@kernel.org>,\n\tSteve French <sfrench@samba.org>,\n\tChenXiaoSong <chenxiaosong@chenxiaosong.com>,\n\tMarc Dionne <marc.dionne@auristor.com>,\n\tEric Van Hensbergen <ericvh@kernel.org>,\n\tDominique Martinet <asmadeus@codewreck.org>,\n\tIlya Dryomov <idryomov@gmail.com>,\n\tTrond Myklebust <trondmy@kernel.org>,\n\tnetfs@lists.linux.dev,\n\tlinux-afs@lists.infradead.org,\n\tlinux-cifs@vger.kernel.org,\n\tlinux-nfs@vger.kernel.org,\n\tceph-devel@vger.kernel.org,\n\tv9fs@lists.linux.dev,\n\tlinux-erofs@lists.ozlabs.org,\n\tlinux-fsdevel@vger.kernel.org,\n\tlinux-kernel@vger.kernel.org,\n\tPaulo Alcantara <pc@manguebit.org>",
        "Subject": "[PATCH 13/26] netfs: Add some tools for managing bvecq chains",
        "Date": "Thu, 26 Mar 2026 10:45:28 +0000",
        "Message-ID": "<20260326104544.509518-14-dhowells@redhat.com>",
        "In-Reply-To": "<20260326104544.509518-1-dhowells@redhat.com>",
        "References": "<20260326104544.509518-1-dhowells@redhat.com>",
        "Precedence": "bulk",
        "X-Mailing-List": "linux-cifs@vger.kernel.org",
        "List-Id": "<linux-cifs.vger.kernel.org>",
        "List-Subscribe": "<mailto:linux-cifs+subscribe@vger.kernel.org>",
        "List-Unsubscribe": "<mailto:linux-cifs+unsubscribe@vger.kernel.org>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-Scanned-By": "MIMEDefang 3.4.1 on 10.30.177.93"
    },
    "content": "Provide a selection of tools for managing bvec queue chains.  This\nincludes:\n\n (1) Allocation, prepopulation, expansion, shortening and refcounting of\n     bvecqs and bvecq chains.\n\n     This can be used to do things like creating an encryption buffer in\n     cifs or a directory content buffer in afs.  The memory segments will\n     be appropriate disposed off according to the flags on the bvecq.\n\n (2) Management of a bvecq chain as a rolling buffer and the management of\n     positions within it.\n\n (3) Loading folios, slicing chains and clearing content.\n\nSigned-off-by: David Howells <dhowells@redhat.com>\ncc: Paulo Alcantara <pc@manguebit.org>\ncc: Matthew Wilcox <willy@infradead.org>\ncc: Christoph Hellwig <hch@infradead.org>\ncc: linux-cifs@vger.kernel.org\ncc: netfs@lists.linux.dev\ncc: linux-fsdevel@vger.kernel.org\n---\n fs/netfs/Makefile            |   1 +\n fs/netfs/bvecq.c             | 706 +++++++++++++++++++++++++++++++++++\n fs/netfs/internal.h          |   1 +\n fs/netfs/stats.c             |   4 +-\n include/linux/bvecq.h        | 165 +++++++-\n include/linux/iov_iter.h     |   4 +-\n include/linux/netfs.h        |   1 +\n include/trace/events/netfs.h |  24 ++\n lib/iov_iter.c               |  16 +-\n lib/scatterlist.c            |   4 +-\n lib/tests/kunit_iov_iter.c   |  18 +-\n 11 files changed, 919 insertions(+), 25 deletions(-)\n create mode 100644 fs/netfs/bvecq.c",
    "diff": "diff --git a/fs/netfs/Makefile b/fs/netfs/Makefile\nindex b43188d64bd8..e1f12ecb5abf 100644\n--- a/fs/netfs/Makefile\n+++ b/fs/netfs/Makefile\n@@ -3,6 +3,7 @@\n netfs-y := \\\n \tbuffered_read.o \\\n \tbuffered_write.o \\\n+\tbvecq.o \\\n \tdirect_read.o \\\n \tdirect_write.o \\\n \titerator.o \\\ndiff --git a/fs/netfs/bvecq.c b/fs/netfs/bvecq.c\nnew file mode 100644\nindex 000000000000..c71646ea5243\n--- /dev/null\n+++ b/fs/netfs/bvecq.c\n@@ -0,0 +1,706 @@\n+// SPDX-License-Identifier: GPL-2.0-only\n+/* Buffering helpers for bvec queues\n+ *\n+ * Copyright (C) 2025 Red Hat, Inc. All Rights Reserved.\n+ * Written by David Howells (dhowells@redhat.com)\n+ */\n+\n+#include \"internal.h\"\n+\n+void bvecq_dump(const struct bvecq *bq)\n+{\n+\tint b = 0;\n+\n+\tfor (; bq; bq = bq->next, b++) {\n+\t\tint skipz = 0;\n+\n+\t\tpr_notice(\"BQ[%u] %u/%u fp=%llx\\n\", b, bq->nr_slots, bq->max_slots, bq->fpos);\n+\t\tfor (int s = 0; s < bq->nr_slots; s++) {\n+\t\t\tconst struct bio_vec *bv = &bq->bv[s];\n+\n+\t\t\tif (!bv->bv_page && !bv->bv_len && skipz < 2) {\n+\t\t\t\tskipz = 1;\n+\t\t\t\tcontinue;\n+\t\t\t}\n+\t\t\tif (skipz == 1)\n+\t\t\t\tpr_notice(\"BQ[%u:00-%02u] ...\\n\", b, s - 1);\n+\t\t\tskipz = 2;\n+\t\t\tpr_notice(\"BQ[%u:%02u] %10lx %04x %04x %u\\n\",\n+\t\t\t\t  b, s,\n+\t\t\t\t  bv->bv_page ? page_to_pfn(bv->bv_page) : 0,\n+\t\t\t\t  bv->bv_offset, bv->bv_len,\n+\t\t\t\t  bv->bv_page ? page_count(bv->bv_page) : 0);\n+\t\t}\n+\t}\n+}\n+EXPORT_SYMBOL(bvecq_dump);\n+\n+/**\n+ * bvecq_alloc_one - Allocate a single bvecq node with unpopulated slots\n+ * @nr_slots: Number of slots to allocate\n+ * @gfp: The allocation constraints.\n+ *\n+ * Allocate a single bvecq node and initialise the header.  A number of inline\n+ * slots are also allocated, rounded up to fit after the header in a power-of-2\n+ * slab object of up to 512 bytes (up to 29 slots on a 64-bit cpu).  
The slot\n+ * array is not initialised.\n+ *\n+ * Return: The node pointer or NULL on allocation failure.\n+ */\n+struct bvecq *bvecq_alloc_one(size_t nr_slots, gfp_t gfp)\n+{\n+\tstruct bvecq *bq;\n+\tconst size_t max_size = 512;\n+\tconst size_t max_slots = (max_size - sizeof(*bq)) / sizeof(bq->__bv[0]);\n+\tsize_t part = umin(nr_slots, max_slots);\n+\tsize_t size = roundup_pow_of_two(struct_size(bq, __bv, part));\n+\n+\tbq = kmalloc(size, gfp);\n+\tif (bq) {\n+\t\t*bq = (struct bvecq) {\n+\t\t\t.ref\t\t= REFCOUNT_INIT(1),\n+\t\t\t.bv\t\t= bq->__bv,\n+\t\t\t.inline_bv\t= true,\n+\t\t\t.max_slots\t= (size - sizeof(*bq)) / sizeof(bq->__bv[0]),\n+\t\t};\n+\t\tnetfs_stat(&netfs_n_bvecq);\n+\t}\n+\treturn bq;\n+}\n+EXPORT_SYMBOL(bvecq_alloc_one);\n+\n+/**\n+ * bvecq_alloc_chain - Allocate an unpopulated bvecq chain\n+ * @nr_slots: Number of slots to allocate\n+ * @gfp: The allocation constraints.\n+ *\n+ * Allocate a chain of bvecq nodes providing at least the requested cumulative\n+ * number of slots.\n+ *\n+ * Return: The first node pointer or NULL on allocation failure.\n+ */\n+struct bvecq *bvecq_alloc_chain(size_t nr_slots, gfp_t gfp)\n+{\n+\tstruct bvecq *head = NULL, *tail = NULL;\n+\n+\t_enter(\"%zu\", nr_slots);\n+\n+\tfor (;;) {\n+\t\tstruct bvecq *bq;\n+\n+\t\tbq = bvecq_alloc_one(nr_slots, gfp);\n+\t\tif (!bq)\n+\t\t\tgoto oom;\n+\n+\t\tif (tail) {\n+\t\t\ttail->next = bq;\n+\t\t\tbq->prev = tail;\n+\t\t} else {\n+\t\t\thead = bq;\n+\t\t}\n+\t\ttail = bq;\n+\t\tif (tail->max_slots >= nr_slots)\n+\t\t\tbreak;\n+\t\tnr_slots -= tail->max_slots;\n+\t}\n+\n+\treturn head;\n+oom:\n+\tbvecq_put(head);\n+\treturn NULL;\n+}\n+EXPORT_SYMBOL(bvecq_alloc_chain);\n+\n+/**\n+ * bvecq_alloc_buffer - Allocate a bvecq chain and populate with buffers\n+ * @size: Target size of the buffer (can be 0 for an empty buffer)\n+ * @pre_slots: Number of preamble slots to set aside\n+ * @gfp: The allocation constraints.\n+ *\n+ * Allocate a chain of bvecq nodes and populate the 
slots with sufficient pages\n+ * to provide at least the requested amount of space, leaving the first\n+ * @pre_slots slots unset.  The pages allocated may be compound pages larger\n+ * than PAGE_SIZE and thus occupy fewer slots.  The pages have their refcounts\n+ * set to 1 and can be passed to MSG_SPLICE_PAGES.\n+ *\n+ * Return: The first node pointer or NULL on allocation failure.\n+ */\n+struct bvecq *bvecq_alloc_buffer(size_t size, unsigned int pre_slots, gfp_t gfp)\n+{\n+\tstruct bvecq *head = NULL, *tail = NULL, *p = NULL;\n+\tsize_t count = DIV_ROUND_UP(size, PAGE_SIZE);\n+\n+\t_enter(\"%zx,%zx,%u\", size, count, pre_slots);\n+\n+\tdo {\n+\t\tstruct page **pages;\n+\t\tint want, got;\n+\n+\t\tp = bvecq_alloc_one(umin(count, 32 - 3), gfp);\n+\t\tif (!p)\n+\t\t\tgoto oom;\n+\n+\t\tp->free = true;\n+\n+\t\tif (tail) {\n+\t\t\ttail->next = p;\n+\t\t\tp->prev = tail;\n+\t\t} else {\n+\t\t\thead = p;\n+\t\t}\n+\t\ttail = p;\n+\t\tif (!count)\n+\t\t\tbreak;\n+\n+\t\tpages = (struct page **)&p->bv[p->max_slots];\n+\t\tpages -= p->max_slots - pre_slots;\n+\t\tmemset(pages, 0, (p->max_slots - pre_slots) * sizeof(pages[0]));\n+\n+\t\twant = umin(count, p->max_slots - pre_slots);\n+\t\tgot = alloc_pages_bulk(gfp, want, pages);\n+\t\tif (got < want) {\n+\t\t\tfor (int i = 0; i < got; i++)\n+\t\t\t\t__free_page(pages[i]);\n+\t\t\tgoto oom;\n+\t\t}\n+\n+\t\ttail->nr_slots = pre_slots + got;\n+\t\tfor (int i = 0; i < got; i++) {\n+\t\t\tint j = pre_slots + i;\n+\n+\t\t\tset_page_count(pages[i], 1);\n+\t\t\tbvec_set_page(&tail->bv[j], pages[i], PAGE_SIZE, 0);\n+\t\t}\n+\n+\t\tcount -= got;\n+\t\tpre_slots = 0;\n+\t} while (count > 0);\n+\n+\treturn head;\n+oom:\n+\tbvecq_put(head);\n+\treturn NULL;\n+}\n+EXPORT_SYMBOL(bvecq_alloc_buffer);\n+\n+/*\n+ * Free the page pointed to be a segment as necessary.\n+ */\n+static void bvecq_free_seg(struct bvecq *bq, unsigned int seg)\n+{\n+\tif (!bq->free ||\n+\t    !bq->bv[seg].bv_page)\n+\t\treturn;\n+\n+\tif 
(bq->unpin)\n+\t\tunpin_user_page(bq->bv[seg].bv_page);\n+\telse\n+\t\t__free_page(bq->bv[seg].bv_page);\n+}\n+\n+/**\n+ * bvecq_put - Put a ref on a bvec queue\n+ * @bq: The start of the folio queue to free\n+ *\n+ * Put the ref(s) on the nodes in a bvec queue, freeing up the node and the\n+ * page fragments it points to as the refcounts become zero.\n+ */\n+void bvecq_put(struct bvecq *bq)\n+{\n+\tstruct bvecq *next;\n+\n+\tfor (; bq; bq = next) {\n+\t\tif (!refcount_dec_and_test(&bq->ref))\n+\t\t\tbreak;\n+\t\tfor (int seg = 0; seg < bq->nr_slots; seg++)\n+\t\t\tbvecq_free_seg(bq, seg);\n+\t\tnext = bq->next;\n+\t\tnetfs_stat_d(&netfs_n_bvecq);\n+\t\tkfree(bq);\n+\t}\n+}\n+EXPORT_SYMBOL(bvecq_put);\n+\n+/**\n+ * bvecq_expand_buffer - Allocate buffer space into a bvec queue\n+ * @_buffer: Pointer to the bvecq chain to expand (may point to a NULL; updated).\n+ * @_cur_size: Current size of the buffer (updated).\n+ * @size: Target size of the buffer.\n+ * @gfp: The allocation constraints.\n+ */\n+int bvecq_expand_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp)\n+{\n+\tstruct bvecq *tail = *_buffer;\n+\tconst size_t max_slots = 32;\n+\n+\tsize = round_up(size, PAGE_SIZE);\n+\tif (*_cur_size >= size)\n+\t\treturn 0;\n+\n+\tif (tail)\n+\t\twhile (tail->next)\n+\t\t\ttail = tail->next;\n+\n+\tdo {\n+\t\tstruct page *page;\n+\t\tint order = 0;\n+\n+\t\tif (!tail || bvecq_is_full(tail)) {\n+\t\t\tstruct bvecq *p;\n+\n+\t\t\tp = bvecq_alloc_one(max_slots, gfp);\n+\t\t\tif (!p)\n+\t\t\t\treturn -ENOMEM;\n+\t\t\tif (tail) {\n+\t\t\t\ttail->next = p;\n+\t\t\t\tp->prev = tail;\n+\t\t\t} else {\n+\t\t\t\t*_buffer = p;\n+\t\t\t}\n+\t\t\ttail = p;\n+\t\t}\n+\n+\t\tif (size - *_cur_size > PAGE_SIZE)\n+\t\t\torder = umin(ilog2(size - *_cur_size) - PAGE_SHIFT,\n+\t\t\t\t     MAX_PAGECACHE_ORDER);\n+\n+\t\tpage = alloc_pages(gfp | __GFP_COMP, order);\n+\t\tif (!page && order > 0)\n+\t\t\tpage = alloc_pages(gfp | __GFP_COMP, 0);\n+\t\tif 
(!page)\n+\t\t\treturn -ENOMEM;\n+\n+\t\tbvec_set_page(&tail->bv[tail->nr_slots++], page, PAGE_SIZE << order, 0);\n+\t\t*_cur_size += PAGE_SIZE << order;\n+\t} while (*_cur_size < size);\n+\n+\treturn 0;\n+}\n+EXPORT_SYMBOL(bvecq_expand_buffer);\n+\n+/**\n+ * bvecq_shorten_buffer - Shorten a bvec queue buffer\n+ * @bq: The start of the buffer to shorten\n+ * @slot: The slot to start from\n+ * @size: The size to retain\n+ *\n+ * Shorten the content of a bvec queue down to the minimum number of segments,\n+ * starting at the specified segment, to retain the specified size.\n+ *\n+ * Return: 0 if successful; -EMSGSIZE if there is insufficient content.\n+ */\n+int bvecq_shorten_buffer(struct bvecq *bq, unsigned int slot, size_t size)\n+{\n+\tssize_t retain = size;\n+\n+\t/* Skip through the segments we want to keep. */\n+\tfor (; bq; bq = bq->next) {\n+\t\tfor (; slot < bq->nr_slots; slot++) {\n+\t\t\tretain -= bq->bv[slot].bv_len;\n+\t\t\tif (retain < 0)\n+\t\t\t\tgoto found;\n+\t\t}\n+\t\tslot = 0;\n+\t}\n+\tif (WARN_ON_ONCE(retain > 0))\n+\t\treturn -EMSGSIZE;\n+\treturn 0;\n+\n+found:\n+\t/* Shorten the entry to be retained and clean the rest of this bvecq. */\n+\tbq->bv[slot].bv_len += retain;\n+\tslot++;\n+\tfor (int i = slot; i < bq->nr_slots; i++)\n+\t\tbvecq_free_seg(bq, i);\n+\tbq->nr_slots = slot;\n+\n+\t/* Free the queue tail. */\n+\tbvecq_put(bq->next);\n+\tbq->next = NULL;\n+\treturn 0;\n+}\n+EXPORT_SYMBOL(bvecq_shorten_buffer);\n+\n+/**\n+ * bvecq_buffer_init - Initialise a buffer and set position\n+ * @pos: The position to point at the new buffer.\n+ * @gfp: The allocation constraints.\n+ *\n+ * Initialise a rolling buffer.  
We allocate an unpopulated bvecq node to so\n+ * that the pointers can be independently driven by the producer and the\n+ * consumer.\n+ *\n+ * Return 0 if successful; -ENOMEM on allocation failure.\n+ */\n+int bvecq_buffer_init(struct bvecq_pos *pos, gfp_t gfp)\n+{\n+\tstruct bvecq *bq;\n+\n+\tbq = bvecq_alloc_one(13, gfp);\n+\tif (!bq)\n+\t\treturn -ENOMEM;\n+\n+\tpos->bvecq  = bq; /* Comes with a ref. */\n+\tpos->slot   = 0;\n+\tpos->offset = 0;\n+\treturn 0;\n+}\n+\n+/**\n+ * bvecq_buffer_make_space - Start a new bvecq node in a buffer\n+ * @pos: The position of the last node.\n+ * @gfp: The allocation constraints.\n+ *\n+ * Add a new node on to the buffer chain at the specified position, either\n+ * because the previous one is full or because we have a discontiguity to\n+ * contend with, and update @pos to point to it.\n+ *\n+ * Return: 0 if successful; -ENOMEM on allocation failure.\n+ */\n+int bvecq_buffer_make_space(struct bvecq_pos *pos, gfp_t gfp)\n+{\n+\tstruct bvecq *bq, *head = pos->bvecq;\n+\n+\tbq = bvecq_alloc_one(14, gfp);\n+\tif (!bq)\n+\t\treturn -ENOMEM;\n+\tbq->prev = head;\n+\n+\tpos->bvecq = bvecq_get(bq);\n+\tpos->slot = 0;\n+\tpos->offset = 0;\n+\n+\t/* Make sure the initialisation is stored before the next pointer.\n+\t *\n+\t * [!] NOTE: After we set head->next, the consumer is at liberty to\n+\t * immediately delete the old head.\n+\t */\n+\tsmp_store_release(&head->next, bq);\n+\tbvecq_put(head);\n+\treturn 0;\n+}\n+\n+/**\n+ * bvecq_pos_advance - Advance a bvecq position\n+ * @pos: The position to advance.\n+ * @amount: The amount of bytes to advance by.\n+ *\n+ * Advance the specified bvecq position by @amount bytes.  @pos is updated and\n+ * bvecq ref counts may have been manipulated.  
If the position hits the end of\n+ * the queue, then it is left pointing beyond the last slot of the last bvecq\n+ * so that it doesn't break the chain.\n+ */\n+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount)\n+{\n+\tstruct bvecq *bq = pos->bvecq;\n+\tunsigned int slot = pos->slot;\n+\tsize_t offset = pos->offset;\n+\n+\tif (slot >= bq->nr_slots) {\n+\t\tbq = bq->next;\n+\t\tslot = 0;\n+\t}\n+\n+\twhile (amount) {\n+\t\tconst struct bio_vec *bv = &bq->bv[slot];\n+\t\tsize_t part = umin(bv->bv_len - offset, amount);\n+\n+\t\tif (likely(part < bv->bv_len)) {\n+\t\t\toffset += part;\n+\t\t\tbreak;\n+\t\t}\n+\t\tamount -= part;\n+\t\toffset = 0;\n+\t\tslot++;\n+\t\tif (slot >= bq->nr_slots) {\n+\t\t\tif (!bq->next)\n+\t\t\t\tbreak;\n+\t\t\tbq = bq->next;\n+\t\t\tslot = 0;\n+\t\t}\n+\t}\n+\n+\tpos->slot   = slot;\n+\tpos->offset = offset;\n+\tbvecq_pos_move(pos, bq);\n+}\n+\n+/**\n+ * bvecq_zero - Clear memory starting at the bvecq position.\n+ * @pos: The position in the bvecq chain to start clearing.\n+ * @amount: The number of bytes to clear.\n+ *\n+ * Clear memory fragments pointed to by a bvec queue.  @pos is updated and\n+ * bvecq ref counts may have been manipulated.  
If the position hits the end of\n+ * the queue, then it is left pointing beyond the last slot of the last bvecq\n+ * so that it doesn't break the chain.\n+ *\n+ * Return: The number of bytes cleared.\n+ */\n+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount)\n+{\n+\tstruct bvecq *bq = pos->bvecq;\n+\tunsigned int slot = pos->slot;\n+\tssize_t cleared = 0;\n+\tsize_t offset = pos->offset;\n+\n+\tif (WARN_ON_ONCE(!bq))\n+\t\treturn 0;\n+\n+\tif (slot >= bq->nr_slots) {\n+\t\tbq = bq->next;\n+\t\tif (WARN_ON_ONCE(!bq))\n+\t\t\treturn 0;\n+\t\tslot = 0;\n+\t}\n+\n+\tdo {\n+\t\tconst struct bio_vec *bv = &bq->bv[slot];\n+\n+\t\tif (offset < bv->bv_len) {\n+\t\t\tsize_t part = umin(amount - cleared, bv->bv_len - offset);\n+\n+\t\t\tmemzero_page(bv->bv_page, bv->bv_offset + offset, part);\n+\n+\t\t\toffset += part;\n+\t\t\tcleared += part;\n+\t\t}\n+\n+\t\tif (offset >= bv->bv_len) {\n+\t\t\toffset = 0;\n+\t\t\tslot++;\n+\t\t\tif (slot >= bq->nr_slots) {\n+\t\t\t\tif (!bq->next)\n+\t\t\t\t\tbreak;\n+\t\t\t\tbq = bq->next;\n+\t\t\t\tslot = 0;\n+\t\t\t}\n+\t\t}\n+\t} while (cleared < amount);\n+\n+\tbvecq_pos_move(pos, bq);\n+\tpos->slot = slot;\n+\tpos->offset = offset;\n+\treturn cleared;\n+}\n+\n+/**\n+ * bvecq_slice - Find a slice of a bvecq queue\n+ * @pos: The position to start at.\n+ * @max_size: The maximum size of the slice (or ULONG_MAX).\n+ * @max_segs: The maximum number of segments in the slice (or INT_MAX).\n+ * @_nr_segs: Where to put the number of segments (updated).\n+ *\n+ * Determine the size and number of segments that can be obtained the next\n+ * slice of bvec queue up to the maximum size and segment count specified.  The\n+ * slice is also limited if a discontiguity is found.\n+ *\n+ * @pos is updated to the end of the slice.  
If the position hits the end of\n+ * the queue, then it is left pointing beyond the last slot of the last bvecq\n+ * so that it doesn't break the chain.\n+ *\n+ * Return: The number of bytes in the slice.\n+ */\n+size_t bvecq_slice(struct bvecq_pos *pos, size_t max_size,\n+\t\t   unsigned int max_segs, unsigned int *_nr_segs)\n+{\n+\tstruct bvecq *bq;\n+\tunsigned int slot = pos->slot, nsegs = 0;\n+\tsize_t size = 0;\n+\tsize_t offset = pos->offset;\n+\n+\tbq = pos->bvecq;\n+\tfor (;;) {\n+\t\tfor (; slot < bq->nr_slots; slot++) {\n+\t\t\tconst struct bio_vec *bvec = &bq->bv[slot];\n+\n+\t\t\tif (offset < bvec->bv_len && bvec->bv_page) {\n+\t\t\t\tsize_t part = umin(bvec->bv_len - offset, max_size);\n+\n+\t\t\t\tsize += part;\n+\t\t\t\toffset += part;\n+\t\t\t\tmax_size -= part;\n+\t\t\t\tnsegs++;\n+\t\t\t\tif (!max_size || nsegs >= max_segs)\n+\t\t\t\t\tgoto out;\n+\t\t\t}\n+\t\t\toffset = 0;\n+\t\t}\n+\n+\t\t/* pos->bvecq isn't allowed to go NULL as the queue may get\n+\t\t * extended and we would lose our place.\n+\t\t */\n+\t\tif (!bq->next)\n+\t\t\tbreak;\n+\t\tslot = 0;\n+\t\tbq = bq->next;\n+\t\tif (bq->discontig && size > 0)\n+\t\t\tbreak;\n+\t}\n+\n+out:\n+\t*_nr_segs = nsegs;\n+\tif (slot == bq->nr_slots && bq->next) {\n+\t\tbq = bq->next;\n+\t\tslot = 0;\n+\t\toffset = 0;\n+\t}\n+\tbvecq_pos_move(pos, bq);\n+\tpos->slot = slot;\n+\tpos->offset = offset;\n+\treturn size;\n+}\n+\n+/**\n+ * bvecq_extract - Extract a slice of a bvecq queue into a new bvecq queue\n+ * @pos: The position to start at.\n+ * @max_size: The maximum size of the slice (or ULONG_MAX).\n+ * @max_segs: The maximum number of segments in the slice (or INT_MAX).\n+ * @to: Where to put the extraction bvecq chain head (updated).\n+ *\n+ * Allocate a new bvecq and extract into it memory fragments from a slice of\n+ * bvec queue, starting at @pos.  The slice is also limited if a discontiguity\n+ * is found.  No refs are taken on the page.\n+ *\n+ * @pos is updated to the end of the slice.  
If the position hits the end of\n+ * the queue, then it is left pointing beyond the last slot of the last bvecq\n+ * so that it doesn't break the chain.\n+ *\n+ * If successful, *@to is set to point to the head of the newly allocated chain\n+ * and the caller inherits a ref to it.\n+ *\n+ * Return: The number of bytes extracted; -ENOMEM on allocation failure or -EIO\n+ * if no segments were available to extract.\n+ */\n+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t max_size,\n+\t\t      unsigned int max_segs, struct bvecq **to)\n+{\n+\tstruct bvecq_pos tmp_pos;\n+\tstruct bvecq *src, *dst = NULL;\n+\tunsigned int slot = pos->slot, nsegs;\n+\tssize_t extracted = 0;\n+\tsize_t offset = pos->offset, amount;\n+\n+\t*to = NULL;\n+\tif (WARN_ON_ONCE(!max_segs))\n+\t\tmax_segs = INT_MAX;\n+\n+\tbvecq_pos_set(&tmp_pos, pos);\n+\tamount = bvecq_slice(&tmp_pos, max_size, max_segs, &nsegs);\n+\tbvecq_pos_unset(&tmp_pos);\n+\tif (nsegs == 0)\n+\t\treturn -EIO;\n+\n+\tdst = bvecq_alloc_chain(nsegs, GFP_KERNEL);\n+\tif (!dst)\n+\t\treturn -ENOMEM;\n+\t*to = dst;\n+\tmax_segs = nsegs;\n+\tnsegs = 0;\n+\n+\t/* Transcribe the segments */\n+\tsrc = pos->bvecq;\n+\tfor (;;) {\n+\t\tfor (; slot < src->nr_slots; slot++) {\n+\t\t\tconst struct bio_vec *sv = &src->bv[slot];\n+\t\t\tstruct bio_vec *dv = &dst->bv[dst->nr_slots];\n+\n+\t\t\t_debug(\"EXTR BQ=%x[%x] off=%zx am=%zx p=%lx\",\n+\t\t\t       src->priv, slot, offset, amount, page_to_pfn(sv->bv_page));\n+\n+\t\t\tif (offset < sv->bv_len && sv->bv_page) {\n+\t\t\t\tsize_t part = umin(sv->bv_len - offset, amount);\n+\n+\t\t\t\tbvec_set_page(dv, sv->bv_page, part,\n+\t\t\t\t\t      sv->bv_offset + offset);\n+\t\t\t\textracted += part;\n+\t\t\t\tamount -= part;\n+\t\t\t\toffset += part;\n+\t\t\t\ttrace_netfs_bv_slot(dst, dst->nr_slots);\n+\t\t\t\tdst->nr_slots++;\n+\t\t\t\tnsegs++;\n+\t\t\t\tif (bvecq_is_full(dst))\n+\t\t\t\t\tdst = dst->next;\n+\t\t\t\tif (nsegs >= max_segs)\n+\t\t\t\t\tgoto out;\n+\t\t\t\tif (amount == 
0)\n+\t\t\t\t\tgoto out;\n+\t\t\t}\n+\t\t\toffset = 0;\n+\t\t}\n+\n+\t\t/* pos->bvecq isn't allowed to go NULL as the queue may get\n+\t\t * extended and we would lose our place.\n+\t\t */\n+\t\tif (!src->next)\n+\t\t\tbreak;\n+\t\tslot = 0;\n+\t\tsrc = src->next;\n+\t\tif (src->discontig && extracted > 0)\n+\t\t\tbreak;\n+\t}\n+\n+out:\n+\tif (slot == src->nr_slots && src->next) {\n+\t\tsrc = src->next;\n+\t\tslot = 0;\n+\t\toffset = 0;\n+\t}\n+\tbvecq_pos_move(pos, src);\n+\tpos->slot = slot;\n+\tpos->offset = offset;\n+\treturn extracted;\n+}\n+\n+/**\n+ * bvecq_load_from_ra - Allocate a bvecq chain and load from readahead\n+ * @pos: Blank position object to attach the new chain to.\n+ * @ractl: The readahead control context.\n+ *\n+ * Decant the set of folios to be read from the readahead context into a bvecq\n+ * chain.  Each folio occupies one bio_vec element.\n+ *\n+ * Return: Amount of data loaded or -ENOMEM on allocation failure.\n+ */\n+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos, struct readahead_control *ractl)\n+{\n+\tXA_STATE(xas, &ractl->mapping->i_pages, ractl->_index);\n+\tstruct folio *folio;\n+\tstruct bvecq *bq;\n+\tsize_t loaded = 0;\n+\n+\tbq = bvecq_alloc_chain(ractl->_nr_folios, GFP_NOFS);\n+\tif (!bq)\n+\t\treturn -ENOMEM;\n+\n+\tpos->bvecq  = bq;\n+\tpos->slot   = 0;\n+\tpos->offset = 0;\n+\n+\trcu_read_lock();\n+\n+\txas_for_each(&xas, folio, ractl->_index + ractl->_nr_pages - 1) {\n+\t\tsize_t len;\n+\n+\t\tif (xas_retry(&xas, folio))\n+\t\t\tcontinue;\n+\t\tVM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);\n+\n+\t\tlen = folio_size(folio);\n+\t\tbvec_set_folio(&bq->bv[bq->nr_slots++], folio, len, 0);\n+\t\tloaded += len;\n+\t\ttrace_netfs_folio(folio, netfs_folio_trace_read);\n+\n+\t\tif (bq->nr_slots >= bq->max_slots) {\n+\t\t\tbq = bq->next;\n+\t\t\tif (!bq)\n+\t\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\trcu_read_unlock();\n+\n+\tractl->_index += ractl->_nr_pages;\n+\tractl->_nr_pages = 0;\n+\treturn loaded;\n+}\ndiff --git 
a/fs/netfs/internal.h b/fs/netfs/internal.h\nindex 2fcf31de5f2c..ad47bcc1947b 100644\n--- a/fs/netfs/internal.h\n+++ b/fs/netfs/internal.h\n@@ -168,6 +168,7 @@ extern atomic_t netfs_n_wh_retry_write_subreq;\n extern atomic_t netfs_n_wb_lock_skip;\n extern atomic_t netfs_n_wb_lock_wait;\n extern atomic_t netfs_n_folioq;\n+extern atomic_t netfs_n_bvecq;\n \n int netfs_stats_show(struct seq_file *m, void *v);\n \ndiff --git a/fs/netfs/stats.c b/fs/netfs/stats.c\nindex ab6b916addc4..84c2a4bcc762 100644\n--- a/fs/netfs/stats.c\n+++ b/fs/netfs/stats.c\n@@ -48,6 +48,7 @@ atomic_t netfs_n_wh_retry_write_subreq;\n atomic_t netfs_n_wb_lock_skip;\n atomic_t netfs_n_wb_lock_wait;\n atomic_t netfs_n_folioq;\n+atomic_t netfs_n_bvecq;\n \n int netfs_stats_show(struct seq_file *m, void *v)\n {\n@@ -90,9 +91,10 @@ int netfs_stats_show(struct seq_file *m, void *v)\n \t\t   atomic_read(&netfs_n_rh_retry_read_subreq),\n \t\t   atomic_read(&netfs_n_wh_retry_write_req),\n \t\t   atomic_read(&netfs_n_wh_retry_write_subreq));\n-\tseq_printf(m, \"Objs   : rr=%u sr=%u foq=%u wsc=%u\\n\",\n+\tseq_printf(m, \"Objs   : rr=%u sr=%u bq=%u foq=%u wsc=%u\\n\",\n \t\t   atomic_read(&netfs_n_rh_rreq),\n \t\t   atomic_read(&netfs_n_rh_sreq),\n+\t\t   atomic_read(&netfs_n_bvecq),\n \t\t   atomic_read(&netfs_n_folioq),\n \t\t   atomic_read(&netfs_n_wh_wstream_conflict));\n \tseq_printf(m, \"WbLock : skip=%u wait=%u\\n\",\ndiff --git a/include/linux/bvecq.h b/include/linux/bvecq.h\nindex 462125af1cc7..6c58a7fb6472 100644\n--- a/include/linux/bvecq.h\n+++ b/include/linux/bvecq.h\n@@ -17,7 +17,7 @@\n  * iterated over with an ITER_BVECQ iterator.  The list is non-circular; next\n  * and prev are NULL at the ends.\n  *\n- * The bv pointer points to the segment array; this may be __bv if allocated\n+ * The bv pointer points to the bio_vec array; this may be __bv if allocated\n  * together.  
The caller is responsible for determining whether or not this is\n  * the case as the array pointed to by bv may be follow on directly from the\n  * bvecq by accident of allocation (ie. ->bv == ->__bv is *not* sufficient to\n@@ -33,8 +33,8 @@ struct bvecq {\n \tunsigned long long fpos;\t/* File position */\n \trefcount_t\tref;\n \tu32\t\tpriv;\t\t/* Private data */\n-\tu16\t\tnr_segs;\t/* Number of elements in bv[] used */\n-\tu16\t\tmax_segs;\t/* Number of elements allocated in bv[] */\n+\tu16\t\tnr_slots;\t/* Number of elements in bv[] used */\n+\tu16\t\tmax_slots;\t/* Number of elements allocated in bv[] */\n \tbool\t\tinline_bv:1;\t/* T if __bv[] is being used */\n \tbool\t\tfree:1;\t\t/* T if the pages need freeing */\n \tbool\t\tunpin:1;\t/* T if the pages need unpinning, not freeing */\n@@ -43,4 +43,163 @@ struct bvecq {\n \tstruct bio_vec\t__bv[];\t\t/* Default array (if ->inline_bv) */\n };\n \n+/*\n+ * Position in a bio_vec queue.  The bvecq holds a ref on the queue segment it\n+ * points to.\n+ */\n+struct bvecq_pos {\n+\tstruct bvecq\t\t*bvecq;\t\t/* The first bvecq */\n+\tunsigned int\t\toffset;\t\t/* The offset within the starting slot */\n+\tu16\t\t\tslot;\t\t/* The starting slot */\n+};\n+\n+void bvecq_dump(const struct bvecq *bq);\n+struct bvecq *bvecq_alloc_one(size_t nr_slots, gfp_t gfp);\n+struct bvecq *bvecq_alloc_chain(size_t nr_slots, gfp_t gfp);\n+struct bvecq *bvecq_alloc_buffer(size_t size, unsigned int pre_slots, gfp_t gfp);\n+void bvecq_put(struct bvecq *bq);\n+int bvecq_expand_buffer(struct bvecq **_buffer, size_t *_cur_size, ssize_t size, gfp_t gfp);\n+int bvecq_shorten_buffer(struct bvecq *bq, unsigned int slot, size_t size);\n+int bvecq_buffer_init(struct bvecq_pos *pos, gfp_t gfp);\n+int bvecq_buffer_make_space(struct bvecq_pos *pos, gfp_t gfp);\n+void bvecq_pos_advance(struct bvecq_pos *pos, size_t amount);\n+ssize_t bvecq_zero(struct bvecq_pos *pos, size_t amount);\n+size_t bvecq_slice(struct bvecq_pos *pos, size_t 
max_size,\n+\t\t   unsigned int max_segs, unsigned int *_nr_segs);\n+ssize_t bvecq_extract(struct bvecq_pos *pos, size_t max_size,\n+\t\t      unsigned int max_segs, struct bvecq **to);\n+ssize_t bvecq_load_from_ra(struct bvecq_pos *pos, struct readahead_control *ractl);\n+\n+/**\n+ * bvecq_get - Get a ref on a bvecq\n+ * @bq: The bvecq to get a ref on\n+ */\n+static inline struct bvecq *bvecq_get(struct bvecq *bq)\n+{\n+\trefcount_inc(&bq->ref);\n+\treturn bq;\n+}\n+\n+/**\n+ * bvecq_is_full - Determine if a bvecq is full\n+ * @bvecq: The object to query\n+ *\n+ * Return: true if full; false if not.\n+ */\n+static inline bool bvecq_is_full(const struct bvecq *bvecq)\n+{\n+\treturn bvecq->nr_slots >= bvecq->max_slots;\n+}\n+\n+/**\n+ * bvecq_pos_set - Set one position to be the same as another\n+ * @pos: The position object to set\n+ * @at: The source position.\n+ *\n+ * Set @pos to have the same position as @at.  This may take a ref on the\n+ * bvecq pointed to.\n+ */\n+static inline void bvecq_pos_set(struct bvecq_pos *pos, const struct bvecq_pos *at)\n+{\n+\t*pos = *at;\n+\tbvecq_get(pos->bvecq);\n+}\n+\n+/**\n+ * bvecq_pos_unset - Unset a position\n+ * @pos: The position object to unset\n+ *\n+ * Unset @pos.  This does any needed ref cleanup.\n+ */\n+static inline void bvecq_pos_unset(struct bvecq_pos *pos)\n+{\n+\tbvecq_put(pos->bvecq);\n+\tpos->bvecq = NULL;\n+\tpos->slot = 0;\n+\tpos->offset = 0;\n+}\n+\n+/**\n+ * bvecq_pos_transfer - Transfer one position to another, clearing the first\n+ * @pos: The position object to set\n+ * @from: The source position to clear.\n+ *\n+ * Set @pos to have the same position as @from and then clear @from.  
This may\n+ * transfer a ref on the bvecq pointed to.\n+ */\n+static inline void bvecq_pos_transfer(struct bvecq_pos *pos, struct bvecq_pos *from)\n+{\n+\t*pos = *from;\n+\tfrom->bvecq = NULL;\n+\tfrom->slot = 0;\n+\tfrom->offset = 0;\n+}\n+\n+/**\n+ * bvecq_pos_move - Update a position to a new bvecq\n+ * @pos: The position object to update.\n+ * @to: The new bvecq to point at.\n+ *\n+ * Update @pos to point to @to if it doesn't already do so.  This may\n+ * manipulate refs on the bvecqs pointed to.\n+ */\n+static inline void bvecq_pos_move(struct bvecq_pos *pos, struct bvecq *to)\n+{\n+\tstruct bvecq *old = pos->bvecq;\n+\n+\tif (old != to) {\n+\t\tpos->bvecq = bvecq_get(to);\n+\t\tbvecq_put(old);\n+\t}\n+}\n+\n+/**\n+ * bvecq_pos_step - Step a position to the next slot if possible\n+ * @pos: The position object to step.\n+ *\n+ * Update @pos to point to the next slot in the queue if not at the end.  This\n+ * may manipulate refs on the bvecqs pointed to.\n+ *\n+ * Return: true if successful, false if was at the end.\n+ */\n+static inline bool bvecq_pos_step(struct bvecq_pos *pos)\n+{\n+\tstruct bvecq *bq = pos->bvecq;\n+\n+\tpos->slot++;\n+\tpos->offset = 0;\n+\tif (pos->slot <= bq->nr_slots)\n+\t\treturn true;\n+\tif (!bq->next)\n+\t\treturn false;\n+\tbvecq_pos_move(pos, bq->next);\n+\treturn true;\n+}\n+\n+/**\n+ * bvecq_delete_spent - Delete the bvecq at the front if possible\n+ * @pos: The position object to update.\n+ *\n+ * Delete the used up bvecq at the front of the queue that @pos points to if it\n+ * is not the last node in the queue; if it is the last node in the queue, it\n+ * is kept so that the queue doesn't become detached from the other end.  This\n+ * may manipulate refs on the bvecqs pointed to.\n+ */\n+static inline struct bvecq *bvecq_delete_spent(struct bvecq_pos *pos)\n+{\n+\tstruct bvecq *spent = pos->bvecq;\n+\t/* Read the contents of the queue node after the pointer to it. 
*/\n+\tstruct bvecq *next = smp_load_acquire(&spent->next);\n+\n+\tif (!next)\n+\t\treturn NULL;\n+\tnext->prev = NULL;\n+\tspent->next = NULL;\n+\tbvecq_put(spent);\n+\tpos->bvecq = next; /* We take spent's ref */\n+\tpos->slot = 0;\n+\tpos->offset = 0;\n+\treturn next;\n+}\n+\n #endif /* _LINUX_BVECQ_H */\ndiff --git a/include/linux/iov_iter.h b/include/linux/iov_iter.h\nindex 999607ece481..309642b3901f 100644\n--- a/include/linux/iov_iter.h\n+++ b/include/linux/iov_iter.h\n@@ -152,7 +152,7 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,\n \tunsigned int slot = iter->bvecq_slot;\n \tsize_t progress = 0, skip = iter->iov_offset;\n \n-\tif (slot == bq->nr_segs) {\n+\tif (slot == bq->nr_slots) {\n \t\t/* The iterator may have been extended. */\n \t\tbq = bq->next;\n \t\tslot = 0;\n@@ -176,7 +176,7 @@ size_t iterate_bvecq(struct iov_iter *iter, size_t len, void *priv, void *priv2,\n \t\tif (skip >= bvec->bv_len) {\n \t\t\tskip = 0;\n \t\t\tslot++;\n-\t\t\tif (slot >= bq->nr_segs) {\n+\t\t\tif (slot >= bq->nr_slots) {\n \t\t\t\tif (!bq->next)\n \t\t\t\t\tbreak;\n \t\t\t\tbq = bq->next;\ndiff --git a/include/linux/netfs.h b/include/linux/netfs.h\nindex cc56b6512769..5bc48aacf7f6 100644\n--- a/include/linux/netfs.h\n+++ b/include/linux/netfs.h\n@@ -17,6 +17,7 @@\n #include <linux/workqueue.h>\n #include <linux/fs.h>\n #include <linux/pagemap.h>\n+#include <linux/bvecq.h>\n #include <linux/uio.h>\n #include <linux/rolling_buffer.h>\n \ndiff --git a/include/trace/events/netfs.h b/include/trace/events/netfs.h\nindex b8236f9e940e..fbb094231659 100644\n--- a/include/trace/events/netfs.h\n+++ b/include/trace/events/netfs.h\n@@ -779,6 +779,30 @@ TRACE_EVENT(netfs_folioq,\n \t\t      __print_symbolic(__entry->trace, netfs_folioq_traces))\n \t    );\n \n+TRACE_EVENT(netfs_bv_slot,\n+\t    TP_PROTO(const struct bvecq *bq, int slot),\n+\n+\t    TP_ARGS(bq, slot),\n+\n+\t    TP_STRUCT__entry(\n+\t\t    __field(unsigned long,\t\tpfn)\n+\t\t    
__field(unsigned int,\t\toffset)\n+\t\t    __field(unsigned int,\t\tlen)\n+\t\t    __field(unsigned int,\t\tslot)\n+\t\t\t     ),\n+\n+\t    TP_fast_assign(\n+\t\t    __entry->slot = slot;\n+\t\t    __entry->pfn = page_to_pfn(bq->bv[slot].bv_page);\n+\t\t    __entry->offset = bq->bv[slot].bv_offset;\n+\t\t    __entry->len = bq->bv[slot].bv_len;\n+\t\t\t   ),\n+\n+\t    TP_printk(\"bq[%x] p=%lx %x-%x\",\n+\t\t      __entry->slot,\n+\t\t      __entry->pfn, __entry->offset, __entry->offset + __entry->len)\n+\t    );\n+\n #undef EM\n #undef E_\n #endif /* _TRACE_NETFS_H */\ndiff --git a/lib/iov_iter.c b/lib/iov_iter.c\nindex df8d037894b1..4f091e6d4a22 100644\n--- a/lib/iov_iter.c\n+++ b/lib/iov_iter.c\n@@ -580,7 +580,7 @@ static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)\n \t\treturn;\n \ti->count -= by;\n \n-\tif (slot >= bq->nr_segs) {\n+\tif (slot >= bq->nr_slots) {\n \t\tbq = bq->next;\n \t\tslot = 0;\n \t}\n@@ -593,7 +593,7 @@ static void iov_iter_bvecq_advance(struct iov_iter *i, size_t by)\n \t\t\tbreak;\n \t\tby -= len;\n \t\tslot++;\n-\t\tif (slot >= bq->nr_segs && bq->next) {\n+\t\tif (slot >= bq->nr_slots && bq->next) {\n \t\t\tbq = bq->next;\n \t\t\tslot = 0;\n \t\t}\n@@ -662,7 +662,7 @@ static void iov_iter_bvecq_revert(struct iov_iter *i, size_t unroll)\n \n \t\tif (slot == 0) {\n \t\t\tbq = bq->prev;\n-\t\t\tslot = bq->nr_segs;\n+\t\t\tslot = bq->nr_slots;\n \t\t}\n \t\tslot--;\n \n@@ -947,7 +947,7 @@ static unsigned long iov_iter_alignment_bvecq(const struct iov_iter *iter)\n \t\treturn res;\n \n \tfor (bq = iter->bvecq; bq; bq = bq->next) {\n-\t\tfor (; slot < bq->nr_segs; slot++) {\n+\t\tfor (; slot < bq->nr_slots; slot++) {\n \t\t\tconst struct bio_vec *bvec = &bq->bv[slot];\n \t\t\tsize_t part = umin(bvec->bv_len - skip, size);\n \n@@ -1331,7 +1331,7 @@ static size_t iov_npages_bvecq(const struct iov_iter *iter, size_t maxpages)\n \tsize_t size = iter->count;\n \n \tfor (bq = iter->bvecq; bq; bq = bq->next) {\n-\t\tfor (; slot < 
bq->nr_segs; slot++) {\n+\t\tfor (; slot < bq->nr_slots; slot++) {\n \t\t\tconst struct bio_vec *bvec = &bq->bv[slot];\n \t\t\tsize_t offs = (bvec->bv_offset + skip) % PAGE_SIZE;\n \t\t\tsize_t part = umin(bvec->bv_len - skip, size);\n@@ -1731,7 +1731,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,\n \tunsigned int seg = iter->bvecq_slot, count = 0, nr = 0;\n \tsize_t extracted = 0, offset = iter->iov_offset;\n \n-\tif (seg >= bvecq->nr_segs) {\n+\tif (seg >= bvecq->nr_slots) {\n \t\tbvecq = bvecq->next;\n \t\tif (WARN_ON_ONCE(!bvecq))\n \t\t\treturn 0;\n@@ -1763,7 +1763,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,\n \t\tif (offset >= blen) {\n \t\t\toffset = 0;\n \t\t\tseg++;\n-\t\t\tif (seg >= bvecq->nr_segs) {\n+\t\t\tif (seg >= bvecq->nr_slots) {\n \t\t\t\tif (!bvecq->next) {\n \t\t\t\t\tWARN_ON_ONCE(extracted < iter->count);\n \t\t\t\t\tbreak;\n@@ -1816,7 +1816,7 @@ static ssize_t iov_iter_extract_bvecq_pages(struct iov_iter *iter,\n \t\tif (offset >= blen) {\n \t\t\toffset = 0;\n \t\t\tseg++;\n-\t\t\tif (seg >= bvecq->nr_segs) {\n+\t\t\tif (seg >= bvecq->nr_slots) {\n \t\t\t\tif (!bvecq->next) {\n \t\t\t\t\tWARN_ON_ONCE(extracted < iter->count);\n \t\t\t\t\tbreak;\ndiff --git a/lib/scatterlist.c b/lib/scatterlist.c\nindex 03e3883a1a2d..93a3d194a914 100644\n--- a/lib/scatterlist.c\n+++ b/lib/scatterlist.c\n@@ -1345,7 +1345,7 @@ static ssize_t extract_bvecq_to_sg(struct iov_iter *iter,\n \tssize_t ret = 0;\n \tsize_t offset = iter->iov_offset;\n \n-\tif (seg >= bvecq->nr_segs) {\n+\tif (seg >= bvecq->nr_slots) {\n \t\tbvecq = bvecq->next;\n \t\tif (WARN_ON_ONCE(!bvecq))\n \t\t\treturn 0;\n@@ -1373,7 +1373,7 @@ static ssize_t extract_bvecq_to_sg(struct iov_iter *iter,\n \t\tif (offset >= blen) {\n \t\t\toffset = 0;\n \t\t\tseg++;\n-\t\t\tif (seg >= bvecq->nr_segs) {\n+\t\t\tif (seg >= bvecq->nr_slots) {\n \t\t\t\tif (!bvecq->next) {\n \t\t\t\t\tWARN_ON_ONCE(ret < iter->count);\n \t\t\t\t\tbreak;\ndiff --git 
a/lib/tests/kunit_iov_iter.c b/lib/tests/kunit_iov_iter.c\nindex 5bc941f64343..ff0621636ff1 100644\n--- a/lib/tests/kunit_iov_iter.c\n+++ b/lib/tests/kunit_iov_iter.c\n@@ -543,28 +543,28 @@ static void iov_kunit_destroy_bvecq(void *data)\n \n \tfor (bq = data; bq; bq = next) {\n \t\tnext = bq->next;\n-\t\tfor (int i = 0; i < bq->nr_segs; i++)\n+\t\tfor (int i = 0; i < bq->nr_slots; i++)\n \t\t\tif (bq->bv[i].bv_page)\n \t\t\t\tput_page(bq->bv[i].bv_page);\n \t\tkfree(bq);\n \t}\n }\n \n-static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned int max_segs)\n+static struct bvecq *iov_kunit_alloc_bvecq(struct kunit *test, unsigned int max_slots)\n {\n \tstruct bvecq *bq;\n \n-\tbq = kzalloc(struct_size(bq, __bv, max_segs), GFP_KERNEL);\n+\tbq = kzalloc(struct_size(bq, __bv, max_slots), GFP_KERNEL);\n \tKUNIT_ASSERT_NOT_ERR_OR_NULL(test, bq);\n-\tbq->max_segs = max_segs;\n+\tbq->max_slots = max_slots;\n \treturn bq;\n }\n \n-static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned int max_segs)\n+static struct bvecq *iov_kunit_create_bvecq(struct kunit *test, unsigned int max_slots)\n {\n \tstruct bvecq *bq;\n \n-\tbq = iov_kunit_alloc_bvecq(test, max_segs);\n+\tbq = iov_kunit_alloc_bvecq(test, max_slots);\n \tkunit_add_action_or_reset(test, iov_kunit_destroy_bvecq, bq);\n \treturn bq;\n }\n@@ -578,13 +578,13 @@ static void __init iov_kunit_load_bvecq(struct kunit *test,\n \tsize_t size = 0;\n \n \tfor (int i = 0; i < npages; i++) {\n-\t\tif (bq->nr_segs >= bq->max_segs) {\n+\t\tif (bq->nr_slots >= bq->max_slots) {\n \t\t\tbq->next = iov_kunit_alloc_bvecq(test, 8);\n \t\t\tbq->next->prev = bq;\n \t\t\tbq = bq->next;\n \t\t}\n-\t\tbvec_set_page(&bq->bv[bq->nr_segs], pages[i], PAGE_SIZE, 0);\n-\t\tbq->nr_segs++;\n+\t\tbvec_set_page(&bq->bv[bq->nr_slots], pages[i], PAGE_SIZE, 0);\n+\t\tbq->nr_slots++;\n \t\tsize += PAGE_SIZE;\n \t}\n \tiov_iter_bvec_queue(iter, dir, bq_head, 0, 0, size);\n",
    "prefixes": [
        "13/26"
    ]
}