From patchwork Sun May 12 06:09:58 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Andrea Righi <andrea.righi@canonical.com>
X-Patchwork-Id: 1934195
From: Andrea Righi <andrea.righi@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [SRU][N][PATCH 1/1] UBUNTU: SAUCE: Revert "9p: Use netfslib read/write_iter"
Date: Sun, 12 May 2024 08:09:58 +0200
Message-ID: <20240512061136.726299-2-andrea.righi@canonical.com>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240512061136.726299-1-andrea.righi@canonical.com>
References: <20240512061136.726299-1-andrea.righi@canonical.com>
List-Id: Kernel team discussions
Sender: "kernel-team" <kernel-team-bounces@lists.ubuntu.com>

BugLink: https://bugs.launchpad.net/bugs/2056461

Revert commit 80105ed2fd27 ("9p: Use netfslib read/write_iter"), which
appears to introduce a regression in our autopkgtest workflow.

Signed-off-by: Andrea Righi <andrea.righi@canonical.com>
---
 fs/9p/vfs_addr.c       | 293 +++++++++++++++++++++++++++++++----------
 fs/9p/vfs_file.c       |  89 +++++++++++--
 fs/9p/vfs_inode.c      |   5 +-
 fs/9p/vfs_inode_dotl.c |   7 +-
 4 files changed, 309 insertions(+), 85 deletions(-)
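Reviewer note, kept below the '---' cut so it is not part of the applied
commit: the hunks that follow drop the netfslib-based paths
(netfs_file_read_iter/netfs_file_write_iter and the create_write_requests
hook) and restore the older generic_file_*_iter plus p9_client_* code. As a
quick sanity check, a user-space smoke test along these lines exercises both
the cached path and the O_DIRECT path touched by this revert. This is only a
sketch, not the autopkgtest reproducer: the mount point and file name are
made up, and whether the direct path is actually taken also depends on the
9p cache= mount option.

	/* 9p-smoke.c - hypothetical smoke test for 9p buffered + O_DIRECT I/O.
	 * Build:   gcc -O2 -o 9p-smoke 9p-smoke.c
	 * Assumes a 9p share is already mounted, e.g.:
	 *   mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/9p
	 */
	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>
	#include <unistd.h>

	#define TESTPATH "/mnt/9p/smoke-test.dat"	/* made-up path */
	#define BLK 4096	/* O_DIRECT wants aligned buffers and sizes */

	static void die(const char *what)
	{
		perror(what);
		exit(1);
	}

	int main(void)
	{
		char *wbuf, *rbuf;
		int fd;

		if (posix_memalign((void **)&wbuf, BLK, BLK) ||
		    posix_memalign((void **)&rbuf, BLK, BLK))
			die("posix_memalign");
		memset(wbuf, 0xa5, BLK);

		/* Buffered write + fsync: with this revert applied, a cached
		 * mount routes this through v9fs_write_begin()/v9fs_write_end()
		 * and v9fs_vfs_writepage() instead of the netfs write paths. */
		fd = open(TESTPATH, O_CREAT | O_TRUNC | O_RDWR, 0644);
		if (fd < 0)
			die("open(buffered)");
		if (pwrite(fd, wbuf, BLK, 0) != BLK)
			die("pwrite");
		if (fsync(fd))
			die("fsync");
		close(fd);

		/* O_DIRECT read-back: exercises the restored direct I/O path. */
		fd = open(TESTPATH, O_RDONLY | O_DIRECT);
		if (fd < 0)
			die("open(O_DIRECT)");
		if (pread(fd, rbuf, BLK, 0) != BLK)
			die("pread");
		close(fd);

		if (memcmp(wbuf, rbuf, BLK)) {
			fprintf(stderr, "data mismatch\n");
			return 1;
		}
		puts("9p smoke test OK");
		return 0;
	}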
diff --git a/fs/9p/vfs_addr.c b/fs/9p/vfs_addr.c
index 047855033d32..055b672a247d 100644
--- a/fs/9p/vfs_addr.c
+++ b/fs/9p/vfs_addr.c
@@ -19,45 +19,12 @@
 #include <linux/netfs.h>
 #include <net/9p/9p.h>
 #include <net/9p/client.h>
-#include <trace/events/netfs.h>
 
 #include "v9fs.h"
 #include "v9fs_vfs.h"
 #include "cache.h"
 #include "fid.h"
 
-static void v9fs_upload_to_server(struct netfs_io_subrequest *subreq)
-{
-	struct p9_fid *fid = subreq->rreq->netfs_priv;
-	int err, len;
-
-	trace_netfs_sreq(subreq, netfs_sreq_trace_submit);
-	len = p9_client_write(fid, subreq->start, &subreq->io_iter, &err);
-	netfs_write_subrequest_terminated(subreq, len ?: err, false);
-}
-
-static void v9fs_upload_to_server_worker(struct work_struct *work)
-{
-	struct netfs_io_subrequest *subreq =
-		container_of(work, struct netfs_io_subrequest, work);
-
-	v9fs_upload_to_server(subreq);
-}
-
-/*
- * Set up write requests for a writeback slice. We need to add a write request
- * for each write we want to make.
- */
-static void v9fs_create_write_requests(struct netfs_io_request *wreq, loff_t start, size_t len)
-{
-	struct netfs_io_subrequest *subreq;
-
-	subreq = netfs_create_write_request(wreq, NETFS_UPLOAD_TO_SERVER,
-					    start, len, v9fs_upload_to_server_worker);
-	if (subreq)
-		netfs_queue_write_request(subreq);
-}
-
 /**
  * v9fs_issue_read - Issue a read from 9P
  * @subreq: The read to make
@@ -66,10 +33,14 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
 {
 	struct netfs_io_request *rreq = subreq->rreq;
 	struct p9_fid *fid = rreq->netfs_priv;
+	struct iov_iter to;
+	loff_t pos = subreq->start + subreq->transferred;
+	size_t len = subreq->len - subreq->transferred;
 	int total, err;
 
-	total = p9_client_read(fid, subreq->start + subreq->transferred,
-			       &subreq->io_iter, &err);
+	iov_iter_xarray(&to, ITER_DEST, &rreq->mapping->i_pages, pos, len);
+
+	total = p9_client_read(fid, pos, &to, &err);
 
 	/* if we just extended the file size, any portion not in
 	 * cache won't be on server and is zeroes */
@@ -79,42 +50,25 @@ static void v9fs_issue_read(struct netfs_io_subrequest *subreq)
 }
 
 /**
- * v9fs_init_request - Initialise a request
+ * v9fs_init_request - Initialise a read request
  * @rreq: The read request
  * @file: The file being read from
  */
 static int v9fs_init_request(struct netfs_io_request *rreq, struct file *file)
 {
-	struct p9_fid *fid;
-	bool writing = (rreq->origin == NETFS_READ_FOR_WRITE ||
-			rreq->origin == NETFS_WRITEBACK ||
-			rreq->origin == NETFS_WRITETHROUGH ||
-			rreq->origin == NETFS_LAUNDER_WRITE ||
-			rreq->origin == NETFS_UNBUFFERED_WRITE ||
-			rreq->origin == NETFS_DIO_WRITE);
-
-	if (file) {
-		fid = file->private_data;
-		if (!fid)
-			goto no_fid;
-		p9_fid_get(fid);
-	} else {
-		fid = v9fs_fid_find_inode(rreq->inode, writing, INVALID_UID, true);
-		if (!fid)
-			goto no_fid;
-	}
+	struct p9_fid *fid = file->private_data;
+
+	BUG_ON(!fid);
 
 	/* we might need to read from a fid that was opened write-only
 	 * for read-modify-write of page cache, use the writeback fid
 	 * for that */
-	WARN_ON(rreq->origin == NETFS_READ_FOR_WRITE && !(fid->mode & P9_ORDWR));
+	WARN_ON(rreq->origin == NETFS_READ_FOR_WRITE &&
+		!(fid->mode & P9_ORDWR));
+
+	p9_fid_get(fid);
 	rreq->netfs_priv = fid;
 	return 0;
-
-no_fid:
-	WARN_ONCE(1, "folio expected an open fid inode->i_ino=%lx\n",
-		  rreq->inode->i_ino);
-	return -EINVAL;
 }
 
 /**
@@ -132,16 +86,217 @@ const struct netfs_request_ops v9fs_req_ops = {
 	.init_request		= v9fs_init_request,
 	.free_request		= v9fs_free_request,
 	.issue_read		= v9fs_issue_read,
-	.create_write_requests	= v9fs_create_write_requests,
 };
 
+#ifdef CONFIG_9P_FSCACHE
+static void v9fs_write_to_cache_done(void *priv, ssize_t transferred_or_error,
+				     bool was_async)
+{
+	struct v9fs_inode *v9inode = priv;
+	__le32 version;
+
+	if (IS_ERR_VALUE(transferred_or_error) &&
+	    transferred_or_error != -ENOBUFS) {
+		version = cpu_to_le32(v9inode->qid.version);
+		fscache_invalidate(v9fs_inode_cookie(v9inode), &version,
+				   i_size_read(&v9inode->netfs.inode), 0);
+	}
+}
+#endif
+
+static int v9fs_vfs_write_folio_locked(struct folio *folio)
+{
+	struct inode *inode = folio_inode(folio);
+	loff_t start = folio_pos(folio);
+	loff_t i_size = i_size_read(inode);
+	struct iov_iter from;
+	size_t len = folio_size(folio);
+	struct p9_fid *writeback_fid;
+	int err;
+	struct v9fs_inode __maybe_unused *v9inode = V9FS_I(inode);
+	struct fscache_cookie __maybe_unused *cookie = v9fs_inode_cookie(v9inode);
+
+	if (start >= i_size)
+		return 0; /* Simultaneous truncation occurred */
+
+	len = min_t(loff_t, i_size - start, len);
+
+	iov_iter_xarray(&from, ITER_SOURCE, &folio_mapping(folio)->i_pages, start, len);
+
+	writeback_fid = v9fs_fid_find_inode(inode, true, INVALID_UID, true);
+	if (!writeback_fid) {
+		WARN_ONCE(1, "folio expected an open fid inode->i_private=%p\n",
+			  inode->i_private);
+		return -EINVAL;
+	}
+
+	folio_wait_fscache(folio);
+	folio_start_writeback(folio);
+
+	p9_client_write(writeback_fid, start, &from, &err);
+
+#ifdef CONFIG_9P_FSCACHE
+	if (err == 0 &&
+	    fscache_cookie_enabled(cookie) &&
+	    test_bit(FSCACHE_COOKIE_IS_CACHING, &cookie->flags)) {
+		folio_start_fscache(folio);
+		fscache_write_to_cache(v9fs_inode_cookie(v9inode),
+				       folio_mapping(folio), start, len, i_size,
+				       v9fs_write_to_cache_done, v9inode,
+				       true);
+	}
+#endif
+
+	folio_end_writeback(folio);
+	p9_fid_put(writeback_fid);
+
+	return err;
+}
+
+static int v9fs_vfs_writepage(struct page *page, struct writeback_control *wbc)
+{
+	struct folio *folio = page_folio(page);
+	int retval;
+
+	p9_debug(P9_DEBUG_VFS, "folio %p\n", folio);
+
+	retval = v9fs_vfs_write_folio_locked(folio);
+	if (retval < 0) {
+		if (retval == -EAGAIN) {
+			folio_redirty_for_writepage(wbc, folio);
+			retval = 0;
+		} else {
+			mapping_set_error(folio_mapping(folio), retval);
+		}
+	} else
+		retval = 0;
+
+	folio_unlock(folio);
+	return retval;
+}
+
+static int v9fs_launder_folio(struct folio *folio)
+{
+	int retval;
+
+	if (folio_clear_dirty_for_io(folio)) {
+		retval = v9fs_vfs_write_folio_locked(folio);
+		if (retval)
+			return retval;
+	}
+	folio_wait_fscache(folio);
+	return 0;
+}
+
+/**
+ * v9fs_direct_IO - 9P address space operation for direct I/O
+ * @iocb: target I/O control block
+ * @iter: The data/buffer to use
+ *
+ * The presence of v9fs_direct_IO() in the address space ops vector
+ * allowes open() O_DIRECT flags which would have failed otherwise.
+ *
+ * In the non-cached mode, we shunt off direct read and write requests before
+ * the VFS gets them, so this method should never be called.
+ *
+ * Direct IO is not 'yet' supported in the cached mode. Hence when
+ * this routine is called through generic_file_aio_read(), the read/write fails
+ * with an error.
+ *
+ */
+static ssize_t
+v9fs_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
+{
+	struct file *file = iocb->ki_filp;
+	loff_t pos = iocb->ki_pos;
+	ssize_t n;
+	int err = 0;
+
+	if (iov_iter_rw(iter) == WRITE) {
+		n = p9_client_write(file->private_data, pos, iter, &err);
+		if (n) {
+			struct inode *inode = file_inode(file);
+			loff_t i_size = i_size_read(inode);
+
+			if (pos + n > i_size)
+				inode_add_bytes(inode, pos + n - i_size);
+		}
+	} else {
+		n = p9_client_read(file->private_data, pos, iter, &err);
+	}
+	return n ? n : err;
+}
+
+static int v9fs_write_begin(struct file *filp, struct address_space *mapping,
+			    loff_t pos, unsigned int len,
+			    struct page **subpagep, void **fsdata)
+{
+	int retval;
+	struct folio *folio;
+	struct v9fs_inode *v9inode = V9FS_I(mapping->host);
+
+	p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
+
+	/* Prefetch area to be written into the cache if we're caching this
+	 * file. We need to do this before we get a lock on the page in case
+	 * there's more than one writer competing for the same cache block.
+	 */
+	retval = netfs_write_begin(&v9inode->netfs, filp, mapping, pos, len, &folio, fsdata);
+	if (retval < 0)
+		return retval;
+
+	*subpagep = &folio->page;
+	return retval;
+}
+
+static int v9fs_write_end(struct file *filp, struct address_space *mapping,
+			  loff_t pos, unsigned int len, unsigned int copied,
+			  struct page *subpage, void *fsdata)
+{
+	loff_t last_pos = pos + copied;
+	struct folio *folio = page_folio(subpage);
+	struct inode *inode = mapping->host;
+
+	p9_debug(P9_DEBUG_VFS, "filp %p, mapping %p\n", filp, mapping);
+
+	if (!folio_test_uptodate(folio)) {
+		if (unlikely(copied < len)) {
+			copied = 0;
+			goto out;
+		}
+
+		folio_mark_uptodate(folio);
+	}
+
+	/*
+	 * No need to use i_size_read() here, the i_size
+	 * cannot change under us because we hold the i_mutex.
+	 */
+	if (last_pos > inode->i_size) {
+		inode_add_bytes(inode, last_pos - inode->i_size);
+		i_size_write(inode, last_pos);
+#ifdef CONFIG_9P_FSCACHE
+		fscache_update_cookie(v9fs_inode_cookie(V9FS_I(inode)), NULL,
+				      &last_pos);
+#endif
+	}
+	folio_mark_dirty(folio);
+out:
+	folio_unlock(folio);
+	folio_put(folio);
+
+	return copied;
+}
+
 const struct address_space_operations v9fs_addr_operations = {
-	.read_folio = netfs_read_folio,
-	.readahead = netfs_readahead,
-	.dirty_folio = netfs_dirty_folio,
-	.release_folio = netfs_release_folio,
-	.invalidate_folio = netfs_invalidate_folio,
-	.launder_folio = netfs_launder_folio,
-	.direct_IO = noop_direct_IO,
-	.writepages = netfs_writepages,
+	.read_folio = netfs_read_folio,
+	.readahead = netfs_readahead,
+	.dirty_folio = netfs_dirty_folio,
+	.writepage = v9fs_vfs_writepage,
+	.write_begin = v9fs_write_begin,
+	.write_end = v9fs_write_end,
+	.release_folio = netfs_release_folio,
+	.invalidate_folio = netfs_invalidate_folio,
+	.launder_folio = v9fs_launder_folio,
+	.direct_IO = v9fs_direct_IO,
 };
diff --git a/fs/9p/vfs_file.c b/fs/9p/vfs_file.c
index bae330c2f0cf..11cd8d23f6f2 100644
--- a/fs/9p/vfs_file.c
+++ b/fs/9p/vfs_file.c
@@ -353,15 +353,25 @@ static ssize_t
 v9fs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
 {
 	struct p9_fid *fid = iocb->ki_filp->private_data;
+	int ret, err = 0;
 
 	p9_debug(P9_DEBUG_VFS, "fid %d count %zu offset %lld\n",
 		 fid->fid, iov_iter_count(to), iocb->ki_pos);
 
-	if (fid->mode & P9L_DIRECT)
-		return netfs_unbuffered_read_iter(iocb, to);
+	if (!(fid->mode & P9L_DIRECT)) {
+		p9_debug(P9_DEBUG_VFS, "(cached)\n");
+		return generic_file_read_iter(iocb, to);
+	}
+
+	if (iocb->ki_filp->f_flags & O_NONBLOCK)
+		ret = p9_client_read_once(fid, iocb->ki_pos, to, &err);
+	else
+		ret = p9_client_read(fid, iocb->ki_pos, to, &err);
+	if (!ret)
+		return err;
 
-	p9_debug(P9_DEBUG_VFS, "(cached)\n");
-	return netfs_file_read_iter(iocb, to);
+	iocb->ki_pos += ret;
+	return ret;
 }
 
 /*
@@ -397,14 +407,46 @@ v9fs_file_write_iter(struct kiocb *iocb, struct iov_iter *from)
 {
 	struct file *file = iocb->ki_filp;
 	struct p9_fid *fid = file->private_data;
+	ssize_t retval;
+	loff_t origin;
+	int err = 0;
 
 	p9_debug(P9_DEBUG_VFS, "fid %d\n", fid->fid);
 
-	if (fid->mode & (P9L_DIRECT | P9L_NOWRITECACHE))
-		return netfs_unbuffered_write_iter(iocb, from);
+	if (!(fid->mode & (P9L_DIRECT | P9L_NOWRITECACHE))) {
+		p9_debug(P9_DEBUG_CACHE, "(cached)\n");
+		return generic_file_write_iter(iocb, from);
+	}
 
-	p9_debug(P9_DEBUG_CACHE, "(cached)\n");
-	return netfs_file_write_iter(iocb, from);
+	retval = generic_write_checks(iocb, from);
+	if (retval <= 0)
+		return retval;
+
+	origin = iocb->ki_pos;
+	retval = p9_client_write(file->private_data, iocb->ki_pos, from, &err);
+	if (retval > 0) {
+		struct inode *inode = file_inode(file);
+		loff_t i_size;
+		unsigned long pg_start, pg_end;
+
+		pg_start = origin >> PAGE_SHIFT;
+		pg_end = (origin + retval - 1) >> PAGE_SHIFT;
+		if (inode->i_mapping && inode->i_mapping->nrpages)
+			invalidate_inode_pages2_range(inode->i_mapping,
+						      pg_start, pg_end);
+		iocb->ki_pos += retval;
+		i_size = i_size_read(inode);
+		if (iocb->ki_pos > i_size) {
+			inode_add_bytes(inode, iocb->ki_pos - i_size);
+			/*
+			 * Need to serialize against i_size_write() in
+			 * v9fs_stat2inode()
+			 */
+			v9fs_i_size_write(inode, iocb->ki_pos);
+		}
+		return retval;
+	}
+	return err;
 }
 
 static int v9fs_file_fsync(struct file *filp, loff_t start, loff_t end,
@@ -477,7 +519,36 @@ v9fs_file_mmap(struct file *filp, struct vm_area_struct *vma)
 static vm_fault_t
 v9fs_vm_page_mkwrite(struct vm_fault *vmf)
 {
-	return netfs_page_mkwrite(vmf, NULL);
+	struct folio *folio = page_folio(vmf->page);
+	struct file *filp = vmf->vma->vm_file;
+	struct inode *inode = file_inode(filp);
+
+
+	p9_debug(P9_DEBUG_VFS, "folio %p fid %lx\n",
+		 folio, (unsigned long)filp->private_data);
+
+	/* Wait for the page to be written to the cache before we allow it to
+	 * be modified. We then assume the entire page will need writing back.
+	 */
+#ifdef CONFIG_9P_FSCACHE
+	if (folio_test_fscache(folio) &&
+	    folio_wait_fscache_killable(folio) < 0)
+		return VM_FAULT_NOPAGE;
+#endif
+
+	/* Update file times before taking page lock */
+	file_update_time(filp);
+
+	if (folio_lock_killable(folio) < 0)
+		return VM_FAULT_RETRY;
+	if (folio_mapping(folio) != inode->i_mapping)
+		goto out_unlock;
+	folio_wait_stable(folio);
+
+	return VM_FAULT_LOCKED;
+out_unlock:
+	folio_unlock(folio);
+	return VM_FAULT_NOPAGE;
 }
 
 static void v9fs_mmap_vm_close(struct vm_area_struct *vma)
diff --git a/fs/9p/vfs_inode.c b/fs/9p/vfs_inode.c
index 32572982f72e..e9b744b9978c 100644
--- a/fs/9p/vfs_inode.c
+++ b/fs/9p/vfs_inode.c
@@ -373,8 +373,10 @@ void v9fs_evict_inode(struct inode *inode)
 
 	truncate_inode_pages_final(&inode->i_data);
 
+#ifdef CONFIG_9P_FSCACHE
 	version = cpu_to_le32(v9inode->qid.version);
 	netfs_clear_inode_writeback(inode, &version);
+#endif
 
 	clear_inode(inode);
 	filemap_fdatawrite(&inode->i_data);
@@ -1110,7 +1112,7 @@ static int v9fs_vfs_setattr(struct mnt_idmap *idmap,
 	if ((iattr->ia_valid & ATTR_SIZE) &&
 	    iattr->ia_size != i_size_read(inode)) {
 		truncate_setsize(inode, iattr->ia_size);
-		netfs_resize_file(netfs_inode(inode), iattr->ia_size, true);
+		truncate_pagecache(inode, iattr->ia_size);
 
 #ifdef CONFIG_9P_FSCACHE
 		if (v9ses->cache & CACHE_FSCACHE) {
@@ -1178,7 +1180,6 @@ v9fs_stat2inode(struct p9_wstat *stat, struct inode *inode,
 			mode |= inode->i_mode & ~S_IALLUGO;
 		inode->i_mode = mode;
 
-		v9inode->netfs.remote_i_size = stat->length;
 		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
 			v9fs_i_size_write(inode, stat->length);
 		/* not real number of blocks, but 512 byte ones ... */
diff --git a/fs/9p/vfs_inode_dotl.c b/fs/9p/vfs_inode_dotl.c
index 3505227e1704..d0636b99f05b 100644
--- a/fs/9p/vfs_inode_dotl.c
+++ b/fs/9p/vfs_inode_dotl.c
@@ -599,7 +599,7 @@ int v9fs_vfs_setattr_dotl(struct mnt_idmap *idmap,
 	if ((iattr->ia_valid & ATTR_SIZE) &&
 	    iattr->ia_size != i_size_read(inode)) {
 		truncate_setsize(inode, iattr->ia_size);
-		netfs_resize_file(netfs_inode(inode), iattr->ia_size, true);
+		truncate_pagecache(inode, iattr->ia_size);
 
 #ifdef CONFIG_9P_FSCACHE
 		if (v9ses->cache & CACHE_FSCACHE)
@@ -656,7 +656,6 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
 			mode |= inode->i_mode & ~S_IALLUGO;
 		inode->i_mode = mode;
 
-		v9inode->netfs.remote_i_size = stat->st_size;
 		if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE))
 			v9fs_i_size_write(inode, stat->st_size);
 		inode->i_blocks = stat->st_blocks;
@@ -685,10 +684,8 @@ v9fs_stat2inode_dotl(struct p9_stat_dotl *stat, struct inode *inode,
 		inode->i_mode = mode;
 	}
 	if (!(flags & V9FS_STAT2INODE_KEEP_ISIZE) &&
-	    stat->st_result_mask & P9_STATS_SIZE) {
-		v9inode->netfs.remote_i_size = stat->st_size;
+	    stat->st_result_mask & P9_STATS_SIZE)
 		v9fs_i_size_write(inode, stat->st_size);
-	}
 	if (stat->st_result_mask & P9_STATS_BLOCKS)
 		inode->i_blocks = stat->st_blocks;
 }