From patchwork Mon Jun 5 15:30:26 2023
X-Patchwork-Submitter: Khalid Elmously
X-Patchwork-Id: 1790491
From: Khalid Elmously
To: kernel-team@lists.ubuntu.com
Cc: chengen.du@canonical.com
Subject: [SRU][j][PATCH] UBUNTU: SAUCE: Make NFS file-access stale cache behaviour opt-in
Date: Mon, 5 Jun 2023 11:30:26 -0400
Message-Id: <20230605153026.628561-3-khalid.elmously@canonical.com>
In-Reply-To: <20230605153026.628561-1-khalid.elmously@canonical.com>
References: <20230605153026.628561-1-khalid.elmously@canonical.com>

BugLink: https://bugs.launchpad.net/bugs/2022098

The file-access stale cache "refresh" behaviour introduced to fix
https://bugs.launchpad.net/bugs/2003053 has caused a massive increase in
NFS server queries in some cases, which effectively DoSes the NFS server
and can make it unusable.
This SAUCE patch makes the new behaviour opt-in via the "nfs_fasc=1"
parameter of the nfs module.

Suggested-by: Thadeu Lima de Souza Cascardo
Signed-off-by: Khalid Elmously
Acked-by: Dimitri John Ledkov
Acked-by: Tim Gardner
---
 fs/nfs/dir.c | 29 +++++++++++++++++++++--------
 1 file changed, 21 insertions(+), 8 deletions(-)

diff --git a/fs/nfs/dir.c b/fs/nfs/dir.c
index 5d079e4746a77..00472eb22deeb 100644
--- a/fs/nfs/dir.c
+++ b/fs/nfs/dir.c
@@ -2544,6 +2544,10 @@ static DEFINE_SPINLOCK(nfs_access_lru_lock);
 static LIST_HEAD(nfs_access_lru_list);
 static atomic_long_t nfs_access_nr_entries;
 
+static bool nfs_fasc = false;
+module_param(nfs_fasc, bool, 0644);
+MODULE_PARM_DESC(nfs_fasc, "Use file access stale cache for NFS");
+
 static unsigned long nfs_access_max_cachesize = 4*1024*1024;
 module_param(nfs_access_max_cachesize, ulong, 0644);
 MODULE_PARM_DESC(nfs_access_max_cachesize, "NFS access maximum total cache length");
@@ -2718,7 +2722,9 @@ static u64 nfs_access_login_time(const struct task_struct *task,
 static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *cred, u32 *mask, bool may_block)
 {
 	struct nfs_inode *nfsi = NFS_I(inode);
-	u64 login_time = nfs_access_login_time(current, cred);
+	u64 login_time;
+	if (nfs_fasc)
+		login_time = nfs_access_login_time(current, cred);
 	struct nfs_access_entry *cache;
 	bool retry = true;
 	int err;
@@ -2746,9 +2752,11 @@ static int nfs_access_get_cached_locked(struct inode *inode, const struct cred *
 		spin_lock(&inode->i_lock);
 		retry = false;
 	}
-	err = -ENOENT;
-	if ((s64)(login_time - cache->timestamp) > 0)
-		goto out;
+	if (nfs_fasc) {
+		err = -ENOENT;
+		if ((s64)(login_time - cache->timestamp) > 0)
+			goto out;
+	}
 	*mask = cache->mask;
 	list_move_tail(&cache->lru, &nfsi->access_cache_entry_lru);
 	err = 0;
@@ -2767,7 +2775,9 @@ static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cre
 	 * but do it without locking.
 	 */
 	struct nfs_inode *nfsi = NFS_I(inode);
-	u64 login_time = nfs_access_login_time(current, cred);
+	u64 login_time;
+	if (nfs_fasc)
+		login_time = nfs_access_login_time(current, cred);
 	struct nfs_access_entry *cache;
 	int err = -ECHILD;
 	struct list_head *lh;
@@ -2782,8 +2792,10 @@ static int nfs_access_get_cached_rcu(struct inode *inode, const struct cred *cre
 		cache = NULL;
 	if (cache == NULL)
 		goto out;
-	if ((s64)(login_time - cache->timestamp) > 0)
-		goto out;
+	if (nfs_fasc) {
+		if ((s64)(login_time - cache->timestamp) > 0)
+			goto out;
+	}
 	if (nfs_check_cache_invalid(inode, NFS_INO_INVALID_ACCESS))
 		goto out;
 	*mask = cache->mask;
@@ -2850,7 +2862,8 @@ void nfs_access_add_cache(struct inode *inode, struct nfs_access_entry *set)
 	RB_CLEAR_NODE(&cache->rb_node);
 	cache->cred = get_cred(set->cred);
 	cache->mask = set->mask;
-	cache->timestamp = ktime_get_ns();
+	if (nfs_fasc)
+		cache->timestamp = ktime_get_ns();
 	/* The above field assignments must be visible
 	 * before this item appears on the lru. We cannot easily
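[Usage note, not part of the patch: with this change applied, a sketch of how the opt-in could be enabled. The parameter is declared in the nfs module with mode 0644, so it should be settable both as a module option and at runtime via sysfs; the conf file name below is an arbitrary choice.]

```shell
# Persistent: enable the stale-cache behaviour at nfs module load
# (file name under /etc/modprobe.d/ is arbitrary):
echo "options nfs nfs_fasc=1" | sudo tee /etc/modprobe.d/nfs-fasc.conf

# Runtime: the 0644 mode exposes the parameter writable in sysfs:
echo 1 | sudo tee /sys/module/nfs/parameters/nfs_fasc
```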
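[The checks guarded by nfs_fasc above all hinge on one predicate: an access-cache entry is treated as stale when the caller's login time is later than the entry's timestamp, computed as a signed difference of two u64 nanosecond values so the comparison stays correct even if the counter wraps. A standalone userspace sketch of that predicate, with a hypothetical helper name - this is not kernel code:]

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of the staleness test used in the hunks above: "login after
 * entry" means stale. The unsigned subtraction followed by a signed
 * cast handles wraparound the same way the kernel's (s64)(a - b) does. */
static int entry_is_stale(uint64_t login_time, uint64_t timestamp)
{
	return (int64_t)(login_time - timestamp) > 0;
}

int main(void)
{
	/* Login happened after the entry was cached: stale. */
	printf("%d\n", entry_is_stale(200, 100));
	/* Entry is newer than the login: still fresh. */
	printf("%d\n", entry_is_stale(100, 200));
	return 0;
}
```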