From patchwork Wed Jul 29 04:39:59 2020
X-Patchwork-Submitter: Matthew Ruffell
X-Patchwork-Id: 1338146
From: Matthew Ruffell
To: kernel-team@lists.ubuntu.com
Subject: [SRU][Bionic][PATCH 0/3] NFSv4.1: Interrupted connections cause high
 bandwidth RPC ping-pong between client and server
Date: Wed, 29 Jul 2020 16:39:59 +1200
Message-Id: <20200729044002.18762-1-matthew.ruffell@canonical.com>

BugLink: https://bugs.launchpad.net/bugs/1887607

[Impact]

There is a bug in NFS v4.1 that causes a large number of RPC calls between a
client and server when a previous RPC call is interrupted.
This uses a large amount of bandwidth and can saturate the network.

The symptoms are as follows:

* On NFS clients: attempts to access mounted NFS shares associated with the
  affected server block indefinitely.
* On the network: a storm of repeated RPCs between NFS client and server uses
  a lot of bandwidth. The server answers each RPC with an
  NFS4ERR_SEQ_MISORDERED error.
* On other NFS clients connected to the same NFS server: performance drops
  dramatically.

This occurs during a "false retry", when a client attempts to make a new RPC
call using a slot and sequence number that reference an older, cached call.
This happens when a user process interrupts an RPC call that is in progress.

I had previously fixed this for Disco in bug 1828978, and now a customer has
run into the issue in Bionic. A reproducer is supplied in the testcase
section; this was missing from bug 1828978, since we never determined back
then how the issue actually occurred.

[Fix]

This was fixed upstream in 5.1 with the below commit:

commit 3453d5708b33efe76f40eca1c0ed60923094b971
Author: Trond Myklebust
Date: Wed Jun 20 17:53:34 2018 -0400
Subject: NFSv4.1: Avoid false retries when RPC calls are interrupted

The fix is to pre-emptively increment the sequence number if an RPC call is
interrupted, and, to address corner cases, to interpret the
NFS4ERR_SEQ_MISORDERED error as a sign that we need to locate an appropriate
sequence number between the value we sent and the last successfully acked
SEQUENCE call.
The commit also requires two fixup commits, which landed in 5.5 and 5.8-rc6
respectively:

commit 5c441544f045e679afd6c3c6d9f7aaf5fa5f37b0
Author: Trond Myklebust
Date: Wed Nov 13 08:34:00 2019 +0100
Subject: NFSv4.x: Handle bad/dead sessions correctly in nfs41_sequence_process()

commit 913fadc5b105c3619d9e8d0fe8899ff1593cc737
Author: Anna Schumaker
Date: Wed Jul 8 10:33:40 2020 -0400
Subject: NFS: Fix interrupted slots by sending a solo SEQUENCE operation

Commits 3453d5708b33efe76f40eca1c0ed60923094b971 and
913fadc5b105c3619d9e8d0fe8899ff1593cc737 require small backports to Bionic,
as struct rpc_cred changed to const struct cred in 5.0, and the backports
swap them back to struct rpc_cred since that is how 4.15 works.

[Testcase]

You will need four machines.

The first is a Kerberos KDC. Set up Kerberos correctly and create new service
principals for the NFS server and for the client. I used
nfs/nfskerb.mydomain.com and nfs/client.mydomain.com.

The second machine will be an NFS server with the krb5p share. Add the NFS
server's Kerberos keys to the system's keytab, and set up an NFS server that
exports a directory with sec=krb5p. Example export:

/mnt/secretfolder *.mydomain.com(rw,sync,no_subtree_check,sec=krb5p)

The third machine is a regular NFS server. Export a directory with normal
sec=sys security. Example export:

/mnt/sharedfolder *.mydomain.com(rw,sync)

The fourth is a desktop machine. Add the client's Kerberos keys to the
system's keytab. Mount both NFS shares, making sure to use the NFS v4.2
protocol.
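The two example export entries above can be staged on the servers before
mounting. A minimal sketch, using the example hostnames and paths from this
testcase; on a real server you would append these lines to /etc/exports and
then run "sudo exportfs -ra" to activate them:

```shell
# Sketch only: stage the example export entries from this testcase.
# Paths and the *.mydomain.com host pattern are illustrative; on a real
# server these lines go in /etc/exports, followed by 'sudo exportfs -ra'.
EXPORTS_FILE="$(mktemp)"

cat > "$EXPORTS_FILE" <<'EOF'
/mnt/secretfolder *.mydomain.com(rw,sync,no_subtree_check,sec=krb5p)
/mnt/sharedfolder *.mydomain.com(rw,sync)
EOF

# Sanity-check that the Kerberos share really requests krb5p
# (authentication, integrity, and privacy) rather than plain sec=sys.
grep -q 'sec=krb5p' "$EXPORTS_FILE" && echo "krb5p export staged"
```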
I used the commands:

mount -t nfs4 nfskerb.mydomain.com:/mnt/secretfolder /mnt/secretfolder_client/
mount -t nfs4 nfs.mydomain.com:/mnt/sharedfolder /mnt/sharedfolder_client

Check "mount -l" to ensure that NFS v4.2 is used:

nfskerb.mydomain.com:/mnt/secretfolder on /mnt/secretfolder_client type nfs4
 (rw,relatime,vers=4.2,<...>,sec=krb5p,<...>)
nfs.mydomain.com:/mnt/sharedfolder on /mnt/sharedfolder_client type nfs4
 (rw,relatime,vers=4.2,<...>,sec=sys,<...>)

Generate some files full of random data. I found 20MB from /dev/random works
great. Open each NFS share in its own tab in Nautilus. Copy the random data
files to the sec=sys NFS share. When they are done, cut and then paste the
files into the sec=krb5p NFS share, one at a time. The bug will trigger
either on the first or on a subsequent try; usually fewer than 10 tries are
needed.

There is a test kernel available in the following PPA:
https://launchpad.net/~mruffell/+archive/ubuntu/sf285439-test

If you install the test kernel, files will cut and paste correctly, and NFS
will work as expected.

[Regression Potential]

The changes are localised to NFS v4.1 and v4.2 only; other versions of NFS
are not affected. If a regression occurs, users can downgrade to NFS v4.0 or
v3.x until a fix is made.

The changes only take effect when connections are interrupted, and would not
be invoked under typical, error-free operation. There have been several
attempts to fix this in the past, starting with f9312a541050 "NFSv4.1: Fix
the client behaviour on NFS4ERR_SEQ_FALSE_RETRY" and extending to the commit
mentioned in the fix section, along with its two fixup commits. This seems to
be an ongoing issue where edge cases keep cropping up, and I won't be
surprised if there are further commits down the line.
[Other Info]

When I first submitted this fix for SRU, I believed that the fix was:

commit 02ef04e432babf8fc703104212314e54112ecd2d
Author: Chuck Lever
Date: Mon Feb 11 11:25:25 2019 -0500
Subject: NFS: Account for XDR pad of buf->pages

This is not the case; it was a false positive. What that commit actually did
was break NFSv4 GETACL and FS_LOCATIONS requests. When you tried to
reproduce, those calls were never made since they were broken, and thus could
not be interrupted, so cutting and pasting files worked fine. Once you
applied the fixup commit 29e7ca715f2a0b6c0a99b1aec1b0956d1f271955 to repair
NFSv4 GETACL and FS_LOCATIONS requests, the problem returned, as GETACL and
FS_LOCATIONS were again free to be interrupted and start a high bandwidth
ping-pong.

Anna Schumaker (1):
  NFS: Fix interrupted slots by sending a solo SEQUENCE operation

Trond Myklebust (2):
  NFSv4.1: Avoid false retries when RPC calls are interrupted
  NFSv4.x: Handle bad/dead sessions correctly in nfs41_sequence_process()

 fs/nfs/nfs4proc.c    | 155 +++++++++++++++++++++++++------------------
 fs/nfs/nfs4session.c |   5 +-
 fs/nfs/nfs4session.h |   5 +-
 3 files changed, 96 insertions(+), 69 deletions(-)

Acked-by: Stefan Bader
Acked-by: Andrea Righi