From patchwork Tue Jun 27 12:16:34 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Po-Hsu Lin <po-hsu.lin@canonical.com>
X-Patchwork-Id: 781187
From: Po-Hsu Lin <po-hsu.lin@canonical.com>
To: kernel-team@lists.ubuntu.com
Subject: [CVE-2017-7895][T][PATCH 2/4] svcrdma: Do not add XDR padding to xdr_buf page vector
Date: Tue, 27 Jun 2017 20:16:34 +0800
Message-Id: <1498565798-19727-3-git-send-email-po-hsu.lin@canonical.com>
X-Mailer: git-send-email 1.7.9.5
In-Reply-To: <1498565798-19727-1-git-send-email-po-hsu.lin@canonical.com>
References: <1498565798-19727-1-git-send-email-po-hsu.lin@canonical.com>
List-Id: Kernel team discussions
From: Chuck Lever

CVE-2017-7895

An xdr_buf has a head, a vector of pages, and a tail. Each RPC
request is presented to the NFS server contained in an xdr_buf.

The RDMA transport would like to supply the NFS server with only
the NFS WRITE payload bytes in the page vector. In some common
cases, that would allow the NFS server to swap those pages right
into the target file's page cache.

Have the transport's RDMA Read logic put XDR pad bytes in the tail
iovec, and not in the pages that hold the data payload.

The NFSv3 WRITE XDR decoder is finicky about the lengths involved,
so make sure it is looking in the correct places when computing
the total length of the incoming NFS WRITE request.

Signed-off-by: Chuck Lever
Signed-off-by: J. Bruce Fields
(backported from commit 6625d0913771df5f12b9531c8cb8414e55f1c21d)
[Just pick the change for nfs3xdr.c]
Signed-off-by: Po-Hsu Lin
---
 fs/nfsd/nfs3xdr.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/nfsd/nfs3xdr.c b/fs/nfsd/nfs3xdr.c
index ea0a07a..e848abd 100644
--- a/fs/nfsd/nfs3xdr.c
+++ b/fs/nfsd/nfs3xdr.c
@@ -384,7 +384,7 @@ nfs3svc_decode_writeargs(struct svc_rqst *rqstp, __be32 *p,
 	 */
 	hdr = (void*)p - rqstp->rq_arg.head[0].iov_base;
 	dlen = rqstp->rq_arg.head[0].iov_len + rqstp->rq_arg.page_len
-		- hdr;
+		+ rqstp->rq_arg.tail[0].iov_len - hdr;
 	/*
 	 * Round the length of the data which was specified up to
 	 * the next multiple of XDR units and then compare that
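
For reference, below is a minimal stand-alone sketch (not kernel code) of the
length check this hunk changes. The field names mirror the struct xdr_buf /
struct kvec members used above; the type and function names (sketch_xdr_buf,
writeargs_dlen) and the example byte counts are illustrative assumptions only.

/*
 * Stand-alone sketch of the nfs3svc_decode_writeargs() length check.
 * Field names follow struct xdr_buf / struct kvec; everything else is
 * hypothetical and exists only to make the arithmetic concrete.
 */
#include <stddef.h>
#include <stdio.h>

struct sketch_kvec {
	void   *iov_base;
	size_t  iov_len;
};

struct sketch_xdr_buf {
	struct sketch_kvec head[1];   /* RPC + NFS header, plus any inline data */
	size_t             page_len;  /* WRITE payload bytes in the page vector */
	struct sketch_kvec tail[1];   /* XDR pad bytes land here after this series */
};

/*
 * 'p' points just past the decoded WRITE arguments inside head[0], so
 * 'hdr' is the number of header bytes that precede the payload.
 */
static size_t writeargs_dlen(const struct sketch_xdr_buf *arg, const void *p)
{
	size_t hdr = (size_t)((const char *)p -
			      (const char *)arg->head[0].iov_base);

	/* The fixed formula counts the tail as well; without
	 * tail[0].iov_len, padding moved out of the page vector
	 * would make the request look shorter than it is. */
	return arg->head[0].iov_len + arg->page_len
		+ arg->tail[0].iov_len - hdr;
}

int main(void)
{
	char head[128] = { 0 };
	struct sketch_xdr_buf arg = {
		.head     = { { head, 100 } },  /* 100 bytes of header consumed */
		.page_len = 8190,               /* payload held in the pages   */
		.tail     = { { NULL, 2 } },    /* 2 bytes of XDR padding      */
	};
	const void *p = head + 100;             /* decoder position after header */

	/* The decoder rounds the client-claimed count up to a multiple of
	 * 4 XDR bytes and compares it with this value; 8190 rounds to 8192. */
	printf("dlen = %zu\n", writeargs_dlen(&arg, p));  /* prints 8192 */
	return 0;
}

With these example numbers, dropping tail[0].iov_len from the sum would yield
8190, which is smaller than the rounded-up claimed length of 8192, so the
decoder's subsequent comparison would treat a valid WRITE as too short once
the transport keeps the pad bytes in the tail iovec.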