From patchwork Fri May 27 10:06:21 2016
X-Patchwork-Submitter: Paolo Bonzini
X-Patchwork-Id: 627056
From: Paolo Bonzini
To: qemu-devel@nongnu.org
Cc: qemu-stable@nongnu.org
Date: Fri, 27 May 2016 12:06:21 +0200
Message-Id: <1464343604-517-9-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1464343604-517-1-git-send-email-pbonzini@redhat.com>
References: <1464343604-517-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PULL 08/31] nbd: Don't trim unrequested bytes

From: Eric Blake

Similar to commit df7b97ff, we are mishandling clients that issue an
unaligned NBD_CMD_TRIM request, potentially trimming bytes that occur
before their request, which in turn can cause unintended data loss
(unlikely in practice, since most clients are sane and issue aligned
trim requests).  However, while we fixed read and write by switching
to the byte interfaces of blk_, we don't yet have a byte interface
for discard.  On the other hand, trim is advisory, so rounding the
user's request to simply ignore the first and last unaligned sectors
(or the entire request, if it is sub-sector in length) is just fine.
CC: qemu-stable@nongnu.org
Signed-off-by: Eric Blake
Message-Id: <1464173965-9694-1-git-send-email-eblake@redhat.com>
Signed-off-by: Paolo Bonzini
---
 nbd/server.c | 20 ++++++++++++++------
 1 file changed, 14 insertions(+), 6 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index fa862cd..b2cfeb9 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1153,12 +1153,20 @@ static void nbd_trip(void *opaque)
         break;
     case NBD_CMD_TRIM:
         TRACE("Request type is TRIM");
-        ret = blk_co_discard(exp->blk, (request.from + exp->dev_offset)
-                                       / BDRV_SECTOR_SIZE,
-                             request.len / BDRV_SECTOR_SIZE);
-        if (ret < 0) {
-            LOG("discard failed");
-            reply.error = -ret;
+        /* Ignore unaligned head or tail, until block layer adds byte
+         * interface */
+        if (request.len >= BDRV_SECTOR_SIZE) {
+            request.len -= (request.from + request.len) % BDRV_SECTOR_SIZE;
+            ret = blk_co_discard(exp->blk,
+                                 DIV_ROUND_UP(request.from + exp->dev_offset,
+                                              BDRV_SECTOR_SIZE),
+                                 request.len / BDRV_SECTOR_SIZE);
+            if (ret < 0) {
+                LOG("discard failed");
+                reply.error = -ret;
+            }
+        } else {
+            TRACE("trim request too small, ignoring");
         }
         if (nbd_co_send_reply(req, &reply, 0) < 0) {
             goto out;
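
[Editor's illustration, not part of the patch: a minimal standalone sketch
of the head/tail rounding the TRIM path above performs, assuming 512-byte
sectors (standing in for BDRV_SECTOR_SIZE) and omitting the export's
dev_offset for clarity.  The clamp_trim() helper and the printf driver are
hypothetical, added only to show which sectors would actually be discarded.]

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SECTOR_SIZE 512ULL                        /* stands in for BDRV_SECTOR_SIZE */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Clamp a byte-granularity trim request [from, from + len) to whole
 * sectors, mirroring the patched TRIM path: align the tail down, round
 * the head up, and drop sub-sector requests entirely, so that nothing
 * outside the requested range is ever discarded. */
static void clamp_trim(uint64_t from, uint64_t len)
{
    if (len < SECTOR_SIZE) {
        printf("[%" PRIu64 ", +%" PRIu64 ") too small, ignored\n", from, len);
        return;
    }
    uint64_t aligned_len = len - (from + len) % SECTOR_SIZE;  /* tail down */
    uint64_t first_sector = DIV_ROUND_UP(from, SECTOR_SIZE);  /* head up */
    uint64_t nr_sectors = aligned_len / SECTOR_SIZE;

    printf("[%" PRIu64 ", +%" PRIu64 ") -> discard %" PRIu64
           " sector(s) starting at sector %" PRIu64 "\n",
           from, len, nr_sectors, first_sector);
}

int main(void)
{
    clamp_trim(700, 2200);   /* unaligned head and tail: sectors 2..4 only */
    clamp_trim(300, 300);    /* sub-sector request: dropped entirely */
    return 0;
}

[With these inputs the first request, covering bytes 700..2899, shrinks to
sectors 2..4 (bytes 1024..2559), and the second is skipped, matching the
"ignore the first and last unaligned sectors" behaviour described in the
commit message.]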