From patchwork Fri Sep 30 15:13:07 2016
X-Patchwork-Submitter: Greg Kurz
X-Patchwork-Id: 677101
From: Greg Kurz
To: qemu-devel@nongnu.org
Cc: Kevin Wolf, "Michael S. Tsirkin", Jason Wang, Greg Kurz, Max Reitz,
    "Aneesh Kumar K.V", Stefan Hajnoczi, Cornelia Huck, Paolo Bonzini
Date: Fri, 30 Sep 2016 17:13:07 +0200
Subject: [Qemu-devel] [PATCH v4 4/9] virtio-blk: handle virtio_blk_handle_request() errors
Message-Id: <147524838750.953.14476394288086055364.stgit@bahia>
In-Reply-To: <147524835229.953.16853431367911587573.stgit@bahia>
References: <147524835229.953.16853431367911587573.stgit@bahia>
User-Agent: StGit/0.17.1-dirty

All these errors are caused by a buggy guest: QEMU should not exit.

With this patch, if virtio_blk_handle_request() detects a buggy request, it
marks the device as broken and returns an error to the caller so that it can
take appropriate action.

In the case of virtio_blk_handle_vq(), we detach the request from the
virtqueue, free its allocated memory and stop popping new requests.
We don't need to worry about multireq, since virtio_blk_handle_request()
errors out early and mrb.num_reqs == 0.

In the case of virtio_blk_dma_restart_bh(), we need to detach and free all
queued requests as well.
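For reference, the virtio_error() helper used below was introduced by an
earlier patch in this series: it logs the message and flips the device into
the broken state, and tells VIRTIO 1.0 guests that a reset is needed.
Roughly (a paraphrased sketch of hw/virtio/virtio.c, not a verbatim copy;
details may differ from the exact tree this applies to):

    /* Paraphrased sketch of virtio_error(), not the exact code. */
    void virtio_error(VirtIODevice *vdev, const char *fmt, ...)
    {
        va_list ap;

        va_start(ap, fmt);
        error_vreport(fmt, ap);      /* log the message, like error_report() */
        va_end(ap);

        vdev->broken = true;         /* virtqueue processing stops from here on */

        if (virtio_vdev_has_feature(vdev, VIRTIO_F_VERSION_1)) {
            /* Modern guests are told the device needs a reset. */
            virtio_set_status(vdev, vdev->status | VIRTIO_CONFIG_S_NEEDS_RESET);
            virtio_notify_config(vdev);
        }
    }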
Signed-off-by: Greg Kurz
Reviewed-by: Stefan Hajnoczi
---
v4: - added Stefan's R-b tag
---
 hw/block/virtio-blk.c | 38 ++++++++++++++++++++++++++++----------
 1 file changed, 28 insertions(+), 10 deletions(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index bbacd562cefb..0ddd7fbbe54f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -468,30 +468,32 @@ static bool virtio_blk_sect_range_ok(VirtIOBlock *dev,
     return true;
 }
 
-static void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
+static int virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
 {
     uint32_t type;
     struct iovec *in_iov = req->elem.in_sg;
     struct iovec *iov = req->elem.out_sg;
     unsigned in_num = req->elem.in_num;
     unsigned out_num = req->elem.out_num;
+    VirtIOBlock *s = req->dev;
+    VirtIODevice *vdev = VIRTIO_DEVICE(s);
 
     if (req->elem.out_num < 1 || req->elem.in_num < 1) {
-        error_report("virtio-blk missing headers");
-        exit(1);
+        virtio_error(vdev, "virtio-blk missing headers");
+        return -1;
     }
 
     if (unlikely(iov_to_buf(iov, out_num, 0, &req->out,
                             sizeof(req->out)) != sizeof(req->out))) {
-        error_report("virtio-blk request outhdr too short");
-        exit(1);
+        virtio_error(vdev, "virtio-blk request outhdr too short");
+        return -1;
     }
 
     iov_discard_front(&iov, &out_num, sizeof(req->out));
 
     if (in_iov[in_num - 1].iov_len < sizeof(struct virtio_blk_inhdr)) {
-        error_report("virtio-blk request inhdr too short");
-        exit(1);
+        virtio_error(vdev, "virtio-blk request inhdr too short");
+        return -1;
     }
 
     /* We always touch the last byte, so just see how big in_iov is. */
@@ -529,7 +531,7 @@ static void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
         block_acct_invalid(blk_get_stats(req->dev->blk),
                            is_write ? BLOCK_ACCT_WRITE : BLOCK_ACCT_READ);
         virtio_blk_free_request(req);
-        return;
+        return 0;
     }
 
     block_acct_start(blk_get_stats(req->dev->blk),
@@ -576,6 +578,7 @@ static void virtio_blk_handle_request(VirtIOBlockReq *req, MultiReqBuffer *mrb)
         virtio_blk_req_complete(req, VIRTIO_BLK_S_UNSUPP);
         virtio_blk_free_request(req);
     }
+    return 0;
 }
 
 void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
@@ -586,7 +589,11 @@ void virtio_blk_handle_vq(VirtIOBlock *s, VirtQueue *vq)
     blk_io_plug(s->blk);
 
     while ((req = virtio_blk_get_request(s, vq))) {
-        virtio_blk_handle_request(req, &mrb);
+        if (virtio_blk_handle_request(req, &mrb)) {
+            virtqueue_detach_element(req->vq, &req->elem, 0);
+            virtio_blk_free_request(req);
+            break;
+        }
     }
 
     if (mrb.num_reqs) {
@@ -625,7 +632,18 @@ static void virtio_blk_dma_restart_bh(void *opaque)
 
     while (req) {
         VirtIOBlockReq *next = req->next;
-        virtio_blk_handle_request(req, &mrb);
+        if (virtio_blk_handle_request(req, &mrb)) {
+            /* Device is now broken and won't do any processing until it gets
+             * reset. Already queued requests will be lost: let's purge them.
+             */
+            while (req) {
+                next = req->next;
+                virtqueue_detach_element(req->vq, &req->elem, 0);
+                virtio_blk_free_request(req);
+                req = next;
+            }
+            break;
+        }
         req = next;
     }
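A note on why "stop popping new requests" holds beyond the explicit break
above: once vdev->broken is set, virtqueue_pop() itself refuses to hand out
elements, so virtio_blk_get_request() keeps returning NULL until the device
is reset. Roughly (again a paraphrase of hw/virtio/virtio.c, not the exact
code):

    /* Paraphrased sketch of the broken-device check in virtqueue_pop(). */
    void *virtqueue_pop(VirtQueue *vq, size_t sz)
    {
        VirtIODevice *vdev = vq->vdev;

        if (unlikely(vdev->broken)) {
            return NULL;   /* a broken device hands out no further requests */
        }
        /* ... normal path: read the avail ring and map the element ... */
    }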