
ide_dma_cancel will result in partial DMA transfer (resend #4)

Message ID 20100727190436.GP16655@random.random
State New

Commit Message

Andrea Arcangeli July 27, 2010, 7:04 p.m. UTC
Subject: avoid canceling ide dma

From: Andrea Arcangeli <aarcange@redhat.com>

The reason for not actually canceling the I/O is that with
virtualization and lots of VMs running, a guest fs may mistake an
overload of the host for an IDE timeout. So rather than canceling the
I/O, it's safer to wait for I/O completion and simulate that the I/O
completed just before the cancellation was requested by the
guest. This way, if ntfs or an app writes data without checking the
-EIO retval and assumes the write has succeeded, it's less likely to
run into trouble. Similar issues apply to reads.

Furthermore, because the DMA operation is split into many synchronous
aio_read/write calls when there's more than one entry in the SG table,
without this patch the DMA would be cancelled in the middle, and we have
no idea whether that can happen on real hardware too. Overall this seems
a great risk for zero gain.
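
To make the partial-transfer risk concrete, here is a minimal standalone
sketch (plain C, not QEMU code; the sg_entry struct and the cancel point
are made up for illustration). Each SG entry becomes its own request, so
stopping between entries leaves the earlier chunks already on the medium:

/* sketch: per-SG-entry submission (hypothetical names, not QEMU code) */
#include <stdio.h>
#include <stddef.h>

struct sg_entry {               /* hypothetical SG-table entry */
    size_t len;
};

int main(void)
{
    struct sg_entry sg[] = { {4096}, {4096}, {8192} };
    size_t n = sizeof(sg) / sizeof(sg[0]);
    size_t done = 0;
    size_t cancel_after = 1;    /* pretend the guest cancels after entry 1 */

    for (size_t i = 0; i < n; i++) {
        /* each entry is issued as its own (synchronous) request */
        done += sg[i].len;
        printf("entry %zu written, %zu bytes reached the disk so far\n",
               i, done);
        if (i == cancel_after) {
            printf("cancel here -> the remaining entries never reach "
                   "the disk: partial transfer\n");
            break;
        }
    }
    return 0;
}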

This approach is surely safer than the previous code, given that we can't
expect all guest fs code out there to check for errors and replay the DMA
if it completed only partially, and given that a timeout would never
materialize on a real hard disk unless there are defective blocks (and
defective blocks are practically only an issue for reads, never for
writes, on any recent hardware, as writing to a block is the way to fix
it) or the hard disk breaks as a whole.

Signed-off-by: Izik Eidus <ieidus@redhat.com>
Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
---

Comments

Kevin Wolf July 30, 2010, 8:02 a.m. UTC | #1
On 27.07.2010 21:04, Andrea Arcangeli wrote:
> Subject: avoid canceling ide dma
> 
> From: Andrea Arcangeli <aarcange@redhat.com>
> 
> [...]

Thanks, applied to the block branch.

Kevin

Patch

diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index 4331d77..ec90f26 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -40,8 +40,27 @@  void bmdma_cmd_writeb(void *opaque, uint32_t addr, uint32_t val)
     printf("%s: 0x%08x\n", __func__, val);
 #endif
     if (!(val & BM_CMD_START)) {
-        /* XXX: do it better */
-        ide_dma_cancel(bm);
+        /*
+         * We can't cancel Scatter Gather DMA in the middle of the
+         * operation or a partial (not full) DMA transfer would reach
+         * the storage, so we wait for completion instead (we behave
+         * as if the DMA had completed by the time the guest tried to
+         * cancel it with bmdma_cmd_writeb with BM_CMD_START not
+         * set).
+         *
+         * In the future we'll be able to safely cancel the I/O once
+         * the whole DMA operation is submitted to disk with a single
+         * aio operation using preadv/pwritev.
+         */
+        if (bm->aiocb) {
+            qemu_aio_flush();
+#ifdef DEBUG_IDE
+            if (bm->aiocb)
+                printf("ide_dma_cancel: aiocb still pending\n");
+            if (bm->status & BM_STATUS_DMAING)
+                printf("ide_dma_cancel: BM_STATUS_DMAING still pending\n");
+#endif
+        }
         bm->cmd = val & 0x09;
     } else {
         if (!(bm->status & BM_STATUS_DMAING)) {
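
For reference, a minimal standalone sketch of the single-submission idea
mentioned in the comment above (plain POSIX pwritev on a scratch file, not
QEMU's AIO layer; the file name and buffer sizes are arbitrary). Packing
every SG entry into one iovec leaves a single request to either complete
or cancel:

/* sketch: whole SG list as one pwritev() request (not QEMU's AIO API) */
#define _GNU_SOURCE
#include <sys/uio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char a[512], b[512];                     /* two fake SG entries */
    memset(a, 'A', sizeof(a));
    memset(b, 'B', sizeof(b));

    struct iovec iov[] = {                   /* one iovec element per entry */
        { .iov_base = a, .iov_len = sizeof(a) },
        { .iov_base = b, .iov_len = sizeof(b) },
    };

    int fd = open("scratch.img", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    /* the whole transfer is one request: it either completes as a unit
     * or, if it could be cancelled before submission, never starts */
    ssize_t n = pwritev(fd, iov, 2, 0);
    if (n < 0)
        perror("pwritev");
    else
        printf("wrote %zd bytes in a single request\n", n);

    close(fd);
    return 0;
}

With a single request, the cancel path would have one aiocb to deal with
instead of a partially submitted chain of per-entry requests.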