Message ID: 1280846670-27063-7-git-send-email-kwolf@redhat.com
State: New
On Tue, Aug 03, 2010 at 04:44:30PM +0200, Kevin Wolf wrote:
> Furthermore, because the DMA operation is split into many synchronous
> aio_read/write calls if there's more than one entry in the SG table, without
> this patch the DMA would be cancelled in the middle, something we have no
> idea whether it happens on real hardware too or not. Overall this seems a
> great risk for zero gain.

That hadn't been true for a long time when this code was committed, at least
on kernels supporting preadv/pwritev and/or aio.
On 18.03.2011 12:19, Christoph Hellwig wrote:
> On Tue, Aug 03, 2010 at 04:44:30PM +0200, Kevin Wolf wrote:
>> Furthermore, because the DMA operation is split into many synchronous
>> aio_read/write calls if there's more than one entry in the SG table,
>> without this patch the DMA would be cancelled in the middle, something we
>> have no idea whether it happens on real hardware too or not. Overall this
>> seems a great risk for zero gain.
>
> That hadn't been true for a long time when this code was committed, at
> least on kernels supporting preadv/pwritev and/or aio.

So what do you want to tell us? Should the patch be reverted?

I think the request can still be split into multiple bdrv_aio_readv/writev
calls in dma-helpers.c and in non-raw block drivers, so we would still
cancel somewhere in the middle.

Kevin
On Fri, Mar 18, 2011 at 12:34:09PM +0100, Kevin Wolf wrote:
>> That hadn't been true for a long time when this code was committed, at
>> least on kernels supporting preadv/pwritev and/or aio.
>
> So what do you want to tell us? Should the patch be reverted?

I just looked into that area for a reported bug and noticed the comment,
which at least should be corrected. To be honest I don't quite understand
the point of this commit at all, but maybe Andrea can explain it a bit more.
diff --git a/hw/ide/pci.c b/hw/ide/pci.c
index 4331d77..ec90f26 100644
--- a/hw/ide/pci.c
+++ b/hw/ide/pci.c
@@ -40,8 +40,27 @@ void bmdma_cmd_writeb(void *opaque, uint32_t addr, uint32_t val)
     printf("%s: 0x%08x\n", __func__, val);
 #endif
     if (!(val & BM_CMD_START)) {
-        /* XXX: do it better */
-        ide_dma_cancel(bm);
+        /*
+         * We can't cancel Scatter Gather DMA in the middle of the
+         * operation or a partial (not full) DMA transfer would reach
+         * the storage, so we wait for completion instead (we behave
+         * as if the DMA was completed by the time the guest tried
+         * to cancel DMA with bmdma_cmd_writeb with BM_CMD_START not
+         * set).
+         *
+         * In the future we'll be able to safely cancel the I/O if the
+         * whole DMA operation will be submitted to disk with a single
+         * aio operation with preadv/pwritev.
+         */
+        if (bm->aiocb) {
+            qemu_aio_flush();
+#ifdef DEBUG_IDE
+            if (bm->aiocb)
+                printf("ide_dma_cancel: aiocb still pending");
+            if (bm->status & BM_STATUS_DMAING)
+                printf("ide_dma_cancel: BM_STATUS_DMAING still pending");
+#endif
+        }
         bm->cmd = val & 0x09;
     } else {
         if (!(bm->status & BM_STATUS_DMAING)) {